\section{Introduction} On scales relevant to the large-scale structure of reionization -- hundreds of Mpc -- numerical calculations cannot self-consistently track all the sources and sinks of ionizing radiation. Resolving the IGM on the small scales relevant to QSO absorption systems optically-thick at the Lyman edge, also known as Lyman limit systems \citep[e.g.,][]{weymann/etal:1981, tytler:1982, mo/miralda-escude:1996}, is particularly difficult. These absorption systems were likely to have impeded the progress of reionization \citep[e.g.,][]{miralda-escude/etal:2000, gnedin/fan:2006}. What are likely values for the comoving absorption system mean free path at the end of reionization, $\lambda_{\rm abs}$? \citet{prochaska/etal:2009} determined the abundance of Lyman-limit absorption systems, $dN/dz = 1.9 [(1+z)/4.7]^{5.2}$, over the redshift range $3.6<z<4.4$, which translates into a comoving mean free path of $\lambda_{\rm abs}=c/H(z)/(dN/dz) \sim 415$ Mpc at $z=3.7$. This dependence on redshift is steeper than reported previously \citep[e.g.,][]{storrie-lombardi/etal:1994}, and implies $\lambda_{\rm abs}\propto (1+z)^{-6.7}$. However, \citet{songaila/cowie:2010} extended the constraints out to $z\sim 6$, finding a much shallower dependence, $\lambda_{\rm abs}\propto (1+z)^{-3.44}$, with a measurement of $\lambda_{\rm abs}\simeq 34$~Mpc at $z\sim 5.7$. Comparison of the absorption spectra and hydrodynamic simulations can also constrain $\lambda_{\rm abs}$, with \citet{bolton/haehnelt:2007a} reporting $20 <\lambda_{\rm abs}<50$ Mpc at $z\sim 6$. The evolution of the mean free path at $z<6$ leaves open a wide range of possible values during the reionization epoch, from several to hundreds of Mpc. Our goal is to determine the sensitivity of the history and morphology of reionization to this wide range of possible mean free paths. Direct simulation of all the processes involved in reionization is not currently possible. 
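As a concrete check of these numbers, the conversion from absorber abundance to comoving mean free path, $\lambda_{\rm abs}=c/H(z)/(dN/dz)$, is easily scripted. The sketch below is ours (not from any of the cited works) and assumes the flat $\Lambda$CDM parameters adopted later in this paper:

```python
import math

# Sketch (our illustration): convert the Prochaska et al. (2009)
# Lyman-limit system abundance dN/dz into a comoving mean free path,
# lambda_abs = (c/H(z)) / (dN/dz). Cosmology as in this paper:
# Omega_m = 0.27, Omega_L = 0.73, h = 0.72.

C_KMS = 299792.458          # speed of light [km/s]
H0 = 72.0                   # Hubble constant [km/s/Mpc]
OMEGA_M, OMEGA_L = 0.27, 0.73

def hubble(z):
    """H(z) for flat LCDM [km/s/Mpc]."""
    return H0 * math.sqrt(OMEGA_M * (1 + z)**3 + OMEGA_L)

def dndz_lls(z):
    """Lyman-limit systems per unit redshift (Prochaska et al. 2009 fit)."""
    return 1.9 * ((1 + z) / 4.7)**5.2

def mfp_comoving(z):
    """Comoving mean free path [Mpc]: comoving path per unit z over dN/dz."""
    return (C_KMS / hubble(z)) / dndz_lls(z)

print(round(mfp_comoving(3.7)))   # ~410 Mpc
```

At $z=3.7$ this yields $\sim 410$~Mpc, and evaluating the same fit at $z=5.7$ gives $\sim 40$~Mpc, roughly consistent with the measurements quoted above.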
Theoretical studies focus either on the small-scale hydrodynamics and radiative transfer \citep[e.g.,][]{miralda-escude/etal:2000, gnedin:2000, haiman/etal:2001, shapiro/etal:2004, ciardi/etal:2006, pawlik/etal:2009, raicevic/theuns:2011, mcquinn/etal:2011}, or on the large-scale morphology of reionization \citep[e.g.,][]{2002MNRAS.330L..53A, furlanetto/etal:2004, 2006MNRAS.369.1625I, mcquinn/etal:2007, trac/cen:2007, shin/etal:2008, friedrich/etal:2011}. On the largest scales, it has been somewhat surprising how well simplified ``semi-numerical'' approaches \citep[e.g.,][]{zahn/etal:2007, mesinger/furlanetto:2007, thomas/etal:2009, choudhury/etal:2009, alvarez/etal:2009, santos/etal:2010, zahn/etal:2011a, mesinger/etal:2011a} match much more computationally expensive radiative transfer techniques \citep[e.g.,][]{1999ApJ...523...66A,2001NewA....6..437G, sokasian/etal:2001, 2002MNRAS.330L..53A, 2006MNRAS.369.1625I, mcquinn/etal:2007, trac/cen:2007}. In order to efficiently survey the wide parameter space of $\lambda_{\rm abs}$ and focus on the large-scale morphology and overall progress of reionization, we have chosen the simplified semi-numerical approach. Our implementation is similar to that of \citet{zahn/etal:2007}, with the addition of a treatment of photon consumption in absorption systems. Our approach, in which we treat the abundance of absorbers as an input parameter, is complementary to that taken by \citet{crociani/etal:2011}, where the semi-numerical approach was used to {\em determine} the large-scale distribution of absorbers, rather than to model the effect of the absorbers on the progress of reionization, as we do here. Our results are parametrized by the minimum source halo mass, the ionizing efficiency of collapsed matter, and the absorption system mean free path. We describe our simulations in \S2, present the main results in \S3, and conclude with a discussion in \S4. 
All calculations were done using $(\Omega_m,\Omega_\Lambda,h,\sigma_8,n_s)=(0.27,0.73,0.72,0.8,0.96)$, consistent with WMAP 7-year data \citep{komatsu/etal:2011}. All distances are comoving. \section{Simulations} Table 1 shows our simulation parameters: $\zeta$, the ionizing efficiency, $\lambda_{\rm abs}$, the comoving absorption system mean free path, $t_{\rm ev}$, the photoevaporation time for the evolving $\lambda_{\rm abs}$ models, and $M_{\rm min}$, the minimum halo mass of galaxies. Also shown are the redshifts when the global ionized fraction equals 0.5, 0.9, and 1 -- $z_{0.5}$, $z_{0.9}$, and $z_{\rm ov}$, respectively -- and the bubble mean free path when the neutral fraction is 0.1, $\lambda_0\equiv \lambda_{\rm b}(x_{\rm HI}=0.1)$. The parameters were varied to give a Thomson scattering optical depth of $(\tau_-,\tau_0,\tau_+)=(0.06,0.09,0.12)$, corresponding to the 2-$\sigma$ constraint from {\em WMAP} \citep{komatsu/etal:2011}, with the He~I fraction tracking H~I, and instantaneous He~II reionization at $z=3$. All simulations used $4096^3$ cells in a 2~Gpc$/h$ box. 
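For orientation, the mapping from a reionization redshift to $\tau_{\rm es}$ can be sketched as follows. This is our illustration only: it assumes instantaneous reionization, treats helium as singly ionized throughout, and adopts a baryon density $\Omega_b=0.045$ that is not quoted in this paper:

```python
import math

# Rough cross-check (ours, with an assumed Omega_b = 0.045): Thomson
# optical depth for instantaneous reionization at z_re, with the He I
# fraction tracking H I (helium singly ionized, no He II -> He III).

OMEGA_M, OMEGA_L, OMEGA_B, H_LITTLE = 0.27, 0.73, 0.045, 0.72
SIGMA_T = 6.6524e-29                   # Thomson cross-section [m^2]
C = 2.99792458e8                       # speed of light [m/s]
M_P = 1.6726e-27                       # proton mass [kg]
RHO_CRIT = 1.8788e-26 * H_LITTLE**2    # critical density [kg/m^3]
H0_SI = H_LITTLE * 100e3 / 3.0857e22   # H0 [1/s]
Y_HE = 0.24                            # helium mass fraction

def tau_es(z_re, dz=1e-3):
    """tau = c sigma_T n_e0 * int_0^z_re (1+z)^2 / H(z) dz."""
    # electrons per unit baryon mass: H fully ionized + He singly ionized
    n_e0 = RHO_CRIT * OMEGA_B / M_P * (1 - Y_HE + Y_HE / 4)
    total, z = 0.0, 0.0
    while z < z_re:
        hz = H0_SI * math.sqrt(OMEGA_M * (1 + z)**3 + OMEGA_L)
        total += (1 + z)**2 / hz * dz
        z += dz
    return C * SIGMA_T * n_e0 * total

print(round(tau_es(10.0), 3))   # ~0.08
```

With these assumptions, instantaneous reionization at $z_{\rm re}\simeq 10$ gives $\tau_{\rm es}\approx 0.08$, close to the central value used here.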
\begin{table}[t] \caption{Simulation parameters} \centering \begin{tabular}{l c c c c c c c c} \hline\hline
$\tau_{\rm es}$&$M_{\rm min}$&$\lambda_{\rm abs}$&$t_{\rm ev}$&$\zeta$&$z_{0.5}$&$z_{0.9}$&$z_{\rm ov}$&$\lambda_0$\\
&$[M_\odot]$&[Mpc/h]&[Myr]&&&&&[Mpc]\\ \hline
0.06&$10^8$ & 8 & - & 20 & 8.1 & 5.9 & 4.3 & 87\\
0.06&$10^8$ & 32 & - & 16 & 8 & 6.3 & 5 & 181\\
0.06&$10^8$ & 256 & - & 14.5 & 8 & 6.8 & 6.2 & 294 \\
0.06&$10^9$ & 8 & - & 78 & 8.2 & 6.6 & 5.6 & 98\\
0.06&$10^9$ & 32 & - & 60 & 8.1 & 6.8 & 6 & 210\\
0.06&$10^9$ & 256 & - & 40 & 8.1 & 7.24 & 6.9 & 375\\ \hline
0.09&$10^8$ & 8 & - & 137 & 11 & 9.2 & 8 & 94\\
0.09&$10^8$ & 32 & - & 110 & 10.9 & 9.5 & 8.5 & 201\\
0.09&$10^8$ & 256 & - & 98 & 10.9 & 9.9 & 9.5 & 345\\
0.09&$10^9$ & 8 & - & 1200 & 11.1 & 9.8 & 9.1 & 105\\
0.09&$10^9$ & 32 & - & 920 & 11 & 10 & 9.3 & 232\\
0.09&$10^9$ & 256 & - & 820 & 11 & 10.3 & 10 & 443\\ \hline
0.12&$10^8$ & 8 & - & 1050 & 13.5 & 12.5 & 11.1 & 99\\
0.12&$10^8$ & 32 & - & 850 & 13.5 & 12.2 & 11.4 & 216\\
0.12&$10^8$ & 256 & - & 750 & 13.4 & 12.6 & 12.2 & 396\\
0.12&$10^9$ & 8 & - & $2.2\times 10^4$ & 13.5 & 12.5 & 11.9 & 109\\
0.12&$10^9$ & 32 & - & $1.7\times 10^4$ & 13.5 & 12.6 & 12.1 & 248\\
0.12&$10^9$ & 256 & - & $1.5\times 10^4$ & 13.5 & 12.9 & 12.7 & 500\\ \hline
0.09\tablenotemark{a}&$10^8$ & 8 & - & 102 & 10.9 & 9.7 & 8.3 & -\\
0.09\tablenotemark{a}&$10^8$ & 32 & - & 94 & 10.8 & 9.9 & 9.6 & -\\
0.09\tablenotemark{a}&$10^8$ & 256 & - & 92 & 10.8 & 10 & 9.8 & -\\ \hline
0.06 & $10^8$ & - & 10 & 15 & 8.2 & 6.9 & 6.4 & 300\\
0.06 & $10^8$ & - & 50 & 15 & 8.1 & 6.4 & 5.6 & 264\\
0.06 & $10^8$ & - & 100 & 15 & 8.0 & 6.1 & 5.3 & 170\\
\label{table} \end{tabular} \tablenotetext{a}{Obtained using sharp $k$-space filtering} \end{table} \subsection{Mean reionization history} We model the mean reionization history with two spatially uniform mean free paths: that corresponding to ionized bubbles, $\lambda_{\rm b}$, and that corresponding to Lyman-limit systems, $\lambda_{\rm abs}$. 
Ionizing radiation is attenuated by their superposition, so that $\lambda^{-1}_{\rm mfp}=\lambda^{-1}_{\rm b}+\lambda_{\rm abs}^{-1}.$ The spatially averaged ionizing flux is $F(z)=\lambda_{\rm mfp}(z)\epsilon(z)$, where $\epsilon(z)$ is the ionizing photon emissivity. We set $\epsilon(z)=\zeta n_{H,0} \dot{f}_{\rm coll}(z)$, where $\zeta$ is the number of ionizing photons per collapsed hydrogen atom, corrected for recombinations outside of absorption systems. The mean ionization rate is \begin{equation} \frac{d{{x}}}{dt}=\zeta\dot{f}_{\rm coll}(z)\frac{\lambda_{\rm abs}(z)}{\lambda_{\rm abs}(z)+\lambda_{\rm b}({x})}. \label{dxdt} \end{equation} When $\lambda_{\rm abs}\gg \lambda_{\rm b}$, photons typically reach the edges of bubbles without being absorbed by intervening Lyman-limit systems, and the photoionization rate is equal to the emission rate. When $\lambda_{\rm abs}\ll \lambda_{\rm b}$, the probability of a photon reaching the edge of a bubble is $\lambda_{\rm abs}/\lambda_{\rm b}$, and the ionization rate is suppressed accordingly. A more realistic treatment would allow $F$, $\lambda_b$, and $\lambda_{\rm abs}$ to vary spatially, obtaining a spatially-dependent solution of equation (\ref{dxdt}), which could then be averaged to find the global reionization history. We have chosen the simpler uniform model of equation (\ref{dxdt}) as a starting point. Although this choice does not affect our results on the morphology of the ionization field at fixed ionized fraction, the assumptions underlying equation (\ref{dxdt}) should be kept in mind when interpreting the reionization history and photon consumption rates we find. \begin{figure} \begin{centering} \includegraphics[width=0.45\textwidth]{fig1.pdf} \caption{Bubble mean free path, $\lambda_{\rm b}$, as a function of mean neutral fraction, $x_{\rm HI}$. All models exhibit a characteristic scale of $\sim 10$~Mpc at the half-ionized epoch. 
During percolation, lower values of $\lambda_{\rm abs}$ lead to lower values of $\lambda_{\rm b}$. The solid line corresponds to the evaporating minihalo model with $t_{\rm ev}=100$~Myr. At $x_{\rm HI}<0.1$ the relation approaches approximately $\lambda_{\rm b}\propto x_{\rm HI}^{-2/3}$. } \label{bubblemfp} \end{centering} \end{figure} \label{sec:history} \subsection{Spatial variations} Our model is based on that of \citet{furlanetto/etal:2004}, later extended to three dimensions by \citet{zahn/etal:2007}. Its main assumption is that a region is fully ionized if its collapsed fraction exceeds a threshold, $\zeta f_{\rm coll} > 1$. As shown in \citet{alvarez/etal:2009}, by smoothing the linear density field over a range of scales, one can efficiently determine when each point is first reionized, $z_r$. We do not smooth over scales with radii larger than $\lambda_{\rm abs}$, since absorption systems shield radiation from these distances. We assume that the absorption systems contribute a spatially uniform opacity, and therefore the scale beyond which we do not smooth is the same everywhere. \begin{figure*} \begin{centering} \includegraphics[width=\textwidth]{fig2.pdf} \caption{Global reionization histories (bottom) and photon consumption (top). Different colors indicate different values of $\lambda_{\rm abs}$, while the panels indicate different Thomson scattering optical depths, $\tau_{\rm es} = 0.06$, 0.09, and 0.12, from left to right. Note that in all cases the early part of the reionization history is insensitive to $\lambda_{\rm abs}$, while the bubbles are still relatively small. Lower values of $\lambda_{\rm abs}$ delay the end of reionization. } \label{history} \end{centering} \end{figure*} This approach is desirable because the order in which points are ionized is the same as that in which the \citet{furlanetto/etal:2004} criterion is first met around each point. 
However, the reionization redshifts do not generally result in the correct average ionization fraction for a sharp smoothing filter in real space \citep[see below for solutions using a sharp filter in $k$-space, which does conserve photons;][]{zahn/etal:2007}. To obtain a self-consistent solution, we determine a correction to the reionization redshift at each point, $z_{r,c}(z_r)$, by matching the simulated reionization history to the global reionization history, \begin{equation} \int_{z_{r}}^\infty\frac{dp}{dz}dz = \int_{z_{r,c}}^\infty\frac{d{x}}{dt}\frac{dt}{dz}dz, \label{match} \end{equation} where $dp/dz$ is the distribution of uncorrected simulated reionization redshifts, and $d{{x}}/dt$ is given by equation (\ref{dxdt}). This calibration procedure does not change the overall topology we find at fixed ionized fraction, but does change the overall reionization history. We calculate $\lambda_{\rm b}({x})$ by random raytracing. If a ray starts in an ionized region, it is extended in a random direction until it reaches a neutral cell. The bubble mean free path is then the average over the lengths of all such rays. The average converges after on the order of $10^4$ rays. About 30 logarithmically-spaced smoothing scales are sufficient to converge on the reionization morphology. Because we determine the reionization redshifts simultaneously, the amount of data stored and the operation count are greatly reduced compared to determining the ionization field at each redshift. Since the assumption that points are either highly-ionized or neutral is made in both cases, no information is lost by following the more efficient approach. 
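The random-raytracing estimate of $\lambda_{\rm b}$ can be illustrated with a minimal sketch. This is our toy version, not the authors' code: the ionization field (a single digitized sphere in a periodic box), the grid size, and the step length are all illustrative assumptions:

```python
import random

# Toy sketch of the random-raytracing estimate of the bubble mean free
# path: march rays from random ionized cells in random directions until a
# neutral cell is hit, and average the ray lengths. The field here is one
# spherical ionized bubble, so the answer should be close to the analytic
# mean distance from a uniform interior point to the sphere, 3R/4.

random.seed(1)
N, CELL = 64, 1.0                 # grid size, cell size (arbitrary units)
R_BUBBLE = 16.0                   # bubble radius in cells

def ionized(i, j, k):
    """Toy field: ionized inside a sphere centred in the box."""
    c = N / 2
    return (i - c)**2 + (j - c)**2 + (k - c)**2 < R_BUBBLE**2

def ray_length(step=0.2):
    """Length travelled from a random ionized point to the first neutral cell."""
    while True:                   # rejection-sample a starting point
        x, y, z = (random.uniform(0, N) for _ in range(3))
        if ionized(int(x), int(y), int(z)):
            break
    while True:                   # isotropic random direction
        dx, dy, dz = (random.gauss(0, 1) for _ in range(3))
        r = (dx*dx + dy*dy + dz*dz) ** 0.5
        if r > 1e-12:
            dx, dy, dz = dx/r, dy/r, dz/r
            break
    length = 0.0
    while ionized(int(x % N), int(y % N), int(z % N)):
        x, y, z = x + dx*step, y + dy*step, z + dz*step
        length += step
    return length * CELL

lam_b = sum(ray_length() for _ in range(2000)) / 2000
print(lam_b)   # close to 3R/4 = 12 for this single-bubble field
```

For a single sphere the average is close to the analytic interior-point mean, $3R/4$; on a simulated ionization field the same loop yields the $\lambda_{\rm b}(x)$ curves of Fig.~1.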
\subsection{Sharp $k$-space filtering} Using a sharp $k$-space filter leads to a method which does conserve photons \citep{zahn/etal:2007} in the long mean free path limit, at the expense of an unphysical filter which ``rings'' in real space and of an ambiguity in assigning a smoothing scale $R_f$ to the filter scale $k_f$ -- particularly troubling given that we want to exclude scales $R_f >\lambda_{\rm abs}$. However, for comparison we have calculated a few cases with sharp $k$-space filtering, shown in Table 1. The ionization history is found by integrating a diffusion equation for the distribution in density of points that have not crossed the barrier \citep{bond/etal:1991}, \beq \frac{\partial \Pi(\Lambda,\tilde{\delta},z)}{\partial \Lambda}=\frac{1}{2} \frac{\partial^2\Pi}{\partial\tilde{\delta}^2}-\frac{\partial \tilde{\delta}}{\partial\Lambda} \frac{\partial\Pi}{\partial\tilde{\delta}}, \label{diffusion} \eeq with boundary condition $\Pi(\Lambda,0,z)=0$, where $\tilde{\delta}\equiv B(\Lambda,z)-\delta$, $\delta$ is the density contrast, $B(\Lambda,z)=\delta_c(z)-{\rm erf}^{-1}(1-\zeta^{-1})\sqrt{2(\sigma_{\rm min}^2-\Lambda)}$ is the bubble barrier \citep{furlanetto/etal:2004}, and $\sigma^2_{\rm min}$ is the variance of the $z=0$ linear density field at the scale of $M_{\rm min}$, with the growth of structure encoded in the time dependence of $\delta_c(z)$. We smooth only over scales larger than $\lambda_{\rm abs}$ by starting the integration of equation (\ref{diffusion}) at $\Lambda_i\equiv \sigma^2(\lambda_{\rm abs})$ -- the variance on the scale of the mean free path is matched to the one obtained using a top-hat filter in real space -- with initial condition \beq \Pi(\Lambda_i,\tilde{\delta},z)=\frac{1}{\sqrt{2\pi\Lambda_i}} \exp\left[-{\frac{(\tilde{\delta}-B(\Lambda_i,z))^2}{2\Lambda_i}}\right]. 
\eeq The total ionized fraction is obtained by integrating over the distribution at $\Lambda=\sigma_{\rm min}^2$, \beq x(z)=1-\int_0^\infty \Pi(\sigma_{\rm min}^2,\tilde{\delta},z)d\tilde{\delta}. \eeq If the initial condition for $\Pi$ is specified at $\Lambda=0$ (i.e., starting at a scale $\rightarrow \infty$), then this corresponds to smoothing over all scales, and \beq x(z)=\zeta{\rm erfc} \left[ \delta_c(z)/\sqrt{2\sigma_{\rm min}^2} \right]=\zeta f_{\rm coll}(z). \eeq \subsection{Evolving $\lambda_{\rm abs}$ due to minihalos} To model an evolving value of $\lambda_{\rm abs}$, we assume that the absorption systems are minihalos \citep[e.g.,][]{abel/mo:1998}. Given that the halo cross section increases as $M^{2/3}$, while $dn/d\ln M\propto M^{-1}$, small objects should dominate the mean free path, subject to photoevaporation effects \citep[e.g.,][]{haiman/etal:2001,shapiro/etal:2004}. Consider a uniform distribution of dark matter halos with a number density $n_h(z)\equiv dn/d\ln M_h$. We will assume that in neutral regions, halos of mass $M_h$ have gas fractions of unity and serve as Lyman-limit absorption systems, while in ionized regions, such halos survive for a time $t_{\rm ev}$ before being evaporated by the ionizing background. The number of absorption systems evolves according to \beq \frac{dn_{\rm abs}}{dz}=\frac{d\ln x}{dz}\left(n_{\rm h}-n_{\rm abs}\right) +\xi_{\rm ev} (1+z)^{-5/2} n_{\rm abs}(z), \label{nabs} \eeq where $\xi_{\rm ev}\equiv (H_0\Omega_{\rm m}^{1/2}t_{\rm ev})^{-1}\simeq 260\ (t_{\rm ev}/100\ {\rm Myr})^{-1}$. We choose $M_h=5.7\times 10^3 M_\odot[(1+z)/10]^{3/2}$, corresponding to the cosmological Jeans mass, and consider three models, in which $t_{\rm ev}=10$, 50, and 100~Myr. These timescales are relatively long compared with the rather short photoevaporation times for low-mass halos found by \citet{shapiro/etal:2004}. 
However, if reionization is ``photon-starved'', as observations suggest \citep[e.g.,][]{bolton/haehnelt:2007a}, then the flux could be low at the end of reionization, leading to longer evaporation times. The mean free path is \beq \lambda^{-1}_{\rm abs}(z)=\pi R^2_{\rm vir}(z)n_{\rm abs}(z), \label{lambdaabs} \eeq where $R_{\rm vir}$ corresponds to the halo virial radius. However, the boundaries of the absorption systems are not well-defined due to hydrodynamic effects and departures from spherical symmetry. For our purposes, the minihalo model we describe here is sufficient to incorporate the effect of an evolving absorption system mean free path in our calculations. More accurate modeling of absorption systems during reionization will need to incorporate the radiative transfer of ionizing radiation in cosmological hydrodynamics simulations which resolve the Jeans mass. We start with an initial guess for $\lambda_{\rm abs}(z)$, only allowing points to cross the barrier at some scale and redshift if that scale is below the current value of $\lambda_{\rm abs}$. The values obtained for $\lambda_b(x)$ from the simulated ionization field are then fed back into equations (\ref{dxdt}), (\ref{nabs}), and (\ref{lambdaabs}), and the process is repeated until convergence is reached. \section{Results} Fig.~\ref{bubblemfp} shows the bubble mean free path as a function of the neutral fraction for several parameter choices. At the half-ionized epoch, all models exhibit a characteristic bubble scale of $\lambda_{\rm b}\simeq 10$~Mpc, with models with rarer sources (i.e., higher ionizing efficiency) leading to somewhat larger bubbles. During percolation, most of the variation in $\lambda_{\rm b}({x_{\rm HI}})$ comes from variation of $\lambda_{\rm abs}$, which is most pronounced at $x_{\rm HI}\leq 0.2$. 
The large values of $\lambda_b$ indicate that simulation volumes in excess of several hundred Mpc on a side are necessary in order to properly model the last ten per cent of the reionization process. At $x_{\rm HI}<0.1$, $\lambda_b\sim x_{\rm HI}^{-2/3}$, in agreement with the model of \citet{miralda-escude/etal:2000}, in which a constant number of neutral clouds decrease in size at the same fractional rate at the end of reionization. The reionization history for all of the fixed $\lambda_{\rm abs}$ models is shown in Fig.~\ref{history}. The early evolution is not sensitive to $\lambda_{\rm abs}$, because the photon mean free path is determined by the size of the ionized bubbles themselves. At later times, when the bubble sizes become comparable to $\lambda_{\rm abs}$, the reionization history is sensitive to $\lambda_{\rm abs}$, as can be seen by comparing the evolution for the bracketing values of 8 Mpc/h and 256 Mpc/h. For the $\lambda_{\rm abs}=8$~Mpc/h case, the redshift at which the overlap occurs is delayed by $\Delta z\sim 1.5$, relative to $\lambda_{\rm abs}=256$~Mpc/h, for $\tau_{\rm es}=0.09$ (see also the ``$z_{\rm ov}$'' entries in Table 1). Also shown in Fig.~\ref{history} is the number of ionizing photons per ionized atom, \beq \frac{n_\gamma(z)}{n_{\rm HII}(z)}=\frac{\zeta f_{\rm coll}(z)}{{x}(z)}=1+\frac{1}{{x}(z)}\int_0^{{x}(z)}\frac{\lambda_{\rm b}(x')}{\lambda_{\rm abs}(x')}dx'. \eeq As $\lambda_{\rm abs}\rightarrow \infty$, ${x} \rightarrow \zeta f_{\rm coll}$ and $n_\gamma/n_{\rm HII}\rightarrow 1$. We only count photons up to the moment of overlap (${x}=1$), as we are concerned here with the consumption of ionizing photons during the reionization process. This integral converges, since at the end of reionization we expect $\lambda_{\rm b}(x)\propto (1-x)^{-2/3}$ as the remaining diffuse neutral hydrogen patches disappear. For $\lambda_{\rm abs}=8$~Mpc/h, about 3 photons per atom are consumed in the absorption systems by $z_{\rm ov}$. 
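This photon budget can be illustrated with a toy integration of equation (\ref{dxdt}). The source term and the bubble-size relation $\lambda_{\rm b}(x)$ below are our illustrative stand-ins, chosen only to reproduce the qualitative behaviour ($\lambda_{\rm b}\simeq 10$~Mpc at $x=0.5$, diverging towards overlap), not the calibrated functions used in the simulations:

```python
# Toy integration (our sketch) of the uniform-mfp history, eq. (dxdt):
# dx/dt = S(t) * lam_abs / (lam_abs + lambda_b(x)), with the photon budget
# n_gamma/n_HII = (photons emitted)/x evaluated at overlap. The constant
# source S and the lambda_b(x) relation are illustrative assumptions.

def lambda_b(x):
    """Toy bubble mean free path [Mpc]: ~10 Mpc at x = 0.5, diverging at x -> 1."""
    return 10.0 * (x / (1.0 - x + 1e-3))**(2.0 / 3.0)

def photons_per_atom(lam_abs, source=0.5, dt=1e-4):
    """Integrate to overlap (x = 1); return photons emitted per ionized atom."""
    x, emitted = 1e-4, 0.0
    while x < 1.0:
        emitted += source * dt
        x += source * lam_abs / (lam_abs + lambda_b(x)) * dt
    return emitted / x

short, long_ = photons_per_atom(8.0), photons_per_atom(256.0)
print(short, long_)   # a small lambda_abs consumes far more photons per atom
```

Even in this toy model, the $\lambda_{\rm abs}=8$~Mpc case requires several photons per atom while the $\lambda_{\rm abs}=256$~Mpc case stays near unity, mirroring the trend in Fig.~\ref{history}.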
In the longest mean free path case, $\lambda_{\rm abs}=256$~Mpc/h, the corresponding fraction is about a half. \begin{figure} \begin{centering} \includegraphics[width=0.45\textwidth]{fig3.pdf} \caption{ (top) Neutral fraction vs. redshift for the three evolving $\lambda_{\rm abs}$ models considered here, in addition to the reference case, for which $\lambda_{\rm abs}\rightarrow \infty$ (right-most dotted curve). (bottom) Absorption mean free path vs. redshift. As the last neutral patches are cleared away, so are the last reservoirs of cold neutral gas in which the minihalos can survive as absorption systems, so $\lambda_{\rm abs}$ increases at the end of reionization. As the evaporation time decreases, the destruction rate of absorption systems increases, leading to longer mean free paths. Also shown are the constraints on $\lambda_{\rm abs}$ from \citet{bolton/haehnelt:2007a} at $z=5$ and 6. } \label{absmfp} \end{centering} \end{figure} Shown in Fig.~\ref{absmfp} is the evolution in neutral fraction and absorption system mean free path for the three minihalo absorption system models that we simulated. Models with longer evaporation times lead to a higher abundance of absorption systems, and hence shorter mean free paths and a relative delay in the time of overlap. The mean free path in the $t_{\rm ev}=100$~Myr model evolves from about 40 Mpc when $x_{\rm HI}\sim 0.1$, to about 80 Mpc at overlap. This model is also plotted as the solid black line in Fig.~\ref{bubblemfp}, which shows that the instantaneous $\lambda_{\rm b}$ values for the evolving $\lambda_{\rm abs}$ model roughly interpolate between those for the fixed $\lambda_{\rm abs}$ models. This can also be seen by noting that $\lambda_0\equiv \lambda_{\rm b}(x_{\rm HI}=0.1)$ for the evolving mean free path model, for which $\lambda_{\rm abs}(x_{\rm HI}=0.1)\simeq 32$~Mpc$/h$, is the same as that for the corresponding fixed mean free path model with $\lambda_{\rm abs}=32$~Mpc$/h$, $\lambda_0=170$~Mpc. 
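The qualitative behaviour in Fig.~\ref{absmfp} -- a mean free path that rises steeply once overlap is reached -- can be reproduced by a toy integration of equation (\ref{nabs}). The ionization history $x(z)$ and the unit normalisation of the halo density below are illustrative assumptions of ours, standing in for the simulated quantities:

```python
import math

# Illustrative Euler integration (our sketch) of the absorber evolution,
# dn_abs/dz = (dlnx/dz)(n_h - n_abs) + xi_ev (1+z)^(-5/2) n_abs,
# with xi_ev = 260 corresponding to t_ev = 100 Myr and n_h normalised to 1.
# Since lambda_abs ~ 1/(pi R_vir^2 n_abs), the drop in n_abs after overlap
# translates directly into a rising mean free path.

XI_EV = 260.0                       # t_ev = 100 Myr
N_H = 1.0                           # normalised minihalo density

def x_ion(z):
    """Toy global ionized fraction: rises towards overlap at z = 6."""
    return min(1.0, math.exp((6.0 - z) / 1.5))

def dlnx_dz(z, eps=1e-4):
    return (math.log(x_ion(z + eps)) - math.log(x_ion(z - eps))) / (2 * eps)

def n_abs_history(z_start=20.0, z_end=4.0, dz=1e-3):
    """Integrate downward in redshift; record n_abs at a few redshifts."""
    n, z, out = N_H, z_start, {}
    while z > z_end:
        dndz = dlnx_dz(z) * (N_H - n) + XI_EV * (1 + z)**-2.5 * n
        n -= dndz * dz            # stepping towards lower z
        z -= dz
        for zq in (10.0, 6.0, 4.0):
            if abs(z - zq) < dz / 2:
                out[zq] = n
    return out

hist = n_abs_history()
print(hist)   # n_abs drops steeply once x -> 1 and only evaporation remains
```

With these choices $n_{\rm abs}$ tracks a quasi-equilibrium during reionization and falls off sharply after overlap, so $\lambda_{\rm abs}\propto n_{\rm abs}^{-1}$ rises at the end, as in the figure.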
Fig.~\ref{panels} shows the ionization field in a 5-Mpc$/h$ slice at ${x}=0.75$ for $M_{\rm min}=10^8 M_\odot$, with $\lambda_{\rm abs}=256$~Mpc$/h$ (top) and $\lambda_{\rm abs}=8$~Mpc$/h$ (bottom). The two cases are quite different, with many more small neutral patches for $\lambda_{\rm abs}=8$~Mpc$/h$ relative to $\lambda_{\rm abs}=256$~Mpc$/h$. Also shown in Fig.~\ref{panels} is the reionization redshift in a 0.5-Mpc$/h$ slice. Lower values of $\lambda_{\rm abs}$ result in a more extended reionization overlap period -- i.e., the maxima in reionization redshift are higher, and the minima are lower. \section{Discussion} \begin{figure*} \begin{centering} \includegraphics[width=0.47\textwidth]{fig4a.jpg} \includegraphics[width=0.47\textwidth]{fig4b.jpg} \caption{{\em left:} Reionization redshift in a 0.5 Mpc$/h$-thick slice for the $\lambda_{\rm abs}=256$ Mpc$/h$ simulation (top) and the $\lambda_{\rm abs}=8$ Mpc$/h$ simulation (bottom). Both cases were for $\tau_{\rm es}=0.09$ and $M_{\rm min}=10^8 M_\odot$. Clockwise beginning from top-left, regions shown are 2 Gpc$/h$, 500 Mpc$/h$, and 125 Mpc$/h$ across. The radii of the circles correspond to $\lambda_{\rm abs}$. {\em right:} Same as left panels but for the projected ionized fraction in a 5 Mpc$/h$-thick slice at a time when the mean volume-weighted ionized fraction is 0.75. Note that the ionization field contains considerably more small scale structure in the $\lambda_{\rm abs}=8$ Mpc$/h$ case.} \vspace{1.0cm} \label{panels} \end{centering} \end{figure*} We have carried out large-scale simulations of reionization in a 2 Gpc$/h$ volume, including a finite mean free path to absorption systems. Absorption systems have a significant effect on the characteristic scales at the end of reionization. 
For the Thomson scattering optical depth reported by {\em WMAP}, $\tau_{\rm es}\simeq 0.09$, we find that the characteristic bubble size when the universe is 90 per cent ionized is quite sensitive to $\lambda_{\rm abs}$, with $\lambda_0\sim 440$~Mpc for $\lambda_{\rm abs}=256$~Mpc$/h$ and $M_{\rm min}=10^9 M_\odot$, while $\lambda_0\sim 94$~Mpc for $\lambda_{\rm abs}=8$~Mpc$/h$ and $M_{\rm min}=10^8 M_\odot$. Calculations using a sharp $k$-space filter lead to a more modest extension of the percolation phase (see Table 1) compared to the reionization history obtained by the solution of equation (\ref{dxdt}), on which the reionization histories shown in Figure 1 are based. For $\tau_{\rm es}\simeq 0.09$, the difference in the redshift when the universe is 90 per cent ionized between the $\lambda_{\rm abs}=8$~Mpc$/h$ and $\lambda_{\rm abs}=256$~Mpc$/h$ cases is only $\Delta z_{0.9}\sim 0.3$, while for a sharp real-space filter and equation (\ref{dxdt}) it is $\Delta z_{0.9}\sim 0.7$. The delay in the overlap time in both cases is $\Delta z_{\rm ov}\sim 1.5$. Using either sharp $k$-space smoothing or equation (\ref{dxdt}) to model the effect of absorption systems on the reionization history has drawbacks. While equation (\ref{dxdt}) conserves photons in the long mean free path limit, a mean flux and opacity are assumed, and further work will be necessary to validate and/or improve upon the approach laid out in section \ref{sec:history}. Using a sharp $k$-space filter results in a reionization history which also conserves photons in the long mean free path limit ($k_F\rightarrow 0$), but at the expense of using an oscillatory filter which can ``leak'' photons from high density regions into lower density ones, and for which there is no unique choice for the relation between $\lambda_{\rm abs}$ and $k_F$. 
While our results for the timing and photon consumption of the end of reionization are obtained from equations (1) and (9), it is important to note that more work will be necessary to test their accuracy. However, results for the morphology of reionization (Figures 1 and 4) do not depend on the model for the global history, and are therefore more robust. Our results are consistent with those of \citet{furlanetto/oh:2005}, in which consumption of ionizing photons in dense systems extends the end of reionization considerably. \citet{choudhury/etal:2009} used the semi-numerical approach in a volume~$100$~Mpc$/h$ across, and found a similar trend with decreasing $\lambda_{\rm abs}$, indicating a transition to an ``outside-in'' morphology near the end of reionization. We find that neutral patches may also remain in voids, where formation of ionizing sources is delayed and radiation from the nearest sources is shielded. Previous studies that modeled the physical origin of absorption systems as minihalos \citep[e.g.,][]{ciardi/etal:2006,mcquinn/etal:2007} also found effects similar to those we find here, although these studies were focused more on the intermediate stages of reionization and on smaller simulated volumes, where the photon mean free path was not as small relative to the bubble sizes as we find in our low mean free path cases in the final, percolation phase. As mentioned in the introduction, our approach is complementary to that taken by \citet{crociani/etal:2011}, in which the semi-numerical approach was used to determine the distribution of absorbers during reionization. They found that their spatial distribution is quite inhomogeneous, owing both to intrinsic density fluctuations in the IGM and to fluctuations in the ionizing radiation field -- regions far from sources have a relatively low flux, resulting in a higher abundance of absorbers. 
This effect was also pointed out by \citet{mcquinn/etal:2011}, who used high-resolution hydrodynamic simulations post-processed with radiative transfer to determine the mean free path as a function of flux, finding a strongly nonlinear dependence of the mean free path on the emissivity of ionizing radiation. These effects imply that an important improvement to our model would be not only a time-varying mean free path, but also a spatially varying one. To accomplish this, more work will need to be done on the ``sub-grid'' physics during reionization, using the results of high-resolution cosmological simulations with radiative transfer of a background radiation field, coupled to the hydrodynamics of the gas. However, given that the final patches to be ionized are those most distant from the most luminous sources, the delay in the very end of reionization that we find here is likely to persist when the inhomogeneity of the IGM opacity due to absorption systems is properly taken into account. The lingering neutral clouds we find would further complicate interpretation of quasar absorption spectra at $z\sim 6$ \citep[e.g.,][]{bolton/haehnelt:2007b, lidz/etal:2007, alvarez/abel:2007, wyithe/etal:2008, furlanetto/mesinger:2009}. As discussed by \citet{mesinger:2009}, gaps in quasar spectra can come from either these neutral clouds or mostly ionized but optically-thick absorption systems. The line-of-sight abundance of these neutral clouds at fixed ionized fraction increases with decreasing absorption system mean free path. As shown by \citet{weinmann/etal:2007} and \citet{alvarez/etal:2009}, the Local Group may have been reionized by external sources, i.e., the progenitors of the Virgo Cluster at a distance of $\sim$~20 Mpc. If $\lambda_{\rm abs}<20$~Mpc, however, local reionization would have been delayed until Local Group progenitors had formed in sufficient number \citep[e.g.,][]{munoz/etal:2009}. 
Since the satellite population is sensitive to the timing of reionization \citep{busha/etal:2010}, lower values of $\lambda_{\rm abs}$ could imply a higher local satellite abundance. Finally, our results have important implications for the interpretation of temperature fluctuations in the cosmic microwave background at multipoles $l\sim 3000$ \citep{das/etal:2011, reichardt/etal:2011}, through the kinetic Sunyaev-Zel'dovich (kSZ) effect \citep[e.g.,][]{mcquinn/etal:2005, iliev/etal:2007}. In particular, \citet{mesinger/etal:2011b} used nearly the same three parameters and approach as we used here for their parameter study of the dependence of the kSZ signal on the reionization scenario. Reducing the mean free path shifts power to higher multipoles because of the accompanying decrease in the bubble size that we find. Although the kSZ effect is only just beginning to constrain the duration of reionization \citep{zahn/etal:2011b}, more detailed future observations and analysis will begin to constrain the patchiness of reionization as well, and understanding the effect of absorption systems on reionization will be crucial. \acknowledgments{We thank M.~Haehnelt, M.~McQuinn, A.~Mesinger, R.~Thomas, G.~Vasil for useful discussions, and J.~Chluba for providing the diffusion solver from CosmoRec. The simulations in this paper were performed on the GPC supercomputer at the SciNet HPC Consortium. SciNet is funded by: the Canada Foundation for Innovation under the auspices of Compute Canada; the Government of Ontario; Ontario Research Fund - Research Excellence; and the University of Toronto. This work was partially supported by NASA ATFP grant NNX08AH26G, NSF AST-0807312, and NSF AST-0808398.}
\section*{Acknowledgement} F.L. was supported by ERC grant NORIA and by the French government under management of Agence Nationale de la Recherche, as part of the ``Investissements d'avenir'' program, reference ANR19-P3IA-0001 (PRAIRIE 3IA Institute). F.L. also received funding from the European Research Council under the European Union's Horizon 2020 research and innovation programme (Grant Agreement no. 866274). \\ The work of F.-X. Vialard was partly supported by the Bézout Labex (New Monge Problems), funded by ANR, reference ANR-10-LABX-58. \section{Examples} \label{sec:applications} In this section, we discuss some examples that arise naturally, setting aside the regularity assumptions imposed in Section \ref{sec:geometric_laplace} and focusing only on the geometric formula for the first-order term. We treat various cases in which simplifications of different kinds occur. The first case recovers the standard Laplace formula using a translation invariant cost. We then treat a parametrix for the heat kernel, for which the second fundamental form vanishes. The likelihood in Bayesian modelling exhibits a flat geometry both on $\Sigma$ and on $X \times Y$. Lastly, the Fenchel--Young gap on Euclidean spaces induces a Hessian geometry on the surface $\Sigma$ in a flat ambient space, and, up to a change of variable, we obtain a remarkably simple formula. \subsection{The translation invariant cost and the usual Laplace formula} \label{sec:translation-invariant-cost} Let $U\colon{\mathbb R}^d \to {\mathbb R}$ be a strongly convex and nonnegative function such that $U(0) = 0$, the uniquely attained minimum of the function. We choose $X$ to be any bounded open subset of ${{\mathbb R}^d}$ (say, a ball), $Y={{\mathbb R}^d}$, and we consider the extended function $u\colon X \times {\mathbb R}^d \to {\mathbb R}$, $u(x,y) = U(x - y)$. This corresponds to a translation cost often used in optimal transportation. 
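A quick finite-difference check (ours; the quartic $U$ is an arbitrary test function) confirms the sign bookkeeping induced by the cost $u(x,y)=U(x-y)$ in one dimension: each derivative in $y$, i.e., each ``barred'' index, flips a sign relative to the corresponding derivative of $U$.

```python
# Finite-difference check (our illustration) of the index conventions for
# the translation-invariant cost u(x, y) = U(x - y) in one dimension:
# u_x = U'(x - y), u_y = -U'(x - y), u_xy = -U''(x - y).

def U(z):
    return z**4 + 0.5 * z**2      # strongly convex near 0, U(0) = 0

def u(x, y):
    return U(x - y)

def d(f, t, h=1e-5):
    """Central first difference."""
    return (f(t + h) - f(t - h)) / (2 * h)

x0, y0 = 0.7, 0.3
u_x  = d(lambda x: u(x, y0), x0)
u_y  = d(lambda y: u(x0, y), y0)
u_xy = d(lambda x: d(lambda y: u(x, y), y0), x0)
Up   = d(U, x0 - y0)
Upp  = d(lambda z: d(U, z), x0 - y0)

print(u_x - Up, u_y + Up, u_xy + Upp)   # all ~0
```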
The function $u$ satisfies the conditions of Section \ref{sec:geometric_laplace} and the vanishing set is the diagonal $\Sigma = \{ (x,x) : x\in X\}$. We consider the density $r(dx,dy)= F(x - y)dxdy$, where $dx$ and $dy$ stand for the Lebesgue measures on $X$ and $Y$, respectively. Note that all the geometric quantities evaluated on $\Sigma$ are constant on $\Sigma$. Then the standard Laplace method is given a geometric formulation by our formula. Writing $U=U(z)$, we denote \[ U_i=\frac{\partial U}{\partial z^i}\,, \quad U_{ij}=\frac{\partial^2 U}{\partial z^i\partial z^j}\,, \] and so on. Then $u(x,y)=U(x-y)$ implies that $u_i=U_i$, $u_{\bar\imath}=-U_i$, etc, and therefore we can associate in this way a barred index on $u(x,y)$ with its non barred version on $U(z)$. The geometric quantities read as follows: \begin{enumerate}[(a)] \item The Kim--McCann metric on the product space is $\tilde{g} = \frac 12 \begin{pmatrix} 0 & D^2U \\ D^2U & 0\end{pmatrix}$. It is non degenerate since the Hessian matrix $D^2U=(U_{ij})$ is non singular. The volume form is $ \tilde{m} = \det(D^2U)$. \item On $\Sigma$, $g_{ij} = U_{ij}(0)$ which does not depend on $x,y$ and $m =\sqrt{ \operatorname{det}(D^2U(0))} = \sqrt{ \tilde{m}}$. Since this metric is constant the Christoffel symbols and curvature vanish. \item $f(x,y)\coloneqq \frac{dr(x,y)}{d\tilde{m}(x,y)}=\frac{F(x-y)}{\det D^2U(x-y)}$. \item $ {\tilde\Delta} f = -4U^{i j}\partial_{ij} \Big(F/\det D^2U\Big)$. \item $\tilde{\Gamma}_{ij}^k = U^{k\ell} U_{\ell ij} \mbox{ and } \tilde{\Gamma}_{\ibar {\bar\jmath}}^\bark = -U^{k\ell} U_{\ell ij}$. Since $\Gamma^k_{ij}=0$, \eqref{eq:Gamma-Gammat-h} gives $h^k_{ij} = U^{k\ell}(0) U_{\ell ij}(0) $. 
\item $2 \hR_{i{\bar\jmath} {\bar k} \ell} = u_{i{\bar\jmath} {\bar k} \ell} - u_{i \ell {\bar s} }u^{{\bar s} t}u_{t {\bar\jmath} {\bar k}} = U_{ijk\ell} - U_{i \ell s}U^{s t}U_{tjk} \,.$ \item $\hR = 8 u^{i{\bar k}} u^{{\bar\jmath} \ell} \hR_{i{\bar\jmath}{\bar k} \ell} = 4 U^{ik} U^{j\ell} (U_{ijk\ell} - U_{i \ell s}U^{s t}U_{tjk})$ and $R = 0$. \item The norms of the second fundamental form and the mean curvature are given by \begin{align*} & \bracket{h,h} = - U_{ijk}U_{\ell mn}U^{i\ell}U^{jm}U^{kn}\,, \\ & \bracket{H,H} = - U_{ijk}U_{\ell mn}U^{ij}U^{k\ell}U^{mn}\,, \end{align*} where all the objects on the right-hand side are evaluated at $z=0$. \end{enumerate} Finally we see that since all the quantities depending on the geometry and $F$ are constant on $\Sigma$, they are constant on $\partial\Sigma$ and thus the boundary terms vanish. Let us instantiate this example in one dimension. We have $h = \frac {U^{(3)}}{U^{(2)}}$, $H =\frac {U^{(3)}}{(U^{(2)})^2} $, $\hR = \frac{4}{(U^{(2)})^2}\left( U^{(4)} - \frac{(U^{(3)})^2}{U^{(2)}}\right)$, $\langle h,h\rangle = \langle H,H\rangle = -\frac{(U^{(3)})^2}{(U^{(2)})^3}$ and $-\frac 18 {\tilde\Delta} (\frac {F}{U^{(2)}}) = \frac{F^{(2)}}{2(U^{(2)})^2} - \frac{F' U^{(3)}}{(U^{(2)})^3} + \frac{F (U^{(3)})^2}{(U^{(2)})^4} - \frac{FU^{(4)}}{2(U^{(2)})^3}$. The Laplace double integral \[ \int_X\int_{{\mathbb R}}\frac{e^{-U(x-y)/\varepsilon}}{(2\pi\varepsilon)^{1/2}}F(x-y)\,dxdy \] reduces to an integral over $z=x-y\in{\mathbb R}$ multiplied by the volume of $X$, and we get \begin{multline*} \int_{{\mathbb R}}\frac{e^{-U(z)/\varepsilon}}{(2\pi\varepsilon)^{1/2}}F(z)\,dz = \frac{F(0)}{\sqrt{U^{(2)}(0)}} \,+ \varepsilon \left( \frac{1}{2\sqrt{U^{(2)}(0)}}\left(\frac{F}{U^{(2)}}\right)''(0)\right) \\+ \varepsilon\frac {\sqrt{U^{(2)}(0)}H}2 \left( \frac {F}{U^{(2)}}\right)'(0)+\varepsilon\Big( \frac{3}{32}{\hR} - \frac{1}{12}\bracket{h,h} \Big)\frac{F(0)}{\sqrt{U^{(2)}(0)}} + O(\varepsilon^2)\,.
\end{multline*} We obtain, as in \cite[Chapter 6]{Bender1999}, \begin{multline*} \int_\RR e^{-U(z)/\epsilon}F(z)\,dz = \sqrt{\frac{2\pi\epsilon}{U^{(2)}(0)}}\\ \Bigg[F(0)+ \epsilon \left( \frac{F^{(2)}}{2U^{(2)}} - \frac{F U^{(4)}}{8 (U^{(2)})^2} - \frac{F' U^{(3)}}{2 (U^{(2)})^2} + \frac{5 F (U^{(3)})^2}{24 (U^{(2)})^3} \right) + O(\epsilon^2) \Bigg]\,. \end{multline*} The corresponding formula in higher dimensions is more difficult to find in the literature\footnote{It can be found in \cite[Chapter 6, Lemma 6.5.3]{Kolassa1997}, but with errors in the renormalization factor and in some of the signs of the coefficients.}; it takes the form \begin{multline*} \int_{\RR^d} e^{-U(z)/\epsilon}F(z) \,dz = \frac{(2\pi\epsilon)^{d/2}}{\sqrt{\operatorname{det}(U_{ij})}}\Bigg[F(0)\\+ \epsilon \Big[\frac12 U^{ij}\partial_{ij}F- \frac12\partial_i F U_{jk\ell}U^{ij}U^{k\ell} +F\Big(-\frac18 U_{ijk\ell} U^{ij}U^{k\ell} \\ +\frac{1}{12}U_{ijk}U_{\ell mn}U^{i\ell}U^{jm}U^{kn} + \frac18U_{ijk}U_{\ell mn}U^{ij}U^{k\ell}U^{mn}\Big)\Big] \Bigg] + O(\epsilon^2)\,. \end{multline*} It is the expression given in Theorem~\ref{thm:quantitativelaplace}. To derive it from the translation-invariant cost, we use the additional formulas \begin{multline*} - \frac18{\tilde\Delta} (r/\tilde{m}) = \frac{1}{\tilde{m}}\Big[\frac12 U^{ij}\partial_{ij}F - \partial_i F U_{jk\ell}U^{ij}U^{k\ell} \\ + F\Big(-\frac12 U_{ijk\ell} U^{ij}U^{k\ell}+\frac12U_{ijk}U_{\ell mn}U^{i\ell}U^{jm}U^{kn}+\frac12U_{ijk}U_{\ell mn}U^{ij}U^{k\ell}U^{mn}\Big)\Big] \end{multline*} and \begin{equation*} \frac14\hnabla_{\!H}(r/\tilde{m})= \frac1{2\tilde{m}}U^{ij}U^{km}U_{ijm}\left(\partial_kF-FU_{kst}U^{st}\right)\,. \end{equation*} \subsection{Small-time limit of the heat kernel}\label{SecHeatKernel} Let $(M,g)$ be a Riemannian manifold without boundary. Take $X=Y=M$ and consider the function $u(x,y)=\frac12 d^2(x,y)$, where $d$ is the Riemannian distance on $M$. We note that $u$ is symmetric, $u(x,y)=u(y,x)$, and vanishes on the diagonal, i.e.
$\Sigma=\{(x,x) : x\in M\}$. Then the restriction of the Kim--McCann metric to $\Sigma$ is precisely $g$ and therefore $\Sigma$ is an isometric copy of $M$~\cite[Example 3.6]{kim2007continuity}. Let us fix some notation. Since $y(x)=x$, the matrix $\partial_iy^{\bar\imath}$ is the identity and thus provides a way to identify a barred index with an unbarred index. For functions of one variable (either $x$ or $y$) we will therefore be allowed to identify $i \leftrightarrow {\bar\imath}$, $j \leftrightarrow {\bar\jmath}$, and so on. In this setting several simplifications occur. On $\Sigma$ we have: \begin{enumerate}[(a)] \item $g_{ij}(x)=u_{ij}(x,x)=-u_{i{\bar\jmath}}(x,x)$; \item $\tilde{\Gamma}^k_{ij}=\tilde{\Gamma}^{\bar k}_{{\bar\imath}{\bar\jmath}} = \Gamma^k_{ij}$; \item $h=0$, $H=0$; \item $\tilde{m}=m^2$, $\partial_i\log\tilde{m}=\partial_i\log m$ and $\partial_{i{\bar\jmath}}\log\tilde{m}=-\hR_{i{\bar\jmath}}$. \end{enumerate} We also note that (b)--(d) hold in greater generality, whenever $u$ is a symmetric function that vanishes on the diagonal. Consider now the heat equation on $M$, \[ \partial_tq = \Delta q, \] where $\Delta$ is the Laplacian on $(M,g)$ acting on scalar functions. The \emph{heat kernel} is the function $p_t(x,y)$ giving the solution at time $t>0$ from an initial condition, \[ q_t(y) = \int_M p_t(x,y)q_0(x)\,dm(x)\,. \] Here we integrate against the Riemannian volume form $m$. The heat kernel has well-known small time asymptotics of the following form~\cite{minakshisundaram_pleijel_1949}, \begin{equation*} p_t(x,y) = \frac{e^{-d(x,y)^2/4t}}{(4\pi t)^{d/2}} \sum_{k = 0}^\infty t^k \Phi_k(x,y)\,, \end{equation*} where the functions $\Phi_k$ (sometimes called ``Hadamard coefficients'') are solutions to some particular ``transport'' partial differential equations, see also \cite{rosenberg1997,Chavel1984EigenvaluesIR}. 
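On flat $\RR^d$ the heat kernel is exactly Gaussian, so that $\Phi_0\equiv 1$ and all higher Hadamard coefficients vanish. As a small numerical sketch (assuming NumPy; the initial condition $q_0(x)=e^{-x^2}$ is an illustrative choice), one can check on $\RR$ that the kernel reproduces $\partial_tq=\Delta q$ at small time:

```python
# On flat R the expansion truncates: p_t(x,y) = e^{-(x-y)^2/4t}/sqrt(4*pi*t),
# Phi_0 = 1 and Phi_k = 0 for k >= 1.  Check that (q_t - q_0)/t ~ q_0''
# for small t, with the illustrative choice q_0(x) = e^{-x^2}.
import numpy as np

x = np.linspace(-8.0, 8.0, 2001)
dx = x[1] - x[0]
q0 = np.exp(-x**2)
lap = (4.0 * x**2 - 2.0) * np.exp(-x**2)       # exact q_0''

t = 1e-4
P = np.exp(-(x[:, None] - x[None, :])**2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)
qt = P.dot(q0) * dx                            # q_t(y) = \int p_t(x,y) q_0(x) dx
rate = (qt - q0) / t

err = np.max(np.abs(rate - lap)[np.abs(x) < 4.0])
print(err)   # small: the flow matches the heat equation at first order in t
```

The discrepancy is of order $t$, consistent with the next term of the semigroup expansion $q_t=q_0+t\Delta q_0+O(t^2)$.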
Let us also remark that in several applications the reverse point of view is taken instead, \textit{i.e.} one solves the heat equation to obtain an estimate of the squared distance \cite{Crane}. Integrating $q_t$ against a test volume form $\mu$ we obtain a sum of double integrals of the type studied in Theorem~\ref{thm:laplace} (with $\varepsilon=2t$), \begin{equation}\label{eq:heat-kernel-series} \int_M q_t\,d\mu = \sum_{k=0}^\infty t^k\iint_{M\times M} \frac{e^{-u(x,y)/2t}}{(4\pi t)^{d/2}} \Phi_k(x,y) q_0(x)dm(x)d\mu(y)\,. \end{equation} Let us explore what happens in the small-time limit. We first focus on the first term $k=0$. Set $r(dx,dy)=\Phi_0(x,y) q_0(x)m(dx)\mu(dy)$. Then the zeroth-order term in our Laplace formula as $t\to 0^+$ is \[ \int_M \Phi_0(x,x) q_0(x)d\mu(x)\,. \] Since the other terms $k\ge 1$ are $O(t)$, equating the left- and right-hand sides in~\eqref{eq:heat-kernel-series} leads to \[ \Phi_0(x,x)=1, \] which is the first piece of information one typically obtains about the Hadamard coefficients~\cite[Chapter 3.2]{rosenberg1997}. In other words, $\Phi_0=1$ on $\Sigma$ and this implies that the tangential gradient vanishes, $\nabla \Phi_0=0$. Let us now expand to first order in $t$ the first two terms in~\eqref{eq:heat-kernel-series}, subtract the zeroth-order term, divide by $t$ and take the limit $t\to 0^+$. We obtain \[ \int_M \partial_tq \,d\mu\Big|_{t=0} = 2\int_M \Big[- \frac18 {\tilde\Delta} f + f \Big(\frac{3}{32}{\hR} - \frac{1}{8}R \Big)\Big] \,dm + \int_M\Phi_1 q_0d\mu\,, \] with $f(x,y)\coloneqq \Phi_0(x,y)q_0(x)dm(x)d\mu(y)/d\tilde{m}(x,y)$. Note the factor $2$ coming from $\varepsilon=2t$. After some simplification the right-hand side can be written \begin{equation}\label{eq:heat-laplace-integral} \int_M \big[\Delta q_0 - \bracket{K\nabla^N\!\Phi_0,\nabla q_0} + \big(-\frac14{\tilde\Delta}\Phi_0-\frac12\div(K\nabla^N\!\Phi_0) + \Phi_1 - (\frac{\hR}{16}+\frac{R}{4})\big) q_0\big]\,d\mu.
\end{equation} Since $\partial_tq=\Delta q$, this implies the following equations on the diagonal $\Sigma$, \[ \nabla^N\!\Phi_0 =0\quad\text{and therefore}\quad \hnabla\Phi_0 = 0, \] \[ -\frac14{\tilde\Delta}\Phi_0 + \Phi_1 =\frac{1}{16}\hR+\frac{1}{4}R\,. \] As a matter of fact, more is known about these coefficients, for instance $\Phi_1(x,x)=\frac16 R$~\cite[Chapter 3]{rosenberg1997}. Let us now consider a more general situation, the Fokker--Planck equation \begin{equation}\label{eq:fokker-planck} \partial_tq =\Delta q + \nabla_{\!a} q + cq\,, \end{equation} where $a$ is a vector field on $M$, $\nabla$ the covariant derivative and $c$ a scalar field on $M$. The small-time asymptotics of~\eqref{eq:fokker-planck} were recently studied by Bilal in~\cite{bilal2020}, who showed that formally there exists a small-time expansion of the form~\eqref{eq:heat-kernel-series}, with suitably adjusted coefficients $\Phi_k(x,y)$. Starting from~\eqref{eq:heat-laplace-integral} we obtain the following equations on the diagonal $\Sigma$, \begin{gather*} \Phi_0(x,x)=1,\\ \nabla\Phi_0=0,\\ \nabla^N\!\Phi_0=-Ka,\\ -\frac14{\tilde\Delta}\Phi_0 +\Phi_1= c-\frac12\div(a)+\frac{1}{16}\hR+\frac14 R\,. \end{gather*} We also note that similarly to the heat kernel, Bilal obtains more information about the coefficients, for instance $\Phi_1(x,x)= \frac16 R -\frac12\div(a)-\frac14\abs{a}^2+c$. To conclude this example, we also look at the heat flow from a slightly different angle: given an initial condition $q_0$ on $M$ we consider the evolution flow \[ \tilde q_t(y)=\int_M \frac{e^{-d^2(x,y)/4t}}{(4\pi t)^{d/2}} q_0(x)dm(x)\,, \] and ask how it deviates from the heat equation solution $q$ defined by~\eqref{eq:heat-kernel-series}. Similar computations to the ones above give \[ \partial_t\tilde q|_{t=0} = \Delta q_0-\Big(\frac{1}{16}\hR +\frac14 R\Big) q_0\,. \] We see that additional curvature terms appear.
While $R$ is the classical scalar curvature of $M$, $\hR$ is not a traditionally studied Riemannian invariant since it requires endowing $M\times M$ with the Kim--McCann geometry. However we see that it shows up naturally in this very classical problem. \subsection{Likelihood in Bayesian models with Gaussian priors}\label{SecBayesianApplication} Bayesian modeling postulates that the observed data $y \in {\mathbb R}^d$ are generated from $x \in {\mathbb R}^k$ through a function $F\colon {\mathbb R}^k \to {\mathbb R}^d$, which can be nonlinear, together with some model of noise. A standard model is $y = F(x) + \sqrt{\varepsilon} n$ where $n$ is a standard Gaussian variable on ${\mathbb R}^d$ and $\varepsilon$ a positive real parameter. The associated likelihood is given by $ \mathbb{P}(dy|x) = (2\pi\varepsilon)^{-d/2} e^{-\frac 1{2\varepsilon} \abs{y- F(x)}^2}\,. $ Let us consider the particular case where $k = d$ and $F$ is a $C^4$ diffeomorphism. This is a typical case where the probability is readily expressed in the Kim--McCann framework. In this case, the map is $y(x) = F(x)$ and the geometry on $\Sigma$ is flat, whereas the ambient metric is not constant. Therefore, several simplifications occur. The geometric quantities read \begin{enumerate}[(a)] \item The metric on the product space ${\mathbb R}^d \times {\mathbb R}^d$ is $\tilde{g} = \frac 12 \begin{pmatrix} 0 & DF \\ DF& 0\end{pmatrix}$. It is non degenerate if $DF$ is non singular, and $ \tilde{m} =\abs{\det(DF)}$. \item The metric on $\Sigma$ is the pull-back of the Euclidean metric on ${\mathbb R}^d$ by $F$, and $m = \tilde{m}$. Being the pull-back of a Euclidean metric, the curvature tensor of $g$ vanishes and $R = 0$.
\item $ \tilde{\Gamma}_{ij}^k = \tilde{\Gamma}_{\ibar {\bar\jmath}}^{{\bar k}} = 0$ and it follows from Formulas~\eqref{EqFormulaForh} that $h_{ij}^k = -\Gamma_{ij}^k = -\frac12 \frac{\partial [F^{-1}]^k}{\partial y^{\bar k}}\frac{\partial^2 F^{\bar k}}{\partial x^i\partial x^j}$. As a consequence, the curvature tensor $\hR _{i{\bar\jmath} {\bar k} \ell}$ vanishes and $\hR = 0$. \item $ {\tilde\Delta} f = 4 \operatorname{trace}([DF]^{-1} D^2_{xy}f)$. \end{enumerate} In this case, the geometric Laplace formula reads \begin{multline*} \iint_{{{\mathbb R}^d}\times {{\mathbb R}^d}}\frac{e^{-u(x,y)/\varepsilon}}{(2\pi\varepsilon)^{d/2}}\,dr(x,y) = \int_{\Sigma} fdm \,+ \\ \varepsilon\int_\Sigma\Big[-\frac 18{\tilde\Delta} f+ \frac 14 \hnabla_{\!H} f + \left( -\frac{1}{8}\bracket{H,H} + \frac{1}{24}\bracket{h,h}\right) f\Big] \,dm + O(\varepsilon^2)\,, \end{multline*} with $f=dr/d\tilde{m}$; when $r(dx,dy)=\pi(dx)\,dy$ for a prior $\pi$ on $x$, the integrand is precisely the joint law of the random variable $(x,y)$. Other noise models such as multiplicative noise could be treated in a similar way, leading to different geometries. \subsection{Fenchel--Young duality gap} \label{sec:fenchel-young-gap} An example for which the formula can be made remarkably simple is the Fenchel--Young gap on Euclidean space. The corresponding ambient metric is flat and the metric on $\Sigma$ is a Hessian metric. Indeed, consider a convex function $F\colon {{\mathbb R}^d} \to {\mathbb R}$ and $F^*\colon {{\mathbb R}^d} \to {\mathbb R}$ its Legendre--Fenchel transform defined by $F^*(y) = \sup_x \langle y , x \rangle - F(x)$. Thus the Fenchel--Young duality gap defined by $0\leq u(x,y) \coloneqq F(x) + F^*(y) - \langle x, y\rangle$ satisfies our assumptions under smoothness hypotheses on $F$. The set $\Sigma$ is the graph of $D F$, the derivative of $F$.
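As a quick numerical sanity check of this setup (a sketch assuming NumPy; the pair $F(x)=x^4/4$, $F^*(y)=\frac34\abs{y}^{4/3}$ on ${\mathbb R}$ is an illustrative choice, not taken from the text), the gap is nonnegative and vanishes exactly on the graph $y=F'(x)=x^3$:

```python
# Fenchel--Young gap for the illustrative pair F(x) = x^4/4,
# F*(y) = (3/4)|y|^{4/3}:  u(x, y) = F(x) + F*(y) - x*y >= 0,
# with equality exactly on the graph y = F'(x) = x^3.
import numpy as np

x = np.linspace(-2.0, 2.0, 401)
y = np.linspace(-8.0, 8.0, 801)
X, Y = np.meshgrid(x, y, indexing="ij")
u = X**4 / 4 + 0.75 * np.abs(Y) ** (4.0 / 3.0) - X * Y

# restrict u to the graph y = x^3, where it should vanish identically
on_graph = x**4 / 4 + 0.75 * np.abs(x**3) ** (4.0 / 3.0) - x * x**3

print(u.min())                    # >= 0 up to rounding
print(np.max(np.abs(on_graph)))   # ~ 0: u vanishes on the graph of DF
```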
In such a case, the geometric quantities are \begin{enumerate}[(a)] \item The metric on the product space is $\tilde{g} = \frac 12 \begin{pmatrix} 0 & \operatorname{Id} \\ \operatorname{Id}& 0\end{pmatrix}$. It is a flat metric and its Christoffel symbols and curvature tensor vanish. \item The metric on $\Sigma$ is the Hessian metric of $F$, $g(x)(v,v) = \langle v,D^2 F(x) v \rangle$. \item For such a Hessian metric on the Euclidean space, one has $ \Gamma^k_{ij} = \frac12 F^{pk} F_{ijp} $, where $F_{ijp}=\partial_{ijp}F$ and $F^{pk}$ denotes the inverse matrix of $F_{kp}$. Since $\Gammat_{ij}^k = \Gamma_{ij}^k + h_{ij}^k = 0$, one has $h_{ij}^k = - \Gamma_{ij}^k = - \frac12 F^{pk} F_{ijp}$. The curvature tensor reads $R_{ijk\ell} = \frac 14 F^{mn}(F_{jkm}F_{i\ell n} - F_{j\ell m}F_{ikn})$. \item The Laplacian on $X \times Y$ is $ {\tilde\Delta} f = 4\operatorname{trace}(D^2_{xy}f) $. \end{enumerate} The Laplace formula reads \begin{multline*} \iint_{X\times Y}\frac{e^{-u(x,y)/\varepsilon}}{(2\pi\varepsilon)^{d/2}}f(x,y)\,d\tilde{m} (x,y) = \int_{\Sigma} fdm \,+ \\ \varepsilon\int_\Sigma\Big[-\frac 18{\tilde\Delta} f+ \frac 14 \hnabla_{\!H} f+ \left( - \frac{1}{8}R -\frac{1}{8}\bracket{H,H} + \frac{1}{24}\bracket{h,h}\right) f\Big] \,dm + O(\varepsilon^2)\,. \end{multline*} \par{\textbf{Change of variable. }} Now, we underline that, in practical cases, one is given a function $u$ on a manifold $X \times Y$. However, the Kim--McCann metric is defined via $u$ and is therefore not invariant under reparametrization. As explained in the introduction, changes of variables can be used in order to simplify the first order formula, the simplest formulation being obtained via a change of variable that transforms $u$ to a Gaussian in the standard case of Section \ref{sec:translation-invariant-cost}. However, since the vanishing set of $u$, $\Sigma$, is a $d$-dimensional manifold, such a change of variable cannot be performed.
Yet, for the Fenchel--Young gap, a natural change of variables is given by the map $DF\colon X \to Y$, which we assume to be a smooth diffeomorphism. One can now consider the function $\tilde u\colon X \times X \to {\mathbb R}$ defined by \[ \tilde u(x,x') = u(x,D F(x'))= F(x) + F^*(D F(x')) - \langle D F(x'),x\rangle. \] It is known as the Bregman divergence of $F$, see Examples~\ref{ex:bregmandiv} and~\ref{ex:fenchelyounggap}. The Kim--McCann metric is $$\tilde{g}(x,x') = \frac 12 \begin{pmatrix} 0 & D^2 F(x') \\ D^2 F(x')& 0\end{pmatrix}\,,$$ $\Sigma$ is the diagonal $\{(x,x) \in X\times X\}$ and the metric on $\Sigma$ is again the Hessian metric of $F$. Note that the Kim--McCann metric does not depend on $x$, and thus the Christoffel symbols $\Gammat^k_{ij}$ vanish. The interest of this change of variable is to set the second fundamental form to $0$. Therefore, in these coordinates, the Laplace formula is particularly simple \begin{equation*} \iint_{X\times X}\frac{e^{-\tilde u(x,x')/\varepsilon}}{(2\pi\varepsilon)^{d/2}}f(x,x')\,d\tilde{m} (x,x') = \int_{\Sigma} fdm - \frac\varepsilon 8 \int_\Sigma \big({\tilde\Delta} f + R f\big) \,dm + O(\varepsilon^2)\,. \end{equation*} Note that for practical applications, the previous formula requires knowledge of $D F$, which is only implicitly given by the Fenchel--Young gap. \section{Embeddings in the Kim--McCann geometry}\label{sec:KimMcCann} Consider the triple $(X, Y,c)$ where $X$, $Y$ are two domains of ${{\mathbb R}^d}$ and $c(x,y)$, $x\in X, y\in Y$ is a real-valued function. We assume for simplicity that $X$ and $Y$ are subsets of ${{\mathbb R}^d}$ but the general idea is that they could be smooth manifolds. In the context of optimal transport, Kim and McCann~\cite{kim2007continuity} introduced a new pseudo-metric on $X\times Y$ which forms the bedrock of the present work.
Their goal was to give a geometric meaning to an intriguing quantity discovered by Ma, Trudinger and Wang~\cite{ma2005regularity} that plays an important role in the regularity of optimal transport. This quantity is now called the MTW tensor and it can be seen as a Kim--McCann curvature tensor. Let us first describe the Kim--McCann geometry informally, and delay precise definitions until Section~\ref{sec:review-kim-mccann}. Suppose that we have a bijection $X\to Y$, describing for instance a minimal cost matching between $X$ and $Y$, where the cost of matching an element $x\in X$ to $y\in Y$ is $c(x,y)$. Suppose that we are matching $x\mapsto y$ and $x+\xi\mapsto y+\eta$, where $\xi$ and $\eta$ are small displacements, and we are contemplating whether it would be advantageous to instead match $x\mapsto y+\eta$, $x+\xi\mapsto y$. The (positive or negative) loss we would incur is the \emph{cross-difference}~\cite{McCann_glimpse2014,McCann_line1999} \begin{equation}\label{eq:cross-difference} \delta:= [c(x+\xi,y)+c(x,y+\eta)]-[c(x,y)+c(x+\xi,y+\eta)]\,. \end{equation} A Taylor expansion in $\xi,\eta$ gives \begin{equation*} \delta = -D^2_{xy}c(x,y)(\xi,\eta) + o(\abs{\xi}^2+\abs{\eta}^2)\,. \end{equation*} The leading-order term $-D^2_{xy}c(x,y)(\xi,\eta)$ is precisely the Kim--McCann metric (see Definition~\ref{def:Kim-McCann} for a proper definition). Note that it only depends on the cost $c$ and in particular does not rely on any Euclidean or Riemannian structure that could exist on $X$ and $Y$. Frequently in addition to a fixed function $c$ we encounter a more problem-dependent quantity induced by a pair of functions $\phi(x)$ and $\psi(y)$. 
This quantity, which we denote by $u$, is of the form \[ u(x,y):=c(x,y)-\phi(x)-\psi(y)\,, \] and satisfies the properties \begin{align} & \,\,u(x,y)\ge 0, \label{eq:c-div:ineq}\\ &\inf_y \, u(x,y)=0 \quad\text{for each $x$}, \label{eq:c-div:eq1}\\ &\inf_x \, u(x,y)=0 \quad\text{for each $y$}.\label{eq:c-div:eq2} \end{align} It can be seen as ``rectifying'' the cost $c$ by the addition of $\phi$ and $\psi$ to form a nonnegative quantity. Moreover, since $\phi$ only depends on $x$ and $\psi$ only depends on $y$, the function $u$ encodes in some sense the same interaction between $X$ and $Y$ as $c$ did. For example~\eqref{eq:cross-difference} remains unchanged when $c$ is replaced by $u$, and in particular the Kim--McCann metrics induced by $c$ and $u$ are the same. In optimal transport~\cite{villani2008optimal} $\phi,\psi$ are the Kantorovich potentials. In matching markets~\cite{Chiappori-McCann-Nesheim-2010,GalichonBook}, $\phi(x)$ and $\psi(y)$ represent the payoffs of (say) worker $x$ and firm $y$ respectively. In information geometry~\cite{AmariBook}, $u$ is called a divergence and generally $X=Y$ and $u$ vanishes on the diagonal $x=y$. We call $u$ a \emph{$c$-divergence} following Pal and Wong~\cite{PalWong_new_information_geometry2018}. The $c$-divergence generates a subset of $X\times Y$ defined by \[ \Sigma=\{(x,y) : u(x,y)=0\}\,. \] Assuming that the infimum in each of~\eqref{eq:c-div:eq1} and~\eqref{eq:c-div:eq2} is attained at a unique minimizer, $\Sigma$ can then be described as the graph of a map either from $X$ or from $Y$. In the next subsections we develop the geometry of $\Sigma$ seen as a submanifold of $X\times Y$; this point of view is at the heart of our Laplace formula. We study the Levi-Civita connection $\nabla$ on $\Sigma$ and the associated Riemann curvature, the second fundamental form and the mean curvature.
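Both the leading-order behavior of the cross-difference~\eqref{eq:cross-difference} and its invariance under the replacement of $c$ by $u$ can be checked numerically. The following sketch uses only the Python standard library, with the illustrative one-dimensional cost $c(x,y)=-\log(1+xy)$ on the positive half-line and arbitrary potentials $\phi,\psi$ (all choices here are for illustration only):

```python
# Cross-difference delta = [c(x+xi,y) + c(x,y+eta)] - [c(x,y) + c(x+xi,y+eta)]
# for the illustrative 1D cost c(x,y) = -log(1 + x*y) on positive reals:
#   * leading order:  delta ~ -c_xy(x,y)*xi*eta  with  c_xy = -1/(1+x*y)^2;
#   * delta is unchanged when c is replaced by u = c - phi - psi.
import math

def c(x, y):
    return -math.log(1.0 + x * y)

def cross_diff(cost, x, y, xi, eta):
    return (cost(x + xi, y) + cost(x, y + eta)) - (cost(x, y) + cost(x + xi, y + eta))

x, y, xi, eta = 1.0, 2.0, 1e-4, 2e-4
delta = cross_diff(c, x, y, xi, eta)
lead = xi * eta / (1.0 + x * y) ** 2          # -c_xy(x,y) * xi * eta

def u(x, y):                                   # arbitrary potentials phi, psi
    return c(x, y) - math.sin(x) - math.cos(y)

delta_u = cross_diff(u, x, y, xi, eta)
print(delta / lead)        # ~ 1: Taylor expansion of the cross-difference
print(delta_u - delta)     # ~ 0: phi and psi cancel exactly
```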
This is in contrast to the earlier approach of Wong and Yang in~\cite{wong2021pseudoriemannian}, in which they establish a link between the Kim--McCann framework and information geometry~\cite{AmariBook}. Wong and Yang showed the importance of the $c$-divergence and developed an ``information'' geometry on $\Sigma$ which is different from the one we present in this paper. Let us explain how. When presented with a submanifold $\Sigma\subset X\times Y$, the question arises of how to produce a connection on $\Sigma$ from a given connection $\hnabla$ on $X\times Y$. Motivated by the product structure of $X\times Y$ and information geometry, Wong and Yang introduce two ``dual connections'' on $\Sigma$ which are defined as the projections of $\hnabla$ onto the first ($x$) and second ($y$) components, respectively. They then obtain two curvature tensors, one for each of the two connections. Note that these are not the curvatures induced by the metric $g$ on $\Sigma$. Instead we choose to follow the more mainstream route of projecting orthogonally $\hnabla$ onto $\Sigma$, or equivalently of studying the Levi-Civita connection of $\Sigma$. This is standard in submanifold theory~\cite{ONeillBook,chen2014total,dajczer2019submanifold} and general relativity~\cite{MisnerThorneWheelerBook}, for instance. The advantage of our approach is that it manipulates common objects (the Levi-Civita connection, the metric curvature) and highlights the importance of extrinsic curvature (the second fundamental form). It also puts advanced tools at our disposal, such as the fundamental equations (Gauss and Codazzi equations). Beyond that, in future work it could help connect our framework to other notions in submanifold theory, such as the first and second variation formulas~\cite{Simons1968,xin2018minimal} and minimal varieties in optimal transport~\cite{Kim-McCann-Warren-calibrates}.
On the flip side, our formulas for the connection, see Prop.~\ref{prop:connections}, and Riemann curvature~\eqref{eq:def-R} are more complicated than Wong and Yang's formulas for the dual connections~\cite[Lemma 3]{wong2021pseudoriemannian} and the dual curvature tensors~\cite[Lemma 6]{wong2021pseudoriemannian}. Let us conclude this introduction with examples of $c$-divergences. \begin{example}[The distance squared cost] One of the simplest examples of a $c$-divergence is half the squared Euclidean distance, \[ u(x,y) = \frac12 \abs{x-y}^2\,, \] where $X=Y$ is a Euclidean space. Here $u$ can be seen as coming from either the quadratic cost $c(x,y)=\frac12 \abs{x-y}^2$ with $\phi=\psi=0$ or from the bilinear cost $c(x,y)=-x\cdot y$ with $\phi=\psi=-\frac12\abs{x}^2$; $\Sigma$ is the diagonal $\{(x,x) : x\in X\}$. The Kim--McCann pseudo-metric is $\tilde{g}((\xi,\eta),(\xi,\eta)) = \xi\cdot\eta$ and the induced metric on $\Sigma$ is $g(\xi,\xi) = \abs{\xi}^2$. On a Riemannian manifold, the corresponding cost is half the squared Riemannian distance, $\frac 12 d(x,y)^2$. Note that this cost is not smooth in general due to the presence of the cut locus. However, it is smooth on a neighborhood of the diagonal if $M$ is compact. The metric on the diagonal $\Sigma$ is the Riemannian metric of $M$, see Section \ref{SecHeatKernel}. \end{example} \begin{example}[Bregman divergence] \label{ex:bregmandiv} Let $X=Y$ be a $d$-dimensional vector space and let $f$ be a differentiable strictly convex function on $X$. Then \[ u(x,y) = f(x)-f(y)-\bracket{\nabla f(y), x-y} \] is called the Bregman divergence of $f$ and we denote it $f(x|y)$. It vanishes on the diagonal $\Sigma=\{(x,x) : x\in X\}$. The Kim--McCann metric is $\tilde{g}_{x,y}((\xi,\eta),(\xi,\eta))=\nabla^2f(y)(\xi,\eta)$ and the Riemannian metric on $\Sigma$ is $g_x(\xi,\xi)=\nabla^2f(y(x))(\xi,\xi)$.
\end{example} \begin{example}[Fenchel--Young gap] \label{ex:fenchelyounggap} Let $X$ be a $d$-dimensional vector space and let $Y=X^*$, the dual vector space of $X$. Let $f$ be a differentiable strictly convex function on $X$ and take $c(x,y)=-\bracket{x,y}$, $\phi=-f$ and $\psi=-f^*$, where $f^*$ is the convex conjugate (Legendre--Fenchel transform) of $f$. Consider \[ u(x,y) = f(x)+f^*(y)-\bracket{x,y}. \] By the Fenchel--Young inequality $u(x,y)\ge 0$ and $u$ vanishes on $\{(x,\nabla f(x)):x\in X\}$. Moreover it can be checked that \[ u(x,y) = f\big(x|\nabla f^*(y)\big)\,, \] and also $u(x,y) = f^*(y|\nabla f(x))$, so that $u$ is essentially a Bregman divergence up to a reparametrization in one of the two variables. Laplace expansions with Fenchel--Young gaps are explored in Section~\ref{sec:fenchel-young-gap}. \end{example} \begin{example}[Translation-invariant cost] Let $U\colon{{\mathbb R}^d}\to{\mathbb R}$ be a strictly convex nonnegative function satisfying $U(0)=0$ and consider \[ u(x,y) = U(x-y). \] Then $u\ge 0$ and $u$ vanishes on the diagonal $x=y$. Translation-invariant costs are natural in optimal transport~\cite{GangboMcCann1995,santambrogio2015optimal} and in Section~\ref{sec:translation-invariant-cost} we study them to recover the usual Laplace method on ${{\mathbb R}^d}$. Let us also mention the work of Khan and Zhang~\cite{KhanZhang2020}, in which the authors introduce a natural Kähler geometry associated to translation-invariant costs. This geometry is different from the Kim--McCann geometry and can be seen as a complementary framework. \end{example} \begin{example}[Log-divergence] Take $X=Y=\{x\in{{\mathbb R}^d} : x_i> 0\}$ the positive orthant and $\alpha > 0$. Consider the cost $c(x,y) = -\frac{1}{\alpha} \log(1+\alpha \bracket{x,y})$. Interestingly, this cost gives rise to a Kim--McCann metric with constant sectional curvature, specifically $-4\alpha$, as shown in \cite[Section 4.1]{wong2021pseudoriemannian}.
Let $f\colon X\to{\mathbb R}$ be a differentiable function such that $e^{\alpha f}$ is convex. Then \[ u(x,y) = f(x)-f(y)- \frac{1}{\alpha} \log(1+\alpha \bracket{\nabla f(y),x-y}) \] is a nonnegative function which is $0$ if $x = y$ (here we assume that the quantity inside the logarithm is positive). This ``log-divergence'' was introduced in~\cite{PalWongArbitrage2016,WongLogDiv2018}. Note that when $\alpha \to 0$, we recover the Bregman divergence. \end{example} \subsection{Review of the Kim--McCann metric} \label{sec:review-kim-mccann} Let us recall the main geometric objects introduced by Kim and McCann in~\cite{kim2007continuity}. \begin{definition}\label{def:Kim-McCann} The Kim--McCann metric is the pseudo-metric defined on the product space $X\times Y$ by \[ \tilde{g}(x,y) = - \frac 12 \begin{pmatrix}0 & D^2_{xy}c(x,y) \\ D^2_{xy}c(x,y) & 0\end{pmatrix}. \] In other words, for a vector field $(\xi(x,y),\eta(x,y))$ on $X\times Y$ we have $\tilde{g}(x,y)((\xi,\eta),(\xi,\eta)) = -D^2_{xy}c(x,y)(\xi,\eta)$, where the mixed-derivatives block of the Hessian, $D^2_{xy}c(x,y)(-,-)$, is understood as a bilinear form. \end{definition} Due to its particular form, the signature of this pseudo-metric is $(d,d)$ if it is non-degenerate. Indeed, the \emph{para-complex} structure $T(X\times Y)\to T(X\times Y)$ defined by \begin{equation}\label{eq:def-K} K(\xi,\eta) = (\xi,-\eta) \end{equation} is an involution with eigenvalues $1$ and $-1$, whose eigenspaces each have dimension $d$. The space $X\times Y$ endowed with $(\tilde{g},K)$ is known as a para-Kähler manifold~\cite{parakahlerReview,parakahler}. Since $K^2$ is the identity, we have $K=K^{-1}$. Additionally $K$ is skew-symmetric with respect to $\tilde{g}$ in the sense that \begin{equation} \label{eq:K-skew} \bracket{KU,V} = -\bracket{KV,U}, \end{equation} where we write $\bracket{-,-}=\tilde{g}(-,-)$. This also implies that $\bracket{KU,KV} = -\bracket{U,V}$.
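These algebraic identities can be verified directly in block coordinates. A small numerical sketch (assuming NumPy; the random matrix $A$ stands in for $D^2_{xy}c$ at a point):

```python
# In block form, gt = -1/2 [[0, A], [A^T, 0]] with A standing in for D^2_{xy}c
# at a point, and K = diag(Id, -Id).  Check: signature (d, d), skew-symmetry
# <KU, V> = -<KV, U>, and <KU, KV> = -<U, V>.
import numpy as np

rng = np.random.default_rng(0)
d = 3
A = rng.standard_normal((d, d))              # illustrative mixed Hessian block
Z = np.zeros((d, d))
G = -0.5 * np.block([[Z, A], [A.T, Z]])      # symmetric pseudo-metric matrix
K = np.block([[np.eye(d), Z], [Z, -np.eye(d)]])

U = rng.standard_normal(2 * d)
V = rng.standard_normal(2 * d)
g = lambda a, b: a @ G @ b

print(np.sum(np.linalg.eigvalsh(G) > 0))    # d positive eigenvalues: signature (d, d)
print(g(K @ U, V) + g(K @ V, U))            # ~ 0: K is skew-symmetric for gt
print(g(K @ U, K @ V) + g(U, V))            # ~ 0: <KU, KV> = -<U, V>
```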
The volume form on $X\times Y$ induced by $\tilde{g}$ is $\sqrt{\abs{\det \tilde{g}}} = 2^{-d} \abs{\det D^2_{xy}c}$. To avoid writing the factor $2^{-d}$ everywhere we instead define \begin{equation} \label{eq:def-mt} \tilde{m} = \abs{\det D^2_{xy}c}. \end{equation} Due to the particular structure of $\tilde{g}$ several terms in the Christoffel symbols and the curvature tensor vanish. Indeed the only nonzero Christoffel symbols are \begin{equation}\label{EqChristoffelsKimMcCann} \Gammat^k_{ij} = c^{k{\bar m}}c_{{\bar m} ij}, \quad \Gammat^{\bar k}_{{\bar\imath}{\bar\jmath}} = c^{{\bar k} m}c_{m{\bar\imath}{\bar\jmath}}\,, \end{equation} and all the other combinations of barred and unbarred indices vanish. The Riemann curvature tensor is defined as \begin{equation} \label{eq:def-Rt} \hR(U, V)W = \hnabla_V \hnabla_U W - \hnabla_U\hnabla_V W - \hnabla_{[V, U]} W\,, \end{equation} following the sign convention of~\cite{kim2007continuity}, which is also the one of~\cite{ONeillBook}. The only components of the curvature tensor that do not vanish are those for which the number of barred and unbarred indices is equal and one has, for these terms \begin{equation} \label{EqCurvatureTensorRelations} \begin{aligned} & \hR_{ij{\bar k}{\bar\ell}} = 0\,,\\ & \hR_{i{\bar\jmath} {\bar k} \ell} = \frac12 \big(c_{i{\bar\jmath} {\bar k} \ell} - c_{i\ell \bar s }c^{\bar s t}c_{{\bar\jmath}{\bar k} t}\big)\,. \end{aligned} \end{equation} The other components follow by the standard symmetries of the curvature tensor. In particular we note that \begin{equation*} \hR_{i{\bar\jmath}{\bar k}\ell} = \hR_{i{\bar k}{\bar\jmath}\ell}\,, \end{equation*} which directly follows from~\eqref{EqCurvatureTensorRelations} or alternatively from the first Bianchi identity. As a direct consequence the Ricci curvature is \[ \hR_{i{\bar k}} = -2c^{{\bar\jmath}\ell}\hR_{i{\bar\jmath}{\bar k}\ell}\,, \] and vanishes when the number of barred and unbarred indices is not equal. 
The scalar curvature is then \begin{equation}\label{EqRicciHat} \tilde{R} = 8 c^{i {\bar\jmath}}c^{{\bar k}\ell} \tilde{R}_{i{\bar\jmath}{\bar k} \ell}\,. \end{equation} \subsection{Calculus on \texorpdfstring{$\Sigma$}{Sigma}} Let us now suppose that we have in addition to $c$ two functions $\phi(x)$ and $\psi(y)$ and that the function $u(x,y)=c(x,y)-\phi(x)-\psi(y)$ satisfies~\eqref{eq:c-div:ineq}--\eqref{eq:c-div:eq2}. In optimal transport language this says that $\phi$ and $\psi$ are $c$-conjugate. We assume that $\Sigma=\{(x,y) : u(x,y)=0\}$ is a smooth submanifold of $X\times Y$ that is the graph of a smooth map $X\to Y, x\mapsto y(x)$ as well as the graph of the inverse map $y(X)\to X, y'\mapsto x(y')$. Denote by $t_i$ the pushforward of the vector $e_i=\partial_i$ on $X$ by the embedding $\iota\colon x \mapsto (x,y(x))$, which gives \[ t_i = e_i + \partial_iy^{\bar\imath} e_{\bar\imath}. \] Note that $(t_i)$ is a basis of the tangent bundle $T\Sigma$. We then define $n_i = K(t_i)$, i.e. \[ n_i = e_i - \partial_iy^{\bar\imath} e_{\bar\imath}. \] Then $(n_i)$ gives a basis of the normal bundle $T^{\perp}\Sigma$ (this can be checked directly or using that $K$ is an involution). The inverse formulas read $e_i=\frac{t_i+n_i}{2}$, $e_{\bar\imath}=\frac{\partial x^i}{\partial y^{\bar\imath}}\frac{t_i-n_i}{2}$, so that a vector field $U$ on $X\times Y$ can be decomposed on $\Sigma$ into tangent and normal components as \begin{equation}\label{eq:ei-eib-ti-ni} U^ie_i+U^{\bar\imath} e_{\bar\imath} = \frac 12 \Big(U^i +\frac{\partial x^i}{\partial y^{\bar\imath}} U^{\bar\imath}\Big)t_i + \frac 12 \Big(U^i -\frac{\partial x^i}{\partial y^{\bar\imath}} U^{\bar\imath}\Big) n_i\,. \end{equation} We compute with coordinates on $\Sigma$ using the embedding $\iota$ defined above. Therefore tangent vector fields on $\Sigma$ are expressed in the frame $t_i\coloneqq d\iota(e_i)$, as usual with embeddings.
However we also have to deal with more complicated quantities such as normal vector fields, valued in the normal bundle $T^\perp\Sigma$. To streamline computations we adopt the following coordinate representation of these objects. \begin{notat}[Coordinate representation of normal field] \label{notation:normal} Whenever $N$ is a normal vector field, we define \[ N'=K(N), \] where $K$ is defined by~\eqref{eq:def-K}. Since $K$ maps $T^\perp\Sigma$ to $T\Sigma$, this turns $N$ into a tangent vector field $N'$. Then, we express $N'$ in coordinates, $N'=N'^k t_k$ and we systematically drop the prime and always write \[ N'=N^k t_k. \] Note that since $K=K^{-1}$ and $n_k=Kt_k$ we have $N=N^kn_k$. The reason we prefer $N'$ when computing with tensors in coordinates is that we can use classical formulas for the covariant derivative on $\Sigma$ (see for instance Section~\ref{sec:second-fundamental-form}). More general tensor fields $T$ that involve the normal bundle are expressed in coordinates in the same way: using $K$ when necessary we define a tensor $T'$ which only acts on $T\Sigma$ (and the cotangent space) and then express $T'$ in coordinates. For instance for the second fundamental form $h$ we define $h'(U,V)=Kh(U,V)$ and then write $h'(t_i,t_j)=:h_{ij}^kt_k$. \end{notat} We now present a set of ``computational rules'' valid on $\Sigma$. These are \begin{align} U^{\bar\imath} &=\partial_iy^{\bar\imath} U^i\quad\text{when $U$ is tangent},\label{eq:calculus:rule1}\\ u_{ij} &= -c_{i{\bar\jmath}} \partial_jy^{\bar\jmath}\,, \label{eq:calculus:rule2}\\ g_{ij} &= u_{ij}\,. \label{eq:calculus:rule3} \end{align} The first rule~\eqref{eq:calculus:rule1} says that when $U=U^ie_i+U^{\bar\imath} e_{\bar\imath}$ is a vector field on $X\times Y$ that is tangent to $\Sigma$, the object $\partial_iy^{\bar\imath}$ can be used to change an unbarred index to a barred index. \noindent The second rule~\eqref{eq:calculus:rule2} relates the three natural tensors of order $2$ on $\Sigma$. 
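These computational rules can be checked numerically on a concrete example. The sketch below (with the illustrative choice $u(x,y)=(e^x-y)^2/2$, $\phi=\psi=0$, so that $\Sigma$ is the graph $y(x)=e^x$) verifies~\eqref{eq:calculus:rule2} and~\eqref{eq:calculus:rule3} at a few points of $\Sigma$.

```python
import math

# Numerical check (d = 1) of rules (2)-(3): on Sigma,
# u_xx = -c_{x ybar} y'(x), and this value is the induced metric g = gt(t, t).
# Illustrative example: u = (e^x - y)^2 / 2, so y(x) = e^x.

def u(x, y):
    return (math.exp(x) - y) ** 2 / 2

s = 1e-3
def u_xx(x, y):
    return (u(x + s, y) - 2 * u(x, y) + u(x - s, y)) / (s * s)

def u_xy(x, y):            # mixed derivatives of u and c agree
    return (u(x + s, y + s) - u(x + s, y - s)
            - u(x - s, y + s) + u(x - s, y - s)) / (4 * s * s)

for x0 in (-0.5, 0.0, 0.7):
    y0 = math.exp(x0)      # point on Sigma
    yp = math.exp(x0)      # y'(x0)
    lhs = u_xx(x0, y0)                 # u_{xx} restricted to Sigma
    rhs = -u_xy(x0, y0) * yp           # rule (2): -c_{x ybar} y'
    a = -u_xy(x0, y0) / 2
    g_tt = a * 2 * 1.0 * yp            # gt(t, t) with t = (1, y'), rule (3)
    assert abs(lhs - rhs) < 1e-4 * max(1.0, abs(lhs))
    assert abs(lhs - g_tt) < 1e-4 * max(1.0, abs(lhs))
```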
Because $u\ge 0$ and $u=0$ on $\Sigma$, every point of $\Sigma$ is a minimum of $u$, so the derivative $u_i$ identically vanishes on $\Sigma$. Differentiating in the direction $t_j$ (tangent to $\Sigma$) we obtain $0=\hnabla_{t_j}u_i=u_{ij} + \partial_jy^{\bar\jmath} u_{i{\bar\jmath}}$, and note that $u_{i{\bar\jmath}}=c_{i{\bar\jmath}}$ since mixed derivatives of $u$ and $c$ are always equal. This shows~\eqref{eq:calculus:rule2}. \noindent The third rule~\eqref{eq:calculus:rule3} expresses the metric $g$ induced by $\tilde{g}$ on $\Sigma$ in terms of $u$. If $U=U^ie_i+U^{\bar\imath} e_{\bar\imath},V=V^je_j+V^{\bar\jmath} e_{\bar\jmath}$ are two tangent vector fields then \begin{align*} \tilde{g}(U,V) &= U^iV^{\bar\jmath} \tilde{g}(e_i,e_{\bar\jmath}) + U^{\bar\imath} V^j \tilde{g}(e_{\bar\imath},e_j) \\ &= U^i \partial_jy^{\bar\jmath} V^j (-\frac12 c_{i{\bar\jmath}}) + U^i \partial_iy^{\bar\imath} V^j (-\frac12 c_{{\bar\imath} j}) \\ &= u_{ij} U^iV^j\, \end{align*} where the last equality follows from~\eqref{eq:calculus:rule2} and the symmetry of $u_{ij}$. Therefore $u_{ij}(x,y(x)) = g_{ij}(x)$. We note that $g$ is a priori not necessarily definite. However, in the rest of the paper we assume that it is definite. This condition is the non-degeneracy condition in \cite{kim2007continuity}. \subsection{Second fundamental form and projected connections} \label{sec:second-fundamental-form} Viewing $\Sigma$ as a submanifold of $X\times Y$ leads to natural geometric objects: the second fundamental form, which measures the extrinsic curvature of $\Sigma$ embedded in $X\times Y$, and connections on the tangent bundle $T\Sigma$ as well as the normal bundle $T^\perp\Sigma$. We recall some basic submanifold theory and point to~\cite{ONeillBook} for a reference on the subject. Let $\hnabla$ denote the Levi-Civita connection on $(X\times Y,\tilde{g})$. Let $U$, $V$, $N$ be vector fields on $X\times Y$ such that $U$ and $V$ are tangent to $\Sigma$ and $N$ is normal to $\Sigma$.
Then on $\Sigma$ the covariant derivative $\hnabla_UV$ can be decomposed into \begin{equation}\label{EqSecondFundamentalFormFirst} \hnabla_UV = \nabla_UV + h(U,V), \end{equation} where $\nabla_UV$ and $h(U,V)$ denote the orthogonal projections of $\hnabla_UV$ onto the tangent bundle $T\Sigma$ and the normal bundle $T^\perp\Sigma$, respectively. Similarly we can decompose \[ \hnabla_U N = -A_N(U) + \nabla^\perp_U N, \] where the shape operator $A$ is valued in $T\Sigma$ and $\nabla^\perp$ is a connection on the normal bundle. \begin{definition}[$h$ and $H$] As defined by Formula~\eqref{EqSecondFundamentalFormFirst}, $h$ is called the \emph{second fundamental form}. The \emph{mean curvature} $H$ is a normal vector field defined as the trace of $h$ with respect to $g$. \end{definition} We list important known results, relevant for the rest of the paper. \begin{prop}[\cite{ONeillBook}] The tangent part $\nabla_UV$ of $\hnabla_UV$ is precisely the Levi-Civita connection of the induced metric $g$ on $\Sigma$. The second fundamental form is a symmetric bilinear form, which is valued in the normal bundle $T^\perp\Sigma$. The second fundamental form and the shape operator are manifestations of the same object since they satisfy the identity $\bracket{A_N(U),V} = \bracket{h(U,V),N}$, where $\bracket{-,-}=\tilde{g}(-,-)$. \end{prop} Let us now derive the expressions of $\nabla_UV$, $h(U,V)$ and $\nabla^\perp_U N$ in our framework. First we recall that our coordinate representations always refer to objects valued in the tangent bundle, as described in Notation~\ref{notation:normal}. Notably $H^kt_k=H'$ with $H'=KH$ and this implies that $H=H^kn_k$. Similarly $h_{ij}^kt_k=h'(t_i,t_j)$ with $h'=Kh$, so that \[ h(t_i,t_j)=h_{ij}^kn_k\,. \] In addition, it is often more convenient to work with the purely covariant version \begin{equation*} h'(U,V,W)\coloneqq\bracket{Kh(U,V),W}\,, \end{equation*} where $W$ is tangent and $\bracket{-,-}=\tilde{g}(-,-)$.
In that way the second fundamental form can be seen as a (scalar) trilinear form on $T\Sigma$. In coordinates we have \[ h'_{ijk}=h'(t_i,t_j,t_k)=\bracket{h_{ij}^\ell t_\ell,t_k} = h_{ij}^\ell g_{k\ell}. \] Thus we see that the index is lowered using the metric $g$ as usual, and from now on we systematically drop the prime and write $h_{ijk}= h_{ij}^\ell g_{k\ell}$. Importantly we don't need to worry about the placement of indices because of the following result, a direct consequence of~\eqref{eq:formula-uijk} in Lemma~\ref{lemma:derivatives-c-divergence}. \begin{prop}\label{prop:symmetryofh} $h'(U,V,W)$ is totally symmetric in $U, V, W$. \end{prop} Translated into the language of information geometry thanks to Wong and Yang's article \cite{wong2021pseudoriemannian} (see also the discussion at the beginning of Section~\ref{sec:KimMcCann}), $h'$ seems to be related to the so-called \emph{cubic tensor}, which is defined as the difference of the dual connections and is known to be symmetric~\cite{AmariBook}. We note that the cubic tensor seems to remain a bit of a mysterious quantity. Thus connecting it to the second fundamental form may be of interest to the information geometry community. We also record that $\tilde{g}(t_i,t_j)=g_{ij}$ (this is by definition) and $\tilde{g}(n_i,n_j)=-g_{ij}$ (this follows from~\eqref{eq:K-skew}). Thus in the basis $((t_i)_i,(n_i)_i)$ the Kim--McCann metric takes the form \begin{equation*} \begin{pmatrix} g_{ij} & 0\\ 0& -g_{ij}\end{pmatrix}. \end{equation*} We are now ready to state the main results of this section.
\begin{prop}\label{prop:connections} The second fundamental form is given by \begin{equation}\label{EqFormulaForh} h_{ij}^k=\frac 12 \Big(\tilde{\Gamma}_{ij}^k - \frac{\partial x^k}{\partial y^{\bar k}}\frac{\partial y^{\bar\imath}}{\partial x^i}\frac{\partial y^{\bar\jmath}}{\partial x^j} \tilde{\Gamma}_{{\bar\imath}{\bar\jmath}}^{\bar k} - \frac{\partial x^k}{\partial y^{\bar k}}\frac{\partial^2 y^{\bar k}}{\partial x^i\partial x^j}\Big)\,. \end{equation} The mean curvature is \begin{equation} \label{eq:def-H} H^k=u^{ij}h^k_{ij}. \end{equation} The Christoffel symbols for the Levi-Civita connection $\nabla$ on $\Sigma$ are \begin{equation}\label{EqConnection1} \Gamma_{ij}^k=\frac 12 \Big(\tilde{\Gamma}_{ij}^k + \frac{\partial x^k}{\partial y^{\bar k}}\frac{\partial y^{\bar\imath}}{\partial x^i}\frac{\partial y^{\bar\jmath}}{\partial x^j} \tilde{\Gamma}_{{\bar\imath}{\bar\jmath}}^{\bar k} + \frac{\partial x^k}{\partial y^{\bar k}}\frac{\partial^2 y^{\bar k}}{\partial x^i\partial x^j}\Big)\,. \end{equation} \end{prop} \begin{prop} \label{prop:K} The involution $K$ is parallel with respect to $\hnabla$, or in other words $K$ and $\hnabla$ commute in the sense that for any vector fields $U$ and $V$, $K(\hnabla_U V)=\hnabla_UK(V)$. In particular the normal connection can be obtained from the tangent connection (Levi-Civita on $\Sigma$): if $U$ is tangent, $N$ is normal and $V=K(N)$ then \begin{equation}\label{eq:K-commutes} \nabla_U^\perp N = K(\nabla_UV)\,. \end{equation} \end{prop} Before proving Prop.~\ref{prop:connections} and~\ref{prop:K}, let us explain how we will use~\eqref{EqFormulaForh} and~\eqref{eq:K-commutes}. Thanks to~\eqref{eq:K-commutes} we can differentiate normal vector fields as if they were tangent vector fields. In particular because we defined $n_j=K(t_j)$ we have \[ \nabla^\perp_{t_i} n_j=\nabla_{t_i}(Kt_j)=K(\nabla_{t_i}t_j)=K(\Gamma^k_{ij}t_k)=\Gamma^k_{ij}n_k\,. 
\] In other words, the Christoffel symbols for $\nabla^\perp$ and $\nabla$ are the same. This explains our choice of coordinate representation: we can do Ricci calculus as usual and write formulas such as \[ \nabla_iH^j = \partial_iH^j + \Gamma^j_{ik} H^k \] or \[ \nabla_\ell h_{ijk} = \partial_\ell h_{ijk}-(\Gamma^s_{k\ell} h_{ijs}+\Gamma^s_{j\ell} h_{isk}+\Gamma^s_{i\ell} h_{sjk})\,. \] As for~\eqref{EqFormulaForh}, it allows us to express second derivatives of $y(x)$ in terms of $h$: \begin{equation}\label{eq:ddy_h} \partial_{ij}y^{\bar k} = 2c^{{\bar k} m} h_{i j m} -c^{{\bar k} m} c_{m{\bar\imath} {\bar\jmath}} \partial_{i} y^{\bar\jmath} \partial_{j} y^{\bar\imath}+ c^{k {\bar m}} c_{i j {\bar m}} \partial_{k} y^{\bar k}\,, \end{equation} where we prefer to work with the $(0,3)$ version $h_{ijm}$. Summing~\eqref{EqFormulaForh} and~\eqref{EqConnection1} we also obtain \begin{equation} \label{eq:Gamma-Gammat-h} \Gamma^k_{ij} = \tilde{\Gamma}^k_{ij}-h^k_{ij} = c^{k{\bar m}} c_{ij{\bar m}} - h^k_{ij}\,. \end{equation} \begin{proof}[Proof of Prop.~\ref{prop:connections}] Let $U$ and $V$ be two vector fields on $X\times Y$ that are tangent to $\Sigma$. We have \begin{align*} \hnabla_U V&=U^i\hnabla_{e_i}( V^je_j+ V^{\bar\jmath} e_{\bar\jmath})+U^{\bar\imath}\hnabla_{e_{\bar\imath}}( V^je_j+ V^{\bar\jmath} e_{\bar\jmath})\\ &= U^i\partial_i V^je_j +U^i V^j\Gammat_{ij}^ke_k + U^i\partial_i V^{\bar\jmath} e_{\bar\jmath} \\ &\qquad + U^{\bar\imath}\partial_{\bar\imath} V^je_j+U^{\bar\imath}\partial_{\bar\imath} V^{\bar\jmath} e_{\bar\jmath} +U^{\bar\imath} V^{\bar\jmath}\Gammat_{{\bar\jmath}{\bar\imath}}^{\bar k} e_{\bar k}\,. \end{align*} Since $U$ and $ V$ are tangent to $\Sigma$ we have $U^{\bar\imath}=\partial_iy^{\bar\imath} U^i$ and $ V^{\bar\jmath}=\partial_jy^{\bar\jmath} V^j$ by~\eqref{eq:calculus:rule1}. Thus \[ U^i\partial_i V^j+U^{\bar\imath}\partial_{\bar\imath} V^j=U^i\partial_i\{ V^j(x,y(x))\}\,.
\] Moreover using~\eqref{eq:calculus:rule1} twice we write \begin{align*} U^i\partial_i V^{\bar\jmath}+U^{\bar\imath}\partial_{\bar\imath} V^{\bar\jmath} &= U^i\partial_i\{ V^{\bar\jmath}(x,y(x))\}= U^i\partial_i\{ \partial_jy^{\bar\jmath}(x)V^{j}(x,y(x))\}\\ &= \frac{\partial y^{\bar\jmath}}{\partial x^j}U^i\partial_i\{ V^j(x,y(x))\}+U^i V^j \frac{\partial^2 y^{\bar\jmath}}{\partial x^i\partial x^j}\,. \end{align*} Grouping terms we deduce that \begin{equation*} \hnabla_U V = U^i\partial_i\{ V^j(x,y(x))\} t_j + U^i V^j \frac{\partial^2 y^{\bar k}}{\partial x^i\partial x^j}e_{\bar k} + U^i V^j\Gammat_{ij}^ke_k + U^{\bar\imath} V^{\bar\jmath}\Gammat_{{\bar\imath}{\bar\jmath}}^{\bar k} e_{\bar k}\,. \end{equation*} By using~\eqref{eq:ei-eib-ti-ni} we can express each term in the frame $(t_i,n_j)$ and match against the desired expression \[ \hnabla_U V = U^i\partial_i\{ V^j(x,y(x))\} t_j + U^iV^j\Gamma^k_{ij}t_k + U^iV^jh^k_{ij}n_k\,. \] This gives~\eqref{EqFormulaForh} and~\eqref{EqConnection1}. As for~\eqref{eq:def-H} it directly follows from~\eqref{eq:calculus:rule3}. \end{proof} \begin{proof}[Proof of Prop.~\ref{prop:K}] The commuting property $K(\hnabla_UV)=\hnabla_UK(V)$ can be checked directly. Take $U=e_i$ and $V=ve_j$ for a scalar function $v$. Then $KV=V$ and \[ \hnabla_UV = (\partial_iv)e_j + v\tilde{\Gamma}^k_{ij}e_k\,. \] Therefore $K(\hnabla_UV)=\hnabla_UV$. When $V=ve_{\bar\jmath}$, we have $KV=-V$ and \[ \hnabla_UV = (\partial_iv)e_{\bar\jmath} + 0\,. \] Therefore $K(\hnabla_UV)=-\hnabla_UV$. A similar argument works for $U=e_{\bar\imath}$. Formula~\eqref{eq:K-commutes} then follows since $\nabla^\perp_U N$ is the normal component of $\hnabla_UN=K(\hnabla_UV)=K(\nabla_UV + h(U,V))$ whose normal component is $K(\nabla_UV)$ since $K$ maps $T\Sigma$ to $T^\perp\Sigma$ and vice versa. \end{proof} \subsection{Curvatures: the Gauss equation} The Gauss equation relates several intrinsic and extrinsic curvatures of the embedded manifold $\Sigma$.
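Before turning to the Gauss equation, the connection formula~\eqref{EqConnection1} admits a quick numerical consistency check in dimension $d=1$: the Christoffel symbol of the induced metric $g(x)=u_{xx}(x,y(x))$, computed directly as $g'/(2g)$, should match the one-dimensional reading of~\eqref{EqConnection1}, namely $\frac12\big(c_{xxy}/c_{xy} + y'\,c_{xyy}/c_{xy} + y''/y'\big)$. The function $u$ below is an arbitrary illustrative choice for which $\Sigma$ is the graph $y(x)=e^x$ and $h$ does not vanish.

```python
import math

# Numerical consistency check (d = 1) of the connection formula.
# Illustrative choice: u = (y - e^x)^2 (3 + sin x cos y)/2, so u >= 0 and
# Sigma is the graph y(x) = e^x (the positive factor does not move Sigma).

def u(x, y):
    return (y - math.exp(x)) ** 2 * (3 + math.sin(x) * math.cos(y)) / 2

s = 1e-3
def u_xx(x, y):
    return (u(x + s, y) - 2 * u(x, y) + u(x - s, y)) / (s * s)

def u_xy(x, y):
    return (u(x + s, y + s) - u(x + s, y - s)
            - u(x - s, y + s) + u(x - s, y - s)) / (4 * s * s)

x0 = 0.2
yx = math.exp                     # y(x), with y' = y'' = e^x
y0, yp, ypp = yx(x0), yx(x0), yx(x0)

# Left-hand side: Levi-Civita symbol of the induced metric g(x) = u_xx(x, y(x))
def g(x):
    return u_xx(x, yx(x))
gamma_lc = (g(x0 + s) - g(x0 - s)) / (2 * s) / (2 * g(x0))

# Right-hand side: the 1D reading of the connection formula, in terms of
# third derivatives of c (= u here, since mixed derivatives of u and c agree)
cxy = u_xy(x0, y0)
cxxy = (u_xy(x0 + s, y0) - u_xy(x0 - s, y0)) / (2 * s)
cxyy = (u_xy(x0, y0 + s) - u_xy(x0, y0 - s)) / (2 * s)
gamma_formula = 0.5 * (cxxy / cxy + yp * cxyy / cxy + ypp / yp)

assert abs(gamma_lc - gamma_formula) < 1e-2 * max(1.0, abs(gamma_lc))
```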
First let us define the curvature tensor intrinsic to $\Sigma$, \begin{equation} \label{eq:def-R} R(U,V)W = \nabla_V \nabla_UW - \nabla_U\nabla_VW - \nabla_{[V,U]}W\,. \end{equation} Note that it follows the same sign convention as~\eqref{eq:def-Rt}. The Gauss equation is~\cite{ONeillBook} \begin{equation} \label{eq:gauss-general} \bracket{\hR(U,V)W,Z}=\bracket{R(U,V)W,Z} + \bracket{h(U,Z), h(V,W)} - \bracket{h(V,Z), h(U,W)} \,, \end{equation} where $U,V,W,Z$ are any tangent vectors and $\bracket{-,-} = \tilde{g}(-,-)$. In coordinates we obtain the following result. \begin{lemma}[Gauss equation] \label{lemma:Gauss-equation} We have \begin{multline} \label{eq:formula-gauss-other} \hR_{i {\bar\jmath} {\bar k} \ell} \partial_j y^{\bar\jmath} \partial_k y^{\bar k}+\hR_{i {\bar\jmath} k {\bar\ell}} \partial_j y^{\bar\jmath} \partial_\ell y^{\bar\ell} + \hR_{{\bar\imath} j {\bar k} \ell} \partial_i y^{\bar\imath} \partial_k y^{\bar k}+\hR_{{\bar\imath} j k {\bar\ell}} \partial_i y^{\bar\imath} \partial_\ell y^{\bar\ell} \\ = R_{i j k \ell} + h_{i k s} h_{j \ell t} u^{s t} - h_{i \ell s} h_{j k t} u^{s t}\,. \end{multline} Contracting twice leads to the formula \begin{equation} \label{eq:formula-gauss} \hR_{i {\bar\jmath} k {\bar\ell}} u^{i k} u^{{\bar\jmath} {\bar\ell}} = - \frac18 \hR + \frac12 R - \frac12 \bracket{H,H} + \frac12 \bracket{h,h}\,. \end{equation} \end{lemma} \begin{proof} In formula~\eqref{eq:gauss-general} choose $U=t_i,V=t_j,W=t_k,Z=t_\ell$. Then writing $t_i=e_i+\partial_iy^{\bar\imath} e_{\bar\imath}$ and similarly for $t_j$, $t_k$ and $t_\ell$ we have \[ \bracket{\hR(t_i,t_j)t_k,t_\ell} = \hR_{i {\bar\jmath} {\bar k} \ell} \partial_j y^{\bar\jmath} \partial_k y^{\bar k}+\hR_{i {\bar\jmath} k {\bar\ell}} \partial_j y^{\bar\jmath} \partial_\ell y^{\bar\ell} + \hR_{{\bar\imath} j {\bar k} \ell} \partial_i y^{\bar\imath} \partial_k y^{\bar k}+\hR_{{\bar\imath} j k {\bar\ell}} \partial_i y^{\bar\imath} \partial_\ell y^{\bar\ell}\,.
\] As for the right-hand side of~\eqref{eq:gauss-general}, we have \begin{equation*} \bracket{h(U,Z), h(V,W)} = \bracket{h_{i\ell}^sn_s, h_{jk}^tn_t} = h_{i\ell}^sh_{jk}^t \bracket{n_s,n_t} = - h_{i\ell}^sh_{jk}^t u_{st}\,. \end{equation*} Lowering indices and repeating the argument for $\bracket{h(V,Z), h(U,W)}$ leads to~\eqref{eq:formula-gauss-other}. Next we perform the contraction by multiplying by $u^{ik} u^{j\ell}$. In the left-hand side of~\eqref{eq:formula-gauss-other} we obtain \begin{multline*} c^{i{\bar k}} c^{{\bar\jmath}\ell} \hR_{i {\bar\jmath} {\bar k} \ell} + u^{ik} u^{{\bar\jmath}{\bar\ell}} \hR_{i {\bar\jmath} k {\bar\ell}} + u^{{\bar\imath}{\bar k}}u^{j\ell} \hR_{{\bar\imath} j {\bar k} \ell} + c^{{\bar\imath} k} c^{j{\bar\ell}} \hR_{{\bar\imath} j k {\bar\ell}}\\ = 2 u^{ik} u^{{\bar\jmath}{\bar\ell}} \hR_{i {\bar\jmath} k {\bar\ell}} + 2c^{i{\bar k}} c^{{\bar\jmath}\ell} \hR_{i {\bar\jmath} {\bar k} \ell} = 2 u^{ik} u^{{\bar\jmath}{\bar\ell}} \hR_{i {\bar\jmath} k {\bar\ell}} + \frac14 \hR\,, \end{multline*} using the symmetries of $\hR$ and~\eqref{EqRicciHat}. The right-hand side of~\eqref{eq:formula-gauss-other} becomes \[ R + H_sH_tu^{st} - h_{i \ell s} h_{j k t} u^{s t}u^{ik} u^{j\ell} = R - \bracket{H,H} + \bracket{h,h}. \] We recall that the minus sign in front of the brackets occurs because $\bracket{n_s,n_t}=-u_{st}$. An alternative point of view is that following the convention outlined in Notation~\ref{notation:normal}, $H_s$ describes here $H'=KH$, and $H_sH_tu^{st}=\bracket{KH,KH}=\bracket{-KKH,H}=-\bracket{H,H}$. Similarly $\bracket{Kh,Kh}=-\bracket{h,h}$. \end{proof} \subsection{Various formulas} We collect below useful formulas for our geometric Laplace expansion. \begin{lemma}[Laplacian on $X \times Y$]\label{lemma:Laplacians} Let $f(x,y)$ be a scalar function. The Laplacian of $f$ with respect to $\tilde{g}$ is \begin{equation}\label{EqLaplacianXY} {\tilde\Delta} f = -4c^{{\bar\imath} j}\partial_{{\bar\imath} j}f\,.
\end{equation} \end{lemma} \begin{proof} The standard formula for the Hessian in coordinates is \[ \hnabla^2 f(e_\alpha,e_\beta) = \partial_{\alpha\beta}f - \tilde{\Gamma}_{\alpha\beta}^\gamma\partial_\gamma f\,, \] where Greek letters $\alpha,\beta,\dots$ denote either barred or unbarred indices. By~\eqref{EqChristoffelsKimMcCann}, \begin{equation*} [\hnabla^2 f] = \begin{pmatrix} \partial_{ij}f-\tilde{\Gamma}^k_{ij}\partial_kf & \partial_{i{\bar\jmath}} f\\ \partial_{{\bar\imath} j}f & \partial_{{\bar\imath}{\bar\jmath}}f-\tilde{\Gamma}^{\bar k}_{{\bar\imath}{\bar\jmath}}\partial_{\bar k} f \end{pmatrix}\,. \end{equation*} Contracting against the inverse of the Kim--McCann metric $\begin{pmatrix} 0 & -2c^{i{\bar\jmath}} \\ -2c^{{\bar\imath} j} & 0 \end{pmatrix}$ gives us~\eqref{EqLaplacianXY}. \end{proof} \begin{lemma}[Derivatives of $\tilde{m}$] \label{lemma:derivatives-mt} On $X\times Y$ we have the formulas \begin{equation} \label{eq:formula-dmt} \partial_i \tilde{m} = c^{j{\bar k}} c_{ij{\bar k}}\,\tilde{m}\,, \end{equation} and \begin{equation}\label{eq:formula-ddmt} \partial_{ij}\tilde{m} = \big(c^{k {\bar\ell}} c_{i j k {\bar\ell}} - c^{k {\bar n}} c^{{\bar\ell} m} c_{i k{\bar\ell} } c_{j m {\bar n}} + c^{k{\bar\ell}} c^{m{\bar n}} c_{ik{\bar\ell}} c_{jm{\bar n}}\big)\,\tilde{m}\,. \end{equation} \end{lemma} \begin{proof} The derivative of the determinant is given by the formula $\partial_\alpha\log\abs{\det \tilde{g}_{\beta\gamma}} = \tilde{g}^{\beta\gamma}\partial_\alpha\tilde{g}_{\beta\gamma}$, which yields the standard semi-Riemannian formula for the derivative of the metric volume form, \[ \partial_i\log\tilde{m} = \tilde{\Gamma}^j_{ij}\,. \] Note that here $\tilde{m}$ is equal to the volume form up to a multiplicative constant and thus satisfies the same formula. This gives us~\eqref{eq:formula-dmt}. Formula~\eqref{eq:formula-ddmt} follows from~\eqref{eq:formula-dmt} by taking a derivative.
\end{proof} \begin{lemma}[Derivatives of $u$] \label{lemma:derivatives-c-divergence} On $\Sigma$ we have the formulas \begin{equation}\label{eq:formula-uijk} u_{ijk} = -2h_{i j k}-(c_{{\bar\imath} j k} \partial_{i} y^{\bar\imath}+c_{i {\bar\jmath} k} \partial_{j} y^{{\bar\jmath}}+c_{i j {\bar k}} \partial_{k} y^{\bar k})\,, \end{equation} and \begin{multline} \label{eq:formula-uijkl} u_{i j k \ell} = -2\partial_\ell {h_{i j k}} - (c_{{\bar\imath} j k {\bar\ell}} \partial_i y^{\bar\imath} + c_{i {\bar\jmath} k {\bar\ell}} \partial_j {y^{\bar\jmath}} + c_{i j {\bar k} {\bar\ell}} \partial_k {y^{\bar k}}) \,\partial_\ell {y^{\bar\ell}}\\ - (c_{{\bar\imath} j k \ell} \partial_i y^{\bar\imath} + c_{i {\bar\jmath} k \ell} \partial_j y^{\bar\jmath} + c_{i j {\bar k} \ell} \partial_k y^{\bar k} + c_{i j k {\bar\ell}} \partial_\ell y^{\bar\ell} )\\ + \partial_\ell {y^{\bar\ell}} c^{{\bar s} t} (c_{{\bar\imath} {\bar\ell} t} c_{j k {\bar s}} \partial_i y^{\bar\imath} + c_{i k {\bar s}} c_{{\bar\jmath} {\bar\ell} t} \partial_j y^{\bar\jmath} + c_{i j {\bar s}} c_{{\bar k} {\bar\ell} t} \partial_k y^{\bar k} )\\ + u^{{\bar s} {\bar t}}(c_{i j {\bar s}} c_{k \ell {\bar t}} + c_{i k {\bar s}} c_{j \ell {\bar t}}+c_{i \ell {\bar s}} c_{j k {\bar t}}) - 2c^{{\bar s} t}(c_{i j {\bar s}} h_{k \ell t} + c_{i k {\bar s}} h_{j \ell t} + c_{j k {\bar s}} h_{i \ell t}) \,. \end{multline} \end{lemma} \begin{proof} When $(x,y)\in\Sigma$ we have the relation \[ u_{ij}(x,y)=-c_{i{\bar\jmath}}(x,y)\partial_jy^{\bar\jmath}(x)\,, \] see~\eqref{eq:calculus:rule2}. Differentiating both sides in the direction $t_k$, i.e. applying the operator $\hnabla_{t_k}=\partial_k + \partial_ky^{\bar k}\partial_{\bar k}$, we obtain in the left-hand side $u_{ijk} + \partial_ky^{\bar k} u_{ij{\bar k}}$. Since $u$ and $c$ only differ by functions of $x$ only and $y$ only their mixed derivatives always agree, so $u_{ij{\bar k}}=c_{ij{\bar k}}$. 
In the right-hand side we obtain various derivatives of $c$ and $y(x)$ and we use~\eqref{eq:ddy_h} to substitute second derivatives of the map $y(x)$. This leads to~\eqref{eq:formula-uijk}. Doing the same process again, we differentiate~\eqref{eq:formula-uijk} in direction $t_\ell$. In the left-hand side we obtain $u_{ijk\ell} + \partial_\ell y^{\bar\ell} c_{ijk{\bar\ell}}$ and in the right-hand side we obtain derivatives of various quantities. We systematically replace second derivatives of $y(x)$ by $h$ quantities thanks to~\eqref{eq:ddy_h}. For completeness we also verified formulas~\eqref{eq:formula-uijk} and~\eqref{eq:formula-uijkl} using the symbolic algebra program Cadabra~\cite{peeters2007,cadabra,peeters2007arXiv} which specializes in symbolic tensor computations.\footnote{Code available at \url{https://github.com/flavienleger/geometric-laplace}.} \end{proof} Let $f(x,y)$ be a scalar function. The gradient of $f$ with respect to $\tilde{g}$ is the vector field $\tilde{G}$ defined by $\tilde{g}(\tilde{G},U)=\hnabla_Uf$ for any vector field $U$. On $\Sigma$, we can decompose \begin{equation*} \tilde{G}=G+N\,, \end{equation*} where $G$ and $N$ are tangent and normal vector fields, respectively. Following our convention to only work with coordinates on $T\Sigma$ we then define $N'=KN$. $G$ and $N'$ are expressed in coordinates as \[ G=G^i t_i, \quad N'=N^i t_i. \] We also note that $G$ is the gradient of $f$ on $(\Sigma,g)$. Sometimes, when the distinction between vectors and covectors is not so important we write \begin{equation}\label{eq:def-G-N} \hnabla f = \tilde{G}, \quad \nabla f=G\quad\text{and}\quad\nabla^N\!f = N. 
\end{equation} \begin{lemma}[Derivatives of $f$] \label{lemma:derivatives-f} On $\Sigma$ we have the formulas \begin{equation} \label{eq:formula-dif} \partial_if = \frac12 G_i - \frac12 N_i \end{equation} and \begin{equation} \label{eq:formula-dijf} \partial_{i j} f = -\partial_{{\bar\imath} j}{f} \partial_{i} y^{\bar\imath}+\frac{1}{2}\partial_{i} G_{j} - \frac{1}{2}\partial_{i} N_{j}\,. \end{equation} \end{lemma} \begin{proof} For any vector field $U$, $\hnabla_Uf = \bracket{G+N,U}$, denoting $\bracket{-,-} = \tilde{g}(-,-)$. Taking $U=e_i$, we have \[ \partial_i f = \hnabla_{e_i} f = \bracket{G,e_i} + \bracket{KN', e_i} = \bracket{G,e_i} - \bracket{N', Ke_i}\,. \] Note that $Ke_i=e_i$. Also in the basis $(t,n)$ we have $e_i = \frac12 (t_i+n_i)$ and since $G$ and $N'$ are tangent, \[ \bracket{G,e_i} - \bracket{N', Ke_i} = \bracket{G,\frac12 t_i} - \bracket{N', \frac12 t_i}\,. \] We deduce that \[ \partial_if = \frac12 G^j \bracket{t_j,t_i} - \frac12 N^j \bracket{t_j, t_i} = \frac12 G^j g_{ij}- \frac12 N^j g_{ij}\,. \] This proves~\eqref{eq:formula-dif}. To obtain~\eqref{eq:formula-dijf}, we keep in mind that in formula~\eqref{eq:formula-dif} the quantity $\partial_if$ is a function of $(x,y)$ while $G_i$ and $N_i$ are functions of $x$ (they are only defined on $\Sigma$ and read through the embedding $x\mapsto y(x)$). Therefore~\eqref{eq:formula-dif} should be understood as \[ \partial_if(x,y(x)) = \frac12 G_i(x) - \frac12 N_i(x)\,. \] Differentiating with respect to $x^j$ leads to the desired result, after switching indices $i,j$. \end{proof} \section{Geometric Laplace expansion} \label{sec:geometric_laplace} \subsection{The main result} Let $X$ and $Y$ be two domains of ${{\mathbb R}^d}$ and let $u$ be a nonnegative function on the product space $X\times Y$, such that $(X,Y,u)$ satisfies Assumption~\ref{assumption:XYu} below. 
The Kim--McCann geometry induced by $u$ provides the following structures: a pseudo-Riemannian metric $\tilde{g}$ over $X\times Y$ equipped with a special mapping $K$ called a para-complex structure, and a submanifold theory for the vanishing set of $u$. This material is presented in Section~\ref{sec:KimMcCann}. We note that the Euclidean structure of $X$ and $Y$ inherited from ${{\mathbb R}^d}$ plays in itself no role in our geometric framework. Thus $X$ and $Y$ could be more general $d$-dimensional smooth manifolds. \begin{table}[h] \centering \begin{tabular}{c|c|c} \label{table:geometric-quantities} & Quantity & Defined by...\\ \hline On $X\times Y$ & $\tilde{g}$ & \eqref{def:Kim-McCann} \\ & $K$ & \eqref{eq:def-K}\\ & $\tilde{m}$ & \eqref{eq:def-mt}\\ & $\hnabla$ & Levi-Civita connection\\ & ${\tilde\Delta}$ & \eqref{EqLaplacianXY}\\ & $\hR$ & \eqref{EqRicciHat}\\ \hline On $\Sigma$ & $R$ & \eqref{eq:def-R} \\ & $h$ & \eqref{EqSecondFundamentalFormFirst}\\ & $H$ & \eqref{eq:def-H}\\ & $\nabla f,\nabla^N\!f$ & \eqref{eq:def-G-N} \end{tabular} \caption{Geometric quantities} \end{table} From the Kim--McCann pseudo-metric can be derived a number of geometric quantities which appear in our Laplace formula. They are defined in Section~\ref{sec:KimMcCann} and listed in Table~\ref{table:geometric-quantities}. \begin{assumption}[Assumptions on $X,Y,u$] \label{assumption:XYu} \leavevmode \begin{enumerate}[(i)] \item \label{assumption:XYu:XY} $X$ and $Y$ are open subsets of ${{\mathbb R}^d}$ with smooth boundaries or no boundaries and $u$ is a nonnegative measurable function over $X\times Y$. \item\label{assumption:XYu:Sigma} The vanishing set $\Sigma=\{(x,y)\in X\times Y : u(x,y)=0\}$ is the graph $(x,y(x))$ of a map $y\colon X\to Y$ which is a $C^3$-diffeomorphism onto its image. \item\label{assumption:XYu:ball} There exists $\delta>0$ such that $Y$ contains the ball $B(y(x),\delta)$ for all $x\in X$. 
We then define a tubular neighborhood of $\Sigma$, \[ \Sigma_\delta\coloneqq \{(x,y')\in X\times Y : y'\in B(y(x),\delta)\}. \] \item\label{assumption:XYu:regularity} $u\in C^6(\Sigma_\delta)$. \item \label{assumption:XYu:lowerbound} There exists $\lambda>0$ such that \begin{align*} u(x,y') &\ge \frac\lambda 2\abs{y'-y(x)}^2 \quad\text{for all $(x,y')\in \Sigma_\delta,$} \\ u(x,y') &\ge \frac\lambda 2\delta^2 \quad\text{for all $(x,y')\in (X\!\times\! Y)\setminus \Sigma_\delta.$} \end{align*} Here $\abs{\cdot}$ denotes the Euclidean norm in $Y$. \end{enumerate} \end{assumption} Let us make a few comments on these assumptions. About~\ref{assumption:XYu:XY}, note that the boundaries of $X$ and $\Sigma$ are in a one-to-one correspondence via the map $y(x)$. We ask for $\Sigma$ to have a smooth boundary since the Laplace formula~\eqref{eq:mainthm} contains a boundary term integrated over $\partial\Sigma$. As for $Y$, it doesn't in fact need to have a smooth boundary. In~\ref{assumption:XYu:Sigma}, we only ask for the map $y(x)$ to be a diffeomorphism onto its image. Indeed we should have the freedom to extend the space $Y$ if we so wish (while keeping $X$ fixed), since in the Laplace method only the neighborhood of the points $y(x)$ really plays a role. Finally, \ref{assumption:XYu:ball}, \ref{assumption:XYu:regularity} and \ref{assumption:XYu:lowerbound} are roughly the counterparts of Assumption~\ref{ass:ux-localized}\ref{ass:ux-localized:ball}, \ref{ass:ux-localized:regularity} and~\ref{ass:ux-localized:bound-below} respectively. Before we state our main result, we define the norm \[ \norm{r}_{L^1_xW^{4,\infty}_y(\Sigma_\delta)} = \int_X \norm{r(x,\cdot)}_{W^{4,\infty}(B(y(x),\delta))}\,dx, \] where $W^{4,\infty}$ stands for the usual Sobolev space. \begin{theorem} \label{thm:laplace} Suppose that $X$, $Y$ and $u$ satisfy Assumption~\ref{assumption:XYu} and let $r\in L^1_xW^{4,\infty}_y(\Sigma_\delta)\cap L^1(X\times Y)$.
Then there exists a constant $C>0$ such that for all $\varepsilon>0$, \begin{multline} \label{eq:mainthm} \iint_{X\times Y}\frac{e^{-u(x,y)/\varepsilon}}{(2\pi\varepsilon)^{d/2}} \,dr(x,y) = \int_{\Sigma} fdm \,+ \\ \varepsilon\int_\Sigma\Big[- \frac18 {\tilde\Delta} f +\frac14 \hnabla_{\!H} f + f \Big(\frac{3}{32}{\hR} - \frac{1}{8}R + \frac{1}{24}\bracket{h,h} -\frac{1}{8}\bracket{H,H} \Big)\Big] \,dm \\ + \varepsilon\int_{\partial\Sigma} \frac14 \bracket{\nabla f - K\nabla^N\!f + fKH, \nu}\,d\sigma + \varepsilon^2\mathcal{R}(\varepsilon), \end{multline} with $f\coloneqq dr/d\tilde{m}$ and with \[ \abs{\mathcal{R}(\varepsilon)} \le C \big(\norm{r}_{L^1_xW^{4,\infty}_y(\Sigma_\delta)} + \norm{r}_{L^1(X\times Y)}\big)\,. \] The constant $C$ depends on $\lambda$, $\delta$, $d$ and $\norm{D^ku}_{L^{\infty}(\Sigma_\delta)}$ for $3\le k\le 6$. In the boundary term, $\nu$ is the outer normal and $\sigma$ is the volume form induced by $g$ on $\partial\Sigma$. \end{theorem} In~\eqref{eq:mainthm}, $r$ should be seen as a test function, i.e. a smooth function we integrate against in order to understand $e^{-u(x,y)/\varepsilon}$. Geometrically $r$ is a volume form over $X\!\times\! Y$, which is why we write it as $dr(x,y)$. Then on the right-hand side, $f$ is a scalar function defined as the ratio of two $2d$-forms ($2d$ is the dimension of $X\!\times\! Y$). Observe that $f$ and its derivatives only play a role on $\Sigma$. Therefore we only need to define $f$ on the tubular neighborhood $\Sigma_\delta$. Since $u$ is $C^6$ on $\Sigma_\delta$, the quantity $\tilde{m}=\abs{\det D^2_{xy}u}$ is well-defined on $\Sigma_\delta$ and thus so is $f$. The geometric quantities that appear in~\eqref{eq:mainthm} can be looked up in Table~\ref{table:geometric-quantities}. The brackets $\bracket{-,-}$ denote the pseudo metric $\tilde{g}(-,-)$. Since $\tilde{g}$ is a non-degenerate bilinear form it extends to tensors of any given type, and can therefore be applied to $h$. 
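The structure of~\eqref{eq:mainthm} can be illustrated on its scalar one-variable analogue, which is the building block used in the proof below. The sketch takes $u(x)=1-\cos x$, an arbitrary choice with a single nondegenerate minimum at $0$, where $u''(0)=1$, $u'''(0)=0$ and $u''''(0)=-1$; for this $u$ the expansion predicts the value $1+\varepsilon/8+O(\varepsilon^2)$, and we recover both the leading term and the first correction numerically.

```python
import math

def u(x):
    # 1D phase with a single nondegenerate minimum at x = 0:
    # u(0) = 0, u''(0) = 1, u'''(0) = 0, u''''(0) = -1
    return 1.0 - math.cos(x)

def laplace_integral(eps, n=4000):
    # trapezoidal rule on [-pi, pi]; the integrand is smooth and periodic,
    # so the quadrature error is negligible compared to the eps-expansion
    a, b = -math.pi, math.pi
    s = 0.0
    for k in range(n + 1):
        x = a + (b - a) * k / n
        w = 0.5 if k in (0, n) else 1.0
        s += w * math.exp(-u(x) / eps)
    s *= (b - a) / n
    return s / math.sqrt(2 * math.pi * eps)

eps = 0.01
val = laplace_integral(eps)

# leading order: 1 / sqrt(u''(0)) = 1
assert abs(val - 1.0) < 0.01
# first correction: with u''' = 0, the bracket reduces to -(1/8) u''''(0) = 1/8
assert abs((val - 1.0) / eps - 0.125) < 0.01
```

For this particular $u$ the integral is explicit, $e^{-1/\varepsilon}\,2\pi I_0(1/\varepsilon)/\sqrt{2\pi\varepsilon} = 1 + \varepsilon/8 + 9\varepsilon^2/128 + \dots$, which matches the computed value; of course this check illustrates only the scalar expansion, not the geometric terms of~\eqref{eq:mainthm}.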
\begin{remark} Formula \eqref{eq:mainthm} suggests that $u$ not only induces the Kim--McCann metric but also the volume form denoted by $\tilde{m}$, which appears in the definition of $f$; these two natural objects are only needed in a neighborhood of $\Sigma$. In particular, the measure $e^{-u(x,y)/\varepsilon} d\tilde{m}(x,y)$ is defined without any reference to the volume form chosen in the Laplace formula and it can be integrated against the function $f$. \end{remark} \paragraph{A convergence of measures.} Writing $r=f\tilde{m}$ and viewing $f$ as a scalar test function, Theorem~\ref{thm:laplace} can be interpreted as the measure $\mu_\varepsilon\coloneqq (2\pi\varepsilon)^{-d/2} e^{-u/\varepsilon} \tilde{m}$ converging towards a distribution concentrated on $\Sigma$. More precisely, \[ \frac{1}{\varepsilon}\bigg\{\mu_\varepsilon - m\delta_\Sigma - \varepsilon\Big[-\frac 18{\tilde\Delta}+ \frac 14 \hnabla_{\!H}+ \Big(\text{curvatures}\Big)\Big] m\delta_\Sigma\bigg\} \to 0, \] as $\varepsilon\to0$, where $\delta_\Sigma$ denotes the Dirac measure supported on $\Sigma$. The above convergence certainly holds in the sense of distributions (i.e. against smooth test functions with compact support). In view of the remainder term $\mathcal{R}(\varepsilon)$ it also holds in a certain dual Sobolev space that we don't wish to make explicit. When $f=1$, i.e. $r=\tilde{m}$, the $\varepsilon$-term on the right-hand side of~\eqref{eq:mainthm} simplifies into a mix of intrinsic and extrinsic curvature terms. These terms are strongly reminiscent of the \emph{second variation formula}, which describes how the volume of a family of submanifolds $\Sigma_t$ changes around $\Sigma\coloneqq\Sigma_0$ (see for instance~\cite{Simons1968}). Note that the volume of $\Sigma_t$ is the total mass of the measure $\delta_{\Sigma_t}$ and that the left-hand side of~\eqref{eq:mainthm} is the total mass of $\mu_\varepsilon$ (assuming $r=\tilde{m}$).
Thus our result could be understood as a variation formula around $\Sigma$, but where the variation of $\Sigma$ consists of smoothed-out measures $\mu_\varepsilon$ instead of neighboring surfaces $\Sigma_t$. \subsection{Proof of the main result} \begin{proof}[Proof of Theorem~\ref{thm:laplace}] Write $r(x,y) = f(x,y)\tilde{m}(x,y)$ and let \[ I(\varepsilon) = \int_X\int_Y \frac{e^{-u(x,y)/\varepsilon}}{(2\pi\varepsilon)^{d/2}}r(x,y)\,dxdy. \] In order to obtain a Laplace expansion of $I(\varepsilon)$, we proceed for each $x\in X$ to do the Laplace expansion of $\int_Y \frac{e^{-u(x,y)/\varepsilon}}{(2\pi\varepsilon)^{d/2}}r(x,y)\,dy$, using Corollary~\ref{cor:quantitativelaplace}. Here comes a small notational problem: $\int_Y \frac{e^{-u(x,y)/\varepsilon}}{(2\pi\varepsilon)^{d/2}}r(x,y)\,dy$ is an integral over $y$, while Corollary~\ref{cor:quantitativelaplace} uses $x$. We prefer to keep the same variable names, since the choice of variable also determines how we write derivatives: $y$ derivatives use barred indices and $x$ derivatives use unbarred indices. Therefore for the entirety of this proof, we switch the roles of $X$ and $Y$. Assumption~\ref{assumption:XYu} is adjusted as follows: \ref{assumption:XYu:Sigma} $\Sigma$ is the graph of a function $x(y)$ and \ref{assumption:XYu:lowerbound} $u(x',y)\ge \frac\lambda 2 \abs{x'-x(y)}^2$. We therefore freeze $y\in Y$ and do the Laplace expansion of $\int_X \frac{e^{-u(x,y)/\varepsilon}}{(2\pi\varepsilon)^{d/2}}r(x,y)\,dx$. The conditions put in place in Assumption~\ref{assumption:XYu} directly correspond to Assumption~\ref{ass:ux-localized}, which is needed to apply Corollary~\ref{cor:quantitativelaplace}. In particular note that $x(y)$ is the unique minimizer of $x'\mapsto u(x',y)$ and it corresponds to $x_*$ in Corollary~\ref{cor:quantitativelaplace}.
Combining the obtained expansions for each $y$ we have \begin{multline*} I(\varepsilon) = \int_Y \Big(\frac{1}{\sqrt{\det[u_{ij}(x(y),y)]}}\Big[r + \varepsilon\Big(\frac 12 u^{ij}\partial_{ij}r-\frac 12 u_{jk\ell}u^{ij}u^{k\ell}\partial_ir \\ + \frac 18 ru_{ijk}u_{\ell mn}u^{ij}u^{k\ell}u^{mn}+\frac{1}{12} r u_{ijk}u_{\ell mn} u^{i\ell} u^{jm} u^{kn} \\- \frac 18 r u_{ijk\ell}u^{ij}u^{k\ell}\Big)\Big]_{(x(y),y)}+ \varepsilon^2\,\mathcal{R}(\varepsilon,y)\Big)\,dy, \end{multline*} where $\mathcal{R}$ a priori depends on $y$ and satisfies the bound \[ \abs{\mathcal{R}(\varepsilon,y)}\le C\,\big(\norm{r(\cdot,y)}_{W^{4,\infty}(B(x(y), \delta))} + \norm{r(\cdot,y)}_{L^1(X)}\big). \] The constant $C$ depends on $d$, $\lambda$, $\delta$ and $\norm{D^k_xu(\cdot,y)}_{L^{\infty}(B(x(y), \delta))}$ for $3\le k\le 6$, which we further bound by $\norm{D^ku}_{L^{\infty}(\Sigma_\delta)}$, a quantity that does not depend on $y$. We expand the brackets and break down \[ I(\varepsilon)=:I_0 + \varepsilon I_1 + \varepsilon^2I_2(\varepsilon). \] We interpret $I_0 + \varepsilon I_1$ as an integral over $\Sigma$ parametrized by $Y$ via $y\mapsto (x(y),y)$. In order to reveal the volume form $\bar m=\sqrt{\det[u_{{\bar\imath}{\bar\jmath}}]}$ induced by the Riemannian metric in $y$ coordinates, we take determinants in the identity $u_{ij}=c_{i{\bar\jmath}}c_{{\bar\imath} j} u^{{\bar\imath}{\bar\jmath}}$ to obtain \[ \frac{1}{\sqrt{\det[u_{ij}]}} = \frac{\bar m}{\tilde{m}}\,. \] Therefore \begin{multline*} I_0 + \varepsilon I_1 = \int_\Sigma \Big[\frac{r}{\tilde{m}} + \varepsilon\frac{1}{\tilde{m}}\Big(\frac 12 u^{ij}\partial_{ij}r-\frac 12 u_{jk\ell}u^{ij}u^{k\ell}\partial_ir + \frac 18 ru_{ijk}u_{\ell mn}u^{ij}u^{k\ell}u^{mn}\\ +\frac{1}{12} r u_{ijk}u_{\ell mn} u^{i\ell} u^{jm} u^{kn} - \frac 18 r u_{ijk\ell}u^{ij}u^{k\ell}\Big)\Big]\,dm.
\end{multline*} Note that we slightly abused notation by writing $\bar m$ for the ``coordinate expression'' $\bar m(y)dy$ and $m$ for the more abstract geometric volume form on $\Sigma$. \paragraph{Term $I_0$.} Since $r=f\tilde{m}$ we have \[ I_0 = \int_\Sigma f\,dm. \] \paragraph{Term $I_1$.} Let us now simplify \begin{multline*} I_1 = \int_\Sigma \Big(\frac 12 u^{ij}\frac{\partial_{ij}(f\tilde{m})}{\tilde{m}} - \frac 12 u_{jk\ell}u^{ij}u^{k\ell}\frac{\partial_i(f\tilde{m})}{\tilde{m}} + \\ \frac 18 fu_{ijk}u_{\ell mn}u^{ij}u^{k\ell}u^{mn}+\frac{1}{12} f u_{ijk}u_{\ell mn} u^{i\ell} u^{jm} u^{kn} \\ -\frac 18 f u_{ijk\ell}u^{ij}u^{k\ell}\Big)\,dm, \end{multline*} which we write as \[ I_1 =: \int_\Sigma L\,dm. \] We want to express $L$ using only geometric objects (metric, covariant derivative, curvature), either intrinsic or extrinsic to $\Sigma$. We break down $L$ into \begin{equation*} \begin{aligned} L_1 &= \frac12 u^{ij} \frac{\partial_{ij}(f\tilde{m})}{\tilde{m}},\\ L_2 &= -\frac12 u_{jk\ell}u^{ij}u^{k\ell}\frac{\partial_i(f\tilde{m})}{\tilde{m}},\\ L_3 &= \frac{1}{8} f u_{ijk}u_{\ell mn}u^{ij}u^{k\ell}u^{mn} , \\ L_4 &= \frac{1}{12}fu_{ijk}u_{\ell mn} u^{i\ell} u^{jm} u^{kn}, \\ L_5 &= -\frac{1}{8} f u_{ijk\ell}u^{ij}u^{k\ell}, \end{aligned} \end{equation*} so that $L = L_1 + L_2 + L_3 + L_4 + L_5$. Each of these five terms is then computed by tedious but straightforward calculations.
Since some of the formulas involved can become lengthy we have also checked them using the symbolic algebra program Cadabra~\cite{peeters2007,cadabra,peeters2007arXiv} which specializes in symbolic tensor computations.\footnote{Code available at \url{https://github.com/flavienleger/geometric-laplace}.} Using the product rule where necessary, we replace the quantities $\partial_{ij}f$, $\partial_if$, $\partial_{ij}\tilde{m}$, $\partial_i\tilde{m}$, $u_{ijk}$, $u_{ijk\ell}$ by their expressions in formulas~\eqref{eq:formula-dijf}, \eqref{eq:formula-dif}, \eqref{eq:formula-ddmt}, \eqref{eq:formula-dmt}, \eqref{eq:formula-uijk} and~\eqref{eq:formula-uijkl}. We obtain after simplification the following expressions. \begin{multline*} L_1 = \frac14 u^{i j}\partial_{i}{G_{j}} - \frac14 u^{i j}\partial_{i}{N_{j}} + \frac12 c^{i {\bar\jmath}}\partial_{i {\bar\jmath}}{f} + \frac12 c^{{\bar\imath} j} u^{k \ell} c_{{\bar\imath} j k} G_\ell - \frac12 c^{{\bar\imath} j} u^{k \ell} c_{{\bar\imath} j k} N_\ell\\ + f \Big(\frac12 c^{{\bar\imath} j} u^{k \ell} c_{{\bar\imath} j k \ell} + \frac12 c^{i {\bar k}} c^{m {\bar n}} u^{j \ell} c_{i j {\bar k}} c_{\ell m{\bar n}} - \frac12 c^{i {\bar n}} c^{{\bar k} m} u^{j \ell} c_{i j {\bar k}} c_{\ell m {\bar n}}\Big). \end{multline*} \begin{multline*} L_2 = \Big(- \frac14 c^{{\bar\imath} \ell} u^{j k} c_{{\bar\imath} j k} - \frac12 c^{{\bar\imath} j} u^{k \ell} c_{{\bar\imath} j k} + \frac12 u^{i j} u^{k \ell} h_{i j k}\Big) (G_\ell - N_\ell)\\ + f \Big(c^{i {\bar k}} u^{j \ell} u^{m n} c_{i j {\bar k}} h_{\ell m n} - c^{i {\bar k}} c^{m {\bar n}} u^{j \ell} c_{i j {\bar k}} c_{\ell m {\bar n}} - \frac12 c^{i {\bar n}} c^{j {\bar k}} u^{\ell m} c_{i j {\bar k}} c_{\ell m {\bar n}} \Big). 
\end{multline*} \begin{multline*} L_3 = f \Big(\frac12 c^{i {\bar k}} c^{j {\bar n}} u^{\ell m} c_{i j {\bar k}} c_{\ell m {\bar n}} + \frac12 c^{j {\bar k}} c^{m {\bar n}} u^{i\ell} c_{i j {\bar k}} c_{\ell m {\bar n}} + \frac18 u^{i j} u^{\ell m} u^{{\bar k} {\bar n}} c_{i j {\bar k}} c_{\ell m {\bar n}} \\ - c^{j {\bar k}} u^{i \ell} u^{m n} c_{i j {\bar k}} h_{\ell m n} - \frac12 c^{{\bar k} \ell} u^{i j} u^{m n} c_{i j {\bar k}} h_{\ell m n} + \frac12 u^{i j} u^{k \ell} u^{m n} h_{i j k} h_{\ell m n} \Big). \end{multline*} \begin{multline*} L_4 = f \Big( \frac12 c^{j {\bar n}} c^{{\bar k} m} u^{i \ell} c_{i j {\bar k}} c_{\ell m {\bar n}} + \frac14 u^{i \ell} u^{j m} u^{{\bar k} {\bar n}} c_{i j {\bar k}} c_{\ell m {\bar n}} - c^{{\bar k} n} u^{i \ell} u^{j m} c_{i j {\bar k}} h_{\ell m n} + \frac13 u^{i \ell} u^{j m} u^{k n} h_{i j k} h_{\ell m n} \Big). \end{multline*} \begin{multline*} L_5 = f \Big( \frac14 u^{i j} u^{k \ell} \partial_i h_{j k \ell} + \frac14 c^{i {\bar\jmath}} c^{k {\bar\ell}} c_{i {\bar\jmath} k {\bar\ell}} - \frac12 c^{{\bar\imath} j} u^{k \ell} c_{{\bar\imath} j k \ell} + \frac18 u^{i j} u^{{\bar k} {\bar\ell}} c_{i j {\bar k} {\bar\ell}}\\ - \frac14 c^{i {\bar\ell}} c^{j {\bar m}} c^{{\bar k} n} c_{i j {\bar k}} c_{{\bar\ell} {\bar m} n} - \frac18 c^{{\bar k} \ell} u^{i j} u^{{\bar m} {\bar n}} c_{i j {\bar k}} c_{\ell {\bar m} {\bar n}} - \frac18 u^{i j} u^{{\bar k} {\bar\ell}} u^{m n} c_{i j {\bar k}} c_{{\bar\ell} m n} - \frac14 u^{i \ell} u^{j m} u^{{\bar k} {\bar n}} c_{i j {\bar k}} c_{\ell m {\bar n}} \\ + \frac14 c^{{\bar k} \ell} u^{i j} u^{m n} c_{i j {\bar k}} h_{\ell m n} + \frac12 c^{{\bar\imath} \ell} u^{j m} u^{k n} c_{{\bar\imath} j k} h_{\ell m n} \Big). 
\end{multline*} Summing $L_1$ through $L_5$, many terms cancel out and after simplification we are left with \begin{multline*} L = \frac14 u^{i j} \partial_{i}{G_{j}} - \frac14 u^{i j} \partial_{i}{N_{j}} + \frac12 c^{i {\bar\jmath}} \partial_{i {\bar\jmath}} f - \frac14 c^{{\bar k} \ell} u^{i j} c_{i j {\bar k}} G_\ell + \frac12 G_{i} h_{j k \ell} u^{i j} u^{k \ell} + \frac14 N_{i} c^{i {\bar\jmath}} u^{k \ell} c_{{\bar\jmath} k \ell} \\ - \frac12 N_{i} h_{j k \ell} u^{i j} u^{k \ell} + f \left(\frac14 \partial_{i}{h_{j k \ell}} u^{i j} u^{k \ell} + \frac14 c^{i {\bar\jmath}} c^{k {\bar\ell}} c_{i {\bar\jmath} k {\bar\ell}} + \frac{1}{8} u^{i k} u^{{\bar\jmath} {\bar\ell}} c_{i {\bar\jmath} k {\bar\ell}} - \frac14 c^{i {\bar\imath}} c^{j {\bar\jmath}} c^{k {\bar k}} c_{i j {\bar k}} c_{{\bar\imath} {\bar\jmath} k} \right.\\ - \frac{1}{8}c^{i {\bar\imath}} c_{i {\bar\jmath} {\bar k}} c_{{\bar\imath} j k} u^{j k} u^{{\bar\jmath} {\bar k}} - \frac14 c^{i {\bar\imath}} c_{{\bar\imath} j k} h_{i \ell m} u^{j k} u^{\ell m} - \frac12 c^{i {\bar\imath}} c_{{\bar\imath} j k} h_{i \ell m} u^{j \ell} u^{k m}+\frac12 h_{i j k} h_{\ell m n} u^{i j} u^{k \ell} u^{m n}\\ \left.+\frac{1}{3}h_{i j k} h_{\ell m n} u^{i \ell} u^{j m} u^{k n}\right). \end{multline*} We now replace the partial derivatives $\partial_iG_j,\partial_iN_j,\partial_ih_{jk\ell}$ with covariant derivatives. Importantly, recall our convention outlined in Notation~\ref{notation:normal}: coordinate expressions always represent quantities in the frame $(t_i)$, thus $G=G^it_i$ since $G$ is a tangent vector field to begin with, $N^it_i = KN$ since $N$ is a normal vector field, and $h^k_{ij}t_k = Kh(t_i,t_j)$. We adopt the classical index notation where, in an expression such as $\nabla_iG_j$, the covariant derivative is applied first and then the resulting object is evaluated at $(t_i,t_j)$. We therefore have \[ \nabla_iG_j = \partial_iG_j - \Gamma^k_{ij} G_k.
\] The Christoffel symbols $\Gamma^k_{ij}$ can be expressed in terms of $c$ and $h$ by~\eqref{eq:Gamma-Gammat-h}, which leads to \[ \partial_iG_j = \nabla_iG_j + c^{k{\bar m}} c_{ij{\bar m}} G_k + u^{k\ell} h_{ij\ell} G_k. \] A similar formula holds for $N$, and for $h$ it takes the form \begin{multline*} \partial_{\ell}{h_{i j k}} = \nabla_{\ell} h_{i j k} + c^{{\bar\imath} m} c_{i \ell {\bar\imath}} h_{j k m} + c^{{\bar\imath} m} c_{j \ell {\bar\imath}} h_{i k m} + c^{{\bar\imath} m} c_{k \ell {\bar\imath}} h_{i j m}\\-h_{i j m} h_{k \ell n} u^{m n} -h_{i k m} h_{j \ell n} u^{m n}-h_{i \ell m} h_{j k n} u^{m n}. \end{multline*} We also recognize within $L$ the expression of the curvature tensor $\hR$~\eqref{EqCurvatureTensorRelations}. We obtain after simplification \begin{multline*} L = \frac14 \nabla_i G_j u^{i j} - \frac14 \nabla_i N_j u^{i j}+\frac12 c^{i {\bar\jmath}} \partial_{i {\bar\jmath}}{f} +\frac14 G_{i} h_{j k \ell} u^{i j} u^{k \ell} - \frac14 N_{i} h_{j k \ell} u^{i j} u^{k \ell} - \frac12 c^{i {\bar\jmath}} c^{k {\bar\ell}} {\tilde R}_{i {\bar\jmath} k {\bar\ell}} f \\- \frac14 u^{i k} u^{{\bar\jmath} {\bar\ell}} {\tilde R}_{i {\bar\jmath} k {\bar\ell}} f+\frac14 \nabla_{i}{h_{j k \ell}} f u^{i j} u^{k \ell}+\frac14 f h_{i j k} h_{\ell m n} u^{i j} u^{k \ell} u^{m n} - \frac{1}{6}f h_{i j k} h_{\ell m n} u^{i \ell} u^{j m} u^{k n}. \end{multline*} Here we can recognize several geometric quantities. First recall that the divergence of a tensor is the trace of its covariant derivative. So for a vector field $V=V^it_i$ we have \[ \div V = \nabla_i V^i = \nabla_i(V_j u^{ij}) = (\nabla_iV_j)u^{ij}. \] Note that since the metric is compatible with the connection, traces can be taken inside or outside the covariant derivative and there is no ambiguity in writing $\div V = \nabla_iV_ju^{ij}$. In the expression of $L$ we recognize $\nabla_i G_j u^{i j}=\div G$. For $N$ it is the same but recall that $N^it_i = KN$, thus $\nabla_i N_j u^{i j} = \div(KN)$.
Finally $\nabla_ih_{jk\ell} u^{ij} u^{k\ell} =\nabla_i(h_{jk\ell} u^{k\ell})u^{ij} = \nabla_iH_j u^{ij}=\div(KH)$. We also recognize the $X\times Y$ Laplacian $c^{i{\bar\jmath}}\partial_{i{\bar\jmath}}f = -\frac14 {\tilde\Delta} f$ (see Lemma~\ref{lemma:Laplacians}), and the scalar curvature $\tilde{R} = 8 c^{{\bar\jmath} l}c^{i{\bar k}} \tilde{R}_{i{\bar\jmath}{\bar k} l}$. We obtain \begin{multline*} L = \frac14 \div(G) - \frac14 \div(KN) -\frac18 {\tilde\Delta} f + \frac14 \bracket{G, KH} - \frac14 \bracket{KN,KH} +\frac{1}{16}f\hR \\ - \frac14 u^{ik} u^{{\bar\jmath}{\bar\ell}} \hR_{i{\bar\jmath} k{\bar\ell}} f +\frac14 \div(KH) f + \frac14 \bracket{KH,KH} f - \frac16 \bracket{Kh,Kh}. \end{multline*} We combine $\frac14 \bracket{G, KH} + \frac14 \div(KH) f = \frac14 \div(f KH)$, simplify some $K$'s, and obtain \begin{multline*} L = \frac14 \div(G) - \frac14 \div(KN) -\frac18 {\tilde\Delta} f + \frac14 \div(f KH) + \frac14 \bracket{N,H} +\frac{1}{16}f\hR \\ - \frac14 u^{ik} u^{{\bar\jmath}{\bar\ell}} \hR_{i{\bar\jmath} k{\bar\ell}} f - \frac14 \bracket{H,H} f + \frac16 \bracket{h,h}. \end{multline*} Finally for the quantity $u^{ik} u^{{\bar\jmath}{\bar\ell}} \hR_{i{\bar\jmath} k{\bar\ell}}$ we use the Gauss equation (Lemma~\ref{lemma:Gauss-equation}). We obtain $L$ written purely in terms of geometric quantities, \begin{multline*} L = - \frac18 {\tilde\Delta} f +\frac14 \hnabla_H f + f \Big( - \frac{1}{8}R + \frac{3}{32}{\hR} -\frac{1}{8}\bracket{H,H} + \frac{1}{24}\bracket{h,h}\Big) \\+ \frac14 \div(\nabla f - KN + fKH). \end{multline*} Integrating over $\Sigma$, the divergence gives us boundary terms and we obtain \begin{multline*} I_1 = \int_\Sigma \Big[- \frac18 {\tilde\Delta} f +\frac14 \hnabla_H f + f \Big( - \frac{1}{8}R + \frac{3}{32}{\hR} -\frac{1}{8}\bracket{H,H} + \frac{1}{24}\bracket{h,h}\Big)\Big]dm \\ + \int_{\partial\Sigma} \frac14 \bracket{\nabla f - KN + fKH, \nu}d\sigma. 
\end{multline*} Here $\nu$ is the outer normal and $\sigma$ is the volume form induced by $g$ on $\partial\Sigma$. \paragraph{Term $I_2$.} We have \begin{equation*} \abs{I_2(\varepsilon)} \le \int_Y \abs{\mathcal{R}(\varepsilon,y)}\,dy \le C\int_Y\Big(\norm{r(\cdot,y)}_{W^{4,\infty}(B(x(y), \delta))} + \norm{r(\cdot,y)}_{L^1(X)}\Big)\,dy, \end{equation*} which we write as $\abs{I_2(\varepsilon)} \le C (\norm{r}_{L^1_yW^{4,\infty}_x(\Sigma_\delta)} + \norm{r}_{L^1(X\times Y)})$. \end{proof} \section{Introduction} \label{sec:introduction} In its simplest form, the Laplace method consists in studying the behavior of the integral $\int_{{\mathbb R}^d} e^{-u(x)/\varepsilon} \,r(x)dx$ as $\varepsilon\to 0^+$. If $u\colon {\mathbb R}^d \to {\mathbb R}$ is sufficiently smooth and has a non-degenerate minimum at a unique point $x_*$, the Laplace method gives at first order in $\varepsilon$, for instance when $r(x) = 1$,~\cite[Section 2]{ShunMcCullagh} \begin{multline}\label{EqSimpleLaplace} \int_{{{\mathbb R}^d}} \frac{e^{-u(x)/\varepsilon}}{(2\pi\varepsilon)^{d/2}} \,dx = \frac{e^{-u/\varepsilon}}{\sqrt{\det[u_{ij}]}}\bigg(1 + \frac{\varepsilon}{8} u_{ijk}u_{\ell mn}u^{ij}u^{k\ell}u^{mn}\\ +\frac{\varepsilon}{12} u_{ijk}u_{\ell mn} u^{i\ell} u^{jm} u^{kn} - \frac{\varepsilon}{8} u_{ijk\ell}u^{ij}u^{k\ell}+ O(\varepsilon^2)\bigg)\,. \end{multline} In this formula, $u_{ij}\coloneqq\partial_{ij}u$, $u_{ijk}\coloneqq\partial_{ijk}u$, etc., and we denote by $u^{ij}$ the inverse matrix of $u_{ij}$. All the quantities involving $u$ on the right-hand side of~\eqref{EqSimpleLaplace} are evaluated at $x_*$ and we use the usual index summation convention.
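As a concrete illustration (a worked example on our part): take $d=1$ and $u(x)=\cosh x-1$, so that $x_*=0$, $u^{(2)}(0)=1$, $u^{(3)}(0)=0$ and $u^{(4)}(0)=1$. Formula~\eqref{EqSimpleLaplace} then predicts
\[
\int_{\mathbb R} \frac{e^{-(\cosh x-1)/\varepsilon}}{(2\pi\varepsilon)^{1/2}}\,dx = 1-\frac{\varepsilon}{8}+O(\varepsilon^2).
\]
This can be confirmed independently: the left-hand side equals $2e^{1/\varepsilon}K_0(1/\varepsilon)/(2\pi\varepsilon)^{1/2}$, where $K_0$ is the modified Bessel function, and the classical asymptotics $K_0(z)\sim\sqrt{\pi/(2z)}\,e^{-z}\big(1-\frac{1}{8z}+O(z^{-2})\big)$ yield the same expansion.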
In one dimension, taking into account a non-constant density $r$, one has \cite[Chapter 6]{Bender1999} \begin{multline}\label{EqLaplaceFormula1D} \int_\RR \frac{e^{-u(x)/\varepsilon}}{(2\pi\varepsilon)^{1/2}}r(x)dx = \frac{e^{-u/\varepsilon}}{(u^{(2)})^{1/2} } \\ \bigg(r + \varepsilon \Big( \frac{r^{(2)}}{2u^{(2)}} - \frac{r' u^{(3)}}{2 (u^{(2)})^2} - \frac{r \,u^{(4)}}{8 (u^{(2)})^2} + \frac{5 r\, (u^{(3)})^2}{24 (u^{(2)})^3} \Big) + O(\varepsilon^2) \bigg)\,, \end{multline} where again all the quantities involving $u$ and $r$ are evaluated at $x_*$. Such approximation formulas are ubiquitous in several areas of mathematics and are a very classical subject of interest \cite{WongBook}. They can be found in the literature in different forms, under the names of Laplace method, saddlepoint approximation, or Edgeworth expansion in statistics (see \cite{ReidSaddlepoint,barndorff-nielsen} and \cite{TierneyKadane}). The method also appears in statistical physics, and particularly in probability in the context of large deviations \cite{bolthausen}. A survey concerned with statistical applications can be found in \cite{ReviewStrawderman}. Formula \eqref{EqSimpleLaplace} thus quantifies the discrepancy from the Gaussian approximation of such integrals, and higher-order expansions are available, see in particular \cite{ShunMcCullagh,Kolassa1997}. In this article, we are interested in a geometric formulation of the Laplace formula, in its multivariate first-order expansion, in a case where the global minimum is attained on a closed manifold rather than at a unique point. Although in Formula \eqref{EqSimpleLaplace} there is a priori no need to use a particular geometric structure to formulate the results, the first-order Laplace expansion contains fourth-order derivatives of the function $u$ which resemble curvature terms of a metric associated to the Hessian of $u$. The main purpose of our work is to make explicit such a geometric formulation, with a metric that only depends on the function $u$.
A better geometric understanding of this term is of interest, for instance in recognizing divergence and curvature terms for further downstream applications. Note that there are very few works on a geometric formulation of the terms in the Laplace method; we mention in this direction Amari's work in the context of exponential families \cite{AmariExponential}. A geometric Laplace formula which applies to closed manifolds is presented in \cite{Ludewig}, and it is given at any order; however, it makes use of an operator which is not explicit in terms of the function $u$. Let us first discuss a possible issue in developing such a geometric formulation. A standard scheme of proof for the Laplace method consists in using the Morse lemma, which provides a local change of coordinates in which the function $u$ becomes a nonnegative quadratic form, thereby trivializing all the higher-order (greater than or equal to $3$) derivatives of $u$ in the Laplace formula. However, in doing so the volume form $r$ is pushed forward by this diffeomorphism, which affects the resulting terms, for instance in Equation \eqref{EqLaplaceFormula1D}, and these pushed-forward quantities are rather implicit in $u$. Due to the presence of such terms, this reduction does not bring a clear gain for a geometric understanding of the Laplace formula in terms of the function $u$. What we propose is a geometric study of the first two terms in the asymptotic expansion as $\varepsilon\to 0^+$ of the integral \[ I(\varepsilon) = \iint_{X\times Y}\frac{e^{-u(x,y)/\varepsilon}}{(2\pi\varepsilon)^{d/2}}\,dr(x,y)\,, \] for any given volume form $r$ on $X\times Y$. As can be expected, $I(\varepsilon)$ concentrates where $u$ is minimal, see for instance~\cite{hwang1980}. We are interested in the particular setting where the zero set of the nonnegative function $u$ is a $d$-dimensional surface in a $2d$-dimensional manifold. This situation naturally arises in optimal transport problems.
More precisely, we are concerned with a function $u(x,y)$ where $x$ and $y$ have equal dimensions, say in a Euclidean space, and where $u$ vanishes on a submanifold that can be described as a graph in these coordinates, i.e. $(x,y(x))$. Then we show that a natural geometry in which the Laplace method can be written is the Kim--McCann geometry \cite{kim2007continuity}. It was proposed as a natural pseudo-Riemannian metric for optimal transport problems since it offers an interpretation of the so-called Ma--Trudinger--Wang tensor~\cite{ma2005regularity}, a quantity appearing in the regularity theory of optimal transport, as a curvature tensor. Our main result is, in the context explained above, a Laplace formula at first order in which all the terms are geometric invariants. The usual case of the Laplace method in Euclidean space, that is, for a general function $u$ with a unique non-degenerate minimum point, can be retrieved as a particular case of our setting. Aside from the main contribution, we prove a quantitative version of the standard first-order Laplace method with explicit error bounds. Our main result reads \begin{theorem*}[Informal] Let $X,Y$ be two manifolds of equal dimension $d$. Suppose that $u\colon X \times Y \to {\mathbb R}$ is sufficiently smooth, nonnegative and vanishes on a $d$-dimensional manifold in $X \times Y$ that can be described as the graph $\Sigma$ of a diffeomorphism $(x,y(x))$ staying away from $\partial Y$.
Then, the following first-order expansion holds: \begin{multline*} \iint_{X\times Y}\frac{e^{-u(x,y)/\varepsilon}}{(2\pi\varepsilon)^{d/2}}f(x,y)\,d\tilde{m}(x,y) = \int_{\Sigma} fdm \,+ \\ \varepsilon\int_\Sigma\Big[- \frac18 {\tilde\Delta} f +\frac14 \hnabla_H f + f \Big(\frac{3}{32}{\hR} - \frac{1}{8}R + \frac{1}{24}\bracket{h,h} -\frac{1}{8}\bracket{H,H} \Big)\Big] \,dm \\ + \varepsilon\int_{\partial\Sigma} \frac14 \bracket{\nabla f - K\nabla^N\!f + fKH, \nu}\,d\sigma + O(\varepsilon^2) , \end{multline*} with a term $O(\varepsilon^2)$ controlled explicitly. \end{theorem*} In the formula above, there are two (pseudo-) Riemannian metrics involved, $\tilde{g}$ and $g$. On $X\times Y$, $\tilde{g}$ is the (pseudo-Riemannian) metric introduced by Kim and McCann in \cite{kim2007continuity}, henceforth called the Kim--McCann metric. Once restricted to $\Sigma$, it gives, under additional conditions, a Riemannian metric $g$. The volume forms $\tilde{m}, m$ are, up to rescaling, the ones induced by $\tilde{g}$ and $g$; $\hR$ and $R$ are the respective scalar curvatures of $\tilde{g}$ and $g$, and ${\tilde\Delta}$ denotes the pseudo-Riemannian Laplacian associated to $\tilde{g}$. Associated to $\Sigma$ seen as a submanifold of $X\times Y$, the second fundamental form is denoted by $h$ and $H$ is the mean curvature. Then, $\hnabla_{\!H}$ is the covariant derivative in direction $H$. The quantities $\bracket{h,h}$ and $\bracket{H,H}$ denote the pseudo-norms of the second fundamental form and the mean curvature, respectively. Finally, $K$ is the para-complex structure coming from the Kim--McCann geometry and $\nabla^N\!f$ is the normal component of the gradient of $f$. The material involving the Kim--McCann geometry is derived in Section \ref{sec:KimMcCann} in which we detail the inner and outer geometry of the submanifold $\Sigma$ and derive useful geometric quantities, which are of interest in themselves. 
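To make the setting concrete (an illustration on our part; the systematic reduction of the classical Laplace method to this setting is the object of Section~\ref{sec:applications}, possibly via a different construction), note that any smooth, uniformly convex $v\colon{\mathbb R}^d\to{\mathbb R}$ generates such a function through its Bregman divergence,
\[
u(x,y) = v(y)-v(x)-\nabla v(x)\cdot(y-x),
\]
which is nonnegative, vanishes exactly on the diagonal $\{(x,x)\}$, and has non-degenerate mixed Hessian $\partial_{i{\bar\jmath}}u=-\partial_{ij}v(x)$, thereby satisfying the structural requirements of the informal theorem above.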
The quantitative estimates for the Laplace formula are detailed in Section \ref{SecQuantitativeLaplace} and the main result is given in Section~\ref{sec:geometric_laplace}. Our proposed framework of a function $u$ on a product manifold $X \times Y$ might a priori seem too constrained to encompass the usual Laplace formula, for instance the one-dimensional case with a unique nondegenerate minimizer. In particular, if a curvature term is to be expected in the Laplace formula \eqref{EqLaplaceFormula1D}, it would likely be a quantity similar to the curvature of the graph of some function, evaluated at the critical point. Based on the Kim--McCann metric, we propose in Section \ref{sec:applications} a possible solution for a geometric formulation of the standard multidimensional Laplace method that also applies in the one-dimensional case with a nondegenerate global minimum. Furthermore, this framework of a decomposition into a product space naturally appears in various situations, an important one being the parametrix of the heat kernel, treated in Section \ref{SecHeatKernel}. More generally, our result sometimes allows one to write simple formulas for the Laplace method, for instance for the likelihood in Bayesian modelling. In optimal transport, this decomposition is readily present and the entropic regularization method leads to such integrals. A directly related application, which will be treated in a separate article, is the Taylor expansion of the entropic potentials with respect to the regularization parameter. \paragraph{Notation.} Coordinates on $X$ are denoted by $x^i, x^j,\dots$, while coordinates on $Y$ are denoted by $y^{\bar\imath},y^{\bar\jmath},\dots$ with barred indices.
We write partial derivatives as $\partial_i=\frac{\partial}{\partial x^i}$, $\partial_{\bar\imath}=\frac{\partial}{\partial y^{\bar\imath}}$, $\partial_{ij}=\frac{\partial^2}{\partial x^i\partial x^j}$, $\partial_{i{\bar\jmath}}=\frac{\partial^2}{\partial x^i\partial y^{\bar\jmath}}$, etc. For the derivatives of $c$ and $u$ we write \[ c_i\coloneqq \partial_ic, \: c_{ij}\coloneqq\partial_{ij}c,\: u_i\coloneqq\partial_iu, \] and so on. The $d\times d$ inverse matrix of $c_{i{\bar\jmath}}$ is denoted by $c^{{\bar\jmath} i}$, and we adopt the Einstein summation convention where summation over repeated indices is not explicitly written. We never use $c^{{\bar\jmath} i}$ to raise indices (or $c_{i {\bar\jmath}}$ to lower them). Vector fields on $X\times Y$ are expressed in the coordinate frame $(e_i,e_{\bar\imath})$, where we set $e_i=\partial_i$ and $e_{\bar\imath}=\partial_{\bar\imath}$. In general, geometric quantities on $X\times Y$ are denoted with a tilde: $\tilde{g},\tilde{m},\tilde{\Gamma}^k_{ij},\hR_{i{\bar\jmath} k{\bar\ell}}$ while quantities without tilde denote objects that live on $\Sigma$: $g,m,\Gamma^k_{ij},R_{ijk\ell}$. \section{Laplace method with quantitative remainder} \label{SecQuantitativeLaplace} \subsection{Statement of the result} Recall that the classical Laplace method consists in studying the behavior as $\varepsilon\to 0^+$ of the integral \[ \int_{{\mathbb R}^d} e^{-u(x)/\varepsilon} \,r(x)dx\,. \] In this section we prove a Laplace formula with explicit zeroth- and first-order terms and a quantitative remainder. First let us introduce some notations. When $f$ is a function defined over ${{\mathbb R}^d}$, the ``norm of its $k$th derivative'' is defined as \begin{equation*} \abs{D^kf} \coloneqq \sum_\alpha \abs{\partial_{1}^{\alpha_1}\partial_{2}^{\alpha_2}\dots \partial_{d}^{\alpha_d}f}, \end{equation*} where the sum runs over all multi-indices $\alpha$ such that $\alpha_1+\alpha_2+\dots+\alpha_d=k$. 
We also denote \begin{equation} \label{eq:def-norm-derivatives-upto} \abs{D^{\le k}f} \coloneqq \sum_{j=0}^k\abs{D^jf}. \end{equation} For a fixed $x\in {{\mathbb R}^d}$, the Taylor remainder of $f$ about $x$ is defined as the function \begin{equation} \label{eq:def-taylor-remainder} R_nf(z)=f(x+z)-\sum_{k=0}^{n-1}\frac{1}{k!} \partial_{i_1\dots i_k}f(x)\,z^{i_1}\dots z^{i_k}\,, \end{equation} for $n\ge 1$. Here $z^i$ denotes the $i$th component of $z$, the indices $i_j$ run from $1$ to $d$ and following the Einstein summation convention the sum over $i_1,\dots,i_k$ is not explicitly written. We set $G(z)=(2\pi)^{-d/2} e^{-\frac{1}{2}\abs{z}^2}$ and define \[ G_\tau(z) = \tau^{-d/2}G(z/\sqrt{\tau}), \] whenever $\tau>0$. We also introduce the convolution kernel \[ K(z) = \frac{e^{-\abs{z}^2/4}}{\abs{z}^{d-1}}\,, \] and set for any $\tau>0$, \begin{equation} \label{eq:def-Keps:1} K_\tau(z)=\tau^{-d/2}K(z/\sqrt{\tau}). \end{equation} Note that $K\in L^1({{\mathbb R}^d})$ and that $\norm{K_\tau}_{L^1} = \norm{K}_{L^1}$. We are now ready to state our quantitative version of the Laplace expansion at first order, although the strategy of the proof easily supports the extension to higher orders. We initially consider the following assumptions on $u$ (but see Assumption~\ref{ass:ux-localized} and Corollary~\ref{cor:quantitativelaplace} below):\\ \begin{assumption}\label{ass:ux} \leavevmode \begin{enumerate}[(i)] \item \label{ass:ux:regularity}$u\in C^6({{\mathbb R}^d})$ and $\abs{D^ku}\in L^\infty({{\mathbb R}^d})$ for $3\le k\le 6$. \item \label{ass:ux:bound-below} There exist $\lambda>0$ and a point $x_*\in{{\mathbb R}^d}$ such that $u(x_*)=0$ and \[ u(x)\ge \frac\lambda 2\abs{x-x_*}^2\,. \] \end{enumerate} \end{assumption} Note that~\ref{ass:ux:bound-below} implies that $u(x)\ge 0$ and that $u$ attains its minimum value $0$ at a unique point, $x_*$. 
\begin{theorem} \label{thm:quantitativelaplace} Let $u$ be a function satisfying Assumption~\ref{ass:ux} and let $r\in C^4({{\mathbb R}^d})$. Then there exists $C>0$ such that for all $\varepsilon>0$, \begin{multline*} \int_{{{\mathbb R}^d}} \frac{e^{-u(x)/\varepsilon}}{(2\pi\varepsilon)^{d/2}} \,r(x)dx = \\\frac{1}{\sqrt{\det[u_{ij}]}}\bigg[r + \varepsilon\Big(\frac 12 u^{ij}\partial_{ij}r-\frac 12 u_{jk\ell}u^{ij}u^{k\ell}\partial_ir + \frac 18 r\,u_{ijk}u_{\ell mn}u^{ij}u^{k\ell}u^{mn}\\+\frac{1}{12} r\, u_{ijk}u_{\ell mn} u^{i\ell} u^{jm} u^{kn} - \frac 18 r\, u_{ijk\ell}u^{ij}u^{k\ell}\Big)\bigg]+ \varepsilon^2\,\mathcal{R}(\varepsilon)\,, \end{multline*} where the right-hand side is evaluated at $x=x_*$, and where \[ \abs{\mathcal{R}(\varepsilon)}\le C\,\Big[\abs{D^{\le 2}r}(x_*) + (K_{\varepsilon/\lambda} \!*\! \abs{D^{\le 4}r})(x_*)\Big]\,. \] The constant $C$ only depends on $d$, $\lambda$ and $\norm{D^ku}_{L^{\infty}({{\mathbb R}^d})}$ for $3\le k\le 6$. The quantity $\abs{D^{\le 4}r}$ is defined by~\eqref{eq:def-norm-derivatives-upto} and $K_{\varepsilon/\lambda}$ is defined by~\eqref{eq:def-Keps:1}. \end{theorem} \begin{remark} In particular if $r\in W^{4,\infty}({{\mathbb R}^d})$, where $W^{4,\infty}({{\mathbb R}^d})$ stands for the usual Sobolev space of functions with four derivatives in $L^\infty$, then the remainder can be taken independent of $\varepsilon$, \begin{equation} \label{eq:remainer-infty} \abs{\mathcal{R}(\varepsilon)} \le C\norm{r}_{W^{4,\infty}({{\mathbb R}^d})}\,. \end{equation} \end{remark} We now proceed to localize Theorem~\ref{thm:quantitativelaplace}. Indeed when doing a Laplace expansion, we expect that the regularity and the behavior of the various quantities at play should mostly matter on a neighborhood of the minimizer $x_*$. We therefore fix an open subset $X\subset{{\mathbb R}^d}$ and consider functions $u$ and $r$ defined over $X$. 
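As an aside, the expansion of Theorem~\ref{thm:quantitativelaplace} lends itself to a quick numerical sanity check. The following sketch (ours, not part of the paper; it assumes Python with NumPy is available) takes $d=1$, $u(x)=x^2/2$ and $r(x)=\cos x$, for which the integral is exactly $e^{-\varepsilon/2}$ while the first-order prediction is $1-\varepsilon/2$; the remainder $e^{-\varepsilon/2}-1+\varepsilon/2=\frac{\varepsilon^2}{8}+O(\varepsilon^3)$ is indeed of order $\varepsilon^2$, as the theorem asserts.

```python
import math
import numpy as np

# Numerical sanity check of the first-order Laplace expansion (a sketch,
# not from the paper): u(x) = x^2/2, r(x) = cos(x), x_* = 0, d = 1.
# Closed form: int e^{-x^2/(2 eps)} cos(x) dx / sqrt(2 pi eps) = exp(-eps/2).
# First-order prediction: r(0) + eps * r''(0) / (2 u''(0)) = 1 - eps/2.

def laplace_integral(eps, lo=-12.0, hi=12.0, dx=1e-3):
    """Midpoint-rule approximation of the Laplace-type integral."""
    x = np.arange(lo, hi, dx) + dx / 2
    vals = np.exp(-x**2 / (2 * eps)) * np.cos(x) / math.sqrt(2 * math.pi * eps)
    return float(vals.sum() * dx)

eps = 0.1
I = laplace_integral(eps)     # numerically equal to exp(-eps/2)
first_order = 1 - eps / 2     # prediction of the expansion at order eps
```

The quadrature itself is essentially exact here (the integrand is a Gaussian times a cosine), so the gap between `I` and `first_order` isolates the $\varepsilon^2\mathcal{R}(\varepsilon)$ term.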
In the following assumptions, $B(x_*, \delta)$ denotes the ball of radius $\delta$ and center $x_*$ and $\abs{\cdot}$ denotes the Euclidean norm of ${{\mathbb R}^d}$. \begin{assumption}[Assumptions on $X$ and $u$]\label{ass:ux-localized} \leavevmode \begin{enumerate}[(i)] \item\label{ass:ux-localized:gen} $u$ is a measurable function over $X$, $u\ge 0$ and there exists $x_*\in X$ such that $u(x_*)=0$. \item\label{ass:ux-localized:ball} There exists $\delta>0$ such that $B \coloneqq B(x_*, \delta) \subset X$. \item\label{ass:ux-localized:regularity} $u\in C^6(B)$. \item\label{ass:ux-localized:bound-below} There exists $\lambda>0$ such that \begin{align*} u(x) &\ge \frac\lambda 2\abs{x-x_*}^2 \quad\text{for all $x\in B$}, \\ u(x) &\ge \frac\lambda 2\delta^2 \quad\text{for all $x\in X \setminus B.$} \end{align*} \end{enumerate} \end{assumption} Assumption~\ref{ass:ux-localized}\ref{ass:ux-localized:bound-below} can be replaced by demanding that $D^2u(x_*)\ge \lambda I_d$ and that $u$ be bounded below by a strictly positive constant outside of $B$ (this constant is taken here as $\frac\lambda 2\delta^2$ but its precise value matters little). We now state a local version of Theorem~\ref{thm:quantitativelaplace}, taking $r\in W^{4,\infty}(B)\cap L^1(X)$ for simplicity. \begin{corollary} \label{cor:quantitativelaplace} Let $u$ be a function satisfying Assumption~\ref{ass:ux-localized} and let $r\in W^{4,\infty}(B)\cap L^1(X)$.
Then there exists $C>0$ such that for all $\varepsilon>0$, \begin{multline*} \int_X \frac{e^{-u(x)/\varepsilon}}{(2\pi\varepsilon)^{d/2}} \,r(x)dx = \\\frac{1}{\sqrt{\det[u_{ij}]}}\bigg[r + \varepsilon\Big(\frac 12 u^{ij}\partial_{ij}r-\frac 12 u_{jk\ell}u^{ij}u^{k\ell}\partial_ir + \frac 18 r\,u_{ijk}u_{\ell mn}u^{ij}u^{k\ell}u^{mn}\\+\frac{1}{12} r\, u_{ijk}u_{\ell mn} u^{i\ell} u^{jm} u^{kn} - \frac 18 r\, u_{ijk\ell}u^{ij}u^{k\ell}\Big)\bigg]+ \varepsilon^2\,\mathcal{R}(\varepsilon)\,, \end{multline*} where the right-hand side is evaluated at $x=x_*$, and where \[ \abs{\mathcal{R}(\varepsilon)}\le C\,\Big[\norm{r}_{W^{4,\infty}(B)} + \norm{r}_{L^1(X)}\Big]\,. \] The constant $C$ depends on $d$, $\lambda$, $\delta$ and $\norm{D^ku}_{L^{\infty}(B)}$ for $3\le k\le 6$. \end{corollary} \subsection{Proof of Theorem~\ref{thm:quantitativelaplace}} We start with a lemma providing upper bounds on certain types of Gaussian integrals of $R_nf$ in terms of the norm of the $n$th derivative of $f$. \begin{lemma}\label{lemma:taylor-remainder-convolution} There exists a constant $C=C(d,n,k)$ such that \[ \int_{{{\mathbb R}^d}}\abs{z}^k \abs{R_nf(z)} \,G_\tau(z)dz \le C \,\tau^{\frac{k+n}{2}} \, (K_\tau * \abs{D^nf})(x)\,, \] for any $\tau>0$ and any integers $k\ge 0$ and $n\ge 1$. \end{lemma} \begin{proof} By the Taylor remainder theorem applied to $s\mapsto f(x+sz)$ we have \[ R_nf(z)=\int_0^1\frac{(1-s)^{n-1}}{(n-1)!}\partial_{i_1\dots i_n}f(x+sz)z^{i_1}\dots z^{i_n}\,ds\,. \] By Jensen's inequality, \[ \abs{R_nf}(z) \le \int_0^1\frac{1}{(n-1)!}(1-s)^{n-1}\abs{D^nf}(x+sz)\abs{z}^{n}\,ds\,, \] and we further bound $(1-s)^{n-1}$ by $1$. Therefore \[ \int_{{{\mathbb R}^d}}\abs{z}^k\abs{R_nf(z)} \, G_\tau(z)dz \le \int_{{{\mathbb R}^d}}\int_{0}^1 \frac{1}{(n-1)!}\abs{D^nf}(x+sz)\abs{z}^{k+n} G_\tau(z)\,dsdz\,.
\] Doing sequentially the following changes of variables $z\leftarrow sz$ and $s\leftarrow \frac{\abs{z}}{s\sqrt{\tau}} $, the right-hand side becomes \[ \int_{{\mathbb R}^d}\frac{1}{(n-1)!}\abs{D^nf}(x+z) \frac{\tau^{\frac{k+n-1}{2}} }{\abs{z}^{d-1}} \int_{\abs{z}/\sqrt{\tau}}^\infty s^{k+n+d-2} \frac{e^{-s^2/2}}{(2\pi)^{d/2}}\,ds\,dz\,. \] Write now $e^{-s^2/2} = e^{-s^2/4}e^{-s^2/4} \le e^{-\abs{z}^2/4\tau} e^{-s^2/4}$ when $s\ge \abs{z}/\sqrt{\tau}$, and bound the $s$ integral by the integral over $(0, \infty)$. This results in \begin{multline*} \int_{{{\mathbb R}^d}}\abs{z}^k\abs{R_nf(z)} \, G_\tau(z)dz \le \frac{1}{(n-1)!}\int_0^\infty s^{k+n+d-2} \frac{e^{-s^2/4}}{(2\pi)^{d/2}}\,ds \\ \int_{{\mathbb R}^d} \abs{D^nf}(x+z) \tau^\frac{k+n}{2} K_\tau(z)\,dz. \end{multline*} We obtain the desired result with the constant $C(d,n,k) = \int_0^\infty s^{k+n+d-2} \frac{e^{-s^2/4}}{(2\pi)^{d/2}}\,ds$, finite since $k+n+d-2\ge 0$. \end{proof} We will also compute explicitly certain Gaussian moments. We therefore recall Isserlis' formula (see~\cite{GVK025155202} which also contains an application to a formal Laplace expansion). \begin{lemma}[Isserlis' formula] \label{lemma:isserlis} Let \[ p_\varepsilon(z) = \sqrt{\smash[b]{\det[u_{ij}(x_*)]}} \frac{e^{-\frac{1}{2\varepsilon} u_{ij}(x_*)z^iz^j}}{(2\pi\varepsilon)^{d/2}} \] be the Gaussian density with zero mean and covariance matrix $[\varepsilon u^{ij}(x_*)]$, and fix indices $1\le i_k\le d$ for $k=1,\dots,2n$. Then \[ \int_{{{\mathbb R}^d}}z^{i_1}\dots z^{i_{2n}} \,p_\varepsilon(z)dz=\varepsilon^{n}\sum_P \prod_{\{a,b\}\in P}u^{i_ai_b}(x_*)\,, \] where the sum runs over all the partitions $P$ of $\{1,\dots,2n\}$ into pairs $\{a,b\}$. \end{lemma} We are now ready to prove Theorem~\ref{thm:quantitativelaplace}. \begin{proof}[Proof of Theorem~\ref{thm:quantitativelaplace}] Let $\mathcal{U}$ be the class of functions $u$ satisfying Assumption~\ref{ass:ux}. 
It is easy to check that $\mathcal{U}$ is convex (in fact it is a convex cone). For the entirety of the proof we fix a function $u\in\mathcal{U}$, a function $r\in C^4({{\mathbb R}^d})$ as well as $\varepsilon>0$. In view of Assumption~\ref{ass:ux} the unique minimizer of $u$ is denoted by $x_*$. The proof revolves around the functional \[ F_\varepsilon(w)=V \int_{{{\mathbb R}^d}} \frac{e^{-w(x)/\varepsilon}}{(2\pi\varepsilon)^{d/2}}\,r(x) dx\,, \] defined for $w\in\mathcal{U}$ and where we write \[ V = \sqrt{\smash[b]{\det[u_{ij}(x_*)]}}\,. \] Since $V$ depends on $u$ and not $w$, it is therefore constant throughout the proof. We choose to include it in $F_\varepsilon$ to avoid writing many $\det[u_{ij}(x_*)]$ terms later on. We also set \[ u_0(x)=\frac 1 2 u_{ij}(x_*)(x^i-x_*^i)(x^j-x_*^j). \] Note that $u_0\in\mathcal{U}$ and that the zeroth, first and second-order derivatives of $u_0$ and $u$ coincide at $x_*$. The main idea of the proof is then to estimate the desired quantity $F_\varepsilon(u)$ by its Taylor expansion at $u_0$, in the form \begin{multline}\label{eq:taylor-expansion-Feps} \Big\lvert F_\varepsilon(u)-F_\varepsilon(u_0)-\delta F_\varepsilon(u_0)(u-u_0)-\frac 12 \delta^2\!F_\varepsilon(u_0)(u-u_0)^{\otimes 2} - \frac 16 \delta^3\!F_\varepsilon(u_0)(u-u_0)^{\otimes 3}\Big\rvert \le \\\frac{1}{24}\sup_{w\in\mathcal{U}}\abs{\delta^4\!F_\varepsilon(w)(u-u_0)^{\otimes 4}}\,. \end{multline} Here it is important that $\mathcal{U}$ is convex. In the preceding formula, we denoted the $n$-th directional derivative of $F_\varepsilon$ in direction $h$ by \[ \delta^n\!F_\varepsilon(w)(h)^{\otimes n} = \left.\frac{d^n}{dt^n}\right\lvert_{t=0} F_\varepsilon(w+th)\,. \] The notation $(h)^{\otimes n}$ stands for $(h, \dots, h)$ ($n$ times). Let us take a look at the directional derivatives of $F_\varepsilon$. 
They can be easily computed: for $n\ge 1$, \begin{equation*} \delta^n\!F_\varepsilon(w)(h)^{\otimes n} = (-\varepsilon)^{-n} V \int_{{{\mathbb R}^d}} h(x)^n \frac{e^{-w(x)/\varepsilon}}{(2\pi\varepsilon)^{d/2}}\,r(x) dx\,. \end{equation*} Since this proof makes repeated use of \emph{Taylor remainders}, we recall that for a function $f$ defined over ${{\mathbb R}^d}$ we denote $R_nf$ the Taylor remainder of order $n$ at the point $x_*$, see~\eqref{eq:def-taylor-remainder}. Thus $R_0f(z)=f(x_*+z)$, $R_1f(z)=f(x_*+z)-f(x_*)$, \[ R_2f(z)=f(x_*+z)-(f(x_*)+\partial_if(x_*)z^i) \] (with the implied sum over repeated indices), and \[ R_3f(z)=f(x_*+z)-(f(x_*)+\partial_if(x_*)z^i+\frac 12 \partial_{ij}f(x_*)z^iz^j)\,. \] When $f$ is controlled in an $L^\infty$-type Sobolev space, we use the standard Taylor bound \begin{equation*} \abs{R_nf(z)}\le \frac{1}{n!}\norm{D^nf}_{L^{\infty}({{\mathbb R}^d})}\abs{z}^n\,. \end{equation*} We also need the identity \begin{equation} \label{eq:identity-Rnf} R_nf(z)=\frac{1}{n!}\partial_{i_1\dots i_n}f(x_*)z^{i_1}\dots z^{i_n} + R_{n+1}f(z)\,, \end{equation} which is immediate to derive, as well as higher-order generalizations of it. Finally to alleviate notation we write \[ p_\varepsilon(z) = V \frac{e^{-\frac{1}{2\varepsilon} u_{ij}(x_*)z^iz^j}}{(2\pi\varepsilon)^{d/2}}\,, \] which is the Gaussian density with zero mean and covariance matrix $[\varepsilon u^{ij}(x_*)]$. The moments of $p_\varepsilon$ can be computed by Isserlis' formula, recalled in Lemma~\ref{lemma:isserlis}. We now proceed to evaluate the Taylor terms in~\eqref{eq:taylor-expansion-Feps}. For each term we compute exactly the terms of order $0$ and $1$ in $\varepsilon$ and derive an explicit $O(\varepsilon^2)$ bound for the remainder. \paragraph{First term $F_\varepsilon(u_0)$.} The first term is \[ F_\varepsilon(u_0)=\int_{{{\mathbb R}^d}} r(x_* + z) \, p_\varepsilon(z)dz \,. \] We start by expanding $r(x_*+z)$ as a Taylor sum up to $O(\varepsilon^2)$ terms.
We thus need a third-order Taylor approximation of $r$, \[ r(x_*\!+z) = r(x_*)+\partial_ir(x_*)z^i+\frac 12 \partial_{ij}r(x_*)z^iz^j + \frac 16 \partial_{ijk}r(x_*) z^iz^jz^k + R_4r(z)\,.\] By symmetry in the Gaussian integral all the odd-order terms cancel and we are left with \[ F_\varepsilon(u_0)= \int_{{{\mathbb R}^d}} \Big[r(x_*) + \frac 12 \partial_{ij}r(x_*)z^iz^j + R_4r(z) \Big] \, p_\varepsilon(z)dz\,. \] By expanding the brackets we obtain three integrals. The first one integrates to $r(x_*)$. The second and third integrals are denoted $I_1$ and $I_2$ respectively. For $I_1$ we use the Isserlis formula given by Lemma~\ref{lemma:isserlis} to compute the Gaussian moment of order $2$ and find that \[ I_1 = \frac \varepsilon 2 u^{ij}\partial_{ij}r\Big|_{x=x_*}\,. \] The other integral $I_2=\int_{{{\mathbb R}^d}}R_4r(z) p_\varepsilon(z)dz$ can be bounded by $O(\varepsilon^2)$ terms as follows. First, Assumption~\ref{ass:ux} gives us control on the Hessian of $u$ at $x_*$, \[ u_{ij}(x_*)z^iz^j\ge \lambda\abs{z}^2. \] This implies an inequality we will reuse throughout the proof, \begin{equation}\label{eq:upper-bound-p_eps} p_\varepsilon(z) \le V \lambda^{-d/2} G_{\varepsilon/\lambda}(z)\,, \end{equation} with $G_{\varepsilon/\lambda}(z)=(2\pi\varepsilon/\lambda)^{-d/2}e^{-\lambda/(2\varepsilon)\abs{z}^2}$. Therefore \[ \abs{I_2} \le V\lambda^{-d/2}\int_{{{\mathbb R}^d}} \abs{R_4r(z)} G_{\varepsilon/\lambda}(z)\,dz\,. \] Bounds on this type of integral are provided by Lemma~\ref{lemma:taylor-remainder-convolution}. Here it gives us \[ \int_{{{\mathbb R}^d}} \abs{R_4r(z)} G_{\varepsilon/\lambda}(z)\,dz \le \varepsilon^2\lambda^{-2}\, c(d) (K_{\varepsilon/\lambda}*\abs{D^4r})(x_*)\,, \] where $c(d)$ is a constant depending only on the dimension and $K_{\varepsilon/\lambda}$ is the kernel defined by~\eqref{eq:def-Keps:1}.
In conclusion we showed \begin{equation} \label{laplace-prop-proof:term1} F_\varepsilon(u_0) = r +\frac\varepsilon 2u^{ij}\partial_{ij}r + \varepsilon^2 \mathcal{R}_\varepsilon\,, \end{equation} where the right-hand side is evaluated at $x=x_*$ and with \[ \abs{\mathcal{R}_\varepsilon(x_*)} \le c(d) V\lambda^{-d/2}\lambda^{-2} (K_{\varepsilon/\lambda}*\abs{D^4r})(x_*)\,. \] \paragraph{Second term $\delta F_\varepsilon(u_0)(u-u_0)$.} The next term in the expansion~\eqref{eq:taylor-expansion-Feps} is \[ \delta F_\varepsilon(u_0)(u-u_0) = -\varepsilon^{-1}\int_{{{\mathbb R}^d}} r(x_*+z) \big(u(x_*+z)-u_0(x_*+z)\big) \, p_\varepsilon(z)dz. \] We write $r(x_*+z)=r + \partial_irz^i + \frac 12 \partial_{ij}rz^{i}z^{j} + R_3r(z)|_{x=x_*}$ and expand the sum against $u(x_*+z)-u_0(x_*+z)$. Since $u(x_*)=0$ and the first derivative also vanishes, $ u_i(x_*)=0$, the expression $u(x_*+z)-u_0(x_*+z)$ turns out to be exactly the Taylor remainder $R_3u(z)=u(x_*+z)-(u+u_iz^i+\frac 12 u_{ij}z^iz^j)(x_*)$. Then $R_3u(z)$ is successively expressed with more or fewer terms, depending on which $r$ term is in front of it, in order to obtain the desired order in $\varepsilon$, \begin{align*} r(x_*+z) \, R_3u(z) = &r \Big(\frac{1}{3!}u_{ijk}z^iz^jz^k + \frac{1}{4!}u_{ijk\ell}z^iz^jz^kz^{\ell} + \frac{1}{5!}u_{ijk\ell m}z^iz^jz^kz^\ell z^m \\+ R_6u(z)\Big) &+ \partial_irz^i \Big(\frac{1}{3!}u_{jk\ell}z^jz^kz^{\ell} + \frac{1}{4!}u_{jk\ell m}z^jz^kz^\ell z^m + R_5u(z)\Big) \\ &+ \frac 12\partial_{ij}rz^iz^j \Big(\frac{1}{3!} u_{k\ell m}z^kz^\ell z^m +R_4u(z)\Big) + R_3r(z) \,R_3u(z)\,, \end{align*} where all the functions on the right-hand side are evaluated at $x=x_*$. Each expression enclosed in brackets is a different way to write $R_3u(z)$, as in~\eqref{eq:identity-Rnf}. 
After taking advantage of cancellations in the Gaussian integral due to symmetry we are left with \begin{multline*} \delta F_\varepsilon(u_0)(u-u_0)= -\varepsilon^{-1}\int_{{\mathbb R}^d} \Big(\frac{1}{4!} r u_{ijk\ell} + \frac{1}{3!} \partial_i ru_{jk\ell}\Big)(x_*) z^iz^jz^kz^\ell \, p_\varepsilon(z)dz -\\ \varepsilon^{-1}\int_{{\mathbb R}^d} \Big(r R_6u(z)+\partial_i rz^i R_5u(z) + \frac 12\partial_{ij}rz^iz^jR_4u(z) + R_3r(z) \,R_3u(z)\Big)_{x=x_*} \, p_\varepsilon(z)dz\,. \end{multline*} The first integral is denoted by $I_1$, it is a first-order term in $\varepsilon$. The second integral is denoted $I_2$ and contains higher-order terms. Let us focus first on $I_1$. Using the Isserlis formula in Lemma~\ref{lemma:isserlis} we calculate (dropping $x_*$ for convenience) \begin{align*} I_1 = -\varepsilon\Big(\frac{1}{4!} r \,u_{ijk\ell} + \frac{1}{3!} \partial_ir\,u_{jk\ell}\Big)\big(u^{ij}u^{k\ell}+u^{i\ell}u^{jk}+u^{ik}u^{j\ell}\big)\,. \end{align*} The object $u_{ijk\ell}$ is invariant by a permutation of the indices $i,j,k,\ell$. As for $\partial_ir\partial_{jk\ell}u$ it is not totally symmetric in the indices but its invariance to permutation of the indices $j,k,\ell$ implies that the expression of the rightmost bracket can be simplified to $3u^{ij}u^{k\ell}$. Therefore \[ I_1 = -\frac\varepsilon 8 r\, u_{ijk\ell}\,u^{ij}u^{k\ell} - \frac\varepsilon 2 \partial_ir \,u_{jk\ell}\,u^{ij}u^{k\ell}\,. \] We now turn our attention to \begin{multline*} I_2 = -\varepsilon^{-1}\int_{{{\mathbb R}^d}} \Big(r R_6u(z)+\partial_i rz^i R_5u(z) + \frac 12\partial_{ij}rz^iz^jR_4u(z) + \\ R_3r(z) \,R_3u(z)\Big)_{x=x_*} \, p_\varepsilon(z)dz\,. \end{multline*} We proceed similarly as for $F_\varepsilon(u_0)$. Expressions of $u$ are bounded in $L^\infty$ by $\abs{R_nu(z)}\le\frac{1}{n!}\normlinf{D^nu}\abs{z}^n$ and $p_\varepsilon$ is bounded using~\eqref{eq:upper-bound-p_eps}. 
We obtain \begin{equation*} \abs{I_2}\le c\,\varepsilon^{-1}V\lambda^{-d/2} \norm{D^3u}_{W^{3,\infty}} \int_{{{\mathbb R}^d}} \Big( \abs{z}^6\abs{D^{\le 2}r (x_*)} + \abs{z}^3\abs{R_3r(z)} \Big) G_{\varepsilon/\lambda}(z)\,dz\,, \end{equation*} for a numeric constant $c>0$. The first term $\abs{z}^6\abs{D^{\le 2}r (x_*)}$ yields an elementary Gaussian moment and we use Lemma~\ref{lemma:taylor-remainder-convolution} for $\abs{R_3r(z)}$. We obtain \[ \abs{I_2} \le \varepsilon^2\,c(d)V\lambda^{-d/2}\lambda^{-3} \norm{D^3u}_{W^{3,\infty}({{\mathbb R}^d})} (\abs{D^{\le 2}r} + (K_{\varepsilon/\lambda}*\abs{D^3r}))(x_*)\,. \] In conclusion we have proven \begin{equation} \label{laplace-prop-proof:term2} \delta F_\varepsilon(u_0)(u-u_0) = -\frac\varepsilon 8 r\, u_{ijk\ell}\,u^{ij}u^{k\ell} -\frac\varepsilon 2 \partial_ir\, u_{jk\ell}\,u^{ij}u^{k\ell} + \varepsilon^2 \mathcal{R}_\varepsilon\,, \end{equation} where the right-hand side is evaluated at $x=x_*$, and with \begin{equation*} \abs{\mathcal{R}_\varepsilon(x_*)} \le c(d)V\lambda^{-d/2}\lambda^{-3} \norm{D^3u}_{W^{3,\infty}({{\mathbb R}^d})} \Big[\abs{D^{\le 2}r}(x_*) + (K_{\varepsilon/\lambda}*\abs{D^3r})(x_*)\Big]\,. \end{equation*} \paragraph{Third term $\frac 12\delta^2\!F_\varepsilon(u_0)(u-u_0)^{\otimes 2}$.} The next term in~\eqref{eq:taylor-expansion-Feps} is \[ \frac 12\delta^2\!F_\varepsilon(u_0)(u-u_0)^{\otimes 2} = \frac 12\varepsilon^{-2}\int_{{{\mathbb R}^d}} r(x_*+z)\big(u(x_*+z)-u_0(x_*+z)\big)^2 \, p_\varepsilon(z)dz. \] Following the line of proof for the previous $F_\varepsilon$ terms, we write $r(x_*+z)= r + \partial_irz^i + R_2r(z)$ and $u(x_*+z)-u_0(x_*+z)=R_3u(z)$. 
Then the term $r(x_*+z) (R_3u(z))^2$ is broken up as follows, \begin{align*} & r(x_*+z) \, R_3u(z) R_3u(z) = \\ & \quad r\, \frac{1}{3!} u_{ijk} z^iz^jz^k \Big(\frac{1}{3!}u_{\ell mn}z^\ell z^mz^n + \frac{1}{4!}u_{\ell mns}z^\ell z^mz^nz^s + R_5u(z)\Big)\\ & +r\, \frac{1}{4!}u_{ijk\ell} z^iz^jz^kz^\ell \Big(\frac{1}{3!}u_{mns}z^mz^nz^s + R_4u(z)\Big)\\ & +r\, R_5u(z) R_3u(z) \\ & +\partial_ir z^i \Big(\frac{1}{3!} u_{jk\ell}z^j z^kz^\ell \Big) \Big(\frac{1}{3!} u_{mns}z^mz^nz^s + R_4u(z)\Big) \\ & + \partial_ir z^i \, R_4u(z) \, R_3u(z) \\ & + R_2r(z)\, R_3u(z) \, R_3u(z)\,, \end{align*} where the right-hand side is evaluated at $x=x_*$ where needed. Under the Gaussian integral, we obtain first a moment of order $6$, \[ I_1 := \varepsilon^{-2}\frac{1}{2\cdot (3!)^2} r\,u_{ijk}u_{\ell mn}\Big\rvert_{x=x_*}\int_{{{\mathbb R}^d}} z^iz^jz^kz^\ell z^mz^n \,p_\varepsilon(z) dz\,. \] The moments of order $7$ vanish and all the other terms form various moments of order $8$ that we denote $I_2$. Let us first compute $I_1$. Using Isserlis' formula and symmetries of the partial derivatives we have {\begin{align*} I_1 &=\frac{\varepsilon}{72}r\,u_{ijk}\,u_{\ell mn}\big(9u^{ij}u^{k\ell}u^{mn} +6 u^{i\ell}u^{jm}u^{kn} \big)\\ &=\frac{\varepsilon}{8}r\,u_{ijk}u_{\ell mn} \,u^{ij}u^{k\ell}u^{mn} +\frac{\varepsilon}{12}r\,u_{ijk}\,u_{\ell mn}u^{i\ell}u^{jm}u^{kn}\,. \end{align*}} It remains to bound $I_2$. By following a similar line of proof as was done for the $\delta F_\varepsilon(u_0)(u-u_0)$ term, we obtain \[ \abs{I_2} \le c\,\varepsilon^{-2}V\lambda^{-d/2} \norm{D^3u}^2_{W^{2,\infty}} \int_{{{\mathbb R}^d}}\Big(\abs{z}^8\abs{D^{\le 1} r(x_*)} + \abs{z}^6\abs{R_2r(z)}\Big)\,G_{\varepsilon/\lambda}(z)dz\,. \] By Lemma~\ref{lemma:taylor-remainder-convolution} we deduce \[ \abs{I_2} \le \varepsilon^2\,c(d)V\lambda^{-d/2}\lambda^{-4} \norm{D^3u}^2_{W^{2,\infty}} (\abs{D^{\le 1}r} + (K_{\varepsilon/\lambda}*\abs{D^2r}))(x_*)\,. 
\] In conclusion we have proven \begin{equation} \label{laplace-prop-proof:term3} \frac 12\delta^2 F_\varepsilon(u_0)(u\!-\!u_0)^{\otimes 2} = \frac{\varepsilon}{8}r\,u_{ijk}u_{\ell mn} \,u^{ij}u^{k\ell}u^{mn} +\frac{\varepsilon}{12}r\,u_{ijk}\,u_{\ell mn}u^{i\ell}u^{jm}u^{kn} + \varepsilon^2 \mathcal{R}_\varepsilon \,, \end{equation} evaluated at $x=x_*$, and with \begin{equation*} \abs{\mathcal{R}_\varepsilon(x_*)} \le c(d)V\lambda^{-d/2}\lambda^{-4} \norm{D^3u}^2_{W^{2,\infty}} \Big[\abs{D^{\le 1}r}(x_*) + (K_{\varepsilon/\lambda}*\abs{D^2r})(x_*)\Big]\,. \end{equation*} \paragraph{Fourth term $\frac 16\delta^3\!F_\varepsilon(u_0)(u-u_0)^{\otimes 3}$.} We use similar arguments as for the previous terms to deal with \[ \delta^3\!F_\varepsilon(u_0)(u-u_0)^{\otimes 3} = -\varepsilon^{-3}\int_{{{\mathbb R}^d}} r(x_*+z)\big(u(x_*+z)-u_0(x_*+z)\big)^3 \, p_\varepsilon(z)dz\,. \] This term is $O(\varepsilon^2)$ so we simply bound it: we split \begin{align*} & \quad r(x_*+z) (R_3u(z))^3 = \\ & +r\,\frac{1}{3!}u_{ijk}z^iz^jz^k\frac{1}{3!}u_{\ell mn}z^\ell z^mz^n \Big(\frac{1}{3!}u_{abc}z^az^bz^c + R_4u(z)\Big)\\ & +r\,\frac{1}{3!}u_{ijk}z^iz^jz^k R_4u(z)R_3u(z) \\ & +r\,R_4u(z)R_3u(z) R_3u(z)\\ & +R_1r(z) R_3u(z)R_3u(z)R_3u(z)\,. \end{align*} We end up with \begin{equation} \label{laplace-prop-proof:term4} \delta^3\!F_\varepsilon(u_0)(u-u_0)^{\otimes 3} = \varepsilon^2 \mathcal{R}_\varepsilon(x_*)\,, \end{equation} with \[ \abs{\mathcal{R}_\varepsilon(x_*)} \le \, c(d) V\lambda^{-d/2}\lambda^{-5}\norm{D^3u}^3_{W^{1,\infty}} \Big[\abs{r(x_*)} + (K_{\varepsilon/\lambda} * \abs{Dr})(x_*)\Big]\,. \] \paragraph{Remainder term $\delta^4\!F(w)(u-u_0)^{\otimes 4}$.} The final step is to bound \begin{multline*} \delta^4\!F(w)(u-u_0)^{\otimes 4}=\\ \varepsilon^{-4}V\int_{{\mathbb R}^d} r(x_*+z) \big(u(x_*+z)-u_0(x_*+z)\big)^4 \frac{e^{-w(x_*+z)/\varepsilon}}{(2\pi\varepsilon)^{d/2}}\,dz \end{multline*} uniformly over $w\in\mathcal{U}$. 
Contrary to the previous terms we cannot use~\eqref{eq:upper-bound-p_eps} but Assumption~\ref{ass:ux}\ref{ass:ux:bound-below} gives us precisely what we need. Indeed for any $w$ satisfying Assumption~\ref{ass:ux}\ref{ass:ux:bound-below} we have $e^{-w(x_*+z)/\varepsilon}\le e^{-\lambda\abs{z}^2/(2\varepsilon)}$, and therefore \[ \frac{e^{-w(x_*+z)/\varepsilon}}{(2\pi\varepsilon)^{d/2}} \le \lambda^{-d/2} G_{\varepsilon/\lambda}(z). \] From there we can conclude as for the previous $F_\varepsilon$ terms. We bound \begin{align*} \abs{r(x_*+z) \, (R_3u(z))^4} \le c\, \normlinf{D^3u}^4 \abs{r(x_*+z)}\abs{z}^{12}, \end{align*} which leads to \begin{equation} \label{laplace-prop-proof:term5} \delta^4\!F_\varepsilon(w)(u-u_0)^{\otimes 4} = \varepsilon^{2} \mathcal{R}_\varepsilon(x_*)\,, \end{equation} with \[ \abs{\mathcal{R}_\varepsilon(x_*)}\le c(d)V\lambda^{-d/2}\lambda^{-6}\norm{D^3u}^4_{L^{\infty}} (K_{\varepsilon/\lambda} * \abs{r})(x_*)\,. \] \paragraph{Putting everything together.} We see that in each estimate \eqref{laplace-prop-proof:term1}, \eqref{laplace-prop-proof:term2}, \eqref{laplace-prop-proof:term3}, \eqref{laplace-prop-proof:term4} and \eqref{laplace-prop-proof:term5} the remainder term $\mathcal{R}_\varepsilon$ can be bounded above by \[ c(d, \lambda, \norm{D^3u}_{W^{3,\infty}}) V \Big[\abs{D^{\le 2}r}(x_*) + (K_{\varepsilon/\lambda} * \abs{D^{\le 4}r})(x_*)\Big]\,. \] Dividing by $V=\sqrt{\smash[b]{\det[u_{ij}(x_*)]}}$, the desired result follows. \end{proof} \begin{proof}[Proof of Corollary~\ref{cor:quantitativelaplace}] Let $B'$ denote the open ball $B(x_*,\delta/2)$. By standard arguments there exists a function $\chi\in C^\infty({{\mathbb R}^d})$ such that $\chi\ge 0$, $\chi=1$ on $B'$ and $\chi$ has support inside $B$. We then define \[ \begin{cases} \tilde r(x) = \chi(x) r(x) & \text{ for }x\in B\\ \tilde r(x)=0 & \text{ for }x\in {{\mathbb R}^d}\setminus B. \end{cases} \] Let $q(x)= \frac\lambda 2\abs{x-x_*}^2$.
We similarly define \[ \begin{cases} \tilde u(x) = \chi(x)(u(x)-q(x)) + q(x) & \text{for } x\in B\\ \tilde u(x)=q(x) &\text{ for }x\in {{\mathbb R}^d}\setminus B. \end{cases} \] We want to apply Theorem~\ref{thm:quantitativelaplace} with remainder in the form~\eqref{eq:remainer-infty}. Because of how $\tilde r$ was constructed it is easy to check that $\tilde r\in W^{4,\infty}({{\mathbb R}^d})$ with $\norm{\tilde r}_{W^{4,\infty}({{\mathbb R}^d})}\le C \norm{r}_{W^{4,\infty}(B)}$, for a constant $C$ that depends on $\delta$ and $d$. It is also straightforward to check that $\tilde u$ satisfies Assumption~\ref{ass:ux}. Also note that $\tilde r$ (resp. $\tilde u$) only depends on the values of $r$ (resp. $u$) inside $B$. Our goal is then to replace the integral over $X$, namely $\int_X \frac{e^{-u(x)/\varepsilon}}{(2\pi\varepsilon)^{d/2}} \,r(x)dx$, by the integral over ${{\mathbb R}^d}$, namely $\int_{{\mathbb R}^d} \frac{e^{-\tilde u(x)/\varepsilon}}{(2\pi\varepsilon)^{d/2}} \,\tilde r(x)dx$. We bound \begin{multline} \label{eq:proof-cor-quant-eq1} \Big\lvert\int_X \frac{e^{-u(x)/\varepsilon}}{(2\pi\varepsilon)^{d/2}} \,r(x)dx-\int_{{\mathbb R}^d} \frac{e^{-\tilde u(x)/\varepsilon}}{(2\pi\varepsilon)^{d/2}} \,\tilde r(x)dx\Big\rvert \le \\ \int_X \Big\lvert\frac{e^{-u(x)/\varepsilon}}{(2\pi\varepsilon)^{d/2}} \,r(x)-\frac{e^{-\tilde u(x)/\varepsilon}}{(2\pi\varepsilon)^{d/2}} \,\tilde r(x)\Big\rvert dx + \int_{{{\mathbb R}^d}\setminus X} \frac{e^{-\tilde u(x)/\varepsilon}}{(2\pi\varepsilon)^{d/2}} \,\abs{\tilde r(x)}\,dx\,. \end{multline} In the first term on the right-hand side, the integrand vanishes on $B'$.
Outside of $B'$, we simply bound the difference by the sum, thus \[ \int_X \Big\lvert\frac{e^{-u(x)/\varepsilon}}{(2\pi\varepsilon)^{d/2}} \,r(x)-\frac{e^{-\tilde u(x)/\varepsilon}}{(2\pi\varepsilon)^{d/2}} \,\tilde r(x)\Big\rvert dx \le \int_{X\setminus B'} \frac{e^{-u(x)/\varepsilon}}{(2\pi\varepsilon)^{d/2}} \,\abs{r(x)}+\frac{e^{-\tilde u(x)/\varepsilon}}{(2\pi\varepsilon)^{d/2}} \,\abs{\tilde r(x)}dx\,. \] Combining with~\eqref{eq:proof-cor-quant-eq1} we obtain \begin{multline*} \Big\lvert\int_X \frac{e^{-u(x)/\varepsilon}}{(2\pi\varepsilon)^{d/2}} \,r(x)dx-\int_{{\mathbb R}^d} \frac{e^{-\tilde u(x)/\varepsilon}}{(2\pi\varepsilon)^{d/2}} \,\tilde r(x)dx\Big\rvert \le \\ \int_{X\setminus B'} \frac{e^{-u(x)/\varepsilon}}{(2\pi\varepsilon)^{d/2}} \,\abs{r(x)} + \int_{{{\mathbb R}^d}\setminus B'} \frac{e^{-\tilde u(x)/\varepsilon}}{(2\pi\varepsilon)^{d/2}} \,\abs{\tilde r(x)}\,dx\,. \end{multline*} On $X\setminus B'$ we have $u(x)\ge\frac\lambda 8\delta^2$ (indeed $u\ge\frac\lambda 2\delta^2$ on $X\setminus B$, while $u(x)\ge \frac\lambda 2\abs{x-x_*}^2\ge \frac\lambda 8\delta^2$ on $B\setminus B'$), and on ${{\mathbb R}^d}\setminus B'$ we have $\tilde u(x)\ge \frac\lambda 2\abs{x-x_*}^2\ge \frac\lambda 8\delta^2$. Thus \begin{multline} \label{eq:proof-cor-quant-eq2} \Big\lvert\int_X \frac{e^{-u(x)/\varepsilon}}{(2\pi\varepsilon)^{d/2}} \,r(x)dx-\int_{{\mathbb R}^d} \frac{e^{-\tilde u(x)/\varepsilon}}{(2\pi\varepsilon)^{d/2}} \,\tilde r(x)dx\Big\rvert \le \\ (2\pi\varepsilon)^{-d/2} e^{-\lambda \delta^2/(8\varepsilon)} \big(\norm{r}_{L^1(X)} + \norm{\tilde r}_{L^1({{\mathbb R}^d})} \big)\,. \end{multline} We have $\norm{\tilde r}_{L^1({{\mathbb R}^d})}\le C \norm{r}_{L^1(B)}$ and thus there exists a constant $C(\lambda, \delta,d)$ such that for all $\varepsilon>0$, the right-hand side of~\eqref{eq:proof-cor-quant-eq2} can be bounded by \[ C \varepsilon^2\norm{r}_{L^1(X)}. \] On the other hand, we now perform the Laplace expansion of \[ \int_{{\mathbb R}^d} \frac{e^{-\tilde u(x)/\varepsilon}}{(2\pi\varepsilon)^{d/2}} \,\tilde r(x)dx\,.
\] The zeroth and first-order terms consist of quantities depending on derivatives of $\tilde u$ and $\tilde r$ evaluated at $x_*$. Since $\tilde u$ and $u$ (resp.\ $\tilde r$ and $r$) coincide on $B'$, we may remove the tildes. The remainder, in the form~\eqref{eq:remainer-infty}, can be bounded by \[ \abs{\mathcal{R}(\varepsilon)} \le C\norm{\tilde r}_{W^{4,\infty}({{\mathbb R}^d})} \le C\norm{r}_{W^{4,\infty}(B)}\,. \] \end{proof}
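The Gaussian moment computations that drive the first-order terms above all go through Isserlis' formula (Lemma~\ref{lemma:isserlis}). As a quick numerical confirmation of the pairing identity for fourth moments, $\int z^iz^jz^kz^\ell\,p = \Sigma^{ij}\Sigma^{k\ell}+\Sigma^{ik}\Sigma^{j\ell}+\Sigma^{i\ell}\Sigma^{jk}$ with $\Sigma$ the covariance, the sketch below (Python; the $2\times 2$ covariance matrix is an arbitrary illustrative choice) compares it against a direct quadrature.

```python
# Numerical check of Isserlis' formula for fourth moments of a centered Gaussian:
#   E[z^i z^j z^k z^l] = S^{ij} S^{kl} + S^{ik} S^{jl} + S^{il} S^{jk}.
# Illustrative 2x2 covariance (not from the text).
import numpy as np

S = np.array([[1.0, 0.3], [0.3, 0.5]])   # covariance matrix
Sinv = np.linalg.inv(S)
det = np.linalg.det(S)

# Tensor-product trapezoidal grid on [-8, 8]^2; the Gaussian tails there are
# negligible, so the quadrature is essentially exact.
x = np.linspace(-8.0, 8.0, 1201)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
quad = Sinv[0, 0] * X**2 + 2 * Sinv[0, 1] * X * Y + Sinv[1, 1] * Y**2
dens = np.exp(-0.5 * quad) / (2.0 * np.pi * np.sqrt(det))

def moment(f):
    return float(np.sum(f * dens) * h * h)

# E[z1^2 z2^2] against the three-pairing formula S11*S22 + 2*S12^2.
m22 = moment(X**2 * Y**2)
assert abs(m22 - (S[0, 0] * S[1, 1] + 2.0 * S[0, 1] ** 2)) < 1e-6

# E[z1^4] against 3*S11^2 (three identical pairings).
m4 = moment(X**4)
assert abs(m4 - 3.0 * S[0, 0] ** 2) < 1e-6
print(m22, m4)
```

Taking $\Sigma=[\varepsilon\,u^{ij}(x_*)]$ reproduces exactly the $\varepsilon$-scalings used for the moments of $p_\varepsilon$ in the proof.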
\section{Introduction} There are two familiar versions of (RNS covariant open) superstring field theory\footnote{See~\cite{Fuchs:2008cc} for a review of recent developments in string field theory. More detailed background on the construction of superstring field theories can be found in section 8 therein.}. The first to be constructed is the modified cubic theory of~\cite{Preitschopf:1989fc,Arefeva:1989cm,Arefeva:1989cp}, in which the NS string field carries zero picture number. This is an improved version of Witten's theory~\cite{Witten:1986qs}, in which collisions of picture changing operators destroy the gauge symmetry and invalidate the evaluation of scattering amplitudes~\cite{Wendt:1987zh}. Although the modified theory corrects the problem of colliding picture changing operators, some difficulties remain. Most familiar is the criticism against the appearance of the picture changing operator $Y_{-2}$ in the definition of this theory, on the ground that this operator possesses a non-trivial kernel. It is not clear to us if this actually poses a problem, since one can claim that the problematic string fields should not be considered legitimate ones\footnote{See the companion paper~\cite{Kroyter:2009zj} for a more detailed discussion of this subject.}. However, there is also a genuine problem with the cubic formulation: Its Ramond sector is inconsistent, since its linearized gauge transformations cannot be exponentiated to give finite gauge transformations, due (again) to collisions of picture changing operators. This fact has gone unnoticed so far. We prove it in section~\ref{sec:BadGauge}. The second formulation is the non-polynomial theory, constructed by Berkovits~\cite{Berkovits:1995ab}. This theory lives in the large Hilbert space of the fermionized RNS superghosts~\cite{Friedan:1985ge}. The physical degrees of freedom behave like $\xi$ times the usual RNS degrees of freedom.
Hence, the NS string field carries ghost number zero (instead of one) and picture number zero (instead of the ``natural'' minus one picture). A novel gauge symmetry is introduced in order to reduce the degrees of freedom by half, that is, in order to obtain the same number of degrees of freedom as in the small Hilbert space. It was recently discovered that a mapping exists between these two theories, sending solutions of the one to solutions of the other, such that gauge orbits are sent bijectively to gauge orbits~\cite{Fuchs:2008zx}. The cohomologies around solutions and the actions of solutions are invariant under this mapping. Moreover, it was also shown in~\cite{Fuchs:2008zx} that this formalism can be extended to include the NS$-\,$ sectors of both formulations~\cite{Berkovits:2000hf,Arefeva:2002mb}. It was concluded that the theories are classically equivalent. It should be noted that (in one direction) the mapping is performed by a mid-point insertion on top of the field of the cubic theory. Furthermore, a regularization had to be employed for the evaluation of the action under the mapping; while the regularization was not explicitly constructed, the properties it should have were explained (on symmetry grounds). This state of affairs brings about two possible points of view for interpreting the results of~\cite{Fuchs:2008zx}. One option is that the mid-point insertion is a singular limit of another family of mappings, which are more complicated but regular. If this is the case then the theories are genuinely equivalent. The other option is that the cubic theory should be thought of as a singular gauge limit of the non-polynomial theory. Being a singular limit does not imply that it is wrong. It only implies that one has to use some care when working with it.
For example, it was realised~\cite{Kiermaier:2007jg,Kiermaier:2008jy,Kiermaier:2008qu} that Schnabl's gauge~\cite{Schnabl:2005gv} is a singular limit of a regular family of ``linear $b$-gauges''. Nonetheless, it was also shown that it is possible to regularize expressions in this gauge and get a reliable final result. There are at least three obvious matters that should be dealt with regarding the equivalence of the two theories. We already mentioned the somewhat singular nature of the equivalence. The second issue is the inclusion of the Ramond sector in the equivalence, while the third is the extension of the mapping to the quantum level. In fact, the second is a prerequisite for the third, since Ramond states can be produced in loops even when calculating processes involving only NS fields as external states. However, as we already mentioned, the Ramond sector of the cubic theory is inconsistent. Hence, a genuine correspondence cannot exist. Nonetheless, we prove the existence of a formal equivalence between the theories in section~\ref{sec:main}. Then, in section~\ref{sec:general}, we extend the formalism to describe arbitrary brane systems. We believe that the cubic theory with the Ramond sector should be thought of as being a singular limit, or a bad gauge fixing, of another, benign theory. One candidate for the consistent theory is the non-polynomial theory. However, the fact that the formal manipulations performed in section~\ref{sec:main} show that (up to mid-point regularization issues) the cubic and non-polynomial theories are formally equivalent seems to suggest that, regardless of its inconsistency in its current formulation, not all is bad in the cubic theory. A possibility for defining a consistent cubic theory is discussed in section~\ref{sec:2FermiCub}, where we study a theory with two Ramond string fields that has some attractive features.
Nonetheless, since in this theory the Ramond sector is doubled, it should somehow be further modified in order to give the cubic theory that we seek. We did not find a way to do that in a manner that avoids the infamous picture changing collisions. Hence, this theory is only a first step towards the goal of a consistent cubic theory. We comment on that and on some other issues in the conclusions section~\ref{sec:conc}. In the non-polynomial theory, the incorporation of the Ramond sector was performed in~\cite{Berkovits:1995ab,Berkovits:2001im,Michishita:2004by}. While this theory seems to be consistent, it is not known how to define it covariantly using a single Ramond string field. The best one can do covariantly is to write an action with two Ramond fields, at pictures $\pm \frac{1}{2}$, and add a constraint relating them. One could try to implement the constraint using a Lagrange multiplier. However, this would introduce a non-trivial equation of motion for the new Lagrange-multiplier string field. It might be possible to get around this issue by making this field a part of a quartet or by adding more Lagrange multipliers along the lines of~\cite{Berkovits:1997wj}, but this has not been done so far. The parity of the Ramond string fields in both theories is the same as that of the NS string fields of the same theory. This might come as a surprise, since the vertex operators in the Ramond sector have parity opposite to that of the NS+ sector. However, since the components of the Ramond string field represent fermions in space-time, the coefficient fields themselves have to be odd. This brings the total parity of the Ramond string field to the desired value, i.e., the Ramond string fields are odd in the cubic theory and even in the non-polynomial one. We refer the reader to appendix~\ref{app:picGh} for some tables of the quantum numbers of the relevant operators and string fields. We end this introduction by presenting some conventions and some of the operators we use.
Throughout this paper $[A,B]$ denotes the graded commutator, i.e., \begin{equation} \label{gradCom} [A,B]\equiv AB-(-)^{AB}BA\,, \end{equation} where $A$ and $B$ in the exponent represent the parity of $A$ and $B$. An important building block in our construction is the operator, \begin{equation} \label{Pdef} P(z)=-c\xi\partial \xi e^{-2\phi}(z)\,. \end{equation} This operator is a contracting homotopy operator for $Q$ in the large Hilbert space, i.e., \begin{subequations} \begin{equation} \label{PQ} [Q,P(z)]=1\,. \end{equation} Other important relations are, \begin{align} \label{Qxi} [Q,\xi(z)]&=X(z)\,,\\ \label{EtaXi} [\eta_0,\xi(z)]&=1\,,\\ [\eta_0,P(z)]&=Y(z)\,, \end{align} \end{subequations} where $X$ and $Y$ are the picture changing operator and the inverse picture changing operator respectively. These operators obey the OPE, \begin{equation} \label{XY} X Y \sim 1\,. \end{equation} It is also possible to define the double picture changing operators $X_2$ and $Y_{-2}$, obeying, \begin{equation} \label{Ym2XY} Y_{-2}X\sim Y\,,\qquad X_2 Y \sim X\,, \qquad X_2 Y_{-2} \sim 1\,. \end{equation} We will also use the following (regular) OPE's, \begin{subequations} \label{OPE} \begin{align} \label{PX} PX &\sim \xi\,,\\ \label{xiY} \xi Y &\sim P\,,\\ \label{PP} P P &\sim 0\,,\\ \label{XiXi} \xi \xi &\sim 0\,,\\ \label{Pxi} P \xi &\sim 0\,. \end{align} \end{subequations} The divergent OPE's are $XX$, $YY$, $X\xi$ and $YP$. We will use the operators appearing in~(\ref{OPE}) as mid-point insertions on string fields. This is not a-priori excluded, since they are all primaries of zero conformal weight~\cite{Kroyter:2009zj}. 
When these operators are inserted at the mid-point over the string fields $\Psi_{1,2}$ having no other mid-point insertions, one finds, \begin{equation} \label{factorize} (\cO_1\Psi_1)\star(\cO_2\Psi_2)=(-)^{\cO_2\Psi_1} (\cO_1\cO_2)(\Psi_1 \star \Psi_2)=(-)^{\cO_2(\cO_1+\Psi_1)} (\cO_2\cO_1)(\Psi_1 \star \Psi_2)\,, \end{equation} that is, the star algebra factorizes into a product of a graded-Abelian mid-point operator-insertion algebra and a regular string field star algebra. We shall often refer to~(\ref{OPE}) when we actually mean its part within expressions of the form of~(\ref{factorize}). Henceforth, we shall omit the star product, as it is the only possible product of string fields. \section{The Ramond sector of the cubic theory is inconsistent} \label{sec:BadGauge} Witten's cubic superstring field theory~\cite{Witten:1986qs} was shown to be inconsistent due to singularities in its gauge transformations~\cite{Wendt:1987zh,Arefeva:1988nn}. These singularities emerge from collisions of picture changing operators: the picture changing operator $X$ is inserted at the string mid-point, which is invariant under the star product. Hence, the double pole in the $XX$ OPE gives rise to infinities when the linearized gauge transformation is plugged into the action. The origin of this problem lies in the presence of the picture changing operator $X$ in the linearized gauge transformation in Witten's theory, \begin{equation} \delta A=Q\La+X[A,\La]\,. \end{equation} This in turn is inevitable, since the NS string field $A$ and the NS gauge string field $\La$ are both of picture number $-1$. Changing the picture of $\La$ to zero would force one to introduce an insertion of the inverse picture changing operator $Y$ on top of the $Q\La$ term and collisions would still occur, now due to the double pole in the $YY$ OPE.
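Schematically, the source of these divergences is the collision of mid-point insertions. For instance, in the $X$ case, formally applying the factorization~(\ref{factorize}) to the square of the inhomogeneous term of the transformation (recall that $X$ is even) gives,
\begin{equation}
\big(X[A,\La]\big)\big(X[A,\La]\big)=X^2\Big([A,\La]\,[A,\La]\Big)\,,
\end{equation}
where the mid-point coefficient $X^2=X(i)X(i)$ diverges due to the double pole in the $XX$ OPE.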
In order to remedy this problem, it was suggested that the NS string field $A$ should have picture number zero~\cite{Preitschopf:1989fc,Arefeva:1989cm,Arefeva:1989cp}. The Ramond sector is then described by the picture $-\frac{1}{2}$ string field $\al$~\cite{Preitschopf:1989fc}. The modified action reads, \begin{subequations} \label{cubAction} \begin{eqnarray} S&=& S_{NS}+S_R\,,\\ \label{cubActionNS} S_{NS}&= &-\int Y_{-2}\big(\frac{1}{2}A Q A+\frac{1}{3}A^3\big)\,,\\ \label{cubActionR} S_R&=& -\int Y\big(\frac{1}{2}\al Q \al+A \al^2\big), \end{eqnarray} \end{subequations} and its equations of motion are, \begin{subequations} \label{CubEOMY} \begin{eqnarray} Y_{-2}(Q A + A^2) + Y \al^2 &=& 0\,,\\ Y (Q \al + [A,\al]) &=& 0\,. \end{eqnarray} \end{subequations} Acting on these equations with $X_2$ and $X$, respectively, brings them to the form \begin{subequations} \label{CubEOM} \begin{eqnarray} \label{eomCubA} Q A + A^2 + X \al^2 &=& 0\,,\\ \label{eomCubAl} Q \al + [A,\al] &=& 0\,, \end{eqnarray} \end{subequations} where~(\ref{XY}) and~(\ref{Ym2XY}) were used. The action~(\ref{cubAction}) is invariant under the following infinitesimal gauge transformations, \begin{subequations} \label{cubGauge} \begin{eqnarray} \delta A &=& Q\La+[A,\La]+X[\al,\chi]\,,\\ \delta \al &=& Q\chi+[\al,\La]+[A,\chi]\,, \end{eqnarray} \end{subequations} where $\La$ and $\chi$ are the NS and Ramond gauge string fields, respectively. It is clear that no collisions can emerge when only the NS sector is considered, since no picture changing operators appear in this case. In~\cite{Preitschopf:1989fc}, it was claimed that this gauge transformation is regular also in the Ramond sector. To ``prove'' that, the infinitesimal transformation~(\ref{cubGauge}) was plugged into the equations of motion~(\ref{CubEOM}). It was found that this does not lead to collisions of picture changing operators at the leading order in the gauge string fields.
However, if one considers also the term quadratic with respect to the gauge string fields, collisions do occur. One may hope that adding the next (first non-linear) order to~(\ref{cubGauge}) produces another singular term that cancels the former. This is plausible a-priori, since both terms are quadratic with respect to $\chi$. Nonetheless, a direct evaluation reveals that this is not the case. Moreover, the singularity of the transformation can be seen even without referring to the action. Consider for simplicity the case $A_0=0$, $\al=\al_0\neq 0$ and act with a gauge transformation in the Ramond sector, i.e., take $\La=0$ and $\chi\neq 0$. Further, assume for simplicity that $Q\chi=0$. Explicit iterations of the linearized gauge transformations give, \begin{subequations} \label{itLinGauge} \begin{align} \al &\rightarrow \quad\:\al_0\hspace{0.5cm}\rightarrow \al_0 +X\big[[\al_0,\chi],\chi\big]\hspace{0.14cm} \rightarrow\hspace{0.14cm}\ldots\,,\\ A & \rightarrow X[\al_0,\chi]\rightarrow \hspace{0.84cm} 2X[\al_0,\chi] \hspace{0.72cm}\rightarrow 3X[\al_0,\chi]+ X^2\Big[\!\big[[\al_0,\chi],\chi\big],\chi\Big]. \end{align} \end{subequations} We see that the expression we get for $A$ after the third iteration is generically divergent. Furthermore, by plugging this expression into the action and expanding it to second order with respect to $\chi$ we also obtain divergences, as stated. One may think that iterating the linearized transformation, as we do here, is too naive and that the full non-linear transformation will somehow manage to avoid this problem. This is not the case. The full non-linear transformation is obtained from iterating the linearized one, while rescaling the gauge fields at the $n^{\tth}$ iteration as \begin{equation} \La\rightarrow \frac{\la}{n}\La\,,\qquad \chi\rightarrow \frac{\la}{n}\chi\,, \end{equation} where $\la$ is a fixed parameter. 
This fixes the coefficient of the linearized transformation to $\la$, while ensuring the infinitesimal character of the gauge fields. Hence, the only difference between the expression above and the exact one is in the numerical values of the coefficients, which will not change dramatically. Nonetheless, in order to remove any doubts, let us consider also the full non-linear transformation, pretending for the moment that it exists. The differential equation defining it is of the form, \begin{equation} \label{difGaugeEq} \frac{d}{d\la} \vec A(\la)=V+\cL\vec A(\la)\,. \end{equation} Here, we defined, \begin{equation} \vec A(\la)\equiv \left(\begin{array}{c}A(\la)\\ \al(\la)\end{array} \right). \end{equation} Now, $\la$ serves as an evolution parameter for the gauge transformation and the initial condition $\vec A(0)$ is the string field before the gauge transformation. The fixed string field $V$ and linear operator $\cL$ in our case are, \begin{align} V &=\left(\begin{array}{c}Q\La\\ Q\chi\end{array} \right),\\ \cL &=\left( \begin{array}{cc} \La_R-\La_L & X(\chi_R-\chi_L)\\ \chi_R-\chi_L & \La_R-\La_L \end{array} \right), \end{align} where $\La_R$ and $\chi_R$ represent multiplication from the right by $\La$ or $\chi$, while $\La_L$ and $\chi_L$ operate from the left. Note that left and right operations commute. The general solution of~(\ref{difGaugeEq}) is, \begin{equation} \label{gaugeSol} \vec A=e^{\la\cL}\vec A(0)+\frac{e^{\la\cL}-1}{\cL}V\,, \end{equation} where the operator multiplying $V$ is defined by its Taylor series and is regular. It is easy to verify that in the pure NS case~(\ref{gaugeSol}) reduces to the familiar form, \begin{equation} A(\la)=e^{-\la\La}\big(A(0)+Q \big)e^{\la\La}\,. \end{equation} For the example considered in~(\ref{itLinGauge}), we see that the divergent term $X^2\Big[\!\big[[\al_0,\chi],\chi\big],\chi\Big]$ indeed appears and its coefficient is $\frac{\la^3}{6}$.
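For completeness, note that for the configuration of~(\ref{itLinGauge}) one has $V=0$ (since $\La=0$ and $Q\chi=0$) and $\vec A(0)=\left(\begin{array}{c}0\\ \al_0\end{array}\right)$, so that expanding the exponential in~(\ref{gaugeSol}) and commuting the mid-point $X$ insertions out of the star products using~(\ref{factorize}) gives,
\begin{equation}
A(\la)=\la X[\al_0,\chi]+\frac{\la^3}{6}X^2\Big[\!\big[[\al_0,\chi],\chi\big],\chi\Big]+\ldots\,,\qquad
\al(\la)=\al_0+\frac{\la^2}{2}X\big[[\al_0,\chi],\chi\big]+\ldots\,,
\end{equation}
in accord with the structure of~(\ref{itLinGauge}).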
Higher order terms have a higher degree of divergence, i.e., higher powers of $X$, multiplying other conformal fields, and the total expression has the form of an essential singularity. A way out could still have existed. The powers of $X$ in~(\ref{gaugeSol}) are partially correlated with the power of the gauge string field $\chi$. Hence, constraining $\chi$ to always carry a factor of $Y$ in its definition could potentially eliminate the singularity. However, this would introduce singularities due to collisions of $Y$ in~(\ref{gaugeSol}). In fact, in this case singularities would emerge even earlier, since the action also contains a factor of $Y$. An operator that could have worked is $P$, since its OPE with $X$ gives $\xi$, which has a trivial OPE with $P$, while, on the other hand, no singularities can emerge from iterations of $P$ itself~(\ref{OPE}). Nonetheless, this cannot be an accepted resolution, since $P$ and $\xi$ do not live in the small Hilbert space, to which $\vec A$ belongs. Replacing $P(i)$ by, say, $P(i)-P(0)$, which does live in the small Hilbert space, would not solve the problem, since some of the singularities would still be left. The only option for eliminating the singularities in such a way would be to use $P(i)-P(-i)$. This, however, does not resolve the problem of singularities in the action, since the $YP$ OPE diverges. Moreover, if one constrains the gauge string field to contain some given mid-point insertion, it seems to us that the physical string field should also be constrained in a similar way, since otherwise the linearized equation of motion would not correspond to the world sheet theory results. Constraining the physical string field to have some insertions as part of its definition essentially leads to a redefinition of the theory. The redefined theory can be Witten's theory or some other variant, e.g.,~\cite{Lechtenfeld:1988tr}.
However, it seems that there is no simple modification of the mid-point structure that resolves the singularities both in the gauge transformation and in the action. Another possible resolution would be to replace the $X$ in the definition of the linearized gauge transformation by some sort of an operator that would effectively induce a projection to the correct space of string fields\footnote{This possibility was proposed to me by Scott Yost.}. It might be the case that a variant of the regularized $X$ insertions of~\cite{Preitschopf:1989fc} would work. These variants are non-local operators that were introduced in order to resolve the singularities in the tree-level scattering amplitudes. Presumably, if they are good for resolving one sort of singularity of the theory, they might also help to resolve the other. However, for the sake of the linearized gauge transformation, the mid-point character of the $X$ insertion is important. Specifically, the relation, \begin{equation} X(\Psi_1\Psi_2)=(X \Psi_1)\Psi_2=\Psi_1(X\Psi_2)\,, \end{equation} is imperative for proving that this is actually a symmetry. It is not clear to us how a non-local variant of $X$ could obey this relation. Also, one would still have to verify that the resulting (finite) gauge symmetry is singularity-free. These points would have to be addressed in any attempt to resolve the gauge symmetry singularities along these lines. Such a resolution, if possible at all, would be a non-trivial redefinition of the gauge symmetry. We conclude that, for the current formulation of the theory, the well-defined (up to issues regarding the space of string fields) linearized gauge transformation cannot be exponentiated to a finite gauge transformation. Hence, the fermionic gauge invariance is lost. One might be tempted to give up this invariance.
However, this would result in a wrong number of fermionic degrees of freedom; the theory would then no longer be an open string theory, and there would be no reason to believe that it is well defined quantum mechanically. Hence, the Ramond part of the cubic action in its current form cannot be trusted. \section{The equivalence} \label{sec:main} Regardless of the inconsistency of the cubic theory, we would like to examine to what extent the NS sector equivalence of~\cite{Fuchs:2008zx} can be extended to the Ramond sector. We manage to construct mappings between solutions of both theories in section~\ref{sec:mainMap} and we prove in section~\ref{sec:action} that (up to the question of the existence of a regularization) the action of corresponding solutions is the same. All that is still needed for establishing a classical equivalence is to prove that gauge orbits are sent to gauge orbits bijectively. This cannot quite be the case, since the gauge orbits of the cubic theory are not well defined. Nonetheless, we can check these statements at the linearized level, where the cubic theory does not suffer from any obvious problems. We find in section~\ref{main:gauge} that the correspondence indeed holds at that level. This also implies that the cohomologies around equivalent solutions are the same. The cubic theory was already presented in section~\ref{sec:BadGauge}. Let us now turn to the non-polynomial theory, due to Berkovits~\cite{Berkovits:1995ab}. In this theory picture changing operators are avoided altogether by working in the large Hilbert space. This enables one to work with a string field of ghost and picture number zero in the NS sector. The doubling of the degrees of freedom is compensated by a novel gauge symmetry. Again, the action can be written as a sum, \begin{subequations} \label{NPaction} \begin{equation} S=S_{NS}+S_R\,.
\end{equation} The NS part of the action was given in~\cite{Berkovits:1995ab}, \begin{equation} \label{BerkoAction} S_{NS}=\frac{1}{2}\oint\Big( e^{-\Phi} Q e^\Phi e^{-\Phi} \eta_0 e^\Phi-\int_0^1 dt\, \Phi [e^{-t\Phi}\eta_0 e^{t \Phi},e^{-t\Phi} Q e^{t \Phi}] \Big), \end{equation} where the integral $\oint$ represents integration in the large Hilbert space\footnote{A novel representation of this action was given in~\cite{Berkovits:2004xh}, $S_{NS}=-\oint\!\int_0^1 dt\, \eta A_t A_Q$, where $A_\nabla$ stands for $e^{-t\Phi} \nabla e^{t\Phi}$ and $\nabla$ is an arbitrary derivation of the star product. Here, $\nabla$ can represent one of the odd canonical derivations $Q$ and $\eta_0$, as well as the variation $\delta$ or the derivative with respect to the parameter $t$, $\partial_t$. To get from~(\ref{BerkoAction}) to this form one has to rewrite the former as, $S_{NS}=\int_0^1 dt\,\oint\Big(\partial_t(A_Q A_\eta)-A_t[A_\eta,A_Q]\Big)$ and use several times the identity, $F_{\nabla_1 \nabla_2}\equiv \nabla_1 A_{\nabla_2} -(-)^{\nabla_1 \nabla_2}\nabla_2 A_{\nabla_1} +[A_{\nabla_1},A_{\nabla_2}]=0\,.$}. It is not easy to add the Ramond sector to the non-polynomial theory. The equations of motion were found in~\cite{Berkovits:1995ab}, but it seems that they cannot be derived from a covariant action. In~\cite{Michishita:2004by}, the difficulty with the Ramond sector was attributed to a self-duality property of the string field\footnote{The Ramond string field contains a massless chiral fermion.}. It was suggested there, in analogy with the treatment of the type IIB self-dual RR form, to introduce another string field and impose a self-duality constraint between the two Ramond string fields. The Ramond part of the action is then, \begin{equation} \label{NPactionR} S_R=-\frac{1}{2} \oint e^{-\Phi} Q \Xi e^\Phi \eta_0 \Psi\,, \end{equation} where $\Xi$ and $\Psi$ are the Ramond string fields.
The action should be supplemented with the constraint, \begin{equation} \label{constraint} e^{-\Phi} Q\Xi e^\Phi = \eta_0 \Psi\,, \end{equation} \end{subequations} which should be imposed only after deriving the equations of motion. The equations of motion are now, \begin{subequations} \label{NPeqNoConstr} \begin{eqnarray} \label{NPeqPhi} \eta_0 (e^{-\Phi} Q e^\Phi) + \frac{1}{2}[\eta_0 \Psi, e^{-\Phi} Q \Xi e^\Phi]&=&0\,,\\ \label{NPeqXi} \eta_0 (e^{-\Phi} Q\Xi e^\Phi) &=& 0\,,\\ \label{NPeqPsi} Q (e^{\Phi} \eta_0 \Psi e^{-\Phi}) &=& 0\,. \end{eqnarray} \end{subequations} Taking the constraint~(\ref{constraint}) into consideration this set of equations reduces to, \begin{subequations} \label{NPEOM} \begin{eqnarray} \eta_0 (e^{-\Phi} Q e^\Phi) + (\eta_0 \Psi)^2&=& 0\,,\\ Q (e^{\Phi} \eta_0 \Psi e^{-\Phi}) &=& 0\,. \end{eqnarray} \end{subequations} \subsection{The mapping} \label{sec:mainMap} For the NS sector, the mapping is given by \begin{equation} \label{mapA} \tilde A=e^{-\Phi} Q e^\Phi\,. \end{equation} We append this mapping with the Ramond counterpart, \begin{equation} \label{mapAl} \al=i \eta_0 \Psi\,. \end{equation} In these variables the equations of motion~(\ref{NPEOM}) take the form, \begin{subequations} \begin{eqnarray} \label{AnonSmallH} \eta_0 \tilde A - \al^2&=& 0\,,\\ \label{alEOM} Q \al +[\tilde A, \al] &=& 0\,. \end{eqnarray} These two equations should be appended with the consistency conditions that follow from the definitions~(\ref{mapA}) and~(\ref{mapAl}), \begin{eqnarray} \label{Aeom} Q\tilde A+\tilde A^2 &=& 0\,,\\ \label{alSmall} \eta_0 \al &=& 0\,. \end{eqnarray} \end{subequations} The last equation implies that $\al$ is defined in the small Hilbert space. However, the introduction of a non-trivial Ramond field implies that $\tilde A$ is no longer a member of the small Hilbert space, as can be read from~(\ref{AnonSmallH}). To remedy this problem we define, \begin{equation} A=\tilde A-\xi \al^2\,. 
\end{equation} This string field does live in the small Hilbert space: the commutation relation~(\ref{EtaXi}) together with~(\ref{alSmall}) gives $\eta_0(\xi\al^2)=\al^2$, which cancels $\eta_0 \tilde A=\al^2$ of~(\ref{AnonSmallH}), so that $\eta_0 A=0$. Now, both variables live in the small Hilbert space and the role of the equations of motion is played by~(\ref{alEOM}) and~(\ref{Aeom}). Rewriting these equations in terms of $A,\al$ we get exactly the equations of motion of the cubic theory~(\ref{CubEOM}). To that end, one has to use the nilpotency property~(\ref{XiXi}) as well as the fact that $\xi$ is a mid-point insertion. To summarize, our mapping is given by, \begin{subequations} \label{Map} \begin{eqnarray} \label{A} A &=& e^{-\Phi} Q e^\Phi + \xi (\eta_0 \Psi)^2\,,\\ \label{alOfPsi} \al &=& i \eta_0 \Psi\,. \end{eqnarray} \end{subequations} We discuss why the NS sector mapping of~(\ref{A}) has to be modified in appendix~\ref{App:Why}. For the inverse mapping we define, \begin{subequations} \label{invMap} \begin{eqnarray} \label{PhiofAmap} \Phi &=& P A \,,\\ \label{PsiofAl} \Psi &=& -i \xi \al\,. \end{eqnarray} These definitions are enough for showing that the equations of motion are invariant. However, since we are also interested in proving the invariance of the action, we add the definition, \begin{equation} \Xi=-iP\al\,. \end{equation} \end{subequations} One can check that the definitions of $\Psi$ and $\Xi$ are consistent with the constraint~(\ref{constraint}). Composing the maps~(\ref{Map}) and~(\ref{invMap}) one gets, \begin{subequations} \begin{eqnarray} \label{Anew} A_{\op{new}}&=& \, e^{-\Phi} Q e^\Phi-\xi \al^2=(1-PA)Q(PA)-\xi \al^2\\ \nonumber &=& \, A-P(QA+A^2)-\xi \al^2=A+P X \al^2 -\xi \al^2=A\,,\\ \label{alNew} \al_{\op{new}}&=&\eta_0 \xi \al=\al\,.
\end{eqnarray} \end{subequations} In~(\ref{Anew}) we used the nilpotency of $P$~(\ref{PP}), the equation of motion~(\ref{eomCubA}), as well as the OPE~(\ref{PX}), while in~(\ref{alNew}) we used the fact that $\al$ is defined in the small Hilbert space. In the opposite direction one gets, \begin{subequations} \label{newAPhi} \begin{eqnarray} \label{PhiofA} e^\Phi_{\op{new}}&=& 1+P\big(-Q e^{-\Phi} e^\Phi-\xi \al^2\big)= Q P e^{-\Phi} e^\Phi=e^{-QP\Phi} e^\Phi\,,\\ \label{NewPsi} \Psi_{\op{new}}&=& \xi \eta_0 \Psi=\Psi-\eta_0 \xi \Psi\,, \end{eqnarray} \end{subequations} where in~(\ref{PhiofA}) we first used the OPE~(\ref{Pxi}) and then we used~(\ref{PP}) and~(\ref{PQ}). In this direction the composition of the transformations gives the identity map only for a specific gauge choice for $\Psi$. Otherwise, it gives a gauge equivalent string field, as will be shown in section~\ref{main:gauge}. \subsection{The action} \label{sec:action} We want to prove that the action of solutions is the same in both theories. For the NS part, this was shown to hold in~\cite{Fuchs:2008zx}. In fact, what was proven there is the following: given the mapping~(\ref{PhiofAmap}), the values of the actions~(\ref{cubActionNS}) and~(\ref{BerkoAction}) are the same. For this proof the equations of motion were not used. Hence, we can use it here, regardless of the fact that the equations of motion of the NS sector are modified by Ramond terms. To complete the proof we now show that in both theories the Ramond part of the action of an arbitrary solution is zero. For the cubic theory this results from the fact that both terms of the Ramond part of the action are proportional to $\al^2$ and hence to the star product of $\al$ with the $\al$ equation of motion~(\ref{eomCubAl}). For the non-polynomial theory,~(\ref{NPeqXi}) implies that the Ramond part of the action integrand is annihilated by $\eta_0$. Hence, its large-Hilbert-space integral is zero.
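For the cubic theory, the vanishing can also be made explicit: substituting $Q\al=-[A,\al]$ from~(\ref{eomCubAl}) and using the cyclicity of the integral (which the mid-point insertion $Y$ does not obstruct), all terms can be brought to the form $\int Y A\al^2$,
\begin{equation}
\int Y\Big(\frac{1}{2}\al Q \al+A \al^2\Big)=\int Y\Big(-\frac{1}{2}\al A\al-\frac{1}{2}\al^2 A+A \al^2\Big)=\Big(-\frac{1}{2}-\frac{1}{2}+1\Big)\int Y A \al^2=0\,,
\end{equation}
where the signs follow from the fact that $\al^2$ and $\al A$ are even.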
While we proved that the Ramond sector poses no new problems for the equality of the action, one should remember that for the proof of equality in the NS sector, the existence of an adequate regularization should be assumed~\cite{Fuchs:2008zx}. We again assume that such a regularization exists. \subsection{Gauge transformations} \label{main:gauge} As we already stressed, the finite gauge transformation of the cubic theory is not well defined. Hence, strictly speaking, there is no way to match gauge orbits between the two theories. Nevertheless, when restricted to linearized gauge transformations, all expressions make sense. We would therefore like to study the linearized gauge transformations, assuming that the action~(\ref{cubAction}) is some sort of a singular limit of a well behaved cubic theory. Since the singularity of the cubic theory does not express itself at the linearized level, we expect that, at that level, the formal equivalence that we study works. For the gauge transformations on the side of the non-polynomial theory, one should distinguish between the gauge symmetries of the action~(\ref{NPaction}) and those of the equations of motion~(\ref{NPEOM}), which are obtained after the use of the constraint~(\ref{constraint}). For the action, the gauge symmetries are, \begin{subequations} \label{actionGauge} \begin{eqnarray} \delta e^\Phi &=& e^\Phi \eta_0 \La_1+Q\La_0 e^\Phi\,,\\ \delta \Psi &=& \eta_0 \La_{\frac{3}{2}}+[\Psi,\eta_0\La_1]\,,\\ \delta \Xi &=& Q \La_{-\frac{1}{2}}+[Q\La_0,\Xi]\,, \end{eqnarray} \end{subequations} where the four gauge string fields are labeled by their picture numbers. This symmetry is consistent with the constraint~(\ref{constraint}). With this constraint imposed, the gauge transformation generated by $\La_{-\frac{1}{2}}$ becomes trivial, since it only shifts $\Xi$ by the $Q$-exact term $Q\La_{-\frac{1}{2}}$, which drops out of $Q\Xi$.
On the other hand, on the constraint surface there is an enhancement of symmetry, i.e., a new gauge string field $\La_{\frac{1}{2}}$ generates gauge transformations on this surface~\cite{Michishita:2004by}. The gauge symmetry takes the form\footnote{Note that $\La_0$ here is $-\La_Q$ of~\cite{Fuchs:2008zx}.}, \begin{subequations} \label{EOMgaugeSym} \begin{eqnarray} \delta e^\Phi &=& e^\Phi (\eta_0 \La_1-[\eta_0\Psi,\La_{\frac{1}{2}}]) +Q\La_0 e^\Phi\,,\\ \delta \Psi &=& \eta_0 \La_{\frac{3}{2}}+[\Psi,\eta_0\La_1]+ Q\La_{\frac{1}{2}}+[e^{-\Phi} Q e^\Phi,\La_{\frac{1}{2}}]\,. \end{eqnarray} \end{subequations} Suppose now that we map a solution of the cubic theory to the non-polynomial one and that this cubic solution is modified by a gauge transformation. This induces the following transformation on the side of the non-polynomial theory, \begin{subequations} \label{gaugeCubNP} \begin{eqnarray} \delta \Phi &=& P\delta A=P(Q\La+[A,\La]+X[\al,\chi])\,,\\ \delta \Psi &=& -i\xi \delta \al = -i \xi (Q\chi+[\al,\La]+[A,\chi])\,. \end{eqnarray} \end{subequations} The map~(\ref{PhiofAmap}) together with the nilpotency of $P$~(\ref{PP}) implies that \begin{subequations} \begin{eqnarray} e^\Phi &=& 1+\Phi=1+PA\,,\\ \delta e^\Phi &=& \delta\Phi\,. \end{eqnarray} \end{subequations} Now, define \begin{equation} \La_0=-P\La\,,\qquad \La_{\frac{1}{2}}=i\xi \chi\,,\qquad \La_1=\xi \La\,,\qquad \La_{\frac{3}{2}}=-i \tilde \xi X \chi\,. \end{equation} Here, $\tilde \xi$ in the definition of $\La_{\frac{3}{2}}$ represents a $\xi$ insertion at an arbitrary point other than the mid-point, in order to avoid singularities from the OPE of $X$ and $\xi$. Alternatively, we can take the normal ordered product $:\!\!\xi X\!\!:$. Since $\La_{\frac{3}{2}}$ appears only in the combination $\eta_0 \La_{\frac{3}{2}}$, the point at which $\tilde \xi$ is inserted is of no consequence.
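Indeed, $X\chi$ belongs to the small Hilbert space, since $\eta_0\chi=0$ and $[\eta_0,X]=0$ (as follows from~(\ref{Qxi}), (\ref{EtaXi}) and $[Q,\eta_0]=0$). Hence, by~(\ref{EtaXi}),
\begin{equation}
\eta_0 \La_{\frac{3}{2}}=-i\,\eta_0\big(\tilde \xi X \chi\big)=-i X\chi\,,
\end{equation}
independently of the insertion point of $\tilde \xi$.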
With these gauge string fields the transformation~(\ref{EOMgaugeSym}) takes the form \begin{subequations} \begin{eqnarray} &\phantom{.}& \begin{aligned} \delta \Phi &= (1+P A) (\eta_0 \xi \La-[\eta_0 \xi \al,\xi \chi]) -Q P\La (1+P A)\\ &= (1+P A) (\La-[\al,\xi \chi]) +(PQ \La-\La) (1+P A)\\ &= \La+P A\La+\xi[\al,\chi]+PQ \La-\La-P\La A\,, \end{aligned} \\ &\phantom{.}& \begin{aligned} \delta \Psi &= -i \eta_0 \tilde \xi X\chi-i[\xi\al,\eta_0 \xi \La]+ i Q \xi \chi+i[(1-P A) Q (1+P A),\xi \chi]\\ &= -i X\chi-i\xi[\al,\La]+i X\chi- i \xi Q \chi-i\xi[A, \chi] \,. \end{aligned} \end{eqnarray} \end{subequations} These transformations coincide with those of~(\ref{gaugeCubNP}). Hence, a gauge transformation of the cubic theory induces a gauge transformation of the non-polynomial theory. Consider now two gauge equivalent solutions of the non-polynomial theory. There are four gauge string fields relating these two solutions, $\La_0$, $\La_{\frac{1}{2}}$, $\La_1$ and $\La_{\frac{3}{2}}$. It is easy to see that $\La_0$ and $\La_{\frac{3}{2}}$ do not induce any variation of $A$ and $\al$. As for $\La_{\frac{1}{2}}$ and $\La_1$, one can see that they induce a gauge transformation, where the gauge string fields on the side of the cubic theory are given by, \begin{equation} \chi=-i\eta_0 \La_{\frac{1}{2}}\,,\qquad \La=\eta_0\La_1+[\al,\chi]\,. \end{equation} The proof is similar to the above, even if somewhat longer, and makes use of the equation of motion~(\ref{eomCubAl}), the OPE~(\ref{XiXi}), the relation~(\ref{Qxi}) and the graded Jacobi identity\footnote{Recall that $[\cdot,\cdot]$ is the graded commutator in our notations~(\ref{gradCom}).} for the fields $\al,\chi$ and $A$, \begin{equation} [A,[\al,\chi]]-[\al,[\chi,A]]+[\chi,[A,\al]]=0\,. \end{equation} Finally, we have to show that mapping a solution of the non-polynomial theory to the cubic theory and then back to the non-polynomial one results in a solution that is gauge equivalent to the original one.
The opposite assertion is trivial, since, as we showed, composing the mappings in the opposite order results in the identity mapping. Now, we have to use the finite form of the gauge transformation, since the original solution and the one obtained after the mappings are by no means infinitesimally close. There is no problem in working with the finite gauge transformation, since we now consider the side of the non-polynomial theory, where the finite gauge transformation is well defined. The expressions for the fields after the mappings are given by~(\ref{newAPhi}). It is clear that we do not have to exponentiate the full linearized gauge symmetry~(\ref{EOMgaugeSym}). All that is needed is to consider the following non-zero gauge fields, \begin{equation} \La_0=-P\Phi\,,\qquad \La_{\frac{3}{2}}=-\xi\Psi\,. \end{equation} The finite gauge transformation generated by these fields gives exactly~(\ref{newAPhi}). We can now conclude that (on-shell and up to the problems in the Ramond sector of the cubic theory) gauge orbits in one theory correspond to gauge orbits in the other theory. Also, the fact that the equations of motion and gauge symmetries are mapped to each other in both directions implies that the same holds also for their linearized versions. Hence, both theories have the same cohomologies around solutions. \section{The equivalence for general D-brane configurations} \label{sec:general} The general D-brane system introduces the need for Chan-Paton factors, as well as the choice of sectors (NS$\pm$ / R$\pm$) that enter into each entry of the Chan-Paton matrix\footnote{Here, we consider explicitly the ten dimensional, flat, Poincar\'e-invariant cases. Generalization to the case with lower dimensional D-branes should be simple.}. The study of the possible Chan-Paton factors and NS/R sectors is nothing but the classification of possible open string theories\footnote{See~\cite{Bianchi:1990yu} for a thorough discussion on open string classification.}.
For this classification we need to impose the requirements of mutual locality and closure of all the OPE's, as well as the consistency of the interaction with closed strings. The requirement of local OPE's of all fields involved implies that the NS$-$ sector cannot exist in the same Chan-Paton entry with Ramond fields and that the R$\pm$ fields are also mutually exclusive. Another requirement is the closure of the OPE, which implies that the NS+ sector is always present. Hence (at any given Chan-Paton entry) one may have either only the NS+ sector, or the NS+ sector together with NS$-$ or with one of the R sectors. The combination of NS+ with either of R$\pm$ can be realised on the D-brane and on the \= D-brane. The NS$\pm$ case is realised on the non-BPS D-brane. The pure NS+ case can also be realised~\cite{Thorn:2008ay}. The introduction of Chan-Paton factors into string field theory is easy. One simply tensors each string field with the appropriate Chan-Paton matrix and adds a normalized trace over the Chan-Paton space to the definition of the integral~\cite{Berkovits:2000hf}. The operators of the theories, namely $Q$, $\eta_0$ and $Y_{-2}$, are not affected and can be thought of as being multiplied by the identity matrix in the Chan-Paton space. Adding the NS$-$ sector to the NS+ one, when working with a non-BPS D-brane (or at an off-diagonal entry of the Chan-Paton matrix that represents strings stretching between a D-brane and a \= D-brane), can be achieved by tensoring the previous structure also with ``internal Chan-Paton'' (two by two) matrices and adding a normalized trace over this sector to the definition of the integral\footnote{The internal Chan-Paton matrices compensate for the opposite Grassmann parity of the two sectors and restrict the interaction terms only to those that respect the GSO parity.}.
This structure was first introduced for the non-polynomial theory in~\cite{Berkovits:2000hf}, where it was shown that the NS+ sector should be tensored with the two by two identity matrix and the NS$-$ sector should be tensored with the Pauli matrix $\sigma_1$. The gauge string fields for these sectors are tensored with $\sigma_3$ and $i\sigma_2$, respectively, and the operators $Q$ and $\eta_0$ are also tensored with $\sigma_3$. For the cubic theory this structure was introduced in~\cite{Arefeva:2002mb}, where it was shown that the roles of the string fields and the gauge fields are reversed, e.g., the NS$-$ string field $A_-$ gets the $i\sigma_2$ factor and the NS$-$ gauge field gets the $\sigma_1$. The kinetic operator $Q$ retains the $\sigma_3$ factor, which should also be granted to $Y_{-2}$. This similarity in the structure used to describe the NS$\pm$ sectors in the two theories makes the generalization of the mapping to the NS$-$ sector straightforward. Indeed, the generalization of the mapping for the NS$-$ sector and for the case of Chan-Paton factors was already given in~\cite{Fuchs:2008zx}. All that is needed is to tensor $\xi$ and $P$ with $\sigma_3$ (and with the identity matrix in the genuine Chan-Paton space) and the mappings work. Now, we want to consider the case of adding also R$\pm$ sectors in various entries of the Chan-Paton matrix. As mentioned in the introduction, the parity of the Ramond string fields is the same as that of the NS+ string field of the same theory. Furthermore, we never deal with both Ramond sectors or with a Ramond sector together with an NS$-$ sector. Hence, there is no need to append any of the Ramond sectors with internal Chan-Paton factors. Nevertheless, in the case where some other entries of the Chan-Paton matrix contain an NS$-$ sector, it is possible, just for the sake of a uniform description of the whole Chan-Paton matrix, to append internal Chan-Paton factors also to the Ramond sector string fields.
In such a case one should define an internal Chan-Paton factor for the Ramond string field, $\sigma_R$. The $\sigma_3$ factors on $A$, $Q$ and $Y_{-2}$ imply that $\sigma_R$ should square to the identity matrix and commute with $\sigma_3$. Hence, one should choose either $\sigma_R=\sigma_3$ or $\sigma_R=\One$. The same can be done for the non-polynomial theory. There, the string fields $\Xi$ and $\Psi$ would have to be appended with a $\sigma_R$ that obeys the same conditions as in the case of the cubic theory. Now, commutativity with $\sigma_3$ is required due to its presence in $Q$ and $\eta_0$. For our mapping to work in these cases as well, one should make the opposite choice for $\sigma_R$ in the two theories, since $\sigma_3$ appears with an odd power everywhere in our mappings~(\ref{Map}) and~(\ref{invMap}). The most natural definition is $\sigma_R=\One$ in the non-polynomial theory and $\sigma_R=\sigma_3$ in the cubic theory. Then, all the string fields (other than the NS$-$ ones) in either of the two theories are appended with the same factor. The same uniformity holds also for the gauge string fields. All the good properties of our mappings are maintained. \section{A cubic action with two fields in the Ramond sector} \label{sec:2FermiCub} Having established the correspondence between the cubic theory~(\ref{cubAction}) and the non-polynomial theory~(\ref{NPaction}) including the constraint~(\ref{constraint}), one may ask whether some sort of an extension of the mapping exists for the unconstrained non-polynomial theory as well. For such a mapping to exist, we have to consider a modification of the cubic theory with two Ramond fields. In fact, this is a natural avenue from the perspective of the cubic theory as well, as we explain below. Furthermore, the fact, discussed above, that the Ramond sector of the cubic theory is ill-defined calls for a refined, mid-point-insertion-free formulation.
Recall that the problems of the formalism originate from the mid-point insertions in the definitions of the gauge transformations, not from the mid-point insertion in the definition of the action. In fact, when string fields with mid-point insertions are assumed to lie outside the space of allowed string fields, the use of mid-point insertions in the definition of the action does not lead to any problems. We refer again to~\cite{Kroyter:2009zj} for further discussion of this issue. The modified cubic theory solved the problems of the original cubic theory by working with NS fields in the ``neutral'' (zero) picture. This cannot be imposed on the Ramond fields, since they carry half-integer picture numbers. The closest one can get is to use a $\pm\frac{1}{2}$ picture. Of these two options, the ``natural'' Ramond ($-\frac{1}{2}$) picture was selected. The fact that a non-zero-picture string field is used implies that picture changing operators appear in the equations of motion and gauge transformations. These operators must be inserted at the mid-point, leading to potential singularities of the theory. If two Ramond fields are allowed, one can write an action whose equations of motion include no picture changing operators other than the global $Y_{-2}$ insertion. Of course, the physical theory has only one Ramond field. Hence, a constraint should be imposed. This constraint is bound to include picture changing operators, leading to the usual equations of motion~(\ref{CubEOM}). So one may be under the impression that nothing is gained. Still, the fact that the new action is equivalent (as we shall promptly see) to the unconstrained non-polynomial action may suggest that these two actions have some meaning even before the constraint is imposed. Moreover, one might speculate that the new action could be useful for quantization and for generalizations of the theory.
From the above discussion one can immediately guess the action\footnote{The proofs of the various properties of the mapping between this action and the unconstrained non-polynomial action are quite similar to those of section~\ref{sec:main}. Hence, we tend to be brief here.}, \begin{equation} \label{2RCubaction} S=-\int Y_{-2}\Big(\frac{1}{2}A Q A+\frac{1}{3}A^3 + \frac{1}{2} \tilde \al Q \al + \frac{1}{2}A [\al,\tilde\al]\Big)\,, \end{equation} with $\al$ having picture number $n_p(\al)=-\frac{1}{2}$ as before and $n_p(\tilde \al)=\frac{1}{2}$. From this action follow the equations of motion (omitting the global $Y_{-2}$), \begin{subequations} \label{EOM2R} \begin{align} \label{2RAEOM} QA+A^2+\frac{1}{2} [\al,\tilde\al] &= 0\,,\\ \label{2RalEOM} Q\al+ [A,\al] &= 0\,,\\ \label{TilAlEOM} Q\tilde\al+[A,\tilde\al] &= 0\,. \end{align} \end{subequations} It is interesting to notice that the three string fields can be unified into a single string field simply by adding them. Define, \begin{equation} \hat A=A+\frac{\al+\tilde\al}{\sqrt{2}}\,, \end{equation} and expand in terms of these constituents the natural action for $\hat A$, \begin{equation} \label{compactAction} S=-\int Y_{-2}\Big(\frac{1}{2}\hat A Q\hat A+\frac{1}{3}\hat A^3\Big). \end{equation} Expanding the integrand one gets the integrand of the action~(\ref{2RCubaction}) together with some other terms. However, all the other terms have wrong picture numbers and hence can be safely dropped from the action. The constraint we should impose is, \begin{equation} \label{cubConst} \tilde \al=X\al \quad \Longleftrightarrow \quad \al=Y\tilde \al\,. \end{equation} While both representations above are correct, the left one is more accurate in the sense that $\tilde \al$ is the string field with the actual mid-point $X$ insertion. A genuine $Y$ insertion is prohibited, since it would lead to singularities with the $Y_{-2}$ insertion in the action. Keeping this constraint in mind, we also prohibit explicit $X$ insertions in $\al$.
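Returning to the expansion of the compact action~(\ref{compactAction}), the picture-number counting behind dropping the extra terms can be made explicit. Using the assignments $n_p(A)=0$, $n_p(\al)=-\frac{1}{2}$ and $n_p(\tilde\al)=\frac{1}{2}$, one has, schematically (this is our own spelling-out of the counting, with inessential numerical coefficients suppressed),

```latex
\begin{equation}
\int Y_{-2}\Big(\frac{1}{2}\hat A Q\hat A+\frac{1}{3}\hat A^3\Big)=
\int Y_{-2}\Big(
\underbrace{\frac{1}{2}A Q A+\frac{1}{3}A^3
+\frac{1}{2}\tilde\al Q\al+\frac{1}{2}A[\al,\tilde\al]}_{n_p=0}
+\underbrace{A Q\al+A\al^2+\dots}_{n_p=\pm\frac{1}{2},\,\pm 1}
+\underbrace{\al^3+\dots}_{n_p=\pm\frac{3}{2}}
\Big)\,.
\end{equation}
```

Since the integral vanishes unless the total picture number is $-2$, and the $Y_{-2}$ insertion already saturates this, only the picture-zero terms can contribute, reproducing the integrand of~(\ref{2RCubaction}).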
Applying this constraint, the action and the equations of motion reduce to~(\ref{cubAction}) and~(\ref{CubEOM}) respectively. At the linearized level the action~(\ref{2RCubaction}) is invariant under three independent gauge transformations, \begin{equation} \delta A=Q\La\,,\qquad \delta \al=Q\chi\,,\qquad \delta \tilde \al=Q\tilde \chi\,. \end{equation} However, only the first of these gauge transformations has an extension at the non-linearized level. Hence, the complete gauge invariance of the theory reads, \begin{subequations} \label{CubGauge2Fer} \begin{align} \delta A=&\, Q\La+[A,\La]\,,\\ \delta \al=&\, [\al,\La]\,,\\ \delta \tilde\al=&\, [\tilde\al,\La]\,. \end{align} \end{subequations} This can be understood from the compact form of the action~(\ref{compactAction}). This form implies that the gauge symmetry is, \begin{equation} \hat A \rightarrow e^{-\hat\La}(Q+\hat A)e^{\hat \La}\,. \end{equation} If one tries to substitute into $\hat \La$ any component whose picture is non-zero, the result takes the string field $\hat A$ out of the allowed picture-number range, $-\frac{1}{2}\leq n_p\leq \frac{1}{2}$. Hence, $\hat \La$ should be restricted to its zero-picture component $\La$, and the gauge transformation reduces (in its linearized form) to~(\ref{CubGauge2Fer}). Imposing the constraint~(\ref{cubConst}) leads to an enhancement of the gauge symmetry to~(\ref{cubGauge}) (modulo the consistency problem of this gauge transformation), in analogy with the situation in the non-polynomial theory. In fact, the absence of gauge symmetries for the fermionic string fields should be expected, if we indeed believe (and shortly prove) that this theory is equivalent to the unconstrained non-polynomial theory. This can be seen by counting degrees of freedom. The non-polynomial theory resides in the large Hilbert space, which has double the degrees of freedom of the small Hilbert space, due to the presence of the $\xi_0$ mode.
In this space both $Q$ and $\eta_0$ are trivial, and hence gauge transformations based on them reduce the degrees of freedom by a half. For the boson field, this theory has the $\La_1$ gauge symmetry, which effectively implies that the degrees of freedom of the theory are isomorphic to those of the small Hilbert space. Then, on top of this gauge symmetry there is also the $\La_0$ gauge symmetry, which reduces the degrees of freedom of the theory in exactly the same way that $\La$ reduces those of the cubic theory. Note that we can no longer claim that this is a reduction by ``a half'', since in the small Hilbert space $Q$ is no longer trivial. The two fermionic gauge symmetries of the theory, namely $\La_{-\frac{1}{2}}$ and $\La_{\frac{3}{2}}$, reduce by a half the degrees of freedom of $\Xi$ and $\Psi$ respectively, rendering them potentially equivalent to $\al$ and $\tilde \al$. Had the cubic theory possessed more gauge symmetry, its degrees of freedom could not have matched those of the non-polynomial one. Our goal now is to find the mapping between the two theories. We propose the mapping, \begin{equation} \label{2RMap} \Phi=PA\,,\qquad \Psi=-i P\tilde \al\,,\qquad \Xi=-iP \al\,. \end{equation} We have to show that under this mapping solutions of the equations of motion~(\ref{EOM2R}) are mapped to solutions of~(\ref{NPeqNoConstr}). From~(\ref{2RMap}) we find that the l.h.s.\ of~(\ref{NPeqPhi}) is given by, \begin{equation} \eta_0 (e^{-\Phi} Q e^\Phi)+ \frac{1}{2}[\eta_0 \Psi, e^{-\Phi} Q \Xi e^\Phi]=-Y\big(QA+A^2 +\frac{1}{2}[\al,\tilde \al]\big), \end{equation} which vanishes in light of the equation of motion~(\ref{2RAEOM}). Then, we find, \begin{equation} e^{-\Phi}Q\Xi e^\Phi=-i\al\,, \end{equation} where we used~(\ref{PP}). This implies that~(\ref{NPeqXi}) holds.
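The expansions of $e^{\pm\Phi}$ used here and below truncate after the linear term. This rests on the assumption (our reading of the construction, with $P$ a mid-point insertion built from the Grassmann-odd $\xi$, as in the mappings of~\cite{Fuchs:2008zx}) that two $P$ insertions collide at the mid-point and vanish,

```latex
\begin{equation}
(PA)^2\propto \xi(i)\,\xi(i)=0
\qquad\Longrightarrow\qquad
e^{\pm\Phi}=e^{\pm PA}=1\pm PA\,.
\end{equation}
```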
For calculating~(\ref{NPeqPsi}), define \begin{equation} \hat \al\equiv -i Y \tilde \al\,, \end{equation} and evaluate, \begin{equation} \label{FirstCol} Q\big(e^\Phi\eta_0\Psi e^{-\Phi}\big)=Q\big((1+PA)\hat \al (1-PA)\big)= Q\hat \al + [A,\hat\al] - P Q[A,\hat\al]=0\,, \end{equation} where the last equality follows from~(\ref{TilAlEOM}). The last manipulation can be criticized on the grounds that a factor of $Y$ was ``hidden'' in the definition of $\hat \al$. When considered explicitly, this factor would produce divergences with the factors of $P$ that multiply it. We can avoid this problem in one of two ways. One way is to recall that when we enforce the constraint relating $\al$ and $\tilde \al$, the latter has an explicit factor of $X$ multiplying it. We can declare that $\tilde \al$ has to carry such an insertion even without the constraint. This does not change a priori the number of degrees of freedom of $\tilde \al$ and solves the problem. Nonetheless, this resolution is not quite satisfactory, since our aim was to obtain a theory which, at least a priori, is free from explicit mid-point insertions on string fields. The other way is to rely on the fact that we have to regularize the mappings anyway, in order to produce a sensible action. Then, we can declare that obtaining the above result without any finite corrections is a symmetry principle for the regularization scheme. This resolution further constrains the needed regularization. However, we now have more string fields at our disposal, so it is plausible that a regularization exists. For the inverse mapping we choose, \begin{subequations} \label{invMap2R} \begin{align} \label{alOfXi} \al &= ie^{-\Phi}Q\Xi e^\Phi\,,\\ \label{TilAlOfPsi} \tilde \al &= i\eta_0 X \Psi\,,\\ A &= e^{-\Phi} Q e^\Phi + \frac{1}{2}\xi [\eta_0 \Psi, e^{-\Phi} Q \Xi e^\Phi]\,.
\end{align} \end{subequations} The equations of motion of the non-polynomial theory~(\ref{NPeqNoConstr}) imply that $A$, $\al$ and $\tilde \al$ live in the small Hilbert space, as they should. Next, we have to verify that the equations of motion~(\ref{EOM2R}) also follow from~(\ref{NPeqNoConstr}) and the mapping~(\ref{invMap2R}). The proof for~(\ref{2RAEOM}) and~(\ref{2RalEOM}) is straightforward. For the evaluation of the last equation~(\ref{TilAlEOM}), define \begin{equation} \hat \Psi=i X\Psi\,. \end{equation} We then get, \begin{align} Q\tilde \al+[A,\tilde \al]&=\, Q\eta_0 \hat \Psi+\big[e^{-\Phi}Q e^\Phi+ \frac{1}{2}\xi[\eta_0 \Psi,e^{-\Phi}Q \Xi e^\Phi],\eta_0 \hat\Psi\big]\\ \nonumber &=\,Q\eta_0 \hat \Psi+\big[(1-\xi \eta_0)e^{-\Phi}Q e^\Phi,\eta_0 \hat\Psi\big]= Q\eta_0 \hat \Psi-\eta_0 \xi Q\eta_0 \hat\Psi=\xi \eta_0 Q \eta_0 \hat \Psi =0\,, \end{align} where in the second equality~(\ref{NPeqPhi}) was used and in the third equality~(\ref{NPeqPsi}) was used. Note that, similarly to what we had in~(\ref{FirstCol}), a singularity due to the collision of $X$ and $\xi$ is hidden in the definition of $\hat \Psi$. Here, we have no excuse for claiming that $\Psi$ should always carry a factor of $Y$ in its definition to cancel the $X$ that multiplies it. Hence, the only way out is to require the existence of a good regularization scheme. We stress again that, as in the previous cases where we relied on the existence of the regularization, i.e., in the evaluation of the action in~\cite{Fuchs:2008zx} and in~(\ref{FirstCol}), we have neither an explicit form of the regularization nor a proof of its existence. We return to this point in section~\ref{sec:conc}. It is straightforward to see that composing the mapping~(\ref{invMap2R}) with~(\ref{2RMap}) results in the identity mapping of the cubic theory. Suppose now that we compose the mappings in the opposite order. We expect to get a finite gauge transformation.
In fact we get, \begin{subequations} \label{2FerGaugeTrans} \begin{equation} e^\Phi_{\op{new}}=e^{-QP\Phi} e^\Phi\,, \end{equation} as before, while for the fermionic fields we get, \begin{align} \Psi_{\op{new}}=&\, \Psi-\eta_0 \xi \Psi\,,\\ \Xi_{\op{new}}=&\, P e^{-\Phi}Q \Xi e^\Phi=PQPe^{-\Phi}Q \Xi QP e^\Phi= e^{-QP \Phi}(\Xi-QP\Xi) e^{QP\Phi} \,. \end{align} \end{subequations} Exponentiating the various transformations of~(\ref{actionGauge}) in order to get the form of the finite gauge transformations, one can see that~(\ref{2FerGaugeTrans}) can be obtained by performing the following finite gauge transformations in the order they are written, \begin{equation} \La_{\frac{3}{2}}=-\xi\Psi\,, \qquad \La_{-\frac{1}{2}}=-P\Xi\,, \qquad \La_0=-P\Phi\,. \end{equation} Let there be two gauge equivalent solutions of the non-polynomial theory. The infinitesimal gauge transformations are generated by the four gauge fields $\La_p$ with $p=-\frac{1}{2},0,1,\frac{3}{2}$. Considering each of these transformations and mapping it to the cubic theory, we see that only $\La_1$ induces a non-trivial transformation of the cubic string fields. This variation is given by~(\ref{CubGauge2Fer}), with the identification \begin{equation} \La=\eta_0\La_1\,. \end{equation} We conclude that gauge equivalent configurations are mapped to gauge equivalent configurations in this direction. On the side of the cubic theory we have only one gauge transformation to consider. Substituting~(\ref{CubGauge2Fer}) into the mapping gives, \begin{equation} \delta e^\Phi=e^\Phi\La-QP\La e^\Phi\,,\qquad \delta\Psi=[\Psi,\La]\,, \qquad \delta\Xi=[\Xi,QP\La]\,. \end{equation} This is a gauge transformation on the side of the non-polynomial theory, with the gauge string fields given by, \begin{equation} \La_0=-P\La\,,\qquad \La_1=\xi \La\,,\qquad \La_{-\frac{1}{2}}=\La_{\frac{3}{2}}=0\,.
\end{equation} We can now conclude that, when no constraints are imposed, the mapping of gauge orbits between the two-Ramond-field cubic theory and the unconstrained non-polynomial theory is bijective. The proof of the equality of the action of solutions in both theories is very similar to the one presented in section~\ref{sec:action}. Again, we prove that the Ramond-sector contribution to the action is zero for solutions in both theories. For the non-polynomial theory the proof only used~(\ref{NPeqXi}), which does not depend on the constraint. Hence, there is nothing new to prove here. For the cubic theory the Ramond part of the action integrand is bi-linear in $\al$ and $\tilde \al$, which implies that it is equal to the star product of $\al$ with its equation of motion, as well as to the star product of $\tilde \al$ with its equation of motion. Either of these equalities is enough to conclude that the action of solutions gets no contribution from this sector. We proved the equivalence of gauge orbits, which also implies the identity of the cohomologies around solutions, as well as the equality of the action of corresponding solutions. Hence, we conclude that the two theories are classically equivalent. Finally, let us illustrate that the constraints one has to impose on the two theories are equivalent. Starting from the cubic theory we have to impose~(\ref{cubConst}). This immediately implies, \begin{equation} \eta_0 \Psi=-iY\tilde\al=-i\al=e^{-\Phi}Q\Xi e^\Phi\,, \end{equation} which is just the constraint of the non-polynomial theory~(\ref{constraint}). The other direction is just as straightforward. The equivalence of the constraints, together with the other results of this section, gives an ``alternative'' derivation of all the results of section~\ref{sec:main}, since imposing the constraints on both theories reduces them to the theories studied there. \section{Conclusions} \label{sec:conc} In this work we proved the inconsistency of the modified cubic superstring field theory.
The inconsistency stems from collisions of picture changing operators in the finite form of its Ramond-sector gauge transformations. This state of affairs implies that the cubic theory should either be abandoned or modified. We believe that the latter is the more sensible option, for two reasons. The first, stressed throughout this paper, is the formal equivalence between this theory and the non-polynomial theory. The second reason is the success of this theory in describing the NS sector. In particular, vacuum solutions and dynamical tachyon condensation were studied both numerically and analytically, with very impressive results~\cite{Aref'eva:2000mb,Ohmori:2003vq,Erler:2007xt,Aref'eva:2008ad, Fuchs:2008zx,Calcagni:2009tx}. This can be compared to Witten's superstring field theory, which failed to reproduce any such results~\cite{DeSmet:2000je}. We believe that had the cubic theory been completely wrong, it would not have produced these results even in the NS sector. Hence, we need a theory whose NS part reduces to the NS sector of the cubic theory in some limit. This theory should also consistently include the Ramond sector. A possible modification of the cubic theory was recently proposed using non-minimal sectors~\cite{Berkovits:2009gi,Kroyter:2009zj}. It seems, however, that it cannot solve the consistency problem of the Ramond sector~\cite{Kroyter:2009zj}. In section~\ref{sec:2FermiCub}, we introduced a first step towards a different sort of modification. There, the Ramond sector was doubled in order to avoid mid-point insertions of picture changing operators on string fields. The doubling of the Ramond sector kills supersymmetry. In order to restore it, one might consider doubling the NS sector as well. One might even speculate that such a doubling may be useful for constructing closed superstring field theories, especially in light of~\cite{Hull:2009mi}.
At any rate, the cubic theory with a doubled Ramond sector is by itself not satisfactory, since it does not have the correct number of degrees of freedom. Following the example of~\cite{Michishita:2004by}, we tried to resolve this problem by introducing a constraint. It seems, however, that such a constraint is bound to include explicit mid-point insertions, which we have to avoid. Hence, we have to look for another sort of resolution. An appealing possibility is to further enlarge the field content as well as the gauge symmetry, such that the final number of degrees of freedom is reduced to the correct one. We currently study this possibility and generalizations thereof. The second issue studied in this work is the formal equivalence of the cubic and non-polynomial theories. All the properties studied in this work were shown to be invariant under our mappings, supporting the equivalence. However, there is one more invariant that one might wish to consider. This is the boundary state constructed from the solution~\cite{Kiermaier:2008qu} (see also~\cite{Hashimoto:2001sm,Gaiotto:2001ji,Ellwood:2008jh,Kawano:2008ry, Kishimoto:2008zj}). To study this in the context of our equivalence, one would first have to combine some ideas from~\cite{Michishita:2004rx} and~\cite{Kiermaier:2008qu}, in order to define boundary states for the supersymmetric theories. We currently study this subject. It might seem strange that the Ramond parts of our mappings~(\ref{alOfPsi}) and~(\ref{PsiofAl}) include factors of $i$. One might worry that our construction is inconsistent with the reality condition of the string field. In fact, it is the other way around. Recall that, in the Ramond sector, the coefficient fields are Grassmann odd. The reality condition is obtained by composing Hermitian conjugation and BPZ conjugation. While the former inverts the order of insertions in the usual way, the latter does not change the formal Grassmann order~\cite{Taylor:2003gn}.
Hence, the reality of the string field $\al$ implies that the string field $\xi \al$ is imaginary. The $i$'s take care of just that. Much of the recent renewal of interest in string field theory is due to Schnabl's solution~\cite{Schnabl:2005gv} and subsequent work~\cite{Okawa:2006vm,Fuchs:2006hw,Fuchs:2006an,Rastelli:2006ap, Ellwood:2006ba,Fuchs:2006gs,Okawa:2006sn, Erler:2006hw,Erler:2006ww,Schnabl:2007az,Kiermaier:2007ba,Erler:2007rh, Okawa:2007ri,Fuchs:2007yy,Ellwood:2007xr, Kishimoto:2007bb,Fuchs:2007gw,Kiermaier:2007vu, Kiermaier:2007ki}. Having explicit analytical expressions for string field theory solutions evoked the realisation that these solutions can be formally represented as pure gauge solutions. In the bosonic theory, solutions can be written using singular gauge string fields~\cite{Ellwood:2009zf}. For the cubic superstring field theory in the NS sector, the equivalence of~\cite{Fuchs:2008zx} provides a formal gauge form for the solutions. The ``gauge string field'' is formal in this case, since it resides in the large Hilbert space. The extension of the equivalence to the Ramond sector presented here does not seem to define a solution in the form of a formal gauge solution. This statement, however, is ill-defined, since the finite gauge transformation of the Ramond sector does not exist for the cubic theory. One might suspect that in a well defined refinement of the cubic formalism it would be possible to write solutions as formal gauge ones. However, in the cubic formalism with two Ramond fields that we introduced in section~\ref{sec:2FermiCub}, one sees that the gauge transformation of the Ramond fields~(\ref{CubGauge2Fer}) is zero for a solution with zero Ramond fields. Thus, one cannot get a solution with non-trivial Ramond fields as a gauge solution around the vacuum. This can be traced to the fact that this theory has no gauge symmetry with fermionic generators. There is also no supersymmetry in this theory.
It might be the case that in a more physical refinement of the cubic theory there will be a natural way to write solutions as formal gauge ones. While studying the mappings between the cubic and non-polynomial two-Ramond-field theories, we twice encountered expressions that were formally of the form of zero times a divergence coming from a mid-point collision of operators. This implies that a regularization is needed in which these expressions can be consistently set to zero. We thus have two requirements for a consistent regularization, on top of the requirement that we had from calculating the NS action~\cite{Fuchs:2008zx}. On the other hand, we also have two more string fields, i.e., the Ramond ones, and two more mappings (in each direction) to modify for defining the regularizations. Hence, the existence of a sound regularization is as plausible as it is in the NS case. We would like to stress that, even regardless of the issue of singularities, a regularization is desirable, since the mappings we introduced include various mid-point insertions on string fields. Mid-point insertions on string fields are highly constrained, and a formulation that avoids them altogether would be more reliable~\cite{Kroyter:2009zj}. \section*{Acknowledgments} I would like to thank Nathan Berkovits, Ted Erler, Udi Fuchs, Michael Kiermaier, Leonardo Rastelli, Scott Yost and Barton Zwiebach for many discussions on the issues covered in this work. It is a pleasure to thank the organizers and participants of the KITP workshop ``Fundamental Aspects of Superstring Theory'', where part of this work was performed, for hospitality and for providing a very stimulating and enjoyable environment. While at the KITP, this research was supported by the National Science Foundation under Grant No.~PHY05-51164.
It is likewise a pleasure to thank the Simons Center for Geometry and Physics and the organizers and participants of the ``Simons Workshop on String Field Theory'' for their great hospitality and for many discussions on and around the topics presented in this manuscript. This work is supported by the U.S. Department of Energy (D.O.E.) under cooperative research agreement DE-FG0205ER41360. My research is supported by an Outgoing International Marie Curie Fellowship of the European Community. The views presented in this work are those of the author and do not necessarily reflect those of the European Community.
\section{Introduction} Early spectroscopic measurements of emission lines formed at $\sim$10~MK obtained during the rise phase of solar flares revealed blue-shifted components corresponding to plasma upflows of several hundred km~s$^{-1}$\ \citep{doschek80,antonucci82}. Theoretical models of 1D solar loops in which energy is deposited near the loop top demonstrated that such large upflows can be generated as low-lying chromospheric plasma is heated to coronal temperatures and ``evaporates'' into the coronal part of the loop \citep{1983ApJ...265.1090C}. The plasma upflow sites have also been correlated with brightenings in the chromosphere \citep{1986ApJ...309..435M}, transition region and corona \citep{2006SoPh..234...95D}. In the present work we define intense brightenings that occur during the flare rise phase as \emph{flare kernels}. It is not known whether flare kernels always exhibit fast upflows in $\sim$10~MK emission lines, but the many reported measurements of hot, blue-shifted emission lines in X-ray and ultraviolet spectra suggest that this is possible. The availability since 2010 of high spatial and time resolution EUV images from the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory (SDO), coupled with high resolution EUV spectra from the EUV Imaging Spectrometer (EIS) on board \emph{Hinode}, gives an unprecedented capability for studying flare kernels. The present work focuses on one particular kernel observed by both instruments on 2011 February 16 during the rise phase of an M1 class flare. Flare kernels can be interpreted in terms of the Standard Flare Model \citep[see][and references therein]{benz08}, which posits an energy release site in the corona that leads to either a stream of non-thermal particles or a thermal conduction front being directed down the coronal loop legs towards the photosphere.
Heating then occurs in the chromosphere, leading to flare ribbons and brightenings, and hot plasma rises towards the flare site, giving bright post-flare loops. The flare kernels are then the sites of chromospheric heating, and would be expected to be bright in emission lines formed at all temperatures from the chromosphere to $\sim$10~MK. In addition, large plasma flows would be expected as heated plasma rises into the corona. The case in which non-thermal particles heat the chromosphere has been investigated theoretically, and two evaporation scenarios identified: for low particle fluxes ($< 3\times 10^{10}$~\ecs) ``gentle'' evaporation occurs, with speeds up to 30~km~s$^{-1}$; while for high particle fluxes ``explosive'' evaporation occurs, with speeds of several hundred km~s$^{-1}$\ in the corona \citep{1985ApJ...289..414F}. Evidence for both evaporation scenarios has been found \citep[e.g.,][]{2006ApJ...642L.169M,2003ApJ...588..596T} and, further, evidence for the momentum of hot upflowing plasma balancing the momentum of cool, downflowing plasma during explosive evaporation has been reported in some events \citep{2006A&A...455.1123T}. The detection of blueshifted emission components of hot lines was initially confined to spatially-unresolved X-ray spectra \citep{doschek80,antonucci82}, which always showed a dominant emission component near the rest wavelength of the line, with the high velocity component present as a weaker shoulder on the short wavelength side. The dominant, at-rest emission component cannot be explained by chromospheric evaporation models unless an ensemble of many events occurring at different times is assumed \citep{1983ApJ...265.1090C,1997ApJ...489..426H,1998ApJ...500..492H,2005ApJ...618L.157W}.
Spatially resolved spectra would be expected to resolve flare kernels and thus reveal a dominant, highly-blueshifted plasma component but a detailed study had to await the launch of the Coronal Diagnostic Spectrometer (CDS) on board the Solar and Heliospheric Observatory (SOHO) in 1995. CDS observed the \ion{Fe}{xix} \lam592.2 line, formed at 9~MK, with a spatial resolution of around 6--8\arcsec. Many papers reported blueshifts of the \ion{Fe}{xix} line, with upflow speeds of up to 230~km~s$^{-1}$\ \citep{2003ApJ...586.1417B, 2004ApJ...613..580B, 2005A&A...438.1099H,2006ApJ...638L.117M,2006ApJ...642L.169M, 2006SoPh..234...95D}. The fairly low spectral resolution of CDS meant that multi-component fitting of the \ion{Fe}{xix} line was generally not possible, although \citet{2003ApJ...588..596T}, \citet{2006ApJ...638L.117M} and \citet{2006A&A...455.1123T} reported observations for which two components could be fit. For the latter two papers the high velocity component was dominant and the blueshifted components implied upflow speeds of 230 and 200~km~s$^{-1}$. These results were the first instances whereby a dominant upflow component was measured during the impulsive phase of a flare, suggesting that the evaporation site had been resolved. We note that the upflow velocities are somewhat smaller than the values derived by multiple component fitting of spatially unresolved X-ray data for which values of around 400~km~s$^{-1}$\ could be measured \citep{antonucci82,1989ApJ...344..991F}. This may reflect the lower temperature of formation of the \ion{Fe}{xix} line compared to the X-ray lines. The EIS instrument presents a significant improvement in both spatial and spectral resolution over CDS, and it also has access to hotter emission lines from \ion{Fe}{xxiii} and \ion{Fe}{xxiv}, formed in the range 10--30~MK. 
The first study of the impulsive phase of a flare observed by EIS was performed by \citet{milligan09}, who presented Doppler measurements at the footpoints of a C1.1 flare. Emission lines formed below 5~MK were found to have Gaussian profiles, and Doppler shifts showed a change from redshifts to blueshifts at around 2~MK. This was cited as evidence of explosive evaporation, whereby cooler plasma recoils towards the photosphere and hotter plasma rises upwards towards the corona. Lines from the hottest species, \ion{Fe}{xxiii} and \ion{Fe}{xxiv} (10--30~MK), showed two emission components, the dominant one close to the rest wavelengths of the lines, the weaker one at velocities of $< -200$~km~s$^{-1}$. This is similar to the earlier X-ray observations, a surprise given that the footpoints are resolved by EIS. The most relevant study for the present work is that of \citet{2010ApJ...719..213W}, who presented observations of a C9.7 confined flare. The region displayed four intense brightenings during the flare rise phase that were interpreted as loop footpoints. Three spectra of \ion{Fe}{xxiii} \lam263.77 (formed at 15~MK) obtained at one of the footpoints during the rise phase and separated by 160~s show (1) a small blueshift of $-55$~km~s$^{-1}$, (2) a two component profile with a dominant component at a velocity of at least $-382$~km~s$^{-1}$, and (3) a single component profile at a velocity of $-40$~km~s$^{-1}$. For spectrum 2, the \ion{Fe}{xvi} \lam262.99 line (3~MK) showed a two component profile with the weaker, blueshifted component at $-116$~km~s$^{-1}$. None of the cooler lines shows evidence of a blueshifted component in any of the spectra. The dominant high velocity \ion{Fe}{xxiii} upflow found in spectrum 2 is similar to that found from CDS data by \citet{2006ApJ...638L.117M} and \citet{2006A&A...455.1123T}, except that the magnitude of the velocity is significantly larger.
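For reference, all of the line-of-sight velocities quoted in this section follow from measured line-centroid shifts through the non-relativistic Doppler relation $v = c\,\Delta\lambda/\lambda_0$. A minimal sketch (the observed centroid below is illustrative, chosen only to reproduce an upflow of roughly $-382$~km~s$^{-1}$\ for the \ion{Fe}{xxiii} \lam263.77 line; it is not a measurement from any of the cited papers):

```python
C_KMS = 299_792.458  # speed of light in km/s

def doppler_velocity(lambda_obs, lambda_rest):
    """Line-of-sight velocity from a line-centroid shift (non-relativistic).

    Negative values denote blueshifts, i.e. plasma upflows toward the
    observer; positive values denote redshifts (downflows).
    """
    return C_KMS * (lambda_obs - lambda_rest) / lambda_rest

# Fe XXIII rest wavelength 263.77 Angstrom; an illustrative blueshifted
# centroid at 263.434 Angstrom corresponds to an upflow near -382 km/s.
v = doppler_velocity(263.434, 263.77)
print(round(v, 1))
```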
We note that the properties derived from spectrum 2 of \citet{2010ApJ...719..213W} are quite similar to those found in the present work. The impulsive phase of a smaller B2 class flare was studied by \citet{2011A&A...526A...1D}, and a key result was the finding of blue-shifted components of \ion{Fe}{xiv}, \ion{Fe}{xv} and \ion{Fe}{xvi} lines (2--3~MK) with velocities of 40--60~km~s$^{-1}$. The rest components of the lines were stronger in intensity. The \ion{Fe}{xxiii} and \ion{Fe}{xxiv} lines were weak in this flare, and velocity results were not discussed. \citet{2011A&A...532A..27G} presented observations of a C7 flare and found a blue wing enhancement to the \ion{Fe}{xvi} \lam262.98 emission line (formed at 2.5~MK), corresponding to upflow velocities of up to 140~km~s$^{-1}$. As noted above, a blue-shifted component for \ion{Fe}{xvi} was found by \citet{2010ApJ...719..213W} and \citet{2011A&A...526A...1D}, and is also found in the present work. \citet{2011A&A...532A..27G} did not find any high velocity components for other lines, and Doppler shifts were generally in the $-20$ to 0~km~s$^{-1}$\ range. The first EIS observation of a flare with a sit-and-stare study was recently reported by \citet{2013ApJ...762..133B} for a C1 flare. Although the slit position did not lie directly on the flare loop footpoint, line-of-sight velocities measured in the leg of the loop from \ion{Fe}{xxiii} gave a peak velocity of $-208$~km~s$^{-1}$. Note that the emission line was completely blueshifted for 156~s, and then displayed a two component profile with a dominant stationary component and weak blueshifted component for a further 56~s.
\citet{doschek12} studied an M1.8 class flare observed by EIS in 2012 March, and a spectrum obtained during the rise phase did not reveal any significant Doppler flows in \ion{Fe}{xxiii} \lam263.77 (formed at 15~MK); however, emission lines of \ion{Fe}{xiii} and \ion{Fe}{xiv} (formed at 2~MK) showed extended long wavelength wings that are due to downflowing plasma. Similar profiles are found in the present work for \ion{Fe}{xiv}. Doppler shifts of $\approx$~$-100$~km~s$^{-1}$\ for \ion{Fe}{xxiii} are found at a later phase of the flare by \citet{doschek12}, suggesting the presence of hot, evaporating plasma. The new aspect of the present work is the ability to accurately co-align the EIS data with high spatial and temporal resolution images from the AIA instrument, allowing the flare evaporation site to be studied in great detail. Our analysis begins with a summary of the data-sets used (Sect.~\ref{sect.dataset}) and an overview of the active region and flare (Sect.~\ref{sect.overview}). Sect.~\ref{sect.eis} presents the analysis of the EIS flare kernel spectrum, and Sects.~\ref{sect.aia} and \ref{sect.mag} present analysis of the imaging data-sets from SDO and \emph{Hinode}. Results are summarized in Sect.~\ref{sect.summary}. \section{Dataset overview}\label{sect.dataset} The data analyzed in the present work principally come from the EUV Imaging Spectrometer \citep[EIS;][]{culhane07} on board the \emph{Hinode}\ satellite, and the Atmospheric Imaging Assembly \citep[AIA;][]{lemen12} on the Solar Dynamics Observatory (SDO). Additional data come from the Helioseismic and Magnetic Imager \citep[HMI;][]{scherrer12} on board SDO and the \emph{Hinode}/X-Ray Telescope \citep[XRT;][]{2007SoPh..243...63G}. Unfortunately the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) was in spacecraft night during the rise phase of the flare and so no data were available.
Much of the data calibration and analysis performed for the present work made use of IDL software routines that are part of the \emph{Solarsoft} distribution. AIA data downloaded from the Joint Science Operations Center (JSOC) or the Virtual Solar Observatory (VSO) are provided in level-1 format, which means that they have been flat-fielded, de-spiked and calibrated. The AIA data shown in this work were ``re-spiked'' using the IDL routine AIA\_RESPIKE, as it was found that some of the flare kernels were incorrectly flagged by the AIA de-spiking routine. The files were then processed with AIA\_PREP to convert them to level-1.5 format, which places all of the different AIA filter images onto the same plate scale. The HMI data were also processed with AIA\_PREP to place them on the same plate scale as the AIA images. The AIA detectors register events in ``data numbers'' (DN) such that DN values between 0 and 2$^{14}-1$ (=16,383) can be measured in a single exposure. During flares the maximum value can be reached, leading to saturation, and the extent of saturation varies depending on the sensitivity of the different AIA channels and the strength of the radiation that the channels measure. Usually fixed exposure times for the AIA channels are used, but during flares the exposure times can be automatically reduced to help prevent saturation. For the present observation, however, most of the AIA channels were badly affected by saturation during the flare, with the 94~\AA\ channel the only one completely unaffected. (Note that in flare conditions this channel is dominated by \ion{Fe}{xviii} \lam93.93, formed at 7~MK.) AIA obtains full-disk images at a 12~s cadence in seven different EUV filters, and at a 24~s cadence in two UV filters. The filters are identified by the wavelength of peak sensitivity, and we use the shorthand A94, A131, etc., to refer to the filters with wavelengths 94~\AA, 131~\AA, etc.
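The DN ceiling and exposure normalization described above are simple to state in code. The following is a minimal sketch (the image array and exposure time are illustrative, not taken from the actual AIA files):

```python
import numpy as np

AIA_MAX_DN = 2**14 - 1  # 14-bit readout ceiling: 16,383 DN per exposure

def saturation_mask(image_dn):
    """Return a boolean mask of pixels at the DN ceiling."""
    return image_dn >= AIA_MAX_DN

def dn_per_second(image_dn, exptime_s):
    """Convert raw DN to the DN/s units quoted in the figure captions."""
    return image_dn / exptime_s

# Illustrative 3x3 "image" with one saturated pixel
img = np.array([[120.0, 350.0, 16383.0],
                [80.0,  200.0, 4000.0],
                [60.0,  90.0,  150.0]])
mask = saturation_mask(img)
print(int(mask.sum()))                 # number of saturated pixels
print(dn_per_second(img, 2.0)[0, 0])   # DN/s for a 2 s exposure
```

Reducing the exposure time during flares lowers the DN collected per pixel, which is why automatic exposure control helps keep pixels below this ceiling.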
The pixel size of the images corresponds to an angular area of 0.6 $\times$ 0.6~arcsec$^{2}$ on the Sun. EIS spectroscopic observations are obtained by scanning a narrow slit over an area of the Sun. At each pixel position along the slit, spectra covering the ranges 170--212~\AA\ and 246--292~\AA\ are obtained; however, due to telemetry restrictions, only narrow wavelength windows centered on specific emission lines are usually downloaded. On 2011 February 16 EIS continuously ran a single observation study called HH\_Flare\_180x160\_v2 for the period 06:17--12:23~UT, yielding 64 rasters in all. For each raster the 2\arcsec\ slit scanned an area of 180 $\times$ 160~arcsec$^2$ in 5~min 45~s. An 8~s exposure time was used and the slit jumped 5\arcsec\ between exposure positions. Ten wavelength windows are obtained with the HH\_Flare\_180x160\_v2 study, and the particular emission lines studied in the present work are listed in Table~\ref{tbl.eis}. The EIS data were calibrated with the EIS\_PREP routine in \emph{Solarsoft} using the standard processing options listed on the EIS wiki\footnote{http://msslxr.mssl.ucl.ac.uk:8080/eiswiki/}, and ``missing'' data (due to warm pixels, cosmic rays, etc.) were interpolated using the procedure described in \citet{eis_sw13}. The /CORRECT\_SENSITIVITY keyword was used, which implements a wavelength-independent sensitivity decay for the instrument with an e-folding time of 1894~days; as a result, the line intensities tabulated in the present work are a factor of 2.3 higher than if the pre-launch calibration had been used. At the time of writing the EIS calibration is under revision by the EIS team, and it seems likely that some wavelength regions have decayed less strongly than suggested by the earlier analysis. If correct, then this would lower the intensities given in the present work. Given the high time cadence of the instruments used for this work, it is important to state observation times precisely.
When times of individual exposures of AIA, HMI and EIS are referred to, the time corresponds to the midpoint of the exposure given in Coordinated Universal Time (UTC). For the SDO instruments, this is the parameter T\_OBS stored in the data header, although for HMI this value has to be converted from International Atomic Time to UTC. For EIS the midpoint of an exposure is defined as the time midway between the shutter open and close times, which are stored in the EIS data file headers. \section{Overview of active region AR 11158}\label{sect.overview} Active region AR 11158 was the first major flaring active region of solar cycle 24 and so many aspects of the region's flares and magnetic evolution have already been studied. The dominant flaring activity took place during 2011 February 13--18, with the largest event being an X2.2 class flare on February 15 that produced a sunquake \citep{2011ApJ...734L..15K} and an Earth-directed coronal mass ejection \citep[CME;][]{2011ApJ...738..167S}. An M1.6 flare on February 16 also produced a CME that has been spectroscopically studied with the EIS instrument \citep{harra11, veronig11}. Prior to 15~UT on February 15 the flares from AR 11158 were generally eruptive events with significant jetting and CME activity. From 15~UT up to the M1.6 flare at 14:25~UT on February 16, however, the five C5 or greater flares that took place were all confined flares. For this work we focus on the fourth flare in this sequence: an M1.1 class flare that peaked in the GOES 1--8~\AA\ X-ray light curve at 07:44~UT on February 16. This flare was chosen as the EIS observation revealed the location of a high velocity upflow at temperatures of 10--30~MK during the flare rise phase. Such locations are often missed by EIS raster scans as they have short lifetimes and occur only in specific parts of the active region.
The topology of the active region and post-flare loops also gives a relatively clean view of the upflow site, with a clear correlation with intense, compact brightenings in AIA images. The EIS spectroscopic properties of the upflow site are similar to those measured by \citet{2010ApJ...719..213W} from a C9.7 flare observed in 2007, but the availability of SDO data gives a greater insight into the properties of the flare upflow region. Figure~\ref{fig.ar} shows before-and-after images of AR 11158 obtained with the AIA 193~\AA\ filter, which demonstrate that the peripheral loop structures remain largely unchanged following the flare. The HMI LOS magnetogram and continuum images in Figure~\ref{fig.ar} show the photospheric structure of the active region prior to the flare. \begin{figure}[h] \epsscale{0.9} \plotone{f1.eps} \caption{The upper panels show AIA 193~\AA\ images obtained before and after the 2011 February 16 07:44~UT flare. A reverse intensity scaling has been applied and the maximum intensity value is 2500~DN~s$^{-1}$ in both images. The lower panels show HMI LOS magnetogram (left) and white light continuum (right) images from before the flare. The blue contours on the top-left image show the locations of the sunspot umbrae.} \label{fig.ar} \end{figure} The flare emission is best seen in the AIA 94~\AA\ filter images and Figure~\ref{fig.goes} shows two images, one from the rise phase (a) and one taken a few minutes after the flare peak (b). The image times are indicated on the GOES X-ray light curve (Figure~\ref{fig.goes}c). During the rise phase, flare kernels are seen at three locations in the active region that approximately correspond to the footpoints of the post-flare loops that eventually appear.
The most intense flare loops connect the two central sunspots (see also Figure~\ref{fig.ar}), while another fainter, more twisted set of loops connects the central, positive polarity sunspot to the negative polarity sunspot at the east side of the active region (Figure~\ref{fig.goes}b). \begin{figure}[h] \epsscale{1.0} \plotone{f2.eps} \caption{Panels a and b show AR 11158 at two different times, as seen in the AIA 94~\AA\ filter. A negative intensity scaling is used, with black corresponding to the brightest regions. Both images have been saturated to a level of 3000~DN~s$^{-1}$; the actual intensity maxima are 5301 and 5328~DN~s$^{-1}$. An arrow indicates a group of flare kernels, one of which is studied in the present work. Blue contours show the location of sunspot umbrae as determined from co-temporal HMI data. Panel c shows the GOES 1--8~\AA\ light curve, with two vertical lines representing the times at which the A94 images were obtained.} \label{fig.goes} \end{figure} \section{EIS data analysis}\label{sect.eis} The group of A94 flare kernels highlighted in Figure~\ref{fig.goes}a was scanned by EIS with a raster that began at 07:38:36~UT and finished at 07:44:22~UT (note that EIS rasters west-to-east). Five consecutive raster positions obtained between 07:40:16~UT and 07:40:51~UT revealed five intense, compact brightenings in most of the EIS lines (Figure~\ref{fig.eis-ims}). The hot \ion{Fe}{xxiii} \lam263.77 line (formed at 15~MK) does not show the brightenings as a short, intense post-flare loop dominates the image. However, an image formed in the short wavelength wing of the line at around $-350$~km~s$^{-1}$\ does show the brightenings (lower-right panel of Figure~\ref{fig.eis-ims}). For this work we focus on the right-most of the five brightenings, which was observed by EIS at 07:40:16~UT. \begin{figure}[h] \epsscale{1.0} \plotone{f3.eps} \caption{Six images from the EIS raster that began at 07:38:36~UT.
A reverse intensity scaling has been applied to each image. The grid on each image shows lines of latitude and longitude, spaced at $1^{\circ}$ intervals. The leftmost line of longitude is $+24^{\circ}$, and the lowermost line of latitude is $-22^{\circ}$. The \ion{Fe}{xxiii} \lam263.77 blue wing corresponds to \ion{Fe}{xxiii} upflow velocities of around 350~km~s$^{-1}$. See main text for more details.} \label{fig.eis-ims} \end{figure} Figure~\ref{fig.eis-xs} shows intensity cross-sections through the selected brightening in the solar-Y direction (i.e., along the EIS slit). Each intensity cross-section is co-spatial in the X-direction and co-temporal by the nature of spectrometer observations. The cross-sections have been aligned with each other by making use of the known spatial offsets in the solar-Y direction (due to grating tilt and CCD spatial offsets) that are obtained from the IDL routine EIS\_CCD\_OFFSET. A striking feature of the brightening is that the intensity peaks at the same pixel for all temperatures from 0.3~MK (\ion{O}{vi}) to 20~MK (\ion{Fe}{xxiv}). The intensity profiles are also very similar: Gaussian shapes with a full-width at half-maximum of about 4 pixels. (The \ion{Fe}{xvi} profile is affected by significant background coronal emission.) The EIS spatial resolution has been independently estimated as 3--4\arcsec\ based on comparisons with AIA images and studies of transition region brightenings\footnote{See discussion at \url{http://msslxr.mssl.ucl.ac.uk:8080/eiswiki}.}, suggesting the observed feature is not resolved by EIS. This is confirmed by the higher spatial resolution images from AIA (Sect.~\ref{sect.aia}). \begin{figure}[h] \epsscale{1.0} \plotone{f4.eps} \caption{The dotted lines in each plot show the intensity cross-section of \ion{O}{vi} \lam184.11. The solid lines show the intensity cross-sections of the four species identified in the top-left corner of each plot. 
The \ion{Fe}{xxiv} \lam192.03 (blue) intensity was obtained at a wavelength of 191.77~\AA, corresponding to a \lam192.03 velocity of $-400$~km~s$^{-1}$. The intensity of the \lam192.03 (blue) cross-section has been divided by a factor 2. No scaling has been applied to the other cross-sections.} \label{fig.eis-xs} \end{figure} At the location of the brightening it is found that \ion{Fe}{xxiv} \lam192.03 has a strong emission component at a velocity of $\approx$ $-400$~km~s$^{-1}$\ -- see discussion later. The intensity cross-section for this blue component is also shown in Figure~\ref{fig.eis-xs} and is consistent with the others. The presence of plasma with a wide range of temperatures compacted into a single bright feature is expected in the standard model of chromospheric evaporation, whereby chromospheric plasma at the footpoint of the flare loop is rapidly heated to multi-million degree temperatures. Further support comes from the presence of rapidly upflowing plasma at the hottest temperatures; however, the finding of multiple velocity components in the emission lines is not compatible with a simple, single loop model and suggests further complexity. In the following sections we study the spectroscopic properties of the brightening in more detail. To do this we create a single spectrum for the brightening in order to measure line intensities, widths and Doppler shifts. The process is complicated, however, by the fact that the background coronal emission is significant for some of the emission lines. Sect.~\ref{sect.extract} describes how the background emission is dealt with and how the spectrum is extracted. Sect.~\ref{sect.fitting} discusses blending issues for the emission lines, and Sects.~\ref{sect.vel}--\ref{sect.dens} present the results for Doppler shifts, line widths, emission measure and density.
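The wing velocities quoted above follow from the usual non-relativistic Doppler relation, $v = c\,\Delta\lambda/\lambda_0$. As a check, the sketch below reproduces the \lam192.03 wing velocity from the wavelengths given in the text:

```python
C_KMS = 299792.458  # speed of light in km/s

def doppler_velocity(lambda_obs, lambda_rest):
    """LOS velocity in km/s; negative values are blueshifts (upflows)."""
    return C_KMS * (lambda_obs - lambda_rest) / lambda_rest

# Fe XXIV 192.03 A wing sampled at 191.77 A (wavelengths from the text)
v = doppler_velocity(191.77, 192.03)
print(round(v))  # about -406 km/s, i.e. the ~-400 km/s quoted above
```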
\begin{deluxetable}{cccccccc} \tablecaption{EIS emission line parameters.\label{tbl.eis}} \tabletypesize{\footnotesize} \tablehead{ & & & & & & \colhead{Non-thermal} & \colhead{Log$_{10}$ (Column} \\ \colhead{Ion} & \colhead{Wavelength} & \colhead{Log\,$T_{\rm mem}$} & \colhead{Component} & \colhead{Intensity} & \colhead{LOS velocity} & \colhead{velocity} & \colhead{emission measure} \\ & & & & \colhead{(\ecss)} & \colhead{(km~s$^{-1}$)} & \colhead{(km~s$^{-1}$)} & \colhead{/cm$^{-5}$)} \\ } \startdata \input{table.tex} \enddata \end{deluxetable} \subsection{Extracting the spectrum of the flare kernel}\label{sect.extract} As discussed in the previous section, EIS does not spatially resolve the flare kernel site and so we assume that the Gaussian-shaped intensity profiles shown in Figure~\ref{fig.eis-xs} all come from a single, unresolved structure. In practice, this means that we sum the intensity from 9 Y-pixels distributed over the intensity cross-sections to yield a single spectrum for the flare kernel. However, Figure~\ref{fig.eis-xs} shows that the kernel appears against background emission in all of the EIS lines. This background emission is particularly significant for \ion{Fe}{xv} and \ion{Fe}{xvi}. In addition to affecting the line intensities, this background emission distorts the line profiles, which in turn affects the measurement of line widths and velocities. For this reason we subtract a pre-flare spectrum from the flare kernel spectrum. The pre-flare spectrum was obtained from the previous EIS raster, which began at 07:32~UT. This raster showed much reduced emission at the positions of the five flare kernels and so we can consider it to give a good representation of the pre-flare corona at the locations of the brightenings. The background corona subtraction is performed with the \emph{Solarsoft} routine EIS\_MASK\_SPECTRUM, which performs the procedure as follows.
Each spectral window from an EIS raster yields a 3D intensity array of wavelength, solar-X and solar-Y pixels. Since the satellite pointing does not change between the pre-flare and flare rasters, one could in principle simply take the spatial pixels corresponding to the brightening and subtract the pre-flare spectra from the flare spectra. The situation is complicated, however, by the fact that the EIS wavelength scale drifts with time due to the thermally-induced motion of the EIS grating during an orbit (referred to as spectrum drift). Therefore in order to perform the pre-flare spectrum subtraction, it is necessary to place the spectra at each spatial pixel in each raster onto a common wavelength scale. This is done with the IDL routine EIS\_SHIFT\_SPEC, which shifts the spectra for each spatial pixel of a raster onto a common wavelength scale by making use of the spectrum drift and slit tilt corrections of \citet{2010SoPh..266..209K}. The pre-flare spectrum can then be subtracted from the flare spectrum at each spatial pixel. The background-subtracted spectra from the 9 Y-pixels that span the flare kernel are then summed to yield the final spectrum. Figure~\ref{fig.sub-spec} shows how the subtracted spectrum (blue) compares with the un-subtracted spectrum (black) and the pre-flare spectrum (red) for four emission lines. For \ion{Fe}{xv} \lam284.16 and \ion{Fe}{xvi} \lam262.98 it is clear that subtracting the pre-flare emission lines, which are close to rest, pushes the centroids to shorter wavelengths. The method for setting the rest wavelength for these spectra is discussed in Sect.~\ref{sect.vel}. \begin{figure}[h] \epsscale{1.0} \plotone{f5.eps} \caption{Four emission lines observed by EIS are shown.
The red lines show the pre-flare spectrum, the black lines the flare kernel spectrum, and the blue lines the spectrum obtained by subtracting the pre-flare spectrum from the flare spectrum.} \label{fig.sub-spec} \end{figure} A key uncertainty in the background subtraction method is the EIS pointing stability: if the pointing changed between the 07:32 and 07:38~UT rasters, then the pre-flare background at the site of the brightening is no longer valid. The \emph{Hinode}\ pointing is known to exhibit peak-to-peak fluctuations of up to 3\arcsec\ in both X and Y directions \citep{2010ApJ...713..573M}, but a benefit for the present analysis is that \emph{Hinode}\ was pointed at AR 11158 for long periods of time, meaning that the satellite and the instruments received a nearly fixed illumination from the Sun (re-pointing of the satellite leads to varying illumination and thus thermal effects). The IDL routine EIS\_JITTER returns estimates of the instrument pointing jitter, and this shows variations of $\le 0.25$\arcsec\ in solar-X and $\le 1.2$\arcsec\ in solar-Y. For the solar-Y direction, pointing can be checked by comparing intensity cross-sections and it can be seen that features in solar-Y away from the flare site are well-matched to within a pixel between the 07:32 and 07:38~UT rasters. We thus believe our background subtraction method is not significantly affected by pointing jitter. \subsection{Line fitting notes}\label{sect.fitting} The spectrum extraction method described above yields a single EIS spectrum for the flare kernel. The emission lines were fit with Gaussian functions using the IDL routine SPEC\_GAUSS\_EIS. Most of the lines have non-Gaussian shapes, suggesting the presence of multiple plasma components at different velocities. Specific details of how each emission line was treated are given below. The line fit parameters, expressed as intensity, line-of-sight (LOS) velocity and non-thermal broadening, are given in Table~\ref{tbl.eis}.
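The non-thermal broadening quoted in Table~\ref{tbl.eis} is conventionally obtained by removing the instrumental and thermal widths from the observed width in quadrature, i.e. ${\rm FWHM_{obs}^2 = FWHM_{inst}^2} + 4\ln 2\,(\lambda/c)^2\,(2kT/M + \xi^2)$. The sketch below illustrates the arithmetic only; the observed FWHM, instrumental width and temperature are illustrative values, not taken from the EIS fits:

```python
import math

C_KMS = 299792.458   # speed of light in km/s
KB = 1.380649e-23    # Boltzmann constant, J/K
AMU = 1.660539e-27   # atomic mass unit, kg

def nonthermal_velocity(fwhm_obs, fwhm_inst, wavelength, t_max, mass_amu):
    """
    Non-thermal velocity xi (km/s) from an observed FWHM (Angstroms),
    using the standard quadrature decomposition of the line width.
    wavelength in Angstroms, t_max in K, mass_amu in atomic mass units.
    """
    factor = 4.0 * math.log(2.0) * (wavelength / C_KMS) ** 2
    v_therm_sq = 2.0 * KB * t_max / (mass_amu * AMU) / 1.0e6  # (km/s)^2
    total_sq = (fwhm_obs**2 - fwhm_inst**2) / factor          # (km/s)^2
    return math.sqrt(total_sq - v_therm_sq)

# Illustrative numbers only: an Fe XV 284.16 A line, T = 2.2 MK,
# iron mass 55.85 amu, and an assumed instrumental FWHM of 0.056 A
xi = nonthermal_velocity(0.095, 0.056, 284.16, 2.2e6, 55.85)
```

With these assumed inputs the routine returns a non-thermal velocity of a few tens of km~s$^{-1}$, comparable in order of magnitude to typical active region values.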
From the line intensity the column emission measure can be derived, and this quantity is also shown in Table~\ref{tbl.eis}. The temperature of maximum emission, $T_{\rm mem}$, given in Table~\ref{tbl.eis} is the temperature at which a line's contribution function peaks and is derived using version 7.1 of the CHIANTI database \citep{dere97,chianti71}. The error estimates shown in Table~\ref{tbl.eis} ultimately derive from photon statistics, and are generally small due to the strength of the emission lines. As discussed earlier there is some uncertainty in the absolute calibration of EIS and this is not reflected in the intensity uncertainties. The lack of an absolute wavelength calibration for EIS means that there is a systematic uncertainty of $\approx$~10~km~s$^{-1}$\ (see Sect.~\ref{sect.vel}) that is not included in Table~\ref{tbl.eis}. The uncertainty in the non-thermal broadening is discussed in Sect.~\ref{sect.wid}. The column emission measure is directly proportional to the line intensity, and so the uncertainty (not displayed in Table~\ref{tbl.eis}) can be obtained from the line intensity uncertainty. The two coolest lines, \ion{O}{vi} \lam184.11 and \ion{Fe}{x} \lam184.54, were both fit with single Gaussians. The former is well-isolated in the spectrum, but the line's doublet partner at 183.94~\AA\ is present at the edge of the window and masks out any high velocity (200--300~km~s$^{-1}$) component that may be present. However, the partial line that is present is consistent with the expected \lam183.94/\lam184.11 intensity ratio, suggesting that there is no significant high velocity component. \ion{Fe}{x} \lam184.54 is blended on the short wavelength side with \ion{Fe}{xi} \lam184.41, which is noticeably stronger (relative to \lam184.54) than in typical active region conditions, likely due to the high density of the brightening.
The two lines were fit simultaneously with two Gaussians, and the fits were found to be good, with in particular no suggestion that \lam184.54 is asymmetric. The emission lines of \ion{Fe}{xii--xvi} all show non-Gaussian profiles and they have been fit with multiple Gaussians. As EIS emission lines typically have a full-width at half-maximum of about 3 pixels, they are not ideally suited for detailed modeling in this manner. In particular, the parameters of weak components can be quite uncertain. For this work, we force the multiple Gaussian components for a single line to have the same width. While there is no physical justification for this, it does provide a baseline against which the plasma components of different ions can be compared, and the results described later suggest that this is valid. The \ion{Fe}{xii} \lam192.39 line is very close to the edge of its wavelength window and also lies on a sloping background level resulting from the nearby strong \ion{Fe}{xxiv} \lam192.03 line. The line shows a clear asymmetry, with a steeper blue side to the profile, and has been fit with two Gaussians. The stronger \lam195.12 line also shows a clear asymmetry, but the profile is complicated by the presence of \ion{Fe}{xii} \lam195.18 in the red wing. This line becomes quite strong at high densities \citep[see, e.g.,][]{young09} and thus, if the two Gaussian model for \lam192.39 is assumed for \lam195.12, then \lam195.18 would enhance the component on the red side of the line. Fitting two Gaussians to the profile confirms that this is the case and, given the difficulties in deconvolving the blend, we choose not to present the parameters for \lam195.12. The density sensitive \ion{Fe}{xiv} lines at 264.79 and 274.20~\AA\ are both blended: \lam264.79 with \ion{Fe}{xi} \lam264.77, and \lam274.20 with \ion{Si}{vii} \lam274.18.
It is not possible to directly estimate the contributions of either of these lines to the \ion{Fe}{xiv} lines from the flare spectra; however, the authors have studied complete EIS spectra of an active region and find that the contributions are typically $<5$\%\ when the \ion{Fe}{xiv} lines are strong. We therefore believe it is reasonable to neglect these blending species. Both \ion{Fe}{xiv} lines show clear asymmetries such that the long wavelength sides of the profiles are less steep than the short wavelength sides. The \lam274.20 long wavelength wing extends beyond the edge of the wavelength window (which was only 16 pixels wide) and a two Gaussian fit with a constant background was used. We note that the \lam274.20 profile is very similar to that found by \citet{doschek12} during the rise phase of an M1.8 flare. The \lam264.79 wavelength window is wider than that for \lam274.20 but the long wavelength wing is still found to extend beyond the window edge, and a three Gaussian fit was performed with a constant background. The third, weakest Gaussian is at a velocity of $+166$~km~s$^{-1}$. There is a nearby \ion{Fe}{xvi} line at 265.00~\AA\ which, if it is assumed to have the same velocity as the dominant \ion{Fe}{xvi} \lam262.98 component, would place it at $+180$~km~s$^{-1}$\ in the \ion{Fe}{xiv} \lam264.79 reference frame. The \ion{Fe}{xvi} \lam265.00/\lam262.98 ratio has a fixed value of 0.096 based on atomic physics parameters, while the intensity ratio of the third Gaussian to the dominant intensity component of \lam262.98 is 0.16. We thus conclude that the third Gaussian component of \ion{Fe}{xiv} \lam264.79 is dominated by \ion{Fe}{xvi} and so we do not list it in Table~\ref{tbl.eis}. The two Gaussian fits to \lam264.79 and \lam274.20 lead to very similar LOS velocities and non-thermal velocities for the two components, giving confidence that the fits accurately model the line profiles.
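The blend identification above rests on simple ratio arithmetic, which can be restated as follows (both numerical values are taken from the text; the interpretation, that \ion{Fe}{xvi} supplies the bulk of the feature, is the one argued above):

```python
# Fixed atomic-physics ratio for Fe XVI 265.00/262.98 (value from the text)
THEORETICAL_RATIO = 0.096

# Measured ratio of the third Gaussian of the Fe XIV 264.79 fit to the
# dominant Fe XVI 262.98 component (value from the text)
measured_ratio = 0.16

# Fraction of the measured feature that Fe XVI alone can account for;
# since this is well over half, the feature is dominated by Fe XVI
# rather than by a genuine third Fe XIV component
fe16_fraction = THEORETICAL_RATIO / measured_ratio
print(round(fe16_fraction, 2))  # 0.6
```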
The intensity ratios of the two components are different, however, which reflects different densities for the two plasma components -- this is discussed in Sect.~\ref{sect.dens}. \ion{Fe}{xv} \lam284.16 shows a dominant intensity component with two weaker components on either side, and was fit with three Gaussians of the same width. \lam284.16 is known to be partly blended with \ion{Al}{ix} \lam284.03, but this is negligible for the present spectrum (\lam284.16 has the largest intensity of the lines measured). \ion{Fe}{xvi} \lam262.98 shows a similar line profile to \ion{Fe}{xv} \lam284.16, with a dominant intensity component and weaker emission on each side. The short wavelength emission could not be accurately fit with a single Gaussian and so two Gaussians were necessary, giving four in all. A weak, unknown line is found in EIS spectra at 262.70~\AA\ \citep{brown08}, corresponding to a velocity of $-320$~km~s$^{-1}$, which is too far away to account for the \ion{Fe}{xvi} component at $-249$~km~s$^{-1}$. We note that \citet{2011A&A...526A...1D} showed a \ion{Fe}{xvi} \lam262.98 line profile for a flare kernel that also displayed an extended blue wing that was fit with two Gaussian components corresponding to upflow speeds of 60 and 170~km~s$^{-1}$. The \ion{Fe}{xxiii} and \ion{Fe}{xxiv} line profiles are the most striking in the flare kernel spectrum as they show strong blue-shifted emission at around $-400$~km~s$^{-1}$\ in addition to a weaker component near the rest velocities of the lines (Figure~\ref{fig.hot-profiles}). We note that the \ion{Fe}{xxiii} \lam263.77 profile is quite similar to that shown by \citet{2010ApJ...719..213W} from a C9.7 flare kernel. The approximate background levels in the spectrum, as determined from the minimum intensity within each line's wavelength window, are also shown in Figure~\ref{fig.hot-profiles}.
The narrowness of the wavelength windows used for the study means that the background levels are likely to be over-estimates of the real background. \begin{figure}[h] \epsscale{1.0} \plotone{f6.eps} \caption{Emission line profiles from the flare kernel spectrum for (a) \ion{Fe}{xxiii} \lam263.77, (b) \ion{Fe}{xxiv} \lam192.03, and (c) \ion{Fe}{xxiv} \lam255.11. The dashed lines indicate the background level in the spectra.} \label{fig.hot-profiles} \end{figure} The similarity of the line profiles for all three lines provides strong evidence that the highly-blueshifted components are real and not due to strong emission from lines on the short wavelength side of each line, but we briefly discuss known blending issues for each. \citet{2011A&A...526A...1D} demonstrated that two weak lines blend with \ion{Fe}{xxiii} \lam263.77 between $-50$ and $-150$~km~s$^{-1}$\ (relative to \lam263.77). Figure~13 of their work also showed that there is another line at about $-350$~km~s$^{-1}$\ that could potentially compromise the high velocity component found here. This line is found in quiet Sun and active region spectra \citep{brown08}, so it is probably formed at temperatures of $\log\,T=6.0$--6.2. From an active region spectral atlas obtained on 2011 February 20, we find that \ion{Fe}{xii} \lam192.39 is around a factor of 170 more intense than the line at 263.4~\AA. From Table~\ref{tbl.eis}, \lam192.39 is only a factor of 3 stronger than the line at 263.4~\AA\ (interpreted as the blue wing of \lam263.77), giving strong evidence that the strong blue-shifted component seen in Figure~\ref{fig.hot-profiles}(a) is in fact due to \ion{Fe}{xxiii}. Several lines blend directly with \ion{Fe}{xxiv} \lam192.03, but when the flare line becomes strong it easily overwhelms these lines.
The blending lines are \ion{Fe}{vii} \lam192.01, \ion{Fe}{viii} \lam192.04, \ion{Fe}{xi} \lam192.02 and an unknown line at 192.09~\AA\ \citep{2009ApJ...707..173Y,2009A&A...508..513D,2010A&A...514A..41D}, and a useful guide to the extent of the blending can be obtained by considering the strength of the nearby \ion{Fe}{xii} \lam192.39. The combined blending lines between 192.01 and 192.09~\AA\ can exceed the strength of the \lam192.39 line in some circumstances \citep{landi09}, but generally they are much weaker. It is not possible to estimate the extent of blending in the present spectra, but it is possible that they make a significant contribution to the rest component of the \ion{Fe}{xxiv} line. The width of the line is consistent with formation from a very hot ion, however. On the short wavelength side of the profile there is \ion{Fe}{xiv} \lam191.81, which blends with the highly-blueshifted component of the \ion{Fe}{xxiv} line. The CHIANTI \ion{Fe}{xiv} model predicts that this line is about 2\%\ of the strength of \ion{Fe}{xiv} \lam274.20 and so, based on the intensities in Table~\ref{tbl.eis}, this line can be shown to be negligible. \ion{Fe}{xxiv} \lam255.11 has several lines nearby, and the narrow width of the EIS observing window makes it difficult to fit the spectrum. The lines near \lam255.11 are an unknown line at 254.70~\AA, \ion{Fe}{xvii} \lam254.86, \ion{Fe}{viii} \lam255.10, \ion{Fe}{viii} \lam255.37 and \ion{Fe}{x} \lam255.39. The \ion{Fe}{xvii} line is particularly important when studying blue-shifted components of the \ion{Fe}{xxiv} line as it becomes strong in flares and lies within the blue wing; however, there is no evidence that the line is significant in the present spectrum, and so the observed spectral features are fit with two Gaussians to represent the rest and highly-blueshifted components of \ion{Fe}{xxiv} \lam255.11.
The similarity of the measured velocities to those of \ion{Fe}{xxiv} \lam192.03 gives confidence in the fit, although the non-thermal velocities are somewhat lower. This may be because the background level is difficult to estimate, given the many blending lines. We highlight here the fact that the \ion{Fe}{xxiv} \lam192.03/\lam255.11 ratios for the two velocity components differ significantly from the value expected from atomic theory. The CHIANTI atomic model gives an expected ratio of 2.5, which is independent of density and only very slightly temperature dependent. The ratios found here are 3.9 and 6.2 for the blue-shifted and rest components, respectively. The rest component of \lam192.03 could be over-estimated because of the blending noted earlier, while the blending lines for \lam255.11 could also lead to over-estimation of this line's components, although we feel this is likely to be offset by an over-estimate of the spectrum background level. To investigate the \ion{Fe}{xxiv} ratio further, flare data from 2011 February 16 obtained after the flare peak were studied. At such times the \ion{Fe}{xxiv} emission shows little dynamics (in terms of broadening or Doppler shifts) and is very strong, so it should be free from blending. We found \lam192.03/\lam255.11 ratios ranging from 4.7 to 5.1 in six spectra, again significantly above the theoretical ratio. G.~Del Zanna has suggested in several papers \citep{2009A&A...508.1517D,2011A&A...533A..12D,2012A&A...537A..38D} that there is a calibration problem for EIS whereby lines near 250~\AA\ are too weak relative to lines in the EIS SW channel by a factor of up to two. The \ion{Fe}{xxiv} results found here appear to confirm this. \subsection{Dynamics: Doppler shifts}\label{sect.vel} The line-of-sight (LOS) velocities given in Table~\ref{tbl.eis} are derived from Doppler shifts of the lines relative to ``rest'' wavelengths.
There is no direct means to determine an absolute wavelength scale for EIS, and so indirect methods are needed -- see \citet{young12} for a discussion. For the present work, we used the 10 Y-pixels at the bottom of the \ion{Fe}{xv} \lam284.16 slit image to generate a pixel mask that represented a background region. The region at the bottom of the raster, while still dominated by active region plasma, is well-separated from the locations where the main flare dynamics occur, and thus is relatively stable. From these background spectra, the centroids of the following lines are measured: \ion{Fe}{x} \lam184.54, \ion{Fe}{xii} $\lambda$\lam192.39, 195.12, \ion{Fe}{xiv} $\lambda$\lam264.79, 274.20, \ion{Fe}{xv} \lam284.16 and \ion{Fe}{xvi} \lam262.98, which are assumed to correspond to rest wavelengths. The rest wavelengths at the location of the flare kernel are then determined by simply applying the known EIS slit tilt values \citep{2010SoPh..266..209K}. The remaining lines are either not present in the background spectra or are too weak to be measured reliably. These lines are paired with the nearest of the measured background lines, and the ``rest separations'' of the lines are assumed to be those determined from the rest wavelengths within the CHIANTI atomic database \citep{dere97,chianti71}. This then allows the rest wavelengths of the lines not measured in the background spectra to be determined from those lines that are. This method of determining rest wavelengths is not as accurate as that described by \citet{young12}, and we estimate an accuracy of $\pm$~10~km~s$^{-1}$. As can be seen from Table~\ref{tbl.eis}, however, velocities much larger than this are seen in the flare spectra. Figure~\ref{fig.vel}a shows the velocities derived for the EIS emission lines and their various components. The velocities are divided into five groups that we believe are physically connected. 
For example, the lines of \ion{Fe}{xii--xvi} all show a weak red-shifted component and are represented by the light gray points. For the lines formed over $\log\,T=5.5$--6.5, the velocities of the dominant emission components are represented by the blue points. The change from redshift to blueshift between \ion{Fe}{x} and \ion{Fe}{xii} is similar to the patterns found by \citet{milligan09} and \citet{2010ApJ...719..213W}, which were cited as evidence of explosive evaporation. A dashed line connects the \ion{Fe}{xvi} high velocity component to those of \ion{Fe}{xxiii} and \ion{Fe}{xxiv} to indicate that they might be related, although this is speculative. \begin{figure}[h] \epsscale{1.0} \plotone{f7.eps} \caption{Plots showing (a) LOS velocity, (b) non-thermal velocity and (c) emission measure values derived from the EIS spectrum of the flare kernel. The colors of the points plotted in (b) and (c) correspond to the groups of velocity components shown in plot (a). Gray and light blue points are not shown on plot (b) as they have the same values as the set of blue points. The unfilled square in plot (b) shows the non-thermal velocity derived from \ion{Fe}{xii} \lam192.39 when it is fit with a single Gaussian. The cross on plot (c) shows the value derived from the AIA 94~\AA\ channel.} \label{fig.vel} \end{figure} The \ion{Fe}{xxiii} and \ion{Fe}{xxiv} lines both show a highly-blueshifted plasma component that is stronger than the component close to the rest velocity, and the profiles are shown in Fig.~\ref{fig.hot-profiles}. This result demonstrates that a \emph{dominant}, high-velocity upflow component can be seen during flares when a spectrometer has high spatial resolution. As discussed in the introduction, earlier X-ray spectra averaged over the whole disk always revealed a dominant plasma component near the rest velocity of high temperature lines, with the high velocity component present as a weak shoulder to this component.
A previous observation of a dominant high velocity component in the \ion{Fe}{xxiii} \lam263.77 line was presented by \citet{2010ApJ...719..213W} -- see the middle panel of Figure~4 of their work -- who found a velocity of $-382$~km~s$^{-1}$\ during the rise phase of a C9.7 flare. This suggests that such profiles may be typical of M-class flares, although the observation is sensitive to the timing and position of the EIS slit. \subsection{Dynamics: non-thermal line broadening}\label{sect.wid} The measured widths of the EIS emission lines comprise three components: an instrumental width, a thermal width, and a non-thermal width. The instrumental width is known and varies with the positions of the spectra along the EIS slit \citep{eis_sw7}. The thermal width is determined by assuming an isothermal temperature for the plasma emitting the particular ion species under consideration. For the present case we use the $T_{\rm mem}$ values given in Table~\ref{tbl.eis}. The non-thermal width is the remaining line width after the thermal and instrumental widths have been subtracted and is usually expressed as a velocity, $\xi$, defined as \begin{equation} 4 \ln 2 \left( {\lambda\over c} \right)^2 \xi^2 = W^2 - W_{\rm I}^2 - W_{\rm th}^2, \end{equation} where $\lambda$ is the wavelength of the emission line, $c$ the speed of light, $W$ is the full-width at half-maximum (FWHM) of the fitted Gaussian function, $W_{\rm I}$ is the instrumental width and $W_{\rm th}$ is the thermal width, both expressed as FWHM values. The uncertainties on $\xi$ are determined from the 1-$\sigma$ measurement errors on $W$ and an estimated uncertainty of 3~m\AA\ on $W_{\rm I}$ \citep{eis_sw7}. The values of $\xi$ are given in Table~\ref{tbl.eis} and displayed graphically in Figure~\ref{fig.vel}b. The blue points show the values derived from \ion{O}{vi} through \ion{Fe}{xvi}.
As noted previously, the emission lines for \ion{Fe}{xii}--\ion{Fe}{xvi} were fit with multiple Gaussians forced to have the same width, and it is noticeable that a large decrease in $\xi$ occurs between \ion{Fe}{x} and \ion{Fe}{xii}. We believe this is due to the fact that the \ion{Fe}{x} line was fit with one Gaussian, while \ion{Fe}{xii} was fit with two. The black, unfilled square in Figure~\ref{fig.vel}b shows the non-thermal velocity if the \ion{Fe}{xii} line is fit with a single Gaussian, giving a much higher value. This suggests that the large width of the \ion{Fe}{x} line arises from multiple velocity components that are unresolved in the line profile. The relatively small non-thermal velocities of \ion{Fe}{xii--xvi} suggest that, when the background coronal emission is subtracted and individual plasma components can be identified through multiple Gaussian fits, these components do not show any extra broadening over typical coronal plasma. Therefore, enhanced broadening of coronal lines found in previous measurements is likely to be due to the superposition of multiple plasma components with different velocities rather than, say, broadening due to turbulence. The red and green points in Figure~\ref{fig.vel}b show the $\xi$ values for the fast upflowing and stationary components of the \ion{Fe}{xxiii} and \ion{Fe}{xxiv} lines (see also Figure~\ref{fig.vel}a), and they are found to be very similar at around 120~km~s$^{-1}$. \citet{milligan11} previously presented non-thermal velocity measurements from a flare kernel and found remarkably similar values to those found here, although in that case only an at-rest plasma component was detected.
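The width subtraction in the equation above is straightforward to evaluate numerically. The following sketch recovers $\xi$ in km~s$^{-1}$ from FWHM values in \AA; the observed and instrumental widths used at the bottom are illustrative placeholders, not measurements from Table~\ref{tbl.eis}:

```python
import math

C_KMS = 2.998e5      # speed of light [km/s]
K_B = 1.381e-16      # Boltzmann constant [erg/K]
AMU = 1.661e-24      # atomic mass unit [g]

def thermal_fwhm(lam_ang, t_kelvin, mass_amu):
    """Thermal Doppler FWHM [Angstrom] at wavelength lam_ang for an
    ion of mass mass_amu at temperature t_kelvin."""
    v_th_kms = math.sqrt(2.0 * K_B * t_kelvin / (mass_amu * AMU)) / 1.0e5
    return math.sqrt(4.0 * math.log(2.0)) * (lam_ang / C_KMS) * v_th_kms

def nonthermal_velocity(w_obs, w_inst, w_th, lam_ang):
    """Non-thermal velocity xi [km/s] from
    4 ln2 (lam/c)^2 xi^2 = W^2 - W_I^2 - W_th^2 (all FWHM, Angstrom)."""
    w2 = w_obs**2 - w_inst**2 - w_th**2
    if w2 <= 0.0:
        return 0.0   # observed width fully accounted for by thermal+instrumental
    return math.sqrt(w2 / (4.0 * math.log(2.0))) * C_KMS / lam_ang

# Illustrative (not measured) values for Fe XXIV 192.03 A at log T = 7.2
w_th = thermal_fwhm(192.03, 10**7.2, 56.0)
print(nonthermal_velocity(0.135, 0.056, w_th, 192.03))
```

In practice the 1-$\sigma$ errors on $W$ and the 3~m\AA\ uncertainty on $W_{\rm I}$ quoted above would be propagated through the same expression.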
\subsection{Emission measure} The fourth quantity shown in Table~\ref{tbl.eis} is the column emission measure, which is determined directly from the measured line intensity through the expression: \begin{equation}\label{eq.em} 4\pi I= {hc \over \lambda} \epsilon(X) C_\lambda EM(d) \end{equation} where $I$ is the measured line intensity, $h$ Planck's constant, $c$ the speed of light, $\lambda$ the wavelength of the emission line, $\epsilon(X)$ the element abundance of element X relative to hydrogen, $C_\lambda$ contains various atomic parameters and $EM(d)$ is the column emission measure. The atomic parameters contained in $C_\lambda$ are computed using the CHIANTI routine INTEGRAL\_CALC, available in \emph{Solarsoft}. The temperatures shown in Figure~\ref{fig.vel}c are the temperatures of maximum emission, $T_{\rm mem}$. The coronal abundances of \citet{2012ApJ...755...33S} have been used, the ionization balance calculations are those from CHIANTI \citep{dere09}, and a density of $3.4\times 10^{10}$~cm$^{-3}$ was assumed (see Sect.~\ref{sect.dens}). We note that the emission measure values are insensitive to the precise density used, however. An additional emission measure point shown in Figure~\ref{fig.vel}c was derived from the AIA 94~\AA\ channel as follows. At temperatures of $\approx$ 10~MK, this channel is dominated by \ion{Fe}{xviii} \lam93.93 and so an emission measure can be derived by assuming that the plasma is isothermal at the $\log\,T_{\rm mem}=6.85$ value of this line. We compute an isothermal spectrum from CHIANTI for the A94 channel, convolve it with the A94 response function, and sum the result. 
The count rate in the A94 channel, $C$ (expressed in data numbers per second), can then be related to the column emission measure as \begin{equation}\label{eq.aia-em} EM = 5.25 \times 10^{26} \alpha C, \end{equation} where $\alpha$ accounts for the difference in spatial pixel size between AIA and EIS due to the fact that the flare kernels are not resolved by either instrument (see Sect.~\ref{sect.aia}), and takes a value of 0.18. The isothermal spectrum used for this calculation was computed using the CHIANTI ionization balance, the hybrid coronal abundances of \citet{2012ApJ...755...33S}, a density of $3\times 10^{10}$~cm$^{-3}$, and unit column emission measure; the pre-flight A94 response function, available through the \emph{Solarsoft} AIA\_GET\_RESPONSE routine, was used. To determine the A94 count rate, the EIS-AIA co-alignment procedure (Sect.~\ref{sect.coalign}) enabled the position of the EIS slit relative to AIA to be found. The region in the AIA image corresponding to the 2\arcsec\ wide EIS slit was extracted, the A94 counts summed over the brightening, and a background level subtracted, leaving a total count rate of 11,470~DN~s$^{-1}$. The A94 emission measure is then $6.02\times 10^{30}$~cm$^{-5}$. Due to the method used, we consider this an upper limit to the real emission measure. Figure~\ref{fig.vel}c shows that the A94 emission measure point lies within a temperature gap in the EIS emission measure distribution, and thus provides valuable information. EIS does have access to \ion{Ca}{xvii} \lam192.86 (formed at 6~MK) and unblended \ion{Fe}{xvii} lines (formed at 4~MK), but these lines were not obtained in the present study. Including the AIA point, the emission measure distribution is fairly uniform from 0.3~MK to 30~MK. The rest components of the hottest ions (green points in Figure~\ref{fig.vel}c) have similar values to those derived from the dominant emission components of the cooler ions (blue points in Figure~\ref{fig.vel}c).
The fast upflowing hot plasma (red points) has an emission measure around a factor of 2.5 larger than that for the rest components, and the A94 emission measure value is closer to this value, perhaps suggesting that \ion{Fe}{xviii} also has a dominant intensity component from the upflowing plasma. The redshifted components of \ion{Fe}{xii--xvi} have significantly lower emission measures (gray points in Figure~\ref{fig.vel}c), with a sharp drop between \ion{Fe}{xiv} and \ion{Fe}{xv}, while the fast upflowing component of \ion{Fe}{xvi} is much stronger than that of \ion{Fe}{xv} (light blue points). All of the EIS emission lines used in the present study are from iron ions, except for \ion{O}{vi}, and we note that the choice of abundance file would affect the \ion{O}{vi} emission measure value. If photospheric abundances had been used, then the \ion{O}{vi} emission measure value would be reduced by a factor of three relative to the iron values. \subsection{Density and emitting volume}\label{sect.dens} Two density diagnostics are available from the HH\_Flare\_180x160\_v2 study: \ion{Fe}{xii} \lam195.18/\lam195.12 and \ion{Fe}{xiv} \lam264.79/\lam274.20. The \ion{Fe}{xii} lines cannot be accurately separated due to the non-Gaussian profiles in the flare kernel, but the \ion{Fe}{xiv} ratio is useful and has been applied in previous EIS analyses by, for example, \citet{doschek07}, \citet{milligan11} and \citet{2011A&A...526A...1D}. Table~\ref{tbl.fe14} shows the electron number densities, $N_{\rm e}$, derived from the \ion{Fe}{xiv} line intensities given in Table~\ref{tbl.eis} for the two plasma components. Component 1 has the stronger emission in both lines and has a velocity of $\approx$ $-30$~km~s$^{-1}$, while component 2 has a velocity of $+65$ to $+70$~km~s$^{-1}$. Atomic data for the density diagnostic come from version 7.1 of the CHIANTI database \citep{chianti71}.
\begin{deluxetable}{lcccc} \tablecaption{Quantities derived from the \ion{Fe}{xiv} \lam264.79/\lam274.20 ratio.\label{tbl.fe14}} \tablehead{ \colhead{Plasma} & \colhead{Ratio} & \colhead{$\log\,(N_{\rm e}/{\rm cm}^{-3})$} & \colhead{$d$/arcsec} & \colhead{$d_{\rm c}$/arcsec} \\ \colhead{Component} & & & & } \startdata 1 & $2.44\pm 0.05$ & $10.53\pm 0.05$ & $4.77\pm 1.10$ & 2.12 \\ 2 & $2.02\pm 0.14$ & $10.15\pm 0.11$ & $8.44\pm 4.30$ & 2.57 \\ \enddata \end{deluxetable} The column depth, $d$, is an important parameter that can be derived from spectroscopic data as it can potentially yield smaller length scales than those obtained by directly imaging a plasma. For example, \citet{2011A&A...526A...1D} found a column depth of 10~km for a flare kernel observed by EIS. Unfortunately, the column depth also has large uncertainties associated with it due to the dependence on the square of the density -- the uncertainties shown in Table~\ref{tbl.fe14} are derived by propagating the uncertainties on $I$ and $\log\,N_{\rm e}$. In addition, we cannot be certain of the absolute iron abundance: the \citet{2012ApJ...755...33S} hybrid coronal abundance value\footnote{We use the standard abundance notation here whereby the abundance is expressed as $\log\,\epsilon({\rm Fe})+12$.} of 7.85 lies between the \citet{feldman92} coronal value of 8.10 that is commonly used, and the photospheric value of 7.52 \citep{2011SoPh..268..255C}, thus introducing an additional factor of two uncertainty. A further uncertainty lies in the EIS absolute calibration. Pre-launch, the uncertainty was considered to be 22\%\ \citep{2006ApOpt..45.8689L}, but the degree to which the instrument sensitivity has degraded, and any wavelength dependence of this degradation, are uncertain at the time of writing. As highlighted by the \ion{Fe}{xxiv} \lam192.03/\lam255.11 ratio discussed earlier, a factor of two uncertainty is possible.
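For reference, the column depth follows from the definition of the column emission measure, $EM=N_{\rm e}^2\,d$, and the cubic column depth listed in Table~\ref{tbl.fe14} is $d_{\rm c}=(2d)^{1/3}$ for the 2\arcsec\ $\times$ 1\arcsec\ column. A minimal numerical sketch (the emission measure value below is illustrative, not a measurement):

```python
ARCSEC_CM = 7.27e7   # cm subtended by 1 arcsec at 1 AU (approximate)

def column_depth_arcsec(em_cm5, ne_cm3):
    """Column depth d [arcsec], assuming EM = Ne^2 * d with d in cm."""
    return em_cm5 / ne_cm3**2 / ARCSEC_CM

def cubic_column_depth(d_arcsec):
    """Side of the equivalent cube, d_c = (2 d)^(1/3), for a
    2x1 arcsec column (the cubic column depth of Table 2)."""
    return (2.0 * d_arcsec) ** (1.0 / 3.0)

# Component 1 of Table 2 has log Ne = 10.53; an illustrative EM of
# 4e29 cm^-5 gives a column depth close to the tabulated 4.77 arcsec
print(column_depth_arcsec(4.0e29, 10**10.53))
print(cubic_column_depth(4.77), cubic_column_depth(8.44))
```

The quadratic dependence on $N_{\rm e}$ in the first function is why the density uncertainty dominates the error budget on $d$.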
The values of $d$ derived here may thus, at worst, be uncertain by a factor of four; however, the relative values of $d$ for the two plasma components are much more accurate, as the abundance and absolute calibration uncertainties cancel. Implicit in the calculation of $d$ is the assumption that the emission line intensities come from a region with cross-sectional area 2\arcsec\ $\times$ 1\arcsec. If we assume instead that the emitting volume is a cube, then the side of this cube is $d_{\rm c}=\sqrt[3]{2d}$ -- a parameter that we refer to as the cubic column depth. We discuss the size of the emitting volume in more detail in the following section, where higher resolution AIA images are presented. \section{SDO data analysis}\label{sect.aia} Figure~\ref{fig.aia-ims} shows images of the flare kernel site obtained by the AIA instrument. The mid-point of the EIS exposure that produced the kernel spectrum discussed in the previous section was 07:40:16~UT, and so the AIA images closest to this time were chosen. AIA has nine different EUV and UV filters, but four of these gave badly saturated images. Even for the images displayed, only A94 and A335 are not saturated at any location. \begin{figure}[h] \epsscale{1.0} \plotone{f8.eps} \caption{The top three panels and two lower-left panels show AIA images of the flare kernel site at times ranging from 07:40:11 to 07:40:18~UT. The intensity scale is reversed such that dark areas correspond to high intensity. For each image the count rate of the brightest pixel in the image is displayed in the top-left. The yellow contours show the areas of highest intensity from the A94 image. The two vertical lines on the A335 image show the position of the EIS slit as determined from the co-alignment method.
The two short, thick lines in the A94 image indicate the X and Y pixels used to create the intensity cross-sections plotted in the bottom-right panel.} \label{fig.aia-ims} \end{figure} An important point to note is that the basic morphology of these images is very similar, and inspection of other data from the sequence of AIA images confirms that this is generally true. This is also consistent with the intensity images from EIS (Figures~\ref{fig.eis-ims} and \ref{fig.eis-xs}). We are thus confident that the flare kernels emit over a continuous range of temperatures from the chromosphere (as observed through the 1600 and 1700~\AA\ filters of AIA) through to temperatures of $\approx$~30~MK (the \ion{Fe}{xxiv} emission lines observed by EIS). The yellow contours from the A94 image suggest that there are slight spatial offsets between the different AIA filters of up to 2 pixels. These are consistent with the expected accuracy of the AIA image alignments (R.A.~Shine, private communication 2011), and so it is likely that the flare kernels are co-spatial at different temperatures. The size of the flare kernels is very small, and in fact the AIA de-spiking routine sometimes flags them as cosmic rays. For this reason, it is necessary to use the AIA\_RESPIKE \emph{Solarsoft} routine to put flagged spikes back into the data. Two A94 intensity cross-sections are shown in the lower-right panel of Figure~\ref{fig.aia-ims}; one in the X-direction and one in the Y-direction. The column and row chosen for these cross-sections are indicated on the A94 image. The two different flare kernels shown have narrow, Gaussian shapes, and by selecting a few such cross-sections we find average FWHMs of 2.2 and 2.0 pixels for the Y and X directions, respectively, with an uncertainty of about 0.10 pixels. These values are actually narrower than the AIA spatial resolutions of 2.5--3.0~pixels quoted in \citet{boerner12}, suggesting the instrument is performing somewhat better than expected.
The twin vertical lines over-plotted on the A335 image of Figure~\ref{fig.aia-ims} show the location of the EIS slit as determined from the co-alignment method described in Appendix~\ref{sect.coalign}. It can be seen that EIS does not necessarily observe a single flare kernel, but instead a patch that may include two or more. This complicates interpretation of the column depth values $d$ and $d_{\rm c}$ discussed in Sect.~\ref{sect.dens}. The narrow intensity width across the line of flare kernels suggests an actual width significantly smaller than an AIA pixel. We can thus make a simple model whereby the emission is uniform along the line of kernels in the region observed by the EIS slit, and has a width across the kernel line of 0.3\arcsec\ (half of an AIA pixel). This would then imply the column depth of the kernel site is $d/0.3\approx 16$\arcsec\ from the EIS \ion{Fe}{xiv} diagnostic. However, the appearance of the kernels in the AIA image should then be a series of ``spikes'', unless the spikes are aligned along the observer's line of sight. We conclude that the column depth derived from EIS is incompatible with the kernel sizes observed by AIA. The discrepancy could be resolved through one or more of the following: (i) the density is actually higher than derived from the \ion{Fe}{xiv} ratio, (ii) the EIS sensitivity is higher than assumed at the wavelengths of the \ion{Fe}{xiv} lines, and (iii) the abundance of iron is higher than assumed. With the approximate position of the EIS slit established, it is possible to construct light curves for the kernel site observed by EIS in different AIA filters, and Figure~\ref{fig.aia-lc} shows four such curves (the remaining AIA filters are affected by saturation). The light curves were derived by taking AIA images at the highest cadence (12~s for the EUV filters; 24~s for A1700) and extracting the spatial region corresponding to the 2\arcsec\ $\times$ 9\arcsec\ region that was used to create the EIS spectrum.
The counts in this region were then summed. For Figure~\ref{fig.aia-lc} the four light curves were normalized such that they each have the same pre-flare intensity and the same maximum intensity. It is clear that all four channels brighten simultaneously (at least within the resolution of the AIA instrument), with the rise to maximum taking place in about 40~s from 07:39:00 to 07:39:40~UT. There is also a small intensity rise (a few percent) in all channels beginning at 07:38:00~UT, which may be related to the subsequent large increase. The EIS observation begins within 30~s of the kernel reaching its maximum intensity level. \begin{figure}[h] \epsscale{0.7} \plotone{f9.eps} \caption{AIA light curves for the EIS flare kernel. Each curve has been normalized such that the median of the values before 07:38 is set to 0.1 and the maximum of the curve is set to 1.0. The vertical line denotes the midpoint time of the EIS exposure. An XRT light curve derived from the Be-thin filter is over-plotted with crosses.} \label{fig.aia-lc} \end{figure} A further light curve is shown in Figure~\ref{fig.aia-lc} and was obtained from the X-Ray Telescope (XRT) on board \emph{Hinode}. We have chosen data from the thin beryllium filter (Be-thin), which were obtained at about 30--45~s cadence over the period 07:35 to 07:41~UT. The observing mode changed after 07:41~UT, presumably because a flare mode was triggered, and Be-thin images were not obtained again until 07:50~UT. The flare kernels are prominent in the Be-thin images, and Figure~\ref{fig.xrt} compares the 07:40:14~UT image with the A94 image from 07:40:15~UT (see also Figure~\ref{fig.aia-ims}). The XRT image has been co-aligned with the A94 image using the brightenings in the middle-right of the images. Note that part of the XRT image is saturated. Based on the displayed co-alignment, the XRT light curve was extracted and normalized in the same manner as for the AIA images discussed earlier.
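The normalization applied to these light curves (pre-flare median mapped to 0.1, curve maximum to 1.0, as stated in the caption of Figure~\ref{fig.aia-lc}) amounts to an affine rescaling of each curve; a minimal sketch, with a hypothetical counts array:

```python
import statistics

def normalize_lightcurve(counts, n_preflare):
    """Rescale a light curve so the median of the first n_preflare
    points maps to 0.1 and the curve maximum maps to 1.0."""
    med = statistics.median(counts[:n_preflare])
    peak = max(counts)
    return [0.1 + 0.9 * (c - med) / (peak - med) for c in counts]

# Hypothetical counts: flat pre-flare background followed by a rapid rise
counts = [100, 102, 98, 101, 100, 150, 400, 900, 1000, 950]
norm = normalize_lightcurve(counts, n_preflare=5)
print(norm[0], max(norm))   # pre-flare level near 0.1, peak exactly 1.0
```

Because each channel is rescaled independently, the normalization preserves the relative timing of the rise in each filter while discarding the (very different) absolute count rates.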
The Be-thin filter has a peak temperature response at $\log\,T=7.0$ \citep{2007SoPh..243...63G} and confirms the high temperatures present in the flare kernel. \begin{figure}[h] \epsscale{0.5} \plotone{f10.eps} \caption{The upper panel shows an XRT image of the flare brightenings obtained at 07:40:14~UT with the thin beryllium filter. The lower panel shows the A94 image from 07:40:15~UT. The twin vertical lines show the location of the EIS slit at 07:40:16~UT. The contours on the lower panel show the XRT emission.} \label{fig.xrt} \end{figure} At this point we note that \citet{2012A&A...540A..24B} recently suggested that all of the AIA channels can brighten simultaneously in the early stages of a flare due to a large increase in emission at temperatures of 0.1--0.7~MK. This is evidenced by the fact that all of the AIA channels brightened simultaneously and several minutes before the hard X-ray bursts for a B5 microflare observed on 2010 July 21, in a manner consistent with previous observations of transition region emission lines observed with the SOHO/CDS instrument. The XRT light curve demonstrates that this is not the case in the present flare, as the Be-thin filter has negligible sensitivity below $\log\,T=6.0$ \citep{2007SoPh..243...63G} while, in addition, the EIS emission measure plot of Figure~\ref{fig.vel}c shows that the emission measure at $\log\,T=5.5$ is not significantly larger than at other temperatures observed by EIS. The flare kernel studied here thus cannot be explained as a transition region event. \section{Relation to magnetic field}\label{sect.mag} Figure~\ref{fig.hmi-0740} shows where the flare kernels from the A94 07:40:15~UT exposure occur in relation to the LOS magnetic field and visible continuum intensity as measured from the HMI instrument. Alignment between the two instruments was checked by comparing images obtained 150\arcsec\ to solar-north and south of the region displayed in Figure~\ref{fig.hmi-0740}.
In such quiet Sun regions, small bright points in A1700 images generally correspond well with bright points seen in plots of the absolute LOS magnetic field strength. In the present case it was found that changing the A1700 image center by $(+0.3\arcsec ,-0.5\arcsec)$ gave an improved alignment between the bright points for both quiet Sun pointings. Comparisons of A1700 and A94 images of the flare kernels suggest that there is a spatial offset of $(0.0\arcsec ,-0.8\arcsec )$ between the two, i.e., the A94 image needs to be moved to solar-north to better match the A1700 image. Therefore the offset between A94 and the HMI LOS magnetograms is $(+0.3\arcsec ,+0.3\arcsec)$. Given the assumptions made in this estimate, the accuracy may be as large as $\approx$1\arcsec; however, we can have some confidence that the brightenings are aligned along the ridge of positive magnetic polarity in Figure~\ref{fig.hmi-0740} (upper panel), rather than the narrow channel of weak opposite polarity just to the south of it, or the area of weak magnetic field to the north. Averaging a number of spatial pixels around the location of the flare kernel site considered in the present work yields an average line-of-sight magnetic field strength of $1030\pm 150$~G. The comparison with the continuum intensity image (lower panel of Figure~\ref{fig.hmi-0740}) shows that the brightenings lie within penumbral regions and extend towards, but not into, the sunspot umbra on the right side of Figure~\ref{fig.hmi-0740}. The HMI magnetograms are obtained at 45~s cadence, and it is possible to search for changes in the signal between frames.
Changes of up to 100~G are found at the location of one of the kernels between frames at 07:39:05 and 07:39:50~UT, and between 07:40:35 and 07:41:20~UT; however, such changes may simply reflect plasma dynamics (velocity shift and/or line broadening) in the \ion{Fe}{i} line used for the magnetogram measurements rather than an actual magnetic field change, and so we choose not to identify them as magnetic field changes. \begin{figure}[h] \epsscale{0.5} \plotone{f11.eps} \caption{The upper panel shows an HMI LOS magnetogram, and the lower panel an HMI white light continuum image. The blue lines show intensity contours from the AIA 94~\AA\ image. The parallel vertical lines indicate the position of the EIS slit at 07:40:16~UT.} \label{fig.hmi-0740} \end{figure} \section{Discussion and summary}\label{sect.summary} Active region AR 11458 produced a confined M1.1 class flare on 2011 February 16 that peaked at 07:44~UT. On one side of the active region a number of intense flare kernels were observed spectroscopically by the \emph{Hinode}/EIS instrument between 07:40 and 07:41~UT. The present work focused on one flare kernel site observed by EIS at 07:40:16~UT, and various spectroscopic parameters were measured. In addition, images from the SDO/AIA instrument were used to study temporal and spatial properties of the kernel site. We believe the analysis presented here gives the most complete set of ultraviolet observations yet obtained of flare kernels and presents an important reference data set against which other observations of these energy release events can be compared. The key results are summarized below. AIA and XRT images demonstrated that the flare kernel site reached maximum brightness in about 40~s, with a weak intensity enhancement for a minute immediately prior to this.
Of the four AIA channels that were not saturated (spanning temperatures from the chromosphere to 10~MK), the intensity increase was simultaneous within the resolution limits of the AIA instrument. The flare kernel is one of several that lie along a line of length $\approx$~25\arcsec, which aligns with a ridge of strong, positive magnetic field related to a nearby sunspot. The kernel is located in the penumbra of this sunspot, where the magnetic field strength is $\approx 1000$~G. The flare kernels are at the resolution limit of the AIA instrument, suggesting sizes of $<0.6$\arcsec\ ($\lesssim 400$~km). In addition, they have a similar morphology at all temperatures from the chromosphere to 30~MK, and are co-spatial to within the alignment uncertainties of the AIA channels. A single EIS spectrum of the flare kernel was obtained about 30~s after the kernel reached maximum intensity, and the following properties were determined. \begin{itemize} \item The LOS velocities of the dominant emission components of lines from \ion{O}{vi} to \ion{Fe}{xvi} (0.3--2.5~MK) decrease monotonically from $+35$~km~s$^{-1}$\ (downflows) at $\log\,T=5.5$ to $-60$~km~s$^{-1}$\ (upflows) at $\log\,T=6.4$. The transition from downflows to upflows occurs around $\log\,T=6.1$. \item \ion{Fe}{xxiii} and \ion{Fe}{xxiv} lines (formed at 10--30~MK) show two plasma components, one at $\approx$~$-400$~km~s$^{-1}$, and a weaker one at $\approx$~0~km~s$^{-1}$. \item Lines from \ion{Fe}{xii--xvi} show two to four emission components. Each line has a red-shifted component at around $+60$ to $+70$~km~s$^{-1}$, while \ion{Fe}{xv} and \ion{Fe}{xvi} have blue-shifted components at around $-150$~km~s$^{-1}$. There is also evidence for a further \ion{Fe}{xvi} component at $-250$~km~s$^{-1}$. \item All lines show non-thermal velocity broadening, although for \ion{Fe}{xii--xvi} -- lines possessing multiple plasma components -- the values are quite small at around 20--30~km~s$^{-1}$.
The cooler \ion{O}{vi} and \ion{Fe}{x} lines have larger broadenings of 60--80~km~s$^{-1}$, perhaps reflecting unresolved plasma components. The hottest lines, \ion{Fe}{xxiii} and \ion{Fe}{xxiv}, have large broadenings of around 100--120~km~s$^{-1}$. \item Emission measure values derived from EIS are fairly uniform with temperature, with values of 3--10 $\times 10^{29}$~cm$^{-5}$. The AIA 94~\AA\ emission measure value fills a gap in the temperature coverage of EIS and is consistent with the EIS values. \item The dominant, blueshifted emission component of \ion{Fe}{xiv} (2~MK) has a density of $3.4\times 10^{10}$~cm$^{-3}$; the weaker, redshifted component has a density of $1.4\times 10^{10}$~cm$^{-3}$. The column depths implied by these densities are significantly larger than the observed sizes of the flare kernels. \end{itemize} Prior to the availability of AIA and EIS, the limited spatial resolution of previous flare kernel observations prevented direct comparisons with theoretical models of the chromospheric evaporation process, and so one method for modeling the solar emission lines was to construct ensemble models of multiple events to match the data \citep{1997ApJ...489..426H,1998ApJ...500..492H,2005ApJ...618L.157W}. The new data from AIA and \emph{Hinode}\ presented here demonstrate that it is possible to obtain high quality data for individual kernel sites. Many 1D models of the evaporation process have been computed previously, and we briefly consider the model of \citet{2005ApJ...630..573A}, which is a development of the earlier work of \citet{1985ApJ...289..414F}, \citet{1994ApJ...426..387H} and \citet{1999ApJ...521..906A}.
Firstly, the observation of upflowing plasma at 400~km~s$^{-1}$\ is clear evidence for explosive evaporation, and by comparing with the light curves from XRT and AIA we can say that chromospheric evaporation was underway 80~s after line intensities began their large increase, and about 30~s after the intensities reached their peak value. For comparison, the models of \citet{2005ApJ...630..573A} showed that explosive evaporation began 73~s and 1~s after the start of events with heat fluxes of $10^{10}$ and $10^{11}$~\ecs, respectively. The time taken to reach maximum intensity is around 80~s and 2~s \citep[based on Figure~15 of][]{2005ApJ...630..573A} for these two cases, compared to the observed rise of 40~s. We speculate that the rise time observed from AIA data may serve as a proxy for the heat flux, and thus the heat flux for the present flare kernel may be between $10^{10}$ and $10^{11}$~\ecs. The small size of the flare kernels as seen by AIA suggests scales of $<0.6$\arcsec\ ($\lesssim 0.4$~Mm) at all temperatures. The 1D models of \citet{2005ApJ...630..573A} were computed over heights of 0 to 10~Mm, with chromospheric lines formed over 0--1.5~Mm, and coronal lines over 1--10~Mm. Taking into account density, the coronal emission is likely concentrated over 1--3~Mm regions. The observations are thus not inconsistent with the models. The high densities measured from the EIS \ion{Fe}{xiv} diagnostic are consistent with the later phases of the \citet{2005ApJ...630..573A} models when dense chromospheric plasma has been heated to coronal temperatures. \citet{2005ApJ...630..573A} focused on the modeling of chromospheric emission lines, and so comparisons with the higher temperature EIS velocities are not possible.
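The suggested rise-time proxy can be made slightly more quantitative under the assumption (ours, not a result of the simulations) that the rise time scales as a power law in the heat flux between the two modeled cases. Log-interpolating between (80~s, $10^{10}$~\ecs) and (2~s, $10^{11}$~\ecs) at the observed 40~s rise time gives
$$\log_{10}F\approx 10+\frac{\log_{10}(80/40)}{\log_{10}(80/2)}\approx 10.2,$$
i.e., $F\approx 1.5\times 10^{10}$~\ecs, toward the lower end of the suggested range.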
We note that the measurement of multiple emission components of the lines from \ion{Fe}{xii} to \ion{Fe}{xvi}, together with the large line widths of other lines, suggests that there are several flow patterns within the flare kernel, which may simply imply that there are multiple loop footpoints within the kernel that are not resolved by EIS. If this is the case, though, then there is some coherence between how these footpoints behave, as evidenced by distinct velocity features in the line profiles. Finally, the emission measure results show fairly uniform emission over the temperature range 0.3--30~MK, which constrains the heating profile for upper transition region and coronal plasma. The similarity of the flare kernel results presented here with those of \citet{2010ApJ...719..213W} for a similarly sized flare, and also to aspects of the results from \citet{milligan09}, \citet{milligan11}, \citet{2011A&A...526A...1D} and \citet{doschek12}, suggests that flare kernels may exhibit consistent properties. A survey of such events with EIS and AIA would thus be extremely valuable. RHESSI observations are also important, as they can constrain the energy input to the flare kernel site. \acknowledgments This work was funded by NASA under a contract to the U.S.\ Naval Research Laboratory. Hinode is a Japanese mission developed and launched by ISAS/JAXA, with NAOJ as domestic partner and NASA and STFC (UK) as international partners. It is operated by these agencies in co-operation with ESA and NSC (Norway). {\it Facilities:} \facility{Hinode(EIS)}, \facility{Hinode(XRT)}, \facility{SDO(AIA)}, \facility{SDO(HMI)}, \facility{GOES}.
\section{Introduction} Two operator algebras $A$ and $B$ are called stably isomorphic if the algebras $A\otimes \cl K$ and $B\otimes \cl K$ are isomorphic as operator algebras. Here, $\cl K$ is the set of compact operators acting on $l^2(\bb N).$ Stably isomorphic $C^*$-algebras are strongly Morita equivalent in the sense of Rieffel. The same is true of non-self-adjoint operator algebras if we consider the strong Morita equivalence that was introduced by Blecher, Muhly, and Paulsen in \cite{bmp}. However, the converse is not true, even in the case of $C^*$-algebras \cite{bgr}. We introduce a new Morita-type equivalence between operator algebras: Let $A$ and $B$ be operator algebras that are possibly non-self-adjoint. We say that $A$ and $B$ are $\sigma $-strongly $\Delta $-equivalent, and write $A\sim _{\sigma \Delta }B$, if there exist completely isometric homomorphisms $\alpha: A\rightarrow \alpha(A),\;\; \beta: B\rightarrow \beta (B)$ and a $\sigma $-ternary ring of operators $M$ such that \begin{equation}\label{giasemi}\alpha (A)=\overline{[M^*\beta (B)M]}^{\|\cdot\|} , \;\; \beta (B)=\overline{[M\alpha (A)M^*] }^{\|\cdot\|}. \end{equation} See the definition of a $\sigma $-ternary ring of operators in Definition \ref{110}. In the proof of \cite[Theorem 3.2]{elehoust}, see also \cite[Lemma 3.4]{elehoust}, we noticed that if $A, B$ are operator algebras possessing countable approximate identities, $M$ is a ternary ring of operators, and the triple $(A, B, M)$ satisfies (\ref{giasemi}), then $M$ is necessarily a $\sigma $-ternary ring of operators. We used this fact in order to prove that $A$ and $B$ are stably isomorphic. Subsequently, in \cite[Theorem 4.6]{elekak}, we extended the proof to the case of operator spaces. In the present paper, we prove that $\sim _{\sigma \Delta }$ is an equivalence relation in the class of operator algebras, and we use this fact to prove that $A\sim _{\sigma \Delta }B$ if and only if $A$ and $B$ are stably isomorphic.
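As a simple illustration of (\ref{giasemi}) (a sketch for orientation only; it is not used in the sequel), let $A\subseteq B(H)$ be an operator algebra, let $n\in \bb N,$ and let $M\subseteq B(H, H^n)$ be the column space $$M=\{(\lambda _1I_H,\dots ,\lambda _nI_H)^t: \lambda _i\in \bb C\}.$$ Then $M$ is a $\sigma $-ternary ring of operators (finitely many terms suffice in Definition \ref{110}), and $$\overline{[M^*M_n(A)M]}^{\|\cdot\|}=A, \;\;\overline{[MAM^*]}^{\|\cdot\|}=M_n(A),$$ so that $A\sim _{\sigma \Delta }M_n(A).$ This is consistent with the fact that $A$ and $M_n(A)$ are stably isomorphic.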
In \cite{elehoust}, we studied the relationship between $A$ and $B$ when (\ref{giasemi}) holds for a ternary ring of operators (TRO) $M$ that is not necessarily a $\sigma $-TRO. This relation is not equivalent to the existence of an operator algebra isomorphism between $A\otimes \cl K$ and $B\otimes \cl K.$ We also consider a weaker relation $\subset _{\sigma \Delta }$ between operator algebras: We say that $A$ $\sigma \Delta $-embeds into $B$ if there exists a projection $p$ in the center of $\Delta (B^{**}),$ where $\Delta (B^{**})$ is the diagonal of the second dual operator algebra of $B,$ such that $pBp$ is an operator algebra and $A\sim _{\sigma \Delta }pBp.$ In this case, we write $A\subset _{\sigma \Delta }B.$ We prove that $\subset _{\sigma \Delta }$ is transitive. For the case of $C^*$-algebras, we prove that $A\subset _{\sigma \Delta }B$ if and only if there exists a $*$-homomorphism from $B\otimes \cl K$ onto $A\otimes \cl K,$ which is true if and only if there exists an ideal $I$ of $B$ such that $A\sim _{\sigma \Delta }B/I.$ We investigate whether it is true that $A\sim _{\sigma \Delta }B$ if $A\subset _{\sigma \Delta }B$ and $B\subset _{\sigma \Delta }A.$ In general, this is not true (see Section \ref{non}). It is also not true even in the case of $C^*$-algebras (see Example \ref{2000000}). However, we prove that if $A$ and $B$ are $C^*$-algebras such that $A\subset _{\sigma \Delta } B$ and $B\subset _{\sigma \Delta } A $, then there exist projections $r, \hat r$ in the centers of $A^{**}$ and $B^{**}$, respectively, such that $Ar\sim _{\sigma \Delta }B\hat r$ and $A (id_{A^{**}}-r) \sim _{\sigma \Delta }B(id_{B^{**}}-\hat r). $ A dual version of the results obtained in this article can be found in \cite{elest}. In the following, we describe the notations and symbols used in this paper.
If $H, K$ are Hilbert spaces, then $B(H, K)$ is the space of bounded operators from $H$ to $K.$ We write $B(H)$ for $B(H, H).$ A ternary ring of operators (TRO) is a subspace of some $B(H, K)$ satisfying $MM^*M\subseteq M$ (see the definition of a $\sigma $-TRO in Definition \ref{110}). An operator algebra $A$ is an operator space that is also a Banach algebra, for which there exists a completely isometric homomorphism $\alpha: A \rightarrow B(H).$ In this article, when we consider an operator algebra, we mean an operator algebra with a contractive approximate identity. We note that $C^*$-algebras possess contractive approximate identities automatically. If $X$ is an operator space, then $M_\infty (X)$ is the set of $\infty \times \infty $ matrices whose finite submatrices have uniformly bounded norm. The space $M_\infty (X)$ is an operator space. In addition, $M_\infty ^{fin}(X)$ will denote the subspace of $M_\infty (X)$ consisting of ``finitely supported matrices.'' We write $K_\infty (X)$ for the norm closure in $M_\infty (X)$ of $M_\infty ^{fin}(X).$ It is well-known that the space $K_\infty (X)$ is completely isometrically isomorphic to $X\otimes \cl K,$ where $\otimes $ is the minimal tensor product \cite{bm}. For further details on the operator space theory that is used in this paper, we refer the reader to the books \cite{bm}, \cite{er}, \cite{paul}, and \cite{pis}. A nest $\cl N\subseteq B(H)$ is a totally ordered set of orthogonal projections containing the zero and identity operators and closed under arbitrary suprema and infima. Given a nest $\cl N\subseteq B(H),$ by $\Alg{ \cl N}$ we denote the corresponding nest algebra: $$\{x\in B(H): (I_H-n)xn=0,\;\;\forall \;\;n\;\in \;\cl N\}.$$ Given an operator algebra $A,$ we denote its center by $Z(A)$ and its diagonal $A\cap A^*$ by $\Delta (A).$ If $S$ is a subset of a vector space, then we denote the linear span of the elements of $S$ by $[S]$.
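To fix the nest algebra notation with a standard finite-dimensional example (recorded only for orientation), let $H=\bb C^n$ and $\cl N=\{0=p_0\leq p_1\leq \dots \leq p_n=I_H\},$ where $p_k$ is the orthogonal projection onto the first $k$ coordinates. Then $\Alg{\cl N}$ is the algebra of upper triangular $n\times n$ matrices, and its diagonal $\Delta (\Alg{\cl N})$ is the algebra of diagonal matrices.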
\section{Preliminaries} The purpose of this section is to prove Lemma \ref{150}, which is required to prove that $\sim _{\sigma \Delta }$ is an equivalence relation in Section \ref{xxxxxx}. \begin{definition}\label{110} Let $H,K$ be Hilbert spaces, and $M\subseteq B(H,K)$ be a norm-closed TRO. We call $M$ a $\sigma $-TRO if there exist sequences $\{m_i,\; n_i,\; i\in \bb N\}\subseteq M$ such that $$\lim_l\sum_{i=1}^lm_im_i^*m=m, \;\;\lim_l\sum_{i=1}^lmn_i^*n_i=m, \;\;\forall m\in M$$ and $$ \nor{ \sum_{i=1}^l m_im_i^*}\leq 1 , \;\;\nor{ \sum_{i=1}^l n_i^*n_i}\leq 1, \;\forall\; l.$$ \end{definition} \begin{remark} A norm-closed TRO $M$ is a $\sigma $-TRO if and only if the $C^*$-algebras $\overline{[M^*M]}^{\|\cdot\|}, \overline{[MM^*]}^{\|\cdot\|} $ are $\sigma$-unital. A proof of this fact can be found in Theorem 2.1 in \cite{brown}. \end{remark} \begin{lemma}\label{120} Let $A\subseteq B(H), B\subseteq B(K)$ be $C^*$-algebras and $M\subseteq B(H,K)$ be a $\sigma $-TRO such that $$B=\overline{[M^*AM]}^ {\|\cdot\|}, \;\;\; MBM^*\subseteq A.$$ If $A$ is $\sigma $-unital, then $B$ is $\sigma $-unital. \end{lemma} \begin{proof} Let $(a_n)_{n\in \bb N}\subseteq A$ be such that $\lim_n a_na=a\; \forall \;\;a\in\; A.$ In addition, let $\{m_i: i\in \bb N\}\subseteq M$ be such that $\lim_l\sum _{i=1}^lm_i^*m_im^*=m^* \;\forall\; m\;\in M$ and $\nor{ \sum_{i=1}^l m_i^*m_i}\leq 1 \;\forall\; l.$ It suffices to prove that $B$ contains a strictly positive element. Define $$b= \sum_{l=1}^\infty \sum_{n=1}^\infty \sum_{i,j=1}^l\frac{m_i^*a_nm_im_j^*a_n^*m_j}{ 2^n2^l\|a_n\|^2}.$$ Because $$\nor{ \sum_{i,j=1}^l m_i^*a_nm_im_j^*a_n^*m_j }=\nor{\sum_{i=1}^lm_i^*a_n^*m_i}^2\leq \|a_n\|^2,$$ we have that $$\sum_{l=1}^\infty \sum_{n=1}^\infty \nor{\sum_{i,j=1}^l\frac{m_i^*a_nm_im_j^*a_n^*m_j }{2^n2^l\|a_n\|^2}}<+\infty .$$ Thus, the element $b$ is well defined.
Observe that $b\geq 0.$ If $\phi $ is a state of $B$ such that $\phi (b)=0,$ then $$\phi (\sum_{i,j=1}^l m_i^*a_nm_im_j^*a_n^*m_j ) =0,\;\;\forall \;\; n,\;l.$$ If $a\in A,\; m,s,t,r\in M,$ then the Cauchy--Schwarz inequality for $\phi $ implies that $$\phi (\sum_{i=1}^lm_i^*a_nm_im^*ast^*r)=0\;\; \forall\;\; n,\;l.$$ Because $$m_im^*ast^*r\in MM^*AMM^*M\subseteq AM,$$ we have that $$\lim_n a_nm_im^*ast^*r=m_im^*ast^*r.$$ Thus, $\phi (\sum_{i=1}^lm_i^*m_im^*ast^*r)=0 ,\; \forall\; l.$ Because $$\lim_l\sum_{i=1}^lm_i^*m_im^*=m^*,$$ we have that $\phi (m^*ast^*r)=0$ for all $m,s, t,r\in M, a\in A.$ Because $B=\overline{[M^*AMM^*M]} ^{\|\cdot\|}$, we conclude that $\phi =0$. This contradiction shows that $b$ is strictly positive. \end{proof} \begin{lemma}\label{130} Let $E,F,M_1, M_2$ be TROs such that the algebra $\overline{[M_2^*M_2]} ^{\|\cdot\|}$ is $\sigma $-unital and $$E=\overline{[M_2^*FM_1]} ^{\|\cdot\|}, \;\;\; F=\overline{[M_2EM_1^*]} ^{\|\cdot\|}.$$ If it also holds that the algebra $ \overline{[EE^*]} ^{\|\cdot\|} $ is $\sigma $-unital, then the algebra $\overline{[FF^*]} ^{\|\cdot\|} $ is also $\sigma $-unital. \end{lemma} \begin{proof} Observe that $$ \overline{[FF^*]} ^{\|\cdot\|} =\overline{[M_2EE^*M_2^*]} ^{\|\cdot\|} .$$ Let $\{m_i: i\in \bb N\}\subseteq M_2$ be such that $\lim_l\sum_{i=1}^lm_im_i^*m=m \;\forall\; m\;\in\; M_2,$$$ \nor{\sum_{i=1}^lm_im_i^*}\leq 1\;\forall \;l,$$ and let $(a_n)_n\subseteq \overline{[EE^*]} ^{\|\cdot\|} $ be an approximate unit. As in Lemma \ref{120}, we can prove that the element $$b= \sum_{l=1}^\infty \sum_{n=1}^\infty \sum_{i,j=1}^l\frac{m_ia_nm_i^*m_ja_n^*m_j^*}{ 2^n2^l\|a_n\|^2}$$ is strictly positive in $\overline{[FF^*]} ^{\|\cdot\|}.$ Thus, $\overline{[FF^*]} ^{\|\cdot\|}$ is $\sigma $-unital. \end{proof} \begin{lemma}\label{140} Let $E, F, M_1, M_2$ be TROs such that $M_1, M_2, F$ are $\sigma $-TROs and $$E= \overline{[M_2FM_1^*]} ^{\|\cdot\|}, \;\; M_2^*EM_1\subseteq F.$$ Then, $E$ is a $\sigma $-TRO.
\end{lemma} \begin{proof} It suffices to prove that the $C^*$-algebras $ \overline{[EE^*]} ^{\|\cdot\|}, \overline{[E^*E]} ^{\|\cdot\|} $ are $\sigma $-unital. Define the $C^*$-algebras $$ \Pi (E)=\left(\begin {array}{clr} \overline{[E^*E]} ^{\|\cdot\|} & E^*\\ E& \overline{[EE^*]} ^{\|\cdot\|} \end {array}\right), \;\;\; \Pi (F)=\left(\begin {array}{clr} \overline{[F^*F]} ^{\|\cdot\|} & F^*\\ F& \overline{[FF^*]} ^{\|\cdot\|} \end {array}\right). $$ Because $F$ is a $\sigma $-TRO, the algebra $\Pi (F)$ is $\sigma $-unital. Furthermore, it is easy to see that $$ (M_1\oplus M_2)\Pi (F)(M_1\oplus M_2)^*=\Pi (E) $$ and $$(M_1\oplus M_2)^*\Pi (E)(M_1\oplus M_2)\subseteq \Pi (F) .$$ Lemma \ref{120} implies that $\Pi (E)$ is $\sigma $-unital. Thus, the $C^*$-algebras $ \overline{[EE^*]} ^{\|\cdot\|}, \overline{[E^*E]} ^{\|\cdot\|} $ are $\sigma $-unital. \end{proof} \begin{lemma}\label{150} Let $H, K, L$ be Hilbert spaces, $M\subseteq B(H,K), N\subseteq B(K, L)$ be $\sigma $-TROs, and $D$ be the $C^*$-algebra generated by the sets $MM^*, N^*N.$ Then, $T=\overline{[NDM]} ^{\|\cdot\|}$ is a $\sigma $-TRO. \end{lemma} \begin{proof} We have that $$NDMM^*DN^*NDM\subseteq NDM.$$ Thus, $TT^*T\subseteq T$, and so $T$ is a TRO. We define the TRO $$Z=\left(\begin{array}{clr} \overline{[M^*D]} ^{\|\cdot\|}\\ \overline{[ND]} ^{\|\cdot\|}\end{array}\right).$$ Then, $$ZZ^*=\left(\begin{array}{clr} M^*DM & M^*DN^* \\ NDM & NDN^*\end{array}\right).$$ Let $$\{m_i: i\in \bb N\}\subseteq M,\;\; \{n_i: i\in \bb N\}\subseteq N$$ be such that $$\nor{\sum _{i=1}^lm_i^*m_i}\leq 1, \;\; \nor{\sum _{i=1}^ln_in_i^*}\leq 1, \;\forall\; l$$ and $$ \lim_l \sum _{i=1}^lm_i^*m_i m^*=m^* \;\forall\; m\;\in M,\;\; \lim_l\sum _{i=1}^ln_in_i^*n=n\; \forall\; n\;\in \;N.
$$ The elements $$a_l= \left(\begin{array}{clr} \sum _{i=1}^lm_i^*m_i & 0 \\ 0 & \sum _{i=1}^ln_in_i^*\end{array} \right), l\in \bb N$$ belong to $$\left(\begin{array}{clr} M^*M & 0 \\ 0 & NN^* \end{array} \right)\subseteq ZZ^*, $$ and satisfy $\lim_l a_lx=x, \forall x\in \overline{[ZZ^*]} ^{\|\cdot\|}.$ Thus, $ \overline{[ZZ^*]} ^{\|\cdot\|} $ is a $\sigma $-unital $C^*$-algebra. Now, we have that $Z=\overline{[ZD]} ^{\|\cdot\|}$ and $$\overline{[Z^*Z]} ^{\|\cdot\|}= \overline{DMM^*D+DN^*ND} ^{\|\cdot\|}.$$ We can easily see that $D=\overline{[MM^*D+N^*ND]} ^{\|\cdot\|}$, and thus $$\overline{[Z^*Z]} ^{\|\cdot\|} =\overline{DD} ^{\|\cdot\|}=D.$$ Now, apply Lemma \ref{130} for $$M_1=\bb C,\; M_2=Z^*,\; E=Z,\; F=D. $$ We obtain that $$ \overline{[M_2EM_1^*]} ^{\|\cdot\|}=\overline{[Z^*Z]} ^{\|\cdot\|}=D=F,\;\;\; \overline{[M_2^*FM_1]} ^{\|\cdot\|}=\overline{[ZD]} ^{\|\cdot\|}=Z=E, $$ and $\overline{[M^*_2M_2]} ^{\|\cdot\|}=\overline{[ZZ^*]} ^{\|\cdot\|}=\overline{[EE^*]} ^{\|\cdot\|}$ is $\sigma $-unital. Lemma \ref{130} implies that $\overline{[FF^*]} ^{\|\cdot\|}$ is $\sigma $-unital, and thus $D$ is a $\sigma $-unital $C^*$-algebra. Now, $$\overline{[NDM]} ^{\|\cdot\|}=T, \;\;N^*TM^*=N^*NDMM^*\subseteq D.$$ Lemma \ref{140} implies that $T$ is a $\sigma $-TRO. \end{proof} \section{$\sigma $-strong $\Delta $-equivalence}\label{xxxxxx} \begin{definition} \label{221}Let $A$ and $B$ be operator algebras acting on the Hilbert spaces $H$ and $L$, respectively. We call them $\sigma $-strongly TRO-equivalent if there exists a $\sigma $-TRO $M\subseteq B(H, L)$ such that $$A=\overline{[M^*BM]}^{\|\cdot\|} , \;\; B=\overline{[MAM^*] }^{\|\cdot\|}. $$ In this case, we write $A\sim _{\sigma TRO}B.$ \end{definition} \begin{definition} \label{222}Let $A$ and $B$ be operator algebras.
We call these $\sigma $-strongly $\Delta $-equivalent if there exist completely isometric homomorphisms $\alpha: A\rightarrow \alpha(A), \; \beta: B\rightarrow \beta (B)$ such that $\alpha (A)\sim _{\sigma TRO}\beta (B).$ In this case, we write $A\sim _{\sigma \Delta }B.$ \end{definition} \begin{theorem}\label{223} Let $A, B$ be $\sigma $-strongly $\Delta $-equivalent operator algebras. Then, for every completely isometric homomorphism $\alpha: A\rightarrow \alpha(A)$ there exists a completely isometric homomorphism $\beta: B\rightarrow \beta (B)$ such that $\alpha (A)\sim _{\sigma TRO}\beta (B).$ \end{theorem} \begin{proof} We may assume that $H, L, M$ are as in Definition \ref{221}. By $Y$, we denote the space $Y=\overline{[BMA]} ^{\|\cdot\|}.$ Let $K$ be the $A$-balanced Haagerup tensor product $K=Y\otimes ^h_AH.$ This is a Hilbert space \cite{bmp}. Define $$\beta : B\rightarrow B(K), \;\;\;\beta (b)(y\otimes h)=(by)\otimes h.$$ By Lemma 2.10 in \cite{elehoust}, $\beta $ is a completely isometric homomorphism. From the same article, if $m\in M,$ we define $$\mu (m): H\rightarrow K, \; \; \mu (m)(\alpha (a)(h))=(ma)\otimes h.$$ The map $\mu: M \rightarrow \mu (M)$ is a TRO homomorphism. Thus, $\mu (M)$ is a $\sigma $-TRO. By Theorem 2.12 in \cite{elehoust}, we have that $$\alpha (A)=\overline{[\mu (M)^*\beta (B)\mu (M)]}^{\|\cdot\|} , \;\; \beta (B)=\overline{[\mu (M)\alpha (A)\mu (M)^*] }^{\|\cdot\|}. $$ The proof is complete. \end{proof} \begin{theorem}\label{224} The $\sigma $-strong $\Delta $-equivalence of operator algebras is an equivalence relation in the class of operator algebras. \end{theorem} \begin{proof} It suffices to prove the transitivity property. 
Let $A, B,$ and $C$ be operator algebras such that $A\sim _{\sigma \Delta }B$ and $B\sim _{\sigma \Delta }C.$ Therefore, there exist a $\sigma $-TRO $M$ and completely isometric homomorphisms $\alpha: A\rightarrow \alpha(A), \; \beta: B\rightarrow \beta (B)$ such that $$\alpha (A)=\overline{[M^*\beta (B)M]}^{\|\cdot\|} , \;\; \beta (B)=\overline{[M\alpha (A)M^*] }^{\|\cdot\|}. $$ By Theorem \ref{223}, there exist a $\sigma $-TRO $N$ and a completely isometric homomorphism $\gamma: C \rightarrow \gamma (C)$ such that $$\beta (B)=\overline{[N^*\gamma (C)N]}^{\|\cdot\|} , \;\; \gamma (C)=\overline{[N\beta (B)N^*] } ^{\|\cdot\|} . $$ Let $D$ be the $C^*$-algebra generated by the set $\{MM^*\}\cup \{N^*N\}.$ By Lemma \ref{150}, the space $T=\overline{[NDM]} ^{\|\cdot\|}$ is a $\sigma $-TRO. As in the proof of Theorem 2.1 in \cite{elehoust}, we can prove that $$\alpha (A)=\overline{[T^*\gamma (C)T]} ^{\|\cdot\|} , \gamma (C)=\overline{[T\alpha (A)T^*]} ^{\|\cdot\|} .$$ Thus, $A\sim _{\sigma \Delta }C. $ \end{proof} \begin{theorem}\label{225} Let $A, B$ be operator algebras. Then, $A$ and $B$ are $\sigma $-strongly $\Delta $-equivalent if and only if they are stably isomorphic. \end{theorem} \begin{proof} We assume that $M$ is a $\sigma $-TRO satisfying $$A=\overline{[M^*BM]}^{\|\cdot\|} , \;\; B=\overline{[MAM^*] }^{\|\cdot\|}. $$ Theorem 4.6 in \cite{elekak} implies that there exists a completely isometric onto linear map $K_\infty (A)\rightarrow K_\infty (B).$ By using the Banach-Stone theorem for operator algebras, we may assume that this map is also a homomorphism \cite[4.5.13]{bm}. For the converse, suppose that $K_\infty (A)$ and $ K_\infty (B)$ are completely isometrically isomorphic as operator algebras. Let $R_\infty $ be the space of infinite rows consisting of compact operators. Then, $R_\infty $ is a $\sigma $-TRO, and we have that $$R_\infty K_\infty (A)R_\infty^*=A, \;\;\; \overline{[R_\infty ^*AR_\infty ]}^{\|\cdot\|}=K_\infty (A).
$$ Thus, $A\sim _{\sigma TRO}K_\infty (A).$ Therefore, $A\sim _{\sigma \Delta }K_\infty (B).$ By the same arguments, $B\sim _{\sigma TRO}K_\infty (B).$ Therefore, Theorem \ref{224} implies that $A\sim _{\sigma \Delta }B.$ \end{proof} \begin{corollary}\label{226} Rieffel's strong Morita equivalence of $C^*$-algebras is strictly weaker than $\sigma $-strong $\Delta $-equivalence. \end{corollary} \begin{proof} It is well-known \cite{bgr} that there exist $C^*$-algebras that are strongly Morita equivalent in the sense of Rieffel but are not stably isomorphic. Thus, by Theorem \ref{225} these $C^*$-algebras cannot be $\sigma $-strongly $\Delta $-equivalent. \end{proof} \begin{corollary}\label{227} Two $\sigma $-unital $C^*$-algebras are strongly Morita equivalent in the sense of Rieffel if and only if they are $\sigma $-strongly $\Delta $-equivalent. \end{corollary} \begin{proof} By \cite{bgr}, two $\sigma $-unital $C^*$-algebras are strongly Morita equivalent in the sense of Rieffel if and only if they are stably isomorphic. The conclusion is implied by Theorem \ref{225}. \end{proof} \section{Strong Morita embeddings} In \cite{elest}, we defined a new relation $\subset _\Delta $ between dual operator algebras: Given two unital dual operator algebras $A$ and $B$, we say that $A\subset _\Delta B$ if there exists an orthogonal projection $p\in B$ such that $A$ and $pBp$ are weakly stably isomorphic. In this case, there exists a projection $q\in Z(\Delta (B))$ such that $pBp$ and $qBq$ are weakly stably isomorphic \cite[Lemma 2.11]{elest}. In the present section, we aim to investigate the strong version of the previously stated relation for operator algebras. \begin{definition}\label{2100} Let $A$ and $B$ be operator algebras.
We say that $A$ $\sigma \Delta $-embeds into $B$ if there exists a projection $p\in Z(\Delta (B^{**}))$ such that $pBp$ is an operator algebra and $A\sim _{\sigma \Delta }pBp.$ In this case, we write $A\subset _{\sigma \Delta }B.$ \end{definition} \begin{remark} Let $A$ be a $C^*$-algebra, and $p$ be a central projection of $A^{**}.$ Because the map $A\rightarrow A^{**}, \;a\mapsto ap$ is a $*$-homomorphism, it has norm-closed range. Thus, $Ap$ is a $C^*$-algebra. \end{remark} In the following, we prove that $\subset _{\sigma \Delta }$ is transitive. \begin{theorem}\label{2300} Let $A, B, C$ be operator algebras. If $A\subset _{\sigma \Delta }B$ and $B\subset _{\sigma \Delta }C$, then $A\subset _{\sigma \Delta }C.$ \end{theorem} \begin{proof} Let $p\in Z(\Delta (B^{**})), q\in Z(\Delta (C^{**}))$ be such that $pBp, qCq$ are operator algebras and $ A\sim _{\sigma \Delta }pBp , B\sim _{\sigma \Delta }qCq.$ We write $$ \hat A=K_\infty (A),\; \hat B=K_\infty (B),\; \hat C=K_\infty (C).$$ Then, $$ \hat A^{**}=M_\infty (A^{**}),\; \hat B^{**}=M_\infty (B^{**}),\; \hat C^{**}=M_\infty (C^{**}). $$ There exist completely isometric homomorphisms $$\theta : \hat A\rightarrow \hat B^{**}, \;\;\;\rho : \hat B\rightarrow \hat C^{**},$$ such that $$\theta(\hat A)= p^\infty \hat B p^\infty , \;\;\;\rho(\hat B)= q^\infty \hat Cq^\infty .$$ There exists a completely isometric homomorphism $\rho _0: \hat B^{**}\rightarrow \hat C^{**}$ such that $$\rho _0|_{\hat B}=\rho , \;\;\rho _0(\hat B^{**})\subseteq q^\infty \hat C^{**}q^\infty .$$ Because $p^\infty \in Z(\Delta (\hat B^{**}))$ and $\rho _0(p^\infty )\leq q^\infty $, there exists $q_0\in Z(\Delta (\hat C^{**}))$ such that $\rho _0(p^\infty )= q_0^\infty. $ Now, $$\rho (\theta (\hat A))=\rho _0(\theta (\hat A))=\rho _0(p^\infty \hat Bp^\infty )=$$ $$\rho _0(p^\infty)\rho _0( \hat B)\rho _0(p^\infty )= q_0^\infty q^\infty \hat Cq^\infty q_0^\infty = q_0^\infty \hat C q_0^\infty .
$$ Thus, $$\rho \circ \theta (K_\infty (A))=K_\infty (q_0Cq_0).$$ Because $\rho \circ \theta $ is a completely isometric homomorphism, we have that $$A\sim _{\sigma \Delta }q_0Cq_0\Rightarrow A\subset _{\sigma \Delta }C.$$ \end{proof} \begin{remark} Following this theorem, one might expect that $\subset _{\sigma \Delta }$ is a partial order relation in the class of operator algebras if we identify those operator algebras that are $\sigma $-strongly $\Delta $-equivalent. This would mean that the following additional property holds: $$A\subset_{\sigma \Delta } B, \;\;B\subset_{\sigma \Delta } A\Rightarrow A\sim _{\sigma \Delta }B.$$ However, this is not true, as we will prove in Section \ref{non}. \end{remark} \subsection{The case of $C^*$-algebras} In this subsection, we investigate the relation $\subset _{\sigma \Delta }$ in the case of $C^*$-algebras. \begin{theorem}\label{21100} Let $A, B$ be $C^*$-algebras. The following are equivalent: (i) $A\subset _{\sigma \Delta }B$. (ii) There exists an onto $*$-homomorphism $\theta : K_\infty (B)\rightarrow K_\infty (A).$ (iii) There exists an ideal $I$ of $B$ such that $A\sim _{\sigma \Delta }B/I.$ (iv) For every $*$-isomorphism $\alpha : A\rightarrow \alpha (A)$, there exists a $*$-homomorphism (not necessarily faithful) $\beta: B \rightarrow \beta (B)$ such that $\alpha (A)\sim _{\sigma TRO}\beta (B).$ \end{theorem} \begin{proof} (i) $\Rightarrow (ii)$ By Definition \ref{2100} and Theorem \ref{225}, there exist a projection $p\in Z(B^{**})$ and a $*$-isomorphism $\rho : K_\infty (pB)\rightarrow K_\infty (A).$ Define the onto $*$-homomorphism $\tau : K_\infty (B)\rightarrow K_\infty (pB)$ given by $\tau ( (b_{i,j})_{i,j} )=(pb_{i,j})_{i,j}.$ We denote $\theta =\rho \circ \tau .$ (ii) $\Rightarrow (i)$ Let $\theta ^{**}: M_\infty (B^{**}) \rightarrow M_\infty (A^{**}) $ be the second dual of $\theta .$ Then, there exists a projection $q\in Z(B^{**})$ such that $$\theta^{**}(xq^\infty )= \theta^{**} (x), \;\;\forall
\;x\in M_\infty(B^{**}),$$ and $\theta ^{**}|_{M_\infty (B^{**}q)}$ is a $*$-isomorphism onto its image. Thus, if $x\in K_\infty (B)$, we have that $\theta (xq^\infty )=\theta (x).$ Therefore, $$K_\infty (Bq)\cong K_\infty (A),$$ which implies that $$A\sim _{\sigma \Delta }Bq\Rightarrow A\subset _{\sigma \Delta }B.$$ (iii) $\Rightarrow (ii)$ If $A\sim _{\sigma \Delta }B/I,$ then $$K_\infty (A)\cong K_\infty (B/I)\cong K_\infty (B)/K_\infty (I).$$ Because $K_\infty (I) $ is an ideal of $K_\infty (B),$ there exists an onto $*$-homomorphism $\theta : K_\infty (B)\rightarrow K_\infty (A).$ (ii) $\Rightarrow (iii)$ Suppose that $\theta : K_\infty (B)\rightarrow K_\infty (A)$ is an onto $*$-homomorphism. Then, there exists an ideal $J\subseteq K_\infty (B)$ such that $$K_\infty (B)/J\cong K_\infty (A).$$ The ideal $J$ is of the form $K_\infty (I)$ for an ideal $I $ of $B.$ Thus, $$K_\infty (B/I)\cong K_\infty (B)/K_\infty (I)\cong K_\infty (B)/J\cong K_\infty (A).$$ Therefore, $A\sim _{\sigma \Delta }B/I.$ (iv) $\Rightarrow (iii)$ Suppose that $\alpha : A\rightarrow \alpha (A), \beta : B \rightarrow \beta (B)$ are $*$-homomorphisms such that $\ker \alpha =\{0\}$ and $\alpha (A)\sim _{\sigma TRO}\beta (B).$ Let $I$ be the ideal $\ker \beta .$ Then, $\beta (B)\cong B/I,$ and thus $A\sim _{\sigma \Delta }B/I.$ (iii) $\Rightarrow (iv)$ We assume that $\alpha: A\rightarrow \alpha (A)$ is a faithful $*$-homomorphism, and that $A\sim _{\sigma \Delta }B/I.$ By Theorem \ref{223}, there exists a faithful $*$-homomorphism $\gamma : B/I\rightarrow \gamma (B/I)$ such that $$\alpha (A)\sim _{\sigma TRO}\gamma (B/I).$$ If $\pi : B\rightarrow B/I$ is the natural mapping and $\beta =\gamma \circ \pi $, then $$\alpha (A)\sim _{\sigma TRO}\beta (B).$$ \end{proof} \begin{remark}\label{000000}\em {If $A$ and $B$ are $W^*$-algebras and $\alpha : A\rightarrow B, \;\;\;\beta: B \rightarrow A$ are $w^*$-continuous onto $*$-homomorphisms, then $A$ and $B$ are $*$-isomorphic.
Indeed, there exist projections $e_1\in Z(A), f_1\in Z(B)$ such that $$Ae_1\cong B, \;\;\;Bf_1\cong A.$$ Thus, there exists a projection $e_2\in Z(A), e_2\leq e_1$ such that $$Ae_2\cong Bf_1\Rightarrow Ae_2\cong A.$$ From the proof of Lemma 2.17 in \cite{elest}, we have that $A\cong Ae_1,$ and thus $A\cong B.$ In Example \ref{2000000}, we will present non-isomorphic $C^*$-algebras $A$ and $B$ for which there exist onto $*$-homomorphisms $\alpha : A\rightarrow B, \;\;\;\beta: B \rightarrow A.$ These algebras are not $W^*$-algebras. }\end{remark} \begin{remark}\em{ As we have previously mentioned, in \cite{elest} we defined an analogous relation $\subset _\Delta $ between unital dual operator algebras. We have proven that if $A\subset _\Delta B,$ where $A, B$ are unital dual operator algebras, then there exist a central projection $p$ in $\Delta (B)$ and a Hilbert space $H$ such that $A\bar \otimes B(H)$ and $(pBp)\bar \otimes B(H)$ are isomorphic as dual operator algebras. Here, $\bar \otimes $ is the normal spatial tensor product. In the case of $W^*$-algebras, we have proven that $A\subset _\Delta B$ if and only if there exist a Hilbert space $H$ and a $w^*$-continuous $*$-homomorphism from $B\bar \otimes B(H)$ onto $A\bar \otimes B(H).$ We have also proven that if $A$ and $B$ are $W^*$-algebras such that $A\subset _\Delta B$ and $B\subset _\Delta A$, then $A$ and $B$ are stably isomorphic in the weak sense. We present a new proof of this fact here.
Suppose that $A\subset _\Delta B$ and $B\subset _\Delta A.$ Then, there exist Hilbert spaces $H$ and $K$ and $w^*$-continuous $*$-homomorphisms from $B\bar \otimes B(H) $ onto $A\bar \otimes B(H)$ and from $A\bar \otimes B(K) $ onto $B\bar \otimes B(K).$ We conclude that there exist $w^*$-continuous $*$-homomorphisms from $B\bar \otimes B(H)\bar \otimes B(K) $ onto $A\bar \otimes B(H)\bar \otimes B(K) $ and from $A\bar \otimes B(K)\bar \otimes B(H) $ onto $B\bar \otimes B(K)\bar \otimes B(H) .$ Therefore, by Remark \ref{000000}, $$A\bar \otimes B(K)\bar \otimes B(H) \cong B\bar \otimes B(H)\bar \otimes B(K). $$ Because $$B(H)\bar \otimes B(K) \cong B(K)\bar \otimes B(H) \cong B(K\otimes H),$$ we have that $$A\bar \otimes B(K\otimes H)\cong B\bar \otimes B(K\otimes H).$$ Thus, $A$ and $B$ are stably isomorphic. } \end{remark} \begin{remark}\em{ The relation $\subset _\Delta $ between $W^*$-algebras is a partial order relation up to weak stable isomorphism \cite{elest}. This means that it has the following properties: (i) $A\subset _\Delta A$. (ii) $A\subset _\Delta B,\;\;\;B\subset _\Delta C\Rightarrow A\subset _\Delta C.$ (iii) If $A\subset _\Delta B$ and $B\subset _\Delta A$, then $A$ and $B$ are weakly stably isomorphic. Therefore, it is natural to ask whether $\subset _{\sigma \Delta }$ is a partial order relation up to strong stable isomorphism for $C^*$-algebras. Although $\subset _{\sigma \Delta }$ satisfies the properties (i) and (ii), it does not satisfy property (iii), as we show in Example \ref{2000000}. Nevertheless, $\subset _{\sigma \Delta }$ satisfies the property described in Theorem \ref{vary}.} \end{remark} \begin{example}\label{1000000}\em{Let $X, Y$ be compact metric spaces, $\theta : X\rightarrow Y$ be a continuous one-to-one function, and $C(X)$ and $C(Y)$ be the algebras of continuous functions from $X$ and $Y$, respectively, into the complex plane $\bb C$, equipped with the supremum norm.
Then, the map $$\rho : C(Y)\rightarrow C(X), \;\;\rho (f)=f\circ \theta $$ is an onto $*$-homomorphism, and thus $C(X)\subset _{\sigma \Delta }C(Y).$ Indeed, if $g\in C(X)$ we define $$f_0:\theta (X)\rightarrow \bb C,\;\;f_0(\theta (x))=g(x).$$ Because $\theta : X\rightarrow \theta (X)$ is a homeomorphism, $f_0$ is continuous. By Tietze's theorem, there exists $f\in C(Y)$ such that $f|_{\theta (X)}=f_0.$ We have that $f\circ \theta (x)=g(x)$ for all $x\in X,$ and thus $\rho (f)=g.$ } \end{example} \begin{example}\label{2000000}\em{There exist commutative $C^*$-algebras $A$ and $B$ such that $A\subset_{\sigma \Delta }B, \;\;\; B \subset _{\sigma \Delta }A,$ but $A$ and $B$ are not strongly Morita equivalent. Thus, $A$ and $B$ are not $\sigma $-strongly $\Delta $-equivalent. We denote the following subsets of $\bb C:$ $$ X=\{z\in \bb C: 1\leq |z|\leq 5\} , \;\;\;Y=\{z\in \bb C: |z|\leq 5\} .$$ We write $A=C(X),\;\;B=C(Y).$ Because $X\subseteq Y$, by Example \ref{1000000} we have that $A\subset_{\sigma \Delta }B.$ All of the closed discs of $\bb C$ are homeomorphic, and thus there exists a homeomorphism $\theta : Y\rightarrow X_0,$ where $X_0=\{z\in \bb C: |z-3|\leq 1\}.$ Because $X_0\subseteq X,$ Example \ref{1000000} implies that $B\subset_{\sigma \Delta }A.$ If $A$ and $B $ were strongly Morita equivalent, then they would also be $*$-isomorphic. The Banach-Stone theorem implies that $X$ and $Y$ would then be homeomorphic. However, this contradicts the fact that $Y$ is a simply connected set and $X$ is not.
} \end{example} Next, we will prove Theorem \ref{vary}, which states the following: $$A\subset _{\sigma \Delta }B, \;\;\;B\subset _{\sigma \Delta }A\Rightarrow Ar \sim_{\sigma \Delta }B\hat r , \;\;\;\; A(id_{A^{**}}-r) \sim _{\sigma \Delta } B(id_{B^{**}}-\hat r), $$ for central projections $r\in A^{**}, \hat r\in B^{**}.$ \begin{lemma}\label{10000} Let $A, B$ be operator algebras and $\hat A, \hat B$ be unital dual operator algebras such that $ \hat A=\overline{A}^{w^*} , \hat B=\overline{B}^{w^*}. $ Furthermore, let $M$ be a TRO such that $$ A=\overline{[M^*BM]}^{\|\cdot\|},\;\;\; B=\overline{[MAM^*]}^{\|\cdot\|}, $$ and let $\alpha : \hat A \rightarrow \alpha (\hat A)$ be a $w^*$-continuous completely isometric homomorphism such that $H=\overline{ \alpha (A)(H)}.$ Then, there exist a Hilbert space $K,$ a $w^*$-continuous completely isometric homomorphism $\beta : \hat B\rightarrow B(K)$ such that $K=\overline{\beta (B)(K)}$, and a TRO homomorphism $\mu : M\rightarrow B(H,K)$ such that the following hold: A) If $a\in A, b\in B, m,n\in M$ such that $a=m^*bn$, then $\alpha (a)=\mu (m)^* \beta (b)\mu (n).$ B) If $a\in A, b\in B, m,n\in M$ such that $b=man^*$, then $\beta (b)=\mu (m)\alpha (a)\mu (n)^*.$ Therefore, $$ \alpha (\hat A)=\overline{[\mu (M)^*\beta (\hat B)\mu (M)]}^{w^*},\;\;\; \beta (\hat B) =\overline{[\mu (M)\alpha (\hat A)\mu (M)^*]}^{w^*} $$ and $$ \alpha (A)=\overline{[\mu (M)^*\beta (B)\mu (M)]}^{\|\cdot\|},\;\;\; \beta (B) =\overline{[\mu (M)\alpha (A)\mu (M)^*]}^{\|\cdot\|} .$$ \end{lemma} The proof of this lemma can be inferred from the proof of Theorem 2.12 in \cite{elehoust}, with some simple modifications. \begin{definition} \label{20000} Let $\hat A, \hat B$ be von Neumann algebras, and $A$ (resp. $B$) be a $C^*$-subalgebra of $\hat A$ (resp. $\hat B$) such that $ \hat A=\overline{A}^{w^*}$ (resp. $\hat B=\overline{B}^{w^*}$).
We write $(A, \hat A)\sim _\Delta (B, \hat B)$ if there exist $w^*$-continuous and injective $*$-homomorphisms $\alpha: \hat A \rightarrow \alpha (\hat A), \;\;\beta: \hat B\rightarrow \beta (\hat B)$ and a $\sigma $-TRO $M$ such that \begin{equation} \label{refer} \alpha (A)=\overline{[M^*\beta (B)M]}^{\|\cdot\|} , \;\; \beta (B)=\overline{[M\alpha (A)M^*] }^{\|\cdot\|}. \end{equation} \end{definition} \begin{remarks} \label{easy}\em{(i) If (\ref{refer}) holds, then $$\alpha (\hat A)=\overline{[M^*\beta (\hat B)M]}^{w^*} , \;\; \beta (\hat B)=\overline{[M\alpha (\hat A)M^*] }^{w^*}. $$ (ii) Lemma \ref{10000} implies that if $(A,\hat A)\sim _\Delta (B,\hat B)$ and $\gamma : \hat A\rightarrow \gamma (\hat A)$ is a $w^*$-continuous $*$-isomorphism, then there exist a $w^*$-continuous $*$-isomorphism $\delta : \hat B\rightarrow \delta (\hat B)$ and a $\sigma $-TRO $N$ such that $$\gamma (A)=\overline{[N^*\delta (B)N]}^{\|\cdot\|}, \;\;\;\delta (B)=\overline{[N\gamma (A)N^*]}^{\|\cdot\|}.$$ (iii) The above remark and Theorem \ref{224} both imply that if $\hat A, \hat B, \hat C$ are von Neumann algebras, $A,B,C$ are, respectively, $w^*$-dense $C^*$-subalgebras of these, and $(A,\hat A)\sim _\Delta (B,\hat B)$ and $(B,\hat B)\sim _\Delta (C,\hat C), $ then $(A,\hat A)\sim _\Delta (C,\hat C).$ } \end{remarks} In the following, we assume that $A$ is a $C^*$-algebra such that $A\subseteq A^{**}\subseteq B(H)$ for some Hilbert space $H$, and $e_2$ is a central projection of $A^{**}.$ We also assume that $A\sim _{\sigma \Delta }Ae_2.$ \begin{lemma}\label{10000a} There exist a $w^*$-continuous $*$-isomorphism $\theta _1: A^{**}\rightarrow \theta _1(A^{**})$ and a $\sigma $-TRO $M$ such that $$\theta _1(A)=\overline{[M^*Ae_2M]}^{\|\cdot\|} , \;\; Ae_2=\overline{[M\theta _1(A)M^*] }^{\|\cdot\|}. $$ \end{lemma} \begin{proof} Let $B$ be a $C^*$-algebra.
We assume that $B\subseteq B^{**}\subseteq B(H).$ Let $\cl K$ be the algebra of compact operators acting on $l^2(\bb N)$, and $p\in \cl K$ be a rank one projection. We define the $\sigma $-TRO $M=I_H\otimes p\cl K.$ Then, we have that $$ B\otimes p=\overline{[M(B\otimes \cl K)M^*]}^{\|\cdot\|}, \;\;\; B\otimes \cl K=\overline{[M^*(B\otimes p)M]}^{\|\cdot\|}, $$ where $\otimes $ is the minimal tensor product. Because $ \overline{B\otimes p}^{w^*}=B^{**}\bar \otimes p , \;\;\; \overline{B\otimes \cl K}^{w^*}=B^{**}\bar \otimes B(l^2(\bb N))$, where $\bar \otimes $ is the spatial tensor product, we have $$(B\otimes p, B^{**}\bar \otimes p)\sim _\Delta (B\otimes \cl K, B^{**}\bar \otimes B(l^2(\bb N))).$$ Because there exists a $*$-isomorphism from $B^{**}$ onto $B^{**}\bar \otimes p$ mapping $B$ onto $B \otimes p$, we can conclude that $(B, B^{**})\sim _\Delta (B\otimes \cl K, B^{**}\bar \otimes B(l^2(\bb N))).$ Therefore, $$(A, A^{**})\sim _\Delta ( A\otimes \cl K , A^{**}\bar \otimes B(l^2(\bb N)) )$$ and $$(Ae_2, A^{**}e_2) \sim _\Delta ((Ae_2)\otimes \cl K, (A^{**}e_2)\bar \otimes B(l^2(\bb N))).$$ Because $A\sim _{\sigma \Delta }Ae_2$, there exists a $*$-isomorphism from $A^{**}\bar \otimes B(l^2(\bb N))$ onto $(A^{**}e_2)\bar \otimes B(l^2(\bb N))$ mapping $A\otimes \cl K$ onto $(Ae_2)\otimes \cl K $ and, therefore, $$( A\otimes \cl K , A^{**}\bar \otimes B(l^2(\bb N)) )\sim _\Delta ((Ae_2)\otimes \cl K, (A^{**}e_2)\bar \otimes B(l^2(\bb N))).$$ Now Remark \ref{easy}, (iii), implies that $(A, A^{**})\sim _\Delta (Ae_2, A^{**}e_2).$ By Remark \ref{easy}, (ii), for the identity map $id: A^{**}e_2 \rightarrow A^{**}e_2 $ there exist a $w^*$-continuous $*$-isomorphism $\theta _1: A^{**}\rightarrow \theta _1(A^{**})$ and a $\sigma $-TRO $M$ such that $$\theta _1(A)=\overline{[M^*Ae_2M]}^{\|\cdot\|} , \;\; Ae_2=\overline{[M\theta _1(A)M^*] }^{\|\cdot\|}. $$ \end{proof} \begin{lemma}\label{30000} Let $M, \theta _1$ be as in Lemma \ref{10000a}.
Then, there exist $w^*$-continuous $*$-isomorphisms $\rho _k: A^{**}\rightarrow \rho _k(A^{**})$ and TRO homomorphisms $\phi _k: M\rightarrow \phi _k(M), k=0,1,2,...$ where $\rho _0=id_{A^{**}}, \phi _0=id_M,$ such that if $a\in A^{**}, x\in A^{**}e_2, m,n \in M,$ the equality $\rho _k(a)=\phi _{k-1}(m)^*\rho _{k-1}(x) \phi _{k-1}(n)$ implies that $\rho _{k+1}(a)=\phi _{k}(m)^*\rho _{k}(x) \phi _{k}(n)$ and the equality $\rho _{k-1}(x)=\phi _{k-1}(m)\rho _{k}(a) \phi _{k-1}(n)^*$ implies that $\rho _{k}(x)=\phi _{k}(m)\rho _{k+1}(a) \phi _{k}(n)^*$ for all $k=1,2,...$ Therefore, $$\rho _k( A^{**})=\overline{[ \phi _{k-1}(M)^*\rho _{k-1}( A^{**}e_2)\phi _{k-1}(M) ]}^{w^*} ,$$$$ \rho _{k-1}(A^{**}e_2)=\overline{[ \phi _{k-1}(M)\rho _k(A^{**})\phi _{k-1}(M)^* ] }^{w^*} $$ and $$\rho _k(A)=\overline{[\phi _{k-1}(M)^*\rho _{k-1}( Ae_2)\phi _{k-1}(M) ]}^{\|\cdot\|} ,$$$$ \rho _{k-1}(Ae_2)=\overline{[ \phi _{k-1}(M)\rho _k(A)\phi _{k-1}(M)^* ] }^{\|\cdot\|} $$ for all $k=1,2,...$ \end{lemma} \begin{proof} By Lemma \ref{10000}, given the representation $\theta _1|_{A^{**}e_2}$, there exist a $*$-isomorphism $$\theta _2: \theta _1(A^{**})\rightarrow \theta_2( \theta_1(A^{**})) $$ and a TRO homomorphism $\phi _1: M\rightarrow \phi _1(M)$ such that $$\theta _2(\theta _1( A^{**}))=\overline{[\phi _1(M)^*\theta _1( A^{**}e_2)\phi _1(M)]}^{w^*} , \;\; \theta _1(A^{**}e_2)=\overline{[\phi _1(M)\theta _2(\theta _1(A^{**}))\phi _1(M)^*] }^{w^*} $$ and $$\theta _2(\theta _1(A))=\overline{[\phi _1(M)^*\theta _1(Ae_2)\phi _1(M)]}^{\|\cdot\|} , \;\; \theta _1(Ae_2)=\overline{[\phi _1(M)\theta _2(\theta _1(A))\phi _1(M)^*] }^{\|\cdot\|}, $$ and such that if $a\in A^{**}, x\in A^{**}e_2, m,n \in M$, the equality $\theta _1(a)=m^*xn$ implies that $\theta_2( \theta_1(a))= \phi _1(m)^*\theta _1(x)\phi _1(n) $ and the equality $x=m\theta _1(a)n^*$ implies that $\theta _1(x)=\phi _1(m)\theta_2( \theta_1(a)) \phi _1(n)^*. 
$ We write $\rho _0=id_{A^{**}}, \rho _1=\theta _1, \rho _2=\theta_2 \circ \theta _1$ and continue inductively. \end{proof} Let $M, \theta _1$ be as in Lemma \ref{10000a}. Given the $*$-isomorphism $\theta _1^{-1}: \theta _1(A^{**})\rightarrow A^{**},$ Lemma \ref{10000} implies that there exist a $*$-isomorphism $\sigma _1: A^{**}e_2\rightarrow \sigma _1(A^{**}e_2)$ and a TRO homomorphism $\chi_0 : M\rightarrow \chi _0(M)$ such that if $\chi (m)=\chi_0(m)^*,\;\;\forall \;m\;\in \;M, $ then $$ A^{**}=\overline{[\chi (M)\sigma _1( A^{**}e_2)\chi (M)^*]}^{w^*} , \;\; \sigma _1(A^{**}e_2)= \overline{[\chi (M)^*A^{**}\chi (M)] }^{w^*} $$ and $$A=\overline{[\chi (M)\sigma _1(Ae_2)\chi (M)^*]}^{\|\cdot\|} , \;\; \sigma _1(Ae_2)= \overline{[\chi (M)^*A\chi (M)] }^{\|\cdot\|} .$$ Furthermore, if $a\in A^{**}, m,n\in M, x\in A^{**}e_2$, then the equality $\theta _1(a)=m^*xn$ implies that $a=\chi(m)\sigma _1(x) \chi (n)^*.$ \begin{lemma}\label{40001} Let $M, \chi , \theta _1$ be as in the previous discussion. Then, there exist a $w^*$-continuous $*$-isomorphism $\tau _1: A^{**}\rightarrow \tau _1(A^{**})$ and a TRO homomorphism $\psi _1: \chi (M)\rightarrow \psi _1(\chi (M))$ such that if $a\in A^{**}, m,n\in M, x\in A^{**}e_2$, then the equality $a=\chi(m)\sigma _1(x) \chi (n)^*$ implies that $a=\psi _1(\chi (m))\tau _1(x)\psi _1(\chi (n))^*$ and $\sigma _1(x)=\chi (m)^*a \chi(n) $ implies that $\tau _1(x)=\psi _1(\chi (m))^*a \psi _1(\chi (n)). 
$ Thus, $$ A^{**}=\overline{[ \psi _{1}(\chi (M))\tau _{1}( A^{**}e_2)\psi _{1}(\chi (M))^* ]}^{w^*} ,$$ $$ A=\overline{[ \psi _{1}(\chi (M))\tau _{1}( Ae_2)\psi _{1}(\chi (M))^* ]}^{\|\cdot\|} ,$$ $$ \tau _{1}(A^{**}e_2)=\overline{[ \psi _{1}(\chi (M))^*A^{**}\psi _{1}(\chi (M)) ] }^{w^*} $$ $$ \tau _{1}(Ae_2)=\overline{[ \psi _{1}(\chi (M))^*A\psi _{1}(\chi (M)) ] }^{\|\cdot\|} .$$ \end{lemma} \begin{proof} Define the $*$-isomorphism $\tau _1: A^{**}\rightarrow \sigma _1(A^{**}e_2)\oplus A^{**}e_2^\bot ,$ given by $\tau _1(a)=\sigma _1(ae_2)\oplus ae_2^\bot ,$ and the TRO homomorphism $\psi_1: \chi (M)\rightarrow \psi_1(\chi (M)) $ given by $\psi _1(\chi (m))=(\chi (m) \;\;0).$ If $a\in A^{**}, m,n\in M, x\in A^{**}e_2$ satisfy $a=\chi(m)\sigma _1(x) \chi (n)^*$, then $$a=(\chi (m) \;\;0) \left (\begin{array}{clr}\sigma _1(x) & 0 \\ 0 & 0 \end{array}\right) (\chi (n)^*\;\; 0)^t=\psi _1(\chi (m))\tau _1(x)\psi _1(\chi (n))^*.$$ Furthermore, if $\sigma _1(x)=\chi (m)^*a \chi(n) $, then $$\tau _1(x)=\left(\begin{array}{clr}\sigma _1(x) & 0 \\ 0 & 0 \end{array}\right) =\left(\begin{array}{clr}\chi (m)^*a\chi (n) & 0 \\ 0 & 0\end{array}\right)=$$$$(\chi (m)^* \;\;0)^ta(\chi (n) \;\;0)= \psi _1(\chi (m))^*a \psi _1(\chi (n)). $$ \end{proof} \begin{lemma}\label{40002} Let $\tau _1, M, \chi , \psi _1$ be as in Lemma \ref{40001}. Then, there exist $w^*$-continuous $*$-isomorphisms $\tau _k: A^{**}\rightarrow \tau _k(A^{**})$ and TRO homomorphisms $\psi _k:\chi ( M)\rightarrow \psi _k(\chi (M))$ such that if $a\in A^{**}, m,n\in M, x\in A^{**}e_2$ the equality $a=\psi _1(\chi (m))\tau _1(x) \psi _1(\chi (n))^*$ implies that $\tau _k(a)=\psi _{k+1}(\chi (m)) \tau _{k+1}(x)\psi _{k+1}(\chi (n))^*$ and $\tau _1(x)=\psi _1(\chi (m))^*a \psi _1(\chi (n)) $ implies that $\tau _{k+1}(x)=\psi _{k+1}(\chi (m))^*\tau _k(a) \psi _{k+1}(\chi (n))$ for all $k=1,2,\ldots$. 
Thus, $$ \tau _k(A^{**})=\overline{[ \psi _{k+1}(\chi (M)) \tau _{k+1}( A^{**}e_2)\psi _{k+1}(\chi (M))^* ]}^{w^*} ,$$ $$ \tau _k(A)=\overline{[ \psi _{k+1}(\chi (M))\tau _{k+1}( Ae_2)\psi _{k+1}(\chi (M))^* ]}^{\|\cdot\|} ,$$ $$ \tau _{k+1}(A^{**}e_2)=\overline{[ \psi _{k+1}(\chi (M))^*\tau _k(A^{**})\psi _{k+1}(\chi (M)) ] }^{w^*} $$ $$ \tau _{k+1}(Ae_2)=\overline{[ \psi _{k+1}(\chi (M))^*\tau _k(A)\psi _{k+1}(\chi (M)) ] }^{\|\cdot\|} .$$ \end{lemma} \begin{proof}Lemma \ref{10000} implies that given the $*$-isomorphism $\tau_1: A^{**}\rightarrow \tau _1(A^{**})$, there exist a $w^*$-continuous $*$-isomorphism $\tau_{2,0}: A^{**}e_2\rightarrow \tau _{2,0}(A^{**}e_2)$ and a TRO homomorphism $\zeta : \chi (M)\rightarrow \zeta (\chi (M)) $ such that if $a\in A^{**}, m,n\in M, x\in A^{**}e_2$, then the equality $a=\psi _1(\chi (m))\tau _1(x) \psi _1(\chi (n))^*$ implies that $\tau _1(a)=\zeta (\chi (m))\tau _{2,0}(x)\zeta (\chi (n))^*$ and $\tau _1(x)=\psi _1(\chi (m))^*a \psi _1(\chi ( n)) $ implies that $\tau_{2,0}(x)=\zeta (\chi (m))^*\tau _1(a)\zeta (\chi (n)). $ For every $a\in A^{**}, m\in M,$ we define $$\tau _2(a)=\tau _{2,0}(ae_2)\oplus ae_2^\bot ,\;\;\;\psi _2(\chi (m))=(\zeta (\chi (m)) \;\; 0).$$ If $a\in A^{**}, m,n\in M, x\in A^{**}e_2$, then the equality $a=\psi _1(\chi (m))\tau _1(x) \psi _1(\chi (n))^*$ implies that $$\tau _1(a)=\zeta (\chi (m))\tau _{2,0}(x)\zeta (\chi (n))^*=(\zeta (\chi (m)) \;\; 0) \left(\begin{array}{clr} \tau _{2,0}(x) &0 \\ 0& 0\end{array}\right)(\zeta (\chi (n))^*\;\; 0)^t =$$ $$ \psi _2(\chi (m))\tau _2(x)\psi _2(\chi (n))^*$$ and the equality $\tau _1(x)=\psi _1(\chi (m))^*a \psi _1(\chi (n)) $ implies that $$\tau _2(x)=\left(\begin{array}{clr} \tau _{2,0}(x) &0 \\ 0& 0\end{array}\right)= \left(\begin{array}{clr} \zeta (\chi (m))^*\tau _{1}(a)\zeta (\chi (n)) &0 \\ 0& 0\end{array}\right)=$$$$ (\zeta (\chi (m))^* \;\; 0)^t\tau _1(a)(\zeta (\chi (n)) \;\; 0)=\psi _2(\chi (m))^*\tau _1(a)\psi _2(\chi (n)). 
$$ We continue inductively. \end{proof} \begin{lemma}\label{50000} There exist a faithful $*$-homomorphism $\alpha : A^{**}\rightarrow B(L),$ where $L$ is a Hilbert space such that $\overline{\alpha (A)(L)}=L,$ and a $\sigma $-TRO $N\subseteq B(\alpha (e_2)(L), L)$ such that $$\alpha (A^{**})=\overline{[N\alpha (A^{**}e_2)N^*]}^{w^*} , \;\; \alpha (A^{**}e_2)=\overline{[N^*\alpha (A^{**})N] }^{w^*} $$ and $$\alpha (A)=\overline{[N\alpha (Ae_2)N^*]}^{\|\cdot\|} , \;\; \alpha (Ae_2)=\overline{[N^*\alpha (A)N] }^{\|\cdot\|}. $$ \end{lemma} \begin{proof} We recall the maps $\theta _1, \tau _k, \rho _k$ from Lemmas \ref{30000}, \ref{40001}, and \ref{40002}. We denote $$\alpha (a)=\ldots\oplus \tau _2(a)\oplus \tau _1(a)\oplus a\oplus \rho _1(a)\oplus \rho _2(a)\oplus \ldots$$ for all $a\in A^{**}.$ We also recall the maps $\psi _k, \phi _k, \chi $, and for each $m\in M,$ we let $\zeta (m)$ be the $\infty \times \infty $ matrix whose first diagonal under the main diagonal is $$(\ldots, \psi _2(\chi (m)),\;\; \psi _1(\chi (m)),\;\; m^*,\;\; \phi _1(m)^*,\;\; \phi _2(m)^*, \ldots)$$ where the other diagonals have zero entries. Clearly, $\zeta (M)$ is a $\sigma $-TRO. Let $a\in A^{**}, x\in A^{**}e_2, m,n \in M$ be such that $\rho _1(a)=\theta _1(a)=m^*xn. 
$ Then, by Lemma \ref{30000} we have that $$\rho _{k+1}(a)=\phi _k(m)^*\rho _k(x)\phi _k(n),\;\;\forall \;k=1,2,3,...$$ Furthermore, following the discussion preceding Lemma \ref{40001}, we have that $a=\chi (m)\sigma _1(x)\chi (n)^*,$ which by Lemma \ref{40001} implies that $a=\psi _1(\chi (m))\tau _1(x)\psi _1(\chi (n))^* .$ By Lemma \ref{40002}, we have that $$\tau _k(a)=\psi _{k+1}(\chi (m))\tau _{k+1}(x)\psi _{k+1}(\chi (n))^*, \;\;\forall \;k=1,2,3,...$$ Therefore, \begin{align*}& \zeta (m)\alpha (x)\zeta (n)^*=\\ & \ldots \psi _2(\chi (m))\tau _2(x)\psi _2(\chi (n))^* \oplus \psi _1(\chi (m))\tau _1(x)\psi _1(\chi (n))^*\oplus \\ & m^*xn\oplus \phi _1(m)^*\rho _1(x)\phi _1(n) \oplus \phi _2(m)^*\rho _2(x)\phi _2(n) \oplus \ldots=\\& \ldots \oplus \tau _1(a)\oplus a\oplus \rho _1(a)\oplus \rho _2(a)\oplus \rho _3(a)\ldots=\alpha (a). \end{align*} We conclude that $$ \alpha (A^{**})=\overline{[\zeta (M)\alpha (A^{**}e_2)\zeta (M)^*]}^{w^*} ,\;\;\; \alpha (A)=\overline{[\zeta (M)\alpha (Ae_2)\zeta (M)^*]}^{\|\cdot\|}.$$ Similarly, we can see that $$ \alpha (A^{**}e_2)=\overline{[\zeta (M)^*\alpha (A^{**})\zeta (M)] }^{w^*},\;\; \alpha (Ae_2)=\overline{[\zeta (M)^*\alpha (A)\zeta (M)] }^{\|\cdot\|}. $$ \end{proof} \begin{lemma}\label{60000} Let $A$ be a $C^*$-algebra and $e_1, e_2\in Z(A^{**})$ be projections such that $Ae_2$ is a $C^*$-algebra, $A\sim _{\sigma \Delta }Ae_2$, and $e_2\leq e_1\leq e_0=id_{A^{**}}, e_2\neq e_1\neq e_0.$ Then, there exist central projections $q, p, r\in A^{**}$ such that $$e_0=p\oplus q,\; e_1=r\oplus q,\; p\bot q,\; r\bot q,$$ and $$Ap\sim _{\sigma \Delta }Ar.$$ \end{lemma} \begin{proof} From Lemma \ref{50000}, we may assume that $$A\subseteq A^{**}\subseteq B(H), e_0=I_H$$ and there exists a $\sigma $-TRO $M\subseteq B(e_2(H), H)$ such that $$A^{**}=\overline{[M A^{**}e_2M^*]}^{w^*} , \;\; A^{**}e_2= \overline{[M^*A^{**}M] }^{w^*} $$ and $$A=\overline{[M Ae_2M^*]}^{\|\cdot\|} , \;\; Ae_2=\overline{[M^*AM] }^{\|\cdot\|}. 
$$ By Proposition 2.8 and Theorem 3.3 in \cite{eletro}, there exists a $*$-isomorphism $$\phi : (A^{**})^\prime \rightarrow (A^{**})^\prime e_2\subseteq B(e_2(H))$$ such that $$am=m\phi (a),\;\;\forall \;a\in (A^{**})^\prime ,\;\;m\in M.$$ By induction, there exist central projections $\{e_n: n\in \bb N\}\subseteq A^{**}$ such that $$\phi (e_n)=e_{n+2}, \;\;\;e_{n+1}\leq e_n, \;\;\forall n=0,1,2,\ldots$$ Define $$p= \sum_{n=0}^\infty (e_{2n}-e_{2n+1}) , \;\;\; q=\sum_{n=0}^\infty (e_{2n+1}-e_{2n+2})\oplus (\wedge _n e_n). $$ Then, $e_0=p\oplus q.$ If $$r=\phi (p)=\sum_{n=0}^\infty (e_{2n+2}-e_{2n+3}), $$ then $e_1=r\oplus q.$ We define $N=pMr.$ Because $$NN^*N=pMrM^*pMr=pM\phi (p)M^*pM\phi (p)=pMM^*M \phi (p)\subseteq pMr=N,$$ $N$ is a TRO. Furthermore, the fact that $M$ is a $\sigma $-TRO implies that $N$ is a $\sigma $-TRO. We have that $$Ar=A\phi (p)= Ae_2\phi (p)= \overline{[\phi (p)M^*AM\phi (p)] } ^{\|\cdot\|} .$$ Thus, because $pM=M\phi (p),$ we have that $$Ar= \overline{[N^*ApN]} ^{\|\cdot\|}. $$ Similarly, we can prove that $Ap=\overline{[NArN^*]}^{\|\cdot\|}.$ Therefore, $Ap \sim _{\sigma \Delta }Ar.$ \end{proof} \begin{theorem}\label{vary} Let $A, B$ be $C^*$-algebras such that $A\subset _{\sigma \Delta }B, \;\;\;B\subset _{\sigma \Delta }A.$ Assume that $e_0=id_{A^{**}}, \;\hat e_0=id_{B^{**}}.$ Then, there exist projections $ r\in Z(A^{**}) , \hat r\in Z(B^{**}) $ such that $$ Ar\sim _{\sigma \Delta} B\hat r ,\;\;\;A(e_0-r)\sim _{\sigma \Delta} B(\hat e_0-\hat r ). 
$$ \end{theorem} \begin{proof} There exist projections $e_1\in Z(A^{**}), f_1\in Z(B^{**})$ such that $$A\sim _{\sigma \Delta }Bf_1,\;\;B\sim _{\sigma \Delta }Ae_1.$$ Thus, there exists a projection $e_2\in Z(A^{**})$ such that $e_2\leq e_1$ and $Bf_1\sim _{\sigma \Delta }Ae_2.$ Therefore, $A\sim _{\sigma \Delta }Ae_2.$ By Lemma \ref{60000}, there exist projections $p,q,r\in Z(A^{**})$ such that $$e_1=p\oplus q,\; e_0=r\oplus q,\; p\bot q,\; r\bot q$$ and $$Ap\sim _{\sigma \Delta }Ar.$$ Assume that $\psi : K_\infty (Ae_1)\rightarrow K_\infty (B)$ is a $*$-isomorphism. Again, by $\psi $ we denote the second dual of $\psi .$ Because $p\leq e_1, $ there exists $\hat p \in Z(B^{**})$ such that $\psi (K_\infty (Ae_1)^{**}p^\infty )=K_\infty (B)^{**}\hat p^\infty .$ We have that $$\psi (K_\infty (Ap))=\psi (K_\infty (Ae_1)) \psi (p^\infty )=K_\infty (B\hat p).$$ Similarly, there exists a projection $\hat q\in Z(B^{**})$ such that $$\psi (K_\infty (Aq))= K_\infty (B\hat q).$$ Because $p\perp q$, we have that $\hat p \bot \hat q.$ Furthermore, because $e_1=p\oplus q\Rightarrow \hat e_0=\hat p\oplus \hat q,$ we conclude that $$Ar\sim _{\sigma \Delta }Ap\sim _{\sigma \Delta }B\hat p$$ and $$A(e_0-r)=Aq\sim _{\sigma \Delta }B\hat q=B(\hat e_0-\hat p).$$ We write $\hat r $ for $\hat p.$ The proof is now complete. \end{proof} \section{Examples in the non-self-adjoint case}\label{non} In this section, we will present a counterexample of two non-self-adjoint operator algebras $\hat A, \hat B$ such that $\hat A\subset _{\sigma \Delta }\hat B, \;\; \hat B\subset _{\sigma \Delta }\hat A$ but $\hat A$ and $ \hat B$ are not $\sigma \Delta $-strongly equivalent. Let $\cl N, \cl M$ be nests acting on the separable Hilbert spaces $H_1$ and $K_1$, respectively. 
These nests are called similar if there exists an invertible operator $s: H_1\rightarrow K_1$ such that $$\cl M=\{sn(H_1): n\in \cl N\}.$$ In this case, the map $$\theta _s: \cl N\rightarrow \cl M, \;\;\theta _s(n)=sn(H_1)$$ is a nest isomorphism. This means that $\theta _s$ is one-to-one, onto, and order-preserving. We can easily check that $\Alg {\cl M}=s\Alg {\cl N}s^{-1}.$ If $n\in \cl N,$ we write $$n_-=\vee \{l\in \cl N: l\leq n, l\neq n\}.$$ In the case where $n_-$ is strictly contained in $n,$ the projection $a=n-n_-$ is called an atom of $\cl N.$ \begin{theorem}\label{22100} \cite[13.20]{dav} The nests $\cl N, \cl M$ are similar if and only if there exists a nest isomorphism $\theta : \cl N\rightarrow \cl M$ such that $$ dim((n-n_-)(H_1)) = dim((\theta (n)-\theta (n_-))(K_1)) $$ for all $n\in \cl N.$ \end{theorem} Lemmas \ref{22200} and \ref{22300} can be inferred from Section 13 in \cite{dav}. We present the proofs here for completeness. \begin{lemma}\label{22200} Let $\cl N, \cl M$ be separably-acting nests, and $\theta : \cl N\rightarrow \cl M$ be a nest isomorphism preserving the dimensions of the atoms. For every $0<\epsilon <1$, there exist an invertible operator $s,$ a unitary $u$, and a compact operator $k$ such that $$s=u+k, \;\;\|k\|<\epsilon , \;\;\|s^{-1}\|<1+\epsilon $$ and $\theta =\theta_s. $ \end{lemma} \begin{proof} By \cite[Theorem 13.20]{dav}, there exist a compact operator $k,$ a unitary $u$, and an invertible operator $s=u+k$ such that $\theta =\theta_s $ and $\|k\|<\frac{\epsilon }{1+\epsilon }.$ Observe that $\|k\|<\epsilon .$ We have that $u^*s=I+u^*k\Rightarrow \|I-u^*s\|<\epsilon .$ Therefore, \begin{equation}\label{star} (u^*s)^{-1}= \sum_{n=0}^\infty (I-u^*s)^n= \sum_{n=0}^\infty (-u^*k)^n . \end{equation} We conclude that $$s^{-1}=\sum_{n=0}^\infty (-u^*k)^n u^*.$$ We have that $$\|s^{-1}\|\leq \sum_{n=0}^\infty \|u^*k\|^n=\sum_{n=0}^\infty \|k\|^n=\frac{1}{1-\|k\|} <1+\epsilon . 
$$ \end{proof} \begin{lemma}\label{22300} Let $\cl N, \cl M$ be separably-acting nests and $\theta : \cl N\rightarrow \cl M$ be a nest isomorphism preserving the dimensions of the atoms. For every $0<\epsilon <1$, there exist an invertible operator $s,$ a unitary $u,$ and compact operators $k,l$ such that $$s=u+k, \;\;s^{-1}=u^*+l, \;\;\|k\|<\epsilon , \;\;\|l\|<\epsilon $$ and $\theta =\theta_s. $ \end{lemma} \begin{proof} Choose $0<\delta <1$ such that $(1+\delta )\delta <\epsilon ,\;\; \delta <\epsilon .$ By Lemma \ref{22200}, there exist a unitary $u$ and a compact operator $k$ such that $s=u+k$ is an invertible operator and $\|k\|<\delta , \;\|s^{-1}\|<1+\delta , \;\theta =\theta_s. $ Define $l_0=-u^*ks^{-1}u.$ We have that $$l_0u^*s=-u^*k\Rightarrow l_0(I+u^*k)=-u^*k\Rightarrow I=I+u^*k+l_0(I+u^*k)\Rightarrow$$$$ I=(I+l_0)(I+u^*k).$$ Because $\|u^*k\|<\delta <1,$ the operator $I+u^*k$ is invertible, and thus $$I+l_0=(I-(-u^*k))^{-1}= \sum_{n=0}^\infty (-u^*k)^n.$$ By (\ref{star}), we have that $$I+l_0=s^{-1}u\Rightarrow s^{-1}=u^*+l_0u^*.$$ If $l=l_0u^*,$ then $l$ is a compact operator, and \begin{align*} \|l\|=&\|l_0u^*\|=\|s^{-1}-u^*\|=\|s^{-1}u-I\|=\|s^{-1}(u-s)\|=\\& \|s^{-1}k\|\leq \|s^{-1}\|\|k\|<(1+\delta )\delta < \epsilon . \end{align*} Thus, $s^{-1}=u^*+l$ and $\|l\|<\epsilon.$ \end{proof} In the following, we fix similar nests $\cl N$ and $\cl M$ acting on the Hilbert spaces $H_1$ and $H_2$, respectively, and a nest isomorphism $\theta : \cl N\rightarrow \cl M$ preserving the dimensions of atoms. Suppose that $a_i=n_i-(n_i)_-, \;\;b_i=\theta (n_i)-\theta (n_i)_-, i=1,2,3,...$ are the atoms of $\cl N$ and $\cl M$, respectively.
We also assume that $p=\vee _ia_i$ is strictly contained in $I_{H_1}$, that $I_{H_2}=\vee _ib_i$, and that $dim(a_i)=dim(b_i)<+\infty $ for all $i.$ By Lemma \ref{22300}, there exist a sequence of invertible operators $(s_n)_n $ such that $\theta =\theta_{s_n}, $ a sequence of unitaries $(u_n)_n$, and sequences of compact operators $(k_n)_n, (l_n)_n$ such that $$s_n=u_n+k_n,\;\;s_n^{-1}=u_n^*+l_n$$ for all $n\in \bb N $ and $\|k_n\|\rightarrow 0, \|l_n\|\rightarrow 0.$ We can also assume that $\|s_n\|<2, \;\;\|s^{-1}_n\|<2 $ for all $n \in \bb N,$ and $$ w^*-\lim_n u_n=s , \;\; w^*-\lim_n s_n=s,\;\; w^*-\lim_n s_n^{-1}=s^*.$$ \begin{lemma}\label{22400} (i) $$SOT-\lim_n \sum_{i=1}^\infty b_is_na_i = \sum_{i=1}^\infty b_isa_i =s_0.$$ (ii) $$SOT-\lim_n \sum_{i=1}^\infty a_is_n^{-1}b_i = \sum_{i=1}^\infty a_is^*b_i =s_0^*.$$ \end{lemma} \begin{proof} We shall prove (i), while statement (ii) follows by symmetry. Fix $i\in \bb N$, and assume that $$b_i(\xi )= \sum_{j=1}^n\sca{\xi , x_j}y_j ,\;\;\forall \;\xi \;\in \;H_2$$ for $x_j, y_j\in H_2.$ For all $\xi \in H_1$, we have that $$b_is_na_i(\xi )= \sum_{j=1}^n\sca{s_na_i(\xi) , x_j}y_j \rightarrow \sum_{j=1}^n\sca{sa_i(\xi) , x_j}y_j =b_isa_i(\xi ). $$ Thus, $$SOT-\lim_n b_is_na_i=b_isa_i, \;\;\;\forall \;i.$$ For $\xi \in H_1$ and every $k\in \bb N,$ because $\|s_n-s\|\leq \|s_n\|+\|s\|<3,$ we have that \begin{align*}& \nor{ \sum_{i=1}^\infty b_is_na_i(\xi ) - \sum_{i=1}^\infty b_isa_i(\xi )}^2 = \sum_{i=1}^\infty \nor{b_is_na_i(\xi ) - b_isa_i(\xi )}^2 =\\& \sum_{i=1}^k \nor{b_is_na_i(\xi ) - b_isa_i(\xi )}^2 + \sum_{i>k}^\infty \nor{b_is_na_i(\xi ) - b_isa_i(\xi )}^2 \leq \\ & \sum_{i=1}^k \nor{b_is_na_i(\xi ) - b_isa_i(\xi )}^2 + 9 \sum_{i>k}^\infty \nor{a_i(\xi )}^2 . 
\end{align*} Fix $\epsilon >0.$ Then, there exists $k_0\in \bb N$ such that $\sum_{i>k_0}^\infty \nor{a_i(\xi )}^2 <\epsilon .$ Thus, $$\nor{ \sum_{i=1}^\infty b_is_na_i(\xi ) - \sum_{i=1}^\infty b_isa_i(\xi )}^2\leq \sum_{i=1}^{k_0} \nor{b_is_na_i(\xi ) - b_isa_i(\xi )}^2 + 9 \epsilon ,\;\;\forall \;n\;\in \;\bb N.$$ We let $n\rightarrow \infty ,$ and we have that $$\limsup_n \nor{ \sum_{i=1}^\infty b_is_na_i(\xi ) - \sum_{i=1}^\infty b_isa_i(\xi )}^2\leq 0+9\epsilon =9\epsilon . $$ Thus, $$\lim_n \nor{ \sum_{i=1}^\infty b_is_na_i(\xi ) - \sum_{i=1}^\infty b_isa_i(\xi )}=0.$$ \end{proof} \begin{lemma}\label{22500} For every $j, i\in \bb N$, we have that $$a_is_j^{-1}b_is_ja_i=a_i=a_is^*b_isa_i.$$ \end{lemma} \begin{proof} Because $$ s_j(n_i(H_1))=\theta (n_i)(H_2) , \;\;\;s_j((n_i)_-(H_1))=\theta ((n_i)_-)(H_2) , $$ if $\xi \in a_i(H_1)$, then $\xi=n_i( \xi)-(n_i)_-( \xi). $ Thus, there exist $\xi_j, \; \omega_j\;\in H_2$ such that \begin{align*} s_j(\xi )= & \theta (n_i) (\xi _j)-\theta (n_i) _-(\omega _j) =(\theta (n_i)-\theta (n_i)_-)(\xi _j)+ (\theta (n_i)_- (\xi _j)-\theta (n_i) _-(\omega _j) )=\\& b_i(\xi _j)+\theta (n_i)_-(\xi_j- \omega _j). 
\end{align*} Because $b_i=\theta(n_i)- \theta(n_i)_-, $ we have that $b_is_j(\xi )=b_i(\xi _j).$ Therefore, $$s_j^{-1}b_is_j(\xi )=s_j^{-1}(b_i(\xi _j))=s_j^{-1}(s_j(\xi )-\theta (n_i)_-(\xi_j- \omega _j))= \xi - s_j^{-1}(\theta (n_i)_-(\xi _j-\omega _j)) .$$ However, $s_j^{-1}(\theta (n_i)_-(H_2))=(n_i)_-(H_1).$ Thus, there exists $\phi _j\;\in \;H_1$ such that $$ s_j^{-1}(\theta (n_i)_-(\xi _j-\omega _j))= (n_i)_-(\phi _j).$$ We have proved that $$s_j^{-1}b_is_j(\xi )=\xi -(n_i)_-(\phi _j),$$ which implies that $$a_is_j^{-1}b_is_j(\xi )=a_i(\xi )-a_i(n_i)_-(\phi _j).$$ Because $a_i(n_i)_-=0$, we have that $$a_is_j^{-1}b_is_j(\xi )=a_i(\xi ), \;\;\forall \;i,j.$$ Because $$ SOT-\lim_ja_is_j^{-1}b_i=a_is^*b_i, \;\;\; SOT-\lim_jb_is_ja_i=b_isa_i,$$ we obtain $$a_is^*b_isa_i= a_i, \;\;\forall \;i.$$ \end{proof} \begin{lemma}\label{22600} Let $s_0$ be as in Lemma \ref{22400}. Then, $$s_0^*s_0=p, \;\;s_0s_0^*=I_{H_2}.$$ \end{lemma} \begin{proof} We shall prove that $s_0^*s_0=p.$ Because the span of the atoms of $\cl M$ is $I_{H_2},$ the other equality follows by symmetry. By Lemma \ref{22400}, we have that $$s_0=SOT-\lim_n \sum_{i=1}^n b_isa_i, \;\;\;s_0^*=SOT-\lim_n \sum_{i=1}^n a_is^*b_i.$$ Thus, $$s_0^*s_0= SOT-\lim_n( \sum_{i=1}^n a_is^*b_i )(\sum_{j=1}^nb_jsa_j )= SOT-\lim_n \sum_{i=1}^n a_is^*b_isa_i.$$ Because $p=\vee _ia_i,$ Lemma \ref{22500} implies that $s_0^*s_0=p.$ \end{proof} Suppose that $A$ (resp. $B$) is the subalgebra of compact operators of the algebra $\Alg{\cl N}$ (resp. $\Alg{\cl M}$ ). It is well-known that $\Alg{\cl N}=A^{**}, \Alg{\cl M}=B^{**}.$ We define a map $\rho : B\rightarrow A$ such that $$\rho (k)=s_0^*ks_0,\;\;\forall \; k\in B.$$ Because $s_0s_0^*=I_{H_2}$, this map is a homomorphism.
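The algebraic mechanism behind $\rho$ can be checked in finite dimensions. The sketch below is only an illustration with hypothetical matrices, not the operators $s_0, p$ constructed above: any real matrix $s_0$ with $s_0 s_0^* = I$ makes $k \mapsto s_0^* k s_0$ multiplicative, for exactly the reason used in the text.

```python
# Finite-dimensional sketch: if s0 s0* = I, then rho(k) = s0* k s0 is multiplicative,
# since rho(k) rho(l) = s0* k (s0 s0*) l s0 = s0* (k l) s0.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b))) for j in range(len(b[0]))]
            for i in range(len(a))]

def transpose(a):
    return [list(row) for row in zip(*a)]

# A 2x3 co-isometry s0: H1 = C^3 -> H2 = C^2 (real entries, so s0* is the transpose).
s0 = [[1, 0, 0],
      [0, 1, 0]]
s0T = transpose(s0)

assert matmul(s0, s0T) == [[1, 0], [0, 1]]  # s0 s0* = I on H2
p = matmul(s0T, s0)                          # s0* s0 = p, a projection on H1

def rho(k):
    """The compression homomorphism B(H2) -> B(H1), k -> s0* k s0."""
    return matmul(s0T, matmul(k, s0))

k = [[1, 2], [3, 4]]
l = [[0, 1], [5, -2]]
assert rho(matmul(k, l)) == matmul(rho(k), rho(l))  # rho is multiplicative
```

The same computation shows why $\rho(s_0 k s_0^*) = p k p$: compressing back and forth only cuts by the projection $p = s_0^* s_0$.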
If $k\in A$, then $$pkp=s_0^*s_0ks_0^*s_0=\rho (s_0ks_0^*).$$ Thus, $\rho (B)=pAp.$ Because $p\in \Delta (A^{**})^\prime =Z(\Delta (A^{**}))$, we have that $B\subset _{\sigma \Delta }A.$ In the following, we additionally assume that the dimensions of the atoms of $\cl N$ and $\cl M$ are one and that $\Delta (A^{**}), \;\Delta (B^{**})$ are maximal abelian self-adjoint algebras (MASAs). Such nests exist; see, for instance, Example 13.15 in \cite{dav}. We define the algebras $$\hat B=B\oplus A\oplus A\oplus \ldots, \;\;\hat A=A\oplus A\oplus \ldots.$$ Because $B\subset _{\sigma \Delta }A$, we have that $\hat B\subset_{\sigma \Delta }\hat A.$ Furthermore, $$\hat A\cong (0\oplus \bb C I_{H_1} \oplus \bb C I_{H_1} \oplus \ldots ) \hat B (0\oplus \bb C I_{H_1} \oplus \bb C I_{H_1} \oplus \ldots ) .$$ Thus, $\hat A\subset_{\sigma \Delta }\hat B.$ If $\subset _{\sigma \Delta }$ were a partial order relation up to stable isomorphism for non-self-adjoint algebras, then we would have that $\hat A\sim _{\sigma \Delta }\hat B.$ Thus, the algebras $$\Omega =B^{**}\oplus A^{**}\oplus A^{**}\oplus \ldots, \;\;\;\Xi =A^{**}\oplus A^{**}\oplus \ldots$$ would be weakly stably isomorphic. Because $\Omega$ and $ \Xi $ are CSL algebras (see the definition of a CSL algebra in \cite{dav}), it follows from Theorem 3.2 in \cite{eleref} and Theorem 3.3 in \cite{eletro} that there would exist a $*$-isomorphism $$\theta : \Delta( \Omega)^\prime \rightarrow \Delta (\Xi)^\prime $$ such that $\theta (\Lat{\Omega })=\Lat{\Xi }.$ However, $\Delta( \Omega)$ and $\Delta (\Xi)$ are MASAs, and thus there exists a unitary $u$ such that $$\theta (x)=u^*xu, \;\;\;\forall \;x\;\in \Delta (\Omega ).$$ Therefore, $$u^*\Omega u=\Xi .$$ There exist completely contractive homomorphisms $\rho _k: B^{**}\rightarrow A^{**}, \;\;k=1,2,...$ such that $$u^*(x\oplus 0\oplus ...)u= \rho_1(x)\oplus \rho _2(x)\oplus ...,\;\;\;\forall \; x\; \in B^{**}.$$ Suppose that $$u^*( I_{H_2}\oplus 0\oplus ... 
)u=p_1\oplus p_2\oplus ...$$ Because $$0\oplus ...\oplus 0\oplus p_i\oplus 0\oplus ...\leq p_1\oplus p_2\oplus ...$$ for all $i,$ we have that $$u(0\oplus ...0\oplus p_i\oplus 0\oplus ...)u^*\leq I_{H_2}\oplus 0\oplus ... $$ Thus, $$u (0\oplus ...0\oplus p_i\oplus 0\oplus ...) u^*=\hat p_i \oplus 0\oplus ... $$ for orthogonal projections $\hat p_i\in \Delta (B^{**}), i\in \bb N.$ Observe that $\hat p_i\hat p_j=0$ for $i\neq j.$ If $x\in B^{**},$ then $$u^*(x\oplus 0\oplus ...)u u^*(I_{H_2}\oplus 0...)u =u^*(I_{H_2}\oplus 0...)u u^*(x\oplus 0\oplus ...)u.$$ Thus, $$ (\rho_1(x)\oplus \rho _2(x)\oplus ...) (p_1\oplus p_2\oplus ...)=(p_1\oplus p_2\oplus ...)(\rho_1(x)\oplus \rho _2(x)\oplus ...). $$ We conclude that $$\rho _i(x)p_i=p_i\rho _i(x),\;\;\forall i\in \bb N, \;\;x\in B^{**}.$$ Thus, for all $x\in B^{**},$ we have that \begin{align*} & u^*(x\oplus 0...)u u^*(\hat p_i\oplus 0\oplus ...)u = (\rho_1(x)\oplus \rho _2(x)\oplus ...) (0\oplus ...0\oplus p_i\oplus 0\oplus ...)=\\ & (0\oplus ...0\oplus p_i\oplus 0\oplus ...)(\rho_1(x)\oplus \rho _2(x)\oplus ...) = u^*(\hat p_i\oplus 0\oplus ...)uu^*(x\oplus 0...)u . \end{align*} Therefore, $\hat p_i$ is in the center of $B^{**}.$ However, as a nest algebra, $B^{**}$ has a trivial center. We can, therefore, conclude that there exists $i$ such that $$\hat p_i=I_{B^{**}}$$ and $\hat p_j=0$ for all $j\neq i.$ We obtain that \begin{equation} \label{xx}u^*(B^{**}\oplus 0\oplus ...)u=(0\oplus ...0\oplus p_iA^{**}p_i\oplus 0\oplus ...) \end{equation} By the same arguments, applied to the nest algebra $A^{**}$, there exists exactly one algebra among $q_1B^{**}q_1, q_2A^{**}q_2, q_3A^{**}q_3,...$, where $q_1\in B^{**}, q_k\in A^{**}, k\geq 2,$ such that $$u(0\oplus ...0\oplus A^{**}\oplus 0\oplus ...)u^*=(q_1B^{**}q_1 \oplus 0\oplus 0\oplus ...)$$ or $$u(0\oplus ...0\oplus A^{**}\oplus 0\oplus ...)u^*=(0\oplus ...\oplus 0\oplus q_jA^{**}q_j \oplus 0\oplus ...).$$ Here, in the left-hand side, $A^{**}$ is in the $i$-th position.
The equality (\ref{xx}) implies that $$u(0\oplus ...0\oplus A^{**}\oplus 0\oplus ...)u^*=(q_1B^{**}q_1 \oplus 0\oplus 0\oplus ...).$$ We have proven that $$u^*(B^{**}\oplus 0\oplus ...)u\subseteq (0\oplus ...\oplus 0\oplus A^{**}\oplus 0\oplus ...)$$ and $$u(0\oplus ...\oplus 0\oplus A^{**}\oplus 0\oplus ...)u^*\subseteq (B^{**}\oplus 0\oplus ...).$$ We conclude that $$u^*(B^{**}\oplus 0\oplus ...)u=(0\oplus ...\oplus 0\oplus A^{**}\oplus 0\oplus ...).$$ Thus, the nest algebras $A^{**}$ and $B^{**}$ are completely isometrically isomorphic. It follows that their diagonals $\Delta (A^{**})$ and $\Delta (B^{**})$ are $*$-isomorphic. However, $\Delta (B^{**})$ is an atomic MASA, and $\Delta (A^{**})$ is a MASA with a nontrivial continuous part. This contradiction shows that $\hat A$ and $\hat B$ are not $\sigma \Delta $-strongly equivalent.
\section{Introduction} To present knowledge leptons have dimensions of less than $10^{-18}$~m and may therefore be regarded as point-like objects. The muonium atom ($M=\mu^+e^-$) is the hydrogen-like bound state of leptons from two different particle generations, an antimuon ($\mu^+$) and an electron ($e^-$) \cite{Hug_90,Jun_99}. The dominant interaction within the M atom is electromagnetic, and level energies can be calculated in bound state Quantum Electrodynamics (QED) to sufficiently high accuracy for modern high precision spectroscopic experiments. There are also contributions from weak interactions arising from $Z^0$-boson exchange and from strong interactions due to vacuum polarization loops containing hadrons. Both can be obtained to the required level of precision using standard theory. In contrast to natural atoms and ions, as well as artificial atomic systems, which contain hadrons, M has the advantage that there are no complications arising from the finite size and the internal structure of any of its constituents. Precision experiments in M can therefore provide sensitive tests of the standard theory and allow searches for new and yet unknown forces in nature. Parameters of speculative theories, which try to expand the standard model in order to gain deeper insight into some of its not well understood features, can be restricted. In addition, fundamental constants like the muon mass $m_{\mu}$, its magnetic moment $\mu_{\mu}$ and anomaly $a_{\mu}$, and the fine structure constant $\alpha$ can be obtained. All high precision experiments in M to date have involved the 1s ground state (see Fig.~\ref{FIG1}), in which the atoms can be produced in sufficient quantities \cite{Jun_99}. The most efficient mechanism is $e^-$ capture after stopping $\mu^+$ in a suitable noble gas, where yields of 80(10)\% were achieved for Kr gas \cite{Hug_90}.
This technique was used in the most recent precision measurements of the atom's ground state hyperfine structure splitting $\Delta \nu_{HFS}$ and $\mu_{\mu}$ at the Los Alamos Meson Physics Facility (LAMPF) in Los Alamos, USA \cite{Liu_99}. Muonium at thermal velocities in vacuum can be obtained by stopping $\mu^+$ close to the surface of a SiO$_2$ powder target, where the atoms are formed through $e^-$ capture; some of them diffuse through the target surface into the surrounding vacuum. This process has an efficiency of a few \% and was an essential prerequisite for Doppler-free two-photon laser spectroscopy of the 1$^2$S$_{1/2}$-2$^2$S$_{1/2}$ interval $\Delta \nu_{1s2s}$ at the Rutherford Appleton Laboratory (RAL) in Chilton, United Kingdom \cite{Mey_99}, which yields an accurate value for $m_{\mu}$. Electromagnetic transitions in excited states, particularly the 2$^2$S$_{1/2}$-2$^2$P$_{1/2}$ classical Lamb shift and the 2$^2$S$_{1/2}$-2$^2$P$_{3/2}$ fine structure splitting, could be induced by microwave spectroscopy. However, because only moderate numbers of atoms in the metastable 2s state can be produced with a beam foil technique, the experimental accuracy is at present at the 1.5~\% level \cite{Ora_84,Bad_84}, which does not yet represent a severe test of theory. \begin{figure}[t] \begin{minipage}{2.5 in} \centering{ \hspace*{-0.1in} \mbox{ \epsfig{file=levels.ps,width=2.8in,clip=} } } \end{minipage} \hspace*{0.5in} \begin{minipage}{2.5in} \centering{ \hspace*{-0.15in} \mbox{ \epsfig{file=brra.ps,width=2.3in,clip=} } } \end{minipage} \caption[]{ Left: Muonium n=1 and n=2 states. All indicated transitions could be induced to date. Right: Ground state Zeeman levels in an external magnetic field. \label{FIG1}} \end{figure} \section{Ground State Hyperfine Structure} The most recent experiment at LAMPF used a Kr gas target inside of a microwave cavity at typically atmospheric density and in a homogeneous magnetic field of 1.7 Tesla.
Microwave transitions between the two energetically highest respectively the two lowest Zeeman sublevels of the n=1 state at the frequencies $\nu_{12}$ and $\nu_{34}$ (Fig.~\ref{FIG1}) involve a muon spin flip. They were detected through a change in the spatial distribution of $e^+$ from $\mu^+$ decays, since due to parity violation in the $\mu^+$ decay the $e^+$ are preferentially emitted in the $\mu^+$ spin direction. As a consequence of the Breit-Rabi equation, which describes the behaviour of the levels in a magnetic field, the sum of these frequencies equals at any field value the zero-field splitting $\Delta \nu_{HFS}$, and their difference yields, for a known field, $\mu_{\mu}$. The experiment utilized the technique of ``old muonium'', which allowed the linewidth of the signals to be reduced below half of the ``natural'' linewidth $\delta \nu_{nat}= (\pi \cdot \tau_{\mu})^{-1}=145$~kHz, where $\tau_{\mu}$ is the muon lifetime of 2.2~$\mu$s (Fig.~\ref{FIG2}). For this purpose an essentially continuous muon beam was chopped by an electrostatic kicking device into 4~$\mu$s long pulses with 14~$\mu$s separation. Only atoms which were interacting coherently with the microwave field for periods longer than several muon lifetimes were detected \cite{Bos_95}. The results are mainly statistics limited and improve the knowledge of both $\Delta \nu_{HFS}$ and $\mu_{\mu}$ by a factor of three \cite{Liu_99} over previous measurements \cite{Mar_82}. The zero field splitting is determined to $\Delta \nu_{HFS}$=$ \nu_{12} + \nu_{34}$ = 4 463 302 765(53) Hz (12 ppb), which agrees well with the theoretical prediction of $\Delta \nu_{theory}$= 4 463 302 563(520)(34)($\leq$100) Hz (120 ppb)~\cite{Kin_98}. Here the first quoted uncertainty is due to the accuracy to which the muon-electron mass ratio $m_{\mu}/m_e$ is known, the second error is from the knowledge of $\alpha$ as obtained in electron g-2 measurements, and the third value corresponds to estimates of uncalculated higher order terms.
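The field-independence of the sum $\nu_{12}+\nu_{34}$ can be illustrated with a minimal numerical sketch. The Hamiltonian below is the standard two-spin model underlying the Breit-Rabi formula; the parameter values ($a$, $b_e$, $b_\mu$) are illustrative, not the experimental ones, and the level ordering assumes the strong-field regime of Fig.~\ref{FIG1}:

```python
import numpy as np

# Spin-1/2 operators (hbar = 1)
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2)

def zeeman_levels(a, b_e, b_mu):
    """Eigenvalues (sorted descending) of the toy Hamiltonian
    H = a S_e.S_mu + b_e S_ez + b_mu S_muz on electron x muon spin space."""
    H = a * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))
    H += b_e * np.kron(sz, I2) + b_mu * np.kron(I2, sz)
    return np.sort(np.linalg.eigvalsh(H))[::-1]

a = 1.0                        # zero-field splitting (arbitrary units)
for b_e in (2.0, 10.0, 50.0):  # electron Zeeman term dominates, as at 1.7 T
    w = zeeman_levels(a, b_e, 0.03 * b_e)
    nu12 = w[0] - w[1]         # transition between the two highest sublevels
    nu34 = w[2] - w[3]         # transition between the two lowest sublevels
    print(b_e, nu12 + nu34)    # the sum equals a, independent of the field
```

Diagonalizing at any sufficiently strong field reproduces $\nu_{12}+\nu_{34}=\Delta\nu_{HFS}$, while the difference $\nu_{12}-\nu_{34}$ grows with the field, which is what allows $\mu_{\mu}$ to be extracted.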
The strong interaction contributes 250~Hz and a parity conserving weak interaction amounts to -65~Hz. Among the possible exotic interactions which could contribute to $\Delta \nu_{HFS}$ is the conversion of muonium to antimuonium, which is in the lepton sector a process analogous to the well known $K^0$-$\overline{K^0}$ oscillations in the quark sector. From a recent direct search at the Paul Scherrer Institute (PSI) in Villigen, Switzerland, which itself could significantly restrict several speculative models, an upper limit of 9~Hz can be concluded for an expected line splitting \cite{Wil_99,Jun_99}. Recently, generic extensions of the standard model in which both Lorentz invariance and CPT invariance are not assumed have attracted widespread attention in physics \cite{Blu_98}. Such models suggest diurnal variations of the ratio $({\Delta \nu_{12} - \Delta \nu_{34} })/({\Delta \nu_{12} + \Delta \nu_{34} })$ \cite{Kos_99}, which are being searched for \cite{KJ_99}. \begin{figure}[thb] \unitlength 1.0cm \begin{picture}(15,7.2) \centering{ \hspace*{2.0cm} \mbox{ \epsfig{file=hfs21.ps,width=9cm,clip=} } } \end{picture}\par \caption[]{Samples of conventional and `old' M resonances at frequency $\nu_{12}$. The narrow `old' lines are also higher. The lines in the right column were recorded using a sweep of the magnetic field, which was measured in units of the proton NMR frequency $\nu_P$. The lines to the left were obtained using microwave frequency scans. \label{FIG2}} \end{figure} The measurements yield the magnetic moment $\mu_{\mu}/\mu_p$ = 3.183 345 24(37) (120 ppb), which translates into $m_{\mu}/m_e$ = 206.768 277(24) (120 ppb). The hyperfine splitting is proportional to $\alpha^2 R_{\infty}$, with the very precisely known Rydberg constant $R_{\infty}$. Comparing experiment and theory yields $\alpha^{-1}_{2}$= 137.035 996 3(80) (58 ppb) \cite{Liu_99}.
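The stated proportionality to $\alpha^2 R_\infty$ comes from the leading-order (Fermi) contribution to the splitting; schematically (a standard textbook expression quoted here for orientation, with all QED, recoil, hadronic and weak corrections omitted),

```latex
\Delta\nu_F=\frac{16}{3}\,c R_\infty\,\alpha^2\,\frac{m_e}{m_\mu}
\left(1+\frac{m_e}{m_\mu}\right)^{-3},
```

so that, with $R_\infty$ and $m_\mu/m_e$ known, the measured $\Delta\nu_{HFS}$ determines $\alpha$.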
If $R_{\infty}$ is decomposed into even more fundamental constants, one finds $\Delta \nu_{HFS}$ to be proportional to $\alpha^4 m_e/\hbar$. Using the value $\hbar/m_e$ as determined in measurements of the neutron de Broglie wavelength \cite{Kru_97} gives $ \alpha^{-1}_{4}$ = 137.036 004 7(48) (35 ppb). In the near future a small improvement in $ \alpha^{-1}_{4}$ can be expected from ongoing determinations of $\hbar/m_e$ in measurements of the photon recoil in Cs atom spectroscopy and a Cs atomic mass measurement. The present limitation on the accuracy of $ \alpha^{-1}_{4}$ arises mainly from the muon mass uncertainty. Therefore any better determination of the muon mass, e.g. through a precise measurement of the reduced mass shift in $\Delta \nu_{1s2s}$, will result in an improvement of $\alpha^{-1}_4$. At present the good agreement within two standard deviations between the fine structure constant determined from the M hyperfine structure and the one from the electron magnetic anomaly is generally considered the best test of internal consistency of QED, as one case involves bound state QED and the other QED of free particles. \section{1s-2s Energy Interval} Doppler-free excitation of the 1s-2s transition has been achieved in the past at KEK in Tsukuba, Japan \cite{Chu_88}, and at RAL \cite{Maas_94}. The accuracy of the latter measurement was limited by the ac Stark effect and a frequency chirp caused by rapid changes of the index of refraction in the dye solutions of the amplifier stages in the employed high power laser system. A new measurement has been performed very recently at the world's brightest pulsed surface muon source at RAL \cite{Mey_99}.
\begin{figure}[thb] \label{FIG3} \begin{minipage}{2.5 in} \centering{ \hspace*{-0.3in} \mbox{ \epsfig{figure=m1s2s_laser.ps,width=3.0in,clip=} } } \end{minipage} \hspace*{0.2in} \begin{minipage}{2.5in} \centering{ \hspace*{-0.0in} \mbox{ \epsfig{figure=m1s2s_m.ps,width=2.5in} } } \end{minipage} \centering\caption[] {Left: Pulsed laser system in the M 1s-2s experiment. Right: Muonium 1s-2s signal. The frequency corresponds to the offset of the Ti:sapphire laser from the iodine reference line. The open circles are the observed signal, the solid squares represent the theoretical expectation based on measured laser beam parameters and a line shape model {\protect \cite{Yak_99}}. } \end{figure} The 1$^2$S$_{1/2}$(F=1) $\rightarrow$ 2$^2$S$_{1/2}$(F=1) transition was induced when thermal muonium atoms interacted with the light field of two counter-propagating laser beams of wavelength 244~nm. The two-photon excitation was detected by photoinization of the 2s state in the same light field. The muons released thereby were identified and counted. Their number as a function of laser frequency represents the experimental signal (Fig.\ref{FIG3}). The necessary high power UV laser light was generated by frequency tripling the output of an alexandrite ring laser amplifier in crystals of LBO and BBO. The alexandrite laser was seeded with light from a continuous wave Ar ion laser pumped Ti:sapphire laser at 732~nm. Fluctuations of the optical phase during the laser pulse (chirping) were compensated with two electro-optic devices in the resonator of the ring amplifier to give a swing of the laser lights frequency chirping of less than about 5~MHz. The fundamental optical frequency was calibrated by frequency modulation saturation spectroscopy of the a$_{15}$ hyperfine component of the 5-13 R(26) line in thermally excited $^{127}\rm{I}_2$ vapour which lies about 700~MHz lower than 1/6 of the M transition frequency. It has been calibrated to 0.4~MHz \cite{Cor_99}. 
The cw light was frequency up-shifted by passing through two acousto-optic modulators (AOM's). The experiment yields $\Delta \nu_{{\rm 1s2s}}$(expt.) = 2\,455\,528\,941.0(9.8)~MHz, in good agreement with the theoretical value of $\Delta \nu_{{\rm 1s2s}}$(theory) = 2\,455\,528\,935.4(1.4)~MHz \cite{Pac_98}. From these values the muon-electron mass ratio is found to be $m_{\mu^+}/m_{e^-}$ = 206.768\,38(17). Alternatively, using $m_{\mu^+}/m_{e^-}$ extracted from the M hyperfine structure experiment, a comparison of $\Delta \nu_{{\rm 1s2s}}$(expt.) and $\Delta \nu_{{\rm 1s2s}}$(theory) yields the $\mu^+$-$e^-$ charge ratio as $Z= q_{\mu^+}/q_{e^-}=-1-1.1(2.1)\cdot 10^{-9}$. This is the best verification of charge equality in the first two generations of particles. The existence of one single universal quantized unit of charge is solely an experimental fact for which no associated underlying symmetry has yet been revealed. Gauge invariance assures charge quantization only within one generation of particles. \section{Muon Magnetic Anomaly} The muon magnetic anomaly $a_{\mu}$ is determined, as in the case of the electron, mostly by photon and electron-positron fields. However, the effects of heavier particles are enhanced by the square of the mass ratio, $(m_{\mu}/m_e)^2 \approx 4 \cdot 10^4$. The contribution of the strong interaction, which can be determined from a dispersion relation with input from experimental data on $e^+$-$e^-$ annihilation into hadrons and hadronic $\tau$-decays, amounts to 58 ppm. The weak interaction adds 1.3 ppm. At present standard theory yields $a_{\mu}$ to 0.66 ppm. Contributions from physics beyond the standard model may be as large as a few ppm. Such contributions could arise from, e.g., supersymmetry, compositeness of fundamental fermions and bosons, CPT violation and many others.
A new determination of $a_{\mu}$ \cite{Car_99} is presently being carried out in a superferric magnetic storage ring \cite{Jun_98} at the Brookhaven National Laboratory (BNL) in Upton, USA. It is a g-2 experiment in which the difference of the spin precession and the cyclotron frequencies is measured. In a first startup run, approximately the same level of accuracy for $\mu^+$ could be reached as in the final result for this particle from a preceding experiment at CERN \cite{Bai_79}. Several technical improvements have been installed since, the most significant of which is a magnetic kicker, which allows muons to be injected directly into the storage ring. This enhances the number of stored particles by almost two orders of magnitude compared to the early stages of the experiment, when the stored muons were born in the decays of injected pions. Data have been taken which are expected to yield $a_{\mu}$ to 1~ppm. The data analyzed so far give the value with 5~ppm uncertainty. The value agrees with the prediction of standard theory. The experiment aims for a final precision of 0.35~ppm. To be able to reach this goal, it is essential to have $\mu_{\mu}$ to the 0.1~ppm level from muonium spectroscopy, since this quantity enters the extraction of the experimental result. The experiment is planned for both $\mu^+$ and $\mu^-$ as a test of CPT invariance. This is of particular interest in view of the suggestion by Bluhm et al. \cite{Blu_98} and Dehmelt et al. \cite{Deh_99} to compare tests of CPT invariance in different systems on a common basis, i.e. the energies of the involved states. For measurements of magnetic anomalies this means that the energies of particles with spin down in an external field need to be compared to the energies of antiparticles with spin up.
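The measured difference frequency is related to the anomaly by the standard relation (in Gaussian units, neglecting electric-field and pitch corrections),

```latex
\omega_a=\omega_s-\omega_c=a_\mu\,\frac{eB}{m_\mu c}\,,
```

and since the field $B$ is calibrated in terms of the proton NMR frequency, the extraction of $a_\mu$ involves the ratio $\mu_\mu/\mu_p$; this is why the 0.1~ppm knowledge of $\mu_{\mu}$ from muonium spectroscopy is needed.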
The nature of g-2 experiments is such that they provide a figure of merit $r = |a^- - a^+| \cdot \frac{\hbar\omega_c}{m \cdot c^2}$ for a CPT test, where $a^-$ and $a^+$ are the magnetic anomalies of the negative and positive particles, $\omega_c$ is the cyclotron frequency used in the measurement and $m$ is the particle mass. For the past electron and positron measurements one has $r_e = 1.2 \cdot 10^{-21}$ \cite{Deh_99}, which is a much tighter test than in the case of the neutral kaon system, where the mass difference between $K^0$ and $\overline{K^0}$ yields $r_{K} = 1\cdot 10^{-18}$. An even more stringent CPT test arises from the past muon magnetic anomaly measurements, where $r_{\mu} = 3.5 \cdot 10^{-24}$, which may therefore already be viewed as the presently best \begin{table}[h] \caption[]{Muon fluxes of some existing and future facilities: Rutherford Appleton Lab\-orat\-ory (RAL), Japanese Hadron Facility (JHF), a new Neutron Spallation Source (NSS), Muon collider (MC). } \label{ACCEL} \begin{tabular}{|c|cccccc|} \hline &RAL($\mu^+$) &PSI($\mu^+$) & PSI($\mu^-$) &JHF($\mu^+$)$^\dag$ &NSS($\mu^+$) &MC ($\mu^+$, $\mu^-$)\\ \hline Intensity ($\mu$/s)& $3\times 10^6$ &$3\times 10^8$ &$1\times 10^8$ &$4.5\times 10^{11}$ &$4.5\times 10^7$ &$7.5\times 10^{13}$ \\ Momentum bite &&&&&&\\ \hspace*{4mm} $\Delta p/p$ [\%] & 10& 10 & 10 & 10 & 10 & 5-10 \\ Spot size &&&&&&\\ (cm $\times$ cm) & 1.2$\times$2.0 &3.3$\times$2.0 &3.3$\times$2.0 & 1.5$\times$2.0 &1.5$\times$2.0 & few$\times$few \\ Pulse structure & 82 ns & 50 MHz & 50 MHz & 300 ns & 300 ns & 50 ps\\ & 50 Hz & contin. & contin.& 50 Hz & 50 Hz & 15 Hz\\ \hline \end{tabular} \end{table} known CPT test based on system energies. With the improvement expected from the BNL g-2 experiment one can look forward to a 20 times more precise test of this fundamental symmetry. \section{Future possibilities} All precision M experiments are now limited by statistics.
Significant improvements can therefore be expected either from more efficient M formation, which might in principle be possible to a small extent in the case of thermal M in vacuum, or, preferably, from muon sources of higher intensity. Such sources may become available in the intermediate future at the Japanese Hadron Facility (JHF) or at the Oak Ridge (or a possible European) Spallation Neutron Source (NSS). The most promising facility is, however, a muon collider \cite{Palmer_98}; its front end will provide muon rates 5-6 orders of magnitude higher than present beams (Table \ref{ACCEL}). At such facilities there is, in addition to more precise measurements in M, a variety of experiments on artificial atoms and ions like muonic hydrogen and muonic helium, which will allow the extraction of important parameters describing the hadronic particles within these systems, or of fundamental interactions which could in no physical experiment thus far be accessed with sufficient precision for atomic, nuclear and particle theory \cite{Bos_96,Kaw_97}. It should be noted that new experimental approaches \cite{Wil_99,Jun_95} would also become feasible which might beneficially take advantage of, e.g., the time evolution of the atomic systems. \section{Conclusions} Although the nature of the muon - the reason for its existence - still remains a mystery, both the theoretical and experimental work in fundamental muon physics have contributed to an improved understanding of basic particle interactions and symmetries in physics. In particular, muonium spectroscopy has verified the nature of the muon as a point-like heavy lepton which differs only in its mass related parameters from the other leptons. This fact is fundamentally assumed in every precision calculation within standard theory.
In addition, the measurements provide accurate values of fundamental constants. \section{Acknowledgments} The author wishes to acknowledge the work of the members of the different collaborations which produced the reported results. This work was supported by the German BMBF, the German DAAD and a NATO research grant.
\section{Introduction} Lovelock gravity is the most natural extension of general relativity (GR) in dimensions higher than four, as it retains the basic character of the theory -- the equation of motion remains second order despite the action being higher order in the Riemann curvature. No other purely gravitational theory preserves this crucial feature. Yet another interesting feature of GR is the fact that it is {\it kinematic} in three dimensions and turns dynamical in the next even dimension, {\it i.e.} $d=4$. In three dimensions the Riemann curvature tensor can be written in terms of the Ricci tensor, so that there exists no non-trivial vacuum solution. If we want this property to remain true as we go to higher odd dimensions, there is a unique choice that corresponds to pure Lovelock gravity \cite{dgj, xd}. The action reduces in this case to a single $N$th order Lovelock term, with or without cosmological constant \footnote{The vanishing cosmological constant case, which will be the focus of this paper, has also been referred to as Chern-Simons or Born-Infeld gravity in odd and even dimensions respectively (see for instance \cite{Zanelli2000a}).}, in dimensions $d=2N+1,2N+2$. This is the maximal order term in the Lovelock series, higher order terms being either topological or zero. There is no sum over lower order terms and in particular there is no Einstein term in the action. It is possible to define an analogue of the Riemann tensor for $N$th order Lovelock gravity, its characterizing property being that the trace of its Bianchi derivative vanishes, yielding the corresponding divergence-free analogue of the Einstein tensor \cite{bianchi}. This is exactly the same as the one obtained from the variation of the $N$th order Lovelock action term.
For the appropriate definition of the Lovelock-Riemann tensor and zero cosmological constant, any pure Lovelock vacuum in odd $d = 2N + 1$ dimensions is Lovelock flat, {\it i.e.} any vacuum solution of the theory has vanishing Lovelock-Riemann tensor \cite{xd, kastor}. Likewise, even for non-zero cosmological constant, the Weyl curvature vanishes in $3$ dimensions, and so does the Lovelock-Weyl tensor for pure Lovelock in all odd $d=2N+1$ dimensions. That is, pure Lovelock gravity is {\it kinematic} relative to the Lovelock analogue of the Riemann tensor in all odd $d=2N+1$ dimensions. Kinematicity of pure Lovelock gravity was shown to be true for static spacetimes \cite{dgj} and conjectured to hold in general. This has indeed been proven in general recently \cite{kastor,xd}, using a different definition of the Lovelock-Riemann tensor due to Kastor \cite{kastor}. Using purely algebraic properties, it has been shown that this $N$th order Lovelock-Riemann tensor can be entirely written in terms of the corresponding Lovelock-Ricci in odd $d=2N+1$ dimensions. This clearly establishes kinematicity of pure Lovelock gravity, as defined above, in all generality. There is no other gravity theory satisfying this property. Furthermore, for kinematicity to hold in all odd dimensions we should restrict the pure Lovelock equation to two dimensionalities only, $d=2N+1, 2N+2$, else this property would be violated. That is, for any given Lovelock order, $N$, the pertinent dimensions are only these two. We will always be referring to them when mentioning odd and even dimensions in the text. Conversely, for a given dimension, $d$, the order is fixed as $N=[(d-1)/2]$. Henceforth, by pure $N$th order Lovelock we will refer to this theory in the only two relevant dimensionalities, $d=2N+1, 2N+2$. In this paper we focus on Kasner-type metrics in pure Lovelock gravity.
These give a very interesting class of cosmological solutions, which are in fact the simplest instances of homogeneous but anisotropic spaces, and have been very effectively employed for studying the approach to the big-bang singularity \cite{Belinskii2006}. In the general relativistic case in four dimensions, the approach to the singularity oscillates between several Kasner-like phases, contracting along two axes and expanding along the third, followed by a switch over to a different expanding direction, and so on. The subsequent phases and the transitions between them can be represented as a map from the space of Kasner solutions to itself, the {\it Kasner map}, that has a very nice geometric structure \footnote{A nice description can be found in \cite{Heinzle2009a}.}. In the higher dimensional setting, while considering general Lovelock gravity, the leading order behavior close to the singularity is captured by the highest order term; thus pure Lovelock gravity describes the relevant gravitational dynamics in the approach to the big-bang singularity within the full Lovelock family of theories. The Kasner metrics were also very instrumental in finding the right definition of the Lovelock-Riemann tensor verifying kinematicity \cite{xd}. For pure Lovelock static vacuum spacetimes in $d=2N+1$, both definitions of the Lovelock-Riemann tensor, due to Dadhich \cite{bianchi} and Kastor \cite{kastor}, vanish, and the difference became apparent only while studying solutions of reduced symmetry. In fact, pure Lovelock Kasner metrics in odd dimensions have zero Lovelock-Riemann in Kastor's formulation whereas Dadhich's analogue is in general non-zero. In the following sections we will analyze the pure Lovelock equations for vacuum and perfect fluid spacetimes and obtain solutions in the Kasner class. In Sections II and III, we set up the Lovelock framework and write the equation of motion for Kasner spaces.
Isotropy of spatial stresses is required for both vacuum and perfect fluid solutions. These conditions are solved in Section IV, finding several classes of solutions or isotropy types. Sections V and VI follow with a detailed analysis of perfect fluid and vacuum solutions respectively. For each of the vacuum families of solutions, we compute higher order curvature tensors that will allow us to actually distinguish between the different families. Kastor's and Dadhich's Lovelock-Riemann tensors provide an efficient characterization of the solutions. Finally, in Section VII we consider a family of exponential solutions, very closely related to the Kasner class, and we conclude with a discussion. In the appendix we entertain the possibility of considering Kasner-type solutions with complex exponents. Even though in some cases the metric can still be brought to real form, these spaces contain closed timelike curves and thus cannot be considered as viable solutions of any gravitational theory. \section{Lovelock Lagrangian and equation of motion} The action of Lovelock gravity and the corresponding equation of motion are given by a sum of homogeneous polynomial terms in the Riemann curvature, each of them multiplied by a coupling $c_k$ with length dimension $L^{2(k-1)}$ relative to the Einstein-Hilbert term. Action and equations for these theories are most simply written in terms of differential forms \begin{eqnarray} \mathcal{L}&=&\sum_{k=1}^Nc_k\,\frac{2^k}{(2k)!(d-2k)!}\epsilon_{a_1 a_2\cdots a_d}\,R^{a_1a_2}\wedge \cdots \wedge R^{a_{2k-1}a_{2k}}\wedge e^{a_{2k+1}}\wedge \cdots \wedge e^{a_d} ~,\\ G_{\ c}^{b}&=& \sum_{k=1}^N c_k\,\frac{2^{k-1}}{(2k)!(d-2k-1)!}\,\epsilon_{a_1 a_2\cdots a_{d-1} c}\,R^{a_1a_2}\wedge \cdots \wedge R^{a_{2k-1}a_{2k}}\wedge e^{a_{2k+1}}\wedge \cdots \wedge e^{a_{d-1}}\wedge e^b ~. 
\label{einstein} \end{eqnarray} In this language, the torsion and curvature forms are defined via Cartan's structure equations, \begin{eqnarray} T^a &=& De^a=de^a+\omega^a_{\ b}\wedge e^b ~,\label{streq}\\ R^a_{\ b}&=&d\omega^a_{\ b}+\omega^a_{\ c}\wedge \omega^c_{\ b} ~,\nonumber \end{eqnarray} for which we have introduced a covariant exterior derivative, $D$, with the corresponding connection 1-form $\omega^a_{\ b}$, in addition to the usual exterior operator $d$. In order to make contact with the usual tensorial formulation, one imposes that the torsion is zero and solves for the spin connection in terms of the vielbein. The equation of motion above is obtained upon variation with respect to the vielbein only, leaving the spin connection unchanged, as the spin connection variation is proportional to the torsion, which is set to zero. Alternatively, one can introduce a set of $(2k,2k)$-rank tensors \cite{kastor}, products of $k$ Riemann tensors, completely antisymmetric in both their upper and their lower indices, \begin{equation} \left.\right.^{(k)}\mathbb{R}^{b_1 b_2 \cdots b_{2k}}_{a_1 a_2 \cdots a_{2k}}= R^{[b_1 b_2}_{\quad \quad [a_1 a_2}\cdots R^{b_{2k-1} b_{2k}]}_{\qquad \qquad a_{2k-1} a_{2k}]}~. \label{kastensor} \end{equation} With all indices lowered, this tensor is also symmetric under the exchange of both groups of indices, $a_i\leftrightarrow b_i$. In a similar way we will denote the contractions of $\mathbb{R}$ simply as \begin{equation} \left.\right.^{(k)}\!\mathbb{R}^{b_1 b_2 \cdots b_{J}}_{a_1 a_2 \cdots a_{J}}=\left.\right.^{(k)}\!\mathbb{R}^{b_1 b_2 \cdots b_{J} c_{J+1} \cdots c_{2k}}_{a_1 a_2 \cdots a_{J} c_{J+1} \cdots c_{2k}} \qquad ; \quad \forall \, J<2k ~.
\end{equation} In terms of these new objects we can now write \begin{equation} \mathcal{L}=-\sum_{k=1}^N c_k\, {}^{(k)}\mathbb{R} \qquad \text{and} \qquad G^a_{\ b}=\sum_{k=1}^N c_k\left(k\,{}^{(k)}\mathbb{R}^a_{\ b}- \frac12 {}^{(k)}\mathbb{R}\,\delta^a_{\ b}\right)~. \end{equation} In any of the formulations, this reduces to the usual Einstein-Hilbert action ($N=1$) for $d=4$, with the first non-trivial $N=2$ Gauss-Bonnet (GB) correction appearing in five or higher dimensions. Note that the $N$th order term in the action gives a non-trivial contribution to the equations only for $d\geq2N+1$. The GB contribution to the action can be written as \begin{equation} \mathcal{L}_{GB}\equiv R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}-4R_{\mu\nu}R^{\mu\nu}+R^2~. \end{equation} Varying this term with respect to the metric tensor $g_{\mu\nu}$ (equivalently the vielbein), we obtain \begin{eqnarray} G^{(2)\mu}{}_\nu=2(R^\mu{}_{\sigma\rho\alpha}R_{\nu}{}^{\sigma\rho\alpha} -2R^\mu{}_{\rho\nu\sigma}R^{\rho\sigma} -2R^\mu{}_{\sigma}R^{\sigma}{}_\nu+RR^\mu{}_\nu)-\frac{1}{2}\mathcal{L}_{GB}\delta^\mu{}_\nu ~,\nonumber \end{eqnarray} the GB analogue of the Einstein tensor. For reasons that will become clear later, we will be concerned here with the case of pure Lovelock theories. For any given dimensionality $d=2N+1$ or $d=2N+2$, we will consider the maximal degree $N$th order term as the only one in the Lovelock series, {\it i.e.} $c_N=1$ and $c_k=0$ for $k\neq N$. This particular class of theories has some special properties not shared by any other Lovelock theory. Notice that it includes Einstein gravity in three and four dimensions. In fact, pure Lovelock theories preserve some features of $d=3,4$ general relativity that are not respected by higher dimensional Einstein's theory \cite{dgj,dgj2,dad2}. As already mentioned, in dimension three the Riemann tensor can be written in terms of the Ricci, in such a way that Ricci flat solutions are actually completely flat, they have zero Riemann.
In dimension four this does not happen and the Weyl tensor is in general non-zero. Pure Lovelock gravities as described above generalize this property to any dimension, as is easy to show. In odd $d=2N+1$ dimensions the ($2N,2N$)-rank tensor defined previously can be written completely in terms of the Lovelock-Ricci $\mathbb{R}^a_{\ b}$ (or the corresponding Einstein tensor), \begin{equation} \mathbb{R}^{b_1\cdots b_{2N}}_{a_1\cdots a_{2N}}=\frac{1}{(2N)!}\epsilon^{b_1\cdots b_{2N+1}}\,\epsilon_{a_1\cdots a_{2N+1}}G^{a_{2N+1}}_{\qquad b_{2N+1}}~, \label{main} \end{equation} which clearly shows that it vanishes when the corresponding Einstein tensor vanishes. However, for even dimensionality, $d=2N+2$, the above expression is not valid. The analogous expression would not vanish because it involves the ($4$th rank) Lovelock-Riemann tensor, $\mathbb{R}^{ab}_{\ \ cd}$. This proves in general, without reference to any particular solution, that pure Lovelock gravity is {\it kinematic} in all odd $d=2N+1$ dimensions. In the case of non-zero cosmological constant, it can be easily verified that the Lovelock-Weyl tensor, the traceless part of $\mathbb{R}^{ab}_{\ \ cd}$, vanishes, as is the case for the usual Weyl tensor in three dimensional Einstein gravity. In all relevant odd or even dimensions the whole information about the $(2N,2N)$-rank tensor is contained in $\mathbb{R}^{ab}_{\ \ cd}$, so we will be referring to this quantity only for all practical purposes. We will be comparing the results obtained for this Lovelock-Riemann tensor with those of other 4th rank tensors defined below.
In particular, for $N=2$ we may define two $4$th rank tensors (see \cite{xd} for details and definitions for any $N$) \begin{equation}{}^{(2)}\mathcal{R}^{\alpha\beta}{}_{\chi\gamma}\equiv R^{\mu\nu\alpha\beta} R_{\mu\nu\chi\gamma}+ 4 R^{[\alpha\mu} R^{\beta]}{}_{\mu\chi\gamma}+R R^{\alpha\beta}{}_{\chi\gamma} \end{equation} and \begin{equation} {}^{(2)}\mathcal{M}^{\alpha\beta}{}_{\chi\gamma}\equiv R^{[\alpha\mu}{}_{\chi\gamma} R^{\beta]}{}_\mu-R^{[\alpha}{}_{[\chi}R^{\beta]}{}_{\gamma]}+ R_{\mu[\chi\nu}{}^{[\alpha}R^{\beta]\mu}{}_{\gamma]}{}^\nu ~. \end{equation} Related to the first of these objects, one of us \cite{bianchi} defined an alternative Lovelock-Riemann analogue \begin{equation} {}^{(2)}\mathcal{F}^{\alpha\beta}{}_{\chi\gamma}\equiv {}^{(2)}\mathcal{R}^{\alpha\beta}{}_{\chi\gamma}-\frac{\mathcal{R}}{(d-1)(d-2)} \delta^\alpha{}_{[\chi}\delta^\beta{}_{\gamma]} ~, \end{equation} where $\mathcal{R}$ is the complete contraction of $\mathcal{R}^{\alpha\beta}{}_{\chi\gamma}$ and $d=5,6$. Kastor's Lovelock-Riemann tensor \cite{kastor}, in turn, can be simply written as \begin{equation} {}^{(2)}\mathbb{R}^{\alpha\beta}{}_{\chi\gamma}\equiv \frac29\left(\frac14\, {}^{(2)}\mathcal{R}^{\alpha\beta}{}_{\chi\gamma} - {}^{(2)}\mathcal{M}^{\alpha\beta}{}_{\chi\gamma}\right)~. \end{equation} In Sec.~VI it will be shown that for a particular class of pure GB vacuum solutions in odd $d=5$, ${}^{(2)}\mathcal{R}^{\alpha\beta}{}_{\chi\gamma}$ is non-zero while ${}^{(2)}\mathbb{R}^{\alpha\beta}{}_{\chi\gamma}$ actually vanishes. \section{Kasner metrics} Consider a $d$-dimensional metric with flat $t=\text{const.}$ slices, for which each of the $n=d-1$ spatial directions scales differently with time in a polynomial fashion, \begin{equation} ds^2= \sum_{i=1}^n t^{2\,p_i} dx_i^2 -dt^2~. \label{Kasner} \end{equation} This is known as the Kasner metric and describes a homogeneous but anisotropic space.
This type of metric was also considered long ago by Deruelle \cite{Deruelle1989} in the GB case (see also \cite{Kitaura1991,Kitaura1993a} for a more general account of anisotropic models in Lovelock theories). For this class of metrics, each additional power of the Riemann curvature adds a factor $t^{-2}$ to the Lovelock-Riemann tensor, and the strength of the singularity increases accordingly with the Lovelock order. Focusing on the dynamics near the big-bang singularity located at $t=0$, this means that the dominant contribution will come from the highest order Lovelock term in the action. Therefore, pure Lovelock theories capture the leading order behavior in this regime. In fact, Kasner metrics, as written above, will not, in general, be exact solutions of the equations of motion when considering terms of different curvature order in the action. If we want to analyze the subleading behaviour of the metric we may keep the next lower order Lovelock term and expand the metric further with two polynomial terms in time, $g_{ii}\sim t^{2p_i}+\beta_i t^{2q_i}$. Lower Lovelock terms become relevant for sufficiently long times, {\it i.e.} $\mathcal{L}_{N-k}/\mathcal{L}_N\sim c_{N-k}t^{2k}\sim 1$. Since for pure Lovelock the only relevant dimensions are $d=2N+1, 2N+2$, we will be considering the corresponding cases, $n=2N$ and $n=2N+1$, respectively for odd and even dimensions. In what follows, we write the Lovelock-Einstein tensor, $G^{(N)}{}^a{}_b$, corresponding to the $N$th order Lovelock term, for the metric (\ref{Kasner}) for odd and even dimensions. Only the diagonal components are non-zero.
For odd $d=2N+1$ dimensionality, there are $2N$ exponents $p_i$ and the spatial components of the Lovelock-Einstein tensor are \begin{equation}\label{eq14} G^{(N)}{}^i{}_i=\frac{N(2N-2)!}{t^{2N}}\left( 2N-1-\sum_{m=1}^n p_m\right) \underbrace{p_j \cdots p_l}_{2N-1},\quad m,j,l\neq i,\quad j<\cdots<l~, \end{equation} whereas the time component yields \begin{equation}\label{eq15} G^{(N)}{}^{t}{}_{t}=-\frac{N(2N-1)!}{t^{2N}} p_1\cdots p_{2N}\,. \end{equation} Note that $G^{(N)}{}^i{}_i$ does not involve $p_i$; {\it i.e.} there are only $2N-1$ exponents $p_{j\neq i}$ in the last product or the sum. In particular, for $d=5$ GB these expressions reduce to $$ G^{(2)}{}^1{}_1=\frac{4}{t^{4}}( 3-p_2-p_3-p_4)p_2p_3p_4 $$ and $$G^{(2)}{}^{t}{}_{t}=-\frac{12}{t^{4}} p_1p_2p_3p_4.$$ In even $d=2N+2$ dimensions the expressions are similar, yet a bit more involved, \begin{equation}\label{eq16} G^{(N)}{}^i{}_i=\frac{N(2N-2)!}{t^{2N}}\left( \left(2N-1-\sum_{m=1}^n p_m\right)\left(\sum_{j\cdots l=1}^{2N+1} \underbrace{p_j \cdots p_l}_{2N-1}\right)+ \underbrace{p_j \cdots p_l}_{2N}\right)\,\,j,k,l,m\neq i;\,\,j<k\cdots<l \end{equation} The number of $p_j$ in each product term in the sum is $2N-1$, and there are $\binom{2N}{2N-1}=2N$ such terms. Besides, the time component reads \begin{equation} G^{(N)}{}^{t}{}_{t}=-\frac{N(2N-1)!}{t^{2N}} \sum_{j,\cdots,l=1}^n \underbrace{p_j \cdots p_l}_{2N}=-\frac{N(2N-1)!}{t^{2N}}p_1\cdots p_{2N+1} \sum_{i=1}^{2N+1}\frac{1}{p_i},\quad j<k\cdots<l\label{tcomponent} \end{equation} for which each term in the sum is a product of $2N$ exponents $p_i$. Each product involves a combination of all the $2N+1$ $p_i$, {\it i.e.} there are $\binom{2N+1}{2N}=2N+1$ terms.
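The formulas above can be cross-checked numerically; the sketch below (not from the paper, names ours) verifies that the general odd-$d$ expressions (\ref{eq14})--(\ref{eq15}) reproduce the quoted $d=5$ GB components, and that the two forms of the even-$d$ time component (\ref{tcomponent}) agree, using exact rational arithmetic.

```python
# Numeric sanity check (not from the paper) of eqs. (eq14), (eq15) and
# (tcomponent). Exact rational arithmetic; t is set to 1 since it only
# enters through the overall t^{-2N}.
from fractions import Fraction
from itertools import combinations
from math import factorial, prod
import random

def G_spatial_odd(N, p, i):
    """Eq. (eq14): G^i_i in odd d = 2N+1 (len(p) == 2N), with t = 1."""
    others = [p[m] for m in range(2 * N) if m != i]
    return N * factorial(2*N - 2) * (2*N - 1 - sum(others)) * prod(others)

def G_time_odd(N, p):
    """Eq. (eq15): G^t_t in odd d = 2N+1, with t = 1."""
    return -N * factorial(2*N - 1) * prod(p)

rng = random.Random(0)
for _ in range(20):
    p = [Fraction(rng.randint(-9, 9), rng.randint(1, 9)) for _ in range(4)]
    p1, p2, p3, p4 = p
    # quoted d=5 Gauss-Bonnet (N=2) expressions
    assert G_spatial_odd(2, p, 0) == 4 * (3 - p2 - p3 - p4) * p2 * p3 * p4
    assert G_time_odd(2, p) == -12 * p1 * p2 * p3 * p4

    # even-d time component (N=2): the sum over products of 2N exponents
    # equals p_1...p_{2N+1} * sum_i 1/p_i, provided all p_i are non-zero
    q = [x if x != 0 else Fraction(1) for x in p] + [Fraction(rng.randint(1, 9))]
    lhs = sum(prod(c) for c in combinations(q, 4))
    rhs = prod(q) * sum(1 / x for x in q)
    assert lhs == rhs
```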
For GB in $d=6$ we may write $$ G^{(2)}{}^1{}_1=\frac{4}{t^{4}}\left[( 3-p_2-p_3-p_4-p_5)(p_2p_3p_4+p_2p_3p_5+p_2p_4p_5+p_3p_4p_5)+p_2p_3p_4p_5\right] $$ and $$G^{(2)}{}^{t}{}_{t}=-\frac{12}{t^{4}}(p_1p_2p_3p_4+p_1p_2p_3p_5+p_1p_2p_4p_5+p_1p_3p_4p_5+p_2p_3p_4p_5)=-\frac{12}{t^{4}}p_1p_2p_3p_4p_5\left(\frac{1}{p_1}\!+\!\frac{1}{p_2}\!+\!\frac{1}{p_3}\!+\!\frac{1}{p_4}\!+\!\frac{1}{p_5}\right) ~. $$ \section{Isotropy conditions} Even though Kasner spacetimes are anisotropic in general, in many situations we are interested in the isotropic case for which all the spatial components of the Lovelock-Einstein tensor are equal. This is the case for vacuum or perfect fluid solutions that will be the focus of our analysis in the following sections. We shall therefore analyze the isotropy conditions \begin{equation} G^{(N)}{}^i{}_i-G^{(N)}{}^j{}_j=0 \label{isocond} \end{equation} for every pair $i,j$. Not all equations written in this way are independent, only $n-1$ are, and we can choose, for instance, a set with fixed $i$ or $j$, say $i=1$. \subsection{Odd dimension $d=2N+1$} For odd dimensionality these conditions read \begin{equation} G^{(N)}{}^i{}_i-G^{(N)}{}^j{}_j\sim(p_i-p_j)\left(\sum_{k=1}^{2N}p_k-2N+1\right) \underbrace{p_l\cdots p_m}_{2N-2}=0, \quad l\cdots m \neq i,j. \end{equation} In particular, for five dimensional pure GB it reduces to \begin{equation} G^{(2)}{}^1{}_1-G^{(2)}{}^2{}_2\sim(p_1-p_2)\left(p_1+p_2+p_3+p_4-3\right) p_3 p_4 =0 \end{equation} and two similar equations for $G^{(2)}{}^1{}_1-G^{(2)}{}^3{}_3$ and $G^{(2)}{}^1{}_1-G^{(2)}{}^4{}_4$.
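The quoted $d=5$ factorization can be checked directly from the explicit components; the sketch below (not from the paper, names ours) confirms it with exact arithmetic, including the overall constant $4/t^4$.

```python
# Check (not from the paper): for d=5 pure GB,
# G^1_1 - G^2_2 = (4/t^4) (p1 - p2)(p1 + p2 + p3 + p4 - 3) p3 p4.
from fractions import Fraction
import random

def G_ii_5d(p, i):
    """Explicit d=5 GB spatial component (t = 1): 4 (3 - sum_others) prod_others."""
    others = [p[m] for m in range(4) if m != i]
    pr = 1
    for x in others:
        pr *= x
    return 4 * (3 - sum(others)) * pr

rng = random.Random(1)
for _ in range(50):
    p = [Fraction(rng.randint(-9, 9), rng.randint(1, 9)) for _ in range(4)]
    p1, p2, p3, p4 = p
    lhs = G_ii_5d(p, 0) - G_ii_5d(p, 1)
    rhs = 4 * (p1 - p2) * (p1 + p2 + p3 + p4 - 3) * p3 * p4
    assert lhs == rhs
```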
\subsection{Even dimension $d=2N+2$} In turn, in even dimensions we find \begin{equation} G^{(N)}{}^i{}_i-G^{(N)}{}^j{}_j\sim(p_i-p_j)\left(\sum_{k=1}^{2N+1}p_k-2N+1\right) \sum \underbrace{p_l\cdots p_m}_{2N-2}=0,\quad l\cdots m \neq i,j \end{equation} where each product in the sum contains $2N-2$ different $p_l$ that are a combination of all $(2N-1)$ available exponents with $l\neq i,j$, {\it i.e.} there are $\binom{2N-1}{2N-2}=2N-1$ of those terms. In six dimensions, $N=2$, we have \begin{equation} G^{(2)}{}^1{}_1-G^{(2)}{}^2{}_2\sim(p_1-p_2)\left(p_1+p_2+p_3+p_4+p_5-3\right) (p_3 p_4+p_3p_5+p_4p_5) =0 \end{equation} and three similar equations for $G^{(2)}{}^1{}_1-G^{(2)}{}^3{}_3$, $G^{(2)}{}^1{}_1-G^{(2)}{}^4{}_4$ and $G^{(2)}{}^1{}_1-G^{(2)}{}^5{}_5\, .$ \subsection{Isotropy types} There are several ways of solving the isotropy conditions. In both odd and even dimensions these reduce to a product of three factors. Therefore, for each pair $i,j$ at least one of those factors must vanish. Depending on the choice of the vanishing factors we will have several families of solutions or {\it isotropy types}. \begin{itemize} \item The first family --type {\bf (a)}-- corresponds to the trivial solution $p_i=p_j$ for every $i,j$, {\it i.e.} all the exponents are equal and the metric is trivially isotropic. \item Another very simple solution --type {\bf (b)}-- corresponds to setting to zero the second factor, which in fact does not depend on the pair $i,j$ we pick, \begin{equation} 2N-1-\sum_{k=1}^{d-1}p_k=0 ~. \end{equation} A single condition ensures isotropy in this case. Whereas {\bf (a)} type solutions form a one parameter family of metrics, {\bf (b)} type spaces form an $(n-1)$-parameter family of solutions. \item If we choose to set the third factor of the isotropy conditions to zero instead --type {\bf (c)}--, for every pair $i\neq j$ we get different conditions in odd and even dimensions.
In odd $d=2N+1$ we have to solve \begin{equation} \underbrace{p_l\cdots p_m}_{2N-2}=0 \quad l\cdots m \neq i,j~, \label{oddc} \end{equation} whereas in even $d=2N+2$ dimensions the condition is \begin{equation} \sum \underbrace{p_l\cdots p_m}_{2N-2}= \underbrace{p_l\cdots p_m}_{2N-1}\sum{\frac{1}{p_k}}=0 \quad l\cdots m,k \neq i,j~. \label{evenc} \end{equation} The form of the solutions is similar though. As we will see below, in odd $d=2N+1$ this isotropy type implies that two exponents vanish, say $p_1=p_2=0$, whereas in even $d=2N+2$ three exponents have to vanish, $p_1=p_2=p_3=0$ for instance. In both cases, these solutions correspond to vacuum spacetimes and will be considered in detail in Section VI. In the $N=1$ case with $d=3, 4$ we have $2N-2=0$, and hence these conditions become vacuous. There are no solutions in this class for Einstein-Hilbert gravity. \item The last possibility --type {\bf (d)}-- is to combine isotropy types {\bf (a)} and {\bf (c)} in such a way that for some pairs $i,j$ we have $p_i=p_j$, whereas for others the third factor in the corresponding isotropy condition is the one that vanishes. As we will see below, this will only be possible in even $d=2N+2$ dimensions. \end{itemize} Isotropy types {\bf (a)} and {\bf (b)} are simple enough and do not require any more explanation. Let us discuss types {\bf (c)} and {\bf (d)} in more detail. In odd dimensions enforcing just one of the type {\bf (c)} conditions readily implies that one of the exponents vanishes, say $p_1=0$. Automatically all the components of the Lovelock-Einstein tensor vanish except for $G^1_{\ 1}$, but isotropy implies that the remaining component must also vanish, and hence another $p_{i\neq 1}=0$. The rest of the exponents may take any value. To be more concrete let us focus for a moment on $d=5$ pure GB gravity, for which (\ref{oddc}) reads, say for $i=1$, $$ p_2p_3=0,\;p_2p_4=0,\; p_3p_4=0~.$$ This implies that two of the $p_i$ exponents with $i\neq 1$ must be zero.
All the components of the Lovelock-Einstein tensor vanish in that case, therefore all solutions in this isotropy type are vacuum spacetimes. This is also true in general, that is, isotropy type {\bf (c)} in all odd $d=2N+1$ dimensions implies vacuum spacetime. We will comment more on this later on. The even dimensional counterpart of the above statement is a bit more complicated to get. For $d=6$ pure GB again, we have for $i=1$, $$ p_4p_5+p_3p_5+p_3p_4=0,\;p_4p_5+p_2p_4+p_2p_5=0,\;p_2p_3+p_2p_5+p_3p_5=0,\; p_3p_4+p_2p_4+p_2p_3=0$$ This implies that three of the $p_i$ must be zero. Note that, if we set one of the $p_i$ to zero, say $p_1=0$, then the isotropy conditions would read $$ p_2(p_4p_5+p_3p_5+p_3p_4)=0,\; p_3(p_4p_5+p_2p_4+p_2p_5)=0,\;p_4(p_2p_3+p_2p_5+p_3p_5)=0,\;p_5(p_3p_4+p_2p_4+p_2p_3)=0~,$$ up to a factor $\sum p_i-2N+1$ that we assume to be non-zero. Otherwise the solution would already be considered in the type {\bf (b)} class. From the previous equations it is now obvious that two more $p_i$'s are zero. Again, this would lead to vacuum solutions which will be analyzed further in the corresponding section. For general $N$, imposing the above condition (\ref{evenc}) just for one pair $i,j$ implies either that two $p_{l\neq i,j}$ exponents are zero, for instance $p_1=p_2=0$, or else $$ \sum_{k\neq i,j}\frac{1}{p_k}=0~.$$ In the first case, $p_1=p_2=0$, all components of $G^a_{\ b}$ but $G^1_{\ 1}=G^2_{\ 2}$ yield zero. Furthermore, isotropy requires that the rest must also vanish, which implies one more $p_{i\neq 1,2}=0$, and we have again a vacuum solution. The other possibility, $\sum 1/p_{k\neq i,j}=0$, tacitly assumes that none of the exponents involved $p_{k\neq i,j}$ is zero. However, when considering another pair such that $p_l\neq p_j,p_i$, the only way to have $G^l_{\ l}-G^i_{\ i}=0$ is that at least one $p_{k\neq i,j,l}=0$, thus contradicting the initial assumption.
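The even-dimensional factorization used in this argument can also be confirmed directly; the sketch below (not from the paper, names ours) checks it for $d=6$ pure GB from the explicit components, including the overall constant $4/t^4$.

```python
# Check (not from the paper): for d=6 pure GB,
# G^1_1 - G^2_2 = (4/t^4)(p1 - p2)(sum_k p_k - 3)(p3 p4 + p3 p5 + p4 p5).
from fractions import Fraction
from itertools import combinations
from math import prod
import random

def G_ii_6d(p, i):
    """Explicit d=6 GB spatial component (t = 1) quoted in the text."""
    o = [p[m] for m in range(5) if m != i]
    return 4 * ((3 - sum(o)) * sum(prod(c) for c in combinations(o, 3)) + prod(o))

rng = random.Random(4)
for _ in range(50):
    p = [Fraction(rng.randint(-9, 9), rng.randint(1, 9)) for _ in range(5)]
    p1, p2, p3, p4, p5 = p
    lhs = G_ii_6d(p, 0) - G_ii_6d(p, 1)
    rhs = 4 * (p1 - p2) * (sum(p) - 3) * (p3 * p4 + p3 * p5 + p4 * p5)
    assert lhs == rhs
```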
The only remaining possibility is that all the $p_k$'s are organized in two groups such that $p_k=p_1$ or $p_k=p_2$, with $p_1\neq p_2$. This corresponds to isotropy type {\bf (d)} and we just have to impose one restriction \begin{equation} G^1_{\ 1}-G^2_{\ 2}\sim\sum_{k\neq 1,2}\frac{1}{p_k}=0~. \end{equation} The rest of the isotropy conditions are either equivalent to this one or trivial. For $p_{1,2}$ having multiplicities $n_{1,2}$ with $n_1+n_2=2N+1$, this reduces to \begin{equation} \frac{n_1-1}{p_1}+\frac{n_2-1}{p_2}=0 \label{typed} \end{equation} with $n_{1,2}\neq 1$. For $d=6$ GB gravity, we have for instance $n_1=3$, $n_2=2$ and $p_1=-2p_2$, and it is easy to check that the isotropy condition $G^1_{\ 1}=G^2_{\ 2}$ is verified. \section{Perfect fluid} Once the isotropy conditions are imposed, the equation of motion coupled to matter has the form $T^\mu_{\ \nu}=\text{diag}(G^t_{\ t},G^i_{\ i},\ldots,G^i_{\ i})$, which matches precisely the form of the stress-energy tensor of a perfect fluid with energy density $\rho=-G^t_{\ t}$ and pressure $P=G^i_{\ i}$. Moreover, all components of the Lovelock-Einstein tensor $G^a_{\ b}$ scale as $t^{-2N}$ with time, and consequently the same behavior applies to the fluid $\rho$ and $P$ supporting these geometries. Energy density and pressure are, in this way, proportional to each other and satisfy a linear barotropic equation of state of the form $P=\omega\, \rho$ with $\omega=\omega(p_i;d,N)$. \subsection*{{\bf (a)} \ All equal $p_i$ exponents} We have density and pressure as given by $$\rho= -G^{(N)}{}^{t}{}_{t}=\frac{(d-1)!}{2}\left(\frac{p}{t}\right)^{2N}$$ and $$P=G^{(N)}{}^i{}_i=\frac{(d-1)!}{2}\left(\frac{p}{t}\right)^{2N}\left(\frac{2N}{d-1}\frac{1}{p}-1\right)$$ where $d=2N+1, 2N+2$ respectively for odd and even dimensions. The equation of state is $$P=\left( \frac{2N}{d-1}\frac1p-1\right)\rho$$ which, taking radiation to mean $\omega=1/3$, gives for $p=\frac{2N}{d-1}, \frac{3N}{2(d-1)}, \frac{N}{d-1}$ dust, radiation and stiff fluid respectively.
It is an FRW flat model. \subsection*{{\bf (b)} \ $\sum_{k=1}^{d-1}p_k=2N-1$} Note that the $G^{(N)}{}^i{}_i$ component is free of $p_i$. We can however substitute in Eqs. (\ref{eq14}) and (\ref{eq16}) $ 2N-1-\sum_{m\neq i} p_m=p_i $ so that in both cases $G^{(N)}{}^i{}_i$ has the same form as $G^{(N)}{}^{t}{}_{t}$ except for a numerical factor. Thus, upon substitution, all spatial stresses become equal, and it turns out that $$P=\frac{1}{2N-1}\rho$$ for both odd and even dimensions. \subsection*{{\bf (c)} \ $d-2N+1$ zero exponents.} As explained before, this isotropy type corresponds to two and three zero exponents in odd and even dimensions respectively. These, as already mentioned, are vacuum solutions and thus have vanishing energy density and pressure, $\rho=P=0$. \subsection*{{\bf (d)} \ $p_i=p_{1,2}$ with multiplicities $n_{1,2}$} Among the solutions that set the third factor of the isotropy conditions to zero, this is the only non-vacuum case, and this class of perfect fluid solutions exists only for even $d=2N+2\geq6$ dimensions. As before we have to further impose the condition $$ \frac{n_1-1}{p_1}+\frac{n_2-1}{p_2}=0$$ with $n_{1,2}\neq 1$ and $n_1+n_2=2N+1$. Substituting this into the equations of motion we get \begin{equation} G^t_{\ t}\sim -(2N-1)p_1^{n_1-1}p_2^{n_2-1}(p_1+p_2) \end{equation} whereas the spatial components are \begin{equation} G^1_{\ 1}=G^2_{\ 2}\sim p_1^{n_1-1}p_2^{n_2-1}\left(2N-1-(n_1-1)p_1-(n_2-1)p_2\right) \end{equation} Thus we get an equation of state, $$ P=\left(\frac{2N-1-(n_1-1)p_1-(n_2-1)p_2}{(2N-1)(p_1+p_2)}\right)\rho =\frac{1-\Delta\chi}{\Delta\chi}\rho~.$$ The nice parametrization above corresponds to $p_1=(n_1-1)\chi$, $p_2=-(n_2-1)\chi$ and $\Delta=n_1-n_2$. Notice that this class of solutions allows very large values of $p_{1,2}$ (equivalently $\chi$), in which case we obtain $P=-\rho$. Conversely, for small $p_{1,2}\rightarrow 0$ we have an {\it almost vacuum} solution with $P\approx \frac{\rho}{p_1+p_2}$.
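The type {\bf (d)} example of the text can be verified end to end; the sketch below (not from the paper, names ours) takes $d=6$ GB with $n_1=3$, $n_2=2$, $p_1=-2p_2$ and checks both the isotropy of the stresses and the quoted equation of state.

```python
# Numeric check (not from the paper) of the d=6 pure GB type (d) fluid:
# exponents (-2, -2, -2, 1, 1) satisfy eq. (typed); all spatial stresses
# coincide and P/rho matches the quoted barotropic factor.
from fractions import Fraction
from itertools import combinations
from math import prod

def G_ii_6d(p, i):
    """Explicit d=6 GB spatial component (t = 1) quoted in the text."""
    o = [p[m] for m in range(5) if m != i]
    return 4 * ((3 - sum(o)) * sum(prod(c) for c in combinations(o, 3)) + prod(o))

def G_tt_6d(p):
    """Explicit d=6 GB time component (t = 1)."""
    return -12 * sum(prod(c) for c in combinations(p, 4))

p1, p2 = Fraction(-2), Fraction(1)
p = [p1, p1, p1, p2, p2]
assert 2 / p1 + 1 / p2 == 0                 # eq. (typed) with n1=3, n2=2

spatial = [G_ii_6d(p, i) for i in range(5)]
assert len(set(spatial)) == 1               # isotropic stresses

rho, P = -G_tt_6d(p), spatial[0]
# quoted equation of state: P = (3 - 2 p1 - p2)/(3 (p1 + p2)) rho
assert P == (3 - 2 * p1 - p2) / (3 * (p1 + p2)) * rho
```

For this choice the barotropic factor evaluates to $\omega=-2$, outside the usual energy conditions, in line with the remark below on the allowed range of $\omega$.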
The parameter $\omega=P/\rho$ thus spans the whole range $[-1,\infty)$ in this case. In principle this parametrization would allow for fluids with lower barotropic parameter $\omega$ but these would violate all of the usual energy conditions. Notice that the barotropic constant depends on two parameters in this case, one discrete, $n_1$, and the other continuous, $p_1$. For each possible value of $1<n_1<2N$ (or $\Delta$) we have a continuous function, $\omega=\omega(p_1)$. \section{Vacuum solutions} Vacuum spacetimes can be considered as a subset of the more general perfect fluid class of solutions from the previous section. Since in all cases we have an equation of the type $P=\omega \rho$, we just need to impose one more condition, $\rho=0$, to get the vacuum solutions. Clearly a perfect fluid or vacuum solution for pure Lovelock would not in general satisfy the isotropy condition for the Einstein tensor. In odd $d=2N+1$ dimensions $\rho=0$ implies that at least one of the exponents has to be zero. This fact combined with the isotropy conditions yields two types of solutions. Considering the isotropy type {\bf (b)} we get solutions of the form \begin{equation} p_1=0\quad \text{and}\quad p_2=2N-1-\sum_{i=3}^{2N}p_i~. \end{equation} We have already mentioned that all isotropy type {\bf (c)} solutions are vacuum. These have not one but two zero exponents, say \begin{equation} p_1=p_2=0. \end{equation} In accordance with the discussion above, these solutions will be referred to as type {\bf (b)} and {\bf (c)} vacuum metrics respectively. In the way defined so far, these two families have a non-empty intersection. To be precise, we will consider as type {\bf (b)} metrics those with {\it only} one vanishing exponent, $p_{i>1}\neq 0$, in order to have mutually exclusive categories. These are all the non-trivial vacuum Kasner solutions of the theory in odd dimensions.
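Both odd-dimensional vacuum families can be checked explicitly; the sketch below (not from the paper, names ours) verifies that the $d=5$ type {\bf (b)} and {\bf (c)} configurations annihilate every component of the Lovelock-Einstein tensor.

```python
# Numeric check (not from the paper): in d=5 pure GB, type (b) vacua,
# p1 = 0 with p2 = 3 - p3 - p4, and type (c) vacua, p1 = p2 = 0, make all
# Lovelock-Einstein components vanish.
from fractions import Fraction
import random

def G_components_5d(p):
    """All d=5 GB components (t = 1): [G^t_t, G^1_1, ..., G^4_4]."""
    comps = [-12 * p[0] * p[1] * p[2] * p[3]]
    for i in range(4):
        o = [p[m] for m in range(4) if m != i]
        comps.append(4 * (3 - sum(o)) * o[0] * o[1] * o[2])
    return comps

rng = random.Random(2)
for _ in range(20):
    p3, p4 = (Fraction(rng.randint(-9, 9), rng.randint(1, 9)) for _ in range(2))
    # type (b): one zero exponent, the rest summing to 2N-1 = 3
    assert all(c == 0 for c in G_components_5d([0, 3 - p3 - p4, p3, p4]))
    # type (c): two zero exponents, the rest arbitrary
    assert all(c == 0 for c in G_components_5d([0, 0, p3, p4]))
```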
Type {\bf (a)} solutions are trivial as all the exponents are equal and thus $\rho=0$ implies that the metric is just Minkowski. Moreover, there are no type {\bf (d)} vacuum metrics in odd dimensions. In fact, in odd dimensions there exists no solution with all exponents $p_i$ non-vanishing. Besides, in the type {\bf (b)} case at least one of the exponents has to be positive whereas the sign of the rest is a priori unconstrained. There are no further restrictions on the exponents, $p_{i>2}$, of type {\bf (c)} solutions, on their signs or otherwise. Both vacuum families of solutions have the same number of free parameters: $2N-2$ free exponents, $p_{i>2}$. For even $d=2N+2$ dimensions, there are two ways of solving the vacuum condition $\rho=0$. We may have two flat directions, say $p_1=p_2=0$, or $$\sum_{i=1}^{2N+1}\frac1{p_i}=0~.$$ This can be combined with the different isotropy types to give three different families of solutions. The first two families are analogous to those found in the odd dimensional case and correspond to having at least two zero exponents. For type {\bf (b)} metrics we have \footnote{Remarkably one particular solution in both odd and even dimensions is that with all the nonzero $p$ equal to one. For $d=3, 4$ in Einstein gravity this is actually the only solution in this class.} \begin{equation} p_1=p_2=0\quad \text{and}\quad p_3=2N-1-\sum_{i=4}^{2N+1}p_i \end{equation} whereas type {\bf (c)} solutions are all vacuum as explained before. These have three flat directions instead of just two, \begin{equation} p_1=p_2=p_3=0 \end{equation} These two families of solutions will be referred to as vacuum types {\bf (b.1)} and {\bf (c)} respectively. We again further constrain type {\bf (b.1)} metrics to have {\it only} two flat directions, {\it i.e.} $p_{i>2}\neq0$, so that there is no intersection between types {\bf (b.1)} and {\bf (c)}.
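The even-dimensional analogues admit the same direct verification; the sketch below (not from the paper, names ours) checks that $d=6$ type {\bf (b.1)} and {\bf (c)} configurations are vacuum.

```python
# Numeric check (not from the paper): in d=6 pure GB, type (b.1) vacua,
# p1 = p2 = 0 with p3 = 3 - p4 - p5, and type (c) vacua, p1 = p2 = p3 = 0,
# annihilate all Lovelock-Einstein components.
from fractions import Fraction
from itertools import combinations
from math import prod
import random

def G_components_6d(p):
    """All d=6 GB components (t = 1): [G^t_t, G^1_1, ..., G^5_5]."""
    comps = [-12 * sum(prod(c) for c in combinations(p, 4))]
    for i in range(5):
        o = [p[m] for m in range(5) if m != i]
        comps.append(4 * ((3 - sum(o)) * sum(prod(c) for c in combinations(o, 3))
                          + prod(o)))
    return comps

rng = random.Random(3)
for _ in range(20):
    p4, p5 = (Fraction(rng.randint(-9, 9), rng.randint(1, 9)) for _ in range(2))
    assert all(c == 0 for c in G_components_6d([0, 0, 3 - p4 - p5, p4, p5]))
    assert all(c == 0 for c in G_components_6d([0, 0, 0, p4, p5]))
```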
For the last family of solutions, corresponding to the case of all $p_i$ being non-zero --only compatible with type {\bf (b)}--, we have \begin{equation} \sum_{i=1}^{2N+1}p_i=2N-1 \quad \text{and} \quad \sum_{i=1}^{2N+1}\frac1{p_i}=0 ~. \label{b2cond} \end{equation} This family will be referred to as {\bf (b.2)} and is the familiar Kasner metric in its most generic form. Clearly, there have to be both positive and negative exponents $p_i$, at least one of each kind. For type {\bf (b.1)}, as happens in odd dimensions, at least one exponent has to be positive but there may be none of negative sign. There are no constraints on the signs for type {\bf (c)} metrics. Again, type {\bf (a)} solutions are trivial and correspond to Minkowski flat space and, even though we have type {\bf (d)} metrics in even dimensions, $\rho=0$ would require either $p_{1,2}=0$ or $p_1+p_2=0$, both conditions incompatible with Eq. (\ref{typed}). The latter condition would imply $n_1=n_2$, but this is impossible as the sum of the multiplicities has to be odd, $n_1+n_2=2N+1$. Therefore, there are no vacuum type {\bf (d)} solutions. All this discussion is summarized in Table \ref{classes}. In odd $d=2N+1$ dimensions we have two $(2N-2)$-parameter families of solutions whereas for even $d=2N+2$ dimensions we have three sectors, two $(2N-2)$- and one $(2N-1)$-parameter families of solutions. It is clear that for $d=3, 4$ type {\bf (c)} vacuum solutions are just the trivial flat solution while for $d>4$ we have non-trivial vacuum solutions in that class. \begin{table}[ht] \begin{tabular}{c|c||c|c||c} type & $d$ & isotropy cond. & vacuum & vac.
curvature\\ \hline\hline {\bf (a)} & any $d$ & $p_i=p \ ,\quad \forall i=1,2\ldots d-1$ & $p=0$ & $R=\mathcal{R}=\mathbb{R}=0$\\ \hline {\bf (b)} & $2N+1$ & & $p_1=0$ & $\mathbb{R}=0~$ $^{(\#_1)}$\\ \hskip.4in {\bf --b.1--}\, & $2N+2$ & $\sum_{i=1}^{d-1}p_i=2N-1$ & $p_1=p_2=0$ & $\mathbb{R}=0~$ $^{(\#_1)}$\\ \hskip.4in {\bf --b.2--}\, & $2N+2$ & & $\sum_{i=1}^{d-1}\frac1{p_i}=0$ & $ R,\mathcal{R},\mathbb{R}\neq 0$\\ \hline {\bf (c)} & $2N+1$ & $p_1=p_2=0$ & \multirow{2}{*}{\it all} & \multirow{2}{*}{$\mathcal{R}=\mathbb{R}=0~$ $^{(\#_2)}$}\\ & $2N+2$ & $p_1=p_2=p_3=0$ & & \\ \hline {\bf (d)} & \multirow{2}{*}{$2N+2$} & $p_i=p_{1,2}$ with multiplicities $n_{1,2}$ & \multirow{2}{*}{\it none} & \\ & & $\frac{n_1-1}{p_1}+\frac{n_2-1}{p_2}=0~, \quad n_1+n_2=2N+1$ & & \end{tabular} \caption{Classification of isotropic and vacuum solutions. ${(\#_1)}$ $\mathcal{R}=0$ as well for the subset of type {\bf(b)} ({\bf(b.1)} in $d=2N+2$) with all non-zero exponents equal to one, $p_{i\neq1,2}=1$. ${(\#_2)}$ For {\it flat Kasner} (naively type {\bf(c)}) we also have $R=0$. For $N=2$, except for these {\it exceptional} cases, all the tensors that are not in the table are non-vanishing.} \label{classes} \end{table} \subsection{Curvature tensors} We may now compute the different tensorial quantities defined in the Introduction, namely Kastor's $\mathbb{R}_{abcd}$ and the alternative $\mathcal{R}_{abcd}$, but also the Riemann tensor, $R_{abcd}$, for the different vacuum types. For simplicity we will denote these $4$th rank tensors just as $\mathbb{R}$, $\mathcal{R}$ and $R$ respectively, not to be confused with their respective contractions. Note that Lovelock vacuum is defined by $\mathbb{R}_{ab}=\mathcal{R}_{ab}=0$. The computation of these tensors is very simple for type {\bf (c)} vacuum metrics, for which both tensors are actually zero. This is easy to understand as in both tensors there is at least one set of $2N$ antisymmetrized indices.
For type {\bf (c)} metrics, for both odd and even dimensions, we have at most $2N-2$ non-zero exponents, {\it i.e.} $2N-1$ non-flat directions including time. As a consequence, at least one of the $2N$ antisymmetrized indices will have to lie along one of the flat directions. As any component of the Riemann tensor involving that direction is zero, for all dimensions and Lovelock orders we have $\mathcal{R}=0$ and ${}^{(2)}\mathbb{R}=0$ (and then also $\mathcal{M}=0$). Another way of stating the same thing is that the spacetime has reduced effective dimension $2N-1$ (the number of non-flat directions), with $d-2N+1$ flat directions added to it, and the corresponding Lovelock-Riemann is therefore zero. The Lovelock-Riemann tensor is non-trivial only in $d_{\text{eff}}\geq 2N$. It would have to be $d_{\text{eff}}\geq 2N+1$ for the Lovelock-Einstein to be non-zero. Moreover, as discussed in \cite{xd}, in $d_{\text{eff}}=2N$ the Lovelock-Riemann is completely determined by the Lovelock scalar, whereas for $d_{\text{eff}}=2N+1$ it can be given in terms of the Lovelock-Einstein. Note that the Lovelock scalar is proportional to the trace of the Lovelock-Einstein except when $d_{\text{eff}}=2N$. In that case the Lovelock-Einstein vanishes but the corresponding scalar in general does not. It does vanish in vacuum as we will see below. Vacuum spacetimes have non-trivial Lovelock-Riemann only in dimension $d_{\text{eff}}\geq 2N+1$; below that threshold all vacua are Lovelock-flat, {\it i.e.} the corresponding Lovelock-Riemann is zero. For type {\bf (b)} solutions in odd dimensions (or {\bf (b.1)} in even dimensions), a similar argument holds. The effective dimension is a priori $d_{\text{eff}}=2N$ in this case. In principle we have enough indices so that the antisymmetrization does not necessarily yield zero as before.
We may realize however that, if all exponents are either zero or one, all components of the Riemann tensor with any time index vanish, $R^{0i}_{\ \ 0i}=p_i(p_i-1)t^{-2}=0$, thus effectively removing that direction as well. We reduce again to $d_{\text{eff}}=2N-1$, thus implying $\mathcal{R}=\mathbb{R}=0$. Notice that all such solutions belong to types {\bf(b.1)} and {\bf (c)} (depending on the number of zeros and ones) in both the relevant odd and even dimensions. One can easily check that, at least for $d=5,6$ pure GB, type {\bf (c)} metrics, and those of type {\bf (b.1)} ({\bf (b)} in 5d) with all non-zero exponents equal to one, are the only solutions verifying $\mathcal{R}=0$. Something similar happens for the Lovelock-Riemann. In this case, as the effective dimension is $d_{\text{eff}}=2N$, this tensor can be written completely in terms of a scalar, basically the Lovelock term for that effective dimension \cite{xd}. Also the equation of motion will be given in terms of this invariant, therefore it has to vanish. We can explicitly check that by computing the effective Lovelock scalar, \begin{equation} \mathbb{R}_{\text{eff}}^{(N)}=\frac{(2N)!}{(2N-1)t^{2N}}\left[\sum_{i=1}^{2N-1}p_i -2N+1\right]\prod_{j=1}^{2N-1}p_j \end{equation} where we have set $p_{i>2N-1}=0$. For type {\bf (b)} metrics this quantity is zero, and thus the whole Lovelock-Riemann tensor is also zero. Note that the Lovelock-Einstein is not trivial in this case as the {\it real} dimension is bigger than $2N$. All of type {\bf (c)} and type {\bf (b.1)} metrics are Lovelock flat solutions. This had to be the case in odd $d=2N+1$ dimensions because of kinematicity, but it is a non-trivial statement in even dimensions. Contrary to $\mathcal{R}$, the Lovelock-Riemann tensor vanishes for all type {\bf (b.1)} solutions, not just for those with $p_{i\neq1,2}=1$. Summarizing, the least restrictive condition among the ones we use is Lovelock flatness.
The Lovelock-Riemann condition, $\mathbb{R}=0$, singles out type {\bf (b.1)} (all of type {\bf (b)} in odd dimensions) and {\bf (c)} metrics. $\mathcal{R}=0$ is verified for all of type {\bf (c)} but only for the subsector of type {\bf (b.1)} with all non-zero exponents equal to one. These spaces effectively have reduced dimensionality, $d_{\text{eff}}\leq2N-1$, thus any tensor with $2N$ antisymmetrized indices vanishes for these metrics. Moreover, the Riemann tensor vanishes only when all exponents are zero except for one that may be zero or one. Both possibilities correspond to flat spacetime, the former being just Minkowski space in its canonical form. The latter --the so called {\it flat Kasner}--, even though naively belonging to a different isotropy type, also corresponds to a patch of Minkowski space in a different set of coordinates. Therefore the previous solutions are, in general, Lovelock flat but not Riemann flat. Besides, only type {\bf (b.2)} vacuum solutions have a non-trivial Lovelock-Riemann tensor. One can in fact verify that the Lovelock flatness condition for 6-dimensional pure GB metrics precisely implies that these have to have at least two flat directions, and thus belong to types {\bf (b.1)} or {\bf (c)}. The other tensors are also non-vanishing for {\bf (b.2)} metrics. We can use the previous results concerning the vanishing of the different curvature tensors to actually classify all vacuum Kasner solutions into the corresponding families. For that we just have to be careful in identifying the {\it exceptional} cases, {\it i.e.} type {\bf (b)} metrics with all non-zero exponents equal to one and {\it flat Kasner}. \subsection{Structure of vacuum solution space: the {\it Kasner Shamrock}} The space of vacuum solutions can be easily visualized in the space parametrized by the exponents.
Except for type {\bf (b.2)}, which has a more complicated form, all of type {\bf (b)} and {\bf (c)} vacuum solutions correspond to intersections of planes in $\mathbb{R}^{d-1}$ (we will denote this space $\mathbb{R}^{d-1}_p$): two in odd $d=2N+1$ dimensions, and three in even $d=2N+2$ dimensions. These are the Lovelock flat solutions. For the simplest case, $N=2$, GB gravity in $d=5,6$ dimensions, we can visualize this in the three dimensional parameter space spanned by $(p_1,p_2,p_3)$ with the remaining exponents set to zero. We have in this way four families of solutions corresponding to $p_1=0$, $p_2=0$, $p_3=0$ --type {\bf (c)}-- and $p_1+p_2+p_3=3$ --type {\bf (b)} or {\bf(b.1)} in odd and even dimensions respectively-- (see Figure \ref{planes}). \begin{figure} \begin{center} \includegraphics[scale=1]{planes} \end{center} \caption{Lovelock flat solutions in GB gravity in the $p_4=0$ plane in $d=5$ (also in $d=6$ with $p_5=0$). The vertical and horizontal planes correspond to each of the exponents being zero, $p_i=0$ for $i=1,2,3$, whereas the remaining one is $p_1+p_2+p_3=3$.} \label{planes} \end{figure} The remaining case, type {\bf (b.2)}, has a much richer structure and will be explored in detail in what follows. These solutions verify the isotropy condition {\bf (b)}, $2N-1-\sum_{i=1}^{d-1} p_i=0$, which implies that the volume of $t=\text{const.}$ slices grows as $t^{2N-1}$. This is also true for vacuum type {\bf (b.1)} spaces. In addition, type {\bf (b.2)} solutions are the only ones that have a non-trivial Lovelock-Riemann tensor; they are not Lovelock flat and are in this sense more {\it generic}. In fact they are of the most generic Kasner form, with all exponents non-vanishing. Type {\bf (b.2)} vacuum solutions verify one condition fewer than the other two families present in even dimensions. In fact, we will see that type {\bf (b.1)} solutions appear at the boundary, in some appropriate sense, of the bigger {\bf (b.2)} family.
When taking two of the exponents to zero, the second of the conditions (\ref{b2cond}) has to be understood as a limit and, depending on how this is taken, we may solve the condition for any value of the remaining $p_i$. This condition is thus vacuous and we recover the {\bf (b.1)} type solutions. These will appear as codimension one limiting surfaces in the space of {\bf (b.2)} solutions. Notice that type {\bf (c)} metrics are completely different in this sense; they cannot be recovered in this way as they correspond to a different isotropy type. All type {\bf (b)} metrics lie in the plane $\sum p_i=2N-1$ in $\mathbb{R}^{d-1}_p$ whereas type {\bf (c)} metrics do not. As we have two constraints on the exponents, we can always solve for two of them in terms of the remaining ones. Defining two new parameters $$\zeta=p_1+p_2=2N-1-\sum_{k=3}^{2N+1} p_k\quad \mbox{and} \quad \xi=\frac1{p_1}+\frac1{p_2}=- \sum_{i=3}^{2N+1}\frac{1}{p_i}$$ we can determine $p_1$ and $p_2$ as solutions of a quadratic equation $$p^2-\zeta\,p+\frac{\zeta}{\xi}=0$$ yielding $$p_1=\frac{\zeta}2\left[1 \pm \sqrt{1-\frac4{\zeta\xi}}\right],\quad p_2=\frac{\zeta}2\left[1 \mp \sqrt{1-\frac4{\zeta\xi}}\right]~,$$ all the remaining exponents $p_{k\neq 1,2}$ being free. There is only one limitation to the above formula: the product of the two parameters cannot be in the $(0,4)$ range, therefore \begin{equation} \zeta\xi\geq 4 \quad \text{or} \quad \zeta\xi\leq 0~, \label{reality} \end{equation} otherwise the values of $p_{1,2}$ become complex. This constrains the possible values the remaining $p_i$ can take. For a pair of complex conjugate exponents we can make a complex change of variables rendering the metric real. These are also solutions of the pure Lovelock (also Einstein) equations, however they contain closed timelike curves and thus do not give physically interesting well defined spacetimes. We will comment more on this in appendix \ref{app}.
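The quadratic-root construction can be exercised numerically; the sketch below (not from the paper, names ours) picks free exponents in $d=6$, builds $\zeta$ and $\xi$, solves for $p_{1,2}$ and verifies both conditions (\ref{b2cond}).

```python
# Numeric check (not from the paper) of the quoted roots for p1, p2 in d=6
# pure GB type (b.2) vacua, with the free exponents chosen as p3 = p4 = p5 = -1.
from math import isclose, sqrt

p_free = [-1.0, -1.0, -1.0]                  # p3, p4, p5, chosen freely
zeta = 3 - sum(p_free)                       # zeta = p1 + p2 = 2N-1 - sum
xi = -sum(1 / p for p in p_free)             # xi = 1/p1 + 1/p2
assert zeta * xi >= 4 or zeta * xi <= 0      # reality condition (reality)

disc = sqrt(1 - 4 / (zeta * xi))
p1 = zeta / 2 * (1 + disc)
p2 = zeta / 2 * (1 - disc)

p = [p1, p2] + p_free
assert isclose(sum(p), 3)                                # sum p_i = 2N - 1
assert isclose(sum(1 / x for x in p), 0, abs_tol=1e-12)  # sum 1/p_i = 0
```

For this choice one finds $p_{1,2}=3\pm\sqrt{7}$, one positive and one negative exponent as required for {\bf (b.2)}.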
Another interesting limiting case of the above formul\ae\ is $\zeta=0$, which necessarily implies $\xi=0$ as well, unless $p_1=p_2=0$ in which case we recover a type {\bf (b.1)} family of solutions. For $\zeta=\xi=0$, we can still choose the limiting value of $\zeta/\xi$ in such a way that we obtain $p_1=-p_2=p$, a one-parameter family of solutions. Alternatively, we may try to see what the space of allowed $p_3, p_4, \cdots$ looks like for fixed $\xi$ and $\zeta$, similar to how the Kasner solutions of Einstein gravity can be visualized as the intersection of a plane and a sphere $$ \sum p_i=1\quad; \qquad \sum p_i^2=1. $$ In three dimensions these two constraints reduce the space of nontrivial solutions to just one, one of the $p_i$ parameters being zero and the other equal to one. This is what is known as the {\it flat Kasner} solution, as it has zero Riemann curvature. In four dimensions we can split the solutions into those with two vanishing exponents, the other being one --type {\bf (b.1)}, again {\it flat Kasner} in this case-- and those with all $p_i$ nonzero --type {\bf (b.2)}-- for which the sphere constraint is equivalent to the above $\sum 1/p_i=0$, with $i=1,2,3$, when the equation for the plane is used. The only type {\bf (c)} metric is trivial in this case. The reality conditions (\ref{reality}) in this case simply amount to the exponents, all three, being in the range $p_i\in [-1/3,1]$. The upper bound is obvious from the sphere condition; otherwise at least one of the other exponents has to be complex. One obvious difference between $d=4$ Einstein gravity and higher dimensional pure Lovelock is that the exponents are bounded as $|p_i|\leq1$ in the former case, equality meaning that the solution is {\it flat Kasner}. This seems to have some relation to the stability of these solutions \cite{Petersen2015}.
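The $d=4$ statements above can be verified with the standard one-parameter Kasner solution; the sketch below (not from the paper) uses the textbook parametrization $p_1=-u/(1+u+u^2)$, $p_2=(1+u)/(1+u+u^2)$, $p_3=u(1+u)/(1+u+u^2)$ and checks the plane, the sphere, the equivalent $\sum 1/p_i=0$ condition, and the $[-1/3,1]$ range.

```python
# Exact check (not from the paper) of the d=4 Einstein Kasner constraints
# using the standard u-parametrization of the Kasner circle.
from fractions import Fraction

def kasner_4d(u):
    """Standard one-parameter d=4 Kasner exponents."""
    w = 1 + u + u * u
    return [-u / w, (1 + u) / w, u * (1 + u) / w]

for k in range(1, 50):
    u = Fraction(k, 7)                       # sample rational u > 0
    p = kasner_4d(u)
    assert sum(p) == 1                       # plane
    assert sum(x * x for x in p) == 1        # sphere
    assert sum(1 / x for x in p) == 0        # equivalent (b.2) condition
    assert all(Fraction(-1, 3) <= x <= 1 for x in p)
```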
In $d=6$ GB gravity we can visualize the three dimensional space of $p_3,p_4,p_5$ in a similar way with equations, $$ p_3+p_4+p_5=3-\zeta \quad ; \qquad \frac{1}{p_3}+\frac{1}{p_4}+\frac{1}{p_5}=-\xi. $$ For even dimensions higher than $6$ --and corresponding Lovelock order $N\geq 3$-- we can always decompose the space of exponents $p_i$ into lower dimensional subspaces similar to the one above. This will allow us to suitably represent the space of solutions and also to exploit the rich symmetry structure of these equations. We now analyze the six dimensional case in detail. To make the analogy closer to the $d=4$ Einstein case we can rescale the exponents, $q_i=p_i/(3-\zeta)$, such that $$ q_3+q_4+q_5=1 \quad ; \qquad \frac{1}{q_3}+\frac{1}{q_4}+\frac{1}{q_5}=\xi(\zeta-3)~. $$ The case $\zeta=3$ has to be treated separately. Notice that for $\xi=0$ we recover precisely the $d=4$ space of solutions, the {\it Kasner sphere} --a circle in this case--, whereas in general it gets deformed. To see how the deformation parameter, $$k=\xi(\zeta-3)=\left(\frac{1}{p_1}+\frac{1}{p_2}\right)\left(p_1+p_2-3\right)~ ,$$ affects the space of solutions we can plot these orbits. In order to simplify the visualization we project onto the plane $q_3+q_4+q_5=1$ parametrizing $q_3=\frac{1+X-\sqrt{3}Y}{3}$, $q_4=\frac{1+X+\sqrt{3}Y}{3}$, $q_5=\frac{1-2X}{3}$ and plot in terms of the new $(X,Y)$ coordinates (see Figure \ref{Kspace}). Cold colors correspond to negative values of $k$ whereas warm colors indicate positive $k$. \begin{figure} \begin{center} \includegraphics[scale=.32]{KasnerGeometries3.pdf}~ \includegraphics[scale=.32]{KasnerGeometries_scale.pdf} \includegraphics[scale=.52]{KasnerGeometries-fdtal3.pdf} \end{center} \caption{Level diagram for the function $K(X,Y)=\sum_{i=1}^3 q_i^{-1}(X,Y)$ as defined in the text. The orbits of solutions for the different values of $k$ correspond to the level sets of the function $K$, {\it i.e.} $K(X,Y)=k$.
The numbers on the color scale correspond to $\frac{2}{\pi}\arctan(k)$.} \label{Kspace} \end{figure} We see the thick line corresponding to the Kasner circle ($k=0$) and the white lines corresponding to one of the exponents being zero, $q_i=0$, or $k\rightarrow\pm\infty$. These lines correspond precisely to type {\bf (b.1)} metrics. For $k$ to be infinite we need $\xi\to \pm\infty$ (we cannot take $\zeta\to \infty$, otherwise the rescaling would not be well defined) and thus either $p_1$ or $p_2$ vanishes as well. The intersections of the white lines correspond to two of the $q_i$ vanishing, the remaining {\bf(b.1)} solutions, for any value of $k$ in this case (equivalently, any value of $p_{1,2}$). We can, in this way, analyze the whole set of type {\bf (b)} --both {\bf (b.1)} and {\bf(b.2)}-- vacuum even dimensional solutions. To complete the analysis one just has to include type {\bf (c)} solutions, which cannot be represented in this way. Another interesting set of solutions is $k=1$, for which the orbit again yields three lines in the space of solutions. These correspond to each of the exponents being one, $q_i=1$ for $i=3,4,5$, which automatically solves the constraints. This same triangle plays a prominent role in $d=4$ Einstein gravity as the vertices can be used to define the {\it Kasner map}. This map takes the Kasner circle to itself and corresponds to the evolution of type II Bianchi models that have Kasner asymptotics both to the future and the past. The iteration of this map represents the so-called {\it Mixmaster attractor}, the subsequent transitions between Kasner epochs as we approach the big-bang singularity in more general (Bianchi IX) models (see \cite{Heinzle2009} for a recent discussion). It is reasonable to think that a similar map may exist for Lovelock gravity models as well, despite the fact that the previous chaotic behaviour disappears in higher dimensions for vacuum Einstein gravity \cite{Szydowski1987,Szydowski1987a,Turkowski1988}.
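The projection used in Figure \ref{Kspace} can be checked with a few lines of Python (an illustrative sketch, not part of the derivation): the plane constraint $q_3+q_4+q_5=1$ holds identically in $(X,Y)$, and the center $(X,Y)=(0,0)$, where $q_i=1/3$, lies on the $k=9$ orbit.

```python
import math

# Illustrative sketch of the projection onto the plane q3 + q4 + q5 = 1
# used in Figure "Kspace" (d = 6, Gauss-Bonnet), and of the level function
# K(X, Y) = sum_i 1/q_i whose level sets are the k-orbits.
def q_of(X, Y):
    s3 = math.sqrt(3.0)
    return ((1 + X - s3 * Y) / 3, (1 + X + s3 * Y) / 3, (1 - 2 * X) / 3)

def K(X, Y):
    return sum(1.0 / q for q in q_of(X, Y))

# The plane constraint holds identically in (X, Y)...
assert abs(sum(q_of(0.7, -0.3)) - 1) < 1e-12
# ...and the isotropic center (X, Y) = (0, 0), i.e. q_i = 1/3, sits on the
# k = 9 orbit -- the smallest k whose orbit enters the all-positive
# inner triangle (AM-HM inequality).
assert abs(K(0.0, 0.0) - 9) < 1e-12
```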
The characterization of the space of solutions performed in this section can thus be taken as the starting point for the more general analysis of big-bang singularities for this class of theories. Interestingly enough, the previous representation makes apparent the existence of three axes of symmetry that correspond to the exchange of a pair of exponents, $q_i\leftrightarrow q_j$. We have three possible such exchanges. There is also another symmetry that corresponds to a rotation by $2\pi/3$, which in turn represents a cyclic permutation of the three exponents. Using these symmetries we can always restrict the values of $X$ and $Y$ to one of the six fundamental domains, each corresponding to a different permutation of $(p_3,p_4,p_5)$. The representation of Figure \ref{Kspace} is useful, for instance, to discuss the signs of the exponents. In the center of the inner triangle all three $q_i$ are positive, whereas each time one crosses one of the white lines one of the exponents changes sign. We can thus see that over the Kasner circle we always have one negative and two positive exponents. This is also true for any solution corresponding to a positive value of $k$, except for the piece of the orbit inside the inner triangle (only for $k\geq 9$) that has all three exponents greater than zero. Negative $k$ orbits split into two parts as well. The piece contained inside the Kasner circle $k=0$ still has two positive and one negative exponent, whereas the pieces in the outer blue triangles have two negative directions. The only solutions that cannot be represented in the $(X,Y)$ plane as above correspond precisely to $\zeta=3$. In this case the system reduces to $$ p_3+p_4+p_5=0 \quad ; \qquad \frac{1}{p_3}+\frac{1}{p_4}+\frac{1}{p_5}=-\xi $$ and can be treated in a very similar manner.
We can again rescale the exponents, $\bar{q}_i=-\xi \, p_i$, so that now we set to one the parameter in the second equation, \begin{equation} \bar{q}_3+\bar{q}_4+\bar{q}_5=0 \quad ; \qquad \frac{1}{\bar{q}_3}+\frac{1}{\bar{q}_4}+\frac{1}{\bar{q}_5}=1 \label{invertedcircle} \end{equation} Using a projection similar to the one above, $\bar{q}_3=\frac{X-\sqrt{3}Y}{3}$, $\bar{q}_4=\frac{X+\sqrt{3}Y}{3}$, $\bar{q}_5=\frac{-2X}{3}$, now on the plane $\sum_{i=3}^5 \bar{q}_i=0$, we can represent this orbit as shown in Figure \ref{inversion} below (in blue). \begin{figure}[b] \begin{center} \includegraphics[scale=.7]{KasnerInversion.pdf} \end{center} \caption{Representation of the Kasner circle upon inversion as defined in Eq.~(\ref{invertedcircle}) with $\bar{q}_3=\frac{X-\sqrt{3}Y}{3}$, $\bar{q}_4=\frac{X+\sqrt{3}Y}{3}$, $\bar{q}_5=\frac{-2X}{3}$. } \label{inversion} \end{figure} Besides, if we change the signs of all the exponents $q_i$ we get the corresponding curves in the complementary regions (in red). Notice that the same system can be obtained from the generic case by rescaling the $q$'s (or equivalently $X,Y$) by $k$ and taking the limit $k\rightarrow 0$. These curves represent, in a sense, the boundary of the $(X,Y)$ space above, brought to a finite distance. Note that in the original representation the $q_i=p_i/(3-\zeta)$ diverge as $\zeta\to 3$. Besides, we can also view this as a {\it Kasner sphere} with inverted coordinates, $q_i\rightarrow 1/\bar{q}_i$. Instead of just for $\zeta=3$, we could have used the representation in terms of $\bar{q}_i$ and the corresponding rescaling from the beginning. We would then have obtained $$ \bar{q}_3+\bar{q}_4+\bar{q}_5=k \quad ; \qquad \frac{1}{\bar{q}_3}+\frac{1}{\bar{q}_4}+\frac{1}{\bar{q}_5}=1 $$ where the value of $k=\xi(\zeta-3)$ is the same as before. In principle, any point can be represented in two equivalent ways related as $\bar{q}_i=k q_i$.
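Eq.~(\ref{invertedcircle}) can be verified numerically; the following sketch (with an arbitrary, illustrative choice of exponents) rescales a $\zeta=3$ solution, $p_3+p_4+p_5=0$, by $-\xi$ and checks both constraints.

```python
# Illustrative check of Eq. (invertedcircle): for a zeta = 3 solution,
# i.e. p3 + p4 + p5 = 0 with 1/p3 + 1/p4 + 1/p5 = -xi, the rescaled
# exponents qbar_i = -xi * p_i satisfy
#   qbar3 + qbar4 + qbar5 = 0   and   1/qbar3 + 1/qbar4 + 1/qbar5 = 1.
p = (1.0, 1.0, -2.0)                  # arbitrary choice with p3 + p4 + p5 = 0
xi = -sum(1.0 / x for x in p)         # here xi = -3/2
qbar = [-xi * x for x in p]           # (3/2, 3/2, -3)
assert abs(sum(qbar)) < 1e-12
assert abs(sum(1.0 / x for x in qbar) - 1) < 1e-12
```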
In addition to this, we can also get one system of equations from the other through an inversion, $q_i\leftrightarrow 1/\bar{q}_i$. This inversion can then be rephrased as a symmetry of the original equations, whether in terms of $q_i$, $\bar{q}_i$ or $p_i$. This symmetry amounts to \begin{equation} \frac{p_i}{\zeta-3}\leftrightarrow\frac{1}{\xi p_i}\quad \text{or} \quad q_i\leftrightarrow \frac{1}{k q_i} \quad \text{or} \quad \bar{q}_i\leftrightarrow \frac{k}{\bar{q}_i} \label{invsym} \end{equation} We can find similar symmetries for any subset of the exponents except for the complete set. For any non-zero value of $k$, this symmetry generates from one piece of the orbit another piece with the same number of positive and negative exponents (for positive $k$) or with all signs reversed (for negative $k$). The only points for which we cannot choose which representation to use are those with $k=0$. Depending on whether $\zeta=3$ or $\xi=0$, the corresponding orbit can be plotted using $\bar{q}_i$ or $q_i$ respectively. Still, we can easily obtain one of these orbits from the other through the inversion, $q_i\leftrightarrow 1/\bar{q}_i$. The representation in terms of the $\bar{q}_i$ is otherwise not very useful. The first representation, as already seen, is projected on the plane $q_3+q_4+q_5=1$ whereas for the second we can project on $\bar{q}_3+\bar{q}_4+\bar{q}_5=k$. Notice that, contrary to the original visualization scheme, each point in the $(X,Y)$ plane now does not have a unique value of $k$ associated to it, but three: there are three orbits with different values of $k$ through every point. This makes the latter representation much more complicated to plot and not very enlightening. Remarkably enough, the inversion symmetry $q_i\leftrightarrow 1/(k\,q_i)$, when translated into the $(X,Y)$ plane, admits a very clear and elegant geometric realization.
To see this, instead of performing just the inversion, we will define three different {\it inversion maps}, each of them being the composition of the inversion with one of the three possible exchange symmetries, $q_i\leftrightarrow q_j$ with $i\neq j$. Remember that these correspond to reflection symmetries about each of the three reflection axes of Figure \ref{Kspace}. We will refer to them as $f_i$ with $i=1,2,3$; {\it e.g.} $f_1$ corresponds to the composition of the inversion with the $q_2\leftrightarrow q_3$ exchange. The other maps are defined analogously. Each of these maps, the composition of a reflection symmetry with the inversion, verifies a very interesting geometric property. The straight line on the $(X,Y)$ plane connecting the original point with its image via the map always passes through one of the intersections of two white lines, $q_i=0$ or $k\to\pm\infty$ (see Figure \ref{dualmap}, in green). Conversely, the intersection of one such straight line with the orbit of a given $k$ value yields two points related by the map. This is very reminiscent of the way one can define the {\it Kasner map} geometrically; the only difference is the points used to trace the lines. The {\it Kasner map} has the intersections of the $k=1$ lines as focal points, instead of those of $k\to\pm\infty$. If we pick a different green point to trace the line, the resulting points are related through a different one of the maps. Moreover, from a given point, tracing the lines through the three green points we obtain the corresponding three images. These images are all, up to a reflection symmetry, the image under inversion of the same point. Therefore they are related through the composition of two such reflections, {\it i.e.} they are related by a $2\pi/3$ rotation. Graphically, they form an equilateral triangle around the origin of the $(X,Y)$ plane. Using this map we can restrict to the values of $(X,Y)$ that are inequivalent under the inversion.
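The stated geometric property can be tested numerically. The sketch below is illustrative; it assumes, consistently with the convention above, that the map composed with the exchange $q_3\leftrightarrow q_4$ is paired with the green point $q_3=q_4=0$, located at $(X,Y)=(-1,0)$ in the projection of Figure \ref{Kspace}. It checks that a point, its image, and that green point are collinear, and that the inversion preserves the $k$-orbit.

```python
import math

# Illustrative check of the inversion-map geometry (d = 6). Assumption: the
# map composed with the exchange q3 <-> q4 has the green point q3 = q4 = 0,
# located at (X, Y) = (-1, 0) in the projection of Figure "Kspace".
def XY(q):
    # project onto the plane q3 + q4 + q5 = 1
    return (1 - 3 * q[2]) / 2, math.sqrt(3.0) * (q[1] - q[0]) / 2

q = (3.0, -1.0, -1.0)                 # a point with sum q_i = 1
k = sum(1.0 / x for x in q)           # k = -5/3
q_inv = [1.0 / (k * x) for x in q]    # the inversion keeps the same k-orbit:
assert abs(sum(q_inv) - 1) < 1e-9
assert abs(sum(1.0 / x for x in q_inv) - k) < 1e-9

image = (q_inv[1], q_inv[0], q_inv[2])   # compose with the q3 <-> q4 exchange
(x1, y1), (x2, y2) = XY(q), XY(image)
gx, gy = -1.0, 0.0                       # green point q3 = q4 = 0
# collinearity: the cross product of (P2 - P1) and (G - P1) vanishes
assert abs((x2 - x1) * (gy - y1) - (y2 - y1) * (gx - x1)) < 1e-9
```

Restricting to inversion-inequivalent points in this way is what allows the compact fundamental domain described next.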
This will allow us to focus on a compact set of points in that plane, despite the original space of solutions being non-compact. The boundary of that compact set (shaded region in Figure \ref{dualmap}) is given by the fixed points of any of the maps. It is easy to verify that one such set of fixed points corresponds to the circle, $$ (X-1)^2+Y^2=1 $$ whereas the rest can be obtained by rotating this set by $\pm 2\pi/3$ (see Figure \ref{dualmap}, bottom right). Each of the regions contained in the shaded region generates under inversion one of the corresponding regions outside. More precisely, given the set of solutions for a given $k$ inside the three circles we get its counterpart outside. We can actually perform this operation in a completely geometric way, just tracing lines. Given three points related by $2\pi/3$ rotations inside the shaded region, one can trace the nine lines through these and the green points mentioned before. These nine lines cross in groups of three outside the shaded region, generating the images of the three original points under inversion (see Figure \ref{dualrev}). These are also related by $2\pi/3$ rotations. In this way, given the constant $k$ orbits in the shaded region we get the corresponding orbits outside. This operation is well defined for all points on the $(X,Y)$ plane, except for the points on the {\it Kasner circle}, which get mapped to infinity (the lines are parallel in groups of three), corresponding to the orbit $\zeta=3$, and points on the white lines. Each of these lines gets mapped to a single point, one of the green points used to generate the map. Notice that these are {\bf (b.1)} solutions that do not enjoy the inversion symmetry. The more intricate part of the shaded region corresponds to the inside of the inner triangle. In that region the circles overlap, dividing the triangle into six regions, three {\it narrow} and three {\it wide} regions.
The map thus sends the {\it narrow} regions into the {\it wide} ones and vice versa. We have chosen the {\it wide} regions to belong to our shaded {\it fundamental domain} but we could just as well have chosen the other ones. The shape of this fundamental domain consists of three equal leaves of almost circular shape, joined at the center, reminiscent of a shamrock. We may call it the {\it Kasner shamrock}. Each of the leaves of the {\it shamrock} can be divided into two halves, each of these halves contained in one of the six sectors from which one can reconstruct the space of solutions with reflections. Thus, from just one of these half-leaves we can reconstruct the whole space of solutions using reflections and inversions. \begin{figure} \begin{center} \begin{minipage}[b]{0.64\linewidth} \includegraphics[scale=1.2]{dualMap} \label{fig:minipage1} \end{minipage} \begin{minipage}[b]{0.34\linewidth} \!\!\includegraphics[scale=.6]{dualMapS1}\\ \includegraphics[scale=.6]{dualMapS2} \label{fig:minipage2} \end{minipage} \includegraphics[scale=.6]{dualMapS3}~~~\includegraphics[scale=.6]{dualMapS4}~~\includegraphics[scale=.6]{dualMapS5} \end{center} \caption{Graphical representation of the inversion maps $f_i$. Any point and its image correspond to the intersection points of straight lines through each of the green points and a given $k$-orbit. The blue lines correspond to the orbits of solutions with the same value of $k$. For $f_3$ the related green point corresponds to $q_1=q_2=0$. An example of such a map appears in the larger figure. The rest of the figures show all three images under the inversion maps for points in different regions of the solution space.
The boundary of the {\it Kasner shamrock} corresponds to the points that are mapped to themselves under any of the inversion maps (see bottom right corner figure).} \label{dualmap} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=.8]{dualMapSRev}~~\includegraphics[scale=.8]{dualMapSRev2} \end{center} \caption{Graphical representation of the inversion maps of three points related by $\pm 2\pi/3$ rotations in the $(X,Y)$ plane (on the left). The tracing of the nine straight lines joining these points (in red) with the points generating the maps (in green) determines the three corresponding image points (in orange). The only points that do not have an image in this way are those on the {\it Kasner circle}, whose lines are parallel in groups of three (on the right). } \label{dualrev} \end{figure} \section{Exponential solutions} Another class of solutions closely related to Kasner spacetimes is that of exponential type metrics of the form \begin{equation} ds^2=-dt^2+\sum_{i=1}^n e^{2H_it}dx_i^2 \end{equation} These exponential solutions are a generalization of de Sitter space, with a different {\it Hubble parameter} in every direction. This is very similar to the Kasner metric written in a different form, except that the $dt^2$ term lacks the customary $e^{2t}$ factor in front. In fact, both Kasner and exponential solutions can be treated together using a slightly more general form of the metric, \begin{equation} ds^2=-e^{2H_0t}dt^2+\sum_{i=1}^n e^{2H_it}dx_i^2~, \label{expansatz} \end{equation} that reduces to the Kasner metric for $H_0=1$ and to the exponential form for $H_0=0$. For more general values of $H_0$ we can always make a change of coordinates of the form $\tilde{t}=\frac1{H_0}e^{H_0t}$ (accompanied by a rescaling of the spatial coordinates) to bring the metric again to the Kasner form with exponents $p_i=H_i/H_0$. The exponential solutions can thus be pictured as living on the {\it boundary} of the space of Kasner solutions.
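The change of coordinates $\tilde{t}=e^{H_0t}/H_0$ can be checked numerically with arbitrary sample values of $H_0$, $H_i$ and $t$: the spatial warp factors obey $e^{2H_it}=(H_0\tilde{t})^{2H_i/H_0}$, i.e. Kasner form with $p_i=H_i/H_0$, while $e^{2H_0t}dt^2=d\tilde{t}^2$ by construction.

```python
import math

# Illustrative numerical check of the change of variables t~ = exp(H0*t)/H0
# (arbitrary sample values): the spatial warp factor becomes a power law,
#   exp(2*Hi*t) = (H0*t~)**(2*Hi/H0),   i.e.   p_i = H_i / H0,
# while -exp(2*H0*t)*dt^2 = -dt~^2 since dt~ = exp(H0*t)*dt.
H0, Hi, t = 0.7, 1.3, 0.4
t_tilde = math.exp(H0 * t) / H0
lhs = math.exp(2 * Hi * t)
rhs = (H0 * t_tilde) ** (2 * Hi / H0)
assert abs(lhs - rhs) < 1e-9 * lhs
```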
We can take a limit where $p_i\to \infty$, at least for some of the Kasner exponents, with $p_i/p_j=H_i/H_j$ fixed. The role of the Hubble exponents, $H_i$, is very similar to that of the $p_i$ of previous sections. There are just two differences. First, while the $p_i$ are dimensionless quantities, the $H_i$ have dimensions of inverse length. This is the reason the $(2,2)$ Riemann components scale as $t^{-2}$ for Kasner whereas they are constant in the exponential case. Second, the way these constants enter the Riemann curvature is just slightly different: $R^{ti}_{\ \ ti}=\frac{p_i(p_i-1)}{t^2}\rightarrow H_i^2$, while the remaining components, $R^{ij}_{\ \ ij}=\frac{p_ip_j}{t^2}\rightarrow H_iH_j$, are equal in form without the $t^{-2}$ scaling factor. For the more general ansatz the form of the $ti$ components would be $H_i(H_i-H_0)$, making the connection between the different parametrizations obvious. At the level of the pure Lovelock equations of motion, we realize that these become homogeneous in the new exponents, $H_i$. From this basic observation and the discussion of Kasner solutions we can readily get the general form of the constraints for the exponential metrics. The algebraic form of the equations for the $H_i$ parameters is the same as for the $p_i$ when we keep just the highest degree in all expressions. Lower order in $p$ terms correspond to terms with $H_0$ factors in the exponential representation, which are then set to zero. For instance, we can take the above classification of vacuum solutions for pure Lovelock gravity and write: In odd $d=2N+1$ dimensions we have two $(2N-2)$-parameter families of solutions \begin{itemize} \item $H_1=H_2=0\ \ $ --type {\bf (c)}. \item $H_1=0$ and $H_2=-\sum_{i=3}^{2N}H_i\ \ $ --type {\bf (b)}, with $H_{i>1}\neq0$, so that there is no intersection between types {\bf (c)} and {\bf (b)}.
\end{itemize} whereas in even $d=2N+2$ dimensions we have three sectors: two $(2N-2)$- and one $(2N-1)$-parameter families of solutions, \begin{itemize} \item $H_1=H_2=H_3=0\ \ $ --type {\bf (c)}. \item $H_1=H_2=0$ and $H_3=-\sum_{i=4}^{2N+1}H_i\ \ $ --type {\bf (b.1)}, again with $H_{i>2}\neq0$. \item $H_i\neq 0, \quad \forall i$ with $\sum_{i=1}^{2N+1}H_i=0$ and $\sum_{i=1}^{2N+1}\frac1{H_i}=0\ \ $ --type {\bf (b.2)}. \end{itemize} This is summarized in Table \ref{classExp}. We can do the same for perfect fluid solutions or any other class. Notice that this class of vacuum solutions does not exist in Einstein gravity without a cosmological constant. Revisiting the {\it Kasner circle} adapted to this case we get \begin{equation} \sum_{i}H_i=0\qquad ; \quad \sum_i H_i^2=0 \end{equation} and the second condition readily implies that all the exponents are zero. In order to go to the boundary of the space of solutions, as explained above, we may take $p_{1,2}\to \infty$; therefore, either $\xi\to 0$ or $\zeta\to \infty$. In the latter case we can rescale $q_i=-p_i/\zeta$ ($p_i\to \infty$) to get the equation of the $k\to\pm\infty$ orbit \begin{equation} \sum_{i=3}^{d-1}q_i=1 \quad \text{and}\quad \sum_{i=3}^{d-1}\frac1{q_i}=\xi\zeta\to \pm\infty \end{equation} The second possibility is then to rescale $\bar{q}_i=-p_i\xi$ ($p_i\to \infty$) to get \begin{equation} \sum_{i=3}^{d-1}\bar{q}_i=\xi\zeta\to 0 \quad \text{and}\quad \sum_{i=3}^{d-1}\frac1{\bar{q}_i}=1 \end{equation} This is the image of the Kasner circle under the inversion $q_i\to 1/\bar{q}_i$ (see Figure \ref{inversion} for the $d=6$ case). Moreover, taking both limits at the same time, $\xi\to 0$ and $\zeta\to\infty$, we can actually fix the value $\xi\zeta\to k$, getting the corresponding orbit, analogous to the Kasner case, and its image under inversion. The structure of the space of solutions is very similar to the Kasner case.
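As in the Kasner case, explicit type {\bf (b.2)} exponential vacua are easy to construct numerically. The sketch below (an illustrative choice for $d=6$, $N=2$) fixes three of the Hubble parameters and solves for the remaining two.

```python
import math

# Illustrative construction of a type (b.2) exponential vacuum in d = 6
# (N = 2): five Hubble parameters with sum H_i = 0 and sum 1/H_i = 0.
# Fixing H3 = H4 = H5 = 1 gives H1 + H2 = -3 and, from 1/H1 + 1/H2 = -3,
# H1*H2 = (H1 + H2)/(1/H1 + 1/H2) = 1, so H1, H2 solve H^2 + 3H + 1 = 0.
H12 = [(-3 + math.sqrt(5)) / 2, (-3 - math.sqrt(5)) / 2]
H = H12 + [1.0, 1.0, 1.0]
assert abs(sum(H)) < 1e-9                    # sum H_i = 0
assert abs(sum(1.0 / h for h in H)) < 1e-9   # sum 1/H_i = 0
```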
Notice that the inversion $H_i\to 1/H_i$ is also a symmetry of the exponential space of solutions for the whole set of exponents $H_i$ in this case. For other subsets we can define an analogous transformation similar to that of previous sections. We can actually identify the parameters $q_i$ with a rescaled version of the Hubble parameters $H_i$. We can readily analyze the form of the curvature 2-form (equivalently the Riemann tensor) for the more general ansatz (\ref{expansatz}) above, \begin{equation} R^{0i}=H_i(H_i-H_0)e^{-2H_0t}e^0\wedge e^i \quad , \qquad R^{ij}=H_iH_j e^{-2H_0t}e^i \wedge e^j~, \end{equation} and notice that for every value $H_0\neq 0$ the Riemann components diverge either to the past or to the future (at the singularity) in which case the dominant term in the Lovelock equation will correspond again to pure Lovelock. The exponential case, $H_0=0$, is quite different as the components of the Riemann tensor become constant and all Lovelock terms contribute to the same order in the equation of motion. Besides, every curvature scalar will be constant and there is no curvature singularity at all. The geometry is completely regular. In the case of Einstein gravity we can add a positive cosmological constant to get solutions of this type. In the general case we would get a complicated set of polynomial constraints involving all of the Lovelock couplings. Even in the pure Lovelock case, the addition of just a cosmological constant term would complicate the equations to a high degree. In analogy with the Kasner metrics we can also discuss the values the different curvature tensors take for vacuum exponential solutions. We summarize the situation in Table \ref{classExp}. The only difference is that the subset of sector {\bf (b)} for which $\mathcal{R}=0$ disappears. This is easy to understand as it would correspond to $H_i=H_0$ in the general parametrization, thus $H_i=0$ and they reduce to type {\bf (c)} metrics. 
For this class of solutions we have a perfect hierarchy of solutions with regard to which curvature tensors vanish. This is true for $N=2$ but is expected to hold in general. Type {\bf (a)} vacua have $R=\mathcal{R}=\mathbb{R}=0$; type {\bf (c)} have $\mathcal{R}=\mathbb{R}=0$, $R\neq 0$; type {\bf (b.1)} (or {\bf(b)} for $d=2N+1$) has $\mathbb{R}=0$, $R,\mathcal{R}\neq0$; and finally for type {\bf (b.2)} all those tensors are non-zero. The Riemann tensor is only zero for the trivial Minkowski metric in this case --there is no analogue of {\it flat Kasner}. We can then distinguish the different families of vacuum solutions just by using these tensors! In the case of Kasner metrics this is not true due to the existence of exceptions; {\it i.e.} {\it flat Kasner}, which belongs to type {\bf (c)} yet has $R=0$, and the subset of type {\bf (b)} metrics with all non-zero exponents equal to one, which have $\mathcal{R}=0$. Once these cases are taken care of separately, the above classification scheme carries over to Kasner as well. \begin{table}[ht] \begin{tabular}{c|c||c|c||c} type & $d$ & isotropy cond. & vacuum & vac.
curvature\\ \hline\hline {\bf (a)} & any $d$ & $H_i=H \ ,\quad \forall i=1,2\ldots d-1$ & $H=0$ & $R=\mathcal{R}=\mathbb{R}=0$\\ \hline {\bf (b)} & $2N+1$ & & $H_1=0$ & $\mathbb{R}=0$, $R,\mathcal{R}\neq0$\\ \hskip.4in {\bf --b.1--}\, & $2N+2$ & $\sum_{i=1}^{d-1}H_i=0$ & $H_1=H_2=0$ & $\mathbb{R}=0$, $R,\mathcal{R}\neq0$\\ \hskip.4in {\bf --b.2--}\, & $2N+2$ & & $\sum_{i=1}^{d-1}\frac1{H_i}=0$ & $ R,\mathcal{R},\mathbb{R}\neq 0$\\ \hline {\bf (c)} & $2N+1$ & $H_1=H_2=0$ & \multirow{2}{*}{\it all} & \multirow{2}{*}{$\mathcal{R}=\mathbb{R}=0$, $R\neq0$}\\ & $2N+2$ & $H_1=H_2=H_3=0$ & & \\ \hline {\bf (d)} & \multirow{2}{*}{$2N+2$} & $H_i=H_{1,2}$ with multiplicities $n_{1,2}$ & \multirow{2}{*}{\it none} & \\ & & $\frac{n_1-1}{H_1}+\frac{n_2-1}{H_2}=0~, \quad n_1+n_2=2N+1$ & & \end{tabular} \caption{Classification of isotropic and vacuum exponential type solutions in pure Lovelock gravity.} \label{classExp} \end{table} \section{Discussion} Kasner solutions play a fundamental role in the analysis of big bang singularities. In this note we have analyzed and classified Kasner type metrics in pure Lovelock theories of gravity. Regardless of the phenomenological interest of such theories, they capture the leading order dynamics in the approach to the singularity within the Lovelock class of theories. Close to the big-bang the curvature is generically diverging, thus the leading contribution will be that of the highest order curvature term, thus pure $N$th order Lovelock gravity in $d=2N+1,2N+2$. Analyzing the conditions for isotropy we were able to classify perfect fluid and vacuum solutions in several families denoted as {\bf (a)}-{\bf(d)} (just {\bf (a)}-{\bf(c)} in the vacuum case, with subtypes {\bf (b.1)} and {\bf (b.2)} in even dimensions). In vacuum the different types differ mainly in the number of flat directions, $p_i=0$, the metric has. Type {\bf (a)} vacuum metrics have all directions flat, it is just Minkowski. 
Type {\bf (c)} metrics have at least $d-2N+1$ flat directions, thus two and three in odd and even dimensions respectively, whereas type {\bf (b)} solutions have at most $d-2N$ vanishing $p_i$. In odd dimensions these have one flat direction but in even dimensions they may have either two or none, corresponding to subtypes {\bf (b.1)} and {\bf (b.2)}. Non-flat Lovelock vacua emerge only in even $d=2N+2$ dimensions, when all exponents $p_i$ are non-zero and satisfy the conditions $\sum p_i = 2N-1$ and $\sum p_i^{-1}=0$. This is true for any pure Lovelock theory in vacuum. In parallel, we analyzed the values the different Lovelock-Riemann analogue tensors take for the different vacuum types. We found a nice correspondence between the different families of solutions and the vanishing of particular sets of these 4th rank tensors, in a hierarchical way. Type {\bf (a)} solutions verify $R=\mathcal{R}=\mathbb{R}=0$; type {\bf (c)} have $\mathcal{R}=\mathbb{R}=0$ and $R\neq 0$ (except for {\it flat Kasner}, which has $R=0$); type {\bf (b.1)} --or just {\bf (b)} in odd dimensions-- yields $\mathbb{R}=0$ and $R,\mathcal{R}\neq0$ (except for the solutions with all non-zero exponents equal to unity). Finally, type {\bf (b.2)} solutions are the ones for which all these tensors are non-vanishing. We can then use these tensors to classify our solutions into the different families, taking into account the exceptions. This may be very helpful in classifying solutions in case these are not given in the canonical Kasner form, but in some other possibly complicated set of coordinates. Besides, this classification scheme may be relevant for more general classes of solutions. A similar classification scheme exists for another class of solutions closely related to Kasner.
Exponential type metrics, as those studied in section VII, are also divided into isotropy types as Kasner metrics are, and the relation to the vanishing of the different sets of tensors carries over, in this case without exceptions. The {\bf (b.2), (b.1), (c), (a)} types can be defined precisely by setting to zero one further 4-tensor at a time: none, $\mathbb{R}=0$, $\mathcal{R}=\mathbb{R}=0$, and $R=\mathcal{R}=\mathbb{R}=0$, respectively. In four dimensional Einstein gravity ($N=1$ pure Lovelock) we can analyze much more general cosmological models. These were classified long ago by Bianchi. Kasner metrics correspond to the Bianchi type I models in this classification, and represent the asymptotic solutions within those models. The situation is much more complicated for more general models, yet Kasner solutions still play a preeminent role. Bianchi type II models turn out to have Kasner asymptotics both to the past and to the future. The Kasner solutions connected through a Bianchi type II trajectory define the so-called {\it Kasner map}. For more general models, the BKL conjecture proposed that the Universe close to the initial singularity undergoes a series of oscillations, transitions between Kasner epochs where the expanding and contracting directions exchange their roles. A precise realization of this conjecture is given by the {\it Mixmaster Universe}. This represents the asymptotic behavior of Bianchi types VIII and IX and is given by the iteration of the Kasner map, giving in this way the sequence of Kasner epochs. This sequence is infinite in four dimensions and the dynamics has been shown to be chaotic \cite{Cornish1997}. This chaotic behavior disappears in higher dimensional Einstein gravity. Non-trivial Kasner metrics in four dimensions correspond to type {\bf (b)} solutions in our classification (type {\bf (b.1)} metrics are just {\it flat Kasner} in that case).
It is reasonable to expect that behavior similar to that of the Mixmaster attractor may also appear for other even dimensional pure Lovelock theories. It would be one more feature of $d=4$ Einstein gravity respected by pure Lovelock theories in all dimensions. Our classification could also be regarded as the necessary first step in such a more general analysis. The obvious next step would be to try to establish a result analogous to that of Bianchi type II in four dimensions and define a {\it generalized Kasner map}. For that we would have to introduce curvature in the spatial slices of our model (equivalently, a nontrivial Lie group structure). The analogue of Bianchi II models would be a deformation in just a given 3-dimensional subspace; {\it i.e.} $t^{p_i}dx_i\to e^i$ with $i=1,2\ldots d-1$ and $$ [e^1,e^2]=n(t) e^3 $$ in the usual notation, the rest of the commutation relations being zero. This is the simplest possible modification of the Lie group structure. If a structure such as that of the Kasner map exists in this case as well, it could provide new examples of chaotic maps, interesting objects in their own right. The rich geometric structure we have found in the space of type {\bf (b)} solutions would perhaps come in handy in performing such an analysis.
\section{Introduction} To fully comprehend a scene, one should not only be able to detect the objects in the scene but also understand the attributes (properties) of each object detected. Even if two objects belong to the same category, their behavior might vary depending on their attributes. For example, we can't predict the route of a driving vehicle based on a still 2D image alone, unless we know the vehicle's heading/direction and if the vehicle is parked or not. Accurate classification of objects and their attributes is critical in numerous applications of computer vision and pattern recognition such as autonomous driving where a thorough grasp of the surroundings is essential for safe driving decisions. In order to drive safely, a driver must be able to predict numerous crucial aspects. They include, among other things, the activities of other drivers and pedestrians, the slipperiness of the road surface, the weather, traffic signs and their contents, and pedestrian behavior. Attributes are often defined as semantic (visual) descriptions of objects in a scene. An object's semantic information includes how it looks (color, size, shape, etc.), interacts with surroundings, and behaviors. The category of an object, in general, determines the set of possible attributes that it can have. For instance, a table might have attributes related to shape, color, and material. However, a human will have a more complicated set of attributes related to age, gender, and activity status (sitting, standing, walking, etc.). Some properties, such as the visible proportion of an object, may exist across multiple categories. Therefore, to accurately predict an object's attributes, we must consider the following: 1) some attributes are unique to certain categories, 2) some categories may share the same attribute, 3) some attributes require a global understanding of the entire scene and 4) some attributes are inherent to the object of interest. 
In this paper, we present a new algorithm -- \gls{glidenet} -- to tackle the attribute prediction problem. \gls{glidenet} is capable of addressing the aforementioned concerns while also predicting attributes across a variety of categories. Earlier methods for object detection and classification relied heavily on tailored or customized features generated by ORB \cite{rublee2011orb}, SIFT \cite{lowe2004distinctive}, HOG \cite{dalal2005histograms} or other descriptors. The extracted features then pass through a statistical or learning module -- such as a CRF \cite{lafferty2001conditional} -- to find the relation between the descriptor's features and the desired output. Recently, Convolutional Neural Networks (CNNs) have proven their capability in extracting better features that ease the subsequent classification and detection steps. This has been empirically demonstrated in various fields, such as object classification \cite{li2020group, huang2019convolutional}, object detection \cite{he2017mask, redmon2016yolo} and inverse imaging problems such as dehazing \cite{metwaly2020nonlocal, zhang2021learning}, denoising \cite{liu2021invertible,ren2021adaptive}, and HDR estimation \cite{liu2020single, metwaly2020attention, chen2021hdrunet}. Deep learning with CNNs typically requires a large amount of data for training and regularization \cite{cabon2020vkitti2, yu2020bdd100k, ancuti2020ntire, pougue2021debagreement, caesar2020nuscenes}. Classical methods \cite{antwarg2012attribute, fang2010dangerous} for predicting attributes may require less data; however, they perform worse than deep learning based techniques. In this work, we present a new deep learning approach, \gls{glidenet}, for attribute prediction that is capable of incorporating problem (dataset) specific characteristics. Our main contributions can be summarized as follows: \begin{itemize} \item We employ three distinct feature extractors, each with a specific purpose. 
\gls{gfe} captures global information, which encapsulates information about the different objects in the image (their locations and category types). \gls{lfe} captures local information, which encapsulates information related to the attributes of the object as well as its category and binary mask. Lastly, \gls{ife} encapsulates information about the intrinsic attributes of objects. It ensures that we estimate characteristics solely from the object's pixels, excluding contributions from other pixels. \item We use a novel convolution layer (named Informed Convolution) in the \gls{ife} to focus on intrinsic information of the object related to attribute prediction. \item To learn appropriate weights for each \gls{fe}, we employ a self-attention technique. Utilizing the binary mask and a self-learned category embedding, we generate a ``Description.'' Then we use a gating mechanism to fine-tune each feature layer's spatial contributions. \item We employ a multi-head technique for the final classification stage for two reasons. First, it ensures that the final classification step's weights are determined by the category. Second, the length of the final output can vary depending on the category. This is significant since not every category has the same set of attributes. \end{itemize} The term ``class'' can be confusing because it can refer to the object's type (vehicle, pedestrian, etc.) or the value of one of the object's attributes (parked, red, etc.). As a result, we avoid using the term ``class'' throughout this work. We use the word ``category'' to refer to the object's type and the word ``attribute'' for one of the semantic descriptions of that object. 
In addition, we use uppercase letters $X$ to denote images or 2D spatial features, lowercase bold letters $\mathbf{x}$ for 1D features, lowercase non-bold letters $x$ for scalars, a hat accent $\hat{x}$ to denote an estimated value, and calligraphic letters $\mathcal{X}$ to denote either a mathematical operation or a building block in \gls{glidenet}'s architecture. \section{Related Work} Attribute prediction shares a common background with other popular research topics such as object detection \cite{wang2021end, joseph2021towards}, image segmentation \cite{huynh2021progressive, li2021semantic} and classification \cite{liu2021ntie, srinivas2021bottleneck}. However, visual attribute recognition has unique characteristics and challenges that distinguish it from other vision problems such as multi-class classification \cite{reese2020lbcnn} or multi-label classification \cite{durand2019learning, chen2019multi}. Examples of these challenges are the possibly large number of attributes to predict, the dependency of attributes on the category type, and the necessity of incorporating both global and local information effectively. This has motivated several past studies to investigate how to tailor a recognition algorithm that can predict attributes. So far, the majority of relevant research has concentrated on a small number of generic attributes \cite{kalayeh2021symbiosis, wang2017joint, tay2019aanet, rothe2015dex, li2016human, wang2021pedestrian} or a targeted set of categories \cite{he2017adaptively, park2018attribute, yang2020hierarchical, tang2019improving, li2018landmark, abdulnabi2015multi}. For instance, \cite{huo2016vehicle, sun2019vehicle} predict attributes related to vehicles. \cite{sun2019vehicle} proposed a vehicle attribute prediction algorithm that uses two branches: one to predict the brand of the vehicle and another to predict its color. 
They use a combined learning schedule to train the model on both types of attributes. Huo \etal \cite{huo2016vehicle} first use a convolution stage to extract important features, followed by a multi-task stage consisting of one fully connected layer per attribute; the output of each fully connected layer is a value describing that particular attribute. For more details about recent work in vehicle attribute prediction, Ni and Huttunen \cite{ni2021vehicle} provide a good survey, and some existing datasets for vehicle attribute recognition (e.g. color, type, make, license plate, and model) can be found in \cite{yang2015large, liu2016deep}. \begin{figure*}[ht!] \centering \includegraphics[width=\textwidth]{network_architecture-whole_model.drawio.pdf} \vspace{-20pt} \caption{\gls{glidenet} -- the inputs are the image, the binary mask and the category of an object. The output is the attributes of the object. Note that the category embedding is self-learned from the extracted features of \gls{lfe} using the category estimator. All shown images are taken from the \gls{vaw} Dataset.} \label{fig:network_architecture} \vspace{-13pt} \end{figure*} On the other hand, \cite{abdulnabi2015multi, jia2020rethinking, tang2019improving} tackle the prediction of attributes related to pedestrians or humans. Jahandideh \etal \cite{jahandideh2018physical} attempt to predict physical attributes such as age and weight, using a residual neural network trained on two datasets: CelebA \cite{liu2015faceattributes} and a self-developed one. Abdulnabi \etal \cite{abdulnabi2015multi} learn semantic attributes through a multi-task CNN model, where each CNN generates attribute-specific feature representations and shares knowledge through multi-tasking. They use a group of CNNs that extract features and concatenate them to form a matrix that is later decomposed into a shared-features matrix and an attribute-specific features matrix. 
The authors of \cite{zhang2020solving} focus on datasets with missing labels and address them with a ``background replication loss''. Multiple datasets focus on attributes of humans, but the majority target facial attributes such as eye color or whether the person is wearing glasses. Examples of datasets of humans with attributes are CelebA \cite{liu2015faceattributes} and IMDB-WIKI \cite{rothe2015dex}. Li \etal \cite{li2019visual} propose a framework that contains a spatial graph and a directed semantic graph. By performing reasoning with a Graph Convolutional Network (GCN), one graph captures spatial relations between regions, and the other learns potential semantic relations between attributes. Only a handful of published works have tackled a large set of attributes over a large set of categories \cite{pham2021learning, huang2020image, sarafianos2018deep, yang2020hierarchical}. Sarafianos \etal \cite{sarafianos2018deep} proposed a method targeting the issue of class imbalance; although they focused on human attributes, their method can be extended to other categories as well. Pham \etal \cite{pham2021learning} proposed a new dataset, \gls{vaw}, that is rich with different categories, where each object in an image has three sets of positive, negative, and unlabeled attributes. They use GloVe \cite{pennington2014glove} word embeddings to generate a vector representing the object's category. \vspace{-5pt} \section{Proposed Model} \vspace{-5pt} Universal semantic (visual) attribute prediction is a challenging problem: some attributes may require a global understanding of the whole scene, while others may only need the close vicinity of the object of interest, or even only the object itself, regardless of other objects in the scene. We also aspire to estimate the possible attributes of various categories. 
This necessitates a hierarchical structure where the set of predicted attributes depends on the category of the object of interest. In this section, we discuss the details of \gls{glidenet} and the training procedure that guides each \gls{fe} to achieve its purpose. \subsection{\gls{glidenet}'s Architecture}\label{subsec:net_arch} \Cref{fig:network_architecture} shows \gls{glidenet}'s network architecture at inference. The input to the model is an image capturing the entire scene ($I$), the category ($C$), and the binary mask ($M$) of the object of interest. The output of the model is a vector ($\textbf{a}$) representing the different attributes of that object. \Cref{fig:network_architecture} shows an example where the object of interest is the small portion of the floor below the bed; the output is a vector of the attributes of the floor. We can decompose the information flow in \gls{glidenet} into three consecutive steps: feature extraction, feature composition, and finally interpretation. In the next few subsections, we discuss the details of each step; the reader can refer to \Cref{sec:supp_net_arch} for the exact numerical values of the architecture's parameters. \subsubsection{Feature Extraction}\label{subsubsec:feat_ext} Feature extraction generates valuable features for the final classification step. It is of utmost importance to extract features that help in predicting attributes accurately: some require an understanding of the whole image, while others are intrinsic to the object. In addition, we are interested in the multi-category case, so we need to strengthen the feature extraction process to deal with arbitrary shapes of the object of interest. For these reasons, we have three \glspl{fe}: the \acrfull{gfe}, \acrfull{lfe} and \acrfull{ife}. Each \gls{fe} has a specific purpose so that collectively we obtain a complete understanding of the scene while giving attention to the object of interest. 
\headline{\gls{gfe}} generates features related to the entire image $I$. It produces features that are used for the identification of the most prominent objects in the image. Specifically, the generated features from \gls{gfe} describe the objects detected in the image (their center coordinates, their height and width, and their category). We use the backbone of the ResNet-50 \cite{he2016deep} network here. We extract features at three different levels of the backbone network to enrich the feature extraction process and to enhance the detection of objects at multiple scales. We denote the extracted features of \gls{gfe} by $F_G^1, F_G^2, F_G^3$ and collectively by $F_G$. Since the extracted features have different spatial dimensions, we upsample $F_G^2, F_G^3$ to the spatial size of $F_G^1$, which is denoted by $h\times w$ for the height and width, respectively. Let $\mathcal{U}(X, Y)$ represent a function that upsamples $X$ to the spatial size of $Y$ and $\mathcal{S}$ be a concatenation layer; then \begin{equation} F_G = \mathcal{S}\left(F_G^1, \mathcal{U}\left(F_G^2, F_G^1\right), \mathcal{U}\left(F_G^3, F_G^1\right)\right) \label{eq:fg} \end{equation} \headline{\gls{lfe}} generates features related to the object of interest, but it also considers the object's edges as well as its vicinity. The extracted features from \gls{lfe} are used for the identification of the object's binary mask as well as its category and attributes. \gls{lfe} should be capable of estimating a significant portion of the attributes, as it focuses on the object of interest, in contrast to \gls{gfe}. However, \gls{gfe} is still necessary for some attributes, which require an understanding of other objects in the scene as well. To illustrate, consider a vehicle towing another one: we cannot recognize the attribute ``towing'' without recognizing the existence of the other vehicle and their mutual interaction. That is why we employ \gls{gfe} in feature extraction. 
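The upsample-and-concatenate fusion above (and its analogues for $F_L$ and $F_I$ below) can be sketched in NumPy. The channel counts and the nearest-neighbour upsampling are illustrative assumptions, not the exact implementation:

```python
import numpy as np

def upsample(x, target_hw):
    """Nearest-neighbour upsampling of a (C, h, w) map; assumes the target
    height/width are integer multiples of the input's (true for pyramid levels)."""
    _, h, w = x.shape
    H, W = target_hw
    return x.repeat(H // h, axis=1).repeat(W // w, axis=2)

def fuse(f1, f2, f3):
    """F = S(F^1, U(F^2, F^1), U(F^3, F^1)): concatenate the three levels
    along channels after matching their spatial size to the finest level f1."""
    hw = f1.shape[1:]
    return np.concatenate([f1, upsample(f2, hw), upsample(f3, hw)], axis=0)

# Illustrative shapes: three pyramid levels ending at the 28x28 grid used in the paper.
f1 = np.random.rand(64, 28, 28)
f2 = np.random.rand(128, 14, 14)
f3 = np.random.rand(256, 7, 7)
print(fuse(f1, f2, f3).shape)  # (448, 28, 28)
```

The fused tensor keeps the finest level's resolution $h\times w$ while stacking all levels' channels, which is what the gating step later attends over.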
Similar to \gls{gfe}, we use ResNet-50 as the backbone for \gls{lfe}. The extracted features are denoted by $F_L^1, F_L^2, F_L^3$ and collectively by $F_L$. $F_L^2, F_L^3$ are up-sampled to the spatial size of $F_L^1$. \begin{equation} F_L = \mathcal{S}\left(F_L^1, \mathcal{U}\left(F_L^2, F_L^1\right), \mathcal{U}\left(F_L^3, F_L^1\right)\right) \label{eq:fl} \end{equation} \headline{\gls{ife}} generates intrinsic features of the object of interest, utilizing its binary mask through a novel convolutional layer dubbed Informed Convolution. It is of great importance to distinguish between the objectives of \gls{lfe} and \gls{ife}. Both of them attempt to extract features that predict the object's attributes. However, \gls{ife} generates features related to the intrinsic properties of the object (its texture, for example), whereas \gls{lfe} generates features associated with the object's neighborhood and boundaries. To clarify, assume we want to predict the attributes of a pole in an image. \gls{lfe} cannot estimate its color, as poles typically have low aspect ratios; their height is much larger than their width. Thus, the number of pixels contributing to the pole's color is small compared to the total number of pixels in the cropped image $I_C$. Therefore, any typical \gls{fe} will obscure the pole's pixels with the other pixels in the cropped image, even if we apply an attention scheme to the output features. On the other hand, \gls{ife} cannot understand the interaction of an object with its vicinity, as it only considers the object's pixels while extracting features. As an example, consider an object's exposure to light. \gls{ife} cannot predict the exposure to light accurately, as that requires comparison with other objects in the vicinity (a dark-red object may be dark due to its low exposure to light or because it intrinsically has that color). 
Therefore, \gls{lfe} and \gls{ife} supplement each other for a better estimation of attributes. The structure of \gls{ife} resembles the backbone of ResNet-50 where we replace each convolutional layer with an informed-convolutional one (see \Cref{subsec:informed_conv}). The extracted features are denoted by $F_I^1, F_I^2, F_I^3$ and collectively by $F_I$. $F_I^2, F_I^3$ are also up-sampled to the spatial size of $F_I^1$. \begin{equation} F_I = \mathcal{S}\left(F_I^1, \mathcal{U}\left(F_I^2, F_I^1\right), \mathcal{U}\left(F_I^3, F_I^1\right)\right) \label{eq:fi} \end{equation} Therefore, we have three different sets of features at the end of the feature extraction step; $F_G, F_L, F_I$. Each of them contains features from three levels (dense embeddings) that are all up-sampled to the same spatial size $h\times w$, which we set to $28\times28$ in our implementation. \begin{figure} \centering \begin{subfigure}{\linewidth} \includegraphics[width=\textwidth]{gfe_training.pdf} \caption{The purpose of \acrshort{gfe} is to understand the scene holistically.} \label{fig:gfe_training} \end{subfigure} \begin{subfigure}{\linewidth} \includegraphics[width=\textwidth]{lfe_training.pdf} \caption{The purpose of \acrshort{lfe} is to extract features related to the object while understanding its vicinity.} \label{fig:lfe_training} \end{subfigure} \begin{subfigure}{\linewidth} \includegraphics[width=\textwidth]{ife_training.pdf} \caption{The purpose of \acrshort{ife} is to extract features related to intrinsic properties of the object using Informed Convolution.} \label{fig:ife_training} \end{subfigure} \caption{Training of different feature extractors in Stage I.} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{object_descriptor.pdf} \caption{Structure of the Object Descriptor -- the inputs are the binary mask and the self-learned category embedding $\hat{\mathbf{c}}$.} \label{fig:object_descriptor} \end{figure} \subsubsection{Feature 
Composition}\label{subsubsec:feat_comp} Feature composition amalgamates the dense embeddings generated by the different feature extractors. A diligent feature composition is indispensable here, as a weak one will impair the extracted features and give all of the attention to only one of the \glspl{fe}. Therefore, we leverage the binary mask of the object of interest, besides a self-generated and learnable ``category embedding'', to produce a description $D$ for the composition mechanism. Details about how we generate the category embedding can be found in \Cref{subsubsec:lfe_training}. After generating the description $D$, it passes through the spatial gating mechanisms $\mathcal{G}_G, \mathcal{G}_L, \mathcal{G}_I$ to generate spatial attention weights, denoted by $A_G, A_L, A_I$ in \Cref{fig:network_architecture}. Later, we use these weights to reduce the 2D spatial extracted features $F_G, F_L, F_I$ to the 1D features $\mathbf{f}_G, \mathbf{f}_L, \mathbf{f}_I$ through $\delta_G, \delta_L, \delta_I$, respectively. This effectively generates spatial attention maps for each feature level of each \gls{fe} based on the shape and category of the object. In other words, \gls{glidenet} learns to focus on different spatial locations for each \gls{fe} individually. The structure of the Object Descriptor ($\mathcal{D}$), \Cref{fig:object_descriptor}, is as follows. First, the binary mask $M$ passes through a convolution block to learn spatial attention based on the object's shape. Meanwhile, the category embedding $\hat{\mathbf{c}}$ passes through a fully connected block to learn an attention vector based on the category. Then the category attention vector is broadcast and multiplied by the mask attention as follows. \begin{align} \Bar{M} &= \hat{\mathbf{c}} \otimes M\\ \Bar{M}_i[m,n] &= \hat{c}_i \cdot M_i[m,n] \label{eq:obj_desc_intermediate} \end{align} where $[m,n]$ represents a spatial location and $i$ represents the channel number. 
This leads to a composed description of the attention based on the object's shape and category. Finally, a convolution block refines the output description and generates $D$. The exact structure of $\mathcal{D}$ can be found in \Cref{sec:supp_net_arch}. \begin{equation} D = \mathcal{D}\left(M, \hat{\mathbf{c}}\right) \label{eq:obj_desc} \end{equation} Then, $D$ passes through three different gates $\mathcal{G}_G, \mathcal{G}_L, \mathcal{G}_I$, each with a final Sigmoid activation layer to ensure that the output lies between $0$ and $1$. Each gate generates a three-channel spatial attention map $A$ for its \gls{fe}. Then, $\delta$ reduces the 2D extracted features of each \gls{fe} to 1D features by multiplying each with its corresponding spatial attention map as follows. \begin{align} A_k &= \mathcal{G}_k\left(D\right), & A_k &\in \mathbb{R}^{3\times h \times w}\\ f_k &= \delta\left(F_k, A_k\right), & \forall k &\in \{G, L, I\} \end{align} \begin{equation} \delta(F_k, A_k) \coloneqq \mathcal{S}_{i=1}^3\left(\sum_{m=1}^{h}\sum_{n=1}^{w} A_k^i[m,n] F_k^i[m,n]\right) \end{equation} where $\mathcal{S}_{i=1}^3(\cdot)$ denotes concatenation for $i\in\{1,2,3\}$ and $F_k^i[m,n]$ represents the generated features of \gls{fe} $k$ at feature level $i$ and spatial location $[m,n]$. Similarly, $A_k^i[m,n]$ is the output attention map of the gate $\mathcal{G}_k$ at feature level $i$ and spatial location $[m,n]$. 
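The gated reduction $\delta$ amounts to attention-weighted global pooling per level followed by concatenation. A minimal NumPy sketch, with illustrative shapes (3 levels, $C$ channels each):

```python
import numpy as np

def gated_reduce(F, A):
    """delta(F_k, A_k): for each of the 3 feature levels, pool the (C, h, w)
    features with that level's (h, w) attention map, then concatenate the
    three pooled C-vectors into one 1D vector of length 3*C."""
    pooled = [(F[i] * A[i]).sum(axis=(1, 2)) for i in range(3)]
    return np.concatenate(pooled)

F_k = np.random.rand(3, 128, 28, 28)                      # 3 levels, C = 128
A_k = 1.0 / (1.0 + np.exp(-np.random.randn(3, 28, 28)))   # sigmoid-gated maps
f_k = gated_reduce(F_k, A_k)
print(f_k.shape)  # (384,)
```

Because the attention maps are produced from $D$, the pooling pattern adapts to the object's shape and category rather than being a fixed global average.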
Finally, the features are combined to get a single 1D feature vector $f_T$ as follows \begin{equation} \vspace{-5pt} f_T = \mathcal{S}\left(f_G, f_L, f_I\right) \label{eq:f_T} \vspace{-5pt} \end{equation} \begin{figure} \newcommand{0.32\linewidth}{\linewidth} \centering \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=\textwidth]{interpreter_cad.pdf} \vspace{-1.5em} \caption{\acrfull{car} Dataset.} \label{fig:car_interpreter} \end{subfigure}\hfill \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=\textwidth]{interpreter_vaw.pdf} \vspace{-1.5em} \caption{\acrfull{vaw} Dataset.} \label{fig:vaw_interpreter} \end{subfigure} \vspace{-2em} \caption{Structure of the interpreter for different datasets.} \label{fig:interpreter} \vspace{-10pt} \end{figure} \subsubsection{Interpretation}\label{subsubsec:interpretation} The interpreter translates the final feature vector into meaningful attributes. Its design depends on the final desired attribute outputs. In \Cref{sec:exp_res}, we experiment with two datasets, \gls{vaw} and \gls{car}. Both datasets are very recent and cover a large set of categories with various possible attributes. However, there are some differences between them. Specifically, \gls{vaw} has three different labels (positive, negative, and unlabeled). On the other hand, \gls{car} does not have unlabeled attributes; it has a complex taxonomy where each category has its own set of attributes, and each attribute has a set of possible values it may take. The interpreter therefore depends on the training dataset and the final desired output, so two models are provided in \Cref{fig:interpreter}. In both cases, we first apply a dimension-reducing fully connected layer from $\mathbb{R}^l$ to $\mathbb{R}^m$, with $m < l$. This enables us to create multiple heads, one per category, without drastically increasing the memory footprint. 
Then, the reduced features $\mathbf{f}_A$ pass through the single head corresponding to the category of the object of interest. For \gls{car} in \Cref{fig:car_interpreter}, the output size $n_c$ varies from one head to another depending on the taxonomy of category $c$, while for \gls{vaw} in \Cref{fig:vaw_interpreter}, the output size is the same, $n = 620$. The other difference between the two interpreters is in the possible values the output can take. In \gls{vaw}, the output ranges from $0$ to $1$, where $0$ represents negative attributes and $1$ represents positive ones (unlabeled attributes are disregarded in training). In \gls{car}, the output is not binary, as some attributes have more than two possible values. Therefore, we encode each attribute as a one-hot vector. For example, the ``Vehicle Form'' attribute can take one of 11 values, such as ``sedan'', ``van'', etc. Thus, we have a vector of 11 values where ideally we want the value $1$ at the correct form type and $0$ elsewhere. It is worth noting that \gls{car} has an ``unclear'' value for all attributes; we skip attributes with unclear values during training. 
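The category-conditioned multi-head design can be sketched as follows. The layer sizes, the ReLU reduction, and the two example categories are hypothetical choices for illustration, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

class MultiHeadInterpreter:
    """Shared dimension reduction l -> m, then one linear head per category;
    the head's output size n_c may differ per category (CAR-style taxonomy)."""
    def __init__(self, l, m, head_sizes):
        self.w_red = rng.standard_normal((m, l)) * 0.01
        self.heads = {c: rng.standard_normal((n_c, m)) * 0.01
                      for c, n_c in head_sizes.items()}

    def __call__(self, f_T, category):
        f_A = np.maximum(self.w_red @ f_T, 0.0)   # reduced features (ReLU)
        logits = self.heads[category] @ f_A       # category-specific head
        return 1.0 / (1.0 + np.exp(-logits))      # per-attribute scores in (0, 1)

interp = MultiHeadInterpreter(l=1152, m=256,
                              head_sizes={"vehicle": 11, "pedestrian": 7})
scores = interp(rng.standard_normal(1152), "vehicle")
print(scores.shape)  # (11,)
```

Only the selected head contributes to the output, so both the weights and the output length are determined by the category, mirroring the two motivations given for the multi-head stage.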
\begin{figure} \newcommand{0.32\linewidth}{0.32\linewidth} \centering \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=\textwidth]{mask_original.png} \vspace{-1.8em} \caption{\scriptsize{Input Mask}} \label{fig:input_original_mask} \end{subfigure}\hfill \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=\textwidth]{mask_partial.png} \vspace{-1.8em} \caption{\scriptsize{Partial Convolution}} \label{fig:partial_conv_output_mask} \end{subfigure}\hfill \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=\textwidth]{mask_informed.png} \vspace{-1.8em} \caption{\scriptsize{Informed Convolution}} \label{fig:informed_conv_output_mask} \end{subfigure} \vspace{-1em} \caption{An input mask and its propagated mask after $5$ layers for different update rules -- kernel size = $5$.} \label{fig:mask_update} \vspace{-15pt} \end{figure} \subsection{Training}\label{subsec:training} \vspace{-5pt} Since \gls{glidenet} has a complex architecture, tailored training of the model is necessary to lead each \gls{fe} to its objective. Therefore, we have developed a customized training scenario for \gls{glidenet} that divides the training into two stages. In \textbf{Stage I}, we focus on guiding the \glspl{fe} to a reasonably good state with respect to their objectives by adding temporary decoders that guide the feature extraction process. The objective of Stage I is to have powerful and representative \glspl{fe}; therefore, we do not train $\mathcal{D}$ or $\mathcal{I}$ in Stage I. In \textbf{Stage II}, we focus on the actual objective of \gls{glidenet}, which is predicting the attributes accurately. Therefore, we remove the temporary decoders and train the whole network structure as in \Cref{fig:network_architecture}. The training of the \glspl{fe} in Stage I is detailed in \Cref{subsubsec:gfe_training,subsubsec:lfe_training,subsubsec:ife_training}, while \Cref{subsubsec:stage_2} discusses the training in Stage II. 
\vspace{-10pt} \subsubsection{\acrfull{gfe}}\label{subsubsec:gfe_training} \vspace{-5pt} \gls{gfe} is trained as in \Cref{fig:gfe_training} by having a temporary objects decoder that attempts to detect the objects in the input image $I$, their categories and their bounding boxes center locations $(c_x, c_y)$, widths $w$ and heights $h$. $\hat{O}_G$ has $c+5$ channels; $c$ of which are a one-hot representation of the category ($\hat{\mathbf{P}}$), $4$ values for the bounding box, and the remaining value is the probability of having the center of an object in that pixel ($\hat{P}_0$). The training loss term for \gls{gfe} is as follows. \vspace{-5pt} \begin{equation} \vspace{-5pt} \begin{split} \mathcal{L}_\text{g} &= \lambda_{gp0}\mathcal{L}_\text{BCE}\left(P_0, \hat{P}_0\right) + \lambda_{gp}\mathcal{L}_\text{CE}\left(\mathbf{P}, \hat{\mathbf{P}}\right) \\ &+ \lambda_{gd}\left[\mathcal{L}_\text{MSE}\left(H,\hat{H}\right) + \mathcal{L}_\text{MSE}\left(W,\hat{W}\right)\right] \\ &+ \lambda_{gc}\left[\mathcal{L}_\text{MSE}\left(C_x, \hat{C}_x\right) + \mathcal{L}_\text{MSE}\left(C_y, \hat{C}_y\right)\right] \end{split} \label{eq:global_loss_1} \end{equation} where $\mathcal{L}_\text{BCE}$ is the Binary Cross Entropy loss, $\mathcal{L}_\text{CE}$ is the multi-class Cross Entropy and $\mathcal{L}_\text{MSE}$ is the Mean-Square-Error loss. $\lambda_{gp0}, \lambda_{gp}, \lambda_{gd}, \lambda_{gc}$ are hyperparameters used to tune the importance of each term. 
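For concreteness, $\mathcal{L}_\text{g}$ can be sketched in NumPy as a weighted sum of the standard per-term losses. The helper functions, reductions, and tensor shapes are illustrative assumptions; the default weights use the $\lambda$ values reported in the experiment setup:

```python
import numpy as np

def bce(y, p, eps=1e-7):
    """Binary cross entropy, mean-reduced."""
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).mean()

def ce(y_onehot, p, eps=1e-7):
    """Multi-class cross entropy over the last axis, mean-reduced."""
    return -(y_onehot * np.log(np.clip(p, eps, None))).sum(axis=-1).mean()

def mse(x, y):
    return ((x - y) ** 2).mean()

def global_loss(P0, P0_hat, P, P_hat, H, H_hat, W, W_hat,
                Cx, Cx_hat, Cy, Cy_hat, lams=(1.0, 0.01, 0.5, 0.5)):
    """L_g: objectness BCE + category CE + box-size and box-center MSEs,
    weighted by (lambda_gp0, lambda_gp, lambda_gd, lambda_gc)."""
    l0, lp, ld, lc = lams
    return (l0 * bce(P0, P0_hat) + lp * ce(P, P_hat)
            + ld * (mse(H, H_hat) + mse(W, W_hat))
            + lc * (mse(Cx, Cx_hat) + mse(Cy, Cy_hat)))
```

With perfect predictions every term vanishes (up to numerical clipping), and each $\lambda$ trades off objectness, category, and localization accuracy.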
\begin{table*} \centering \caption{Comparison Between \gls{glidenet} and other state-of-the-art methods on two challenging datasets \gls{car} and \gls{vaw}} \vspace{-1.1em} \label{tab:results} \begin{tabularx}{\linewidth}{r||Y|Y|Y|Y||Y|Y|Y|Y} \Xhline{4\arrayrulewidth} \multicolumn{1}{c||}{\multirow{2}{*}{Method}}& \multicolumn{4}{c||}{\acrfull{vaw}\cite{pham2021learning}} & \multicolumn{4}{c}{\acrfull{car}\cite{metwaly2022car}} \\\cline{2-9} & mA & mR & mAP & F1 & mA & mR & mAP & F1 \\\Xhline{3\arrayrulewidth} Durand \etal \cite{durand2019learning} & $0.689$ & $0.643$ & $ 0.623$ & $0.632$ & $0.641$ & $0.629$ & $0.637$ & $0.635$ \\\hline Jiang \etal \cite{jiang2020defense} & $0.503$ & $0.631$ & $ 0.564$ & $0.597$ & $0.668$ & $0.659$ & $0.671$ & $0.654$ \\\hline Sarafianos \etal \cite{sarafianos2018deep} & $0.683$ & $0.647$ & $ 0.651$ & $0.646$ & $0.701$ & $0.699$ & $0.705$ & $0.703$ \\\hline Pham \etal \cite{pham2021learning} & $0.715$ & $0.717$ & $ 0.683$ & $0.694$ & $0.731$ & $0.727$ & $0.739$ & $0.720$ \\\hline\hline \textbf{\gls{glidenet}} & $\mathbf{0.737}$ & $\mathbf{0.768}$ & $\mathbf{0.712}$ & $\mathbf{0.725}$ & $\mathbf{0.781}$ & $\mathbf{0.802}$ & $\mathbf{0.788}$ & $\mathbf{0.796}$ \\\Xhline{4\arrayrulewidth} \end{tabularx} \vspace{-1.5em} \end{table*} \vspace{-10pt} \subsubsection{\acrfull{lfe}}\label{subsubsec:lfe_training} \vspace{-5pt} \Cref{fig:lfe_training} shows the training of \gls{lfe}. Here, we use three decoders; two temporary decoders for the binary mask $\mathcal{M}$ and attributes, and one decoder for the category embedding $\mathcal{C}$. The training loss term for \gls{lfe} is as follows. 
\vspace{-3pt} \begin{equation} \footnotesize \mathcal{L}_\text{l} = \lambda_{lm}\mathcal{L}_\text{BCE}\left(M, \hat{M}\right) + \lambda_{lc}\mathcal{L}_\text{CE}\left(C, \hat{C}\right) + \lambda_{la}\mathcal{L}_\text{BCE}\left(a, \hat{a}\right) \end{equation} where $\lambda_{lm}, \lambda_{lc}, \lambda_{la}$ are hyperparameters that tune the importance of each term. The category embedding encapsulates visual similarities between different categories, unlike a word embedding \cite{pennington2014glove}, which was previously used in \cite{pham2021learning}. We reason that learnable vectors, rather than static pre-trained word embeddings, capture greater visual similarities between objects depending on their attributes; a teddy bear is visually more similar (attribute-wise) to a toy than to a real bear. \vspace{-5pt} \subsubsection{\acrfull{ife}}\label{subsubsec:ife_training} \Cref{fig:ife_training} depicts the training of \gls{ife}. It uses the Informed Convolution layers detailed in \Cref{subsec:informed_conv} to focus on the intrinsic attributes. Its training loss term is as follows. \vspace{-5pt} \begin{equation} \vspace{-5pt} \mathcal{L}_\text{i} = \lambda_{ia}\mathcal{L}_\text{BCE}\left(a, \hat{a}\right) \label{eq:ife_loss_1} \end{equation} where $\lambda_{ia}$ is a hyperparameter. Therefore, the complete training loss function in Stage I is as follows. \vspace{-5pt} \begin{equation} \vspace{-5pt} \mathcal{L}_\text{I} = \mathcal{L}_\text{g} + \mathcal{L}_\text{l} + \mathcal{L}_\text{i} \label{eq:total_loss_1} \end{equation} \subsubsection{Stage II}\label{subsubsec:stage_2} In Stage II, the following loss function focuses on generating the final attributes vector correctly from the interpreter while maintaining an accurate category embedding $\hat{c}$. 
\vspace{-5pt} \begin{equation} \vspace{-5pt} \mathcal{L}_\text{II} = \mathcal{L}_\text{BCE}\left(a, \hat{a}\right) + \lambda_{lc2}\mathcal{L}_\text{CE}\left(C, \hat{C}\right) \label{eq:loss_2} \end{equation} Therefore, the main goal is to predict the desired attributes. However, we keep the term for the category embedding to ensure its convergence during training in Stage II. \subsection{Informed Convolution}\label{subsec:informed_conv} The utilization of a binary mask in the feature extraction process has previously been applied to image inpainting in \cite{liu2018image, yu2019free, chang2019free}. \cite{yu2019free, chang2019free} used learnable gates to find the best mask-update rule, which is not suitable here, as we want \gls{ife} to focus only on the intrinsic attributes of the object; a learnable update rule does not guarantee convergence to a physically meaningful updated mask. Inspired by \cite{liu2018image}, we perform a mask-update rule as follows. \begin{equation} X^{(i+1)} = \left\{ \begin{array}{ll} \frac{k^2\cdot W^T}{\sum M^{(i)}}\left(X^{(i)} \odot M^{(i)}\right) & \text{if } \max M^{(i)} > 0, \\ 0 & \text{otherwise} \end{array} \right. \end{equation} \begin{equation} M^{(i+1)} = \left\{ \begin{array}{ll} \frac{1}{k^2}\sum M^{(i)} & \text{if } \max M^{(i)} > 0, \\ 0 & \text{otherwise} \end{array} \right.\\ \end{equation} where $k$ is the kernel size of the convolution layer, $X^{(i)}, M^{(i)}$ are the input features and input binary mask of convolution layer $i$ restricted to the kernel's receptive field, and $\odot$ represents element-wise multiplication. It is important to note the difference between our mask-update rule and the one provided in \cite{liu2018image}: in our case, each pixel contributes to the new mask a soft value that depends on the contribution of the object of interest at that spatial location. \Cref{fig:informed_conv_output_mask} shows an output example based on this update rule. 
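A minimal NumPy sketch of the soft mask-update rule, using stride 1, no padding, and an explicit loop for clarity (the full layer also performs the renormalized convolution on the features, which is omitted here):

```python
import numpy as np

def informed_mask_update(M, k):
    """M^(i+1): each output pixel is the mean of the binary mask under the
    k x k window (soft coverage), or 0 when the window misses the object.
    Contrast with partial convolution, whose updated mask is a hard 0/1."""
    h, w = M.shape
    out = np.zeros((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            win = M[i:i + k, j:j + k]
            out[i, j] = win.mean() if win.max() > 0 else 0.0  # sum / k^2
    return out

M = np.zeros((7, 7))
M[2:5, 2:5] = 1.0                      # a 3x3 object inside a 7x7 mask
M1 = informed_mask_update(M, k=3)
print(M1[2, 2])                        # 1.0: window fully inside the object
print(round(M1[0, 0], 3))              # 0.111: window barely touches it (1/9)
```

The soft values preserve how much of each window the object covers, which is exactly the information a hard 0/1 update discards.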
Furthermore, Informed Convolution reduces to a regular convolution when the binary mask is all ones. In this case, the object of interest completely fills the image, and the intrinsic features would be any feature that we can extract from the image. It is also worth noting the difference between Informed Convolution and the Masked Convolution presented in \cite{van2016pixel}, where the authors generate an image from a caption and use a mask to ensure that the generation of a pixel depends only on the already generated pixels. Their purpose and approach are entirely different. \section{Experiments and Results}\label{sec:exp_res} In this section, we validate the effectiveness of \gls{glidenet} and provide results of extensive experiments comparing it with existing state-of-the-art methods. Specifically, we provide results on two challenging datasets for attributes prediction -- \gls{vaw} \cite{pham2021learning} and \gls{car} \cite{metwaly2022car}. In addition, we perform several ablation studies to show the importance of various components of \gls{glidenet}. While we could consider other datasets such as \cite{patterson2016coco, krishna2017visual}, they lack diversity in either categories or attributes. \gls{vaw} has $260,895$ instances, each annotated over $620$ attributes with positive, negative, and unlabeled values. On the other hand, \gls{car} \cite{car_api} has $32,729$ instances focusing on self-driving. Unlike \gls{vaw}, \gls{car} has a complex hierarchical structure for attributes, where each category has its own set of possible attributes. Some attributes may exist across several categories (such as visibility) and some others are specific to a category (such as walking for a pedestrian). \headline{Experiment setup:} the model is implemented using the PyTorch framework \cite{paszke2019pytorch}.
We choose the values of $\lambda_{gp0}$, $\lambda_{gp}$, $\lambda_{gd}$, $\lambda_{gc}$, $\lambda_{lm}$, $\lambda_{lc}$, $\lambda_{la}$, $\lambda_{ia}$ and $\lambda_{lc2}$ to be $1, 0.01, 0.5, 0.5, 0.1, 0.01, 1, 1$ and $0.01$, respectively, by cross-validation \cite{monga2018handbook}. We trained the model for 15 epochs in Stage I and then 10 epochs in Stage II. More details can be found in \Cref{sec:supp_exp_setup}. \headline{Evaluation Metrics:} mean balanced Accuracy (mA), mean Recall (mR), F$_1$-score and mean Average Precision (mAP) are used for evaluation. They are widely used for classification and detection problems; specifically, they have been used in existing work on attributes prediction such as \cite{pham2021learning, durand2019learning, sarafianos2018deep, li2017improving, jiang2020defense, anderson2018bottom}. Excluding mAP, we calculate these metrics over each category and then compute the mean over all categories. The metrics are therefore balanced; a frequent category contributes as much as a less-frequent one (no category dominates any metric). For mAP, however, the mean is computed over the attributes, similar to \cite{pham2021learning, gupta2019lvis}; this balances the metric across attributes instead and adds diversity to the evaluation. All metrics are defined as follows. \vspace{-3pt} $$ \vspace{-3pt} \text{mA} = \frac{1}{2c}\sum_{i=1}^{c}\left(\frac{\text{TP}_i}{\text{P}_i} + \frac{\text{TN}_i}{\text{N}_i}\right),\quad\quad \text{F}_1 = \frac{2 \text{mP} * \text{mR}}{\text{mP} + \text{mR}}, $$ $$ \text{mP} = \frac{1}{c}\sum_{i=1}^{c}\frac{\text{TP}_i}{\text{PP}_i},\quad \text{mR} = \frac{1}{c}\sum_{i=1}^{c}\frac{\text{TP}_i}{\text{P}_i},\quad \text{mAP} = \frac{1}{n}\sum_{j=1}^{n} \text{AP}_j \vspace{-3pt} $$ where $c$ and $n$ are the numbers of categories and attributes, respectively.
TP$_i$, TN$_i$, P$_i$, N$_i$ and PP$_i$ are the number of true-positive, true-negative, positive samples, negative samples and predicted-positive samples for category $i$. AP$_{j}$ is the average of the precision-recall curve of attribute $j$ \cite{lin2014microsoft}. Since some attributes are unlabeled in \gls{vaw}, we disregard them in the evaluation as \cite{pham2021learning} did. Conversely, \gls{car} does not contain unlabeled attributes. It has, however, a complex hierarchical taxonomy of attributes that requires modification in the metrics used. For instance, most attributes are not binary. They can take more than two values; a ``visibility'' attribute may take one of five values. Therefore, we define TP and TN per attribute per category. Then we compute the mean over all attributes of all categories. For example, mA would be as follows. \vspace{-5pt} \begin{equation*} \vspace{-5pt} \text{mA} = \frac{1}{2c}\sum_{i=1}^{c}\left( \frac{1}{n_i}\sum_{j=1}^{n_i}\frac{\text{TP}_{i,j}}{\text{P}_{i,j}} + \frac{\text{TN}_{i,j}}{\text{N}_{i,j}}\right) \end{equation*} where $n_i$ is the number of attributes of category $i$. TP$_{i,j}$ is the positive samples of attribute $j$ of category $i$. Similarly, we can extend the definition of other metrics to suit the taxonomy of \gls{car}. For further details, the reader is encouraged to check \Cref{sec:supp_exp_setup}. \headline{Results on \gls{vaw} and \gls{car}:} \Cref{tab:results} shows the results of \gls{glidenet} in comparison with four state-of-the-art method over \gls{vaw} and \gls{car}. In \gls{vaw}, \gls{glidenet} obtained better values in all metrics. More prominently, it was able to gain $5\%$ in mR metric than the closest method \cite{pham2021learning}. This is mainly due to \gls{glidenet}'s usage of \gls{ife} and \gls{gfe} to detect attributes requiring global and intrinsic understanding. In \gls{car}, \gls{glidenet} was capable of achieving even a higher gain ($\sim 8\%$ mR). 
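For concreteness, the category-balanced metrics defined above can be sketched as follows (a minimal illustration from per-category counts; function and argument names are hypothetical):

```python
import numpy as np

def balanced_metrics(tp, tn, p, n, pp):
    """Category-balanced mA, mP, mR and F1 from per-category counts
    (true positives, true negatives, positives, negatives, predicted
    positives); every category contributes equally to each mean."""
    tp, tn, p, n, pp = (np.asarray(x, dtype=float) for x in (tp, tn, p, n, pp))
    c = len(tp)
    mA = (tp / p + tn / n).sum() / (2 * c)   # mean balanced accuracy
    mP = (tp / pp).mean()                    # mean precision
    mR = (tp / p).mean()                     # mean recall
    f1 = 2 * mP * mR / (mP + mR)
    return mA, mP, mR, f1
```

Because each category enters every mean with the same weight, a rare category influences the score exactly as much as a frequent one.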
\gls{glidenet} can be trained directly on the \gls{car} dataset thanks to its varying output length; however, we had to slightly modify the architectures of the other methods to work with \gls{car}. \begin{table} \caption{Ablation study over dense embeddings} \vspace{-1em} \label{tab:dense_embeddings_ablation} \centering \begin{tabularx}{\linewidth}{r||Y|Y|Y|Y}\Xhline{4\arrayrulewidth} \multicolumn{1}{c||}{Method} & mA & mR & mAP & F1 \\\Xhline{2\arrayrulewidth} \gls{lfe} only & $0.612$ & $0.639$ & $0.620$ & $0.613$ \\\hline \gls{lfe}$+$\gls{gfe} & $0.661$ & $0.644$ & $0.671$ & $0.668$ \\\hline \gls{lfe}$+$\gls{ife} & $0.719$ & $0.724$ & $0.699$ & $0.705$ \\\hline \textbf{\gls{glidenet}} & $\mathbf{0.737}$ & $\mathbf{0.768}$ & $\mathbf{0.712}$ & $\mathbf{0.725}$ \\\Xhline{4\arrayrulewidth} \end{tabularx} \vspace{-1.2em} \end{table} \begin{table} \caption{Ablation study over objects with low pixel count} \vspace{-1em} \label{tab:informed_conv_ablation} \centering \begin{tabularx}{\linewidth}{r||Y|Y|Y|Y}\Xhline{4\arrayrulewidth} \multicolumn{1}{c||}{Method} & mA & mR & mAP & F1 \\\Xhline{2\arrayrulewidth} Pham \etal\cite{pham2021learning} & $0.619$ & $0.655$ & $0.603$ & $0.626$ \\\hline \gls{glidenet} w/o \gls{ife} & $0.658$ & $0.691$ & $0.643$ & $0.647$ \\\hline \textbf{\gls{glidenet}} & $\mathbf{0.704}$ & $\mathbf{0.721}$ & $\mathbf{0.680}$ & $\mathbf{0.698}$ \\\Xhline{4\arrayrulewidth} \end{tabularx} \vspace{-1.2em} \end{table} \begin{table} \caption{Comparison between \gls{glidenet} with and without $\mathcal{D}$} \vspace{-1em} \label{tab:descriptor_ablation} \centering \begin{tabularx}{\linewidth}{r||Y|Y|Y|Y}\Xhline{4\arrayrulewidth} \multicolumn{1}{c||}{Method} & mA & mR & mAP & F1 \\\Xhline{2\arrayrulewidth} \gls{glidenet} w/o $\mathcal{D}$ & $0.720$ & $0.725$ & $0.696$ & $0.708$ \\\hline \textbf{\gls{glidenet}} & $\mathbf{0.737}$ & $\mathbf{0.768}$ & $\mathbf{0.712}$ & $\mathbf{0.725}$ \\\Xhline{4\arrayrulewidth} \end{tabularx} \vspace{-1.2em} \end{table} \begin{table}
\caption{Ablation study over category embedding} \vspace{-1em} \label{tab:category_ablation} \centering \begin{tabularx}{\linewidth}{r||Y|Y|Y|Y}\Xhline{4\arrayrulewidth} \multicolumn{1}{c||}{Method} & mA & mR & mAP & F1 \\\Xhline{2\arrayrulewidth} \gls{glidenet} w/o CE & $0.725$ & $0.731$ & $0.701$ & $0.712$ \\\hline \textbf{\gls{glidenet}} & $\mathbf{0.737}$ & $\mathbf{0.768}$ & $\mathbf{0.712}$ & $\mathbf{0.725}$ \\\Xhline{4\arrayrulewidth} \end{tabularx} \vspace{-1.2em} \end{table} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{training_size.pdf} \vspace{-10pt} \caption{Comparison against training size (VAW Dataset).} \label{fig:training_size_ablation} \vspace{-1.7em} \end{figure} \subsection{Ablation Study} Several ablation studies are presented here to demonstrate the importance of the unique components in \gls{glidenet}. Only ablations on the VAW dataset are shown here; similar behavior was observed on CAR as well. \headline{Dense Embedding:} \Cref{tab:dense_embeddings_ablation} shows the results of \gls{glidenet} with different combinations of \glspl{fe}. We achieve the best results by using all \glspl{fe}. Notice that the gain from using \gls{ife} is higher than that from \gls{gfe}. This is expected given that most attributes in \gls{vaw} focus on the object of interest itself and do not require much global context. However, \gls{gfe} is still valuable when a global understanding of the scene is necessary, as in \gls{car}. \headline{Informed Convolution:} We retrained the model on a restricted dataset comprising objects with low pixel counts to demonstrate the usefulness of Informed Convolution layers. We specifically selected examples whose ratio of binary-mask area to bounding-box area is below $0.35$. This reflects the goal of Informed Convolution layers, which is to give low-pixel-count objects special attention.
Because the only architectural difference between \gls{ife} and \gls{lfe} is the usage of Informed Convolution layers, we test two scenarios: one with and one without \gls{ife}. \gls{glidenet} obtains the best performance on all measures by a meaningful margin, as seen in \Cref{tab:informed_conv_ablation}. \headline{Object Descriptor:} \Cref{tab:descriptor_ablation} shows a comparison between \gls{glidenet} with and without the Object Descriptor $\mathcal{D}$. Although the results without $\mathcal{D}$ are lower, they are still meaningfully higher than those of \cite{pham2021learning}. This suggests that the generated dense embeddings alone already aid attribute recognition, while the feature composition of $\mathcal{D}$ improves it further. \headline{Semantic Embedding:} \gls{glidenet} uses a self-learned category embedding that encapsulates semantic similarities between objects. If the category embedding confuses two categories, it is most likely due to their visual similarities. In prior studies \cite{pham2021learning}, word embeddings \cite{pennington2014glove} were used to capture the semantics, but a word embedding alone is not sufficient to capture the visual similarities. \Cref{tab:category_ablation} shows the effect on \gls{glidenet} of swapping the Category Embedding (CE) with GloVe \cite{pennington2014glove} -- a word embedding. \headline{Limited Training Scenario:} We also compare \gls{glidenet} with other methods under limited training data in \Cref{fig:training_size_ablation}. The training data size is limited to $60\%$ and $40\%$ of the original training set of \gls{vaw} while keeping the validation set as it is. Although all methods suffer in the limited-data scenario, \gls{glidenet} degrades much more gracefully than the other methods.
\vspace{-5pt} \section{Conclusion} \vspace{-5pt} Global, Local, and Intrinsic based Dense Embedding Network (GlideNet) is a novel attributes prediction model that can work with a variety of datasets and taxonomies of categories and attributes. It surpasses existing state-of-the-art approaches, which we attribute to its use of several Feature Extractors (FEs), each with its own distinct goal; a two-stage training scheme establishes their objectives. Furthermore, its self-attention mechanism, which combines a binary mask and a self-learned category embedding, fuses the dense embeddings based on the object's category and shape, yielding richer composed features. The proposed Informed Convolution-based module improves attribute estimation for objects that contribute very few pixels to the cropped image. A rigorous ablation study and comparisons with other SOTA methods demonstrate the advantages of GlideNet's unique blocks empirically. \newpage
\section{Introduction} \label{sec:introduction} Dusty disks made up of rocky and icy debris have been observed around other stars, both in reflected optical light \citep{Smith:1984} and in long-wavelength thermal radiation \citep{Aumann}. Multiple surveys have reported that a significant fraction of main-sequence stars harbor detectable infrared excesses: $\sim 15\%$ for solar-type stars \citep{Trillingetal:2008,Lawler}, and $\sim 30\%$ for A-stars \citep{Suetal06}. The infrared luminosity, when compared to the luminosity of the central star, ranges from $\sim 10^{-5}$ to $\sim 10^{-3}$. In contrast, the fractional dust luminosity from the Kuiper belt is estimated to be $\sim 10^{-7}$ \citep{Teplitzetal:1999} and remains undetected. The observed excess luminosities arise primarily from small ($\sim \mu m - mm$) dust grains. Due to their short survival time \citep{Pawel}, these grains are believed to be continuously produced by collisions between large parent bodies (`planetesimals'). These planetesimals, analogous to the Kuiper belt objects in our own system, are in turn left-overs from the epoch of planet formation. In this article, we describe how we can use debris disks to test theories of planetesimal formation. We first focus our attention on the primordial size spectrum of planetesimals, often characterized by a single power-law, $dn/ds \propto s^{-q}$, where $s$ is the size. In the following, we briefly summarize the theoretical understanding and observational evidence for the value of $q$. The conventional picture of planetesimal formation comprises a number of steps. The formation of the first-generation planetesimals is not yet well understood and is an area of active research \citep[see, e.g.,][]{Youdin:2002,Dominik:2007, Johansen:2007,Garaud:2007}. If these are sufficiently massive, gravity dominates their subsequent growth \citep{Weidenschilling}.
At first, objects grow in an orderly fashion, where collisions and conglomerations occur at rates proportional to their geometric cross sections. But when these bodies become so massive that gravitational focusing becomes significant, run-away growth commences, where the largest bodies accrete small planetesimals at the highest rate and quickly distance themselves from their former peers \citep{WetherillStewart,Kokubo96}. The run-away phase is succeeded by the oligarchic phase, where individual large bodies are responsible for stirring the small bodies that they accrete \citep{KokuboIda,Kokubo:1998}. At the end of these steps, an entire size spectrum of planetesimals is produced. This is the `primordial spectrum'. During the run-away phase, N-body simulations have typically produced a slope of $q \sim 6$ \citep{Kokubo96,Morishima:2008}. This slope is naturally explained if there is energy equipartition among planetesimals of different sizes \citep{Makino:1998}. Moreover, one expects that the distribution becomes shallower (smaller $q$) if larger planetesimals have higher kinetic energies. This indeed occurs during the oligarchic phase, when all small and intermediate-sized planetesimals are stirred to the same velocity dispersion. The value of $q$ is then reduced to $\approx 4$ \citep{Morishima:2008}. Using particle-in-a-box simulations and later hybrid simulations, \citet{KenyonandLuu:1999, Kenyon:2004, 2008ApJS..179..451K} followed the growth of planetesimals. They also found that $q$ decreases with time after the run-away phase, finishing up with $3.75 \leq q \leq 4.5$ for planetesimals of sizes between $10$ and $1000$~km. Recently, \citet{2011ApJ...728...68S} argued analytically that a $q=4$ spectrum is the natural outcome of conglomeration. Observational constraints on the value of $q$ currently come exclusively from counting large Kuiper belt objects.
Kuiper belt objects larger than about $30-50$~km are commonly believed to be primordial. Collision timescales for these bodies well exceed the age of the Solar system \citep{1997Icar..125...50D,2010AJ....139.1499B}. The size distribution of these bodies can be probed by present-day surveys. Published values for $q$ are scattered: $q =4.0_{-0.6}^{+0.5}$ \citep{Trujillo:2001}, $q = 4.25 \pm 0.25$ \citep{Fraser:2008b}, $q = 4.5\pm 0.4$ \citep{2009AJ....137...72F} and $q = 4.5_{-0.5}^{+1.0}$ \citep{2008AJ....136...83F}. This scatter may be intrinsic and reflect both the different size ranges and the different dynamical populations emphasized by the various surveys \citep{Bernstein,2006P&SS...54..243D,FraserBrown}. For bodies smaller than $\sim 30$~km, the size distribution adopts a shallower power-law \citep{Bernstein,2008AJ....136...83F,2009Natur.462..895S}. This break in the power-law index has been argued to be due to collisional erosion \citep{PanSari}, but a different opinion has surfaced \citep{morbidelli}. So at least for the value of $q$, current coagulation models appear to be vindicated by the observations. These models enjoy a further success. In the Kuiper belt region, the solid mass of the so-called Minimum Mass Solar Nebula is $\sim 10~M_\oplus$ \citep{Hayashi,Weidenschilling:1977}, while the mass in large Kuiper belt objects is estimated to be $\lesssim 0.1 M_\oplus$ \citep[see, e.g.,][]{Gladman,Bernstein}. This large difference, however, is explained by current models in which the formation of large planetesimals has a very low efficiency \citep{2006AJ....131.2737B,2011ApJ...728...68S}. With these two remarkable concordances, one wonders if debris disks will ever tell us anything new and unexpected. Furthermore, every debris disk likely has a different initial condition and evolves in a different dynamical environment.
For instance, dynamical interactions with Neptune or other planets may have qualitatively affected the evolution of the Kuiper belt \citep{Levison:2008}. It seems difficult, therefore, to extract any universal truth about the formation process from these disparate objects. However, based only on a modest sample of debris disks, we argue in this paper that there is already a serious issue in current coagulation models. To achieve this, we first construct a simple collisional model (\S \ref{sec:luminosityevolution}) to compare against the set of debris disks reported in \citet{Hillenbrand:2008}. Our collisional model does not differ in essence from previous works \citep{Krivov05,Wyattetal:2007,Lohne:2008}, but we interpret the observations in a new way. This allows us to measure the value of $q$ as well as the initial masses of planetesimal belts (\S \ref{sec:results}). The latter result challenges the current models of planetesimal formation (\S \ref{sec:discussions}). We summarize in \S \ref{sec:summary}. \section{Model: Luminosity Evolution of a Debris Disk} \label{sec:2} \label{sec:luminosityevolution} The debris phase commences when the eccentricities of the primordial planetesimals are further increased, so that they no longer coalesce at encounter but are instead broken into fragments.\footnote{\citet{2008ApJS..179..451K} find that fragmentation begins once Pluto-sized bodies form.} In this phase, the smallest primordial planetesimals enter into a collisional cascade first, followed by progressively larger bodies. During the collisional cascade, a primordial body is broken down into smaller and smaller fragments until all its mass ends up in small grains. The small grains may spiral in towards the star due to Poynting-Robertson drag, as happens in the Solar system, or be ground down by frequent collisions to sizes so small that they are promptly removed by radiation pressure, as happens in bright debris disks \citep{Wyatt:2005}.
\subsection{Debris Rings} We model the debris disk as a single, azimuthally smooth ring composed of planetesimals of different sizes. The ring is centered at a semi-major axis $a$ with a full radial width of $\Delta a$ and a constant surface density. We take $\Delta a/a = 0.1 \ll 1$ as our standard input. This is motivated by the following observations. Spatially resolved debris disks often appear as narrow rings. Examples are $\Delta a/a \sim 0.1$ for AU Microscopii \citep{2007ApJ...670..536F}, $\sim 0.5$ for HD 10647 \citep{2010A&A...518L.132L}, $\sim 0.3$ for HD 92945 \citep{2007lyot.confE..46G}, $\sim 0.3$ for HD 139664 \citep{2006ApJ...637L..57K}, $\sim 0.2$ for HD 207129 \citep{2010AJ....140.1051K}, $\sim 0.5$ for $\epsilon$ Eridani \citep{2000MNRAS.314..702D}, $\sim 0.1$ for Fomalhaut \citep{Kalas:2005}, and $\sim 0.2$ for Vega \citep{2005ApJ...628..487S}. Similarly, unresolved disks often exhibit spectral energy distributions that are well fit by a single-temperature blackbody \citep{Hillenbrand:2008, 2010A&A...518A..40N,Moor}. This ring-like topology also shows up in our own Solar system, hence the names the asteroid ``belt'' and the Kuiper ``belt''. \subsection{Initial Size Distribution of the Planetesimals} \label{subsubsec:dnds} We adopt the following power-law forms for the initial size distributions, \begin{equation} \left. \frac{dn}{ds}\right|_{t=0} \propto \begin{cases} s^{-q_3} \quad \quad s_{\rm small} < s < s_{\rm big} , \\ s^{-q_1} \quad \quad s_{\rm min} < s < s_{\rm small} .\\ \end{cases} \label{eq:time0} \end{equation} The index $q_3$ is the primordial size index for large bodies, like one that arises out of conglomeration models. Previous studies of collisional debris disks have taken this value to be a given; in fact, it is commonly set to the power law one expects from collisional equilibrium \citep{Krivov05,Krivov06,Wyattetal:2007,Lohne:2008}. In contrast, in this contribution we use the observed sample to measure this value.
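As a concrete illustration, the $q_3$ branch of eq. \refnew{eq:time0} can be normalized so that the belt between two size limits holds a prescribed mass; a short sketch in Python (cgs units, and the size limits and mass in the example are illustrative assumptions; the bulk density matches the value adopted later in the text):

```python
import numpy as np

RHO = 2.5  # bulk density in g/cm^3

def n3_normalization(M0, q3, s_small, s_big, rho=RHO):
    """Prefactor n_3 of dn/ds = n_3 * s**(-q3) such that the mass held
    between s_small and s_big (cgs units assumed) equals M0:
        M0 = (4*pi/3) * rho * n_3 * integral of s**(3 - q3) ds."""
    if q3 == 4.0:
        integral = np.log(s_big / s_small)  # equal mass per log interval
    else:
        integral = (s_big ** (4 - q3) - s_small ** (4 - q3)) / (4 - q3)
    return M0 / (4 * np.pi / 3 * rho * integral)
```

For $q_3 = 4$ the integral is logarithmic, reflecting the equal-mass-per-decade property noted below; for $q_3 > 4$ the small end dominates the mass budget.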
In equation \refnew{eq:time0}, $s_{\rm big}$ is the size of the biggest planetesimals and $s_{\rm min}$ that of the smallest. The intermediate size $s_{\rm small}$ is introduced for the purpose of mass accounting: the original mass counts only those between $s_{\rm big}$ and $s_{\rm small}$, \begin{equation} M_0 = \int_{s_{\rm small}}^{s_{\rm{big}}} \frac {4\pi}{3}\rho s^3\, n_3s^{-q_3}\,ds. \label{eq:m0defined} \end{equation} While $s_{\rm min}$ is naturally taken to be the size at which radiation pressure unbinds dust grains from the star ($\sim \mu m$ for a Sun-like star), we discuss our choices for $s_{\rm big}$ and $s_{\rm small}$ below. Motivated by the observational and numerical results discussed in \S \ref{sec:introduction}, we investigate values of $q_3$ between $3.5$ and $5$. The value $q_3 = 4$ has the special property that mass is distributed equally among all logarithmic size ranges, while masses in systems with $q_3 > 4$ diverge toward the small end. The intermediate size $s_{\rm small}$ is introduced partly to avoid dealing with this divergence. For sizes below $s_{\rm small}$, we assume that collisions have set up an equilibrium power law with index $q_1$ (see Appendix). So the intermediate size $s_{\rm small}$ can also be interpreted as the collisional break size at time zero. For our study, we set $s_{\rm small} = 100$ m. For our typical disks, we find that, within a few million years, collisional equilibrium is established for bodies up to sizes $\sim 1$ km, so the choice of $s_{\rm small}$ is not important for the late-time evolution. The choice of size for the largest bodies, $s_{\rm big}$, deserves some discussion, as it affects the qualitative character of the evolution. As a collisional cascade progresses, bodies of larger and larger sizes come into collisional equilibrium, opening up fresh mass reserves to produce the small particles.
Once the largest bodies enter into collisional equilibrium, the dust production rate decays with time as $L_{\rm IR} \propto t^{-1}$ \citep{2007ApJ...658..569W}. Two previous studies \citep{Wyattetal:2007,Lohne:2008} adopted sizes for the largest bodies of $s_{\rm big} = 30$ and $74$ km, respectively. For some of their disks, the largest bodies can enter collisional equilibrium during the lifetime of the system. Both Kuiper belt observations and numerical studies of coagulation favor a largest size of $\sim 1000$~km. The largest object yet found in the Kuiper belt, (136199) Eris, has a radius of $1200 \pm 50$~km \citep{Brown:2006}. In the simulations of \citet{Kenyon:2004b}, coagulation of planetesimals at 30 - 150 AU produces bodies as large as $1000$ - $3000$ km. When the largest bodies reach this size, self-stirring increases the velocity dispersion, and collisions become destructive rather than conglomerating. Therefore, we adopt a maximum body size of $1000$ km in our study. Our quoted masses reflect this choice of $s_{\rm big}$. Our largest bodies never enter into collisional equilibrium. If this assumption turns out to be erroneous, namely, if $s_{\rm big}$ is much smaller and enters into the collisional cascade within the system lifetime, our model would underestimate the initial masses for old disks. As a result, we would overestimate the value of $q_3$. \subsection{Collisions} \label{subsubsec:collision} We only consider collisions that are catastrophically destructive. A catastrophic collision is defined as one that removes at least $50\%$ of the mass of the primary body. In so doing, we have implicitly assumed that both cratering collisions and conglomerating collisions are unimportant. When a destructive collision occurs, the total mass (bullet plus target) is redistributed to all smaller sizes according to $dn/ds \propto s^{-4}$. This choice is somewhat arbitrary, and we have confirmed that modifying it (within reasonable bounds) does not change our results.
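Since $dn/ds \propto s^{-4}$ corresponds to $dm/d\ln s = {\rm const}$, redistributing the colliding mass amounts to splitting it evenly over logarithmic size intervals. A minimal sketch (the bin convention is our assumption, not the paper's code):

```python
import numpy as np

def redistribute(m_total, s_edges):
    """Split the colliding mass m_total over size bins with ascending
    edges s_edges, in proportion to each bin's logarithmic width -- the
    mass weighting implied by a dn/ds ~ s**-4 fragment spectrum, which
    places equal mass in equal logarithmic size intervals."""
    widths = np.diff(np.log(np.asarray(s_edges, dtype=float)))
    return m_total * widths / widths.sum()
```

With equally log-spaced bins this reduces to an even split, and total mass is conserved by construction.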
We do not model the evolution of the orbital dynamics as bodies collide. This is justified by the discussion in \S \ref{subsec:eccentricity}. Let the chance of collision between two bodies of sizes $s$ and $s^{\prime}$ be \begin{equation} f_{\rm{collision}} = \frac{\pi \left(s + s^{\prime}\right)^2} {2 \pi a \Delta a \, t_{\rm{orb}}}. \label{eq:fcol} \end{equation} Here, $2\pi a \Delta a$ is the surface area spanned by the debris ring in the orbital plane, and $t_{\rm orb}$ is the orbital period. Gravitational focusing is negligible for the high random velocities we consider here. The typical encounter velocity, for particles with eccentricity $e$ and inclination $i$, is \citep{Wetherill:1993} \begin{equation} v_{\rm{col}} = \sqrt{1.25e^2+i^2}\, v_{\rm{kep}}, \label{eq:vimpact} \end{equation} where $v_{\rm kep}$ is the local Keplerian velocity. We adopt $i \approx e/2$, so $v_{\rm col} \approx 1.22\, e \, v_{\rm kep}$. As argued in \S \ref{subsec:eccentricity}, it is reasonable to assume a constant eccentricity (and inclination) for all bodies. We take a value of $e=0.1$ as the standard input, and discuss this assumption in \S \ref{sec:discussions}. We denote the specific impact energy required to catastrophically disrupt a body (target) as ${Q}^*$. The scaling of $Q^*$ with the size of the target depends on whether its strength is dominated by material cohesion or self-gravity. We adopt the following form \citep{BenzAndAsphaug}, \begin{equation} {Q}^*~=~A\left(\frac{s}{1~\mbox{cm}}\right)^\alpha+B\rho\left(\frac{s}{1~\mbox{cm}}\right)^\beta \label{BenzAndAsphaug} \end{equation} where $\rho$ is the bulk density, which we take to be $2.5 \, \rm g/cm^3$. The first term on the right-hand side describes the internal-strength limit, important for small bodies, while the second term describes the self-gravity limit, important for larger bodies. The strength law sets the size of the smallest bullets required to destroy a target.
Since these are also the most numerous, they determine the downward conversion rate of mass during a collisional cascade. As such, the power-law indices in the strength law directly determine the size spectrum at collisional equilibrium. For a strength law of the form ${Q}^* \propto s^c$, the equilibrium size spectrum is $dn/ds \propto s^{-q}$, with \citep{1997Icar..130..140D}: \begin{equation} q=(21+c)/(6+c). \label{eq:whatisq} \end{equation} The famous Dohnanyi law \citep{Dohnanyi:1969}, $dn/ds \propto s^{-3.5}$, is recovered for $c = 0$. The value and form of $Q^*$ are notoriously difficult to assess, depending on, among other factors, material composition, porosity and impact velocity. A number of computations and compilations have appeared in the literature. We select three representative formulations for our study (Fig. \ref{fig:Qstarcomp}). Based on a variety of experimental data and SPH simulations, \citet{Krivov05,Lohne:2008} advocated the following choices: $A = 2 \times 10^7 \, \rm erg/\rm g$, $\alpha = -0.3$, $B = 0.158$, $\beta = 1.5$. We call this the ``hard'' strength law. In this case, the collisional spectrum satisfies $q \approx 3.6$ and $3.0$ in the strength and gravity regimes, respectively. Based on energy conservation, \citet{PanSari} calculated a destruction threshold for bodies that have zero internal strength and obtained $B = 3.3 \times 10^{-8}$, $\beta = 2$. So a body at $100 \, \rm km$ is weaker by a factor of $\sim 1000$ than its counterpart in the \citet{Krivov05} formulation. We refer to this as the ``soft'' strength law. A softer strength implies smaller bullets and therefore more frequent destruction of the targets. \citet{PanSari} did not consider smaller bodies that are strength-bound. We adopt $A = 2 \times 10^{7} \, \rm erg/g$ and $\alpha = -0.3$ in this range to complete the soft prescription.
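Eq. \refnew{eq:whatisq} can be checked directly against the slopes quoted above; a quick sketch:

```python
def equilibrium_q(c):
    """Collisional-equilibrium size index q for a strength law Q* ~ s**c,
    following q = (21 + c) / (6 + c)."""
    return (21 + c) / (6 + c)
```

Plugging in $c = -0.3$ and $c = 1.5$ for the hard law recovers the quoted $q \approx 3.6$ and $3.0$, and $c = 0$ gives the Dohnanyi value of $3.5$.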
\cite{2009ApJ...691L.133S} proposed a strength law that depends on impact velocity, \begin{equation} Q^* = \left(500\, s^{-0.33}+10^{-4}\, s^{1.2}\right)v_{\rm col}^{0.8}. \label{eq:sllaw} \end{equation} For a typical velocity $v_{\rm col} = 500 \, \rm m/s$ and for bodies greater than 1 km, this gives rise to a strength law that falls in between those of the hard and the soft cases. We call this the ``medium'' strength law. Note that this strength law is much weaker than the other two for small bodies. For the strength laws we consider, the transition from material-strength domination to self-gravity domination occurs at a size $s \approx s_1$, with $s_1$ ranging between $100$ m (the hard and the medium laws) and $10$ km (the soft law). \begin{figure}[t] \begin{center} \includegraphics[scale=.65, trim = 0 0 0 0, clip]{figure1.eps} \caption{ Prescriptions for the specific strength from \citet{Lohne:2008}, \citet{PanSari} and \citet{2009ApJ...691L.133S}, plotted here as functions of target size. We insert an impact velocity of $500 \, \rm m/\, \rm s$ to evaluate the last prescription. The strength of small bodies is dominated by material cohesion, that of larger bodies by self-gravity. Transitions between the two limits occur around $100$ m (the hard and the medium laws) or around $10$ km (the soft law). Strengths for bodies smaller than $1 \, \rm cm$ are extrapolations, as both laboratory and numerical experiments only concern bodies of larger sizes.} \label{fig:Qstarcomp} \end{center} \end{figure} \subsection{Luminosity Evolution} \begin{figure}[] \begin{center} \includegraphics[scale=.65]{figure4.eps} \caption{Time evolution of the break size in a model system, with $M_0 = 12.4 M_{\oplus}$, $q_3 = 4.0$, $e = 0.1$, $a = 31\rm{AU}$, and $\Delta a/a = 0.1$. Here, the break size is defined as the size at which all bodies initially at that size have encountered of order one destructive collision. The break size increases with time monotonically as larger bodies enter into the collisional cascade.
The numerical results are shown as solid curves, while the analytical scaling relations (see Appendix) are plotted as dashed lines. The bends in the curves occur at $s \approx s_1$, i.e., sizes for which material cohesion and gravitational binding are comparable. The set of thick curves is for the case of hard material strength, while the thin lines are for the soft strength. } \label{fig:s2} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[scale=.65, trim = 0 0 0 0, clip]{figure2.eps} \caption{Evolution of the fractional luminosity, $L_{\rm IR}/L_*$, for the system in Fig. \ref{fig:s2}. The thick line is obtained using the hard strength law, and the thin line the soft one. The evolution proceeds in two stages: a flatter early stage, when the collisional cascade only involves small bodies that are bound by material cohesion; and a steeper later stage, when bodies bound by self-gravity enter the cascade. At late times, the fractional luminosity decays as $t^{-0.5}$ (eq. \refnew{eq:ftime}). While the break sizes differ for the two adopted strength laws (Fig. \ref{fig:s2}), this appears to have little influence on the overall luminosity. } \label{fig:flHD1} \end{center} \end{figure} The planetesimal disk, starting from an initial disk mass $M_0$ and an initial size spectrum (eq. \ref{eq:time0}), is numerically collided and ground down. We divide the particles between $s_{\rm small}$ and $s_{\rm big}$ into $500$ equal logarithmic size bins. The time-step for the simulations is adaptively set so that, over one time-step, the maximum mass gain (from larger bodies) or loss (to smaller bodies) per bin falls below $5\%$. The net mass change is substantially smaller than this due to the cancellation between gain and loss. We calculate the fractional brightness of the dust disk, $L_{IR}/L_*$, by integrating the geometric cross section over all grains. This assumes that grains are perfect absorbers in the optical and can emit efficiently in the infrared.
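Under this assumption, the fractional luminosity is simply the total geometric cross section of the grains divided by the area of the sphere at the ring radius; a sketch for discrete size bins (function name and example values are illustrative):

```python
import numpy as np

def fractional_luminosity(s, N, a):
    """L_IR / L_* for perfectly absorbing grains: the total geometric
    cross section of the grains divided by the full sphere area at the
    ring's semi-major axis a (consistent length units assumed)."""
    return float(np.sum(np.pi * s ** 2 * N) / (4.0 * np.pi * a ** 2))
```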
An example of such a calculation is reported in Figs. \ref{fig:s2} \& \ref{fig:flHD1}. To understand these results, a simple analytical model (see Appendix) is introduced. Scaling relations obtained using this analytical model compare well with our numerical results. Fig. \ref{fig:s2} shows that, with time, larger and larger planetesimals enter into the collisional cascade. Within a million years or so, the cascade has advanced to sizes of order one kilometer. Beyond this time, bodies bound by self-gravity can be gradually eroded. By 1 Gyr, bodies with sizes of $10$--$100$ km may be affected. The exact value depends on the strength law. The dust luminosity is related to the dust mass, which is in turn related to the dust production rate. The dust production rate, on the other hand, is simply the primordial mass stored at the break-size divided by the system age. If the primordial spectrum is such that a large amount of mass is piled at the large end, debris disks would not exhibit significant fading even up to a few billion years. Fig. \ref{fig:flHD1} shows that the dust luminosity evolves as $L_{\rm IR}/L_* \propto t^{-0.5}$ for $q_3 = 4$, consistent with equation \refnew{eq:frac2}. That same equation also demonstrates that the value of $B$, the strength constant for bodies bound by self-gravity, affects the luminosity only weakly. This is borne out by the results shown in Fig. \ref{fig:flHD1}. An important result on which we base our later analysis is shown in Fig. \ref{fig:varyq}, which depicts the luminosity evolution of disks with the same initial mass but different $q_3$. As eq. \refnew{eq:ftime} predicts, $L \propto t^{(q_3-3)/(2-q_3)}$. If $q_3$ is shallow (e.g. $q_3 \leq 4$), most of the initial mass is deposited in the largest planetesimals. This mass reservoir is harder to reach by collisions and allows the disk to remain brighter at later times. In comparison, disks with a steeper $q_3$ decay faster.
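The dependence of the late-time decay on $q_3$ can be tabulated directly from the scaling $L \propto t^{(q_3-3)/(2-q_3)}$. The short Python sketch below (the naming is ours) makes the trend explicit.

```python
def decay_exponent(q3):
    """Late-time slope of the fractional luminosity, L ∝ t^alpha,
    following the scaling of eq. (ftime): alpha = (q3 - 3) / (2 - q3)."""
    return (q3 - 3.0) / (2.0 - q3)

# Steeper primordial spectra (larger q3) decay faster:
# q3 = 3.5 -> alpha = -1/3,  q3 = 4 -> alpha = -1/2,  q3 = 5 -> alpha = -2/3
slopes = {q3: decay_exponent(q3) for q3 in (3.5, 4.0, 5.0)}
```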
If one observes a collection of debris disks all at the same age, intrinsic scatter in, e.g., initial masses makes it impossible to differentiate between models of different $q_3$. However, a collection of disks with a large age spread can be used to constrain $q_3$. This we proceed to demonstrate. \begin{figure}[t] \begin{center} \includegraphics[scale=.65,trim=0 0 0 0,clip]{figure3.eps} \caption{Luminosity evolution for disks with different $q_3$ but the same initial mass ($8 M_\oplus$). Systems with a steeper primordial size spectrum (larger $q_3$) exhibit a more pronounced decline of luminosity with time, since at a given time, a bigger fraction of their mass reservoir has been depleted. Systems with shallower $q_3$ (e.g., $q_3 = 3.5$), on the other hand, are initially dimmer due to the relative shortage of smaller rocks, but eventually outshine the higher-$q_3$ disks as they can hold on to their mass reservoir for longer. The luminosity decay of observed disks that span a large range of ages can thus be used to infer the value of $q_3$. All other parameters here are similar to those used in Fig. \ref{fig:flHD1} and we adopt the hard strength law.} \label{fig:varyq} \end{center} \end{figure} \section{Observed Ensemble} \label{sec:ensemble} Several debris disk surveys have been carried out \citep[see, e.g.,][]{Suetal06,Trillingetal:2008,Lawler, Moor}. The sample of most interest to us is that reported in \citet{Hillenbrand:2008}. Together with updates in \citet{2009ApJS..181..197C}, \citet{Hillenbrand:2008} presented a collection of debris disks around F/G/K type stars, obtained as part of the Spitzer program on Formation and Evolution of Planetary Systems (FEPS).
This sample is unique in that both the stellar age and the radial distance of the dust ring are determined: isochrone fitting provides the age for the host stars (spanning from $\sim 10^{7}$~years to a few $10^{9}$~years), while multi-band photometry and spectral energy fitting yield the semi-major axis of the dust ring. Together with the fractional luminosity of the dust belt, these provide the most important constraints for inferring the primordial properties of the parent planetesimals. To obtain the blow-out size ($s_{\rm min}$) for each system, we take luminosity values for the central stars as given in \citet{Hillenbrand:2008}, and we assign stellar masses by assuming that $M_* \propto L_*^{1/3}$, as appropriate for solar type main-sequence stars. Out of the $31$ disks listed in \citet{Hillenbrand:2008}, we focus only on a sub-sample of 13 disks that appear radially unextended and are around main-sequence stars. In \citet{Hillenbrand:2008}, emission from each disk is initially fitted with a single temperature blackbody (a ring). If agreement between the ${24 \mu \rm{m}}/{33\mu \rm{m}}$ fit and the ${33 \mu \rm{m}}/{70 \mu \rm{m}}$ fit is poor, they argue that the disk is likely radially extended and fit the data instead with two radial components. Since our numerical model is a one-zone model, we find that including the extended sources in our analysis causes significant scatter in the results. This leads us to discard them for the current analysis. We have also excluded HD 191089 from our sample: its fluxes at 13 $\mu$m and 33 $\mu$m are not measured, so it cannot be reliably identified as an unextended source. In all, we are left with 13 sources. It is interesting to note that most of the extended sources are relatively young, all younger than a few hundred million years. In contrast, the unextended sources have a larger age spread, extending to a few billion years (Fig. \ref{fig:compage}).
All systems may be born with more than one debris ring, but after a sufficiently long time, only the outermost ring, which has the longest erosion timescale, remains shining. The extended systems are also brighter than average, likely related to their relative youth. \begin{figure}[] \centering \includegraphics[scale=.65]{figure5.eps} \caption{Cumulative distribution of stellar age (left panel) and dust luminosity (right panel) for the \citet{Hillenbrand:2008} sample. The extended systems (solid curves) tend to be younger and brighter than the unextended systems (dashed curves). } \label{fig:compage} \end{figure} \begin{figure*} \centering \includegraphics[scale=1.1]{figure6.eps} \caption{Inferred disk initial masses, plotted against system ages, for the unextended systems in \citet{Hillenbrand:2008}. The four panels present four different choices of $q_3$. The other parameters chosen are $e = 0.1$, $s_{\rm small} = 10^{4} \rm{cm}$,~and $\Delta a/a = 0.1$. A value of $q_3 \in [3.5 ,4.0]$ is preferred: the upper envelopes for the disk mass remain constant at all ages in the two top plots. Models with higher $q_3$ are excluded as they require a rising upper envelope. In addition, the $q_3 = 5$ model requires unphysically large disk masses for very old disks. } \label{fig:main} \end{figure*} \begin{figure*} \centering \includegraphics[scale=1.1]{figure7.eps} \caption{ Similar to Fig. \ref{fig:main} except that $q_3$ is fixed at $4$ and a number of parameters are varied to test how the inferred initial masses depend on them. Specific values for the inferred mass may change when the small end of the primordial size spectrum ($s_{\rm small}$, top-left), the eccentricity of particles (bottom-left), the fractional width of the debris ring (top-right), and the adopted strength law (bottom-right) are varied for all systems. However, the important indicator for our study, the upper envelope of the masses as a function of system age, remains flat.
So the conclusion that $q_3 \sim 4$ remains valid. } \label{fig:9to13} \end{figure*} \section{The Primordial Size Spectrum Revealed} \label{sec:results} Our strategy is simple. Knowing the luminosity, the age and the semi-major axis of each debris ring, we use our collisional model to backtrack the evolution and infer the initial mass of the planetesimal belt. These initial masses, when plotted against system ages, should show a spread. One expects this spread to be constant across all ages, as disks formed at different cosmic times likely have the same distribution of disk properties. This property could be used to test model assumptions. However, using the spread is difficult due to selection effects. For instance, low mass disks may become too dim at late times to be observable. So we propose instead to study the upper envelope of this spread. The upper envelope should be flat with age for the correct model. From our analytical scaling relations (see Appendix), we find that the most important parameter in our model that affects this mass slope is $q_3$, the power-law index of the primordial size spectrum. The results of such a procedure are shown in Fig. \ref{fig:main}. Models with $q_3 = 4.5$ or greater appear to be excluded by the data, as they would require a rise of initial disk mass with stellar age. The reason behind this becomes transparent upon studying Fig. \ref{fig:varyq}. Models with $q_3 = 3.5$ and $4$ are compatible with observations. Models with smaller $q_3$ lead to a decreasing initial mass with system age and are excluded as well. Our model employs a number of other parameters, such as the radial position and extent of the debris ring, and the dynamical excitation and break-up strength of the particles. We have studied the robustness of our results when these parameters are varied (Fig. \ref{fig:9to13}). As long as the values of these parameters remain constant over age, varying them does not affect our conclusion on $q_3$.
The assumption that the dynamical excitation is constant over age is suspicious, in light of results from coagulation models showing that stirring by large planetesimals gradually increases the eccentricities of the disk particles. This is discussed in \S \ref{sec:discussions}. There is significant uncertainty in our conclusion due to the small sample size. However, we argue that a larger sample may still not favor models with, e.g., $q_3 = 5$. If $q_3 = 5$ (lower-right panel in Fig. \ref{fig:varyq}), a system that remains easily detectable at 2 Gyr of age requires an initial solid mass of $\sim 200 M_\oplus \sim 1 M_J $ in the planetesimal belt. The initial gas mass in such a belt would be higher than the total disk mass of a typical T-Tauri star ($0.01 M_\odot$). By focusing on dust luminosities, we are sensitive only to bodies that lie below the break-size. As seen in Fig. \ref{fig:s2}, the break-size marches up to a few tens to a hundred kilometers by the end of a few billion years, if the disk has a mass of $M_0 = 12.4 M_\oplus$. \section{Discussions} \label{sec:discussions} \subsection{Coagulation Models vs. Debris Disks} \label{subsec:compare} \begin{figure}[t] \centering \includegraphics[scale=0.65]{figure8.eps} \caption{Inferred initial masses for a broken power-law size distribution. We investigate two particular forms, motivated by the coagulation simulations of \citet{2008ApJS..179..451K} and \citet{2011ApJ...728...68S}, respectively. Other parameters adopted are $e = 0.1$, $\Delta a/a = 0.1$, and the hard strength law. The inferred disk mass rises sharply with system age. Moreover, to make old and bright systems, we require disk masses that approach the mass of Jupiter.} \label{fig:kenyon} \end{figure} In our exercise, we have assumed a simple initial size distribution (eq. \ref{eq:time0}), with all bodies larger than a few hundred meters described by a single power-law index $q_3$. We relax this assumption here.
Simulations of planetesimal coagulation typically produce more complicated size distributions. For example, \citet{2008ApJS..179..451K} started their simulations with all bodies at $\leq 1$ km. After tens of millions of years of growth, most of the mass still remains at or below $1$ km, with only $\sim 8\%$ of the mass being accreted into bodies $10$ km or larger, $\sim 6\%$ into bodies $100$~km or larger, and $\sim 3\%$ into bodies of order $1000$~km. We use a broken power-law to replicate this kind of primordial spectrum. We set $q_3 = 5.5$~from $1$~km~to $10$~km, and $q_3 = 4$~from $10$~km~to $1000$~km. Motivated by \citet{2011ApJ...728...68S}, we also consider a slightly different initial distribution with $q_3 = 7$~from $1$~km~to $10$~km, and $q_3 = 4$~from $10$~km~to $1000$~km. Both size spectra deposit mass mostly at the low end ($\leq 1$ km) and little at large sizes. As expected, when initial masses are determined for different systems (Fig. \ref{fig:kenyon}), we find that young systems require exceedingly low initial masses, while old systems require unphysically large initial masses. If we follow the luminosity evolution of such a disk, we see that the disk flares brightly in the first tens of millions of years, due to the large mass reservoir at the $1$ km-range. The luminosity then decays as $t^{-1/2}$ (as expected of a $q_3 = 4$ spectrum) but with a low normalization -- most of the disk mass has been ground down in the early stage and we are left with but a scrap remnant of the original. Conglomeration simulations typically find that only a small fraction of the mass can be accreted to make large bodies, before viscous stirring effectively stalls the growth. \citet{2011ApJ...728...68S} showed that the fraction in large bodies can only be of order $10^{-3}$ in Kuiper-belt-like environments. Do the results in Fig. \ref{fig:kenyon} allow us to exclude current conglomeration models?
One possible caveat in our analysis is the eccentricity. We discuss this below. \subsection{Eccentricity} \label{subsec:eccentricity} We assume a static, high eccentricity ($e=0.1$) for all systems at all times. In realistic systems, eccentricities can be a function of time. One possible cause of eccentricity evolution is collisional cooling. Collisions dissipate energy, so collisional products have on average a lower velocity dispersion than their parent bodies. In a single collision, two bodies with masses $m_1$ and $m_1^\prime$ (assume $m_1 \gg m_1^\prime$) impact with typical velocities\footnote{ Velocities here refer to the random component.} \begin{equation} v_1 \sim v_1^\prime \sim e_1 \, v_{\rm{kep}}, \end{equation} where the subscript $1$ indicates that this is a first generation collision in our counting. Assuming that all collision debris fly away from the collision site with the velocity of the center-of-mass, i.e., that all relative velocities in the center-of-mass frame are dissipated during the collision, collisional cooling can be expressed as \begin{equation} v_2 \sim \frac{\sqrt{m_1^2v_1^2+m_1^{\prime 2}v_1^{\prime 2}}}{m_1+m_1^\prime} \sim v_1\left(1 - \frac{m_1^\prime}{2m_1}\right). \end{equation} So the closer in mass the two colliding bodies are, the more cooling their debris experiences. If cooling dominates the eccentricity evolution, we find that a minimum eccentricity of $e_1 = 0.13$ is required (for the hard strength law, and $e_1 = 0.02$ for the medium law) to allow the collisional cascade to proceed all the way to the micron range. However, even if collisional cooling is severe, we argue that viscous stirring by large planetesimals dominates the eccentricity evolution. This is able to raise the eccentricities of collisional debris to values comparable to those of their parents in a time shorter than a collisional time.
So the condition for a successful collisional cascade is reduced to $e \geq 0.05$ for the hard strength law and $e \geq 0.01$ for the medium strength law, i.e., the minimum random motion necessary to break up the hardest grains (the smallest ones).\footnote{This constraint can be reduced by a factor of unity when radiation pressure on small grains is considered \citep{Thebault09}.} In fact, stirring is likely to gradually raise the eccentricity of all bodies. Stirring by large bodies in the disk goes as $e \propto t^{1/4}$ \citep[c.f.][]{GLS}. In the simulations of \citet{2008ApJS..179..451K}, planetesimals are continuously stirred by Pluto-like bodies, but they only reach $e\sim 0.1$ at about a Gyr.\footnote{ An eccentricity of $e\sim 0.3$ at $40$ AU corresponds to the surface escape velocity of Pluto. Planetesimals have to have a near-surface encounter before they can reach such a high eccentricity. This takes time.} Under such a scenario, the inferred initial disk masses are similar to the original results (Fig. \ref{fig:newfig}), and it is clear that the same range of values for $q_3$ is preferred. \begin{figure}[t] \centering \includegraphics[scale=0.65]{figure10.eps} \caption{Same as the top-right panel in Fig. \ref{fig:main}, but instead of a constant eccentricity ($e=0.1$), here we assume that the eccentricity rises as $e(t) \approx 0.1 (t/10^9 \rm{yrs})^{1/4}$. There is little difference in the inferred masses, and if anything, the data seem to argue that a $q_3= 4$ model slightly overestimates the value of $q_3$. So our conclusion that $q_3 \in [3.5,4]$ remains unchanged even when eccentricity growth is considered. } \label{fig:newfig} \end{figure} \section{Summary} \label{sec:summary} Using an ensemble of bright debris disks around Sun-like stars, we have measured the size spectrum of their embedded planetesimals. We parametrize the size spectrum as $dn/ds \propto s^{-q_3}$ and find $q_3 \approx 3.5-4$, where $q_3 = 4$ corresponds to equal mass per logarithmic decade.
The planetesimal sizes our technique probes lie between a couple of km and $\sim 100 \, \rm km$. While this size spectrum appears consistent with results of coagulation simulations ($q_3 \sim 4$), there are two lines of evidence that suggest problems in current coagulation models. The first line of evidence is related to the inferred disk mass. The inferred initial masses for these bright disks are surprisingly high. We find total masses reaching as high as $10 M_\oplus$.\footnote{This is for $q_3 = 4$, and even higher values are required if $q_3 = 3.5$.} This is comparable to the total solid mass in the Kuiper belt region of the Minimum Mass Solar Nebula model, and about a factor of $100$ higher than the mass in large Kuiper belt objects. Current coagulation models require an MMSN-like total mass to produce the observed density of large Kuiper belt objects. If the same inefficiency persists for our disks, one would require a total disk mass of $\sim 100$ MMSN to produce those embedded planetesimals. This is difficult to imagine. The second line of evidence regards the size spectrum. We experiment with size distributions that arise from coagulation simulations. We find that these distributions cannot reproduce the luminosity distribution of the observed disks. Current coagulation models are highly inefficient in making large planetesimals. So most of the mass remains where it started, presumably at $\sim 1$ km. This leads to debris disks that are too bright at early times and too dim at late times, by a couple of orders of magnitude. We do not believe these discrepancies can be resolved by relaxing some of our model assumptions. In particular, we argue that our estimate for $q_3$ is unchanged even taking into account the fact that disk eccentricity may rise with time. Our results are also insensitive to the width of the debris ring, to the strength of bodies, and to the assumed upper and lower sizes.
Because we restrict our attention to the upper envelope of inferred masses, our result is dominated by a handful of systems and may be vulnerable to errors in those systems. However, the evidence is solid that debris disks remain fairly bright even at a few billion years. This alone dictates that there ought to be a large amount of mass stored in large ($10$--$100$ km) planetesimals. We address how this is accomplished by revisiting coagulation models in an upcoming publication. \begin{acknowledgements} Y. Wu thanks Y. Lithwick, H. Schlichting and P. Sari for discussions. We acknowledge financial support by NSERC. \end{acknowledgements}
\section{Introduction} Data preprocessing, which involves data labeling, is often the most expensive aspect of deploying machine learning \cite{polyzotis2018data}. Active learning is a sub-field of machine learning that proposes a solution to this problem; it is concerned with selecting which unlabeled datapoints would be most beneficial to label and train on \cite{settles1995active}. This is especially applicable when there is a large pool of unlabeled data but a restriction on the number of datapoints that can be labeled due to budget or time constraints. Active learning has found success in many applications including image segmentation \cite{yang2017suggestive}, sequence labeling \cite{settles2008analysis}, medical image classification \cite{hoi2006batch}, cybersecurity \cite{zhao2013cost, nissim2015boosting}, and manufacturing \cite{dasari2021active}. Yet, active learning methods are heavily model-dependent; thus, datapoints labeled for one model may not be effective for the training of other models \cite{lowell2019practical, pmlr-v16-tomanek11a}. As pointed out by Paleyes and Urma, model selection is often an iterative process in machine learning deployment \cite{paleyes2020challenges}. Therefore, for the use of active learning to be practical, the sampled datapoints should also be effective in training other models. Current active learning methods are successful in particular applications; in other words, a specific method will fare well for some combination of dataset and model \cite{lowell2019practical,pmlr-v16-tomanek11a}. The issue is that, given a dataset, identifying which active learning method is the most effective for a given model may defeat the purpose of applying active learning due to the required investment of resources. This is especially true in deployment scenarios where the model type is constantly updated, e.g., from decision tree to random forest to support vector machine, and so on.
The cause of this issue is the model dependency of active learning methods. Typically, points to be labeled are regarded as beneficial to a specific model. However, these labeled points may not be the ideal datapoints for training a different model. Additionally, these points may be drawn from a particular area of the feature space, creating sampling bias. Several methods have been proposed in the literature to combat the sampling bias issue, few of which are generalizable to any model and none of which are model-independent \cite{settles2008analysis,dasgupta2008hierarchical,krishnan2021mitigating,sener2018active,agarwal2020contextual,Liu_2021_ICCV,Elhamifar_2013_ICCV}. To our knowledge, no data-centric active learning methods have been proposed to sample data so that other models can be applied without resampling. In short, active learning methods optimize data labeling for a given model, but struggle to sample data which is also effective for training other models. This issue is closely related to the sampling bias issue, which results from the model dependency of existing active learning methods. In this work we propose an active learning approach based on combinatorial coverage (CC) that is data-centric, can generalize to any model, samples data which can be effectively transferred to new models, and achieves improvements in sampling bias. We contribute three CC-based active learning methods: \begin{itemize} \item coverage density sampling, \item informative coverage density sampling, and \item uncertainty sampling weighted by coverage density. \end{itemize} While combinatorial interaction testing (CIT) and CC are not widespread in machine learning, several applications have proven successful \cite{pei2017deepxplore, ma2018deepgauge, ma2019deepct, kuhn2020combinatorial,lanus2021combinatorial, cody2022systematic}.
We leverage these ideas to develop the proposed methods, and present their competitive performance in terms of the classification performance of both the trained model and different models, as well as their advantages in sampling bias. The rest of this paper is organized as follows. Next, background is given on active learning and CIT. Then, three CC-based active learning methods are proposed. Subsequently, the experimental design is described and results are presented. The results cover 6 publicly available data sets. Then, before concluding with a synopsis, results and future work are discussed. \section{Background} \subsection{Active Learning} \begin{figure}[t] \centering \includegraphics[scale=0.6]{figures/active_learning.png} \caption{A depiction of active learning for labeling data and the reuse of those labels.} \label{fig:AL_wire} \end{figure} Active learning methods can be divided into three groups: membership query synthesis, stream based selective sampling, and pool based sampling \cite{settles1995active}. In membership query synthesis, the model can arbitrarily generate datapoints to label. Stream based selective sampling involves the model receiving a stream of datapoints and deciding, one at a time, whether each should be labeled. Pool based sampling, which is the focus of this work, involves the model drawing a set of unlabeled samples to label from the entire pool of unlabeled samples available. Several popular and generalizable query strategies exist for active learning. Uncertainty sampling selects the datapoints the model is currently most uncertain about \cite{lewis1994heterogeneous}. Query by committee selects points to label as those which a committee of classifiers most disagree on or are on average most uncertain about \cite{seung1992query}. The dependency of active learning on the model being used has led to issues in data transferability and sampling bias.
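As a concrete illustration, entropy-based uncertainty sampling over a query pool can be sketched in a few lines of Python. This is a minimal sketch under our own naming, assuming the model exposes a scikit-learn-style probability method.

```python
import math

def entropy(probs):
    """Shannon entropy of one class-probability vector; 0 log 0 := 0."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def uncertainty_sample(model, X_query, b):
    """Return indices of the b query points with the highest predictive
    entropy, i.e., the points the current model is least certain about.
    Assumes a scikit-learn-style predict_proba(X) method (our assumption)."""
    scores = [entropy(row) for row in model.predict_proba(X_query)]
    ranked = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
    return ranked[:b]
```

Query-by-committee can reuse the same entropy function, averaging it over the committee members' predicted probabilities instead of a single model's.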
The process of sampling data with respect to one model and using it for other models is shown in Figure \ref{fig:AL_wire}, where data is sampled according to the initial model and that same pool of labeled data is used to train different models downstream in the model deployment life cycle. Solutions to the transferability issue have not been proposed, but there has been work investigating how well the data selected by popular methods transfers to other models \cite{lowell2019practical, baldridge2004active, pmlr-v16-tomanek11a, pardakhti2021practical}. Lowell et al. find that, in generic classification problems, transferability of data sampled using uncertainty sampling is not guaranteed \cite{lowell2019practical}. Baldridge and Miles share a similar finding; data generated by random sampling are more transferable to other models than data generated by uncertainty sampling \cite{baldridge2004active}. Tomanek and Morik draw a similar conclusion, but they find that data sampled using uncertainty sampling is transferable to other models for some tasks \cite{pmlr-v16-tomanek11a}. Pardakhti et al. also reference the inability of active learning methods to sample data that is effective for multiple models, but their work focuses on finding the optimal hyperparameters for a model given a data set and active learning method \cite{pardakhti2021practical}. The dependency of active learning on the model being used also leads to issues with sampling bias. Several methods have been proposed in the literature to counteract this. To improve the sampling bias of any baseline sampling method, Settles and Craven propose an information density method, which is computationally expensive for large pools of unlabeled data \cite{settles2008analysis}.
Several other successful methods have been proposed to combat the sampling bias and robustness issues, but all these methods are designed to work with specific model types, e.g., with convolutional neural networks \cite{dasgupta2008hierarchical,krishnan2021mitigating,sener2017active,agarwal2020contextual,Liu_2021_ICCV}. A generalizable method for reducing sampling bias is proposed by Elhamifar et al. \cite{Elhamifar_2013_ICCV}, but it involves an optimization problem with $n^2$ variables, where $n$ is the number of datapoints in the data set, so the method is not scalable to large data sets. In this manuscript, the proposed CC-based active learning algorithms are compared to random sampling, uncertainty sampling \cite{settles1995active}, query by committee \cite{seung1992query}, and information density \cite{settles2008analysis}. For pool based active learning, all methods must select some subset of datapoints from an (unlabeled) query set of data. We implement them as follows. \begin{itemize} \item For \emph{random sampling}, we assume a uniform distribution over all datapoints and select datapoints from the query set with equal probability. \item For \emph{uncertainty sampling}, we use entropy, defined as $-\sum_{y \in Y}p(y)\log p(y)$, where $y$ is a class and $Y$ is the set of all classes. The model trained on the currently labeled data is tested on the query set to determine the probability of each query datapoint belonging to each class. These probabilities are then used for the entropy calculation, and those datapoints for which the model has the highest entropy are selected from the query set. \item For \emph{query by committee}, we also use entropy. Entropy is used to determine which datapoints the committee of classifiers is, on average, most uncertain about. The committee is comprised of three classifiers: random forest, k-nearest neighbors, and logistic regression. The datapoints with the highest average entropy are selected from the query set.
\item For \emph{information density}, as presented by Settles and Craven \cite{settles2008analysis}, we weight an informativeness measure by a similarity metric. We weight the entropy for a datapoint by its cosine similarity to the labeled data divided by the cardinality of the unlabeled set. \end{itemize} We compare these benchmark methods to the proposed methods using 6 open-source data sets that are described in Section \ref{sec:ed}. \subsection{Combinatorial Interaction Testing} CIT stems from covering arrays, ultimately derived from the statistical field of design of experiments, and is principally concerned with designing tests that guarantee coverage of all interactions up to a certain level. In CIT, an interaction level is the number of system components for which possible interactions should be included in the test set. For example, pairwise interaction testing, which has an interaction level of two, aims to design a test set with datapoints containing every possible interaction between the values of every pair of system components. A thorough review of CIT is provided by Nie and Leung \cite{nie2011survey}. CIT has been applied to several fields but has been particularly successful in software testing. The application of CIT to software testing has proven capable of fault detection while minimizing the test set size requirements, as a majority of failures can be attributed to interactions between a few parameters \cite{22621}. The extension of CIT to machine learning involves treating the feature space being used to train and test the model as the system parameters. Values are then the specific values each feature can take. A $t$-way interaction is the same as a $t$-way value combination. This is defined as a $t$-tuple of (feature, value) pairs.
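To make the notion concrete, the $t$-way value combinations contained in a single datapoint can be enumerated with the Python standard library. This is a sketch under our own representation of a datapoint as a dict mapping features to values.

```python
from itertools import combinations

def t_way_combinations(row, t):
    """All t-way value combinations (t-tuples of (feature, value) pairs)
    contained in one datapoint, represented as a dict feature -> value."""
    return {frozenset(c) for c in combinations(sorted(row.items()), t)}

# The car-condition example from the text yields exactly one 3-way
# combination of (feature, value) pairs:
car = {"mileage": "150,000 miles traveled", "age": "20 years old",
       "inspection": "168 days since last inspection"}
triple = t_way_combinations(car, 3)
```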
For example, a 3-way value combination for a car condition classification dataset could be a specific combination of values for mileage, age, and days since last inspection, e.g., `150,000 miles traveled', `20 years old', and `168 days since last inspection'. An extension of CIT is CC, which is the proportion of possible $t$-way interactions which appear in a set \cite{kuhn2013combinatorial}. `Covered' combinations are those interactions which do appear in a set, and `not covered' are those which do not. As CC is concerned with the universe of all possible interactions, Lanus et al. extend CC to Set Difference Combinatorial Coverage (SDCC) \cite{lanus2021combinatorial}. SDCC is the proportion of interactions contained in one dataset but not in another, and is formally defined as follows. \begin{definition} [$t$-way Set Difference Combinatorial Coverage] Let $D_L$ and $D_U$ be sets of data, and $D{_L}^t$ and $D{_U}^t$ be the corresponding $t$-way sets of data. The set difference $D{_U}^t \setminus D{_L}^t$ gives the value combinations that are in $D{_U}^t$ but not in $D{_L}^t$. The $t$-way set difference combinatorial coverage is $$SDCC^t(D_U, D_L) = \frac{|D{_U}^t \setminus D{_L}^t|}{|D{_U}^t|}.$$ \end{definition} Kuhn et al. \cite{kuhn2020combinatorial} show how combinatorial interactions can be used for explainable artificial intelligence. Combinatorial interactions have been used to better define the activity of hidden layers in deep learning \cite{ma2019deepct}, and CC has been used for the testing of deep learning models \cite{pei2017deepxplore,ma2018deepgauge}. CC has also been used as a holistic approach for training and testing of models \cite{cody2022systematic}. Lanus et al. utilize SDCC for failure analysis of machine learning, and find that a dataset with greater coverage leads to a better performing model \cite{lanus2021combinatorial}. Cody et al. expand their experiments and apply SDCC to MNIST \cite{cody2022systematic}.
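SDCC as defined above can be computed directly. The following self-contained Python sketch (the dict-based data representation and the function names are our assumptions) treats each datapoint as a set of feature-value pairs.

```python
from itertools import combinations

def t_way_set(dataset, t):
    """Union of all t-way (feature, value) combinations appearing in the
    dataset; each datapoint is a dict mapping feature -> value."""
    out = set()
    for row in dataset:
        out |= {frozenset(c) for c in combinations(sorted(row.items()), t)}
    return out

def sdcc(D_U, D_L, t):
    """SDCC^t(D_U, D_L): the fraction of t-way combinations in D_U that
    are absent from D_L (Definition 1)."""
    U_t = t_way_set(D_U, t)
    L_t = t_way_set(D_L, t)
    return len(U_t - L_t) / len(U_t) if U_t else 0.0
```

For example, with one labeled point {a: 0, b: 0} and an unlabeled pool that additionally contains {a: 1, b: 0}, the 1-way SDCC is 1/3: of the three (feature, value) pairs in the pool, only (a, 1) is uncovered.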
\section{Methods} The proposed query criterion relies heavily on SDCC, where the labeled dataset is considered $D_L$ and the unlabeled dataset $D_U$. The query strategy involves finding those datapoints in the unlabeled pool which contain interactions not included in the labeled set, up to an interaction level of 6, since a majority of software failures are found at or below this level \cite{22621}. Datapoints which contain a greater number of missing interactions are given a higher priority for labeling. Once the hierarchy of datapoints to label has been determined, selection according to this hierarchy is done in three ways: coverage density sampling, informative coverage density sampling, and uncertainty sampling weighted by coverage density. These data-centric methods should aid in sampling points which allow for data transferability to new models, as illustrated in Figure 1. Algorithm 1 presents a method to determine coverage density given unlabeled and labeled datasets. As a data point from the query set can contain several missing interactions, the number of missing interactions it contains can be considered the density of coverage at that point. Lower-level interactions are expected to be associated with a greater number of classes than higher-level interactions, so they should hold a greater weight. The proposed weighting scheme utilizes the decreasing function $\frac{1}{t}$ for $t=1,\ldots,6$, where $t$ is the interaction level. The coverage density of a point is then the weighted sum of all missing interactions contained in that data point. This density is used in determining which datapoints to query in the proposed methods. Coverage density is formally defined as follows. \begin{definition} [Coverage Density] Let $D_L$ and $D_U$ be sets of data, and let $j \in D_L$ and $i \in D_U$. Also, let $j_t$ and $i_t$ be the corresponding $t$-way sets of data.
Then the coverage density of $i$ at level $t$ is $c_{i_t} = \frac{1}{t}\left|i_t \setminus D{_L}^t\right|$; that is, each $t$-way interaction of $i$ that does not appear in any $j_t$ for $j \in D_L$ contributes $\frac{1}{t}$.\\ The coverage density of each $i \in D_U$ is $c_i = \sum_{t=1}^{T} c_{i_t}$, where $T$, the highest interaction level, is user specified. \end{definition} \begin{algorithm} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \underline{function Coverage Density} $(L,U)$\; \Input{Labeled set $L$ and unlabeled set $U$} \Output{Coverage density, $c$} \For([This loop finds all interactions at level $t$ in $L$ and $U$]){$t$ $in$ $1$ $to$ $T$}{$\mu_t \gets \text{LIST of interactions in } L \text{ at level } t$ $\beta_t \gets \text{LIST of interactions in } U \text{ at level } t$ $indices \gets$ EMPTY LIST \For([Update indices with missing interactions]){$i$ $in$ $1$ $to$ \text{length of} $\beta_t$}{ \If{$i^{th}$ $element$ $of$ $\beta_t$ $not$ $in$ $\mu_t$} { APPEND associated index value from $U$ to $indices$\; } } \For([Coverage density is updated]){$j$ $in$ $indices$}{ $j^{th}$ $element$ $of$ $c$ $\gets$ $j^{th}$ $element$ $of$ $c + 1/t$ } } \caption{Coverage Density Algorithm} \end{algorithm} All proposed sampling methods use the same variable definitions: $x_i$ is a binary variable valued 1 if data point $i$ will be sent to the oracle for labeling and 0 otherwise; $c_i$ is the coverage density of data point $i$ as previously defined; and $b$ is the budget, or number of points we are allowed to select. The three proposed methods to sample points are presented in Definitions 3--5. \begin{definition} [Coverage Density Sampling] Let $c_i$ and $b$ be given; the points selected are those with the highest coverage density: \begin{equation*} \begin{aligned} \argmax_{i} \quad & \sum_{i}{ c_{i}x_i}\\ \textrm{s.t.} \quad & \sum_{i}x_{i} \leq b\\ \end{aligned} \end{equation*} \end{definition} The second method weighs coverage density by similarity, which should protect against outliers.
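As a concrete sketch of Definition 2 and Algorithm 1 (assuming categorical features; the function and variable names are illustrative and not from the original implementation), coverage density and the Definition 3 selection rule can be written as:

```python
from itertools import combinations

def coverage_density(labeled, unlabeled, T=6):
    """Weighted count of t-way interactions each unlabeled point
    contains that are missing from the labeled set; a missing
    interaction at level t contributes 1/t, for t = 1..T."""
    density = [0.0] * len(unlabeled)
    n_feat = len(unlabeled[0])
    for t in range(1, min(T, n_feat) + 1):
        # Interactions covered by the labeled set at level t (mu_t).
        covered = set()
        for row in labeled:
            for feats in combinations(range(n_feat), t):
                covered.add(tuple((f, row[f]) for f in feats))
        # Credit each unlabeled point 1/t per missing interaction.
        for i, row in enumerate(unlabeled):
            for feats in combinations(range(n_feat), t):
                if tuple((f, row[f]) for f in feats) not in covered:
                    density[i] += 1.0 / t
    return density

def coverage_density_sampling(labeled, unlabeled, b, T=6):
    """Definition 3: select the b unlabeled indices of highest density."""
    c = coverage_density(labeled, unlabeled, T)
    return sorted(range(len(unlabeled)), key=lambda i: -c[i])[:b]
```

Ranking the unlabeled pool by this density recovers the query hierarchy described above; the budget constraint $\sum_i x_i \le b$ corresponds to taking the top-$b$ indices.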
Cosine similarity is used as the measure of similarity between each query point and all other points; it is defined in Equation 1, where $\boldsymbol{x}$ represents all datapoints in both the training and query sets. \begin{definition} [Informative Coverage Density Sampling] Let $c_i$ and $b$ be given. The informativeness of a datapoint, $x_i$, is coverage density weighted by average similarity, where $U$ is the cardinality of the unlabeled set: \begin{equation*} \begin{aligned} I(x_i) = c_i \frac{1}{U}sim(\boldsymbol{x},x_i) \end{aligned} \end{equation*} where similarity is cosine similarity: \begin{equation} \begin{aligned} sim(\boldsymbol{x},x_i) = \sum_{x \in \boldsymbol{x}}\frac{x\cdot x_i}{\left|x\right|\left|x_i\right|} \end{aligned} \end{equation} The datapoints selected should maximize the sum of informativeness: \begin{equation*} \begin{aligned} \argmax_i \quad & \sum_{i}{I(x_i)x_i}\\ \textrm{s.t.} \quad & \sum_{i}x_{i} \leq b\\ \end{aligned} \end{equation*} \end{definition} The final method weights the common entropy-based uncertainty sampling formulation by the coverage density of the data point as defined previously. \begin{definition} [Uncertainty Sampling Weighted by Coverage Density (USWCD)] Let $c_i$ and $b$ be given. The informativeness of a data point, $x_i$, is defined as follows: \begin{equation*} \begin{aligned} I(x_i) = H(x_i)c_{i} \end{aligned} \end{equation*} where $H(x_i)$ is the entropy of the model's prediction at point $x_i$: \begin{equation*} \begin{aligned} H(x_i)= - \sum_{y \in Y}{p(y \mid x_i)\log p(y \mid x_i)} \end{aligned} \end{equation*} That is, the entropy over all the classes to which a specific data point may belong.
The datapoints selected should maximize the sum of informativeness: \begin{equation*} \begin{aligned} \argmax_i \quad & \sum_{i}{I(x_i)x_i}\\ \textrm{s.t.} \quad & \sum_{i}x_{i} \leq b\\ \end{aligned} \end{equation*} \end{definition} All three methods rely on the data, with USWCD being the only method that also takes model input. Data-centric methods should allow the labeled data to transfer better between models; this is illustrated in the experiments. \section{Experimental Design} \label{sec:ed} \subsection{Data} All experiments are conducted on data sets from the UCI Machine Learning repository \cite{Dua:2019}, and the benchmark methods used are the ones previously defined. Table 1 displays general information about each data set. Batch size is the number of datapoints queried at each active learning iteration: a batch size of 100 is used for larger data sets, while 25 points per batch are used for smaller data sets. All data sets, other than the Monk data set, are randomly split so that 10\% is held out for testing. Of the remaining 90\% of the data, 2.5\% is used as the initial training set and 97.5\% is used as the query set. The Monk data set is pre-partitioned into training and testing sets, so its training set (whose size is listed in Table 1) is split into 97.5\% query points and 2.5\% initial training data. \begin{table}[h!]
\captionsetup{justification=centering, labelsep=newline, font=footnotesize} \centering \caption{\sc Data Set Information} \resizebox{9cm}{!}{% \begin{tabular}{l c c c c} \hline Data Set & Datapoints & Features & Batch Size & Number of Batches \\ [0.5ex] \hline Tic-Tac-Toe & 957 & 9 & 100 & 8 \\ \hline Balance Scale & 624 & 4 & 25 & 21\\\hline Car Evaluation & 1727 & 6 & 100 & 15\\\hline Chess & 28066 & 6 & 100 & 246\\\hline Nursery & 12959 & 8 & 100 & 113\\\hline Monk & 414 & 6 & 25 & 13\\ \hline \end{tabular}} \label{table:1} \end{table} \subsection{Performance Measures} F1 is used as the measure of classifier performance, since F1 accounts for class imbalance, unlike accuracy. Also, F1 balances precision and recall, as opposed to other F-measures. F1 is calculated as \begin{equation*} \begin{aligned} F1 = 2 \cdot \frac{\text{Precision}\cdot\text{Recall}}{\text{Precision}+\text{Recall}} \\ \end{aligned} \end{equation*} Precision and recall are defined as follows, where $tp$ is true positives, $fp$ is false positives, and $fn$ is false negatives: \begin{equation*} \begin{aligned} \text{Precision} = \frac{tp}{tp+fp} \\ \text{Recall} = \frac{tp}{tp+fn} \\ \end{aligned} \end{equation*} Experiments on each data set are conducted three times, each time using the same random partition of data. The average F1 of the three runs is used to determine performance. To quantify performance over all iterations of sampling, we take the area under the learning curve (AUC), for which the x-axis is the number of datapoints queried and the y-axis is F1. The area is determined using the trapezoidal rule as implemented in NumPy \cite{harris2020array}. To determine the effect of the proposed methods on sampling bias, a sampling bias comparison method proposed by Krishnan et al. \cite{krishnan2021mitigating} is used; it is presented in Equation 2.
$H_{D_{L}}$ is the entropy of the sampled set and $H_{Balanced}$ is the entropy of a set with an equal number of datapoints from each class. \begin{equation} \begin{aligned} \text{Sampling Bias} = 1 - \frac{H_{D_{L}}}{H_{Balanced}} \end{aligned} \end{equation} $H_{D_{L}}$ is defined as $- \sum_{k=1}^{K}\frac{M_k}{M}\log\left(\frac{M_k}{M}\right)$, where $M_k$ is the number of datapoints belonging to class $k$ and $M$ is the total number of datapoints in our sample. The average sampling bias of the three runs is used for comparison. \subsection{Experiment 1} The initial experiment involves sampling with respect to a certain model and using the same model to compare methods. This is the active learning scenario most frequently discussed, and it is presented in the `initial labeling effort' section of Figure 1. All data sets are sampled with respect to, trained, and tested utilizing a Random Forest classifier with max depth constrained to five. \subsection{Experiment 2} To determine the effectiveness of the methods in sampling points that are beneficial to a model outside the learning loop, as depicted in the `Reuse of Labeled Data' portion of Figure 1, another experiment is employed. The points sampled with respect to a Random Forest classifier with max depth constrained to five are used to train a Decision Tree classifier and a Support Vector Machine (SVM); for the SVM, a Support Vector Classifier (SVC) implementation is used. Neither the Decision Tree nor the SVC undergoes any hyperparameter tuning; both are used as-is from Scikit-learn \cite{scikit-learn}. That is, the Decision Tree uses Gini impurity to determine the quality of a split and is not constrained to a max depth, while the SVC uses a radial basis function kernel and has the squared L2 norm as a regularization term. After each iteration of sampling, all models are tested, and the average of the three runs is used for the results.
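Equation 2 and the entropy $H_{D_L}$ above reduce to a few lines of Python (a sketch; the logarithm base cancels in the ratio, so natural log is used):

```python
from collections import Counter
from math import log

def sampling_bias(labels, num_classes):
    """1 - H(sampled set) / H(balanced set), per Equation 2.
    Returns 0 for a perfectly balanced sample and approaches 1
    as the sample collapses onto a single class."""
    m = len(labels)
    counts = Counter(labels)
    h_sample = -sum((n / m) * log(n / m) for n in counts.values())
    h_balanced = log(num_classes)  # entropy of an equal split over K classes
    return 1.0 - h_sample / h_balanced

print(sampling_bias(["a", "b", "a", "b"], num_classes=2))  # balanced -> 0.0
```

Here $H_{Balanced} = \log K$ follows from substituting $M_k = M/K$ into the entropy expression above.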
These experiments should provide an understanding of how the methods compare when sampling and training with respect to a particular model, and will further depict how effectively the sampled points transfer to different models. Both scenarios are crucial to machine learning and active learning deployment. \section{Results} \begin{figure*}[h] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth,height=5cm,keepaspectratio]{figures/chess_forest_og.png} \caption{Chess with Random Forest} \label{fig:chess_forest} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth,height=5cm,keepaspectratio]{figures/chess_svm.png} \caption{Chess with SVM} \label{fig:chess_svm} \end{subfigure} \caption{Chess Dataset Results} \label{fig:chess_results} \end{figure*} \begin{figure*}[h] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth,height=5cm,keepaspectratio]{figures/nursery_forest_og.png} \caption{Nursery with Random Forest} \label{fig:nursery_forest} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth,height=5cm,keepaspectratio]{figures/nursery_svm.png} \caption{Nursery with SVM} \label{fig:nursery_svm} \end{subfigure} \caption{Nursery Dataset Results} \label{fig:nursery_results} \end{figure*} The learning curves for the chess and nursery datasets using the Random Forest and SVM classifiers are shown in Figures 2 and 3; these plots show F1 score versus the number of query points added. Dashed lines signify the benchmark methods, while solid lines signify the methods incorporating coverage. The AUC of F1 versus query points added to the training set for each of the sampling methods is presented in Table 2. Query by Committee has the best performance in a majority of the trials (seven), but USWCD is a near second with six best performances.
To break down the instances in which each method outperforms the others, we look at performance on the original model as well as on the models to which the data is transferred. We first focus on `Experiment 1' as described in Section \ref{sec:ed}. The proposed methods should be competitive with the benchmark methods when sampling and testing with respect to a specific model, in our case the Random Forest model restricted to a depth of five. To examine this performance, we look at the percent difference in AUC of each method from the best performing method on each dataset; this is presented in Table 3. USWCD is the best performer once, where it has a 0.00\% difference in AUC from the best method, itself. Uncertainty sampling and information density also each perform best only once, while Query by Committee performs best three times. Though USWCD does not always perform best, it achieves performance nearest the best performer in 60\% of the instances where it is not the best performer. So, USWCD achieves competitive performance for the model in the active learning loop; next we study models outside the learning loop, as described in `Experiment 2'. In machine learning deployment, the model in use would likely not change unless the new model provides some benefit, such as computational efficiency or performance. For this study we pay special attention to performance, and look at instances in which the use of a Decision Tree or SVM increases final model performance by 5\% or more. As Lowell et al. \cite{lowell2019practical} point out, active learning methods often do not outperform random sampling when the sampled data is transferred to a new model. Therefore, we treat random sampling as the baseline for data transfer comparison. Table 4 shows the percent difference in the area under the F1 versus queried points curve between each method and random sampling when model performance increases by 5\% or more. In a majority of the presented scenarios, USWCD outperforms the other methods.
This is also the only method which does not perform worse than random sampling in any instance of model improvement. So, the proposed method is effective in sampling data which is transferable to new and more effective models. \begin{table*}[t] \captionsetup{justification=centering, labelsep=newline, font=footnotesize} \centering \caption{\sc AUC of F1 vs. Query Points Added For Each Method, Model, and Dataset} \begin{tabular}{llllllll} Active Learning Method & Model & Monk & Balance Scale & Car Evaluation & Tic-Tac-Toe & Nursery & Chess \\ \hline Random Sampling & \multirow{3}{7em}{Random Forest\\ Decision Tree \\ SVM} & \multirow{3}{4em}{221.60\\210.46 \\ 219.71} & \multirow{3}{4em}{462.63\\426.47 \\ 488.30}& \multirow{3}{4em}{1303.77\\1509.68 \\ 1321.41} & \multirow{3}{4em}{696.84\\764.03 \\ 690.35}&\multirow{3}{4em}{10499.40\\11201.24 \\ 10997.11}&\multirow{3}{4em}{9437.38\\17051.97 \\ 11464.67} \\ \\ \\ \\ \hline Uncertainty Sampling & \multirow{3}{7em}{Random Forest\\ Decision Tree \\ SVM}& \multirow{3}{4em}{\textbf{228.47}\\213.20 \\ 221.67} & \multirow{3}{4em}{459.87\\\textbf{433.73} \\ 477.82} & \multirow{3}{4em}{1332.55\\1455.13 \\ 1305.52}& \multirow{3}{4em}{714.21\\758.59 \\ 710.00} & \multirow{3}{4em}{10507.02\\11126.62 \\ 10800.42}&\multirow{3}{4em}{8972.98\\15848.8 \\ 10537.07} \\ \\ \\ \\ \hline Query by Committee & \multirow{3}{7em}{Random Forest\\ Decision Tree \\ SVM}& \multirow{3}{4em}{224.90\\\textbf{217.30}\\222.03} & \multirow{3}{4em}{\textbf{477.55}\\415.52 \\ 489.51}& \multirow{3}{4em}{1346.31\\1491.37 \\ 1395.44}& \multirow{3}{4em}{\textbf{731.57}\\761.22 \\ \textbf{717.71}}& \multirow{3}{4em}{10430.06\\\textbf{11301.59} \\ \textbf{11056.61}}&\multirow{3}{4em}{\textbf{9632.39}\\17196.40 \\ 11031.15}\\ \\ \\ \\ \hline Information Density Sampling & \multirow{3}{7em}{Random Forest\\ Decision Tree \\ SVM}& \multirow{3}{4em}{220.91\\213.28 \\ 218.88}& \multirow{3}{4em}{466.80\\419.28 \\ 488.57}& \multirow{3}{4em}{1374.51\\1517.34 \\ 1403.19}& 
\multirow{3}{4em}{712.80\\760.35 \\ 703.33}& \multirow{3}{4em}{\textbf{10563.24}\\11299.12 \\ 10971.20}& \multirow{3}{4em}{9511.56\\\textbf{17593.14} \\ 11048.85} \\ \\ \\ \\ \hline Coverage Density Sampling & \multirow{3}{7em}{Random Forest\\ Decision Tree \\ SVM}& \multirow{3}{4em}{219.68\\211.20\\ \textbf{222.81}}& \multirow{3}{4em}{453.42\\408.87 \\ 484.27} & \multirow{3}{4em}{1297.09\\1507.07 \\ 1304.94}& \multirow{3}{4em}{705.78\\775.08 \\ 681.57}& \multirow{3}{4em}{10390.65\\11090.34 \\ 10969.15}& \multirow{3}{4em}{9512.64\\16704.74 \\ 11381.05} \\ \\ \\ \\ \hline Informative Coverage Density Sampling & \multirow{3}{7em}{Random Forest\\ Decision Tree \\ SVM}& \multirow{3}{4em}{222.02\\209.71\\ 221.73}& \multirow{3}{4em}{445.56\\409.81 \\ 479.83}& \multirow{3}{4em}{1314.82\\1505.03 \\ 1317.73}& \multirow{3}{4em}{693.15\\759.47 \\ 686.84}& \multirow{3}{4em}{10376.79\\11065.89 \\ 10970.84}& \multirow{3}{4em}{9465.12\\16720.74 \\ 11369.16} \\ \\ \\ \\ \hline USWCD & \multirow{3}{7em}{Random Forest\\ Decision Tree \\ SVM}& \multirow{3}{4em}{220.05\\209.74\\218.84}& \multirow{3}{4em}{477.21\\425.33 \\ \textbf{493.54}}& \multirow{3}{4em}{\textbf{1388.95}\\\textbf{1531.87} \\ \textbf{1404.94}}& \multirow{3}{4em}{716.84\\\textbf{778.59} \\ 711.05} & \multirow{3}{4em}{10522.23\\11245.97\\ 11006.75}& \multirow{3}{4em}{9485.49\\17300.60 \\ \textbf{11482.56}} \\ \\ \\ \\ \hline \end{tabular} \end{table*} \begin{table*}[t] \captionsetup{justification=centering, labelsep=newline, font=footnotesize} \centering \caption{\sc Percent Difference in AUC Between Best Method And Each Method on Original Model by Dataset (\%)} \begin{tabular}{lcccccc} Active Learning Method & Monk & Balance Scale & Car Evaluation & Tic-Tac-Toe & Nursery & Chess \\ \hline Random Sampling & -3.01 &-3.13 &-6.13 & -4.75 &-0.61 &-2.03 \\ \hline Uncertainty Sampling & \textbf{0.00} &-3.70 &-4.06 &-2.37 &-0.53 &-6.85 \\ \hline Query by Committee & -1.56 &\textbf{0.00} &-3.07 &\textbf{0.00}&-1.26 
&\textbf{0.00} \\ \hline Information Density Sampling & -3.31&-2.25&-1.04&-2.57&\textbf{0.00}&-1.25 \\ \hline Coverage Density Sampling & -3.85&-5.05&-6.61&-3.53&-1.63&-1.24\\ \hline Informative Coverage Density Sampling & -2.82&-6.70&-5.34&-5.25&-1.77&-1.74 \\ \hline USWCD & -3.69&-0.07&\textbf{0.00}&-2.01&-0.39&-1.53\\ \hline \end{tabular} \end{table*} \begin{table*}[t] \captionsetup{justification=centering, labelsep=newline, font=footnotesize} \centering \caption{\sc Percent Difference in AUC from Random Sampling When Model Changes and Performance Increases (\%)} \begin{tabular}{lllllll} Active Learning Method & Model & Balance Scale & Car Evaluation & Tic-Tac-Toe & Nursery & Chess \\ \hline Uncertainty Sampling & \multirow{2}{7em}{Decision Tree \\ SVM}& \multirow{2}{5em}{ \\ -2.15} & \multirow{2}{5em}{-3.61 \\ -1.20} & \multirow{2}{5em}{\hspace{.14cm}-.71\\}& \multirow{2}{5em}{\hspace{.14cm}-.67\\-1.79} & \multirow{2}{5em}{-7.06\\-8.09} \\ \\ \hline Query by Committee & \multirow{2}{7em}{Decision Tree \\ SVM}& \multirow{2}{5em}{\\\hspace{.22cm}.25} & \multirow{2}{5em}{-1.21\\\hspace{.09cm}5.60}& \multirow{2}{5em}{\hspace{.14cm}-.37 \\}& \multirow{2}{5em}{\hspace{.23cm}\textbf{.9}\\ \hspace{.22cm}\textbf{.54}}& \multirow{2}{5em}{\hspace{.22cm}.85\\-3.78} \\ \\ \hline Information Density Sampling & \multirow{2}{7em}{Decision Tree \\ SVM}& \multirow{2}{5em}{\\\hspace{.22cm}.06}& \multirow{2}{5em}{\hspace{.22cm}.51\\\hspace{.09cm}6.19}& \multirow{2}{5em}{\hspace{.14cm}-.48\\}& \multirow{2}{5em}{\hspace{.22cm}.87\\\hspace{.14cm}-.24}& \multirow{2}{5em}{\hspace{.09cm}\textbf{3.17}\\-3.63} \\ \\ \hline Coverage Density Sampling & \multirow{2}{7em}{Decision Tree \\ SVM}& \multirow{2}{5em}{\\\hspace{.14cm}-.83}& \multirow{2}{5em}{\hspace{.14cm}-.17\\-1.25} & \multirow{2}{5em}{\hspace{.09cm}1.45\\}& \multirow{2}{5em}{\hspace{.14cm}-.99\\\hspace{.14cm}-.25}& \multirow{2}{5em}{-2.04\\\hspace{.14cm}-.73} \\ \\ \hline Informative Coverage Density Sampling & 
\multirow{2}{7em}{Decision Tree \\ SVM}& \multirow{2}{5em}{\\-1.74}& \multirow{2}{5em}{\hspace{.14cm}-.31\\\hspace{.14cm}-.28}& \multirow{2}{5em}{\hspace{.14cm}-.6\\}& \multirow{2}{5em}{-1.21\\\hspace{.14cm}-.24}& \multirow{2}{5em}{-1.94\\\hspace{.14cm}-.83} \\ \\ \hline USWCD & \multirow{2}{7em}{Decision Tree \\ SVM}& \multirow{2}{5em}{\\\hspace{.09cm}\textbf{1.07}}& \multirow{2}{5em}{\hspace{.09cm}\textbf{1.47}\\\hspace{.09cm}\textbf{6.32}}& \multirow{2}{5em}{\hspace{.09cm}\textbf{1.91} \\}& \multirow{2}{5em}{\hspace{.22cm}.40\\\hspace{.22cm}.09} & \multirow{2}{5em}{\hspace{.09cm}1.46\\\hspace{.22cm}\textbf{.16}} \\ \\ \hline \end{tabular} \end{table*} The proposed USWCD does achieve comparable performance to the benchmark methods on the original model, and outperforms the benchmark methods on new models which perform better. However, comparing methods across all datasets for each model helps to paint a better picture of overall performance. To compare results of active learning methods across datasets, the area under the F1 curve is normalized using the following equation: $ x^* = \frac{x-x_{min}}{x_{max}-x_{min}}$. Then, the median performance of each method across all datasets is recorded; these results are presented in Table 5, where the top performer is bolded. It is clear that the proposed USWCD method outperforms the other methods when data is transferred to new models. On the original model, Query by Committee performs best, but USWCD achieves the most similar performance. So, USWCD achieves a comparable performance to existing methods in the single-model setting, and outperforms existing methods in the multi-model setting. \begin{table*}[t] \captionsetup{justification=centering, labelsep=newline, font=footnotesize} \centering \caption{\sc Median Normalized Area Under Curve of F1 vs.
Query Points Added} \begin{tabular}{lccc} Active Learning Method & Random Forest & Decision Tree & SVM \\ \hline Random Sampling & .376 & .632 & .455\\\hline Uncertainty Sampling & .498 & .129 & .003\\\hline Query by Committee & \textbf{.797} & .622 & .854 \\\hline Information Density Sampling & .740 & .641 & .634\\\hline Coverage Density Sampling & .160 & .344 & .534 \\\hline Informative Coverage Density Sampling & .097 & .041 & .406 \\\hline USWCD & .779 & \textbf{.798} & \textbf{.908} \\\hline \end{tabular} \end{table*} To study the sampling bias across all datasets, a violin plot is used as implemented in seaborn~\cite{Waskom2021}. The violin plot is created using kernel density estimation to fit a probability distribution. The white dot in each distribution marks the median. The plot is shown in Figure 4, and the median sampling bias of USWCD is lower than that of all other methods. However, the distributions do look very similar, so testing the methods on datasets with greater class diversity may provide a clearer picture of sampling bias. \begin{figure}[t] \centering \includegraphics[scale=.4]{figures/violin_plot.png} \caption{Violin Plot for Sampling Bias} \label{fig:violin plot} \end{figure} \section{Discussion and Future Work} As shown in the experiment results, our proposed USWCD achieves comparable performance on the model in the active learning loop, and outperforms existing methods on models outside the loop. The versatility of the sampled data is crucial for deployable active learning. We also study the effect on sampling bias, as a data-centric approach may ease the sampling bias issues usually associated with active learning. It is found that there are not large improvements in sampling bias compared to all benchmark methods, but the proposed USWCD does achieve a lower median sampling bias than the other methods. Studying the methods on datasets with more classes may give a fuller account of the sampling bias.
Other future extensions of this work include extending coverage-based active learning to continuous data, where a discretization method would be applied to determine which datapoints contain missing interactions. The extension of coverage density sampling to neural networks, deep learning, and other computationally expensive models could also be beneficial. Re-training these models at each iteration of active learning, as is required for uncertainty sampling, query by committee, and information density sampling, might not be practical. Coverage density could offer a data-centric approach to sampling points without the need to re-train the model at each iteration. \section{Conclusion} In this paper we introduced three active learning methods utilizing SDCC, with the goal of creating methods that sample data which is useful not only for the current model but also for future models. We test the methods on six data sets from the UCI Machine Learning repository and use four benchmark methods: random sampling, uncertainty sampling, query by committee, and information density sampling. We initially sample data using a Random Forest classifier, then use the sampled data to train a Decision Tree and an SVM to test our hypothesis that a data-centric method should sample datapoints which contribute more to learning for an arbitrary model than a heavily model-focused method. The results are summarized in Table 5. We find that the proposed USWCD method performs as well as, if not better than, the other methods on the original model. When the data is transferred to new models, USWCD outperforms all other methods. \bibliographystyle{IEEEtran}
\section{Background} \label{sec:intro} We consider a random access system with symmetric users who compete to communicate with a common receiver, or a base station (BS). Traditional approaches for analyzing such systems use the simplified collision model (\cite{BG:92} and references therein), which assumes that a message is received error-free by the BS {\bf if and only if} a single user transmits. Under this model, several protocols, which attempt to avoid collisions, have been proposed in the literature, for example, Gallager tree algorithm (GTA) \cite{G:78}. The collision model, however, does not adequately capture some important characteristics of the wireless channel, e.g., multi-path fading, and ignores certain physical layer (PHY) properties like multi-packet reception (MPR)~\cite{NMT:05}. Recently, several researchers have started to focus on cross-layer optimization approaches which leverage the wireless medium to improve the performance of random access systems. For example, Naware {\em et al.} \cite{NMT:05} analyzed the stability and average delay of slotted-ALOHA based random access channels with MPR at the BS. This analysis, however, abstracts out the physical layer parameters by using a very simplified model for MPR probabilities. Another example is \cite{TZB:00} where Tsatsanis {\em et al.} propose the Network-assisted Diversity Multiple Access (NDMA) protocol, which uses a {\bf repetition based} Automatic Repeat reQuest (ARQ) approach for collision resolution. As argued in the sequel, this protocol suffers from a significant loss in throughput resulting from repetition coding. In \cite{CT:01}, Caire {\em et al.} analyzed the throughput of incremental redundancy (IR)-ARQ for the Gaussian collision channel with fully-loaded\footnote{Each queue has infinite packets for transmission.} queues and single-user decoders at the base station. 
By adopting the fully-loaded queuing model, this work ignores the stability issues that arise in practical random access systems with random arrivals. Moreover, the single-user decoders used in this work are sub-optimal and result in considerable throughput losses. To overcome the limitations of these previous works, we adopt a more realistic model for the physical layer, and develop a variant of IR-ARQ protocols optimized for random access over wireless channels. \section{ARQ Random Access} \label{sec:sys_model} In this section, we introduce our system model and briefly review two existing random access schemes; namely, the GTA and NDMA protocols. To the best of the authors' knowledge, these two approaches represent the state of the art in the design of random access protocols. We then present an IR-ARQ random access protocol that overcomes the limitations of these protocols. \subsection{System Model} We consider a $K$-user symmetric random access channel with $M$ antennas at each user and $N$ antennas at the receiver (base station). We assume that all the channels are independent and experience Rayleigh-flat and long-term static block fading where the channel coefficients remain constant during one collision resolution (CR) epoch and change independently from one epoch to the next (a CR epoch will be defined rigorously in the next section). The channel coefficients are assumed to be perfectly known to the BS, but unknown to the random access users. We consider individual power constraints on the users, and denote the average received signal-to-noise ratio (SNR) of each user by $\rho$. In our model, time is slotted and a slot is composed of $T$ channel uses. 
In order to control the number of users colliding in any particular slot, each user selects a slot for transmitting a new packet according to the probability-$p_t$ rule: in every slot, each user with a non-empty queue transmits a packet with probability $p_t$ and does not transmit with probability $1-p_t$, where $0 < p_t \le 1$. We assume that the BS can perfectly identify the set of active users (by assigning a different control channel to each user). We initially assume fully-loaded queues in Section~\ref{sec:dmdt}, and then relax this assumption and consider a queuing system with random arrivals in Section~\ref{sec:random}. \subsection{Gallager Tree Algorithm (GTA)} This algorithm was proposed by Gallager \cite{G:78} for the random access channel under the simplified collision model. The extension of this algorithm to our channel model mainly includes the probability-$p_t$ rule and an explicit assumption that the base station does not try to decode in the case of a collision. We describe the extended GTA as follows. The traffic in the channel is interpreted as a flow of collision resolution (CR) epochs. At the beginning of a CR epoch, each user uses the probability-$p_t$ rule to decide whether it should (or should not) transmit in that epoch. If none of the users choose to transmit, the slot remains idle and a new CR epoch starts from the following slot. If only one user chooses to transmit, then the message is assumed to be successfully decoded at the BS, and a new CR epoch begins from the following slot. But when a collision occurs, i.e., more than one user chooses to transmit in the current slot, the system enters a CR mode, and only the users that participated in the collision at the beginning of a CR epoch are allowed to transmit until the end of that CR epoch. The colliding users are then split into two groups via a fair random split, wherein each user has an equal probability of joining either group.
A CR epoch is finished when all the users who have initiated it and not been excluded (or \emph{pruned}) by the tree algorithm, obtain a slot to transmit their packets without collisions. We omit the detailed description of the algorithm for brevity and refer interested readers to \cite{G:78,MP:93}. \subsection{Orthogonal Network-Assisted Diversity Multiple Access (O-NDMA)} \label{subsec:ndma} The NDMA protocol was proposed by Tsatsanis {\em et al.} \cite{TZB:00} and relies on the use of time diversity through a repetition ARQ scheme to resolve collisions between users. At the beginning of each CR epoch, the transmission of each user will be determined by the probability-$p_t$ rule as in the GTA protocol. If none or only a single user chooses to transmit, then the next CR epoch starts from the following slot as before. However, when $k$ ($\ge 2$) users transmit, then all those users repeat their transmissions in the next $(k-1)$ slots. At the end of $k$ slots, the BS is assumed to be able to reliably decode the $k$ packets, and a new CR epoch begins from the next slot. On the other hand, in~\cite{ZST:02}, Zhang {\em et al.} proposed a new variant of NDMA which does not rely on time diversity to resolve/detect collisions. This variant, named B-NDMA, relies on a blind signal separation method utilizing a Vandermonde mixing matrix constructed via specially designed user retransmissions. In B-NDMA, the detection and resolution of a $k$-user collision require $(k+1)$ slots. However, in this paper, we assume the use of separate control channels for collision detection, which allows for a slightly more efficient version of the B-NDMA protocol, named orthogonal NDMA (O-NDMA), which requires only $k$ slots to resolve a $k$-user collision, without relying on temporal diversity.
The behavior of users in O-NDMA is the same as that in NDMA, with the only difference that in case of a $k$-user collision, user $i$ transmits its symbols scaled by $(w_i)^{\ell}=(e^{\frac{j2\pi i}{k}})^{\ell}$, where $i=1, \cdots, k$ and $j = \sqrt{-1}$, in the $\ell$-th slot after the initial collision. At the end of the $k^{th}$ slot, the BS utilizes the orthogonal structure constructed by $w_i$'s to decompose the joint decoding problem into $k$ single-user problems. For example, suppose that user 1 and user 2 have collided ($k=2$), and user $i$'s codeword is $\mathbf{x}_i$, for $i=1,2$. Then, the BS coordinates the users so that user 1 repeats $\mathbf{x}_1$ whereas user 2 transmits $-\mathbf{x}_2$, in the slot following the collision. To decode user 1, the BS calculates the sum of the received vectors in the two slots, while to decode user 2, it takes the difference (i.e., matched filtering). This way, the multi-user interference is removed, and single-user decoders can be utilized to recover both packets. It is worth noting that O-NDMA requires symbol-level synchronization to facilitate the interference cancellation described above. Hence, our results for O-NDMA can be interpreted as optimistic upper bounds on the performance of repetition based random access protocols. However, O-NDMA is still sub-optimal for two reasons. First, the BS might be able to decode\footnote{Multiple messages can be jointly decoded in a single transmission block, with an arbitrary small error probability, if a rate-tuple lies within the capacity region of the channel and a sufficiently large block length is used \cite{G:68}.} the messages of $k$ colliding users in less than $k$ time slots. Conversely, it is also possible that $k$ time slots are insufficient for the successful decoding of the $k$ packets. Thus, such a static strategy may result in a throughput loss. Second, O-NDMA is essentially {\bf a repetition based} collision resolution mechanism. 
Although this results in a low-complexity decoder at the BS, the throughput performance is highly sub-optimal, as shown rigorously in the sequel. A significant improvement in the throughput can be achieved by allowing for IR transmissions from the colliding users within the CR epoch, and using joint decoding, across ARQ rounds and users, at the base-station (as discussed next). \subsection{IR-ARQ Random Access} To overcome the disadvantages of the existing protocols, we propose a new IR-ARQ random access protocol operating as follows. Each user encodes an information message (packet) of $B_T$ bits using a codebook of length-$LT$ codewords, where $L$ is an integer denoting a deadline constraint on the transmission delay (i.e., a constraint on the maximum number of allowed ARQ rounds). Codewords are divided into $L$ sub-blocks of length $T$. At the beginning of each CR epoch, the users choose to transmit or not based on the probability-$p_t$ rule as before. Once a user chooses to transmit in a particular slot, it transmits its first $T$ symbols during that slot. On receiving signals at the end of a slot, the BS uses a joint decoder that decodes the received observations both across users and ARQ rounds. If the receiver successfully decodes {\bf all} the transmitted messages, it feeds back an ACK bit; otherwise, it returns a NACK signal. On receiving an ACK, the CR epoch is terminated and a new CR epoch starts from the next slot. Thus a CR epoch can be defined as the time between two successive ACKs from the receiver (we observe that this definition requires the BS to return an ACK message after an idle slot). On the other hand, if a NACK is received, each colliding user sends its second sub-block of $T$ codeword symbols in the next slot, while all the other users remain silent. The ACK/NACK rule applies in a similar manner, until the $L^{th}$ slot is reached (after $(L-1)$ consecutive NACKs). In this case, the receiver sends an ACK regardless of its decoding result. 
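A minimal simulation sketch of one CR epoch under this protocol is given below; the predicate \texttt{decode\_ok} is our own abstraction standing in for the joint decoder at the BS (it should return true when the $k$ colliding messages are decodable from $\ell$ received sub-blocks):

```python
import random

def ir_arq_epoch(K, p_t, L, decode_ok, rng=random):
    """Simulate one CR epoch of the IR-ARQ protocol; returns (slots_used, success).

    decode_ok(k, l): stand-in for the BS joint decoder, True if the k
    colliding messages are decodable from l received sub-blocks.
    """
    colliders = [u for u in range(K) if rng.random() < p_t]
    if len(colliders) <= 1:
        return 1, True              # idle slot or collision-free transmission: ACK
    k = len(colliders)
    for l in range(1, L + 1):       # round l: colliders send their l-th sub-block
        if decode_ok(k, l):
            return l, True          # all messages decoded, BS feeds back an ACK
    return L, False                 # deadline reached: forced ACK after L rounds, with error
```

For instance, with \texttt{decode\_ok(k, l)} returning true only when $\ell \ge k$, the epoch lengths mimic the repetition-based O-NDMA behavior, whereas an IR-ARQ decoder may terminate earlier, or be forced to stop at the deadline $L$.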
If a certain message is decoded after $\ell$ ARQ rounds, the effective coding rate for the corresponding user becomes $R/\ell$ bits per channel use (BPCU), where $R=(B_T/T)$ denotes the rate computed assuming only one transmission round. Finally, we note that, unlike O-NDMA, the IR-ARQ protocol requires only slot-level synchronization. \section{The Diversity-Multiplexing-Delay Tradeoff (DMDT)} \label{sec:dmdt} In this section, we analyze the DMDT of the proposed IR-ARQ protocol and contrast it with our two benchmark protocols under the assumption of fully-loaded queues. The ``fully-loaded'' assumption allows for analyzing the maximum achievable throughput without focusing on the stability and average delay issues, for the moment. \subsection{Definitions} We borrow the notion of DMDT from \cite{ECD:04}. This notion is a generalization of the Zheng-Tse diversity-multiplexing tradeoff (DMT), which characterizes the fundamental limits of fading channels in the high SNR regime~\cite{ZT:02}. The delay here refers to the {\bf maximum transmission delay} corresponding to our upper bound $L$ on the number of ARQ rounds (including the first one). In particular, we consider a family of ARQ protocols where the size of the information messages $B_T(\rho)$ depends on the operating SNR $\rho$. These protocols are based on a family of space-time codes $\{C_\rho\}$ with a first-round rate of $R(\rho) = B_T(\rho)/T$ and an overall block length $TL$. For this family of protocols, we define the first-round multiplexing gain $r$ and the {\bf effective} ARQ multiplexing gain $r_e$ as \begin{equation} r ~=~ \lim_{\rho \rightarrow \infty} ~\frac{R(\rho)}{\log_2 \rho} ~ \qquad \mbox{and} \qquad r_e ~\triangleq~ \lim_{\rho \rightarrow \infty}~ \frac{\eta_{FL}(\rho)}{ \log_2 \rho}.
\end{equation} Here $\eta_{FL}(\rho)$ is the average throughput of the ARQ protocol in the random access channel with Fully-Loaded (FL) queues, i.e., \begin{equation} \eta_{FL}(\rho) ~=~ \lim_{s \rightarrow \infty}~ \frac{b(s)}{sT}, \end{equation} where $s$ is the slot index and $b(s)$ is the total number of message bits transmitted up to slot $s$. Note that the message bits received in error at the BS are also counted in $b(s)$. The {\bf effective} ARQ diversity gain is defined as \begin{equation} d ~=~ - \lim_{\rho \to \infty} ~\frac{\log_2 P_e (\rho)}{\log_2 \rho}, \label{eqn:d_def} \end{equation} where $P_e(\rho)$ is the system error probability, which is defined as the probability that at least one of the messages is not correctly decoded by the BS. In the symmetric random access channel, the diversity gain obtained from \eqref{eqn:d_def} is the same as the diversity gain of an individual user, since \begin{equation} P_{e^{(i)}}(\rho) \le P_e(\rho) \le \sum_{j=1}^{K} P_{e^{(j)}}(\rho), \quad \forall i \in \{1, \cdots, K\} ~, \end{equation} \cite{TVZ:04} where $P_{e^{(i)}}(\rho)$ is the error probability of the $i^{th}$ user. In summary, the DMDT of a certain protocol characterizes the set of achievable tuples $(d,r_e,L)$ (here, we observe that our results are information theoretic in the sense that we assume the use of random Gaussian codebooks~\cite{ZT:02}). In our analysis, we will make use of the results of Tse {\em et al.} on the diversity-multiplexing tradeoff of {\bf coordinated} multiple access channels \cite{TVZ:04}, where the access mechanism is controlled by the base-station. In the sequel, we denote the diversity gain of the coordinated multiple access channel with $k$ users as $d_k^{MAC}(r)$, which is given by \begin{equation} \label{eqn:dmac} d_{k}^{MAC}(r) ~=~ \left\{ \begin{array}{ll} d^{M,N}(r), & r \le \min\{M, \frac{N}{k+1}\} \\ d^{kM,N}(kr), & r \ge \min\{M,\frac{N}{k+1}\} \end{array} \right. 
~, \end{equation} where $d^{M,N}(r)$ is the diversity gain of the point-to-point channel with $M$ transmit and $N$ receive antennas, and multiplexing gain $r$, as given in \cite{ZT:02}. In the ARQ setting, we denote the event that a NACK is transmitted in the $\ell^{th}$ ARQ round, when $k$ users are transmitting simultaneously, by $\bar{\mathcal{A}}_{k}(\ell)$, for $\ell = 1, \cdots, L-1$, and the error event in the $L^{th}$ round by $\bar{\mathcal{A}}_{k}(L)$. We also denote the complement of $\bar{\mathcal{A}}_{k}(\ell)$ by $\mathcal{A}_{k}(\ell)$. We define $\alpha_k(\ell) ~\triangleq~ \textrm{Pr} \left( \bar{\mathcal{A}}_{k}(1), \cdots, \bar{\mathcal{A}}_{k}(\ell-1), \mathcal{A}_{k}(\ell) \right)$ and $\beta_k(\ell) ~\triangleq~ \textrm{Pr} \left(\bar{\mathcal{A}}_{k}(1),\cdots, \bar{\mathcal{A}}_{k}(\ell) \right)$ for $\ell=1, \cdots, L$, where, by definition, we let $\beta_k(0) = 1$, for $k=1, \cdots, K$. Note that $\alpha_k(\ell)$ is the probability that the length of a CR epoch is $\ell$ (slots), given that $k$ users have collided initially. Following the approach of \cite{TZB:00}, we classify the CR epochs from the viewpoint of a particular user (say user 1) into either {\em relevant} or {\em irrelevant} epochs, depending on whether a packet of that particular user is being transmitted in that CR epoch or not. The lengths of the relevant and the irrelevant epochs of user 1 are random variables, which are denoted by $U$ and $V$, respectively. For notational convenience, we denote the pmf of a binomial random variable with $K$ trials and success probability $p$ by $\mathcal{B}(K,k,p) ~\triangleq ~ {K \choose k} ~p^{k} (1-p)^{K-k} $. \subsection{Main Results} First, we characterize the DMT for GTA (note that we do not have a deadline in this protocol, and hence, no limit on the maximum transmission delay).
\begin{proposition} \label{thm:dmdt_gta} The DMT for GTA with a given $p_t \in (0,1]$ is \begin{equation} \label{eqn:thm_tree1} d^{GTA}(r_e) ~=~ d_1^{MAC}\left( \frac{\sum_{k=0}^K \mathcal{B}(K,k,p_t) \mathcal{X}_k} {\sum_{k=0}^K \mathcal{B}(K,k,p_t) J_k} ~r_e \right), \end{equation} where $\mathcal{X}_k$ and $J_k$ can be found by the following recursions: \begin{equation} \mathcal{X}_k ~=~ 1 + \mathcal{B}(k,0,0.5)\mathcal{X}_k + \mathcal{B}(k,1,0.5)(1+ \mathcal{X}_{k-1}) + \sum_{i=2}^k \mathcal{B}(k,i,0.5)\mathcal{X}_{i} ~, \label{eqn:thm_tree2} \end{equation} \begin{equation} J_k ~=~ \mathcal{B}(k,0,0.5)J_k + \mathcal{B}(k,1,0.5)(1+J_{k-1}) + \sum_{i=2}^k \mathcal{B}(k,i,0.5)J_{i} ~, \label{eqn:thm_tree3} \end{equation} for $k=2,3,\cdots$, with $\mathcal{X}_0 = \mathcal{X}_1 = 1$ and $J_0 =0$, $J_1 = 1$. \end{proposition} \begin{proof} Noticing that the CR epoch termination event is a renewal event under the fully-loaded assumption, the result can be easily derived by extending the recursion analysis in \cite{MP:93} and using the renewal-reward theorem \cite{G:96}. The details are omitted due to space limitations. \end{proof} Since the GTA protocol is inspired by the simplified collision model, the main idea is to assign a single slot exclusively for transmission of each colliding user (that was not pruned by the algorithm). The resulting DMT, therefore, is given in terms of a single-user performance, i.e., $d_1^{MAC}(.)$. The main drawback of the algorithm is the relatively large number of slots needed to resolve each collision, which translates into a loss in the effective multiplexing gain, i.e., the argument of $d_1^{MAC}(.)$ in (\ref{eqn:thm_tree1}). It is now easy to see that GTA cannot achieve the full effective multiplexing gain of the multiple access channel, i.e., $\min\{KM,N\}$. An example highlighting this fact will be provided later in this section.
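The recursions \eqref{eqn:thm_tree2} and \eqref{eqn:thm_tree3} are easy to solve numerically. The sketch below is our own illustration: since $\mathcal{X}_k$ and $J_k$ appear on both sides (through the $i=0$ and $i=k$ terms), we collect them on the left before dividing.

```python
from math import comb

def binom_pmf(n, k, p):
    """Binomial pmf B(n, k, p) used throughout the analysis."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def gta_recursions(k_max):
    """Solve the recursions of Proposition 1 for X_k and J_k, k = 0..k_max.
    X_k (resp. J_k) appears on the right-hand side via the i = 0 and i = k
    terms, so we move those terms to the left and divide."""
    X = [1.0, 1.0] + [0.0] * (k_max - 1)   # X_0 = X_1 = 1
    J = [0.0, 1.0] + [0.0] * (k_max - 1)   # J_0 = 0, J_1 = 1
    for k in range(2, k_max + 1):
        b = [binom_pmf(k, i, 0.5) for i in range(k + 1)]
        rhs_x = 1 + b[1] * (1 + X[k - 1]) + sum(b[i] * X[i] for i in range(2, k))
        rhs_j = b[1] * (1 + J[k - 1]) + sum(b[i] * J[i] for i in range(2, k))
        X[k] = rhs_x / (1 - b[0] - b[k])   # b[0] = b[k] = 2**(-k)
        J[k] = rhs_j / (1 - b[0] - b[k])
    return X, J
```

For $K=2$ this gives $\mathcal{X}_2=4$ and $J_2=2$, reproducing $\sum_{k} \mathcal{B}(2,k,p_t)\mathcal{X}_k = 1+3p_t^2$ and $\sum_{k} \mathcal{B}(2,k,p_t)J_k = 2p_t$, the values used in the two-user example of Section~\ref{dmdt-examples}.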
On the other hand, the DMT in \eqref{eqn:thm_tree1} reveals the performance dependence on $p_t$ (and $r$), which implies the possibility of maximizing the diversity gain by choosing the appropriate values $p_t^*$ and $r^*$ for each $r_e \in [0, \min\{KM,{N}\})$. At the moment, we do not have a general analytical solution for this optimization problem. However, the solution for the special case of two users is obtained in Section~\ref{dmdt-examples}. Next, we characterize the optimal DMT for the O-NDMA protocol (again, we do not have a delay parameter in the tradeoff, since the number of ARQ rounds is {\bf always} equal to the number of colliding users). \begin{proposition} \label{thm:dmdt_ndma} The \emph{optimal} DMT for O-NDMA is, \begin{equation} d^{ONDMA}(r_e) ~=~ d_1^{MAC}\left(r_e\right). \label{eqn:ondma_d_opt} \end{equation} \end{proposition} \begin{proof} The DMT for O-NDMA with a given $p_t \in (0,1]$ and $r$ is found as \begin{equation} \label{eqn:dmt_ndma} d^{ONDMA}(r_e) ~=~ d_1^{MAC}(r) \quad \mbox{where} \quad r ~=~\frac{Kp_t+(1-p_t)^K}{Kp_t}r_e, \end{equation} utilizing the average throughput results in \cite{TZB:00}, and noting that the average SNR of each single-user decoder is still $\rho$. It is then easy to find that the optimal values are $(r^*,p_t^*) = (r_e,1)$, which yields \eqref{eqn:ondma_d_opt}. We omit the details due to space limitations. \end{proof} The matched-filter-like structure utilizing the orthogonality of transmissions over different slots allows the O-NDMA protocol to achieve the single-user performance, as we see from \eqref{eqn:dmt_ndma}. Furthermore, $p_t^*=1$ ensures that the throughput is maximized, and the optimal DMT is given by \eqref{eqn:ondma_d_opt}. By comparing the expressions in \eqref{eqn:thm_tree1} and \eqref{eqn:ondma_d_opt}, we see that the O-NDMA protocol achieves a larger diversity gain than the GTA protocol for any $r_e$ less than $\min\{M,N\}$.
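To make the matched-filter structure concrete, the toy sketch below resolves a noiseless $k$-user collision with unit channel gains (our own idealized setting, not the fading model of the paper); indexing users as $i=0,\dots,k-1$ yields the same $k$-th roots of unity as in Section~\ref{subsec:ndma}.

```python
from cmath import exp
from math import pi

def ondma_resolve(x):
    """Resolve a k-user O-NDMA collision in an idealized noiseless setting
    with unit channel gains (a toy illustration only).

    In slot l = 0..k-1, user i sends w_i**l * x[i], where w_i = exp(2j*pi*i/k)
    is a k-th root of unity; the BS then matched-filters across the k slots.
    """
    k = len(x)
    w = [exp(2j * pi * i / k) for i in range(k)]
    y = [sum(w[i] ** l * x[i] for i in range(k)) for l in range(k)]  # slot outputs
    # orthogonality: sum_l conj(w_m)**l * w_i**l equals k if i == m, else 0
    return [sum(w[m].conjugate() ** l * y[l] for l in range(k)) / k for m in range(k)]
```

For $k=2$ the weights are $\{+1,-1\}$, so the matched filter reduces to the sum and difference operations described in Section~\ref{subsec:ndma}.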
Finally, the optimal DMDT of the IR-ARQ random access protocol is characterized in the following theorem. \begin{theorem} \label{thm:dmdt} The \emph{optimal} DMDT for the IR-ARQ protocol is, \begin{equation} \label{eqn:d_opt} d^{IR}(r_e,L) ~=~ d_K^{MAC}\left(\frac{r_e}{KL}\right). \end{equation} \end{theorem} \begin{proof} (sketch) First, we assume an asymptotically large block length $T\rightarrow\infty$ to allow our error correction (and detection) scheme to operate arbitrarily close to the channel fundamental limits. An application of the renewal-reward theorem \cite{G:96} gives \begin{equation} \label{eqn:tput} \eta_{FL} ~=~ \frac{p_tKR}{1+\sum_{k=1}^K \mathcal{B}(K,k,p_t) \sum_{ \ell=1}^{L-1} \beta_k(\ell)} ~. \end{equation} In addition, given that joint typical-set decoders \cite{CT:01}, which have an inherent ability to detect errors, are used for the channel output over slots $1$ to $\ell$ in ARQ round $\ell$, extending the results in \cite{ECD:04} and \cite{TVZ:04}, the probability of error $P_e$ is upper-bounded by, \begin{equation} \label{eqn:petmp} P_e ~\le~ \sum_{k=1}^K \mathcal{B}(K,k,p_t) \beta_k(L). 
\end{equation} Noting that in the high SNR regime $\beta_k(\ell)$ approaches \begin{equation} \label{eqn:beta1} \lim_{\rho \to \infty} ~\beta_k(\rho, \ell) ~=~ \mathbf{1}\left(r >\min\left\{\ell M, \frac{\ell N}{k}\right\}\right) ~\triangleq~ \left\{ \begin{array}{ll} 0, & r < \min\{\ell M, \frac{\ell N}{k}\} \\ 1, & r > \min\{\ell M,\frac{\ell N}{k}\} \end{array} \right., \end{equation} we find the DMDT with a given $p_t \in (0, 1]$ as \begin{equation} \label{eqn:dmdt} d^{IR}(r_e,L) ~=~ d_K^{MAC} \left(\frac{r}{L}\right), \end{equation} where $r$ can be obtained from $r_e$ using the relation (for $0 \le r \le \min\{M,N\}$), \begin{equation} \label{eqn:r_e} r_e ~=~ \frac{p_t K r} {1 + \sum_{k=1}^K \mathcal{B}(K,k,p_t) \sum_{l=1}^{ L-1} \mathbf{1}\left( r > \min\{lM, \frac{lN}{k} \}\right)} ~, \end{equation} from (\ref{eqn:tput}--\ref{eqn:beta1}), and the results in \cite{ECD:04,TVZ:04}. Finally, we find the optimal values $(r^*,p_t^*)=(\frac{r_e}{K}, 1)$, which gives \eqref{eqn:d_opt}. A detailed proof is provided in Appendix \ref{app:dmdt}. \end{proof} Two remarks are now in order. First, we elaborate on the intuitive justification for the optimal values $(r^*,p_t^*)=(\frac{r_e}{K},1)$ for the IR-ARQ protocol. In the asymptotic case with $\rho \rightarrow \infty$, the error probability is dominated by the worst case $K$-user collision for any $p_t \in (0,1]$, which does not depend on $\rho$ by definition. This implies that choosing $p_t=1$ will maximize the average throughput, without penalizing the asymptotic behavior of the error probability. Now with $p_t=1$, choosing $r^*=\frac{r_e}{K}<\min\{M,\frac{N}{K}\}$ will result in an effective multiplexing gain equal to $r_e$ and will minimize the number of rounds needed to decode the colliding messages, since each user is transmitting at a small rate. 
Furthermore, it is clear that with this choice of $r^*$, we can achieve any desired effective multiplexing gain less than $\min\{KM,N\}$ (the degrees of freedom in the coordinated multiple access channel). Next, comparing Propositions \ref{thm:dmdt_gta} and \ref{thm:dmdt_ndma} with Theorem \ref{thm:dmdt}, it is straightforward to verify that the DMT of the IR-ARQ protocol is {\bf always} superior to that of the GTA and O-NDMA protocols. This advantage of IR-ARQ is a manifestation of the {\bf ARQ diversity} resulting from the IR transmission and joint decoding. More specifically, the ARQ diversity {\bf scales down} the effective multiplexing gain on the right-hand side of (\ref{eqn:d_opt}), and hence results in an increased diversity advantage (since $d_K^{MAC}(.)$ is a decreasing function of its argument). The O-NDMA protocol does not allow for {\bf efficiently} exploiting the ARQ diversity, due to the sub-optimality of repetition-based ARQ. \subsection{Examples}\label{dmdt-examples} We numerically illustrate the gains offered by the IR-ARQ protocol, as compared with the GTA and O-NDMA protocols, in the following two examples. \subsubsection{Two-User Scalar Random Access Channels} We consider the single-antenna $2$-user random access channel, i.e., $M=N=1$ and $K=2$. Substituting these parameters in Proposition \ref{thm:dmdt_gta}, we obtain the DMT for the GTA protocol as $d^{GTA}(r_e) ~=~ d_1^{MAC}\left( \frac{1+3p_t^2}{2p_t}r_e \right) ~=~ 1-\left(\frac{1+3p_t^2} {2p_t}\right)r_e $. To maximize the effective multiplexing gain that achieves a nonzero diversity gain, we need to choose $p_t = \frac{1}{\sqrt{3}}$, which yields the optimal DMT for GTA as $d^{GTA}(r_e) = 1-\sqrt{3}r_e, \ 0 \le r_e < \frac{1}{\sqrt{3}}$. The optimal DMTs for O-NDMA and IR-ARQ are obtained from Proposition \ref{thm:dmdt_ndma} and Theorem \ref{thm:dmdt}.
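These tradeoff curves are straightforward to evaluate numerically. The sketch below (ours) implements the piecewise-linear point-to-point DMT $d^{M,N}(r)$ of \cite{ZT:02}, connecting the points $(k,(M-k)(N-k))$, together with the MAC tradeoff \eqref{eqn:dmac}, and then specializes to the scalar two-user example:

```python
def dmt_p2p(M, N, r):
    """Point-to-point DMT d^{M,N}(r): piecewise-linear interpolation of
    the points (k, (M-k)(N-k)) for integer k = 0..min(M,N)."""
    if r >= min(M, N):
        return 0.0
    k = int(r)  # segment [k, k+1]
    d0, d1 = (M - k) * (N - k), (M - k - 1) * (N - k - 1)
    return d0 + (d1 - d0) * (r - k)

def dmt_mac(k, M, N, r):
    """Coordinated MAC DMT d_k^MAC(r), as defined in the text."""
    if r <= min(M, N / (k + 1)):
        return dmt_p2p(M, N, r)
    return dmt_p2p(k * M, N, k * r)

# Scalar two-user example (M = N = 1, K = 2):
d_gta   = lambda re_: max(0.0, 1 - 3 ** 0.5 * re_)        # optimal p_t = 1/sqrt(3)
d_ondma = lambda re_: dmt_mac(1, 1, 1, re_)               # single-user curve 1 - re
d_ir    = lambda re_, L: dmt_mac(2, 1, 1, re_ / (2 * L))  # Theorem 1 with K = 2
```

Evaluating these curves over $r_e \in [0,1)$ reproduces the ordering discussed above: $d^{IR} > d^{ONDMA} > d^{GTA}$ at every positive effective multiplexing gain.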
\figref{DMDT_Comp} compares the tradeoffs of the three protocols, where the IR-ARQ protocol is shown to dominate our two benchmarks for both $L=1$ and $L=2$. Even though O-NDMA achieves the nominal single-user DMT {\bf without multi-user interference}, i.e., $d(r_e)=1-r_e, \forall r_e < 1$, it is still worse than IR-ARQ, since it wastes slots to facilitate single-user decoding and relies on repetition ARQ. In addition, as $L$ increases from 1 to 2, the DMDT of IR-ARQ improves, as expected. \subsubsection{Two-User Vector Random Access Channels} We consider a $2$-user vector random access channel with $M=1$ and $N=2$. By allowing multiple antennas at the BS, the total number of degrees of freedom of the system is increased by a factor of $2$, as compared with the scalar channel in the previous example. The tradeoffs achieved by the three protocols in this scenario are shown in \figref{DMDT_Comp2}. First, we observe that the three protocols achieve an increased diversity gain, for a given $r_e$, when compared with the scalar channel in \figref{DMDT_Comp}. However, the full effective multiplexing gain, $r_e=2$, is not achieved by the GTA and O-NDMA protocols, since these two protocols exclude the possibility of first-round decoding when a collision occurs. The IR-ARQ protocol, on the other hand, achieves $r_e=2$, and its DMDT further improves as $L$ increases. \section{Random Arrivals} \label{sec:random} In this section, we relax the ``fully loaded'' assumption adopted in Section \ref{sec:dmdt}. In addition to the traditional measures of stability region and average delay commonly used in this setting, we also consider the probability of error.
In particular, for the proposed IR-ARQ protocol, we will show, through numerical results, that the choice of the transmission-delay constraint $L$ determines an \textbf{interesting tradeoff} between the average delay and the error probability: for typical SNR values, a larger $L$ leads to an increase in the average delay along with a decrease in the error probability. We consider infinite-length queues at the users, fed by randomly-arriving packets of a fixed length of $B_A$ information bits. For simplicity, we assume that $B_T = B_A = B$, i.e., the arrival packet size and the transmission packet size are the same. Thus the first-round transmission rate $R$ is equal to the arrival rate $R_A = (B/T)$. To emphasize that $R$ is a system parameter determined by the arrival packet size, we denote the first-round multiplexing gain $r$ by $r_A$ in this section, and call it \emph{the arrival multiplexing gain}. The {\bf packet arrival} rate of user $i$ is $\lambda_i = \lambda/K$ packets/slot, where $\lambda$ denotes the total packet arrival rate; arrivals are assumed to be independent across users. \subsection{Stability and Average Delay} We use the following notion of stability~\cite{DST:03}: let $\mathbf{g}(m) \triangleq (g_1(m), \cdots, g_K(m))$ be the vector of the backlogs at the beginning of CR epoch $m$. Then, queue $i$ of the system is stable if $\lim_{m \to \infty} ~ \textrm{Pr} \left( g_i(m) < \bar{g} \right) = F(\bar{g})$ and $\lim_{\bar{g} \to \infty} ~F(\bar{g}) = 1$. Furthermore, we say that the system is stable if all the $K$ queues in the system are stable. The stability regions of GTA and O-NDMA can be found using the techniques in \cite{MP:93,DST:03} as \begin{equation} \lambda ~<~ \frac{\sum_{k=0}^K \mathcal{B}(K,k,p_t) J_k} {\sum_{k=0}^K \mathcal{B}(K,k,p_t) \mathcal{X}_k} ~, \qquad \mbox{and} \qquad \lambda ~<~ \frac{Kp_t}{Kp_t+(1-p_t)^K}.
\end{equation} From the literature (\cite{NMT:05} and references therein), we find only limited analytical results on the average delay of slotted ALOHA channels. In this paper, we present only numerical results for the average delay of the GTA and O-NDMA protocols, and provide an approximate delay analysis for the proposed IR-ARQ protocol. The average delay of the IR-ARQ scheme can be approximated by using the analysis of the M/G/1 queue with vacations \cite{BG:92}, following the approach of Tsatsanis {\em et al.}~\cite{TZB:00}. This analysis yields only an approximation of the average delay, since the CR epoch lengths of the IR-ARQ scheme are dependent on the traffic load (and hence are not independent and identically distributed (i.i.d.) as needed for the result to hold). However, as we will see, the i.i.d. property holds in the limit of $\rho \rightarrow \infty$, and hence, our result becomes asymptotically accurate. We also note that as $K$ increases, this approximation becomes progressively more accurate \cite{TZB:00}. We summarize our results in the following theorem. \begin{theorem} \label{thm:stab} Assuming that $\exists ~ \ell < \infty \mbox{ with } \ell \in \{1, \cdots, L\}, \mbox{ such that } \alpha_K(\ell)>0$, the necessary and sufficient condition for the stability of the IR-ARQ protocol is (in packets/slot) \begin{equation} \label{eqn:stab_dom} \lambda ~<~ \frac{\eta_{FL}}{R} ~=~ \frac{p_tK}{1+\sum_{k=1}^K \mathcal{B}(K,k,p_t) \sum_{\ell=1}^{L-1} \beta_k(\ell)} ~. 
\end{equation} For Poisson arrivals, when $\lambda$ satisfies \eqref{eqn:stab_dom}, the average delay is \emph{approximately} given by (in slots) \begin{equation} \label{eqn:delay} D ~\approx~ {\mathbb E}[U] + \left(\frac{1}{p_t}-1\right) {\mathbb E}[V] +\frac{\lambda \left( {\mathbb E}[U^2] + \frac{(2-p_t)(1-p_t)}{p_t^2} {\mathbb E}[V^2] + 2\left( \frac{1}{p_t} -1 \right) {\mathbb E}[U] {\mathbb E}[V] \right)}{2\left( K- \lambda \left( {\mathbb E}[U]+ \left( \frac{1}{p_t}-1 \right) {\mathbb E}[V] \right) \right)} + \frac{{\mathbb E}[V^2]} {2{\mathbb E}[V]} ~, \end{equation} where the expected values are evaluated as \small \begin{equation} {\mathbb E} [U] = 1 + \sum_{k=1}^{K} \mathcal{B}(K-1,k-1,p) \sum_{\ell=1}^{L-1} \beta_k(\ell) \quad ; \quad {\mathbb E} [U^2] = 1 + \sum_{k=1}^{K} \mathcal{B}(K-1,k-1,p) \sum_{\ell=1}^{L-1} (2\ell +1)\beta_k(\ell),\nonumber \end{equation} \begin{equation} {\mathbb E}[V] = 1 + \sum_{k=1}^{K-1} \mathcal{B}(K-1,k,p) \sum_{\ell=1}^{L-1} \beta_k(\ell) \quad ; \quad {\mathbb E}[V^2] = 1 +\sum_{k=1}^{K-1} \mathcal{B}(K-1,k,p) \sum_{\ell =1}^{L-1} (2\ell +1)\beta_k(\ell), \nonumber \end{equation} \normalsize \begin{equation} \label{eqn:p_eq} \begin{array}{lc} \mbox{ and $p \in (0,1]$ satisfies } \qquad \qquad & Kp ~=~ \lambda \left[ 1 + \sum_{k=1}^K \mathcal{B}(K,k,p) \sum_{\ell=1}^{L-1} \beta_k (\ell) \right]. \end{array} \end{equation} Moreover, the delay expression in \eqref{eqn:delay} holds with probability 1 if the sequences of relevant and irrelevant epoch lengths are each i.i.d., and $U$ and $V$ are mutually independent. \end{theorem} \begin{proof} See Appendix \ref{app:stab}. \end{proof} A few remarks are now in order. First, the assumption in Theorem \ref{thm:stab} always holds when $L$ is finite, since the length of any CR epoch is bounded by $L$. If $L = \infty$, this assumption requires the existence of a non-zero probability that the length of an epoch is finite.
Second, as $\rho\to\infty$, the stability region \eqref{eqn:stab_dom} approaches \begin{equation} \label{eqn:stab_hsnr} \lambda ~<~ \frac{p_tK}{1+\sum_{k=1}^K \mathcal{B}(K,k,p_t) \sum_{\ell =1}^{L-1} \mathbf{1}\left(r_A>\min\left\{\ell M,\frac{\ell N}{k} \right\}\right)} . \end{equation} To achieve the maximum stability region, we need to maximize the right-hand side of \eqref{eqn:stab_hsnr} over $p_t$. At the moment, we do not have a general solution for this problem. Thus, we present results only for one special case: $r_A<\min\{M, \frac{N}{K}\}$. In this case, the stability region is $\lambda < p_t K$, and the maximum stability region is thus given by $\lambda < K$ for the optimal choice of $p_t = 1$. This is a remarkable improvement over the O-NDMA protocol, whose maximum stability region is only $\lambda < 1$, for any $r_A$. Finally, the diversity gain with random arrivals can be readily obtained from the results in the previous section. The only difference is that, unlike the fully-loaded case, one cannot optimize over $r_A$ in this scenario. In summary, we find that the GTA, O-NDMA and IR-ARQ protocols achieve the diversity gains $d^{GTA}(r_A) ~=~ d^{ONDMA}(r_A) ~=~ d_1^{MAC}(r_A)$ and $d^{IR}(r_A) ~=~ d_K^{MAC}\left(\frac{r_A}{L}\right)$, respectively. \subsection{Examples} \subsubsection{Two-User Scalar Random Access Channels} Here, we consider the random access channel with $M=N=1$. For ease of analysis, we assume that $L \ge K=2$ for the IR-ARQ scheme. The stability regions of the different random access protocols with $\rho \rightarrow \infty$ are summarized in Table \ref{tbl:stab_ex}. In addition, the error probability, diversity gain and average delay are shown in \figref{num_snr_pe1}, \figref{DMT_Stab} and \figref{num_lambda_D1}, respectively. Here, the stability region and diversity gain of the three protocols, and the average delay of the IR-ARQ protocol with $\rho \to \infty$, are obtained analytically.
However, the average delay of the GTA and O-NDMA protocols, and the average delay of the IR-ARQ scheme with $\rho < \infty$, are obtained through numerical simulations. In these simulations, we use $R = r_A \log (1+\rho)$ with $r_A=0.45$, and $p_t=1$ for the IR-ARQ and O-NDMA protocols, while $p_t=\frac{1}{\sqrt{3}}$ for the GTA protocol. It is assumed that a transmission results in errors if and only if the channel is in outage~\cite{CT:01}, which is a valid assumption if $T$ is sufficiently large. In addition, for the IR-ARQ protocol, it is assumed that the errors in the $\ell^{th}$ round, where $\ell<L$, are always detected. We also note that when $r_A<0.5$, the average delay expression for the IR-ARQ scheme, evaluated from Theorem~\ref{thm:stab}, holds with probability $1$, and is given by (when $p_t=1$) $D = 1.5 + \frac{\lambda}{2(2-\lambda)}$. Table \ref{tbl:stab_ex} and \figref{DMT_Stab} show that both the stability region and the diversity gain of the IR-ARQ protocol are the largest. Next, we focus on the delay and the error probability of IR-ARQ for different values of $L$ and $\rho$, reported in \figref{num_snr_pe1} and \figref{num_lambda_D1}. We observe that the delay approaches the asymptotic result with $\rho=\infty$, and that the difference between the delays of IR-ARQ with $L=2$ and $L=4$ decreases as $\rho$ increases, which agrees with the analytical results. Furthermore, \figref{num_snr_pe1} and \figref{num_lambda_D1} reveal an important insight into the relation between the performance of IR-ARQ and the transmission-delay constraint $L$: a tradeoff between average delay and error probability emerges. These figures suggest that for certain finite values of $\rho$, a large $L$ achieves a small error probability, at the expense of a large average delay and a small stability region. Therefore, depending on quality-of-service (QoS) requirements, $L$ can be adjusted to achieve the best performance.
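For reference, the delay approximation \eqref{eqn:delay} is simple to evaluate once the moments of $U$ and $V$ are available; the helper below (our own) collapses to the closed form $D = 1.5 + \frac{\lambda}{2(2-\lambda)}$ in the high-SNR case above, where all four moments equal $1$.

```python
def ir_arq_delay(K, p_t, lam, EU, EU2, EV, EV2):
    """Evaluate the approximate average delay of Theorem 2 (in slots),
    given the total arrival rate lam and the first two moments of the
    relevant (U) and irrelevant (V) epoch lengths."""
    q = 1.0 / p_t - 1.0
    numer = lam * (EU2 + (2 - p_t) * (1 - p_t) / p_t ** 2 * EV2 + 2 * q * EU * EV)
    denom = 2.0 * (K - lam * (EU + q * EV))
    return EU + q * EV + numer / denom + EV2 / (2.0 * EV)

# High-SNR scalar example (K = 2, p_t = 1, r_A < 0.5): every epoch lasts
# one slot, so EU = EU2 = EV = EV2 = 1 and D = 1.5 + lam / (2 * (2 - lam)).
```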
\subsubsection{Two-User Vector Random Access Channels} Here, we consider the $2$-user random access protocols with $M=1$ and $N=2$ in the high SNR regime ($\rho \rightarrow \infty$). We first note that the stability region and delay of the GTA and O-NDMA protocols do not differ from the scalar case; only the diversity gain changes in this multiple-antenna setting. For the IR-ARQ protocol, on the other hand, the average delay is given by $D = 1.5 + \frac{\lambda}{2(2-\lambda)}$ for any $r_A \in [0,1)$, and the stability region is given by $\lambda < 2$ for $0 \le r_A < 1$, with $p_t=1$. Comparing the stability region of the vector IR-ARQ protocol with that of the scalar IR-ARQ protocol, we find that the vector IR-ARQ achieves a larger stability region, especially when $r_A >0.5$. Finally, \figref{DMT_Stab2} shows the diversity gain achieved by the different random access protocols. As expected, the IR-ARQ protocol achieves the best diversity gain, which improves as $L$ increases. \section{Conclusions} \label{sec:conc} We have proposed a new wireless random access protocol which jointly considers the effects of collisions, multi-path fading, and channel noise. The proposed protocol relies on incremental redundancy transmission and joint decoding to resolve collisions and combat multi-path fading. This approach represents a marked departure from traditional collision resolution algorithms and exhibits significant performance gains, as compared with two benchmarks corresponding to the state of the art in random access protocols, namely GTA and O-NDMA. It is interesting to observe that, in order to fully exploit the benefits of the proposed IR-ARQ protocol, all the users with non-empty queues must transmit with probability one, when given the opportunity, and should use a small transmission rate.
Finally, we have identified the tradeoff between average delay and error probability exhibited by the IR-ARQ protocol for certain SNRs, and have shown that this tradeoff can be controlled by adjusting the maximum number of ARQ rounds. \appendices \section{Proof of Theorem~\ref{thm:dmdt}} \label{app:dmdt} To find the long-term average throughput of IR-ARQ, we first focus on the distribution and the expected value of the relevant and the irrelevant epochs for user 1, $U$ and $V$. The probability mass functions (pmfs) of $U$ and $V$ are \begin{eqnarray} \textrm{Pr} (U = \ell) &=& \sum_{k=1}^{K} \mathcal{B}(K-1,k-1,p_t) \alpha_k(\ell), \quad (\ell=1, \cdots, L), \label{eqn:P_U} \\ \textrm{Pr}(V = \ell) &=& \left\{ \begin{array}{ll} \sum_{k=1}^{K-1} \mathcal{B}(K-1,k,p_t) \alpha_k(1) ~+~ (1-p_t)^{K-1}, & \ell=1, \\ \sum_{k=1}^{K-1} \mathcal{B}(K-1,k,p_t) \alpha_k(\ell), & \ell=2, \cdots, L \end{array} \right. \label{eqn:P_V} \end{eqnarray} We introduce the relation shown in \cite{ECD:04} for deriving the expected values of $U$ and $V$: \begin{equation} \label{eqn:alpha_beta} \sum_{\ell=1}^L \left[ \sum_{i=1}^{\ell} a_i \right]\alpha_k(\ell) ~=~ \sum_{\ell=1}^L a_{\ell} \beta_k(\ell-1), \end{equation} for any $(a_1, \cdots, a_L) \in \mathbb{R}^L$. Using this relation, it is straightforward to get \small \begin{equation} {\mathbb E}[U] ~=~ \sum_{k=1}^{K} \mathcal{B}(K-1,k-1,p_t) \sum_{\ell=1}^{L} \beta_k(\ell-1) \quad; \quad {\mathbb E}[V] ~=~ (1-p_t)^{K-1} + \sum_{k=1}^{K-1} \mathcal{B}(K-1,k,p_t) \sum_{\ell=1}^{L} \beta_k(\ell-1). \label{eqn:E_V} \end{equation} \normalsize Now, we are ready to calculate the long-term average throughput \eqref{eqn:tput} and the upper-bound of the error probability $P_e$ given in \eqref{eqn:petmp} for the IR-ARQ scheme as follows. First, we prove \eqref{eqn:tput} using renewal theory \cite{G:96}.
Denoting the average throughput of user 1 by $\eta_1$, the average throughput of the symmetric system is given by $\eta_{FL} ~=~ K \eta_1 ~$. Under the fully-loaded assumption, the event that a CR epoch terminates is a renewal event. We associate a random reward $\mathcal{R}$ with the occurrence of the renewal event; $\mathcal{R} = R$ BPCU if the CR epoch is a relevant epoch for user 1, and $\mathcal{R} = 0$ otherwise. Then, the renewal-reward theorem \cite{G:96} with \eqref{eqn:E_V} gives, \begin{equation} \label{eqn:eta} \eta_1 ~=~ \lim_{s \to \infty} ~\frac{b_1(s)}{sT} ~=~ \frac{{\mathbb E}[\mathcal{R}]}{{\mathbb E}[\mathcal{X}]} ~=~ \frac{p_t \cdot R + (1-p_t) \cdot 0}{p_t \cdot {\mathbb E}[U] + (1-p_t) \cdot {\mathbb E}[V]} ~=~ \frac{p_t R}{1+\sum_{k=1}^K \mathcal{B}(K,k,p_t) \sum_{\ell=1}^{L-1} \beta_k(\ell)}. \end{equation} Since $\eta_{FL} = K\eta_1$, we obtain $\eta_{FL}$ as given in \eqref{eqn:tput}. Next, we prove \eqref{eqn:petmp}. An error occurs in the IR-ARQ system in two different cases: (i) when a decoding failure is not detected at the BS and an ACK is fed back, or (ii) when decoding fails at round $L$. Let $E_k(\ell)$ denote the event that the decoder makes an error with $\ell$ received blocks when $k$ users have collided in the first round. Then, we can upper-bound $P_e$ as \cite{ECD:04} \begin{eqnarray} \nonumber P_e &=& \sum_{k=1}^K \mathcal{B}(K,k,p_t) \sum_{\ell=1}^L \textrm{Pr}( E_k(\ell),\mbox{an epoch length when $k$ users have collided}=\ell) \\ &\le & \sum_{k=1}^K \mathcal{B}(K,k,p_t) ~\textrm{Pr}(E_k(L),\bar{\mathcal{A}}_1,\cdots, \bar{\mathcal{A}}_{L-1}) + \epsilon ~=~ \sum_{k=1}^K \mathcal{B}(K,k,p_t) \beta_k(L) + \epsilon, \label{eqn:p_e} \end{eqnarray} where $\epsilon \to 0$ as $T \to \infty$. The intuition behind this upper bound is that the undetected error probability approaches zero as $T \to \infty$ for the joint typical-set decoder, and hence the error probability is dominated by the information outage probability.
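As a numerical sanity check on this derivation, the following sketch evaluates both forms of the mean renewal-epoch length and the resulting fully-loaded throughput. The failure probabilities $\beta_k(\ell)$ and all parameter values below are invented for illustration only; they are not taken from the analysis.

```python
from math import comb

def binom_pmf(n, k, p):
    # B(n, k, p): probability that k of n users transmit
    return comb(n, k) * p**k * (1 - p)**(n - k)

# hypothetical system: K users, L ARQ rounds, transmit prob. p_t, rate R BPCU
K, L, p_t, R = 4, 3, 0.6, 1.0
# hypothetical failure probability after l rounds with k colliders; beta_k(0) = 1
beta = lambda k, l: (0.3 * k / K) ** l

def mean_epoch(k):
    # E[epoch | k colliders] = sum_{l=0}^{L-1} beta_k(l), by relation (alpha_beta)
    return sum(beta(k, l) for l in range(L))

# E[U] (relevant epoch, eq. P_U) and E[V] (irrelevant epoch, eq. P_V)
EU = sum(binom_pmf(K - 1, k - 1, p_t) * mean_epoch(k) for k in range(1, K + 1))
EV = (1 - p_t) ** (K - 1) + sum(binom_pmf(K - 1, k, p_t) * mean_epoch(k)
                                for k in range(1, K))

# closed-form denominator of (eqn:eta): 1 + sum_k B(K,k,p_t) sum_{l=1}^{L-1} beta_k(l)
denom = 1 + sum(binom_pmf(K, k, p_t) * sum(beta(k, l) for l in range(1, L))
                for k in range(1, K + 1))
assert abs(p_t * EU + (1 - p_t) * EV - denom) < 1e-12
eta_FL = K * p_t * R / denom  # fully-loaded throughput, K * eta_1
```

The assertion confirms numerically that $p_t\,\mathbb{E}[U] + (1-p_t)\,\mathbb{E}[V]$ coincides with the closed-form denominator of \eqref{eqn:eta}.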
On the other hand, the diversity gain \eqref{eqn:dmdt} can be found as follows. We first find $\beta_k(\rho,\ell) ~\dot{=}~\rho^{-d_{k}^{MAC}(r/\ell)} $, using the results in \cite{ECD:04} and \cite{TVZ:04}, where $A(\rho) \dot{=} \rho^{-b}$ implies $b=-\lim_{\rho \rightarrow \infty} \frac{\log_2 A(\rho)}{\log_2 \rho}$ as defined in \cite{ZT:02}, and $\dot{\le}$, $\dot{\ge}$ are similarly defined. With this, given that $p_t$ does not depend on $\rho$, \eqref{eqn:petmp} implies that $P_e(\rho)$ satisfies the exponential inequality $P_e(\rho) ~\dot{\le}~\rho^{-d_K^{MAC}\left(\frac{r}{L}\right)}$, as $T \to \infty$. In addition, applying the outage bound in \cite{ECD:04} to the random-access system yields $P_e(\rho) ~\dot{\ge}~ \rho^{-d_K^{MAC}\left(\frac{r}{L}\right)}$. These two exponential inequalities imply \eqref{eqn:dmdt}. Noticing that the span of $d^{M,N}(r)$ is $r \in [0, \min\{M,N\})$ \cite{ZT:02}, we verify that the span of $d_{k}^{MAC}\left(\frac{r}{\ell}\right)$ is $r \in [0, \min\{\ell M,\frac{\ell N} {k}\} )$. Then \eqref{eqn:beta1} can be readily verified as follows. For $r < \min\{\ell M,\frac{\ell N} {k}\}$, it is clear that $\beta_k(\rho,\ell) \rightarrow 0$ as $\rho \rightarrow \infty$ from $\beta_k(\rho,\ell) ~\dot{=}~\rho^{-d_{k}^{MAC}(r/\ell)} $. On the other hand, if $r>\min\{\ell M,\frac{\ell N} {k}\}$, then $\beta_k(\rho,\ell) \rightarrow 1$ as $\rho \rightarrow \infty$, since the outage probability approaches $1$ as $\rho \to \infty$ and the error probability given the outage event approaches $1$ as $T \to \infty$ by the strong converse \cite{G:68}. Combining the results shown above, we get the DMDT of the proposed IR-ARQ protocol as \eqref{eqn:dmdt}, where the relation between $r$ and $r_e$ is given in \eqref{eqn:r_e}. Finally, we consider the optimal pairs $(r^*,p_t^*)$ that achieve the largest diversity gain for a desired $r_e$.
Regarding $r_e$ in \eqref{eqn:r_e} as a function of $r$, we observe that $r_e(r)$ is discontinuous at the points $r=\min\{\ell M,\frac{\ell N}{k}\}$, $\ell=1,\cdots,L-1$ and $k=1,\cdots,K$. We consider the values of $r$ that lie below the first discontinuity of $r_e(r)$, i.e., $r \in [0, \min\{M, \frac{N}{K}\})$. For these $r$, \eqref{eqn:r_e} yields $r_e ~=~ p_t K r$. Since the diversity gain increases when $r$ decreases, we want to determine the smallest value of $r$ that achieves the desired $r_e$; this $r$ is minimized when $p_t=1$. Thus, for $r \in [0, \min\{M, \frac{N}{K}\})$, the optimal choice of $(r,p_t)$ achieving $r_e$ is $(\frac{r_e}{K},1)$. Furthermore, this choice achieves the maximum effective multiplexing gain given by the degrees of freedom of the channel ($\min\{KM,N\}$). Thus we do not need to consider the values of $r > \min\{M,\frac{N}{K}\}$, since it is clear from \eqref{eqn:r_e} that such $r$ values result in a smaller diversity gain for the same desired $r_e$. \section{Proof of Theorem~\ref{thm:stab}} \label{app:stab} We consider the backlog evolution $\mathbf{g}(m)$ of IR-ARQ, where $m$ is the \emph{epoch} index. We observe that $\mathbf{g}(m)$ is an embedded Markov chain; $g_i(m)$, the backlog evolution of user $i$, is \begin{equation} g_i(m+1) = \left\{ \begin{array}{ll} (g_i(m) - 1)^+ + a_i(m), & \mbox{ with probability } p_t \\ g_i(m) + a_i(m), & \mbox{ with probability } 1-p_t \end{array} \right. \label{eqn:backlog} \end{equation} where $a_i(m)$ is the number of packets that arrived at user $i$'s queue during epoch $m$, and $(x)^+ = \max\{x,0\}$ for a real number $x$. We first prove that \eqref{eqn:stab_dom} is the necessary and sufficient condition for the stability of IR-ARQ.
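To build intuition for the backlog recursion \eqref{eqn:backlog}, the following Monte Carlo sketch simulates a single user's queue. The Poisson arrival model, the mean epoch length, and all numeric values are hypothetical stand-ins chosen for illustration, not quantities from the analysis.

```python
import math
import random

def poisson(mean):
    # Knuth's method; adequate for the small means used here
    bound = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= bound:
            return k - 1

def simulate_backlog(lam, p_t, epoch_len=2.0, n_epochs=50_000, seed=1):
    """Time-average backlog of one user under the recursion (eqn:backlog).

    lam: packet arrival rate per slot (hypothetical units); epoch_len:
    stand-in mean epoch length in slots.  The head-of-line packet is
    served with probability p_t whenever the queue is non-empty.
    """
    random.seed(seed)
    g, total = 0, 0
    for _ in range(n_epochs):
        served = 1 if (random.random() < p_t and g > 0) else 0
        g = g - served + poisson(lam * epoch_len)
        total += g
    return total / n_epochs

# heavier load (closer to the stability boundary) yields a larger backlog
light, heavy = simulate_backlog(0.1, 0.9), simulate_backlog(0.4, 0.9)
assert light < heavy
```

The simulation illustrates the qualitative content of the stability condition: as the arrival rate approaches the service rate, the time-average backlog grows without bound.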
One can straightforwardly prove that, under the assumption that there is a finite $\ell$ with $\alpha_K(\ell)>0$, $\mathbf{g}(m)$ is a homogeneous, irreducible and aperiodic Markov chain, by following the argument in the proof of Proposition 1 in \cite{DS:02}. Given that the Markov chain has these properties, stability of the system is equivalent to the existence of a limiting distribution for the Markov chain, and thus is also equivalent to ergodicity of the Markov chain \cite{AT:05,TM:79}. Sufficiency and necessity of \eqref{eqn:stab_dom} for ergodicity can be proved by following the proof of Theorem 1 in \cite{AT:05}. In particular, we consider a stochastically dominant system, in which users are first chosen according to the probability-$p_t$ rule, and those users with empty queues transmit {\em dummy} packets. It can be shown that \eqref{eqn:stab_dom} is a sufficient condition for the stability of the dominant system, which implies the stability of the original system. On the other hand, we observe that the bounded homogeneity property \cite{TM:79} holds for the Markov chain \eqref{eqn:backlog}, and thus the instability of the dominant system implies the instability of the original system. Next, we consider the approximate average delay. We denote by a random variable $\mathcal{Y}$ (in slots) the time between the instant when a packet of user 1 reaches the head of its queue and the instant when it is finally transmitted. Then, the average delay $D$ is given by the result for the M/G/1 queue with vacations \cite{BG:92}, $D = \mathbb{E}[\mathcal{Y}] + \frac{\lambda \mathbb{E}[\mathcal{Y}^2]}{2(1-\lambda \mathbb{E}[\mathcal{Y}])} + \frac{\mathbb{E}[V^2]}{2\mathbb{E}[V]}$, with probability 1, provided that the service times $\mathcal{Y}$ and the vacations $V$ are i.i.d.
Here, the first moment of $\mathcal{Y}$ is calculated as \begin{equation} \nonumber \mathbb{E}[\mathcal{Y}] = \mathbb{E}[p_t U + p_t(1-p_t)(U+V) + p_t(1-p_t)^2(U+2V) + \cdots] = \mathbb{E}[U] + \left(\frac{1}{p_t}-1\right)\mathbb{E}[V]. \end{equation} On the other hand, the second moment of $\mathcal{Y}$ is calculated as \begin{eqnarray} \nonumber \mathbb{E}[\mathcal{Y}^2] & = \mathbb{E}[p_t U^2 + p_t(1-p_t)(U+V)^2 + p_t(1-p_t)^2(U+2V)^2 + \cdots] \\ \nonumber & = \mathbb{E}[U^2] + \frac{(2-p_t)(1-p_t)}{p_t^2}\mathbb{E}[V^2] + 2\left(\frac{1}{p_t}-1\right)\mathbb{E}[U]\mathbb{E}[V], \end{eqnarray} where $U$ and $V$ are assumed to be independent. Substituting the values of $\mathbb{E}[\mathcal{Y}^2]$ and $\mathbb{E}[\mathcal{Y}]$ into $D$, the approximate delay \eqref{eqn:delay} is readily obtained. The expected values of the steady-state epoch lengths, $\mathbb{E}[U],\mathbb{E}[U^2],\mathbb{E}[V]$ and $\mathbb{E}[V^2]$, used in the expression \eqref{eqn:delay} are easy to derive using \eqref{eqn:alpha_beta}, noticing that the pmfs of $U$ and $V$ are given by \eqref{eqn:P_U} and \eqref{eqn:P_V} with $p$ substituted for $p_t$, where $p \triangleq p_t (1-p_e)$ and $p_e$ is the steady-state probability of a user's queue being empty. Finally, to see that the steady-state transmission probability $p$ can be found by solving the equation \eqref{eqn:p_eq}, we carry out a steady-state analysis of the Markov chain $g_1(m)$, whose time-evolution is given by \eqref{eqn:backlog}. Let us define the steady-state values $g_1 \triangleq \lim_{m \rightarrow \infty} g_1(m)$ and $a_1 \triangleq \lim_{m \rightarrow \infty} a_1(m)$. Then, taking expectations on both sides of \eqref{eqn:backlog} results in: \begin{equation} \mathbb{E}[g_{1}(m+1)] = \mathbb{E}[g_{1}(m)] - p_t\textrm{Pr}(g_{1}(m)>0) + \mathbb{E}[a_1(m)].
\end{equation} In the limit as $m \rightarrow \infty$, we have $\mathbb{E}[g_1] = \mathbb{E}[g_1] - p_t\textrm{Pr}(g_1>0) + \mathbb{E}[a_1]$, i.e., $p=\mathbb{E}[a_1]$. Thus, \begin{equation} p ~=~ \frac{\lambda}{K} \left( p_t\textrm{Pr}(g_1>0)\mathbb{E}[U]+(1-p_t\textrm{Pr}(g_1>0))\mathbb{E}[V] \right) ~=~ \frac{\lambda}{K} \left( p\mathbb{E}[U]+(1-p)\mathbb{E}[V] \right), \label{eqn:p_empty2} \end{equation} which is equivalent to \eqref{eqn:p_eq}.
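Since $\mathbb{E}[U]$ and $\mathbb{E}[V]$ themselves depend on $p$ through the pmfs \eqref{eqn:P_U} and \eqref{eqn:P_V}, \eqref{eqn:p_eq} is a genuine fixed-point equation. The sketch below solves it by fixed-point iteration; the failure probabilities $\beta_k(\ell)$ and all parameter values are hypothetical placeholders.

```python
from math import comb

def binom_pmf(n, k, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def epoch_means(p, K, L, beta):
    # E[U], E[V] from (P_U)/(P_V), with the steady-state prob. p in place of p_t
    m = lambda k: sum(beta(k, l) for l in range(L))   # mean epoch, k colliders
    EU = sum(binom_pmf(K - 1, k - 1, p) * m(k) for k in range(1, K + 1))
    EV = (1 - p)**(K - 1) + sum(binom_pmf(K - 1, k, p) * m(k) for k in range(1, K))
    return EU, EV

def solve_p(lam, K, L, beta, iters=200):
    # fixed-point iteration on p = (lam/K) (p E[U] + (1-p) E[V])   -- (eqn:p_eq)
    p = 0.5
    for _ in range(iters):
        EU, EV = epoch_means(p, K, L, beta)
        p = min(1.0, (lam / K) * (p * EU + (1 - p) * EV))
    return p

lam, K, L = 0.5, 4, 3
beta = lambda k, l: (0.3 * k / K)**l   # hypothetical failure probabilities
p = solve_p(lam, K, L, beta)
EU, EV = epoch_means(p, K, L, beta)
assert abs(p - (lam / K) * (p * EU + (1 - p) * EV)) < 1e-9
```

For light loads the iteration is a contraction and converges in a few steps; the final assertion verifies that the returned $p$ indeed satisfies \eqref{eqn:p_eq} to numerical precision.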
\section{Acknowledgements} The authors thank D. Awschalom, H.D. Chen, Y. Kato, J. Orenstein and J. Zaanen for useful discussions. B.A.B. acknowledges support from the Stanford Graduate Fellowship Program. This work is supported by the NSF under grant number DMR-0342832 and by the US Department of Energy, Office of Basic Energy Sciences, under contract DE-AC03-76SF00515.
\section{Introduction} \label{sec:intro} In order to confront the parameter space of the Minimal Supersymmetric Standard Model (MSSM)~\cite{mssm,HaK85,GuH86} with experimental data, one can take a purely phenomenological approach in which the soft SUSY-breaking parameters are specified at low energies, and are not required to be universal at any input scale, a class of models referred to as the phenomenological MSSM with $n$ free parameters (pMSSM$n$)~\cite{pMSSM}. Here we review a recent exploration of this framework, the pMSSM10~\cite{mc11,mc12}, in particular in view of the physics at a future $e^+e^-$ linear collider, such as the ILC~\cite{ILC-TDR,teslatdr,ilc,LCreport} or CLIC~\cite{CLIC,LCreport}. In our version of the pMSSM10 the following assumptions are made. Motivated by the absence of significant flavor-changing neutral interactions (FCNI) beyond those in the Standard Model (SM), we assume that the soft SUSY-breaking contributions to the masses of the squarks of the first two generations are equal, which we also assume for the three generations of sleptons. The FCNI argument does not motivate any relation between the soft SUSY-breaking contributions to the masses of left- and right-handed sfermions, but here we assume for simplicity that they are equal.
As a result, we consider the following 10 parameters in our analysis (where ``mass'' is here used as a synonym for a soft SUSY-breaking parameter{, and the gaugino masses and trilinear couplings are taken to be real}): \begin{align} {\rm 3~gaugino~masses}: & \; M_{1,2,3} \, , \nonumber \\ {\rm 2~squark~masses}: & \; m_{\tilde q_1} \, = \, m_{\tilde q_2} \, \ne \, m_{\tilde q_{3}}, \nonumber \\ {\rm 1~slepton~mass}: & \; \ensuremath{m_{\tilde \ell}} \, , \nonumber \\ \label{pMSSM10} {\rm 1~trilinear~coupling}: & \; A \, , \\ {\rm Higgs~mixing~parameter}: & \; \mu \, , \nonumber \\ {\rm Pseudoscalar~Higgs~mass}: & \; \ensuremath{M_A} \, , \nonumber \\ {\rm Ratio~of~vevs}: & \; \ensuremath{\tan\beta} \, . \nonumber \end{align} All of these parameters are specified at a low renormalisation scale, the mean scalar top mass scale, $M_{\rm SUSY} \equiv \sqrt{\mst1 \mst2}$, close to that of electroweak symmetry breaking. More information about the scan of the pMSSM10 parameter space using the {\tt MultiNest}~\cite{multinest} technique can be found in \citere{mc11}. \section{Our method} \label{sec:method} As discussed above, we consider a ten-dimensional subset (pMSSM10) of the full MSSM parameter space. The selected SUSY parameters were listed in \refeq{pMSSM10}, and the ranges of these parameters that we sample are shown in Table~\ref{tab:ranges}. We also indicate in the right column of this table how we divide the ranges of most of these parameters into segments for the {\tt MultiNest} sampling. \begin{table*}[htb!]
\begin{center} \begin{tabular}{|c|c|c|} \hline Parameter & \; \, Range & Number of \\ & & segments \\ \hline $M_1$ & (-1 , 1 )\ensuremath{\,\, \mathrm{TeV}} & 2 \\ $M_2$ & ( 0 , 4 )\ensuremath{\,\, \mathrm{TeV}} & 2 \\ $M_3$ & (-4 , 4 )\ensuremath{\,\, \mathrm{TeV}} & 4 \\ \ensuremath{m_{\tilde q}} & ( 0 , 4 )\ensuremath{\,\, \mathrm{TeV}} & 2 \\ \ensuremath{m_{\tilde q_3}} & ( 0 , 4 )\ensuremath{\,\, \mathrm{TeV}} & 2 \\ \ensuremath{m_{\tilde l}} & ( 0 , 2 )\ensuremath{\,\, \mathrm{TeV}} & 1 \\ \ensuremath{M_A} & ( 0 , 4 )\ensuremath{\,\, \mathrm{TeV}} & 2 \\ $A$ & (-5 , 5 )\ensuremath{\,\, \mathrm{TeV}} & 1 \\ $\mu$ & (-5 , 5 )\ensuremath{\,\, \mathrm{TeV}} & 1 \\ \ensuremath{\tan\beta} & ( 1 , 60) & 1 \\ \hline \hline Total number of boxes & & 128 \\ \hline \end{tabular} \caption{\it Ranges of the pMSSM10 parameters sampled, together with the numbers of segments into which each range was divided, and the corresponding number of sample boxes.} \label{tab:ranges} \end{center} \end{table*} \medskip We calculate the observables that enter our likelihood evaluation using the {\tt MasterCode} framework~\cite{mc7,mc8,mc8.5,mc9,mc10,mc11,mc12,mcweb}, which interfaces various public and private codes: {\tt SoftSusy~3.3.9}~\cite{Allanach:2001kg} for the spectrum, {\tt FeynWZ}~\cite{Svenetal} for the electroweak precision observables, {\tt FeynHiggs~2.10.0}~\cite{FeynHiggs,Mh-logresum} for the Higgs sector and \ensuremath{(g-2)_\mu}, {\tt SuFla}~\cite{SuFla} and {\tt SuperIso}~\cite{SuperIso} for the $B$-physics observables, {\tt Micromegas~3.2}~\cite{MicroMegas} for the dark matter relic density, {\tt SSARD}~\cite{SSARD} for the spin-independent cross-section \ensuremath{\sigma^{\rm SI}_p}, {\tt SDECAY~1.3b}~\cite{Muhlleitner:2003vg} for calculating sparticle branching ratios, and {\tt HiggsSignals~1.3.0}~\cite{HiggsSignals} and {\tt HiggsBounds~4.2.0}~\cite{HiggsBounds} for calculating constraints on the Higgs sector.
The codes are linked using the SUSY Les Houches Accord (SLHA)~\cite{SLHA}. For many of these constraints, we follow very closely our previous implementations, which were summarized recently in Table~1 of~\cite{mc10}. Updates concerning BR($b \to s \gamma$), BR($B_u \to \tau \nu_\tau$), Higgs boson masses and rates etc.\ can be found in \citere{mc11}. \medskip Particular attention has been paid to correctly including the various SUSY searches at the LHC. As most of these searches have been interpreted by ATLAS and CMS only in simplified model frameworks, we have introduced supplementary procedures in order to apply these searches to the complicated sparticle spectrum of a full SUSY model such as the pMSSM10. For this we consider three separate categories of particle mass constraints that arise from the LHC searches: a) generic constraints on coloured sparticles (gluinos and squarks), b) dedicated constraints on electroweakly-interacting gauginos, Higgsinos and sleptons, and c) dedicated constraints on stop production in scenarios with compressed spectra. In the following we refer to the combination of all these constraints from direct SUSY searches as the LHC8 constraint, with sectors labelled as \ensuremath{{\rm LHC8}_{\rm col}}, \ensuremath{{\rm LHC8}_{\rm EWK}}, and \ensuremath{{\rm LHC8}_{\rm stop}}, respectively. The implementation of these results has been validated with {\tt Atom}~\cite{Atom} and {\tt Scorpion}~\cite{Scorpion}. \section{Predictions for the ILC and CLIC} \label{sec:results} \subsection{The Best-Fit Point} \label{sec:best-fit} \begin{table*}[htb!]
\renewcommand{\arraystretch}{1.1} \begin{center} \begin{tabular}{|c|r|} \hline Parameter & Best-Fit \\ \hline $M_1$ & 170 \ensuremath{\,\, \mathrm{GeV}} \\ $M_2$ & 170 \ensuremath{\,\, \mathrm{GeV}} \\ $M_3$ & 2600 \ensuremath{\,\, \mathrm{GeV}} \\ \ensuremath{m_{\tilde q}} & 2880 \ensuremath{\,\, \mathrm{GeV}} \\ \ensuremath{m_{\tilde q_3}} & 4360 \ensuremath{\,\, \mathrm{GeV}} \\ \ensuremath{m_{\tilde l}} & 440 \ensuremath{\,\, \mathrm{GeV}} \\ \ensuremath{M_A} & 2070 \ensuremath{\,\, \mathrm{GeV}} \\ $A$ & 790 \ensuremath{\,\, \mathrm{GeV}} \\ $\mu$ & 550 \ensuremath{\,\, \mathrm{GeV}} \\ \ensuremath{\tan\beta} & 37.6~~~~~ \\ \hline \end{tabular} \caption{\it {Parameters of the pMSSM10 best-fit point.}} \label{tab:bf-point} \end{center} \renewcommand{\arraystretch}{1.0} \end{table*} We start with a discussion of the characteristics of the best-fit point, whose parameters are listed in \refta{tab:bf-point}. The best-fit spectrum is shown in \reffi{fig:bestfitspectrum}, and its SLHA file~\cite{SLHA} can be downloaded from the MasterCode website~\cite{mcweb}. We note first the near-degeneracy between the $\neu1, \neu2$ and $\cha1$, which is a general feature of our 68\% CL region that occurs in order to bring the cold dark matter density into the range allowed by cosmology. Correspondingly, we see in \refta{tab:bf-point} that $M_1 \simeq M_2$, though $M_3$ is very different. The overall $\neu1/\neu2/\cha1$ mass scale is bounded from below by the LEP and \ensuremath{{\rm LHC8}_{\rm EWK}}\ constraints, and from above by \ensuremath{(g-2)_\mu}, especially at the 68\% CL. We display in \reffi{fig:mass-summary} the 95\% (68\%) CL intervals in our fit for the masses of pMSSM10 particles as lighter (darker) peach shaded bars, with the best-fit values indicated by blue horizontal lines.
Turning back to \reffi{fig:bestfitspectrum}, we note the near-degeneracy between the slepton masses, which reflects our assumption of a common input slepton mass at the input scale $M_{\rm SUSY}$ that would not hold in more general versions of the pMSSM. The overall slepton mass scale is {below 1 \ensuremath{\,\, \mathrm{TeV}}}, as seen in \reffi{fig:mass-summary}, being bounded from above by \ensuremath{(g-2)_\mu}\ and from below by the \ensuremath{{\rm LHC8}_{\rm EWK}}\ constraint. The latter also provides the strongest upper bound on the $\neu1/\neu2/\cha1$ masses. We also see in \reffi{fig:mass-summary} that the gluino, squark, stop and sbottom masses are all very poorly constrained in our pMSSM10 analysis, though the \ensuremath{{\rm LHC8}_{\rm col}}\ constraint forbids low masses. Concerning the Higgs sector, we note that the best-fit value of $M_A$ lies in the multi-TeV region (where its actual value is only weakly constrained) and is therefore far in the decoupling region. Accordingly, the properties of the light Higgs boson at about 125~GeV resemble very closely those of the Higgs boson of the SM. \begin{figure*}[htb!] \vspace{0.5cm} \centering \resizebox{14cm}{!}{\includegraphics{101773776_susy_hit}} \caption{\it The particle spectrum and dominant decay branching ratios at our best-fit {pMSSM10} point. Note the near-degeneracies between $\neu1, \neu2$ and $\cha1$, between the sleptons, between $\neu3, \neu4$ and $\cha2$, between the ${\tilde q_L}$ and ${\tilde q_R}$, between the heavy Higgs bosons, and between the stops and sbottoms, which are general features of our 68\% CL region. On the other hand, the overall sparticle mass scales, in particular of the coloured sparticles, are poorly determined. } \label{fig:bestfitspectrum} \end{figure*} \begin{figure*}[htb!] \vspace{1.0cm} \centering \resizebox{17cm}{!}{\includegraphics{mass-spectrum}} \caption{\it Summary of mass ranges {predicted in the pMSSM10}.
The light (darker) peach shaded bars indicate the 95\% (68\%)~CL intervals, whereas the blue horizontal lines mark the values of the masses at the best-fit point. } \label{fig:mass-summary} \end{figure*} \medskip SUSY particle pair production at an $e^+e^-$ collider is possible for masses up to $\sqrt{s}/2$, i.e.\ up to $\sim 500 \ensuremath{\,\, \mathrm{GeV}}$ at the ILC and up to $\sim 1500 \ensuremath{\,\, \mathrm{GeV}}$ at CLIC. Here it should be kept in mind that the production of two different SUSY particles may also be possible, such as $e^+e^- \to \neu1\neu2$ or $e^+e^- \to \smu1\smu2$, thus extending the mass reach. From \reffi{fig:mass-summary} it becomes obvious that in particular the electroweak sector of the pMSSM10 could be accessible at the ILC or CLIC. This offers interesting prospects for the precision determination of the underlying SUSY parameters, see, e.g., \citere{LCreport}. In the next two subsections we review some more details of the preferred SUSY particle mass ranges as well as of the $e^+e^-$ production cross sections for electroweak particles. \subsection{Sparticle Masses} \begin{figure*}[htb!] \resizebox{8.5cm}{!}{\includegraphics{pmssm10_n12c_abs_mg_4K_chi2}} \resizebox{8.5cm}{!}{\includegraphics{pmssm10_n12c_msqr_4K_chi2}}\\[1em] \hspace {0.5cm} \resizebox{8.5cm}{!}{\includegraphics{pmssm10_n12c_mstop1_4K_chi2}} \resizebox{8.5cm}{!}{\includegraphics{pmssm10_n12c_msbot1_4K_chi2}}\\[1em] \hspace {0.5cm} \resizebox{8.5cm}{!}{\includegraphics{pmssm10_n12c_abs_mchar1_2K_chi2}} \resizebox{8.5cm}{!}{\includegraphics{pmssm10_n12c_mstau1_4K_chi2}}\\[1em] \vspace{-1cm} \caption{\it The one-dimensional profile likelihood functions for \ensuremath{m_{\tilde g}}, \ensuremath{m_{\tilde q}}, \mstop1, \msbot1, \mcha1 and \mstau1. In each panel the solid black line is for the pMSSM10, the solid blue line for the NUHM2, the dashed blue line for the NUHM1 and the dotted blue line for the CMSSM.
} \label{fig:onedimensional} \end{figure*} \reffi{fig:onedimensional} displays (from top left to bottom right) the one-dimensional profile likelihood functions for the masses of the gluino, the first- and second-generation squarks, the lighter stop and sbottom squarks, the lighter chargino and the lighter stau. In each panel the solid black line is for the pMSSM10, the solid blue line for the NUHM2, the dashed blue line for the NUHM1 and the dotted blue line for the CMSSM (the latter three lines are {updated from \citere{mc10} to include new constraints such as the LHC combined value of $\ensuremath{M_h}$~\cite{Aad:2015zhl}}). In the case of $\ensuremath{m_{\tilde g}}$, we see that significantly lower masses are allowed in the pMSSM10 than in the other models: $> 1250 \ensuremath{\,\, \mathrm{GeV}}$ at the 68\%~CL and $\sim 1000 \ensuremath{\,\, \mathrm{GeV}}$ at the 95\%~CL. We also see that there is a similar, though smaller, reduction in the lower limit on $\ensuremath{m_{\tilde q}}$, to $\sim 1500\ensuremath{\,\, \mathrm{GeV}}$ at the 68\% CL and $\sim 1300\ensuremath{\,\, \mathrm{GeV}}$ at the 95\% CL. The picture is more complicated for $\mstop1$, where we see structures in the one-dimensional likelihood function for $\mstop1 < 1000 \ensuremath{\,\, \mathrm{GeV}}$ that are allowed at the 95\% CL. This reflects the compressed stop spectra; see \citere{mc11} for more details. In the bottom row of \reffi{fig:onedimensional}, the one-dimensional profile likelihood functions for $\mcha1$ and $\mstau1$ in the pMSSM have minima at the lower mass limits $\sim 100 \ensuremath{\,\, \mathrm{GeV}}$ established at LEP, and there is an upper limit $\mstau1 \lesssim 1000 \ensuremath{\,\, \mathrm{GeV}}$ at the 95\% CL. These effects are due to the \ensuremath{(g-2)_\mu}\ constraint and the choice of generation-independent slepton masses in the pMSSM10.
On the other hand, the light chargino (which is nearly degenerate in mass with the second lightest neutralino) has an upper mass limit below $500 \ensuremath{\,\, \mathrm{GeV}}$ at the 90\% CL, which would allow neutralino and chargino pair production at a 1000~GeV $e^+e^-$~collider, as we discuss below. However, we find no upper limit on \mcha1\ at the 95\% CL. \subsection{Prospects for Sparticle Detection at the ILC and CLIC} \label{sec:e+e-prospects} \begin{figure*}[htb!] \vspace{-0.25cm} \resizebox{8cm}{!}{\includegraphics{pmssm10_n12c_mneu1_plus_mneu1_3000_chi2}} \resizebox{8cm}{!}{\includegraphics{pmssm10_n12c_mneu1_plus_mneu2_3000_chi2}} \\[1em] \resizebox{8cm}{!}{\includegraphics{pmssm10_n12c_mneu1_plus_mneu3_3000_chi2}} \resizebox{8cm}{!}{\includegraphics{pmssm10_n12c_mchar1_plus_mchar1_3000_chi2}} \\ \vspace{-0.75cm} \caption{\it The one-dimensional profile likelihood functions for various thresholds in $e^+ e^-$ annihilation. Upper left panel: the threshold for $\neu1 \neu1$ production. Upper right panel: the threshold for associated $\neu1 \neu2$ production. Lower left panel: the threshold for associated $\neu1 \neu3$ production. Lower right panel: the threshold for $\cha1 \champ1$ production.} \label{fig:e+e-chi2} \end{figure*} \reffi{fig:e+e-chi2} displays the one-dimensional $\chi^2$ functions for the lowest particle pair- and associated {chargino and neutralino} production thresholds in $e^+ e^-$ annihilation in the pMSSM10 (black), compared with their counterparts in the CMSSM (dotted blue), NUHM1 (dashed blue) and NUHM2 (solid blue).
In the cases of $\neu1 \neu1$ (upper left panel), $\neu1 \neu2$ (upper right panel) and $\cha1 \champ1$ (lower right panel) production, we see that the minima of the $\chi^2$ functions in the pMSSM10 lie within reach of an $e^+ e^-$ collider with centre-of-mass energy 500~GeV, and that threshold locations favoured by $\Delta\chi^2 \le 3$ would be within reach of a 1000~GeV collider, whereas no upper limit can be established at the 95\% CL. We also see that, in the case of $\neu1 \neu3$ production (lower left panel) (which is very similar to the cases of $\neu1 \neu4$, $\neu2 \neu3$ and $\cha1 \champ2$ production that we do not show), the minimum of the global $\chi^2$ function for the threshold lies between 400~GeV and 1000~GeV, again with no upper limit at the 95\% CL. Referring back to the bottom right panel of Fig.~\ref{fig:onedimensional}, we see that slepton pair-production thresholds may well also lie below 1000~GeV. In all cases, the expected locations of the thresholds in the CMSSM, NUHM1 and NUHM2 are at much higher centre-of-mass energies. Thus, the accessibility of supersymmetric particles at $e^+e^-$ colliders is vastly different in the pMSSM10 and similar non-GUT models, as compared to the simplest GUT-based models. The prospects of producing SUSY particles at the ILC and CLIC are substantially better in the pMSSM10 than in the CMSSM, NUHM1 and NUHM2. \section{Conclusions} \label{sec:conclusions} We have reviewed the first global likelihood analysis of the pMSSM using a frequentist approach that includes comprehensive treatments of the LHC8 constraints, performed with the {\tt MasterCode} framework. We have analysed the preferred mass ranges for SUSY particles and compared them to the reach of the ILC and CLIC.
In particular, such a machine would have significant discovery potential in the preferred region for the lighter neutralinos and charginos, as well as for scalar leptons, while those states would be difficult to access at the LHC (with the searches discussed in \citere{mc11}). \subsection*{Acknowledgements} We thank E.~Bagnaschi, O.~Buchmueller, R.~Cavanaugh, M.~Citron, A.~De~Roeck, M.~Dolan, J.~Ellis, H.~Fl\"acher, G.~Isidori, S.~Malik, J.~Marrouche, D.~Mart\'inez Santos, K.~Olive, K.~Sakurai, K.~de~Vries and G.~Weiglein for their collaboration on the work presented here. The work of S.H.\ is supported in part by CICYT (grant FPA 2013-40715-P) and by the Spanish MICINN's Consolider-Ingenio 2010 Program under grant MultiDark CSD2009-00064. \newcommand\jnl[1]{{\frenchspacing #1}} \newcommand\vol[1]{\textbf{#1}}
\section{Introduction} Cardiac auscultation is the most practiced non-invasive and cost-effective procedure for the early diagnosis of various heart diseases. Effective cardiac auscultation requires trained physicians, a resource that is limited especially in low-income countries of the world \cite{alam2010cardiac}. This lack of skilled doctors opens up opportunities for the development of machine learning based assistive technologies for point-of-care diagnosis of heart diseases. With the advent of smartphones and their increased computational capabilities, machine learning based automated heart sound classification systems, implemented with a smartphone-attachable digital stethoscope at point-of-care locations, can have a significant impact on the early diagnosis of cardiac diseases. Automated classification of the phonocardiogram (PCG), i.e., the heart sound, has been extensively studied in the past few decades. Previous research on automatic classification of heart sounds can be broadly classified into two areas: (i) PCG segmentation, i.e., detection of the first and second heart sounds (S1 and S2), and (ii) detection of recordings as pathologic or physiologic. For the latter application, researchers in the past have utilized Artificial Neural Networks (ANN) \cite{uuguz2012ANN}, Support Vector Machines (SVM) \cite{gharehbaghi2015SVM} and Hidden Markov Models (HMM) \cite{saraccouglu2012HMM}. In 2016, the PhysioNet/CinC Challenge was organized and an archive of $4430$ PCG recordings was released for binary classification of normal and abnormal heart sounds. This particular challenge encouraged new methods for this task. Notable features used for this dataset included time, frequency and statistical features \cite{homsi2017}, Mel-frequency Cepstral Coefficients (MFCC) \cite{bobillo2016}, and the Continuous Wavelet Transform (CWT). Most of the systems adopted the segmentation algorithm developed by Springer et al. \cite{springer2016segmentation}.
Among the top scoring systems, Maknickas et al. \cite{maknickas2017} extracted Mel-frequency Spectral Coefficients (MFSC) from unsegmented signals and used a 2D Convolutional Neural Network (CNN). Plesinger et al. \cite{plesinger2017} proposed a novel segmentation method, a histogram based feature selection method and parameterized sigmoid functions per feature, to discriminate between classes. Various machine learning algorithms including SVM \cite{whitaker2017}, k-Nearest Neighbor (k-NN) \cite{bobillo2016}, Multilayer Perceptron (MLP) \cite{kay2017,zabihi2016}, Random Forest \cite{homsi2017}, 1D \cite{potes2016ensemble} and 2D CNNs \cite{maknickas2017}, and Recurrent Neural Networks (RNN) \cite{yang2016classification} were employed in the challenge. A good number of submissions used an ensemble of classifiers with a voting algorithm \cite{homsi2017,kay2017,zabihi2016,potes2016ensemble}. The best performing system was presented by Potes et al. \cite{potes2016ensemble}, who combined a 1D-CNN model with an Adaboost-Abstain classifier using a threshold based voting algorithm. In audio signal processing, filter-banks are commonly employed as a standard pre-processing step during feature extraction; this was done in \cite{potes2016ensemble} before the 1D-CNN model. We propose a CNN based Finite Impulse Response (FIR) filter-bank front-end that automatically learns the frequency characteristics of the FIR filter-bank using time-convolution (tConv) layers. The INTERSPEECH ComParE Heart Sound Shenzhen (HSS) Dataset is a relatively small corpus with three class labels according to the degree of the disease, while the PhysioNet Heart Sounds Dataset has binary annotations. We train our model on the PhysioNet Challenge Dataset and transfer the learned weights for the three-class classification task. We also employ unsupervised/semi-supervised learning to find latent representations of the PCG.
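To illustrate the filter-bank view behind a time-convolution layer, the sketch below applies a bank of FIR filters to a raw signal using plain convolutions. In the actual tConv front-end the filter taps are learned by backpropagation; here they are random placeholders, and the band count, tap count, signal length and sampling rate are all hypothetical.

```python
import numpy as np

def fir_filterbank(x, taps):
    """Apply a bank of FIR filters to a 1-D signal.

    taps: (n_filters, kernel_size) array of filter coefficients.  Each row
    plays the role of one learnable kernel in a tConv (1-D convolution)
    layer; 'same' mode preserves the time-axis length.
    """
    return np.stack([np.convolve(x, h, mode="same") for h in taps])

rng = np.random.default_rng(0)
pcg = rng.standard_normal(2500)            # hypothetical 2.5 s PCG at 1 kHz
taps = rng.standard_normal((4, 61)) * 0.1  # 4 bands of 61 taps (hypothetical)
bands = fir_filterbank(pcg, taps)
assert bands.shape == (4, 2500)            # one filtered sub-band per filter
```

Replacing the fixed `taps` with the weights of a 1-D convolutional layer and training end-to-end gives exactly the learnable-filterbank interpretation described above.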
\section{Data Preparation} \subsection{Datasets} \subsubsection{The INTERSPEECH 2018 ComParE HSS Dataset} The INTERSPEECH 2018 ComParE Challenge \cite{schuller2018interspeech} released the Heart Sounds Shenzhen PCG signal corpus containing $845$ recordings from $170$ different subjects. The recordings were collected from patients with coronary heart disease, arrhythmia, valvular heart disease, congenital heart disease, etc. The PCG recordings are sampled at $4$ kHz and annotated with three class labels: (i) \emph{Normal}, (ii) \emph{Mild}, and (iii) \emph{Moderate/Severe} (heart disease). \subsubsection{PhysioNet/CinC Challenge Dataset} The 2016 PhysioNet/CinC Challenge dataset \cite{liu2016open} contains PCG recordings from seven different research groups. The training data contains $3153$ heart sound recordings collected from $764$ patients with a total number of $84{,}425$ annotated cardiac cycles ranging from $35$ to $159$ bpm. Cardiac anomalies include coronary heart disease, arrhythmia, valvular stenosis/regurgitation, etc. The dataset has $2488$ and $665$ PCG signals annotated as \emph{Normal} and \emph{Abnormal}, respectively. The Aristotle University of Thessaloniki heart sounds database (AUTHHSDB) \cite{papadaniil2014efficient}, a subset of the Physionet corpus (training-c), contains additional metadata on the severity of the heart diseases. The recordings are sampled at 2000 Hz. \subsection{Data Imbalance Problem} The INTERSPEECH ComParE HSS Dataset suffers from significant class imbalance in its training set, which can degrade the performance of both classical machine learning and deep learning based classifiers. The training set is split in a ratio of 16.7/55.0/28.3 percent among the \emph{Normal}/\emph{Mild}/\emph{Severe} classes, with more than half of the training data comprising PCG signals annotated as \emph{Mild}. The effect of this imbalance is evident in our recall metrics, which are discussed later in Sec. \ref{disc}.
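One common remedy for such imbalance is to weight the training loss by inverse class frequency. The sketch below uses the split percentages quoted above; the inverse-frequency scheme itself is an illustrative assumption, not necessarily the exact weighting used in our experiments.

```python
# Inverse-frequency class weights for the imbalanced HSS training split.
# The 16.7/55.0/28.3 percent split is taken from the text; the weighting
# scheme is a common choice, shown here only as an illustration.

def inverse_frequency_weights(class_fractions):
    """Weight each class by 1/fraction, normalized to sum to len(classes)."""
    raw = [1.0 / f for f in class_fractions]
    scale = len(raw) / sum(raw)
    return [w * scale for w in raw]

fractions = [0.167, 0.550, 0.283]  # Normal / Mild / Moderate-Severe
weights = inverse_frequency_weights(fractions)
# The majority class ("Mild") receives the smallest weight.
```

Such weights can be passed directly to a weighted cross-entropy loss so that errors on the minority classes cost more.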
\subsection{Fused Training Sets} \label{database} To cope with the class imbalance and increase the volume of the training data, we created $3$ new fused training corpora out of the INTERSPEECH ComParE HSS Dataset and the Physionet/CinC Challenge Dataset training partitions. The AUTHHSDB (training-c) partition of the dataset was relabeled using the provided metadata files to yield $7$ \emph{Normal}, $8$ \emph{Mild} and $16$ \emph{Severe} annotated recordings. The dataset distributions are depicted in Fig. \ref{foldsplit}. The fused datasets prepared for Transfer Learning (TL), Supervised Learning (SL) and Representation Learning (RL) will be referred to as TL-Data, SL-Data and RL-Data, respectively. \begin{figure}[tb] \includegraphics[width=\linewidth]{foldsplit} \centering \caption{Dataset preparation for transfer learning, supervised learning and representation learning using the Physionet and ComParE corpora.} \label{foldsplit} \end{figure} \section{Proposed Transfer Learning Framework} \subsection{1D-CNN Model for Abnormal Heart Sound Detection} \label{potes} The Physionet/CinC Challenge PCG database is a larger corpus with Normal and Abnormal labels designed for a binary classification task. We propose a 1D-CNN neural network improving on the top scoring model \cite{potes2016ensemble} of the Physionet/CinC 2016 challenge. First, the signal is re-sampled to 1000 Hz (after an anti-aliasing filter) and decomposed into four frequency bands ($25-45$, $45-80$, $80-200$, $200-500$ Hz). Next, spikes in the recordings are removed \cite{schmidt2010segmentation} and PCG segmentation is performed to extract cardiac cycles \cite{springer2016segmentation}. Taking into account the longest cardiac cycle in the corpus, each cardiac cycle is zero padded to be $2.5$ s in length. The four frequency bands extracted from each cardiac cycle are fed into the four input branches of the 1D-CNN architecture.
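The preprocessing chain described above (resampling, band decomposition, zero padding of segmented cycles) can be sketched as follows. The filter order, the use of zero-phase filtering, and the slight pullback of the top band edge below Nyquist are illustrative choices, not details specified in the text.

```python
# Sketch of the four-band decomposition (25-45, 45-80, 80-200, 200-500 Hz)
# after resampling the PCG to 1000 Hz. At fs = 1000 Hz the upper edge of
# the last band coincides with Nyquist, so it is pulled just below 500 Hz
# to keep the Butterworth design valid.
import numpy as np
from scipy.signal import butter, sosfiltfilt, resample_poly

FS = 1000  # Hz, after resampling
BANDS = [(25, 45), (45, 80), (80, 200), (200, 499)]

def decompose(pcg, fs_in):
    """Resample to FS and return four band-limited copies of the signal."""
    x = resample_poly(pcg, FS, fs_in) if fs_in != FS else pcg
    out = []
    for lo, hi in BANDS:
        sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
        out.append(sosfiltfilt(sos, x))
    return out

def pad_cycle(cycle, seconds=2.5, fs=FS):
    """Zero-pad a segmented cardiac cycle to a fixed 2.5 s length."""
    target = int(seconds * fs)
    padded = np.zeros(target)
    padded[: min(len(cycle), target)] = cycle[:target]
    return padded
```

Each padded, band-limited cycle then feeds one of the four input branches.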
Each branch has two convolutional layers of kernel size $5$, followed by a Rectified Linear Unit (ReLU) activation and a max-pooling of $2$. The first convolutional layer has $8$ filters while the second has $4$. The outputs of the four branches are fed to an MLP network after being flattened and concatenated. The MLP network has a hidden layer of $20$ neurons with ReLU activation and two output neurons with softmax activation. The resulting model provides predictions on every heart sound segment (cardiac cycle), which are averaged over the entire recording and rounded for inference. \begin{figure*}[t] \includegraphics[width=\linewidth]{transferlearning__1_} \centering \caption{Proposed architecture incorporating tConv layers for Transfer Learning.} \label{bottle} \end{figure*} \subsection{Filter-bank Learning using Time-Convolutional (tConv) Layers} For a causal discrete-time FIR filter of order $N$ with filter coefficients $b_{0}, b_{1}, \dots, b_{N}$, the output samples $y[n]$ are obtained as a weighted sum of the most recent samples of the input signal $x[n]$. This can be expressed as: \begin{eqnarray} y[n] &=& b_{0}x[n]+b_{1}x[n-1]+\dots+b_{N}x[n-N] \nonumber\\ &=& \sum_{i=0}^{N}b_{i}x[n-i]. \end{eqnarray} A 1D-CNN performs cross-correlation between its input and its kernel using a spatially contiguous receptive field of kernel neurons. The output of a convolutional layer, with a kernel of odd length $N+1$, can be expressed as: \begin{align}\label{conveqn} \small y[n] & = b_{0}x[n+\tfrac{N}{2}]+b_{1}x[n+\tfrac{N}{2}-1]+\dots+b_{\tfrac{N}{2}}x[n]+\dots \nonumber\\ & +b_{N-1}x[n-\tfrac{N}{2}+1]+b_{N}x[n-\tfrac{N}{2}] \nonumber\\ & = \sum_{i=0}^{N}b_{i}\hspace{0.5mm}x[n+\tfrac{N}{2}-i] \end{align} where $b_{0}, b_{1}, \dots, b_{N}$ are the kernel weights.
Considering a causal system, the output of the convolutional layer becomes: \begin{equation} y[n - \tfrac{N}{2}] = \sigma\left(\beta +\sum_{i=0}^{N}b_{i}x[n-i]\right) \end{equation} where $\sigma(\cdot)$ is the activation function and $\beta$ is the bias term. Therefore, a 1D convolutional layer with linear activation and zero bias acts as an FIR filter with an added delay of $N/2$ \cite{matei2006CNNFIR}. We denote such layers as time-convolutional (tConv) layers \cite{sainath2015google}. Naturally, the kernels of these layers (similar to filter-bank coefficients) can be updated with Stochastic Gradient Descent (SGD). These layers therefore replace the static filters that decompose the pre-processed signal into four bands (Sec. \ref{potes}). We use a special variant of the tConv layer that learns coefficients with a linear phase (LP) response. \subsection{Transfer Learning from Physionet Model} Our proposed tConv neural network is trained on the Physionet CinC Challenge Dataset with four-fold in-house cross-validation \cite{ahmed2018}. The model achieves a mean cross-validation accuracy of $87.10\%$ and recall of $90.91\%$. The weights up to the flatten layer are transferred \cite{yosinski2014transferable} to a new convolutional neural network architecture with a fully connected network of two hidden layers ($239$ and $20$ neurons) and $3$ output neurons for the \emph{Normal}, \emph{Mild} and \emph{Severe} classes (Fig. \ref{bottle}). The model weights are fine-tuned on TL-Data, which comprises all of the samples from the INTERSPEECH ComParE Dataset and the \emph{Normal} signals from the Physionet in-house validation fold from which the trained weights are transferred. We chose the weights of a model trained on Fold 1 for its better per-cardiac-cycle validation accuracy. The cross-entropy loss is optimized with a stochastic gradient descent optimizer with a learning rate of $4.5\times10^{-5}$. Dropout of $0.5$ is applied to all of the layers except for the output layer.
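The FIR interpretation of a tConv layer derived above can be checked numerically; a minimal NumPy sketch on random data:

```python
# Numerical check of the tConv/FIR correspondence: a "same"-padded
# cross-correlation (what a 1D conv layer with linear activation and zero
# bias computes, with the taps of b read in reverse order) equals the
# causal FIR filter y[n] = sum_i b[i] x[n-i] advanced by N/2 samples.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
b = rng.standard_normal(5)   # FIR order N = 4, odd kernel length N + 1 = 5
N = len(b) - 1

# Causal FIR filter with zero initial conditions.
fir = np.convolve(x, b)[: len(x)]

# Cross-correlation with "same" zero padding, kernel taps reversed.
xp = np.pad(x, N // 2)
tconv = np.array([np.dot(b[::-1], xp[n : n + N + 1]) for n in range(len(x))])

# The tConv output equals the causal FIR output advanced by N/2 samples.
assert np.allclose(tconv[: len(x) - N // 2], fir[N // 2 :])
```

In a deep learning framework the same effect is obtained by a `Conv1D` layer with linear activation and no bias, whose weights play the role of the (reversed) filter coefficients.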
The model hyperparameters were \emph{not} optimized while fine-tuning with TL-Data. The cost function was weighted to account for the class imbalance. \section{Representation Learning (RL) with Recurrent Autoencoders} \begin{figure}[h] \includegraphics[width=\linewidth]{autoreco} \centering \caption{Reconstructed Mel-spectrograms of a recording thresholded to reduce background noise (a) below $-30$ dB, (b) below $-45$ dB.} \label{recon} \end{figure} Representation learning is of particular interest when a large amount of unlabeled data is available alongside a smaller labeled dataset. Considering the two corpora at hand, we approach the problem from a semi-supervised representation learning perspective: we train recurrent sequence-to-sequence autoencoders \cite{sutskever2014sequence} on unlabeled RL-Data (Sec. \ref{database}) and then use lower dimensional representations of SL-Data to train classifiers. Sequence-to-sequence learning is about translating sequences from one domain to another. Unsupervised sequence-to-sequence representation learning was popularized by its use in machine translation \cite{bahdanau2014neural}. It has also been employed for audio classification with success \cite{amiriparian2017sequence}. It offers the chance of resolving the overfitting problem experienced when training an end-to-end deep learning model. First, mel-spectrograms with $126$ bands are extracted using a window size of $320$ ms and $50\%$ overlap. The raw audio files are clipped to $30$ seconds in length. To reduce background noise, the spectrogram is thresholded below $-30$, $-45$, $-60$ and $-75$ dB, resulting in four different spectrograms. The model is trained on each of these separately, which results in four different feature sets. Both the encoder and decoder Recurrent Neural Networks have $2$ hidden layers with $256$ Gated Recurrent Units each. The final hidden states of all the GRUs are concatenated into a $1024$ dimensional feature vector. Fig.
\ref{recon} portrays the reconstructed outputs for mel-spectrograms thresholded below $-30$ dB and $-45$ dB. The four feature vectors for the four different spectrograms are also concatenated to form fused features. Feature representations of SL-Data were used to train classifiers. The model is deployed and trained using the \textsc{auDeep} toolkit \cite{freitag2017audeep}. \section{Supervised Learning with Segment-level Features} \subsection{ComParE Acoustic Feature Set} In this sub-system, we utilize the acoustic feature set described in \cite{weninger2013acoustics}. This feature set contains $6373$ static features resulting from the computation of various functionals over Low-Level Descriptor (LLD) parameters \cite{schuller2018interspeech}. The LLD parameters and functionals utilized are described in \cite{weninger2013acoustics}. The features are extracted using the openSMILE toolkit \cite{eyben2010opensmile}. \subsection{Classifiers} We have implemented several machine learning algorithms for heart sound classification from the ComParE acoustic feature set. The evaluated classifiers include Support Vector Machine (SVM), Linear Discriminant Analysis (LDA), and Multi-Layer Perceptron (MLP). An SVM classifier with complexity C$=10^{-4}$ and tolerance L$=0.3$ outperformed the other classifiers. \section{Experimental Evaluation and Results}\label{results} The evaluation metric for the INTERSPEECH ComParE Challenge is Unweighted Average Recall (UAR) since the datasets are class-unbalanced. We also monitor class-wise recall and accuracy to evaluate model performance. Performance metrics on both the development and test sets are listed in Table \ref{metrics} together with the training datasets used. The Comp-SVM model, evaluated on the ComParE test set, achieved 45.9\% UAR and 51.5\% overall accuracy. Our transfer learning based model with a variant of our proposed tConv layer achieved improved performance compared to the end-to-end deep learning baseline (END2YOU).
With representation learning, training on the larger fused corpus improved performance on the development set but significantly reduced performance on the test set. The Comp-SVM, RL-SVM and LP-tConv models are ensembled using a majority voting algorithm, yielding a UAR of 57.92\% on the development set and 39.2\% on the test set. To improve the \emph{Normal} hit rate, a hierarchical decision system is implemented in which an LP-tConv network trained on the Physionet/CinC database is first used for binary classification between \emph{Normal} and \emph{Abnormal} recordings. Following that, an ensemble of Comp-SVM, RL-SVM and LP-tConv is used to classify between the \emph{Mild} and \emph{Severe} classes. The hierarchical model achieved a development set UAR of 57.93\% and a test set UAR of 42.1\%. \begin{table}[t] \centering \caption{Performance evaluation of proposed methods compared to the official baseline systems.} \label{metrics} \resizebox{\linewidth}{!}{% \begin{tabular}{|c|c|c|c|c|c|c|} \hline \hline \multicolumn{7}{|c|}{\emph{Baseline Systems}}\\ \hline \hline Model Name & Dataset & Features & Classifiers & UAR (\%) dev & Acc. (\%) dev & UAR (\%) test\\ \hline OPENSMILE\cite{schuller2018interspeech} & \begin{tabular}[c]{@{}c@{}}INTERSPEECH\\ ComParE HSS\end{tabular} & \begin{tabular}[c]{@{}c@{}}ComParE\\Feature set\end{tabular} & SVM & 50.3 & 52.2 & 46.4 \\ \hline AUDEEP\cite{schuller2018interspeech} & \begin{tabular}[c]{@{}c@{}}INTERSPEECH\\ ComParE HSS\end{tabular} & \begin{tabular}[c]{@{}c@{}}Fused\\ Autoencoder\\ Features\end{tabular} & SVM & 38.6 & - & 47.9 \\ \hline END2YOU\cite{schuller2018interspeech} & \begin{tabular}[c]{@{}c@{}}INTERSPEECH\\ ComParE HSS\end{tabular} & CNN & LSTM & 41.2 & - & 37.7 \\ \hline \multicolumn{4}{|c|}{Fusion of best 2 systems \cite{schuller2018interspeech}} & - & - & 56.2 \\\hline\hline \multicolumn{7}{|c|}{\emph{Proposed Systems}}\\ \hline \hline Model Name & Dataset & Features & Classifiers & UAR (\%) dev & Acc.
(\%) dev & UAR (\%) test\\ \hline ComP-SVM & \begin{tabular}[c]{@{}c@{}}INTERSPEECH\\ ComParE HSS\end{tabular} & \begin{tabular}[c]{@{}c@{}}ComParE\\ Feature set\end{tabular} & SVM & 52.1 & 53.9 & 45.9 \\ \hline RL-SVM & \begin{tabular}[c]{@{}c@{}}RL-Data\\ SL-Data\end{tabular} & \begin{tabular}[c]{@{}c@{}}$-60$ dB\\ Autoencoder\\ Features\end{tabular} & SVM & 42.9 & 48.9 & - \\ \hline RL-LDA & \begin{tabular}[c]{@{}c@{}}RL-Data\\ SL-Data\end{tabular} & \begin{tabular}[c]{@{}c@{}}$-60$ \& $-75$ dB\\ Autoencoder\\ Features\end{tabular} & LDA & 51.4 & 54.4 & 34.4 \\ \hline LP-tConv & TL-Data & \begin{tabular}[c]{@{}c@{}}tConv\\ CNN\end{tabular} & MLP & 44.6 & 56.1 & 39.5 \\ \hline\hline \multicolumn{7}{|c|}{\emph{System Ensembles}}\\ \hline \hline \multicolumn{4}{|c|}{Ensemble System Name} & UAR (\%) dev & Acc. (\%) dev & UAR (\%) test\\ \hline \multicolumn{4}{|c|}{Fusion of Comp-SVM, RL-SVM and LP-tConv models} & 57.92 & 63.9 & 39.3 \\\hline \multicolumn{4}{|c|}{Hierarchical with Fusion} & 57.93 & 64.2 & 42.1 \\\hline \end{tabular}% } \end{table} \begin{figure}[t] \includegraphics[width=\linewidth,trim={1cm 3.2cm 2cm 1cm}]{representation.pdf} \centering \caption{Mean values of the $4096$ features learned from the 4 mel-spectrograms by the RNN-Autoencoders.} \label{encoding} \end{figure} \section{Discussion}\label{disc} \begin{figure}[tb] \includegraphics[width=\linewidth,trim={1cm 1cm 1 1cm}]{classwise_recall.pdf} \centering \caption{Recall scores obtained on the validation data after each training epoch. A steady increase in the mild recall is visible while the recall for the other classes steadily decreases.} \label{recall} \end{figure} Our proposed end-to-end LP-tConv model surpassed the test set metric of the standalone end-to-end baseline model (END2YOU). The other proposed systems failed to beat the baseline systems' test set UAR while outperforming them on the development set UAR. This could indicate overfitting on the development set.
On the other hand, a tendency of overfitting on the test set was visible for the baseline systems. This is because the individual approaches/hyperparameters performing best on the test set were chosen as baselines \cite{schuller2018interspeech}. A generalized feature-classifier system should yield similar UAR on both the development and test datasets if the development and test data distributions are consistent. This was noticeable only for the openSMILE features with an SVM classifier. More interesting insights were revealed during the training of the recurrent autoencoders. The lower dimensional representations learned were different for the Physionet CinC Challenge database and the INTERSPEECH ComParE HSS database. The RL model was trained on both RL-Data and the INTERSPEECH HSS database. Fig. \ref{encoding} shows the mean of the concatenated (fused) representations learned from the 4 mel-spectrograms. A distinct difference can be visualized from feature dimension 1700 onwards. The last 2048 dimensions are representations learned from the $-60$ dB and the $-75$ dB mel-spectrograms; these are the dimensions where the feature means deviate the most. Quite interestingly, the $-60$ dB and $-75$ dB spectrogram features yield better results compared to the others. After training the model with preprocessed signals (resampled to 1000 Hz and band-pass filtered between 20-400 Hz), the representation differences in the mean were reduced for certain dimensions. This could mean that the corresponding dimensions represent information from the higher end of the frequency spectrum. Another observation made during experimentation was the \emph{Normal} recall vs. \emph{Mild}/\emph{Severe} recall trade-off. While training an end-to-end LP-tConv model, we observed a divergent behavior between the normal and mild/severe recall metrics (Fig. \ref{recall}), which persisted even when the percentage of \emph{Normal} recordings was greater than that of \emph{Mild} recordings.
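The challenge metric and the per-class recalls discussed above can be computed with standard tooling; a minimal sketch on dummy labels (not the actual challenge predictions):

```python
# UAR is the unweighted mean of the per-class recalls; scikit-learn's
# macro-averaged recall computes exactly this. Labels below are dummies.
from sklearn.metrics import recall_score

y_true = ["normal", "mild", "severe", "mild", "severe", "normal"]
y_pred = ["normal", "mild", "mild",   "mild", "severe", "mild"]

per_class = recall_score(y_true, y_pred, average=None,
                         labels=["normal", "mild", "severe"])
uar = recall_score(y_true, y_pred, average="macro")

# UAR equals the plain average of the per-class recalls.
assert abs(uar - per_class.mean()) < 1e-12
```

Tracking `per_class` after each epoch is how the trade-off in Fig. \ref{recall} can be monitored.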
\section{Conclusions} In this work, we have presented an ensemble of classifiers for automatically detecting abnormal heart sounds of different severity levels for the INTERSPEECH 2018 ComParE Heart Beats Sub-Challenge. The primary framework was based on transfer learning of parameters from a 1D-CNN model pre-trained on the Physionet HS Classification dataset. We have also deployed unsupervised feature representation learning from mel-spectrograms using a deep autoencoder based architecture. Finally, we have implemented a segment-level feature based system using the ComParE feature set and an SVM classifier. The final hierarchical ensemble of these systems provided a UAR of 57.9\% on the development dataset and 42.1\% on the test dataset. \section{Acknowledgement} The Titan X Pascal used for this research was donated by the NVIDIA Corporation. \bibliographystyle{IEEEtran} \balance
\section{Introduction}\label{S:intro} This paper is the second installment, following \cite{KYP1}, on the infinite dimensional bounded real lemma for discrete-time systems and the discrete-time Kalman-Yakubovich-Popov (KYP) inequality. In this context, we consider the discrete-time linear system \begin{equation}\label{dtsystem} \Si:=\left\{ \begin{array}{ccc} \bx(n+1)&=&A \bx(n)+B \bu(n),\\ \by(n)&=&C \bx(n)+D \bu(n), \end{array} \right. \qquad (n\in\BZ) \end{equation} where $A:\cX\to\cX$, $B:\cU\to\cX$, $C:\cX\to\cY$ and $D:\cU\to\cY$ are bounded linear Hilbert space operators, i.e., $\cX$, $\cU$ and $\cY$ are Hilbert spaces and the {\em system matrix} associated with $\Si$ takes the form \begin{equation}\label{sysmat} M=\mat{cc}{A&B\\ C& D}:\mat{cc}{\cX\\ \cU}\to\mat{c}{\cX\\ \cY}. \end{equation} We refer to the pair $(C,A)$ as the {\em output pair} and to the pair $(A,B)$ as the {\em input pair}. In this case input sequences $\bu=(\bu(n))_{n\in\BZ}$, with $\bu(n)\in\cU$, are mapped to output sequences $\by=(\by(n))_{n\in\BZ}$, with $\by(n)\in\cY$, through the state sequence $\bx=(\bx(n))_{n\in\BZ}$, with $\bx(n)\in \cX$. A system trajectory of the system $\Si$ is then any triple $(\bu(n),\bx(n),\by(n))_{n\in\BZ}$ of input, state and output sequences that satisfy the system equations \eqref{dtsystem}. With the system $\Si$ we associate the {\em transfer function} given by \begin{equation}\label{trans} F_\Si(\lambda)=D+\lambda C(I-\lambda A)^{-1}B. \end{equation} Since $A$ is bounded, $F_\Si$ is defined and analytic on a neighborhood of $0$ in $\BC$. 
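For a finite-dimensional example, $F_\Si$ can be evaluated numerically directly from \eqref{trans}; the matrices below are illustrative data, not tied to any system discussed in the text.

```python
# Evaluate F(lam) = D + lam * C (I - lam A)^{-1} B for a small
# finite-dimensional system; illustrative matrices only.
import numpy as np

def transfer_function(A, B, C, D, lam):
    """F(lam) = D + lam * C (I - lam A)^{-1} B."""
    n = A.shape[0]
    return D + lam * (C @ np.linalg.solve(np.eye(n) - lam * A, B))

A = np.array([[0.5, 0.1], [0.0, 0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.2]])

# F is analytic near 0 with F(0) = D.
assert np.allclose(transfer_function(A, B, C, D, 0.0), D)

# This A has spectral radius < 1, so F is analytic on the closed disk and
# its sup norm over the disk can be estimated by sampling the boundary.
thetas = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
sup_norm = max(
    np.linalg.norm(transfer_function(A, B, C, D, np.exp(1j * t)), 2)
    for t in thetas
)
```

When `sup_norm` is at most one, the sampled $F$ is (numerically) a Schur-class candidate in the sense defined below.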
We are interested in the case where $F_\Si$ admits an analytic continuation to the open unit disk $\BD$ such that the supremum norm $\|F_\Si\|_\infty$ of $F_\Si$ over $\BD$ is at most one, i.e., $F_\Si$ has an analytic continuation to a function in the Schur class \[ \cS(\cU, \cY) = \left\{ F \colon {\mathbb D} \underset{\text{holo}}\mapsto \cL(\cU, \cY) \colon \| F(\lambda) \| \le 1 \text{ for all } \lambda \in {\mathbb D}\right\}. \] Sometimes we also consider system trajectories $(\bu(n),\bx(n),\by(n))_{n\ge n_0}$ of the system $\Si$ that are initiated at a certain time $n_0\in\BZ$, in which case the input, state and output at time $n<n_0$ are set equal to zero, and we only require that the system equations \eqref{dtsystem} are satisfied for $n\geq n_0$. Although technically such trajectories are not system trajectories for $\Si$, but rather trajectories of the corresponding singly-infinite forward-time system, the transfer function of this singly-infinite forward-time system coincides with the transfer function $F_\Si$ of $\Si$. Hence, for our objective of determining whether $F_\Si\in \cS(\cU,\cY)$, there is no problem in considering such singly infinite system trajectories. Before turning to the infinite-dimensional setting, we first discuss the case where $\cU$, $\cX$, $\cY$ are all finite-dimensional. If in this case one considers the parallel situation in continuous time rather than in discrete time, these ideas have origins in circuit theory, specifically conservative or passive circuits. An important question in this context is to identify which rational matrix functions, analytic on the left half-plane (rather than the unit disk $\BD$), arise from a lossless or dissipative circuit in this way (see e.g. Belevitch \cite{Bel}).
According to Willems \cite{Wil72a, Wil72b}, a linear system $\Sigma$ as in \eqref{dtsystem} is {\em dissipative} (with respect to {\em supply rate} $s(u,y) = \| u \|^2 - \| y \|^2$) if it has a {\em storage function} $S \colon \cX \to {\mathbb R}_+$, where $S(x)$ is to be interpreted as a measure of the {\em energy} stored by the system when it is in state $x$. Such a storage function $S$ is assumed to satisfy the dissipation inequality \begin{equation} \label{diss} S(\bx(n+1)) - S(\bx(n)) \le \|\bu(n) \|^2 - \| \by(n) \|^2 \end{equation} over all trajectories $(\bu(n), \bx(n), \by(n))_{n\in\BZ}$ of the system $\Sigma$ as well as the additional normalization condition that $S(0) = 0$. The dissipation inequality can be interpreted as saying that for the given system trajectory, the energy stored in the system ($S(\bx(n+1)) - S(\bx(n))$) when going from state $\bx(n)$ to $\bx(n+1)$ can be no more than the difference between the energy that enters the system ($\|\bu(n) \|^2$) and the energy that leaves the system ($\| \by(n) \|^2$) at time $n$. For our discussion here we shall only be concerned with the so-called {\em scattering supply rate} $s(u,y) = \| u \|^2 - \| y \|^2$. It is not hard to see that a consequence of the dissipation inequality \eqref{diss} on system trajectories is that the transfer function $F_\Sigma$ is in the Schur class $\cS(\cU, \cY)$. The results extend to nonlinear systems as well (see \cite{Wil72a}), where one talks about the system having $L^2$-gain at most $1$ rather than the system having transfer function in the Schur class. In case the system $\Sigma$ is finite-dimensional and minimal (as defined in the statement of Theorem \ref{T:BRLfinstan} below), one can show that the smallest storage function, the {\em available storage} $S_a$, and the largest storage function, the {\em required supply} $S_r$, are {\em quadratic}, provided storage functions for $\Si$ exist.
That $S_a$ and $S_r$ are quadratic means that there are positive-definite matrices $H_a$ and $H_r$ so that $S_a$ and $S_r$ have the quadratic form $$ S_a(x) = \langle H_a x, x \rangle, \quad S_r(x) = \langle H_r x, x \rangle $$ with $H_a$ and $H_r$ actually being positive-definite. For a general quadratic storage function $S_H(x) = \langle H x, x \rangle$ for a positive-definite matrix $H$, it is not hard to see that the dissipation inequality \eqref{diss} assumes the form of a linear matrix inequality (LMI): \begin{equation} \label{KYP1} \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{*} \begin{bmatrix} H & 0 \\ 0 & I_{\cY} \end{bmatrix} \begin{bmatrix} A & B \\ C & D\end{bmatrix} \preceq \begin{bmatrix} H & 0 \\ 0 & I_{\cU}\end{bmatrix}. \end{equation} This is what we shall call the {\em Kalman-Yakubovich-Popov} or KYP inequality (with solution $H$ for given system matrix $M = \sbm{ A & B \\ C & D}$). Conversely, if one starts with a finite-dimensional, minimal, linear system $\Si$ as in \eqref{dtsystem} for which the transfer function $F_\Sigma$ is in the Schur-class, it is possible to show that there exist quadratic storage functions $S_H$ for the system satisfying the coercivity condition $S_H(x) \ge \delta \| x \|^2$ for some $\delta > 0$ (i.e., with $H$ strictly positive-definite). This is the storage-function interpretation behind the following result, known as the {\em Kalman-Yakubovich-Popov lemma}. 
\begin{theorem}[Standard Bounded Real Lemma (see \cite{AV})] \label{T:BRLfinstan} Let $\Si$ be a discrete-time linear system as in \eqref{dtsystem} with $\cX$, $\cU$ and $\cY$ finite dimensional, say $\cU = {\mathbb C}^{r}$, $\cY = {\mathbb C}^{s}$, $\cX = {\mathbb C}^{n}$, so that the system matrix $M$ has the form \begin{equation}\label{findimsys} M = \begin{bmatrix} A & B \\ C & D \end{bmatrix} \colon \begin{bmatrix} {\mathbb C}^{n} \\ {\mathbb C}^{r} \end{bmatrix} \to \begin{bmatrix} {\mathbb C}^{n} \\ {\mathbb C}^{s} \end{bmatrix} \end{equation} and the transfer function $F_{\Sigma}$ is equal to a rational matrix function of size $s \times r$. Assume that the realization $(A,B,C,D)$ is {\em minimal}, i.e., the output pair $(C,A)$ is {\em observable} and the input pair $(A,B)$ is {\em controllable}: \begin{equation}\label{obscontr} \bigcap_{k=0}^{n} \kr C A^{k} = \{0\}\ands \textup{span}_{k=0,1,\dots, n-1} \im A^{k} B = \cX = {\mathbb C}^{n}. \end{equation} Then $F_{\Sigma}$ is in the Schur class $\cS({\mathbb C}^{r}, {\mathbb C}^{s})$ if and only if there is an $n \times n$ positive-definite matrix $H$ satisfying the KYP-inequality \eqref{KYP1}. \end{theorem} There is also a {\em strict} version of the Bounded Real Lemma. The associated storage function required is a {\em strict storage function}, i.e., a function $S \colon \cX \to {\mathbb R}_+$ for which there is a number $\delta > 0$ so that \begin{equation} \label{diss-strict} S(\bx(n+1)) - S(\bx(n)) + \delta \| x(n) \|^2 \le (1- \delta) \| \bu(n)\|^2 - \| \by(n) \|^2 \end{equation} holds over all system trajectories $(\bu(n), \bx(n), \by(n))_{n\in\BZ}$, in addition to the normalization condition $S(0)=0$. 
If $S_H(x) = \langle H x, x \rangle$ is a quadratic strict storage function, then the associated linear matrix inequality is the {\em strict KYP-inequality} \begin{equation} \label{KYP2} \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{*} \begin{bmatrix} H & 0 \\ 0 & I_{\cY} \end{bmatrix} \begin{bmatrix} A & B \\ C & D\end{bmatrix} \prec \begin{bmatrix} H & 0 \\ 0 & I_{\cU}\end{bmatrix}. \end{equation} In this case, one also arrives at a stronger condition on the transfer function $F_\Si$, namely that it has an analytic continuation to a function in the {\em strict Schur class}: \[ \cS^{o}(\cU, \cY) =\left \{ F \colon {\mathbb D} \underset{\text{holo}} \mapsto \cL(\cU, \cY) \colon \sup_{z \in {\mathbb D}} \| F(z) \| \le \rho \text{ for some } \rho < 1\right\}. \] Note, however, that the strict KYP-inequality implies that $A$ is stable, so that in case \eqref{KYP2} holds, $F_\Si$ is in fact analytic on $\BD$. This is the storage-function interpretation of the following strict Bounded Real Lemma, in which one replaces the minimality condition with a stability condition. \begin{theorem}[Strict Bounded Real Lemma (see \cite{PAJ})]\label{T:BRLfinstrict} Suppose that the dis\-crete-time linear system $\Si$ is as in \eqref{dtsystem} with $\cX$, $\cU$ and $\cY$ finite dimensional, say $\cU = {\mathbb C}^{r}$, $\cY = {\mathbb C}^{s}$, $\cX = {\mathbb C}^{n}$, i.e., the system matrix $M$ is as in \eqref{findimsys}. Assume that $A$ is {\em stable}, i.e., all eigenvalues of $A$ are inside the open unit disk $\BD$, so that the spectral radius of $A$ is less than $1$ and the transfer function $F_{\Si}(z)$ is analytic on a neighborhood of $\overline{\BD}$. Then $F_{\Si}(z)$ is in the strict Schur class $\cS^{o}({\mathbb C}^{r}, {\mathbb C}^{s})$ if and only if there is a positive-definite matrix $H \in {\mathbb C}^{n \times n}$ so that the strict KYP-inequality \eqref{KYP2} holds.
\end{theorem} We now turn to the general case, where the state space $\cX$ and the input space $\cU$ and output space $\cY$ are allowed to be infinite-dimensional. In this case, the results are more recent, depending on the precise hypotheses. For generalizations of Theorem \ref{T:BRLfinstan}, much depends on what is meant by minimality of $\Si$, and hence by the corresponding notions of controllable and observable. Here are the three possibilities for controllability of an input pair $(A,B)$ which we shall consider. The third notion involves the controllability operator $\bW_c$ associated with the pair $(A,B)$ tailored to the Hilbert space setup, which in general is a closed, possibly unbounded operator with domain $\cD(\bW_c)$ dense in $\cX$ mapping into the Hilbert space $\ell^2_\cU({\mathbb Z}_-)$ of $\cU$-valued sequences supported on the negative integers ${\mathbb Z}_- =\{ -1, -2, -3, \dots \}$, as well as the observability operator $\bW_o$ associated with the pair $(C,A)$, which has similar properties. We postpone precise definitions and properties of these operators to Section \ref{S:review}. For an input pair $(A,B)$ we define the following notions of controllability: \begin{itemize} \item $(A,B)$ is {\em (approximately) controllable} if the reachability space \begin{equation} \label{ReachSpace} \Rea(A|B) = \operatorname{span}\{\im A^k B \colon k=0,1,2,\dots\} \end{equation} is dense in $\cX$. \item $(A,B)$ is {\em exactly controllable} if the reachability space $\Rea(A|B)$ is equal to $\cX$, i.e., each state vector $x \in \cX$ has a representation as a finite linear combination $x = \sum_{k=0}^K A^k B u_k$ for a choice of finitely many input vectors $u_0, u_1, \dots, u_K$ (in other words, every $x$ is a {\em finite-time reachable state}; see \cite[Definition 3.3]{OpmeerStaffans2008}). \item $(A,B)$ is {\em $\ell^2$-exactly controllable} if the $\ell^2$-adapted controllability operator $\bW_c$ has range equal to all of $\cX$: $ \bW_c\, \cD(\bW_c) = \cX$.
\end{itemize} If $(C,A)$ is an output pair, we have the dual notions of observability: \begin{itemize} \item $(C,A)$ is {\em (approximately) observable} if the input pair $(A^*, C^*)$ is (approximately) controllable, i.e., if the observability space \begin{equation} \label{ObsSpace} \Obs(C|A) = \operatorname{span} \{ \im A^{*k} C^* \colon k=0,1,2,\dots\} \end{equation} is dense in $\cX$, or equivalently, if $\cap_{k=0}^\infty \ker C A^k = \{0\}$. \item $(C,A)$ is {\em exactly observable} if the observability subspace $\Obs(C|A)$ is the whole space $\cX$. \item $(C,A)$ is {\em $\ell^2$-exactly observable} if the adjoint input pair $(A^*, C^*)$ is $\ell^2$-exactly controllable, i.e., if the adjoint $\bW_o^*$ of the $\ell^2$-adapted observability operator $\bW_o$ has full range: $\bW_o^*\, \cD(\bW_o^*) = \cX$. \end{itemize} Then we say that the system $\Sigma \sim (A,B,C,D)$ is \begin{itemize} \item {\em minimal} if $(A,B)$ is controllable and $(C,A)$ is observable, \item {\em exactly minimal} if both $(A,B)$ is exactly controllable and $(C,A)$ is exactly observable, and \item {\em $\ell^2$-exactly minimal} if both $(A,B)$ is $\ell^2$-exactly controllable and $(C,A)$ is $\ell^2$-exactly observable. \end{itemize} Despite the fact that the operators $A$, $B$, $C$ and $D$ associated with the system $\Si$ are all bounded, in the infinite dimensional analogue of the KYP-inequality \eqref{KYP1} unbounded solutions $H$ may appear. We therefore have to be more precise concerning the notion of positive-definiteness we employ. Suppose that $H$ is a (possibly unbounded) selfadjoint operator $H$ on a Hilbert space $\cX$ with domain $\cD(H)$ dense in $\cX$; we refer to \cite{RS} for background and details on this class and other classes of unbounded operators. 
Then we shall say: \begin{itemize} \item $H$ is {\em strictly positive-definite} (written $H \succ 0$) if there is a $\delta > 0$ so that $\langle Hx, x \rangle \ge \delta \| x \|^2$ for all $x \in \cD(H)$; \item $H$ is {\em positive-definite} if $\langle H x, x \rangle > 0$ for all nonzero $x \in \cD(H)$; \item $H$ is {\em positive-semidefinite} (written $H \succeq 0$) if $\langle H x, x \rangle \ge0$ for all $x \in \cD(H)$. \end{itemize} We also note that any (possibly unbounded) positive-semidefinite operator $H$ has a positive-semidefinite square root $H^\half$; as $H = H^\half \cdot H^\half$, we have $$ \cD(H) = \{ x \in \cD(H^\half) \colon H^\half x \in \cD(H^\half) \} \subset \cD(H^\half). $$ See e.g.\ \cite{RS} for details. Since solutions $H$ to the corresponding KYP-inequality may be unbounded, the KYP-inequality cannot necessarily be written in the LMI form \eqref{KYP1}, but rather, we require a spatial form of \eqref{KYP1} on the appropriate domain: For a (possibly unbounded) positive-definite operator $H$ on $\cX$ satisfying \begin{equation} \label{KYP1b'} A \cD(H^{\half}) \subset \cD(H^{\half}), \quad B \cU \subset \cD(H^{\half}), \end{equation} the spatial form of the KYP inequality takes the form: \begin{equation}\label{KYP1b} \left\| \begin{bmatrix} H^{\half} \! & \! 0 \\ 0 \! & \! I_{\cU} \end{bmatrix} \begin{bmatrix} x \\ u \end{bmatrix} \right\|^{2} - \left\| \begin{bmatrix} H^{\half} \! & \! 0 \\ 0 \! & \! I_{\cY} \end{bmatrix} \begin{bmatrix} A\! & \! B \\ C \! & \! D \end{bmatrix} \begin{bmatrix} x \\ u \end{bmatrix} \right\|^{2} \ge 0 \ \ (x \in \cD(H^{\half}),\, u \in \cU). \end{equation} The corresponding notion of a storage function will then be allowed to assume $+\infty$ as a value; this will be made precise in Section \ref{S:Storage}. With all these definitions out of the way, we can state the following three distinct generalizations of Theorem \ref{T:BRLfinstan} to the infinite-dimensional situation. 
\begin{theorem}[Infinite-dimensional standard Bounded Real Lemma] \label{T:BRLinfstan} Let $\Si$ be a discrete-time linear system as in \eqref{dtsystem} with system matrix $M$ as in \eqref{sysmat} and transfer function $F_\Si$ defined by \eqref{trans}. \begin{enumerate} \item[(1)] Suppose that the system $\Si$ is minimal, i.e., the input pair $(A,B)$ is controllable and the output pair $(C,A)$ is observable. Then the transfer function $F_{\Sigma}$ has an analytic continuation to a function in the Schur class $\cS(\cU, \cY)$ if and only if there exists a positive-definite solution $H$ of the KYP-inequality in the following generalized sense: $H$ is a closed, possibly unbounded, densely defined, positive-definite (and hence injective) operator on $\cX$ such that $\cD(H^\half)$ satisfies \eqref{KYP1b'} and $H$ solves the spatial KYP-inequality \eqref{KYP1b}. \item[(2)] Suppose that $\Sigma$ is exactly minimal. Then the transfer function $F_{\Sigma}$ has an analytic continuation to a function in the Schur class $\cS(\cU, \cY)$ if and only if there exists a bounded, strictly positive-definite solution $H$ of the KYP-inequality \eqref{KYP1}. In this case $A$ has a spectral radius of at most one, and hence $F_{\Sigma}$ is in fact analytic on $\BD$. \item[(3)] Statement {\rm(}2{\rm)} above continues to hold if the ``exactly minimal'' hypothesis is replaced by the hypothesis that $\Sigma$ be ``$\ell^2$-exactly minimal.'' \end{enumerate} \end{theorem} We shall refer to a closed, densely defined, positive-definite solution $H$ of \eqref{KYP1b'}--\eqref{KYP1b} as a positive-definite solution of the {\em generalized KYP-inequality}. The paper of Arov-Kaashoek-Pik \cite{AKP06} gives a penetrating treatment of item (1) in Theorem \ref{T:BRLinfstan}, including examples to illustrate various subtleties surrounding this result---e.g., the fact that the result can fail if one insists on classical bounded and boundedly invertible selfadjoint solutions of the KYP-inequality. 
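The finite-dimensional content of the theorem can be seen in executable form. The following sketch (with hypothetical $2\times 2$ data, not taken from the text) checks numerically that whenever the system matrix $M = \sbm{A & B \\ C & D}$ is a contraction, $H = I$ is a bounded, strictly positive-definite solution of the KYP-inequality \eqref{KYP1}, which in that case reduces to $I - M^* M \succeq 0$:

```python
import numpy as np

# Hypothetical finite-dimensional system Sigma ~ (A, B, C, D); the block
# system matrix M = [[A, B], [C, D]] is a contraction (spectral norm <= 1).
A = np.array([[0.5, 0.1], [0.0, 0.4]])
B = np.array([[0.3], [0.2]])
C = np.array([[0.2, 0.1]])
D = np.array([[0.1]])
M = np.block([[A, B], [C, D]])
assert np.linalg.norm(M, 2) <= 1.0

# With H = I the spatial KYP-inequality reads
#   ||x||^2 + ||u||^2 - ||A x + B u||^2 - ||C x + D u||^2 >= 0
# for all (x, u), i.e. the Gram matrix I - M^T M is positive semidefinite.
gram = np.eye(3) - M.T @ M
print(np.linalg.eigvalsh(gram).min() >= -1e-12)  # True
```

Of course, the substance of the theorem lies precisely in the infinite-dimensional case, where $H$ need not be bounded and the inequality must be interpreted in the spatial form \eqref{KYP1b'}--\eqref{KYP1b}.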
We believe that items (2) and (3) appeared for the first time in \cite{KYP1}, where a sketch of the proof of item (1) is also given. The idea behind the proofs of items (1)--(3) in \cite{KYP1} is to combine the result that a Schur-class function $S$ always has a contractive realization (i.e., such an $S$ can be realized as $S = F_\Sigma$ for a system $\Sigma$ as in \eqref{dtsystem} with system matrix $M$ in \eqref{sysmat} a contraction operator) with variations of the State-Space-Similarity Theorem (see \cite[Theorem 1.5]{KYP1}) for the infinite-dimensional situation under the conditions that hold in items (1)--(3); roughly speaking, under appropriate hypotheses, a State-Space-Similarity Theorem says that two systems $\Si$ and $\Si'$ whose transfer functions coincide on a neighborhood of zero can necessarily be transformed (in an appropriate sense) from one to the other via a change of state-space coordinates. In the present paper we revisit these three results from a different point of view: we adapt Willems' variational formulas to the infinite-dimensional setting, and in this context present the available storage $S_a$ and required supply $S_r$, as well as an $\ell^2$-regularized version $\uS_r$ of the required supply. It is shown, under appropriate hypotheses, that these are storage functions, with $S_a$ and $\uS_r$ being quadratic storage functions, i.e., $S_a$ agrees with $S_{H_a}(x)=\|H_a^\half x\|^2$ and $\uS_r$ agrees with $S_{H_r}(x)=\|H_r^\half x \|^2$ for $x$ in a suitably large subspace of $\cX$, where $H_a$ and $H_r$ are possibly unbounded, densely defined, positive-definite operators, which turn out to be positive-definite solutions to the generalized KYP-inequality. In this way we will arrive at a proof of item (1).
Further analysis of the behavior of $H_a$ and $H_r$, under additional restrictions on $\Si$, leads to proofs of items (2) and (3), as well as to the following version of the strict Bounded Real Lemma for infinite-dimensional systems, which is a much more straightforward generalization of the result in the finite-dimensional case (Theorem \ref{T:BRLfinstrict}). \begin{theorem}[Infinite-dimensional strict Bounded Real Lemma] \label{T:BRLinfstrict} Let $\Si$ be a dis\-crete-time linear system as in \eqref{dtsystem} with system matrix $M$ as in \eqref{sysmat} and transfer function $F_\Si$ defined by \eqref{trans}. Assume that $A$ is exponentially stable, i.e., $\spec(A) \subset \BD$. Then the transfer function $F_{\Sigma}$ is in the strict Schur class $\cS^{o}(\cU, \cY)$ if and only if there exists a bounded strictly positive-definite solution $H$ of the strict KYP-inequality \eqref{KYP2}. \end{theorem} Theorem \ref{T:BRLfinstrict} was proved by Petersen-Anderson-Jonckheere \cite{PAJ} for the con\-tinuous-time finite-dimensional setting by using what we shall call an $\epsilon$-regulariza\-tion procedure to reduce the result to the standard case (Theorem \ref{T:BRLfinstan}). In \cite{KYP1} we show how this same idea can be used in the infinite-dimensional setting to reduce the hard direction of Theorem \ref{T:BRLinfstrict} to the result of either item (2) or item (3) in Theorem \ref{T:BRLinfstan}. For the more general nonlinear setting, Willems \cite{Wil72a} was primarily interested in what storage functions look like assuming that they exist, while in \cite{Wil72b} for the finite-dimensional linear setting he reduced the existence problem to the existence theory for Riccati matrix equations.
Here we solve the existence problem for the more general infinite-dimensional linear setting by converting Willems' variational formulation of the available storage $S_a$ and an $\ell^2$-regularized version $\uS_r$ of his required supply $S_r$ to an operator-theoretic formulation amenable to explicit analysis. This paper presents a more unified approach to the different variations of the Bounded Real Lemma, in the sense that we present a pair of concretely defined, unbounded, positive-definite operators $H_a$ and $H_r$ that, under the appropriate conditions, form positive-definite solutions to the generalized KYP-inequality, and that have the required additional features under the additional conditions in items (2) and (3) of Theorem \ref{T:BRLinfstan} as well as Theorem \ref{T:BRLinfstrict}. We also make substantial use of connections with corresponding objects for the adjoint system $\Sigma^*$ (see \eqref{dtsystem*}) to complete the analysis and arrive at some order properties for the set of all solutions of the generalized KYP-inequality which are complementary to those in \cite{AKP06}. The paper is organized as follows. Together with the current introduction, the paper consists of seven sections. In Section \ref{S:review} we give the definitions of the observability operator $\bW_o$ and controllability operator $\bW_c$ associated with the system $\Si$ in \eqref{dtsystem} and recall some of their basic properties. In Section \ref{S:Storage} we define what is meant by a storage function in the context of infinite-dimensional discrete-time linear systems $\Si$ of the form \eqref{dtsystem}, as well as strict and quadratic storage functions, and we clarify the relations between quadratic (strict) storage functions and solutions to the (generalized) KYP-inequality.
Section \ref{S:ASRS} is devoted to the available storage $S_a$ and required supply $S_r$, two examples of storage functions, in the case where the transfer function of $\Si$ has an analytic continuation to a Schur class function. It is shown that $S_a$ and an $\ell^2$-regularized version $\uS_r$ of $S_r$ in fact agree with quadratic storage functions on a suitably large domain via explicit constructions of two closed, densely defined, positive-definite operators $H_a$ and $H_r$ that exhibit $S_a$ and $\uS_r$ as quadratic storage functions $S_{H_a}$ and $S_{H_r}$. In Section \ref{S:dual} we make explicit the theory for the adjoint system $\Sigma^*$ and the duality connections between $\Sigma$ and $\Sigma^*$. In Section \ref{S:order} we study the order properties of a class of solutions of the generalized KYP-inequality, and obtain the conditions under which $H_a$ and $H_r$ are bounded and/or boundedly invertible and thereby solutions of the classical KYP-inequality. These results are then used in Section \ref{S:BRLproof} to give proofs of Theorems \ref{T:BRLinfstan} and \ref{T:BRLinfstrict} via the storage function approach. \section{Review: minimality, controllability, observability} \label{S:review} In this section we recall the definitions of the observability operator $\bW_o$ and controllability operator $\bW_c$ associated with the discrete-time linear system $\Si$ given by \eqref{dtsystem}, as well as various of their basic properties which will be needed in the sequel. Detailed proofs of most of these results, along with additional properties, can be found in \cite[Section 2]{KYP1}.
For the case of a general system $\Sigma$, following \cite[Section 2]{KYP1}, we define the {\em observability operator} $\bW_{o}$ associated with $\Si$ to be the possibly unbounded operator with domain $\cD(\bW_{o})$ in $\cX$ given by \begin{equation} \label{bWo1} \cD(\bW_{o}) = \{ x \in \cX \colon \{ C A^{n} x\}_{n \ge 0} \in\ell^{2}_{\cY}({\mathbb Z}_{+})\} \end{equation} with action given by \begin{equation} \label{bWo2} \bW_{o} x = \{ C A^{n} x\}_{n \ge 0} \text{ for } x \in \cD(\bW_{o}). \end{equation} Dually, we define the {\em adjoint controllability operator} $\bW_{c}^{*}$ associated with $\Si$ to have domain \begin{equation} \label{bWc*1} \cD(\bW_{c}^{*}) = \{ x \in \cX \colon \{B^* A^{*(-n-1)} x\}_{n\le -1} \in\ell^{2}_{\cU}({\mathbb Z}_{-})\} \end{equation} with action given by \begin{equation} \label{bWc*2} \bW_{c}^{*} x = \{B^* A^{*(-n-1)} x\}_{n\le -1} \text{ for } x \in \cD(\bW_{c}^{*}). \end{equation} It is directly clear from the definitions of $\bW_o$ and $\bW_c^*$ that \begin{equation}\label{KerWoWc} \ker \bW_o=\Obs(C|A)^\perp\ands \ker \bW_c^*=\Rea(A|B)^\perp. \end{equation} We next summarize the basic properties of $\bW_c$ and $\bW_o$. \begin{proposition}[Proposition 2.1 in \cite{KYP1}] \label{P:WcWo'} Let $\Sigma$ be a system as in \eqref{dtsystem} with observability operator $\bW_o$ and adjoint controllability operator $\bW_c^*$ as in \eqref{bWo1}--\eqref{bWc*2}. Basic properties of the observability operator $\bW_o$ are: \begin{enumerate} \item[(1)] It is always the case that $\bW_o$ is a closed operator on its domain \eqref{bWo1}. \item[(2)] If $\cD(\bW_o)$ is dense in $\cX$, then the adjoint $\bW_o^*$ of $\bW_o$ is a closed and densely defined operator, by a general property of adjoints of closed operators with dense domain. Concretely for the case here, $\cD(\bW_o^*)$ contains the dense linear manifold $\ell_{\tu{fin},\cY}(\BZ_+)$ consisting of finitely supported sequences in $\ell^2_\cY(\BZ_+)$.
In general, one can characterize $\cD(\bW_{o}^{*})$ explicitly as the set of all $\by \in \ell^{2}_{\cY}({\mathbb Z}_{+})$ such that there exists a vector $x_{o} \in \cX$ such that the limit $$ \lim_{K \to \infty}\langle x, \sum_{k=0}^{K} A^{*k} C^{*} \by(k) \rangle_{\cX} $$ exists for each $x \in \cD(\bW_o)$ and is given by \begin{equation} \label{limit-o} \lim_{K \to \infty}\langle x, \sum_{k=0}^{K} A^{*k} C^{*} \by(k) \rangle_{\cX} = \langle x, x_{o} \rangle_{\cX}, \end{equation} with action of $\bW_{o}^{*}$ then given by \begin{equation} \label{Wo*act} \bW_{o}^{*} \by = x_{o} \end{equation} where $x_{o}$ is as in \eqref{limit-o}. In particular, $\ell_{\tu{fin}, \cY}({\mathbb Z}_+)$ is contained in $\cD(\bW_o^*)$ and the observability space defined in \eqref{ObsSpace} is given by $$ \Obs (C|A) = \bW_{o}^{*} \ell_{\tu{fin}, \cY}({\mathbb Z}_{+}). $$ Thus, if in addition $(C,A)$ is observable, then $\bW_o^*$ has dense range. \end{enumerate} Dual properties of the adjoint controllability operator $\bW_c^*$ are: \begin{enumerate} \item[(3)] It is always the case that the adjoint controllability operator $\bW_c^*$ is closed on its domain \eqref{bWc*1}. \item[(4)] If $\cD(\bW_c^*)$ is dense in $\cX$, then the controllability operator $\bW_c = (\bW_c^*)^*$ is closed and densely defined by a general property of the adjoint of a closed and densely defined operator. Concretely for the case here, $\cD(\bW_c)$ contains the dense linear manifold $\ell_{\tu{fin},\cU}(\BZ_-)$ of finitely supported sequences in $\ell^2_\cU(\BZ_-)$.
In general, one can characterize $\cD(\bW_{c})$ explicitly as the set of all $\bu \in \ell^{2}_{\cU}({\mathbb Z}_{-})$ such that there exists a vector $x_{c} \in \cX$ so that $$ \lim_{K \to \infty} \langle x, \sum_{k=-K}^{-1} A^{-k-1} B \bu(k) \rangle_{\cX} $$ exists for each $x \in \cD(\bW_{c}^{*})$ and is given by \begin{equation} \label{limit-c} \lim_{K \to \infty} \langle x, \sum_{k=-K}^{-1} A^{-k-1} B \bu(k) \rangle_{\cX} = \langle x, x_{c} \rangle_{\cX}, \end{equation} and action of $\bW_{c}$ then given by \begin{equation} \label{Wc-act} \bW_{c} \bu = x_{c} \end{equation} where $x_{c}$ is as in \eqref{limit-c}. In particular, the reachability space $\Rea (A|B)$ is equal to $\bW_{c} \ell_{{\rm fin}, \cU}({\mathbb Z}_{-})$. Thus, if in addition $(A,B)$ is controllable, then $\bW_c$ has dense range. \end{enumerate} \end{proposition} For systems $\Si$ as in \eqref{dtsystem}, without additional conditions, it can happen that $\bW_o$ and/or $\bW_c^*$ are not densely defined, and therefore the adjoints $\bW_o^*$ and $\bW_c$ are at best linear relations and difficult to work with. However, our interest here is the case where the transfer function $F_\Sigma$ has analytic continuation to a bounded function on the unit disk (or even in the Schur class, i.e., norm-bounded by $1$ on the unit disk). In this case the multiplication operator \begin{equation} \label{mult-op} M_{F_\Sigma} \colon f(\lambda) \mapsto F_\Sigma(\lambda) f(\lambda) \end{equation} is a bounded operator from $L^2_\cU({\mathbb T})$ to $L^2_\cY({\mathbb T})$ and hence also its compression to a map ``from past to future'' \begin{equation} \label{freq-Hankel} {\mathbb H}_{F_\Sigma} = P_{H^2_\cY({\mathbb D})} M_{F_\Sigma}|_{H^2_\cU({\mathbb D})^\perp}, \end{equation} often called the {\em Hankel operator} with symbol $F_\Sigma$, is also bounded (by $\| M_{F_\Sigma} \|$). 
If we take the inverse $Z$-transform to represent $L^2({\mathbb T})$ as $\ell^2({\mathbb Z})$, $H^2({\mathbb D})$ as $\ell^2({\mathbb Z}_+)$ and $H^2({\mathbb D})^\perp$ as $\ell^2({\mathbb Z}_-)$, then the frequency-domain Hankel operator $$ {\mathbb H}_{F_\Sigma} \colon H^2_\cU({\mathbb D})^\perp \to H^2_\cY({\mathbb D}) $$ given by \eqref{freq-Hankel} transforms via the inverse $Z$-transform to the time-domain Hankel operator $\fH_{F_\Sigma}$ with matrix representation \begin{equation} \label{Hankel-matrix} \fH_{F_\Sigma} = [ C A^{i-j-1} B ]_{i \ge 0, j<0} \colon \ell^2_\cU({\mathbb Z}_-) \to \ell^2_\cY({\mathbb Z}_+). \end{equation} We conclude that the Hankel matrix $\fH_{F_\Sigma}$ is bounded as an operator from $\ell^2_\cU({\mathbb Z}_-)$ to $\ell^2_\cY({\mathbb Z}_+)$ whenever $F_\Sigma$ has an analytic continuation to an $H^\infty$ function. From the matrix representation \eqref{Hankel-matrix} we see that the Hankel matrix formally has a factorization \begin{equation} \label{formal-Hank-fact} \fH_{F_\Sigma} = \tu{col} [C A^i]_{i \ge 0} \cdot \tu{row} [A^{-j-1} B]_{j<0} = \bW_o \cdot \bW_c. \end{equation} It can happen that $\fH_{F_\Sigma}$ is bounded while $\bW_o$ and $\bW_c$ are unbounded. Nevertheless, from the fact that $\fH_{F_\Sigma}$ is bounded one can see that $\Rea (A|B)$ is contained in $\cD(\bW_o)$ and $$ \fH_{F_\Sigma} \bu = \bW_o \left( \sum_{k=K}^{-1} A^{-1-k} B \bu(k) \right) \in \ell^2_\cY({\mathbb Z}_+) $$ for each finitely supported input string $\bu(K), \dots, \bu(-1)$. If we assume that $(A,B)$ is controllable, we conclude that $\bW_o$ is densely defined. Similarly, by working with boundedness of $\fH_{F_\Sigma}^*$ one can show that boundedness of $F_\Sigma$ on ${\mathbb D}$ leads to $\cD(\bW_c^*)$ containing the observability space $\Obs(C|A)$; hence if we assume that $(C,A)$ is observable, we get that $\bW_c^*$ is densely defined.
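As an informal numerical check of the formal factorization \eqref{formal-Hank-fact} (hypothetical finite-dimensional data, where stability of $A$ makes $\bW_o$ and $\bW_c$ bounded), one can compare a finite truncation of the Hankel matrix \eqref{Hankel-matrix} with the product of the truncated observability column and controllability row:

```python
import numpy as np

# Hypothetical stable system: the spectral radius of A is < 1.
A = np.array([[0.5, 0.2], [0.0, 0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, -1.0]])
N = 8  # truncation order

mp = np.linalg.matrix_power
# Truncated col[C A^i]_{0 <= i < N} and row[A^{-j-1} B]_{-N <= j < 0}.
Wo = np.vstack([C @ mp(A, i) for i in range(N)])
Wc = np.hstack([mp(A, -j - 1) @ B for j in range(-N, 0)])

# Truncated Hankel matrix [C A^{i-j-1} B]_{0 <= i < N, -N <= j < 0}.
Hankel = np.array([[(C @ mp(A, i - j - 1) @ B).item()
                    for j in range(-N, 0)] for i in range(N)])

print(np.allclose(Hankel, Wo @ Wc))  # True: the factorization holds
```

In the truncation the $(i,j)$ entry of the product is $C A^i \cdot A^{-j-1} B = C A^{i-j-1} B$, exactly the $(i,j)$ entry of the Hankel matrix; the subtleties addressed in the sequel arise only when $\bW_o$ and $\bW_c$ are unbounded.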
With these observations in hand, the following precise version of the formal factorization \eqref{formal-Hank-fact} for the case where $\bW_o$ and $\bW_c$ may be unbounded becomes plausible. \begin{proposition}[Corollary 2.4 and Proposition 2.6 in \cite{KYP1}] \label{P:HankelDecs} Suppose that the system $\Sigma$ given by \eqref{dtsystem} has transfer function $F_\Sigma$ with analytic continuation to an $H^\infty$-function on the unit disk ${\mathbb D}$. \begin{enumerate} \item[(1)] Assume that $\cD(\bW_c^*)$ is dense in $\cX$ {\rm(}as is the case if $(C,A)$ is observable{\rm)}. Then $\cD(\bW_{o})$ contains $\im \bW_{c} = \bW_{c} \cD(\bW_{c})$ and \begin{equation} \label{HankDec1} \fH_{F_{\Sigma}}|_{\cD(\bW_{c})} = \bW_{o} \bW_{c}. \end{equation} In particular, as $\ell_{{\rm fin}, \cU}({\mathbb Z}_-) \subset \cD(\bW_c)$ and $\bW_c \ell_{{\rm fin}, \cU}({\mathbb Z}_-) = \Rea(A|B)$ {\rm(}from Proposition \ref{P:WcWo'} {\rm(4))}, it follows that $\Rea(A|B) \subset \cD(\bW_o)$. \item[(2)] Assume that $\cD(\bW_o)$ is dense in $\cX$ {\rm(}as is the case if $(A,B)$ is controllable{\rm)}. Then $\cD(\bW_{c}^{*})$ contains $\im \bW_{o}^{*} = \bW_{o}^{*}\cD(\bW_{o}^{*})$ and \begin{equation} \label{HankDec2} \fH_{F_{\Sigma}}^{*}|_{\cD(\bW_{o}^{*})} = \bW_{c}^{*} \bW_{o}^{*}. \end{equation} In particular, as $\ell_{{\rm fin}, \cY}({\mathbb Z}_+) \subset \cD(\bW_o^*)$ and $\bW_o^* \ell_{{\rm fin}, \cY}({\mathbb Z}_+) = \Obs(C|A)$ {\rm(}from Proposition \ref{P:WcWo'} {\rm(2))}, it follows that $\Obs(C|A) \subset \cD(\bW_c^*)$. \item[(3)] In case the system matrix $M = \sbm{A & B \\ C & D}$ is contractive, then $\bW_o$ and $\bW_c$ also are bounded contraction operators and we have the bounded-operator factorizations \begin{equation} \label{HankDec3} \fH_{F_\Sigma} = \bW_o \bW_c, \quad (\fH_{F_\Sigma})^* = \bW_c^* \bW_o^*.
\end{equation} \end{enumerate} \end{proposition} The following result from \cite{KYP1} describes the implications of $\ell^2$-exact controllability and $\ell^2$-exact observability on the operators $\bW_o$ and $\bW_c$. \begin{proposition}[Corollary 2.5 in \cite{KYP1}] \label{P:ell2implics} Let $\Si$ be a discrete-time linear system as in \eqref{dtsystem} with system matrix $M$ as in \eqref{sysmat}. Assume that the transfer function $F_\Si$ defined by \eqref{trans} has an analytic continuation to an $H^{\infty}$-function on ${\mathbb D}$. \begin{itemize} \item[(1)] If $\Sigma$ is $\ell^2$-exactly controllable, then $\bW_o$ is bounded. \item[(2)] If $\Sigma$ is $\ell^2$-exactly observable, then $\bW_c$ is bounded. \item[(3)] If $\Sigma$ is $\ell^2$-exactly minimal, i.e., both $\ell^2$-exactly controllable and $\ell^2$-exactly observable, then $\bW_o$ and $\bW_c^*$ are both bounded and bounded below. \end{itemize} \end{proposition} The following result will be useful in the sequel. \begin{proposition} \label{P:Wc} Suppose that the discrete-time linear system $\Sigma$ given by \eqref{dtsystem} is minimal and that its transfer function $F_\Sigma$ has an analytic continuation to an $H^\infty$-function on ${\mathbb D}$, so (by Propositions \ref{P:WcWo'} and \ref{P:HankelDecs}) $\cD(\bW_c^*) \supset \Obs(C|A)$ is dense in $\cX$ and $\bW_c = (\bW_c^*)^*$ is densely defined with dense range $\im(\bW_c) \supset \Rea(A|B)$. \begin{enumerate} \item[(1)] Suppose that $(\bu(n), \bx(n), \by(n))_{n \ge n_{-1}}$ is a system trajectory of $\Si$ with initialization $\bx(n_{-1}) = 0$. Define an input string $\bu' \in \ell_{{\rm fin},\cU}({\mathbb Z}_-)$ by $$ \bu'(n) = \begin{cases} 0 &\text{if } n < n_{-1}, \\ \bu(n) &\text{if } n_{-1} \le n < 0. \end{cases} $$ Then $\bx(0) = \bW_c \bu'$. \item[(2)] Suppose that $\bu \in \ell^2_\cU({\mathbb Z}_-)$ is in $\cD(\bW_c)$ and $\widetilde u \in \cU$.
Define a new input string $\bu' \in \ell^2_\cU({\mathbb Z}_-)$ by $$ \bu'(n) = \begin{cases} \bu(n+1) & \text{if } n < -1, \\ \widetilde u & \text{if } n= -1. \end{cases} $$ Then $\bu' \in \cD(\bW_c)$ and $$ \bW_c \bu' = A \bW_c \bu + B \widetilde u. $$ \end{enumerate} \end{proposition} \begin{proof} We start with item (1). From item (4) of Proposition \ref{P:WcWo'} we see that $\ell_{{\rm fin}, \cU}({\mathbb Z}_-)$ is contained in $\cD(\bW_c)$, and thus $\bu'\in \cD(\bW_c)$. From formula \eqref{limit-c} for the action of $\bW_c$ on its domain we obtain that \begin{equation} \label{Wc-fin} \bW_c \bu' = \sum_{k \in {\mathbb Z}_-} A^{-k-1} B \bu'(k) =\sum_{k = n_{-1}}^{-1} A^{-k-1} B \bu(k) \end{equation} where the sum is well defined since there are only finitely many nonzero terms. By a standard induction argument, using the input-state equation in \eqref{dtsystem}, one verifies that this is the formula for $\bx(0)$ for a system trajectory $(\bu(n), \bx(n), \by(n))_{n \ge n_{-1}}$ with initialization $\bx(n_{-1}) = 0$. This verifies (1). As for item (2), it is easily verified that $\cD(\bW_c^*)$ is invariant under $A^*$ and that the following intertwining condition holds: $$ \bW_c^* A^*|_{\cD(\bW_c^*)} = \cS_- \bW_c^*, $$ with $\cS_-$ the truncated right shift operator on $\ell^2_\cU({\mathbb Z}_-)$ given by $$ ( \cS_- \bu)(n) = \bu(n-1) \text{ for } n \in {\mathbb Z}_-. $$ The adjoint version of this is that $\cD(\bW_c)$ is invariant under the untruncated left shift operator $\cS_-^*$ on $\ell^2_\cU({\mathbb Z}_-)$ given by $$ (\cS_-^* \bu)(n) = \begin{cases} \bu(n+1) &\text{if } n < -1, \\ 0 &\text{if } n=-1, \end{cases} $$ and we have the intertwining condition $$ \bW_c \cS_-^*|_{\cD(\bW_c)} = A \bW_c. $$ Next note that $\cS_-^* \bu= \bu' - \Pi_{-1} \wtil{u}$, with $\Pi_{-1}:\cU\to\ell^2_{\cU}(\BZ_-)$ the embedding of $\cU$ into the $-1$-th entry of $\ell^2_{\cU}(\BZ_-)$.
This implies that \[ \bu'=\cS_-^* \bu + \Pi_{-1} \wtil{u}\in \cS_-^* \cD(\bW_c)+ \ell_{{\rm fin},\cU}({\mathbb Z}_-)\subset \cD(\bW_c), \] and \begin{equation} \label{intertwine1} A \bW_c\bu =\bW_c \cS_-^*|_{\cD(\bW_c)}\bu =\bW_c (\bu' -\Pi_{-1}\wtil{u})=\bW_c \bu' -B\wtil{u}, \end{equation} which provides the desired identity. \end{proof} \begin{remark} \label{R:Wc} It is of interest to consider the shift $\bW_c^{(1)}$ of the controllability operator $\bW_c$ to the interval $(-\infty, 0]$ in place of ${\mathbb Z}_- = (-\infty, 0)$, i.e., $$ \bW_c^{(1)} = \bW_c \tau^{-1} $$ where the map $\tau$ transforms sequences $\bu$ supported on ${\mathbb Z}_- = (-\infty, 0)$ to sequences $\bu'$ supported on $(-\infty, 0]$ according to the action $$ (\tau \bu)(n) = \bu(n-1)\quad \text{ for } n \le 0 $$ with inverse given by $$ (\tau^{-1} \bv)(n) = \bv(n+1)\quad \text{ for } n < 0. $$ For all $\bu \in \ell^2_\cU({\mathbb Z}_-)$ and $\widetilde u \in \cU$, define a sequence $(\bu, \widetilde u) \in \ell^2_\cU((-\infty, 0])$ by $$ (\bu, \widetilde u)(n) = \begin{cases} \bu(n) &\text{if } n \in {\mathbb Z}_-, \\ \widetilde u &\text{if } n=0. \end{cases} $$ The result of item (2) in Proposition \ref{P:Wc} can be interpreted as saying: given $\bu \in \ell^2_\cU({\mathbb Z}_-)$ and $\widetilde u \in \cU$ we have \[ (\bu, \widetilde u) \in \cD(\bW_c^{(1)}) \quad \Longleftrightarrow \quad \bu \in \cD(\bW_c) \] and in that case $ \bW_c^{(1)} (\bu, \widetilde u) = A \bW_c \bu + B \widetilde u$. \end{remark} \section{Storage functions} \label{S:Storage} In the case of systems with an infinite-dimensional state space we allow storage functions to also attain $+\infty$ as a value. Set $[0,\infty]:= \BR_+\cup\{+\infty\}$.
Then, given a discrete-time linear system $\Si$ as in \eqref{dtsystem}, we say that a function $S \colon \cX \to [0, \infty]$ is a {\em storage function} for the system $\Sigma$ if the dissipation inequality \begin{equation}\label{disineq} S(\bx(n+1)) \le S(\bx(n)) + \| \bu(n) \|_{\cU}^{2} - \| \by(n)\|_{\cY}^{2} \text{ for } n \ge N_0 \end{equation} holds along all system trajectories $(\bu(n), \bx(n), \by(n))_{n \ge N_0}$ with state initialization $x(N_0) = x_0$ for some $x_0 \in \cX$ at some $N_0 \in {\mathbb Z}$, and $S$ is normalized to satisfy \begin{equation}\label{normalization'} S(0) = 0. \end{equation} As a first result we show that existence of a storage function for $\Si$ is a sufficient condition for the transfer function to have an analytic continuation to a Schur class function. \begin{proposition}\label{P:storage-Schur} Suppose that the system $\Sigma$ in \eqref{dtsystem} has a storage function $S$. Then the transfer function $F_{\Sigma}$ of $\Si$ defined in \eqref{trans} has an analytic continuation to a function in the Schur class $\cS(\cU, \cY)$. \end{proposition} The proof of Proposition \ref{P:storage-Schur} relies on the following observation, which will also be of use in the sequel. \begin{lemma}\label{L:finH2} Suppose that the system $\Sigma$ in \eqref{dtsystem} has a storage function $S$. For each system trajectory $(\bu(n),\bx(n),\by(n))_{n\in\BZ}$ and $N_0\in\BZ$ so that $\bx(N_0)=0$, the following inequalities hold for all $N\in\BZ_+$: \begin{align} S(\bx(N_0+N+1))&\le \sum_{n=N_0}^{N_0+N} \| \bu(n)\|_{\cU}^{2} - \sum_{n=N_0}^{N_0+N} \| \by(n) \|^{2}_{\cY};\label{Sbound}\\ \sum_{n=N_0}^{N_0+N} \| \by(n) \|^{2}_{\cY} &\le \sum_{n=N_0}^{N_0+N} \| \bu(n) \|^{2}_{\cU}. \label{IOdisineq} \end{align} \end{lemma} \begin{proof} By the translation invariance of the system $\Si$ we may assume without loss of generality that $N_0=0$, i.e., $\bx(0)=0$. 
From \eqref{disineq} and \eqref{normalization'} we get \[ S(\bx(1)) \le \| \bu(0)\|^{2} - \| \by(0) \|^{2} + S(0) = \| \bu(0)\|^{2} - \| \by(0) \|^{2} < \infty. \] Inductively, suppose that $S(\bx(n)) < \infty$. Then \eqref{disineq} gives us \[ S(\bx(n+1)) \le \| \bu(n) \|^{2}_{\cU} - \| \by(n) \|^{2}_{\cY} + S(\bx(n)) < \infty. \] We may now rearrange the dissipation inequality for $n\in\BZ_+$ in the form \begin{equation} \label{difdis} S(\bx(n+1)) - S(\bx(n)) \le \| \bu(n) \|^{2} - \| \by(n) \|^{2} \quad (n\in\BZ_+). \end{equation} Summing from $n=0$ to $n=N$ gives \[ 0 \le S(\bx(N+1)) \le \sum_{n=0}^{N} \| \bu(n)\|_{\cU}^{2} - \sum_{n=0}^{N} \| \by(n) \|^{2}_{\cY}, \] which leads to \[ \sum_{n=0}^{N} \| \by(n) \|^{2}_{\cY} \le \sum_{n=0}^{N} \| \bu(n) \|^{2}_{\cU} \text{ for all } N \in {\mathbb Z}_{+}. \] These inequalities prove \eqref{Sbound} and \eqref{IOdisineq} for $N_0=0$. As observed above, the case of $N_0\not=0$ is then obtained by translation of the system trajectory. \end{proof} \begin{proof}[Proof of Proposition \ref{P:storage-Schur}] Let $\bu \in \ell^{2}_{\cU}({\mathbb Z}_{+})$ and run the system $\Sigma$ with input sequence $\bu$ and initial condition $\bx(0)= 0$. From Lemma \ref{L:finH2}, with $N_0=0$, we obtain that for each $N\in\BZ_+$ we have \[ \sum_{n=0}^{N} \| \by(n) \|^{2}_{\cY} \le \sum_{n=0}^{N} \| \bu(n) \|^{2}_{\cU}\ \text{ for all }\ N \in {\mathbb Z}_{+}. \] Letting $N \to \infty$, we conclude that $\bu \in\ell^{2}_{\cU}({\mathbb Z}_{+})$ implies that the output sequence $\by$ is in $\ell^{2}_{\cY}({\mathbb Z}_{+})$ with $\| \by \|^{2}_{\ell^{2}_{\cY}({\mathbb Z}_{+})} \le \| \bu \|^{2}_{\ell^{2}_{\cU}({\mathbb Z}_{+})}$. Write $ \widehat u$ and $ \widehat y$ for the $Z$-transforms of $\bu$ and $\by$, respectively, i.e., $\widehat u(z) = \sum_{n=0}^{\infty} \bu(n) z^{n}$ and $\widehat y(z)= \sum_{n=0}^{\infty} \by(n) z^{n}$. 
Since we have imposed the zero initial condition on the state, it now follows that $\widehat y(z) = F_{\Sigma}(z) \widehat u(z)$ in a neighborhood of 0. Since $\bu$ was chosen arbitrarily in $\ell^{2}_\cU(\BZ_+)$, we see that $\widehat u$ is an arbitrary element of $H^2_\cU(\BD)$. Thus, the multiplication operator $M_{F_\Si} \colon \widehat u \mapsto F_\Si \cdot\widehat u$ maps $H^2_\cU(\BD)$ into $H^2_\cY(\BD)$. In particular, taking $\widehat u\in H^2_\cU(\BD)$ constant, it follows that $ F_\Si$ has an analytic continuation to $\BD$. Furthermore, the inequality \[ \|F_\Si \widehat u\|^{2}_{H^2_\cY(\BD)}=\|\widehat y\|^{2}_{H^2_\cY(\BD)}=\| \by \|^{2}_{\ell^{2}_{\cY}({\mathbb Z}_{+})} \le \| \bu \|^{2}_{\ell^{2}_{\cU}({\mathbb Z}_{+})}=\|\widehat u\|^{2}_{H^2_\cU(\BD)} \] implies that the operator norm of the multiplication operator $M_{F_\Si}$ from $H^{2}_{\cU}({\mathbb D})$ to $H^{2}_{\cY}({\mathbb D})$ is at most 1. It is well known that the operator norm of $M_{F_\Si}$ is the same as the supremum norm $\| F_\Si \|_{\infty} = \sup_{z \in {\mathbb D}} \| F_\Si(z) \|$. Hence we obtain that the analytic continuation of $F_\Si$ is in the Schur class $\cS(\cU, \cY)$. \end{proof} We shall see below (see Proposition \ref{P:SaSr}) that conversely, if the transfer function $F_\Sigma$ admits an analytic continuation to a Schur class function, then a storage function for $\Si$ exists.\medskip \paragraph{Quadratic storage functions} \label{S:QuadStorage} The class of storage functions associated with solutions to the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b} are the so-called {\em quadratic storage functions} described next.
We shall say that a storage function $S$ is {\em quadratic} in case there is a positive-semidefinite operator $H$ on the state space $\cX$ so that $S$ has the form \begin{equation}\label{QuadStorage1} S(x)=S_H(x)= \begin{cases} \| H^\half x \|^2 &\text{for } x \in \cD(H^\half), \\ +\infty &\text{ otherwise.} \end{cases} \end{equation} If in addition to $F_\Si$ having an analytic continuation to a Schur class function it is assumed that $\Si$ is minimal, it can in fact be shown (see Theorem \ref{T:Sar} below) that quadratic storage functions for $\Si$ exist; for the finite-dimensional case see \cite{Wil72b}. \begin{proposition}\label{P:QuadStorage} Suppose that the function $S \colon \cX \to [0, \infty]$ has the form \eqref{QuadStorage1} for a (possibly) unbounded positive-semidefinite operator $H$ on $\cX$. Then $S_H$ is a storage function for $\Sigma$ if and only if $H$ is a positive-semidefinite solution of the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b}. Moreover, $S$ is {\em nondegenerate} in the sense that $S_H(x) > 0$ for all nonzero $x$ in $\cX$ if and only if $H$ is positive-definite. \end{proposition} \begin{proof} Suppose that $H$ solves \eqref{KYP1b'}--\eqref{KYP1b}. It is clear that $S(0)=\|H^\half 0\|^2=0$, so in order to conclude that $S$ is a storage function it remains to verify the dissipation inequality \eqref{disineq}. Let $(\bu(n), \bx(n), \by(n))_{n \ge N_0}$ be a system trajectory with state initialization $\bx(N_0)=x_0$ for some $x_0\in\cX$ and $N_0\in\BZ$. Fix $n \ge N_0$. If $\bx(n) \notin \cD(H^\half)$, then $S_H(\bx(n)) = \infty$ and the dissipation inequality \eqref{disineq} is automatically satisfied. If $\bx(n) \in \cD(H^\half)$, then \eqref{KYP1b'} implies that $\bx(n+1)=A\bx(n)+B \bu(n)\in \cD(H^\half)$. Thus $S_H(\bx(n+1))<\infty$.
Replacing $x$ by $\bx(n)$ and $u$ by $\bu(n)$ in \eqref{KYP1b} and applying \eqref{dtsystem} we obtain that \[ \left\|\mat{cc}{H^\half &0\\ 0& I_\cU}\mat{c}{\bx(n)\\ \bu(n)}\right\|^2 -\left\|\mat{cc}{H^\half&0\\ 0& I_\cY}\mat{c}{\bx(n+1)\\ \by(n)}\right\|^2 \geq 0. \] This can be rephrased in terms of $S_H$ as \[ S_H(\bx(n))+\|\bu(n)\|^2-S_H(\bx(n+1))-\|\by(n)\|^2\geq 0, \] so that \eqref{disineq} follows after rearranging terms. Conversely, suppose that $S_H$ is a storage function. Take $x \in \cX$ and $u \in \cU$ arbitrary. Let $(\bu(n), \bx(n), \by(n))_{n \ge 0}$ be any system trajectory with initialization $\bx(0) = x$ and with $\bu(0) = u$. Then the dissipation inequality \eqref{disineq} with $n=0$ gives us \begin{equation}\label{eqnStoKYP} S_H(Ax + Bu) \le S_H(x) + \| u \|^2 - \|y \|^2,\quad\mbox{ with }\quad y=Cx+Du. \end{equation} In particular, $S_H(x) < \infty$ (equivalently, $x \in \cD(H^\half)$) implies that $S_H(Ax + Bu) < \infty$ (equivalently, $Ax + B u \in \cD(H^\half)$). Specifying $u=0$ shows that $A \cD(H^\half)\subset \cD(H^\half)$ and specifying $x=0$ shows $B\cU \subset \cD(H^\half)$. Thus \eqref{KYP1b'} holds. Bringing $\|y\|^2$ in \eqref{eqnStoKYP} to the other side and writing out $S_H$ gives \[ \|H^\half (Ax + Bu)\|^2 +\|Cx+Du\|^2 \le \|H^\half x\|^2 + \| u \|^2, \] which provides \eqref{KYP1b}. \end{proof} We say that a function $S \colon \cX \to {\mathbb R}_+=[0,\infty)$ is a {\em strict storage function} for the system $\Sigma$ in \eqref{dtsystem} if the strict dissipation inequality \eqref{diss-strict} holds, i.e., if there exists a $\delta > 0$ so that \begin{equation} \label{diss-strict2} S(\bx(n+1)) - S(\bx(n)) + \delta \| \bx(n)\|^2 \le (1- \delta) \| \bu(n)\|^2 - \| \by(n) \|^2\quad (n\ge N_0) \end{equation} holds for all system trajectories $\{ \bu(n), \bx(n), \by(n)\}_{n \ge N_0}$, initiated at some $N_0 \in {\mathbb Z}$. Note that strict storage functions are not allowed to attain $+\infty$ as a value.
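By way of a simple illustration (this elementary scalar example is included only for orientation and is not needed in the sequel), take $\cX=\cU=\cY={\mathbb R}$ and \[ A=\frac{1}{2},\qquad B=1,\qquad C=\frac{1}{2},\qquad D=0, \] so that $F_\Si(z)=D+zC(I-zA)^{-1}B=\frac{z}{2-z}$, a Schur class function with $\|F_\Si\|_\infty=1$. For $H=h\geq 0$, the spatial version of \eqref{KYP1b} reads \[ h\left(\tfrac{1}{2}x+u\right)^2+\tfrac{1}{4}x^2\leq h x^2+u^2 \qquad (x,u\in{\mathbb R}), \] which amounts to the matrix inequality $\sbm{ \frac{3h}{4}-\frac{1}{4} & -\frac{h}{2} \\ -\frac{h}{2} & 1-h}\succeq 0$. The determinant of this $2\times 2$ matrix equals $-(h-\frac{1}{2})^2$, so $h=\frac{1}{2}$ is the only positive-semidefinite solution, and hence $S_H(x)=\frac{1}{2}x^2$ is the unique quadratic storage function for this system. Moreover, since $\|F_\Si\|_\infty=1$, Proposition \ref{P:strictstorage-Schur} below shows that this system admits no strict storage function at all.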
The significance of the existence of a strict storage function for a system $\Sigma$ is that it guarantees that the transfer function $F_\Sigma$ has analytic continuation to an $H^\infty$-function with $H^\infty$-norm strictly less than 1 as well as a coercivity condition on $S$, i.e., we have the following strict version of Proposition \ref{P:storage-Schur}. \begin{proposition} \label{P:strictstorage-Schur} Suppose that the system $\Sigma$ in \eqref{dtsystem} has a strict storage function $S$. Then \begin{enumerate} \item[(1)] the transfer function $F_\Sigma$ has analytic continuation to a function in $H^\infty$ on the unit disk ${\mathbb D}$ with $H^\infty$-norm strictly less than 1, and \item[(2)] $S$ satisfies a coercivity condition, i.e., there is a $\delta > 0$ so that \begin{equation} \label{coercive} S(x) \ge \delta \| x \|^2\quad (x\in\cX). \end{equation} \end{enumerate} \end{proposition} \begin{proof} Assume that $S \colon \cX \to [0, \infty)$ is a strict storage function for $\Sigma$. Then for each system trajectory $(\bu(n), \bx(n), \by(n))_{n \ge 0}$ with initialization $\bx(0) = 0$, the strict dissipation inequality \eqref{diss-strict2} gives that there is a $\delta > 0$ so that for $n\geq 0$ we have \begin{align*} S(\bx(n+1)) - S(\bx(n)) & \le - \delta \| \bx(n) \|^2 + (1- \delta) \| \bu(n) \|^2 - \| \by(n) \|^2 \\ & \le (1- \delta) \| \bu(n) \|^2 - \| \by(n) \|^2. \end{align*} Summing up over $n=0,1,2,\dots, N$ for some $N \in {\mathbb N}$ for a system trajectory $(\bu(n), \bx(n), \by(n))_{n\ge 0}$ subject to initialization $\bx(0) = 0$ then gives $$ 0 \le S(\bx(N+1)) = S(\bx(N+1)) - S(\bx(0)) \le (1-\delta) \sum_{n=0}^N \| \bu(n) \|^2 - \sum_{n=0}^N \| \by(n) \|^2. $$ By restricting to input sequences $\bu\in \ell^2_\cU(\BZ_+)$, it follows that the corresponding output sequences satisfy $\by\in\ell^2_\cY(\BZ_+)$ and $\|\by\|_{\ell^2_\cY(\BZ_+)}^2 \le (1 - \delta) \|\bu\|_{\ell^2_\cU(\BZ_+)}^2$.
Taking $Z$-transform and using the Plancherel theorem then gives $$ \| M_{F_\Sigma} \widehat \bu \|^2_{H^2_\cY({\mathbb D})}= \| \widehat \by \|^2_{H^2_\cY({\mathbb D})}\le (1- \delta) \| \widehat \bu \|^2_{H^2_\cU({\mathbb D})}. $$ Thus $\|M_{F_\Sigma}\|\leq \sqrt{1-\delta} < 1$. This implies that $F_\Sigma$ has an analytic continuation to an $\cL(\cU, \cY)$-valued $H^\infty$ function with $H^\infty$-norm at most $\|M_{F_\Sigma}\|\leq\sqrt{1-\delta} < 1$. To this point we have not made use of the presence of the term $\delta \| \bx(n) \|^2$ in the strict dissipation inequality \eqref{diss-strict2}. We now show how the presence of this term leads to the validity of the coercivity condition \eqref{coercive} on $S$. Let $x_0$ be any state in $\cX$ and let $(\bu(n), \bx(n), \by(n))_{n\ge 0}$ be any system trajectory with initialization $\bx(0) = x_0$ and $\bu(0) = 0$. Then the strict dissipation inequality \eqref{diss-strict2} with $n=0$ gives us $$ \delta \| x_0 \|^2 = \delta \| \bx(0) \|^2 \le S(\bx(1)) + \delta \| \bx(0)\|^2 + \| \by(0) \|^2 \le S(\bx(0)) = S(x_0), $$ i.e., $S(x_0) \ge \delta \| x_0 \|^2$ for each $x_0 \in \cX$, verifying the validity of \eqref{coercive}. \end{proof} The following result classifies which quadratic storage functions $S_H$ are strict storage functions. \begin{proposition} \label{P:strictQuadStorage} Suppose that $S = S_H$ is a quadratic storage function for the system $\Sigma$ in \eqref{dtsystem}. Then $S_H$ is a strict storage function for $\Sigma$ if and only if $H$ is a bounded positive-semidefinite solution of the strict KYP-inequality \eqref{KYP2}. Any such solution is in fact strictly positive-definite. \end{proposition} \begin{proof} Suppose that $S_H$ is a strict storage function for $\Sigma$. Then by definition $S_H(x) < \infty$ for all $x \in \cX$. Hence $\cD(H^\half)=\cX$. By the Closed Graph Theorem, it follows that $H^\half$, and therefore $H$, is bounded.
As a consequence of Proposition \ref{P:strictstorage-Schur}, $S_H$ is coercive and hence $H$ is strictly positive-definite. The strict dissipation inequality \eqref{diss-strict2} expressed in terms of $H$ and the system matrix $\sbm{ A & B \\ C & D}$ becomes $$ \| H^\half (Ax + Bu) \|^2 - \| H^\half x \|^2 + \delta \| x \|^2 \le (1 - \delta) \| u \|^2 - \| C x + D u \|^2 $$ for all $x \in \cX$ and $u \in \cU$. This can be expressed more succinctly as \begin{align*} & \left\langle \begin{bmatrix} H & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} x \\ u \end{bmatrix}, \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} x \\ u \end{bmatrix} \right \rangle - \left\langle \begin{bmatrix} H & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} x \\ u \end{bmatrix}, \begin{bmatrix} x \\ u \end{bmatrix} \right\rangle \\ & \quad \quad \quad \quad \le - \delta \left\langle \begin{bmatrix} x \\ u \end{bmatrix}, \begin{bmatrix} x \\ u \end{bmatrix} \right\rangle \end{align*} for all $x \in \cX$ and $u \in \cU$, for some $\delta > 0$. This is just the spatial version of \eqref{KYP2}, so $H$ is a strictly positive-definite solution of the strict KYP-inequality \eqref{KYP2}. By reversing the steps one sees that $H \succeq 0$ being a solution of the strict KYP-inequality \eqref{KYP2} implies that $S_H$ is a strict storage function. As a consequence of Proposition \ref{P:strictstorage-Schur} we see that then $S_H$ satisfies a coercivity condition \eqref{coercive}, so necessarily $H$ is strictly positive-definite. \end{proof} \section{The available storage and required supply}\label{S:ASRS} In Proposition \ref{P:storage-Schur} we showed that the existence of a storage function (which is allowed to attain the value $+\infty$) for a discrete-time linear system $\Si$ implies that the transfer function $F_\Si$ associated with $\Si$ is equal to a Schur class function on a neighborhood of 0. In this section we investigate the converse direction. 
Specifically, we give explicit variational formulas for three storage functions, referred to as the available storage function $S_a$ (defined in \eqref{Sa2}), the required supply function $S_r$ (defined in \eqref{Sr2}), and the ``regularized'' version $\uS_r$ of the required supply (defined in \eqref{uSr2}). Let $\bcU$ denote the space of all functions $n \mapsto u(n)$ from the integers ${\mathbb Z}$ into the input space $\cU$. Then $S_{a}$ is given by \begin{equation} \label{Sa2} S_{a}(x_{0}) = \sup_{\bu \in \bcU,\, n_{1} \ge 0} \sum_{n=0}^{n_{1}} \left( \| \by(n) \|^{2} - \|\bu(n)\|^{2}\right) \end{equation} with the supremum taken over all system trajectories $(\bu(n), \bx(n), \by(n))_{n \ge 0}$ with initialization $\bx(0) = x_0$, while $S_r$ is given by \begin{equation} \label{Sr2} S_{r}(x_{0}) = \inf_{\bu \in \bcU, \, n_{-1} < 0} \sum_{n=n_{-1}}^{-1} \left( \|\bu(n)\|^{2} - \| \by(n) \|^{2} \right) \end{equation} with the infimum taken over all system trajectories $(\bu(n), \bx(n), \by(n))_{n\ge n_{-1}}$ subject to the initialization condition $\bx(n_{-1}) = 0$ and the condition $\bx(0) = x_{0}$. The proof that $S_a$ and $S_r$ are storage functions whenever $F_\Sigma$ is in the Schur class requires the following preparatory lemma. We shall use the following notation. For an arbitrary Hilbert space $\cZ$, write $P_+$ and $P_-$ for the orthogonal projections onto $\ell^2_\cZ(\BZ_+)$ and $ \ell^2_\cZ(\BZ_-)$, respectively, acting on $ \ell^2_\cZ(\BZ)$. For integers $m\leq n$, we write $P_{[m,n]}$ for the orthogonal projection onto the subspace of sequences in $\ell^2_\cZ(\BZ)$ with support on the coordinate positions $m, m+1,\ldots, n$. \begin{lemma} \label{L:prep} Let $\Sigma$ be as in \eqref{dtsystem} and suppose that its transfer function $F_\Sigma$ is in the Schur class.
Then, for each system trajectory $(\bu(n), \bx(n), \by(n))_{n \ge 0}$ with initialization $\bx(0) = 0$, the inequality \begin{equation} \label{io-ineq} \sum_{n=0}^N \| \by(n) \|^2 \le \sum_{n=0}^N \| \bu(n) \|^2 \end{equation} holds for all $N \in {\mathbb Z}_+$. \end{lemma} \begin{proof} As we have already observed, the fact that $F_\Sigma$ is in the Schur class $\cS(\cU, \cY)$ implies that the multiplication operator $M_{F_\Sigma}$ \eqref{mult-op} has norm at most $1$ as an operator from $L^2_\cU({\mathbb T})$ to $L^2_\cY({\mathbb T})$. If we apply the inverse $Z$-transform to the full operator $M_{F_\Sigma}$, not just to the compression ${\mathbb H}_{F_\Sigma}$ as was done to arrive at the Hankel operator $\fH_{F_\Sigma}$ in \eqref{Hankel-matrix}, we get the {\em Laurent operator} \begin{equation}\label{Laurent0} \frakL_{F_{\Si}}=\mat{ccc|ccc}{ \ddots&\ddots&\ddots &\ddots&\ddots&\ddots\\ \ddots&F_{0}&0 &0&0&\ddots\\ \ddots&F_{1}&F_{0} &0&0&\ddots\\ \hline \ddots&F_{2}&F_{1} &F_0&0&\ddots\\ \ddots&F_{3}&F_{2} &F_1&F_0&\ddots\\ \ddots&\ddots&\ddots&\ddots&\ddots&\ddots}: \ell^2_\cU(\BZ)\to \ell^2_\cY(\BZ), \end{equation} where $F_0, F_1, F_2, \dots$ are the Taylor coefficients of $F_\Sigma$: \begin{equation} \label{Taylor} F_n = \begin{cases} D &\text{if } n=0 \\ C A^{n-1}B &\text{if $n \ge 1$.} \end{cases} \end{equation} It is convenient to write $\fL_{F_\Sigma}$ as a $2 \times 2$-block matrix with respect to the decomposition $\ell^2_\cU({\mathbb Z}) = \sbm{ \ell^2_\cU({\mathbb Z}_-) \\ \ell^2_{\cU}({\mathbb Z}_+)}$ of the domain and the decomposition $\ell^2_\cY({\mathbb Z}) = \sbm{ \ell^2_\cY({\mathbb Z}_-) \\ \ell^2_{\cY}({\mathbb Z}_+)}$ of the range; the result is \begin{equation} \label{Laurent} \frakL_{F_{\Si}}=\mat{c|c}{\wtil{\frakT}_{F_\Si}&0\\\hline \frakH_{F_\Si}& \frakT_{F_\Si}}:\mat{c}{\ell^2_\cU(\BZ_-)\\ \ell^2_\cU(\BZ_+)}\to \mat{c}{\ell^2_\cY(\BZ_-)\\ \ell^2_\cY(\BZ_+)}. 
\end{equation} Here $\frakH_{F_\Si}:\ell^2_\cU(\BZ_-)\to \ell^2_\cY(\BZ_+)$ denotes the Hankel operator associated with ${F_\Si}$ already introduced in \eqref{Hankel-matrix}, $\frakT_{F_\Si}:\ell^2_\cU(\BZ_+)\to \ell^2_\cY(\BZ_+)$ the Toeplitz operator associated with ${F_\Si}$, and $\widetilde \frakT_{{F_\Si}}$ the Toeplitz operator acting from $\ell^{2}_{\cU}({\mathbb Z}_{-})$ to $\ell^{2}_{\cY}({\mathbb Z}_{-})$ associated with ${F_\Si}$. From the assumption that $F_\Sigma$ is in the Schur class $\cS(\cU, \cY)$, it follows that $M_{F_\Sigma}$ is contractive, and hence also each of the operators $\widetilde \fT_{F_\Sigma}$, $\fH_{F_\Sigma}$, and $\fT_{F_\Sigma}$ is contractive. From the lower triangular form of $\fT_{F_\Sigma}$ we see in addition that $\fT_{F_\Sigma}$ has the {\em causality property}: \begin{equation}\label{causal} P_{[0,N]} \fT_{F_\Sigma} = P_{[0,N]} \fT_{F_\Sigma} P_{[0,N]}\quad (N\ge 0). \end{equation} Now suppose that $(\bu(n), \bx(n), \by(n))_{n \ge 0}$ is a system trajectory on ${\mathbb Z}_+$ with initialization $\bx(0) = 0$. In this case the infinite matrix identity $\by = \fT_{F_\Sigma} \bu$ holds formally. For $N \in {\mathbb Z}_+$ we have $P_{[0,N]}\bu \in \ell^2_\cU(\BZ_+)$, and by the causality property \[ P_{[0,N]} \fT_{F_\Sigma} P_{[0,N]}\bu =P_{[0,N]} \fT_{F_\Sigma}\bu =P_{[0,N]}\by. \] Since $\fT_{F_\Sigma}$ is contractive, so is $P_{[0,N]} \fT_{F_\Sigma} P_{[0,N]}$, and thus the above identity shows that $ \| P_{[0,N]} \by \| \le \| P_{[0,N]} \bu \|, $ or, equivalently, \begin{equation} \label{causal-contraction} \sum_{n=0}^N \| \by(n) \|^2 \le \sum_{n=0}^N \| \bu(n) \|^2 \end{equation} holds for each system trajectory $(\bu(n), \bx(n), \by(n))_{n\ge 0}$ with $\bx(0) = 0$. \end{proof} The proof of the following result is an adaptation of the proofs of Theorems 1 and 2 for the continuous time setting in \cite{Wil72a}.
\begin{proposition}\label{P:SaSr} Assume that the discrete-time linear system $\Si$ has a transfer function $F_\Sigma$ which has an analytic continuation to a function in the Schur class $\cS(\cU,\cY)$. Define $S_{a}$ and $S_{r}$ by \eqref{Sa2} and \eqref{Sr2}. Then \begin{enumerate} \item[(1)] $S_{a}$ is a storage function, \item[(2)] $S_{r}$ is a storage function, and \item[(3)] for each storage function $S$ for $\Sigma$ we have $$ S_a(x_0) \le S(x_0) \le S_r(x_0) \text{ for all } x_0 \in \cX. $$ \end{enumerate} \end{proposition} \begin{proof} The proof consists of three parts, corresponding to the three assertions of the proposition. \smallskip {(1)} To see that $S_{a}(x_{0}) \ge 0$ for all $x_{0}\in\cX$, choose $\bx(0) = x_{0}$ and $\bu(n) = 0$ for $n \ge 0$ to generate a system trajectory $(\bu(n), \bx(n), \by(n))_{n \ge 0}$ such that $\sum_{n=0}^{n_{1}} ( \| \by(n) \|^{2} - \| \bu(n) \|^{2}) = \sum_{n=0}^{n_{1}} \| \by(n) \|^{2} \ge 0$ for all $n_{1} \ge 0$. From the definition \eqref{Sa2}, we see that $S_{a}(x_{0}) \ge 0$. By Lemma \ref{L:prep}, each system trajectory $(\bu(n), \bx(n), \by(n))_{n \ge 0}$ with initialization $\bx(0) = 0$ satisfies the inequality \[ \sum_{n=0}^{n_1} \| \by(n) \|^{2}_{\cY} \le \sum_{n=0}^{n_1} \| \bu(n) \|^{2}_{\cU}\quad (n_1\in\BZ_+). \] This observation leads to the conclusion that $S_{a}(0) \le 0$. Hence $S_{a}(0)=0$ and thus $S_{a}$ satisfies the normalization \eqref{normalization'}. Now let $\{\wtil\bu(n), \wtil\bx(n), \wtil\by(n)\}_{n \ge N_0}$ be any system trajectory initiated at some $N_0\in\BZ$. We wish to show that this trajectory satisfies the dissipation inequality \eqref{disineq}. It is convenient to rewrite this condition in the form \[ \| \wtil\by(n) \|^{2}_{\cY} - \| \wtil\bu(n) \|^{2}_{\cU} + S_a(\wtil\bx(n+1)) \le S_a(\wtil\bx(n))\quad (n\ge N_0).
\] By translation invariance of the system equations \eqref{dtsystem}, without loss of generality we may take $n=0$, so we need to show \begin{equation} \label{distoshow} \| \wtil\by(0) \|^{2}_{\cY} - \| \wtil\bu(0) \|^{2}_{\cU} + S_a(\wtil\bx(1)) \le S_a(\wtil\bx(0)). \end{equation} We rewrite the definition \eqref{Sa2} for $S_a(\wtil\bx(1))$ in the form \[ S_a(\wtil\bx(1)) = \sup_{\bu \in \bcU, n_1\geq 0} \sum_{n=0}^{n_{1}} \left( \| \by(n) \|^{2}_{\cY} - \| \bu(n) \|^{2}_{\cU} \right), \] where the system trajectory $(\bu(n),\bx(n),\by(n))_{n \ge 0}$ is subject to the initializa\-tion $\bx(0)=\wtil\bx(1)$. Again making use of the translation invariance of the system equations, we may rewrite this in the form \[ S_a(\wtil\bx(1)) = \sup_{\bu \in \bcU, n_1\geq 1} \sum_{n=1}^{n_{1}} \left( \| \by(n) \|^{2}_{\cY} - \| \bu(n) \|^{2}_{\cU} \right), \] where $(\bu(n),\bx(n),\by(n))_{n \ge 0}$ is a system trajectory with initialization now given by $\bx(1)=\wtil\bx(1)$. Substituting this expression for $S_a(\wtil\bx(1))$, the left hand side of \eqref{distoshow} reads \[ \| \wtil\by(0) \|^{2}_{\cY} - \| \wtil\bu(0) \|^{2}_{\cU} + \sup_{\bu \in \bcU, n_1\geq 1} \sum_{n=1}^{n_{1}} \left( \| \by(n) \|^{2}_{\cY} - \| \bu(n) \|^{2}_{\cU} \right). \] This quantity indeed is bounded above by \[ S_a(\wtil\bx(0))= \sup_{\bu \in \bcU, n_1\geq 0} \sum_{n=0}^{n_{1}} \left( \| \by(n) \|^{2}_{\cY} - \| \bu(n) \|^{2}_{\cU} \right), \] with $(\bu(n),\bx(n),\by(n))_{n \ge 0}$ a system trajectory subject to initialization $\bx(0)=\wtil\bx(0)$. Hence the inequality \eqref{distoshow} follows as required, and $S_{a}$ is a storage function for $\Sigma$. \smallskip {(2)} Let $(\bu(n), \bx(n), \by(n))_{n \ge n_{-1}}$ be a system trajectory with zero-initial\-ization of the state at $n_{-1} < 0$, subject also to $\bx(0)=x_0$.
Applying the result of Lemma \ref{L:prep} to this system trajectory, using the translation invariance property of $\Si$ to get a sum in \eqref{io-ineq} starting at $n_{-1}$ and ending at 0, it follows that $S_{r}(x_{0}) \ge 0$ for all $x_{0}$ in $\Rea (A|B)$. In case $x_0\not\in\Rea(A|B)$, i.e., $x_{0}$ is not reachable in finitely many steps via some input signal $\bu(n)$ ($n_{-1}\le n < 0$) with $\bx(n_{-1}) = 0$, the definition of $S_r$ in \eqref{Sr2} gives us $S_{r}(x_0) = +\infty \ge 0$. By choosing $n_{-1} = -1$ with $\bu(-1) = 0$, we see that $S_{r}(0) \le 0$. Since $S_{r}(x_{0}) \ge 0$ for each $x_{0} \in \cX$, it follows that $S_{r}$ also satisfies the normalization \eqref{normalization'}. An argument similar to that used in part (1) of the proof shows that $S_{r}$ satisfies \eqref{disineq}. Indeed, note that it suffices to show that for each system trajectory $\{\wtil\bu(n), \wtil\bx(n), \wtil\by(n)\}_{n \ge 0}$ we have \begin{align} S_{r}(\wtil\bx(1)) & \le \| \wtil\bu(0) \|^{2}_{\cU} - \| \wtil\by(0)\|^{2}_{\cY}+ S_{r}(\wtil\bx(0)) \label{toshow2} \\ & = \inf_{\bu \in \bcU, \, n_{-1} < 0} \left\{ \| \wtil\bu(0) \|^{2}_{\cU} - \| \wtil\by(0)\|^{2}_{\cY}+ \sum_{n=n_{-1}}^{-1} \left( \| \bu(n) \|^{2}_{\cU} - \| \by(n) \|^{2}_{\cY} \right)\right\} \notag \end{align} where $(\bu(n), \bx(n), \by(n))_{n\ge n_{-1}}$ is a system trajectory subject to the initial condition $\bx(n_{-1}) = 0$ and the terminal condition $\bx(0) = \wtil\bx(0)$. Rewrite the definition of $S_{r}(\wtil\bx(1))$ as \[ S_{r}(\wtil\bx(1)) = \inf_{\bu \in \bcU, \, n_{-1}< 1} \sum_{n=n_{-1}}^{0} \left( \| \bu(n) \|^{2}_{\cU} - \| \by(n) \|^{2}_{\cY} \right), \] with the system trajectory $(\bu(n),\bx(n),\by(n))_{n\ge n_{-1}}$ subject to the initial and terminal conditions $\bx(n_{-1}) = 0$ and $\bx(1) = \wtil\bx(1)$.
Now recognize the argument of the $\inf$ in the right-hand side of \eqref{toshow2} as part of the competition in the infimum defining $S_r(\wtil\bx(1))$ to deduce the inequality \eqref{toshow2}. \smallskip {(3)} Let $S$ be any storage function for $\Sigma$ and $(\bu(n), \bx(n), \by(n))_{n \ge 0}$ any system trajectory with initialization $\bx(0) = x_0$. Iteration of the dissipation inequality \eqref{disineq} for $S$ along the system trajectory $(\bu(n), \bx(n), \by(n))_{n \ge 0}$ as in the proof of Lemma \ref{L:finH2} yields $$ 0 \le S(\bx(n_1 + 1)) \le S(x_0) + \sum_{n=0}^{n_1} \left( \| \bu(n) \|^2 - \| \by(n) \|^2 \right) $$ or $$ \sum_{n=0}^{n_1} \left( \| \by(n) \|^2 - \| \bu(n) \|^2 \right) \le S(x_0). $$ Taking the supremum in the left-hand side of the above inequality over all such system trajectories $(\bu(n), \bx(n), \by(n))_{n \ge 0}$ and all $n_1 \ge 0$ yields $S_a(x_0) \le S(x_0)$ and the first part of (3) is verified. Next let $x_{0}\in\cX$ be arbitrary. If $(\bu(n), \bx(n), \by(n))_{n\ge n_{-1}}$ is any system trajectory with state-initializa\-tion $\bx(n_{-1}) = 0$ and $\bx(0) = x_{0}$, applying Lemma \ref{L:finH2} with $N_0=n_{-1}$ and $N=-1-n_{-1}$ gives us that \begin{equation} \label{Stilde-ineq} S(x_{0}) \le \sum_{n=n_{-1}}^{-1} \left( \| \bu(n)\|^{2}_{\cU} - \| \by(n) \|^{2}_{\cY} \right). \end{equation} Taking the infimum of the right-hand side over all such system trajectories gives us $S(x_{0}) \le S_{r}(x_{0})$. Here we implicitly assumed that the state $x_{0} \in \cX$ is reachable. If $x_{0}$ is not reachable, there are no such system trajectories, and taking the infimum over an empty set leads to $S_{r}(x_{0}) = \infty$, in which case $S(x_0)\le S_{r}(x_{0})$ is also valid. Hence $S(x_{0}) \le S_{r}(x_{0})$ holds for all possible $x_{0}\in\cX$. This completes the verification of the second part of (3). \end{proof} Combining Proposition \ref{P:SaSr} with Proposition \ref{P:storage-Schur} leads to the following.
\begin{corollary} \label{C:storageSchur} A discrete-time linear system $\Sigma$ in \eqref{dtsystem} has a transfer function $F_\Sigma$ with an analytic continuation to a function in the Schur class if and only if $\Sigma$ has a storage function $S$. \end{corollary} \begin{proof} The sufficiency is Proposition \ref{P:storage-Schur}. For the necessity direction, by Proposition \ref{P:SaSr} we may choose $S$ equal to either $S_a$ or $S_r$. \end{proof} We next impose a minimality assumption on $\Sigma$ and in addition assume that $F_{\Sigma}$ has an analytic continuation to a function in the Schur class $\cS(\cU, \cY)$, i.e., we make the following assumptions: \begin{equation} \label{A} \hspace*{-.15cm} \left\{ \!\!\! \begin{array}{l} \mbox{\em $\Si$ is minimal, i.e., $(C,A)$ is observable and $(A,B)$ is controllable,} \\ \mbox{\em and $F_{\Sigma}$ has an analytic continuation to a function in $\cS(\cU, \cY)$.} \end{array} \right. \end{equation} Our next goal is to understand storage functions from a more operator-theoretic point of view. We first need some preliminaries. Recall the Laurent operator $\frakL_{F_\Si}$ in \eqref{Laurent0}. From the $2 \times 2$-block form for $\frakL_{{F_\Si}}$ in \eqref{Laurent}, we see that \begin{equation}\label{LFids} \begin{aligned} I - \frakL_{{F_\Si}} \frakL_{{F_\Si}}^{*} &= \begin{bmatrix} D_{\widetilde \frakT_{{F_\Si}}^{*}}^{2} & - \widetilde \frakT_{{F_\Si}} \frakH_{{F_\Si}}^{*} \\ - \frakH_{{F_\Si}} \widetilde \frakT_{{F_\Si}}^{*} & D_{\frakT_{{F_\Si}}^{*}}^{2} - \frakH_{{F_\Si}} \frakH_{{F_\Si}}^{*} \end{bmatrix};\\ I - \frakL_{{F_\Si}}^{*} \frakL_{{F_\Si}} &= \begin{bmatrix} D_{\widetilde \frakT_{{F_\Si}}}^{2} - \frakH_{{F_\Si}}^{*} \frakH_{{F_\Si}} & -\frakH_{F_\Si}^*\frakT_{{F_\Si}}\\ -\frakT_{{F_\Si}}^* \frakH_{F_\Si} & D_{\frakT_{{F_\Si}}}^{2} \end{bmatrix}, \end{aligned} \end{equation} where in general we use the notation $D_{X}$ for the defect operator $D_{X} = (I - X^{*} X)^{\half}$ of a contraction operator $X$.
Since ${F_\Si}$ is assumed to be a Schur class function, $\frakT_{F_\Si}$ and $\widetilde \frakT_{{F_\Si}}$ are contractions, and hence $D_{\frakT_{{F_\Si}}}$, $D_{ \frakT_{{F_\Si}}^{*}}$, $D_{\widetilde \frakT_{{F_\Si}}}$ and $D_{\widetilde \frakT_{{F_\Si}}^{*}}$ are well defined. \begin{lemma}\label{L:SaSrOpForm} Let the discrete-time linear system $\Si$ in \eqref{dtsystem} satisfy the assumptions \eqref{A}. The available storage function $S_a$ and required supply function $S_r$ can then be written in operator form as \begin{align} \label{SaOpForm} S_{a}(x_{0}) &= \!\! \sup_{ \bu \in \ell^{2}_{\cU}({\mathbb Z}_{+})} \| \bW_{o} x_{0} + \frakT_{{F_\Si}} \bu \|^{2}_{\ell^{2}_{\cY}({\mathbb Z}_{+})} - \| \bu \|^{2}_{\ell^{2}_{\cU}({\mathbb Z}_{+})}\ (x_0\in \cD(\bW_o)) \\ S_r(x_0)&=\inf_{\bu\in\ell_{\tu{fin},\cU}(\BZ_-),\, x_0=\bW_c \bu}\|D_{\wtil\fT_{F_\Si}}\bu\|^2\quad (x_0\in\cX), \label{SrOpForm} \end{align} and $S_{a}(x_{0})=+\infty$ for $x_0\not\in\cD(\bW_o)$. Here $\bW_o$ and $\bW_c$ are the observability and controllability operators defined via \eqref{bWo1}--\eqref{bWc*2} and $\ell_{\tu{fin},\cU}(\BZ_-)$ is the linear manifold of finitely supported sequences in $\ell^2_\cU(\BZ_-)$. In particular, $S_r(x_0)<\infty$ if and only if $x_0\in\Rea(A|B)$. \end{lemma} \begin{proof} We shall use the notation $P_\pm$ and $P_{[m,n]}$ as introduced in the discussion immediately preceding the statement of Lemma \ref{L:prep}. We start with $S_{a}$. For each system trajectory $(\bu(n),\bx(n),\by(n))_{n \ge 0}$ with initialization $\bx(0) = x_0$ and with $\bu\in \ell^2_\cU(\BZ_+)$, by linearity we have \[ \by= \bW_o x_0 + \fT_{F_\Sigma} \bu.
\] Now note that, for each system trajectory $(\bu(n),\bx(n),\by(n))_{n \ge 0}$ with initialization $\bx(0) = x_0$ but with $\bu$ not necessarily in $\ell^2_\cU(\BZ_+)$ and with $n_1\geq 0$, by the causality property \eqref{causal}, as in the proof of Lemma \ref{L:prep} we see that we can replace $\bu$ with $P_{[0,n_1]}\bu\in \ell_{{\rm fin}, \cU}({\mathbb Z}_{+}) \subset \ell^2_\cU (\BZ_+)$ within the supremum in \eqref{Sa2} without changing the value. Therefore, the value of $S_a$ at $x_0$ can be rewritten in operator form as \begin{align} &S_{a}(x_{0})=\notag \\ &=\!\!\! \sup_{\bu \in \ell_{{\rm fin}, \cU}({\mathbb Z}_{+}), \, n_{1} \ge 0} \| P_{[0,n_{1}]} (\bW_{o}x_{0} + \frakT_{{F_\Si}} \bu )\|^{2}_{\ell^{2}_{\cY}({\mathbb Z}_{+})} - \| P_{[0, n_{1}]} \bu \|^{2}_{\ell^{2}_{\cU}({\mathbb Z}_{+})} \label{sup'-pre} \end{align} where we use the notation $\ell_{{\rm fin}, \cU}({\mathbb Z}_+)$ for $\cU$-valued sequences on ${\mathbb Z}_+$ of finite support. If $x_0 \notin \cD(\bW_o)$ so that $\bW_o x_0 \notin \ell^2_\cY({\mathbb Z}_+)$, the above formulas are to be interpreted algebraically, and we may choose $\bu = 0$ and take the limit as $n_1 \to \infty$ to see that $S_a(x_0) = +\infty$. Now assume $x_0\in \cD(\bW_o)$. Fix $\bu \in \ell_{{\rm fin}, \cU}({\mathbb Z}_+)$ and take the limit as $n_1 \to +\infty$ in the right hand side of \eqref{sup'-pre} to see that an equivalent expression for $S_a(x_0)$ is $$ S_a(x_0) = \sup_{\bu \in \ell_{{\rm fin}, \cU}({\mathbb Z}_+)} \| \bW_o x_0 + \fT_{F_\Sigma} \bu \|^2 - \| \bu \|^2. $$ Since $\ell_{{\rm fin}, \cU}({\mathbb Z}_+)$ is dense in $\ell^2_\cU({\mathbb Z}_+)$ and $\fT_{F_\Sigma}$ is a bounded operator, we see that another equivalent expression for $S_a(x_0)$ is \eqref{SaOpForm}. This completes the verification of \eqref{SaOpForm}. We next look at $S_r$. Let $(\bu(n), \bx(n), \by(n))_{n \ge n_{-1}}$ be any system trajectory with initialization $\bx(n_{-1}) = 0$ for some $n_{-1}<0$.
Let us identify $\bu$ with an element $\bu \in \ell_{{\rm fin}, \cU}({\mathbb Z}_-)$ by ignoring the values of $\bu$ on ${\mathbb Z}_+$ and defining $\bu(n) = 0$ for $n < n_{-1}$. Then, as a consequence of item (1) in Proposition \ref{P:Wc}, the constraint $\bx(0) = x_0$ in \eqref{Sr2} can be written in operator form as $\bW_c \bu = x_0$. Furthermore, since $(\bu(n), \bx(n), \by(n))_{n \ge n_{-1}}$ is a system trajectory with zero state initialization at $n_{-1}$, it follows that $$ \by|_{{\mathbb Z}_-} = \widetilde \fT_{F_\Sigma} \bu. $$ We conclude that a formula for $S_r$ equivalent to \eqref{Sr2} is $$ S_r(x_0) = \inf_{ \bu \in \ell_{{\rm fin}, \cU}({\mathbb Z}_-) \colon \bW_c \bu = x_0} \| \bu \|^2 - \| \widetilde \fT_{F_\Sigma} \bu \|^2 $$ which in turn has the more succinct formulation \eqref{SrOpForm}. If $x_0 \in \Rea(A|B)$, then the infimum in \eqref{SrOpForm} is taken over a nonempty set, so that $S_r(x_0)<\infty$. On the other hand, if $x_0 \not\in \Rea(A|B)$, then the infimum is taken over an empty set, so that $S_r(x_0)=\infty$. \end{proof} To compute storage functions more explicitly for the case where assumptions \eqref{A} are in place, it will be convenient to restrict to what we shall call {\em $\ell^2$-regular storage functions} $S$, namely, storage functions $S$ which assume finite values on $\im \bW_c$: \begin{equation} \label{reg-storage} x_0 = \bW_c \bu \text{ where } \bu \in \cD(\bW_c) \Rightarrow S(x_0) < \infty. \end{equation} We shall see in the next result that $S_a$ is $\ell^2$-regular. However, unless $\Rea(A|B)$ is equal to the range of $\bW_c$, the required supply $S_r$ will not be $\ell^2$-regular (by the last assertion of Lemma \ref{L:SaSrOpForm}).
To remedy this situation, we introduce the following modification $\uS_r$ of the required supply $S_r$, which we shall call the {\em $\ell^2$-regularized required supply}: \begin{equation} \label{uSr2} \uS_r(x_0) = \inf_{\bu \in \cD(\bW_c) \colon \bW_c \bu = x_0} \sum_{n=-\infty}^{-1} \left( \| \bu(n) \|^2 - \| \by(n) \|^2 \right) \end{equation} where $\bu \in \ell^2_\cU({\mathbb Z}_-)$ determines $\by \in \ell^2_\cY({\mathbb Z}_-)$ via the system input/output map:\ $\by = \widetilde \fT_{F_\Sigma} \bu$. Thus formula \eqref{uSr2} can be written more succinctly in operator form as \begin{align} \uS_r(x_0) & = \inf_{\bu \in \cD(\bW_c) \colon \bW_c \bu = x_0} \| \bu \|^2_{\ell^2_\cU({\mathbb Z}_-)} - \| \widetilde \fT_{F_\Sigma} \bu \|^2_{\ell^2_\cY({\mathbb Z}_-)} \notag \\ & = \inf_{\bu \in \cD(\bW_{c}), \, \bW_{c} \bu = x_{0}} \| D_{\widetilde \frakT_{{F_\Si}}} \bu \|^{2}_{\ell^{2}_{\cY}({\mathbb Z}_{-})} \text{ for } x_0\in \im \bW_c. \label{uSrOpForm} \end{align} It is clear that $\uS_r(x_0)<\infty$ if and only if $x_0\in \im \bW_c$. Since the objective in the infimum defining $\uS_r$ in \eqref{uSrOpForm} is the same as the objective in the infimum defining $S_r$ in \eqref{SrOpForm} but the former infimum is taken over an a priori larger set, it follows directly that $S_r(x_0)\geq \uS_r(x_0)$ for all $x_0\in\cX$, as can also be seen as a consequence of Proposition \ref{P:SaSr} once we show that $\uS_r$ is a storage function for $\Sigma$. From either of the formulas we see that $0\leq\uS_r(x_0)$ and that $\uS_r(x_0) <\infty$ exactly when $x_0$ is in the range of $\bW_c$. Hence once we show that $\uS_r$ is a storage function, it follows that $\uS_r$ is an $\ell^2$-regular storage function and is a candidate to be the largest such. However at this stage we have only partial results in this direction, as laid out in the next result. 
\begin{proposition} \label{P:uSr} Assume that $\Sigma$ is a system satisfying the assumptions \eqref{A} and let the function $\uS_r \colon \im \bW_c \to {\mathbb R}_+$ be given by \eqref{uSrOpForm}. Then: \begin{enumerate} \item[(1)] $S_a$ and $\uS_r$ are $\ell^2$-regular storage functions. \item[(2)] $\uS_r$ is ``almost'' the largest $\ell^2$-regular storage function in the following sense: if $S$ is another $\ell^2$-regular storage function such that either \begin{enumerate} \item $S$ is $\cD(\bW_c^*)$-weakly continuous in the sense that: given a sequence $\{ x_n\} \subset \im \bW_c$ and $x_c \in \im \bW_c$ such that $$ \lim_{n \to \infty} \langle x, x_n \rangle_\cX = \langle x, x_c \rangle_\cX \text{ for all } x \in \cD(\bW_c^*), $$ then $\lim_{n \to \infty} S(x_n) = S(x_c)$, or \item $\bW_c$ is bounded and $S$ is continuous on $\cX$ {\rm(}with respect to the norm topology on $\cX${\rm)}, \end{enumerate} then $S(x_0) \le \uS_r(x_0)$ for all $x_0\in\cX$. \end{enumerate} \end{proposition} \begin{proof} We first prove item (1), starting with the claim for $S_a$. Since by assumption $\Si$ is minimal and $F_\Si$ has an analytic continuation to a Schur class function, by item (1) of Proposition \ref{P:HankelDecs}, $\im \bW_c\subset \cD(\bW_o)$. So on $\im \bW_c$, the available storage $S_a$ is given by \eqref{SaOpForm}. It remains to show that for $x_0\in \im \bW_c$ the formula for $S_a(x_0)$ in \eqref{SaOpForm} gives a finite value. So assume $x_0\in \im \bW_c$, say $x_0=\bW_c \bu_-$ for some $\bu_-\in\cD(\bW_c)$. Choose $\bu_+\in\ell^2_\cU(\BZ_+)$ arbitrarily and define $\bu\in\ell^2_\cU(\BZ)$ by setting $P_- \bu=\bu_-$ and $P_+ \bu=\bu_+$. Then $\bW_o x_0=\bW_o \bW_c \bu_-= \fH_{F_\Si}\bu_-$.
Thus, using the decomposition of $\fL_{F_\Si}$ in \eqref{Laurent} and the fact that $\|\fL_{F_\Si}\|\leq 1$, we find that \begin{align*} &\| \bW_{o} x_{0} + \frakT_{{F_\Si}} \bu_+\|^{2} - \| \bu_+ \|^{2} =\| \fH_{F_\Si}\bu_- + \frakT_{{F_\Si}} \bu_+\|^{2} - \| \bu_+ \|^{2}\\ &\qquad\qquad=\| P_+\fL_{F_\Si} \bu\|^{2} - \|P_+ \bu \|^{2} = \|P_- \bu \|^{2}+ \| P_+\fL_{F_\Si} \bu\|^{2} - \| \bu \|^{2}\\ &\qquad\qquad\leq \|P_- \bu \|^{2}=\|\bu_-\|^2. \end{align*} Since the upper bound $\|\bu_-\|^2$ is independent of the choice of $\bu_+\in \ell^2_\cU(\BZ_+)$, we can take the supremum over all $\bu_+\in \ell^2_\cU(\BZ_+)$ to arrive at the inequality $S_a(x_0)\leq \|\bu_-\|^2<\infty$. Next we prove the statement of item (1) concerning $\uS_r$. By the discussion immediately preceding the statement of the proposition, it follows that $\uS_r$ is an $\ell^2$-regular storage function once we show that $\uS_r$ is a storage function, that is, $\uS_r(0) = 0$ and that $\uS_r$ satisfies the dissipation inequality \eqref{disineq}. If $x_0 = 0$, we can choose $\bu = 0$ as the argument in the right hand side of \eqref{uSrOpForm} to conclude that $\uS_r(0) \le 0$. As we have already seen that $\uS_r(x_0) \ge 0$ for all $x_0$, we conclude that $\uS_r(0) = 0$. To complete the proof of item (1), it remains to show that $\uS_r$ satisfies the dissipation inequality \eqref{disineq}. By shift invariance we may take $n=N_0 = 0$ in \eqref{disineq}. If $\bx(0) \notin \im \bW_c$, then $\uS_r(\bx(0)) = \infty$ and \eqref{disineq} holds trivially.
We therefore assume that $(\widetilde \bu(n), \widetilde \bx(n), \widetilde \by(n))_{n \ge 0}$ is a system trajectory with initialization $\widetilde \bx(0) = x_0 = \bW_c \bu_-$ for some $\bu_- \in \cD(\bW_c)$ and the problem is to show \begin{align} &\uS_r(\widetilde \bx(1)) \le \|\widetilde \bu(0) \|^2 - \| \widetilde \by(0) \|^2 + \uS_r(\widetilde \bx(0))= \label{tocheck} \\ & \quad = \inf_{\bu \in \cD(\bW_c) \colon \bW_c \bu = \widetilde \bx(0)} \left[ \|\widetilde \bu(0) \|^2 - \| \widetilde \by(0) \|^2 + \sum_{n=-\infty}^{-1} \left( \| \bu(n) \|^2 - \| \by(n) \|^2 \right) \right], \notag \end{align} where $\by = \widetilde \fT_{F_\Sigma} \bu$. As $(\widetilde \bu(n), \widetilde \bx(n), \widetilde \by(n))_{n \ge 0}$ is a system trajectory initiated at time $n = 0$, we know that $\widetilde \bx(1) = A \widetilde \bx(0) + B \widetilde \bu(0)$ and $\widetilde \by(0) = C \widetilde \bx(0) + D \widetilde \bu(0)$. On the other hand, by translation-invariance of the system equations \eqref{dtsystem} we may rewrite the formula \eqref{uSr2} for $\uS_r(\widetilde \bx(1))$ as \begin{equation} \label{uSr-shift} \uS_r(\widetilde \bx(1)) = \inf_{ \bu' \in \cD(\bW_c^{(1)}) \colon \bW_c^{(1)} \bu' = \widetilde \bx(1)} \sum_{n=-\infty}^0 \left( \| \bu'(n) \|^2 - \| \by'(n) \|^2 \right), \end{equation} where $\bW_c^{(1)}$ is the shifted controllability operator discussed in Remark \ref{R:Wc} and where $\by' =\widetilde \fT_{F_\Sigma}^{(1)} \bu'$; here now $\bu'$ is supported on $(-\infty, 0]$ rather than on ${\mathbb Z}_- = (-\infty, 0)$ and $\widetilde \fT_{F_\Sigma}^{(1)}$ is the shift of $\widetilde \fT_{F_\Sigma}$ from the interval ${\mathbb Z}_-$ to the interval $(-\infty, 0]$. Let us write sequences $\bu' \in \ell^2_\cU((-\infty, 0])$ in the form $\bu' = (\bv', v')$ as in Remark \ref{R:Wc} where $\bv' \in \ell^2_\cU({\mathbb Z}_-)$ and $v' \in \cU$. As observed in Remark \ref{R:Wc}, $$ \bW_c^{(1)}(\bv', v') = A \bW_c \bv' + B v'.
$$ Furthermore, from the structure of the Laurent operator $\fL_{F_\Sigma}$ \eqref{Laurent} we read off that \begin{equation} \label{shift-Toeplitz} \widetilde \fT_{F_\Sigma}^{(1)} (\bv', v') = \left( \widetilde \fT_{F_\Sigma} \bv', \sum_{k=-\infty}^{-1} C A^{-k-1} B \bv'(k) + D v' \right) \end{equation} where the series converges at least in the weak topology of $\cY$. For $\bv' \in \cD(\bW_c)$, we know from Proposition \ref{P:WcWo'} that $\bW_c \bv'$ is given by \begin{equation} \label{Wc-formula} \bW_c \bv' = \sum_{k=-\infty}^{-1} A^{-k-1} B \bv'(k) \end{equation} where the series converges $\cD(\bW_c^*)$-weakly. We also know under our standing assumption \eqref{A} that $\Obs(C|A) \subset \cD(\bW_c^*)$ (see Proposition \ref{P:HankelDecs} (2)), and hence in particular $C^* y \in \cD(\bW_c^*)$ for all $y \in \cY$. This observation combined with the formula \eqref{Wc-formula} implies that $$ C \bW_c \bv' = \sum_{k=-\infty}^{-1} C A^{-k-1} B \bv'(k) $$ where the series converges weakly in $\cY$. This combined with \eqref{shift-Toeplitz} gives us $$ \widetilde \fT_{F_\Sigma}^{(1)} (\bv', v') = \left( \widetilde \fT_{F_\Sigma} \bv', C \bW_c \bv' + Dv' \right). $$ Thus the formula \eqref{uSr-shift} for $\uS_r(\widetilde \bx(1))$ can be written out in more detail as \begin{equation} \label{uSrtildex(1)} \hspace*{-.2cm} \uS_r(\widetilde \bx(1)) =\!\!\! \inf_{ (\bv', v') \in \cT' } \! \left\{\! ( \| \bv' \|^2 - \| \widetilde \fT_{F_\Sigma} \bv' \|^2) + \| v' \|^2 - \| C \bW_c \bv' + Dv'\|^2\! \right\} \end{equation} where \begin{equation} \label{cT'} \cT' : = \{ (\bv', v') \colon \bv' \in \cD(\bW_c), \, v' \in \cU, \ A \bW_c \bv' + Bv' = \widetilde \bx(1)\}. \end{equation} Note that the infimum \eqref{tocheck} can be identified with the infimum \eqref{uSrtildex(1)} if we restrict the free parameter $(\bv', v')$ to lie in the subset $$ \cT = \{ (\bv', v') \in \cT' \colon \bW_c \bv' = \widetilde \bx(0), \quad v' = \widetilde \bu(0)\}.
$$ As the infimum of an objective function over a given set $\cT'$ is always bounded above by the infimum of the same objective function over a smaller set $\cT \subset \cT'$, the inequality \eqref{tocheck} now follows as wanted. It remains to address item (2), i.e., to show that $S(x_0) \le \uS_r(x_0)$ for any other $\ell^2$-regular storage function $S$ satisfying the appropriate hypotheses. If $x_0 \notin \im \bW_c$, $\uS_r(x_0) = \infty$ and the desired inequality holds trivially, so we assume that $x_0 = \bW_c \bu$ for some $\bu \in \cD(\bW_c)$. Let us approximate $\bu$ by elements of $\ell_{\tu{fin}, \, \cU}({\mathbb Z}_-)$ in the natural way: $$ \bu_K(n) = \begin{cases} \bu(n) &\text{for } -K \le n \le -1, \\ 0 &\text{for } n< -K \end{cases} $$ for $K=1,2,\dots$, and set $x_K = \bW_c \bu_K$. We let $(\bu(n), \bx(n), \by(n))_{n \ge -K}$ be a system trajectory with $\bu(n) = \bu_K(n)$ and with the state initialization $\bx(-K) = 0$. Then $\bx(0) = x_K$, and iteration of the dissipation inequality \eqref{disineq} gives us \begin{equation} \label{tS-ineq} S(x_K) \le \sum_{n=-K}^{-1} \left( \| \bu_K(n) \|^2 - \| (\widetilde \fT_{F_\Sigma} \bu_K)(n) \|^2 \right). \end{equation} We seek to let $K \to \infty$ in this inequality. As $\bu_K \to \bu$ in the norm topology of $\ell^2_\cU({\mathbb Z}_-)$ and $\| \widetilde \fT_{F_\Sigma} \| \le 1$ since $F$ is in the Schur class by assumption, it is clear that the right hand side of \eqref{tS-ineq} converges to $$ \| \bu \|^2_{\ell^2_\cU({\mathbb Z}_-) } - \| \widetilde \fT_{F_\Sigma} \bu \|^2_{\ell^2_\cY({\mathbb Z}_-)}= \| D_{\widetilde \fT_{F_\Sigma}} \bu \|^2_{\ell^2_\cU({\mathbb Z}_-)} $$ as $K \to \infty$. On the other hand, as a consequence of the characterization \eqref{limit-c} of the action of $\bW_c$, it follows that $x_K = \bW_c \bu_K$ converges to $x_0 = \bW_c \bu$ in the $\cD(\bW_c^*)$-weak sense.
Hence, if $S$ is continuous with respect to the $\cD(\bW_c^*)$-weak topology as described in the statement of item (a), we see that $S(x_K) \to S(x_0)$ as $K \to \infty$ and we arrive at the limiting version of inequality \eqref{tS-ineq}: \begin{equation} \label{limit-wS-ineq} S(x_0) \le \| \bu\|^2 - \| \widetilde \fT_{F_\Sigma} \bu \|^2 = \| D_{\widetilde \fT_{F_\Sigma}} \bu \|^2_{\ell^2_\cU({\mathbb Z}_-)}. \end{equation} We may now take the infimum over all $\bu \in \cD(\bW_c)$ with $\bW_c \bu = x_0$ to arrive at the desired inequality $S(x_0) \le \uS_r(x_0)$. This proves item (a) of (2). If $\bW_c$ is bounded, then $x_K = \bW_c \bu_K$ converges in norm to $\bW_c \bu = x_0$. If $S$ is continuous with respect to the norm topology on $\cX$, then $S(x_K) \to S(x_0)$ and we again arrive at the limit inequality \eqref{limit-wS-ineq}, from which the desired inequality $S(x_0) \le \uS_r(x_0)$ again follows. This completes the verification of item (2) in Proposition \ref{P:uSr}. \end{proof} \begin{remark} Note that the fact that $S_a$ is $\ell^2$-regular can alternatively be seen from the fact that $\uS_r$ is an $\ell^2$-regular storage function combined with the first inequality in item (3) of Proposition \ref{P:SaSr}. \end{remark} Collecting some of the observations on the boundedness of $S_a$ and $\uS_r$ from the above results we obtain the following corollary. The inequalities in \eqref{ineqs} follow directly from \eqref{SaOpForm} and \eqref{uSrOpForm}. \begin{corollary}\label{C:boundedSauSr} Assume $\Si$ as in \eqref{dtsystem} is a system satisfying the assumptions \eqref{A}. Define $S_a$ by \eqref{Sa2} and $\uS_r$ by \eqref{uSr2}. For $x_0\in\cD(\bW_o)$ we have \begin{equation}\label{ineqs} \|\bW_ox_0\|^2\leq S_a(x_0)\leq \uS_r(x_0)\leq \|\bu_-\|^2 \end{equation} for all $\bu_-\in\cD(\bW_c)$ with $x_0=\bW_c \bu_-$, with the last inequality being vacuous if $x_0\not\in\im \bW_c$, in which case $\uS_r(x_0)=\infty$.
Hence \begin{align*} \uS_r(x_0)<\infty \quad &\Longleftrightarrow \quad x_0\in \im \bW_c,\\ x_0\in \im \bW_c \quad \Longrightarrow\quad &S_a(x_0)<\infty \quad \Longrightarrow\quad x_0\in\cD(\bW_o). \end{align*} In particular, $\uS_r$ is finite-valued if and only if $\im \bW_c=\cX$, that is, $\Si$ is $\ell^2$-exactly controllable, and $S_a$ is finite-valued in case $\Si$ is $\ell^2$-exactly controllable. \end{corollary} Since ${F_\Si}$ is assumed to be a Schur class function, $\frakL_{{F_\Si}}$ is a contraction, so that $I - \frakL_{{F_\Si}} \frakL_{{F_\Si}}^{*}$ and $I - \frakL_{{F_\Si}}^{*} \frakL_{{F_\Si}}$ are positive-semidefinite operators. We can thus read off from the $(2,2)$-entry in the right-hand side of the first identity and the $(1,1)$-entry in the right hand side of the second identity of \eqref{LFids} that \begin{equation} \label{Hankel-est} D_{\frakT_{{F_\Si}}^{*}}^{2} \succeq \frakH_{{F_\Si}} \frakH_{{F_\Si}}^{*} \ands D_{\widetilde \frakT_{{F_\Si}}}^{2}\succeq \frakH_{{F_\Si}}^{*} \frakH_{{F_\Si}}. \end{equation} The observability and controllability assumptions of \eqref{A} imply that the observability operator $\bW_o:\cD(\bW_o)\to \ell^2_\cY(\BZ_+)$ and the controllability operator $\bW_c:\cD(\bW_c)\to\cX$ are closed densely defined operators that satisfy the properties listed in Propositions \ref{P:WcWo'} and \ref{P:HankelDecs}. As spelled out in Proposition \ref{P:HankelDecs}, the Hankel operator $\frakH_{{F_\Si}}$ admits the factorizations \begin{equation}\label{HankelFact} \frakH_{F_\Si}|_{\cD(\bW_{c})} = \bW_{o} \bW_{c}\ands \frakH_{F_\Si}^*|_{\cD(\bW_{o}^*)} = \bW_{c}^* \bW_{o}^*. \end{equation} Using the Douglas factorization lemma \cite{Douglas} together with the factorizations \eqref{HankelFact}, we arrive at the following result. 
The proof also requires use of the Moore-Penrose generalized inverse $X^\dagger$ of a densely defined closed linear Hilbert-space operator $X:\cD(X)\to\cH_2$, with $\cD(X)\subset\cH_1$: we define $X^\dagger \colon \cD(X^\dagger) = (\im X \oplus (\im X)^\perp) \to \cH_1$ by \begin{equation} \label{MP} \left\{ \begin{array}{rcl} X^\dagger (X h_1) & = & P_{ (\kr X)^\perp} h_1, \\ X^\dagger|_{(\im X)^\perp} & = & 0. \end{array} \right. \end{equation} Then $X^\dagger$ is also closed and has the properties $$ X^\dagger X = P_{(\kr X)^\perp}|_{\cD(X)}, \quad X X^\dagger = P_{\overline{\im} X}|_{ \im X \oplus (\im X)^\perp }. $$ In particular, if $X$ is bounded and surjective, then $X^\dagger$ is a bounded right inverse of $X$, and, if $X$ is bounded, bounded below and injective, then $X^\dagger$ is a bounded left inverse of $X$. \begin{lemma}\label{L:fact} Let the discrete-time linear system $\Si$ in \eqref{dtsystem} satisfy the assumptions in \eqref{A}. Then: \begin{enumerate} \item[(1)] There exists a unique closable operator $X_a$ with domain $\im \bW_c$ mapping into $ (\kr D_{\frakT_{F_{\Sigma}}^{*}})^\perp \subset \ell^2_\cY(\BZ_+)$ so that we have the factorization \begin{equation} \label{fact1} \bW_{o}|_{\im \bW_{c}} = D_{\frakT_{{F_\Si}}^{*}} X_{a}. \end{equation} Moreover, if we let $\overline{X}_{a}$ denote the closure of $X_{a}$, then $\overline{X}_{a}$ is injective. \item[(2)] There exists a unique closable operator $X_{r}$ with domain $\im \bW_{o}^{*}$ mapping into $(\kr D_{\widetilde \frakT_{F_{\Sigma}}})^\perp \subset \ell^{2}_{\cU}({\mathbb Z}_{-})$ so that we have the factorization \begin{equation} \label{fact2} \bW_{c}^{*}|_{\im \bW_{o}^{*}} = D_{\widetilde \frakT_{F_{\Sigma}}} X_{r}. \end{equation} Moreover, if we let $\overline{X}_{r}$ denote the closure of $X_{r}$, then $\overline{X}_{r}$ is injective. \end{enumerate} \end{lemma} \begin{proof} As statement (2) is just a dual version of statement (1), we only discuss the proof of (1) in detail. 
Apply the Douglas factorization lemma to the first of the inequalities in \eqref{Hankel-est} to get the existence of a unique contraction operator \[ Y_a:\ell^{2}_{\cU}({\mathbb Z}_{-})\to (\kr D_{\frakT_{F_{\Sigma}}^{*}})^\perp \subset \ell^{2}_{\cY}({\mathbb Z}_{+}) \] such that \[ D_{\frakT_{{F_\Si}}^{*}}Y_{a} =\frakH_{{F_\Si}}, \quad\mbox{so that, by \eqref{HankelFact},}\quad D_{\frakT_{{F_\Si}}^{*}}Y_{a}|_{\cD(\bW_{c})} =\bW_{o} \bW_{c}. \] If we let $\bW_{c}^{\dagger}$ be the Moore-Penrose generalized inverse \eqref{MP} of $\bW_c$, then \[ \bW_{c}^{\dagger} (x) = {\text{arg min }} \{ \|\bu\|^{2}_{\ell^{2}_{\cU}({\mathbb Z}_{-})} \colon \bu\in\cD(\bW_c),\ x =\bW_{c} \bu \}\quad (x\in \im \bW_c). \] Since $\bW_c$ is closed, $\kr \bW_c$ is a closed subspace of $\ell^2_\cU(\BZ_-)$ and for all $\bu\in\cD(\bW_c)$ with $x = \bW_{c} \bu$ we have $\bW_{c}^{\dagger} (x)= \bu-P_{\kr \bW_c}\bu$. We next define $X_{a} \colon \im \bW_{c} \to \ell^{2}_{\cY}({\mathbb Z}_{+})$ by $$ X_{a} = Y_{a} \bW_{c}^{\dagger}. $$ Then $X_{a}$ is a well-defined, possibly unbounded, operator on the dense domain $\cD(X_{a}) = \im \bW_{c}$. Moreover we have \[ D_{\frakT_{{F_\Si}}^{*}} X_a = D_{\frakT_{{F_\Si}}^{*}} Y_{a} \bW_{c}^{\dagger} = \frakH_{{F_\Si}} \bW_{c}^{\dagger} = \bW_{o} \bW_{c} \bW_{c}^{\dagger} = \bW_{o}|_{\im \bW_{c}}. \] Hence $X_a$ provides the factorization \eqref{fact1}. Furthermore, $X_a = Y_{a} \bW_{c}^{\dagger}$ implies that $\im X_a \subset \im Y_a$, so that $\im X_a \subset (\kr D_{\frakT_{{F_\Si}}^{*}})^\perp$. Moreover, from the factorization \eqref{fact1} we see that this property makes the choice of $X_{a}$ unique. We now check that $X_a$ so constructed is closable. Suppose that $\{x_{0}^{(k)}\}_{k \ge 0}$ is a sequence of vectors in $\im \bW_{c}$ such that $\lim_{k \to \infty} x_{0}^{(k)} = 0$ in $\cX$-norm, while $\lim_{k \to \infty } X_a x_{0}^{(k)} = \by$ in $\ell^{2}_{\cY}({\mathbb Z}_{+})$-norm. 
As $D_{\frakT_{{F_\Si}}^{*}}$ is bounded, it follows that \[ \lim_{k \to \infty} \bW_{o} x_{0}^{(k)} = \lim_{k \to \infty} D_{\frakT_{{F_\Si}}^{*}} X_a x_{0}^{(k)} = D_{\frakT_{{F_\Si}}^{*}} \by \text{ in } \ell^{2}_{\cY}({\mathbb Z}_{+})\text{-norm.} \] Since $\bW_{o}$ is a closed operator and we have $x_{0}^{(k)} \to 0$ in $\cX$-norm, it follows that $D_{\frakT_{{F_\Si}}^{*}} \by = 0$. As $\im X_a \subset (\kr D_{\frakT_{{F_\Si}}^{*}})^{\perp}$ and $X_a x_{0}^{(k)} \to \by$, we also have that $\by \in (\kr D_{\frakT_{{F_\Si}}^{*}})^{\perp}$. It follows that $\by = 0$, and hence $X_a$ is closable. Let $\overline{X}_{a}$ be the closure of $X_{a}$. We check that $\overline{X}_{a}$ is injective as follows. The vector $x_{0}$ being in $\cD(\overline{X}_{a})$ means that there is a sequence of vectors $\{x^{(k)}_{0}\}_{k \ge 1}$ contained in $\cD(X_{a})$ with $\lim_{k \to \infty} x_{0}^{(k)} = x_{0}$ in $\cX$ and $\lim_{k \to \infty} X_{a} x_{0}^{(k)} = \by$ for some $\by \in \ell^{2}_{\cY}({\mathbb Z}_{+})$. The condition that $\overline{X}_{a} x_{0} = 0$ means that in addition $\by = 0$. Since $D_{\frakT_{F_{\Sigma}}^{*}}$ is bounded, it then follows that $\lim_{k \to \infty} D_{\frakT_{F_{\Sigma}}^{*}} X_{a} x_{0}^{(k)} = 0$, or, by \eqref{fact1}, $$ \lim_{k \to \infty}\bW_{o} x_{0}^{(k)} = 0. $$ As we also have $\lim_{k \to \infty} x_{0}^{(k)} = x_{0}$ in $\cX$ and $\bW_{o}$ is a closed operator, it follows that $x_{0} \in \cD(\bW_{o})$ and $\bW_{o} x_{0} = 0$. As $\bW_{o}$ is injective, it follows that $x_{0} = 0$. We conclude that $\overline{X}_{a}$ is injective as claimed. \end{proof} Using the closed operators $\overline{X}_a$ and $\overline{X}_r$ defined in Lemma \ref{L:fact} we now define (possibly unbounded) positive-definite operators $H_a$ and $H_r$ so that the storage functions $S_a$ and $\uS_r$ have the quadratic forms $S_a = S_{H_a}$ and $\uS_r = S_{H_r}$ as in \eqref{QuadStorage1}. We start with $H_a$.
Since $\oX_{a}$ is closed, there is a good polar factorization $$\oX_a = U_{a} |\oX_a|$$ (see \cite[Theorem VIII.32]{RS}); in detail, $\oX_{a}^{*} \oX_{a}$ is selfadjoint with positive selfadjoint square-root $|\oX_a| = (\oX_{a}^{*} \oX_{a})^{\half}$ satisfying $\cD(|\oX_a|) = \cD(\oX_{a})$, $U_{a}$ is a partial isometry with initial space equal to $({\rm Ker}\, \oX_{a})^{\perp}$ and final space equal to $\overline{\rm Im}\, \oX_{a}$ so that we have the factorization $\oX_{a} = U_{a} |\oX_{a}|$. Now set \begin{equation} \label{Ha-def} H_a=\oX_{a}^{*} \oX_{a}, \quad H_a^{\half}=|\oX_a|. \end{equation} As noted in Lemma \ref{L:fact}, $\oX_{a}$ is injective, and thus $H_a$ and $H_a^{\half}$ are injective as well, and as a result $U_{a}$ is an isometry. We proceed with the definition of $H_r$. As the properties of $\oX_{r}$ parallel those of $\oX_{a}$, $\oX_{r}$ has a good polar decomposition $\oX_{r} = U_{r} | \oX_{r}|$ with $| \oX_{r}|$ and $U_{r}$ having similar properties as $| \oX_{a}|$ and $U_{a}$, in particular, $\oX_{r}^*\oX_{r}$ and $|\oX_{r}|$ are injective and $U_{r}$ is an isometry. We then define \begin{equation} \label{Hr-def} H_{r} = \left( \oX_{r}^{*} \oX_{r} \right)^{-1}, \quad H_{r}^{\half} = | \oX_{r} |^{-1}. \end{equation} We shall also need a modification of the factorization \eqref{fact2}. For $\bu \in \cD(\bW_{c})$ and $x \in \im \bW_{o}^{*}$, let us note that \begin{align*} \langle \bW_{c} \bu, x \rangle_{\cX} &=\langle \bu, \bW_{c}^{*} x \rangle_{\ell^{2}_{\cU}({\mathbb Z}_{-})} = \langle \bu, D_{\widetilde \frakT_{F_{\Sigma}}} X_{r} x \rangle_{\ell^{2}_{\cU}({\mathbb Z}_{-})} \text{ (by \eqref{fact2})}\\ & = \langle D_{\widetilde \frakT_{F_{\Sigma}}} \bu, X_{r} x \rangle_{\ell^{2}_{\cU}({\mathbb Z}_{-})}. \end{align*} The end result is that then $D_{\widetilde \frakT_{F_{\Sigma}}} \bu$ is in $\cD(X_{r}^{*})$ and $X_{r}^{*} D_{\widetilde \frakT_{F_{\Sigma}}} \bu = \bW_{c} \bu$. 
In summary we have the following adjoint version of the factorization \eqref{fact2}: \begin{equation} \label{fact3} \bW_{c} = X_{r}^{*} D_{\widetilde \frakT_{F_{\Sigma}}} |_{\cD(\bW_{c})}. \end{equation} In the following statement we use the notion of a {\em core} of a closed, densely defined operator between two Hilbert spaces $\cH$ and $\cK$ (see \cite{RS} or \cite{Kato}), namely: a dense linear submanifold $\cD$ is said to be a {\em core} for the closed, densely defined operator $X$ with domain $\cD(X)$ in $\cH$ mapping into $\cK$ if, given any $x \in \cD(X)$, there is a sequence $\{ x_n \}_{n \ge 1}$ of points in $\cD$ such that $\lim_{n \to \infty} x_n = x$ and also $\lim_{n \to \infty} X x_n = X x$. \begin{theorem}\label{T:Sar} Let the discrete-time linear system $\Si$ in \eqref{dtsystem} satisfy the assumptions in \eqref{A}. Define $X_a$, $\oX_{a}$, $X_{r}$, $\oX_r$ as in Lemma \ref{L:fact} and the closed operators $H_{a}$ and $H_r$ as in the preceding discussion. Then the available storage function $S_{a}$ and the $\ell^2$-regularized required supply $\uS_r$ are given by \begin{align} S_{a}(x_{0}) = \| \oX_{a} x_{0} \|^{2} = \| H_{a}^{\half} x_{0}\|^{2}\quad (x_0\in \im \bW_{c}), \label{form1}\\ \uS_{r}(x_{0}) = \| | \oX_{r} |^{-1} x_{0} \|^{2}=\|H_{r}^{\half} x_{0}\|^2\quad (x_0\in \im \bW_c) \label{form2}. \end{align} In particular, the available storage $S_{a}$ and $\ell^2$-regularized required supply $\uS_r$ agree with quadratic storage functions on $\im \bW_c$. Moreover, $\im \bW_c$ is a core for $H_a^\half$ and $\im \bW_o^*$ is a core for $H_r^{-\half}$.
\end{theorem} \begin{proof} By Lemma \ref{L:fact}, in the operator form of $S_a$ derived in Lemma \ref{L:SaSrOpForm} we can replace $\bW_{o} x_{0}$ by $D_{\frakT_{{F_\Si}}^{*}} X_{a} x_{0}$, leading to \begin{equation}\label{sup} S_{a}(x_{0}) = \sup_{ \bu \in \ell^{2}_{\cU}({\mathbb Z}_{+})} \| D_{\frakT_{{F_\Si}}^{*}} X_{a} x_{0} + \frakT_{{F_\Si}} \bu \|^{2}_{\ell^{2}_{\cY}({\mathbb Z}_{+})} - \| \bu \|^{2}_{\ell^{2}_{\cU}({\mathbb Z}_{+})}. \end{equation} For $x_0\in \im \bW_c$ and each $\bu\in\ell^2_\cU(\BZ_+)$ we have \begin{align*} &\| D_{\frakT_{{F_\Si}}^{*}} X_{a} x_{0} + \frakT_{{F_\Si}} \bu \|^{2} - \| \bu \|^{2}=\\ &\qquad\qquad=\| D_{\frakT_{{F_\Si}}^{*}} X_{a} x_{0} \|^{2} + 2\,\text{Re}\,\langle D_{\frakT_{{F_\Si}}^{*}} X_{a} x_{0}, \frakT_{{F_\Si}} \bu \rangle + \| \frakT_{{F_\Si}}\bu \|^{2}- \|\bu \|^{2}\\ &\qquad\qquad=\| D_{\frakT_{{F_\Si}}^{*}} X_{a} x_{0} \|^{2} + 2\,\text{Re}\,\langle D_{\frakT_{{F_\Si}}^{*}} X_{a} x_{0}, \frakT_{{F_\Si}} \bu \rangle - \| D_{\frakT_{{F_\Si}}}\bu \|^{2}\\ &\qquad\qquad=\| D_{\frakT_{{F_\Si}}^{*}} X_{a} x_{0} \|^{2} + 2\,\text{Re}\,\langle X_{a} x_{0}, D_{\frakT_{{F_\Si}}^{*}} \frakT_{{F_\Si}} \bu \rangle - \| D_{\frakT_{{F_\Si}}} \bu \|^{2} \\ &\qquad\qquad= \| D_{\frakT_{{F_\Si}}^{*}} X_{a} x_{0} \|^{2} + 2\,\text{Re}\,\langle X_{a} x_{0}, \frakT_{{F_\Si}} D_{\frakT_{{F_\Si}}} \bu \rangle - \| D_{\frakT_{{F_\Si}}} \bu \|^{2} \\ &\qquad\qquad= \| D_{\frakT_{{F_\Si}}^{*}} X_{a} x_{0} \|^{2} + 2\,\text{Re}\,\langle \frakT_{{F_\Si}}^{*} X_{a} x_{0}, D_{\frakT_{{F_\Si}}} \bu \rangle - \| D_{\frakT_{{F_\Si}}} \bu \|^{2} \\ &\qquad\qquad= \| D_{\frakT_{{F_\Si}}^{*}} X_{a} x_{0} \|^{2} + \|\frakT_{{F_\Si}}^{*} X_{a} x_{0} \|^{2} - \| \frakT_{{F_\Si}}^{*} X_{a} x_{0} - D_{\frakT_{{F_\Si}}} \bu \|^{2}\\ &\qquad\qquad= \| X_{a} x_{0} \|^{2} - \| \frakT_{{F_\Si}}^{*} X_{a} x_{0} - D_{\frakT_{{F_\Si}}} \bu \|^{2}. 
\end{align*} By construction, we have $\im X_{a} \subset (\kr D_{\frakT_{{F_\Si}}^{*}})^{\perp} =\overline{\im} D_{\frakT_{{F_\Si}}^{*}}$. Using that $\frakT_{{F_\Si}}^{*}D_{\frakT_{{F_\Si}}^{*}}=D_{\frakT_{{F_\Si}}} \frakT_{{F_\Si}}^{*}$, we obtain $$ \frakT_{{F_\Si}}^{*} \overline{\im} D_{\frakT_{{F_\Si}}^{*}}\subset \overline{\im} D_{\frakT_{{F_\Si}}}. $$ Thus $\im \frakT_{{F_\Si}}^{*} X_{a} \subset \overline{\im} D_{\frakT_{{F_\Si}}}$. Hence there is a sequence $\bu_{k}$ of input signals in $\ell^{2}_{\cU}({\mathbb Z}_{+})$ so that $\| \frakT_{{F_\Si}}^{*} X_{a} x_{0} - D_{\frakT_{{F_\Si}}} \bu_{k} \| \to 0$ as $k \to \infty$. We conclude that for $x_0\in \im \bW_c$ the supremum in \eqref{sup} is given by \[ S_{a}(x_{0}) = \| X_{a} x_{0} \|^{2}=\| \oX_{a} x_{0} \|^{2}=\| H_a^{\half} x_{0} \|^{2}. \] Let $x_0\in \im\bW_c$. Given a $\bu \in\cD(\bW_c)$, by the factorization \eqref{fact3} we see that $\bW_c\bu=x_0$ if and only if $X_r^{*} D_{\widetilde\frakT_{F_\Si}}\bu=x_0$. Therefore, we have \begin{align*} \uS_r(x_0) &=\inf_{\bu \in \cD(\bW_{c}), \, X_r^{*} D_{\widetilde\frakT_{F_\Si}}\bu=x_0} \| D_{\widetilde \frakT_{{F_\Si}}} \bu \|^{2} =\inf_{\bv \in D_{\widetilde \frakT_{{F_\Si}}}\cD(\bW_{c}), \, X_r^{*} \bv=x_0} \| \bv \|^{2}. \end{align*} A general property of operator closures is $\oX_{r}^{*} = X_{r}^{*}$. Hence \begin{equation} \label{inf1} \uS_{r}(x_{0}) = \inf_{\bv \in D_{\widetilde \frakT_{{F_\Si}}}\cD(\bW_{c}), \, \oX_r^{*}\bv=x_0} \| \bv \|^{2}. \end{equation} As $x_{0} \in \im \bW_{c}$ by assumption, the factorization \eqref{fact3} gives us a $\bu_{0} \in \cD(\bW_{c})$ so that \begin{equation} \label{x0} x_{0} = \oX_{r}^{*} D_{\widetilde \frakT_{F_{\Sigma}}} \bu_{0}. \end{equation} In particular, $x_{0}$ has the form $x_{0} = \oX_{r}^{*} \bv_{0}$ with $\bv_{0} \in D_{\widetilde \frakT_{F_{\Sigma}}} \cD(\bW_{c})$. 
From \eqref{x0} we see that the general solution $\bv \in \cD(\oX_{r}^{*})$ of $x_{0} = \oX_{r}^{*} \bv$ is \begin{equation} \label{gensol} \bv = D_{\widetilde \frakT_{F_{\Sigma}}} \bu_{0} + k \text{ where } k \in \kr \oX_{r}^{*}. \end{equation} By construction the target space for $X_{r}$ (and $\oX_{r}$) is $\left( \kr D_{\widetilde \frakT_{F_{\Sigma}}}\right)^{\perp}$ so the domain space for $\oX_{r}^{*}$ is $( \kr D_{\widetilde \frakT_{F_{\Sigma}}})^\perp$ and $\kr \oX_r^* \subset \overline{\im} D_{\widetilde \fT_{F_\Sigma}}$. Hence the infimum in \eqref{inf1} remains unchanged if we relax the constraint $\bv \in D_{\widetilde \frakT_{F_{\Sigma}}} \cD(\bW_c)$ to just $\bv \in \cD(\oX_{r}^{*})$, i.e., \begin{equation} \label{inf2} \uS_{r}(x_{0}) = \inf_{ \bv \in \cD(\oX_{r}^{*}), \, \oX_{r}^{*} \bv = x_{0}} \| \bv\|^{2}. \end{equation} In terms of the polar decomposition $\oX_{r} = U_{r} | \oX_{r}|$ for $\oX_{r}$, we have $$ \oX_{r}^{*} = | \oX_{r}| U_{r}^{*} $$ with $$ \cD(\oX_{r}^{*}) = \{ \bu \in \overline{\im} D_{\widetilde \frakT_{F_{\Sigma}}} \colon U_{r}^{*} \bu \in \cD(|\oX_{r}|) = \cD(\oX_{r}) \}. $$ Since $|\oX_{r}|$ is injective and $U_{r}$ is an isometry with range equal to $(\kr \oX_{r}^{*})^{\perp}$, the constraint $|\oX_{r}| U_{r}^{*} \bv = \oX_{r}^{*} \bv = x_{0}$ is equivalent to $$ P_{(\kr \oX_{r}^{*})^{\perp}} \bv = U_{r} U_{r}^{*} \bv = U_{r} |\oX_{r}|^{-1} x_{0}. $$ Since we want to minimize $\|\bv\|^2$ with $P_{(\kr \oX_{r}^*)^\perp}\bv$ equal to $U_r|\oX_{r}|^{-1}x_0$ (where $|\oX_{r}|^{-1}x_0\in \cD(\oX_r)$), it is clear that this is achieved at $\bv_\textup{opt}=U_r|\oX_{r}|^{-1}x_0$, so that \[ \uS_r(x_0)=\|\bv_\textup{opt}\|^2=\|U_r|\oX_{r}|^{-1}x_0\|^2=\||\oX_{r}|^{-1}x_0\|^2 =\|H_r^\half x_0\|^2, \] as claimed. It remains to verify the last assertion concerning the core properties of $\im \bW_c$ and $\im \bW_o^*$. By definition $H_a^\half = | \overline{X}_a|$ where $\overline{X}_a$ is defined to be the closure of $X_a = \overline{X}_a|_{\im \bW_c}$.
Hence $\im \bW_c$ by definition is a core for $\overline{X}_a$ from which it immediately follows that $\im \bW_c$ is a core for $H_a^\half = | \overline{X}_a |$. That $\im \bW_o^*$ is a core for $H_r^{-\half} = | \overline{X}_r |$ follows in the same way via a dual analysis. \end{proof} \section{The dual system $\Si^*$} \label{S:dual} In this section we develop a parallel theory for the dual system $\Si^*$ of $\Si$, which is the system with system matrix equal to the adjoint of \eqref{sysmat} evolving in backward-time. \subsection{Controllability, observability, minimality and transfer function for the dual system} With the discrete-time linear system $\Sigma$ given by \eqref{dtsystem} with system matrix $M = \sbm{ A & B \\ C & D }$ we associate the dual system $\Sigma^*$ with system matrix $M^* = \sbm{ A^* & C^* \\ B^* & D^* } \colon \sbm{ \cX \\ \cY} \to \sbm{ \cX \\ \cU}$. It will be convenient for our formalism here to let the dual system evolve in backward time; we therefore define the system $\Sigma^*$ to be given by the system input/state/output equations \begin{equation} \label{dtsystem*} \Sigma^* \colon = \left\{ \begin{array}{rcl} \bx_*(n-1) & = & A^* \bx_*(n) + C^* \bu_*(n), \\ \by_*(n) & = & B^* \bx_*(n) + D^* \bu_*(n). \end{array} \right. \end{equation} If we impose a final condition $\bx_*(-1) = x_0$ and feed in an input-sequence $\{ \bu_*(n) \}_{n \in {\mathbb Z}_-}$, one can solve recursively to get, for $n \le -1$, $$ \left\{ \begin{array}{rcl} \bx_*(n) & = & A^{* -n-1} x_0 + \sum_{j=n+1}^{-1} A^{* -n+j-1} C^* \bu_*(j), \\ \by_*(n) & = & B^* A^{* -n-1} x_0 + \sum_{j = n+1}^{-1} B^* A^{* -n+j-1} C^* \bu_*(j) + D^* \bu_*(n). \end{array} \right. $$ Alternatively, the $Z$-transform $ \{\bx_*(n)\}_{n \in {\mathbb Z}_-} \mapsto \widehat \bx_*(\lambda) = \sum_{n=-\infty}^{-1} \bx_*(n) \lambda^n $ may be applied directly to the system equations \eqref{dtsystem*}.
Combining this with the observation that \begin{align*} \sum_{n=-\infty}^{-1} \bx_*(n-1) \lambda^n & = \lambda \left( \sum_{n=-\infty}^{-1} \bx_*(n-1) \lambda^{n-1} \right) = \lambda \left( \sum_{n=-\infty}^{-2} \bx_*(n) \lambda^{n} \right) \\ & = \lambda \left( \widehat \bx_*(\lambda) - x_0 \lambda^{-1} \right) = \lambda \widehat \bx_*(\lambda) - x_0 \end{align*} converts the first system equation in \eqref{dtsystem*} to $$ \lambda \widehat \bx_*(\lambda) - x_0 = A^* \widehat \bx_*(\lambda) + C^* \widehat \bu_*(\lambda) $$ leading to the $Z$-transformed version of the whole system: $$ \left\{ \begin{array}{rcl} \widehat \bx_*(\lambda) & = & (\lambda I - A^*)^{-1} x_0 + (\lambda I - A^*)^{-1} C^* \widehat \bu_*(\lambda), \\ \widehat \by_*(\lambda) & = & B^* (\lambda I - A^*)^{-1} x_0 + F_{\Sigma^*}(\lambda) \widehat \bu_*(\lambda), \end{array} \right. $$ where the {\em transfer function} $F_{\Sigma^*}(\lambda)$ for the system $\Sigma^*$ is then given by \begin{align} F_{\Sigma^*}(\lambda) & = D^* + B^* (\lambda I - A^*)^{-1} C^* \notag \\ & = D^* + \lambda^{-1} B^* (I - \lambda^{-1} A^*)^{-1} C^* = F_\Sigma(1/\overline{\lambda})^* \label{transfunc*} \end{align} which is an analytic function on a neighborhood of the point at $\infty$ in the complex plane. Moreover, $F_{\Sigma^*}$ has analytic continuation to a function analytic on the exterior of the unit disk ${\mathbb D}_e : = \{ \lambda \in {\mathbb C} \colon |\lambda| > 1\} \cup \{\infty\}$ exactly when $F_\Sigma$ has analytic continuation to a function analytic on the unit disk ${\mathbb D}$ with equality of corresponding $\infty$-norms: $$ \| F_{\Sigma^*} \|_{\infty, {\mathbb D}_e} : = \sup_{\lambda \in {\mathbb D}_e} \| F_{\Sigma^*}(\lambda) \| = \sup_{\lambda \in {\mathbb D}} \| F_\Sigma(\lambda) \| =: \| F_\Sigma \|_{\infty, {\mathbb D}}. $$ All the analysis done up to this point for the system $\Sigma$ has a dual analogue for the system $\Sigma^*$.
In particular, the observability operator $\bW_{*o}$ for the dual system is obtained by running the system \eqref{dtsystem*} with final condition $\bx_*(-1) = x_0$ and input string $\bu_*(n) = 0$ for $n \le -1$, resulting in the output string $\{ B^* A^{* (-n-1)} x_0 \}_{n \in {\mathbb Z}_-}$. Since we are interested in a setting with operators on $\ell^2$, we define the {\em observability operator} $\bW_{*o}$ for $\Sigma^*$ to have domain $$ \cD(\bW_{*o}) = \{ x_0 \in \cX \colon \{ B^* A^{*(- n-1)} x_0\}_{n\in\BZ_-} \in \ell^2_\cU({\mathbb Z}_-)\} $$ with action given by $$ \bW_{*o} x_0 = \{ B^* A^{* (-n-1)} x_0 \}_{n \in {\mathbb Z}_-} \text{ for } x_0 \in \cD(\bW_{*o}). $$ Note that $\bW_{*o}$ so defined is exactly the same as the adjoint controllability operator $\bW_c^*$ for the original system \eqref{bWc*1}--\eqref{bWc*2}, and in fact viewing this operator as $\bW_{*o}$ gives a better control-theoretic interpretation for this operator. Similarly it is natural to define the adjoint controllability operator for the adjoint system $(\bW_{*c})^*$ by $$ \cD((\bW_{*c})^*) = \{ x_0 \in \cX \colon \{ C A^n x_0 \}_{n \in {\mathbb Z}_+} \in \ell^2_\cY({\mathbb Z}_+)\} = \cD(\bW_o) $$ with action given by $$ \bW_{*c}^* x_0 = \{ C A^n x_0 \}_{n \in {\mathbb Z}_+} = \bW_o x_0. $$ In view of the equalities \begin{equation} \label{identifications} \bW_{*o} = \bW_c^*, \quad (\bW_{*c})^* = \bW_o, \quad (\bW_{*o})^* = \bW_c, \quad \bW_{*c} = \bW_o^*, \end{equation} one can work out the dual analogue of Proposition \ref{P:WcWo'}, either by redoing the original proof with the backward-time system $\Sigma^*$ in place of the forward-time system $\Sigma$, or simply by making the substitutions \eqref{identifications} in the statement of the results.
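To make the identifications \eqref{identifications} concrete, we record a simple scalar example; it is included only for illustration and is not used in the sequel. \begin{remark} Consider the scalar case $\cX = \cU = \cY = {\mathbb C}$ with $A = a$ for some $0 < |a| < 1$, $B = b$ and $C = c$. Then all the operators involved are bounded, with $$ \bW_o x_0 = \{ c\, a^n x_0 \}_{n \in {\mathbb Z}_+}, \qquad \bW_c \bu_- = \sum_{k=-\infty}^{-1} a^{-k-1} b\, \bu_-(k), $$ while running the backward-time system \eqref{dtsystem*} with final condition $\bx_*(-1) = x_0$ and zero input produces the output string $$ \bW_{*o} x_0 = \{ \overline{b}\, \overline{a}^{\,-n-1} x_0 \}_{n \in {\mathbb Z}_-}, $$ which is exactly $\bW_c^* x_0$, in accordance with the first identity in \eqref{identifications}. \end{remark}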
Let us now assume that $F_\Sigma$ has analytic continuation to a bounded analytic $\cL(\cU, \cY)$-valued function on the unit disk, or equivalently, $F_{\Sigma^*}$ has analytic continuation to a bounded analytic $\cL(\cY, \cU)$-valued function on the exterior of the unit disk ${\mathbb D}_e$. Then $F_\Sigma$ and $F_{\Sigma^*}$ can be identified via strong nontangential boundary-value limits with $L^\infty$-functions on the unit circle ${\mathbb T}$; the relation between these boundary-value functions is simply $$ F_{\Sigma^*}(\lambda) = F_\Sigma(\lambda)^* \quad (\mbox{a.e. } \la\in\BT) $$ with the consequence that the associated multiplication operators $$ M_{F_\Sigma} \colon L^2_\cU({\mathbb T}) \to L^2_\cY({\mathbb T}), \quad M_{F_{\Sigma^*}} \colon L^2_\cY({\mathbb T}) \to L^2_\cU({\mathbb T}) $$ given by $$ M_{F_\Sigma} \colon \widehat \bu(\lambda) \mapsto F_\Sigma(\lambda) \cdot \widehat \bu(\lambda), \quad M_{F_{\Sigma^*}} \colon \widehat \bu_*(\lambda) \mapsto F_{\Sigma^*}(\lambda) \cdot \widehat \bu_*(\lambda) $$ are adjoints of each other: $$ (M_{F_\Sigma})^* = M_{F_{\Sigma^*}}. $$ Note also that $M_{F_\Sigma}$ maps $H^2_\cU({\mathbb D})$ into $H^2_\cY({\mathbb D})$ while $M_{F_{\Sigma^*}} = M_{F_\Sigma}^*$ maps $(H^2_\cY)^\perp: = L^2_\cY({\mathbb T}) \ominus H^2_\cY({\mathbb D}) \cong H^2_\cY({\mathbb D}_e)$ into $(H^2_\cU)^\perp := L^2_\cU({\mathbb T}) \ominus H^2_\cU({\mathbb D}) \cong H^2_\cU({\mathbb D}_e)$.
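As a quick illustration of the adjoint relation between the two multiplication operators (again included only for orientation), consider the following scalar example. \begin{remark} Take $\cU = \cY = {\mathbb C}$ and $F_\Sigma(\lambda) = \lambda$. Then $F_{\Sigma^*}(\lambda) = F_\Sigma(1/\overline{\lambda})^* = 1/\lambda$, which on the unit circle agrees with the boundary-value relation $F_{\Sigma^*}(\lambda) = F_\Sigma(\lambda)^* = \overline{\lambda}$. Correspondingly, $M_{F_\Sigma}$ is the bilateral shift (multiplication by $\lambda$) on $L^2({\mathbb T})$, whose adjoint is multiplication by $\overline{\lambda}$, that is, $M_{F_{\Sigma^*}}$. Moreover $M_{F_\Sigma}$ maps $H^2({\mathbb D})$ into itself while $M_{F_{\Sigma^*}}$ maps $(H^2)^\perp$ into itself, consistent with the mapping properties noted above. \end{remark}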
It is natural to define the frequency-domain Hankel operator ${\mathbb H}_{F_{\Sigma^*}}$ for the adjoint system as the operator from $H^2_\cY({\mathbb D}_e)^\perp = H^2_\cY({\mathbb D})$ (the past from the point of view of the backward-time system $\Sigma^*$) to $H^2_\cU({\mathbb D}_e) =H^2_\cU({\mathbb D})^\perp$ (the future from the point of view of $\Sigma^*$) by \begin{equation} \label{Hankel-identification} {\mathbb H}_{F_{\Sigma^*}} = P_{H^2_\cU({\mathbb D})^\perp} M_{F_{\Sigma^*}}|_{H^2_\cY({\mathbb D})} = ( {\mathbb H}_{F_\Sigma})^*. \end{equation} After application of the inverse $Z$-transform, we see that the time-domain version $\fH_{F_{\Sigma^*}}$ of the Hankel operator for $\Sigma^*$ is just the adjoint $(\fH_{F_\Sigma})^*$ of the time-domain version of the Hankel operator for $\Sigma$, namely $$ \fH_{F_{\Sigma^*}} = [ B^* A^{*(-i + j -1)} C^* ]_{i < 0, j \ge 0} \colon \ell^2_\cY({\mathbb Z}_+) \to \ell^2_\cU({\mathbb Z}_-), $$ from which we see immediately the formal factorization \begin{equation} \label{Hankel-fact*} \fH_{F_{\Sigma^*}} = \operatorname{col}_{i<0} [B^* A^{*( -i -1)}] \cdot \operatorname{row}_{j \ge 0} [A^{*j} C^* ] = \bW_{*o} \bW_{*c} = \bW_c^* \bW_o^*. \end{equation} With all these observations in place, it is straightforward to formulate the dual version of Proposition \ref{P:HankelDecs}, again, either by redoing the proof of Proposition \ref{P:HankelDecs} with the backward-time system $\Sigma^*$ in place of the forward-time system $\Sigma$, or by simply substituting the identifications \eqref{identifications} and \eqref{Hankel-identification}. Note next that an immediate consequence of the identifications \eqref{identifications} is that $\ell^2$-exact controllability for $\Sigma$ is the same as $\ell^2$-exact observability for $\Sigma^*$ and $\ell^2$-exact observability for $\Sigma$ is the same as $\ell^2$-exact controllability for $\Sigma^*$.
With this observation in hand, the dual version of Proposition \ref{P:ell2implics} is immediate. \subsection{Storage functions for the adjoint system} Let $S_*$ be a function from $\cX$ to $[0, \infty]$. In parallel with what is done in Section \ref{S:Storage}, we define $S_*$ to be a {\em storage function for the system $\Sigma^*$} if \begin{equation} \label{disineq*} S_*(\bx_*(n-1)) \le S_*(\bx_*(n)) + \| \bu_*(n) \|^2_\cY - \| \by_*(n) \|^2_\cU \text{ for } n \le N_0 \end{equation} holds over all system trajectories $(\bu_*(n), \bx_*(n), \by_*(n))_{n \le N_0}$ of the system $\Sigma^*$ in \eqref{dtsystem*} with state initialization $\bx_*(N_0) = x_0$ for some $x_0 \in \cX$ at some $N_0 \in {\mathbb Z}$, and $S_*$ is normalized to satisfy \begin{equation} \label{normalization*} S_*(0) = 0. \end{equation} Then by redoing the proof of Proposition \ref{P:storage-Schur} with the backward-time system $\Sigma^*$ in place of the forward-time system $\Sigma$, we arrive at the following dual version of Proposition \ref{P:storage-Schur}. \begin{proposition} \label{P:storage-Schur*} Suppose that the system $\Sigma^*$ in \eqref{dtsystem*} has a storage function $S_*$ as in \eqref{disineq*} and \eqref{normalization*}. Then the transfer function $F_{\Sigma^*}$ of $\Sigma^*$ defined by \eqref{transfunc*} has an analytic continuation to the exterior unit disk ${\mathbb D}_e$ in the Schur class $\cS_{{\mathbb D}_e}(\cY, \cU)$. \end{proposition} Note that by the duality considerations already discussed above, an equivalent conclusion is that $F_\Sigma$ has analytic continuation to a function in the Schur class $\cS(\cU, \cY)$ over the unit disk.
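In finite dimensions the dissipation inequality \eqref{disineq*} is easy to test along randomly generated trajectories. The sketch below is illustrative only: a strictly contractive system matrix is manufactured so that $S_*(x) = \|x\|^2$ (the quadratic storage function with $H_* = I$) is a storage function for $\Sigma^*$, and the adjoint system is then run backwards in time with the inequality checked at every step:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p = 4, 2, 3                      # dim X, dim U, dim Y

# random system matrix M : X (+) U -> X (+) Y, scaled to be a strict
# contraction; then M^* is also a contraction and S_*(x) = ||x||^2 is a
# storage function for the backward-time adjoint system Sigma^*
M = rng.standard_normal((n + p, n + m))
M /= 1.01 * np.linalg.norm(M, 2)
A, B = M[:n, :n], M[:n, n:]
C, D = M[n:, :n], M[n:, n:]

x = rng.standard_normal(n)             # state x_*(N0) at the initial time N0
for _ in range(50):                    # run backwards: n = N0, N0 - 1, ...
    u = rng.standard_normal(p)         # input u_*(n), a vector in Y
    x_prev = A.T @ x + C.T @ u         # x_*(n-1) = A^* x_*(n) + C^* u_*(n)
    y = B.T @ x + D.T @ u              # y_*(n)   = B^* x_*(n) + D^* u_*(n)
    lhs = x_prev @ x_prev              # S_*(x_*(n-1))
    rhs = x @ x + u @ u - y @ y        # S_*(x_*(n)) + ||u_*||^2 - ||y_*||^2
    assert lhs <= rhs + 1e-12          # the dissipation inequality
    x = x_prev
```

The inequality holds here with margin since $\|M^*[x_*; u_*]\|^2 \le \|M\|^2 (\|x_*\|^2 + \|u_*\|^2)$ with $\|M\| < 1$.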
We say that $S_*$ is a {\em quadratic storage function} for $\Sigma^*$ if $S_*$ is a storage function of the form \begin{equation} \label{QuadStorage1*} S_*(x) = S_{H_*}(x) = \begin{cases} \| H_*^\half x \|^2 &\text{for } x \in \cD(H_*^\half), \\ +\infty &\text{otherwise,} \end{cases} \end{equation} where $H_*$ is a (possibly) unbounded positive-semidefinite operator on $\cX$. To analyze quadratic storage functions for $\Sigma^*$, we introduce the adjoint KYP-inequality: we say that the bounded selfadjoint operator $H_*$ on $\cX$ satisfies the {\em adjoint KYP-inequality} if \begin{equation} \label{KYP1*} \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} H_* & 0 \\ 0 & I_\cU \end{bmatrix} \begin{bmatrix} A & B \\ C & D \end{bmatrix}^* \preceq \begin{bmatrix} H_* & 0 \\ 0 & I_\cY \end{bmatrix}. \end{equation} More generally, for a (possibly) unbounded positive-semidefinite operator $H_*$ on $\cX$, we say that $H_*$ satisfies the {\em generalized adjoint KYP-inequality} if \begin{equation} \label{KYP1b'*} A^* \cD(H_*^\half) \subset \cD(H_*^\half), \quad C^* \cY \subset \cD(H_*^\half), \end{equation} and for all $x_* \in \cD(H_*^\half)$ and $u_* \in \cY$ we have \begin{equation} \label{KYP1b*} \left\| \begin{bmatrix} H_*^\half & 0 \\ 0 & I_\cY \end{bmatrix} \begin{bmatrix} x_* \\ u_* \end{bmatrix} \right\|^2 - \left\| \begin{bmatrix} H_*^\half & 0 \\ 0 & I_\cU \end{bmatrix} \begin{bmatrix} A^* & C^* \\ B^* & D^* \end{bmatrix} \begin{bmatrix} x_* \\ u_* \end{bmatrix} \right\|^2 \ge 0. \end{equation} Then the dual version of Proposition \ref{P:QuadStorage} is straightforward. \begin{proposition} \label{P:QuadStorage*} Suppose that the function $S_*$ has the form \eqref{QuadStorage1*} for a (possibly) unbounded positive-semidefinite operator $H_*$ on $\cX$. Then $S_*$ is a storage function for $\Sigma^*$ if and only if $H_*$ is a solution of the generalized adjoint KYP-inequality \eqref{KYP1b'*}--\eqref{KYP1b*}.
In particular, $S_*$ is a finite-valued storage function for $\Sigma^*$ if and only if $H_*$ is a bounded positive-semidefinite operator satisfying the adjoint KYP-inequality \eqref{KYP1*}. \end{proposition} We next discuss a direct connection between positive-definite solutions $H$ of the KYP-inequality \eqref{KYP1} and positive-definite solutions $H_*$ of the adjoint KYP-inequality \eqref{KYP1*}. First let us suppose that $H$ is a bounded strictly positive-definite solution of the KYP-inequality \eqref{KYP1}. Set $$ Q = \begin{bmatrix} H^\half & 0 \\ 0 & I_\cY\end{bmatrix} \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} H^{-\half} & 0 \\ 0 & I_\cU \end{bmatrix}. $$ Then the KYP-inequality \eqref{KYP1} is equivalent to $Q^* Q \preceq I$, i.e., to the fact that the operator $Q \colon \sbm{ \cX \\ \cU} \to \sbm{ \cX \\ \cY}$ is a contraction operator. But then the adjoint $Q^*$ of $Q$ is also a contraction operator, i.e., $Q Q^* \preceq I$. Writing out $$ Q^* = \begin{bmatrix} H^{-\half} & 0 \\ 0 & I_\cU \end{bmatrix} \begin{bmatrix} A^* & C^* \\ B^* & D^* \end{bmatrix} \begin{bmatrix} H^\half & 0 \\ 0 & I_\cY \end{bmatrix} $$ and rearranging gives $$ \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} H^{-1} & 0 \\ 0 & I_\cU \end{bmatrix} \begin{bmatrix} A^* & C^* \\ B^* & D^* \end{bmatrix} \preceq \begin{bmatrix} H^{-1} & 0 \\ 0 & I_\cY \end{bmatrix}, $$ i.e., $H_* := H^{-1}$ is a solution of the adjoint KYP-inequality \eqref{KYP1*} for the adjoint system $\Sigma^*$. Conversely, by flipping the roles of $\Sigma$ and $\Sigma^*$ and using that $\Sigma^{**} = \Sigma$, we see that if $H_*$ is a bounded, strictly positive-definite solution of the adjoint KYP-inequality \eqref{KYP1*}, then $H := H_*^{-1}$ is a bounded, strictly positive-definite solution of the KYP-inequality \eqref{KYP1}.
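The correspondence $H \mapsto H_* = H^{-1}$ can likewise be checked numerically in finite dimensions. In the sketch below (illustrative only), the system matrix $M$ is manufactured from a random strict contraction $K$ so that the operator $Q$ above equals $K$; then $H$ solves the KYP-inequality by construction, and the code verifies that $H^{-1}$ solves the adjoint KYP-inequality:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, p = 4, 2, 3                           # dim X, dim U, dim Y

G = rng.standard_normal((n, n))
H = G @ G.T + np.eye(n)                     # strictly positive definite H
w, V = np.linalg.eigh(H)
Hh  = V @ np.diag(np.sqrt(w))  @ V.T        # symmetric square root H^{1/2}
Hmh = V @ np.diag(w ** -0.5)   @ V.T        # H^{-1/2}

def blk(X, Y):                              # diag(X, Y) as a dense block matrix
    return np.block([[X, np.zeros((X.shape[0], Y.shape[1]))],
                     [np.zeros((Y.shape[0], X.shape[1])), Y]])

# build M so that Q = diag(H^{1/2}, I) M diag(H^{-1/2}, I) equals the strict
# contraction K; then H solves the forward KYP-inequality by construction
K = rng.standard_normal((n + p, n + m))
K /= 1.01 * np.linalg.norm(K, 2)
M = blk(Hmh, np.eye(p)) @ K @ blk(Hh, np.eye(m))

# forward KYP: M^* diag(H, I_Y) M <= diag(H, I_U)
gap = blk(H, np.eye(m)) - M.T @ blk(H, np.eye(p)) @ M
assert np.linalg.eigvalsh(gap).min() >= -1e-10

# adjoint KYP with H_* = H^{-1}: M diag(H^{-1}, I_U) M^* <= diag(H^{-1}, I_Y)
Hinv = np.linalg.inv(H)
gap_star = blk(Hinv, np.eye(p)) - M @ blk(Hinv, np.eye(m)) @ M.T
assert np.linalg.eigvalsh(gap_star).min() >= -1e-10
```

The two assertions are the positive-semidefiniteness of the two KYP gaps; numerically they hold with margin since $K$ is a strict contraction.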
The same correspondence between solutions of the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b} for $\Sigma$ and solutions of the generalized KYP-inequality for the adjoint system \eqref{KYP1b'*}--\eqref{KYP1b*} continues to hold, but the details are more delicate, as explained in the following proposition. For an alternative proof see Proposition 4.6 in \cite{AKP06}. \begin{proposition} \label{P:KYPduality} Suppose $\Sigma$ in \eqref{dtsystem} is a linear system with system matrix $M = \sbm{ A & B \\ C & D}$ while $\Sigma^*$ is the adjoint system \eqref{dtsystem*} with system matrix $M^* = \sbm{ A^* & C^* \\ B^* & D^* }$. Then the {\rm(}possibly unbounded{\rm)} positive-definite operator $H$ is a solution of the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b} for $\Sigma$ if and only if $H^{-1}$ is a positive-definite solution of the generalized KYP-inequality \eqref{KYP1b'*}--\eqref{KYP1b*} for $\Sigma^*$. \end{proposition} \begin{proof} Suppose that the positive-definite operator $H$ with dense domain $\cD(H)$ in $\cX$ solves the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b}. Define an operator $Q \colon \sbm{ \im H^\half \\ \cU } \to \sbm{ \im H^\half \\ \cY}$ by $$ Q \colon \begin{bmatrix} H^\half & 0 \\ 0 & I_\cU \end{bmatrix} \begin{bmatrix} x \\ u \end{bmatrix} \mapsto \begin{bmatrix} H^\half & 0 \\ 0 & I_\cY \end{bmatrix} \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} x \\ u \end{bmatrix} $$ for $x \in \cD(H^\half)$ and $u \in \cU$. We can write the formula for $Q$ more explicitly in terms of $x' = H^\half x \in \im H^\half$ as $$ Q \colon \begin{bmatrix} x' \\ u \end{bmatrix} \mapsto \begin{bmatrix} H^\half & 0 \\ 0 & I_\cY \end{bmatrix} \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} H^{-\half} & 0 \\ 0 & I_\cU \end{bmatrix} \begin{bmatrix} x' \\ u \end{bmatrix} $$ for $x' \in \im H^\half$ and $u \in \cU$.
The content of the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b} is that $Q$ is a well-defined contraction operator from $\sbm{ \im H^\half \\ \cU}$ into $\sbm{\cX \\ \cY}$ and hence has a uniquely determined contractive extension to a contraction operator from $\sbm{ \cX \\ \cU}$ to $\sbm{ \cX \\ \cY}$. Let us now choose arbitrary vectors $x \in \cD(H^\half)$, $x_* \in \cD(H^{-\half}) = \im H^\half$, $u \in \cU$, $u_* \in \cY$ and set $x' = H^\half x$, $x_*' = H^{-\half} x_*$. Then we compute on the one hand \begin{align*} \left \langle \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} x \\ u \end{bmatrix}, \begin{bmatrix} x_* \\ u_* \end{bmatrix} \right\rangle & = \left\langle \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} H^{-\half} & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} x' \\ u \end{bmatrix}, \begin{bmatrix} H^\half x_*' \\ u_* \end{bmatrix} \right\rangle \\ & = \left\langle \begin{bmatrix} H^\half & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} H^{-\half} & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} x' \\ u \end{bmatrix}, \, \begin{bmatrix} x_*' \\ u_* \end{bmatrix} \right\rangle \\ & = \left\langle Q \begin{bmatrix} x' \\ u \end{bmatrix}, \, \begin{bmatrix} x_*' \\ u_* \end{bmatrix} \right \rangle = \left \langle \begin{bmatrix} x' \\ u \end{bmatrix}, \, Q^* \begin{bmatrix} x_*' \\ u_* \end{bmatrix} \right\rangle \\ & = \left\langle \begin{bmatrix} H^\half x \\ u \end{bmatrix}, Q^* \begin{bmatrix} H^{-\half} x_* \\ u_* \end{bmatrix} \right\rangle \end{align*} while on the other hand $$ \left\langle \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} x \\ u \end{bmatrix}, \, \begin{bmatrix} x_* \\ u_* \end{bmatrix} \right\rangle = \left\langle \begin{bmatrix} x \\ u \end{bmatrix}, \, \begin{bmatrix} A^* & C^* \\ B^* & D^* \end{bmatrix} \begin{bmatrix} x_* \\ u_* \end{bmatrix} \right\rangle. 
$$ We thus conclude that $$ \left\langle \begin{bmatrix} H^\half & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} x \\ u \end{bmatrix}, Q^* \begin{bmatrix} H^{-\half} x_* \\ u_* \end{bmatrix} \right\rangle = \left\langle \begin{bmatrix} x \\ u \end{bmatrix}, \begin{bmatrix} A^* & C^* \\ B^* & D^* \end{bmatrix} \begin{bmatrix} x_* \\ u_* \end{bmatrix} \right\rangle $$ for all $\begin{bmatrix} x \\ u \end{bmatrix}$ in $\cD \left( \begin{bmatrix} H^\half & 0 \\ 0 & I \end{bmatrix} \right)$. Hence \begin{equation} \label{Q*1} Q^* \begin{bmatrix} H^{-\half} x_* \\ u_* \end{bmatrix} \in \cD\left( \begin{bmatrix} H^\half & 0 \\ 0 & I \end{bmatrix}^*\right) = \cD\left( \begin{bmatrix} H^\half & 0 \\ 0 & I \end{bmatrix}\right) \end{equation} and \begin{equation} \label{Q*2} \begin{bmatrix} H^\half & 0 \\ 0 & I \end{bmatrix} Q^* \begin{bmatrix} H^{-\half} x_* \\ u_* \end{bmatrix} = \begin{bmatrix} A^* & C^* \\ B^* & D^* \end{bmatrix} \begin{bmatrix} x_* \\ u_* \end{bmatrix} \end{equation} where $x_* \in \cD(H^{-\half})$ and $u_* \in \cY$ are arbitrary. From the formula \eqref{Q*2} we see that $A^* \colon \cD(H^{-\half}) \to \im H^\half = \cD(H^{-\half})$ and that $C^* \colon \cY \to \im H^\half = \cD(H^{-\half})$, i.e., condition \eqref{KYP1b'*} holds with $H_* = H^{-1}$. Let us now rewrite equation \eqref{Q*2} in the form $$ Q^* \begin{bmatrix} H^{-\half} x_* \\ u_* \end{bmatrix} = \begin{bmatrix} H^{-\half} & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} A^* & C^* \\ B^* & D^* \end{bmatrix} \begin{bmatrix} x_* \\ u_* \end{bmatrix}. $$ Using that $Q^*$ is a contraction operator now gives us the spatial form \eqref{KYP1b*} of the adjoint KYP-inequality with $H_* = H^{-1}$. This completes the proof of Proposition \ref{P:KYPduality}. \end{proof} We next pursue the dual versions of the results of Section \ref{S:ASRS} concerning the {\em available storage} and {\em required supply} as well as the $\ell^2$-regularized required supply.
First of all let us note that the Laurent operator $\fL_{F_{\Si^*}}$ of $F_{\Si^*}$, i.e., the inverse $Z$-transform version of the multiplication operator $M_{F_{\Sigma^*}} = (M_{F_\Sigma})^*$, is just the adjoint of the Laurent operator $\fL_{F_\Sigma}$ given by \eqref{Laurent0}. We can rewrite $\fL_{F_{\Si^*}}$ in the convenient block form \begin{equation} \label{Laurent*} \fL_{F_{\Sigma^*}} = \left[ \begin{array}{c|c} \fT_{F_{\Sigma^*}} & \fH_{F_{\Sigma^*}} \\ \hline 0 & \widetilde \fT_{F_{\Sigma^*}} \end{array} \right] = \left[ \begin{array}{c|c} (\widetilde \fT_{F_\Sigma})^* & ( \fH_{F_\Sigma})^* \\ \hline 0 & ( \fT_{F_\Sigma})^* \end{array} \right] \end{equation} where the Toeplitz operators associated with the adjoint system $\Sigma^*$ are given by \begin{align*} & \fT_{F_{\Sigma^*}} = (\fL_{F_\Sigma})^*|_{\ell^2_\cY({\mathbb Z}_-)} = (\widetilde \fT_{F_\Sigma})^*, \\ & \widetilde \fT_{F_{\Sigma^*}} = P_{\ell^2_\cU({\mathbb Z}_+)} ( \fL_{F_\Sigma})^*|_{\ell^2_\cY({\mathbb Z}_+)} = ( \fT_{F_\Sigma})^* \end{align*} and where the Hankel operator for the adjoint system (already introduced as the inverse $Z$-transform version of the frequency-domain Hankel operator ${\mathbb H}_{F_{\Sigma^*}}$ given by \eqref{Hankel-identification}) has the explicit representation in terms of the Laurent operator $\fL_{F_{\Sigma^*}} = (\fL_{F_\Sigma})^*$: $$ \fH_{F_{\Sigma^*}} = P_{\ell^2_\cU({\mathbb Z}_-)} ( \fL_{F_\Sigma})^*|_{\ell^2_\cY({\mathbb Z}_+)}. $$ Let $\bcU_*$ be the space of all functions $n \mapsto \bu_*(n)$ from the integers ${\mathbb Z}$ into the input space $\cY$ for the adjoint system $\Sigma^*$.
We define the available storage for the adjoint system $S_{*a}$ by \begin{equation} \label{Sa2*} S_{*a}(x_0) = \sup_{\bu_* \in \bcU_*, \, n_{-1} < 0 } \sum_{n=n_{-1}}^{n=-1} (\|\by_*(n) \|^2 - \| \bu_*(n) \|^2 ) \end{equation} where the supremum is taken over all adjoint-system trajectories $$ (\bu_*(n), \bx_*(n), \by_*(n) )_{n \le -1} $$ (specified by the adjoint-system equations \eqref{dtsystem*} running in backwards time) with final condition $\bx_*(-1) = x_0$. Similarly, the dual required supply $S_{*r}$ is given by \begin{equation} \label{Sr2*} S_{*r} (x_0) = \inf_{ \bu_* \in \bcU_*, \, n_1 \ge 0} \sum_{n=0}^{n_1} ( \| \bu_*(n) \|^2 - \| \by_*(n) \|^2) \end{equation} where the infimum is taken over system trajectories $(\bu_*(n), \bx_*(n), \by_*(n) )_{n \le n_1}$ subject to the boundary conditions $\bx_*(n_1) = 0$ and $\bx_*(-1) = x_0$. Then one applies the analysis behind the proof of Proposition \ref{P:SaSr} to the backward-time system $\Sigma^*$ in place of the forward-time system $\Sigma$ to see that $S_{*a}$ and $S_{*r}$ are both storage functions for $\Sigma^*$ and furthermore $S_{*a}(x_0) \le S_*(x_0) \le S_{*r}(x_0)$, $x_0\in\cX$, for any other $\Sigma^*$-storage function $S_*$. We shall however be primarily interested in the $\ell^2$-regularized dual required supply $\uS_{*r}$, rather than in $S_{*r}$, defined by \begin{equation} \label{uSr2*} \uS_{*r}(x_0) = \inf_{\bu_* \in \cD(\bW_{*c}) \colon \bW_{*c} \bu_* = x_0} \sum_{n=0}^\infty \left(\| \bu_*(n) \|^2 - \| \by_*(n) \|^2 \right).
\end{equation} Furthermore, by working out the backward-time analogues of the analysis in Section \ref{S:ASRS}, one can see that $\uS_{*r}$ is also a storage function for $\Sigma^*$, and that the definitions of $S_{*a}$ and $S_{*r}$ can be reformulated in a more convenient operator-theoretic form: \begin{align} & S_{*a}(x_0) = \sup_{\bu_* \in \ell^2_\cY({\mathbb Z}_-)} \| \bW_{*o} x_0 + \fT_{F_{\Sigma^*}} \bu_*\|^2_{\ell^2_\cU({\mathbb Z}_-)} - \| \bu_* \|^2_{\ell^2_\cY({\mathbb Z}_-)} \notag \\ & = \sup_{\bu_* \in \ell^2_\cY({\mathbb Z}_-)} \| \bW_{c}^* x_0 + \widetilde \fT_{F_{\Sigma}}^* \bu_*\|^2_{\ell^2_\cU({\mathbb Z}_-)} - \| \bu_* \|^2_{\ell^2_\cY({\mathbb Z}_-)} \text{ for } x_0 \in \cD(\bW_c^*) \label{SaOpform*} \end{align} with $S_{*a}(x_0) = + \infty$ if $x_0 \notin \cD(\bW_c^*)$, while \begin{align} & \uS_{*r}(x_0) = \inf_{\bu_* \in \cD(\bW_{*c}) \colon \bW_{*c} \bu_* = x_0} \| \bu_* \|^2_{\ell^2_\cY({\mathbb Z}_+)} - \| \widetilde \fT_{F_{\Sigma^*}} \bu_* \|^2_{\ell^2_\cU({\mathbb Z}_+)} \notag \\ & = \inf_{\bu_* \in \cD(\bW_{o}^*) \colon \bW_{o}^* \bu_* = x_0} \| \bu_* \|^2_{\ell^2_\cY({\mathbb Z}_+)} - \| \fT_{F_{\Sigma}}^* \bu_* \|^2_{\ell^2_\cU({\mathbb Z}_+)} \notag \\ & = \inf_{\bu_* \in \cD(\bW_{o}^*), \, \bW_{o}^* \bu_* = x_0} \| D_{ \fT_{F_\Sigma}^*} \bu_* \|^2 . \label{uSrOpForm*} \end{align} By notational adjustments to the arguments in the proof of Theorem \ref{T:Sar}, we arrive at the following formulas for $S_{*a}$ and $\uS_{*r}$ on $\im \bW_o^*$. \begin{theorem} \label{T:Sar*} Let the operators $\overline{X}_a$, $\overline{X}_r$ be as in Lemma \ref{L:fact} and define operators $H_a$ and $H_r$ as in \eqref{Ha-def} and \eqref{Hr-def}. 
Then the dual available storage $S_{*a}$ and the dual $\ell^2$-regularized required supply $\uS_{*r}$ are given {\rm(}on a suitably restricted domain{\rm)} by \begin{align} & S_{*a}(x_0) = \| \overline{X}_r x_0 \|^2 = \| H_r^{-\half} x_0 \|^2 \text{ for } x_0 \in \im \bW_o^*, \label{form1*} \\ & \uS_{*r}(x_0) = \| | \overline{X}_a|^{-1} x_0 \|^2 = \| H_a^{- \half} x_0 \|^2 \text{ for } x_0 \in \im \bW_o^*. \label{form2*} \end{align} \end{theorem} Let us associate extended-real-valued functions $S_{H_a}$, $S_{H_r}$, $S_{H_r^{-1}}$, $S_{H_a^{-1}}$ with the positive-definite operators $H_a$, $H_r$, $H_r^{-1}$, $H_a^{-1}$ as in \eqref{QuadStorage1}. Theorems \ref{T:Sar} and \ref{T:Sar*} give us the close relationship between these functions and the functions $S_a$, $\uS_r$ (storage functions for $\Sigma$) and $S_{*a}$, $\uS_{*r}$ (storage functions for $\Sigma^*$), namely: \begin{align} & S_a(x) = S_{H_a}(x), \quad \uS_{r}(x) = S_{H_r}(x) \text{ for } x \in \im \bW_c, \notag \\ & S_{*a}(x) = S_{H_r^{-1}}(x), \quad \uS_{*r}(x) = S_{H_a^{-1}}(x) \text{ for } x \in \im \bW_o^*. \label{storage-quadratic} \end{align} In general we do not assert that any of the four identities in \eqref{storage-quadratic} extends to all $x \in \cX$. Nevertheless it is the case that $S_{H_a}$ and $S_{H_r}$ are storage functions for $\Sigma$ and $S_{H_r^{-1}}$ and $S_{H_a^{-1}}$ are storage functions for $\Sigma^*$, as we now explain. \begin{proposition} \label{P:QuadStorageFuncs} Let $H_a$, $H_r$, $H_r^{-1}$, $H_a^{-1}$ be the positive-definite operators as in Theorems \ref{T:Sar} and \ref{T:Sar*}. Then the following hold: \begin{enumerate} \item[(1)] $S_{H_a}$ and $S_{H_r}$ are nondegenerate storage functions for $\Sigma$, or equivalently, $H_a$ and $H_r$ are positive-definite solutions of the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b} for $\Sigma$.
\item[(2)] $S_{H_r^{-1}}$ and $S_{H_a^{-1}}$ are storage functions for $\Sigma^*$, or equivalently, $H_r^{-1}$ and $H_a^{-1}$ are positive-definite solutions of the generalized KYP-inequality \eqref{KYP1b'*}--\eqref{KYP1b*} for $\Sigma^*$. \end{enumerate} \end{proposition} \begin{proof} The fact that $S_H$ is a nondegenerate storage function for $\Sigma$ (respectively $\Sigma^*$) if and only if $H$ is a positive-definite solution of the generalized KYP-inequality for $\Sigma$ (respectively $\Sigma^*$) is a consequence of Proposition \ref{P:QuadStorage} and its dual Proposition \ref{P:QuadStorage*}. We shall use these formulations interchangeably. We know that $S_{H_a}(x) = S_a(x)$ for $x \in \im \bW_c$. Furthermore, as a consequence of \eqref{intertwine1} with $\widetilde u = 0$ and of \eqref{Wc-fin} with $n_{-1} = -1$, we see that $\im \bW_c$ is invariant under $A$ and contains $\im B$. Thus condition \eqref{KYP1b'} holds with $\im \bW_c$ in place of $\cD(H^\half)$. The facts that $S_{H_a}$ agrees with $S_a$ on $\im \bW_c$ and that $S_a$ is a storage function for $\Sigma$ imply that the inequality \eqref{KYP1b} holds for $x \in \im \bW_c$ and $u \in \cU$: \begin{equation}\label{KYP1b-Wc} \left\| \begin{bmatrix} H_a^{\half} \! & \! 0 \\ 0 \! & \! I_{\cU} \end{bmatrix} \begin{bmatrix} x \\ u \end{bmatrix} \right\|^{2} - \left\| \begin{bmatrix} H_a^{\half} \! & \! 0 \\ 0 \! & \! I_{\cY} \end{bmatrix} \begin{bmatrix} A\! & \! B \\ C \! & \! D \end{bmatrix} \begin{bmatrix} x \\ u \end{bmatrix} \right\|^{2} \ge 0. \end{equation} As noted at the end of Theorem \ref{T:Sar}, $\im \bW_c$ is a core for $H_a^\half$; hence, given $x \in \cD(H_a^\half)$, there is a sequence of points $\{ x_n\}_{n \ge 1}$ contained in $\im \bW_c$ such that $\lim_{n \to \infty} x_n = x$ and $\lim_{n \to \infty} H_a^\half x_n = H_a^\half x$. As each $x_n \in \im \bW_c$, we know that the inequality \eqref{KYP1b-Wc} holds with $x_n$ in place of $x$ for all $n = 1,2,\dots$.
We may now take limits in this inequality to see that the inequality continues to hold with $x = \lim_{n \to \infty} x_n \in \cD(H_a^\half)$, i.e., condition \eqref{KYP1b} holds with $H_a$ in place of $H$. Thus $H_a$ is a solution of the generalized KYP-inequality for $\Sigma$. That $H_r^{-1}$ is a solution of the generalized KYP-inequality for $\Sigma^*$ now follows by applying the same analysis to $\Sigma^*$ rather than to $\Sigma$. Finally, the fact that $H_a$ (respectively, $H_r^{-1}$) is a positive-definite solution of the generalized KYP-inequality for $\Sigma$ (respectively for $\Sigma^*$) implies that $H_a^{-1}$ (respectively, $H_r$) is a positive-definite solution of the generalized KYP-inequality for $\Sigma^*$ (respectively, $\Sigma$) as a consequence of Proposition \ref{P:KYPduality}. \end{proof} \section{Order properties of solutions of the generalized KYP-inequality and finer results for special cases} \label{S:order} We have implicitly been using an order relation on storage functions, namely: we say that $S_1 \le S_2$ if $S_1(x_0) \le S_2(x_0)$ for all $x_0 \in \cX$. For the case of quadratic storage functions $S_{H_1}$ and $S_{H_2}$ where $H_1$ and $H_2$ are two positive-semidefinite solutions of the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b}, the induced ordering $\le$ on positive-semidefinite (possibly unbounded) operators can be defined as follows: given two positive-semidefinite operators $H_1$ with dense domain $\cD(H_1)$ and $H_2$ with dense domain $\cD(H_2)$ in $\cX$, we say that {\em $H_1 \le H_2$ if $\cD(H_2^\half) \subset \cD(H_1^\half)$ and } \begin{equation} \label{H1leH2} \| H^\half_1 x \|^2 \le \| H^\half_2 x \|^2 \text{ for all } x \in \cD(H_2^\half).
\end{equation} In case $H_1$ and $H_2$ are bounded positive-semidefinite operators, one can see that $H_1 \le H_2$ is equivalent to $H_1 \preceq H_2$ in the sense of the inequality between quadratic forms: $\langle H_1 x, x \rangle \le \langle H_2 x, x \rangle$, i.e., in the Loewner partial order: $H_2 - H_1 \succeq 0$. This ordering $\le$ on (possibly unbounded) positive-semidefinite operators has appeared in the more general context of closed quadratic forms $S_H$ (not necessarily storage functions for some dissipative system $\Sigma$) and associated semibounded selfadjoint operators $H$ (not necessarily solving some generalized KYP-inequality); see formula (2.17) and the subsequent remark in the book of Kato \cite{Kato}. This order has been studied in the setting of solutions of a generalized KYP-inequality in the paper of Arov-Kaashoek-Pik \cite{AKP06}. Here we offer a few additional such order properties which follow from the results developed here. Recall that the notion of a {\em core} of a closed, densely defined linear operator was introduced in the paragraph preceding Theorem \ref{T:Sar}. \begin{theorem} \label{T:order} Assume that the system $\Sigma$ in \eqref{dtsystem} satisfies the standing assumption \eqref{A} and $H_a$ and $H_r$ are defined by \eqref{Ha-def} and \eqref{Hr-def}. Let $H$ be any positive-definite solution of the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b}. \begin{enumerate} \item[(1)] Assume that $\im \bW_c$ is a core for $H^\half$. Then we have the operator inequality \begin{equation} \label{HaH-ineq} H_a \le H \end{equation} and furthermore $\im \bW_o^* \subset \cD(H^{-\half})$. \item[(2)] Assume that $\im \bW_o^*$ is a core for $H^{-\half}$. Then we have the operator inequality \begin{equation} \label{HHr-ineq} H \le H_r \end{equation} and furthermore $\im \bW_c \subset \cD(H^\half)$. \end{enumerate} \end{theorem} \begin{proof} We deal with (1) and (2) in turn. 
{(1)} Suppose that $H$ is a positive-definite solution of the generalized KYP-inequality such that $\im \bW_c$ is a core for $H^\half$. From Theorem \ref{T:Sar}, we know that $S_a(x) = \| H_a^\half x\|^2$ for $x \in \im \bW_c$. Since $S_a$ is the smallest storage function (see Proposition \ref{P:SaSr}) and $S_H$ is a storage function, it follows that \begin{equation} \label{Ha-Hineq} \| H_a^\half x \|^2 = S_a(x) \le S_H(x) = \| H^\half x \|^2 \text{ for } x \in \im \bW_c. \end{equation} Let now $x$ be an arbitrary point of $\cD(H^\half)$. Since $\im \bW_c$ is a core for $H^\half$, we can find a sequence $\{x_n \}_{n \ge 1}$ of points in $\im \bW_c$ such that $x_n \to x$ and $H^\half x_n \to H^\half x$. In particular $H^\half x_n$ is a Cauchy sequence and the inequality \begin{align*} & \| H_a^\half x_n - H_a^\half x_m \|^2 = \| H_a ^\half(x_n - x_m ) \|^2 \\ & \quad \quad \le \| H^\half(x_n - x_m ) \|^2 = \| H^\half x_n - H^\half x_m \|^2 \end{align*} implies that $\{ H_a^\half x_n \}_{n \ge 1}$ is Cauchy as well, and hence converges to some $y \in \cX$. As $H_a^\half$ is closed, we get that $x \in \cD(H_a^\half)$ and $y = H_a^\half x$. We may then take limits in the inequality $\|H_a^\half x_n \|^2 \le \| H^\half x_n \|^2$ holding for all $n$ (a consequence of \eqref{Ha-Hineq}) to conclude that $\| H_a^\half x \|^2 \le \| H^\half x \|^2$, i.e., $H_a \le H$, i.e., \eqref{HaH-ineq} holds. Recall next from Corollary \ref{C:boundedSauSr} that $\| \bW_o x_0 \|^2 \le S_a(x_0)$, where we now also know from Theorem \ref{T:Sar} that $S_a(x_0) = \| H_a^\half x_0 \|^2$ for $x_0 \in \im \bW_c$. We thus have the chain of operator inequalities $$ \bW_o^* \bW_o \le H_a \le H. $$ By Proposition 3.4 in \cite{AKP05}, we may equivalently write $$ H^{-1} \le H_a^{-1} \le (\bW_o^* \bW_o)^{-1}. $$ In particular $\cD( | \bW_o |^{-1}) \subset \cD(H^{- \half})$.
If we introduce the polar decomposition $\bW_o = U_o | \bW_o |$ for $\bW_o$, we see that $\bW_o^* = | \bW_o | U_o^*$ and hence $\im \bW_o^* = \im | \bW_o|$. Thus $$ \cD( | \bW_o |^{-1}) = \im | \bW_o| = \im \bW_o^* $$ and it follows that $\im \bW_o^* \subset \cD(H^{-\half})$, and the verification of (1) is complete. {(2)} We now suppose that $H$ is a positive-definite solution of the generalized KYP-inequality such that $\im \bW_o^*$ is a core for $H^{-\half}$. By applying the result of part (1) to the adjoint system $\Sigma^*$, we see that $H_r^{-1} \le H^{-1}$ and that $\im \bW_c \subset \cD(H^\half)$. If we apply the result of Proposition 3.4 in \cite{AKP05}, we see that $H_r^{-1} \le H^{-1}$ implies that (is actually equivalent to) $H \le H_r$, completing the verification of (2). \end{proof} \begin{remark} \label{R:ineq-chain} By the last assertion in Theorem \ref{T:Sar}, we know that $\im \bW_c$ is a core for $H_a^\half$ and that $\im \bW_o^*$ is a core for $H_r^{-\half}$. Also by Proposition \ref{P:QuadStorageFuncs} we know that $H_a$ and $H_r$ are positive-definite solutions of the generalized KYP-inequality for $\Sigma$.
Thus item (1) in Theorem \ref{T:order} may be rephrased as follows: \begin{itemize} \item {\sl The set ${\mathcal GS}_c$ consisting of all positive-definite solutions $H$ of the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b} for $\Sigma$ such that $\im \bW_c$ is a core for $H^\half$ has the solution $H_a$ as a minimal element with respect to the ordering $\le$.} \end{itemize} \noindent Similarly item (2) in Theorem \ref{T:order} may be rephrased as: \begin{itemize} \item {\em The set ${\mathcal GS}_o$ consisting of all positive-definite solutions $H$ of the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b} such that $\im \bW_o^*$ is a core for $H^{-\half}$ has the solution $H_r$ as a maximal element with respect to the ordering $\le$.} \end{itemize} \noindent It would be tempting to say: \begin{itemize} \item {\em The set ${\mathcal GS}$ consisting of all positive-definite solutions $H$ of the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b} such that $\im \bW_c$ is a core for $H^\half$ and $\im \bW_o^*$ is a core for $H^{-\half}$ has $H_a$ as a minimal element and $H_r$ as a maximal element with respect to the ordering $\le$.} \end{itemize} However, while the above results imply that $\im \bW_c \subset \cD(H_r^{\half})$ and that $\im \bW_o^* \subset \cD(H_a^{-\half})$, we have not been able to show in general that $\im \bW_c$ is a core for $H_r^\half$ or that $\im \bW_o^*$ is a core for $H_a^{-\half}$. Such a more satisfying symmetric statement does hold in the pseudo-similarity framework for the analysis of solutions of generalized KYP-inequalities (see Proposition 5.8 in \cite{AKP06}). \end{remark} We now consider the case where $\Si$ is not only controllable and/or observable, but satisfies the stronger $\ell^2$-exact controllability or $\ell^2$-exact observability condition, or both, i.e., $\ell^2$-exact minimality. We first consider the implications for $H_a$ and $H_r$.
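Before doing so, we note that in finite dimensions the ordering $\le$ reduces to the Loewner order, and the order-reversal under inversion invoked above (Proposition 3.4 of \cite{AKP05}) is elementary to test numerically. The following Python sketch (illustrative only, with randomly generated matrices) checks both the quadratic-form formulation of $H_1 \le H_2$ and the antitone behavior of inversion:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
G  = rng.standard_normal((n, n))
H1 = G @ G.T + np.eye(n)                 # strictly positive definite
P  = rng.standard_normal((n, 2))
H2 = H1 + P @ P.T                        # H1 <= H2 in the Loewner order

# quadratic-form formulation: ||H1^{1/2} x||^2 <= ||H2^{1/2} x||^2 for all x
for _ in range(100):
    x = rng.standard_normal(n)
    assert x @ H1 @ x <= x @ H2 @ x + 1e-12

# inversion is antitone: H1 <= H2 implies H2^{-1} <= H1^{-1}
diff = np.linalg.inv(H1) - np.linalg.inv(H2)
assert np.linalg.eigvalsh(diff).min() >= -1e-10
```

In the unbounded setting the same order reversal holds, but the domain inclusions between $\cD(H_1^\half)$ and $\cD(H_2^\half)$ must be tracked as in the definition above.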
\begin{proposition}\label{P:ell2minImplsHaHr} Let $\Sigma$ be a system as in \eqref{dtsystem} such that assumption \eqref{A} holds. \begin{itemize} \item[(1)] If $\Sigma$ is $\ell^2$-exactly controllable, then $H_a$ and $H_r$ are bounded. \item[(2)] If $\Sigma$ is $\ell^2$-exactly observable, then $H_a$ and $H_r$ are boundedly invertible. \item[(3)] If $\Sigma$ is $\ell^2$-exactly minimal, i.e., both $\ell^2$-exactly controllable and $\ell^2$-exactly observable, then $H_a$ and $H_r$ are both bounded and boundedly invertible. \end{itemize} \end{proposition} \begin{proof} We discuss each of (1), (2), (3) in turn. {(1)} Item (1) follows directly from the fact that $\im \bW_c$ is contained in both $\cD(H_a^\half)$ and $\cD(H_r^\half)$ together with the Closed Graph Theorem. {(2)} From the last assertion in Theorem \ref{T:Sar}, we know that $\im \bW_c$ is a core for $H_a^\half$. Then item (1) in Theorem \ref{T:order} implies that $\im \bW_o^* \subset \cD(H_a^{-\half})$. If $\im \bW_o^* = \cX$, the Closed Graph Theorem then gives us that $H_a^{-\half}$ is bounded. Also part of the last assertion of Theorem \ref{T:Sar} is the statement that $\im \bW_o^*$ is a core for $H_r^{-\half}$, so in particular $\im \bW_o^* \subset \cD(H_r^{-\half})$. Then again the Closed Graph Theorem implies that $H_r^{-\half}$ is bounded. {(3)} Simply combine the results of items (1) and (2). \end{proof} Next we consider general positive-definite solutions to the generalized KYP-inequality. \begin{proposition}\label{P:ell2minImplsH} Suppose that $\Sigma$ is a system as in \eqref{dtsystem} such that assumption \eqref{A} holds and that $H$ is any positive-definite solution of the generalized KYP-inequality. \begin{itemize} \item[(1)] Suppose that $\Sigma$ is $\ell^2$-exactly controllable and that $\im \bW_c \subset \cD(H^\half)$ {\rm(}as is the case e.g.\ if $\im \bW_o^*$ is a core for $H^{-\half}${\rm)}. Then $H$ is bounded and furthermore $$ H_a \le H.
$$ \item[(2)] Suppose that $\Sigma$ is $\ell^2$-exactly observable and that $\im \bW_o^* \subset \cD(H^{-\half})$ {\rm(}as is the case e.g.\ if $\im \bW_c$ is a core for $H^\half${\rm)}. Then $H^{-1}$ is bounded and furthermore $$ H \le H_r. $$ \item[(3)] Suppose that $\Sigma$ is both $\ell^2$-exactly controllable and $\ell^2$-exactly observable and that either {\rm(a)} $\im \bW_c \subset \cD(H^\half)$ or {\rm(b)} $\im \bW_o^* \subset \cD(H^{-\half})$. Then $H$ is bounded and boundedly invertible and we have the inequality chain \begin{equation} \label{HaHrineq} H_a \le H \le H_r. \end{equation} \end{itemize} \end{proposition} \begin{proof} First note that the fact that the parenthetical hypotheses in items (1) and (2) are stronger than the given hypotheses is a consequence of the final assertions in parts (1) and (2) of Theorem \ref{T:order}. We now deal with the rest of (1), (2), (3). {(1)} If we assume that $\cX = \im \bW_c \subset \cD(H^\half)$, then $H^\half$ (and hence also $H$) is bounded by the Closed Graph Theorem. Moreover, as $\im \bW_c = \cD(H^\half)$, in particular $\im \bW_c$ is a core for $H^\half$ and the inequality $H_a \le H$ follows from Theorem \ref{T:order} (1). {(2)} Similarly, if we assume $\cX = \im \bW_o^* \subset \cD(H^{-\half})$, then $H^{-\half}$ is bounded by the Closed Graph Theorem. As $\im \bW_o^* = \cD(H^{-\half})$, in particular $\im \bW_o^*$ is a core for $H^{-\half}$ and $H \le H_r$ follows as a consequence of Theorem \ref{T:order} (2). {(3)} If $\cX = \im \bW_c \subset \cD(H^\half)$, then in fact $\im \bW_c = \cD(H^\half)$ so $\im \bW_c$ is a core for $H^\half$. By Theorem \ref{T:order}, it follows that $\im \bW_o^* \subset \cD(H^{-\half})$ and hence hypothesis (b) is a consequence of hypothesis (a) when combined with all the other hypotheses in (3). Similarly hypothesis (a) is a consequence of hypothesis (b). Hence there is no loss of generality in assuming that both (a) and (b) hold. 
Then the verification of (3) is completed by simply combining the results of (1) and (2). \end{proof} \section{Proofs of Bounded Real Lemmas} \label{S:BRLproof} We now put all the pieces together to give a storage-function proof of Theorem~\ref{T:BRLinfstan}. \begin{proof}[Proof of Theorem \ref{T:BRLinfstan}] We are given a minimal system $\Sigma$ as in \eqref{dtsystem} with transfer function $F_\Sigma$. \smallskip \noindent {\em Proof of sufficiency.} For the sufficiency direction, we assume either that there exists a positive-definite solution $H$ of the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b} (statement (1)) or a bounded and boundedly invertible solution $H$ of the KYP-inequality \eqref{KYP1} (statements (2) and (3)). As the latter case is a particular instance of the former, it suffices to assume that we have a positive-definite solution of the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b}. We are to show that then $F_\Sigma$ is in the Schur class $\cS(\cU, \cY)$. Given such a solution of the generalized KYP-inequality, Proposition \ref{P:QuadStorage} guarantees us that $S_H$ is an (even quadratic) storage function for $\Sigma$. Then $F_\Sigma$ has analytic continuation to a Schur class function by Proposition \ref{P:storage-Schur}. \smallskip \noindent {\em Proof of necessity in statement (1):} We assume that $\Sigma$ is minimal and that $F_\Sigma$ has analytic continuation to a Schur-class function, i.e., assumption \eqref{A} holds. Then Proposition \ref{P:QuadStorageFuncs} gives us two choices $H_a$ and $H_r$ of positive-definite solutions of the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b}. \smallskip \noindent {\em Proof of necessity in statement (2):} We assume that $\Sigma$ is exactly controllable and exactly observable with transfer function $F_\Sigma$ having analytic continuation to the Schur class. 
From Proposition \ref{P:HankelDecs} (1) we see that $\im \bW_c \supset \Rea(A|B) = \cX$ and that $\cD(\bW_o) \supset \Rea (A|B) = \cX$ while from item (2) in the same proposition we see that $\im \bW_o^* \supset \Obs(C|A) = \cX$ and that $\cD(\bW_c^*) \supset \Obs(C|A) = \cX$. Hence by the Closed Graph Theorem, in fact $\bW_c$ and $\bW_o^*$ are bounded in addition to being surjective. In particular $\Sigma$ is $\ell^2$-exactly controllable and $\ell^2$-exactly observable, so this case actually falls under item (3) of Theorem \ref{T:BRLinfstan}, which we will prove next. \smallskip \noindent {\em Proof of necessity in statement (3):} We now assume that $\Sigma$ is $\ell^2$-exactly controllable and $\ell^2$-exactly observable with $F_\Sigma$ having analytic continuation to a function in the Schur class $\cS(\cU, \cY)$ and we want to produce a bounded and boundedly invertible solution $H$ of the KYP-inequality \eqref{KYP1}. In particular, $\Sigma$ is minimal (controllable and observable), so Proposition \ref{P:QuadStorageFuncs} gives us two solutions $H_a$ and $H_r$ of the generalized KYP-inequality. But any solution $H$ of the generalized KYP-inequality becomes a solution of the standard KYP-inequality \eqref{KYP1} if it happens to be the case that $H$ is bounded. By the result of item (3) in Proposition \ref{P:ell2minImplsHaHr}, both $H_a$ and $H_r$ are bounded and boundedly invertible under our $\ell^2$-minimality assumptions. Thus in this case $H_a$ and $H_r$ serve as two choices for bounded, strictly positive-definite solutions of the KYP-inequality, as needed. \end{proof} We are now ready also for a storage-function proof of Theorem \ref{T:BRLinfstrict}. \begin{proof}[Proof of Theorem \ref{T:BRLinfstrict}] The standing assumption for both directions is that $\Sigma$ is a linear system as in \eqref{dtsystem} with exponentially stable state operator $A$. 
\smallskip \noindent {\em Proof of sufficiency:} Assume that there exists a bounded strictly positive-definite solution $H$ of the strict KYP-inequality. By Proposition \ref{P:strictQuadStorage}, $S_H$ is a strict storage function for $\Sigma$; moreover, the fact that $H$ is strictly positive-definite implies that $S_H$ has the additional coercivity property $S_H(x) \ge \epsilon_0 \| x \|^2$ for some $\epsilon_0 > 0$, and the fact that $A$ is exponentially stable implies that $F_\Sigma$ has analytic continuation to a disk slightly larger than ${\mathbb D}$. Then by Proposition \ref{P:strictstorage-Schur}, $F_\Sigma$ has analytic continuation to an $\cL(\cU, \cY)$-valued $H^\infty$-function with $H^\infty$-norm strictly less than 1, as wanted. \smallskip \noindent {\em Proof of necessity:} We are assuming that $\Sigma$ has state operator $A$ exponentially stable and with transfer function $F_\Sigma$ in the strict Schur class. The exponential stability of $A$ (i.e., $A$ has spectral radius $r_{\rm spec}(A) < 1$) means that the series $$ \bW_{o}^{*}\by = \sum_{k=0}^{\infty} A^{*k} C^{*}\by(k)\ \ (\by\in\ell^2_\cY(\BZ_+)), \quad \bW_{c}\bu = \sum_{k=0}^{\infty} A^{k} B\bu(k)\ \ (\bu\in\ell^2_\cU(\BZ_-)) $$ are norm-convergent (not just in the weak sense as in Proposition \ref{P:WcWo'}), and hence $\bW_c$ and $\bW_o$ are bounded. However, it need not be the case that $\bW_c$ or $\bW_o^*$ is surjective, so we are not in a position to apply part (3) of Theorem \ref{T:BRLinfstan} to the system $\Sigma$. The adjustment for handling this difficulty, which also ultimately produces bounded and boundedly invertible solutions of the strict KYP-inequality \eqref{KYP2}, is what we shall call {\em $\epsilon$-regularization reduction}. It goes back at least to Petersen-Anderson-Jonckheere \cite{PAJ} for the finite dimensional case, and was extended to the infinite dimensional case in our previous paper \cite{KYP1}. 
We recall the procedure here for completeness and because we refer to it in a subsequent remark. Since $r_{\rm spec}(A) < 1$, the resolvent expression $(I - \lambda A)^{-1}$ is uniformly bounded for all $\lambda$ in the unit disk ${\mathbb D}$. Since we are now assuming that $F_\Sigma$ is in the strict Schur class, it follows that we can choose $\epsilon >0$ sufficiently small so that the augmented matrix function \begin{equation} \label{Fepsilon} F_{\epsilon}(\lambda) : = \begin{bmatrix} F(\lambda) & \epsilon \lambda C (I - \lambda A)^{-1} \\ \epsilon \lambda (I - \lambda A)^{-1} B & \epsilon^{2} \lambda (I - \lambda A)^{-1} \\ \epsilon I_{\cU} & 0 \end{bmatrix} \end{equation} is in the strict Schur class $\cS^{o}( \cU \oplus \cX, \cY \oplus \cX \oplus \cU)$. Note that \[ F_{\epsilon}(\lambda) = \begin{bmatrix} D & 0 \\ 0 & 0 \\ \epsilon I_{\cU} & 0 \end{bmatrix} + \lambda \begin{bmatrix} C \\ \epsilon I_{\cX} \\ 0 \end{bmatrix} (I - \lambda A)^{-1} \begin{bmatrix} B & \epsilon I_{\cX} \end{bmatrix} \] and hence \begin{equation} \label{breal} M_{\epsilon} = \begin{bmatrix} \bA & \bB \\ \bC & \bD \end{bmatrix} : = \mat{c|cc}{ A & B & \epsilon I_{\cX}\\ \hline C & D & 0 \\ \epsilon I_{\cX} & 0 & 0 \\ 0 & \epsilon I_{\cU} & 0} \end{equation} is a realization for $ F_{\epsilon}(\lambda)$. Suppose that we can find a bounded and boundedly invertible positive-definite operator $H$ satisfying the KYP-inequality \eqref{KYP1} associated with the system $\Sigma_\epsilon$: \begin{equation} \label{KYP-epsilon} \begin{bmatrix} \bA^{*} & \bC^{*} \\ \bB^{*} & \bD^{*} \end{bmatrix} \begin{bmatrix} H & 0 \\ 0 & I_{\cY \oplus \cX \oplus \cU} \end{bmatrix} \begin{bmatrix} \bA & \bB \\ \bC & \bD \end{bmatrix} \preceq \begin{bmatrix} H & 0 \\ 0 & I_{\cU \oplus \cX} \end{bmatrix}. 
\end{equation} Spelling this out gives $$ \begin{bmatrix} A^{*}HA + C^{*}C + \epsilon^{2}I_{\cX} & A^{*}H B + C^{*}D & \epsilon A^{*}H \\ B^{*}HA + D^{*}C & B^{*} H B + D^{*} D + \epsilon^{2}I_{\cU} & \epsilon B^{*} H \\ \epsilon HA & \epsilon HB & \epsilon^{2} H \end{bmatrix} \preceq \begin{bmatrix} H & 0 & 0 \\ 0 & I_{\cU} & 0 \\ 0 & 0 & I_{\cX} \end{bmatrix}. $$ By crossing off the third row and third column, we get the inequality $$ \begin{bmatrix} A^{*} H A + C^{*} C + \epsilon^{2} I_{\cX} & A^{*}H B + C^{*} D \\ B^{*}H A + D^{*} C & B^{*} H B + D^{*} D + \epsilon^{2} I_{\cU} \end{bmatrix} \preceq \begin{bmatrix} H & 0 \\ 0 & I_{\cU} \end{bmatrix} $$ or $$ \begin{bmatrix} A^{*} & C^{*} \\ B^{*} & D^{*} \end{bmatrix} \begin{bmatrix} H & 0 \\ 0 & I_{\cY} \end{bmatrix} \begin{bmatrix} A & B \\ C & D \end{bmatrix} + \epsilon^{2} \begin{bmatrix} I_{\cX} & 0 \\ 0 & I_{\cU} \end{bmatrix} \preceq \begin{bmatrix} H & 0 \\ 0 & I_{\cU} \end{bmatrix} $$ leading us to the strict KYP-inequality \eqref{KYP2} for the original system $\Sigma$ as wanted. It remains only to see why there is a bounded and boundedly invertible solution $H$ of \eqref{KYP-epsilon}. It is easily checked that the system $\Sigma_\epsilon$ is exactly controllable and exactly observable, since $\bB$ and $\bC^*$ are both already surjective; as observed in the proof of necessity in item (2) of Theorem \ref{T:BRLinfstan}, since $F_{\Sigma_\epsilon}$ is in the Schur class it then follows that $\Sigma_\epsilon$ is $\ell^2$-exactly controllable and $\ell^2$-exactly observable as well. Hence we can appeal to either item (2) or item (3) of Theorem \ref{T:BRLinfstan} to conclude that indeed the KYP-inequality \eqref{KYP-epsilon} has a bounded and boundedly invertible positive-definite solution. This is what is done in \cite{KYP1}, where the State-Space-Similarity approach is used to prove items (2) and (3) in Theorem \ref{T:BRLinfstan} rather than the storage-function approach as is done here. 
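In the finite-dimensional real case, the crossing-off step above is a purely algebraic block identity that can be checked numerically; in the following sketch the dimensions, $\epsilon$, and $H$ are arbitrary test data (not tied to any particular system), and transposes stand in for adjoints:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p, eps = 4, 2, 3, 0.3      # dim X, dim U, dim Y; eps arbitrary

A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
D = rng.standard_normal((p, m))
G = rng.standard_normal((n, n))
H = G @ G.T + np.eye(n)          # a generic positive-definite H

# block system matrix M_eps of the augmented system (breal)
M = np.block([
    [A,                B,                 eps * np.eye(n)],
    [C,                D,                 np.zeros((p, n))],
    [eps * np.eye(n),  np.zeros((n, m)),  np.zeros((n, n))],
    [np.zeros((m, n)), eps * np.eye(m),   np.zeros((m, n))],
])

# output-side weight diag(H, I_{Y + X + U})
W = np.zeros((n + p + n + m, n + p + n + m))
W[:n, :n] = H
W[n:, n:] = np.eye(p + n + m)

big = M.T @ W @ M                # blocks over X + U + X, as spelled out
small = big[: n + m, : n + m]    # cross off the third block row/column

# strict-KYP left-hand side for the original system Sigma
S = np.block([[A, B], [C, D]])
W0 = np.zeros((n + p, n + p))
W0[:n, :n] = H
W0[n:, n:] = np.eye(p)
strict_lhs = S.T @ W0 @ S + eps**2 * np.eye(n + m)

assert np.allclose(small, strict_lhs)
```

The check confirms only the algebraic identity between the two left-hand sides; it says nothing about the ordering $\preceq$, which requires an actual solution $H$ of \eqref{KYP-epsilon}.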
\end{proof} \begin{remark} \label{R:HaHr-notbounded} Let $\Si$ and $F_\Si$ satisfy the conditions of the strict Bounded Real Lemma (Theorem \ref{T:BRLinfstrict}). Define the $\ep$-augmented system $\Si_\ep$ as in \eqref{breal}. We then obtain bounded, strictly positive-definite solutions $H_{a,\ep}$ and $H_{r,\ep}$ of the strict KYP inequality \eqref{KYP2}, and consequently, by Proposition \ref{P:ell2minImplsH} (3), all bounded or bounded-below solutions $H$ to the generalized KYP inequality \eqref{KYP1b'}--\eqref{KYP1b} for $\Sigma_{\epsilon}$ satisfy $H_{a,\ep} \le H \le H_{r,\ep}$, and hence are in fact bounded, strictly positive-definite solutions to the KYP inequality \eqref{KYP1} for the original system $\Sigma$. An application of Theorem \ref{T:order}, together with the observation that $\cD$ being a core for a bounded operator $X$ on $\cX$ is the same as $\cD$ being dense in $\cX$, leads to the conclusion that the operators $H_a$ and $H_r$ associated with the original system satisfy $H_a \le H_{a, \epsilon}$ and $H_r^{-1} \le H_{r, \epsilon}^{-1}$, and hence are bounded. However, this by itself is not enough to conclude that $H_a$ and $H_r^{-1}$ are also bounded below. \end{remark} \paragraph{Acknowledgements} This work is based on research supported in part by the National Research Foundation of South Africa. Any opinion, finding and conclusion or recommendation expressed in this material is that of the authors, and the NRF does not accept any liability in this regard. It is a pleasure to thank Olof Staffans for enlightening discussions (both verbal and written) and Mikael Kurula for his useful comments while visiting the first author in July 2017.
\section{Definitions of quantities in the main text}\label{sec:analytic_expression_overline} \subsection{Full definitions of $\overline \cU$, $\overline \cT$, $\cA_U$, and $\cA_T$ in Proposition \ref{prop:concentration_lag}}\label{sec:analytic_expression_U_T} We first define functions $m_1(\cdot), m_2(\cdot)$, which can be understood as the limiting partial Stieltjes transforms of $\bA(\bq)$ (c.f. Definition \ref{def:log_determinant_A}). \begin{definition}[Limiting partial Stieltjes transforms]\label{def:Stieltjes} For $\xi \in \C_+$ and $\bq \in \cQ$, where \begin{equation}\label{eqn:definition_of_cQ} \cQ = \{ (s_1, s_2, t_1, t_2, p): \vert s_2 t_2 \vert \le \ob_1^2(1 + p)^2 / 2 \}, \end{equation} define functions $\sFone(\,\cdot\, ,\,\cdot\,;\xi;\bq, \psi_1,\psi_2, \ob_1, \ob_\star), \sFtwo(\,\cdot\, ,\,\cdot\,;\xi;\bq, \psi_1,\psi_2, \ob_1, \ob_\star):\complex\times\complex \to \complex$ via: \[ \begin{aligned} &\sFone(m_1,m_2;\xi;\bq, \psi_1,\psi_2, \ob_1, \ob_\star) \equiv \psi_1\Big(-\xi+s_1 - \ob_\star^2 m_2 +\frac{(1+t_2m_2)s_2- \ob_1^2 (1 + p)^2 m_2}{(1+s_2m_1)(1+t_2m_2)- \ob_1^2 (1 + p)^2 m_1m_2}\Big)^{-1}\,, \\ &\sFtwo(m_1,m_2;\xi;\bq, \psi_1,\psi_2, \ob_1, \ob_\star) \equiv \psi_2\Big(-\xi+t_1 - \ob_\star^2 m_1 +\frac{(1+s_2 m_1)t_2- \ob_1^2 (1 + p)^2 m_1}{(1+t_2m_2)(1+s_2m_1)- \ob_1^2 (1 + p)^2 m_1m_2}\Big)^{-1}\, . \end{aligned} \] Let $m_1(\,\cdot\, ;\bq; \bpsi), m_2(\,\cdot\, ;\bq; \bpsi):\complex_+\to\complex_+$ be defined, for $\Im(\xi)\ge C$ a sufficiently large constant, as the unique solution of the equations \begin{equation}\label{eq:FixedPoint} \begin{aligned} m_{1} &= \sFone(m_1,m_2;\xi;\bq, \psi_1,\psi_2, \ob_1, \ob_\star),\\ m_{2} &= \sFtwo(m_1, m_2;\xi;\bq,\psi_1,\psi_2, \ob_1, \ob_\star)\, \end{aligned} \end{equation} subject to the conditions $\vert m_1\vert \le \psi_1/\Im(\xi)$, $\vert m_2\vert \le \psi_2/\Im(\xi)$. 
Extend this definition to $\Im(\xi) >0$ by requiring $m_1,m_2$ to be analytic functions in $\complex_+$. \end{definition} We next define the function $g(\cdot)$ that will be shown to be the limiting log determinant of $\bA(\bq)$. \begin{definition}[Limiting log determinants]\label{def:limiting_log_determinant} For $\bq = (s_1, s_2, t_1, t_2, p)$ and $\bpsi = (\psi_1, \psi_2)$, define \begin{equation}\label{eqn:log_determinant_variation} \begin{aligned} \Xi(\xi, z_1, z_2; \bq; \bpsi) \equiv&~ \log[(s_2 z_1 + 1)(t_2 z_2 + 1) - \ob_1^2 (1 + p)^2 z_1 z_2] - \ob_\star^2 z_1 z_2 \\ &+ s_1 z_1 + t_1 z_2 - \psi_1 \log (z_1 / \psi_1) - \psi_2 \log (z_2 / \psi_2) - \xi (z_1 + z_2) - \psi_1 - \psi_2. \end{aligned} \end{equation} Let $m_1(\xi; \bq; \bpsi), m_2(\xi; \bq; \bpsi)$ be defined as the analytic continuation of the solution of Eq. (\ref{eq:FixedPoint}) given in Definition \ref{def:Stieltjes}. Define \begin{equation}\label{eqn:formula_g_mm19} g(\xi; \bq; \bpsi) = \Xi(\xi, m_1(\xi; \bq; \bpsi), m_2(\xi; \bq; \bpsi); \bq; \bpsi). \end{equation} \end{definition} We next give the definitions of $\overline \cU$, $\overline \cT$, $\cA_U$, and $\cA_T$. 
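For numerical evaluation, the fixed-point system \eqref{eq:FixedPoint} can be solved by naive iteration when $\Im(\xi)$ is large, where the map is expected to contract (contraction is not verified here); the parameter values in this sketch are illustrative only and are not taken from the paper:

```python
import numpy as np

def solve_fixed_point(xi, q, psi1, psi2, omega1, omega_star, n_iter=500):
    """Naive iteration of the maps (sF1, sF2) from Definition
    (Limiting partial Stieltjes transforms); assumes Im(xi) is large
    enough that the iteration contracts."""
    s1, s2, t1, t2, p = q
    b = omega1**2 * (1 + p) ** 2
    m1 = m2 = 0j
    for _ in range(n_iter):
        den = (1 + s2 * m1) * (1 + t2 * m2) - b * m1 * m2
        m1_new = psi1 / (-xi + s1 - omega_star**2 * m2
                         + ((1 + t2 * m2) * s2 - b * m2) / den)
        m2_new = psi2 / (-xi + t1 - omega_star**2 * m1
                         + ((1 + s2 * m1) * t2 - b * m1) / den)
        m1, m2 = m1_new, m2_new
    return m1, m2

# illustrative parameters (t2 = 0, so q lies in Q for any omega1)
xi = 50j
psi1, psi2 = 2.5, 1.5
q = (0.3, 1.0, 0.2, 0.0, 0.0)           # (s1, s2, t1, t2, p)
m1, m2 = solve_fixed_point(xi, q, psi1, psi2, omega1=0.5, omega_star=0.7)

# the constraints from the definition, up to a small numerical slack
assert abs(m1) <= psi1 / xi.imag * (1 + 1e-2)
assert abs(m2) <= psi2 / xi.imag * (1 + 1e-2)
assert m1.imag > 0 and m2.imag > 0      # solutions live in C_+
```

The asserted bounds are the defining constraints $\vert m_j\vert \le \psi_j/\Im(\xi)$ and $m_j \in \complex_+$; evaluating $g$ and its derivatives on top of this solver is not attempted here.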
\begin{definition}[$\overline \cU$, $\overline \cT$, $\cA_U$, and $\cA_T$ in Proposition \ref{prop:concentration_lag}]\label{def:analytic_expression_overline} For any $\lambda \in \Lambdau$, define \[ \begin{aligned} \cA_U(\lambda, \psi_1, \psi_2) &= -\lim_{u\rightarrow 0_+}\left[ \psi_1\left( \normf_1^2\mu_1^2\partial_{s_1s_2} +\normf_1^2\partial_{s_1p} + \normf_1^2\partial_{s_1t_2} + \tau^2 \partial_{s_1 t_1}\right) g({\mathrm i} u; \bq; \bpsi)\Big\vert_{\bq = \bq_U}\right],\\ \overline \cU(\lambda, \psi_1, \psi_2) &= F_1^2 + \tau^2 - \lim_{u \to 0_+} \left[ \big( \normf_1^2\mu_1^2 \partial_{s_2} + \normf_1^2\partial_{p} + \normf_1^2\partial_{t_2} + \tau^2 \partial_{t_1} \big)g({\mathrm i} u; \bq; \bpsi) \Big\vert_{\bq = \bq_U}\right], \\ \cA_T(\lambda, \psi_1, \psi_2) &= -\lim_{u\rightarrow 0_+}\left[ \psi_1\left( \normf_1^2\mu_1^2\partial_{s_1s_2} +\normf_1^2\partial_{s_1p} + \normf_1^2\partial_{s_1t_2} + \tau^2 \partial_{s_1 t_1}\right) g({\mathrm i} u; \bq; \bpsi)\Big\vert_{\bq = \bq_T}\right],\\ \overline \cT(\lambda, \psi_1, \psi_2) &= F_1^2 + \tau^2 - \lim_{u \to 0_+} \left[ \big( \normf_1^2\mu_1^2 \partial_{s_2} + \normf_1^2\partial_{p} + \normf_1^2\partial_{t_2} + \tau^2 \partial_{t_1} \big)g({\mathrm i} u; \bq; \bpsi) \Big\vert_{\bq = \bq_T}\right],\\ \end{aligned} \] where $\bq_U = (\mu_\star^2 - \lambda\psi_1, \mu_1^2, \psi_2, 0,0), \bq_T = (\mu_\star^2 - \lambda\psi_1, \mu_1^2, 0, 0,0)$. \end{definition} In the following, we give a simplified expression for $\overline \cU$ and $\cA_U$. 
\begin{remark}[Simplification of $\overline \cU$ and $\cA_U$]\label{rmk:simplification} Define $\ratio, \overline\lambda$ as the rescaled versions of $\mu_1^2$ and $\lambda$: \[\ratio = \frac{\mu_1^2}{\mu_\star^2},~~ \overline\lambda=\frac{\lambda}{\mu_\star^2}.\] Let $m_1(\,\cdot\,; \bpsi), m_2(\,\cdot\,; \bpsi):\complex_+\to\complex_+$ be defined, for $\Im(\xi)\ge C$ a sufficiently large constant, as the unique solution of the equations \begin{equation} \begin{aligned} m_{1} &= \psi_1\left[-\xi+(1-\overline\lambda\psi_1) - m_2 + \frac{\ratio(1-m_2)}{1+\ratio m_1-\ratio m_1 m_2}\right]^{-1},\\ m_{2} &= -\psi_2\left[\xi+\psi_2-m_1-\frac{\ratio m_1}{1+\ratio m_1 - \ratio m_1 m_2}\right]^{-1}, \end{aligned} \end{equation} subject to the condition $\vert m_1\vert \le \psi_1/\Im(\xi)$, $\vert m_2\vert \le \psi_2/\Im(\xi)$. Extend this definition to $\Im(\xi) >0$ by requiring $m_1,m_2$ to be analytic functions in $\complex_+$. Let \[ \begin{aligned} \overline{m}_1 = \lim_{u\rightarrow 0_+}m_1({\mathrm i} u, \bpsi),\\ \overline{m}_2 = \lim_{u\rightarrow 0_+}m_2({\mathrm i} u, \bpsi). \end{aligned} \] Define \[ \begin{aligned} \chi_1 &= \overline{m}_1 \ratio-\overline{m}_1 \overline{m}_2 \ratio+1,\\ \chi_2 &= \overline{m}_1 -\psi_2+\frac{\overline{m}_1 \ratio}{\chi_1},\\ \chi_3 &= \overline{\lambda} \psi_1+\overline{m}_2-1+\frac{\ratio \left(\overline{m}_2-1\right)}{\chi_1}. 
\end{aligned} \] Define three polynomials $\cE_1, \cE_2, \cE_3$ as \[ \begin{aligned} \cE_1(\psi_1, \psi_2, \overline{\lambda}, \ratio) =&~ \psi_1^2(\psi_2 \chi_1^4+\psi_2 \chi_1^2 \ratio),\\ \cE_2(\psi_1, \psi_2, \overline{\lambda}, \ratio) =&~ \psi_1^2 (\chi_1^2 \chi_2^2 \overline{m}_2^2 \ratio-2 \chi_1^2 \chi_2^2 \overline{m}_2 \ratio+\chi_1^2 \chi_2^2 \ratio +\psi_2 \chi_1^2-\psi_2 \overline{m}_1^2 \overline{m}_2^2 \ratio^3+2 \psi_2 \overline{m}_1^2 \overline{m}_2 \ratio^3-\psi_2 \overline{m}_1^2 \ratio^3+\psi_2 \ratio),\\ \cE_3(\psi_1, \psi_2, \overline{\lambda}, \ratio) =&~ -\chi_1^4 \chi_2^2 \chi_3^2+\psi_1 \psi_2 \chi_1^4 +\psi_1 \chi_1^2 \chi_2^2 \overline{m}_2^2 \ratio^2-2 \psi_1 \chi_1^2 \chi_2^2 \overline{m}_2 \ratio^2+\psi_1 \chi_1^2 \chi_2^2 \ratio^2\\ &~+\psi_2 \chi_1^2 \chi_3^2 \overline{m}_1^2 \ratio^2+2 \psi_1 \psi_2 \chi_1^2 \ratio -\psi_1 \psi_2 \overline{m}_1^2 \overline{m}_2^2 \ratio^4+2 \psi_1 \psi_2 \overline{m}_1^2 \overline{m}_2 \ratio^4-\psi_1 \psi_2 \overline{m}_1^2 \ratio^4+\psi_1 \psi_2 \ratio^2. \end{aligned} \] Then \[ \begin{aligned} \overline\cU(\overline{\lambda}, \psi_1, \psi_2) &= -\frac{\left(\overline{m}_2-1\right) \left(\tau ^2\chi_1(\psi_1, \psi_2, \overline{\lambda}, \ratio)+F_1^2\right)}{\chi_1(\psi_1, \psi_2, \overline{\lambda}, \ratio)},\\ \cA_U(\overline{\lambda}, \psi_1, \psi_2) &= \frac{\tau^2\cE_1(\psi_1, \psi_2, \overline{\lambda}, \ratio)+F_1^2\cE_2(\psi_1, \psi_2, \overline{\lambda}, \ratio)}{\cE_3(\psi_1, \psi_2, \overline{\lambda}, \ratio)}. 
\end{aligned} \] \end{remark} \begin{remark}[Simplification of $\overline \cT$ and $\cA_T$]\label{rmk:simplification_interpolating} Define $\ratio, \overline\lambda$ as the rescaled versions of $\mu_1^2$ and $\lambda$: \[\ratio = \frac{\mu_1^2}{\mu_\star^2},~~ \overline\lambda=\frac{\lambda}{\mu_\star^2}.\] Let $m_1(\,\cdot\,; \bpsi), m_2(\,\cdot\,; \bpsi):\complex_+\to\complex_+$ be defined, for $\Im(\xi)\ge C$ a sufficiently large constant, as the unique solution of the equations \begin{equation} \begin{aligned} m_{1} &= \psi_1\left[-\xi+(1-\overline\lambda\psi_1) - m_2 + \frac{\ratio(1-m_2)}{1+\ratio m_1-\ratio m_1 m_2}\right]^{-1},\\ m_{2} &= -\psi_2\left[\xi+m_1+\frac{\ratio m_1}{1+\ratio m_1 - \ratio m_1 m_2}\right]^{-1}, \end{aligned} \end{equation} subject to the condition $\vert m_1\vert \le \psi_1/\Im(\xi)$, $\vert m_2\vert \le \psi_2/\Im(\xi)$. Extend this definition to $\Im(\xi) >0$ by requiring $m_1,m_2$ to be analytic functions in $\complex_+$. Let \[ \begin{aligned} \overline{m}_1 = \lim_{u\rightarrow 0_+}m_1({\mathrm i} u, \bpsi),\\ \overline{m}_2 = \lim_{u\rightarrow 0_+}m_2({\mathrm i} u, \bpsi). \end{aligned} \] Define \[ \begin{aligned} \chi_4 = \overline{m}_1 + \frac{\overline{m}_1\ratio}{\chi_1(\overline{m}_1, \overline{m}_2, \ratio)}, \end{aligned} \] and \[ \begin{aligned} \chi_1 &= \overline{m}_1 \ratio-\overline{m}_1 \overline{m}_2 \ratio+1,\\ \chi_3 &= \overline{\lambda} \psi_1+\overline{m}_2-1+\frac{\ratio \left(\overline{m}_2-1\right)}{\chi_1}, \end{aligned} \] where the definitions of $\chi_1, \chi_3$ are the same as in Remark \ref{rmk:simplification}. 
Define three polynomials $\cE_4, \cE_5, \cE_6$ as \[ \begin{aligned} \cE_4(\psi_1, \psi_2, \overline{\lambda}, \ratio) =& \psi_1 \Big(\psi_2 \chi_1^4 \chi_4^3+\chi_1^4 \chi_4^2 \overline{m}_1^3 \overline{m}_2^2 \ratio^3-2 \chi_1^4 \chi_4^2 \overline{m}_1^3 \overline{m}_2 \ratio^3+\chi_1^4 \chi_4^2 \overline{m}_1^3 \ratio^3+2 \chi_1^3 \chi_4^2 \overline{m}_1^3 \overline{m}_2^2 \ratio^2\\ &-4 \chi_1^3 \chi_4^2 \overline{m}_1^3 \overline{m}_2 \ratio^2+2 \chi_1^3 \chi_4^2 \overline{m}_1^3 \ratio^2-\psi_2 \chi_1^3 \chi_4^2 \overline{m}_1 \ratio+\chi_1^2 \chi_4^2 \overline{m}_1^3 \overline{m}_2^2 \ratio-2 \chi_1^2 \chi_4^2 \overline{m}_1^3 \overline{m}_2 \ratio\\ &+\chi_1^2 \chi_4^2 \overline{m}_1^3 \ratio +\psi_2 \chi_1^2 \chi_4^2 \overline{m}_1 \ratio-\psi_2 \chi_1^2 \overline{m}_1^5 \overline{m}_2^2 \ratio^5+2 \psi_2 \chi_1^2 \overline{m}_1^5 \overline{m}_2 \ratio^5-\psi_2 \chi_1^2 \overline{m}_1^5 \ratio^5\\ &-2 \psi_2 \chi_1 \overline{m}_1^5 \overline{m}_2^2 \ratio^4+4 \psi_2 \chi_1 \overline{m}_1^5 \overline{m}_2 \ratio^4 -2 \psi_2 \chi_1 \overline{m}_1^5 \ratio^4-\psi_2 \overline{m}_1^5 \overline{m}_2^2 \ratio^3 \\ &+2 \psi_2 \overline{m}_1^5 \overline{m}_2 \ratio^3 -\psi_2 \overline{m}_1^5 \ratio^3 \Big),\\ \cE_5 (\psi_1, \psi_2, \overline{\lambda}, \ratio) =& \overline{m}_1 {\Big(\ratio+1+\overline{m}_1 \ratio -\overline{m}_1 \overline{m}_2 \ratio \Big)}^2 \Big(-\chi_1^4 \chi_3^2 \chi_4^2 \overline{m}_1^2 \\ &+\psi_1 \psi_2 \chi_1^4 \chi_4^2-2 \psi_1 \psi_2 \chi_1^3 \chi _{4} \overline{m}_1 \ratio+\psi_2 \chi_1^2 \chi_3^2 \overline{m}_1^4 \ratio^2 +\psi_1 \chi_1^2 \chi_4^2 \overline{m}_1^2 \overline{m}_2^2 \ratio^2 \\ & -2 \psi_1 \chi_1^2 \chi_4^2 \overline{m}_1^2 \overline{m}_2 \ratio^2+\psi_1 \chi_1^2 \chi_4^2 \overline{m}_1^2 \ratio^2+2 \psi_1 \psi_2 \chi_1^2 \chi _{4} \overline{m}_1 \ratio+\psi_1 \psi_2 \chi_1^2 \overline{m}_1^2 \ratio^2 \\ & -2 \psi_1 \psi_2 \chi_1 \overline{m}_1^2 \ratio^2-\psi_1 \psi_2 \overline{m}_1^4 \overline{m}_2^2 \ratio^4+2 \psi_1 
\psi_2 \overline{m}_1^4 \overline{m}_2 \ratio^4-\psi_1 \psi_2 \overline{m}_1^4 \ratio^4+\psi_1 \psi_2 \overline{m}_1^2 \ratio^2\Big),\\ \cE_6(\psi_1, \psi_2, \overline{\lambda}, \ratio) =& \chi_1^2 \chi_4^2 \psi_1 \psi_2 \Big(\chi _{4} \chi_1^2-\overline{m}_1 \chi_1 \ratio+\overline{m}_1 \ratio\Big) {\Big(\overline{m}_1 \ratio-\overline{m}_1 \overline{m}_2 \ratio+1\Big)}^2. \end{aligned} \] Then \[ \begin{aligned} \overline\cT(\overline{\lambda}, \psi_1, \psi_2) &= -\frac{\left(\overline{m}_2-1\right) \left(\tau ^2\chi_1(\psi_1, \psi_2, \overline{\lambda}, \ratio)+F_1^2\right)}{\chi_1(\psi_1, \psi_2, \overline{\lambda}, \ratio)},\\ \cA_T(\overline{\lambda}, \psi_1, \psi_2) &= -\psi_1 \frac{F_1^2\cE_4(\psi_1, \psi_2, \overline{\lambda}, \ratio)+\tau^2\cE_6(\psi_1, \psi_2, \overline{\lambda}, \ratio)}{\cE_5(\psi_1, \psi_2, \overline{\lambda}, \ratio)}. \end{aligned} \] \end{remark} \subsection{Definitions of $\cR$ and $\cA$} \label{sec:analytic_expression_R_A} In this section, we present the expressions of $\cR$ and $\cA$ from \citet{mm19} which are used in our results and plots. 
\begin{definition}[Formula for the prediction error of minimum norm interpolator] Define \[ \ratio = \mu_1^2/\mu_\star^2,~~ \rho = F_1^2/\tau^2. \] Let the functions $\nu_1, \nu_2: \C_+ \to \C_+$ be uniquely defined by the following conditions: $(i)$ $\nu_1$, $\nu_2$ are analytic on $\C_+$; $(ii)$ For $\Im(\xi)>0$, $\nu_1(\xi)$, $\nu_2(\xi)$ satisfy the following equations \begin{equation} \begin{aligned} \nu_1 =&~ \psi_1\Big(-\xi - \nu_2 - \frac{\ratio^2 \nu_2}{1- \ratio^2 \nu_1\nu_2}\Big)^{-1}\, ,\\ \nu_2 =&~ \psi_2\Big(-\xi - \nu_1 - \frac{\ratio^2 \nu_1}{1- \ratio^2 \nu_1\nu_2}\Big)^{-1}\, ; \end{aligned} \end{equation} $(iii)$ $(\nu_1(\xi), \nu_2(\xi))$ is the unique solution of these equations with $\vert \nu_1(\xi)\vert\le \psi_1/\Im(\xi)$, $\vert \nu_2(\xi) \vert \le \psi_2/\Im(\xi)$ for $\Im(\xi) > C$, with $C$ a sufficiently large constant. Let \begin{equation}\label{eqn:definition_chi_main_formula} \chi \equiv \lim_{u\rightarrow 0} \nu_1({\mathrm i} u) \cdot \nu_2({\mathrm i} u), \end{equation} and \begin{equation} \begin{aligned} E_0(\ratio, \psi_1, \psi_2) \equiv&~ - \chi^5\ratio^6 + 3\chi^4 \ratio^4+ (\psi_1\psi_2 - \psi_2 - \psi_1 + 1)\chi^3\ratio^6 - 2\chi^3\ratio^4 - 3\chi^3\ratio^2 \\ &+ (\psi_1 + \psi_2 - 3\psi_1\psi_2 + 1)\chi^2\ratio^4 + 2\chi^2\ratio^2+ \chi^2+ 3\psi_1\psi_2\chi\ratio^2 - \psi_1\psi_2\, ,\\ E_1(\ratio, \psi_1, \psi_2) \equiv&~ \psi_2\chi^3\ratio^4 - \psi_2\chi^2\ratio^2 + \psi_1\psi_2\chi\ratio^2 - \psi_1\psi_2\, , \\ E_2(\ratio, \psi_1, \psi_2) \equiv&~ \chi^5\ratio^6 - 3\chi^4\ratio^4+ (\psi_1 - 1)\chi^3\ratio^6 + 2\chi^3\ratio^4 + 3\chi^3\ratio^2 + (- \psi_1 - 1)\chi^2\ratio^4 - 2\chi^2\ratio^2 - \chi^2\,.\\ \end{aligned} \end{equation} Then the asymptotic risk of the minimum norm interpolator is given by \[ \cR(\psi_1, \psi_2) = F_1^2\frac{E_1(\ratio, \psi_1, \psi_2) }{E_0 (\ratio, \psi_1, \psi_2) } + \tau^2\frac{E_2(\ratio, \psi_1, \psi_2) }{E_0 (\ratio, \psi_1, \psi_2) } + \tau^2. 
\] The norm of the minimum norm interpolator is given by \[ \begin{aligned} A_1 =&~ \frac{\rho}{1 + \rho} \Big[ - \chi^2 (\chi \ratio^4 - \chi \ratio^2 + \psi_2 \ratio^2 + \ratio^2 - \chi \psi_2 \ratio^4 + 1)\Big] + \frac{1}{ 1 + \rho} \Big[ \chi^2 (\chi \ratio^2 - 1) (\chi^2 \ratio^4 - 2 \chi \ratio^2 + \ratio^2 + 1) \Big], \\ A_0 =&~ - \chi^5\ratio^6 + 3\chi^4\ratio^4 + (\psi_1\psi_2 - \psi_2 - \psi_1 + 1)\chi^3\ratio^6 - 2\chi^3\ratio^4 - 3\chi^3\ratio^2\\ &~+ (\psi_1 + \psi_2 - 3\psi_1\psi_2 + 1)\chi^2\ratio^4 + 2\chi^2\ratio^2 + \chi^2 + 3\psi_1\psi_2\chi\ratio^2 - \psi_1\psi_2, \\ \cA(\psi_1, \psi_2) =&~ \psi_1(F_1^2+\tau^2)A_1 /(\mu_\star^2 A_0). \end{aligned} \] \end{definition} \section{Experimental setup for simulations in Figure \ref{fig:simulation}} \label{sec:simulation_detail} In this section, we present additional details for Figure \ref{fig:simulation}. We choose $y_i = \< \bx_i, \bbeta\>$ for some $\Vert \bbeta \Vert_2^2 = 1$, the ReLU activation function $\sigma(x) = \max\{ x, 0\}$, and $\psi_1=N/d = 2.5$ and $\psi_2=n/d = 1.5$. For the theoretical curves (in solid lines), we choose $\lambda\in[0.426, 2]$, so that $\cA_U(\lambda) \in [0, 15]$, and plot the parametric curve $(\cA_U(\lambda), \overline\cU(\lambda)+\lambda\cA_U(\lambda))$ for the uniform convergence. For the uniform convergence over interpolators, we choose $\lambda\in[0.21, 2]$ so that $\cA_T(\lambda) \in [6.4, 15]$, and plot $(\cA_T(\lambda), \overline\cT(\lambda)+\lambda\cA_T(\lambda))$. 
The definitions of these theoretical predictions are given in Definition \ref{def:analytic_expression_overline}, Remark \ref{rmk:simplification}, and Remark \ref{rmk:simplification_interpolating}. For the empirical simulations (in dots), first recall that in Proposition \ref{prop:concentration_lag} we defined \[ \begin{aligned} \ba_U(\lambda) =&~ \arg\max_{\ba} \Big[ R(\ba) - \what R_n(\ba) - \psi_1 \lambda \| \ba \|_2^2 \Big],\\ \ba_T(\lambda) =&~ \arg\max_{\ba}\inf_{\bmu} \Big[ R(\ba) - \lambda\psi_1 \| \ba \|_2^2 + 2 \< \bmu, \bZ \ba - \by/\sqrt{d} \>\Big]. \end{aligned} \] After picking a value of $\lambda$, we sample $20$ independent problem instances, with the number of features $N=500$, number of samples $n=300$, and covariate dimension $d=200$. We compute the corresponding $(\psi_1\|\ba_U\|_2^2, R(\ba_U)-\hat{R}_n(\ba_U))$ and $(\psi_1\|\ba_T\|_2^2, R(\ba_T))$ for each instance. Then, we plot the empirical mean and $1/\sqrt{20}$ times the empirical standard deviation (around the mean) of each coordinate. \section{Discussions}\label{sec:discuss} \vspace{-0.5em} In this paper, we calculated the uniform convergence bounds for random features models in the proportional scaling regime. Our results exhibit a setting in which the standard uniform convergence bound is vacuous, while uniform convergence over interpolators gives a non-trivial bound on the actual generalization error. \vspace{-0.8em} \paragraph{Modeling assumptions and technical assumptions.} We made a few assumptions to prove the main result, Theorem \ref{thm:main_theorem}. Some of these assumptions can be relaxed. Indeed, if we assume a non-linear target function $f_d$ instead of a linear one as in Assumption \ref{ass:linear_target}, the non-linear part will behave like additional noise in the proportional scaling limit. However, proving this rigorously requires substantial technical work. A similar issue exists in \citet{mm19}. 
Moreover, it is not essential to assume vanishing $\ob_0^2$ in Assumption \ref{ass:activation}. Assumptions \ref{ass:overline_U_invertable} and \ref{ass:exchange_limit} involve some properties of specific random matrices. We believe these assumptions can be proved under more natural assumptions on the activation function $\sigma$. However, proving them requires developing some sophisticated random matrix theory results, which could be of independent interest. \vspace{-0.8em} \paragraph{Relationship with non-asymptotic results.} We hold the same opinion as \citet{abbaras2020rademacher}: the exact formulae in the asymptotic limit can provide a complementary view to the classical theories of generalization. On the one hand, asymptotic formulae can be used to quantify the tightness of non-asymptotic bounds; on the other hand, the asymptotic formulae are in many cases comparable to non-asymptotic bounds. For example, Lemma 22 in \citet{bartlettAndMendelson}, coupled with a bound on the Lipschitz constant of the square loss in the proper regime, implies that $\cU_\infty(A, \psi_2)$ has a non-asymptotic bound that scales linearly in $A$ and inversely proportionally to $\psi_2^{1/2}$ (c.f. Proposition 6 of \citet{weinan2020machine}). This coincides with the intuitions in Section \ref{sec:kernel_noisy}. \vspace{-0.8em} \paragraph{Uniform convergence in other settings.} A natural question is whether the power law derived in Section \ref{sec:power_law} holds for models in more general settings. One can perform a similar analysis to calculate the uniform convergence bounds in a few other settings \cite{montanari2019generalization,dhifallah2020precise,hu2020universality}. We believe the power law may be different, but the qualitative properties of the uniform convergence bounds will share some similar features. 
\vspace{-0.8em} \paragraph{Relationship with~\citet{zhou2021uniform}.} The separation of uniform convergence bounds ($U$ and $T$) was first pointed out by \citet{zhou2021uniform}, where the authors worked with the linear regression model in the ``junk features'' setting. We believe the random features model is a more natural model to illustrate this separation: in \citet{zhou2021uniform}, there are some unnatural parameters $\lambda_n, d_J$ that are hard to connect to deep learning models, while the random features model is closely related to two-layer neural networks. \section{Problem formulation}\label{sec:model_setup_def_limit} In this section, we present the background needed to understand the insights from our main result. In Section \ref{sec:model_setup} we define the random feature regression task that this paper focuses on. In Section \ref{sec:def_limit}, we informally present the limiting regime our theory covers. \subsection{Model setup}\label{sec:model_setup} Consider a dataset $(\bx_i, y_i)_{i \in [n]}$ with $n$ samples. Assume that the covariates follow $\bx_i \sim_{iid} \Unif(\S^{d-1}(\sqrt d))$, and the responses satisfy $y_i = f_d(\bx_i) + \eps_i$, with the noise $\eps_i \sim_{iid} \cN(0, \tau^2)$ independent of $(\bx_i)_{i \in [n]}$. We will consider both the noisy ($\tau^2 > 0$) and noiseless ($\tau^2 = 0$) settings. We fit the dataset using the random features model. Let $(\btheta_j)_{j \in [N]} \sim_{iid} \Unif(\S^{d-1}(\sqrt d))$ be the random feature vectors. Given an activation function $\sigma: \R \to \R$, we define the random features function class $\cF_{\RF}(\bTheta)$ by \[ \cF_{\RF}(\bTheta) \equiv \Big\{ f(\bx) = \sum_{j = 1}^N a_j \sigma\big(\< \bx, \btheta_j\>/\sqrt d \big): \ba \in \R^N \Big\}. 
\] \paragraph{Generalization error of the minimum norm interpolator.} Denote the population risk and the empirical risk of a predictor $\ba \in \R^N$ by \begin{align} R(\ba) =&~ \E_{\bx, y}~ \Big( y - \sum_{j = 1}^N a_j \sigma(\< \bx, \btheta_j\>/\sqrt d) \Big)^2,\label{eqn:pop_risk}\\ \what R_n(\ba) =&~ \frac{1}{n}\sum_{i = 1}^n \Big( y_i - \sum_{j = 1}^N a_j \sigma(\< \bx_i, \btheta_j\>/\sqrt d) \Big)^2, \label{eqn:emp_risk} \end{align} and the regularized empirical risk minimizer with vanishing regularization by \[ \ba_{\min} = \lim_{\lambda \to 0+} \arg\min_{\ba} \Big[ \what R_n(\ba) + \lambda \|\ba\|_2^2 \Big]. \] In the overparameterized regime ($N>n$), under mild conditions, we have $\min_{\ba} \what R_n(\ba) = \what R_n(\ba_{\min})=0$. In this regime, $\ba_{\min}$ can be interpreted as the minimum $\ell_2$ norm interpolator. A quantity of interest is the generalization error of this predictor, which we denote (with a slight abuse of notation) by \begin{align} R(N, n, d) \equiv R(\ba_{\min}). \label{eqn:risk_min_l2} \end{align} \paragraph{Uniform convergence bounds.} We denote the uniform convergence bound over a norm ball and the uniform convergence over interpolators in the norm ball by \begin{align} &U(A, N, n, d) \equiv \sup_{(N/d) \| \ba \|_2^2 \le A} \Big( R(\ba) - \what R_n(\ba) \Big), \label{eqn:uniform}\\ &T(A, N, n, d) \equiv \sup_{(N/d) \| \ba \|_2^2 \le A, \what R_n(\ba) = 0} R(\ba). \label{eqn:uniform_zeroloss} \end{align} Here the scaling factor $N/d$ is chosen so that the norm ball converges to a non-trivial RKHS norm ball of size $\sqrt{A}$ as $\psi_1 \to \infty$ (limit taken after $N/d \to \psi_1$). Note that in order for the maximization problem in \eqref{eqn:uniform_zeroloss} to have a non-empty feasible region, we need $\what R_n(\ba_{\min}) = 0$ and need to take $A \geq (N/d)\|\ba_{\min}\|_2^2$: we will show that in the region $N > n$ with sufficiently large $A$, this happens with high probability.
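Both $U$ and $R(\ba_{\min})$ can be probed numerically at finite size. Since $R(\ba) - \what R_n(\ba)$ is a quadratic in $\ba$ (with the population risk approximated by a large fresh sample), the supremum in \eqref{eqn:uniform} is a trust-region problem, solvable by bisection on the KKT multiplier. The sketch below is illustrative: sizes, seed, and helper names such as `featurize` are ours, and the generic (non-hard) case of the trust-region problem is assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
d, N, n, m, tau = 30, 100, 60, 4000, 0.1   # m: fresh-sample size approximating R

def sphere(k, dim, rng):
    # i.i.d. uniform points on the sphere of radius sqrt(dim)
    g = rng.standard_normal((k, dim))
    return g * (np.sqrt(dim) / np.linalg.norm(g, axis=1, keepdims=True))

sigma = lambda u: np.maximum(u, 0.0) - 1.0 / np.sqrt(2.0 * np.pi)  # shifted ReLU
Theta = sphere(N, d, rng)
beta = rng.standard_normal(d)
beta /= np.linalg.norm(beta)

def featurize(k):
    X = sphere(k, d, rng)
    return sigma(X @ Theta.T / np.sqrt(d)), X @ beta + tau * rng.standard_normal(k)

S, y = featurize(n)     # training set
St, yt = featurize(m)   # fresh sample approximating the population risk

# R(a) - hatR_n(a) is (approximately) the quadratic a'Ba - 2g'a + c:
B = St.T @ St / m - S.T @ S / n
g = St.T @ yt / m - S.T @ y / n
c = yt @ yt / m - y @ y / n
q = lambda a: a @ B @ a - 2 * g @ a + c

# Ball size A slightly above the min-norm interpolator's scaled norm.
a_min = np.linalg.pinv(S) @ y
A = 1.5 * (N / d) * np.sum(a_min**2)
r = np.sqrt(A * d / N)

# sup of q over ||a|| <= r: KKT gives (B - lam*I)a = g with lam >= lam_max(B),
# and ||a(lam)|| decreases in lam, so bisect on lam.
w, V = np.linalg.eigh(B)
ghat = V.T @ g
norm_at = lambda lam: np.sqrt(np.sum((ghat / (w - lam)) ** 2))
lo, hi = w[-1] + 1e-9, w[-1] + np.linalg.norm(g) / r + 1.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if norm_at(mid) > r else (lo, mid)
a_star = V @ (ghat / (w - 0.5 * (lo + hi)))
U_hat = q(a_star)

# a_min is feasible in the ball, so the sup dominates its generalization gap:
assert U_hat >= q(a_min) - 1e-8
```

The bisection exploits that the maximizer lies on the boundary of the ball whenever $B$ has a positive eigenvalue, which holds here because directions in the null space of $S$ carry positive test risk.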
By construction, for any $A\geq (N/d)\|\ba_{\min}\|_2^2$, we have $U(A) \ge T(A) \ge R(\ba_{\min})$ (see Figure \ref{fig:simulation}). A natural problem, and our goal in this paper, is to quantify the gaps among $U(A)$, $T(A)$, and $R(\ba_{\min})$. \begin{figure}[ht] \begin{center} \includegraphics[width=0.35\textwidth]{figures/syn.pdf} \vskip -0.1in \caption{ Illustration of uniform convergence $U$ (c.f. eq. \eqref{eqn:uniform}), uniform convergence over interpolators $T$ (c.f. eq. \eqref{eqn:uniform_zeroloss}), and minimum norm interpolator $R(\ba_{\min})$. We take $y_i =\<\bx_i, \bbeta\>$ for some $\bbeta$ with $\|\bbeta\|_2^2=1$, and take the ReLU activation function $\sigma(x) = \max\{ x, 0\}$. Solid lines are our theoretical predictions $\cU$ and $\cT$ (cf.~\eqref{eqn:u_limit} \&~\eqref{eqn:t_limit}). Points with error bars are obtained from simulations with the number of features $N=500$, number of samples $n=300$, and covariate dimension $d=200$. The error bar reports $1/\sqrt{20} \times$ the standard deviation over $20$ instances. See Appendix \ref{sec:simulation_detail} for details. } \label{fig:simulation} \end{center} \vskip -0.2in \end{figure} \subsection{High dimensional regime}\label{sec:def_limit} We approach this problem in the limit $d \to \infty$ with $N/d \to \psi_1$ and $n/d \to \psi_2$ (c.f. Assumption \ref{ass:linear}). We further assume the setting of a linear target function $f_d$ and a nonlinear activation function $\sigma$ (c.f. Assumptions \ref{ass:linear_target} and \ref{ass:activation}).
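A minimal finite-$d$ instance of this regime (illustrative sizes and seed; `sphere` is a helper we introduce here) checks the two basic facts used throughout: the minimum norm predictor interpolates, and its population risk stays above the Bayes level.

```python
import numpy as np

rng = np.random.default_rng(4)
psi1, psi2, d, tau = 4.0, 2.0, 100, 0.1
N, n, m = int(psi1 * d), int(psi2 * d), 10_000   # proportional regime: N/d, n/d fixed

def sphere(k, dim, rng):
    g = rng.standard_normal((k, dim))
    return g * (np.sqrt(dim) / np.linalg.norm(g, axis=1, keepdims=True))

sigma = lambda u: np.maximum(u, 0.0) - 1.0 / np.sqrt(2.0 * np.pi)
Theta, X, Xt = sphere(N, d, rng), sphere(n, d, rng), sphere(m, d, rng)
beta = rng.standard_normal(d)
beta /= np.linalg.norm(beta)                      # linear target, ||beta||_2 = 1

y = X @ beta + tau * rng.standard_normal(n)
S = sigma(X @ Theta.T / np.sqrt(d))
a_min = np.linalg.pinv(S) @ y                     # minimum l2-norm interpolator

train_err = np.mean((y - S @ a_min) ** 2)
yt = Xt @ beta + tau * rng.standard_normal(m)
test_err = np.mean((yt - sigma(Xt @ Theta.T / np.sqrt(d)) @ a_min) ** 2)

assert train_err < 1e-10     # interpolation: the empirical risk vanishes (N > n)
assert test_err > tau**2     # the population risk sits above the Bayes risk
```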
In this regime, our main result Theorem \ref{thm:main_theorem} will show that the uniform convergence $U$ and the uniform convergence over interpolators $T$ converge to deterministic functions; written informally, \begin{align} U(A, N, n, d) \overset{d \to \infty}{\rightarrow} &~ \cU(A, \psi_1, \psi_2), \label{eqn:u_limit}\\ T(A, N, n, d) \overset{d \to \infty}{\rightarrow} &~ \cT(A, \psi_1, \psi_2), \label{eqn:t_limit} \end{align} where $\cU$ and $\cT$ will be defined in Definition~\ref{def:formula_U_T} (which depends on auxiliary quantities defined in Appendix \ref{sec:analytic_expression_overline} and presented heuristically in Remark \ref{rmk:heuristic_def}). In addition to $\cU$ and $\cT$, Theorem 1 of \citet{mm19} implies the following convergence \begin{align} (N/d) \|\ba_{\min}\|_2^2 \overset{d \to \infty}{\rightarrow} &~ \cA(\psi_1, \psi_2), \label{eqn:cA}\\ R(\ba_{\min}) \overset{d \to \infty}{\rightarrow} &~ \cR(\psi_1, \psi_2). \label{eqn:cR} \end{align} The precise algebraic expressions in Eq. \eqref{eqn:cA} and \eqref{eqn:cR} were given in Definition 1 of \citet{mm19}; we include them in Appendix \ref{sec:analytic_expression_overline} for completeness. We will sometimes refer to $\cU, \cT, \cA, \cR$ without explicitly marking their dependence on $A, \psi_1, \psi_2$ for notational simplicity. \paragraph{Kernel regime. } \citet{aliben} have shown that, as $N \to \infty$, the random feature space $\cF_{\RF}(\bTheta)$ (equipped with a proper inner product) converges to the RKHS (Reproducing Kernel Hilbert Space) induced by the kernel \[ H(\bx, \bx') = \E_{\bw \sim \Unif(\S^{d-1})}[\sigma(\< \bx, \bw\>) \sigma(\< \bx', \bw\>)]. \] We expect that, if we take the limit $\psi_1\rightarrow\infty$ after $N, d, n\rightarrow\infty$, the formulae for $\cU$ and $\cT$ will coincide with the corresponding asymptotic limits of $U$ and $T$ for kernel ridge regression with the kernel $H$.
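This limiting kernel is easy to check by Monte Carlo: with $\bw_j = \btheta_j/\sqrt{d}$ uniform on the unit sphere, the empirical random features kernel $\frac{1}{N}\sum_j \sigma(\<\bx,\bw_j\>)\sigma(\<\bx',\bw_j\>)$ is a PSD Gram matrix for every finite $N$ and concentrates around $H$ as $N$ grows. A sketch (sizes and tolerances illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 50, 8
X = rng.standard_normal((n, d))
X *= np.sqrt(d) / np.linalg.norm(X, axis=1, keepdims=True)   # radius-sqrt(d) sphere

sigma = lambda u: np.maximum(u, 0.0) - 1.0 / np.sqrt(2.0 * np.pi)

def emp_kernel(N):
    """(1/N) sum_j sigma(<x, w_j>) sigma(<x', w_j>) with w_j uniform on the unit
    sphere: the Monte Carlo estimate of the limiting kernel H."""
    W = rng.standard_normal((N, d))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    Phi = sigma(X @ W.T)
    return Phi @ Phi.T / N

K_small, K_big = emp_kernel(2_000), emp_kernel(100_000)

assert np.min(np.linalg.eigvalsh(K_big)) > -1e-10   # Gram structure: PSD
assert np.max(np.abs(K_small - K_big)) < 0.1        # concentration as N grows
```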
This intuition has been mentioned in a few papers \cite{mm19, d2020double, jacot2020implicit}. In this spirit, we denote \begin{align} \cU_\infty(A, \psi_2) \equiv&~ \lim_{\psi_1 \to \infty} \cU(A, \psi_1, \psi_2), \label{eqn:cU_inf}\\ \cT_\infty(A, \psi_2) \equiv&~ \lim_{\psi_1 \to \infty} \cT(A, \psi_1, \psi_2), \label{eqn:cT_inf}\\ \cA_\infty(\psi_2) \equiv&~ \lim_{\psi_1 \to \infty} \cA(\psi_1, \psi_2), \label{eqn:cA_inf}\\ \cR_\infty(\psi_2) \equiv&~ \lim_{\psi_1 \to \infty} \cR(\psi_1, \psi_2). \label{eqn:cR_inf} \end{align} We will refer to the quantities $\{\cU_\infty, \cT_\infty, \cA_\infty, \cR_\infty\}$ as the $\{$uniform convergence in norm ball, uniform convergence over interpolators in norm ball, minimum $\ell_2$ norm of interpolators, and generalization error of interpolators$\}$ of kernel ridge regression. \paragraph{Low norm uniform convergence bounds.} There is a question of which norm $A$ to choose in $\cU$ and $\cT$ to compare with $\cR$. In order for $U$ and $T$ to serve as proper bounds for $R(\ba_{\min})$, we need to take at least $A \geq \psi_1\|\ba_{\min}\|_2^2$. Therefore, we will choose \begin{equation}\label{eqn:low_norm} A = \alpha \psi_1 \|\ba_{\min}\|_2^2, \end{equation} for some $\alpha>1$ (e.g., $\alpha = 1.1$). Note $\psi_1 \|\ba_{\min}\|_2^2 \to \cA(\psi_1, \psi_2)$ as $d \to \infty$. So for a fixed $\alpha > 1$, we further define \begin{align} \cU^{(\alpha)}(\psi_1, \psi_2) \equiv&~ \cU(\alpha \cA(\psi_1, \psi_2), \psi_1, \psi_2), \label{eqn:cU_alpha}\\ \cT^{(\alpha)}(\psi_1, \psi_2) \equiv&~ \cT(\alpha \cA(\psi_1, \psi_2), \psi_1, \psi_2), \label{eqn:cT_alpha} \end{align} and their kernel version, \begin{align} \cU^{(\alpha)}_\infty(\psi_2) \equiv&~ \lim_{\psi_1 \to \infty} \cU^{(\alpha)}(\psi_1, \psi_2), \label{eqn:cU_inf_alpha}\\ \cT^{(\alpha)}_\infty(\psi_2) \equiv&~ \lim_{\psi_1 \to \infty} \cT^{(\alpha)}(\psi_1, \psi_2). 
\label{eqn:cT_inf_alpha} \end{align} This definition ensures that $\cR(\psi_1, \psi_2) \le \cT^{(\alpha)}(\psi_1, \psi_2) \le \cU^{(\alpha)}(\psi_1, \psi_2)$ and $\cR_\infty(\psi_2) \le \cT^{(\alpha)}_\infty(\psi_2) \le \cU^{(\alpha)}_\infty(\psi_2)$. \section{Asymptotic power laws and separations}\label{sec:power_law} In this section, we evaluate the algebraic expressions derived in our main result (Theorem~\ref{thm:main_theorem}) for the quantities $\cU^{(\alpha)}$, $\cT^{(\alpha)}$, $\cA$, and $\cR$, before formally presenting the theorem. We examine their dependence on the noise level $\tau^2$, the number of features $\psi_1 = \lim_{d \to \infty} N/d$, and the sample size $\psi_2 = \lim_{d \to \infty} n/d$, and we further infer their asymptotic power laws for large $\psi_1$ and $\psi_2$. \subsection{Norm of the minimum norm interpolator}\label{sec:kernel_norm} Since we are considering uniform convergence bounds over the norm ball of size $\alpha$ times $\cA_\infty(\psi_2)$ (the norm of the min-norm interpolator), let us first examine how $\cA_\infty(\psi_2)$ scales with $\psi_2$. As we shall see, $\cA_\infty(\psi_2)$ behaves differently in the noiseless ($\tau^2=0$) and noisy ($\tau^2>0$) settings, so here we explicitly mark the dependence on $\tau^2$, i.e. $\cA_\infty(\psi_2;\tau^2)$. The inferred asymptotic power law gives (c.f. Figure \ref{fig:kernel_norm}) \[ \begin{aligned} \cA_\infty(\psi_2; \tau^2>0) \sim&~ \psi_2,\\ \cA_\infty(\psi_2; \tau^2=0) \sim&~ 1, \end{aligned} \] where $X_1(\psi) \sim X_2(\psi)$ for large $\psi$ means that \[ \lim_{\psi \to \infty} \log(X_1(\psi)) / \log(X_2(\psi)) = 1. \] In words, when there is no label noise ($\tau^2=0$), we can interpolate infinitely many samples with a bounded norm. When the responses are noisy $(\tau^2>0)$, interpolation requires a large norm that is proportional to the number of samples.
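This contrast is easy to reproduce at finite size. The sketch below (illustrative sizes and seed; `interp_norm` is a helper we introduce here, not from the paper) compares the scaled norm $(N/d)\|\ba_{\min}\|_2^2$ with and without label noise:

```python
import numpy as np

rng = np.random.default_rng(3)
d, N = 20, 600                      # large psi_1 = N/d as a proxy for the kernel regime

sigma = lambda u: np.maximum(u, 0.0) - 1.0 / np.sqrt(2.0 * np.pi)
Theta = rng.standard_normal((N, d))
Theta *= np.sqrt(d) / np.linalg.norm(Theta, axis=1, keepdims=True)
beta = rng.standard_normal(d)
beta /= np.linalg.norm(beta)

def interp_norm(n, tau):
    """Scaled norm (N/d)||a_min||_2^2 of the minimum-norm interpolator."""
    X = rng.standard_normal((n, d))
    X *= np.sqrt(d) / np.linalg.norm(X, axis=1, keepdims=True)
    y = X @ beta + tau * rng.standard_normal(n)
    a = np.linalg.pinv(sigma(X @ Theta.T / np.sqrt(d))) @ y
    return (N / d) * np.sum(a**2)

# Noiseless interpolation: the norm stays bounded as n grows.
# Noisy interpolation: fitting the noise inflates the norm, roughly linearly in n.
small_clean, big_clean = interp_norm(40, 0.0), interp_norm(160, 0.0)
small_noisy, big_noisy = interp_norm(40, 0.5), interp_norm(160, 0.5)
assert big_noisy > big_clean        # noise costs extra norm
assert big_noisy > small_noisy      # and the cost grows with the sample size
```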
On a high level, our statement echoes the finding of \citet{belkin2018understand}, where the authors study a binary classification problem using kernel machines, and prove that an interpolating classifier requires the RKHS norm to grow at least exponentially with $n^{1/d}$ for fixed dimension $d$. Here instead we consider the high dimensional setting, and we show a linear growth in $\psi_2 = \lim_{d \to \infty} n/d$. \subsection{Kernel regime with noiseless data}\label{sec:kernel_noiseless} We first look at the noiseless setting ($\tau^2=0$) and present the asymptotic power laws for the uniform convergence $\cU_\infty^{(\alpha)}$ over the low-norm ball, the uniform convergence over interpolators $\cT_\infty^{(\alpha)}$ in the low-norm ball, and the minimum norm risk $\cR_\infty$ from \eqref{eqn:cU_inf_alpha}, \eqref{eqn:cT_inf_alpha}, and \eqref{eqn:cR_inf}, respectively. In this setting, the inferred asymptotic power laws of $\cU_\infty^{(\alpha)}(\psi_2)$, $\cT_\infty^{(\alpha)}(\psi_2)$, and $\cR_\infty(\psi_2)$ give (c.f. Figure \ref{fig:kernel_noiseless_utr}) \[ \begin{aligned} \cU_\infty^{(\alpha)}(\psi_2;\tau^2=0) &\sim \psi_2^{-1/2},\\ \cT_\infty^{(\alpha)}(\psi_2;\tau^2=0) &\sim \psi_2^{-1},\\ \cR_\infty(\psi_2;\tau^2=0) &\sim \psi_2^{-2}. \end{aligned} \] As we can see, all three quantities converge to $0$ in the large sample limit, which indicates that uniform convergence is able to explain generalization in this setting; yet uniform convergence bounds do not correctly capture the convergence rate (in terms of $\psi_2$) of the generalization error. \subsection{Kernel regime with noisy data}\label{sec:kernel_noisy} In the noisy setting (fixed $\tau^2>0$), the Bayes risk (minimal possible risk) is $\tau^2$. We study the excess risk and the excess versions of the uniform convergence bounds by subtracting the Bayes risk $\tau^2$. The inferred asymptotic power law gives (c.f.
Figure \ref{fig:kernel_noisy_utr}) \[ \begin{aligned} \cU^{(\alpha)}_\infty(\psi_2;\tau^2) - \tau^2 &\sim \psi_2^{1/2},\\ \cT^{(\alpha)}_\infty (\psi_2;\tau^2) - \tau^2 &\sim 1,\\ \cR_\infty(\psi_2;\tau^2) - \tau^2 &\sim \psi_2^{-1}. \end{aligned} \] In the presence of label noise, the excess risk $\cR_\infty- \tau^2$ vanishes in the large sample limit. In contrast, the classical uniform convergence $\cU_\infty$ becomes vacuous, whereas the uniform convergence over interpolators $\cT_\infty$ converges to a constant, which gives a non-vacuous bound on $\cR_\infty$. The decay of the excess risk of minimum norm interpolators even in the presence of label noise is no longer a surprising phenomenon in high dimensions \cite{liang2019risk, ghorbani2019linearized, bartlett2020benign}. A simple explanation of this phenomenon is that the nonlinear part of the activation function $\sigma$ has an implicit regularization effect \cite{mm19}. The divergence of $\cU^{(\alpha)}_\infty$ in the presence of response noise is partly due to the fact that $\cA_\infty(\psi_2)$ blows up linearly in $\psi_2$ (c.f. Section \ref{sec:kernel_norm}). In fact, we can develop a heuristic intuition that $\cU_\infty(A, \psi_2; \tau^2) \sim A / \psi_2^{1/2}$. Then the scaling $\cU^{(\alpha)}_\infty(\psi_2; \tau^2 > 0) \sim \cA_\infty(\psi_2; \tau^2 > 0) / \psi_2^{1/2} \sim \psi_2^{1/2}$ can be explained away by the power law $\cA_\infty(\psi_2; \tau^2 > 0) \sim \psi_2$. In other words, the complexity of the function space of interpolators grows faster than the sample size $n$, which leads to the failure of uniform convergence in explaining generalization. This echoes the findings in \citet{zico2019unable}. To illustrate the scaling $\cU_\infty(A, \psi_2)\sim A/\psi_2^{1/2}$, we fix all other parameters ($\mu_1, \mu_\star, \tau, F_1$) and examine the dependence of $\cU_\infty$ on $A$ and $\psi_2$.
We choose $A = A(\psi_2)$ according to different power laws $A(\psi_2) \sim \psi_2^p$ for $p = 0, 0.25, 0.5, 0.75, 1$. The inferred asymptotic power law gives $\cU_{\infty}(A(\psi_2), \psi_2)\sim \psi_2^{p-0.5}$ (c.f. Figure \ref{fig:uAfunc}). This provides evidence for the relation $\cU_\infty(A,\psi_2) \sim A/ \psi_2^{1/2}$. \begin{figure}[ht] \begin{center} \includegraphics[width=0.35\textwidth]{figures/uA.pdf} \vskip -0.1in \caption{Uniform convergence $\cU_\infty(A(\psi_2), \psi_2)$ over the norm ball in the kernel regime $\psi_1\rightarrow\infty$. The size of the norm ball $A = A(\psi_2)$ is chosen according to different power laws as shown in the legend.} \label{fig:uAfunc} \end{center} \vskip -0.2in \end{figure} \subsection{Finite-width regime}\label{sec:finite_width} \begin{figure*}[ht] \begin{center} \subfigure[]{ \includegraphics[width=.33\textwidth]{figures/rf_psi1.pdf} \label{fig:rf_utr} } \subfigure[]{ \includegraphics[width=.33\textwidth]{figures/rf_norm.pdf} \label{fig:rf_norm} } \vskip -0.1in \caption{Random feature regression with the number of samples $\psi_2=1.5$, activation function $\sigma(x) = \max(0, x)- 1/\sqrt{2\pi}$, target function $f_d(\bx)=\<\bbeta, \bx\>$ with $\|\bbeta\|_2^2=1$, and noise level $\tau^2=0.1$. The horizontal axes are the number of features $\psi_1$. The solid lines are the algebraic expressions derived in the main theorem (Theorem \ref{thm:main_theorem}). The dashed lines are the function $\psi_1^p$ in the log scale. Figure \ref{fig:rf_utr}: Comparison of the classical uniform convergence in the norm ball of size level $\alpha = 1.5$ (Eq. \eqref{eqn:cU_alpha}, blue curve), the uniform convergence over interpolators in the same norm ball (Eq. \eqref{eqn:cT_alpha}, red curve), and the risk of the minimum norm interpolator (Eq. \eqref{eqn:cR}, yellow curve). Figure \ref{fig:rf_norm}: Minimum norm required to interpolate the training data (Eq.
\eqref{eqn:cA}).} \label{fig:rf_psi1} \end{center} \vskip -0.2in \end{figure*} Here we shift attention to the dependence of $\cU$, $\cT$, and $\cR$ on the number of features $\psi_1$. We fix the number of training samples $\psi_2$, the noise level $\tau^2 > 0$, and the norm level $\alpha > 1$ as before. Since $\cU^{(\alpha)}\rightarrow\cU^{(\alpha)}_\infty$, $\cT^{(\alpha)}\rightarrow\cT^{(\alpha)}_\infty$, and $\cR\rightarrow\cR_\infty$ as $\psi_1\rightarrow\infty$, we look at the dependence of $\cU^{(\alpha)}-\cU^{(\alpha)}_\infty$, $\cT^{(\alpha)}-\cT^{(\alpha)}_\infty$, and $\cR-\cR_\infty$ on $\psi_1$. The inferred asymptotic power law gives (c.f. Figure \ref{fig:rf_psi1}) \[ \begin{aligned} \cU^{(\alpha)}(\psi_1, \psi_2) - \cU^{(\alpha)}_\infty(\psi_2) &\sim \psi_1^{-1},\\ \cT^{(\alpha)}(\psi_1, \psi_2) - \cT^{(\alpha)}_\infty(\psi_2) &\sim \psi_1^{-1},\\ \cR(\psi_1, \psi_2) - \cR_\infty(\psi_2) &\sim \psi_1^{-1}, \\ \cA(\psi_1, \psi_2) - \cA_\infty(\psi_2) &\sim \psi_1^{-1}. \end{aligned} \] Note that large $\psi_1$ should be interpreted as the model being heavily overparametrized (a large-width network). This asymptotic power law implies that both uniform convergence bounds correctly predict the decay of the test error as the number of features increases. \paragraph{Remark on power laws.} To derive the power laws in this section, instead of working with the analytical formulae directly, we adopt an empirical approach: we perform linear fits, with the inferred slopes, to the numerical evaluations of the expressions defined in Definition \ref{def:formula_U_T} in the log-log scale. Since these fits are applied to the analytical formulae and involve no randomness, they reliably indicate the true decay rates.
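The fitting procedure itself is elementary; on a synthetic curve with a known exponent and a lower-order correction, a degree-one fit in log-log scale recovers the exponent (a sketch of the method only, not of our actual formulae):

```python
import numpy as np

# Recover an asymptotic power law X(psi) ~ psi^p via a linear fit in log-log
# scale, on a synthetic curve with exponent -3/2 and a decaying correction.
psi = np.logspace(1, 4, 40)
X = 2.0 * psi**-1.5 * (1.0 + 0.5 / psi)

slope, _ = np.polyfit(np.log(psi), np.log(X), 1)
assert abs(slope - (-1.5)) < 0.02   # inferred slope matches the true exponent
```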
\section{Introduction}\label{sec:introduction} \begin{figure*}[ht] \begin{center} \subfigure[Noiseless response ($\tau^2=0$)]{ \includegraphics[width=.31\textwidth]{figures/kernel_noiseless_psi2.pdf} \label{fig:kernel_noiseless_utr} } \subfigure[Noisy response ($\tau^2=0.1$)]{ \includegraphics[width=.31\textwidth]{figures/kernel_noisy_psi2.pdf} \label{fig:kernel_noisy_utr} } \subfigure[Minimum norm $\cA_\infty(\psi_2)$]{ \includegraphics[width=.31\textwidth]{figures/kernel_norm.pdf} \label{fig:kernel_norm} } \vskip -0.1in \caption{Random feature regression with activation function $\sigma(x) = \max(0, x)- 1/\sqrt{2\pi}$, target function $f_d(\bx)=\<\bbeta, \bx\>$ with $\|\bbeta\|_2^2=1$, and $\psi_1=\infty$. The horizontal axes are the number of samples $\psi_2 = \lim_{d \to \infty} n/d$. The solid lines are the algebraic expressions derived in the main theorem (Theorem \ref{thm:main_theorem}). The dashed lines are the function $\psi_2^p$ in the log scale. Figure \ref{fig:kernel_noiseless_utr} and \ref{fig:kernel_noisy_utr}: Comparison of the classical uniform convergence in the norm ball of size level $\alpha = 1.5$ (Eq. \eqref{eqn:cU_inf_alpha}, blue curve), the uniform convergence over interpolators in the same norm ball (Eq. \eqref{eqn:cT_inf_alpha}, red curve), and the risk of the minimum norm interpolator (Eq. \eqref{eqn:cR_inf}, yellow curve). Figure \ref{fig:kernel_norm}: Minimum norm required to interpolate the training data (Eq. \eqref{eqn:cA_inf}).} \label{fig:kernel_noiseless} \end{center} \vskip -0.2in \end{figure*} Uniform convergence, the supremum difference between the training and test errors over a certain function class, is a powerful tool in statistical learning theory for understanding the generalization performance of predictors. Bounds on uniform convergence usually take the form of $\sqrt{\text{complexity}/n}$ \cite{vapnik95}, where the numerator represents the complexity of the function class, and $n$ is the sample size.
If such a bound is tight, then the predictor is not going to generalize well whenever the function class complexity is too large. However, recent theoretical and empirical work has shown that overparametrized models such as deep neural networks can generalize well, even in the interpolating regime in which the model exactly memorizes the data~\citep{zhang2016understanding,belkin2019reconciling}. As interpolation (especially of noisy training data) usually requires the predictor to lie in a function class with high complexity, this challenges the classical methodology of using uniform convergence to bound generalization. For example, \citet{belkin2018understand} showed that interpolating noisy data with kernel machines requires an exponentially large norm in fixed dimensions. The large norm would effectively make the uniform convergence bound $\sqrt{\text{complexity}/n}$ vacuous. \citet{zico2019unable} empirically measured the spectral-norm bound of~\citet{NIPS2017_b22b257a} and found that for interpolators, the bound increases with $n$, and is thus vacuous at large sample size. Towards a more fine-grained understanding, we ask the following \begin{quote} {\bf Question}: How large is the gap between uniform convergence and the actual generalization errors for interpolators? \end{quote} In this paper, we study this gap in the random features model from \citet{aliben}. This model can be interpreted as a linearized version of two-layer neural networks \citep{jacot2018neural} and exhibits some properties similar to deep neural networks, such as double descent \citep{belkin2019reconciling}. We consider two types of uniform convergence in this model: \begin{itemize} \item $\cU:$ The classical uniform convergence over a norm ball of radius $\sqrt{A}$. \item $\cT:$ The modified uniform convergence over the same norm ball of radius $\sqrt{A}$, but restricted to interpolators, proposed in \citet{zhou2021uniform}.
\end{itemize} Our main theoretical result consists of exact asymptotic expressions for the two versions of uniform convergence, $\cU$ and $\cT$, in terms of the number of features, the sample size, and other relevant parameters of the random features model. Under some assumptions, we prove that the actual uniform convergence quantities concentrate around these asymptotic counterparts. To further compare these uniform convergence bounds with the actual generalization error of interpolators, we adopt \begin{itemize} \item $\cR:$ the generalization error (test error) of the minimum norm interpolator. \end{itemize} from \citet{mm19}. To make $\cU$, $\cT$, $\cR$ comparable with each other, we choose the radius of the norm ball $\sqrt{A}$ to be slightly larger than the norm of the minimum norm interpolator. Our limiting $\cU$, $\cT$ (with the norm ball of size $\sqrt{A}$ chosen as above), and $\cR$ depend on two main variables: $\psi_1=\lim_{d\to\infty} N/d$ representing the number of parameters, and $\psi_2=\lim_{d\to\infty} n/d$ representing the sample size. Our formulae for $\cU, \cT$ and $\cR$ yield three major observations. \begin{enumerate} \item \textbf{Sample Complexity in the Noisy Regime:} When the training data contains label noise (with variance $\tau^2$), we find that the norm required to interpolate the noisy training set grows linearly with the number of samples $\psi_2$ (green curve in Figure \ref{fig:kernel_norm}). As a result, the standard uniform convergence bound $\cU$ grows with $\psi_2$ at the rate $\cU\sim\psi_2^{1/2}$, leading to a vacuous bound on the generalization error (Figure \ref{fig:kernel_noisy_utr}). In contrast, in the same setting, we show that the uniform convergence over interpolators $\cT\sim 1$ is constant for large $\psi_2$, and is only order one larger than the actual generalization error $\cR\sim 1$. Further, the excess versions scale as $\cT-\tau^2\sim 1$ and $\cR-\tau^2\sim \psi_2^{-1}$.
\item \textbf{Sample Complexity in the Noiseless Regime:} When the training set does not contain label noise, the generalization error $\cR$ decays faster: $\cR\sim\psi_2^{-2}$. In this setting, we find that the classical uniform convergence $\cU\sim\psi_2^{-1/2}$ and the uniform convergence over interpolators $\cT\sim\psi_2^{-1}$. This shows that, even when the classical uniform convergence already gives a non-vacuous bound, there still exists a sample complexity separation among the classical uniform convergence $\cU$, the uniform convergence over interpolators $\cT$, and the actual generalization error $\cR$. \item \textbf{Dependence on Number of Parameters:} In addition to the results on $\psi_2$, we find that $\cU$, $\cT$, and $\cR$ decay to their limiting values at the same rate $1/\psi_1$. This shows that both $\cU$ and $\cT$ correctly predict that as the number of features $\psi_1$ grows, the risk $\cR$ decreases. \end{enumerate} These results provide a more precise understanding of uniform convergence versus the actual generalization errors, under a natural model that captures much of the essence of nonlinear overparametrized learning. \subsection{Related work} \paragraph{Classical theory of uniform convergence.} Uniform convergence dates back to the empirical process theory of \citet{givenko1933} and \citet{cantelli1933}. Application of uniform convergence to the framework of empirical risk minimization usually proceeds through Gaussian and Rademacher complexities \cite{bartlettAndMendelson, bartlett2005} or VC and fat-shattering dimensions \cite{vapnik95, bartlett1998the}.
\paragraph{Modern take on uniform convergence.} A large volume of recent work has shown that overparametrized interpolators can generalize well \cite{zhang2016understanding, belkin18a, behnm2015search, advani2020high, bartlett2020benign,belkin2018overfit, pmlr-v89-belkin19a, Nakkiran2020Deep, yang2020rethinking, belkin2019reconciling, mm19,spigler2019jamming}, suggesting that the classical uniform convergence theory may not be able to explain generalization in these settings \cite{zhang2016understanding}. Numerous efforts have been made to remedy the original uniform convergence theory using the Rademacher complexity \cite{pmlr-v40-Neyshabur15, pmlr-v75-golowich18a, neyshabur2018the,NIPS2009_f7664060,NEURIPS2019_cf9dc5e4}, the compression approach \cite{pmlr-v80-arora18b}, covering numbers \cite{NIPS2017_b22b257a}, derandomization \cite{negrea2020defense} and PAC-Bayes methods \cite{dziugaite2017computing,neyshabur2018a,nagarajan2018deterministic}. Despite the progress along this line, \citet{zico2019unable,bartlett2021failures} showed that in certain settings ``any uniform convergence'' bounds cannot explain generalization. Among the pessimistic results, \citet{zhou2021uniform} proposed that uniform convergence over an interpolating norm ball could explain generalization in an overparametrized linear setting. Our results show that in the nonlinear random features model, there is a sample complexity gap between the excess risk and the uniform convergence over interpolators proposed in \citet{zhou2021uniform}. \paragraph{Random features model and kernel machines.} A number of papers studied the generalization error of kernel machines \cite{caponnetto2007optimal, jacot2020kernel, wainwright2019high} and random features models \cite{rahimi2009weighted, rudi2017generalization, bach2015equivalence, ma2020towards} in non-asymptotic settings, in which the generalization error bounds depend on the RKHS norm.
However, these bounds cannot characterize the generalization error of interpolating solutions. In the last three years, a few papers \cite{belkin2018understand, liang2020just, liang2019risk} showed that interpolating solutions of kernel ridge regression can also generalize well in high dimensions. Recently, a few papers studied the generalization error of random features models in the proportional asymptotic limit in various settings \cite{hastie2019surprises, louart2018random, mm19, montanari2019generalization, gerace2020generalisation, d2020double, yang2020rethinking, adlam2020understanding,dhifallah2020precise,hu2020universality}, where they precisely characterized the asymptotic generalization error of interpolating solutions, and showed that the double-descent phenomenon \cite{belkin2019reconciling,advani2020high} exists in these models. A few other papers studied the generalization error of random features models in polynomial scaling limits \cite{ghorbani2019linearized, ghorbani2020neural, mei2021generalization}, where other interesting behaviors were shown. Precise asymptotics for the Rademacher complexity of some \emph{underparameterized} learning models were calculated using statistical physics heuristics in \citet{abbaras2020rademacher}. In our work, we instead focus on the uniform convergence of the \emph{overparameterized} random features model. \section{Main theorem} In this section, we state the main theorem, which presents the asymptotic expressions for the uniform convergence bounds. We will start by stating a few assumptions, which fall into two categories: Assumptions \ref{ass:linear_target}, \ref{ass:activation}, and \ref{ass:linear}, which specify the setup of the learning task; and Assumptions \ref{ass:overline_U_invertable} and \ref{ass:exchange_limit}, which are technical in nature. \subsection{Modeling assumptions} The three assumptions in this subsection specify the target function, the activation function, and the limiting regime.
\begin{assumption}[Linear target function]\label{ass:linear_target} We assume that $f_d\in L^2(\S^{d-1}(\sqrt{d}))$ with $f_d(\bx) = \< \bbeta^{(d)}, \bx\>$, where $\bbeta^{(d)} \in \R^d$ and \[ \lim_{d \to \infty} \| \bbeta^{(d)} \|_2^2 = \normf_1^2. \] \end{assumption} We remark here that, if we are satisfied with heuristic formulae instead of rigorous results, we can also deal with non-linear target functions, where the additional nonlinear part effectively increases the noise level $\tau^2$. This intuition was first developed in \cite{mm19}. \begin{assumption}[Activation function]\label{ass:activation} Let $\sigma \in C^2(\R)$ with $\vert \sigma(u) \vert, \vert\sigma'(u)\vert, \vert\sigma''(u)\vert \le c_0 e^{c_1 \vert u \vert}$ for some constants $c_0, c_1 < \infty$. Define \[ \ob_0 \equiv \E[\sigma(G)], ~ \ob_1 \equiv \E[G \sigma(G)], ~ \ob_\star^2 \equiv \E[\sigma(G)^2] - \ob_0^2 - \ob_1^2, \] where the expectation is with respect to $G \sim \cN(0, 1)$. Assume $\ob_0 = 0$ and $0 < \ob_1^2, \ob_\star^2 < \infty$. \end{assumption} The assumption that $\ob_0 = 0$ is not essential and can be relaxed with a certain amount of additional technical work. \begin{assumption}[Proportional limit]\label{ass:linear} Let $N = N(d)$ and $n = n(d)$ be sequences indexed by $d$. We assume that the following limits exist in $(0, \infty)$: \[ \lim_{d \to \infty} N(d) / d = \psi_1, ~~~~~~~ \lim_{d \to \infty} n(d) / d = \psi_2. \] \end{assumption} \subsection{Technical assumptions} We will make some assumptions on the properties of certain random matrices that appear in the proof. These assumptions are technical, and we believe they can be proved under more natural conditions; however, proving them requires substantial technical work, which we defer to future research. We note that these assumptions are often implicitly required in papers that present intuitions using heuristic derivations; we instead ensure mathematical rigor by stating them explicitly.
See Section \ref{sec:discuss} for further discussion of these assumptions. We begin by defining some random matrices that are the key quantities used in the proof of our main results. \begin{definition}[Block matrix and log-determinant]\label{def:log_determinant_A} Let $\bX = (\bx_1, \ldots, \bx_n)^\sT \in \R^{n \times d}$ and $\bTheta = (\btheta_1, \ldots, \btheta_N)^\sT \in \R^{N \times d}$, where $\bx_i, \btheta_a \sim_{iid} \text{\normalfont Unif}(\S^{d-1}(\sqrt{d}))$, as mentioned in Section \ref{sec:model_setup}. Define \begin{align} \bZ &= \frac{1}{\sqrt{d}}\sigma\left( \frac{\bX\bTheta^\sT}{\sqrt{d}} \right),~~ \bZ_1 = \frac{\mu_1}{d}\bX\bTheta^\sT, \nonumber \\ \bQ &= \frac{\bTheta\bTheta^\sT}{d},~~~~~~~~~~~~~~~~~~~ \bH = \frac{\bX\bX^\sT}{d},~ \label{eqn:def_Q_H_Z_Z1} \end{align} and for $\bq = (s_1, s_2, t_1, t_2, p) \in \R^5$, we define \[ \bA(\bq) \equiv \begin{bmatrix} s_1 \id_N + s_2 \bQ & \bZ^\sT + p \bZ_1^\sT \\ \bZ + p \bZ_1 & t_1 \id_n + t_2 \bH\\ \end{bmatrix}. \] Finally, we define the log-determinant of $\bA(\bq)$ by \[ G_d(\xi; \bq) \equiv \frac{1}{d} \sum_{i = 1}^{N + n} \Log \lambda_i\Big(\bA(\bq) - \xi \id_{n + N}\Big). \] Here $\Log$ is the complex logarithm with branch cut on the negative real axis and $\{\lambda_i(\bA)\}_{i \in [n + N]}$ is the set of eigenvalues of $\bA$. \end{definition} The following assumption states that for properly chosen $\lambda$, some specific random matrices are well-conditioned. As we will see in the next section, this ensures that the dual problems in Eq. (\ref{eqn:uniform_lag}) and (\ref{eqn:uniform_zeroloss_lag}) are bounded with high probability. \begin{assumption}[Invertibility]\label{ass:overline_U_invertable} Consider the asymptotic limit specified in Assumption \ref{ass:linear} and the activation function in Assumption \ref{ass:activation}. We assume the following.
\begin{itemize} \item Denote $\overline \bU(\lambda) = \ob_1^2 \bQ + (\ob_\star^2 - \psi_1 \lambda ) \id_N - \psi_2^{-1}\bZ^\sT \bZ$. There exist $\eps > 0$ and $\lambdau =\lambdau(\psi_1, \psi_2, \ob_1^2, \ob_\star^2)$, such that for any fixed $\lambda \in (\lambdau, \infty) \equiv \Lambdau$, with high probability, we have \[ \overline \bU(\lambda) \preceq - \eps \id_N. \] \item Denote $\overline {\boldsymbol T}(\lambda) = {\mathsf P}_{{\rm null}} [\ob_1^2 \bQ + (\ob_\star^2 - \psi_1 \lambda ) \id_N ] {\mathsf P}_{{\rm null}}$, where ${\mathsf P}_{{\rm null}} = \id_N - \bZ^\dagger \bZ$. There exist $\eps > 0$ and $\lambdat =\lambdat(\psi_1, \psi_2, \ob_1^2, \ob_\star^2)$, such that for any fixed $\lambda \in (\lambdat, \infty) \equiv \Lambdat$, with high probability, we have \[ \overline {\boldsymbol T}(\lambda) \preceq - \eps {\mathsf P}_{{\rm null}}, \] and $\bZ$ has full row rank with $\sigma_{\min}(\bZ) \ge \eps$ (which requires $\psi_1 > \psi_2$). \end{itemize} \end{assumption} The following assumption states that the order of limits and derivatives of $G_d$ can be exchanged. \begin{assumption}[Exchangeability of limits]\label{ass:exchange_limit} We denote \[ \begin{aligned} \cSu =&~ \{ (\mu_\star^2 - \lambda \psi_1, \mu_1^2, \psi_2, 0,0; \psi_1, \psi_2): \lambda \in (\lambdau, \infty) \},\\ \cSt =&~ \{ (\mu_\star^2 - \lambda \psi_1, \mu_1^2, 0, 0,0; \psi_1, \psi_2): \lambda \in (\lambdat, \infty) \}, \end{aligned} \] where $\lambdau$ and $\lambdat$ are given in Assumption \ref{ass:overline_U_invertable} and depend on $(\psi_1, \psi_2, \ob_1^2, \ob_\star^2)$.
For any fixed $(\bq ; \bpsi) = (s_1, s_2, t_1, t_2, p; \psi_1, \psi_2) \in \cSu \cup \cSt$, in the asymptotic limit as in Assumption \ref{ass:linear}, for $k = 1, 2$, we have \[ \begin{aligned} \lim_{u \to 0_+} \lim_{d \to \infty} \E[ \nabla_\bq^k G_d({\mathrm i} u; \bq)] = \lim_{u \to 0_+ } \nabla_\bq^k \Big( \lim_{d \to \infty} \E[ G_d({\mathrm i} u; \bq)] \Big),\\ \end{aligned} \] and \[ \begin{aligned} \Big \| \nabla_\bq^k G_d(0; \bq) - \lim_{u \to 0+} \lim_{d \to \infty} \E[\nabla_\bq^k G_d({\mathrm i} u; \bq)] \Big\| = o_{d, \P}(1), \end{aligned} \] where $o_{d, \P}(1)$ stands for convergence to $0$ in probability. \end{assumption} \subsection{From constrained forms to Lagrangian forms} Before we give the asymptotics of $U$ and $T$ as defined in Eq. \eqref{eqn:uniform} and \eqref{eqn:uniform_zeroloss}, we first consider their dual forms, which are more amenable to analysis. These are given by \begin{align} \overline U(\lambda, N, n, d) \equiv&~ \sup_{\ba} \Big[ R(\ba) - \what R_n(\ba) - \psi_1 \lambda \| \ba \|_2^2 \Big], \label{eqn:uniform_lag}\\ \overline T(\lambda, N, n, d) \equiv&~ \sup_{\ba}\inf_{\bmu} \Big[ R(\ba) - \lambda\psi_1 \| \ba \|_2^2 \label{eqn:uniform_zeroloss_lag}\\ &~ + 2 \< \bmu, \bZ \ba - \by/\sqrt{d}\ \>\Big]. \nonumber \end{align} The proposition below shows that strong duality holds between the constrained forms and their dual forms. \begin{proposition}[Strong Duality]\label{prop:strong_duality} For any $A > 0$, we have \[ \begin{aligned} U(A, N, n, d) =&~ \inf_{\lambda \ge 0} \Big[ \overline U(\lambda, N, n, d) + \lambda A \Big]. \\ \end{aligned} \] Moreover, for any $A > \psi_1 \|\ba_{\min} \|_2^2$, we have \[ \begin{aligned} T(A, N, n, d) =&~ \inf_{\lambda \ge 0} \Big[ \overline T(\lambda, N, n, d) + \lambda A \Big].
\\ \end{aligned} \] \end{proposition} The proof of Proposition \ref{prop:strong_duality} is based on a classical result which states that strong duality holds for quadratic programs with a single quadratic constraint (Appendix B.1 in \citet{boyd_vandenberghe_2004}). \subsection{Expressions of $\cU$ and $\cT$} Proposition \ref{prop:strong_duality} transforms our task from computing the asymptotics of $U$ and $T$ to computing those of $\overline U$ and $\overline T$. The latter are given by the following proposition. \begin{proposition}\label{prop:concentration_lag} Let the target function $f_d$ satisfy Assumption \ref{ass:linear_target}, the activation function $\sigma$ satisfy Assumption \ref{ass:activation}, and $(N, n, d)$ satisfy Assumption \ref{ass:linear}. In addition, let Assumptions \ref{ass:overline_U_invertable} and \ref{ass:exchange_limit} hold. Then for $\lambda \in \Lambdau$, with high probability the supremum in Eq. (\ref{eqn:uniform_lag}) is achieved at a unique point $\overline \ba_U(\lambda)$, and we have \[ \begin{aligned} \overline U(\lambda, N, n, d) = \overline\cU(\lambda, \psi_1, \psi_2) + o_{d, \P}(1),\\ \psi_1\|\overline\ba_U(\lambda)\|_2^2 = \cA_U(\lambda, \psi_1, \psi_2) + o_{d, \P}(1).\\ \end{aligned} \] Moreover, for any $\lambda \in \Lambdat$, with high probability the supremum in Eq. (\ref{eqn:uniform_zeroloss_lag}) is achieved at a unique point $\overline \ba_T(\lambda)$, and we have \[ \begin{aligned} \overline T(\lambda, N, n, d) = \overline\cT(\lambda, \psi_1, \psi_2) + o_{d, \P}(1),\\ \psi_1\|\overline\ba_T(\lambda)\|_2^2 = \cA_T(\lambda, \psi_1, \psi_2) + o_{d, \P}(1).\\ \end{aligned} \] The functions $\overline \cU, \overline \cT, \cA_U, \cA_T$ are given in Definition \ref{def:analytic_expression_overline} in Appendix \ref{sec:analytic_expression_overline}.
\end{proposition} \begin{remark}\label{rmk:heuristic_def} Here we present the heuristic formulae for $\overline \cU, \overline \cT, \cA_U, \cA_T$, and defer their rigorous definition to the appendix. Define a function $g_0(\bq; \bpsi)$ by \begin{equation}\label{eqn:def_g0_heuristic} \begin{aligned} &g_0(\bq; \bpsi) \equiv~ {\rm ext}_{z_1, z_2} \Big[ \log\big( (s_2 z_1 + 1)(t_2 z_2 + 1) \\ &- \ob_1^2 (1 + p)^2 z_1 z_2 \big) - \ob_\star^2 z_1 z_2 + s_1 z_1 + t_1 z_2 \\ &- \psi_1 \log (z_1 / \psi_1) - \psi_2 \log (z_2 / \psi_2) - \psi_1 - \psi_2 \Big], \end{aligned} \end{equation} where ${\rm ext}$ stands for setting $z_1$ and $z_2$ to be stationary (a common notation in statistical physics heuristics). We then take \[ \begin{aligned} \overline \cU(\lambda, \bpsi) = \normf_1^2 ( 1 - \mu_1^2 \gamma_{s_2} - \gamma_{p} - \gamma_{t_2} ) + \tau^2( 1 - \gamma_{t_1}), \\ \end{aligned} \] where $\gamma_a \equiv \partial_a g_0(\bq; \bpsi) \vert_{\bq = (\mu_\star^2 - \lambda\psi_1, \mu_1^2, \psi_2, 0,0)}$ for the symbol $a \in \{s_1, s_2, t_1, t_2, p \}$, and \[ \begin{aligned} \overline \cT(\lambda, \bpsi) = \normf_1^2 ( 1 - \mu_1^2 \nu_{s_2} - \nu_{p} - \nu_{t_2} ) + \tau^2( 1 - \nu_{t_1}), \end{aligned} \] where we define $\nu_a \equiv \partial_a g_0(\bq; \bpsi) \vert_{\bq = (\mu_\star^2 - \lambda\psi_1, \mu_1^2, 0, 0,0)}$ for symbols $a \in \{s_1, s_2, t_1, t_2, p \}$. Finally, $\cA_U = - \partial_\lambda \overline \cU$ and $\cA_T = - \partial_\lambda \overline \cT$. By a further simplification, we can express these formulae as rational functions of $(\ob_1^2, \ob_\star^2, \lambda, \psi_1, \psi_2, m_1, m_2)$, where $(m_1, m_2)$ is the stationary point of the variational problem in Eq. (\ref{eqn:def_g0_heuristic}) (c.f. Remark \ref{rmk:simplification}). \end{remark} We next define $\cU$ and $\cT$ to be dual forms of $\overline \cU$ and $\overline \cT$.
\begin{definition}[Formula for uniform convergence bounds]\label{def:formula_U_T} For $A \in \Gamma_U \equiv \{\cA_U(\lambda, \psi_1, \psi_2): \lambda \in \Lambdau \}$, define \[ \begin{aligned} \cU(A, \psi_1, \psi_2) \equiv&~ \inf_{\lambda \ge 0} \Big[ \overline \cU(\lambda, \psi_1, \psi_2) + \lambda A \Big]. \\ \end{aligned} \] For $A \in \Gamma_T \equiv \{\cA_T(\lambda, \psi_1, \psi_2): \lambda \in \Lambdat \}$, define \[ \begin{aligned} \cT(A, \psi_1, \psi_2) \equiv&~ \inf_{\lambda \ge 0} \Big[ \overline \cT(\lambda, \psi_1, \psi_2)+ \lambda A \Big].\\ \end{aligned} \] \end{definition} Finally, we are ready to present the main theorem of this paper, which states that the uniform convergence bounds $U(A, N, n, d)$ and $T(A, N, n, d)$ converge to the formulae presented in the definition above. \begin{theorem}\label{thm:main_theorem} Let the same assumptions as in Proposition \ref{prop:concentration_lag} hold. For any $A \in \Gamma_U$, we have \begin{align}\label{eqn:main_U} U(A, N, n, d) = \cU(A, \psi_1, \psi_2) + o_{d, \P}(1), \end{align} and for $A \in \Gamma_T$ we have \begin{align}\label{eqn:main_T} T(A, N, n, d) = \cT(A, \psi_1, \psi_2) + o_{d, \P}(1), \end{align} where the functions $\cU$ and $\cT$ are given in Definition \ref{def:formula_U_T}. \end{theorem} The proof of this theorem is contained in Section \ref{sec:proof_main_thm}. \section{Proof of Proposition \ref{prop:strong_duality}} The proof of Proposition \ref{prop:strong_duality} contains two parts: standard uniform convergence $U$ and uniform convergence over interpolators $T$. The proofs for the two cases are essentially the same, both based on the fact that strong duality holds for a quadratic program with a single quadratic constraint (c.f. \citet{boyd_vandenberghe_2004}, Appendix B.1). \subsection{Standard uniform convergence $U$} Recall that the uniform convergence bound $U$ is defined as in Eq. \eqref{eqn:uniform}: \[ U(A, N, n, d) =~ \sup_{(N/d) \| \ba \|_2^2 \le A} \Big( R(\ba) - \what R_n(\ba) \Big).
\] Since the maximization problem in \eqref{eqn:uniform} is a quadratic program with a single quadratic constraint, strong duality holds, and we have \[ \sup_{(N/d) \| \ba \|_2^2 \le A} R(\ba) - \what R_n(\ba) = \inf_{\lambda\ge0} \sup_{\ba} \Big[ R(\ba) - \what{R}_n(\ba) - \psi_1\lambda(\|\ba\|_2^2 - \psi_1^{-1}A) \Big]. \] Finally, by the definition of $\overline U$ as in Eq. (\ref{eqn:uniform_lag}), we get \[ U(A, N, n, d) = \inf_{\lambda\ge0}\Big[ \overline U(\lambda, N, n, d) + \lambda A\Big]. \] \subsection{Uniform convergence over interpolators $T$} Without loss of generality, we consider the regime where $N > n$. Recall that the uniform convergence over interpolators $T$ is defined as in Eq. \eqref{eqn:uniform_zeroloss}: \[ T(A, N, n, d)= \sup_{(N/d) \| \ba \|_2^2 \le A, \what R_n(\ba) = 0} R(\ba). \] When the set $\{ \ba \in \R^N : (N / d) \| \ba \|_2^2 \le A, \what R_n(\ba) = 0 \}$ is empty, we have \[ T(A, N, n, d) =~ \inf_{\lambda \ge 0} \Big[ \overline T(\lambda, N, n, d) + \lambda A \Big]=-\infty. \] In the following, we assume that the set $\{ \ba \in \R^N : (N / d) \| \ba \|_2^2 \le A, \what R_n(\ba) = 0 \}$ is non-empty, i.e., there exists $\ba \in \R^N$ such that $\what R_n(\ba) = 0$ and $(N/ d) \| \ba \|_2^2 \le A$. Let $m$ be the dimension of the null space of $\bZ \in \R^{n \times N}$, i.e., $m = \dim(\{ \bu: \bZ \bu = \bzero \})$. Since $\bZ \in \R^{n \times N}$ and $N > n$, we must have $N - n \le m \le N$. We let $\bR \in \R^{N \times m}$ be a matrix whose column space is the null space of $\bZ$. Let $\ba_0$ be the minimum-norm interpolating solution (whose existence is guaranteed by the assumption that $\{ \ba \in \R^N: \what R_n(\ba) = 0\}$ is non-empty) \[ \ba_0 = \lim_{\lambda \to 0_+} \arg \min_{\ba \in \R^N} \Big[ \what R_n(\ba) + \lambda \| \ba \|_2^2 \Big] = \arg \min_{\ba \in \R^N: \what R_n(\ba) = 0} \| \ba \|_2^2.
\] Then we have \[ \{\ba\in\R^N: \what R_n(\ba) = 0\} = \{\ba\in\R^N: \by = \sqrt{d}\bZ\ba\} = \{\bR\bu + \ba_0: \bu\in\R^m\}. \] Then $T$ can be rewritten as a maximization problem in terms of $\bu$: \[ \begin{aligned} \sup_{(N/d) \| \ba \|_2^2 \le A, \what R_n(\ba) = 0} R(\ba) =&~ \sup_{\bu \in \R^{m}: \| \bR \bu + \ba_0 \|_2^2 \le \psi_1^{-1} A} \Big[\<\bR \bu + \ba_0, \bU (\bR \bu + \ba_0)\> - 2\<\bR \bu + \ba_0, \bv\> + \E(y^2) \Big]\\ =&~ R(\ba_0) +\sup_{\bu \in \R^{m}: \| \bR \bu + \ba_0 \|_2^2 \le \psi_1^{-1} A} \Big[\<\bu, \bR^\sT \bU \bR \bu\> + 2\<\bR \bu, \bU \ba_0 - \bv\>\Big]. \end{aligned} \] Note that this optimization problem is strictly feasible precisely when $A>(N/d)\|\ba_0\|_2^2$, as assumed in Proposition \ref{prop:strong_duality}. By strong duality of quadratic programs with a single quadratic constraint, we have \[ \begin{aligned} &~\sup_{\bu \in \R^{m}: \| \bR \bu + \ba_0 \|_2^2 \le \psi_1^{-1} A} \Big[\<\bu, \bR^\sT \bU \bR \bu\> + 2\<\bR \bu, \bU \ba_0 - \bv\>\Big] \\ =&~ \inf_{\lambda\geq 0}\sup_{\bu \in \R^{m}} \Big[\<\bu, \bR^\sT \bU \bR \bu\> + 2\<\bR \bu, \bU \ba_0 - \bv\> - \lambda(\psi_1\|\bR\bu+\ba_0\|_2^2 - A)\Big]. \end{aligned} \] The maximization over $\bu$ can be restated as a maximization over $\ba$: \[ \begin{aligned} R(\ba_0) + \sup_{\bu \in \R^{m}} \Big[\<\bu, \bR^\sT \bU \bR \bu\> + 2\<\bR \bu, \bU \ba_0 - \bv\> - \lambda\psi_1\|\bR\bu+\ba_0\|_2^2 \Big] = \sup_{\ba: \what R_n(\ba) = 0} \Big[R(\ba) - \lambda\psi_1\|\ba\|_2^2\Big]. \end{aligned} \] Moreover, since $\sup_{\ba: \what R_n(\ba) = 0} [ R(\ba) - \lambda\psi_1\|\ba\|_2^2 ]$ is a quadratic program with linear constraints, we have \[ \sup_{\ba: \what R_n(\ba) = 0} \Big[ R(\ba) - \lambda\psi_1\|\ba\|_2^2 \Big] = \sup_{\ba}\inf_{\bmu} \Big[ R(\ba) - \lambda\psi_1 \| \ba \|_2^2 + 2 \< \bmu, \bZ \ba - \by/\sqrt{d}\ \>\Big]. \] Combining all the equalities above with the definition of $\overline T$ in Eq.
(\ref{eqn:uniform_zeroloss_lag}), we have \[ \begin{aligned} T(A, N, n, d) =&~ \sup_{(N/d) \| \ba \|_2^2 \le A, \what R_n(\ba) = 0} R(\ba) \\ =&~ R(\ba_0) +\sup_{\bu \in \R^{m}: \| \bR \bu + \ba_0 \|_2^2 \le \psi_1^{-1} A} \Big[ \<\bu, \bR^\sT \bU \bR \bu\> + 2\<\bR \bu, \bU \ba_0 - \bv\> \Big]\\ =&~R(\ba_0) + \inf_{\lambda\geq0}\sup_{\bu} \Big[\<\bu, \bR^\sT \bU \bR \bu\> + 2\<\bR \bu, \bU \ba_0 - \bv\> - \lambda(\psi_1\|\bR\bu+\ba_0\|_2^2 - A) \Big]\\ =&~\inf_{\lambda\geq0} \Big\{ \lambda A + R(\ba_0) + \sup_{\bu} \Big[\<\bu, \bR^\sT \bU \bR \bu\> + 2\<\bR \bu, \bU \ba_0 - \bv\> - \lambda\psi_1\|\bR\bu+\ba_0\|_2^2\Big] \Big\} \\ =&~ \inf_{\lambda\geq0} \Big\{\lambda A +\sup_{\ba: \what R_n(\ba) = 0} \Big[ R(\ba) - \lambda\psi_1\|\ba\|_2^2 \Big] \Big\}\\ =&~ \inf_{\lambda\geq0} \Big\{ \lambda A +\sup_{\ba}\inf_{\bmu} \Big[ R(\ba) - \lambda\psi_1 \| \ba \|_2^2 + 2 \< \bmu, \bZ \ba - \by/\sqrt{d}\ \>\Big] \Big\}\\ =&~ \inf_{\lambda\geq 0 } \Big[ \overline T(\lambda, N, n, d) + \lambda A\Big]. \end{aligned} \] This concludes the proof. \section{Proof of Proposition \ref{prop:concentration_lag}} Note that the definitions of $\overline U$ and $\overline T$ in Eq. (\ref{eqn:uniform_lag}) and (\ref{eqn:uniform_zeroloss_lag}) depend on $\bbeta = \bbeta^{(d)}$, where $\bbeta^{(d)}$ gives the coefficients of the target function $f_d(\bx) = \< \bx, \bbeta^{(d)}\>$.
If we explicitly write this dependence, i.e., $\overline U(\lambda, N, n, d) = \overline U(\bbeta, \lambda, N, n, d)$ and $\overline T(\lambda, N, n, d)= \overline T(\bbeta, \lambda, N, n, d)$, then we can see that for any fixed $\bbeta_\star$ and $\tilde \bbeta$ with $\| \tilde \bbeta \|_2 = \| \bbeta_\star \|_2$, we have $\overline U(\bbeta_\star, \lambda, N, n, d) \stackrel{d}{=} \overline U(\tilde \bbeta, \lambda, N, n, d)$ and $\overline T(\bbeta_\star, \lambda, N, n, d) \stackrel{d}{=} \overline T(\tilde \bbeta, \lambda, N, n, d)$, where the randomness comes from $\bX, \bTheta, \beps$. This follows from the fact that the distributions of the $\bx_i$'s and $\btheta_a$'s are rotationally invariant. As a consequence, for any fixed deterministic $\bbeta_\star$, if we take $\bbeta \sim \Unif(\S^{d-1}(\| \bbeta_\star\|_2))$, we have \[ \begin{aligned} \overline U(\bbeta_\star, \lambda, N, n, d) \stackrel{d}{=}&~ \overline U(\bbeta, \lambda, N, n, d), \\ \overline T(\bbeta_\star, \lambda, N, n, d) \stackrel{d}{=}&~ \overline T(\bbeta, \lambda, N, n, d), \\ \end{aligned} \] where the randomness comes from $\bX, \bTheta, \beps, \bbeta$. Consequently, as long as we are able to show the equation \[ \overline U(\bbeta, \lambda, N, n, d) = \overline\cU(\lambda, \psi_1, \psi_2) + o_{d, \P}(1) \] for random $\bbeta \sim \Unif(\S^{d-1}(\normf_1))$, this equation will also hold for any deterministic $ \bbeta_\star$ with $\| \bbeta_\star \|_2^2 = \normf_1^2$. The same reasoning applies to $\overline T$, $\| \overline \ba_U \|_2^2 $, and $\| \overline \ba_T \|_2^2$. As a result, in the following, we work with the assumption that $\bbeta = \bbeta^{(d)} \sim \Unif(\S^{d-1}(\normf_1))$. That is, in proving Proposition \ref{prop:concentration_lag}, we replace Assumption \ref{ass:linear_target} by Assumption \ref{ass:linear_target_prime} below.
By the argument above, as long as Proposition \ref{prop:concentration_lag} holds under Assumption \ref{ass:linear_target_prime}, it also holds under the original assumption, i.e., Assumption \ref{ass:linear_target}. \begin{assumption}[Linear Target Function]\label{ass:linear_target_prime} We assume that $f_d\in L^2(\S^{d-1}(\sqrt{d}))$ with $f_d(\bx) = \< \bbeta^{(d)}, \bx\>$, where $\bbeta^{(d)} \sim \Unif(\S^{d-1}(\normf_1))$. \end{assumption} \subsection{Expansions} Denote $\bv = (v_i)_{i \in [N]} \in \R^N$ and $\bU = (U_{ij})_{i, j \in [N]} \in \R^{N \times N}$, whose elements are defined via \[ \begin{aligned} v_i \equiv&~ \E_{\eps, \bx}[y \sigma(\< \bx, \btheta_i\> / \sqrt d)], \\ U_{ij} \equiv&~ \E_\bx[\sigma(\< \bx, \btheta_i\> / \sqrt d) \sigma(\< \bx, \btheta_j\> / \sqrt d)]. \end{aligned} \] Here, $y = \< \bx, \bbeta\> + \eps$, where $\bbeta \sim \Unif(\S^{d-1}(\normf_1))$, $\bx \sim \Unif(\S^{d-1}(\sqrt{d}))$, $\eps \sim \cN(0, \tau^2)$, and $(\btheta_j)_{j \in [N]} \sim_{iid} \Unif(\S^{d-1}(\sqrt{d}))$ are mutually independent. The expectations are taken with respect to the test sample $\bx \sim \Unif(\S^{d-1}(\sqrt{d}))$ and $\eps \sim \cN(0, \tau^2)$ (in particular, the expectations are conditional on $\bbeta$ and $(\btheta_i)_{i \in [N]}$). Moreover, we denote $\by = (y_1, \ldots, y_n)^\sT \in \R^n$, where $y_i = \< \bx_i, \bbeta\> + \eps_i$. Recall that $(\bx_i)_{i \in [n]} \sim_{iid} \Unif(\S^{d-1}(\sqrt{d}))$ and $(\eps_i)_{i \in [n]} \sim_{iid} \cN(0, \tau^2)$ are mutually independent and independent of $\bbeta \sim \Unif(\S^{d-1}(\normf_1))$. We further denote $\bZ = (Z_{ij})_{i \in [n], j \in [N]}$, whose elements are defined via \[ Z_{ij} = \sigma(\< \bx_i, \btheta_j\> / \sqrt d) / \sqrt d. \] The population risk \eqref{eqn:pop_risk} can be reformulated as \[ R(\ba) = \<\ba, \bU \ba\> - 2\<\ba, \bv\> + \E[y^2], \] where $\ba = (a_1, \dots, a_N)\in\R^N$.
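The quadratic-form reformulation above can be sanity-checked numerically. The sketch below is illustrative and not from the paper: the centered ReLU activation, the sample sizes, and all variable names are our own choices. It replaces the population expectation defining $\bU$, $\bv$, and the risk by an empirical average over a shared sample, for which the identity holds exactly by expanding the square.

```python
import numpy as np

# Empirical analogue of R(a) = <a, U a> - 2 <a, v> + E[y^2]:
# with the expectation replaced by an average over a shared sample,
# the identity is exact (it is just the expansion of the square).
rng = np.random.default_rng(0)
d, N, S = 20, 15, 4000           # dimension, features, samples (illustrative)
tau = 0.3                        # noise level (illustrative)

def unif_sphere(m, dim, radius, rng):
    """m iid points uniform on the sphere of the given radius in R^dim."""
    z = rng.standard_normal((m, dim))
    return radius * z / np.linalg.norm(z, axis=1, keepdims=True)

Theta = unif_sphere(N, d, np.sqrt(d), rng)   # feature vectors theta_a
beta = unif_sphere(1, d, 1.0, rng)[0]        # target coefficients, |beta| = 1
X = unif_sphere(S, d, np.sqrt(d), rng)       # covariates
eps = rng.normal(0.0, tau, size=S)
y = X @ beta + eps
sigma = lambda u: np.maximum(u, 0.0) - 1.0 / np.sqrt(2.0 * np.pi)  # centered ReLU

F = sigma(X @ Theta.T / np.sqrt(d))   # S x N matrix of features sigma(<x, theta>/sqrt d)
U = F.T @ F / S                       # empirical version of U_{ij}
v = F.T @ y / S                       # empirical version of v_i
a = rng.standard_normal(N) / np.sqrt(N)

lhs = np.mean((y - F @ a) ** 2)                  # risk computed directly
rhs = a @ U @ a - 2 * a @ v + np.mean(y ** 2)    # quadratic-form expression
assert abs(lhs - rhs) < 1e-10
```

The same expansion underlies the reformulation of the empirical risk below, with $\bZ^\sT\bZ$ and $\bZ^\sT\by$ in place of $\bU$ and $\bv$.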
The empirical risk \eqref{eqn:emp_risk} can be reformulated as \[ \what R_n(\ba) = \psi_2^{-1} \<\ba, \bZ^\sT\bZ\ba\> - 2\psi_2^{-1}\frac{\<\bZ^\sT\by, \ba\>}{\sqrt{d}} + \frac{1}{n}\|\by\|_2^2. \] By Appendix A of \citet{mm19} (which we include in Appendix \ref{sec:Background} for completeness), we can expand $\sigma(x)$ in terms of Gegenbauer polynomials \[ \begin{aligned} \sigma(x) =&~ \sum_{k = 0}^\infty \lambda_{d, k}(\sigma) B(d, k) Q_k^{(d)}(\sqrt d \cdot x),\\ \end{aligned} \] where $Q_k^{(d)}$ is the $k$'th Gegenbauer polynomial in $d$ dimensions, and $B(d, k)$ is the dimension of the space of spherical harmonics of degree exactly $k$ on $\S^{d-1}(\sqrt{d})$. Finally, $\lambda_{d, k}(\sigma)$ is the $k$'th Gegenbauer coefficient. More details of this expansion can be found in Appendix \ref{sec:Background}. By the properties of Gegenbauer polynomials (c.f. Appendix \ref{sec:Gegenbauer}), we have \[ \begin{aligned} \E_{ \bx \sim \Unif(\S^{d-1}(\sqrt{d}))}[\bx Q_k^{(d)}(\< \bx, \btheta_i\>)] =&~ \bzero, ~~~~~ &\forall k \neq&~ 1,\\ \E_{ \bx \sim \Unif(\S^{d-1}(\sqrt{d}))}[\bx Q_1^{(d)}(\< \bx, \btheta_i\>)] =&~ \btheta_i /d, ~~~~~ &k =&~ 1.\\ \end{aligned} \] As a result, we have \begin{align} v_i =&~ \E_{\eps, \bx}[y \sigma(\< \bx, \btheta_i\> / \sqrt d)] = \sum_{k = 0}^\infty \lambda_{d, k}(\sigma) B(d, k) \E_{\bx}[\< \bx, \bbeta\> Q_k^{(d)}(\< \bx, \btheta_i\>) ] = \lambda_{d, 1}(\sigma) \<\btheta_i, \bbeta \>, \label{eqn:vexpan} \end{align} where the last step uses the display above together with $B(d, 1) = d$. \subsection{Removing the perturbations}\label{sec:removing_perturbation} By Lemmas \ref{lem:decomposition_of_kernel_matrix} and \ref{lem:small_lambda_d0} in Appendix \ref{sec:auxiliary_lemmas}, we have the following decomposition \begin{equation}\label{eqn:Uexpan} \bU = \ob_1^2 \bQ + \ob_\star^2 \id_N + \bDelta, \end{equation} with $\bQ = \bTheta \bTheta^\sT / d$ and $\E[\| \bDelta \|_{\op}^2] = o_{d}(1)$, where $\ob_1^2$ and $\ob_\star^2$ are given in Assumption \ref{ass:activation}.
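The projection behind Eq. (\ref{eqn:vexpan}) can be probed by Monte Carlo. The sketch below is an illustrative special case of our own construction (not from the paper): it uses the centered ReLU $\sigma(u) = \max(u,0) - 1/\sqrt{2\pi}$, for which $\E[G\sigma(G)] = 1/2$, and takes $\btheta$ either parallel or orthogonal to $\bbeta$. In the parallel case $v = \E[\<\bx,\bbeta\>\sigma(\<\bx,\bbeta\>)] = 1/2$ exactly, by symmetry of the sphere; in the orthogonal case $v = 0$, reflecting that only the degree-one component of $\sigma$ survives in $v_i$.

```python
import numpy as np

# Monte Carlo probe of the degree-one projection: only the linear
# Gegenbauer component of sigma contributes to v_i. Centered ReLU and
# all parameter choices below are illustrative assumptions of ours.
rng = np.random.default_rng(1)
d, S = 50, 100_000
sigma = lambda u: np.maximum(u, 0.0) - 1.0 / np.sqrt(2.0 * np.pi)

beta = rng.standard_normal(d)
beta /= np.linalg.norm(beta)                    # |beta| = 1
theta_par = np.sqrt(d) * beta                   # theta on S^{d-1}(sqrt d), parallel
t = rng.standard_normal(d)
t -= (t @ beta) * beta                          # Gram-Schmidt against beta
theta_perp = np.sqrt(d) * t / np.linalg.norm(t) # orthogonal direction

z = rng.standard_normal((S, d))
x = np.sqrt(d) * z / np.linalg.norm(z, axis=1, keepdims=True)  # Unif(S^{d-1}(sqrt d))

v_par = np.mean((x @ beta) * sigma(x @ theta_par / np.sqrt(d)))
v_perp = np.mean((x @ beta) * sigma(x @ theta_perp / np.sqrt(d)))
assert abs(v_par - 0.5) < 0.02    # E[G sigma(G)] = 1/2 for centered ReLU
assert abs(v_perp) < 0.02         # degree-k components with k != 1 drop out
```

A similar (though noisier) experiment can be used to probe the kernel decomposition in Eq. (\ref{eqn:Uexpan}).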
In the following, we show that $\bDelta$ has a vanishing effect on the asymptotics of $\overline U$, $\overline T$, $\| \overline \ba_U \|_2^2$, and $\| \overline \ba_T \|_2^2$. For this purpose, we denote \begin{equation}\label{eqn:definitions_oUc_oTc} \begin{aligned} \bU_c =&~ \ob_1^2 \bQ + \ob_\star^2 \id_N, \\ R_c(\ba) =&~ \<\ba, \bU_c \ba\> - 2\<\ba, \bv\> + \E[y^2], \\ \what R_{c, n}(\ba) =&~ \<\ba, \psi_2^{-1} \bZ^\sT \bZ \ba\> - 2\<\ba, \psi_2^{-1} \bZ^\sT \by / \sqrt{d}\> + \E[y^2], \\ \overline U_c(\lambda, N, n, d) =&~ \sup_{\ba} \Big( R_c(\ba) - \what R_{c, n}(\ba) - \psi_1 \lambda \| \ba \|_2^2 \Big), \\ \overline T_c(\lambda, N, n, d) =&~ \sup_{\ba}\inf_{\bmu} \Big[ R_c(\ba) - \lambda\psi_1 \| \ba \|_2^2 + 2 \< \bmu, \bZ \ba - \by/\sqrt{d}\ \>\Big]. \end{aligned} \end{equation} For a fixed $\lambda \in \Lambdau$, note that we have \begin{equation}\label{eqn:oUc_in_proof} \begin{aligned} \overline U_c(\lambda, N, n, d) =&~ \sup_{\ba}\Big(\< \ba, (\bU_c - \psi_2^{-1}\bZ^\sT \bZ - \psi_1 \lambda \id_N) \ba\> - 2 \< \ba, \bv - \psi_2^{-1} \frac{\bZ^\sT \by}{\sqrt{d}}\> \Big) \\ =&~ \sup_{\ba}\Big(\< \ba, {\overline \bM} \ba\> - 2 \< \ba, {\overline \bv}\> \Big) \end{aligned} \end{equation} where ${\overline \bM} = \bU_c - \psi_2^{-1}\bZ^\sT \bZ - \psi_1 \lambda \id_N$ and ${\overline \bv} = \bv - \psi_2^{-1} \bZ^\sT \by /\sqrt{d}$. When $\bX, \bTheta$ are such that the good event in Assumption \ref{ass:overline_U_invertable} happens (which says that ${\overline \bM} \preceq - \eps \id_N$ for some $\eps > 0$), the inner maximization is uniquely achieved at \begin{equation}\label{eqn:oaUc_in_proof} \overline\ba_{U, c}(\lambda) = \argmax_{\ba}\Big(\< \ba, {\overline \bM} \ba\> - 2 \< \ba, {\overline \bv}\> \Big) = {\overline \bM}^{-1} {\overline \bv}, \end{equation} and when the good event $\{ \| \bDelta \|_{\op} \le \eps / 2\}$ also happens, the maximizer in the definition of $\overline U(\lambda, N, n, d)$ (c.f. Eq.
(\ref{eqn:uniform_lag})) is uniquely achieved at \[ \overline\ba_U(\lambda) = \argmax_{\ba}\Big(\< \ba, ({\overline \bM} + \bDelta) \ba\> - 2 \< \ba, {\overline \bv}\> \Big) = ({\overline \bM} + \bDelta)^{-1} {\overline \bv}. \] Note that \[ \overline\ba_U(\lambda) - \overline\ba_{U, c}(\lambda) = ({\overline \bM} + \bDelta)^{-1} {\overline \bv} - {\overline \bM}^{-1} {\overline \bv} = - ({\overline \bM} + \bDelta)^{-1} \bDelta {\overline \bM}^{-1} {\overline \bv}, \] so, by the fact that $\| \bDelta \|_{\op} = o_{d, \P}(1)$, we have \[ \| \overline\ba_U(\lambda) - \overline\ba_{U, c}(\lambda)\|_2 \le \| ({\overline \bM} + \bDelta)^{-1} \bDelta \|_{\op} \| \overline \ba_{U, c}(\lambda) \|_2 = o_{d, \P}(1) \| \overline \ba_{U, c}(\lambda) \|_2. \] This gives $\| \overline\ba_U(\lambda) \|_2^2 = (1 + o_{d, \P}(1)) \| \overline \ba_{U, c}(\lambda) \|_2^2$. Moreover, by the fact that $\| \bDelta \|_{\op} = o_{d, \P}(1)$, we have \[ \begin{aligned} \overline U_c(\lambda, N, n, d) =&~ \sup_{\ba} \Big( R(\ba) - \what R_n(\ba) - \psi_1 \lambda \| \ba \|_2^2 - \< \ba, \bDelta \ba\> \Big) - \E[y^2] + \| \by \|_2^2/n\\ =&~ \overline U(\lambda, N, n, d) + o_{d, \P}(1) (\| \overline\ba_{U, c}(\lambda) \|_2^2 + 1). \\ \end{aligned} \] As a consequence, as long as we can prove the asymptotics of $\overline U_c$ and $\| \overline\ba_{U, c}(\lambda) \|_2^2$, we also obtain the asymptotics of $\overline U$ and $\| \overline\ba_U(\lambda) \|_2^2$. The same argument applies to $\overline T$ and $\| \overline \ba_T(\lambda) \|_2^2$. \subsection{The asymptotics of $\overline U_c$ and $\psi_1\| \overline \ba_{U, c}(\lambda) \|_2^2$} In the following, we derive the asymptotics of $\overline U_c(\lambda, N, n, d)$ and $\psi_1\| \overline \ba_{U, c}(\lambda) \|_2^2$. Whenever we refer to $\overline \ba_{U, c}(\lambda)$, it is well defined with high probability, since it is well defined whenever the good event in Assumption \ref{ass:overline_U_invertable} happens.
Note that this good event depends only on $\bX, \bTheta$ and is independent of $\bbeta, \beps$. By Eq. (\ref{eqn:oUc_in_proof}) and (\ref{eqn:oaUc_in_proof}), a simple calculation shows that \[ \begin{aligned} \overline U_c(\lambda, N, n, d) =&~ - \< {\overline \bv}, {\overline \bM}^{-1} {\overline \bv}\> = - \Psi_1 - \Psi_2 - \Psi_3,\\ \| \overline\ba_{U, c} \|_2^2 =&~ \< {\overline \bv}, {\overline \bM}^{-2} {\overline \bv}\> = \Phi_1 + \Phi_2 + \Phi_3, \\ \end{aligned} \] where \[ \begin{aligned} \Psi_1 =&~ \<\bv, {\overline \bM}^{-1} \bv\>, & \Phi_1 =&~ \<\bv, {\overline \bM}^{-2} \bv\>, \\ \Psi_2 =&~ -2\psi_2^{-1} \<\frac{\bZ^\sT \by}{\sqrt{d}}, {\overline \bM}^{-1} \bv\>, ~~~~~~~~~& \Phi_2=&~ -2\psi_2^{-1} \<\frac{\bZ^\sT \by}{\sqrt{d}}, {\overline \bM}^{-2} \bv\>,\\ \Psi_3 =&~ \psi_2^{-2} \<\frac{\bZ^\sT \by}{\sqrt{d}}, {\overline \bM}^{-1} \frac{\bZ^\sT \by}{\sqrt{d}} \>, & \Phi_3=&~\psi_2^{-2} \<\frac{\bZ^\sT \by}{\sqrt{d}}, {\overline \bM}^{-2} \frac{\bZ^\sT \by}{\sqrt{d}} \>. \end{aligned} \] The following lemma gives the expectations of the $\Psi_i$'s and $\Phi_i$'s with respect to $\bbeta$ and $\beps$. \begin{lemma}[Expectation of $\Psi_i$'s and $\Phi_i$'s]\label{lem:expectation_Psi_Phi_U} Denote $\bq_U(\lambda, \bpsi) = (\mu_\star^2 - \lambda\psi_1, \mu_1^2, \psi_2, 0,0)$.
We have \[ \begin{aligned} \E_{\beps, \bbeta}[\Psi_1] =&~ \mu_1^2 \normf_1^2 \cdot \frac{1}{d} \Trace\Big({\overline \bM}^{-1} \bQ \Big) \times (1 + o_d(1)), \\ \E_{\beps, \bbeta}[\Psi_2] =&~ -\frac{2 \normf_1^2}{\psi_2} \cdot \frac{1}{d}\Trace\Big( \bZ {\overline \bM}^{-1} \bZ_1^\sT \Big) \times (1 + o_d(1)), \\ \E_{\beps, \bbeta}[\Psi_3] =&~ \frac{\normf_1^2}{\psi_2^2} \cdot \frac{1}{d} \Trace\Big( \bZ{\overline \bM}^{-1}\bZ^\sT \bH \Big) + \frac{\tau^2}{\psi_2^2} \cdot \frac{1}{d}\Trace\Big( \bZ{\overline \bM}^{-1}\bZ^\sT \Big), \\ \E_{\beps, \bbeta}[\Phi_1] =&~ \mu_1^2 \normf_1^2 \cdot \frac{1}{d} \Trace\Big({\overline \bM}^{-2} \bQ \Big) \times (1 + o_d(1)), \\ \E_{\beps, \bbeta}[\Phi_2] =&~ -\frac{2 \normf_1^2}{\psi_2} \cdot \frac{1}{d}\Trace\Big( \bZ {\overline \bM}^{-2} \bZ_1^\sT \Big) \times (1 + o_d(1)), \\ \E_{\beps, \bbeta}[\Phi_3] =&~ \frac{\normf_1^2}{\psi_2^2} \cdot \frac{1}{d} \Trace\Big( \bZ{\overline \bM}^{-2}\bZ^\sT \bH \Big) + \frac{\tau^2}{\psi_2^2} \cdot \frac{1}{d}\Trace\Big( \bZ{\overline \bM}^{-2}\bZ^\sT \Big). \end{aligned} \] Here the definitions of $\bQ$, $\bH$, and $\bZ_1$ are given by Eq. (\ref{eqn:def_Q_H_Z_Z1}). 
Furthermore, we have \[ \begin{aligned} \E_{\beps, \bbeta}[\Psi_1] =&~ \mu_1^2 \normf_1^2 \cdot \partial_{s_2} G_d(0_+; \bq_U(\lambda, \bpsi)) \times (1 + o_d(1)), \\ \E_{\beps, \bbeta}[\Psi_2] =&~ \normf_1^2 \cdot \partial_{p} G_d(0_+; \bq_U(\lambda, \bpsi)) \times (1 + o_d(1)), \\ \E_{\beps, \bbeta}[\Psi_3] =&~ \normf_1^2\cdot (\partial_{t_2} G_d(0_+; \bq_U(\lambda, \bpsi)) - 1) + \tau^2 \cdot (\partial_{t_1} G_d(0_+; \bq_U(\lambda, \bpsi)) - 1), \\ \E_{\beps, \bbeta}[\Phi_1] =&~ - \mu_1^2 \normf_1^2 \cdot \partial_{s_1 }\partial_{s_2} G_d(0_+; \bq_U(\lambda, \bpsi)) \times (1 + o_d(1)), \\ \E_{\beps, \bbeta}[\Phi_2] =&~ - \normf_1^2 \cdot \partial_{s_1} \partial_{p} G_d(0_+; \bq_U(\lambda, \bpsi)) \times (1 + o_d(1)), \\ \E_{\beps, \bbeta}[\Phi_3] =&~ - \normf_1^2\cdot \partial_{s_1} \partial_{t_2} G_d(0_+; \bq_U(\lambda, \bpsi)) - \tau^2 \cdot \partial_{s_1} \partial_{t_1} G_d(0_+; \bq_U(\lambda, \bpsi)). \end{aligned} \] The definition of $G_d$ is as in Definition \ref{def:log_determinant_A}, and $\nabla_\bq^k G_d(0_+; \bq)$ for $k \in \{1, 2\}$ stands for the $k$'th derivatives (as a vector or a matrix) of $G_d({\mathrm i} u; \bq)$ with respect to $\bq$ in the $u \to 0+$ limit (with its elements given by partial derivatives) \[ \nabla_\bq^k G_d(0_+; \bq) = \lim_{u \to 0+} \nabla_\bq^k G_d({\mathrm i} u; \bq). \] \end{lemma} We next state the asymptotic characterization of the log-determinant which was proven in \cite{mm19}. \begin{proposition}[Proposition 8.4 in \cite{mm19}]\label{prop:expression_for_log_determinant} Define \begin{equation} \begin{aligned} \Xi(\xi, z_1, z_2; \bq; \bpsi) \equiv&~ \log[(s_2 z_1 + 1)(t_2 z_2 + 1) - \ob_1^2 (1 + p)^2 z_1 z_2] - \ob_\star^2 z_1 z_2 \\ &+ s_1 z_1 + t_1 z_2 - \psi_1 \log (z_1 / \psi_1) - \psi_2 \log (z_2 / \psi_2) - \xi (z_1 + z_2) - \psi_1 - \psi_2. \end{aligned} \end{equation} For $\xi \in \C_+$ and $\bq \in \cQ$ (c.f. Eq. 
(\ref{eqn:definition_of_cQ})), let $m_1(\xi; \bq; \bpsi), m_2(\xi; \bq; \bpsi)$ be defined as the analytic continuation of the solution of Eq. (\ref{eq:FixedPoint}), as defined in Definition \ref{def:Stieltjes}. Define \begin{equation} g(\xi; \bq; \bpsi) = \Xi(\xi, m_1(\xi; \bq; \bpsi), m_2(\xi; \bq; \bpsi); \bq; \bpsi). \end{equation} Consider the proportional asymptotics $N/d\to\psi_1$, $n/d\to\psi_2$, as per Assumption \ref{ass:linear}. Then for any fixed $\xi \in \C_+$ and $\bq \in \cQ$, we have \begin{equation}\label{eqn:expression_for_log_determinant} \begin{aligned} \lim_{d \to \infty} \E[ \vert G_d(\xi; \bq) - g(\xi; \bq; \bpsi) \vert] = 0. \end{aligned} \end{equation} Moreover, for any fixed $u \in \R_+$ and $\bq \in \cQ$, we have \begin{align} \lim_{d \to \infty} \E[\| \partial_\bq G_d({\mathrm i} u; \bq) - \partial_\bq g({\mathrm i} u; \bq; \bpsi) \|_2 ] =&~ 0, \label{eqn:convergence_of_derivatives}\\ \lim_{d \to \infty} \E[\| \nabla_{\bq}^2 G_d({\mathrm i} u; \bq) - \nabla_{\bq}^2 g({\mathrm i} u; \bq; \bpsi) \|_{\op} ] =&~ 0. \label{eqn:convergence_of_second_derivatives} \end{align} \end{proposition} \begin{remark} Note that Proposition 8.4 in \cite{mm19} states that Eq. (\ref{eqn:convergence_of_derivatives}) and (\ref{eqn:convergence_of_second_derivatives}) hold at $\bq = \bzero$. However, by a simple modification of their proof, one can show that these equations also hold at any $\bq \in \cQ$. \end{remark} Combining Assumption \ref{ass:exchange_limit} with Proposition \ref{prop:expression_for_log_determinant}, we have \begin{proposition}\label{prop:exchange_limit_g_U} Let Assumption \ref{ass:exchange_limit} hold. For any $\lambda \in \Lambdau$, denote $\bq_U = \bq_U(\lambda, \bpsi) = (\mu_\star^2 - \lambda\psi_1, \mu_1^2, \psi_2, 0,0)$; then we have, for $k = 1,2$, \[ \| \nabla_\bq^k G_d(0_+;\bq_U) - \lim_{u \to 0_+} \nabla_\bq^k g({\mathrm i} u; \bq_U; \bpsi) \| = o_{d, \P}(1).
\] \end{proposition} As a consequence of Proposition \ref{prop:exchange_limit_g_U}, we can calculate the asymptotics of the $\Psi_i$'s and $\Phi_i$'s. Combined with the concentration result in Lemma \ref{lem:concentration_beta_eps_U} later in this section, the proposition below completes the proof of the part of Proposition \ref{prop:concentration_lag} regarding the standard uniform convergence $U$. Its correctness follows directly from Lemma \ref{lem:expectation_Psi_Phi_U} and Proposition \ref{prop:exchange_limit_g_U}. \begin{proposition}\label{prop:asymptotics_barU_A^2} Under the assumptions of Proposition \ref{prop:concentration_lag}, for any $\lambda \in \Lambdau$, denote $\bq_U(\lambda, \bpsi) = (\mu_\star^2 - \lambda\psi_1, \mu_1^2, \psi_2, 0,0)$; then we have \[ \begin{aligned} \E_{\beps, \bbeta}[\Psi_1] \stackrel{\P}{\to}&~ \mu_1^2 \normf_1^2 \cdot \partial_{s_2} g(0_+; \bq_U(\lambda, \bpsi); \bpsi), \\ \E_{\beps, \bbeta}[\Psi_2] \stackrel{\P}{\to}&~ \normf_1^2 \cdot \partial_{p} g(0_+; \bq_U(\lambda, \bpsi); \bpsi), \\ \E_{\beps, \bbeta}[\Psi_3] \stackrel{\P}{\to}&~ \normf_1^2 \cdot \Big(\partial_{t_2} g(0_+; \bq_U(\lambda, \bpsi); \bpsi) - 1 \Big) + \tau^2 \Big( \partial_{t_1} g(0_+; \bq_U(\lambda, \bpsi); \bpsi) - 1 \Big), \\ \E_{\beps, \bbeta}[\Phi_1] \stackrel{\P}{\to}&~ - \mu_1^2 \normf_1^2 \cdot \partial_{s_1}\partial_{s_2} g(0_+; \bq_U(\lambda, \bpsi); \bpsi) , \\ \E_{\beps, \bbeta}[\Phi_2] \stackrel{\P}{\to}&~ - \normf_1^2 \cdot \partial_{s_1}\partial_{p} g(0_+; \bq_U(\lambda, \bpsi); \bpsi), \\ \E_{\beps, \bbeta}[\Phi_3] \stackrel{\P}{\to}&~ - \normf_1^2 \cdot \partial_{s_1}\partial_{t_2} g(0_+; \bq_U(\lambda, \bpsi); \bpsi) - \tau^2 \cdot \partial_{s_1}\partial_{t_1} g(0_+; \bq_U(\lambda, \bpsi); \bpsi), \\ \end{aligned} \] where $\nabla_\bq^k g(0_+; \bq; \bpsi)$ for $k \in \{1, 2\}$ stands for the $k$'th derivative (as a vector or a matrix) of $g({\mathrm i} u; \bq; \bpsi)$ with respect to $\bq$ in the $u \to 0+$ limit (with its elements given by
partial derivatives) \[ \nabla_{\bq}^k g(0_+; \bq; \bpsi) = \lim_{u \to 0_+} \nabla_{\bq}^k g({\mathrm i} u; \bq; \bpsi). \] As a consequence, we have \[ \E_{\beps, \bbeta}[\overline U_c(\lambda, N, n, d)] \stackrel{\P}{\to} \overline \cU(\lambda, \psi_1, \psi_2), ~~~~~ \E_{\beps, \bbeta} [\psi_1\| \overline\ba_{U, c}(\lambda)\|_2^2] \stackrel{\P}{\to} \cA_U(\lambda, \psi_1, \psi_2), \] where the definitions of $\overline \cU$ and $\cA_U$ are given in Definition \ref{def:analytic_expression_overline}. Here $\stackrel{\P}{\to}$ stands for convergence in probability as $N/d \to \psi_1$ and $n/d \to \psi_2$ (with respect to the randomness of $\bX$ and $\bTheta$). \end{proposition} \begin{lemma}\label{lem:concentration_beta_eps_U} Adopt the assumptions of Proposition \ref{prop:concentration_lag}. For any $\lambda \in \Lambdau$, we have \[ \begin{aligned} \Var_{\beps, \bbeta}[\Psi_1], \Var_{\beps, \bbeta}[\Psi_2], \Var_{\beps, \bbeta}[\Psi_3] =&~ o_{d, \P}(1), \\ \Var_{\beps, \bbeta}[\Phi_1], \Var_{\beps, \bbeta}[\Phi_2],\Var_{\beps, \bbeta}[\Phi_3] =&~ o_{d, \P}(1), \end{aligned} \] so that \[ \Var_{\beps, \bbeta}[\overline U_c(\lambda, N, n, d)], \Var_{\beps, \bbeta}[\| \overline\ba_{U, c}(\lambda) \|_2^2] = o_{d, \P}(1). \] Here, $o_{d, \P}(1)$ stands for convergence to $0$ in probability (with respect to the randomness of $\bX$ and $\bTheta$) as $N/d \to \psi_1$, $n/d\to\psi_2$, and $d \to \infty$. \end{lemma} Now, combining Lemma \ref{lem:concentration_beta_eps_U} and Proposition \ref{prop:asymptotics_barU_A^2}, we have \[ \overline U_c(\lambda, N, n, d) \stackrel{\P}{\to} \overline \cU(\lambda, \psi_1, \psi_2), ~~~~~ \psi_1\| \overline\ba_{U, c}(\lambda)\|_2^2 \stackrel{\P}{\to} \cA_U(\lambda, \psi_1, \psi_2). \] Finally, combining with the arguments in Appendix \ref{sec:removing_perturbation} proves the asymptotics of $\overline U$ and $\psi_1 \| \overline \ba_U(\lambda) \|_2^2$.
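As an aside, the concentration asserted by Eq. (\ref{eqn:expression_for_log_determinant}) is easy to observe numerically. The sketch below is an illustration only; it uses a plain Wishart matrix as a simplified stand-in for the block matrix $\bA(\bq)$, and compares the normalized resolvent trace across independent draws at moderate $d$:

```python
import numpy as np

# Illustration only: a plain sample covariance matrix replaces the paper's
# block matrix A(q); the concentration phenomenon is the same in spirit.
def resolvent_trace(d, psi2, u, rng):
    n = int(psi2 * d)
    X = rng.standard_normal((n, d))
    S = X.T @ X / d                                  # aspect ratio n/d = psi2
    return np.trace(np.linalg.inv(S - 1j * u * np.eye(d))) / d

rng = np.random.default_rng(0)
vals = [resolvent_trace(400, 2.0, 1.0, rng) for _ in range(4)]
spread = max(abs(v - vals[0]) for v in vals)
print(spread)   # independent draws agree up to O(1/d) fluctuations
```

The same experiment with larger $d$ shrinks the spread further, consistent with the $\E[\vert G_d - g\vert] \to 0$ statement.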
\subsection{The asymptotics of $\overline T_c$ and $\psi_1\| \overline \ba_{T, c}(\lambda) \|_2^2$} In the following, we derive the asymptotics of $\overline T_c(\lambda, N, n, d)$ and $\psi_1\| \overline \ba_{T, c}(\lambda) \|_2^2$. This follows the same steps as the proof of the asymptotics of $\overline U_c$ and $\psi_1\| \overline \ba_{U, c}(\lambda) \|_2^2$; we give an overview of the argument and omit the details, which are identical to those for $\overline U_c$, for brevity. For a fixed $\lambda \in \Lambdat$, recalling the definition of $\overline T_c$ in Eq. (\ref{eqn:definitions_oUc_oTc}), we have \begin{equation}\label{eqn:oTc_variational} \begin{aligned} \overline T_c(\lambda, N, n, d) =&~ \sup_{\ba}\inf_{\bmu} \Big[ R_c(\ba) - \lambda\psi_1 \| \ba \|_2^2 + 2 \< \bmu, \bZ \ba - \by/\sqrt{d}\>\Big] \\ =&~ \sup_{\ba}\inf_{\bmu} \Big( \<\ba, (\bU_c -\lambda\psi_1\bI_N)\ba\> - 2\<\ba,\bv\> + 2\<\bmu, \bZ\ba\>-2\<\bmu,\by/\sqrt{d}\> \Big) +\E[y^2]\\ =&~ \sup_{\sqrt{d}\bZ\ba=\by} \<\ba, (\bU_c-\lambda\psi_1\bI_N)\ba\> - 2\<\ba, \bv\> + \E[y^2]. \end{aligned} \end{equation} On the good event in Assumption \ref{ass:overline_U_invertable}, $(\bU_c-\lambda\psi_1\bI_N)$ is negative definite on ${\rm null}(\bZ)$, so the optimum of the variational problem above exists. By the KKT conditions, the optimal $\ba$ and the dual variable $\bmu$ satisfy \begin{itemize} \item Stationarity: $(\bU_c-\lambda\psi_1\bI_N)\ba + \bZ^\sT\bmu = \bv$. \item Primal feasibility: $\bZ\ba = \by/\sqrt{d}$. \end{itemize} The two conditions can be written compactly as \begin{equation}\label{eqn:Tstationary} \begin{bmatrix} \bU_c - \psi_1\lambda \id_N & \bZ^\sT \\ \bZ& \bzero \end{bmatrix} \begin{bmatrix} \ba\\ \bmu \end{bmatrix}= \begin{bmatrix} \bv\\ \by/\sqrt{d} \end{bmatrix}.
\end{equation} We define \begin{equation} {\overline \bM} \equiv \begin{bmatrix} \bU_c - \psi_1\lambda \id_N & \bZ^\sT \\ \bZ& \bzero \end{bmatrix}, ~~~~~~~~~~~~~~ {\overline \bv} \equiv \begin{bmatrix} \bv\\ \by/\sqrt{d} \end{bmatrix}. \end{equation} Under Assumption \ref{ass:overline_U_invertable}, ${\overline \bM}$ is invertible. To see this, suppose there exists a vector $[\ba_1^\sT, \bmu_1^\sT]^\sT \neq \bzero\in\R^{N+n}$ such that ${\overline \bM}[\ba_1^\sT, \bmu_1^\sT]^\sT=\bzero$; then \[ \begin{aligned} (\bU_c-\lambda\psi_1\bI_N)\ba_1 + \bZ^\sT\bmu_1 = 0,\\ \bZ\ba_1 = 0. \end{aligned} \] As in Assumption \ref{ass:overline_U_invertable}, let ${\mathsf P}_{{\rm null}} = \id_N - \bZ^\dagger \bZ$. Since $\bZ\ba_1 = \bzero$, we can write $\ba_1 = {\mathsf P}_{\rm null} \bv_1$ for some $\bv_1\neq\bzero\in\R^N$. Then, \[ \begin{aligned} &(\bU_c-\lambda\psi_1\bI_N){\mathsf P}_{\rm null} \bv_1 + \bZ^\sT\bmu_1=0,\\ \Rightarrow&~ {\mathsf P}_{\rm null}(\bU_c-\lambda\psi_1\bI_N){\mathsf P}_{\rm null} \bv_1 + {\mathsf P}_{\rm null}\bZ^\sT\bmu_1=0,\\ \Rightarrow&~ {\mathsf P}_{\rm null}(\bU_c-\lambda\psi_1\bI_N){\mathsf P}_{\rm null} \bv_1 = 0, \end{aligned} \] where the last relation comes from the fact that $\bZ{\mathsf P}_{\rm null}=\bzero$. However, by Assumption \ref{ass:overline_U_invertable}, ${\mathsf P}_{\rm null}(\bU_c-\lambda\psi_1\bI_N){\mathsf P}_{\rm null}$ is negative definite, which leads to a contradiction. In the following, we assume that the event in Assumption \ref{ass:overline_U_invertable} holds, so that ${\overline \bM}$ is invertible. In this case, the maximizer in Eq. (\ref{eqn:oTc_variational}) is well defined as \[ \overline\ba_{T, c}(\lambda) = [\id_N, \bzero_{N \times n}]{\overline \bM}^{-1} {\overline \bv}. \] Moreover, we can write $\overline T_c$ as \[ \overline T_c(\lambda, N, n, d) = \E[y^2] - {\overline \bv}^\sT {\overline \bM}^{-1} {\overline \bv}.
\] We further define \[ {\overline \bv}_1 = [\bv^\sT, \bzero_{n \times 1}^\sT]^\sT, ~~~~ {\overline \bv}_2 = [\bzero_{N \times 1}^\sT, \by^\sT / \sqrt{d}]^\sT,~~~~ {\boldsymbol E} \equiv \begin{bmatrix} \id_N & \bzero_{N \times n} \\ \bzero_{n \times N} & \bzero_{n \times n} \end{bmatrix}. \] A simple calculation shows that \[ \begin{aligned} \overline T_c(\lambda, N, n, d) \equiv&~ \E[y^2] - \< {\overline \bv}, {\overline \bM}^{-1} {\overline \bv}\> = \normf_1^2 + \tau^2 - \Psi_1 - \Psi_2 - \Psi_3,\\ \| \overline\ba_{T, c} \|_2^2 \equiv&~ \< {\overline \bv}, {\overline \bM}^{-1} {\boldsymbol E} {\overline \bM}^{-1} {\overline \bv}\> = \Phi_1 + \Phi_2 + \Phi_3, \\ \end{aligned} \] where \[ \begin{aligned} \Psi_1 =&~ \<{\overline \bv}_1, {\overline \bM}^{-1} {\overline \bv}_1\>, & \Phi_1 =&~ \<{\overline \bv}_1, {\overline \bM}^{-1} {\boldsymbol E} {\overline \bM}^{-1} {\overline \bv}_1\>, \\ \Psi_2 =&~ 2 \<{\overline \bv}_2, {\overline \bM}^{-1} {\overline \bv}_1\>, ~~~~~~~~~& \Phi_2=&~ 2 \<{\overline \bv}_2, {\overline \bM}^{-1} {\boldsymbol E} {\overline \bM}^{-1} {\overline \bv}_1\>,\\ \Psi_3 =&~ \<{\overline \bv}_2, {\overline \bM}^{-1} {\overline \bv}_2\>, & \Phi_3=&~ \< {\overline \bv}_2, {\overline \bM}^{-1} {\boldsymbol E} {\overline \bM}^{-1} {\overline \bv}_2 \>. \end{aligned} \] The following lemma gives the expectations of the $\Psi_i$'s and $\Phi_i$'s with respect to $\bbeta$ and $\beps$. \begin{lemma}[Expectation of $\Psi_i$'s and $\Phi_i$'s]\label{lem:expectation_Psi_Phi_T} Denote $\bq_T(\lambda, \bpsi) = (\mu_\star^2 - \lambda\psi_1, \mu_1^2, 0, 0,0)$.
We have \[ \begin{aligned} \E_{\beps, \bbeta}[\Psi_1] =&~ \mu_1^2 \normf_1^2 \cdot \partial_{s_2} G_d(0_+; \bq_T(\lambda, \bpsi)) \times (1 + o_d(1)), \\ \E_{\beps, \bbeta}[\Psi_2] =&~ \normf_1^2 \cdot \partial_{p} G_d(0_+; \bq_T(\lambda, \bpsi)) \times (1 + o_d(1)), \\ \E_{\beps, \bbeta}[\Psi_3] =&~ \normf_1^2\cdot \partial_{t_2} G_d(0_+; \bq_T(\lambda, \bpsi)) + \tau^2 \cdot \partial_{t_1} G_d(0_+; \bq_T(\lambda, \bpsi)), \\ \E_{\beps, \bbeta}[\Phi_1] =&~ - \mu_1^2 \normf_1^2 \cdot \partial_{s_1}\partial_{s_2} G_d(0_+; \bq_T(\lambda, \bpsi)) \times (1 + o_d(1)), \\ \E_{\beps, \bbeta}[\Phi_2] =&~ - \normf_1^2 \cdot \partial_{s_1} \partial_{p} G_d(0_+; \bq_T(\lambda, \bpsi)) \times (1 + o_d(1)), \\ \E_{\beps, \bbeta}[\Phi_3] =&~ - \normf_1^2\cdot \partial_{s_1} \partial_{t_2} G_d(0_+; \bq_T(\lambda, \bpsi)) - \tau^2 \cdot \partial_{s_1} \partial_{t_1} G_d(0_+; \bq_T(\lambda, \bpsi)). \end{aligned} \] The definition of $G_d$ is as in Definition \ref{def:log_determinant_A}, and $\nabla_\bq^k G_d(0_+; \bq)$ for $k \in \{1, 2\}$ stands for the $k$'th derivatives (as a vector or a matrix) of $G_d({\mathrm i} u; \bq)$ with respect to $\bq$ in the $u \to 0_+$ limit (with its elements given by partial derivatives) \[ \nabla_\bq^k G_d(0_+; \bq) = \lim_{u \to 0_+} \nabla_\bq^k G_d({\mathrm i} u; \bq). \] \end{lemma} The proof of Lemma \ref{lem:expectation_Psi_Phi_T} follows from a direct calculation and is identical to the proof of Lemma \ref{lem:expectation_Psi_Phi_U}. Combining Assumption \ref{ass:exchange_limit} with Proposition \ref{prop:expression_for_log_determinant}, we have \begin{proposition}\label{prop:exchange_limit_g_T} Let Assumption \ref{ass:exchange_limit} hold. For any $\lambda \in \Lambdat$, denote $\bq_T = \bq_T(\lambda, \bpsi) = (\mu_\star^2 - \lambda\psi_1, \mu_1^2, 0, 0,0)$. Then, for $k = 1,2$, \[ \| \nabla_\bq^k G_d(0_+;\bq_T) - \lim_{u \to 0_+} \nabla_\bq^k g({\mathrm i} u; \bq_T; \bpsi) \| = o_{d, \P}(1).
\] \end{proposition} As a consequence of Proposition \ref{prop:exchange_limit_g_T}, we can calculate the asymptotics of the $\Psi_i$'s and $\Phi_i$'s. \begin{proposition}\label{prop:asymptotics_barT_A^2} Adopt the assumptions of Proposition \ref{prop:concentration_lag}. For any $\lambda \in \Lambdat$, denote $\bq_T(\lambda, \bpsi) = (\mu_\star^2 - \lambda\psi_1, \mu_1^2, 0, 0,0)$. Then we have \[ \begin{aligned} \E_{\beps, \bbeta}[\Psi_1] \stackrel{\P}{\to}&~ \mu_1^2 \normf_1^2 \cdot \partial_{s_2} g(0_+; \bq_T(\lambda, \bpsi); \bpsi), \\ \E_{\beps, \bbeta}[\Psi_2] \stackrel{\P}{\to}&~ \normf_1^2 \cdot \partial_{p} g(0_+; \bq_T(\lambda, \bpsi); \bpsi), \\ \E_{\beps, \bbeta}[\Psi_3] \stackrel{\P}{\to}&~ \normf_1^2 \cdot \partial_{t_2} g(0_+; \bq_T(\lambda, \bpsi); \bpsi) + \tau^2 \cdot \partial_{t_1} g(0_+; \bq_T(\lambda, \bpsi); \bpsi), \\ \E_{\beps, \bbeta}[\Phi_1] \stackrel{\P}{\to}&~ - \mu_1^2 \normf_1^2 \cdot \partial_{s_1}\partial_{s_2} g(0_+; \bq_T(\lambda, \bpsi); \bpsi) , \\ \E_{\beps, \bbeta}[\Phi_2] \stackrel{\P}{\to}&~ - \normf_1^2 \cdot \partial_{s_1}\partial_p g(0_+; \bq_T(\lambda, \bpsi); \bpsi), \\ \E_{\beps, \bbeta}[\Phi_3] \stackrel{\P}{\to}&~ - \normf_1^2 \cdot \partial_{s_1}\partial_{t_2} g(0_+; \bq_T(\lambda, \bpsi); \bpsi) - \tau^2 \cdot \partial_{s_1}\partial_{t_1} g(0_+; \bq_T(\lambda, \bpsi); \bpsi), \\ \end{aligned} \] where $\nabla_\bq^k g(0_+; \bq; \bpsi)$ for $k \in \{1, 2\}$ stands for the $k$'th derivatives (as a vector or a matrix) of $g({\mathrm i} u; \bq; \bpsi)$ with respect to $\bq$ in the $u \to 0_+$ limit (with its elements given by partial derivatives) \[ \nabla_{\bq}^k g(0_+; \bq; \bpsi) = \lim_{u \to 0_+} \nabla_{\bq}^k g({\mathrm i} u; \bq; \bpsi).
\] As a consequence, we have \[ \E_{\beps, \bbeta}[\overline T_c(\lambda, N, n, d)] \stackrel{\P}{\to} \overline \cT(\lambda, \psi_1, \psi_2), ~~~~~ \E_{\beps, \bbeta} [\psi_1\| \overline\ba_{T, c}(\lambda)\|_2^2] \stackrel{\P}{\to} \cA_T(\lambda, \psi_1, \psi_2), \] where the definitions of $\overline \cT$ and $\cA_T$ are given in Definition \ref{def:analytic_expression_overline}. Here $\stackrel{\P}{\to}$ stands for convergence in probability as $N/d \to \psi_1$ and $n/d \to \psi_2$ (with respect to the randomness of $\bX$ and $\bTheta$). \end{proposition} The proposition above shows that the $\Psi_i$'s and $\Phi_i$'s concentrate with respect to the randomness in $\bX$ and $\bTheta$. To complete the concentration proof, we need to show that the $\Psi_i$'s and $\Phi_i$'s also concentrate with respect to the randomness in $\bbeta$ and $\beps$. \begin{lemma}\label{lem:concentration_beta_eps_T} Adopt the assumptions of Proposition \ref{prop:concentration_lag}. For any $\lambda \in \Lambdat$, we have \[ \begin{aligned} \Var_{\beps, \bbeta}[\Psi_1], \Var_{\beps, \bbeta}[\Psi_2], \Var_{\beps, \bbeta}[\Psi_3] =&~ o_{d, \P}(1), \\ \Var_{\beps, \bbeta}[\Phi_1], \Var_{\beps, \bbeta}[\Phi_2],\Var_{\beps, \bbeta}[\Phi_3] =&~ o_{d, \P}(1), \end{aligned} \] so that \[ \Var_{\beps, \bbeta}[\overline T_c(\lambda, N, n, d)], \Var_{\beps, \bbeta}[\| \overline\ba_{T, c}(\lambda) \|_2^2] = o_{d, \P}(1). \] Here, $o_{d, \P}(1)$ stands for convergence to $0$ in probability (with respect to the randomness of $\bX$ and $\bTheta$) as $N/d \to \psi_1$, $n/d\to\psi_2$, and $d \to \infty$. \end{lemma} Now, combining Proposition \ref{prop:asymptotics_barT_A^2} and Lemma \ref{lem:concentration_beta_eps_T}, we have \[ \overline T_c(\lambda, N, n, d) \stackrel{\P}{\to} \overline \cT(\lambda, \psi_1, \psi_2), ~~~~~ \psi_1\| \overline\ba_{T, c}(\lambda)\|_2^2 \stackrel{\P}{\to} \cA_T(\lambda, \psi_1, \psi_2).
\] The results above, combined with the arguments in Appendix \ref{sec:removing_perturbation}, complete the proof of the asymptotics of $\overline T$ and $\psi_1 \| \overline \ba_T(\lambda) \|_2^2$. \subsection{Proof of Lemma \ref{lem:expectation_Psi_Phi_U} and Lemma \ref{lem:concentration_beta_eps_U}} \begin{proof}[Proof of Lemma \ref{lem:expectation_Psi_Phi_U}] Note that by Assumption \ref{ass:overline_U_invertable}, the matrix ${\overline \bM} = \bU_c - \psi_2^{-1} \bZ^\sT \bZ - \psi_1\lambda \id_N$ is negative definite (so that it is invertible) with high probability. Moreover, whenever ${\overline \bM}$ is negative definite, the matrix $\bA(\bq_U)$ for $\bq_U = (\mu_\star^2 - \lambda\psi_1, \mu_1^2, \psi_2, 0,0)$ is also invertible. In the following, we condition on this good event. From the expansion for $\bv_i$ in \eqref{eqn:vexpan}, we have \[ \begin{aligned} \E_{\bbeta, \beps} \Psi_1 =&~ \E_{\bbeta, \beps} \Big[\Trace\Big(\overline\bM^{-1} \bv \bv^\sT \Big) \Big]= \frac{1}{d}\lambda_{d, 1}(\sigma)^2 F_1^2 \cdot \Big[\Trace\Big(\overline\bM^{-1} \bTheta \bTheta^\sT\Big)\Big] = \frac{1}{d}\mu_1^2F_1^2 \Trace\left(\overline\bM^{-1}\frac{\bTheta\bTheta^\sT}{d} \right)\times (1 + o_d(1)), \end{aligned} \] where we used the relation $\lambda_{d, 1} = \mu_1/\sqrt{d} \times (1 + o_d(1))$ as in Eq. (\ref{eqn:relationship_mu_lambda}). Similarly, the second term is \[ \begin{aligned} \E_{\bbeta, \beps} \Psi_2 =&~ -\frac{2}{\psi_2\sqrt{d}} \E_{\bbeta, \beps} \Big[\Trace\Big( \bZ\overline{\bM}^{-1} \bv \by^\sT \Big) \Big] \\ =&~ -\frac{2}{\psi_2 d\sqrt{d}} \lambda_{d, 1}(\sigma)F_1^2 \cdot \Trace\Big( \bZ \overline{\bM}^{-1} \bTheta \bX^\sT \Big) \\ =&~ -\frac{2}{\psi_2 d^2} \mu_1 F_1^2 \cdot \Trace\Big( \bZ \overline{\bM}^{-1} \bTheta \bX^\sT \Big)\times (1 + o_d(1)). \end{aligned} \] To compute $\Psi_3$, note that we have \[ \E_{\bbeta, \beps}[\by\by^\sT] = F_1^2 \cdot (\bX\bX^\sT)/d + \tau^2 \id_n.
\] This gives the expansion for $\Psi_3$: \[ \begin{aligned} \E_{\bbeta, \beps} \Psi_3 =&~ \psi_2^{-2}d^{-1}\E_{\bbeta, \beps} \Trace\Big( \bZ\overline{\bM}^{-1}\bZ^\sT \by\by^\sT \Big)\\ =&~ \psi_2^{-2}d^{-2}F_1^2\Trace\Big(\bZ\overline{\bM}^{-1}\bZ^\sT \bX\bX^\sT\Big) + \psi_2^{-2}d^{-1} \Trace\Big(\bZ\overline{\bM}^{-1}\bZ^\sT\Big)\tau^2.\\ \end{aligned} \] Through the same algebraic manipulations as above, we have \[ \begin{aligned} \E_{\bbeta, \beps} \Phi_1 =&~ \frac{1}{d}\mu_1^2F_1^2 \Trace\left(\overline\bM^{-2}\frac{\bTheta\bTheta^\sT}{d} \right)\times (1 + o_d(1)), \\ \E_{\bbeta, \beps} \Phi_2 =&~ -\frac{2}{\psi_2 d^2} \mu_1 F_1^2 \cdot \Trace\Big( \bZ \overline{\bM}^{-2} \bTheta \bX^\sT \Big)\times (1 + o_d(1)),\\ \E_{\bbeta, \beps} \Phi_3 =&~ \psi_2^{-2}d^{-2}F_1^2\cdot \Trace\Big( \bZ\overline{\bM}^{-2}\bZ^\sT \bX\bX^\sT\Big) + \psi_2^{-2}d^{-1}\tau^2 \Trace\Big( \bZ\overline{\bM}^{-2}\bZ^\sT \Big). \end{aligned} \] Next, we express the traces of matrix products as derivatives of the function $G_d(\xi, \bq)$ (c.f. Definition \ref{def:log_determinant_A}). The derivatives of $G_d$ are (these are well-defined at $\bq = \bq_U = (\mu_\star^2 - \lambda\psi_1, \mu_1^2, \psi_2, 0,0)$ with high probability by Assumption \ref{ass:overline_U_invertable}) \begin{equation}\label{eqn:derivative_log_determinant} \begin{aligned} \partial_{q_i} G_d(0, \bq) = \frac{1}{d} \Trace(\bA(\bq)^{-1}\partial_{q_i}\bA(\bq)),~~~~~ \partial_{q_i} \partial_{q_j} G_d(0, \bq) = - \frac{1}{d} \Trace(\bA(\bq)^{-1}\partial_{q_i}\bA(\bq)\bA(\bq)^{-1}\partial_{q_j}\bA(\bq)). \end{aligned} \end{equation} As an example, we consider evaluating $\partial_{s_2} G_d(0, \bq)$ at $\bq = \bq_U \equiv (\mu_\star^2 - \lambda\psi_1, \mu_1^2, \psi_2, 0,0)$.
Using the formula for block matrix inversion, we have \[ \bA(\mu_\star^2 - \lambda\psi_1, \mu_1^2, \psi_2, 0,0)^{-1} = \begin{bmatrix} (\mu_\star^2-\lambda \psi_1)\id_N + \mu_1^2\bQ & \bZ^\sT\\ \bZ & \psi_2\id_n \end{bmatrix}^{-1} = \begin{bmatrix} (\bU_c - \psi_2^{-1}\bZ^\sT \bZ - \psi_1 \lambda \id_N)^{-1} & \cdots\\ \cdots & \cdots \end{bmatrix}. \] Then we have \[ \partial_{s_2} G_d(0, \bq_U) = \frac{1}{d}\Trace\left( \begin{bmatrix} \overline{\bM}^{-1} & \cdots\\ \cdots & \cdots \end{bmatrix} \begin{bmatrix} \bQ & \bzero\\ \bzero & \bzero \end{bmatrix}\right) = \Trace(\overline{\bM}^{-1}\bQ)/d. \] Applying similar arguments to compute the other derivatives, we get \begin{enumerate} \item $\Trace(\overline{\bM}^{-1}\bTheta\bTheta^\sT)/d^2=\Trace(\overline{\bM}^{-1}\bQ)/d = \partial_{s_2} G_d(0, \bq_U)$. \item $\mu_1\cdot\Trace(\bZ\overline{\bM}^{-1}\bTheta\bX^\sT)/d^2 = \Trace(\overline{\bM}^{-1}\bZ_1^\sT\bZ)/d=- \psi_2 \partial_{p} G_d(0, \bq_U) / 2$. \item $\Trace(\bZ\overline{\bM}^{-1}\bZ^\sT\bX\bX^\sT)/d^2 = \Trace(\bZ\overline{\bM}^{-1}\bZ^\sT\bH)/d = \psi_2^2 \partial_{t_2} G_d(0, \bq_U)-\psi_2^2$. \item $\Trace(\bZ\overline{\bM}^{-1}\bZ^\sT)/d = \psi_2^2 \partial_{t_1} G_d(0, \bq_U) - \psi_2^2$. \item $\Trace(\overline{\bM}^{-2}\bQ)/d = -\partial_{s_1}\partial_{s_2}G_d(0, \bq_U)$. \item $(2/d\psi_2)\cdot\Trace(\bZ_1^{\sT}\bZ\overline{\bM}^{-2}) = \partial_{s_1}\partial_p G_d(0, \bq_U)$. \item $\Trace(\overline{\bM}^{-2}\bZ^\sT\bH\bZ)/(d\psi_2^2)=-\partial_{s_1}\partial_{t_2}G_d(0, \bq_U)$. \item $\Trace(\overline{\bM}^{-2}\bZ^\sT\bZ)/(d\psi_2^2)=-\partial_{s_1}\partial_{t_1}G_d(0, \bq_U)$. \end{enumerate} Combining these equations concludes the proof. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:concentration_beta_eps_U}] We prove this lemma by assuming that $\bbeta$ follows a different distribution: $\bbeta \sim \cN(\bzero, (\normf_1^2 / d) \id_d)$. The case when $\bbeta \sim \Unif(\S^{d-1}(\normf_1))$ can be treated similarly.
By directly calculating the variance, we can show that there exist scalars $(c_{ik}^{(d)})_{k \in [K_i]}$ with $c_{ik}^{(d)} = \Theta_{d}(1)$, and matrices $(\bA_{ik}, {\boldsymbol B}_{ik})_{k \in [K_i]} \subseteq \{ \id_N, \bQ, \bZ^\sT \bH \bZ, \bZ^\sT \bZ \}$, such that the variance of the $\Psi_i$'s can be expressed in the form \[ \Var_{\beps, \bbeta}(\Psi_i) = \frac{1}{d} \sum_{k = 1}^{K_i} c_{ik}^{(d)} \Trace({\overline \bM}^{-1} \bA_{ik} {\overline \bM}^{-1} {\boldsymbol B}_{ik}) / d. \] For example, by Lemma \ref{lem:variance_calculations}, we have \[ \begin{aligned} \Var_{\bbeta \sim \cN(\bzero, (\normf_1^2/d) \id_d)}(\Psi_1) =&~ \lambda_{d, 1}(\sigma)^4 \Var_{\bbeta \sim \cN(\bzero, (\normf_1^2/d) \id_d)}(\bbeta^\sT \bTheta^\sT {\overline \bM}^{-1} \bTheta \bbeta) = 2 \lambda_{d, 1}(\sigma)^4 \normf_1^4 \| \bTheta^\sT {\overline \bM}^{-1} \bTheta \|_F^2 / d^2 \\ =&~ c_{1}^{(d)} \Trace( {\overline \bM}^{-1} \bQ {\overline \bM}^{-1} \bQ ) /d^2, \end{aligned} \] where $c_{1}^{(d)} = 2 d^2 \lambda_{d, 1}(\sigma)^4 \normf_1^4 = O_d(1)$. The variances of $\Psi_2$ and $\Psi_3$ can be calculated similarly. Note that each $\Trace({\overline \bM}^{-1} \bA_{ik} {\overline \bM}^{-1} {\boldsymbol B}_{ik}) / d$ can be expressed as an entry of $\nabla_\bq^2 G_d(0; \bq)$ (c.f. Eq. (\ref{eqn:derivative_log_determinant})), and by Proposition \ref{prop:exchange_limit_g_U}, they are of order $O_{d, \P}(1)$. This gives \[ \Var_{\beps, \bbeta}(\Psi_i) = o_{d, \P}(1). \] Similarly, for the same set of scalars $(c_{ik}^{(d)})_{k \in [K_i]}$ and matrices $(\bA_{ik}, {\boldsymbol B}_{ik})_{k \in [K_i]}$, we have \[ \Var_{\beps, \bbeta}(\Phi_i) = \frac{1}{d} \sum_{k = 1}^{K_i} c_{ik}^{(d)} \Trace({\overline \bM}^{-2} \bA_{ik} {\overline \bM}^{-2} {\boldsymbol B}_{ik}) / d. \] Note that for two positive semidefinite matrices $\bA, {\boldsymbol B}$, we have $\Trace(\bA {\boldsymbol B}) \le \| \bA \|_{\op} \Trace({\boldsymbol B})$.
Moreover, we have $\| {\overline \bM}^{-1} \|_{\op} = O_{d, \P}(1)$ (by Assumption \ref{ass:overline_U_invertable}). This gives \[ \Var_{\beps, \bbeta}(\Phi_i) = o_{d, \P}(1). \] This concludes the proof. \end{proof} \subsection{Auxiliary Lemmas}\label{sec:auxiliary_lemmas} The following lemma (Lemma \ref{lem:gegenbauer_identity}) is a reformulation of Proposition 3 in \cite{ghorbani2019linearized}. We present it in a stronger form, but it can be easily derived from the proof of Proposition 3 in \cite{ghorbani2019linearized}. This lemma was first proved in \cite{el2010spectrum} in the Gaussian case. \begin{lemma}\label{lem:gegenbauer_identity} Let $\bTheta = (\btheta_1, \ldots, \btheta_N)^\sT \in \R^{N \times d}$ with $(\btheta_a)_{a\in [N]} \sim_{iid} \Unif(\S^{d- 1}(\sqrt d))$ and $\bX = (\bx_1, \ldots, \bx_n)^\sT \in \R^{n \times d}$ with $(\bx_i)_{i\in [n]} \sim_{iid} \Unif(\S^{d- 1}(\sqrt d))$. Assume $1/c \le n / d, N / d \le c$ for some constant $c \in (0, \infty)$. Then \begin{align} \E\Big[ \sup_{k \ge 2} \| Q_k(\bTheta \bTheta^\sT) - \id_N \|_{\op}^2 \Big]&= o_d(1)\, ,\label{eq:QTT}\\ \E\Big[ \sup_{k \ge 2} \| Q_k(\bTheta \bX^\sT) \|_{\op}^2 \Big] &= o_d(1). \label{eq:QTX} \end{align} \end{lemma} Notice that the second estimate ---on $Q_k(\bTheta \bX^\sT)$--- follows by applying the first one ---Eq.~\eqref{eq:QTT}--- whereby $\bTheta$ is replaced by $\bW = [\bTheta^{\sT}|\bX^{\sT}]^{\sT}$, and we use $\| Q_k(\bTheta \bX^\sT) \|_{\op} \le \| Q_k(\bW \bW^\sT)-\id_{N+n} \|_{\op}$. The following lemma (Lemma \ref{lem:decomposition_of_kernel_matrix}) can be easily derived from Lemma \ref{lem:gegenbauer_identity}. Again, this lemma was first proved in \cite{el2010spectrum} in the Gaussian case.
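The submatrix bound $\| Q_k(\bTheta \bX^\sT) \|_{\op} \le \| Q_k(\bW \bW^\sT)-\id_{N+n} \|_{\op}$ invoked above is an instance of the generic fact that the operator norm of any submatrix is bounded by that of the full matrix. A minimal numerical sketch (random symmetric matrix, illustration only, not the paper's kernel matrices):

```python
import numpy as np

# Generic linear-algebra fact: a submatrix's spectral norm never exceeds
# the full matrix's spectral norm (the submatrix is P E Q for orthogonal
# projections P, Q, and ||P E Q|| <= ||E||).
rng = np.random.default_rng(1)
N, n = 30, 40
M = rng.standard_normal((N + n, N + n))
M = (M + M.T) / 2                  # symmetric, like Q_k(W W^T)
E = M - np.eye(N + n)              # like Q_k(W W^T) - id_{N+n}
block = E[:N, N:]                  # plays the role of Q_k(Theta X^T)
print(np.linalg.norm(block, 2) <= np.linalg.norm(E, 2))  # True
```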
\begin{lemma}\label{lem:decomposition_of_kernel_matrix} Let $\bTheta = (\btheta_1, \ldots, \btheta_N)^\sT \in \R^{N \times d}$ with $(\btheta_a)_{a\in [N]} \sim_{iid} \Unif(\S^{d- 1}(\sqrt d))$. Let the activation function $\sigma$ satisfy Assumption \ref{ass:activation}. Assume $1/c \le N / d \le c$ for some constant $c \in (0, \infty)$. Denote \[ \bU = \Big(\E_{\bx \sim \Unif(\S^{d-1}(\sqrt d))}[\sigma(\< \btheta_a, \bx\> / \sqrt d) \sigma(\< \btheta_b, \bx \> / \sqrt d)] \Big)_{a, b \in [N]} \in \R^{N \times N}. \] Then we can rewrite the matrix $\bU$ as \[ \bU = \lambda_{d, 0}(\sigma)^2 \ones_N \ones_N^\sT + \ob_1^2 \bQ + \ob_\star^2 (\id_N + \bDelta), \] with $\bQ = \bTheta \bTheta^\sT / d$ and $\E[\| \bDelta \|_{\op}^2] = o_{d}(1)$. \end{lemma} In the following, we show that, under sufficient regularity conditions on $\sigma$, we have $\lambda_{d, 0}(\sigma) = O(1/d)$. \begin{lemma}\label{lem:small_lambda_d0} Let $\sigma \in C^2(\R)$ with $\vert \sigma'(x)\vert, \vert \sigma''(x)\vert < c_0 e^{c_1 \vert x \vert}$ for some $c_0, c_1 \in \R$. Assume that $\E_{G \sim \cN(0, 1)}[\sigma(G)] = 0$. Then we have \[ \lambda_{d, 0}(\sigma) \equiv \E_{\bx \sim \Unif(\S^{d-1}(\sqrt{d}))}[\sigma(x_1)] = O(1/d). \] \end{lemma} \begin{proof}[Proof of Lemma \ref{lem:small_lambda_d0}] Let $\bx \sim \Unif(\S^{d-1}(\sqrt{d}))$ and $\gamma \sim \chi(d) / \sqrt{d}$ independently. Then we have $\gamma \bx \sim \cN(\bzero, \id_d)$, so that by the assumption, we have $\E[\sigma(\gamma x_1)] = 0$.
As a consequence, by the second-order Taylor expansion and the independence of $\gamma$ and $\bx$, we have (for some $\xi(x_1)$ between $\gamma$ and $1$) \[ \begin{aligned} \vert \lambda_{d, 0}(\sigma) \vert =&~ \vert \E[\sigma(x_1)]\vert = \vert \E[\sigma(x_1)] - \E[\sigma(\gamma x_1)] \vert \le \Big\vert \E[\sigma'(x_1)x_1] \E[\gamma - 1] \Big\vert + \Big\vert (1/2)\E[\sigma''(\xi(x_1) x_1) x_1^2 (\gamma - 1)^2] \Big\vert\\ \le&~ \Big\vert \E[\sigma'(x_1)x_1] \Big\vert \cdot \Big\vert \E[\gamma - 1] \Big\vert + (1/2)\E\Big[ \sup_{u \in [\gamma, 1]} \sigma''(u x_1)^2 x_1^4 \Big]^{1/2} \E[ (\gamma - 1)^4]^{1/2}. \end{aligned} \] By the assumption that $\vert \sigma'(x)\vert, \vert \sigma''(x) \vert < c_0 e^{c_1 \vert x \vert}$ for some $c_0, c_1 \in \R$, there exists a constant $K$ that only depends on $c_0$ and $c_1$ such that \[ \sup_{d} \Big\vert \E[\sigma'(x_1)x_1] \Big\vert \le K,~~~~~ \sup_{d}\Big\vert (1/2)\E\Big[ \sup_{u \in [\gamma, 1]} \sigma''(u x_1)^2 x_1^4 \Big]^{1/2} \Big\vert \le K. \] Moreover, by properties of the $\chi$ distribution, we have \[ \vert \E[\gamma - 1] \vert = O(d^{-1}), ~~~~~~\E[ (\gamma - 1)^4]^{1/2} = O(d^{-1}). \] This concludes the proof. \end{proof} The following lemma is a simple variance calculation and can be found as Lemma C.5 in \cite{mm19}. We restate it here for completeness. \def{\boldsymbol g}{{\boldsymbol g}} \def{\boldsymbol B}{{\boldsymbol B}} \begin{lemma}\label{lem:variance_calculations} Let $\bA \in \R^{n \times N}$ and ${\boldsymbol B} \in \R^{n \times n}$. Let ${\boldsymbol g} = (g_1, \ldots, g_n)^\sT$ with $g_i \sim_{iid} \P_g$, $\E_g[g] = 0$, and $\E_g[g^2] = 1$. Let $\bh = (h_1, \ldots, h_N)^\sT$ with $h_i \sim_{iid} \P_h$, $\E_h[h] = 0$, and $\E_h[h^2] = 1$. Further, we assume that $\bh$ is independent of ${\boldsymbol g}$.
Then we have \[ \begin{aligned} \Var({\boldsymbol g}^\sT \bA \bh) =&~ \| \bA \|_F^2, \\ \Var({\boldsymbol g}^\sT {\boldsymbol B} {\boldsymbol g}) =&~ \sum_{i = 1}^n B_{ii}^2 (\E[g^4] - 3) + \| {\boldsymbol B} \|_F^2 + \Trace({\boldsymbol B}^2). \end{aligned} \] \end{lemma} \section{Proof of Theorem \ref{thm:main_theorem}}\label{sec:proof_main_thm} Here we give the full proof for $U$; the proof for $T$ is identical. For fixed $A^2 \in \Gamma_U \equiv \{\cA_U(\lambda, \psi_1, \psi_2): \lambda \in \Lambdau \}$, we denote \[ \lambda_\star(A^2) = \inf_{\lambda} \Big\{ \lambda: \cA_U(\lambda, \psi_1, \psi_2) = A^2 \Big\}. \] By the definition of $\Gamma_U$, the set $\{ \lambda: \cA_U(\lambda, \psi_1, \psi_2) = A^2 \}$ is non-empty and lower bounded, so that $\lambda_\star(A^2)$ is well defined. Moreover, we have $\lambda_\star(A^2) \in \Lambdau$. It is also easy to see that we have \begin{equation}\label{eqn:lambda_star_U_Asquare_min} \lambda_\star(A^2) \in \argmin_{\lambda \ge 0} \Big[ \overline \cU(\lambda, \psi_1, \psi_2) + \lambda A^2 \Big]. \end{equation} \subsection{Upper bound} Note that we have \[ \begin{aligned} U(A, N, n, d) =&~ \sup_{(N/d) \| \ba \|_2^2 \le A^2} \Big( R(\ba) - \what R_n(\ba) \Big) \\ \le&~ \inf_{\lambda \ge 0} \sup_{(N/d) \| \ba \|_2^2 \le A^2} \Big( R(\ba) - \what R_n(\ba) - \psi_1\lambda (\|\ba\|_2^2 - \psi_1^{-1}A^2) \Big) \\ \le&~ \inf_{\lambda \ge 0} \Big[ \overline U(\lambda, N, n, d) + \lambda A^2 \Big] \\ \le &~ \overline U(\lambda_\star(A^2), N, n, d) + \lambda_\star(A^2) A^2. \end{aligned} \] Note that $\lambda_\star(A^2) \in \Lambdau$, so by Proposition \ref{prop:asymptotics_barU_A^2}, in the limit of Assumption \ref{ass:linear}, we have \[ U(A, N, n, d) \le \overline \cU(\lambda_\star(A^2), \psi_1, \psi_2) + \lambda_\star(A^2) A^2 + o_{d, \P}(1) = \cU(A, \psi_1, \psi_2) + o_{d, \P}(1), \] where the last equality is by Eq. (\ref{eqn:lambda_star_U_Asquare_min}). This proves the upper bound.
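The first inequality in the chain above is weak Lagrangian duality. As a sanity check (synthetic quadratic objective; the $M$, $v$, $A^2$ below are arbitrary illustrative choices, not the paper's quantities), one can verify numerically that the constrained supremum never exceeds the Lagrangian dual value:

```python
import numpy as np

# Weak duality sketch: sup_{||a||^2 <= A2} f(a) <= inf_{lam >= 0} of
# [sup_a f(a) - lam*(||a||^2 - A2)], with f(a) = <a, M a> - 2 <v, a>.
rng = np.random.default_rng(2)
N, A2 = 8, 1.5
M = rng.standard_normal((N, N)); M = (M + M.T) / 2
v = rng.standard_normal(N)

# Primal value, estimated from many random feasible points in the ball.
a = rng.standard_normal((20000, N))
a *= (np.sqrt(A2) * rng.random(20000) ** (1 / N)
      / np.linalg.norm(a, axis=1))[:, None]
primal = (np.einsum('ij,jk,ik->i', a, M, a) - 2 * a @ v).max()

# Dual value: for lam with M - lam*I negative definite, the inner sup is
# attained at the stationary point a* = (M - lam*I)^{-1} v.
dual = np.inf
for lam in np.linspace(np.linalg.eigvalsh(M)[-1] + 1e-3, 25.0, 200):
    a_star = np.linalg.solve(M - lam * np.eye(N), v)
    dual = min(dual, a_star @ M @ a_star - 2 * v @ a_star
               - lam * (a_star @ a_star - A2))
print(primal <= dual)  # weak duality: True
```

The theorem's lower bound then amounts to showing this inequality is asymptotically tight at $\lambda_\star(A^2)$, which is where strong duality (Proposition \ref{prop:strong_duality}) enters.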
\subsection{Lower bound} For any $A^2 > 0$, we define a random variable $\hat \lambda(A^2)$ (which depends on $\bX$, $\bTheta$, $\bbeta$, $\beps$) by \[ \hat \lambda(A^2) = \inf\Big\{ \lambda: \lambda \in \argmin_{\lambda \ge 0} \Big[ \overline U(\lambda, N, n, d) + \lambda A^2 \Big] \Big\}. \] By Proposition \ref{prop:strong_duality}, this set is always non-empty, so that $\hat \lambda(A^2)$ is always well defined. Moreover, since $\lambda_\star(A^2) \in \Lambdau$, by Assumption \ref{ass:overline_U_invertable}, as shown in the proof of Proposition \ref{prop:concentration_lag}, we can uniquely define $\overline\ba_{U}(\lambda_\star(A^2))$ with high probability, where \[ \overline\ba_{U}(\lambda_\star(A^2)) = \argmax_{\ba} \Big[ R(\ba) - \what R_n(\ba) - \psi_1 \lambda_\star(A^2) \| \ba \|_2^2\Big]. \] As a consequence, for a small $\eps > 0$, the following event $\cE_{\eps, d}$ is well defined with high probability: \[ \begin{aligned} \cE_{\eps, d} =&~ \Big\{ \psi_1 \| \overline\ba_{U} (\lambda_\star(A^2)) \|_2^2 \ge A^2 - \eps \Big\} \cap \Big\{ \hat \lambda(A^2 + \eps) \le \lambda_\star(A^2) \Big\}\\ =&~ \Big\{ A^2 - \eps\le \psi_1 \| \overline\ba_{U} (\lambda_\star(A^2)) \|_2^2 \le A^2 + \eps \Big\}. \end{aligned} \] Now, by Proposition \ref{prop:concentration_lag}, in the limit of Assumption \ref{ass:linear}, we have \begin{equation}\label{eqn:event_Asquare_concentration} \lim_{d \to \infty} \P_{\bX, \bTheta, \bbeta, \beps}(\cE_{\eps, d}) = 1, \end{equation} and we have \begin{equation}\label{eqn:overline_U_concentration} \begin{aligned} \overline U(\lambda_\star(A^2), N, n, d) = \overline \cU(\lambda_\star(A^2), \psi_1, \psi_2) + o_{d, \P}(1). \end{aligned} \end{equation} By the strong duality as in Proposition \ref{prop:strong_duality}, for any $A^2 \in \Gamma_U$, we have \[ \begin{aligned} U(A, N, n, d) =&~ \overline U(\hat \lambda(A^2), N, n, d) + \hat \lambda(A^2) A^2 .
\end{aligned} \] Consequently, for small $\eps >0$, when the event $\cE_{\eps, d}$ happens, we have \[ \begin{aligned} &~ U( (A^2 + \eps)^{1/2}, N, n, d) \\ =&~ \sup_{\ba} \Big( R(\ba) - \what{R}_n(\ba) - \psi_1\hat \lambda(A^2 + \eps) \cdot \big(\| \ba \|_2^2 - \psi_1^{-1}(A^2+ \eps)\big)\Big)\\ \ge&~ R(\overline\ba_{U}(\lambda_\star(A^2))) - \what{R}_n(\overline\ba_{U}(\lambda_\star(A^2))) - \psi_1\hat \lambda(A^2 + \eps) \cdot \big(\|\overline\ba_{U}(\lambda_\star(A^2)) \|_2^2 - \psi_1^{-1}(A^2 + \eps)\big) \\ \ge&~ R(\overline\ba_{U}(\lambda_\star(A^2))) - \what{R}_n(\overline\ba_{U}(\lambda_\star(A^2))) - \psi_1\hat \lambda(A^2 + \eps) \cdot \big(\|\overline\ba_{U}(\lambda_\star(A^2)) \|_2^2 - \psi_1^{-1}(A^2 - \eps)\big) \\ \ge&~ R(\overline\ba_{U}(\lambda_\star(A^2))) - \what{R}_n(\overline\ba_{U}(\lambda_\star(A^2))) - \psi_1 \lambda_\star(A^2) \cdot \big(\|\overline\ba_{U}(\lambda_\star(A^2)) \|_2^2 - \psi_1^{-1}(A^2 - \eps)\big) \\ =&~ \overline U( \lambda_\star(A^2), N, n, d)+ \lambda_\star(A^2) \cdot (A^2 - \eps). \end{aligned} \] As a consequence, by Eq. (\ref{eqn:event_Asquare_concentration}) and (\ref{eqn:overline_U_concentration}), we have \[ U( (A^2 + \eps)^{1/2}, N, n, d) \ge \overline \cU( \lambda_\star(A^2), \psi_1, \psi_2) + \lambda_\star(A^2) \cdot (A^2 - \eps) - o_{d, \P}(1)= \cU(A, \psi_1, \psi_2) - \eps \lambda_\star(A^2) - o_{d, \P}(1), \] where the last equality is by the definition of $\cU$ as in Definition \ref{def:formula_U_T}, and by the fact that $\lambda_\star(A^2) \in \argmin_{\lambda \ge 0} [ \overline \cU(\lambda, \psi_1, \psi_2) + \lambda A^2 ]$. Taking $\eps$ sufficiently small proves the lower bound. This concludes the proof of Theorem \ref{thm:main_theorem}. \section{Technical background} \label{sec:Background} In this section we introduce additional technical background useful for the proofs.
In particular, we will use decompositions in (hyper-)spherical harmonics on the sphere $\S^{d-1}(\sqrt{d})$ and in Hermite polynomials on the real line. We refer the reader to \cite{costas2014spherical,szego1939orthogonal,chihara2011introduction,ghorbani2019linearized, mm19} for further information on these topics. \subsection{Functional spaces over the sphere} For $d \ge 1$, we let $\S^{d-1}(r) = \{\bx \in \R^{d}: \| \bx \|_2 = r\}$ denote the sphere with radius $r$ in $\reals^d$. We will mostly work with the sphere of radius $\sqrt d$, $\S^{d-1}(\sqrt{d})$, and will denote by $\gamma_d$ the uniform probability measure on $\S^{d-1}(\sqrt d)$. All functions in the following are assumed to be elements of $ L^2(\S^{d-1}(\sqrt d) ,\gamma_d)$, with scalar product and norm denoted as $\<\,\cdot\,,\,\cdot\,\>_{L^2}$ and $\|\,\cdot\,\|_{L^2}$: \begin{align} \<f,g\>_{L^2} \equiv \int_{\S^{d-1}(\sqrt d)} f(\bx) \, g(\bx)\, \gamma_d(\de \bx)\,. \end{align} For $\ell\in\integers_{\ge 0}$, let $\tilde{V}_{d,\ell}$ be the space of homogeneous harmonic polynomials of degree $\ell$ on $\reals^d$ (i.e. homogeneous polynomials $q(\bx)$ satisfying $\Delta q(\bx) = 0$), and denote by $V_{d,\ell}$ the linear space of functions obtained by restricting the polynomials in $\tilde{V}_{d,\ell}$ to $\S^{d-1}(\sqrt d)$. With these definitions, we have the following orthogonal decomposition \begin{align} L^2(\S^{d-1}(\sqrt d) ,\gamma_d) = \bigoplus_{\ell=0}^{\infty} V_{d,\ell}\, . \label{eq:SpinDecomposition} \end{align} The dimension of each subspace is given by \begin{align} \dim(V_{d,\ell}) = B(d, \ell) = \frac{2 \ell + d - 2}{\ell} { \ell + d - 3 \choose \ell - 1} \, . \end{align} For each $\ell\in \integers_{\ge 0}$, the spherical harmonics $\{ Y_{\ell j}^{(d)}\}_{1\le j \le B(d, \ell)}$ form an orthonormal basis of $V_{d,\ell}$: \[ \<Y^{(d)}_{ki}, Y^{(d)}_{sj}\>_{L^2} = \delta_{ij} \delta_{ks}.
\] Note that our convention is different from the more standard one, which defines the spherical harmonics as functions on $\S^{d-1}(1)$. It is immediate to pass from one convention to the other by a simple scaling. We will drop the superscript $d$ and write $Y_{\ell, j} = Y_{\ell, j}^{(d)}$ whenever clear from the context. We denote by ${\mathsf P}_k$ the orthogonal projection onto $V_{d,k}$ in $L^2(\S^{d-1}(\sqrt d),\gamma_d)$. This can be written in terms of spherical harmonics as \begin{align} {\mathsf P}_k f(\bx) \equiv&~ \sum_{l=1}^{B(d, k)} \< f, Y_{kl}\>_{L^2} Y_{kl}(\bx). \end{align} Then for a function $f \in L^2(\S^{d-1}(\sqrt d))$, we have \[ f(\bx) = \sum_{k = 0}^\infty {\mathsf P}_k f(\bx) = \sum_{k = 0 }^\infty \sum_{l = 1}^{B(d, k)} \< f, Y_{kl}\>_{L^2} Y_{kl}(\bx). \] \subsection{Gegenbauer polynomials} \label{sec:Gegenbauer} The $\ell$-th Gegenbauer polynomial $Q_\ell^{(d)}$ is a polynomial of degree $\ell$. Consistent with our convention for spherical harmonics, we view $Q_\ell^{(d)}$ as a function $Q_{\ell}^{(d)}: [-d,d]\to \reals$. The set $\{ Q_\ell^{(d)}\}_{\ell\ge 0}$ forms an orthogonal basis of $L^2([-d,d], \tilde \tau_d)$ (where $\tilde \tau_d$ is the distribution of $\<\bx_1, \bx_2\>$ when $\bx_1, \bx_2 \sim_{i.i.d.} \Unif(\S^{d-1}(\sqrt d))$), satisfying the normalization condition: \begin{align} \< Q^{(d)}_k, Q^{(d)}_j \>_{L^2(\tilde \tau_d)} = \frac{1}{B(d,k)}\, \delta_{jk} \, . \label{eq:GegenbauerNormalization} \end{align} In particular, these polynomials are normalized so that $Q_\ell^{(d)}(d) = 1$. As above, we will omit the superscript $d$ when clear from the context (writing $Q_\ell$ for notational simplicity). Gegenbauer polynomials are directly related to spherical harmonics as follows. Fix $\bv\in\S^{d-1}(\sqrt{d})$ and consider the subspace of $V_{\ell}$ formed by all functions that are invariant under rotations in $\reals^d$ that keep $\bv$ unchanged.
It is not hard to see that this subspace has dimension one, and coincides with the span of the function $Q_{\ell}^{(d)}(\<\bv,\,\cdot\,\>)$. We will use the following properties of Gegenbauer polynomials: \begin{enumerate} \item For $\bx, \by \in \S^{d-1}(\sqrt d)$, \begin{align} \< Q_j^{(d)}(\< \bx, \cdot\>), Q_k^{(d)}(\< \by, \cdot\>) \>_{L^2(\S^{d-1}(\sqrt d), \gamma_d)} = \frac{1}{B(d,k)}\delta_{jk} Q_k^{(d)}(\< \bx, \by\>). \label{eq:ProductGegenbauer} \end{align} \item For $\bx, \by \in \S^{d-1}(\sqrt d)$, \begin{align} Q_k^{(d)}(\< \bx, \by\> ) = \frac{1}{B(d, k)} \sum_{i =1}^{ B(d, k)} Y_{ki}^{(d)}(\bx) Y_{ki}^{(d)}(\by). \label{eq:GegenbauerHarmonics} \end{align} \end{enumerate} Note in particular that property 2 implies that --up to a constant-- $Q_k^{(d)}(\< \bx, \by\> )$ is a representation of the projector onto the subspace of degree-$k$ spherical harmonics, \begin{align} ({\mathsf P}_k f)(\bx) = B(d,k) \int_{\S^{d-1}(\sqrt{d})} \, Q_k^{(d)}(\< \bx, \by\> )\, f(\by)\, \gamma_d(\de\by)\, .\label{eq:ProjectorGegenbauer} \end{align} For a function $\sigma \in L^2([-\sqrt d, \sqrt d], \tau_d)$ (where $\tau_d$ is the distribution of $\< \bx_1, \bx_2 \> / \sqrt d$ when $\bx_1, \bx_2 \sim_{i.i.d.} \Unif(\S^{d-1}(\sqrt d))$), define its coefficients $\lambda_{d, k}(\sigma)$ by \begin{align}\label{eqn:technical_lambda_sigma} \lambda_{d, k}(\sigma) = \int_{[-\sqrt d , \sqrt d]} \sigma(x) Q_k^{(d)}(\sqrt d x)\, \tau_d(\de x). \end{align} Then the following decomposition holds in the $L^2([-\sqrt d, \sqrt d],\tau_d)$ sense: \begin{equation}\label{eqn:sigma_G_decomposition} \sigma(x) = \sum_{k = 0}^\infty \lambda_{d, k}(\sigma) B(d, k) Q_k^{(d)}(\sqrt d x). \end{equation} \subsection{Hermite polynomials} The Hermite polynomials $\{\He_k\}_{k\ge 0}$ form an orthogonal basis of $L^2(\reals,\mu_G)$, where $\mu_G(\de x) = e^{-x^2/2}\de x/\sqrt{2\pi}$ is the standard Gaussian measure, and $\He_k$ has degree $k$.
We will follow the classical normalization (here and below, expectation is with respect to $G\sim\normal(0,1)$): \begin{align} \E\big\{\He_j(G) \,\He_k(G)\big\} = k!\, \delta_{jk}\, . \end{align} As a consequence, for any function $\sigma \in L^2(\reals,\mu_G)$, we have the decomposition \begin{align}\label{eqn:sigma_He_decomposition} \sigma(x) = \sum_{k=0}^{\infty}\frac{\mu_k(\sigma )}{k!}\, \He_k(x)\, ,\;\;\;\;\;\; \mu_k(\sigma) \equiv \E\big\{\sigma(G)\, \He_k(G)\big\}\, . \end{align} The Hermite polynomials can be obtained as high-dimensional limits of the Gegenbauer polynomials introduced in the previous section. Indeed, the Gegenbauer polynomials (up to a $\sqrt d$ scaling in domain) are constructed by Gram-Schmidt orthogonalization of the monomials $\{x^k\}_{k\ge 0}$ with respect to the measure $\tau_d$, while the Hermite polynomials are obtained by Gram-Schmidt orthogonalization with respect to $\mu_G$. Since $\tau_d\Rightarrow \mu_G$ (here $\Rightarrow$ denotes weak convergence), it is immediate to show that, for any fixed integer $k$, \begin{align} \lim_{d \to \infty} \Coeff\{ Q_k^{(d)}( \sqrt d x) \, B(d, k)^{1/2} \} = \Coeff\left\{ \frac{1}{(k!)^{1/2}}\,\He_k(x) \right\}\, .\label{eq:Gegen-to-Hermite} \end{align} Here and below, for $P$ a polynomial, $\Coeff\{ P(x) \}$ is the vector of the coefficients of $P$. As a consequence, for any fixed integer $k$, we have \begin{align}\label{eqn:relationship_mu_lambda} \mu_k(\sigma) = \lim_{d \to \infty} \lambda_{d, k}(\sigma) (B(d, k) k!)^{1/2}, \end{align} where $\mu_k(\sigma)$ and $\lambda_{d, k}(\sigma)$ are given in Eqs. (\ref{eqn:sigma_He_decomposition}) and (\ref{eqn:technical_lambda_sigma}).
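As a quick aside (not part of the proofs), both the dimension formula for $B(d,\ell)$ and the Hermite normalization above can be sanity-checked numerically. The following Python sketch uses NumPy's probabilists' Hermite module, whose polynomials $\He_k$ match the normalization $\E\{\He_j(G)\He_k(G)\} = k!\,\delta_{jk}$:

```python
# Numerical sanity checks for two formulas above (a sketch, not part of the
# proofs): the dimension B(d, ell) of the degree-ell spherical-harmonics
# subspace, and the Hermite normalization E[He_j(G) He_k(G)] = k! delta_jk.
import math
from numpy.polynomial import hermite_e as He


def dim_V(d, ell):
    """B(d, ell) = (2*ell + d - 2)/ell * binom(ell + d - 3, ell - 1)."""
    if ell == 0:
        return 1
    return (2 * ell + d - 2) * math.comb(ell + d - 3, ell - 1) // ell


# Gauss-HermiteE quadrature integrates against the weight exp(-x^2 / 2);
# dividing by sqrt(2*pi) turns it into an expectation over G ~ N(0, 1).
pts, wts = He.hermegauss(40)


def gauss_expect(f):
    return float((wts * f(pts)).sum() / math.sqrt(2 * math.pi))


def he(k):
    # Probabilists' Hermite polynomial He_k via its coefficient vector.
    return lambda x: He.hermeval(x, [0.0] * k + [1.0])


val33 = gauss_expect(lambda x: he(3)(x) ** 2)        # 3! = 6
val23 = gauss_expect(lambda x: he(2)(x) * he(3)(x))  # 0
```

For $d=3$ the dimension formula recovers the familiar $B(3,\ell)=2\ell+1$.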
\section{\label{sec:introduction}Introduction} The underlying theory of the strong interaction is quantum chromodynamics (QCD), which is nonperturbative at low energy; it is therefore very difficult to derive the hadron spectrum from QCD directly. Although the traditional quark model, which classifies hadrons as mesons composed of $q\bar{q}$ and baryons composed of $qqq$~\cite{tr_model1,tr_model2}, describes the hadron spectrum well, plenty of states or resonant structures observed in experiments over the last two decades do not fit the spectrum predicted by the naive quark model. Such states are called exotic states. A lot of effort has been made to understand their nature, but it is still controversial. Notably, most of these states lie in the proximity of di-hadron thresholds. To name a few famous examples, the observed $X(3872)$~\cite{Bell-X(3872)} and $Z_{c}(3900)$~\cite{Zc(3900)1,Zc(3900)2,Zc(3900)3} are around the $D\bar{D}^{*}$ threshold, and the $Z_{c}(4020)$~\cite{Zc(4020)1,Zc(4020)2} is near the $D^{*}\bar{D}^{*}$ threshold. Recently, more such states have been found experimentally, e.g., the $Z_{c}(3985)$~\cite{Zc(3985)}, which lies near the $\bar{D}_{s}D^{*}$ and $\bar{D}^{*}_{s}D$ thresholds. Inspired by the recent LHCb results on the $X_{c0}(3930)$, which lies just below the $D_{s}\bar{D}_{s}$ threshold~\cite{LHCb_X3930.1,LHCb_X3930.2}, we have a great interest in studying the tetraquark system composed of $cs\bar{c}\bar{s}$. Besides, several experimental collaborations have discovered relevant resonance states.
For instance, in 2009 the CDF Collaboration reported a new state, the $X(4140)$, in the $J/\psi\phi$ invariant mass distribution~\cite{CDF-CCSS}, and this structure was subsequently observed by other collaborations, such as LHCb, CMS, D0, and BABAR~\cite{LHCb-CCSS,CMS-CCSS,D0-CCSS,BABAR-CCSS}. In 2010, a narrow resonance, the $X(4350)$, was found in the $\gamma\gamma\longrightarrow J/\psi\phi$ process by the Belle Collaboration~\cite{Bell-CCSS}, with favored spin parity $J^{PC}=0^{++}$ or $2^{++}$. The CDF Collaboration also observed the resonance state $X(4274)$ in the $B^+\longrightarrow J/\psi \phi K^+$ decay with $3.1\sigma$ significance~\cite{CDF-CCSS-4274}. In 2016, in the $B^+\longrightarrow J/\psi \phi K^+$ decay, the LHCb Collaboration confirmed the existence of the $X(4140)$ and $X(4274)$~\cite{LHCb-CCSS-2017.1}, whose quantum numbers were measured to be $J^{PC}=1^{++}$; in the same process, the collaboration observed two higher resonances, the $X(4500)$ and $X(4700)$, with $J^{PC}=0^{++}$~\cite{LHCb-CCSS-2017.2}. In 2021, an improved full amplitude analysis of the $B^+\longrightarrow J/\psi \phi K^+$ decay was performed using a signal yield six times larger than in the previous analysis, and the LHCb Collaboration discovered two new hadron states, the $X(4685)$ and $X(4630)$~\cite{LHCb-CCSS-2021}. To identify the internal structure of these resonance states, a large number of theoretical studies of the $cs\bar{c}\bar{s}$ tetraquark state have been carried out. In Ref.~\cite{Canonical-CCSS}, the $X(4140)$ resonance appeared as a cusp in the $J/\psi\phi$ channel, while the $X(4274)$, $X(4500)$, and $X(4700)$ were all identified as conventional charmonium states by using a nonrelativistic constituent quark model.
In the relativized quark model, the $X(4140)$ resonance was regarded as the $cs\bar{c}\bar{s}$ tetraquark ground state, the $X(4700)$ was assigned as the 2S excited tetraquark state, the $X(4500)$ was explained as a tetraquark composed of one 2S scalar diquark and one scalar antidiquark, and the $X(4274)$ was a good candidate for the conventional $\chi_{c1}$ state~\cite{relative-CCSS}. It was argued that cusp effects might explain the structure of the $X(4140)$ but fail to account for the $X(4274)$~\cite{effect-cups-CCSS}. In Ref.~\cite{di-antiquark-CCSS}, the tetraquark interpretation of the $X(4140)$ and $X(4274)$ was consistent with the $X(4350)$, while interpreting the $X(4500)$ and $X(4700)$ required orbital or radial excitations in the simple color-magnetic interaction model. Maiani et al. proposed that the $X(4140)$ and the $X(4274)$ could be explained as the ground-state 1S multiplet of diquark-antidiquark tetraquarks, while the $X(4500)$ and $X(4700)$ were radially excited 2S states~\cite{di-antiquark-CCSS1}. Stancu argued that the $X(4140)$ could be the strange partner of the $X(3872)$ in a tetraquark interpretation within a simple quark model with the chromomagnetic interaction~\cite{Stancu-CCSS}. In Ref.~\cite{QCD-sum-rule-CCSS}, based on the diquark-antidiquark configuration within the framework of QCD sum rules, the $X(4500)$ and the $X(4700)$ were interpreted as D-wave $cs\bar{c}\bar{s}$ tetraquark states with $J^{P}=0^+$. According to a calculation with the multiquark color flux-tube model, Deng et al. pointed out that the $X(4500)$ and the $X(4700)$ were S-wave radially excited $[cs][\bar{c}\bar{s}]$ states~\cite{color-flux-tube-CCSS}. Moreover, Yang explained the $X(4274)$ as a $cs\bar{c}\bar{s}$ tetraquark state with $J^{PC}=1^{++}$, the $X(4350)$ as a good candidate for a compact tetraquark state with $J^{PC}=0^{++}$, and the $X(4700)$ as a 2S radially excited tetraquark state with $J^{PC}=0^{++}$~\cite{CHQM-CCSS}.
In Refs.~\cite{wang-diquark-CCSS1,wang-diquark-CCSS2}, the $X(4500)$ was interpreted as the first radially excited state of the axial-vector-diquark-axial-vector-antidiquark type scalar $cs\bar{c}\bar{s}$ tetraquark state, and the $X(4700)$ was assigned as the ground-state vector-diquark-vector-antidiquark type scalar $cs\bar{c}\bar{s}$ tetraquark state, but the results disfavored assigning the $X(4140)$ to the $J^{PC}=1^{++}$ diquark-antidiquark type $cs\bar{c}\bar{s}$ tetraquark state. A rescattering mechanism was used in Ref.~\cite{rescattering-CCSS} to understand the nature of the $X(4140)$, $X(4350)$, $X(4500)$, and $X(4700)$, among which the $X(4140)$ and the $X(4700)$ could be simulated by the $D^{*}_{s}D_{s}$ rescattering and the $\psi^{'}\phi$ rescattering, respectively. However, this mechanism failed to generate the $X(4274)$ and $X(4500)$, which led to the proposal that they might be genuine resonances. In Ref.~\cite{Ebert}, the masses of excited heavy tetraquarks with hidden charm were calculated within the relativistic diquark-antidiquark picture, and the results showed that the $X(3872)$, $Y(4260)$, $Y(4360)$, $Z(4248)$, $Z(4433)$, and $Y(4660)$ could be tetraquark states with hidden charm. In this work, to see whether these exotic resonances can be described as $cs\bar{c}\bar{s}$ tetraquark systems with $J^{P}=0^{+}$, $1^{+}$, and $2^{+}$, we systematically study their properties by using the quark delocalization color screening model (QDCSM)~\cite{QDCSM_explain1}, which was proposed particularly to study the similarities between nuclear and molecular forces. The QDCSM gives a good description of the properties of the deuteron and of the nucleon-nucleon and hyperon-nucleon interactions~\cite{QDCSM_explain2, QDCSM_explain3}. In the present calculation, two configurations, the meson-meson ($q\bar{q}-q\bar{q}$) and the diquark-antidiquark ($qq-\bar{q}\bar{q}$), are taken into account.
Besides, to be more convincing, the channel-coupling effect in the $cs\bar{c}\bar{s}$ tetraquark systems is also included. This work is organized as follows. In Section~\ref{model}, we present a review of the quark delocalization color screening model and the wave functions of the total system. The numerical results and a discussion of the tetraquarks are given in Section~\ref{results}. Finally, the last section is devoted to a brief summary. \section{THE QUARK DELOCALIZATION COLOR SCREENING MODEL (QDCSM) AND WAVE FUNCTIONS }{\label{model}} \subsection{The quark delocalization color screening model (QDCSM)} The quark delocalization color screening model (QDCSM) is an extension of the naive quark cluster model~\cite{native} and was developed with the aim of addressing multiquark systems. Details of the QDCSM can be found in Refs.~\cite{QDCSM_explain1, QDCSM1, QDCSM2}. Here, the general form of the four-body Hamiltonian is given by \begin{equation} H = \sum_{i=1}^{4} \left(m_i+\frac{\boldsymbol{p}_i^2}{2m_i}\right)-T_{CM}+\sum_{j>i=1}^4V(r_{ij}), \end{equation} where the center-of-mass kinetic energy $T_{CM}$ is subtracted without loss of generality, since we mainly focus on the internal relative motions of the multiquark system. The two-body potential includes the color-confining interaction $V_{CON}$, the one-gluon-exchange interaction $V_{OGE}$, and the Goldstone-boson-exchange interaction $V_{\chi}$, \begin{equation} V(r_{ij}) = V_{CON}(r_{ij})+V_{OGE}(r_{ij})+V_{\chi}(r_{ij}). \end{equation} In this work, we focus on the low-lying positive-parity $S$-wave $cs \bar{c} \bar{s}$ tetraquark states, so the spin-orbit and tensor interactions are not included.
The potential $V_{OGE}(r_{ij})$ can be written as \begin{equation} V_{OGE}(r_{ij}) = \frac{1}{4}\alpha_s \boldsymbol{\lambda}^{c}_i \cdot \boldsymbol{\lambda}^{c}_j \left[\frac{1}{r_{ij}}-\frac{\pi}{2}\delta(\boldsymbol{r}_{ij})\left(\frac{1}{m^2_i}+\frac{1}{m^2_j} +\frac{4\boldsymbol{\sigma}_i\cdot\boldsymbol{\sigma}_j}{3m_im_j}\right)\right], \end{equation} where $m_{i}$ and $\boldsymbol{\sigma}$ are the quark mass and the Pauli matrices, respectively, and $\boldsymbol{\lambda}^{c}$ is the SU(3) color matrix. The QCD-inspired effective scale-dependent strong coupling constant, $\alpha_s^{ij}$, offers a consistent description of mesons from the light- to the heavy-quark sector. Similarly, the confining interaction $V_{CON}(r_{ij})$ can be expressed as \begin{equation} V_{CON}(r_{ij}) = -a_{c}\boldsymbol{\lambda^{c}_{i}\cdot\lambda^{c}_{j}}[f(r_{ij})+V_{0_{ij}}], \end{equation} where $f(r_{ij})$ can be written as \begin{equation} f(r_{ij}) = \left\{ \begin{array}{ll}r_{ij}^2 &\qquad \mbox{if }i,j\mbox{ occur in the same cluster} \\ \frac{1 - e^{-\mu_{ij} r_{ij}^2} }{\mu_{ij}} & \qquad \mbox{if }i,j\mbox{ occur in different clusters} \\ \end{array} \right. \end{equation} Here the color screening parameter $\mu_{ij}$ is determined by fitting the deuteron properties and the $NN$ and $NY$ scattering phase shifts, with $\mu_{qq}= 0.45$, $\mu_{qs}= 0.19$, and $\mu_{ss}= 0.08$, satisfying the relation $\mu_{qs}^{2}=\mu_{qq}\mu_{ss}$, where $q$ represents the $u$ or $d$ quark. When extending to the heavy-quark case, we found in the calculation of the $P_{c}$ states~\cite{Pc_huang1} that the results depend only weakly on the parameter $\mu_{cc}$ when it is varied from $0.0001$ to $0.01$, so here we take $\mu_{cc}=0.01$. Then $\mu_{sc}$ and $\mu_{uc}$ are obtained from the relations $\mu_{sc}^{2}=\mu_{ss}\mu_{cc}$ and $\mu_{uc}^{2}=\mu_{uu}\mu_{cc}$, respectively. The Goldstone-boson-exchange interactions between light quarks appear because of the dynamical breaking of chiral symmetry.
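To make the screening prescription concrete, the following minimal Python sketch (illustrative only; the fm$^{-2}$ units for $\mu_{ij}$ are inferred from the requirement that $\mu_{ij}r_{ij}^{2}$ be dimensionless) evaluates the geometric-mean relations and the screened confinement factor $f(r_{ij})$:

```python
import math

# Color-screening parameters quoted in the text; the exponent mu_ij * r^2
# must be dimensionless, so mu_ij carries units of fm^-2 (an inference,
# not stated explicitly in the text).
mu_qq, mu_ss, mu_cc = 0.45, 0.08, 0.01

# Cross-flavor parameters from the geometric-mean relations in the text.
mu_qs = math.sqrt(mu_qq * mu_ss)  # ~0.19, matching the quoted value
mu_sc = math.sqrt(mu_ss * mu_cc)
mu_uc = math.sqrt(mu_qq * mu_cc)


def f_conf(r, mu, same_cluster):
    """Radial confinement factor f(r_ij): quadratic inside a cluster,
    screened (saturating at 1/mu for large r) between clusters."""
    if same_cluster:
        return r ** 2
    return (1.0 - math.exp(-mu * r ** 2)) / mu
```

For small $\mu_{ij} r_{ij}^{2}$ the screened branch reduces to $r_{ij}^{2}$, so the two branches match at short distances, while at large separations the inter-cluster interaction saturates instead of rising quadratically.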
For the $cs\bar{c}\bar{s}$ system, the $\pi$ and $K$ exchange interactions do not appear because there are no up or down quarks herein. Only the following $\eta$ exchange term works between the $s\bar{s}$ pair: \begin{equation} V_{\chi}(r_{ij}) = v^{\eta}_{ij}\left[\left(\lambda _{i}^{8}\cdot \lambda _{j}^{8}\right)\cos\theta_P-(\lambda _{i}^{0}\cdot \lambda_{j}^{0}) \sin\theta_P\right] \label{sala-Vchi1} \end{equation} with \begin{eqnarray} \nonumber v^{\eta}_{ij} &=& {\frac{g_{ch}^{2}}{{4\pi}}}{\frac{m_{\chi}^{2}}{{\ 12m_{i}m_{j}}}}{\frac{\Lambda _{\chi}^{2}}{{\Lambda _{\chi}^{2}-m_{\chi}^{2}}}} m_{\chi} \\ &&\left\{(\boldsymbol{\sigma}_{i}\cdot\boldsymbol{\sigma}_{j}) \left[ Y(m_{\chi}\,r_{ij})-{\frac{\Lambda_{\chi}^{3}}{m_{\chi}^{3}}} Y(\Lambda _{\chi}\,r_{ij})\right] \right\} \end{eqnarray} where $Y(x)=e^{-x}/x$ is the standard Yukawa function and $\boldsymbol{\lambda}^{a}$ are the SU(3) flavor Gell-Mann matrices. The mass of the $\eta$ meson is taken from the experimental value~\cite{PDG}. Finally, the chiral coupling constant, $g_{ch}$, is determined from the $\pi NN$ coupling constant through \begin{equation} \frac{g_{ch}^{2}}{4\pi}=\left(\frac{3}{5}\right)^{2} \frac{g_{\pi NN}^{2}}{4\pi} {\frac{m_{u,d}^{2}}{m_{N}^{2}}}, \end{equation} which assumes that flavor SU(3) is an exact symmetry, broken only by the different mass of the strange quark. The model parameters and the masses of the ground-state mesons are listed in Tables~\ref{parameters} and \ref{mass}, respectively. \begin{table}[ht] \caption{Model parameters. The masses of mesons take their experimental values.
$m_{\eta}=2.77$ fm$^{-1}$.} \begin{tabular}{cccccccc} \hline \hline Quark masses &$m_u$ (MeV) & 313 \\ &$m_s$ (MeV) & 536 \\ &$m_c$ (MeV) & 1728 \\ Confinement &$b$ (fm) &0.3\\ &$a_{c}$ (MeV fm$^{-2}$) &101 \\ &$V_{0_{uu}}$ (MeV) &-2.2543\\ &$V_{0_{us}}$ (MeV) &-1.7984\\ &$V_{0_{uc}}$ (MeV) &-1.3231\\ &$V_{0_{ss}}$ (MeV) &-1.3649\\ &$V_{0_{sc}}$ (MeV) &-0.6739\\ &$V_{0_{cc}}$ (MeV) &0.7555\\ OGE &$\alpha_{s}^{uu}$ &0.2567\\ &$\alpha_{s}^{us}$ &0.2970\\ &$\alpha_{s}^{uc}$ &0.3805\\ &$\alpha_{s}^{ss}$ &0.1905\\ &$\alpha_{s}^{sc}$ &0.6608\\ &$\alpha_{s}^{cc}$ &1.6717\\ \hline\hline \end{tabular} \label{parameters} \end{table} \begin{table}[ht] \caption{The masses (in MeV) of the ground-state mesons. Experimental values are taken from the Particle Data Group (PDG)~\cite{PDG}.} \begin{tabular}{lcccccccc} \hline \hline ~~~~~~&~~$K$~~ &~~$K^{*}$~~ &~~$\pi$~~ &~~$\rho$~~ &~~$\eta_{s\bar s}$~~ &~~$\phi$~~~~~~ \\ \hline Expt &495 & 892 &139 &770 &958 &1020 \\ Model &495 & 892 &139 &770 &958 &1020 \\ ~~~~~~&~~$D_{s}$~~ &~~$D_{s}^{*}$~~ &~~$\eta_{c\bar c}$~~ &~~$J/\psi$~~ &~~$D$~~ &~~$D^{*}$~~~~~~ \\ Expt &1968 &2112 &2983 &3096 &1865 &2007 \\ Model &1968 &2112 &2983 &3096 &1865 &2007 \\ \hline\hline \end{tabular} \label{mass} \end{table} In the QDCSM, the quark delocalization is realized by specifying the single-particle orbital wave function as a linear combination of left and right Gaussians, the single-particle orbital wave functions used in the ordinary quark cluster model: \begin{eqnarray} \psi_{r}(\boldsymbol{r},\boldsymbol{s}_{i},\epsilon)&=&(\phi_{R}(\boldsymbol{r},\boldsymbol{s}_{i}) +\epsilon\phi_{L}(\boldsymbol{r},\boldsymbol{s}_{i}))/N(\epsilon), \label{rl1} \\ \psi_{l}(\boldsymbol{r},\boldsymbol{s}_{i},\epsilon)&=&(\phi_{L}(\boldsymbol{r},\boldsymbol{s}_{i}) +\epsilon\phi_{R}(\boldsymbol{r},\boldsymbol{s}_{i}))/N(\epsilon), \label{rl2} \\ N(\epsilon)&=& \sqrt{1+\epsilon^2+2\epsilon e^{-s^2_{i}/{4b^2}}},\\ \phi_{R}(\boldsymbol{r},\boldsymbol{s}_{i})&=&(\frac{1}{\pi
b^2})^{\frac{3}{4}} e^{-\frac{1}{2b^2}(\boldsymbol{r}-\frac{2}{5}s_{i})^2},\\ \phi_{L}(\boldsymbol{r},\boldsymbol{s}_{i})&=&(\frac{1}{\pi b^2})^{\frac{3}{4}} e^{-\frac{1}{2b^2}(\boldsymbol{r}+\frac{3}{5}s_{i})^2}, \end{eqnarray} The $\boldsymbol{s}_{i}$, $i=1,2,\ldots, n$, are the generating coordinates, which are introduced to expand the relative motion wave function~\cite{si_1,si_2,si_3}. The mixing parameter $\epsilon(s_{i})$ is not an adjustable one but is determined variationally by the dynamics of the multiquark system itself. This assumption allows the multiquark system to choose its favorable configuration in the interacting process. It has been used to explain the crossover transition between the hadron phase and the quark-gluon plasma phase~\cite{si_4}. \subsection{The wave function} In this work, we focus on the $c\bar c s\bar s$ system by using the resonating group method~\cite{RGM}. Figure~\ref{fig1} shows two kinds of configurations for this system: the meson-meson structures shown in Figs.~\ref{fig1}(a) and \ref{fig1}(b), and the diquark-antidiquark structure shown in Fig.~\ref{fig1}(c). To keep the four-body problem manageable, the present calculation considers only these two structures, but an economical way is used to combine the two configurations to see the effect of the multichannel coupling. At the quark level, four fundamental degrees of freedom are generally accepted in QCD: color, spin, flavor, and orbit. The wave function of the multiquark system is a direct product of the color, spin, flavor, and orbital terms. \begin{figure*}[!htb] \includegraphics[scale=0.25]{jacobi.eps} \vspace{-2.5cm} \caption{Two types of configurations in $c\bar c s\bar s$ tetraquarks. Panels (a) and (b) show the meson-meson configuration; panel (c) shows the diquark-antidiquark configuration.
} \label{fig1} \end{figure*} \subsubsection{The color wave function} Compared with conventional hadrons such as $q\bar{q}$ mesons and $qqq$ baryons, plenty of additional color structures are available in multiquark systems. In this section, the goal is to construct the colorless wave function of the four-quark system. For the meson-meson configurations, the color wave functions of a $q\bar{q}$ cluster are listed below: \begin{eqnarray} \nonumber C^{1}_{[111]} &=& \sqrt{\frac{1}{3}}(r\bar{r}+g\bar{g}+b\bar{b}) \\ \nonumber C^{2}_{[21]} &=& r\bar{b}, C^{3}_{[21]} = -r\bar{g} \\ \nonumber C^{4}_{[21]} &=& g\bar{b}, C^{5}_{[21]} = -b\bar{g} \\ \nonumber C^{6}_{[21]} &=& g\bar{r}, C^{7}_{[21]} = b\bar{r} \\ \nonumber C^{8}_{[21]} &=& \sqrt{\frac{1}{2}}(r\bar{r}-g\bar{g}) \\ \nonumber C^{9}_{[21]} &=& \sqrt{\frac{1}{6}}(-r\bar{r}-g\bar{g}+2b\bar{b}) \\ \end{eqnarray} where the subscripts [111] and [21] stand for color-singlet ($\textbf{1}_{c}$) and color-octet ($\textbf{8}_{c}$), respectively. The $SU(3)_{color}$ wave functions of the color-singlet (two color-singlet clusters, $\textbf{1}_{c}\otimes\textbf{1}_{c}$) and hidden-color (two color-octet clusters, $\textbf{8}_{c}\otimes\textbf{8}_{c}$) channels are then given, respectively, by
\begin{equation} \chi^{c}_{1} = C^{1}_{[111]}C^{1}_{[111]} \end{equation} \begin{equation} \begin{split} \chi^{c}_{2} =&\sqrt{\frac{1}{8}}(C^{2}_{[21]}C^{7}_{[21]}-C^{4}_{[21]}C^{5}_{[21]}-C^{3}_{[21]}C^{6}_{[21]}\\ &+C^{8}_{[21]}C^{8}_{[21]}-C^{6}_{[21]}C^{3}_{[21]}+C^{9}_{[21]}C^{9}_{[21]}\\ &-C^{5}_{[21]}C^{4}_{[21]}+C^{7}_{[21]}C^{2}_{[21]}) \end{split} \end{equation} For the diquark-antidiquark structure, the color wave functions of the diquark clusters are given by \begin{eqnarray} \nonumber C^{1}_{[2]} &=& rr, C^{2}_{[2]} = \sqrt{\frac{1}{2}}(rg+gr) \\ \nonumber C^{3}_{[2]} &=& gg, C^{4}_{[2]} = \sqrt{\frac{1}{2}}(rb+br)\\ \nonumber C^{5}_{[2]} &=& \sqrt{\frac{1}{2}}(gb+bg), C^{6}_{[2]} = bb \\ \nonumber C^{7}_{[11]} &=& \sqrt{\frac{1}{2}}(rg-gr), C^{8}_{[11]} = \sqrt{\frac{1}{2}}(rb-br)\\ \nonumber C^{9}_{[11]} &=&\sqrt{\frac{1}{2}}(gb-bg) \\ \end{eqnarray} and the color wave functions of the antidiquark clusters can be written as \begin{eqnarray} \nonumber C^{1}_{[22]} &=& \bar{r}\bar{r}, C^{2}_{[22]} = -\sqrt{\frac{1}{2}}(\bar{r}\bar{g}+\bar{g}\bar{r})\\ \nonumber C^{3}_{[22]} &=& \bar{g}\bar{g}, C^{4}_{[22]} = \sqrt{\frac{1}{2}}(\bar{r}\bar{b}+\bar{b}\bar{r}) \\ \nonumber C^{5}_{[22]} &=& -\sqrt{\frac{1}{2}}(\bar{g}\bar{b}+\bar{b}\bar{g}), C^{6}_{[22]} = \bar{b}\bar{b} \\ \nonumber C^{7}_{[211]}&=& \sqrt{\frac{1}{2}}(\bar{r}\bar{g}-\bar{g}\bar{r}), C^{8}_{[211]} = -\sqrt{\frac{1}{2}}(\bar{r}\bar{b}-\bar{b}\bar{r}) \\ \nonumber C^{9}_{[211]} &=& \sqrt{\frac{1}{2}}(\bar{g}\bar{b}-\bar{b}\bar{g}) \\ \end{eqnarray} The color wave functions of the diquark-antidiquark structure shown in Fig.~\ref{fig1}(c) are $\chi^{c}_{3}$ (color sextet-antisextet clusters, $\textbf{6}_{c}\otimes\bar{\textbf{6}}_{c}$) and $\chi^{c}_{4}$ (color triplet-antitriplet clusters, $\textbf{3}_{c}\otimes\bar{\textbf{3}}_{c}$).
\begin{equation} \begin{split} \chi^{c}_{3} = &\sqrt{\frac{1}{6}}(C^{1}_{[2]}C^{1}_{[22]}-C^{2}_{[2]}C^{2}_{[22]}+C^{3}_{[2]}C^{3}_{[22]} \\ &+C^{4}_{[2]}C^{4}_{[22]}-C^{5}_{[2]}C^{5}_{[22]}+C^{6}_{[2]}C^{6}_{[22]}) \end{split} \end{equation} \begin{equation} \begin{split} \chi^{c}_{4} =&\sqrt{\frac{1}{3}}(C^{7}_{[11]}C^{7}_{[211]}-C^{8}_{[11]}C^{8}_{[211]}+C^{9}_{[11]}C^{9}_{[211]}) \end{split} \end{equation} \subsubsection{The flavor wave function} For the flavor degree of freedom, since the quark content of the tetraquark system is two heavy quarks and two strange quarks, only the isoscalar sector $I=0$ occurs. The flavor wave functions, denoted as $F^{i}_{I}$ with the subscript $I$ referring to the isospin, can be written as \begin{eqnarray} \nonumber F^{1}_{0}&=& c\bar{c}s\bar{s} \\ \nonumber F^{2}_{0}&=& c\bar{s}s\bar{c} \\ F^{3}_{0}&=& cs\bar{c}\bar{s} \end{eqnarray} \subsubsection{The spin wave function} For the spin, the total spin $S$ of the tetraquark states ranges from 0 to 2. All of them are considered.
The spin wave functions of the two-body clusters are \begin{eqnarray} \nonumber \chi_{11}&=& \alpha\alpha,\\ \nonumber \chi_{10} &=& \sqrt{\frac{1}{2}}(\alpha\beta+\beta\alpha)\\ \nonumber \chi_{1-1} &=& \beta\beta \\ \chi_{00} &=& \sqrt{\frac{1}{2}}(\alpha\beta-\beta\alpha) \end{eqnarray} The total spin wave functions $S^{i}_{s}$ are then obtained by coupling the spin wave functions of the two subclusters with SU(2) algebra, and the total spin wave functions of the four-quark states read \begin{eqnarray} \nonumber S^{1}_{0}&=&\chi_{00}\chi_{00}\\ \nonumber S^{2}_{0}&=&\sqrt{\frac{1}{3}}(\chi_{11}\chi_{1-1}-\chi_{10}\chi_{10}+\chi_{1-1}\chi_{11})\\ \nonumber S^{3}_{1}&=&\chi_{00}\chi_{11}\\ \nonumber S^{4}_{1}&=&\chi_{11}\chi_{00}\\ \nonumber S^{5}_{1}&=&\sqrt{\frac{1}{2}}(\chi_{11}\chi_{10}-\chi_{10}\chi_{11}) \\ S^{6}_{2}&=&\chi_{11}\chi_{11} \end{eqnarray} \subsubsection{The orbital wave function} Among the different methods to solve the Schr\"{o}dinger-like four-body bound-state equation, we use the resonating group method (RGM)~\cite{RGM}, one of the most widely used tools for eigenvalue and scattering problems. The total orbital wave function is constructed by coupling the orbital wave functions of the two internal clusters with the relative motion wave function between the two clusters, \begin{equation} \psi^{L}=\psi_{1}(R_{1})\psi_{2}(R_{2})\chi_{L}(R) \end{equation} where $R_{1}$ and $R_{2}$ are the internal coordinates of clusters 1 and 2, and $R=R_{1}-R_{2}$ is the relative coordinate between the two clusters.
The $\psi_{1}$ and $\psi_{2}$ are the internal orbital wave functions of clusters 1 and 2, and $\chi_{L}(R)$ is the relative motion wave function between the two clusters, which is expanded in Gaussian bases, \begin{eqnarray} \begin{split} \chi_{L}(R)=&\sqrt{\frac{1}{4\pi}}(\frac{3}{2\pi b^2})^{3/4}\sum^{n}_{i=1}C_{i} \\ &\times \int \exp\left[-\frac{3}{4b^2}(R-s_{i})^2\right]Y_{LM}(\hat{s_{i}})d\hat{s_{i}} \end{split} \end{eqnarray} where $n$ is the number of Gaussian bases, which is determined by the stability of the results. Finally, to fulfill the Pauli principle, the complete wave function is written as \begin{equation} \psi=A[[\psi^{L}S^{j}_{s}]_{JM_{J}}F^{i}_{I}\chi^{c}_{k}] \end{equation} where $A$ is the antisymmetrization operator of the double-heavy tetraquark system. In this work, $A=1$ because there are no identical quarks in the $c\bar{c}s\bar{s}$ system. \section{RESULTS AND DISCUSSIONS}{\label{results}} The low-lying $S$-wave states of the $c\bar{c}s\bar{s}$ tetraquark are systematically investigated herein. The parity of the $c\bar{c}s\bar{s}$ tetraquark is positive under our assumption that the total orbital angular momentum $L$ is 0. Accordingly, the total angular momentum $J$ can take the values 0, 1, and 2, while the isospin can only be 0 for the $c\bar{c}s\bar{s}$ tetraquark system. Two structures of the $c\bar{c}s\bar{s}$ tetraquark, the meson-meson and diquark-antidiquark structures, are investigated. In each structure, all possible states are considered; they are listed in Table~\ref{channels}. The column $F^{i}_{I}; S^{j}_{s}; \chi^{c}_{k}$ shows the necessary basis combination in the flavor $(F^{i}_{I})$, spin $(S^{j}_{s})$, and color $(\chi^{c}_{k})$ degrees of freedom. For the meson-meson structure, only the color singlet-singlet $(1\times1)$ channels are taken into account, because the effect of hidden-color channel coupling is already considered in the QDCSM~\cite{QDCSM1,QDCSM2}.
\begin{table*}[!htb] \begin{center} \caption{\label{channels} All possible channels for all quantum numbers.} \begin{tabular}{|ccc|ccc|ccc|} \hline\hline \multicolumn{3}{|c}{$IJ^{P}=00^{+}$} &\multicolumn{3}{|c|}{$IJ^{P}=01^{+}$} &\multicolumn{3}{c|}{$IJ^{P}=02^{+}$} \\
index &$F^{i}_{I}; S^{j}_{s}; \chi^{c}_{k}$ &channels & index &$F^{i}_{I}; S^{j}_{s}; \chi^{c}_{k}$ &channels & index &$F^{i}_{I}; S^{j}_{s}; \chi^{c}_{k}$ &channels \\
&[i;j;k] & & &[i;j;k] & & &[i;j;k] & \\ \hline
1 & [1,1,1] &$\eta_{c}\eta_{s}$ &1 & [1,3,1] &$\eta_{c}\phi$ &1 & [1,6,1] &$J/\psi\phi$\\
2 & [2,1,1] &$D_{s}\bar{D}_{s}$ &2 & [2,3,1] &$D_{s}{D}_{s}^{*}$ &2 & [2,6,1] &$D_{s}^{*}{D}_{s}^{*}$\\
3 & [1,2,1] &$J/\psi \phi$ &3 & [1,4,1] &$J/\psi\eta_{s}$ &3 & [3,6,3] &$(cs)(\bar{c}\bar{s})$\\
4 & [2,2,1] &$D_{s}^{*}\bar{D}_{s}^{*}$ &4 & [2,4,1] &$D_{s}^{*}{D}_{s}$ &4 & [3,6,4] &$(cs)(\bar{c}\bar{s})$\\
5 & [3,1,3] &$(cs)(\bar{c}\bar{s})$ &5 & [1,5,1] &$J/\psi\phi$ & \multicolumn{3}{c|}{} \\
6 & [3,1,4] &$(cs)(\bar{c}\bar{s})$ &6 & [2,5,1] &$D_{s}^{*}{D}_{s}^{*}$ & \multicolumn{3}{c|}{}\\
7 & [3,2,3] &$(cs)(\bar{c}\bar{s})$ &7 & [3,3,3] &$(cs)(\bar{c}\bar{s})$ & \multicolumn{3}{c|}{} \\
8 & [3,2,4] &$(cs)(\bar{c}\bar{s})$ &8 & [3,3,4] &$(cs)(\bar{c}\bar{s})$ & \multicolumn{3}{c|}{}\\
\multicolumn{3}{|c|}{} &9 & [3,4,3] &$(cs)(\bar{c}\bar{s})$ & \multicolumn{3}{c|}{}\\
\multicolumn{3}{|c|}{} &10& [3,4,4] &$(cs)(\bar{c}\bar{s})$ & \multicolumn{3}{c|}{} \\
\multicolumn{3}{|c|}{} &11& [3,5,3] &$(cs)(\bar{c}\bar{s})$ & \multicolumn{3}{c|}{}\\
\multicolumn{3}{|c|}{} &12& [3,5,4] &$(cs)(\bar{c}\bar{s})$ & \multicolumn{3}{c|}{}\\
\hline\hline \end{tabular} \end{center} \end{table*} The energies of the $c\bar{c}s\bar{s}$ tetraquark system with $IJ^{P}=00^{+}$, $01^{+}$, and $02^{+}$ for both the meson-meson and diquark-antidiquark structures, as well as the channel coupling of
these two structures are listed in Tables~\ref{00}, \ref{01}, and \ref{02}, respectively. In these tables, the first column gives the index of every possible channel; the second column lists the corresponding physical channels; the third column indicates the theoretical threshold of every channel; the fourth column ($E_{sc}$) is the energy of every single channel; the fifth column ($E_{cc}$) shows the energy obtained by channel coupling within one configuration; and the last column ($E_{mix}$) is the lowest energy of the system obtained by coupling all channels of both configurations. \begin{table*}[!htb] \begin{center} \caption{\label{00} The lowest-lying eigenenergies of $c\bar{c}s\bar{s}$ tetraquarks with $IJ^{P}=00^+$ in the QDCSM.} \begin{tabular}{cccccc} \hline\hline
Index &Channel &Threshold &$E_{sc}$ &$E_{cc}$ &$E_{mix}$ \\ \hline
1 &$\eta_{s\bar{s}}\eta_{c\bar{c}}$ &3942 &3944 &3938 &3930 \\
2 &$D_{s}\bar{D}_{s}$ &3936 &3938 & & \\
3 &$J/\psi \phi$ &4117 &4119 & & \\
4 &$D_{s}^{*}\bar{D}_{s}^{*}$ &4224 &4226 & & \\
5 &$(cs)(\bar{c}\bar{s})$ & &4324 &4219 & \\
6 &$(cs)(\bar{c}\bar{s})$ & &4442 & & \\
7 &$(cs)(\bar{c}\bar{s})$ & &4405 & & \\
8 &$(cs)(\bar{c}\bar{s})$ & &4305 & & \\
\hline\hline \end{tabular} \end{center} \end{table*} \begin{table*}[!htb] \begin{center} \caption{\label{01} The lowest-lying eigenenergies of $c\bar{c}s\bar{s}$ tetraquarks with $IJ^{P}=01^+$ in the QDCSM.
} \begin{tabular}{cccccc} \hline\hline Index &Channel &Threshold &$E_{sc}$ &$E_{cc}$ &$E_{mix}$ \\ 1 &$\eta_{c\bar{c}}\phi$ &4004 &4006 &4006 &4006 \\ 2 &$D_{s}\bar{D}_{s}^{*}$ &4080 &4082 & & \\ 3 &$J/\psi \eta_{s\bar{s}}$ &4055 &4057 & & \\ 4 &$D_{s}^{*}\bar{D}_{s}$ &4080 &4082 & & \\ 5 &$J/\psi \phi$ &4117 &4119 & & \\ 6 &$D_{s}^{*}\bar{D}_{s}^{*}$ &4224 &4226 & & \\ 7 &$(cs)(\bar{c}\bar{s})$ & &4375 &4327 & \\ 8 &$(cs)(\bar{c}\bar{s})$ & &4419 & & \\ 9 &$(cs)(\bar{c}\bar{s})$ & &4375 & & \\ 10 &$(cs)(\bar{c}\bar{s})$ & &4419 & & \\ 11 &$(cs)(\bar{c}\bar{s})$ & &4413 & & \\ 12 &$(cs)(\bar{c}\bar{s})$ & &4352 & & \\ \hline\hline \end{tabular} \end{center} \end{table*} \begin{table*}[!htb] \begin{center} \caption{\label{02} The lowest-lying eigenenergies of $c\bar{c}s\bar{s}$ tetraquarks with $IJ^{P}=02^+$ in the QDCSM.
} \begin{tabular}{cccccc} \hline\hline Index &Channel &Threshold &$E_{sc}$ &$E_{cc}$ &$E_{mix}$ \\ 1 &$J/\psi \phi$ &4117 &4122 &4121 &4119 \\ 2 &$D_{s}^{*}\bar{D}_{s}^{*}$ &4224 &4229 & & \\ 3 &$(cs)(\bar{c}\bar{s})$ & &4429 &4420 & \\ 4 &$(cs)(\bar{c}\bar{s})$ & &4437 & & \\ \hline\hline \end{tabular} \end{center} \end{table*} The $IJ^P=00^+$ system: Four possible meson-meson channels, $\eta_{s\bar{s}}\eta_{c\bar{c}}$, $D_{s}\bar{D}_{s}$, $J/\psi \phi$, and $D_{s}^{*}\bar{D}_{s}^{*}$, and two diquark-antidiquark channels, $((cs)(\bar{c}\bar{s}))(3\times\bar{3})$ and $((cs)(\bar{c}\bar{s}))(6\times\bar{6})$, are studied in the QDCSM. All results for $IJ^P=00^+$ are given in Table~\ref{00}. We can see that the energy of every single channel of the meson-meson structure is higher than the corresponding theoretical threshold, which indicates that no bound state exists in any single channel. For the diquark-antidiquark configuration, all the masses are higher than the lowest energy of $D_{s}\bar{D}_{s}$ in our model calculation, and the minimum energy is 4305 MeV. We then perform the channel-coupling calculation for the meson-meson and diquark-antidiquark structures separately. The lowest energy of the meson-meson structure is 3938 MeV, almost the same as that of the lowest single channel $D_{s}\bar{D}_{s}$, which indicates that the effect of the channel coupling is quite weak and no bound state is found in the meson-meson structure. For the diquark-antidiquark structure, although the coupling is rather stronger than in the meson-meson structure, the energy is still higher than the theoretical threshold of the lowest channel $D_{s}\bar{D}_{s}$.
However, when all channels of both structures are coupled, the lowest energy obtained is $3930$ MeV, which is 6 MeV lower than the threshold of the lowest channel $D_{s}\bar{D}_{s}$. This means that there is a bound state of the $IJ^{P}=00^{+}$ $c\bar{c}s\bar{s}$ tetraquark system with a mass of $3930$ MeV. In addition, to explore the structure of this bound state, we calculate the proportion of each channel and find that the percentage of the $D_{s}\bar{D}_{s}$ channel is about $85\%$, while the percentages of the other seven channels are much smaller. This means that the largest contribution to this bound state comes from the $D_{s}\bar{D}_{s}$ channel, so the bound state tends to be a molecular state. Moreover, this mass is close to that of the $\chi_{c0}(3930)$ observed by the LHCb collaboration. We can therefore explain the $\chi_{c0}(3930)$ as a $D_{s}\bar{D}_{s}$ molecular state in the present quark-model calculation. This result is consistent with the work of Ref.~\cite{lattice_X3930}, in which a lattice QCD calculation with $m_{\pi}\simeq 280$ MeV indicated the existence of a scalar $\bar{D}_{s}D_{s}$ bound state, which might correspond to the $\chi_{c0}(3930)$ observed by the LHCb collaboration~\cite{LHCb_X3930.1,LHCb_X3930.2}. Also, in Ref.~\cite{BS_X3930}, two pole positions of the $\bar{D}_{s}D_{s}$ system were obtained by solving the Bethe-Salpeter equation, which explained the properties of the new exotic resonance $\chi_{c0}(3930)$. The $IJ^P=01^+$ system: From Table~\ref{channels}, there are six meson-meson channels and six diquark-antidiquark channels. Table~\ref{01} lists the calculated masses of these channels and their coupling results. The energies of the single channels of the meson-meson structure lie in the range $4.0-4.2$ GeV, and the masses of the diquark-antidiquark channels are around 4.4 GeV. All these single channels are unbound.
By coupling the channels within the same configuration, the lowest masses are 4006 MeV for the meson-meson structure and 4327 MeV for the diquark-antidiquark structure, both of which are still above the threshold of the lowest channel $\eta_{c\bar{c}}\phi$, indicating that no bound state exists in either the meson-meson or the diquark-antidiquark structure. Meanwhile, the lowest energy of the full channel-coupling calculation is still 4006 MeV, which means that the effect of the full channel coupling is very minor here, and there is no bound state in the $IJ^P=01^+$ system. The $IJ^P=02^+$ system: Table~\ref{02} shows that there are two channels ($J/\psi \phi$ and $D_{s}^{*}\bar{D}_{s}^{*}$) of the meson-meson structure and two channels of the diquark-antidiquark configuration for the $IJ^P=02^+$ system. The situation is similar to the $IJ^P=01^+$ case. The energy of each single channel is above the corresponding threshold. Meanwhile, the channel coupling does not help much: the lowest energy is still higher than the threshold of the lowest channel $J/\psi \phi$. Therefore, there is no bound state in the $IJ^P=02^+$ system in the present calculation. Although there is no bound state for the $IJ^P=01^+$ and $IJ^P=02^+$ systems, some resonance states are still possible in the $c\bar{c}s\bar{s}$ tetraquark system. The colorful diquark and antidiquark subclusters cannot fall apart directly due to color confinement, so it is possible for them to form resonance states. To find out whether there is any resonance state, a stabilization method (also called the real-scaling method), which has been successfully applied to other multiquark systems~\cite{real_method2, real_method3}, is used in this work. To realize the real-scaling method in our calculation, the distance between the two clusters is denoted by $S$.
As $S$ increases, each scattering state falls off towards its threshold, except for a resonance state, whose energy remains stable because it is not affected by the boundary at large distances. So we calculate the energy eigenvalues of the $c\bar{c}s\bar{s}$ systems by taking values of $S$ from 4.1 fm to 9.0 fm to see whether there is any stable state. The results for the $c\bar{c}s\bar{s}$ tetraquark systems with $IJ^{P}=00^+$, $01^+$, and $02^+$ are shown in Figs.~\ref{phase00}, \ref{phase01}, and \ref{phase02}, respectively. \begin{figure*} \includegraphics[scale=0.35]{00.eps} \vspace{-0.5cm} \caption{The stabilization plots of the energies of the $c\bar{c}s\bar{s}$ system with $IJ^{P}=00^+$ in the QDCSM. } \label{phase00} \end{figure*} \begin{figure*} \includegraphics[scale=0.35]{01.eps} \vspace{-0.5cm} \caption{The stabilization plots of the energies of the $c\bar{c}s\bar{s}$ system with $IJ^{P}=01^+$ in the QDCSM. } \label{phase01} \end{figure*} \begin{figure*} \includegraphics[scale=0.35]{02.eps} \vspace{-0.5cm} \caption{The stabilization plots of the energies of the $c\bar{c}s\bar{s}$ system with $IJ^{P}=02^+$ in the QDCSM. } \label{phase02} \end{figure*} For the $IJ^{P}=00^{+}$ system in Fig.~\ref{phase00}, the lowest horizontal line, located at $3930$ MeV, represents the bound state of this system. Three further horizontal lines, which stand for the thresholds of $\eta_{s\bar{s}}\eta_{c\bar{c}}$, $J/\psi \phi$, and $D^{*}_{s}\bar{D}^{*}_{s}$, are also marked in Fig.~\ref{phase00}. Besides, another four horizontal lines appear in Fig.~\ref{phase00}, corresponding to four resonance states with energies around 4035 MeV, 4385 MeV, 4524 MeV, and 4632 MeV, respectively. Comparing with the experimental results, we find that the energy of 4385 MeV is close to that of the $X(4350)$, and the quantum numbers $IJ^{P}=00^{+}$ are consistent with the data reported by the Belle Collaboration~\cite{Bell-CCSS}.
So we explain the $X(4350)$ as a compact tetraquark resonance state with $IJ^{P}=00^{+}$ in the present calculation. Our result also agrees with that of the Born-Oppenheimer approach, in which a mass of $4370$ MeV was obtained~\cite{X(4350)CCSS}. Besides, in Ref.~\cite{CHQM-CCSS}, the $X(4350)$ was also found to be a good candidate for a compact tetraquark state with $IJ^{P}=00^{+}$ in the chiral quark model. Similarly, the resonance energy of 4524 MeV is close to that of the $X(4500)$, and the quantum numbers $IJ^{P}=00^{+}$ are also consistent with the result reported by the LHCb Collaboration~\cite{LHCb-CCSS-2017.2}. So the $X(4500)$ may be a compact tetraquark resonance state with $IJ^{P}=00^{+}$ in the present calculation. In addition to the $X(4350)$ and $X(4500)$, another resonance state with an energy around 4632 MeV is obtained. Although this mass is very close to that of the $X(4630)$, the quantum numbers $IJ^{P}=00^{+}$ differ from the reported $IJ^{P}=01^{-}$~\cite{LHCb-CCSS-2021}. However, the mass is also close to that of the $X(4700)$, and the quantum numbers fit the experimental data of the LHCb Collaboration~\cite{LHCb-CCSS-2017.2}. So we prefer to assign the resonance state with energy 4632 MeV to the exotic state $X(4700)$. For the $c\bar{c}s\bar{s}$ system with $IJ^{P}=01^{+}$ in Fig.~\ref{phase01}, the first six horizontal lines are located at the corresponding physical thresholds of the six channels $\eta_{c\bar{c}}\phi$, $D_{s}\bar{D}_{s}^{*}$, $J/\psi \eta_{s\bar{s}}$, $D_{s}^{*}\bar{D}_{s}$, $J/\psi \phi$, and $D_{s}^{*}\bar{D}_{s}^{*}$. A resonance state is clearly obtained at an energy around 4327 MeV. Although this mass is very close to that of the $X(4350)$, the quantum numbers do not match. However, the LHCb Collaboration claimed the existence of the $X(4274)$, with measured quantum numbers $J^{P}=1^{+}$~\cite{LHCb-CCSS-2017.1}. Therefore, we tend to use the resonance state around 4327 MeV to explain the $X(4274)$ in this work.
This is similar to the result of Ref.~\cite{2011_CCSS}, in which a resonance state with an energy near 4.3 GeV was interpreted as the $X(4274)$. For the last system, $IJ^{P}=02^{+}$, in Fig.~\ref{phase02}, the first two horizontal lines obviously represent the thresholds of the two channels $J/\psi \phi$ and $D_{s}^{*}\bar{D}_{s}^{*}$. Another two horizontal lines stand for two resonance states, with energies of about 4419 MeV and 4526 MeV, respectively. One may note that the energy of 4526 MeV is also very close to the mass of the $X(4500)$, but the quantum numbers are not consistent with the experimental data. So these two resonance states may be new exotic states. \section{Summary} The $cs\bar{c}\bar{s}$ tetraquark systems with $IJ^{P}=00^{+}$, $01^{+}$, and $02^{+}$ have been systematically investigated by using the RGM in the framework of the QDCSM. Our goal is to search for bound states or resonance states that can explain the exotic states recently observed in the invariant mass distribution of $J/\psi \phi$, as well as the exotic state $\chi_{c0}(3930)$ observed by the LHCb collaboration. In this work, two structures, the meson-meson and the diquark-antidiquark structure, are taken into account. Both single-channel and channel-coupling calculations are performed. Besides, to search for resonance states, a stabilization method is applied to the coupling calculation of all channels of both configurations. The numerical results show that we obtain a bound molecular state $\bar{D}_{s}D_{s}$ with quantum numbers $IJ^{P}=00^{+}$ and an energy of 3930 MeV, which can be used to explain the observed $\chi_{c0}(3930)$.
Moreover, several resonance states are obtained in this work: four $IJ^{P}=00^{+}$ states with resonance masses around 4035 MeV, 4385 MeV, 4524 MeV, and 4632 MeV; one $IJ^{P}=01^{+}$ state with a resonance mass around 4327 MeV; and two $IJ^{P}=02^{+}$ states with resonance masses around 4419 MeV and 4526 MeV. All of them are obtained by coupling all channels of both the meson-meson and diquark-antidiquark structures, so they are compact tetraquarks in the present quark-model calculation. Comparing with the experimental data, we are inclined to explain the exotic states $X(4350)$, $X(4500)$, and $X(4700)$ as compact tetraquark states with $IJ^{P}=00^{+}$. The $X(4274)$ may be a candidate for a compact tetraquark state with $IJ^{P}=01^{+}$. All these resonance states are worth searching for in experiments, and we suggest more experimental tests to check their existence. In addition, to confirm the existence of these $cs\bar{c}\bar{s}$ tetraquarks, a study of the scattering processes of the corresponding open channels is needed in future work. \acknowledgments{This work is supported partly by the National Natural Science Foundation of China under Contract No. 11675080, No. 11775118, No. 11535005, and No. 11775050.}
\section{Introduction} Message-driven execution with over-decomposition of tasks constitutes an important model for parallel programming and provides multiple benefits, including communication-computation overlap and reduced idling of resources. \textsc{Charm++} \cite{kale-charm-pp1996} from UIUC, USA is one such message-driven programming environment and runtime for parallel applications. \textsc{Charm++} has been used to provide high performance for different scientific applications including NAMD \cite{mei-enablingandscaling-sc2011}, a molecular dynamics application, ChaNGa \cite{jetley-massivelyparallel-ipdps2008}, a cosmological simulator, and ParFUM \cite{lawlor-parfum-ec2006}, a framework for unstructured mesh applications. In our earlier work \cite{vasudevan-gcharm-ics2013}, we developed an adaptive runtime framework called {\em G-Charm} for efficient execution of \textsc{Charm++} message-driven parallel applications on hybrid GPU systems. Our framework dynamically combined the kernels of a large number of small \textsc{Charm++} objects ({\em chares}) into a single GPU kernel to reduce the number of kernel invocations, managed data movement between CPU and GPU, provided reuse of data on GPUs and avoided redundant CPU-GPU data transfers by analyzing dependencies, and enabled dynamic scheduling of tasks for asynchronous execution on both CPUs and GPUs using performance estimates. The techniques in our framework were mostly amenable to regular applications like matrix computations. Supporting efficient execution of message-driven irregular applications on GPU systems presents a number of challenges to our G-Charm runtime system. The generation of tasks (or chares) by the application need not be periodic, so our runtime framework needs to wait for a sufficient number of tasks before combining the chares into a GPU kernel. Data accesses by the tasks can be irregular.
Multiple tasks may require overlapping sets of data located in different regions of memory. This presents a problem for our data reuse policies: avoiding the transfer of data already located in GPU memory for a kernel can result in non-coalesced memory accesses, because the data already present in GPU memory may be widely separated from the data that is to be transferred for the current kernel invocation. Finally, the strategies for asynchronous execution have to adapt to the varying workloads of the tasks that are mapped to CPUs and GPUs. In this work, we have developed strategies in G-Charm for efficient execution of irregular message-driven parallel applications on GPU systems. We have developed models for deciding the number of tasks to aggregate into a kernel based on the rate of task generation and the GPU occupancy of the tasks. We have also developed a data reorganization strategy that improves coalesced memory access for irregular data accesses on the GPU and combines data coalescing with data reuse. Finally, our runtime framework dynamically adapts hybrid executions on CPUs and GPUs to the amount of computation and communication in the tasks. We demonstrate our strategies on the ChaNGa N-Body simulation application \cite{jetley-massivelyparallel-ipdps2008} and a molecular dynamics (MD) application, and show that our dynamic strategies result in 8-38\% reduction in execution times for these irregular applications over the corresponding static strategies that are amenable to regular applications. \section{Background} \label{bg} \subsection{\textsc{Charm++}} \textsc{Charm++} is a message-driven object-oriented parallel programming framework based on C++ \cite{kale-charm-pp1996}. A parallel application written using \textsc{Charm++} divides its data among an array of objects called {\em chares}.
The chares are mapped to physical processors, and can be migrated among processors by the \textsc{Charm++} runtime system to provide load balance. Typically, the number of chares is much larger than the number of physical processors, resulting in over-decomposition. The chares are associated with specialized methods called {\em entry} methods. Entry methods of a chare object can be invoked from chares on the same or other processors. Remote entry methods invoked by a chare are queued as messages in a message queue at the destination processor. The runtime system dequeues a message and invokes the corresponding chare's entry method upon the arrival of all inputs of the entry method from the other chares. Thus, while input data for one chare is being communicated from a remote processor, a processor can perform computation on some other chare for which the inputs have already arrived. \subsection{\textsc{G-Charm}} In our earlier effort \cite{vasudevan-gcharm-ics2013}, we developed the G-Charm runtime framework, which performs various optimizations including minimizing data transfers, data management, dynamic scheduling, and work agglomeration. The runtime automatically decides the allocation of some of the CPU chares for execution on the GPU, and converts those chares to {\em GPU chares}. The runtime is also responsible for transferring data to the GPU, invoking kernels, periodically monitoring the status of kernels, copying data to the CPU upon kernel completion, and invoking a callback function on the CPU to notify chares about work completion. The application begins execution with the creation of chare objects. Each chare operates on a subset of the data and executes its entry methods to update its own data on the arrival of input data from other chares and also to invoke entry methods of other chares.
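The message-driven pattern described above can be illustrated with a minimal plain-C++ sketch. This is an illustration only, not the actual \textsc{Charm++} API; the Scheduler type and its members are hypothetical:

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <vector>

// Hypothetical sketch of message-driven execution: invoking a remote
// entry method enqueues a message; the scheduler repeatedly dequeues a
// message and runs the corresponding entry method, so work proceeds on
// whichever chare has its inputs ready.
struct Scheduler {
    std::queue<std::function<void()>> messages;

    // "Invoke" an entry method: queue it as a message.
    void send(std::function<void()> entryMethod) {
        messages.push(std::move(entryMethod));
    }

    // Drain the message queue, invoking each entry method in turn.
    // Entry methods may themselves send further messages.
    void run() {
        while (!messages.empty()) {
            auto m = std::move(messages.front());
            messages.pop();
            m();
        }
    }
};
```

In real \textsc{Charm++}, messages arrive from remote processors and entry methods are generated from interface descriptions; the sketch only shows the dequeue-and-invoke loop that lets computation on ready chares overlap with communication for others.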
When a chare needs to invoke a kernel on the GPU, it creates a {\em workRequest} object and invokes a scheduler function in the G-Charm runtime that dynamically schedules the workRequest to either the CPU or the GPU. The G-Charm runtime checks the data region in the application domain represented by the workRequest data, and tries to avoid redundant data transfers to the GPU by transferring only the data not already present there. The G-Charm runtime then adds the workRequest to a node of a linked list called the {\em workGroupList}, in which each node represents a set of workRequest objects that can be combined. G-Charm periodically combines workRequest objects from this list and creates objects of type {\em workRequestCombined}. The G-Charm runtime then schedules these objects for GPU execution. Thus the G-Charm runtime performs work agglomeration dynamically by combining the kernels of multiple work requests for GPU execution. \section{Optimization Strategies for Irregular Applications} \label{opt} \subsection{Combining Kernels} \label{combining} Combining multiple kernels of different chares into a single large kernel results in a smaller number of kernel invocations, smaller CPU-GPU data transfer costs, and larger GPU occupancy. Our G-Charm framework dynamically selects the workRequest objects of different chares for combining into a single kernel. For irregular applications, the arrival rate of the workRequests (or tasks) to the workGroupList for combining can vary throughout application execution. If, when the {\em combine} routine is called after a fixed interval, the workGroupList has only a small number of workRequests to combine, the resulting GPU kernel is spawned with only a small number of threads and thread blocks, resulting in poor GPU occupancy.
However, waiting for a fixed number of workRequests to arrive in the workGroupList before spawning the GPU kernel for good GPU occupancy can result in large idling of the GPU if the workRequests do not arrive within a guaranteed period of time. Thus, a strategy for irregular applications has to consider both the arrival rate and the GPU occupancy when deciding to combine the workRequests into a kernel. In our work, the G-Charm runtime system uses the CUDA occupancy calculator to determine the percentage occupancy and the maximum number of thread blocks that can be used per Streaming Multiprocessor (SM) to achieve that occupancy for a kernel. Since in our implementation a workRequest is executed using a thread block, the maximum number of thread blocks obtained from the CUDA occupancy calculator also corresponds to the maximum number of workRequests, $maxSize$, that can be combined for maximum occupancy. Our runtime also notes the times of workRequest generation or arrival, and maintains a running maximum of the intervals, $maxInterval$, between the arrivals using these timestamps. Our framework periodically checks the workGroupList. If the number of workRequests in a workGroupList is at least $maxSize$, it combines $maxSize$ workRequests into a combined kernel for GPU execution. If the number is less than $maxSize$, G-Charm finds the interval between the current time and the time when the last workRequest arrived. If this interval is greater than $2\times maxInterval$, it combines the available workRequests for immediate execution. Thus, our framework attempts to strike a balance between providing maximum GPU occupancy and minimum GPU idling. \subsection{Data Reuse and Coalescing} \label{re-co} Data transfers on the PCIe bus between the CPU and GPU for kernel executions can occupy a significant fraction of the overall execution time.
Hence it is important to minimize these times by avoiding the transfer of some of the data for a given kernel execution if that data is already located in GPU memory due to previous kernel executions. G-Charm keeps track of the data segments in the GPU device used for kernel executions. Each chare is associated with a region of data in the application domain. The G-Charm runtime keeps track of the mapping of chare buffers to slots in device memory using a {\em chare table}. A workRequest object contains the indices of the chare buffers representing subregions in the application domain. When a workRequest for a chare is created, the G-Charm runtime uses the buffer indices of the workRequest to look up the chare table and find whether the buffers are already located in GPU memory due to the prior execution of kernels of other chares on the GPU (e.g., data generated in previous iterations). While minimizing the transfers is important, it is also essential to provide locality of the data that is being reused. Data locality in GPU memory results in coalesced access, in which the data needed by the consecutive threads of a half warp (16 threads) are located in contiguous locations of GPU device memory. Providing coalesced access on GPUs has consistently been shown to be an important optimization with large performance benefits. Irregular parallel applications exhibit poor data locality. In these applications, reuse of data already located in GPU device memory impacts the locality of the data needed by consecutive threads: if the data required by thread $t_i$ is already located in GPU device memory at location $index_i$, then the data needed by thread $t_{i+1}$ may not already be located at location $index_i+1$; it may either be located at some other location or not be present at all. Thus reuse of data can significantly upset coalesced memory access in irregular applications.
In extreme cases, the gain obtained due to minimizing data transfers by reusing data can be offset by the loss in performance due to non-coalesced memory access. In these cases, it may be better to not reuse data, but perform redundant transfers of all the data needed by the current kernel such that the new data is organized for coalesced access. This is illustrated in Figure \ref{redundant}. The figure shows the redundant data transfers corresponding to the input data for the current kernel and the current GPU state shown in Figure \ref{input_data}. Figure \ref{reuse_uncoalesced} shows the non-coalesced access that can happen in irregular applications due to data reuse. \begin {figure} \centering \subfigure[Input Data and GPU State]{ \includegraphics[scale=0.28]{images/input_data.jpg} \label{input_data} } \subfigure[Redundant Data Transfers and Coalesced Access] { \includegraphics[scale=0.28]{images/RedundantData.jpg} \label{redundant} } \subfigure[Data Reuse, Uncoalesced Access] { \includegraphics[scale=0.28]{images/DataReuse.jpg} \label{reuse_uncoalesced} } \subfigure[Data Reuse, Sorting Indices, Coalesced Access] { \includegraphics[scale=0.28]{images/DataReuseAndCoalescing.jpg} \label{reuse_coalesced} } \caption{Data Reuse and Coalescing Strategies} \label{reuse-coalescing} \end{figure} Thus, for irregular applications, data reuse optimization has to be followed by a data reorganization or task reassignment step for coalesced access. Data reorganization for coalesced access in irregular applications is challenging. We follow a simple strategy in which we reassign the tasks to the threads. This is done by sorting the indices of the data accessed by the threads and making the threads access the data in the sorted order of the indices. This results in local sets of contiguous data accesses as illustrated in Figure \ref{reuse_coalesced}. 
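The reassignment step can be sketched in plain C++ (a minimal illustration, not the actual G-Charm code; insertSorted is a hypothetical helper): the data indices required by the combined workRequests are kept in sorted order, and threads are then assigned to indices in that order so that consecutive threads touch contiguous runs of device memory.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical helper sketching the reorganization described above:
// keep the required data indices sorted, inserting each new index at
// the position found by binary search (std::lower_bound).  Threads are
// assigned to indices in this sorted order, yielding locally contiguous
// (coalesced) accesses.  An index that is already present is skipped:
// that data item is resident on the GPU and is reused, not re-transferred.
void insertSorted(std::vector<int>& indices, int idx) {
    auto pos = std::lower_bound(indices.begin(), indices.end(), idx);
    if (pos == indices.end() || *pos != idx)
        indices.insert(pos, idx);
}
```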
As our results in Section \ref{exp} show, this simple reassignment achieves significant improvements in the performance of kernel execution. For sorting the data indices for coalesced access, one method is to sort the indices array after combining the workRequests. However, this introduces high sorting overheads of $O(N^{2})$ or $O(N\log N)$ for $N$ data items. Instead, given a sorted sub-array corresponding to the earlier workRequests, G-Charm inserts the index for a data item corresponding to the current workRequest at the correct position during the invocation of the {\em gcharm-insertRequest()} function, such that the resulting expanded array continues to maintain the sorted order. The correct position for the insertion of the data item into the array is found using binary search. The complexity of this is $O(\log 1+\log 2+\log 3+\cdots+\log N)=O(\log(N!))$. \subsection{Dynamic Scheduling} \label{dyn} G-Charm dynamically decides the allocation of a chare to either the CPU or the GPU for tasks for which kernel functions exist for both devices. G-Charm performs dynamic scheduling of tasks for asynchronous execution on CPU and GPU cores by executing the initial tasks on both the CPU and the GPU, obtaining the performance ratio of the executions, and using this ratio as an estimate for subsequent tasks. We model the workload of a workRequest based on the amount of input data accessed by the workRequest. This information, the data buffer indices accessed by a workRequest, is already maintained for the data reuse optimization described in the earlier section. After every execution of a combinedWorkRequest on a CPU or GPU, our framework obtains the time taken for execution per input data item in the workRequest on the corresponding device. These times are dynamically updated as running averages of the times obtained up to the current point in the execution. The performance ratio between the CPU and GPU times per data item is then calculated.
Given a queue of workRequests, first the total number of data items across all the workRequests in the queue is found. This total is divided according to the performance ratio between the CPU and GPU to find the number of data items that are to be allocated to each device. The workRequests are then scanned from the beginning of the queue, and a running cumulative sum of the number of data items in the scanned workRequests is maintained. When this cumulative sum crosses the number of data items to be allocated to the CPU, the set of workRequests scanned so far is allocated to the CPU and the remaining workRequests are allocated to the GPU for execution. Thus, by considering the individual workloads of the workRequests and updating the performance ratios through running averages of the times per data item, our framework adapts to changing workloads for dynamic scheduling in irregular applications. \section{Experiments and Results} \label{exp} We demonstrate our techniques on the ChaNGa cosmology simulation application \cite{jetley-massivelyparallel-ipdps2008} and a molecular dynamics simulation application. We have performed single-node experiments on two systems: one with a 6-core Intel Xeon E5-2620 processor connected to a single Kepler K20C GPU and the other with dual 8-core Intel Xeon E5-2670 processors connected to two Kepler K20m GPUs. \subsection{ChaNGa N-Body Simulations Application} ChaNGa is an iterative N-Body simulation application and uses a parallel Barnes-Hut algorithm for the calculation of interactions between the bodies. In ChaNGa, particles are divided among {\em TreePiece} chares, with each chare representing a part of the Barnes-Hut tree.
Each iteration involves domain decomposition of the particle space, distributed Barnes-Hut tree construction, local and remote tree walks to create interaction lists, gravitational force computation on particles due to interactions with tree nodes and other particles, force computations with periodic boundary conditions using Ewald summation, and acceleration and coordinate updates of the particles. Particles are grouped into {\em buckets}, and all particles in a bucket interact with the same nodes and particles. Force computations and Ewald summation are done primarily on the GPU. We use the GPU parallelization scheme for force computations by Jetley et al. \cite{jetley-scalinghierarchical-sc2010}. We use a 2D CUDA block of size $16\times8$ to compute the force on each bucket. The threads in column $0$ load the bucket particles into shared memory. Threads in row $0$ load eight interactions into shared memory, using which the threads in row $i$ compute the force on particle $i$. Then the next set of interactions is loaded into shared memory and the process repeats until all interactions are completed. We have used two datasets in our experiments. \begin{itemize} \item cube300: A low-resolution cosmological simulation with $48^3$ particles in a cubic box of 300 Mpc per side. The application is executed with this dataset for 128 iterations. \item lambs: A larger dataset with $144^3$ particles in a cubic box of 71 Mpc per side. The application is executed with this dataset for 10 iterations. \end{itemize} The particles in both datasets exhibit moderate clustering on small scales and become more uniformly distributed with increasing scale. \subsection{Molecular Dynamics Simulation Application} We consider a two-dimensional molecular dynamics application in which the 2D space is partitioned into patches. Each patch owns the particles present in its region. In each timestep, the force on each particle due to other particles within a cutoff distance is calculated and the positions of the particles are updated.
Particles then migrate to neighboring patches according to their new positions, and the application proceeds to the next timestep. This is repeated for a fixed number of timesteps. In the \textsc{Charm++} implementation, a {\em compute object} calculates the forces between a pair of patches. The entry method {\em interact} takes two vectors of particles belonging to two patches and updates the force components of each particle. The widely used NAMD \cite{mei-enablingandscaling-sc2011, phillips-adapting-sc2008} molecular dynamics framework, which is also based on \textsc{Charm++}, adopts a similar parallelization scheme based on compute objects. The {\em interact} method has been implemented as a CUDA kernel for the G-Charm implementation. \subsection{Combining Kernels} Our framework calculated the GPU occupancy as 50\% and 31\% for the force computation and Ewald summation kernels, respectively. The maximum number of active blocks is 16 per SM (streaming multiprocessor) for the NVIDIA Kepler architecture. At 50\% occupancy, 8 blocks per SM can be active, so the total number of blocks that can be active is 104 (8 blocks $\times$ 13 SMs) for the force computation kernel; at 31\% occupancy, about 5 blocks per SM can be active, giving 65 (5 blocks $\times$ 13 SMs) for the Ewald summation kernel. Since each workRequest corresponds to one bucket and is executed with one CUDA block, the framework combines workRequests until the number of distinct buckets in the combinedWorkRequest exceeds 104 for the force computation kernel or 65 for the Ewald summation kernel. Figure \ref{combining_results} shows the benefits of our strategy for combining small workRequests into a kernel for irregular applications. We compare our adaptive strategy, which considers both the GPU occupancy and the arrival rate of the workRequests (Section \ref{combining}), with a static strategy that combines the available set of workRequests after processing every 100 workRequest objects on the CPU.
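The block-count thresholds above can be reproduced from the occupancy figures. A small sketch under the stated Kepler K20 parameters (16 resident blocks per SM, 13 SMs), rounding the per-SM block count to the nearest integer:

```python
def max_active_blocks(occupancy, blocks_per_sm=16, num_sms=13):
    """Total CUDA blocks that can be simultaneously active on the device
    at a given occupancy (Kepler K20 values: 16 blocks/SM, 13 SMs)."""
    active_per_sm = round(occupancy * blocks_per_sm)
    return active_per_sm * num_sms

# force computation kernel: 50% occupancy -> 8 blocks/SM -> 104 blocks
# Ewald summation kernel:   31% occupancy -> 5 blocks/SM ->  65 blocks
```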
We find that the dynamic strategy gives about 8--38\% reduction in execution time over the static strategy for the small dataset, and about 19\% reduction for the large dataset. \begin{figure} \centering \subfigure[Small Dataset (cube300)]{ \includegraphics[scale=0.22, angle=270]{images/combine_cube300.pdf} \label{combine_cube300} } \subfigure[Large Dataset (lambs)] { \includegraphics[scale=0.22, angle=270]{images/combine_lambs.pdf} \label{combine_lambs} } \caption{Dynamic vs Static Combining Strategies for Small and Large Datasets with ChaNGa} \label{combining_results} \end{figure} \subsection{Data Reuse and Coalescing} Figure \ref{reuse_coalescing_results} shows the GPU kernel times and the CPU-GPU data transfer times with redundant data transfers (no reuse), with the data reuse optimization applied, and with both data reuse and improved coalesced access by sorting data indices (Section \ref{re-co}). The figure shows that applying the reuse technique alone gives only a 3.6\% reduction in execution time over the original code, which employs redundant data transfers. Since data reuse results in non-contiguous memory accesses by members of a combinedWorkRequest object, an additional buffer containing the addresses of the data items corresponding to the individual workRequest objects has to be transferred to the GPU before kernel invocation. This also doubles the number of accesses to global memory, since for each data item the address must first be obtained from another buffer in global memory. While the time taken for data transfers is reduced by 62\% due to the smaller amount of data transferred under reuse, the GPU kernel time increases by 49\%. This is because of the non-coalesced access to both the data already resident in GPU memory and the newly transferred data, as well as the additional global memory accesses.
Combining reuse with coalesced access on the GPU by sorting the array indices results in a 12\% reduction in execution time over the original code with redundant data transfers, and an 8\% reduction over applying only the data reuse strategy. The coalesced access achieves about a 10\% reduction in kernel execution time over applying only the reuse strategy, which results in non-coalesced access. Note that the kernel computation time is still higher than that of the original code, since the original code achieves complete coalescing, while our technique of sorting array indices achieves only local regions of coalesced access, as described in Section \ref{re-co}. \begin{figure} \centering \includegraphics[scale=0.22, angle=270]{images/re-co_lambs.pdf} \caption{GPU Kernel and Data Transfer Times for Large Dataset with ChaNGa on 8 Cores} \label{reuse_coalescing_results} \end{figure} \subsection{Comparison with a Hand-Tuned Hybrid N-Body Simulations Code} With our G-Charm adaptive strategies, we obtained about a 31\% reduction in execution time for the cube300 dataset and about a 62\% reduction for the lambs dataset, over the average execution time of the multi-core CPU implementation on up to 8 CPU cores. We also compared the total times for ChaNGa N-Body simulations obtained using our adaptive strategies for combining, data reuse and coalescing described in this work with the total times obtained using the static strategies developed in our earlier work \cite{vasudevan-gcharm-ics2013}, which are amenable to regular applications. We also compare with the results obtained with a hand-tuned version of ChaNGa for hybrid GPU architectures developed by Jetley et al. \cite{jetley-scalinghierarchical-sc2010}. This code was manually tuned by its developers for optimal data layout, hybrid executions and data transfers based on various parameter studies.
In comparison, in our work all these optimizations are performed automatically by the framework in generic ways, without application-specific knowledge. Figure \ref{changa_comparison} shows the comparison results for the large dataset. \begin{figure} \centering \includegraphics[scale=0.22, angle=270]{images/changa_comparison_lambs.pdf} \label{changa_comparison_lambs} \caption{Comparison of Adaptive Strategies for Combining Kernels and Data Reuse with Static Strategies and a Hand-Tuned Code for ChaNGa} \label{changa_comparison} \end{figure} We see that our dynamic strategies perform better than the static methods. Our methods show good scalability up to 8 cores, and the scaling trend is similar to that of the ChaNGa GPU implementation. The performance we obtain is lower than that of the ChaNGa GPU implementation mainly because of the overheads in our runtime system and the lack of application-specific optimizations, such as the ChaNGa GPU implementation's use of constant memory to store the read-only data for the Ewald kernel. However, the strategies used in our runtime system are generic and can be applied to other irregular applications. \subsection{Dynamic Scheduling} Our profiling experiments with the ChaNGa application showed that the CPU cores are sufficiently occupied with the tree traversal tasks. Hence there is no scope for applying G-Charm's automatic dynamic scheduling techniques for asynchronous executions on CPU and GPU cores with this application. We demonstrate our dynamic scheduling strategy for irregular applications using the molecular dynamics (MD) simulation application. The G-Charm framework automatically performs asynchronous computations of the interaction calculations on both the CPU and the GPU cores. We compare our dynamic strategy, which adapts to the changing workloads of workRequests by considering the data accesses in the workRequests, with a static strategy that partitions the workRequestQueue based only on the total number of workRequests in the queue.
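The distinction matters when item counts are skewed. A minimal sketch of such a count-based static split (hypothetical names; the actual baseline divides the queue in place):

```python
def static_partition(item_counts, cpu_fraction):
    """Static baseline: split the queue by the number of workRequests,
    ignoring how many data items each request carries."""
    k = round(len(item_counts) * cpu_fraction)
    return item_counts[:k], item_counts[k:]

# With skewed workloads the static split is badly unbalanced: a CPU
# share of one half assigns [100, 1] (101 items) to the CPU and
# [1, 1] (2 items) to the GPU, even though each side gets two requests.
```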
Figure \ref{md_total_time} shows the total times taken by the MD simulations with different numbers of particles using both the static and adaptive strategies for dynamic scheduling. We find that the adaptive strategy for dynamic scheduling results in a 10--15\% reduction in execution time over the static strategy. We also obtained about a 22\% reduction in execution time over the single-core CPU implementation. \begin{figure} \centering \includegraphics[scale=0.22, angle=270]{images/md_total_time.pdf} \caption{Total Execution Times for MD Simulations} \label{md_total_time} \end{figure} \section{Related Work} \label{related} There have been a number of efforts in developing runtime frameworks for efficient executions on GPU systems. StarPU \cite{augonnet-starpu-cpe2011} is a dynamic task creation and scheduling framework for heterogeneous systems. StarPU's strength lies in its ability to automatically schedule tasks on one or more compute devices. However, StarPU's support for data allocation is completely manual: the programmer has the responsibility of identifying the data allocation granularity, the task-to-data mapping and the inter-task data dependences. These are done automatically in our G-Charm framework. Our work is closely related to the work by Kunzman \cite{kunzman-runtime-thesisuiuc2012}, which developed a unified programming model for abstracting different types of accelerators, with the runtime system performing various tasks such as load balancing, work agglomeration and data management. In that work, the user has to explicitly specify whether given data should be persistent in GPU memory across kernel invocations to avoid redundant data transfers, while in our work such data management is performed automatically by the runtime system. None of these efforts considers the non-uniform workloads and irregular data access patterns of irregular applications.
In our results, we show that adaptive strategies that consider these aspects give significant benefits for irregular applications over strategies that assume regular workloads and access patterns. \section{Conclusions and Future Work} \label{con} In this work, we developed adaptive strategies for the efficient execution of irregular message-driven applications on GPU systems. By means of experiments with ChaNGa N-Body simulations and an MD application, we showed that our dynamic strategies result in 8--38\% reductions in execution time for these irregular applications over the corresponding static strategies, which are amenable to regular applications. For the N-Body simulations, we also showed that our generic framework and strategies perform competitively with a hand-tuned and optimized code. \bibliographystyle{abbrv}
\subsubsection{Introduction} Both classical 1-D persistent homology and zigzag persistent homology use data structures that fall under the same quiver-theoretic notion: they are both orientations of \emph{Dynkin quivers of type A}, written throughout as \(\mathbb{A}_n\), where \(n\) is the number of vertices of the quiver. Quiver theory treats all orientations of \(\mathbb{A}_n\) equally with regard to the result that any representation of such a quiver (i.e., any persistence module over the underlying poset) decomposes into \emph{interval} representations \cite{gabriel}, the collection of which in turn forms a \emph{barcode}\textemdash a stable topological invariant of the representation/persistence module (or of the data set that generated it). In this paper we propose two new distances on persistence modules over \(\mathbb{A}_n\)-type quivers. We will spend the rest of the paper constructing them, laying out their properties and advantages, and proving stability results between these distances and some of those already in use in the persistent homology literature. We primarily focus our attention on the comparison of distances via their induced \emph{bottleneck distances} (Definition \ref{def_bn}): distances that first associate a pair of modules to their barcodes (collections of interval summands), and then pair up the elements of the barcodes in some ``closest'' manner. Here we briefly introduce and summarize these two new distances on zigzag persistence modules and relay some of their most overt properties. \begin{itemize} \item \textbf{\(\mathbb{A}_n\)-modules as multisets of vertices of the Auslander-Reiten quiver.} The \emph{AR distance} (section \ref{sec_ar_full}) can be applied to persistence modules over any orientation of \(\mathbb{A}_n\) and is a \emph{bottleneck distance} by construction.
When some notion of `endpoint parity' between a pair of interval modules agrees, their distance is simply the sum of the differences between endpoints (an \(\ell^1\)-type distance when considering intervals to be coordinate pairs, as is commonly seen in persistence diagrams). The distance behaves differently when parity does not agree. Over pure zigzag orientations, this distance's change in behavior relative to endpoint parity is a feature shared by the block distance \cite{botnan_lesnick}, which is reviewed in subsection \ref{sec_block} and compared in full with the AR distance in section \ref{sec_stab}. The properties of the AR distance are strongly influenced by the algebra of the underlying quiver. For instance, in pure zigzag orientations, interval modules of \([\mathrm{sink},\mathrm{sink}]\) endpoint parity are close to projective simple modules, and those with \([\mathrm{source},\mathrm{source}]\) endpoint parity are close to injective simple modules (in this situation ``closeness'' is relative to support size). In general, when a pair of intervals has non-matching endpoint parity, the poset structure influences their distance to a much greater degree than similarity of supports. \item \textbf{\(\mathbb{A}_n\)-modules as persistence modules over a suspended poset.} The \emph{weighted interleaving distance} (section \ref{sec_dwil}) considers an arbitrary orientation of \(\mathbb{A}_n\) as a series of connected `valleys' (maximal upward posets of the form \([\textrm{source},\infty)\)), and then measures the distance between two modules by the depth of the valleys on which the intervals must be isomorphic. On all shallower valleys they are free to differ. The general construction was pursued in our previous paper \cite{meehan_meyer_1} for the purpose of applying the interleaving distance to finite posets without inevitably encountering an excessive number of module pairs whose interleaving distance is infinite.
\end{itemize} \subsubsection{Contributions} A summary of our contributions is as follows: \begin{itemize} \item We provide full and sharp Lipschitz bounds between our AR distance and the block distance, with the latter treated as its own induced bottleneck distance (Theorem \ref{thm_blar_limit}). \item Included as part of the elucidation of the AR distance is a topic of potentially independent interest: we provide an explicit formulation of the Auslander-Reiten quiver for any orientation of \(\mathbb{A}_n\) in Section \ref{sec_ar}. While this formulation follows from the Knitting Algorithm (see \cite{schiffler} for details on the Knitting Algorithm and other methods of calculating the Auslander-Reiten quiver for orientations of \(\mathbb{A}_n\)), our formulation provides full information about the Auslander-Reiten quiver without any iterative construction. \item We provide sharp bounds for the weighted interleaving distance to dominate the AR distance (Theorem \ref{thm_pairs}). \end{itemize} \subsubsection{Acknowledgements} The authors would like to thank Vin de Silva and Michio Yoshiwaki for their discussions and insights, as well as the entirety of the members of the Hiraoka Laboratory for their support and assistance. The first named author is also supported in part by JST CREST Mathematics (15656429). \subsection{Preliminaries} \begin{notation} Throughout, we say that a \emph{distance} on a set \(X\) is a function \(d:X\times X\to[0,\infty]\) such that \begin{enumerate} \item \(d(x,x)=0\) for all \(x\in X\), \item \(d(x,y)=d(y,x)\) for all \(x,y\in X\), and \item \(d(x,y)\leq d(x,z)+d(z,y)\) for any \(x,y,z\in X\). \end{enumerate} \end{notation} That is, compared with the standard definition of a metric, we drop the requirement that \(d(x,y)=0\) implies \(x=y\), and we allow distances to take on infinite values.
\begin{definition}\label{defn_gpm} A \emph{generalized persistence module} (GPM) \(F\) over a poset \(P\) in a category \(\mathcal{D}\) is a functor \(F:P\to \mathcal{D}\). That is, \(F\) is an assignment \begin{itemize} \item \(x\to F(x)\) for all \(x\in P\), \item \((x\leq y)\to F(x\leq y)\in\mathrm{Hom}_\mathcal{D}(F(x),F(y))\) for all \(x\leq y\) in \(P\) \end{itemize} such that, for any \(x\leq y\leq z\), the inequalities are sent to morphisms satisfying \(F(y\leq z)\circ F(x\leq y)=F(x\leq z)\). The category of such functors is denoted \(\mathcal{D}^P\), where morphisms in this category are given by natural transformations of functors. A \emph{persistence module} is a GPM with values in the category of finite-dimensional vector spaces, and is the object of primary interest in this document. \end{definition} \subsubsection{Quivers} In this paper we will frequently view our underlying structures both as posets and as quivers. We would like to work with familiar persistent homology structures while applying quiver-theoretic machinery. The following is a short, formal definition of quivers, as well as an explanation of why they can be thought of as equivalent to posets in our setting. \begin{definition}\label{defn_quiv} A \emph{quiver} is a quadruple \((Q_0,Q_1,h,t)\) where \begin{itemize} \item \(Q_0\) is some finite set called the \emph{vertex set}, \item \(Q_1\) is a collection of \emph{arrows} between vertices, \item \(h:Q_1\to Q_0\) is a map that sends each arrow to its destination (\emph{head}), and \item \(t:Q_1\to Q_0\) is a map that sends each arrow to its source (\emph{tail}). \end{itemize} A \emph{representation} \(V\) of a quiver \(Q\) is \begin{itemize} \item a vector space \(V(i)\) assigned to every vertex, and \item a linear map \(V(a):V(ta)\to V(ha)\) assigned to every arrow.
\end{itemize} \end{definition} The space of finite-dimensional representations of a quiver \(Q\), denoted \(\mathrm{rep}(Q)\), is a category with morphisms given pointwise, \(f=\{f_i\}_{i\in Q_0}:V\to W\), such that they satisfy the commutative squares \(f(ha)V(a)=W(a)f(ta)\) for all arrows \(a\in Q_1\). Quivers may, in general, have closed loops or multiple arrows between the same pair of vertices. These features may prevent such quivers from being posets under the relation \[ \{x\leq y\text{ if and only if there exists a path } x\to y\}. \] However, the converse\textemdash that posets always give rise to quivers in a canonical way\textemdash is true. \begin{definition} For a poset \(P\), the \emph{Hasse} quiver \(Q(P)\) is the quiver given by: \begin{itemize} \item \(Q_0=P\) as a set of vertices. \item There exists an arrow \(i\to j\) whenever \(i<j\) in \(P\) and there is no \(k\) (distinct from \(i\) and \(j\)) such that \(i\leq k\leq j\). \end{itemize} \end{definition} Under certain restrictions, quivers do give rise to posets in a fashion that inverts the Hasse construction. When there is such a bi-directional correspondence, as seen in the following proposition, \emph{the space of representations of the quiver is equivalent to the space of persistence modules over the poset.} \begin{prop} \label{prop_p_and_q} Let \(Q\) be a quiver such that: \begin{itemize} \item \(Q\) has no cycles (including stationary loops), \item for any two \(i,j\in Q_0\), there exists at most one arrow between \(i\) and \(j\). \end{itemize} Then \(Q\) is the Hasse quiver of some poset \(P\). Furthermore, suppose \(Q\) also satisfies: \begin{itemize} \item For \(i,j\in Q_0\), there is at most one path from \(i\to\ldots\to j\). \end{itemize} Then, the category of finite-dimensional representations of the quiver \(Q\) is equivalent to the category of functors from the poset category \(P\) to the category of finite-dimensional vector spaces.
I.e., \[\mathrm{rep}(Q)\cong\mathrm{vect}^P\] as categories. \end{prop} \begin{remark} The extra condition above (at most one path \(i\to\ldots\to j\)) is necessary for the equivalence of categories for the following reason. Any GPM over a poset, by virtue of being a functor from a \emph{thin} category (the cardinality of any \(\mathrm{Hom}\)-space is at most \(1\)), has the property that the morphisms given by composition along any two parallel paths are equal; see Definition \ref{defn_gpm}. Contrast this with Definition \ref{defn_quiv}, in which there are no parallel-path commutativity conditions on a quiver representation. (If one wished to obtain equivalence between the two categories while allowing for the existence of parallel paths, this would require the use of \emph{bound quivers}: quivers with commutativity relations; for a general reference see \cite{schiffler}. Such pursuits are not within the scope of this document.) \end{remark} By virtue of this equivalence, from here onward we will denote quivers/posets by \(P\), rather than \(Q\). The quiver of interest in this paper is \(P=\mathbb{A}_n\), the `straight line' quiver, with arbitrary orientations for its arrows. It satisfies all the conditions of Proposition \ref{prop_p_and_q}. \begin{definition} \label{def_an} For \(n\in\mathbb{N}\), an \(\mathbb{A}_n\)-type quiver is any quiver with vertex set \(\{1,\ldots,n\}\) whose arrow set consists of exactly one of \[ i\to i+1\,\,\text{ or }\,\,i\gets i+1 \] for every \(i\). The corresponding poset (whose Hasse quiver returns the original quiver) is given by \[ 1\sim \ldots \sim n \] where each \(\sim\) corresponds to \(<\) (for quiver arrows of the form \(\rightarrow\)) or \(>\) (for quiver arrows of the form \(\leftarrow\)). \(\mathbb{A}_n\) will be said to be \emph{equioriented} if all arrows face the same way.
\begin{center} \includegraphics[scale=1.5]{tikz_an_equi} \end{center} \(\mathbb{A}_n\) will be said to have \emph{pure zigzag orientation} if arrows alternate (i.e., each vertex is either a source or a sink). \begin{center} \includegraphics[scale=1.5]{tikz_an_zz} \end{center} \end{definition} The following definition is a fundamental one in persistent homology. \begin{definition} For an orientation of \(\mathbb{A}_n\), define the interval persistence module (indecomposable quiver representation) \([x,y]\) to be the representation \[[x,y](i)=\left\{ \begin{array}{ll} K & \text{if }1\leq x\leq i\leq y\leq n \\ \\ 0 & \text{otherwise} \end{array}\right. \] where \(K\) is some base field. The internal morphisms of \([x,y]\) are \(1_K\) when possible, and \(0\) otherwise. From context it should always be clear when we mean the indecomposable \([x,y]\) or the \(\mathbb{Z}\)-interval \([x,y]\). Lastly, we will often abbreviate interval persistence modules of the form \([x,x]\) as \([x]\). \end{definition} For \(P=\mathbb{A}_n\), as it turns out, every \(\mathbb{A}_n\)-representation\,\textemdash\,equivalently, every \(P\) persistence module\,\textemdash\,is isomorphic to a direct sum of interval persistence modules. The original result cited below is quiver theoretic in origin, but this result has since been proved independently for pointwise finite-dimensional persistence modules over \(\mathbb{R}\) \cite{cb}. \begin{prop}[\cite{gabriel}]\label{prop_gabriel} Representations / persistence modules over any \(P=\mathbb{A}_n\) decompose into interval persistence modules. This decomposition is unique up to ordering and isomorphism of summands. Furthermore, the interval persistence modules are precisely the indecomposable persistence modules (up to isomorphism) of \(P\).
\end{prop} For a very efficient exposition of the definitions and features of additive categories, categorical products and coproducts, and categories possessing unique decomposability properties, the authors recommend the paper \cite{krause}. \begin{notation} Throughout, by \emph{indecomposable representation} of \(P=\mathbb{A}_n\) we mean the unique representative of the isomorphism class that is precisely an interval representation. \end{notation} \subsubsection{The Auslander-Reiten Quiver} \label{ss_ar} The following is a crucial piece of quiver theoretic machinery that renders possible the development of this paper's first distance. \begin{definition} Given a quiver \(P\), its \emph{Auslander-Reiten (AR) quiver} is a new quiver in which: \begin{itemize} \item the vertex set is the collection of isomorphism classes of indecomposable representations of \(P\), \item an arrow exists from one vertex to another whenever there exists an irreducible morphism between the corresponding \(P\)-indecomposables. \end{itemize} When \(P=\mathbb{A}_n\), there are finitely many indecomposable representations up to isomorphism, and representatives of the distinct isomorphism classes can be chosen to be precisely the collection of \emph{interval} representations of \(P\) (Proposition \ref{prop_gabriel}). That is, the Auslander-Reiten quiver of some \(P=\mathbb{A}_n\) has vertex set consisting of the interval representations of \(P\). \end{definition} See any of \cite{elements,schiffler,krause_notes,derksen_weyman} for general introductions to Auslander-Reiten theory. What is important to note for now is that, when \(P=\mathbb{A}_n\), its Auslander-Reiten quiver has a finite vertex set, unique arrows, no closed loops, and is a connected graph (\cite{auslander} VI Thm 1.4). The nature of the Auslander-Reiten quiver of any \(P=\mathbb{A}_n\) will be discussed in detail in Subsection \ref{sec_ar}. 
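A concrete instance of Proposition \ref{prop_gabriel} may be helpful before proceeding:

```latex
% Over the equioriented quiver 1 -> 2 -> 3, take the representation V with
%   V(1) = V(2) = V(3) = K,   V(1 <= 2) = 1_K,   V(2 <= 3) = 0.
\[
  V \;\cong\; [1,2] \oplus [3],
\]
% since [1,2] contributes K -> K -> 0 (with the identity map) and [3]
% contributes 0 -> 0 -> K, recovering the maps of V. The interval
% summands [1,2] and [3] are exactly the barcode of V.
```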
\subsection{Classic Persistent Homology Distances} We now define two distances fundamental to persistent homology. \subsubsection{Interleaving Distance} The interleaving distance is a distance on generalized persistence modules with values in any category \(\mathcal{D}\) over any poset \(P\) (Definition \ref{defn_gpm}). We offer the following definitions in their full generality, though in the remainder of the paper they will be applied only to persistence modules (GPMs with values in \(\mathrm{vect}\)) over very specific posets. We first define translations, which are used to `shift' GPMs within a poset and are how the size of an interleaving is measured. \begin{definition}\label{def_translation} A \emph{translation} \(\Lambda\) on a poset \(P\) is a map \(\Lambda:P\to P\) such that \begin{itemize} \item \(x\leq\Lambda x\) for all \(x\in P\), \item if \(x\leq y\) in \(P\), then \(\Lambda x \leq \Lambda y\). \end{itemize} The \emph{height} of a translation is \[h(\Lambda)=\max_{x\in P}\{d(x,\Lambda x)\},\] where \(d\) is some distance on \(P\). The collection of translations over a poset \(P\) forms a monoid with a left action on any \(\mathcal{D}^P\), given by the pointwise statement \[ F\Lambda(x)=F(\Lambda x)\text{ for all }x\in P. \] \end{definition} In brief, before the full definition below, an interleaving between two GPMs is a translation \(\Lambda\) together with a pair of morphisms from each GPM to a \(\Lambda\)-shift of the other, such that certain commutativity conditions are fulfilled. \begin{definition}\label{def_il} An interleaving between two GPMs \(F,G\) in \(\mathcal{D}^P\) is a translation \(\Lambda\) on \(P\) and a pair of morphisms (natural transformations) \(\phi:F\to G\Lambda\), \(\psi:G\to F\Lambda\) such that the following diagram commutes: \begin{center} \includegraphics[scale=1]{tikz_interleaving_4} \end{center} Alternatively, we say that \(F,G\) are \(\Lambda\)-\emph{interleaved}.
The \emph{interleaving distance} between \(F\) and \(G\) is \[ D_{\mathrm{IL}}(F,G)=\inf\{\epsilon:F,G\text{ have a }\Lambda\text{-interleaving with }h(\Lambda)=\epsilon\}. \] \end{definition} The morphisms \(\phi\) and \(\psi\) are sometimes referred to as ``approximate isomorphisms'', and the interleaving distance can be thought of as the shift distance by which there fails to be a true isomorphism between the persistence modules. \begin{remark} The above definition is not quite the traditional one seen most often in the literature (see \cite{bubenik}). In many definitions there are two translations \(\Lambda\) and \(\Gamma\) (one to shift \(F\), and the other to shift \(G\)), and the height of the interleaving is the height of the larger translation. In the posets we are interested in, the values of the interleaving distance do not change when allowing for two distinct translations rather than using the same translation twice. So, for the sake of simplicity, and without altering the distance, we have reduced Definition \ref{def_il} to a statement involving only a single translation \(\Lambda\). \end{remark} The collection of translations on a poset \(P\) is itself a poset under the partial order given by the relation \[ \Lambda\leq\Gamma\text{ if }\Lambda(x)\leq\Gamma(x)\text{ for all }x\in P. \] There is rarely a \emph{unique} translation of a given height, though occasionally it is easier to assume that we are using a \emph{full} translation of some height. \begin{remark}\label{rmk_full} By a \emph{full translation} of height \(\epsilon\), we will mean a maximal element in the poset of translations that has height \(\epsilon\). In the case \(P=\mathbb{Z},\mathbb{R}\), there is always a unique full translation of height \(\epsilon\): the translation \(\Lambda_\epsilon(x)=x+\epsilon\) for all \(x\in P\). In posets that are not totally ordered, there may be multiple distinct full translations of certain heights.
\end{remark} By the next result, any \(\epsilon\)-interleaving can always be taken to use a full translation of height \(\epsilon\). \begin{prop} Let \(\Lambda,\Lambda'\) be two translations over some poset \(P\) such that \(\Lambda'\geq\Lambda\), and let \(F,G\) be two GPMs in \(\mathcal{D}^P\) for some \(\mathcal{D}\). If \(F,G\) are \(\Lambda\)-interleaved, then \(F,G\) are \(\Lambda'\)-interleaved. \end{prop} \subsubsection{Bottleneck Distances} We first define the general notion of a bottleneck distance (Definition \ref{def_bn}), then present the classic bottleneck distance (Example \ref{ex_classic_bn}), and lastly put forward the meaning of a general distance's induced bottleneck distance (Remark \ref{rmk_induced}). A bottleneck distance (also a Wasserstein metric\,\textemdash\,see \cite{wasserstein}) acts on pairs of multisets of some set \(\Sigma\). It requires \begin{itemize} \item a distance \(d\) on \(\Sigma\), and \item a function \(W:\Sigma\to[0,\infty)\) \end{itemize} such that \begin{equation*}\label{eqn_triangle} |W(f)-W(g)|\leq d(f,g),\tag{$\Delta$-ineq} \end{equation*} for all \(f,g\in\Sigma\). Let \(\Sigma\) be some set, and \(F,G\) two multisets (subsets with multiplicities of elements) of \(\Sigma\). A \emph{matching} between \(F\) and \(G\) is a bijection \[x:F'\leftrightarrow G'\] where \(F'\subset F\), \(G'\subset G\). The \emph{height} of a matching \(x:F'\leftrightarrow G'\) is \[ h(x)=\max\{\max_{f\in F'}\{d(f,x(f))\},\max_{f\not\in F'}\{W(f)\},\max_{g\not\in G'}\{W(g)\}\}. \] That is, take the maximum over all distances (using \(d\)) between paired elements, as well as the maxima over all of the `widths' (using \(W\)) of the unpaired elements of \(F\) and \(G\).
\begin{definition}\label{def_bn} Given a set \(\Sigma\), and any functions \(d\) and \(W\) as above satisfying the \ref{eqn_triangle} relationship, the \emph{bottleneck distance generated by} \(d\) \emph{and} \(W\) between two multisets \(F,G\) of \(\Sigma\) is \[ D(F,G)=\min\{h(x):x\text{ is a matching between }F\text{ and }G\}. \] \end{definition} The following connects bottleneck distances to persistence modules. By \cite{cb}, this can be generalized to \(\mathbb{R}\) persistence modules. \begin{definition}\label{def_barcode} For \(\mathbb{A}_n\), let \(\Sigma\) denote the set of (isomorphism classes of) indecomposable persistence modules: i.e., its \emph{intervals}. For a persistence module \(M\) over \(\mathbb{A}_n\), define its \emph{barcode} to be the multiset of \(\Sigma\) containing exactly the summands in its decomposition (with existence and uniqueness guaranteed by Proposition \ref{prop_gabriel}): \[ \mathcal{B}(M)=\{[x_i,y_i]\}_{i\in I}\text{, where }M=\bigoplus_{i\in I}[x_i,y_i]. \] \end{definition} \begin{example}\label{ex_classic_bn} The `classical' bottleneck distance on persistence modules over \(\mathbb{R}\) is the one given by \begin{itemize} \item \(d(f,g)=D_{\mathrm{IL}}(\{f\},\{g\})\) \item \(W(f)=D_{\mathrm{IL}}(\{f\},\emptyset)\), \end{itemize} where \(D_{\mathrm{IL}}\) is the interleaving distance of Definition \ref{def_il}. Let \(f=[x_1,y_1],g=[x_2,y_2]\) be indecomposable/interval persistence modules over \(\mathbb{R}\). Unpacking the definition of the interleaving distance yields the equations: \begin{itemize} \item \(d(f,g)=\max\{|x_1-x_2|,|y_1-y_2|\}\) and \item \(W(f)=D_{\mathrm{IL}}(\{f\},\emptyset)=\frac{1}{2}(y_1-x_1)\). \end{itemize} This is precisely the (\(\ell^\infty\) or \(\infty\)-Wasserstein) bottleneck distance that is most commonly used to measure distance between persistence diagrams in the persistent homology literature.
\end{example} \begin{remark}\label{rmk_induced} Let \(\mathcal{C}\) be any Krull-Schmidt category \cite{krause} and \(D\) any distance on the collection of objects in the category. Then there is a canonical bottleneck distance induced by \(D\), that being the one in which any two objects \(X,Y\) become associated to the multisets corresponding to their Krull-Schmidt decompositions \[ X=X_1\oplus\ldots\oplus X_m,\,\,Y=Y_1\oplus\ldots\oplus Y_n, \] and the bottleneck distance between those multisets is given by \begin{itemize} \item \(d(X_i,Y_j)=D(X_i,Y_j)\) and \item \(W(X_i)=D(X_i,0)\). \end{itemize} \end{remark} \subsubsection{Comparison of Bottleneck Distances} As one of the goals of this paper is finding minimal Lipschitz bounds between bottleneck distances, we discuss the relationship between comparing bottleneck distances directly, and comparing their component \(d\)'s and \(W\)'s. For two bottleneck distances \(D_1=\{d_1,W_1\}\) and \(D_2=\{d_2,W_2\}\), while the inequality \(D_1\leq D_2\) implies \(W_1\leq W_2\), it does \emph{not} necessitate that \(d_1\leq d_2\). This can be an obstacle to comparing different bottleneck distances. To remedy this, we define a canonical \(d\) and \(W\) for a given bottleneck distance \(D\) that will allow for a more natural means of comparison. \begin{definition}[Minimal generators]\label{def_bn_canon} For a bottleneck distance \(D=\{d,W\}\) on multisets of some set \(\Sigma\), define \(\bar{d}(\sigma,\tau)=D(\{\sigma\},\{\tau\})\). Then \(\bar{d}\leq d\), and the pairs \(\{d,W\}\) and \(\{\bar{d},W\}\) both generate the same \(D\). Call \(\{\bar{d},W\}\) the \emph{minimal generators} of the bottleneck distance \(D\). \end{definition} A bottleneck distance \(D\) fully recovers its minimal generators. Specifically: \begin{itemize} \item \(\bar{d}(\sigma,\tau)=D(\{\sigma\},\{\tau\})\), and \item \(W(\sigma)=D(\{\sigma\},\emptyset)\).
\end{itemize} We now get the desired comparison statement: \begin{prop}\label{prop_bn_compare} For two bottleneck distances \(D_1=\{d_1,W_1\}\) and \(D_2=\{d_2,W_2\}\), \(D_1\leq D_2\) if and only if \(\bar{d}_1\leq\bar{d}_2\) and \(W_1\leq W_2\). \end{prop} \begin{proof} The forward implication is immediate from the above statements about the recovery of \(\bar{d},W\) from \(D\). The reverse implication is immediate from the definition of \(D\) (as is the stronger statement: \(d_1\leq d_2\) and \(W_1\leq W_2\) \(\implies\) \(D_1\leq D_2\)). \end{proof} \begin{notation} From this point onward, we allow \(D(\{\sigma\},\{\tau\})\) to be shortened to \(D(\sigma,\tau)\) for bottleneck distances. \end{notation} \begin{comment} \begin{remark}\label{rmk_compare} Sharp bounds for \(D_1\leq X\cdot D_2\) are obtained as follows. \begin{itemize} \item Suppose \(W_1\leq A\cdot W_2\) for all indecomposables. \item Suppose \(d_1\leq B\cdot d_2\) for all pairs of indecomposables such that \(d_1\leq\max\{W_1's\}\) and \(d_2\leq\max\{W_2's\}\). \item Suppose \(\max\{W_1's\}\leq C\cdot d_2\) for all pairs of indecomposables such that \(d_1>\max\{W_1's\}\) and \(d_2\leq\max\{W_2's\}\). \end{itemize} Then \(X=\max\{A,B,C\}\). \begin{itemize} \item There is no need to know the Lipschitz constant for the last case. If \(d_2>\max\{W_2's\}\), then \[D_1\leq\max\{W_1's\}\leq A\cdot\max\{W_2's\}=D_2.\] \end{itemize} Furthermore, suppose we obtain \(A\) from \(W_1\leq A\cdot W_2\), and \(B'\) from \(d_1\leq B'\cdot d_2\) for all pairs such that \(d_2\leq \max\{W_2's\}\). If \(B'\geq A\) and is sharp for a pair with \(d_1\leq \max\{W_1's\}\), then there is no need to check for \(C\). \end{remark} \end{comment} \section{AR-Bottleneck Distance}\label{sec_ar_full} This bottleneck distance uses the graph-structure of some original quiver \(Q\)'s corresponding Auslander-Reiten quiver as a means of measuring the distance between indecomposable persistence modules. 
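Before turning to the AR-specific construction, the minimal generators of the classical bottleneck distance (Example \ref{ex_classic_bn}) are simple enough to state and spot-check in code. A minimal Python sketch, under the assumption that intervals are encoded as endpoint pairs \((x,y)\) (the names are hypothetical, not from any library); the loop verifies the \ref{eqn_triangle} compatibility on a small grid of intervals:

```python
def d_classic(f, g):
    """Minimal generator d: sup-norm distance on interval endpoints."""
    (x1, y1), (x2, y2) = f, g
    return max(abs(x1 - x2), abs(y1 - y2))

def W_classic(f):
    """Minimal generator W: interleaving distance from an interval to the zero module."""
    x, y = f
    return (y - x) / 2

# Spot-check |W(f) - W(g)| <= d(f, g) on all intervals with endpoints in 0..3.
intervals = [(x, y) for x in range(4) for y in range(x, 4)]
for f in intervals:
    for g in intervals:
        assert abs(W_classic(f) - W_classic(g)) <= d_classic(f, g)
```

This is only a sanity check of the compatibility condition, not a proof; the general inequality follows from \(|W(f)-W(g)|=\tfrac{1}{2}|(y_1-x_1)-(y_2-x_2)|\leq\tfrac{1}{2}(|x_1-x_2|+|y_1-y_2|)\leq d(f,g)\).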
\subsection{Definitions} Let $Q=\mathbb{A}_n$. Let $Q'$ be the AR quiver of $Q$. For indecomposables \(\sigma,\tau\) of \(Q\), let \(p=p_0\ldots p_l\) denote an unoriented path in \(Q'\) from \(\sigma\) to \(\tau\). The tail of a path is that of its final arrow, and the head is that of its initial arrow: \(tp=tp_l=\sigma\), and \(hp=hp_0=\tau\). \begin{definition}\label{def_ar_dist} Define the \emph{AR distance} between two indecomposables to be \[ \delta_{\mathrm{AR}}(\sigma,\tau)= \min_{p:\sigma\to\tau}\left\{\sum_{i=0}^{l}|\mathrm{dim}(Q'(hp_i)) -\mathrm{dim}(Q'(tp_i))|\right\}, \] where the \emph{dimension} of an indecomposable \(M\) of \(Q\) (equivalently, a vertex of \(Q'\)) is \[ \mathrm{dim}(M)=\displaystyle\sum_{i\in Q_0}\mathrm{dim}_KM(i), \text{ i.e., }\mathrm{dim}([x,y])=y-x+1. \] That is, \(\delta_{\mathrm{AR}}(\sigma,\tau)\) is the dimension-weighted path-length between \(\sigma\) and \(\tau\), minimized over all possible paths. \end{definition} \begin{ex} See Figure \ref{fig_ex_ar}. Consider the interval modules \([2,3],[3,6]\). \begin{itemize} \item Figure \ref{fig_ex1_ar}: \(\delta_{\mathrm{AR}}([2,3],[3,6])=4\). \item Figure \ref{fig_ex2_ar}: \(\delta_{\mathrm{AR}}([2,3],[3,6])=8\). \item Figure \ref{fig_ex3_ar}: \(\delta_{\mathrm{AR}}([2,3],[3,6])=10\).
\end{itemize} \end{ex} \begin{figure} \centering \begin{subfigure}[t]{1\textwidth} \centering \includegraphics[scale=.6]{tikz_ex1_A8.pdf} \caption{Equi-orientation of \(\mathbb{A}_8\) and its corresponding AR quiver.} \label{fig_ex1_ar} \end{subfigure} \begin{subfigure}[t]{1\textwidth} \centering \includegraphics[scale=.6]{tikz_ex2_A8.pdf} \caption{Orientation of \(\mathbb{A}_8\) and its corresponding AR quiver.} \label{fig_ex2_ar} \end{subfigure} \begin{subfigure}[t]{1\textwidth} \centering \includegraphics[scale=.6]{tikz_ex3_A8.pdf} \caption{Zigzag orientation of \(\mathbb{A}_8\) and its corresponding AR quiver.} \label{fig_ex3_ar} \end{subfigure} \caption{Three orientations of \(\mathbb{A}_8\) and their AR quivers, where edges of weight more than \(1\) are drawn with double lines and labeled by the difference in dimensions between the two indecomposables that they connect.} \label{fig_ex_ar} \end{figure} \begin{definition}\label{def_dar} Define the \emph{AR bottleneck distance} \(D_{\mathrm{AR}}\) on the space of indecomposable representations of $Q$ to be the bottleneck distance induced by: \begin{itemize} \item \(d_{\mathrm{AR}}(\sigma,\tau)=\delta_{\mathrm{AR}}(\sigma,\tau)\), \item \(W_{\mathrm{AR}}(\sigma)=\displaystyle\min_{t\in Q_0}\{\delta_{\mathrm{AR}}(\sigma,[t])\}+1\). \end{itemize} \end{definition} We can immediately check that \(D_{\mathrm{AR}}\) is indeed a bottleneck distance. \begin{prop}\label{prop_dar_triangle} \(D_{\mathrm{AR}}\) satisfies \ref{eqn_triangle}. \end{prop} \begin{proof} Simply note that for any \(\sigma,\tau\), and any simple \([t]\), by the graph-distance definition of \(\delta_{\mathrm{AR}}\) it is immediate that \[ d_{\mathrm{AR}}(\sigma,[t])\leq d_{\mathrm{AR}}(\sigma,\tau)+d_{\mathrm{AR}}(\tau,[t]), \] and so, choosing \([t]\) to attain the minimum defining \(W_{\mathrm{AR}}(\tau)\), \[ W_{\mathrm{AR}}(\sigma)\leq d_{\mathrm{AR}}(\sigma,[t])+1\leq d_{\mathrm{AR}}(\sigma,\tau)+W_{\mathrm{AR}}(\tau).
\] Combining with the symmetric statement (swapping \(\sigma\) and \(\tau\)) we get the full statement of \ref{eqn_triangle}. \end{proof} \begin{remark} The reason for the \(+1\) in the definition of \(W_{\mathrm{AR}}\) above is simply that there are no zero representations in the AR quiver. As in \cite{escolar_hiraoka}, we account for the distance to zero being distance to a simple indecomposable, plus one additional traversal (of dimension-weight \(1\)). Put another way, we attach a zero representation to every simple indecomposable in the AR quiver (see Figure \ref{fig_ex1_ar}). For \(Q=\mathbb{A}_n\), let \(\bar{Q}'\) denote the AR quiver of \(Q\) supplemented with the vertices \(0_i\) for all vertices \(i\) of \(Q\), and with extra edges \([i]\rightarrow 0_i\). Then we may alternatively define \(W_{\mathrm{AR}}(\sigma)=\displaystyle\min_{i\in Q_0}\{\delta_{\mathrm{AR}}(\sigma,0_i)\}\). \end{remark} \begin{ex} See Figure \ref{fig_ex_ar}. Consider the interval modules \([2,3],[3,6]\). \begin{itemize} \item Figure \ref{fig_ex1_ar}: \(D_{\mathrm{AR}}([2,3],[3,6])=4\). \item Figure \ref{fig_ex2_ar}: \(D_{\mathrm{AR}}([2,3],[3,6])=4\). \item Figure \ref{fig_ex3_ar}: \(D_{\mathrm{AR}}([2,3],[3,6])=8\). \end{itemize} \end{ex} \begin{comment} \begin{remark}[Equiorientations]\label{ex_equi} Consider Figure \ref{fig_ex1_ar}, where the \(\mathbb{A}_n\) quiver is of equiorientation (Definition \ref{def_an}). Throughout this paper, all AR quivers will be drawn in the way depicted in Figure \ref{fig_ex_ar}, which is common in quiver theoretic literature. That is, we depict AR quivers starting from projective simple modules on the left and leading to injective simple modules on the right. The key feature of the AR quiver of \emph{any} orientation of \(\mathbb{A}_n\) is its axis-like behavior. Diagonal slices have fixed \(y\)-coordinates, and off-diagonal slices have fixed \(x\)-coordinates.
Equiorientations further possess the property that these \(x\) and \(y\) axes are \emph{monotone}. With a \(45\degree\) clockwise rotation followed be a left-to-right mirroring, the AR quiver in Figure \ref{fig_ex1_ar} would `fit' into the real plane, with its intervals corresponding to real coordinates. To formalize this: Associate to any interval module \([b,d]\) the \emph{real} interval module \([b,d+1)\). The right endpoint's \(+1\) discrepancy comes from the fact that [closed,open) intervals are the usual birth/death indexing method for barcodes over \(\mathbb{R}\) (and we choose \([\text{closed},\text{closed}]\) for intervals over subsets of \(\mathbb{Z}\)). Then consider the real interval \([x,y+1)\) as a point \((x,y+1)\) in \(\mathbb{R}^2\). The following is formalized in Proposition \ref{prop_formula}. For any intervals \([x_1,y_1]\), \([x_2,y_2]\) over an equioriented \(\mathbb{A}_n\), it turns out that \[ \bar{d}_{\mathrm{AR}}([x_1,y_1],[x_2,y_2])=|x_1-x_2|+|y_1-y_2| =||(x_1,y_1+1),(x_2,y_2+1)||_1. \] That is, the AR bottleneck distance between a pair of intervals over an equioriented \(\mathbb{A}_n\) is the same as the \(\ell^1\) (taxicab) distance between the corresponding points in \(\mathbb{R}^2\). \end{remark} \begin{remark}[Non-equiorientations] Consider \(\mathbb{A}_8\) with the orientations and subsequent AR quivers depicted in Figures \ref{fig_ex2_ar} and \ref{fig_ex3_ar}. Note the complication in this example that does not appear in Figure \ref{fig_ex1_ar}: that in the later two orientation there are adjacent indecomposables whose dimensions (Definition \ref{def_ar_dist}) change by more than \(1\). We draw edges between such indecomposables with double lines, and label them by the difference in dimension. All unembellished edges imply a dimension difference of \(1\). Note that the axis-like behavior of the left and right endpoints remains, but these axes are no longer monotone. 
\end{remark} \end{comment} \subsection{AR Quiver Construction Algorithm}\label{sec_ar} From here we present an algorithm for determining the shape of the Auslander-Reiten quiver for any quiver of the form \(Q=\mathbb{A}_n\). This algorithm arises as a consequence of the Knitting Algorithm (see \cite{schiffler} Chapter 3.1.1), but has been streamlined to the specific case of \(Q=\mathbb{A}_n\), and is able to elucidate the full structure of such AR quivers without the sequential construction method that the Knitting Algorithm and other similar methods require. We maintain the convention of many quiver theoretic publications, in which the AR quiver is drawn with arrows always directed left to right, with the leftmost indecomposables being simple projectives and the rightmost indecomposables being simple injectives. Vertical orientation is arbitrary, but will be fixed under the following method. Key to this structural result about AR quivers for arbitrary orientations of any \(\mathbb{A}_n\) is the fact that the indecomposables fit into a \emph{diagonal grid} with axes for the left and right endpoints of the intervals. The algorithm instructs the formation of these axes, which subsequently induce the entire shape of the AR quiver. \begin{notation} There are two separate and obvious orderings on the vertices of any orientation of \(\mathbb{A}_n\), the first being the ordering of the vertices according to their labeling as a subset of \(\mathbb{Z}\), and the second being the ordering given by the poset relation \(\leq_P\). The following discussions are carried out in the language of the \emph{vertices as a subset of} \(\mathbb{Z}\). So, by all comparative words (increasing, decreasing, greater, lesser) we will mean relative to the inherited \(\mathbb{Z}\)-ordering of the vertices from left to right in the poset. 
\end{notation} \afterpage{ \begin{figure} \centering \includegraphics[scale=.75]{tikz_AR_4full} \caption{An orientation of \(\mathbb{A}_n\) and the subsequent arrangements of the \(x\) and \(y\) axes.} \label{fig_ar_super1} \end{figure} \begin{figure} \centering \includegraphics[scale=.75]{tikz_AR_master_2} \caption{AR quiver for an orientation on \(\mathbb{A}_n\). The purple arrows denote the direction of decreasing dimension of indecomposables (always away from the diagonals \(x=1\) and \(y=n\)).} \label{fig_ar_super2} \end{figure} \clearpage } \begin{alg}\label{alg_main} The construction of the left and right (\(x\) and \(y\)) axes of the AR quiver for some \(Q=\mathbb{A}_n\) is as follows. \begin{itemize} \item For the \(x\)-axis (south west to north east), list the vertices in the following order: Take all vertices of \(\mathbb{A}_n\) that are in some segment of the form \((\text{min},\text{next max}]\), and list them on the axis in \emph{reverse} \(\leq_{\mathbb{Z}}\) order. Then, take all remaining vertices and list them in forward \(\leq_{\mathbb{Z}}\) order. Note that the values of this \(x\)-axis always increase \emph{away} from \(x=1\). \item For the \(y\)-axis (north west to south east), list the vertices in the following order: Take all vertices of \(\mathbb{A}_n\) that are in some segment of the form \([\text{max},\text{next min})\), and list them on the axis in forward \(\leq_{\mathbb{Z}}\) order. Then, take all remaining vertices and list them in \emph{reverse} \(\leq_{\mathbb{Z}}\) order. Note that the values of this \(y\)-axis always increase \emph{toward} \(y=n\). \end{itemize} \end{alg} \begin{ex} In Figure \ref{fig_ar_super1}, we represent an orientation of \(\mathbb{A}_n\) with an implied arbitrary density of vertices along the edges. Segments of the poset are taken and rearranged to form the \(x\) and \(y\) axes according to the algorithm.
\end{ex} \begin{notation}\label{not_ewsn} From the separation made by the diagonals \(x=1\) and \(y=n\), we label the corresponding regions of the AR quiver by the four cardinal compass directions. \(\mathcal{E}_Q\subset\Sigma_Q\) is the collection of all interval modules \([x,y]\) where the vertex \(x\) is contained in some \(Q\) interval of the form \((\text{sink},\text{next source}]\), and \(y\) is in some \([\text{source},\text{next sink})\) (and \(x\neq 1,y\neq n\)). \(\mathcal{W}_Q\subset\Sigma_Q\) is the collection of all interval modules \([x,y]\) where the vertex \(x\) is contained in some \((\text{source},\text{next sink}]\), and \(y\) is in some \([\text{sink},\text{next source})\) (\(x\neq 1,y\neq n\)). \(\mathcal{S}_Q\subset\Sigma_Q\) is the collection of all interval modules \([x,y]\) where the vertex \(x\) is contained in some \((\text{source},\text{next sink}]\), and \(y\) is in some \([\text{source},\text{next sink})\) (\(x\neq 1,y\neq n\)). \(\mathcal{N}_Q\subset\Sigma_Q\) is the collection of all interval modules \([x,y]\) where the vertex \(x\) is contained in some \((\text{sink},\text{next source}]\), and \(y\) is in some \([\text{sink},\text{next source})\) (\(x\neq 1,y\neq n\)). Let \(\bar{\mathcal{E}}\) (similarly \(\bar{\mathcal{W}}\), \(\bar{\mathcal{S}}\), \(\bar{\mathcal{N}}\)) denote the original region along with all diagonal modules (those with either \(x=1\) or \(y=n\)) that are adjacent to it in the AR quiver. In addition, in all four cases, let this set also include the module \([1,n]\). \end{notation} \begin{remark}\label{rmk_monotone} Within each of the regions \(\bar{\mathcal{E}}\), \(\bar{\mathcal{W}}\), \(\bar{\mathcal{S}}\), and \(\bar{\mathcal{N}}\), the \(x\) and \(y\) coordinate axes are \emph{monotone} (Figure \ref{fig_ar_super2}). \end{remark} The following is a direct consequence of Algorithm \ref{alg_main} (and Remark \ref{rmk_monotone}). 
\begin{prop}[Formula for \(\delta_{\mathrm{AR}}\)]\label{prop_formula} Let \(\sigma = [x_1,y_1]\) and \(\tau = [x_2,y_2]\) be indecomposables over \(Q\). Then the graph distance \(\delta_{\mathrm{AR}}(\sigma,\tau)\) of Definition \ref{def_ar_dist} is given by \[ \delta_{\mathrm{AR}}(\sigma,\tau)=\delta^x(\sigma,\tau)+\delta^y(\sigma,\tau), \] where \[ \delta^x(\sigma,\tau)=\left\{ \begin{array}{ll} |x_1-x_2| & \text{if }\sigma,\tau\in\bar{\mathcal{E}}\cup\bar{\mathcal{N}}\\ & \text{ or }\sigma,\tau\in\bar{\mathcal{W}}\cup\bar{\mathcal{S}}, \\ x_1-1+x_2-1 & \text{otherwise}. \end{array}\right. \] and \[ \delta^y(\sigma,\tau)=\left\{ \begin{array}{ll} |y_1-y_2| & \text{if }\sigma,\tau\in\bar{\mathcal{W}}\cup\bar{\mathcal{N}}\\ & \text{ or }\sigma,\tau\in\bar{\mathcal{E}}\cup\bar{\mathcal{S}}, \\ n-y_1+n-y_2 & \text{otherwise}. \end{array}\right. \] \end{prop} Proposition \ref{prop_formula} follows immediately from the monotonicity of the two axes in each of the four regions of the AR quiver. \subsection{Distance to Zero in \(D_{\mathrm{AR}}\)} The dimension of an indecomposable is a lower bound for its \(W_{\mathrm{AR}}\) value. The following characterizes precisely when this is achieved. \begin{prop}\label{prop_decreasing_path} For any indecomposable \(\sigma = [x,y]\), \(W_{\mathrm{AR}}(\sigma)\geq y-x+1\). Furthermore, \(W_{\mathrm{AR}}(\sigma)=y-x+1\) if and only if there is a path of decreasing dimension from \(\sigma\) to a simple indecomposable in the AR quiver. \end{prop} \begin{proof} The first statement is immediate from the dimension-weighting of the edges in the definition of \(\delta_{\mathrm{AR}}\) (Definition \ref{def_ar_dist}) and the induced distance \(D_{\mathrm{AR}}\) (Definition \ref{def_dar}). Let \(\sigma=[x,y]\) be an indecomposable with a decreasing path to some simple \([t]\). Then necessarily \(x\leq t\leq y\), and the existence of a decreasing path guarantees that \([x,y]\) and \([t]\) are in the same closed region \(\bar{\mathcal{E}}\), \(\bar{\mathcal{W}}\), \(\bar{\mathcal{S}}\), or \(\bar{\mathcal{N}}\).
Hence, \(\delta_{\mathrm{AR}}([x,y],[t])=t-x+y-t=y-x\), and so \(W_{\mathrm{AR}}(\sigma)=y-x+1=\mathrm{dim}(\sigma)\). The converse also follows from the definitions cited above. If there is not a path of decreasing dimension, then any path of minimal weight from \([x,y]\) to \([t]\) must be of the form \[ [x,y]\to\ldots\to [x_1,y_1]\to[x_2,y_2]\to\ldots\to[t] \] where \([x,y]\supset[x_1,y_1]\subset[x_2,y_2]\supset[t]\). Then, \(\delta_{\mathrm{AR}}([x,y],[t])\geq t-x+y-t+(x_1-x_2)+(y_2-y_1)\) where at least one of the parenthetical terms is strictly positive. \end{proof} \begin{corollary}\label{cor_escape_east_west} For any indecomposable \(\sigma=[x,y]\in\bar{\mathcal{E}}\cup\bar{\mathcal{W}}\), \[ W_{\mathrm{AR}}(\sigma)=\mathrm{dim}(\sigma). \] \end{corollary} \begin{proof} Note that the projective simple and injective simple indecomposable modules form (respectively) the outer corners of the east and west regions, and it is immediate from the shape of the AR quiver (Algorithm \ref{alg_main}) that there are decreasing paths from any module in \(\bar{\mathcal{E}}\) or \(\bar{\mathcal{W}}\) to one of these. \end{proof} For any indecomposable in the north and south regions, from Figure \ref{fig_ar_super1} we see that there exists a path of decreasing dimension to the flat north or south \emph{boundary}, but these boundaries do not consist exclusively of simple indecomposables. This complicates the situation for \(W_{\mathrm{AR}}(\sigma)\) when \(\sigma\in\mathcal{N}\cup\mathcal{S}\). \begin{definition} The \emph{north boundary} is the collection of indecomposables that comprise the very top of the AR quiver. As a consequence of Algorithm \ref{alg_main} (see also Notation \ref{not_ewsn}) this is exactly the set \[ B_N=\{N_i=[\text{source},\text{next sink}]\}\cup\{[s]:s\not\in\cup_iN_i\}\subset\bar{\mathcal{N}}. \] The intervals are listed left to right on the boundary of the AR quiver in \emph{increasing} order of their endpoints (as a subset of \(\mathbb{Z}\)).
This is the construction pictured above: the north boundary is all red intervals and blue simples listed in sequence according to \(\leq_\mathbb{Z}\). The \emph{south boundary} is \[ B_S=\{S_j=[\text{sink},\text{next source}]\}\cup\{[s]:s\not\in\cup_jS_j\}\subset\bar{\mathcal{S}}. \] These are listed left to right in the AR quiver in \emph{decreasing} order (as a subset of \(\mathbb{Z}\)). \end{definition} \begin{example}\label{ex_bd} Consider the orientation of \(Q=\mathbb{A}_{10}\) and its north and south boundaries as seen in Figure \ref{fig_trunc}. The red intervals are the starting points for finding intervals with \(W_{\mathrm{AR}}>\mathrm{dim}\). Note first that by Corollary \ref{cor_escape_east_west} the red intervals \([1,2]\) and \([9,10]\) in fact satisfy \(W_{\mathrm{AR}}=\mathrm{dim}\) as they are in \(\bar{\mathcal{W}}\) and \(\bar{\mathcal{E}}\) respectively. The boundary intervals contained strictly within \(\mathcal{N}\) or \(\mathcal{S}\) are of potential concern. Any such non-simple indecomposable has \(W_{\mathrm{AR}}>\mathrm{dim}\). This follows by observing that all paths leading away from these indecomposables are paths of \emph{increasing} dimension, violating the condition of Proposition \ref{prop_decreasing_path}. \begin{figure} \centering \includegraphics[scale=.7]{tikz_AR_bd_ex} \caption{From Example \ref{ex_bd}. These are the truncated views of an AR quiver highlighting the structures of its north and south boundaries. The indecomposables with \(W_{\mathrm{AR}}>\mathrm{dim}\) are outlined.} \label{fig_trunc} \end{figure} These are still not the only intervals with \(W_{\mathrm{AR}}>\mathrm{dim}\), however. In this example, we see that the full collection of such intervals is \begin{itemize} \item North: \([4,5],[8,9]\). \item South: \([5,8],[2,4],[2,8]\).
\end{itemize} The southern collection of intervals manifests the final feature of interest: since the boundary intervals \([5,8]\) and \([2,4]\) are adjacent, the interval \([2,8]\) caught above them also has no decreasing path to a simple indecomposable. \end{example} The preceding discussion motivates the following classification. \begin{definition}\label{defn_hull} For an orientation \(Q\) of \(\mathbb{A}_n\), define \(\mathrm{hull}(Q)\) to be the union of the following sets: \[ H_N=\{[\text{source},\text{sink}]\subset[2,n-1]:\text{any subintervals of the form }[\text{sink},\text{next source}]\text{ have length one}\}, \] and \[ H_S=\{[\text{sink},\text{source}]\subset[2,n-1]:\text{any subintervals of the form }[\text{source},\text{next sink}]\text{ have length one}\}. \] Each set vacuously includes the intervals with no interior subintervals of opposite orientation. Call \(H_N\subset\mathcal{N}\) the \emph{north hull} and \(H_S\subset\mathcal{S}\) the \emph{south hull}. \end{definition} To conclude this section we will provide explicit formulas for \(W_{\mathrm{AR}}(\sigma)\) when \(\sigma\in\mathrm{hull}(Q)\), resulting in upper and lower bounds on \(W_{\mathrm{AR}}\) (Proposition \ref{prop_diam}). \begin{lemma}[Values of \(W_{\mathrm{AR}}\) for \(\mathrm{hull}(Q)\)]\label{lemma_hull_w} Let \(Q\) be some orientation of \(\mathbb{A}_n\) with non-empty hull. Suppose \([x,y]\in H_N\) (symmetrically, \([x,y]\in H_S\)). Define \([x_\bullet,y_\bullet]\) to be the largest interval containing \([x,y]\) that is also in \(H_N\). Let \(e=x_\bullet-1\) and \(E=y_\bullet+1\). Then \(W_{\mathrm{AR}}([x,y])\) is attained by passing through one of the simples \([e]\) or \([E]\). That is, \[ W_{\mathrm{AR}}([x,y])=\min\{\delta_{\mathrm{AR}}([x,y],[e]),\delta_{\mathrm{AR}}([x,y],[E])\}+1.
\] Moreover, the precise distances to these indecomposables are given by \begin{equation*} \delta_{\mathrm{AR}}([x,y],[e])= \left\{ \begin{array}{ll} x+y-2 & \text{if }e>1\text{ and is the leftmost sink}\\ \\ x+y-2e & \text{otherwise} \end{array}\right. \end{equation*} and \begin{equation*} \delta_{\mathrm{AR}}([x,y],[E])= \left\{ \begin{array}{ll} 2E-(x+y) & \text{if }E<n\text{ and is the rightmost source}\\ \\ 2n-(x+y) & \text{otherwise}. \end{array}\right. \end{equation*} \end{lemma} \begin{proof} Let \([x,y]\in H_N\), meaning that \(x\) is a source and \(y\) is a sink. Let \(1\leq t\leq n\). \textit{Case \(t<x_\bullet\) :} We proceed by possible regions in which \([t]\) may lie and give the corresponding \(\delta_{\mathrm{AR}}\). \begin{flalign*} & [t]\in\bar{\mathcal{N}}: & & \delta_{\mathrm{AR}}([x,y],[t]) = x+y-2t & \tag{low-N} \\ & [t]\in\bar{\mathcal{E}}\setminus\bar{\mathcal{N}}: & & \delta_{\mathrm{AR}}([x,y],[t]) = 2(n-t)-(y-x) & \tag{low-E} \\ & [t]\in\bar{\mathcal{W}}\setminus\bar{\mathcal{N}}: & & \delta_{\mathrm{AR}}([x,y],[t]) = x+y-2 & \tag{low-W} \\ & [t]\in\mathcal{S}: & & \delta_{\mathrm{AR}}([x,y],[t]) = 2(n-1)-(y-x) & \tag{low-S} \end{flalign*} \textit{Case \(y_\bullet<t\) :} \begin{flalign*} & [t]\in\bar{\mathcal{N}}: & & \delta_{\mathrm{AR}}([x,y],[t])=2t-x-y & \tag{high-N} \\ & [t]\in\bar{\mathcal{E}}\setminus\bar{\mathcal{N}}: & & \delta_{\mathrm{AR}}([x,y],[t])=2n-x-y & \tag{high-E} \\ & [t]\in\bar{\mathcal{W}}\setminus\bar{\mathcal{N}}: & & \delta_{\mathrm{AR}}([x,y],[t])= 2(t-1)-(y-x) & \tag{high-W} \\ & [t]\in\mathcal{S}: & & \delta_{\mathrm{AR}}([x,y],[t])= 2(n-1)-(y-x) & \tag{high-S} \end{flalign*} \textit{Case \(x_\bullet\leq t\leq y_\bullet\) :} The only possibilities are \begin{enumerate} \item \([t]\) is some source with \(x_\bullet\leq t<y_\bullet\), so \([t]\) is in the east region. That is, \[ \delta_{\mathrm{AR}}([x,y],[t])=|x-t|+2n-y-t. \] Clearly, this value is minimized by any source \(m\) with \(x\leq m<y_\bullet\).
Choosing any of these gives us \begin{equation*} \delta_{\mathrm{AR}}([x,y],[t])=\delta_{\mathrm{AR}}([x,y],[x])=2n-x-y. \tag{mid-E} \end{equation*} \item \([t]\) is some sink with \(x_\bullet<t\leq y_\bullet\), so \([t]\) is in the west region. That is, \[ \delta_{\mathrm{AR}}([x,y],[t])=x-1+t-1+|y-t|. \] Clearly, this value is minimized by any sink \(m\) with \(x_\bullet< m\leq y\). Choosing any of these gives us \begin{equation*} \delta_{\mathrm{AR}}([x,y],[t])=\delta_{\mathrm{AR}}([x,y],[y])=x+y-2. \tag{mid-W} \end{equation*} \item \([t]\) is anything else, in which case it is interior to a segment of the form \([\textrm{source},\textrm{next sink}]\), and thus lies on the south boundary. That is, \begin{equation*} \delta_{\mathrm{AR}}([x,y],[t])=2(n-1)-(y-x) \tag{mid-S} \end{equation*} \end{enumerate} We exclude various equations from consideration. \begin{itemize} \item It is easy to check that (low-N) \(\leq\) (low-W) \(\leq\) (high-W) \(\leq\) (high-S). As \(x_\bullet\) is a source and \(x_\bullet\geq 2\), there always exists some sink \(t<x_\bullet\) (and thus \([t]\in\bar{\mathcal{W}}\)), so we need never use the largest two equations. \item Similarly, (high-N) \(\leq\) (high-E) \(\leq\) (low-E) \(\leq\) (low-S). As \(y_\bullet\) is a sink and \(y_\bullet\leq n-1\), there always exists some source \(t>y_\bullet\) (and thus \([t]\in\bar{\mathcal{E}}\)), so we need never use the largest two equations. \item The mid-type equations need not be considered either. Simply note that (mid-E) \(=\) (high-E), (mid-W) \(=\) (low-W), and (mid-S) \(=\) (low-S) \(=\) (high-S). \end{itemize} From this, we can conclude that no matter the poset orientation, the only candidates for minimizing \(\delta_{\mathrm{AR}}([x,y],[t])\) are (low-N), (low-W), (high-N), and (high-E). The only time that there is \emph{no} (low-N) candidate is if \(e\) is the leftmost sink and \(e\neq 1\). But in this case, \(e=x_\bullet-1\) is a candidate for (low-W).
Conversely, if there is \emph{any} (low-N) candidate, then \(e=x_\bullet-1\) is also a candidate, and minimizes the expression. The symmetric statements are true of (high-N) and (high-E), which are minimized by substituting \(E\). The statement of the lemma follows. \end{proof} \begin{comment} \begin{lemma}\label{lemma_lower} If \([x,y]\in \mathrm{hull}(P)=H_N\cup H_S\), then \(W_{\mathrm{AR}}([x,y])>y-x+1\). \end{lemma} \begin{proof} Suppose throughout that \([x,y]\in H_N\), as the \(H_S\) case is symmetric. We will show that for any vertex \(e\) of \(\mathbb{A}_n\), \[ \delta_{\mathrm{AR}}([x,y],[e])+1>y-x+1. \] \textit{Case \(e<x\) :} We will show desired inequality for \([e]\) lying in any of the quadrants of the AR quiver. \begin{itemize} \item North (or diagonal): \(\delta_{\mathrm{AR}}([x,y],[e])=(x-e)+(y-e)>y-x\) as \(e<x.\) \item East: \(\delta_{\mathrm{AR}}([x,y],[e])=(x-e)+(n-y+n-e)=2(n-e)-(y+x)>2(y-x)-(y-x)=y-x\). \item West: \(\delta_{\mathrm{AR}}([x,y],[e])=(x-1+e-1)+(y-e)=y+x-2>y-x\) as \(x\neq 1\). \item South: \(\delta_{\mathrm{AR}}([x,y],[e])=(x-1+e-1)+(n-y+n-e)=2(n-1)-(y-x)>n-1>y-x\) as \([1,n]\) cannot be in the hull. \end{itemize} \textit{Case \(x\leq e\leq y\) :} In this case, \([e]\) is either \([x]\), \([y]\), or is on the south boundary. \begin{itemize} \item East, \([e]=[x]\): \(\delta_{\mathrm{AR}}([x,y],[x])=n-y+n-x=2n-y-x>2y-y-x=y-x.\) \item West, \([e]=[y]\): \(\delta_{\mathrm{AR}}([x,y],[y])=(x-1+y-1)=y+x-2>y-x\) as \(x\neq 1\). \item South: identical to the `south' argument in the previous case. \end{itemize} \textit{Case \(y<e\) :} This is completely symmetric to the first case. \end{proof} \begin{lemma}\label{lemma_upper} If \([x,y]\in\mathrm{hull}(P)\), then \(W_{\mathrm{AR}}([x,y])\leq n\). \end{lemma} \begin{proof} By symmetry, we may assume \([x,y]\in H_N\). For \([x,y]\in H_N\), let \([x',y']\in H_N\) be the largest interval such that \([x,y]\subset[x',y']\). Then define \(e=x'-1\) and \(E=y'+1\).
Then we show that either \(\delta_{\mathrm{AR}}([x,y],[e])+1\leq n\) or \(\delta_{\mathrm{AR}}([x,y],[E])+1\leq n\). \textit{Distances to \([e]\) and \([E]\):} As noted in the proof of Lemma \ref{lemma_lower}, an interval like \([e]\) is either in the north boundary or in the west region. In the first case (north boundary), \[ \delta_{\mathrm{AR}}([x,y],[e])+1=y-e+x-e+1=x+y-2e+1.\tag{L1} \] Otherwise (west region), \[ \delta_{\mathrm{AR}}([x,y],[e])+1=y-e+(x-1+e-1)+1=x+y-1.\tag{L2} \] Similarly, \(E=y'+1\) is either in the north boundary or in the east region. In the first case (north boundary), \[ \delta_{\mathrm{AR}}([x,y],[E])+1=E-x+E-y+1=2E-(x+y)+1.\tag{R1} \] Otherwise (east region), \[ \delta_{\mathrm{AR}}([x,y],[E])+1=E-x+(n-E+n-y)+1=2n-(x+y)+1.\tag{R2} \] \textit{Upper bound of \(n\):} It remains to show that \(\min\{\delta_{\mathrm{AR}}([x,y],[e])+1,\delta_{\mathrm{AR}}([x,y],[E])+1\}\leq n\). More strongly, we show that the \emph{sum} of these two terms is always \(\leq 2n\). \begin{itemize} \item \(W_{\mathrm{AR}}(\sigma)\leq\min\{\text{(L1)},\text{(R1)}\}\leq\frac{1}{2}(\)(L1)\(+\)(R1)\()=E-e+1\leq n\), \item \(W_{\mathrm{AR}}(\sigma)\leq\min\{\text{(L1)},\text{(R2)}\}\leq\frac{1}{2}(\)(L1)\(+\)(R2)\()=n-e+1\leq n\), \item \(W_{\mathrm{AR}}(\sigma)\leq\min\{\text{(L2)},\text{(R1)}\}\leq\frac{1}{2}(\)(L2)\(+\)(R1)\()=E\leq n\), \item \(W_{\mathrm{AR}}(\sigma)\leq\min\{\text{(L1)},\text{(R1)}\}\leq\frac{1}{2}(\)(L2)\(+\)(R2)\()=n\). \end{itemize} This guarantees the desired upper bound. \end{proof} \end{comment} \begin{lemma}\label{lemma_non_hull} If \([x,y]\in(\mathcal{N}\cup\mathcal{S})\setminus\mathrm{hull}(P)\), then \(W_{\mathrm{AR}}([x,y])=\mathrm{dim}([x,y])\). \end{lemma} \begin{proof} If \([x,y]\not\in\mathrm{hull}(P)\), then there exists \(t\in[x,y]\) such that \([t]\) is on the boundary of the same region in which \([x,y]\) lies. Then \(\delta_{\mathrm{AR}}([x,y],[t])=t-x+y-t=y-x\), and so \(W_{\mathrm{AR}}([x,y])=y-x+1=\mathrm{dim}([x,y])\).
\end{proof} The subsequent proposition follows from Lemmas \ref{lemma_hull_w}, \ref{lemma_non_hull} and Corollary \ref{cor_escape_east_west}: \begin{prop}\label{prop_diam} All intervals \(\sigma\) have the property that \[ \mathrm{dim}(\sigma)\leq W_{\mathrm{AR}}(\sigma)\leq n. \] The set \(\mathrm{hull}(P)\) is precisely the collection of intervals \(\sigma\) such that \(W_{\mathrm{AR}}(\sigma)>\mathrm{dim}(\sigma)\). Furthermore, the diameter \(W_{\mathrm{AR}}=n\) is always attained by the indecomposable \([1,n]\). \end{prop} And, as \(D_{\mathrm{AR}}(\sigma,\tau)\leq\max\{W_{\mathrm{AR}}(\sigma),W_{\mathrm{AR}}(\tau)\}\) for all pairs \(\sigma,\tau\), we get the following corollary. \begin{corollary}\label{cor_dar_n} For any \(P=\mathbb{A}_n\), \(D_{\mathrm{AR}}\leq n\). \end{corollary} \subsection{Behavior of \(D_{\mathrm{AR}}\) on Pure Zigzag Orientations}\label{sec_zz_shape} Recall that in Definition \ref{def_an} we say \(P=\mathbb{A}_n\) has \emph{pure zigzag orientation} if the directions of any two adjacent arrows are opposite; alternatively, if every vertex is a source (minimal) or a sink (maximal). \begin{figure} \centering \includegraphics[scale=1]{tikz_AR_zigzag_master} \caption{AR quiver of the \(\mathbb{A}_{11}\) zigzag quiver with upward orientation.} \label{fig_zig_ar} \end{figure} As the zigzag orientation is often of particular independent interest, we present here some properties of \(D_{\mathrm{AR}}\) specifically for the zigzag setting. The Auslander-Reiten quiver of a zigzag orientation of \(\mathbb{A}_{11}\) is shown in Figure \ref{fig_zig_ar}. \begin{notation} There are slight differences in the AR quiver depending on whether the orientation begins and ends at a maximum or a minimum.
This results in four zigzag orientation types, which we label as follows for convenience: \begin{itemize} \item in (uu) orientation, \(1\) and \(n\) are sinks, \item in (ud) orientation, \(1\) is a sink and \(n\) is a source, \item in (du) orientation, \(1\) is a source and \(n\) is a sink, \item in (dd) orientation, \(1\) and \(n\) are sources. \end{itemize} \end{notation} \begin{remark}[Hull of zigzag orientation] From Definition \ref{defn_hull}, we immediately see that an \(\mathbb{A}_n\) quiver with zigzag orientation has \(H_N=\{[\textrm{min},\textrm{max}]\subset[2,n-1]\}\) and \(H_S=\{[\textrm{max},\textrm{min}]\subset[2,n-1]\}\). That is, \(\mathrm{hull}(P)\) is precisely the entire north and south regions of the AR quiver (which excludes the diagonals). \end{remark} As an immediate consequence of Lemma \ref{lemma_hull_w}, we have the following. \begin{corollary}[to Lemma \ref{lemma_hull_w}]\label{cor_1zz_w} For a zigzag orientation of some \(\mathbb{A}_n\) quiver, any \(\sigma=[x,y]\) in \(\mathrm{hull}(P)\) has \[ W_{\mathrm{AR}}(\sigma)=\min\{x+y-1,2n-x-y+1\}. \] \end{corollary} \begin{example}\label{ex_bad} For \(D_{\mathrm{AR}}\) over zigzag orientations, there exist intervals of small dimension and large \(W_{\mathrm{AR}}\) value. Consider \(\mathbb{A}_{100}\) with either (ud) or (du) zigzag orientation. In either case, \(\sigma=[50,51]\) has a dimension of \(2\), but by Corollary \ref{cor_1zz_w}, \(W_{\mathrm{AR}}(\sigma)=100=\mathrm{diam}(W_{\mathrm{AR}})\) (Proposition \ref{prop_diam}). \end{example} \begin{example}\label{ex_worse} To extend the previous example to any zigzag orientation of \(\mathbb{A}_n\), consider: \begin{itemize} \item if \(n\) is even, the indecomposable \([n/2,n/2+1]\) has dimension \(2\) and \(W_{\mathrm{AR}}\) value of \(n\), \item if \(n\) is odd, the indecomposable \([\frac{n-1}{2},\frac{n-1}{2}+1]\) has dimension \(2\) and \(W_{\mathrm{AR}}\) value of \(n-1\).
\end{itemize} \end{example} \begin{remark}\label{rmk_alt_zigzag} \afterpage{ \begin{figure} \centering \begin{subfigure}[t]{1\textwidth} \centering \includegraphics[scale=.4]{tikz_an_boundary_ex_combine} \end{subfigure} \vspace{.5cm} \begin{subfigure}[t]{1\textwidth} \centering \includegraphics[width=\textwidth]{tikz_ar_zz_21_crop} \caption{AR quiver of the pure zigzag orientation of \(\mathbb{A}_{21}\).} \label{fig_7_1} \end{subfigure} \vspace{.5cm} \begin{subfigure}[t]{1\textwidth} \centering \includegraphics[width=\textwidth]{tikz_ar_zVz_21_crop} \caption{AR quiver of the zigzag orientation of \(\mathbb{A}_{21}\) with one consecutive pair of same-direction edges in the middle.} \label{fig_7_2} \end{subfigure} \vspace{.5cm} \begin{subfigure}[t]{1\textwidth} \centering \includegraphics[width=\textwidth]{tikz_ar_z2z2_21_crop} \caption{AR quiver of the orientation of \(\mathbb{A}_{21}\) whose direction switches every other vertex.} \label{fig_7_3} \end{subfigure} \vspace{.5cm} \caption{Three orientations of \(\mathbb{A}_{21}\) and the north boundaries of their AR quivers. Not only do the ``problem'' regions shrink, but the disparities between \(W_{\mathrm{AR}}\) and dimension of the problem indecomposables also shrink.} \label{fig_boundary_ex_triple} \end{figure} \clearpage } Note that any orientation that deviates from ``pure'' zigzag (Figure \ref{fig_7_1}) exhibits reduced dimension-to-\(W_{\mathrm{AR}}\) disparities. For example, consider a poset with zigzag orientation everywhere save for the middle of the poset, in which there is a consecutive pair of rightward (or leftward) edges \(\rightarrow\rightarrow\) (Figure \ref{fig_7_2}). This splits the entire north region from one giant hull into two hulls by introducing the simple \([10]\) in the middle of the north boundary, providing a path of decreasing dimension to a simple for many modules formerly in the hull.
For another example, if \(\mathbb{A}_n\) has orientation \(\cdot\cdot\cdot\hspace{-.1cm} \rightarrow\rightarrow\leftarrow\leftarrow\rightarrow\rightarrow \hspace{-.1cm}\cdot\cdot\cdot\) where the zigzag feature switches every \emph{other} vertex (Figure \ref{fig_7_3}), then it turns out that the difference \(W_{\mathrm{AR}}(\sigma)-\mathrm{dim}(\sigma)\in\{0,2\}\) for all indecomposables \(\sigma\) due to the dense distribution of simples along the north boundary. The last orientation in this example proves to be a worthwhile course of investigation for zigzag persistence, and is the focus of Section \ref{sec_stab_r}. \end{remark} \subsection{\(D_{\mathrm{BL}}\) and \(D_{\mathrm{AR}}\): Features and Stability}\label{sec_further} In this section we discuss the block distance \(D_{\mathrm{BL}}\) of \cite{botnan_lesnick} and explore the differences and similarities between \(D_{\mathrm{BL}}\) and the Auslander-Reiten quiver distance \(D_{\mathrm{AR}}\). There is one rather cumbersome notational concern to be overcome when considering these two distances: for quiver theoretic purposes we have labeled our vertices in sequential order on the zigzag quiver itself, while recent literature considers zigzag intervals as indexed over a particular poset denoted \(\mathbb{Z}\mathbb{Z}\), which then corresponds to some persistence module in \(\mathbb{R}^2\). The disparity of notation and structure will be addressed with care when it comes time to consider the distances side by side (Definition \ref{defn_convert}), but is worth bearing in mind throughout. As such we adopt the following convention: \begin{notation} For a zigzag interval \(I\), denote by \(I_{\mathbb{A}}\) the interval as viewed over a \(\mathbb{Z}\)-labeled \(\mathbb{A}_n\) quiver, and by \(I_{\mathbb{Z}\mathbb{Z}}\) a corresponding interval over the poset \({\mathbb{Z}\mathbb{Z}}\) (Definition \ref{defn_zz}).
One final note: there is \emph{no canonical association} of vertices in \(\mathbb{A}_n\) with points in \({\mathbb{Z}\mathbb{Z}}\). Throughout, we refuse to declare any point at which \(\mathbb{A}_n\) and \({\mathbb{Z}\mathbb{Z}}\) are ``fused''. The reader is encouraged to keep this in mind during the subsequent material, and to be convinced that this lack of choice is of no consequence to the work provided. This is in fact ideal when taking into account that we will eventually consider extending to \emph{limits} of zigzag quivers with unbounded length (Section \ref{subsub_limits}). \end{notation} \subsubsection{Posets} \begin{definition}\label{defn_zz} Let \(\mathbb{Z}\mathbb{Z}\) be the poset consisting of all points \(\{(i,i),(i,i-1):i\in\mathbb{Z}\}\subset\mathbb{Z}^2\) and having the subposet order inherited from \(\mathbb{Z}^{\mathrm{op}}\times\mathbb{Z}\). Generally, an \emph{interval} of this poset is written as \(\langle i,j\rangle\), which denotes one of \[ (i,j),[i,j),(i,j],\text{ or }[i,j]. \] An interval \(\langle i,j\rangle\) in \({\mathbb{Z}\mathbb{Z}}\) is the convex set \[ \langle i,j\rangle=\{(x,y):i\sim x,y\sim j\}, \] where the \(\sim\) represent either \(\leq\) or \(<\) depending on the respectively closed or open endpoints of \(\langle i,j\rangle\). An \emph{interval representation} of \(\mathbb{Z}\mathbb{Z}\) is written \(\langle i,j\rangle_{\mathbb{Z}\mathbb{Z}}\). For any point \((x,y)\in\mathbb{Z}\mathbb{Z}\), \[ \langle i,j\rangle_{\mathbb{Z}\mathbb{Z}}(x,y)=\left\{ \begin{array}{ll} K & \text{if }(x,y)\in\langle i,j\rangle \\ \\ 0 & \text{otherwise}.\\ \end{array}\right. \] The internal maps of \(\langle i,j\rangle_{\mathbb{Z}\mathbb{Z}}\) are \(1_K\) where possible, and \(0\) otherwise. \end{definition} \begin{definition} Let \({\mathbb{U}}\subset\mathbb{R}^{\mathrm{op}}\times\mathbb{R}\) be the subposet consisting of all points \((x,y)\in\mathbb{R}^2\) such that \(x\leq y\).
We will have it inherit the ordering of \(\mathbb{R}^{\mathrm{op}}\times\mathbb{R}\): that \((x,y)\leq(w,z)\) if and only if \(x\geq w\) and \(y\leq z\). \begin{comment} Let \({\mathbb{U}}^*\) be the poset of decorated points of \({\mathbb{U}}\), where each point is denoted \(\langle x,y\rangle\) and takes one of the following forms: \[ (x,y),[x,y),(x,y],\text{ or }[x,y]. \] In \({\mathbb{U}}^*\), \(\langle x,y\rangle\leq \langle w,z\rangle\) if and only if \((x,y)\leq (w,z)\) in \({\mathbb{U}}\) and any additional decoration restrictions are also satisfied: namely, that for every point \((x,y)\) in \({\mathbb{U}}\), \begin{center} \begin{tabular}{ccc} \([x,y)<\) & \begin{tabular}{@{}c@{}}\([x,y)\)\\[.1cm]\((x,y]\)\end{tabular} & \(<[x,y]\) \end{tabular} \end{center} in \({\mathbb{U}}^*\). \end{comment} \end{definition} The connection between \({\mathbb{Z}\mathbb{Z}}\) and \({\mathbb{U}}\) as subposets of \(\mathbb{R}^{\mathrm{op}}\times\mathbb{R}\) is shown in Figure \ref{fig_zz_and_u}. \begin{figure*}[t] \centering \includegraphics[scale=1]{tikz_zz_and_u} \caption{A visualization of the connection between the posets \(\mathbb{Z}\mathbb{Z}\) and \({\mathbb{U}}\) as subposets of \(\mathbb{R}^{\mathrm{op}}\times\mathbb{R}\). The arrows denote the diagonal increasing vector under \(\leq_{\mathbb{R}^{\mathrm{op}}\times\mathbb{R}}\).} \label{fig_zz_and_u} \end{figure*} \begin{definition}[See \cite{botnan_lesnick} sections 2.5 and 3 for original details]\label{defn_embed} For a point \(u\in{\mathbb{U}}\), define \({\mathbb{Z}\mathbb{Z}[\leq\hspace{-.1cm}u]}\) to be the subposet of \({\mathbb{Z}\mathbb{Z}}\) consisting of all the points of \({\mathbb{Z}\mathbb{Z}}\) that are \(\leq u\) when considering both \({\mathbb{U}}\) and \({\mathbb{Z}\mathbb{Z}}\) as subposets of \({\mathbb{R}^{\mathrm{op}}\times\mathbb{R}}\) (see Figure \ref{fig_zz_and_u_nice}). 
\begin{figure*}[t] \centering \includegraphics[scale=1]{tikz_zz_and_u_nice} \caption{The restriction of the poset \({\mathbb{Z}\mathbb{Z}}\) under a point \(u\in{\mathbb{U}}\).} \label{fig_zz_and_u_nice} \end{figure*} For a zigzag persistence module \(M_{\mathbb{Z}\mathbb{Z}}\), define \(M|_{{\mathbb{Z}\mathbb{Z}}[\leq u]}\) to be the restriction of \(M\) to the subposet \({\mathbb{Z}\mathbb{Z}[\leq\hspace{-.1cm}u]}\). Then define the colimit functor \(\tilde{E}:\mathrm{vect}^{{\mathbb{Z}\mathbb{Z}}}\to\mathrm{vect}^{{\mathbb{R}^{\mathrm{op}}\times\mathbb{R}}}\) by: \[\tilde{E}(M)(u)=\varinjlim M|_{{\mathbb{Z}\mathbb{Z}}[\leq u]},\] the colimit of the diagram given by the \([\leq u]\) restriction, for every \(u\in{\mathbb{U}}\subset{\mathbb{R}^{\mathrm{op}}\times\mathbb{R}}\). Under \(\tilde{E}\), interval \({\mathbb{Z}\mathbb{Z}}\) modules are sent to the following block modules. (See Figure \ref{fig_all_embed}.) % \begin{center} \begin{tabular}{ l l l } \(\tilde{E}((i,j)_{\mathbb{Z}\mathbb{Z}})\)&\(=(i,j)_{\mathrm{BL}}\)&\(=\{(x,y)\in{\mathbb{U}}:i<x,y<j\}\)\\ \(\tilde{E}([i,j)_{\mathbb{Z}\mathbb{Z}})\)&\(=[i,j)_{\mathrm{BL}}\)&\(=\{(x,y)\in{\mathbb{U}}:i\leq y<j\}\)\\ \(\tilde{E}((i,j]_{\mathbb{Z}\mathbb{Z}})\)&\(=(i,j]_{\mathrm{BL}}\)&\(=\{(x,y)\in{\mathbb{U}}:i<x\leq j\}\)\\ \(\tilde{E}([i,j]_{\mathbb{Z}\mathbb{Z}})\)&\(=[i,j]_{\mathrm{BL}}\)&\(=\{(x,y)\in{\mathbb{U}}:x\leq i, j\leq y\}\) \end{tabular} \end{center} \end{definition} \begin{figure*}[t] \centering \begin{tabular}{ c c c c } \includegraphics[scale=.5]{tikz_zz_and_u_CC}&\hspace{.5cm} \includegraphics[scale=.5]{tikz_zz_and_u_CO}&\hspace{.5cm} \includegraphics[scale=.5]{tikz_zz_and_u_OC}&\hspace{.5cm} \includegraphics[scale=.5]{tikz_zz_and_u_OO} \end{tabular} \caption{The various types of \({\mathbb{Z}\mathbb{Z}}\) interval modules and their corresponding \({\mathbb{U}}\) modules under \(\tilde{E}\).} \label{fig_all_embed} \end{figure*} \subsubsection{Block Module Distance} \label{sec_block}
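To fix intuition before defining the interleaving distance, we record the shapes of the four block types with concrete endpoints (the endpoints here are chosen purely for illustration). \begin{example} Fix \(i=1\), \(j=3\). Reading off the table of Definition \ref{defn_embed}: \(\tilde{E}([1,3)_{\mathbb{Z}\mathbb{Z}})=\{(x,y)\in{\mathbb{U}}:1\leq y<3\}\) is a horizontal band; \(\tilde{E}((1,3]_{\mathbb{Z}\mathbb{Z}})=\{(x,y)\in{\mathbb{U}}:1<x\leq 3\}\) is a vertical band; \(\tilde{E}((1,3)_{\mathbb{Z}\mathbb{Z}})=\{(x,y)\in{\mathbb{U}}:1<x,\,y<3\}\) is a triangle abutting the diagonal; and \(\tilde{E}([1,3]_{\mathbb{Z}\mathbb{Z}})=\{(x,y)\in{\mathbb{U}}:x\leq 1,\,3\leq y\}\) is a quadrant pointing away from the diagonal. Only the quadrant survives arbitrarily large diagonal shifts, which will be reflected in the \(W_{\mathrm{BL}}\) values of Proposition \ref{prop_features_bl}. \end{example}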
\begin{definition}\label{defn_u_il} Let \(D_{\mathbb{U}}\) denote the \emph{interleaving distance} on \({\mathbb{U}}\), defined as follows. Let \(\bar{\epsilon}=(-\epsilon,\epsilon)\in{\mathbb{R}^{\mathrm{op}}\times\mathbb{R}}\) be the ``increasing'' vector of length \(\epsilon\) for \({\mathbb{U}}\). For a \({\mathbb{U}}\) persistence module \(M\), define the new \({\mathbb{U}}\) persistence module \(M(\bar{\epsilon})(u)=M(u+\bar{\epsilon})\). Similarly, for a morphism of persistence modules \(\phi\), define \(\phi(\bar{\epsilon})(u)=\phi(u+\bar{\epsilon})\). For any \(M\), \(\epsilon\), let \(1_{M,M(\bar{\epsilon})}\) be the morphism that takes the value \(1_K\) on \(u\in\mathrm{supp}(M)\cap\mathrm{supp}(M(\bar{\epsilon}))\), and is zero otherwise. (It is simple to check that the \(K\)-span of this morphism gives precisely \(\mathrm{Hom}(M,M(\bar{\epsilon}))\).) Two \({\mathbb{U}}\) persistence modules \(M\) and \(N\) are said to be \(\epsilon\)-interleaved if there exist morphisms \(\phi:M\to N(\bar{\epsilon})\) and \(\psi:N\to M(\bar{\epsilon})\) such that \begin{itemize} \item \(\psi(\bar{\epsilon})\circ\phi=1_{M,M(2\bar{\epsilon})}\), and \item \(\phi(\bar{\epsilon})\circ\psi=1_{N,N(2\bar{\epsilon})}\). \end{itemize} For two \({\mathbb{U}}\) persistence modules \(M,N\), \[D_{\mathbb{U}}(M,N)=\inf\{\epsilon:M,N\text{ are }\epsilon\text{-interleaved}\}.\] (In full generality, this definition would make use of arbitrary \({\mathbb{U}}\)-translations, but implicit in this distance is the use of an \(\ell^\infty\) norm, in which case we may as well default to the \emph{diagonal} vector of length \(\epsilon\) to define the translation at all points. This aligns with the earlier notion of a \emph{full} translation of a given height.)
\end{definition} \begin{definition} From Definitions \ref{defn_embed} and \ref{defn_u_il}, define the block distance to be the composition \[ D_{\mathrm{BL}}(M_{\mathbb{Z}\mathbb{Z}},N_{\mathbb{Z}\mathbb{Z}})\vcentcolon= (D_{\mathbb{U}}\circ \tilde{E}) (M_{\mathbb{Z}\mathbb{Z}},N_{\mathbb{Z}\mathbb{Z}}) = D_{\mathbb{U}}(\tilde{E}(M_{\mathbb{Z}\mathbb{Z}}),\tilde{E}(N_{\mathbb{Z}\mathbb{Z}})). \] \end{definition} \begin{prop}[\cite{botnan_lesnick}, Lemma 3.1]\label{prop_features_bl} The bottleneck distance induced by \(D_{\mathrm{BL}}\) can be generated by the following \(W_{\mathrm{BL}}\) and \(d_{\mathrm{BL}}\). \begin{itemize} \item \(W_{\mathrm{BL}}((i,j)_{\mathbb{Z}\mathbb{Z}})=(j-i)/4. \) \item \(W_{\mathrm{BL}}([i,j]_{\mathbb{Z}\mathbb{Z}})=\infty. \) \item \(W_{\mathrm{BL}}([i,j)_{\mathbb{Z}\mathbb{Z}})=(j-i)/2. \) \item \(W_{\mathrm{BL}}((i,j]_{\mathbb{Z}\mathbb{Z}})=(j-i)/2. \) \end{itemize} If \(\langle i_1,j_1\rangle_{\mathbb{Z}\mathbb{Z}}\) and \(\langle i_2,j_2\rangle_{\mathbb{Z}\mathbb{Z}}\) are two zigzag/block modules \emph{of the same endpoint parity}, then \begin{itemize} \item \(d_{\mathrm{BL}}(\langle i_1,j_1\rangle_{\mathbb{Z}\mathbb{Z}},\langle i_2,j_2\rangle_{\mathbb{Z}\mathbb{Z}})=\max\{|i_1-i_2|,|j_1-j_2|\}. \) \end{itemize} Otherwise, define \(d_{\mathrm{BL}}=\max\{W_{\mathrm{BL}}(\langle i_1,j_1\rangle_{\mathbb{Z}\mathbb{Z}}),W_{\mathrm{BL}}(\langle i_2,j_2\rangle_{\mathbb{Z}\mathbb{Z}})\}\), the max of the \(W\)-values. \end{prop} The above result on interval modules is obtained from the more general definition, in which the projection of \({\mathbb{Z}\mathbb{Z}}\) interval modules to \({\mathrm{BL}}\) interval modules is by \emph{left Kan extension} via colimit. See the original work \cite{botnan_lesnick} for more detail. \subsubsection{Intervals of zigzag \(\mathbb{A}_n\) as intervals of \(\mathbb{Z}\mathbb{Z}\)} Finally, in order to make comparisons between \(D_{\mathrm{AR}}\) and \(D_{\mathrm{BL}}\), we need to be able to relate \(\mathbb{A}_n\) modules to \({\mathbb{Z}\mathbb{Z}}\) modules before embedding via \(\tilde{E}\).
\begin{definition}\label{defn_convert} For some \(P=\mathbb{A}_n(z)\) define the functor \(\mathcal{Z}:\mathrm{vect}^P\to\mathrm{vect}^{\mathbb{Z}\mathbb{Z}}\) by how it acts on the following indecomposables. For any \(x\in P\), there is some associated \((i,i)\in{\mathbb{Z}\mathbb{Z}}\) (the positioning in which \(P\) is ``fused'' to \({\mathbb{Z}\mathbb{Z}}\) is fixed ahead of time and is entirely arbitrary). \begin{itemize} \item \(\mathcal{Z}([x+1,x+2k-1]_{\mathbb{A}})=(i,i+k)_{\mathbb{Z}\mathbb{Z}}\). \item \(\mathcal{Z}([x,x+2k]_{\mathbb{A}})=[i,i+k]_{\mathbb{Z}\mathbb{Z}}\). \item \(\mathcal{Z}([x,x+2k-1]_{\mathbb{A}})=[i,i+k)_{\mathbb{Z}\mathbb{Z}}\). \item \(\mathcal{Z}([x+1,x+2k]_{\mathbb{A}})=(i,i+k]_{\mathbb{Z}\mathbb{Z}}\). \end{itemize} \end{definition} \begin{definition} Let \(P=\mathbb{A}_n(z)\) and let \(Z\) be the \({\mathbb{Z}\mathbb{Z}}\)-interval (not module) given by \(\mathcal{Z}([1,n]_{\mathbb{A}})\). Define \(\Sigma_{\mathbb{Z}\mathbb{Z}}(P)\) to be the subcategory of \(\mathrm{vect}^{\mathbb{Z}\mathbb{Z}}\) given by all modules with support contained in the \({\mathbb{Z}\mathbb{Z}}\)-interval \(Z=\mathcal{Z}([1,n]_{\mathbb{A}})\). \end{definition} \begin{prop} The functor \[ \mathcal{Z}:\mathrm{vect}^P\to\Sigma_{\mathbb{Z}\mathbb{Z}}(P) \] is an equivalence of categories. \end{prop} \begin{proof} The inverse of \(\mathcal{Z}\) is given by reversing the assignments of Definition \ref{defn_convert}. \end{proof} \begin{definition} For a \({\mathbb{Z}\mathbb{Z}}\) module \(I_{\mathbb{Z}\mathbb{Z}}\), define dimension \(\mathrm{dim}(I_{\mathbb{Z}\mathbb{Z}})=\sum\limits_{i\in{\mathbb{Z}\mathbb{Z}}}\mathrm{dim}_K(I_{\mathbb{Z}\mathbb{Z}}(i))\) to be the sum of the dimensions of the vector spaces of \(I_{\mathbb{Z}\mathbb{Z}}\).
\end{definition} \begin{notation} In any setting where we have fixed some \(P=\mathbb{A}_n(z)\) and some \(\mathcal{Z}:\mathrm{vect}^P\to\Sigma_{\mathbb{Z}\mathbb{Z}}(P)\) (``some'' only because \(\mathcal{Z}\) technically depends on the deliberately unspecified choice of \({\mathbb{A}}\leftrightarrow{\mathbb{Z}\mathbb{Z}}\) anchor), we will drop the equivalence \(\mathcal{Z}\) altogether and simply denote by \(\sigma_{\mathbb{A}}\) and \(\sigma_{\mathbb{Z}\mathbb{Z}}\) the same module viewed as a member of either of the two equivalent categories. Also, despite the disparity in \emph{labeling} between \({\mathbb{A}}\) and \({\mathbb{Z}\mathbb{Z}}\) modules (Definition \ref{defn_convert}), the \emph{dimension} of \(\sigma\) is the same in both contexts: \[ \mathrm{dim}_{\mathbb{A}}(\sigma_{\mathbb{A}})=\mathrm{dim}_{\mathbb{Z}\mathbb{Z}}(\sigma_{\mathbb{Z}\mathbb{Z}}). \] For this reason, we will simply write \(\mathrm{dim}\) with no need for subscripting based on the category. \end{notation} \section{Stability Between \(D_{\mathrm{BL}}\) and \(D_{\mathrm{AR}}\) over Pure Zigzag} \label{sec_stab} Algebraic stability results usually refer to obtaining bounds between some distance and its induced bottleneck distance (Remark \ref{rmk_induced}). The following are two important examples of stability that have been paraphrased into this paper's vocabulary. The first stability result is in fact an \emph{isometry}. \begin{theoremnonum} For \(\mathrm{vect}\)-valued persistence modules over \(\mathbb{R}\), the interleaving distance and its induced bottleneck distance are \emph{isometric}. That is, the interleaving distance can be taken to be \emph{diagonal} over the indecomposable summands without any loss of sharpness. \end{theoremnonum} The fact that a distance is a lower bound on its own induced bottleneck distance is trivial. The non-trivial direction for the above result is seen originally in \cite{cs17}.
It was then algebraically presented and proved in \cite{algebraic_stability} (Theorem 4.4). The categorically focused ``induced matching'' version of the result appears in \cite{induced_matchings} (Theorem 3.5), which is emphasized even further in the entirety of \cite{bauer_lesnick2} (particularly Theorems 1.4, 1.7). The following is the initial stability result for the block distance. \begin{theoremnonum}[\cite{botnan_lesnick} Proposition 2.12 and Theorem 3.3]\label{thm_stab_bl} For \(\mathrm{vect}\)-valued persistence modules over \({\mathbb{Z}\mathbb{Z}}\) embedded via \(\tilde{E}\) as block \({\mathbb{U}}\) persistence modules, \(D_{\mathrm{BL}}\) and its induced bottleneck distance \(\widehat{D}_{\mathrm{BL}}\) satisfy \[ D_{\mathrm{BL}}\leq \widehat{D}_{\mathrm{BL}}\leq \frac{5}{2}D_{\mathrm{BL}}. \] \end{theoremnonum} As the block distance separates by \(\langle\cdot,\cdot\rangle_{\mathbb{Z}\mathbb{Z}}\) type, the result above is proved independently for each of the four cases. In three of these cases the above statement is tight with the constant of \(5/2\). In \cite{bjerkevik} it is shown that for the case of \((\cdot,\cdot)_{\mathbb{Z}\mathbb{Z}}\) modules, the block distance and its induced bottleneck distance are \emph{isometric} (i.e., the \(5/2\) can be replaced with \(1\)). These theorems are immensely important results for the topic at hand, but do not reflect the sort of stability theorem that we will provide for \(D_{\mathrm{AR}}\). As it has been defined, \(D_{\mathrm{AR}}\) is foundationally a bottleneck distance in the first place, and thus \emph{is} its own induced bottleneck distance. As such, any algebraic stability result of the type discussed here would be trivial for \(D_{\mathrm{AR}}\). Instead, we examine comparative stability of the kind \(D_{\mathrm{AR}}\leq A\cdot D_{\mathrm{BL}}\) and \(D_{\mathrm{BL}}\leq B\cdot D_{\mathrm{AR}}\) over pure zigzag orientations. 
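As a warm-up for the flavor of such comparisons (the specific interval is taken from Example \ref{ex_bad}): in \(\mathbb{A}_{100}\) with (ud) zigzag orientation, the interval \(\sigma=[50,51]_{\mathbb{A}}\) has \(W_{\mathrm{AR}}(\sigma)=100\), while its corresponding half-open \({\mathbb{Z}\mathbb{Z}}\) module has \(W_{\mathrm{BL}}=1/2\) by Proposition \ref{prop_features_bl}. Any Lipschitz constant \(A\) in \(D_{\mathrm{AR}}\leq A\cdot D_{\mathrm{BL}}\) must therefore grow linearly in \(n\); this is made precise in Proposition \ref{prop_stab1}.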
The following is our final result for this section: full minimal Lipschitz constants comparing a modification of \(D_{\mathrm{AR}}\) to \(D_{\mathrm{BL}}\) over the four kinds of \({\mathbb{Z}\mathbb{Z}}\) modules (this echoes the piecewise stability results of the block distance \cite{botnan_lesnick}, as this modified \(D_{\mathrm{AR}}\) also shares the trait that it ``separates'' modules by \(\langle\cdot,\cdot\rangle_{\mathbb{Z}\mathbb{Z}}\) type). \begin{theoremnonum}[Theorem \ref{thm_blar_limit}] The following are the minimal Lipschitz constants comparing \(D_{\mathrm{BL}}\) with the modification \(D_{\mathrm{AR}}^{2,\infty}\) of \(D_{\mathrm{AR}}\) over some poset \(P=\mathbb{A}_n(z)\) of pure zigzag orientation. \begin{itemize} \item If \(\sigma_{\mathbb{Z}\mathbb{Z}},\tau_{\mathbb{Z}\mathbb{Z}}\in(\cdot,\cdot)_{\mathbb{Z}\mathbb{Z}}\), then \( \,\,2D_{\mathrm{BL}}\leq D_{\mathrm{AR}}^{2,\infty}\leq 16D_{\mathrm{BL}}. \) \item If \(\sigma_{\mathbb{Z}\mathbb{Z}},\tau_{\mathbb{Z}\mathbb{Z}}\in[\cdot,\cdot]_{\mathbb{Z}\mathbb{Z}}\), then \( \,\,2D_{\mathrm{BL}}\leq D_{\mathrm{AR}}^{2,\infty}\leq 4D_{\mathrm{BL}}\,\, \) (if \(D_{\mathrm{BL}} < \infty\)). \item If \(\sigma_{\mathbb{Z}\mathbb{Z}},\tau_{\mathbb{Z}\mathbb{Z}}\in[\cdot,\cdot)_{\mathbb{Z}\mathbb{Z}}\), then \( \,\,2D_{\mathrm{BL}}\leq D_{\mathrm{AR}}^{2,\infty}\leq 8D_{\mathrm{BL}}. \) \item If \(\sigma_{\mathbb{Z}\mathbb{Z}},\tau_{\mathbb{Z}\mathbb{Z}}\in(\cdot,\cdot]_{\mathbb{Z}\mathbb{Z}}\), then \( \,\,2D_{\mathrm{BL}}\leq D_{\mathrm{AR}}^{2,\infty}\leq 8D_{\mathrm{BL}}. \) \end{itemize} \end{theoremnonum} \subsection{Partitioning of Intervals and Modifications of \(D_{\mathrm{AR}}\)} Throughout, we compare \(D_{\mathrm{BL}}\) with the original \(D_{\mathrm{AR}}\) and then two further modifications of it. 
\(D_{\mathrm{AR}}^r\) is a modification of \(D_{\mathrm{AR}}\) that acts by projecting into a poset refinement of pure zigzag (called \(r\)-zigzag) in order to avoid a large hull, all while preserving the structure of the projected modules over sources and sinks. \(D_{\mathrm{AR}}^{r,\infty}\) is a further modification that views original zigzag modules over \(r\)-zigzag posets of unbounded length. This perspective both compares more favorably with \(D_{\mathrm{BL}}\) and may be of independent interest to anyone who does not wish to be limited to bounded zigzag posets in the first place. The remainder of this section chronicles Lipschitz stability between \(D_{\mathrm{BL}}\) and the original \(D_{\mathrm{AR}}\), and the fact that in both directions the minimal Lipschitz constants involve \(n\) itself (the length of \(P=\mathbb{A}_n\)). The first modification \(D_{\mathrm{AR}}^r\) removes one of these dependencies, while the second modification to \(D_{\mathrm{AR}}^{r,\infty}\) removes the other. The most persistent discrepancy (the one removed by the \(D_{\mathrm{AR}}^{r,\infty}\) modification) is discussed in the following remark. \begin{remark}[Partitions of \(\Sigma_P\): \({\mathbb{Z}\mathbb{Z}}\) vs\(.\) compass] \label{rmk_w_table} We require a brief discussion of the connection between the subsets \(\mathcal{E},\mathcal{W},\mathcal{S},\mathcal{N}\) of \(\Sigma_P\) and the subsets \((\cdot,\cdot)_{\mathbb{Z}\mathbb{Z}},[\cdot,\cdot]_{\mathbb{Z}\mathbb{Z}},[\cdot,\cdot)_{\mathbb{Z}\mathbb{Z}},(\cdot,\cdot]_{\mathbb{Z}\mathbb{Z}}\) of \(\Sigma_{\mathbb{Z}\mathbb{Z}}(P)\) under the functor \(\mathcal{Z}\) (Definition \ref{defn_convert}). When trying to pair the compass regions precisely to the partitions by \({\mathbb{Z}\mathbb{Z}}\)-type, the inconvenience becomes that the diagonals (of the AR quiver) belong to different members of the \({\mathbb{Z}\mathbb{Z}}\)-partition \emph{depending on the orientation of} \(\mathbb{A}_n\).
Define the sets: \begin{itemize} \item \(\mathcal{D}_{nw}= \{[1,\cdot]\in\Sigma_P:[1,\cdot]\text{ is northwest of }[1,n]\}\). \item \(\mathcal{D}_{ne}= \{[\cdot,n]\in\Sigma_P:[\cdot,n]\text{ is northeast of }[1,n]\}\). \item \(\mathcal{D}_{se}= \{[1,\cdot]\in\Sigma_P:[1,\cdot]\text{ is southeast of }[1,n]\}\). \item \(\mathcal{D}_{sw}= \{[\cdot,n]\in\Sigma_P:[\cdot,n]\text{ is southwest of }[1,n]\}\). \end{itemize} Supplement when necessary with the bar notation from Notation \ref{not_ewsn}, i.e., \[\bar{\mathcal{N}}=\mathcal{N}\cup\mathcal{D}_{nw} \cup\mathcal{D}_{ne}\cup\{[1,n]\}.\] See Table \ref{tab_partitions}. \begin{table} \centering {\tabulinesep=1.5mm \begin{tabu}{c|c|c|c|c} & \((\cdot,\cdot)_{\mathbb{Z}\mathbb{Z}}=\) & \([\cdot,\cdot]_{\mathbb{Z}\mathbb{Z}}=\) & \([\cdot,\cdot)_{\mathbb{Z}\mathbb{Z}}=\) & \((\cdot,\cdot]_{\mathbb{Z}\mathbb{Z}}=\) \\ \hline \(\mathbb{A}_n^{\mathrm{uu}}(z)\) & \(\mathcal{E}\) & \(\bar{\mathcal{W}}\) & \(\mathcal{S}\cup\mathcal{D}_{se}\) & \(\mathcal{N}\cup\mathcal{D}_{ne}\) \\ \hline \(\mathbb{A}_n^{\mathrm{ud}}(z)\) & \(\mathcal{E}\cup\mathcal{D}_{ne}\) & \(\mathcal{W}\cup\mathcal{D}_{nw}\) & \(\bar{\mathcal{S}}\) & \(\mathcal{N}\) \\ \hline \(\mathbb{A}_n^{\mathrm{du}}(z)\) & \(\mathcal{E}\cup\mathcal{D}_{se}\) & \(\mathcal{W}\cup\mathcal{D}_{sw}\) & \(\mathcal{S}\) & \(\bar{\mathcal{N}}\) \\ \hline \(\mathbb{A}_n^{\mathrm{dd}}(z)\) & \(\bar{\mathcal{E}}\) & \(\mathcal{W}\) & \(\mathcal{S}\cup\mathcal{D}_{sw}\) & \(\mathcal{N}\cup\mathcal{D}_{nw}\) \\ \end{tabu}} \caption{Equality of partitions by compass regions of the AR quiver and by endpoint type in \({\mathbb{Z}\mathbb{Z}}\), dependent on orientation of \(P\).} \label{tab_partitions} \end{table} This misalignment leads to Lemmas \ref{lemma_x} and \ref{lemma_y}, which introduce \(n\)-dependence into the Lipschitz constants for inequalities of the form \(D_{\mathrm{BL}}\leq A\cdot D_{\mathrm{AR}}\) and \(D_{\mathrm{BL}}\leq A\cdot D_{\mathrm{AR}}^r\).
This is resolved at last when comparing with the modification \(D_{\mathrm{AR}}^{r,\infty}\), as seen in Proposition \ref{prop_d_inf}. \end{remark} Finally, we introduce a notational convention for use in Tables \ref{tab_w1} and \ref{tab_d1}. \begin{notation}\label{not_comp_diff} For the remainder of the work on stability, we invoke the following notational conventions for the sake of filling out Tables \ref{tab_w1} and \ref{tab_d1} with greater readability. Let \(\sigma=[x_1,y_1]_{\mathbb{A}}\) and \(\tau=[x_2,y_2]_{\mathbb{A}}\). We denote by the following symbols various quantities originating in Proposition \ref{prop_formula}: \begin{itemize} \item \(\mathrm{LH}^{\mathrm{diff}}(\sigma,\tau)=|x_1-x_2|\), the left hand \emph{support difference} of the modules, \item \(\mathrm{RH}^{\mathrm{diff}}(\sigma,\tau)=|y_1-y_2|\), the right hand \emph{support difference} of the modules, \item \(\mathrm{LH}^{\mathrm{comp}}(\sigma,\tau)=x_1-1+x_2-1\), the left hand \emph{support complements} of the modules, also allowing for the notation \(\mathrm{LH}^{\mathrm{comp}}(\sigma)=x_1-1\), \item \(\mathrm{RH}^{\mathrm{comp}}(\sigma,\tau)=n-y_1+n-y_2\), the right hand \emph{support complements} of the modules, also allowing for the notation \(\mathrm{RH}^{\mathrm{comp}}(\sigma)=n-y_1\).
\end{itemize} \end{notation} \subsection{Unmodified Stability} \begin{table} \begin{subtable}[t]{\textwidth} \centering {\tabulinesep=1.5mm \begin{tabu}{c|c|c|c|c} & \((i,j)_{\mathbb{Z}\mathbb{Z}}\) & \([i,j]_{\mathbb{Z}\mathbb{Z}}\) & \([i,j)_{\mathbb{Z}\mathbb{Z}}\) & \((i,j]_{\mathbb{Z}\mathbb{Z}}\) \\ \hline \(W_{\mathrm{BL}}\) & \((j-i)/4\) & \(\infty\) & \((j-i)/2\) & \((j-i)/2\) \\ \hline \(W_{\mathrm{AR}}\) & \(y-x+1\) & \(y-x+1\) & \(\min\left\{\begin{array}{l}x+y-1\\2n-x-y+1\end{array}\right\}\) & \(\min\left\{\begin{array}{l}x+y-1\\2n-x-y+1\end{array}\right\}\) \\ \hline \(W_{\mathrm{AR}}^r\) & \(r(y-x)+1\) & \(r(y-x)+1\) & \(r(y-x)+1,3\) & \(r(y-x)+1,3\) \\ \hline \(W_{\mathrm{AR}}^{r,\infty}\) & \(r(y-x)+1\) & \(r(y-x)+1\) & \(r(y-x)+1,3\) & \(r(y-x)+1,3\) \\ \end{tabu}} \caption{Table of \(W\)-values over any poset of pure zigzag orientation, partitioned by \({\mathbb{Z}\mathbb{Z}}\) interval type. For sources of individual formulas see: Row 1, Prop \ref{prop_features_bl}; Row 2, Corollary \ref{cor_escape_east_west} and Corollary \ref{cor_1zz_w}; Row 3, Corollary \ref{cor_rzz_w}; Row 4, Proposition \ref{prop_d_inf}.} \label{tab_w11} \end{subtable} \begin{subtable}[t]{\textwidth} \centering {\tabulinesep=1.5mm \begin{tabu}{c|c|c|c|c} & \((i,j)_{\mathbb{Z}\mathbb{Z}}\) & \([i,j]_{\mathbb{Z}\mathbb{Z}}\) & \([i,j)_{\mathbb{Z}\mathbb{Z}}\) & \((i,j]_{\mathbb{Z}\mathbb{Z}}\) \\ \hline \(W_{\mathrm{BL}}\) & \((\mathrm{dim}+1)/8\) & \(\infty\) & \((\mathrm{dim}+1)/4\) & \((\mathrm{dim}+1)/4\) \\ \hline \(W_{\mathrm{AR}}\) & \(\mathrm{dim}\) & \(\mathrm{dim}\) & \(\mathrm{dim}+\min\left\{\begin{array}{l}2\cdot\mathrm{LH}^{\mathrm{comp}}\\ 2\cdot\mathrm{RH}^{\mathrm{comp}}\end{array}\right\}\) & \(\begin{array}{c}\text{same as}\\\text{previous}\\\text{column}\end{array}\) \\ \hline \(W_{\mathrm{AR}}^r\) & \(r\cdot\mathrm{dim}\) & \(r\cdot\mathrm{dim}\) & \(r\cdot\mathrm{dim}\) & \(r\cdot\mathrm{dim}\) \\ \hline \(W_{\mathrm{AR}}^{r,\infty}\) & \(r\cdot\mathrm{dim}\) &
\(r\cdot\mathrm{dim}\) & \(r\cdot\mathrm{dim}\) & \(r\cdot\mathrm{dim}\) \\ \end{tabu}} \caption{Simplified table of approximate \(W\)-values that emphasize major scaling features.} \label{tab_w12} \end{subtable} \caption{In both tables, recall that the conversion between the \({\mathbb{Z}\mathbb{Z}}\) endpoint width \(j-i\) and the \({\mathbb{A}}\) endpoint width \(y-x\) is given by Definition \ref{defn_convert}, and in all cases \(y-x\) is essentially twice \(j-i\).} \label{tab_w1} \end{table} \begin{prop}[Unmodified Right-Hand Stability]\label{prop_stab1} Over pure zigzag orientation, \[ D_{\mathrm{AR}}\leq 2n\cdot D_{\mathrm{BL}}, \] and \(2n\) is the minimal Lipschitz constant satisfying this inequality. \end{prop} \begin{proof} Necessity is obtained from Example \ref{ex_worse}. A module of the form \([x,x+1]_{\mathbb{A}}\) can have \(W_{\mathrm{AR}}=n\) or \(n-1\), and this corresponds to some module of the form \([i,i+1)_{\mathbb{Z}\mathbb{Z}}\) or \((i,i+1]_{\mathbb{Z}\mathbb{Z}}\), both of which have \(W_{\mathrm{BL}}=1/2\). Sufficiency follows from Corollary \ref{cor_dar_n}. \end{proof} In the other direction, we must address the misalignment issues brought to attention in Remark \ref{rmk_w_table}. \begin{lemma}[Partitioning Non-alignment (see Remark \ref{rmk_w_table})]\label{lemma_x} Let \(P=\mathbb{A}_n(z)\) be a poset of pure zigzag orientation. Then if \(D_{\mathrm{BL}}<\infty\), \[D_{\mathrm{BL}}\leq n/4\cdot D_{\mathrm{AR}},\] where \(n/4\) is a \emph{lower bound} for any Lipschitz constant in the inequality above. \end{lemma} \begin{proof} No matter the orientation of \(P\), one of \(\sigma=[1,n]_{\mathbb{A}},\tau_1=[2,n]_{\mathbb{A}}\) is in some \((\cdot,\cdot\}_{\mathbb{Z}\mathbb{Z}}\) and the other is in the associated \([\cdot,\cdot\}_{\mathbb{Z}\mathbb{Z}}\).
Similarly, one of \(\sigma=[1,n]_{\mathbb{A}},\tau_2=[1,n-1]_{\mathbb{A}}\) is in some \(\{\cdot,\cdot)_{\mathbb{Z}\mathbb{Z}}\) and the other is in the associated \(\{\cdot,\cdot]_{\mathbb{Z}\mathbb{Z}}\). That is to say, \(D_{\mathrm{BL}}(\sigma,\tau_i)=\max\{W_{\mathrm{BL}}(\sigma),W_{\mathrm{BL}}(\tau_i)\}\approx n/4\) or \(\infty\) (for \(i=1,2\); see Table \ref{tab_w12}). However, both pairs have a \(D_{\mathrm{AR}}\) distance of \(1\) (recall that all \(D_{\mathrm{AR}}\) distances from a diagonal to an adjacent region are of the form \(\mathrm{LH}^{\mathrm{diff}}+\mathrm{RH}^{\mathrm{diff}}\)). \end{proof} We are now prepared to state the stability result. \begin{prop}[Unmodified Left-Hand Stability]\label{prop_stab2} Over pure zigzag orientation, so long as \(D_{\mathrm{BL}}<\infty\), \[ D_{\mathrm{BL}}\leq n/4\cdot D_{\mathrm{AR}} \] where \(n/4\) is the minimal Lipschitz constant satisfying the above inequality \emph{across all pairs of indecomposables}. \end{prop} \begin{proof} Necessity is given by Lemma \ref{lemma_x}. Sufficiency follows from Tables \ref{tab_w1} and \ref{tab_d1}, with special concern being given to the final column of Table \ref{tab_d1}. The most extreme comparison from this column (suppose \(\mathrm{uu}\) orientation for ease of notation) is the pair of modules \(\sigma_{\mathbb{A}}=[1,n-1]_{\mathbb{A}}\) and \(\tau_{\mathbb{A}}=[2,n]_{\mathbb{A}}\), which correspond to \(\sigma_{\mathbb{Z}\mathbb{Z}}=[i,i+(n-1)/2)_{\mathbb{Z}\mathbb{Z}}\) and \(\tau_{\mathbb{Z}\mathbb{Z}}=(i,i+(n-1)/2]_{\mathbb{Z}\mathbb{Z}}\) for some \(i\in\mathbb{Z}\). But though \(n\)-dependent, this pair only requires a Lipschitz constant of \(n/8\), and thus \(n/4\) remains permissible. 
\end{proof} \begin{table} \centering {\tabulinesep=1.5mm \begin{tabu}{c|c|c|c|c} & \(\begin{array}{l}\sigma,\tau\text{ are of}\\ \text{same }{\mathbb{Z}\mathbb{Z}}\text{ type}\\\end{array}\) & \(\begin{array}{c} \sigma\in\langle\cdot,\cdot\}\\ \tau\in\,\,\rangle\cdot,\cdot\} \end{array}\) & \(\begin{array}{c} \sigma\in\{\cdot,\cdot\rangle\\ \tau\in\{\cdot,\cdot\langle \end{array}\) & \(\begin{array}{c} \sigma\in\{\cdot,\cdot\rangle\\ \tau\in\,\,\}\cdot,\cdot\langle \end{array}\) \\ \hline \(d_{\mathrm{BL}}\) & \(\max\left\{\begin{array}{l} \mathrm{LH}^{\mathrm{diff}},\\ \mathrm{RH}^{\mathrm{diff}} \end{array}\right\}\) & \(\max\left\{\begin{array}{l} W_{\mathrm{BL}}(\sigma),\\ W_{\mathrm{BL}}(\tau) \end{array}\right\}\) & \(\max\left\{\begin{array}{l} W_{\mathrm{BL}}(\sigma),\\ W_{\mathrm{BL}}(\tau) \end{array}\right\}\) & \(\max\left\{\begin{array}{l} W_{\mathrm{BL}}(\sigma),\\ W_{\mathrm{BL}}(\tau) \end{array}\right\}\)\\ \hline \multicolumn{2}{c}{} \\[.5em] & \(\begin{array}{l} \sigma,\tau\in\bar{\mathcal{C}}\text{ where}\\ \mathcal{C}\in\{\mathcal{E},\mathcal{W},\mathcal{S},\mathcal{N}\} \end{array}\) & \(\begin{array}{c}(\sigma,\tau)\in\text{ one of}\\ \mathcal{N}\cup\mathcal{D}_{ne}\times\mathcal{W}\cup\mathcal{D}_{sw},\\ \mathcal{E}\cup\mathcal{D}_{ne}\times\mathcal{S}\cup\mathcal{D}_{sw}\phantom{,}\end{array}\) & \(\begin{array}{c}(\sigma,\tau)\in\text{ one of}\\ \mathcal{N}\cup\mathcal{D}_{nw}\times\mathcal{E}\cup\mathcal{D}_{se},\\ \mathcal{W}\cup\mathcal{D}_{nw}\times\mathcal{S}\cup\mathcal{D}_{se}\phantom{,}\end{array}\) & \(\begin{array}{c}(\sigma,\tau)\in\text{ one of}\\ \mathcal{N}\times\mathcal{S},\\ \mathcal{W}\times\mathcal{E}\phantom{,}\end{array}\) \\ \hline \(d_{\mathrm{AR}}\) & \(\mathrm{LH}^{\mathrm{diff}} +\mathrm{RH}^{\mathrm{diff}}\) & \(\mathrm{LH}^{\mathrm{comp}}+\mathrm{RH}^{\mathrm{diff}}\) & \(\mathrm{LH}^{\mathrm{diff}}+\mathrm{RH}^{\mathrm{comp}}\) & \(\mathrm{LH}^{\mathrm{comp}}+\mathrm{RH}^{\mathrm{comp}}\) \\ \hline 
\(d_{\mathrm{AR}}^r\) & \(r\cdot(\mathrm{LH}^{\mathrm{diff}}+\mathrm{RH}^{\mathrm{diff}})\) & \(r\cdot(\mathrm{LH}^{\mathrm{comp}}+\mathrm{RH}^{\mathrm{diff}})\) & \(r\cdot(\mathrm{LH}^{\mathrm{diff}}+\mathrm{RH}^{\mathrm{comp}})\) & \(r\cdot(\mathrm{LH}^{\mathrm{comp}}+\mathrm{RH}^{\mathrm{comp}})\) \\ \hline \(d_{\mathrm{AR}}^{r,\infty}\) & \(r\cdot(\mathrm{LH}^{\mathrm{diff}}+\mathrm{RH}^{\mathrm{diff}})\) & \(\max\left\{\begin{array}{l} W_{\mathrm{AR}}^r(\sigma)\\ W_{\mathrm{AR}}^r(\tau) \end{array}\right\}\) & \(\max\left\{\begin{array}{l} W_{\mathrm{AR}}^r(\sigma)\\ W_{\mathrm{AR}}^r(\tau) \end{array}\right\}\) & \(\max\left\{\begin{array}{l} W_{\mathrm{AR}}^r(\sigma)\\ W_{\mathrm{AR}}^r(\tau) \end{array}\right\}\) \\ \end{tabu}} \caption{Table of \(d\)-values over any poset of pure zigzag orientation, partitioned by \({\mathbb{Z}\mathbb{Z}}\) interval type. For sources of individual formulas see: row 1, Prop \ref{prop_features_bl}; row 2, Prop \ref{prop_formula} and Notation \ref{not_comp_diff}; row 3, Remark \ref{rmk_convert_2} and Example \ref{ex_refine}; row 4, Proposition \ref{prop_d_inf}.} \label{tab_d1} \end{table} \subsection{Stability with \(r\)-zigzag}\label{sec_stab_r} It seems to the authors that the AR distance's tendency, over pure zigzag orientations, to have \emph{hulls} in which intervals with small supports have \(W\)-values at or near the entire diameter of \(D_{\mathrm{AR}}\) is undesirable from quite a few perspectives (namely, for finding Lipschitz bounds with other more ``well-behaved'' distances). See Example \ref{ex_worse} and its subsequent discussion in Remark \ref{rmk_alt_zigzag} for motivation; we have already seen in Proposition \ref{prop_stab1} that any relationship \(D_{\mathrm{AR}}\leq A\cdot D_{\mathrm{BL}}\) requires a constant that scales with \(n\). \begin{definition}\label{defn_r_zz} Let \(P=\mathbb{A}_n(z)\) be some pure zigzag orientation and \(r\in\mathbb{Z}_{\geq 2}\). 
Define \(P^r=\mathbb{A}_n(z,r)\) to be the following poset. Let \(P^r\) have sources and sinks collectively labeled \(1_r,2_r,\ldots, (n-1)_r,n_r\), alternating from source to sink in the same sequence as the vertices \(1,2,\ldots,n-1,n\) of \(P\). For each \(1\leq i\leq n-1\), add \(r-1\) vertices between \(i_r\) and \((i+1)_r\) such that the segment \([i_r,(i+1)_r]\) is totally ordered. \begin{center} \includegraphics[scale=.5]{tikz_rzz_demo_full} \end{center} Let \(R\) be the embedding from \(\Sigma_P\to\Sigma_{P^r}\) (the collections of isomorphism classes of indecomposable representations over each poset) given by \(R([x,y])=[x_r,y_r]\). (We note that \(R\) clearly depends on the original \(P\) and the choice of \(r\), but we will simply write \(R\) in all cases and leave the dependence on \(P,r\) clear by context.) Finally, define \(D_{\mathrm{AR}}^r\) on the set of indecomposable representations of \(P\) by \[ D_{\mathrm{AR}}^r(\sigma,\tau)=D_{\mathrm{AR}}(R(\sigma),R(\tau)), \] where the right-hand \(D_{\mathrm{AR}}\) is the AR distance over \(P^r\). \end{definition} The endpoint conversion from \(\mathbb{A}_n(z,r)\) intervals to \({\mathbb{Z}\mathbb{Z}}\) intervals is similar to that of Definition \ref{defn_convert}, but has the labeling disparities increased by a factor of \(r\). \begin{remark}\label{rmk_convert_2} For some module \([x,y]\) over a pure zigzag orientation \(\mathbb{A}_n(z)\) and some \(r\in\mathbb{Z}_{>0}\), \[ \mathrm{dim}([x_r,y_r])=r\cdot[\mathrm{dim}([x,y])-1]+1=r\cdot(y-x)+1. \] \end{remark} The following result is immediate from Definition \ref{defn_hull}. \begin{remark}\label{rmk_rz_hull} Let \(P=\mathbb{A}_n(z)\) have pure zigzag orientation and \(P^r\) be its \(r\)-zigzag refinement. As \(\mathrm{hull}(\mathbb{A}_n^r(z))=\{[x_r,(x+1)_r]|1\leq x< n\}\), it follows that \(\{R([x,x+1])\}_{1\leq x< n}=\mathrm{hull}(P^r)\). 
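As a concrete illustration (all values below follow directly from the formulas already stated): with \(n=4\) and \(r=3\), \[ \mathrm{hull}(P^3)=\{[1_3,2_3],\,[2_3,3_3],\,[3_3,4_3]\}, \] with each of these refined intervals having dimension \(r\cdot 1+1=4\) by Remark \ref{rmk_convert_2}. 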
\end{remark} \begin{corollary}[to Lemma \ref{lemma_hull_w}]\label{cor_rzz_w} For any indecomposable \(\sigma\) over pure zigzag orientation, if \(\sigma=[x,x+1]\) then \[ W_{\mathrm{AR}}^r(\sigma)=\mathrm{dim}(R(\sigma))+2=r+3, \] and otherwise \[ W_{\mathrm{AR}}^r(\sigma)=\mathrm{dim}(R(\sigma)). \] \end{corollary} \begin{example}\label{ex_refine} The following is a visualization of the module embedding \(R\) from \(P=\mathbb{A}_6^{\mathrm{du}}\) to its \(3\)-zigzag refinement \(P^3\). \begin{center} \includegraphics[scale=.75]{tikz_dots_refine} \end{center} Though unlabeled for clarity, interval modules maintain the same relative position across the two AR quivers under \(R\). Shape, location, and relative distance between indecomposables are essentially unchanged. However, along the north and south boundaries, it is immediate that the gray dots in this area are simples, removing the presence of pure zigzag's large hulls. \end{example} \begin{prop}[\(r\)-zigzag Right-Hand Stability]\label{prop_stabr1} Over an \(r\)-zigzag orientation \(P=\mathbb{A}_n(z,r)\) with \(r\geq 2\), \[ D_{\mathrm{AR}}\leq 8r\cdot D_{\mathrm{BL}}. \] \end{prop} Compare with Proposition \ref{prop_stab1}, in which the large hull of unmodified \(W_{\mathrm{AR}}\) caused \(n\)-dependence in the inequality. \begin{proof} Necessity comes from the first column of Table \ref{tab_w12}. Sufficiency for the remaining columns of \(W\)-values is easy to check. First column \(d\)-values in Table \ref{tab_d1} require only a constant of \(2r\). We only show the sufficiency of \(8r\) when comparing fourth column intervals from Table \ref{tab_d1}. Suppose then that \(\sigma,\tau\) are two indecomposables with opposite parity of both left and right endpoints. \(d_{\mathrm{AR}}^r(\sigma,\tau)\) becomes large (and \(d_{\mathrm{BL}}\) becomes small) when \(\sigma,\tau \) have small supports and are positioned centrally within the poset. 
However, if the supports are too small, \(d_{\mathrm{AR}}^r\) will revert to the maximum of the two \(W_{\mathrm{AR}}^r\)-values, which we already know are stable. The largest value of \(d_{\mathrm{AR}}^r(\sigma,\tau)\) such that \(d_{\mathrm{AR}}^r<\max\{W_{\mathrm{AR}}^r(\sigma),W_{\mathrm{AR}}^r(\tau)\}\) occurs with \(\sigma\) and \(\tau\) both having supports as close as possible to \(\mathcal{Z}([n/6,5n/6])=[n_r/6,5n_r/6]\), while still possessing opposite parity on left and right endpoints. In such a situation, \(d_{\mathrm{AR}}^r(\sigma_{\mathbb{A}},\tau_{\mathbb{A}})\approx W_{\mathrm{AR}}^r(\sigma_{\mathbb{A}})\approx W_{\mathrm{AR}}^r(\tau_{\mathbb{A}})\approx r\cdot(2n/3)\). But then, \(d_{\mathrm{AR}}^r\approx 2r\cdot d_{\mathrm{BL}}\), and so \(8r\) remains permissible. \end{proof} Considering the opposite inequality, we encounter a repeat of the partition misalignments. \begin{lemma}[Partitioning Non-alignment for \(r\)-zigzag]\label{lemma_y} Let \(P=\mathbb{A}_n(z)\) be a poset of pure zigzag orientation and \(P^r\) be its \(r\)-zigzag extension. Then if \(D_{\mathrm{BL}}<\infty\), \[D_{\mathrm{BL}}\leq \dfrac{n}{4r}\cdot D_{\mathrm{AR}}^r,\] where \(n/4r\) is a \emph{lower bound} for any Lipschitz constant satisfying the inequality above. \end{lemma} \begin{proof} The proof follows identically to that of Lemma \ref{lemma_x}, where the example modules \(\sigma,\tau_1,\tau_2\) are all viewed through the embedding \(R\). \end{proof} \begin{prop}[\(r\)-zigzag Left-Hand Stability]\label{prop_stabr2} Over pure zigzag orientation, so long as \(D_{\mathrm{BL}}<\infty\), \[ D_{\mathrm{BL}}\leq \dfrac{n}{4r}\cdot D_{\mathrm{AR}}^r \] where \(n/4r\) is the minimal Lipschitz constant satisfying the above inequality. \end{prop} \begin{proof} Necessity follows from Lemma \ref{lemma_y}. Sufficiency parallels the proof of Proposition \ref{prop_stab2} using Remark \ref{rmk_convert_2}. 
(Should it be of interest to the reader: outside of the misalignment cases handled by Lemma \ref{lemma_y}, the smaller constant \(n/8r\) suffices for all remaining cases, further mirroring the proof of Proposition \ref{prop_stab2}.) \end{proof} By projecting from pure zigzag into an \(r\)-zigzag poset and removing the hull, we have successfully eliminated the \(n\)-dependence of one side of our inequalities. The final modification at last removes the other. \subsection{Stability with Poset Limits}\label{subsub_limits} The following is a further modification of \(D_{\mathrm{AR}}^r\) that assumes the representation category of some original \(P=\mathbb{A}_n(z)\) or \(P^r=\mathbb{A}_n(z,r)\) is embedded into a poset of similar structure that is lengthened on either end. There are two advantages to this modification. First, it obtains stability with \(D_{\mathrm{BL}}\) in a way that does not depend on the original length \(n\) of the poset. Second, it modifies \(D_{\mathrm{AR}}\) over pure zigzag orientations (via first modifying to \(D_{\mathrm{AR}}^r\)) in such a way that one may consider the modules over a zigzag poset of \emph{unbounded length}, which may be of independent interest. \begin{definition}\label{defn_wedge} Let \(P=\mathbb{A}_n\) and \(P'=\mathbb{A}_m\) be two orientations of \(\mathbb{A}\)-type quivers of any lengths. Assign the labeling \(P=\{1\sim 2\sim\ldots\sim n\}\) and \(P'=\{1'\sim2'\sim\ldots\sim m'\}\). Then define \[ P\wedge P' \] to be the poset obtained from joining the \(P\)-vertex \(n\) with the \(P'\)-vertex \(1'\), along with the original \(\leq,\leq'\) relationships and any added inequalities induced by the association of \(n\) with \(1'\). \end{definition} \begin{definition} Let \(P=\mathbb{A}_n(z)\) be some pure zigzag orientation. Let \(P^r=\mathbb{A}_n(z,r)\) be its \(r\)-zigzag refinement (Definition \ref{defn_r_zz}). 
For \(f\in\mathbb{Z}_{\geq 1}\), define the poset \(P^{r,f}=\mathbb{A}_n(z,r\pm f)\) as follows. First define the poset \(U_r=\{1_u\geq \ldots \geq (1+r)_u\leq \ldots \leq (1+2r)_u\}\) and \(D_r=\{1_d\leq \ldots \leq (1+r)_d\geq \ldots \geq (1+2r)_d\}\) (\(=\) the opposite poset of \(U_r\)). Define \(P^{r,1}\) to be \begin{itemize} \item \(U_r\wedge P^r\wedge U_r\) if \(P=\mathbb{A}_n^{\mathrm{uu}}\), \item \(U_r\wedge P^r\wedge D_r\) if \(P=\mathbb{A}_n^{\mathrm{ud}}\), \item \(D_r\wedge P^r\wedge U_r\) if \(P=\mathbb{A}_n^{\mathrm{du}}\), \item \(D_r\wedge P^r\wedge D_r\) if \(P=\mathbb{A}_n^{\mathrm{dd}}\). \end{itemize} Below is an example of \(P^{3,1}\) for \(P=\mathbb{A}_6^{\mathrm{du}}\). \begin{center} \includegraphics[scale=.5]{tikz_rf_demo} \end{center} Define \(P^{r,f}\) inductively, so that the number of copies of the appropriately chosen \(U_r\) or \(D_r\) wedged onto each side is equal to \(f\). In this way, the \(r\)-zigzag structure and sink/source orientation of the left and right endpoints remain unchanged from \(P^r\) to \(P^{r,f}\). Let \(F:\Sigma_{P^r}\to\Sigma_{P^{r,f}}\) be the functor \(F([x,y]_{P^r})=[x,y]_{P^{r,f}}\). That is, the supports of interval modules remain \emph{fixed} within \(P^r\) considered as a subposet of \(P^{r,f}\). \end{definition} \begin{definition} For \(\sigma,\tau\) over some pure-zigzag orientation \(P=\mathbb{A}_n(z)\), define \[ D_{\mathrm{AR}}^{r,f}(\sigma,\tau)=D_{\mathrm{AR}}(F\circ R(\sigma),F\circ R(\tau)). \] Define \[ D_{\mathrm{AR}}^{r,\infty}(\sigma,\tau)=\lim_{f\to\infty}D_{\mathrm{AR}}^{r,f}(\sigma,\tau). \] \end{definition} Again, take note that in the following proposition the separation into pieces of the AR quiver of \(P^r\) when embedded by \(F\) aligns \emph{precisely} with the \(\langle\cdot,\cdot\rangle\) partitioning of the AR quiver. 
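One immediate observation first: since \(F\) fixes supports, the dimension formula of Remark \ref{rmk_convert_2} is unaffected by the extension. That is, for any \(\sigma=[x,y]\) over \(P\) and any \(f\geq 1\), \[ \mathrm{dim}(F\circ R(\sigma))=\mathrm{dim}(R(\sigma))=r\cdot(y-x)+1. \] 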
\begin{prop}[\(D_{\mathrm{AR}}^{r,\infty}\) Separates by \({\mathbb{Z}\mathbb{Z}}\)-type]\label{prop_d_inf} For \(P=\mathbb{A}_n(z)\), \(D_{\mathrm{AR}}^{r,\infty}\) separates modules by \({\mathbb{Z}\mathbb{Z}}\) region. That is, the image of the functor \(F:\Sigma_{P^r}\to\Sigma_{P^{r,f}}\) consists of the four connected components \begin{align*} &F(\{(\cdot,\cdot)_{{\mathbb{Z}\mathbb{Z}},P^r}\})\subset\{(\cdot,\cdot)_{{\mathbb{Z}\mathbb{Z}},P^{r,f}}\},\\ &F(\{[\cdot,\cdot]_{{\mathbb{Z}\mathbb{Z}},P^r}\})\subset\{[\cdot,\cdot]_{{\mathbb{Z}\mathbb{Z}},P^{r,f}}\},\\ &F(\{[\cdot,\cdot)_{{\mathbb{Z}\mathbb{Z}},P^r}\})\subset\{[\cdot,\cdot)_{{\mathbb{Z}\mathbb{Z}},P^{r,f}}\},\\ &F(\{(\cdot,\cdot]_{{\mathbb{Z}\mathbb{Z}},P^r}\})\subset\{(\cdot,\cdot]_{{\mathbb{Z}\mathbb{Z}},P^{r,f}}\}. \end{align*} Moreover, \(D_{\mathrm{AR}}^{r,\infty}\) is the bottleneck distance given by: \begin{itemize} \item Writing \(\sigma_{\mathbb{A}}=[x_1,y_1]\) and \(\tau_{\mathbb{A}}=[x_2,y_2]\), \(d_{\mathrm{AR}}^{r,\infty}(\sigma_{\mathbb{A}},\tau_{\mathbb{A}})=|x_1-x_2|+|y_1-y_2|\) if \(\sigma,\tau\) are in the same \(\langle\cdot,\cdot\rangle_{\mathbb{Z}\mathbb{Z}}\) region, and \(d_{\mathrm{AR}}^{r,\infty}(\sigma_{\mathbb{A}},\tau_{\mathbb{A}})=\infty\) otherwise. \item \(W_{\mathrm{AR}}^{r,\infty}(\sigma)=y_1-x_1+3\) if \(\sigma=[x_1,x_1+r]\) where \(x_1\) is a sink or source vertex, and \(W_{\mathrm{AR}}^{r,\infty}(\sigma)=y_1-x_1+1\) otherwise. That is, \(W_{\mathrm{AR}}^{r,\infty}(\sigma)=W_{\mathrm{AR}}^r(\sigma)\). \end{itemize} \end{prop} \begin{proof} \begin{figure*}[h!] \centering \includegraphics[scale=.5]{tikz_dots_refine_2} \caption{Again, the thicker dots represent indecomposables from the AR quiver of \(P=\mathbb{A}_6^{\mathrm{du}}\) under the 3-zigzag embedding functor \(R\). 
Depicted here is the embedding \(F\) of modules of the AR quiver of \(P^{3}\) into that of the extension by \(D_3\) on the left and \(U_3\) on the right.} \label{fig_emb_2} \end{figure*} As we have seen, from \(P=\mathbb{A}_n(z)\) to \(P^r=\mathbb{A}_n(z,r)\), the AR quiver becomes refined by a factor of \(r\) along both axes while the relative positions of the embedded modules from \(P\) remain the same (Example \ref{ex_refine}). This separation and the fact that \(W_{\mathrm{AR}}^{r,\infty}\) remains completely unchanged from \(W_{\mathrm{AR}}^r\) can be checked individually from the four possible orientations of \(P^r\) in Figures \ref{fig_epx} and \ref{fig_epy}. In all four images, when wedging with \(U_r\) or \(D_r\), the new axis contains the \(A\)'s in sequence, the \(B\)'s in sequence, but separates the two sub-axes by the \(C\)'s. Wedges on the \emph{left} side of the poset are added to the \emph{middle of the }\(x\)\emph{-axis} and to the \emph{ends of the }\(y\)\emph{-axis}. Wedges on the \emph{right} side of the poset are added to the \emph{ends of the }\(x\)\emph{-axis} and to the \emph{middle of the }\(y\)\emph{-axis}. Compare these case by case with the partitions in Table \ref{tab_partitions} in Remark \ref{rmk_w_table}. \begin{figure*}[h!] \centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[scale=1]{tikz_r_inf_extension_1_cut} \caption{When \(P\) is of \(d*\) orientation, the original \(x=1\) (contained in \(B_1\) in the image) is grouped with the other \(B_i\)'s, which are all \emph{open} left endpoints.} \end{subfigure}% \hspace{.5cm} \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[scale=1]{tikz_r_inf_extension_2} \caption{For \(u*\) orientations, the original \(x=1\) is grouped with the \emph{closed} endpoints when the original axis becomes split by the wedges.} \end{subfigure} \caption{} \label{fig_epx} \end{figure*} \begin{figure*}[h!] 
\centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[scale=1]{tikz_r_inf_extension_3} \caption{For \(*u\) orientations, the original axis value \(y=n\) is a closed right endpoint, and is grouped with the other closed endpoints.} \end{subfigure}% \hspace{.5cm} \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[scale=1]{tikz_r_inf_extension_4} \caption{Finally, for \(*d\) orientations, the original axis value \(y=n\) is open, and is grouped in the new axis with the other open endpoints.} \end{subfigure} \caption{} \label{fig_epy} \end{figure*} \end{proof} \begin{remark}\label{rmk_dar_finite} While \(d_{\mathrm{AR}}^{r,\infty}\) may attain infinite values, the final bottleneck distance \(D_{\mathrm{AR}}^{r,\infty}\) does not, by virtue of the fact that \(W_{\mathrm{AR}}^{r,\infty}=W_{\mathrm{AR}}^r\) is always bounded by the length of the original \(r\)-zigzag orientation (Corollary \ref{cor_dar_n}). \end{remark} The following theorem is our concluding result on comparisons of \(D_{\mathrm{AR}}\) with \(D_{\mathrm{BL}}\). \begin{theorem}[Sharp \(D_{\mathrm{AR}}^{r,\infty}\) vs\(.\) \(D_{\mathrm{BL}}\) Lipschitz Constants]\label{thm_blar_limit} Let \(P=\mathbb{A}_n(z)\) be of pure zigzag orientation. The following are the four stability results between \(D_{\mathrm{BL}}\) and \(D_{\mathrm{AR}}^{r,\infty}\) partitioned by \({\mathbb{Z}\mathbb{Z}}\)-type (as neither distance directly compares modules from different regions of the partition). \begin{itemize} \item If \(\sigma_{\mathbb{Z}\mathbb{Z}},\tau_{\mathbb{Z}\mathbb{Z}}\in(\cdot,\cdot)_{\mathbb{Z}\mathbb{Z}}\), then \( \,\,r\cdot D_{\mathrm{BL}}\leq D_{\mathrm{AR}}^{r,\infty}\leq 8r\cdot D_{\mathrm{BL}}. \) \item If \(\sigma_{\mathbb{Z}\mathbb{Z}},\tau_{\mathbb{Z}\mathbb{Z}}\in[\cdot,\cdot]_{\mathbb{Z}\mathbb{Z}}\), then \( \,\,r\cdot D_{\mathrm{BL}}\leq D_{\mathrm{AR}}^{r,\infty}\leq 2r\cdot D_{\mathrm{BL}}\,\, \) (if \(D_{\mathrm{BL}} < \infty\)). 
\item If \(\sigma_{\mathbb{Z}\mathbb{Z}},\tau_{\mathbb{Z}\mathbb{Z}}\in[\cdot,\cdot)_{\mathbb{Z}\mathbb{Z}}\), then \( \,\,r\cdot D_{\mathrm{BL}}\leq D_{\mathrm{AR}}^{r,\infty}\leq 4r\cdot D_{\mathrm{BL}}. \) \item If \(\sigma_{\mathbb{Z}\mathbb{Z}},\tau_{\mathbb{Z}\mathbb{Z}}\in(\cdot,\cdot]_{\mathbb{Z}\mathbb{Z}}\), then \( \,\,r\cdot D_{\mathrm{BL}}\leq D_{\mathrm{AR}}^{r,\infty}\leq 4r\cdot D_{\mathrm{BL}}. \) \end{itemize} \end{theorem} \begin{proof} All left-hand inequalities \(r\cdot D_{\mathrm{BL}}\leq D_{\mathrm{AR}}^{r,\infty}\) are necessary by the first column of Table \ref{tab_d1}, and sufficiency is easy to see by examination of Table \ref{tab_w1} (columns two, three, and four of Table \ref{tab_d1} simply revert to problems of comparing values in Table \ref{tab_w1}). For the right-hand inequalities, a Lipschitz constant of \(D_{\mathrm{AR}}^{r,\infty}\leq 2r\cdot D_{\mathrm{BL}}\) is permissible when considering only Table \ref{tab_d1}. However, the different \(W_{\mathrm{BL}}\) behaviors in Table \ref{tab_w1} force some of the values to be larger. \end{proof} Since all of the Lipschitz constants above scale with \(r\), one may as well choose the minimal zigzag refinement \(r=2\) if there is no contextual motivation for selecting a larger value. \section{Weighted Interleaving Distance}\label{sec_dwil} As briefly discussed in the introduction, the weighted interleaving distance on some orientation of \(\mathbb{A}_n\) measures similarity between two interval modules by the depth or shallowness of the `wells' over which their supports differ (Figure \ref{fig_wil_ex1}). \begin{definition} For a general orientation \(P=\mathbb{A}_n\), enumerate the poset's source vertices from left to right as \(m_1,\ldots,m_p\). Define \(V_i\) to be the maximal sub-poset given by all elements comparable to \(m_i\). \[ V_i=\{x\in P: x\geq m_i\}. 
\] Label the left and right sinks of \(V_i\) (if they exist) as \(1_i\) and \(n_i\) respectively: \[ \{1_i\gets\ldots\gets m_i\to\ldots\to n_i\}. \] Let \([V_i]\) denote the interval representation \([1_i,n_i]\). Lastly, as independent posets, the \emph{wedge} of \(V_i\) and \(V_{i+1}\) is the poset in which \(n_i\) is identified with \(1_{i+1}\): \[ V_i\wedge V_{i+1}=\{1_i\gets\ldots\gets m_i\to\ldots\to n_i=1_{i+1}\gets\ldots\gets m_{i+1}\to\ldots\to n_{i+1}\}, \] as in Definition \ref{defn_wedge}. \end{definition} \begin{remark} Any orientation \(P=\mathbb{A}_n\) can be uniquely expressed as a wedge of \(V_i\)'s: \[ P=V_1\wedge V_2\wedge\ldots\wedge V_l, \] where \(V_1\) and \(V_l\) may be equioriented segments. \end{remark} Moving forward, we will view representations of an orientation of \(\mathbb{A}_n\) as persistence modules over a one-vertex refinement of the original poset. \begin{definition} For a poset \(P\), let \(\tilde{P}\) be the poset \(P\cup\{\infty\}\) with the relation \(x\leq y\) if and only if either \(x,y\in P\) with \(x\leq_P y\), or \(y=\infty\). We call \(\tilde{P}\) \emph{the poset} \(P\) \emph{suspended at infinity}. \end{definition} Translations on \(P=\mathbb{A}_n\) can be viewed as a wedge of translations on each individual \(V_i\). \begin{prop}\label{prop_split} Any translation \(\Lambda\) on \(P=V_1\wedge\ldots\wedge V_p\) can be fully described by how it acts on the individual \(V_i\). Similarly, any collection \(\{\Lambda_i\in\mathrm{Trans}(V_i)\}_{1\leq i\leq p}\) determines a translation on \(P\). Likewise, any translation \(\Lambda\) on \(\tilde{P}\) where \(P=V_1\wedge\ldots\wedge V_p\) can be fully described by how it acts on the individual \(\tilde{V}_i\). However, in reverse we must add the extra condition that pairs of translations for adjacent \(V_i\) agree at the points of overlap. 
That is, any collection \[ \{\Lambda_i\in\mathrm{Trans}(\tilde{V}_i)\}_{1\leq i\leq p}: \Lambda_i(n_i)=\Lambda_{i+1}(1_{i+1})\text{ for all }1\leq i< p \] determines a translation on \(\tilde{P}\). \end{prop} We now define an interleaving-type distance using the poset suspended at \(\infty\). \begin{definition}\label{def_wt} For a poset \(P\) and a pair \((a,b)\in\mathbb{N}\times\mathbb{N}\), define the \emph{weighted height} of a translation \(\Lambda\) over \(\tilde{P}\) to be \[ \tilde{h}(\Lambda)=\max_{x\in P}\delta^{(a,b)}(x,\Lambda x), \] where \(\delta^{(a,b)}(x,y)\) is the directed graph distance between \(x\) and \(y\), with edges of \(P\) counted with weight \(a\), and added edges of \(\tilde{P}\) counted with weight \(b\). \end{definition} At a weight of \((1,1)\), this is the directed graph distance induced by the poset structure. However, as we want to make the movement of former maximals possible without entirely losing track of the significance of that operation, we have the ability to feather the ``penalty'' of moving these former maximals to \(\infty\) with the weight \(b\) (or rather, the weight of \(b\) relative to \(a\)). \begin{definition}\label{def_wil} For a poset \(P\) and a pair \((a,b)\in\mathbb{N}\times\mathbb{N}\), we define the \emph{weighted interleaving distance} \(D_{\mathrm{I}}^{(a,b)}\) to be the interleaving distance (Definition \ref{def_il}) on the set of representations of \(P\), but with translations taken over \(\tilde{P}\), using the height function given in Definition \ref{def_wt}. Throughout, the notation \(D_{\mathrm{I}}^{(a,b)}\) will be reduced to \(D_{\mathrm{I}}\). \end{definition} We introduce the following notation for future convenience. \begin{notation}\label{not_t} For a given \(V_i\), let \(T_i=\min\{m_i-1_i,n_i-m_i\}\) be the length of the short side, and \(S_i=\max\{m_i-1_i,n_i-m_i\}\) be the length of the long side. Define \(T_i=0\) if \(V_i\) is equioriented. 
Define \(T \vcentcolon =\displaystyle\max_{1\leq i\leq p}T_i\). Define \(S \vcentcolon =\displaystyle\max_{1\leq i\leq p}S_i\). \end{notation} \begin{prop}[Classification of Translations on \(\tilde{P}\)]\label{prop_class} Let \(P=V_1\wedge V_2\wedge\ldots\wedge V_p\) be an orientation of \(\mathbb{A}_n\), and assume \(a\leq b\). Let \(\Lambda\) be a translation on \(\tilde{P}\). The collection of full translations (Remark \ref{rmk_full}) is described below by how they act on each individual \(\tilde{V}_i\) (Proposition \ref{prop_split}). \begin{itemize} \item If \(\tilde{h}(\Lambda)<a\), then each \(\Lambda_i\) (and so \(\Lambda\) itself) is the trivial translation. \item If \(a\leq\tilde{h}(\Lambda)<b\), then the sources and sinks of each \(V_i\) are fixed by \(\Lambda_i\). All other vertices move upwards by \(k\) vertices, where \(ak\leq \tilde{h}(\Lambda)< a(k+1)\), or to their unique comparable sink, if that is closer than \(k\) vertices. \item If \(b\leq \tilde{h}(\Lambda)<a(T_i-1)+b\), then \(\Lambda\) can be described in the same way as above, save that now the sinks are sent to \(\infty\). Also, if \(ak+b\leq \tilde{h}(\Lambda)<a(k+1)+b\), then any vertices (other than the unique source) that are within \(k\) vertices from their corresponding sink are also sent to \(\infty\). \item If \(a(T_i-1)+b\leq\tilde{h}(\Lambda)\), then the entire shorter leg (sans the source) can be sent to \(\infty\) by the translation. Each vertex of the longer leg (including the source) is sent up the longer side as far as the translation permits (including being sent to \(\infty\)). Barring extreme differences between the lengths of the two sides combined with small values of \(b\), \(\Lambda^2\) will almost always send every vertex of \(V_i\) to \(\infty\). \end{itemize} To summarize, if \(I\) and \(J\) are two arbitrary persistence modules over \(P\): \begin{itemize} \item \(D_{\mathrm{I}}(I,J)<b\) guarantees that \(I\) and \(J\) are isomorphic on every fixed point. 
\item \(D_{\mathrm{I}}(I,J)=ak+b\) for some \(k\geq 0\) guarantees that \(I\) and \(J\) are isomorphic on the minimal vertices of any \(V_i\) in which \(T_i-1\geq ak\). \end{itemize} \end{prop} \begin{ex} \begin{figure} \centering \includegraphics[scale=1]{tikz_wil_ex1} \caption{An orientation of \(\mathbb{A}_n\) with two pairs of intervals. For \(b\gg a\), the red intervals are much closer under \(D^{(a,b)}_{\mathrm{I}}\) than the blue intervals.} \label{fig_wil_ex1} \end{figure} With all definitions in place, we can give a simple example conveying what \(D_{\mathrm{I}}\) measures, and what it ignores. In Figure \ref{fig_wil_ex1}, the red intervals are much closer to each other in the weighted interleaving distance than the blue intervals are. In particular, \(D_{\mathrm{I}}(\textrm{red modules})\) requires a translation of height sufficient to annihilate all the shallow \(V_i\)'s, but not the large one. However, \(D_{\mathrm{I}}(\textrm{blue modules})\) immediately requires moving the minimal vertex at the bottom of the deepest \(V_i\), already demanding a larger translation than anything involved in interleaving the red modules. \end{ex} \subsection{Stability of \(D_{\mathrm{I}}\) over \(D_{\mathrm{AR}}\) as Bottleneck Distances} \begin{remark}\label{rmk_always_bneck} We will compare \(D_{\mathrm{I}}\) and \(D_{\mathrm{AR}}\) as \emph{bottleneck distances}. From here onward, let \(D_{\mathrm{I}}\) denote the bottleneck distance induced by the weighted interleaving distance. The focus of this section is the minimization of weights \((a,b)\) (under lexicographic \(\mathbb{N}\times\mathbb{N}\) ordering) such that \(D_{\mathrm{AR}}\leq D_{\mathrm{I}}^{(a,b)}\) (again, as bottleneck distances). 
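Explicitly, the lexicographic ordering on weights used here is \[ (a,b)<(a',b') \iff a<a'\text{, or } a=a' \text{ and } b<b', \] so that minimizing a weight means first minimizing \(a\), and only then \(b\). 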
\end{remark} The weighted interleaving distance measures different features than the other distances in this paper, and was adopted as one of our directions of investigation due to its ability to preserve an interleaving-like approach to finite posets that is not immediately stalled by sink/source vertices, which must remain \emph{fixed} under the ordinary interleaving distance. The authors previously proved an algebraic stability result using this distance for ``branch''-type posets \cite{meehan_meyer_1}. While not supplying an algebraic stability result for arbitrary \(\mathbb{A}_n\) quivers, we do compare \(D_{\mathrm{I}}\) (its induced bottleneck distance) against \(D_{\mathrm{AR}}\). Instead of single-variable Lipschitz stability results, we state \(D_{\mathrm{I}}\) stability against another distance in terms of the two-parameter weight used to define it: \((a,b)\in\mathbb{N}\times\mathbb{N}\), where we consider \(\mathbb{N}\times\mathbb{N}\) to be ordered lexicographically (Definition \ref{def_wil}). This ordering is to prioritize first minimizing the weight attached to the original poset structure, and afterwards the weight that determines distances to \(\infty\). \begin{theorem}\label{thm_pairs} Let \(P=V_1\wedge\ldots\wedge V_l\) be some orientation of \(\mathbb{A}_n\). Let \(T,S\) be as in Notation \ref{not_t}. The classification of stable weights for \(D_{\mathrm{I}}\geq D_{\mathrm{AR}}\) as bottleneck distances is: \begin{itemize} \item for equioriented, see Corollary \ref{cor_equi}, \item for \(S<3\), see Proposition \ref{prop_s_3}, \item for \(S\geq 3\) and \(T=1\), see Proposition \ref{prop_s_3}, \item other non-shallow posets are not addressed (see Definition \ref{defn_shallow}), and remaining posets are split by centrality (see Definition \ref{defn_central}), \item for shallow and central orientations, see Proposition \ref{prop_shallow_central}, \item for shallow and non-central orientations, see Corollary \ref{cor_shallow_non_central}. 
\end{itemize} \end{theorem} \subsection{Stable values of \(a\)} \begin{prop}\label{prop_aok} Let \(P=V_1\wedge\ldots\wedge V_p\). \begin{itemize} \item If \(S\geq 3\), the minimal permissible weight is of the form \((2,b)\). \item If \(S=2\), the minimal permissible weight is of the form \((1,b)\) if there is only a single equioriented segment of two consecutive edges, and is of the form \((2,b)\) otherwise. \item If \(S=1\), the minimal permissible weight is of the form \((1,b)\). \end{itemize} \end{prop} \begin{proof} \textbf{Necessity:} For \(S\geq 3\) consider the following diagram. \begin{center} \includegraphics[scale=1.5]{tikz_s_is_3} \end{center} For \(S=2\) consider the following. For any two equioriented segments with \(i-2\leftrightarrow i-1\leftrightarrow i\) left of \(j\leftrightarrow j+1\leftrightarrow j+2\), the intervals \(\sigma=[i-1,j+1]\) and \(\tau=[i,j]\) are interleaved by a translation of height \(a\) while having an AR distance of \(2\). Two of the four possible configurations appear below, the remaining two being those with segments of \(\nearrow\nearrow\) and \(\searrow\searrow\) arrangements. \begin{center} \includegraphics[scale=1.5]{tikz_s_is_2_up} \end{center} That is, the only \(S=2\) type of poset permitting \(a=1\) is a pure zigzag with a single pair of consecutive edges of the same orientation. \textbf{Sufficiency:} One need only consider \(W\)-values of interval modules containing no maximals or minimals, and \(d\)-values of pairs of interval modules whose supports share precisely the same fixed points. This is easy to check in all cases. \end{proof} \begin{corollary}\label{cor_aok} For any poset \(P\) with \(S\geq 3\) and weight \((2,b)\) with \(b>2\), if \(D_{\mathrm{I}}(\sigma,\tau)<b\) then \[ D_{\mathrm{AR}}(\sigma,\tau)\leq D_{\mathrm{I}}(\sigma,\tau). \] \end{corollary} \begin{corollary} For any orientation \(Q\) of \(\mathbb{A}_n\) and the appropriate choice of \(a=1,2\), the weight \((a,n)\) is \emph{always} permissible.
I.e., \(b=n\) is an upper bound for the value of \(b\) in the minimal permissible weight. \end{corollary} \begin{proof} This follows immediately from Propositions \ref{prop_aok} and \ref{prop_diam}. \end{proof} Lastly, a far more specific corollary records the minimal stable weight for equioriented \(\mathbb{A}_n\). \begin{corollary}\label{cor_equi} Let \(Q\) be an equioriented \(\mathbb{A}_n\). Then \((2,1)\) is the minimal stable weight. \end{corollary} \subsection{Stability when \(S<3\)} \begin{prop}\label{prop_s_3} If \(S<3\), then stability is minimally obtained by \((a,b)\) where \(a=1\) or \(a=2\) according to Proposition \ref{prop_aok}, and \(b=n\). \end{prop} \begin{proof} \textbf{Necessity:} \(W_{\mathrm{I}}([1,n])=b\) and \(W_{\mathrm{AR}}([1,n])=n\). \textbf{Sufficiency:} Due to Proposition \ref{prop_aok}, one need only check \(W,d\)-values involving modules of the form \([x,n]\). \end{proof} \subsection{Short and Long Escape} As translations split over wedges (Proposition \ref{prop_split}), we now examine the translations required for realizing \(W_{\mathrm{I}}([V_i])\) of any wedged component, where \(P=V_1\wedge\ldots\wedge V_l\). \begin{definition}\label{defn_lambda} Let \(V=\{1\geq\ldots\geq m\leq\ldots\leq n\}\). Assume that the left side is strictly shorter than the right. That is, according to Notation \ref{not_t}, \[ m-1=T<S=n-m. \] We first construct the most basic translation \(\Lambda\) such that \(\phi,\psi=0\) form a \(\Lambda\)-interleaving of \([1,n]\) and \(0\). This is the translation given by: \begin{itemize} \item \(\Lambda x = \infty\) for all \(x\) in \([1,m)\). Note that the distance in the weighted poset from \(m-1\) to \(\infty\) is \begin{equation}\label{lambda1} \epsilon(b)=2(T-1)+b.
\end{equation} \item For all \(x\in[m,n]\), \(\Lambda x\) moves up the right hand side (possibly to \(\infty\)) by a distance of \(\mathcal{E}(b)\), where \begin{equation}\label{lambda2} \mathcal{E}(b)=\left\{ \begin{array}{ll} b & \text{if }b \geq 2S \\ \\ \tfrac{1}{2}(2S+b) & \text{if }2S+b\equiv 0\,\mathrm{mod}\,4 \\ \\ \tfrac{1}{2}(2S+b)+1 & \text{if }2S+b\equiv 2\,\mathrm{mod}\,4 \\ \\ \left\lceil\tfrac{1}{2}(2S+b)\right\rceil & \text{if }2S+b\equiv 1,3\,\mathrm{mod}\,4 \\ \end{array}\right. \end{equation} \end{itemize} In short, by the translation property that \(x\leq y\) demands \(\Lambda x\leq \Lambda y\), moving the minimal up one side requires that the entire other side be sent to \(\infty\) by \(\Lambda\). However, the side up which the minimal is moved is more relaxed, and may take \emph{two} \(\Lambda\)-applications in order to send all vertices to \(\infty\), by the properties of interleavings. Define \(\epsilon(b)\) to be the \emph{short escape}, and \(\mathcal{E}(b)\) to be the \emph{long escape}. This construction is of minimal height, being \(h(\Lambda)=\max\{\epsilon(b),\mathcal{E}(b)\}\), such that \(\Lambda^2(x)=\infty\) for any \(x\in V\). Replacing the prototype translation, define \(\Lambda_{V}^b\) to be \emph{the maximal translation on \(V\) of height} \(\max\{\epsilon(b),\mathcal{E}(b)\}\). This translation is unique (unless \(V\) is symmetric, in which case choose the left side to be considered the `short' side). \end{definition} \begin{prop}\label{prop_kill} The translation \(\Lambda_V^b\) is of minimal height such that \(\phi,\psi=0\) form a \(\Lambda_V^b\)-interleaving of \([V]\) and \(0\). I.e., \(\Lambda_V^b\) realizes \(W_{\mathrm{I}}([V])\) and no translation of smaller height does. \end{prop} Proposition \ref{prop_kill} pairs extremely well with the following. (Recall that we are now considering \(D_I\) to always refer to its induced bottleneck distance as per Remark \ref{rmk_always_bneck}.)
\begin{corollary}[to Proposition \ref{prop_split}]\label{cor_wedges} Let \(P=V_1\wedge\ldots\wedge V_p\). The induced bottleneck distance \(D_{\mathrm{I}}\) (by slight abuse of notation) and its generating functions \(W,d\) all split over wedges. \begin{itemize} \item \( W_{\mathrm{I}}(I)=\displaystyle\max_{1\leq l\leq p} \{W_{\mathrm{I}}(I|_{V_l})\}, \) \item \( d_{\mathrm{I}}(I,J)=\displaystyle\max_{1\leq l\leq p} \{d_{\mathrm{I}}(I|_{V_l},J|_{V_l})\}, \) \item \( D_{\mathrm{I}}(I,J)=\displaystyle\max_{1\leq l\leq p} \{D_{\mathrm{I}}(I|_{V_l},J|_{V_l})\}. \) \end{itemize} \end{corollary} \subsection{Shallow Posets} \begin{ex} Using Proposition \ref{prop_kill} and Corollary \ref{cor_wedges}, let us examine a powerful constraint for stability: \(W\)-values for the indecomposable \([1,n]\). If we solve simultaneously for the conditions that (a) the largest \emph{long escape} exceeds the largest \emph{short escape} (i.e., \(W_{\mathrm{I}}([1,n])\) is determined by some long escape) and (b) stability of the form \(D_{\mathrm{AR}}\leq D_{\mathrm{I}}\), we get the two bounds \[ b\geq 2n-2S\text{ and }b<2S-4T+2. \] Combining inequalities, we see that such a \(b\) can only exist if (even with some permissive rounding) \[ 2(S-T)+1>n. \] \end{ex} \begin{remark} One immediately sees from the equation above that the situation in which \(W_{\mathrm{I}}([1,n])\) is determined by some long escape value is incredibly specific, as it requires at the very least that the poset have one \(V_i\) whose longer side constitutes \emph{more than half of the entire poset} (using \(T\geq 2\)): \[ 2S > n+1. \] \end{remark} As long escape dictates \(W_{\mathrm{I}}([1,n])\) only in this extreme case, we will henceforth consider only the complementary situation.
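As a quick illustration of Definition \ref{defn_lambda}, the following worked computation of the two escape values may be helpful; the values \(T=2\), \(S=4\), \(b=5\) are purely illustrative.

```latex
% Worked example of the short and long escape (illustrative values only).
Take a single $V$ with $T=2$ and $S=4$, and weight $b=5$. The short escape
of Eq.~\eqref{lambda1} is
\[
\epsilon(5)=2(T-1)+b=2+5=7.
\]
For the long escape of Eq.~\eqref{lambda2}, note that $b=5<2S=8$ and
$2S+b=13\equiv 1\,\mathrm{mod}\,4$, so
\[
\mathcal{E}(5)=\left\lceil\tfrac{1}{2}(2S+b)\right\rceil=\lceil 13/2\rceil=7,
\]
and the minimal interleaving height is
$h(\Lambda)=\max\{\epsilon(5),\mathcal{E}(5)\}=7$.
```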
\begin{definition}\label{defn_shallow} An orientation of \(\mathbb{A}_n\) written \(P=V_1\wedge\ldots\wedge V_l\) that has \(S\geq 3\) (Proposition \ref{prop_aok} above) is \emph{shallow} if \(T\geq 2\) (to keep the hull small) and \(2S\leq n\) (to ensure all \(W_{\mathrm{I}}\)'s are determined by short escape). \end{definition} \begin{remark} Indeed, in a shallow poset \emph{short escape} values are used for \emph{any} \(W_{\mathrm{I}}([V_l])\) (and so, by Corollary \ref{cor_wedges}, all \(W_{\mathrm{I}}\)-values). To see that \(W_{\mathrm{I}}([V_l])=\epsilon_l(b)\) for all \(1\leq l\leq p\), simply note that \(\mathcal{E}_l(b)=b<2(T_l-1)+b=\epsilon_l(b)\). \end{remark} With only this, we can immediately get the stability statement for \(W\)-values out of the way. \begin{prop}\label{prop_shallow_w} For a shallow poset and any weight \((2,b)\) with \(b\geq n-T\), \[ W_{\mathrm{AR}}(\sigma)\leq W_{\mathrm{I}}(\sigma) \] for any indecomposable \(\sigma\). \end{prop} \begin{proof} If \(\mathrm{supp}(\sigma)\) contains no sink or source we are done by Corollary \ref{cor_aok}. If \(\sigma\in\mathrm{Hull}(Q)\), then by the \(T\geq 2\) condition for \emph{shallow}, for any \([x,y]\in\mathrm{Hull}(Q)\) either \([x,y]\subset(1,m_t]\) or \([x,y]\subset[m_t,n)\). In particular, the corresponding \([e]\) and \([E]\) of Lemma \ref{lemma_hull_w} obey \(e< E\leq m_t+1\) or \(m_t-1\leq e\leq E\). The formulas of Lemma \ref{lemma_hull_w} are all \(\leq n-T\leq b\leq W_{\mathrm{I}}([x,y])\). (It is possible for one equation to reach \(n-T+1\), but in this case \(m_t\in[x,y]\) and \(W_{\mathrm{I}}([x,y])\geq 2+n-T\) as \(T\geq 2\).) If \(\sigma\not\in\mathrm{Hull}(Q)\) then \(W_{\mathrm{AR}}(\sigma)=\mathrm{dim}(\sigma)\). If \(m_t\not\in\mathrm{supp}(\sigma)\), then either \([1_t,m_t]\) or \([m_t,n_t]\) is disjoint from \(\mathrm{supp}(\sigma)\) (each of which has length at least \(T\)), guaranteeing that \(\mathrm{dim}(\sigma)\leq n-T\).
Otherwise, \(m_t\in\mathrm{supp}(\sigma)\), and \(W_{\mathrm{I}}(\sigma)=W_{\mathrm{I}}([1,n])=2(T-1)+b\geq n+T-2 \geq n =\mathrm{diam}(W_{\mathrm{AR}})\). \end{proof} \subsection{Stability for Shallow and Central} \begin{lemma}\label{lemma_1_2_3} If any of the following are true about a pair of intervals \(\sigma,\tau\) over the shallow poset \(P=V_1\wedge\ldots\wedge V_p\), then any weight \((a,b)\) with \(a=2\) and \(b\geq n-T\) is stable. \begin{enumerate} \item \(m_t\in\) one of \(\mathrm{supp}(\sigma),\mathrm{supp}(\tau)\), but not the other. \item \(\mathrm{dim}(\sigma)\leq b\). \item \([V_t]\subset\mathrm{supp}(\sigma)\). \end{enumerate} Throughout, assume the intervals are always labeled such that \(W_{\mathrm{AR}}(\sigma)\geq W_{\mathrm{AR}}(\tau)\). \end{lemma} \begin{proof}[Proof of Lemma \ref{lemma_1_2_3}] \textbf{(1)} Any interleaving translation must move \(m_t\), and so has height \(D_{\mathrm{I}}(\sigma,\tau)\geq 2(T-1)+b\geq 2T-2+n-T=n+T-2\geq n=\mathrm{diam}(W_{\mathrm{AR}})=\mathrm{diam}(D_{\mathrm{AR}})\). \textbf{(2)} In the proof of Proposition \ref{prop_shallow_w} we saw that \(W_{\mathrm{AR}}(\sigma)\leq n-T\leq b\) when \(\sigma\in\mathrm{Hull}\). So for any \(\sigma\), if \(\mathrm{dim}(\sigma)\leq b\), then \(W_{\mathrm{AR}}(\sigma)\leq b\). But then \(D_{\mathrm{AR}}(\sigma,\tau)=\min\{d_{\mathrm{AR}}(\sigma,\tau),W_{\mathrm{AR}}(\sigma)\}\) (by the running assumption of \(W_{\mathrm{AR}}(\sigma)\geq W_{\mathrm{AR}}(\tau)\)), and so \(D_{\mathrm{AR}}(\sigma,\tau)\leq b\). By Corollary \ref{cor_aok}, the pair is stable. \textbf{(3)} By (2), we may assume that \(\sigma=[x,y]\) where \(y-x\geq b\geq n-T\). As \(1+T\leq m_t\leq n-T\), it is immediate that \(m_t\in\mathrm{supp}(\sigma)\). Hence, by (1), \(m_t\in\mathrm{supp}(\tau)\) also. Assume now that, in addition, all of \([V_t]\subset\mathrm{supp}(\sigma)\). We will show by cases on the equation for \(d_{\mathrm{AR}}\) that this must also yield stability.
First note the following inequalities generated by the interleaving condition: as \([V_t]\subset\mathrm{supp}(\sigma)\), the endpoints of \(\tau=[x_2,y_2]\) are restricted by \[ x_2\leq 1_t+1+\dfrac{D_I-b}{2} \] \[ y_2\geq n_t-1-\dfrac{D_I-b}{2} \] where \(D_I:=D_{\mathrm{I}}(\sigma,\tau)\). Stability can now be checked across all possible cases of \(\delta^x,\delta^y\). We show only one of them here. \begin{align*} D_{\mathrm{AR}}(\sigma,\tau)\leq d_{\mathrm{AR}}(\sigma,\tau) &=|x_1-x_2|+|y_1-y_2|\\ &\leq 1_t+1+\dfrac{D_I-b}{2}-1+n-\left(n_t-1-\dfrac{D_I-b}{2}\right)\\ &\leq D_I-(n-T)+n-(n_t-1_t)+1\\ &\leq D_I+T-(S+T)+1\\ &\leq D_I \end{align*} \end{proof} Recall the meanings of \(T,S\) from Notation \ref{not_t}. \begin{definition}\label{defn_central} We say a poset \(P=V_1\wedge\ldots\wedge V_p\) is \emph{central} if there is some \(V_t\) with \(T_t=T\) positioned in such a way that \[ [V_t]\subset[T,n-T+1]. \] \end{definition} \begin{prop} A shallow poset is central if and only if every pair of intervals fulfills at least one of the conditions of Lemma \ref{lemma_1_2_3}. \end{prop} \begin{corollary}\label{prop_shallow_central} If \(P\) is a shallow and central poset, then every pair of indecomposable modules \(\sigma,\tau\) satisfies the inequality \[ D_{\mathrm{AR}}(\sigma,\tau)\leq D_{\mathrm{I}}(\sigma,\tau) \] for any weight \((2,b)\) with \(b\geq n-T\). \end{corollary} \subsection{Stability for Shallow and non-Central} \begin{prop}\label{prop_pre_non_central} Suppose \(P\) is a shallow and \emph{non-central} orientation of \(\mathbb{A}_n\): suppose without loss of generality that \(1_t<T\). Consider a weight \((2,b)\) with \(b\geq n-T\). Any pair of indecomposables \(\sigma=[x_1,y_1],\tau=[x_2,y_2]\) is stable under this weight unless \[ x_1,x_2\in(1_t,m_t]\text{ and }\delta^y=n-y_1+n-y_2. \] \end{prop} Proposition \ref{prop_pre_non_central} follows from the subsequent lemma.
\begin{lemma}\label{lemma_stable_intervals} If any of the following are true about a pair of intervals \(\sigma,\tau\) over a shallow poset \(P=V_1\wedge\ldots\wedge V_p\), then any weight \((a,b)\) with \(a=2\) and \(b\geq n-T\) is stable. Throughout, assume the intervals are always labeled such that \(W_{\mathrm{AR}}(\sigma)\geq W_{\mathrm{AR}}(\tau)\). \begin{enumerate} \item \(\sigma,\tau\) are in the same region of the AR quiver. \item \(\sigma,\tau\) are in opposite regions of the AR quiver (a north-south or east-west pair). \item \(1_t\not\in\mathrm{supp}(\sigma)\) and \(x_2\leq 1_t\) (symmetrically, \(n_t\not\in\mathrm{supp}(\sigma)\) and \(x_2\geq n_t\)). \end{enumerate} \end{lemma} \begin{proof} \textbf{(1)} From Lemma \ref{lemma_1_2_3} we may assume \([V_t]\not\subset\mathrm{supp}(\sigma)\). As \(m_t\in\mathrm{supp}(\sigma)\), it follows that either \(1_t\) or \(n_t\) is in \(\mathrm{supp}(\sigma)\). Suppose then, without loss of generality, that \(1_t\not\in\mathrm{supp}(\sigma)\): that is, \(x_1\in(1_t,m_t]\). We may assume that \(m_t\in\mathrm{supp}(\tau)\). If \(x_2\leq 1_t\), then the bound on \(|x_1-x_2|+|y_1-y_2|\) proceeds identically to the similar equation in the proof of Lemma \ref{lemma_1_2_3} (3). Otherwise, \(x_2\in(1_t,m_t]\). Then, \begin{align*} D_{\mathrm{AR}}(\sigma,\tau)\leq d_{\mathrm{AR}}(\sigma,\tau) &=|x_1-x_2|+|y_1-y_2|\\ &\leq m_t-1_t+n-\left(n_t-1-\dfrac{D_I-b}{2}\right)\\ &< D_I-(n-T)+n-(n_t-m_t)-1_t+1\\ &\leq D_I \end{align*} \textbf{(2)} Again by Lemma \ref{lemma_1_2_3}, assume without loss of generality that \(x_1\in(1_t,m_t]\). Then it must be that \(x_2\leq 1_t\) in order to have \(\delta^x=x_1-1+x_2-1\). But in such a situation, the bound on \(x_1-1+x_2-1+n-y_1+n-y_2\) proceeds identically to the similar equation in the proof of Lemma \ref{lemma_1_2_3} (3).
\textbf{(3)} Using Lemma \ref{lemma_1_2_3} and this lemma's (1) and (2), we may assume without loss of generality that \(1_t\not\in\mathrm{supp}(\sigma)\), and either \begin{itemize} \item \(d_{\mathrm{AR}}(\sigma,\tau)=x_1-1+x_2-1+|y_1-y_2|\) or \item \(d_{\mathrm{AR}}(\sigma,\tau)=|x_1-x_2|+n-y_1+n-y_2\). \end{itemize} However, given the assumption \(1_t\not\in\mathrm{supp}(\sigma)\), the first equation above also yields stability. If \(d_{\mathrm{AR}}(\sigma,\tau)\) is the first equation, then \(x_2\leq 1_t\), and so: \begin{align*} D_{\mathrm{AR}}(\sigma,\tau)\leq d_{\mathrm{AR}}(\sigma,\tau) &=x_1-1+x_2-1+|y_1-y_2|\\ &\leq 1_t+1+\dfrac{D_I-b}{2}-1+1_t-1+n-\left(n_t-1-\dfrac{D_I-b}{2}\right)\\ &\leq D_I-b+n+2\cdot1_t-n_t\\ &\leq D_I+1_t+T-(n_t-1_t)\\ &<D_I+2T-(S+T) \end{align*} Assume the second equation, and assume that \(x_2\leq 1_t\). However, one can immediately see from the bound on the similar equation in the proof of Lemma \ref{lemma_1_2_3} (3) that this assumption results in stability as well. \end{proof} This result allows us to narrow down a \emph{maximally anti-stable} candidate pair for any shallow non-central poset. \subsection{Maximally Anti-Stable Pairs} The structure of this section is as follows. Suppose \(P\) is a shallow but non-central orientation of \(\mathbb{A}_n\). Without loss of generality suppose that \(1_t<T\). We have already shown by Lemmas \ref{lemma_1_2_3} (3) and \ref{lemma_stable_intervals} that any anti-stable pair \(\sigma=[x_1,y_1]\), \(\tau=[x_2,y_2]\) has the property that \(x_1,x_2\in(1_t,m_t]\) and \(y_1,y_2\geq m_t\) are of opposite orientation from each other. This means that \(\delta_{\mathrm{AR}}(\sigma,\tau)=|x_1-x_2|+n-y_1+n-y_2\) for any anti-stable pair. We measure anti-stability by the size of the difference \(D_{\mathrm{AR}}-D_I\), and show that starting from \emph{any} anti-stable pair, we can reduce to one of two canonical anti-stable pairs that between them maximize anti-stability.
First, choosing \(x_1,x_2\) as far apart as possible increases \(D_{\mathrm{AR}}\) while having no effect on \(D_I\). But \(y_1\) has a lower bound dependent on \(x_1\)'s position (while \(y_2\) does not depend on \(x_2\)), so to maximize later freedom we choose \(x_1=1_t+1\) and \(x_2=m_t\). Then, \(y_2\) has two \(d_{\mathrm{AR}}\)-minimizing possibilities based on the orientation of \(y_1\). Lastly, \(y_1\) can be shifted left to further minimize \(d_{\mathrm{AR}}\). This leftward shifting of \(y_1\) potentially alters the interleaving distance between \(\sigma\) and \(\tau\), but as long as \(y_1\) is chosen such that \(\mathrm{dim}(\sigma)>b\) [Lemma \ref{lemma_1_2_3} (2)] it causes a strict increase in anti-stability of the pair. \begin{definition} For any \(y>n_t\), define \(k(y)=\displaystyle\max_{t<j\leq i}\{T_j\}\) where \(y\in[m_i,m_{i+1})\). \end{definition} For any vertex \(y\) to the right of \(V_t\), the value \(k(y)\) returns the largest of the short sides \(T_j\) among the \(V_j\)'s contained between \(V_t\) and \(y\). This value determines the interleaving distance between two modules containing \(m_i\), one of whose right endpoints is \(y\), and the other of which is contained between \(m_i\) and \(m_{i+1}\). As \(D_{\mathrm{AR}}(\sigma,\tau)\leq W_{\mathrm{AR}}(\sigma)\), if \(W_{\mathrm{AR}}(\sigma)\leq D_{\mathrm{I}}(\sigma,\tau)\) then we are done. It suffices to assume throughout that \(W_{\mathrm{AR}}(\sigma)>D_{\mathrm{I}}(\sigma,\tau)\), and then to show that \(d_{\mathrm{AR}}(\sigma,\tau)\leq D_{\mathrm{I}}\). The assumption \(W_{\mathrm{AR}}(\sigma)>D_{\mathrm{I}}(\sigma,\tau)\) amounts to the inequality \[ y_1-x_1+1>2(k(y_1)-1)+b.
\] This is clear from Lemma \ref{lemma_stable_intervals} plus the foreknowledge that we will be adjusting all other vertices such that the defining feature of \(D_{\mathrm{I}}(\sigma,\tau)\) will be \(W_{\mathrm{I}}\) of the \(V_i\)'s between \(n_t\) and \(y_1\), as these are in the support of \(\sigma\) and outside the support of \(\tau\). More conveniently, substituting \(x_1=1_t+1\), we write the above inequality as \[ y_1>2k(y_1)-2+b+1_t. \] \begin{definition}\label{defn_k_stuff} For a weight \((2,b)\) and vertex \(y>n_t\), consider the statement \[ \Theta(y): y>2k(y)-2+b+1_t. \] Define \[y_u(b)=\min\{y:\Theta(y)\text{ holds and }y\text{ is upward oriented}\}\] and \[y_d(b)=\min\{y:\Theta(y)\text{ holds and }y\text{ is downward oriented}\}\] where we will simply write \(y_u\) and \(y_d\) when context makes clear the value of \(b\). \end{definition} \begin{corollary}\label{cor_min_pair} If there is any pair that violates stability for the weight \((2,n-T)\), then at least one of the pairs \[(\sigma_u=[1_t+1,y_u],\tau_u=[m_t,n_t])\text{ or }(\sigma_d=[1_t+1,y_d],\tau_d=[m_t,n_t-k(y_d)])\] also violates stability for that weight and is maximally anti-stable out of all pairs of intervals over the poset (that is, the value of \(R=D_{\mathrm{AR}}-D_{\mathrm{I}}\) is positive and maximal for the correct pair). \end{corollary} In the event that there is any anti-stable pair for the poset, call the pair above with the greater anti-stability the \emph{maximal anti-stable pair for the poset}. If both pairs are equally anti-stable, choose \((\sigma_u,\tau_u)\). \begin{proof}\label{proof_prop_min_pair} This follows from Propositions \ref{prop_anti1} and \ref{prop_anti2}. \end{proof} \subsection{Reduction to the Maximally Anti-Stable Pairs} Let \(P\) be a shallow and non-central orientation of \(\mathbb{A}_n\).
Suppose there exists a pair \(\hat{\hat{\sigma}}=[x_1',y_1'],\hat{\hat{\tau}}=[x_2',y_2']\) with \(W_{\mathrm{AR}}(\hat{\hat{\sigma}})\geq W_{\mathrm{AR}}(\hat{\hat{\tau}})\) such that \((\hat{\hat{\sigma}},\hat{\hat{\tau}})\) is an anti-stable pair for any weight \((2,b)\) with \(b\geq n-T\). \begin{prop}\label{prop_anti1} If \((\hat{\hat{\sigma}},\hat{\hat{\tau}})\) is an anti-stable pair, then \(\hat{\sigma}=[1_t+1,y_1'],\hat{\tau}=[m_t,y_2']\) also comprise an anti-stable pair. Furthermore, \(R(\hat{\sigma},\hat{\tau})\geq R(\hat{\hat{\sigma}},\hat{\hat{\tau}})\) and \(W_{\mathrm{AR}}(\hat{\sigma})\geq W_{\mathrm{AR}}(\hat{\tau})\). \end{prop} \begin{proof} It is immediate that this choice of \(x_1,x_2\) maximizes the value of \(\delta_{\mathrm{AR}}(\sigma,\tau)\). The opposite assignment would do the same; however, \(y_1\) (which maximizes \(\delta_{\mathrm{AR}}\) by being \emph{small}) has an \(x_1\)-dependent lower bound, while \(y_2\) has no \(x_2\)-dependency. For this reason the precise assignment of \(x_1,x_2\) in the proposition is ideal going forward. \end{proof} Suppose there exists a pair \(\hat{\sigma}=[1_t+1,y_1'],\hat{\tau}=[m_t,y_2']\) with \(W_{\mathrm{AR}}(\hat{\sigma})\geq W_{\mathrm{AR}}(\hat{\tau})\) such that \((\hat{\sigma},\hat{\tau})\) is an anti-stable pair for any weight \((2,b)\) with \(b\geq n-T\). \begin{prop}\label{prop_anti2} If \((\hat{\sigma},\hat{\tau})\) is an anti-stable pair, then \(\hat{\sigma}=[1_t+1,y_1'],\tau=[m_t,y_2]\) also comprise an anti-stable pair, where \(y_2=n_t\) or \(y_2=n_t-k(y_1')\): whichever has opposite \(y\)-orientation from \(y_1'\). Furthermore, \(R(\hat{\sigma},\tau)\geq R(\hat{\sigma},\hat{\tau})\), and \(\hat{\sigma}\) has larger dimension than \(\tau\). \end{prop} \begin{proof} \textbf{(1)} Suppose \(y_1'\in[\textrm{max},\textrm{next min})\). Then \(\tau=[m_t,n_t-k(y_1')]\) and \(\hat{\tau}=[m_t,y_2]\), with \(y_2\geq n_t-k(y_1')\) and having orientation \(y_2\in[\textrm{min},\textrm{next max})\).
If \(n_t-1-k(y_1')<y_2<n_t\), then \[ D_I(\hat{\sigma},\tau)=D_I(\hat{\sigma},\hat{\tau}) \] but \[ D_{\mathrm{AR}}(\hat{\sigma},\tau)-D_{\mathrm{AR}}(\hat{\sigma},\hat{\tau}) =y_2-(n_t-k(y_1'))\geq 0, \] and so \[ R(\hat{\sigma},\tau)\geq R(\hat{\sigma},\hat{\tau}). \] Otherwise, \(y_2\in[m_p,n_p)\) for some \(p\geq t+1\). From \(\tau\) to \(\hat{\tau}\), the right endpoint \emph{increases}, and so the value of \(D_I\) may \emph{decrease}. Specifically, if \(D_I(\hat{\sigma},\tau)\) was determined by a particularly deep \(V_i\) (a large \(2(T_i-1)\) contribution) that is then included in the larger support of \(\hat{\tau}\), it will not be taken into account for that interleaving distance, and we will have a non-zero value for \[ D_I(\hat{\sigma},\tau)-D_I(\hat{\sigma},\hat{\tau})= 2\left(\max_{m_t<m_i\leq y_1'}\{T_i\}-\max_{y_2< m_i\leq y_1'}\{T_i\}\right). \] Let \(\displaystyle T_j=\max_{m_t<m_i\leq y_1'}\{T_i\}\). Then the difference above is at most \(2(T_j-1)\). If we can show that the difference between the \(D_{\mathrm{AR}}\)'s is larger than this, we will have shown a net increase in \(R(\hat{\sigma},\tau)\) over \(R(\hat{\sigma},\hat{\tau})\). \[ D_{\mathrm{AR}}(\hat{\sigma},\tau)-D_{\mathrm{AR}}(\hat{\sigma},\hat{\tau}) =y_2-(n_t-k(y_1'))\geq m_j-n_t+k(y_1'), \] as the drop in \(D_I\)'s was assumed to have happened by \(y_2\) exceeding the value of \(m_j\) (and so \(n_j\) by orientation conditions). As \(k(y_1')=T_j-1\), the difference in \(D_{\mathrm{AR}}\)'s becomes \[ m_j-n_t+k(y_1')\geq T_j+T_j-1=2T_j-1. \] This is precisely what was desired, and so we have the inequality for \(R\)-values. \textbf{(2)} Suppose next that \(y_1'\in[\textrm{min},\textrm{next max})\). Let \(y_2>n_t\) be of orientation \([\textrm{max},\textrm{next min})\).
If \(n_t<y_2<m_{t+1}\), then \[ D_I(\hat{\sigma},[m_t,n_t])=D_I(\hat{\sigma},[m_t,y_2]) \] and \[ D_{\mathrm{AR}}(\hat{\sigma},[m_t,n_t])>D_{\mathrm{AR}}(\hat{\sigma},[m_t,y_2]), \] so \(R\) strictly increases from choosing the right endpoint of \(\tau\) to be \(n_t\). Otherwise, by the requirement of orientation, \(n_{t+1}\leq y_2\). Then \[ D_I(\hat{\sigma},[m_t,n_t])-D_I(\hat{\sigma},[m_t,y_2])= 2\left(\max_{m_t<m_i\leq y_1'}\{T_i\} -\max_{y_2<m_i\leq y_1'}\{T_i\}\right). \] The above difference is bounded above by \(2(T_j-1)\), where \(\displaystyle T_j:=\max_{y_2<m_i\leq y_1}\{T_i\}\). At the same time, \[ D_{\mathrm{AR}}(\hat{\sigma},[m_t,n_t])-D_{\mathrm{AR}}(\hat{\sigma},[m_t,y_2])=y_2-n_t, \] where \(y_2\geq n_j\). But, \(y_2-m_j\geq 2T_j\), and so \(y_2-n_t>2T_j\). Combined, we see that \(R(\hat{\sigma},[m_t,n_t])>R(\hat{\sigma},[m_t,y_2])\) for any choice of \(y_2>n_t\). \end{proof} \subsection{Permissibility of \(n-T/2-1\)} \begin{corollary}\label{cor_shallow_non_central} Let \(P=V_1\wedge V_2\wedge\ldots\wedge V_p\) be a shallow and \emph{non}-central orientation of \(\mathbb{A}_n\). The minimal weight such that \(D_{\mathrm{AR}}\leq D_{\mathrm{I}}\) is \((2,b)\) where \(b\) is bounded above by \[ b\leq n-T/2-1. \] \end{corollary} \begin{proof} \emph{The minimal pair is always stable for} \(b\geq n-T/2-1\): Of the two possible minimal pairs of Corollary \ref{cor_min_pair} we will only show the proof for \(\sigma_d=[1_t+1,y_d]\) and \(\tau_d=[m_t,n_t-k(y_d)]\). (The proof for \(\sigma_u,\tau_u\) is incredibly similar, and yields a slightly less restrictive inequality.) Recall that \(y_d\) is minimal such that \(x_1+D_I=1_t+n-T/2+2k(y_d)\leq y_d\) (Definition \ref{defn_k_stuff}). So \(D_I=b+2(k(y_d)-1)\) and \(\delta_{\mathrm{AR}}(\sigma_d,\tau_d)=m_t-1_t-1+n-(n_t-k(y_d))+n-y_d\).
Comparing, we get \begin{align*} \delta_{\mathrm{AR}}(\sigma_d,\tau_d)&\leq D_I(\sigma_d,\tau_d)\text{ if}\\ m_t-1_t-1+n-n_t+k(y_d)+n-y_d &\leq b+2(k(y_d)-1)\text{ if} \\ m_t-1_t-1+2n-n_t+k(y_d)-(1_t+b+2k(y_d))&\leq b+2(k(y_d)-1)\text{ if}\\ m_t-1-2\cdot1_t+2n-n_t-k(y_d)-2b&\leq 2(k(y_d)-1)\text{ if}\\ m_t-1-2\cdot1_t+2n-n_t-2n+T&\leq 2k(y_d)-2\text{ if}\\ m_t-n_t+T-2\cdot1_t+3&\leq 3k(y_d)\text{ if}\\ -2\cdot1_t+3&\leq 3k(y_d) \end{align*} the last statement of which is true due to the left being \(\leq 1\) and the right being \(\geq 3\). \end{proof} \begin{figure} \includegraphics[scale = 4]{tikz_final_ex_largeb} \caption{General example of a poset that attains \((2,n-T/2-1)\) as its minimal stable weight.} \label{fig_upper_bound} \end{figure} \begin{ex}\label{ex_upper_bound}(See Figure \ref{fig_upper_bound}.) We show a sample poset in which the minimal stable value equals the upper bound \(b=n-T/2-1\). For \(T>1\), let \(1_t=1\), \(m_t=T+1\), \(n_t=2T+1\), \(n=1+4T\). Let the region from \(n_t\) to \(n\) consist of \(V_p\)'s with \(T_p=1\) and of orientation such that \(y_u\) is forced to be (even just slightly) larger than the minimization given by Definition \ref{defn_k_stuff}. Then \((\sigma_d,\tau_d)\) form the minimal pair, and we can explicitly check that \(b=n-T/2-1\) is permissible while no smaller weight will be: \begin{align*} \delta(\sigma_d,\tau_d)&\leq D_I(\sigma_d,\tau_d)\text{ iff}\\ T-1+n-2T+n-(2+b)&\leq b\text{ iff}\\ 2n-T-3&\leq 2b\text{ iff}\\ n-T/2-1&\leq b\text{ if }T\text{ is even, or}\\ n-T/2-2&\leq b\text{ if }T\text{ is odd.} \end{align*} \end{ex}
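As a concrete check of Example \ref{ex_upper_bound}, the smallest admissible case \(T=2\) (the numbers below simply instantiate the example, and are illustrative only) works out as follows.

```latex
% Numeric instantiation of Example \ref{ex_upper_bound} with T=2.
With $T=2$ one has $1_t=1$, $m_t=3$, $n_t=5$, and $n=1+4T=9$, so the
claimed minimal value is $b=n-T/2-1=7$. The chain above becomes
\[
(T-1)+(n-2T)+\bigl(n-(2+b)\bigr)=1+5+(7-b)\leq b
\quad\Longleftrightarrow\quad 13\leq 2b,
\]
which holds for $b=7$ but fails for $b=6$, so no weight $(2,b)$ with
$b<n-T/2-1$ is stable for this orientation.
```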
\section{Introduction} Topological phases of quantum systems have been at the focus of intense studies in recent years.\cite{HasanKane} Many topological insulators are exotic band insulators where the energy bands are characterized by non-trivial topological quantum numbers. These topological quantum numbers reflect the non-trivial topological ground state structure, arising from the symmetries and the dimensionality of the system.\cite{TopInsPerTable} In a finite sample, the non-trivial topological structure of the ground state gives rise to topologically protected gapless edge states in the otherwise gapped system. These edge states are protected by topology and are robust against perturbations and disorder which do not break the underlying symmetries of the system. One of the simplest examples of an insulating state with a non-trivial topological structure is provided by the Integer Quantum Hall (QH) Effect.\cite{vonKlitzing} In a two dimensional electron gas, a homogeneous magnetic field splits the energy spectrum into Landau levels that are broadened into Landau bands by disorder. States within a Landau band are localized, except for a single critical, extended state at the center of each band.\cite{QHallDisorder} Each Landau band is characterized by a non-trivial topological invariant, the Chern number.\cite{ThoulessChern} It can be shown that the Chern number of a band -- apart from a universal pre\-fac\-tor $e^2/h$ -- equals the contribution of the band to the Hall conductance. As a result, if the Fermi energy lies between two Landau bands, then the Hall conductance is the sum of the Chern numbers associated with the filled Landau bands. The topological character of this insulating phase is also manifested through the emergence of chiral edge states:\cite{edge_exp1,edge_exp2,edge_exp3,edge_exp4} In fact, the total Chern number equals the number of chiral states. 
As mentioned above, in each Landau band there is a single delocalized state and an associated critical energy ($E_{c,i}$ in the $i$-th band). Topologically distinct QH phases are separated by these critical states,\cite{Halperin, QHallCriticalEnergies} and near them critical behavior is observed, with the localization length (the size of the localized wave functions) diverging as \begin{equation} \xi \sim \left|{E-E_{c,i}}\right|^{-\nu} \; \mathrm{.} \end{equation} Experimentally, a quantized Hall conductance is observed if the system size (or the inelastic scattering length, $L_{\rm{in}}$) is much larger than the localization length at the Fermi energy, $\xi(E_{F})$. \cite{footnote1} The characterization of the topological quantum phase transition at these critical energies was a challenging task. Based on nonlinear $\sigma$-model calculations, Pruisken and Khmelnitskii proposed a two parameter scaling theory, formulated in terms of the diagonal and off-diagonal elements of the dimensionless conductance tensor $g \equiv g_{xx}$ and $g_H \equiv g_{xy}$, respectively.\cite{Pruisken,Khmelnitskii} According to this theory, by increasing the system size $L$ (or lowering the temperature), the conductances follow the trajectories of a two dimensional flow diagram (see Fig. \ref{flowqualitatively}), \begin{equation} \label{renormflow_eq} \frac{d \ln g}{d \ln L} = \beta(g,g_H) \, ; \quad \frac{d \ln g_H}{d \ln L} = \beta_H(g,g_H) \, , \end{equation} determined by the universal beta functions $\beta(g,g_H)$ and $\beta_H(g,g_H)$. In this flow, attractive QH fixed points appear at integer dimensionless Hall conductances and vanishing diagonal conductance. Each of these fixed points corresponds to a QH phase and is associated with a plateau in the Hall conductance. Between these attractive fixed points, other, hyperbolic fixed points emerge: these correspond to the critical state and describe the transition between QH plateaus.
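The qualitative fixed-point structure of this flow can be illustrated numerically. The sketch below is a toy model only: the beta functions, the helper names `beta` and `flow`, and the constant `G_STAR` (a stand-in transition-point conductance) are invented for illustration and are not the physical, unknown beta functions of the theory; the flow is also written directly for $g$ and $g_H$ rather than their logarithms. The toy forms are chosen solely so that $g_H$ flows to the nearest integer ($g \to 0$ there) while half-integer $g_H$ hosts a hyperbolic fixed point at finite $g$.

```python
import numpy as np

# Toy transition-point diagonal conductance (illustrative, not physical).
G_STAR = 0.5

def beta(g, gH):
    """Toy beta functions mimicking the qualitative flow structure:
    at integer g_H the diagonal conductance decays (dg/dl = -g),
    at half-integer g_H it relaxes to G_STAR, and g_H itself drifts
    toward the nearest integer whenever g > 0."""
    c = np.cos(2 * np.pi * gH)
    dg = 0.5 * (G_STAR - g) * (1 - c) - 0.5 * g * (1 + c)
    dgH = -g * g * np.sin(2 * np.pi * gH)
    return dg, dgH

def flow(g, gH, steps=20000, dl=0.01):
    """Euler-integrate the toy flow in the scale variable l = ln L."""
    for _ in range(steps):
        dg, dgH = beta(g, gH)
        g, gH = g + dg * dl, gH + dgH * dl
    return g, gH
```

Trajectories started on either side of $g_H = 1/2$ separate and flow toward the two neighboring QH fixed points $(g, g_H) = (0, 0)$ and $(0, 1)$, mimicking the sketch in Fig. \ref{flowqualitatively}.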
\begin{figure} \includegraphics[width = 0.35 \textwidth]{flowqualitatively.pdf} \caption{\textit{(Color online)} Sketch of the proposed two parameter renormalization flow, in terms of the diagonal and Hall conductances $g$ and $g_H$, respectively. The attractive QH fixed points at integer dimensionless Hall conductances and vanishing diagonal conductance are denoted by cyan circles ($QH_{n}$ and $QH_{n+1}$). The transition fixed point ($T$) at a finite Hall and diagonal conductance is denoted by the red circle.}\label{flowqualitatively} \end{figure} While certain predictions of the universal scaling theory were confirmed experimentally, apart from some recent results for graphene,\cite{graphene_flow} a direct numerical verification of the two parameter renormalization flow for ordinary QH systems has never been established. In this paper, we demonstrate numerically the two parameter scaling theory, and estimate the relevant and irrelevant critical exponents. To this end, we investigate a system of noninteracting, spinless, charged fermions on a square lattice, as described by the Hamiltonian \begin{equation} \label{hamiltonian} H = \sum_{i} \varepsilon_i c_i^\dag c_i - \sum_{\langle i,j \rangle} t_{ij} c_i^\dag c_j + h.c. \; \textrm{.} \end{equation} Here $c_i^\dag$ and $c_i$ denote fermionic operators that create or annihilate a fermion on the lattice site $i$, respectively. The site energies $\varepsilon_i \in [-W/2, W/2]$ are uniformly and independently distributed and the external magnetic field is introduced by using the usual Peierls substitution\cite{Peierls} \begin{equation} \label{eq_peierls} t_{ij} = e^{i 2\pi A_{ij}} \; , \end{equation} with the lattice vector potential defined as \begin{equation} A_{ij} = \frac{e}{h} \int_{i}^j \mathbf{A} \cdot d \mathbf{l} \; . 
\end{equation} In this work, we introduce a new lattice gauge, which -- in contrast to the Landau gauge -- allows us to perform computations for small magnetic fields corresponding to a \emph{single} flux quantum through the system. We then perform exact diagonalization at various system sizes, $L$, while applying twisted boundary conditions with phases $\phi_x$ and $\phi_y$ in the $x$ and $y$ directions, respectively. By studying the sensitivity of the energy levels $E_\alpha = E_\alpha(\underline{\phi})$ and eigenstates $\ket{\alpha} = \ket{\alpha(\underline{\phi})}$ to the phase $\underline{\phi} = (\phi_x, \phi_y)$, we are able to determine $g(L)$ and $g_H(L)$, and reconstruct the renormalization group flow, Eq. (\ref{renormflow_eq}). We indeed find that, as predicted by Pruisken and Khmelnitskii, the flow exhibits stable QH fixed points with quantized values of $g_H$ and $g = 0$. Neighboring QH fixed points are separated by a critical point with finite Hall and diagonal conductances. The critical exponents extracted from the flow are in agreement with previous transfer matrix results.\cite{QHallExponent_Slevin} \subsection{Thouless formula and Hall conductance} The Kubo-Greenwood conductance formula\cite{KuboGreenwood} cannot be straightforwardly applied to a finite size system to extract its $T = 0$ conductance in the thermodynamic limit. Fortunately, however, the Hall and the diagonal conductances can both be related to the sensitivity of the states to the boundary conditions. The single particle eigenstates of Eq. (\ref{hamiltonian}) can be expanded as \begin{equation} \ket{\alpha} = \sum_{i} \alpha(i) \; c_i^\dag \ket{0} \; . \end{equation} Labeling for the moment each site $i$ by its coordinates $i \rightarrow x,y$, a twisted boundary condition is defined by wrapping the system on a torus with the periodicity condition \begin{equation} \alpha(x + nL,y + mL) = e^{i(n \phi_x + m \phi_y)} \alpha(x,y) \; . 
\end{equation} The phases $(\phi_x, \phi_y) = \underline{\phi}$ can be interpreted as magnetic fluxes pierced through the torus (and in its interior), while the external magnetic field pierces through the surface of the torus (see Fig. \ref{torus}). \begin{figure} \includegraphics[width = 0.3 \textwidth]{toruszok.pdf} \caption{\textit{(Color online)} \textbf{(a)} The phase $\phi_x$ can be interpreted as a magnetic flux pierced through the torus. \textbf{(b)} The external magnetic field pierces through the surface of the torus.}\label{torus} \end{figure} Solving the eigenvalue equation $H \ket{\alpha} = E_\alpha \ket{\alpha}$, one obtains the phase dependent eigenstates and eigenvalues $\ket{\alpha(\underline{\phi})}$ and $E_\alpha(\underline{\phi})$. In a seminal work, Thouless and Edwards conjectured a relation between the diagonal conductance and the mean absolute curvature of eigenenergies at the Fermi energy,\cite{ThoulessFormula} \begin{eqnarray} \label{thouless_formula} g \approx g_T & =& \overline{|c_T(\alpha)|}_{E_\alpha = E_F} \; \textrm{,} \nonumber \\ c_{T}(\alpha) & =& \frac{\pi}{\Delta(E_\alpha)} \frac{\partial^2 E_\alpha}{\partial \phi_x^2} \; \textrm{,} \end{eqnarray} with $\Delta(E_F)$ denoting the mean level spacing at the Fermi energy, and the overline indicating disorder averaging. Although this formula cannot be derived rigorously, it has been verified numerically for a wide range of disorder.\cite{Montambaux} The Hall conductance can be directly related to the phase dependence of the eigenstates.\cite{HallKubo} In a finite system, the average Hall conductance at $T=0$ is \begin{equation} \label{Hall_conductance} g_{H} = \overline{\sum_{E_\alpha < E_F} c_{H}(\alpha)} \, . 
\end{equation} Here $c_H(\alpha)$ denotes the Hall conductance of level $\ket{\alpha}$, and is given by \begin{equation} c_{H}(\alpha) = 2 \pi i \left( \scalarprod{\frac{\partial \alpha}{\partial \phi_y}}{ \frac{\partial \alpha}{\partial \phi_x}} - \scalarprod{\frac{\partial \alpha}{\partial \phi_x}}{ \frac{\partial \alpha}{\partial \phi_y}} \right) \; , \end{equation} the Berry curvature associated with $\ket{\alpha(\phi_x,\phi_y)}$. In the following, we shall use Eqs. (\ref{thouless_formula}) and (\ref{Hall_conductance}) to determine the dimensionless conductances and establish the flow. \subsection{Lattice gauge for small magnetic fields} In a finite size system with periodic or twisted boundary conditions, a homogeneous magnetic field cannot be arbitrary; the hopping matrix elements must respect the periodicity of the system, i.e., the hoppings at sites $(x + L,y)$ and $(x,y + L)$ must equal the one at $(x,y)$. \begin{figure} \includegraphics[width = 0.3 \textwidth]{gauge.pdf} \caption{\textit{(Color online)} Sketch of bond vector potentials on a $4 \times 4$ lattice with periodic boundary conditions; the circles denote equivalent sites.}\label{gauge} \end{figure} The complex phases of the hopping matrix elements are related to the magnetic vector potential through the Peierls substitution, Eq. (\ref{eq_peierls}). The periodicity of the system requires the complex phase of the hopping to change by an integer multiple of $2 \pi$ as the $x$ or $y$ coordinates are shifted by $L$, and imposes restrictions on the total flux through the system. The magnetic flux through a unit cell can be determined by summing the hopping phases around the cell, while the magnetic field in a cell can be defined as the flux divided by the area of the cell. Setting the lattice constant to $a = 1$, the magnetic field reads \begin{eqnarray} B_{(x+1/2\, ,\; y+1/2)} &=& \frac{h}{e} \left[ A_{(x,y)(x+1,y)} + A_{(x+1,y)(x+1,y+1)} + \right. \nonumber \\ &+& \left. 
A_{(x+1,y+1)(x,y+1)} + A_{(x,y+1)(x,y)} \right] \; . \nonumber \\ & & \end{eqnarray} Periodic boundary conditions imply that the total magnetic flux through the whole system is a multiple of the flux quantum $\Phi_0 = h / e$. Therefore, the minimal non-zero magnetic flux through the system (the surface of the torus in Fig. \ref{torus}.b) is $\Phi_0$. Most numerical calculations use the Landau gauge with $A^{\mathrm{Landau}}_{(x,y),(x,y+1)} = \frac{x}{L} \, m$ with $m$ an integer and $A^{\mathrm{Landau}}_{(x,y),(x+1,y)} = 0$, which results in a total flux \begin{equation} \Phi_{\mathrm{Landau}} = m \cdot L \, \Phi_0 \end{equation} through the system. The minimal non-zero flux in the Landau gauge is thus $L$ times larger than the flux quantum $\Phi_0$, and, consequently, the possible values of the magnetic field, $B_{\mathrm{Landau}} = \frac{\Phi_{\mathrm{Landau}}}{L^2} = \frac{m}{L} \, \Phi_0$, are restricted and rather large. Clearly, to perform efficient finite size scaling at a fixed magnetic field, one needs to construct a lattice gauge that can produce magnetic fields below the Landau-gauge limit, $B^{\min}_{\mathrm{Landau}} = \frac{1}{L} \Phi_0$. Here we propose to use a lattice gauge, as illustrated in Fig. \ref{gauge}, that realizes the minimal flux and the corresponding minimal magnetic field. Along the $y$ bonds, we use a Landau gauge \begin{equation} A_{(x,y)(x,y+1)} = m \frac{x}{L^2} \; , \phantom{ab} x \in \{1,\dots, L\} \; \mathrm{.} \end{equation} This is smaller by a factor of $1/L$ than the usual Landau gauge and, consequently, leads to an additional jump in the phase of the hopping between lattice sites $x=L$ and $x = 1$, $\Delta \varphi = - 2 \pi \, m \frac{1}{L}$. Such a jump would introduce a strong magnetic field at the boundaries if it were not compensated. 
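Anticipating the compensating boundary potential $A_{(x=L,y)(x=1,y)} = -m\,y/L$ introduced just below, the flux per plaquette of this gauge can be checked numerically. The following sketch is illustrative only (the sign and orientation conventions are our own assumptions, and this is not the code used for the actual computations):

```python
import numpy as np

def plaquette_fluxes(L, m=1):
    """Flux per plaquette (in units of Phi_0) for the minimal-flux gauge.

    y-bonds: A_{(x,y)(x,y+1)} = m*x/L**2 for columns x = 1..L;
    x-bonds: zero, except across the boundary x = L -> x = 1,
             where A_{(L,y)(1,y)} = -m*y/L compensates the phase jump.
    """
    Ay = np.array([[m * x / L**2] * L for x in range(1, L + 1)])  # Ay[x-1, y-1]
    Ax = np.zeros((L, L))                                         # Ax[x-1, y-1]
    Ax[L - 1, :] = [-m * y / L for y in range(1, L + 1)]          # boundary column

    flux = np.empty((L, L))
    for x in range(L):
        for y in range(L):
            xp, yp = (x + 1) % L, (y + 1) % L
            # counterclockwise circulation of A around the plaquette
            flux[x, y] = Ax[x, y] + Ay[xp, y] - Ax[x, yp] - Ay[x, y]
    return np.mod(flux, 1.0)   # hopping phases are defined modulo 1
```

Every plaquette then carries flux $m/L^2$, so the total flux through the torus is $m\,\Phi_0$, reducing to the minimal value $\Phi_0$ for $m=1$.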
Therefore, at the boundary between $x = L$ and $x = 1$, we apply a lattice vector potential in the $x$ direction\cite{footnote2} \begin{equation} A_{(x=L,y)(x=1,y)} = - m \frac{y}{L} \; \mathrm{.} \end{equation} One can verify that the magnetic field in each cell is $m \, \Phi_0 / L^2$, so that for $m=1$ the total magnetic flux is just the minimal non-zero flux $\Phi_0$. Within this new gauge, we can thus reach magnetic field values of $B = \frac{m}{L^2} \, \Phi_0$. This new gauge allows us to change the system size in relatively small steps while the magnetic field is kept fixed. \section{Results} \subsection{RG flow and critical behavior} Let us start by analyzing the critical behavior of the dimensionless conductances. The Thouless and Hall conductances were calculated for system sizes between $L=9$ and $L=33$, magnetic fields $B = 1 / 9$, $1 / 4$ and $1 / 16$ (in units of $\Phi_0 / a^2$), and disorder strengths $1 \le W \le 3.5$. A total of $\sim 5 \cdot 10^8$ eigenstates was computed for each system size, magnetic field, and disorder strength. The behavior of the ensemble averaged conductances as a function of the Fermi energy is shown in Fig. \ref{ConductancesEF}. The Hall step becomes sharper with increasing system size, and the peak in the Thouless conductance gets sharper as well. \begin{figure} \includegraphics[width = 0.4 \textwidth]{hallstep_gH.pdf} \caption{\textit{(Color online)} Thouless conductance (upper panel) and Hall conductance (lower panel) as a function of Fermi energy around the first Landau band for $B = \Phi_0 / 9$ and $W = 1$. System sizes are $L = 9 \, , \; 12 \, , \; 18 \, , \; 27$. \textit{Inset:} Hall conductance $g_H$ as a function of the Fermi energy, $E_F$, in the whole band. The shaded region highlights the first QH step, shown in the main panel. 
In the lower half of the band electron-like behavior, while in the upper half hole-like behavior is observed.}\label{ConductancesEF} \end{figure} Based upon the two parameter scaling theory, near the transition, the dimensionless Hall conductance is expected to scale with the system size as \begin{equation}\label{gHfit} g_H(L') - g_H^* \cong \left(\frac{L'}{L}\right)^{y}(g_H(L) - g_H^*) \; \textrm{,} \end{equation} where $y$ is the scaling dimension of the Hall conductance, and $g_H^*$ denotes the critical Hall conductance. In contrast, the Thouless conductance is predicted to be an irrelevant scaling variable on the critical surface, where \begin{equation}\label{gTfit} g_T(L') - g_T^* \cong \left(\frac{L'}{L}\right)^{-|y_2|}(g_T(L) - g_T^*) \; \textrm{,} \end{equation} with $y_2$ the scaling dimension of the leading irrelevant operator. We estimated the critical values of the Hall and Thouless conductances and the exponents $y$ and $y_2$ by performing a finite size scaling analysis, yielding: \begin{equation} g_H^* = 0.612 \pm 0.023 \phantom{a}, \phantom{bc} y = 0.351 \pm 0.082 \; , \end{equation} and \begin{equation} g_T^* = 0.386 \pm 0.011 \phantom{a} , \phantom{bc} |y_2| = 0.43 \pm 0.14 \; \textrm{.} \end{equation} The critical exponents $y$ and $y_2$ agree within our numerical accuracy with the values $y = 1/\nu \approx 0.385$ and $|y_2| \approx 0.4$, extracted through transfer matrix methods.\cite{QHallExponent_Slevin,QHallIrrelevant,Evers} \begin{figure} \includegraphics[width = 0.48 \textwidth]{flow.pdf} \caption{\textit{(Color online)} Two parameter renormalization flow extracted from finite size scaling for $B = \Phi_0 / 4$, $W \in 2 \dots 3.3$ and $L \in 12 \dots 24$. The arrows show the direction of increasing system size. 
The extrapolated position of the critical point is denoted by a red circle ($T$), while the zeroth and first QH fixed points are denoted by cyan circles ($QH_0$ and $QH_1$).}\label{flow} \end{figure} \begin{figure} \includegraphics[width = 0.42 \textwidth]{pcTgH.pdf} \caption{\textit{(Color online)} Distribution of the Thouless curvature (upper panel) and the level resolved Hall conductance (lower panel). We used $B=1/9$ and $W=1$, and varied the system size (see legend). For each system size we selected energy regions corresponding to a fixed $g_T= 0.2$, and determined the distributions for a large number of disorder realizations. Distributions for a fixed $g_T$ depend explicitly on the system size, $L$, but converge to a limiting distribution for large $L$ (see data for $L = 21$ and $27$).}\label{pcTgH} \end{figure} The system size driven $(g_T,g_H)$ flow is displayed in Fig. \ref{flow}. The qualitative agreement with the Pruisken-Khmelnitskii scaling is apparent. As mentioned before, the new gauge introduced above enables us to increase the system size in smaller steps, and to achieve a much better resolution. Nevertheless, it remains challenging to collect data from the exterior or the deep interior of the critical dome (the flipped ``U'' shape), because the trajectories always remain close to it. Interestingly, the flow is slightly asymmetrical: the critical point is closer to the $n=1$ QH state than to the trivial $n=0$ state. We do not have a firm explanation for this asymmetry. The lack of electron-hole symmetry could provide a natural explanation of such an asymmetry. However, the fact that the flows extracted for various fillings overlap within our numerical accuracy seems to rule out this possibility. The observed asymmetry may also be a peculiarity of lattice calculations or of non-universal finite size corrections. 
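To illustrate how the relevant exponent $y$ is extracted in practice from Eq. (\ref{gHfit}), the sketch below fits the power law on noiseless synthetic data generated with the critical values quoted above ($g_H^* = 0.612$, $y = 0.351$); the amplitude and the list of system sizes are arbitrary illustrative choices, not our measured data:

```python
import numpy as np

# illustrative values taken from the text: g_H^* = 0.612, y = 0.351
gH_star, y_true, amp = 0.612, 0.351, 0.05

L = np.array([9, 12, 15, 18, 21, 24, 27, 33], dtype=float)
gH = gH_star + amp * L**y_true          # noiseless synthetic data

# near criticality, log|g_H(L) - g_H^*| is linear in log L with slope y
y_fit, log_amp = np.polyfit(np.log(L), np.log(np.abs(gH - gH_star)), 1)
```

On real data the deviation $g_H(L) - g_H^*$ carries statistical noise, and $g_H^*$ itself must be fitted simultaneously, but the log-log regression step is the same.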
\subsection{Curvature distributions} The presence of two characteristic scaling variables is also clear from a careful analysis of the distribution of level resolved Hall conductances and Thouless curvatures. Single parameter scaling\cite{GangOfFour} would imply that these distributions are characterized by a single dimensionless parameter, which we can choose to be the Thouless conductance, $g_T = g_T(W,E,L,B,\dots)$. To test the single parameter scaling hypothesis, we selected regions in the energy spectra with a fixed Thouless conductance, $g_T$ (i.e., fixed average absolute curvature $\overline{| c_T |}$), and determined the distributions $p(c_T | g_T, L, W, B)$ and $p(c_H | g_T, L, W, B)$.\cite{footnote3} We found that the single parameter scaling hypothesis is clearly violated for small system sizes; both $p(c_T)$ and $p(c_H)$ depend explicitly on the system size, $L$. The explicit $L$-dependence is more pronounced in the distribution of the level resolved Hall conductance, but can also be seen in the distribution of the level curvatures. Upon increasing $L$, however, the distributions converge to a limiting distribution (see data for $L = 21$ and $27$ in Fig. \ref{pcTgH}). This behavior can be understood in terms of the two parameter scaling theory. According to the latter, the distributions $p(c_T)$ and $p(c_H)$ depend on two dimensionless parameters, $g_T$ and $g_H$: $p(c_T) = p(c_T | g_T, g_H)$ and $p(c_H) = p(c_H | g_T, g_H)$. For a given value of $g_T$, increasing $L$ moves the corresponding $(g_T, g_H)$ point towards the flipped ``U'' envelope in the $(g_T, g_H)$ plane. This means that for systems with large $L$, $g_H$ becomes effectively a function of $g_T$, $g_H \rightarrow g_H(g_T)$, and therefore $p(c_T)$ depends solely on $g_T$. 
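The level resolved Hall conductance $c_H(\alpha)$ appearing above is a Berry curvature; for a filled, gapped band, its sum over a discretized phase (or momentum) torus yields an integer Chern number. As an illustration, the sketch below evaluates it with the standard Fukui-Hatsuda-Suzuki link-variable method for the clean two-band Qi-Wu-Zhang model -- a toy example, not the disordered Hamiltonian studied in the text:

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def qwz(kx, ky, u):
    """Qi-Wu-Zhang two-band Hamiltonian (clean toy model)."""
    return np.sin(kx) * SX + np.sin(ky) * SY + (u + np.cos(kx) + np.cos(ky)) * SZ

def chern_number(u, N=24):
    """Fukui-Hatsuda-Suzuki link-variable Chern number of the lower band."""
    ks = 2 * np.pi * np.arange(N) / N
    v = np.empty((N, N, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            v[i, j] = np.linalg.eigh(qwz(kx, ky, u))[1][:, 0]  # lower band
    c = 0.0
    for i in range(N):
        for j in range(N):
            ip, jp = (i + 1) % N, (j + 1) % N
            # Berry phase of one plaquette from the product of link variables;
            # arbitrary eigenvector phases cancel in the product
            w = (np.vdot(v[i, j], v[ip, j]) * np.vdot(v[ip, j], v[ip, jp])
                 * np.vdot(v[ip, jp], v[i, jp]) * np.vdot(v[i, jp], v[i, j]))
            c += np.angle(w)
    return round(c / (2 * np.pi))
```

For $0<|u|<2$ the lower band carries $|C| = 1$ (the sign depends on conventions), while for $|u|>2$ it is trivial, $C = 0$.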
\begin{figure} \includegraphics[width = 0.42 \textwidth]{pcTfit.pdf} \caption{\textit{(Color online)} Curvature distributions in the vicinity of the transition fixed point (upper panel, $g_T \approx 0.35, g_H \approx 0.55$), and close to the Quantum Hall fixed points (lower panel, $g_T \approx 0.003$, $g_H \approx 0.001$ ). Continuous lines denote modified Cauchy (top) and lognormal (bottom) fits.}\label{pcTfit} \end{figure} The distributions $p(c_T)$ and $p(c_H)$ vary considerably within the $(g_T, g_H)$ plane (see Fig. \ref{pcTfit}). Near the transition point, the distribution of the dimensionless Thouless curvatures can be well fitted by a modified Cauchy distribution, \begin{equation} p(c_T) \propto \frac{1}{(c_T^\kappa+a)^{(2+\beta)/\kappa}} \; \textrm{,} \label{curv_distr} \end{equation} with the constant $\beta=2$ characterizing the unitary ensemble, and $\kappa$ a symmetry class dependent anomalous dimension. Such a distribution has been conjectured for the critical curvature distribution in orthogonal and unitary ensembles, and verified numerically for the orthogonal case.\cite{Kravtsov,footnote4} By fitting the numerically obtained distributions, we extract an exponent \begin{equation} \kappa = 1.603 \pm 0.026 \; . \end{equation} This value is close to the exponent $\kappa = 2$, predicted for disordered metallic systems in the unitary ensemble by random matrix theory.\cite{vonOppen_GUE,vonOppen_GUE2} In fact, although a modified Cauchy distribution is needed to reach a high quality fit of the small curvature part of the distribution, the random matrix expression ($\kappa = 2$) also provides an acceptable fit of the data. 
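For orientation, note that for $\kappa = 2$ the modified Cauchy form of Eq. (\ref{curv_distr}) with $\beta = 2$ reduces, upon normalization, to the random matrix (GUE) curvature distribution; the normalization is a one-line computation:

```latex
p(c_T) \,\propto\, \frac{1}{(c_T^{2}+a)^{2}} \,, \qquad
\int_{-\infty}^{\infty} \frac{dc}{(c^{2}+a)^{2}} = \frac{\pi}{2\,a^{3/2}}
\;\;\Longrightarrow\;\;
p(c_T) = \frac{2\,a^{3/2}}{\pi}\,\frac{1}{(c_T^{2}+a)^{2}} \,.
```

For $a = 1$ this is the universal curvature distribution of the Gaussian unitary ensemble quoted in the text.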
Close to the attractive Quantum Hall fixed points, on the other hand, the dimensionless curvature is lognormally distributed to good accuracy, a behavior characteristic of strongly localized states.\cite{Titov} \section{Conclusion} In this work, we investigated disordered Quantum Hall systems by performing numerical computations in a torus geometry. We introduced a new magnetic gauge, which enabled us to reach the smallest magnetic field allowed by the periodic boundary conditions, $B = \frac{1}{L^2} \frac{h}{e}$. With this new gauge, we were able to increase the system size in smaller steps, and could thus perform efficient finite size scaling. We determined the boundary condition (phase) dependence of the eigenstates and eigenenergies, and computed from these the diagonal and Hall conductances. We established the system size driven renormalization flow of the dimensionless conductances, and found it to be consistent with the theoretical predictions of Pruisken and Khmelnitskii. We identified the Quantum Hall fixed points, responsible for the quantized values of the Hall conductance, and the critical fixed point characterizing the transition between neighboring Quantum Hall phases. In the vicinity of this critical point, the Hall conductance is found to be a relevant scaling variable, while the diagonal conductance becomes irrelevant. We estimated the critical exponents of the transition fixed point, and found them to agree with the values calculated using transfer matrix methods. We investigated the distributions of level curvatures, and observed a clear violation of one parameter scaling, demonstrating the necessity of a second parameter. For large system sizes, however, the system flows towards a critical line, and single parameter scaling is found to be restored, in agreement with the Pruisken-Khmelnitskii scaling theory. 
Near the critical point, the distribution of the Thouless curvature is found to agree with the predictions of random matrix theory (Gaussian Unitary Ensemble). Close to the Quantum Hall fixed points the curvature distribution is lognormal. This research has been supported by the Hungarian Scientific Research Fund OTKA under Grant Nos. K105149 and CNK80991. We also acknowledge the support of the Helmholtz Virtual Institute ``New states of matter and their excitations'' as well as the DFG Schwerpunkt 1666 Topological Insulators, and a Mercator Guest Professorship.
\section{Introduction} \label{SEC:intro} Recall the classical Caffarelli-Kohn-Nirenberg inequality \cite{CKN84}: \begin{thm}\label{clas_CKN} Let $n\in\mathbb{N}$ and let $p_{1}$, $p_{2}$, $p_{3}$, $a$, $b$, $d$, $\delta\in \mathbb{R}$ be such that $p_{1},p_{2}\geq1$, $p_{3}>0$, $0\leq\delta\leq1$, and \begin{equation}\label{clas_CKN0} \frac{1}{p_{1}}+\frac{a}{n},\, \frac{1}{p_{2}}+\frac{b}{n},\, \frac{1}{p_{3}}+\frac{c}{n}>0, \end{equation} where $c=\delta d + (1-\delta) b$. Then there exists a positive constant $C$ such that \begin{equation}\label{clas_CKN1} \||x|^{c}f\|_{L^{p_{3}}(\mathbb R^{n})}\leq C \||x|^{a}|\nabla f|\|^{\delta}_{L^{p_{1}}(\mathbb R^{n})} \||x|^{b}f\|^{1-\delta}_{L^{p_{2}}(\mathbb R^{n})} \end{equation} holds for all $f\in C_{0}^{\infty}(\mathbb R^{n})$, if and only if the following conditions hold: \begin{equation}\label{clas_CKN2} \frac{1}{p_{3}}+\frac{c}{n}=\delta \left(\frac{1}{p_{1}}+\frac{a-1}{n}\right)+(1-\delta)\left(\frac{1}{p_{2}}+\frac{b}{n}\right), \end{equation} \begin{equation}\label{clas_CKN3} a-d\geq 0 \quad \textrm{if} \quad \delta>0, \end{equation} \begin{equation}\label{clas_CKN4} a-d\leq 1 \quad \textrm{if} \quad \delta>0 \quad \textrm{and} \quad \frac{1}{p_{3}}+\frac{c}{n}=\frac{1}{p_{1}}+\frac{a-1}{n}. \end{equation} \end{thm} It is a natural problem, also important for applications, to find an analogue of the above Caffarelli-Kohn-Nirenberg inequalities on Lie groups or on Riemannian manifolds. On Lie groups, we refer, for example, to \cite{ZHD14} for Heisenberg groups, to \cite{Yac18} for Lie groups of polynomial volume growth, to \cite{RSY17_strat} and to \cite{RS17_strat} for stratified groups, to \cite{RSY17_hom1}, to \cite{RSY17_hom2} and to \cite{ORS17_hom} for general homogeneous groups. 
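As a quick illustration of Theorem \ref{clas_CKN}, take $\delta=1$, $a=0$ and $p_{1}=p_{3}=p$ (the pair $(p_{2},b)$ then plays no role). Condition \eqref{clas_CKN2} forces $c=d=-1$, conditions \eqref{clas_CKN3} and \eqref{clas_CKN4} hold since $a-d=1$, and \eqref{clas_CKN0} requires $p<n$; one recovers the classical Hardy inequality:

```latex
% delta = 1, a = 0, p_1 = p_3 = p  ==>  c = d = -1:
\left\| \frac{f}{|x|} \right\|_{L^{p}(\mathbb R^{n})}
\leq C\, \|\nabla f\|_{L^{p}(\mathbb R^{n})}\,,
\qquad f\in C_{0}^{\infty}(\mathbb R^{n}),\quad 1\leq p<n,
```

whose well-known sharp constant, $C=p/(n-p)$, is, however, not provided by the theorem.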
On Riemannian manifolds, in \cite{CX04} and \cite{Mao15} the authors, assuming that Caffarelli-Kohn-Nirenberg type inequalities hold, investigated the geometric property related to the volume of a geodesic ball on an $n$-dimensional $(n\geq3)$ complete open manifold with non-negative Ricci curvature and on an $n$-dimensional $(n\geq3)$ complete and noncompact smooth metric measure space with non-negative weighted Ricci curvature, respectively. Recently, the following Caffarelli-Kohn-Nirenberg type inequalities have been obtained on {\em Cartan-Hadamard manifolds} $M$, that is, complete simply connected manifolds of non-positive sectional curvature, in \cite{Ngu17}: Let $n\geq2$, $p>1$, $r>0$ and $\alpha,\beta,\gamma\in\mathbb{R}$ with $\gamma=(1+\alpha)/r+(p-1)\beta/(pr)$ be such that \begin{equation}\label{CKN_Nguyen1} \frac{1}{r}-\frac{\gamma}{n}>0,\;\frac{1}{p}-\frac{\alpha}{n}>0,\;1-\frac{\beta}{n}>0. \end{equation} Then for all $f\in C_{0}^{\infty}(M)$ and $r\neq 1$ we have \begin{equation}\label{CKN_Nguyen2} \int_{M}\frac{|f(x)|^{r}}{(\rho(x))^{\gamma r}}dx\leq \frac{r}{n-\gamma r} \left(\int_{M}\frac{|\partial_{\rho}f(x)|^{p}}{(\rho(x))^{\alpha p}}dx\right)^{\frac{1}{p}} \left(\int_{M}\frac{|f(x)|^{\frac{p(r-1)}{p-1}}}{(\rho(x))^{\beta}}dx\right)^{\frac{p-1}{p}}, \end{equation} where $\partial_{\rho}$ denotes the radial derivative along geodesic curves, and $\rho(x)={\rm dist}(x,x_{0})$ is the geodesic distance to any fixed point $x_{0}$ of $M$. 
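In the Euclidean model case $M=\mathbb R^{n}$, $\rho(x)=|x|$, the relation $\gamma=(1+\alpha)/r+(p-1)\beta/(pr)$ is precisely the dimensional-balance condition for \eqref{CKN_Nguyen2} under the scaling $f\mapsto f(\lambda x)$. The sketch below (illustrative parameter values of our own choosing, exact rational arithmetic) verifies this balance:

```python
from fractions import Fraction as F

# illustrative admissible values: n = 3, p = 2, r = 4, alpha = 0, beta = 1
n, p, r, alpha, beta = F(3), F(2), F(4), F(0), F(1)
gamma = (1 + alpha) / r + (p - 1) * beta / (p * r)

# the admissibility conditions of the inequality
assert F(1) / r - gamma / n > 0
assert F(1) / p - alpha / n > 0
assert 1 - beta / n > 0

# under f -> f(lambda x), each integral scales as a power of lambda:
lhs_exp = gamma * r - n                                   # left-hand side
rhs_exp = (1 + alpha - n / p) + (beta - n) * (p - 1) / p  # right-hand side
assert lhs_exp == rhs_exp   # dimensional balance, forced by the choice of gamma
```

Any other admissible choice of $(n,p,r,\alpha,\beta)$ passes the same check, since the balance is an algebraic identity in $\gamma$.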
Now, on the hyperbolic space $\mathbb{H}^{n}$ with $n\geq2$, let us recall another recent result on Caffarelli-Kohn-Nirenberg type inequalities for radially symmetric functions \cite{ST17}: Let $2\leq p\leq \infty$. Then there exists a positive constant $c_{r}=c_{r}(n,p)$ such that for all $f\in W^{1,2}_{0,rad}(\mathbb{H}^{n})$ we have \begin{equation}\label{ST_CKN1} \int_{\mathbb{H}^{n}}|\nabla_{g}f|^{2}dV_{g}\geq c_{r}(n,p)\left(\int_{\mathbb{H}^{n}}A_{p}|f|^{p}dV_{g}\right)^{\frac{2}{p}}, \end{equation} where $$A_{p}(r)=\frac{(f(r))^{2}(1-r^{2})^{2}}{(G(r))^{\frac{p+2}{2}}},\;1\leq p<\infty,\;A_{\infty}(r)=\frac{1}{\sqrt{G(r)}},$$ $dV_{g}$ is the volume form (see Section \ref{SEC:prelim}), and $G(r)=\int_{r}^{1}\frac{(1-t^{2})^{n-2}}{t^{n-1}}dt$ (note that $G/(n\omega_{n-1})$ is the fundamental solution of the hyperbolic Laplacian). Here, $W^{1,2}_{0,rad}(\mathbb{H}^{n})$ is the subspace of radially symmetric functions of $W^{1,2}_{0}(\mathbb{H}^{n})$. Moreover, $c_{r}(n,2)=1/16$ and $c_{r}(n,\infty)=2^{n-2}\omega_{n-1}$, and $c_{r}(n,p)\leq c_{r}(n,2)(c_{r}(n,\infty))^{p-2}$ for $2<p<\infty$. In this paper, we introduce a class of new Caffarelli-Kohn-Nirenberg type inequalities with sharp constants on hyperbolic spaces. We do not assume that any of the functions involved are radially symmetric. Moreover, our method allows us to show their equivalence with Trudinger-Moser inequalities. Using this method, we actually prove Hardy type inequalities on a complete, simply connected Riemannian manifold $M$ with negative curvature, and on the hyperbolic space $\mathbb{H}^{n}$ for $n\geq2$, with sharp constants. Furthermore, we show extended versions of weighted Trudinger-Moser inequalities on $\mathbb{H}^{n}$ for $n\geq2$ with sharp constants. We refer to Section \ref{SEC:prelim} for precise definitions. Now, let us briefly state our main results: Let $\omega_{n-1}$ be the area of the surface of the unit $n$-ball. 
Then we have \begin{itemize} \item {\bf (Hardy type inequalities on $M$)} Let $M$ be a complete, simply connected Riemannian manifold of dimension $n\geq2$ with negative curvature. Let $0\leq\beta<n$. Then for any $n\leq q<\infty$ there exists a positive constant $C_{1}=C_{1}(n, \beta, q, M)$ such that \begin{equation}\label{Hardy_manif1_intro} \left\|\frac{f}{\rho^{\frac{\beta}{q}}}\right\|_{L^{q}(M)}\leq C_{1}q^{1-1/n}\|f\|_{W^{1,n}(M)} \end{equation} holds for all functions $f\in W_{0}^{1,n}(M)$, and such that $\underset{q\rightarrow\infty}{\rm limsup\;}C_{1}(n, \beta, q, M)<\infty$. The asymptotically sharp constant for \eqref{Hardy_manif1_intro} is given in Remark \ref{rem_B1}. Here, $\rho(x)={\rm dist}(x,x_{0})$ is the geodesic distance to any fixed point $x_{0}$ of $M$. Moreover, the Hardy type inequalities \eqref{Hardy_manif1_intro} with relation \eqref{equiv_identity_Hardy} are equivalent to the weighted Trudinger-Moser inequalities \eqref{Trudinger_manif1} with $0<\alpha<\alpha_{\beta}$. \item {\bf (Hardy type inequalities on $\mathbb{H}^{n}\;(n\geq2)$)} Let $0\leq\beta<n$. Then for any $n\leq q<\infty$ there exists a positive constant $C_{2}=C_{2}(n, \beta, q)$ such that \begin{equation}\label{Hardy_hyper1_intro} \left\|\frac{f}{\rho^{\frac{\beta}{q}}}\right\|_{L^{q}(\mathbb{H}^{n})}\leq C_{2}q^{1-1/n}\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})} \end{equation} holds for all functions $f\in W_{0}^{1,n}(\mathbb{H}^{n})$, and such that $\underset{q\rightarrow\infty}{\rm limsup\;}C_{2}(n, \beta, q)<\infty$. The asymptotically sharp constant for \eqref{Hardy_hyper1_intro} in the sense of Remark \ref{rem_B2} is given in Theorem \ref{Hardy_hyper_thm}. Furthermore, the Hardy type inequalities \eqref{Hardy_hyper1_intro} with relation \eqref{equiv_identity_Hardy_hyper} are equivalent to the weighted Trudinger-Moser inequalities \eqref{Trudinger_hyper1} with $0<\alpha<\alpha_{\beta}$. 
\item {\bf (Uncertainty type principle on $\mathbb{H}^{n}\;(n\geq2)$)} Let $0\leq\beta<n$. Then we have \begin{equation}\label{uncer_1_intro} \begin{split} \left(\int_{\mathbb{H}^{n}}|\nabla_{g} f(x)|^{n}dV_{g}\right)^{1/n}&\left(\int_{\mathbb{H}^{n}}\rho^{q'}|f(x)|^{q'}dV_{g}\right)^{1/q'}\\ &\geq C_{2}^{-1}q^{1/n-1}\int_{\mathbb{H}^{n}}\rho^{\frac{q-\beta}{q}}|f(x)|^{2}dV_{g} \end{split} \end{equation} holds for all functions $f\in W_{0}^{1,n}(\mathbb{H}^{n})$ and any $n\leq q<\infty$, where $1/q+1/q'=1$, and $C_{2}$ is the constant from \eqref{Hardy_hyper1_intro}. \item {\bf (Weighted Trudinger-Moser inequalities on $\mathbb{H}^{n}\;(n\geq2)$ I)} Let $0\leq\beta<n$ and let $0<\alpha<\alpha_{\beta}$ with $\alpha_{\beta}=n\omega_{n-1}^{1/(n-1)}(1-\beta/n)$. Then there exists a positive constant $\widetilde{C_{5}}=\widetilde{C_{5}}(\beta, n, \alpha)$ such that \begin{equation}\label{Trudinger_GN2_hyper1_analog_intro} \begin{split} \int_{\mathbb{H}^{n}}&\frac{1}{(1+|f(x)|)^{n/(n-1)}\rho^{\beta}}\left(\exp(\alpha|f(x)|^{n/(n-1)})\right.\\&\left.-\sum_{k=0}^{n-2}\frac{ \alpha^{k}|f(x)|^{kn/(n-1)}}{k!}\right)dV_{g}\leq \widetilde{C_{5}}\int_{\mathbb{H}^{n}} \frac{|f(x)|^{n}}{\rho^{\beta}}dV_{g} \end{split} \end{equation} holds for all functions $f\in W_{0}^{1,n}(\mathbb{H}^{n})$ with $\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}\leq1$. Moreover, the power $n/(n-1)$ in the denominator is sharp. \item {\bf (Weighted Trudinger-Moser inequalities on $\mathbb{H}^{n}\;(n\geq2)$ II)} Let $0\leq \beta_{1}<n$ and $\beta_{2}\in \mathbb{R}$. Let $0<\alpha<\alpha_{\beta_{1}}$ with $\alpha_{\beta_{1}}=n\omega_{n-1}^{1/(n-1)}(1-\beta_{1}/n)$. Let $\delta$ be as in \eqref{delta}. 
Then there exists a positive constant $\widetilde{C_{3}}=\widetilde{C_{3}}(n, \alpha, \beta_{1}, \beta_{2},\delta)$ such that \begin{equation}\label{analog_LT13_1_intro} \begin{split} \int_{\mathbb{H}^{n}}\frac{1}{\rho^{\beta_{1}}}\left(\exp(\alpha|f(x)|^{n/(n-1)})-\sum_{k=0}^{n-2}\right.&\left.\frac{ \alpha^{k}|f(x)|^{kn/(n-1)}}{k!}\right)dV_{g} \\&\leq \widetilde{C_{3}}\left(\int_{\mathbb{H}^{n}} \frac{|f(x)|^{n}}{\rho^{\beta_{2}}}dV_{g}\right)^{1-\delta} \end{split} \end{equation} holds for all functions $f\in W_{0}^{1,n}(\mathbb{H}^{n})$ with $\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}\leq1$. Moreover, the constant $\alpha_{\beta_{1}}$ is sharp. \item {\bf (Caffarelli-Kohn-Nirenberg inequalities on $\mathbb{H}^{n}\;(n\geq2)$ I)} Let $b$, $c\in\mathbb{R}$, $0<p_{3}<\infty$ and $1<p_{2}<\infty$. Let $\delta\in(0,1]\cap\left(\frac{p_{3}-p_{2}}{p_{3}},1\right]$. Let $0\leq b(1-\delta)-c<n(1/p_{3}-(1-\delta)/p_{2})$ and $n\leq\frac{\delta p_{2}p_{3}}{p_{2}-(1-\delta)p_{3}}$. Then we have \begin{equation}\label{CKN_1_intro} \|\rho^{c}f\|_{L^{p_{3}}(\mathbb{H}^{n})} \leq \widehat{C_{3}}\|\nabla_{g} f\|^{\delta}_{L^{n}(\mathbb{H}^{n})} \|\rho^{b}f\|^{1-\delta}_{L^{p_{2}}(\mathbb{H}^{n})} \end{equation} for all functions $f\in W_{0}^{1,n}(\mathbb{H}^{n})$, where $\widehat{C_{3}}=\widehat{C_{3}}(p_{2},p_{3},b,c,n,\delta)$ is given in Theorem \ref{CKN_thm}. \item {\bf (Caffarelli-Kohn-Nirenberg inequalities on $\mathbb{H}^{n}\;(n\geq2)$ II)} Let $0\leq\beta_{1}<n$ and $\beta_{2}\in\mathbb{R}$. Let $\delta$ be as in \eqref{delta}. 
Then for any $n\leq q<\infty$ there exists a positive constant $C_{3}=C_{3}(n, \beta_{1}, \beta_{2}, q, \delta)$ such that \begin{equation}\label{GN_hyper1_intro} \left\|\frac{f}{\rho^{\frac{\beta_{1}}{q}}}\right\|_{L^{q}(\mathbb{H}^{n})}\leq C_{3}q^{1-1/n}\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}^{1-\frac{n(1-\delta)}{q}} \left\|\frac{f}{\rho^{\frac{\beta_{2}}{n}}}\right\|_{L^{n}(\mathbb{H}^{n})}^{\frac{n(1-\delta)}{q}} \end{equation} holds for all functions $f\in W_{0}^{1,n}(\mathbb{H}^{n})$, and such that $\underset{q\rightarrow\infty}{\rm limsup\;}C_{3}(n, \beta_{1}, \beta_{2}, q, \delta)<\infty$. The asymptotically sharp constant for \eqref{GN_hyper1_intro} in the sense of Remark \ref{rem_B3} is given in Theorem \ref{GN_hyper_thm}. Furthermore, the Caffarelli-Kohn-Nirenberg type inequalities \eqref{GN_hyper1_intro} with relation \eqref{equiv_identity_GN_hyper} are equivalent to the weighted Trudinger-Moser inequalities \eqref{analog_LT13_1_intro}. \item {\bf (Caffarelli-Kohn-Nirenberg inequalities on $\mathbb{H}^{n}\;(n\geq2)$ III)} Let $0\leq\beta<n$. Then for any $n\leq q<\infty$ there exists a positive constant $C_{5}=C_{5}(n, \beta, q)$ such that \begin{equation}\label{GN2_hyper1_intro} \left\|\frac{f}{\rho^{\frac{\beta}{q}}(1+|f|)^{\frac{n'}{q}} }\right\|_{L^{q}(\mathbb{H}^{n})}\leq C_{5}q^{1-1/n} \|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}^{1-n/q} \left\|\frac{f}{\rho^{\frac{\beta}{n}}}\right\|_{L^{n}(\mathbb{H}^{n})}^{n/q} \end{equation} holds for all functions $f\in W_{0}^{1,n}(\mathbb{H}^{n})$, and such that $\underset{q\rightarrow\infty}{\rm limsup\;}C_{5}(n, \beta, q)<\infty$. The asymptotically sharp constant for \eqref{GN2_hyper1_intro} in the sense of Remark \ref{rem_LT16_2} is given in Theorem \ref{GN3_hyper_thm}. Moreover, the Caffarelli-Kohn-Nirenberg type inequalities \eqref{GN2_hyper1_intro} with relation \eqref{equiv_identity_GN3_hyper} are equivalent to the weighted Trudinger-Moser inequalities \eqref{Trudinger_GN2_hyper1_analog_intro}. 
\end{itemize} We note that the obtained Caffarelli-Kohn-Nirenberg type inequalities are not covered by \eqref{CKN_Nguyen2} and \eqref{ST_CKN1}. For example, the obtained inequality \eqref{GN_hyper1_intro}, after the change of variables $1-n/q=n/t$ for $q>n$, has the following form \begin{multline}\label{GN_hyper1_intro22} \int_{\mathbb{H}^{n}}\frac{|f(x)|^{\frac{tn}{t-n}}}{\rho^{\beta_{1}}}dV_{g}\\ \leq C_{3}^{\frac{tn}{t-n}}\left(\frac{tn}{t-n}\right)^{\frac{t(n-1)}{t-n}} \left(\int_{\mathbb{H}^{n}}|\nabla_{g}f(x)|^{n}dV_{g}\right)^{\frac{t}{t-n}-(1-\delta)} \left(\int_{\mathbb{H}^{n}}\frac{|f(x)|^{n}}{\rho^{\beta_{2}}}dV_{g}\right)^{1-\delta}, \end{multline} and holds for all functions $f\in W_{0}^{1,n}(\mathbb{H}^{n})$. Moreover, the constant $B_{3}^{\frac{tn}{t-n}}$ is asymptotically sharp for \eqref{GN_hyper1_intro22} in the sense of Remark \ref{rem_B3}, where $B_{3}$ is given in Theorem \ref{GN_hyper_thm}. Here, we see that \eqref{GN_hyper1_intro22} is not covered by \eqref{CKN_Nguyen2}, actually being completely different from \eqref{CKN_Nguyen2} in terms of parameters. We also note that \eqref{CKN_1_intro} gives different inequalities than \eqref{CKN_Nguyen2}. Indeed, for example when $p_{2}=p_{3}=n$ if we take $1+b\leq 0$ or $1+c\leq 0$ in \eqref{CKN_1_intro}, then the condition \eqref{CKN_Nguyen1} fails: $$\frac{1}{r}-\frac{\gamma}{n}=\frac{1}{p_{3}}+\frac{p_{3}c}{rn}=\frac{1+c}{n}\leq0\;\;\text{or}\;\;1-\frac{\beta}{n}=1+\frac{bp_{2}}{n} =1+b\leq0.$$ We also note that the obtained weighted Trudinger-Moser inequalities \eqref{Trudinger_GN2_hyper1_analog_intro} and \eqref{analog_LT13_1_intro} generalise the known results in \cite[Theorem 1]{LC17} and \cite[Theorem 1.3]{LT13}, respectively. Of course there exist a variety of different functional and other inequalities on hyperbolic spaces. 
For example, we can refer to \cite{RS16_BMS} for some spectral and isoperimetric inequalities for different classes of integral operators on $\mathbb{H}^{n}$, as well as to other works referred to in this paper. This paper is organised as follows. In Section \ref{SEC:prelim} we briefly recall the main concepts of Riemannian manifolds with negative curvature and hyperbolic spaces. The Hardy type inequalities with sharp constants on $M$ and $\mathbb{H}^{n}$ are discussed in Section \ref{SEC:Hardy_manif} and in Section \ref{SEC:Hardy_hyper}, respectively. In Section \ref{SEC:CKN} we introduce Caffarelli-Kohn-Nirenberg type and weighted Trudinger-Moser inequalities with sharp constants. \section{Preliminaries} \label{SEC:prelim} In this section we briefly review some main concepts of Riemannian manifolds with negative curvature and refer to \cite{GHL04}, \cite{Li93} and \cite{SY94} for more detailed information. Let $M$ be an $n$-dimensional complete Riemannian manifold with the Riemannian metric $$ds^{2}=\sum g_{ij}dx^{i}dx^{j}$$ for the local coordinate system $\{x^{i}\}_{1\leq i\leq n}$, where $g=\det(g_{ij})$. Let $dV_{g}$ be the volume form associated to the metric $g$, and let $\nabla_{g} f$ denote the gradient with respect to the metric $g$. Let $K$ be the sectional curvature on $M$. We say that $M$ has {\em negative curvature} if $K\leq 0$ along every plane section at every point of $M$. In this case, $M$ contains no points conjugate to any point $x_{0}$ of $M$. If $M$ is simply connected, then the exponential mapping $$\exp_{x_{0}}:T_{x_{0}}M\rightarrow M$$ is a diffeomorphism, where $T_{x_{0}}M$ is the tangent space to $M$ at a point $x_{0}$. We will work on a complete, simply connected Riemannian manifold with negative curvature. Let $x_{0}\in M$. Then $\rho(x)={\rm dist}(x,x_{0})$ is smooth on $M\backslash\{x_{0}\}$ and satisfies $$|\nabla_{g} \rho(x)|=1,\;\;x\in M\backslash\{x_{0}\},$$ where ${\rm dist}(\cdot,\cdot)$ is the geodesic distance.
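Let us also record, only as a consistency check, the standard reason behind the eikonal identity $|\nabla_{g}\rho(x)|=1$. If $\gamma:[0,\rho(x)]\rightarrow M$ is the unit-speed geodesic from $x_{0}$ to $x$, then by the Gauss lemma $$\langle\nabla_{g}\rho(\gamma(t)),\dot{\gamma}(t)\rangle_{g}=\frac{d}{dt}\rho(\gamma(t))=1,$$ so that $|\nabla_{g}\rho|\geq1$ along $\gamma$, while $|\nabla_{g}\rho|\leq1$ on $M\backslash\{x_{0}\}$ since $\rho$ is $1$-Lipschitz with respect to the geodesic distance.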
In particular, we will also work on the Poincar\'{e} ball model (coordinate map) of the hyperbolic space $\mathbb{H}^{n}$ ($n\geq2$), that is, when $M$ has constant curvature equal to $-1$. This is the unit ball $B$ in $\mathbb R^{n}$ centered at the origin and equipped with the Riemannian metric $$ds^{2}=\frac{4\sum_{i=1}^{n}dx_{i}^{2}}{(1-|x|^{2})^{2}},$$ where $|\cdot|$ is the Euclidean distance. The Riemannian measure, the gradient and the hyperbolic distance in the Poincar\'{e} ball model are, respectively, $$dV_{g}=\frac{2^{n}}{(1-|x|^{2})^{n}}dx,$$ $$\nabla_{g}=\left(\frac{1-|x|^{2}}{2}\right)^{2}\nabla,$$ and $$\rho(x)=\ln\frac{1+|x|}{1-|x|},$$ where $\nabla$ is the usual gradient, and $dx$ is the Lebesgue measure in $\mathbb R^{n}$. We also use the polar coordinate change formula \begin{equation}\label{polar} \int_{\mathbb{H}^{n}}fdV_{g}=\int_{0}^{+\infty}\int_{\mathbb{S}^{n-1}}f\cdot (\sinh \rho)^{n-1}d\rho d\sigma \end{equation} for $f\in L^{1}(\mathbb{H}^{n})$, where $\mathbb{S}^{n-1}$ is the unit sphere in $\mathbb{H}^{n}$. The Sobolev space $W_{0}^{1,n}(M)$ is defined as the completion of $C_{0}^{\infty}(M)$ in the norm $$\|f\|_{W^{1,n}(M)}=\left(\int_{M}(|\nabla_{g}f(x)|^{n}+|f(x)|^{n})dV_{g}\right)^{1/n}.$$ \section{Hardy type inequalities on manifolds} \label{SEC:Hardy_manif} In this section we prove a family of Hardy type inequalities on a complete, simply connected Riemannian manifold $M$ with negative curvature. Let us first recall the Moser-Trudinger inequality on $M$: \begin{thm}[{\cite[Theorem 1.3]{DY16}}] \label{Trudinger_manif_thm} Let $M$ be a complete, simply connected Riemannian manifold of dimension $n\geq2$ with negative curvature. Let $0\leq\beta<n$ and let $0<\alpha\leq \alpha_{\beta}$ with $\alpha_{\beta}=n\omega_{n-1}^{1/(n-1)}(1-\beta/n)$.
Then there exists a positive constant $\widetilde{C_{1}}=\widetilde{C_{1}}(\alpha, \beta,n,M)$ such that \begin{equation}\label{Trudinger_manif1} \int_{M}\frac{1}{\rho^{\beta}}\left(\exp(\alpha|f(x)|^{n/(n-1)})-\sum_{k=0}^{n-2}\frac{\alpha^{k} |f(x)|^{kn/(n-1)}}{k!}\right)dV_{g}\leq \widetilde{C_{1}} \end{equation} holds for all functions $f\in W_{0}^{1,n}(M)$ with $\|f\|_{W^{1,n}(M)}\leq1$, where $\omega_{n-1}$ is the area of the surface of the unit $n$-ball in $M$. Moreover, the constant $\alpha_{\beta}$ is sharp. \end{thm} Now we give our result on the Hardy inequalities, and on their equivalence with \eqref{Trudinger_manif1} when $0<\alpha<\alpha_{\beta}$. \begin{thm}\label{Hardy_manif_thm} Let $M$ be a complete, simply connected Riemannian manifold of dimension $n\geq2$ with negative curvature. Let $0\leq\beta<n$. Then for any $n\leq q< \infty$ there exists a positive constant $C_{1}=C_{1}(n,\beta, q, M)$ such that \begin{equation}\label{Hardy_manif1} \left\|\frac{f}{\rho^{\frac{\beta}{q}}}\right\|_{L^{q}(M)}\leq C_{1}q^{1-1/n}\|f\|_{W^{1,n}(M)} \end{equation} holds for all functions $f\in W_{0}^{1,n}(M)$. Moreover, we have \begin{equation}\label{equiv_identity_Hardy} \frac{1}{\alpha_{\beta} n'e}=A_{1}^{n'}=B_{1}^{n'}, \end{equation} where $$\alpha_{\beta}=n\omega_{n-1}^{1/(n-1)}(1-\beta/n),$$ \begin{equation*} \begin{split} A_{1}=\inf\{C_{1}>0; \exists r=r(n,&\beta, C_{1}) \textrm{ with } r\geq n:\\& \eqref{Hardy_manif1}\textrm{ holds }\forall f\in W_{0}^{1,n}(M), \forall q \textrm{ with } r\leq q<\infty\}, \end{split} \end{equation*} \begin{equation}\label{alphaAB_Hardy} B_{1}=\limsup_{q\rightarrow \infty}\sup_{f\in W_{0}^{1,n}(M)\backslash\{0\}} \frac{\left\|\frac{f}{\rho^{\frac{\beta}{q}}}\right\|_{L^{q}(M)}}{q^{1-1/n}\|f\|_{W^{1,n}(M)}}. 
\end{equation} The weighted Trudinger-Moser inequalities \eqref{Trudinger_manif1} with $0<\alpha<\alpha_{\beta}$ are equivalent to the Hardy type inequalities \eqref{Hardy_manif1} with relation \eqref{equiv_identity_Hardy}. \end{thm} \begin{rem}\label{rem_B1} By \eqref{equiv_identity_Hardy} and \eqref{alphaAB_Hardy}, we see that the constant $$B_{1}=(n\omega_{n-1}^{1/(n-1)}(1-\beta/n)n'e)^{-1/n'}$$ is asymptotically sharp for \eqref{Hardy_manif1}, i.e. \eqref{Hardy_manif1} does not hold for $0<C_{1}<B_{1}$. In fact, \eqref{Hardy_manif1} implies \eqref{Trudinger_manif1} for $0<\alpha<\widehat{\alpha}$ for some $\widehat{\alpha}>0$, while \eqref{Hardy_manif1} and \eqref{equiv_identity_Hardy} together imply \eqref{Trudinger_manif1} for all $0<\alpha<\alpha_{\beta}$. The same remark applies to Theorems \ref{Hardy_hyper_thm}, \ref{GN_hyper_thm} and \ref{GN3_hyper_thm}. \end{rem} \begin{proof}[Proof of Theorem \ref{Hardy_manif_thm}] Since $B_{1}\leq A_{1}$, in order to obtain \eqref{equiv_identity_Hardy} it is enough to show that \eqref{Trudinger_manif1}$\Rightarrow$\eqref{Hardy_manif1} with $\alpha_{\beta}\leq(en'A_{1}^{n'})^{-1}$ and \eqref{Hardy_manif1}$\Rightarrow$\eqref{Trudinger_manif1} with $1/\alpha_{\beta}\leq n'eB_{1}^{n'}$. Let us first prove \eqref{Trudinger_manif1}$\Rightarrow$\eqref{Hardy_manif1} with $\alpha_{\beta}\leq(en'A_{1}^{n'})^{-1}$. In the case $\|f\|_{W^{1,n}(M)}=0$, we have $f\equiv0$ by the definition of $f\in W_{0}^{1,n}(M)$, so that \eqref{Hardy_manif1} is trivial. Therefore, we can assume that $\|f\|_{W^{1,n}(M)}\neq0$. Replacing $f$ by $f/\|f\|_{W^{1,n}(M)}$ in \eqref{Trudinger_manif1} with $0<\alpha<\alpha_{\beta}$ we get \begin{equation}\label{Hardy_manif3_0} \int_{M}\frac{1}{\rho^{\beta}}\sum_{k=n-1}^{\infty} \frac{\alpha^{k}|f(x)|^{kn'}}{k!\|f\|_{W^{1,n}(M)}^{kn'}}dV_{g}\leq \widetilde{C_{1}}.
\end{equation} It implies that for any $\varepsilon$ with $0<\varepsilon<\alpha_{\beta}$ there exists $C_{\varepsilon}$ such that \begin{equation}\label{Hardy_manif3} \int_{M}\frac{1}{\rho^{\beta}}\sum_{k=n-1}^{\infty} \frac{(\alpha_{\beta}-\varepsilon)^{k}|f(x)|^{kn'}}{k!\|f\|_{W^{1,n}(M)}^{kn'}}dV_{g}\leq C_{\varepsilon}. \end{equation} In particular, it follows that \begin{equation}\label{Hardy_manif4} \left\|\frac{f}{\rho^{\frac{\beta}{kn'}}}\right\|_{L^{kn'}(M)}\leq (C_{\varepsilon}k!)^{1/(kn')} (\alpha_{\beta}-\varepsilon)^{-1/n'}\|f\|_{W^{1,n}(M)} \end{equation} for all $k\geq n-1$. Moreover, for any $q\geq n$, there exists an integer $k\geq n-1$ satisfying $n'k\leq q <n'(k+1)$. Then, using H\"{o}lder's inequality for $\frac{\theta q}{n'k}+\frac{(1-\theta) q}{n'(k+1)}=1$ with $0<\theta\leq1$ we calculate \begin{equation*} \begin{split} \int_{M}\frac{|f(x)|^{q}}{\rho^{\beta}}dV_{g}&=\int_{M}\frac{|f(x)|^{\theta q}}{\rho^{\frac{\beta\theta q}{n'k}}}\cdot\frac{|f(x)|^{(1-\theta)q}}{\rho^{\frac{\beta(1-\theta) q}{n'(k+1)}}}dV_{g}\\& \leq \left(\int_{M}\frac{|f(x)|^{n'k}}{\rho^{\beta}}dV_{g}\right)^{\frac{\theta q}{n'k}} \left(\int_{M}\frac{|f(x)|^{n'(k+1)}}{\rho^{\beta}}dV_{g}\right)^{\frac{(1-\theta) q}{n'(k+1)}}\\& =\left\|\frac{f}{\rho^{\frac{\beta}{n'k}}}\right\|_{L^{n'k}(M)}^{\theta q} \left\|\frac{f}{\rho^{\frac{\beta}{n'(k+1)}}}\right\|_{L^{n'(k+1)}(M)}^{(1-\theta)q}, \end{split} \end{equation*} that is, \begin{equation}\label{Hardy_manif5} \left\|\frac{f}{\rho^{\frac{\beta}{q}}}\right\|_{L^{q}(M)} \leq\left\|\frac{f}{\rho^{\frac{\beta}{n'k}}}\right\|_{L^{n'k}(M)}^{\theta} \left\|\frac{f}{\rho^{\frac{\beta}{n'(k+1)}}}\right\|_{L^{n'(k+1)}(M)}^{1-\theta}. \end{equation} Combining this with \eqref{Hardy_manif4}, we obtain \begin{equation}\label{Hardy_manif6} \left\|\frac{f}{\rho^{\frac{\beta}{q}}}\right\|_{L^{q}(M)}\leq C_{\varepsilon}^{\frac{1}{q}}(\alpha_{\beta}-\varepsilon)^{-\frac{1}{n'}} ((k+1)!)^{\frac{1}{q}}\|f\|_{W^{1,n}(M)}. 
\end{equation} Since $q\geq n'k$ we have $(k+1)!\leq \Gamma(q/n'+2)$, then \eqref{Hardy_manif6} implies that \begin{equation}\label{Hardy_manif7} \left\|\frac{f}{\rho^{\frac{\beta}{q}}}\right\|_{L^{q}(M)}\leq (C_{\varepsilon}\Gamma(q/n'+2))^{1/q}(\alpha_{\beta}-\varepsilon)^{-1/n'} \|f\|_{W^{1,n}(M)} \end{equation} for any $q\geq n$ and for all $f\in W_{0}^{1,n}(M)$, which is \eqref{Hardy_manif1}. Now applying the Stirling formula for $q\rightarrow+\infty$, one gets \begin{equation}\label{Gamma1} \begin{split} \Gamma(q/n'+2)^{1/q}&=\left((1+o(1))\sqrt{2\pi\left(q/n'+1\right)}\left(\frac{q/n'+1}{e}\right)^{q/n'+1}\right)^{1/q} \\&=(1+o(1))\left(\frac{q}{en'}\right)^{1/n'}. \end{split} \end{equation} Combining this with \eqref{Hardy_manif7}, we have as $q\rightarrow+\infty$, asymptotically \begin{equation*} \left\|\frac{f}{\rho^{\frac{\beta}{q}}}\right\|_{L^{q}(M)}\leq (1+o(1))\left(\frac{q}{en'(\alpha_{\beta}-\varepsilon)}\right)^{1/n'}\|f\|_{W^{1,n}(M)}, \end{equation*} that is, for any $\delta>0$ there exists $r\geq n$ such that \begin{equation}\label{Hardy_manif8} \left\|\frac{f}{\rho^{\frac{\beta}{q}}}\right\|_{L^{q}(M)}\leq ((n'e(\alpha_{\beta}-\varepsilon))^{-1/n'}+\delta)q^{1-1/n}\|f\|_{W^{1,n}(M)} \end{equation} holds for all $f\in W_{0}^{1,n}(M)$ and all $q$ with $r\leq q<\infty$. Thus, we see that $A_{1}\leq (n'e(\alpha_{\beta}-\varepsilon))^{-1/n'}+\delta$, then by the arbitrariness of $\varepsilon$ and $\delta$ we obtain $\alpha_{\beta}\leq(en'A_{1}^{n'})^{-1}$. Now we show that \eqref{Hardy_manif1}$\Rightarrow$\eqref{Trudinger_manif1} with $1/\alpha_{\beta}\leq n'eB_{1}^{n'}$. By \eqref{Hardy_manif1}, for any $q$ with $n\leq q<\infty$ there is $C_{1}=C_{1}(n,\beta,q,M)>0$ such that \begin{equation}\label{Hardy_manif9} \left\|\frac{f}{\rho^{\frac{\beta}{q}}}\right\|_{L^{q}(M)}\leq C_{1}q^{1-1/n}\|f\|_{W^{1,n}(M)} \end{equation} holds for all $f\in W_{0}^{1,n}(M)$. 
With the help of this and $\|f\|_{W^{1,n}(M)}\leq1$, we write \begin{equation}\label{Hardy_manif10} \int_{M}\frac{1}{\rho^{\beta}}\left(\exp(\alpha|f(x)|^{n'})-\sum_{k=0}^{n-2} \frac{1}{k!}(\alpha|f(x)|^{n'})^{k}\right)dV_{g} \leq \sum_{n'k\geq n,\;k\in\mathbb{N}}\frac{(\alpha n'kC_{1}^{n'})^{k}}{k!}. \end{equation} The series in the right hand side of \eqref{Hardy_manif10} converges when $0\leq\alpha<1/(n'eC_{1}^{n'})$. Thus, we have obtained \eqref{Trudinger_manif1} with $0\leq\alpha<1/(n'eC_{1}^{n'})$. Hence $\alpha_{\beta}\geq 1/(n'eC_{1}^{n'})$ for all $C_{1}\geq B_{1}$, which gives $\alpha_{\beta}\geq 1/(n'eB_{1}^{n'})$. Thus, we have completed the proof of Theorem \ref{Hardy_manif_thm}. \end{proof} \section{Hardy type inequalities on hyperbolic spaces} \label{SEC:Hardy_hyper} In this section we show Hardy type inequalities with sharp constants on hyperbolic spaces and prove their equivalence with the Trudinger-Moser inequalities. Let us start by recalling the following result on $\mathbb{H}^{n}\;(n\geq2)$: \begin{thm}[{\cite[Theorem 1.1]{Zhu15}}] \label{Trudinger_hyper_thm} Let $\mathbb{H}^{n}\;(n\geq2)$ be the $n$-dimensional hyperbolic space. Let $0\leq\beta<n$ and let $0<\alpha\leq \alpha_{\beta}$ with $\alpha_{\beta}=n\omega_{n-1}^{1/(n-1)}(1-\beta/n)$. Then there exists a positive constant $\widetilde{C_{2}}=\widetilde{C_{2}}(\alpha, \beta, n)$ such that \begin{equation}\label{Trudinger_hyper1} \int_{\mathbb{H}^{n}}\frac{1}{\rho^{\beta}}\left(\exp(\alpha|f(x)|^{n/(n-1)})-\sum_{k=0}^{n-2}\frac{\alpha^{k} |f(x)|^{kn/(n-1)}}{k!}\right)dV_{g}\leq \widetilde{C_{2}} \end{equation} holds for all functions $f\in W_{0}^{1,n}(\mathbb{H}^{n})$ with $\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}\leq1$, where $\omega_{n-1}$ is the area of the surface of the unit $n$-ball in $\mathbb{H}^{n}$. Furthermore, the constant $\alpha_{\beta}$ is sharp. \end{thm} We now show that this is equivalent to the following Hardy inequality.
\begin{thm}\label{Hardy_hyper_thm} Let $\mathbb{H}^{n}\;(n\geq2)$ be the $n$-dimensional hyperbolic space and let $0\leq\beta<n$. Then for any $n\leq q <\infty$ there exists a positive constant $C_{2}=C_{2}(n,\beta,q)$ such that \begin{equation}\label{Hardy_hyper1} \left\|\frac{f}{\rho^{\frac{\beta}{q}}}\right\|_{L^{q}(\mathbb{H}^{n})}\leq C_{2}q^{1-1/n}\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})} \end{equation} holds for all functions $f\in W_{0}^{1,n}(\mathbb{H}^{n})$. Furthermore, we have \begin{equation}\label{equiv_identity_Hardy_hyper} \frac{1}{\alpha_{\beta} n'e}=A_{2}^{n'}=B_{2}^{n'}, \end{equation} where $$\alpha_{\beta}=n\omega_{n-1}^{1/(n-1)}(1-\beta/n),$$ \begin{equation*} \begin{split} A_{2}=\inf\{C_{2}>0; \exists r=r(n,&\beta,C_{2}) \textrm{ with } r\geq n:\\& \eqref{Hardy_hyper1}\textrm{ holds }\forall f\in W_{0}^{1,n}(\mathbb{H}^{n}), \forall q \textrm{ with } r\leq q<\infty\}, \end{split} \end{equation*} \begin{equation}\label{alphaDF_Hardy} B_{2}=\limsup_{q\rightarrow \infty}\sup_{f\in W_{0}^{1,n}(\mathbb{H}^{n})\backslash\{0\}} \frac{\left\|\frac{f}{\rho^{\frac{\beta}{q}}}\right\|_{L^{q}(\mathbb{H}^{n})}}{q^{1-1/n}\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}}. \end{equation} The weighted Trudinger-Moser inequalities \eqref{Trudinger_hyper1} with $0<\alpha<\alpha_{\beta}$ are equivalent to the Hardy type inequalities \eqref{Hardy_hyper1} with relation \eqref{equiv_identity_Hardy_hyper}. \end{thm} \begin{rem}\label{rem_B2} An analogue of Remark \ref{rem_B1} holds, in particular, $B_{2}$ is asymptotically sharp for \eqref{Hardy_hyper1}. \end{rem} The proof is similar to that of Theorem \ref{Hardy_manif_thm} but we give it here for clarity.
\begin{proof}[Proof of Theorem \ref{Hardy_hyper_thm}] Since $B_{2}\leq A_{2}$, in order to obtain \eqref{equiv_identity_Hardy_hyper} it suffices to show that \eqref{Trudinger_hyper1}$\Rightarrow$\eqref{Hardy_hyper1} with $\alpha_{\beta}\leq(en'A_{2}^{n'})^{-1}$ and \eqref{Hardy_hyper1}$\Rightarrow$\eqref{Trudinger_hyper1} with $1/\alpha_{\beta}\leq n'eB_{2}^{n'}$. We first show that \eqref{Trudinger_hyper1}$\Rightarrow$\eqref{Hardy_hyper1} with $\alpha_{\beta}\leq(en'A_{2}^{n'})^{-1}$. The case $\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}=0$ is trivial, since we have $f\equiv0$ by the definition of $f\in W_{0}^{1,n}(\mathbb{H}^{n})$. Therefore, we may assume $\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}\neq0$ and replace $f$ by $f/\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}$ in \eqref{Trudinger_hyper1} with $0<\alpha<\alpha_{\beta}$ to get \begin{equation}\label{Hardy_hyper3_0} \int_{\mathbb{H}^{n}}\frac{1}{\rho^{\beta}}\sum_{k=n-1}^{\infty} \frac{\alpha^{k}|f(x)|^{kn'}}{k!\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}^{kn'}}dV_{g}\leq \widetilde{C_{2}}. \end{equation} This implies that for any $\varepsilon$ with $0<\varepsilon<\alpha_{\beta}$ there exists $C_{\varepsilon}$ such that \begin{equation}\label{Hardy_hyper3} \int_{\mathbb{H}^{n}}\frac{1}{\rho^{\beta}}\sum_{k=n-1}^{\infty} \frac{(\alpha_{\beta}-\varepsilon)^{k}|f(x)|^{kn'}}{k!\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}^{kn'}}dV_{g}\leq C_{\varepsilon}. \end{equation} In particular, it follows that \begin{equation}\label{Hardy_hyper4} \left\|\frac{f}{\rho^{\frac{\beta}{kn'}}}\right\|_{L^{kn'}(\mathbb{H}^{n})}\leq (C_{\varepsilon}k!)^{1/(kn')} (\alpha_{\beta}-\varepsilon)^{-1/n'}\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})} \end{equation} for all $k\geq n-1$. Moreover, for any $q\geq n$, there exists an integer $k\geq n-1$ satisfying $n'k\leq q <n'(k+1)$.
Then, applying H\"{o}lder's inequality for $\frac{\theta q}{n'k}+\frac{(1-\theta) q}{n'(k+1)}=1$ with $0<\theta\leq1$ one calculates \begin{equation}\label{Hardy_hyper41} \begin{split} \int_{\mathbb{H}^{n}}\frac{|f(x)|^{q}}{\rho^{\beta}}dV_{g}&=\int_{\mathbb{H}^{n}}\frac{|f(x)|^{\theta q}}{\rho^{\frac{\beta\theta q}{n'k}}}\cdot\frac{|f(x)|^{(1-\theta)q}}{\rho^{\frac{\beta(1-\theta) q}{n'(k+1)}}}dV_{g}\\& \leq \left(\int_{\mathbb{H}^{n}}\frac{|f(x)|^{n'k}}{\rho^{\beta}}dV_{g}\right)^{\frac{\theta q}{n'k}} \left(\int_{\mathbb{H}^{n}}\frac{|f(x)|^{n'(k+1)}}{\rho^{\beta}}dV_{g}\right)^{\frac{(1-\theta) q}{n'(k+1)}}\\& =\left\|\frac{f}{\rho^{\frac{\beta}{n'k}}}\right\|_{L^{n'k}(\mathbb{H}^{n})}^{\theta q} \left\|\frac{f}{\rho^{\frac{\beta}{n'(k+1)}}}\right\|_{L^{n'(k+1)}(\mathbb{H}^{n})}^{(1-\theta)q}, \end{split} \end{equation} which implies that \begin{equation}\label{Hardy_hyper5} \left\|\frac{f}{\rho^{\frac{\beta}{q}}}\right\|_{L^{q}(\mathbb{H}^{n})} \leq\left\|\frac{f}{\rho^{\frac{\beta}{n'k}}}\right\|_{L^{n'k}(\mathbb{H}^{n})}^{\theta} \left\|\frac{f}{\rho^{\frac{\beta}{n'(k+1)}}}\right\|_{L^{n'(k+1)}(\mathbb{H}^{n})}^{1-\theta}. \end{equation} We can combine this with \eqref{Hardy_hyper4} to derive that \begin{equation}\label{Hardy_hyper6} \left\|\frac{f}{\rho^{\frac{\beta}{q}}}\right\|_{L^{q}(\mathbb{H}^{n})}\leq C_{\varepsilon}^{\frac{1}{q}}(\alpha_{\beta}-\varepsilon)^{-\frac{1}{n'}} ((k+1)!)^{\frac{1}{q}}\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}, \end{equation} that is, \begin{equation}\label{Hardy_hyper7} \left\|\frac{f}{\rho^{\frac{\beta}{q}}}\right\|_{L^{q}(\mathbb{H}^{n})}\leq (C_{\varepsilon}\Gamma(q/n'+2))^{1/q}(\alpha_{\beta}-\varepsilon)^{-1/n'} \|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})} \end{equation} for any $q\geq n$ and for all $f\in W_{0}^{1,n}(\mathbb{H}^{n})$, which is \eqref{Hardy_hyper1}, where we have used $(k+1)!\leq \Gamma(q/n'+2)$ when $q\geq n'k$. 
Now taking into account the behavior of $\Gamma(q/n'+2)$ for $q\rightarrow+\infty$ by \eqref{Gamma1}, \eqref{Hardy_hyper7} gives that for any $\delta>0$ there exists $r\geq n$ such that \begin{equation}\label{Hardy_hyper8} \left\|\frac{f}{\rho^{\frac{\beta}{q}}}\right\|_{L^{q}(\mathbb{H}^{n})}\leq ((n'e(\alpha_{\beta}-\varepsilon))^{-1/n'}+\delta)q^{1-1/n}\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})} \end{equation} holds for all $f\in W_{0}^{1,n}(\mathbb{H}^{n})$ and all $q$ with $r\leq q<\infty$. Thus, we get $A_{2}\leq (n'e(\alpha_{\beta}-\varepsilon))^{-1/n'}+\delta$. Since $\varepsilon$ and $\delta$ are arbitrary, it implies that $\alpha_{\beta}\leq(en'A_{2}^{n'})^{-1}$. It remains to show that \eqref{Hardy_hyper1}$\Rightarrow$\eqref{Trudinger_hyper1} with $1/\alpha_{\beta}\leq n'eB_{2}^{n'}$. Since we have \eqref{Hardy_hyper1}, we can write that for any $q$ with $n\leq q<\infty$ there is $C_{2}=C_{2}(n,\beta,q)>0$ such that \begin{equation}\label{Hardy_hyper9} \left\|\frac{f}{\rho^{\frac{\beta}{q}}}\right\|_{L^{q}(\mathbb{H}^{n})}\leq C_{2}q^{1-1/n}\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})} \end{equation} holds for all $f\in W_{0}^{1,n}(\mathbb{H}^{n})$. Employing this and $\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}\leq1$, we get \begin{equation}\label{Hardy_hyper10} \int_{\mathbb{H}^{n}}\frac{1}{\rho^{\beta}}\left(\exp(\alpha|f(x)|^{n'})-\sum_{k=0}^{n-2} \frac{1}{k!}(\alpha|f(x)|^{n'})^{k}\right)dV_{g}\leq\sum_{n'k\geq n,\;k\in\mathbb{N}}\frac{(\alpha n'kC_{2}^{n'})^{k}}{k!}. \end{equation} The series in the right hand side of \eqref{Hardy_hyper10} converges when $0\leq\alpha<1/(n'eC_{2}^{n'})$. Thus, we have obtained \eqref{Trudinger_hyper1} with $0\leq\alpha<1/(n'eC_{2}^{n'})$. Hence, $\alpha_{\beta}\geq 1/(n'eC_{2}^{n'})$ for all $C_{2}\geq B_{2}$, that is, $\alpha_{\beta}\geq 1/(n'eB_{2}^{n'})$. This completes the proof of Theorem \ref{Hardy_hyper_thm}. \end{proof} Now let us show the corresponding uncertainty type principle on hyperbolic spaces.
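Before doing so, let us briefly justify, as a side remark, the convergence threshold used in the last step of the above proof. Writing $a_{k}:=\frac{(\alpha n'kC_{2}^{n'})^{k}}{k!}$ for the terms of the series in \eqref{Hardy_hyper10} and using Stirling's formula in the form $(k!)^{1/k}=(1+o(1))k/e$, we get $$a_{k}^{1/k}=\frac{\alpha n'kC_{2}^{n'}}{(k!)^{1/k}}=(1+o(1))\,\alpha n'eC_{2}^{n'}\;\;\text{as}\;\;k\rightarrow\infty,$$ so by the root test the series converges when $\alpha n'eC_{2}^{n'}<1$ and diverges when $\alpha n'eC_{2}^{n'}>1$.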
\begin{thm}\label{uncer_thm} Let $\mathbb{H}^{n}\;(n\geq2)$ be the $n$-dimensional hyperbolic space and let $0\leq\beta<n$. Then we have \begin{multline}\label{uncer_1} \left(\int_{\mathbb{H}^{n}}|\nabla_{g} f(x)|^{n}dV_{g}\right)^{1/n}\left(\int_{\mathbb{H}^{n}}\rho^{q'}|f(x)|^{q'}dV_{g}\right)^{1/q'}\\ \geq C_{2}^{-1}q^{1/n-1}\int_{\mathbb{H}^{n}}\rho^{\frac{q-\beta}{q}}|f(x)|^{2}dV_{g} \end{multline} for all functions $f\in W_{0}^{1,n}(\mathbb{H}^{n})$, where $1/q+1/q'=1$, and $C_{2}$ is the constant from \eqref{Hardy_hyper1}. \end{thm} \begin{proof}[Proof of Theorem \ref{uncer_thm}] Using \eqref{Hardy_hyper1} and H\"{o}lder's inequality, we calculate \begin{equation*} \begin{split} \left(\int_{\mathbb{H}^{n}}|\nabla_{g} f(x)|^{n}dV_{g}\right)^{1/n}&\left(\int_{\mathbb{H}^{n}}\rho^{q'}|f(x)|^{q'}dV_{g}\right)^{1/q'}\\& \geq C_{2}^{-1}q^{1/n-1}\left(\int_{\mathbb{H}^{n}}\frac{|f(x)|^{q}}{\rho^{\beta}}dV_{g}\right)^{1/q} \left(\int_{\mathbb{H}^{n}}\rho^{q'}|f(x)|^{q'}dV_{g}\right)^{1/q'}\\& \geq C_{2}^{-1}q^{1/n-1}\int_{\mathbb{H}^{n}}\rho^{\frac{q-\beta}{q}}|f(x)|^{2}dV_{g}, \end{split} \end{equation*} which gives \eqref{uncer_1}. \end{proof} \section{Caffarelli-Kohn-Nirenberg inequalities on hyperbolic spaces} \label{SEC:CKN} In this section we give new Caffarelli-Kohn-Nirenberg inequalities on hyperbolic spaces, and show their equivalence with the weighted Trudinger-Moser inequalities. Let us first show that the obtained Hardy inequalities in turn imply the following Caffarelli-Kohn-Nirenberg type inequalities on hyperbolic spaces. \begin{thm}\label{CKN_thm} Let $\mathbb{H}^{n}\;(n\geq2)$ be the $n$-dimensional hyperbolic space. Let $b$, $c\in\mathbb{R}$, $0<p_{3}<\infty$ and $1<p_{2}<\infty$. Let $\delta\in(0,1]\cap\left(\frac{p_{3}-p_{2}}{p_{3}},1\right]$. Let $0\leq b(1-\delta)-c<n(1/p_{3}-(1-\delta)/p_{2})$ and $n\leq\frac{\delta p_{2}p_{3}}{p_{2}-(1-\delta)p_{3}}$.
Then we have \begin{equation}\label{CKN_1} \|\rho^{c}f\|_{L^{p_{3}}(\mathbb{H}^{n})} \leq \widehat{C_{3}}\|\nabla_{g} f\|^{\delta}_{L^{n}(\mathbb{H}^{n})} \|\rho^{b}f\|^{1-\delta}_{L^{p_{2}}(\mathbb{H}^{n})}, \end{equation} for all functions $f\in W_{0}^{1,n}(\mathbb{H}^{n})$, where \\$\widehat{C_{3}}=C_{2}^{\delta}\left(\frac{\delta p_{2}p_{3}} {p_{2}-(1-\delta)p_{3}}\right)^{\delta-\frac{\delta}{n}}$, and $C_{2}$ is the constant from \eqref{Hardy_hyper1}. \end{thm} \begin{proof}[Proof of Theorem \ref{CKN_thm}] {\bf Case $\delta=1$}. In this case, we have $0\leq-c<n/p_{3}$, $n\leq p_{3}<\infty$ and $\widehat{C_{3}}=C_{2}p_{3}^{1-1/n}$, so \eqref{CKN_1} is equivalent to \eqref{Hardy_hyper1}. {\bf Case $\delta\in(0,1)\cap\left(\frac{p_{3}-p_{2}}{p_{3}},1\right)$}. Using H\"{o}lder's inequality for $\frac{ p_{2}-(1-\delta)p_{3}}{p_{2}}+\frac{(1-\delta)p_{3}}{p_{2}}=1$, we calculate \begin{equation}\label{CKN_cal} \begin{split} \|\rho^{c}f\|_{L^{p_{3}}(\mathbb{H}^{n})}&= \left(\int_{\mathbb{H}^{n}}\rho^{cp_{3}}|f(x)|^{p_{3}}dV_{g}\right)^{\frac{1}{p_{3}}}\\& =\left(\int_{\mathbb{H}^{n}}\frac{|f(x)|^{\delta p_{3}}}{\rho^{\delta p_{3}\left(\frac{b(1-\delta)-c}{\delta}\right)}}\cdot \frac{|f(x)|^{(1-\delta)p_{3}}}{\rho^{-bp_{3}(1-\delta)}}dV_{g}\right)^{\frac{1}{p_{3}}} \\& \leq \left(\int_{\mathbb{H}^{n}}\frac{|f(x)|^{\frac{\delta p_{2}p_{3}}{p_{2}-(1-\delta)p_{3}}}}{\rho^{\frac{b(1-\delta)-c}{\delta} \cdot\frac{\delta p_{2}p_{3}}{p_{2}-(1-\delta)p_{3}}}}dV_{g}\right)^{\frac{p_{2}-(1-\delta)p_{3}}{{p_{2}p_{3}}}} \left(\int_{\mathbb{H}^{n}}\frac{|f(x)|^{p_{2}}}{\rho^{-bp_{2}}}dV_{g}\right)^{\frac{1-\delta}{p_{2}}}\\& =\left\|\frac{f}{\rho^{\frac{b(1-\delta)-c}{\delta}}}\right\|^{\delta}_{L^{\frac{\delta p_{2}p_{3}}{p_{2}-(1-\delta)p_{3}}}(\mathbb{H}^{n})} \left\|\frac{f}{\rho^{-b}}\right\|^{1-\delta}_{L^{p_{2}}(\mathbb{H}^{n})}. 
\end{split} \end{equation} Since we have $\delta>\frac{p_{3}-p_{2}}{p_{3}}$, that is, $n\leq\frac{\delta p_{2}p_{3}}{p_{2}-(1-\delta)p_{3}}<\infty$, and $0\leq\frac{b(1-\delta)-c}{\delta}<\frac{n(p_{2}-(1-\delta)p_{3})}{\delta p_{2}p_{3}}$, then using \eqref{Hardy_hyper1} in \eqref{CKN_cal} we obtain the desired inequality \eqref{CKN_1}. \end{proof} Now we show other types of Caffarelli-Kohn-Nirenberg inequalities with sharp constants, which are equivalent to Moser-Trudinger inequalities. First, let us start by recalling the following Moser-Trudinger inequality on $\mathbb{H}^{n}\;(n\geq2)$: \begin{thm}[{\cite[Theorem 1.3]{LT13}}] \label{Trudinger_GN_hyper_thm} Let $\mathbb{H}^{n}\;(n\geq2)$ be the $n$-dimensional hyperbolic space. Let $0\leq\beta<n$ and let $0<\alpha<\alpha_{\beta}$ with $\alpha_{\beta}=n\omega_{n-1}^{1/(n-1)}(1-\beta/n)$. Then there exists a positive constant $\widehat{C_{4}}=\widehat{C_{4}}(n, \alpha, \beta)$ such that \begin{multline}\label{Trudinger_GN_hyper1} \int_{\mathbb{H}^{n}}\frac{1}{\rho^{\beta}}\left(\exp(\alpha|f(x)|^{n/(n-1)})-\sum_{k=0}^{n-2}\frac{ \alpha^{k}|f(x)|^{kn/(n-1)}}{k!}\right)dV_{g}\leq \widehat{C_{4}}\int_{\mathbb{H}^{n}} \frac{|f(x)|^{n}}{\rho^{\beta}}dV_{g} \end{multline} holds for all functions $f\in W_{0}^{1,n}(\mathbb{H}^{n})$ with $\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}\leq1$, where $\omega_{n-1}$ is the area of the surface of the unit $n$-ball in $\mathbb{H}^{n}$. Moreover, the constant $\alpha_{\beta}$ is sharp. \end{thm} First, we establish an extension of this result allowing weights of different orders. \begin{thm}\label{analog_LT13} Let $\mathbb{H}^{n}\;(n\geq2)$ be the $n$-dimensional hyperbolic space. Let $0\leq\beta_{1}<n$ and $\beta_{2}\in \mathbb{R}$. Let $0<\alpha<\alpha_{\beta_{1}}$ with $\alpha_{\beta_{1}}=n\omega_{n-1}^{1/(n-1)}(1-\beta_{1}/n)$. 
Let \begin{equation}\label{delta} \delta= \begin{cases} 0, \text{\;if}\; \beta_{1}=\beta_{2};\\ \text{any}\;\delta\;\text{such that}\;0\leq \beta_{1}-\beta_{2}(1-\delta)<n\delta\leq n, \text{\;if}\; \beta_{1}\neq\beta_{2}.\end{cases} \end{equation} Then there exists a positive constant $\widetilde{C_{3}}=\widetilde{C_{3}}(n, \alpha, \beta_{1}, \beta_{2}, \delta)$ such that \begin{multline}\label{analog_LT13_1} \int_{\mathbb{H}^{n}}\frac{1}{\rho^{\beta_{1}}}\left(\exp(\alpha|f(x)|^{n/(n-1)})-\sum_{k=0}^{n-2}\frac{ \alpha^{k}|f(x)|^{kn/(n-1)}}{k!}\right)dV_{g} \\\leq \widetilde{C_{3}}\left(\int_{\mathbb{H}^{n}} \frac{|f(x)|^{n}}{\rho^{\beta_{2}}}dV_{g}\right)^{1-\delta} \end{multline} holds for all functions $f\in W_{0}^{1,n}(\mathbb{H}^{n})$ with $\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}\leq1$, where $\omega_{n-1}$ is the area of the surface of the unit $n$-ball in $\mathbb{H}^{n}$. Moreover, the constant $\alpha_{\beta_{1}}$ is sharp. \end{thm} \begin{rem}\label{rem_LT13} We note that Theorem \ref{analog_LT13} implies Theorem \ref{Trudinger_GN_hyper_thm} when $\beta_{1}=\beta_{2}$. \end{rem} \begin{proof}[Proof of Theorem \ref{analog_LT13}] By \eqref{Trudinger_GN_hyper1} we have \begin{multline}\label{analog_LT13_2} \int_{\mathbb{H}^{n}}\frac{1}{\rho^{\beta_{1}}}\left(\exp(\alpha|f(x)|^{n/(n-1)})-\sum_{k=0}^{n-2}\frac{ \alpha^{k}|f(x)|^{kn/(n-1)}}{k!}\right)dV_{g}\leq \widehat{C_{4}}\int_{\mathbb{H}^{n}} \frac{|f(x)|^{n}}{\rho^{\beta_{1}}}dV_{g}.
\end{multline} When $0\leq\beta_{1}<n$ and $\beta_{2}\in\mathbb{R}$, we have by \eqref{CKN_1} with $p_{2}=p_{3}=n\geq2$, $b=-\beta_{2}/n$, $c=-\beta_{1}/n$ and $0\leq \beta_{1}-\beta_{2}(1-\delta)<n\delta\leq n$ that \begin{equation}\label{analog_LT13_3} \int_{\mathbb{H}^{n}}\frac{|f(x)|^{n}}{\rho^{\beta_{1}}}dV_{g}\leq \widehat{C_{3}}^{n} \left(\int_{\mathbb{H}^{n}}\frac{|f(x)|^{n}}{\rho^{\beta_{2}}}dV_{g}\right)^{1-\delta} \end{equation} for all $f\in W_{0}^{1,n}(\mathbb{H}^{n})$ with $\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}\leq1$, where $\widehat{C_{3}}=\widehat{C_{3}}(n,\beta_{1},\beta_{2},\delta)$ is the constant from \eqref{CKN_1}. Then, since for $\beta_{1}=\beta_{2}$ and $\delta=0$ the same estimate holds trivially with constant $1$, we note that in all cases there exists a positive constant $C=C(n,\beta_{1},\beta_{2},\delta)$ such that \begin{equation}\label{analog_LT13_3_8} \int_{\mathbb{H}^{n}}\frac{|f(x)|^{n}}{\rho^{\beta_{1}}}dV_{g}\leq C \left(\int_{\mathbb{H}^{n}}\frac{|f(x)|^{n}}{\rho^{\beta_{2}}}dV_{g}\right)^{1-\delta} \end{equation} holds for all $f\in W_{0}^{1,n}(\mathbb{H}^{n})$ with $\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}\leq1$, where $0\leq\beta_{1}<n$, $\beta_{2}\in\mathbb{R}$ and $\delta$ is given in \eqref{delta}. Then, combining \eqref{analog_LT13_2} and \eqref{analog_LT13_3_8}, we obtain \eqref{analog_LT13_1}. Now we show the sharpness of the constant $\alpha_{\beta_{1}}$ in \eqref{analog_LT13_1}, that is, we prove that \eqref{analog_LT13_1} fails when $\alpha\geq\alpha_{\beta_{1}}$.
By \cite{LT13}, we know that if we take the sequence $\{f_{j}\}_{j=1}^{\infty}\subset W_{0}^{1,n}(\mathbb{H}^{n})$ as follows \begin{equation*} f_{j}(x):=\omega_{n-1}^{-1/n}D_{j} \begin{cases} 0, \text{\;if}\;\rho>1;\\ j^{\frac{n-\beta_{1}-1}{n-\beta_{1}}}\left(\frac{-\ln\rho}{j}\right), \text{\;if}\;e^{-j}\leq\rho\leq1;\\ j^{\frac{n-\beta_{1}-1}{n-\beta_{1}}}, \text{\;if}\; 0\leq \rho \leq e^{-j},\end{cases} \end{equation*} where $$D_{j}=\left(j^{-\frac{n}{n-\beta_{1}}}\int_{e^{-j}}^{1}\rho^{-n}(\sinh \rho)^{n-1}d\rho\right)^{-\frac{1}{n}},$$ then we have $$D_{j}j^{-\frac{\beta_{1}}{n(n-\beta_{1})}}\rightarrow 1\;\;\text{as}\;\;j\rightarrow\infty,$$ $$\int_{\mathbb{H}^{n}}|\nabla_{g}f_{j}(x)|^{n}dV_{g}=1\;\;\text{and}\;\;\int_{\mathbb{H}^{n}} \frac{|f_{j}(x)|^{n}}{\rho^{\beta_{2}}}dV_{g}=O\left(\frac{1}{j}\right).$$ Plugging $f_{j}(x)$ into the left hand side of \eqref{analog_LT13_1} and using the polar coordinates \eqref{polar}, we calculate $$ \int_{\mathbb{H}^{n}}\frac{1}{\rho^{\beta_{1}}}\left(\exp(\alpha|f_{j}(x)|^{n/(n-1)})-\sum_{k=0}^{n-2}\frac{ \alpha^{k}|f_{j}(x)|^{kn/(n-1)}}{k!}\right)dV_{g}$$ $$=\int_{\mathbb{H}^{n}}\frac{1}{\rho^{\beta_{1}}}\sum_{k=n-1}^{\infty}\frac{ \alpha^{k}|f_{j}(x)|^{kn/(n-1)}}{k!}dV_{g}$$ $$ \geq \int_{\rho\leq e^{-j}}\frac{1}{\rho^{\beta_{1}}}\sum_{k=n-1}^{\infty}\frac{ \alpha^{k}|f_{j}(x)|^{kn/(n-1)}}{k!}dV_{g}$$ $$=\omega_{n-1}\sum_{k=n-1}^{\infty}\frac{ \alpha^{k}\left(\omega_{n-1}^{-1/n}D_{j}j^{\frac{n-\beta_{1}-1}{n-\beta_{1}}}\right)^{kn/(n-1)}}{k!} \int_{0}^{e^{-j}}\frac{(\sinh \rho)^{n-1}}{\rho^{\beta_{1}}}d\rho$$ $$ \sim \sum_{k=n-1}^{\infty}\frac{ \alpha^{k}\left(\omega_{n-1}^{-1/n}D_{j}j^{\frac{n-\beta_{1}-1}{n-\beta_{1}}}\right)^{kn/(n-1)}}{k!} e^{-j(n-\beta_{1})}$$ $$ \sim \sum_{k=n-1}^{\infty} \frac{\left(\frac{\alpha}{\omega_{n-1}^{1/(n-1)}}\right)^{k}\left(j^{\frac{\beta_{1}}{n(n-\beta_{1})}+ \frac{n-\beta_{1}-1}{n-\beta_{1}}}\right)^{\frac{kn}{n-1}}}{k!} e^{-j(n-\beta_{1})}$$ $$ \sim \sum_{k=n-1}^{\infty}
\frac{\left(\frac{\alpha jn}{n\omega_{n-1}^{1/(n-1)}}\right)^{k}}{k!} e^{-j(n-\beta_{1})}$$ $$ \geq \sum_{k=n-1}^{\infty} \frac{j^{k}(n-\beta_{1})^{k}}{k!} e^{-j(n-\beta_{1})}$$ $$= 1-\sum_{k=0}^{n-2} \frac{j^{k}(n-\beta_{1})^{k}}{k!} e^{-j(n-\beta_{1})}, $$ where in the last inequality we used $\alpha\geq\alpha_{\beta_{1}}$, that is, $\frac{\alpha}{\omega_{n-1}^{1/(n-1)}}\geq n-\beta_{1}$. Thus we obtain that \begin{multline*} \frac{\int_{\mathbb{H}^{n}}\frac{1}{\rho^{\beta_{1}}}\left(\exp(\alpha|f_{j}(x)|^{n/(n-1)})-\sum_{k=0}^{n-2}\frac{ \alpha^{k}|f_{j}(x)|^{kn/(n-1)}}{k!}\right)dV_{g}} {\left(\int_{\mathbb{H}^{n}} \frac{|f_{j}(x)|^{n}}{\rho^{\beta_{2}}}dV_{g}\right)^{1-\delta}}\\ \geq\frac{1-\sum_{k=0}^{n-2} \frac{j^{k}(n-\beta_{1})^{k}}{k!} e^{-j(n-\beta_{1})}}{O\left(\frac{1}{j}\right)}\rightarrow\infty \end{multline*} as $j\rightarrow\infty$. \end{proof} We now show that this is equivalent to the following Caffarelli-Kohn-Nirenberg type inequalities. \begin{thm}\label{GN_hyper_thm} Let $\mathbb{H}^{n}\;(n\geq2)$ be the $n$-dimensional hyperbolic space. Let $0\leq\beta_{1}<n$ and $\beta_{2}\in \mathbb{R}$. Let $\delta$ be as in \eqref{delta}. Then for any $n\leq q<\infty$ there exists a positive constant $C_{3}=C_{3}(n,\beta_{1},\beta_{2},q,\delta)$ such that \begin{equation}\label{GN_hyper1} \left\|\frac{f}{\rho^{\frac{\beta_{1}}{q}}}\right\|_{L^{q}(\mathbb{H}^{n})}\leq C_{3}q^{1-1/n}\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}^{1-\frac{n(1-\delta)}{q}} \left\|\frac{f}{\rho^{\frac{\beta_{2}}{n}}}\right\|_{L^{n}(\mathbb{H}^{n})}^{\frac{n(1-\delta)}{q}} \end{equation} holds for all functions $f\in W_{0}^{1,n}(\mathbb{H}^{n})$.
Moreover, we have \begin{equation}\label{equiv_identity_GN_hyper} \frac{1}{\alpha_{\beta_{1}} n'e}=A_{3}^{n'}=B_{3}^{n'}, \end{equation} where $$\alpha_{\beta_{1}}=n\omega_{n-1}^{1/(n-1)}(1-\beta_{1}/n),$$ \begin{equation*} \begin{split} A_{3}=\inf\{C_{3}>0; \exists r=r(n,\beta_{1},&\beta_{2},C_{3}) \textrm{ with } r\geq n:\\& \eqref{GN_hyper1}\textrm{ holds }\forall f\in W_{0}^{1,n}(\mathbb{H}^{n}), \forall q \textrm{ with } r\leq q<\infty\}, \end{split} \end{equation*} \begin{equation}\label{alphaDF_GN} B_{3}=\limsup_{q\rightarrow \infty}\sup_{f\in W_{0}^{1,n}(\mathbb{H}^{n})\backslash\{0\}} \frac{\left\|\frac{f}{\rho^{\frac{\beta_{1}}{q}}}\right\|_{L^{q}(\mathbb{H}^{n})}}{q^{1-1/n}\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}^{1-\frac{n(1-\delta)}{q}} \left\|\frac{f}{\rho^{\frac{\beta_{2}}{n}}}\right\|_{L^{n}(\mathbb{H}^{n})}^{\frac{n(1-\delta)}{q}}}. \end{equation} The weighted Trudinger-Moser inequalities \eqref{analog_LT13_1} are equivalent to the Caffarelli-Kohn-Nirenberg type inequalities \eqref{GN_hyper1} with relation \eqref{equiv_identity_GN_hyper}. \end{thm} \begin{rem}\label{rem_B3} An analogue of Remark \ref{rem_B1} holds, in particular, $B_{3}$ is asymptotically sharp for \eqref{GN_hyper1}. \end{rem} \begin{rem} In \cite{INW14}, similar inequalities to \eqref{analog_LT13_1} and \eqref{GN_hyper1} are investigated for radially symmetric functions in $\mathbb R^{n}$. \end{rem} \begin{proof}[Proof of Theorem \ref{GN_hyper_thm}] Since $B_{3}\leq A_{3}$, as in the proof of Theorem \ref{Hardy_hyper_thm} it suffices to show the following two implications: \eqref{analog_LT13_1}$\Rightarrow$\eqref{GN_hyper1} with $\alpha_{\beta_{1}}\leq(en'A_{3}^{n'})^{-1}$ and \eqref{GN_hyper1}$\Rightarrow$\eqref{analog_LT13_1} with $1/\alpha_{\beta_{1}}\leq n'eB_{3}^{n'}$. We first show \eqref{analog_LT13_1}$\Rightarrow$\eqref{GN_hyper1} with $\alpha_{\beta_{1}}\leq(en'A_{3}^{n'})^{-1}$. 
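For orientation: the critical constant $\alpha_{\beta_{1}}=n\omega_{n-1}^{1/(n-1)}(1-\beta_{1}/n)$ is easy to evaluate. The following sketch (our own illustration, not part of the paper, using the standard formula $\omega_{n-1}=2\pi^{n/2}/\Gamma(n/2)$ for the surface area of the unit sphere in $\mathbb{R}^{n}$) confirms that for $n=2$, $\beta_{1}=0$ it reduces to the classical Moser constant $4\pi$.

```python
import math

def omega(n):
    # surface area of the unit sphere S^{n-1} in R^n: 2*pi^{n/2}/Gamma(n/2)
    return 2 * math.pi ** (n / 2) / math.gamma(n / 2)

def alpha(n, beta1):
    # critical constant alpha_{beta_1} = n * omega_{n-1}^{1/(n-1)} * (1 - beta1/n)
    return n * omega(n) ** (1 / (n - 1)) * (1 - beta1 / n)

print(alpha(2, 0))   # reduces to the classical Moser constant 4*pi
print(alpha(4, 1))   # a weighted example
```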
In the case $\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}=0$ we have $f\equiv0$ by taking into account the definition of $f\in W_{0}^{1,n}(\mathbb{H}^{n})$, so there is nothing to prove. Therefore, we can assume that $\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}\neq0$. Replacing $f$ by $f/\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}$ in \eqref{analog_LT13_1} and multiplying both sides by $\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}^{n(1-\delta)}$, we get \begin{equation}\label{GN_hyper3_0} \int_{\mathbb{H}^{n}}\frac{1}{\rho^{\beta_{1}}}\sum_{k=n-1}^{\infty} \frac{\alpha^{k}|f(x)|^{kn'}}{k!\,\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}^{kn'-n(1-\delta)}}dV_{g}\leq \widetilde{C_{3}}\left(\int_{\mathbb{H}^{n}} \frac{|f(x)|^{n}}{\rho^{\beta_{2}}}dV_{g}\right)^{1-\delta}. \end{equation} It implies that for any $\varepsilon$ with $0<\varepsilon<\alpha_{\beta_{1}}$ there exists $C_{\varepsilon}$ such that \begin{equation}\label{GN_hyper3} \int_{\mathbb{H}^{n}}\frac{1}{\rho^{\beta_{1}}}\sum_{k=n-1}^{\infty} \frac{(\alpha_{\beta_{1}}-\varepsilon)^{k}|f(x)|^{kn'}}{k!\,\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}^{kn'-n(1-\delta)}}dV_{g}\leq C_{\varepsilon}\left(\int_{\mathbb{H}^{n}} \frac{|f(x)|^{n}}{\rho^{\beta_{2}}}dV_{g}\right)^{1-\delta}. \end{equation} In particular, keeping only the $k$-th term of the series in \eqref{GN_hyper3}, we get \begin{equation}\label{GN_hyper4} \left\|\frac{f}{\rho^{\frac{\beta_{1}}{kn'}}}\right\|_{L^{kn'}(\mathbb{H}^{n})}\leq (C_{\varepsilon}k!)^{1/(kn')} (\alpha_{\beta_{1}}-\varepsilon)^{-1/n'}\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}^{1-\frac{n(1-\delta)}{n'k}} \left\|\frac{f}{\rho^{\frac{\beta_{2}}{n}}}\right\|_{L^{n}(\mathbb{H}^{n})}^{\frac{n(1-\delta)}{n'k}} \end{equation} for all $k\geq n-1$. Moreover, for any $q\geq n$, there exists an integer $k\geq n-1$ satisfying $n'k\leq q <n'(k+1)$. 
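The interpolation between the $L^{n'k}$- and $L^{n'(k+1)}$-norms used next rests on the log-convexity of $p\mapsto\|g\|_{L^{p}}$ (Lyapunov's inequality). A quick numerical sanity check on a discrete measure (illustrative only, not part of the proof):

```python
import math
import random

# Lyapunov interpolation: for 1/q = theta/p + (1-theta)/r with 0 < theta <= 1,
# ||g||_q <= ||g||_p^theta * ||g||_r^{1-theta}; checked on a random discrete
# measure (weights w) and a random positive function g.
random.seed(0)
g = [random.uniform(0.1, 5.0) for _ in range(50)]
w = [random.uniform(0.1, 2.0) for _ in range(50)]  # weights = discrete measure

def norm(p):
    return sum(wi * gi ** p for gi, wi in zip(g, w)) ** (1 / p)

p, r = 4.0, 6.0   # playing the roles of n'k and n'(k+1)
for theta in [0.25, 0.5, 0.75, 1.0]:
    q = 1 / (theta / p + (1 - theta) / r)
    assert norm(q) <= norm(p) ** theta * norm(r) ** (1 - theta) + 1e-12
print("Lyapunov interpolation verified on a sample measure")
```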
Then, combining \eqref{Hardy_hyper5} with \eqref{GN_hyper4}, one gets \begin{equation}\label{GN_hyper6} \left\|\frac{f}{\rho^{\frac{\beta_{1}}{q}}}\right\|_{L^{q}(\mathbb{H}^{n})}\leq C_{\varepsilon}^{\frac{1}{q}}(\alpha_{\beta_{1}}-\varepsilon)^{-\frac{1}{n'}} ((k+1)!)^{\frac{1}{q}}\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}^{1-\frac{n(1-\delta)}{q}} \left\|\frac{f}{\rho^{\frac{\beta_{2}}{n}}}\right\|_{L^{n}(\mathbb{H}^{n})}^{\frac{n(1-\delta)}{q}}. \end{equation} Since $(k+1)!\leq \Gamma(q/n'+2)$ for $q\geq n'k$, we rewrite \eqref{GN_hyper6} as \begin{equation}\label{GN_hyper7} \left\|\frac{f}{\rho^{\frac{\beta_{1}}{q}} }\right\|_{L^{q}(\mathbb{H}^{n})}\leq (C_{\varepsilon}\Gamma(q/n'+2))^{1/q}(\alpha_{\beta_{1}}-\varepsilon)^{-1/n'}\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}^{1-\frac{n(1-\delta)}{q}} \left\|\frac{f}{\rho^{\frac{\beta_{2}}{n}}}\right\|_{L^{n}(\mathbb{H}^{n})}^{\frac{n(1-\delta)}{q}} \end{equation} for any $q\geq n$ and for all $f\in W_{0}^{1,n}(\mathbb{H}^{n})$, which is \eqref{GN_hyper1}. With the help of \eqref{Gamma1} for $q\rightarrow+\infty$ and \eqref{GN_hyper7}, we know that for any $\sigma>0$ there exists $r\geq n$ such that \begin{equation}\label{GN_hyper8} \left\|\frac{f}{\rho^{\frac{\beta_{1}}{q}} }\right\|_{L^{q}(\mathbb{H}^{n})}\leq ((n'e(\alpha_{\beta_{1}}-\varepsilon))^{-1/n'}+\sigma)q^{1-1/n}\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}^{1-\frac{n(1-\delta)}{q}} \left\|\frac{f}{\rho^{\frac{\beta_{2}}{n}}}\right\|_{L^{n}(\mathbb{H}^{n})}^{\frac{n(1-\delta)}{q}} \end{equation} holds for all $f\in W_{0}^{1,n}(\mathbb{H}^{n})$ and all $q$ with $r\leq q<\infty$. Thus $A_{3}\leq (n'e(\alpha_{\beta_{1}}-\varepsilon))^{-1/n'}+\sigma$, and by the arbitrariness of $\varepsilon$ and $\sigma$ we obtain $\alpha_{\beta_{1}}\leq(en'A_{3}^{n'})^{-1}$. Now let us show that \eqref{GN_hyper1}$\Rightarrow$\eqref{analog_LT13_1} with $1/\alpha_{\beta_{1}}\leq n'eB_{3}^{n'}$. 
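The Stirling-type asymptotics \eqref{Gamma1} behind the passage from \eqref{GN_hyper7} to \eqref{GN_hyper8}, namely $\Gamma(q/n'+2)^{1/q}\sim (n'e)^{-1/n'}\,q^{1-1/n}$ as $q\to\infty$, can be checked numerically (illustrative only, not part of the proof):

```python
import math

# Check that Gamma(q/n' + 2)^{1/q} / q^{1-1/n} tends to (n'e)^{-1/n'},
# using lgamma to avoid overflow; here for the sample value n = 3.
n = 3
np_ = n / (n - 1)                       # n' = n/(n-1)
limit = (np_ * math.e) ** (-1 / np_)

def ratio(q):
    # Gamma(q/n'+2)^{1/q} divided by q^{1-1/n}
    return math.exp(math.lgamma(q / np_ + 2) / q) / q ** (1 - 1 / n)

print(ratio(10), ratio(100), ratio(10000), limit)
```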
By \eqref{GN_hyper1}, for any $q$ with $n\leq q<\infty$ there exists $C_{3}=C_{3}(n,\beta_{1},\beta_{2},q)>0$ such that \begin{equation}\label{GN_hyper9} \left\|\frac{f}{\rho^{\frac{\beta_{1}}{q}}}\right\|_{L^{q}(\mathbb{H}^{n})}\leq C_{3}q^{1-1/n}\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}^{1-\frac{n(1-\delta)}{q}} \left\|\frac{f}{\rho^{\frac{\beta_{2}}{n}}}\right\|_{L^{n}(\mathbb{H}^{n})}^{\frac{n(1-\delta)}{q}} \end{equation} holds for all $f\in W_{0}^{1,n}(\mathbb{H}^{n})$. Using this and $\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}\leq1$, we arrive at \begin{multline}\label{GN_hyper10} \int_{\mathbb{H}^{n}}\frac{1}{\rho^{\beta_{1}}}\left(\exp(\alpha|f(x)|^{n'})-\sum_{k=0}^{n-2} \frac{(\alpha|f(x)|^{n'})^{k}}{k!}\right)dV_{g}\\ \leq \sum_{n'k\geq n,\;k\in\mathbb{N}}\frac{(\alpha n'kC_{3}^{n'})^{k}}{k!}\left(\int_{\mathbb{H}^{n}} \frac{|f(x)|^{n}}{\rho^{\beta_{2}}}dV_{g}\right)^{1-\delta}. \end{multline} The series in the right hand side of \eqref{GN_hyper10} converges when $0\leq\alpha<1/(n'eC_{3}^{n'})$. Thus, we have obtained \eqref{analog_LT13_1} with $0\leq\alpha<1/(n'eC_{3}^{n'})$. Hence $\alpha_{\beta_{1}}\geq 1/(n'eC_{3}^{n'})$ for all $C_{3}\geq B_{3}$, which gives $\alpha_{\beta_{1}}\geq 1/(n'eB_{3}^{n'})$. Thus, we have completed the proof of Theorem \ref{GN_hyper_thm}. \end{proof} We now recall another version of the weighted Trudinger-Moser inequality with a more explicit expression for the exponent for radially decreasing functions. \begin{thm}[{\cite[Theorem 1]{LC17}}] \label{Trudinger_GN2_hyper_thm} Let $\mathbb{H}^{n}\;(n\geq2)$ be the $n$-dimensional hyperbolic space. Let $0\leq\beta<n$ and let $0<\alpha\leq \alpha_{\beta}$ with $\alpha_{\beta}=n\omega_{n-1}^{1/(n-1)}(1-\beta/n)$. 
Then there exists a positive constant $\widetilde{C_{4}}=\widetilde{C_{4}}(\beta, n, \alpha)$ such that \begin{multline}\label{Trudinger_GN2_hyper1} \int_{\mathbb{H}^{n}}\frac{1}{(1+|f(x)|)^{n/(n-1)}\rho^{\beta}}\left(\exp(\alpha|f(x)|^{n/(n-1)})\right.\\\left.-\sum_{k=0}^{n-2}\frac{ \alpha^{k}|f(x)|^{kn/(n-1)}}{k!}\right)dV_{g}\leq \widetilde{C_{4}}\int_{\mathbb{H}^{n}} \frac{|f(x)|^{n}}{\rho^{\beta}}dV_{g} \end{multline} holds for all radially decreasing functions $f\in W_{0}^{1,n}(\mathbb{H}^{n})$ with $\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}\leq1$, where $\omega_{n-1}$ is the area of the surface of the unit $n$-ball in $\mathbb{H}^{n}$. Moreover, the power $n/(n-1)$ in the denominator is sharp. \end{thm} \begin{rem}\label{rem_LT16} In \cite[Theorem 1.1]{LT16}, the authors proved that the constant $\alpha_{\beta}$ in \eqref{Trudinger_GN2_hyper1} is sharp when $\beta=0$ for all functions, not necessarily being radially decreasing. \end{rem} Let us show that actually Theorem \ref{Trudinger_GN2_hyper_thm} holds for any function $f\in W_{0}^{1,n}(\mathbb{H}^{n})$ dropping the radial assumption. \begin{thm}\label{Trudinger_GN2_hyper_analog_thm} Let $\mathbb{H}^{n}\;(n\geq2)$ be the $n$-dimensional hyperbolic space. Let $0\leq\beta<n$ and let $0<\alpha<\alpha_{\beta}$ with $\alpha_{\beta}=n\omega_{n-1}^{1/(n-1)}(1-\beta/n)$. Then there exists a positive constant $\widetilde{C_{5}}=\widetilde{C_{5}}(\beta, n, \alpha)$ such that \begin{multline}\label{Trudinger_GN2_hyper1_analog} \int_{\mathbb{H}^{n}}\frac{1}{(1+|f(x)|)^{n/(n-1)}\rho^{\beta}}\left(\exp(\alpha|f(x)|^{n/(n-1)})\right.\\\left.-\sum_{k=0}^{n-2}\frac{ \alpha^{k}|f(x)|^{kn/(n-1)}}{k!}\right)dV_{g}\leq \widetilde{C_{5}}\int_{\mathbb{H}^{n}} \frac{|f(x)|^{n}}{\rho^{\beta}}dV_{g} \end{multline} holds for all functions $f\in W_{0}^{1,n}(\mathbb{H}^{n})$ with $\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}\leq1$, where $\omega_{n-1}$ is the area of the surface of the unit $n$-ball in $\mathbb{H}^{n}$. 
Moreover, the power $n/(n-1)$ in the denominator is sharp. \end{thm} \begin{proof}[Proof of Theorem \ref{Trudinger_GN2_hyper_analog_thm}] By Theorem \ref{GN_hyper_thm} with $\beta_{1}=\beta_{2}=\beta$, so that $\delta=0$ by \eqref{delta}, we obtain \begin{multline*} \left\|\frac{f}{\rho^{\frac{\beta}{q}}(1+|f|)^{\frac{n'}{q}} }\right\|_{L^{q}(\mathbb{H}^{n})}\leq \left\|\frac{f}{\rho^{\frac{\beta}{q}}}\right\|_{L^{q}(\mathbb{H}^{n})}\leq B_{3}q^{1-1/n}\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}^{1-n/q} \left\|\frac{f}{\rho^{\frac{\beta}{n}}}\right\|_{L^{n}(\mathbb{H}^{n})}^{n/q}, \end{multline*} where $B_{3}$ is given in Theorem \ref{GN_hyper_thm}. Then, using this and $\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}\leq1$, one gets \begin{multline}\label{GN2_hyper10_trud} \int_{\mathbb{H}^{n}}\frac{1}{\rho^{\beta}(1+|f(x)|)^{n'}}\left(\exp(\alpha|f(x)|^{n'})-\sum_{k=0}^{n-2} \frac{(\alpha|f(x)|^{n'})^{k}}{k!}\right)dV_{g}\\ \leq \sum_{n'k\geq n,\;k\in\mathbb{N}}\frac{(\alpha n'kB_{3}^{n'})^{k}}{k!}\int_{\mathbb{H}^{n}} \frac{|f(x)|^{n}}{\rho^{\beta}}dV_{g}. \end{multline} The series in the right hand side of \eqref{GN2_hyper10_trud} converges when $0\leq\alpha<1/(n'eB_{3}^{n'})$. Thus, we have obtained \eqref{Trudinger_GN2_hyper1_analog} with $0\leq\alpha<1/(n'eB_{3}^{n'})$. Since $B_{3}=(\alpha_{\beta}n'e)^{-1/n'}$ by \eqref{equiv_identity_GN_hyper}, we have \eqref{Trudinger_GN2_hyper1_analog} for $0\leq\alpha<\alpha_{\beta}$. The sharpness of the power $n/(n-1)$ in the denominator is obtained by Theorem \ref{Trudinger_GN2_hyper_thm}, since this power is already sharp for radially decreasing functions in \eqref{Trudinger_GN2_hyper1_analog}. \end{proof} Now we show that \eqref{Trudinger_GN2_hyper1_analog} is equivalent to the following Caffarelli-Kohn-Nirenberg type inequalities. \begin{thm}\label{GN3_hyper_thm} Let $\mathbb{H}^{n}\;(n\geq2)$ be the $n$-dimensional hyperbolic space and let $0\leq\beta<n$. 
Then for any $n\leq q<\infty$ there exists a positive constant $C_{5}=C_{5}(n,\beta,q)$ such that \begin{equation}\label{GN3_hyper} \left\|\frac{f}{\rho^{\frac{\beta}{q}}(1+|f|)^{\frac{n'}{q}} }\right\|_{L^{q}(\mathbb{H}^{n})}\leq C_{5} q^{1-1/n}\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}^{1-n/q} \left\|\frac{f}{\rho^{\frac{\beta}{n}}}\right\|_{L^{n}(\mathbb{H}^{n})}^{n/q} \end{equation} holds for all functions $f\in W_{0}^{1,n}(\mathbb{H}^{n})$. Moreover, we have \begin{equation}\label{equiv_identity_GN3_hyper} \frac{1}{\widetilde{\alpha}_{\beta} n'e}=A_{5}^{n'}=B_{5}^{n'}, \end{equation} where \begin{multline*} \widetilde{\alpha}_{\beta}=\sup\{\alpha>0; \exists \widetilde{C_{5}}=\widetilde{C_{5}}(\beta, n, \alpha):\eqref{Trudinger_GN2_hyper1_analog}\\ \textrm{ holds for all functions} \\ f\in W_{0}^{1,n}(\mathbb{H}^{n}) \textrm{ with } \|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}\leq1\}, \end{multline*} \begin{equation*} \begin{split} A_{5}=\inf\{C_{5}>0; \exists r=r(n,&\beta,C_{5}) \textrm{ with } r\geq n:\\& \eqref{GN3_hyper}\textrm{ holds }\forall f\in W_{0}^{1,n}(\mathbb{H}^{n}), \forall q \textrm{ with } r\leq q<\infty\}, \end{split} \end{equation*} \begin{equation}\label{alphaDF_GN3} B_{5}=\limsup_{q\rightarrow \infty}\sup_{f\in W_{0}^{1,n}(\mathbb{H}^{n})\backslash\{0\}} \frac{\left\|\frac{f}{\rho^{\frac{\beta}{q}}(1+|f|)^{\frac{n'}{q}}}\right\|_{L^{q}(\mathbb{H}^{n})}}{q^{1-1/n}\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}^{1-n/q} \left\|\frac{f}{\rho^{\frac{\beta}{n}}}\right\|_{L^{n}(\mathbb{H}^{n})}^{n/q}}. \end{equation} The weighted Trudinger-Moser inequalities \eqref{Trudinger_GN2_hyper1_analog} are equivalent to the Caffarelli-Kohn-Nirenberg type inequalities \eqref{GN3_hyper} with relation \eqref{equiv_identity_GN3_hyper}. \end{thm} \begin{rem}\label{rem_LT16_2} An analogue of Remark \ref{rem_B1} holds, in particular, $B_{5}$ is asymptotically sharp for \eqref{GN3_hyper}. 
When $\beta=0$, by Remark \ref{rem_LT16} we have $\widetilde{\alpha}_{\beta}=\alpha_{\beta}=n\omega_{n-1}^{1/(n-1)}$, which gives the explicit expression $B_{5}=(n\omega_{n-1}^{1/(n-1)}n'e)^{-1/n'}$. \end{rem} \begin{proof}[Proof of Theorem \ref{GN3_hyper_thm}] Since $B_{5}\leq A_{5}$, it suffices to show the following two implications: \eqref{Trudinger_GN2_hyper1_analog}$\Rightarrow$\eqref{GN3_hyper} with $\widetilde{\alpha}_{\beta}\leq(en'A_{5}^{n'})^{-1}$ and \eqref{GN3_hyper}$\Rightarrow$\eqref{Trudinger_GN2_hyper1_analog} with $1/\widetilde{\alpha}_{\beta}\leq n'eB_{5}^{n'}$. We first show \eqref{Trudinger_GN2_hyper1_analog}$\Rightarrow$\eqref{GN3_hyper} with $\widetilde{\alpha}_{\beta}\leq(en'A_{5}^{n'})^{-1}$. As in the proof of Theorem \ref{GN_hyper_thm}, it suffices to show \eqref{GN3_hyper} for $\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}\neq0$. Then, we replace $f$ by $f/\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}$ in \eqref{Trudinger_GN2_hyper1_analog} with $0<\alpha<\widetilde{\alpha}_{\beta}$ and multiply both sides by $\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}^{n}$ to get \begin{equation}\label{GN2_hyper3_0} \begin{split} \int_{\mathbb{H}^{n}}\frac{1}{\rho^{\beta}\left(1+\frac{|f(x)|}{\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}}\right)^{n'}}\sum_{k=n-1}^{\infty}& \frac{\alpha^{k}|f(x)|^{kn'}}{k!\,\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}^{kn'-n}}dV_{g}\\&\leq \widetilde{C_{5}} \int_{\mathbb{H}^{n}} \frac{|f(x)|^{n}}{\rho^{\beta}}dV_{g}. \end{split} \end{equation} From this, we note that for any $0<\varepsilon<\widetilde{\alpha}_{\beta}$ there is $C_{\varepsilon}$ such that \begin{equation}\label{GN2_hyper3} \begin{split} \int_{\mathbb{H}^{n}}\frac{1}{\rho^{\beta}\left(1+\frac{|f(x)|}{\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}}\right)^{n'}}\sum_{k=n-1}^{\infty}& \frac{(\widetilde{\alpha}_{\beta}-\varepsilon)^{k}|f(x)|^{kn'}}{k!\,\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}^{kn'-n}}dV_{g}\\&\leq C_{\varepsilon} \int_{\mathbb{H}^{n}} \frac{|f(x)|^{n}}{\rho^{\beta}}dV_{g}. 
\end{split} \end{equation} In particular, it implies that \begin{multline}\label{GN2_hyper4} \left\|\frac{f}{\rho^{\frac{\beta}{kn'}}\left(1+\frac{|f|}{\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}}\right)^{1/k}}\right\|_{L^{kn'}(\mathbb{H}^{n})}\\ \leq (C_{\varepsilon}k!)^{1/(kn')} (\widetilde{\alpha}_{\beta}-\varepsilon)^{-1/n'}\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}^{1-(n-1)/k}\left\|\frac{f}{\rho^{\frac{\beta}{n}}}\right\|_{L^{n}(\mathbb{H}^{n})}^{(n-1)/k} \end{multline} holds for all $k\geq n-1$. Moreover, for any $q\geq n$, there exists an integer $k\geq n-1$ satisfying $n'k\leq q <n'(k+1)$. Then, using H\"{o}lder's inequality for $\frac{\theta q}{n'k}+\frac{(1-\theta) q}{n'(k+1)}=1$ with $0<\theta\leq1$ we calculate \begin{equation*} \begin{split} \int_{\mathbb{H}^{n}}&\frac{|f(x)|^{q}}{\rho^{\beta}\left(1+\frac{|f(x)|}{\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}}\right)^{n'}}dV_{g}\\& =\int_{\mathbb{H}^{n}}\frac{|f(x)|^{\theta q}}{\rho^{\frac{\beta\theta q}{n'k}}\left(1+\frac{|f(x)|}{\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}}\right)^{\frac{\theta q}{k}}}\cdot\frac{|f(x)|^{(1-\theta)q}}{\rho^{\frac{\beta(1-\theta) q}{n'(k+1)}}\left(1+\frac{|f(x)|}{\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}}\right)^{\frac{(1-\theta) q}{k+1}}}dV_{g}\\& \leq \left(\int_{\mathbb{H}^{n}}\frac{|f(x)|^{n'k}}{\rho^{\beta}\left(1+\frac{|f(x)|}{\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}}\right)^{n'}}dV_{g}\right)^{\frac{\theta q}{n'k}} \left(\int_{\mathbb{H}^{n}}\frac{|f(x)|^{n'(k+1)}}{\rho^{\beta}\left(1+\frac{|f(x)|}{\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}}\right)^{n'}}dV_{g}\right)^{\frac{(1-\theta) q}{n'(k+1)}}\\& =\left\|\frac{f}{\rho^{\frac{\beta}{n'k}}\left(1+\frac{|f|}{\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}}\right)^{1/k}}\right\|_{L^{n'k}(\mathbb{H}^{n})}^{\theta q} \left\|\frac{f}{\rho^{\frac{\beta}{n'(k+1)}}\left(1+\frac{|f|}{\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}}\right)^{1/(k+1)}}\right\|_{L^{n'(k+1)}(\mathbb{H}^{n})}^{(1-\theta)q}, \end{split} \end{equation*} that is, 
\begin{equation*} \begin{split} &\left\|\frac{f}{\rho^{\frac{\beta}{q}}\left(1+\frac{|f|}{\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}}\right)^{\frac{n'}{q}} }\right\|_{L^{q}(\mathbb{H}^{n})} \\& \leq\left\|\frac{f}{\rho^{\frac{\beta}{n'k}}\left(1+\frac{|f|}{\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}}\right)^{1/k}}\right\|_{L^{n'k}(\mathbb{H}^{n})}^{\theta} \left\|\frac{f}{\rho^{\frac{\beta}{n'(k+1)}}\left(1+\frac{|f|}{\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}}\right)^{1/(k+1)}}\right\|_{L^{n'(k+1)}(\mathbb{H}^{n})}^{1-\theta}. \end{split} \end{equation*} Combining this with \eqref{GN2_hyper4}, we obtain \begin{multline}\label{GN2_hyper6} \left\|\frac{f}{\rho^{\frac{\beta}{q}}\left(1+\frac{|f|}{\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}}\right)^{\frac{n'}{q}} }\right\|_{L^{q}(\mathbb{H}^{n})}\\ \leq C_{\varepsilon}^{\frac{1}{q}}(\widetilde{\alpha}_{\beta}-\varepsilon)^{-\frac{1}{n'}} ((k+1)!)^{\frac{1}{q}}\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}^{1-n/q} \left\|\frac{f}{\rho^{\frac{\beta}{n}}}\right\|_{L^{n}(\mathbb{H}^{n})}^{n/q}. \end{multline} Since $q\geq n'k$ we get $(k+1)!\leq \Gamma(q/n'+2)$, then \eqref{GN2_hyper6} gives that \begin{multline}\label{GN2_hyper7} \left\|\frac{f}{\rho^{\frac{\beta}{q}}\left(1+\frac{|f|}{\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}}\right)^{\frac{n'}{q}} }\right\|_{L^{q}(\mathbb{H}^{n})}\\ \leq C_{\varepsilon}^{\frac{1}{q}}(\widetilde{\alpha}_{\beta}-\varepsilon)^{-\frac{1}{n'}} (\Gamma(q/n'+2))^{\frac{1}{q}}\|\nabla_{g}f\|_{L^{n}(\mathbb{H}^{n})}^{1-n/q} \left\|\frac{f}{\rho^{\frac{\beta}{n}}}\right\|_{L^{n}(\mathbb{H}^{n})}^{n/q} \end{multline} for any $q\geq n$ and for all $f\in W_{0}^{1,n}(\mathbb{H}^{n})$, which gives \eqref{GN3_hyper} after replacing $f$ by $\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}f$. 
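The exponent bookkeeping in the Hölder split above can be verified mechanically. The following sketch (illustrative only, not part of the proof) checks, in exact rational arithmetic, that the $\theta$ determined by the Hölder condition lies in $(0,1]$ and that the gradient-norm exponents recombine to $1-n/q$:

```python
from fractions import Fraction as F

# With n' = n/(n-1) and n'k <= q < n'(k+1), theta solves
#   theta*q/(n'k) + (1-theta)*q/(n'(k+1)) = 1,
# and the gradient-norm exponents recombine as
#   theta*(1-(n-1)/k) + (1-theta)*(1-(n-1)/(k+1)) = 1 - n/q.
def check(n, k, q):
    npr = F(n, n - 1)                       # n'
    assert npr * k <= q < npr * (k + 1)
    # solve for theta from the Hölder condition
    a, b = q / (npr * k), q / (npr * (k + 1))
    theta = (1 - b) / (a - b)
    assert 0 < theta <= 1
    lhs = theta * (1 - F(n - 1, k)) + (1 - theta) * (1 - F(n - 1, k + 1))
    assert lhs == 1 - F(n, 1) / q

for n in range(2, 6):
    for k in range(n - 1, 12):
        npr = F(n, n - 1)
        for q in [npr * k, npr * k + F(1, 3), npr * (k + 1) - F(1, 7)]:
            check(n, k, q)
print("exponent bookkeeping verified")
```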
For $q\rightarrow+\infty$, using \eqref{Gamma1} in \eqref{GN2_hyper7} we see that for any $\delta>0$ there is $r\geq n$ such that \begin{multline}\label{GN2_hyper8} \left\|\frac{f}{\rho^{\frac{\beta}{q}}\left(1+\frac{|f|}{\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}}\right)^{\frac{n'}{q}} }\right\|_{L^{q}(\mathbb{H}^{n})}\\ \leq ((n'e(\widetilde{\alpha}_{\beta}-\varepsilon))^{-\frac{1}{n'}}+\delta) q^{\frac{1}{n'}}\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}^{1-n/q} \left\|\frac{f}{\rho^{\frac{\beta}{n}}}\right\|_{L^{n}(\mathbb{H}^{n})}^{n/q} \end{multline} holds for all $f\in W_{0}^{1,n}(\mathbb{H}^{n})$ and all $q$ with $r\leq q<\infty$. Here, replacing $f$ by $\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}f$ we obtain \eqref{GN3_hyper}. We see that $A_{5}\leq (n'e(\widetilde{\alpha}_{\beta}-\varepsilon))^{-1/n'}+\delta$, then by the arbitrariness of $\varepsilon$ and $\delta$ we obtain $\widetilde{\alpha}_{\beta}\leq(en'A_{5}^{n'})^{-1}$. It remains to show that \eqref{GN3_hyper}$\Rightarrow$\eqref{Trudinger_GN2_hyper1_analog} with $1/\widetilde{\alpha}_{\beta}\leq n'eB_{5}^{n'}$. By \eqref{GN3_hyper}, for any $q$ with $n\leq q<\infty$ there exists $C_{5}=C_{5}(n,\beta,q)>0$ such that \begin{equation}\label{GN2_hyper9} \left\|\frac{f}{\rho^{\frac{\beta}{q}}(1+|f|)^{\frac{n'}{q}} }\right\|_{L^{q}(\mathbb{H}^{n})}\leq C_{5}q^{1-1/n}\|\nabla_{g}f\|_{L^{n}(\mathbb{H}^{n})}^{1-n/q} \left\|\frac{f}{\rho^{\frac{\beta}{n}}}\right\|_{L^{n}(\mathbb{H}^{n})}^{n/q} \end{equation} holds for all $f\in W_{0}^{1,n}(\mathbb{H}^{n})$. By this and taking into account $\|\nabla_{g} f\|_{L^{n}(\mathbb{H}^{n})}\leq1$, one gets \begin{multline}\label{GN2_hyper10} \int_{\mathbb{H}^{n}}\frac{1}{\rho^{\beta}(1+|f(x)|)^{n'}}\left(\exp(\alpha|f(x)|^{n'})-\sum_{k=0}^{n-2} \frac{(\alpha|f(x)|^{n'})^{k}}{k!}\right)dV_{g}\\ \leq \sum_{n'k\geq n,\;k\in\mathbb{N}}\frac{(\alpha n'kC_{5}^{n'})^{k}}{k!}\int_{\mathbb{H}^{n}} \frac{|f(x)|^{n}}{\rho^{\beta}}dV_{g}. 
\end{multline} The series in the right hand side of \eqref{GN2_hyper10} converges when $0\leq\alpha<1/(n'eC_{5}^{n'})$. Thus, we have obtained \eqref{Trudinger_GN2_hyper1_analog} with $0\leq\alpha<1/(n'eC_{5}^{n'})$. Hence $\widetilde{\alpha}_{\beta}\geq 1/(n'eC_{5}^{n'})$ for all $C_{5}\geq B_{5}$, which gives $\widetilde{\alpha}_{\beta}\geq 1/(n'eB_{5}^{n'})$. Thus, we have completed the proof of Theorem \ref{GN3_hyper_thm}. \end{proof}
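Both equivalence proofs end by invoking the same convergence criterion: the series $\sum_{k}(\alpha n'kC^{n'})^{k}/k!$ converges precisely when $\alpha n'eC^{n'}<1$, since $(k^{k}/k!)^{1/k}\to e$. A numerical illustration (ours, for sample values of $n'$ and $C$):

```python
import math

# The k-th root of the k-th term (alpha*n'*k*C^{n'})^k / k! tends to
# alpha*n'*e*C^{n'}; a value < 1 (resp. > 1) for large k signals
# convergence (resp. divergence) by the root test.
def kth_root_of_term(alpha, npr, C, k):
    # ((alpha*n'*k*C^{n'})^k / k!)^{1/k}, via logarithms to avoid overflow
    return math.exp(math.log(alpha * npr * k * C ** npr) - math.lgamma(k + 1) / k)

npr, C = 2.0, 0.7                       # sample n' and C
threshold = 1 / (npr * math.e * C ** npr)
for alpha in [0.5 * threshold, 2.0 * threshold]:
    r = kth_root_of_term(alpha, npr, C, 400)
    print(alpha, r)                     # below threshold: r < 1; above: r > 1
```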
\section{Introduction} Donaldson--Thomas (DT) invariants, first introduced by Thomas \cite{Tho00}, are integer-valued deformation invariants of a compact Calabi--Yau $3$-fold. They provide a virtual count of curves embedded in a Calabi--Yau $3$-fold, and are conjecturally equivalent to other enumerative invariants, such as Gromov--Witten invariants \cite{MNOP1}, Pandharipande--Thomas invariants \cite{PT09}, and Gopakumar--Vafa invariants \cite{MT18}. All of these are closely related to BPS invariants, which are motivated by the counting of BPS states in string theory. A quantum Fermat quintic threefold, introduced in \cite{Kan15}, is the Fermat quintic hypersurface in a quantum projective $4$-space, in the language of non-commutative projective schemes developed by Artin and Zhang \cite{AZ94}. DT invariants of a quantum Fermat quintic threefold are defined in the author's previous work \cite{Liu19} as follows. Any quantum Fermat quintic threefold is represented by a coherent sheaf $\sh{A}$ of non-commutative $\sh{O}_X$-algebras on $X\cong\mathbb{P}^3$. We consider Simpson's Hilbert schemes $\mathrm{Hilb}^h(\sh{A})$ parameterizing $\sh{A}$-module quotients $\sh{A}\to\sh{F}$. It is shown that if $\deg(h)\leq 1$, then $\mathrm{Hilb}^h(\sh{A})$ admits a symmetric obstruction theory, and thus carries a virtual fundamental class $[\mathrm{Hilb}^h(\sh{A})]_{\mathrm{vir}}$ of virtual dimension $0$. DT invariants are defined as \[ \int_{[\mathrm{Hilb}^h(\sh{A})]_{\mathrm{vir}}} 1, \] which equals the Euler characteristic $\chi\big(\mathrm{Hilb}^h(\sh{A}),\nu_{\mathrm{Hilb}^h(\sh{A})}\big)$ weighted by the Behrend function \cite{Beh09}. We will only consider constant Hilbert polynomials, and write \[ Z^{\sh{A}}(t)=\sum_{n=0}^{\infty}\chi\big(\mathrm{Hilb}^n(\sh{A}),\nu_{\mathrm{Hilb}^n(\sh{A})}\big)\,t^n \] for the generating function of degree zero DT invariants. 
\vspace{1mm} In this paper, we will focus on the generic quantum Fermat quintic threefold \cite[\textsection 2.3]{Liu19}. We first give explicit local models of the generic quantum Fermat quintic threefold $(X,\sh{A})$. \begin{theorem}[= Theorem~\ref{thm:local-model}] There exists a stratification $X=X_{(0)}\coprod\ldots\coprod X_{(3)}$ of $X$ and coherent sheaves $\sh{J}_{(i)}$ of non-commutative algebras on $\mathbb{C}^3$ such that for any point $p\in X_{(i)}$, there is an \emph{analytic} local chart $U\to\mathbb{C}^3$ of $p$ with a (non-unique) isomorphism \[ \sh{A}|_{U}\cong\sh{J}_{(i)}|_{U} \] of sheaves of non-commutative algebras. These sheaves of algebras $\sh{J}_{(i)}$ are (up to Morita equivalence) Jacobi algebras of quivers with potential. \end{theorem} More specifically, for $i\neq 0$ the $\sh{J}_{(i)}$'s are (up to Morita equivalence) just copies of $\mathbb{C}[x,y,z]$, whose DT invariants are well-studied. The Jacobi algebra $\sh{J}_{(0)}$ is defined by the quiver $Q$ \[ \begin{xy} 0;<2pt,0pt>:<0pt,-2pt>:: (40,16) *+{\bullet} ="0", (8,40) *+{\bullet} ="2", (32,40) *+{\bullet} ="1", (0,16) *+{\bullet} ="3", (20,0) *+{\bullet} ="4", (20,20) *+<2pt>{}, "4":{\ar@<-.5ex>@/^/"0"}, "4":{\ar@<.5ex>@/^/"0"}, "4":{\ar"2"}, "0":{\ar@<-.5ex>@/^/"1"}, "0":{\ar@<.5ex>@/^/"1"}, "0":{\ar"3"}, "1":{\ar@<-.5ex>@/^/"2"}, "1":{\ar@<.5ex>@/^/"2"}, "1":{\ar"4"}, "2":{\ar@<-.5ex>@/^/"3"}, "2":{\ar@<.5ex>@/^/"3"}, "2":{\ar"0"}, "3":{\ar@<-.5ex>@/^/"4"}, "3":{\ar@<.5ex>@/^/"4"}, "3":{\ar"1"}, \end{xy} \] with a certain potential $W$. \vspace{1mm} These local models $\sh{J}_{(i)}$ allow us to stratify the Hilbert schemes $\mathrm{Hilb}^n(\sh{A})$ of points. The generating function $Z^{\sh{A}}(t)$ can be expressed in terms of the Hilbert schemes $\mathrm{Hilb}^n(\sh{J}_{(i)})$. 
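As a quick consistency check (ours, not part of the paper's arguments), one can verify that the arrows of the displayed quiver $Q$, with the vertices labeled $0,\ldots,4$ as in the xy-pic source (an assumption on our part), match the McKay quiver of the $\mu_5$-action with weights $(1,1,3)$ recalled below, which has two arrows $i\to i+1$ and one arrow $i\to i+3$ (mod $5$):

```python
# Arrows read off the figure: double arrows and single arrows between the
# five vertices labeled 0..4 in the xy-pic source.
arrows = [(4, 0), (4, 0), (4, 2), (0, 1), (0, 1), (0, 3), (1, 2), (1, 2),
          (1, 4), (2, 3), (2, 3), (2, 0), (3, 4), (3, 4), (3, 1)]

def adjacency(arrow_list):
    # a[i][j] = number of arrows i -> j
    a = [[0] * 5 for _ in range(5)]
    for s, t in arrow_list:
        a[s][t] += 1
    return a

# McKay rule for mu_5 acting on C^3 with weights (1,1,3):
# from each character i, one arrow i -> i+w (mod 5) per weight w.
mckay = [(i, (i + w) % 5) for i in range(5) for w in (1, 1, 3)]
assert adjacency(arrows) == adjacency(mckay)
print("quiver matches the McKay quiver of mu_5(1,1,3)")
```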
\begin{theorem}[= Theorem~\ref{thm:fibration}] We have \[ Z^{\sh{A}}(t)=\prod_{i=0}^3\left(\sum_{n=0}^{\infty}\chi\big(\mathrm{Hilb}^n(\sh{J}_{(i)})_0,\nu_{\mathrm{Hilb}^n(\sh{J}_{(i)})}\big)\,t^n \right), \] where $\mathrm{Hilb}^n(\sh{J}_{(i)})_0$ can be regarded as an analogue of the punctual Hilbert scheme of points. \end{theorem} Combining this with the known results on DT invariants of $\mathbb{C}^3$, we eventually obtain \[ Z^{\sh{A}}(t)=\Big(Z^{Q,W}(t)\Big)^{10}\Big(M(-t^5)\Big)^{-50}, \] where $Z^{Q,W}(t)$ is the generating function of DT invariants of the quiver $(Q,W)$ with potential, and $M(t)$ is the MacMahon function. \vspace{1mm} Finally, we turn to the computation of the DT invariants $Z^{Q,W}(t)$. We observe that the quiver $Q$ is the McKay quiver of the $\mu_5$-action on $\mathbb{C}^3$ with weights $(1,1,3)$. This action defines an orbifold $[\mathbb{C}^3/\mu_5]$, and DT invariants of $[\mathbb{C}^3/\mu_5]$ were computed in \cite{BCY} using the notion of colored plane partitions \cite{You10}. However, our DT invariants $Z^{Q,W}$ use a different framing vector (stability condition). We introduce the notion of \emph{$Q$-multi-colored plane partitions} associated to a quiver $Q$. To each $Q$-multi-colored plane partition is associated a dimension vector $\mathbf{d}$ of $Q$, and we denote by $n_{\mathbf{d}}(Q)$ the number of $Q$-multi-colored plane partitions with dimension vector $\mathbf{d}$. \begin{theorem}[= Theorem~\ref{thm:DT-to-part}] We have \[ Z^{Q,W}(t)=\sum_{n=0}^{\infty}\left(\sum_{|\mathbf{d}|=n}(-1)^{|\mathbf{d}|+\langle \mathbf{d},\mathbf{d}\rangle_Q}n_{\mathbf{d}}(Q)\right)t^n. \] \end{theorem} Unfortunately, we do not obtain a closed formula for $Z^{Q,W}(t)$, but we show that the numbers $n_{\mathbf{d}}(Q)$ can be computed from the numbers of $\mu_5(1,1,3)$-colored plane partitions. \subsection*{Notations} We work over the field $\mathbb{C}$ of complex numbers. All schemes and algebras are separated and noetherian over $\mathbb{C}$. 
All (sheaves of) algebras are associative and unital. By ``non-commutative'', we mean not necessarily commutative, and we assume that non-commutative rings are both left and right noetherian. For a (sheaf of) non-commutative ring $A$, an $A$-module is always a left $A$-module. All rings not specified as non-commutative are commutative. \vspace{1mm} We are particularly interested in a special class of non-commutative rings, the \emph{quantum polynomial rings}. These are polynomial rings whose variables commute only up to non-zero scalars. We will use the notation \[ \mathbb{C}\langle x_1,\ldots, x_n\rangle_{(q_{ij})}:=\mathbb{C}\langle x_1,\ldots, x_n\rangle\big/\left(x_ix_j-q_{ij}x_jx_i \right), \] where $q_{ij}\in\mathbb{C}^*$ and $q_{ii}=q_{ij}q_{ji}=1$ for all $i,j$. \vspace{1mm} For any scheme $X$ of finite type (over $\mathbb{C}$), we write $\chi(X)$ for the \emph{topological} Euler characteristic of $X$. We also consider the weighted Euler characteristic \[ \chi_{\mathrm{vir}}(X):=\chi(X,\nu_X)=\sum_{c\in\mathbb{Z}} c\,\chi(\nu_X^{-1}(c)), \] where $\nu_X$ is the Behrend constructible function. For a locally closed subscheme $Z\subset X$, we write \[ \chi_{\mathrm{vir}}(X,Z)=\chi(Z,\nu_X|_Z). \] \vspace{1mm} We will study various \emph{generating functions}. We will use the notation $Z(t_1,\ldots,t_n)$ for an element of the ring of formal power series $\mathbb{Z}[\![t_1,\ldots,t_n]\!]$. The constant term $Z(0,\ldots,0)$ will always be $1$, hence products of generating functions make sense. \vspace{4mm} Throughout this paper, we only consider the generic quantum Fermat quintic threefold. 
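In a quantum polynomial ring every monomial has a normal form $\lambda\, x_1^{a_1}\cdots x_n^{a_n}$, obtained by repeatedly applying the relations $x_ix_j=q_{ij}x_jx_i$. A minimal sketch (our own toy illustration, not code from the paper), with a two-variable example and $q$ a primitive fifth root of unity:

```python
import cmath

# Normal ordering in C<x_0,...,x_{n-1}>_{(q_ij)}: sort a word into
# non-decreasing index order, accumulating the scalar; each adjacent swap
# x_a x_b -> q_ab x_b x_a (for a > b) contributes one factor q_ab.
def normal_order(word, qmat):
    scalar, w = 1, list(word)
    changed = True
    while changed:                       # bubble sort on the letters
        changed = False
        for p in range(len(w) - 1):
            a, b = w[p], w[p + 1]
            if a > b:
                scalar *= qmat[a][b]
                w[p], w[p + 1] = b, a
                changed = True
    return scalar, w

q = cmath.exp(2j * cmath.pi / 5)         # a primitive 5th root of unity
qmat = [[1, q], [q ** -1, 1]]            # q_01 = q, q_10 = q^{-1}; q_ij*q_ji = 1
s, w = normal_order([1, 0, 1, 0], qmat)  # the word x_1 x_0 x_1 x_0
assert w == [0, 0, 1, 1]
# 3 out-of-order pairs (x_1 before x_0), each contributing q_10 = q^{-1}
assert abs(s - q ** -3) < 1e-12
```

The accumulated scalar is determined by the inversions of the word, consistent with the requirement $q_{ii}=q_{ij}q_{ji}=1$.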
We will fix a particular graded algebra \[ \mathbb{C}\langle t_0,\ldots,t_4\rangle_{(q_{ij})}\big/\left( \sum_{k=0}^4 t_k^5\right) \] with quantum parameters \[ (q_{ij})_{i,j}=\begin{pmatrix} 1 & q & q^{-1} & q & q^{-1} \\ q^{-1} & 1 & q & q^{-1} & q \\ q & q^{-1} & 1 & q & q^{-1} \\ q^{-1} & q & q^{-1} & 1 & q \\ q & q^{-1} & q & q^{-1} & 1 \end{pmatrix}, \] where $q\in\mu_5$ is a fixed primitive fifth root of unity. Let $(X,\sh{A})$ be the associated pair of a smooth projective variety $X$ with a coherent sheaf $\sh{A}$ of non-commutative $\sh{O}_X$-algebras on $X$. See \cite[\textsection 3.3]{Liu19} for precise definitions. \section{Quivers with potential} We briefly review non-commutative DT theory for a quiver with potential and fix some notations. A standard reference is \cite{Sze08}. Let $Q=(Q_0,Q_1)$ be a quiver, where $Q_0$ is the set of vertices, and $Q_1$ the set of arrows. Let $\mathbb{C}Q$ be the path algebra. We will denote by $N_Q:=\mathbb{Z}^{\oplus Q_0}$ the free abelian group of dimension vectors, and $N_Q^+:=\mathbb{Z}_{\geq 0}^{\oplus Q_0}$. There is a bilinear form on $N_Q$ defined by \[ \langle \mathbf{d},\mathbf{d}'\rangle_Q=\sum_{i\in Q_0} d_i d'_{i}-\sum_{a\in Q_1} d_{s(a)} d'_{t(a)}. \] The Euler pairing is given by \[ \chi_Q(\mathbf{d},\mathbf{d}')=\langle \mathbf{d},\mathbf{d}'\rangle_Q-\langle \mathbf{d}',\mathbf{d}\rangle_Q. \] Let $\bm{f}\in N_Q^+$ be a framing vector, $\bm{f}\neq 0$. An $\bm{f}$-framed representation of $Q$ is a representation $V=(V_i,T_a)_{i\in Q_0, a\in Q_1}$ of $Q$ together with vectors $v_i^1,\ldots, v_i^{f_i}$ in $V_i$ for each $i$ which generate $V$ (as a left $\mathbb{C}Q$-module). Let $W$ be a potential of $Q$, that is, a linear combination of cyclic paths. We define the Jacobi algebra \[ \mathrm{Jac}(Q,W)=\mathbb{C}Q\big/\left(\partial_a W\right)_{a\in Q_1}. \] A representation of $(Q,W)$ is a finite-dimensional left $\mathrm{Jac}(Q,W)$-module, and one defines framed representations of $(Q,W)$ similarly. 
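The form $\langle\cdot,\cdot\rangle_Q$ and the Euler pairing are straightforward to evaluate in code. A minimal sketch (our own illustration; the arrow multiplicities below assume the McKay-quiver description of $Q$, with two arrows $i\to i+1$ and one arrow $i\to i+3$ mod $5$):

```python
# Arrow-multiplicity matrix m[i][j] = #{arrows i -> j} for the 5-vertex
# quiver Q from the introduction (assumed McKay-quiver structure).
m = [[0] * 5 for _ in range(5)]
for i in range(5):
    m[i][(i + 1) % 5] += 2
    m[i][(i + 3) % 5] += 1

def form(d, dp):
    # <d,d'>_Q = sum_i d_i d'_i - sum_{arrows a} d_{s(a)} d'_{t(a)}
    return sum(d[i] * dp[i] for i in range(5)) \
        - sum(m[i][j] * d[i] * dp[j] for i in range(5) for j in range(5))

def chi(d, dp):
    # Euler pairing: antisymmetrization of the form
    return form(d, dp) - form(dp, d)

e0, e1 = [1, 0, 0, 0, 0], [0, 1, 0, 0, 0]
print(form(e0, e0), chi(e0, e1), chi(e0, e0))  # chi(d,d) = 0 always
```

With such a helper one can also evaluate the sign $(-1)^{|\mathbf{d}|+\langle\mathbf{d},\mathbf{d}\rangle_Q}$ appearing in Theorem~\ref{thm:DT-to-part}.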
It is known that the fine moduli space $M^{\bm{f},\mathbf{d}}(Q,W)$ of $\bm{f}$-framed representations of $(Q,W)$ is a critical locus of a regular function on a smooth scheme. Therefore it makes sense to define DT invariants via weighted Euler characteristics. Let \[ Z^{Q,W,\bm{f}}(\bm{t})=\sum_{\mathbf{d}\in N_Q^+}\chi_{\mathrm{vir}}\big(M^{\bm{f},\mathbf{d}}(Q,W)\big)\,\bm{t}^{\mathbf{d}} \] be the generating function of DT invariants of $(Q,W)$ with framing vector $\bm{f}$, where $\bm{t}=(t_i)_{i\in Q_0}$ and $\bm{t}^{\mathbf{d}}=\prod_{i\in Q_0} t_i^{d_i}$. The framing vector $\bm{f}$ should be interpreted as a choice of stability condition. If the framing vector is $\bm{f}=(1,\ldots,1)$, we will simply write $Z^{Q,W}$ for $Z^{Q,W,(1,\ldots,1)}$. In this case, $(1,\ldots,1)$-framed representations are finite-dimensional \emph{cyclic} $\mathrm{Jac}(Q,W)$-modules. We write \[ \mathrm{Hilb}^{\mathbf{d}}(Q,W):= M^{(1,\ldots,1),\mathbf{d}}(Q,W), \] which can be viewed as a Hilbert scheme of points on the non-commutative affine space $\mathrm{Jac}(Q,W)$. For an integer $n$, let \[ \mathrm{Hilb}^n(Q,W)=\coprod_{|\mathbf{d}|=n}\mathrm{Hilb}^{\mathbf{d}}(Q,W). \] We will abuse notation and write \[ Z^{Q,W}(t):=Z^{Q,W}(t,\ldots,t)=\sum_{n=0}^{\infty}\chi_{\mathrm{vir}}\big(\mathrm{Hilb}^n(Q,W)\big)\,t^n. \] \section{Simple modules and the Ext-quiver} In this section, we study zero-dimensional coherent $\sh{A}$-modules, which we will simply call finite-dimensional $\sh{A}$-modules. Let $\mathrm{Coh}(\sh{A})_{\mathrm{fd}}$ be the category of finite-dimensional $\sh{A}$-modules. \subsection{Finite-dimensional $\sh{A}$-modules} We first note that the category $\mathrm{Coh}(\sh{A})_{\mathrm{fd}}$ can be studied locally. Any finite-dimensional $\sh{A}$-module is supported (as a coherent sheaf on $X$) at finitely many points. \begin{lemma} For any finite-dimensional $\sh{A}$-module $\sh{F}$, \[ \sh{F}=\bigoplus_{x\in\mathrm{supp}(\sh{F})}\sh{F}_x. 
\] \end{lemma} \begin{proof} We have $\sh{F}=\bigoplus_{x\in\mathrm{supp}(\sh{F})}\sh{F}_x$ as coherent sheaves on $X$. Since the $\sh{A}$-action on $\sh{F}$ is defined locally, $\sh{F}_x$ is naturally an $\sh{A}$-submodule for each $x$. Furthermore, it is clear that the projection $\sh{F}\to\sh{F}_x$ is a morphism of $\sh{A}$-modules. We see that $\sh{F}=\bigoplus_{x\in\mathrm{supp}(\sh{F})}\sh{F}_x$ in $\mathrm{Coh}(\sh{A})_{\mathrm{fd}}$. \end{proof} If we choose an affine open cover $\{U_i\}$ of $X$, then for each $U_i$, $\sh{A}|_{U_i}$ is given by a non-commutative algebra, and $\mathrm{Coh}(\sh{A}|_{U_i})$ is the category $\mathrm{Mod}(\sh{A}|_{U_i})$ of honest modules over this algebra. Then the categories $\{\mathrm{Coh}(\sh{A}|_{U_i})\}_i$ can be regarded as an affine open cover of $\mathrm{Coh}(\sh{A})$. For the generic quantum Fermat quintic threefold $(X,\sh{A})$, \[ X=\mathrm{Proj}\big(\mathbb{C}[x_0,\ldots,x_4]/(x_0+\ldots+x_4)\big) \] is a hyperplane in $\mathbb{P}^4$. Let $\{U_{ij}\}_{i\neq j}$ be the affine open cover of $X$ defined by $U_{ij}=(x_ix_j\neq 0)$. Let $A$ be the non-commutative algebra corresponding to $\sh{A}|_{U_{01}}$. By definition, $A$ is the degree zero part of the graded algebra \[ \left(\mathbb{C}\langle t_0,\ldots,t_4\rangle\big/\Big(\sum_k t_k^5, t_it_j-q_{ij}t_jt_i\Big)\right)\left[ \frac{1}{t_0^5},\frac{1}{t_1^5}\right]. \] Then we can write \begin{equation}\label{eq:local-algebra} A=\mathbb{C}\Big\langle u_1,\frac{1}{u_1},u_2,u_3,u_4\Big\rangle\Big/\left(1+\sum_{k=1}^4 u_k^5, u_iu_j-\overline{q}_{ij}u_ju_i\right), \end{equation} where $u_i=(t_it_0^4)/(t_0^5)$ and \begin{equation}\label{eq:local-quan} \big(\overline{q}_{ij}\big)=\begin{pmatrix} 1 & q^3 & q^4 & q^3 \\ q^2 & 1 & q^4 & q^4 \\ q & q & 1 & q^3 \\ q^2 & q & q^2 & 1 \end{pmatrix}.
\end{equation} \begin{lemma} For each $i\neq j$, there is an isomorphism $f:U_{01}\to U_{ij}$ such that $f^*(\sh{A}|_{U_{ij}})$ is isomorphic to $A$, up to a possible change of primitive root $q\in\mu_5$. \end{lemma} \begin{proof} This is proved by an explicit computation. For each $i\neq j$, let $\sigma\in S_5$ be a permutation mapping $\{i,j\}$ to $\{0,1\}$. Then $\sigma$ defines a change of variables and induces an isomorphism $f:U_{01}\to U_{ij}$. In this case we can compute the non-commutative algebra $f^*(\sh{A}|_{U_{ij}})$ as above. We see that there exists a permutation $\sigma$ so that $f^*(\sh{A}|_{U_{ij}})$ is equal to $A$, after a possible change of primitive root $q\in\mu_5$. \end{proof} Therefore it is sufficient to study finite-dimensional $A$-modules. \subsection{Simple modules and the Ext-quiver} As seen in \cite{KS08}, semistable objects in a Calabi--Yau-3 category should be locally given by representations of quivers with potential, and the quivers are obtained as the Ext-quivers at stable objects. See also \cite{Toda18} for the case of the category of coherent sheaves on a Calabi--Yau threefold. For the category $\mathrm{Coh}(\sh{A})_{\mathrm{fd}}$, a finite-dimensional $\sh{A}$-module is always semistable, and it is stable if and only if it is simple. We consider the natural forgetful map \[ \mathrm{Hilb}^n(\sh{A})\to \msp{M}^n(\sh{A}):=\msp{M}^{\mathrm{ss},n}(\sh{A}). \] This is the analogue of the Hilbert--Chow map. The closed points of the coarse moduli scheme $\msp{M}^n(\sh{A})$ correspond to polystable $\sh{A}$-modules, that is, semisimple $\sh{A}$-modules. As shown in the previous section, we only need to consider finite-dimensional $A$-modules. \begin{lemma} All simple $A$-modules have dimension $1$ or a multiple of $5$. Furthermore, there are exactly $5$ one-dimensional $A$-modules, given by \[ (u_1,u_2,u_3,u_4)=(\xi,0,0,0), \] where $\xi\in\mathbb{C}$ and $1+\xi^5=0$.
\] \end{lemma} \begin{proof} Let $V$ be a $d$-dimensional simple $A$-module. We abuse notation and write $u_i\in\mathrm{End}_{\mathbb{C}}(V)$ for the action of $u_i\in A$. The relations $u_i u_j=\overline{q}_{ij}u_ju_i$ imply that for each $i$, $\ker(u_i)\subset V$ is an invariant subspace. Since $V$ is simple, each $u_i$ is therefore either $0$ or invertible. Next, taking determinants of the relations yields \[ \det(u_i)\det(u_j)=\overline{q}_{ij}^d\det(u_j)\det(u_i). \] If $d$ is not a multiple of $5$, then $\overline{q}_{ij}^d\neq 1$ whenever $\overline{q}_{ij}\neq 1$, so $\det(u_i)\det(u_j)=0$ for all such pairs. Since $u_1$ is invertible, $\det(u_i)=0$, and thus $u_i=0$, for all $i\neq 1$. Hence $V$ is a simple module over the commutative algebra generated by $u_1^{\pm 1}$, so $d=1$ and $u_1$ acts as a scalar $\xi\in\mathbb{C}$ with $1+\xi^5=0$. \end{proof} We observe that the $5$ one-dimensional simple $A$-modules are supported at the point $p_{01}=\pcoor{1:-1:0:0:0}\in X$, and they are all the simple $A$-modules supported at $p_{01}$. We denote these $5$ simple modules by $E_i$, where $E_i$ corresponds to $\xi=-q^i$, for $i\in\mathbb{Z}/5$. \begin{definition} The Ext-quiver $Q$ associated to $\{E_i\}_i$ is the quiver whose vertex set is $Q_0=\{E_i\}_i$ and in which the number of arrows from $E_i$ to $E_j$ is equal to the dimension of $\mathrm{Ext}^1_A\big(E_i, E_j\big)$. \end{definition} \begin{proposition} The $\mathrm{Ext}$-quiver $Q$ associated to $(E_0,E_1,\ldots,E_4)$ is \begin{equation}\label{eq:the-quiver} \begin{xy} 0;<2pt,0pt>:<0pt,-2pt>:: (40,16) *+[Fo]{E_0} ="0", (8,40) *+[Fo]{E_2} ="2", (32,40) *+[Fo]{E_1} ="1", (0,16) *+[Fo]{E_3} ="3", (20,0) *+[Fo]{E_4} ="4", (20,20) *+<2pt>{}, "4":{\ar@<-.5ex>"2"}, "4":{\ar@<.5ex>"2"}, "4":{\ar@/_/"3"}, "0":{\ar@<-.5ex>"3"}, "0":{\ar@<.5ex>"3"}, "0":{\ar@/_/"4"}, "1":{\ar@<-.5ex>"4"}, "1":{\ar@<.5ex>"4"}, "1":{\ar@/_/"0"}, "2":{\ar@<-.5ex>"0"}, "2":{\ar@<.5ex>"0"}, "2":{\ar@/_/"1"}, "3":{\ar@<-.5ex>"1"}, "3":{\ar@<.5ex>"1"}, "3":{\ar@/_/"2"}, \end{xy} \end{equation} \end{proposition} \begin{proof} The group $\mathrm{Ext}^1_{A}(E_i,E_j)$ classifies extensions $0\to E_j\to F\to E_i\to 0$.
The actions of the $u_k$ on $F$ are of the form \[ u_1= \begin{pmatrix} -q^j & 0 \\ 0 & -q^i \end{pmatrix} \text{ and } u_k= \begin{pmatrix} 0 & a_k \\ 0 & 0 \end{pmatrix} \text{ for $k=2,3,4$.} \] The relations imply that $a_k(\overline{q}_{1k}q^i-q^j)=0$. Thus \[ \dim\mathrm{Ext}^1_{A}(E_i,E_j)=\text{the number of $k$'s such that }\overline{q}_{1k}=q^{j-i}. \] \end{proof} We denote the arrows of $Q$ by $a_i,c_i:E_i\to E_{i+3}$ and $b_i: E_i\to E_{i+4}$. From the construction of the Ext-quiver $Q$, we see that the arrows $a_i$'s, $b_i$'s, and $c_i$'s correspond to the actions of $u_2$, $u_3$, and $u_4$ respectively. Then the $q$-commuting relations translate into \[ \begin{aligned} a_{i}b_{i+1}-q^4b_{i+4}a_{i+1}&=0, \\ b_{i}c_{i+2}-q^3c_{i+1}b_{i+2}&=0, \\ a_{i}c_{i+2}-q^4c_{i}a_{i+2}&=0. \end{aligned} \] As expected, these relations can be patched into a potential \[ W=\bm{b}\bm{a}\bm{c}-q^4\bm{b}\bm{c}\bm{a}, \] where \[ \begin{split} \bm{a} &=a_0+a_1+a_2+a_3+a_4, \\ \bm{b} &=q^4b_0+b_1+qb_2+q^2b_3+q^3b_4, \\ \bm{c} &=c_0+c_1+c_2+c_3+c_4. \end{split} \] In the next section, we will show that the quiver with potential $(Q,W)$ in fact gives a local model of $\sh{A}$ near the point $p_{01}\in X$. \section{Local models of $(X,\sh{A})$} Let $(Q,W)$ be the quiver with potential from the previous section. \begin{lemma} The Jacobi algebra $\mathrm{Jac}(Q,W)$ is isomorphic to \[ \mathbb{C}\langle e,u,v,w\rangle_{(\overline{q}_{ij})}\big/(e^5-1), \] where $(\overline{q}_{ij})$ are the quantum parameters \eqref{eq:local-quan}. \end{lemma} \begin{proof} Consider the element $e=e_0+qe_1+q^2e_2+q^3e_3+q^4e_4\in\mathbb{C}Q$, where $e_i$ is the idempotent corresponding to the vertex $i$, and $u = a_0+a_1+a_2+a_3+a_4$, $v = b_0+b_1+b_2+b_3+b_4$, and $w = c_0+c_1+c_2+c_3+c_4$. It is clear that $e^5=1$ and the elements $e,u,v,w$ satisfy the appropriate $q$-commuting relations.
Therefore $\mathbb{C}\langle e,u,v,w\rangle_{(\overline{q}_{ij})}\big/(e^5-1)$ is naturally a subalgebra of $\mathrm{Jac}(Q,W)$. To show that they are isomorphic, it is sufficient to show that each idempotent $e_i$ lies in the subalgebra generated by $e$. For each $k$, we have $e^k=e_0+q^ke_1+q^{2k}e_2+q^{3k}e_3+q^{4k}e_4$. So \[ \begin{pmatrix} 1 \\ e \\ e^2 \\ e^3 \\ e^4 \end{pmatrix}= \begin{pmatrix} 1 & 1 & 1 & 1 & 1 \\ 1 & q & q^2 & q^3 & q^4 \\ 1 & q^2 & q^4 & q & q^3 \\ 1 & q^3 & q & q^4 & q^2 \\ 1 & q^4 & q^3 & q^2 & q \end{pmatrix}\begin{pmatrix} e_0 \\ e_1 \\ e_2 \\ e_3 \\ e_4 \end{pmatrix}. \] The matrix in the middle is a Vandermonde matrix, which is invertible. This completes the proof. \end{proof} \begin{remark} One alternative description of the Jacobi algebra $\mathrm{Jac}(Q,W)$ is as follows. Consider the algebra \[ \mathbb{C}\langle u,v,w\rangle_q:=\mathbb{C}\langle u,v,w\rangle\big/\left(uv-q^4vu, vw-q^4wv, wu-q^4uw \right), \] which is the Jacobi algebra of a quantized affine $3$-space (\cite{CMPS}). Let $G=\mu_5$ with an action on $\mathbb{C}\langle u,v,w\rangle_q$ defined by \[ q\cdot(u,v,w)=(q^3u,q^4v,q^3w). \] Then the Jacobi algebra $\mathrm{Jac}(Q,W)$ is isomorphic to the crossed product $\mathbb{C}\langle u,v,w\rangle_q\rtimes\mu_5$; note that here $v$ corresponds to $q^4b_0+b_1+qb_2+q^2b_3+q^3b_4$. From this perspective, the Jacobi algebra $\mathrm{Jac}(Q,W)$ is a quantization of the orbifold $[\mathbb{C}^3/\mu_5]$. \end{remark} The Jacobi algebra $\mathrm{Jac}(Q,W)$ contains a subalgebra \[ \mathbb{C}[x,y,z]:=\mathbb{C}[u^5,v^5,w^5]\subset Z(\mathrm{Jac}(Q,W)) \] and is a finite $\mathbb{C}[x,y,z]$-module. Thus $\mathrm{Jac}(Q,W)$ can be regarded as a sheaf $\sh{J}$ of non-commutative algebras on $\mathbb{C}^3=\mathrm{Spec}\,\mathbb{C}[x,y,z]$.
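The centrality of $u^5,v^5,w^5$ can be checked by pure phase bookkeeping: normal-ordering a product of monomials in $\mathbb{C}\langle u,v,w\rangle_q$ only multiplies by powers of $q$, and every power of $q$ picked up by a fifth power of a generator is a multiple of $q^5=1$. The following Python sketch (our own bookkeeping, assuming the relations $uv=q^4vu$, $vw=q^4wv$, $wu=q^4uw$) records a monomial $q^p\,u^a v^b w^c$ as a tuple and multiplies by normal-ordering.

```python
# Monomials in the q-commuting algebra  C<u,v,w>_q  with relations
#   u v = q^4 v u,   v w = q^4 w v,   w u = q^4 u w,   q a primitive 5th root.
# A monomial is (phase, a, b, c), representing q^phase * u^a v^b w^c.
def mul(m1, m2):
    p1, a1, b1, c1 = m1
    p2, a2, b2, c2 = m2
    # normal-ordering phases:  v u = q u v,  w u = q^4 u w,  w v = q v w
    phase = (p1 + p2 + a2 * (4 * c1 + b1) + b2 * c1) % 5
    return (phase, a1 + a2, b1 + b2, c1 + c2)

def power(m, n):
    result = (0, 0, 0, 0)  # the identity monomial
    for _ in range(n):
        result = mul(result, m)
    return result

u, v, w = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
# u^5, v^5, w^5 commute with every generator: q^5 = 1 kills all phases.
for central in (power(u, 5), power(v, 5), power(w, 5)):
    for g in (u, v, w):
        assert mul(central, g) == mul(g, central)
print("u^5, v^5, w^5 are central")
```

In $\mathrm{Jac}(Q,W)$ itself one must additionally check commutation with $e$, which holds for the same reason: $e$ only $q$-commutes with $u,v,w$, so fifth powers again pick up trivial phases.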
There is a canonical embedding \begin{equation}\label{eq:can-chart} U_{01}\hookrightarrow\mathbb{C}^3, \left(\frac{x_2}{x_0},\frac{x_3}{x_0}, \frac{x_4}{x_0}\right)=(x,y,z), \end{equation} which maps the special point $p_{01}$ to the origin. For any subset $U\subset U_{01}$, we will simply identify it with its image in $\mathbb{C}^3$ without further comment. \begin{theorem} For any point $p\in U_{01}$, there is an \emph{analytic} open neighborhood $U\subset U_{01}$ of $p$ such that there is a (non-unique) isomorphism \[ \sh{A}|_U\cong\sh{J}|_U \] of sheaves of non-commutative algebras on $U$. \end{theorem} \begin{proof} Both sheaves $\sh{A}$ and $\sh{J}$ are locally free of rank $625$, and we can write them down explicitly: \[ \sh{A}|_U=\sh{O}_U\langle u_1,u_2,u_3,u_4\rangle_{(\overline{q}_{ij})}\big/\left( 1+\sum_{k=1}^4 u_k^5, u_2^5-x, u_3^5-y, u_4^5-z\right) \] and \[ \sh{J}|_U=\sh{O}_U\langle e,u,v,w\rangle_{(\overline{q}_{ij})}\big/\left(e^5-1, u^5-x, v^5-y, w^5-z\right), \] where $x,y,z\in H^0(U,\sh{O}_U)$ are (holomorphic) functions corresponding to the coordinates of $U\subset\mathbb{C}^3$. Suppose $U\subset\mathbb{C}^3$ is an analytic open subset such that $5$-th roots of the holomorphic function $-1-x-y-z$ are well-defined, that is, there exists an element \[ f(x,y,z)\in H^0(U,\sh{O}_U)\text{ such that }f(x,y,z)^5=-1-x-y-z. \] Then we define a morphism of sheaves of non-commutative algebras \[ \sh{A}|_U\to\sh{J}|_U, (u_1,u_2,u_3,u_4)\mapsto \big(f(x,y,z)e, u, v, w\big). \] This defines an isomorphism since $f(x,y,z)$ is non-vanishing on $U$ and thus is invertible. Finally, $U_{01}$ is the open subset of $\mathbb{C}^3$ defined by $x+y+z\neq -1$. Therefore it can be covered by analytic open subsets satisfying the property above. \end{proof} Next, we analyze the Jacobi algebra $\mathrm{Jac}(Q,W)$ in more detail. \begin{proposition}\label{prop:local-of-J} Let $p=(x_0,y_0,z_0)\in\mathbb{C}^3$ with $x_0\neq 0$.
Then there is an analytic open neighborhood $U\subset\mathbb{C}^3$ of $p$ such that \[ \sh{J}|_U\cong M_{5}(\mathbb{C})\tens{\mathbb{C}}\Big(\sh{O}_U[v,w]\big/\left(v^5-y, w^5-z\right)\Big), \] where $M_5(\mathbb{C})$ is the ring of $5$-by-$5$ matrices. Similar results also hold for points with $y_0\neq 0$ or $z_0\neq 0$. \end{proposition} \begin{proof} Recall that \[ \sh{J}|_U=\sh{O}_U\langle e,u,v,w\rangle_{(\overline{q}_{ij})}\big/\left(e^5-1, u^5-x, v^5-y, w^5-z\right). \] Since $x_0\neq 0$, we can choose an analytic open neighborhood $U$ of $p$ on which there exists a holomorphic function $f(x,y,z)$ with $f(x,y,z)^5=x$. We consider a change of coordinates \[ e_1=e,\quad e_2=\frac{u}{f},\quad \overline{v}=(e_1^3e_2^2)v,\quad \overline{w}=(e_1^4e_2^3)w. \] Then $\sh{J}|_{U}$ is generated by $e_1,e_2,\overline{v},\overline{w}$ since $e_2^5=1$. A straightforward computation shows that the elements $\overline{v},\overline{w}$ in fact lie in the center $Z(\sh{J}|_U)$. Consequently, we can write \[ \begin{split} \sh{J}|_U &=\sh{O}_U\langle e_1,e_2\rangle_{(q_{12})}[\overline{v},\overline{w}]\big/\left(e_1^5-1, e_2^5-1, \overline{v}^5-y, \overline{w}^5-z \right) \\ &\cong\Big(\mathbb{C}\langle e_1,e_2\rangle_{(q_{12})}\big/(e_1^5-1,e_2^5-1) \Big)\tens{\mathbb{C}}\Big(\sh{O}_U[\overline{v},\overline{w}]\big/(\overline{v}^5-y, \overline{w}^5-z)\Big). \end{split} \] To see that the finite-dimensional Frobenius algebra \[ \mathbb{C}\langle e_1,e_2\rangle\big/\left(e_1e_2-q^3e_2e_1, e_1^5-1,e_2^5-1\right) \] is isomorphic to $M_5(\mathbb{C})$, one may verify that the morphism defined by \[ e_1\mapsto\begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & q & 0 & 0 & 0 \\ 0 & 0 & q^2 & 0 & 0 \\ 0 & 0 & 0 & q^3 & 0 \\ 0 & 0 & 0 & 0 & q^4 \end{pmatrix},\quad e_2\mapsto\begin{pmatrix} 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \end{pmatrix} \] is an isomorphism.
\end{proof} \begin{remark}\label{rmk:about-J} We may consider the finite covering $\pi:\mathbb{C}^3\to\mathbb{C}^3$, $\pi(x,y,z)=(x,y^5,z^5)$. Then for any open $U\subset\mathbb{C}^3$, we have \[ \begin{split} & M_5(\mathbb{C})\tens{\mathbb{C}}\Big(\sh{O}_U[v,w]\big/(v^5-y,w^5-z)\Big) \\ \cong &\, M_5(\mathbb{C})\tens{\mathbb{C}}\pi_*\sh{O}_{\pi^{-1}(U)}=\pi_*\Big(M_5(\mathbb{C})\tens{\mathbb{C}}\sh{O}_{\pi^{-1}(U)}\Big)\\ =&\, \pi_*\Big( M_5(\sh{O}_{\mathbb{C}^3})|_{\pi^{-1}(U)}\Big). \end{split} \] This gives an equivalence between coherent $\sh{J}$-modules on $U$ and coherent $M_5(\sh{O}_{\mathbb{C}^3})$-modules on $\pi^{-1}(U)$. \end{remark} Now we consider a stratification \[ \mathbb{C}^3=\mathbb{C}^3_{(0)}\coprod\mathbb{C}^3_{(1)}\coprod\mathbb{C}^3_{(2)}\coprod\mathbb{C}^3_{(3)}, \] where $\mathbb{C}^3_{(i)}$ consists of points with exactly $i$ coordinates being non-zero. Let $p=(x_0,y_0,z_0)\in\mathbb{C}^3_{(2)}$. For simplicity we assume $z_0=0$; then Proposition~\ref{prop:local-of-J} shows that there is an analytic neighborhood $U$ of $p$ such that \[ \sh{J}|_U\cong \Big(M_{5}(\mathbb{C})^{\oplus 5}\Big)\tens{\mathbb{C}}\Big(\sh{O}_U[w]\big/\left(w^5-z\right)\Big). \] Similarly, for a point $p\in \mathbb{C}^3_{(3)}$, there is an analytic neighborhood $U$ of $p$ such that \[ \sh{J}|_U\cong \Big(M_{5}(\mathbb{C})^{\oplus 25}\Big)\tens{\mathbb{C}}\sh{O}_U. \] Finally, we recall that there is an open cover $\{U_{ij}\}_{i\neq j}$ such that all sheaves $\sh{A}|_{U_{ij}}$ of non-commutative algebras are (canonically) isomorphic. There is a natural stratification \begin{equation}\label{eq:strat-X} X=X_{(0)}\coprod X_{(1)}\coprod X_{(2)}\coprod X_{(3)}, \end{equation} where $X_{(i)}$ consists of points with exactly $i+2$ coordinates (of $\mathbb{P}^4$) being non-zero. It is clear that the canonical isomorphism $U_{01}\cong U_{ij}$ and the canonical embedding \eqref{eq:can-chart} preserve the strata. \begin{definition} Let $p\in X$.
An \emph{analytic chart} $U$ of $p$ is an analytic open neighborhood $U\subset X$ of $p$ with an embedding $U\to\mathbb{C}^3$ mapping $p$ to the origin. \end{definition} Putting all the above results together, we obtain the following theorem. \begin{theorem}\label{thm:local-model} We define the sheaves of non-commutative algebras on $\mathbb{C}^3$ \[ \begin{split} \sh{J}_{(0)} &= \sh{J}, \\ \sh{J}_{(1)} &= M_{5}(\mathbb{C})\tens{\mathbb{C}}\Big(\sh{O}_{\mathbb{C}^3}[v,w]\big/\left(v^5-y, w^5-z\right)\Big), \\ \sh{J}_{(2)} &= \Big(M_{5}(\mathbb{C})^{\oplus 5}\Big)\tens{\mathbb{C}}\Big(\sh{O}_{\mathbb{C}^3}[w]\big/\left(w^5-z\right)\Big), \\ \sh{J}_{(3)} &= \Big(M_{5}(\mathbb{C})^{\oplus 25}\Big)\tens{\mathbb{C}}\sh{O}_{\mathbb{C}^3}. \\ \end{split} \] Then for all $i$ and any point $p\in X_{(i)}$, there is an analytic chart $U\to\mathbb{C}^3$ of $p$ such that \[ \sh{A}|_{U}\cong\sh{J}_{(i)}|_{U}. \] Moreover, if $p\in X_{(1)}$, then the chart $U\to\mathbb{C}^3$ maps $U\cap X_{(1)}$ to the locus $(y=z=0)$; and if $p\in X_{(2)}$, then the chart $U\to\mathbb{C}^3$ maps $U\cap X_{(2)}$ to the locus $(z=0)$. \end{theorem} \begin{remark}\label{rmk:chart} The statements about the loci $(y=z=0)$ and $(z=0)$ follow directly from the construction of the charts $U\to\mathbb{C}^3$. Besides, since the charts are constructed to make $5$-th roots of a certain holomorphic function well-defined, we can choose finitely many charts to cover $X_{(i)}$ for each $i$. \end{remark} In particular, if $p\in X_{(i)}$, then the category $\mathrm{Coh}(\sh{A})_p$ of coherent $\sh{A}$-modules supported at $p$ is equivalent to the category of $\sh{J}_{(i)}$-modules supported at the origin. \begin{corollary} Let $p\in X_{(i)}$, $i\neq 0$. There are $5^{i-1}$ simple $\sh{A}$-modules supported at $p$, and all of them are of dimension $5$.
Furthermore, the Ext-quiver associated to these simple modules is the quiver with $5^{i-1}$ vertices and three loops at each vertex. \end{corollary} \begin{remark} For $i\neq 0$, the sheaves $\sh{J}_{(i)}$ of algebras on $\mathbb{C}^3$ are given by the non-commutative $\mathbb{C}[x,y,z]$-algebras \[ M_5(\mathbb{C}[x,y,z])^{\oplus 5^{i-1}}, \] which are Morita equivalent to the commutative algebras $\mathbb{C}[x,y,z]^{\oplus 5^{i-1}}$. Moreover, these algebras are Jacobi algebras of quivers with potential. Therefore, one may interpret Theorem~\ref{thm:local-model} as an explicit (analytic) local model of $(X,\sh{A})$ by quivers with potential. \end{remark} \section{Computation of Donaldson--Thomas invariants} In this section, we will compute the generating function \[ Z^{\sh{A}}(t)=\sum_{n=0}^{\infty}\chi_{\mathrm{vir}}\big(\mathrm{Hilb}^n(\sh{A})\big)\,t^n \] of degree zero DT invariants of the generic quantum Fermat quintic threefold. \subsection{Analytic moduli spaces} For technical reasons, we consider the category $\mathcal{C}$ of \emph{analytic} schemes carrying an \emph{algebraic} constructible function. Objects in $\mathcal{C}$ are pairs $(U,\nu)$ such that $U$ is an analytic open subset of an algebraic scheme $X$ and $\nu$ is a function $U\to\mathbb{Z}$ that extends to an algebraic constructible function $\overline{\nu}:X\to\mathbb{Z}$. Morphisms from $(U_1,\nu_1)$ to $(U_2,\nu_2)$ are analytic morphisms $f:U_1\to U_2$ such that $\nu_2\circ f=\nu_1$. Products exist in the category $\mathcal{C}$ and are given by \[ (U_1,\nu_1)\times (U_2,\nu_2)=(U_1\times U_2, \nu_1\times\nu_2), \] where $(\nu_1\times\nu_2)(x_1,x_2)=\nu_1(x_1)\nu_2(x_2)$. Writing $\chi(U,\nu):=\sum_{c\in\mathbb{Z}}c\,\chi\big(\nu^{-1}(c)\big)$ for the weighted Euler characteristic, it is clear that if $(U_1,\nu_1)$ and $(U_2,\nu_2)$ are isomorphic in $\mathcal{C}$, then $\chi(U_1,\nu_1)=\chi(U_2,\nu_2)$. \begin{lemma} Let $X$ and $Y$ be schemes.
Suppose $X$ and $Y$ are analytically locally isomorphic, that is, there exist analytic open covers $\{U_{\alpha}\}_{\alpha}$ of $X$ and $\{V_{\alpha}\}_{\alpha}$ of $Y$ such that for each $\alpha$, there is an analytic isomorphism $f_{\alpha}:U_{\alpha}\to V_{\alpha}$. Then \[ \chi_{\mathrm{vir}}(X)=\chi_{\mathrm{vir}}(Y). \] \end{lemma} \begin{proof} Since the Behrend function depends only on the underlying analytic space of a scheme, the analytic isomorphism $f_{\alpha}$ induces an isomorphism \[ f_{\alpha}:(U_{\alpha},\nu_{U_{\alpha}})\to (V_{\alpha},\nu_{V_{\alpha}}) \] in $\mathcal{C}$. Then the result follows from the fact that Euler characteristics can be computed from an open cover. \end{proof} \begin{definition} Let $f:X\to Y$ be a morphism of schemes, let $F$ be a scheme, and let $\mu:X\to\mathbb{Z}$ and $\nu:F\to\mathbb{Z}$ be constructible functions. We say \[ f:(X,\mu)\to Y \] is an \emph{analytic local} fibration with fibre $(F,\nu)$ if there is an analytic open cover $\{U_{\alpha}\}_{\alpha}$ of $Y$ such that for each $\alpha$, there is an isomorphism \[ \big(f^{-1}(U_{\alpha}),\mu\big)\cong (U_{\alpha},1)\times(F,\nu) \] in $\mathcal{C}$. Note that $f:(X,\mu)\to (Y,1)$ is generally not a morphism in $\mathcal{C}$. \end{definition} \begin{lemma} If $f:(X,\mu)\to Y$ is an analytic local fibration with fibre $(F,\nu)$, then \[ \chi(X,\mu)=\chi(Y)\cdot\chi(F,\nu). \] \end{lemma} \begin{proof} It is easy to see that for any $c\in\mathbb{Z}$, $f:\mu^{-1}(c)\to Y$ is an analytic local fibration with fibre $\nu^{-1}(c)$, thus \[ \chi(\mu^{-1}(c))=\chi(Y)\cdot\chi(\nu^{-1}(c)). \] Multiplying by $c$ and summing over $c\in\mathbb{Z}$ yields the claimed formula. \end{proof} Since our local models of $(X,\sh{A})$ are given in the analytic topology, it is not obvious that they induce analytic isomorphisms between the algebraic moduli spaces. For the rest of the section, let $X$ be a quasi-projective smooth variety and $\sh{A}$ a locally free sheaf of non-commutative algebras on $X$.
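The generating functions appearing in this section are manipulated purely as formal power series; in particular, the MacMahon function $M(t)=\prod_{n\geq 1}(1-t^n)^{-n}$, whose $n$-th coefficient counts plane partitions of $n$, enters the final formula below. As an illustration (our own code, not part of the argument), its truncated coefficients can be computed as follows.

```python
from math import comb

# Coefficients of the MacMahon function M(t) = prod_{n>=1} (1 - t^n)^(-n),
# truncated at t^N; the n-th coefficient counts plane partitions of n.
N = 10
M = [0] * (N + 1)
M[0] = 1
for n in range(1, N + 1):
    # multiply by (1 - t^n)^(-n) = sum_j binom(n+j-1, j) t^(n*j)
    factor = [0] * (N + 1)
    j = 0
    while n * j <= N:
        factor[n * j] = comb(n + j - 1, j)
        j += 1
    M = [sum(M[i] * factor[k - i] for i in range(k + 1)) for k in range(N + 1)]

print(M)  # [1, 1, 3, 6, 13, 24, 48, 86, 160, 282, 500]
```

Substitutions such as $t\mapsto -t^5$ and raising to integer powers, as used in the computation below, are then coefficient-wise operations on such truncated series.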
\begin{definition} The Hilbert--Chow map is the composition \[ \mathrm{Hilb}^n(\sh{A})\to \msp{M}^n(\sh{A}) \to \msp{M}^n(X)\cong\mathrm{Sym}^n(X), \] where the middle morphism sends a finite-dimensional $\sh{A}$-module to its underlying coherent sheaf on $X$, which has zero-dimensional support and is of length $n$. \end{definition} For any analytic or algebraic subset $S\subset X$, we define the fiber product \[ \xymatrix{ \mathrm{Hilb}^n(\sh{A})_{S}\ar@{^{(}->}[r]\ar[d]\ar@{}[dr]|{\Box} & \mathrm{Hilb}^n(\sh{A})\ar[d] \\ \mathrm{Sym}^n(S)\ar@{^{(}->}[r] & \mathrm{Sym}^n(X) } \] in the appropriate category. In other words, $\mathrm{Hilb}^n(\sh{A})_S$ parameterizes $\sh{A}$-module quotients supported in $S$. Clearly, if $S$ is analytic open in $X$, then $\mathrm{Hilb}^n(\sh{A})_S$ is analytic open in $\mathrm{Hilb}^n(\sh{A})$; if $S$ is a locally closed subscheme of $X$, then $\mathrm{Hilb}^n(\sh{A})_S$ is a locally closed subscheme of $\mathrm{Hilb}^n(\sh{A})$. We first prove the equivalence between algebraic and analytic finite-dimensional $\sh{A}$-modules. \begin{lemma}\label{lem:GAGA} The analytification defines an equivalence \[ \mathrm{Coh}(\sh{A})_{\mathrm{fd}}\cong\mathrm{Coh}(\sh{A}^{\mathrm{an}})_{\mathrm{fd}} \] of categories, where $\mathrm{Coh}(\sh{A}^{\mathrm{an}})_{\mathrm{fd}}$ is the category of analytic coherent sheaves on $X$ with an $\sh{A}^{\mathrm{an}}$-action. \end{lemma} \begin{proof} As mentioned before, this statement is Zariski local on $X$. We may assume $X=\mathrm{Spec}(R)$ is affine, and then $\sh{A}$ corresponds to a non-commutative $R$-algebra $A$. First, we may choose a compactification $X\subset\overline{X}$ so that we can apply the GAGA theorem \cite{GAGA}. Since analytification preserves supports of coherent sheaves, it induces an equivalence between $\mathrm{Coh}(\sh{O}_X)_{\mathrm{fd}}$ and $\mathrm{Coh}(\sh{O}_X^{\mathrm{an}})_{\mathrm{fd}}$.
While $\sh{O}_X^{\mathrm{an}}$ and $\sh{A}^{\mathrm{an}}$ are not in the category $\mathrm{Coh}(\sh{O}_X^{\mathrm{an}})_{\mathrm{fd}}$, for any $\sh{F}^{\mathrm{an}}$ in $\mathrm{Coh}(\sh{O}_X^{\mathrm{an}})_{\mathrm{fd}}$, the ring $\mathrm{Hom}_{\sh{O}_X^{\mathrm{an}}}(\sh{F}^{\mathrm{an}},\sh{F}^{\mathrm{an}})$ is naturally an $R$-algebra. By definition, an $\sh{A}^{\mathrm{an}}$-module is given by an analytic coherent sheaf $\sh{F}^{\mathrm{an}}$ with a morphism \[ A\to \mathrm{Hom}_{\sh{O}_X^{\mathrm{an}}}(\sh{F}^{\mathrm{an}},\sh{F}^{\mathrm{an}}) \] of $R$-algebras, which must be algebraic. Therefore the analytification \[ \mathrm{Coh}(\sh{A})_{\mathrm{fd}}\cong\mathrm{Coh}(\sh{A}^{\mathrm{an}})_{\mathrm{fd}} \] is an equivalence of categories. \end{proof} \begin{proposition} Let $U$ be an analytic open subset of $X$. Then the analytification of $\mathrm{Hilb}^n(\sh{A})_U$ is the analytic moduli space parameterizing analytic cyclic $\sh{A}^{\mathrm{an}}|_{U}$-modules. \end{proposition} \begin{proof} We first note that such an analytic moduli space exists. Since $\sh{A}^{\mathrm{an}}$ is locally free, there is a Quot space $\mathcal{Q}$ parameterizing quotients of $\sh{A}^{\mathrm{an}}$, and we take $\mathcal{M}$ to be the closed subspace of $\mathcal{Q}$ consisting of points $[\sh{A}^{\mathrm{an}}\to\sh{F}^{\mathrm{an}}]$ such that the kernel is $\sh{A}^{\mathrm{an}}$-invariant. Since any family of algebraic $\sh{A}$-modules is analytic, there is a canonical morphism \begin{equation}\label{eq:temp3} \mathrm{Hilb}^n(\sh{A})_U\to\mathcal{M}, \end{equation} which is bijective by the previous lemma. Note that the previous lemma also works for any family of $\sh{A}$-modules over a proper scheme (where GAGA applies). It gives an equivalence between infinitesimal deformations of algebraic and analytic $\sh{A}$-modules. In other words, \eqref{eq:temp3} is \'{e}tale and hence an analytic isomorphism.
\end{proof} \begin{corollary}\label{cor:GAGA} Let $\sh{A}_1$ and $\sh{A}_2$ be two coherent sheaves of non-commutative algebras on $X$. Suppose there is an analytic open subset $U\subset X$ such that $\sh{A}_1|_{U}\cong\sh{A}_2|_{U}$. Then there is an analytic isomorphism \[ \mathrm{Hilb}^n(\sh{A}_1)_{U}\cong\mathrm{Hilb}^n(\sh{A}_2)_{U}. \] \end{corollary} \subsection{Stratifying Hilbert schemes of points} For any locally closed subset $Z\subset X$, we have $\mathrm{Hilb}^n(\sh{A})_Z$, the locally closed subscheme of $\mathrm{Hilb}^n(\sh{A})$ parameterizing $\sh{A}$-module quotients supported in $Z$, with the induced Hilbert--Chow map $\mathrm{Hilb}^n(\sh{A})_Z\to\mathrm{Sym}^n(Z)$. Let \[ Z^{\sh{A}}_Z(t)=\sum_{n=0}^{\infty}\,\chi_{\mathrm{vir}}\big(\mathrm{Hilb}^n(\sh{A}),\mathrm{Hilb}^n(\sh{A})_Z\big)\,t^n \] be the generating function, where $\chi_{\mathrm{vir}}\big(\mathrm{Hilb}^n(\sh{A}),\mathrm{Hilb}^n(\sh{A})_Z\big)$ denotes the Euler characteristic of $\mathrm{Hilb}^n(\sh{A})_Z$ weighted by the Behrend function of $\mathrm{Hilb}^n(\sh{A})$. \begin{proposition}\label{prop:hilb_strata_sup} Let $X=X_1\coprod X_2\coprod\ldots\coprod X_r$ be a stratification of $X$. Then \[ Z^{\sh{A}}(t)=\prod_{i=1}^r Z^{\sh{A}}_{X_{i}}(t). \] \end{proposition} \begin{proof} Write $\pi_n$ for the set of $r$-tuples $(n_1,\ldots,n_r)$ of non-negative integers such that $n_1+\ldots+n_r=n$. The stratification $X=\coprod_i X_{i}$ induces a stratification \[ \mathrm{Hilb}^n(\sh{A})=\coprod_{(n_1,\ldots,n_r)\in \pi_n}\mathrm{Hilb}^n(\sh{A})_{(n_1,\ldots,n_r)}, \] where $\mathrm{Hilb}^n(\sh{A})_{(n_1,\ldots,n_r)}$ parameterizes quotients $\sh{A}\to\sh{F}$ such that $\sh{F}|_{X_i}$ is of length $n_i$ for all $i$. Then \begin{equation}\label{eq:hilb_strata_sup} \mathrm{Hilb}^n(\sh{A})_{(n_1,\ldots,n_r)}\cong\prod_{i=1}^r\mathrm{Hilb}^{n_i}(\sh{A})_{X_i}. \end{equation} It remains to show that the isomorphism \eqref{eq:hilb_strata_sup} induces an isomorphism \[ \big(\mathrm{Hilb}^n(\sh{A})_{(n_1,\ldots,n_r)},\nu_n\big)\cong\prod_{i=1}^r\big(\mathrm{Hilb}^{n_i}(\sh{A})_{X_i},\nu_{n_i}\big) \] in $\mathcal{C}$, where $\nu_n$ is the Behrend function on $\mathrm{Hilb}^n(\sh{A})$.
Let $p=[\sh{A}\to\sh{F}]\in\mathrm{Hilb}^n(\sh{A})$ be a closed point. We choose analytic open subsets $U_i\subset X$ such that $\overline{U_i}\cap\overline{ U_j}=\varnothing$ for $i\neq j$ and \[ \mathrm{supp}(\sh{F})\cap X_i\subset U_i. \] Then $\mathrm{Hilb}^n(\sh{A})_{\coprod_i U_i}$ is analytic open in $\mathrm{Hilb}^n(\sh{A})$, and \[ \mathrm{Hilb}^n(\sh{A})_{\coprod_i U_i}\cong\prod_i\mathrm{Hilb}^{n_i}(\sh{A})_{U_i}\subset\prod_i\mathrm{Hilb}^{n_i}(\sh{A}). \] Thus \[ \begin{split} \big(\mathrm{Hilb}^n(\sh{A})_{\coprod_i U_i},\nu_n\big) &=\big(\mathrm{Hilb}^n(\sh{A})_{\coprod_i U_i},\nu_{\mathrm{Hilb}^n(\sh{A})_{\coprod_i U_i}}\big) \\ & \cong \prod_i\big(\mathrm{Hilb}^{n_i}(\sh{A})_{U_i},\nu_{\mathrm{Hilb}^{n_i}(\sh{A})_{U_i}}\big)=\prod_i\big(\mathrm{Hilb}^{n_i}(\sh{A})_{U_i},\nu_{n_i}\big), \end{split} \] which completes the proof. \end{proof} As in the classical case, $\mathrm{Hilb}^n(\sh{A})_S$ has a standard stratification indexed by partitions of $n$. The Hilbert--Chow map sends the stratum $\mathrm{Hilb}^n(\sh{A})_{S,(n)}$ to the diagonal $S\hookrightarrow\mathrm{Sym}^n(S)$. \begin{proposition}\label{prop:local-generating} Suppose the induced morphism \[ \big(\mathrm{Hilb}^n(\sh{A})_{S,(n)},\nu_n\big)\to S \] is an analytic local fibration with fibre $(F_n,\mu_{n})$. Then \[ Z^{\sh{A}}_S(t)= \left(\sum_{n=0}^{\infty}\,\chi(F_n,\mu_n)\,t^n\right)^{\chi(S)}. \] \end{proposition} \begin{proof} This is the analogue of \cite[Theorem 4.11]{BF08}, but we use analytic open covers instead of \'{e}tale covers. The main idea is that there is a stratification \[ \mathrm{Hilb}^n(\sh{A})_{S}=\coprod_{\alpha\vdash n}\,\mathrm{Hilb}^n(\sh{A})_{S,\alpha} \] and for each $\alpha=(\alpha_1\leq\ldots\leq\alpha_r)\vdash n$, \[ \mathrm{Hilb}^n(\sh{A})_{S,\alpha}\subset\prod_{i=1}^r\mathrm{Hilb}^{\alpha_i}(\sh{A})_{S,(\alpha_i)}.
\] By the same argument as in Proposition~\ref{prop:hilb_strata_sup}, we get that \[ \big(\mathrm{Hilb}^n(\sh{A})_{S,\alpha},\nu_n\big)\subset\prod_{i=1}^r\big(\mathrm{Hilb}^{\alpha_i}(\sh{A})_{S,(\alpha_i)},\nu_{\alpha_i}\big) \] is an immersion in $\mathcal{C}$, and by assumption, the right hand side is a product of analytic local fibrations over $S$ with fibres $(F_{\alpha_i},\mu_{\alpha_i})$. Then the formula follows from a standard calculation. \end{proof} \subsection{Computation of DT invariants} Let $(X,\sh{A})$ be the generic quantum Fermat quintic threefold. We apply Proposition~\ref{prop:hilb_strata_sup} to the stratification \eqref{eq:strat-X} of $X$ and obtain \[ Z^{\sh{A}}(t)=\prod_{i=0}^3\, Z^{\sh{A}}_{X_{(i)}}(t). \] \begin{theorem}\label{thm:fibration} For each $i$, \[ (\mathrm{Hilb}^n(\sh{A})_{X_{(i)},(n)},\nu_n)\to X_{(i)} \] is an analytic local fibration with fibre $\big(\mathrm{Hilb}^n(\mathbb{C}^3,\sh{J}_{(i)})_0,\mu_n\big)$, where $\sh{J}_{(i)}$'s are the sheaves of algebras on $\mathbb{C}^3$ defined in Theorem~\ref{thm:local-model}, and $\mu_n$ is the Behrend function of $\mathrm{Hilb}^n(\mathbb{C}^3,\sh{J}_{(i)})$. \end{theorem} \begin{proof} We only prove the case $i=1$, as the proofs of the other cases are similar. First, we choose a cover of $X_{(1)}$ by analytic charts $\{U_{\alpha}\}$. Since $\sh{A}|_{U_{\alpha}}\cong\sh{J}_{(1)}|_{U_{\alpha}}$, we have an analytic isomorphism \[ \mathrm{Hilb}^n(\sh{A})_{U_{\alpha}}\cong\mathrm{Hilb}^n(\mathbb{C}^3,\sh{J}_{(1)})_{U_{\alpha}}, \] by Corollary~\ref{cor:GAGA}, which gives an isomorphism \begin{equation}\label{eq:temp1} \big(\mathrm{Hilb}^n(\sh{A})_{U_{\alpha}},\nu_n\big)\cong\big(\mathrm{Hilb}^n(\mathbb{C}^3,\sh{J}_{(1)})_{U_{\alpha}},\mu_n\big) \end{equation} in $\mathcal{C}$. Let $Z$ be the locus of $\mathbb{C}^3$ defined by $y=z=0$.
Observe that \[ \sh{J}_{(1)} = M_{5}(\mathbb{C})\tens{\mathbb{C}}\Big(\sh{O}_{\mathbb{C}^3}[v,w]\big/\left(v^5-y, w^5-z\right)\Big) \] is invariant under translations in $x$; hence the Hilbert--Chow map \[ \big(\mathrm{Hilb}^n(\mathbb{C}^3,\sh{J}_{(1)})_{Z,(n)},\mu_n\big)\to Z \] is a \emph{Zariski} local fibration with fibre $\big(\mathrm{Hilb}^n(\mathbb{C}^3,\sh{J}_{(1)})_0,\mu_n\big)$. Here we use the fact that the Behrend function is constant on orbits of a group action. Finally, since the charts $U_{\alpha}\to\mathbb{C}^3$ map $U_{\alpha}\cap X_{(1)}$ into $Z$ (see Remark~\ref{rmk:chart}), the isomorphism \eqref{eq:temp1} induces an isomorphism \[ \big(\mathrm{Hilb}^n(\sh{A})_{U_{\alpha}\cap X_{(1)}},\nu_n\big)\cong\big(\mathrm{Hilb}^n(\mathbb{C}^3,\sh{J}_{(1)})_{U_{\alpha}\cap Z},\mu_n\big). \] Then the theorem follows. \end{proof} Next we deal with the sheaves $\sh{J}_{(i)}$ of algebras before we state our main theorem. Recall that for $i\neq 0$, $\sh{J}_{(i)}$ can be written as direct sums of $M_5(\sh{O}_{\mathbb{C}^3})$ (see Remark~\ref{rmk:about-J}). \begin{lemma} If $n$ is not a multiple of $5$, then $\mathrm{Hilb}^n\big(\mathbb{C}^3,M_5(\sh{O}_{\mathbb{C}^3})\big)=\varnothing$. For $n=5k$, there is a canonical isomorphism \[ \mathrm{Hilb}^{5k}\big(\mathbb{C}^3, M_5(\sh{O}_{\mathbb{C}^3})\big)\cong\mathrm{Quot}^k(\mathbb{C}^3,\sh{O}_{\mathbb{C}^3}^{\oplus 5}). \] \end{lemma} \begin{proof} The Morita equivalence \[ \mathrm{Coh}(\mathbb{C}^3)\to \mathrm{Coh}(M_5(\sh{O}_{\mathbb{C}^3})) \] is given by $\mathbb{C}^{\oplus 5}\otimes_{\mathbb{C}}-$, where $\mathbb{C}^{\oplus 5}$ is the canonical representation of $M_5(\mathbb{C})$. This implies that the dimension of an $M_5(\sh{O}_{\mathbb{C}^3})$-module must be a multiple of $5$. The isomorphism is a direct consequence of the following fact: let $R$ be a commutative ring, and let $M$ be an $R$-module. For any $n$, we consider the simple $M_n(R)$-module $R^{\oplus n}$.
Then a morphism \[ s:M_n(R)\to R^{\oplus n}\otimes M \] of $M_n(R)$-modules is surjective if and only if the induced morphism \[ \delta(s): R^{\oplus n}\hookrightarrow M_n(R)\xrightarrow{s} R^{\oplus n}\otimes M\to M \] of $R$-modules is surjective, where the first map is the diagonal map, and the last map is defined by $(r_1,\ldots,r_n)\otimes m\mapsto\sum_i r_im$. \end{proof} \begin{theorem}\label{thm:main-computation} We have \[ Z^{\sh{A}}(t)=\Big(Z^{Q,W}(t)\Big)^{10}\Big(M(-t^5)\Big)^{-50}, \] where $M(t)$ is the MacMahon function. \end{theorem} \begin{proof} We use Proposition~\ref{prop:local-generating} with Theorem~\ref{thm:fibration} to obtain \[ Z^{\sh{A}}(t)=\prod_{i=0}^3 \left(Z^{\mathbb{C}^3,\sh{J}_{(i)}}_0(t)\right)^{\chi(X_{(i)})}. \] For $i\neq 0$, we have \[ \begin{split} Z^{\mathbb{C}^3,\sh{J}_{(i)}}_0(t)=&\; Z^{\mathbb{C}^3,M_5(\sh{O})^{\oplus 5^{i-1}}}_0(t) \\ = &\, \sum_{k=0}^{\infty} \,\chi_{\mathrm{vir}}\Big(\mathrm{Quot}^k\big((\mathbb{C}^3)^{\coprod 5^{i-1}},\sh{O}^{\oplus 5}\big),\mathrm{Quot}^k\big((\mathbb{C}^3)^{\coprod 5^{i-1}},\sh{O}^{\oplus 5}\big)_0\Big)\, t^{5k} \\ = &\, \left(\sum_{k=0}^{\infty} \,\chi_{\mathrm{vir}}\big(\mathrm{Quot}^k(\mathbb{C}^3,\sh{O}^{\oplus 5}),\mathrm{Quot}^k(\mathbb{C}^3,\sh{O}^{\oplus 5})_0\big)\, t^{5k}\right)^{5^{i-1}}. \end{split} \] For the first equality, see Remark~\ref{rmk:about-J}. The second equality is given by the previous lemma, and the third equality is a standard result about Hilbert schemes of a disjoint union of schemes. Now, DT invariants of Quot schemes of points on $\mathbb{C}^3$ are well-known: \[ \sum_{k=0}^{\infty} \,\chi_{\mathrm{vir}}\big(\mathrm{Quot}^k(\mathbb{C}^3,\sh{O}^{\oplus 5}),\mathrm{Quot}^k(\mathbb{C}^3,\sh{O}^{\oplus 5})_0\big)\, t^{k}= M(-t)^5. 
\] We conclude that \[ Z^{\sh{A}}(t)=Z_0^{\mathbb{C}^3,\sh{J}}(t)^{\chi(X_{(0)})}\cdot \Big(M(-t^5)^5\Big)^{\chi(X_{(1)})+5\chi(X_{(2)})+25\chi(X_{(3)})}, \] with $\chi(X_{(0)})=0$ and $\chi(X_{(1)})+5\chi(X_{(2)})+25\chi(X_{(3)})=-10$. Finally, recall that the $\sh{J}_{(i)}$'s are also local models of $\sh{J}$ on the strata $\mathbb{C}^3_{(i)}$ of $\mathbb{C}^3$. We repeat all of the above arguments for $(\mathbb{C}^3,\sh{J})$, which leads to \[ \begin{split} Z^{\mathbb{C}^3,\sh{J}}(t) &=Z_0^{\mathbb{C}^3,\sh{J}}(t)^{\chi(\mathbb{C}^3_{(0)})}\cdot \Big(M(-t^5)^5\Big)^{\chi(\mathbb{C}^3_{(1)})+5\chi(\mathbb{C}^3_{(2)})+25\chi(\mathbb{C}^3_{(3)})} \\ &=Z_0^{\mathbb{C}^3,\sh{J}}(t), \end{split} \] since $\chi(\mathbb{C}^3_{(1)})=\chi(\mathbb{C}^3_{(2)})=\chi(\mathbb{C}^3_{(3)})=0$. The sheaf $\sh{J}$ on $\mathbb{C}^3$ is given by the Jacobi algebra $\mathrm{Jac}(Q,W)$, and by definition, finite-dimensional quotients of $\sh{J}$ are exactly framed representations of $(Q,W)$ with framing vector $(1,1,1,1,1)$. Hence by our definition, $Z^{\mathbb{C}^3,\sh{J}}(t)=Z^{Q,W}(t)$. \end{proof} For the generating function $Z^{Q,W}$, we have seen that $(Q,W)$ can be viewed as a quantization of the orbifold $[\mathbb{C}^3/\mu_5]$, and DT invariants of orbifolds are known to be related to \emph{colored plane partitions} \cite{You10}. We will discuss this further in the next section, and show that $Z^{Q,W}$ can be computed using combinatorics. \subsection{A geometric interpretation} As a final remark, we give a possible geometric interpretation of this result. Recall that finite-dimensional simple $\sh{A}$-modules are of dimension $1$ or $5$. We denote by $M^{d}_{\mathrm{sp}}$ the moduli space of $d$-dimensional simple $\sh{A}$-modules. Then Theorem~\ref{thm:local-model} implies that \begin{enumerate} \item[(a)] There is a morphism $M^{1}_{\mathrm{sp}}\to X_{(0)}$ which is a $\mu_5$-torsor.
\item[(b)] $M^{5}_{\mathrm{sp}}$ is smooth, and there is a (ramified) covering $M^{5}_{\mathrm{sp}}\to X\setminus X_{(0)}$, which is $5^{i-1}$-to-$1$ on the stratum $X_{(i)}$. \end{enumerate} In particular, \[ \chi(M^{5}_{\mathrm{sp}})=\chi(X_{(1)})+5\chi(X_{(2)})+25\chi(X_{(3)}). \] The factor $M(-t^5)^{-50}$ can be expressed as \[ Z^{M^{5}_{\mathrm{sp}}, M_5(\sh{O})}(t)=\sum_{k=0}^{\infty} \chi_{\mathrm{vir}}\big(\mathrm{Quot}^{k}(M^{5}_{\mathrm{sp}},\sh{O}^{\oplus 5})\big)\, t^{5k}. \] On the other hand, $M^1_{\mathrm{sp}}$ consists of $50$ points, and if we consider the Ext-quiver $\tilde{Q}$ (with its induced potential $\tilde{W}$) associated to $M^1_{\mathrm{sp}}$, then $\tilde{Q}$ is the disjoint union of $10$ copies of the quiver $Q$. Therefore we can write \[ Z^{Q,W}(t)^{10}=Z^{\tilde{Q},\tilde{W}}(t) \] and \[ Z^{\sh{A}}(t)=Z^{\tilde{Q},\tilde{W}}(t)\cdot Z^{M^{5}_{\mathrm{sp}}, M_5(\sh{O})}(t). \] This strongly suggests that the DT invariants of the Calabi--Yau-3 category $\mathrm{Coh}(\sh{A})_{\mathrm{fd}}$ can be computed \emph{directly} from the moduli space of simple objects in $\mathrm{Coh}(\sh{A})_{\mathrm{fd}}$ and the Ext-quivers between simple objects. \section{Multi-colored plane partitions}\label{sec:mcpp} A plane partition is a finite subset $\pi$ of $\mathbb{Z}_{\geq 0}^{\oplus 3}$ such that if any of $(i+1,j,k)$, $(i,j+1,k)$, $(i,j,k+1)$ is in $\pi$, then so is $(i,j,k)$. We will refer to the points of a plane partition as ``boxes'', and a plane partition can be viewed as a pile of boxes stacked in the positive octant. The size $|\pi|$ is the number of boxes. We denote by $\planep$ the set of plane partitions. \subsection{Colored plane partitions} In this section, we recall the notion of colored plane partitions and their relation with orbifolds. These results are taken from \cite{You10}, \cite{BCY} (see also \cite{DOS}). Let $G=\mu_r$ be the finite group of $r$-th roots of unity in $\mathbb{C}$.
We consider the $\mu_r$-action on $\mathbb{C}^3$ with weights $(a,b,c)$. We will denote this action by $\mu_r(a,b,c)$. We identify $\hat{G}$, the abelian group of characters, with $\mathbb{Z}/r\mathbb{Z}$. \begin{definition} A $\mu_r(a,b,c)$-colored plane partition is a plane partition $\pi\in\planep$ with the coloring $K:\pi\to\mathbb{Z}/r\mathbb{Z}$ defined by \[ K(i,j,k)=ai+bj+ck. \] For each color $i$, let $|\pi|_i$ be the number of boxes in $\pi$ with color $i$. \end{definition} We define the generating function of $\mu_r(a,b,c)$-colored plane partitions \[ Z^{\mu_r(a,b,c)}_{\mathrm{PL}}(t_0,\ldots,t_{r-1})=\sum_{\pi\in\planep} \,t_0^{|\pi|_0}\cdots t_{r-1}^{|\pi|_{r-1}}. \] For the $\mu_r(a,b,c)$-action, we consider the \emph{McKay quiver} $Q_r(a,b,c)$ whose vertices correspond to irreducible representations of $\mu_r$. Thus the set $Q_r(a,b,c)_0$ of vertices is identified with $\hat{\mu_r}\cong\mathbb{Z}/r\mathbb{Z}$. Arrows of $Q_r(a,b,c)$ are \[ x_i:i\to i+a,\quad y_i:i\to i+b,\quad z_i:i\to i+c \] for every vertex $i$. There is a natural potential \[ W=\bm{y}\bm{x}\bm{z}-\bm{y}\bm{z}\bm{x}, \] where $\bm{x}=\sum_i x_i$, $\bm{y}=\sum_i y_i$, and $\bm{z}=\sum_i z_i$. Given any plane partition $\pi$, the $\mu_r(a,b,c)$-coloring defines a dimension vector \[ |\pi|:=(|\pi|_0,\ldots,|\pi|_{r-1})\in\mathbb{Z}^{\oplus Q_r(a,b,c)_0}. \] On the other hand, the $\mu_r(a,b,c)$-action defines an orbifold $\orbi{X}=[\mathbb{C}^3/\mu_r]$. For any $\rho\in K_0(\text{Rep}(\mu_r))$, we consider the Hilbert scheme $\mathrm{Hilb}^{\rho}(\orbi{X})$ parameterizing $\mu_r$-invariant subschemes $Z\subset\mathbb{C}^3$ such that the induced $\mu_r$-representation on $\sh{O}_Z$ is in the class $\rho$. The group $K_0(\text{Rep}(\mu_r))$ is canonically identified with $\mathbb{Z}^{\oplus Q_r(a,b,c)_0}$.
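The colorings above are easy to experiment with numerically. The following sketch (our own illustrative aside, not part of the text) enumerates plane partitions by brute force, checks the counts against the coefficients of the MacMahon function $M(t)=\prod_{n\geq 1}(1-t^n)^{-n}$, and tabulates $\mu_r(a,b,c)$-colored plane partitions by their multidegree:

```python
from itertools import product
from collections import Counter

def plane_partitions(max_boxes):
    """All plane partitions with at most max_boxes boxes (brute force)."""
    seen = {frozenset()}
    stack = [frozenset()]
    while stack:
        pi = stack.pop()
        if len(pi) == max_boxes:
            continue
        for b in product(range(max_boxes), repeat=3):
            i, j, k = b
            preds = [(i - 1, j, k), (i, j - 1, k), (i, j, k - 1)]
            # a box is addable iff all of its (existing) predecessors are present
            if b not in pi and all(p in pi for p in preds if min(p) >= 0):
                new = pi | {b}
                if new not in seen:
                    seen.add(new)
                    stack.append(new)
    return seen

def macmahon_coeffs(N):
    """Coefficients of M(t) = prod_{n>=1} (1 - t^n)^(-n) up to degree N."""
    poly = [1] + [0] * N
    for n in range(1, N + 1):
        for _ in range(n):            # multiply by 1/(1 - t^n), n times
            for d in range(n, N + 1):
                poly[d] += poly[d - n]
    return poly

def multidegree(pi, r, a, b, c):
    """The vector (|pi|_0, ..., |pi|_{r-1}) of the mu_r(a,b,c)-coloring."""
    counts = [0] * r
    for i, j, k in pi:
        counts[(a * i + b * j + c * k) % r] += 1
    return tuple(counts)
```

Since each plane partition carries exactly one $\mu_r(a,b,c)$-coloring, specializing all the variables $t_i$ to $t$ in $Z^{\mu_r(a,b,c)}_{\mathrm{PL}}$ recovers the plain counts, i.e.\ the coefficients of $M(t)$.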
Furthermore, it is well-known that the Hilbert scheme $\mathrm{Hilb}^{\mathbf{d}}(\orbi{X})$ is isomorphic to the fine moduli space $\msp{M}^{\mathbf{d},e_0}(Q_r(a,b,c),W)$ of framed representations of the quiver $Q_r(a,b,c)$ with potential $W$, with the framing vector $e_0=(1,0,\ldots,0)$. We define the generating function \[ Z^{\orbi{X}}(t_0,\ldots,t_{r-1})=\sum_{\mathbf{d}=(d_0,\ldots,d_{r-1})}\chi_{\mathrm{vir}}\big(\mathrm{Hilb}^{\mathbf{d}}(\orbi{X})\big)\,t_0^{d_0}\cdots t_{r-1}^{d_{r-1}} \] of DT invariants of the orbifold $\orbi{X}$, which is equal to the generating function $Z^{Q_r(a,b,c),W,e_0}$ in our notation. \begin{proposition}[\cite{BCY}] The generating function $Z^{\orbi{X}}$ is, up to signs, given by the generating function $Z^{\mu_r(a,b,c)}_{\mathrm{PL}}$ of colored plane partitions. More specifically, we have \[ Z^{\orbi{X}}(t_0,\ldots,t_{r-1})=\sum_{\pi\in\planep}(-1)^{|\pi|_0+\langle |\pi|,|\pi|\rangle}\, t_0^{|\pi|_0}\cdots t_{r-1}^{|\pi|_{r-1}}, \] where $\langle -,-\rangle$ is the bilinear form associated to the quiver $Q_r(a,b,c)$. \end{proposition} \subsection{Multi-colored plane partitions} Let $Q=(Q_0,Q_1)$ be a quiver with a labeling of arrows $\ell:Q_1\to\{x,y,z\}$. We denote by $\mathcal{S}(Q_0)$ the set of non-empty subsets of $Q_0$. \begin{definition} A $Q$-multi-colored plane partition consists of a plane partition $\pi\in\planep$ with a multi-coloring \[ K:\pi\to\mathcal{S}(Q_0) \] such that for any arrow $a:v\to w$ labeled with $x$, if $w\in K(i,j,k)$ for some $(i,j,k)\in\pi$, then $v\in K(i-1,j,k)$, and similar conditions hold for arrows labeled with $y$ and $z$. Note that there are many different $Q$-multi-colorings on one plane partition $\pi$.
\end{definition} To a $Q$-multi-colored plane partition $\overline{\pi}=(\pi,K)$, we associate the dimension vector $\mathbf{w}(\overline{\pi}):=\big(w_v(\overline{\pi})\big)_{v\in Q_0}$ defined by \[ w_v(\overline{\pi})=\text{number of boxes $(i,j,k)\in\pi$ such that the color $v\in K(i,j,k)$}. \] For any dimension vector $\mathbf{d}\in\mathbb{Z}^{\oplus Q_0}$, we denote by $n_Q(\mathbf{d})$ the number of $Q$-multi-colored plane partitions with dimension vector $\mathbf{d}$. We define the generating function \[ Z^{Q}_{\mathrm{PL}}(\bm{t})=\sum_{\mathbf{d}} n_Q(\mathbf{d})\,\bm{t}^{\mathbf{d}} \] of $Q$-multi-colored plane partitions, where we write $\bm{t}$ for $(t_v)_{v\in Q_0}$, $\mathbf{d}=(d_v)_{v\in Q_0}$ and $\bm{t}^{\mathbf{d}}=\prod_{v\in Q_0} t_v^{d_v}$. For simplicity, from now on we only consider the quiver $(Q,W)$ \eqref{eq:the-quiver} with potential from the quantum Fermat quintic threefold. We will rearrange the vertices and arrows, and write \[ \begin{xy} 0;<2pt,0pt>:<0pt,-2pt>:: (40,16) *+[Fo]{1} ="0", (8,40) *+[Fo]{3} ="2", (32,40) *+[Fo]{2} ="1", (0,16) *+[Fo]{4} ="3", (20,0) *+[Fo]{0} ="4", (20,20) *+<2pt>{}, "4":{\ar@<-.5ex>@/^/"0"}, "4":{\ar@<.5ex>@/^/"0"}, "4":{\ar"2"}, "0":{\ar@<-.5ex>@/^/"1"}, "0":{\ar@<.5ex>@/^/"1"}, "0":{\ar"3"}, "1":{\ar@<-.5ex>@/^/"2"}, "1":{\ar@<.5ex>@/^/"2"}, "1":{\ar"4"}, "2":{\ar@<-.5ex>@/^/"3"}, "2":{\ar@<.5ex>@/^/"3"}, "2":{\ar"0"}, "3":{\ar@<-.5ex>@/^/"4"}, "3":{\ar@<.5ex>@/^/"4"}, "3":{\ar"1"}, \end{xy} \] with outer arrows $x_i,y_i:i\to i+1$ and inner arrows $z_i:i\to i+3$. The induced potential is \[ W=\bm{y}\bm{x}\bm{z}-q\bm{y}\bm{z}\bm{x}, \] where $\bm{x}=\sum x_i$, $\bm{y}=\sum y_i$, and $\bm{z}=z_0+qz_1+q^2z_2+q^3z_3+q^4z_4$. \vspace{1mm} The main theorem of this section relates the DT invariants of $(Q,W)$ to $Q$-multi-colored plane partitions.
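Before stating the theorem, here is a small computational aside (our own sketch, not from the text): using the decomposition into $(Q,v)$-colored plane partitions established below, the numbers $n_Q(\mathbf{d})$ for small $\mathbf{d}$ can be tabulated by enumerating $5$-tuples of plane partitions, where a box $(i,j,k)$ of $\pi_v$ receives the color $v+i+j+3k \pmod 5$:

```python
from itertools import product
from collections import Counter

def plane_partitions(max_boxes):
    """All plane partitions with at most max_boxes boxes (brute force)."""
    seen = {frozenset()}
    stack = [frozenset()]
    while stack:
        pi = stack.pop()
        if len(pi) == max_boxes:
            continue
        for b in product(range(max_boxes), repeat=3):
            i, j, k = b
            preds = [(i - 1, j, k), (i, j - 1, k), (i, j, k - 1)]
            if b not in pi and all(p in pi for p in preds if min(p) >= 0):
                new = pi | {b}
                if new not in seen:
                    seen.add(new)
                    stack.append(new)
    return list(seen)

def n_Q(max_total):
    """Counter: dimension vector -> number of Q-multi-colored partitions."""
    pps = plane_partitions(max_total)
    tally = Counter()
    for tup in product(pps, repeat=5):        # tuples (pi_0, ..., pi_4)
        total = sum(len(p) for p in tup)
        if total == 0 or total > max_total:
            continue
        w = [0] * 5
        for v, pi in enumerate(tup):
            for i, j, k in pi:
                w[(v + i + j + 3 * k) % 5] += 1
        tally[tuple(w)] += 1
    return tally

def euler_form(d):
    """<d,d> for Q: two arrows i -> i+1 and one arrow i -> i+3 at each i."""
    arrows = [(i, (i + 1) % 5) for i in range(5)] * 2 \
           + [(i, (i + 3) % 5) for i in range(5)]
    return sum(x * x for x in d) - sum(d[i] * d[j] for i, j in arrows)
```

With the sign $(-1)^{|\mathbf{d}|+\langle\mathbf{d},\mathbf{d}\rangle}$, these counts reproduce the low-degree coefficients of the expansion of $Z^{Q,W}$ displayed later in this section.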
\begin{theorem}\label{thm:DT-to-part} The generating function $Z^{Q,W}$ is, up to signs, given by the generating function $Z^{Q}_{\mathrm{PL}}$ of $Q$-multi-colored plane partitions. More specifically, we have \[ Z^{Q,W}(\bm{t})=\sum_{\mathbf{d}\in N_Q^+}(-1)^{|\mathbf{d}|+\langle \mathbf{d},\mathbf{d}\rangle}\, n_Q(\mathbf{d})\,\bm{t}^{\mathbf{d}}, \] where $|\mathbf{d}|=\sum_i d_i$ and $\langle -,-\rangle$ is the bilinear form associated to the quiver $Q$. \end{theorem} We defer the proof to Subsection~\ref{pf:part}. \vspace{1mm} Observe that this quiver is the same as the McKay quiver of $\mu_5(1,1,3)$. Recall that the vertices of the McKay quiver correspond to the irreducible representations of $\mu_5$, among which there is a distinguished one, the trivial representation. The vertices of $Q$, however, correspond to simple $\sh{A}$-modules, with no distinguished choice among them. There are exactly $5$ ways to identify the quiver $Q$ with the McKay quiver of $\mu_5(1,1,3)$, depending on a choice of a vertex in $Q_0$. In other words, for each $v\in Q_0$, there is a unique bijection $\alpha_v:Q_0\to\mathbb{Z}/5\mathbb{Z}$ with $\alpha_v(v)=0$ identifying the quiver $Q$ with the McKay quiver of $\mu_5(1,1,3)$. \begin{definition} Let $v\in Q_0$ be a vertex. A $(Q,v)$-colored plane partition is a plane partition $\pi$ with the coloring $K_v:\pi\to Q_0$ making $\alpha_v\circ K_v:\pi\to\mathbb{Z}/5\mathbb{Z}$ the $\mu_5(1,1,3)$-coloring of $\pi$. \end{definition} Given five plane partitions $\pi_v$ indexed by $v\in Q_0$, we may define a $Q$-multi-colored plane partition by taking $\pi$ to be the union of the $\pi_v$ with multi-coloring \[ K(i,j,k)=\left\{K_v(i,j,k): (i,j,k)\in\pi_v\right\}\subset Q_0. \] \begin{lemma} Any $Q$-multi-colored plane partition is uniquely determined by the $(Q,v)$-colored plane partitions $\pi_v$ for $v\in Q_0$. \end{lemma} \begin{proof} This follows from the fact that $Q$ can be identified with a McKay quiver of a group action on $\mathbb{C}^3$.
To be more specific, the quiver $Q$ satisfies the following properties: \begin{enumerate} \item[(a)] For each vertex $i$, there are exactly $3$ arrows starting at $i$, labeled with $x,y,z$; and there are exactly $3$ arrows ending at $i$, also labeled with $x,y,z$. \item[(b)] For each vertex $i$, the target of any non-trivial composition of arrows starting at $i$ depends only on the numbers of arrows labeled with $x,y,z$. \end{enumerate} Thus given any plane partition $\pi$ with a $Q$-multi-coloring $K$, we can define $\pi_v$ to be \[ \pi_v=\big\{(i,j,k)\in\pi :K_v(i,j,k)\in K(i,j,k)\big\}. \] It is easy to check that each $\pi_v$ is a plane partition and that the union of the $\pi_v$ is $\pi$. \end{proof} \begin{corollary}\label{cor:multi-colored} We have \[ Z^Q_{\mathrm{PL}}(t_0,t_1,t_2,t_3,t_4)=\prod_{i\in\mathbb{Z}/5\mathbb{Z}}\,Z^{\mu_5(1,1,3)}_{\mathrm{PL}}(t_i,t_{i+1},t_{i+2},t_{i+3},t_{i+4}). \] \end{corollary} This reduces the enumeration of $Q$-multi-colored plane partitions to that of $\mu_5(1,1,3)$-colored plane partitions. \begin{remark} Unfortunately, the signs in the DT invariants $Z^{Q,W}$ and $Z^{[\mathbb{C}^3/\mu_5]}$ do not agree. That is, \[ Z^{Q,W}(t_0,t_1,t_2,t_3,t_4)\neq\prod_{i\in\mathbb{Z}/5\mathbb{Z}}\,Z^{[\mathbb{C}^3/\mu_5]}(t_i,t_{i+1},t_{i+2},t_{i+3},t_{i+4}), \] and there is no obvious modification (e.g.\,a change of variables) that makes them equal. \end{remark} \begin{remark} Both $Z^{Q,W}$ and $Z^{[\mathbb{C}^3/\mu_5]}$ are DT invariants of the quiver $(Q,W)$ with potential, with different framing vectors. To put it another way, they are DT invariants of the same Calabi--Yau-3 category with different stability conditions. Thus there should be a formula (``wall-crossing'') connecting the two series $Z^{Q,W}$ and $Z^{[\mathbb{C}^3/\mu_5]}$. This can be achieved by, for example, Joyce--Song's generalized DT invariants \cite{JS12}.
However, it does not reduce to a simple formula (at least not an obvious one to us), due to the fact that the Euler pairing ${\chi}_Q(-,-)$ of the quiver $Q$ is not trivial. Hence the formula in \cite[Corollary 7.24]{JS12} does not hold. \end{remark} Here we write down the series $Z^{Q,W}$ up to degree $5$. We denote $\bm{t}^{(a_0,\ldots,a_4)}=\sum_i t_i^{a_0}\cdots t_{i+4}^{a_4}$: \[ \begin{split} Z^{Q,W}(\bm{t})=1 &+\bm{t}^{(1,0,0,0,0)}+3\bm{t}^{(1,1,0,0,0)}-2\bm{t}^{(1,0,1,0,0)}\\ & +3\bm{t}^{(1,2,0,0,0)}+\bm{t}^{(2,0,1,0,0)}-8\bm{t}^{(1,1,1,0,0)}+8\bm{t}^{(1,1,0,1,0)}\\ & +\bm{t}^{(1,3,0,0,0)}+3\bm{t}^{(2,1,1,0,0)}-12\bm{t}^{(1,2,1,0,0)}+7\bm{t}^{(1,1,2,0,0)}\\ & -12\bm{t}^{(1,2,0,1,0)}+5\bm{t}^{(1,1,0,2,0)}-34\bm{t}^{(1,1,1,1,0)}\\ & -3\bm{t}^{(2,2,1,0,0)}-4\bm{t}^{(2,1,2,0,0)}-6\bm{t}^{(1,3,1,0,0)}+18\bm{t}^{(1,2,2,0,0)}-2\bm{t}^{(1,1,3,0,0)}\\ & +8\bm{t}^{(1,3,0,1,0)}+10\bm{t}^{(1,2,0,2,0)}+20\bm{t}^{(2,1,1,1,0)}+56\bm{t}^{(1,2,1,1,0)}+35\bm{t}^{(1,1,2,1,0)}\\ &-54\bm{t}^{(1,1,1,2,0)}-171t_0t_1t_2t_3t_4+O((t_0,t_1,t_2,t_3,t_4)^6) \end{split} \] Taking $t=t_0=t_1=t_2=t_3=t_4$, we obtain \[ Z^{Q,W}(t)=1+5t+5t^2+20t^3-210t^4-131t^5+O(t^6). \] We conclude that \[ \begin{split} Z^{\sh{A}}(t) & = \Big(Z^{Q,W}(t)\Big)^{10}\Big( M(-t^5)\Big)^{-50} \\ & = 1 + 50t + 1175t^2 + 17450t^3 + 184275t^4 + 1450740t^5 + O(t^6). \end{split} \] \subsection{Proof of Theorem~\ref{thm:DT-to-part}}\label{pf:part} We mainly follow the method used in the computation of DT invariants on $\mathbb{C}^3$ in \cite{BF08}. The path algebra $\mathbb{C}Q$ is \[ \mathbb{C}Q=\mathbb{C}\langle e,u,v,w\rangle\big/(e^5-1, eu-que, ev-qve, ew-q^3we). \] There is a standard $T=(\mathbb{C}^*)^3$-action on $\mathbb{C}Q$ given by \[ (\lambda_1,\lambda_2,\lambda_3)\cdot(u,v,w)=(\lambda_1 u,\lambda_2 v,\lambda_3 w). \] Let $T_0\subset T$ be the sub-torus defined by $\lambda_1\lambda_2\lambda_3=1$; then $T_0$ fixes the potential $W$, and hence acts on $\mathrm{Jac}(Q,W)$.
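As a purely numerical aside (our own sketch, not part of the argument), the passage from the expansion of $Z^{Q,W}(t)$ above to that of $Z^{\sh{A}}(t)$ can be verified with truncated power-series arithmetic, since $M(-t^5)\equiv 1-t^5 \pmod{t^6}$:

```python
N = 6  # work with power series truncated mod t^6

def mul(a, b):
    """Product of two truncated series given as coefficient lists."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

def power(a, n):
    r = [1] + [0] * (N - 1)
    for _ in range(n):
        r = mul(r, a)
    return r

def inverse(a):
    """Multiplicative inverse of a truncated series with a[0] == 1."""
    r = [1] + [0] * (N - 1)
    for d in range(1, N):
        r[d] = -sum(a[k] * r[d - k] for k in range(1, d + 1))
    return r

zqw = [1, 5, 5, 20, -210, -131]   # Z^{Q,W}(t) from the expansion above
m_neg_t5 = [1, 0, 0, 0, 0, -1]    # M(-t^5) = 1 - t^5 + O(t^10)
za = mul(power(zqw, 10), power(inverse(m_neg_t5), 50))
```

The resulting coefficients of `za` agree with the displayed expansion of $Z^{\sh{A}}(t)$.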
The Hilbert schemes $\mathrm{Hilb}^{\bullet}(Q,W)$ parameterize quotients of $\mathrm{Jac}(Q,W)$, which are equivalent to left ideals of $\mathrm{Jac}(Q,W)$. Therefore the $T_0$-action on $\mathrm{Jac}(Q,W)$ induces a $T_0$-action on $\mathrm{Hilb}^{\bullet}(Q,W)$. The following is a generalization of \cite[Lemma 4.1]{BF08}. \begin{lemma} For each dimension vector $\mathbf{d}$, there is a one-to-one correspondence between $T_0$-fixed points of $\mathrm{Hilb}^{\mathbf{d}}(Q,W)$ and $Q$-multi-colored plane partitions with dimension vector $\mathbf{d}$. \end{lemma} \begin{proof} Since the $e_i$'s are idempotent, any ideal in $\mathrm{Jac}(Q,W)$ is generated by polynomials of the form $e_i f(u,v,w)$ for some $i$ and some polynomial $f$. We want to show that any $T_0$-invariant ideal can be generated by monomials. We remark that the proof of \cite[Lemma 4.1]{BF08} uses Hilbert's Nullstellensatz, hence it does not apply directly here. However, $\mathrm{Jac}(Q,W)$ contains a $T_0$-invariant subring $\mathbb{C}[u^5,v^5,w^5]$. Given a $T_0$-invariant ideal $I$, the intersection $I_0=I\cap\mathbb{C}[u^5,v^5,w^5]$ is a $T_0$-invariant ideal in $\mathbb{C}[u^5,v^5,w^5]$. Therefore $I_0$ is a monomial ideal; in particular, there exists $n$ such that $u^{5n},v^{5n},w^{5n}\in I_0\subset I$. Now, $I$ is generated by eigenvectors of $T_0$, which are polynomials of the form $m(u,v,w)g(uvw)e_i$ for some monomial $m$ and some $g\in\mathbb{C}[t]$ with $g(0)\neq 0$. Suppose $m(u,v,w)g(uvw)e_i\in I$. We write \[ g(uvw)=a_0+a_r(uvw)^r+\ldots \] Then \[ \begin{split} &\left(g(uvw) - \frac{a_r}{a_0}(uvw)^r g(uvw)\right)m(u,v,w)e_i \\ =&\,\big(a_0+\tilde{a}_{2r}(uvw)^{2r}+\ldots\big)m(u,v,w)e_i\in I. \end{split} \] Repeating this process, we get $(a_0+c(uvw)^{N}+\ldots)m(u,v,w)e_i\in I$, where all non-constant terms have order $N\geq 5n$. Since $(uvw)^{N}\in I$ for every $N\geq 5n$, it follows that $a_0m(u,v,w)e_i\in I$, and hence $m(u,v,w)e_i\in I$. This shows that $I$ is a monomial ideal.
For a monomial ideal $I$ in $\mathrm{Jac}(Q,W)$, we associate a $Q$-multi-colored plane partition $\pi$ as follows: we define \[ \pi=\big\{(i,j,k): e_{\ell}u^i v^j w^k\notin I\text{ for some }\ell\big\} \] and a $Q$-multi-coloring \[ K(i,j,k)=\big\{\ell: e_{\ell}u^iv^jw^k\notin I\big\}. \] We leave it to the reader to check that this indeed defines a $Q$-multi-colored plane partition. Conversely, any $Q$-multi-colored plane partition determines a monomial ideal in the obvious way. To compare the dimension vectors, for any (monomial) ideal $I$, there is a natural decomposition \[ I=e_0I\oplus e_1I\oplus e_2I\oplus e_3I\oplus e_4I \] as vector spaces, and the dimension vector $\mathbf{d}$ of the module induced by $I$ is given by \[ d_i=\dim_{\mathbb{C}} \big(e_i\mathrm{Jac}(Q,W)\big)/\big(e_iI\big), \] which agrees with the dimension vector of the associated multi-colored plane partition. \end{proof} \begin{corollary} For any dimension vector $\mathbf{d}$, we have \[ \chi\big(\mathrm{Hilb}^{\mathbf{d}}(Q,W)\big)= n_Q(\mathbf{d}). \] \end{corollary} Now we are ready to complete the proof. \begin{proof}[Proof of Theorem~\ref{thm:DT-to-part}.] Recall that $\mathrm{Hilb}^{\mathbf{d}}(Q,W)$ is the critical locus of the function $\mathrm{Tr}(W)$ on the smooth scheme $\mathrm{Hilb}^{\mathbf{d}}(\mathbb{C}Q)$. Since $T_0$ acts on $\mathbb{C}Q$ and $W\in\mathbb{C}Q$ is $T_0$-invariant, the torus $T_0$ acts on $\mathrm{Hilb}^{\mathbf{d}}(\mathbb{C}Q)$ and the function $\mathrm{Tr}(W)$ is $T_0$-invariant. By \cite[Proposition 3.3]{BF08}, the Behrend function $\nu$ of $\mathrm{Hilb}^{\mathbf{d}}(Q,W)$ is equal to $(-1)^{m}$ at $T_0$-fixed points, where $m$ is the dimension of $\mathrm{Hilb}^{\mathbf{d}}(\mathbb{C}Q)$, and hence \[ \chi_{\mathrm{vir}}\big(\mathrm{Hilb}^{\mathbf{d}}(Q,W)\big)=(-1)^m\chi\big(\mathrm{Hilb}^{\mathbf{d}}(Q,W)\big)=(-1)^m\,n_Q(\mathbf{d}).
\] The Hilbert scheme $\mathrm{Hilb}^{\mathbf{d}}(\mathbb{C}Q)$ is the fine moduli space of framed representations of $Q$ with framing vector $(1,\ldots,1)$, that is, \[ \mathrm{Hilb}^{\mathbf{d}}(\mathbb{C}Q)=\Big(\prod_{(a:i\to j)\in Q_1}\mathrm{Hom}(\mathbb{C}^{d_i},\mathbb{C}^{d_j})\times\prod_i\mathbb{C}^{d_i}\Big)\mathbin{/\mkern-6mu/} \prod_{i\in Q_0}\mathrm{GL}_{d_i}(\mathbb{C}) \] which has dimension \[ m=\dim\mathrm{Hilb}^{\mathbf{d}}(\mathbb{C}Q) =\sum_{a:i\to j} d_id_j + \sum_{i} d_i - \sum_{i} d_i^2 = |\mathbf{d}|-\langle \mathbf{d},\mathbf{d}\rangle_Q. \] \end{proof} \bibliographystyle{plain}
\section{Introduction} \setcounter{equation}{0} Let $\mathcal{H}$ denote the class of complex valued harmonic functions $f$ in $\mathbb{D}$ normalized by $f(0)=f_z(0)-1=0.$ Each such function $f$ can be expressed uniquely as $f=h+\overline{g},$ where $h$ and $g$ have the following power series representations: \begin{equation}\label{intro1} h(z) = z + \sum_{n=2}^{\infty} a_nz^n \quad {\rm and } \quad g(z)=\sum_{n=1}^{\infty} b_nz^n. \end{equation} A result of Lewy \cite{lewy} shows that $f\in \mathcal{H}$ is locally univalent in $\mathbb{D}$ if and only if $J_f(z) = |f_z(z)|^2-|f_{\overline{z}}(z)|^2$ is non-zero in $\mathbb{D},$ and $f$ is sense-preserving if $J_f(z)>0 \;(z\in \mathbb{D}),$ or equivalently, if the dilatation $w=g' /h'$ is analytic and satisfies $|w|<1$ in $\mathbb{D}.$ Observe that the class $\mathcal{H}$ reduces to the class $\mathcal{A}$ of normalized analytic functions if the co-analytic part is zero. Let $\mathcal{S}_\mathcal{H}$ be the subclass of $\mathcal{H}$ consisting of univalent and sense-preserving harmonic mappings in $\mathbb{D}.$ The classical family $\mathcal{S}$ of normalized analytic univalent functions is a subclass of $\mathcal{S}_{\mathcal{H}}$, as $\mathcal{S}=\{f=h+\overline{g}\in \mathcal{S}_{\mathcal{H}}:g\equiv 0 \quad {\rm in} \quad \mathbb{D} \}.$ Also, we set $\mathcal{H}^0=\left\lbrace f\in \mathcal{H}: f_{\overline{z}}(0)=0 \right\rbrace $ and $\mathcal{S}_\mathcal{H}^0=\left\lbrace f\in \mathcal{S} _\mathcal{H}: f_{\overline{z}}(0)=0 \right\rbrace.$ It is well known that the class $\mathcal{S}_\mathcal{H}^0$ is compact and normal, whereas the class $\mathcal{S}_\mathcal{H}$ is normal but not compact. In 1984, Clunie and Sheil-Small \cite{clunie} investigated the class $\mathcal{S}_\mathcal{H},$ together with some of its geometric subclasses. A function $h \in \mathcal{A}$ is called close-to-convex in $\mathbb{D}$ if the complement of $h(\mathbb{D})$ can be written as the union of non-intersecting half lines.
Let $\mathcal{C}$ denote the class of close-to-convex functions in $\mathbb{D}$. By $\mathcal{C}_{\mathcal{H}},$ we denote the class of close-to-convex harmonic mappings $f=h+\overline{g}$ for which $f(\mathbb{D})$ is close-to-convex in $\mathbb{D}.$ An analytic function $h \in \mathcal{A}$ is close-to-convex in $\mathbb{D}$ if there exists a convex function $\phi$ (not necessarily normalized) in $\mathbb{D}$ such that $$\Re\left( \dfrac{h'(z)}{\phi'(z)}\right)> 0\qquad (z\in \mathbb{D}).$$ If $\phi(z)=z,$ then functions $h\in \mathcal{A}$ which satisfy $\Re( h'(z))>0$ are close-to-convex in $\mathbb{D}.$ A function $h \in \mathcal{A}$ is said to be close-to-convex of order $\beta \;(0 \leq \beta <1)$ if it satisfies $\Re(h'(z))>\beta \;(z\in \mathbb{D})$. Let $\mathcal{W}(\alpha,\beta)$ denote the class of functions $h \in \mathcal{A}$ such that $\Re(h'(z)+\alpha zh''(z))>\beta \;\; (\alpha \geq 0, 0 \leq \beta <1).$ The class $\mathcal{W}(\alpha, \beta)$ was studied by Gao and Zhou \cite{AAaa} for $\beta<1$ and $\alpha>0.$ They determined the extreme points of $\mathcal{W}(\alpha, \beta)$ and obtained a number $\beta(\alpha)$ such that $\mathcal{W}(\alpha, \beta)\subset \mathcal{S}^*$ for fixed $\alpha \in [1,\infty).$ The class $\mathcal{W}(\alpha,\beta)$ is a generalization of the class $\mathcal{W}(\alpha) \equiv \mathcal{W}(\alpha,0)$, which was studied by Chichra \cite{chichra77}. In \cite{singh89}, Singh and Singh proved that functions in $\mathcal{W}(1,0)$ are starlike in $\mathbb{D}.$ A harmonic function $f\in \mathcal{H}$ is said to be convex in $\mathbb{D}$ if $f(\mathbb{D})$ is a convex domain.
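To illustrate the definition of $\mathcal{W}(\alpha,\beta)$ numerically (a hedged sketch with parameters of our own choosing, not an example from the text): for $h(z)=z+cz^n$ with $c=(1-\beta)/(n(1+\alpha(n-1)))$, one has $h'(z)+\alpha zh''(z)=1+(1-\beta)z^{n-1}$, whose real part exceeds $\beta$ throughout $\mathbb{D}$. A grid check confirms this:

```python
import cmath

def in_W_alpha_beta(h1, h2, alpha, beta, nr=50, ntheta=200):
    """Check Re(h'(z) + alpha z h''(z)) > beta on a polar grid of the disk."""
    for ir in range(1, nr + 1):
        r = 0.999 * ir / nr
        for it in range(ntheta):
            z = r * cmath.exp(2j * cmath.pi * it / ntheta)
            if (h1(z) + alpha * z * h2(z)).real <= beta:
                return False
    return True

alpha, beta, n = 1.0, 0.25, 3
c = (1 - beta) / (n * (1 + alpha * (n - 1)))
h_prime = lambda z: 1 + c * n * z ** (n - 1)          # h'(z)
h_double = lambda z: c * n * (n - 1) * z ** (n - 2)   # h''(z)
```

Doubling the coefficient $c$ destroys membership, and the same grid check detects this.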
We denote by $\mathcal{K}_{\mathcal{H}}\,$ the class of functions in $\mathcal{H}$ which are convex in $\mathbb{D}.$ A sense-preserving harmonic mapping $f=h+\overline{g} \in \mathcal{H}$ is known to be convex in $\mathbb{D},$ if $\frac{\partial}{\partial \theta}\left(\arg\,\left(\frac{\partial}{\partial \theta}f(re^{i \theta})\right)\right)>0$ for all $z=re^{i \theta}\in \mathbb{D}\setminus\{0\}.$ Hence, $f=h+\overline{g}\in \mathcal{H}$ is convex in $\mathbb{D},$ if $f(z)\neq 0$ for all $z\in \mathbb{D}\setminus\{0\}$ and the condition $$\Re\left\{\dfrac{z(h'(z)+zh''(z))+\overline{z(g'(z)+zg''(z))}}{zh'(z)-zg'(z)} \right\}>0$$ is satisfied for all $z \in \mathbb{D}\setminus\{0\}.$ Let $h\in \mathcal{S}$ be given by $h(z)=\sum_{n=0}^{\infty}a_n z^n.$ Then the $n^{th}$ partial sum (or section) of $h(z)$ is defined by $$s_n(h)=\sum_{k=0}^{n}a_kz^k \quad {\rm for}\quad n\in \mathbb{N},$$ where $a_0=0$ and $a_1=1.$ One of the classical results of Szeg\"{o} \cite{szego28} shows that if $h \in \mathcal{S},$ then the partial sum $s_n(h)(z)=\sum_{k=0}^{n}a_kz^k$ is univalent in the disk $|z|<1/4$ for all $n\geq 2,$ and the number $1/4$ cannot be replaced by a larger one. In \cite{AAA}, Robertson proved that the $n^{th}$ partial sum of the Koebe function $k(z)=z/(1-z)^2$ is starlike in the disk $|z|<1-3n^{-1} \log n \quad (n\geq 5),$ and the number $3$ cannot be replaced by a smaller constant. It is known by a result \cite[p. 256, 273]{Durenp} that $s_n(h)$ is convex, starlike, or close-to-convex in the disk $|z|<1-3n^{-1} \log n\quad (n\geq 5),$ whenever $h$ is convex, starlike or close-to-convex in $\mathbb{D}.$ The largest radius $r_n$ of univalence of $s_n(h) \,(h \in \mathcal{S})$ is not yet known. However, Jenkins \cite{jenkins} (see also \cite[Section 8.2]{Durenp}) observed that $r_n \geq 1-(4+\varepsilon) n^{-1} \,\mbox{log} \,n$ for each $\varepsilon>0$ and for large $n$.
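The sharpness of Szeg\"{o}'s radius $1/4$ mentioned above is already visible for the second section of the Koebe function: $s_2(z)=z+2z^2$ has $s_2'(-1/4)=0$, so $s_2$ cannot be univalent in any disk of radius larger than $1/4$. The following sketch (our own illustration) checks this, together with the coefficients $a_n=n$ of $k$ and the radius $1-3n^{-1}\log n$:

```python
import math

def koebe_coeffs(N):
    """Taylor coefficients of k(z) = z/(1-z)^2, via (1 - 2z + z^2) k(z) = z."""
    a = [0] * (N + 1)
    for n in range(1, N + 1):
        a[n] = (1 if n == 1 else 0) + 2 * a[n - 1] - (a[n - 2] if n >= 2 else 0)
    return a

def s2_prime(z):
    """Derivative of the second section s_2(z) = z + 2 z^2 of the Koebe function."""
    return 1 + 4 * z

def section_radius(n):
    """The radius 1 - 3 log(n)/n appearing in the results quoted above (n >= 5)."""
    return 1 - 3 * math.log(n) / n
```

For instance, `section_radius(5)` is roughly $0.034$, so these radii become useful only for fairly large $n$.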
There exists a considerable amount of results in the literature on partial sums of functions in the class $\mathcal{S}$ and some of its geometric subclasses. Analogously, in the harmonic case, the $(p,q)$-th partial sum of a harmonic mapping $f=h+\overline{g}\in \mathcal{H}$ is defined by $$ s_{p,q}(f)=s_p(h)+\overline{s_q(g)},$$ where $s_p(h)=\sum_{k=1}^{p}a_kz^k$ and $s_q(g)=\sum_{k=1}^{q}b_kz^k$ with $a_1=1$, $p\geq 1$ and $q\geq2$. In \cite{li13}, Li and Ponnusamy studied the radius of univalence of partial sums of functions in the class $\mathcal{P}_\mathcal{H}^0=\{f=h+\bar{g}\in \mathcal{H}^0: \,\, \Re(h'(z))>|g'(z)| \;(z\in \mathbb{D})\}.$ Further, in \cite{s13}, Li and Ponnusamy studied partial sums of functions in the class $\mathcal{P}_\mathcal{H}^0(\alpha)=\{f=h+\overline{g} \in \mathcal{H}^0: \Re(h'(z)-\alpha) > |g'(z)| \; (\alpha<1, \;z \in \mathbb{D})\}$. Recently, Ghosh and Vasudevarao \cite{ghosh17} studied the class of harmonic mappings $\mathcal{W}_\mathcal{H}^0(\alpha)=\{f=h+\bar{g} \in \mathcal{H}^0 : \,\Re(h'(z)+\alpha zh''(z))> |g'(z)+\alpha zg''(z)|\; (z \in \mathbb{D})\}$ and gave some results concerning growth, convolution and convex combinations for the members of the class $\mathcal{W}_\mathcal{H}^0(\alpha).$ For two analytic functions $\psi_1(z)=\sum_{n=0}^{\infty}a_nz^n$ and $\psi_2(z)=\sum_{n=0}^{\infty}b_nz^n,$ the convolution (or Hadamard product) is defined by $ \left( \psi_1\ast \psi_2 \right) (z)= \sum_{n=0}^{\infty} a_nb_nz^n \; (z\in\mathbb{D}).$ Analogously, in the harmonic case, for two harmonic mappings $f_1=h_1+\overline{g_1}$ and $f_2=h_2+\overline{g_2}$ in $\mathcal{H}$ with the power series of the form $$f_1(z)=z+\sum_{n=2}^{\infty}a_nz^n+\overline{\sum_{n=1}^{\infty}b_n\, z^n} \quad {\rm and} \quad f_2(z)=z+\sum_{n=2}^{\infty}A_nz^n+\overline{\sum_{n=1}^{\infty}B_n\, z^n},$$ we define the harmonic convolution as follows: $$\; f_1 \ast f_2 = h_1\ast h_2+\overline{g_1\ast g_2}=z+\sum_{n=2}^{\infty}a_n A_n
z^n+\overline{\sum_{n=1}^{\infty}b_n\,B_n\, z^n}.$$ Clearly, the class $\mathcal{H}$ is closed under the convolution, {\it i.e.} $\mathcal{H}\ast \mathcal{H}\subset \mathcal{H}.$ In the case of conformal mappings, the literature on convolution theory is extensive. Unfortunately, most of these results do not necessarily carry over to the class of univalent harmonic mappings in $\mathbb{D}.$ We refer to \cite{droff1, kumar, ELiu} for more information about convolutions of harmonic mappings. \medskip We now define a new class of close-to-convex harmonic mappings as follows: \medskip \noindent {\bf Definition 1.1.} For $\alpha \geq 0$ and $0 \leq \beta <1$, let $\mathcal{W}_\mathcal{H}^0(\alpha,\beta)$ denote the class of harmonic mappings $f=h+\overline{g}$ defined by \begin{equation*} \mathcal{W}_\mathcal{H}^0(\alpha,\beta)=\{f=h+\bar{g} \in \mathcal{H}^0: \;\Re(h'(z)+\alpha zh''(z)-\beta)> |g'(z)+\alpha zg''(z)|\quad (z \in \mathbb{D}) \}. \end{equation*} We observe that the class $\mathcal{W}_{\mathcal{H}}^0(\alpha,\beta)$ generalizes several previously studied classes of harmonic mappings, as $\mathcal{W}_{\mathcal{H}}^0(\alpha,0) \equiv \mathcal{W}_{\mathcal{H}}^0(\alpha)$ \;\;(see \cite{ghosh17}), $\mathcal{W}_\mathcal{H}^0(0,\beta) \equiv \mathcal{P}_\mathcal{H}^0(\beta)$ \;(see \cite{s13}), $\mathcal{W}_\mathcal{H}^0(1,0) \equiv \mathcal{W}_\mathcal{H}^0$ \;(see \cite{nagpal14}), and $\mathcal{W}_\mathcal{H}^0(0,0)\equiv \mathcal{P}_{\mathcal{H}}^0$ \;(see \cite{li13}). In this article, we establish that functions in the class $\mathcal{W}_\mathcal{H}^0(\alpha,\beta)$ are close-to-convex in $\mathbb{D}$. In section $3,$ we obtain certain coefficient inequalities and growth results for the functions in $\mathcal{W}_\mathcal{H}^0(\alpha,\beta)$. In section $4,$ we prove that the functions in $\mathcal{W}_\mathcal{H}^0(\alpha,\beta)$ are closed under convex combinations and establish certain convolution results.
In section $5$, we determine the radius of convexity of the partial sums $s_{p,q}(f)$ of functions in $\mathcal{W}_\mathcal{H}^0(\alpha,\beta)$. Finally, in section $6,$ we consider harmonic mappings which involve the hypergeometric function and obtain conditions on its parameters such that it belongs to the class $\mathcal{W}_\mathcal{H}^0(\alpha,\beta).$ Further, we construct univalent harmonic polynomials belonging to $\mathcal{W}_\mathcal{H}^0(\alpha,\beta).$ The following results will be needed in our investigation. \medskip \noindent \begin{lemma} \label{LEMA} \,(see \cite{goodman}). Let $p$ be analytic in $\mathbb{D}$ with $p'\in \mathcal{P},$ where $\mathcal{P}$ denotes the class of Carath\'eodory functions in $\mathbb{D}.$ Then $$\left|p'(z)\right|\geq \dfrac{1-|z|}{1+|z|}\qquad {\rm and} \qquad \left|\dfrac{p''(z)}{p'(z)}\right|\leq \dfrac{2}{1-|z|^2} \quad (z\in \mathbb{D}).$$ These inequalities are sharp. Equality occurs for suitable $z\in \mathbb{D}$ if and only if $p(z)=-z-2e^{-i \theta}\log (1-z e^{i \theta}) \quad (0\leq \theta\leq 2 \pi).$ \end{lemma} \begin{lemma}[see \cite{clunie}]\label{lm.6} If the harmonic mapping $f=h+\overline{g}:\mathbb{D}\rightarrow\mathbb{C}$ satisfies $|g'(0)|<|h'(0)|$ and the function $F_\epsilon=h+\epsilon g$ is close-to-convex for every $|\epsilon|=1,$ then $f$ is close-to-convex. \end{lemma} \section{The Close-to-Convexity} \setcounter{equation}{0} The first result provides a one-to-one correspondence between the class $\mathcal{W}_{\mathcal{H}}^0(\alpha, \beta)$ of harmonic mappings and the class $\mathcal{W}(\alpha, \beta)$ of analytic functions. \begin{theorem} The harmonic mapping $f=h+\overline{g} \in \mathcal{W}_\mathcal{H}^0(\alpha,\beta)$ if and only if $F_{\epsilon}=h+\epsilon g \in \mathcal{W}(\alpha,\beta)$ for each $|\epsilon|=1$. \label{th1} \end{theorem} \begin{proof} Let $f=h+\overline{g} \in \mathcal{W}_\mathcal{H}^0(\alpha,\beta)$.
Then for each $|\epsilon|=1$, we have \begin{eqnarray*} \Re(F_{\epsilon}'(z)+\alpha zF_{\epsilon}''(z))&=& \Re\big(h'(z)+\epsilon g'(z)+\alpha z(h''(z)+\epsilon g''(z))\big)\\ &=& \Re\big(h'(z)+\alpha zh''(z)+\epsilon (g'(z)+\alpha zg''(z))\big)\\ &\geq& \Re(h'(z)+\alpha zh''(z))-|g'(z)+\alpha zg''(z)|> \beta \quad (z \in \mathbb{D}). \end{eqnarray*} Hence $F_{\epsilon} \in \mathcal{W}(\alpha,\beta)$ for each $|\epsilon|=1$. Conversely, let $F_{\epsilon} \in \mathcal{W}(\alpha,\beta).$ Then \begin{equation*} \Re(h'(z)+\alpha zh''(z))> \Re(-\epsilon (g'(z)+\alpha zg''(z)))+\beta \quad (z\in \mathbb{D}). \end{equation*} Since $\epsilon\,(|\epsilon|=1)$ is arbitrary, for an appropriate choice of $\epsilon$ we obtain \begin{equation*} \Re(h'(z)+\alpha zh''(z)-\beta) >|g'(z)+\alpha zg''(z)| \quad (z\in \mathbb{D}), \end{equation*} and hence we conclude that $f \in \mathcal{W}_\mathcal{H}^0(\alpha,\beta)$. \end{proof} To establish the next result, we first show that functions in the class $\mathcal{W}(\alpha,\beta)$ are close-to-convex in $\mathbb{D}$; the proof relies on the following result. \begin{lemma} (Jack's Lemma \cite{jack71}) Let $\omega(z)$ be analytic in $\mathbb{D}$ with $\omega(0)=0.$ If $|\omega(z)|$ attains its maximum value on the circle $|z|=r<1$ at a point $z_0\in \mathbb{D}$, then we have $z_0 \omega'(z_0)=k\omega(z_0)$ for a real number $k\geq 1.$ \label{3} \end{lemma} \begin{lemma} If $f \in \mathcal{W}(\alpha,\beta)$, then $\Re(f'(z))>\beta \,\,(0\leq\beta<1)$, and hence $f$ is close-to-convex in $\mathbb{D}$. \label{5} \end{lemma} \begin{proof} If $f \in \mathcal{W}(\alpha,\beta),$ then $\Re (\psi(z))>0$, where $\psi(z)=f'(z)+\alpha z f''(z)-\beta$. Let $w$ be an analytic function in $\mathbb{D}$ such that $w(0)=0$ and \begin{equation*} f'(z)=\frac{1+(1-2\beta)w(z)}{1-w(z)}. \end{equation*} To prove the result, we need to show that $|w(z)|<1$ for all $z$ in $\mathbb{D}$.
If not, then by Lemma \ref{3}, we can find some $\xi\,(|\xi|<1)$ such that $|w(\xi)|=1$ and $\xi w'(\xi)=kw(\xi)$, where $k \geq 1$. A computation gives \begin{eqnarray*} \Re \left\{\psi(\xi) \right\}&=&\Re \left\{\frac{1+(1-2\beta)w(\xi)}{1-w(\xi)}+\frac{2\alpha k(1-\beta)w(\xi)}{(1-w(\xi))^2}-\beta \right\} \\ &=& \Re \left\{\frac{ 2\alpha k(1-\beta)w(\xi)}{(1-w(\xi))^2} \right\} = - \frac{4\alpha k(1-\beta) (1-\Re(w(\xi)))}{|1-w(\xi)|^4} \leq 0 \end{eqnarray*} for $|w(\xi)|=1$. This contradicts the hypothesis that $\Re(\psi(z))>0$ in $\mathbb{D}$. Hence $|w(z)|<1,$ which leads to $\Re(f'(z))>\beta \,\,(0\leq\beta<1),$ and hence $f$ is close-to-convex in $\mathbb{D}$, since functions whose derivative has positive real part are close-to-convex. \end{proof} \begin{theorem} \label{newthm} The functions in the class $\mathcal{W}_{\mathcal{H}}^0(\alpha, \beta)$ are close-to-convex in $\mathbb{D}.$ \end{theorem} \begin{proof} From Lemma \ref{5}, we find that the functions $F_\epsilon=h+\epsilon g \in \mathcal{W}(\alpha, \beta)$ are close-to-convex in $\mathbb{D}$ for each $\epsilon\,(|\epsilon|=1).$ Now in view of Lemma \ref{lm.6} and Theorem \ref{th1}, we obtain that functions in $\mathcal{W}_{\mathcal{H}}^0(\alpha, \beta)$ are close-to-convex in $\mathbb{D}.$ \end{proof} \section{Coefficient Inequalities and Growth Estimates} \setcounter{equation}{0} The following result provides a sharp coefficient bound for functions in $\mathcal{W}_{\mathcal{H}}^0(\alpha, \beta).$ \begin{theorem} \label{thm1} Let $f=h+\overline{g}\in \mathcal{W}_\mathcal{H}^0(\alpha,\beta)$ be of the form \eqref{intro1} with $b_1=0.$ Then we have \begin{equation} |b_n|\leq\dfrac{1-\beta}{n(1+\alpha(n-1))}. \label{eq6} \end{equation} The result is sharp and equality in \eqref{eq6} is obtained by $f(z)=z+\dfrac{1-\beta}{n(1+\alpha(n-1))}\overline{z}^n$.
\end{theorem} \begin{proof} Since $f=h+\overline{g} \in \mathcal{W}_\mathcal{H}^0(\alpha,\beta)$, using the series representation of $g$, we have \begin{eqnarray*} r^{n-1} n(1+\alpha(n-1))|b_n|&\leq& \frac{1}{2\pi}\int_{0}^{2\pi}|g'(re^{i\theta })+\alpha re^{i\theta}g''(re^{i\theta})|d\theta\\ &<& \frac{1}{2\pi}\int_{0}^{2\pi}\{\Re(h'(re^{i\theta})+\alpha re^{i\theta}h''(re^{i\theta}))-\beta\}d\theta \\ &=& \dfrac{1}{2 \pi} \int_0^{2\pi}\Re\Big\{1-\beta+ \sum_{m=2}^{\infty}m(1+\alpha (m-1))a_m r^{m-1} e^{i(m-1)\theta} \Big\}d \theta =1-\beta. \end{eqnarray*} Letting $r\rightarrow 1^{-}$ gives the desired bound. Further, it is easy to see that the equality in \eqref{eq6} is obtained for the function $f(z)=z+\dfrac{1-\beta}{n(1+\alpha(n-1))}\overline{z}^n$. \end{proof} \begin{theorem}\label{th3} Let $f=h+\overline{g} \in \mathcal{W}_\mathcal{H}^0(\alpha,\beta)$ be of the form \eqref{intro1} with $b_1=0.$ Then for $n \geq 2$, we have \begin{itemize} \item[(i)] $|a_n|+|b_n|\leq \dfrac{2(1-\beta)}{n(1+\alpha(n-1))},$ \item[(ii)] $||a_n|-|b_n||\leq \dfrac{2(1-\beta)}{n(1+\alpha(n-1))},$ \item[(iii)] $|a_n|\leq \dfrac{2(1-\beta)}{n(1+\alpha(n-1))}. $ \end{itemize} All these results are sharp for the function $f(z)=z+\sum_{n=2}^{\infty}\dfrac{2(1-\beta)}{n(1+\alpha(n-1))}\overline{z}^n$. \end{theorem} \begin{proof} (i) Since $f=h+\overline{g} \in \mathcal{W}_\mathcal{H}^0(\alpha,\beta)$, Theorem \ref{th1} implies that $F_{\epsilon}=h+\epsilon g \in \mathcal{W}(\alpha,\beta)$ for each $\epsilon\,(|\epsilon|=1)$. Thus for each $|\epsilon|=1$, we have \begin{equation*} \Re((h+\epsilon g)'(z)+\alpha z(h+\epsilon g)''(z))>\beta \quad {\rm{for}} \quad z \in \mathbb{D}. \end{equation*} This implies that there exists a {\it Carath\'{e}odory} function of the form $p(z)=1+\sum_{n=1}^{\infty}p_nz^n$, with $\Re(p(z))>0$ in $\mathbb{D}$, such that \begin{equation} h'(z)+\alpha zh''(z)+\epsilon (g'(z)+\alpha zg''(z))=\beta+(1-\beta)p(z).
\label{eq7} \end{equation} Comparing coefficients on both sides of \eqref{eq7}, we obtain \begin{equation} n(1+\alpha(n-1))(a_n+\epsilon b_n)=(1-\beta)p_{n-1}\quad {\rm for} \quad n\geq 2. \label{eq8} \end{equation} Since $|p_n|\leq 2$ for $n\geq 1$ (see \cite[p.~41]{Durenp}) and $\epsilon\,(|\epsilon|=1)$ is arbitrary, the result follows from \eqref{eq8}. Parts (ii) and (iii) follow from part (i). \end{proof} \medskip The following result gives a sufficient condition for a function to be in the class $\mathcal{W}_\mathcal{H}^0(\alpha,\beta)$. \begin{theorem}\label{th5} Let $f=h+\overline{g} \in \mathcal{H}^0$, where $h$ and $g$ are of the form \eqref{intro1}. If \begin{equation} \sum_{n=2}^{\infty}n(1+\alpha(n-1))(|a_n|+|b_n|)\leq 1-\beta, \label{eq13} \end{equation} then $f \in \mathcal{W}_\mathcal{H}^0(\alpha,\beta)$. \end{theorem} \begin{proof} If $f=h+\bar{g} \in \mathcal{H}^0$, then using \eqref{eq13}, we have \begin{eqnarray*} \Re(h'(z)+\alpha zh''(z))&=& \Re\Big(1+\sum_{n=2}^{\infty} n(1+\alpha(n-1))\,a_n \,z^{n-1}\Big) \\ &\geq& 1-\sum_{n=2}^{\infty}n(1+\alpha(n-1))|a_n| \geq\sum_{n=2}^{\infty}n(1+\alpha(n-1))|b_n|+\beta\\ &\geq& \Big|\sum_{n=2}^{\infty}n(1+\alpha(n-1)) \,b_n\,z^{n-1}\Big|+\beta =|g'(z)+\alpha zg''(z)|+\beta, \end{eqnarray*} and so $f \in \mathcal{W}_\mathcal{H}^0(\alpha,\beta)$. \end{proof} \medskip The following theorem gives sharp growth estimates for functions in the class $\mathcal{W}_{\mathcal{H}}^0(\alpha, \beta).$ \begin{theorem} \label{th4} If $f=h+\overline{g} \in \mathcal{W}_\mathcal{H}^0(\alpha,\beta)$, then \begin{equation} |z|+2\sum_{n=2}^{\infty}\dfrac{(-1)^{n-1}(1-\beta)|z|^n}{\alpha n^2+n(1-\alpha)} \leq |f(z)|\leq |z|+2\sum_{n=2}^{\infty}\dfrac{(1-\beta)|z|^n}{\alpha n^2+n(1-\alpha)}. \label{eq9} \end{equation} Both the inequalities are sharp when $f(z)=z+\sum_{n=2}^{\infty} \dfrac{2(1-\beta)}{\alpha n^2+n(1-\alpha)}\overline{z}^n$, or its rotations. \end{theorem} \begin{proof} Let $f=h+\bar{g} \in \mathcal{W}_\mathcal{H}^0(\alpha,\beta)$.
Then $F_{\epsilon}=h+\epsilon g \in \mathcal{W}(\alpha,\beta)$ for each $\epsilon\,(|\epsilon|=1)$. Thus there exists an analytic function $w(z)$ with $w(0)=0$ and $|w(z)|<1$ in $\mathbb{D}$, such that \begin{equation} F_{\epsilon}'(z)+\alpha zF_{\epsilon}''(z)=\frac{1+(1-2\beta)w(z)}{1-w(z)}. \label{eq11} \end{equation} Simplifying \eqref{eq11} (we may assume $\alpha>0$; for $\alpha=0$ the bounds below follow directly from \eqref{eq11}), we get \begin{eqnarray*} z^{1/\alpha}F_{\epsilon}'(z) =\dfrac{1}{\alpha}\int_{0}^{z}\xi^{\frac{1}{\alpha}-1}\frac{1+(1-2\beta)w(\xi)}{1-w(\xi)}d\xi = \dfrac{1}{\alpha}\int_{0}^{|z|}(te^{i\theta})^{\frac{1}{\alpha}-1}\frac{1+(1-2\beta)w(te^{i\theta})}{1-w(te^{i\theta})}e^{i\theta}dt. \end{eqnarray*} Therefore using the Schwarz lemma, we have \begin{eqnarray*} |z^{1/\alpha}F_{\epsilon}'(z)|=\Big|\dfrac{1}{\alpha}\int_{0}^{|z|}(te^{i\theta})^{\frac{1}{\alpha}-1}\frac{1+(1-2\beta)w(te^{i\theta})}{1-w(te^{i\theta})}e^{i\theta}dt\Big| \leq \frac{1}{\alpha}\int_{0}^{|z|}t^{\frac{1}{\alpha}-1}\frac{1+(1-2\beta)t}{1-t}dt, \end{eqnarray*} and \begin{eqnarray*} |z^{1/\alpha}F_{\epsilon}'(z)|&=&\Big|\dfrac{1}{\alpha}\int_{0}^{|z|}(te^{i\theta})^{\frac{1}{\alpha}-1}\frac{1+(1-2\beta)w(te^{i\theta})}{1-w(te^{i\theta})}e^{i\theta}dt\Big|\\ &\geq &\dfrac{1}{\alpha}\int_{0}^{|z|}t^{\frac{1}{\alpha}-1}\; \Re\left(\frac{1+(1-2\beta)w(te^{i\theta})}{1-w(te^{i\theta})}\right)dt\\ &\geq&\frac{1}{\alpha}\int_{0}^{|z|}t^{\frac{1}{\alpha}-1}\; \frac{1-(1-2\beta)t}{1+t}dt. \end{eqnarray*} Further computation gives \begin{equation} |F_{\epsilon}'(z)|=|h'(z)+\epsilon g'(z)|\leq 1+2(1-\beta)\sum_{n=1}^{\infty}\frac{|z|^n}{1+\alpha n}, \label{eq12} \end{equation} and \begin{equation*} |F_{\epsilon}'(z)|=|h'(z)+\epsilon g'(z)|\geq 1+2(1-\beta)\sum_{n=1}^{\infty}\frac{(-1)^n|z|^n}{1+\alpha n}.
\end{equation*} Since $\epsilon\,(|\epsilon|=1)$ is arbitrary, it follows from \eqref{eq12} that \begin{equation*} |h'(z)|+| g'(z)|\leq 1+2(1-\beta)\sum_{n=1}^{\infty}\frac{|z|^n}{1+\alpha n}, \end{equation*} and \begin{equation*} |h'(z)|-| g'(z)|\geq 1+2(1-\beta)\sum_{n=1}^{\infty}\frac{(-1)^n|z|^n}{1+\alpha n}. \end{equation*} Let $\Gamma$ be the radial segment from 0 to $z$, then \begin{eqnarray*} |f(z)|&=&\Big|\int_\Gamma \dfrac{\partial f}{\partial \xi}d\xi +\frac{\partial f}{\partial \bar{\xi}}d\bar{\xi}\Big|\leq \int_\Gamma (|h'(\xi)|+| g'(\xi)|)|d\xi|\\ &\leq& \int_{0}^{|z|}\Big( 1+2(1-\beta)\sum_{n=1}^{\infty}\dfrac{|t|^n}{1+\alpha n}\Big)dt=|z|+2(1-\beta)\sum_{n=2}^{\infty}\frac{|z|^n}{\alpha n^2+(1-\alpha)n}, \end{eqnarray*} and \begin{eqnarray*} |f(z)|&=&\Big|\int_\Gamma \dfrac{\partial f}{\partial \xi}d\xi +\frac{\partial f}{\partial \bar{\xi}}d\bar{\xi}\Big|\geq \int_\Gamma (|h'(\xi)|-| g'(\xi)|)|d\xi|\\ &\geq& \int_{0}^{|z|}\Big( 1+2(1-\beta )\sum_{n=1}^{\infty}\frac{(-1)^n|t|^n}{1+\alpha n}\Big)dt=|z|+2(1-\beta)\sum_{n=2}^{\infty}\frac{(-1)^{n-1}|z|^n}{\alpha n^2+(1-\alpha)n}. \end{eqnarray*} Equality in \eqref{eq9} holds for the function $ f(z)=z+\sum_{n=2}^{\infty}\dfrac{2(1-\beta)}{\alpha n^2+(1-\alpha)n}\overline{ z}^n$ or its rotations. \end{proof} \section{Convex combinations and convolutions} \setcounter{equation}{0} In this section, we prove that the class $\mathcal{W}_\mathcal{H}^0(\alpha,\beta)$ is closed under convex combinations and convolutions. A sequence $\{c_n\}_{n=0}^{\infty}$ of non-negative real numbers is said to be a convex null sequence if $c_n\rightarrow 0$ as $n\rightarrow \infty$, and $c_0-c_1\geq c_1-c_2 \geq c_2-c_3 \geq\cdots\geq c_{n-1}-c_n\geq \cdots\geq 0.$ To prove the convolution results, we shall need the following Lemmas \ref{7} and \ref{8}.
\begin{lemma} \cite{LFlf} \label{7} If $\{c_n\}_{n=0}^{\infty}$ is a convex null sequence, then the function $q(z)=\dfrac{c_0}{2}+\sum_{n=1}^{\infty}c_nz^n$ is analytic and $\Re(q(z)) >0$ in $\mathbb{D}$. \end{lemma} \begin{lemma}\cite{singh89}\label{8} Let the function $p$ be analytic in $\mathbb{D}$ with $p(0)=1$ and $\Re(p(z))>1/2$ in $\mathbb{D}$. Then for any analytic function $f$ in $\mathbb{D}$, the function $p*f$ takes values in the convex hull of the image of $\mathbb{D}$ under $f$. \end{lemma} \begin{theorem}\label{th6} The class $\mathcal{W}_\mathcal{H}^0(\alpha,\beta)$ is closed under convex combinations. \end{theorem} \begin{proof} Let $f_i=h_i+\overline{g_i} \in \mathcal{W}_\mathcal{H}^0(\alpha,\beta)$ for $i=1,2,\ldots,n$ and $\sum_{i=1}^{n}t_i=1\;(0\leq t_i \leq 1)$. Write the convex combination of the $f_i$ as \begin{equation*} f(z)=\sum_{i=1}^{n}t_if_i(z)=h(z)+\overline{g(z)}, \end{equation*} where $ h(z)=\sum_{i=1}^{n}t_ih_i(z)$ and $ g(z)=\sum_{i=1}^{n}t_ig_i(z)$. Clearly both $h$ and $g$ are analytic in $\mathbb{D}$ with $h(0)=g(0)=h'(0)-1=g'(0)=0.$ A simple computation yields \begin{eqnarray*} \Re(h'(z)+\alpha zh''(z))&=& \Re\Big(\sum_{i=1}^{n}t_i(h_i'(z)+\alpha zh_i''(z))\Big) > \sum_{i=1}^{n}t_i(|g_i'(z)+\alpha zg_i''(z)|+\beta)\\ &\geq& |g'(z)+\alpha zg''(z)|+\beta. \end{eqnarray*} This shows that $f \in \mathcal{W}_\mathcal{H}^0(\alpha,\beta)$. \end{proof} \begin{lemma}\label{9} If $F \in \mathcal{W}(\alpha,\beta)$, then $\Re\Big(\dfrac{F(z)}{z}\Big) > \dfrac{1}{2-\beta}$.
\end{lemma} \begin{proof} Let $F \in \mathcal{W}(\alpha,\beta)$ be given by $F(z)=z+\sum_{n=2}^{\infty}A_nz^n$. Then \begin{equation*} \Re \Big(1+\sum_{n=2}^{\infty} n(1+\alpha (n-1))A_n z^{n-1}\Big)>\beta \quad (z \in \mathbb{D}), \end{equation*} which is equivalent to $\Re(p(z))>\dfrac{1}{2-\beta}\geq \dfrac{1}{2}\;$ in $\mathbb{D}$, where $p(z)=1+\dfrac{1}{2-\beta}\sum_{n=2}^{\infty}n(1+\alpha (n-1))A_n z^{n-1}.$ Now consider the sequence $\{c_n\}_{n=0}^{\infty}$ defined by $c_0=1$ and $c_{n-1}=\dfrac{2-\beta}{n(1+\alpha(n-1))}$ for $n\geq 2$. We can easily see that $\{c_n\}_{n=0}^{\infty}$ is a convex null sequence, and hence in view of Lemma \ref{7}, the function $q(z)=\frac{1}{2}+\sum_{n=2}^{\infty}\dfrac{2-\beta}{n(1+\alpha(n-1))}z^{n-1}$ is analytic and $\Re(q(z))>0$ in $\mathbb{D}$. Further $$\frac{F(z)}{z}=p(z)*\Big(1+\sum_{n=2}^{\infty}\dfrac{2-\beta}{n(1+\alpha(n-1))}z^{n-1}\Big).$$ Hence an application of Lemma \ref{8} gives that $\Re\Big(\dfrac{F(z)}{z}\Big)>\dfrac{1}{2-\beta}$ for $z\in \mathbb{D}$. \end{proof} \begin{lemma}\label{lm10} Let $F_1$ and $F_2$ belong to $\mathcal{W}(\alpha,\beta)$; then $F_1*F_2 \in \mathcal{W}(\alpha,\beta)$. \end{lemma} \begin{proof} The convolution of $F_1=z+\sum_{n=2}^{\infty}A_nz^n$ and $F_2=z+\sum_{n=2}^{\infty}B_nz^n$ is given by $$ F(z)=(F_1*F_2)(z)=z+\sum_{n=2}^{\infty}A_nB_nz^n.$$ Since $zF'(z)=zF_1'(z)*F_2(z)$, a computation shows that \begin{equation} \frac{1}{1-\beta} \big(F'(z)+z\alpha F''(z)-\beta \big) =\frac{1}{1-\beta}(F_1'(z)+z\alpha F_1''(z)-\beta)*\Big(\frac{F_2(z)}{z}\Big). \label{eq14} \end{equation} Since $F_1 \in \mathcal{W}(\alpha,\beta)$, it satisfies $\Re(F_1'(z)+\alpha zF_1''(z)-\beta)>0.$ Further, from Lemma \ref{9}, we have $\Re\Big(\dfrac{F_2(z)}{z}\Big)> \dfrac{1}{2-\beta}\geq\dfrac{1}{2}$ in $\mathbb{D}$. Now applying Lemma \ref{8}, we get $F=F_1*F_2 \in \mathcal{W}(\alpha,\beta)$.
\end{proof} Now using Lemma \ref{lm10}, we will show that the class $\mathcal{W}_\mathcal{H}^0(\alpha,\beta)$ is closed under convolutions. \begin{theorem}\label{11} If the functions $f_1$ and $f_2$ belong to $\mathcal{W}_\mathcal{H}^0(\alpha,\beta),$ then $f_1*f_2 \in \mathcal{W}_\mathcal{H}^0(\alpha,\beta)$. \end{theorem} \begin{proof} Let the functions $f_1=h_1+\overline{g_1}$ and $f_2=h_2+\overline{g_2}$ belong to $\mathcal{W}_\mathcal{H}^0(\alpha,\beta)$. To show $f_1*f_2 \in \mathcal{W}_\mathcal{H}^0(\alpha,\beta)$, it is sufficient to show that $F_{\epsilon}=h_1*h_2+\epsilon( {g_1*g_2}) \in \mathcal{W}(\alpha,\beta)$ for each $\epsilon\,(|\epsilon|=1)$. By Lemma \ref{lm10}, $\mathcal{W}(\alpha,\beta)$ is closed under convolutions. Since $h_i+\epsilon g_i \in \mathcal{W}(\alpha,\beta)$ for each $\epsilon\,(|\epsilon|=1)$ and for $i=1,2$, both $F_1$ and $F_2$ given by \begin{equation*} F_1(z)=(h_1-g_1)*(h_2-\epsilon g_2) \quad \mbox{and} \quad F_2(z)=(h_1+g_1)*(h_2+\epsilon g_2), \end{equation*} belong to $\mathcal{W}(\alpha,\beta)$. Since $\mathcal{W}(\alpha,\beta)$ is closed under convex combinations, the function $F_{\epsilon}=\dfrac{1}{2}(F_1+F_2)=(h_1*h_2)+\epsilon (g_1*g_2)$ belongs to $\mathcal{W}(\alpha,\beta)$. Hence $\mathcal{W}_\mathcal{H}^0(\alpha,\beta)$ is closed under convolution. \end{proof} In \cite{Goodloe02}, Goodloe considered the Hadamard product of a harmonic function with an analytic function defined as follows: $$f\,\widehat{*}\phi=h*\phi+\overline{g*\phi},$$ where $f=h+\overline{g}$ is a harmonic function and $\phi$ is an analytic function in $\mathbb{D}.$ \begin{theorem}\label{12} Let $f \in \mathcal{W}_{\mathcal{H}}^0(\alpha,\beta)$ and $\phi \in \mathcal{A}$ be such that $\Re\Big(\dfrac{\phi(z)}{z}\Big)> \dfrac{1}{2}$ for $z \in \mathbb{D}$. Then $f\,\widehat{*}\phi \in \mathcal{W}_\mathcal{H}^0(\alpha,\beta)$. \end{theorem} \begin{proof} Let $f=h+\overline{g} \in \mathcal{W}_\mathcal{H}^0(\alpha,\beta)$.
To prove that $f\,\widehat{*}\phi$ belongs to $\mathcal{W}_\mathcal{H}^0(\alpha,\beta)$, it suffices to prove that $F_{\epsilon}=h*\phi +\epsilon (g*\phi)$ belongs to $\mathcal{W}(\alpha,\beta)$ for each $\epsilon\, (|\epsilon|=1)$. Since $f=h+\overline{g}\in \mathcal{W}_\mathcal{H}^0(\alpha,\beta),$ the function $h+\epsilon g$ belongs to $\mathcal{W}(\alpha,\beta)$ for each $\epsilon\,(|\epsilon|=1)$ by Theorem \ref{th1}. Therefore \begin{equation*} \frac{1}{1-\beta} (F_\epsilon'(z)+\alpha zF_{\epsilon}''(z)-\beta)=\frac{1}{1-\beta}\big((h+\epsilon g)'(z)+\alpha z(h+\epsilon g)''(z)-\beta\big)*\frac{\phi(z)}{z}. \end{equation*} Since $\Re\Big(\dfrac{\phi(z)}{z}\Big)> \dfrac{1}{2}$\, and\, $\Re((h+\epsilon g)'(z)+\alpha z(h+\epsilon g)''(z))>\beta$ in $\mathbb{D}$, in view of Lemma \ref{8} we obtain that $F_\epsilon \in \mathcal{W}(\alpha,\beta)$. \end{proof} \begin{corollary} Suppose $f \in \mathcal{W}_\mathcal{H}^0(\alpha,\beta)$ and $\phi \in \mathcal{K}$; then $f\,\widehat*\phi \in \mathcal{W}_\mathcal{H}^0(\alpha,\beta)$. \end{corollary} \begin{proof} It is well known that if $\phi$ is convex, then $\Re\Big(\dfrac{\phi(z)}{z}\Big)> \dfrac{1}{2}$ for $z \in \mathbb{D}$. Hence the result follows from Theorem \ref{12}.
\end{proof} \section{Partial sums} \setcounter{equation}{0} In this section, we determine the value of $r$ such that the partial sums of $f \in \mathcal{W}_{\mathcal{H}}^0(\alpha, \beta)$ are convex in the disk $|z|<r.$ \begin{theorem}\label{P1} Let $f=h+\overline{g}\in \mathcal{W}_{\mathcal{H}}^0(\alpha, \beta).$ If $p$ and $q$ satisfy one of the following conditions: \begin{itemize} \item[(i)] $1=p\,<\,q$ \item[(ii)] $3\,\leq \, p\,\leq\, q$ \item[(iii)] $3\leq q<p,$ \end{itemize} then $s_{p,q}(f)(z)$ is convex in $|z|<1/4.$ \end{theorem} \begin{proof} \;(i)\; By assumption, we know that $$s_{1,q}(f)(z)=z+\overline{s_q(g)(z)}=z+\sum_{n=2}^q\overline{b_n z^n}.$$ Since $$\Re\left\{\dfrac{z+\overline{z(zs'_q(g)(z))'}}{z-\overline{zs'_q(g)(z)}} \right\}=\Re \left\{ \dfrac{z+\sum_{n=2}^q \overline{n^2b_n z^n}}{z-\sum_{n=2}^q\overline{n b_n z^n}}\right\}\quad {\rm and} \quad \lim _{z \rightarrow 0} \dfrac{z+\sum_{n=2}^q\overline{n^2b_n z^n}}{z-\sum_{n=2}^q\overline{n b_n z^n}}=1,$$ it suffices to prove $$A:=\Re \left\{\left(z+\sum_{n=2}^q\overline{n^2 b_n z^n}\right) \left(\overline{z}-\sum_{n=2}^qn b_n z^n \right) \right\}>0 \qquad {\rm for}\qquad |z|=1/4.$$ Now, we find that \begin{eqnarray*} \qquad A &=& |z|^2+\Re \left(\sum_{n=2}^q \overline{n^2 b_n z^{n+1}}-\sum_{n=2}^q n b_n z^{n+1}\right)-\Re \left\{\left(\sum_{n=2}^q\overline{n^2 b_n z^n} \right) \left(\sum_{n=2}^q nb_n z^n \right) \right\} \\ &\geq& |z|^2-\sum_{n=2}^q n(n-1)|b_n||z|^{n+1}-\left(\sum_{n=2}^q n^2 |b_n||z|^n \right)\left(\sum_{n=2}^q n|b_n| |z|^n \right).
\end{eqnarray*} Further, using Theorem \ref{thm1}, we obtain \begin{eqnarray*} A &\geq& |z|^2-\sum_{n=2}^q \dfrac{(1-\beta)(n-1)}{1+\alpha (n-1)} |z|^{n+1}-\left(\sum_{n=2}^{q}\dfrac{n (1-\beta)}{1+\alpha(n-1)}|z|^n \right) \left(\sum_{n=2}^{q}\dfrac{(1-\beta)}{1+\alpha (n-1)}|z|^n \right)\notag \\ &\geq& |z|^2-(1-\beta)\sum_{n=2}^q (n-1) |z|^{n+1}-(1-\beta)^2\left(\sum_{n=2}^{q}n |z|^n \right) \left(\sum_{n=2}^{q}|z|^n \right) \notag \\ &=& |z|^2-(1-\beta)|z|^3\dfrac{1-q|z|^{q-1}+(q-1)|z|^q}{(1-|z|)^2}\\ && \qquad -(1-\beta)^2 |z|^4 \dfrac{(2-|z|-(q+1)|z|^{q-1}+q|z|^q)(1-|z|^{q-1})}{(1-|z|)^3}. \notag \end{eqnarray*} Thus, for $|z|=1/4$, we have \begin{eqnarray*} \dfrac{A\,(1-|z|)^3}{|z|^2} &\geq&(1-|z|)^3-(1-\beta)|z|(1-|z|)(1-q|z|^{q-1}+(q-1)|z|^q) \notag \\ && \;\;\; -(1-\beta)^2|z|^2(2-|z|-(q+3)|z|^{q-1}+(q+1)|z|^q+(q+1)|z|^{2q-2}-q|z|^{2q-1}) \notag \\ &\geq& \dfrac{27}{64}-\dfrac{3}{16}\left(1-\dfrac{q}{4^{q-1}}+\dfrac{q-1}{4^q}\right)-\dfrac{1}{16} \left(\dfrac{7}{4}-\dfrac{q+3}{4^{q-1}}+\dfrac{q+1}{4^q}+\dfrac{q+1}{4^{2(q-1)}}-\dfrac{q}{4^{2q-1}}\right) \notag \\ &=& \dfrac{1}{8}+ \dfrac{12q+14}{4^{q+2}}-\dfrac{3q+4}{4^{2q+1}} = \dfrac{1}{8}+\dfrac{12 q(4^q-1)+14\times 4^q-16}{4^{2q+2}}>0. \notag \end{eqnarray*} Hence the result follows.
(ii) Let $\sigma_p(h)(z)=\sum_{n=p+1}^{\infty}a_n z^n$ and $\sigma_q(g)(z)=\sum_{n=q+1}^{\infty}b_nz^n,$ so that $h(z)=s_p(h)(z)+\sigma_p(h)(z)$ and $g(z)=s_q(g)(z)+\sigma_q(g)(z).$ Thus for each $|\epsilon|=1$, we may write \begin{equation}\label{.eq1} 1+z\,\dfrac{s_p''(h)(z)+\epsilon s_q''(g)(z)}{s_p'(h)(z)+\epsilon s'_q(g)(z)}=1+\phi(z)+\psi(z), \end{equation} where $$\phi(z)=\dfrac{z(h''(z)+\epsilon g''(z))}{h'(z)+\epsilon g'(z)}\qquad {\rm and}$$ $$\psi(z)=\dfrac{\phi(z)(\sigma'_p(h)(z)+\epsilon\sigma'_q(g)(z))-z(\sigma_p''(h)(z)+\epsilon\sigma_q''(g)(z))}{h'(z)+\epsilon g'(z)-(\sigma_p'(h)(z)+\epsilon \sigma'_q(g)(z))}.$$ Since $\Re\big((h+\epsilon g)'(z)\big)>\beta\geq 0$ by Lemma \ref{5}, we have $h+\epsilon g \in \mathcal{P},$ and Lemma \ref{LEMA} gives \begin{equation}\label{.eq2} |\phi(z)|\leq \dfrac{2|z|}{1-|z|^2}\qquad {\rm and} \qquad |h'(z)+\epsilon g'(z)|\geq \dfrac{1-|z|}{1+|z|}. \end{equation} Now, if $p\leq q,$ then Theorem \ref{th3} yields that \begin{eqnarray}\label{.eq3} |\sigma_p'(h)(z)+\epsilon \sigma'_q(g)(z)| &=& \left| \sum_{n=p+1}^qna_nz^{n-1}+\sum_{n=q+1}^\infty n(a_n+\epsilon b_n)z^{n-1} \right| \notag \\ &\leq & \sum_{n=p+1}^{\infty}\dfrac{2(1-\beta)}{1+\alpha(n-1)}|z|^{n-1}\,\leq\, 2(1-\beta)\sum _{n=p+1}^{\infty}|z|^{n-1} \notag \\ &=& 2(1-\beta) \dfrac{|z|^p}{1-|z|}. \end{eqnarray} Similarly, \begin{eqnarray} \label{.eq4} |z(\sigma_p''(h)(z)+\epsilon \sigma _q''(g)(z))| &= &\left|\sum_{n=p+1}^q n(n-1)a_nz^{n-1}+\sum_{n=q+1}^{\infty} n(n-1)(a_n+\epsilon b_n)z^{n-1} \right| \notag \\ &\leq& \sum_{n=p+1}^{\infty}\dfrac{2(1-\beta)(n-1)}{1+\alpha (n-1)}|z|^{n-1}\,\leq \,2(1-\beta)\sum_{n=p+1}^{\infty}(n-1)|z|^{n-1} \notag \\ &= & 2(1-\beta) \left(\dfrac{p|z|^p}{1-|z|}+\dfrac{|z|^{p+1}}{(1-|z|)^2} \right).
\end{eqnarray} Using the estimates \eqref{.eq2}--\eqref{.eq4} and the triangle inequality, we deduce that $$\left| \psi(z)\right| \leq \dfrac{2(1-\beta)|z|^p\{3|z|+|z|^2+p(1-|z|^2)\}}{(1-|z|)\{(1-|z|)^2-2(1-\beta)|z|^p(1+|z|)\}}.$$ Thus \begin{eqnarray} \Re(1+\phi(z)+\psi(z))&\geq& 1-|\phi(z)|-|\psi(z)|\notag \\ &\geq& 1-\dfrac{2|z|}{1-|z|^2}-\dfrac{2(1-\beta)|z|^p\{3|z|+|z|^2+p(1-|z|^2)\}}{(1-|z|)\{(1-|z|)^2-2(1-\beta)|z|^p(1+|z|)\}}\notag\\ &=& \dfrac{1-|z|^2-2|z|}{1-|z|^2}-\dfrac{2(1-\beta)|z|^p\{3|z|+|z|^2+p(1-|z|^2)\}}{(1-|z|)\{(1-|z|)^2-2(1-\beta)|z|^p(1+|z|)\}}, \notag \end{eqnarray} which for $|z|=1/4$ gives $$\Re(1+\phi(z)+\psi(z))\geq \dfrac{1}{3}\left\{\dfrac{7}{5}-\dfrac{2(1-\beta)(13+15 p)}{9\times 4^{p-1}-10(1-\beta)} \right\}=B(p,\beta).$$ Since $B(p,\beta)$ is monotonically increasing with respect to $p$ for $p\geq3,$ it follows that $\Re(1+\phi(z)+\psi(z))\geq B(p,\beta)\geq B(3,\beta)>0.$ Thus \eqref{.eq1} implies, for each $|\epsilon|=1,$ that the section $s_p(h)+\epsilon s_q(g)$ is convex in $|z|< 1/4$ for $3\leq p \leq q.$ As $\epsilon$ is arbitrary, this shows that $s_{p,q}(f)$ is convex in $|z|<1/4,$ for $3\leq p\leq q.$ \medskip (iii) If $p>q,$ then using Theorems \ref{thm1} and \ref{th3}, we have \begin{eqnarray}\label{.eq5} \left| \sigma_p'(h)(z)+\epsilon \sigma_q'(g)(z)\right| &=& \left|\sum_{n=q+1}^p\epsilon n b_n z^{n-1}+\sum_{n=p+1}^{\infty}n(a_n+\epsilon b_n)z^{n-1} \right| \notag \\ &\leq& \sum_{n=q+1}^p\dfrac{1-\beta}{1+\alpha (n-1)}|z|^{n-1}+\sum_{n=p+1}^{\infty}\dfrac{2(1-\beta)}{1+\alpha(n-1)}|z|^{n-1} \notag \\ &\leq& (1-\beta)\left(\sum_{n=q+1}^p|z|^{n-1} +2\sum _{n=p+1}^{\infty}|z|^{n-1}\right)\,=\, \dfrac{(1-\beta)(|z|^p+|z|^q)}{1-|z|}, \end{eqnarray} and \begin{eqnarray} \label{.eq6} \left| z(\sigma_p''(h)(z)+\epsilon \sigma_q''(g)(z)) \right| &=& \left| \sum_{n=q+1}^p\epsilon n(n-1)b_n z^{n-1}+\sum_{n=p+1}^{\infty}n(n-1)(a_n+\epsilon b_n)z^{n-1}\right| \notag \\ &\leq&
\sum_{n=q+1}^p\dfrac{(n-1)(1-\beta)}{1+\alpha(n-1)}|z|^{n-1}+\sum_{n=p+1}^{\infty}\dfrac{2(n-1)(1-\beta)}{1+\alpha(n-1)}|z|^{n-1} \notag \\ &\leq& (1-\beta)\left(\sum_{n=q+1}^{\infty}(n-1)|z|^{n-1}+\sum_{n=p+1}^{\infty}(n-1)|z|^{n-1} \right)\notag\\ &=& \dfrac{(1-\beta)\{p|z|^p+q|z|^q-(p-1)|z|^{p+1}-(q-1)|z|^{q+1}\}}{(1-|z|)^2}. \end{eqnarray} Using the estimates \eqref{.eq2}, \eqref{.eq5} and \eqref{.eq6}, we obtain that $$ |\psi(z)|\leq \dfrac{(1-\beta)}{(1-|z|)}\left(\dfrac{p|z|^p+q|z|^q+3|z|^{p+1}+3|z|^{q+1}-(p-1)|z|^{p+2}-(q-1)|z|^{q+2}}{1-2|z|+|z|^2-(1-\beta)(1+|z|)(|z|^p+|z|^q)} \right). $$ Thus $\Re \left( 1+\phi(z)+\psi(z)\right)\geq 1-|\phi(z)|-|\psi(z)|,$ which for $|z|=1/4$ reduces to \begin{eqnarray} \Re \left( 1+\phi(z)+\psi(z)\right)&\geq & \dfrac{4}{3}\left(\dfrac{7}{20}-\dfrac{(1-\beta)\{4^p(15 q+13)+4^q(15 p+13)\}}{9\times 4^{p+q}-20(1-\beta)(4^p+4^q)} \right) \notag\\ &>& \dfrac{4}{3}\left(\dfrac{7}{20}-\dfrac{4^p(15 q+13)+4^q(15 p+13)}{9\times 4^{p+q}-20(4^p+4^q)} \right).
\notag \end{eqnarray} Moreover, for $p>q\geq3,$ we have $$ \Re \left( 1+\phi(z)+\psi(z)\right)> \dfrac{4}{3}\left(\dfrac{7}{20}-\dfrac{305}{2204}\right)>0, \notag$$ which implies that for each $\epsilon$ with $|\epsilon|=1,$ $s_p(h)+\epsilon s_q(g)$ is convex in $|z|<1/4$ for $3\leq q< p,$ and thus each section $s_{p,q}(f)$ is convex in $|z|<1/4$ for $3\leq q < p.$ \end{proof} \begin{theorem} Let $f=h+\overline{g}\in \mathcal{W}_{\mathcal{H}}^0(\alpha, \beta).$ Then \begin{itemize} \item[(i)] For $q>2,\, s_{2,q}(f)(z)$ is convex in the disk $|z|<R_1, $ where $R_1$ is the smallest positive root of the equation \begin{equation}\label{.eq7} 1-4r+(6\beta-2)r^2-8(1-\beta)r^3+(1-2\beta)r^4+4(1-\beta)r^5=0 \end{equation} in $(0,1).$ \item[(ii)] For $p>2,\, s_{p,2}(f)(z)$ is convex in the disk $|z|<R_2,$ where $R_2$ is the smallest positive root of the equation \begin{equation}\label{.eq8} 1-4r+(1+3\beta)r^2-8(1-\beta)r^3-(5-4\beta)r^4+4(1-\beta)r^5+3(1-\beta)r^6=0 \end{equation} in $(0,1).$ \end{itemize} \end{theorem} \begin{proof} (i) Let $f=h+\overline{g}\in \mathcal{W}_{\mathcal{H}}^0(\alpha, \beta),$ and suppose that $p=2<q.$ Then for each $|\epsilon|=1,$ it is sufficient to show that $$X=\Re\left(1+\dfrac{z(s_2''(h)(z)+\epsilon s_q''(g)(z))}{s_2'(h)(z)+\epsilon s'_q(g)(z)} \right)>0$$ in the disk $|z|<R_1.$ For $2=p<q,$ the estimates \eqref{.eq2}--\eqref{.eq4} continue to hold.
Therefore, we deduce that \begin{eqnarray} (1-|z|)X &\geq & \dfrac{1-|z|^2-2|z|}{1+|z|}-\dfrac{2(1-\beta)|z|^2\{3|z|+2(1-|z|^2)+|z|^2\}}{1-2|z|+|z|^2-2(1-\beta)|z|^2(1+|z|)} \notag \\ &=& \dfrac{1-|z|^2-2|z|}{1+|z|}-\dfrac{2(1-\beta)|z|^2\{2+3|z|-|z|^2\}}{1-2|z|+(1-2(1-\beta))|z|^2-2(1-\beta)|z|^3} \notag \\ &=& \dfrac{1-4|z|+\{4-6(1-\beta)\}|z|^2-8(1-\beta)|z|^3+\{2(1-\beta)-1\}|z|^4+4(1-\beta)|z|^5}{(1+|z|)\{1-2|z|+(1-2(1-\beta))|z|^2-2(1-\beta)|z|^3\}} \notag \\ &=& \dfrac{1-4|z|+(6\beta-2)|z|^2-8(1-\beta)|z|^3+(1-2\beta)|z|^4+4(1-\beta)|z|^5}{(1+|z|)\{1-2|z|+(1-2(1-\beta))|z|^2-2(1-\beta)|z|^3\}}, \notag \end{eqnarray} which is greater than zero in $|z|<R_1,$ where $R_1$ is the smallest positive root of the equation \eqref{.eq7} in $(0,1).$ (ii) Let $f=h+\overline{g}\in \mathcal{W}_{\mathcal{H}}^0(\alpha, \beta)$ and suppose that $q=2<p.$ Then for each $|\epsilon|=1,$ it is sufficient to show that $$Y=\Re\left(1+\dfrac{z(s_p''(h)(z)+\epsilon s_2''(g)(z))}{s_p'(h)(z)+\epsilon s'_2(g)(z)} \right)>0$$ in the disk $|z|<R_2.$ For $2=q<p,$ the estimates \eqref{.eq2}, \eqref{.eq5} and \eqref{.eq6} continue to hold.
Therefore we deduce that \begin{eqnarray} (1-|z|)Y &\geq & \dfrac{1-|z|^2-2|z|}{1+|z|}-\dfrac{(1-\beta)(p|z|^p+2|z|^2+3|z|^{p+1}+3|z|^3-(p-1)|z|^{p+2}-|z|^4)}{1-2|z|+|z|^2-(1-\beta)(|z|^p+|z|^2)(1+|z|)} \notag \\ &=& \dfrac{1-|z|^2-2|z|}{1+|z|}-\dfrac{(1-\beta)|z|^p(p+3|z|-(p-1)|z|^2)+(1-\beta)|z|^2(2+3|z|-|z|^2)}{1-2|z|+|z|^2-(1-\beta)(|z|^p+|z|^2)(1+|z|)} \notag \\ &\geq & \dfrac{1-|z|^2-2|z|}{1+|z|}-\dfrac{(1-\beta)\{|z|^3(3+3|z|-2|z|^2)+|z|^2(2+3|z|-|z|^2)\}}{1-2|z|+|z|^2-(1-\beta)(2|z|^3+|z|^2+|z|^4)} \notag \\ &=& \dfrac{1-|z|^2-2|z|}{1+|z|}-\dfrac{(1-\beta)\{3|z|^3+3|z|^4-2|z|^5+2|z|^2+3|z|^3-|z|^4\}}{1-2|z|+|z|^2-(1-\beta)(2|z|^3+|z|^2+|z|^4)} \notag \\ &=&\dfrac{1-4|z|+(1+3\beta)|z|^2-8(1-\beta)|z|^3-(5-4\beta)|z|^4+4(1-\beta)|z|^5+3(1-\beta)|z|^6}{(1+|z|)\{1-2|z|+|z|^2-(1-\beta)(|z|^2+2|z|^3+|z|^4)\}},\notag \end{eqnarray} which is greater than zero in $|z|<R_2,$ where $R_2$ is the smallest positive root of \eqref{.eq8} in $(0,1).$ \end{proof} \begin{theorem} If $f=h+\overline{g}\in \mathcal{W}_{\mathcal{H}}^0(\alpha, \beta),$ then $s_{2,2}(f)(z)$ is convex in $|z|<(1+\alpha)/(4(1-\beta)).$ \end{theorem} \begin{proof} Let $f=h+\overline{g}\in \mathcal{W}_{\mathcal{H}}^0(\alpha, \beta).$ Then for each $|\epsilon|=1$, it is sufficient to show that $$\Re\left(1+\dfrac{z(s_2''(h)(z)+\epsilon s_2''(g)(z))}{s_2'(h)(z)+\epsilon s_2'(g)(z)} \right)>0$$ in the disk $|z|<\dfrac{1+\alpha}{4(1-\beta)}.$ In view of Theorem \ref{th3}, we have \begin{eqnarray} \Re\left(1+\dfrac{z(s_2''(h)(z)+\epsilon s_2''(g)(z))}{s_2'(h)(z)+\epsilon s_2'(g)(z)} \right) &\geq& 1-\left|\dfrac{z(s_2''(h)(z)+\epsilon s_2''(g)(z))}{s_2'(h)(z)+\epsilon s_2'(g)(z)} \right| \notag \\ &=& 1-\left|\dfrac{2(a_2+\epsilon b_2)z}{1+2(a_2+\epsilon b_2)z} \right| \geq 1-\dfrac{2|a_2+\epsilon b_2||z|}{1-2|a_2+\epsilon b_2| |z|} \notag \\ &\geq& \dfrac{1-\dfrac{4(1-\beta)}{1+\alpha}|z|}{1-\dfrac{2(1-\beta)}{1+\alpha}|z|}>0 \notag. \end{eqnarray} Hence the result follows.
\end{proof} \section{Applications} \setcounter{equation}{0} In this section, we consider harmonic mappings whose co-analytic part involves the Gaussian hypergeometric function $_2F_1(a,b;c;z)$, which is defined by \begin{equation}\label{G1} _2F_1(a,b;c;z)=F(a,b;c;z)=\sum_{n=0}^{\infty}\dfrac{(a)_n \,(b)_n}{(c)_n\, n!}z^n \qquad (z\in \mathbb{D}), \end{equation} where $a,b,c \in \mathbb{C}, c\neq 0, -1, -2, \cdots$ and $(a)_n$ is the Pochhammer symbol defined by $(a)_n=a(a+1)(a+2)\cdots(a+n-1)$ for $n\in \mathbb{N}$ and $(a)_0=1.$ The series \eqref{G1} is absolutely convergent in $\mathbb{D}.$ Moreover, if $\Re(c-a-b)>0,$ then the series \eqref{G1} is convergent in $|z|\leq 1.$ Further, for $z=1,$ we have the following well-known Gauss formula \cite{NMT} \begin{equation}\label{G2} F(a,b;c;1)=\dfrac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c-b)}<\infty. \end{equation} We shall use the following lemma to prove our results in this section: \begin{lemma}\label{lemaG} \cite{G21} \ Let $a,b>0.$ Then the following hold: \begin{itemize} \item[(i)] For $c>a+b+1,$ $$\sum_{n=0}^{\infty}\dfrac{(n+1)(a)_n (b)_n}{(c)_n n!}=\dfrac{\Gamma(c) \Gamma(c-a-b-1)}{\Gamma(c-a)\Gamma(c-b)}(ab+c-a-b-1).$$ \item[(ii)] For $c>a+b+2,$ $$\sum_{n=0}^{\infty} \dfrac{(n+1)^2(a)_n (b)_n}{(c)_n n!}=\dfrac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a) \Gamma(c-b)}\left(\dfrac{(a)_2 (b)_2}{(c-a-b-2)_2}+\dfrac{3ab}{c-a-b-1}+1 \right).$$ \item[(iii)] For $a,b,c\neq1$ with $c>\,{\rm max}\, \{0, a+b-1\},$ $$\sum_{n=0}^{\infty}\dfrac{(a)_n (b)_n}{(c)_n (n+1)!}=\dfrac{1}{(a-1)(b-1)}\left[\dfrac{\Gamma(c)\Gamma(c-a-b+1)}{\Gamma(c-a)\Gamma(c-b)}-(c-1) \right].$$ \end{itemize} \end{lemma} \begin{theorem} \label{thmG} Let $f_1(z)=z+\overline{ z^2 F(a,b;c;z)},\quad f_2(z)=z+\overline{z(F(a,b;c;z)-1)}$ and $f_3(z)=z+\overline{z\int_0^z F(a,b;c;t)dt},$ where $a,b,c$ are positive real numbers such that $c>a+b+2.$ Then the following hold: \begin{itemize} \item[(i)] If \begin{equation}\label{G3} \dfrac{\Gamma(c)
\Gamma(c-a-b-1)}{\Gamma(c-a) \Gamma(c-b)}\left[ \dfrac{\alpha (a)_2 (b)_2}{c-a-b-2}+(1+4\alpha)ab+2(1+\alpha)(c-a-b-1)\right]\leq 1-\beta, \end{equation} then $f_1\in \mathcal{W}_{\mathcal{H}}^0(\alpha, \beta).$ \item[(ii)] If \begin{equation}\label{G4} \dfrac{\Gamma(c)\Gamma(c-a-b-2)}{\Gamma(c-a)\Gamma(c-b)}\left[\alpha ab(ab+c-1)+(1+\alpha)ab(c-a-b-2)+(c-a-b-2)(c-a-b-1) \right] \leq 2-\beta, \end{equation} then $f_2 \in \mathcal{W}_{\mathcal{H}}^0(\alpha, \beta).$ \item[(iii)] If $a,b,c\neq 1$, $c>\,{\rm max}\, \{0, a+b-1\}$ and \begin{equation}\label{G5} \dfrac{\Gamma(c)\Gamma(c-a-b-1)}{\Gamma(c-a)\Gamma(c-b)}\left[\alpha ab+(1+2\alpha)(c-a-b-1)+\dfrac{(c-a-b)(c-a-b-1)}{(a-1)(b-1)} \right] -\dfrac{c-1}{(a-1)(b-1)} \leq 1-\beta, \end{equation} then $f_3 \in \mathcal{W}_{\mathcal{H}}^0(\alpha, \beta).$ \end{itemize} \begin{proof} (i) Let $f_1(z)=z+\overline{ z^2 F(a,b;c;z)}=z+\overline{\sum_{n=2}^{\infty}C_nz^n,}$ where $$C_n=\dfrac{(a)_{n-2}(b)_{n-2}}{(c)_{n-2}(n-2)!} \quad {\rm for}\quad n\geq 2.$$ Therefore, we have \begin{eqnarray} \sum_{n=2}^{\infty}n(1+\alpha(n-1))|C_n| &=& \sum_{n=2}^{\infty}n(1+\alpha(n-1)) \dfrac{(a)_{n-2}(b)_{n-2}}{(c)_{n-2}(n-2)!}\notag \\ &=& (1+\alpha)\sum_{n=0}^{\infty}(n+1)\dfrac{(a)_n(b)_n}{(c)_n n!}+\alpha \sum_{n=0}^{\infty}(n+1)^2\dfrac{(a)_n (b)_n}{(c)_n n!}+\sum_{n=0}^{\infty}\dfrac{(a)_n(b)_n}{(c)_n n!}. \notag \end{eqnarray} Now, using Lemma \ref{lemaG} and the Gauss formula \eqref{G2}, we have\\ $\sum_{n=2}^{\infty}n(1+\alpha(n-1))|C_n|=$ $$\dfrac{\Gamma(c) \Gamma(c-a-b-1)}{\Gamma(c-a) \Gamma(c-b)}\left[ \alpha \dfrac{(a)_2 (b)_2}{c-a-b-2}+(1+4\alpha)ab+2(1+\alpha)(c-a-b-1)\right].$$ If \eqref{G3} holds, then $\sum_{n=2}^{\infty}n(1+\alpha(n-1))|C_n|\leq 1-\beta.$ Hence the result follows from Theorem \ref{th5}.
\medskip (ii) Let $f_2(z)=z+\overline{z(F(a,b;c;z)-1)}=z+\overline{\sum_{n=2}^{\infty}D_nz^n},$ where $$D_n=\dfrac{(a)_{n-1}(b)_{n-1}}{(c)_{n-1}(n-1)!}\quad {\rm for}\quad n\geq2.$$ Therefore, we have \begin{eqnarray} \label{G7} \sum_{n=2}^{\infty}n(1+\alpha(n-1))|D_n| &=& \sum_{n=2}^{\infty}n(1+\alpha(n-1))\dfrac{(a)_{n-1}(b)_{n-1}}{(c)_{n-1}(n-1)!} \notag \\ &=& \sum_{n=0}^{\infty}(\alpha(n+1)^2+(1+\alpha)(n+1)+1)\dfrac{(a)_{n+1}(b)_{n+1}}{(c)_{n+1}(n+1)!}. \notag \end{eqnarray} Now using the identity $(\gamma)_{n+1}=\gamma(\gamma+1)_n$, we have \begin{eqnarray}\label{G8} \sum_{n=2}^{\infty}n(1+\alpha(n-1))|D_n|&=& \dfrac{ab}{c} \alpha \sum_{n=0}^{\infty}(n+1)\dfrac{(a+1)_n(b+1)_n}{(c+1)_n n!} \notag \\ &+& \dfrac{ab}{c}\left[(1+\alpha)\sum_{n=0}^{\infty} \dfrac{(a+1)_n(b+1)_n}{(c+1)_n n!}+\sum_{n=0}^{\infty} \dfrac{(a+1)_n(b+1)_n}{(c+1)_n (n+1)!} \right]. \notag \end{eqnarray} Further, using Lemma \ref{lemaG} and the Gauss formula \eqref{G2}, we obtain \\ $\sum_{n=2}^{\infty}n(1+\alpha(n-1))|D_n|= $ $$\dfrac{\Gamma(c) \Gamma(c-a-b-2)}{\Gamma(c-a) \Gamma(c-b)}\left[\alpha ab(ab+c-1)+(1+\alpha)ab(c-a-b-2)+(c-a-b-2)(c-a-b-1) \right]-1.$$ Now, if \eqref{G4} holds, then $\sum_{n=2}^{\infty}n(1+\alpha(n-1))|D_n| \leq 1-\beta,$ hence the result follows. \medskip (iii) Let $f_3(z)=z+\overline{z\int_0^z F(a,b;c;t)dt}=z+\overline{\sum_{n=2}^{\infty}E_nz^n},$ where $$E_n=\dfrac{(a)_{n-2}(b)_{n-2}}{(c)_{n-2}(n-1)!}\quad {\rm for}\quad n\geq2.$$ Therefore, \begin{eqnarray} \label{G9} \sum_{n=2}^{\infty}n(1+\alpha(n-1))|E_n| &=& \sum_{n=2}^{\infty}n(1+\alpha(n-1))\dfrac{(a)_{n-2}(b)_{n-2}}{(c)_{n-2}(n-1)!} \notag \\ &=& \alpha \sum_{n=0} ^{\infty} (n+1) \dfrac{(a)_n(b)_n}{(c)_n n!}+(1+\alpha) \sum_{n=0} ^{\infty}\dfrac{(a)_n(b)_n}{(c)_n n!}+\sum_{n=0} ^{\infty} \dfrac{(a)_n(b)_n}{(c)_n (n+1)!}.
\notag \end{eqnarray} Now using Lemma \ref{lemaG} and the Gauss formula \eqref{G2}, we obtain \begin{equation*}\label{10} \sum_{n=2}^{\infty}n(1+\alpha(n-1))|E_n|= \end{equation*} $$\dfrac{\Gamma(c)\Gamma(c-a-b-1)}{\Gamma(c-a)\Gamma(c-b)}\left[\alpha ab+(1+2\alpha)(c-a-b-1)+\dfrac{(c-a-b-1)(c-a-b)}{(a-1)(b-1)}\right]- \dfrac{(c-1)}{(a-1)(b-1)}.$$ Further, if \eqref{G5} holds, then the result follows. \end{proof} \medskip Note that for $\eta \in \mathbb{C}\setminus \{-1, -2, \cdots \}$ and $n\in \mathbb{N}\cup \{0\},$ we have $$\dfrac{(-1)^n(-\eta)_n}{n!}=\binom {\eta}{n} = \dfrac{\Gamma(\eta+1)}{n! \Gamma(\eta-n+1)}.$$ In particular, when $\eta=m (m\in \mathbb{N}, m\geq n),$ we have $$(-m)_n=\dfrac{(-1)^n m!}{(m-n)!}.$$ Using the above relations in Theorem \ref{thmG}, we obtain harmonic univalent polynomials which belong to the class $\mathcal{W}_{\mathcal{H}}^0(\alpha, \beta).$ Setting $a=b=-m\,(m\in \mathbb{N}),$ we get \begin{corollary}\label{c11} Let $m\in \mathbb{N},$ $c$ be a positive real number and $$F_1(z)=z+\overline{\sum_{n=0}^m \binom {m}{n} \dfrac{(m-n+1)_n}{(c)_n}z^{n+2}},$$ $$F_2(z)=z+\overline{\sum_{n=1}^m \binom {m}{n} \dfrac{(m-n+1)_n}{(c)_n}z^{n+1}},$$ $$F_3(z)=z+\overline{\sum_{n=0}^m \binom {m}{n} \dfrac{(m-n+1)_n}{(n+1)(c)_n}z^{n+2}}.$$ Then the following hold: \begin{itemize} \item[(i)] If \begin{equation} \dfrac{\Gamma(c) \Gamma(c+2m-1)}{[\Gamma(c+m)]^2}\left[ \dfrac{\alpha m^2 (m-1)^2}{c+2m-2}+(1+4\alpha)m^2+2(1+\alpha)(c+2m-1)\right]\leq 1-\beta, \end{equation} then $F_1\in \mathcal{W}_{\mathcal{H}}^0(\alpha, \beta).$ \item[(ii)] If \begin{equation} \dfrac{\Gamma(c)\Gamma(c+2(m-1))}{[\Gamma(c+m)]^2}\left[\alpha m^2(m^2+c-1)+(1+\alpha)m^2(c+2m-2)+(c+2m-2)(c+2m-1) \right] \leq 2-\beta, \end{equation} then $F_2 \in \mathcal{W}_{\mathcal{H}}^0(\alpha, \beta).$ \item[(iii)] If \begin{equation} \dfrac{\Gamma(c)\Gamma(c+2m-1)}{[\Gamma(c+m)]^2}\left[\alpha m^2+(1+2\alpha)(c+2m-1)+\dfrac{(c+2m-1)(c+2m)}{(m+1)^2} \right]-\dfrac{(c-1)}{(m+1)^2}\leq 1-\beta, \end{equation} then $F_3 \in \mathcal{W}_{\mathcal{H}}^0(\alpha, \beta).$ \end{itemize} \end{corollary} Further setting $m=2$ and $c=1$ in Corollary \ref{c11}, we get \begin{corollary} If $G_1(z)=z+\overline{z^2+4z^3+z^4},\,\,G_2(z)=z+\overline{4z^2+z^3},\,\,$ and $G_3 (z)=z+\overline{z^2+2z^3+\dfrac{1}{3}z^4},$ then the following hold: \begin{itemize} \item[(i)] If $2(19\alpha+9)\leq 1-\beta,$ then $G_1(z)\in\mathcal{W}_{\mathcal{H}}^0(\alpha, \beta).$ \item[(ii)] If $ 28\alpha+24\leq 2(2-\beta),$ then $G_2(z)\in\mathcal{W}_{\mathcal{H}}^0(\alpha, \beta).$ \item[(iii)] If $ 54\alpha+28\leq 3(1-\beta),$ then $G_3(z)\in\mathcal{W}_{\mathcal{H}}^0(\alpha, \beta).$ \end{itemize} \end{corollary}
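The closed forms in Lemma \ref{lemaG} and the coefficient sums behind the corollaries are easy to check numerically. The following Python sketch (a verification aid only, not part of the proofs) compares a truncated series against Lemma \ref{lemaG}(i) and reproduces the sum $\sum_{n\geq 2}n(1+\alpha(n-1))|C_n|=38\alpha+18$ that underlies the condition $2(19\alpha+9)\leq 1-\beta$ for $G_1$:

```python
from math import gamma

def lemma_i_sum(a, b, c, terms=4000):
    # Truncated series sum_{n>=0} (n+1) (a)_n (b)_n / ((c)_n n!)
    s, t = 0.0, 1.0  # t tracks (a)_n (b)_n / ((c)_n n!)
    for n in range(terms):
        s += (n + 1) * t
        t *= (a + n) * (b + n) / ((c + n) * (n + 1))
    return s

def lemma_i_closed(a, b, c):
    # Closed form of Lemma (i); requires c > a + b + 1 for convergence
    return (gamma(c) * gamma(c - a - b - 1) / (gamma(c - a) * gamma(c - b))
            * (a * b + c - a - b - 1))

assert abs(lemma_i_sum(1.0, 1.0, 8.0) - lemma_i_closed(1.0, 1.0, 8.0)) < 1e-9

# Coefficient sum for G_1(z) = z + conj(z^2 + 4 z^3 + z^4),
# i.e. |C_2|, |C_3|, |C_4| = 1, 4, 1:
coeffs = {2: 1.0, 3: 4.0, 4: 1.0}
for alpha in (0.0, 0.25, 1.0):
    s = sum(n * (1 + alpha * (n - 1)) * cn for n, cn in coeffs.items())
    assert abs(s - (38 * alpha + 18)) < 1e-12
```

The same truncated-series pattern verifies the remaining parts of the lemma and the other corollary constants.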
\section{Introduction} At birth, protostars are surrounded by circumstellar (CS) gaseous disks that become the natural sites for planet formation. An important constraint for any planet formation scenario is the timescale available during which planet formation can occur, before the CS disks dissipate. Several studies have shown that near- and mid-infrared excess flux, indicative of inner CS disks, rapidly declines, with 50\% of low-mass stars losing their very inner dust disk within $\sim 3$~Myr \citep[e.g.][]{Haisch01, Aurora06}. The disappearance of inner disks seems to coincide with the end of accretion, suggesting that both dust and gas disappear on similar timescales \citep{Gullbring98, Jayawardhana06}. The dissipation of the primordial disks in which gas planets could form seems to be completed by $\sim 10$~Myr. However, CS disk evolution is currently poorly constrained within the $\sim$~5--10~Myr age range due to the lack of observationally well-characterized systems. To study the influence of environment and stellar properties on disk dissipation timescales, nearby young stellar clusters provide ideal targets, as their stellar properties, ages, and environments can be well constrained. The nearby, coeval $\eta$~Chamaeleontis cluster, at a distance of 97 pc and with an isochronal age of $\simeq 8$~Myr \citep{Mamajek99, Lawson01, Lyo04a, Luhman04}, is an excellent laboratory to study disk evolution. The cluster consists of three early-type systems and 15 low-mass stars with spectral types ranging from K6 to M5 \citep{Lyo04a, Luhman04}. \citet{Lyo03} showed that a substantial fraction of the $\eta$~Cha members have a near-IR excess. This was confirmed by \citet{Megeath05}, who observed all late-type members at wavelengths between 3.6--8~$\mu$m with the Infrared Array Camera (IRAC) on the {\it Spitzer Space Telescope}, detecting disks around 40\% of the stars.
Given the age of the $\eta$~Cha cluster, this large disk fraction seems inconsistent with the $\sim$6~Myr maximum lifetime of inner disks derived by \cite{Haisch01} from $L$-band observations of younger clusters. This argues against a single dissipation timescale in all environments and/or for all disk radii. In this paper, we will employ the longer wavelength capabilities of the Infrared Spectrograph \citep[IRS;][]{Houck04} onboard {\it Spitzer\,} to firmly establish the census of disks for the 15 low-mass K- and M-type members of the cluster. We then determine and discuss the influence of binarity on the CS disk dispersal timescales, and on the specific angular momentum distribution of the cluster members. \section{{\it Spitzer\,} IRS observations of $\eta$ Cha} We obtained $7.5-35$ $\mu$m low-resolution ($R = 60-120$) spectra of the $\eta$ Cha cluster members with the IRS spectrograph. Integration times were set such that stellar photospheres could be detected with a signal-to-noise ratio $> 5$ across most of the IRS bandpass. Our spectra are based on the {\tt droopres} products processed through the S13.2.0 version of the {\it Spitzer\,} data pipeline. Our data were further processed using spectral extraction tools, partially based on the {\tt SMART} software package \citep{Higdon04}, developed for the "Formation and Evolution of Planetary Systems" (FEPS) {\it Spitzer\,} science legacy team. The spectra were extracted using a 6.0 pixel and 5.0 pixel fixed-width aperture in the spatial dimension for the observations with the first order of the short- ($7.5-14$ $\mu$m) and the long-wavelength ($14-35$ $\mu$m) modules, respectively. The background was subtracted using associated pairs of imaged spectra from the two nodded positions along the slit, which also eliminates stray light contamination and anomalous dark currents. Pixels flagged by the data pipeline as "bad" were replaced with a value interpolated from the 8 pixel perimeter surrounding the errant pixel.
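The bad-pixel replacement step can be sketched as follows. This is an illustrative reconstruction only (the FEPS extraction tools are not reproduced here): each flagged pixel is replaced by the mean of the unflagged pixels in its 8-pixel perimeter.

```python
import numpy as np

def replace_bad_pixels(image, bad_mask):
    """Replace flagged pixels by the mean of their 8-pixel perimeter.

    Illustrative sketch of the step described in the text; flagged
    neighbours are excluded from the average.
    """
    out = image.astype(float).copy()
    ny, nx = image.shape
    for y, x in zip(*np.nonzero(bad_mask)):
        vals = []
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == dx == 0:
                    continue
                yy, xx = y + dy, x + dx
                if 0 <= yy < ny and 0 <= xx < nx and not bad_mask[yy, xx]:
                    vals.append(image[yy, xx])
        if vals:
            out[y, x] = np.mean(vals)
    return out
```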
The low-level fringing at wavelengths $>20$~$\mu$m was removed using the {\tt irsfringe} package \citep{Fred03}. The spectra are calibrated using a spectral response function derived from IRS spectra and Cohen stellar models for a suite of calibrators provided by the {\it Spitzer\,} Science Center. To remove any effect of pointing offsets, we matched orders based on the point spread function of the IRS instrument, correcting for possible flux losses. The relative errors between spectral points within one order are dominated by the noise on each individual point and not by the calibration. We estimate a relative flux calibration across an order of $\approx 2$~\% and an absolute calibration error between orders/modules of $\approx 5$~\%. Our observations are sensitive to CS disks with substantial inner gaps that do not radiate at wavelengths $\leq 8$ $\mu$m as covered by IRAC. The maximum wavelength of 35 $\mu$m of our spectra corresponds to a blackbody temperature of $\sim$90 K. Assuming a $\lambda^{-1}$ dependency for the dust opacity, and a typical stellar temperature and radius of 3500 K and 1 $R_{\odot}$, respectively, dust grains attain a temperature of 90 K at a radius of about 25 AU from the central star. Therefore our observations should be sensitive to CS disks with a maximum inner gap radius of about this value. Additionally, \citet{Artymowicz94} argue that the inner edge of a circumbinary disk will have a tidal truncation radius of $2.0-2.5\,a$. Given a maximum detection radius of about 25 AU, we should be able to detect circumbinary disks in any $\eta$ Cha binary systems with separations $2a\leq20$ AU. In Fig.~\ref{Spectra.fig} we show three representative IRS spectra. These examples demonstrate the excellent agreement between the IRS and IRAC data at 8 $\mu$m, both independently calibrated, and between the IRS spectra and the optical photometry-scaled stellar models for diskless systems.
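The inner-gap estimate above follows from radiative balance for grains whose emission efficiency scales as $\lambda^{-1}$, which gives $T_{\rm d}\simeq T_{*}(R_{*}/2r)^{2/5}$; this particular scaling is our assumption about the estimate used in the text, but it reproduces the quoted numbers:

```python
# Radius at which grains reach temperature T_d around a star of temperature
# T_star and radius R_star, assuming absorption of the stellar radiation and
# an emission efficiency proportional to 1/lambda, so that the emitted power
# scales as T_d^5 and T_d = T_star * (R_star / 2r)^(2/5).  The 2/5 exponent
# is our assumption about the estimate in the text.
R_SUN_CM = 6.96e10
AU_CM = 1.496e13

def gap_radius_au(t_star=3500.0, r_star_rsun=1.0, t_dust=90.0):
    r_cm = 0.5 * r_star_rsun * R_SUN_CM * (t_star / t_dust) ** 2.5
    return r_cm / AU_CM

print(gap_radius_au())  # roughly 22 AU, i.e. "about 25 AU" in the text
```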
From inspection of the spectra, we add RECX 3 and 4 to the disk census, meaning that 8 out of 15 (or about 50~\%) of low-mass $\eta$ Cha stars have IRS-detected disks. The newly detected disks lack excesses shortward of $\approx 15$~$\mu$m, consistent with the IRAC non-detection and are detected with $>5\sigma$ certainty at 33~$\mu$m. To quantitatively test for the presence of disks, we calculated the $f(33-13)$ flux ratio derived from the IRS fluxes at 13 $\mu$m and 33 $\mu$m, integrating over bandpasses of 1.3 $\mu$m and 3.3 $\mu$m centred at each wavelength, respectively. Table \ref{Table1.tab} lists the 13 $\mu$m ($f_{13}$) and 33 $\mu$m ($f_{33}$) fluxes, and $f(33-13)$ colors for all the late-type $\eta$ Cha stars. Our detection limits at 13 $\mu$m and 33 $\mu$m are $\approx 0.15$ and 0.3 mJy, respectively, showing the remarkable sensitivity of the IRS instrument. The six diskless late-type $\eta$ Cha stars with 33~$\mu$m detections of the stellar photosphere have a $f(33-13)$ color of $0.17 \pm 0.05$ (1$\sigma$). In contrast, the weakest disks detected in the transitional disk objects RECX 3 and 4 lack 13~$\mu$m excesses and have a $f(33-13)$ color of $\simeq 0.6$, while those stars with protoplanetary disks displaying significant disk excesses at both 13 and 33~$\mu$m have $f(33-13)$ colors exceeding $\simeq 1$. \section{The influence of binarity on the disk fraction and the angular momentum \label{F33.sec}} Among the 15 late-type stars, six are binaries with projected separations $\leq 20$~AU (column~2, Table~1). Colour-magnitude diagram placement shows nearly half of the late-type stars are elevated by $0.5-0.7$ magnitudes compared to other cluster members of comparable spectral type, indicative of binary systems with near-equal luminosity components \citep{Lyo04b}. Of these stars, RECX 1 and 9 are resolved with a projected spatial separation of $\simeq 20$ AU, following speckle and AO imaging surveys for close companions \citep{Kohler02, Brandeker06}. 
RECX 12 is an unresolved binary with an inferred separation of $\approx 4$ AU \citep{Brandeker06}, and has a dual-periodicity (1.3 d and 8.5 d) light curve measured during a photometric survey for starspot-modulated variability \citep{Lawson01}. From $v$\,sin$i$ measurements of RECX 12 \citep{Covino97, Jayawardhana06}, we assume the 1.3-d mode to be associated with the primary. RECX 7 is a 2.6-d period, double-lined spectroscopic binary of mass ratio 2.3:1 and separation $\approx 0.1$ AU \citep{Lyo03}. Given the spatial resolution of the \citet{Brandeker06} study, the projected binary separations for the ECHA J0836.2--7908 and J0838.9--7916 systems are $<$4 AU. The cluster has a deficit of wide binaries at projected separations $> 20-50$~AU \citep[e.g.][]{Brandeker06}. The disk frequency is summarized in Fig.~\ref{F33.fig}. Our spectra reveal a remarkable divergence in disk presence as a function of binarity in the cluster's late-type population. Of the eight detected disks, all but one are associated with single stars. With nine single stars in total, about 80\% have a CS disk. Of the six known or suspected close binaries, only RECX 9AB has a CS disk associated with it. The presence of this disk, likely a circumprimary disk, was already inferred from ground-based $L$-band photometry \citep{Lyo03}, H$\alpha$ spectroscopic measurements of disk accretion \citep{Lawson04}, and IRAC photometry \citep{Megeath05}. No circumbinary disks are detected. A non-parametric Mann-Whitney two-sample $U$-test indicates that the probability that the singles and binaries are drawn from the same parent sample is $P = 0.03$, i.e. CS disks are under-represented in $\eta$ Cha binaries with $> 2\sigma$ significance.
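The quoted $U$-test can be reproduced schematically from the disk counts in the text (seven of nine singles host disks; one of six close binaries does). The 0/1 coding below is our reconstruction, and the asymptotic $p$-value depends on the tie handling, so we only check that it is consistent with the quoted $P = 0.03$:

```python
from scipy.stats import mannwhitneyu

# Disk indicators reconstructed from the text: 7 of 9 single stars host
# disks, and 1 of 6 close binaries (RECX 9) does.  The per-star values are
# in Table 1; this 0/1 coding is our illustrative assumption.
singles = [1, 1, 1, 1, 1, 1, 1, 0, 0]
binaries = [1, 0, 0, 0, 0, 0]

res = mannwhitneyu(singles, binaries, alternative="two-sided", method="asymptotic")
print(res.pvalue)  # close to the quoted P = 0.03
```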
We surmise that the last "remaining" disk in a binary exists principally because the physical separation of RECX~9AB greatly exceeds the projected $\simeq 20$ AU separation, given that no significant change in spatial separation or position angle was detected between the surveys of \citet{Kohler02} and \citet{Brandeker06}. The evolution of the disk around RECX~9A, therefore, effectively follows that of a disk in a single star system. The $f(33-13)$ color of the RECX 9 disk suggests that the outer disk extends to at least 20 AU. The presence of an extended disk in combination with a circumprimary tidal truncation radius of $\approx 0.4\,a$ \citep{Artymowicz94} implies a physical binary separation $2a \geq 50$ AU. If the evolution of RECX~9 is treated as a single star then a $U$-test gives $P = 0.003$, indicating that the evolution of single stars or binaries with separations equal or larger than that of RECX~9 is significantly different from the evolution of close binary systems. To estimate a typical disk disappearance timescale, we assume a rectangular probability distribution for the disk to dissipate around a mean timescale $t_{d}$ and width $2\sigma$. As constraints we assume that at $t = 0$ Myr all systems have disks, and that by $t = 12$ Myr (the age of the $\beta$ Pic moving group) all disks have dissipated. We find for the binary and single star members, respectively, $t_{d} = 5 \pm 5$ Myr and $t_{d} = 9 \pm 3$ Myr. If for our widest binary system, RECX 9AB, single star evolution is more applicable, the derived $t_{d}$ for disks in close binary systems could be even shorter. \citet{HerbstMundt05} analysed the rotational evolution of samples of solar-like PMS stars. Their key conclusion was that around half of PMS stars lose significant surface angular momentum in the first $5-6$~Myr owing to rotational coupling between star and disk. 
During this phase, spin-up of the star is prevented as it evolves towards lower luminosity, because angular momentum is transferred to the CS disk. The other half of PMS stars evolve at almost constant surface angular momentum, having lost their disks within the first few million years. These two groups of PMS stars are believed to evolve to form the slowly- and rapidly-rotating groups of young main sequence stars, respectively. We calculated the magnitude of the specific surface angular momentum $j$ (surface angular momentum per unit mass) for the 12 late-type cluster members with rotation periods and photometry from \citet{Lawson01, Lawson02} and precise spectral types from \citet{Lyo04a}. We adopt dwarf temperature and bolometric correction scales to calculate stellar luminosity and radii, with compensation applied for companion stars within the binary systems based upon observed light and radial velocity ratios. For binaries, we assume the observed rotation period is associated with the primary star. In Table \ref{Table1.tab} we express the derived $j$ values in solar units, where $j_{\odot} = 9.5 \times 10^{15}$ cm$^{2}$s$^{-1}$. In Fig. \ref{SAM.fig} we plot the cluster's $j$ distribution for various groupings of cluster objects. We see that {\it all\,} the high-$j$ systems are the primaries of $\eta$ Cha binary systems. With a mean $j = 22$ for binary primaries, this group differs significantly from the single stars with a mean $j = 6$. A $U$-test gives a probability that the single stars and primaries are drawn from the same parent sample of only $P = 0.02$. The comparison would have been even more extreme if we had not compensated for the presence of secondary stars. For uncorrected binaries having a mean $j = 37$, the $U$-test gives a probability of $P = 0.004$. We thus provide novel empirical evidence for rotational disk locking, supporting models of star-disk magnetic coupling \citep[e.g.][]{Bouvier97, Kueker03}. 
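The $j$ values in Table \ref{Table1.tab} can be approximated from the rotation periods and radii. The paper only quotes $j_{\odot}=9.5\times10^{15}$ cm$^{2}\,$s$^{-1}$; the shell-averaged formula $j=\tfrac{2}{3}\,\Omega R^{2}$ used in the sketch below is our assumption, chosen because it reproduces the quoted solar value to a few per cent:

```python
import math

R_SUN_CM = 6.96e10
J_SUN = 9.5e15  # cm^2 s^-1, the normalisation quoted in the text

def j_surface(period_days, radius_rsun):
    """Mean specific angular momentum of a uniformly rotating spherical
    surface, j = (2/3) * Omega * R^2 (our assumed convention)."""
    omega = 2.0 * math.pi / (period_days * 86400.0)
    return (2.0 / 3.0) * omega * (radius_rsun * R_SUN_CM) ** 2

print(j_surface(25.4, 1.0) / J_SUN)  # ~0.97 for the solar period and radius
```

For a fast rotator like the RECX 12 primary ($P = 1.3$ d), the same formula with an assumed radius of $1\,R_{\odot}$ gives $j/j_{\odot}\approx19$, the scale of the binary primaries in Fig.~\ref{SAM.fig}.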
We concur with \citet{HerbstMundt05} that high-$j$ stars are freed from any form of locking mechanism. In the $\eta$ Cha cluster, none of the high-$j$ stars have CS disks. We add the result that the high-$j$ tail in $\eta$ Cha consists entirely of the primaries of binary stars. This again argues that disk lifetimes in binary systems are shorter than those in single star systems. The $j$ samples of \citet{HerbstMundt05} are not corrected for binarity. From our results for the $\eta$ Cha cluster, we surmise that the high-$j$ tail of their distributions is populated with, and possibly dominated by, diskless binary systems. \section{Conclusions} The link between disk presence, angular momentum and binarity may have profound astrophysical importance in evaluating the time available for planet formation, as this is determined by the dissipation timescale of protoplanetary disks. The high angular momentum and paucity of disks in $\eta$ Cha binaries indicate a characteristic disk lifetime considerably less than the cluster's age of $\simeq 8$ Myr. This is in contrast to the $\eta$ Cha singles, where the high disk fraction implies that their disk evolutionary timescale is comparable to, or slightly greater than, the age of the cluster. Using a simple statistical approach, we estimate a mean disk dispersion timescale of $\sim 5$ Myr and $\approx 9$ Myr for close binary and single star systems, respectively. Our results suggest that the correct evaluation of the {\it disk fraction\,} in PMS groups, and consequently the characteristic timescale for {\it disk longevity\,} during the PMS phase, critically depends on knowledge of binarity within a given PMS population. We cannot rule out that disk dissipation timescales could also depend on stellar mass, as our sample only consists of late K- and M-type stars. Also, the derived timescales depend upon the detection wavelength, reflecting that disk dissipation can have a radial dependence.
However, even if we limit our detections to those stars with IRAC excesses, the dissipation timescale for single stars only shortens by $\sim$10 \% to $8 \pm 4$ Myr. In any case, our results imply that the assumption of a single timescale for disk dissipation is not correct. The rapid decline of the disk fraction in the first few Myr, as inferred by several near- and mid-IR studies, could be dominated by the dissipation of disks in close binary systems. The slower dissipation of disks around single stars could be the explanation for the long-lived disks seen in older PMS clusters such as $\eta$ Cha. The strong tidal torques exerted on disks in close binary systems will have a negative impact on the efficiency of planet formation \citep[e.g.][]{Kley01, Mayer05}. Indeed, while $\sim$25 \% of exoplanets are detected in binaries, no planets have been found in binaries with projected separations $< 20$ AU \citep{Raghavan06}, though at this point observational selection effects cannot be ruled out. Our results imply that in close binary systems the time available for planet formation is considerably shorter than in single star systems, which could severely inhibit the formation of planets. The apparent lack of planets in close binaries, therefore, might reflect the shorter disk dissipation timescale in binary systems. \acknowledgements EDF and AGGMT acknowledge support from {\it Spitzer\,} GO grant No. 3508 (PI WAL). WAL acknowledges support from UNSW@ADFA Faculty Research Grant Programs 2005 and 2006. JB and ThH acknowledge support from the EU HPR network contract No. HPRN-CT-2002000308. We thank Leen Decin, University of Leuven, for the stellar models.
\section{Linear scattering of photons} We consider a typical scattering scenario, where a highly coherent many-photon state of light is injected through waveguides into a complex array of optical elements, as, e.g., in \cite{hong-ou-mandel,QW4,DQW_Coin1,DQW_twophotons,DQW_disorder}. We further assume that decoherence and dephasing due to losses and/or coupling with uncontrolled degrees of freedom can be neglected. The simulation of such a scattering process between multiparticle photonic (or, in general, bosonic) input and output states is a computationally hard problem because it involves, as shown below, the calculation of permanents of large matrices. This complexity is expected to make sampling from the space of matrices with a distribution given by their permanents, the Boson Sampling (BS) problem, hard as well. Thus, a quantum optical device that samples scattering probabilities between many-body states for us constitutes a quantum computer that eventually beats any classical computer \cite{BosonSampling} in the BS task, an observation that has attracted enormous attention during recent years \cite{BosonSamplingExp1,BosonSamplingExp2,BosonSamplingExp3,BosonSamplingExp4,Malte_tutorial}. The physical operation of our scattering device consists of mapping the incoming many-photon states $|{\rm in}\rangle$ into the output states $|{\rm out}\rangle$. By injecting the same incoming state several times and counting the number of times we get $|{\rm out}\rangle$ as output, we will eventually obtain the transition probability \begin{equation} P(|{\rm in}\rangle \to |{\rm out}\rangle):=|A(|{\rm in}\rangle \to |{\rm out}\rangle)|^{2}=|\langle{\rm out}|{\rm in}\rangle|^{2}, \end{equation} and our goal is to study this quantity.
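The counting protocol just described is statistically straightforward; the toy sketch below (with a made-up transition probability standing in for the device) illustrates that repeated injection recovers $P$ with the usual binomial error $\sim\sqrt{P(1-P)/N}$.

```python
import random

def estimate_probability(p_true, trials, seed=0):
    """Estimate P(|in> -> |out>) by repeating the experiment `trials` times
    and counting how often |out> is observed.  `p_true` is a made-up stand-in
    for the device's actual transition probability."""
    rng = random.Random(seed)
    hits = sum(rng.random() < p_true for _ in range(trials))
    return hits / trials

p_est = estimate_probability(0.3, 100_000)
print(abs(p_est - 0.3))  # statistical error, of order sqrt(0.3*0.7/1e5) ~ 1.4e-3
```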
As any other quantum state of the field, the $|{\rm in}\rangle,|{\rm out}\rangle$ states belong to the Hilbert (Fock) space ${\cal H}$ of the system, which consists of all possible linear combinations of Fock states \cite{NO} \begin{equation} |{\bold n}\rangle:=|n_{1},n_{2},\ldots,n_{M}\rangle \end{equation} specifying the set of integer occupation numbers $n_{1},\ldots,n_{M}$. An occupation number $n_{i}$ specifies how many photons (bosons) occupy the $i$th single-particle state. The choice of these channels (or orbitals) is a matter of convenience, depending on the particular features of the system. In the scattering problem there are two preferred options to construct the Fock space, namely, by defining occupation numbers specifying how many photons occupy a given single-particle state with either incoming or outgoing boundary conditions in the asymptotic region far away from the scatterer. The operators that create a particle in the case of given incoming boundary conditions are denoted by $\hat{b}^{\dagger}$, and their action on the vacuum state $|{\bf 0}\rangle$ produces Fock states in the incoming modes \cite{NO}: \begin{equation} |{\bold n}^{\rm in}\rangle:=|n_{1}^{\rm in},n_{2}^{\rm in},\ldots,n_{M}^{\rm in}\rangle=\prod_{i}\frac{\left(\hat{b}^{\dagger}_{i}\right)^{n_{i}^{\rm in}}}{\sqrt{n_{i}^{\rm in}!}}|{\bf 0}\rangle. \label{eq:Fock-state_in} \end{equation} Any operator acting in ${\cal H}$ can be written as a multilinear combination of the creation operators and their adjoints $\hat{b}$, called annihilation operators. The operator algebra is thus uniquely fixed by the canonical commutation relations \begin{equation} [\hat{b}_i^{},\hat{b}_j^{}]=0 {\rm \ \ and \ \ \ }[\hat{b}_i^{},\hat{b}_j^\dagger]=\delta_{ij}^{}. 
\label{eq:comm} \end{equation} Similarly, the operators $\hat{d}^{\dagger}$ create photons in the single-particle states defined by outgoing boundary conditions, physically representing photons that exit the scattering region along a given channel. A fundamental observation is that the Fock space can be equally well constructed out of the many-body states defined by specifying occupation numbers in the single-particle outgoing states: \begin{equation} |{\bold n}^{\rm out}\rangle:=|n_{1}^{\rm out},n_{2}^{\rm out},\ldots,n_{M}^{\rm out}\rangle=\prod_{i}\frac{\left(\hat{d}^{\dagger}_{i}\right)^{n_{i}^{\rm out}}}{\sqrt{n_{i}^{\rm out}!}}|{\bf 0}\rangle. \label{eq:Fock-state_out} \end{equation} The relation between incoming and outgoing Fock states is fully determined by a single-particle property, namely, the transition amplitude of the single-particle process \begin{equation} \sigma_{ij}=\langle 0,\ldots,0,n_{i}^{\rm out}=1,0,\ldots,0|0,\ldots,0,n_{j}^{\rm in}=1,0,\ldots,0 \rangle, \end{equation} which defines the single-particle scattering matrix with entries $\sigma_{ij}$. By comparison with Eqns.~(\ref{eq:Fock-state_in}) and (\ref{eq:comm}), we find \begin{equation} \label{eq:bds} \hat{d}_{j}=\sum_{i}\sigma_{ji}\hat{b}_{i}, \end{equation} which then allows us to relate the expansion coefficients $c_{\bf n}^{\rm in}$ and $c_{\bf m}^{\rm out}$, appearing in the "in" and "out" representations of an arbitrary many-body state, \begin{equation} |\psi\rangle=\sum_{\bf n}c_{\bf n}^{\rm in} |{\bold n}^{\rm in}\rangle=\sum_{\bf m}c_{\bf m}^{\rm out} |{\bold m}^{\rm out}\rangle, \end{equation} through the amplitude \begin{equation} \label{eq:AF} A^{\rm F}({\bf n},{\bf m}):=\langle {\bold m}^{\rm out}|{\bold n}^{\rm in}\rangle. \end{equation} So far we have focused on the transformation properties between Fock states, but the same questions can be addressed for other types of many-body states.
Consider for example the common eigenstates of the incoming annihilation operators \cite{NO}, \begin{equation} \hat{b}_{i}^{}|{\boldsymbol \phi}^{\rm in}\rangle=\phi_{i}^{\rm in}|{\boldsymbol \phi}^{\rm in}\rangle, \end{equation} the so-called coherent states, which are labeled by a continuous set of complex numbers $\phi_{i}$. Although coherent states are not eigenstates of a commuting set of Hermitian operators, they can be experimentally prepared \cite{Wolf:2004}, and in some sense they are the most classical states of the electromagnetic field. Again, it can be shown that both in- and outgoing coherent states form an (over)complete basis of the Fock space, and the amplitudes \begin{equation} A^{\rm C}({\boldsymbol \chi},{\boldsymbol \phi}):=\langle {\boldsymbol \phi}^{\rm out}|{\boldsymbol \chi}^{\rm in}\rangle \end{equation} are the matrix elements of a many-body unitary transformation performing the change of representation from incoming to outgoing coherent states. The third basis set that we are going to discuss is defined by the common eigenstates $|{\bold q}^{\rm in, out}\rangle$ of the so-called quadrature operators \cite{VogelWelsch200607}, the quantum operators associated with the observable electric field \cite{Cohen-Tannoudji:1997}, \begin{equation} \hat{q}_{i}^{\rm in}:=\hat{b}_{i}^{}+\hat{b}_{i}^{\dagger} {\rm \ \ , \ }\hat{q}_{i}^{\rm out}:=\hat{d}_{i}^{}+\hat{d}_{i}^{\dagger}. \end{equation} It is easy to show that quadrature eigenstates \begin{equation} \hat{q}_{i}^{\rm in, out}|{\bold q}^{\rm in, out}\rangle=q_{i}^{\rm in, out}|{\bold q}^{\rm in, out}\rangle \end{equation} are labeled by a continuous set of real variables and that they are normalized (to the Dirac delta), complete and orthogonal. We can again define the corresponding transition amplitude \begin{equation} A^{\rm Q}({\bf q},{\bf Q}):=\langle {\bf Q}^{\rm out}|{\bf q}^{\rm in}\rangle.
\end{equation} The construction of the transformations between the different basis sets is cumbersome but straightforward and we refer the reader to the references \cite{NO,VogelWelsch200607} for further details. We just note that for a given choice of single-particle orbitals, all operators (number, creation/destruction and quadratures) commute with each other if they correspond to different single-particle states (or index $i$). Therefore the results for a given mode \cite{NO,VogelWelsch200607} are sufficient, \begin{align} \langle q|n\rangle=&\frac{{\rm e}^{-\frac{q^{2}}{4}}}{\sqrt{2^nn!\sqrt{2\pi}}}{\rm H}_{n}\left(\frac{q}{\sqrt{2}}\right),&& {\rm \ number \ to \ quadrature,} \nonumber \\ \langle q|\phi\rangle=&\frac{1}{(2\pi)^{1/4}}{\rm e}^{-\frac{|\phi|^2}{2}-\left(\frac{q}{2}-\phi\right)^2+\frac{\phi^2}{2}},&& {\rm \ coherent \ to \ quadrature,} \label{eq:basi-trafo} \\ \langle n|\phi\rangle=&\frac{1}{\sqrt{n!}}\phi^{n}{\rm e}^{-|\phi|^{2}/2},&& {\rm \ coherent \ to \ number}, \nonumber \end{align} where ${\rm H}_{n}(q)$ is the $n$-th Hermite polynomial. \section{The Boson Sampling problem} \subsection{Outline of the problem} With the formalism presented in the last section, we return to our original problem, namely the explicit calculation of scattering amplitudes between Fock states. Using Eq.~(\ref{eq:AF}) and the definitions in Eqns.~(\ref{eq:Fock-state_in}) and (\ref{eq:Fock-state_out}) we get the exact expression \begin{equation} \label{eq:BS} A^{\rm F}({\bf n},{\bf m}):=\langle {\bf 0}|\left[\prod_{j}\frac{\left(\hat{d}_{j}\right)^{m_{j}^{\rm out}}}{\sqrt{m_{j}^{\rm out}!}}\right] \left[\prod_{i}\frac{\left(\hat{b}_{i}^{\dagger}\right)^{n_{i}^{\rm in}}}{\sqrt{n_{i}^{\rm in}!}}\right]|{\bf 0}\rangle. \end{equation} Our goal is to obtain an expression for $A^{\rm F}$, and eventually for the transition probabilities $|A^{\rm F}|^{2}$, in terms of the single-particle scattering matrix ${\boldsymbol \sigma}$.
At first glance, due to the absence of interactions, this seems to be an easy task, since in the scattering process the total amplitude factorizes in terms of the amplitudes of individual, single-particle processes. However, while this is the case for systems of non-interacting {\it distinguishable} particles, for the case of interest here quantum effects due to {\it indistinguishability} render the calculation of scattering amplitudes a hard problem \cite{BosonSampling,Malte_tutorial}, as becomes apparent when one calculates the amplitudes by substituting Eq.~(\ref{eq:bds}) into Eq.~(\ref{eq:BS}) and using the commutator in Eq.~(\ref{eq:comm}): the complexity of the result is combinatorial in origin. Since the explicit calculation has been reported elsewhere, we just present the final result \begin{equation} \label{eq:BSFP} A^{\rm F}({\bf n},{\bf m})=\frac{{\rm Perm}~{\bf M}({\boldsymbol \sigma})}{\sqrt{\prod_{i}n_{i}^{\rm in}!\prod_{j}m_{j}^{\rm out}!}} \end{equation} and refer the reader to \cite{BosonSampling,Malte_tutorial} for further details. Following \cite{BosonSampling,Malte_tutorial}, the transition amplitude obtained by this procedure is given by summing up products of entries of ${\boldsymbol \sigma}$, where each term corresponds to a permutation of the multi-dimensional indices labeling output channels. It can therefore be written in terms of a new matrix ${\bf M}({\boldsymbol \sigma})$ (obtained by repeating the $j$-th row of ${\boldsymbol \sigma}$ $n_j$ times and the $j$-th column $m_j$ times\footnote{The precise and primary definition of ${\bf M}$ is $M_{jk}=\sigma_{d_j({\bf n})d_k({\bf m})}$, with ${\bf d}({\bf n})$ being the $N$-dimensional vector defined by $d_j({\bf n})=\min\left\{k\in\{1,\ldots,M\}\ :\ n_{k-1}<j\leq n_k\right\}$. By permuting columns and rows of ${\bf M}$ one arrives at the definition above.}). The key observation is that we indeed {\it sum} all the terms obtained by permuting the second index of this enlarged matrix, resulting in an object known as {\it permanent}.
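The construction of ${\bf M}({\boldsymbol \sigma})$ and the permanent are easy to make concrete. The sketch below builds ${\bf M}$ by row/column repetition and evaluates the permanent by brute force; the $\sqrt{\prod_i n_i!\,\prod_j m_j!}$ normalisation is the standard convention (it reduces to ${\rm Perm}\,{\bf M}$ for collision-free states). For a balanced beamsplitter it reproduces the Hong--Ou--Mandel suppression of coincidences.

```python
from itertools import permutations
from math import factorial

def permanent(M):
    # Brute-force permanent: sum over all column permutations (fine for small N)
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        p = 1
        for i, j in enumerate(perm):
            p *= M[i][j]
        total += p
    return total

def amplitude(sigma, n_in, m_out):
    """A^F(n, m) = Perm M(sigma) / sqrt(prod_i n_i! prod_j m_j!), with M
    built by repeating row j of sigma n_j times and column k m_k times."""
    rows = [j for j, n in enumerate(n_in) for _ in range(n)]
    cols = [k for k, m in enumerate(m_out) for _ in range(m)]
    M = [[sigma[j][k] for k in cols] for j in rows]
    norm = 1.0
    for occ in list(n_in) + list(m_out):
        norm *= factorial(occ)
    return permanent(M) / norm ** 0.5

# Balanced beamsplitter: Hong-Ou-Mandel suppression of coincidences
s = 2 ** -0.5
bs = [[s, s], [s, -s]]
print(round(abs(amplitude(bs, (1, 1), (1, 1))) ** 2, 10))  # 0.0: coincidences suppressed
print(round(abs(amplitude(bs, (1, 1), (2, 0))) ** 2, 10))  # 0.5: the photons bunch
```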
In this way, the physical scattering of photons provides a physical device that calculates permanents of large matrices. One only needs to measure the output state repeatedly; the accuracy with which the device calculates (or simulates) the precise value of the associated permanent can be made arbitrarily high by accumulating as many measurements as needed. In a further step, by randomly changing ${\boldsymbol \sigma}$ the device can be used to sample the space of matrices with a weight given by their permanent. It is in this task, the BS problem, that under certain conditions the quantum device is expected to beat any classical computer \cite{BosonSampling}. \subsection{Many-body scattering as canonical transformation} Small scattering devices calculating permanents can actually be realized, with several examples now available \cite{BosonSamplingExp4}, while preparation techniques that allow for the coherent creation of correlated photons beyond $N\simeq30$, where sampling with the quantum device will beat any classical computer, are presently a matter of intense research \cite{BosonSampling}. Realistic scenarios, however, seem to run into severe complications already for $N\simeq 12$. Thus, it seems important to explore whether the fundamental aspects of complex many-body scattering allow for other types of implementation, different from the photonic ones. Here we try to approach this question from a more abstract perspective. We only demand the matrix ${\boldsymbol \sigma}$ to be a unitary single-particle scattering matrix, \begin{equation} [{\boldsymbol \sigma}^{-1}]_{i,j}=[{\boldsymbol \sigma}]_{j,i}^{*}. \end{equation} This means that the calculation of permanents of large matrices is realized by a device with outcomes given by the amplitudes \begin{equation} A^{\rm F}({\bf n},{\bf m}):=\langle {\bf m}'|{\bf n}\rangle.
\end{equation} We call $|{\bf m}\rangle$ the state specified by occupations ${\bf m}$ in the unprimed basis and $|{\bf m}'\rangle$ the state specified by ${\bf m}$ in the primed basis. The latter is constructed out of the operators \begin{equation} \hat{b}'_{j}=\sum_{i}u_{j,i}^{}\hat{b}_{i}^{} {\rm \ \ and \ \ }\left(\hat{b}'_{j}\right)^{\dagger}=\sum_{i}u_{j,i}^{*}\left(\hat{b}_{i}^{}\right)^{\dagger} \label{eq:bbp} \end{equation} for {\it any} unitary matrix ${\bf u}$. Note that for the choice ${\bf u}={\boldsymbol \sigma}$ we recover the scattering version with $\hat{b}'_{j}=\hat{d}_{j}^{}$. Finally, a straightforward calculation shows that \begin{equation} [\hat{b}'_{i},\hat{b}'_{j}]=0 {\rm \ \ and \ \ \ } [\hat{b}'_{i},\left(\hat{b}'\right)^{\dagger}_{j}]=\delta_{i j}^{} \end{equation} follows from Eq.~(\ref{eq:comm}). The transformations (\ref{eq:bbp}) are linear and canonical, the latter because they do not change the algebraic relations between the basic operators. We conclude that the BS problem can be realized by any nontrivial device where transition amplitudes are measured between Fock states defined by two different sets of creation operators, i.e., with respect to two different single-particle basis sets. The physical implementation of BS then requires, first of all, a measurement protocol that provides the many-body transition amplitudes between Fock states related by a linear canonical transformation. Note that for the BS problem, the essential property of the scattering device is that it provides permanents, and the possibility of connecting permanents with transition amplitudes is entirely due to the linearity of the single-particle transformation.
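The sampling task itself can be made concrete for very small systems by enumerating all output occupations and weighting them by the squared permanent (a brute-force sketch for illustration, using probabilities $P({\bf m}|{\bf n})=|{\rm Perm}\,{\bf M}|^{2}/\prod_i n_i!\prod_j m_j!$, an assumption on the normalization consistent with Eq.~(\ref{eq:BS})):

```python
import math
import random
from itertools import permutations

def permanent(mat):
    n = len(mat)
    return sum(math.prod(mat[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def occupations(total, modes):
    """All ways to place `total` bosons into `modes` output channels."""
    if modes == 1:
        yield (total,)
        return
    for k in range(total + 1):
        for rest in occupations(total - k, modes - 1):
            yield (k,) + rest

def prob(sigma, n_in, m_out):
    rows = [i for i, n in enumerate(n_in) for _ in range(n)]
    cols = [j for j, m in enumerate(m_out) for _ in range(m)]
    M = [[sigma[r][c] for c in cols] for r in rows]
    norm = math.prod(math.factorial(k) for k in list(n_in) + list(m_out))
    return abs(permanent(M)) ** 2 / norm

s = 1 / math.sqrt(2)
bs = [[s, s], [s, -s]]                     # 50:50 beam splitter
n_in = (1, 1)
outs = list(occupations(sum(n_in), len(n_in)))
probs = [prob(bs, n_in, m) for m in outs]
print(dict(zip(outs, probs)))              # ~{(0,2): 0.5, (1,1): 0, (2,0): 0.5}
print(random.choices(outs, weights=probs)[0])   # one simulated "measurement"
```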
Any physical system where the mapping ${\hat b} \to {\hat b}'$ is nonlinear, as happens for general (non-linear) unitary transformations \begin{equation} {\hat b} \to {\hat b}'=\hat{U}^{\dagger}\hat{b}\hat{U}, {\rm \ \ \ }\hat{U}={\rm e}^{iG(\hat{{\bf b}},\hat{{\bf b}}^{\dagger})}, \end{equation} with a hermitian but non-quadratic generator $G$, defines a quantum device that still calculates transition amplitudes but {\it not} permanents. Using this broader view, it is in principle possible to implement other processes where the output of a measurement is given by permanents, and which can therefore be used as a basis for the physical implementation of BS. The quantum optical scenario involving scattering of photon states has some very attractive features, in particular that the many-body output states can indeed be measured at the single-photon level by photocounting, while its main drawback is the difficulty of preparing photonic Fock states with a large total number of photons (the state of the art is $N=6$). Since quantum states of indistinguishable bosonic atoms with macroscopic occupations can be prepared by cooling techniques, the cold-atom alternative offers an interesting possibility for BS. The drawback here is the difficulty of performing tomography of many-body cold-atom systems at the single-particle level, but advances in this direction are under way \cite{oberth}. Assuming for the moment that the measurement of many-body states in cold-atom systems reaches the regime of single-atom precision, we sketch a possible BS scenario for such systems. Consider a system of ultracold atoms in an optical lattice, where the hopping amplitude between adjacent sites is $J$ and the strength of the interparticle interaction is $V$. Assume now that initially (at time $t^{-}$) we have $V \gg J$, and the interaction energy is so large that hopping gets completely suppressed \cite{Mott-insulator_BHM}, the so-called Mott phase.
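The single-particle side of this scenario can be fixed explicitly: for a deep lattice of $M$ sites on a ring (periodic boundary conditions are an assumption made purely for illustration), the unitary ${\bf u}$ relating site-localized orbitals to momentum orbitals is the discrete Fourier matrix:

```python
import numpy as np

M = 6                                      # number of lattice sites (arbitrary)
jj, kk = np.meshgrid(np.arange(M), np.arange(M), indexing="ij")
u = np.exp(2j * np.pi * jj * kk / M) / np.sqrt(M)   # site -> momentum map (DFT)

# u is unitary, hence a legitimate linear canonical transformation
print(np.allclose(u @ u.conj().T, np.eye(M)))       # True
```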
To a good approximation the ground state of the many-body system is a Fock state whose occupations refer to the number of atoms in each site, namely, a Fock state constructed out of single-particle states defined by localized (Wannier) orbitals \cite{Lewenstein:2012}. The ``quench'' scenario is defined by an abrupt change of parameters (made possible by tuning the atoms through a Feshbach resonance \cite{feshbach-resonance_ultracold-atoms,feshbach-resonance_ultracold-atoms2}) at time $t^{+}$, such that we now have $J \gg V$. We are interested in the transition amplitudes between the initial state and the eigenstates of the quenched Hamiltonian, where the latter are again Fock states but built from delocalized (momentum) single-particle orbitals. These transition amplitudes are precisely of the form $A^{\rm F}({\bf n},{\bf m})$, with the specific choice for $\bf u$ being the matrix that linearly relates the Wannier and the momentum orbitals. Thus, such a device provides permanents as its output. Furthermore, if the on-site energies in the Mott phase are chosen to be random, the matrix $\bf u$ is itself random, and BS can be fully implemented. \section{Equivalent representations} We return now to the question of how a general single-particle linear canonical transformation is reflected in the transformation of the different (Fock, quadrature and coherent) many-body states, and how the hardness of calculating permanents gets reflected in the different representations. \subsection{Coherent states} The simplest transformation between many-body states after a single-particle canonical transformation is the one for the coherent states, and hence we start with this case.
Any coherent state can be constructed out of the vacuum state $|{\bf 0}\rangle$ by the application of the displacement operator \cite{NO} \begin{equation} \hat{D}({\boldsymbol \phi},{\boldsymbol \phi}^{*})={\rm e}^{{\boldsymbol \phi}\cdot \hat{{\bf b}}^{\dagger}-{\boldsymbol \phi}^{\ast}\cdot \hat{{\bf b}}} \end{equation} as \begin{equation} |{\boldsymbol \phi}\rangle=\hat{D}({\boldsymbol \phi},{\boldsymbol \phi}^{*})|{\bf 0}\rangle, \end{equation} and similarly for the primed states $|{\boldsymbol \psi}'\rangle$, \begin{equation} |{\boldsymbol \psi}'\rangle={\rm e}^{{\boldsymbol \psi}\cdot \hat{{\bf b}}'^{\dagger}-{\boldsymbol \psi}^{\ast}\cdot \hat{{\bf b}}'}|{\bf 0}\rangle. \end{equation} From this, and the defining relation between primed and unprimed canonical operators in Eq.~(\ref{eq:bbp}), we get \begin{equation} |{\boldsymbol \psi}'\rangle=|{\bf u} {\boldsymbol \psi}\rangle. \end{equation} Using again well-known properties of the coherent states \cite{NO}, the transition amplitude is given by \begin{equation} \label{phipsi} A^{\rm C}({\boldsymbol \phi},{\boldsymbol \psi})={\rm e}^{-\frac{1}{2}{\boldsymbol \phi}^{\ast}\cdot{\boldsymbol \phi}-\frac{1}{2}{\boldsymbol \psi^{\ast}}\cdot{\boldsymbol \psi}+{\boldsymbol \psi}^{\ast}\cdot {\boldsymbol \sigma}\cdot {\boldsymbol \phi}}. \end{equation} This result implies in turn for the corresponding transition probability \begin{equation} P^{\rm C}({\boldsymbol \phi},{\boldsymbol \psi}):=|A^{\rm C}({\boldsymbol \phi},{\boldsymbol \psi})|^{2}={\rm e}^{-|{\boldsymbol \psi}-{\boldsymbol \sigma}\cdot {\boldsymbol \phi}|^{2}}, \end{equation} which admits a straightforward interpretation, very much consistent with the idea that coherent states are the most classical states of light: At the classical level, the canonical transformation simply consists of a linear transformation between the field amplitudes given by ${\boldsymbol \phi} \to {\bf u}\cdot {\boldsymbol \phi}$.
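Equation~(\ref{phipsi}) rests on the standard overlap of normalized coherent states, $|\langle{\boldsymbol \beta}|{\boldsymbol \alpha}\rangle|^{2}={\rm e}^{-|{\boldsymbol \alpha}-{\boldsymbol \beta}|^{2}}$, which is easily checked numerically in a truncated Fock basis (a single-mode sketch; the cutoff and amplitudes are arbitrary choices):

```python
import math

def coherent(alpha, cutoff=80):
    """Number-basis coefficients <n|alpha> of a normalized coherent state."""
    return [alpha ** n * math.exp(-abs(alpha) ** 2 / 2) / math.sqrt(math.factorial(n))
            for n in range(cutoff)]

alpha, beta = 0.9 + 0.4j, -0.3 + 0.7j
overlap = sum(b.conjugate() * a for a, b in zip(coherent(alpha), coherent(beta)))
print(abs(overlap) ** 2, math.exp(-abs(alpha - beta) ** 2))   # both ~0.2165
```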
The classical probability to obtain the state ${\boldsymbol \psi}$ once the canonical transformation is applied to the state ${\boldsymbol \phi}$ is therefore nonzero only if ${\boldsymbol \psi}={\bf u}\cdot {\boldsymbol \phi}$. In the quantum case, this sharp peak is smoothed into a Gaussian. In terms of the scattering scenario, the transition probability between coherent states also agrees with intuition: The probability is strongly peaked around the output state labeled by the classical field amplitude resulting from scattering of the classical input field. \subsection{Quadrature states} In the same spirit as in the case of coherent states, the transformation rule for quadrature states can be deduced from the corresponding transformation of the defining canonical operators. In the coherent-state case, the canonical pair is ${\hat b},{\hat b}^{\dagger}$; for the quadrature case we must therefore find the set of canonical conjugate partners of the $\hat{q}$'s. The obvious choice that turns out to do the job is to define \cite{VogelWelsch200607} \begin{equation} \hat{q}_{i}^{}:=\hat{b}_{i}^{}+\hat{b}_{i}^{\dagger} {\rm \ \ , \ \ } \hat{p}_{i}^{}:=-i(\hat{b}_{i}^{}-\hat{b}_{i}^{\dagger}). \nonumber \end{equation} Like the $q$-quadratures, the $p$-quadratures have a complete, orthogonal and Dirac-normalized common set of eigenstates, \begin{equation} \hat{p}_{i}|{\bf p}\rangle=p_{i}|{\bf p}\rangle . \end{equation} The analogy with the usual position and momentum operators in particle (first-quantized) quantum mechanics is evident after using their definition to obtain \cite{VogelWelsch200607} \begin{equation} \langle {\bf q}|{\bf p}\rangle=\frac{{\rm e}^{\frac{i}{2} {\bf q}\cdot{\bf p}}}{(4\pi)^{M/2}}. \end{equation} However, it must be stressed that quadrature states do not represent any single-particle property at all.
In fact, it can be shown that they do not represent states with a well-defined total number of particles, thus making their interpretation in terms of any sort of localization property in real space impossible. Our goal is again to interrelate the two quadrature states $|{\bf Q}'\rangle$ and $|{\bf q}\rangle$, defined by \begin{eqnarray} \hat{{\bf q}}|{\bf q}\rangle&=&{\bf q}|{\bf q}\rangle, \\ \hat{{\bf q}}'|{\bf Q}'\rangle&=&{\bf Q}|{\bf Q}'\rangle, \nonumber \end{eqnarray} using as input the canonical transformation given by \begin{equation} \hat{{\bf q}}+i\hat{{\bf p}}\to \hat{{\bf q}}'+i\hat{{\bf p}}'={\bf u}(\hat{{\bf q}}+i\hat{{\bf p}}). \end{equation} This canonical transformation can be solved for $\hat{{\bf q}}'$ simply by taking the Hermitian part on both sides, using the decomposition \begin{equation} {\bf u}={\bf u}^{\rm r}+i{\bf u}^{\rm i} \end{equation} into real and imaginary parts. The eigenvalue equation defining $|{\bf Q}'\rangle$ is then found to be \begin{equation} \left[i{\bf u}^{\rm i}\cdot\frac{\partial}{\partial{\bf q}}-\frac{1}{2}\left( {\bf u}^{\rm r}\cdot {\bf q}-{\bf Q}\right)\right]\langle {\bf Q}'|{\bf q} \rangle=0. \end{equation} This can be solved using a Gaussian ansatz to get \begin{equation} \label{eq:Qq} \begin{split} A^{\rm Q}({\boldsymbol q},{\boldsymbol Q}):=&\langle {\bf Q}'|{\bf q} \rangle \\ =&\frac{\exp\left\{-\frac{i}{4}\left(\begin{array}{c} {\bf q} \\ {\bf Q} \end{array}\right)\left(\begin{array}{cc} \left({\bf u}^i\right)^{-1}{\bf u}^r & -\left({\bf u}^i\right)^{-1} \\ -\left[\left({\bf u}^i\right)^T\right]^{-1} & {\bf u}^r\left({\bf u}^i\right)^{-1} \end{array}\right)\left(\begin{array}{c} {\bf q} \\ {\bf Q} \end{array}\right)\right\}}{\sqrt{\det\left[-4\pi i{\bf u}\left({\bf u}^i\right)^T\right]}}, \end{split} \end{equation} with similar expressions for the $p$-quadrature states.
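The structure of Eq.~(\ref{eq:Qq}) can be probed numerically: unitarity of ${\bf u}$ forces the block matrix in the exponent to be real and symmetric, so the exponential is a pure phase and the modulus of $A^{\rm Q}$ cannot depend on ${\bf q}$ or ${\bf Q}$. A minimal sketch with a random unitary:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4
g = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))
u, _ = np.linalg.qr(g)                 # a random unitary matrix
ur, ui = u.real, u.imag

# the block matrix appearing in the exponent of the Gaussian amplitude
B = np.block([[np.linalg.inv(ui) @ ur, -np.linalg.inv(ui)],
              [-np.linalg.inv(ui.T),   ur @ np.linalg.inv(ui)]])

# real and symmetric => exp{-(i/4)(q,Q).B.(q,Q)} is a pure phase,
# so |A^Q|^2 takes the same value for every q and Q
print(np.allclose(B, B.T))             # True
```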
Using Eq.~(\ref{eq:Qq}), we obtain an interesting result for the transition probability between quadratures, \begin{equation} P^{\rm Q}({\boldsymbol q},{\boldsymbol Q}):=|A^{\rm Q}({\boldsymbol q},{\boldsymbol Q})|^{2}=\frac{1}{\left|\det4\pi{\bf u}\left({\bf u}^i\right)^T\right|}. \end{equation} It is {\it fully independent of the initial and final states}. In the scattering scenario this means that the probability to obtain a given configuration after measuring the electric field in the output channels is the same for any input and output configuration. As is clearly seen in Eq.~(\ref{eq:Qq}), however, the amplitudes themselves are very structured functions of the input and output quadrature states, and it is only the associated probabilities that display a flat profile. \section{Exact representations} Armed with the results of the last section we can now construct different exact expressions for the transition amplitudes between Fock states, $A^{\rm F}({\bf n},{\bf m})$, that, supplemented with an ensemble of random ${\bf u}$ matrices, provide different representations of BS. Since the transition amplitudes in both the coherent and quadrature representations, Eqs.~(\ref{phipsi}) and (\ref{eq:Qq}), are not difficult to evaluate, the complexity of calculating permanents must stem from the transformations between the different basis sets. In the following we make this connection explicit.
Using the transformation rules, Eq.~(\ref{eq:basi-trafo}), between coherent, quadrature and Fock states we obtain \cite{NO,VogelWelsch200607} \begin{equation} \label{eq:AFCO} \begin{split} A^{\rm F}({\bf n},{\bf m}):=\langle{\bf m}'|{\bf n}\rangle=\frac{1}{\pi^{2M}}\int d{\boldsymbol \psi}d{\boldsymbol \phi}\langle{\bf m}'|{\boldsymbol \psi}'\rangle \langle{\boldsymbol \psi}'|{\boldsymbol \phi}\rangle \langle {\boldsymbol \phi}|{\bf n}\rangle& \\ =\int d{\boldsymbol \psi}d{\boldsymbol \phi}\prod_{i}\frac{\psi_{i}^{m_{i}} \left(\phi_{i}^{\ast}\right)^{n_{i}}{\rm e}^{-\frac{1}{2}|\psi_{i}|^{2}-\frac{1}{2}|\phi_{i}|^{2}}}{\pi^2\sqrt{m_i!n_i!}}A^{\rm C}({\boldsymbol \phi},{\boldsymbol \psi})&, \end{split} \end{equation} and \begin{equation} \label{eq:AFQU} \begin{split} A^{\rm F}&({\bf n},{\bf m}):=\langle{\bf m}'|{\bf n}\rangle=\int d{\boldsymbol Q}d{\boldsymbol q}\langle{\bf m}'|{\boldsymbol Q}'\rangle \langle{\boldsymbol Q}'|{\boldsymbol q}\rangle \langle {\boldsymbol q}|{\bf n}\rangle \\ &=\int d{\boldsymbol Q}d{\boldsymbol q}\prod_{i}\frac{{\rm H}_{m_{i}}\left(\frac{Q_{i}}{\sqrt{2}}\right) {\rm H}_{n_{i}}\left(\frac{q_{i}}{\sqrt{2}}\right) {\rm e}^{-\frac{Q_{i}^{2}}{4}-\frac{q_{i}^{2}}{4}}}{\sqrt{2^{n_i+m_i+1}\pi n_i!m_i!}}A^{\rm Q}({\boldsymbol q},{\boldsymbol Q}). \end{split} \end{equation} Equations~(\ref{eq:AFCO}) and (\ref{eq:AFQU}) are two equivalent representations of the scattering amplitudes and provide a basis for realizing BS when supplemented with a physical ensemble of unitary matrices ${\bf u}$. The first expression in terms of the coherent-state transition amplitude $A^{\rm C}$ is convenient for exact calculations, while the second equation in terms of $A^{\rm Q}$ will be important when we connect BS with a three-step canonical transformation in order to understand its asymptotics for large $N$.
\section{A generating function for transition amplitudes} It is instructive to show how one finds yet another version of the transition amplitudes using the equivalence of the two representations in Eqns.~(\ref{eq:AFCO}) and (\ref{eq:AFQU}). To this end we use the identities \begin{equation} \phi^{n}=\left.\frac{\partial^{n}}{\partial k^{n}}{\rm e}^{k \phi}\right|_{k=0} {\rm \ \ , \ \ }{\rm H}_{n}(q)= {\rm e}^{q^{2}}\left.\frac{\partial^{n}}{\partial k^{n}}{\rm e}^{-(q-k)^{2}}\right|_{k=0}, \nonumber \end{equation} which allow us to perform exactly the integrals over the intermediate variables ${\boldsymbol \psi},{\boldsymbol \phi}$ in the coherent-state representation and ${\boldsymbol Q},{\boldsymbol q}$ in the quadrature case. After some calculation we get the exact, surprisingly simple expression \begin{equation} \label{eq:main} \begin{split} A^{\rm F}({\bf n},{\bf m})=&\left. \left(\prod_{i}\frac{1}{\sqrt{m_{i}!n_{i}!}}\frac{\partial^{m_{i}}}{\partial x_{i}^{m_{i}}} \frac{\partial^{n_{i}}}{\partial y_{i}^{n_{i}}}\right){\rm e}^{{\bf x} \cdot {\boldsymbol u} \cdot {\bf y}}\right|_{{\bf x}={\bf y}={\bf 0}} \\ =&\left(\prod_{i}\frac{\sqrt{m_{i}!n_{i}!}}{(-4\pi^{2})}\right)\oint\left(\prod_{i}\frac{dx_{i}dy_{i}}{x_{i}^{m_{i}+1}y_{i}^{n_{i}+1}}\right){\rm e}^{{\bf x} \cdot {\boldsymbol u} \cdot {\bf y}}, \end{split} \end{equation} which is one of our main results. It is a generating function providing the transition amplitudes as high-order derivatives of a multivariate exponential function, and it generalizes the results of \cite{Fyodorov}. Here it is clear that in any representation the complexity of many-body scattering comes from the combinatorics involved in taking high-order derivatives of large products of exponentials. The second expression, obtained by using the Cauchy integral formula (the closed integration contours enclose the origin), further transforms the problem in a way suitable for asymptotic analysis.
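The contour-integral form of Eq.~(\ref{eq:main}) is straightforward to evaluate numerically for a single mode, where the single-particle ``matrix'' is a pure phase $u$ and the exact amplitude is $A^{\rm F}(n,n)=u^{n}$ (a sketch; on the unit circles the trapezoidal rule is spectrally accurate):

```python
import math
import numpy as np

u = np.exp(0.37j)          # single mode: the "unitary" is a pure phase (arbitrary)
n = 3                      # n_in = m_out = 3, so the exact amplitude is u**3
K = 64                     # grid points on each unit circle
x = np.exp(2j * np.pi * np.arange(K) / K)

# double contour integral over the two unit circles, trapezoidal rule
measure = np.outer(1j * x, 1j * x) * (2 * np.pi / K) ** 2
integrand = np.exp(u * np.outer(x, x)) / np.outer(x ** (n + 1), x ** (n + 1))
A = math.factorial(n) / (-4 * np.pi ** 2) * (measure * integrand).sum()
print(abs(A - u ** n))     # ~1e-16: the contour representation reproduces u**n
```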
The generating function approach provides a way to eventually address some open questions, in particular the calculation of high-order moments of the distribution of transition amplitudes (or transition probabilities) over the ensemble of single-particle canonical transformations \cite{BosonSampling}. The particular advantage of this representation is that the average over the unitary group of matrices ${\bf u}$ representing the single-particle canonical transformation can be performed exactly. In section~\ref{sec:EPQ} we show how to follow this program in the simpler case of Ginibre (complex) matrices, and provide for the first time exact, explicit expressions for the third moments of the distribution of squared permanents. So far, all equivalent versions of the transition amplitudes have been obtained by exact, identical transformations. In the rest of this section we focus on the particular regime of high densities, i.e., $N:=\sum_{i}n_{i} \gg M$, where we can safely assume that the majority of configurations satisfy \begin{equation} \label{eq:HD} n_{i} \gg 1, m_{i} \gg 1, \end{equation} and powerful methods of asymptotic analysis can be applied. However, although BS involves the regime of large $N$ and $M$, it is expected to be a hard problem only in a specific asymptotic limit given by $M \gg N^{2}$ \cite{BosonSampling}, and the high-density limit cannot be used to make statements about it. The study of the behavior of the scattering amplitudes in the appropriate dilute limit of interest for BS is currently in progress.
If the conditions in Eq.~(\ref{eq:HD}) hold, we can then evaluate the contour integrals in Eq.~(\ref{eq:main}) by the method of steepest descent applied to \begin{equation} \begin{split} &A^{\rm F}({\bf n},{\bf m})=\left(\prod_{i}\frac{\sqrt{m_{i}!n_{i}!}}{-4\pi^{2}}\right) \\ &\times\oint\left(\prod_{i}dx_{i}dy_{i}\right){\rm e}^{-\sum_{i}(n_{i}+1)\log x_{i}-\sum_{i}(m_{i}+1)\log y_{i}+{\bf x} \cdot {\boldsymbol u} \cdot {\bf y}}, \end{split} \end{equation} thus making contact with the theory developed in \cite{Str}. Here we are not interested in the technical details of the full calculation of the large-$N$ asymptotics, but instead in the physical interpretation of the saddle-point conditions \begin{eqnarray} \frac{\partial}{\partial x_{l}}\left[-\sum_{i}n_{i}\log x_{i}-\sum_{i}m_{i}\log y_{i}+{\bf x} \cdot {\bf u} \cdot {\bf y}\right]&=&0, \\ \frac{\partial}{\partial y_{l}}\left[-\sum_{i}n_{i}\log x_{i}-\sum_{i}m_{i}\log y_{i}+{\bf x} \cdot {\bf u} \cdot {\bf y}\right]&=&0 \end{eqnarray} selecting the optimal values of the, so far, purely formal complex variables ${\bf x},{\bf y}$. Under the variable transformation \begin{equation} x_{i}=\sqrt{n_{i}}{\rm e}^{-i\theta_{i}},y_{i}=\sqrt{m_{i}}{\rm e}^{i\chi_{i}} \end{equation} the resulting set of $2M$ complex equations can be reduced to finding the $M$ real angles $\chi_{l}$ satisfying the conditions \begin{equation} \sum_{l,l'}u_{li}^{*}u_{l'i}^{}\sqrt{m_{l}m_{l'}}{\rm e}^{i(\chi_{l}-\chi_{l'})}= n_{i}. \end{equation} In other words, the asymptotic limit of many-body transition amplitudes for large densities is dominated by configurations ${\bf x},{\bf y}$ satisfying \begin{equation} \label{eq:Shoot} {\bf y}={\bf u} \cdot {\bf x} {\rm \ \ with \ \ }|y_{l}|^{2}=m_{l} {\rm \ and \ }|x_{i}|^{2}=n_{i}.
\end{equation} This shows that in the limit of large densities, the calculation of transition amplitudes requires the solution of (\ref{eq:Shoot}), namely the determination of the phases of the classical input and output field amplitudes (linearly related through ${\bf u}$) required to satisfy {\it shooting} (instead of initial-value) boundary conditions. This interpretation can be made even more explicit by considering the quadrature representation of the amplitudes. To this end, we consider the chain \begin{equation} \label{eq:chain} ({\bf n},{\boldsymbol \theta}) \to ({\bf q},{\bf p}) \to ({\bf Q},{\bf P}) \to ({\bf N},{\boldsymbol \Theta}) \end{equation} defined by \begin{equation} q_{i}+ip_{i}=\sqrt{n_{i}}{\rm e}^{i\theta_{i}} {\rm \ \ , \ \ } Q_{i}+iP_{i}=\sqrt{N_{i}}{\rm e}^{i\Theta_{i}} \end{equation} and \begin{equation} {\bf Q}+i{\bf P}={\bf u}({\bf q}+i{\bf p}). \end{equation} The semiclassical approximation for the amplitudes that define the unitary operators representing the first and last canonical transformations of the chain in Eq.~(\ref{eq:chain}), \begin{equation} A^{\rm qn}({\bf n},{\bf q})=\langle {\bf q}|{\bf n}\rangle {\rm \ \ , \ \ } A^{\rm QN}({\bf N},{\bf Q})=\langle {\bf Q}|{\bf N}\rangle, \end{equation} is given by \cite{canonical_transformation_semiclassics1,canonical_transformation_semiclassics2} \begin{equation} \begin{split} A^{\rm qn}({\bf n},{\bf q})&\simeq \prod_{i=1}^{M}\sqrt{\frac{1}{2\pi i}\frac{\partial^{2} f(n_{i},q_{i})}{\partial n_{i}\partial q_{i}}}{\rm e}^{if(n_{i},q_{i})}, \\ A^{\rm QN}({\bf N},{\bf Q})&\simeq \prod_{i=1}^{M}\sqrt{\frac{1}{2\pi i}\frac{\partial^{2} F(N_{i},Q_{i})}{\partial N_{i}\partial Q_{i}}}{\rm e}^{iF(N_{i},Q_{i})} \end{split} \end{equation} in terms of generating functions $f(n,q),F(N,Q)=f(N,Q)$ satisfying \begin{equation} \theta=\frac{\partial}{\partial n}f(n,q), {\rm \ \ \ }\Theta=\frac{\partial}{\partial N}f(N,Q).
\end{equation} Finding these generating functions is a standard problem with explicit solution \begin{equation} f(n,q)=\frac{q}{4}\sqrt{4n-q^2}-n\arccos\left(\frac{q}{2\sqrt{n}}\right). \end{equation} Interestingly, and contrary to $A^{\rm qn}$ and $A^{\rm QN}$, due to the linearity of the transformation $({\bf q},{\bf p}) \to ({\bf Q},{\bf P})$ the intermediate step (responsible for the change of single-particle representation) is not merely approximated but in fact exactly given by the semiclassical expression. The result is then identical to $A^{\rm Q}({\bf q},{\bf Q})$ in Eq.~(\ref{eq:Qq}). We can now construct the semiclassical approximation for the full transformation $({\bf n},{\boldsymbol \theta}) \to ({\bf N},{\boldsymbol \Theta})$ by operator multiplication of the three intermediate transformations, \begin{equation} A^{\rm F}({\bf n},{\bf N})=\int d{\bf q}d{\bf Q}\left(A^{\rm QN}({\bf N},{\bf Q})\right)^{\ast}A^{\rm Q}({\bf Q},{\bf q})A^{\rm qn}({\bf n},{\bf q}), \end{equation} to get \begin{equation} \label{eq:main2} \begin{split} A&^{\rm F}({\bf n},{\bf N})= \\ &\begin{split} \int d{\bf q}d{\bf Q}\frac{\exp\left\{-\frac{i}{4}\left(\begin{array}{c} {\bf q} \\ {\bf Q} \end{array}\right)\left(\begin{array}{cc} \left({\boldsymbol \sigma}^i\right)^{-1}{\boldsymbol \sigma}^r & -\left({\boldsymbol \sigma}^i\right)^{-1} \\ -\left[\left({\boldsymbol \sigma}^i\right)^T\right]^{-1} & {\boldsymbol \sigma}^r\left({\boldsymbol \sigma}^i\right)^{-1} \end{array}\right)\left(\begin{array}{c} {\bf q} \\ {\bf Q} \end{array}\right)\right\}}{\sqrt{\det\left[-4\pi i{\boldsymbol \sigma}\left({\boldsymbol \sigma}^i\right)^T\right]}} \\ \times\prod\limits_{j}\frac{\exp\left\{i\left[-\frac{Q_j}{4}\sqrt{4N_j-Q_j^2}+N_j\arccos\left(\frac{Q_j}{2\sqrt{N_j}}\right)\right]\right\}}{\sqrt{2\pi i\sqrt{N_j-Q_j^2/4}}} \\
\times\prod\limits_{j}\frac{\exp\left\{i\left[\frac{q_j}{4}\sqrt{4n_j-q_j^2}-n_j\arccos\left(\frac{q_j}{2\sqrt{n_j}}\right)\right]\right\}}{\sqrt{2\pi i\sqrt{n_j-q_j^2/4}}}. \end{split} \end{split} \end{equation} This is exactly the same result we obtain by considering the large-$n$ limit of the exact representation, Eq.~(\ref{eq:AFQU}), using the asymptotics \begin{equation} \begin{split} H_{n}(q)\simeq \sqrt{\frac{2^{n+1}n^n{\rm e}^{-n+q^2}}{\sqrt{1-\frac{q^2}{2n+1}}}} \\ &\begin{split} \times\cos\left[\left(n+\frac{1}{2}\right)\arcsin\left(\frac{q}{\sqrt{2n+1}}\right)+\frac{q}{2}\sqrt{2n+1-q^2}\right. \\ \left.-\frac{\pi}{2}\left(n+\frac{1}{2}\right)\right]. \end{split} \end{split} \end{equation} Note that the complexity of many-body scattering is reflected in the coherent sums over quantum-mechanical amplitudes explicitly appearing in Eq.~(\ref{eq:BSFP}): quantum interference results in the highly irregular pattern one obtains for the transition probabilities as a function of the input and output states \cite{Malte_tutorial}. In contrast to the {\it semiclassical} method presented here, {\it quasiclassical} approaches, based on adding probabilities instead of amplitudes, capture only the gross features of these patterns. To stress this point, it is important to understand where quantum interference is hidden in our semiclassical approach. In terms of Eq.~(\ref{eq:BSFP}), by expanding the permanent of ${\bf M}({\boldsymbol \sigma})$ as a sum over products of single-particle scattering matrix elements, these coherent sums over products of single-particle paths can be made very explicit, as in \cite{us}. The semiclassical interpretation of many-body scattering (at least for the case of large occupations) allows us to understand the complexity of the problem and the origin of massive quantum interference in terms of classical canonical transformations.
To this end, consider now the unique canonical transformation implementing the full change of canonical variables $({\bf n},{\boldsymbol \theta}) \to ({\bf N},{\boldsymbol \Theta})$ without the intermediate steps in terms of quadratures. Then the semiclassical theory of quantum canonical transformations indicates that we must find the generating function $w({\bf n},{\bf N})$ that, together with the definitions \begin{eqnarray} {\boldsymbol \theta}=\frac{\partial}{\partial {\bf n}}w({\bf n},{\bf N}) &{\rm \ \ , \ \ }& {\boldsymbol \Theta}=\frac{\partial}{\partial {\bf N}}w({\bf n},{\bf N}), \\ \sqrt{N_{i}}{\rm e}^{i\Theta_{i}}&=&\sum_{j}u_{ij} \sqrt{n_{j}}{\rm e}^{i\theta_{j}}, \end{eqnarray} gives the explicit form of the transformation as \begin{equation} {\bf N}={\bf N}({\bf n},{\boldsymbol \theta}) {\rm \ \ , \ \ }{\boldsymbol \Theta}= {\boldsymbol \Theta}({\bf n},{\boldsymbol \theta}), \end{equation} in order to write \begin{equation} A^{\rm F}({\bf n},{\bf N}) \propto \left|\det\frac{\partial^{2}w({\bf n},{\bf N})}{\partial {\bf n} \partial {\bf N}}\right|^{\frac{1}{2}}{\rm e}^{iw({\bf n},{\bf N})}. \end{equation} However, in this case we encounter a new issue that was not present in the canonical transformations we have seen before: although the {\it initial value problem} of finding $({\bf N},{\boldsymbol \Theta})$ from $({\bf n},{\boldsymbol \theta})$ admits a unique solution (given by the transformation equations), the {\it boundary problem} of finding $({\boldsymbol \theta},{\boldsymbol \Theta})$ for given $({\bf n},{\bf N})$ admits a very large set of solutions. 
Each of these solutions represents a branch $\gamma$ of the multivalued generating function $w$, and the correct form of the semiclassical approximation to the transition amplitude is then \begin{equation} A^{\rm F}({\bf n},{\bf N})=\sum_{\gamma} \left|\det\frac{1}{2\pi}\frac{\partial^{2}w_{\gamma}({\bf n},{\bf N})}{\partial {\bf n} \partial {\bf N}}\right|^{\frac{1}{2}}{\rm e}^{iw_{\gamma}({\bf n},{\bf N})+i\mu_{\gamma}\frac{\pi}{4}}. \end{equation} Here the index $\mu_{\gamma}$ is a topological property of the particular branch that can be computed from the classical transformation. As expected, this is also what one obtains from the generating function (\ref{eq:main2}) within the saddle-point approximation. Hence the semiclassical origin of both the complexity of many-body scattering and the massive quantum interference associated with it is the highly nonlinear form (and therefore the multivaluedness) of the boundary problem connecting occupations. \section{Distribution of permanents} \label{sec:EPQ} In this section we calculate the first three moments of the distribution of permanents over the (complex) Ginibre ensemble, to show by way of example how the representation~(\ref{eq:main}) leads to a solvable combinatorial problem. The calculation is exact, in that it does not involve any asymptotics. It would of course be important to perform a similar calculation in the regime of interest for BS, and this is work in progress. Let $\sigma^2$ denote the variance of the independent real and imaginary parts of the matrix elements of ${\bf A}$, and let $N$ be its dimension.
We start with an exact representation obtained from (\ref{eq:main}), in a slightly different form, \begin{equation} {\rm Perm}~{\bf A}=\left.\left(\prod_{i=1}^{N}\frac{\partial^{2}}{\partial x_{i} \partial y_{i}}\right){\rm e}^{{\bf x}^{\tau}{\bf A}{\bf y}}\right|_{{\bf x}={\bf y}={\bf 0}} \,, \end{equation} where ${\bf x}=(x_{1},\ldots,x_{N})$ (and similarly for ${\bf y}$) is a column vector and $\tau$ denotes transposition. This representation allows the Gaussian average to be performed exactly. Define the tensor \begin{equation} {\boldsymbol \rho}^{(k)}=\left({\bf y}^{(k)}\right) \left({\bf x}^{(k)}\right)^{\tau} \end{equation} such that \begin{equation} \sum_{k=1}^{2n}\left({\bf x}^{(k)}\right)^{\tau}{\bf A}\left({\bf y}^{(k)}\right)={\rm Tr} \left[{\bf A}\sum_{k=1}^{2n}{\boldsymbol \rho}^{(k)}\right] \,. \end{equation} The average of $|{\rm Perm}~{\bf A}|^{2n}$ is evaluated by separating real and imaginary parts of the matrix ${\bf A}$ to get \begin{equation} \begin{split} \langle|{\rm Perm}~{\bf A}|^{2n}\rangle=\left(\prod_{k=1}^{n}\prod_{i=1}^{N}\frac{\partial^{2}}{\partial x_{i}^{(2k-1)}\partial y_{i}^{(2k-1)}} \frac{\partial^{2}}{\partial x_{i}^{(2k)}\partial y_{i}^{(2k)}}\right) \\ \times \left. {\rm e}^{2\sigma^{2}\sum_{i,j=1}^{N}\sum_{k,l=1}^{n}x_{i}^{(2k-1)} y_{j}^{(2k-1)} x_{i}^{(2l)} y_{j}^{(2l)}} \right|_{{\bf x}={\bf y}={\bf 0}} \,, \end{split} \end{equation} which is equivalent to \begin{equation} \label{eq:PerMcomp} \begin{split} \langle |{\rm Perm}~{\bf A}|^{2n}\rangle={\rm coefficient \ \ of~}\prod_{k=1}^{2n}\prod_{i=1}^{N}x_{i}^{(k)}y_{i}^{(k)} {\rm \ \ in~} \\ \prod_{i,j}\prod_{k,l}\left(1+2\sigma^{2}x_{i}^{(2k-1)}y_{j}^{(2k-1)} x_{i}^{(2l)}y_{j}^{(2l)}\right). \end{split} \end{equation} The evaluation of the coefficients in~(\ref{eq:PerMcomp}) is related to the following combinatorial problem. First of all we can remove the factor $2 \sigma^2$ in~(\ref{eq:PerMcomp}) and in return multiply the overall coefficient by $(2\sigma^2)^{nN}$ at the end.
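The representation above can be tested directly for small $N$: the coefficient of $x_{1}\cdots x_{N}$ in ${\rm e}^{{\bf x}^{\tau}{\bf A}{\bf y}}$ is $\prod_{i}({\bf A}{\bf y})_{i}$, and extracting the coefficient of $y_{1}\cdots y_{N}$ from this product reproduces the permanent (a sketch using a bitmask over the multilinear terms):

```python
import math
import random
from itertools import permutations

def perm_direct(A):
    """Reference permanent via the permutation sum."""
    n = len(A)
    return sum(math.prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def perm_via_coefficients(A):
    """Coefficient of y_1...y_N in prod_i (A y)_i; a bitmask marks which
    y-variables have already been used, so only multilinear terms are kept."""
    n = len(A)
    poly = {0: 1.0}
    for i in range(n):                     # multiply by (A y)_i = sum_j A[i][j] y_j
        new = {}
        for mask, c in poly.items():
            for j in range(n):
                if not mask & (1 << j):
                    m2 = mask | (1 << j)
                    new[m2] = new.get(m2, 0.0) + c * A[i][j]
        poly = new
    return poly[(1 << n) - 1]

random.seed(1)
A = [[random.gauss(0, 1) for _ in range(4)] for _ in range(4)]
print(abs(perm_direct(A) - perm_via_coefficients(A)))   # ~0 (machine precision)
```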
For each value of $i=1,\ldots,N$ all the $x_i^{(k)}, \ k=1,\ldots,2n$, have to appear exactly once. They come in pairs $x_i^{(k')} x_i^{(l')}$ with $k'$ odd and $l'$ even. We start by counting the number of ways to combine different factors in $\prod_{k,l=1}^n (1 + x_i^{(2k-1)} x_i^{(2l)})$ to get each variable (for fixed $i$) exactly once. This is equivalent to counting pairings between $n$ (representing the even indices) and $n$ (representing the odd indices), which itself is equivalent to counting permutations of $n$. We write $ \sm{1&2&\cdots&n \\ P(1)&P(2)&\cdots&P(n)} $ or abbreviated $ \sm{P(1)&P(2)&\cdots&P(n)} $ to address a specific permutation $P \in S_n$. Specific pairs shall be denoted by the corresponding column $ \sm{k \\ l} = \sm{k \\ P(k)} $. For all $x$-variables one has to count $N$ independent permutations of $n$. Writing those one below the other will be referred to as a {\it table}. For the $y$-variables again $N$ permutations of $n$ have to be counted. Since they come in combination with the $x$-variables in~(\ref{eq:PerMcomp}) they are not independent from the $x$-pairings. Each tuple $(i,k,l)$ representing a pair in the $N$ $x$-pairings actually comes with a fourth entry as a four-tuple $(i,j,k,l)$. This means that the pairs $(k,l)$ building up the $y$-pairings have to be taken from the $x$-pairings. In other words the $y$-pairings have to be a rearrangement of the $x$-pairings, keeping the $(k,l)$-indices of all pairs. We will refer to this as a {\it vertical} rearrangement or permutation, depending on the context. In the process of rearranging, identical pairs have to be treated as distinguishable (\textit{e.g.}~vertically swapping two identical pairs in the $y$-table has to be counted additionally) since the set of tuples $\{(i_1,j_1,k,l),(i_2,j_2,k,l)\}$ is different from the set $\{(i_1,j_2,k,l),(i_2,j_1,k,l)\}$ (if $i_1\neq i_2, j_1 \neq j_2$) although the $(k,l)$-indices of the two pairs involved are the same.
(i) $n=1$: There is trivially only one permutation for each $i$ concerning the $x$-variables. The same holds for the $y$-variables, but there are $N!$ ways to vertically rearrange all the $\sm{1 \\ 1}$-pairs. We get \begin{equation} \langle |{\rm Perm}~{\bf A}|^{2}\rangle = (2\sigma^2)^{N} N! \,. \end{equation} (ii) $n=2$: The two different permutations of \(n=2\) are \( P_1 = \sm{1&2 \\ 1&2} \) and \( P_2 = \sm{1 & 2 \\ 2 & 1} \), which are {\it incompatible}, meaning they do not share any pair. Let \(N_1 (M_1)\) and \(N_2 (M_2) \) denote the multiplicities of \(P_1\) and \(P_2\) in the \(x (y)\)-table. The incompatibility implies $ M_1=N_1, M_2=N_2 $. The number of ways to distribute these permutations among the \(N\) rows of each of the two tables is \( \big( \frac{N!}{N_1! N_2!} \big)^2 \) and the number of vertical permutations of pairs is \( (N_1!)^2 (N_2!)^2 \). We get \begin{eqnarray} \langle |{\rm Perm}~{\bf A}|^{4}\rangle &=& (2\sigma^2)^{2N} \sum_{N_1,N_2=0}^N \, \delta_{\sum N_a,N} (N!)^2 \nonumber \\ &=& (2\sigma^2)^{2N} N! (N+1)! \,. \end{eqnarray} (iii) $n=3$: The \(3!=6\) permutations of \(n=3\) are \( (P_1,\ldots,P_6) = ( \sm{1&2&3}, \sm{2&1&3}, \sm{1&3&2}, \sm{3&2&1}\), \(\sm{2&3&1}, \sm{3&1&2} ) \). Again we let \(N_a\) and \(M_a\) (\(a=1,\ldots,6\)) denote the multiplicities of the permutations \(P_a\) in the \(x\)- and \(y\)-table respectively. We define the \(9\) pair-counters \(p_\alpha\) \begin{eqnarray} \label{eq:paircountersC} p_1 &=& N_1+N_3 \,, \quad p_2 = N_2+N_5 \,, \quad p_3 = N_4+N_6 \,, \nonumber \\ p_4 &=& N_2+N_6 \,, \quad p_5 = N_1+N_4 \,, \quad p_6 = N_3+N_5 \,, \\ p_7 &=& N_4+N_5 \,, \quad p_8 = N_3+N_6 \,, \quad p_9 = N_1+N_2 \nonumber \end{eqnarray} for the pairs \( \sm{1\\1}, \sm{1\\2}, \sm{1\\3}, \sm{2\\1}, \sm{2\\2}, \sm{2\\3}, \sm{3\\1}, \sm{3\\2}, \sm{3\\3} \) (in that order).
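The first two moments, results (i) and (ii), can be checked numerically by averaging over samples of the Ginibre ensemble. The following Python sketch (our own, not part of the derivation) uses a naive $O(N!)$ permanent, which is adequate for small $N$:

```python
import numpy as np
from itertools import permutations
from math import factorial

def permanent(A):
    """Naive O(N!) permanent, adequate for the small N used here."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][p[i]]
        total += prod
    return total

rng = np.random.default_rng(1)
N, sigma, samples = 3, 0.5, 20000
m2 = m4 = 0.0
for _ in range(samples):
    # Ginibre: independent real and imaginary parts, each with variance sigma^2
    A = rng.normal(0.0, sigma, (N, N)) + 1j * rng.normal(0.0, sigma, (N, N))
    x = abs(permanent(A))**2
    m2 += x / samples
    m4 += x**2 / samples

exact2 = (2 * sigma**2)**N * factorial(N)                           # result (i)
exact4 = (2 * sigma**2)**(2 * N) * factorial(N) * factorial(N + 1)  # result (ii)
print(m2, exact2)  # Monte Carlo estimate vs (2 sigma^2)^N N!
print(m4, exact4)  # Monte Carlo estimate vs (2 sigma^2)^{2N} N! (N+1)!
```

The Monte Carlo averages agree with the exact values within statistical fluctuations, which are sizeable for the fourth moment because the distribution of $|{\rm Perm}\,{\bf A}|^2$ is heavy-tailed.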
Taking into account (a) the multinomials for the distributions of the permutations among the \(N\) rows for both \(x\) and \(y\), (b) the restriction to \(y\)-tables that are vertical rearrangements of the \(x\)-table and (c) the vertical permutation of identical pairs for \(y\) yields \begin{equation} \label{eq:PerMcomplex6} \begin{split} \langle |{\rm Perm}~{\bf A}|&^{6}\rangle = \\ &\begin{split} (2\sigma^2)^{3N} \prod_{a=1}^{6} \left( \sum_{N_{a}=0}^N \right) \, \delta_{\sum N_{a},N} \prod_{a=1}^{6} \left( \sum_{M_{a}=0}^N \right) \, \delta_{\sum M_{a},N} \\ \times \prod_{\alpha=1}^{9} \delta_{p_\alpha({\bf N}), p_\alpha({\bf M})} \; \frac{(N!)^2}{\prod_{a} (N_{a}! M_{a}!)} \prod_{\alpha} p_\alpha({\bf M})! \,. \end{split} \end{split} \end{equation} The \(9 \times 6\)-matrix \( \left( \frac{\partial p_\alpha({\bf N})}{\partial N_a} \right)_{\alpha,a} \) has rank \(5\), so there are \(5\) independent restrictions from \( \prod_\alpha \delta_{p_\alpha({\bf N}),p_\alpha({\bf M})} \). Also the restriction \(\sum_a M_a = N\) is automatically satisfied once \(\sum_a N_a = N\) applies. Thus~(\ref{eq:PerMcomplex6}) can also be expressed containing only \(6\) sums. In the following form the number of sums is reduced to \(7\), keeping one of the restrictions and treating \(M_1\) as an independent summation variable. \begin{equation} \label{eq:PerMcomplex6reduced} \begin{split} \langle |{\rm Perm}~{\bf A}|^{6}\rangle = (2\sigma^2)^{3N} (N!)^2 \prod_{a=1}^{6} \left( \sum_{N_{a}=0}^N \right) \, \delta_{\sum N_{a},N} \sum_{M_1=0}^N \\ \times \frac{\prod_{\alpha} p_\alpha({\bf N})!}{M_1! \prod_{a} N_{a}! \prod_{a=2}^6 M_a({\bf N},M_1)!} \,, \end{split} \end{equation} where the \(M_a\) (\(a>1\)) are given by \begin{eqnarray} M_2 &=& N_1+N_2-M_1 \,, \quad M_3 = N_1+N_3-M_1 \,, \nonumber \\ M_4 &=& N_1+N_4-M_1 \,, \quad M_5 = N_5-N_1+M_1 \,, \\ M_6 &=& N_6-N_1+M_1 \nonumber \end{eqnarray} and $ \frac{1}{(-m)!} := 0 $ for $ m \in \mathbb{N}\backslash\{0\} $.
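The reduced sum~(\ref{eq:PerMcomplex6reduced}) is straightforward to evaluate with exact rational arithmetic. The following Python sketch (our own implementation) enumerates the multiplicities $N_1,\ldots,N_6$ and $M_1$ and reproduces the scaled third moments quoted in the text:

```python
from fractions import Fraction
from itertools import product
from math import factorial

def inv_fact(m):
    # implements the convention 1/(-m)! := 0 for m in N\{0}
    return Fraction(0) if m < 0 else Fraction(1, factorial(m))

def scaled_third_moment(N):
    """<|Perm A|^6> / [(2 sigma^2)^{3N} (N!)^3] from the reduced sum."""
    total = Fraction(0)
    for Na in product(range(N + 1), repeat=6):  # multiplicities N_1..N_6
        if sum(Na) != N:
            continue
        N1, N2, N3, N4, N5, N6 = Na
        # the nine pair counters p_alpha(N)
        p = (N1+N3, N2+N5, N4+N6, N2+N6, N1+N4, N3+N5, N4+N5, N3+N6, N1+N2)
        num = Fraction(1)
        for pa in p:
            num *= factorial(pa)
        for M1 in range(N + 1):
            # M_2..M_6 fixed by the pair-counter constraints
            Ms = (M1, N1+N2-M1, N1+N3-M1, N1+N4-M1, N5-N1+M1, N6-N1+M1)
            term = num
            for m in Ms:
                term *= inv_fact(m)
            for na in Na:
                term *= inv_fact(na)
            total += term
    return total / factorial(N)  # total * (N!)^2 / (N!)^3

print([scaled_third_moment(N) for N in (1, 2, 3, 4, 5)])
```

Exact rationals (`fractions.Fraction`) are essential here, since the scaled moments are non-integer rationals such as $\frac{122}{3}$.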
Applying~(\ref{eq:PerMcomplex6reduced}) we evaluate the scaled third moment $\langle |{\rm Perm}~{\bf A}|^{6}\rangle / [(2\sigma^2)^{3N} (N!)^3]$ for the lowest values of $N$, obtaining $6$, $18$, $\frac{122}{3}$, $79$, $140$, $\frac{10508}{45}$, $\frac{13068}{35}$, $579$, $\frac{276442}{315}$, $\frac{228754}{175}$, $\frac{3697434}{1925}$, $\frac{48374363}{17325}$, $\frac{12084328}{3003}$, $\frac{55026632}{9555}$, $\frac{5536562488}{675675}$, $\frac{290360139}{25025}$, $\frac{3748239326}{229075}$, $\frac{73954590386}{3216213}$, $\frac{156246017726}{4849845}$, $\frac{33081258263}{734825}$, $\frac{95883756128092}{1527701175}$, $\frac{767871070556}{8793675}$, $\frac{750199663660}{6186609}$ for $N=1,\ldots,23$ respectively, and use these values to estimate an asymptotically exponential (as opposed to factorial) scaling of this quantity, proportional to ${\rm e}^{\lambda N} N^\nu (1+\mathcal{O}(\frac{1}{N}))$ with $\lambda \sim 0.3$. \section{Conclusions} We have shown that the usual many-body scattering scenario realizing the Boson Sampling problem (in the sense of sampling over an ensemble of large matrices using their permanents as weights) is a particular case of a much more general class of physical situations where the transition amplitudes between many-body Fock states built from two different single-particle basis sets are measured. Within this general scenario, Boson Sampling requires the calculation of the many-body unitary operator representing a linear, canonical transformation at the single-particle level. We have provided different versions of the problem, obtained by expressing these transition amplitudes in different intermediate bases, such as coherent states and quadrature states of the field. Starting with these exact representations, we performed an asymptotic analysis valid in the limit of large occupations and provided their semiclassical approximation in the spirit of coherent sums over solutions of a classical boundary problem.
Along the way, we have derived an exact form of the many-body transition amplitudes, equivalent to the calculation of permanents, and used it to obtain exact results for the moments of the distribution of permanents over the Ginibre ensemble. Work on the extension of our asymptotic analysis into the regime of low densities, where BS is expected to be hard, is currently under way. \begin{acknowledgements} We thank M.~Tichy, A.~Buchleitner, J.~Kuipers and V.~S.~Shchesnovich for valuable discussions and three anonymous referees for their help in improving the paper. \end{acknowledgements} \bibliographystyle{andp2012}
\section{Introduction} Interest in studying the production of hyperon--antihyperon pairs following antiproton--proton annihilations is based largely on the aim to understand the nature of flavour production and its dynamics. There is a large amount of data from the PS185~\cite{PS185} collaboration as reported during the LEAP-05 conference by T.~Johansson~\cite{tord} and J.M.~Richard~\cite{JMR}. The goal of those PS185 experiments is to establish the definitive $\bar p p \to \bar Y Y$ data set in the low--energy regime. From the theoretical analysis of the data, one hopes to gain insight into the behaviour of hadron interactions at intermediate energies, i.e. in an energy regime where perturbative QCD is inappropriate and both quark--gluon and meson degrees of freedom are believed to be important.\\ Heavy hyperon--antihyperon production studies are relevant since i) rather little is known about heavy flavour hyperons, in some cases even down to the proof of their existence; ii) different hyperon--antihyperon pairs are produced in distinct but specific hadronic environments; iii) threshold production studies keep the interpretation as simple as possible and iv) quark dynamic effects can be observed.
The special interest in the investigation of $\bar \Omega \Omega$ production will be discussed in this contribution.\\ \section{Results from the PS185 experiment} Results of the PS185--collaboration studies --~as far as relevant for the present discussion~-- are shown in figure~1 (left) and (right), demonstrating that the differential cross sections for hyperon--antihyperon pair production following $\bar p p$ annihilation feature the onset of higher partial waves already at very low excess energies, and that the singlet fraction for the $\bar p p \to \bar \Lambda \Lambda$ reaction is largely consistent with zero, respectively.\\ \begin{figure} [h] \hspace{0.0cm} \includegraphics[height=.24\textheight]{PS185_cross_section_t.ps} \hspace{0.0cm} \vspace{+7.0cm} \includegraphics[height=.24\textheight]{PS185_singlet_fraction.ps} \caption{Left: PS185 results of differential cross sections at various energies, shown as a function of momentum transfer. $~$ Right: PS185 results of the singlet fractions for the reaction $\bar p p \to \bar \Lambda \Lambda$ at various energies, shown as a function of the excess energy} \end{figure} \section{From light to heavy hyperons} Based on the experimental and theoretical experience from the production of light antihyperon--hyperon pairs at LEAR, it is proposed to measure the production of both heavy antihyperon--hyperon pairs and the three-$\phi$-meson system at the future facility FAIR at GSI.
The foreseen momentum range~\cite{FAIR_LOI} of the $\bar p$--beam at the HESR/FAIR allows one to study hyperons with one, two and three valence s-quarks or one valence c-quark up to the $\Xi_c$ with its mass of M = 2.47 GeV/c$^2$ in the $\bar p p$ interaction, see table 1.\\ \begin{table} \begin{tabular}{crrr} \hline \tablehead{1}{r}{b}{Hyperon\\ } & \tablehead{1}{r}{b}{Mass\\~~~~~~(MeV/c$^2$)} & \tablehead{1}{r}{b}{$\sqrt s$\\~~~~~~(MeV)} & \tablehead{1}{r}{b}{Beam \\~~~~~~~~Momentum\\(GeV/c)} \\ \hline $\Lambda~~~$ &~~~~~~~~~~~~~~~~~~1115.57 $\pm$ 0.06 &~~~~~~~~2231.14 & 1.435 \\ $\Sigma^+~$ &1189.37 $\pm$ 0.06 &2378.74 &1.854 \\ $\Sigma^0~~$ &1192.55 $\pm$ 0.10 &2385.1~~ &1.871 \\ $\Sigma^-~$ &1197.50 $\pm$ 0.05 &2395.0~~ &1.900 \\ $\Xi^0~~$ &1314.80 $\pm$ 0.8~~ &2629.60 &2.582 \\ $\Xi^-~~$ &1321.34 $\pm$ 0.14 &2642.68 &2.621 \\ $\Omega^-~~$ &1672.43 $\pm$ 0.14 &3344.86 &4.936 \\ \\ $\Lambda_c^+~~$ &2285.2~~ $\pm$ 1.2~~ &4570.4~~ &10.150 \\ $\Sigma_c^{0}~~$ &2452.7~~ $\pm$ 1.3~~ &4905.4~~ &11.848 \\ $\Sigma_c^{++}$ &2453.0~~ $\pm$ 1.2~~ &4906.0~~ &11.851 \\ $\Sigma_c^{+}~$ &2453.2~~ $\pm$ 1.2~~ &4906.4~~ &11.853 \\ $\Xi_c^{+}~~$ &2466.5~~ $\pm$ 2.5~~ &4933.0~~ &11.993 \\ $\Xi_c^{0}~~$ &2473.1~~ $\pm$ 2.0~~ &4945.2~~ &12.063 \\ $\Omega_c^{0}~~$ &2740.~~~~ $\pm$ 2.0~~ &5480.~~~~ &15.1 \\[0.1cm] \hline \\[-0.5cm] \end{tabular} \caption{{\mbox{Relevant parameters for the production of hyperons in $\bar p p$ annihilation}}} \label{tab:a} \end{table} \\ The common special feature of the hyperons is their weak (flavour changing) decay with $c\tau$ values of a few centimeters, as presented in table~2. Typical decay schemes of the lighter hyperons can be grouped into two categories. The charged hyperons (antihyperons) decay mainly~(see~figure~\ref{fig:H_D}), with a delayed vertex, into a charged meson and a $\Lambda$ ($\bar \Lambda$), which then decays to the two charged particles $\pi^-$ and $p$ ($\pi^+$ and $\bar p$).
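The beam momenta listed in table~1 follow from two-body threshold kinematics in the fixed-target $\bar p p$ system, where $\sqrt{s_{\rm thr}}=2M_Y$. A short Python check (our own script; $m_p = 938.272$ MeV/c$^2$ is an assumed input, table values may differ at the MeV level due to rounding):

```python
from math import sqrt

M_P = 938.272  # proton mass in MeV/c^2 (assumed value)

def threshold_momentum(m_hyperon):
    """Antiproton lab momentum (MeV/c) at the Ybar-Y pair production threshold.

    Fixed-target kinematics: s = 2 m_p^2 + 2 m_p E_beam, with sqrt(s) = 2 M_Y.
    """
    s = (2.0 * m_hyperon)**2
    e_beam = (s - 2.0 * M_P**2) / (2.0 * M_P)
    return sqrt(e_beam**2 - M_P**2)

for name, mass in [("Lambda", 1115.57), ("Xi-", 1321.34), ("Omega-", 1672.43)]:
    print(name, round(threshold_momentum(mass) / 1000.0, 3), "GeV/c")
```

The computed values (1.435, 2.620 and 4.935 GeV/c) reproduce the corresponding table entries to within about 1 MeV/c.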
The neutral hyperons (antihyperons) decay via $\Lambda + \pi^0$ ($\bar \Lambda + \pi^0$) into the two charged particle system $\pi^-$ and $p$ ($\pi^+$ and $\bar p$) and $\gamma$--quanta. The rather simple decay features allow an effective neutral or charge-multiplicity--step trigger. It is understood, however, that the hyperons with quarks heavier than the strange quark no longer have a unique decay scheme, and a restriction to particular decay channels appears appropriate.\\[-0.6cm] \begin{figure}[htp] \hspace{1.00cm} \includegraphics[width=1.11\textwidth]{Hyperon_Decay.eps} \caption{Decay scheme of the two hyperons $\Xi^0$ and $\Omega^-$} \label{fig:H_D} \vspace{-0.50cm} \end{figure} \section{The System: 3 $\times s$ and 3 $\times \bar s$ quarks} A systematic study of antihyperon-hyperon production with increasing strangeness (charm) content is certainly interesting due to the change of the quark dynamics in the different flavour environment of hadronic matter.\\ Besides the general interest of this research, the production of the six--quark system with three $s$--quarks and three $\bar s$--quarks in the final state (i.e. the observation of the $\bar \Omega \Omega $ baryon pair and the $\phi \phi \phi$ three meson system) should be stressed. The angular distribution of $\bar \Omega $ (or $\Omega $) would be symmetric about 90$^\circ$ in the c.m. system if the intermediate state is a compound-like state of gluonic matter with definite quantum numbers. If, however, other production mechanisms such as boson exchange are relevant, the symmetry expected from a pure intermediate gluonic matter state will be broken.
It would be an interesting and important result to see whether there is a correlation between the initial $\bar p$ ($p$) and the final $\bar \Omega \Omega $.\\ \begin{table} \begin{tabular}{crrrrrrrr} \hline \tablehead{1}{r}{b}{Hyperon\\ } & \tablehead{1}{r}{b}{Quark-\\content} & \tablehead{1}{r}{b}{I} & \tablehead{1}{r}{b}{J$^{\pi}$} & \tablehead{1}{r}{b}{Mass ~ \\~~~~~~(MeV/c$^2$)} & \tablehead{1}{r}{b}{~~~Mean life ~ \\(s $\times 10^{-13}$)} & \tablehead{1}{r}{b}{$c\tau ~$ \\~~~~(cm)} & \tablehead{1}{r}{b}{$\alpha_{Main}$ } \\ \hline $\Lambda~~~$ &uds &0 &$\frac{1}{2}$$^+$ &1115.57 $\pm$ 0.06 &2632 $\pm$ 20 &7.89 & +~0.642 $\pm$ 0.013 \\ $\Sigma^+~$ &uus &1 &$\frac{1}{2}$$^+$ &1189.37 $\pm$ 0.06 & 799 $\pm$ 4~~ &2.40 & -~0.980 $\pm$ 0.015 \\ $\Sigma^0~~$ &uds &1 &$\frac{1}{2}$$^+$ &1192.55 $\pm$ 0.10 &~~7.4 $\times 10~^{-7}$ \\ $\Sigma^-~$ &dds &1 &$\frac{1}{2}$$^+$ &1197.50 $\pm$ 0.05 &1479 $\pm$ 11 &4.40 & -~0.068 $\pm$ 0.008 \\ $\Xi^0~~$ &uss &$\frac{1}{2}$ &$\frac{1}{2}$$^+$ &1314.80 $\pm$ 0.8~~ &2900 $\pm$ 90 &8.69 & -~0.411 $\pm$ 0.022 \\ $\Xi^-~~$ &dss &$\frac{1}{2}$ &$\frac{1}{2}$$^+$ &1321.34 $\pm$ 0.14 &1639 $\pm$ 15 &4.91 & -~0.456 $\pm$ 0.014 \\ $\Omega^-~~$ &sss &0 &$\frac{3}{2}$$^+$ &1672.43 $\pm$ 0.14 & 822 $\pm$ 12 &2.46 & -~0.026 $\pm$ 0.026 \\ \\ $\Lambda_c^+~~$ &udc &0 &$\frac{1}{2}$$^+$ &2285.2~~ $\pm$ 1.2~~ &1.91 $\pm$0.15 &0.006 & \\ $\Sigma_c^{0}~~$ &ddc &1 &$\frac{1}{2}$$^+$ &2452.7~~ $\pm$ 1.3~~ & & & \\ $\Sigma_c^{++}$ &uuc &1 &$\frac{1}{2}$$^+$ &2453.0~~ $\pm$ 1.2~~ & & & \\ $\Sigma_c^{+}~$ &udc &1 &$\frac{1}{2}$$^+$ &2453.2~~ $\pm$ 1.2~~ & & & \\ $\Xi_c^{+}~~$ &usc &$\frac{1}{2}$ &$\frac{1}{2}$$^+$(?) &2466.5~~ $\pm$ 2.5~~ &3.0 $\pm$ 1.0 &0.009 & \\ $\Xi_c^{0}~~$ &dsc &$\frac{1}{2}$ &$\frac{1}{2}$$^+$(?) &2473.1~~ $\pm$ 2.0~~ &0.82 $\pm$ 0.6 &0.002 & \\ $\Omega_c^{0}~~$ &ssc &$\frac{1}{2}$ &$\frac{1}{2}$$^+$(?) 
&2740.~~~~ $\pm$ 2.0~~ & \\[0.1cm] \hline \\[-0.5cm] \end{tabular} \caption{Some properties of hyperons~[5]} \label{tab:b} \end{table} \\[-0.5cm] Since the asymmetry parameter $\alpha$ of the $\Omega$ decay is very small (see table~2), its polarization features cannot be measured via the weak decay directly. The determination of the polarization of the weakly decaying daughter hyperons allows one to extract spin observables for the $\Omega$ particle. \\ A distinct feature of the $\bar \Omega \Omega$ production is the creation of two spin 3/2 objects out of the $\bar p p$ interaction. Results from the PS185 experiment prove a clear dominance of the triplet $\bar s s $ production at threshold. Since in the $\bar \Omega (\Omega)$ the three $s$--quarks ($\bar s$--quarks) are oriented in parallel, the three $\bar s s $ pairs created out of the gluonic intermediate state should couple to the quantum numbers 3$^-$ if the $\bar \Omega (\Omega)$ is created with relative L~=~0 angular momentum.\\ The comparison of $\bar \Omega \Omega$ baryon pair production to $\phi \phi \phi$ three--meson production (where the three $I^G~(J^{PC})~=~0^-~(1^{--})$ $\bar s s$ mesons need not, but can, be produced with no relative correlation) would give valuable information for a unique determination of the intermediate matter state.\\[-0.2cm]
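The $c\tau$ values in table~2 are simply the mean lives multiplied by the speed of light; a quick Python check (our own script) for a few representative entries:

```python
C_CM_PER_S = 2.9979e10  # speed of light in cm/s

def c_tau_cm(mean_life_in_1e13_s):
    """c*tau in cm from a mean life quoted in units of 10^-13 s, as in Table 2."""
    return C_CM_PER_S * mean_life_in_1e13_s * 1e-13

# mean lives and tabulated c*tau values for Lambda, Xi- and Omega-
for name, tau, ctau_table in [("Lambda", 2632, 7.89),
                              ("Xi-", 1639, 4.91),
                              ("Omega-", 822, 2.46)]:
    print(name, round(c_tau_cm(tau), 2), "cm (table:", ctau_table, "cm)")
```

All three reproduce the tabulated centimeter-scale decay lengths, which is what makes a delayed-vertex trigger on hyperon decays feasible.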
\section{Introduction} An important step in the analysis of data from wide-field surveys is the classification of sources into stars and galaxies. Although challenging, this separation is crucial for many areas of cosmology and astrophysics. Different classification methods have been proposed in the literature, each with its own advantages and disadvantages. One of the most used methods is based on morphological separation, where parameters related to the object structure and photometry are used \citep{Bertin:1996fj,2011MNRAS.412.2286H,Molino:2013oia,2019A&A...631A.156D,2019A&A...622A.177L}. In these methods one assumes that stars appear as point sources while galaxies appear as extended sources. This has been shown to be consistent with previous spectroscopic observations \citep{LeFevre:1995pq,Dawson:2012va,2013ApJS..208....5N}. However, at fainter magnitudes, the differences between these point-like and extended structures decrease and this method becomes unreliable. In what follows, by ``stars'' we mean point-like objects that are not galaxies, that is, both stars and quasars.% \footnote{Also very compact galaxies such as Green Peas fall into the category of point-like objects \citep{2009MNRAS.399.1191C, 2010ApJ...715L.128A}. } Future photometric surveys such as the Javalambre-Physics of the Accelerating Universe Astrophysical Survey \citep[J-PAS,][]{2014arXiv1403.5237B}\footnote{\href{http://www.j-pas.org}{www.j-pas.org}} and the Vera Rubin Observatory Legacy Survey of Space and Time \citep[LSST,][]{2017arXiv170804058L}\footnote{\href{https://www.lsst.org}{www.lsst.org}} will detect a large number of objects and will have to manage data produced at an unprecedented rate. The LSST, in particular, will reach a rate of petabytes of data per year \citep{garofalo_botta_ventre_2016}.
This wealth of data demands very efficient numerical methods but also gives us the opportunity to deploy Machine Learning (ML) algorithms, which, trained on big astronomical data, have the potential to outperform traditional methods based on explicit programming, if biases due to potentially unrepresentative training sets are kept under control. ML has been widely applied in the context of cosmology and astrophysics, see \citet{2017ConPh..58...99I}. A non-exhaustive list of applications is photometric classification of supernovae \citep{2016ApJS..225...31L,2017ascl.soft05017C,VargasdosSantos:2019ovq}, gravitational wave analysis \citep{2013PhRvD..88f2003B,2015JPhCS.654a2001C}, photometric redshift \citep{2018A&A...616A..69B,2015MNRAS.452.3100C}, morphology of galaxies \citep{2010arXiv1005.0390G,2010MNRAS.406..342B}, and determination of atmospheric parameters for stellar sources \citep{2019A&A...622A.182W}. ML applications to star-galaxy separation have been successfully performed on many surveys. \citet{2011AJ....141..189V}, for example, used various tree methods to classify SDSS sources. \citet{2015MNRAS.453..507K} used classifiers that mix supervised and unsupervised ML methods with CFHTLenS data. Recently, Convolutional Neural Networks (CNN) have been adopted: using images as input, they achieve an Area Under the Curve (AUC) > 0.99 for CFHTLenS and SDSS data \citep{2017MNRAS.464.4463K}. For more ML applications in the context of star/galaxy classification see \citet{2019arXiv190908626C,2018MNRAS.481.5451S,2019MNRAS.483..529C,2012ApJ...760...15F,2004AJ....128.3092O}. Our goal here is to classify the objects detected by Pathfinder miniJPAS \citep{Bonoli:2020ciz}, which observed $\sim$$1 \text{deg}^2$ of the AEGIS field with the 56 narrow-band J-PAS filters and the 4 $ugri$ broad-band filters, for a total of approximately 64000 objects (mag$_{AB} \lesssim 24$). 
The ML algorithms that we consider in this work are supervised and, for the learning process, need an external trustworthy classification. We adopt Sloan Digital Sky Survey \citep[SDSS,][]{2015ApJS..219...12A} and Hyper Suprime-Cam Subaru Strategic Program \citep[HSC-SSP,][]{2019PASJ...71..114A} data. We compare different ML models to each other and to the two classifiers adopted by the J-PAS survey: the \texttt{CLASS\_STAR} provided by SExtractor \citep{Bertin:1996fj} and the stellar/galaxy loci classifier (SGLC) introduced in \citet{2019A&A...622A.177L}. This paper is organized as follows. In Section~\ref{jpas}, we briefly describe J-PAS and miniJPAS and we review the classifiers adopted in miniJPAS. In Section~\ref{mlsec} we present the ML algorithms used in this work, and in Section~\ref{metrics} we define the metrics that we use to assess the performance of the classifiers. Our results are presented in Sections~\ref{results} and~\ref{vac}, and our conclusions in Section~\ref{conclusions}. \section{J-PAS and miniJPAS} \label{jpas} J-PAS is a ground-based imaging survey that will observe 8500 deg$^2$ of the sky via the technique of quasi-spectroscopy: by observing with 56 narrow-band filters and 4 $ugr(i)$ broad-band filters% \footnote{miniJPAS features also the $i$ band, while J-PAS is not expected to have it.} it will produce a pseudo-spectrum ($R\sim 50$) for every pixel \citep[for the filters' specifications see][]{Bonoli:2020ciz}. It features a dedicated 2.5m telescope with an excellent étendue, equipped with a 1.2 Gigapixel camera with a very large field of view of $4.2 \deg^2$. The observatory is on the mountain range ``Sierra de Javalambre'' (Spain), at an altitude of approximately 2000 meters, an especially dark region with the very good median seeing of $0.7''$ \citep{2010SPIE.7738E..0VC}.
Therefore, J-PAS sits between photometric and spectroscopic surveys, fruitfully combining the advantages of the former (speed and low cost) with the ones of the latter (spectra). In particular, thanks to its excellent photo-$z$ performance, it will be possible to accurately study the large scale structure of the universe using the galaxy and quasar catalogs produced by J-PAS \citep{Bonoli:2020ciz}. Between May and September 2018, the 2.5m J-PAS telescope with its filter set was equipped with the Pathfinder camera, used to test the telescope performance and execute the first scientific operations. The camera features a 9k $\times$ 9k CCD, with a 0.3 deg$^2$ field-of-view and 0.225 arcsec pixel size. This led to the miniJPAS survey, which covered a total of $\sim 1 \text{deg}^2$ of the AEGIS field,% \footnote{See \citet{Davis:2006tn} for information on the All-wavelength Extended Groth strip International Survey (AEGIS).} reaching the target depth planned for J-PAS (mag$_{AB}$, 5$\sigma$ in a 3'' aperture, between 21.5 and 22.5 for the narrow-band filters and up to 24 for the broad-band filters). miniJPAS consists of the 4 fields/pointings AEGIS1-4, each with a field-of-view of approximately 0.25 deg$^2$. The miniJPAS primary catalogue contains 64293 objects in the $r$ detection band, with forced-photometry in all other filters. See \citet{Bonoli:2020ciz} for the presentation paper. The miniJPAS Public Data Release was presented to the public in December 2019.\footnote{\href{https://j-pas.org/datareleases/minijpas_public_data_release_pdr201912}{j-pas.org/datareleases/minijpas\_public\_data\_release\_pdr201912}} \subsection{Crossmatched catalogs} \label{xmatch} \begin{figure} \centering \includegraphics[width=.98 \columnwidth]{distribution_data_SDSS} \includegraphics[width=.98 \columnwidth]{distribution_data_HSC} \caption{ Histograms of the $r$-band magnitudes of the sources of the miniJPAS catalog crossmatched with the SDSS (top) and HSC-SSP (bottom) catalogs.
Classification by SDSS and HSC-SSP, respectively. Galaxies are shown in red, stars in semi-transparent blue.} \label{fig:catalogs} \end{figure} \begin{figure} \centering \includegraphics[width=.98 \columnwidth]{duo_z_distrib} \caption{Histograms of the photometric redshifts of the galaxies of the miniJPAS catalog crossmatched with the SDSS (semi-transparent purple) and HSC-SSP (green) catalogs. } \label{fig:z_distrib} \end{figure} The goal of this paper is to develop an ML model that can accurately classify the objects detected by Pathfinder miniJPAS. As we will consider supervised ML algorithms, we need, for the learning process, a trustworthy classification by some other survey that has a sufficiently high overlap with miniJPAS. We use SDSS\footnote{\href{https://www.sdss.org/dr12/}{sdss.org/dr12/}} and HSC-SSP\footnote{\href{https://hsc-release.mtk.nao.ac.jp/doc/}{hsc-release.mtk.nao.ac.jp/doc/}} data, whose classification is expected to be trustworthy within the intervals $15\le r \le20$ and $18.5\le r \le23.5$, respectively. As said earlier, by ``stars'' we mean point-like objects that are not galaxies, that is, both stars and quasars. We assume that the classification by SDSS and HSC-SSP is trustworthy within this definition \citep{2015ApJS..219...12A,2019PASJ...71..114A}. We found 1810 common sources with SDSS, 691 galaxies and 1119 stars, and 11089 common sources with HSC-SSP, 9398 galaxies and 1691 stars. See Fig.~\ref{fig:catalogs} for the $r$-band distributions of stars and galaxies and Fig.~\ref{fig:z_distrib} for the redshift distribution of galaxies. \subsubsection{SDSS classification} SDSS is a photometric and spectroscopic survey conducted at the Apache Point Observatory (New Mexico, USA) with a 2.5-m primary mirror. We used the SDSS DR12 photometric catalog \texttt{minijpas.xmatch\_sdss\_dr12}\footnote{For details, see \href{http://archive.cefca.es/catalogues/minijpas-pdr201912}{archive.cefca.es/catalogues/minijpas-pdr201912}}. 
Stars are defined according to an extendedness (difference between the CModel and PSF magnitudes) less than 0.145.\footnote{\href{https://www.sdss.org/dr12/algorithms/classify}{www.sdss.org/dr12/algorithms/classify/\#photo\_class}} In order to test the photometric calibration by SDSS we crossmatched the latter with the catalog from the ALHAMBRA (Advanced Large Homogeneous Area Medium Band Redshift Astronomical) survey \citep{2008AJ....136.1325M}.% \footnote{\href{http://svo2.cab.inta-csic.es/vocats/alhambra}{svo2.cab.inta-csic.es/vocats/alhambra}} We obtained 1055 sources after imposing mask and saturation flags. As discussed in \citet{Molino:2013oia}, ALHAMBRA provides a trustworthy classification in the magnitude range $15\le r \le 21$. As one can see from Fig.~\ref{fig:sdssclass} (top), ALHAMBRA covers the relevant magnitude range and agrees well with SDSS up to $r=20$ (bottom). Indeed, within $15\le r \le 20$, the percentages of false negatives and false positives are 0.2\% and 1.9\%, respectively (positive refers to the object being a galaxy). Note that, for the value-added catalog, we will use SDSS in the more limited range $15\le r \le 18.5$ so that the percentages of false negatives and false positives are 0\% and 0.7\%, respectively (using $p_{\rm cut}=0.5$, see Section~\ref{coma}). \begin{figure} \centering \includegraphics[width=.98 \columnwidth]{sdssphoto_alhambra} \includegraphics[width=.98 \columnwidth]{alhambra_sdss_phot} \caption{ Top: Histograms of the $r$-band magnitudes of the objects resulting from the crossmatch between the SDSS catalog used in this paper and ALHAMBRA. Galaxies are shown in red, stars in semi-transparent blue. Bottom: disagreement between SDSS and ALHAMBRA as a function of $r$ magnitude.
Sources classified as galaxies by ALHAMBRA and as stars by SDSS are in purple, while vice versa in semi-transparent green.} \label{fig:sdssclass} \end{figure} \subsubsection{HSC-SSP classification} The HSC-SSP is a photometric survey with an 8.2-m primary mirror located in Hawaii, USA. We crossmatched the miniJPAS data with the wide field from the Public Data Release 2. Stars are defined according to an extendedness less than 0.015.\footnote{\href{https://hsc-release.mtk.nao.ac.jp/doc/index.php/stargalaxy-separation-2/}{hsc-release.mtk.nao.ac.jp/doc/index.php/stargalaxy-separation-2/}} We used the following data quality constraints: \texttt{isprimary = True}, \texttt{r\_extendedness\_flag!=1} and \texttt{r\_inputcount\_value>=4} for HSC-SSP, and \texttt{flag=0} and \texttt{mask=0} for miniJPAS. The crossmatch was performed with the \texttt{TOPCAT}% \footnote{\href{http://www.star.bris.ac.uk/~mbt/topcat/}{www.star.bris.ac.uk/~mbt/topcat/}} software with a tolerance of 1 arcsec. In order to test the photometric calibration by HSC-SSP we crossmatched the latter with the spectroscopic catalogs from the DEEP2 Galaxy Redshift Survey (1992 sources \cite{2013ApJS..204...21M}). We could not use this spectroscopic catalog to check the photometric SDSS calibration because it does not cover the required magnitude range. As one can see from Fig.~\ref{fig:subaruclass} (top), DEEP2 covers the relevant magnitude range and agrees well with HSC-SSP (bottom). Indeed, for the range $18.5\le r \le23.5$, the percentages of false negatives and false positives are 1.9\% and 0\%, respectively. \begin{figure} \centering \includegraphics[width=.98 \columnwidth]{HSC-deep2} \includegraphics[width=.98 \columnwidth]{deep_hsc_phot} \caption{ Top: Histograms of the $r$-band magnitudes of the objects resulting from the crossmatch between the HSC-SSP catalog used in this paper and DEEP2. Bottom: disagreement between HSC-SSP and DEEP2 as a function of $r$ magnitude.
No object was classified as galaxy by HSC-SSP and star by DEEP2.} \label{fig:subaruclass} \end{figure} \subsection{Input parameters for the ML algorithms} The features that are used as input for our algorithms can be grouped into photometric and morphological classes. Besides these two sets of features, we also consider the average PSF in the $r$ detection band of the 4 fields of miniJPAS, which is 0.70" for AEGIS1, 0.81" for AEGIS2, 0.68" for AEGIS3 and 0.82" for AEGIS4. The different PSF values signal different observing conditions: by including the PSF value we let the ML algorithms know that the data are not homogeneous. \subsubsection{Photometric information} As photometric information we consider the \texttt{MAG\_AUTO} magnitudes associated with the 60 filters together with their errors. The rationale behind including the errors is that, in this way, one can characterize the statistical distribution associated with a magnitude measurement. Indeed, observations may suffer from inhomogeneity due to varying observing conditions and the measurement errors should be able to account, at least in part, for this potential bias. As we will see, how well one can measure the magnitude associated with a filter may be more important than the measurement itself. As said earlier, sources are detected in the $r$ band so that one may have non-detections in the other filters. Null or negative fluxes (after background subtraction) are assigned a magnitude value of 99. The ML algorithms are expected to learn that 99 marks missing values. \subsubsection{Morphological information} We consider the following 4 morphological parameters: \begin{itemize} \item concentration $c_r=r_{1.5''}-r_{3.0''}$, where $r_{1.5''}$ and $r_{3.0''}$ are the $r$-band magnitudes within fixed circular apertures of 1.5'' and 3.0'', respectively, \item ellipticity $A/B$, where $A$ and $B$ are the RMS of the light distribution along the maximum and minimum dispersion directions, respectively.
\item the full width at half maximum $FWHM$ assuming a Gaussian core, \item \texttt{MU\_MAX/MAG\_APER\_3\_0} ($r$ band), where \texttt{MU\_MAX} and \texttt{MAG\_APER\_3\_0} are the peak surface brightness above background and the magnitude within 3.0", respectively. Note that here we are taking the ratio in order to have a parameter that is complementary to $c_r$. \end{itemize} Figures~\ref{fig:mor-sdss} and~\ref{fig:mor-hsc} show their distributions for stars and galaxies in the two catalogs. The stellar bimodality in $c_r$ and \texttt{MU\_MAX/MAG\_APER\_3\_0} is due to the fact that the four fields feature different average PSFs. We discuss these figures when examining feature importance in Section~\ref{featureimpo}. \begin{figure} \centering \includegraphics[width=.98 \columnwidth]{distribution_morphological_SDSS} \caption{ Distributions of the morphological parameters of stars and galaxies for the miniJPAS catalog crossmatched with SDSS. Galaxies are shown in red, stars in semi-transparent blue.} \label{fig:mor-sdss} \end{figure} \begin{figure} \centering \includegraphics[width=.98 \columnwidth]{distribution_morphological_HSC} \caption{ Distributions of the morphological parameters of stars and galaxies for the miniJPAS catalog crossmatched with HSC-SSP. Galaxies are shown in red, stars in semi-transparent blue.} \label{fig:mor-hsc} \end{figure} \subsection{J-PAS star/galaxy classifiers} \label{sgclass} Here, we briefly discuss the star/galaxy classifiers available for miniJPAS. However, first we show how HSC-SSP classifies objects into stars and galaxies. This is performed by drawing a ``hard cut'' in the source parameter space. In Figure~\ref{morphology_HSC-SSP} we plot the difference between $mag_{PSF}$ and $mag_{cmodel}$ as a function of $mag_{cmodel}$ for the HSC-SSP data using their $r$ band \citep[for the definitions see][]{2019PASJ...71..114A}.
Stars are expected to have $mag_{PSF} \simeq mag_{cmodel}$ while galaxies, due to their extended structure, should feature $mag_{PSF} > mag_{cmodel}$. Therefore, one can separate stars from galaxies via a cut in the extendedness parameter $mag_{PSF}-mag_{cmodel}$, which we show with a yellow line in Figure~\ref{morphology_HSC-SSP}. The disadvantage of this model is that it provides an absolute classification even though the uncertainties increase as we move toward fainter magnitudes. Note that for $r_{cmodel} \gtrsim 24$ the separation is not reliable as stars no longer cluster around a null extendedness. \begin{figure} \centering \includegraphics[width= \columnwidth]{morphologia} \caption{Density of objects as a function of extendedness (the difference between $mag_{PSF}$ and $mag_{cmodel}$) and $mag_{cmodel}$ for HSC-SSP data. The yellow line marks an extendedness of 0.015. According to this morphological classification the sources below the cut are stars and the ones above the cut are galaxies.} \label{morphology_HSC-SSP} \end{figure} \subsubsection{\texttt{CLASS\_STAR}} SExtractor \citep[Source Extractor,][]{Bertin:1996fj} is a software package developed for processing large images (60k $\times$ 60k pixels). It has been widely applied to photometric surveys, including miniJPAS. Besides detecting sources, SExtractor also classifies objects into stars and galaxies. The software has two internal classifiers, \texttt{CLASS\_STAR} and \texttt{SPREAD\_MODEL}. miniJPAS includes the classification via \texttt{CLASS\_STAR}, which is based on neural networks (see Section~\ref{sec:ann}).\footnote{\href{https://sextractor.readthedocs.io/en/latest/ClassStar.html}{sextractor.readthedocs.io/en/latest/ClassStar.html}} The network has 10 inputs: 8 isophotal areas, the peak intensity and the ``seeing'' control parameter. The output is probabilistic and quasars are classified as stars (in agreement with our convention).
\texttt{CLASS\_STAR} is reliable up to $r \sim 21$ \citep[see also][]{Bertin:1996fj}. \subsubsection{Stellar/galaxy loci classifier} \label{cpdf} miniJPAS includes the Bayesian classifier (SGLC) developed by \citet{2019A&A...622A.177L} for J-PLUS data.% \footnote{\href{http://www.j-plus.es/datareleases}{j-plus.es/datareleases}} The concentration versus magnitude diagram presents a bimodal distribution, corresponding to compact point-like objects and extended sources. \citet{2019A&A...622A.177L} model both distributions to obtain the probability of each source to be compact or extended. The model with suitable priors is then used to estimate the Bayesian probability that a source is a star or a galaxy. Also in this case quasars are expected to be classified as ``stars.'' This method was adapted to miniJPAS data; in particular, a different galaxy population model was adopted. See~\citet{Bonoli:2020ciz} for more details. \section{Machine learning} \label{mlsec} Machine learning is a branch of artificial intelligence that includes statistical and computational methods dedicated to providing predictions or making decisions without being explicitly programmed to perform the task. Machine learning is employed in a variety of computing tasks for which the explicit programming of well-performing algorithms is difficult or infeasible. ML methods can either be supervised or unsupervised. The former learn from pre-classified data with known inputs and outputs. When a classification is unavailable, one relies instead on unsupervised methods, which can group items that are related in the parameter space, i.e., learn without the need for external information. In this paper, we focus on binary supervised classification methods.
In this case, the model (the internal parameters of the algorithm) is implicitly adjusted via the ``training set.'' Its performance is then tested with the remaining part of the dataset---the ``test set.'' Specifically, the internal parameters of the prediction function $f: \mathbb{R}^n \to Y$ are trained via the training dataset $\mathbf{x}_i \in \mathbb{R}^n$ ($n$ is the dimensionality of the feature space, $i$ labels the elements of the training set) with classifications $y_i \in \{0,1\}$, where 1 stands for galaxy and 0 for star. Classifiers are divided into non-probabilistic and probabilistic classifiers. The former output the best class directly, while the latter output the probability of each class (the best class being the one with the highest probability). Here, we will consider only binary probabilistic classifiers, so that $f: \mathbb{R}^n \to [0,1]$; that is, $f$ gives the probability that an object is a galaxy. The probability of being a star is simply $1-f$. A value of $f$ close to 1 means that the object is likely a galaxy. According to the No-Free-Lunch theorem, no algorithm performs better than all the others in every situation~\citep{6795940}. As it is impossible to test all the methods available with all the possible choices of hyperparameters, our strategy was to first explore some of the main ML methods, namely, K-Nearest Neighbors (KNN), Decision Trees (DT), Random Forest (RF) and Artificial Neural Networks (ANN).% \footnote{We also tested Support-Vector Machine (SVM) with the linear, polynomial and Radial Basis Function (RBF) kernels. We found results similar to DT and KNN.} Subsequently, because the RF technique responded best, we decided to focus on decision-tree algorithms and ensemble models, adding Extremely Randomized Trees (ERT) and an Ensemble Classifier (EC) to our analysis.
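As an illustration of this setup, the sketch below (toy data of our own making, not the miniJPAS catalog) builds a binary probabilistic classifier with \texttt{scikit-learn}: the \texttt{predict\_proba} output plays the role of $f$, the galaxy probability lies in $[0,1]$, and the star probability is $1-f$.

```python
# Illustrative sketch only: a probabilistic binary classifier f mapping
# features to [0, 1], with label 1 = galaxy and 0 = star (toy data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)),    # stars (class 0)
               rng.normal(4.0, 1.0, (200, 2))])   # galaxies (class 1)
y = np.array([0] * 200 + [1] * 200)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# predict_proba returns [P(star), P(galaxy)]; f is the galaxy probability.
f = clf.predict_proba([[4.0, 4.0]])[0, 1]
p_star = 1.0 - f
```

Any probabilistic classifier discussed below exposes the same interface, which is what allows the metrics of Section "Performance metrics" to be applied uniformly.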
These algorithms can be used for both regression and classification.% \footnote{ While classification is used to predict if an object belongs to a class, regression is used to predict real-valued outputs that do not belong to a fixed set. For example, regression is used when one uses photometric information in order to predict the source's redshift.} Here, we will only consider classification. We implemented these algorithms using the \texttt{scikit-learn}\footnote{\href{https://scikit-learn.org/}{scikit-learn.org}} package written in python \citep{Pedregosa:2012toh}. For more information about supervised learning see \citet{mitchell1997machine,hastie2009elements}. For the training and test sets we use 80\% and 20\% of the crossmatched catalogs, respectively. The division is performed randomly. This guarantees good training and accurate testing. A 70\%-30\% split is also a viable alternative. As mentioned in Section~\ref{xmatch}, the training sets are unbalanced as they feature a different number of galaxies and stars. We will show the purity curves for stars and galaxies in order to estimate the performance for each class. We now briefly review the six ML algorithms adopted in this paper. The hyperparameters used in the algorithms can be found in appendix \ref{hyperparameter}. \subsection{K-Nearest-Neighbors} The KNN algorithm is one of the simplest ML methods \citep{altman1992introduction,hastie2009elements}. It calculates the distance between the element to be classified (within the test set) and the ones belonging to the training set. The predicted class is then obtained from the $k$ nearest neighbors. Although in this work we use the Euclidean metric, it is possible to choose other metrics to compute the distances. This method is very fast and its computational cost is proportional to the size of the training set.
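A minimal sketch of this procedure on toy data (our own illustrative values, not the actual miniJPAS features): an 80\%/20\% random split followed by a KNN fit with the Euclidean metric. Note that \texttt{scikit-learn}'s \texttt{weights='distance'} option weighs each neighbor by the inverse distance (rather than its square).

```python
# Sketch (toy data): random 80%/20% split and a distance-weighted KNN.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (300, 3)),    # stars (class 0)
               rng.normal(3.0, 1.0, (300, 3))])   # galaxies (class 1)
y = np.array([0] * 300 + [1] * 300)

# Random 80%/20% division into training and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Euclidean metric; 'distance' weighs each of the k votes by 1/d.
knn = KNeighborsClassifier(n_neighbors=5, weights="distance",
                           metric="euclidean").fit(X_train, y_train)
proba_gal = knn.predict_proba(X_test)[:, 1]   # probabilistic output
accuracy = knn.score(X_test, y_test)
```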
The output of the model is discrete if one uses the majority vote from the $k$ nearest neighbors.\footnote{A vote is a classification by a neighbor.} Here, we use the probabilistic version which assigns a probability to each class. In this case the classification is given by the weighted average over the $k$ nearest neighbors: \begin{equation} f(\mathbf x_q)=\frac{\sum_{i=1}^k w_i f(\mathbf x_i)}{\sum_{i=1}^k w_i} \qquad \textrm{with} \qquad w_i = \frac{1}{d(\mathbf x_q, \mathbf x_i)^2} \,, \end{equation} where the sum over the $k$ nearest neighbors is weighted by the weights $w_i$, which are the inverse of the square of the distance $d(\mathbf x_q, \mathbf x_i)$ from the neighbors~($\mathbf x_i$) to the element to be classified~($ \mathbf x_q$, $q$ labels the test set), and $f( \mathbf x_i)=y_i$ are the classifications of the training set. As discussed in Section~\ref{kfold}, the number $k$ of neighbors is optimized via $k$-fold cross-validation. KNN has the advantage of being simple, intuitive and competitive in many areas. However, its computational complexity increases with the size of the dataset. \subsection{Decision Trees} \label{sec:dt} DT methods \citep[see][]{breiman1984classification,hastie2009elements} recursively divide the parameter space according to a tree structure, choosing at every split the division that minimizes the class impurity of the resulting groups. To build a Decision Tree we first define an Information Gain (IG) function: \begin{equation} IG(D_p,x_t)=I(D_p)-\frac{N_{\rm left}}{N_p}I(D_{\rm left})-\frac{N_{\rm right}}{N_p}I(D_{\rm right}) \,, \end{equation} where $D_p$ is the parent dataset of size $N_p$, $D_{\rm left}$ and $D_{\rm right}$ are the child datasets of sizes $N_{\rm left}$ and $N_{\rm right}$, respectively, and $I$ is a function called impurity.
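A numeric illustration of the information gain of a single split, using the Gini index as the impurity $I$ (one common choice; the helper names below are ours):

```python
# Toy illustration of IG(D_p, x_t) = I(D_p) - N_l/N_p I(D_l) - N_r/N_p I(D_r)
# with the Gini index as the impurity function I.
import numpy as np

def gini(labels):
    """Gini impurity: 1 - sum_i p(i)^2 over the classes present."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def information_gain(parent, left, right):
    n = len(parent)
    return (gini(parent)
            - len(left) / n * gini(left)
            - len(right) / n * gini(right))

# Parent node: 4 stars (0) and 4 galaxies (1), impurity 0.5.
parent = np.array([0, 0, 0, 0, 1, 1, 1, 1])
# A perfect split sends all stars left and all galaxies right: IG = 0.5.
perfect = information_gain(parent, parent[:4], parent[4:])
# A useless split leaves both children as mixed as the parent: IG = 0.
useless = information_gain(parent, parent[::2], parent[1::2])
```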
At every step the dataset is divided according to the feature and threshold $x_t$% \footnote{Within our notation, $x_t$ is the threshold for the feature that maximizes~$IG$ (there are $n$ features).} that maximize the $IG$ function, or, equivalently, that minimize the impurity in the child datasets. We considered several impurity functions, such as entropy, classification error and Gini. For example, the latter is: \begin{equation} \label{inpu} I_G(m)= 1-\sum_{i=0,1} p(i|m)^2 \,, \end{equation} where $p(i|m)$ is the fraction of data belonging to the class $i$ (0 or 1) for a particular node $m$ that splits the parent dataset into the child datasets. After the growth of the tree is completed, the feature space is divided with probabilities associated with each class, and the probability for a test element is that of the region it belongs to. During the branching process described above, some features appear more often than others. Using this frequency we can measure how important each feature is in the prediction process. We define the importance of each feature as: \begin{equation} \label{impo} Imp(x)=\sum_t \frac{N_p}{N_{\rm tot}}IG(D_p,x_t) \,, \end{equation} where $N_{\rm tot}$ is the size of the dataset. The higher the number of times a feature branches a tree, the higher its importance. Note that the first features that divide the tree tend to be of greater importance because the factor $N_p/N_{\rm tot}$ in Eq.~\eqref{impo} decreases as the tree grows ($N_p$ decreases). DT is easy to interpret and handle, but it is sensitive to small changes in the training set, thus suffering from potential biases. \subsection{Random Forest} Random Forest \citep{breiman2001random,hastie2009elements} is an ensemble algorithm built from a set of decision trees (the forest). Each tree generates a particular classification and the RF prediction is the combination of the different outputs.
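A sketch of this combination with \texttt{scikit-learn} (toy data of our own where only the first feature is informative): the forest's probabilistic output is the average of the per-tree probabilities, and the impurity-based importances defined above are exposed as \texttt{feature\_importances\_}.

```python
# Sketch: RF probabilities as the average over trees, plus the
# impurity-based feature importances (toy data; feature 0 is informative,
# feature 1 is pure noise).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 200
informative = np.concatenate([rng.normal(0, 1, n), rng.normal(3, 1, n)])
noise = rng.normal(0, 1, 2 * n)           # carries no class information
X = np.column_stack([informative, noise])
y = np.array([0] * n + [1] * n)

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# predict_proba is the mean of the per-tree class probabilities.
mean_proba = np.mean(
    [tree.predict_proba(X[:5]) for tree in forest.estimators_], axis=0)
importances = forest.feature_importances_   # normalized to sum to 1
```

The informative feature receives nearly all the importance, mirroring how the miniJPAS analysis ranks bands and morphological parameters in Section~\ref{featureimpo}.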
Each tree is different because of the stochastic method used to select the features when maximizing the IG function. Moreover, using the bootstrap statistical method, different datasets are built from the original one in order to grow more trees. For the discrete case the output is built from the majority vote, as seen with the KNN algorithm. For the probabilistic case we calculate the RF output as the average of the probabilities of each class for each tree. Finally, one computes the feature importances $Imp(x)$ for each tree of the ensemble and then averages them to obtain the RF feature importance. The diversity of trees decreases the bias as compared to DT, generating globally better models. On the other hand, RF requires greater memory and time as compared to DT. \subsection{Extremely Randomized Trees} Extremely Randomized Trees \citep{geurts2006extremely} is an ensemble method similar to RF. There are only two differences between RF and ERT. The first is that ERT originally does not use bootstrap, although the implementation in \texttt{scikit-learn} allows one to insert it in the analysis. The second is that, while RF tries to find the best threshold for a feature via the $IG$ function, in ERT the division is done randomly. Then, of all the randomly generated splits, the split that yields the highest score is chosen to split the node. For large datasets ERT algorithms are faster than RF ones and yield a similar performance \citep{geurts2006extremely}. \subsection{Artificial Neural Networks} \label{sec:ann} Artificial Neural Networks mimic the functioning of the nervous system, being able to recognize patterns from a representative dataset \citep[for an introduction see][]{mitchell1997machine,hastie2009elements}. The model we use in our analysis is a simple supervised model called the Multilayer Perceptron (MLP). An MLP consists of a set of perceptrons arranged in different layers. A perceptron, or artificial neuron, is a binary classifier.
The data features are inserted in the input layer, the learning process occurs in the hidden layers, and the object classification is performed by the output layer. The information in the hidden layers is passed through each perceptron several times until convergence. In this algorithm, we can have several layers containing hundreds of perceptrons. To train the neural network, one uses a Cost Function that should be minimized. As learning method we use backpropagation \citep{1986Natur.323..533R}. We tested the \texttt{LBFGS}, \texttt{Stochastic Gradient Descent} and \texttt{Adam} solvers, besides various activation functions. The values of the hyperparameters that give the best performance are given in Appendix \ref{hyperparameter}. In particular, we adopted 1 hidden layer with 200 neurons. ANN algorithms are very competitive and have the ability to deal with complex non-linear problems, but possess low interpretability and require powerful processors. Finally, we briefly discuss the differences between \texttt{CLASS\_STAR} and the ANN classifier used in this work. First, our ANN classifier is trained on real miniJPAS data, while \texttt{CLASS\_STAR} was trained on simulated images. Second, although they both feature one hidden layer, \texttt{CLASS\_STAR} has an input layer with 10 parameters and a hidden layer with 10 neurons while our classifier uses an input layer with 64 parameters (4 morphological features plus 60 photometric bands) and has a hidden layer with 200 neurons. \subsection{Ensemble Classifiers} The Ensemble method aims to construct a meta classifier from the union of different algorithms. Generally, when efficiently combined, these classifiers can perform better than the single best algorithm. In order to combine the classifiers we adopt the weighted sum rule with equal weights.
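The weighted sum rule with equal weights corresponds to \texttt{scikit-learn}'s soft voting. A sketch on toy data follows (here combining KNN, RF and a small MLP as stand-ins, since the SGLC probabilities used in our EC come from outside \texttt{scikit-learn}):

```python
# Sketch of a soft-voting ensemble with equal weights (toy data; the
# member classifiers here are illustrative stand-ins).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)),    # stars
               rng.normal(3.0, 1.0, (200, 2))])   # galaxies
y = np.array([0] * 200 + [1] * 200)

# voting="soft" averages predict_proba; equal weights give w_j = 1/3 each.
ec = VotingClassifier(
    estimators=[("knn", KNeighborsClassifier(n_neighbors=5)),
                ("rf", RandomForestClassifier(n_estimators=100,
                                              random_state=0)),
                ("ann", MLPClassifier(hidden_layer_sizes=(20,),
                                      max_iter=2000, random_state=0))],
    voting="soft", weights=[1, 1, 1]).fit(X, y)

proba_gal = ec.predict_proba([[3.0, 3.0]])[0, 1]
```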
The probability prediction function $f$ can be written as: \begin{equation} f(\textbf x_q)= \frac{\sum^m_{j=1}w_j f_{j}(\textbf x_q)}{\sum^m_{j=1}w_j} \,, \end{equation} where $f_{j}(\textbf x_q)$ is the probabilistic binary classification from the classifier $j$ and $m$ is the number of classifiers considered. We implemented this algorithm using the \texttt{VotingClassifier} function from \texttt{scikit-learn}. In the following, the ensemble classifier (EC) comprises the ANN, RF and SGLC methods with equal weights ($w_j =1/3$). Note that EC is not a pure ML classifier as it uses SGLC, see Section~\ref{cpdf}. These algorithms generally inherit the advantages and disadvantages of the methods on which they are based. \section{Performance metrics} \label{metrics} We will now introduce the metrics that we adopt in order to assess the performance of the classifiers. See \citet{mitchell1997machine,hastie2009elements} for more details. \subsection{Confusion Matrix} \label{coma} As we are considering probabilistic classifiers, the classification of sources into stars or galaxies depends on a probability threshold $p_{\rm cut}$ to be specified. In our case, all objects with $f>p_{\rm cut}$ will be classified as galaxies. The choice of $p_{\rm cut}$ depends on completeness and purity requirements. Once $p_{\rm cut}$ is specified, one can summarize the classification performance using the confusion matrix, which thoroughly compares predicted and true values. For a binary classifier the confusion matrix has four entries: True Positives (TP), True Negatives (TN), False Positives (FP) and False Negatives (FN). TP are sources correctly classified as galaxies by the model. TN are sources correctly classified as stars. FN are sources classified as stars by the model when, actually, they are galaxies. FP are sources classified as galaxies when they are stars.
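For a toy set of true labels and galaxy probabilities (illustrative values of our own), the four entries can be obtained as follows; note that \texttt{scikit-learn} orders the matrix as $[[TN, FP], [FN, TP]]$:

```python
# The four confusion-matrix entries for a binary classifier
# (1 = galaxy, 0 = star) at a given probability threshold p_cut.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])              # true classes
f = np.array([0.9, 0.8, 0.4, 0.2, 0.6, 0.1, 0.7, 0.3])   # galaxy probabilities

p_cut = 0.5
y_pred = (f > p_cut).astype(int)   # objects with f > p_cut are galaxies

# Rows are true classes, columns are predictions: [[TN, FP], [FN, TP]].
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
```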
\subsection{Metrics} The receiver operating characteristic (ROC) curve represents a comprehensive way to summarize the performance of a classifier. It is a parametric plot of the true positive rate (TPR) and false positive rate (FPR) as a function of $p_{\rm cut}$: \begin{equation} TPR(p_{\rm cut})=\frac{TP}{TP+FN} \qquad FPR(p_{\rm cut})=\frac{FP}{FP+TN} \end{equation} with $0\le p_{\rm cut} \le 1$. TPR is also called ``recall'' and, in astronomy, is the completeness. The performance of a classifier can then be summarized with the area under the curve (AUC). The AUC can assume values between 0 and 1. A perfect classifier has a value of 1, while a random classifier has, on average, a value of 1/2. The purity curve is a useful way to assess the performance of a classifier on an unbalanced dataset (as the training set does not feature the same number of stars and galaxies). It is a parametric plot of the completeness (or recall) and the purity (or precision) as a function of $p_{\rm cut}$: \begin{equation} \text{Purity} = \frac{TP}{TP+FP} \,. \end{equation} In order to summarize the purity curve, we consider the average precision (AP), which is the area under the purity curve and takes values between 0 and 1. Finally, one can measure the algorithm performance with the mean squared error ($MS\!E$) defined as: \begin{equation} MS\!E= \frac{1}{N_{\rm test}} \sum_{q=1}^{N_{\rm test}} \left (y_q- f(\mathbf x_q) \right)^2 \,, \end{equation} where $y_q$ are the test-set classifications and $N_{\rm test}$ is the test-set size. $MS\!E = 0$ characterizes a perfect performance. In the present case of a binary classifier, once the probabilistic output is thresholded into hard (0 or 1) classifications, it is $MS\!E=(FP + FN)/N_{\rm test}$. \subsection{$k$-fold cross-validation}\label{kfold} We use the $k$-fold cross-validation method in order to optimize the algorithm's hyperparameters, test for overfitting and underfitting and estimate the errors on AUC and AP. $k$-fold cross-validation separates the training data in $k$ equal and mutually exclusive parts (we adopt $k=10$).
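The metrics above, and a 10-fold cross-validation estimate of the AUC, can be sketched with \texttt{scikit-learn} (toy labels, probabilities and data of our own choosing):

```python
# Sketch: AUC, AP and MSE on toy predictions, plus a 10-fold
# cross-validation estimate of the AUC on toy data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, average_precision_score
from sklearn.model_selection import cross_val_score

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])              # 1 = galaxy, 0 = star
f = np.array([0.9, 0.8, 0.4, 0.2, 0.6, 0.1, 0.7, 0.3])   # galaxy probabilities

auc = roc_auc_score(y_true, f)            # area under the ROC curve
ap = average_precision_score(y_true, f)   # summarizes the purity curve
y_pred = (f > 0.5).astype(int)            # hard classifications
mse = np.mean((y_true - y_pred) ** 2)     # equals (FP + FN) / N_test

# 10-fold cross-validation of the AUC: mean and standard deviation.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
yy = np.array([0] * 100 + [1] * 100)
scores = cross_val_score(RandomForestClassifier(n_estimators=50,
                                                random_state=0),
                         X, yy, cv=10, scoring="roc_auc")
mean_auc, std_auc = scores.mean(), scores.std()
```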
The model is trained on $k-1$ parts and validated on the remaining one, the validation set. This process is repeated cyclically $k$ times. The final result is the mean and standard deviation of the metric. The ML methods described in Section~\ref{mlsec} depend on several internal hyperparameters (for example, the number $k$ of neighbors in KNN). In order to optimize them we performed $k$-fold cross-validation for several hyperparameter configurations. The results of the next Section are relative to the best configuration according to the AUC. See Appendix \ref{hyperparameter} for details. We also tested the ML algorithms against overfitting and underfitting. The former happens when the training is successful (high AUC) but not the testing (low AUC). The latter when neither the training nor the testing is successful (both AUCs are low). We checked that the average AUC from the $k$-fold cross-validation agrees with the AUC from the test set; all the methods pass this test. Finally, we can use $k$-fold cross-validation in order to estimate the error in the determination of the AUC and AP. This will help us understand if the differences between two estimators are significant and also how sensitive a classifier is with respect to the division of the dataset into training and test sets. \section{Results} \label{results} \setlength{\tabcolsep}{8pt} \renewcommand{\arraystretch}{1.3} \begin{table*} \caption{ Performance of the classifiers considered in this paper for the miniJPAS catalog crossmatched with the SDSS catalog ($15\le r \le20$, top) and with the HSC-SSP catalog ($18.5\le r \le23.5$, bottom). The best performance is marked in bold (EC is not considered). $P$ stands for the analysis that uses only photometric bands while $M\!+\!P$ stands for the analysis that uses photometric bands together with morphological parameters.
\label{table_foda}} \centering \begin{tabular}{@{}|l|llllll|l@{}} \hline \textbf{miniJPAS-SDSS} & $AUC_{M+P}$ & $AUC_{P}$ & $AP_{M+P}^{\rm gal}$ & $AP_{P}^{\rm gal}$ & $MS\!E_{M+P}$ & $MS\!E_{P}$ \\ \hline \hline SGLC & 0.994 & -- & 0.989 & -- & 0.006 & -- \\ \texttt{CLASS\_STAR} & 0.997 & -- & 0.993 & -- & 0.032 & -- \\ KNN & $0.996\!\pm\!0.003$ & $0.991\!\pm\!0.007$ & $0.990\!\pm\!0.008$ & $0.984\!\pm\!0.009$ & 0.015 & 0.027 \\ DT & $0.992\!\pm\!0.006$ & $0.984\!\pm\!0.012$ & $0.983\!\pm\!0.011$ & $0.974\!\pm\!0.018$ & 0.011 & 0.032 \\ RF & $\mathbf{0.997\!\pm\!0.006}$ & $0.996\!\pm\!0.004$ & $0.992\!\pm\!0.009$ & $0.995\!\pm\!0.010$ & 0.006 & 0.019 \\ EC & 0.997 & 0.997 & 0.995 & 0.996 & 0.006 & 0.014 \\ ANN & $0.997\!\pm\!0.004$ & ${0.988\!\pm\!0.009}$ & $\mathbf{0.994\!\pm\!0.017}$ & $0.983\!\pm\!0.015$ & 0.012 & 0.043 \\ ERT & $0.997\!\pm\!0.002$ & $\mathbf{0.997\!\pm\!0.003}$ & $0.993\!\pm\!0.006$ & $\mathbf{0.996\!\pm\!0.004}$ & \textbf{0.005} & \textbf{0.019} \\ \hline \hline \textbf{miniJPAS-HSC-SSP} & $AUC_{M+P}$ & $AUC_{P}$ & $AP_{M+P}^{\rm gal}$ & $AP_{P}^{\rm gal}$ & $MS\!E_{M+P}$ & $MS\!E_{P}$ \\ \hline \hline SGLC & 0.970 & -- & 0.992 & -- & 0.040 & -- \\ \texttt{CLASS\_STAR} & 0.956 & -- & 0.991 & -- & 0.053 & -- \\ KNN & $0.950\!\pm\!0.010$ & $0.824\!\pm\!0.023$ & $0.989\!\pm\!0.003$ & $0.959 \!\pm \!0.006$ & 0.053 & 0.098 \\ DT & $0.961\!\pm\!0.009$ & $0.855 \!\pm\!0.017$ & $0.990\!\pm\!0.003$ & $0.959 \!\pm \!0.007$ & 0.061 & 0.132 \\ RF & $0.978\!\pm\!0.005$ & $\mathbf{0.938\!\pm\!0.007}$ & $0.995\!\pm\!0.002$ & $\mathbf{0.986 \!\pm \!0.002}$ & 0.032 & 0.054 \\ EC & 0.979 & 0.967 & 0.996 & 0.993 & 0.031 & 0.040 \\ ANN & $0.970\!\pm\!0.007$ & $0.885\!\pm\!0.014$ & $0.993\!\pm\!0.003$ & $0.969 \!\pm \!0.005$ & 0.036 & 0.070 \\ ERT & $\mathbf{0.979\!\pm\!0.006}$ & $0.931\!\pm\!0.006$ & $\mathbf{0.995\!\pm\!0.002}$ & $0.982 \!\pm \!0.002$ & \textbf{0.032} & \textbf{0.053} \\ \hline \end{tabular} \end{table*} We now present our results for the 
algorithms introduced in Section~\ref{mlsec} applied to the crossmatched catalogs described in Section~\ref{xmatch}. Regarding star and galaxy number counts we refer the reader to the miniJPAS presentation paper~\citep{Bonoli:2020ciz}. \subsection{miniJPAS-SDSS catalog} The performance of the star/galaxy classifiers considered in this paper for the miniJPAS catalog crossmatched with the SDSS catalog in the magnitude interval $15\le r \le20$ is excellent. The results are summarized in Table~\ref{table_foda}, where the best results are marked in bold (EC is not considered as it is not a pure ML classifier).% \footnote{We omit the corresponding figures as they are not informative given the excellent performance.} The errors on the pure-ML classifiers are estimated via $k$-fold cross-validation. In order to assess the importance of photometric bands and morphological parameters, the analysis considers two cases: only photometric bands ($P$ subscript in the table) and photometric bands together with morphological parameters ($M+P$ subscript in the table). Note that this distinction does not apply to SGLC and \texttt{CLASS\_STAR} as they always include the use of morphological parameters. Regarding the analysis with photometric bands only, the best ML methods are RF and ERT, showing the power of combining several trees when making a prediction. Remarkably, using only photometric information, RF and ERT outperform SGLC and \texttt{CLASS\_STAR}. If we now add morphological information, the almost perfect performance of RF and ERT does not improve, showing again that, in this magnitude range, photometric information is sufficient. In Table~\ref{table_foda} we also show the $MS\!E$, whose results agree with the ones from the ROC and purity curves. Another way to qualitatively analyze the performance of a classifier is via a color-color diagram for objects classified as stars ($p \le p_{\rm cut}=0.5$).
Figure \ref{locus_photo_SDSS} shows the stellar locus in the $g-r$ versus $r-i$ color space. The blue line is a fifth-degree polynomial fit, based on miniJPAS data that were classified as stars by SDSS. The various markers represent the averages of each classifier in different bins. We observe a small dispersion around the curve, which decreases when morphological parameters are included. This indicates that the classifiers and the classification from SDSS are in good agreement. \begin{figure} \centering \includegraphics[width=.98 \columnwidth]{stellar_locus_photo_SDSS} \includegraphics[width=.98 \columnwidth]{stellar_locus_morpho_SDSS} \caption{The blue small dots represent the stellar locus for the objects classified as stars ($p\le p_{\rm cut}=0.5$) of the miniJPAS catalog crossmatched with the SDSS catalog in the magnitude interval $15\le r \le20$. The dashed line represents a polynomial fit to the stellar locus. The top panel is relative to the analysis that uses only photometric bands, while the bottom panel is relative to the analysis that also uses morphological information. The colored larger symbols represent the mean stellar locus provided by the different ML models. For comparison, the classification by \texttt{CLASS\_STAR} and SGLC, which always use morphological parameters, is also shown.\label{locus_photo_SDSS}} \end{figure} \subsection{miniJPAS-HSC-SSP catalog} \label{hscresu} \begin{figure*} \centering \includegraphics[width=.98 \columnwidth]{photo_roc_HSC} \includegraphics[width=.98 \columnwidth]{photo_galaxy_precision_recall_HSC} \includegraphics[width=.98 \columnwidth]{morpho_roc_HSC} \includegraphics[width=.98 \columnwidth]{morpho_galaxy_precision_recall_HSC} \caption{ ROC curves (left panels) and purity curves for galaxies (right panels) for the classifiers considered in this paper for the miniJPAS catalog crossmatched with the HSC-SSP catalog in the magnitude interval $18.5\le r \le23.5$.
The top panels are relative to the analysis that uses only photometric bands while the bottom panels to the analysis that uses photometric bands and morphological parameters. For comparison, the classification by \texttt{CLASS\_STAR} and SGLC, which always use morphological parameters, is also shown. Note that the axes ranges are varied in order to better show the curves. The results are summarized in Table~\ref{table_foda} (bottom).\label{roc_hsc_morpho}} \end{figure*} As shown in the previous Section, star/galaxy classification in the range $15\le r \le20$ is not problematic. However, the scenario changes when one moves to fainter magnitudes. As the amount of light decreases, with less information reaching the telescope, the performance of the algorithms decreases to the point that it is important to look for alternative solutions such as ML. Here, we present the analysis of the previous Section applied to the miniJPAS catalog crossmatched with the HSC-SSP catalog in the magnitude interval $18.5\le r \le23.5$. Figure~\ref{roc_hsc_morpho} and Table~\ref{table_foda} show the results. Using photometric information only, the RF algorithm achieves the remarkable score of $AUC=0.938$. Although it performs worse than SGLC and \texttt{CLASS\_STAR} (which use morphology), this result shows that ML has the potential to identify compact galaxies, which share the same morphology as stars. Also, it has been argued that models that use just photometry can classify QSOs as extragalactic objects better than models that use morphological parameters \citep{2019arXiv190908626C}. The use of the morphological parameters improves the performance of the ML methods to the point that ERT and RF perform better than \texttt{CLASS\_STAR} and SGLC. In Appendix~\ref{aegis1} we repeat the analysis of Figure~\ref{roc_hsc_morpho} for the mJP-AEGIS1 field, which is the miniJPAS pointing with the best point spread function (PSF).
It is interesting to note that, although the classifiers feature lower $AUC$ and higher $MS\!E$ values as compared to the analyses of the previous Section, the $AP$ values reach similar levels, even when we use only photometric bands. This is due to this dataset having many more galaxies, with only 15.3\% stars. Therefore, even if there is contamination by stars, the impact is lower. Finally, in Figure~\ref{locus_photo_HSC-SSP} we show the stellar locus. We can observe a greater dispersion as compared with Figure~\ref{locus_photo_SDSS}, especially when we use only photometric bands in the analysis. Nevertheless, the ML methods return the correct shape of the stellar locus and their performance is similar to the one by SGLC. \begin{figure} \centering \includegraphics[width= \columnwidth]{stellar_locus_photo_HSC} \includegraphics[width= \columnwidth]{stellar_locus_morpho_HSC} \caption{ The blue small dots represent the stellar locus for the objects classified as stars ($p\le p_{\rm cut}=0.5$) of the miniJPAS catalog crossmatched with the HSC-SSP catalog in the magnitude interval $18.5\le r \le23.5$. The dashed line represents a polynomial fit to the stellar locus. The top panel is relative to the analysis that uses only photometric bands, while the bottom panel is relative to the analysis that also uses morphological information. The colored larger symbols represent the mean stellar locus provided by the different ML models. For comparison, the classification by \texttt{CLASS\_STAR} and SGLC, which always use morphological parameters, is also shown.} \label{locus_photo_HSC-SSP} \end{figure} \subsection{Value added catalog} \label{vac} The ultimate goal of this work is to release a value added catalog with our best alternative classification. In the previous Section we studied star/galaxy classification in the (partially overlapping) magnitude ranges $15\le r \le20$ and $18.5\le r \le23.5$.
Here, in order to have a uniform dependence on $p_{\rm cut}$, we wish to produce a catalog that is obtained using a single classifier. As seen in Section~\ref{xmatch}, in the magnitude range $18.5\le r \le20$, the classification by HSC-SSP is more reliable than the one by SDSS. Therefore, we consider the classification by SDSS in the range $15\le r <18.5$ and the one by HSC-SSP in the range $18.5\le r \le23.5$. This catalog spans the magnitude range $15\le r \le 23.5$ and features a total of 11763 sources, 9517 galaxies and 2246 stars. We call it the XMATCH catalog. Next, we train and test all the models on this catalog. Using only photometric information, the best classifier is RF, which reaches $AUC=0.957 \pm 0.008$, close to the performance of SGLC, which uses morphological information. Using photometric and morphological information, the best classifier is ERT, which, with $AUC=0.986\pm 0.005$, outperforms SGLC.\footnote{In Appendix~\ref{morphology_analysis} we show the analysis that considers only morphological information. We find that RF and ANN yield $AUC=0.970 \pm 0.006$.} Figure~\ref{cROCK} shows the ROC curve and the purity curve for galaxies and stars for the three classifiers above, with the probability threshold $p_{\rm cut}$ indicated via color coding. These plots are meant to help choose the probability threshold that best satisfies one's completeness and purity requirements (see also Appendix~\ref{histo}). These plots were made with the code available at \href{https://github.com/PedroBaqui/minijpas-astroclass}{github.com/PedroBaqui/minijpas-astroclass}. As shown in the bottom panel of Figure~\ref{cROCK}, the AP of stars is quite good (and significantly better than that of SGLC), showing that the use of an unbalanced training set did not affect the results for the least represented class.
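The trade-off encoded by these curves can be made explicit in a few lines. The sketch below computes the purity (precision) and completeness (recall) of the galaxy sample for two hypothetical choices of $p_{\rm cut}$, using invented classifier scores rather than actual miniJPAS outputs.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical classifier outputs: p = P(galaxy); labels 1 = galaxy, 0 = star.
y = rng.integers(0, 2, 5000)
p = np.clip(rng.normal(0.25 + 0.5 * y, 0.2, 5000), 0.0, 1.0)

def purity_completeness(p, y, p_cut):
    """Purity (precision) and completeness (recall) of the galaxy sample
    selected by p >= p_cut."""
    sel = p >= p_cut
    purity = y[sel].mean()             # fraction of selected objects that are galaxies
    completeness = sel[y == 1].mean()  # fraction of galaxies that are selected
    return purity, completeness

# A stricter cut yields a purer but less complete galaxy sample.
pur_lo, com_lo = purity_completeness(p, y, 0.5)
pur_hi, com_hi = purity_completeness(p, y, 0.8)
```

Scanning $p_{\rm cut}$ over a grid of values in this way traces out exactly the kind of purity curves shown in the figure.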
Finally, we show in Figure~\ref{fixcomp} the cumulative purity of the galaxy and star samples as a function of $r$ magnitude for a fixed completeness of 95\% and 99\%, which are achieved by choosing a suitable $p_{\rm cut}$. For a completeness of 95\% and the ERT classifier, the purity of the galaxy sample remains higher than 99\% throughout the magnitude range, better than SGLC. Regarding stars, for a completeness of 95\% and ERT, purity remains higher than 90\% for $r<22.5$. For fainter stars, ERT outperforms SGLC. In order to build our catalog, we applied our two best classifiers (RF without morphology and ERT with morphology) to the 29551 miniJPAS sources in the magnitude range $15 \le r \le 23.5$. It is important to note that, given the completeness of miniJPAS \citep[see][]{Bonoli:2020ciz}, sources outside this magnitude interval are less likely to enter scientific studies. The catalog is publicly available at \href{https://j-pas.org/datareleases}{j-pas.org/datareleases} via the ADQL table \texttt{minijpas.StarGalClass}. See Appendix~\ref{adql} for more information and an ADQL query example. \begin{figure} \centering \includegraphics[width= \columnwidth]{fusion_roc_hsc_colored} \includegraphics[width= \columnwidth]{fusion_galaxy_precision_recall_colored} \includegraphics[width= \columnwidth]{xmatch_star_precision_recall_colored} \caption{ROC curve (top panel) and purity curve for galaxies (middle panel) and stars (bottom panel) for RF (no morphology), ERT (with morphology) and SGLC for sources in the magnitude range $15 \le r \le 23.5$. The color coding indicates the probability threshold $p_{\rm cut}$.
Note that the axis ranges are varied in order to better show the curves.} \label{cROCK} \end{figure} \begin{figure} \centering \includegraphics[width= \columnwidth]{purity_complet_xmatch} \caption{Cumulative purity of the galaxy (top) and star (bottom) samples as a function of magnitude for the ML classifiers of Fig.~\ref{cROCK}, for a fixed completeness of 95\% (solid line) and 99\% (dashed line).} \label{fixcomp} \end{figure} \subsection{Feature importance} \label{featureimpo} We use the RF algorithm (see Eq.~\ref{impo}) to assess feature importance, which can give us insight into how objects are classified. The 15 most important features are listed in Table~\ref{tab:f-sdss}. The full tables are provided as machine readable supplementary material. When including morphological parameters, FWHM is the most important feature. This agrees with the distributions of FWHM in Figs.~\ref{fig:mor-sdss} and \ref{fig:mor-hsc}, which show a good separation between stars and galaxies. Although this separation is less evident for the other parameters, they also contribute to the classification. In particular, the mean PSF is the fourth most important feature, while the least important morphological feature is the ellipticity parameter $A/B$. To some extent, these results could depend on the choice of the impurity function (see Eq.~\eqref{inpu}). We tested different impurity functions and confirmed that morphological parameters are generally more important than photometric bands. When using photometric information only, the importance of the features is more evenly distributed, as more features work together towards object classification. In particular, broad bands are not necessarily more important than narrow bands, and errors (the width of the distribution) are as important as the measurements (the central value of the distribution). In other words, the full characterization of the measurement seems to be important.
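The impurity-based importance used here corresponds to what scikit-learn exposes as \texttt{feature\_importances\_} (mean decrease in impurity). A minimal sketch on synthetic data with one dominant, FWHM-like feature shows how the normalization to the best feature can be computed; the feature names and distributions below are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 3000
y = rng.integers(0, 2, n)

# Hypothetical features: one dominant FWHM-like separator plus weaker ones.
X = np.column_stack([
    rng.normal(2.0 * y, 0.5, n),   # "FWHM": strong star/galaxy separator
    rng.normal(0.3 * y, 1.0, n),   # "c_r": weakly informative
    rng.normal(0.0, 1.0, n),       # "noise": uninformative
])
names = ["FWHM", "c_r", "noise"]

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
imp = clf.feature_importances_   # mean decrease in impurity, sums to 1
rel = imp / imp.max()            # normalized to the best feature
ranking = sorted(zip(names, rel), key=lambda t: -t[1])
```

The `rel` values play the role of the normalized importances tabulated in the paper, with the dominant feature pinned to 1 by construction.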
\begin{figure} \centering \includegraphics[width= \columnwidth]{importance_xmatch} \caption{ Top: The shaded area represents the relative importance (see Eq.~\ref{impo}) of the narrow-band filters as a function of the filters' wavelength for the analysis relative to the full magnitude range $15\le r \le 23.5$ (see Section~\ref{vac}). The importance of the 4 broad-band filters is shown using black circles. The red and blue lines show the average photo-spectrum of stars and galaxies, respectively. Bottom: as the top panel but for the relative importance of the magnitude errors. The green line shows the percentage of missing values (magnitude of 99) for the narrow band filters. \label{features}} \end{figure} In order to get a physical insight into the regions of the spectrum that matter most for classification, we show in Figure~\ref{features} (top) the relative importance of the filters' magnitudes as a function of the filters' wavelength, together with the median star and galaxy photo-spectrum. It is clear that there are regions systematically more important than others (neighboring filters with higher importance) and that there is a correlation between the most important regions and the average features in the spectra. In the bottom panel of Figure~\ref{features} we show the importance of the magnitude errors, which also exhibit regions that are systematically more important than others. Particularly important is the error on the $i$ band. In the same panel we also show the fraction of missing values (magnitude of 99) for each narrow band filter. We can see that this fraction anti-correlates with the filter importance (top panel). \setlength{\tabcolsep}{10pt} \renewcommand{\arraystretch}{1.2} \begin{table} \caption{Feature importance with ($P+M$) and without ($P$) morphological parameters for the analysis relative to the full crossmatched catalog XMATCH ($15\le r \le 23.5$, see Section~\ref{vac}). The importance is normalized relative to the best feature.
The quantity max/ap3 is \texttt{MU\_MAX/MAG\_APER\_3\_0}. The full tables are provided as machine readable supplementary material. See also Figure~\ref{features}.} \centering \begin{tabular}{p{1cm}l|p{1cm}l} \hline \hline \multicolumn{2}{c}{XMATCH ($P$)} & \multicolumn{2}{c}{XMATCH ($P+M$)} \\ \hline Feature & Importance & Feature & Importance \\ \hline iSDSSerr & 1.00 & FWHM & 1.00 \\ J0810err & 0.31 & $c_r$ & 0.30 \\ J0390 & 0.22 & max/ap3 & 0.18 \\ J0460err & 0.18 & \textrm{PSF} & 0.10 \\ J0680 & 0.18 & iSDSSerr & 0.08 \\ rSDSSerr & 0.14 & J0820err & 0.02 \\ J1007err & 0.12 & J0390err & 0.02 \\ J0820err & 0.09 & A/B & 0.01 \\ gSDSSerr & 0.09 & J1007err & 0.01 \\ iSDSS & 0.08 & J0810err & 0.01 \\ J0720 & 0.08 & J0390 & 0.01 \\ J0660err & 0.07 & gSDSS & 0.009 \\ uJAVA & 0.05 & uJAVAerr & 0.008 \\ J1007 & 0.05 & J0790err & 0.008 \\ uJPAS & 0.05 & J0680 & 0.007 \\ ... &... &... &... \\ \hline \hline \end{tabular} \label{tab:f-sdss} \end{table} \subsection{Transmission curve variability} \label{trans} The transmission curves of the narrow band filters vary with the position within the filter area. In particular, the transmission curve variability depends on the SED of each object, so that the map of relative variation in flux for a given filter is different for objects with different SEDs. This effect should affect classifications that depend strongly on particular narrow spectral features (even more so if they fall on one of the edges of the narrow-band transmission curve) and would have almost no effect when considering mainly the continuum. As we use photometric data, our results could be impacted by this effect. miniJPAS data, in particular the size of the XMATCH catalog, does not allow us to perform a thorough investigation of this effect. Therefore, we explore the issue by dividing the test set into the 4 quadrants of the filter area and computing the $AUC$ for each quadrant.
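A minimal sketch of such a quadrant check, using invented positions and classifier scores on a $9000\times9000$ pixel grid (the real test uses the miniJPAS \texttt{X\_IMAGE} and \texttt{Y\_IMAGE} pixel coordinates). The $AUC$ is computed directly from the Mann-Whitney U statistic to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8000
# Hypothetical test set: pixel positions, true labels (1 = galaxy) and
# classifier scores. These are synthetic, not miniJPAS measurements.
x_img = rng.uniform(0, 9000, n)
y_img = rng.uniform(0, 9000, n)
label = rng.integers(0, 2, n)
score = rng.normal(0.3 + 0.4 * label, 0.2, n)

def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# AUC in each of the 4 quadrants of the (hypothetical) filter area.
aucs = {}
for x_name, x_sel in (("X<4500", x_img < 4500), ("X>=4500", x_img >= 4500)):
    for y_name, y_sel in (("Y<4500", y_img < 4500), ("Y>=4500", y_img >= 4500)):
        sel = x_sel & y_sel
        aucs[(x_name, y_name)] = auc(label[sel], score[sel])
```

If the classifier is insensitive to position on the detector, the four $AUC$ values should agree to within sampling fluctuations, which is the criterion applied in the table below.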
The filter coordinates are given in pixels via the \texttt{X\_IMAGE} and \texttt{Y\_IMAGE} variables ($9000 \times 9000$ pixels). As can be seen from Table~\ref{tab:trans}, the $AUC$ variation is compatible with the overall performance of $AUC=0.957 \pm 0.008$ (RF) and $AUC=0.986\pm 0.005$ (ERT), showing that the effect should not strongly bias our results. \setlength{\tabcolsep}{10pt} \renewcommand{\arraystretch}{1.2} \begin{table} \caption{Area under the curve ($AUC$) for the 4 filter quadrants relative to the best classifiers shown in Figure~\ref{cROCK}.} \centering \begin{tabular}{c|cc} \hline \hline RF ($P$) & $\texttt{X} <4500$ & $4500\le \texttt{X} \le 9000$ \\ \hline $\texttt{Y} <4500$ & 0.9633 & 0.9592 \\ $4500\le \texttt{Y} \le9000$ & 0.9449 & 0.9588 \\ \hline \hline ERT ($P+M$) & $\texttt{X} <4500$ & $4500\le \texttt{X} \le 9000$ \\ \hline $\texttt{Y} <4500$ & 0.9917 & 0.9775 \\ $4500\le \texttt{Y} \le9000$ & 0.9822 & 0.9938 \\ \hline \hline \end{tabular} \label{tab:trans} \end{table} \section{Conclusions} \label{conclusions} In this work we applied different machine learning methods to the classification of miniJPAS sources. The goal was to build models that are competitive with and complementary to those adopted in the miniJPAS 2019 public data release, and to offer the astronomical community a value added catalog with an alternative classification. As we considered supervised ML algorithms, we classified the miniJPAS objects that are in common with SDSS and HSC-SSP, whose classifications are trustworthy within the magnitude intervals $15\le r \le20$ and $18.5\le r \le23.5$, respectively. We used as input the magnitudes associated with the 60 filters along with their errors, 4 morphological parameters and the mean PSF of the pointings. The output of the algorithms is probabilistic. We tested K-Nearest Neighbors, Decision Trees, Random Forest, Artificial Neural Networks, Extremely Randomized Trees and an Ensemble Classifier.
Our results show that ML is able to classify objects into stars and galaxies without the use of morphological parameters. This makes ML classifiers quite valuable, as they can distinguish compact galaxies from stars, unlike methods that necessarily use morphological parameters in the classification process. Of course, the inclusion of morphological parameters improves the results to the point that ERT can outperform \texttt{CLASS\_STAR} and SGLC (the default classifier in J-PAS). We used the RF algorithm to assess feature importance. When using morphological parameters, FWHM is the most important feature. When using photometric information only, we observe that broad bands are not necessarily more important than narrow bands, and errors (the width of the distribution) are as important as the measurements (the central value of the distribution). In other words, the full characterization of the measurement seems to be important. We have also shown that ML can give meaningful insights into the regions of the spectrum that matter most for classification. After having validated our methods, we applied our best classifiers, with and without morphology, to the full dataset. This classification is available as a value added catalog at \href{https://j-pas.org/datareleases}{j-pas.org/datareleases} via the ADQL table \texttt{minijpas.StarGalClass}. The ML models are available at \href{https://github.com/J-PAS-collaboration/StarGalClass-MachineLearning}{github.com/J-PAS-collaboration/StarGalClass-MachineLearning}. Our catalog both validates the quality of SGLC and provides an independent classification that can be used to test the robustness of subsequent scientific analyses. In particular, our classification uses the full photometric information, with and without morphology, which is important for faint galaxies, whose morphology is similar to that of stars.
We conclude by stressing that our methodology can be further improved both at the algorithmic and at the data input level. A promising avenue is the direct use of the object images with convolutional neural networks. This approach has the potential to outperform presently available classifiers. \begin{acknowledgements} POB thanks, for financial support, the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. VM thanks CNPq (Brazil) and FAPES (Brazil) for partial financial support. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 888258. LADG is supported by the Ministry of Science and Technology of Taiwan (grant MOST 106-2628-M-001-003-MY3), and by the Academia Sinica (grant AS-IA-107-M01). ES has been partly supported by the Spanish State Research Agency (AEI) Projects AYA2017-84089 and MDM-2017-0737 at Centro de Astrobiología (CSIC-INTA), Unidad de Excelencia María de Maeztu. MQ is supported by the Brazilian research agencies CNPq and FAPERJ. RGD acknowledges financial support from the State Agency for Research of the Spanish MCIU through the ``Center of Excellence Severo Ochoa'' award to the Instituto de Astrofísica de Andalucía (SEV-2017-0709) and through the projects AyA2016-77846-P and PID2019-109067GB-100. LS acknowledges support from Brazilian agencies CNPq (grant 304819/2017-4) and FAPESP (grant 2012/00800-4).\\ This work made use of the Virgo Cluster at Cosmo-ufes/UFES, which is funded by FAPES and administrated by Renan Alves de Oliveira.\\ Based on observations made with the JST/T250 telescope and PathFinder camera for the miniJPAS project at the Observatorio Astrof\'{\i}sico de Javalambre (OAJ), in Teruel, owned, managed, and operated by the Centro de Estudios de F\'{\i}sica del Cosmos de Arag\'on (CEFCA).
We acknowledge the OAJ Data Processing and Archiving Unit (UPAD) for reducing and calibrating the OAJ data used in this work.\\ Funding for OAJ, UPAD, and CEFCA has been provided by the Governments of Spain and Arag\'on through the Fondo de Inversiones de Teruel; the Arag\'on Government through the Research Groups E96, E103, and E16\_17R; the Spanish Ministry of Science, Innovation and Universities (MCIU/AEI/FEDER, UE) with grant PGC2018-097585-B-C21; the Spanish Ministry of Economy and Competitiveness (MINECO/FEDER, UE) under AYA2015-66211-C2-1-P, AYA2015-66211-C2-2, AYA2012-30789, and ICTS-2009-14; and European FEDER funding (FCDD10-4E-867, FCDD13-4E-2685).\\ Based on data from the ALHAMBRA Data Access Service at CAB (CSIC-INTA).\\ Funding for the DEEP2 Galaxy Redshift Survey has been provided by NSF grants AST-95-09298, AST-0071048, AST-0507428, and AST-0507483 as well as NASA LTSA grant NNG04GC89G.\\ The Hyper Suprime-Cam (HSC) collaboration includes the astronomical communities of Japan and Taiwan, and Princeton University. The HSC instrumentation and software were developed by the National Astronomical Observatory of Japan (NAOJ), the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), the University of Tokyo, the High Energy Accelerator Research Organization (KEK), the Academia Sinica Institute for Astronomy and Astrophysics in Taiwan (ASIAA), and Princeton University. Funding was contributed by the FIRST program from the Japanese Cabinet Office, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Japan Society for the Promotion of Science (JSPS), Japan Science and Technology Agency (JST), the Toray Science Foundation, NAOJ, Kavli IPMU, KEK, ASIAA, and Princeton University.\\ This paper makes use of software developed for the Large Synoptic Survey Telescope.
We thank the LSST Project for making their code available as free software at \href{http://dm.lsst.org}{dm.lsst.org}.\\ This paper is based [in part] on data collected at the Subaru Telescope and retrieved from the HSC data archive system, which is operated by Subaru Telescope and Astronomy Data Center (ADC) at National Astronomical Observatory of Japan. Data analysis was in part carried out with the cooperation of Center for Computational Astrophysics (CfCA), National Astronomical Observatory of Japan.\\ Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III website is sdss3.org. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University. \end{acknowledgements} \bibliographystyle{aaArxivDoi}
\section*{Author Summary} Gene activation is an inherently random process because numerous diffusing proteins and DNA must first interact by random association before transcription can begin. For many genes the necessary protein--DNA associations only begin after activation, but it has recently been noted that a large class of genes in multicellular organisms can assemble the initiation complex of proteins on the core promoter prior to activation. For these genes, activation merely releases polymerase from the preassembled complex to transcribe the gene. It has been proposed on the basis of experiments that such a mechanism, while possibly costly, increases both the speed and the synchrony of the process of gene transcription. We study a realistic model of gene transcription, and show that this conclusion holds for all but a tiny fraction of the space of physical rate parameters that govern the process. The improved control of cell-to-cell variations afforded by regulation through a paused polymerase may help multicellular organisms achieve the high degree of coordination required for development. Our approach has also generated tools with which one can study the effects of analogous changes in other molecular networks and determine the relative importance of various molecular binding rates to particular system properties. \section*{Introduction} Investigations in yeast \cite{keaveney1998,ptashne1997} led to the hypothesis that in most organisms the recruitment of polymerase to the promoter is the primary regulated step in the activation of gene expression \cite{juvengershon2008,Margaritis2008,Gilmour2009,Chiba2010}. However, recent studies of multicellular organisms have revealed a diverse array of other regulatory strategies, including several types of post-initiation regulation \cite{zeitlinger2007,muse2007,hargreaves2009}. 
Zeitlinger et al.\ \cite{zeitlinger2007} generated tissue-specific whole-genome polymerase binding data in {\it Drosophila} and showed that regulation of polymerase release from the promoter is widespread during development. Their data shows that some 15\% of tissue-specific genes bind polymerase to their promoters in \emph{all} tissues, even though each gene only allows polymerase to proceed through the coding sequence in a specific tissue (see Supplemental Figure S1). Differential expression of these genes is made possible by a {\em paused state} wherein a polymerase remains stably bound but precisely stopped a short distance from the promoter and awaits a regulated release that is only triggered in the appropriate tissue \cite{zeitlinger2007}. Finally, many metazoa have been shown to have, genome-wide, disproportionate amounts of polymerase bound at promoter regions as compared to coding regions \cite{core2008,guenther2007,muse2007,zeitlinger2007}. This mechanism has been called {\em promoter proximal pausing}. It should not be confused with the stochastic stalling of a polymerase as it transcribes, a phenomenon which has also been termed ``polymerase pausing''. Furthermore, there are distinctions to be made between: {\bf stalled polymerase}, a polymerase which associates in a transient, unstable manner with the promoter but does not proceed into productive transcription; {\bf poised polymerase}, a polymerase for which the association is stable but has not escaped from the promoter to begin transcription; and {\bf promoter proximal paused polymerase}, a polymerase that completely escapes from the promoter but ``pauses'' in a stable, inducible state just downstream of the promoter. 
It is believed that most genes which have polymerase bound to their promoters in all tissues but are expressed in only some tissues fall into the last category; this promoter proximal accumulation of Pol II may indicate that regulation of pausing transitions is a general feature of metazoan transcriptional control. We remind the reader that a gene need not use the paused state as a waiting step at which to integrate regulatory information in order to be termed a paused gene, as even constitutive house-keeping genes may be denoted as paused \cite{gilchrist2010pausing}. In this study we will be interested only in the elongation-regulated subset of paused genes. For further discussion of terminology and assays which distinguish these conditions, see the Supporting Information. It remains an open question why expression of some genes is controlled further downstream than others. Several groups have postulated that pausing may ready a polymerase for rapid induction \cite{core2008,muse2007,hendrix2008}. (Here {\em induction} refers to the first time at which all the components required for expression of a particular gene become available, and {\em expression} is when transcription of the first nascent mRNA transcript begins.) To motivate this idea, the preloaded, paused polymerase is described as a ``loaded gun'' ready to shoot off a single transcript as soon as it is induced. Experiments with heat shock genes -- the first class of genes for which paused promoters were identified -- show evidence of rapid induction consistent with this idea \cite{yao2007,rasmussen1993}. However, pre-loading only provides an argument for why the {\em first} transcript would be produced more quickly. Surprisingly then, it was also observed by Yao et al.\ \cite{yao2007} that {\em subsequent} polymerases are recruited rapidly to promoters of induced, elongation-regulated genes as well as the first, preloaded Pol II -- a phenomenon not accounted for by the loaded gun metaphor.
Since most genes must be transcribed several times in order to produce functional levels of mRNA, changes in speed of induction as a whole are likely to be of more physiological consequence than changes in the time at which the first, pre-paused transcript releases. When whole-genome studies extended the observation of pausing to cover many key developmental regulatory genes \cite{zeitlinger2007}, further questions arose. While the selective advantage of rapid induction is reasonably apparent for stress response genes, it is harder to explain why rapid induction would be selected for in so many developmental transcription factors and signaling pathway components. An additional hypothesis, suggested by Boettiger and Levine \cite{boettiger2009synchronous}, is that regulation of transcriptional elongation (for instance, by promoter proximal polymerase pausing) may have evolved to ensure more coordinated expression across populations of cells. This hypothesis was motivated by the striking correspondence between genes shown experimentally to activate in a synchronous fashion and genes shown to bind polymerase at the promoter independent of activator state but not continue elongation until activator arrival. Recent work by Darzacq and colleagues \cite{darzacq2007} provides insight into why a regulatory interaction downstream of transcriptional pre-initiation complex (PIC) assembly may lead to more coordinated gene expression than does regulation upstream of PIC assembly. Using fluorescently tagged transcription components, they demonstrated that transcriptional initiation is a highly variable process, with only about one in ninety Pol II--gene interactions leading all the way to productive mRNA elongation \cite{darzacq2007}. Nonproductive interactions each lasted between several seconds and a minute, suggesting that abortion of transcriptional initiation can occur at different stages in assembly of the complex. 
Regulatory interactions that occur after this noisy assembly process would act only on transcriptionally competent polymerases, and so this mechanism might result in more synchronous expression -- a hypothesis we test here. The idea that gene expression itself is intrinsically variable (rather than variable as a result of extrinsic fluctuations in upstream quantities) is well established and is a recent focus of theoretical and experimental interest -- see \cite{raj2008review} and \cite{raj2009singlemolecule} for reviews. Stochasticity can arise at many stages of the process, including from the diffusion of molecules in the cell \cite{vanZon2006diffusion}, noisy gene regulation \cite{peccoud1995markovian}, chromatin and other conformal rearrangements \cite{degenhardt2009populationlevel}, random events during elongation \cite{rajala2010transcriptionalpausing,ribeiro2009delayed}, and random dynamics of translation and degradation of mRNA and proteins \cite{ribeiro2010stochastic}. Populations of single-celled organisms have been shown to take advantage of noisy gene expression to achieve clonal yet phenotypically heterogeneous populations \cite{maamar2007}. In metazoan development, however, proper growth and development generally relies on coordination and synchrony rather than stochastic switching. For example, certain cells in the Drosophila embryo are induced to become neurons if they are next to a mesoderm cell but not mesoderm themselves \cite{derenzis2006}, so uneven activation of mesoderm fate could produce early patches of mesoderm, thereby improperly inducing neuronal development in neighboring tissue. Although synchronous behavior is important for metazoa, particularly in development, it is not a universal property of all metazoan genes. For instance, genes with both synchronous and very stochastic patterns of induction have been observed in the Drosophila embryo \cite{boettiger2009synchronous}. 
The unique challenges of coordinating the behavior of a large number of independent cells may explain why elongation regulation aimed at release from a paused state appears to be much more dominant among metazoa like {\it D.\ melanogaster} and humans than among \emph{E.\ coli} or \emph{S.\ cerevisiae}. Here we investigate mathematically whether the significant change in the coordination of expression observed in experiment \cite{boettiger2009synchronous} can be explained by a change in the regulation network topology which only affects whether regulation occurs before or after PIC assembly, while keeping other details (reactions and rates) of the PIC assembly process the same. We also seek to determine which interactions in the transcriptional pathway are most important for determining the coordination of expression, and what effect different topologies have on the speed of induction and on the variability between sister cells in the total number of mRNAs synthesized. We do this by constructing continuous-time Markov chain models of PIC assembly with states that correspond to joint configurations of the promoter and the enhancer. The (random) time taken for the chain to pass from a ``start'' state to an ``end'' state corresponds to the elapsed time between successive transcription events. The models we construct for the two different modes of regulation have a common set of transition rates, but the particular mode of regulation dictates that certain transitions are disallowed, resulting in two chains with different sets of states accessible from the ``start'' state. We describe this situation by saying that each model is a {\em topological rearrangement} of the other. Because the same set of transition rates completely parametrizes both chains (see figure \ref{fig:markovmodel}), we can make meaningful comparisons between the two models.
Once the Markov chains are constructed, we use the Feynman--Kac formula \cite{fitzsimmons1999fk}, model-specific decomposition techniques and computer algebra to find symbolic expressions for features of these first passage times that correspond to the delay between induction and transcription. \begin{figure}[!ht] \begin{center} \includegraphics[width=\textwidth]{simple-model-results} \end{center} \caption{ {\bf From regulatory mechanism to Markov Chain:} {\bf (A)} Schematics of two simplified models for initiation regulation (IR) and elongation regulation (ER). Transcription is represented in 4 steps: (1) naked DNA, (2) DNA-polymerase complex, (3) actively transcribing polymerase, and (4) completed mRNA. The enhancer is either (A) open or (B) bound. The enhancer must be bound (the permissive configuration) for the transcription chain to pass the gated step ($\vdash\!\dashv$), whose identity depends on the model (IR or ER). {\bf (B)} The corresponding Markov chains for each regulation scheme. Colors of arrows denote the transition rates from (A). Note that one set of rate parameters determines all the numerical values for both chains, allowing for a direct test of the effects of topological change. {\bf (C)} Distributions of log ratios of speed ($\mu$), variance of expression time ($\sigma$), and transcript count variability ($\eta$) across 10,000 randomly chosen parameter vectors (as described in the text), showing that ER is faster, less variable, and produces less variability in transcript numbers over most possible combinations of rate parameters for this simple model.}
\label{fig:markovmodel} \end{figure} Although there has recently been much work modeling different sources of stochasticity in gene expression, most models refrain from a detailed representation of the different protein--DNA complexes involved in favor of more abstract approximations \cite{bialek2008cooperativity,pedraza-paulsson,pedraza2005noise, Thattai2001,thattai2002attenuation,tkavcik2008rin,maamar2007}. Two-state ``on--off'' Markov chains have been used many times to model stochasticity in transcription (e.g.\ \cite{peccoud1995markovian,becskei2005lownumber}), and provide analytic solutions. Such models have been used to explain, for instance, the observation that mRNA copy number does not in general follow Poisson statistics, implying that there are ``bursts'' of transcription in some sense. This bursting behavior can occur if the gene transitions between an {\em active} state (in which transcription can occur) and an {\em inactive} state (in which it does not), as shown by Raj et al.\ \cite{raj2006mRNAsynthesis}. Although more complicated Markov chain models have appeared, often presented via a stochastic chemical master equation \cite{samoilov2006deviant}, they are usually simulated rather than studied analytically (see \cite{resat2009kinetic} for a review of methods and software). A notable recent exception is Coulon et al.\ \cite{coulon2010spontaneous}, who use matrix diagonalization to study the power spectrum and other properties of several models of regulation. A complementary set of techniques takes a broader view, using the fluctuation--dissipation theorem to work on the scale of \emph{small} stochastic deviations from the differential equations that capture the average behaviors at equilibrium \cite{bialek2008cooperativity,pedraza-paulsson,pedraza2005noise,thattai2002attenuation,tkavcik2008rin}. We model the intrinsic noise of regulation and polymerase recruitment using biologically-derived Markov chain models.
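The two-state ``on--off'' chains mentioned above are simple enough to simulate directly. The sketch below is a minimal Gillespie simulation of such a telegraph promoter; the rate values are arbitrary illustrative numbers, chosen so that state switching is slow compared to transcription, which makes the transcript arrival times come in bursts.

```python
import numpy as np

def telegraph_gillespie(k_on, k_off, k_tx, t_max, seed=0):
    """Gillespie simulation of a two-state (on/off) promoter.
    Returns the times at which transcripts are produced."""
    rng = np.random.default_rng(seed)
    t, on = 0.0, False
    events = []
    while t < t_max:
        # Total rate out of the current state.
        rate = (k_off + k_tx) if on else k_on
        t += rng.exponential(1.0 / rate)
        if t >= t_max:
            break
        if not on:
            on = True                               # promoter switches on
        elif rng.random() < k_tx / (k_tx + k_off):
            events.append(t)                        # a transcript is made
        else:
            on = False                              # promoter switches off
    return np.array(events)

# Slow switching plus fast transcription gives "bursts" of transcripts.
times = telegraph_gillespie(k_on=0.1, k_off=0.1, k_tx=5.0, t_max=2000.0)
```

With these rates the transcript counts in fixed time windows are strongly overdispersed relative to a Poisson process, the signature of transcriptional bursting discussed above.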
We focus on this particular piece of the larger process of expression in greater detail than previous work, in order to provide a detailed mathematical investigation of the role of promoter-proximal pausing. Unlike simulation methods, our approach provides a tractable way to compute analytic expressions whose interpretation is direct and reliable. Moreover, it does not depend on small-noise or equilibrium assumptions, or require the passage to a continuum limit. Furthermore, the structure of the models we use is determined by biological realism rather than being constrained by mathematical tractability. Our approach is most similar to that of \cite{coulon2010spontaneous}, although our methods are less computationally intensive and produce symbolic expressions which allow us to investigate phenomena in greater depth. In particular, we compare alternate modes of gene regulation and readily evaluate analytically the sensitivity of system properties to changes in rate parameters over a large proportion of parameter space. \section*{Methods} \subsection*{Framework for modeling regulatory interactions} As a prelude to describing the actual Markov chain model of transcriptional regulation we analyze, we describe a general approach to modeling promoters, enhancers and their interactions, and illustrate this approach with a toy model of transcription that is not too cumbersome to draw -- see figure \ref{fig:markovmodel}. We begin with two separate Markov chains, a {\em promoter chain} and an {\em enhancer chain} (figure \ref{fig:markovmodel}A). The states of the promoter chain are the possible configurations of the components involved in polymerase loading onto the promoter (e.g.\ ``naked DNA'' or ``DNA--polymerase complex'') and the allowable transitions correspond to the arrivals of these components, in whichever order is permissible by the underlying biochemistry.
The states of the enhancer chain are the possible configurations of the components involved in enhancer activation (e.g.\ the binding of regulatory transcription factors to the appropriate cis-control sequence for that promoter). Next, to model the regulatory interaction between enhancer and promoter, we designate a particular configuration of the enhancer as the {\em permissive configuration}, and specify a particular transition of the promoter chain as the {\em regulated step}. We require the enhancer chain to be in the permissive configuration for the promoter chain to make the transition through the regulated step, and we assume that the enhancer remains in the permissive configuration as long as the promoter chain is downstream of that step. (The specification that the enhancer remains in the bound/permissive state while the process is downstream of the regulated step is not the only possible choice, but it is perhaps the most realistic.) We choose the regulated step according to the regulation mechanism that we are modeling. The composite stochastic process that records the states of both the promoter and enhancer chains is our resulting Markov chain model of transcription. Varying the regulated step leads to alternative topologies for this chain. We stress that, as we change the choice of regulated step, the underlying promoter and enhancer chains remain the same. In particular, the same set of rate parameters is used in both schemes, and the parameters have the same meaning. This permits meaningful comparison of different methods of regulation. Two possible regulated steps, labeled ``IR gated'' and ``ER gated'', are shown along with the corresponding Markov chains in figure \ref{fig:markovmodel}.
Each possible configuration of the components of the transcription complex and associated enhancer elements is represented by a state of the composite chain, and the composite chain jumps from one state to another when a single molecular binding or unbinding event converts one configuration of complexes into another. For simplicity, we assume that each arrival in the end state allows one transcript to be made. After transcription occurs, the transcription complex may dissociate entirely, returning the chain to its initial state, or it may leave behind a partial {\em scaffold}, returning the composite chain to an intermediate state (and possibly leading to successive rounds of reinitiation and thus a ``burst'' of transcription products -- i.e.\ multiple mRNA molecules being transcribed per promoter opening event). Formally, the general composite Markov chain model is constructed as follows. Consider two promoter configurations, say, $x_i$ and $x_j$, such that a direct transition from the first to the second is possible. Write $r_P(x_i,x_j)$ for the rate at which this transition occurs. For any two promoter configurations for which a direct transition is not possible, we set this rate equal to zero. Similarly, we write $r_E(y_i,y_j)$ for the transition rate from enhancer configuration $y_i$ to enhancer configuration $y_j$. Denote the permissive enhancer configuration by $y_*$. Suppose that the regulated step of the promoter chain is the step from state $x_a$ to state $x_b$. Let $X^*$ be the set of states downstream from $x_b$, i.e.\ those states that can only be reached from the unbound state by passing through $x_b$. 
Then, the composite Markov chain takes values in a set of pairs of configurations $(x,y)$, and it jumps from $(x_i,y_i)$ to $(x_j,y_j)$ at rate $q( (x_i,y_i), (x_j,y_j) )$, defined as follows: \begin{align*} q( (x_i,y_i), (x_j,y_i) ) &= r_P(x_i,x_j), \qquad &\mbox{if} \; (x_i,x_j) \neq (x_a,x_b), \\ q( (x_i,y_i), (x_i,y_j) ) &= r_E(y_i,y_j), \qquad &\mbox{if} \; x_i \notin X^*,\\ q( (x_a,y_i), (x_b,y_i) ) &= 0, \qquad &\mbox{if} \; y_i \neq y_*,\\ q( (x_a,y_*), (x_b,y_*) ) &= r_P(x_a,x_b), \qquad & \end{align*} and $q((x_i,y_i),(x_j,y_j))=0$, otherwise. Denote by $x_e$ the expressing promoter configuration with productively elongating mRNA. We are interested in the passage of the composite Markov chain from certain starting states -- either the state in which both promoter and enhancer are unbound or the state to which the system returns after elongation begins -- to the final, expressing state $(x_e, y_*)$. Depending on which transition is regulated, some pairs of promoter and enhancer configurations will be unreachable from the relevant starting states; these pairs are biochemically inaccessible and are never visited, and so need not appear in our depictions or in our generator matrices (e.g.\ state 2A in the IR-gated model of figure \ref{fig:markovmodel}). Because there are generally only two promoters per gene active at the same time in a given nucleus, binding of a general transcription factor (TF) at one locus does not decrease the total concentration of the TF in the nucleus sufficiently to affect the rate of binding at the homologous locus. Furthermore, since the observed timescales of variability in induction are shorter than the expected timescale for protein translation and folding, we neglect any feedback from mRNA synthesis which might modify the transition rates. This allows us, in particular, to assume that the jump rates of the Markov chain are homogeneous in time. 
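As a concrete illustration, the four displayed rules can be translated directly into code. The sketch below is a minimal Python implementation of our own (the function name, state indexing, and example rates are illustrative, not part of our software); it assembles the composite generator from arbitrary promoter and enhancer rate matrices, a gated step $(x_a,x_b)$, a permissive state $y_*$, and the downstream set $X^*$:

```python
import numpy as np

def composite_generator(rP, rE, gate, y_star, downstream):
    """Composite-chain generator from promoter rates rP, enhancer rates rE,
    the regulated step gate=(x_a, x_b), the permissive enhancer state y_star,
    and the downstream set X* (promoter states past the gate)."""
    nP, nE = rP.shape[0], rE.shape[0]
    Q = np.zeros((nP * nE, nP * nE))
    idx = lambda x, y: x * nE + y
    xa, xb = gate
    for x in range(nP):
        for y in range(nE):
            for x2 in range(nP):      # promoter moves; enhancer state carried along
                if x2 != x and not ((x, x2) == (xa, xb) and y != y_star):
                    Q[idx(x, y), idx(x2, y)] = rP[x, x2]
            if x not in downstream:   # enhancer toggles only upstream of the gate
                for y2 in range(nE):
                    if y2 != y:
                        Q[idx(x, y), idx(x, y2)] = rE[y, y2]
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

# toy example: promoter 0 -> 1 -> 2 (with one back rate), two-state enhancer,
# gate on the 0 -> 1 step, permissive state 1, downstream states {1, 2}
rP = np.array([[0.0, 1.0, 0.0], [0.5, 0.0, 2.0], [0.0, 0.0, 0.0]])
rE = np.array([[0.0, 0.3], [0.7, 0.0]])
Q = composite_generator(rP, rE, gate=(0, 1), y_star=1, downstream={1, 2})
```

Rows of the resulting matrix sum to zero, the gated transition appears only when the enhancer is permissive, and enhancer transitions are switched off in downstream states, mirroring the rates $q$ displayed above.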
\subsection*{Detailed model of transcription} We now apply this framework to examine a model of transcription that is more interesting and detailed than the toy model used above for illustrative purposes. Many general transcription factors (TFs), such as the protein complexes TFIIA, TFIIB, etc., function together in a coordinated fashion to form the pre-initiation complex (PIC) necessary for the proper activation of transcription \cite{hager2009transcription,kornberg2007,thomas2006}. Experiments with fluorescently labeled TFs {\em in vivo} indicate that the components of this complex assemble on the promoter DNA \cite{darzacq2007,sprouse2008} rather than float freely in the nucleoplasm, as had been previously argued \cite{parvin1998}. The steps of PIC assembly are not fully understood \cite{hager2009transcription}, although some important details are known. We analyze the assembly scheme depicted in figure \ref{fig:picassembly}, which is largely consistent with available data. The promoter is recognized by TFIID, the binding of which allows TFIIA and TFIIB to join the complex \cite{thomas2006}. We choose this complex as the first state in our promoter model (state 1 of figure \ref{fig:picassembly}), since it is only just after this step that the regulation method may differ. TFIIB facilitates the recruitment of RNA polymerase II (Pol II) \cite{thomas2006} (state 2). For many non-paused genes, polymerase is only detected in cells that have an activated enhancer (the cis regulatory sequence which controls expression) \cite{zeitlinger2007}. We call these genes {\em initiation regulated} and require that the enhancer reach its permissive state ($B$) before this association can occur. Since Mediator is important for many promoter--enhancer interactions \cite{hager2009transcription,fuda2009}, it has likely also joined the complex prior to polymerase arrival. TFIIE (state 3) and TFIIF (state 4) bind next, possibly in either order.
Once both are bound (state 5), TFIIH must also bind (state 6) before Pol II starts synthesizing RNA and clears the promoter \cite{kornberg2007, hager2009transcription}. TFIIH is displaced upon promoter escape \cite{kornberg2007}, and if Ser 2 of the Pol II tail is not phosphorylated by CDK9 (pTEFb), transcription pauses 40--50 base pairs downstream of the promoter \cite{rasmussen1993,fuda2009,sims2004} (state 7). For elongation regulated genes, it is the release from this paused state that is possible only in the presence of an activated enhancer (permissive state) -- which is generally believed to recruit the necessary CDK9 (and possibly other factors). Phosphorylation of Ser 2 allows the fully competent polymerase to proceed through the gene and produce a complete mRNA (state 8). The transition rates between configurations depend on the energy of association of the bond created and the concentration of the reacting components. \begin{figure}[!ht] \begin{center} \includegraphics[width=4in]{complex-schematic} \end{center} \caption{ {\bf Model of PIC assembly.} Each possible complex in the process is enumerated as a state of the promoter Markov chain (see text for a description of each complex). The promoter chain (states 1--8) is combined with the enhancer chain (states A and B) to make the full 16-state model of transcription. Transitions that in some scheme require an activated enhancer (state B) are indicated by a gate, $\vdash\!\dashv$. Forward rate transitions are in light font and backward transitions in dark font. The $1 \to 2$ transition is regulated in the IR scheme, and the $7 \to 8$ transition is regulated in the ER scheme. } \label{fig:picassembly} \end{figure} Since we are interested in exploring the differences in which step of PIC assembly is regulated and not the different possible modes of enhancer activation, we use a simple abstracted two-state model of enhancer activation.
A single transition switches the enhancer from the inactive state to the permissive state. For instance, a transition to the permissive state could represent the binding of a TF to the enhancer. This is not likely to be completely realistic, but if a particular step in the actual dynamics of transcription factor assembly and enhancer--promoter interaction is rate-limiting (e.g.\ the looping rate between a bound enhancer and its target promoter), then its behavior will be well approximated by our minimal model, with the transition from inactive to permissive occurring at the rate of this limiting step. For many paused genes, it is the phosphorylation event which is believed to be regulated \cite{zeitlinger2007,fuda2009}. However, accumulating data suggest that the molecular identity of the release factors may vary between paused genes. For example, some also require the recruitment of TFIIS in order to escape a ``backtracked'' paused state \cite{adelman2005}. We consider any such regulation by release from pausing after PIC assembly to be {\em elongation regulation} (ER), and any regulation acting upstream of PIC assembly {\em initiation regulation} (IR). Finally, the scaffold of transcriptional machinery that facilitates polymerase binding does not necessarily dissociate when transcription begins. Thus, reinitiation may occur when a new polymerase binds (at step 5); it must still reload TFIIH, which was evicted during promoter escape, in order to proceed to step 6 and so on to step 8. Repeated cycles of reinitiation may lead to a burst of mRNAs synthesized from a single promoter opening event. We denote by $b$ the probability that the scaffold survives to cycle in a new polymerase (see figure \ref{fig:picassembly}). The scaffold breaks down before the next polymerase arrives with probability $1-b$, in which case transcription activation must start again from state 1.
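Under this reinitiation scheme the number of transcripts per promoter opening is geometric: each transcript is followed by a further round with probability $b$, so the mean burst size is $1/(1-b)$. A quick numerical check (a sketch of our own, with arbitrary sample sizes) recovers this mean:

```python
import numpy as np

rng = np.random.default_rng(0)
# burst size per promoter opening: each transcript is followed by scaffold
# survival (probability b) and another round, so the count is geometric on {1,2,...}
means = {}
for b in (0.9, 0.3):
    bursts = rng.geometric(p=1 - b, size=200_000)
    means[b] = bursts.mean()
print(means)   # close to 1/(1-b): 10 for b=0.9, about 1.43 for b=0.3
```

For $b=0.9$ the mean burst size is 10; for $b=0.3$, about 1.4.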
We analyze both the time until the first transcript begins (for which such bursting is irrelevant) and the effect of this partial stability of the scaffold on cell--to--cell variation in total mRNA. Our aim is not to present a definitive model of PIC assembly itself. Rather, we seek to understand the impact of different modes of regulation on a reasonable model that incorporates sufficient detail and to develop tools that can analyze effectively models of this complexity. \subsection*{Statistical Methods} We are interested in the speed and variability of the transcription process, as measured, respectively, by the mean, $\mu_{\tau}$, and variance, $\sigma^2_{\tau}$, of the delay $\tau$ between induction of the gene and expression of the first functional mRNA transcript. (Recall that by {\em induction} we mean the first time at which all the components required for expression of a particular gene become available, and by {\em expression} we mean the time when transcription of the first nascent mRNA transcript begins.) We use the mean delay to explore the hypothesis that the mechanism of elongation regulation is faster than that of initiation regulation, even when there is no polymerase initially bound (as reported in \cite{yao2007}). The variance of the delay is related to the degree of synchrony of expression of the first transcripts in a population of identically induced cells (studied in \cite{boettiger2009synchronous}) -- allowing us to test if synchrony is a functional consequence of elongation regulation. We are also interested in the variation between activated cells of the total amount of mRNA produced in each. If we denote by $N(t)$ the random number of transcripts produced up until time $t$, then it follows from elementary renewal theory (see e.g.\ Section XI.5 in \cite{feller-vol-2}) that $N(t)$ has mean approximately $\mu_{N(t)} \approx t/\mu_{\tau}$ and variance approximately $\sigma^2_{N(t)} \approx \sigma^2_{\tau} t/\mu_{\tau}^3$. 
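These renewal approximations are easy to check numerically. The sketch below (illustrative parameters of our own; Gamma-distributed waits standing in for the passage-time distribution) simulates $N(t)$ for many independent cells and compares its mean and variance with $t/\mu_{\tau}$ and $\sigma^2_{\tau} t/\mu_{\tau}^3$:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, var, t = 2.0, 2.0, 1000.0          # Gamma(2,1) waits: mu_tau = 2, sigma_tau^2 = 2
paths = 2000
# 700 draws per path have expected total 1400 >> t, so truncation is safe
arrivals = rng.gamma(2.0, 1.0, size=(paths, 700)).cumsum(axis=1)
N = (arrivals < t).sum(axis=1)         # N(t) for each simulated cell
print(N.mean(), N.var())               # renewal theory predicts ~500 and ~250
# and the squared coefficient of variation is approximately var/(mu*t) = 1e-3
```

With $\mu_{\tau}=2$ and $\sigma^2_{\tau}=2$, the simulated mean and variance land close to the predicted 500 and 250.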
A natural measure of relative variability of $N(t)$ is the squared coefficient of variation of $N(t)$, $\sigma^2_{N(t)}/\mu^2_{N(t)}$ (i.e.\ the variance of $N(t)$ divided by the squared mean of $N(t)$), which is thus approximately $\sigma^2_{\tau}/(\mu_{\tau} t)$. We denote the coefficient $\sigma^2_{\tau}/\mu_{\tau}$ by $\eta$, and refer to it as {\em transcript count variability}. The transcript count variability provides a measure of the variation in total number of rounds of transcription initiated by identical cells that have been induced for the same amount of time. Note that $\eta$ has units of time: \begin{displaymath} \eta = \frac{\sigma_{\tau}^2}{\mu_{\tau}} \approx \frac{\sigma_{N(t)}^2 }{\mu_{N(t)}^2}t . \end{displaymath} However, the ratio of this quantity for the IR scheme to its counterpart for the ER scheme does not depend on our choice of time scale. For any time $t$, this ratio is approximately the ratio of the squared coefficients of variation of $N(t)$ for the two schemes, and thus the ratio provides a way of comparing the relative variability in transcript counts between the two schemes across all times. Such a comparison is of interest because many of the known pausing regulated genes are transcription factors or cell signaling components that act in concentration-dependent manners, and hence the precision of the total number of transcripts made directly affects the precision of functions downstream \cite{boettiger2009synchronous}. (Rather than the coefficient of variation, some authors consider the Fano factor of $N(t)$, defined to be $\sigma^2_{N(t)} / \mu_{N(t)}$ \cite{Thattai2001}. If $N(t)$ has a Poisson distribution, then its Fano factor is 1, and hence a Fano factor that differs from 1 indicates some form of ``non-Poisson-ness''.
As such, the Fano factor captures a feature of the {\em character} of the stochasticity inherent in the number of transcripts made up to some time, whereas the squared coefficient of variation indicates the (relative) magnitude of the stochastic effects.) We use our model to examine how these three important system properties -- speed, synchrony, and transcript count variability -- depend on the jump rates and how they differ between an IR and an ER regulation scheme. In both cases, the delay $\tau$ between induction and transcription corresponds to the (random) time it takes for the corresponding Markov chain to go from an initial state $s$ to a final state $f$. For the chains corresponding to the models shown in figures \ref{fig:markovmodel} and \ref{fig:picassembly}, the moments of $\tau$, the Laplace transforms of $\tau$, and hence the probability distributions themselves, can be found analytically as we describe briefly here (for detailed discussion, see the Supporting Information, Text S1; and figure S2). Denote by $Q$ the {\em infinitesimal generator} matrix that has off-diagonal entries $q_{ij}$ given by the jump rate from state $i$ to state $j$, and diagonal entries $q_{ii}$ given by the negative of the sum of the jump rates out of state $i$. The infinitesimal generator of the chain {\em stopped when it hits state $f$} is the matrix $\widetilde Q$ obtained by replacing the entries in the row of $Q$ corresponding to $f$ with zeros. Writing $p(\cdot)$ for the probability density function of $\tau$, the Laplace transform of $p$ is \begin{equation} \label{eqn:laplacetransform} \phi(\lambda) = \int_0^\infty e^{-\lambda t} p(t) \, dt = (\lambda I - \widetilde Q)^{-1}_{sf} . \end{equation} In principle, the transform $\phi$ can be inverted to find $p$, as we do in figure \ref{fig:sim_results}A.
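For a chain small enough, the inversion can also be bypassed: the density of the passage time is the time derivative of the stopped chain's transition probability, $p(t) = (\widetilde Q\, e^{t \widetilde Q})_{sf}$. A minimal numerical check (with illustrative rates of our own) compares this against the closed-form hypoexponential density of a two-step chain:

```python
import numpy as np
from scipy.linalg import expm

# two-step chain 0 -> 1 -> 2 with rates a and c; tau = Exp(a) + Exp(c),
# whose density has the closed form a*c/(c-a) * (e^{-a t} - e^{-c t})
a, c = 2.0, 5.0
Qt = np.array([[-a, a, 0.0], [0.0, -c, c], [0.0, 0.0, 0.0]])  # stopped generator
t = 0.5
p_matrix = (Qt @ expm(t * Qt))[0, 2]     # d/dt of P(tau <= t), from state 0
p_exact = a * c / (c - a) * (np.exp(-a * t) - np.exp(-c * t))
print(p_matrix, p_exact)                 # both ~0.95265
```

The matrix-exponential value agrees with the closed form to machine precision.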
Also, the $n^\mathrm{th}$ moment of $\tau$ can be found from the $n^\mathrm{th}$ derivative of $\phi$: \begin{equation} \label{eqn:moments} \int_0^\infty t^n p(t) \, dt = (-1)^n \frac{d^n}{d \lambda^n} \phi(\lambda) \Big \vert_{\lambda = 0} . \end{equation} In particular, the mean and variance of $\tau$ can be computed from the first and second derivatives of $\phi(\lambda)$. \begin{figure}[!ht] \begin{center} \includegraphics[width=6in]{markovmodel} \end{center} \caption{ {\bf Model Results:} {\bf (A)} Comparison of log ratios of mean expression speed for the IR/ER schemes for 10,000 uniformly sampled rates. For all jump rates, the log ratio is positive (red line), indicating the ER scheme is always faster. Extreme values that would be off the edge of the graph are collected into the outermost bins. {\bf (B)} Variance in timing of expression. {\bf (C)} $\log_2$ ratio of noise in transcript number, measured by the squared coefficient of variation between cells of total mRNA counts $N(t)$ up to time $t$: $\sigma_{N(t)}^2/\mu_{N(t)}^2$ -- the ratio is approximately independent of $t$. } \label{fig:modelresults} \end{figure} It is not necessary to carry out the differentiation in equation \eqref{eqn:moments} explicitly, since \eqref{eqn:moments} becomes \begin{equation} \int_0^\infty t^n p(t) \, dt = n! \sum_y \left(-Q_{-f}\right)^{-(n+1)}_{sy} \widetilde{Q}_{yf} \end{equation} after some matrix algebra, as derived in the Supporting Information. Here, $Q_{-f}$ is the submatrix of $Q$ obtained by removing the final row and column. As shown in the Supporting Information, these expressions can be computed much more efficiently than \eqref{eqn:laplacetransform} or \eqref{eqn:moments}. Equation \eqref{eqn:laplacetransform} is known as the Feynman--Kac formula \cite{fitzsimmons1999fk}, and it reduces our problem in principle to inverting the matrix $( \lambda I - \widetilde Q )$. 
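As a sanity check on the moment formula (a sketch of our own with illustrative rates, not our symbolic pipeline), the code below applies it to a two-step chain whose passage time is a sum of two independent exponentials, for which the moments are known in closed form:

```python
import numpy as np
from math import factorial

def passage_moment(Q, s, f, n):
    """n-th moment of the first-passage time from s to f via
    n! * sum_y (-Q_{-f})^{-(n+1)}_{sy} Qtilde_{yf}."""
    keep = [i for i in range(Q.shape[0]) if i != f]
    M = np.linalg.inv(-Q[np.ix_(keep, keep)])
    col = Q[keep, f]                  # entries of Qtilde feeding the final state
    return factorial(n) * (np.linalg.matrix_power(M, n + 1) @ col)[keep.index(s)]

# chain 0 -> 1 -> 2 with rates a and c, so tau = Exp(a) + Exp(c)
a, c = 2.0, 5.0
Q = np.array([[-a, a, 0.0], [0.0, -c, c], [0.0, 0.0, 0.0]])
m1 = passage_moment(Q, 0, 2, 1)
m2 = passage_moment(Q, 0, 2, 2)
print(m1, m2 - m1**2)   # mean 1/a + 1/c = 0.7, variance 1/a^2 + 1/c^2 = 0.29
```

The formula reproduces the known mean and variance of the exponential sum exactly.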
This is easy to do numerically for particular rate parameter values, but in order to make detailed general predictions about the consequences of changing the step at which the enhancer regulates transcription, we require symbolic expressions for the system properties with the rates as free parameters. However, for even moderately complex chains like that described in figure \ref{fig:picassembly}, symbolic inversion of the matrix is prohibitively difficult for commonly available software. To overcome this obstacle, we develop new analytic techniques that take advantage of the special structure of these matrices. First, we note that chains modeling transcription often have a block structure, in that we can decompose the state space according to the subset of states that must be passed through by any path of positive probability leading from the initial to the final state (we call such states {\em pinch points}; see figure \ref{fig:picassembly}). A schematic of this decomposition is shown in figure S2. The models of initiation regulation we consider are amenable to this approach. In order for the ER model to be amenable to this approach, we assume that by the time the PIC assembly has reached the regulated step, the enhancer chain is in (stochastic) chemical equilibrium. Concretely, if $\pi$ is the stationary probability that the enhancer is in the permissive state, then at each time the promoter chain jumps to state 7 (of figure \ref{fig:picassembly}) we suppose it jumps to state 7B with probability $\pi$ and to state 7A with probability $(1-\pi)$. (To evaluate the effect of this approximation, we investigate how our results change after removing the parameter vectors in which the enhancer chain is slow to equilibrate and hence for which the approximation is worst.) A similar decomposition for elongation regulated genes is possible using spectral theory, but the computational savings are not as great as for the pinch point decomposition.
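The utility of pinch points comes from the strong Markov property: the passage time decomposes as the independent sum of the time to reach the pinch point and the time from the pinch point to the final state, so means (and variances) add across blocks. A small numerical check on a five-state chain (our own illustrative rates, not the transcription model) confirms the decomposition of the mean:

```python
import numpy as np

def mean_hit(Q, s, f):
    """Mean first-passage time s -> f: solve (-Q_{-f}) m = 1."""
    keep = [i for i in range(Q.shape[0]) if i != f]
    m = np.linalg.solve(-Q[np.ix_(keep, keep)], np.ones(len(keep)))
    return m[keep.index(s)]

# chain 0 <-> 1 -> 2 <-> 3 -> 4: every path from 0 to 4 passes through state 2
Q = np.zeros((5, 5))
for i, j, r in [(0, 1, 2.0), (1, 0, 1.0), (1, 2, 3.0),
                (2, 3, 4.0), (3, 2, 2.0), (3, 4, 5.0)]:
    Q[i, j] = r
np.fill_diagonal(Q, -Q.sum(axis=1))

total = mean_hit(Q, 0, 4)             # full chain
seg1 = mean_hit(Q[:3, :3], 0, 2)      # start up to the pinch point
seg2 = mean_hit(Q[2:, 2:], 0, 2)      # pinch point on to the final state
print(total, seg1 + seg2)             # both 1.55
```

Because each block can be analyzed on its own smaller matrix, the symbolic work scales with the largest block rather than with the whole chain.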
We provide a detailed description of these techniques and the accompanying proofs (plus implementations coded in MATLAB) in the Supporting Information. Our approach has several advantages. Firstly, once we have derived symbolic expressions for features of interest, it is straightforward to substitute in a large number of possibilities for the transition rate vector in order to understand how those features vary with respect to the values of the transition rates. This would be computationally impossible using simulation and at best very expensive using a numerical version of the naive Feynman--Kac approach. Secondly, we are able to differentiate the symbolic expressions with respect to the transition rate parameters to determine the sensitivity of the system properties to the parameter values. Such a sensitivity analysis would be even less feasible using simulation or a numerical Feynman--Kac approach. \section*{Results} \subsection*{Predictions for representative parameter values} To get an initial sense of the differences between these two schemes of regulation, we first compared the transcriptional behaviors for a best-guess set of parameters, guided by measurements of promoter binding and escape rates by Darzacq et al.\ \cite{darzacq2007} and Degenhardt et al.\ \cite{degenhardt2009} {\it in vivo} and observations in embryonic Drosophila transcription. These data do not allow us to uniquely estimate all 14 binding reaction rates in our model of PIC assembly, but they do constrain key properties, including the time scale of the rate-limiting reactions and the ratio of forward to backward reaction rates for both early binding events and later promoter engagement events.
We chose parameters to be consistent with these measurements, and chose enhancer activation and deactivation rates to be consistent with induction times estimated in Drosophila \cite{boettiger2009synchronous} (which are also in the range recently reported in human cell lines \cite{degenhardt2009}). We used the following rate parameters for the model of figure \ref{fig:picassembly}: \[ \begin{split} & [ k_{12}, k_{21}, k_{23}, k_{32}, k_{24}, k_{42}, k_{35}, k_{53}, k_{45}, k_{54}, k_{56}, k_{65}, k_{67}, k_{78},k_{ab}, k_{ba}] \\ & \quad = [.108, .725, 10, 10, 10, 10, 10, 10, 10, .008, .005, 10, 10, 10, .01, 1] \mbox{sec}^{-1}. \\ \end{split} \] We found the probability density of the amount of time it takes the system to go from induced to actively transcribing, shown in figure \ref{fig:sim_results}A, by numerical inversion of the Laplace transform (equation \ref{eqn:laplacetransform}). With these rate parameters, the mean time between induction and the start of transcription for an elongation regulated scheme is around 5 minutes, with a standard deviation of about 4 minutes, whereas an initiation regulated scheme with the same rate parameters has a mean of 16 minutes and a standard deviation of 12 minutes, consistent with experimentally estimated initiation times in Drosophila \cite{boettiger2009synchronous}. \begin{figure}[!ht] \begin{center} \includegraphics[width=4in]{sim_results} \end{center} \caption{ {\bf Model Predictions:} {\bf (A)} Probability distributions for first passage times: Probability density functions of the time to first transcription, obtained by inversion of symbolically calculated Laplace transforms, using rate parameters computed in experimental studies of particular transcription systems. Rates inferred from Darzacq et al.\ \cite{darzacq2007} measurements of promoter binding and promoter escape rates (see text). 
{\bf (B)} Distribution of total transcripts among a population of simulated cells during 600 minutes of transcription under the ER model with parameters as in (A) and a reinitiation probability of 0.8. {\bf (C)} As in (B) but for the IR model. {\bf (D)} Individual cell simulation (see text) showing the expected results for an mRNA counting assay on the population of cells plotted in (B). Each mRNA transcript is represented by a red dot randomly positioned within the cell. Cells with less than two-thirds of mean mRNA concentration are shaded blue, cells with more than three-halves of mean mRNA concentration are shaded red. {\bf (E)} As in (D) but for the IR scheme.} \label{fig:sim_results} \end{figure} We also described the number of mRNA produced over a given period of time at one choice of $b$ (the probability the GTF scaffold survives until the return of the next polymerase). Setting $b=0.8$, we found the distribution of the time delay between the beginning of the production of subsequent transcripts under each model. Using this distribution, we simulated the number of mRNA produced during a 600 minute period in 2000 independent cells, under both the IR and the ER scheme (for the common vector of rate parameters listed above). The resulting distributions of mRNA numbers are shown in figure \ref{fig:sim_results}B and C. To depict the amount of variability this represents, figures \ref{fig:sim_results}D and E show a cartoon of the results -- for each cell pictured, we sampled a random number of mRNA as above, which are shown as red dots randomly scattered within the cell. To emphasize the variability, we then colored cells blue that have less than two-thirds the mean mRNA number and colored cells red that have more than three-halves the mean mRNA number.
In this example, $\eta$ is 2.8 times larger in the ER model than in the IR model, so these simulations also give a sense of how a given ratio of transcript count variabilities $\eta$ for the two schemes corresponds to a difference in cell-to-cell variability of transcript counts, a topic we explore in more detail below. \subsection*{Effects of regulation scheme on expression timing} Our predictions for the time of expression and the number of transcripts in the previous subsection depended on the chosen parameter values such as the association rate of different GTFs and the average burst size of the gene expression. The values of such parameters can, for the most part, be only very approximately estimated. Moreover, they may be expected to vary considerably between different genes and different species. Since a single vector of parameters simultaneously specifies our models for the two regulation mechanisms, we can systematically explore all possible combinations of promoter strength and enhancer activation rates and ask in each of these cases how the two mechanisms compare in terms of speed, synchrony and variability in transcript counts. To compare the two kinds of regulation of the model in figure \ref{fig:picassembly}, we sampled 10,000 random vectors of transition rates and substituted them into our analytic expressions for $\mu_\tau$, $\sigma^2_\tau$, and $\eta$, with each rate chosen independently and uniformly between 0 and 1 (we could also have used a regular grid of parameter vectors). Since we will use ratios of the relevant quantities to compare models, and these ratios are all invariant under a common linear rescaling of time, the fact that all rates are bounded by 1 is no restriction -- we are effectively sampling over {\em all} of parameter space. (For instance, the ratio of mean expression times of the two models does not change after multiplying every rate parameter by 100.) 
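This scale invariance is easy to verify numerically. In the sketch below (two arbitrary generators standing in for the IR and ER chains; not our actual transcription models), rescaling every rate by 100 divides each mean passage time by 100 and leaves the ratio between the two chains unchanged:

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_hit(Q, s, f):
    """Mean first-passage time s -> f: solve (-Q_{-f}) m = 1."""
    keep = [i for i in range(Q.shape[0]) if i != f]
    m = np.linalg.solve(-Q[np.ix_(keep, keep)], np.ones(len(keep)))
    return m[keep.index(s)]

def random_generator(n):
    """An arbitrary generator matrix with absorbing final state."""
    Q = rng.uniform(0.0, 1.0, (n, n))
    Q[-1] = 0.0
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

Q1, Q2 = random_generator(4), random_generator(4)   # stand-ins for IR and ER
ratio = mean_hit(Q1, 0, 3) / mean_hit(Q2, 0, 3)
ratio_scaled = mean_hit(100 * Q1, 0, 3) / mean_hit(100 * Q2, 0, 3)
print(ratio, ratio_scaled)    # identical: the comparison is scale invariant
```

The same argument applies to the variance and transcript-count-variability ratios, which is why sampling rates uniformly on $(0,1)$ effectively covers all of parameter space.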
Furthermore, independent draws of new sets of 10,000 parameter vectors and substitutions give nearly identical results, confirming that our results are not sensitive to the specifics of the sample. Additionally, discarding parameter vectors for which the enhancer dynamics are significantly slower than for the promoter chain (i.e.\ $k_{ab}$ or $k_{ba}$ is smallest) does not qualitatively change any of the results, validating our treatment of the enhancer chain when analyzing the ER scheme. In figure \ref{fig:modelresults}A--C we plot the histogram of $\log_2$ ratios for the mean delay, variance in delay, and transcript count variability for the 10,000 randomly selected parameter combinations sampled uniformly across parameter space. We found that at all sampled choices of rate parameter, and therefore in the vast majority of parameter space, the time to the first transcription event after induction is smaller and less variable (i.e.\ more synchronous) for elongation regulation than for initiation regulation in the realistic model of figure \ref{fig:picassembly}. Thus, both the experimentally reported speed \cite{yao2007} and synchrony \cite{boettiger2009synchronous} for elongation regulated genes can be expected purely from effects of regulation topology without invoking changes in promoter strength or in the composition of the PIC. We emphasize that this conclusion is still consistent with the possibility that a particular initiation regulated gene is expressed in a more synchronous pattern or with more rapid kinetics than some other elongation regulated gene: it is only necessary that the rate parameters are also sufficiently different. However, for the fixed set of rates associated with a given gene, the network topology of the ER scheme always improved synchrony and speed in our model of transcription relative to the corresponding IR scheme for the parameter vectors we sampled. 
There is a plausible intuitive explanation for why elongation regulation is almost always faster than initiation regulation (figure \ref{fig:modelresults}A). When the regulation acts downstream, there are multiple paths which the system can take before it reaches the regulated step (i.e.\ either the enhancer can reach the permissive state first or the polymerase can load), as illustrated for the simple model in figures \ref{fig:markovmodel}A and B. The system moves closer to the endpoint with whichever happens first, whereas the IR scheme must wait for enhancer activation before proceeding. The combination of this intuition and our strong numerical evidence might suggest a provable global inequality. However, recall that for the toy model IR is faster over about 6\% of parameter space, and one can reduce the realistic model to the toy model by making appropriate transitions very fast. For example, for the toy model the choice of parameters \begin{displaymath} [k_{ab}, k_{ba}, k_{12}, k_{21}, k_{23}, k_{34}] = [1, 1, .1, .1, .1, .0001] \end{displaymath} leads to a 5-fold increase in speed of the IR scheme relative to the ER scheme. This allows us to find parameter vectors where IR is faster than ER for the realistic model, for instance, \[ \begin{split} & [k_{ab}, k_{ba}, k_{12}, k_{21}, k_{23}, k_{32}, k_{24}, k_{42}, k_{35}, k_{53}, k_{45}, k_{54}, k_{56}, k_{65}, k_{67}, k_{78}] \\ & \quad = [.1, 1, .01, .01, .01, .01, .01, .01, .01, .01, .01, .01, .01, .01, .01, .0001] \\ \end{split} \] produces in the realistic model a 10-fold increase in speed for the IR scheme relative to the ER scheme. However, such reversals of the typical ordering appear to occupy less than about one ten-thousandth of parameter space. The fact that the typical ordering is not universal, and hence not the consequence of some analytically provable domination of one model by the other, demonstrates the necessity of our numerical exploration of parameter space.
\subsection*{Effect of regulation scheme on mRNA concentration} The effect of the regulatory scheme on the variation in the total amount of expression among cells is perhaps the most interesting, and as yet experimentally untested, consequence of regulating release from the paused state. As discussed above, we compute a factor $\eta \approx (\sigma_{N(t)}^2 / \mu_{N(t)}^2 )t$ for each scheme and compare the schemes by examining the ratio of the resulting quantities. If the ratio $\eta_{IR}/\eta_{ER}$ is larger than one at a particular set of parameter values, a population of cells using the IR scheme with those rate parameters will show more variability in mRNA concentrations between cells (relative to the average over all cells) than if they were using the ER scheme with the same rate parameters. In this case, we say that the ER scheme is more {\em consistent} than the IR scheme. We explored the logarithm of this ratio (equivalently, the difference of the logarithms of the respective $\eta$ quantities) at four different values of $b$ (the probability the scaffold does not disassemble; see figure \ref{fig:picassembly}); several of the resulting distributions are shown in figure \ref{fig:scaffold_effect}. \begin{figure}[!ht] \begin{center} \includegraphics[width=\textwidth]{scaffold_effects} \end{center} \caption{{\bf Effect of scaffold stability} on variation in transcript number. {\bf (A)} $\log_2$ ratio of transcript variability, $\eta$, between the IR and ER models when all subsequent polymerases engage an assembled scaffold ($b=1$). Extreme values that would be off the edge of the graph are collected into the outermost bins. {\bf (B)} As in (A) with $b=0.9$; note that the ER scheme is more often substantially more coordinated, though a few parameter choices still make the IR scheme the more coordinated by a smaller margin. {\bf (C)} $b = 0.3$. {\bf (D)} $b=0$. 
} \label{fig:scaffold_effect} \end{figure} When the complex is very stable, so that all polymerases find a preassembled scaffold to return to ($b=1$, figure \ref{fig:scaffold_effect}A), the ER scheme is more consistent for most rate parameters, but the differences are small. In fact, in nearly all cases in which $\eta$ differs by a factor of at least 2, the IR scheme is the more consistent. When the scaffold is still stable but less so ($b=0.9$, figure \ref{fig:scaffold_effect}B; mean burst size 10), the ER scheme still almost always produces more consistent numbers of transcripts among cells than the IR scheme, and the differences are much larger. If the scaffold is less stable ($b=0.3$, figure \ref{fig:scaffold_effect}C; mean burst size 1.4), the ER scheme is still more often the more consistent of the two. When we consider the simplest case with no bursting ($b=0$, figure \ref{fig:scaffold_effect}D), the ER scheme produces less variation in total transcript (smaller $\eta$) for most of parameter space. Moreover, the distribution is strongly skewed to the right, to the extent that over the 20\% of parameter space where there is more than a 1.5-fold difference between the two regulatory mechanisms the ER scheme is always the less variable. We have found that, regardless of the value of $b$, the ER scheme is more consistent over most of parameter space. However, for that difference in consistency to be substantial, $b$ must not be too close to 1. This is at first surprising, because if the scaffold remains assembled, so that the chain returns to state 5 of figure \ref{fig:picassembly}, an IR scheme seems to have a clear ``advantage'' -- it does not have to wait for the enhancer to arrive, whereas the ER scheme does, and one might expect that this added stochastic event would only increase variability. Consideration of how each chain depends on its starting state suggests an intuitive explanation for this difference. 
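The transcript-count statistic $\eta$ can also be estimated by direct simulation, which provides an independent check on analytic results. A minimal Monte Carlo sketch, under the simplifying and purely illustrative assumption that successive synthesis cycles are independent draws from a single distribution (for such a renewal process, $\eta$ approaches the ratio of the cycle variance to the cycle mean as $t$ grows):

```python
import random

def eta_estimate(cycle_sampler, horizon, runs=2000, seed=0):
    """Monte Carlo estimate of eta ~ (Var N(t) / E[N(t)]^2) * t.

    N(t) counts synthesis cycles completed by time `horizon`;
    cycle_sampler(rng) draws one cycle duration.
    """
    rng = random.Random(seed)
    counts = []
    for _ in range(runs):
        t, n = 0.0, 0
        while True:
            t += cycle_sampler(rng)
            if t > horizon:
                break
            n += 1
        counts.append(n)
    mean = sum(counts) / runs
    var = sum((c - mean) ** 2 for c in counts) / (runs - 1)
    return (var / mean ** 2) * horizon

# Unit-mean exponential cycles form a Poisson process, for which the
# variance-to-mean ratio of N(t) is 1, so the estimate should be near 1.
eta = eta_estimate(lambda rng: rng.expovariate(1.0), horizon=200.0)
```

The same estimator applies to cycle times drawn from a full promoter-chain simulation; here the exponential case serves only as a check against the known renewal limit.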
The IR scheme differs more in the amount of time it takes to reach the synthesis state when started with or without a scaffold (state 5 or state 1) than does the ER scheme. Intermediate values of $b$ allow the possibility of some cells producing many bursts by reverting to state 5 after each synthesis while other cells produce dramatically fewer by reverting to state 1 after each synthesis. In contrast, under the ER regulation scheme, cells that start again from state 1 or from state 5 have relatively more similar synthesis times, and thus relatively less variation. The similar synthesis times result from the fact that ER is faster starting from state 1, for the reasons discussed above, and slower than IR when starting from state 5, because of the extra regulatory step before synthesis. Consequently, an ER scheme reduces the noise associated with very stable transcription scaffolds (see \cite{pedraza-paulsson, Thattai2001, tkavcik2008rin} for a discussion of this noise). \subsection*{Pertinent properties of elongation regulation} To further understand why elongation regulation results in faster, more synchronous, and more consistent gene expression over a wide range of parameters, we investigated alternative post-initiation regulatory schemes. This allows us to explore how changing certain properties of the model of PIC assembly (the promoter chain) affects the results: Is the difference large because there are many steps between the IR step and the ER step, or is it because there is no allowed transition leading backward out of the state immediately before the regulated step? To explore these questions, we made modifications to the toy model of figure \ref{fig:markovmodel}, which we are able to analyze without the assumption of enhancer equilibrium. 
First note that, as is shown in figure \ref{fig:markovmodel}C, the ER model is still faster, less variable, and more reliable (smaller $\mu$, $\sigma$, and $\eta$) than the IR model over approximately 95\% of parameter space. (It is also reassuring that the results are so similar to those for the more realistic model.) We performed the same analysis after adding a reverse transition from state 3 back to state 2 (see figure S3A-B). The results are shown in figure S3C, and demonstrate that there is strikingly little difference between the two models of regulation. This suggests that the absence of a backwards transition from the state immediately preceding the regulated transition is an important factor in producing the differences between the models we observed above. In the ER scheme of figure \ref{fig:markovmodel}, PIC assembly becomes ``caught'' in state 3, awaiting arrival of the enhancer. (Similarly, the ER scheme of figure \ref{fig:picassembly} gets ``caught'' in state 7). After adding a transition $3 \to 2$, PIC assembly may run up and down the chain many times before it is in state 3 at the same time the enhancer is in the permissive configuration, and this counteracts any benefits in speed or reliability that may have been gained otherwise. (It is not obvious that this will happen: the ER scheme of figure S3B still has ``more routes'' from state 1A to state 4 than the IR model, so it may run counter to intuition that the IR model could be so often faster.) This furthermore suggests that regulating after a state in which PIC assembly is ``caught'' reduces variation -- some polymerases may run from state 1 to 8 smoothly and fire very quickly, while others may go up and down the assembly process many times before they actually escape the promoter and make a transcript (as is suggested by the data of Darzacq et al.\ \cite{darzacq2007}), and this will substantially spread out the times at which the first transcript is created. 
We also investigated the case in which the $2 \to 3$ transition is regulated and observed a similar pattern -- see figure S3D-F. This investigation supports the intuition that it is the stability of the paused state, not simply the parallel assembly of enhancer complex and promoter complex, that is most important in understanding the different behavior of the two regulatory schemes. It also suggests that these differences should be specific to genes that are regulated through paused (as opposed to poised or stalled) polymerase. \subsection*{Sensitivity analysis} Small variations in rate parameters between cells will occur if the number of TF or Pol II molecules is small, so it is of interest to investigate how robust the properties of each regulation scheme are to such variation and which jump rates affect each scheme the most. To measure this sensitivity, we compute the gradient of a quantity of interest (e.g.\ the mean induction speed) with respect to the vector of jump rates, square the entries, and normalize so that the entries sum to one, giving a quantity we refer to as {\em relative sensitivity} that is analogous to the ``percent variation explained'' in classical analysis of variance. Our analytic solutions for the quantities of interest make this computation possible. For example, let $m(\mathbf r)$ denote the mean transcription time of the chain when the vector of transition rates is $\mathbf r$. Then, the relative sensitivity of $m$ to each rate $r_i$ is $(\partial_{r_i} m(\mathbf r))^2 / \sum_j (\partial_{r_j} m(\mathbf r))^2$. The larger this quantity is, the larger is the relative effect a small change in $r_i$ has on $m$. To explore the sensitivity across parameter space, we computed relative sensitivities for each of the three system properties to all 16 parameters at each of the 10,000 random vectors of transition rates described above. 
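The relative-sensitivity profile defined above is easy to reproduce numerically even without closed-form solutions, by replacing exact derivatives with central finite differences. A sketch (the example objective, a sum of inverse rates, is the mean completion time of a hypothetical purely sequential chain, not the full model of the paper):

```python
def relative_sensitivities(f, rates, h=1e-6):
    """Squared gradient of f at `rates`, normalized to sum to one.

    Central differences approximate each partial derivative df/dr_i;
    squaring and normalizing mirrors the 'relative sensitivity' measure,
    analogous to percent variation explained.
    """
    grads = []
    for i in range(len(rates)):
        up, dn = list(rates), list(rates)
        up[i] += h
        dn[i] -= h
        grads.append((f(up) - f(dn)) / (2.0 * h))
    total = sum(g * g for g in grads)
    return [g * g / total for g in grads]

# Mean completion time of a sequential chain is sum(1/r_i); the smallest
# rate dominates the profile, matching the rate-limiting intuition.
mean_time = lambda r: sum(1.0 / x for x in r)
s = relative_sensitivities(mean_time, [1.0, 2.0])  # ~[16/17, 1/17]
```

For this objective the sensitivities are available in closed form (the gradient entries are $-1/r_i^2$), which makes the finite-difference output easy to verify.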
Each of the system properties showed surprisingly similar sensitivity profiles, so we discuss only the results for the mean time to transcription. Marginal distributions of the sensitivity of the mean time to transcription to each parameter are shown in figure \ref{fig:sensitivity}. Corresponding plots for the variance of transcription time and for transcript count variability are shown in figures S4 and S5. \begin{figure}[!ht] \begin{center} \includegraphics[width=4in]{mean_marginalhists} \end{center} \caption{{\bf Sensitivity Analysis} for mean expression time. Histograms of the marginal distributions of relative sensitivities for both the ER and IR schemes, across uniform random samples from parameter space, as described in the text. The smallest bin of the histogram (values below $.05$) is disproportionately large, and so is omitted; shown instead is the percent of parameter space on which the relative sensitivity is at least $.05$. Note that often only a single parameter dominates (many sensitivities are near 1), that many parameters are almost never influential, and that ER and IR are similar except for the addition of sensitivity to $k_{ab}$. } \label{fig:sensitivity} \end{figure} As one might expect, for a given parameter vector the parameters to which the behavior of the models is most sensitive are generally those that happen to take the smallest value (and are thus rate-limiting): for each parameter vector, we recorded the sizes of the two parameters with the highest and second highest sensitivity values and found that their sample means were $0.147$ and $0.296$, respectively (whereas the sample mean of a typical parameter value will be very close to $0.5$). However, just how small a given transition rate must be before it controls the system properties depends on where the corresponding edge lies in the topology of the network. 
As shown in figure \ref{fig:sensitivity}, some parameters are relatively important throughout a large region of parameter space in both the ER and IR schemes, while others only dominate the response of the system in a small portion and some never appear. Two further observations are evident from this analysis. First, we see which transitions in the process of activating the gene are most sensitive to small fluctuations (due to small number of TF molecules or changes in binding strength). As is apparent from figure \ref{fig:sensitivity}, just 4 of the 16 promoter chain jump rates dominate the sensitivity, and these are the same for both IR and ER schemes ($k_{12}$, $k_{56}$, $k_{67}$, and $k_{78}$). The relative importance among those 4 jump rates depends on the position in parameter space, primarily through their relative sizes. Furthermore, although the ER and IR schemes have otherwise similar sensitivity profiles, the IR scheme is additionally sensitive to variation in the rate of enhancer--promoter interactions, $k_{ab}$. As this interaction between potentially distant DNA loci is likely rate-limiting for gene expression, the robustness of the elongation regulated scheme to fluctuations of this rate may provide a further explanation for why elongation regulated genes appear to exhibit considerably more synchronous activation. It suggests additionally that the rate of enhancer--promoter interactions is under more selective pressure for IR genes, where it has a large effect on their expression properties, than it is for ER genes, which may exhibit very similar expression properties despite having different enhancer interaction rates. 
Second, we also observe that the complex assembly steps which may occur in arbitrary arrival order, namely the recruitment of TFIIE or TFIIF (governed by the jump rates $k_{23}$, $k_{24}$, $k_{35}$, and $k_{45}$), are considerably more tolerant of stochastic variation than sequential assembly steps such as the initial recruitment of the polymerase ($k_{12}$), the arrival of the last component of the complex, TFIIH ($k_{56}$), or promoter escape ($k_{67}$). Although between-cell variation in the total concentration of these intermediate, non-sequentially binding factors will affect their binding rate parameters, it will not greatly change properties of the time to expression, thus suggesting an additional benefit of ER. This observation leads to the conclusion that the regulatory processes controlling the concentration of factors arriving in arbitrary order, and the binding affinities of such factors, may be under less evolutionary pressure than the corresponding quantities for factors associated with other transitions. \section*{Discussion} Speed, synchrony, degree of cell-to-cell variability, and robustness to environmental fluctuations are important features of transcription. They are properties of the system rather than of a particular gene, DNA regulatory sequence, or gene product taken in isolation, and optimizing them can, for instance, reduce the frequency of mis-patterning events that arise due to the inherently stochastic nature of gene expression. Understanding how these properties emerge, the mechanisms by which they change, and the tradeoffs involved in optimizing them all require tractable models of transcription. Through a study of stochastic models of transcriptional activation, we demonstrated that the increased speed and synchrony of paused genes, reported by Yao et al.\ \cite{yao2007} and Boettiger et al.\ \cite{boettiger2009synchronous} respectively, are expected consequences of the elongation regulation shown by such genes. 
We also predicted that ER genes produce more consistent numbers of total transcripts than IR genes. This hypothesis can be tested directly using recently developed methods (see \cite{bates2008,raj2009singlemolecule} for reviews and the Supporting Information for more details). We furthermore explored what aspects of ER make this possible. From an examination of the effect of scaffold stability we proposed that elongation regulation should reduce the noise-amplifying nature of bursty expression. By investigating alternative models of post-initiation regulation, we also determined that our predictions depend critically on the stability of the transcriptionally engaged, paused polymerase, and would not be expected from polymerases cycling rapidly on and off the promoter (i.e.\ polymerase stalling). Our investigation required us to introduce a general probabilistic framework for analyzing system properties of protein--DNA interactions. Stochastic effects, resulting from molecular fluctuations, are increasingly understood to play important roles in gene control and expression (see \cite{raj2008review} for a review). We can now determine quantitatively how an element's location in a network affects the general properties of that network, even when the rate constants and concentrations of the network components are unknown. In particular, we quantified the extent to which system properties are sensitive to each rate parameter, something which might predict the evolutionary constraint on that component. Most previous approaches to the analysis of protein--DNA interactions have either relied on simulations, which require some knowledge of numerical rate values, or used the fluctuation--dissipation theorem, assuming the system is near equilibrium and the noise is small. Our methods avoid the limitations of those approaches and also make analysis of realistic models, as done in \cite{coulon2010spontaneous}, significantly more feasible. 
Finally, our approach is not restricted to investigating the assembly of transcriptional machinery, but may also prove useful in studying stochastic properties of a variety of regulatory DNA sequences (such as enhancers). Different assembly topologies, such as sequential versus arbitrary association mechanisms for the component TFs \cite{hager2009transcription}, may account for some of the observed differences in sensitivities and kinetics between otherwise similar regulatory elements. As new technologies allow better experimental determinations of these mechanisms, a theoretical framework within which one can explore their potential consequences will become increasingly important. \section*{Acknowledgments} We thank Graham Coop, Mike Levine, George Oster, Dan Rokhsar, Ken Wachter, Michael Cianfrocco and Teppei Yamaguchi for helpful discussions and comments on the manuscript.
\section{Introduction} In a previous work \cite{2016PhPl...23g2507P} we extended the equilibrium code HELENA \cite{huysmans1991IsobicHerelesolGraequ} to stationary equilibria with rotation parallel to the magnetic field, based on experimental and theoretical evidence that plasma flow has an impact on the equilibrium, stability and transport properties of fusion plasmas. Plasma rotation is associated with the appearance of highly peaked density, pressure and temperature profiles, the suppression of some instabilities and the creation of transport barriers, either in the H-mode or in discharges with Internal Transport Barriers (see for example \cite{gunter2000SimatthigeleiontemdisinttrabarASDupg}, \cite{romanelli2011OveJETres} and the review papers \cite{itoh1996rolelefiecon}-\cite{challis2004useinttrabartokpla_2}). Rotation can be the result of an external source, such as the electromagnetic power and neutral beam injection used for plasma heating and current drive, or can arise spontaneously (intrinsic flows). It is believed that the reduction of turbulence and the mode decorrelation is the mechanism through which flow affects the confinement. In addition, recent evidence indicates that the spatial variation of the flow affects the confinement even more strongly than the flow itself, making plasma rotation a significant ingredient for the exploitation of future large machines such as ITER and, later, DEMO, in which, owing to the large plasma volume, it would be difficult to drive large flow velocities externally; in such machines intrinsic rotation may therefore be important \cite{solomon2007Momconlowtor}. It is proposed that in JET the driving mechanism for the appearance of intrinsic rotation is the pressure gradient \cite{eriksson1997TorrotICRH-mJET}. 
The heating methods used to drive plasma rotation also deposit energy into the charged particles in a specific direction and therefore generate significant pressure anisotropy in the plasma \cite{2007NucFu..47S.264F, 2001PPCF...43.1441Z, 2011PPCF...53g4021H, 1401.5520v2, 2010PPCF...52f5001P}, thus modifying the momentum conservation equation and ultimately affecting the equilibrium and stability properties. The magnitude of the anisotropy can be significant: depending on the direction of the energy deposition, the pressure component parallel to that direction increases. For example, in a MAST NBI discharge the ratio $p_\parallel / p_\perp$ has been found to be 1.7 \citep{2011PPCF...53g4021H}, and for a JET ICRH discharge the ratio $p_\perp / p_\parallel$ has been found as high as 2.5 \citep{2001PPCF...43.1441Z}, with $p_\parallel$ and $p_\perp$ the pressures parallel and perpendicular to the magnetic field lines. Equilibrium is the starting point for stability and transport studies. For axisymmetric systems, such as tokamaks, the governing equation is the so-called Grad-Shafranov (GS) equation, whose analytic solutions, such as the Solov\'ev one, have been found and used for equilibrium and stability studies. These analytic solutions are subject to limitations which only numerical solutions can lift. To this end, fixed and free boundary equilibrium codes have been developed to solve the equation in realistic situations, i.e. for realistic choices of the boundary (or of the currents in the coils for free boundary codes) and of the respective free functions, based on information from experimental data. Specifically, here we refer to the HELENA code, a fixed boundary solver of the GS equation using finite elements, which is used in the present study; further details are given in Sec. 3. 
In the presence of plasma rotation and pressure anisotropy the equilibrium is governed by a generalised Grad-Shafranov (GGS) equation together with a Bernoulli-type equation involving the effective pressure \cite{morozov1980Steplaflomagfie,hameiri1983equstarotpla, 2016PPCF58d5022E, 2009PPCF...51h5011C}. From a mathematical point of view, for compressible flows the GGS equation can be either elliptic or hyperbolic; the transition depends on certain critical values of the poloidal velocity. It must be pointed out that, owing to the axisymmetry of the said configurations, the toroidal velocity is inherently incompressible. In the case of compressible flows the GGS equation and the Bernoulli equation are coupled through the density which, in that case, is not a surface quantity. The first elliptic region is experimentally accessible \cite{velocity.scaling, mcclements2010steady}, and many codes have been developed to solve the system of these two coupled equations, such as DIVA \cite{semenzato1984ComsymideMHDfloequ, strumberger2005NumMHDstastutorrotvisreswalcurhol_2}, FINESSE \cite{belien2002FINAxiMHDEquFlo} and FLOW \cite{guazzotto2004Numstutokequarbflo}, including a version of FLOW that takes into account non-thermal populations \cite{2009PPCF...51c5014H}. In order to close the system, an adiabatic or isothermal equation of state is adopted in the aforementioned codes; these equations of state are associated with isentropic or isothermal magnetic surfaces, respectively. The problem of equilibrium with anisotropic pressure and toroidal rotation has been examined by extensions of HELENA \cite{Qu:2014:0741-3335:75007} and EFIT++ \cite{Fitzgerald:2013:0029-5515:113040}. For incompressible flow the density is uniform on the magnetic surfaces, so that the GGS equation (Eq. (\ref{1})) becomes elliptic and decouples from the Bernoulli equation. 
In the case of a fixed boundary, convergence is guaranteed under the requirement of monotonicity for the free functions \cite{courant1966Metmatphy}. A code that also assumes the density uniform on magnetic surfaces is TRANSP \cite{budny1995SimalpparTFTDTsuphigfuspow}. Deviations of the density from uniformity on magnetic surfaces have been observed experimentally; thus the use of both compressible and incompressible assumptions in codes contributes to a better understanding. The aim of this work is to develop further the previous study \cite{2016PhPl...23g2507P}, in which the fixed boundary equilibrium code HELENA was extended to include incompressible plasma flow parallel to the magnetic field, by adding pressure anisotropy, and to examine the combined effect of rotation and anisotropy on the equilibrium properties. The code is extended by taking advantage of the fact that the governing GGS equation, Eq. (\ref{1}), can be put by a transformation in a form identical with the well known static, isotropic-pressure GS equation. In the following section the GGS equation for plasmas with incompressible flow and pressure anisotropy is reviewed. In Sec. 3 the HELENA code extended to parallel flow and pressure anisotropy is presented, and the impact of rotation and pressure anisotropy on certain equilibrium quantities is examined for specific constructed equilibria. In Sec. 4 the main conclusions are presented. 
\section{Equilibrium equations} The equations governing a magnetically confined plasma with incompressible flow and pressure anisotropy are the following (\cite{1969PlPh...11..211D} and ref 6 therein, \cite{2016PPCF58d5022E}): \begin{align} \vec{\nabla}\cdot (\rho \vec{\upsilon})=0 \\ \rho (\vec{\upsilon}\cdot\vec{\nabla} \vec{\upsilon}) + \vec{\nabla}\cdot \mathds{P} = \vec{J}\times \vec{B} \label{momeq} \\ \vec{\nabla}\times \vec{B}=\mu_0 \vec{J} \\ \vec{\nabla}\cdot \vec{B}=0 \\ \vec{\nabla} \times \vec{E} =0 \\ \vec{E}+\vec{\upsilon}\times \vec{B}=0 \\ \mathds{P}=p_\perp \mathds{I} +\frac{\sigma}{\mu_0}\vec{B}\vec{B} \end{align} where $\rho$ is the mass density, $\vec{\upsilon}$ the plasma velocity, $\mathds{P}$ the pressure tensor, $\vec{J}$ the current density, $\vec{B}$ the magnetic field, $\vec{E}$ the electric field, $\mu_0$ the vacuum permeability and the quantity \begin{equation} \sigma = \mu_0 \frac{p_\parallel - p_\perp}{|\vec{B}|^2} \end{equation} measures the pressure anisotropy with respect to the directions parallel ($p_\parallel$) and perpendicular ($p_\perp$) to the magnetic field. 
Assuming an axisymmetric system and defining an effective pressure: $$ \overline{p}=\frac{p_\parallel +p_\perp}{2} $$ one obtains the following Generalized Grad-Shafranov equation \cite{2016PPCF58d5022E} \begin{align} (1-\sigma - M_p^2) \Delta^\star \psi + \frac{1}{2}\left(1-\sigma-M_p^2\right)^\prime |\nabla \psi|^2 + \frac{1}{2}\left(\frac{X^2}{1-\sigma - M_p^2}\right)^\prime \nonumber \\ +\mu_0 R^2 \overline{p}_s^\prime + \mu_0 \frac{R^4}{2}\left[ \frac{\rho(\Phi^\prime)^2}{1-\sigma - M_p^2}\right]^\prime = 0 \label{1} \end{align} Here, $\psi(R,z)$ is the poloidal magnetic flux function labelling the magnetic surfaces, where ($R,\phi, z$) are cylindrical coordinates and $\phi$ is the ignorable coordinate; the function $M_p(\psi)$ is the Alfv\'en Mach number of the fluid velocity along the poloidal direction; $X(\psi)$ is a surface quantity that refers to the toroidal magnetic field, $B_\phi=I/R$. The relation that connects these two quantities is $I=(X-R^2\sqrt{\rho}M_p\Phi^\prime)/(1-\sigma - M_p^2)$, with $\Phi(\psi)$ being the electrostatic potential. In the static case $\overline{p}_s(\psi)$ coincides with the effective pressure; $B$ is the magnetic field modulus, depending on surface quantities and the radial coordinate; $\Delta^\star=R^2\nabla\cdot(\nabla/R^2)$; derivatives with respect to $\psi$ are denoted by the prime. As mentioned before, a consequence of incompressibility is that the density $\rho(\psi)$ becomes a surface quantity, leading to the decoupling of the Bernoulli equation from the GGS (\ref{1}): \begin{equation} \overline{p}=\overline{p}_s(\psi) - \varrho \left( \frac{\upsilon^2}{2} - \frac{R^2 (\Phi^\prime)^2}{1-M_p^2}\right) \label{2} \end{equation} with $\upsilon$ being the velocity modulus. The functions $M_p(\psi)$, $\sigma(\psi)$, $X(\psi)$, $\overline{p}_s(\psi)$, $\rho(\psi)$ and $\Phi(\psi)$ are free. 
In addition, the parallel and perpendicular components of the pressure tensor are given by: \begin{align} p_\perp = \overline{p} - \sigma \frac{B^2}{2\mu_0} \label{pv} \\ p_\parallel = \overline{p} + \sigma \frac{B^2}{2\mu_0} \label{ppar} \end{align} Details of the derivation of Eq. (\ref{1}) are given in \cite{2016PPCF58d5022E}. The main steps are, first, to express the divergence-free fields ($\vec{B}$, $\vec{J}$ and $\rho \vec{v}$) in terms of scalar quantities and, second, to project the momentum equation (\ref{momeq}) and Ohm's law along the toroidal direction, $\vec{B}$ and $\vec{\nabla}\psi$. The projections yield four first integrals in the form of surface quantities together with Eqs. (\ref{1}) and (\ref{2}). Applying the transformation \cite{1993NucFu..33..963C} \begin{equation} u(\psi) = \int_{0}^{\psi}\left\lbrack 1 - \sigma(f) - M_p^{2}(f)\right\rbrack^{1/2} df \label{3} \end{equation} to Eq. (\ref{1}), the latter becomes \begin{align} \Delta^\star u + \frac{1}{2}\frac{d}{du}\left(\frac{X^2}{1- \sigma - M_p^2}\right) + \mu_0 R^2\frac{d \overline{p}_s}{d u} \nonumber \\ + \mu_0\frac{R^4}{2}\frac{d}{du}\left[(1- \sigma) \rho\left(\frac{d \Phi}{du}\right)^2\right] = 0 \label{4} \end{align} Note that the latter equation does not contain a quadratic term proportional to $|{\bf\nabla}u|^{2}$. 
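Numerically, the transformation (\ref{3}) and its inverse (\ref{5}) reduce to cumulative quadrature of the $\pm 1/2$ powers of $1-\sigma-M_p^2$ along the flux label. A minimal sketch using the trapezoid rule (the profile functions and names are illustrative; any quadrature rule would do):

```python
import math

def u_of_psi(psi_grid, sigma, mach2):
    """Cumulative trapezoid rule for u(psi) = int_0^psi sqrt(1 - sigma - M_p^2) df.

    `sigma` and `mach2` are callables returning the free surface functions;
    the integrand must stay positive (the first elliptic region).
    The inverse map psi(u) uses the same routine with the exponent -1/2.
    """
    u = [0.0]
    for a, b in zip(psi_grid, psi_grid[1:]):
        fa = math.sqrt(1.0 - sigma(a) - mach2(a))
        fb = math.sqrt(1.0 - sigma(b) - mach2(b))
        u.append(u[-1] + 0.5 * (fa + fb) * (b - a))
    return u

# Constant sigma = 0.1 and M_p^2 = 0.15 simply rescale psi by sqrt(0.75),
# since the transformation only relabels the magnetic surfaces.
grid = [i / 100.0 for i in range(101)]
u = u_of_psi(grid, lambda p: 0.1, lambda p: 0.15)  # u[-1] ~= sqrt(0.75)
```

For constant profiles the trapezoid rule is exact, which makes the rescaling property a convenient correctness check.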
Once a solution of (\ref{4}) is obtained, the equilibrium can be completely constructed through calculations in the $u$-space by employing (\ref{3}) and the inverse transformation \begin{equation} \psi(u) = \int_{0}^{u}\left\lbrack 1 - \sigma(f) - M_p^{2}(f)\right\rbrack^{-1/2} df \label{5} \end{equation} Specifically, the correspondence between the $u$-space and the $\psi$-space for some quantities is: \begin{align} \overline{p}=\overline{p}_s(u)-\varrho(u)\left[\frac{\upsilon^2}{2} -\frac{R^2(1-\sigma)}{1-\sigma -M_p^2} \left(\frac { d\Phi(u) } { du } \right)^2\right] \label{pres1} \\ \vec{B}=I(\psi)\vec{\nabla}\phi -\vec{\nabla}\phi\times\vec{\nabla}\psi= I(u)\vec{\nabla}\phi-\frac{d\psi}{du}\vec{\nabla}\phi\times\vec{\nabla}u \label{magfiel} \\ \vec{J}=\frac{1}{\mu_0}\left(-\Delta^*\psi\vec{\nabla}\phi+\vec{ \nabla}\phi\times \vec{\nabla}I(\psi)\right)= \nonumber \\ \frac{1}{\mu_0}\left[-\left(\frac{d\psi}{du}R^2\vec{\nabla}\cdot\left(\frac{\vec{\nabla}u}{R^2}\right)+ \vec{\nabla} u \cdot \vec{\nabla}\frac{d\psi}{du}\right)\vec{\nabla}\phi +\frac{dI(u)}{du}\vec{ \nabla}\phi\times \vec{\nabla}u\right] \label{curden}\\ \vec{E}=-\vec{\nabla} \Phi= -\frac{d \Phi(\psi)}{d \psi} \vec{\nabla} \psi= -\frac{d \Phi(u)}{d u} \vec{\nabla} u \end{align} For flows aligned with the magnetic field ($\Phi^\prime=0$), Eq. (\ref{4}) takes the form of the usual GS equation and it can be shown that the poloidal, toroidal and total velocity Alfv\'{e}n Mach numbers are exactly equal; thus we drop the subscript in the Mach number. One additional surface quantity, $K$, can be obtained by setting \begin{equation} \varrho\vec{\upsilon}=K\vec{B}, \label{veloc} \end{equation} Applying the divergence operator and taking into account the continuity equation, $\vec{\nabla}\cdot\left(\varrho\vec{\upsilon}\right)=0$, one obtains $\vec{\nabla}K\cdot\vec{B}=0 $, as was shown in \cite{throumoulopoulos2003axiresmagequflofrePfidif}. Finally, the Bernoulli Eq. (\ref{2}) with the aid of Eq. 
(\ref{veloc}), becomes \begin{equation} \overline{p}=\overline{p}_s(\psi) - \frac{1}{2\mu_0}M^2B^2(\psi, R)= \overline{p}_s(u) - \frac{1}{2\mu_0}M^2B^2(u, R) \label{pres2} \end{equation} \section{Numerical equilibria with parallel plasma rotation and pressure anisotropy} In order to examine the impact of parallel plasma rotation in combination with pressure anisotropy we constructed numerical equilibria by remapping and making appropriate use of the code HELENA. The code is a fixed boundary equilibrium solver for the static GS equation written as: \begin{equation} \Delta^*\psi = -F\frac{dF}{d\psi} - \mu_0R^2\frac{dP}{d\psi} = -\mu_0Rj_{tor} \label{eq5} \end{equation} The code makes use of isoparametric bi-cubic Hermite finite elements to solve the above equation, employing the Galerkin method within a non-linear iteration scheme and using straight-field-line coordinates. The boundary condition consists of specific values for the function $\psi$ on a predefined curve which, for the code, coincides with the last closed flux surface of its computational domain. The technique has proved to produce high quality results with fast convergence \cite{KonzHELENA}. The following mapping can be established by comparison of Eq. (\ref{4}) for parallel plasma rotation ($\Phi^\prime=0$) with Eq. (\ref{eq5}): \begin{align} \psi \longleftrightarrow u \label{eq6} \\ F\frac{dF}{d\psi} \longleftrightarrow \frac{1}{2}\frac{d}{du}\left( \frac{X^2}{1 -\sigma - M^2}\right) \label{eq7} \\ P(\psi) \longleftrightarrow \overline{p}_s(u) \label{eq8} \end{align} On the basis of this correspondence, the use of HELENA for the computation of stationary equilibria with parallel plasma flow and pressure anisotropy is possible, with the following clarification: the solver of the code remains unchanged, but the input/output quantities to it no longer refer to the $\psi$-space. 
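Once the solver output is remapped, the physical pressures follow pointwise from the Bernoulli relation (\ref{pres2}) and the anisotropy split (\ref{pv})--(\ref{ppar}). A sketch in SI units (the argument names mirror the symbols in the text and are otherwise arbitrary):

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability (SI)

def pressures(p_s, mach2, sigma, b_field):
    """Pointwise pressures for parallel flow.

    Bernoulli relation:  p_bar  = p_s  - M^2   B^2 / (2 mu0)
    Anisotropy split:    p_par  = p_bar + sigma B^2 / (2 mu0)
                         p_perp = p_bar - sigma B^2 / (2 mu0)
    Returns (p_bar, p_par, p_perp).
    """
    w = b_field ** 2 / (2.0 * MU0)
    p_bar = p_s - mach2 * w
    return p_bar, p_bar + sigma * w, p_bar - sigma * w

# Static isotropic limit (M^2 = sigma = 0): all three reduce to p_s, and
# the definition sigma = mu0 (p_par - p_perp) / B^2 is recovered exactly.
```

This is only the per-point post-processing step; in the extended code the surface quantities $M^2$ and $\sigma$ and the field modulus come from the remapped solver output.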
A correspondence is required between the $\psi$-space and the $u$-space for the input quantities and for the quantities calculated by the solver, respectively. For the minimum set of basic quantities, the aforementioned mapping is: \begin{align} P_{\mbox{\scriptsize HELENA}}\longleftrightarrow \overline{p}_s \label{eq8a} \\ F_{\mbox{\scriptsize HELENA}}\longleftrightarrow \frac{X}{\sqrt{1- \sigma - M^2}} \\ \psi_{\mbox{\scriptsize HELENA}}\longleftrightarrow u \label{eq8c} \end{align} The mapping for the magnetic field, the current density and the pressure, obtained by applying Eqs. (\ref{eq8a})-(\ref{eq8c}) to (\ref{magfiel}), (\ref{curden}) for $\Phi^\prime=0$ and (\ref{pres2}), is: \begin{align} \addtolength{\itemsep}{-5mm} \vec{B}=\frac{F_{\mbox{\scriptsize HELENA}}}{\sqrt{1- \sigma - M^2}}\vec{\nabla}\phi-\frac{1}{\sqrt{1- \sigma - M^2}} \vec{\nabla}\phi\times\vec{\nabla}u \label{eq:9} \\ \vec{J}=\left[\frac{-1}{\sqrt{1- \sigma - M^2}}\Delta^*u+\frac{1}{2}\frac{1}{(1- \sigma - M^2)^{3/2} } \frac{d(\sigma + M^2)}{du} |\vec{\nabla}u|^2\right]\vec{\nabla}\phi+ \nonumber \\ \frac{d}{du}\left(\frac{F_{\mbox{ \scriptsize HELENA}}}{ \sqrt{1- \sigma - M^2}}\right) \vec{\nabla} \phi\times\vec{\nabla}u \label{eq:10} \\ \overline{p}=P_{\mbox{\scriptsize HELENA}}-\frac{1}{2 \mu_0 R^2}\frac{M^2}{1-\sigma-M^2}\left(F_{\mbox{\scriptsize HELENA}}^2+|\vec{\nabla} u|^2\right) \label{eq:11} \end{align} where the subscript HELENA refers to the quantities computed by the solver of the Grad-Shafranov equation. A point of interest is that, since the transformation (\ref{3}) merely relabels the magnetic surfaces and HELENA is a fixed boundary solver (so the ``radial'' dependence of the magnetic field is not affected by the plasma rotation and pressure anisotropy), the safety factor $q$ is independent of flow and anisotropy, as long as the input to the solver remains fixed.
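The return to the $\psi$-space relies on the inverse transformation (\ref{5}), which is a one-dimensional quadrature along the flux label. A minimal Python sketch, with constant illustrative profiles rather than the ones used in the runs:

```python
import math

def psi_of_u(u, sigma, M2p, n=2000):
    # Inverse transformation, Eq. (5) of the text:
    # psi(u) = int_0^u [1 - sigma(f) - M_p^2(f)]^(-1/2) df,
    # evaluated with the composite trapezoidal rule; sigma, M2p are callables.
    h = u / n
    total = 0.0
    for i in range(n + 1):
        f = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w / math.sqrt(1.0 - sigma(f) - M2p(f))
    return h * total

# Static isotropic limit: psi(u) = u, i.e. the two flux labels coincide.
print(psi_of_u(1.0, lambda f: 0.0, lambda f: 0.0))   # ~ 1.0
```

For non-constant $\sigma$ and $M_p^2$ the same quadrature gives the stretching of the flux label produced by flow and anisotropy.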
One must clarify that the input to the solver is not the same as the input to the code since, in the presence of rotation and pressure anisotropy, the latter depends on the integral transformation. Specifically, the safety factor is given by the relation \cite{ideal_magnetohydrodynamics} \begin{equation} q(\psi)=\frac{1}{2\pi}\int^{2\pi}_0 \left(\frac{r B_\phi}{R B_\theta}\right)_S d\theta=\frac{F(\psi)}{2\pi}\oint\frac{ d\ell_p}{ R^2 B_p}=\frac{F(\psi)}{2\pi|\nabla\psi|}\oint\frac{ d\ell_p}{ R} \label{eq:q1} \end{equation} The integration in the last two expressions is performed along the curve of a magnetic surface in the poloidal plane, and $r$, $\theta$ are polar coordinates of a system with its origin located at the position of the magnetic axis, with $d\ell_p=\sqrt{(dr)^2+(rd\theta)^2}$, $B_p=\sqrt{B_r^2+B_\theta^2}=|\nabla\psi|/R$. Applying the integral transformation to $F(\psi)$ and $\nabla\psi$ and taking into account (\ref{eq:9}) for the components of the magnetic field, we get \begin{equation} q(\psi)=q(u)= \frac{F(u)}{2\pi|\nabla u|}\oint\frac{ d\ell_p}{ R} \label{eq:q2} \end{equation} Therefore, as long as the input to the solver remains the same, so does the safety factor. It is worth pointing out that the solutions of (\ref{eq5}), and therefore of the extended code, hold for arbitrary functions of the Mach number $M(u)$, anisotropy $\sigma(u)$ and density $\varrho(u)$. By varying the input quantities of the modified code we obtained a number of equilibria. As an example, the magnetic surfaces of an ITER-like configuration, with the input values summarized in Table \ref{tab:0} and the input functions for the quantities $P$ and $FF'$ shown in Figs. \ref{fig:ipp} and \ref{fig:ffp}, respectively, are presented in Fig. \ref{fig:surf}.
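The loop integral appearing in (\ref{eq:q1}) and (\ref{eq:q2}) is easy to validate on a simple geometry. For a circular magnetic surface of minor radius $r$ centred at $R_0$ (an illustrative assumption; the actual surfaces are shaped), $\oint d\ell_p/R = 2\pi r/\sqrt{R_0^2 - r^2}$, and a direct numerical sum reproduces this:

```python
import math

def loop_integral(R0, r, n=100000):
    # oint dl_p / R on a circular surface of minor radius r centred at R0:
    # int_0^{2pi} r dtheta / (R0 + r cos(theta)), via a periodic Riemann sum.
    h = 2.0 * math.pi / n
    return sum(r * h / (R0 + r * math.cos(i * h)) for i in range(n))

R0, r = 6.0, 2.0                      # ITER-like major and minor radii (m)
numeric = loop_integral(R0, r)
analytic = 2.0 * math.pi * r / math.sqrt(R0 ** 2 - r ** 2)
```

The safety factor then follows from (\ref{eq:q2}) once $F(u)$ and $|\nabla u|$ are evaluated on the surface.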
\begin{table} \centering \caption{Input values of the basic quantities used for the equilibria solutions.} \begin{tabular}{| c | c | c | c |} \hline $R_0$ & $B_{\phi 0}$ & $I_{plasma}$ & $\psi_{axis}$ \\ \hline 6.00 m & 5.3 T & 15.1 MA & 0.0 Wb \\ \hline \end{tabular} \label{tab:0} \end{table} \begin{figure}[ht!] \begin{center} \psfrag{ppr}{$\overline{p}'$} \psfrag{psi}{$\psi_{norm}$} \includegraphics[scale=0.46]{pprinput.eps} \caption{The input profile of the derivative of the effective pressure with respect to a normalized $\psi$ defined as $\psi/\psi_{boundary}$.} \label{fig:ipp} \end{center} \end{figure} \begin{figure}[ht!] \begin{center} \psfrag{ffpr}{$FF'$} \psfrag{psi}{$\psi_{norm}$} \includegraphics[scale=0.46]{ffprinputb.eps} \caption{The input profile of $FF'$ used in the runs of the code with respect to normalized $\psi$.} \label{fig:ffp} \end{center} \end{figure} \begin{figure}[ht!] \begin{center} \psfrag{R}{$R(m)$} \psfrag{z}{$z(m)$} \includegraphics[scale=0.28]{equilibrium1.eps} \caption{The magnetic surfaces of a diamagnetic equilibrium associated with the values of Table \ref{tab:0} and the input profiles of Figs. \ref{fig:ipp} and \ref{fig:ffp}.
The magnetic axis is located at ($R_{a}=6.394$ m, $z_{a}=0.54585$ m), where the toroidal magnetic field is 4.969 T, with a respective vacuum value of 5.3 T.} \label{fig:surf} \end{center} \end{figure} In order to calculate all the equilibrium quantities we modelled the free functions $M^2(u)$ and $\sigma(u)$ in two different ways each, e.g., \begin{eqnarray} M^2& =& M_0^2 \left(u^m - u_b^m\right)^n \label{prof1} \\ M^2&=& C \left[\left(\frac{u}{ u_b}\right)^m\left( 1-\left(\frac{u}{ u_b}\right)\right)\right]^n \label{prof2} \\ \sigma& =& \sigma_0 \left(u^k - u_b^k\right)^\ell \label{prof3} \\ \sigma&=& D \left[\left(\frac{u}{ u_b}\right)^k\left( 1-\left(\frac{u}{ u_b}\right)\right)\right]^\ell \label{prof4} \end{eqnarray} where $$ C=M_0^2\left(\frac{m+n}{m}\right)^m \left(\frac{n}{m+n}\right)^n $$ and $$ D=\sigma_0\left(\frac{k+\ell}{k}\right)^k \left(\frac{\ell}{k+\ell}\right)^\ell $$ In this notation, $u_b$ refers to the plasma boundary; the free parameters $M_0^2$ and $\sigma_0$ correspond to the maximum values of $M^2$ and $\sigma$; $m$ and $n$ are related to the flow shear and the position of the maximum of $M^2$; and $k$ and $\ell$ are related to the spatial anisotropy variation (shear) and the position of $\sigma_0$. In particular, (\ref{prof1}) and (\ref{prof3}) are peaked on-axis, while (\ref{prof2}) and (\ref{prof4}) are peaked off-axis. This specific choice is associated with the respective auxiliary heating schemes of tokamaks. For parallel rotation the density does not appear explicitly in the equilibrium equations and hence there is no need to specify it. The scaling $ M^2=\frac{v^2}{B^2/(\mu_0 \rho)} \sim \alpha \frac{v_s^2}{B^2/(\mu_0 \rho)}\sim \alpha \frac{\gamma P}{B^2/(\mu_0 \rho)}\sim \alpha \beta $, where $v_s=\left(\gamma P/\rho\right)^{1/2}$ is the sound speed and $\beta=2P/(B^2/\mu_0)$, can be used to estimate $M$.
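A rough numerical reading of this scaling, with the order-unity factor $\gamma/2$ absorbed into $\alpha$, can be sketched as follows (the values of $\alpha$ and $\beta$ are typical figures, not measurements):

```python
import math

# Order-of-magnitude estimate of the Mach number from M^2 ~ alpha * beta.
beta = 0.01                     # typical tokamak beta
for alpha in (0.01, 0.1):       # alpha ~ (v/v_s)^2 for v up to ~v_s
    M = math.sqrt(alpha * beta)
    print(f"alpha = {alpha}: M ~ {M:.3f}")
```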
Since the maximum experimental value of $v$ in tokamaks is of the order of $v_s$ ($\alpha \sim 0.01-0.1$) and $\beta \sim 0.01$, the experimental values of $M$ lie in the interval ($10^{-2}, 10^{-1}$). In small tokamaks, where the torque input can produce large plasma flow due to the small volume, the values can be even larger. The choice of peaked on- and off-axis Mach number functions was motivated by experimental observations of equilibria with plasma rotation \citep{crombe2005poloidal, 2001NucFu..41..865S, fiore2012production}, while the numerical values of $M_0$ \citep{devries2008ScarotmomconJETpla}, in some cases well above the experimental ones, are used for illustrative reasons. Similarly, the choice of the respective anisotropy profiles is based on physical considerations related to the position where the energy of the heating beams is deposited. In other studies \citep{2010PPCF...52f5001P, 2011PPCF...53g4021H, 1401.5520v2} the anisotropy was located in the plasma core region. In some cases it is possible to have anisotropy peaked off-axis, when the heating is focused in some region away from the magnetic axis. Regarding the values of $\sigma_0$, it is recalled that anisotropy as high as $p_\perp / p_\parallel \approx 2.5$ has been reported in JET \citep{2001PPCF...43.1441Z}. The experimental profiles, characterized either by a maximum at the magnetic axis or by one at a point within the plasma volume, were the motivation for the specific choice. At the same time, the rotation or the pressure anisotropy is localized in a finite region of the poloidal plane. However, it should be clarified that the specific choices made for the input profiles do not reproduce precisely experimental profiles throughout the poloidal cross-section. Profile examples for the choices (\ref{prof1}) and (\ref{prof2}) for $M$, and (\ref{prof3}) and (\ref{prof4}) for $\sigma$, obtained by varying the free parameters, are given in Figs.
\ref{fig:mach1}, \ref{fig:sigma1} and \ref{fig:sigma2}, respectively. \begin{figure}[ht!] \begin{center} \psfrag{m}{$M$} \psfrag{r}{$\psi$ (Wb)} \psfrag{m20223s30252 }{\hspace{0.8cm}\small{Case 1}} \psfrag{m30452s20223 }{\hspace{0.8cm}\small{Case 2}} \includegraphics[scale=0.46]{mach1.eps} \caption{Plots of the Mach number profile with respect to $\psi$ used in the calculated equilibria. Case 1: (blue \textcolor{blue}{$+$}) peaked on-axis (Eq. (\ref{prof1})) Mach number with $M_0=0.02$, $m=2$, $n=3$; Case 2: (red \textcolor{red}{$\times$}) peaked off-axis (Eq. (\ref{prof2})) Mach number with $M_0=0.04$, $m=5$, $n=2$.} \label{fig:mach1} \end{center} \end{figure} \begin{figure}[ht!] \begin{center} \psfrag{s}{$\sigma$} \psfrag{r}{$\psi$ (Wb)} \psfrag{m0s20223 }{\hspace{0.35cm}\small{Case 1}} \psfrag{m0s20226 }{\hspace{0.35cm}\small{Case 2}} \psfrag{m0s203523 }{\hspace{0.5cm}\small{Case 3}} \includegraphics[scale=0.46]{sigma1.eps} \caption{Plots of the on-axis peaked $\sigma$ profile (Eq. (\ref{prof3})) with respect to $\psi$ for three cases. Case 1: (blue \textcolor{blue}{$+$}) $\sigma_0=0.02$, $k=2$, $\ell=3$; Case 2: (red \textcolor{red}{$\times$}) $\sigma_0=0.02$, $k=2$, $\ell=6$; Case 3: (green \textcolor{green}{$\triangledown$}) $\sigma_0=0.035$, $k=2$, $\ell=3$.} \label{fig:sigma1} \end{center} \end{figure} \begin{figure}[ht!] \begin{center} \psfrag{s}{$\sigma$} \psfrag{r}{$\psi$ (Wb)} \psfrag{m0s30252 }{\hspace{0.2cm}\small{Case 1}} \psfrag{m0s300552 }{\hspace{0.35cm}\small{Case 2}} \psfrag{m0s300524 }{\hspace{0.35cm}\small{Case 3}} \psfrag{m30424s30241 }{\hspace{0.75cm}\small{Case 4}} \includegraphics[scale=0.46]{sigma2.eps} \caption{Plots of the off-axis peaked $\sigma$ profile (Eq. (\ref{prof4})) with respect to $\psi$ for various cases.
Case 1: (blue \textcolor{blue}{$+$}) $\sigma_0=0.02$, $k=5$, $\ell=2$; Case 2: (red \textcolor{red}{$\times$}) $\sigma_0=0.005$, $k=5$, $\ell=2$; Case 3: (green \textcolor{green}{$\triangledown$}) $\sigma_0=0.005$, $k=2$, $\ell=4$; Case 4: (black \textcolor{black}{$\diamondsuit$}) $\sigma_0=0.02$, $k=4$, $\ell=1$.} \label{fig:sigma2} \end{center} \end{figure} By inspection of Eq. (\ref{4}) one expects the rotation to have a weak contribution. However, as already mentioned in Sec. 1, the velocity shear seems to be more important than the velocity amplitude for the transition to improved confinement modes in tokamaks, a result reported in \citep{2016PhPl...23g2507P} in association with equilibrium profiles compatible with those present in configurations with transport barriers. The impact of pressure anisotropy is expected to be qualitatively the same as that of the rotation for $\sigma>0$, though quantitatively stronger, because the Mach number enters the equations squared while $\sigma$ enters linearly. In addition, the impact of $\sigma$ on the equilibrium differs from that of $M$ because, depending on the energy deposition direction, it can take negative values. The presence of pressure anisotropy also permits larger values of the Mach number, though these lie outside the experimental limits for large tokamaks. One more point worth noting is that the two free quantities ($\sigma$, $M$) can potentially be used for shaping the equilibrium profiles, thus affecting the stability properties of the configuration, especially with respect to gradient-driven instabilities. We examined the effect of plasma flow and pressure anisotropy on some equilibrium quantities by varying the parameters of the Mach number profile ($M_0$, $n$, $m$) and of the pressure anisotropy function ($\sigma_0$, $k$, $\ell$).
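For reference, the shape of the peaked off-axis family (\ref{prof2}), (\ref{prof4}) can be probed numerically: writing $x=u/u_b$, the bracketed factor $x^m(1-x)$ is maximal at $x=m/(m+1)$ (respectively $k/(k+1)$), independently of the outer exponent. The sketch below checks this; the normalization constants $C$ and $D$ quoted above are not re-derived here:

```python
# Shape of the peaked off-axis family (prof2)/(prof4):
# f(x) = [ x^m (1 - x) ]^n with x = u/u_b; the bracket is maximal at
# x = m/(m+1), independently of the outer exponent n.
def off_axis(x, m, n):
    return (x ** m * (1.0 - x)) ** n

m, n = 5, 2                                  # parameters of Case 2 in the Mach number figure
grid = [i / 10000 for i in range(10001)]
x_peak = max(grid, key=lambda x: off_axis(x, m, n))
print(x_peak)        # ~ m/(m+1) = 5/6
```

The profile vanishes at both the magnetic axis and the boundary, as required for an off-axis peak.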
We note here that no external momentum or energy sources have been included in the equilibrium equations, and therefore the total energy of the system is conserved. Sustaining the configuration after having achieved a desired performance, and consequently removing the external energy and momentum sources, is desirable for the operation of a tokamak reactor. A few general remarks regarding the impact of rotation and pressure anisotropy are that the combination of these two quantities allows the equilibrium profiles to be shaped with great flexibility; specifically, the fact that $\sigma$ can become negative broadens the region of permissible values of $M$, thus giving access to higher plasma rotation velocities. This, in turn, leads to a stronger impact of the rotation on those equilibrium quantities where, in addition to the term $(1 - \sigma - M^2)$, there exist other explicit terms in $M$, $M^2$ or $(M^2)'$ (cf. Eqs. \eqref{4} and \eqref{eq:9}-\eqref{eq:11}). The impact of plasma rotation and pressure anisotropy with respect to the maximum values as well as to the shear of the profiles of these quantities is examined in detail in what follows. By varying $M_0$ (or $\sigma_0$) on the one hand and $n$, $m$ ($\ell$, $k$) on the other, we examine the impact of plasma rotation (pressure anisotropy) and its shear on the equilibrium. We focus mainly on the effect of pressure anisotropy and compare it to that of parallel plasma rotation, the effect of which was individually examined in \cite{2016PhPl...23g2507P}. By inspection of Eq. (\ref{eq:11}), it is expected that for given $p_s$, plasma rotation and pressure anisotropy with $\sigma > 0$ reduce the effective pressure compared to the static and isotropic value (Fig. \ref{fig:pres1}). The fact that there are now two independently varied quantities ($M$, $\sigma$), one of which can be negative, permits us to shape the profile with great flexibility.
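The reduction of the effective pressure by rotation can also be read off directly from (\ref{pres2}); the following sketch uses illustrative values of $p_s$, $M$ and $B$, not output of the code:

```python
import math

MU0 = 4.0e-7 * math.pi        # vacuum permeability (SI units)

def effective_pressure(p_s, M2, B):
    # Eq. (pres2): p_bar = p_s - M^2 B^2 / (2 mu0); any M^2 > 0
    # lowers p_bar below the static value for the same p_s.
    return p_s - 0.5 * M2 * B ** 2 / MU0

p_static = effective_pressure(1.0e5, 0.0, 5.0)       # static limit
p_rot = effective_pressure(1.0e5, 0.02 ** 2, 5.0)    # M = 0.02, B = 5 T
```

Even the modest $M=0.02$ used in the runs lowers the effective pressure by a few thousand Pa at these field strengths.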
It is interesting that the shear of $\bar{p}$ depends more on the maximum value of the pressure anisotropy and less on the values of $k$ and $\ell$. It must be noted, however, that in general the impact of the pressure anisotropy on the effective pressure is negligible; we nevertheless present the observed results for completeness. As is evident from Figs. \ref{fig:pres1} and \ref{fig:pres8}, the peaked off-axis pressure anisotropy profile affects the effective pressure profile to a greater extent than the peaked on-axis one. \begin{figure}[ht!] \begin{center} \psfrag{p}{$\overline{p}$ (Pa)} \psfrag{r}{R(m)} \psfrag{staticiso }{\small{Case 1}} \psfrag{m30424s30252 }{\hspace{0.8cm}\small{Case 2}} \psfrag{m30424s3m0252 }{\hspace{1cm}\small{Case 3}} \psfrag{m30424s30241 }{\hspace{0.8cm}\small{Case 4}} \includegraphics[scale=0.46]{pres1.eps} \caption{Radial profiles of the effective pressure on the mid-plane $z=0$ for various cases of pressure anisotropy and plasma rotation. Case 2: (blue \textcolor{blue}{$+$}) peaked off-axis pressure anisotropy with $\sigma_0=0.02$, $k=5$, $\ell=2$; Case 3: (red \textcolor{red}{$\times$}) peaked off-axis pressure anisotropy with $\sigma_0=-0.02$, $k=5$, $\ell=2$; Case 4: (green \textcolor{green}{$\triangledown$}) peaked off-axis pressure anisotropy with $\sigma_0=0.02$, $k=4$, $\ell=1$. For reference the static effective pressure profile (black $\diamondsuit$, Case 1) is also given.} \label{fig:pres1} \end{center} \end{figure} \begin{figure}[ht!] \begin{center} \psfrag{p}{$\overline{p}$ (Pa)} \psfrag{r}{R(m)} \psfrag{staticiso }{\small{Case 1}} \psfrag{m20223s20223 }{\hspace{0.8cm}\small{Case 2}} \psfrag{m20223s20226 }{\hspace{0.8cm}\small{Case 3}} \psfrag{m20223s203523 }{\hspace{0.92cm}\small{Case 4}} \includegraphics[scale=0.46]{pres8.eps} \caption{Radial profiles of the effective pressure on the mid-plane $z=0$ for various cases of pressure anisotropy and plasma rotation.
Case 2: (blue \textcolor{blue}{$+$}) peaked on-axis pressure anisotropy with $\sigma_0=0.02$, $k=2$, $\ell=3$; Case 3: (red \textcolor{red}{$\times$}) peaked on-axis pressure anisotropy with $\sigma_0=0.02$, $k=2$, $\ell=6$; Case 4: (green \textcolor{green}{$\triangledown$}) peaked on-axis pressure anisotropy with $\sigma_0=0.035$, $k=2$, $\ell=3$. In all cases, the rotation is peaked on-axis with $M_0=0.02$, $m=2$ and $n=3$. For reference the static effective pressure profile (black $\diamondsuit$, Case 1) is also given.} \label{fig:pres8} \end{center} \end{figure} Next, we will focus on the pressure components. It appears that for peaked on-axis $\sigma$, the parallel pressure profile flattens in the core, where the maximum value of $\sigma$ is located, thus helping to reduce pressure-gradient-driven modes, while the higher the values of $\sigma_0$, $k$ and $\ell$, the larger the shear of the pressure component in a region within the plasma volume, as shown in Fig. \ref{fig:pres2}. In addition, plasma rotation has only a weak effect on the pressure components, as is also evident in Fig. \ref{fig:pres2}. Pressure profiles with a high-shear region in the midplane of the poloidal cross-section are observed in discharges with ITBs. \begin{figure}[ht!] \begin{center} \psfrag{p}{$p_\parallel$ (Pa)} \psfrag{r}{R(m)} \psfrag{staticiso }{\small{Case 1}} \psfrag{m0s20223 }{\hspace{0.25cm}\small{Case 2}} \psfrag{m0s20226 }{\hspace{0.25cm}\small{Case 3}} \psfrag{m0s203523 }{\hspace{0.4cm}\small{Case 4}} \includegraphics[scale=0.46]{pres2.eps} \caption{Radial profiles of the parallel pressure on the mid-plane $z=0$ for various cases of the pressure anisotropy parameters and no plasma rotation.
Case 2: (blue \textcolor{blue}{$+$}) peaked on-axis pressure anisotropy with $\sigma_0=0.02$, $k=2$, $\ell=3$; Case 3: (red \textcolor{red}{$\times$}) peaked on-axis pressure anisotropy with $\sigma_0=0.02$, $k=2$, $\ell=6$; Case 4: (green \textcolor{green}{$\triangledown$}) peaked on-axis pressure anisotropy with $\sigma_0=0.035$, $k=2$, $\ell=3$. The static effective pressure (black $\diamondsuit$, Case 1) is plotted for reference.} \label{fig:pres2} \end{center} \end{figure} As already mentioned above, the fact that in the presence of anisotropy $\sigma$ can be either positive or negative (see \cite{2001PPCF...43.1441Z}, \cite{2011PPCF...53g4021H}), while it vanishes for isotropic pressure, allows one to explore additional possible experimental configurations, in connection with the direction of the applied heating. The sign of $\sigma$ appears to affect the parallel and perpendicular pressure components equally, as is evident in Figs. \ref{fig:pres3}, \ref{fig:pres6} and \ref{fig:pres9}, regardless of whether the maximum of $\sigma$ is located at the magnetic axis or at another point within the plasma volume, thus not favouring a specific set-up. \begin{figure}[ht!] \begin{center} \psfrag{p}{$p_\parallel$ (Pa)} \psfrag{r}{R(m)} \psfrag{staticiso }{\small{Case 1}} \psfrag{m20223s30252 }{\hspace{0.8cm}\small{Case 2}} \psfrag{m20223s3m0252 }{\hspace{1cm}\small{Case 3}} \includegraphics[scale=0.46]{pres3.eps} \caption{Radial profiles of the parallel pressure on the mid-plane $z=0$ for various cases of the pressure anisotropy and plasma rotation. Case 2: (blue \textcolor{blue}{$+$}) peaked on-axis rotation with $M_0=0.02$, $m=2$, $n=3$ and peaked off-axis pressure anisotropy with $\sigma_0=0.02$, $k=5$, $\ell=2$; Case 3: (red \textcolor{red}{$\times$}) peaked on-axis rotation with $M_0=0.02$, $m=2$, $n=3$ and peaked off-axis pressure anisotropy with $\sigma_0=-0.02$, $k=5$, $\ell=2$.
The static effective pressure (black $\diamondsuit$, Case 1) is plotted for reference.} \label{fig:pres3} \end{center} \end{figure} \begin{figure}[ht!] \begin{center} \psfrag{p}{$p_\perp$ (Pa)} \psfrag{r}{R(m)} \psfrag{staticiso }{\small{Case 1}} \psfrag{m30452s20223 }{\hspace{0.8cm}\small{Case 2}} \psfrag{m30452s2m0223 }{\hspace{1cm}\small{Case 3}} \includegraphics[scale=0.46]{pres6.eps} \caption{Radial profiles of the perpendicular pressure on the mid-plane $z=0$ for various cases of the pressure anisotropy and plasma rotation. Case 2: (blue \textcolor{blue}{$+$}) peaked off-axis rotation with $M_0=0.04$, $m=5$, $n=2$ and peaked on-axis pressure anisotropy with $\sigma_0=0.02$, $k=2$, $\ell=3$; Case 3: (red \textcolor{red}{$\times$}) peaked off-axis rotation with $M_0=0.04$, $m=5$, $n=2$ and peaked on-axis pressure anisotropy with $\sigma_0=-0.02$, $k=2$, $\ell=3$. The static effective pressure (black $\diamondsuit$, Case 1) is plotted for reference.} \label{fig:pres6} \end{center} \end{figure} \begin{figure}[ht!] \begin{center} \psfrag{p}{$p_\perp$ (Pa)} \psfrag{r}{R(m)} \psfrag{staticiso }{\small{Case 1}} \psfrag{m20223s20223 }{\hspace{0.8cm}\small{Case 2}} \psfrag{m20223s2m0223 }{\hspace{1cm}\small{Case 3}} \includegraphics[scale=0.46]{pres9.eps} \caption{Radial profiles of the perpendicular pressure on the mid-plane $z=0$ for various cases of the pressure anisotropy and plasma rotation. Case 2: (blue \textcolor{blue}{$+$}) peaked on-axis rotation with $M_0=0.02$, $m=2$, $n=3$ and peaked on-axis pressure anisotropy with $\sigma_0=0.02$, $k=2$, $\ell=3$; Case 3: (red \textcolor{red}{$\times$}) peaked on-axis rotation with $M_0=0.02$, $m=2$, $n=3$ and peaked on-axis pressure anisotropy with $\sigma_0=-0.02$, $k=2$, $\ell=3$.
The static effective pressure (black $\diamondsuit$, Case 1) is plotted for reference.} \label{fig:pres9} \end{center} \end{figure} In general, the effect of pressure anisotropy on the two pressure components is qualitatively the same, though reversed: as is evident from Figures \ref{fig:pres3} and \ref{fig:pres9}, for $\sigma>0$ $p_\parallel$ increases while $p_\perp$ decreases, compared to the isotropic case. The impact of pressure anisotropy appears to be stronger for peaked off-axis $\sigma$-profiles than for peaked on-axis ones (Figs. \ref{fig:pres3}, \ref{fig:pres9}), a result similar to that for isotropic plasmas with parallel rotation \citep{2016PhPl...23g2507P}. For isotropic plasmas with parallel rotation, the values of $M$ required to obtain pressure profiles similar to those observed in discharges with ITBs or H-mode plasmas, i.e. profiles with steep regions in the vicinity of the barrier associated with the maximum of the Mach number profile, appear difficult to achieve experimentally, especially in large devices. On the contrary, pressure anisotropy, which gives similar results for the pressure component profiles, is achievable. For peaked off-axis $\sigma$-profiles, the impact on the pressure component profiles is stronger when the position of $\sigma_0$ is closer to the magnetic axis (Fig. \ref{fig:pres7}). \begin{figure}[ht!] \begin{center} \psfrag{p}{$p_\perp$ (Pa)} \psfrag{r}{R(m)} \psfrag{staticiso }{\small{Case 1}} \psfrag{m30424s30252 }{\hspace{0.8cm}\small{Case 2}} \psfrag{m30424s30241 }{\hspace{0.8cm}\small{Case 3}} \includegraphics[scale=0.46]{pres7.eps} \caption{Graphs of the perpendicular pressure component profile in the radial direction at $z=0$ for various cases of the pressure anisotropy and plasma rotation.
Case 2: (blue \textcolor{blue}{$+$}) peaked off-axis plasma rotation with $M_0=0.04$, $m=2$, $n=4$ and peaked off-axis pressure anisotropy with $\sigma_0=0.02$, $k=5$, $\ell=2$; Case 3: (red \textcolor{red}{$\times$}) rotation as in Case 2 and peaked off-axis pressure anisotropy with $\sigma_0=0.02$, $k=4$, $\ell=1$. The static effective pressure (black $\diamondsuit$, Case 1) is plotted for reference.} \label{fig:pres7} \end{center} \end{figure} \begin{figure}[ht!] \begin{center} \psfrag{p}{$p_\parallel$ (Pa)} \psfrag{r}{R(m)} \psfrag{staticiso }{\small{Case 1}} \psfrag{m0s20223 }{\hspace{0.25cm}\small{Case 2}} \psfrag{m20223s20223 }{\hspace{0.8cm}\small{Case 3}} \psfrag{m20223s20226 }{\hspace{0.8cm}\small{Case 4}} \includegraphics[scale=0.46]{pres4.eps} \caption{Graphs of the parallel pressure component profile in the radial direction at $z=0$ for various cases of the pressure anisotropy and plasma rotation. Case 2: (blue \textcolor{blue}{$+$}) no rotation and peaked on-axis pressure anisotropy with $\sigma_0=0.02$, $k=2$, $\ell=3$; Case 3: (red \textcolor{red}{$\times$}) peaked on-axis rotation with $M_0=0.02$, $m=2$, $n=3$ and peaked on-axis pressure anisotropy with $\sigma_0=0.02$, $k=2$, $\ell=3$; Case 4: (green \textcolor{green}{$\triangledown$}) peaked on-axis rotation with $M_0=0.02$, $m=2$, $n=3$ and peaked on-axis pressure anisotropy with $\sigma_0=0.02$, $k=2$, $\ell=6$. The static effective pressure (black $\diamondsuit$, Case 1) is plotted for reference.} \label{fig:pres4} \end{center} \end{figure} Turning to the current density, we note that for the specific input the impact on the parallel and toroidal current density profiles is the same, and we therefore present only one of the components as examples. The overall conclusion is that the pressure anisotropy can affect the current density much more strongly than the parallel rotation. In addition, the impact is localized in the region where the shear of $\sigma$ is located.
Therefore, for peaked on-axis $\sigma$-profiles (Fig. \ref{fig:jpar1}), the impact on the current density is located towards the edge of the poloidal cross-section, with the extent of this region depending on the shape of the $\sigma$ profile. For peaked off-axis $\sigma$-profiles, the current density is affected almost throughout the poloidal cross-section, except for the $\sigma_0$ point, the boundary and the magnetic axis (Fig. \ref{fig:jpar2}). The region of strongest impact lies midway between the point of maximum $\sigma$ and the magnetic axis or the boundary. The above results are similar to those obtained in \citep{2016PhPl...23g2507P} for the impact of parallel rotation on the current density. Compared to the case of parallel rotation, the impact of pressure anisotropy on the current density is clearly more pronounced (Figs. \ref{fig:jpar1} and \ref{fig:jpar2}). \begin{figure}[ht!] \begin{center} \psfrag{j}{$J_\parallel$ (A/$m^2$)} \psfrag{r}{R(m)} \psfrag{staticiso }{\small{Case 1}} \psfrag{m30452s20223 }{\hspace{0.8cm}\small{Case 2}} \psfrag{m30452s2m0223 }{\hspace{1cm}\small{Case 3}} \psfrag{m0s20223 }{\hspace{0.25cm}\small{Case 4}} \includegraphics[scale=0.46]{jpar1.eps} \caption{Plots of the current density parallel to the magnetic field versus the radial distance from the axis of symmetry on the mid-plane $z=0$ for cases of pressure anisotropy peaked on-axis and parallel rotation peaked off-axis. Case 2: (blue \textcolor{blue}{$+$}) peaked off-axis rotation with $M_0=0.04$, $m=5$, $n=2$ and peaked on-axis pressure anisotropy with $\sigma_0=0.02$, $k=2$, $\ell=3$; Case 3: (red \textcolor{red}{$\times$}) peaked off-axis rotation with $M_0=0.04$, $m=5$, $n=2$ and peaked on-axis pressure anisotropy with $\sigma_0=-0.02$, $k=2$, $\ell=3$; Case 4: (green \textcolor{green}{$\triangledown$}) no rotation and peaked on-axis pressure anisotropy with $\sigma_0=0.02$, $k=2$, $\ell=3$.
The static and isotropic case (black $\diamondsuit$, Case 1) is plotted for reference.} \label{fig:jpar1} \end{center} \end{figure} \begin{figure}[ht!] \begin{center} \psfrag{j}{$J_\parallel$ (A/$m^2$)} \psfrag{r}{R(m)} \psfrag{staticiso }{\small{Case 1}} \psfrag{m20223s30252 }{\hspace{0.8cm}\small{Case 2}} \psfrag{m20223s3m0252 }{\hspace{1cm}\small{Case 3}} \psfrag{m0s30252 }{\hspace{0.25cm}\small{Case 4}} \includegraphics[scale=0.46]{jpar2.eps} \caption{Plots of the current density parallel to the magnetic field versus the radial distance from the axis of symmetry on the mid-plane $z=0$ for cases of pressure anisotropy peaked off-axis and parallel rotation peaked on-axis. Case 2: (blue \textcolor{blue}{$+$}) peaked on-axis rotation with $M_0=0.02$, $m=2$, $n=3$ and peaked off-axis pressure anisotropy with $\sigma_0=0.02$, $k=5$, $\ell=2$; Case 3: (red \textcolor{red}{$\times$}) peaked on-axis rotation with $M_0=0.02$, $m=2$, $n=3$ and peaked off-axis pressure anisotropy with $\sigma_0=-0.02$, $k=5$, $\ell=2$; Case 4: (green \textcolor{green}{$\triangledown$}) no rotation and peaked off-axis pressure anisotropy with $\sigma_0=0.02$, $k=5$, $\ell=2$. The static and isotropic case (black $\diamondsuit$, Case 1) is plotted for reference.} \label{fig:jpar2} \end{center} \end{figure} As can be seen from Figs. \ref{fig:jpar3} and \ref{fig:jpar4}, for $\sigma<0$ the current density profiles have an increased gradient, thereby affecting the relevant modes. Consequently, from the stability point of view, it appears that heating parallel to the magnetic surfaces is desirable, since it will smooth out the current density profiles. Moreover, the smaller the shear of $\sigma$, the smaller the current density gradient. \begin{figure}[ht!]
\begin{center} \psfrag{j}{$J_\parallel$ (A/$m^2$)} \psfrag{r}{R(m)} \psfrag{staticiso }{\small{Case 1}} \psfrag{m30424s30252 }{\hspace{0.8cm}\small{Case 2}} \psfrag{m30424s3m0252 }{\hspace{1cm}\small{Case 3}} \psfrag{m30424s30241 }{\hspace{0.8cm}\small{Case 4}} \includegraphics[scale=0.46]{jpar3.eps} \caption{Plots of the current density parallel to the magnetic field versus the radial distance from the axis of symmetry on the mid-plane $z=0$ for cases of pressure anisotropy peaked off-axis and parallel rotation peaked off-axis. Case 2: (blue \textcolor{blue}{$+$}) peaked off-axis rotation with $M_0=0.04$, $m=2$, $n=4$ and peaked off-axis pressure anisotropy with $\sigma_0=0.02$, $k=5$, $\ell=2$; Case 3: (red \textcolor{red}{$\times$}) peaked off-axis rotation with $M_0=0.04$, $m=2$, $n=4$ and peaked off-axis pressure anisotropy with $\sigma_0=-0.02$, $k=5$, $\ell=2$; Case 4: (green \textcolor{green}{$\triangledown$}) peaked off-axis rotation with $M_0=0.04$, $m=2$, $n=4$ and peaked off-axis pressure anisotropy with $\sigma_0=0.02$, $k=4$, $\ell=1$. The static and isotropic case (black $\diamondsuit$, Case 1) is plotted for reference.} \label{fig:jpar3} \end{center} \end{figure} \begin{figure}[ht!] \begin{center} \psfrag{j}{$J_\parallel$ (A/$m^2$)} \psfrag{r}{R(m)} \psfrag{staticiso }{\small{Case 1}} \psfrag{m20223s20223 }{\hspace{0.8cm}\small{Case 2}} \psfrag{m20223s20226 }{\hspace{0.8cm}\small{Case 3}} \psfrag{m0s2m0223 }{\hspace{0.45cm}\small{Case 4}} \includegraphics[scale=0.46]{jpar4.eps} \caption{Plots of the current density parallel to the magnetic field versus the radial distance from the axis of symmetry on the mid-plane $z=0$ for cases of pressure anisotropy peaked on-axis and parallel rotation peaked on-axis.
Case 2: (blue \textcolor{blue}{$+$}) peaked on-axis rotation with $M_0=0.02$, $m=2$, $n=3$ and peaked on-axis pressure anisotropy with $\sigma_0=0.02$, $k=2$, $\ell=3$; Case 3: (red \textcolor{red}{$\times$}) peaked on-axis rotation with $M_0=0.02$, $m=2$, $n=3$ and peaked on-axis pressure anisotropy with $\sigma_0=0.02$, $k=2$, $\ell=6$; Case 4: (green \textcolor{green}{$\triangledown$}) no rotation and peaked on-axis pressure anisotropy with $\sigma_0=-0.02$, $k=2$, $\ell=3$. The static and isotropic case (black $\diamondsuit$, Case 1) is plotted for reference.} \label{fig:jpar4} \end{center} \end{figure} Additionally, regardless of whether it is favourable from the stability point of view or not, for peaked on-axis $\sigma$-profiles the effect on the current density is stronger for $\sigma<0$ (Fig. \ref{fig:jpar4}), while the opposite is observed for peaked off-axis $\sigma$-profiles (Fig. \ref{fig:jpar2}). Regarding the safety factor, although the transformation does not affect its values (cf. Eqs. (\ref{eq:q1}) and (\ref{eq:q2})), the rotation and the pressure anisotropy alter the $q$-values through their impact on the input data. Specifically, for $\sigma>0$, $q$ increases in the region where the anisotropy is localized. For example, in the case of a static anisotropic equilibrium with $\sigma$ peaked on-axis, the values of $q$ at the magnetic axis become larger, as was also reported in \citep{2011PPCF...53g4021H}, while at the boundary they may become either larger or smaller. For $\sigma<0$ peaked on-axis, $q$ decreases there. The picture is different for $\sigma$-profiles peaked off-axis: for all the cases examined, the values of $q$ decrease at the magnetic axis as well as at the boundary. As in the case of peaked on-axis $\sigma$-profiles, just the opposite occurs for $\sigma<0$.
Examining the impact of the pressure anisotropy on the position of the magnetic axis, we found an inward shift for peaked on-axis profiles with $\sigma>0$, a result also found in \citep{2001PPCF...43.1441Z}, and an outward shift for $\sigma<0$. For peaked off-axis $\sigma$ the position remains practically the same. These effects on the safety factor and the position of the magnetic axis were not observed in \citep{2016PhPl...23g2507P} because of the weak impact of parallel rotation on these quantities, especially compared to the respective impact of pressure anisotropy. Finally, we examined the impact of pressure anisotropy on the toroidal $\beta$, concluding that for positive $\sigma$ a slight decrease is observed in its values for both peaked on-axis and peaked off-axis anisotropy profiles. Note that the toroidal $\beta$ in Table \ref{tab:1} is defined as \begin{equation} \beta_t=\frac{\langle p\rangle}{B_0^2/2\mu_0}, \label{eq:beta} \end{equation} where $$ \langle p\rangle=\frac{\int_0^{V_0} p\,dV}{V_0}, $$ with $B_0$ the vacuum magnetic field at the geometrical center and $V_0$ the total plasma volume.
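For reference, the definition ($\ref{eq:beta}$) is straightforward to evaluate numerically. The following minimal Python sketch (purely illustrative; the function and variable names are ours and are not part of the HELENA implementation) computes $\beta_t$ from sampled pressure values and the corresponding volume elements:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (SI units)

def toroidal_beta(p, dV, B0):
    """Toroidal beta of Eq. (beta): <p> / (B0^2 / 2 mu0).

    p  : pressure samples on a grid covering the plasma volume (Pa)
    dV : corresponding volume elements (m^3)
    B0 : vacuum magnetic field at the geometrical center (T)
    """
    p_avg = np.sum(p * dV) / np.sum(dV)   # <p> = (1/V0) * integral of p dV
    return p_avg / (B0**2 / (2.0 * MU0))
```

For a uniform pressure $p$ this reduces to $\beta_t=2\mu_0 p/B_0^2$; for illustrative ITER-like values ($p\sim 10^5$ Pa, $B_0\sim 5$ T) one obtains $\beta_t\sim 10^{-2}$, of the same order as the values in Table \ref{tab:1}.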
\begin{table} \centering \caption{Values of the safety factor and the toroidal $\beta$ for various cases of rotation and pressure anisotropy.} \begin{tabular}{| c | c | c | c | c | c | c | c | c | c |} \hline \hline q & $\beta_t$ & $M_0$ & $m$ & $n$ & Type & $\sigma_0$ & $k$ & $\ell$ & Type \\ \hline 0.6478 & 0.04199 & 0 & - & - & - & 0.02 & 2 & 3 & on-axis \\ \hline 0.6491 & 0.04199 & 0 & - & - & - & 0.02 & 2 & 6 & on-axis \\ \hline 0.6517 & 0.04197 & 0 & - & - & - & 0.035 & 2 & 6 & on-axis \\ \hline 0.6477 & 0.04199 & 0.02 & 2 & 3 & on-axis & 0.02 & 2 & 3 & on-axis \\ \hline 0.6370 & 0.04204 & 0 & - & - & - & -0.02 & 2 & 3 & on-axis \\ \hline 0.6387 & 0.04200 & 0 & - & - & - & 0.02 & 5 & 2 & off-axis \\ \hline \end{tabular} \label{tab:1} \end{table} \section{Conclusions} We examined the impact of pressure anisotropy and parallel rotation on the equilibrium properties of axisymmetric, toroidally confined plasmas by means of numerically constructed ITER-like configurations. To this end, the appropriate GGS equation is put, with the aid of an integral transformation, in a form identical to the well-known GS equation. This transformation maps the poloidal flux function $\psi$ to another flux function $u$, preserving the shape of the magnetic surfaces. We modified the fixed-boundary equilibrium code HELENA so that, via the direct and inverse transformations, the equilibrium quantities calculated by the Grad-Shafranov solver, now in the $u$-space, are mapped to the $\psi$-space. On the basis of the equilibria constructed by the code, we examined the impact of pressure anisotropy, rotation and their shear on the pressure, toroidal current density, safety factor, position of the magnetic axis and toroidal beta.
We mainly focused on the impact of the pressure anisotropy, while the rotation was primarily used for comparison, since its impact on the equilibrium was examined in a previous study \citep{2016PhPl...23g2507P}. The presence of pressure anisotropy in addition to parallel rotation allows access to configurations with higher values of the Mach number, for $\sigma<0$, and more flexibility in shaping the profiles of the equilibrium quantities. The effect of pressure anisotropy is much stronger than that of parallel rotation on all the quantities we examined, with the exception of the effective pressure. As expected, for $\sigma>0$ the effective pressure, which enters the GGS equation, is reduced by the pressure anisotropy, though the impact is minimal. The impact on the pressure components of peaked off-axis pressure anisotropy is stronger than that of peaked on-axis anisotropy, especially for pressure anisotropy localized close to the magnetic axis. In addition, the impact of pressure anisotropy is the same regardless of the sign of $\sigma$, and also the same for the two pressure components. For the current density, the peaked off-axis anisotropy has a stronger impact throughout the poloidal cross-section, as opposed to the peaked on-axis case, where the impact is localized close to the magnetic axis. This is due to the fact that the shear of the pressure anisotropy has a stronger effect on the current density than the anisotropy itself. In this case the impact is direction independent, unlike the case of parallel rotation, which affects the parallel current component more drastically than the toroidal one. A peaked off-axis $\sigma$-profile with low shear smooths out the current density profiles and is thus favourable from the stability point of view, since it can affect current-gradient instabilities. It must be noted that, in general, $\sigma>0$ is beneficial for the configuration because it does not produce large current density gradients.
For peaked on-axis $\sigma$, the safety factor increases at the magnetic axis, while the effect for peaked off-axis profiles is negligible. Also, in certain cases we found that the magnetic axis is shifted inwards for $\sigma>0$ and outwards for $\sigma<0$. Finally, the toroidal $\beta$ decreases for $\sigma>0$ and increases for $\sigma<0$. As a next step, we plan to extend the computation to non-parallel incompressible plasma rotation. In this case the rotation is associated with electric fields, which are believed to play a role in the transitions to improved confinement modes. This can be done on the basis of Eq. (\ref{1}) (or Eq. (\ref{4})) by including the additional electric-field-dependent $R^4$-term therein. \section{Acknowledgements} This work has been carried out within the framework of the EUROfusion Consortium and has received funding from (a) the Euratom research and training programme 2014-2018 and 2019-2020 under grant agreement No 633053 and (b) the National Program for the Controlled Thermonuclear Fusion, Hellenic Republic. The views and opinions expressed herein do not necessarily reflect those of the European Commission.
\section{Introduction} Let $\Omega\subset \mathbb{R}^3$ be a bounded domain with smooth boundary. We consider weak solutions to a variant of the Navier-Stokes-Fourier system, in the absence of external forces and subject to heat conduction driven by Fourier's law: \begin{align} \label{nsf_system}\left\lbrace \begin{array}{rl} \partial_t\rho+\ebdiv (\rho u)&=0\\ \partial_t (\rho u)+\ebdiv (\rho u\otimes u)+\nabla p&=\ebdiv \mathbb{S}\\ \partial_t (\rho s)+\ebdiv (\rho s u)+\ebdiv(\frac{-\kappa\nabla \theta}{\theta})&=\sigma \end{array}\right. \end{align} in $(0,T)\times\Omega$, with the initial and boundary conditions \begin{align*} \left\lbrace\begin{array}{ll}\rho(0,\cdot)=\rho_0,\quad (\rho u)(0,\cdot)=(\rho u)_0,\quad \theta(0,\cdot)=\theta_0,\\ u(t,\cdot)|_{\partial\Omega}=0,\quad \nabla \theta(t,\cdot)\cdot n(\cdot)|_{\partial\Omega}=0.\end{array}\right. \end{align*} This system of equations models the motion of a viscous, compressible, and heat-conducting fluid, where $\rho=\rho(t,x)$ denotes the density of the fluid, $u=u(t,x)$ denotes the velocity of the fluid, and $\theta=\theta(t,x)$ denotes the temperature of the fluid. The quantity $p$ determines the internal pressure of the system, while $\mathbb{S}$, $\kappa$, $s$ and $\sigma$ denote the stress tensor, heat conduction coefficient, entropy, and entropy production rate, respectively -- our assumptions on the behavior of these quantities are determined by the particular constitutive relations of our model; for more details, see the discussion after the statement of Theorem $\ref{thm_temp1}$, as well as the complete specification in Section $2$. Recently, Mellet and Vasseur \cite{MelletVasseur} studied bounds from below on the temperature for a suitable class of weak solutions of a variant of (\ref{nsf_system}) when the pressure $p(\rho,\theta)$ is affine in the temperature variable, i.e.
\begin{align*} p=p_e(\rho)+R\rho\theta, \end{align*} in which case the entropy equation (that is, the third equation in (\ref{nsf_system})) is replaced by \begin{align*} \partial_t (\rho\theta)+\ebdiv (\rho \theta u)-\ebdiv(\kappa\nabla\theta)=2\mu|D(u)|^2+\lambda |\ebdiv u|^2-R\rho\theta\ebdiv u. \end{align*} In \cite{MelletVasseur}, the authors use an instance of the De Giorgi argument \cite{DeGiorgi} for boundedness and regularity of solutions to elliptic equations with bounded measurable coefficients to establish uniform (in space) bounds on the logarithm of the temperature, which in turn give uniform bounds on the temperature itself. The goal of the present work is to adapt the methods of \cite{MelletVasseur} to treat the system ($\ref{nsf_system}$), in the case that the pressure is no longer strictly affine in the temperature variable. This change in assumption on the pressure corresponds to a somewhat more physically accurate model; in particular, the constitutive assumptions on the quantities driving heat conduction in the system can now be related to basic thermodynamical principles (see \cite{FeireislCMAP,FeireislNovotnyBook} for further discussion on this point). Our main result is then the following: \begin{theorem} \label{thm_temp1} Fix $T>0$ and $\Omega$ a bounded open set. Suppose that $\mathbb{S}$, $\kappa$, $\sigma$ and the state relations $s$ and $p$ (which respectively represent the entropy and pressure relations of the system) satisfy the criteria established in Section $2$, and let $(\rho,u,\theta)$ be a weak solution to the Navier-Stokes-Fourier system $(\ref{nsf_system})$ satisfying $u\in L^2(0,T;H^1_0(\Omega))$, \begin{align} \int_{\Omega} \rho_0\max\left\{\log\left(\frac{1}{\theta_0}\right),0\right\}dx<\infty.\label{eqAA3} \end{align} and $\rho\in L^\infty(0,T;L^\omega(\Omega))$ for some $\omega>3$, along with the local entropy inequality ($\ref{eq_admiss}$) for a.e. $0<t<T$ and a.e. $0<\mathfrak{s}<t$, as well as for a.e. 
$0<t<T$ with $\mathfrak{s}=0$.\footnote{We shall describe the significance and relevance of these restrictions on the class of weak solutions in the discussion below.} Then for all $\tau\in (0,T]$, there exists $\eta_{\tau,T}>0$ such that \begin{align*} \theta(t,x)\geq \eta_{\tau,T}. \end{align*} for a.e. $\tau<t<T$ and a.e. $x\in\Omega$. \end{theorem} Theorem $\ref{thm_temp1}$ states that for a particular class of weak solutions, the temperature is bounded away from zero uniformly in space.\footnote{The proof in fact gives a slightly stronger statement, since the argument does not require that the triple $(\rho,u,s)$ satisfy the full conditions of a weak solution for ($\ref{nsf_system}$). In particular, the only properties used in the argument are: (i) the bounds $u\in L^2(H_0^1)$, ($\ref{eqAA3}$), $\rho \in L^\infty(L^\omega)$ with $\omega>3$, (ii) the local entropy inequality ($\ref{eq_admiss}$) and (iii) the conservation of total mass $\int_{\Omega} \rho(t,x)dx=\int_{\Omega} \rho_0(x)dx$ (which follows from ($\ref{eq-weak-continuity}$)). We use the present statement to emphasize the connection with the system ($\ref{nsf_system}$).} We remark that the assumptions on the system appearing in Section $2$ are all physically motivated and are quite general. In particular, the quantities $p=p(\rho,\theta)$ and $s=s(\rho,\theta)$ represent the internal pressure and entropy of the system, and their precise forms along with those of the viscous stress tensor $\mathbb{S}$, heat conduction coefficient $\kappa$ and entropy production rate $\sigma$ are determined by the particular properties of the fluid under study. We refer the reader to \cite[Chapter $1$]{FeireislNovotnyBook} for a full discussion of the derivation and physical relevance of the Navier-Stokes-Fourier system ($\ref{nsf_system}$). On the other hand, the assumptions on $\rho$, $u$ and $\theta(0,\cdot)$ are more closely connected with our tools and techniques. 
As we mentioned above, the authors in \cite{MelletVasseur} use a variant of the De Giorgi argument for $L^\infty$ bounds of solutions to elliptic equations with measurable coefficients to establish the desired $L^\infty$ control over the logarithm of the temperature (which corresponds in our setting to the entropy, i.e. the quantity $s(\rho,\theta)$). Generally speaking, this technique is based upon the balance of two key pieces of information: \begin{enumerate} \item[(a)] a localized form of an energy/entropy inequality (e.g. the local energy inequality satisfied by suitable weak solutions for the incompressible Navier-Stokes equations; in our case, this takes the form of the local entropy inequality $(\ref{eq_admiss})$), and \item[(b)] a nonlinear iteration argument driven by the Tchebyshev inequality. \end{enumerate} As is often the case (see, e.g., the discussion in \cite{CKN} for the case of incompressible Navier-Stokes), in order to obtain an appropriate form of the local entropy inequality we must restrict the class of weak solutions. In \cite{MelletVasseur}, the authors work with the solutions constructed by Feireisl in \cite{FeireislBook}, which arise as limits of a somewhat involved approximation procedure. This procedure in particular preserves an appropriate form of the entropy inequality at the last level of the approximation, which enables the authors to obtain the desired $L^\infty$ bounds uniformly in the approximation parameter. \vspace{0.2in} In the present work, we base our notion of weak solution on the existence theory developed by Feireisl, Novotn\'y et al.
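The Tchebyshev step in (b) is elementary but worth isolating: the measure of a superlevel set is controlled by an $L^p$ norm. The following minimal Python sketch (a discrete toy version with counting measure; all names are ours) illustrates the inequality that drives the iteration:

```python
import numpy as np

def superlevel_measure(f, lam):
    """Discrete 'measure' (sample count) of the superlevel set {f >= lam}."""
    return np.count_nonzero(f >= lam)

def tchebyshev_bound(f, lam, p=2):
    """Tchebyshev bound: |{f >= lam}| <= lam^{-p} * sum(|f|^p)."""
    return np.sum(np.abs(f)**p) / lam**p
```

In the De Giorgi scheme, this converts smallness of an energy quantity at one truncation level into smallness of the set on which the next truncation is active.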
(see \cite[Chapter $3$]{FeireislNovotnyBook}, as well as the works \cite{FeireislBook,FeireislIUMJ53_2004,FeireislContempMath371,FeireislCommPDE31_2006,FeireislNovotnyProcRSE135A_2005}).\footnote{Note that the model described in these works contains an additional radiative term when compared to the system ($\ref{nsf_system}$); at present, this additional term is a required component of the known existence theory. We discuss this in more detail at the conclusion of this introduction.} In this setting, the identification of an appropriate form of the local entropy inequality is somewhat more subtle, since the entropy $s(\rho,\theta)$ may now depend on both the density $\rho$ and the temperature $\theta$ in a nonlinear way. In particular, recalling that these localized inequalities are typically obtained by multiplying the equation by an appropriate cutoff function, one observes that the (possibly nonlinear) interaction of $\rho$ and $\theta$ inside $s(\rho,\theta)$ imposes some difficulty. Moreover, the system possesses diffusion in $\theta$ but not in $\rho$. Nevertheless, when the functions involved have sufficient regularity, we can use the product and chain rules to obtain a suitable variant. We also remark that the De Giorgi technique does not apply to general systems; indeed, counterexamples (due to De Giorgi) to the corresponding regularity results exist. \vspace{0.2in} Note that the regularity required to perform this procedure is only present at the very beginning of the approximation procedure described in \cite{FeireislNovotnyBook}, where the equation has a number of additional terms which would interfere with the De Giorgi argument. Indeed, the existence of such smooth solutions for the original system $(\ref{nsf_system})$ is a major open question. In light of this, we first establish the local entropy inequality for smooth solutions to the Navier-Stokes-Fourier system ($\ref{nsf_system}$). 
Our main result, Theorem $\ref{thm_temp1}$, then imposes this inequality as an assumption used to derive the desired temperature bounds. We refer to the section below on the existence theory for weak solutions for further comments on this issue. \vspace{0.2in} \begin{center} {\bf Notion of weak solution} \end{center} As mentioned above, we consider a weak formulation of the system $(\ref{nsf_system})$, based upon the existence theory developed by Feireisl, Novotn\'y et al. for a related system with additional radiative terms. We now recall the relevant notion of weak solution from \cite[Section $2.1$]{FeireislNovotnyBook}, written for the system ($\ref{nsf_system}$). Suppose that $\mathbb{S}$, $\kappa$, $\sigma$, $s$ and $p$ satisfy the constitutive relations established in Section $2$ below. We say that a triple of measurable functions $(\rho,u,\theta)$ is then a {\it weak solution} of the Navier-Stokes-Fourier system $(\ref{nsf_system})$ if $\rho\in L^1((0,T)\times \Omega)$, $\ebdiv u\in L^1((0,T)\times \Omega)$, and, for some $q>1$, $\nabla u\in L^1(0,T;L^q(\Omega;\mathbb{R}^{3\times 3}))$, $(\theta,\nabla\theta)\in L^q((0,T)\times \Omega)^2$, with \begin{enumerate} \item[(i)] $\rho\geq 0$, $\theta\geq 0$ a.e.
on $(0,T)\times \Omega$, \item[(ii)] $u|_{\partial\Omega}=0$, and \item[(iii)] the continuity equation is satisfied in the renormalized sense of \cite{DiPerna-Lions}; that is, \begin{align} \nonumber &\int_0^T\int_{\Omega} \rho B(\rho)(\partial_t\phi^{(1)}+u\cdot \nabla \phi^{(1)})dxdt\\ \label{eq-weak-continuity} &\hspace{0.8in}=\int_0^T\int_{\Omega} b(\rho)\ebdiv u\phi^{(1)} dxdt-\int_{\Omega} \rho_0 B(\rho_0)\phi^{(1)}(0)dx,\\ \intertext{ for every $b\in L^\infty\cap C^0(0,\infty)$, $B:[0,\infty)\rightarrow\mathbb{R}$ defined by \begin{align*} B(\rho)=B_1+\int_1^\rho \frac{b(z)}{z^2}dz, \end{align*} and all test functions $\phi^{(1)}\in C^1_c([0,T)\times \overline{\Omega})$, while the momentum and entropy production equations are satisfied in the distributional sense; that is, } \nonumber &\int_0^T\int_{\Omega} \rho u\cdot \partial_t\phi^{(2)}+\rho[u\otimes u]:\nabla \phi^{(2)}+p\ebdiv \phi^{(2)} dxdt\\ &\hspace{0.8in}=\int_0^T\int_{\Omega} \mathbb{S}:\nabla\phi^{(2)} dxdt-\int_{\Omega} (\rho u)_0\cdot \phi^{(2)}(0)dx\label{eq-weak-momentum} \intertext{and} \nonumber &\int_0^T\int_{\Omega} \rho s(\partial_t\phi^{(3)}+u\cdot \nabla \phi^{(3)})dxdt+\int_0^T\int_{\Omega} \frac{-\kappa\nabla\theta}{\theta}\cdot \nabla\phi^{(3)} dxdt\\ &\hspace{0.8in}=-\langle \sigma,\phi^{(3)}\rangle-\int_{\Omega} \rho_0s(0)\phi^{(3)}(0)dx,\label{eq-weak-entropy-prod} \end{align} for all test functions $\phi^{(2)}\in C^1_c([0,T)\times \overline{\Omega};\mathbb{R}^3)$, $\phi^{(3)}\in C^1_c([0,T)\times \overline{\Omega})$, together with the integrability conditions required to make sense of each quantity in ($\ref{eq-weak-continuity}$), ($\ref{eq-weak-momentum}$) and ($\ref{eq-weak-entropy-prod}$). 
\end{enumerate} \vspace{0.2in} \begin{center} {\bf Comments on the existence theory for weak solutions} \end{center} As remarked above, the existence of weak solutions (in the sense described in the previous section) satisfying the hypotheses of Theorem $\ref{thm_temp1}$ is not known at present. In this context, two distinct issues arise: first, the existence of weak solutions for (\ref{nsf_system}) itself in the specific setting of the class of constitutive relations described in Section $2$; and second, the existence of weak solutions satisfying the additional hypotheses identified in the statement of Theorem $\ref{thm_temp1}$. Concerning the first issue, existence of weak solutions for (\ref{nsf_system}), recent work of Feireisl and Novotn\'y \cite{FeireislBook,FeireislIUMJ53_2004,FeireislContempMath371,FeireislCommPDE31_2006,FeireislNovotnyProcRSE135A_2005,FeireislNovotnyBook} has developed an existence theory for weak solutions of (\ref{nsf_system}) when the constitutive relations on the heat conduction, entropy, internal energy and pressure adhere to the hypotheses described in Section $2$ and, moreover, admit an additional term describing the influence of radiation at high temperatures. It should be noted that this radiative term is currently required for the existence theory. However, at the present time, it is not clear how to adapt the proof of Theorem $\ref{thm_temp1}$ to allow for the presence of radiation, and we therefore consider the non-radiative case (for which existence of weak solutions is at present an open question). Turning to the second issue, existence of weak solutions satisfying the full hypotheses of Theorem $\ref{thm_temp1}$, the additional assumptions beyond the notion of weak solution amount to integrability for $u$ and $\rho$, and the local entropy inequality ($\ref{eq_admiss}$).
Note that the integrability condition $u\in L^2(0,T;H_0^1(\Omega))$ can be ensured by taking $\alpha=1$ in ($\ref{eq_mu}$) and ($\ref{eq_kappa}$) (see \cite[Theorem $3.2$]{FeireislNovotnyBook}), while the bound $\rho\in L^\infty(0,T;L^\omega(\Omega))$ for some $\omega>3$ can be imposed by adding an additional term to the pressure; the desired bounds then follow for this adjusted equation via the energy inequality (see for instance the treatment in \cite{FeireislBook}). Concerning the local entropy inequality, it is reasonable to expect that the arguments we present can be further developed, adapting the proof of existence to preserve the local entropy inequality in the limit (corresponding to existence of suitable weak solutions obtained by Caffarelli, Kohn and Nirenberg in \cite{CKN}); such an approach is carried out for a compressible system without heat conduction in \cite{FNsuitable}. However, we choose not to pursue these issues further here. \vspace{0.2in} \begin{center} {\bf Outline of the paper} \end{center} We now give a brief outline of the rest of the paper. In Section $2$, we establish some notation, fix our assumptions on the constitutive relations, and give the formal statement of the main results of our study. Sections $3$ and $4$ are then devoted to the proofs of the local entropy inequality and the bounds from below on the temperature, respectively. We conclude with a brief appendix giving a basic distributional calculation that will be useful for our arguments, and describing how an additional hypothesis of bounded density can lead to some relaxation in the growth hypotheses imposed on the entropy. \section{Constitutive relations and general assumptions on the system} \label{prelim} We now introduce some hypotheses that further restrict the constitutive assumptions for the system ($\ref{nsf_system}$). 
In particular, in the remainder of the paper we will assume that $p,e,s\in C^1((0,\infty)\times (0,\infty))$, $\mu,\eta\in C^1([0,\infty))$ and $\kappa\in C^1([0,\infty))$, $P\in C^1([0,\infty))$ satisfy the hypotheses listed below. We begin by stating some structural hypotheses concerning the influence of viscosity and heat-conduction within the fluid. In particular, we will assume that $\mathbb{S}$ and $\sigma$ take the form \begin{align} \nonumber \mathbb{S}&=\mu(2D(u)-\tfrac{2}{3}I\ebdiv u)+\eta I\ebdiv u\\ \sigma&\geq \frac{\mu|2D(u)-\frac{2}{3}I\ebdiv u|^2+2\eta |\ebdiv u|^2}{2\theta}+\frac{\kappa |\nabla\theta|^2}{\theta^2}\label{sigma_def} \end{align} with $D(u)=\frac{1}{2}(\nabla u+(\nabla u)^\top)$ and where $\sigma$ is a Borel measure on $[0,T]\times\overline{\Omega}$. Because we are working in a compressible model, the forces driving the fluid's evolution include the pressure that the fluid exerts upon itself, in addition to the viscous interactions described by the stress tensor $\mathbb{S}$ above. The derivation of these forces arises from thermodynamical considerations, beginning with Gibbs' equation, \begin{align} \label{gibbs}\theta D_{(\rho,\theta)}s(\rho,\theta)&=D_{(\rho,\theta)}e(\rho,\theta)+p(\rho,\theta)D_{(\rho,\theta)}(\frac{1}{\rho}),\qquad \textrm{for}\quad\rho,\theta>0, \end{align} where $D_{(\rho,\theta)}=(\partial_\rho,\partial_\theta)$. As mentioned above, the quantities $s$ and $p$ represent the entropy and pressure of the system, while $e$ represents the internal energy. Regarding $p$ and $e$, we require that for all $\rho>0$, there exists $\underline{e}(\rho)>0$ such that $\lim_{\theta\rightarrow 0^+} e(\rho,\theta)=\underline{e}(\rho)$, and that for all $\rho,\theta>0$ one has $\partial_\rho p(\rho,\theta)>0$, $0<\partial_\theta e(\rho,\theta)\leq c$ and $|\rho\partial_\rho e(\rho,\theta)|\leq ce(\rho,\theta)$.
Moreover, for the purposes of our study we restrict ourselves to the study of a monoatomic gas in the absence of thermal radiation effects, in which we have the further relation \begin{align} p(\rho,\theta)=\tfrac{2}{3}\rho e(\rho,\theta).\label{monatomic} \end{align} As a consequence of $(\ref{gibbs})$ and $(\ref{monatomic})$, there exists $P\in C^1$ such that $P(0)=0$, $P'(0)>0$, and \begin{align*} p(\rho,\theta)=\theta^\frac{5}{2}P(\frac{\rho}{\theta^\frac{3}{2}}) \end{align*} for all $\rho,\theta>0$. In accordance with ($\ref{gibbs}$) and the above hypotheses on $p(\rho,\theta)$ and $e(\rho,\theta)$, we have \begin{align} s(\rho,\theta)=S(\frac{\rho}{\theta^{3/2}})\quad\textrm{with}\quad S'(Z)=-\frac{3}{2}\left(\frac{\frac{5}{3}P(Z)-ZP'(Z)}{Z^2}\right).\label{eqAA1} \end{align} Moreover, these hypotheses on $p(\rho,\theta)$ and $e(\rho,\theta)$ ensure that \begin{align} S'(Z)\geq -c_1Z^{-1}, \quad Z>0.\label{eqAA2} \end{align} Finally, concerning the shear viscosity and heat conduction coefficients $\mu$ and $\kappa$, we shall assume that for some $\alpha\in (\frac{2}{5},1]$, the conditions \begin{align} (1+\theta^\alpha)\underline{\mu}\leq \mu(\theta)\leq (1+\theta^\alpha)\overline{\mu},\label{eq_mu} \end{align} \begin{align*} \sup_\theta |\mu'(\theta)|\leq \overline{m} \end{align*} and \begin{align} \underline{\kappa}\leq \kappa(\theta)\leq \overline{\kappa}(1+\theta^\alpha)\label{eq_kappa} \end{align} hold for some constants $0<\underline{\mu}<\overline{\mu}<\infty$, $0<\underline{\kappa}<\overline{\kappa}<\infty$ and $\overline{m}>0$. \vspace{0.2in} We remark that all of the above assumptions are physical, internally consistent, and also consistent with the work \cite{FeireislNovotnyBook} (see also \cite{FeireislBook}), up to the exclusion of the radiative term as discussed above. 
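As a concrete consistency check of ($\ref{gibbs}$), ($\ref{monatomic}$) and ($\ref{eqAA1}$), one can take the illustrative monatomic ideal-gas choice $P(Z)=Z$ (so that $p=\rho\theta$) and verify Gibbs' equation symbolically; a minimal SymPy sketch (the choice of $P$ is an assumption made purely for illustration):

```python
import sympy as sp

rho, theta, Zs = sp.symbols('rho theta Z', positive=True)
Z = rho * theta**sp.Rational(-3, 2)

# Illustrative monatomic ideal-gas choice: P(Z) = Z, so p = rho * theta.
p = theta**sp.Rational(5, 2) * Z          # p = theta^{5/2} P(rho/theta^{3/2})
e = sp.Rational(3, 2) * p / rho           # monatomic relation p = (2/3) rho e

# Eq. (eqAA1) then gives S'(Z) = -(3/2)((5/3)P(Z) - Z P'(Z))/Z^2 = -1/Z:
Sprime = -sp.Rational(3, 2) * (sp.Rational(5, 3) * Zs - Zs) / Zs**2
assert sp.simplify(Sprime + 1 / Zs) == 0

# ... so S(Z) = -log Z up to a constant, and s = S(rho/theta^{3/2}).
s = (-sp.log(Zs)).subs(Zs, Z)

# Gibbs' equation (gibbs), component by component:
assert sp.simplify(theta * sp.diff(s, theta) - sp.diff(e, theta)) == 0
assert sp.simplify(theta * sp.diff(s, rho) - (sp.diff(e, rho) - p / rho**2)) == 0
```

In this example the growth conditions ($\ref{eqAA2}$) and ($\ref{growth1}$) hold with $c_1=C_2=1$, and $S'(Z)=-Z^{-1}$ is precisely the affine-pressure case treated in \cite{MelletVasseur}.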
For technical reasons, we will also impose two additional constraints: \begin{enumerate} \item[(i)] the inequality $\theta\leq \eta(\theta)$ holds for $\theta$ sufficiently small, and \item[(ii)] there exists $C_2>0$ such that \begin{align} S'(Z)\leq -C_2Z^{-1}\quad\forall Z>0.\label{growth1} \end{align} \end{enumerate} The constraint (i) is a statement of non-degeneracy of the viscosity coefficient which enables us to use the diffusion in $\theta$ to control the growth of a quantity like $\log\theta$, while the constraint (ii) ensures that the entropy grows sufficiently fast as $\theta$ tends to zero. Note that the case of affine pressure treated in \cite{MelletVasseur} corresponds to $S'(Z)=-Z^{-1}$. We refer to Appendix B for a discussion of how $(\ref{growth1})$ can be relaxed when the density is known to remain bounded. Having established these assumptions on the constitutive relations, we now address the main results of our study. \section{The local entropy inequality for smooth solutions} We now turn to the local entropy inequality ($\ref{eq_admiss}$) that we described in the introduction. In particular, the statement of this inequality will make strong use of the following truncation operator, which will be applied to the temperature $\theta$: for $C>0$, we define $f_{C}:[0,\infty)\rightarrow [0,C)$ by \begin{align*} f_C(z)&=(z-C)_-+C=\min \{z,C\}. \end{align*} Fixing $0<\mathfrak{s}<t<T$, the local entropy inequality is then \begin{align} \nonumber &\int_{\Omega} \rho\tilde{s}_C(t,x)dx+\int_{\mathfrak{s}}^t\int_{\{\theta\leq C\}}\frac{\mu|2D(u)-\frac{2}{3}I\ebdiv u|^2+2\eta|\ebdiv u|^2}{2\theta}+\frac{\kappa|\nabla\theta|^2}{\theta^2}dxdt\\ &\hspace{0.2in}\leq \int_s^t\int_{\{\theta\leq C\}} \left(-\rho^2\partial_{\rho} s(\rho,C)\ebdiv u\right)dxdt+\int_{\Omega} \rho\tilde{s}_C(\mathfrak{s},x)dx,\label{eq_admiss} \end{align} where \begin{align*} \tilde{s}_C=s(\rho,C)-s(\rho,f_C(\theta)). 
\end{align*} In particular, smooth solutions to the Navier-Stokes-Fourier system ($\ref{nsf_system}$) satisfy $(\ref{eq_admiss})$: \begin{proposition} \label{thm_admissible} If $(\rho,u,\theta)$ is a smooth solution to the Navier-Stokes-Fourier system ($\ref{nsf_system}$) with $\rho\in L^\infty(0,T;L^\omega(\Omega))$ for some $\omega>3$ and $u\in L^2(0,T;H^1_0(\Omega))$, then for every $0\leq \mathfrak{s}\leq t<\infty$ and $C>0$ the solution satisfies the local entropy inequality $(\ref{eq_admiss})$. \end{proposition} It should be noted that the existence of such smooth solutions is an outstanding open question. Nevertheless, Proposition $\ref{thm_admissible}$ indicates the plausibility of imposing $(\ref{eq_admiss})$ as an additional restriction on the class of weak solutions. \begin{proof}[Proof of Proposition $\ref{thm_admissible}$] Let $C>0$ be given and note that \begin{align} \partial_t (\rho \tilde{s}_C)+\ebdiv(\rho \tilde{s}_Cu)&=(I)-(II),\label{ref1} \end{align} where we have set \begin{align*} (I)&:=\partial_t (\rho s(\rho,C))+\ebdiv (\rho s(\rho,C)u) \end{align*} and \begin{align*} (II)&:=\partial_t (\rho s(\rho, f_C(\theta)))+\ebdiv(\rho s(\rho, f_C(\theta))u). \end{align*} Note that a straightforward calculation gives \begin{align*} (I)&=(\partial_t \rho)s(\rho,C)+\rho\partial_t s(\rho,C)+\nabla s(\rho,C)\cdot (\rho u)+s(\rho,C)\ebdiv (\rho u)\\ &=\rho (\partial_\rho s(\rho,C)\partial_t\rho)+(\partial_\rho s(\rho,C)\nabla\rho)\cdot (\rho u)\\ &=-\rho^2\partial_\rho s(\rho,C)\ebdiv u, \end{align*} where we have used the continuity equation $\partial_t \rho+\ebdiv(\rho u)=0$ to obtain both the second and third equalities.
For $(II)$, we will make use of the identity \begin{align} \partial_t\theta+\nabla\theta\cdot u&=\frac{1}{\alpha(\rho,\theta)}\bigg[\partial_t (\rho s)+\ebdiv(\rho s u)+\rho^2 \partial_\rho s(\rho,\theta)\ebdiv u\bigg],\label{eqident} \end{align} where we have set $\alpha(\rho,\theta)=\rho \partial_\theta s(\rho,\theta)$. Indeed, using the definition of $s$ and the product rule, we obtain \begin{align*} &\partial_t(\rho s)+\ebdiv (\rho su)\\ &\hspace{0.2in}=\partial_t (\rho s(\rho,\theta))+\ebdiv (\rho s(\rho,\theta)u)\\ &\hspace{0.2in}=(\partial_t \rho)s(\rho,\theta)+\rho \partial_t s(\rho,\theta)+\nabla s(\rho,\theta)\cdot (\rho u)+s(\rho,\theta)\ebdiv (\rho u). \end{align*} This is then equal to \begin{align*} &\rho \bigg[\partial_\rho s(\rho,\theta)\partial_t\rho+\partial_\theta s(\rho,\theta)\partial_t \theta\bigg]+\bigg[ \partial_\rho s(\rho,\theta)\nabla \rho+\partial_\theta s(\rho,\theta)\nabla\theta\bigg]\cdot (\rho u)\\ &\hspace{0.4in}=\rho\partial_\rho s(\rho,\theta)(\partial_t\rho+\nabla\rho \cdot u)+(\rho\partial_\theta s(\rho,\theta))(\partial_t\theta+\nabla\theta\cdot u), \end{align*} which gives the identity. Returning to $(II)$, we use the product rule to obtain, \begin{align*} (II)&=(\partial_t \rho)s(\rho,f_C(\theta))+\rho \partial_t s(\rho,f_C( \theta ))+\nabla s(\rho,f_C( \theta ))\cdot (\rho u)\\ &\hspace{0.2in}+s(\rho,f_C( \theta ))\ebdiv (\rho u)\\ &=\rho\partial_\rho s(\rho,f_C(\theta))(\partial_t\rho+u\cdot \nabla\rho)+\rho f'_C(\theta)\partial_\theta s(\rho,f_C(\theta))(\partial_t\theta+u\cdot \nabla\theta)\\ &=-\rho^2\partial_\rho s(\rho,f_C(\theta))\ebdiv u+\rho f'_C(\theta)\partial_\theta s(\rho,f_C(\theta))(\partial_t\theta+u\cdot \nabla\theta) \end{align*} so that \begin{align*} (II)&=-\rho^2\partial_\rho s(\rho,\theta)\ebdiv u+\rho\partial_\theta s(\rho,\theta)(\partial_t \theta+u\cdot\nabla\theta) \end{align*} when $\theta\leq C$ and $(II)=(I)$ when $\theta>C$. 
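As an aside, the identity ($\ref{eqident}$) can also be checked symbolically. The following SymPy sketch does so in one space dimension with the illustrative entropy $s(\rho,\theta)=\log(\theta^{3/2}/\rho)$ (the $S(Z)=-\log Z$ case from Section 2; the one-dimensional reduction and the choice of $s$ are assumptions made for the check only), imposing the continuity equation by substitution:

```python
import sympy as sp

t, x = sp.symbols('t x')
rho = sp.Function('rho', positive=True)(t, x)
u = sp.Function('u')(t, x)
th = sp.Function('vartheta', positive=True)(t, x)

# Illustrative entropy s = S(rho/theta^{3/2}) with S(Z) = -log Z.
r_, q_ = sp.symbols('r_ q_', positive=True)
s_of = sp.log(q_**sp.Rational(3, 2) / r_)
s = s_of.subs({r_: rho, q_: th})
ds_dr = sp.diff(s_of, r_).subs({r_: rho, q_: th})   # partial_rho s
ds_dq = sp.diff(s_of, q_).subs({r_: rho, q_: th})   # partial_theta s
alpha = rho * ds_dq                                  # alpha(rho, theta)

# Bracket on the right-hand side of Eq. (eqident), with div -> d/dx in 1D.
num = (sp.diff(rho * s, t) + sp.diff(rho * s * u, x)
       + rho**2 * ds_dr * sp.diff(u, x))
# Impose the continuity equation: rho_t = -(rho u)_x.
num = num.subs(sp.Derivative(rho, t), -sp.diff(rho * u, x))

# Left-hand side of Eq. (eqident); the identity asserts lhs = num / alpha.
lhs = sp.diff(th, t) + u * sp.diff(th, x)
assert sp.simplify(lhs * alpha - num) == 0
```

The cancellation of the terms multiplying $s$ itself mirrors the role of the continuity equation in the computation above.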
Combining these calculations, we obtain that $(\ref{ref1})$ is equal to \begin{align*} &\rho^2(\partial_\rho s(\rho,\theta)-\partial_\rho s(\rho,C))\ebdiv u\\ &\hspace{0.2in}-(\rho \partial_\theta s(\rho, \theta))(\partial_t\theta+u\cdot \nabla \theta)\\ &\hspace{0.4in}=(-\rho^2\partial_\rho s(\rho,C))\ebdiv u-\left[\partial_t(\rho s)+\ebdiv(\rho s u)\right] \end{align*} for $(t,x)$ such that $\theta\leq C$, and equal to $0$ when $\theta>C$. We then have \begin{align*} &\partial_t (\rho\tilde{s}_C)+\ebdiv (\rho \tilde{s}_Cu)\\ &\hspace{0.2in}\leq \bigg[ -\rho^2\partial_\rho s(\rho ,C)\ebdiv u\\ &\hspace{0.5in}-\frac{\mu|2D(u)-\frac{2}{3}I\ebdiv u|^2+2\eta |\ebdiv u|^2}{2\theta}-\frac{\kappa|\nabla\theta|^2}{\theta^2}\bigg]1_{\{\theta\leq C\}}(\theta)\\ &\hspace{0.5in}-\ebdiv\left(\frac{\kappa\nabla\theta}{\theta}\right)1_{\{\theta\leq C\}}(\theta)\\ &\hspace{0.2in}\leq \bigg[ -\rho^2\partial_\rho s(\rho ,C)\ebdiv u\\ &\hspace{0.5in}-\frac{\mu|2D(u)-\frac{2}{3}I\ebdiv u|^2+2\eta |\ebdiv u|^2}{2\theta}-\frac{\kappa|\nabla\theta|^2}{\theta^2}\bigg]1_{\{\theta\leq C\}}(\theta)\\ &\hspace{0.5in}-\ebdiv\left(\frac{\kappa \nabla f_C(\theta)}{\theta}\right) \end{align*} in the sense of distributions, where we have used ($\ref{nsf_system}$), ($\ref{sigma_def}$) and Lemma $\ref{lem_app}$.\footnote{In fact, solutions of ($\ref{nsf_system}$) satisfy ($\ref{sigma_def}$) with equality. We retain the inequality in our calculation to emphasize that ($\ref{eq_admiss}$) is consistent (at least formally) with ($\ref{sigma_def}$) under weaker notions of solution, provided that other steps in the argument can be given proper justification.} The desired result follows by integrating over $[\mathfrak{s},t]\times\Omega$. \end{proof} \section{Temperature bounds: the proof of Theorem $\ref{thm_temp1}$} We next turn to the proof of Theorem $\ref{thm_temp1}$. 
Recall that the goal of this theorem is to establish uniform bounds from below on the temperature $\theta$ for weak solutions satisfying the local entropy inequality $(\ref{eq_admiss})$ (together with certain integrability conditions on $\rho$ and $u$). The proof of this result follows the proof of \cite[Theorem $1$]{MelletVasseur} and, as we mentioned above, is based on the use of Stampacchia truncations and De Giorgi's regularity theory for elliptic partial differential equations. We remark that these methods have seen much recent application in parabolic problems and the equations of fluid mechanics; see for instance \cite{CaffarelliVasseur} and the references cited there; we also point out the works of Caffarelli et al. \cite{CVAnnals}, Beir\"ao da Veiga \cite{BeiraoDaVeiga} and Chan \cite{Chan}, as well as a treatment of the partial regularity theory \cite{VasseurNoDEA}. To facilitate the De Giorgi iteration argument, we recall a lemma showing how superlinear bounds can lead to improved convergence properties. \begin{lemma} \label{lem_ineq} Let $C>1$ and $\beta>1$ be given and let $(W_k)_{k\in\mathbb{N}}$ be a sequence in $[0,1]$ such that for every $k\in\mathbb{N}$, $W_{k+1}\leq C^{k+1}W_k^\beta$. Then there exists $C_0^*>0$ such that $0<W_1<C_0^*$ implies $W_k\rightarrow 0$ as $k\rightarrow\infty$. \end{lemma} The estimate contained in Lemma $\ref{lem_ineq}$ is classical; for a proof, see for instance \cite{VasseurNoDEA}. With this lemma in hand, we now address the proof of the theorem: \begin{proof}[Proof of Theorem $\ref{thm_temp1}$] Let $\Omega\subset\mathbb{R}^3$, $T>0$, and $(\rho,u,\theta)$ be given as stated.
Fix a decreasing sequence $(C_k)_{k\geq 0}\subset\mathbb{R}^+$ and an increasing sequence $(T_k)_{k\geq 0}\subset \mathbb{R}^+$, both to be chosen later in the argument, and define \begin{align} \nonumber U_k&:=U(C_k,T_k) \end{align} where for each $C>0$, $\mathfrak{s}>0$, we have set \begin{align} \nonumber U(C,\mathfrak{s})&:=\esssup_{\mathfrak{s}\leq t\leq T} \int_\Omega \rho(t,x)\log\left(C/f_{C}(\theta(t,x))\right)dx\\ \nonumber &\hspace{0.2in}+\int_{\mathfrak{s}}^T \int_{\Omega} \frac{\eta}{\theta}|\ebdiv u|^2\chi_{\theta\leq C}(t,x)dxdt\\ &\hspace{0.2in}+\int_{\mathfrak{s}}^T \int_{\Omega}\frac{\kappa |\nabla \theta|^2}{\theta^2}\chi_{\theta\leq C}(t,x)dxdt,\label{defu} \end{align} and where $\chi_{\theta\leq C}=\chi_{\{(t,x)\in [0,T]\times\Omega:\theta(t,x)\leq C\}}$. \vspace{0.2in} \noindent {\bf Step $1$}: {\it Boundedness of $U_{k+1}$.} \vspace{0.2in} \noindent Note that $\tilde{s}_{C_{k+1}}=0$ on $\{(t,x):\theta \geq C_{k+1}\}$, while on the set $\{\theta<C_{k+1}\}$, we use ($\ref{growth1}$) to estimate $\tilde{s}_{C_{k+1}}$, obtaining \begin{align*} s(\rho,C_{k+1})-s(\rho,\theta)&=\int_{\theta}^{C_{k+1}} (\partial_\theta s)(\rho,\omega)d\omega=\int_{\theta}^{C_{k+1}} -\frac{3\rho}{2\omega^{5/2}}S'(\frac{\rho}{\omega^{3/2}})d\omega\\ &\geq c\int_{\theta}^{C_{k+1}} \frac{3}{2\omega}d\omega =c\log\left(C_{k+1}/\theta\right).
\end{align*} Invoking the local entropy inequality $(\ref{eq_admiss})$ and recalling $\rho_0(x)=\rho(0,x)$, $\theta_0(x)=\theta(0,x)$, we therefore get the inequality \begin{align*} U_{k+1}&\leq C\left(\int_{0}^T\int_{\Omega}\chi_{\theta\leq C_{k+1}}\rho^2(-\partial_\rho s(\rho,C_{k+1}))|\ebdiv u|dxdt\right.\\ &\hspace{2.6in}\left.+\int_{\Omega} \rho_0\tilde{s}_{C_{k+1}}(\rho_0,\theta_0)dx\right). \end{align*} Now, making use of ($\ref{eqAA1}$) and ($\ref{eqAA2}$), we obtain \begin{align} \nonumber U_{k+1}&\leq C\int_0^T\int_{\Omega} \chi_{\theta\leq C_{k+1}}\frac{\rho^2}{C_{k+1}^{3/2}}S'(\frac{\rho}{C_{k+1}^{3/2}})|\ebdiv u|dxdt\\ \nonumber &\hspace{0.2in}-\int_{\Omega}\int_{f_{C_{k+1}}(\theta_0)}^{C_{k+1}} \frac{3\rho_0^2}{2\omega^{5/2}}S'(\frac{\rho_0}{\omega^{3/2}})d\omega dx\\ \nonumber &\leq C\int_0^T\int_{\Omega} \chi_{\theta\leq C_{k+1}}\rho(t,x)|\ebdiv u|dxdt\\ &\hspace{0.2in}+\frac{3}{2}\int_{\Omega}\rho_0\log(C_{k+1}/f_{C_{k+1}}(\theta_0))dx.\label{eqaaa1} \end{align} Using H\"older in the first term followed by the hypotheses $\rho\in L^\infty(0,T;L^\omega(\Omega))$, $u\in L^2(0,T;H_0^1(\Omega))$ and ($\ref{eqAA3}$), we therefore obtain \begin{align} U_{k+1}&\leq C^*\label{cstar} \end{align} for some $C^*>0$. \vspace{0.2in} \noindent {\bf Step $2$}: {\it Local entropy estimate for $U_{k+1}$.} \vspace{0.2in} Arguing as above, we again invoke the local entropy inequality $(\ref{eq_admiss})$ and expand the interval of integration to $[T_{k},T_{k+1}]$ (using $-\partial_\rho s\geq 0$), which gives the estimate \begin{align*} U_{k+1}&\leq C\left(\int_{T_k}^{T}\int_{\Omega}\chi_{\theta\leq C_{k+1}}\rho^2(-\partial_\rho s(\rho,C_{k+1}))|\ebdiv u|dxdt\right.\\ &\hspace{2.6in}\left.+\int_{\Omega} \rho\tilde{s}_{C_{k+1}}(\mathfrak{s},x)dx\right) \end{align*} for a.e. $T_k\leq \mathfrak{s}\leq T_{k+1}$.
Integrating both sides of this inequality over $\mathfrak{s}\in [T_k,T_{k+1}]$ and dividing by $T_{k+1}-T_k$, we obtain \begin{align*} U_{k+1}&\leq C\left(\int_{T_k}^T\int_{\Omega} \chi_{\theta\leq C_{k+1}}\rho^2(-\partial_\rho s(\rho,C_{k+1}))|\ebdiv u|dxdt\right.\\ &\hspace{1.6in}\left.+\frac{1}{T_{k+1}-T_k}\int_{T_{k}}^{T_{k+1}}\int_{\Omega} \rho\tilde{s}_{C_{k+1}}(\mathfrak{s},x)dxd\mathfrak{s}\right). \end{align*} Arguing as in $(\ref{eqaaa1})$ and recalling that the sequence $(C_k)$ is decreasing, the Cauchy-Schwarz inequality gives the bound \begin{align*} &\int_{T_k}^T\int_{\Omega} \chi_{\theta\leq C_{k+1}}\rho^2(-\partial_\rho s(\rho,C_{k+1}))|\ebdiv u|dxdt\\ &\hspace{0.2in}\leq C\left\lVert \frac{\eta^{1/2}\ebdiv u}{\theta^{1/2}}\chi_{\theta\leq C_{k}}\right\rVert_{L^2([T_k,T];L^2(\Omega))}\left\lVert \rho\chi_{\theta\leq C_{k+1}}\right\rVert_{L^2([T_k,T];L^2(\Omega))}\\ &\hspace{0.2in}\leq CU_k^{1/2}\left(\int_{T_{k}}^T\int_{\Omega} \rho(t,x)^2\chi_{\theta \leq C_{k+1}}(t,x)dxdt\right)^{1/2}, \end{align*} where we have used the constraint (i) appearing at the end of Section $2$. This in turn gives \begin{align} U_{k+1}&\leq C\left[U_k^{1/2}(I)^{1/2}+(II)\right]\label{eq0bd} \end{align} with \begin{align*} (I)&:=\int_{T_{k}}^T\int_{\Omega} \rho(t,x)^2\chi_{\theta \leq C_{k+1}}(t,x)dxdt,\\ (II)&:=\frac{1}{T_{k+1}-T_k}\int_{T_k}^{T_{k+1}}\int_{\Omega} \rho\tilde{s}_{C_{k+1}}(\mathfrak{s},x)dxd\mathfrak{s}. \end{align*} The next two steps of the argument consist of estimating the terms $(I)$ and $(II)$. \vspace{0.2in} \noindent {\bf Step $3$}: {\it Tchebyshev estimates for (I).} \vspace{0.2in} \noindent Define \begin{align*} F_k(\theta)&:=\chi_{\theta\leq C_k}\log\left(C_k/\theta\right),\\ R_k&:=\log\left(C_k/C_{k+1}\right), \end{align*} and observe that $(C_k)$ decreasing implies that the inequality $R_k\leq F_k(\theta(t,x))$ holds on the set $\{\theta<C_{k+1}\}$.
Fix parameters $\alpha$, $\beta$, $p$ and $q$ satisfying \begin{align} \alpha\in (0,2),\quad \beta>0\quad\textrm{and}\quad p,q\geq 1\label{req1} \end{align} to be determined later in the argument, and let $p'$ and $q'$ be the conjugate exponents to $p$ and $q$. Using H\"older, we obtain the estimate \begin{align} \nonumber (I)&\leq \frac{1}{R_k^\beta}\int_{T_k}^T\int_{\Omega} \rho(t,x)^2F_k(\theta(t,x))^\beta dxdt\\ \nonumber &\leq \frac{1}{R_k^\beta}\lVert \rho\rVert^{2-\alpha}_{L^{(2-\alpha)p}([T_k,T];L^{(2-\alpha)q}(\Omega))}\lVert \rho^{\alpha} F_k(\theta)^\beta\rVert_{L^{p'}([T_k,T];L^{q'}(\Omega))}\\ &\leq \frac{C(T,|\Omega|,\alpha,p,q)}{R_k^\beta}\lVert \rho\rVert_{L^\infty(L^{\omega}(\Omega))}^{2-\alpha}\lVert \rho^{\alpha} F_k(\theta(t,x))^\beta\rVert_{L^{p'}([T_k,T];L^{q'}(\Omega))}\label{eqaa1} \end{align} provided that $\alpha$ and $q$ satisfy \begin{align} (2-\alpha)q<\omega.\label{req2} \end{align} We now turn to the task of estimating $\lVert \rho^\alpha F_k^\beta\rVert_{L^{p'}L^{q'}}$. In particular, we obtain \begin{align} \nonumber \lVert \rho^{\alpha} F_k^\beta\rVert_{L^{p'}L^{q'}}&=\lVert (\rho F_k)^{\alpha/\beta}F_k^{1-\frac{\alpha}{\beta}}\rVert_{L^{\beta p'}L^{\beta q'}}^\beta\\ \nonumber &\leq \lVert (\rho F_k)^{\alpha/\beta}\rVert_{L^{\infty}L_x^{\beta/\alpha}}^\beta\lVert F_k^{1-\frac{\alpha}{\beta}}\rVert_{L^\frac{2}{1-\frac{\alpha}{\beta}}L^\frac{6}{1-\frac{\alpha}{\beta}}}^\beta\\ \nonumber &=\lVert \rho F_k\rVert_{L^\infty L^1}^\alpha\lVert F_k\rVert_{L^2L^6}^{\beta-\alpha}\\ &\leq CU_{k}^\alpha\lVert F_k\rVert_{L^2L^6}^{\beta-\alpha}\label{eqaa2} \end{align} where we have set $p'=\frac{2}{\beta-\alpha}$ and $q'=\frac{6}{5\alpha+\beta}$, i.e. \begin{align} p=\frac{2}{2+\alpha-\beta}, \quad q=\frac{6}{6-5\alpha-\beta}.\label{req3} \end{align} The estimate of $\lVert F_k\rVert_{L^2L^6}$ is based on the following inequality of Sobolev type adapted to the norms appearing in $U_k$, which we recall from \cite{MelletVasseur}.
\begin{lemma}[Sobolev-type inequality, \cite{MelletVasseur}] \label{lem4} Let $\Omega\subset\mathbb{R}^3$ be a bounded domain with smooth boundary. Given $T>0$ and $\rho\in L^\infty([0,T];L^\omega(\Omega))$ for some $\omega>3$ such that \begin{align*} t\mapsto \int_{\Omega} \rho(t,x)dx \end{align*} is constant in $t$, there exists $C=C(\Omega,T,\rho,\omega)>0$ such that the inequality \begin{align*} \lVert F\rVert_{L^2([0,T];L^6(\Omega))}\leq C(\lVert \rho F\rVert_{L^\infty([0,T];L^1(\Omega))}+\lVert \nabla F\rVert_{L^2([0,T];L^2(\Omega))}) \end{align*} holds for every measurable $F:[0,T]\times\Omega\rightarrow [0,\infty)$. \end{lemma} Note that $\int \rho(t,x)dx=\int \rho_0(x)dx$ for a.e. $t\in [0,T]$. Recalling that $\rho\in L^\infty([0,T];L^\omega(\Omega))$ is satisfied by hypothesis, we may therefore invoke Lemma $\ref{lem4}$ in our setting to obtain \begin{align} \nonumber \lVert F_k\rVert_{L^2L^6}^{\beta-\alpha}&\leq C\left(U_k+\lVert \chi_{\theta\leq C_k}\frac{\nabla\theta}{\theta}\rVert_{L^2L^2}\right)^{\beta-\alpha}\\ &\leq C\left(U_k+U_k^{1/2}\right)^{\beta-\alpha}.\label{eqaa3} \end{align} Combining ($\ref{eqaa1}$) with ($\ref{eqaa2}$) and ($\ref{eqaa3}$) then gives \begin{align} (I)\leq \frac{C}{R_k^\beta}\left(U_k^\beta+U_k^{(\alpha+\beta)/2}\right).\label{eq1bd} \end{align} \vspace{0.2in} \noindent {\bf Step $4$}: {\it Estimate for (II).} \vspace{0.2in} \noindent Arguing as in ($\ref{eqaaa1}$) and recalling that the sequence $(C_k)$ is decreasing, we note that ($\ref{eqAA1}$) and $(\ref{eqAA2})$ imply \begin{align} \tilde{s}_{C_{k+1}}(\rho(\mathfrak{s},x),\theta(\mathfrak{s},x))&\leq c\chi_{\theta\leq C_{k+1}}(\mathfrak{s},x)F_k(\theta(\mathfrak{s},x))\label{eqAAAB1} \end{align} for a.e. $\mathfrak{s}\in [T_k,T_{k+1}]$ and a.e. $x\in \Omega$.
Invoking H\"older and arguing as in Steps $1$ and $2$ above, we obtain \begin{align*} (II)&\leq \frac{1}{T_{k+1}-T_k}\lVert F_{k}\rVert_{L^2([T_k,T_{k+1}];L^6(\Omega))}\lVert \rho\chi_{\theta\leq C_{k+1}}\rVert_{L^2([T_k,T_{k+1}];L^{6/5}(\Omega))}\\ &\leq \frac{1}{T_{k+1}-T_k}(U_k+U_k^{1/2})\lVert \rho\chi_{\theta<C_{k+1}}\rVert_{L^2([T_k,T_{k+1}];L^{6/5}(\Omega))}. \end{align*} On the other hand, proceeding as in Step $3$, we fix $\beta_1\in (0,1)$ to be determined later in the argument, and recall that $R_k\leq F_k(\theta)$ on $\{\theta<C_{k+1}\}$. This yields \begin{align*} &\lVert \rho\chi_{\theta<C_{k+1}}\rVert_{L^2([T_k,T_{k+1}];L^{6/5}(\Omega))}\\ &\hspace{0.2in}=\lVert \rho^{6/5}\chi_{\theta<C_{k+1}}\rVert_{L^{5/3}([T_k,T_{k+1}];L^1(\Omega))}^{5/6}\\ &\hspace{0.2in}\leq \frac{C(T)}{R_k^{\frac{5\beta_1}{6}}}\lVert \rho^{6/5} F_k(\theta)^{\beta_1}\rVert_{L^{\infty}([T_k,T];L^{1}(\Omega))}^{5/6}\\ &\hspace{0.2in}\leq \frac{C(T)}{R_k^{\frac{5\beta_1}{6}}}\lVert \rho^{\frac{6}{5}-\beta_1}\rVert_{L^{\infty}([T_k,T];L^{1/(1-\beta_1)}(\Omega))}^{5/6}\lVert (\rho F_k(\theta))^{\beta_1}\rVert_{L^{\infty}([T_k,T];L^{1/\beta_1})}^{5/6}\\ &\hspace{0.2in}\leq \frac{C(T,|\Omega|,\alpha_1,q_1)}{R_k^{\frac{5\beta_1}{6}}}\lVert \rho\rVert_{L^\infty(L^\omega(\Omega))}^{1-5\beta_1/6}\lVert \rho F_k(\theta)\rVert_{L^{\infty}([T_k,T];L^{1})}^{5\beta_1/6}\\ &\hspace{0.2in}\leq \frac{C(T,|\Omega|,\alpha_1,q_1)}{R_k^{5\beta_1/6}}\lVert \rho\rVert_{L^\infty (L^\omega(\Omega))}^{1-5\beta_1/6}U_k^{5\beta_1/6}, \end{align*} provided that $(\frac{6}{5}-\beta_1)/(1-\beta_1)<\omega$. 
This in turn gives \begin{align} (II)&\leq \frac{C}{(T_{k+1}-T_k)R_k^{5\beta_1/6}}(U_k+U_k^{1/2})U_k^{5\beta_1/6}\label{eq2bd} \end{align} \vspace{0.2in} \noindent{\bf Step $5$}: {\it Conclusion of the argument.} \vspace{0.2in} \noindent Combining ($\ref{eq0bd}$) with ($\ref{eq1bd}$) and ($\ref{eq2bd}$), we obtain \begin{align} U_{k+1}\leq \frac{C}{R_k^{\beta/2}}(U_k^{\frac{1+\beta}{2}}+U_k^{\frac{2+\beta+\alpha}{4}})+\frac{C}{(T_{k+1}-T_k)R_k^{5\beta_1/6}}(U_k^{1+\frac{5\beta_1}{6}}+U_k^{\frac{1}{2}+\frac{5\beta_1}{6}}) \label{finalUk} \end{align} whenever $\alpha,\beta,p$ and $q$ satisfy ($\ref{req1}$), ($\ref{req2}$), ($\ref{req3}$) and $\beta_1$, $q_1$ satisfy $\beta_1>0$, $q_1\geq 1$, $(\frac{6}{5}-\beta_1)/(1-\beta_1)<\omega$. To complete the proof, we will use ($\ref{finalUk}$) (with an appropriate choice of parameters) and Lemma $\ref{lem_ineq}$ to conclude that $U_k\rightarrow 0$ as $k\rightarrow\infty$ for suitably chosen sequences $(C_k)$ and $(T_k)$, with limits $C_k\rightarrow C_\infty>0$ and $T_k\rightarrow \tau>0$, respectively. Temporarily postponing the choice of $(C_k)$ and $(T_k)$, we remark that in order to apply Lemma $\ref{lem_ineq}$ the powers of $U_k$ appearing on the right side of ($\ref{finalUk}$) must be greater than $1$. This is the primary motivation behind our choice of parameters; combining this requirement with ($\ref{req1}$) and ($\ref{req3}$), it suffices to choose $\alpha$ and $\beta$ satisfying \begin{align*} |\beta-2|<\alpha<\frac{6-\beta}{5}, \quad \beta>1. \end{align*} This condition is compatible with ($\ref{req2}$) for $\omega>3$; we therefore choose such a pair $(\alpha,\beta)$. To choose $\beta_1$, we note that the condition $\frac{1}{2}+\frac{5\beta_1}{6}>1$ is satisfied for any $\beta_1>\frac{3}{5}$. Moreover, the condition $(\frac{6}{5}-\beta_1)/(1-\beta_1)<\omega$ is satisfied for $\beta_1$ sufficiently close to $\frac{3}{5}$; choosing such a value of $\beta_1$, we see that ($\ref{finalUk}$) holds. 
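As a concrete sanity check (not part of the proof), the feasibility of this parameter selection can be verified numerically; the values of $\omega$, $\alpha$, $\beta$ and $\beta_1$ below are illustrative trial choices, not canonical ones.

```python
# Illustrative feasibility check of the Step 5 parameter constraints for a
# sample value omega = 3.5 (any omega > 3 admits a similar choice).
omega = 3.5
alpha, beta, beta1 = 0.05, 2.0, 0.62     # trial values, not canonical

p = 2 / (2 + alpha - beta)               # conjugate-exponent relations (req3)
q = 6 / (6 - 5 * alpha - beta)

assert abs(beta - 2) < alpha < (6 - beta) / 5 and beta > 1
assert p >= 1 and q >= 1                                  # (req1)
assert (2 - alpha) * q < omega                            # (req2)
assert (1 + beta) / 2 > 1 and (2 + beta + alpha) / 4 > 1  # superlinear powers
assert beta1 > 3 / 5 and (6 / 5 - beta1) / (1 - beta1) < omega
assert 1 / 2 + 5 * beta1 / 6 > 1
print("all parameter constraints satisfied")
```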
We now turn to the choice of the sequences $(C_k)$ and $(T_k)$. Fix $M>1$ to be determined later in the argument and let $\tau\in [0,T]$ be given. Now, setting \begin{align*} C_k=\exp(-M(1-2^{-k})) \end{align*} and \begin{align*} T_k=\tau(1-2^{-k}), \end{align*} and using Step $1$, we obtain \begin{align*} \frac{U_{k+1}}{C^*}&\leq \frac{C2^{(k+1)\beta/2}}{C^*M^{\beta/2}}U_k^{\gamma_1}+\frac{C2^{(k+1)(1+5\beta_1/6)}}{\tau C^*M^{5\beta_1/6}}U_k^{\gamma_2}\\ &=\frac{C2^{(k+1)\beta/2}(C^*)^{\gamma_1-1}}{M^{\beta/2}}\left(\frac{U_k}{C^*}\right)^{\gamma_1}\\ &\hspace{0.2in}+\frac{C2^{(k+1)(1+5\beta_1/6)}(C^*)^{\gamma_2-1}}{\tau M^{5\beta_1/6}}\left(\frac{U_k}{C^*}\right)^{\gamma_2}\\ &\leq C^{k+1}\left(\frac{U_k}{C^*}\right)^{\max\{\gamma_1,\gamma_2\}} \end{align*} for some $\gamma_1,\gamma_2>1$, where $C^*$ is as in ($\ref{cstar}$). Invoking Lemma $\ref{lem_ineq}$, we find $C_0^*>0$ such that $U_{1}\leq C_0^*$ implies $U_k\rightarrow 0$ as $k\rightarrow\infty$. On the other hand, by Step $1$, we have \begin{align*} U_1\leq \frac{C}{M^{\beta/2}}+\frac{C}{\tau M^{5\beta_1/6}}. \end{align*} Choosing $M$ sufficiently large, we obtain $U_k\rightarrow 0$ as desired. We now conclude the proof as in \cite{MelletVasseur}. In particular, taking the limit (since all integrands involved are nonnegative) we have \begin{align*} \int_\tau^T\int_{\Omega}\frac{\kappa |\nabla\theta|^2}{\theta^2}\chi_{\theta\leq e^{-M}}(t,x)dxdt\leq \liminf_{k\rightarrow\infty} \int_\tau^T\int_{\Omega} \frac{\kappa |\nabla\theta|^2}{\theta^2}\chi_{\theta\leq C_k}dxdt=0 \end{align*} so that after observing the identity $\frac{|\nabla\theta|^2}{\theta^2}\chi_{\theta\leq e^{-M}}=|\nabla\log(e^{-M}/f_{e^{-M}}(\theta))|^2$, we obtain that $x\mapsto \log(e^{-M}/f_{e^{-M}}(\theta))$ is constant in $x$ for a.e. $t\in [\tau,T]$; that is, for a.e. $t$ we can find $A(t)\geq 0$ such that $A(t)=\log(e^{-M}/f_{e^{-M}}(\theta(t,x)))$ for a.e. $x\in \Omega$. 
Taking the limit once more and using the weak (renormalized) form of the continuity equation $\partial_t\rho+\ebdiv(\rho u)=0$, we therefore have \begin{align*} 0&=\int_\Omega \rho(t,x)\log(e^{-M}/f_{e^{-M}}(\theta(t,x)))dx\\ &=A(t)\int_{\Omega} \rho(t,x)dx\\ &=A(t)\int_{\Omega} \rho_0dx,\quad \textrm{a.e.}\quad t\in [\tau,T]. \end{align*} We therefore obtain $A(t)\equiv 0$ for a.e. $t\in [\tau,T]$, which establishes \begin{align*} \theta(t,x)\geq e^{-M} \end{align*} for a.e. $t\in [\tau,T]$ and a.e. $x\in \Omega$. This completes the proof of Theorem $\ref{thm_temp1}$. \end{proof}
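The contraction mechanism of Lemma $\ref{lem_ineq}$ that drives the above iteration can also be illustrated numerically. The threshold $C^{-2\beta/(\beta-1)^2}$ used below is one standard sufficient (not optimal) choice for $C_0^*$, and the sample values of $C$ and $\beta$ are purely illustrative.

```python
# Numerical illustration of Lemma (lem_ineq): iterating the extremal case
# W_{k+1} = C^{k+1} * W_k**beta with C, beta > 1 drives W_k to zero once
# W_1 lies below the sufficient threshold C**(-2*beta/(beta-1)**2).
C, beta = 2.0, 1.5
W = C ** (-2 * beta / (beta - 1) ** 2)   # = 2**-12 for these sample values
seq = [W]
for k in range(1, 40):
    W = C ** (k + 1) * W ** beta         # saturate the lemma's inequality
    seq.append(W)
print(seq[-1])  # driven to (numerical) zero
```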
\section*{Appendix A} We show below that within the statistical ensemble $(N{\bm P}T)$, there is a natural way to define and compute an internal instantaneous first Piola-Kirchhoff tensor that adopts a virial form and whose statistical average at equilibrium is automatically equal to the externally applied Piola-Kirchhoff stress. We also discuss the link between this instantaneous first Piola-Kirchhoff stress and an instantaneous Cauchy stress, even though the prior computation of a Cauchy stress is not required for our ($N\bm{P}T$) calculations.\\ As explained in the text, within the $(N{\bm P}T)$ ensemble, the internal degrees of freedom associated with the fluctuating box are the entries of the deformation gradient $\bm F$ whose dynamics are given by Eq.~\eqref{eq:lg_ext} which, for the sake of completeness, we recall here \begin{linenomath*} \begin{align} \frac{d F_{ij}}{dt} & = - \gamma^{-1} {\frac {\partial \tilde{H}}{\partial F_{ij}}} + \sqrt{2 k_B T \gamma^{-1}} \, \xi_{ij}(t) \ \ \ i,j = 1, ..., 3 \ , \label{eq:a1} \end{align} \end{linenomath*} where $ \xi_{ij}(t)$ is a white Gaussian noise such that $ \langle \xi_{ij}(t) \rangle =0$, $ \langle \xi_{ij}(t) \xi_{lm}(t') \rangle =\delta_{ij}\delta_{lm}\delta(t-t')$ and $ \langle \eta_i^n(t) \xi_{lm}(t') \rangle =0$. $\delta_{ij}$ and $\delta_{lm}$ are Kronecker symbols and $\delta(t-t')$ stands for the Dirac-delta distribution. As explained in the main text, the driving forces must be computed with the extended Hamiltonian $\tilde{H}$ given by Eq.~(\ref{eq:log}) \begin{linenomath*} \begin{equation} \tilde{H}= \Phi(\{F_{ij}L_j^0 \tilde{x}_j^n\}) + V_0 P_{ij} F_{ij} - N k_B T \ln \left( V_0 \det \bm{F}\right), \label{eq:a2} \end{equation} \end{linenomath*} which must be considered as a function of the scaled coordinates $\{\tilde{x}_i^n\}$ and of the deformation gradient $\bm{F}$. 
The scaled coordinates are related to the initial atomic coordinates by \begin{linenomath*} \begin{equation} \tilde{x}_i^n = \left( \bm{H}^{-1}\right)_{ij} x^n_j, \ \ \ \ \ \ \ i=1,2,3, \label{eq:a3} \end{equation} \end{linenomath*} where the matrix $\bm{H}$ is defined by $\bm{H} = \bm{F} \bm{L}^0$, with $\bm{L}^0$ a diagonal matrix containing the lengths of the orthogonal vectors that define the initial simulation box. The driving force that enters Eq.~\eqref{eq:a1} is given by \begin{equation} \begin{split} \frac {\partial \tilde{H}}{\partial F_{ij}} &= \frac{\partial \Phi}{\partial F_{ij}}+V_0P_{ij}-N k_B TF_{ij}^{-T}\\ &= \sum_{n,l}\frac{\partial \Phi}{\partial x_l^n}\frac{\partial x_l^n}{\partial F_{ij}}+V_0P_{ij}-Nk_BTF_{ij}^{-T}. \label{eq:a4} \end{split} \end{equation} Using Eq.~\eqref{eq:a3}, it is straightforward to show that \begin{linenomath*} \begin{equation} \frac{\partial x_l^n}{\partial F_{ij}}=\delta_{li}x_k^nF_{kj}^{-T}, \label{eq:a5} \end{equation} \end{linenomath*} which, inserted in Eq.~\eqref{eq:a4}, leads to \begin{linenomath*} \begin{equation} \frac {\partial \tilde{H}}{\partial F_{ij}} = \sum_{n}\frac{\partial \Phi}{\partial x_i^n}x_k^nF_{kj}^{-T}+V_0P_{ij}-Nk_BTF_{ij}^{-T}. \label{eq:a6} \end{equation} \end{linenomath*} Introducing the virial tensor $\bm{\mathcal{V}}$, defined by \begin{linenomath*} \begin{equation} \mathcal{V}_{ij}=-\sum_n\frac{\partial \Phi}{\partial x_i^n}x_j^n, \label{eq:a7} \end{equation} \end{linenomath*} the driving forces become \begin{linenomath*} \begin{equation} \frac {\partial \tilde{H}}{\partial F_{ij}} = -\{Nk_BT\delta_{ik}+\mathcal{V}_{ik}\}F_{kj}^{-T}+V_0P_{ij}. \label{eq:a8} \end{equation} \end{linenomath*} We now define the instantaneous internal first Piola-Kirchhoff stress as \begin{linenomath*} \begin{equation} P_{ij}^{inst}=\frac{1}{V_0}\left[(Nk_BT\mathbb{1}+\bm{\mathcal{V}}){\bm F}^{-T}\right]_{ij}, \label{eq:a9} \end{equation} \end{linenomath*} where $\mathbb{1}$ is the identity matrix.
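For readers implementing the scheme, Eq.~\eqref{eq:a9} is straightforward to evaluate from positions, forces and the box matrix. The sketch below uses a hypothetical harmonic potential $\Phi=\sum_n |x^n|^2$ and a randomly perturbed $\bm{F}$ purely for illustration; only the final two lines reproduce Eqs.~\eqref{eq:a7} and \eqref{eq:a9}.

```python
import numpy as np

# Sketch of the instantaneous first Piola-Kirchhoff stress of Eq. (a9),
#   P^inst = (1/V0) (N kB T 1 + V) F^{-T},
# with the virial V_ij = -sum_n (dPhi/dx_i^n) x_j^n of Eq. (a7).
# Hypothetical inputs: harmonic potential Phi = sum_n |x^n|^2.
rng = np.random.default_rng(0)
N, kBT, V0 = 8, 1.0, 10.0
x = rng.standard_normal((N, 3))                      # atomic positions
f = -2.0 * x                                         # f^n = -dPhi/dx^n
F = np.eye(3) + 0.05 * rng.standard_normal((3, 3))   # deformation gradient

virial = np.einsum('ni,nj->ij', f, x)                # V_ij = sum_n f_i^n x_j^n
P_inst = (N * kBT * np.eye(3) + virial) @ np.linalg.inv(F).T / V0
```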
Eq.~\eqref{eq:a8} becomes \begin{linenomath*} \begin{equation} \frac {\partial \tilde{H}}{\partial F_{ij}} = V_0(P_{ij}-P_{ij}^{inst}). \label{eq:a10} \end{equation} \end{linenomath*} The kinetic equation \eqref{eq:a1} then reads \begin{linenomath*} \begin{equation} \frac{d F_{ij}}{ dt}=-\gamma^{-1}V_0(P_{ij}-P_{ij}^{inst}) +\sqrt{2 k_B T \gamma^{-1}} \, \xi_{ij}(t). \label{eq:a11} \end{equation} \end{linenomath*} At equilibrium, the statistical average of the l.h.s. of this equation is equal to zero (by "statistical average", we mean a time average over a sufficiently long time window). As, by definition, the statistical average of the noise term is also equal to zero, we get \begin{linenomath*} \begin{equation} \langle P_{ij}^{inst} \rangle =P_{ij}, \label{eq:a12} \end{equation} \end{linenomath*} where $ \langle X \rangle $ stands for the statistical average of $X$. This equation invites us to define an internal first Piola-Kirchhoff stress ${\bm P}^{int}$ as the statistical average of the instantaneous stress ${\bm P}^{inst}$: \begin{linenomath*} \begin{equation} {\bm P}^{int} = \left \langle {\bm P}^{inst} \right \rangle = \frac{1}{V_0} \left \langle (Nk_BT\mathbb{1}+\bm{\mathcal{V}}){\bm F}^{-T} \right \rangle. \label{eq:a13} \end{equation} \end{linenomath*} Once equilibrium is reached, this internal first Piola-Kirchhoff stress, which is defined unambiguously only at equilibrium because it relies on a time average, equilibrates exactly the imposed Piola-Kirchhoff stress: \begin{linenomath*} \begin{equation} {\bm P}^{int} = \bm P. \label{eq:a14} \end{equation} \end{linenomath*} We note that the internal stress defined in Eq.~\eqref{eq:a13} adopts a virial form, as is common for any internal stress defined at the atomistic scale and, therefore, linked to interatomic forces.
We also note that the numerical computation of this Piola-Kirchhoff stress is straightforward and does not require the prior computation of any other stress, such as a Cauchy stress, even though an internal Cauchy stress could also be independently defined and related, through the usual relations, to our internal first Piola-Kirchhoff stress (see below). Finally, we also note that this internal first Piola-Kirchhoff stress is not a local quantity defined at each point within the simulation box but rather a global quantity associated with the whole system. Therefore, its computation does not require any recipe to define and compute numerically a local stress, such as the ones proposed by Hardy \cite{hardy1982formulas}.\\ Up to this point, the first Piola-Kirchhoff stress ${\bm P}^{int}$ has been directly defined in the $(N\bm{P}T)$ ensemble within which the deformation gradient ${\bm F}$ is a fluctuating quantity. It has naturally emerged within the kinetic equations associated with the box shape and is such that, at equilibrium, it equilibrates the externally applied first Piola-Kirchhoff stress. We show now that, as expected, it may also be associated, by conjugacy, with the deformation gradient ${\bm F}$. To show this, we need to introduce the canonical $(N\bm{F}T)$ statistical ensemble, in which $\bm{F}$ is fixed, and to compute the associated free energy $\mathcal{F}$ whose derivative with respect to $\bm{F} $, when properly averaged, will lead to ${\bm P}^{int}$.\\ Within the $(N\bm{F}T)$ ensemble, the partition function $Z$ is given by \begin{linenomath*} \begin{equation} Z=\frac{1}{\Lambda^{3N}N!}\Pi_{n,i}\int dx_i^n e^{-\beta\Phi(\{x_i^n\})}, \label{eq:a15} \end{equation} \end{linenomath*} where $\Phi(\{x_i^n\})$ is the interatomic potential and $\beta= 1 /(k_B T)$. The term $\Lambda^{3N}$, where $\Lambda$ is the de Broglie wavelength, is reminiscent of the quantum and therefore discrete nature of the problem.
It appears in the classical limit of quantum mechanics and ensures a proper normalization of entropy and free energy. Our aim here is to compute the derivative of the free energy $\mathcal{F}$ with respect to the deformation gradient $\bm{F}$. Therefore, we must introduce the scaled coordinates $\{\tilde{x}_i^n\}$ which are related to the initial coordinates $\{x_i^n\}$ through the deformation gradient $\bm F$: \begin{linenomath*} \begin{equation} \nonumber x_i^n=(\bm F \bm L^0)_{ij}\tilde{x}_j^n. \end{equation} \end{linenomath*} The partition function then becomes \begin{linenomath*} \begin{equation} \nonumber Z=\frac{1}{\Lambda^{3N}N!}(\det {\bm F \bm L}^0)^N \Pi_{n,i}\int_0^1 d\tilde x_i^n e^{-\beta\Phi( \{\bm F \bm L^0 \tilde{x}^n\})}. \end{equation} \end{linenomath*} Taking the derivative of the free energy $\mathcal{F}=-k_BT \log Z$ with respect to the deformation gradient $\bm F$, we get: \begin{equation} \begin{split} -\frac{\partial \mathcal{F}}{\partial F_{ij}} & =k_BT \left\{ NF_{ij}^{-T} + \frac{\Pi_{n,i}\int_0^1 d\tilde x_i^n \left(-\beta \frac{\partial \Phi}{\partial F_{ij}}\right)e^{-\beta\Phi}}{\Pi_{n,i}\int_0^1 d\tilde x_i^n e^{-\beta\Phi}} \right\}\\ & = k_BT \left\{ N F _{ij}^{-T} + \left \langle -\beta \frac{\partial \Phi}{\partial F_{ij}} \right \rangle _{N{\bm F}T} \right\} \\ & = k_BT \left\{ N F _{ij}^{-T} + \sum_n \left\langle -\beta \frac{\partial \Phi}{\partial x_{i}^n}x_k^n F_{kj}^{-T} \right\rangle _{N{\bm F}T} \right\}, \end{split} \label{eq:a16} \end{equation} where $\langle X \rangle_{N\bm{F}T}$ is the statistical average of $X$ within the ensemble ($N\bm{F}T$).
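The matrix-calculus identity $\partial \ln\det\bm{F}/\partial F_{ij}=F_{ij}^{-T}$, which produces the $Nk_BT\bm{F}^{-T}$ terms in Eqs.~\eqref{eq:a4} and \eqref{eq:a16}, can be checked by finite differences; the sketch below uses an arbitrary randomly perturbed invertible $\bm{F}$ for illustration.

```python
import numpy as np

# Finite-difference check of d(ln det F)/dF_ij = (F^{-T})_ij, the identity
# behind the N kB T F^{-T} terms in Eqs. (a4) and (a16).
rng = np.random.default_rng(2)
F = np.eye(3) + 0.1 * rng.standard_normal((3, 3))    # arbitrary invertible F
h = 1e-6
num = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        Fp, Fm = F.copy(), F.copy()
        Fp[i, j] += h                                # central differences
        Fm[i, j] -= h
        num[i, j] = (np.log(np.linalg.det(Fp))
                     - np.log(np.linalg.det(Fm))) / (2 * h)
err = np.max(np.abs(num - np.linalg.inv(F).T))
print(err)  # small: finite-difference error only
```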
We now define the canonical first Piola-Kirchhoff stress as the stress conjugate to the deformation gradient $\bm F$\footnote{Note that the sign convention used here is implied by the enthalpy definition $(\Phi + V_0 \bm{P}\bm{F})$ used to introduce the extended Hamiltonian in Eqs.~\eqref{eq:log} and \eqref{eq:a2}.} \begin{linenomath*} \begin{equation} P^{cano}_{ij} = - \frac{1}{V_0}\frac{\partial \mathcal{F}}{\partial F_{ij}}. \label{eq:a17} \end{equation} \end{linenomath*} Using Eq.~\eqref{eq:a16}, we get: \begin{linenomath*} \begin{equation} {\bm P}^{cano} = \frac{1}{V_0} \left \langle \left (Nk_BT\mathbb{1}+\bm{\mathcal{V}}\right){\bm F}^{-T} \right \rangle _{N{\bm F}T}, \label{eq:a18} \end{equation} \end{linenomath*} where, as it does not fluctuate within the $({N{\bm F}T})$ ensemble, the constant term $Nk_BT{\bm F}^{-T}$ has been included within the statistical average and where the virial tensor $\bm{\mathcal{V}}$ has been defined in Eq.~\eqref{eq:a7}.\\ Comparison of Eq.~\eqref{eq:a13}, which gives ${\bm P}^{int}$, the internal first Piola-Kirchhoff stress within the $(N{\bm P}T)$ ensemble, and Eq.~\eqref{eq:a18}, which gives ${\bm P}^{cano}$, the canonical first Piola-Kirchhoff stress defined within the $({N{\bm F}T})$ ensemble, shows that ${\bm P}^{int}$ and ${\bm P}^{cano}$ differ only through the different statistical averages associated with their respective statistical ensembles. Obviously, for any observable $X$, a statistical average within the $(N{\bm P}T)$ ensemble may be split into a statistical average at a fixed $\bm F$ followed by an average over the fluctuations of $\bm F$, which, with obvious notations, leads to \begin{linenomath*} \begin{equation} \langle X \rangle =\overline { \langle X \rangle }_{N{\bm F}T}^{\bm F}, \end{equation} \end{linenomath*} where, as in Eqs.~\eqref{eq:a12} and \eqref{eq:a13}, $ \langle X \rangle$ refers to the statistical average of $X$ within the $(N\bm{P}T)$ ensemble.
Thus, ${\bm P}^{int}$, defined in the $(N{\bm P}T)$ ensemble, is related to ${\bm P}^{cano}$, defined within the $(N{\bm F}T)$ ensemble, by \begin{linenomath*} \begin{equation} {\bm P}^{int}=\overline {{\bm P}^{cano}}^{\bm F}=\overline { \left \langle -\frac{1}{V_0}\frac{\partial \mathcal{F}}{\partial {\bm F}} \right \rangle }_{N{\bm F}T}^{\bm F}. \end{equation} \end{linenomath*} In conclusion, the internal Piola-Kirchhoff stress defined in Eq.~\eqref{eq:a13} is, as expected, related by conjugacy to the deformation gradient ${\bm F}$.\\ The introduction of the first Piola-Kirchhoff stress in the model used here is simply a consequence of the fact that it is the stress measure related by conjugacy to the deformation gradient $\bm{F}$, which, within a Lagrangian setup, is the degree of freedom associated with the fluctuating box. Of course, we could also introduce other stress measures, even though this is not needed to integrate our Langevin dynamics. As an example, we could define an instantaneous Cauchy stress in such a way that it is related to the instantaneous first Piola-Kirchhoff stress through the usual relation: \begin{linenomath*} \begin{equation} \bm{\sigma}_1^{inst} = \frac{1}{\det \bm{F}} \bm{P}^{inst} \bm{F}^T, \label{eq:a21} \end{equation} \end{linenomath*} which, with Eq.~\eqref{eq:a9}, leads to: \begin{linenomath*} \begin{equation} \bm{\sigma}_1^{inst} = \frac{1}{V}(Nk_BT\mathbb{1}+\bm{\mathcal{V}}), \label{eq:a22} \end{equation} \end{linenomath*} whose statistical average in the ($N \bm{P} T$) ensemble leads to the definition of an internal Cauchy stress: \begin{linenomath*} \begin{equation} \bm{\sigma}_1^{int} = \left \langle \frac{1}{V}(Nk_BT\mathbb{1}+\bm{\mathcal{V}}) \right \rangle. \label{eq:a23} \end{equation} \end{linenomath*} The question now arises as to whether this internal Cauchy stress is equal, at equilibrium, to an external Cauchy stress.
Naturally, we would like this external Cauchy stress $\bm{\sigma}$ to be related to the applied first Piola-Kirchhoff stress by the usual relation: \begin{linenomath*} \begin{equation} \bm{\sigma} = \frac{1}{\langle \det \bm{F} \rangle} \bm{P} \langle \bm{F} \rangle^T. \label{eq:a24} \end{equation} \end{linenomath*} Using Eqs.~\eqref{eq:a13} and \eqref{eq:a14}, which say that, at equilibrium in the $(N \bm{P} T)$ ensemble, the average of the instantaneous first Piola-Kirchhoff stress is equal to the applied first Piola-Kirchhoff stress, and Eq.~\eqref{eq:a21} for the definition of the instantaneous Cauchy stress, we get: \begin{linenomath*} \begin{equation} \bm{\sigma} = \frac{1}{\langle \det \bm{F} \rangle} \langle \det \bm{F} \bm{\sigma}_1^{inst} \bm{F}^{-T} \rangle \langle \bm{F}\rangle^T. \label{eq:a25} \end{equation} \end{linenomath*} Because of the coupling between the fluctuations of $\det \bm{F}$, $\bm{\sigma}_1^{inst}$ and $\bm{F}$, the r.h.s. of this equation cannot be further simplified. Therefore, strictly speaking, the statistical average of the instantaneous Cauchy stress as defined in Eqs.~\eqref{eq:a21}-\eqref{eq:a22} is not equal to the applied Cauchy stress defined in Eq.~\eqref{eq:a24}: \begin{linenomath*} \begin{equation} \bm{\sigma} \ne \langle \bm{\sigma}_1^{inst}\rangle. \label{eq:a26} \end{equation} \end{linenomath*} As an alternative, we could define an instantaneous Cauchy stress in the following way: \begin{linenomath*} \begin{equation} \bm{\sigma}_2^{inst} = \frac{1}{\langle \det \bm{F} \rangle} \bm{P}^{inst} \langle \bm{F} \rangle^T. \label{eq:a27} \end{equation} \end{linenomath*} Using Eqs.~\eqref{eq:a13}, \eqref{eq:a14} and \eqref{eq:a24} we immediately see that the statistical average of this instantaneous stress is now equal to the applied Cauchy stress: \begin{linenomath*} \begin{equation} \bm{\sigma} = \langle \bm{\sigma}_2^{inst}\rangle.
\label{eq:a28} \end{equation} \end{linenomath*} However, we note that, strictly speaking, the definition of the instantaneous Cauchy stress given in Eq.~\eqref{eq:a27} is not entirely satisfactory because its numerical application requires the prior knowledge of the statistical averages of $\det \bm{F}$ and $\bm{F}$. As a final comment, we note that in the thermodynamic limit of an infinite system, fluctuations may be neglected (provided the system is not going through a phase transition): definitions \eqref{eq:a21} and \eqref{eq:a27} become equivalent and Eq.~\eqref{eq:a26} becomes an equality. \section*{Appendix B: integration scheme} Our Langevin dynamics is defined by stochastic equations with white noise (see Eqs.~\eqref{eq:lg_ext}) that display the following generic form: \begin{linenomath*} \begin{equation} \text{d}X_i(t) = a_i(\{X_i\}) \text{d}t + B \text{d}W_i(t) \label{eq:app1} \end{equation} \end{linenomath*} where $a_i(\{X_i\})$ is a drift term, $B$ a noise amplitude and the differential $\text{d}W_i(t)$ denotes an infinitesimal increment of the Wiener process $W_i(t)$. The integral form of Eq.~(\ref{eq:app1}) is: \begin{linenomath*} \begin{equation} X_i(t) = \int_{0}^{t}a_i(\{ X_i(s)\}) \text{d}s + \\ \int_{0}^{t} B\,\text{d}W_i(s) + X_i(0), \label{eq:app2} \end{equation} \end{linenomath*} where the first term on the r.h.s. is a Riemann integral and the second term is a stochastic integral.
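Before describing the scheme used in practice, we note that the simplest discretization of Eq.~\eqref{eq:app1} is the Euler-Maruyama scheme, $X_i(t+\Delta t)=X_i(t)+a_i\Delta t+B\sqrt{\Delta t}\,\xi$. The sketch below applies it to a hypothetical linear drift $a_i(X)=-X_i$ (an Ornstein-Uhlenbeck process, whose stationary variance $B^2/2$ gives a convenient consistency check); the drift, ensemble size and step are illustrative, not those of our simulations.

```python
import numpy as np

# Euler-Maruyama discretization of dX = a(X) dt + B dW, applied to the
# hypothetical drift a(X) = -X; the stationary variance should approach
# B**2 / 2 (here 1) up to O(dt) discretization bias.
rng = np.random.default_rng(1)
a = lambda X: -X
X = np.zeros(2000)                       # ensemble of independent walkers
B, dt = np.sqrt(2.0), 0.01
for _ in range(5000):                    # t = 50 >> relaxation time 1
    X = X + a(X) * dt + B * np.sqrt(dt) * rng.standard_normal(X.shape)
print(X.var())  # close to 1
```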
To numerically evaluate Eq.~(\ref{eq:app2}) we used an explicit predictor-corrector method which results in the following numerical scheme \cite{Burrage2007-gj,baruffi2019overdamped}: \begin{linenomath*} \begin{equation} \label{eq:app3} \begin{split} X_i& (t + \Delta t) = X_i(t) + \\ &+ {\frac{\left[a_i(\{ \bar X_i(t+\Delta t)\})+a_i(\{X_i(t)\})\right]}{2}} \Delta t \\ & + B \Delta W_i(t), \end{split} \end{equation} \end{linenomath*} where the finite increment $\Delta W_i(t) = W_i(t + \Delta t) - W_i(t)$ can be calculated as $\Delta W_i(t) = \sqrt{\Delta t} \xi(t)$ with $\Delta t \in \mathbb{R}$ and $\xi(t)$ taken from a normal distribution with unit variance. $\bar X_i$ is evaluated using the explicit Euler method: \begin{linenomath*} \begin{equation} \bar X_i(t + \Delta t)= X_i(t) +a_i(\{X_i(t)\}) \Delta t + B\Delta W_i(t). \end{equation} \end{linenomath*} \section*{Appendix C: numerical procedure for variant identification} The numerical procedure used to identify the different variants is based on the definition of a local strain describing the transition from the BCC to the HCP structure. To calculate this local atomic strain, we implemented the following procedure as described in \cite{wuvariant}.\\ We start from a BCC structure with crystal axis $\langle 100 \rangle_{BCC}$ parallel to the mainframe axis. For every atom \textit{n}, we consider six possible sets of 14 neighbors by taking six different configurations defined based on the six cubic cells which can deform into the orthorhombic one, as schematically illustrated in Fig.~\ref{fig:basal_planes}. As already mentioned, the transformation from BCC to HCP cannot be fully described by a simple homogeneous deformation gradient, but supplementary atomic displacements applied on a sublattice of the deformed lattice are needed. These displacements consist in an alternate shuffling of $\{110\}$ planes along the $\langle \bar{1}10 \rangle$ directions. 
The overall deformation of the lattice does not describe the shuffling so, to numerically evaluate the local strain, half of the atoms of the original BCC lattice must be considered in the six possible configurations.\\ After defining the six possible sets of neighbors, we calculate for each atom \textit{n} six deformation gradients $\bm{F}^{(k)}_n$, following the approach proposed by Falk \cite{falk1998dynamics} and, by polar decomposition, six strains $\bm{U}^{(k)}_n$, each one associated to a $\{110\}_{BCC}$ plane in the undeformed configuration. The local strain for atom $n$ is then defined as the $\bm{U}^{(k)}_n$ with minimum $D_n^{{(k)}^2}$. Based on this assignment, we label the atom $n$ as belonging to the corresponding variant. \begin{figure} [htpb] \centering \includegraphics[width=0.45\textwidth]{./img/Fig_app_a_1.pdf} \includegraphics[width=0.22\textwidth]{./img/Fig_app_a_2.pdf} \caption{a) \textcolor{black}{Six possible orientations of the (110)$_{bcc}$ planes which may transform into (0001)$_{hcp}$ planes during the BCC-HCP transformation. Note that the central atom of the BCC cubic cell is not shown for the sake of clarity. b) Example of a neighbor set $\Omega_n$ (colored in black, containing 14 atoms) for a given atom $n$ (colored in red) for one of the six orientations.}} \label{fig:basal_planes} \end{figure} \section{Introduction}\label{sec:intro} Martensitic transformation (MT) is a particular sub-class of solid-to-solid structural phase transformations observed in many metals and alloys~\cite{Bhattacharya2003-lk}. In most general terms, the MT is a diffusionless displacive first-order phase transition. It involves a shear-dominated change of shape in the underlying crystal lattice on alteration of the external conditions, i.e., temperature and/or pressure or stress. 
Often, a complex microstructure governed by the symmetry of the different phases develops \cite{Khachaturyan1967-zz,Schryvers2002-sm,Delville2009-ed,Finel2010-rd,Salman2012-zg,Salmant,Vattre2016-ql,Salman2019-cg}, giving rise to exceptional mechanical properties such as the shape-memory effect \cite{Hildebrand2008-vy}, superelasticity \cite{Tanaka1488} and high strength \cite{Saito2003-iv,Zhang2017-mr}. In particular, materials of strong interest for the nuclear \cite{lemaignan2006zirconium}, aeronautic \cite{benmhenni2013micromechanical} and bio-medical industries \cite{leyens2003titanium,brunette2012titanium}, such as titanium, zirconium and their alloys, undergo martensitic phase transformations. The present paper specifically deals with the martensitic phase transition in pure titanium. However, the observations and conclusions of our work are potentially valid, at least in general terms, for other elements such as zirconium and for alloys undergoing the BCC$\rightarrow$HCP transition.\\ Titanium exists as a Hexagonal Close-Packed (HCP) phase ($\alpha$-phase) at room temperature and atmospheric pressure. On raising the pressure, while keeping the temperature constant, it transforms to a hexagonal phase ($\omega$ phase) at around 2 GPa \cite{banerjee2010phase}. When the temperature is raised at atmospheric pressure conditions, the HCP structure transforms to a Body-Centered Cubic (BCC) structure ($\beta$ phase) at 1155 K, stabilized by lattice vibrations \cite{heiming1991phonon}.
It is clear that, during any conventional transformation route (e.g., metal forming \cite{cabus2014influence}) or advanced fabrication process (e.g., additive manufacturing \cite{chastand2018comparative,thijs2013strong,thijs2010study}), the transition between the three possible phases can generally occur multiple times, combined with other metallurgical and mechanical phenomena such as plasticity \cite{Conti2004-sv,Bhattacharya2004-es,otsuka2018fft} or recrystallization \cite{Jedrychowski2015-jq}. This induces significant modifications of the material microstructure and, as a result, of its mechanical properties. Optimizing these properties requires a clear understanding of the correlations between the processing parameters and the mechanisms involved in the microstructure formation and evolution.\\ Despite the large number of numerical and experimental studies, the debate is still open on topics such as the relationship between initial and final phases \cite{cabus2007orientation,humbert2015evaluation,Chen2016-ue}, the variant selection criterion in bulk material and at grain boundaries \cite{gao2014diffuse,wang2003effect,lischewski2011nucleation,Shi2013-sl,furuhara2001variant}, the exact kinetics and sequence of transformation events \cite{srivastava1995evolution,farabi2018five}, and the structure of interfaces between different variant domains \cite{banerjee1998substructure,matsuda2008crystallography,otsuka2005physical}. \textcolor{black}{The present work aims to partially fill this knowledge gap in the case of the temperature-driven BCC$\rightarrow$HCP phase transition in pure titanium and, in particular, presents a computational investigation of the martensite microstructural evolution during the transition and of the final morphologies obtained under different stress conditions.
To our knowledge, this is the first atomistic study investigating such processes in pure titanium.\\ We employ atomistic modeling to simulate the transition under different stress conditions. Contrary to microstructural models written at the mesoscale, this modeling technique neither excludes any local mechanism nor preselects a specific kinetic pathway. Thus, it is well suited to gain insight into the details of the transition. In particular, we used a recently developed atomistic approach based on the overdamped Langevin dynamics \cite{baruffit,baruffi2019overdamped}, which we here apply to simulate a displacive solid-state phase transition.\\ As opposed to experiments, which are mostly limited to the observation of final microstructures, modeling allows one to follow the entire microstructural evolution during the transition. We analyse the sequence of nucleation events that eventually lead to the emergence of late-stage microstructures under different stress conditions. The analysis of martensite morphologies resulting from the transformation confirms the importance of different stress conditions in the variant selection process and, consequently, for the appearance of different defects (e.g., intervariant boundaries and antiphase defects). Two types of interfaces are mainly observed. The first ones are a consequence of the shuffle degeneracy within each orientational variant. As they do not involve any elastic accommodation, these interfaces are essentially wavy.
The second ones result from the impingement between different orientational variants and, consequently, display planar morphologies due to long-range elastic relaxation.} \begin{table*}[htpb] \centering \begin{tabular}{ccc} \hline \multicolumn{3}{c}{STRETCH TENSORS $\bm{U}^{(k)}$} \\ \hline $\bm{U}^{(1)} = \frac{1}{2} \begin{pmatrix} 2\eta_1 & 0 & 0 \\ 0 & \eta_2 + \eta_3 & \eta_3 - \eta_2 \\ 0 & \eta_3 - \eta_2 & \eta_2 + \eta_3 \end{pmatrix}$ & $\bm{U}^{(2)} = \frac{1}{2} \begin{pmatrix} 2\eta_1 & 0 & 0 \\ 0 & \eta_2 + \eta_3 & \eta_2 - \eta_3 \\ 0 & \eta_2 - \eta_3 & \eta_2 + \eta_3 \end{pmatrix}$ & $\bm{U}^{(3)} = \frac{1}{2} \begin{pmatrix} \eta_2 + \eta_3 & 0 & \eta_3 - \eta_2 \\ 0 & 2 \eta_1 & 0 \\ \eta_3 - \eta_2 & 0 & \eta_2 + \eta_3 \end{pmatrix}$ \\ $\bm{U}^{(4)} = \frac{1}{2} \begin{pmatrix} \eta_2 + \eta_3 & 0 & \eta_2 - \eta_3 \\ 0 & 2 \eta_1 & 0 \\ \eta_2 - \eta_3 & 0 & \eta_2 + \eta_3 \end{pmatrix}$ & $\bm{U}^{(5)} = \frac{1}{2} \begin{pmatrix} \eta_2 + \eta_3 & -\eta_2 + \eta_3 & 0 \\ -\eta_2 + \eta_3 & \eta_2 + \eta_3 & 0 \\ 0 & 0 & 2 \eta_1 \end{pmatrix}$ & $\bm{U}^{(6)} = \frac{1}{2} \begin{pmatrix} \eta_2 + \eta_3 & \eta_2 - \eta_3 & 0\\ \eta_2 - \eta_3 & \eta_2 + \eta_3 & 0 \\ 0 & 0 & 2 \eta_1 \end{pmatrix}$ \\ \hline \end{tabular} \caption{The six transformation stretch tensors associated with the BCC$\rightarrow$HCP displacive transformation, written in an orthonormal basis aligned with the BCC lattice cubic directions.
Coefficients $\eta_1$, $\eta_2$ and $\eta_3$ are related to the BCC and HCP lattice parameters by $\eta_1 = \frac{a}{a_0}$, $\eta_2 = \sqrt{\frac{3}{2}}\frac{a}{a_0}$, $\eta_3 = \frac{c}{\sqrt{2} a_0}$, where $a_0$ and $(a,c)$ are the lattice parameters of the BCC and HCP lattices.} \label{tab:U} \end{table*} \section{Methods}\label{sec:met} \subsection{Modeling approach}\label{sec:model} We employed a recently introduced modeling approach describing the evolution of particle positions with an overdamped stochastic dynamics \cite{baruffit,baruffi2019overdamped} to model the BCC$\rightarrow$HCP transition under different stress conditions. With this method, particle positions are treated as stochastic variables that follow a first-order in time dynamics that does not explicitly incorporate the high-frequency vibrations of the crystal lattice (phonons), which limit the time scale of classical Molecular Dynamics to a few nanoseconds \cite{paul1993molecular}. The chaotic nature of the Newtonian dynamics, which in the long-time limit drives the system to a stochastic equilibrium state, is recovered in the first-order in time dynamics through the use of an additive noise term, carefully chosen to guarantee that the system converges to the correct thermodynamical state in the long-time limit. \textcolor{black}{While this approach has been widely used in studying the dynamics of soft matter systems as well as in biomolecular simulations \cite{ando2003multiple,manghi2006hydrodynamic,ma2016generalized}, it has, to our knowledge, never been employed for the simulation of crystalline materials.} In this section, we report the main equations used in the model. The reader may refer to Refs.~\cite{baruffit,baruffi2019overdamped} for more details on its analytical derivation.\\ In the proposed approach, the configurational space is restricted to the coordinates $x_i^n$, where the upper index $n=1,...,N$ refers to a particle and the lower index $i=1,2,3$ to a Cartesian coordinate.
Correspondingly, the dynamics involves only the first derivatives of $x_i^n$ and reads as \begin{linenomath*} \begin{equation} \frac{dx_i^n}{dt} = - \nu^{-1} {\frac {\partial \Phi}{\partial x_i^n}} + B \eta_i^n(t), \label{eq:lg} \end{equation} \end{linenomath*} where $\Phi(\{ x_i^n\})$ is the potential energy between particles, $\nu$ a viscosity coefficient and $B$ the amplitude of a white Gaussian noise $\eta_i^n(t)$ such that $\langle\eta_i^n(t)\rangle=0$, $\langle\eta_i^n(t)\eta_j^m(t')\rangle=\delta_{nm}\delta_{ij}\delta(t-t')$. $\delta_{nm}$ and $\delta_{ij}$ are Kronecker symbols and $\delta(t-t')$ stands for the Dirac-delta distribution. Equations (\ref{eq:lg}) represent a first-order in time stochastic dynamics, also known as overdamped Langevin Dynamics~\cite{Nelson1967-ee}. We hypothesize that the coefficients $\nu$ and $B$ are independent of particle positions and related by a fluctuation-dissipation relation, i.e., $B=\sqrt{2k_BT\nu^{-1}}$. This guarantees that, in the long-time limit $t \rightarrow \infty$, the probability distribution $P (\{x_i^{n}\})$ generated by Eqs.~(\ref{eq:lg}) converges to a steady state characterized by the Boltzmann distribution \begin{linenomath*} \begin{equation} t \rightarrow \infty: \ \ \ \ \ \ P(\{x_i^{n}\}) \rightarrow A \exp \left( - {\frac{\Phi (\{x_i^{n}\})}{k_B T}}\right). \end{equation} \end{linenomath*} Supplemented by the constraint that the particles stay within a pre-defined simulation box, the dynamics represented by the set of equations (\ref{eq:lg}) is valid in the $(NVT)$ thermodynamical ensemble, i.e., the number of particles $N$, the volume $V$ and the temperature $T$ are fixed. To deal with applied stress conditions, we extended the model to the $(N\bm{P}T)$ ensemble, where $\bm P$ stands for the first Piola-Kirchhoff tensor.
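Before moving to the $(N\bm{P}T)$ case, the fluctuation-dissipation relation above can be illustrated with a one-particle sanity check: integrating Eq.~\eqref{eq:lg} for a single particle in a harmonic well $\Phi = \frac{1}{2} k x^2$ should yield a stationary variance close to the Boltzmann value $k_B T/k$. A minimal Python sketch (illustrative only; the function and parameter names are ours):

```python
import numpy as np

def harmonic_langevin_variance(k=1.0, kBT=0.5, nu=1.0, dt=5e-3,
                               nsteps=200_000, seed=1):
    """Overdamped Langevin dynamics in a harmonic well Phi = k x^2 / 2.

    With the fluctuation-dissipation amplitude B = sqrt(2 kBT / nu), the
    sampled stationary variance of x should approach kBT / k."""
    rng = np.random.default_rng(seed)
    B = np.sqrt(2.0 * kBT / nu)
    x, acc, acc2, count = 0.0, 0.0, 0.0, 0
    for step in range(nsteps):
        # Euler-Maruyama step of dx/dt = -(k/nu) x + B eta(t)
        x += -(k / nu) * x * dt + B * np.sqrt(dt) * rng.standard_normal()
        if step >= nsteps // 2:          # discard the initial transient
            acc += x
            acc2 += x * x
            count += 1
    mean = acc / count
    return acc2 / count - mean * mean    # sampled variance
```

With the default parameters the returned variance is close to $k_BT/k = 0.5$, up to statistical and discretization errors.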
We now briefly present the stochastic dynamics associated with this $(N\bm{P}T)$ ensemble.\\ First, we incorporate nine additional degrees of freedom into the model, which are the components of the deformation gradient $\bm{F}$ describing the change in the shape of the simulation box. Next, in order to couple $\bm{F}$ to the degrees of freedom associated with the atomic positions, we introduce scaled coordinates $\{\tilde{x}_i^n\}$ related to the actual coordinates by \begin{linenomath*} \begin{equation} \tilde{x}_i^n = \left( \bm{H}^{-1}\right)_{ij} x^n_j, \ \ \ \ \ \ \ i=1,2,3 \end{equation} \end{linenomath*} where the matrix $\bm{H}$ is defined by $\bm{H} = \bm{F} \bm{L}^0$, with $\bm{L}^0$ a diagonal matrix containing the lengths of the orthogonal vectors $\bm{L}^0_1$, $\bm{L}^0_2$ and $\bm{L}^0_3$ that define the initial simulation box. The extended overdamped Langevin dynamics reads as \begin{linenomath*} \begin{align} \nonumber \frac{d \tilde{x}_i^n}{dt} & = - \nu^{-1}{\frac {\partial \tilde{H}}{\partial \tilde{x}_i^n}} + \sqrt{2 k_B T \nu^{-1}}\, \eta_i^n(t) \ \ \ i=1,2,3 \ ; \ n=1,...\text{N} \ , \\ \frac{d F_{ij}}{dt} & = - \gamma^{-1} {\frac {\partial \tilde{H}}{\partial F_{ij}}} + \sqrt{2 k_B T \gamma^{-1}} \, \xi_{ij}(t) \ \ \ i,j = 1, ..., 3 \ , \label{eq:lg_ext} \end{align} \end{linenomath*} where $\xi_{ij}(t)$ is a white Gaussian noise with $\langle \xi_{ij}(t) \rangle=0$ and $\langle \xi_{ij}(t) \xi_{kl}(t') \rangle=\delta_{ik}\delta_{jl}\delta(t-t')$, $\gamma$ a viscosity associated with the new degrees of freedom $F_{ij}$ and $\tilde{H}(\{ \tilde{x}_i^n \},\bm{F})$ the Hamiltonian for the extended set of degrees of freedom. The extended Hamiltonian should of course be such that Eqs.~(\ref{eq:lg_ext}) converge in the long-time limit towards the thermodynamical equilibrium of the $(N\bm{P}T)$ ensemble.
As shown in \cite{baruffi2019overdamped}, this leads to: \begin{linenomath*} \begin{equation} \tilde{H}(\{\tilde{x}_i^n\}, \bm{F}) = \Phi(\{F_{ij}L_j^0 \tilde{x}_j^n\}) + V_0 P_{ij} F_{ij} - N k_B T \ln \left( V_0 \det \bm{F}\right), \label{eq:log} \end{equation} \end{linenomath*} where $V_0$ is the volume of the initial simulation box. \textcolor{black}{We note that this extended Hamiltonian is equal to the usual enthalpy\footnote{\textcolor{black}{Note that our sign convention is such that $P_{ij}= \delta_{ij} P$ where $P > 0$ corresponds to a system under hydrostatic pressure.}} $(\Phi + V_0 P_{ij} F_{ij})$ supplemented by an extra logarithmic term, which ensures that the Langevin dynamics converges towards the correct thermodynamic equilibrium \cite{baruffi2019overdamped}. In principle, this logarithmic term should include a normalization volume unit $V_B$ that would make the argument of the logarithm dimensionless but, as long as we are considering kinetics and the associated driving forces, this normalizing term plays no role\footnote{\textcolor{black}{In principle, the logarithmic term in Eq.~\eqref{eq:log} should be written as $\log \left( V_0/V_B \det \bm{F}\right)$ where the quantity $V_B$, which has the dimension of a volume, is given by $V_B = \Lambda^3$ where $\Lambda$ is the de Broglie wavelength. This term is reminiscent of the quantum and, therefore, discrete nature of the Hamiltonian and should be incorporated if we want to ensure a proper normalization of the entropies and energies. However, as it enters only through a constant term that disappears through derivation, this normalizing term does not enter the driving forces and plays no role within the dynamics. This is the reason why we do not include it in Eq.~\eqref{eq:log}, even though it would make the argument of the logarithmic term dimensionless.}} and, therefore, is not included in Eq.~\eqref{eq:log}.
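For reference when implementing Eqs.~\eqref{eq:lg_ext}, the driving force on the box degrees of freedom follows directly from Eq.~\eqref{eq:log} and the identity $\partial \ln \det \bm{F} / \partial F_{ij} = (\bm{F}^{-T})_{ij}$ (the explicit form below is a short sketch of ours):
\begin{linenomath*}
\begin{equation}
\frac{\partial \tilde{H}}{\partial F_{ij}} = \sum_{n=1}^{N} \frac{\partial \Phi}{\partial x_i^n}\, L_j^0\, \tilde{x}_j^n + V_0 P_{ij} - N k_B T \left(\bm{F}^{-T}\right)_{ij},
\end{equation}
\end{linenomath*}
where the first term involves no implicit sum over $j$ and follows from the chain rule applied to $x_i^n = \sum_k F_{ik} L_k^0 \tilde{x}_k^n$.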
Altogether, the use of the extended Hamiltonian within Eq.~\eqref{eq:lg_ext} guarantees that the dynamics converges to the correct equilibrium state associated with the $(N\bm{P}T)$ ensemble in which an externally applied first Piola-Kirchhoff stress controls the system. We stress that the appearance of the first Piola-Kirchhoff stress is simply a consequence of the fact that this is the stress measure conjugate to the deformation gradient $\bm{F}$. The deformation gradient itself emerges because, within an atomic-scale approach that naturally uses atomic coordinates that refer to a fixed reference state, it is most convenient to use a Lagrangian description within which the degrees of freedom of the fluctuating simulation box are simply the entries of the deformation gradient $\bm{F}$ (see for example \cite{miller2016molecular}). Finally, we mention that, even though it is not required for the implementation and integration of the kinetic equations, we could define an instantaneous internal first Piola-Kirchhoff stress whose statistical average adopts a virial form which, at equilibrium, is automatically equal to the externally applied Piola-Kirchhoff stress (see Appendix A for details).}\\ \textcolor{black}{We conclude this paragraph with a general remark on the proposed overdamped Langevin dynamics. Such dynamics, also known as Brownian dynamics, have already been used for the modeling of a number of spatio-temporal processes, in which heavy particles (such as biomolecules) interact with a bath of light particles (e.g. solvent molecules). In the limit of vanishing ratio $m/M$, where $m$ and $M$ are the masses of the light and heavy particles, it can be argued that the heavy particles follow a Markovian dynamics, which makes it possible to exclude from the simulation the light particles, which are of no direct interest (see for example \cite{erban2014molecular}).
The present situation is different, as the atomic species that constitute our materials cannot be separated into subclasses with different masses. However, our aim is to show that, because of the randomness generated by the chaotic character of the phase space trajectories, the initial deterministic dynamics, which is of second order in time, can be replaced by a first order in time stochastic dynamics. Of course, an exact formulation of the overdamped Langevin equations should proceed through an explicit coarse-graining procedure over the initial Newtonian dynamics, which requires the identification of a characteristic time over which time averages can be performed while preserving the characteristic times associated with the dynamical processes we are interested in. This coarse-graining procedure would naturally lead to a coarse-grained potential $\Phi_{cg}\left (\{x^n_i\} \right )$ that will differ from the original potential $\Phi \left(\{x^n_i\}\right)$, as phonons will be adiabatically embedded into $\Phi_{cg}\left (\{x^n_i\} \right )$, together with explicit expressions for the noise terms and for the viscosity coefficient $\nu$, whose knowledge is needed to access the time scale of our first order in time dynamics. Because of the intrinsically anharmonic character of the initial potential, we anticipate that the coarse-grained potential will be significantly softer than the original one, allowing the use of large time steps when the dynamics is numerically discretised. In this paper, we propose a simplification that consists in replacing the coarse-grained potential by the original one. We therefore cannot associate a real time scale with our simulations. However, we show below that our first-order in time dynamics does reproduce the real nature of the dynamics in terms of reaction pathway and observed microstructures.
This indicates that a first-order in time out-of-lattice dynamics can be safely used to simulate spatio-temporal processes even though the underlying atomic species cannot be separated into subclasses associated with different characteristic times.} \subsection{Numerical implementation} \textcolor{black}{The model described in section \ref{sec:model} has been implemented in an in-house Fortran code. We referred to the work of Goedecker \cite{goedecker2002optimization} to optimize the computation of interatomic forces through parallel computing. To integrate Eqs.~(\ref{eq:lg_ext}) in time, we used an explicit predictor-corrector method \cite{Burrage2007-gj,baruffi2019overdamped}. Dimensionless equations are obtained by introducing an energy unit $E_0$ and a time unit $t_0 = \nu / E_0$, where $\nu$ is the viscosity term that enters into the dynamics of the scaled atomic coordinates. When needed, we display our results with reference to the dimensionless time $\tau = t / t_0$. For simulations in the $(NVT)$ ensemble, the dimensionless time step is $\Delta \tau = 10^{-5}$. For the $(N\bm{P}T)$ ensemble, the simulation box viscosity $\gamma$ is set equal to 0.0855 $\nu$ and the dimensionless time step is $\Delta \tau = 10^{-7}$. More details on the numerical integration of Eqs.~(\ref{eq:lg_ext}) are given in Appendix B.} \subsection{Simulation setup and interatomic potential} To clarify the influence of different external conditions on martensite microstructure, we simulate the BCC$\rightarrow$HCP transition in both thermodynamic ensembles, $(NVT)$ and $(N\bm{P}T)$. \textcolor{black}{We use periodic boundary conditions.
In the $(N\bm{P}T)$ ensemble, we consider stress-free conditions by setting to zero the first Piola-Kirchhoff stress, allowing the material to change its macroscopic shape.} Although the real conditions experienced by a region in a bulk material would be an intermediate case between these two conditions, the two extreme scenarios are useful for a global understanding of the influence of local mechanical constraints preventing a free change in shape and/or volume of the matrix around a martensite nucleus. In the simulations, we first equilibrate BCC titanium at $1400$ K and then quench it to $700$ K. We perform the quenching by an instantaneous rescaling of the temperature parameter that fixes the noise amplitudes in Eqs.~(\ref{eq:lg_ext}). The simulation box size is set equal to $36 \times 36 \times 36$ $a_0^3$, where $a_0$ is the BCC equilibrium lattice constant at 1400 K ($a_0=3.417 \; \text{\normalfont\AA}$). The total number of atoms is 93312. \textcolor{black}{To perform Molecular Dynamics or Langevin atomic simulations, a relevant interatomic potential is needed. We have considered two empirical interatomic potentials for titanium from the literature, both developed in particular to study the BCC$\rightarrow$HCP transition. The first potential, of the EAM type, proposed by Mendelev et al.~\cite{mendelev2016development} and referred to as Ti-1 EAM, was fitted to reproduce the HCP stacking fault energy, the BCC-HCP transformation temperature ($T \sim 1150$~K) and the melting temperature. The second potential, of the MEAM type, proposed by Hennig et al.~\cite{hennig2008classical}, describes the structure and energetics of the $\alpha$, $\beta$ and $\omega$ phases in Ti.
Optimization of the parameters is performed using a database of density-functional calculations and yields an accurate potential, as verified by comparison to experimental and density-functional data for phonons, surface and stacking fault energies, and energy barriers for homogeneous martensitic transformations. In addition, the elastic constants, phonon frequencies, surface energies, and defect formation energies closely match density-functional results even when these were not included in the fitting procedure. The authors have also verified using Molecular Dynamics that the equilibrium phase diagram is in close agreement with experimental measurements.} We tested these EAM and MEAM potentials by performing preliminary simulations with classical Molecular Dynamics using the simulator LAMMPS~\cite{Plimpton1995-wa}. We obtained the following results: when the MEAM-type potential is used, we were able to observe a stable BCC phase transforming into HCP upon quenching, consistently with previous Molecular Dynamics simulations that proved its ability to reproduce the whole temperature-pressure phase diagram of titanium \cite{hennig2008classical}. On the other hand, when using the EAM potential we did not observe the phase transition after cooling, although we were able to obtain a stable BCC structure at high temperatures. We increased the simulation duration up to 1 nanosecond and we tested different simulation box sizes. However, the transition did not occur in either of the two thermodynamic ensembles. Possible reasons for this are: i) the simpler functional form of the EAM potential compared to the MEAM one. The lack of any angular dependency in the embedding term describing the electron density makes the EAM much cheaper than the MEAM from a computational point of view, but it limits its ability to model materials with strong bond directionality, such as metals with a partially filled \textit{d} shell; ii) the presence of a high energy barrier for the nucleation of the HCP phase.
Based on these results, we finally decided to use the MEAM potential to perform the overdamped Langevin simulations and implemented it in a parallel code by following a previous work on many-body force field implementation~\cite{goedecker2002optimization}. \subsection{Variant and phase identification} To characterize the microstructure formation and evolution, we need to identify the different crystal structures and, especially, the different variants.\\ To identify the different HCP variants, we use a deformation gradient map representing the local lattice distortion. Indeed, when a material undergoes a martensitic transformation, several energetically equivalent variants differing in their relative crystallographic orientation appear \cite{pitteri2002continuum}. Each of these variants is associated with a stretch tensor $\bm{U}$ that can be easily identified once the local deformation gradient $\bm{F}$ is known: we just need to use the polar decomposition $\bm{F} = \bm{Q} \bm{U}$ (where $\bm{Q}$ is a rotation and $\bm{U}= \bm{U}^T$ is positive-definite), which is unique. However, due to the infinite degeneracy of the lattice groups of the parent and product phases, the identification of the deformation gradient $\bm{F}$ is not unique. Therefore, it is common to use a lattice correspondence between the parent and product phases to represent the actual lattice sites' displacements \cite{banerjee2010phase}. Before explaining the procedure used to identify $\bm{F}$, we first recall that a simple homogeneous deformation gradient cannot fully describe the martensitic transformation from BCC to HCP: the BCC lattice is a Bravais lattice, the HCP is not. Therefore, the transformation strain that we want to identify must be supplemented by atomic displacements applied on a sublattice of the deformed lattice. In the present situation, this shuffling consists in translating every second basal plane of the hexagonal lattice obtained after the homogeneous deformation gradient. 
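The polar decomposition invoked above can be computed directly from a singular value decomposition; a small Python sketch using NumPy (an illustration, not the analysis code used in this work):

```python
import numpy as np

def polar_right(F):
    """Right polar decomposition F = Q U, with Q a rotation and U = U^T
    positive definite (valid for a deformation gradient with det F > 0)."""
    W, s, Vt = np.linalg.svd(F)
    Q = W @ Vt                      # closest rotation to F
    U = Vt.T @ np.diag(s) @ Vt      # symmetric right stretch tensor
    return Q, U
```

Since the polar decomposition of an invertible $\bm{F}$ is unique, the recovered $\bm{U}$ does not depend on the ordering conventions of the SVD.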
We now turn to the procedure used to identify the local deformation gradient. This identification requires a lattice correspondence between the parent and product phases. Two lattice correspondences, given in terms of orientation relationships, have been proposed. \textcolor{black}{The mechanism given by Burgers \cite{burgers1934process} states that the following crystallographic planes and directions are parallel:} \begin{linenomath*} \begin{equation} (110)_{bcc} \parallel (0001)_{hcp};[\bar{1}11]_{bcc} \parallel [\bar{2}110]_{hcp}, \label{eq:burgers_text} \end{equation} \end{linenomath*} whereas the mechanism given by Mao \cite{mao1967effect} states the following correspondence: \begin{linenomath*} \begin{equation} (110)_{bcc} \parallel (0001)_{hcp};[00\bar{1}]_{bcc} \parallel [11\bar{2}0]_{hcp}. \label{eq:mao_text} \end{equation} \end{linenomath*} The two mechanisms differ only in that the Burgers mechanism requires a rotation of $\pm 5.26^\circ$ around the $[0001]$ HCP axis in order to obtain the proposed direction correspondence \cite{wang1998iron}. Consequently, whereas the Mao relationship generates only 6 HCP orientation variants, the Burgers mechanism generates 12 HCP lattices. However, as they differ only by rotations, the two mechanisms are associated with exactly the same six stretch tensors $\bm{U}^{(k)}, \ \ k=1, \ldots, 6$. These tensors are listed in Tab.~\ref{tab:U}. We define a procedure meant to identify these local stretch tensors $\bm{U}_n^{(k)}$. For each of the six $(110)$ BCC planes transforming in the final $(0001)$ HCP plane, we define a specific set of neighboring sites, see Appendix C. 
Then, for each atom \textit{n}, we identify six local deformation gradients $\bm{F}_n^{(k)}$, $k=1, \ldots, 6$, that minimize the following local descriptors: \begin{linenomath*} \begin{equation} k=1, \ldots, 6: \ \ D_n^{{(k)}^2} = \sum_{m \in \Omega_n} \left\lVert \Delta \mathbf{r}_{nm}(t^*) - \bm{F}_n^{(k)}\, \Delta \mathbf{r}_{nm}(0)\right\rVert^2 \label{eq:Dsq} \end{equation} \end{linenomath*} where $\Omega_n$ is the neighborhood set that is associated with a given $(110)$ plane. The local deformation gradient $\bm{F}_n$ is defined as the one that, among the six tensors $\bm{F}_n^{(k)}$, leads to the smallest $D_n^{(k)}$. Finally, polar decomposition leads to the local stretch tensor $\bm{U}_n$ and to a local stretch deformation map. The non-affine residual $D_n^{(k)^2}$ quantifies the degree to which an affine transformation can describe the local change in the lattice. In the following analysis, we set a threshold $D^{2}_{lim}=6.5$ \AA\textsuperscript{2} above which the calculated $\bm{F}_n$ is considered not meaningful, and exclude from post-processing atoms with $D_n^{(k)^2} > D^{2}_{lim}$.\\ To monitor the evolution of the fraction of each phase without distinguishing variants, we use the Polyhedral Template Matching analysis (PTM) \cite{larsen2016robust} implemented in the software OVITO \cite{stukowski2009visualization}. This method classifies crystal structures according to the topology of the local atomic environment. It provides a flexible tool for structural identification even in the presence of strong thermal fluctuations, when other methods relying on interatomic distances (e.g., Common Neighbor Analysis \cite{honeycutt1987molecular}) are less robust. In our analysis, the cut-off for the Root-Mean-Square-Deviation (RMSD) between the local atomic structure and the ideal structural template has been set equal to 0.14 \AA.
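The least-squares minimization of Eq.~\eqref{eq:Dsq} has a closed-form solution; a compact Python sketch (illustrative, with hypothetical array names, not the post-processing code of this work):

```python
import numpy as np

def local_F_and_D2(dr0, drt):
    """Closed-form minimizer of D^2 = sum_m |drt_m - F dr0_m|^2 (Falk-style).

    dr0, drt: (M, 3) arrays of neighbor separations Delta r_nm in the
    reference and current configurations.  Returns the optimal F together
    with the residual D^2 measuring the non-affine part of the deformation."""
    A = drt.T @ dr0                  # sum_m drt_m dr0_m^T
    B = dr0.T @ dr0                  # sum_m dr0_m dr0_m^T
    F = A @ np.linalg.inv(B)
    D2 = float(np.sum((drt - dr0 @ F.T) ** 2))
    return F, D2
```

Applying this to the six neighbor sets $\Omega_n$ and keeping the $\bm{F}_n^{(k)}$ with the smallest residual reproduces the selection rule described above.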
\section{Results}\label{sec:ris} \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{./img/Fig1_a-eps-converted-to.pdf} \includegraphics[width=0.4\textwidth]{./img/Fig1_b-eps-converted-to.pdf} \caption{Evolution of BCC and HCP volume fractions (a) and variant fractions (b) in the ($N\bm{P}T$) ensemble. \textcolor{black}{The symbols A,B,C,D indicate the times selected for displaying the microstructures in Fig.\,\ref{fig:npt}.}} \label{fig:npt_frac} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.23\textwidth]{./img/Fig2a-eps-converted-to.pdf} \includegraphics[width=0.23\textwidth]{./img/Fig2b-eps-converted-to.pdf} \includegraphics[width=0.23\textwidth]{./img/Fig2c-eps-converted-to.pdf} \includegraphics[width=0.23\textwidth]{./img/Fig2d-eps-converted-to.pdf} \caption{\textcolor{black}{Microstructure evolution in the ($N\bm{P}T$) ensemble. Only atoms within a thin slice of the simulation box and classified as HCP are shown, colored on the basis of the corresponding variant: (a) initial nucleation stage, (b)-(c) two variants prevail, (d) final single-variant domain. Crystallographic directions refer to the parent BCC phase. The microstructures (a)-(d) correspond to the reduced times 0.01, 0.04, 0.07, 0.2, respectively. These times are indicated in Fig.~\ref{fig:npt_frac}.}} \label{fig:npt} \end{figure} \subsection{Simulations in the (NPT) ensemble with stress-free boundary conditions} After equilibrating the system in the BCC state at $T=1400$~K, we drop the temperature down to $T=700$~K. After a short relaxation, the system transforms into an HCP structure.\\ In Fig.~\ref{fig:npt_frac}a, we report the evolution of the BCC and HCP volume fractions during the transition. Almost no BCC phase is left after the transformation has been completed.
A non-negligible residual fraction of atoms ($\approx$ 0.20) exhibits a crystallographic structure different from HCP, which suggests that some defects are generated.\\ In Fig.~\ref{fig:npt_frac}b, we report the evolution of variant fractions as a function of time. At the very beginning of the transition, all the six variants nucleate almost instantaneously. \textcolor{black}{However, very quickly, most of them disappear, giving rise to a microstructure composed of the variants $\bm{U}^{(1)}$ and $\bm{U}^{(5)}$. Afterwards, the structure coarsens further and forms a single variant $\bm{U}^{(5)}$ domain (Fig.~\ref{fig:npt}).} Snapshots taken during the transition (Fig.~\ref{fig:antiphase}) show that, when the microstructure is coarsening, HCP domains with the same \textbf{c} axis orientation but different shuffling directions (referred to as a pair of ``anti-variants'' \cite{gao2014diffuse}) come into contact and generate an antiphase boundary (see the inset of Fig.~\ref{fig:antiphase}b). This boundary is equivalent to a stacking fault when the plane between the two domains is parallel to the $\{0001\}$ HCP basal plane. \begin{figure} [h] \centering {\includegraphics[scale=0.31]{./img/Fig3-eps-converted-to.pdf}} \caption{Final state in the $(N\bm{P}T)$ ensemble: atoms with crystallography different from HCP are colored in black: (a) anti-variant domains (pink) separated by another variant (green), (b) antiphase defect (see the inset) formed at the boundary between anti-variant domains after the microstructure has coarsened.
Crystallographic directions refer to the parent BCC phase.} \label{fig:antiphase} \end{figure} \subsection{Simulations in the (NVT) ensemble} \label{sec:nvt} After quenching the system down to $T$=700\,K, the crystal structure transforms into an HCP structure without any remaining BCC domain, similarly to what is observed in the previous $(N\bm{P}T)$ simulation.\\ Fig.~\ref{fig:var_sel_4}a shows the evolution of the BCC and HCP volume fractions, while Fig.~\ref{fig:var_sel_4}b shows the evolution of the six variant volume fractions. The overall transformation proceeds through different stages.\\ We first observe a nucleation stage, which extends up to point A in Fig.~\ref{fig:var_sel_4}a and \ref{fig:var_sel_4}b, during which local HCP fluctuations emerge. We underline that the length scale of these fluctuations is too small to allow the identification of different variants with the procedure outlined in section \ref{sec:met}, thus leading to an apparent incompatibility between Fig.~\ref{fig:var_sel_4}a and Fig.~\ref{fig:var_sel_4}b. Indeed, the procedure used to identify HCP variants relies on neighborhoods that extend beyond the second neighbor shell, whereas the PTM algorithm used to identify the local lattice relies on a neighboring set limited to the first two neighbor shells \cite{larsen2016robust}. Next, HCP nuclei grow rapidly (from point A to point B) and stabilize into a quasi-stationary stage during which the six HCP variants reach finite volume fractions that are roughly constant (from point B to point C). Then, the system enters a stage during which the volume fraction of three HCP variants increases at the expense of the three others (from C to D). After this growing stage (from point D on), the system stabilizes in a microstructure that consists of only three variants.
The selected variants share the $[111]$ BCC direction in the parent phase, i.e., a $\langle 11\bar{2}0 \rangle$ HCP dense direction.\\ \begin{figure} [h] \centering \includegraphics[width=0.4\textwidth]{./img/Fig4_a-eps-converted-to.pdf} \includegraphics[width=0.4\textwidth]{./img/Fig4_b-eps-converted-to.pdf} \caption{Evolution of BCC and HCP volume fractions (a) and variant fractions (b) in the ($NVT$) ensemble.} \label{fig:var_sel_4} \end{figure} \begin{figure} [h] \centering {\includegraphics[scale=.32]{./img/Fig5-eps-converted-to.pdf}} \caption{Final microstructure obtained at 700 K in the $(NVT)$ ensemble. The inset shows a triple junction detail. Atoms classified as HCP are colored according to the corresponding variant, and atoms with crystallographic symmetry different from HCP in black. Crystallographic directions refer to the parent BCC phase.} \label{fig:microstru_1} \end{figure} Fig.~\ref{fig:microstru_1} shows the final 3-variant microstructure along a plane orthogonal to the common dense HCP direction, with a color map of the \textbf{c} axis orientation and a schematic indication of inter-variant misorientation. \textcolor{black}{The three variants organize around several triple junctions by forming three interfaces which have the structure of type I $\{10\bar{1}1\}$ twin boundaries (twinning plane $K_1 =\{10\bar{1}1\}$, shear direction $\eta_1 = \langle 1 \bar{2}10\rangle$). The strains associated with the 3 variants forming the 3-plate morphology respect the twinning equations \cite{Bhattacharya2003-lk,bowles1954crystallography}. As discussed later in the Discussion section, this 3-plate geometry is not fully compatible with the formation of three $\{10\bar{1}1\}$ twins, so further strain is required for its accommodation.}\\ In Fig.~\ref{fig:nvt}, we show four snapshots of the microstructural evolution during the transition. First, stable nuclei of all the six variants appear (a), and all the different HCP domains develop (b).
At this point, two stable triple junctions (indicated by arrows) are already formed and lead to the final 3-plate morphology after the subsequent microstructure coarsening.\\ We repeated the simulation in the $(NVT)$ ensemble several times with different realizations of the random noise term. The time evolution of variant fractions (Fig.~\ref{fig:fig6}) shows that in each case the system behaves similarly and, after the nucleation of all the possible variants, progressively selects a triplet. In all the simulations, the selected triplets share a $\langle 111 \rangle$ BCC direction, i.e., a $\langle 11\bar{2}0 \rangle$ HCP direction. In terms of microstructure, the selected triplet always organizes into the 3-plate morphology. In only one case, shown in Fig.~\ref{fig:microstru_nvt_2}, the selected variants form two laminates consisting of parallel twins along the $\{10\bar{1}1\}$ HCP plane. At the crossing point between the laminates, an FCC domain appears. This FCC domain shares coherent interfaces with the neighboring HCP variants; these sharp interfaces consist of a one-layer-thick transition from a $\{111\}$ FCC plane to an HCP basal plane. \begin{figure}[h] \centering {\includegraphics[scale=0.28]{./img/Fig6_a-eps-converted-to.pdf}} \hspace{0.3cm} {\includegraphics[scale=0.23]{./img/Fig6_b-eps-converted-to.pdf}} {\includegraphics[scale=0.28]{./img/Fig6_c-eps-converted-to.pdf}} \hspace{0.3cm} {\includegraphics[scale=0.28]{./img/Fig6_d-eps-converted-to.pdf}} \caption{Evolution of the microstructure in the $(NVT)$ ensemble. Only atoms in a small slab normal to the $[111]$ BCC direction and classified as HCP are shown and colored according to the corresponding variant: (a) nucleation stage (from A to B in Fig.~\ref{fig:var_sel_4}a), all the variants appear; (b) quasi-stationary regime, two stable triple junctions, highlighted by arrows, are identifiable; (c)-(d) the microstructure coarsens (from C to D in Fig.~\ref{fig:var_sel_4}a) into a 3-plate morphology (final stable state).
Crystallographic directions refer to the parent BCC phase.} \label{fig:nvt} \end{figure} \begin{figure*} [htpb] \centering \includegraphics[width=0.35\textwidth]{./img/Fig7_a-eps-converted-to.pdf} \includegraphics[width=0.35\textwidth]{./img/Fig7_b-eps-converted-to.pdf}\\ \includegraphics[width=0.35\textwidth]{./img/Fig7_c-eps-converted-to.pdf} \includegraphics[width=0.35\textwidth]{./img/Fig7_d-eps-converted-to.pdf} \caption{Evolution of the variant fraction in four different simulations (a)-(d) in the $(NVT)$ ensemble, where the random noise term is different in each case.} \label{fig:fig6} \end{figure*} \section{Discussion}\label{sec:disc} The results of simulations performed in the $(NVT)$ and $(N\bm{P}T)$ ensembles shed light on the origin of the different defects experimentally observed in martensite and confirm how deeply local mechanical constraints influence martensite morphology.\\ In both thermodynamic ensembles, at the very beginning of the transition, all variants appear. This is expected, since we consider situations in which all variants are energetically equivalent and therefore equally likely to appear due to the randomness of thermal fluctuations~\cite{gao2014diffuse}. However, when the structure further evolves, only some of them end up forming the final microstructure. We observed two distinct microstructural evolutions, depending on the applied boundary conditions. For simulations in the $(N\bm{P}T)$ ensemble with $\bm{P}= \mathbf{0}$, the microstructure coarsens and forms a single variant domain with antiphase defects where anti-variant domains come into contact (Fig.~\ref{fig:antiphase}). We recall that a pair of anti-variants consists of two variants with the same orientation but different shuffling directions \cite{gao2014diffuse}.
On the other hand, for simulations in the $(NVT)$ ensemble, a triplet of variants with a common $\langle 11\bar{2}0 \rangle$ HCP direction is systematically selected and forms stable triple junctions that drive the overall microstructural evolution (Fig.~\ref{fig:nvt}). In this case, the final microstructure is richer in interfaces and, consequently, has a higher energy than the mono-variant domain obtained for simulations in the $(N\bm{P}T)$ ensemble.\\ As discussed in section \ref{sec:ris}, in almost all the simulations performed in the ($NVT$) ensemble, the 3 selected variants cluster in a 3-plate geometry around the common dense direction. This morphology has been experimentally observed in pure titanium \cite{farabi2018five}, Ti-Nb shape memory alloys \cite{chai2009self}, and zirconium alloys \cite{srivastava1993self}, and agrees with predictions based on the phenomenological theory of martensite \cite{bowles1954crystallography}. The 3-plate cluster minimizes the overall mesoscopic shape strain after the transition \cite{farabi2018five,srivastava1993self,wang2003effect} because the three selected variants are self-accommodating \cite{miyazaki1989shape,pitteri2002continuum}. Consequently, this morphology is strongly favored when strain energy minimization is dominant in driving the microstructure evolution. The morphology is experimentally observed at different length scales (micrometer \cite{farabi2018five} and sub-micrometer scale \cite{srivastava1993self}) and numerically reproduced by us using a simulation domain of nanometer scale. This suggests that the dominant driving force is the elastic relaxation, which largely dominates over the interface energy. Furthermore, experiments show that these 3-variant triangles are formed in regions delimited by large martensite laths, which are supposed to originate from the first nucleation events. The domains are then progressively filled by smaller and smaller triangles \cite{srivastava1993self,farabi2018five}.
These observations together with our simulation results suggest that: i) the 3-variant cluster formation is mainly driven by elastic relaxation, which manifests itself at different length scales. \textcolor{black}{We have shown here that, at the nanometer scale, the elastic relaxation is still dominant. Consequently, mechanisms driving the formation of larger microstructures can be easily studied by analysing the evolution of small BCC domains undergoing the transition,} ii) this 3-variant cluster formation is directly related to martensite nuclei forming under a situation of local confinement. The agreement of our simulations with experiments indicates that simple fixed-volume conditions are well adapted to reproduce the state of local constraints experienced by real systems.\\ We mentioned previously that we occasionally observe the appearance of domains with FCC crystallography. In this case, the 3 variants form two laminates with an FCC domain at the crossing point. Low-energy coherent interfaces are formed between the HCP $\{0001\}$ basal planes and the $\{111\}$ FCC planes. While in the simulation showing a 3-plate morphology the three variants have similar volume fractions ($\approx 0.3$), in this case one variant (the one participating in both laminates) is dominant with respect to the others (see Fig.~\ref{fig:var_sel_4}). The possible presence of the FCC phase after the transition is supported by little experimental evidence \cite{nishiyama1967transmission} and could be an artifact due to the potential. Nevertheless, it has also been reported in other Molecular Dynamics simulations of the martensitic phase transition in zirconium \cite{morris2001molecular,pinsook1998simulation,ackland2008molecular}.
Subsequent mechanical/thermal treatments, leading to further evolution of the microstructure, could then lead to its progressive extinction in favour of the lower-energy HCP phase.\\ \begin{figure} [htpb] \centering {\includegraphics[scale=0.32]{./img/Fig8-eps-converted-to.pdf}} \caption{Final microstructure at 700 K in the ($NVT$) ensemble, highlighting the crossing between laminates. Crystallographic directions refer to the parent BCC phase.} \label{fig:microstru_nvt_2} \end{figure} \begin{figure} [htpb] \centering \includegraphics[scale=0.33]{./img/Fig9_a-eps-converted-to.pdf} \includegraphics[scale=0.33]{./img/Fig9_b-eps-converted-to.pdf} \includegraphics[scale=0.33]{./img/Fig9_c-eps-converted-to.pdf} \includegraphics[scale=0.33]{./img/Fig9_d-eps-converted-to.pdf} \caption{Histograms of $\bm{U}_n$ diagonal and off-diagonal coefficients for (a)-(b) simulations in the ($N\bm{P}T$) and (c)-(d) ($NVT$) ensemble. The dotted lines show the theoretical values.} \label{fig:histo} \end{figure} \begin{figure} [htpb] \centering \includegraphics[scale=0.33]{./img/Fig10_a-eps-converted-to.pdf} \includegraphics[scale=0.33]{./img/Fig10_b-eps-converted-to.pdf} \includegraphics[scale=0.33]{./img/Fig10_c-eps-converted-to.pdf} \caption{Histograms of off-diagonal coefficients for a single variant $\bm{U}^{(4)}$ in the ($NVT$) ensemble at three different time steps (points A,B,C in the HCP fraction curve of Fig.~\ref{fig:var_sel_4}). Theoretical values are shown as dotted lines.} \label{fig:histo_2} \end{figure} \textcolor{black}{We now move on to the analysis of the defects generated during the transition under different stress conditions.
Our simulations give insight into the origin of the different kinds of interfaces that are found in martensite in pure titanium and corroborate hypotheses based on experimental observations.\\ The single-variant microstructure arising from simulations in the $(N\bm{P}T)$ ensemble (i.e., mimicking the absence of any local constraint in the surroundings) shows antiphase boundaries separating HCP domains with the same orientation of the \textbf{c} axis but different shuffling directions (a pair of anti-variants \cite{gao2014diffuse}). Similarly, single-variant martensite plates containing antiphase boundary networks have been experimentally observed in titanium alloys \cite{banerjee1998substructure} as well as in shape-memory alloys \cite{otsuka2005physical,matsuda2008crystallography}. These early experimental studies led to the hypothesis that these interfaces originate from the nucleation, growth and subsequent impingement of martensite domains, i.e., that they are a direct consequence of the randomness of shuffling displacements during the transition \cite{banerjee1998substructure,matsuda2008crystallography}. The microstructural evolution observed in our simulations (see Fig.~\ref{fig:antiphase}), where one variant domain disappears, leading two anti-variant domains to come into contact, confirms this hypothesis.\\ For simulations performed in the $(NVT)$ ensemble showing the 3-variant plate morphology, we verified that the grain boundaries separating the different variant domains are all $\{10\bar{1}1\}$ type I twin boundaries (twinning plane $K_1 =\{10\bar{1}1\}$, shear direction $\eta_1 = \langle 1 \bar{2}10\rangle$) \cite{bowles1954crystallography,Bhattacharya2003-lk}, as shown in the inset of Fig.~\ref{fig:microstru_1}.
This triple junction is not fully consistent with the inter-variant misorientation expected from the Burgers orientation relationship and, from a purely geometrical point of view, requires some further strain to be present, because the $\{10\bar{1}1\}$ pyramidal plane forms an angle of $61.5^\circ$ with the basal HCP plane. The stability shown by these triple junctions during the microstructural evolution (see Fig.~\ref{fig:nvt}), together with the observation of this specific interface arrangement in all our simulations, suggests that the energy cost linked to the additional strain required to form three $\{10\bar{1}1\}$ symmetric boundaries is negligible when compared to the energy gain in forming three low-energy coherent interfaces \cite{wang2012atomic}. This corroborates experimental observations in pure titanium \cite{wang2003effect,farabi2018five}, titanium alloys \cite{beladi2014variant} and zirconium alloys \cite{srivastava1993self}. In particular, a recent study on the grain boundary plane distribution in pure titanium subjected to a temperature-driven martensitic transformation, which completes previous experimental observations on the grain boundary axis-angle distribution \cite{wang2003effect}, highlights a strong anisotropy in the grain boundary plane distribution, with most of the grain boundaries terminating on $\{10\bar{1}1\}$ pyramidal planes \cite{farabi2018five}. The authors report that these boundaries are associated with symmetric tilt $60^{\circ} [11\bar{2}0]$ inter-variant boundaries and explicitly observed the three-variant cluster in the triple junction morphology \cite{farabi2018five}.}\\ We conclude our discussion by analyzing the numerically computed stretch tensors used to identify variants. Fig.~\ref{fig:histo} shows the histograms of the diagonal and off-diagonal components of $\bm{U}_n$ for the simulations in the ($N\bm{P}T$) and ($NVT$) ensembles, with dotted lines showing the theoretical values predicted by the Mao/Burgers mechanisms.
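The angle between the pyramidal and basal planes follows from the hexagonal reciprocal-lattice metric and depends weakly on the $c/a$ ratio. A quick numerical check, assuming $c/a = 1.587$ (a commonly quoted value for titanium; the effective ratio in the simulations may differ slightly):

```python
import numpy as np

def basal_pyramidal_angle(c_over_a, h=1, k=0, l=1):
    """Angle (deg) between the (hk.l) plane and the (0001) basal plane,
    computed from the hexagonal reciprocal-lattice metric with a = 1."""
    c = c_over_a
    # |G_hkl|^2 = 4(h^2 + hk + k^2)/3 + l^2/c^2; G_0001 is along z, |G| = 1/c.
    G2 = 4.0 * (h * h + h * k + k * k) / 3.0 + l * l / (c * c)
    cos_t = (l / (c * c)) / (np.sqrt(G2) / c)
    return np.degrees(np.arccos(cos_t))

print(f"{basal_pyramidal_angle(1.587):.1f} deg")
```

With this assumed $c/a$ the result is close to the $61.5^\circ$ value quoted above.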
We note that, for the simulation in the ($N\bm{P}T$) ensemble, there is an overall good agreement between the average values and the theoretical predictions. On the other hand, in fixed-volume conditions, the deviation is higher and more pronounced in the off-diagonal coefficients. Fig.~\ref{fig:histo_2} compares the histograms of the off-diagonal coefficients for the simulation in the ($NVT$) ensemble at three different time steps $t_1$, $t_2$, $t_3$, corresponding to the end of the nucleation stage, the quasi-stationary regime and the final stationary state (see Fig.~\ref{fig:var_sel_4}). Unlike the histograms of Fig.~\ref{fig:histo}, here we report the values for only one of the three variants forming the final microstructure, $\bm{U}^{(4)}$. The comparison between histograms highlights how the deviation from the theoretical strain values (dotted lines) starts developing after the nucleation stage, when almost all the BCC phase has disappeared and the HCP structure begins coarsening towards the final 3-plate morphology. This is particularly evident for the off-diagonal coefficient $U_{13}$. Although further investigations are needed, we can conclude that these deviations are related to the additional textural evolution occurring in fixed-volume conditions when the first nucleation stage has ended and specific variant domains start growing around the stable triple junctions to form the final 3-plate morphology. \section{Conclusions}\label{sec:conc} \textcolor{black}{To summarize, this work is the first numerical study at the atomic scale of the microstructural evolution of pure titanium undergoing the temperature-driven BCC$\rightarrow$HCP transition under different stress conditions.
For this purpose, we performed a set of extended atomistic simulations with a suitable empirical interatomic potential, and we analyzed the microstructural evolution during the transition as well as the final microstructures.}\\ \textcolor{black}{Our main results concern the analysis of the atomistic mechanisms inducing the formation of the different defects experimentally observed in martensite in pure titanium, and the assessment of the influence of macroscopic constraints on these defects and on the final martensite morphology.\\ When no constraints are present, i.e., when the crystal is allowed to change its shape, a simple mono-variant domain decorated by wavy antiphase boundaries forms. Our simulations confirm previous experimental hypotheses that trace the origin of these interfaces back to the growth and subsequent impingement of variant domains with the same orientation but different shuffling directions \cite{banerjee1998substructure,matsuda2008crystallography}.}\\ \textcolor{black}{In contrast, a poly-variant microstructure develops when local constraints prevent a free deformation of the environment surrounding the growing martensite nuclei. This microstructure shows a specific 3-variant morphology which has been extensively documented in experiments on titanium, zirconium and their alloys and which allows the minimization of the strain energy \cite{farabi2018five,srivastava1993self,beladi2014variant}. This microstructure develops around stable triple junctions between variants that are formed at the beginning of the transition, after the first nucleation stage.
The characterization of interfaces in this microstructure confirms a strong preference for the formation of boundaries along the $\{10\bar{1}1\}$ HCP pyramidal plane, as experimentally documented \cite{farabi2018five}.}\\ \textcolor{black}{Also, we observed the possible appearance of an FCC phase after the transition, although further studies are needed to check the absence of any artifact due to the use of empirical inter-atomic potentials \cite{morris2001molecular,pinsook1998simulation,ackland2008molecular}.}\\ \textcolor{black}{Finally, we stress that this is the first time that the overdamped Langevin dynamics, which has been mostly applied in the fields of soft matter and bio-molecular simulation, is successfully applied to simulate a fully 3D displacive solid phase transition. This is an important step towards the use of a first-order-in-time dynamics. The full application of this modelling tool would require its proper derivation through coarse-graining, which will adiabatically eliminate phonons through their incorporation within a coarse-grained potential. This potential will be much softer than the initial one, allowing the use of much larger time steps than those required with the original one.}\\ A natural extension of our work will be to investigate how the final microstructures obtained here may influence the mechanical response of the material under external mechanical loading. This can easily be performed in our formulation by controlling the components of the Piola-Kirchhoff tensor (stress controlled) or the deformation gradient (strain controlled). Our findings can also be useful to develop appropriate mesoscale phase-field theories of the BCC-HCP transition, formulated using finite strains \cite{Vattre2016-ql,Denoual2016-xi} and Landau-type theories with strain components used as the order parameter~\cite{Shchyglo2012-eb}.
\section{Introduction} Nesterov's accelerated gradient descent algorithm~\cite{Nes83} was introduced in 1983, and it exhibits a convergence rate of $\mathcal{O}(1/k^2)$ when applied to a convex objective function, which is faster than the $\mathcal{O}(1/k)$ convergence rate of standard gradient descent methods. It is shown in~\cite{nesterov2003introductory} that this rate of convergence is optimal for the class of first-order gradient methods. This improved rate of convergence over the standard gradient method is referred to as acceleration, and there is great interest in developing systematic approaches to the construction of efficient accelerated optimization algorithms, driven by potential applications in deep learning. A continuous-time limit of the Nesterov algorithm was studied in~\cite{su2016differential}, whose flow converges to the minimum at $\mathcal{O}(1/t^2)$, and this was generalized in~\cite{wibisono2016variational} using a time-dependent Bregman Lagrangian and Hamiltonian to obtain higher-order convergence of $\mathcal{O}(1/t^p)$ for arbitrary $p\geq 2$. However, it has been shown that discretizing the Bregman dynamics is not trivial, as common discretizations fail to achieve the higher convergence rate guaranteed in the continuous-time limit. As such, there have been several attempts to construct accelerated optimization algorithms using geometric structure-preserving discretizations of the Bregman dynamics~\cite{betancourt2018symplectic}. A natural class\footnote{Note that other classes of discretization methods exist, such as those based on splitting (e.g., \cite{HaLuWa2006,tao2020variational}) and composition (e.g., \cite{tao2010nonintrusive}), and such approaches also arise in variational discretization \cite{MarWesAN01}.} of geometric numerical integrators~\cite{HaLuWa2006} for discretizing such Lagrangian or Hamiltonian systems is variational integrators~\cite{MarWesAN01,LeZh2009}.
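The acceleration gap can be seen on a toy problem. The sketch below (an illustrative setup chosen here, not taken from the paper) compares standard gradient descent with Nesterov's method, using the classical momentum coefficient $(k-1)/(k+2)$, on an ill-conditioned convex quadratic:

```python
import numpy as np

# Ill-conditioned quadratic f(x) = 0.5 x^T A x with minimizer x* = 0.
# All values here (A, step count, initial point) are illustrative choices.
A = np.diag([1.0, 100.0])
L = 100.0                      # Lipschitz constant of grad f
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

x0 = np.array([1.0, 1.0])
x_gd = x0
x_nes, x_prev = x0, x0
for k in range(1, 201):
    # Plain gradient descent with step 1/L.
    x_gd = x_gd - grad(x_gd) / L
    # Nesterov's accelerated gradient: extrapolate, then a gradient step.
    y = x_nes + (k - 1) / (k + 2) * (x_nes - x_prev)
    x_prev = x_nes
    x_nes = y - grad(y) / L
print(f"GD: {f(x_gd):.2e}  Nesterov: {f(x_nes):.2e}")
```

After the same number of gradient evaluations, the accelerated iterate is markedly closer to the minimum, which is the behavior the $\mathcal{O}(1/k^2)$ versus $\mathcal{O}(1/k)$ rates predict.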
They are constructed by a discrete analogue of Hamilton's variational principle, and therefore, their numerical flows are symplectic. They also satisfy a discrete Noether's theorem that relates symmetries with momentum conservation properties, and further exhibit excellent exponentially long-time energy stability. One complication is that such methods are typically developed for autonomous Lagrangian and Hamiltonian systems on Euclidean space. To address this, variational integrators have been developed on a Lie group~\cite{LeeLeoCMAME07}, and time-adaptive Hamiltonian variational integrators have been proposed~\cite{DuScLe2021}. In this paper, we focus on the optimization problem of minimizing an objective function defined on a Lie group. Optimization on a manifold or a Lie group appears in various areas of machine learning, engineering, and applied mathematics~\cite{hu2020brief,absil2009optimization}, and respecting the geometric structure of manifolds yields more accurate and efficient optimization schemes, when compared to methods based on embeddings in a higher-dimensional Euclidean space with algebraic constraints, or using local coordinates. In particular, we formulate a Bregman Lagrangian system on a Lie group, and we further discretize it using the extended Lie group variational integrator to construct an intrinsic accelerated optimization scheme, which inherits the desirable properties of variational integrators while also preserving the group structure. Compared with~\cite{DuScLe2021}, where the evolution of the stepsize is prescribed, the proposed scheme adaptively adjusts the stepsize according to the extended variational principle, at the cost of increased computational load.
The resulting computational properties of the proposed approach are analyzed with two examples in attitude determination and vision-based localization, where it is observed that the scheme exhibits an interesting convergence of the adaptive stepsize, and the variational discretization provides robustness against the choice of stepsize, which is exploited in the numerical experiments to improve computational efficiency. We also present benchmark studies against other discretization schemes applied to the Bregman dynamics, and other accelerated optimization schemes on a Lie group~\cite{tao2020variational}. \section{Extended Lagrangian Mechanics} This section presents Lagrangian mechanics for non-autonomous systems on a Lie group. It is referred to as \textit{extended} Lagrangian mechanics as the variational principle is extended to include reparameterization of time~\cite{MarWesAN01}. These are developed in both continuous-time and discrete-time formulations. The latter yields a \textit{Lie group variational integrator}~\cite{LeeLeoCMAME07}, which will be applied to accelerated optimization using the Bregman Lagrangian in the next section. Consider an $n$-dimensional Lie group $\ensuremath{\mathsf{G}}$. Let $\ensuremath{\mathfrak{g}}$ be the associated Lie algebra, or the tangent space at the identity, i.e., $\ensuremath{\mathfrak{g}} = \ensuremath{\mathsf{T}}_e\ensuremath{\mathsf{G}}$. Consider a left trivialization of the tangent bundle of the group, $\ensuremath{\mathsf{T}}\ensuremath{\mathsf{G}} \simeq \ensuremath{\mathsf{G}}\times \ensuremath{\mathfrak{g}}$, $(g,\dot g)\mapsto (g, L_{g^{-1}}\dot g)\equiv(g,\xi)$. More specifically, let $\L:\ensuremath{\mathsf{G}}\times\ensuremath{\mathsf{G}}\rightarrow\ensuremath{\mathsf{G}}$ be the left action defined such that $\L_g h = gh$ for $g,h\in\ensuremath{\mathsf{G}}$.
Then the left trivialization is a map $(g,\dot g)\mapsto (g, L_{g^{-1}}\dot g)\equiv(g,\xi)$, where $\xi\in\ensuremath{\mathfrak{g}}$, and the kinematics equation can be written as \begin{align} \dot g = g\xi. \label{eqn:g_dot} \end{align} Further, suppose $\ensuremath{\mathfrak{g}}$ is equipped with an inner product $\pair{\cdot, \cdot}$, which induces an inner product on $\ensuremath{\mathsf{T}}_g\ensuremath{\mathsf{G}}$ via left trivialization. For any $v,w\in\ensuremath{\mathsf{T}}_g\ensuremath{\mathsf{G}}$, $\pair{v,w}_{\ensuremath{\mathsf{T}}_g\ensuremath{\mathsf{G}}} = \pair{ \ensuremath{\mathsf{T}}_g \L_{g^{-1}} v, \ensuremath{\mathsf{T}}_g \L_{g^{-1}} w}_\ensuremath{\mathfrak{g}}$. Given the inner product, we identify $\ensuremath{\mathfrak{g}}\simeq \ensuremath{\mathfrak{g}}^*$ and $\ensuremath{\mathsf{T}}_g \ensuremath{\mathsf{G}} \simeq \ensuremath{\mathsf{T}}^*_g \ensuremath{\mathsf{G}}\simeq \ensuremath{\mathsf{G}}\times \ensuremath{\mathfrak{g}}^*$ via the Riesz representation. Throughout this paper, the pairing is also denoted by the dot product $\cdot$. Let $\mathbf{J}:\ensuremath{\mathfrak{g}}\rightarrow\ensuremath{\mathfrak{g}}^*$ be chosen such that $\pair{\mathbf{J}(\xi),\zeta}$ is positive-definite and symmetric as a bilinear form of $\xi,\zeta\in\ensuremath{\mathfrak{g}}$. Define the metric $\met{\cdot,\cdot}:\ensuremath{\mathfrak{g}}\times\ensuremath{\mathfrak{g}}\rightarrow\Re$ with $\met{\xi,\zeta} = \pair{\mathbf{J}(\xi),\zeta}$. This serves as a left-invariant Riemannian metric on $\ensuremath{\mathsf{G}}$. Also, $\|\xi\|^2 = \met{\xi,\xi}$ for any $\xi\in\ensuremath{\mathfrak{g}}$. The adjoint operator is denoted by $\ensuremath{\mathrm{Ad}}_g:\ensuremath{\mathfrak{g}}\rightarrow\ensuremath{\mathfrak{g}}$, and the ad operator is denoted by $\ensuremath{\mathrm{ad}}_\xi:\ensuremath{\mathfrak{g}}\rightarrow\ensuremath{\mathfrak{g}}$. See, for example,~\cite{MarRat99} for detailed preliminaries.
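On $\mathrm{SO}(3)$, the kinematics equation \eqref{eqn:g_dot} can be discretized in a group-structure-preserving way by updating $g_{k+1} = g_k\exp(h\xi)$, the basic building block of Lie group variational integrators, whereas a naive explicit Euler step drifts off the group. A minimal numerical sketch of this contrast (the hat map and Rodrigues formula are standard; the particular $\xi$, stepsize, and step count are arbitrary choices made here for illustration):

```python
import numpy as np

def hat(xi):
    """Hat map R^3 -> so(3): hat(xi) @ v = xi x v."""
    return np.array([[0.0, -xi[2], xi[1]],
                     [xi[2], 0.0, -xi[0]],
                     [-xi[1], xi[0], 0.0]])

def expm_so3(xi):
    """Rodrigues' formula for the exponential map so(3) -> SO(3)."""
    theta = np.linalg.norm(xi)
    K = hat(xi)
    if theta < 1e-12:
        return np.eye(3) + K
    return (np.eye(3) + np.sin(theta) / theta * K
            + (1.0 - np.cos(theta)) / theta ** 2 * K @ K)

# Integrate g_dot = g hat(xi) with a constant xi over 1000 steps.
xi, h = np.array([0.3, -0.2, 0.5]), 0.01
g_lie = np.eye(3)    # update g <- g exp(h xi): stays on SO(3)
g_euler = np.eye(3)  # explicit Euler update: drifts off the group
for _ in range(1000):
    g_lie = g_lie @ expm_so3(h * xi)
    g_euler = g_euler + g_euler @ hat(xi) * h
orth_err = lambda g: np.linalg.norm(g.T @ g - np.eye(3))
print(orth_err(g_lie), orth_err(g_euler))
```

The orthogonality error of the exponential update stays at round-off level, while the Euler update accumulates a visible departure from $g^Tg = I$; this is the sense in which the schemes developed below preserve the group structure.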
\subsection{Continuous-Time Extended Lagrangian Mechanics} Consider a non-autonomous (left-trivialized) Lagrangian $L(t,g,\xi):\Re\times\ensuremath{\mathsf{G}}\times\ensuremath{\mathfrak{g}}\rightarrow \Re$ on the \textit{extended state space}. The corresponding \textit{extended path space} is composed of the curves $(c_t(a),c_g(a))$ on $\Re\times \ensuremath{\mathsf{G}}$ parameterized by $a>0$. To ensure that the reparameterized time increases monotonically, we require $c'_t(a) > 0$. For a given time interval $[t_0,t_f]$, the corresponding interval $[a_0,a_f]$ for $a$ is chosen such that $t_0=c_t(a_0)$ and $t_f=c_t(a_f)$. For any path $(c_t(a),c_g(a))$ over $[a_0,a_f]$ in the extended space, the \textit{associated curve} is \begin{align} g(t) = c_g(c_t^{-1}(t)),\label{eqn:ac} \end{align} on $\ensuremath{\mathsf{G}}$ over the time interval $[t_0,t_f]$. For a given extended path, define the \textit{extended action integral} as \begin{align} \mathfrak{G}(c_t,c_g) = \int_{t_0}^{t_f} L(t,g,\xi)\bigg|_{g(t) = c_g(c_t^{-1}(t))} dt,\label{eqn:AI} \end{align} where the Lagrangian is evaluated on the associated curve \eqref{eqn:ac}, and $\xi$ satisfies the kinematics equation \eqref{eqn:g_dot}. Taking the variation of $\mathfrak{G}$ with respect to the extended path, we obtain the Euler--Lagrange equation according to the variational principle in the extended phase space. As discussed in~\cite[Sec. 4.2.2]{MarWesAN01}, the resulting Euler--Lagrange equations depend only on the associated curve \eqref{eqn:ac}, not on the extended path $(c_t,c_g)$ itself, and the variational principle does not dictate how the curve should be reparameterized. Further, the resulting Euler--Lagrange equation takes exactly the same form as that of (unextended) Lagrangian mechanics for the associated curve.
As such, the Euler--Lagrange equation for a non-autonomous Lagrangian $L(t,g,\xi):\Re\times\ensuremath{\mathsf{G}}\times\ensuremath{\mathfrak{g}}\rightarrow \Re$ can be written as \begin{align} \frac{d}{dt}\!\parenth{\deriv{L}{\xi}} - \ensuremath{\mathrm{ad}}^*_\xi \deriv{L}{\xi} - \ensuremath{\mathsf{T}}^*_e \L_g (\ensuremath{\mathbf{D}}_g L) = 0, \label{eqn:EL} \end{align} where $\ensuremath{\mathbf{D}}_g$ stands for the differential with respect to $g$ (see~\cite[Sec. 8.6.3]{LeeLeo17} for a derivation of the above equation for autonomous Lagrangians). Introducing the Legendre transform $\mu = \deriv{L}{\xi} \in\ensuremath{\mathfrak{g}}^*$, and assuming that it is invertible, the Euler--Lagrange equation can be rewritten as \begin{align} \dot \mu - \ensuremath{\mathrm{ad}}^*_{\xi} \mu - \ensuremath{\mathsf{T}}^*_e \L_g (\ensuremath{\mathbf{D}}_g L) = 0. \label{eqn:HE} \end{align} \subsection{Extended Lie Group Variational Integrator} Variational integrators are geometric numerical integration schemes that can be viewed as discrete-time mechanics derived from a discretization of the variational principle for Lagrangian mechanics~\cite{MarWesAN01}. The discrete-time flows of variational integrators are symplectic and they exhibit a discrete analogue of Noether's theorem. This provides long-term structural stability in the resulting numerical simulations. For Lagrangian mechanics evolving on a Lie group, the corresponding Lie group variational integrators were developed in~\cite{LeeLeoCMAME07}. Here, we develop extended Lie group variational integrators by discretizing the extended variational principle presented above, following the general framework of~\cite{MarWesAN01}. The \textit{extended discrete path space} is composed of sequences $\{(t_k, g_k)\}_{k=0}^N$ on $\Re\times\ensuremath{\mathsf{G}}$ satisfying $t_{k+1}>t_k$.
Next, the discrete kinematics equation is chosen to be \begin{align} g_{k+1} = g_k f_k, \label{eqn:gkp} \end{align} for $f_k \in\ensuremath{\mathsf{G}}$ representing the relative update over a single timestep. The discrete Lagrangian $L_d(t_k, t_{k+1}, g_k, f_k): \Re\times\Re\times\ensuremath{\mathsf{G}}\times\ensuremath{\mathsf{G}}\rightarrow \Re$ is chosen such that the following \textit{extended discrete action sum} \begin{align} \mathfrak{G}_d(\{(t_k, g_k)\}_{k=0}^N) = \sum_{k=0}^{N-1} L_d(t_k, t_{k+1}, g_k, f_k), \label{eqn:Gd} \end{align} approximates \eqref{eqn:AI}. \begin{prop} The extended discrete path $\{(t_k, g_k)\}_{k=0}^{N}$ that extremizes the discrete action sum \eqref{eqn:Gd} subject to fixed endpoints satisfies the following discrete Euler--Lagrange equation, \begin{gather} \ensuremath{\mathsf{T}}^*_e\L_{g_k}(\ensuremath{\mathbf{D}}_{g_k} L_{d_k})- \ensuremath{\mathrm{Ad}}^*_{f_k^{-1}} (\ensuremath{\mathsf{T}}^*_e\L_{f_k}(\ensuremath{\mathbf{D}}_{f_k} L_{d_k}))\nonumber \\ + \ensuremath{\mathsf{T}}^*_e\L_{f_{k-1}}(\ensuremath{\mathbf{D}}_{f_{k-1}} L_{d_{k-1}}) =0,\label{eqn:DEL}\\ \ensuremath{\mathbf{D}}_{t_k} L_{d_{k-1}} + \ensuremath{\mathbf{D}}_{t_k} L_{d_k} = 0, \label{eqn:DELt} \end{gather} which together with the discrete kinematic equation \eqref{eqn:gkp} defines an extended Lie group variational integrator. \end{prop} \begin{proof} From \eqref{eqn:gkp}, $\delta f_k = - g_k^{-1}( \delta g_k ) g_k^{-1} g_{k+1} + g_k^{-1}\delta g_{k+1}$.
Since $\delta g_k$ can be written as $\delta g_k = g_k \eta_k $ for $\eta_k\in \ensuremath{\mathfrak{g}}$, \begin{align} f_k^{-1}\delta f_k = -\ensuremath{\mathrm{Ad}}_{f_k^{-1}} \eta_k + \eta_{k+1}.\label{eqn:del_fk} \end{align} Take the variation of \eqref{eqn:Gd} and substitute \eqref{eqn:del_fk} to obtain \begin{align*} \delta \mathfrak{G}_d = \sum_{k=0}^{N-1} & \ensuremath{\mathsf{T}}^*_e\L_{g_k}(\ensuremath{\mathbf{D}}_{g_k} L_{d_k}) \cdot \eta_k \\ & + \ensuremath{\mathsf{T}}^*_e\L_{f_k}(\ensuremath{\mathbf{D}}_{f_k} L_{d_k}) \cdot (-\ensuremath{\mathrm{Ad}}_{f_k^{-1}} \eta_k + \eta_{k+1}) \\ & + \ensuremath{\mathbf{D}}_{t_k} L_{d_k}\cdot \delta t_k + \ensuremath{\mathbf{D}}_{t_{k+1}} L_{d_k}\cdot \delta t_{k+1}. \end{align*} Since the endpoints are fixed, we have $\eta_0=0$ and $\delta t_0 = 0$. Therefore in the above expression, the range of summation for the terms paired with $\eta_k$ and $\delta t_k$ can be reduced to $1\leq k\leq N-1$. Also, using $\eta_N=0$ and $\delta t_N=0$, the remaining terms paired with $\eta_{k+1}$ and $\delta t_{k+1}$ can be reindexed by reducing the subscript by one and summed over the same range. According to the variational principle, $\delta\mathfrak{G}_d = 0$ for any $\eta_k$ and $\delta t_k$, which yields \eqref{eqn:DEL} and \eqref{eqn:DELt}. \end{proof} The most notable difference compared to the continuous-time counterpart is that in addition to the discrete Euler--Lagrange equation \eqref{eqn:DEL}, we have the additional equation \eqref{eqn:DELt} for the evolution of the discrete time. This is because the discrete action sum $\mathfrak{G}_d$ depends on the complete extended path $\{(t_k,g_k)\}_{k=0}^N$, whereas the continuous-time action $\mathfrak{G}$ is only a function of the associated curve \eqref{eqn:ac}. The discrete Euler--Lagrange equation for the discrete time~\eqref{eqn:DELt} is associated with the energy.
Define the discrete energy to be \begin{align} E^+_k &= - \ensuremath{\mathbf{D}}_{t_{k+1}} L_{d_k},\label{eqn:Ep}\\ E^-_k &= \ensuremath{\mathbf{D}}_{t_{k}} L_{d_k}.\label{eqn:Em} \end{align} Then, \eqref{eqn:DELt} can be rewritten as \begin{align} E^+_{k-1} = E^-_k,\label{eqn:DELE} \end{align} which reflects the evolution of the discrete energy. When the discrete Lagrangian is autonomous, \eqref{eqn:DELE} implies the conservation of discrete energy, thereby yielding a symplectic-energy-momentum integrator~\cite{KaMaOr1999}. To implement \eqref{eqn:DEL} and \eqref{eqn:DELt} as a numerical integrator, it is more convenient to introduce the \textit{extended discrete Legendre transforms}, $\mathbb{F}^\pm L_{d_k}: \Re\times\Re \times \ensuremath{\mathsf{G}} \times \ensuremath{\mathsf{G}} \rightarrow \Re\times \Re\times\ensuremath{\mathsf{G}}\times\ensuremath{\mathfrak{g}}^*$ as \begin{align} \mathbb{F}^+ L_{d_k} (t_k,t_{k+1}, g_k,f_k) & = (t_{k+1}, E_{k+1}, g_{k+1}, \mu_{k+1}),\\ \mathbb{F}^- L_{d_k} (t_k,t_{k+1}, g_k,f_k) & = (t_k, E_k, g_{k}, \mu_{k}), \end{align} where \begin{align} \mu_k & = -\ensuremath{\mathsf{T}}^*_e\L_{g_k}(\ensuremath{\mathbf{D}}_{g_k} L_{d_k})+ \ensuremath{\mathrm{Ad}}^*_{f_k^{-1}} (\ensuremath{\mathsf{T}}^*_e\L_{f_k}(\ensuremath{\mathbf{D}}_{f_k} L_{d_k})),\label{eqn:muk}\\ \mu_{k+1} & = \ensuremath{\mathsf{T}}^*_e\L_{f_k} (\ensuremath{\mathbf{D}}_{f_k} L_{d_k}),\label{eqn:mukp} \end{align} and $E_{k+1}$ and $E_k$ are given by \eqref{eqn:Ep} and \eqref{eqn:Em}, respectively. The resulting discrete flow map is defined by $\mathbb{F}^+L_{d_k} \circ (\mathbb{F}^-L_{d_k})^{-1}$. More specifically, for a given $(t_k, E_k, g_k, \mu_k)$, \eqref{eqn:Em} and \eqref{eqn:muk} are solved together for $t_{k+1},f_k$ with the constraint $t_{k+1}>t_k$. Then, $(E_{k+1}, g_{k+1},\mu_{k+1})$ are computed by \eqref{eqn:Ep}, \eqref{eqn:gkp}, and \eqref{eqn:mukp}, respectively.
This yields the discrete flow map $(t_k, E_k, g_k, \mu_k)\rightarrow(t_{k+1}, E_{k+1}, g_{k+1}, \mu_{k+1})$ consistent with \eqref{eqn:DEL} and \eqref{eqn:DELt}. While the flow map is expressed in terms of $E$ for convenience, the initial value of $E_0$ is often selected by choosing the initial timestep $h_0$ and calculating the corresponding value of $E_0$ through \eqref{eqn:Em}. This inherits the desirable properties of variational integrators, and the group structure is also preserved through \eqref{eqn:gkp}. \section{Bregman Lagrangian Systems on $\ensuremath{\mathsf{G}}$}\label{sec:Breg} Let $\mathsf{f}:\ensuremath{\mathsf{G}}\rightarrow\Re$ be a real-valued smooth function on $\ensuremath{\mathsf{G}}$. We focus on the optimization problem: \begin{align} \min_{g\in\ensuremath{\mathsf{G}}} \mathsf{f} (g). \end{align} A variational accelerated optimization scheme for the above problem was developed in~\cite{tao2020variational}, where the Nesterov accelerated gradient (NAG) descent on a finite-dimensional vector space was intrinsically generalized to a Lie group. In this section, we introduce an intrinsic formulation of Bregman Lagrangian dynamics~\cite{wibisono2016variational}, which encompasses a larger class of accelerated optimization schemes, including NAG. More importantly, the continuous dynamics guarantees polynomial convergence rates of arbitrary order. \subsection{Continuous-Time Bregman Dynamics} The Bregman Lagrangian $L(t,g,\xi):\Re\times\ensuremath{\mathsf{G}}\times\ensuremath{\mathfrak{g}}\rightarrow\Re$ is \begin{align} L(t,g,\xi) = \frac{t^{\lambda p+1}}{2p} \|\xi\|^2 - C p t^{(\lambda+1)p-1} \mathsf{f} (g),\label{eqn:BL} \end{align} where $\|\xi\|^2 = \met{\xi,\xi}=\pair{\mathbf{J}(\xi),\xi}$, for $p, C>0$, and $\lambda\geq 1$.
When $\ensuremath{\mathsf{G}}=\Re^n$ and $\lambda=1$, this recovers the Bregman Lagrangian for vector spaces~\cite{wibisono2016variational}, and it yields the continuous-time limit of Nesterov's accelerated gradient descent for $p=2$~\cite{nesterov2005smooth}. Similarly, for $p = 3$, it corresponds to the continuous-time limit of Nesterov's accelerated cubic-regularized Newton's method~\cite{nesterov2008accelerating}. When $\ensuremath{\mathsf{G}}$ is considered as a Riemannian manifold, this corresponds to the $p$-Bregman Lagrangian in~\cite{duruisseaux2021variational}. The additional parameter $\lambda$ accounts for the sectional curvature and diameter of the manifold~\cite{alimisis2020continuous}. The left-trivialized derivative of the objective function is \begin{align} \nabla_\L \mathsf{f}(g) = \ensuremath{\mathsf{T}}_e^*\L_g (\ensuremath{\mathbf{D}}_g \mathsf{f}(g)).\label{eqn:grad} \end{align} Applying \eqref{eqn:EL} to \eqref{eqn:BL}, the corresponding Euler--Lagrange equations are given below. \begin{prop}\label{prop:EL_Breg} The Euler--Lagrange equations corresponding to the Bregman Lagrangian \eqref{eqn:BL} are \begin{align} \frac{d \mathbf{J}(\xi)}{dt} + \frac{\lambda p+1}{t}\mathbf{J}(\xi) - \ensuremath{\mathrm{ad}}^*_\xi \mathbf{J}(\xi) + Cp^2 t^{p-2} \nabla_\L \mathsf{f} (g) =0, \label{eqn:EL_Breg} \end{align} and \eqref{eqn:g_dot}. Further, the corresponding continuous flow locally converges to the minimizer $g^*$ of $\mathsf{f}$ with the rate given by \begin{align} \mathsf{f}(g(t)) - \mathsf{f}(g^*) \in \mathcal{O}(t^{-p}), \end{align} when $\mathsf{f}$ is geodesically convex.
\end{prop} \begin{proof} We have \begin{align*} \deriv{L}{\xi} & = \frac{t^{\lambda p+1}}{p}\mathbf{J}(\xi). \end{align*} Substituting this into \eqref{eqn:EL} and using \eqref{eqn:grad}, \begin{gather*} \frac{t^{\lambda p+1}}{p}\frac{d \mathbf{J}(\xi)}{dt} + \frac{(\lambda p+1)t^{\lambda p}}{p}\mathbf{J}(\xi) -\frac{t^{\lambda p+1}}{p} \ensuremath{\mathrm{ad}}^*_\xi \mathbf{J}(\xi) \\ + Cpt^{(\lambda +1)p-1} \nabla_\L \mathsf{f}(g) =0. \end{gather*} Dividing both sides by $\frac{t^{\lambda p+1}}{p}$ yields \eqref{eqn:EL_Breg}. The convergence property is established by~\cite[Theorem 3.2]{duruisseaux2021variational}. \end{proof} Therefore, the optimization problem on $\ensuremath{\mathsf{G}}$ can be addressed by numerically integrating \eqref{eqn:EL_Breg} from an initial guess. However, it has been observed that a na\"\i ve discretization is not able to match the polynomial convergence rate established in~\cite{wibisono2016variational}. Further, we need a guarantee that the discrete trajectory evolves on the Lie group. These two challenges can be addressed by applying a Lie group variational integrator, as its structure-preserving properties provide long-term numerical stability and preserve the group structure. In the following subsection, we derive Lie group variational integrators for the Bregman Lagrangian system. \subsection{Lie Group Variational Integrator for Bregman Dynamics} Let $h_k = t_{k+1}- t_k$ and $t_{k,k+1} = t_k + h_k/2$.
We consider the following form of the discrete Lagrangian \begin{align} L_d(t_k, t_{k+1}, g_k, f_k ) & = \frac{\phi(t_{k,k+1}) }{h_k} T_d(f_k) -\frac{h_k}{2} \theta(t_k) \mathsf{f} (g_k)\nonumber \\ & \quad -\frac{h_k}{2} \theta(t_{k+1}) \mathsf{f} (g_kf_k), \label{eqn:Ld} \end{align} where $T_d:\ensuremath{\mathsf{G}}\rightarrow\Re$ is chosen such that $T_d(f_k) \approx h_k^2 \|\xi_k \|^2/2$, and $\phi,\theta:\Re\rightarrow\Re$ are \begin{align} \phi(t) & = \frac{t^{\lambda p +1}}{p},\\ \theta(t) & = C p t^{(\lambda+1)p-1}. \end{align} The corresponding variational integrators are presented as follows. \begin{prop}\label{prop:DEL_Breg} The discrete-time Euler--Lagrange equations, or the Lie group variational integrator for the discrete Lagrangian \eqref{eqn:Ld} corresponding to the Bregman Lagrangian~\eqref{eqn:BL} are given by \begin{align} \mu_k & = \frac{\phi_{k,k+1}}{h_k}\ensuremath{\mathrm{Ad}}^*_{f_k^{-1}} (\ensuremath{\mathsf{T}}^*_e\L_{f_k}(\ensuremath{\mathbf{D}}_{f_k} T_{d_k})) + \frac{h_k\theta_k}{2} \nabla_\L \mathsf{f}_k, \label{eqn:muk_Breg}\\ \mu_{k+1} & = \ensuremath{\mathrm{Ad}}^*_{f_k}( \mu_k - \frac{h_k\theta_k}{2} \nabla_\L\mathsf{f}_k ) -\frac{h_k\theta_{k+1}}{2} \nabla_\L \mathsf{f}_{k+1}, \label{eqn:mukp_Breg}\\ E_{k} & = \frac{\phi'_{k,k+1}}{ 2 h_k} T_{d_k} -\frac{h_k\theta'_{k}}{2} \mathsf{f}_{k} \nonumber\\ & \quad + \frac{\phi_{k,k+1}}{h_k^2}T_{d_k} + \frac{\theta_k}{2}\mathsf{f}_k + \frac{\theta_{k+1}}{2}\mathsf{f}_{k+1}, \label{eqn:Ek_Breg}\\ E_{k+1} & = -\frac{\phi'_{k,k+1}}{ 2 h_k} T_{d_k} +\frac{h_k\theta'_{k+1}}{2} \mathsf{f}_{k+1}\nonumber \\ & \quad + \frac{\phi_{k,k+1}}{h_k^2}T_{d_k} +\frac{\theta_k}{2}\mathsf{f}_k + \frac{\theta_{k+1}}{2}\mathsf{f}_{k+1}, \label{eqn:Ekp_Breg} \end{align} together with \eqref{eqn:gkp}. \end{prop} \begin{proof} These can be derived by substituting \eqref{eqn:Ld} into \eqref{eqn:muk}, \eqref{eqn:mukp}, \eqref{eqn:Em}, and \eqref{eqn:Ep}, respectively.
\end{proof} As discussed at the end of Section III, these provide symplectic and momentum-preserving discrete-time flow maps. Since these correspond to a discretization of the Bregman Lagrangian system, they can be considered as a geometric numerical integrator for \eqref{eqn:EL_Breg}, or utilized as an optimization algorithm on $\ensuremath{\mathsf{G}}$. If $T_d(f_k)=T_d(f_k^{-1})$, then the discrete Lagrangian is self-adjoint, and the above integrator is symmetric and therefore at least second-order accurate. \section{Optimization on $\ensuremath{\mathsf{G}}$} In this section, we present both the continuous Bregman Lagrangian system and the Lie group variational integrator for several Lie groups. \subsection{Euclidean Space $\Re^n$} Suppose $\ensuremath{\mathsf{G}}=\Re^n$, with the additive group action, and the inner product is chosen to be $\pair{x,y}=x^Ty$ for any $x,y\in\Re^n$. Let $\mathbf{J}(\dot x) = I_{n\times n} \dot x$, and $\lambda =1$. From \eqref{eqn:EL_Breg}, the continuous Euler--Lagrange equation is given by \begin{align} \ddot x + \frac{p+1}{t} \dot x + Cp^2 t^{p-2} \nabla \mathsf{f}(x) = 0,\label{eqn:EL_Re} \end{align} which recovers the differential equation derived in \cite{wibisono2016variational}. Next, we develop variational integrators. The discrete kinematics equation \eqref{eqn:gkp} is rewritten as $x_{k+1} = x_k + \Delta x_k$ for $\Delta x_k\in\Re^n$. The kinetic energy term in \eqref{eqn:Ld} is chosen as \begin{align} T_d =\frac{1}{2}\|\Delta x_k\|^2.\label{eqn:Td_Re} \end{align} According to \Cref{prop:DEL_Breg}, we obtain the discrete Euler--Lagrange equations as follows.
\begin{prop} When $\ensuremath{\mathsf{G}}=\Re^n$, the variational integrator for the discrete Bregman Lagrangian \eqref{eqn:Ld} is given by \begin{align} v_k & = \frac{\phi_{k,k+1}}{h_k}\Delta x_k + \frac{h_k\theta_k}{2} \nabla \mathsf{f}_k,\label{eqn:muk_Re} \\ v_{k+1} & = v_k - \frac{h_k\theta_k}{2} \nabla\mathsf{f}_k -\frac{h_k\theta_{k+1}}{2} \nabla \mathsf{f}_{k+1}, \label{eqn:mukp_Re} \end{align} and \eqref{eqn:Ek_Breg}, \eqref{eqn:Ekp_Breg} with \eqref{eqn:Td_Re}. \end{prop} These are implicit as \eqref{eqn:muk_Re} and \eqref{eqn:Ek_Breg} should be solved together for $\Delta x_k$ and $h_k$. One straightforward approach is fixed-point iteration. For a given $h_k$, \eqref{eqn:muk_Re} can be solved explicitly for $\Delta x_k$, which yields $x_{k+1}$. Then, \eqref{eqn:Ek_Breg} can be solved for $h_k$. This procedure is iterated until $h_k$ converges. \subsection{Three-Dimensional Special Orthogonal Group $\ensuremath{\mathsf{SO(3)}}$} Next, consider $\ensuremath{\mathsf{SO(3)}}=\{R\in\Re^{3\times 3}\,|\, R^T R = I_{3\times 3},\, \mathrm{det}(R)=1\}$. Its Lie algebra is $\ensuremath{\mathfrak{so}(3)}=\{S\in\Re^{3\times 3}\,|\, S^T=-S\}$ with the matrix commutator as the Lie bracket. This is identified with $\Re^3$ through the \textit{hat} map $\hat\cdot :\Re^3\rightarrow\ensuremath{\mathfrak{so}(3)}$ defined such that $\hat x\in\ensuremath{\mathfrak{so}(3)}$ and $\hat x y = x\times y$ for any $x,y\in\Re^3$. The inverse of the hat map is denoted by the \textit{vee} map $\vee: \ensuremath{\mathfrak{so}(3)}\rightarrow\Re^3$. The inner product is given by \begin{align*} \langle \hat \eta, \hat \xi \rangle_{\ensuremath{\mathfrak{so}(3)}} = \frac{1}{2}\tr{\hat \eta^T\hat \xi} = \eta^T\xi = \pair{\eta,\xi}_{\Re^3}.
\end{align*} The metric is chosen as \begin{align} \langle \mathbf{J}(\hat \eta), \hat \xi \rangle_{\ensuremath{\mathfrak{so}(3)}} = \tr{\hat \eta^T J_d \hat \xi} = \eta^TJ \xi = \pair{J\eta,\xi}_{\Re^3},\label{eqn:met_SO3} \end{align} where $J\in\Re^{3\times 3}$ is a symmetric, positive-definite matrix, and $J_d=\frac{1}{2}\tr{J}I_{3\times 3}-J\in\Re^{3\times 3}$. Further, \begin{alignat*}{2} \ensuremath{\mathrm{ad}}_\eta\xi &= \eta\times \xi,& \quad \ensuremath{\mathrm{ad}}^*_\eta \xi &= \xi\times \eta,\\ \ensuremath{\mathrm{Ad}}_F\eta &= F\eta,& \quad \ensuremath{\mathrm{Ad}}^*_F \eta &= F^T\eta. \end{alignat*} Consider the Bregman Lagrangian \eqref{eqn:BL} with $\lambda=1$, \begin{align*} L(t,R,\Omega) = \frac{t^{p+1}}{2p} \Omega\cdot J\Omega - Cpt^{2p-1} \mathsf{f}(R). \end{align*} From \eqref{eqn:EL_Breg}, the Euler--Lagrange equations are given by \begin{gather} J\dot\Omega + \frac{p+1}{t} J\Omega + \hat\Omega J\Omega + C p^2 t^{p-2} \nabla_\L \mathsf{f}(R) = 0,\label{eqn:EL_SO3}\\ \dot R = R\hat\Omega. \label{eqn:R_dot} \end{gather} Next, we derive variational integrators. The kinematics equation is written as \begin{align} R_{k+1} = R_k F_k, \label{eqn:Rkp} \end{align} for $F_k\in\ensuremath{\mathsf{SO(3)}}$. Similar to~\cite{LeeLeoCMAME07}, the angular velocity is approximated with $\hat\Omega_k \approx \frac{1}{h_k} R_k^T (R_{k+1}-R_k) = \frac{1}{h_k} (F_k-I_{3\times 3})$. Substituting this into \eqref{eqn:met_SO3}, \begin{align} T_d(F_k) & =\tr{(I_{3\times 3}-F_k)J_d}, \label{eqn:Td_SO3} \end{align} which satisfies $T_d(F_k)=T_d(F_k^T)$.
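The two properties of the discrete kinetic energy \eqref{eqn:Td_SO3} can be verified numerically: the symmetry $T_d(F_k)=T_d(F_k^T)$, and the second-order consistency $T_d(F_k)\approx h_k^2\,\Omega^T J\Omega/2$. The following Python sketch does so for a hypothetical positive-definite $J$; all variable names are ours.

```python
import numpy as np

def hat(x):
    # hat map R^3 -> so(3)
    return np.array([[0.0, -x[2], x[1]],
                     [x[2], 0.0, -x[0]],
                     [-x[1], x[0], 0.0]])

def expm_so3(S):
    # Rodrigues' formula for the exponential of a skew-symmetric matrix
    v = np.array([S[2, 1], S[0, 2], S[1, 0]])
    th = np.linalg.norm(v)
    if th < 1e-12:
        return np.eye(3) + S
    return np.eye(3) + (np.sin(th)/th)*S + ((1.0 - np.cos(th))/th**2)*(S @ S)

J = np.diag([1.0, 2.0, 3.0])                 # hypothetical metric matrix
Jd = 0.5 * np.trace(J) * np.eye(3) - J       # "nonstandard moment" J_d

def Td(F):
    # discrete kinetic energy (eqn:Td_SO3)
    return np.trace((np.eye(3) - F) @ Jd)

Om = np.array([0.4, -0.1, 0.25])             # sample angular velocity
h = 1e-3
F = expm_so3(h * hat(Om))
sym_ok = np.isclose(Td(F), Td(F.T))                         # self-adjointness
consistent = np.isclose(Td(F), 0.5 * h**2 * Om @ J @ Om, rtol=1e-4)
```

The symmetry holds exactly since $\tr{(F^T-F)J_d}=0$ for skew $F^T-F$ and symmetric $J_d$; the consistency error is higher than second order because the cubic term also vanishes in the trace.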
\begin{prop}\label{prop:DEL_Breg_SO3} When $\ensuremath{\mathsf{G}}=\ensuremath{\mathsf{SO(3)}}$, the Lie group variational integrator for the discrete Bregman Lagrangian \eqref{eqn:Ld} with \eqref{eqn:Td_SO3} is given by \begin{align} \mu_k & =\frac{\phi_{k,k+1}}{h_k } (F_k J_d - J_dF_k^T)^\vee + \frac{h_k\theta_k }{2} \nabla_\L \mathsf{f}_k, \label{eqn:muk_SO3} \\ \mu_{k+1} & = F_k^T\parenth{ \mu_k -\frac{h_k\theta_k}{2}\nabla_\L \mathsf{f}_k} - \frac{h_k\theta_{k+1}}{2} \nabla_\L \mathsf{f}_{k+1}, \label{eqn:mukp_SO3} \end{align} together with \eqref{eqn:Ekp_Breg}, \eqref{eqn:Rkp}, \eqref{eqn:Ek_Breg}, and \eqref{eqn:Td_SO3}. \end{prop} \begin{proof} Let $\delta F_k = F_k \hat\chi_k$. The derivative of \eqref{eqn:Td_SO3} is \begin{align*} \ensuremath{\mathbf{D}}_{F_k} T_{d_k} \cdot \delta F_k = \tr{-F_k\hat\chi_k J_d} = (J_dF_k - F_k^T J_d)^\vee \cdot \chi_k, \end{align*} where the last equality is from the identity, $\mathrm{tr}[-\hat x A]= x\cdot (A-A^T)^\vee$ for any $x\in\Re^3$ and $A\in\Re^{3\times 3}$. Thus, $\ensuremath{\mathsf{T}}^*_I \L_{F_k} (\ensuremath{\mathbf{D}}_{F_k} T_{d_k}) = (J_dF_k - F_k^T J_d)^\vee $. Substituting this into \eqref{eqn:muk_Breg} and \eqref{eqn:mukp_Breg} yields \eqref{eqn:muk_SO3} and \eqref{eqn:mukp_SO3}, respectively. \end{proof} To implement these, \eqref{eqn:muk_SO3} and \eqref{eqn:Ek_Breg} should be solved together for $h_k$ and $F_k$. For a given $h_k$, computational approaches to solve \eqref{eqn:muk_SO3} for $F_k$ are presented in~\cite[Sec 3.3.8]{Lee08}. When $J=I_{3\times 3}$, or equivalently when $J_d = \frac{1}{2}I_{3\times 3}$, \eqref{eqn:muk_SO3} can be solved explicitly to obtain \begin{align} F_k = \exp \left(\frac{\sin^{-1}\|a\|}{\|a\|}\hat a\right),\label{eqn:Fk_SO3} \end{align} where $a = \frac{h_k}{\phi_{k,k+1}} (\mu_k - \frac{h_k\theta_k}{2}\nabla_\L \mathsf{f}_k)\in\Re^3$. This can replace \eqref{eqn:muk_SO3}.
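The explicit case can be sketched in Python as follows. This is a minimal sketch under stated assumptions: $J=I_{3\times 3}$, $\lambda=1$, and a fixed step size $h$ (so the energy equations are not enforced, as in the fixed-step variant used in the experiments below); the left-trivialized gradient is supplied by the user, and all function names are ours. The update requires $\|a\|\leq 1$, which holds for sufficiently small $h$.

```python
import numpy as np

def hat(x):
    # hat map R^3 -> so(3)
    return np.array([[0.0, -x[2], x[1]],
                     [x[2], 0.0, -x[0]],
                     [-x[1], x[0], 0.0]])

def expm_so3(S):
    # Rodrigues' formula for the exponential of a skew-symmetric matrix
    v = np.array([S[2, 1], S[0, 2], S[1, 0]])
    th = np.linalg.norm(v)
    if th < 1e-12:
        return np.eye(3) + S
    return np.eye(3) + (np.sin(th)/th)*S + ((1.0 - np.cos(th))/th**2)*(S @ S)

p, C = 2, 1.0                          # Bregman parameters (lambda = 1 assumed)
phi = lambda t: t**(p + 1) / p         # phi(t) = t^{lambda p + 1}/p
theta = lambda t: C * p * t**(2*p - 1)

def elgvi_step(tk, Rk, muk, h, grad_f):
    """One explicit step for J = I and fixed step size h."""
    tm = tk + 0.5 * h                                  # midpoint t_{k,k+1}
    w = muk - 0.5 * h * theta(tk) * grad_f(Rk)
    a = (h / phi(tm)) * w                              # assumes ||a|| <= 1
    na = np.linalg.norm(a)
    Fk = expm_so3((np.arcsin(na) / na) * hat(a)) if na > 1e-14 else np.eye(3)
    Rk1 = Rk @ Fk
    muk1 = Fk.T @ w - 0.5 * h * theta(tk + h) * grad_f(Rk1)
    return tk + h, Rk1, muk1

# sanity check on a toy attitude objective f(R) = (1/2)||I - R||_F^2
def grad_f(R):
    G = R - R.T                        # (A^T R - R^T A) with A = I
    return np.array([G[2, 1], G[0, 2], G[1, 0]])

f = lambda R: 0.5 * np.linalg.norm(np.eye(3) - R, 'fro')**2
t, R, mu = 1.0, expm_so3(hat(np.array([0.0, 0.0, 1.0]))), np.zeros(3)
f0 = f(R)
for _ in range(2000):
    t, R, mu = elgvi_step(t, R, mu, 0.01, grad_f)
```

In this sanity check the cost decays substantially while $R_k$ remains orthogonal up to roundoff, since the update is a product of rotation matrices.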
\subsection{Product of $\Re^n$ and $\ensuremath{\mathsf{SO(3)}}$} Suppose $\ensuremath{\mathsf{G}}=\ensuremath{\mathsf{SO(3)}}\times \Re^n$. As it is the direct product of $\ensuremath{\mathsf{SO(3)}}$ and $\Re^n$, the variation of the action sum decomposes into an $\ensuremath{\mathsf{SO(3)}}$ part and an $\Re^n$ part. Therefore, the continuous Euler--Lagrange equations on $\ensuremath{\mathsf{SO(3)}}\times\Re^n$ are given by \eqref{eqn:EL_Re} and \eqref{eqn:EL_SO3}, after replacing $\nabla \mathsf{f}(x)$ of \eqref{eqn:EL_Re} with $\nabla_x \mathsf{f}(R,x)$, and replacing $\nabla_\L \mathsf{f}(R)$ of \eqref{eqn:EL_SO3} with $\ensuremath{\mathsf{T}}^*_I\L_R(\ensuremath{\mathbf{D}}_R\mathsf{f}(R,x))$. Similarly, the corresponding Lie group variational integrators are also given by \eqref{eqn:muk_Re}, \eqref{eqn:mukp_Re}, \eqref{eqn:muk_SO3}, and \eqref{eqn:mukp_SO3}, in addition to the energy equations \eqref{eqn:Ek_Breg} and \eqref{eqn:Ekp_Breg} with \begin{align*} T_{d_k}(F_k,\Delta x_k) = \frac{1}{2}\|\Delta x_k\|^2 + \tr{(I_{3\times 3}-F_k)J_d}. \end{align*} \section{Numerical Examples} \subsection{Optimization on $\ensuremath{\mathsf{SO(3)}}$} \label{sec:experimentA} Consider the objective function given by \begin{align} \mathsf{f} (R) & = \frac{1}{2}\| A- R\|^2_{\mathcal{F}} =\frac{1}{2}(\|A\|^2_{\mathcal{F}} + 3) - \tr{A^T R},\label{eqn:obj_SO3} \end{align} where $\|\cdot\|_{\mathcal{F}}$ denotes the Frobenius norm, and $A\in\Re^{3\times 3}$. Optimization of the above function appears in the least-squares estimation of attitude, referred to as Wahba's problem~\cite{WahSR65}. Let the singular value decomposition of $A$ be $A=USV^T$ for a diagonal $S\in\Re^{3\times 3}$ and $U,V\in\mathsf{O}(3)$. The optimal attitude is explicitly given by $R^* = U \mathrm{diag}[1,1,\mathrm{det}(UV)] V^T$. The left-trivialized gradient is $\nabla_\L \mathsf{f}(R) = (A^T R - R^T A)^\vee$.
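The closed-form minimizer and the left-trivialized gradient can be cross-checked numerically: at $R^*$, the matrix $A^TR^*$ is symmetric, so the gradient vanishes. A minimal Python sketch with a randomly generated $A$ (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, (3, 3))     # random A, as in the experiment

def f(R):
    # Wahba objective f(R) = (1/2)||A - R||_F^2
    return 0.5 * np.linalg.norm(A - R, 'fro')**2

def grad_f(R):
    # left-trivialized gradient (A^T R - R^T A)^vee
    G = A.T @ R - R.T @ A
    return np.array([G[2, 1], G[0, 2], G[1, 0]])

# closed-form minimizer via the SVD A = U S V^T
U, S, Vt = np.linalg.svd(A)
Rstar = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
```

Since $A^TR^* = VSDV^T$ with $D=\mathrm{diag}[1,1,\mathrm{det}(UV)]$ is symmetric, the first-order optimality condition holds exactly, and $R^*$ attains a cost no larger than any other rotation.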
\subsubsection{Order of Convergence} \begin{figure} \centerline{ \subfigure[convergence with respect to $t$]{\includegraphics[width=0.8\columnwidth]{comp_0}} } \centerline{ \subfigure[convergence with respect to $k$]{\includegraphics[width=0.8\columnwidth]{comp_1}} } \caption{Convergence rate of LGVI in \Cref{prop:DEL_Breg_SO3} for varying $p$}\label{fig:conv} \end{figure} First, we check if the theoretical order of convergence guaranteed by \Cref{prop:EL_Breg} is achieved by the discrete Euler--Lagrange equations presented in \Cref{prop:DEL_Breg}. The elements of the matrix $A$ in \eqref{eqn:obj_SO3} are randomly chosen from the uniform distribution on $[0,1]$. The initial guess of $R_0$ is chosen such that the initial error is $0.9\pi$ in terms of the Euler-axis rotation. Lie group variational integrators (LGVI) in \Cref{prop:DEL_Breg_SO3} are simulated with fixed $J=I_{3\times 3}$, $C=1$, and $h_0 = 0.1$ for varying $p\in\{2,4,6,8\}$. Since $J=I_{3\times 3}$, \eqref{eqn:muk_SO3} is replaced by \eqref{eqn:Fk_SO3}. The remaining implicit equation \eqref{eqn:Ek_Breg} is solved for $h_k$ via the Matlab equation solver, \texttt{lsqnonlin}, with a tolerance of $10^{-4}$. The initial guess for $h_k$ is provided by $h_{k-1}$. The resulting convergence rate, represented by $\mathsf{f}-\mathsf{f}^*$ over $t_k$, is illustrated in \Cref{fig:conv}.(a), where the empirical convergence rates computed by manual fitting are also marked. It is shown that LGVI empirically achieves an order of convergence greater than the theoretical guarantee of $\mathcal{O}(t^{-p})$. It has been reported that na\"\i ve discretizations of Bregman Lagrangian systems are not able to match the theoretical convergence rate, or that they may cause numerical instability~\cite{wibisono2016variational,betancourt2018symplectic}. These results suggest that LGVIs do not suffer from these discretization issues, and their performance is consistent with the continuous-time analysis.
Next, given that the step size $h_k$ is adjusted adaptively according to \eqref{eqn:Ek_Breg} and \eqref{eqn:Ekp_Breg}, it is likely that numerical simulation with higher $p$ requires a smaller step size. In fact, the average step sizes are given by $6.15\times10^{-2}$, $6.50\times 10^{-3}$, $4.89\times10^{-4}$ and $1.21\times 10^{-5}$, respectively, for $p\in\{2,4,6,8\}$. To examine the effects of the step size variations, the convergence with respect to the discrete time step is illustrated in \Cref{fig:conv}.(b). It turns out that all four cases of $p$ exhibit a similar order of long-term convergence, approximately $\mathcal{O}(k^{-2.3})$. This is not surprising, as Nesterov~\cite{nesterov2003introductory} showed that for every smooth first-order method, there exists a convex, $L$-smooth objective function such that the rate of convergence is bounded from below by $\mathcal{O}(k^{-2})$, but it does not preclude the possibility of faster rates of convergence for strongly convex functions. However, the case of higher $p$ benefits from faster initial convergence, and as a result, the terminal error for $p=4$ is more than 400 times smaller than that of $p=2$. \subsubsection{Effects of Initial Step Size} \begin{figure} \centerline{ \subfigure[convergence with respect to $t$]{\includegraphics[width=0.8\columnwidth]{comp_2a}} } \centerline{ \subfigure[evolution of step size $h_k$]{\includegraphics[width=0.8\columnwidth]{comp_2b}} } \caption{Convergence rate of LGVI in \Cref{prop:DEL_Breg_SO3} for varying $h_0$.} \label{fig:conv_h} \end{figure} As discussed at the end of \Cref{sec:Breg}, the extended LGVI requires choosing the initial step size $h_0$. Here, we study the effects of $h_0$ on the convergence. More specifically, the order is fixed to $p=4$, and the initial step size is varied as $h_0\in\{0.001, 0.005, 0.01, 0.1, 0.4\}$. The corresponding results are illustrated in \Cref{fig:conv_h}.
Interestingly, in \Cref{fig:conv_h}.(a), the convergence with respect to $t$ is not much affected by the initial step size $h_0$. Next, \Cref{fig:conv_h}.(b) presents the time-evolution of the step size, and it is shown that the step size computed by \eqref{eqn:Ek_Breg} decreases approximately as $\mathcal{O}(t^{-1.6})$ for all cases. This might have been caused by the fact that the forcing term in \eqref{eqn:EL_SO3} increases over time. Another notable feature is that after a certain period, the step sizes tend to converge. More specifically, the step size initialized by $h_0=0.001$ converges to $1.8\times 10^{-4}$ when $t>10$, which is joined by the case of $h_0=0.005$ later. It is expected that the next case, $h_0=0.01$, would follow a similar trend if the simulation time were increased. This suggests a certain stability property of the extended LGVI with respect to the step size. Furthermore, observe that for the wide range of step sizes presented in \Cref{fig:conv_h}.(b), the convergence in \Cref{fig:conv_h}.(a) is fairly consistent, which suggests that the LGVI is robust to the choice of step size. \subsubsection{Comparison with Other Discretizations of Bregman Euler--Lagrange Equation} \begin{figure} \centerline{ \subfigure[convergence with respect to $t$]{\includegraphics[width=0.8\columnwidth]{comp_3a}} } \centerline{ \subfigure[orthogonality error of $R_k$]{\includegraphics[width=0.8\columnwidth]{comp_3b}} } \caption{Comparison with other discretization schemes for Bregman Euler--Lagrange equation}\label{fig:comp_disc} \end{figure} Next, we compare LGVI with other discretization schemes applied to \eqref{eqn:EL_SO3} and \eqref{eqn:R_dot}.
Three methods are considered, namely the splitting approach introduced in \cite{tao2020variational} applied to the proposed continuous dynamics (abbreviated as SPLT), a fourth-order fixed-step Runge--Kutta method (RK4), and a variable stepsize Runge--Kutta method (RK45) implemented by the Matlab \texttt{ode45} function with a tolerance of $10^{-8}$. More precisely, the evolution of SPLT over step size $h$ is written as $\phi_{h/2} \circ \psi_h \circ \phi_{h/2}$, where $\phi_t$ is the exact flow map of \eqref{eqn:R_dot} with fixed $\Omega$, and $\psi_t$ is the exact $t$-time flow map of \eqref{eqn:EL_SO3} with fixed $R$ and $J=I_{3\times 3}$. The goal of this comparison is not to claim that a certain method is superior to the others. Rather, it is to identify the numerical properties of LGVI compared with other schemes. That said, LGVI is implicit, and \eqref{eqn:Ek_Breg} is solved by a general purpose nonlinear solver, instead of a numerical solver tailored for \eqref{eqn:Ek_Breg}. As a consequence, LGVI is substantially slower than the three explicit methods, to the extent that the comparison is not meaningful. Instead, for a more interesting comparison, we exploit the fact that LGVI provides consistent results over a wide range of step sizes, and we only utilize \eqref{eqn:muk_SO3} and \eqref{eqn:mukp_SO3} with a fixed prescribed step size. The resulting scheme, denoted by ELGVI, is explicit as shown in \eqref{eqn:Fk_SO3}. Overall, ELGVI is quite comparable with SPLT, but it benefits from slightly faster initial convergence, especially when $p$ is larger and $h$ is smaller. One particular case, for $p=6$ and $h=0.001$, is illustrated in \Cref{fig:comp_disc}.(a). With regard to RK4 and RK45, their convergence is almost identical to that of ELGVI, but as presented in \Cref{fig:comp_disc}.(b), those methods do not preserve the orthogonality of the rotation matrix, which is problematic. In contrast, both LGVI and SPLT preserve the structure of rotation matrices.
Next, the computation times with an Intel Core i7 3.2GHz, averaged over 10 executions, are 0.0727, 0.0258, 0.3847, and 1.1476 seconds for ELGVI, SPLT, RK4, and RK45, respectively. RK4 is expected to require more computation time because the gradient must be evaluated four times per step, and it appears that the time-adaptive RK45 algorithm requires even more frequent gradient evaluations. \subsubsection{Comparison with Other Optimization Schemes on Lie Groups} \begin{figure} \centerline{ \includegraphics[width=0.8\columnwidth]{comp_4} } \caption{Comparison with other accelerated optimization schemes on Lie groups}\label{fig:comp_accel} \end{figure} Finally, we compare ELGVI with other optimization schemes on Lie groups. In particular, we consider variationally accelerated Lie-group methods based on the NAG variational principle and operator splitting \cite{tao2020variational}, referred to as Lie-NAG-SC and Lie-NAG-C, which are conformally symplectic and group-structure preserving. Note that Lie-NAG-C corresponds to SPLT with $p=2$. Four cases are considered, as marked in \Cref{fig:comp_accel}, for varying $p$ and $h$. Compared with Lie-NAG-C, ELGVI exhibits faster convergence at a higher order. This does not contradict Nesterov's oracle lower bound: the continuous Bregman dynamics with $p>2$ should be discretized with smaller steps as $t$ increases, and therefore the asymptotic order of convergence is still $\mathcal{O}(1/k^2)$, as illustrated above. However, since ELGVI uses a fixed stepsize, the initial error can decay faster than inverse quadratic, and depending on the level of accuracy required, we can take advantage of this by employing early stopping. On the other hand, Lie-NAG-SC demonstrates exponential convergence asymptotically when applied to strongly convex functions. Overall, if moderate stopping criteria are employed, ELGVI may be preferred, as it exhibits the fastest initial decay of the cost function.
\subsection{Optimization on $\ensuremath{\mathsf{SO(3)}}\times \Re^3$} Next, we present an optimization problem on $\ensuremath{\mathsf{SO(3)}}\times\Re^3$ to estimate the position and the attitude of a camera using the KITTI vision benchmark dataset~\cite{Geiger2013IJRR}. This verifies the performance of ELGVI for a non-convex function on a higher-dimensional Lie group, with more relevance to engineering practice. More specifically, we consider $N=516$ distinct features on a single image frame, where their 2D pixel coordinates in the image plane and their actual 3D locations in the world coordinates are given in homogeneous coordinates by $p^i\in\Re^3$ and $P^i\in\Re^4$, respectively. Assuming that the camera calibration matrix $K\in\Re^{3\times 3}$ is also known, we wish to estimate the pose $(R,x)\in\ensuremath{\mathsf{SO(3)}}\times \Re^3$ of the camera. This is formulated as an optimization problem to minimize the reprojection error, which is the discrepancy between the actual pixel locations of the features and the features projected to the image plane by the current estimate of $(R,x)$~\cite{ma2012invitation}. For example, let $\tilde p^i\in\Re^3$ be the homogeneous coordinates for the feature corresponding to $P^i$ projected to the image plane by $(R,x)$. From the perspective camera model, \begin{align*} \lambda \tilde p^i = K[R, x]P^i, \end{align*} for some $\lambda >0$. The corresponding reprojected pixel is determined by the dehomogenization of $\tilde p^i$, namely $H^{-1}(\tilde p^i)\in\Re^2$, i.e., the first two elements of $\tilde p^i$ divided by the last element. The objective function is the sum of the reprojection errors, given by \begin{align} \mathsf{f} (R,x) = \sum_{i=1}^N \| H^{-1}(p^i) - H^{-1}(\tilde p^i)\|^2.\label{eqn:obj_SE3} \end{align} \Cref{fig:comp_6} presents the optimization results by ELGVI, which are comparable to the benchmark examples presented for $\ensuremath{\mathsf{SO(3)}}$.
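The reprojection objective \eqref{eqn:obj_SE3} can be sketched in a few lines (a Python illustration; the calibration matrix and the points are toy values, not taken from the KITTI dataset):

```python
import numpy as np

def dehom(p):
    # H^{-1}: divide the leading components by the last one.
    return p[:-1] / p[-1]

def reprojection_error(R, x, K, P_world, p_pixels):
    # Sum of squared pixel discrepancies between detected features p^i and
    # the world points P^i projected by the pose estimate (R, x).
    Rt = np.hstack([R, x.reshape(3, 1)])  # 3x4 pose matrix [R, x]
    err = 0.0
    for P, p in zip(P_world, p_pixels):
        p_proj = K @ (Rt @ P)             # lambda * p_tilde
        err += np.sum((dehom(p) - dehom(p_proj)) ** 2)
    return err

# Toy calibration and two toy world points (homogeneous coordinates).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R_true, x_true = np.eye(3), np.zeros(3)
P_world = np.array([[0.1, -0.2, 2.0, 1.0],
                    [-0.3, 0.1, 3.0, 1.0]])
Rt_true = np.hstack([R_true, x_true.reshape(3, 1)])
p_pixels = np.array([K @ (Rt_true @ P) for P in P_world])

print(reprojection_error(R_true, x_true, K, P_world, p_pixels))  # → 0.0
```

With the true pose, the reprojection error vanishes; perturbing $(R,x)$ makes it strictly positive, which is what the optimizer minimizes.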
However, the terminal phase is relatively noisy, partially because the gradients of \eqref{eqn:obj_SE3} are evaluated numerically with a finite-difference rule. \Cref{fig:KITTI} illustrates the reprojected features before and after the optimization. \begin{figure} \centerline{ \includegraphics[width=0.8\columnwidth]{comp_6a} } \caption{Optimization on $\ensuremath{\mathsf{SO(3)}}\times\Re^3$: convergence with respect to $k$}\label{fig:comp_6} \end{figure} \begin{figure} \centerline{ \subfigure[Initial guess $(R_0,x_0)$]{\includegraphics[width=1.0\columnwidth]{comp_6b.eps}} } \centerline{ \subfigure[Optimized $(R^*,x^*)$]{\includegraphics[width=1.0\columnwidth]{comp_6c}} } \caption{Reprojection error: the red $+$ markers denote the key points detected, and the yellow $+$ markers represent the key points projected by the estimated pose. The paired features are connected by solid lines. }\label{fig:KITTI} \end{figure} \section{Conclusions} In this paper, we proposed a Lie group variational integrator for the Bregman Lagrangian dynamics on Lie groups, to construct an accelerated optimization scheme. The variable stepsize prescribed by the extended variational principle exhibits an interesting convergence property, and the variational discretization is robust to the initial stepsize. It would be interesting to explore the role of variable time-stepping in geometric discretizations of the Bregman dynamics especially compared with Hamiltonian variational integrators.
\section{Introduction: DSSV global analysis} Helicity-dependent parton densities (PDFs) tell us precisely how much quarks and gluons with a given momentum fraction $x$ tend to have their spins aligned with the spin direction of a nucleon in a helicity eigenstate. Their knowledge is essential in the quest to answer one of the most basic questions in hadronic physics, namely how the spin of a nucleon is composed of the spins and orbital angular momenta of its constituents. More than a dozen experiments have measured with increasing precision various observables sensitive to different combinations of quark and gluon polarizations in the nucleon. The experimental progress was matched by advancements in corresponding theoretical higher order calculations in the framework of pQCD and phenomenological analyses of available data \cite{ref:dssv,ref:otherfits}. The most comprehensive global fits \cite{ref:dssv} include data taken in spin-dependent DIS, semi-inclusive DIS (SIDIS) with identified pions and kaons, and proton-proton collisions. They allow for extracting sets of helicity PDFs consistently at next-to-leading order (NLO) accuracy along with estimates of their uncertainties. Contributions from the orbital angular momenta of quarks and gluons completely decouple from such type of experimental probes and need to be quantified by other means. One important asset in the DSSV global analysis framework is the use of a numerically fast Mellin moment technique \cite{Stratmann:2001pb,ref:dssv} which allows one to incorporate complicated NLO expressions for $pp$ processes \cite{ref:polnlo} without any approximations. Unlike unpolarized PDF fits, where a separation of different quark flavors is obtained from inclusive DIS data taken with neutrino beams, differences in polarized quark and antiquark densities are at present determined exclusively from SIDIS data and hence require knowledge of fragmentation functions (FFs). 
Reliable sets of FFs at NLO accuracy have been extracted in global fits to inclusive hadron yields in $e^+e^-$, $ep$, and $pp$ collisions \cite{ref:dss}. Even though pion FFs are rather well constrained by data, the corresponding kaon FFs suffer from larger uncertainties, which complicates current extractions of $\Delta s(x)$. \section{Recent DIS and SIDIS data} \begin{figure} \includegraphics[width=.4\textwidth]{sidis-compass-d} \includegraphics[width=.4\textwidth]{sidis-compass-p} \caption{\label{fig:newsidis} COMPASS results~\cite{Alekseev:2009ci,Alekseev:2010ub} for SIDIS spin asymmetries on a deuteron (left) and proton target (right) compared to DSSV and DSSV+ fits (see text).} \end{figure} Recently, the COMPASS collaboration has published new DIS~\cite{Alekseev:2010hc} and SIDIS~\cite{Alekseev:2009ci,Alekseev:2010ub} data. The latter extend the coverage in $x$ down to about $x\simeq 5\times10^{-3}$, almost an order of magnitude lower than the kinematic reach of the HERMES data used in the DSSV global analysis of 2008~\cite{ref:dssv}. For the first time, the new results comprise measurements of identified pions and kaons taken with a longitudinally polarized proton target. Clearly, these data can have a significant impact on fits of helicity PDFs and estimates of their uncertainties. In particular, the new kaon data will serve as an important check of the validity of the strangeness density obtained in the DSSV analysis, which, instead of favoring a negative polarization as in most fits based exclusively on DIS data, prefers a vanishing or perhaps even slightly positive $\Delta s$ in the measured range of $x$ (see below). The new data for the inclusive spin asymmetry $A_1^p$ appear to be well described by the original DSSV set of helicity PDFs, yielding a $\chi^2/\mathrm{d.o.f.}\approx 1$.
Figure~\ref{fig:newsidis} shows a detailed comparison between the new SIDIS spin asymmetries from COMPASS~\cite{Alekseev:2009ci,Alekseev:2010ub} and the original DSSV fit (dashed lines). Also shown is the result of a re-analysis at NLO accuracy (denoted as ``DSSV+'') based on the updated data set. The differences between the DSSV and the DSSV+ fits are hard to notice for both identified pions and kaons. The total $\chi^2$ of the fit drops only by a few units upon refitting, which is not really a significant improvement for a PDF analysis in view of non-Gaussian theoretical uncertainties. The change in $\chi^2$ is also well within the maximum $\Delta \chi^2/\chi^2=2\%$ tolerated as a faithful, albeit conservative estimate of PDF uncertainties within the DSSV global analysis \cite{ref:dssv}. At first sight, it may seem that the new SIDIS data have very little impact on the fit. This is not the case if one studies individual $\chi^2$ profiles in more detail. Compared to the original DSSV fit, we find a trend towards smaller net polarization for $\Delta \bar{u}$ and $\Delta \bar{d}$ in the range $0.001\le x \le 1$. In addition, one finds a significant reduction in the uncertainties, as determined by the width of the $\chi^2$ profiles at a given $\Delta \chi^2$. There is, however, some mild tension with older SIDIS sets, but this is well within the tolerance of the fit and most likely caused by the different $x$ ranges covered by the different data sets. \subsection{Constraining the strangeness helicity density} A much debated feature of the strangeness helicity PDF obtained in the DSSV fit is its unexpectedly small value at medium-to-large $x$, which, when combined with a node at intermediate $x$, still allows for a significantly negative first moment at small $x$, in accordance with expectations from SU(3) symmetry and fits to DIS data only.
To investigate the possibility of a node in $\Delta s(x)$ further, we present in Fig.~\ref{fig:profiles-s} the $\chi^2$ profiles for two different intervals in $x$: (a) $0.02\le x \le 1$ and (c) $0.001\le x \le 0.02$. The middle panel (b) demonstrates the impact and consistency of kaon data from HERMES and COMPASS in constraining $\Delta s(x)$ in the region $0.02\le x \le 1$. \begin{figure} \includegraphics[width=.6\textwidth]{strangeness-profiles-2011} \vspace*{-0.5cm} \caption{\label{fig:profiles-s} (a), (c): $\chi^2$ profiles for the truncated first moment of $\Delta s$ in two different $x$ intervals. (b): impact of kaon data from COMPASS and HERMES in the range $0.02\le x \le 1$.} \end{figure} The profiles in Fig.~\ref{fig:profiles-s} clearly show that the result for $\Delta s$ for $0.001\le x \le 0.02$ is a compromise between DIS and SIDIS data, the latter favoring less negative values. For $0.02\le x \le 1$, everything is determined by SIDIS data, and all sets consistently ask for a small, slightly positive strange quark polarization. There is no hint of a tension with DIS data here, as they do not provide a useful constraint at medium-to-large $x$. We note that at low $x$, most SIDIS sets in the original DSSV fit show no clear preference. The new COMPASS data, which extend towards the smallest $x$ values so far, actually show some preference for a slightly negative value of $\Delta s$. We also notice that in the range $x>0.001$ the hyperon decay constants, the so-called $F$ and $D$ values, do not play a significant role in constraining $\Delta s(x)$. To quantify possible SU(3) breaking effects, one needs to probe $\Delta s(x)$ at smaller values of $x$, for instance in SIDIS at a future EIC \cite{Boer:2011fh}. Clearly, all current extractions of $\Delta s$ from SIDIS data suffer from a significant dependence on kaon FFs, see, e.g., Refs.~\cite{Alekseev:2009ci,Alekseev:2010ub}, and better determinations of $D^K(z)$ are highly desirable.
Contrary to other fits of FFs \cite{:2008afa}, only the DSS sets \cite{ref:dss} provide a satisfactory description of pion and kaon multiplicities in the same kinematic range where we have polarized SIDIS data. \section{Outlook} Existing experiments, like PHENIX and STAR at RHIC, will continue to add data in the next couple of years. Preliminary single-inclusive jet data from STAR presented at this conference exhibit a non-zero double-spin asymmetry $A_{\mathrm{LL}}$ in the covered range of transverse momenta $p_T$ \cite{ref:star09}. Measurements of $A_{\mathrm{LL}}$ for di-jet correlations \cite{ref:dijet} should help to improve the current constraints on $\Delta g(x)$ and extend them towards somewhat smaller values of $x$. As soon as these data sets are finalized, they will be incorporated into our global analysis framework, and their implications for $\Delta g(x)$ will be studied in detail. Parity-violating single-spin asymmetries for $W$ boson production from RHIC should reach a level where they help to constrain $\Delta u$, $\Delta \bar{u}$, $\Delta d$, and $\Delta \bar{d}$ at moderately large $x$, $0.07\le x \le 0.4$, at scales $Q\simeq M_W$, much larger than typically probed in SIDIS \cite{deFlorian:2010aa}. The strangeness polarization is, however, very hard to access in polarized $pp$ collisions. In the future, JLab12 will add very precise DIS data at large $x$, which will allow one to challenge ideas like helicity retention, predicting that $\Delta f(x)/f(x)\rightarrow 1$ as $x\rightarrow 1$. Most of the remaining open questions concerning helicity PDFs are related to their behavior at small $x$ and can only be addressed at a future high-energy polarized electron-proton collider. At an EIC, the gluon polarization can be determined precisely from studies of DIS scaling violations \cite{Boer:2011fh}.
A full flavor decomposition down to about $x\simeq 10^{-4}$, including $\Delta s(x)$ and $\Delta \bar{s}(x)$, should be possible by studying the semi-inclusive production of pions and kaons. If necessary, unpolarized hadron multiplicities will help to constrain FFs better. An EIC also has the unique opportunity to access polarized electroweak structure functions via charged- and neutral-current DIS measurements. These novel probes constrain various combinations of polarized quark PDFs. \vspace*{0.1cm} The research of M.S.\ is supported by the U.S.\ Department of Energy under Contract No. DE-AC02-98CH10886. \bibliographystyle{aipproc}
\section*{Introduction} \label{sec-1} Membrane proteins are fundamental for life, and their structures and dynamics are essential for their biological functions. About 30 \% of the proteins encoded in genomes are estimated by bioinformatics to be membrane proteins.\cite{krogh2001predicting,sawada2012biological} Although membrane-protein folding has been studied extensively by experiments,\cite{fiedler_protein_2010,von_heijne_introduction_2011} only about 2 \% of all known structures in the PDB are of membrane proteins, because the biomembrane environment makes crystallization very difficult.\cite{pdbtm1,pdbtm2,mpdb,opm,pdbmpks} Thus, simulation studies are becoming increasingly important (for previous simulations, see, for instance, \cite{taylor1994method,suwa1995continuum,adams1996improved,pappu1999potential,hirokawa2000triangle,vaidehi2002prediction,bu2007membrane,miyashita2009transmembrane,leguebe2012hybrid}). However, simulations often suffer from insufficient sampling, and efficient sampling methods such as generalized-ensemble algorithms and/or a reduction of the system size are required. In particular, the replica-exchange method (REM) \cite{rem1,swendsen1986replica,rem3,rem4} and its extensions are often used among generalized-ensemble algorithms because of their efficiency, ease of parallelization, and usability (for reviews, see, e.g., Refs.~\cite{mitsutake_generalizedensemble_2001,rem_iba2001extended}).
One useful approach to reduce system sizes is to employ an implicit membrane model, which mimics some elements of membrane properties, such as the dielectric profile, chain order, pressure profile, and intrinsic curvature, by parameters for the electrostatic solvent free energy.\cite{eef1lazaridis_effective_1999,gbim_implicit_2003,hdgb1tanizaki_generalized_2005} While these methods are mainly based on the free energy difference between solvent and solute, a simpler implicit membrane model was introduced previously, in which transmembrane helices keep a helix structure and are always restricted within the membrane region during folding, which greatly reduces the conformational search space during folding processes.\cite{Kokubo2004397} This model assumed that the native structure of membrane proteins can be predicted by helix-helix interactions between transmembrane helices with fixed helix structures, and that the membrane environment constrains the regions where helices can exist (namely, within the membrane) and stabilizes transmembrane helix structures. This model is supported by many experimental data, such as those leading to the two-stage model, in which each helix structure is formed first, and the helices then aggregate with each other by helix-helix packing to reach the native conformation in membrane-protein folding (for a review, see Ref.~\cite{popot_h_2000}). The previous method \cite{Kokubo2004397,kokubo_prediction_2004,kokubo_classification_2004,Kokubo2004168,kokubo_analysis_2009} could predict the native structures by REM simulations using known native helix structures (for a review, see Ref.~\cite{kokubo_replica-exchange_2006}). However, if the native structure consists of distorted helix structures, the previous prediction method will not work, because the method treated helix structures as rigid bodies.
It is known from experimental structures in the PDB that about 25 \% of all transmembrane helices are distorted or bent.\cite{hall2009position} Therefore, in this article, we propose a new treatment of helix structures that takes helix distortions and kinks into account instead of treating helices as rigid bodies. We tested our new method for predicting native structures. Our test systems consist of a case with only ideal helix structures and one with a distorted helix structure. This article is organized as follows. In Section 2, we explain the details of our methods. The potential energy function used for our new models and the method to introduce helix kinks are described. In Section 3, we show the results of the REM simulations applied to glycophorin A and phospholamban. After we check that the REM simulations were properly performed, the free-energy minimum states are identified by the principal component analysis. Finally, Section 4 is devoted to the conclusions. \section*{Methods} \label{sec-2} \subsection*{Simulation details} \label{sec-2-1} We first review our previous method.\cite{Kokubo2004397,kokubo_prediction_2004,kokubo_classification_2004,Kokubo2004168,kokubo_analysis_2009} Only the transmembrane helices are used in our simulations, and loop regions of membrane proteins as well as lipid and water molecules are neglected. Our assumptions are that a role of water is to push the hydrophobic transmembrane regions of membrane proteins into the lipid bilayer and that a major role of lipid molecules is to prepare a hydrophobic environment and construct helix structures in the transmembrane regions. Loop regions of membrane proteins are often outside the membrane, and we assume that they do not directly affect the structure of transmembrane regions. Due to the difference in surface shapes of helices and lipids, the stabilization energy for helix-helix packing will be larger than that for helix-lipid packing.
Therefore, water, lipids, and loop regions of proteins are not treated explicitly in our simulations, although the features of membrane boundaries are taken into account by the constraint conditions below. We update configurations with a rigid translation and rotation of each $\alpha$-helix and torsion rotations of side-chains by Monte Carlo (MC) simulations. We use the MC method, although molecular dynamics could also be used in principle. There are 2$N_{\rm H}$ + $N_{\rm SD}$ kinds of MC move sets, where $N_{\rm H}$ is the total number of transmembrane helices in the protein, and $N_{\rm SD}$ is the total number of dihedral angles in the side-chains of the $N_{\rm H}$ helices. We add the following three elementary harmonic constraints to the original potential energy function. The constraint function is given by \begin{eqnarray} E_{\rm constr} = E_{\rm constr1} + E_{\rm constr2} + E_{\rm constr3}, \label{const-ene} \end{eqnarray} where each term on the right-hand side is defined as follows: \begin{eqnarray} E_{\rm constr1} = \sum_{i=1}^{N_{\rm H}-1} k_1~ \theta \left( r_{i,i+1}-d_{i,i+1} \right) \left[ r_{i,i+1}-d_{i,i+1} \right]^2, \label{const-ene1} \end{eqnarray} \begin{eqnarray} E_{\rm constr2} &= \displaystyle{\sum_{i=1}^{N_{\rm H}}} \left\{ k_2~ \theta \left( \left| z^{\rm L}_{i}-z^{\rm L}_{0} \right| -d^{\rm L} \right) \left[ \left| z^{\rm L}_{i}-z^{\rm L}_{0} \right| -d^{\rm L}\right]^2\right. \nonumber \\ &+\left.k_2~ \theta \left( \left| z^{\rm U}_{i}-z^{\rm U}_{0} \right| -d^{\rm U} \right) \left[ \left| z^{\rm U}_{i}-z^{\rm U}_{0} \right| -d^{\rm U} \right]^2 \right\}, \label{const-ene2} \end{eqnarray} \begin{eqnarray} E_{\rm constr3} = \sum_{{\rm C}_{\alpha}} k_3~ \theta \left( r_{{\rm C}_{\alpha}}-d_{{\rm C}_{\alpha}} \right) \left[ r_{{\rm C}_{\alpha}}-d_{{\rm C}_{\alpha}} \right]^2.
\label{const-ene3} \end{eqnarray} $E_{\rm constr1}$ is the energy that constrains pairs of helices adjacent along the amino-acid chain not to move too far apart from each other (loop constraints). $r_{i,i+1}$ is the distance between the C atom of the C-terminus of the $i$-th helix and the C$^{\alpha}$ atom of the N-terminus of the $(i+1)$-th helix, $k_1$ and $d_{i,i+1}$ are the force constant and the central value constant of the harmonic constraints, respectively, and $\theta(x)$ is the step function: \begin{eqnarray} \theta(x)=\left\{ \begin{array}{ll} 1~, & {\rm for} ~x \geq 0~, \\ 0~, & {\rm otherwise}~. \\ \end{array} \right. \label{step-func} \end{eqnarray} This term has a non-zero value only when the distance $r_{i,i+1}$ becomes longer than $d_{i,i+1}$. Because of this constraint term, only structures in which the distance between helices neighboring in the amino-acid sequence remains short are sampled. $E_{\rm constr2}$ is the energy that constrains the helix N-terminus and C-terminus to be located near the membrane boundary planes. Here, the z-axis is defined to be the direction perpendicular to the membrane boundary planes. $k_2$ is the force constant of the harmonic constraints. $z^{\rm L}_{i}$ and $z^{\rm U}_{i}$ are the z-coordinate values of the C$^{\alpha}$ atom of the N-terminus or C-terminus of the $i$-th helix near the fixed lower membrane boundary and the upper membrane boundary, respectively. $z^{\rm L}_0$ and $z^{\rm U}_0$ are the fixed z-coordinate values of the lower and upper membrane boundary planes, respectively. $d^{\rm L}$ and $d^{\rm U}$ are the corresponding central value constants of the harmonic constraints. This term has a non-zero value only when the C$^{\alpha}$ atom of the N-terminus or C-terminus of the $i$-th helix is farther than $d^{\rm L}$ (or $d^{\rm U}$) from the corresponding boundary plane. This constraint energy was introduced so that the helix ends do not stray too far from the membrane boundary planes.
$E_{\rm constr3}$ is the energy that constrains all C$^{\alpha}$ atoms within the sphere (centered at the origin) of radius $d_{{\rm C}_{\alpha}}$. $r_{{\rm C}_{\alpha}}$ is the distance of a C$^{\alpha}$ atom from the origin, and $k_3$ and $d_{{\rm C}_{\alpha}}$ are the force constant and the central value constant of the harmonic constraints, respectively. This term has a non-zero value only when C$^{\alpha}$ atoms go out of this sphere and is introduced so that the center of mass of the molecule stays near the origin. The radius of the sphere $d_{{\rm C}_{\alpha}}$ is set to a large value in order to guarantee that a wide conformational space is sampled. These constraints constitute a simple implicit membrane model which mimics the membrane environment during membrane-protein folding. Moreover, all constraints limit the conformational space of the protein to improve sampling and are useful when computational resources are limited. In summary, this procedure is consistent with the two-stage model, and it assumes that side-chain flexibility is essential in folding. Because the backbone structures of the main chain are treated as rigid bodies in the previous method, the method cannot be applied if transmembrane helices are distorted. However, most transmembrane helix structures in the PDB are distorted or bent. We, therefore, need to treat the deformations of backbone helix structures during simulations. Namely, the $\phi$ and $\psi$ torsion rotations and concerted rotations of the backbone are used in the Monte Carlo move sets to reproduce the distorted helix structures of experimental structures from the initial ideal helix structures.
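All three constraint terms share the same one-sided (flat-bottom) harmonic form $k\,\theta(r-d)(r-d)^2$, which vanishes inside the allowed region and grows quadratically outside it. A minimal Python sketch (the numerical values are only for illustration, taken from the loop-constraint parameters quoted in the simulation conditions):

```python
def flat_bottom(r, d, k):
    # One-sided harmonic penalty k * theta(r - d) * (r - d)^2:
    # zero while r <= d, quadratic once the bound is exceeded.
    excess = r - d
    return k * excess ** 2 if excess > 0.0 else 0.0

# Loop-constraint values from the simulation conditions, for illustration:
# k1 = 5.0 (kcal/mol)/A^2 and d_{i,i+1} = 30.0 A.
k1, d = 5.0, 30.0
print(flat_bottom(25.0, d, k1))  # inside the allowed region -> 0.0
print(flat_bottom(32.0, d, k1))  # 2 A beyond the bound -> 5.0 * 2^2 = 20.0
```

Because the penalty is exactly zero inside the allowed region, the constraints leave the physical interactions untouched there and only restrict the search space at its boundary.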
Here, we also update configurations with rotations of backbone torsion angles by directional manipulation and concerted rotation.\cite{dinner_local_2000,go_ring_1970,dodd_concerted_1993,wedemeyer1999exact,coutsias2004kinematic} There are now 2$N_{\rm H}$ + $N_{\rm SD}$ + $N_{\rm BD}$ + $N_{\rm CR}$ kinds of MC moves, where $N_{\rm BD}$ is the total number of $(\phi, \psi)$ torsion angles in the helix backbones, and $N_{\rm CR}$ is the total number of combinations of seven successive backbone torsion angles used by the concerted rotation in the helix backbones. One MC step in this article is defined to be an update of one of these degrees of freedom, which is accepted or rejected according to the Metropolis criterion. In order to keep the helix conformations with distortions, we introduce a fourth constraint term as follows: \begin{eqnarray} E_{\rm constr} = E_{\rm constr1} + E_{\rm constr2} + E_{\rm constr3} + E_{\rm constr4}, \label{const-ene_new} \end{eqnarray} \begin{equation} \begin{split} E_{\rm constr4} &= \sum_{j=1}^{ N_{BD}} k_4 \theta(\mid \phi_j^{} - \phi_0 \mid -\alpha_j^{\phi}) (\mid \phi_j^{} - \phi_0\mid -\alpha_j^{\phi})^2 \\ & +\sum_{j=1}^{ N_{BD}} k_5 \theta(\mid \psi_j^{} - \psi_0 \mid -\alpha_j^{\psi}) (\mid \psi_j^{} - \psi_0\mid -\alpha_j^{\psi})^2, \label{const-ene4} \end{split} \end{equation} where $E_{\rm constr4}$ is the newly introduced energy term that constrains the main-chain dihedral angles of bent or kinked helix structures around ideal helix values and prevents them from bending or distorting too much. $\phi_j^{}$ and $\psi_j^{}$ are the main-chain torsion angles of the $j$-th residue. $\phi_0$ and $\psi_0$ are the fixed reference values of the harmonic constraint, $k_4$ and $k_5$ are the force constants, and $\alpha_j^{\phi}, \alpha_j^{\psi}$ are the central values of the harmonic constraint. We now explain the replica-exchange method briefly. This method prepares $M$ non-interacting replicas at $M$ different temperatures.
While a conventional canonical MC simulation is performed for each replica, an exchange of temperatures between pairs of replicas is attempted at a fixed interval based on the following Metropolis criterion. Let the label $i$ (=1, $\cdots$, $M$) correspond to the replica index and the label $m$ (=1, $\cdots$, $M$) to the temperature index. We represent the state of the entire system of $M$ replicas by $X = \left\{x_{m(1)}^{[1]} , \cdots, x_{m(M)}^{[M]} \right\}$, where $x_m^{[i]} =\left\{q^{[i]}\right\}$ is the set of coordinates of replica $i$ (at temperature $T_m$), and $m=m(i)$ is the permutation of $i$. The Boltzmann-like probability distribution for state $X$ is given by \begin{equation} W_{{\rm REM}}(X)=\prod_{i=1}^M \exp{[-\beta_{m(i)} E(q^{[i]})]}. \end{equation} We consider exchanging a pair of temperatures $T_m$ and $T_n$, corresponding to replicas $i$ and $j$: \begin{eqnarray} X = &\left\{ \cdots, x_m^{[i]} , \cdots, x_n^{[j]}, \cdots \right\} \rightarrow \nonumber \\ & X^\prime = \left\{ \cdots, x_m^{[j]} , \cdots, x_n^{[i]}, \cdots \right\} . \end{eqnarray} The transition probability $\omega (X\rightarrow X^\prime)$ of the Metropolis criterion is given by \begin{eqnarray} \omega (X\rightarrow X^\prime ) &\equiv \omega (x_m^{[i]} \mid x_n^{[j]}) \nonumber \\ &= {\rm min}\left(1, \frac{W_{{\rm REM}} (X^\prime)}{W_{{\rm REM}} (X)}\right) \nonumber \\ &= {\rm min}(1,\exp(- \Delta )) , \end{eqnarray} where $\Delta = (\beta _m - \beta_n ) (E(q^{[j]}) - E(q^{[i]}) )$. Because each replica visits various temperatures through replica exchanges, the REM realizes a random walk in temperature space during the simulation.
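The temperature-exchange step can be sketched as follows (a minimal Python illustration of the Metropolis criterion above, not the CHARMM-based implementation used in this work):

```python
import numpy as np

rng = np.random.default_rng(0)

def attempt_swap(beta_m, beta_n, E_i, E_j):
    # Metropolis criterion for exchanging temperatures T_m and T_n between
    # replicas i and j: accept with probability min(1, exp(-Delta)),
    # where Delta = (beta_m - beta_n) * (E_j - E_i).
    delta = (beta_m - beta_n) * (E_j - E_i)
    if delta <= 0.0:
        return True  # always accepted
    return rng.random() < np.exp(-delta)

# Example: a cold replica (beta_m = 1.0) with high energy and a hot replica
# (beta_n = 0.5) with low energy always swap, which drives the random walk
# in temperature space.
print(attempt_swap(1.0, 0.5, 5.0, 1.0))  # → True
```

In practice, exchanges are attempted only between neighboring temperatures, since the acceptance probability decays quickly with the gap in $\beta$.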
Expectation values of physical quantities are given as functions of temperature by solving the multiple-histogram reweighting equations.\cite{ferrenberg_optimized_1989,kumar_weighted_1992} The density of states $n(E)$ and the dimensionless Helmholtz free energy $f_m$ are obtained by solving the following equations iteratively: \begin{equation} n(E) = { \frac{\sum\limits_{m=1}^{M}N_m(E)}{\sum\limits_{m=1}^{M}n_m e^{f_m-\beta _m E}}}, \end{equation} and \begin{equation} e^{-f_m}= \sum_{E}n(E) e^{-\beta _m E}, \end{equation} where $N_m (E)$ and $n_m$ are the energy histogram and the total number of samples obtained at temperature $T_m$, respectively. After we obtain $f_m$ at each temperature, the expectation value of a physical quantity $A$ at any temperature $T$ is given by \cite{wham3} \begin{equation} <A>_T = \frac{\sum\limits_{m=1}^{M} \sum\limits_{x_m} A(x_m) \frac{1}{\sum\limits_{l=1}^{M} n_l \exp{(f_l - \beta _l E(x_m))} } {\rm e}^{-\beta E(x_m)} } { \sum\limits_{m=1}^{M} \sum\limits_{x_m} \frac{1}{\sum\limits_{l=1}^{M} n_l \exp{(f_l - \beta _l E(x_m))} } {\rm e}^{-\beta E(x_m)} }, \label{wham} \end{equation} where $x_m$ are the sets of coordinates at temperature $T_m$ obtained from the trajectories of the simulation. We analyze the simulation data by the principal component analysis (PCA).\cite{pca1,pca2,pca3,pca4,pca5,pca6} The structures are superimposed on an arbitrary reference structure, for example, the native structure from the PDB. The variance-covariance matrix is defined by \begin{equation} C_{ij} = <(q_i - <q_i>) (q_j - <q_j>)>, \end{equation} where $q_i = (q_1, q_2, q_3,\cdots , q_{3n-1}, q_{3n})=(x_1, y_1, z_1, \cdots , x_n, y_n, z_n)$ and $<\vec{q} >=\sum_{k=1}^{n} \vec{q} (k) /n$. $x_i , y_i , z_i$ are the Cartesian coordinates of the $i$-th atom, and $n$ is the total number of atoms. This symmetric 3$n$ $\times$ 3$n$ matrix is diagonalized, and the eigenvectors and eigenvalues are obtained.
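The PCA step can be sketched with a minimal NumPy illustration (the 2D toy data are hypothetical; they only show that the leading eigenvalue captures the dominant variance direction):

```python
import numpy as np

def pca(Q):
    # Q: (n_samples, 3n) matrix of superimposed Cartesian coordinates.
    # Returns eigenvalues in decreasing order and the principal components
    # of each structure, mu_i = nu_i . (q - <q>).
    mean = Q.mean(axis=0)
    X = Q - mean
    C = X.T @ X / Q.shape[0]          # variance-covariance matrix C_ij
    evals, evecs = np.linalg.eigh(C)  # ascending order for symmetric C
    order = np.argsort(evals)[::-1]   # reorder to decreasing magnitude
    evals, evecs = evals[order], evecs[:, order]
    mu = X @ evecs                    # projections onto principal axes
    return evals, mu

# Toy check: 2D data stretched along one axis concentrates the variance
# in the first principal component.
rng = np.random.default_rng(1)
Q = rng.normal(size=(500, 2)) * np.array([5.0, 0.5])
evals, mu = pca(Q)
print(evals[0] > evals[1])  # → True
```

Note that `numpy.linalg.eigh` returns eigenvalues in ascending order for a symmetric matrix, hence the explicit reordering to match the convention used in the text.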
For this calculation, we used the R program package.\cite{Rpackage2013,ihak:gent:1996,rgl} The superposition is performed first to remove the contributions of the overall translations and rotations of the system, because we want to analyze the internal differences between structures. As a result, the six eigenvalues corresponding to translations and rotations of the center of geometry become the smallest, close to zero. The eigenvalues are ordered in decreasing order of magnitude. Thus, the first, second, and third principal component axes are defined as the eigenvectors corresponding to the largest, second largest, and third largest eigenvalues, respectively. The $i$-th principal component of each sampled structure $\vec{q}$ is defined by the following inner product: \begin{equation} \mu _i = \nu _i \cdot (\vec{q} - <\vec{q} >) , \ \ \ \ (i=1, 2, \cdots, n), \end{equation} where $\nu _i $ is the (normalized) $i$-th eigenvector. \subsection*{Simulation conditions} \label{sec-2-2} The MC program is based on the CHARMM macromolecular mechanics program,\cite{brooks_charmm:_1983,hu_monte_2006} and the replica-exchange Monte Carlo method was implemented in it. In this work, we studied two membrane proteins: glycophorin A and phospholamban. Both proteins are registered in the Orientations of Proteins in Membranes (OPM) database.\cite{opm} The former is a dimer of almost ideal helix structures in the PDB (PDB code: 1AFO). The number of amino-acid residues in each helix is 18, and the sequences of the two helices are identical: TLIIFGVMAGVIGTILLI. The latter has a single transmembrane helix structure in the PDB (PDB code: 1FJK). The number of amino-acid residues in the helix is 25, and the sequence is LQNLFINFCLILIFLLLICIIVMLL. The N-terminus and the C-terminus of each helix were blocked with the acetyl group and the N-methyl group, respectively.
In the previous works, a REM MC simulation of glycophorin A was performed with 13 replicas at the following temperatures: 200, 239, 286, 342, 404, 489, 585, 700, 853, 1041, 1270, 1548, and 1888 K. \cite{kokubo_prediction_2004,kokubo_classification_2004,kokubo_replica-exchange_2006} Although this simulation successfully predicted structures close to the native one, the backbone structures were fixed to ideal helix structures. In the present simulation, the flexibility of the backbone helix structures is newly taken into account, and 16 replicas were used at the following temperatures: 300, 333, 371, 413, 460, 512, 571, 635, 707, 787, 877, 976, 1087, 1210, 1347, and 1499 K. The total number of MC steps was 60,000,000. For phospholamban, 16 replicas were also used, at the following temperatures: 300, 340, 386, 438, 497, 564, 640, 727, 825, 936, 1062, 1205, 1368, 1553, 1762, and 2000 K. The total number of MC steps was 100,000,000. The above temperatures were chosen so that all acceptance ratios of replica exchange are almost uniform and sufficiently large for computational efficiency. The highest temperature was chosen sufficiently high so that no trapping in local-minimum-energy states occurs in either simulation. Replica exchange was attempted once every 1000 MC steps for glycophorin A and once every 100 MC steps for phospholamban. We used the CHARMM19 parameter set (polar hydrogen model) for the original potential energy of the system.\cite{param19reiher1985theoretical,param19neria_simulation_1996} No cutoff was introduced to the non-bonded terms. Each structure was first minimized subject to harmonic restraints on all the heavy atoms.
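The replica-exchange attempts described above follow the standard Metropolis criterion for REM. A minimal sketch (ours, independent of the authors' CHARMM-based implementation; units with $k_B = 1$ and synthetic Gaussian energies are assumed) reads:

```python
import math
import random

def attempt_exchange(beta_i, beta_j, energy_i, energy_j, rng):
    """Metropolis criterion for swapping the configurations of two
    replicas: accept with probability min(1, exp(delta)), where
    delta = (beta_i - beta_j) * (energy_i - energy_j)."""
    delta = (beta_i - beta_j) * (energy_i - energy_j)
    return delta >= 0.0 or rng.random() < math.exp(delta)

# crude estimate of an acceptance ratio for two neighboring replicas
# whose potential energies fluctuate around different means
rng = random.Random(42)
trials = 20000
accepted = sum(
    attempt_exchange(1.0, 0.8, rng.gauss(-5.0, 1.0), rng.gauss(0.0, 1.0), rng)
    for _ in range(trials)
)
ratio = accepted / trials
```

The temperature ladder can then be tuned, as in the text, so that such estimated acceptance ratios are roughly uniform across neighboring pairs.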
The value of the dielectric constant was set as $\epsilon$ = 1.0, as in the previous works,\cite{kokubo_analysis_2009,Kokubo2004397,kokubo_prediction_2004,kokubo_classification_2004,Kokubo2004168} because previous studies showed that this value was better for the prediction of transmembrane helix structures than $\epsilon$ = 4.0, although $\epsilon$ = 4.0 is closer to the dielectric environment of the lipids. This may be due to the fact that few lipid molecules lie between helices in native transmembrane structures. For the concerted rotation, we selected the backbone atoms except for those in cysteine residues. We selected six or seven continuous bonds from the first atom along the backbone for the driver torsion. The third and fifth bonds were allowed to rotate following the driver bonds. The total number of degrees of freedom was 190 in glycophorin A and 132 in phospholamban. We set $N_{\rm H}=2$ for glycophorin A and $N_{\rm H}=1$ for phospholamban, and $k_1 = 5.0$ (kcal/mol)/\AA$^2$, $d_{i,i+1} = 30.0$ \AA, $k_2 = 5.0$ (kcal/mol)/\AA$^2$, $k_3 = 0.5$ (kcal/mol)/\AA$^2$, $d_{{\rm C}_{\alpha}} = 50$ \AA, $k_4=k_5=1.0$ (kcal/mol)/degrees$^2$, $\phi_0=-62$ degrees, $\psi_0=-47$ degrees, $\alpha_j^{\phi}=15$ degrees, and $\alpha_j^{\psi}=18$ degrees for our simulations. For the membrane thickness parameters, we set $z^{\rm L}_0 = -11$ \AA, $z^{\rm U}_0 = 11$ \AA, and $d^{\rm U} = d^{\rm L} = 1.0$ \AA~ for glycophorin A, and $z^{\rm L}_0 = -15$ \AA, $z^{\rm U}_0 = 15$ \AA, and $d^{\rm U} = d^{\rm L} = 1.0$ \AA~ for phospholamban. For the PCA, 60,000 and 100,000 conformations were chosen at a fixed interval at each temperature from the REM simulations of glycophorin A and phospholamban, respectively. We used the PDB structures (PDB codes: 1AFO for glycophorin A and 1FJK for phospholamban) as the reference structures to judge the prediction ability.
\section*{Results} \label{sec-3} \subsection*{Glycophorin A} \label{sec-3-1} \subsubsection*{Time series of various quantities} \label{sec-3-1-1} We first examine how the replica-exchange simulation performed. Fig. \ref{1afo-integrate}(a) shows the time series of the replica index at the lowest temperature of 300 K. We see that the lowest temperature visited different replicas many times during the REM simulation, and we observe a random walk in the replica space. The complementary picture is the temperature exchange for each replica. Fig. \ref{1afo-integrate}(b) shows the time series of temperatures for one of the replicas (Replica 11). We see that Replica 11 visited various temperatures during the REM simulation. We observe random walks in the temperature space between the lowest and highest temperatures. Other replicas behaved similarly. Fig. \ref{1afo-integrate}(c) shows the corresponding time series of the total potential energy for Replica 11. We see a strong correlation between the time series of temperatures (Fig. \ref{1afo-integrate}(b)) and that of the potential energy (Fig. \ref{1afo-integrate}(c)), as is expected. We next examine how widely the conformational space was sampled during the REM simulation. We plot the time series of the root-mean-square deviation (RMSD) of all the C$^{\alpha}$ atoms from the experimental structure (PDB code: 1AFO) for Replica 11 in Fig. \ref{1afo-integrate}(d). When the temperature becomes high, the RMSD takes large values, and when the temperature becomes low, the RMSD takes small values. By comparing Figs. \ref{1afo-integrate}(b) and \ref{1afo-integrate}(d), we see that there is a positive correlation between the temperature and the RMSD values. The fact that the RMSDs at high temperatures are large implies that our simulations did not get trapped in local-minimum potential-energy states. These results confirm that the REM simulation was properly performed.
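The C$^{\alpha}$ RMSD used here requires an optimal rigid-body superposition before the deviation is computed. The text does not specify the fitting procedure, so the following is only a sketch using the standard Kabsch algorithm, on a fake 18-residue C$^{\alpha}$ trace (matching the glycophorin A helix length):

```python
import numpy as np

def ca_rmsd(P, Q):
    """RMSD between two (n, 3) C-alpha coordinate sets after optimal
    rigid-body superposition (Kabsch algorithm)."""
    P = P - P.mean(axis=0)                  # remove translations
    Q = Q - Q.mean(axis=0)
    V, S, Wt = np.linalg.svd(P.T @ Q)       # SVD of the covariance matrix
    d = np.sign(np.linalg.det(V @ Wt))
    R = V @ np.diag([1.0, 1.0, d]) @ Wt     # proper rotation (no reflection)
    return np.sqrt(((P @ R - Q) ** 2).sum() / len(P))

# a fake 18-residue C-alpha trace and a rotated, translated copy of it
rng = np.random.default_rng(1)
trace = rng.normal(size=(18, 3))
c, s = np.cos(0.7), np.sin(0.7)
rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
moved = trace @ rot + np.array([5.0, -3.0, 2.0])
```

Since the superposition removes global rotation and translation, the rotated and shifted copy gives an RMSD of essentially zero, while any internal deformation gives a positive value.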
\begin{figure}[htb] \centering \includegraphics[width=.9\linewidth]{1afointegraterep11.eps} \caption{\label{1afo-integrate}Time series of various quantities for the REM simulation of glycophorin A. (a) Time series of the replica index at temperature 300 K. (b) Time series of the temperature change for Replica 11. (c) Time series of the total potential energy change for Replica 11. (d) Time series of the RMS deviation (in \AA{}) of all the C$^{\alpha}$ atoms from the PDB structure for Replica 11.} \end{figure} Table \ref{1afo accept} lists the acceptance ratios of replica exchange between all pairs of nearest-neighboring temperatures. We find that the acceptance ratio is high enough ($>$ 0.1) for all temperature pairs. Fig. \ref{1afo-wham}(a) shows the canonical probability distributions of the potential energy obtained from the REM simulation at the 16 temperatures. We see that the distributions have enough overlaps between the neighboring temperature pairs. This ensures that the number of replicas was sufficient. In Fig. \ref{1afo-wham}(b), the average potential energy and its components, namely, the electrostatic energy $E_{elec}$, van der Waals energy $E_{vdw}$, torsion energy $E_{dih}$, and constraint energy $E_{geo}$, are shown as functions of temperature, which were calculated by eq. (\ref{wham}). Because the helices are generally far apart from each other at high temperatures, the energy components, especially the electrostatic and van der Waals energies, are higher at high temperatures. At low temperatures, on the other hand, side-chain packing among the helices is expected. We see that as the temperature becomes lower, $E_{vdw}$, $E_{dih}$, and $E_{elec}$ decrease almost linearly down to $\sim$ 1200 K, and as a result $E_{tot}$ also decreases almost linearly down to $\sim$ 1200 K. On the other hand, below $\sim$ 1200 K, $E_{vdw}$ contributes more to the decrease of $E_{tot}$.
This is reasonable, because $E_{vdw}$ decreases as a result of side-chain packing, and the stability of the conformation increases. Note that we used only the transmembrane regions in the REM simulation. Transmembrane helices are generally considered to be hydrophobic, and helix-helix association is sometimes attributed only to van der Waals packing (the lock-and-key model). However, Fig. \ref{1afo-wham}(b) shows that $E_{elec}$ also changes considerably as a function of temperature. This implies that electrostatic effects also contribute to the formation of the native protein conformation. \begin{table}[!tbp] \caption{Acceptance ratios of replica exchange corresponding to pairs of neighboring temperatures from the REM simulation of glycophorin A.\label{1afo accept}} \begin{center} \begin{tabular}{lclc} \hline \multicolumn{1}{c}{Pairs of $T$ }&\multicolumn{1}{c}{Acceptance ratio}&\multicolumn{1}{c}{Pairs of $T$ }&\multicolumn{1}{c}{Acceptance ratio}\tabularnewline \hline 300 $\longleftrightarrow$ 333 &$0.43$& 707 $\longleftrightarrow$ 787 &$0.41$\tabularnewline 333 $\longleftrightarrow$ 371 &$0.42$& 787 $\longleftrightarrow$ 877 &$0.39$\tabularnewline 371 $\longleftrightarrow$ 413 &$0.41$& 877 $\longleftrightarrow$ 976 &$0.39$\tabularnewline 413 $\longleftrightarrow$ 460 &$0.42$& 976 $\longleftrightarrow$ 1087 &$0.30$\tabularnewline 460 $\longleftrightarrow$ 512 &$0.43$&1087 $\longleftrightarrow$ 1210 &$0.14$\tabularnewline 512 $\longleftrightarrow$ 571 &$0.42$&1210 $\longleftrightarrow$ 1347 &$0.20$\tabularnewline 571 $\longleftrightarrow$ 635 &$0.43$&1347 $\longleftrightarrow$ 1499 &$0.40$\tabularnewline 635 $\longleftrightarrow$ 707 &$0.42$&&\tabularnewline \hline \end{tabular} \end{center} \end{table} \begin{figure}[htb] \centering \includegraphics[width=.9\linewidth]{1afointegratewham.eps} \caption{\label{1afo-wham}(a) Canonical probability distributions of the total potential energy at each temperature from the REM simulation of glycophorin A.
The distributions correspond to the following temperatures (from left to right): 300, 333, 371, 413, 460, 512, 571, 635, 707, 787, 877, 976, 1087, 1210, 1347, and 1499 K. (b) The averages of the total potential energy $E_{tot}$ of glycophorin A and its component terms, the electrostatic energy $E_{elec}$, van der Waals energy $E_{vdw}$, dihedral energy $E_{dih}$, and constraint energy $E_{geo}$, as functions of temperature.} \end{figure} \subsubsection*{Principal component analysis} \label{sec-3-1-2} \begin{figure}[htb] \centering \includegraphics[width=0.4\textwidth]{1afointergratecumucont.eps} \caption{\label{1afo-cumu}Cumulative contribution ratio of the first five eigenvalues in the principal component analysis of the sampled structures from the REM simulation of glycophorin A at 300 K (a), 635 K (b), and 1499 K (c).} \end{figure} We now classify the sampled structures into clusters of similar structures by the principal component analysis. In Fig. \ref{1afo-cumu}, we show the percentage of the cumulative contribution ratio of the first five eigenvalues at the chosen temperatures of 300 K (lowest), 635 K, and 1499 K (highest). We see from the ratio values in Fig. \ref{1afo-cumu} that as the temperature becomes higher, more principal component axes are needed to represent the fluctuations of the structures, as is expected. This is reasonable because as the temperature becomes higher, the fluctuations of the system become larger and the simulation samples a wider conformational space. In Fig. \ref{1afo-cumu}(a), we see that more than 60 \% of the total fluctuations at 300 K are expressed by the first three principal components. Although we can express the system more precisely as we use more principal axes, we here classify and analyze the sampled structures at the lowest temperature by the first three principal components.
The fact that most of the fluctuation amplitudes in this protein system are represented by only a small number of principal components is consistent with the view that protein folding dynamics can be expressed as diffusion over a low-dimensional free energy surface, as elucidated in the energy landscape theory. Fig. \ref{1afo-cumu}(c) shows that many principal component axes are needed to express the sampled structures properly at the highest temperature. The sampled structures are sometimes analyzed by other reaction coordinates such as the number of native contacts, RMSD, and the radius of gyration. These are suitable as reaction coordinates in some cases but may not be appropriate in others. We do not know a priori how many reaction coordinates we need for identifying important local-minimum free energy states in the free energy landscape. The principal component analysis is one of the methods that naturally provide us with the information as to how many reaction coordinates we need for such investigations. In Fig. \ref{1afo-cluster3d300}, the projection of the sampled structures from the REM simulation on the first, second, and third principal component axes at the three chosen temperatures is shown. In Fig. \ref{1afo-cluster3d300}(a), each cluster of structures is highlighted with a different color. If we performed constant-temperature simulations at the lowest temperature, the simulations would get trapped in one of the clusters in Fig. \ref{1afo-cluster3d300}(a), depending on the initial conformations of the simulations. However, each replica of the replica-exchange simulation avoids getting trapped in one of the local-minimum free energy states by going through high-temperature regions. Every replica can climb over the energy barriers in Fig. \ref{1afo-cluster3d300}(c) by temperature exchange during the simulation. This is the reason why we adopted the replica-exchange method.
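The principal component projections and cumulative contribution ratios used in this analysis follow the covariance construction of the Methods section. A minimal NumPy sketch on synthetic coordinates (the authors used the R package; the 12-coordinate "structures" here are purely illustrative):

```python
import numpy as np

def pca(coords):
    """PCA of sampled structures.  coords is (n_structures, 3n), each row
    the flattened Cartesian coordinates of one superimposed structure.
    Returns eigenvalues (descending), eigenvectors (columns), and the
    projections mu_i = nu_i . (q - <q>) of every structure."""
    dev = coords - coords.mean(axis=0)
    cov = dev.T @ dev / len(coords)          # variance-covariance matrix
    evals, evecs = np.linalg.eigh(cov)       # symmetric 3n x 3n matrix
    order = np.argsort(evals)[::-1]          # largest eigenvalue first
    evals, evecs = evals[order], evecs[:, order]
    return evals, evecs, dev @ evecs

# synthetic ensemble: fluctuations dominated by a single direction
rng = np.random.default_rng(0)
axis = np.zeros(12)
axis[0] = 1.0
samples = 3.0 * rng.normal(size=(500, 1)) * axis \
          + 0.1 * rng.normal(size=(500, 12))
evals, evecs, mu = pca(samples)
cumulative = np.cumsum(evals) / evals.sum()  # cumulative contribution ratio
```

For this synthetic ensemble, the first principal component captures nearly all of the variance, which is the same diagnostic as the cumulative contribution ratios plotted in the figures.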
We classified the sampled structures at the lowest temperature into five distinct clusters in Fig. \ref{1afo-cluster3d300}(a). They lie in the ranges (--13 --- 16; 2 --- 32; --77 --- --19), (--49 --- --13; --34 --- --2; --34 --- 13), (--35 --- 8; --21 --- 16; --81 --- --30), (--53 --- --14; --7 --- 39; --30 --- 89), and (--20 --- 28; --37 --- 37; --24 --- 81), which we refer to as Cluster 1, Cluster 2, Cluster 3, Cluster 4, and Cluster 5, respectively. \begin{figure}[htb] \centering \includegraphics[width=.9\linewidth]{1afointegratecluster3d.eps} \caption{\label{1afo-cluster3d300}Projection of the sampled structures on the first, second, and third principal axes from the REM simulation of glycophorin A at 300 K (a), 635 K (b), and 1499 K (c). PCA1, PCA2, and PCA3 represent the principal axes 1, 2, and 3, respectively. Only the structures in (a) are classified into clusters of similar structures and analyzed in detail. In panel (a), C1, C2, $\cdots$ , C5 stand for Cluster 1, 2, $\cdots$ , 5, respectively, and are highlighted by different colors.} \end{figure} \subsubsection*{Average quantities of clusters} \label{sec-3-1-3} Table \ref{1afo allcluster} lists average quantities for the five clusters of similar structures. These structures were extracted from the trajectories at a fixed interval. The rows of Cluster 1, Cluster 2, $\cdots$ , and Cluster 5 give various average values for the structures that belong to each cluster. We see from the ``Str'' column in Table \ref{1afo allcluster} that Cluster 5 has the largest number of structures. Hence, Cluster 5 is the global-minimum free energy state in this simulation. The others are considered to be local-minimum free energy states. As for the RMSD values, Cluster 5 has a value of 2.67 \AA{}, the lowest among the five clusters. Therefore, Cluster 5 corresponds to the global-minimum free energy state, and it is also the closest to the native structure.
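The cluster ranges quoted above are axis-aligned boxes in (PC1, PC2, PC3), so assigning a sampled structure's projection to a cluster reduces to a containment test. A minimal sketch using the ranges of Clusters 1 and 2 above (inclusive boundaries and first-match assignment are our assumptions, not stated in the text):

```python
def assign_cluster(point, boxes):
    """Return the 1-based index of the first box containing the
    (PC1, PC2, PC3) point, or None if it lies in no box."""
    for k, box in enumerate(boxes, start=1):
        if all(lo <= x <= hi for x, (lo, hi) in zip(point, box)):
            return k
    return None

# ranges of Clusters 1 and 2 for glycophorin A at 300 K (PC1; PC2; PC3)
boxes = [
    [(-13.0, 16.0), (2.0, 32.0), (-77.0, -19.0)],    # Cluster 1
    [(-49.0, -13.0), (-34.0, -2.0), (-34.0, 13.0)],  # Cluster 2
]
```

For example, a projection of (0, 10, -50) falls inside the Cluster 1 box, while a point outside all boxes is left unassigned.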
\begin{table}[!tbp] \caption{Various average quantities of glycophorin A for each cluster at the temperature of 300 K.\label{1afo allcluster}} \begin{center} \begin{tabular}{lrrrrrrr} \hline \multicolumn{1}{l}{}&\multicolumn{1}{c}{Str}&\multicolumn{1}{c}{Tote}&\multicolumn{1}{c}{Elec}&\multicolumn{1}{c}{Vdw}&\multicolumn{1}{c}{Dih}&\multicolumn{1}{c}{Geo}&\multicolumn{1}{c}{RMSD}\tabularnewline \hline Cluster 1& $ 325$& $-1414$& $-1332$& $-163$& $23.3$& $0.982$& $7.04$\tabularnewline Cluster 2& $ 2583$& $-1419$& $-1329$& $-173$& $24.2$& $0.851$& $6.75$\tabularnewline Cluster 3& $ 1349$& $-1422$& $-1329$& $-174$& $22.5$& $0.697$& $6.95$\tabularnewline Cluster 4& $ 5433$& $-1422$& $-1328$& $-177$& $24.9$& $0.660$& $6.67$\tabularnewline Cluster 5& $50309$& $-1421$& $-1330$& $-173$& $23.0$& $0.670$& $2.67$\tabularnewline \hline \end{tabular} \end{center} The following abbreviations are used: Str: the number of structures, Tote: total potential energy, Elec: electrostatic energy, Vdw: van der Waals energy, Dih: dihedral energy, Geo: constraint energy (all in kcal/mol), RMSD: root-mean-square deviation of all C$^\alpha$ atoms (in \AA). \end{table} We next examine the typical local-minimum free energy state structures in each cluster. The representative structures were selected from the highest-density regions within the clusters. In Fig. \ref{1afo-minimumenestr}, the representative structure of each cluster and the solution NMR structure (PDB code: 1AFO) are shown. We confirm that the structure of Cluster 5 is the closest to the experimental one. Note that each helix in all these structures has a similar conformation, which is close to an ideal helix structure. This means that glycophorin A has only ideal helix structures as local-minimum free energy states in this simulation, although we allowed the helices to be distorted or bent during the simulation.
\begin{figure}[htb] \centering \includegraphics[width=.9\linewidth]{1afocluster.eps} \caption{\label{1afo-minimumenestr}(Color online) Typical structures of glycophorin A in each cluster, selected as free energy local-minimum states. The purple structure is the native structure. The RMSD from the native conformation with respect to all the C$^{\alpha}$ atoms is 6.80, 6.65, 7.15, 6.52, and 2.25 \AA{} for Cluster 1, Cluster 2, Cluster 3, Cluster 4, and Cluster 5, respectively.} \end{figure} \subsection*{Phospholamban} \label{sec-3-2} \subsubsection*{Time series of various quantities} \label{sec-3-2-1} We next examine how the replica-exchange simulation performed for phospholamban. Fig. \ref{1fjk-integrate}(a) shows the time series of the replica index at the lowest temperature of 300 K. We see that many replicas experienced the lowest temperature many times during the REM simulation, and a random walk in the replica space was realized. The complementary picture is the temperature exchange for each replica. Fig. \ref{1fjk-integrate}(b) shows the results for one of the replicas (Replica 12). We see that Replica 12 reached various temperatures during the REM simulation, and a random walk in the temperature space between the lowest and highest temperatures was also realized. Other replicas behaved similarly. Fig. \ref{1fjk-integrate}(c) shows the corresponding time series of the total potential energy. We next examine how widely the conformational space was sampled during the REM simulation. We plot the time series of the RMSD of all the C$^{\alpha}$ atoms from the experimental structure (PDB code: 1FJK) for Replica 12 in Fig. \ref{1fjk-integrate}(d). When the temperature becomes high, the RMSD takes large values, and when the temperature becomes low, the RMSD takes small values. By comparing Figs. \ref{1fjk-integrate}(b), \ref{1fjk-integrate}(c), and \ref{1fjk-integrate}(d), we see that there is a strong correlation among the temperature, total potential energy, and RMSD values.
The fact that the RMSD at high temperatures is large implies that our simulations did not get trapped in local-minimum potential-energy states. These results confirm that the REM simulation was properly performed. \begin{figure}[htb] \centering \includegraphics[width=.9\linewidth]{1fjkintegratereplica12.eps} \caption{\label{1fjk-integrate}Time series of various quantities for the REM simulation of phospholamban. (a) Time series of the replica index at temperature 300 K. (b) Time series of the temperature change for Replica 12. (c) Time series of the total potential energy change for Replica 12. (d) Time series of the RMS deviation (in \AA{}) of all the C$^{\alpha}$ atoms from the PDB structure for Replica 12.} \end{figure} Table \ref{1fjk accept} lists the acceptance ratios of replica exchange between all pairs of nearest-neighboring temperatures. We find that almost all acceptance ratios are high enough ($>$ 0.1) for the temperature pairs. Fig. \ref{1fjk-wham}(a) shows the canonical probability distributions of the potential energy obtained from the REM simulation at the 16 temperatures. We see that the distributions have enough overlaps between the neighboring temperature pairs. This ensures that the number of replicas was sufficient. We also see that the distribution at 1205 K is broader than the other distributions, which suggests that a phase transition occurs around this temperature. In Fig. \ref{1fjk-wham}(b), the average potential energy and its components (the electrostatic energy $E_{elec}$, van der Waals energy $E_{vdw}$, torsion energy $E_{dih}$, and constraint energy $E_{geo}$) are shown as functions of temperature, which were calculated by eq. (\ref{wham}). Because the helix is distorted at high temperatures, the energy components, especially the electrostatic and van der Waals energies, are higher at high temperatures. At low temperatures, on the other hand, we observe the formation of helix structures.
We see that the energy components behave similarly to those of glycophorin A. However, the drastic change at about 1200 K for phospholamban may be affected by the low acceptance ratio of the pair between 1062 K and 1205 K. All the energy components contribute to the formation of the native helix conformation. \begin{table}[!tbp] \caption{Acceptance ratios of replica exchange corresponding to pairs of neighboring temperatures from the REM simulation of phospholamban.\label{1fjk accept}} \begin{center} \begin{tabular}{lclc} \hline \multicolumn{1}{c}{Pairs of $T$ }&\multicolumn{1}{c}{Acceptance ratio}&\multicolumn{1}{c}{Pairs of $T$ }&\multicolumn{1}{c}{Acceptance ratio}\tabularnewline \hline 300 $\longleftrightarrow$ 340 &$0.42$& 825 $\longleftrightarrow$ 936 &$0.41$\tabularnewline 340 $\longleftrightarrow$ 386 &$0.41$& 936 $\longleftrightarrow$ 1062 &$0.37$\tabularnewline 386 $\longleftrightarrow$ 438 &$0.41$&1062 $\longleftrightarrow$ 1205 &$0.08$\tabularnewline 438 $\longleftrightarrow$ 497 &$0.42$&1205 $\longleftrightarrow$ 1368 &$0.36$\tabularnewline 497 $\longleftrightarrow$ 564 &$0.42$&1368 $\longleftrightarrow$ 1553 &$0.39$\tabularnewline 564 $\longleftrightarrow$ 640 &$0.43$&1553 $\longleftrightarrow$ 1762 &$0.32$\tabularnewline 640 $\longleftrightarrow$ 727 &$0.42$&1762 $\longleftrightarrow$ 2000 &$0.11$\tabularnewline 727 $\longleftrightarrow$ 825 &$0.42$&&\tabularnewline \hline \end{tabular} \end{center} \end{table} \begin{figure}[htb] \centering \includegraphics[width=.9\linewidth]{1fjkintegratewham.eps} \caption{\label{1fjk-wham}(a) Canonical probability distributions of the total potential energy at each temperature from the REM simulation of phospholamban. The distributions correspond to the following temperatures (from left to right): 300, 340, 386, 438, 497, 564, 640, 727, 825, 936, 1062, 1205, 1368, 1553, 1762, and 2000 K.
(b) The averages of the total potential energy $E_{tot}$ of phospholamban and its component terms, the electrostatic energy $E_{elec}$, van der Waals energy $E_{vdw}$, dihedral energy $E_{dih}$, and constraint energy $E_{geo}$, as functions of temperature.} \end{figure} \subsubsection*{Principal component analysis} \label{sec-3-2-2} We now classify the sampled structures into clusters of similar structures by the principal component analysis again. In Fig. \ref{1fjk-cumu}, we show the percentage of the cumulative contribution ratio of the first five eigenvalues at the chosen temperatures of 300 K (lowest), 936 K, and 2000 K (highest). We see from the ratio values in Fig. \ref{1fjk-cumu} that as the temperature becomes higher, more principal component axes are needed to represent the fluctuations of the structures, as is expected. In Fig. \ref{1fjk-cumu}, we see that more than 50 \% of the total fluctuations at the lowest temperature are expressed by the first three principal components. This smaller ratio compared with that of glycophorin A may have resulted from the fact that phospholamban had many helix structures, including distorted ones, whereas glycophorin A had mostly ideal helix structures. Fig. \ref{1fjk-cumu}(c) shows that many principal component axes are needed to express the sampled structures properly at the highest temperature. In Fig. \ref{1fjk-cluster3d300}, the projection of the sampled structures from the REM simulation on the first, second, and third principal component axes at the three chosen temperatures is shown. In Fig. \ref{1fjk-cluster3d300}(a), each cluster of structures is highlighted with a different color. Every replica can climb over the energy barriers in Fig. \ref{1fjk-cluster3d300}(c) by temperature exchange during the REM simulation. Compared with the results for glycophorin A, the distribution of the sampled structures projected onto the three principal components is simple, and one principal component axis alone already distinguishes the clusters.
This may result from the fact that we have only one helix in the system, without helix-helix interactions. Our simulation cannot sample random-coil structures, and the conformational space is restricted to a narrow region. We classified the sampled structures at the lowest temperature into two distinct clusters in Fig. \ref{1fjk-cluster3d300}(a). They lie in the ranges (5 --- 50; --35 --- 32; --36 --- 39) and (--36 --- 13; --49 --- 40; --42 --- 50), which we refer to as Cluster 1 and Cluster 2, respectively. \begin{figure}[htb] \centering \includegraphics[width=0.4\textwidth]{1fjkintegcumucont.eps} \caption{\label{1fjk-cumu}Cumulative contribution ratio of the first five eigenvalues in the principal component analysis of the sampled structures from the REM simulation of phospholamban at 300 K (a), 936 K (b), and 2000 K (c).} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=.9\linewidth]{1fjkintegrate3dcluster.eps} \caption{\label{1fjk-cluster3d300}Projection of the sampled structures on the first, second, and third principal axes from the REM simulation of phospholamban at 300 K (a), 936 K (b), and 2000 K (c). PCA1, PCA2, and PCA3 represent the principal axes 1, 2, and 3, respectively. Only the structures in (a) are classified into clusters of similar structures and analyzed in detail. In panel (a), C1 and C2 stand for Cluster 1 and Cluster 2, respectively, and are highlighted by different colors.} \end{figure} \subsubsection*{Average quantities of clusters} \label{sec-3-2-3} \begin{table}[!tbp] \caption{Various average quantities of phospholamban for each cluster at the temperature of 300 K.
\label{1fjk allcluster}} \begin{center} \scalebox{0.95}[1]{ \begin{tabular}{lrrrrrrr} \hline \multicolumn{1}{l}{}&\multicolumn{1}{c}{Str}&\multicolumn{1}{c}{Tote}&\multicolumn{1}{c}{Elec}&\multicolumn{1}{c}{Vdw}&\multicolumn{1}{c}{Dih}&\multicolumn{1}{c}{Geo}&\multicolumn{1}{c}{RMSD}\tabularnewline \hline Cluster 1& $20519$& $-1064$& $-1011$& $-126$& $24.8$& $0.23$& $2.06$\tabularnewline Cluster 2& $79480$& $-1067$& $-1009$& $-132$& $25.8$& $0.26$& $2.93$\tabularnewline \hline \end{tabular} } \end{center} The following abbreviations are used: Str: the number of structures, Tote: total potential energy, Elec: electrostatic energy, Vdw: van der Waals energy, Dih: dihedral energy, Geo: constraint energy (all in kcal/mol), RMSD: root-mean-square deviation of all C$^\alpha$ atoms (in \AA). \end{table} \begin{figure}[htb] \centering \includegraphics[width=.9\linewidth]{1fjkcluster.eps} \caption{\label{1fjk-minimumenestr}(Color online) Typical structures of phospholamban in each cluster, selected as free energy local-minimum states. The purple structure is the native structure. The RMSD from the native conformation with respect to the backbone atoms is 1.27 and 2.89 \AA{} for Cluster 1 and Cluster 2, respectively.} \end{figure} Table \ref{1fjk allcluster} lists average quantities for the two clusters of similar structures. The rows of Cluster 1 and Cluster 2 give various average values for the structures that belong to each cluster. We see that the RMSD is as small as 2.06 \AA{} for Cluster 1, while it is 2.93 \AA{} for Cluster 2. Hence, Cluster 1 has structures very similar to the native one. However, it is not the global-minimum free energy state but a local-minimum one, as seen by comparing the numbers of conformations (the Str entries in Table \ref{1fjk allcluster}) of the two clusters. In Fig. \ref{1fjk-minimumenestr}, the representative structures of each cluster in Table \ref{1fjk allcluster} and the structure obtained by solution NMR experiments (PDB code: 1FJK) are shown.
We confirm that Cluster 1 is very similar to the native structure. It is bent at the same position and in the same direction, although the amount of bend is not as large as in the native structure. Cluster 2 is also bent at the same position and by about the same amount as the native one, but it has a bend in the opposite direction. Hence, the present simulation can predict the position of the bend, but it gives both directions of the bend as local-minimum free energy states and Cluster 2 as the global-minimum one. The present system is a helix monomer, and without interactions with other helices, it seems very difficult to determine the direction of distortions within the approximation of the present method. We remark that a preliminary REM simulation of bacteriorhodopsin with seven helices predicts correct directions of helix bending (manuscript in preparation). \section*{Conclusions} \label{sec-4} In this article, we introduced deformations of helix structures into the replica-exchange Monte Carlo simulation for membrane protein structure predictions. The membrane bilayer environment was approximated by restraining the conformational space within a virtual membrane region. By introducing restraints on the backbone $\phi$ and $\psi$ angles, the sampled structures were limited so that the helix structures are not completely destroyed. In order to check the effectiveness of the method, we first applied it to the prediction of a dimeric membrane protein, glycophorin A. We successfully reproduced the native-like structure as the global-minimum free energy state. We next applied the method to phospholamban, which has one distorted transmembrane helix structure in the PDB. The results implied that a native-like structure was obtained as a local-minimum free energy state. Two local-minimum free energy states were found with the same bend position as the native one, but the global-minimum free energy state had the opposite direction of helix bend.
Therefore, our results seem to imply that the locations of bends in transmembrane helices are determined by their amino-acid sequences, but the direction and amount of distortion of the helices depend on the interactions with the surrounding lipid molecules, which we represented only implicitly. Our next targets will be more complicated membrane proteins with multiple transmembrane helices, such as G protein-coupled receptors. Our preliminary results for bacteriorhodopsin show that native-like structures with the correctly bent helices can be predicted by our method. \section*{Acknowledgements} \label{sec-5} Some of the computations were performed on the supercomputers at the Institute for Molecular Science, at the Supercomputer Center, Institute for Solid State Physics, University of Tokyo, and at the Center for Computational Sciences, University of Tsukuba. This work was supported, in part, by Grants-in-Aid for Scientific Research (A) (No. 25247071), for Scientific Research on Innovative Areas (\lq\lq Dynamical Ordering \& Integrated Functions\rq\rq ), the Program for Leading Graduate Schools \lq\lq Integrative Graduate Education and Research in Green Natural Sciences\rq\rq, the Computational Materials Science Initiative, and the High Performance Computing Infrastructure from the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan.
\section{Introduction} The advent of ubiquitous traffic sensing provides unprecedented real-time, high-resolution data that elucidate historical trends and current traffic conditions. However, traditional signal timing approaches have yet to take full advantage of these data \cite{Kurzhanskiy:2015fj}. Surveys of practitioners suggest that only sixty percent of the 300,000 signalized intersections in the United States are retimed at intervals less than five years \cite{NCHRP397}, and the National Transportation Operations Coalition has given a grade of ``C$-$'' to signal timing practices and a grade of ``F'' to traffic monitoring and data collection in the United States. These deficiencies contributed to the 6.9 billion hours of additional travel time caused by inefficient traffic management in 2015 \cite{Schrank:2015hs}. Increased urbanization demands more efficient use of this transportation infrastructure, which in turn requires full use of measured data. At signalized intersections, standard actuated signal timing plans are designed to make limited use of real-time measurements to, \emph{e.g.}, extend green time for approaching vehicles or enable actuation phases to be skipped if no waiting vehicles are present \cite{Koonce:2008gd}. However, actuated traffic signal timing only accommodates modest deviations from the nominal traffic conditions and is thus unable to respond to systematic changes in the traffic flow. This paper proposes a \emph{traffic predictive control} strategy for signalized intersections that predicts future traffic flow based on real-time measurements and adjusts the intersection's signal timing accordingly. The prediction algorithm first identifies trends in historical traffic flow and then uses real-time measurements to determine the degree to which these historical trends are exhibited by the current traffic conditions. 
For example, historical trends may indicate that increased flow in one commute direction during the morning correlates with increased flow in the opposite direction in the evening. Traffic predictive control identifies this relationship and uses it to adjust signal timing parameters in the afternoon using measurements of traffic flow in the morning. The present work thus lies within the extensive literature on traffic prediction and forecasting. See, for example, \cite{Vlahogianni:2004ct} for a survey of the literature. The majority of the forecasting literature focuses on freeways rather than urban street traffic \cite{Vlahogianni:2004ct}. For example, \cite{Jabari:2012fv} and \cite{Jabari:2013dz} develop a stochastic traffic flow model for freeways, and it is shown that this model is amenable to Kalman filtering techniques to estimate traffic conditions. Kalman filtering is also used in \cite{Ojeda:2013ya} to estimate traffic flow as the result of a random walk biased by historical increments in measured flow over time. In \cite{Wu:2014fb}, $k$-means clustering is used to divide historical data, and an ARMAX prediction is computed for each possible cluster. The most likely cluster is used to provide a forecast of future traffic flow. A large body of literature focuses on estimating flows throughout a network using flow measurements and conservation laws. For example, in \cite{Castillo:2008qc}, real-time link flow data is used to predict origin-destination and link flows by modeling a traffic network as a Gaussian Bayesian network, which provides conditional distributions and probability intervals for link flow. In \cite{Zhou:2007uo}, structural deviations from regular traffic patterns, modeled using polynomial trend filters, are used to estimate origin-destination flows. In contrast to these approaches, we focus on predicting traffic flow at a higher resolution for a single intersection. 
A variety of model-based and statistical approaches exist for estimating travel times on arterial roads, a problem closely associated with estimating traffic flows and volumes. A flow model is combined with GPS probe data in \cite{Hofleitner:2012oj} to predict arterial travel times using a Bayesian network learning approach. Each link is modeled as being in a state of congestion or undersaturation, and the Bayesian network models the transition between these states. A similar approach is considered in \cite{Jenelius:2013tg} for using GPS probe data to estimate travel time, where additional explanatory variables such as speed limits and number of lanes per link are used to reduce the number of parameters in the model. In \cite{Du:2012hc}, an approach for short-term travel time estimation that fuses past and real-time data is presented. These data are weighted based on their quality, which incorporates properties of the data including sensor accuracy and delay. A queuing theory approach is used to provide probability distributions on queue lengths in traffic networks in \cite{Osorio:2011fu}, and the model accounts for finite-capacity queues and captures spatial correlations. While the above approaches focus on aggregate estimates for larger networks, the focus of this paper is on high-resolution estimates of traffic flow for all movements at a single intersection. In \cite{Guardiola:2014ij}, aggregate daily traffic flow patterns are studied, and a functional principal component decomposition is used to reduce the dimensionality of the data. This approach is similar to the analysis proposed in Section \ref{sec:princ-comp-analys} of the present paper; however, \cite{Guardiola:2014ij} focuses on identifying changes in daily traffic patterns for long-term traffic monitoring, whereas, in the present paper, we use a principal component decomposition to lay the foundation for our traffic prediction approach. 
Moreover, \cite{Guardiola:2014ij} focuses on freeway networks and not signalized networks. When prediction is applied to signalized intersections for adaptive control, prediction horizons are short, \emph{e.g.}, seconds or minutes \cite{Mirchandani:2001zp}. In contrast, the present work focuses on longer prediction horizons on the order of hours for use in conjunction with more traditional traffic signal timing methods such as pre-defined timing plans. In this paper, we develop a principal-component-based prediction scheme for signalized intersections using the \emph{projection to latent structures (PLS)} algorithm \cite{Rosipal:2006kx}. Abstractly, this algorithm decomposes two sets of data to find low-rank, correlated structure between the data sets. For example, in the case of traffic, suppose the objective is to predict traffic flow from 2pm to 8pm using traffic flow measurements up to 10am. First, historical measurements of traffic flow up to 10am and of traffic flow between 2pm and 8pm are collected. Next, the PLS algorithm decomposes the data into a set of pairs of \emph{latent structures}, which provide low-rank approximations of the data sets with the additional requirement that each pair of latent structures is highly correlated. Then, given real-time measurements of traffic flow for a particular day up to 10am, future traffic from 2pm to 8pm is predicted by computing weights for the latent structures. An important property of this approach is that the proposed traffic predictive control builds on existing standard practices for traffic signal timing. In particular, we consider the common practice of signal timing based on time-of-day plans, which are preprogrammed to apply during certain periods of the day \cite[Chapter 5]{Koonce:2008gd}. Each plan is designed to accommodate a certain level of traffic flow at the intersection. 
Traditionally, this level of traffic flow is determined based on averaged historical measurements; often, these measurements span only a limited time window over several days and may have been collected years ago. Critically, averaged historical flow is unable to capture anomalous traffic patterns. In \cite{Abbas:2006zh}, a genetic algorithm selects from among a set of predefined plans based on current conditions. The primary contribution of this paper is a traffic predictive control scheme that uses a prediction of future traffic flow to adjust the time periods for which the time-of-day plans are active and, additionally, suggests predicted levels of traffic around which the timing plans should be designed. Our approach thus does not depend on the exact algorithm that is used to determine timing plans (\emph{i.e.}, green splits) from traffic flow; in our case study, however, we employ a delay minimization policy. By ensuring that the proposed control approach is well-aligned with existing practices, the proposed controller integrates well with existing traffic control hardware, which universally accommodates time-of-day timing plans and is often capable of remote changes to these plans. Additionally, practitioners familiar with standard practices are likely to be more receptive to the proposed traffic predictive control. This paper is organized as follows: Section \ref{sec:preliminaries} describes the problem setup, available data, and details of the case study. Section \ref{sec:princ-comp-analys} analyzes structural trends in the traffic flow data using a principal component analysis, which establishes the foundation for the prediction algorithm presented in Section \ref{sec:traff-pred-from}. Section \ref{sec:traff-pred-contr} proposes a traffic predictive control scheme that uses predictions of future traffic flow to adjust signal timing plans. Section \ref{sec:conclusions} provides concluding remarks, future directions of research, and plans for implementation. 
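As a toy illustration of adjusting time-of-day plans from predicted flow, the sketch below maps each interval's predicted intersection volume to a plan index using hypothetical volume thresholds. The thresholds, the function name, and the three-plan setup are illustrative assumptions only; they stand in for, and are much simpler than, the delay-minimization policy used in the case study.

```python
import numpy as np

def assign_plans(predicted_flow_vph, thresholds=(300.0, 600.0)):
    """Map each interval's predicted intersection flow (vph) to a
    time-of-day plan index: 0 = light, 1 = medium, 2 = heavy plan."""
    return np.searchsorted(thresholds, predicted_flow_vph, side="right")

# Hypothetical predicted flow over four 15-minute intervals (vph).
pred = np.array([150.0, 450.0, 800.0, 250.0])
print(assign_plans(pred))  # prints [0 1 2 0]
```

Consecutive intervals assigned the same index then form the active period of that plan; in practice one would also smooth the assignment to avoid rapid plan switching.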
\section{Preliminaries} \label{sec:preliminaries} We first characterize the requirements and assumptions generally and then specialize to a specific test site as our case study throughout the paper. \subsection{Available Data} We consider a single traffic intersection consisting of a set of $M$ \emph{turn movements} indexed $1$ through $M$. For example, a prototypical intersection consists of four approaches, each approach consisting of a left turn movement, a right turn movement, and a through movement so that $M=12$, as in the case below. The flow rate of vehicles along each turn movement is measured and recorded once per interval of time $\Delta$. Thus, $T\triangleq (24\text{ hours})/\Delta$ measurements of the flow are made per movement per day. We assume measurements are available for a total of $D$ days. For day $d\in\{1,\ldots,D\}$ and movement $m\in\{1,\ldots,M\}$, let $x^d_m(t)$ denote the flow rate of vehicles executing the $m$-th turn movement on day $d$ during time interval $t\in\{1,\ldots,T\}$ in vehicles per hour (vph). From this notation, we aggregate the measurements into vectors and matrices as follows: \begin{alignat}{2} \label{eq:4} x^d_m&=\begin{bmatrix}x^d_m(1)&x^d_m(2)&\cdots&x^d_m(T)\end{bmatrix}^{\intercal},&&\qquad d=1,\ldots, D,\quad m=1,\ldots,M\\ x^d&=\begin{bmatrix}(x^d_1)^{\intercal}&(x^d_2)^{\intercal}&\cdots&(x^d_M)^{\intercal}\end{bmatrix}^{\intercal}&&\qquad d=1,\ldots, D \end{alignat} where $(\cdot)^{\intercal}$ denotes vector transpose. That is, $x^d_m\in \mathbb{R}^{T}$ is the vector of flow measurements along the $m$-th movement on day $d$ and $x^d\in\mathbb{R}^{TM}$ is the vector of flow measurements along all $M$ movements on day $d$. We define the aggregated data measurement matrix as \begin{align} \label{eq:1} X= \begin{bmatrix} (x^1)^{\intercal}\\ \vdots\\ (x^D)^{\intercal} \end{bmatrix}\in \mathbb{R}^{D\times (TM)}. \end{align} Typically, $TM\gg D$ so that $X$ is a wide matrix. 
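The construction of $X$ in \eqref{eq:1} amounts to flattening a (day, movement, time) array in row-major order; a minimal numpy sketch with synthetic flows (case-study dimensions for $M$ and $T$, with $D$ shrunk for brevity):

```python
import numpy as np

D, M, T = 3, 12, 96  # days, turn movements, 15-minute intervals per day

# flows[d, m, t]: vph for movement m on day d during interval t (synthetic).
rng = np.random.default_rng(0)
flows = rng.uniform(0, 500, size=(D, M, T))

# x^d concatenates the M per-movement traces of length T; stacking the
# transposed daily vectors gives the D x (TM) data matrix X of Eq. (1).
X = flows.reshape(D, M * T)
assert X.shape == (D, T * M)

# Recover the trace of movement m on day d from its block of X:
d, m = 1, 4
x_d_m = X[d, m * T:(m + 1) * T]
assert np.array_equal(x_d_m, flows[d, m])
```

The block-of-columns indexing above mirrors the ordering of \eqref{eq:4}: entries $mT+1$ through $(m+1)T$ of $x^d$ hold the day-$d$ trace of movement $m$.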
Table \ref{tab:notation} contains a summary of the notation. We consider measured turn movements to represent the exogenous demand on the system from the external environment. In particular, it is assumed that measured turn movements are negligibly affected by the choice of control action: \begin{assum} \label{assum:exo} Turn movement flows originate from an exogenous process and, in particular, are not influenced by the choice of control actions at the intersection. \end{assum} For this assumption to be reasonable, the sampling period $\Delta$ must be long enough that the impact of the control signal is negligible (\emph{e.g.}, if $\Delta$ is 5 minutes and the cycle time of the signal actuation at the intersection is 2 minutes, then a measurement $x^d_m(t)$ may include between two and three periods of actuation for movement $m$; such a $\Delta$ is too short since $x^d_m$ will exhibit undesired oscillations caused by the control signal). Empirical evidence suggests that $\Delta\approx 15$ minutes is reasonable for minimizing such effects. Additionally, implicit in Assumption \ref{assum:exo} is that vehicles do not reroute in response to changes in signal actuation. The test site presented below is a large intersection for which few alternative routes exist. 
Thus, the assumption that vehicles do not reroute is reasonable for our study. \begin{table} \centering \begin{tabular}{r |p{4in} |l r} && \multicolumn{2}{l}{Value (val) or}\\ && \multicolumn{2}{l}{Dimension (dim)}\\ Notation& Meaning&\multicolumn{2}{l}{for Case Study}\\ \hline \hline $M$&Number of turn movements at the intersection &$12$&(val)\\ $\Delta$&Interval of time between subsequent measurement of turn movement flows&$15$ min& (val)\\ $D$&Number of days&$132$& (val)\\ $T$&Number of measurements per movement per day& $96$ &(val)\\ $x^d_m(t)$&Flow rate of the $m$-th turn movement on day $d$ at time $t$ in vehicles per hour&$1$ &(dim)\\ $x^d_m$&Vector of $T$ measurements of the flow rate of the $m$-th turn movement on day $d$&$96$ &(dim)\\ $x^d$& Vector of $(TM)$ measurements of the flow rate on all movements on day $d$&$1152$ &(dim)\\ $X$& Matrix of dimension $D\times (TM)$ containing measurements of the flow rate on all movements for all days&$132\times 1152$& (dim) \end{tabular} \caption{Summary of notation.} \label{tab:notation} \end{table} \subsection{Case Study Test Site} \label{sec:case-study-test} \begin{figure} \centering \begin{tabular}{c c} \includegraphics[height=2.5in]{beaufort}& \includegraphics[height=2.5in, clip=true, trim=2in 0in 2in 0in]{intersection}\\ (a)&(b) \end{tabular} \caption{Test site in Beaufort, South Carolina. (a) Image of the test site, which consists of one intersection with four approaches. Each approach consists of a left turn movement, a right turn movement, and a through turn movement. (b) Schematic depiction of the lane configuration and placement of sensors at the intersection. Stopbar sensors are indicated with red blocks, departure lane sensors with blue blocks, and advance sensors placed upstream with green blocks. The sensors are manufactured by Sensys Networks, Inc. 
and their measurements are fused to determine the turn movement of each vehicle that transits the intersection.} \label{fig:beaufort} \end{figure} In this paper, we focus on a test site in Beaufort, South Carolina, consisting of one intersection with four approaches. Each approach consists of a left turn movement, a right turn movement, and a through turn movement for a total of $M=12$ movements. On the left of Figure \ref{fig:beaufort} is a satellite image of the intersection and on the right of Figure \ref{fig:beaufort} is a schematic depiction of the lane configuration and sensor placements at the intersection. A total of 44 magnetometer sensors provide real-time measurements of the movement of vehicles through the intersection; the sensors are manufactured by Sensys Networks, Inc. \cite{Haoui:2008qf}. By measuring changes in magnetic field, the sensors are able to detect the presence of each vehicle at the intersection. Detection events from sensors on approaching lanes, subsequent detection events on departure lanes, and the signal phase are then used to determine the turn movement of each vehicle. These data are aggregated every $\Delta=15$ minutes to provide a measurement of the number of vehicles executing each turn movement at the intersection. These data are reported in vehicles per hour. There are therefore a total of $T=96$ measurements of flow per movement per day. This high-resolution data acquisition system provides a rich dataset for managing traffic at intersections \cite{Muralidharan:2016dp}. The intersection consists of four \emph{approaches} and four \emph{departures}. A physically adjacent approach/departure pair is referred to as a \emph{leg} of the intersection. As is labeled in Figure \ref{fig:beaufort}b, the left leg is the Eastbound (EB) leg, the right leg is the Westbound (WB) leg, the top leg is the Southbound (SB) leg, and the bottom leg is the Northbound (NB) leg. 
Each approach consists of a left turn (LT) movement, a through (T) movement, and a right turn (RT) movement. We sometimes use, \emph{e.g.}, the notation ``$m=\text{EB LT}$'' to indicate the movement index corresponding to the Eastbound left turn movement. Traffic at the case study intersection throughout the week has similar profiles each Monday through Thursday, and a different profile on Fridays, on Saturdays, and on Sundays. Figure \ref{fig:DOW} shows the average flow at the intersection for each of these groups for data spanning December 2014 to July 2015. Eleven days from this period are omitted due to missing measurements on these days. To ensure the figures are legible, the plots do not include traffic flows originating from or bound for the leftmost (that is, EB) leg (\emph{i.e.}, movements WB T, NB LT, SB RT, and all EB movements) because these movements contribute much lower flow than the remaining movements (approximately 50--100 vph during peak periods). The grouping depicted in Figure \ref{fig:DOW} is verified via standard $k$-means clustering: when sufficiently many clusters are computed, each cluster tends to contain days from only one of these four groups, but for more than four clusters, no meaningful division of the Monday--Thursday group is discernible. \begin{figure} \centering \begin{tabular}{@{}l@{} @{}l@{}} \includegraphics[height=1.6in]{M-Th}& \includegraphics[height=1.6in]{Friday}\\ \includegraphics[height=1.6in]{Saturday}& \includegraphics[height=1.6in]{Sunday} \end{tabular} \caption{Average flow rates over a 7-month period from December 2014 to July 2015 for Monday--Thursday (132 days), Friday (32 days), Saturday (34 days), and Sunday (34 days). Traffic originating from and bound for the West (\emph{i.e.}, EB LT, EB T, EB RT, NB LT, SB RT, and WB T movements) is not shown since traffic volumes for these movements are much lower than the remaining movements. 
Eleven days in this period contain missing measurements and are omitted from the analysis.} \label{fig:DOW} \end{figure} \section{Principal Components of Traffic Flow} \label{sec:princ-comp-analys} In this section, we consider a low-rank decomposition of the measured traffic flow. This decomposition is obtained via a principal component (PC) analysis of the data, and we will see that this simple approach reveals much about the data. Furthermore, a PC analysis establishes the foundation for the PLS-based traffic prediction strategy that is the focus of Section~\ref{sec:traff-pred-from} and is the main contribution of this paper. \subsection{Computation of Principal Components} Recall our data matrix $X$ constructed in \eqref{eq:1} from the vectors $x^d$, $d=1,\ldots, D$ containing the turn movement flow measurements for each movement over the course of each day $d$. Throughout the remainder of the paper, we focus exclusively on data from December 2014 to July 2015, Monday--Thursday for a total of $D=132$ days. Define \begin{align} \label{eq:2} \bar{x}_m=\frac{1}{D}\sum_{d=1}^Dx_m^d\in\mathbb{R}^{T} \end{align} to be the mean measured flow along movement $m$ over the course of a day, and define \begin{align} \label{eq:3} \bar{x}=\begin{bmatrix}\bar{x}_1^{\intercal}&\ldots &\bar{x}_M^{\intercal}\end{bmatrix}^{\intercal}. 
\end{align} \begin{figure} \centering \begin{tabular}{@{}c@{} @{}c@{} @{}c@{}} \includegraphics[width=.33\textwidth, clip=true, trim=0in .8in 0in 0in]{Envsmall_1002}& \includegraphics[width=.33\textwidth, clip=true, trim=0in .8in 0in 0in]{Envsmall_1007}& \includegraphics[width=.33\textwidth, clip=true, trim=0in .8in 0in 0in]{Envsmall_1006}\\ \imagetop{\includegraphics[width=.33\textwidth, clip=true, trim=0in .8in 0in 0in]{Envsmall_1005}}& \imagetop{\includegraphics[width=.33\textwidth, clip=true, trim=0in 0in 0in 0in]{Envsmall_1014}}& \imagetop{\includegraphics[width=.33\textwidth, clip=true, trim=0in .8in 0in 0in]{Envsmall_1016}} \end{tabular} \caption{Plots of mean flow for six turn movements along with the envelope containing all measured data for all days Monday--Thursday. } \label{fig:env} \end{figure} The top-left image of Figure \ref{fig:DOW} plots $\bar{x}_m$ for the six indicated movements. Figure \ref{fig:env} shows separately the mean measured flow for each of these six movements and additionally displays a shaded region that represents the envelope containing $x^d_m$ for all $d=1,\ldots,D$. Clearly, there is a large variation in measured flow rate around the mean. Our first objective is to find a low-rank decomposition of the data to characterize this variation. Specifically, given the rank parameter $N\geq 1$, we wish to find a collection of $N$ principal components $q^1,q^2,\ldots, q^N$ with each $q^i\in\mathbb{R}^{TM}$ and for each $d=1,\ldots,D$ a vector of weights $w(d)\in \mathbb{R}^N$ with \begin{equation} \label{eq:8} w(d)= \begin{bmatrix}w^1(d)&w^2(d)&\ldots&w^N(d)\end{bmatrix}^{\intercal} \end{equation} such that \begin{align} \label{eq:5} x^d\approx \bar{x}+\sum_{i=1}^Nw^i(d)q^i, \end{align} that is, each of the 132 daily mean-centered measurement vectors $x^d-\bar{x}$ is approximately represented by a linear combination of the principal components. If \eqref{eq:5} holds, then we may effectively replace $x^d$ by its weight vector $w(d)$. 
As will be evident below, much of the day-to-day variation can be captured by a few (three to five) principal components. To make our search for principal components precise, let $\tilde{x}^d=x^d-\bar{x}$ for all $d=1,\ldots, D$ and define \begin{align} \label{eq:9} \tilde{X}= \begin{bmatrix} (\tilde{x}^1)^{\intercal}\\ \vdots\\ (\tilde{x}^D)^{\intercal} \end{bmatrix} = X-\mathbf{1}_{D}\bar{x}^{\intercal} \end{align} where $\mathbf{1}_n$ denotes the all-ones vector of length $n$, and let \begin{alignat}{2} \label{eq:10} Q&= \begin{bmatrix} q^1&\cdots&q^N \end{bmatrix}&&\in\mathbb{R}^{(TM)\times N}\\ W&= \begin{bmatrix} w^{\intercal}(1)\\ \vdots\\ w^{\intercal}(D) \end{bmatrix}&&\in\mathbb{R}^{D\times N} \end{alignat} where $w^{\intercal}(d)$ denotes the transpose of the weight vector $w(d)$. We reformulate \eqref{eq:5} and specifically seek $Q$ and $W$ to minimize \begin{align} \label{eq:11} ||\tilde{X}-WQ^{\intercal}||_F \end{align} where $||\cdot||_F$ denotes the Frobenius matrix norm. It is well known that a pair $(W,Q)$ minimizing \eqref{eq:11} is obtained via the singular value decomposition of $\tilde{X}$. This decomposition results in $q^1,\ldots,q^N$ that are orthonormal and are assumed ordered with respect to the (descending) order of the singular values of $\tilde{X}$. In this way, $q^1$ is the principal component that lies in the direction maximizing the explained variance of the data, \emph{i.e.}, maximizing $\sum_{d=1}^D \left((q^1)^{\intercal}\tilde{x}^d\right)^2$; $q^2$ is the principal component that lies in the direction maximizing the explained variance subject to the constraint that $(q^1)^{\intercal}q^2=0$; \emph{etc.} It is standard to assume without loss of generality that each $q^i$ is of unit norm; here, we multiply each $q^i$ by a factor of 100 and therefore divide each $w^i(d)$ by a factor of 100 to make the plots more intuitive and legible. 
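A minimal numpy sketch of this decomposition (synthetic stand-in data with shrunk dimensions; the factor-of-100 rescaling used for plotting is omitted): the thin SVD of the mean-centered matrix yields the orthonormal components and the weights minimizing \eqref{eq:11}, and the normalized singular values give the explained-variation fractions.

```python
import numpy as np

rng = np.random.default_rng(1)
D, TM, N = 132, 60, 4          # days, flattened dimension (shrunk), rank

X = rng.normal(size=(D, TM))   # stand-in for the flow data matrix
x_bar = X.mean(axis=0)
X_tilde = X - x_bar            # mean-centered data

# Thin SVD: rows of Vt are the principal components q^i (orthonormal),
# and U * S gives the weight vectors w(d).
U, S, Vt = np.linalg.svd(X_tilde, full_matrices=False)
Q = Vt[:N].T                   # (TM) x N matrix of components
W = U[:, :N] * S[:N]           # D x N matrix of weights

# Rank-N Frobenius error equals the root sum of squared discarded
# singular values (Eckart-Young).
err = np.linalg.norm(X_tilde - W @ Q.T, "fro")

# Fraction of the singular-value mass captured by the first N components.
explained = S[:N].sum() / S.sum()
```

Each row of $W$ is a weight vector $w(d)$, so a day's trace is reconstructed as `x_bar + W[d] @ Q.T`, matching \eqref{eq:5}.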
\subsection{Case Study} \label{sec:case-study-principal} \begin{figure} \centering \begin{tabular}{l l} \includegraphics[height=1.6in]{PComponent2_1}& \includegraphics[height=1.6in]{PComponent2_2}\\ \includegraphics[height=1.6in]{PComponent2_3}& \includegraphics[height=1.6in]{PComponent2_4} \end{tabular} \caption{The first four principal components of the traffic data. Each component is $TM$-dimensional but is plotted as $M$ traces of length $T$ in the figures, each trace corresponding to one turn movement over 24 hours.} \label{fig:pca} \end{figure} \begin{figure} \centering \includegraphics[width=.5\textwidth]{PCAVar} \caption{Relative weight of the singular values obtained from a singular value decomposition of $\tilde{X}$. The singular values are ordered by magnitude and the relative weight is a percent of the sum of all singular values. The plot indicates that a significant portion of the variation in traffic flow is explained by the first few principal components of the decomposition.} \label{fig:variance} \end{figure} In Figure \ref{fig:pca}, we plot the first four principal components of our dataset. Each $q^i$ is $TM$-dimensional; however, for ease of comprehension, each principal component is plotted as $M$ traces of length $T$, each trace corresponding to one turn movement. From the figure, we observe identifiable trends in each of the principal components, which are discussed next. We also note that the principal components exhibit some high-frequency oscillation, especially in the AM peak period. This oscillation is generally of lower magnitude than the main trends in the components and would likely be ameliorated with more data. Figure \ref{fig:variance} plots the singular values of $\tilde{X}$ normalized by the sum of all singular values. From the plot, we see that a significant portion of the variation in the traffic flow is explained by the first few principal components of the decomposition. 
\begin{figure} \centering \begin{tabular}{c c c} \includegraphics[width=.48\textwidth]{C0vC1DOW}& \includegraphics[width=.48\textwidth]{C0vC1MonthLabeled}\\ (a)&(b)\\ \includegraphics[width=.48\textwidth]{C2vC1DOW}&\includegraphics[width=.48\textwidth]{C2vC1MonthLabeled}\\ (c)&(d) \end{tabular} \caption{Scatter plots of weights for pairs of principal components. Each day is indicated by a small marker and the centroid average for each label category is indicated with a large hatched marker. (a) The weight of component 1 versus the weight of component 2 for each day, labeled by day of week. The centroid points for all four days are close, supporting the notion that no further clustering by day-of-week is necessary. (b) The weight of component 1 versus the weight of component 2, labeled by month. A large negative weight for component 1 indicates days when work/business traffic is low. (c) The weight of component 2 versus the weight of component 3 labeled by day. There is a relationship between day of week and the weight of component 3 but it is small compared to the spread of weights. (d) The weight of component 2 versus the weight of component 3 labeled by month. Positive weights for component 3 indicate days when school is not in session.} \label{fig:score} \end{figure} In Figure \ref{fig:score}, we present scatter plots of the weights for pairs of components. In Figure \ref{fig:score}a we plot the weight of component 1 versus the weight of component 2 for each day where the data is labeled by the day of the week. In Figure \ref{fig:score}b we again plot the weight of component 1 versus the weight of component 2 but now label the data by month. Similarly, Figure \ref{fig:score}c and Figure \ref{fig:score}d plot the weight of component 3 versus the weight of component 2 and label the data by day and by month, respectively. Each marker corresponds to one day and the centroid average for each label category is indicated with a large hatched marker. 
We now discuss some clear trends that emerge from these plots. In Figure \ref{fig:score}b, it is apparent that a large negative weight for component 1 indicates a nonbusiness day; all of the labeled days are on or near holidays in the U.S. except February 24 for which there were school and business closings due to weather conditions, \emph{i.e.}, it was a snow day. This observation is congruous with the plot of principal component 1 in Figure \ref{fig:pca} which corresponds to overall higher traffic volume, especially during the morning and afternoon periods. Thus, a negative score for component 1 indicates lower traffic during these periods. Furthermore, the morning and afternoon peaks for component 1 in Figure \ref{fig:pca} correspond to reversed commute directions, \emph{i.e.}, the morning peak for WB LT corresponds with the afternoon peak of NB RT, likewise for the NB T peak in the morning and the SB T peak in the afternoon. Approximately one mile to the north of the intersection is a school. In Figure \ref{fig:score}d, we see that a positive score for component 2 indicates days when school is not in session, which includes the entire months of June and July except June 1--4, which are labeled. We additionally label dates when school is not in session due to holidays or weather conditions; April 13--16 is the Spring Break holiday. Again, this observation is congruous with the plot of principal component 2 in Figure \ref{fig:pca}, which exhibits two particularly telling features. First, this component has narrow spikes around 7:30 and 15:30, corresponding to the school session. Second, the WB LT movement clearly differs from the others; this is the only movement that is not going to or coming from the north (\emph{i.e.}, Southbound leg) where the school is located. 
Figure \ref{fig:score}c indicates that the weight of component 3 generally increases from Monday to Thursday, although the increase is modest compared to the spread of the weights for this component, and the centroids in Figure \ref{fig:score}a are closely clustered. Thus, these plots generally support our decision to cluster Monday--Thursday together. Lastly, we observe longer-term trends in the data. In Figure \ref{fig:score}d, the score of component 3 generally increases from December to May, the months when school is in session. This may correspond with seasonal variations; as the plot of principal component 3 in Figure \ref{fig:pca} suggests, a larger score for component 3 indicates more traffic in the evening. The days get longer from December to May, which may explain higher evening traffic. In Figure \ref{fig:score}b, there is also an increase in the score of component 1 per month for all months December to July, although this trend is somewhat obscured by the axis scaling. This suggests that for the entire data set, there is a general increase in business-related traffic, which may be seasonal or may reflect improving economic conditions from December 2014 to July 2015. \subsection{Discussion} By its nature, a PC-based approach offers a more nuanced interpretation of the data than cluster-based approaches, as we saw above, since it indicates the \emph{degree} to which a set of measurements exhibits a particular component. For example, rather than simply identifying that traffic patterns are different in the Winter season than in the Spring season (which clustering may reveal), we are able to quantify this difference. Furthermore, a PC-based approach retains the ability to identify categorical differences in the data. 
For example, our analysis indicates that traffic patterns for days when school is in session are qualitatively different than when school is not in session. We also note that in a PC-based approach the components frequently offer an interpretation or explanation for the observed trends. For example, we could associate each of the observed trends in the scatter plots of Figure \ref{fig:score} with an intuitive hypothesis regarding the variation in traffic movement flows based on the principal components in Figure \ref{fig:pca}. We note that the trends observed above are unique to the available dataset. This means that the principal components and corresponding interpretations are unlikely to transfer to other datasets obtained from other traffic intersections or networks. An interesting direction for future research is to study the transferability of prediction and learning algorithms to other datasets. \section{Traffic Prediction from Low-Rank Structure} \label{sec:traff-pred-from} Suppose it were possible to obtain an estimate of the weight vector $w(d)$ for some day $d\in \{1,\ldots,D\}$ by using measurements only up to some time $T^*<T$, that is, from the vector \begin{align} \label{eq:6} z^d&=\begin{bmatrix}(z^d_1)^{\intercal}&\ldots&(z^d_M)^{\intercal}\end{bmatrix}^{\intercal} \in \mathbb{R}^{T^*M} \end{align} where \begin{align} \label{eq:7} z^d_m=\begin{bmatrix}x^d_m(1)&\ldots&x^d_m(T^*)\end{bmatrix}^{\intercal}\in\mathbb{R}^{T^*}\quad m=1,\ldots,M. \end{align} Then it would be possible to \emph{predict} traffic flow for times $t>T^*$ by constructing an estimate of $x^d$ as $\hat{x}^d=\bar{x}+Q\hat{w}(d)$ (see \eqref{eq:5}) where $\hat{x}^d$ is our estimate of the traffic flow for the entire day and $\hat{w}(d)$ is our estimate of the weight vector $w(d)$. A naive approach to estimating $w(d)$ using measurements up to time $T^*$ is to project the vector $z^d$ onto the corresponding truncation of each $q^i$. 
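The naive truncated-projection estimate can be sketched as follows (synthetic data with shrunk dimensions; since the truncated components are generally not orthonormal, a least-squares fit onto them is used here). Note that the entries of $z^d$ are not the first $T^*M$ entries of $x^d$: they are the first $T^*$ entries of each movement's block.

```python
import numpy as np

rng = np.random.default_rng(2)
D, M, T, N, T_star = 50, 3, 8, 2, 5   # shrunk dimensions for illustration

X = rng.normal(size=(D, M * T))
x_bar = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - x_bar, full_matrices=False)
Q = Vt[:N].T                                    # components q^i

# Indices observed by time T*: the first T* entries of each movement block.
obs = (np.arange(M)[:, None] * T + np.arange(T_star)).ravel()

d = 0
z_d = X[d, obs]                                 # partial observation z^d
Q_obs = Q[obs]                                  # truncated components

# Naive weight estimate: least-squares fit onto the truncated components.
w_hat, *_ = np.linalg.lstsq(Q_obs, z_d - x_bar[obs], rcond=None)

# Full-day prediction from the estimated weights.
x_hat = x_bar + Q @ w_hat
```

This sketch makes concrete what the next paragraph criticizes: components whose energy lies mostly after $T^*$ are nearly invisible in `Q_obs`, so their weights are poorly determined by the partial observation.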
However, this approach is unreliable because the PC-based decomposition uses data for the whole day and does not consider that we wish to make a prediction after time $T^*$. For example, consider principal component 3 in Figure \ref{fig:pca} which, as argued in Section \ref{sec:case-study-principal}, corresponds with increased/decreased traffic in the evening. This component lies largely in the direction corresponding to flow measurements later in the day and thus it is difficult to predict $w^3(d)$ given $z^d$ for $T^*$ early in the day. To overcome this limitation, we propose computing a set of components that simultaneously explains the variation in the measured data \textit{before} time $T^*$ and correlates with a second set of components that explains the variation in the traffic flow data measured \textit{after} time $T^*$. To make this precise, we define the following vector of flow measurements after time $T^*$, complementary to $z^d$ defined in \eqref{eq:6}--\eqref{eq:7}: \begin{align} \label{eq:12} y^d&=\begin{bmatrix}(y^d_1)^{\intercal}&\ldots&(y^d_M)^{\intercal}\end{bmatrix}^{\intercal}\in \mathbb{R}^{(T-T^*)M} \end{align} where \begin{align} \label{eq:13} y^d_m=\begin{bmatrix}x^d_m(T^*+1)&\ldots&x^d_m(T)\end{bmatrix}^{\intercal}\in \mathbb{R}^{(T-T^*)},\quad m=1,\ldots,M. \end{align} Let \begin{alignat}{2} \label{eq:14} \bar{z}&=\frac{1}{D}\sum_{d=1}^Dz^d,\qquad& \bar{y}&=\frac{1}{D}\sum_{d=1}^Dy^d \end{alignat} and define \begin{alignat}{2} \label{eq:15} Z&= \begin{bmatrix} (z^1)^{\intercal}\\ \vdots\\ (z^D)^{\intercal} \end{bmatrix} ,\qquad& Y&= \begin{bmatrix} (y^1)^{\intercal}\\ \vdots\\ (y^D)^{\intercal} \end{bmatrix}\\ \label{eq:15-2}\tilde{Z}&=Z-\mathbf{1}_D\bar{z}^{\intercal},\qquad& \tilde{Y}&=Y-\mathbf{1}_D\bar{y}^{\intercal}. \end{alignat} The matrix $Z$ collects all flow measurements up to time $T^*$ and the matrix $Y$ collects all flow measurements after time $T^*$ onward to the final time $T$. 
The matrices $\tilde{Z}$ and $\tilde{Y}$ are mean-centered versions of $Z$ and $Y$. Note that $Z$ and $Y$ partition $X$, and, similarly, $\tilde{Z}$ and $\tilde{Y}$ partition $\tilde{X}$. Our objective is to find a collection of \emph{predictor} components $p^1,p^2,\ldots,p^N$ with each $p^i\in\mathbb{R}^{T^*M}$; a collection of \emph{predicted} components $c^1,c^2,\ldots,c^N$ with each $c^i\in\mathbb{R}^{(T-T^*)M}$; and, for each $d=1,\ldots,D$, a vector of weights $\omega(d)\in\mathbb{R}^N$ with \begin{align} \label{eq:16} \omega(d)=\begin{bmatrix}\omega^1(d)&\omega^2(d)&\ldots&\omega^N(d)\end{bmatrix}^{\intercal} \end{align} such that \begin{align} \label{eq:17} z^d&\approx \bar{z} +\sum_{i=1}^N\omega^i(d)p^i\\ \label{eq:17-2} y^d&\approx \bar{y} +\sum_{i=1}^N\omega^i(d)c^i. \end{align} Let \begin{align} \label{eq:19} \omega^i=\begin{bmatrix}\omega^i(1)&\omega^i(2)&\ldots&\omega^i(D)\end{bmatrix}^{\intercal}\in\mathbb{R}^D \quad \text{for all }i=1,\ldots,N. \end{align} The use of \eqref{eq:17}--\eqref{eq:17-2} for prediction is immediately apparent: if we are able to determine the weights $\omega(d)$ using only $z^d$, that is, measurements up to time $T^*$, then we are able to predict traffic flow after time $T^*$ using the same weights and the collection of predicted components $c^i$, $i=1,\ldots, N$. We formalize this prediction procedure in Section \ref{sec:pred-from-latent}. \subsection{The Projection to Latent Structures Algorithm} \label{sec:proj-latent-struct} To compute the collection of predictor and predicted components, we use a statistical technique called \emph{projection to latent structures (PLS)}, also known as \emph{partial least squares}. Given two sets of measured variables (\emph{e.g.}, measured traffic flow over two time intervals) $Z$ and $Y$, the PLS algorithm identifies low-rank approximations of both sets as in \eqref{eq:17}--\eqref{eq:17-2} in such a way that the low-rank components are highly correlated. 
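To make the data layout concrete, the following NumPy sketch builds the matrices $Z$, $Y$, $\tilde{Z}$, and $\tilde{Y}$ from a synthetic stand-in for the flow data; the dimensions and the Poisson draws are illustrative assumptions, not the case-study measurements:

```python
import numpy as np

# Synthetic stand-in for the flow data: D days, M movements, T 15-minute bins.
# All sizes and the Poisson model are illustrative assumptions.
rng = np.random.default_rng(0)
D, M, T, T_star = 132, 12, 96, 40

X3 = rng.poisson(lam=30.0, size=(D, M, T)).astype(float)  # (day, movement, time)

# Partition each day's measurements at T*: Z stacks the per-movement vectors
# z^d_m (times 1..T*) and Y the vectors y^d_m (times T*+1..T), one row per day.
Z = X3[:, :, :T_star].reshape(D, -1)          # D x (T* M)
Y = X3[:, :, T_star:].reshape(D, -1)          # D x ((T - T*) M)

# Mean-center across days to obtain the matrices Z~ and Y~.
z_bar, y_bar = Z.mean(axis=0), Y.mean(axis=0)
Z_t, Y_t = Z - z_bar, Y - y_bar
```

The reshape assumes the movement-major stacking used in \eqref{eq:6}--\eqref{eq:7}; a dataset stored time-major would need its axes swapped first.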
The low-rank approximations are then used for prediction. While the PLS algorithm is a general statistical tool, it was developed primarily in the domain of chemometrics, beginning with the work of Wold \emph{et al.} \cite{Wold:1984fy}, and has since been applied to a wide range of chemical data analysis problems. References \cite{Geladi:1986fq}, \cite{Manne:1987xw}, and \cite{Hoskuldsson:1988by} provide early overviews of the PLS algorithm with emphasis on applications to chemometrics, and references \cite{Rosipal:2006kx} and \cite{Abdi:2010dn} provide more recent tutorials on the PLS algorithm. The PLS technique is iterative; it first computes a pair of components $(p^1,c^1)$ from the data matrices $\tilde{Z}$ and $\tilde{Y}$, and then deflates the data matrices by removing the contribution of this pair of components. Next, a new pair $(p^2,c^2)$ is computed from the updated data matrices, \emph{etc.} To determine the first pair $(p^1,c^1)$, we solve the following optimization problem: \begin{alignat}{2} \label{eq:18} (r^*,s^*)=\argmax_{r,s}&\qquad &&\hspace*{-25pt} (\tilde{Z}r)^{\intercal}(\tilde{Y}s)\\ \text{such that}& &|| r ||_2&=1\\ &&||s||_2&=1. \end{alignat} The interpretation of \eqref{eq:18} is as follows: we wish to find directions $r^*\in\mathbb{R}^{T^*M}$ and $s^*\in\mathbb{R}^{(T-T^*)M}$ that maximize the empirical covariance of the score vectors $u=\tilde{Z}r^*\in\mathbb{R}^D$ and $v=\tilde{Y}s^*\in\mathbb{R}^D$. These score vectors contain the projection of each day's data $z^d$ and $y^d$ onto the directions $r^*$ and $s^*$. We define the first score component vector \begin{align} \label{eq:20} \omega^1\triangleq \frac{1}{||\tilde{Z}r^*||_2}\tilde{Z}r^*. \end{align} Note that $r^*$ and $s^*$ are the first left and right singular vectors, respectively, of $\tilde{Z}^{\intercal}\tilde{Y}$.
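The first component pair can therefore be obtained from a single singular value decomposition. The sketch below, on synthetic centered matrices standing in for $\tilde{Z}$ and $\tilde{Y}$, recovers $r^*$, $s^*$, and $\omega^1$, and spot-checks the optimality of \eqref{eq:18} against random unit directions:

```python
import numpy as np

# Toy centered data matrices standing in for Z~ (D x T*M) and Y~ (D x (T-T*)M);
# the sizes are illustrative assumptions.
rng = np.random.default_rng(1)
D, p, q = 60, 40, 30
Z_t = rng.standard_normal((D, p)); Z_t -= Z_t.mean(axis=0)
Y_t = rng.standard_normal((D, q)); Y_t -= Y_t.mean(axis=0)

# r*, s* are the leading left/right singular vectors of Z~^T Y~, which
# maximize (Z~ r)^T (Y~ s) over unit vectors r, s.
U, sv, Vt = np.linalg.svd(Z_t.T @ Y_t)
r_star, s_star = U[:, 0], Vt[0, :]

# First score vector omega^1: the normalized projection of the days onto r*.
u = Z_t @ r_star
omega1 = u / np.linalg.norm(u)

# Spot-check optimality: no random unit pair (r, s) beats (r*, s*).
best = (Z_t @ r_star) @ (Y_t @ s_star)
for _ in range(200):
    r = rng.standard_normal(p); r /= np.linalg.norm(r)
    s = rng.standard_normal(q); s /= np.linalg.norm(s)
    assert (Z_t @ r) @ (Y_t @ s) <= best + 1e-9
```

The attained objective equals the leading singular value of $\tilde{Z}^{\intercal}\tilde{Y}$, which is what makes the SVD route exact rather than heuristic.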
To obtain $p^1$ we project $\tilde{Z}$ onto $\omega^1$, and, similarly, to obtain $c^1$ we project $\tilde{Y}$ onto $\omega^1$: \begin{align} \label{eq:21} p^1=\tilde{Z}^{\intercal}\omega^1\\ c^1=\tilde{Y}^{\intercal}\omega^1. \end{align} Note that our treatment of the data matrices $\tilde{Z}$ and $\tilde{Y}$ is asymmetric, that is, $\omega^1$ is obtained from $\tilde{Z}$ and $r^*$. This asymmetry arises because we ultimately wish to use the scores and components for \emph{prediction}. We next {deflate} the data matrices $\tilde{Z}$ and $\tilde{Y}$ using the first score vector $\omega^1$ and the computed components: \begin{align} \label{eq:22} \tilde{Z}^+=\tilde{Z}-\omega^1(p^1)^{\intercal}\\ \tilde{Y}^+=\tilde{Y}-\omega^1(c^1)^{\intercal}. \end{align} Above, $\tilde{Z}^+$ and $\tilde{Y}^+$ are the updated data matrices. To compute $\omega^2$, $p^2$, and $c^2$ we repeat the above procedure, replacing $\tilde{Z}$ and $\tilde{Y}$ with their updated versions $\tilde{Z}^+$ and $\tilde{Y}^+$. We repeat this process until we obtain $N$ score vectors and components, where $N$ is a design parameter. We gather the computed scores and components: \begin{align} \label{eq:23} \Omega&=\begin{bmatrix}\omega^1&\ldots&\omega^N\end{bmatrix}\in\mathbb{R}^{D\times N}\\ P&=\begin{bmatrix}p^1&\ldots&p^N\end{bmatrix}\in\mathbb{R}^{(T^*M)\times N}\\ \label{eq:23-3} C&=\begin{bmatrix}c^1&\ldots&c^N\end{bmatrix}\in\mathbb{R}^{((T-T^*)M)\times N}. \end{align} We then have $ \tilde{Z}\approx \Omega P^{\intercal}$ and $ \tilde{Y}\approx \Omega C^{\intercal}.$ To compute $r^*$ and $s^*$, one approach is to first form the product $\tilde{Z}^{\intercal}\tilde{Y}$ and then compute its singular value decomposition, an operation that must be performed at each iteration.
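The full iterative procedure, with deflation of both data matrices after each component pair, can be sketched as follows. On synthetic data of exact rank 3 (an assumption made so the reconstruction is exact), three components recover $\tilde{Z}=\Omega P^{\intercal}$ and $\tilde{Y}=\Omega C^{\intercal}$:

```python
import numpy as np

def pls_components(Z_t, Y_t, N):
    """Sketch of the iterative PLS procedure: at each step take the leading
    singular pair of Zc^T Yc, form the score w and the components p = Zc^T w,
    c = Yc^T w, then deflate Zc <- Zc - w p^T and Yc <- Yc - w c^T."""
    Zc, Yc = Z_t.copy(), Y_t.copy()
    scores, predictor, predicted = [], [], []
    for _ in range(N):
        U, _, Vt = np.linalg.svd(Zc.T @ Yc)
        u = Zc @ U[:, 0]
        w = u / np.linalg.norm(u)
        p, c = Zc.T @ w, Yc.T @ w
        Zc, Yc = Zc - np.outer(w, p), Yc - np.outer(w, c)
        scores.append(w); predictor.append(p); predicted.append(c)
    return (np.column_stack(scores), np.column_stack(predictor),
            np.column_stack(predicted))

# Synthetic rank-3 data: both sides driven by the same 3 latent weights.
rng = np.random.default_rng(2)
D = 50
W_true = rng.standard_normal((D, 3))
Z_t = W_true @ rng.standard_normal((3, 40)); Z_t -= Z_t.mean(axis=0)
Y_t = W_true @ rng.standard_normal((3, 25)); Y_t -= Y_t.mean(axis=0)

Omega, P, C = pls_components(Z_t, Y_t, N=3)
# With exactly rank-3 data, three components give Z~ = Omega P^T, Y~ = Omega C^T.
assert np.allclose(Omega @ P.T, Z_t) and np.allclose(Omega @ C.T, Y_t)
```

The deflation makes successive score vectors orthogonal, so the score matrix $\Omega$ has orthonormal columns in this sketch.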
For traffic flow measurements, it is typically the case that $D\ll \min\{T^*M,(T-T^*)M\}$; thus $\tilde{Z}^{\intercal}\tilde{Y}$ is a large matrix of dimension $T^*M\times (T-T^*)M$ and computing its singular value decomposition is computationally expensive. There exist efficient implementations of the PLS algorithm that instead use the \emph{kernel} matrices $\tilde{Z}\tilde{Z}^{\intercal}$ and $\tilde{Y}\tilde{Y}^{\intercal}$ to compute the predictor and predicted components by finding the principal eigenvector and eigenvalue pair of a (much smaller) $D\times D$ matrix \cite{Rannar:1994ee}. In summary, the PLS algorithm is described by \eqref{eq:18}--\eqref{eq:23-3}. \subsection{Prediction From Latent Structures} \label{sec:pred-from-latent} Once the predictor and predicted components have been computed, we are able to construct a prediction of future traffic flow given sample measurements. To this end, let \begin{align} \label{eq:24} z^s\in\mathbb{R}^{T^*M} \end{align} denote measured traffic flow for the $M$ turn movements up to time $T^*$. Our objective is to use $z^s$ along with the predictor and predicted components calculated from historical data to predict traffic flow for our sample day for time periods after $T^*$. To do so, we first calculate $\hat{\omega}$, a vector of scores for the sample day. As above, let $\bar{z}$ and $\bar{y}$ denote the means of historical traffic flow up to time $T^*$ and after $T^*$, respectively. The score vector is computed as \begin{align} \label{eq:25} \hat{\omega}=\big((z^s-\bar{z})^{\intercal}(P^{\intercal})^\dagger\big)^{\intercal}\in\mathbb{R}^{N} \end{align} where $(P^{\intercal})^\dagger$ denotes the Moore-Penrose pseudoinverse of $P^{\intercal}$. Then \begin{align} \label{eq:26} \hat{y}^s&=\hat{\omega}^{\intercal}C^{\intercal} +\bar{y}\in\mathbb{R}^{(T-T^*)M} \end{align} is the prediction of traffic flow after time $T^*$.
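A minimal sketch of this two-step prediction, \eqref{eq:25}--\eqref{eq:26}, on a synthetic sample day generated exactly by the low-rank model (all dimensions and the random component matrices are illustrative assumptions):

```python
import numpy as np

# Assumed inputs (shapes are illustrative): PLS component matrices P and C,
# historical means z_bar and y_bar, and a sample day's morning measurement z_s.
rng = np.random.default_rng(3)
pdim, qdim, N = 40, 25, 3
P = rng.standard_normal((pdim, N))
C = rng.standard_normal((qdim, N))
z_bar, y_bar = rng.standard_normal(pdim), rng.standard_normal(qdim)

w_true = rng.standard_normal(N)      # latent weights of a hypothetical day
z_s = z_bar + P @ w_true             # its morning flows under the model

w_hat = np.linalg.pinv(P.T).T @ (z_s - z_bar)   # scores, eq. (25)
y_hat = C @ w_hat + y_bar                        # predicted flows, eq. (26)

# For data generated exactly by the low-rank model, the weights are recovered.
assert np.allclose(w_hat, w_true)
```

For real data $z^s$ will not lie exactly in the span of $P$, and the pseudoinverse returns the least-squares score vector instead.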
Combining \eqref{eq:25} and \eqref{eq:26}, we obtain one succinct equation for prediction: \begin{align} \label{eq:27} \hat{y}^s=(z^s-\bar{z})^{\intercal} (P^{\intercal})^\dagger C^{\intercal}+\bar{y}. \end{align} Notice that after $P^{\intercal}$ and $C^{\intercal}$ are computed via the PLS algorithm and the pseudoinverse of $P^{\intercal}$ is computed and stored, predicting traffic flow given a sample vector of measurements requires only matrix multiplications and is thus computationally inexpensive. \subsection{Extensions and Case Study} We now discuss a few immediate extensions to the PLS prediction algorithm presented above that we utilize in Section \ref{sec:traff-pred-contr}. First, we assumed above that our objective is to predict flow measurements for all time periods beyond time $T^*$ up to the end of the day, $T$. However, our objective may be to predict traffic flow for a shorter horizon or for a period not immediately subsequent to the measurement period. For example, we may wish to predict traffic during the evening from observed traffic flow in the morning. We can accommodate this scenario by appropriately truncating the matrices $Z$ and $Y$ above. Second, we assumed above that the interval of time between subsequent flow measurements, $\Delta$, is the same for the measured data as well as the prediction. However, it is straightforward to accommodate differences in these intervals, or even nonuniform intervals, so long as the measurement times are the same for each day. For example, we may wish to use flow measurements at 15-minute intervals up to time $T^*$ to predict traffic flow at one-hour intervals thereafter. To demonstrate the PLS prediction algorithm as well as these extensions, we return to our case study. We let $T^*$ correspond to 10:00 and consider the case when flow measurements are available up to $T^*$ in 15-minute increments and available thereafter in 1-hour increments.
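As a small illustration of the mixed-resolution extension, the sketch below keeps 15-minute bins before $T^*$ (10:00 is quarter-hour bin 40 from midnight) and aggregates the remaining bins into hourly totals before the $Y$ side of the PLS computation; the flow values are synthetic:

```python
import numpy as np

# One synthetic day of one movement at 15-minute resolution (96 bins).
# T* = 10:00 is quarter-hour bin 40; the Poisson values are made up.
T, T_star, bins_per_hour = 96, 40, 4
rng = np.random.default_rng(4)
day = rng.poisson(lam=20.0, size=T).astype(float)

z_part = day[:T_star]                                  # fine-grained predictor side
tail = day[T_star:]                                    # 56 quarter-hour bins
y_part = tail.reshape(-1, bins_per_hour).sum(axis=1)   # hourly totals, 10:00-24:00

assert len(y_part) == 14 and y_part.sum() == tail.sum()
```

Applying this aggregation to every day and movement yields a coarser $Y$ matrix while $Z$ retains the finer resolution, as in the case study.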
Figure \ref{fig:PLS} shows the first three predictor and predicted components generated from the dataset. To emphasize the different measurement intervals, individual data points are indicated with a solid marker point. Comparing Figure \ref{fig:PLS} to Figure \ref{fig:pca}, we see that each predictor/predicted component pair of Figure \ref{fig:PLS} resembles the corresponding principal component of Figure \ref{fig:pca}. The differences reflect the fact that the predictor and predicted components are calculated to maximize their correlation. \begin{figure} \centering \begin{tabular}{l l} \includegraphics[height=1.6in]{PLSpredictor_1}& \includegraphics[height=1.6in]{PLSpredicted_1}\\ \includegraphics[height=1.6in]{PLSpredictor_2}& \includegraphics[height=1.6in]{PLSpredicted_2}\\ \includegraphics[height=1.6in]{PLSpredictor_3}& \includegraphics[height=1.6in]{PLSpredicted_3} \end{tabular} \caption{Projection to latent structures algorithm. We take $T^*$ corresponding to 10:00 and consider the case for which flow measurements are available in 15-minute increments up to 10:00 and available in 1-hour increments thereafter. The left column of plots contains the first three predictor components; the right column of plots contains the corresponding predicted components. } \label{fig:PLS} \end{figure} Figure \ref{fig:pred_ex} illustrates the use of the PLS algorithm for predicting traffic flow. We consider three unusual days from our dataset: February 24, which, as described above, experienced winter weather resulting in school closures; July 2, which is a Thursday and preceded a long holiday weekend in the U.S.; and January 1, which is a significant holiday and resulted in dramatically different traffic patterns. As a demonstration, we plot results for only two of the twelve movements; the first (respectively, second) row of Figure \ref{fig:pred_ex} plots flow measurements and predictions for the SB through movement (respectively, NB right turn movement).
The blue trace shows the average flow over the fourteen one-hour periods from 10:00 to midnight, the gold trace shows the actual flow on the given days, and the green trace shows the flow as predicted using \eqref{eq:27} with four predictor/predicted component pairs (three of which are shown in Figure \ref{fig:PLS}). To compute the prediction, we employ leave-one-out cross validation whereby, for each unusual day, the predictor/predicted components are computed from the dataset excluding the sample day of interest. From the plots, we see that the PLS algorithm correctly predicts the below-average flow on February 24 and the above-average flow on July 2, where the algorithm nearly exactly predicts the peak flow for both movements between 17:00 and 18:00. In addition, the algorithm adeptly predicts the well below-average flow on January 1, for which traffic conditions greatly differ from the norm. Thus, while the PLS algorithm is ultimately a linear prediction scheme, as is apparent in \eqref{eq:27}, the technique ably accommodates significant variation in the traffic flow. To quantitatively assess the quality of the prediction, we compute a prediction for each of the 132 days in the dataset using leave-one-out cross validation and compute the one-norm distance from the prediction to the actual flow measurements. That is, for each day $d=1,\ldots,D$, we compute the prediction error $E_\text{pred}^d$ as \begin{align} \label{eq:28} E_\text{pred}^d=||y^d-\hat{y}^d||_1=\sum_{m=1}^M\sum_{t=T^*+1}^{T} |y_m^d(t)-\hat{y}_m^d(t)| \end{align} where $\hat{y}^d$ is the predicted traffic flow on day $d$ computed as in Section \ref{sec:pred-from-latent}. We use the one-norm as our distance metric because it corresponds to the total absolute difference between the number of vehicles measured and the number predicted at the intersection along each movement.
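The error metrics amount to one-norm distances; a tiny worked example with made-up four-entry vectors (not case-study values) shows the computation and the normalized error decrease used below:

```python
import numpy as np

# Tiny made-up example of the error metrics: actual afternoon flows y_d, a
# prediction y_hat_d, and the historical mean y_bar for four time/movement bins.
y_d     = np.array([10., 30., 55., 40.])
y_hat_d = np.array([12., 28., 50., 41.])
y_bar   = np.array([20., 25., 45., 35.])

E_pred = np.abs(y_d - y_hat_d).sum()    # one-norm prediction error: 10 vehicles
E_base = np.abs(y_d - y_bar).sum()      # baseline error vs. the mean: 30 vehicles
decrease = (E_base - E_pred) / E_base   # normalized error decrease

assert E_pred == 10.0 and E_base == 30.0
```

Here the prediction cuts the baseline error by two thirds; the scatter plot discussed next reports this quantity for every day.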
Likewise, we define the baseline error to be the one-norm distance between the actual flow measurements and the average flow measurements\footnote{Since we employ a cross validation scheme, $\bar{y}$ will change slightly for each day because we remove a different set of measurements from the dataset in each case.}: \begin{align} \label{eq:29} E^d_\text{base}=||y^d-\bar{y}||_1. \end{align} Figure \ref{fig:scatter} is a scatter plot of the normalized error decrease $(E_\text{base}^d-E_\text{pred}^d)/E_\text{base}^d$ versus the baseline error with no prediction $E_{\text{base}}^d$, for each day $d=1,\ldots, D$. Excluded are December 24 and January 1, which both have baseline errors exceeding 14\,000 with prediction errors less than 5\,500, an error decrease of over 60\%. Nearly all of the points (113 out of 132) correspond to positive error decreases, indicating that the prediction is almost always an improvement over the baseline average. The circles (respectively, crosses) are those days for which the total flow volume throughout the day is below (respectively, above) average. We see that the PLS algorithm performs well in both cases. For most days, the improvement in error is approximately 10\% to 20\%. However, for days on which the baseline error is high, the improvement is larger and can reach 50\% to 60\%, and this holds both for days with below-average traffic and for days with above-average traffic. Thus, for days that differ significantly from the average, the PLS prediction is especially accurate. There is one particularly anomalous point for which the baseline error is high, approximately 7\,000, but the normalized error decrease is not significant and is in fact slightly negative. It appears that, on this day, total traffic volume was only moderately higher than normal but traffic flow was significantly above average for some movements and below average for others, although the cause is unclear.
While the PLS algorithm was unable to predict this difference, we observe that it did not produce a prediction error significantly worse than the baseline error. \begin{figure} \centering \begin{tabular}{@{}l@{} @{}l@{} @{}l@{}} \multicolumn{3}{c}{Southbound Through Movement}\hspace{.6in}\\ {\includegraphics[height=1.55in]{predict_example_2015-2-24_mov1002}} & \includegraphics[height=1.55in]{predict_example_2015-7-2_mov1002}& \includegraphics[height=1.55in]{predict_example_2015-1-1_mov1002}\\ \multicolumn{3}{c}{Northbound Right Turn Movement}\hspace{.6in}\\ \includegraphics[height=1.55in]{predict_example_2015-2-24_mov1016}& \includegraphics[height=1.55in]{predict_example_2015-7-2_mov1016}& \includegraphics[height=1.55in]{predict_example_2015-1-1_mov1016}\\ \end{tabular} \caption{Example prediction results on three unusual days for two movements. On February 24, the algorithm correctly predicts below-average flow caused by winter weather and closed schools. On July 2, the algorithm correctly predicts above-average flow caused by the subsequent holiday weekend. The algorithm additionally predicts the well below-average traffic on the holiday January 1, which deviates substantially from the average flow.} \label{fig:pred_ex} \end{figure} \begin{figure} \begin{center} \begin{tabular}{r} \includegraphics[width=.8\textwidth]{improvement} \end{tabular} \end{center} \caption{Scatter plot of prediction error. The plot shows baseline error $E_\text{base}^d$ versus the normalized error decrease $(E_\text{base}^d-E_\text{pred}^d)/E_\text{base}^d$ for each day $d=1,\ldots,D$. Days with below-average (respectively, above-average) flow are indicated with circles (respectively, crosses).
For clarity, the plot excludes December 25 and January 1 which experience error decreases exceeding 60\% with baseline errors exceeding $14\,000$.} \label{fig:scatter} \end{figure} \section{Traffic Predictive Control} \label{sec:traff-pred-contr} How do we utilize traffic flow predictions based on low-rank structure to improve traffic control? We first review a common technique for programming traffic signal controllers and describe an approach for optimizing this technique using only the average flow measurements. We will then use traffic predictions from the PLS algorithm to improve upon this approach. A primary goal is to ensure the algorithm is easily applied to existing hardware and well-aligned with conventional traffic control practices. \subsection{Time-of-Day Signal Operation} A common approach to traffic control supported by nearly all traffic control hardware is \emph{time-of-day (TOD)} scheduling \cite[Chapter 5]{Koonce:2008gd} whereby a pre-defined control plan is applied during specified periods of the day. For example, a pre-defined plan may be specified for 6:00 to 9:00, and a different plan may be specified for 9:00 to 13:00, \emph{etc.} Typically, the TOD periods are obtained via limited data collection at the intersection or simply from prior experience. Once the TOD periods have been selected, timing parameters are designed for each period and applied to the controller, constituting a TOD plan. Typical timing parameters include the cycle time of the intersection and green time allocations for the turn movements. A TOD plan may be completely fixed, or it may be \emph{actuated}, in which case the presence or absence of queued vehicles extends green time allocations by a certain amount or leads to early termination of a phase. The timing parameters for a given TOD period are determined by expected turn movement flows at the intersection.
Since the TOD plan is applied at fixed periods and the parameters are fixed, the plan is designed around a set of nominal turn movement flows, for example, average turn movement flows over the TOD period. Methods for determining desirable timing parameters given nominal turn movement flows have a long history going back at least to the seminal work of \cite{Webster:1958cs} and are not the focus of this paper. Instead, we assume that what is required for signal timing is only an estimate of the turn movement flows during a TOD period, for which any signal timing optimization scheme may be employed. This approach is reasonable if traffic flow does not deviate excessively from these estimated flows during a TOD period. In the following section, we suggest an algorithm for identifying TOD periods that minimize such deviations. \subsection{Optimal Time-of-Day Segmentation} \label{sec:optimal-time-day} We focus on the problem of determining TOD periods and representative turn movement flows for each period. In the sequel, we will consider adjusting the TOD periods and representative turn movement flows based on predictions using the PLS algorithm. To this end, we now present an algorithm for optimally segmenting the 24-hour day into TOD periods. The intuitive idea is to identify segmentation times that minimize the variability in turn movement flows within each TOD period so that a particular fixed TOD plan works well throughout the period. For example, turn movement flows may remain relatively steady through the morning period but change substantially in the afternoon, requiring a different TOD plan. Increasing the number of TOD periods reduces variability in any given period, but there are practical limitations to the number of TOD periods that may be employed at an intersection.
For example, it typically takes several cycles totaling up to ten minutes to fully switch from one TOD plan to another; during this intermediate time, mobility at the intersection may be reduced. Furthermore, excessive changes to timing plans may confuse drivers. Traffic intersections commonly employ up to seven TOD plans throughout the day. Approaches for TOD segmentation proposed in the literature include randomized clustering algorithms \cite{Smith:2002xi, Guo:2014zt}, heuristic genetic algorithms \cite{Park:2003mq}, and simulation-based algorithms \cite{Brian-Park:2004ta}. To improve performance, the paper \cite{Guo:2014zt} suggests explicitly incorporating time as a variable in determining the clusters. Here, we consider an optimal segmentation approach suggested for generic datasets in \cite{Auger:1989jt} and adapt it to the context of traffic flow measurements. Like \cite{Guo:2014zt}, this approach explicitly accounts for time when identifying clusters. However, our approach is not based on $k$-means clustering and is able to identify optimal clusters in computational time quadratic in the number of time steps. In contrast, $k$-means clustering requires exponential computational time, although efficient heuristics exist. Suppose we wish to obtain $S$ TOD periods for some $S>1$. We consider the equivalent problem of choosing $S-1$ segmentation times $\tau_1, \tau_2, \ldots, \tau_{S-1}$ with each $\tau_i\in\{1,2,\ldots, T-1\}$ satisfying \begin{align} \label{eq:30} \tau_1<\tau_2< \ldots< \tau_{S-1} \end{align} so that \begin{align} \label{eq:31} \{\tau_{i-1}+1,\tau_{i-1}+2,\ldots,\tau_{i}\} \end{align} defines the $i$th TOD period for $i\in\{1,\ldots,S\}$, with $\tau_0:=0$ and $\tau_{S}:=T$.
To assess the quality of a given segmentation $(\tau_1,\ldots,\tau_{S-1})$, we consider a vector of turn movement flows $x\in\mathbb{R}^{TM}$ throughout the day and define the \emph{cost} of a time segment $(t_a,t_a+1,\ldots,t_b)$ given these flows for some $t_a\leq t_b$ as follows: \begin{align} \label{eq:33} \text{Cost}&=\min_{\mu\in\mathbb{R}^{P}} F(t_a,t_b,x,\mu) \end{align} where $\mu\in\mathbb{R}^P$ is a parameter vector of dimension $P\geq 1$ and $F(t_a,t_b,x,\mu)$ is a nonnegative function that is convex in $\mu$. We call $F(t_a,t_b,x,\mu)$ the \emph{fit} of $x$ with the parameter vector $\mu$ on time segment $(t_a,t_a+1,\ldots,t_b)$. In this way, $F(\tau_{i-1}+1,\tau_i,x,\mu)$ is the fit of $x$ with $\mu$ on TOD period $i$. For $t_b<t_a$, we define the fit $F$ to be $0$. To make this concrete, in this paper we assume $P=M$ so that $\mu\in\mathbb{R}^M$ and take \begin{align} \label{eq:32} F(t_a,t_b,x,\mu)&=\sum_{t=t_a}^{t_b}\Phi(x(t),\mu) \end{align} for $\Phi:\mathbb{R}^{M}\times\mathbb{R}^M\to\mathbb{R}_{\geq 0}$ a nonnegative function that is convex in its second argument. For period $i$, let $\mu_i\in\mathbb{R}^{M}$ denote the minimizer of \eqref{eq:33} with $t_a:=\tau_{i-1}+1$ and $t_b:=\tau_i$. We interpret $\mu_i$ as a vector of flows that suitably represents the turn movement flows throughout the $i$th period and is used to determine the signal timing parameters. For example, if \begin{align} \label{eq:48} \Phi(x,\mu)=(x-\mu)^{\intercal}(x-\mu), \end{align} then the minimizing $\mu_i$ is given by $\mu_{i}=\frac{1}{\tau_i-\tau_{i-1}}\sum_{t=\tau_{i-1}+1}^{\tau_i}x(t)$, that is, $\mu_{i}$ is the average flow for the movements during period $i$. Choosing $\Phi(x,\mu)$ as in \eqref{eq:48} penalizes the difference between the turn movement flow and the corresponding value in the parameter vector.
This choice equally penalizes turn movement flows that are above and below the parameter vector flow. However, it is often the case that flows exceeding the parameter vector cause greater performance degradation at the intersection than flows below the parameter vector. For this reason, we propose an asymmetric function \begin{align} \label{eq:34} \Phi(x(t),\mu)&=\sum_{m=1}^M\varphi(x_m(t),\mu_m)\\ \varphi(a,b)&= \begin{cases} C(a-b)^2&\text{if }a>b\\ (a-b)^2&\text{else} \end{cases} \label{eq:46} \end{align} for some choice $C\geq 1$, where $\mu_m$ denotes the $m$th entry of $\mu$. For $C=1$, we recover \eqref{eq:48}. Finally, we choose segmentation times to minimize the sum of costs for all TOD periods: \begin{align} \label{eq:35} (\tau_1,\ldots,\tau_{S-1})=\argmin_{\tilde{\tau}_1,\ldots,\tilde{\tau}_{S-1}}\sum_{i=1}^S\min_{\mu}F(\tilde{\tau}_{i-1}+1,\tilde{\tau}_i,x,\mu). \end{align} Convexity of $\Phi$ ensures that \eqref{eq:33} is a convex optimization problem. Assuming that solving \eqref{eq:33} requires computational time $\Gamma(M)$, \eqref{eq:35} can be solved in time $O(T^2\Gamma(M))$ \cite{Auger:1989jt}. We note that our segmentation approach prevents oscillations in implemented timing plans since TOD periods are required to be contiguous intervals of time. In contrast, some clustering-based segmentation algorithms in the literature do not account for contiguity of the TOD periods, and the resulting signal timing plans have been observed to oscillate. We return to our case study dataset and apply the TOD segmentation algorithm above, taking $\varphi$ as defined in \eqref{eq:46} with $C=2$. We first take $x$ to be the mean flow across the dataset as defined in \eqref{eq:2}--\eqref{eq:3}, that is, we first consider $x:=\bar{x}$ in \eqref{eq:33}--\eqref{eq:35}. The resulting segmentation constitutes the nominal TOD periods.
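A sketch of the segmentation dynamic program on a toy one-movement day follows. For brevity it uses the symmetric squared-error fit ($C=1$, so the inner minimizer is the segment mean); the asymmetric case would replace the mean with a one-dimensional convex solve per movement. The naive precomputation of all segment costs here is more expensive than the incremental evaluation used by the algorithm of \cite{Auger:1989jt}:

```python
import numpy as np

def segment_cost(x, a, b):
    """min over mu of sum_{t=a..b} ||x(t) - mu||^2: with the squared-error
    fit the minimizer is the segment mean."""
    seg = x[a:b + 1]
    return float(((seg - seg.mean(axis=0)) ** 2).sum())

def optimal_segmentation(x, S):
    """Dynamic program over segment end times: best[i][t] is the minimum
    total cost of covering times 0..t with i contiguous segments."""
    T = x.shape[0]
    cost = [[segment_cost(x, a, b) if a <= b else np.inf for b in range(T)]
            for a in range(T)]
    best = np.full((S + 1, T), np.inf)
    argt = np.zeros((S + 1, T), dtype=int)
    best[1] = cost[0]
    for i in range(2, S + 1):
        for t in range(T):
            for u in range(t):
                c = best[i - 1, u] + cost[u + 1][t]
                if c < best[i, t]:
                    best[i, t], argt[i, t] = c, u
    taus, t = [], T - 1
    for i in range(S, 1, -1):        # backtrack the S-1 segmentation times
        t = argt[i, t]
        taus.append(int(t))
    return sorted(taus), float(best[S, T - 1])

# Toy one-movement day: flow is low, then high, then low; three clear periods.
x = np.array([[1.], [1.], [1.], [9.], [9.], [9.], [2.], [2.]])
taus, total = optimal_segmentation(x, S=3)
assert taus == [2, 5] and total == 0.0   # splits exactly at the two changes
```

Because the recursion only ever joins contiguous intervals, the returned periods are contiguous by construction, which is the property that rules out oscillating timing plans.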
In Figure \ref{fig:seg}, we plot the results of the optimal segmentation problem \eqref{eq:35} with $S=4$ and $S=7$ for this nominal case. In general, we see that the segmentation algorithm chooses segmentation times corresponding to when turn movement flows are changing rapidly, resulting in intuitive divisions of the day. For example, with seven TOD periods, we clearly obtain the morning peak period from 7:00 to 9:00 and an afternoon peak period from 14:45 to 18:15. \begin{figure} \centering \begin{tabular}{@{}l@{} @{}l@{}} \includegraphics[height=1.6in]{switch_times_4Plans}& \includegraphics[height=1.6in]{switch_times_7Plans} \end{tabular} \caption{Optimal segmentation of average traffic flow over the day. (a) Optimal segmentation allowing for four time-of-day periods. (b) Optimal segmentation allowing for seven time-of-day periods.} \label{fig:seg} \end{figure} \subsection{Predictive Time-of-Day Plans} \label{sec:predictive-time-day} The segmentation algorithm presented in Section \ref{sec:optimal-time-day}, applied with the mean flow $\bar{x}$, is a method for optimally determining nominal TOD periods, an important component of conventional traffic control. It is thus a useful innovation in itself that can be immediately implemented with existing hardware, but it does not leverage the traffic prediction algorithm developed in Section \ref{sec:traff-pred-from}. Here we propose a method for extending this idea to allow real-time adjustments to TOD periods and TOD plans based on predicted traffic flow. The key idea is to enable limited and intuitive modifications to the standard TOD scheduling paradigm so that these methods can be implemented with modest modification to existing hardware; moreover, traffic engineers can easily envision the benefits of traffic predictive control and therefore more readily adopt this approach.
To this end, we assume a set of nominal segmentation times $\tau_1,\ldots,\tau_{S-1}$ has been chosen according to \eqref{eq:35}, defining a set of $S$ TOD periods. Additionally, a set of parameter vectors $\mu_1,\ldots,\mu_S$ is obtained for each TOD period as the minimizers of \eqref{eq:33}. These TOD periods and parameter vectors constitute the nominal intersection timing plans. We now suggest the following intuitive predictive traffic control scheme: throughout the day, online measurements are used to predict future traffic flow. When the current time approaches a nominal segmentation time, the predicted traffic flow is used to decide if the intersection controller should switch to a new TOD plan earlier or later than the nominal segmentation time. To formalize this idea, let $x^s\in\mathbb{R}^{TM}$ denote the traffic flow for a particular sample day and suppose that the current time is $t$ so that \begin{align} \label{eq:36} x^\text{meas}:=\begin{bmatrix}x^s(1)&x^s(2)&\ldots&x^s(t)\end{bmatrix}\in\mathbb{R}^{tM} \end{align} is the vector of currently available flow measurements. We let \begin{align} \label{eq:42} \textsc{SegmentWindow}(\tau)\subseteq \{1,2,\ldots,T\} \end{align} denote the window of times around a nominal segmentation time $\tau$ for which it is acceptable to switch early or late to a new TOD period. That is, the signal controller has the flexibility to switch from TOD period $i$ to TOD period $i+1$ at any time within $\textsc{SegmentWindow}(\tau_{i})$, and the switch must occur within this window. The acceptable window is a design parameter, \emph{e.g.}, it may be chosen by a traffic engineer to meet other system requirements. As an example, if the user-selected criterion allows for switching to a new TOD plan up to 45 minutes before or after the nominal segmentation time, we have $\textsc{SegmentWindow}(\tau)=\{\tau-3,\tau-2,\ldots,\tau+2,\tau+3\}$ when $\Delta=\text{15 minutes}$.
Now suppose that the current TOD period is $i$, the currently active parameter vector is $\mu^*_i\in\mathbb{R}^{M}$, and $t\in\textsc{SegmentWindow}(\tau_i)$ so that it is allowable to switch to the $(i+1)$-th TOD period at the current time $t$. To determine if switching is desirable, we use a prediction of future traffic flow to determine if it is possible to achieve a lower segment cost by choosing a different segmentation time. To this end, let \begin{align} \label{eq:37} \hat{y}=\begin{bmatrix}\hat{y}(t+1)&\hat{y}(t+2)&\ldots&\hat{y}(\tau_{i+1})\end{bmatrix}\in\mathbb{R}^{(\tau_{i+1}-t)M} \end{align} be a prediction of traffic flow from time $t+1$ to time $\tau_{i+1}$, the nominal ending time for the next TOD period. We wish to find the segmentation time which minimizes the predicted remaining segment cost for the $i$-th TOD period and the predicted segment cost for the $(i+1)$-th TOD period. We thus compute \begin{align} \label{eq:38} t_\text{opt}=\argmin_{\substack{u\in\textsc{SegmentWindow}(\tau_i)\\\text{s.t. }u\geq t}} \left(F(t+1,u,\hat{y},\mu_i^*)+\min_\mu F(u+1,\tau_{i+1},\hat{y},\mu)\right). \end{align} We interpret $t_\text{opt}$ as the optimal time to switch from TOD period $i$ to TOD period $i+1$ (recall that for $u<t+1$, we have $F(t+1,u,\hat{y},\mu_i^*)=0$). If $t_\text{opt}=t$, then it is best to switch to the $(i+1)$-th TOD period immediately. In this case, we define \begin{align} \label{eq:39} \tau^*_i&=t_\text{opt}\\ \label{eq:39-2} \mu^*_{i+1}&=\argmin_\mu F(t_\text{opt}+1,\tau_{i+1},\hat{y},\mu) \end{align} as the \emph{predictive} segmentation time for the $i$-th period and the predictive parameter vector for the $(i+1)$-th TOD period. We then repeat the process for the next TOD segmentation time. However, if $t_\text{opt}>t$, then it is optimal to continue with the current TOD period, delaying the switch to the $(i+1)$-th period.
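The switch-time decision can be sketched for a single movement as follows, again with the squared-error fit so that the inner minimization over $\mu$ reduces to a mean; the prediction array, window, and times are made up:

```python
import numpy as np

def F(pred, ta, tb, mu):
    """Squared-error fit F(ta, tb, y, mu) for one movement; 0 when tb < ta."""
    if tb < ta:
        return 0.0
    return float(((pred[ta:tb + 1] - mu) ** 2).sum())

def best_switch_time(pred, t, window, tau_next, mu_cur):
    """Pick the allowed switch time u >= t minimizing the predicted remaining
    cost of the current period (under its active parameter mu_cur) plus the
    cost of the next period under its refit parameter (here, the mean)."""
    best_u, best_cost = None, np.inf
    for u in sorted(w for w in window if w >= t):
        mu_next = pred[u + 1:tau_next + 1].mean()
        cost = F(pred, t + 1, u, mu_cur) + F(pred, u + 1, tau_next, mu_next)
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# Made-up prediction over absolute times 0..9: the flow jumps at time 6, one
# step before the nominal segmentation time tau_i = 6, so switching early wins.
pred = np.array([0., 0., 0., 0., 2., 2., 9., 9., 9., 9.])
t_now, tau_next = 4, 9
window = {5, 6, 7}                    # allowed switch times around tau_i = 6
u_opt = best_switch_time(pred, t_now, window, tau_next, mu_cur=2.0)
assert u_opt == 5                     # switch one step early
```

Re-running this decision as each new measurement arrives reproduces the online behavior described next: the chosen switch time is refreshed with every updated prediction.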
Time then advances to $t+1$ and the process repeats: we update the measurement vector and recompute the predicted flow measurements. In particular, this recomputation implies that $t_\text{opt}$ updates based on the latest measurements. To initialize this approach, we define $\mu^*_1:=\mu_1$, that is, the parameter vector for the first TOD period is taken to be the nominal parameter vector $\mu_1$ since no measurements are yet available that would alter this prediction. Finally, we highlight one particularly important possible modification to the above procedure. In computing $t_\text{opt}$ in \eqref{eq:38}, we assumed that we are able to establish a new parameter vector for the $(i+1)$-th TOD period; however, due to constraints on the traffic signaling hardware or the preferences of practitioners, this may not be possible. For example, the traffic signal controller may allow variability in the segmentation times defining TOD periods but not allow the timing plans themselves to be altered. In this case, we choose a new segmentation time assuming that the parameter vector remains fixed. We modify \eqref{eq:38} as \begin{align} \tag{\ref{eq:38}b} \label{eq:40} t_\text{opt}=\argmin_{\substack{u\in\textsc{SegmentWindow}(\tau_i)\\\text{s.t. }u\geq t}} \left(F(t+1,u,\hat{y},\mu_i)+ F(u+1,\tau_{i+1},\hat{y},\mu_{i+1})\right). \end{align} Algorithm \ref{fig:algo1} summarizes this traffic predictive approach. We now prove a desirable consistency property of this algorithm. In particular, we show that, for the fit function given in \eqref{eq:46}, if the predicted traffic flow is equal to the nominal traffic flow, then the segmentation times and parameter vectors obtained via Algorithm \ref{fig:algo1} are equal to the nominal segmentation times and parameter vectors. Below, we assume for simplicity that minimizing arguments are unique.
\begin{prop} \label{prop:consist} Consider times $\tau_a<\tau_b<\tau_c$ defining two TOD periods $\{\tau_a+1,\ldots,\tau_b\}$ and $\{\tau_b+1,\ldots,\tau_c\}$, and consider the fit function given in \eqref{eq:46}. Assume that $\tau_b$ is the optimal time to switch from the first TOD period to the second TOD period when the flow is $x\in\mathbb{R}^{TM}$, and suppose $\mu_1$ and $\mu_2$ are the optimal parameter vectors for these periods, that is, \begin{align} \label{eq:43} (\tau_b,\mu_1,\mu_2)=\argmin_{\tilde{\tau}_b,\tilde{\mu}_1,\tilde{\mu}_2} F(\tau_a+1,\tilde{\tau}_b,x,\tilde{\mu}_1)+F(\tilde{\tau}_b+1,\tau_c,x,\tilde{\mu}_2). \end{align} Then, for all $t\in\{\tau_a,\ldots,\tau_b\}$, \begin{align} \label{eq:44} (\tau_b,\mu_2)=\argmin_{\tilde{\tau}_b,\tilde{\mu}_2} F(t+1,\tilde{\tau}_b,x,\mu_1)+F(\tilde{\tau}_b+1,\tau_c,x,\tilde{\mu}_2). \end{align} \end{prop} Proposition \ref{prop:consist} states that, if the measured data coincides with $x$, then $\tau_b$ remains the optimal time to switch TOD periods even if the segmentation time and parameter vector of the second TOD period are recomputed at time $t\geq \tau_a$. \begin{proof} Consider $t\in\{\tau_a,\ldots,\tau_b\}$. Let \begin{align} \label{eq:45} (\tau_b',\mu_2')=\argmin_{\tilde{\tau}_b,\tilde{\mu}_2} F(t+1,\tilde{\tau}_b,x,\mu_1)+F(\tilde{\tau}_b+1,\tau_c,x,\tilde{\mu}_2). \end{align} Observe that \begin{align} \label{eq:52} F(t_1,t_2,x,\mu)=F(t_1,t_3,x,\mu)+F(t_3+1,t_2,x,\mu)\quad \text{ for all $t_1\leq t_3\leq t_2$} \end{align} for any $x$, $\mu$. Thus, by \eqref{eq:43}, we have that \begin{align} \label{eq:49} F(\tau_a+1,\tau_b,x,\mu_1)+F(\tau_b+1,\tau_c,x,\mu_2)&= F(\tau_a+1,t,x,\mu_1)+ F(t+1,\tau_b,x,\mu_1)+ F(\tau_b+1,\tau_c,x,\mu_2) \\ &\leq \min_{\tilde{\mu}_1,\tilde{\mu}_2} F(\tau_a+1,\tau_b',x,\tilde{\mu}_1)+F(\tau_b'+1,\tau_c,x,\tilde{\mu}_2).
\end{align} It follows that \begin{align} \label{eq:50} F(t+1,\tau_b,x,\mu_1)+ F(\tau_b+1,&\tau_c,x,\mu_2)\\ &\leq \min_{\tilde{\mu}_1,\tilde{\mu}_2} F(\tau_a+1,\tau_b',x,\tilde{\mu}_1)+F(\tau_b'+1,\tau_c,x,\tilde{\mu}_2)- F(\tau_a+1,t,x,\mu_1)\\ &=\min_{\tilde{\mu}_1} F(\tau_a+1,\tau_b',x,\tilde{\mu}_1) - F(\tau_a+1,t,x,\mu_1)+\min_{\tilde{\mu}_2}F(\tau_b'+1,\tau_c,x,\tilde{\mu}_2)\\ \label{eq:50-end}&\leq F(t+1,\tau_b',x,\mu_1)+\min_{\tilde{\mu}_2}F(\tau_b'+1,\tau_c,x,\tilde{\mu}_2) \end{align} where the last inequality follows because \begin{align} \label{eq:51} \min_{\tilde{\mu}_1} F(\tau_a+1,\tau_b',x,\tilde{\mu}_1)\leq F(\tau_a+1,t,x,\mu_1)+F(t+1,\tau_b',x,\mu_1). \end{align} Then \eqref{eq:44} follows from \eqref{eq:45} and \eqref{eq:50}--\eqref{eq:50-end}. \end{proof} The form of the fit function given in \eqref{eq:46} ensures \eqref{eq:52} in the proof of Proposition \ref{prop:consist}. Thus, the proposition may fail to hold for alternative choices of fit functions. \begin{cor} If $\hat{y}=\begin{bmatrix}x^s(t+1)& x^s(t+2)&\ldots&x^s(\tau_{i+1})\end{bmatrix}$, that is, the predicted traffic flow is equal to the measured traffic flow, then $\tau_i=\tau^*_i$ and $\mu_i=\mu^*_i$ for all $i=1,\ldots,S$, that is, the predictive segmentation times and predictive parameter vectors are equal to the nominal segmentation times and parameter vectors. \end{cor} From a computational perspective, Algorithm \ref{fig:algo1} consists of an online component which computes the optimal predictive segmentation time for each TOD period as well as an offline component which processes historical data to obtain the PLS components for the predictions required in line \ref{line:predict}. For the offline component, we compute PLS predictor/predicted components for each possible segmentation time. Thus, suppose $|\textsc{SegmentWindow}(\tau)|=W$ for all $\tau$ for some $W>0$, that is, there are $W$ allowable segmentation times around each nominal segmentation time.
Then we execute the PLS algorithm a total of $SW$ times. While this process is potentially computationally taxing, it can be accomplished offline by processing historical data. To obtain $\hat{y}$ in line \ref{line:predict} of Algorithm \ref{fig:algo1} requires only matrix multiplication using the PLS components that are computed offline and stored. Next, computing $t_\text{opt}$ requires solving at most $W$ convex optimization problems if using \eqref{eq:38}, or $W$ evaluations of the fit function $F$ if using \eqref{eq:40}. Solving the convex optimization problem is typically fast, as are evaluations of $F$; thus $t_\text{opt}$ is easily computed within any practically sized time step $\Delta$, affording ample time to decide predictive segmentation times and parameter vectors online. \begin{algorithm} \centering \begin{minipage}{\linewidth} \begin{algorithmic}[1] \algblockdefx{InputS}{EndInputS}{\textbf{inputs: }}{} \algtext*{EndInputS} \algblockdefx{OutputS}{EndOutputS}{\textbf{outputs: }}{} \algtext*{EndOutputS} \Function{PredictiveTrafficControl}{$(\tau_1,\ldots,\tau_{S-1})$, $(\mu_1,\ldots,\mu_S)$, $x^s$} \InputS \hspace{.08in}$(\tau_1,\ldots,\tau_{S-1})$, nominal segmentation times defining $S$ TOD periods \State \hspace{3.55em}\relax $(\mu_1,\ldots,\mu_S)$, parameter vectors for the TOD periods \State \hspace{3.55em}\relax $x^s\in\mathbb{R}^{TM}$, measured traffic flow, available in real-time \EndInputS \OutputS $(\tau_1^*,\ldots,\tau_{S-1}^*)$, predictive segmentation times \State \hspace{3.55em}\relax $(\mu_1^*,\ldots,\mu_S^*)$, predictive parameter vectors \EndOutputS \State $\mu^*_1:=\mu_1$ \State $i:=1$ \Comment Current TOD period \For{$t=1,2,\ldots,T$} \Comment Current time \State $x^\text{meas}:=\begin{bmatrix}x^s(1)&x^s(2)&\ldots&x^s(t)\end{bmatrix}\in\mathbb{R}^{tM}$ \Comment Flow measurements up to current time \If{$t\in\textsc{SegmentWindow}(\tau_i)$} \State $\hat{y}=\textsc{Predict}(x^\text{meas},t+1,\tau_{i+1})$ \Comment Predicted traffic
flow up to end of next TOD period \label{line:predict} \State $t_\text{opt}:=$ solution of \eqref{eq:38} or \eqref{eq:40} \Comment Best time to switch according to current prediction \If{$t_\text{opt}=t$} \State $\tau^*_i:=t_\text{opt}$ \State $\displaystyle \mu^*_{i+1}:=\begin{cases}\argmin_\mu F(\tau^*_i+1,\tau_{i+1},\hat{y},\mu)&\text{ if using \eqref{eq:38}}\\ \mu_{i+1}&\text{ if using \eqref{eq:40}}\end{cases}$ \State $i:=i+1$ \EndIf \EndIf \EndFor \State \Return $((\tau_1^*,\ldots,\tau_{S-1}^*), (\mu_1^*,\ldots,\mu_S^*))$ \EndFunction \vspace{5pt} \algblockdefx{InputS}{EndInputS}{\textbf{inputs: }}{} \algtext*{EndInputS} \algblockdefx{OutputS}{EndOutputS}{\textbf{output: }}{} \algtext*{EndOutputS} \algblockdefx{Summary}{EndSummary}{\textbf{summary: }}{} \algtext*{EndSummary} \Function{Predict}{$x^\text{meas}$, $t_a$, $t_b$} \InputS \hspace{.08in}$x^\text{meas}\in\mathbb{R}^{tM}$, measured traffic flow up to time $t$ \State \hspace{3.55em}\relax $t_a$ and $t_b$, start and end times for traffic flow prediction \EndInputS \OutputS $\hat{y}\in\mathbb{R}^{(t_b-t_a+1)M}$, traffic flow prediction from time $t_a$ to time $t_b$ \EndOutputS \Summary \emph{Using historical data and the prediction scheme developed in Section \ref{sec:traff-pred-from}, predict traffic} \State \hspace{3.55em}\relax \emph{flow from time $t_a$ to time $t_b$ using real-time measurements $x^\textnormal{meas}$.} \EndSummary \EndFunction \end{algorithmic} \end{minipage} \caption{Algorithm for determining predictive segmentation times and predictive parameter vectors given a set of nominal segmentation times and vectors as well as a vector of current flow measurements. The algorithm runs online as time progresses from $t=1$ to $t=T$.} \label{fig:algo1} \end{algorithm} \subsection{Case Study} We assume nominal segmentation times for seven TOD plans as shown in Figure \ref{fig:seg}b.
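Before turning to the details of the case study, the online loop of Algorithm \ref{fig:algo1} can be condensed into a short sketch for a single segmentation decision. This is illustrative only: it assumes a quadratic fit cost in place of \eqref{eq:46}, a naive persistence predictor (repeat the latest measurement) in place of the PLS-based \textsc{Predict} routine, and the fixed-parameter variant \eqref{eq:40}.

```python
import numpy as np

def F(t1, t2, y, mu):
    """Quadratic segment cost over time steps t1..t2 (0 when t2 < t1)."""
    return sum(np.sum((y[u] - mu) ** 2) for u in range(t1, t2 + 1))

def predict(x_meas, t_a, t_b):
    """Naive persistence predictor: repeat the latest measurement over
    the horizon t_a..t_b (a stand-in for the PLS-based Predict)."""
    latest = x_meas[max(x_meas)]
    return {u: latest for u in range(t_a, t_b + 1)}

def predictive_segmentation(tau_nom, tau_next, mu1, mu2, x_s, half_window):
    """One segmentation decision (two TOD periods, fixed parameter
    vectors, i.e. the variant of eq. (38b))."""
    window = range(tau_nom - half_window, tau_nom + half_window + 1)
    for t in range(1, tau_next + 1):
        x_meas = {u: x_s[u] for u in range(1, t + 1)}
        if t in window:
            y_hat = predict(x_meas, t + 1, tau_next)
            t_opt = min((u for u in window if u >= t),
                        key=lambda u: F(t + 1, u, y_hat, mu1)
                                      + F(u + 1, tau_next, y_hat, mu2))
            if t_opt == t:
                return t                 # switch now: this is tau_i*
    return max(window)                   # forced switch at window end

# Measured flow jumps from the level of mu1 to the level of mu2 at t = 5,
# two steps before the nominal segmentation time tau_nom = 7.
x_s = {u: np.array([1.0]) if u <= 4 else np.array([5.0]) for u in range(1, 13)}
tau_star = predictive_segmentation(tau_nom=7, tau_next=12,
                                   mu1=np.array([1.0]), mu2=np.array([5.0]),
                                   x_s=x_s, half_window=2)
print(tau_star)  # 5: the controller switches early, tracking the early rise
```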
Now consider a traffic predictive controller as developed in Section \ref{sec:predictive-time-day} that is able to adjust in real-time the TOD periods so that the segmentation times are up to 45 minutes earlier or later than the nominal segmentation times. We first consider the case for which the traffic predictive controller is only able to establish predictive segmentation times and not predictive parameters. Figure \ref{fig:pred_cont} plots the results of executing the traffic predictive controller on the unusual dates February 24 (below average traffic due to weather conditions) and July 2 (above average traffic due to a holiday). The solid line in the figure depicts the mean \emph{nominal} total traffic flow over the dataset (excluding the unusual date), summed over the 12 movements, in 15 minute intervals. Here, we plot total traffic flow only for clarity; the analysis considers separate flow measurements for all 12 movements as above. The dashed line is the total predicted traffic flow for the next TOD period as predicted at the start of each TOD period. The vertical dashed blue lines are the nominal segmentation times, the lightly shaded bars represent the segmentation windows during which a change to a new TOD period is allowed, and the solid vertical lines are the actual segmentation times as determined using the traffic predictive controller. From the figure, it is evident that the traffic predictive controller behaves in a reasonable way. For example, on July 2, the nominal TOD period between 7:00 and 9:00 corresponds to the morning peak period. Since traffic is predicted to be above average at this time, this TOD period is extended by 45 minutes. Likewise, since traffic in the afternoon peak period is predicted to be above average, the corresponding TOD period is extended, beginning 45 minutes earlier and ending 30 minutes later.
On February 24, we see the reverse phenomenon due to the prediction that traffic will be below average; the TOD periods for the morning and afternoon peak are correspondingly shortened. \begin{figure} \centering \includegraphics[width=.7\textwidth,clip=true, trim=0in .75in 0in 0in]{img001}\\ \includegraphics[width=.7\textwidth]{img002}\\ \caption{Example of traffic predictive control for two unusual days. The traffic predictive controller computes predictive segmentation times and adjusts the TOD periods to accommodate the predicted traffic flow, which deviates from the nominal average computed over the dataset. For clarity, the plots show total traffic flow through the intersection, summed over the twelve movements, however the prediction algorithm considers separate flow measurements for each movement.} \label{fig:pred_cont} \end{figure} A key feature of the traffic predictive controller as developed in Section \ref{sec:predictive-time-day} is that the scheme is agnostic regarding the specific green split algorithm utilized, that is, the algorithm seeks to predict flows and segmentation times, not green splits directly. However, to estimate the gain in performance, we must choose a particular green split algorithm. We employ a classical delay minimizing green split optimization algorithm as developed in \cite{Allsop:1971fh}, where we substitute modified delay formulas from the Highway Capacity Manual (HCM) \cite{Manual:2000qd}. These delay formulas account for queue buildup when the green splits are too short to accommodate queued vehicles. We assume that a fixed time control strategy, calculated from the parameter vector, is used within each TOD period; the parameter vector is inflated by a factor to accommodate the random fluctuations that occur if vehicles arrive according to a Poisson process, a reasonable assumption. In Figure \ref{fig:delay}, we plot the result of using this delay minimizing green split optimization with our traffic predictive controller.
In each subplot, the solid trace is the rate of delay for the intersection plotted over time for the case where the green splits are designed to minimize delay using the nominal segmentation times and nominal parameter vectors. The rate of delay is calculated using the analytical formulas found in the HCM. The dashed trace is the rate of delay obtained using the traffic predictive controller. The top plots consider a traffic predictive controller that uses predictive segmentation times but nominal parameter vectors, that is, the green splits are not allowed to differ from nominal during each TOD period, however, the duration and starting point of the TOD period is adjusted based on the predicted flow. The bottom plots consider the case when the traffic predictive controller uses predictive segmentation times and predictive parameter vectors, thereby adjusting the green splits based on predicted traffic flow. The dotted trace indicates a lower bound on the rate of delay that is computed assuming that the optimal green splits for each fifteen minute interval are applied at each time step. \begin{figure} \centering \begin{tabular}{@{}c@{} @{}c@{}} \includegraphics[width=.5\textwidth,clip=true,trim=1.65in .75in 1.7in 0in]{img003}& \includegraphics[width=.5\textwidth,clip=true,trim=1.65in .75in 1.7in 0in]{img004}\\ \includegraphics[width=.5\textwidth,clip=true,trim=1.65in .75in 1.7in 0in]{img005}& \includegraphics[width=.5\textwidth,clip=true,trim=1.65in .75in 1.7in 0in]{img006}\\ \multicolumn{2}{c}{ \includegraphics[width=.7\textwidth,clip=true,trim=0in 0in 0in 3.95in]{img006} } \end{tabular} \caption{Rate of delay induced by the traffic predictive controller that adjusts TOD periods and TOD plans based on real-time traffic data as compared to the nominal controller that does not adjust TOD periods or plans. It is assumed that the green splits are computed to minimize delay using the parameter vectors for each TOD period. 
The top plots are the case when only predictive segmentation is considered while the bottom plots consider predictive segmentation and predictive parameter vectors. Traffic predictive control eliminates excessive delay caused by queued vehicles that require multiple cycles to clear the intersection and reduces wasted green time. Table \ref{tab:final} quantifies these delay improvements.} \label{fig:delay} \end{figure} Suboptimality of green splits occurs due to two reasons: either the splits are too short for some movements, resulting in queued vehicles that must wait more than one cycle to clear the intersection; or the splits are too long, resulting in wasted green time whereby vehicles at other movements must wait longer than necessary to receive a green light. For the nominal control case, the former condition occurs on July 2 and the latter condition occurs on February 24. Indeed, the large increase in the rate of delay between 10:00 and 11:00 on July 2 results from queued vehicles waiting multiple cycles to move through the intersection, so-called \emph{cycle failures}. These queued vehicles contribute substantially to the rate of delay at the intersection. Allowing predictive segmentation mitigates the issue somewhat as the traffic predictive controller delays by 45 minutes the change to a new TOD period that nominally occurs at 9:00, which in turn delays the onset of queued vehicles. However, since the nominal parameter vectors are used within each TOD period, the issue returns between 10:00 and 14:00. When predictive parameter vectors are also considered, as in the bottom right plot, leading to recomputed green splits for each TOD period, the rate of delay decreases substantially and is nearly equal to the lower bound. On February 24, cycle failures are not a concern due to the below average traffic, however the nominal green splits lead to wasted green time. The rate of delay can be modestly improved by using predictive parameter vectors.
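The delay-minimizing green split computation described above can be illustrated with a toy two-movement example. This sketch uses Webster-style uniform and random-arrival delay terms as a simplified stand-in for the HCM delay formulas, and a grid search in place of Allsop's optimization; the saturation flow, cycle length, and lost-time values are arbitrary assumptions.

```python
import numpy as np

def movement_delay(flow_hr, green_frac, sat_flow=1800.0, cycle=90.0):
    """Approximate delay per vehicle (s): Webster-style uniform and
    random-arrival terms, a simplified stand-in for the HCM formulas."""
    x = flow_hr / (green_frac * sat_flow)       # degree of saturation
    if x >= 1.0:
        return float("inf")                     # oversaturated: queue grows
    q = flow_hr / 3600.0                        # arrival rate, veh/s
    uniform = cycle * (1 - green_frac) ** 2 / (2 * (1 - green_frac * x))
    random_arrivals = x ** 2 / (2 * q * (1 - x))
    return uniform + random_arrivals

def delay_minimizing_splits(flows, lost_frac=0.10, grid=400):
    """Grid-search the split of effective green time between two
    conflicting movements that minimizes the total rate of delay."""
    budget = 1.0 - lost_frac                    # usable green fraction
    best = (float("inf"), None, None)
    for g1 in np.linspace(0.05, budget - 0.05, grid):
        g2 = budget - g1
        total = (flows[0] * movement_delay(flows[0], g1)
                 + flows[1] * movement_delay(flows[1], g2))
        if total < best[0]:
            best = (total, g1, g2)
    return best[1], best[2]

g1, g2 = delay_minimizing_splits([900.0, 300.0])  # veh/hr per movement
print(round(g1, 2), round(g2, 2))  # the heavier movement gets more green
```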
The two trends exhibited by the conditions on July 2 and February 24 are general: cycle failures lead to large increases in the rate of delay while wasted green time typically affects delay to a lesser degree. Total delay is computed by integrating the rate of delay over the course of the day. Table \ref{tab:final} collects the total delay values for the four cases in Figure \ref{fig:delay} as well as mean values taken over the entire data set. The traffic predictive controller reduces total delay at the intersection by up to 113.3 veh$\cdot$hr on July 2 by mitigating excessive delay caused by cycle failures. This amounts to a 22.1\% decrease in total delay, a marked improvement especially considering that the best achievable decrease, computed from the delay lower bound, is 26.4\%. Approximately 45\,400 vehicles transited the intersection on this day, resulting in average savings of 9.0s per vehicle. Over the 132 days in the dataset, the traffic predictive controller that uses predictive segmentation and predictive parameters results in a delay improvement of 7.8 veh$\cdot$hr on average, and the delay improvement exceeds 25 veh$\cdot$hr on 11 occasions. \begin{table} \centering \begin{tabular}{l | l l l } &Feb 24 & July 2 & Dataset Mean\\ \hline \hline Delay, nominal control [veh$\cdot$hr]&209.1 & 512.1 & 305.7 \\ Delay, predictive segmentation [veh$\cdot$hr]&207.5&490.5&303.1\\ Delay, predictive segmentation and parameters [veh$\cdot$hr]& 204.2 &398.8 & 297.9 \\ Delay lower bd. [veh$\cdot$hr]& 192.5&377.5&276.4\\ \hline Delay improvement, predictive segmentation [veh$\cdot$hr]&1.7&21.6&2.6\\ Delay improvement, predictive seg. and param. [veh$\cdot$hr] &5.0&113.3&7.8\\ \end{tabular} \caption{Illustrative delay savings from using traffic predictive control to adjust TOD periods and plans.
The traffic predictive controller reduces total delay at the intersection by 113.3 veh$\cdot$hr on July 2 by mitigating excessive delay caused by queued vehicles that wait multiple cycles to clear the intersection, so-called cycle failures. On average, the traffic predictive controller improves total delay by 7.8 veh$\cdot$hr per day by preventing cycle failures and reducing wasted green time.} \label{tab:final} \end{table} \section{Conclusions} \label{sec:conclusions} We have proposed a traffic predictive control scheme that identifies trends in historical traffic flow data and uses real-time measurements to predict future traffic flow. These trends manifest as low-rank structure in the data which is identified using decomposition techniques akin to principal component analysis and particularly suited for prediction. Using a rich dataset of traffic flow measurements over the course of eight months, we provide evidence that much of the day-to-day variation in traffic flow consists of these low-rank, latent structures. The traffic predictive control scheme adjusts the time periods and parameters for time-of-day traffic signal scheduling based on predictions of traffic flow using real-time measurements. This scheme is particularly well-suited for implementation on existing traffic control hardware, which universally supports time-of-day plans and is often capable of remote updating of signal timing parameters. Furthermore, this approach is well-aligned with standard signal timing practices, increasing the likelihood of successful adoption by practitioners. Additionally, the traffic predictive control scheme requires minimal tuning and accommodates any green split optimization scheme that requires expected traffic flow as input. The savings in delay for the case study intersection are found to be 7.8 veh$\cdot$hr per day.
Valuing a driver's time at \$20 per hour and conservatively assuming that each vehicle carries only one occupant, this suggests annual savings of around \$57\,000. Moreover, this savings is measured against a well-timed but pre-specified controller that does not account for real-time measurements; the savings are likely to be higher for intersections that are currently poorly timed. If these calculations are even approximately correct, the savings are likely to be well worth the relatively small implementation costs. An important future direction of research is to consider the case of an arterial corridor. In this case, a straightforward extension of the proposed approach is to consider all movements along the corridor in aggregate. However, there is high correlation between the traffic flows for some sets of movements as vehicles progress along the corridor. This suggests an approach that explicitly accounts for these spatial correlations as is done in \cite{Min:2011kq}. Additionally, future research will investigate confidence bounds on the predictions. For example, can low-rank structure be used to predict the 90th or the 99th percentile traffic flow? Furthermore, traffic prediction from historical trends captures phenomena that have occurred previously in the historical data, and thus may not be well-suited for truly one-off events such as lane closures due to construction. However, it may be that lane closures in one direction over an interval of time affect traffic in a similar manner as lane closures on a different leg at a different time. Learning these more universal trends is another future direction of research. Finally, these questions will be pursued alongside an implementation pilot planned in collaboration with Sensys Networks, Inc.
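As a quick sanity check, the savings figures quoted above and in Table \ref{tab:final} follow directly from the reported totals; a minimal arithmetic sketch:

```python
# July 2: total delay under nominal control vs. predictive segmentation
# and parameters, taken from Table 1 (veh*hr).
nominal, predictive = 512.1, 398.8
improvement = nominal - predictive               # 113.3 veh*hr
pct = 100 * improvement / nominal                # 22.1% decrease
per_vehicle = improvement * 3600 / 45_400        # ~9.0 s per vehicle

# Annual value of the mean daily saving of 7.8 veh*hr at $20 per hour.
annual_usd = 7.8 * 365 * 20                      # ~ $57,000

print(f"{improvement:.1f} veh*hr, {pct:.1f}%, {per_vehicle:.1f} s/veh")
print(f"${annual_usd:,.0f} per year")
```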
\section*{Acknowledgements} The authors acknowledge Beaufort County, SC for use of the intersection data and thank Montasir Abbas for discussions regarding technical capabilities of common traffic control hardware and best practices for signal timing and plan selection. \section*{References} \bibliographystyle{ieeetr} \input{paper_bib.bbl} \end{document}
\section{Introduction} The problem of (approximately) uniform sampling of simple graphs with a given degree sequence has received considerable attention, and has applications in domains as diverse as hypothesis testing in network structures \cite{Olding2014} and ecology, see, e.g., \cite{Rao1996} and references therein. We refer the interested reader to \cite{Blitzstein2011} for more pointers to possible applications. Several variants of the problem have been studied, including sampling undirected, bipartite, connected, or directed graphs. In particular, there is an extensive line of work on Markov Chain Monte Carlo (MCMC) approaches, see, e.g., \cite{Jerrum1990,Kannan1999,Feder2006,Cooper2007,Greenhill2017journal,Erdos2013,Erdos2015decomposition,ErdosMMS2018,CooperDGH17,CarstensK18}. In such an approach one studies a random walk on the space of all graphical realizations.\footnote{Sometimes the state space is augmented with a set of auxiliary states, as in \cite{Jerrum1990}.} This random walk is induced by making small local changes to a given realization using a probabilistic procedure that defines the Markov chain. The idea, roughly, is that after a sufficient number of steps, the so-called \emph{mixing time}, the resulting graph corresponds to a sample from an almost uniform distribution over all graphical realizations of the given degree sequence. The goal is to show that the chain mixes \emph{rapidly}, meaning that one only needs to perform a polynomial (in the size of the graph) number of transitions of the chain in order to obtain an approximately uniform sample. One of the most well-known probabilistic procedures for making these small changes uses local operations called \emph{switches} (also known as \emph{swaps} or \emph{transpositions}); see, e.g., \cite{Rao1996} and Figure \ref{fig:switch_0} for an example. 
The resulting \emph{switch Markov chain} has been shown to be rapidly mixing for various degree sequences \cite{Cooper2007,Greenhill2015, Greenhill2017journal,Erdos2013,Erdos2015decomposition,ErdosMMS2018}, but it is still open whether it is rapidly mixing for all degree sequences. \begin{figure}[ht!] \centering \begin{tikzpicture}[scale=0.6] \coordinate (A1) at (0,0); \coordinate (A2) at (0,1.5); \coordinate (M1) at (2,0); \coordinate (M2) at (2,1.5); \coordinate (P1) at (3,1); \coordinate (P2) at (4,1); \node at (A1) [circle,scale=0.7,fill=black] {}; \node (a1) [below=0.1cm of A1] {$v$}; \node at (A2) [circle,scale=0.7,fill=black] {}; \node (a2) [above=0.1cm of A2] {$w$}; \node at (M1) [circle,scale=0.7,fill=black] {}; \node (m1) [below=0.1cm of M1] {$x$}; \node at (M2) [circle,scale=0.7,fill=black] {}; \node (m2) [above=0.1cm of M2] {$y$}; \path[every node/.style={sloped,anchor=south,auto=false}] (A1) edge[-,very thick] node {} (A2) (M1) edge[-,very thick] node {} (M2) (P1) edge[->,very thick] node {} (P2); \end{tikzpicture} \quad \begin{tikzpicture}[scale=0.6] \coordinate (A1) at (0,0); \coordinate (A2) at (0,1.5); \coordinate (M1) at (2,0); \coordinate (M2) at (2,1.5); \node at (A1) [circle,scale=0.7,fill=black] {}; \node (a1) [below=0.1cm of A1] {$v$}; \node at (A2) [circle,scale=0.7,fill=black] {}; \node (a2) [above=0.1cm of A2] {$w$}; \node at (M1) [circle,scale=0.7,fill=black] {}; \node (m1) [below=0.1cm of M1] {$x$}; \node at (M2) [circle,scale=0.7,fill=black] {}; \node (m2) [above=0.1cm of M2] {$y$}; \path[every node/.style={sloped,anchor=south,auto=false}] (A1) edge[-,very thick] node {} (M2) (M1) edge[-,very thick] node {} (A2); \end{tikzpicture} \caption{Example of a switch in which edges $\{v,w\},\{x,y\}$ are replaced by $\{v,y\},\{x,w\}$. 
Note that the degree sequence is preserved when applying a switch operation.} \label{fig:switch_0} \end{figure} In this work, besides the problem of sampling undirected, as well as bipartite, simple graphs with a given degree sequence, we also focus on the problem of sampling undirected simple graphs with a given \emph{joint degree distribution}. That is, in addition to the degrees, the number of edges between nodes of degree $i$ and degree $j$ is also specified for every pair $(i,j)$.\footnote{We refer the reader to Subsection \ref{sec:jdm_model} for an exact definition and also Appendix \ref{app:example} for an example.} The motivation for using such a metric is that this extra information restricts the space of possible realizations to graphs with more desirable structure. This was first observed by Mahadevan et al.~\cite{MahadevanKFV06} who argued that the joint degree distribution is a much more reliable metric for a synthetic graph to resemble a real network topology, compared to just using the degree sequence. The joint degree matrix model of Amanatidis, Green, and Mihail \cite{Amanatidis2015} formalizes this approach. Although there are polynomial-time algorithms that produce a graphical realization of a given joint degree distribution \cite{Amanatidis2015,AGM2018,StantonP11,CzabarkaDEM15,GjokaTM15}, it is not known how to uniformly sample such a realization efficiently. In particular, bounding the mixing time of the natural restriction of the switch Markov chain for this setting has been an open problem since the introduction of the model \cite{Amanatidis2015,StantonP11,Erdos2015decomposition}. 
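To fix ideas, a single step of a switch-based chain is easy to sketch in code: pick two edges uniformly at random, propose the switch of Figure \ref{fig:switch_0}, and reject the proposal if it would violate simplicity. The following is an illustrative sketch of the switch operation only, not of any particular chain from the literature (in particular, it ignores the exact proposal probabilities needed for a uniform stationary distribution).

```python
import random
from collections import Counter

def try_switch(edges, rng):
    """Attempt one switch: pick two edges {v,w}, {x,y} and propose replacing
    them with {v,y}, {x,w}. Degrees are preserved by construction; the move
    is rejected if it would create a self-loop or a parallel edge."""
    (v, w), (x, y) = rng.sample(sorted(edges), 2)
    new1, new2 = tuple(sorted((v, y))), tuple(sorted((x, w)))
    if v == y or x == w or new1 in edges or new2 in edges:
        return edges                  # reject: chain stays at current graph
    return (edges - {(v, w), (x, y)}) | {new1, new2}

def degree_sequence(edges):
    deg = Counter(u for e in edges for u in e)
    return sorted(deg.values())

# Start from a 6-cycle (every vertex has degree 2) and run 200 switch steps.
G = {tuple(sorted((i, (i + 1) % 6))) for i in range(6)}
rng = random.Random(7)
before = degree_sequence(G)
for _ in range(200):
    G = try_switch(G, rng)
assert degree_sequence(G) == before   # switches preserve the degree sequence
print(degree_sequence(G))             # [2, 2, 2, 2, 2, 2]
```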
\vspace{-5pt} \paragraph{Our Contribution.} The proofs of the results in \cite{Erdos2013,Greenhill2015,Erdos2015decomposition,ErdosMMS2018} for the analysis of the switch Markov chain in undirected (or bipartite) graphs all use conceptually similar ideas to the ones introduced by Cooper, Dyer and Greenhill \cite{Cooper2007} for the analysis of the switch chain for regular undirected graphs, and are based on the multicommodity flow method of Sinclair \cite{Sinclair1992}. The individual parts of this template can become quite technical and require long proofs. In this work we take a different approach for proving that the switch chain is rapidly mixing. First we analyze an easier auxiliary Markov chain; such a chain can be used to sample graphical realizations that \emph{almost} have a given fixed degree sequence or joint degree distribution. We show that there exists an efficient multicommodity flow for the auxiliary chain when the given instance is \emph{strongly stable},\footnote{In the case of sampling graphs with a given degree sequence, the existence of such a flow for the auxiliary chain we use was already claimed in \cite{Jerrum1990}, at least for the degree sequences satisfying a more restrictive analog of inequality \eqref{eq:stable1}. For completeness, we give a detailed proof in Appendix \ref{app:js}.} and then show how it can be transformed into an efficient multicommodity flow for the switch chain. Note that this last step compares two Markov chains with different state spaces, as the auxiliary chain samples from a strictly larger set of graphs than the switch chain.
Thus, for the flow transformation we use embedding arguments similar to those in Feder et al.~\cite{Feder2006}.\footnote{We also refer the reader to the survey of Dyer et al.~\cite{Dyer2006} for more on Markov chain comparison techniques.} \medskip \noindent Using the aforementioned approach we obtain the following two main results: \begin{enumerate}[label=\arabic*),leftmargin=*] \item We show rapid mixing of the switch chain for \emph{strongly stable} families of degree sequences (Theorem \ref{thm:switch}), thus providing a rather short proof that unifies and extends the results in \cite{Cooper2007,Greenhill2015} (Corollaries \ref{lem:stable_jerrum} and \ref{cor:stable}). We introduce strong stability as a stricter version of the notion of $P$-stability \cite{Jerrum1990}. The strong stability condition is satisfied by the degree sequences in the works \cite{Kannan1999,Cooper2007,Greenhill2015,Erdos2013,Erdos2015decomposition,ErdosMMS2018} and by characterizations of $P$-stability \cite{Jerrum1989graphical}. These characterizations are employed to get Corollaries \ref{lem:stable_jerrum} and \ref{cor:stable}. In particular, investigating the mixing time for the degree sequences of Corollary \ref{cor:stable} was posed as an open question by Greenhill \cite{Greenhill2015}; here we resolve it in the affirmative. \item We show that the switch chain restricted on the space of the graphic realizations of a given joint degree distribution with two degree classes is \emph{always} rapidly mixing (Theorem \ref{thm:stable_jdm1}). Despite being for the case of two classes, this is the very first rapid mixing result for the problem. Establishing the rapid mixing of the auxiliary chain in this case presents significant challenges. To attack this problem, we rely on ideas introduced by Bhatnagar, Randall, Vazirani and Vigoda \cite{Bhatnagar2008}. At the core of this approach lies the \emph{mountain-climbing problem} \cite{Homma1952,Whittaker1966}.
We should note that the auxiliary chain used here is analyzed in a much more general model than the joint degree matrix model. \end{enumerate} Besides the above results we also unify the results in \cite{Kannan1999,Erdos2013,Erdos2015decomposition,ErdosMMS2018} for bipartite degree sequences and extend them for the special case of an equally sized bipartition; see Corollaries \ref{cor:bipartite} and \ref{cor:bipartite2} in Appendix \ref{sec:bipartite}. We should note that the unification of the existing results mentioned so far is qualitative rather than quantitative, in the sense that our simpler, indirect approach provides weaker polynomial bounds for the mixing time. For examples of explicit mixing time bounds we refer the reader to \cite{Cooper2007,Cooper2012corrigendum,Greenhill2017journal}. Finally, as a byproduct of our analysis for the auxiliary Markov chain used to prove Theorem \ref{thm:stable_jdm1}, we obtain the first \emph{fully polynomial almost uniform generator} \cite{Sinclair1989} for sampling graphical realizations of certain sparse \emph{partition adjacency matrix} instances with two partition classes \cite{Czabarka14,ErdosHIM2017} (this is a generalization of the joint degree distribution problem; see Appendix \ref{sec:auxiliary} for definitions). See Corollary \ref{cor:first_sparse} in Appendix \ref{sec:stable_jdm}. \vspace{-5pt} \paragraph{Related Work.} Here the focus is on MCMC approaches. As such approaches have impractical mixing times in general, we should note that there is a line of work on non-MCMC sampling algorithms which, albeit having weaker properties, do have practical running times. See, e.g., \cite{BayatiKS10,GaoW18} and references therein. Jerrum and Sinclair \cite{Jerrum1990} give a fully polynomial almost uniform generator for generating graphical realizations of degree sequences coming from any \emph{$P$-stable} family of sequences (see preliminaries). 
The auxiliary chain we use to prove Theorem \ref{thm:switch}, henceforth referred to as Jerrum-Sinclair (JS) chain, is presented in \cite{Jerrum1990} as a more straightforward implementation of this generator. One drawback is that the algorithms in \cite{Jerrum1990} work with auxiliary states. Kannan, Tetali and Vempala \cite{Kannan1999} introduce the switch chain as a simpler and more direct generator that does not have to work with auxiliary states. They addressed the mixing time of such a switch-based Markov chain for the regular bipartite case. Cooper et al.~\cite{Cooper2007} then gave a rapid mixing proof for regular undirected graphs, and later Greenhill \cite{Greenhill2015} extended this result to certain ranges of irregular degree sequences (see also Greenhill and Sfragara \cite{Greenhill2017journal}). Mikl\'os, Erd\H{o}s and Soukup \cite{Erdos2013} proved rapid mixing for the \emph{half-regular} bipartite case, and Erd\H{o}s, Mikl\'os and Toroczkai \cite{Erdos2015decomposition} for the \emph{almost half-regular} case. Very recently, Erd\H{o}s et al.~\cite{ErdosMMS2018} presented a range of bipartite degree sequences unifying and generalizing the results in \cite{Erdos2013,Erdos2015decomposition}. Switch-based Markov chain Monte Carlo approaches have also been studied for other graph sampling problems. Feder et al.~\cite{Feder2006} as well as Cooper et al.~\cite{CooperDGH17} study the mixing time of a Markov chain using a switch-like probabilistic procedure (called a \emph{flip}) for sampling connected graphs. For sampling perfect matchings, switch-based Markov chains have also been studied, see, e.g., the recent work of Dyer, Jerrum and M\"{u}ller \cite{Dyer2017} and references therein. It is interesting to note that Dyer et al.~\cite{Dyer2017} also use a lemma on the mountain-climbing problem that is very similar to Lemma \ref{lem:mountain}. 
The joint degree matrix model was first studied by Patrinos and Hakimi \cite{PatrinosH76}, albeit with a different formulation and name, and was reintroduced by Amanatidis et al.~\cite{Amanatidis2015}. While it has been shown that the switch chain restricted to the space of the graphical realizations of any given joint degree distribution is irreducible \cite{Amanatidis2015,CzabarkaDEM15}, almost no progress has been made towards bounding its mixing time. Stanton and Pinar \cite{StantonP11} performed experiments based on the autocorrelation of each edge, suggesting that the switch chain mixes quickly. The only relevant result is that of Erd\H{o}s et al.~\cite{Erdos2015decomposition} showing fast mixing for a related Markov chain over the severely restricted subset of the so-called \emph{balanced} joint degree matrix realizations; this special case, however, lacks several of the technical challenges that arise in the original problem. \vspace{-5pt} \paragraph{Outline.} In Section \ref{sec:preliminaries} we give all the necessary preliminaries and formally describe the JS chain, the switch chain, and the restricted switch chain. Our first main result is Theorem \ref{thm:switch} in Section \ref{sec:main_result}, where we show that the switch chain is rapidly mixing for families of strongly stable degree sequences. Given the rapid mixing of the JS chain (Appendix \ref{app:js}), the proof of Theorem \ref{thm:switch} is self-contained. The corresponding result for the bipartite case is completely analogous and is deferred to Appendix \ref{sec:bipartite}. In Section \ref{sec:switch_jdm_main} we state our second main result, Theorem \ref{thm:stable_jdm1}. For the sake of presentation, we defer its proof to Appendices \ref{sec:auxiliary}, \ref{sec:stable_jdm} and \ref{sec:switch_for_jdm}, and we only include a short discussion of our approach in Section \ref{sec:switch_jdm_main}.
\section{Preliminaries}\label{sec:preliminaries} We begin with the preliminaries regarding Markov chains and the multicommodity flow method of Sinclair \cite{Sinclair1992}. For Markov chain definitions not given here, see for example \cite{Levin2009}. Let $\mathcal{M} = (\Omega,P)$ be an ergodic, time-reversible Markov chain over state space $\Omega$ with transition matrix $P$ and stationary distribution $\pi$. We write $P^t(x,\cdot)$ for the distribution over $\Omega$ at time step $t$ given that the initial state is $x \in \Omega$. The \emph{total variation distance} at time $t$ with initial state $x$ is \[ \Delta_x(t) = \max_{S \subseteq \Omega} \big| P^t(x,S) - \pi(S)\big| = \frac{1}{2}\sum_{y \in \Omega} \big| P^t(x,y) - \pi(y)\big| \,, \] and the \emph{mixing time} $\tau(\epsilon)$ is defined as $\tau(\epsilon) = \max_{x \in \Omega}\left\{ \min\{ t : \Delta_x(t') \leq \epsilon \text{ for all } t' \geq t\}\right\}$. Informally, $\tau(\epsilon)$ is the number of steps until the Markov chain is $\epsilon$-close to its stationary distribution. A Markov chain is said to be \emph{rapidly mixing} if the mixing time can be upper bounded by a function polynomial in $\ln(|\Omega|/\epsilon)$. It is well-known that, since the Markov chain is time-reversible, the matrix $P$ only has real eigenvalues $1 = \lambda_0 > \lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_{|\Omega|-1} > -1$. We may replace the transition matrix $P$ of the Markov chain by $(P+I)/2$, to make the chain \emph{lazy}, and hence guarantee that all its eigenvalues are non-negative. It then follows that the second-largest eigenvalue of $P$ is $\lambda_1$. In this work we always consider the lazy versions of the Markov chains involved. It follows directly from Proposition 1 in \cite{Sinclair1992} that $\tau(\epsilon) \ \leq \ \frac{1}{1 - \lambda_1} \big(\ln(1/\pi_*) + \ln(1/\epsilon)\big)$, where $\pi_* = \min_{x \in \Omega} \pi(x)$. 
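To make these quantities concrete, the following short Python sketch (our own illustration, not part of the formal development; the six-state lazy random walk on a cycle is an arbitrary toy chain with uniform stationary distribution) computes $\Delta_x(t)$ directly and checks it against the spectral bound $\tau(\epsilon) \leq \frac{1}{1-\lambda_1}\big(\ln(1/\pi_*) + \ln(1/\epsilon)\big)$ stated above.

```python
import math

# A lazy symmetric random walk on a cycle of n states: the transition
# matrix is symmetric, so the stationary distribution is uniform (a toy
# stand-in for the JS and switch chains, whose state spaces are far too
# large to write down explicitly).
n = 6
P = [[0.0] * n for _ in range(n)]
for i in range(n):
    P[i][(i - 1) % n] += 0.25  # laziness: half of the mass stays put
    P[i][(i + 1) % n] += 0.25
    P[i][i] += 0.5

def mat_mul(A, B):
    """Multiply two n x n matrices given as lists of lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def tv_distance(row):
    """Total variation distance (1/2) * sum_y |P^t(x, y) - pi(y)|."""
    return 0.5 * sum(abs(p - 1.0 / n) for p in row)

# For this walk the lazy eigenvalues are (1 + cos(2*pi*k/n)) / 2, so the
# second-largest eigenvalue corresponds to k = 1.
lam1 = (1 + math.cos(2 * math.pi / n)) / 2

# Spectral bound with uniform pi: tau(eps) <= ln(n/eps) / (1 - lambda_1).
eps = 0.01
t_bound = math.ceil(math.log(n / eps) / (1 - lam1))

# By time t_bound the chain must be eps-close to uniform from every start.
Pt = P
for _ in range(t_bound - 1):
    Pt = mat_mul(Pt, P)
worst = max(tv_distance(Pt[x]) for x in range(n))
assert worst <= eps
```

For the JS and switch chains themselves $|\Omega|$ is exponentially large, which is precisely why the multicommodity flow method described next replaces any direct eigenvalue computation.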
For the special case where $\pi(\cdot)$ is the uniform distribution, as is the case here, the above bound becomes $\tau(\epsilon) \leq \ln(|\Omega|/\epsilon)/(1 - \lambda_1)$. The quantity $(1 - \lambda_1)^{-1}$ can be upper bounded using the \emph{multicommodity flow method} of Sinclair \cite{Sinclair1992}. We define the state space graph of the chain $\mathcal{M}$ as the directed graph $\mathbb{G}$ with node set $\Omega$ that contains exactly the arcs $(x,y) \in \Omega \times \Omega$ for which $P(x,y) > 0$ and $x \neq y$. Let $\mathcal{P} = \cup_{x \neq y} \mathcal{P}_{xy}$, where $\mathcal{P}_{xy}$ is the set of simple paths between $x$ and $y$ in the state space graph $\mathbb{G}$. A \emph{flow} $f$ in $\Omega$ is a function $\mathcal{P} \rightarrow [0,\infty)$ satisfying $\sum_{p \in \mathcal{P}_{xy}} f(p) = \pi(x)\pi(y)$ for all $x,y \in \Omega, x \neq y$. The flow $f$ can be extended to a function on oriented edges of $\mathbb{G}$ by setting $f(e) = \sum_{p \in \mathcal{P} : e \in p } f(p)$, so that $f(e)$ is the total flow routed through $e\in E(\mathbb{G})$. Let $\ell(f) = \max_{p \in \mathcal{P} : f(p) > 0} |p|$ be the length of a longest flow carrying path, and let $ \rho(e) = f(e)/Q(e) $ be the \emph{load} of the edge $e$, where $Q(e) = \pi(x)P(x,y)$ for $e = (x,y)$. The maximum load of the flow is $ \rho(f) = \max_{e\in E(\mathbb{G})} \rho(e). $ Sinclair (\cite{Sinclair1992}, Corollary $6^{\,\prime}$) shows that $(1 - \lambda_1)^{-1} \leq \rho(f)\ell(f)$. We use the following standard technique for bounding the maximum load of a flow in case the chain $\mathcal{M}$ has uniform stationary distribution $\pi$. Suppose $\theta$ is the smallest positive transition probability of the Markov chain between two distinct states. If $b$ is such that $f(e) \leq b / |\Omega|$ for all $e\in E(\mathbb{G})$, then it follows that $ \rho(f) \leq b/\theta $. Thus, we have \begin{equation*} \tau(\epsilon) \leq \frac{\ell(f)\cdot b}{\theta}\ln(|\Omega|/\epsilon)\,. 
\end{equation*} Now, if $\ell(f), b$ and $1/\theta$ can be bounded by a function polynomial in $\log(|\Omega|)$, it follows that the Markov chain $\mathcal{M}$ is rapidly mixing. In this case, we say that $f$ is an \emph{efficient} flow. Note that in this approach the transition probabilities do not play a role as long as $1/\theta$ is polynomially bounded. \subsection{Graphical Degree Sequences}\label{sec:degree_sequence} A sequence of non-negative integers $d = (d_1,\dots,d_n)$ is called a \emph{graphical degree sequence} if there exists a simple, undirected, labeled graph on $n$ nodes having degrees $d_1,\dots,d_n$; such a graph is called a \emph{graphical realization of $d$}. For a given degree sequence $d$, $\mathcal{G}(d)$ denotes the set of all graphical realizations of $d$. Throughout this work we only consider sequences $d$ with positive components, and for which $\mathcal{G}(d) \neq \emptyset$. Let $\mathcal{G}'(d) = \cup_{d'} \mathcal{G}(d')$ with $d'$ ranging over the set $\left\{ d' : d'_j \leq d_j \text{ for all } j \text{, and } \sum_{i=1}^n |d_i - d_i'| \leq 2\right\}$. That is, we have \emph{(i)} $d' = d$, or \emph{(ii)} there exist distinct $\kappa,\lambda$ such that $d'_i = d_i - 1$ if $i \in \{\kappa,\lambda\}$ and $d'_i =d_i$ otherwise, or \emph{(iii)} there exists a $\kappa$ so that $d'_i = d_i - 2$ if $i = \kappa$ and $d'_i =d_i$ otherwise. In the case (ii) we say that $d'$ has two nodes with degree deficit one, and in the case (iii) we say that $d'$ has one node with degree deficit two. A family $\mathcal{D}$ of graphical degree sequences is called \emph{$P$-stable} \cite{Jerrum1990}, if there exists a polynomial $q(n)$ such that for all $d \in \mathcal{D}$ we have $|\mathcal{G}'(d)|/|\mathcal{G}(d)| \leq q(n)$, where $n$ is the number of components of $d$. 
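Since $\mathcal{G}(d)$ and $\mathcal{G}'(d)$ are finite, the ratio in the definition of $P$-stability can be evaluated exactly for tiny sequences. The following brute-force Python sketch (our own illustration; the choice $d = (2,2,2,1,1)$ is arbitrary) enumerates all labeled graphs on $n = 5$ nodes and counts those in $\mathcal{G}(d)$ and in $\mathcal{G}'(d)$.

```python
from itertools import combinations

def realizations(n, predicate):
    """Count labeled simple graphs on n nodes whose degree sequence
    satisfies `predicate`, by brute force over all edge subsets."""
    pairs = list(combinations(range(n), 2))
    count = 0
    for mask in range(1 << len(pairs)):
        deg = [0] * n
        for idx, (u, v) in enumerate(pairs):
            if mask >> idx & 1:
                deg[u] += 1
                deg[v] += 1
        if predicate(deg):
            count += 1
    return count

d = (2, 2, 2, 1, 1)  # a small graphical degree sequence

# |G(d)|: realizations matching d exactly.
g_d = realizations(len(d), lambda deg: tuple(deg) == d)

# |G'(d)|: degree sequences dominated by d with total deficit at most 2.
def in_g_prime(deg):
    return all(x <= y for x, y in zip(deg, d)) and sum(d) - sum(deg) <= 2

g_prime_d = realizations(len(d), in_g_prime)
ratio = g_prime_d / g_d  # the quantity that P-stability bounds by q(n)
```

Note that, by parity of degree sums, the total deficit of a graph in $\mathcal{G}'(d)$ is $0$ or $2$, never $1$, matching the case analysis above.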
Jerrum and Sinclair \cite{Jerrum1990} define the following Markov chain on $\mathcal{G}'(d)$, which will henceforth be referred to as the \emph{JS chain}.\footnote{A slightly different definition of stability is given by Jerrum, McKay and Sinclair \cite{Jerrum1989graphical}. Based on this variant, one could define the corresponding variant of the JS chain. Nevertheless, the definitions of stability in \cite{Jerrum1989graphical} and \cite{Jerrum1990} (and their corresponding definitions of strong stability) are equivalent. To avoid confusion, here we only use the definitions in \cite{Jerrum1990}.} Let $G \in \mathcal{G}'(d)$ be the current state of the chain: \begin{itemize}[topsep=3pt,itemsep=3pt,parsep=0pt,partopsep=20pt] \item With probability $1/2$, do nothing. \item Otherwise, select an ordered pair $i,j$ of nodes uniformly at random and \begin{itemize}[topsep=3pt,itemsep=3pt,parsep=0pt,partopsep=20pt] \item if $G \in \mathcal{G}(d)$ and $(i,j)$ is an edge of $G$, then delete $(i,j)$ from $G$, \item if $G \notin \mathcal{G}(d)$, the degree of $i$ in $G$ is less than $d_i$, and $(i,j)$ is not an edge of $G$, then add $(i,j)$ to $G$. If the new degree of $j$ exceeds $d_j$, then select an edge $(j,k)$ uniformly at random and delete it. \end{itemize} \end{itemize} The graphs $G,G' \in \mathcal{G}'(d)$ are \emph{JS adjacent} if $G$ can be obtained from $G'$ with positive probability in one transition of the JS chain and vice versa. The following properties of the JS chain are easy to check. \begin{theorem}[Follows by \cite{Jerrum1990}] The JS chain is irreducible, aperiodic and symmetric, and, hence, has uniform stationary distribution over $\mathcal{G}'(d)$. Moreover, $P(G,G')^{-1} \leq 2n^3$ for all JS adjacent $G,G' \in \mathcal{G}'(d)$, and also the maximum in- and out-degrees of the state space graph of the JS chain are bounded by $n^3$. 
\end{theorem} We say that two graphs $G, G'$ are within distance $r$ in the JS chain if there exists a path of length at most $r$ from $G$ to $G'$ in the state space graph of the JS chain. By $\text{dist}(G,d)$ we denote the minimum distance of $G$ from an element in $\mathcal{G}(d)$. The following parameter will play a central role in this work. Let \begin{equation}\label{eq:distance} k_{JS}(d) = \max_{G \in \mathcal{G}'(d)} \text{dist}(G,d) \,. \end{equation} Based on the parameter $k_{JS}(d)$, we define the notion of \emph{strong stability}. The simple observation in Proposition \ref{prop:strong_stable} justifies the terminology. For the different settings studied in this work, i.e., for sampling bipartite graphs or joint degree matrix realizations, the definition of $k_{JS}$ is adjusted accordingly (see Appendices \ref{sec:bipartite} and \ref{sec:auxiliary}). \begin{definition}[Strong stability]\label{def:strong_stable} A family of graphical degree sequences $\mathcal{D}$ is called \emph{strongly stable} if there exists a constant $\ell$ such that $k_{JS}(d) \leq \ell$ for all $d \in \mathcal{D}$. \end{definition} \begin{proposition}\label{prop:strong_stable} If $\mathcal{D}$ is strongly stable, then it is $P$-stable. \end{proposition} \begin{proof} Suppose $\mathcal{D}$ is strongly stable with respect to the constant $\ell$. Let $d \in \mathcal{D}$ be a degree sequence with $n$ components. For every $G \in \mathcal{G}'(d) \mathbin{\mathchoice{\mysetminusD}{\mysetminusT}{\mysetminusS}{\mysetminusSS}} \mathcal{G}(d)$ choose some $\varphi(G) \in \mathcal{G}(d)$ within distance $k = k_{JS}(d)$ of $G$. As the in-degree of any node in the state space graph of the JS chain is bounded by $n^3$, the number of paths with length at most $k$ that end up at any particular graph in $\mathcal{G}(d)$ is upper bounded by $(n^3)^k$. Therefore, $|\mathcal{G}'(d)|/|\mathcal{G}(d)| \leq n^{3k} \le n^{3\ell}$, meaning that $\mathcal{D}$ is $P$-stable, since $\ell$ is a constant.
\end{proof} Finally, the lazy version of the \emph{switch chain on $\mathcal{G}(d)$} is defined as follows (see, e.g., \cite{Kannan1999,Cooper2007}). Let $G\in \mathcal{G}(d)$ be the current state of the chain: \begin{itemize}[topsep=3pt,itemsep=3pt,parsep=0pt,partopsep=20pt] \item With probability $1/2$, do nothing. \item Otherwise, select two edges $\{a,b\}$ and $\{x,y\}$ uniformly at random, and select a perfect matching $M$ on nodes $\{x,y,a,b\}$ uniformly at random (there are three possible options). If $M \cap E(G) = \emptyset$, then delete $\{a,b\}, \{x,y\}$ from $E(G)$ and add the edges of $M$. This local operation is called a \emph{switch}. \end{itemize} The graphs $G,G' \in \mathcal{G}(d)$ are \emph{switch adjacent} if $G$ can be obtained from $G'$ with positive probability in one transition of this chain and vice versa. It is well-known that the switch chain is irreducible, aperiodic and symmetric (e.g., \cite{Greenhill2017journal} and references therein), and, thus, has uniform stationary distribution over $\mathcal{G}(d)$. Furthermore, it is a matter of simple counting that $P(G,G')^{-1} \leq 6n^4$ for all switch adjacent $G,G' \in \mathcal{G}(d)$, and the maximum in- and out-degrees of the state space graph of the switch chain are bounded by $n^4$. \subsection{Joint Degree Matrix Model}\label{sec:jdm_model} Here, in addition to the degrees, we would also like to specify the number of edges between nodes of degree $i$ and nodes of degree $j$ for every pair $(i,j)$. Let $V = \{1,\dots,n\}$ be a set of nodes. An instance of the joint degree matrix (JDM) model is given by a partition $V_1 \cup V_2 \cup \dots \cup V_q$ of $V$ into pairwise disjoint (degree) classes, a symmetric \emph{joint degree matrix} $c = (c_{ij})_{i,j \in [q]}$ of non-negative integers, and a sequence $d = (d_1,\dots,d_q)$ of non-negative integers.\footnote{This is shorthand notation for the degree sequence.
Alternatively, we could write $\hat{d} = \Big(d_1^1,\dots,d_1^{|V_1|},\dots,d_q^1,\dots,d_q^{|V_q|}\Big)$ corresponding to the definition of a graphical degree sequence. In such a case, $d_i^{j}=d_i$ for $i\in [q]$ and $j \in \{1,\ldots,|V_i|\}$.} We say that the tuple $((V_i)_{i \in [q]},c,d)$ (or just $(c,d)$ when it is clear what the partition is) is graphical, if there exists a simple, undirected, labeled graph $G = (V,E)$ on the nodes in $V$ such that all nodes in $V_i$ have degree $d_i$ and there are precisely $c_{ij}$ edges between nodes in $V_i$ and $V_j$. Such a $G$ is called a graphical realization of the tuple. We let $\mathcal{G}((V_i)_{i \in [q]},c,d)$ (or just $\mathcal{G}(c,d)$) denote the set of all graphical realizations of $((V_i)_{i \in [q]},c,d)$. We focus on the case of $q=2$, i.e., when two degree classes are given. While switches maintain the degree sequence, this is no longer true for the joint degree constraints. However, some switches do respect these constraints as well, e.g., if $w, y$ in Figure \ref{fig:switch_0} are in the same degree class. Thus, we are interested in the following (lazy) \emph{restricted switch Markov chain} for sampling graphical realizations of $\mathcal{G}(c,d)$. Let $G\in \mathcal{G}(c,d)$ be the current state of the chain: \begin{itemize}[topsep=3pt,itemsep=3pt,parsep=0pt,partopsep=20pt] \item With probability $1/2$, do nothing. \item Otherwise, try to perform a \emph{switch move}: select two edges $\{a,b\}$ and $\{x,y\}$ uniformly at random, and select a perfect matching $M$ on nodes $\{x,y,a,b\}$ uniformly at random. If $M \cap E(G) = \emptyset$ and $G + M - (\{a,b\} \cup \{x,y\}) \in \mathcal{G}(c,d)$, then delete $\{a,b\}, \{x,y\}$ from $E(G)$ and add the edges of $M$. \end{itemize} This chain is irreducible, aperiodic and symmetric \cite{Amanatidis2015,CzabarkaDEM15}.
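As a concrete illustration of a single transition of this chain, the following Python sketch (our own, not taken from the formal development; edges are stored as sorted tuples and `cls` maps each node to its degree class) attempts one switch move and rejects it unless the result remains simple and preserves every count $c_{ij}$.

```python
import random
from collections import Counter

def norm(u, v):
    """Represent the edge {u, v} as a sorted tuple."""
    return (u, v) if u < v else (v, u)

def jdm_counts(edges, cls):
    """Counter of unordered degree-class pairs of the edges: the c_ij."""
    return Counter(tuple(sorted((cls[u], cls[v]))) for u, v in edges)

def restricted_switch_step(edges, cls, rng=random):
    """One attempted (non-lazy) switch move of the restricted chain.

    `edges` is a set of sorted tuples with at least two elements, and
    `cls` maps node -> degree class.  The switch is applied only if the
    result is a simple graph realizing the same joint degree counts;
    otherwise the current state is returned unchanged.
    """
    (a, b), (x, y) = rng.sample(sorted(edges), 2)
    if len({a, b, x, y}) < 4:
        return edges  # shared endpoint: no perfect matching on 4 nodes
    # The three perfect matchings on {a, b, x, y}; one is the deleted pair.
    M = rng.choice([{norm(a, b), norm(x, y)},
                    {norm(a, x), norm(b, y)},
                    {norm(a, y), norm(b, x)}])
    if M & edges:
        return edges  # M intersects E(G): some edge is already present
    new_edges = (edges - {(a, b), (x, y)}) | M
    if jdm_counts(new_edges, cls) != jdm_counts(edges, cls):
        return edges  # the move would change some c_ij: reject
    return new_edges
```

Since a switch never changes node degrees, checking that all counts $c_{ij}$ are preserved is exactly the membership condition $G + M - (\{a,b\} \cup \{x,y\}) \in \mathcal{G}(c,d)$ from the description above; iterating this step together with the probability-$1/2$ laziness simulates the restricted switch chain.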
Like the switch chain defined above, $P(G,G')^{-1} \leq n^4$ for all adjacent $G, G' \in \mathcal{G}(c,d)$, and also the maximum in- and out-degrees of the state space graph are less than $n^4$. Since bounding the mixing time of this chain on $\mathcal{G}(c,d)$ has been elusive \cite{Amanatidis2015,StantonP11,Erdos2015decomposition}, we follow a similar approach as in the case of the switch chain for undirected and bipartite graphs. In Appendix \ref{sec:auxiliary} the simpler \emph{hinge flip Markov chain} is defined on a strictly larger state space (and even for a more general model than JDM). This also gives rise to the corresponding strong stability definition. As the whole analysis of this auxiliary chain takes place in the appendix, we defer any further definitions there. \section{Sampling Undirected Graphs}\label{sec:main_result} Theorem \ref{thm:switch} below is our main result regarding the mixing time of the switch chain for strongly stable degree sequences. Its proof is divided into two parts. First, in Section \ref{sec:js_flow}, by giving an efficient multicommodity flow, we show that for any $d$ in a family of strongly stable degree sequences the JS chain is rapidly mixing on $\mathcal{G}'(d)$. Then, in Section \ref{sec:js_to_switch}, we show that such an efficient flow for the JS chain on $\mathcal{G}'(d)$ can be transformed into an efficient flow for the switch chain on $\mathcal{G}(d)$. This yields the following theorem. \begin{theorem}\label{thm:switch} Let $\mathcal{D}$ be a strongly stable family of degree sequences with respect to some constant $k$. Then there exists a polynomial $q(n)$ such that, for any $0 < \epsilon < 1$, the mixing time $\tau_{\mathrm{sw}}$ of the switch chain for a graphical sequence $d = (d_1,\dots,d_n) \in \mathcal{D}$ satisfies \[ \tau_{\mathrm{sw}}(\epsilon) \leq q(n)^k \ln(1/\epsilon) \,. \] \end{theorem} We discuss two direct corollaries of Theorem \ref{thm:switch}.
Both corollaries are consequences of the corresponding results in \cite{Jerrum1989graphical}, where it is shown that the families of sequences satisfying \eqref{eq:stable1} and \eqref{eq:stable2}, respectively, are (strongly) stable. We work with a slightly different definition of stability here than the one used in \cite{Jerrum1989graphical}. The reason why the results from \cite{Jerrum1989graphical} carry over to the definition used here (which was introduced in \cite{Jerrum1990}) is explained in Appendix \ref{app:open_question}. The equivalence of these definitions of $P$-stability is also claimed in \cite{Jerrum1989graphical}. Corollary \ref{lem:stable_jerrum} below extends the rapid mixing results in \cite{Cooper2007,Greenhill2017journal}. In particular, the condition of \cite{Greenhill2017journal} for rapid mixing is $\delta\ge 1$ and $3\le \Delta \le \frac{1}{3}\sqrt{2m}$, which is a special case of the condition \eqref{eq:stable1} below.\footnote{The condition in Theorem 1.1 in \cite{Greenhill2017journal} is a special case of the condition of Theorem 3.1 in \cite{Jerrum1990} which in turn is a special case of the condition of Corollary \ref{lem:stable_jerrum}. See also the remark after Theorem 8 in \cite{Jerrum1989graphical}.} \begin{corollary}\label{lem:stable_jerrum} Let $\mathcal{D} = \mathcal{D}(\delta,\Delta,m)$ be the set of all graphical degree sequences $d = (d_1,\dots,d_n)$ satisfying \begin{equation}\label{eq:stable1} (2m - n \delta)(n\Delta - 2m) \leq (\Delta - \delta)\big[(2m - n\delta)(n - \Delta - 1) + (n\Delta - 2m)\delta\big] \end{equation} where $\delta$ and $\Delta$ are the minimum and maximum component of $d$, respectively, and $m = \frac{1}{2} \sum_{i = 1}^n d_i$. For any $d \in \mathcal{D}$, we have $ k_{JS}(d) \leq 6. $ Hence, the switch chain is rapidly mixing for sequences in $\mathcal{D}$. 
\end{corollary} The next corollary is a special case of Corollary \ref{lem:stable_jerrum} and answers an open question posed in \cite{Greenhill2017journal}. It is a result of similar flavor, but the corresponding condition is stated only in terms of $\delta$ and $\Delta$. \begin{corollary}\label{cor:stable} Let $\mathcal{D} = \mathcal{D}(\delta,\Delta)$ be the set of all graphical degree sequences $d = (d_1,\dots,d_n)$ satisfying \begin{equation}\label{eq:stable2} (\Delta - \delta + 1)^2 \leq 4\delta(n - \Delta - 1) \end{equation} where $\delta$ and $\Delta$ are the minimum and maximum component of $d$, respectively. For any $d \in \mathcal{D}$, we have $ k_{JS}(d) \leq 6. $ Hence, the switch chain is rapidly mixing for sequences in $\mathcal{D}$. \end{corollary} Explicit families satisfying these conditions are given in \cite{Jerrum1989graphical}. For instance, all sequences $d = (d_1,\dots,d_n)$ with (i) $\delta(d) \geq 1$ and $\Delta(d) \leq 2\sqrt{n}-2$, or (ii) $\delta(d) \geq \frac{1}{4}n$ and $\Delta(d) \leq \frac{3}{4}n - 1$ satisfy \eqref{eq:stable2} but not necessarily the conditions in \cite{Greenhill2017journal}. We refer the reader to \cite{Jerrum1989graphical,Salas2016} for more examples. The bounds in Corollaries \ref{lem:stable_jerrum} and \ref{cor:stable} are in a sense best possible with respect to the graph parameters involved. Namely, there exist non-$P$-stable degree sequence families whose members only slightly violate \eqref{eq:stable2}; see the discussion in \cite{Jerrum1989graphical} for details. \subsection{Flow for the Jerrum-Sinclair Chain}\label{sec:js_flow} Jerrum and Sinclair \cite{Jerrum1990} claim, without proof, that the JS chain is rapidly mixing for (some) families of stable degree sequences. For completeness, we prove in Theorem \ref{thm:js_mixing} that the chain is rapidly mixing for any family of strongly stable degree sequences. For the proof of the theorem see Appendix \ref{app:js}.
\begin{theorem}[\cite{Jerrum1990}]\label{thm:js_mixing} Let $\mathcal{D}$ be a strongly stable family of degree sequences with respect to some constant $k$. Then there exist polynomials $p(n)$ and $r(n)$ such that for any $d = (d_1,\dots,d_n)\in \mathcal{D}$ there exists an efficient multicommodity flow $f$ for the JS chain on $\mathcal{G}'(d)$ satisfying $\max_e f(e) \leq p(n)/ |\mathcal{G}'(d)|$ and $\ell(f) \leq r(n)$. \end{theorem} Our proof of Theorem \ref{thm:js_mixing} uses conceptually similar arguments to the ones used in \cite{Cooper2007} for the analysis of the switch chain on regular undirected graphs. However, the analysis done here for the JS chain is, in our opinion, easier and cleaner than the corresponding analysis for the switch chain. In particular, the so-called \emph{circuit processing procedure} is simpler in our setting, as it only involves altering edges in the symmetric difference of two graphical realizations in a straightforward fashion. In the switch chain analyses \cite{Cooper2007,Greenhill2017journal,Erdos2013,Erdos2015decomposition,ErdosMMS2018} one also has to temporarily alter edges that are not in the symmetric difference and this significantly complicates things. Moreover, for the analysis of the JS chain, we can rely on arguments used (in a somewhat different context) by Jerrum and Sinclair \cite{Jerrum1989} for the analysis of a Markov chain for sampling (near) perfect matchings of a given graph. This usage of arguments in \cite{Jerrum1989} was suggested by Jerrum and Sinclair \cite{Jerrum1990} for showing that the JS chain is rapidly mixing for stable degree sequences. \subsection{Flow Transformation}\label{sec:js_to_switch} Next we show that, when $d$ comes from a family of strongly stable degree sequences, an efficient multicommodity flow for the JS chain on $\mathcal{G}'(d)$ can be transformed into an efficient multicommodity flow for the switch chain on $\mathcal{G}(d)$. 
In combination with Theorem \ref{thm:js_mixing} this implies that if $\mathcal{D}$ is strongly stable, then for any sequence in $\mathcal{D}$ there exists an efficient flow for the switch chain. For the sake of simplicity, we did not attempt to optimize the bounds in the proof of Theorem \ref{thm:transformation}. \begin{theorem}\label{thm:transformation} Let $\mathcal{D}$ be a strongly stable family of degree sequences with respect to some constant $k$, and let $p(n)$ and $r(n)$ be polynomials such that for any $d = (d_1,\dots,d_n)\in \mathcal{D}$ there exists an efficient multicommodity flow $f_d$ for the JS chain on $\mathcal{G}'(d)$ with the property that $\max_{e} f_d(e) \leq p(n) / |\mathcal{G}'(d)|$ and $\ell(f_d) \leq r(n)$. Then there exists a polynomial $t(n)$ such that for all $d = (d_1,\dots,d_n) \in \mathcal{D}$ there is a feasible multicommodity flow $g_d$ for the switch chain on $\mathcal{G}(d)$ with \emph{(i)} $\ell(g_d) \leq 2k \cdot \ell(f_d)$, and \emph{(ii)} for every edge $e$ of the state space graph of the switch chain, we have \vspace{-10pt} \begin{equation}\label{eq:good_bound} g_d(e) \leq t(n)^k \cdot \frac{p(n)}{|\mathcal{G}(d)|} \,. \end{equation} \end{theorem} \begin{proof} Let $d \in \mathcal{D}$. For simplicity we will write $f$ and $g$ instead of $f_d$ and $g_d$, respectively. Since there are two Markov chains involved in the proof, each with a different state space graph, we should clarify that $\mathcal{P}_{xy}$ refers to the set of simple paths between $x$ and $y$ in the state space graph of the \textit{JS chain}. We first introduce some additional notation. For every pair $(x,y) \in \mathcal{G}'(d) \times \mathcal{G}'(d)$ with $x \neq y$, and for any $p \in \mathcal{P}_{xy}$, we write $\alpha(p)= f(p)|\mathcal{G}'(d)|^2$. Recall that since the stationary distribution of the JS chain is uniform on $\mathcal{G}'(d)$ we have $\sum_{p \in \mathcal{P}_{xy}} f(p) = |\mathcal{G}'(d)|^{-2}$. Thus, $\sum_{p \in \mathcal{P}_{xy}} \alpha(p) = 1$.
Moreover, we define $\alpha(e) = \sum_{p \in \mathcal{P}_{xy} : e \in p} \alpha(p) = f(e) |\mathcal{G}'(d)|^2$. Now, for every $G \in \mathcal{G}'(d) \mathbin{\mathchoice{\mysetminusD}{\mysetminusT}{\mysetminusS}{\mysetminusSS}} \mathcal{G}(d)$ choose some $\varphi(G) \in \mathcal{G}(d)$ that is within distance $k$ of $G$ in the JS chain, and take $\varphi(G) = G$ for $G \in \mathcal{G}(d)$. Based on the arguments in the proof of Proposition \ref{prop:strong_stable}, it follows that for any $H \in \mathcal{G}(d)$, \begin{equation}\label{eq:neighborhood_bound} |\varphi^{-1}(H)| \leq n^{3k} \,, \end{equation} using that the maximum in-degree of any element in the state space graph of the JS chain is upper bounded by $n^3$. In particular, this implies that \begin{equation}\label{eq:ratio} \frac{|\mathcal{G}'(d)|}{|\mathcal{G}(d)|} \leq n^{3k} \,. \end{equation} Let the flow $h$ be defined as follows for any given pair $(x,y)$. If $(x,y) \in \mathcal{G}(d) \times \mathcal{G}(d)$, take $h(p) = \alpha(p)/|\mathcal{G}(d)|^2$ for all $p \in \mathcal{P}_{xy}$. If either $x$ or $y$ is not contained in $\mathcal{G}(d)$, take $h(p) = 0$ for every $p \in \mathcal{P}_{xy}$. Note that $h$ is a multicommodity flow that routes $1/|\mathcal{G}(d)|^2$ units of flow between any pair $(x,y) \in \mathcal{G}(d) \times \mathcal{G}(d)$, and zero units of flow between any other pair of states in $\mathcal{G}'(d)$. Note that \begin{equation}\label{eq:traverse1} h(e) \le \frac{|\mathcal{G}'(d)|^2}{|\mathcal{G}(d)|^2} \cdot f(e) \leq \frac{|\mathcal{G}'(d)|^2}{|\mathcal{G}(d)|^2} \frac{p(n)}{|\mathcal{G}'(d)|} = \frac{p(n)}{|\mathcal{G}(d)|} \frac{|\mathcal{G}'(d)|}{|\mathcal{G}(d)|} \leq n^{3k} \cdot \frac{p(n)}{|\mathcal{G}(d)|} \,, \end{equation} using the definition of $h$ in the first inequality, the assumption on $f$ in the second inequality, and the upper bound of (\ref{eq:ratio}) in the last one. 
Next, we merge the ``auxiliary states'' in $\mathcal{G}'(d) \mathbin{\mathchoice{\mysetminusD}{\mysetminusT}{\mysetminusS}{\mysetminusSS}} \mathcal{G}(d)$, i.e., the states not reached by the switch chain, with the elements of $\mathcal{G}(d)$. Informally speaking, for every $H \in \mathcal{G}(d)$ we merge all the nodes in $\varphi^{-1}(H)$ into a \emph{supernode}. Self-loops created in this process are removed, and parallel arcs between states are merged into one arc that gets all the flow of the parallel arcs. Formally, we consider the graph $\Gamma$ where $V(\Gamma) = \mathcal{G}(d)$ and $e = (H,H') \in E(\Gamma)$ if and only if $H$ and $H'$ are switch adjacent or if there exist $G \in \varphi^{-1}(H)$ and $G' \in \varphi^{-1}(H')$ such that $G$ and $G'$ are JS adjacent. Moreover, for a given $h$-flow carrying path $(G_1,G_2,\ldots,G_q) = p \in \mathcal{P}_{xy}$, let $p'_{\Gamma} = (\varphi(G_1),\varphi(G_2),\dots,\varphi(G_q))$ be the corresponding (possibly non-simple) directed path in $\Gamma$. We remove any self-loops and cycles from $p'_{\Gamma}$ and let $p_{\Gamma}$ be the resulting simple path in $\Gamma$. Over $p_{\Gamma}$ we route $h_{\Gamma}(p_{\Gamma})= h(p)$ units of flow. Note that $h_{\Gamma}$ is a flow that routes $1/|\mathcal{G}(d)|^2$ units of flow between any pair of states $(x,y) \in \mathcal{G}(d) \times \mathcal{G}(d)$ in the graph $\Gamma$ and that $\ell(h_{\Gamma}) \leq \ell(f)$. Furthermore, the flow $h_{\Gamma}$ on an edge $(H, H') \in E(\Gamma)$ is then bounded by \begin{equation}\label{eq:up_bound_h_Gamma} h_{\Gamma}(H,H') \le \!\! \sum_{\substack{(G,G') \in \varphi^{-1}(H) \times \varphi^{-1}(H') \\ G \text{ and } G' \text{ are JS adjacent}}} \!\! h(G,G') \,, \end{equation} where the inequality (instead of an equality) follows from the fact that when we map a path $p \in \mathcal{P}_{xy}$ to the corresponding path $p_{\Gamma}$, some edges of the intermediate path $p'_{\Gamma}$ may be deleted.
Using (\ref{eq:neighborhood_bound}), it follows that $|\varphi^{-1}(H) \times \varphi^{-1}(H')| \leq n^{3k} \cdot n^{3k} = n^{6k}$ and therefore, in combination with \eqref{eq:traverse1} and \eqref{eq:up_bound_h_Gamma}, we have that \begin{equation}\label{eq:traverse2} h_{\Gamma}(e) \leq n^{3k} \cdot n^{6k} \cdot \frac{p(n)}{|\mathcal{G}(d)|} \,. \end{equation} Now recall how $E(\Gamma)$ was defined. An edge $(H,H')$ might have been added because: \emph{(i)} $H$ and $H'$ are switch adjacent (we call these edges of $\Gamma$ \emph{legal}), or \emph{(ii)} $H$ and $H'$ are \emph{not} switch adjacent but there exist $G \in \varphi^{-1}(H)$ and $G' \in \varphi^{-1}(H')$ that are JS adjacent (we call these edges of $\Gamma$ \emph{illegal}). The final step of the proof consists of showing that the flow on every illegal edge in $E(\Gamma)$ can be rerouted over a ``short'' path consisting only of legal edges. In particular, for every flow carrying path $p$ using $e$, we are going to show that the flow $h_{\Gamma}(p)$ is rerouted over some legal detour, the length of which is bounded by a multiple of $k$. Doing this iteratively for every remaining illegal edge on $p$, we obtain a directed path $p''$ only using legal edges, i.e., edges of the state space graph of the switch chain. Of course, $p''$ might not be simple, so any self-loops and cycles can be removed, as before, to obtain the simple legal path $p'$. Figure \ref{fig:rerouting} illustrates this procedure for a path with a single illegal edge. Note that deleting self-loops and cycles only decreases the amount of flow on an edge. \begin{figure}[t!] 
\centering \scalebox{0.6}{ \begin{tikzpicture}[ ->, >=stealth', shorten >=0.5pt, auto, node distance=1cm, semithick, every state/.style={circle,text=black,inner sep=5pt,minimum size=1pt}, ] \begin{scope} \node[state] (M1) {x}; \node[state] (M2) [right=2cm of M1] {}; \node[state] (M3) [right=2cm of M2] {}; \node[state] (M4) [right=2cm of M3] {}; \node[state] (M5) [right=2cm of M4] {y}; \node[state] (T1) [above=1.5cm of M2] {}; \node[state] (T2) [above=1.5cm of M3] {}; \node[state] (B1) [below=1.5cm of M2] {}; \node[state] (B2) [below=1.5cm of M3] {}; \path[every node/.style={sloped,anchor=south,auto=false}] (M1) edge[line width=1.5pt] node {} (M2) (M2) edge[line width=1.5pt] node {} (M3) (M3) edge[dashed,line width=1.5pt] node {} (M4) (M4) edge[line width=1.5pt] node {} (M5) (M3) edge[line width=4pt] node {} (B2) (B2) edge[line width=4pt] node {} (B1) (B1) edge[line width=4pt] node {} (M2) (M2) edge[line width=4pt] node {} (T1) (T1) edge[line width=4pt] node {} (T2) (T2) edge[line width=4pt] node {} (M4); \end{scope} \end{tikzpicture} \ \ \ \ \ \ \ \ \ \quad \ \ \ \ \ \ \ \ \ \begin{tikzpicture}[ ->, >=stealth', shorten >=0.5pt, auto, node distance=1cm, semithick, every state/.style={circle,text=black,inner sep=5pt,minimum size=1pt}, ] \begin{scope} \node[state] (M1) {x}; \node[state] (M2) [right=2cm of M1] {}; \node[state] (M3) [right=2cm of M2] {}; \node[state] (M4) [right=2cm of M3] {}; \node[state] (M5) [right=2cm of M4] {y}; \node[state] (T1) [above=1.5cm of M2] {}; \node[state] (T2) [above=1.5cm of M3] {}; \node[state] (B1) [below=1.5cm of M2] {}; \node[state] (B2) [below=1.5cm of M3] {}; \path[every node/.style={sloped,anchor=south,auto=false}] (M1) edge[line width=2pt] node {} (M2) (M4) edge[line width=2pt] node {} (M5) (M2) edge[line width=2pt] node {} (T1) (T1) edge[line width=2pt] node {} (T2) (T2) edge[line width=2pt] node {} (M4); \end{scope} \end{tikzpicture}} \caption{The dashed edge on the left represents an illegal edge, and the bold path represents 
a ``short'' detour. The shortcutted path on the right is the result of removing any loops and cycles.} \label{fig:rerouting} \end{figure} The crucial observation here is that if $(H,H') \in E(\Gamma)$, then $|E(H) \triangle E(H')| \leq 4k$. That is, even though $H$ and $H'$ might not be switch adjacent, they are not too far apart. To see this, first note that the symmetric difference of any two JS adjacent graphs has size at most 2. Moreover, if one of any two JS adjacent graphs is in $\mathcal{G}(d)$, then their symmetric difference has size 1. In particular, for any $G^* \in \mathcal{G}'(d)$, we have $|E(G^*) \triangle E(\varphi(G^*))|\le 2k - 1$. Clearly, if $(H,H') \in E(\Gamma)$ is legal, then $|E(H) \triangle E(H')| = 4 \leq 4k$. Assume $(H,H') \in E(\Gamma)$ is illegal. Then there exist JS adjacent $G \in \varphi^{-1}(H)$ and $G'\in \varphi^{-1}(H')$ and according to the above we have \begin{eqnarray*} |E(H) \triangle E(H')| & \leq & |E(H) \triangle E(G)| + |E(G) \triangle E(G')| + |E(G') \triangle E(H')|\nonumber \\ & \leq & 2k-1 + 2 + 2k - 1 \leq 4k \,. \nonumber \end{eqnarray*} Moreover, this implies that we can go from $H$ to $H'$ in a ``small'' number of moves in the switch chain. This easily follows from most results showing that the state space of the switch chain is connected, e.g., from \cite{Taylor81}.\footnote{To be precise, we can focus on the subgraph induced by the nodes with positive degree in the symmetric difference. Taylor's proof on the connectivity of the state space of the switch chain \cite{Taylor81} implies that we can find $O(k^2)$ switches to get from $H$ to $H'$, only using edges in this induced subgraph.} Specifically, here we use the following result of Erd{\H{o}}s, Kir{\'a}ly, and Mikl{\'o}s \cite{Erdos2013swap} which implies that we can go from $H$ to $H'$ in $2k$ switches. \begin{theorem}[follows from Theorem 3.6 in \cite{Erdos2013swap}] \label{thm:swap} Let $d= (d_1,\dots,d_n)$ be a degree sequence. 
For any two graphs $H, H' \in \mathcal{G}(d)$, $H$ can be transformed into $H'$ using at most $\frac{1}{2}|E(H) \triangle E(H')|$ switches. \end{theorem} For every \emph{illegal} edge $e \in E(\Gamma)$, we choose such a (simple) path from $H$ to $H'$ with at most $2k$ transitions and reroute the flow of $e$ over this path. Note that for any legal edge $e \in E(\Gamma)$, the number of illegal edge detours that use $e$ for this rerouting procedure is at most $(n^4)^{2k} \cdot (n^4)^{2k} = n^{16k}$, using the fact that in the state space graph of the switch chain the maximum degree of an element is at most $n^4$ and any illegal edge using $e$ in its rerouting procedure must lie within distance $2k$ of $e$. Combining this with (\ref{eq:traverse2}), we see that the resulting flow, $g$, satisfies \begin{equation*}\label{eq:traverse3} g(e) \leq \frac{p(n) \cdot n^{9k} + p(n) \cdot n^{16 k}}{|\mathcal{G}(d)|} \,. \end{equation*} Note that $\ell(g) \leq 2k \ell(h_{\Gamma})$. This holds because every illegal edge on a flow-carrying path gives rise to at most $2k$ additional edges as a result of rerouting the flow over legal edges, and the removal of loops and cycles from any resulting non-simple path can only decrease its length. Combining this inequality with $\ell(h_{\Gamma}) \leq \ell(f)$ (as we noted above), we get $\ell(g) \leq 2k \cdot \ell(f)$. This completes the proof of (\ref{eq:good_bound}), as we have now constructed a feasible multicommodity flow $g$ in the state space graph of the switch chain with the desired properties. \end{proof} \section{Sampling Graphs with a Given JDM}\label{sec:switch_jdm_main} We may use a similar high-level approach to that in Section \ref{sec:main_result} to show that the (restricted) switch chain defined in Subsection \ref{sec:jdm_model} is \emph{always} rapidly mixing for JDM instances with two degree classes.
\begin{theorem}\label{thm:stable_jdm1} Let $\mathcal{D}$ be the family of instances of the joint degree matrix model with two degree classes. Then the switch chain is rapidly mixing for instances in $\mathcal{D}$. \end{theorem} In analogy to the JS chain, we first analyze a simpler Markov chain, called the \emph{hinge flip chain}, that adds and removes (at most) one edge at a time. Very much like the JS chain, the hinge flip chain might slightly violate the degree constraints. Now, however, the joint degree constraints might be violated as well. The definition of \emph{strong stability} is appropriately adjusted to account for both deviations from the original requirements. Finally, we use an embedding argument similar to that in Theorem \ref{thm:transformation}. The relevant definitions, as well as the analysis of this auxiliary chain, are deferred to Appendix \ref{sec:auxiliary} due to space constraints. Here we present a high-level outline of the proof of Theorem \ref{thm:stable_jdm1}. \paragraph{Rapid Mixing of the Hinge Flip Chain.} The first step of the proof is to show that the hinge flip chain defined on a \emph{strict superset} of the target state space mixes rapidly for strongly stable instances. Appendix \ref{sec:auxiliary} is dedicated to this step. The fact that we do not want to deviate by more than a constant from the joint degree constraints makes the analysis much more challenging than the one for the JS chain presented in Appendix \ref{app:js}. To overcome the difficulties that arise from this fact, we rely on ideas introduced by Bhatnagar et al.~\cite{Bhatnagar2008} for uniformly sampling bichromatic matchings. In particular, in the circuit processing part of the proof, we process a circuit at multiple places \emph{simultaneously} in case there is only one circuit in the canonical decomposition of a pairing, or we process multiple circuits \emph{simultaneously} in case the decomposition yields multiple circuits.
At the core of this approach lies a variant of the \emph{mountain-climbing problem} \cite{Homma1952,Whittaker1966}. In our case the analysis is more involved than that of \cite{Bhatnagar2008}, and we therefore use different arguments in various parts of the proof. It is interesting to note that the analysis of the hinge flip chain is not carried out in the JDM model but in the more general Partition Adjacency Matrix (PAM) model \cite{Czabarka14,ErdosHIM2017}. The difference from the JDM model is that in each class $V_i$ the nodes need not have the same constant degree but rather follow a given degree sequence of size $|V_i|$. Given that small deviations from the prescribed degrees cannot be directly handled---by definition---by the JDM model, the PAM model is indeed a more natural choice for this step. \paragraph{Strong Stability of JDM Instances.} Next we show that for any JDM instance, any graph in the state space of the hinge flip chain (i.e., graphs that satisfy or \emph{almost} satisfy the joint degree requirements) can be transformed to a graphical realization of the original instance within 6 hinge flips at most. That is, the set of JDM instances is a strongly stable family of instances of the PAM model and thus the hinge flip chain mixes rapidly for JDM instances. See Theorem \ref{cor:stable_jdm} in Appendix \ref{sec:stable_jdm}. \paragraph{Flow Transformation.} The final step is an embedding argument, along the lines of the argument of Subsection \ref{sec:js_to_switch}, for transforming the efficient flow for the hinge flip chain to an efficient flow for the switch chain. As an intermediate step we need an analog of Theorem \ref{thm:swap}, but this directly follows from the proof of irreducibility of the switch chain in \cite{Amanatidis2015}. See Appendix \ref{sec:switch_for_jdm}. 
\section{Discussion} We believe that our ideas can also be used to simplify the switch chain analyses in settings where there is some given forbidden edge set, the elements of which cannot be used in any (bipartite) graphical realization \cite{Greenhill2011,Greenhill2017journal,Erdos2015,ErdosMMS2018}. This is an interesting direction for future work, as it captures the case of sampling directed graphs. Moreover, we suspect it can be shown that the condition in (\ref{eq:bip_same}) is essentially best possible in terms of $\delta_{r}$ and $\Delta_{r}$, in a sense similar to that described in \cite{Jerrum1989graphical} for the results in Corollaries \ref{lem:stable_jerrum} and \ref{cor:stable}.\footnote{We suggest using the same ideas as the proof of Theorem 6 in \cite{Jerrum1989graphical} based on a non-stable family of bipartite degree sequences presented in \cite{Kannan1999}.} While this is an interesting question, our goal here is not to give a full bipartite analogue of \cite{Jerrum1989graphical}. Even so, a deeper understanding of when a family of bipartite degree sequences is strongly stable is missing. In particular, is it possible to unify the results of Corollaries \ref{cor:bipartite} and \ref{cor:bipartite2} under a single condition similar to (\ref{eq:stable1})? Further, it is not clear whether there exist degree sequence families---bipartite or not---that are $P$-stable but not strongly stable. For instance, in a recent work by Gao and Wormald \cite{GaoW18}, who provide a very efficient non-MCMC approximate sampler for certain power-law degree sequences, it is argued that these power-law degree sequences are $P$-stable. Is it the case that these sequences are strongly stable as well? Theorem \ref{thm:switch} would then directly imply that the switch chain is rapidly mixing for this family. A central open question is how to go beyond (strong) stability.
We suspect that the proof template of \cite{Cooper2007} cannot be used for proving rapid mixing of the switch chain for general families of degree sequences. The intuition is that it relies on the fact that there is a set of auxiliary states that is not much larger than the set of actual graphical realizations for a given degree sequence; this property seems very closely related to $P$-stability, and also arises explicitly in the analysis of the bipartite case in \cite{Kannan1999}. This observation suggests the need for a novel approach for studying the mixing of the switch chain on non-stable degree families. Finally, the problem of sampling graphical realizations of a given joint degree distribution with three or more degree classes is also open. Although our proof breaks down for more than two classes, we hope that our high-level approach can facilitate progress on the problem. \section*{Acknowledgements} We are grateful to P{\'{e}}ter Erd{\H{o}}s, Tam{\'{a}}s Mezei and Istv{\'{a}}n Mikl{\'{o}}s for their useful comments. \newpage
\section{Introduction}\label{intro} Data structures with the same value during the rest of their common life can share the same representation. This is exploited in various contexts, e.g., by the {\em intern} method for Strings in Java, by hash-consing in functional languages \cite{goto74}, and by data deduplication during backup. In programming language implementation, hash-consing is probably the best-known representation sharing technique: hash-consing was invented by Ershov in \cite{ershov58} and used by Goto in \cite{goto74} in an implementation of Lisp. Originally, hash-consing was performed during all term creations so that no duplicate terms occurred during the execution of a program. \cite{appelhashconsinggc} explores the idea of using hash-consing only during generational garbage collection: the new generation contains non-hash-consed terms, and on promotion to the older generation, they are hash-consed. For the first time, a representation sharing technique cooperates with the garbage collector. Our approach is most closely related to \cite{appelhashconsinggc}, but also has some important differences. \begin{sloppypar} Representation sharing has been given little explicit attention in the context of Prolog implementation. However, the issue pops up from time to time.
Here are some historical highlights: \begin{itemize} \item in 1989, in his Diplomarbeit, Ulrich Neumerkel \cite{neumerkeldiplomarbeit} mentioned how by applying DFA-minimization to Prolog terms, certain programs can run in linear space (instead of quadratic); there was no implementation; in Section \ref{benchmarks}, his example program is used as a benchmark \item 1991: \cite{VariableShunting} ends with the sentence: {\em It still remains to be seen, however, what we meant by ``folding identical structures''}; the current paper offers a solution to this mysterious sentence \item in a 1995 comp.lang.prolog post, Edmund Grimley-Evans \cite{findall1archive} asked for more sharing in {\em findall/3}, i.e., he wanted the solution list of a call to {\em findall/3} to share with the generator; {\em input sharing} \cite{findallwithoutfindall} does exactly that; Section \ref{findall} describes input sharing more precisely and how it can be implemented efficiently \item in a 2001 Logic Programming Pearl \cite{OKeefePearl}, R. O'Keefe mentioned a {\em findall/3} query that could benefit from representation sharing in the answers; as for the previous bullet, Section \ref{findall} contains the solution \item in 2002, \cite{DemoenICLP2002fresh} gave a fresh view on garbage collection for Prolog; it detailed a number of desirable (optimal) properties of a garbage collector, one of which is the introduction of representation sharing (albeit naming it differently) \item in May 2009, Ulrich Neumerkel posted an excerpt of his Diplomarbeit in comp.lang.prolog and urged implementations to provide for more representation sharing, either during unification, or during garbage collection; he used the term {\em factoring}; we prefer {\em representation sharing}; the current paper is the result of exploring its implementation issues \end{itemize} \end{sloppypar} The paper is organized as follows: Section \ref{repsharversushash} starts with describing what we mean by representation sharing. 
Section \ref{somesharinginProlog} lists a number of more or less popular forms of representation sharing in Prolog. Section \ref{findall} describes how to retain {\em input sharing} for {\em findall/3} and evaluates our implementation on a number of benchmarks. Section \ref{generalities} sets the scene for the focus of the rest of the paper: general sharing for Prolog. Section \ref{intuition} forms the intuition on such sharing, while Section \ref{concepts} introduces the notion of {\em absorption}: it shows when individual cells can share their representation and the approximation that works for us. It then lifts representation sharing from individual cells to compound terms and discusses some properties of our notion of representation sharing. Section \ref{implementation} explains our implementation of representation sharing based on the earlier decisions. Section \ref{sharerbenchmarks} discusses the benchmarks and the experimental results. Section \ref{variants} shows extensions of the basic implementation, variations and related issues. Section \ref{related} discusses related work, and we conclude in Section \ref{conclusion}. We have used hProlog 3.1.* as the Prolog engine to experiment with, but it is clear that everything can be ported to other WAM-like systems as well: we make that more explicit later on. hProlog is a descendant of dProlog as described in \cite{wamvariations}. SICStus Prolog 4.1.1 serves as a yardstick to show that the hProlog time and space figures are close to a reliable state of the art system. All benchmarks were run on an Intel Core2 Duo Processor T8100 2.10 GHz. We assume the reader to be familiar with the WAM \cite{wam:hassan,DHWa83} and Prolog \cite{ClMe84}. We use the term {\em heap} when others use {\em global stack}, i.e., the place where compound terms are allocated. We use {\em local stack} and {\em environment stack} interchangeably and denote it by LS in pictures. 
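Before moving on, the term-creation-time hash-consing discipline recalled in the opening paragraphs is easy to make concrete. The following C sketch is purely illustrative: all names (pair, hc\_cons) are invented, and it is not code from hProlog or any of the cited systems. It only shows the core idea that every constructor call first probes a table, so structurally equal terms end up pointer-equal.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative hash-consing sketch; invented names, not hProlog code.
 * A "term" is a cons pair of an atom and a tail pointer. Every call to
 * the constructor probes a table first, so structurally equal terms
 * share one cell and equality becomes a pointer comparison. */

typedef struct pair {
    long head;               /* an atom, for simplicity */
    struct pair *tail;       /* NULL terminates a list  */
} pair;

#define TABLE_SIZE 1024
static pair *table[TABLE_SIZE];

static size_t hc_hash(long head, pair *tail) {
    return (size_t)(((uintptr_t)tail * 31u + (uintptr_t)head) % TABLE_SIZE);
}

/* Create-or-reuse constructor: duplicates share one representation. */
pair *hc_cons(long head, pair *tail) {
    size_t h = hc_hash(head, tail);
    for (size_t i = 0; i < TABLE_SIZE; i++) {   /* linear probing */
        size_t j = (h + i) % TABLE_SIZE;
        if (table[j] == NULL) {                 /* new term: intern it */
            pair *p = malloc(sizeof *p);
            p->head = head;
            p->tail = tail;
            table[j] = p;
            return p;
        }
        if (table[j]->head == head && table[j]->tail == tail)
            return table[j];                    /* duplicate: share */
    }
    abort();                                    /* toy table is full */
}
```

Under this discipline, constructing the same ground term twice yields the very same cell, so structural equality degenerates to pointer equality. That is also why retrofitting full hash-consing into an existing Prolog system forces changes to unification and the built-in predicates, a cost weighed in the next section.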
\section{Representation Sharing versus Hash-Consing}\label{repsharversushash} Consider the predicates main1 and main2 defined as
\begin{Verbatim}[fontsize=\small, frame=single,samepage=true]
main1 :-              main2 :-
    X = f(1,2,Z),         X = f(1,2,Z),
    Y = f(1,2,Z),         X = Y,
    use(X,Y).             use(X,Y).
\end{Verbatim}
In a naive\footnote{I.e., an implementation without compile-time common subexpression elimination; however, note the danger of such optimization in the presence of destructive assignment: see Section \ref{mutable}} implementation, the execution of ?- main1., just before the call to {\em use/2}, results in a memory situation as in the left of Figure \ref{fig1}. In this figure, the heap cell with Z is a self-reference in the WAM. \begin{figure}[h] \begin{centering} {\epsfig{file=fig1.eps,width=0.6\textwidth}} \caption{Representation Sharing} \label{fig1} \end{centering} \end{figure} Clearly, the terms X and Y are exactly the same from the moment they are created, and therefore they can share the same representation: that sharing can be seen in the right of Figure \ref{fig1} and in the code for the predicate main2. Hash-consing is usually associated with the technique that keeps a hash table of terms and during term creation checks whether a term is new or exists already in the hash table. An implementation with hash-consing usually changes the representation of terms, and consequently the code that deals explicitly with this representation. For Prolog the affected code would be general unification and built-in predicates. That is too intrusive for our aims: we intend our implementation of representation sharing to be easy to integrate in other Prolog systems and there should be no global impact. So, we will keep the usual (WAM) term representation and do not touch any part of the implementation, except for the {\em sharer} module that introduces representation sharing.
Given the complexity of current Prolog systems, this seems to us the only way to make representation sharing accepted by implementors. \subsection*{Some Forms of Representation Sharing for Prolog}\label{somesharinginProlog} Prolog implementations already provide some specific representation sharing. Here are a few examples: \begin{sloppypar} \begin{itemize} \item in older implementations, the predicate {\em copy\_term/2} copies ground terms; in newer implementations ---starting probably with SICStus Prolog \cite{matsphd}--- {\em copy\_term/2} avoids copying ground (sub)terms; this means that the second argument can have some representation sharing with the first argument; however, note that {\em mutable} ground terms must be copied by {\em copy\_term/2}, because otherwise sharing would become observable at the program level; we discuss this issue further in Section \ref{mutable} \item some programs contain ground terms at the source level; a typical example is the second argument of a goal like {\em member(Assoc,[fx,fy,xfx,xfy,yfx,xf,yf])}; ECLiPSe \cite{Wallace97eclipse} pre-allocates such ground terms, and makes sure that any time such a fact or goal is called, the ground term is re-used; Mercury performs this compile-time optimization as well \item when two terms are unified, they can share a common representation in the forward execution; at various stages in its life, BinProlog \cite{Tarau91:JAP} enforced such sharing by (in WAM speak) redirecting the S-tagged pointer of one of the two terms and (conditionally) trailing this change so that on backtracking it can be undone; if trailing is not needed, then the savings can be huge; otherwise, the locality of access can be improved, but memory and time savings can be negative; a similar technique was already used for strings only in the Logix implementation of Flat Concurrent Prolog \cite{logixFCP} \end{itemize} \end{sloppypar} In each of the above cases, the implementor of the Prolog system decided for more 
representation sharing than would be the case in a more straightforward implementation. Application programmers and library developers usually take care as well to let their runtime data structures share common parts. In the above, {\em copy\_term/2} and unification are built-in predicates that have a chance to increase representation sharing. In Section \ref{findall}, {\em findall/3} is added to this shortlist. \section{Input Sharing for {\em findall/3}}\label{findall} In \cite{findallwithoutfindall}, the notion of {\em input sharing} was introduced in the context of {\em findall/3}. Input sharing consists of a solution in the output from {\em findall/3} (its third argument) sharing with the input to {\em findall/3} (its second argument). Later, in the Logic Programming Pearl \cite{OKeefePearl} it is suggested that {\em findall/3} could avoid copying the same terms over and over again: this would improve the space complexity of some queries that use {\em findall/3} from $O(n^2)$ to $O(n)$. However, R. O'Keefe suggests that hash-consing should be used, with the consequence that the time complexity remains the same: our implementation of {\em input sharing} ---which is exactly what is needed here--- improves both the time and space complexity. The example used in \cite{OKeefePearl} is rather complicated, so for now, we use as an illustration a piece of simple Prolog code that was posted in \cite{findall1archive}; we changed the names of the predicates and variables.
\begin{Verbatim}[fontsize=\small, frame=single,samepage=true]
findall_tails(L,Tails) :-
    findall(Tail,is_tail(L,Tail),Tails).

is_tail(L,L).
is_tail([_|R],L) :- is_tail(R,L).

all_tails([],[[]]).
all_tails(L,[L|S]) :-
    L = [_|R],
    all_tails(R,S).
\end{Verbatim}
Clearly, goals of the form ?- findall\_tails(L,Tails). and ?- all\_tails(L,Tails). with a ground argument L succeed with the same answer Tails.
E.g.,
\begin{Verbatim}[fontsize=\small, frame=single,samepage=true]
?- findall_tails([1,2,3],Tails).
Tails = [[1,2,3],[2,3],[3],[]]
\end{Verbatim}
The usual implementation of {\em findall/3} copies parts of the input list L over and over again, and this results in quadratic behavior (in the length of L) for {\em findall\_tails/2} (for a list of length $n$, the copied suffixes have combined length $n + (n-1) + \cdots + 1 = n(n+1)/2$), while {\em all\_tails/2} is linear, both in space and time! Clearly, with enough {\em input sharing} the {\em findall\_tails/2} query could be linear. In the following sections we show how a traditional {\em findall/3} implementation in the context of the WAM can be easily adapted to cater for input sharing. An alternative copy-once implementation of {\em findall/3} is also shown. Before going into the details, it is worth pointing out the limitations of input sharing. Clearly, if L is a list with non-ground elements, the two queries
\begin{Verbatim}[fontsize=\small, frame=single,samepage=true]
?- findall_tails(L,Tails).        ?- all_tails(L,Tails).
\end{Verbatim}
yield different answers. The first query makes fresh variants of the variables in each of the solutions in Tails, while the second query does not. As an example:
\begin{Verbatim}[fontsize=\small, frame=single,samepage=true]
?- findall_tails([X,Y,Z],L),      ?- all_tails([X,Y,Z],L),
   numbervars(L,0,_).                numbervars(L,0,_).
X = A
Y = B
Z = C
L = [[A,B,C],[D,E],[F],[]]        L = [[A,B,C],[B,C],[C],[]]
\end{Verbatim}
This means we can use an input sharing version of {\em findall/3} when the arguments of the generator are either ground or free: the danger is only in terms containing variables. \subsection{The Implementation of {\em findall/3}} The hProlog implementation of {\em findall/3} follows the same pattern as in many systems:
\begin{Verbatim}[fontsize=\small, frame=single,samepage=true]
findall(Template,Generator,SolList) :-
    findall_init(Handle),
    (   call(Generator),
        findall_add(Template,Handle),
        fail
    ;
        findall_get_solutions(SolList,Handle)
    ).
\end{Verbatim} For simplicity, we have left out all error checking and error recovery code. The predicate {\em findall\_init/1} returns a handle, so that the particular invocation of {\em findall/3} is identified: this is used for correct treatment of nested calls to {\em findall/3}. {\em findall\_add/2} uses that handle, and copies the Template to a temporary zone. {\em findall\_get\_solutions/2} uses the handle as well: it retrieves the complete list of solutions from the temporary zone and unifies it with the third argument to {\em findall/3}. The next section describes how to turn this code into code that shares the input. \subsection{The basic Idea of Input Sharing for {\em findall/3}} \begin{sloppypar} The predicate {\em findall\_add/2} in our implementation of {\em findall/3} is just a version of {\em copy\_term/2}: at the implementation level, they both use the same C function for the actual copying. The same is true for {\em findall\_get\_solutions/2}. \end{sloppypar} The first idea might be to use an implementation of {\em copy\_term/2} that avoids copying ground terms. However, in the context of {\em findall/3}, groundness is not enough: the ground term must also be {\em old enough}, so that backtracking (over the Generator) cannot alter it. To be more precise, anything ground that survives backtracking over the Generator need not be copied by {\em findall\_add/2}. Or put still another way: anything ground before the call to findall(Template,Generator,SolList) need not be copied by {\em findall\_add/2}. Such terms can be recognized easily: their {\em root} resides in a heap segment that is not younger than the call to {\em findall/3}. So we need to be able to identify the older heap part relevant to a particular call to {\em findall/3}. That is quite easy in the WAM: we just remember the relevant heap pointer! 
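The pointer-age reasoning above can be modelled in a few lines. The following toy C model uses invented names and a plain struct heap instead of hProlog's tagged WAM cells, and it bakes in the assumption that everything below the barrier is ground; it is a sketch, not the real implementation. Cells come from a single array, so ``older than the barrier'' is one pointer comparison. The next subsection shows how little code this takes in the real system.

```c
#include <assert.h>

/* Toy model of copying with a heap barrier. Invented names, not hProlog:
 * real WAM cells are tagged words; here a struct stands in. Cells are
 * allocated sequentially from one array, so "older than the barrier"
 * is a plain pointer comparison. We assume old cells are ground. */

#define HEAP_CELLS 64

typedef struct cell {
    int is_pair;                  /* 0 = atom, 1 = pair        */
    long atom;                    /* payload when is_pair == 0 */
    struct cell *car, *cdr;       /* payload when is_pair == 1 */
} cell;

static cell heap[HEAP_CELLS];
static int hp = 0;                /* top of heap */

static cell *mk_atom(long a) {
    assert(hp < HEAP_CELLS);
    cell *c = &heap[hp++];
    c->is_pair = 0;
    c->atom = a;
    return c;
}

static cell *mk_pair(cell *car, cell *cdr) {
    assert(hp < HEAP_CELLS);
    cell *c = &heap[hp++];
    c->is_pair = 1;
    c->car = car;
    c->cdr = cdr;
    return c;
}

/* Copy a term, sharing every subterm whose root lies below the barrier. */
static cell *copy_with_barrier(cell *t, cell *barrier) {
    if (t < barrier)              /* old (hence ground) term:  */
        return t;                 /* copy only the pointer     */
    if (!t->is_pair)
        return mk_atom(t->atom);
    cell *car = copy_with_barrier(t->car, barrier);
    cell *cdr = copy_with_barrier(t->cdr, barrier);
    return mk_pair(car, cdr);
}
```

A term built before the barrier was remembered survives backtracking over the Generator, so returning its root pointer is safe; only the cells allocated after the barrier are actually duplicated.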
\subsection{{\em findall/3} with Input Sharing: the Implementation}\label{impl} We use two new low-level built-in predicates: \begin{sloppypar} \begin{itemize} \item {\bf current\_heap\_top(--)}: unifies the argument with (an abstraction of) the current value of the heap pointer H \item {\bf set\_copy\_heap\_barrier(+)}: sets a global (C-)implementation variable (named {\em copy\_heapbarrier}) to the heap pointer value corresponding to its argument \end{itemize} \end{sloppypar} The following code shows how the new built-ins are used:
\begin{Verbatim}[fontsize=\small, frame=single,samepage=true]
sharing_findall(Template,Generator,SolList) :-
    current_heap_top(Barrier),
    findall_init(Handle),
    (   call(Generator),
        set_copy_heap_barrier(Barrier),
        findall_add(Template,Handle),
        fail
    ;
        set_copy_heap_barrier(Barrier),
        findall_get_solutions(SolList,Handle)
    ).
\end{Verbatim}
An additional small change needs to be made to the implementation of {\em findall\_add/2} (and {\em findall\_get\_solutions/2}) as well: when a term is about to be copied and it is older than {\em copy\_heapbarrier}, only the root pointer to this term is copied. It amounts to adding a statement like
\begin{Verbatim}[fontsize=\small, frame=single,samepage=true]
if (struct_addr < copy_heapbarrier)
{
    *whereto = struct_addr;
    continue;
}
\end{Verbatim}
at a few places in the C code of {\em copy\_term/2}: this piece of code just copies the top pointer of the structured term instead of copying it recursively. The C variable {\em struct\_addr} holds the address of the structure about to be copied. For explanatory reasons, we have shown the implementation of {\em sharing\_findall/3} as a variant of the basic implementation of {\em findall/3} using two new built-ins.
However, one can also fold the functionality of these new built-ins into adapted versions of findall\_[init, add, get\_solutions]: the top-of-heap at the moment of calling {\em sharing\_findall/3} is then stored in the data structures belonging to that particular call. This top-of-heap at the moment of calling {\em sharing\_findall/3} must also be appropriately treated by the garbage collector. \subsection{An Example} The heap and temporary findall zone are shown in Figure \ref{findallfig1} for the very simple query \begin{Verbatim}[fontsize=\small, frame=single,samepage=true] ?- findall(X, X=f(1,2,3), L). \end{Verbatim} \begin{figure}[h] \begin{centering} {\epsfig{file=findallfig1.eps,width=1.0\textwidth}} \caption{Findall without and with Input Sharing} \label{findallfig1} \end{centering} \end{figure} The left part of the picture shows three snapshots during the execution of the query without input sharing. The right part shows the corresponding snapshots with input sharing. The snapshots are taken \begin{itemize} \item just before {\em findall\_add/2} is executed: the temporary zone is still empty \item just after {\em findall\_add/2} is executed; at the left, the temporary zone contains a copy of the term f(1,2,3); at the right, there is a pointer to the term on the heap \item just after {\em findall\_get\_solutions/2} is executed: the temporary zone can be discarded; at the left, the solution list contains a copy of f(1,2,3); at the right, there is just a pointer to the old term on the heap \end{itemize} The space savings are clear. \subsection{Copy-once {\em findall/3}} The usual implementation of {\em findall/3} copies the solutions twice. BinProlog was probably the first implementation copying the solution only once, by means of a technique named {\em heap lifting} or more popularly a {\em bubble in the heap} \cite{ecologicalPaul@IWMM-92}. Currently, the BinProlog implementation \cite{padl09inter} relies on {\em engines} for {\em findall/3}. 
Mercury also uses a copy-once findall (named {\em solutions/2}): as Mercury relies on the Boehm-collector \cite{hansboehm}, there is no memory management hassle with a bubble in the heap. It is rather easy to implement a copy-once {\em findall/3} in any Prolog system that has non-backtrackable destructive assignment (with {\em nb\_setarg/3}) as in hProlog or SWI-Prolog \cite{swiprolog}\footnote{Note that SWI-Prolog uses {\em nb\_linkarg/3} as the name for hProlog's {\em nb\_setarg/3}}:
\begin{Verbatim}[fontsize=\small, frame=single,samepage=true]
copy_once_findall(Template,Generator,SolList) :-
    Term = container([]),
    (   call(Generator),
        Term = container(PartialSolList),
        copy_term(Template,Y),
        nb_setarg(1,Term,[Y|PartialSolList]),
        fail
    ;
        Term = container(FinalSolList),
        reverse(FinalSolList,SolList)
    ).
\end{Verbatim}
As before, this code can be enhanced with the newly introduced built-ins to yield a copy\_once\_sharing\_findall. If copying the solutions dominates the execution, the copy-once {\em findall/3} is about twice as fast as the regular {\em findall/3}. However, its main drawback is that it consumes (for the benchmarks below) about three times as much heap space. The reason is that {\em nb\_setarg/3} must freeze the heap if its third argument is a compound term. The heap-lifting technique (which we have not implemented in hProlog) does not have this drawback. \subsection{Experimental Evaluation} Input sharing improves (sometimes) the complexity (space and time), and the constant overhead is really very small, as can be judged from the changes needed to implement it. One could therefore argue that benchmarks are not needed. Even so, we present two benchmarks: one is the {\em findall\_tails/2} example (see Section \ref{tailsbenchmark}). We start with a {\em findall/3}-related query from \cite{OKeefePearl}: this pearl is about tree construction and traversal.
It contains the following text: \begin{itemize} \item[] {\em Query q1\footnote{f1(N) in the Appendix} requires at least $O(n^2)$ space to hold the result.} {\em If findall/3 copied terms using some kind of hash consing, the space cost could be reduced to $O(n)$, but not the time cost, because it would still be necessary to test whether a newly generated solution could share structure with existing ones.} \end{itemize} Note that the $n$ above is the number of nodes in the tree, not the tree depth: the number of nodes is roughly $4^{depth}$ where {\em depth} is the depth of the tree. We needed to make a slight change to the program from \cite{OKeefePearl}: in its original form it contains a {\em mk\_tree/2} predicate defined as \begin{Verbatim}[fontsize=\small, frame=single,samepage=true] mk_tree(D, node(D,C)) :- ( D > 0 -> D1 is D - 1, C = [T,T,T,T], mk_tree(D1, T) ; C = [] ). \end{Verbatim} Because of the conjunction {\em C = [T,T,T,T], mk\_tree(D1, T)}, the heap representation of the constructed tree is linear in the first argument D, even though it has an exponential number of nodes: indeed, the constructed tree has a lot of internal sharing. Such internal sharing is retained by most reasonable implementations of {\em copy\_term/2} and by {\em findall\_add/2}.\footnote{A notable exception is Yap.} In order to test what \cite{OKeefePearl} really meant, we have changed the particular conjunction to \begin{Verbatim}[fontsize=\small, frame=single,samepage=true] C = [T1,T2,T3,T4], mk_tree(D1, T1), mk_tree(D1, T2), mk_tree(D1, T3), mk_tree(D1, T4) \end{Verbatim} so that the size of the representation of the tree is linear in the number of nodes in the tree (and exponential in D). This code rewrite achieves the desired effect because Prolog systems typically don't perform the analysis needed to notice that T1, T2, T3 and T4 are declaratively the same value, and neither is this detected at runtime. See the Appendix for all code necessary to run the benchmark. 
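The asymptotics behind this rewrite can be checked with a small cell-count model. The sketch below is our own back-of-the-envelope accounting ({\tt CELLS\_PER\_NODE} is an invented constant, not hProlog's exact layout): with the internal sharing of the original {\em mk\_tree/2} only one node per depth level is materialized, while the rewritten version materializes every node of the 4-ary tree (internal nodes included).

```c
#include <assert.h>

/* Number of nodes in the 4-ary tree of the given depth:
 * n(0) = 1, n(d) = 1 + 4*n(d-1)  =>  (4^(d+1) - 1) / 3. */
static long tree_nodes(int depth) {
    long n = 1;
    for (int d = 0; d < depth; d++)
        n = 1 + 4 * n;
    return n;
}

/* Illustrative heap cells per node (invented constant): node(D,Children)
 * plus its child list.  Without sharing, every node is materialized. */
#define CELLS_PER_NODE 11

static long unshared_cells(int depth) {
    return CELLS_PER_NODE * tree_nodes(depth);
}

/* With the internal sharing of the original mk_tree/2 (C = [T,T,T,T]),
 * only one node per depth level is materialized. */
static long shared_cells(int depth) {
    return CELLS_PER_NODE * (long)(depth + 1);
}
```

The model only accounts for one copy of the tree; the point is that once the findall result can share structure with the input tree, its size stops being a multiple of the tree size.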
\subsubsection{The modified Tree Benchmark: Results} Table \ref{treecombined} shows timings (when considered meaningful) and space consumption for queries {\em ?- f1(Depth)} with different values of Depth. Times are reported in milliseconds, space in bytes. We have chosen SICStus Prolog for comparison with another system because the SICStus Prolog implementation performed better and more reliably than the other systems we tried. Moreover, the trend of the measurements with other systems was basically the same. The timings without sharing do not show anything interesting complexity-wise: neither of the implementations without input sharing can deal with more than about 5000 nodes. The input sharing implementation on the other hand can go easily up to one million nodes. The heap consumption columns give a good picture of how the heap size grows: the non-input-sharing implementations show a quadratic dependency of the heap consumption on the number of nodes. Only {\em hProlog input sharing} shows a linear dependency. \begin{table}[ht] \begin{center} \begin{tabular}{|r||r|r||r|r||r|r|} \hline & \multicolumn{2}{c||}{hProlog} & \multicolumn{2}{c||}{hProlog input sharing} & \multicolumn{2}{c|}{SICStus Prolog}\\ \cline{2-7} Depth & time & space & time & space & time & space \\ \hline 1 & & 964 & & 368 & &980 \\ 2 & & 11332 & & 1840 & &10964 \\ 3 & & 158020 & & 9776 & &155348 \\ 4 & & 2394436 & & 49712 & &2379476 \\ 5 & 156 & 37599556 & & 242224 & 250 &37523156 \\ 6 & 2616 & 598030660 & & 1143344 & 6820 &597659348 \\ 7 & & & & 5272112 & & \\ 8 & & & 156 & 23884336 & & \\ 9 & & & 652 &106721840 & & \\ 10 & & & 2804 &471626288 & & \\ \hline \end{tabular} \caption{Heap consumption in bytes and time in msecs for the tree benchmark}\label{treecombined} \end{center} \end{table} Table \ref{treecombined} shows clearly that our simple implementation to enforce input sharing is very effective and actually performs better than hoped for in \cite{OKeefePearl}.
Indeed, we achieve linear space {\bf and} time complexity for the f1(Depth) query. Hash-consing would not be able to do that. \subsection{The {\em tails} Benchmark}\label{tailsbenchmark} Table \ref{tailscombined} shows the space consumption for the tails benchmark. The timings are meaninglessly small for the variants with sharing, and therefore only shown for the regular findall columns. The {\em Length/1000} column indicates the length of the ground input list L to queries of the form {\em ?- all\_tails(L,Tails)} and {\em ?- [sharing\_]findall(Tail,is\_tail(L,Tail),Tails)}. \begin{table}[ht] \begin{center} \begin{tabular}{|r|r|r|r|r||r|r|r|} \hline & \multicolumn{4}{c||}{hProlog} & \multicolumn{3}{c|}{SICStus Prolog}\\ \hline & \multicolumn{2}{c|}{regular} & \multicolumn{1}{c|}{findall with} & \multicolumn{1}{c||}{all\_tails}& \multicolumn{2}{c|}{regular } & \multicolumn{1}{c|}{all\_tails}\\ Length & \multicolumn{2}{c|}{findall} & \multicolumn{1}{c|}{input sharing} & \multicolumn{1}{c||}{} & \multicolumn{2}{c|}{findall } & \multicolumn{1}{c|}{ }\\ \cline{2-8} /1000 & time & space & space & space & time & space & space \\ \hline 1 &112 & 26034 & 8 & 8 & 120 & 26034 & 8 \\ 2 &444 & 104068 & 16 & 16 & 490 & 104068 & 16 \\ 3 &1028 & 234102 & 24 & 24 & 1160 & 234102 & 24 \\ 4 &1920 & 416136 & 32 & 32 & 2130 & 416136 & 32 \\ 5 & & & 40 & 40 & 3650 & 650170 & 40 \\ 6 & & & 48 & 48 & 5080 & 936204 & 48 \\ 7 & & & 56 & 56 & & & 56 \\ 8 & & & 64 & 64 & & & 64 \\ 9 & & & 72 & 72 & & & 72 \\ 10 & & & 80 & 80 & & & 80 \\ 100 & & &800 &800 & & &800 \\ 1000 & & &8000 &8000 & & &8000 \\ \hline \end{tabular} \caption{Heap consumption in KiB and time in msecs - tails benchmark}\label{tailscombined} \end{center} \end{table} {\em findall/3} with input sharing clearly beats the {\em findall/3} without input sharing. 
SICStus Prolog can do larger sizes with the ordinary {\em findall/3} implementation than hProlog: the latter runs out of memory earlier because of its different memory allocation and heap garbage collection policy. We have tried to measure the overhead of our method, but it is too small to show up meaningfully in any of our experiments. \subsection{Conclusion on Input Sharing for {\em findall/3}} Already in \cite{findall1archive} there was a demand for sharing between the input to {\em findall/3} and its output. Also \cite{OKeefePearl} points out that this would be beneficial to some programs. Optimal input sharing would attempt to share all (sub)terms that are ground just before the call to {\em findall/3}. Checking this at runtime can be involved and costly. Our implementation approximates that by just checking that the root of a term is old enough, and relying on the programmer (or some other means) to use {\em sharing\_findall/3} only when this simple check implies that the whole term was ground at the moment of the call to {\em sharing\_findall/3}. This is in particular true in the common case that the generator of {\em findall/3} (its second argument) is a goal of which every argument is ground or free: that was the case for our benchmark {\em findall\_tails/2}. That condition on the generator can be easily checked before calling {\em sharing\_findall/3} and could also be derived by program analysis. \begin{sloppypar} Our approach does not implement {\em solution sharing}: hash-consing, or maybe even better tries, could do the job. Sections \ref{generalities} and later provide a more general and lightweight solution to representation sharing. \end{sloppypar} In \cite{OKeefePearl}, one can also read: \begin{itemize} \item[] {\em One referee suggested that Mercury's `solutions/2' would be cleverer. 
A test in the 0.10 release showed that it is not yet clever enough.} \end{itemize} As Mercury \cite{zoltan:mercury} relies on the Boehm-allocator and -collector for its memory management, it is quite difficult to devise a simple dynamic test of whether a (ground) term is old enough: on the whole, a Mercury implementation indeed does not benefit from keeping the address order of terms consistent with their age. On the other hand, in the WAM, such a test comes naturally with the needs of a strict heap allocation discipline and conditional trailing. As a conclusion, we think we have succeeded in providing input sharing for {\em findall/3} with minimal change to the underlying Prolog execution engine: any Prolog implementation with a heap allocation strategy similar to the WAM can incorporate it easily. How to present the functionality in a safe way to the user is a language design issue and as such beyond the scope of this paper. \section{General Representation Sharing for Prolog}\label{generalities} \cite{appelhashconsinggc} adapts a copying collector to perform hash-consing for the data in the older generation. Since we would like our implementation of the representation sharer to be a model for other Prolog implementations, we cannot just copy that idea. Indeed, hash-consing requires a serious adaptation of the term representation, and moreover Prolog systems typically have sliding collectors, the exceptions being hProlog and BinProlog. Therefore we want to investigate representation sharing in a way that does not require a change in term representation, and that is independent of the details of the garbage collector: this will make it easier for Prolog systems to implement their own sharing module based on our experience. \cite{appelhashconsinggc} argues that garbage collection time is a good moment to perform hash-consing, but there is no inherent need to do it only then.
Still, we agree basically with \cite{appelhashconsinggc}: it is better to avoid putting any effort into sharing with dead terms. In the examples, we use the Prolog goals {\em share} and {\em gc}: the former performs representation sharing, the latter just garbage collection. By keeping the two separated, the issues become clearer, i.e., we make no assumptions on the workings of the garbage collector. \cite{bakerwarpspeed} shows that the combination of tabling and hash-consing is particularly powerful: since duplicate terms do not occur, equality of terms can be decided by a single pointer comparison instead of by traversing the whole terms. However, in that context and in its original form, hash-consing guarantees representation sharing all the time, while that is not our aim. Unfortunately, \cite{bakerwarpspeed} does not show experimental data for hash-consing without tabling. \section{Representation Sharing in Prolog: Examples}\label{intuition} Two issues make representation sharing in Prolog-like languages different from that in other languages: the logical variable and backtracking. Subsequent subsections show by example how these affect the possibilities for representation sharing. \subsection{Sharing within the same Segment}\label{samesegment} The first example in Section \ref{repsharversushash} shows the simplest case of sharing: the two terms are identical, in the same heap segment (as delimited by the HB pointers in the choicepoints) and ground at creation time. The next example shows that identical ground terms in the same segment cannot always share their representation: \begin{Verbatim}[fontsize=\small, frame=single,samepage=true] main3 :- T1 = f(a), T2 = f(X), ( X = a, share ; write(T1 \== T2) ). \end{Verbatim} While executing the query ?- main3, just before the execution of {\em share}, the terms T1 and T2 are identical, ground, and they are completely within the same segment.
However, it would be wrong to make them share their representation, since in the failure continuation, they are no longer identical. Loosely speaking, the occurrence of trailed variables in a term makes the term unsuitable for representation sharing. \subsection{Sharing between Segments} The previous examples dealt with representation sharing of terms that live in the same segment. The next example shows an issue with representation sharing of terms that live in different segments. Since we do not want to mix this issue with trailed heap locations, the example works with ground terms. \begin{Verbatim}[fontsize=\small, frame=single,samepage=true] main4 :- T1 = f(a), ( T2 = f(a), share, use(T1,T2) ; use(T1) ). \end{Verbatim} During the execution of the query ?- main4, T1 and T2 live in two different segments. T1 lives in the oldest segment, as seen in the left of Figure \ref{fig2}\footnote{The dashed line indicates the heap segment barrier}. Since T1 is used after backtracking, the natural thing is to keep the representation of the oldest term, because it potentially lives longest. So the introduced sharing representation is as in the right of Figure \ref{fig2}. \begin{figure}[h] \begin{centering} {\epsfig{file=fig2.eps,width=0.9\textwidth}} \caption{Representation Sharing of two Terms in different Segments} \label{fig2} \end{centering} \end{figure} Alternatively, one could use as shared representation the one in the younger segment, but then the heap should be frozen, so that on backtracking the value of T1 does not get lost. We consider this a bad alternative, but a slight variation on the same example shows that the choice is not so clear cut: \begin{Verbatim}[fontsize=\small, frame=single,samepage=true] main5 :- main6 :- T1 = f(a), T1 = f(a), use(T1), use(T1), ( ( T2 = f(a), share, gc, use(T2) T2 = f(a), gc, use(T2) ; ; dontuseT1 dontuseT1 ). ). \end{Verbatim} The code of main5 and main6 differs only in the call to {\em share} in main5. 
\begin{itemize} \item {\bf with sharing in main5}: share keeps one representation of f(a) and puts it in the oldest segment; gc cannot reclaim that representation, because T2 is not dead; after backtracking to dontuseT1, the f(a) term is still on the heap \item {\bf without sharing in main6}: at the point gc kicks in, T1 is unreachable and its representation disappears; this means that after backtracking to dontuseT1, the heap is empty \end{itemize} This example shows that representation sharing between terms in different segments can lead to a higher heap consumption, or more invocations of the garbage collector. Finally, it is clear that mutable terms should not share their representation: it is in general impossible to know whether two mutable terms will be identical for the rest of their common lifetime. We deal with mutable terms in more detail in Section \ref{mutable}. \section{Sharable Terms and Absorption}\label{concepts} The examples in the previous section give some intuition on what we mean by representation sharing, and also about its pitfalls. The examples have also indicated that we are working towards an implementation of a sharer that introduces sharing between two terms T1 and T2 by keeping the representation of one of them, say T1, and making T2 point to it. We coin this process {\em T1 absorbs T2}. This leads naturally to considering the notion T1 {\em can absorb} T2. The most general definition of {\em T1 can absorb T2} would be that the sequence of solutions to the running program does not change by letting T1 absorb T2. That condition is of course not decidable, so we need a workable approximation to it. The next sections explore the notion {\em can absorb} further, first by focussing on representation sharing for individual heap cells and then by considering compound terms.
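The absorption operation itself can already be pictured concretely: it is nothing more than a pointer redirection. The following toy C model (invented names; real WAM cells are tagged and live on the heap) shows T1 absorbing T2 by redirecting a reference:

```c
#include <assert.h>

/* Toy model: a "reference" is a pointer to a term representation.
 * Absorption keeps the older representation (T1) and redirects a
 * reference to the younger one (T2) so that both share T1's body. */
typedef struct { int functor; int arg; } body;

static void absorb(body **ref_to_t2, body *t1) {
    *ref_to_t2 = t1;   /* T1 absorbs T2 */
}
```

After absorption, T2's body is unreachable from this reference and can be reclaimed by the next collection.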
\subsection{Representation Sharing for Individual Heap Cells}\label{individualcell} It pays off to study the most basic representation sharing of all: between two individual heap cells. Clearly, when two cells, say c1 and c2, have different contents (and are live), neither of them can absorb the other. And when the two cells have identical addresses, they have absorbed each other already. So, we are left with the possibilities that \begin{itemize} \item c1 and c2 are in the same heap segment or not \item c1 and/or c2 is trailed or not \end{itemize} Without loss of generality, we assume that c1 is older than c2. This results in the eight combinations shown in Figure \ref{fig3}: a trailed cell is shaded. The contents of the two cells at the moment of the snapshot are the same, but shaded cells will be set to {\em free} (a self-reference in the WAM) on backtracking to the appropriate choicepoint. The horizontal dashed lines now indicate one or more heap segment separations. The vertical lines just separate the different cases.
\begin{figure}[h] \begin{centering} {\epsfig{file=fig3bis.eps,width=0.9\textwidth}} \caption{The 8 combinations of two cells} \label{fig3} \end{centering} \end{figure} \begin{enumerate} \item[{\bf a}:] c1 can absorb c2 and also vice versa, because the two cells have identical contents, and that will remain so in the forward and in the backward computation \item[{\bf bcd}:] in the forward computation, the two cells remain identical, but not after backtracking; so no representation sharing can take place, and neither can absorb the other \item[{\bf e}:] on backtracking, c2 {\em dies} before the older cell c1, but for the duration of their common life, the two cells are identical, so representation sharing is allowed: c1 can absorb c2, but not the other way around \item[{\bf fg}:] these cases are similar to cases {\bf bcd} above: as soon as one of the trailed cells is untrailed by backtracking, the contents of c1 and c2 differ; therefore representation sharing is not allowed; neither can absorb the other \item[{\bf h}:] there are two possibilities now: \begin{enumerate} \item at the moment the older cell is untrailed, backtracking also recovers the segment in which the newer cell resides; this means that the newer cell dies, so the fact that the older cell is set to {\em free} does not prevent representation sharing; so c1 can absorb c2 (and not the other way around); this happens if c1 was trailed before the segment of c2 was closed by a choicepoint, i.e., if c2 dies no later than the moment c1 is untrailed \item otherwise, representation sharing is disallowed; neither cell can absorb the other \end{enumerate} \end{enumerate} Anticipating an implementation, we notice that it is important to be able to check quickly whether a cell is trailed.
One bit ---appropriately placed--- is enough for that: that bit could be in the heap cells themselves, or it could be allocated in an array parallel to the heap. This would make cases {\bf a} and {\bf e} easy to identify. To detect case {\bf h(a)}, however, we also need to retrieve quickly, from a heap address, the heap segment number in which it was trailed. That requires more setup, and it would slow down the sharer. We think the expected gain in space is too small to make this worthwhile. Instead, we went for disallowing sharing in case {\bf h(a)}, so that our notion of {\em can absorb} becomes quite simple and leads to a simple decision procedure. In the following piece of code, pc1 and pc2 are pointers to heap cells c1 and c2: \begin{Verbatim}[fontsize=\small, frame=single,samepage=true] boolean can_absorb(cell *pc1, cell *pc2) { if (*pc1 != *pc2) return(FALSE); if (trailed(pc1)) return(FALSE); if (trailed(pc2)) return(FALSE); return(pc1 < pc2); } \end{Verbatim} If cell c1 can absorb cell c2, every (tagged) pointer to c2 can be changed into a (tagged) pointer to c1: this change does not affect the outcome of the execution. Note that it is immaterial whether the cell containing the (tagged) pointer to c2 is trailed or not. Since trailing prevents a cell from being able to absorb, or being absorbed, it is in the interest of maximizing the chances for representation sharing to keep the trail {\em tidy}: in many Prolog systems this is done at the moment a cut ({\em !/0}) is executed. Also during garbage collection, the trail can be tidied. \subsection{Representation Sharing for Compound Terms}\label{compoundterms} The representation of a compound term with principal functor {\em foo/n} in the WAM is an S-tagged pointer to an array of (n+1) contiguous heap cells, the first of which contains {\em foo/n}, and the next n cells contain one cell of the representation of one argument each. We name this array of (n+1) heap cells the {\em body} of the term.
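As a concrete picture, a toy C model of such a body and its S-tagged pointer might look as follows (our illustration: the tag layout and the functor descriptor value are invented, not hProlog's actual encoding):

```c
#include <assert.h>
#include <stdint.h>

/* Toy WAM cell: low bits hold the tag, the rest a value or heap index. */
#define TAG_BITS   3
#define TAG_STRUCT 1u   /* the "S" tag of the paper */
#define TAG_ATOMIC 2u

typedef uintptr_t cell;

static cell mk_tagged(uintptr_t value, unsigned tag) {
    return (value << TAG_BITS) | tag;
}
static unsigned  tag_of(cell c)   { return (unsigned)(c & ((1u << TAG_BITS) - 1)); }
static uintptr_t value_of(cell c) { return c >> TAG_BITS; }

/* A tiny heap holding the body of foo(a,b): functor cell + 2 arguments. */
#define FUNCTOR_FOO_2 42          /* invented functor descriptor */
static cell heap[8];

/* Build the body at index `at` and return an S-tagged pointer to it. */
static cell build_foo_ab(uintptr_t at) {
    heap[at]     = FUNCTOR_FOO_2;           /* first cell: foo/2 */
    heap[at + 1] = mk_tagged('a', TAG_ATOMIC);
    heap[at + 2] = mk_tagged('b', TAG_ATOMIC);
    return mk_tagged(at, TAG_STRUCT);       /* S-tagged pointer to the body */
}
```

In this model, absorption means two S-tagged cells carrying the same index: the body exists once, the pointers twice.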
The idea of one term absorbing the other is that after absorption, there is only one body instead of two, but there are still two cells with an S-tagged pointer pointing to it. See Figure \ref{fig6}. Clearly a necessary condition for such representation sharing is that the two bodies have the same contents. Moreover, since a term body always belongs to a single segment, the condition worked out for absorption for two individual cells must hold for each pair of corresponding body elements. We arrive at the following \paragraph{Definition:} Term T1 can absorb term T2 if T1 is older than T2, T1 == T2 and neither T1 nor T2 contain trailed cells. Figure \ref{fig6} shows two bodies that fulfill the conditions. \begin{figure}[h] \begin{centering} {\epsfig{file=fig6.eps,width=1.0\textwidth}} \caption{Left: the bodies fulfill the conditions for representation sharing. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ $~~~~~~~~~~~~~$Right: sharing has been performed} \label{fig6} \end{centering} \end{figure} Note that the example exhibits a situation we have not yet described: a variable chain. Dereferencing must be stopped when a trailed cell is found. Note the similarity of the above analysis with the one for variable shunting in \cite{VariableShunting}. An algorithm that given two terms decides whether one can absorb the other is now easily constructed. However, the naive use of this algorithm would be very inefficient. \subsection{ Properties of our notion of {\em can absorb}} Before going to the implementation of representation sharing, it is good to understand some properties of the {\em can absorb} relation: the optimality (if any) of our algorithms depends crucially on those properties. It is clear that {\em can absorb} is not symmetric: a newer term cannot absorb an older term in a different segment. Neither is {\em can absorb} anti-symmetric: case {\bf a} in Section \ref{individualcell} shows that. 
We denote by {\em absorbed(x,y)} the result of letting term x absorb term y, of course under the condition that x can absorb y. An important part of our definition of {\em can absorb} is that the terms do not contain trailed cells: it implies that a candidate term for absorbing or being absorbed can be recognized without knowing the other term: one simply checks whether it contains trailed cells. By keeping information about visited terms, one can ensure that this information is gathered in time proportional to the size of the heap. From the definition, it also follows that {\em can absorb} is transitive: \begin{Verbatim}[fontsize=\small, frame=single,samepage=true] (x can absorb y) && (y can absorb z) ==> x can absorb z \end{Verbatim} Finally, the absorption process is also associative, i.e., \begin{Verbatim}[fontsize=\small, frame=single,samepage=true] absorbed(absorbed(x,y),z) == absorbed(x,absorbed(y,z)) \end{Verbatim} (of course under the condition that x can absorb y and y can absorb z). This means that the order in which absorption takes place is immaterial: the end result is the same. Together, these properties allow for a basically linear sharing algorithm, on condition that term hashing is perfect. With a less than perfect hash function, the algorithm might need to traverse some terms more than once. \section{Implementation of Representation Sharing}\label{implementation} We have taken hProlog as the platform for an implementation of representation sharing.
hProlog is based on the WAM \cite{wam:hassan,DHWa83} with a few differences: \begin{itemize} \item the choicepoint stack and environment stack are not interleaved as in the WAM, but separate stacks as in SICStus Prolog \item free variables only reside on the heap; i.e., there are no self-references in the environment stack, just as in Aquarius Prolog \cite{Aquarius} \item hProlog supports some more native types like char, string and bigint; it also has attributed variables \end{itemize} hProlog employs a mark-and-copy type of garbage collector, with its roots in \cite{BevemyrLindgren@PLILP-94}, and it preserves segment order as described in \cite{VandeginsteSagonasDemoenPADL2002}. Most other systems use a sliding collector based on \cite{SicstusGarbage@CACM-88}. hProlog does not implement variable shunting. hProlog is a direct descendant of dProlog \cite{wamvariations}. Its purpose is to offer a platform for experiments in WAM-like Prolog implementation. Its high performance gives the experiments an extra dimension of credibility. The implementation uses two data structures: they can be seen in Figure \ref{fig:pic1}. We name them {\em cached\_hash} table and {\em hashed\_terms} table. Together they form the sharer tables. 
\begin{itemize} \item cached\_hash: this is an array the size of the WAM heap (or global stack) and can be thought of as parallel to the heap; its entries contain information about the corresponding heap cells; the information is one of the following three: \begin{itemize} \item {\bf no-info}: the corresponding heap cell has not been {\em treated} yet \item {\bf impossible}: the corresponding heap cell cannot participate in representation sharing; see Section \ref{commentscode} for more on this \item a pointer to the hashed\_terms table: the corresponding heap cell has been treated, and its sharing information can be found by following the pointer \end{itemize} \item hashed\_terms: this data structure contains records with two fields: {\em hashvalue} and {\em term}; suppose a pointer in the cached\_hash points to a record in the hashed\_terms, and the corresponding heap cell A is the entry point of term TermA, then \begin{itemize} \item the {\em hashvalue} field in the record is the hash value of term TermA \item the {\em term} field in the record is a pointer to a heap cell B that is the entry point of a term TermB that can absorb TermA (provided A and B are not the same cell); our implementation makes sure that the heap cell B is as old as possible, i.e., B is equal to A or older than A \end{itemize} \end{itemize} Treating a heap cell consists in filling out the corresponding cell in the cached\_hash table and possibly the hashed\_terms table. The hashed\_terms table is actually implemented as a hash table: the hashvalue of a term modulo the size of the hash table is used for determining the place in the hashed\_terms, and a linked list of buckets is used to resolve collisions. Many other implementations of this hashed\_terms table would be fine as well. Our first description of the algorithm only tries to introduce sharing between structures (not lists).
Therefore, for now, hashed\_terms pointers can only appear in cached\_hash cells corresponding to a heap cell containing a functor descriptor. The main algorithm consists of two phases: \begin{itemize} \item {\bf build}: it builds the cached\_hash and hashed\_terms tables; during this phase nothing on the heap is changed; this phase {\em treats} all heap cells \item {\bf absorb}: it performs all absorption possible by using the cached\_hash and hashed\_terms tables \end{itemize} In the algorithms below, we use beginheap and endheap for the pointers to the first (oldest) cell in the heap and the last (newest). We assume no cell is trailed, and come back to this point later. \subsection{Phase I: building the cached\_hash and hashed\_terms tables}\label{fase1} \begin{sloppypar} The build phase performs the action {\em compute\_hash} for each cell in the heap: the corresponding cell in the cached\_hash is set to either {\em impossible} or to a pointer to the hashed\_terms. The function {\em compute\_hash} is always called with a tagged term as argument. In the code below, we use {\em STRUCT} as the tag of a pointer pointing to the functor cell of a compound term. In figures, this tag shows simply as {\em S}. The function call {\em tag(p,STRUCT)} returns such a STRUCT tagged pointer; the function {\em untag} has the opposite effect. A function call like {\em tag(term)} returns the tag of its argument. \end{sloppypar} Note that the following code ignores certain issues like checking whether a cell is trailed, and LISTS. We deal with them in Section \ref{commentscode}.
\begin{Verbatim}[fontsize=\small, frame=single,samepage=true] foreach p in [beginheap, endheap] && is_functor(*p) compute_hash(tag(p,STRUCT)); // ignore return value int compute_hash(p) { deref(p); switch tag(p) { case FREE: case ATOMIC: return(p); case STRUCT: p = untag(p,STRUCT); if (already_computed(p)) return(already_computed_hash(p)); hashvalue = *p; foreach argument of structure p do hashvalue += compute_hash(argument); save_hash(hashvalue,p); return(hashvalue); } } \end{Verbatim} The particular hash value computed above is not relevant for our discussion: in practice, there are better (more complicated) ways to compute hash values of terms. The function call {\em already\_computed(p)} checks whether the corresponding element in the cached\_hash table points to the hashed\_terms table. {\em already\_computed\_hash(p)} returns the hash value previously computed (for the term starting at p) from the hashed\_terms entry corresponding to p: in this way, re-computation (and re-traversal of the same term) is avoided. In the {\em save\_hash} function that follows, we have left out collision handling: for the sake of the presentation, we assume perfect hashing. \begin{Verbatim}[fontsize=\small, frame=single,samepage=true] save_hash(hashvalue,p) { index = hashvalue % size(hashed_terms); cached_hash[p-beginheap] = hashed_terms + index; if (empty(hashed_terms[index])) { hashed_terms[index].term = p; hashed_terms[index].hashvalue = hashvalue; return; } // a non-empty entry might need to be adapted if newer(hashed_terms[index].term,p) hashed_terms[index].term = p; } \end{Verbatim} The last line in save\_hash makes sure that the term pointed at in a hashed\_terms entry is as old as possible. The reason is that it is safe to let an older term absorb a younger one. Figure \ref{fig:pic1} shows how three equal terms are treated by compute\_hash and the effect thereof on the cached\_hash and hashed\_terms tables.
\begin{figure}[h] \begin{centering} \subfigure[After treating middle f(a,b)]{{\epsfig{file=implem2.eps,width=0.32\textwidth}} \label{implem2}} \subfigure[After treating younger f(a,b)]{{\epsfig{file=implem3.eps,width=0.32\textwidth}} \label{implem3}} \subfigure[After treating older f(a,b)]{{\epsfig{file=implem4.eps,width=0.32\textwidth}} \label{implem4}} \caption{Three identical terms are treated during the build phase} \label{fig:pic1} \end{centering} \end{figure} \subsection{Phase II: Absorbing} The absorb phase performs the actual representation sharing: an S-tagged pointer is redirected to the oldest term body that can absorb it. The code is very simple: \begin{Verbatim}[fontsize=\small, frame=single,samepage=true] foreach cell c in the heap in the local stack in the choicepoint stack in the argument registers do let p be the contents of c; if (tag(p) == STRUCT) { q = untag(p,STRUCT); if (cached_hash[q-beginheap] points to hashed_terms) replace c by tag(cached_hash[q-beginheap]->term,STRUCT); } \end{Verbatim} Figure \ref{fig:pic2} shows how the older term absorbs the two identical younger terms. 
\begin{figure}[h] \begin{centering} \subfigure[Just before absorbing]{{\epsfig{file=implem5.eps,height=0.2\textheight}} \label{implem5}} \subfigure[After absorbing]{{\epsfig{file=implem6.eps,height=0.2\textheight}} \label{implem6}} \subfigure[After one more GC]{{\epsfig{file=implem7.eps,height=0.2\textheight}} \label{implem7}} \caption{Absorption and GC in action} \label{fig:pic2} \end{centering} \end{figure} \subsection{Comments on the Code}\label{commentscode} The code in Section \ref{fase1} ignores certain issues: \begin{itemize} \item {\em checking whether a heap cell is trailed}: during the initialization of the build phase, the cached\_hash table entries corresponding to trailed heap entries are initialized to {\em impossible}; this requires traversing the trail once and it makes checking whether a cell is trailed constant time; the checks whether a heap cell is trailed are required during the dereferencing loop; when a trailed cell is encountered, the computation of the hash value is stopped and the corresponding cached\_hash table entries of the term containing the trailed cell are also set to {\em impossible} \item {\em other datatypes}: the code takes into account only non-list structured terms, atoms and variables; it is easy to extend it to other types that occupy a single cell; for other {\em atomic} types (real, string, bigint) we have followed the same principle as for non-list structured terms: those types are implemented roughly like such terms, i.e., with a tagged pointer to a header on the heap which is followed by the actual value that can span several heap cells; for lists, we have a different solution: see Section \ref{doinglists} \item {\em foreach}: our implementation uses a linear scan for the {\em foreach} constructs: this is possible for all the stacks in hProlog; if this is not the case, one can traverse the live data starting from the root set as the garbage collector does (e.g., during its marking phase) \end{itemize} The code 
implementing the above is less than 700 lines of plain C that reuse very little previously existing code. Note that in the context of our copying collector, the extra space needed for representation sharing is just the hashed\_terms table: the cached\_hash table is exactly as large as the table the collector already needs for its own duties. \subsection{Representation Sharing of Lists}\label{doinglists} In the WAM, lists have no header like other compound terms. A list is represented by an L-tagged pointer to two consecutive heap cells containing the first element of the list and its tail respectively. Clearly, we cannot deal with lists as in the previous algorithm. The change is however small: we keep the hashed\_terms pointer in the cell corresponding to the list pointer. Figure \ref{fig:pic3} shows an example with just lists. \begin{figure}[h] \begin{centering} \subfigure[Just before absorbing]{{\epsfig{file=implem8.eps,height=0.2\textheight}} \label{implem8}} \subfigure[After absorbing]{{\epsfig{file=implem9.eps,height=0.2\textheight}} \label{implem9}} \subfigure[After one more GC]{{\epsfig{file=implem10.eps,height=0.2\textheight}} \label{implem10}} \caption{Absorption and GC in action for lists} \label{fig:pic3} \end{centering} \end{figure} \begin{sloppypar} Note that functor cells can only appear on the heap, while list pointers can also occur in environments, choicepoints, and the argument registers. As a result, with just a hashed\_terms pointer array parallel to the heap, some representation sharing in the other stacks can get lost for lists. A similar hashed\_terms pointer array parallel to the other stacks would solve this problem: our implementation does not do that. Another solution consists of using the cell of the first element of a list for keeping the corresponding hashed\_terms information. We have not explored that alternative.
\end{sloppypar} \subsection{When to Run the Sharer} It seems obvious that the sharer must be run either during GC, or just after GC. Our sharer can be adapted to run during GC most easily when the GC starts with a marking phase: the build phase of the sharer can indeed be integrated in the marking phase of the collector. The absorb phase can be run before the next GC phase, or be integrated with it. That would lead to a (mark+build)\&(copy+absorb) collector for hProlog. In a sliding GC context, this would become (mark+build)\&(compact+absorb). Still, we chose from the beginning to run the sharer as an independent module that can actually be run at any time. Just after GC seems best, because at that moment the heap has minimal size. We name that policy {\em after GC}. There is one snag in this: the space freed by the sharer cannot be used immediately, and the beneficial effect of the sharer can be seen only after the next GC. Therefore, it seems natural to perform another GC immediately after the sharer. We name that policy {\em between GC}. We have therefore added an option to hProlog: \begin{itemize} \item -r0: no sharing \item -r1: sharer with policy {\em after GC} \item -r2: sharer with policy {\em between GC} \end{itemize} Note that the absorb phase could estimate the amount of space it has freed, and the decision to switch from one policy to the other could be based on that estimate. \section{The Benchmarks and the Results}\label{sharerbenchmarks} Since \cite{appelhashconsinggc} is closest to our representation sharing, we are inclined to use the same benchmarks. However, \cite{appelhashconsinggc} shows overall very little impact of {\em hash-consing} and unfortunately, the benchmarks were not analyzed so as to explain why hash-consing is not effective on them.
On the other hand, one cannot a priori assume that our sharer will show the same behavior, because of the differences between our respective implementations, and even between the languages: \begin{itemize} \item hProlog only performs major collections, while SML/NJ has a generational collector (with two generations) \item our sharer does not alter the representation of terms, while \cite{appelhashconsinggc} performs hash-consing (which entails a representation change) on the old generation only \item SML/NJ is a deterministic language and a boolean SML/NJ function is like a semi-det predicate in Prolog; however, in a typical Prolog implementation, the data such a predicate creates is backtracked over on failure and the WAM recovers its space: this can have a huge impact on some benchmarks (the mandelbrot benchmark is an example) \item in \cite{appelhashconsinggc} hash-consing is inseparably tied to the (generational) collector; in contrast, we have explicitly aimed at keeping the collector and the sharer separate (we argue why in Section \ref{generalities}); this has an impact on the efficiency of the sharing process \end{itemize} So it seems worthwhile to redo some of the benchmarks of \cite{appelhashconsinggc}. The following section describes those benchmarks as well as some others not appearing in \cite{appelhashconsinggc}. \subsection{The Benchmarks}\label{benchmarks} \subsubsection{Boyer}\label{boyer} Boyer is a famous benchmark initially conceived by R. Gabriel for Lisp, and later used in other functional and logic contexts. Essentially, it rewrites a term to a canonical form. Boyer has been the subject of many studies, in particular studies arguing that it is not a good benchmark: see for instance \cite{bakerwarpspeed}. Still, in \cite{appelhashconsinggc}, this benchmark shows the best results for hash-consing. The inherent reason is that terms are rewritten to a canonical form and thus many initially different terms end up the same.
We measured that the final result of the rewriting process needs 39\,834 heap cells without representation sharing, and only about 200 with representation sharing. This makes boyer close to an optimal benchmark for showing the effectiveness of representation sharing. Note that the boyer benchmark also benefits a lot from tabling \cite{ChWa96}. This means that repeated computations are going on, which also explains the high amount of representation sharing. However, while tabling does avoid the repetition of duplicate computations, as usually implemented it does not avoid the creation of duplicate terms on the heap. It is possible to add to the tries enough information so that ground terms need be copied to the heap only once, as long as this copy is not backtracked over. \subsubsection{Life}\label{life} \cite{appelhashconsinggc} also uses the well known {\em Game of Life} as a benchmark. We have written a version in Prolog following the ideas of Chris Reade \cite{ChrisReade}, just as \cite{appelhashconsinggc} did. A (live) cell is represented as a tuple in coordinate form (X,Y). A generation is a list of live cells. The program keeps a list of the first 1000 generations, starting from {\em The Weekender}\footnote{See http://fano.ics.uci.edu/ca/rules/b3s23/g10.html} which is a glider, \begin{wrapfigure}{r}{.35\textwidth} \begin{center}\includegraphics[% width=0.35\textwidth, keepaspectratio]{weekender.eps}\end{center} \caption{The Weekender} \end{wrapfigure} i.e., a pattern that repeats itself after a few generations (7 in this case), translated over a few cells (2 in this case). If just the most recent generation is kept alive, one expects little from running the sharer immediately after a major collection, as all older generations are garbage by then. Our benchmark still shows some 50\% memory improvement, because it keeps all computed generations in a list, so that the existing overlap between generations is shared.
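For reference, the generation step used by such a benchmark (births on exactly 3 neighbours, survival on 2 or 3, working on a list of live cells) can be sketched in C. The benchmark itself is written in Prolog; all names below are ours and illustrative, not taken from the benchmark code.

```c
#include <assert.h>
#include <stdbool.h>

/* One Game-of-Life step (rule B3/S23) on a list of live cells. */
#define MAXCELLS 256
typedef struct { int x, y; } Cell;

static bool alive(const Cell *gen, int n, int x, int y) {
    for (int i = 0; i < n; i++)
        if (gen[i].x == x && gen[i].y == y) return true;
    return false;
}

static int neighbours(const Cell *gen, int n, int x, int y) {
    int c = 0;
    for (int dx = -1; dx <= 1; dx++)
        for (int dy = -1; dy <= 1; dy++)
            if ((dx != 0 || dy != 0) && alive(gen, n, x + dx, y + dy)) c++;
    return c;
}

/* Survivors keep 2 or 3 neighbours; births are dead cells adjacent to a
   live one with exactly 3 neighbours.  Returns the size of the new list. */
static int next_gen(const Cell *gen, int n, Cell *out) {
    int m = 0;
    for (int i = 0; i < n; i++) {
        int c = neighbours(gen, n, gen[i].x, gen[i].y);
        if (c == 2 || c == 3) out[m++] = gen[i];
    }
    for (int i = 0; i < n; i++)
        for (int dx = -1; dx <= 1; dx++)
            for (int dy = -1; dy <= 1; dy++) {
                int x = gen[i].x + dx, y = gen[i].y + dy;
                if (alive(gen, n, x, y) || alive(out, m, x, y)) continue;
                if (neighbours(gen, n, x, y) == 3) {
                    out[m].x = x; out[m].y = y; m++;
                }
            }
    return m;
}
```

Consecutive generations overlap in their surviving cells, and since the Prolog version freshly constructs every (X,Y) tuple, it is exactly this overlap that the sharer can fold together when all generations are kept in a list.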
\cite{appelhashconsinggc} shows little gain from hash consing for this benchmark, but we could not retrieve the initial generation(s) on which the benchmark was run. \subsubsection{Mandelbrot} This benchmark was also used in \cite{appelhashconsinggc}: it computes (actually outputs) a bitmap of a Mandelbrot set of a given dimension. Since the output does not play a role in the heap usage, we have removed the code for the output. We took the version from the {\em Computer Language Benchmarks Game} (http://shootout.alioth.debian.org/) written for Mercury and based on a version by Glendon Holst. Mandelbrot uses quite a bit of heap and as such appears to be a good memory benchmark. However, one quickly sees that literally {\bf all} memory used by mandelbrot is used for floating point numbers: computed floating point numbers tend to be different and therefore representation sharing might not have much effect. We have indeed checked that half of the generated floating point numbers are unique during the benchmark. Almost all the floating point numbers are generated during a ground call to {\em mandel/5}, a semidet predicate called as the condition in an if-then-else as follows: \begin{Verbatim}[fontsize=\small, frame=single,samepage=true]
(   mandel(Height, Width, Y, X, 50)
->  ByteOut1 is (ByteOut0 << 1)
;   ByteOut1 is (ByteOut0 << 1) \/ 0x1
)
\end{Verbatim} In the setting of \cite{appelhashconsinggc} (generational collection + hash-consing) the Mandelbrot benchmark has the following characteristic: if the garbage collector runs during the test ({\em mandel/5}) then a few floats are copied to the older generation; otherwise, no float from the new generation survives the collection. So not even all computed floats end up in the zone subject to hash-consing. In our setting (only major collections + representation sharing), at each collection only some floats in the test are alive. Exactly at that moment, the chance for duplicates is very small.
The effect of hash-consing or representation sharing is expected to be very small for the mandelbrot benchmark. Our test runs of the mandelbrot benchmark indeed show zero gain from representation sharing. \subsubsection{One more classical Prolog benchmark: {\em tsp}} We were unable to retrieve more benchmarks from \cite{appelhashconsinggc}, so we tried different benchmarks from the established general Prolog benchmark suite. None showed any benefit from representation sharing. We report only on {\em tsp}: just like {\em mandelbrot} and the other benchmarks showing no benefit, it is mainly good for showing the overhead of running the sharer when there is nothing to share. \subsubsection{{\em blid/1}}\label{blid} \begin{sloppypar} The next program was altered slightly from what Ulrich Neumerkel posted in comp.lang.prolog; it also appears in his Diplomarbeit \cite{neumerkeldiplomarbeit}. \end{sloppypar} \begin{Verbatim}[fontsize=\small, frame=single,samepage=true]
blid(N) :-
    length(L, N),
    blam(L),
    id(L,K), use(K).

blam([]).
blam([L|L]) :-
    blam(L).

id([], []).
id([L1|R1], [L2|R2]) :-
    id(L1,L2),
    id(R1,R2).

use(_).
\end{Verbatim} His question was {\em Are there systems, that execute a goal blid(N) in space proportional to N? Say blid(24)}. At first we expected that with our representation sharing, space would indeed be linear in N. However, the expansion policy and the order in which the events (garbage collection and representation sharing) take place are also crucial.
\begin{itemize} \item with the {\em after GC} policy, the following happens: \begin{itemize} \item[a1:] the first GC finds that 99\% (or more) of the data is live, and decides to expand the heap \item[a2:] the sharer shares most data \item[a3:] the next triggered GC finds that about half of the heap is live, so it does not expand \item[a4:] the following sharer run shares most of the data \item[a5:] points a3 and a4 are repeated \end{itemize} \item with the {\em between GC} policy, the following happens: \begin{itemize} \item[b1:] the first GC finds that 99\% (or more) of the data is live, and decides to expand the heap \item[b2:] the sharer shares most data \item[b3:] the second GC collects almost all data \item[b4:] points b1, b2 and b3 are repeated \end{itemize} \end{itemize} The first GC in a1 and b1 is triggered by lack of space; the second GC (in b3) is there by policy. A GC can decide to expand the heap (in hProlog when the occupancy is more than 75\%: this is known after marking). So one sees that in the case of the {\em between GC} policy, the heap is repeatedly expanded, even though the program could run in constant space (with the aid of the sharer). With the {\em after GC} policy, we do not get into this repeated expansion. If hProlog also had a heap shrinking policy, the {\em between GC} policy would shrink the heap after its second collection, and this would have almost the same effect as the {\em after GC} policy. This shows that the combination of a reasonable heap expansion policy and a reasonable sharer policy can result in an overall bad policy. More work could be done on this. Note that in its original form, {\em id/2} also contains two commented out unifications, and that with unification factoring these would also introduce the sharing needed to run in O(N) heap (always with the aid of GC of course).
\subsubsection{Four Applications} The next four benchmarks provide some insight into what to expect from the sharer in some typical applications of Prolog: there is little impact on memory and performance. \paragraph{Tree Learner.} This realistic benchmark consists of a {\em best-first relational regression tree learner} written by Bernd Gutmann \cite{Gutmann}. The program is about 900 LOC. It works on a data set of 350K facts. \paragraph{Emul.} \begin{sloppypar} Emul is a BAM emulator \cite{Aquarius} written by Peter Van Roy in Prolog. The benchmark consists of executing the BAM code for the famous {\em SEND+MORE=MONEY} problem. It is about 1K LOC. \end{sloppypar} \paragraph{An XSB compiler.} xsbcomp is an old version of the XSB compiler \cite{SaSW94} and a run of the benchmark consists of compiling the compiler itself. The XSB compiler is about 5K LOC and also uses the hProlog or SICStus Prolog reader, which are themselves written in Prolog. \paragraph{The hProlog compiler.} In this benchmark, the hProlog compiler compiles itself. It uses {\em setarg/3} heavily, so we need a {\em mutable/1} (see Section \ref{mutable}) declaration for 7 functors. This benchmark cannot be run by SICStus. The hProlog compiler (which is a version of the hipP compiler \cite{querypacks} written by Henk Vandecasteele) plus all other code it needs (reader, optimizer ...) totals more than 10K LOC. \subsubsection{Worst and Best Case} It is not clear what the best and worst case for our sharer is: if the heap were just one huge flat term (say of the form f(1,2,3,...)) then only one hash value would have to be saved in the hashed\_terms table, and in some sense that is both best and worst, because the least time is lost in collisions etc., but also no sharing can be performed. So we choose the following as best-versus-worst case: a large complete binary tree in which every node is of the form {\em node(tree,tree,number)}.
In what we consider the best case, the number is always the same (and the tree thus resembles a bit the blid data structure in Section \ref{blid}). This leads to a very sparse hashed\_terms table, and a large amount of sharing. In the worst case, the number is different in all nodes: as a result, the hashed\_terms table becomes quite full, and no sharing is possible at all. The main reason for this benchmark is to find out how the build and the absorb phase contribute to the total time of the sharer. \subsection{The Benchmark Results for Representation Sharing} The results are shown in Table \ref{sharertimespace}. Time is in milliseconds. Space is in MiB or KiB as indicated in the table. The first two columns denote the benchmark and the system used (with the sharing option for hProlog). Then follow the total time taken by heap garbage collection (including the stack shifter), the total time taken by the sharing module, the total execution time and the number of garbage collections. Then follow four columns related to space: the initial heap size, the final heap size and the amount of space collected by the garbage collections are given in MiB. Finally, there is the heap high water mark at the end of the benchmark, given in KiB instead of MiB because these figures vary widely. It measures the size of the result computed by the benchmark and it includes a small system specific overhead from the toplevel: for {\em mandelbrot}, the figure is just that overhead. \begin{table}[!t]
\begin{center} \footnotesize \begin{tabular}{|r|r||r|r|r||r|r|r|r|r|} \hline bench & system & gc & share & total & \#gc & initial & final & collected & at end \\ & & time & time & runtime & & heap & heap & & \\ & & msecs & msecs & msecs & & MiB & MiB & MiB & KiB \\ \hline \hline boyer & hProlog -r0 & 920 & 0 & 3130 & 19 & 6.10 & 24.42 & 96.75 & 6953.96 \\ boyer & hProlog -r1 & 280 & 380 & 2820 & 24 & 6.10 & 6.10 & 111.60 & 287.76 \\ boyer & hProlog -r2 & 260 & 380 & 2750 & 36 & 6.10 & 6.10 & 109.74 & 0.91 \\ boyer & SICStus & 1440 & - & 6170 & 83 & 6.10 & 26.01 & 240.76 & 6954.23 \\ \hline life & hProlog -r0 & 2570 & 0 & 28960 & 100 & 3.76 & 15.04 & 17.70 & 8896.39 \\ life & hProlog -r1 & 1400 & 3060 & 30860 & 100 & 3.76 & 7.52 & 22.23 & 4211.53 \\ life & hProlog -r2 & 2820 & 2910 & 32200 & 200 & 3.76 & 7.52 & 22.27 & 4164.63 \\ life & SICStus & 4730 & - & 69250 & 100 & 3.76 & 9.91 & 22.20 & 8896.60 \\ \hline mandelbrot & hProlog -r0 & 10 & 0 & 29370 & 360 & 16.04 & 16.04 & 5774.14 & 0.09 \\ mandelbrot & hProlog -r1 & 50 & 0 & 29460 & 360 & 16.04 & 16.04 & 5774.14 & 0.09 \\ mandelbrot & hProlog -r2 & 20 & 10 & 29390 & 720 & 16.04 & 16.04 & 5774.14 & 0.09 \\ mandelbrot & SICStus & 2090 & - & 219070 & 171 & 16.04 & 16.02 & 2656.25 & 0.36 \\ \hline tspgc & hProlog -r0 & 1190 & 0 & 47960 & 1625 & 3.76 & 3.76 & 5684.49 & 258.23 \\ tspgc & hProlog -r1 & 1070 & 1610 & 52030 & 1625 & 3.76 & 3.76 & 5684.49 & 258.23 \\ tspgc & hProlog -r2 & 2730 & 1770 & 55540 & 3250 & 3.76 & 3.76 & 5684.49 & 258.23 \\ tspgc & SICStus & 3760 & - & 96430 & 1304 & 3.76 & 3.77 & 2867.55 & 258.43 \\ \hline \hline blid & hProlog -r0 & 1860 & 0 & 2480 & 6 & 3.76 & 240.64 & 0.01 & 131072.08 \\ blid & hProlog -r1 & 970 & 1910 & 3520 & 34 & 3.76 & 7.52 & 123.94 & 301.76 \\ blid & hProlog -r2 & 860 & 1750 & 3250 & 10 & 3.76 & 120.32 & 116.52 & 0.26 \\ blid & SICStus & 5660 & - & 7050 & 15 & 3.76 & 179.22 & 0.00 & 131072.33 \\ \hline worst & hProlog -r0 & 40 & 0 & 80 & 1 & 16.04 & 16.04 & 0.01 & 8192.06 \\
worst & hProlog -r1 & 70 & 110 & 240 & 1 & 16.04 & 16.04 & 0.01 & 8192.06 \\ worst & hProlog -r2 & 120 & 100 & 280 & 2 & 16.04 & 16.04 & 0.01 & 8192.06 \\ worst & SICStus & 90 & - & 240 & 1 & 16.04 & 16.05 & 0.00 & 8192.33 \\ \hline best & hProlog -r0 & 40 & 0 & 80 & 1 & 16.04 & 16.04 & 0.07 & 8192.06 \\ best & hProlog -r1 & 70 & 100 & 220 & 1 & 16.04 & 16.04 & 0.01 & 0.37 \\ best & hProlog -r2 & 70 & 110 & 240 & 2 & 16.04 & 16.04 & 8.01 & 0.37 \\ best & SICStus & 90 & - & 230 & 1 & 16.04 & 16.05 & 0.00 & 8192.33 \\ \hline \hline treelearner & hProlog -r0 & 990 & 0 & 41380 & 88 & 35.19 & 35.19 & 2936.31 & 0.08 \\ treelearner & hProlog -r1 & 920 & 760 & 39900 & 88 & 35.19 & 35.19 & 2936.50 & 0.08 \\ treelearner & hProlog -r2 & 1980 & 750 & 40570 & 176 & 35.19 & 35.19 & 2936.75 & 0.08 \\ treelearner & SICStus & 5340 & - & 95860 & 75 & 42.13 & 42.08 & 2947.34 & 0.35 \\ \hline emul & hProlog -r0 & 30 & 0 & 4610 & 70 & 3.76 & 3.76 & 258.28 & 0.08 \\ emul & hProlog -r1 & 60 & 20 & 4500 & 70 & 3.76 & 3.76 & 260.04 & 0.08 \\ emul & hProlog -r2 & 80 & 20 & 4310 & 140 & 3.76 & 3.76 & 260.08 & 0.08 \\ emul & SICStus & 740 & - & 8330 & 90 & 3.76 & 3.72 & 318.77 & 0.36 \\ \hline xsbcomp & hProlog -r0 & 20 & 0 & 390 & 3 & 3.76 & 3.76 & 10.02 & 0.07 \\ xsbcomp & hProlog -r1 & 20 & 0 & 360 & 3 & 3.76 & 3.76 & 10.48 & 0.07 \\ xsbcomp & hProlog -r2 & 20 & 0 & 410 & 6 & 3.76 & 3.76 & 10.72 & 0.07 \\ xsbcomp & SICStus & 30 & - & 850 & 4 & 3.76 & 3.77 & 12.96 & 0.35 \\ \hline dpcomp & hProlog -r0 & 190 & 0 & 850 & 13 & 3.81 & 3.81 & 29.76 & 0.07 \\ dpcomp & hProlog -r1 & 130 & 110 & 900 & 11 & 3.81 & 3.81 & 29.62 & 0.07 \\ dpcomp & hProlog -r2 & 240 & 120 & 1010 & 22 & 3.81 & 3.81 & 30.30 & 0.07 \\ \hline \end{tabular} \end{center} \caption{The Sharer and the Collector}\label{sharertimespace} \end{table} \begin{sloppypar} Table \ref{sharertimespace} shows sometimes a large difference between the memory consumption of SICStus Prolog and hProlog. 
Also the time spent in garbage collection and the number of collections can be very different. The reason is that although both systems are based on the WAM, they differ in a number of other design decisions. In particular, their heap expansion policies differ, their garbage collectors differ (the SICStus Prolog one is generational and compacting, while the hProlog one is non-generational and copying), they have a different approach to floating point arithmetic, and hProlog does not allocate free variables on the local stack. \end{sloppypar} In addition to the results in Table \ref{sharertimespace}, we can also mention that the build phase takes between 2.6 (for worst) and 8.3 (for blid) times as long as the absorb phase: the absorb phase is indeed much simpler. \subsection{Conclusions from the Benchmarks} By and large our results confirm the findings of \cite{appelhashconsinggc}: most benchmarks hardly benefit from representation sharing, and sometimes the space and time performance becomes worse. Apart from the artificial benchmark {\em blid/1}, only for boyer do we find a much larger ---huge in fact--- benefit from representation sharing than in \cite{appelhashconsinggc}. We have not been able to pinpoint why: the benchmarks used in \cite{appelhashconsinggc} are not even available anymore, let alone the {\em queries}. The fact that generational collection retains terms longer than an only-major-collections strategy might play a role. Still, our result is in line with the (confluent) rewriting character of boyer. The time taken by our implementation of representation sharing is reasonable: the algorithm is linear in the size of the heap, local stack, choicepoint stack and trail, so complexity-wise it is not worse than an actual garbage collection. The traversal of the stacks is however less complicated, since one no longer needs to take the liveness of the locations into account and less copying is going on.
In our application benchmarks, the sharer always takes less time than the garbage collection. It is clear that a better policy, and improvements to our implementation code, can make the sharer even more efficient. Our sharer does not depend on the efficiency of the underlying Prolog system, nor on its garbage collector, so we feel it is safe to say that our sharer can be implemented with the same (or better) performance in other WAM-like systems. \section{Variations, Extensions and Related Issues}\label{variants} \paragraph{\bf Unusual Sharing.} In \cite{DemoenICLP2002fresh}, the rather unusual representation sharings depicted in Figure \ref{fig:layout2} are described. \begin{figure}[h] \begin{centering} {\epsfig{file=weird.eps,width=1.0\textwidth}} \caption{Unusual representation sharing} \label{fig:layout2} \end{centering} \end{figure} Our current representation sharing implementation does not achieve these sharings. Still, all ingredients are present and, while the expected gains are small, it is nice that this unusual sharing can be achieved in time linear in the size of the heap (assuming perfect hashing). \paragraph{\bf Cyclic Terms.} \cite{appelhashconsinggc} deals with cyclic terms by excluding them from hash-consing.
It is easy to do the same in our implementation as follows: \begin{enumerate} \item besides the special values {\bf no-info} and {\bf impossible}, a cached\_hash entry can also have the value {\bf busy} \item when a functor cell is visited for the first time, the corresponding cached\_hash entry is set to {\bf busy} \item when a functor cell is visited recursively, a check on the corresponding cached\_hash entry detects that there is a cycle: the field is set to {\bf impossible} \item as usual, when a term is visited completely, its corresponding field is set to an appropriate value, i.e., {\bf impossible} or a pointer to the hashed\_terms table \end{enumerate} However, one can do better: a variation of point 3 above yields a procedure that can perform representation sharing also for cyclic terms. \begin{enumerate} \item[3'.] when a functor cell is visited recursively, a check on the corresponding cached\_hash entry detects that there is a cycle: a fixed value (say 17) is returned as the hash value of this term; the corresponding cached\_hash entry is not updated at this time: that happens only when the visit has returned to the point where the entry was set to {\bf busy} \end{enumerate} The procedure for testing equality of terms must also be adapted to deal correctly with cycles: this is by now common practice in most Prolog systems. Note that it does not matter which value is chosen in (3') above. What matters is only that the hash value of terms that can share their representation is the same. Still, our procedure can attach different hash values to cyclic terms that are equal (in the sense of {\em ==/2}) and could share their representation. This results in no representation sharing for those cyclic terms. As an example: \begin{Verbatim}[fontsize=\small, frame=single,samepage=true]
test :- X = f(1,f(1,X)), share, use(X).
\end{Verbatim} does not result in the same heap representation as \begin{Verbatim}[fontsize=\small, frame=single,samepage=true]
test :- X = f(1,X), use(X).
\end{Verbatim} The procedure based on minimization of finite automata described in \cite{neumerkeldiplomarbeit} does. \paragraph{\bf Mutable Terms.}\label{mutable} Prolog systems supporting destructive update ---through {\em setarg/3}, mutable terms or attributed variables--- often do this using a trail in which each entry keeps the old value: clearly, these old values can point to sharable terms and they can be updated accordingly in the final absorb phase. However, just as a ground mutable term must be copied by {\em copy\_term/2}, a mutable term itself is not allowed to absorb or be absorbed. This means that mutable terms must be recognizable during the build phase. In SICStus Prolog this is the case ({\em \$mutable/2} is reserved for this), but not so in other systems (e.g., SWI-Prolog, Yap, hProlog ...). In hProlog we have resolved that problem by introducing a declaration: {\em :- mutable foo/3.} declares that the arguments of any {\em foo/3} term can be destructively updated, and effectively prevents sharing of {\em foo/3} terms. We use one bit in the functor table and the overhead during the build phase is unnoticeable. Note that the {\em :- mutable} declaration does not readily work across modules. \paragraph{\bf Cooperation between Collector and Sharer.} We have implemented the representation sharing module independently of the garbage collector module. The advantage is less dependency and a higher potential that the sharer can be integrated in other systems. The disadvantage is that some information that the garbage collector has computed needs to be recomputed by the sharing module. For instance, the collector might leave behind information on which cells are trailed, and which cells contain sharable information. This would speed up the sharer, in particular its build phase.
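Returning briefly to cyclic terms: the busy-marking scheme of point (3') above can be modeled in a few lines of C. This is a toy model with hypothetical names; the real code manipulates cached\_hash entries for heap cells rather than fields of nodes.

```c
#include <assert.h>
#include <stddef.h>

/* 'mark' and 'hash' play the role of a cached_hash entry. */
enum { NO_INFO, BUSY, DONE };
typedef struct Node {
    int functor;
    struct Node *child;
    int mark;
    unsigned hash;
} Node;

#define CYCLE_HASH 17u   /* the fixed value returned on a back edge, as in (3') */

static unsigned term_hash(Node *t) {
    if (t == NULL) return 1u;                /* no child */
    if (t->mark == BUSY) return CYCLE_HASH;  /* revisited while busy: a cycle */
    if (t->mark == DONE) return t->hash;
    t->mark = BUSY;                          /* first visit */
    unsigned h = (unsigned)t->functor * 31u + term_hash(t->child);
    t->hash = h;                             /* the entry is updated only when */
    t->mark = DONE;                          /* the whole visit has returned   */
    return h;
}
```

Two isomorphic cycles built independently receive the same hash value and can thus end up in the same hashed\_terms bucket, while the traversal is guaranteed to terminate.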
\paragraph{\bf What if Representation Sharing does not work.} The benchmark programs show that representation sharing is not always effective: it depends highly on the type of program. When representation sharing does not work, this can be noticed during a run of the representation sharing module by observing the hashed\_terms table. If it keeps growing, many different terms are being found, which in turn indicates that representation sharing is not effective. An important advantage of our implementation is that the representation sharing process can be abandoned at any time, since no changes to the WAM run-time data structures are made until the absorb phase, in which structure (or list ...) pointers are updated, and even the absorb phase can be stopped before finishing. Also, if representation sharing is run only from time to time ---as suggested by Ulrich Neumerkel--- then the frequency of running it can take into account the effectiveness of representation sharing up to that moment. Such tuning could also depend on the relative performance of the garbage collector and the representation sharing module. \paragraph{\bf Parallelization.} During the build phase, the stacks (heap, local stack ...) are read-only, while the cached\_hash and hashed\_terms tables can be read and written by different workers. During the absorb phase, the cached\_hash and hashed\_terms tables are read-only, and only the stacks are written to. By giving different workers different parts of the heap to start working on, duplicate work might be avoided and synchronization slowdown kept low in the build phase. During the absorb phase, giving different workers different parts of the stacks makes their actions completely independent. \paragraph{\bf Variable Chains.} We have not treated variable chains in much detail, as we were mostly interested in sharing between the bodies of compound data.
However, a slight extension of the code for the build phase can also call save\_hash for all reference cells. That results in an effect similar to the variable shunting described in \cite{VariableShunting}, but it is not as {\em complete} as the method described there. Figure \ref{fig:chain} shows an example of how a chain of references is transformed. \begin{figure}[h] \begin{centering} \subfigure[Just before absorbing]{{\epsfig{file=chain1.eps,height=0.2\textheight}} \label{chain1}} \subfigure[After absorbing]{{\epsfig{file=chain2.eps,height=0.2\textheight}} \label{chain2}} \caption{Absorption for chains of references in action} \label{fig:chain} \end{centering} \end{figure} \paragraph{\bf Backtrackable Representation Sharing.}\label{backtrackablerepshar} Backtrackable representation sharing would follow the principle that when two terms are identical (as for {\em ==/2}), one can absorb the other, regardless of whether they have trailed cells or not. The change made (to a LIST or STRUCT-tagged pointer) by the absorb phase is now conditionally (and value) trailed. This costs extra trail space of course. On cut, the trail can be tidied, so in case the computation eventually becomes deterministic, the amount of sharing can be arbitrarily larger than without this form of backtrackable representation sharing. However, if all sharing were trailed, it is possible that an immediately following GC would not be able to recover anything. And if the computation eventually becomes deterministic, running the sharer will do the same job as the backtrackable representation sharing, only later -- which might even be better, because the earlier sharing could have been undone by backtracking anyway. All in all, our feeling is that backtrackable representation sharing is not worthwhile.
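The value trailing this would require can be sketched as follows. This is a minimal model with hypothetical names; hProlog's actual trail format and the conditionality check (comparing against the choicepoint's heap mark) differ.

```c
#include <assert.h>

/* Toy value trail: each entry remembers a cell address and its old contents. */
typedef unsigned long cell_t;
typedef struct { cell_t *addr; cell_t old; } TrailEntry;

#define TRAILSIZE 64
static TrailEntry trail[TRAILSIZE];
static int trailtop = 0;

/* Redirect a LIST/STRUCT-tagged cell to a shared body, value-trailing the
   old contents so the redirection can be undone on backtracking. */
static void share_conditionally(cell_t *cell, cell_t shared) {
    trail[trailtop].addr = cell;
    trail[trailtop].old = *cell;
    trailtop++;
    *cell = shared;
}

/* On backtracking, undo all sharing recorded since the given trail mark;
   on cut, the entries could instead simply be discarded (trail tidying). */
static void undo_to(int mark) {
    while (trailtop > mark) {
        trailtop--;
        *trail[trailtop].addr = trail[trailtop].old;
    }
}
```

The extra trail traffic is exactly the cost the paragraph above weighs against the extra sharing.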
\paragraph{\bf Partial Sharing.} {\em Partial sharing} refers to running the sharer in an incomplete way, i.e., it achieves part of its potential effect, but maybe not all. Partial sharing can result for instance from restricting the part of the heap in which duplicate terms are identified, i.e., restricting the build phase to part of the heap. Another possibility is to restrict sharing to certain terms, e.g., just to lists, or to certain parts of the other stacks. It is one of the strengths of our implementation approach that all such variations can be incorporated rather easily. \paragraph{\bf Incremental Sharing.} The notion of {\em incremental sharing} refers to the possibility of performing a partial sharer pass, e.g., on part of the heap, and continuing that pass later on, eventually obtaining the same effect as running the sharer completely. The ability to perform partial sharing is certainly needed, but there is more: information must be passed from one partial run to the next, and the user program and the sharer must be able to run in an interleaved way. This immediately raises the question of completeness, but efficiency is also at stake. The issues with incremental sharing are similar to those with generational sharing in the next paragraph, so we do not discuss incremental sharing separately. \paragraph{\bf Generational Sharing.} The notion of {\em generational sharing} refers to the possibility of not repeating sharing on a part of the heap on which it was performed earlier. In analogy with generational garbage collection, there is a rationale for performing generational sharing: for generational garbage collection, the rationale is that new objects tend to die quickly; for generational sharing, the rationale is that redoing sharing on old data (on which sharing was performed earlier) does not pay off.
\begin{sloppypar} Our strategy for non-generational sharing is to recompute the cached\_hash and hashed\_terms tables from scratch every time after a new garbage collection. With generational sharing, one would like to reuse the part of the tables corresponding to the older generation. \end{sloppypar} We reason about forward computation first: the information on terms in the older generation that were ground at the previous run of the sharer and eligible for sharing at that moment is still valid. The same is generally not true for a non-ground term: it can now contain cells that are trailed, and in that case the information about the term must be discarded from the table, or at least not used. Since it is not straightforward to keep track of which information in the tables is no longer valid for this reason, it might be best to restrict a generational sharer to ground terms only. Now suppose that backtracking has taken place between two activations of the sharer: generally, this invalidates entries in the sharer tables because terms have disappeared. It is easy to adapt the cached\_hash table (it shrinks with the heap on backtracking), but the hashed\_terms table also needs to be adapted. This can be achieved by keeping high and low water marks of the top-of-heap pointer. The cost of adapting the tables, however, might be larger than the cost of rebuilding them. \section{Related Work}\label{related} \cite{appelhashconsinggc} describes how hash-consing can be performed during garbage collection in an implementation of Standard ML (SML/NJ). The collector is generational, and the data structures in the old generation are hash-consed. In this way, the operation of hash-consing is restricted to data structures that are expected to live long. The reported performance and space gains are disappointing: half of the benchmarks lose performance (up to 25\%) and the gain is at most 10\% (for boyer).
The space improvement is even smaller: on most benchmarks less than 1\%. Also for space, boyer is the exception with about 15\%. Note, however, that these space figures concern the amount of data copied to the older generation, i.e., the data that is hash-consed, and which is collected infrequently. As such, these numbers do not give full insight into the potential of hash-consing. Still, \cite{appelhashconsinggc} is most closely related to our implementation of representation sharing for Prolog: our strategy is to perform representation sharing after a (major) garbage collection, so we introduce sharing only for data that has just survived a collection. Mercury \cite{zoltan:mercury} is basically a functional language, so the issue of trailing does not arise. In the developers' mailing list in August 1999, the issue of hash-consing was raised, with a proposal for an implementation as well as for how to present it to the user. It is interesting that at some point, the opposite of our {\em :- mutable} declaration was proposed. As an example, {\em :- pragma hash\_cons(foo/3)} tells the compiler to hash-cons the constructors of type {\em foo/3}. As far as we know, the proposals were not implemented. Last but not least, \cite{neumerkeldiplomarbeit} provides the example {\em blid/1}, and gives a high-level outline of an algorithm for minimization of heap terms seen as DFAs. Our implementation can be seen as a concrete version of that algorithm. However, our {\em minimization} is most similar to \cite{ershov58}, in which Ershov uses (for the first time in the published history of computer science) hashing to detect common subtrees in a given tree. \section{Conclusion}\label{conclusion} Without the questions by Ulrich Neumerkel on comp.lang.prolog, we would not have worked on this topic, and we are grateful for his insistence that Prolog systems should have a sharer.
We have provided a practical and efficient implementation of representation sharing that can be incorporated without problems in most WAM-based systems. Our implementation has the advantage that it does not rely on a particular garbage collection strategy or implementation. On the other hand, a tighter integration of the garbage collector with the representation sharing module can make the latter more efficient. Still, representation sharing is not effective for all programs, so it must not be applied indiscriminately, i.e., it needs its own policy. We have also shown that input sharing for {\em findall/3} is easy to implement. \section*{Acknowledgements} We thank the anonymous referees: their suggestions have improved the paper considerably. The second author acknowledges support from project {\em GOA/08/008 Probabilistic Logic Learning}, and from the Research Foundation Flanders (FWO) through projects {\em WOG: Declarative Methods in Computer Science} and {\em G.0221.07 Platform independent analysis and implementation of Constraint Handling Rules}.
\section{Introduction} Transform-based methods are widely employed in digital signal processing applications~\cite{ahmed1975}. In this context, the efficient computation of discrete transforms has continually attracted community efforts and the proposal of fast algorithms~\cite{Blahut2010}. In particular, the 8-point discrete cosine transform (DCT) has a proven record of scientific and industrial applications, as demonstrated by the multitude of image and video coding standards that adopt it, such as: JPEG~\cite{Wallace1992}, MPEG~\cite{Gall1992, roma2007hybrid, mpeg2}, H.261~\cite{h261, Liou1990}, H.263~\cite{h263, roma2007hybrid}, H.264/AVC~\cite{wiegand2003, h264}, and the recent high efficiency video coding (HEVC)~\cite{hevc,hevc1}. The HEVC is capable of achieving high compression performance at approximately half the bit rate required by its predecessor H.264/AVC at the same image quality~\cite{hevc1, Park2012, Ohm2012, Potluri2013}. On the other hand, the HEVC requires a significantly higher computational complexity in terms of arithmetic operations~\cite{Park2012, Ohm2012, sullivan2012, Potluri2013}, being 2--4 times more computationally costly than H.264/AVC~\cite{Park2012, Potluri2013}. In this context, the efficient computation of the DCT is an avenue for improving the performance of the above-mentioned codecs. Since its inception, several fast algorithms for the DCT have been proposed~\cite{Chen1977, hou1987fast, Arai1988, Loeffler1989, fw1992, britanak2007discrete}. However, traditional algorithms aim at the computation of the \emph{exact} DCT, which requires several multiplication operations. Additionally, several algorithms have already achieved the theoretical multiplicative complexity lower bounds~\cite{Loeffler1989,winograd1980}. As a consequence, progress in this area turned to approximate methods~\cite{Haweel2001,lengwehasatit2004scalable,bas2008}.
In some applications, a simple DCT approximation can provide meaningful results at low arithmetic complexity~\cite{cb2011}. Thus, approximation techniques for the DCT are becoming increasingly popular~\cite{Haweel2001,bas2008, bas2009, bas2013,Cintra2014-sigpro}. Such approximations can reduce the computational demands of the DCT, leading to low-power, high-speed realizations~\cite{Potluri2013}, while ensuring adequate numerical accuracy. Furthermore, it is a well-known fact that in many DCT applications~\cite{Makkaoui2010,Docef2002,rao1990discrete}, the most useful signal information tends to be concentrated in the low-frequency coefficients. This is because the DCT presents good energy compaction properties, being closely related to the Karhunen-Lo\`eve transform~\cite{Ahmed1974}. Therefore, only the low-frequency DCT components need to be computed in these applications. A typical example of this situation occurs in data compression applications~\cite{Rao2001}, where high-frequency components are often zeroed by the quantization process~\cite[p.~586]{Malepati2010}. Then, only the quantities that are likely to be significant should be computed~\cite{Huang2000}. This approach is called frequency-domain \emph{pruning} and has been employed for computing the discrete Fourier transform~(DFT)~\cite{wang2012generic, airoldi2010energy, whatmough2012vlsi,kim2011islanding, carugati2012variable}. Such methodology was originally applied in the DCT context in~\cite{wang1991pruning} and~\cite{skodras1994fast}. In~\cite{Makkaoui2010, Lecuire2012}, the two-dimensional (\mbox{2-D}) version of the pruned DCT was proposed. In the context of low-powered wireless vision sensor networks, a pruned DCT was proposed in~\cite{kouadria2013low} based on the binary DCT~\cite{bas2013}. In~\cite{meher2014efficient}, Meher~\emph{et al.} proposed an HEVC architecture where the wordlength was kept fixed by means of discarding least significant bits.
In that context, the goal was the minimization of the computational complexity at the expense of wordlength truncation. Such an approach was also termed `pruning'. However, it is fundamentally different from the approach discussed in the current paper. This terminology distinction is worth observing. Thus, in response to the growing need for high compression of images and moving pictures for various applications~\cite{hevc}, we propose a further reduction of the computational cost of the 8-point DCT computation in the context of JPEG-like compression and HEVC processing. In this work, we introduce pruned DCT approximations for image and video compression. Essentially, DCT-like pruning consists of extracting from a given approximate DCT matrix a submatrix that aims at furnishing similar mathematical properties. We advance the application of pruning techniques to several DCT approximations listed in recent literature. In this paper, we aim at identifying adequate pruned approximations for image compression applications. VLSI realizations of both the \mbox{1-D} and \mbox{2-D} versions of the proposed methods are also sought. This paper is organized as follows. In Section~\ref{sec:math_back}, a mathematical review of DCT approximations and pruning methods is furnished. The exact and approximate DCT are presented and the pruning procedure is mathematically described. In Section~\ref{sec:complex_perform}, we propose several pruned methods for approximate DCT computation and assess them by means of arithmetic complexity, coefficient energy distribution in the transform-domain, and image compression performance. A combined figure of merit considering performance and complexity is introduced. In Section~\ref{sec:vlsi}, a VLSI realization of the optimal pruned method according to the suggested figure of merit is proposed. Both FPGA and ASIC realizations are assessed in terms of area, time, frequency, and power consumption. Section~\ref{sec:conclusion} concludes the paper.
\section{Mathematical Background} \label{sec:math_back} \subsection{Discrete Cosine Transform} Let $\mathbf{x}=\begin{bmatrix} x_0 & x_1 & \cdots & x_{N-1} \end{bmatrix}^\top$ be an $N$-point input vector. The one-dimensional DCT is a linear transformation that maps $\mathbf{x}$ into an output vector~$\mathbf{X}=\begin{bmatrix} X_0 & X_1 & \cdots & X_{N-1} \end{bmatrix}^\top$ of transform coefficients, according to the following expression~\cite{Oppenheim2010}: \begin{align} \label{equation-dct-summation} X_k = \alpha_k \cdot \sqrt{\frac{2}{N}} \cdot \sum_{n=0}^{N-1} x_n \cdot \cos\left\{\frac{(n+\frac{1}{2})k\pi}{N} \right\} , \end{align} where $k = 0,1,\ldots,N-1$, $\alpha_0 = 1/\sqrt{2}$, and $\alpha_k = 1$, for $k>0$. In matrix formalism, \eqref{equation-dct-summation} is given by: \begin{align} \mathbf{X} = \mathbf{C} \cdot \mathbf{x} , \end{align} where $\mathbf{C}$ is the $N$-point DCT matrix whose entries are expressed according to $c_{m,n} = \alpha_m \cdot \sqrt{2/N}\cdot \cos\left\{(n+\frac{1}{2})m\pi/N \right\}$, $m,n = 0,1,\ldots,N-1$~\cite{britanak2007discrete}. Since the DCT is an orthogonal transform, the inverse transformation is given by: $\mathbf{x} = \mathbf{C}^\top \cdot \mathbf{X}$. Because the DCT satisfies the kernel separability property, the \mbox{2-D} DCT can be expressed in terms of the \mbox{1-D} DCT. Let $\mathbf{A}$ be an $N\times N$ matrix. The forward \mbox{2-D} DCT operation applied to $\mathbf{A}$ yields a transform-domain image~$\mathbf{B}$ furnished by: $\mathbf{B} = \mathbf{C} \cdot \mathbf{A} \cdot \mathbf{C}^\top $. In fact, the \mbox{2-D} DCT can be computed by eight column-wise calls of the \mbox{1-D} DCT applied to~$\mathbf{A}$; the resulting intermediate image is then submitted to eight row-wise calls of the \mbox{1-D} DCT. In this paper, we devote our attention to the case $N=8$.
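As a quick numerical check of the definitions above, the following pure-Python sketch (an illustration only) builds the 8-point DCT matrix entrywise from the formula for $c_{m,n}$ and verifies the orthogonality property $\mathbf{C}\cdot\mathbf{C}^\top = \mathbf{I}$:

```python
import math

def dct_matrix(N=8):
    """Entries c[m][n] = alpha_m * sqrt(2/N) * cos((n + 1/2) m pi / N)."""
    return [[(1/math.sqrt(2) if m == 0 else 1.0) * math.sqrt(2.0/N)
             * math.cos((n + 0.5) * m * math.pi / N)
             for n in range(N)] for m in range(N)]

C = dct_matrix(8)
# Orthogonality: the rows of C are orthonormal, so C * C^T = I.
for i in range(8):
    for j in range(8):
        dot = sum(C[i][n] * C[j][n] for n in range(8))
        assert abs(dot - (1.0 if i == j else 0.0)) < 1e-12
```

The first row is constant, $c_{0,n} = 1/\sqrt{8}$, which is the DC component used throughout the paper.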
\subsection{DCT Approximations} In general terms, a DCT approximation~$\hat{\mathbf{C}}$ consists of the product of a low-complexity matrix~$\mathbf{T}$ and a scaling diagonal matrix~$\mathbf{S}$ that ensures orthogonality or quasi-orthogonality~\cite{Cintra2014-sigpro}. Thus, we have $\hat{\mathbf{C}} = \mathbf{S} \cdot \mathbf{T}$~\cite{bas2008,cb2011,bc2012,Potluri2013}. The entries of the low-complexity matrix are defined over the set $\{0,\pm1,\pm2\}$, which results in a multiplierless operator---only addition and bit-shifting operations are required. Although it usually possesses irrational elements, the scaling diagonal matrix~$\mathbf{S}$ does not pose any extra computational overhead for image and video compression applications. This is due to the fact that the matrix~$\mathbf{S}$ can be conveniently merged into the quantization step of compression algorithms~\cite{bas2008,bas2009,bc2012,Potluri2013}. Among the various DCT approximations available in the literature, we single out the following methods: (i)~the signed DCT~(SDCT), which is the seminal method in the DCT approximation field~\cite{Haweel2001}; (ii)~the Bouguezel-Ahmad-Swamy approximations~\cite{bas2008,bas2009,bas2013}; (iii)~the rounded DCT (RDCT)~\cite{cb2011}; and (iv)~the modified RDCT (MRDCT)~\cite{bc2012}. These approximations were selected because they collectively exhibit a wide range of complexity vs.\ performance trade-off figures~\cite{bc2012}. Moreover, such approximations have been demonstrated to be useful in image compression. The low-complexity matrices of the above methods are shown in Table~\ref{table-approximations}. Additionally, we also considered the 8-point naturally ordered Walsh-Hadamard transform~(WHT), which is a well-known low-complexity transform with applications in image processing~\cite{Elliot1982,bas2013}. \begin{table} \centering \caption{Approximate DCT methods} \label{table-approximations} \begin{tabular}{lcc} \toprule Method & $\mathbf{T}$ & Orthogonal?
\\ \midrule SDCT~\cite{Haweel2001} & $ \left[ \begin{rsmallmatrix} 1& 1& 1& 1& 1& 1& 1& 1 \\ 1& 1& 1& 1&-1&-1&-1&-1 \\ 1& 1&-1&-1&-1&-1& 1& 1 \\ 1&-1&-1&-1& 1& 1& 1&-1 \\ 1&-1&-1& 1& 1&-1&-1& 1 \\ 1&-1& 1& 1&-1&-1& 1&-1 \\ 1&-1& 1&-1&-1& 1&-1& 1 \\ 1&-1& 1&-1& 1&-1& 1&-1 \end{rsmallmatrix} \right] $ & No \\ WHT~\cite{Elliot1982} & $ \left[ \begin{rsmallmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\ 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1\\ 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1\\ 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1\\ 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1\\ 1 & -1 & 1 & -1 & -1 & 1 & -1 & 1\\ 1 & 1 & -1 & -1 & -1 & -1 & 1 & 1\\ 1 & -1 & -1 & 1 & -1 & 1 & 1 & -1 \end{rsmallmatrix} \right] $ & Yes \\ BAS-2008~\cite{bas2008} & $ \left[ \begin{rsmallmatrix} 1& 1& 1& 1& 1& 1& 1& 1 \\ 1& 1& 0& 0& 0& 0&-1&-1 \\ 1& \frac{1}{2}&-\frac{1}{2}&-1&-1&-\frac{1}{2}& \frac{1}{2}& 1 \\ 0& 0&-1& 0& 0& 1& 0& 0 \\ 1&-1&-1& 1& 1&-1&-1& 1 \\ 1&-1& 0& 0& 0& 0& 1&-1 \\ \frac{1}{2}&-1& 1&-\frac{1}{2}&-\frac{1}{2}& 1&-1& \frac{1}{2} \\ 0& 0& 0&-1& 1& 0& 0& 0 \end{rsmallmatrix} \right] $ & Yes \\ BAS-2009~\cite{bas2009} & $ \left[ \begin{rsmallmatrix} 1& 1& 1& 1& 1& 1& 1& 1 \\ 1& 1& 0& 0& 0& 0&-1&-1 \\ 1& 1&-1&-1&-1&-1& 1& 1 \\ 0& 0&-1& 0& 0& 1& 0& 0 \\ 1&-1&-1& 1& 1&-1&-1& 1 \\ 1&-1& 0& 0& 0& 0& 1&-1 \\ 1&-1& 1&-1&-1& 1&-1& 1 \\ 0& 0& 0&-1& 1& 0& 0& 0 \end{rsmallmatrix} \right] $ & Yes \\ BAS-2013~\cite{bas2013} & $ \left[ \begin{rsmallmatrix} 1& 1& 1& 1& 1& 1& 1& 1 \\ 1& 1& 1& 1&-1&-1&-1&-1 \\ 1& 1&-1&-1&-1&-1& 1& 1 \\ 1& 1&-1&-1& 1& 1&-1&-1 \\ 1&-1&-1& 1& 1&-1&-1& 1 \\ 1&-1&-1& 1&-1& 1& 1&-1 \\ 1&-1& 1&-1&-1& 1&-1& 1 \\ 1&-1& 1&-1& 1&-1& 1&-1 \end{rsmallmatrix} \right] $ & Yes \\ RDCT~\cite{cb2011} & $ \left[ \begin{rsmallmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 0 & 0 &-1 &-1 &-1 \\ 1 & 0 & 0 &-1 &-1 & 0 & 0 & 1 \\ 1 & 0 &-1 &-1 & 1 & 1 & 0 &-1 \\ 1 &-1 &-1 & 1 & 1 &-1 &-1 & 1 \\ 1 &-1 & 0 & 1 &-1 & 0 & 1 &-1 \\ 0 &-1 & 1 & 0 & 0 & 1 &-1 & 0 \\ 0 &-1 & 1 &-1 & 1 &-1 & 1 & 0 \end{rsmallmatrix} \right] 
$ & Yes \\ MRDCT~\cite{bc2012} & $ \left[ \begin{rsmallmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 &-1 \\ 1 & 0 & 0 &-1 &-1 & 0 & 0 & 1 \\ 0 & 0 &-1 & 0 & 0 & 1 & 0 & 0 \\ 1 &-1 &-1 & 1 & 1 &-1 &-1 & 1 \\ 0 &-1 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 &-1 & 1 & 0 & 0 & 1 &-1 & 0 \\ 0 & 0 & 0 &-1 & 1 & 0 & 0 & 0 \end{rsmallmatrix} \right] $ & Yes \\ \bottomrule \end{tabular} \end{table} \subsection{Pruned Exact and Approximate DCT} Essentially, DCT pruning consists of extracting from the 8$\times$8 DCT matrix~$\mathbf{C}$ a submatrix that aims at furnishing mathematical properties similar to those of~$\mathbf{C}$. Pruning is often realized in the transform domain by computing fewer transform coefficients than prescribed by the full transformation. Usually, only the $K<N$ coefficients that retain the most energy are preserved. For the DCT, this corresponds to the first~$K$ rows of the DCT matrix. Therefore, this particular type of pruning implies the following $K\times 8$ matrix: \begin{align} \mathbf{C}_K = \begin{bmatrix} c_{0,0} & c_{0,1} & \cdots & c_{0,7} \\ c_{1,0} & c_{1,1} & \cdots & c_{1,7} \\ \vdots & \vdots & \ddots & \vdots \\ c_{K-1,0} & c_{K-1,1} & \cdots & c_{K-1,7} \end{bmatrix} , \end{align} where $0<K\le8$ and $c_{m,n}$, $m,n=0,1,\ldots,7,$ are the entries of~$\mathbf{C}$. The case $K=8$ corresponds to the original transformation. Such a procedure was proposed in~\cite{Makkaoui2010,Lecuire2012} for the DCT in the context of wireless sensor networks. For the \mbox{2-D} case, the pruned DCT is given by: $\tilde{\mathbf{B}} = \mathbf{C}_K \cdot \mathbf{A} \cdot \mathbf{C}_K^\top$. Notice that $\tilde{\mathbf{B}}$ is a $K\times K$ matrix over the transform-domain. Lecuire~\emph{et al.}~\cite{Lecuire2012} showed that retaining the transform-domain coefficients in a $K\times K$ square pattern at the upper-left corner leads to a better energy-distortion trade-off than the alternative triangle pattern~\cite{Makkaoui2010}.
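To illustrate the pruned transformation, the sketch below (pure Python, our own illustration with an arbitrary test block) forms $\mathbf{C}_K$ from the first $K$ rows of the exact DCT matrix and checks that $\tilde{\mathbf{B}} = \mathbf{C}_K \cdot \mathbf{A} \cdot \mathbf{C}_K^\top$ coincides with the upper-left $K\times K$ corner of the full transform $\mathbf{B} = \mathbf{C}\cdot\mathbf{A}\cdot\mathbf{C}^\top$:

```python
import math

def dct_matrix(N=8):
    # Exact DCT-II matrix, entrywise as defined in the text.
    return [[(1/math.sqrt(2) if m == 0 else 1.0) * math.sqrt(2.0/N)
             * math.cos((n + 0.5) * m * math.pi / N)
             for n in range(N)] for m in range(N)]

def mul(X, Y):
    """Plain matrix product for lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def transpose(X):
    return [list(col) for col in zip(*X)]

C = dct_matrix(8)
A = [[float((3 * i + 5 * j) % 17) for j in range(8)] for i in range(8)]
B = mul(mul(C, A), transpose(C))          # full 2-D DCT (8 x 8)

K = 3
CK = C[:K]                                # pruned K x 8 matrix
Bt = mul(mul(CK, A), transpose(CK))       # pruned 2-D output (K x K)

# The pruned output equals the low-frequency corner of the full output.
for i in range(K):
    for j in range(K):
        assert abs(Bt[i][j] - B[i][j]) < 1e-9
```

This identity is what makes the square retention pattern natural: the pruned computation produces exactly the retained corner, never the discarded coefficients.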
The pruning approach can be applied to DCT approximations. By discarding the lower rows of the low-complexity matrix~$\mathbf{T}$, we obtain the following $K\times N$ pruned matrix transformation: \begin{align} \mathbf{T}_K = \begin{bmatrix} t_{0,0} & t_{0,1} & \cdots & t_{0,7} \\ t_{1,0} & t_{1,1} & \cdots & t_{1,7} \\ \vdots & \vdots & \ddots & \vdots \\ t_{K-1,0} & t_{K-1,1} & \cdots & t_{K-1,7} \end{bmatrix} , \end{align} where $t_{m,n}$, $m,n=0,1,\ldots,7$, are the entries of~$\mathbf{T}$ (cf.~Table~\ref{table-approximations}). Considering the orthogonalization method described in~\cite{Cintra2014-sigpro}, the $K\times 8$ pruned approximate DCT is given by: \begin{align} \hat{\mathbf{C}}_K = \mathbf{S}_K \cdot \mathbf{T}_K , \end{align} where $\mathbf{S}_K = \sqrt{ \operatorname{diag}\{(\mathbf{T}_K \cdot \mathbf{T}_K^\top)^{-1}\}}$ is a $K\times K$ diagonal matrix and $\operatorname{diag}(\cdot)$ returns a diagonal matrix with the diagonal elements of its argument. If $\mathbf{T}$ is orthogonal, then $\mathbf{T}_K$ satisfies semi-orthogonality~\cite[p.~84]{Abadir2005}. The \mbox{2-D} pruned DCT of a matrix $\mathbf{A}$ is given by \begin{align} \label{equation-pruned-2d} \tilde{\mathbf{B}} = \mathbf{T}_K \cdot \mathbf{A} \cdot \mathbf{T}_K^\top . \end{align} The resulting transform-domain matrix $\tilde{\mathbf{B}}$ has size $K\times K$. \section{Complexity and Performance Assessment} \label{sec:complex_perform} In this section, we analyze the arithmetic complexity of the selected pruned DCT approximations. We also assess their performance in terms of energy retention and image compression for each value of~$K$. \subsection{Arithmetic complexity} Because all considered approximate DCTs are natively multiplierless operators, the pruned DCT approximations inherit this property. Therefore, the arithmetic complexity of the pruned approximations is simply given by the number of additions and bit-shifting operations required by their respective fast algorithms.
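The orthogonalization above can be checked numerically. The sketch below (our own pure-Python illustration) builds $\hat{\mathbf{C}}_K = \mathbf{S}_K\cdot\mathbf{T}_K$ for the RDCT; because the rows of an orthogonal $\mathbf{T}$ are mutually orthogonal, $\mathbf{T}_K\cdot\mathbf{T}_K^\top$ is diagonal and $\mathbf{S}_K$ reduces to the reciprocal square roots of the row norms:

```python
import math

# Low-complexity RDCT matrix T (entries in {0, +1, -1}; see the table above).
T = [[ 1, 1, 1, 1, 1, 1, 1, 1],
     [ 1, 1, 1, 0, 0,-1,-1,-1],
     [ 1, 0, 0,-1,-1, 0, 0, 1],
     [ 1, 0,-1,-1, 1, 1, 0,-1],
     [ 1,-1,-1, 1, 1,-1,-1, 1],
     [ 1,-1, 0, 1,-1, 0, 1,-1],
     [ 0,-1, 1, 0, 0, 1,-1, 0],
     [ 0,-1, 1,-1, 1,-1, 1, 0]]

def pruned_approx_dct(T, K):
    """Build C_K = S_K * T_K.  Since T has mutually orthogonal rows,
    T_K * T_K^T is diagonal and S_K is 1/sqrt(squared row norm)."""
    TK = T[:K]
    s = [1.0 / math.sqrt(sum(t * t for t in row)) for row in TK]
    return [[s[m] * TK[m][n] for n in range(8)] for m in range(K)]

CK = pruned_approx_dct(T, 6)
# Semi-orthogonality: C_K * C_K^T equals the K x K identity matrix.
for i in range(6):
    for j in range(6):
        dot = sum(CK[i][n] * CK[j][n] for n in range(8))
        assert abs(dot - (1.0 if i == j else 0.0)) < 1e-12
```

Note that only $\mathbf{T}_K$ is applied in the multiplierless datapath; the irrational entries of $\mathbf{S}_K$ are absorbed into quantization, as discussed earlier.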
To illustrate the complexity assessment, we focus on the MRDCT~\cite{bc2012}, whose fast algorithm signal flow graph (SFG) is shown in \figurename~\ref{figure-sfg-mrdct_full}. The full computation of the MRDCT requires 14~additions. By judiciously considering the computational cost of only the first $K$ transform-domain components, we derived fast algorithms for the pruned MRDCT matrices, as shown in \figurename~\ref{figure-sfg-mrdct}. The same procedure was applied to each of the discussed approximations based on their fast algorithms~\cite{Haweel2001, Elliot1982, bas2008, bas2009, bas2013, cb2011, bc2012}. The obtained additive arithmetic complexity is presented in \tablename~\ref{table-assessment}. We notice that the pruned MRDCT exhibits the lowest computational complexity for all values of $K$. Such mathematical properties of the MRDCT translate into good hardware designs. Indeed, in~\cite{Potluri2013}, several DCT approximations were physically realized in FPGA devices. Hardware and performance assessments revealed that the MRDCT outperformed several competitors, including BAS-2008~\cite{bas2008} and RDCT~\cite{cb2011}, in terms of speed, hardware resource consumption, and power consumption~\cite{Potluri2013}.
\begin{figure*}% \centering \begin{subfigure}[b]{0.40\linewidth} \includegraphics[scale=1.0]{mrdct_full.eps} \caption{Original MRDCT (14 additions)} \label{figure-sfg-mrdct_full} \end{subfigure} \begin{subfigure}[b]{0.40\linewidth} \includegraphics[scale=1.00]{mrdct_k7.eps} \caption{$K = 7$ (13 additions)} \label{figure-pruned-mrdct-k7} \end{subfigure} \begin{subfigure}[b]{0.40\linewidth} \includegraphics[scale=1.00]{mrdct_k6.eps} \caption{$K = 6$ (12 additions)} \label{figure-pruned-mrdct-k6} \end{subfigure} \begin{subfigure}[b]{0.40\linewidth} \includegraphics[scale=1.00]{mrdct_k5.eps} \caption{$K = 5$ (11 additions)} \label{figure-pruned-mrdct-k5} \end{subfigure} \begin{subfigure}[b]{0.40\linewidth} \includegraphics[scale=1.00]{mrdct_k4.eps} \caption{$K = 4$ (10 additions)} \label{figure-pruned-mrdct-k4} \end{subfigure} \begin{subfigure}[b]{0.40\linewidth} \includegraphics[scale=1.00]{mrdct_k3.eps} \caption{$K = 3$ (9 additions)} \label{figure-pruned-mrdct-k3} \end{subfigure} \begin{subfigure}[b]{0.40\linewidth} \includegraphics[scale=1.00]{mrdct_k2.eps} \caption{$K = 2$ (8 additions)} \label{figure-pruned-mrdct-k2} \end{subfigure} \begin{subfigure}[b]{0.40\linewidth} \includegraphics[scale=1.00]{mrdct_k1.eps} \caption{$K = 1$ (7 additions)} \label{figure-pruned-mrdct-k1} \end{subfigure} \caption{Signal flow graph for the MRDCT matrix and pruned MRDCT matrices} \label{figure-sfg-mrdct} \end{figure*} An examination of~\eqref{equation-pruned-2d} reveals that the \mbox{2-D} pruned approximate DCT is computed by eight column-wise calls of the \mbox{1-D} pruned approximate DCT and~$K$ row-wise calls of the \mbox{1-D} pruned approximate DCT. Let $\operatorname{A}_\text{1-D}(\mathbf{T}_K)$ be the additive complexity of $\mathbf{T}_K$.
Therefore, the additive complexity of the \mbox{2-D} pruned approximate DCT is given by: \begin{equation} \begin{split} \operatorname{A}_\text{2-D}(\mathbf{T}_K) & = 8 \cdot \operatorname{A}_\text{1-D}(\mathbf{T}_K) + K \cdot \operatorname{A}_\text{1-D}(\mathbf{T}_K) \\ & = (8+K) \cdot \operatorname{A}_\text{1-D}(\mathbf{T}_K) . \end{split} \end{equation} For the particular case of the pruned MRDCT, we can derive the expressions below: \begin{align} \operatorname{A}_\text{1-D}(\mathbf{T}_K) & = K + 6 , \\ \operatorname{A}_\text{2-D}(\mathbf{T}_K) & = K^2 + 14 \cdot K + 48 , \end{align} for $K = 1,2,\ldots,8$. \begin{table*} \centering \caption{Complexity and performance assessment of pruned DCT approximations} \label{table-assessment} \begin{tabular}{l l c c c c c c c c } \toprule \multirow{3}{*}{Measure} & \multirow{3}{*}{Method} & \multicolumn{8}{c}{$K$} \\ \cmidrule{3-10} & & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \midrule & Exact DCT~\cite{Makkaoui2010} & 7 & 20 & 23 & 24 & 25 & 26 & 28 & 29 \\ & WHT~\cite{Elliot1982}& 7 & 8 & 11 & 12 & 19 & 20 & 23 & 24 \\ & SDCT~\cite{Haweel2001} & 7 & 14 & 17 & 19 & 20 & 22 & 23 & 24 \\ Additive & BAS-2008~\cite{bas2008} & 7 & 10 & 13 & 14 & 15 & 16 & 17 & 18 \\ complexity & BAS-2009~\cite{bas2009} & 7 & 10 & 13 & 14 & 15 & 16 & 17 & 18 \\ & BAS-2013~\cite{bas2013,kouadria2013low} & 7 & 14 & 17 & 20 & 21 & 22 & 23 & 24 \\ & RDCT~\cite{cb2011} & 7 & 12 & 13 & 16 & 17 & 19 & 20 & 22 \\ & MRDCT~\cite{bc2012} & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 \\ \midrule & Exact DCT & 95.46 & 97.47 & 98.55 & 99.13 & 99.49 & 99.71 & 99.87 & 100.00 \\ & WHT &95.46 & 95.57 & 96.03 & 96.25 & 98.24 & 98.52 & 99.63 & 100.00 \\ Mean& SDCT & 95.46 & 96.39 & 97.30 & 98.16 & 98.52 & 99.26 & 99.61 & 100.00 \\ retained & BAS-2008 & 95.46 & 97.08 & 98.10 & 98.86 & 99.20 & 99.51 & 99.68 & 100.00 \\ energy & BAS-2009 & 95.46 & 97.08 & 97.96 & 98.71 & 99.04 & 99.35 & 99.68 & 100.00 \\ & BAS-2013 & 95.46 & 97.18 & 98.08 & 98.76 & 99.10 & 99.44 & 99.77 & 100.00 \\
& RDCT & 95.46 & 97.36 & 98.28 & 98.81 & 99.16 & 99.41 & 99.75 & 100.00 \\ & MRDCT & 95.46 & 96.41 & 97.22 & 97.91 & 98.22 & 99.34 & 99.68 & 100.00 \\ \midrule & Exact DCT & 23.17 & 26.08 & 28.52 & 30.40 & 31.71 & 32.39 & 32.78 & 33.12 \\ & WHT & 23.17 & 23.17 & 23.63 & 23.81 & 26.88 & 27.22 & 29.40 & 30.17 \\ & SDCT & 23.17 & 24.28 & 25.23 & 27.15 & 27.59 & 28.43 & 28.82 & 29.84 \\ Mean & BAS-2008 & 23.17 & 25.30 & 27.04 & 29.34 & 30.15 & 30.97 & 31.33 & 32.20 \\ PSNR & BAS-2009 & 23.17 & 25.30 & 26.95 & 28.70 & 29.47 & 30.14 & 30.96 & 31.76 \\ & BAS-2013 & 23.17 & 24.41 & 26.95 & 28.73 & 29.51 & 30.31 & 31.12 & 31.84 \\ & RDCT & 23.17 & 25.83 & 27.64 & 28.94 & 29.79 & 30.41 & 31.21 & 31.96 \\ & MRDCT & 23.17 & 24.29 & 25.26 & 26.37 & 26.77 & 29.58 & 30.29 & 30.98 \\ \midrule & Exact DCT & 0.48 & 0.66 & 0.79 & 0.86 & 0.89 & 0.90 & 0.90 & 0.90 \\ & WHT & 0.48 & 0.49 & 0.55 & 0.58 & 0.74 & 0.76 & 0.82 & 0.83 \\ & SDCT & 0.48 & 0.59 & 0.67 & 0.77 & 0.80 & 0.81 & 0.82 & 0.84 \\ Mean & BAS-2008 & 0.48 & 0.62 & 0.74 & 0.83 & 0.85 & 0.87 & 0.88 & 0.89 \\ SSIM & BAS-2009 & 0.48 & 0.62 & 0.73 & 0.82 & 0.84 & 0.85 & 0.87 & 0.88 \\ & BAS-2013 & 0.48 & 0.64 & 0.74 & 0.82 & 0.85 & 0.87 & 0.87 & 0.88 \\ & RDCT & 0.48 & 0.66 & 0.76 & 0.82 & 0.85 & 0.87 & 0.88 & 0.88 \\ & MRDCT & 0.48 & 0.55 & 0.65 & 0.72 & 0.76 & 0.83 & 0.84 & 0.86 \\ \midrule & Exact DCT & 0.816 & 0.953 & 0.988 & 0.995 & 0.995 & 0.996 & 0.997 & 0.997 \\ & WHT & 0.815 & 0.815 & 0.823 & 0.823 & 0.955 & 0.955 & 0.982 & 0.982 \\ & SDCT &0.816 & 0.943 & 0.971 & 0.986 & 0.986 & 0.994 & 0.994 & 0.994 \\ Mean & BAS-2008 &0.816 & 0.936 & 0.973 & 0.993 & 0.993 & 0.993 & 0.993 & 0.995 \\ SR-SIM & BAS-2009&0.816 & 0.936 & 0.974 & 0.993 & 0.993 & 0.993 & 0.993 & 0.995 \\ & BAS-2013 & 0.815 & 0.951 & 0.982 & 0.997 & 0.997 & 0.997 & 0.997 & 0.997 \\ & RDCT &0.816 & 0.952 & 0.981 & 0.988 & 0.988 & 0.988 & 0.992 & 0.993 \\ & MRDCT & 0.816 & 0.898 & 0.932 & 0.958 & 0.958 & 0.982 & 0.986 & 0.988 \\ \bottomrule \end{tabular} 
\end{table*} \subsection{Retained energy} To further examine the performance of the pruned approximations, we investigate the signal energy distribution in the transform-domain for each value of $K$. This analysis is relevant because higher energy concentrations imply that $K$~can be reduced without severely degrading the transform coding performance~\cite{britanak2007discrete}. In fact, a higher energy concentration yields a large number of zeros in the transform-domain after quantization. In turn, a large number of zeros translates into longer runs of zeros, which are beneficial for the subsequent run-length encoding and Huffman coding stages~\cite{bhaskaran1997}. We analyzed a set of fifty $512 \times 512$ 256-level grayscale standard images from~\cite{USC_database}. Images originally in color were converted to grayscale by extracting the luminance. Image types included textures, satellite images, landscapes, portraits, and natural images. Such variety ensures that selection bias is not introduced in our experiments; thus our results are expected to be robust in this sense. Images were split into 8$\times$8 subimages. Resulting subimages were submitted to each of the discussed pruned DCT approximations for all values of~$K$. Subsequently, the relative amount of retained energy in the transform-domain was computed. Obtained values are displayed in Table~\ref{table-assessment}. \subsection{Image Compression} The proposed methods were submitted to an image compression simulation to evaluate their performance as image/video coding tools. We based our experiments on the image compression simulation described in~\cite{Haweel2001,bas2008,Rao2001,bhaskaran1997,penn1992}, which is briefly outlined next. We considered the same above-mentioned set of images, sub-image decomposition, and \mbox{2-D} pruned transformation, as detailed in the previous sub-section.
Resulting data were quantized by dividing each entry of the transformed matrix by the corresponding element of the standard quantization matrix for luminance~\cite[p.~153]{bhaskaran1997}. Differently from~\cite{Haweel2001,bas2008,cb2011}, we included the quantization step in the image compression simulation. This is a more realistic and suitable approach for pruned methods, which take advantage of the quantization step. An inverse procedure was applied to reconstruct the images, considering the \mbox{2-D} inverse transform operation. Recovered images were assessed for image degradation by means of the peak signal-to-noise ratio~(PSNR)~\cite[p.~9]{bhaskaran1997}, the structural similarity index~(SSIM)~\cite{Wang2004}, and the spectral residual based similarity (SR-SIM)~\cite{sr-sim}. The SSIM compares an original image~$\mathbf{I}$ with the recovered image~${\mathbf{R}}$ according to the following expression: \begin{equation} \operatorname{SSIM}\left(\mathbf{I}, \mathbf{R} \right) = \frac{\left[ 2\mu_{_I}\mu_{_R} + \left( L\cdot10^{-2} \right)^2 \right] \cdot \left[ 2\sigma_{_{IR}} + \left(3L\cdot 10^{-2}\right)^2 \right]}{\left[ \mu_{_I}^2 + \mu_{_R}^2 + \left(L\cdot 10^{-2}\right)^2 \right]\cdot \left[\sigma_{_I}^2 + \sigma_{_R}^2 + \left(3L\cdot 10^{-2}\right)^2 \right]}, \end{equation} where $ \mu_{_I} = \sum\limits_{i=1}^{8}\sum\limits_{j=1}^{8} \omega_{i,j} \cdot \mathbf{I}_{i,j} $, $ \sigma_{_I} = \left[ \sum\limits_{i=1}^{8}\sum\limits_{j=1}^{8} \omega_{i,j}\cdot \left( \mathbf{I}_{i,j} - \mu_{_I} \right)^{2} \right]^{1/2} $, $ \sigma_{_{IR}} = \sum\limits_{i=1}^{8}\sum\limits_{j=1}^{8} \omega_{i,j}\cdot \left( \mathbf{I}_{i,j} - \mu_{_I} \right)\cdot \left( \mathbf{R}_{i,j} - \mu_{_R} \right) $, with $\mu_{_R}$ and $\sigma_{_R}$ defined analogously, $L=255$ is the dynamic range of the pixel values, and $\omega_{i,j}$ is the $(i,j)$-th entry of a Gaussian weighting window $\mathbf{w} = \left[\omega_{i,j}\right]$, $i,j=1,2,\ldots,8$, with standard deviation $1.5$ and normalized to unit sum.
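For concreteness, the sketch below implements the 8$\times$8 windowed SSIM statistics in pure Python, following the standard definition with $C_1=(0.01L)^2$ and $C_2=(0.03L)^2$; it is an illustration of the formula (with an arbitrary test block), not the exact evaluation pipeline used in our simulations:

```python
import math

L = 255
C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # standard SSIM constants

def gaussian_window(n=8, sigma=1.5):
    """n x n Gaussian weighting window, normalized to unit sum."""
    c = (n - 1) / 2.0
    w = [[math.exp(-((i - c)**2 + (j - c)**2) / (2 * sigma**2))
          for j in range(n)] for i in range(n)]
    s = sum(sum(row) for row in w)
    return [[x / s for x in row] for row in w]

def ssim_block(I, R, w):
    """Windowed SSIM between two 8 x 8 blocks I and R."""
    mu_i = sum(w[i][j] * I[i][j] for i in range(8) for j in range(8))
    mu_r = sum(w[i][j] * R[i][j] for i in range(8) for j in range(8))
    var_i = sum(w[i][j] * (I[i][j] - mu_i)**2 for i in range(8) for j in range(8))
    var_r = sum(w[i][j] * (R[i][j] - mu_r)**2 for i in range(8) for j in range(8))
    cov = sum(w[i][j] * (I[i][j] - mu_i) * (R[i][j] - mu_r)
              for i in range(8) for j in range(8))
    return ((2*mu_i*mu_r + C1) * (2*cov + C2)) / \
           ((mu_i**2 + mu_r**2 + C1) * (var_i + var_r + C2))

w = gaussian_window()
I = [[16 * i + 2 * j for j in range(8)] for i in range(8)]   # test block
assert abs(ssim_block(I, I, w) - 1.0) < 1e-12    # identical blocks: SSIM = 1
assert ssim_block(I, [[0]*8]*8, w) < 1.0         # degraded block: SSIM < 1
```

The mean SSIM reported in Table~\ref{table-assessment} would be obtained by averaging such block scores over all subimages and all images.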
The SR-SIM between the original image~$\mathbf{I}$ and the recovered image~${\mathbf{R}}$ is calculated as described in~\cite{sr-sim}. Average PSNR, SSIM, and SR-SIM values of all images were computed and are shown in Table~\ref{table-assessment}. For a qualitative analysis, \figurename~\ref{fig:lena} displays the reconstructed Lena image computed via the MRDCT for all values of $K$. Associated PSNR, SSIM, and SR-SIM values are also shown. Visual inspection suggests $K=6$ as a good compromise between quality and complexity. Indeed, we notice that the PSNR improvement from $K=5$ to $K=6$ is 3.92\,dB, while the PSNR difference between $K=6$ and $K=7$ is just 0.4\,dB. \begin{figure*}% \centering \begin{subfigure}[b]{0.24\linewidth} \includegraphics[scale=0.30]{lena_mrdct_k1.eps} \caption{$K=1$ (PSNR=23.66, SSIM=0.63, \mbox{SR-SIM}=0.852)} \end{subfigure} \begin{subfigure}[b]{0.24\linewidth} \includegraphics[scale=0.30]{lena_mrdct_k2.eps} \caption{$K=2$ (PSNR=25.29, SSIM=0.69, \mbox{SR-SIM}=0.926)} \end{subfigure} \begin{subfigure}[b]{0.24\linewidth} \includegraphics[scale=0.30]{lena_mrdct_k3.eps} \caption{$K=3$ (PSNR=26.24, SSIM=0.75, \mbox{SR-SIM}=0.946)} \end{subfigure} \begin{subfigure}[b]{0.24\linewidth} \includegraphics[scale=0.30]{lena_mrdct_k4.eps} \caption{$K=4$ (PSNR=27.48, SSIM=0.79, \mbox{SR-SIM}=0.967)} \end{subfigure} \begin{subfigure}[b]{0.24\linewidth} \includegraphics[scale=0.30]{lena_mrdct_k5.eps} \caption{$K=5$ (PSNR=27.69, SSIM=0.80, \mbox{SR-SIM}=0.967)} \end{subfigure} \begin{subfigure}[b]{0.24\linewidth} \includegraphics[scale=0.30]{lena_mrdct_k6.eps} \caption{$K=6$ (PSNR=31.62, SSIM=0.86, \mbox{SR-SIM}=0.986)} \end{subfigure} \begin{subfigure}[b]{0.24\linewidth} \includegraphics[scale=0.30]{lena_mrdct_k7.eps} \caption{$K=7$ (PSNR=32.02, SSIM=0.87, \mbox{SR-SIM}=0.988)} \end{subfigure} \begin{subfigure}[b]{0.24\linewidth} \includegraphics[scale=0.30]{lena_mrdct_k8.eps} \caption{$K=8$ (PSNR=32.38, SSIM=0.87, \mbox{SR-SIM}=0.989)} \end{subfigure}
\caption{Reconstructed Lena image according to the pruned MRDCT} \label{fig:lena} \end{figure*} \subsection{Combined analysis} In order to compare the discussed approximations, we consider a combined figure of merit that takes into account some of the previously discussed measures. Although popular and worth reporting, mean retained energy and PSNR are closely related measures. Similarly, the SR-SIM is a derivative of the SSIM. For a combined figure of merit, we aim at combining unrelated measures; thus we retained the \mbox{2-D} additive complexity, PSNR, and SSIM values, whose numerical values are listed in Table~\ref{table-assessment}. This combined measure is proposed as the following linear cost function: \begin{equation} \label{eq:cost} \begin{split} \operatorname{cost}(\mathbf{T}_K) = & \alpha_1 \cdot \operatorname{A}_\text{2-D}(\mathbf{T}_K) + (1-\alpha_1) \cdot \\ & \Big\{ \alpha_2 \cdot [ -\operatorname{NMSSIM}(\mathbf{T}_K) ] + (1 - \alpha_2) \cdot [ -\operatorname{NMPSNR}(\mathbf{T}_K) ] \Big\} , \end{split} \end{equation} where $\alpha_1,\alpha_2\in[0,1]$ are weights; and NMSSIM and NMPSNR represent the normalized mean SSIM and the normalized mean PSNR, respectively, for all considered images submitted to a particular approximation~$\mathbf{T}_K$. The above cost function is a multi-objective function, of a kind commonly found in the optimization literature~\cite{ehrgott2005multicriteria}. Two types of metrics---arithmetic complexity and performance measurements---are subject to a convex combination according to $\alpha_1$. The performance measurements are themselves a convex combination of SSIM and PSNR measurements, balanced according to $\alpha_2$. Thus the weights $\alpha_1$ and $\alpha_2$ control the relative importance of the constituent metrics of the cost function.
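Equation~\eqref{eq:cost} is straightforward to evaluate; a minimal sketch, assuming the complexity term has been normalized to a scale comparable with the quality scores:

```python
def cost(a2d, nmssim, nmpsnr, alpha1, alpha2):
    """Linear cost function: a convex combination (weight alpha1) of the
    2-D additive complexity and of a convex combination (weight alpha2)
    of the negated normalized quality scores NMSSIM and NMPSNR.
    Quality scores enter negated because the cost is to be minimized."""
    quality = alpha2 * (-nmssim) + (1.0 - alpha2) * (-nmpsnr)
    return alpha1 * a2d + (1.0 - alpha1) * quality
```

Setting $\alpha_1=1$ recovers pure complexity minimization, while $\alpha_1=0$ with $\alpha_2=1$ recovers pure SSIM maximization.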
For large values of $\alpha_1$, the cost function emphasizes the minimization of the computational complexity; whereas, for small values of $\alpha_1$, the cost function is more prone to capture measures of image quality performance. The quantity $\alpha_2$ balances the composition of the performance measurement between NMSSIM and NMPSNR. Because we consider $\alpha_1,\alpha_2\in[0,1]$, all possible combinations of weights are taken into account. Only the particular context, application, and user requirements can determine the final choice of the weight values. \figurename~\ref{figure-cost}(a) and~(b) show, respectively, the regions of optimal transformation and of optimal pruning value $K$, for any choice of the weight values $\alpha_1$ and $\alpha_2$. For large $\alpha_1$ (emphasis on complexity minimization), the optimal choice tends to small $K$, regardless of the transform. Indeed, for small $K$, most pruned transformations collapse to the same matrix. For small $\alpha_1$ (emphasis on performance maximization), optimality favors more complex transformations with large values of $K$, with the full exact DCT as the limiting case. For mid-range values of $\alpha_1$, we have less trivial scenarios. In \figurename~\ref{figure-cost}(a), considering the optimal transform, we notice that for mid-range values of $\alpha_1$ the MRDCT and the BAS-2008 occupy most of the central area of the discussed region. Around the same region in \figurename~\ref{figure-cost}(b), for the MRDCT, we obtain mostly $K=6$; whereas for the BAS-2008 we have $K=6,8$. We emphasize that the proposed pruned MRDCT with $K=6$ requires \emph{only 12 additions}. The fast algorithm for this particular case is presented in \figurename~\ref{figure-pruned-mrdct-k6}.
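The weight sweep behind \figurename~\ref{figure-cost} amounts to evaluating the cost over a grid of $(\alpha_1,\alpha_2)$ pairs and keeping the minimizer for each pair. A sketch with illustrative placeholder figures (not the values of Table~\ref{table-assessment}):

```python
import numpy as np

# Illustrative placeholders: (normalized complexity, NMSSIM, NMPSNR).
candidates = {
    ("MRDCT", 6):    (0.45, 0.92, 0.93),
    ("MRDCT", 8):    (0.55, 0.95, 0.96),
    ("BAS-2008", 8): (0.70, 0.97, 0.97),
}

def cost(metrics, a1, a2):
    """Linear cost: convex mix of complexity and negated quality scores."""
    c, s, p = metrics
    return a1 * c + (1.0 - a1) * (a2 * (-s) + (1.0 - a2) * (-p))

def optimality_regions(n=5):
    """Map each (alpha1, alpha2) grid point to its cost-minimizing candidate."""
    grid = np.linspace(0.0, 1.0, n)
    return {(float(a1), float(a2)):
            min(candidates, key=lambda k: cost(candidates[k], a1, a2))
            for a1 in grid for a2 in grid}
```

Large $\alpha_1$ selects the least complex candidate and small $\alpha_1$ the best-performing one, mirroring the qualitative trends of \figurename~\ref{figure-cost}.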
\begin{figure} \centering \begin{subfigure}{6cm} \includegraphics[scale=1]{regions-transform-ssim.eps} \end{subfigure} \begin{subfigure}{5cm} \includegraphics[scale=1]{regions-k-ssim.eps} \end{subfigure} \caption{Optimality regions for the cost function: (a)~optimal transform and (b)~optimal pruning value.} \label{figure-cost} \end{figure} \subsection{HEVC Simulation} \begin{figure}[h] \centering \begin{subfigure}{10cm} \includegraphics[scale=1]{qp_psnr.eps} \end{subfigure} \\ \begin{subfigure}{10cm} \includegraphics[scale=1]{bitrate_psnr.eps} \end{subfigure} \caption{Video coding performance assessment.} \label{figure-videocoding_quant} \end{figure} Taking into account the previous combined analysis, we embedded the proposed pruned MRDCT ($K=6$), the BAS-2008 approximation ($K=8$), and the pruned BAS-2008 ($K=6$) in the widely employed HEVC reference software HM~10.0~\cite{hm_software}. This embedding consisted of replacing the original 8-point integer-based DCT transform present in the codec with each of the above-mentioned approximations. We considered nine CIF video sequences with 300~frames at 25~frames per second from a public video bank~\cite{xiph_database}. Such sequences were encoded with (i)~the original software and (ii)~the modified software. We assessed the mean PSNR for luminance by varying the quantization parameter~(QP) from 10 to 50 in steps of 5~units. Results are shown in \figurename~\ref{figure-videocoding_quant} considering both QP and bitrate. The obtained curves are almost indistinguishable. The mean PSNR values at $\text{QP}=30$ correspond to 37.06~dB, 36.92~dB, 36.96~dB, and 36.93~dB for the original integer DCT, the pruned MRDCT ($K=6$), BAS-2008, and the pruned BAS-2008 ($K=6$), respectively. The degradation of the pruned approximation methods relative to the unmodified software was smaller than 0.15~dB for this QP value.
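The sub-0.15\,dB degradation quoted above follows directly from the reported mean PSNR values at $\text{QP}=30$; a quick check:

```python
# Mean luminance PSNR at QP = 30, as reported in the text (dB).
psnr_qp30 = {
    "integer DCT (original)": 37.06,
    "pruned MRDCT (K=6)":     36.92,
    "BAS-2008":               36.96,
    "pruned BAS-2008 (K=6)":  36.93,
}

ref = psnr_qp30["integer DCT (original)"]
# PSNR loss of each approximation relative to the unmodified codec.
degradation = {name: round(ref - value, 2)
               for name, value in psnr_qp30.items()
               if name != "integer DCT (original)"}
```

The largest degradation, 0.14\,dB for the pruned MRDCT, stays below the 0.15\,dB bound.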
\figurename~\ref{fig:relative_psnr} shows the relative percent PSNR of each approximate method compared to the original HEVC according to QP and bitrate values. The curves show performance very close to that of the original codec. In \figurename~\ref{fig:relative_psnr_qp}, for low QP values, the approximations show even higher PSNR, i.e., more than $100\%$ relative PSNR, suggesting better compaction capability at low compression rates. However, identical QP values do not necessarily generate the same compression ratio for each method, since distinct coefficients are derived from each transformation and submitted to the same quantization table. \figurename~\ref{fig:relative_psnr_br} indicates that the approximations possess slightly lower coding performance than the original HEVC when compared at the same bitrate. At the same time, the approximate methods present considerably lower computational cost, and the loss of performance is smaller than $1\%$. \figurename~\ref{figure-videocoding_quali} shows a qualitative comparison considering the first frame of the standard ``Foreman'' video sequence at $\mathrm{QP}=30$. The degradation is hardly perceived. \begin{figure}[h] \centering \begin{subfigure}{10cm} \includegraphics[scale=1]{rel_psnr-vs-qp.eps} \caption{Relative PSNR vs. QP} \label{fig:relative_psnr_qp} \end{subfigure} \\ \begin{subfigure}{10cm} \includegraphics[scale=1]{rel_psnr-vs-bitrate.eps} \caption{Relative PSNR vs.
Bitrate} \label{fig:relative_psnr_br} \end{subfigure} \caption{Video coding performance assessment relative to the original HEVC.} \label{fig:relative_psnr} \end{figure} \begin{figure*} \begin{subfigure}[b]{0.450\linewidth} \centering \includegraphics[scale=1.00]{videocoding_dct.eps} \caption{Unmodified HEVC (PSNR=37.1154)} \end{subfigure} \begin{subfigure}[b]{0.450\linewidth} \centering \includegraphics[scale=1.00]{videocoding_mrdct.eps} \caption{MRDCT $K=6$ (PSNR=37.0613)} \end{subfigure} \\ \begin{subfigure}[b]{0.450\linewidth} \centering \includegraphics[scale=1.00]{videocoding_bas.eps} \caption{BAS-2008 $K=8$ (PSNR=37.0757)} \end{subfigure} \begin{subfigure}[b]{0.45\linewidth} \centering \includegraphics[scale=1.00]{videocoding_bask6.eps} \caption{BAS-2008 $K=6$ (PSNR=37.0669)} \end{subfigure} \caption{Reconstructed first frame of the ``Foreman'' video sequence encoded according to the considered methods.} \label{figure-videocoding_quali} \end{figure*} \section{VLSI Architectures} \label{sec:vlsi} We aim at the physical realization of pruned designs based on the MRDCT, BAS-2008, and BAS-2013. The MRDCT and BAS-2008 were selected in accordance with the discussion in the previous section. The BAS-2013 was also included because it is the basis for the only pruned approximate DCT competitor in the literature~\cite{kouadria2013low}. Such designs were realized as a separable \mbox{2-D} block transform using two \mbox{1-D} transform blocks with a transpose buffer between them. These blocks were designed and simulated, using bit-true cycle-accurate modeling, in Matlab/Simulink. Thereafter, the proposed architecture was ported to a Xilinx Virtex-6 field programmable gate array (FPGA) as well as to a custom CMOS standard-cell integrated circuit (IC) design. The transform was applied in a row-parallel fashion to the blocks of data, and all blocks were $8 \times 8$, irrespective of pruning. When $K$ decreases, the number of null elements in the blocks increases.
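In software terms, the separable pruned \mbox{2-D} flow (a pruned row pass, transposition, then the same pruned pass along the columns) can be sketched as follows; the matrix \texttt{T} is a generic $8\times 8$ transform placeholder, not the MRDCT itself:

```python
import numpy as np

def pruned_2d_transform(T, X, K):
    """Separable pruned 2-D transform of an 8x8 block X: only the first
    K transform-domain rows/columns are computed, and the remaining
    entries of the 8x8 output block are null, matching the pruned flow.
    T is a generic 8x8 transform matrix (placeholder)."""
    TK = T[:K, :]                  # pruned 1-D transform: first K rows
    Y = np.zeros((8, 8))
    Y[:K, :K] = TK @ X @ TK.T      # row pass, then the same pass on columns
    return Y
```

With $K=8$ this reduces to the full $\mathbf{T}\mathbf{X}\mathbf{T}^\top$; smaller $K$ leaves more null elements in the output block.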
The row-transformed data were subject to transposition, and then the same pruned algorithm was applied, albeit in the column direction. \figurename~\ref{figure-mrdct-architectures} shows the architectures for the MRDCT. The remaining designs have similar realizations. \begin{figure}% \centering \begin{subfigure}[b]{0.49\textwidth} \centering \scalebox{0.55}{\input{8point_n=8.tex}} \caption{Original MRDCT (14 additions)} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \centering \scalebox{0.55}{\input{8point_n=6.tex}} \caption{$K = 6$ (12 additions)} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \centering \scalebox{0.55}{\input{8point_n=4.tex}} \caption{$K = 4$ (10 additions)} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \centering \scalebox{0.55}{\input{8point_n=2.tex}} \caption{$K = 2$ (8 additions)} \end{subfigure} \caption{Digital architectures of the MRDCT matrix and pruned MRDCT matrices for $K = 6,4,2$.} \label{figure-mrdct-architectures} \end{figure} \subsection{FPGA Rapid Prototypes} The pruned architectures were physically realized on a Xilinx Virtex-6 XC6VLX 240T-1FFG1156 FPGA device with fine-grain pipelining for increased throughput. The FPGA realizations were verified using hardware-in-the-loop testing, which was achieved through a JTAG interface. The proposed approximations were verified using more than 10000~test vectors, with complete agreement with theoretical values. Evaluation of hardware complexity and real-time performance considered the following metrics: the number of employed configurable logic blocks (CLB), the flip-flop (FF) count, the critical path delay ($T_\text{cpd}$) in~ns, and the maximum operating frequency ($F_{\text{max}}$) in~MHz. The reported results were extracted from the \texttt{xflow.results} report file of the Xilinx FPGA tool flow. Frequency normalized dynamic power ($D_p$, in $\mathrm{mW}/\mathrm{MHz}$) was estimated using the Xilinx XPower Analyzer software tool.
The above measurements are shown in Table~\ref{fpga} for the proposed pruned MRDCT (highlighted in green), the pruned version of the BAS-2008 introduced in~\cite{bas2008} (highlighted in blue), and the pruned BAS-2013 introduced in~\cite{bas2013}. \begin{table} \centering \caption{Resource consumption on Xilinx XC6VLX240T-1FFG1156 device} \label{fpga} \begin{tabular}{lccccc} \toprule $K$ & CLB & FF & $T_\text{cpd}$~(ns) & $F_\text{max}$~(MHz) & $D_p$~(mW/MHz) \\ \midrule \multirow{3}{*}{1} & 107\cellcolor{green!50} & 376\cellcolor{green!50} & 2.263\cellcolor{green!50} & 441.89\cellcolor{green!50} & 0.67\cellcolor{green!50} \cellcolor{green!50} \\ & 107\cellcolor{blue!50} & 376\cellcolor{blue!50} & 2.263\cellcolor{blue!50} & 441.89\cellcolor{blue!50} & 0.67\cellcolor{blue!50}\cellcolor{blue!50} \\ & 107 & 376 & 2.263 & 441.89 & 0.67 \\ \hline \multirow{3}{*}{2} & 136\cellcolor{green!50} & 568\cellcolor{green!50} & 2.300\cellcolor{green!50} & 434.78\cellcolor{green!50} & 0.97\cellcolor{green!50} \cellcolor{green!50} \\ & 203\cellcolor{blue!50} & 672\cellcolor{blue!50} & 2.600\cellcolor{blue!50} & 384.61\cellcolor{blue!50} & 1.33\cellcolor{blue!50} \cellcolor{blue!50} \\ & 204 & 751 & 2.450 & 408.10 & 1.45 \\ \hline \multirow{3}{*}{3} & 210\cellcolor{green!50} & 783\cellcolor{green!50} & 2.509\cellcolor{green!50} & 398.56\cellcolor{green!50} & 0.87\cellcolor{green!50} \cellcolor{green!50} \\ & 252\cellcolor{blue!50} & 956\cellcolor{blue!50} & 2.878\cellcolor{blue!50} & 347.46\cellcolor{blue!50} & 1.74\cellcolor{blue!50} \cellcolor{blue!50} \\ & 263 & 978 & 2.534 & 394.63 & 2.11 \\ \hline \multirow{3}{*}{4} & 247\cellcolor{green!50} & 961\cellcolor{green!50} & 2.946\cellcolor{green!50} & 339.44\cellcolor{green!50} & 1.35\cellcolor{green!50} \cellcolor{green!50} \\ & 343\cellcolor{blue!50} & 1170\cellcolor{blue!50} & 3.100\cellcolor{blue!50} & 322.58\cellcolor{blue!50} & 2.06\cellcolor{blue!50} \cellcolor{blue!50} \\ & 339 & 1216 & 2.900 & 344.82 & 2.50 \\ \hline \multirow{3}{*}{5} &
290\cellcolor{green!50} & 1123\cellcolor{green!50} & 2.877\cellcolor{green!50} & 347.58\cellcolor{green!50} & 1.70\cellcolor{green!50} \cellcolor{green!50} \\ & 362\cellcolor{blue!50} & 1331\cellcolor{blue!50} & 3.067\cellcolor{blue!50} & 326.05\cellcolor{blue!50} & 2.76\cellcolor{blue!50} \cellcolor{blue!50} \\ & 377 & 1374 & 2.902 & 344.58 & 3.13 \\ \hline \multirow{3}{*}{6} & 350\cellcolor{green!50} & 1286\cellcolor{green!50} & 2.735\cellcolor{green!50} & 365.63\cellcolor{green!50} & 2.07\cellcolor{green!50} \cellcolor{green!50} \\ & 438\cellcolor{blue!50} & 1531\cellcolor{blue!50} & 3.214\cellcolor{blue!50} & 311.13\cellcolor{blue!50} & 3.07\cellcolor{blue!50} \cellcolor{blue!50} \\ & 382 & 1557 & 2.784 & 359.19 & 3.80 \\ \hline \multirow{3}{*}{7} & 424\cellcolor{green!50} & 1487\cellcolor{green!50} & 3.300\cellcolor{green!50} & 303.03\cellcolor{green!50} & 2.21\cellcolor{green!50} \cellcolor{green!50} \\ & 501\cellcolor{blue!50} & 1709\cellcolor{blue!50} & 3.286\cellcolor{blue!50} & 304.32\cellcolor{blue!50} & 3.58\cellcolor{blue!50} \cellcolor{blue!50} \\ & 445 & 1720 & 3.432 & 291.37 & 3.87 \\ \hline \multirow{3}{*}{8} & 445\cellcolor{green!50} & 1696\cellcolor{green!50} & 3.390\cellcolor{green!50} & 294.98\cellcolor{green!50} & 2.74\cellcolor{green!50} \cellcolor{green!50} \\ & 559\cellcolor{blue!50} & 1962\cellcolor{blue!50} & 3.300\cellcolor{blue!50} & 303.03\cellcolor{blue!50} & 3.85\cellcolor{blue!50} \cellcolor{blue!50} \\ & 517 & 1910 & 3.200 & 312.5 & 5.07 \\ \bottomrule \end{tabular} \end{table} \subsection{ASIC Synthesis} For the ASIC synthesis, the hardware description language code from the Xilinx System Generator FPGA design flow was ported to 45~nm CMOS technology and subjected to synthesis using Cadence Encounter. Standard ASIC cells from the FreePDK, a free open-source cell library at the 45~nm node, were used for this purpose.
The supply voltage of the CMOS realization was fixed at $V_\text{DD} = 1.1~\mathrm{V}$ during estimation of power consumption and logic delay. The adopted figures of merit for the ASIC synthesis were: area ($A$) in~$\mathrm{mm^2}$, area-time complexity ($AT$) in $\mathrm{mm}^2 \cdot \mathrm{ns}$, area-time-squared complexity ($AT^2$) in $\mathrm{mm}^2 \cdot \mathrm{ns}^2$, frequency normalized dynamic power ($D_p$, in $\mathrm{mW}/\mathrm{MHz}$), critical path delay ($T_{cpd}$) in~$\mathrm{ns}$, and maximum operating frequency ($F_{\text{max}}$) in~GHz. ASIC synthesis results for the proposed pruned MRDCT (highlighted in green), pruned version of the BAS-2008 (highlighted in blue) and the pruned BAS-2013 algorithm are displayed in Table~\ref{asic}. \begin{table} \centering \caption{Resource consumption for CMOS 45\,nm ASIC synthesis} \label{asic} \begin{tabular}{lcccccc} \toprule $K$ & Area & AT & ${AT}^2$ & $T_\text{cpd}$ & $F_\text{max}$\! & $D_p$\! \\ \midrule \multirow{3}{*}{1} & 0.011\cellcolor{green!50} & 0.011\cellcolor{green!50} & 0.010\cellcolor{green!50} & 0.961\cellcolor{green!50} & 1.040\cellcolor{green!50} & 0.018\cellcolor{green!50} \cellcolor{green!50} \\ & 0.011\cellcolor{blue!50} & 0.011\cellcolor{blue!50} & 0.010\cellcolor{blue!50} & 0.961\cellcolor{blue!50} & 1.040\cellcolor{blue!50} & 0.018\cellcolor{blue!50} \cellcolor{blue!50} \\ & 0.011 & 0.011 & 0.010 & 0.961 & 1.040 & 0.018 \\ \hline \multirow{3}{*}{2} & 0.017\cellcolor{green!50} & 0.016\cellcolor{green!50} & 0.015\cellcolor{green!50} & 0.962\cellcolor{green!50} & 1.039\cellcolor{green!50} & 0.028\cellcolor{green!50} \cellcolor{green!50} \\ & 0.021\cellcolor{blue!50} & 0.020\cellcolor{blue!50} & 0.020\cellcolor{blue!50} & 0.980\cellcolor{blue!50} & 1.020\cellcolor{blue!50} & 0.035\cellcolor{blue!50} \cellcolor{blue!50} \\ & 0.022 & 0.022 & 0.022 & 0.995 & 1.005 & 0.036 \\ \hline \multirow{3}{*}{3} & 0.022\cellcolor{green!50} & 0.021\cellcolor{green!50} & 0.020\cellcolor{green!50} & 
0.963\cellcolor{green!50} & 1.038\cellcolor{green!50} & 0.038\cellcolor{green!50} \cellcolor{green!50} \\ & 0.031\cellcolor{blue!50} & 0.030\cellcolor{blue!50} & 0.030\cellcolor{blue!50} & 0.990\cellcolor{blue!50} & 1.010\cellcolor{blue!50} & 0.051\cellcolor{blue!50} \cellcolor{blue!50} \\ & 0.029 & 0.028 & 0.027 & 0.981 & 1.019 & 0.047 \\ \hline \multirow{3}{*}{4} & 0.027\cellcolor{green!50} & 0.027\cellcolor{green!50} & 0.026\cellcolor{green!50} & 0.970\cellcolor{green!50} & 1.030\cellcolor{green!50} & 0.047\cellcolor{green!50} \cellcolor{green!50} \\ & 0.037\cellcolor{blue!50} & 0.037\cellcolor{blue!50} & 0.038\cellcolor{blue!50} & 1.016\cellcolor{blue!50} & 0.984\cellcolor{blue!50} & 0.063\cellcolor{blue!50} \cellcolor{blue!50} \\ & 0.037 & 0.036 & 0.036 & 0.997 & 1.003 & 0.059 \\ \hline \multirow{3}{*}{5} & 0.032\cellcolor{green!50} & 0.034\cellcolor{green!50} & 0.037\cellcolor{green!50} & 1.075\cellcolor{green!50} & 0.930\cellcolor{green!50} & 0.057\cellcolor{green!50} \cellcolor{green!50} \\ & 0.042\cellcolor{blue!50} & 0.042\cellcolor{blue!50} & 0.043\cellcolor{blue!50} & 1.011\cellcolor{blue!50} & 0.989\cellcolor{blue!50} & 0.069\cellcolor{blue!50} \cellcolor{blue!50} \\ & 0.041 & 0.041 & 0.041 & 1.007 & 0.993 & 0.068 \\ \hline \multirow{3}{*}{6} & 0.038\cellcolor{green!50} & 0.038\cellcolor{green!50} & 0.037\cellcolor{green!50} & 0.995\cellcolor{green!50} & 1.005\cellcolor{green!50} & 0.067\cellcolor{green!50} \cellcolor{green!50} \\ & 0.048\cellcolor{blue!50} & 0.048\cellcolor{blue!50} & 0.048\cellcolor{blue!50} & 1.000\cellcolor{blue!50} & 1.000\cellcolor{blue!50} & 0.081\cellcolor{blue!50} \cellcolor{blue!50} \\ & 0.046 & 0.046 & 0.046 & 1.008 & 0.992 & 0.077 \\ \hline \multirow{3}{*}{7} & 0.043\cellcolor{green!50} & 0.047\cellcolor{green!50} & 0.051\cellcolor{green!50} & 1.085\cellcolor{green!50} & 0.921\cellcolor{green!50} & 0.079\cellcolor{green!50} \cellcolor{green!50} \\ & 0.053\cellcolor{blue!50} & 0.053\cellcolor{blue!50} & 
0.054\cellcolor{blue!50} & 1.014\cellcolor{blue!50} & 0.986\cellcolor{blue!50} & 0.091\cellcolor{blue!50} \cellcolor{blue!50} \\ & 0.051 & 0.054 & 0.057 & 1.050 & 0.952 & 0.087 \\ \hline \multirow{3}{*}{8} & 0.046\cellcolor{green!50} & 0.051\cellcolor{green!50} & 0.057\cellcolor{green!50} & 1.103\cellcolor{green!50} & 0.906\cellcolor{green!50} & 0.084\cellcolor{green!50} \cellcolor{green!50} \\ & 0.060\cellcolor{blue!50} & 0.062\cellcolor{blue!50} & 0.065\cellcolor{blue!50} & 1.047\cellcolor{blue!50} & 0.955\cellcolor{blue!50} & 0.104\cellcolor{blue!50} \cellcolor{blue!50} \\ & 0.057 & 0.057 & 0.058 & 1.008 & 0.992 & 0.097 \\ \bottomrule \end{tabular} \end{table} \subsection{Discussion}% The FPGA realization of the proposed pruned MRDCT showed drastic reductions in both area (measured from the number of CLBs) and frequency normalized dynamic power consumption, compared to the full MRDCT. Table~\ref{fpga-asic} shows the percentage reduction of area and frequency-normalized dynamic power for both the FPGA implementation and the CMOS synthesis for different pruning values. All metrics indicate lower hardware resource consumption as the number of outputs is reduced from 8 to 1. In particular, for $K=6$, which minimizes the discussed cost function (cf.~\eqref{eq:cost}), we notice a power consumption reduction of approximately 20--25\%. In order to compare the hardware resource consumption of the introduced pruned DCT approximation with competing transforms, we physically realized the pruned BAS-2013 algorithm~\cite{bas2013} and the pruned BAS-2008 algorithm~\cite{bas2008} on the same Xilinx Virtex-6 XC6VLX240T-1FFG1156 device and submitted them to synthesis using ASIC 45~nm CMOS technology. By comparing the results in Tables~\ref{fpga} and~\ref{asic}, it can be seen that the transform proposed here outperforms both the pruned BAS-2008 and the pruned BAS-2013 in terms of hardware resource and power consumption, while being on par with them in terms of speed.
\begin{table} \centering \caption{Percentage reduction in area and dynamic power for the FPGA and ASIC realizations} \label{fpga-asic} \begin{tabular}{lcc | cc} \toprule & \multicolumn{2}{ c| }{FPGA} & \multicolumn{2}{ c }{ASIC} \\ \midrule $K$ & Area \% & $D_{p}$ \% & Area \% & $D_{p}$ \% \\ \midrule 1 & 71.65 & 83.11 & 75.32 & 76.66 \\ 2 & 54.59 & 72.29 & 64.93 & 66.00 \\ 3 & 44.88 & 62.33 & 53.24 & 54.66 \\ 4 & 30.18 & 51.94 & 41.55 & 43.33 \\ 5 & 19.16 & 34.63 & 32.46 & 34.00 \\ 6 & 3.14 & 20.77 & 23.37 & 24.66 \\ 7 & 1.57 & 12.12 & 10.38 & 12.66 \\ \bottomrule \end{tabular} \end{table} \section{Conclusion} \label{sec:conclusion} In this paper, we presented a set of 8-point pruned DCT approximations derived from state-of-the-art methods. All possible frequency-domain pruning schemes were considered and analyzed in terms of arithmetic complexity, energy compaction in the transform domain, and image compression performance. A new combined metric was defined considering the \mbox{2-D} arithmetic complexity and the average values of PSNR and SSIM. The pruned transform based on the MRDCT presented the lowest arithmetic complexity and showed competitive performance. Thus, the pruned MRDCT approximations were digitally implemented using both Xilinx FPGA tools and CMOS 45~nm ASIC technology. The proposed pruned transforms demonstrated practical relevance in image/video compression. The proposed algorithms are fully compatible with modern codecs. We have embedded the proposed methods into a standard HEVC reference software~\cite{hm_software}. Results presented very low qualitative and quantitative degradation at a considerably lower computational cost. Additionally, low-complexity designs are required in several contexts where very high quality imagery is not a strong requirement, such as: environmental monitoring, habitat monitoring, surveillance, structural monitoring, equipment diagnostics, disaster management, and emergency response~\cite{Kimura2005}.
All above contexts can benefit from the proposed tools when embedded into wireless sensors with low-complexity codecs and low-power hardware~\cite{WMSN2007survey}. We summarize the contributions of the present work: \begin{itemize} \item The pruning approach for DCT approximations was generalized by not only considering all possible pruning variations but also investigating a wide range of DCT approximations; \item An analysis covering all cases under different figures of merit, including arithmetic complexity and image quality measures, was presented; \item A combined figure of merit to guide the decision-making process in terms of hardware realization was introduced; \item The \mbox{2-D} case was also analyzed, leading to the conclusion that the pruning approach is even better suited for \mbox{2-D} transforms; \item The considered pruned DCT approximation was implemented using Xilinx FPGA tools and synthesized using CMOS 45~nm ASIC technology. Such implementations demonstrated the low resource consumption of the proposed pruned transform. \end{itemize} \section*{Acknowledgements} This work was partially supported by CNPq, FACEPE, and FAPERGS (Brazil), and by the College of Engineering at the University of Akron, Akron, OH, USA. {\small \bibliographystyle{IEEEtran}
\section{Introduction} Electric charges are ubiquitous in the colloidal domain, and often a major player shaping the behaviour of soft matter systems. Counter-intuitive phenomena often ensue, such as overcharging (charge inversion) or effective attraction between like-charged macro-ions \cite{GBPP00,HL00,GNS02,L02,L05,M09,NKNP10}. To rationalize such observations that are the fingerprints of correlation effects, simplified models are welcome, that should furthermore be treated beyond the mean-field level \cite{T00}. Interestingly, the physics of strongly coupled charged systems has witnessed relevant progress in the last 15 years \cite{RB96,S99,L02,MN02,BKNN05,ST11}, while the study in the weak coupling limit where mean-field arguments hold, started about 100 years ago \cite{G10,C13}. The study of intermediate Coulombic couplings, though, appears more elusive \cite{BAO04,CW06,S06,BMP10} and will be the focus of our interest in the present paper. The system under scrutiny here is a variant of Thomson's plum pudding model (see \cite{T1904,BP94,CCRT11} and references therein), also referred to as the One Component Plasma \cite{DL74,J81,L02}. Point particles with charge $q$ are embedded in a two-dimensional flat disk $\mathcal{D}$ of radius $R$. In addition, a uniformly charged background is present in the disk region (see Fig. \ref{fig:OCPdisk-charge}). While the charged background is fixed, the particles are free to move in $\mathcal{D}$. They interact through a log potential, the form taken by Coulomb law in two dimensions. The relevant coupling parameter is $\Gamma = \beta q^2$, where $\beta$ is the inverse temperature. At small $\Gamma$ (formally $\Gamma \to 0$), the Poisson-Boltzmann mean-field description holds \cite{Hunter} \footnote{It is straightforward to check that in the globally neutral case $N=N_b$, the mean-field solution is trivial, with a vanishing electrostatic potential, and a particle density that compensates for that of the background. 
This is a consequence of the confinement in $\mathcal{D}$ imposed to the charges. If the mobile charges are allowed to leave the uniformly charged disk and explore the whole 2D plane, the mean-field solution becomes non trivial --the constant electrostatic potential can by no means provide a solution to the problem-- and has been studied in \cite{WB57,CMTR09}. We come back to this modified ``unbounded'' model in our concluding section.}. \begin{figure}[htb] \centering \includegraphics[width=5cm]{OCPdisk-charge.eps} \caption{\label{fig:OCPdisk-charge} Definition of the system under consideration: a disk $\mathcal{D}$ with fixed and uniform background charge (hatched area), in which $N$ mobile oppositely charged particles, shown by the bullets, are free to move ($N=4$ in the figure). The total charge of the background is $-N_b q$, while each mobile ion bears a charge $q$. The total charge on the disk (background plus free ions) is therefore $(N-N_b)q$. A test ion with charge $Q$ approaches the disk along the symmetry axis shown, defining $x$-coordinate ($x=0$ when the test charge lies on the disk, at the center.)} \end{figure} As is often the case for 2D Coulombic problems, the coupling parameter $\Gamma=2$ lends itself to an exact analytical treatment, see Refs \cite{J81,AJ81,S04}. The goal is here to extend the exact analysis at $\Gamma=2$ to investigate the interactions between the disk bearing the mobile charges, and a test charge $Q$ that is approached perpendicularly to the disk, along the axis of symmetry (see Fig. \ref{fig:OCPdisk-charge}). We shall assume that $Q$ and all other charges (mobile + background) interact through a log potential. Since $Q$ explores a third additional direction compared to those in which the point $q$-charges and the background disk are confined, the choice of such a potential can be questioned: it does not correspond to the solution of Poisson's equation in three dimensions. 
This is however the price for obtaining analytical results, that shed light on phenomena at work in more realistic systems. In particular, we will be interested in the effective interactions between the fixed charge $Q$ and other charges, that can be seen as mimicking a colloid (the uniform background), dressed by a double-layer of counterions (the mobile $q$-charges). The model will be defined in section \ref{sec:general}, where the theoretical tools will also be introduced. In section \ref{sec:long-distance}, the features of the effective potential at large distances will be addressed. While most of the present analysis pertains to the $\Gamma=2$ case, other couplings will be addressed (namely $\Gamma=4$ and 6, corresponding to smaller temperatures). Then, the emphasis will be in section \ref{sec:short} on short range correlations that, through polarization effects, rule the short distance behaviour of the effective potential. Conclusions will be drawn in section \ref{sec:concl}. \section{Model and general formalism} \label{sec:general} The system is a one-component plasma~\cite{DL74,J81,AJ81,ZW06} on a disk $\mathcal{D}$ of radius $R$ with $N$ mobile point charges $q$, and a fixed background charge density $\rho_b=-q n_b$. The system can be globally charged, since $N_b = \pi R^2 n_b$ can be different from $N$. The charged particles interact with the two-dimensional logarithmic Coulomb potential, \begin{equation} \label{eq:vCoul} v_{c}(\r_i,\r_j)=-\ln \frac{|\r_i-\r_j|}{\ell} \,, \end{equation} for two particles located at $\r_i$ and $\r_j$ on the disk ($\ell$ is an arbitrary length). The interaction potential between a charge $q$ located at $\r$ and the background consequently reads \begin{eqnarray} v_b(r)&=&\int_{\mathcal{D}} q \rho_b v_{c}(\r,\r')\,d^2\r' \nonumber\\ &=& \frac{\pi n_b q^2}{2} \left[ r^2-R^2 \left(1-2\ln\frac{R}{\ell}\right) \right] \,, \label{eq:vb} \end{eqnarray} where $r=|\r|$. 
The disk is in thermal equilibrium with a heat bath at an inverse temperature $\beta=1/k_B T$. We consider now that a particle with charge $Q$ approaches the disk from the axis normal to the disk that passes through its center, see figure~\ref{fig:OCPdisk-charge}. The charge $Q$ is held fixed at a distance $x$ from the disk. The interaction potential between this charged particle and a charge from the disk located at $\r$, will also be taken logarithmic: \begin{equation} \label{eq:vQp} v_Q(x,r)=-\ln\frac{\sqrt{x^2+r^2}}{\ell} \,. \end{equation} It will be convenient to use rescaled lengths $\widetilde{r}=\sqrt{\pi n_b} r$, $\widetilde{x}=\sqrt{\pi n_b} x$, $\widetilde{\ell}=\sqrt{\pi n_b} \ell$, etc. With such a choice, the rescaled disk radius is $\widetilde{R} = \sqrt{N_b}$. The interaction potential between the background and the approaching particle is \begin{eqnarray} \label{eq:VQb} V_{Qb}(x)&=&\int_{\mathcal{D}} Q\rho_b v_Q(x,r) \, d^2\r \nonumber\\ &=& \frac{Qq}{2} \left[ (N_b+\widetilde{x}^2)\ln (N_b+\widetilde{x}^2) -N_b -\widetilde{x}^2\ln \widetilde{x}^2 \right] \,, \end{eqnarray} where we have chosen the arbitrary constant $\ell$ such that $\widetilde{\ell}=1$. \subsection{The special coupling $\Gamma=2$} Since the one-component plasma on the disk is two-dimensional with log-potential, one can use the special techniques developed for two-dimensional Coulomb systems~\cite{J81, AJ81} and random matrices~\cite{Mehta} to compute exactly the effective interaction potential between the disk and the approaching charge, for a special value of the Coulomb coupling $\Gamma=\beta q^2=2$. Let $\r_i$ be the position of the $i$-th particle on the disk, in polar coordinates $\r_i=(r_i,\varphi_i)$. It is convenient to define $z_i=r_i e^{i\varphi_i}$ and $\widetilde{z}_i=\widetilde{r}_i e^{i\varphi_i}$. 
The total potential energy of the system can be written, up to an irrelevant constant \begin{equation} \label{eq:H} H=Qq \sum_{i=1}^{N} v_{Q}(x,r_i)+ V_{Qb}(x) +\frac{q^2}{2}\sum_{i=1}^{N} \widetilde{r}_i^2 \, - q^2 \!\sum_{1\leq i< j \leq N} \ln|\widetilde{z}_{i}-\widetilde{z}_{j}| , \end{equation} where the first two terms on the right hand side account for the test charge - mobile charge and test charge - background interactions respectively, while the last two terms are for the mobile charge - background and mobile charge - mobile charge energies. When $\beta q^2=2$, up to a multiplicative constant, the Boltzmann factor reads \begin{equation} \label{eq:ebetaH} e^{-\beta H}= e^{-\beta V_{Qb}(x)} \prod_{i=1}^{N} e^{-2 \frac{Q}{q} v_{Q}(x,r_i)-\widetilde{r}_i^2} \prod_{1\leq i<j\leq N} |\widetilde{z}_i-\widetilde{z}_j|^2 \,. \end{equation} The product $\prod_{1\leq i<j\leq N} (\widetilde{z}_i-\widetilde{z}_j)$ is a Vandermonde determinant $\det(\widetilde{z}_{i}^{j-1})$. Defining \begin{equation} \label{eq:psi} \psi_{j}(\r)= e^{- \frac{Q}{q} v_{Q}(x,r)-\frac{\widetilde{r}^2}{2}} \widetilde{z}^{j}\,, \end{equation} the Boltzmann factor can be written as \begin{equation} \label{eq:ebetaH2} e^{-\beta H}= e^{-\beta V_{Qb}(x)} \left| \det\left(\psi_{j-1}(\r_i)\right)_{1\leq i,j \leq N} \right|^2 \,. \end{equation} The functions $\psi_{j}$ are orthogonal \begin{equation} \label{eq:ortho} \int_{\mathcal{D}} \overline{\psi_{j}(\r)}\psi_{k}(\r) \,d^2\r=0\,\qquad \text{if } j\neq k \,, \end{equation} with norm \begin{equation} \label{eq:norm} \Vert\psi_j\Vert^2=\int_{\mathcal{D}} |\psi_j(\r)|^2\,d^2\r =\frac{1}{n_b}\int_{0}^{N_b} t^j (\widetilde{x}^2+t)^{Q/q} e^{-t}\,dt \,. \end{equation} If $Q/q$ is a positive integer, this can be expressed in terms of incomplete gamma functions $\gamma(k,N_b)=\int_0^{N_b} t^{k-1} e^{-t}\,dt$. 
For instance, when $Q=q$, \begin{equation} \label{eq:normgamma} \Vert\psi_j\Vert^2=\frac{1}{n_b} \left[ \widetilde{x}^2 \gamma(j+1,N_b)+ \gamma(j+2,N_b) \right] \,. \end{equation} The configurational canonical partition function is \begin{eqnarray} Z&=&\frac{1}{N!} \int_{\mathcal{D}^N} e^{-\beta H} \,\prod_{i=1}^N d^2\r_i \nonumber\\ &=& \frac{1}{N!} e^{-\beta V_{Qb}(x)} \int_{\mathcal{D}^N} \left| \det\left(\psi_{j-1}(\r_i)\right)_{1\leq i,j \leq N} \right|^2 \,\prod_{i=1}^N d^2\r_i \,. \label{eq:Z} \end{eqnarray} If the determinant is explicitly expanded and the integrals performed, the result simplifies~\cite{AJ81,Mehta}, due to the orthogonality of the functions $\psi_j$: \begin{equation} \label{eq:Zres} Z=e^{-\beta V_{Qb}(x)} \prod_{j=0}^{N-1} \Vert \psi_j \Vert^2 \,. \end{equation} Up to an additive constant, the effective interaction potential, $V_{\text{eff}}(x)$, between the disk and the approaching charge $Q$ is given by \cite{B00} $e^{-\beta V_{\text{eff}}(x)} \propto Z$; more specifically, we choose \begin{equation} e^{-\beta V_{\text{eff}}(x)} \, = \, \frac{Z}{Z_0}\,, \end{equation} where $Z_0$ is the $x$-independent partition function when $Q=0$. The above definition ensures that for $N=N_b$, $V_{\text{eff}}(x) \to 0$ when $x\to \infty$. On the other hand, for $N\neq N_b$, $V_{\text{eff}}(x)$ diverges for $x\to \infty$, see below. The physical meaning of $V_{\text{eff}}$ is clear: $-\partial V_{\text{eff}}(x) / \partial x$ is the mean force experienced by $Q$, averaged over all possible fluctuations of the charge configurations on the disk. The function $V_{\text{eff}}$ is precisely the free energy of the system, for a given distance $x$ between the test charge and the disk.
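As a sanity check on the ingredients entering Eq.~(\ref{eq:Zres}), the closed form (\ref{eq:normgamma}) for $Q=q$ can be verified against a direct quadrature of the integral in Eq.~(\ref{eq:norm}). A minimal sketch (our illustration, with the overall factor $1/n_b$ scaled out):

```python
import math

def lig(k, x):
    # Lower incomplete gamma gamma(k, x) for positive integer k:
    # gamma(k, x) = (k-1)! * (1 - e^{-x} * sum_{m<k} x^m / m!)
    s = sum(x**m / math.factorial(m) for m in range(k))
    return math.factorial(k - 1) * (1.0 - math.exp(-x) * s)

def norm2_closed(j, Nb, xt):
    # n_b * ||psi_j||^2 for Q = q, Eq. (normgamma)
    return xt * xt * lig(j + 1, Nb) + lig(j + 2, Nb)

def norm2_quad(j, Nb, xt, n=100000):
    # n_b * ||psi_j||^2 by midpoint quadrature of Eq. (norm) with Q/q = 1
    h = Nb / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += t**j * (xt * xt + t) * math.exp(-t) * h
    return total

Nb, xt = 12, 1.7
for j in range(5):
    print(j, norm2_closed(j, Nb, xt), norm2_quad(j, Nb, xt))
```

The same closed-form routine for $\gamma(k,N_b)$ is reused in the numerical checks further below.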
Combining the above results, \begin{eqnarray} \beta V_{\text{eff}}(x)&=& \frac{Q}{q} \left[ (N_b+\widetilde{x}^2)\ln (N_b+\widetilde{x}^2) -N_b -\widetilde{x}^2\ln \widetilde{x}^2 \right] \nonumber\\ && -\sum_{j=1}^{N} \left[ \ln \int_{0}^{N_b} t^{j-1} (\widetilde{x}^2+t)^{Q/q} e^{-t}\,dt -\ln \gamma(j,N_b) \right] \,. \label{eq:Veff} \end{eqnarray} In the special case $Q=q$, we obtain \begin{eqnarray} \beta V_{\text{eff}}(x)&=& (N_b+\widetilde{x}^2)\ln (N_b+\widetilde{x}^2) -N_b -\widetilde{x}^2\ln \widetilde{x}^2 \nonumber\\ && -\sum_{j=1}^{N} \ln \left[ \widetilde{x}^2 + \frac{\gamma(j+1,N_b)}{\gamma(j,N_b)} \right] \,. \label{eq:VeffQ=q} \end{eqnarray} The density profile $n(r)$ on the disk can also be obtained explicitly~\cite{J81, Mehta}, and will be discussed in some detail below: \begin{eqnarray} \label{eq:n} n(r)&=& \sum_{j=0}^{N-1} \frac{|\psi_{j}(\r)|^2}{\Vert \psi_j \Vert^2} \nonumber\\ &=& n_b \sum_{j=0}^{N-1} \frac{\widetilde{r}^{2j}(\widetilde{x}^2+\widetilde{r}^2)^{Q/q}\, e^{-\widetilde{r}^2}}{\int_{0}^{N_b} t^{j} (\widetilde{x}^2+t)^{Q/q}\, e^{-t}\,dt}\,. \end{eqnarray} It can be checked that the two situations $Q=0$ and $x\to \infty$ are equivalent, since both decouple the test charge from the charges on the disk. \subsection{Arbitrary even coupling parameters} \label{ssec:arb} For coupling parameters $\Gamma=\beta q^2=2\gamma$, with $\gamma$ an integer, the partition function of the system, and the effective potential, can be computed for a small enough number of particles $N$, by using a method developed in~\cite{SPK94, TF99}, based on techniques used in the study of the quantum Hall effect~\cite{dFGIL94, D94, STW94}. We provide here some details on the method. Up to a multiplicative constant, the Boltzmann factor of the system reads \begin{equation} \label{eq:ebetaH-gamma} e^{-\beta H}= e^{-\beta V_{Qb}(x)} \left| \det\left(\psi_{j-1}(\r_i)\right)_{1\leq i,j \leq N} \right|^{2\gamma} \,.
\end{equation} where, now, the orthogonal functions $\psi_{k}$ are \begin{equation} \psi_{k}(\r)=[w(r)]^{1/2} \tilde{z}^k \end{equation} with \begin{equation} w(r)=e^{-2\gamma\frac{Q}{q} v_{Q}(x,r)-\gamma \tilde{r}^2} \,. \end{equation} The key idea to compute the partition function is to expand $[\det(\tilde{z}_{k}^{j-1})]^\gamma$ in terms of appropriate orthogonal polynomials~\cite{TF99}. For $\gamma$ even, the expansion is in terms of symmetric monomials, whereas for $\gamma$ odd, it is in terms of antisymmetric polynomials. The coefficients of the expansion are conveniently indexed by a partition $\mu=(\mu_1,\cdots, \mu_N)$ of $\gamma N(N-1)/2$; for example, for $\gamma$ even, \begin{equation} [\det(\tilde{z}_{k}^{j-1})]^\gamma = \sum_{\mu} c_{\mu}\, \text{Sym}(z_1^{\mu_1}\ldots z_{N}^{\mu_N}) \end{equation} with the symmetric monomial \begin{equation} \text{Sym}(z_1^{\mu_1}\ldots z_{N}^{\mu_N}) = \frac{1}{\prod_i m_i!} \sum_{\sigma \in S_{N}} z_{\sigma(1)}^{\mu_1} \ldots z_{\sigma(N)}^{\mu_N} \end{equation} where $S_{N}$ is the permutation group of $N$ elements and $m_{i}$ is the multiplicity of the integer $i$ in the partition $\mu$. A similar expression is used for $\gamma$ odd, with antisymmetrized monomials. Due to the orthogonality of the (anti)symmetric monomials, the partition function is finally obtained as an expansion similar to that of the power $\gamma$ of the Vandermonde determinant, see~\cite{TF99} for details. The final expression for the effective potential is \begin{equation} \label{eq:Veff-anygamma} \beta V_{\text{eff}}(x)=\beta V_{Qb}(x)-\ln \frac{Z^{*}}{Z_0^{*}} \end{equation} with \begin{equation} Z^{*}=\sum_{\mu} \frac{c_{\mu}^2}{\prod_{i} m_i!}\, \prod_{k=1}^{N} ||\psi_{\mu_k}||^{2} \,, \end{equation} \begin{equation} ||\psi_{j}||^{2}=\int_{\cal D} w(r)\, \widetilde{r}^{2j}\, d^2\r= \frac{1}{n_b} \int_{0}^{N_b} e^{-\gamma t}(\tilde{x}^2+t)^{\gamma Q/q} t^{j}\,dt \,, \end{equation} and $Z_0^{*}$ is $Z^{*}$ evaluated at $Q=0$.
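For small $N$, the coefficients $c_{\mu}$ can be generated by brute force: expand the power of the Vandermonde determinant as a multivariate polynomial and read off the coefficients of the monomials whose exponents are ordered decreasingly, since each symmetric monomial contains that ordered term exactly once with unit coefficient. The sketch below (our didactic illustration for $\gamma$ even; for larger $N$ the dedicated algorithms of~\cite{BR09} are needed) does this for $\gamma=2$:

```python
import itertools

def vandermonde_power(N, gamma):
    # Expand [prod_{i<j} (z_i - z_j)]^gamma as a dict mapping exponent
    # tuples (e_1, ..., e_N) to integer coefficients.
    poly = {(0,) * N: 1}
    for _ in range(gamma):
        for i, j in itertools.combinations(range(N), 2):
            new = {}
            for expo, c in poly.items():
                for idx, coeff in ((i, c), (j, -c)):   # multiply by (z_i - z_j)
                    e = list(expo)
                    e[idx] += 1
                    e = tuple(e)
                    new[e] = new.get(e, 0) + coeff
            poly = {e: c for e, c in new.items() if c != 0}
    return poly

def c_mu(N, gamma):
    # Keep only monomials with decreasing exponents: their coefficients
    # are the expansion coefficients c_mu over symmetric monomials.
    return {e: c for e, c in vandermonde_power(N, gamma).items()
            if list(e) == sorted(e, reverse=True)}

# N = 2, gamma = 2: (z1 - z2)^2 = z1^2 - 2 z1 z2 + z2^2,
# i.e. c_(2,0) = 1 and c_(1,1) = -2.
print(c_mu(2, 2))

# Self-check: re-sum the full expansion at a sample point.
z = (0.3 + 0.7j, -1.1 + 0.2j, 0.5 - 0.9j)
direct = 1.0
for i, j in itertools.combinations(range(3), 2):
    direct *= (z[i] - z[j]) ** 2
resummed = sum(c * z[0]**e[0] * z[1]**e[1] * z[2]**e[2]
               for e, c in vandermonde_power(3, 2).items())
print(abs(direct - resummed))   # round-off, essentially zero
```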
In the case when $\gamma$ is odd, the factor $\prod_{i} m_i!=1$, since, due to the antisymmetry, the admissible partitions $\mu$ have no repeated parts. This method can equivalently be formulated by transforming the classical problem of the one-component plasma into a quantum problem of a linear chain of interacting fermions, as explained in Refs.~\cite{SP95, S04}. The starting point for this method is to write the Vandermonde determinant as a Gaussian integral over Grassmann variables. The final result is again~(\ref{eq:Veff-anygamma}). For the present work, we did some calculations up to $N=11$ particles. The coefficients $c_{\mu}$ needed for the numerical calculations were kindly provided by L. \v{S}amaj for $\gamma=2$ up to $N=10$ and for $\gamma=3$ up to $N=9$. For $\gamma=2$ and $N=11$, and for $\gamma=3$ and $N=10$ and $N=11$, we obtained the coefficients using the algorithm recently proposed by Bernevig and Regnault~\cite{BR09}, and their Jack polynomial generator online code~\cite{jack}. We now turn to the results obtained from the previous analysis, starting with the effective potential experienced by the test charge $Q$ at large distances from the disk. \section{Long distance behavior} \label{sec:long-distance} \subsection{General results at arbitrary couplings} As mentioned earlier, the effective interaction potential, also known as the potential of mean force, $V_{\text{eff}}(x)$, has the property that $-\nabla V_{\text{eff}}$ is the mean force experienced by $Q$. It is interesting to introduce another quantity, $V(x)$, the electric potential created by the average charge density distribution $q (n(r)-n_b)$ at the position of the charge $Q$: \begin{equation} \label{eq:defV} V(x)=\int_{\cal D} q(n(r)-n_b) v_{Q}(x,r)\,d^2\r \,. \end{equation} Because of the fluctuations, and because the presence of $Q$ at position $x$ modifies the density on the disk, in general $V_{\text{eff}}(x)\neq Q V(x)$; the equality holds only for an infinitesimal charge $Q$.
For arbitrary $Q$, a simple relation can be found between the two. Noticing that the total potential energy of the system~(\ref{eq:H}) depends linearly on $Q$, we have \begin{equation} \frac{\partial e^{-\beta H}}{\partial Q} =-\beta \int_{\cal D} q(\widehat{n}(r;\r_1,\ldots,\r_N)-n_b) v_{Q}(x,r)\,d^2\r \ e^{-\beta H} \end{equation} where $\widehat{n}(r;\r_1,\ldots,\r_N)=\sum_{i=1}^{N}\delta(\r-\r_i)$ is the microscopic density. Averaging this relation over all the configurations of the ions on the disk, we find \begin{equation} \label{eq:V-Veff} \frac{\partial V_{\text{eff}}(x)}{\partial Q} = V(x) \,. \end{equation} At large distances from the disk, expanding $v_{Q}(x,r)$ for $r \ll x$, one obtains the multipolar expansion of the electric potential $V$: \begin{equation} V(x)=-q (N-N_b) \ln \widetilde{x} - q \frac{\mathbb{Q}_{2}}{2 x^2} + O(1/\widetilde{x}^4) \label{eq:multipol} \end{equation} where the relevant quadrupole moment $\mathbb{Q}_{2}$ is the second moment of the excess density $[n^{0}(r)-n_b]$: \begin{equation} \mathbb{Q}_{2}\, = \, \int_\mathcal{D} r^2 [n^{0}(r)-n_b] \, d^2\r \end{equation} where $n^{0}(r)$ is the density when $x\to\infty$ (or equivalently $Q=0$). Since we are computing the potential on the $x$-axis, no dipolar contribution remains, while the logarithmic monopole contribution stems from the global charge of the disk, $q(N-N_b)$. Since, up to terms of order higher than $1/x^2$, Eq.~(\ref{eq:multipol}) shows that $V(x)$ does not depend on $Q$, integrating Eq.~(\ref{eq:V-Veff}) one finds that Eq.~(\ref{eq:multipol}), multiplied by $Q$, also gives the large-$x$ expansion of $V_{\text{eff}}$. We also mention a sum rule that will be useful in the following discussion.
The quadrupolar moment $\mathbb{Q}_{2}$ can be shown to be related to the mobile particle density at contact through~\cite{CFG80,TF99} \begin{equation} \Gamma \,\frac{n_b}{2 R^2} \,\mathbb{Q}_{2} \,=\, n_b \left(1-\frac{\Gamma}{4} \right) - n^{0}(R) \label{eq:sumquad} \end{equation} for a neutral disk. For the next order of the expansion, in $1/x^4$, the situation is more involved. The expansion of $V(x)$ cannot be obtained from the next multipole $\mathbb{Q}_4=\int_{\cal D} r^4 (n^0(r)-n_b)\,d^2\r$ alone, since the density $n(r)$ itself depends on $x$, and one needs to take into account the next-to-leading order of the large-$x$ expansion of $n(r)$ to compute the expansion of $V(x)$ properly up to order $1/x^4$. This next-to-leading order is of order $1/x^2$ and is proportional to $Q$, as can be checked by expanding $e^{-\beta H}$ for large $x$. It has the form \begin{equation} n(r)=n^{0}(r)+\frac{\beta Q q d_2(r)}{x^2} + O(1/\widetilde{x}^4) \end{equation} where $d_2(r)$ is a function of $r$ and $\beta q^2$ only. Using this expansion, one can obtain from Eq.~(\ref{eq:defV}) the expansion of $V(x)$ up to order $1/x^4$: \begin{equation} V(x)=-q (N-N_b) \ln \widetilde{x} - q \frac{\mathbb{Q}_{2}}{2 x^2} + q\left (\frac{\mathbb{Q}_4}{4} - \frac{\beta qQ}{2} \int_{\cal D} r^2 d_2(r) \,d^2\r \right) \frac{1}{x^4} + O(1/x^6) \,. \label{eq:multipol4} \end{equation} Then, integrating with respect to $Q$, one finds \begin{equation} V_{\text{eff}}(x)=-q Q (N-N_b) \ln \widetilde{x} - q Q \frac{\mathbb{Q}_{2}}{2 x^2} + qQ \left (\frac{\mathbb{Q}_4}{4} - \frac{\beta qQ}{4} \int_{\cal D} r^2 d_2(r) \,d^2\r \right) \frac{1}{x^4} + O(1/x^6) \,. \end{equation} The term involving the second moment of $d_2$ differs by a factor $Q/2$ between $V(x)$ and $V_{\text{eff}}(x)$. In the following section we illustrate these considerations with the explicit results obtained at $\Gamma=\beta q^2=2$.
\subsection{Results at $\Gamma=2$} Unless otherwise specified, the results reported correspond to $\Gamma=2$. For arbitrary $Q$, the long distance behavior of the effective interaction~(\ref{eq:Veff}), for $N$ and $N_b$ fixed, $\widetilde{x}^2\gg N$ and $\widetilde{x}^2\gg N_b$, is \begin{eqnarray} \label{eq:Veffasympttmp} V_{\text{eff}}(x)&=&-Q q (N-N_b) \ln \widetilde{x} + \frac{Q q}{2}\frac{1}{\widetilde{x}^2} \left[\frac{N_b^2}{2}-\sum_{j=1}^{N} \frac{\gamma(j+1,N_b)}{\gamma(j,N_b)}\right] \\ &&-\frac{Q}{4 \widetilde{x}^4} \sum_{j=1}^N \left[ \left(Q-q\right)\frac{\gamma(j+2,N_b)}{\gamma(j,N_b)} - Q\frac{\gamma(j+1,N_b)^2}{\gamma(j,N_b)^2} \right] +O(1/\widetilde{x}^6)\, , \nonumber \end{eqnarray} the structure of which deserves some comments. Up to order $1/x^2$, such a series has the form of a multipolar expansion, in agreement with the discussion of the previous section. Indeed, the coefficient of $1/\widetilde{x}^2$ is precisely $-qQ\mathbb{Q}_{2}/2$, as can be checked by computing the second moment of the excess density from the explicit expression~(\ref{eq:n}) when $Q=0$. Eq.~(\ref{eq:Veffasympttmp}) can be compared to the large-$x$ expansion of the electric potential \begin{eqnarray} \label{eq:Velecasympt} V(x)&=&- q (N-N_b) \ln \widetilde{x} + \frac{q}{2}\frac{1}{\widetilde{x}^2} \left[\frac{N_b^2}{2}-\sum_{j=1}^{N} \frac{\gamma(j+1,N_b)}{\gamma(j,N_b)}\right] \\ &&-\frac{1}{4 \widetilde{x}^4} \sum_{j=1}^N \left[ \left(2Q-q\right)\frac{\gamma(j+2,N_b)}{\gamma(j,N_b)} - 2Q\frac{\gamma(j+1,N_b)^2}{\gamma(j,N_b)^2} \right] +O(1/\widetilde{x}^6)\, , \nonumber \end{eqnarray} where one can explicitly check that $\partial_{Q} V_{\text{eff}}=V$. Let us discuss further the expansion of $V_{\text{eff}}$ up to order $1/x^2$. The properties of the incomplete gamma function allow us to write the coefficient of the $1/\widetilde{x}^2$ term appearing in Eq.
(\ref{eq:Veffasympttmp}) as \begin{eqnarray} \frac{N_b^2}{2}-\sum_{j=1}^{N} \frac{\gamma(j+1,N_b)}{\gamma(j,N_b)} &=& \frac{N_b^2-N^2}{2}-\frac{N}{2}+\sum_{j=1}^{N} \frac{e^{-N_b}N_{b}^{j}}{\gamma(j,N_b)} \nonumber\\ &=& \frac{N_b^2-N^2}{2}-\frac{N}{2}+\frac{N_b n^{0}(R)}{n_b} \,, \label{eq:x2} \end{eqnarray} where $n^{0}(R)$ is the density of particles at the edge of the disk in the absence of the charge $Q$, i.e.~Eq.~(\ref{eq:n}) with $Q=0$ at $\widetilde{r}=\widetilde{R}=\sqrt{N_b}$. Thus, \begin{equation} \label{eq:Veffasympt} V_{\text{eff}}(x)=-Q q (N-N_b) \ln \widetilde{x} + \frac{Q q}{2}\frac{1}{\widetilde{x}^2} \left[\frac{N_b^2-N^2}{2}-\frac{N}{2}+\frac{N_b n^{0}(R)}{n_b} \right] +O(1/\widetilde{x}^4)\,. \end{equation} For a neutral disk, $N=N_b$, and \begin{equation} \label{eq:Veffasymptneutral} V_{\text{eff}}(x)= \frac{Q q N_b}{2}\frac{1}{\widetilde{x}^2} \left[\frac{n^{0}(R)}{n_b} -\frac{1}{2} \right] +O(1/\widetilde{x}^4)\, . \end{equation} This result is an explicit check, at $\Gamma=2$, of the multipolar expansion~(\ref{eq:multipol}) combined with the sum rule (\ref{eq:sumquad}). One notices that $V_{\text{eff}}$ is repulsive for $Qq >0$. This can be understood as follows. When $\widetilde{x}\to\infty$, the charge density profile $n(r)$ inside the disk is the same as for a disk alone (without the approaching charge $Q$), found in Refs.~\cite{J81,J82}. The density $n(r)$ is equal to the background density $n_b$ in the bulk of the disk (local neutrality in the bulk). Close to the boundary, it rises above the background density, then falls below it~\cite{J82}, see figure~\ref{fig:ndens.xstar.infty}. Therefore, there are two concentric layers of charge close to the edge: the inner layer bears a net charge of the same sign as $q$ [i.e.~$n(r)>n_b$], while the outer one bears the opposite charge, by electroneutrality.
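The consistency between the $1/\widetilde{x}^2$ coefficient above and the sum rule (\ref{eq:sumquad}) can be verified numerically. In the sketch below (our check, at $\Gamma=2$ and in units $\pi n_b=1$, so that $R^2=N_b$), the quadrupole $\mathbb{Q}_{2}$ is read off Eq.~(\ref{eq:Veffasympttmp}) and compared with the contact density computed from Eq.~(\ref{eq:n}) at $Q=0$:

```python
import math

def lig(k, x):
    # lower incomplete gamma gamma(k, x) for positive integer k
    s = sum(x**m / math.factorial(m) for m in range(k))
    return math.factorial(k - 1) * (1.0 - math.exp(-x) * s)

def Q2_over_R2(N):
    # Q2/R^2 for the neutral disk (N = N_b) at Gamma = 2, from the 1/x~^2
    # coefficient of Eq. (Veffasympttmp): Q2 = -C/(pi*n_b) with
    # C = N_b^2/2 - sum_j gamma(j+1, N_b)/gamma(j, N_b); here pi*n_b = 1
    # and R^2 = N_b = N.
    C = N**2 / 2 - sum(lig(j + 1, N) / lig(j, N) for j in range(1, N + 1))
    return -C / N

def edge_density_ratio(N):
    # n0(R)/n_b from Eq. (n) at Q = 0 and r~^2 = N_b
    return sum(math.exp(j * math.log(N) - N) / lig(j + 1, N)
               for j in range(N))

for N in (4, 8, 16):
    lhs = Q2_over_R2(N)                 # Gamma*Q2/(2 R^2) at Gamma = 2
    rhs = 0.5 - edge_density_ratio(N)   # [(1 - Gamma/4) - n0(R)/n_b] / n_b units
    print(N, lhs, rhs)
```

Both sides agree to machine precision, since Eq.~(\ref{eq:x2}) follows from the recurrence $\gamma(j+1,N_b)=j\gamma(j,N_b)-N_b^j e^{-N_b}$.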
This two-layer structure ensures that $\mathbb{Q}_{2}$ is generically negative, so that the quadrupolar term yields an effective disk-test charge interaction of the same sign as $Q q$, i.e.~repulsive for $Q q>0$. When $N\to\infty$, more explicit results can be obtained. In this limit, the density at the edge of the disk takes a simple form~\cite{J82}, $n^{0}(R)=n_b \ln 2$, so that \begin{equation} \label{eq:VeffasymptneutralNinf} V_{\text{eff}}(x)= \frac{Q q N_b}{2}\frac{1}{\widetilde{x}^2} \left(\ln 2 -\frac{1}{2} \right) +O(1/\widetilde{x}^4)\, = \frac{Q q R^2}{2}\frac{1}{x^2} \left(\ln 2 -\frac{1}{2} \right) +O(1/x^4) , \end{equation} with $\ln 2 -\frac{1}{2}\simeq 0.19 >0$, which is consistent with the generic discussion above on the negative sign of $\mathbb{Q}_{2}$. Again, for a neutral disk, the effective potential at large distances is attractive for $Qq<0$ and repulsive for $Qq>0$. \begin{figure} \centering \includegraphics[width=\GraphicsWidth]{ndens.xstar.infty.eps} \vspace{5mm} \caption{\label{fig:ndens.xstar.infty} Reduced charge density profile in the disk, for different neutral situations ($N=N_b$), and $x\to \infty$. The arrow on the right hand side indicates the limiting value $\ln 2 \simeq 0.693$ that is reached in the large $N$ limit. The total charge density profile is $q[n(r)-n_b]$. For large $N$, it thus vanishes except in a small region of linear size $1/\sqrt{N_b}$ in the vicinity of the boundary $r=R$. } \end{figure} The quadrupolar route allows us to obtain results for strongly coupled systems (large $\Gamma$), making use of the sum rule (\ref{eq:sumquad}). We note in passing that this general result is compatible with the value $\mathbb{Q}_{2} = - R^2 (\ln 2- 1/2)$ that holds at $\Gamma=2$ when the number of mobile charges on the disk becomes large, see Eqs. (\ref{eq:VeffasymptneutralNinf}) and (\ref{eq:multipol}).
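The approach of the contact density to its limiting value $n_b\ln 2$ can be observed directly. The sketch below (our illustration) evaluates $n^{0}(R)/n_b$ from Eq.~(\ref{eq:n}) at $Q=0$ for growing neutral systems, working with logarithms so that the factorials and powers of $N_b$ do not overflow:

```python
import math

def edge_density_ratio(N):
    # n0(R)/n_b for the neutral disk N = N_b at Gamma = 2, from Eq. (n)
    # with Q = 0 at r~^2 = N_b; each term is a Poisson weight
    # N^j e^{-N}/j! divided by the regularized incomplete gamma
    # P = gamma(j+1, N)/j!.
    total = 0.0
    for j in range(N):
        s, term = 1.0, 1.0
        for m in range(1, j + 1):       # sum_{m<=j} N^m/m!
            term *= N / m
            s += term
        P = 1.0 - math.exp(-N) * s
        total += math.exp(j * math.log(N) - N - math.lgamma(j + 1)) / P
    return total

for N in (5, 20, 80, 120):
    print(N, edge_density_ratio(N))
print("ln 2 =", math.log(2.0))
```

The printed values drift towards $\ln 2\simeq 0.6931$ as $N$ grows, consistent with the limit quoted from Ref.~\cite{J82}.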
When $\Gamma$ becomes large, the system crystallizes, but the sum rule~(\ref{eq:sumquad}) remains valid, provided $n(R)$ is replaced by the average of the contact density over the perimeter of the disk~\cite{CFG80,TF99}. It is physically reasonable to assume that the average of $n(R)/n_b$ remains bounded in this limit. Indeed, for the three-dimensional analogue of this model, with $1/r$ interaction, this is the case~\cite{HA91}. Then Eq.~(\ref{eq:sumquad}) yields $\mathbb{Q}_{2} \sim -R^2/2$. We consequently have \begin{equation} V_{\text{eff}}(x) \underset{\Gamma \to \infty}{\sim} \frac{Q q R^2}{4}\frac{1}{x^2} +O(1/x^4) , \end{equation} which is again repulsive for $Q q >0$, and attractive otherwise. The scaling result $\mathbb{Q}_{2} \propto - R^2$ is readily recovered from the simplified two-concentric-layer viewpoint. For large $N$, there exists an outermost corona void of charges: particles are depleted there, as they are in the plum pudding model, see e.g. \cite{HA91,BP94,CCRT11}. The width of this corona is given by the typical distance between particles, $\delta \propto R/\sqrt{N}$ (at $\Gamma=2$, $\delta$ is already the typical distance between the density maximum and the disk radius that can be seen in Fig. \ref{fig:ndens.xstar.infty} for large $N$). The charge in this corona is $-q n_b R \delta$, which contributes a quantity $- R \, n_b \, \delta \, R^2$ to the quadrupole moment. This charge is compensated by an oppositely charged ring, located at $R-\delta$, which contributes a quantity $R \, \delta \, n_b (R-\delta)^2$ to $\mathbb{Q}_{2}$. Summing both contributions, assuming that the particles and background with $r<R-\delta$ do not contribute to $\mathbb{Q}_{2}$, and remembering that $\delta \ll R$, we arrive at $\mathbb{Q}_{2} \propto - n_b R^2 \delta^2 \simeq - R^2 $. A very similar argument holds at $\Gamma=2$, since the two-layer picture is also valid there.
\section{Short scale features} \label{sec:short} We now turn to the phenomenology at shorter distances, which differs depending on whether the particle approaching the disk has a charge of the same sign as the mobile particles on the disk ($Q/q>0$) or a charge of the opposite sign. In addition, the cases of globally neutral and charged disks should be treated separately, and the different cases are governed by different sorts of polarization effects. \subsection{Case $Q/q>0$} \label{sec:Qpos} \subsubsection{Neutral disk} We consider a globally neutral disk, $N=N_b$. We study in this section whether it is possible to overcharge this object, by approaching a particle with a charge $Q$ of the same sign as the mobile counterions on the disk. At large distances, we know from~(\ref{eq:Veffasympt}) that the interaction is repulsive. We anticipate that this behavior changes when the charge $Q$ is close enough to the disk, since the intruder $Q$ should then create a correlation hole, pushing mobile charges closer to the boundary $r=R$, and thereby gaining Coulombic energy from the hole thus opened. This is the mechanism behind charge inversion in colloidal systems, which has been reviewed for situations of strong coupling in Ref.~\cite{GNS02}. The short-distance behavior of the effective potential~(\ref{eq:Veff}) is, when $Q/q>0$, \begin{eqnarray} \beta V_{\text{eff}}(x)&=& \frac{Q}{q} \left[ N_b\ln N_b-N_b +\widetilde{x}^2 \left(1+\ln\frac{N_b}{\widetilde{x}^2}\right) \right] -\sum_{j=1}^{N} \ln\frac{\gamma\left(j+\frac{Q}{q},N_b\right) }{\gamma\left(j,N_b\right)} \nonumber\\ &&- \frac{Q}{q}\widetilde{x}^2\sum_{j=1}^{N} \frac{\gamma\left(j+\frac{Q}{q}-1,N_b\right) }{\gamma\left(j+\frac{Q}{q},N_b\right)} +O\left(\widetilde{x}^4,\widetilde{x}^{2(1+Q/q)}\right) \,, \label{eq:veff-shortQpos} \end{eqnarray} which is clearly an increasing function of $x$ when $\widetilde{x}\ll1$. Therefore, at short distances from the disk, the interaction turns out to be attractive.
This can be observed in figure~\ref{fig:veffQ1-N=Nf}, where the effective potential indeed increases at short distances, reaches a maximum, and then decreases upon increasing the distance between the test particle and the disk. Note that at $x=0$, the effective potential takes a finite value, \begin{equation} \label{eq:betaVeff0} \beta V_{\text{eff}}(0)=\frac{Q}{q} \left(N_{b}\ln N_{b}-N_{b}\right) -\sum_{j=1}^{N} \ln\frac{\gamma(j+\frac{Q}{q},N_{b})}{\gamma(j,N_{b})} \,, \end{equation} although it is not shown in all figures. \begin{figure}[htb] \null\vskip 5mm \centering \includegraphics[width=\GraphicsWidth]{veffQ=1}\hspace{1mm}\includegraphics[width=\GraphicsWidth]{veffQ=10} \null\vskip 5mm \caption{\label{fig:veffQ1-N=Nf} The effective interaction between the disk and an approaching ion with charge $Q=q$ (left graph) or $Q=10 q$ (right graph). The disk is globally neutral with $N_b=N$ ions of charge $q$. The distance $x$ is expressed in reduced units ($\widetilde{x}$), in which the disk radius reads $\sqrt{N_b}$, and therefore takes different values for the three curves shown. } \end{figure} Let $x^{*}$ be the distance at which the interaction potential reaches its maximum, and $\widetilde{x}^{*}=\sqrt{\pi n_b} x^{*}$. Then $x^{*}$ is the distance to which the charged particle must be brought in order to overcome the natural repulsion of the disk. The corresponding (free) energy cost to overcharge the disk is given by $V^{\dagger}=V_{\text{eff}}(x^{*})$, and $V^{*}=V^{\dagger}-V_{\text{eff}}(0)$ is the binding energy, i.e. the energy necessary to unbind the charged particle from the disk, once it has been overcharged. More generally, $V^{*}$ can be defined as the energy needed to peel off an ion from the disk. In all the present discussion, the energy costs alluded to correspond to the work an external operator holding the intruder should perform, which equals the corresponding free energy variation of the system as a whole.
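These quantities are easy to evaluate from the closed form (\ref{eq:VeffQ=q}). The sketch below (our illustration, $\Gamma=2$, $Q=q$, $N=N_b=10$; the grid bound is an arbitrary choice) locates $x^{*}$ on a grid and extracts $\beta V^{\dagger}$ and $\beta V^{*}$:

```python
import math

def lig(k, x):
    # lower incomplete gamma gamma(k, x) for positive integer k
    s = sum(x**m / math.factorial(m) for m in range(k))
    return math.factorial(k - 1) * (1.0 - math.exp(-x) * s)

N = 10                                              # neutral disk, N = N_b
ratios = [lig(j + 1, N) / lig(j, N) for j in range(1, N + 1)]

def beta_veff(xt):
    # beta*V_eff(x~) for Q = q, Eq. (VeffQ=q); finite at x~ = 0.
    x2 = xt * xt
    out = (N + x2) * math.log(N + x2) - N
    if x2 > 0.0:
        out -= x2 * math.log(x2)
    return out - sum(math.log(x2 + r) for r in ratios)

grid = [0.002 * i for i in range(1, 6001)]          # x~ in (0, 12]
vals = [beta_veff(x) for x in grid]
i_star = max(range(len(vals)), key=vals.__getitem__)
x_star = grid[i_star]                 # barrier position x~*
v_dagger = vals[i_star]               # energy cost beta*V^dagger
v_star = v_dagger - beta_veff(0.0)    # binding energy beta*V^*
print(x_star, v_dagger, v_star)
```

The maximum sits at an interior point of the grid, with a small positive barrier and a much larger binding energy, in line with the discussion of figure~\ref{fig:veffQ1-N=Nf}.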
Figure~\ref{fig:Vdagger-Vstar-xstar-fnct-N=Nb-Q10} shows how $x^{*}/R$, $V^{\dagger}$ and $V^{*}$ depend on $N$ for fixed $Q/q$, and on $Q/q$ at fixed $N$. First of all, it appears that the binding energy $V^{*}$ is several orders of magnitude larger than the energy cost $V^{\dagger}$. This means that $V^* \simeq |V_{\text{eff}}(0)|$. Second, the threshold distance $x^*$ scales like $R$ when $N$ becomes large enough, a fact that is masked in Fig. \ref{fig:veffQ1-N=Nf} by the choice of units made (tilde variables, for which $\widetilde R = \sqrt{N_b}$). More precisely, from the numerical data of Fig.~\ref{fig:Vdagger-Vstar-xstar-fnct-N=Nb-Q10}, we explored how $x^{*}/R$ depends on the charge $Q$. We found, numerically, the approximate relation \begin{equation} \frac{x^{*}}{R}=a \sqrt{\frac{Q}{q}} +O(1/\sqrt{N}) \end{equation} with $a=1.5+O(1/\sqrt{N})$. The fact that $a$ is of order one means that the effective attraction holds up to rather large distances, on the order of the disk radius. Another feature visible in Fig. \ref{fig:Vdagger-Vstar-xstar-fnct-N=Nb-Q10} is that the energy cost increases with $Q/q$, but quickly saturates to a finite value $V_{\text{eff}}^{\text{sat}}$. On the other hand, the binding energy increases as the charge $Q$ increases, as expected, but it also increases with the number $N$ of mobile ions on the disk. For $N\to\infty$, using Stirling's formula for the incomplete gamma functions in Eq.~(\ref{eq:betaVeff0}), one can obtain the analytical behavior of $V_{\text{eff}}(0)$, and therefore that of the binding energy, remembering that $V^{*}\simeq |V_{\text{eff}}(0)|$. We find \begin{equation} \label{eq:betaV0Ninfty} \beta V_{\text{eff}}(0)= -\frac{1}{2} \left(\frac{Q}{q}\right)^2 \ln N + O(1) \,.
\end{equation} \begin{figure}[htb] \null\vskip 6mm \centering \includegraphics[width=\GraphicsWidth]{Vdagger-Vstar-xstar-fnct-N=Nb-Q10} \hspace{1mm} \includegraphics[width=\GraphicsWidth]{Vdagger-Vstar-xstar-fnct-Q-N=Nb=22} \vspace{5mm} \caption{\label{fig:Vdagger-Vstar-xstar-fnct-N=Nb-Q10} Left: The binding energy $V^{*}$, the energy cost $V^{\dagger}$ and the distance $x^{*}$ to overcharge the globally neutral disk with an additional particle of charge $Q=10q$, as a function of the number of particles $N=N_b$ on the disk. Right: same quantities as a function of the intruder charge $Q$, for $N=N_{b}=22$.} \end{figure} \begin{figure} \centering \includegraphics[width=\GraphicsWidth]{Vdagger-Vstar-xstar-fnct-N=Nb-Q10-allgamma} \hspace{1mm} \includegraphics[width=\GraphicsWidth]{Vdagger-Vstar-xstar-fnct-Q-N=Nb=8-allgamma} \vspace{5mm} \caption{\label{fig:Vdagger-Vstar-xstar-fnct-N=Nb-Q10-allgamma} Left: The binding energy $V^{*}$, the energy cost $V^{\dagger}$ and the distance $x^{*}$ to overcharge the globally neutral disk with an additional particle of charge $Q=10q$, as a function of the number of particles $N=N_b$ on the disk, for different values of the Coulombic coupling $\Gamma=\beta q^2$. Right: same, as a function of $Q$, for a globally neutral disk with $N=N_{b}=8$. } \end{figure} Figure~\ref{fig:Vdagger-Vstar-xstar-fnct-N=Nb-Q10-allgamma} shows how the previous quantities behave for different couplings ($\Gamma=\beta q^2=2, 4, 6$). The qualitative features appear to be robust: the behavior is similar to that at $\Gamma=2$, with changes in the numerical values of $x^{*}$, $V^{\dagger}$ and $V^{*}$. As $\Gamma$ increases, $V^{\dagger}$ and $V^{*}$ increase, while $x^{*}$ decreases slightly: the test ion has to come closer to the disk, and requires more energy to overcome the long distance repulsion; once it has overcharged the disk, it is more strongly bound to it.
\subsubsection{Charged disk} If $N<N_b$, the disk has a charge $q(N-N_b)$ of sign opposite to that of the approaching ion $Q$. In this case the effective potential is attractive at all distances. Let us consider the more interesting case where the disk is already overcharged, with a net charge of the same sign as $Q$, i.e.~$N>N_b$. The question is to study the effective potential profile, and the distance range over which it corresponds to an effective attraction. At large distances, the effective interaction between the ion and the disk is repulsive, and diverges as $-Qq(N-N_b)\ln \widetilde{x}$. However, from Eq.~(\ref{eq:veff-shortQpos}), we find that, at short distances, the effective potential becomes attractive. Therefore, as in the previous situation, there exists a distance $x^{*}$ below which the test charge is attracted, which results in a further charge inversion of the disk. Figure~\ref{fig:veffNb-lt-N} shows the effective potential in this situation for different charges $Q$ and different charges $q(N-N_b)$ of the disk. Here, one can also define a binding energy $V^{*}=V_{\text{eff}}(x^{*})-V_{\text{eff}}(0)$, necessary to pull the ion out of the disk once it has been ``adsorbed''. Figure~\ref{fig:Q-xstar-vstar-Nb=15-N=16} shows how $x^{*}$ and $V^{*}$ depend on the charge $Q$ of the ion and on the charge of the disk. It is also worth emphasizing that when the disk complex is not neutral, one cannot define the energy barrier $V^{\dagger}$. Indeed, this quantity was defined in the neutral case as the barrier to overcome in order to bring the test charge from $x=\infty$ down to the distance where attraction sets in. When $N\neq N_b$, the large distance effective potential diverges as $(N-N_b) \ln x$, which precludes the definition of $V^{\dagger}=V_{\text{eff}}(x^*)-V_{\text{eff}}(\infty)$.
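This charge-inversion scenario can be probed numerically from Eq.~(\ref{eq:Veff}): for integer $Q/q$ the norm integral reduces to incomplete gamma functions via the binomial theorem. A sketch (ours) for the parameters of the main inset of figure~\ref{fig:veffNb-lt-N}, $N_b=15$, $N=16$, $Q=5q$:

```python
import math

def lig(k, x):
    # lower incomplete gamma gamma(k, x) for positive integer k
    s = sum(x**m / math.factorial(m) for m in range(k))
    return math.factorial(k - 1) * (1.0 - math.exp(-x) * s)

N, Nb, p = 16, 15, 5          # disk charge q(N - N_b) = q, intruder Q = p*q
L = [None] + [lig(m, Nb) for m in range(1, N + p + 1)]   # L[m] = gamma(m, N_b)

def beta_veff(xt):
    # beta*V_eff at Gamma = 2, Eq. (Veff); for integer p = Q/q the norm
    # integral is int t^{j-1} (x~^2+t)^p e^{-t} dt
    #            = sum_k C(p,k) x~^{2(p-k)} gamma(j+k, N_b).
    x2 = xt * xt
    out = p * ((Nb + x2) * math.log(Nb + x2) - Nb - x2 * math.log(x2))
    for j in range(1, N + 1):
        I = sum(math.comb(p, k) * x2 ** (p - k) * L[j + k]
                for k in range(p + 1))
        out -= math.log(I) - math.log(L[j])
    return out

grid = [0.01 * i for i in range(1, 2001)]      # x~ in (0, 20]
vals = [beta_veff(x) for x in grid]
i_star = max(range(len(vals)), key=vals.__getitem__)
print(grid[i_star], vals[i_star])   # position and height of the maximum
```

The profile rises at short distances (attraction), peaks at an interior $x^{*}$, and then decreases logarithmically, the repulsive regime, in agreement with the figure.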
Such a divergence is absent in three dimensions, where charges interact through a $1/r$ potential (hence the possible definition of $V^{\dagger}$ also for non neutral complexes). As a consequence, the study of overcharging is somewhat less rich in the present case than for three dimensional systems, and overcharging is, with a log potential, necessarily a phenomenon of small amplitude (if not infinitesimal): another way to rephrase the previous remarks is that $V^{\dagger}$ diverges as soon as $N\neq N_b$. \begin{figure} \null\vskip 6mm \centering \includegraphics[width=\GraphicsWidth]{veffNb.lt.N.eps} \vspace{5mm} \caption{\label{fig:veffNb-lt-N} The effective potential between the charged disk and the approaching ion in cases where $Q/q>0$ and where the large distance behaviour is repulsive (i.e. $N>N_b$). The main inset is for $N_b=15,N=16, Q/q=5$ while the smaller inset is for $N_b=10,N=16, Q/q=1$. } \vspace{5mm} \end{figure} \begin{figure} \centering \vspace{6mm} \includegraphics[width=\GraphicsWidth]{Q-xstar-vstar-Nb=15-N=16} \hspace{1mm} \includegraphics[width=\GraphicsWidth]{Nb-xstar-vstar-N=16-Q=1} \vspace{5mm} \caption{\label{fig:Q-xstar-vstar-Nb=15-N=16} Left: The binding energy $V^{*}$ and the distance $x^{*}$ to overcharge the charged disk with $N_{b}=15$ and $N=16$ particles (global charge $q$) with an additional charged particle with charge $Q$ as a function of $Q/q$. Right: Same, as a function of $N_b$, for $N=16$ and $Q=q$.} \end{figure} To understand the mechanism behind the attraction at short distances, it is instructive to study the density distribution of particles in $\mathcal{D}$ as $Q$ approaches the disk. Figure~\ref{fig:ndens-Nb=15-N=16-Q=5-x=2-xstar-5-infty} shows the density profile for different distances $x$. It can be seen that the correlation hole alluded to earlier becomes increasingly marked as $x$ becomes smaller: the mobile charges $q$ of the disk feel the repulsion due to the charge $Q$, and move towards the edge of the disk.
This results in a local negative charge density in the center of the disk, which is ultimately responsible for the attractive interaction between the disk and the intruder charge $Q$. \begin{figure} \centering \vspace{6mm} \includegraphics[width=\GraphicsWidth]{ndens-Nb=15-N=16-Q=5-x=2-xstar-5-infty} \vspace{5mm} \caption{\label{fig:ndens-Nb=15-N=16-Q=5-x=2-xstar-5-infty} The density profile of mobile particles in the disk, with $N=16$ and $N_b=15$, so that the charge of the disk equals $q$; the approaching ion has charge $Q=5q$. Notice that as the ion approaches the disk, the charge density in the center of the disk becomes negative. This results in the effective attraction between the disk and the ion at short distances $x$. } \end{figure} \subsection{Case $Q/q<0$} We now turn to the case where the test particle and the mobile ions on the disk have charges of opposite signs. \subsubsection{Neutral disk} For a globally neutral disk ($N=N_b$), we know from Eq.~(\ref{eq:Veffasymptneutral}) and the analysis of section~\ref{sec:long-distance} that the effective potential is attractive at large distances. This behavior persists at short distances, as illustrated in figure~\ref{fig:veffQnegN=10}. Therefore, the neutral disk has a natural tendency to overcharge. \begin{figure}[htb] \null\vskip 6mm \centering \includegraphics[width=\GraphicsWidth]{veffQnegN=10} \vspace{5mm} \caption{\label{fig:veffQnegN=10} The effective potential between the globally neutral disk and the approaching ion, in the case where the charge of the test ion and those of the mobile particles on the disk have opposite signs: $Q/q<0$. Here, the disk bears $N=10$ particles with charge $q$. } \vspace{5mm} \end{figure} \subsubsection{Charged disk} If $N>N_b$, the disk has a charge $q(N-N_b)$ of sign opposite to that of the approaching ion $Q$. In this case the effective potential is always attractive, a situation that is not of particular interest.
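Returning to the neutral case, the all-distance attraction can be checked directly from Eq.~(\ref{eq:Veff}) with $Q=-q$, where the norm integral is evaluated by simple quadrature. A sketch (our illustration, $N=N_b=10$):

```python
import math

N = 10                                  # neutral disk, N = N_b

def lig(k, x):
    # lower incomplete gamma gamma(k, x) for positive integer k
    s = sum(x**m / math.factorial(m) for m in range(k))
    return math.factorial(k - 1) * (1.0 - math.exp(-x) * s)

def norm_integral(j, x2, n=20000):
    # int_0^{N_b} t^{j-1} (x~^2 + t)^{-1} e^{-t} dt  (Q = -q), midpoint rule
    h = N / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += t ** (j - 1) * math.exp(-t) / (x2 + t) * h
    return total

def beta_veff(xt):
    # beta*V_eff at Gamma = 2 for Q = -q, Eq. (Veff)
    x2 = xt * xt
    out = -((N + x2) * math.log(N + x2) - N - x2 * math.log(x2))
    for j in range(1, N + 1):
        out -= math.log(norm_integral(j, x2)) - math.log(lig(j, N))
    return out

for xt in (0.5, 1.0, 2.0, 4.0, 8.0):
    print(xt, beta_veff(xt))
```

The printed values are negative and increase towards zero with distance, i.e. the force on the test charge points towards the disk at every separation sampled.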
We concentrate instead on the case where the disk has a net charge of the same sign as $Q$, i.e.~$N_b>N$. Due to this excess charge, the effective potential with a charge $Q$ of the same sign as the disk is expected to be repulsive at large distances, see equation~(\ref{eq:Veffasympt}). However, as we shall see below, here too the behavior of the effective interaction changes at short distances, where the force between the disk and the particle becomes attractive. The situation seems at first sight similar to the case studied in section~\ref{sec:Qpos}. There are some notable differences though. If $Q/q\leq -1$, the effective potential diverges when $\widetilde{x}\to0$. Indeed, for $Q/q\leq -1$, equation~(\ref{eq:veff-shortQpos}) is no longer valid. The dominant contribution is given by the term $j=1$ in the sum~(\ref{eq:Veff}). Explicitly, it yields, for $Q/q<-1$, \begin{equation} V_{\text{eff}}(x)\sim -q(Q+q) \ln \widetilde{x}\,,\qquad x\to0 \label{eq:fin1} \end{equation} and \begin{equation} V_{\text{eff}}(x)\sim -\frac{q^2}{2}\ln \left(\ln \frac{1}{\widetilde{x}}\right)\,,\qquad x\to0\,, \label{eq:fin2} \end{equation} if $Q=-q$. In both cases, the small $x$ behaviour is attractive. We note that the divergence of $V_{\text{eff}}(x)$ for $x\to 0$ stems from the fact that the Boltzmann weight $\exp(\beta Q q\ln r)=r^{\Gamma Q/q}$ is non-integrable at $r=0$ whenever $Q/q \leq -2/\Gamma$. As announced earlier, the effective potential is repulsive at large distances and attractive at short distances. We can thus again define the distance $x^{*}$ at which the potential becomes attractive if we approach the ion below $x^{*}$. However, in the present case, the binding energy $V^{*}$ is infinite because $\lim_{x\to0} V_{\text{eff}}(x)=-\infty$. Likewise, we cannot define the energy barrier $V^\dagger$ since either attraction applies at all distances (neutral complex on the disk), or the effective potential diverges at infinity (charged case).
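The integrability criterion above can be checked with a short numerical sketch (our own illustration, not part of the derivation): the two-dimensional measure contributes a factor $r$, so the weight $r^{\Gamma Q/q}$ is integrable at the origin precisely when the truncated integral $\int_\varepsilon^1 r^{1+\Gamma Q/q}\,dr$ stays bounded as $\varepsilon\to 0$.

```python
import math

def truncated_weight_integral(gamma, q_ratio, eps):
    """Integral of r^(1 + gamma*q_ratio) dr from eps to 1: the Boltzmann
    weight r^(Gamma*Q/q) times the factor r from the 2D measure d^2r."""
    a = 1.0 + gamma * q_ratio            # exponent of r in the integrand
    if abs(a + 1.0) < 1e-12:             # exponent -1: logarithmic divergence
        return -math.log(eps)
    return (1.0 - eps ** (a + 1.0)) / (a + 1.0)

gamma = 2.0                              # coupling, so -2/Gamma = -1
for q_ratio in (-0.5, -1.0, -2.0):       # Q/q above, at, and below -2/Gamma
    print(q_ratio, [truncated_weight_integral(gamma, q_ratio, 10.0 ** -k)
                    for k in (2, 4, 8)])
```

The first case converges as $\varepsilon$ shrinks, while the last two grow without bound, matching the non-integrability condition $Q/q \leq -2/\Gamma$ at $\Gamma=2$.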
\begin{figure}[htb] \null\vskip 6mm \centering \includegraphics[width=\GraphicsWidth]{ndens-Nb=16-N=15-Q=-5-x=2-xstar-5-infty} \hspace{1mm} \includegraphics[width=\GraphicsWidth]{logndens-Nb=16-N=15-Q=-5-x=0p01-0p1-1-2-infty} \null\vspace{5mm} \caption{\label{fig:ndens-Nb=16-N=15-Q=-5-x=2-xstar-5-infty} Density profile of counterions in the disk, in the case $Q=-5q$, with $N=15$ and $N_b=16$. The net charge of the disk is then equal to $-q$. } \vspace{6mm} \end{figure} The mechanism behind the attraction between these like-charged objects at short distance is now due to an accumulation of mobile charges near the center of the disk. This can be seen in Fig.~\ref{fig:ndens-Nb=16-N=15-Q=-5-x=2-xstar-5-infty}, which shows the density profile of the mobile charges $q$, i.e.~the counterions. As the charge $Q$ approaches, it strongly attracts the counterions to the disk center, and this results in an effective attraction, the precise form of which is nontrivial. As seen in Fig.~\ref{fig:ndens-Nb=16-N=15-Q=-5-x=2-xstar-5-infty}, for small $\widetilde{x}$, there is a large density of counterions close to the center of the disk, which is however concentrated over a disk of radius of order 1 in $\widetilde{r}$ units. The precise analytical behavior of the charge density can be extracted from Eq.~(\ref{eq:n}), for $\widetilde{x}\ll 1$, taking into account that for $Q/q<-1$ the first term of the sum is the dominant one: \begin{equation} \label{eq:dens-x0} n(r)\sim - n_b\, e^{-\widetilde{r}^2} \left(\frac{Q}{q}+1\right) \left(1+\frac{\widetilde{r}^2}{\widetilde{x}^2}\right) \, \frac{1}{\widetilde{x}^2} \,. \end{equation} The total integrated charge of this counterion cloud close to the center plus the background turns out to be equal to $-Q$, as one might expect. However, the behaviour of the effective potential encoded in Eqs.
(\ref{eq:fin1}) and (\ref{eq:fin2}) exhibits a different attraction than the bare Coulombic form $-Q^2 \log \widetilde x$, which would be obtained by assuming that the attracted counterions are located as a point charge at $r=0$. Since they are spread over distances larger than $\widetilde{x}$, the behavior of the potential, although logarithmic, turns out to have a different prefactor, see Eq. (\ref{eq:fin1}). The case when $-1< Q/q<0$ is somewhat different. The effective interaction potential is no longer logarithmic and has a finite value at $x=0$. For $x\to0$, \begin{equation} \beta V_{\text{eff}}(x) = \beta V_{\text{eff}}(0) + \frac{\widetilde{x}^{2(1+Q/q)}}{ \left(\frac{Q}{q}+1\right)\gamma(1+Q/q,N_b)} + O(\widetilde{x}^2) \end{equation} with $V_{\text{eff}}(0)$ given by Eq.~(\ref{eq:betaVeff0}). Notice that since $0<1+Q/q<1$, the potential is again attractive at short distances. Also, the power law $\widetilde{x}^{2(1+Q/q)}$ differs from that of the case $Q/q>0$, where it was $\widetilde{x}^2$. In this case, $-1< Q/q<0$, it is again possible to define the binding energy, which diverges when the limit $Q/q\to-1^{+}$ is approached. One can indeed show that $\beta V_{\text{eff}}(0) \sim \ln \left(1+Q/q\right)$. \section{Conclusion and discussion} \label{sec:concl} We have introduced a classical system that exhibits some of the phenomenology at work in more complex colloidal suspensions. An ensemble of $N$ point particles with charge $q$ are free to move within a disk of radius $R$, which bears a uniform background charge of surface density $- q N_b/(\pi R^2)$. The corresponding complex (mobile charges and background) forms a one-component plasma, with a global charge $(N-N_b)q$. A test point charge $Q$ is then brought towards the complex, perpendicularly to the disk plane, along its axis of symmetry ($x$-axis, see Fig. \ref{fig:OCPdisk-charge}).
All charges were assumed to interact through a log potential, a choice that is convenient for the derivation of analytical results and for the discussion of the physical mechanisms, but which, as we emphasized, is somewhat unrealistic for a real Coulombic problem in three dimensions. We have studied in detail the $x$-dependent effective potential $V_{\text{eff}}$ experienced by the intruder $Q$, defined as the free energy of the complete charge distribution for a given distance $x$ between the test charge and the complex. At short distances $x$, $V_{\text{eff}}$ is always attractive, with different underlying mechanisms depending on the sign of $Q/q$. If the intruder and the mobile charges are like-charged, the intruder creates its own correlation hole as it approaches the disk. The short-range attraction resulting from this polarization is analogous to its three-dimensional counterpart explaining charge inversion (overcharging, see Ref. \cite{GNS02}). If on the other hand $Q/q<0$, the test particle attracts an excess of mobile charges in the vicinity of the disk center, which overcomes the background--test-charge repulsion. In this case, we found a diverging attraction for $Q/q\leq -1$, which precludes the definition of a binding energy (cost to drag the test charge away from the disk, starting from $x=0$, the point of contact). The long-distance behaviour is also of interest. If the complex has a net charge, the leading contribution to $V_{\text{eff}}$ reads $-Q q (N-N_b) \ln \widetilde{x}$, which leads to the expected like-charge repulsion at large $x$. The neutral case $N=N_b$ is more subtle, and it has been shown that a key quantity to rationalize $V_{\text{eff}}$ is the quadrupolar moment $\mathbb{Q}_{2}$ of the total charge distribution on the disk.
At large $x$, polarization effects disappear, and the mobile charges adopt a profile that compensates for the background charge in the bulk of the disk, while they are expelled from the immediate vicinity of the disk edge $r=R$, thereby creating a charge imbalance far from the disk center only. This necessarily leads to a negative value of $\mathbb{Q}_{2}$, and hence to a repulsive behaviour at large $x$, when $Q/q>0$. Indeed, what matters for large-distance interactions are the charges closest to the intruder, and these happen to be the mobile charges expelled from $r=R$ (see the region where $n>n_b$ in Fig. \ref{fig:ndens.xstar.infty}, for $N=50$ or $N=500$, or equivalently, see the arrow in Fig. \ref{fig:ndens-unbound} below). The ensuing interaction is repulsive when $Q$ and $q$ are of the same sign. This leads us to a final remark that illustrates the subtlety of the long-distance effective potential. Consider a variant of the previous model, where the mobile charges are no longer confined in the disk $r<R$, but can explore the full disk plane (they are thus still 2D confined, but unbounded in the plane). The uniform background, as before, is a disk of radius $R$. We can repeat the analysis for $\Gamma=2$, which leads to a profile $n(r)$ that departs from the one reported above in an essential way: as can be seen in Fig. \ref{fig:ndens-unbound}, it is monotonically decreasing, as happens to be the case at mean-field level \cite{CMTR09} (i.e. for $\Gamma \to 0$). For $N\geq N_b$, the decay of the density profile $n(r)$ at large distances is algebraic in $1/r^4$~\cite{J86,J03}, leading to a divergent quadrupole. Furthermore, the density profile of this ``unbounded'' model shows a peculiarity: when $N\geq N_b$, $\int_{\mathbb{R}^2} n(r)\,d^2\r=N_b-1$. Since there were originally $N$ mobile particles, this means that $N-N_b+1$ particles have escaped to infinity.
This can be checked explicitly at $\Gamma=2$~\cite{J86,J03,FJT03}, but more generally, it is a manifestation of the Onsager-Manning-Oosawa condensation phenomenon~\cite{M69a, M69b, O71}: only a fraction $(N_b-1)/N_b$ of the mobile ions are ``condensed'' inside or in the vicinity of the disk. This is a consequence of the logarithmic interaction between the ions and the disk when they are outside the disk. As a consequence, and at variance with the bounded model where the charges stay in the disk, the global charge of the complex (disk + mobile ions) is $-q$, whenever $N\geq N_b$. Therefore one expects that the effective interaction of this complex with the charge $Q$ at large $x$ will be attractive for $Q/q>0$. This situation is opposite to the one met in the bounded model. \begin{figure}[htb] \null\vskip 6mm \centering \includegraphics[width=\GraphicsWidth]{density-bounded-unbounded} \vspace{5mm} \caption{\label{fig:ndens-unbound} Density profile of counterions in the disk, for the ``unbounded'' and ``bounded'' models (counter-ions are either allowed to explore the region $r>R$, or not). Here $N=N_b=40$ and $\Gamma=2$. The counter-ion excess for the bounded model --shown by the arrow, and studied extensively in this paper-- leads to a negative quadrupole moment $\mathbb{Q}_{2}$, see section \ref{sec:long-distance}, while in the unbounded case, one mobile ion escapes to infinity, leaving the complex (disk + ions) with a net charge $-q$. This results in large-distance effective forces on the test charge that have opposite signs.} \vspace{6mm} \end{figure} We would like to thank L. \v{S}amaj for interesting discussions, and for having provided us with some of the coefficients $c_\mu$ required to compute the partition functions in section \ref{ssec:arb}. The support of ECOS-Nord/COLCIENCIAS-MEN-ICETEX is also gratefully acknowledged.
G.~T.~acknowledges partial financial support from Comit\'e de Investigaciones y Posgrados, Facultad de Ciencias, Universidad de los Andes. \bibliographystyle{apsrev}
\section{The scVI probabilistic model} \label{model} Figure \ref{graph} represents the probabilistic model graphically. Latent variable \begin{align*} z_n \sim \mathcal N(0, I) \end{align*} is a low-dimensional random vector describing cell $n$. For neural network $f_w$, latent variable \begin{align*} w_{ng} \sim \mathrm{Gamma}(f_w(z_n, \gamma_n)) \end{align*} accounts for the stochasticity of gene $g$ expressed in cell $n$. Here the constants $\gamma_n$ are optional covariates that can be passed to $f_w$ to account for batch effects and normalization factors~\cite{Vallejos2017,Risso2014}, thereby removing unwanted variation from the latent representation. Latent variable \begin{align*} y_{ng} \sim \mathrm{Poisson}(w_{ng}) \end{align*} is the underlying expression level for gene $g$ in cell $n$. \begin{minipage}{0.75\textwidth} For neural network $f_h$, latent variable \begin{align*} h_{ng} \sim \mathrm{Bernoulli}(f_h(z_n, \gamma_n)) \end{align*} indicates whether a particular entry has been “zeroed out” due to technical effects~\cite{ZIFA,ZINB-WAVE}. Finally, the observed gene expression level is defined by: \begin{align*} x_{ng} = \begin{cases} y_{ng} & \text{ if } h_{ng} = 0,\\ 0 & \text{ otherwise}.\\ \end{cases} \end{align*} The conditional distribution $p(x_{ng} | z_{n})$ is a zero-inflated negative binomial---a distribution known to effectively model the kinetics of stochastic gene expression with some entries replaced by zeros~\cite{Grun2014}. The neural networks $f_w$ and $f_h$ use dropout regularization and batch normalization. Each network has 3 fully-connected layers, with 128 nodes each. The activation functions are all ReLU, exponential, or linear. Weights for some layers are shared between $f_w$ and $f_h$.
\end{minipage}\hfill \begin{minipage}{0.20\textwidth} \centering \tikz{ %
\node[obs] (x) {${x}_{ng}$} ; %
\node[latent, above=0.7 of x, xshift=-0.8cm] (h) {${h}_{ng}$};
\node[latent, above=0.2 of x, xshift=0.8cm] (y) {${y}_{ng}$};
\node[latent, above=0.4 of y] (w) {${w}_{ng}$};
\node[latent, above=0.4 of w, xshift=-0.8cm] (z) {${z}_{n}$};
\plate[inner sep=0.15cm, xshift=-0.cm, yshift=0.cm] {plate1} {(w) (h) (y) (x)} {G}; %
\plate[inner sep=0.15cm, xshift=-0.cm, yshift=0.cm] {plate2} {(z) (w) (h) (y) (x) (plate1)} {N}; %
\edge {z} {w, h} ; %
\edge {h} {x} ; %
\edge {w} {y}; %
\edge {y} {x} ; %
} \captionof{figure}{The scVI graphical model} \label{graph} \end{minipage} \section{Posterior inference} \label{inference} The posterior distribution combines the prior knowledge with information acquired from the data $X$. We cannot directly apply Bayes' rule to determine the posterior because the denominator (the marginal distribution) $p(x_n)$ is intractable. Instead, we use variational inference~\cite{blei2017variational} to approximate the posterior $p(z_n | x_n)$. Our variational distribution $q(z_n | x_n)$ is Gaussian with a diagonal covariance matrix. The variational distribution's mean and covariance are given by an encoder network applied to $x_n$, as in~\cite{Kingma2013}. The encoder network may, optionally, be given the constant covariates $\gamma_n$ (along with $x_n$) if we wish to discourage $z_n$ from encoding batch effects and other unwanted variations. The variational lower bound on the evidence is \begin{align} \log p(x_n) \geq \mathbb{E}_{q(z|x)}\log p(x|z) - KL(q(z|x)||p(z)) \end{align} To optimize the variational lower bound, we obtain $p(x|z)$ in closed form by analytically marginalizing out the latent variables $h_{ng}$, $w_{ng}$, and $y_{ng}$. Now, our variational lower bound is continuous and end-to-end differentiable. We maximize the variational lower bound using stochastic backpropagation.
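As a concrete illustration, the generative process of Section~\ref{model} can be sampled as follows; the linear maps standing in for the neural networks $f_w$ and $f_h$, and all dimensions, are toy placeholders of our own rather than the architecture described above:

```python
import numpy as np

rng = np.random.default_rng(0)
N, G, D = 100, 20, 10                            # cells, genes, latent dim

# Toy linear stand-ins for the decoder networks f_w (rate) and f_h (dropout).
W_rate = rng.normal(size=(D, G))
W_drop = rng.normal(size=(D, G))

z = rng.normal(size=(N, D))                      # z_n ~ N(0, I)
shape = np.exp(z @ W_rate).clip(1e-3, 50.0)      # Gamma parameter from f_w(z_n)
w = rng.gamma(shape, 1.0)                        # w_ng ~ Gamma(f_w(z_n))
y = rng.poisson(w)                               # y_ng ~ Poisson(w_ng)
p_drop = 1.0 / (1.0 + np.exp(-(z @ W_drop)))     # f_h(z_n): dropout probability
h = rng.binomial(1, p_drop)                      # h_ng ~ Bernoulli(f_h(z_n))
x = np.where(h == 0, y, 0)                       # observed zero-inflated counts

print(x.shape, float((x == 0).mean()))
```

Marginalizing $w$, $y$, and $h$ analytically turns each $p(x_{ng} \mid z_n)$ into the zero-inflated negative binomial stated in Section~\ref{model}.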
\section{Performance benchmarks} \label{benchmarks} We assess the performance of scVI at three benchmark tasks: generalizing to held-out data (Section~\ref{heldout}), imputing ``zeroed out'' data (Section~\ref{impute}), and recovering known clusters (Section~\ref{clusters}). Throughout, we compare scVI with factor analysis, as well as two state-of-the-art methods: ZIFA~\cite{ZIFA} and ZINB-WaVE~\cite{ZINB-WAVE}. Only scVI and factor analysis scale to the larger benchmark datasets---a key advantage relative to ZIFA and ZINB-WaVE. ZIFA and ZINB-WaVE are based on batch optimization algorithms. Their runtimes for \textit{each} iteration of their numerical optimization routines scale linearly in the number of samples and linearly in the number of genes---both potentially very large. For 10,000 cells, each of these methods requires more than 20 minutes of computation. For 100,000 cells, both methods run out of memory on a machine with 32 GB RAM. scVI trains on the entire 1.3 million cell dataset in less than two hours on a single GPU, using off-the-shelf neural network software. \subsection{Generalization to held-out data} \label{heldout} For this task, we use a dataset that contains 1.3 million brain cells from \textsc{10x Genomics}~\cite{10x} with 720 sampled variable genes. For each method, we learn a mapping from the 10-dimensional latent space to a reconstruction of the training set $X$. Then, we assess the marginal likelihood of held-out data, conditioned on a latent representation learned for the held-out data by each model. Table~\ref{LL} shows that scVI best compresses the held-out data, even for our smallest dataset. scVI's lead over the other methods grows as the dataset size grows.
\begin{table} \centering \begin{tabular}{lccc} cells& 4k & 10k & 100k \\ \hline FA & -1178.2 & -1177.3 & -1169.8 \\ ZIFA & -1250.9 & -1250.7 & NA \\ ZINB-WaVE & -1166.3 & -1164.4 & NA\\ scVI & \textbf{-1159.9} & \textbf{-1147.8} & \textbf{-1128.73}\\ \hline \end{tabular} \vspace{0.2cm} \caption{Marginal log likelihood for a held-out subset of the brain cells dataset. NA means we could not run the given algorithm for this sample size.} \label{LL} \end{table} \subsection{Imputing zeroed-out data} \label{impute} On a 10,000-cell sample of the same dataset, we set entries to zero at random, conditioned on the expected transcript abundance---according to probabilities from the ZIFA model---to mimic the technical effects that zero out some entries of the real data. Because we have introduced these zeros synthetically, we know 1) each entry's true value, and 2) that each entry is zero because of a technical effect, not because the true expression level is nearly zero. For this task, we also compare to MAGIC~\cite{VanDijk2017}, a state-of-the-art method based on diffusion on the cell $k$-nearest-neighbors graph, and report results in Table~\ref{IM}. \begin{table} \centering \begin{tabular}{lccc} & imputation & identification of \\ & error & zeroed-out\\ \hline ZIFA & 3.00 & 1.955 \\ MAGIC & 1.806 & NA \\ ZINB-WaVE & 1.053 & 1.366 \\ scVI & \textbf{1.048} & \textbf{0.742} \\ \hline \end{tabular} \vspace{0.2cm} \caption{Absolute errors for imputing zeroed entries (column 1), mean cross entropy for predicting which entries were zeroed out (column 2). Scores are based on a dataset of 10,000 brain cells. MAGIC does not predict dropout probabilities.} \label{IM} \end{table} \subsection{Recovering known clusters} \label{clusters} To further assess the models, we compare how each method clusters cells of known types (e.g., muscle cells, blood cells) in latent space.
For this task, we make a slight modification to our model: we treat each $z_n$ as an unknown parameter to estimate rather than a latent variable with a distribution. This way, our procedure maximizes mutual information between $z_n$ and $x_n$~\cite{zhao2017infovae}. A first dataset from~\cite{Zeisel1138} contains 3005 mouse cortex cells and gold-standard labels for seven distinct cell types. Each cell type corresponds to a cluster to recover. We sample 558 variable genes as in~\cite{BISCUIT} and report the silhouette score (a measure of separation between clusters) on the mouse cortex dataset in Table~\ref{SIL}. \begin{table} \parbox{.30\linewidth}{ \centering \begin{tabular}{lc} & silhouette \\ \hline FA & 0.208 \\ ZIFA & 0.202 \\ ZINB-WaVE & 0.260 \\ scVI & \textbf{0.285} \\ \hline \end{tabular} \vspace{0.2cm} \caption{Silhouette scores on the mouse cortex dataset.} \label{SIL} } \hfill \parbox{.65\linewidth}{ \centering \begin{tabular}{lcc} & silhouette & QC correlation\\ & (higher is better)& (lower is better)\\ \hline PCA & 0.314 & 0.381 \\ PCA (normalized) & 0.321 & 0.169 \\ scVI (no covariates) & 0.375 & 0.366 \\ scVI & \textbf{0.379} & \textbf{0.157} \\ \hline \end{tabular} \vspace{0.2cm} \caption{Unwanted variation metric on the PBMCs dataset. \\ ~ \\ ~} \label{UV} } \end{table} A second dataset contains 12,039 peripheral blood mononuclear cells (PBMCs) from \cite{Zheng2017} with 10,310 sampled genes; we obtain biologically meaningful clusters with the software Seurat~\cite{SEURAT}. For this dataset, we use SCONE~\cite{SCONE} to select the most important factors of unwanted variation to be incorporated into downstream models. These factors generally include batch metadata, sequencing depth (number of transcripts per cell), and quality control (QC) metrics for each cell. In this case, SCONE selected a strategy that consists of scaling by depth and regressing out the QC metrics.
We compare scVI, with and without covariates, to PCA, with and without normalization, in Table~\ref{UV}, and show that we can better remove the unwanted variation while yielding a high silhouette score---which means we obtain a consistent but tighter clustering with our latent space. \section{Differential expression} A significant application of our generative model, and one of central interest in the field, is to go from a clustering to a procedure for identifying genes differentially expressed between two cell types. Our model relies on Bayesian statistics and can thus benefit from uncertainty evaluation to provide a hypothesis testing framework for differential expression. Let $A$ and $B$ be two sets of cells and $g$ a fixed gene. Now take $(a, b) \in A\times B$ and say we want to test the following: $$\mathcal{H}_0^g: w_{ag} < w_{bg} \textrm{~~~~vs.~~~~}\mathcal{H}_1^g: w_{ag} \geq w_{bg}$$ where $w$ is the Gamma latent variable in the generative model, i.e.~the mean of the gene expression conditioned on a non-dropout event. The posterior of these hypotheses can be approximated via the variational distribution: $$ p(\mathcal{H}_0^g | x) \approx \iint_{z_a, z_b, w_{ag}, w_{bg}} p(w_{ag} < w_{bg})dp(w_{ag} | z_a)dq(z_a | x_a)dp(w_{bg} | z_b)dq(z_b | x_b)$$ where all the measures are uni-dimensional or low-dimensional, so we can use naive Monte Carlo to compute these integrals. We can then use a Bayes factor for the test. We again use the PBMC dataset from \cite{Zheng2017} and the Seurat-based cell classification to understand how differential expression is captured by our testing method compared to the traditional DESeq2~\cite{Love2014}. We defined a reference from a publicly available bulk array expression profiling dataset for human B cells (n=10) and dendritic cells (n=10) at baseline of vaccination~\cite{GEO:GSE29618}, which we use to test the association of each gene's expression with biological class, defining a 2-sided t-test p-value per gene.
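The naive Monte Carlo estimate of $p(\mathcal{H}_0^g \mid x)$, and the resulting Bayes factor, can be sketched as follows; the Gamma samplers standing in for draws of $w_{ag}$ and $w_{bg}$ through $q(z \mid x)$ and $p(w \mid z)$ are hypothetical placeholders of our own:

```python
import numpy as np

def bayes_factor(sample_w_a, sample_w_b, n_samples=10_000, seed=0):
    """Naive Monte Carlo estimate of p(H0: w_ag < w_bg | x), returned as the
    Bayes factor p(H0 | x) / (1 - p(H0 | x))."""
    rng = np.random.default_rng(seed)
    w_a = sample_w_a(n_samples, rng)   # draws of w_ag via q(z_a|x_a), p(w|z_a)
    w_b = sample_w_b(n_samples, rng)
    p_h0 = np.clip(np.mean(w_a < w_b), 1e-8, 1.0 - 1e-8)
    return p_h0 / (1.0 - p_h0)

# Hypothetical posterior samplers: gene g looks more expressed in B than in A.
bf = bayes_factor(lambda n, rng: rng.gamma(2.0, 1.0, size=n),
                  lambda n, rng: rng.gamma(8.0, 1.0, size=n))
print(bf)
```

A Bayes factor well above $1$ supports $\mathcal{H}_0^g$; exchanging the two samplers inverts it.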
Because defining a threshold and then using a ROC curve is ambiguous, we prefer to look at reproducibility between the microarray experiment and a family of tests applied to the scRNA-seq experiment. To quantify this, we model the relationship between significance ranks using the Irreproducible Discovery Rate model for matched rank lists~\cite{Li2011} and report the correlation score of the reproducible components in Figure~\ref{de-boxplot}. \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{fig1.pdf} \caption{Results on the differential expression task for B cells versus dendritic cells} \label{de-boxplot} \end{figure} \bibliographystyle{unsrt}
\section{Envy-Freeness} \label{EFdivisions} In this section we develop an efficient algorithm for finding envy-free allocations in cake-division instances with MLRP (Theorem~\ref{theorem:EFdivision}). Towards this goal, we define a class of cake divisions, referred to as \emph{ripple divisions} (Definition~\ref{defn:RD}), and prove that, under MLRP, every ripple division induces an envy-free allocation (Theorem~\ref{theorem:RD-EF}). Existential and computational guarantees for {ripple divisions} are established in Section~\ref{section:rd-exist} (Lemma~\ref{RDexistence}) and Section~\ref{section:rd-compute} (Lemma~\ref{RDcomputation}), respectively. Section~\ref{proofthm1} builds upon these results to prove our main result (Theorem~\ref{theorem:EFdivision}) for envy-freeness. We establish the universal existence of ripple divisions through the intermediate value theorem, i.e., a one-dimensional fixed-point argument (Lemma~\ref{RDexistence}). Consequently, for cake-division instances with MLRP, we develop an alternative proof of the existence of envy-free allocations. Since one can use binary search to find fixed points in the one-dimensional setting, this proof in fact leads to an algorithm for finding ripple divisions and, hence, envy-free divisions. \begin{definition}[Ripple Division] \label{defn:RD} Given a cake-division instance $\mathcal{C} = \langle [n], \{f_i \}_{i} \rangle $, a collection of points $x^*_0=0 \leq x^*_1 \leq \dots \leq x^*_{n-1} \leq x^*_n = 1$ is said to form a ripple division of the cake iff \begin{align*} v_{i}(x^*_{i-1}, x^*_{i}) &= v_{i}(x^*_{i}, x^*_{i+1})>0 \ \ \text{for all agents} \ i \in [n-1]. \end{align*} \end{definition} For establishing existence of ripple divisions, we first consider a relaxation of Definition~\ref{defn:RD} wherein we do not enforce the last cut point (i.e., $x^*_n$) to be equal to one.
Under this relaxation, the intervals $\{[x_{i-1}, x_i]\}_{i=1}^n$ do not cover the entire interval $[0,1]$ (instead, they span $[0,x_n]$) and, hence, lead to a partial allocation of the cake. Also, by convention, the agents are indexed following the MLRP order: for each $i \in [n-1]$, the likelihood ratio $f_{i+1}/f_i$ is nondecreasing. Hence, assigning interval $[x_{i-1}, x_i]$ to agent $i \in [n]$ provides an allocation wherein the intervals are assigned (left to right on the cake) in accordance with the MLRP order. We will show that the (partial) allocation obtained by assigning interval $[x_{i-1}, x_i]$ to agent $i \in [n]$ is envy-free (Theorem~\ref{theorem:RD-EF}). \begin{definition}[$\delta$-Ripple Division] \label{defn:deltaRD} Given a cake-division instance $\mathcal{C} = \langle [n], \{f_i \}_{i} \rangle $, a collection of points $x_0=0 \leq x_1 \leq \dots \leq x_{n-1} \leq x_n \leq 1$ is said to form a $\delta$-ripple division of the cake iff $x_n \geq 1 - \delta$ and \begin{align*} v_{i}(x_{i-1}, x_{i}) &= v_{i}(x_{i}, x_{i+1})>0 \ \ \text{for all agents} \ i \in [n-1]. \end{align*} \end{definition} Both Definitions~\ref{defn:RD} and~\ref{defn:deltaRD} require that, for all $i \in [n-1]$, agent $i$'s value for the $i$th interval ($[x^*_{i-1}, x^*_{i}]$ and $[x_{i-1}, x_{i}]$, respectively) is positive. Also, note that a $0$-ripple division is an exact ripple division. To compose the value equalities that define a $\delta$-ripple division (Definition~\ref{defn:deltaRD}), we consider $(n-1)$ functions $\mathrm{RD}_i: [0,1] \mapsto [0,1]$, for $2 \leq i \leq n$. In particular, focusing on the equalities considered in Definition~\ref{defn:deltaRD} (i.e., $v_i(x_{i-1}, x_i) = v_i(x_i, x_{i+1})$), we note that if we set the first cut point $x_1=x \in [0,1]$, then all the subsequent points $x_2, \ldots, x_{n}\in [0,1]$ are fixed as well.
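To make this ripple effect concrete, the following self-contained sketch (our own illustration; the exponential value densities, which satisfy MLRP and admit closed-form cut and eval queries, are an assumption chosen for simplicity) fixes the first cut point, ripples out the remaining cuts, and bisects on $x_1$ to push the last cut point towards $1$:

```python
import math

def make_agent(a):
    """Agent with value density f(t) = a e^{a t} / (e^a - 1) on [0, 1];
    increasing a across agents yields the MLRP order."""
    Z = math.exp(a) - 1.0
    def eval_q(l, r):                    # Eval query: value of interval [l, r]
        return (math.exp(a * r) - math.exp(a * l)) / Z
    def cut_q(l, tau):                   # Cut query: leftmost r with value tau
        if eval_q(l, 1.0) < tau:         # truncate to 1, per the convention
            return 1.0
        return math.log(math.exp(a * l) + tau * Z) / a
    return eval_q, cut_q

def ripple(agents, x1):
    """Fix the first cut at x1, then enforce v_i(x_{i-1},x_i) = v_i(x_i,x_{i+1})."""
    xs = [0.0, x1]
    for eval_q, cut_q in agents[:-1]:    # agents 1..n-1 place cuts x_2..x_n
        xs.append(cut_q(xs[-1], eval_q(xs[-2], xs[-1])))
    return xs

def delta_ripple_division(agents, delta=1e-6):
    lo, hi = 0.0, 1.0                    # last cut is 0 at x1=0 and 1 at x1=1
    while hi - lo > 1e-13:               # bisect on the first cut point
        mid = 0.5 * (lo + hi)
        if ripple(agents, mid)[-1] < 1.0 - delta:
            lo = mid
        else:
            hi = mid
    return ripple(agents, hi)

agents = [make_agent(a) for a in (1.0, 2.0, 3.0)]    # n = 3 agents, MLRP order
xs = delta_ripple_division(agents)
print(xs)
```

Each bisection step evaluates the ripple once, so the number of queries grows only logarithmically in $1/\delta$.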
In particular, $x_2$ is the point that satisfies $v_1(0, x) = v_1(x, x_2)$ and, iteratively, $x_{i+1}$ can be identified from $v_i(x_{i-1}, x_i) = v_i(x_i, x_{i+1})$. The functions $\mathrm{RD}_i$s capture this ``ripple'' effect and can be expressed as compositions of cut and eval queries. Formally, the functions $\mathrm{RD}_i : [0,1] \mapsto [0,1]$, for $i \in \{2,3, \dots, n\}$, are recursively defined as follows\footnote{The first cut point $x$ is specified upfront and, hence, we do not require $\mathrm{RD}_1$. The functions $\mathrm{RD}_i$s are defined for $i \in \{2, \ldots, n\}$.} \begin{align} \label{RDfunction} \mathrm{RD}_2(x) &\coloneqq \mathrm{Cut}_1\left(x, \mathrm{Eval}_1(0,x)\right) \nonumber\\ \mathrm{RD}_i(x) &\coloneqq \mathrm{Cut}_{i-1} \Big( \mathrm{RD}_{i-1}(x), \mathrm{Eval}_{i-1} \big( \mathrm{RD}_{i-2}(x), \mathrm{RD}_{i-1}(x) \big) \Big) \quad \text{ for } i \in \{3,4, \ldots, n\} \end{align} In particular, $\mathrm{RD}_n(x)$ denotes the value of the last cut point $x_n$ obtained by initializing the ripple effect with $x_1 = x$. Also, note that $\mathrm{RD}_n(0) = 0$ and $\mathrm{RD}_n(1) =1$; by convention, the response to a cut query $\mathrm{Cut}_i( \ell, \tau)$ is truncated to $1$ iff $\tau$ is greater than the entire value to the right of $\ell$, i.e., if $v_i(\ell, 1) < \tau$. Since $\mathrm{RD}_i$s can be expressed as a composition of the cut and eval queries, these functions can be efficiently computed in the Robertson-Webb query model. Moreover, using the fact that the cut and eval queries are $\lambda$-Lipschitz, in the following proposition we establish that $\mathrm{RD}_n$ is also Lipschitz continuous. \begin{restatable}{proposition}{PropositionCcnLipschitz} If in a cake-division instance $\mathcal{C} = \langle [n], \{f_i \}_{i} \rangle $ the cut and eval queries are $\lambda$-Lipschitz, then the function $\mathrm{RD}_n$ is $\lambda^{2(n-1)}$-Lipschitz. 
\label{RDn} \end{restatable} The proof of Proposition~\ref{RDn} is deferred to Appendix~\ref{appendix:exist-compute-rd}. The Lipschitz continuity of $\mathrm{RD}_n$ turns out to be a key property, both for establishing the existence of ripple divisions and in developing an efficient algorithm for finding them. Specifically, we can apply the intermediate value theorem to $\mathrm{RD}_n$ and prove that, for any $\delta >0$, there exists a $\delta$-ripple division; see Lemma~\ref{lemma:delta-rd-exist} below. The universal existence of $\delta$-ripple divisions, for all $\delta >0$, along with a limit argument ($\delta \to 0$), establishes our existential result (Lemma~\ref{RDexistence}) for ripple divisions, i.e., for $0$-ripple divisions. The Lipschitzness of $\mathrm{RD}_n$ also ensures that, via binary search, we can find a $\delta$-ripple division (through a point $x_1 \in (0,1)$ that satisfies $\mathrm{RD}_n(x_1) \geq 1 - \delta$), in time that is polynomial in $\log \left( \frac{1}{\delta} \right)$. This runtime dependence ensures that the precision parameter $\delta$ can be driven exponentially close to zero, in polynomial (in the bit complexity of $\delta$) time; see Lemma~\ref{RDcomputation}. \subsection{Existence of Ripple Divisions} \label{section:rd-exist} \begin{lemma} \label{lemma:delta-rd-exist} Let $\mathcal{C} = \langle [n], \{f_i \}_i \rangle$ be a cake-division instance, with $\lambda$-Lipschitz cut and eval queries, and let parameter $\delta \in (0,1)$. Then, in $\mathcal{C}$, there always exists a point $\widehat{x} \in (0,1)$ with the property that $\mathrm{RD}_n\left(\widehat{x}\right) = 1- \delta$. Furthermore, such a point $\widehat{x}$ initializes a $\delta$-ripple division: the points $x_0=0$, $x_1 = \widehat{x}$, and $x_i= \mathrm{RD}_i\left( \widehat{x} \right)$, for $2 \leq i \leq n$, form a $\delta$-ripple division in $\mathcal{C}$.
Here, the functions $\mathrm{RD}_2, \ldots, \mathrm{RD}_n$ are defined with respect to the cut and eval queries of $\mathcal{C}$. \end{lemma} \begin{proof} The function $\mathrm{RD}_n$ is continuous (Proposition \ref{RDn}) and it satisfies $\mathrm{RD}_n(0) = 0$ along with $\mathrm{RD}_n(1) =1$. Hence, the intermediate value theorem ensures that, for any $\delta \in (0,1)$, there must exist a point $\widehat{x} \in (0,1)$ which satisfies $\mathrm{RD}_n\left(\widehat{x} \right) = 1-\delta$. This shows that the required point $\widehat{x}$ always exists. We will complete the proof by establishing that the points $x_0=0$, $x_1 = \widehat{x}$, and $x_i= \mathrm{RD}_i\left( \widehat{x} \right)$, for $2 \leq i \leq n$, form a $\delta$-ripple division. Note that, since $\delta >0$, we have $\mathrm{RD}_n\left(\widehat{x} \right) <1$. This strict inequality implies that the intermediate points $x_i$s were \emph{not} truncated to one. Indeed, in this case, all of the value equalities in Definition~\ref{defn:deltaRD} hold, i.e., $v_i(x_{i-1}, x_i) = v_i(x_i, x_{i+1})$ for all $i \in [n-1]$. It remains to show that these values are positive. The bound $\delta <1$ gives us $\mathrm{RD}_n \left( \widehat{x} \right) >0 = \mathrm{RD}_n(0)$. Therefore, we have $\widehat{x}>0$, i.e., $x_1 = \widehat{x} > 0 = x_0$. In the current setting, all nonempty intervals have positive value for the agents. This follows from the continuity of the cut queries.\footnote{In fact, under MLRP, we explicitly have $f_i(x) >0$ for all $x \in [0,1]$, to have the likelihood ratios be well-defined.} Since the interval $[0, x_1]$ is nonempty, agent $1$ has a positive value for it, $v_1(0, x_1) >0$. Furthermore, the value equality $v_1(x_1, x_2) = v_1(0, x_1)$ gives us $v_1(x_1, x_2) > 0$, i.e., $x_2>x_1$. Extending this argument iteratively shows that $x_0 < x_1 < \ldots < x_n <1$. 
In other words, each agent receives a positive value under the cut points, i.e., $v_i(x_{i-1}, x_i) = v_i(x_i, x_{i+1}) >0$ for all $i \in [n-1]$. Hence, the $x_i$s form a $\delta$-ripple division. \end{proof} Next, we use Lemma~\ref{lemma:delta-rd-exist} and a limit argument ($\delta \to 0$) to establish the universal existence of ripple divisions. \begin{restatable}{lemma}{RDexistence} \label{RDexistence} Let $\mathcal{C}$ be a cake-division instance in which the cut and eval queries are $\lambda$-Lipschitz. Then, $\mathcal{C}$ necessarily admits a ripple division. \end{restatable} \begin{proof} Lemma~\ref{lemma:delta-rd-exist} asserts that, for any $\delta \in (0,1)$, there exists a collection of points $x^{\delta}_0=0 < x^{\delta}_1 < \ldots < x^{\delta}_{n-1} < x^{\delta}_n = 1-\delta$ that form a $\delta$-ripple division. Note that here $x^\delta_n = 1- \delta$. We consider the sequence of these $\delta$-ripple divisions, $\mathcal{S} \coloneqq \left\langle (x^{\delta}_0, x^{\delta}_1, \ldots, x^{\delta}_n) \right\rangle_{\delta \in (0,1)}$. Since $\mathcal{S}$ is a bounded sequence in $[0,1]^{n+1}$, the Bolzano-Weierstrass theorem ensures that $\mathcal{S}$ contains a convergent subsequence, say $\left\langle ({x}^{\delta_j}_0, {x}^{\delta_j}_1, \dots, {x}^{\delta_j}_n)\right\rangle_{\delta_j>0}$. Write $(x^*_0, x^*_1, \dots, x^*_n) \in [0,1]^{n+1}$ to denote the limit of this subsequence as $\delta_j$ tends to zero. We will show that the points $x^*_0, x^*_1, \ldots, x^*_n$ form a ripple division in $\mathcal{C}$ (see Definition~\ref{defn:RD}), i.e., establish that $x^*_n =1$ and $v_i(x^*_{i-1}, x^*_i) = v_i(x^*_i,x^*_{i+1})>0$ for all agents $i \in [n-1]$. First, note that $x^*_n =1$, since the sequence $\langle 1-\delta_j \rangle \to 1$ as $\delta_j \to 0$. Also, $x^*_0 =0$, since the constant sequence $\langle 0\rangle$ tends to $0$. We will next prove that $v_{i}(x^*_{i-1}, x^*_{i}) = v_{i}(x^*_{i}, x^*_{i+1})$ for all agents $i \in [n-1]$.
Given that the collection $\left( {x}^{\delta_j}_0, {x}^{\delta_j}_1, \dots, {x}^{\delta_j}_n \right)$ forms a $\delta_j$-ripple division, we have $v_{i}({x}^{\delta_j}_{i-1}, {x}^{\delta_j}_{i}) = v_{i}({x}^{\delta_j}_{i}, {x}^{\delta_j}_{i+1})$, for all $i \in [n-1]$. Recall that the valuation function $v_i$ (equivalently, $\mathrm{Eval}_i$) is continuous over $[0,1]^2$. Therefore, the sequential criterion of continuity\footnote{For any continuous function $g: [0,1]^2 \mapsto \mathbb{R}$, if a sequence $\langle a_t\rangle_t \in [0,1]^2$ converges to some $a \in [0,1]^2$, then the sequence $\langle g(a_t) \rangle_t$ must converge to $g(a)$.} gives us $\langle v_{i}({x}^{\delta_j}_{i-1}, {x}^{\delta_j}_{i}) \rangle \to v_i(x^*_{i-1}, x^*_i)$ and $\langle v_{i}({x}^{\delta_j}_{i}, {x}^{\delta_j}_{i+1}) \rangle \to v_i(x^*_i, x^*_{i+1})$, as $\delta_j$ tends to $0$. Applying the algebra of limits\footnote{For any two convergent sequences $\langle a_t \rangle_t \to a$ and $\langle b_t \rangle_t \to b$, if we have $a_t = b_t$ for all $t \geq 1$, then their limits must be equal as well, i.e., $a=b$.} to the two sequences $\langle v_{i}({x}^{\delta_j}_{i-1}, {x}^{\delta_j}_{i})\rangle$ and $\langle v_{i}({x}^{\delta_j}_{i}, {x}^{\delta_j}_{i+1})\rangle$, we obtain that their limits must be equal as well, i.e., $v_i(x^*_{i-1}, x^*_i) = v_i(x^*_i, x^*_{i+1})$. We now complete the proof by showing that these two equal values must be positive, i.e., $v_i(x^*_{i-1}, x^*_i) = v_i(x^*_i,x^*_{i+1})>0$. Since $x^*_n = 1$, we have $x^*_1>0$; otherwise, if $x^*_1 = 0$, then following the value equalities we would have $x^*_n = \mathrm{RD}_n(0) = 0$. In the current setting, all nonempty intervals have positive value for the agents; this follows from the Lipschitz continuity of the cut queries. Hence, $x^*_1> x^*_0 = 0$ gives us $0 < v_1(x^*_0, x^*_1) = v_1(x^*_1, x^*_2)$. This bound also implies $x^*_2 > x^*_1$.
Extending this argument iteratively shows that $0= x^*_{0} < x^*_{1} < \ldots < x^*_n =1$. Hence, the agents' values are positive, $v_i(x^*_{i-1}, x^*_i) >0$. Overall, we get that the points $0=x^*_0<x^*_1< \dots < x^*_n=1$ form a ripple division in $\mathcal{C}$. \end{proof} \subsection{Computation of Ripple Divisions} \label{section:rd-compute} In this section, we present an efficient algorithm for computing $\delta$-ripple divisions. Lemma~\ref{lemma:delta-rd-exist} implies that the problem of computing a $\delta$-ripple division reduces to finding a point $x \in (0,1)$ that satisfies $\mathrm{RD}_n(x) \in [1- \delta, 1)$. The algorithm \textsc{BinSearch} (Algorithm~\ref{alg:BinSearch}) finds such a point $x$ (and, hence, a $\delta$-ripple division) via binary search. It is well-known that binary search can be used to compute fixed points in the one-dimensional setting. We provide the relevant details here for completeness. Specifically, in \textsc{BinSearch}, we initialize $\ell = 0$ along with $r =1$ and keep bisecting the interval $[\ell, r]$ as long as $\mathrm{RD}_n(\ell) < 1 - \delta$ and $\mathrm{RD}_n(r) =1$. Recall that $\mathrm{RD}_n(0) = 0$ and $\mathrm{RD}_n(1) = 1$. Since $\mathrm{RD}_n$ is Lipschitz continuous (Proposition~\ref{RDn}), the intermediate value theorem (applied to $[\ell, r]$ with the bounds $\mathrm{RD}_n(\ell) < 1 - \delta$ and $\mathrm{RD}_n(r) =1$) guarantees that in each considered interval $[\ell, r]$ there exists a point $x \in (\ell, r)$ which satisfies $\mathrm{RD}_n(x) = 1- \delta$.\footnote{Note that this argument (in particular, the use of the intermediate value theorem) does not require $\mathrm{RD}_n$ to be monotonic. Also, recall that $\mathrm{RD}_i$s can be computed efficiently in the Robertson-Webb query model.
Hence, evaluating $\mathrm{RD}_i$s in \textsc{BinSearch} leads to at most a polynomial overhead in its runtime.} In each iteration of this algorithm, the length of the considered interval $[\ell, r]$ reduces by a multiplicative factor of two and, hence, after an appropriate number of iterations, the required point $x$ and the midpoint $(\ell + r)/2$ of the interval get sufficiently close. In such a case, one can show (using the Lipschitz continuity of $\mathrm{RD}_n$) that the midpoint $(\ell + r)/2$ itself initializes a $2 \delta$-ripple division. Appendix~\ref{section:proofRDc} formalizes this runtime analysis and provides a proof of the following lemma. \begin{algorithm} { \small {\bf Input:} A cake-division instance $\mathcal{C} = \langle [n], \{f_i \}_{i} \rangle$, in the Robertson-Webb query model, and parameter $\delta >0$. \\ {\bf Output:} A $\delta$-ripple division $0=x_0 \leq x_1 \leq x_2 \leq \ldots \leq x_n \leq 1$. \caption{\textsc{BinSearch}} \label{alg:BinSearch} \begin{algorithmic}[1] \STATE Initialize $\ell = 0$ and $r=1$ \WHILE{$\ell < r$} \IF {$ \mathrm{RD}_n\left( \frac{\ell+r}{2} \right) < 1 - \delta$} \STATE Update $\ell \leftarrow (\ell+r)/2$ \label{step:update-left} \ELSIF {$ \mathrm{RD}_n\left( \frac{\ell+r}{2} \right) =1$} \STATE Update $r \leftarrow (\ell+r)/2$ \label{step:update-right} \ELSIF {$ \mathrm{RD}_n\left(\frac{\ell+ r}{2} \right) \in [1 - \delta, 1) $} \STATE Set $x_0 = 0$, $x_1 = \frac{\ell+r}{2}$, and $x_i = \mathrm{RD}_i(x_1)$ for all $i \in \{2, \ldots, n\}$ \RETURN the collection of points $x_0, x_1, \ldots, x_n$ \ENDIF \ENDWHILE \end{algorithmic} } \end{algorithm} \begin{restatable}{lemma}{RDcomputation} \label{RDcomputation} Let $\mathcal{C} = \langle [n], \{f_i \}_{i} \rangle $ be a cake-division instance in which the cut and eval queries are $\lambda$-Lipschitz.
Then, for any $\delta \in (0,1)$ and in the Robertson-Webb query model, a $\delta$-ripple division of $\mathcal{C}$ can be computed in $\mathcal{O}\left( {\rm poly} ( n, \log \lambda, \log \frac{1}{\delta}) \right)$ time. \end{restatable} \subsection{From Ripple Divisions to Envy-Free Allocations} The next theorem establishes the crucial connection between ripple divisions and envy-freeness. In particular, setting $\delta = 0$ in this theorem, we obtain that, under MLRP, every (exact) ripple division induces an envy-free allocation. \begin{restatable}{theorem}{RD-EF} \label{theorem:RD-EF} Let $\mathcal{C} = \langle [n], \{f_i \}_{i} \rangle$ be a cake-division instance in which the value densities satisfy the monotone likelihood ratio property and let parameter $\delta \geq 0$. Then, every $\delta$-ripple division, $x_0=0 \leq x_1 \leq \dots \leq x_{n-1} \leq x_n \leq 1$, in $\mathcal{C}$ induces an envy-free partial allocation $\{ I_i = [x_{i-1}, x_i] \}_{i=1}^n$. \end{restatable} This theorem asserts that here the partial allocation $\mathcal{I}=\{I_1, \ldots, I_n\}$ (with $I_i = [x_{i-1}, x_i]$) satisfies $v_i(I_i) \geq v_i(I_j)$, for all $i, j \in [n]$, and at most a $\delta$-length piece of the cake (specifically, $[x_n, 1]$) remains unallocated in $\mathcal{I}$. \begin{proof} We will show that if the points $x_0 = 0 \leq x_1 \leq x_2 \leq \ldots \leq x_n \leq 1$ form a $\delta$-ripple division, then the partial allocation $\{I_i = [x_{i-1}, x_i]\}_{i=1}^n$ is envy-free. Here, the definition of a $\delta$-ripple division (Definition~\ref{defn:deltaRD}) ensures that, for each agent $i \in [n-1]$, the values of the two consecutive intervals $I_i$ and $I_{i+1}$ are equal and positive: \begin{align} \label{equalityRD} v_i(I_i) = v_i(x_{i-1}, x_i) & = v_i(x_i, x_{i+1}) = v_i(I_{i+1}) >0 \end{align} Recall that the agents are indexed following the MLRP order, i.e., for each $i \in [n-1]$, the likelihood ratio $f_{i+1}/f_i$ is nondecreasing.
We fix an agent $i \in [n]$ and establish envy-freeness with respect to $i$ by considering two complementary cases: (i) for agents to the left of $i$, we prove that $v_i(I_1) \leq v_i(I_2) \leq \ldots \leq v_i(I_{i}) $ and (ii) for agents to the right of $i$, we prove that $v_i(I_i) \geq v_i(I_{i+1}) \geq \ldots \geq v_i(I_n)$. \\ \noindent Case (i): Consider any agent $k \in [n]$ such that $k<i$. Given that $f_k$ and $f_i$ bear MLRP (i.e., $f_i/f_k$ is non-decreasing), property (i) of Lemma~\ref{lemma:R2R}, with $a = x_{k-1}$, $b=c=x_k$, and $d=x_{k+1}$, gives us $\frac{v_i(x_{k-1}, x_{k})}{v_k(x_{k-1},x_{k})} \leq \frac{v_i(x_{k}, x_{k+1})}{v_k(x_{k},x_{k+1})}$. That is, for the intervals $I_k = [x_{k-1},x_{k}]$ and $I_{k+1} = [x_{k}, x_{k+1}]$ we have \begin{align} \label{noleftenvy} \frac{v_i(I_k)}{v_k(I_k)} & \leq \frac{v_i(I_{k+1})}{v_k(I_{k+1})} \end{align} Instantiating equation~(\ref{equalityRD}) for agent $k$, we can simplify inequality (\ref{noleftenvy}) to $v_i(I_k) \leq v_i(I_{k+1})$. Combining this inequality across all $k<i$, we obtain the desired chain of inequalities for agent $i$, i.e., $v_i(I_1) \leq v_i(I_2) \leq \ldots \leq v_i(I_{i}) $. \noindent Case (ii): Consider any agent $j \in [n]$ such that $j>i$. Given that $f_i$ and $f_j$ bear MLRP (i.e., $f_j/f_i$ is non-decreasing), property (i) of Lemma~\ref{lemma:R2R}, with $a = x_{j-1}$, $b=c=x_j$, and $d=x_{j+1}$, gives us $\frac{v_j (x_{j-1}, x_j)}{v_i(x_{j-1}, x_j)} \leq \frac{v_j(x_{j}, x_{j+1})}{v_i(x_{j},x_{j+1})}$. That is, for the intervals $I_j = [x_{j-1},x_{j}]$ and $I_{j+1} = [x_{j}, x_{j+1}]$ we have \begin{align} \label{norightenvy} \frac{v_j(I_j)}{v_i(I_j)} \leq \frac{v_j(I_{j+1})}{v_i(I_{j+1})} \end{align} Instantiating equation~(\ref{equalityRD}) for agent $j$, we can simplify inequality (\ref{norightenvy}) to $v_i(I_j) \geq v_i(I_{j+1})$.
Combining this inequality across all $j>i$, we obtain the desired chain of inequalities for agent $i$, i.e., $v_i(I_i) \geq v_i(I_{i+1}) \geq \ldots \geq v_i(I_n)$. The above two cases establish that agent $i \in [n]$ does not envy any other agent, i.e., $\mathcal{I}= \{I_1, \ldots, I_n\}$ is an envy-free partial allocation. Moreover, if $\delta = 0$, then $\mathcal{I}$ covers the entire cake, i.e., we obtain an envy-free allocation. \end{proof} Notably, Lemma~\ref{RDexistence} and Theorem~\ref{theorem:RD-EF} (with $\delta=0$) provide a stand-alone proof of the existence of envy-free allocations in cake-division instances with MLRP. The next section establishes an algorithmic counterpart of this existential result; specifically, we show that using Lemma~\ref{RDcomputation} one can directly obtain an efficient algorithm for finding envy-free allocations, under MLRP. \subsection{Proof of Theorem~\ref{theorem:EFdivision}} \label{proofthm1} This section restates and proves our main result (Theorem~\ref{theorem:EFdivision}) for envy-freeness. \EFdivision* \begin{proof} Given a cake-division instance $\mathcal{C}$, with MLRP, and precision parameter $\eta >0$, we invoke Lemma~\ref{RDcomputation} to find an $\left(\frac{\eta}{\lambda}\right)$-ripple division in $\mathcal{O}\left( {\rm poly} ( n, \log \lambda, \log \frac{1}{\eta} ) \right)$ time. Write $x_0=0 \leq x_1 \leq \dots \leq x_{n-1} \leq x_n \leq 1$ to denote the computed $\left(\frac{\eta}{\lambda}\right)$-ripple division and let $\mathcal{I} = \{I_1, \ldots, I_n\}$ be the corresponding partial allocation; here $I_i = [x_{i-1}, x_i]$. Theorem~\ref{theorem:RD-EF} ensures that $\mathcal{I}$ is envy-free. We will show that assigning the unallocated (in $\mathcal{I}$) piece $[x_n, 1]$ to agent $n$ provides a complete allocation that satisfies envy-freeness, up to $\eta$ precision.
Write $\mathcal{I}^* \coloneqq \{I^*_1, I^*_2, \ldots, I^*_{n-1}, I^*_n\}$ to denote this allocation in which the $n$th agent receives the interval $I^*_n \coloneqq [x_{n-1}, 1]$ (equivalently, $I^*_n = I_n \cup [x_n, 1]$) and $I^*_i = I_i$ for the remaining agents $i \in [n-1]$. Note that, against all agents $j \in [n-1]$, envy-freeness of $\mathcal{I}^*$ directly follows from the fact that the partial allocation $\mathcal{I}$ is envy-free: $v_i(I^*_i) \geq v_i(I_i) \geq v_i(I_j) = v_i(I^*_j)$, for all $i \in [n]$ and all $j \in [n-1]$. Finally, we address envy against agent $n$. Recall that the $x_i$s form a $\left(\frac{\eta}{\lambda}\right)$-ripple division; hence, $x_n \geq 1 - \frac{\eta}{\lambda}$. In addition, the fact that the eval queries are $\lambda$-Lipschitz gives us $v_i([x_n, 1]) \leq \eta$ for all agents $i \in [n]$. Hence, for all $i \in [n]$ we have \begin{align*} v_i(I^*_i) & \geq v_i(I_i) \geq v_i (I_n) \tag{since $\mathcal{I}$ is an envy-free partial allocation} \\ & = v_i (I^*_n) - v_i ( [x_n, 1]) \tag{since $I^*_n = I_n \cup [x_n, 1]$} \\ & \geq v_i(I^*_n) - \eta \end{align*} Therefore, $\mathcal{I}^*$ satisfies envy-freeness, up to $\eta$ precision: $v_i(I^*_i) \geq v_i(I^*_j) - \eta$ for all $i, j \in [n]$. The time complexity obtained via Lemma~\ref{RDcomputation} implies that the parameter $\eta$ can be driven exponentially close to zero, in time that is polynomial in $\log \frac{1}{\eta}$ (i.e., in the bit complexity of $\eta$). Hence, we can find an envy-free allocation, up to arbitrary precision, in $\mathcal{O}\left( {\rm poly} ( n, \log \lambda ) \right)$ time. Theorem~\ref{theorem:EFdivision} now stands proved. \end{proof} \section{Implications of the Monotone Likelihood Ratio Property} This section provides useful implications of MLRP (Lemma~\ref{lemma:R2R}). These implications will be used in subsequent sections in the analysis of our algorithms.
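The implications of MLRP developed in this section lend themselves to a quick numerical sanity check. The Python snippet below uses an illustrative MLRP pair of our own choosing (the densities $f_1(x) = 1$ and $f_2(x) = x + 1/2$ are hypothetical, not instances from the paper) and verifies, for a few intervals, that interval-value ratios are ordered left to right and that normalized right-tail values of the "later" agent dominate; this is an informal sketch, not part of the formal development.

```python
# Illustrative MLRP pair (hypothetical choices): f1(x) = 1 and
# f2(x) = x + 1/2 on [0,1]. Both integrate to 1, and the likelihood
# ratio f2/f1 = x + 1/2 is increasing, so MLRP holds with f1 before f2.

def v1(a, b):
    """Agent 1's value of [a, b]: integral of the constant density 1."""
    return b - a

def v2(a, b):
    """Agent 2's value of [a, b]: integral of x + 1/2."""
    return (b * b - a * a) / 2 + (b - a) / 2

# Normalization check: both agents value the whole cake at 1.
assert abs(v1(0, 1) - 1) < 1e-12 and abs(v2(0, 1) - 1) < 1e-12

# Ratio ordering: for intervals [a,b] and [c,d] with b <= c, the value
# ratio v2/v1 does not decrease as the interval moves to the right.
left_ratio = v2(0.1, 0.3) / v1(0.1, 0.3)
right_ratio = v2(0.5, 0.9) / v1(0.5, 0.9)
assert left_ratio <= right_ratio

# Normalized dominance: within [a,b], the right tail [x,b] carries a
# weakly larger normalized share of agent 2's value than of agent 1's.
a, b = 0.2, 0.8
for k in range(1, 10):
    x = a + k * (b - a) / 10
    assert v1(x, b) / v1(a, b) <= v2(x, b) / v2(a, b)

print(left_ratio, right_ratio)
```

The same check can be repeated with any pair of densities whose likelihood ratio is nondecreasing; only the closed-form integrals in `v1` and `v2` need to change.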
\begin{restatable}{lemma}{RacetotheRatios} \label{lemma:R2R} Let $f_i$ and $f_j$ be two (ordered) value-density functions that bear MLRP, i.e., the likelihood ratios satisfy $\frac{f_{j} (b)}{f_i(b)} \leq \frac{f_{j}(c)}{f_i(c)}$ for all $0 \leq b \leq c \leq 1$. Then, $f_i$ and $f_j$ satisfy the following two properties: \begin{itemize} \setlength\itemsep{0.001em} \item[(i)] The values of the intervals satisfy $\frac{\int \limits_a^b f_{j}}{\int \limits_a^b f_{i}} \leq \frac{\int \limits_c^d f_{j}}{\int \limits_c^d f_{i}}$ for all $[a,b], [c,d] \subseteq [0,1]$ with $ b \leq c$. \item[(ii)] The (normalized) values satisfy $\frac{\int \limits_x^b f_i}{\int \limits_a^b f_i} \leq \frac{\int \limits_x^b f_{j}}{\int \limits_a^b f_{j}} $ for all intervals $[a,b] \subseteq [0,1]$ and all $x \in [a,b]$. \end{itemize} Moreover, properties (i) and (ii) are equivalent. \end{restatable} Here, if the likelihood ratio $\frac{f_{j}(x)}{f_i(x)}$ is strictly increasing, then we have a strict inequality in the corresponding implications. The proof of this lemma is deferred to Appendix~\ref{appendix:proof-race-to-ratio}. Note that, in terms of the agents' valuations $v_i$ and $v_j$, property (i) in Lemma~\ref{lemma:R2R} can be expressed as $\frac{v_{j} (a,b)}{ v_i(a,b) } \leq \frac{v_{j}(c,d)}{v_i(c,d)}$ for all $[a,b], [c,d] \subseteq [0,1]$ with $ b \leq c$. Similarly, property (ii) corresponds to $\frac{v_{i} (x,b)}{ v_i(a,b) } \leq \frac{v_{j}(x,b)}{v_{j}(a,b)}$ for all $[a,b] \subseteq [0,1]$ and all $ x \in [a,b]$. It is well-known that MLRP implies first-order stochastic dominance (see Lemma~\ref{SD}). Interestingly, property (ii) provides a strengthening: over any interval $[a,b]$, the normalized (by $v_{j}(a, b)$) values of agent $j$ stochastically dominate the normalized (by $v_{i}(a, b)$) values of agent $i$. \section*{Acknowledgements} We thank Manjunath Krishnapur for helpful discussions and references.
Siddharth Barman gratefully acknowledges the support of a Ramanujan Fellowship (SERB - {SB/S2/RJN-128/2015}) and a Pratiksha Trust Young Investigator Award. Nidhi Rathi's research is generously supported by an IBM PhD Fellowship. \bibliographystyle{alpha} \section{Conclusion and Future Work} The current work studies algorithmic aspects of contiguous cake division under the monotone likelihood ratio property. The scope of this property ensures that the developed algorithms are applicable in various cake-division settings. We also note that while under MLRP the value densities must have full support, the developed framework is somewhat robust to this requirement. For example, our results extend to the class of non-full-support value densities considered in \cite{alijani2017envy}. In particular, Alijani et al.~\cite{alijani2017envy} established that a contiguous envy-free cake division can be efficiently computed if every agent uniformly values a single interval and these intervals satisfy an \emph{ordering} property. Appendix~\ref{appendix:structured-perturbations-for-MLRP} shows that here one can modify the value densities to a small degree and obtain MLRP (with full support). Hence, applying our results, one can efficiently compute an allocation with arbitrarily small envy in the modified instance and, hence, also in the original one. Generalizing such ideas to address, say, value densities that bear first-order stochastic dominance is an interesting direction for future work. For instances with MLRP, finding an allocation that maximizes various welfare notions among the set of envy-free allocations is an important thread for future work. Another relevant notion of fairness in the context of cake cutting is that of a \emph{perfect division}~\cite{alon1987splitting}. In such a division $\mathcal{D} = \{D_1, D_2, \ldots, D_n\}$, each agent $i \in [n]$ values every piece at $1/n$, i.e., $v_i(D_j) = 1/n$ for all $i, j \in [n]$.
In contrast to the other solution concepts considered in the present paper, perfect divisions are not guaranteed to exist under the contiguity requirement; a perfect \emph{allocation} might not exist even with MLRP (Appendix~\ref{appendix:perfect-cuts-nonexample}). However, Alon~\cite{alon1987splitting} has shown that a perfect division with $n(n-1)$ cuts always exists. Perfect cake divisions are particularly useful since they lead to truthful mechanisms for cake division \cite{mossel2010truthful, chen2013truth}. Hence, developing efficient algorithms to find (noncontiguous) perfect divisions under MLRP is a relevant thread for future work. More broadly, it would be interesting to identify tractable classes through MLRP in other computational social choice contexts, such as discrete fair division and voting. \section{Missing Proofs from Section~\ref{EFdivisions}} \label{appendix:exist-compute-rd} \subsection{Proof of Proposition~\ref{RDn}} Here we restate and prove Proposition~\ref{RDn}. \PropositionCcnLipschitz* \begin{proof} Applying strong induction over $i \in \{2, 3, \ldots, n\}$, we will prove that the function $\mathrm{RD}_i$ is $\lambda^{2(i-1)}$-Lipschitz. For the base case of $\mathrm{RD}_2$, consider the following bound, with points $x, x' \in [0,1]$: \begin{align*} \big| \mathrm{RD}_2(x') - \mathrm{RD}_2(x)\big| &= \big| \mathrm{Cut}_1\left( x', \mathrm{Eval}_1(0,x') \right) - \mathrm{Cut}_1\left( x, \mathrm{Eval}_1(0,x) \right)\big| \tag{by definition of $\mathrm{RD}_2$}\\ & \leq \lambda \max\{|x'-x| , |\mathrm{Eval}_1(0,x')-\mathrm{Eval}_1(0,x)|\} \tag{$\mathrm{Cut}_1$ is $\lambda$-Lipschitz}\\ & \leq \lambda \max\{|x'-x| , \lambda |x'-x|\} \tag{$\mathrm{Eval}_1$ is $\lambda$-Lipschitz}\\ & = \lambda^2 |x'-x| \tag{since $\lambda \geq 1$} \end{align*} Hence, $\mathrm{RD}_2$ is $\lambda^2$-Lipschitz.
Next, assuming that, for all $k \leq i-1$, $\mathrm{RD}_k$ is $\lambda^{2(k-1)}$-Lipschitz, we will prove that $\mathrm{RD}_i$ is $\lambda^{2(i-1)}$-Lipschitz. Note that the following bound holds for all $x', x \in [0,1]$ \begin{align} & \mathrm{Eval}_{i-1} \left(\mathrm{RD}_{i-2}(x'), \mathrm{RD}_{i-1}(x')\right) - \mathrm{Eval}_{i-1} \left( \mathrm{RD}_{i-2}(x),\mathrm{RD}_{i-1}(x) \right) \nonumber \\ & \leq \lambda \max \{ |\mathrm{RD}_{i-2}(x') - \mathrm{RD}_{i-2}(x)| , |\mathrm{RD}_{i-1}(x') - \mathrm{RD}_{i-1}(x)| \} \tag{since $\mathrm{Eval}_{i-1}$ is $\lambda$-Lipschitz} \nonumber\\ & \leq \lambda \max \{ \lambda^{2(i-3)} |x'-x| , \lambda^{2(i-2)} |x'-x| \} \nonumber \tag{using the induction hypothesis for $i-2$ and $i-1$} \\ & = \lambda \ \lambda^{2(i-2)} |x'-x| \tag{since $\lambda \geq 1$} \nonumber \\ & = \lambda^{2i-3} |x'-x| \label{eval} \end{align} We can now bound $\big| \mathrm{RD}_i(x') - \mathrm{RD}_i(x)\big|$ for all $x', x \in [0,1]$: \begin{align*} & \big| \mathrm{RD}_i(x') - \mathrm{RD}_i(x)\big| \\ &\ = \big| \mathrm{Cut}_{i-1}\left(\mathrm{RD}_{i-1}(x'), \mathrm{Eval}_{i-1}\left(\mathrm{RD}_{i-2}(x'),\mathrm{RD}_{i-1}(x')\right) \right) - \mathrm{Cut}_{i-1} \left(\mathrm{RD}_{i-1}(x), \mathrm{Eval}_{i-1}\left(\mathrm{RD}_{i-2}(x),\mathrm{RD}_{i-1}(x)\right) \right) \big|\\ & \ \leq \lambda \max \{|\mathrm{RD}_{i-1}(x')-\mathrm{RD}_{i-1}(x)| , \lambda^{2i-3} |x'-x|\} \tag{since $\mathrm{Cut}_{i-1}$ is $\lambda$-Lipschitz and by equation~(\ref{eval})}\\ & \ \leq \lambda \max\{\lambda^{2(i-2)}|x'-x| , \lambda^{2i-3} |x'-x|\} \tag{using the induction hypothesis for $i-1$} \\ & \ = \lambda^{2(i-1)} |x'-x| \tag{since $\lambda \geq 1$} \end{align*} Therefore, $\mathrm{RD}_i$ is $\lambda^{2(i-1)}$-Lipschitz for all $2 \leq i \leq n$. Setting $i = n$ gives us the desired claim.
\end{proof} \subsection{Proof of Lemma~\ref{RDcomputation}} \label{section:proofRDc} This section restates Lemma~\ref{RDcomputation} and shows that \textsc{BinSearch} (Algorithm~\ref{alg:BinSearch}) satisfies this claim. \RDcomputation* \begin{proof} Lemma~\ref{lemma:delta-rd-exist} implies that the problem of computing a $\delta$-ripple division reduces to finding a point $x \in (0,1)$ that satisfies $\mathrm{RD}_n(x) \in [1- \delta, 1)$. Here, we will show that \textsc{BinSearch} (Algorithm~\ref{alg:BinSearch}) finds such a point $x$ (and, hence, a $\delta$-ripple division) in $\mathcal{O}\left( {\rm poly} ( n, \log \lambda, \log \frac{1}{\delta}) \right)$ time. By design, \textsc{BinSearch} maintains two points $\ell \leq r$ and keeps iterating till it finds a point $x = (\ell + r)/2$ that satisfies the required property $\mathrm{RD}_n\left( x \right) \in [1- \delta, 1)$. Hence, at termination the algorithm indeed finds a $\delta$-ripple division. Below, we complete the proof by showing that the time complexity of the algorithm is as stated (in particular, the algorithm necessarily terminates). Note that, throughout the algorithm's execution, the maintained points $\ell$ and $r$ satisfy $\mathrm{RD}_n(\ell) < 1 - \delta$ and $\mathrm{RD}_n(r) =1$. This invariant holds at the beginning of the algorithm, where we initialize $\ell = 0$ and $r=1$; recall that $\mathrm{RD}_n(0) = 0$ and $\mathrm{RD}_n(1) = 1$. Furthermore, in each iteration of the while-loop in \textsc{BinSearch}, the left endpoint $\ell$ gets updated ($\ell \leftarrow (\ell+r)/2$) iff the midpoint $(\ell+r)/2$ satisfies $\mathrm{RD}_n\left( \frac{\ell+r}{2} \right) < 1 - \delta$. That is, even after the update we have $\mathrm{RD}_n(\ell) < 1 - \delta$. Similarly, for the right endpoint $r$, during the algorithm's execution we have $\mathrm{RD}_n(r) = 1$. This invariant implies that between $\ell$ and $r$ we always have a point $x \in (\ell, r)$ which satisfies $\mathrm{RD}_n(x) = 1 - \delta/2$.
This observation is obtained by applying the intermediate value theorem to the (continuous) function $\mathrm{RD}_n$ on the interval $[\ell, r]$. Hence, the interval under consideration, $[\ell, r]$, continues to contain a required point. Furthermore, in each iteration of the while-loop of \textsc{BinSearch}, the difference between $\ell$ and $r$ reduces by a multiplicative factor of two. Hence, after $T$ iterations we must have $r - \ell \leq \frac{1}{2^T}$. In particular, the difference between $(\ell+r)/2$ and the desired point $x\in (\ell, r)$ is at most $\frac{1}{2^T}$, after $T$ iterations. Therefore, the following bound holds after $T$ iterations \begin{align*} \left|\mathrm{RD}_n(x) - \mathrm{RD}_n \left( \frac{\ell+r}{2} \right) \right| & \leq \lambda^{2(n-1)} \left| x - \left(\frac{\ell+r}{2} \right) \right| \tag{Proposition~\ref{RDn}: $\mathrm{RD}_n$ is $\lambda^{2(n-1)}$-Lipschitz} \\ & \leq \frac{\lambda^{2(n-1)}}{2^T} \end{align*} Note that, if the iteration count $T$ is greater than $2\left(n - 1\right) \ \log \left( \frac{2 \lambda}{\delta} \right)$, then $\left|\mathrm{RD}_n(x) - \mathrm{RD}_n \left( \frac{\ell+r}{2} \right) \right| < \delta/2$. Since $\mathrm{RD}_n(x) = 1 - \delta/2$, we get that $\mathrm{RD}_n \left( \frac{\ell+r}{2} \right) \in (1- \delta, 1)$. That is, in at most $2\left(n - 1\right) \ \log \left( \frac{2 \lambda}{\delta} \right)$ iterations \textsc{BinSearch} will certainly find a point $\left( \frac{\ell+r}{2} \right)$ that satisfies the termination criterion $\mathrm{RD}_n \left( \frac{\ell+r}{2} \right) \in [1-\delta, 1)$. This shows that the time complexity of \textsc{BinSearch} is $\mathcal{O}\left( {\rm poly} ( n, \log \lambda, \log \frac{1}{\delta}) \right)$ and completes the proof.
\end{proof} \section{Nonexistence of Contiguous Perfect Divisions under MLRP} \label{appendix:perfect-cuts-nonexample} A cake division $\mathcal{D} = \{D_1, D_2, \dots, D_n\}$ (consisting of connected or disconnected pieces) is said to be \emph{perfect} if all the agents agree on the value of every piece, i.e., $v_i(D_j) = 1/n$ for all $i, j \in [n]$. Perfect divisions are not guaranteed to exist under the contiguity requirement, i.e., there are cake-division instances that do not admit perfect allocations. In this section, we will show that this nonexistence continues to hold even with MLRP. That is, we will provide a cake-division instance with MLRP that does not admit a perfect allocation. Consider a cake-division instance $\mathcal{C}=\langle [2], \{f_1, f_2\} \rangle$ with two agents. For a fixed constant $\alpha \in (0,1)$, we define the value densities of the two agents, $f_1, f_2: [0,1] \mapsto \mathbb{R}_+$, as follows: \begin{equation*} f_1(x) = \begin{cases} 1+ \alpha & \text{if} \ 0 \leq x \leq (1- \alpha) \\ \alpha & \text{if} \ (1-\alpha) < x \leq 1 \end{cases} \qquad \text{ and } \qquad \ \ f_2(x) = \begin{cases} 1- \alpha & \text{if} \ 0 \leq x \leq (1- \alpha) \\ 2- \alpha & \text{if} \ (1-\alpha) < x \leq 1 \end{cases} \end{equation*} The agents' values for the cake $[0,1]$ are normalized, $\int_0^1 f_1(x)dx = \int_0^1 f_2(x)dx = 1$. Also, the likelihood ratio of $f_1$ and $f_2$ satisfies \begin{equation*} \frac{f_2(x)}{f_1(x)} = \begin{cases} \frac{1- \alpha}{1+\alpha} & \text{if} \ 0 \leq x \leq (1- \alpha) \\ \frac{2- \alpha}{\alpha}& \text{if} \ (1-\alpha) < x \leq 1 \end{cases} \end{equation*} Since $\frac{2- \alpha}{\alpha} > \frac{1- \alpha}{1+\alpha} $, for all $\alpha \in (0,1)$, the likelihood ratio is nondecreasing over $[0,1]$ and, hence, the densities bear MLRP. We assume, towards a contradiction, that there exists a perfect allocation in $\mathcal{C}$.
That is, there exists a point $x \in [0,1]$ such that $v_1(0,x) = v_2(0,x) = 1/2$, and $v_1(x,1) = v_2(x,1) =1/2$. The fact that the value of the intervals $[0,x]$ and $[x,1]$ is equal to $1/2$ ensures that the point $x$ cannot be $0$ or $1$. We consider two complementary and exhaustive cases: Case (i) $0<x \leq (1- \alpha)$ and Case (ii) $ (1- \alpha) < x \leq 1$. The analysis is similar in both cases. Hence, we only address Case (i) and omit the analysis for Case (ii). For the first interval $[0,x]$ (with $0< x \leq (1- \alpha)$), we have $v_1(0,x) = x(1+\alpha)$ and $v_2(0,x) = x(1- \alpha)$. However, for any $\alpha \in (0,1)$ and $x > 0$, the following strict inequality holds: $v_1(0,x) = x(1+\alpha) > x(1- \alpha) = v_2(0,x)$. This leads to a contradiction and proves that $\mathcal{C}$ does not admit a perfect division with connected pieces. Even though we might not have a perfect allocation, the work of Alon~\cite{alon1987splitting} proves that a perfect division with $n(n-1)$ cuts always exists. Hence, in the above-mentioned instance $\mathcal{C}$ with $2$ agents, $2$ cuts should suffice to form a perfect division. In particular, we note that the following division $\mathcal{D}^* = \{D^*_1, D^*_2\}$ is perfect in $\mathcal{C}$; here $D^*_1 = \left[ \frac{1}{2}- \frac{\alpha}{2}, 1-\frac{\alpha}{2} \right]$ and $D^*_2 = \left[ 0,\frac{1}{2}- \frac{\alpha}{2} \right] \cup \left[ 1-\frac{\alpha}{2},1 \right]$. Indeed, \begin{align*} v_1(D^*_2) &= (1+\alpha)\left(\frac{1}{2}- \frac{\alpha}{2}\right) + \alpha\left(\frac{\alpha}{2}\right) = \frac{1}{2} \end{align*} and \begin{align*} v_2(D^*_2) &= (1-\alpha)\left(\frac{1}{2}- \frac{\alpha}{2}\right) + (2-\alpha)\left(\frac{\alpha}{2}\right) = \frac{1}{2} \end{align*} That is, both agents value the piece $D^*_2$ at $1/2$. Since the valuations are normalized, we additionally have $v_1(D^*_1)=v_2(D^*_1)=1/2$. This shows that $\mathcal{D}^*$ is a perfect division (with disconnected pieces) in $\mathcal{C}$.
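The calculations above can be replicated numerically. The following Python sketch fixes the arbitrary choice $\alpha = 0.4$ (any $\alpha \in (0,1)$ behaves the same way) and confirms both claims: no single cut yields a perfect allocation, while the two-cut division $\mathcal{D}^*$ is perfect.

```python
# Numerical check of the two-agent instance above, with the arbitrary
# choice alpha = 0.4. Piecewise-constant densities integrate in closed form.
alpha = 0.4
t = 1 - alpha  # breakpoint of both piecewise-constant densities

def value(d_lo, d_hi, a, b):
    """Integral over [a,b] of a density equal to d_lo on [0,t] and d_hi on (t,1]."""
    return d_lo * max(0.0, min(b, t) - a) + d_hi * max(0.0, b - max(a, t))

v1 = lambda a, b: value(1 + alpha, alpha, a, b)      # agent 1's valuation
v2 = lambda a, b: value(1 - alpha, 2 - alpha, a, b)  # agent 2's valuation

# Normalization: both agents value the full cake at 1.
assert abs(v1(0, 1) - 1) < 1e-9 and abs(v2(0, 1) - 1) < 1e-9

# No single cut x gives both agents value 1/2 on [0, x]: on a fine grid,
# v1(0, x) and v2(0, x) are never simultaneously close to 1/2.
for i in range(10001):
    x = i / 10000
    assert abs(v1(0, x) - 0.5) > 1e-6 or abs(v2(0, x) - 0.5) > 1e-6

# The two-cut division D* from the text is perfect: both agents value
# D*_1 = [1/2 - alpha/2, 1 - alpha/2] and its complement at exactly 1/2.
c1, c2 = 0.5 - alpha / 2, 1 - alpha / 2
for v in (v1, v2):
    assert abs(v(c1, c2) - 0.5) < 1e-9            # piece D*_1
    assert abs(v(0, c1) + v(c2, 1) - 0.5) < 1e-9  # piece D*_2
```

The grid scan is only a heuristic check of nonexistence, of course; the contradiction argument above is what rules out a perfect allocation for every $\alpha$.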
\section{Bit Complexity of Cake Division} \label{appendix:example-precision-loss} This section provides a cake-division instance (with MLRP) in which the unique envy-free allocation has an irrational cut point. Notably, the parameters that specify the value densities in this instance are rational. This example implies that, in general, one cannot expect an efficient algorithm that outputs an {exact} envy-free allocation. That is, when considering cake-division algorithms with bounded bit complexity, a precision loss is inevitable. We will also show, through the example, that to obtain a nontrivial bound on the envy (between the agents) the bit complexity of the output has to be $\Omega (\log \lambda)$; here $\lambda \geq 1$ is the Lipschitz constant of the cut and eval queries. Hence, a runtime dependence on $\log \lambda$ is unavoidable as well. Consider cake division between three agents with identical value densities, $f: [0,1] \mapsto \mathbb{R}_+$. Such an instance trivially satisfies MLRP, since the likelihood ratio of the value densities is equal to $1$ throughout the cake. In particular, the density function $f$ is piecewise linear and is defined as follows: \begin{align*} f(x) & \coloneqq \begin{cases} \frac{\lambda}{3(\lambda-1)} & \quad \text{if} \ 0 \leq x \leq \left(1 - \frac{1}{\lambda}\right) \\ \frac{2 \lambda^2}{3} x + \left( \lambda - \frac{2\lambda^2}{3} \right) & \quad \text{if} \ \left(1 - \frac{1}{\lambda}\right) \leq x \leq 1 \\ \end{cases} \end{align*} Note that this function can be given as input using only rational parameters. In addition, $f$ is discontinuous at $1 - \frac{1}{\lambda}$; recall that our results require the value densities to be integrable, and not necessarily continuous. Here, the values of the agents are normalized, $\int_{0}^{1}f(x)dx =1$. Also, the following bounds hold for the density: $0<\frac{\lambda}{3(\lambda-1)} \leq f(x) \leq \lambda$ for all $x \in [0,1]$.
Therefore, Proposition~\ref{Lip} implies that the cut and eval queries in this instance are $3 \lambda$-Lipschitz. Since in this instance the three agents have identical value densities (with full support over $[0,1]$), there exists a unique envy-free allocation wherein each agent receives an interval of value exactly equal to $1/3$. Write $0=x^*_0 \leq x^*_1 \leq x^*_2 \leq x^*_3=1$ to denote the cut points of this (unique) envy-free allocation; in particular, agent $i \in [3]$ receives the $i$th interval $[x^*_{i-1},x^*_i]$. First, we will show that $x^*_2$ is irrational. Note that the interval $[0, 1 - 1/\lambda]$ is of value $1/3$: $\int_0^{1-\frac{1}{\lambda}} f(x) dx = \frac{1}{3}$. Hence, the first cut point $x^*_1=1-\frac{1}{\lambda}$. The second cut point $x^*_2$ now lies in the interval $[x^*_1, 1]$, which is of width $\frac{1}{\lambda}$, and it must satisfy $\int_{1-\frac{1}{\lambda}}^{x^*_2} f(x) dx =\frac{1}{3}$. Using the definition of $f$ in this range, we get that $x^*_2$ is a solution of the following quadratic equation \begin{align} \lambda^2 (x^*_2)^2 + (3 \lambda - 2 \lambda^2) x^*_2 + (\lambda^2 - 3 \lambda +1) = 0 \label{equation:quad-irrational} \end{align} With the constraint $x^*_2 \in \left(1- \frac{1}{\lambda},1 \right)$, equation~(\ref{equation:quad-irrational}) gives us $x^*_2 = 1 - \left( \frac{3- \sqrt{5}}{2}\right) \frac{1}{\lambda}$. That is, the cut point $x^*_2$ is irrational for every $\lambda \in \mathbb{Q}_+$. We next establish a lower bound on the bit complexity of the output.
Consider, in the above-mentioned instance, any allocation $\mathcal{I}=\{I_1, I_2, I_3\}$ wherein the envy between the agents is, say, less than $\eta = 1/2$, i.e., $v_i(I_i) \geq v_i(I_j) - 1/2$ for all $i, j \in [3]$.\footnote{Here, the choice of $\eta=1/2$ is essentially for ease of exposition; by scaling down the density in the range $\left[ 0, \left(1 - {1}/{\lambda} \right) \right]$, we can drive $\eta$ close to one.} For any such allocation $\mathcal{I}$, one of the cut points must lie in $\left[\left(1 - {1}/{\lambda} \right), 1\right]$. Indeed, this interval is of value $2/3$ to each agent, $\int_{\left(1- 1/\lambda \right)}^1 f = 2/3$. The bit complexity of such a cut point is $\Omega(\log \lambda)$. Hence, in general, the bit complexity of any algorithm that finds an allocation (equivalently, outputs cut points) with bounded envy is $\Omega(\log \lambda)$. \subsection{Welfare-Maximizing Allocations Induced by Irrational Cuts} \label{appendix:examples-irrational} This section provides examples of cake-division instances to show that, even with rational and MLRP value densities, the welfare-maximizing allocations can be induced by irrational cuts. The following example identifies a cake-division instance in which the \emph{unique} envy-free allocation between two agents is obtained by an irrational cut: consider two agents with identical value densities, $f(x) = x+1/2$. Here, the unique envy-free allocation $\mathcal{I} =\{I_1, I_2 \}$ satisfies $I_1 = \left[0, \frac{\sqrt{5}-1}{2}\right]$ and $I_2 = \left[\frac{\sqrt{5}-1}{2}, 1\right]$. Allocation $\mathcal{I}$ is also the (unique) equitable, proportional, and max-min optimal allocation in this instance. Next, consider a cake-division instance with two agents having the following value densities: $f_1(x)=1$ and $f_2(x)=3x^2$. Note that these densities satisfy MLRP. Here, the \emph{unique} allocation $\mathcal{S} = \{S_1, S_2\}$ that maximizes social welfare is obtained by an irrational cut: $S_1= \left[0, \frac{1}{\sqrt{3}} \right]$ and $S_2 = \left[ \frac{1}{\sqrt{3}},1 \right]$. The cut point $\frac{1}{\sqrt{3}}$ is the switching point (as defined in Section~\ref{section:social welfare}) between the two densities $f_1$ and $f_2$. \section{Introduction} Cake division is a quintessential model in the study of fair division. This setup captures the allocation of a divisible resource (metaphorically, the cake) among agents with equal entitlements but distinct preferences. Over the past several decades, a significant body of work in mathematics, economics, and computer science has been devoted to cake cutting; see \cite{brams1996fair, robertson1998cake, procaccia2015cake} for excellent expositions and motivating applications (e.g., border negotiations and divorce settlements) of this framework.
Some of the central solution concepts and axiomatic characterizations in the fair-division literature stem from the cake-cutting context \cite{moulin2004fair}. Indeed, the work of Steinhaus, Banach, and Knaster \cite{S48problem}---which lays the mathematical foundations of fair division---addresses cake division. The notion of \emph{envy-freeness} was also mathematically formalized in this setup \cite{stern1958puzzle, foley1967resource}. This well-studied notion deems a cake division to be fair if every agent prefers the piece assigned to her over that of any other agent, i.e., if no agent is envious of others. Formally, the cake is modeled as the unit interval $[0,1]$ and the cardinal preferences of the agents over pieces of this divisible resource are specified via valuation functions: $v_i(I) \in \mathbb{R}_+$ denotes the value that an agent $i$ has for interval (piece) $I \subset [0,1]$. These valuations $v_i$s are typically assumed to be induced by value-density functions $f_i$s, i.e., $v_i(I) \coloneqq \int_{x \in I} f_i(x) dx$, for each agent $i$ and interval $I$. This work focuses on a standard formulation of cake division in which every agent must receive a contiguous piece of the cake. That is, the goal is to partition the cake $[0,1]$ into exactly $n$ disjoint intervals (connected pieces) and assign them among the $n$ participating agents. This connectivity requirement is naturally motivated by settings in which a contiguous part of the resource needs to be allocated to every agent \cite{brams1996fair}; consider, e.g., division of land, transmission spectrum, or processing time on a machine. Note that a partition of the cake $[0,1]$ into intervals $I_1, I_2, \ldots, I_n$---wherein interval $I_i$ is assigned to agent $i \in [n]$---is said to be envy-free iff $v_i(I_i) \geq v_i(I_j)$ (i.e., iff $\int_{I_i} f_i \geq \int_{I_j} f_i$) for all agents $i$ and $j$. 
The appeal of envy-freeness is substantiated by strong existential results: under mild assumptions, a contiguous envy-free cake division always exists~\cite{stromquist1980cut, simmons1980private, edward1999rental}. While these results are built upon interesting mathematical connections,\footnote{For instance, the proof by Su \cite{edward1999rental} invokes Sperner's lemma.} they are, however, nonconstructive. In fact, Stromquist \cite{stromquist2008envy} has shown that there does not exist a finite-time algorithm for finding envy-free cake divisions with connected pieces; this result holds in a setup wherein the valuations are provided through an (adversarial) oracle. In addition, the work of Deng et al. \cite{deng2012algorithmic} establishes {\rm PPAD}-hardness of finding envy-free cake divisions with contiguous pieces, under ordinal valuations. Algorithms for envy-free cake division remain elusive even if we relinquish the contiguity requirement. It was not until the work of Brams and Taylor~\cite{brams1995envy} that a bounded-time algorithm was obtained for noncontiguous envy-free cake division. In general, the best-known result for this problem is by Aziz and Mackenzie \cite{aziz2016discrete}, who develop a {hyper-exponential} time algorithm for finding envy-free divisions with noncontiguous pieces.\footnote{The problem of finding an \emph{approximate} envy-free division (not necessarily with connected pieces) admits a fully-polynomial time approximation scheme~\cite{lipton2004approximately}.} In light of these algorithmic barriers, identification of computationally-tractable instances in the cake-cutting context stands as a meaningful direction of work. The current paper addresses this consideration and, in particular, identifies an encompassing property---called the \emph{monotone likelihood ratio property}---which enables the development of efficient algorithms for fair cake-cutting (with connected pieces). 
The ordered value-density functions $(f_i, f_j)$ are said to satisfy the {monotone likelihood ratio property} (MLRP) iff, for every $x \leq y$ in the domain, we have $\nicefrac{f_j(x)}{f_i(x)} \leq \nicefrac{f_j(y)}{f_i(y)}$. In other words, the {likelihood ratio} $\nicefrac{f_j(x)}{f_i(x)}$ is nondecreasing in the argument $x \in \mathbb{R}$. Intuitively, this property asserts that, in comparison to $f_i$, the density $f_j$ is higher towards the right end of the domain. We note that MLRP does not require $f_i$ and $f_j$ to be monotonic (or unimodal) by themselves. In the cake-division context, we will say that an ordered collection $(f_i)_{i \in [n]}$ of value densities (of the $n$ agents) satisfies the {monotone likelihood ratio property} iff for each $i \in [n-1]$, the likelihood ratio $\nicefrac{f_{i+1}(x)}{f_{i} (x)}$ is nondecreasing in $x \in [0,1]$. That is, the agents are indexed with the property that consecutive likelihood ratios bear MLRP. This property is transitive and, hence, in cake-division instances with MLRP, value densities $f_i$ and $f_j$ satisfy MLRP for all $i < j$. Many distribution families are also known to bear MLRP \cite{larsen2001introduction, casella2002statistical}. In particular, this property holds if all the value densities belong to any one of the following families: Gaussian distributions (with the same variance but different means), Poisson distributions, binomial distributions, and single-parameter exponentials; see Appendix~\ref{appendix:mlrp-use-cases} for details. Furthermore, it is known that linear translations of any log-concave function satisfy MLRP \cite{saumard2014log}. In particular, linear translations of the following (log-concave) distributions also satisfy this property: Laplace, uniform, multivariate Gaussian, gamma, beta, Subbotin, chi-square, Dirichlet, and logistic. Hence, the current work obtains novel results for many distribution families in a unified manner. 
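As an illustration, MLRP can be checked numerically on a grid. The helper below is a hypothetical sketch (densities are specified up to normalization, which does not affect likelihood ratios); it confirms the property for equal-variance Gaussians and for the pair $f_1 = 1$, $f_2 = 3x^2$, and shows a pair for which MLRP fails.

```python
import math

def satisfies_mlrp(f_lo, f_hi, grid):
    """Check, numerically on a grid, that f_hi/f_lo is nondecreasing."""
    ratios = [f_hi(x) / f_lo(x) for x in grid]
    return all(r1 <= r2 + 1e-12 for r1, r2 in zip(ratios, ratios[1:]))

grid = [i / 1000 for i in range(1001)]

def gaussian(mu, sigma):
    # unnormalized Gaussian density; normalization cancels in the ratio
    return lambda x: math.exp(-((x - mu) ** 2) / (2 * sigma**2))

# Equal-variance Gaussians with increasing means bear MLRP.
assert satisfies_mlrp(gaussian(0.3, 0.2), gaussian(0.7, 0.2), grid)

# The ordered pair f_1 = 1, f_2 = 3x^2 bears MLRP (ratio 3x^2 nondecreasing).
assert satisfies_mlrp(lambda x: 1.0, lambda x: 3 * x**2, grid)

# MLRP can fail: two Gaussians with equal means but different variances
# have a likelihood ratio that first decreases and then increases.
assert not satisfies_mlrp(gaussian(0.5, 0.1), gaussian(0.5, 0.3), grid)
```

A grid check of this kind is only a heuristic, but it is a convenient way to test whether a concrete family of densities plausibly falls within the scope of the MLRP results.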
MLRP is a common assumption on agents' utilities and type distributions in many economic contexts; see~\cite{jewitt1991applications} for a survey. As a stylized application of MLRP in cake division, consider a setting wherein each agent $i$ has a most preferred point $\mu_i$ on the divisible resource (cake) and $i$'s valuation density decreases as a Gaussian function (with a variance parameter that is common across the agents) of the distance from $\mu_i$. Indeed, the distance here can be geographical (as in the case of land division), temporal (i.e., wait time), or it can be an abstract metric. Considering similar single-peaked preferences, but with a linear drop in value densities, Wang and Wu~\cite{wang2019cake} developed an efficient algorithm for noncontiguous cake division. Note that while linear densities bear MLRP, this property does not hold for piecewise linear densities. Hence, our results do not directly address the setting considered in \cite{wang2019cake}. However, in the absence of the contiguity requirement (as is the case in \cite{wang2019cake}), one can find a fair cake division by first partitioning the cake into intervals, in each of which the agents' value densities are linear, and then applying the MLRP result separately.\footnote{Recall that, in contrast to such a result, our focus is on finding cake divisions in which each agent receives a contiguous piece of the cake.} We focus on cake-division instances with MLRP and develop algorithmic results for almost all the standard notions of fairness and economic efficiency. Our algorithms only require oracle access to the valuations. In particular, the developed algorithms operate under the standard Robertson-Webb model~\cite{robertson1998cake}, wherein we have access to the agents' valuations through \emph{eval} and \emph{cut} queries; see Section~\ref{section:notations} for details. MLRP implies that these cut and eval queries (functions) are $\lambda$-Lipschitz (Appendix~\ref{appendix:mlrp-lipschitz}).
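For intuition, a Robertson-Webb oracle can be simulated for a given value density by tabulating a prefix sum of the density and answering cut queries via binary search. The sketch below is illustrative (the function names are ours, not the paper's), with numeric integration standing in for exact valuations.

```python
def make_oracle(f, grid_size=10_000):
    """Simulated Robertson-Webb oracle for a value density f.
    ev(l, r) returns v(l, r); cut(l, tau) returns the leftmost r with
    v(l, r) = tau, truncated to 1 when the remaining value is short."""
    h = 1.0 / grid_size
    prefix = [0.0]
    for i in range(grid_size):            # midpoint rule on each cell
        prefix.append(prefix[-1] + f((i + 0.5) * h) * h)
    norm = prefix[-1]                     # normalize total value to 1

    def ev(l, r):
        def at(x):                        # interpolated prefix value
            i = min(int(x * grid_size), grid_size - 1)
            frac = x * grid_size - i
            return prefix[i] + frac * (prefix[i + 1] - prefix[i])
        return (at(r) - at(l)) / norm

    def cut(l, tau, iters=60):
        if ev(l, 1.0) < tau:
            return 1.0                    # truncation convention
        lo, hi = l, 1.0
        for _ in range(iters):            # ev is monotone in its right end
            mid = (lo + hi) / 2
            if ev(l, mid) < tau:
                lo = mid
            else:
                hi = mid
        return hi
    return ev, cut

ev, cut = make_oracle(lambda x: x + 0.5)  # linear density, total value 1
assert abs(ev(0.0, 1.0) - 1.0) < 1e-9
mid = cut(0.0, 0.5)                       # halving point, (sqrt(5)-1)/2
assert abs(ev(0.0, mid) - 0.5) < 1e-6
```

The halving point returned here is irrational, matching the examples above: an exact-output algorithm cannot report it with bounded bit complexity.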
The time complexities of our algorithms depend polynomially on the bit complexity of this Lipschitz constant $\lambda \geq 1$. Such a runtime dependency on $\log \lambda$ is unavoidable (Appendix~\ref{appendix:example-precision-loss}): there exist cake-division instances (with $\lambda$-Lipschitz cut and eval queries) wherein for all the agents the value of the cake is almost entirely concentrated in an interval $L$ of length ${1}/{\lambda}$. Here, an envy-free cake division can be obtained only by finely partitioning $L$ among the agents. In particular, the cut points that induce an envy-free allocation (and, hence, correspond to the output of a fair-division algorithm) must be within $1/\lambda$ of each other, i.e., the bit complexity of the output has to be ${\Omega} \left( \log \lambda \right)$. In fact, one can construct instances in which a contiguous envy-free division can be obtained only by cutting the cake at irrational points (Appendix~\ref{appendix:example-precision-loss}). Hence, in general (and even under MLRP), one cannot expect an efficient algorithm that outputs an \emph{exact} envy-free division with contiguous pieces.\footnote{Indeed, the bit complexity of a computationally-bounded algorithm is bounded as well.} Therefore, when considering efficient algorithms for cake division, a precision loss in the output is inevitable. However, our algorithms ensure that this precision loss in value, $\eta$, is arbitrarily small; specifically, the developed algorithms run in time $\mathcal{O}\left( {\rm poly} \left(n, \log \lambda, \log \frac{1}{\eta} \right) \right)$ and, hence, the precision parameter $\eta$ can be driven exponentially close to zero in polynomial (in the bit complexity of $\eta$) time.
Note that this bit-precision issue is akin to the one faced in convex-optimization problems (where again the optimal solutions can be irrational) and our runtime bound, with respect to the precision parameter $\eta$, is analogous to the one obtained by the ellipsoid method~\cite{grotschel2012geometric}. \\ \noindent {\bf Our Results and Techniques:} Next, we summarize our results for various notions of fairness and (economic) efficiency. \\ \noindent \emph{Envy-Freeness:} We prove that, given a cake-division instance (in the Robertson-Webb query model) with MLRP, an envy-free allocation can be computed, up to an arbitrary precision, in time that is polynomial in the number of agents and $\log \lambda$; here $\lambda$ is the Lipschitz constant of the Robertson-Webb (cut and eval) queries. To establish this result, we define a class of divisions, referred to as \emph{ripple divisions} (Definition~\ref{defn:RD}), and prove that, under MLRP, every ripple division induces a contiguous envy-free cake division (Theorem~\ref{theorem:RD-EF}). Specifically, a collection of points $x_0 = 0 \leq x_1 \leq x_2 \leq \ldots \leq x_{n-1} \leq x_n = 1$ (in the cake $[0,1]$) is said to form a {ripple division} of the cake if, for each $i \in [n-1]$, agent $i$ is indifferent between the consecutive intervals $[x_{i-1}, x_i]$ and $[x_i, x_{i+1}]$, i.e., $v_i(x_{i-1}, x_i) = v_i(x_i, x_{i+1})$. Note that a ripple division induces a contiguous cake division---by assigning interval $[x_{i-1}, x_i]$ to agent $i$---with the property that agent $i$ does not envy agent $i+1$. That is, in and of itself, a ripple division mandates absence of envy only between consecutive agents, and not between all pairs of agents. We will show that, interestingly, under MLRP this relaxation suffices---the cake division induced by a ripple division is guaranteed to be envy-free (Theorem~\ref{theorem:RD-EF}).
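A ripple division can be computed along the lines of the one-dimensional fixed-point argument sketched above: propagate cut points from a guess for $x_1$ and binary-search on $x_1$ until the last point lands at $1$. The following Python sketch is illustrative (numeric integration stands in for exact queries, and the function names are ours); for two agents with MLRP densities it also checks envy-freeness of the induced division.

```python
def integral(f, a, b, steps=1000):
    """Midpoint-rule approximation of the value v(a, b) under density f."""
    if b <= a:
        return 0.0
    h = (b - a) / steps
    return sum(f(a + (j + 0.5) * h) for j in range(steps)) * h

def solve_right(f, start, target, iters=40):
    """Smallest y >= start with v(start, y) = target, or None when the
    value of [start, 1] falls short of the target."""
    if integral(f, start, 1.0) < target:
        return None
    lo, hi = start, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if integral(f, start, mid) < target:
            lo = mid
        else:
            hi = mid
    return hi

def ripple_division(densities, iters=30):
    """Find 0 = x_0 <= ... <= x_n = 1 with v_i(x_{i-1}, x_i) = v_i(x_i, x_{i+1})
    by binary-searching the first cut x_1 (one-dimensional fixed point)."""
    n = len(densities)

    def propagate(x1):
        cuts = [0.0, x1]
        for i in range(n - 1):           # enforce agent (i+1)'s indifference
            target = integral(densities[i], cuts[-2], cuts[-1])
            y = solve_right(densities[i], cuts[-1], target)
            if y is None:
                return None, cuts        # overshoot: x1 was too large
            cuts.append(y)
        return cuts[-1], cuts

    lo, hi = 0.0, 1.0                    # the last point grows with x1
    for _ in range(iters):
        x1 = (lo + hi) / 2
        end, _ = propagate(x1)
        if end is None:
            hi = x1
        else:
            lo = x1
    return propagate(lo)[1]

# Two agents whose densities bear MLRP (likelihood ratio 3x^2 nondecreasing).
fs = [lambda x: 1.0, lambda x: 3 * x**2]
cuts = ripple_division(fs)
vals = [[integral(fs[i], cuts[j], cuts[j + 1]) for j in range(2)]
        for i in range(2)]
# The induced division is envy-free, as the theorem for MLRP guarantees.
assert all(vals[i][i] >= vals[i][j] - 1e-3 for i in range(2) for j in range(2))
```

For this instance the ripple condition forces $x_1 = 1/2$: agent $1$ is indifferent between $[0, 1/2]$ and $[1/2, 1]$, and agent $2$ strictly prefers the right piece.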
Recall that the agents are indexed following the MLRP order: for each $i \in [n-1]$, the likelihood ratio $f_{i+1}/f_i$ is nondecreasing. Hence, allocating $[x_{i-1}, x_i]$ to agent $i \in [n]$ ensures that the intervals are assigned (left to right on the cake) in accordance with the MLRP order. We establish the universal existence of ripple divisions through the intermediate value theorem, i.e., a one-dimensional fixed-point argument (Lemma~\ref{RDexistence}). Since one can use binary search to find fixed points in the one-dimensional setting, this proof in fact leads to an algorithm for finding ripple divisions and, hence, envy-free divisions. Indeed, the notion of ripple divisions and their connection with envy-freeness, under MLRP, are two key contributions of this work. \\ \noindent \emph{Pareto Optimality:} We show that, in cake-division instances with MLRP, Pareto optimal cake divisions, with connected pieces, conform to the MLRP order (Lemma~\ref{theorem:POorder}). This structural result implies that, for maximizing welfare, we can restrict attention to allocations wherein the intervals are assigned (left to right on the cake) in accordance with the MLRP order. Intuitively, this leads us to a welfare-maximizing algorithm---specifically, a dynamic program---that recursively finds optimal allocations for intervals placed at the left end of the cake (i.e., for intervals of the form $[0,x]$). We also establish an extension of Weller's theorem in the MLRP context. Weller's theorem~\cite{weller1985fair} asserts that there always exists some cake division---though, not necessarily with connected pieces---which is both envy-free (fair) and Pareto optimal. While this theorem holds in general,\footnote{Weller's theorem applies even in the absence of MLRP.} it does not guarantee that envy-freeness and Pareto optimality can be achieved together through contiguous cake divisions.
We show that, by contrast, under MLRP \emph{every} contiguous envy-free division is Pareto optimal (Theorem~\ref{theorem:ef-po}). Therefore, given a cake-division instance with MLRP, the allocation computed by our algorithm is not only envy-free but also Pareto optimal, up to an arbitrary precision. \\ \noindent \emph{Social Welfare:} Social (utilitarian/Benthamite) welfare is a standard measure of collective value. For a cake division $\{I_1, I_2, \ldots, I_n\}$ it is defined to be the sum of the values that the division generates among the agents, $\sum_i v_i(I_i)$. Maximizing social welfare is a well-studied objective in resource-allocation contexts. In the cake-cutting setup, this maximization problem is known to be {\rm APX}-hard under general valuations \cite{arunachaleswaran2019fair}. Complementarily, if the value densities bear MLRP, then we can find (up to an arbitrary precision) a social welfare maximizing division with connected pieces in $\mathcal{O} \left( {\rm poly } \left(n, \log \lambda \right) \right)$ time (Theorem~\ref{theorem:SocialWelfare}). As mentioned previously, our algorithm for this problem is based on a dynamic program. \\ \noindent \emph{Egalitarian Welfare:} The egalitarian (Rawlsian) welfare of a cake division $\{I_1, \ldots, I_n\}$ is defined as the value of the least well-off agent, i.e., $\min_i \ v_i (I_i)$. From a welfarist perspective, maximizing this minimum value among cake divisions with connected pieces is an important fairness objective. However, no nontrivial approximation guarantees are known for this problem under general valuations; the work of Aumann et al. \cite{aumann2013computing} shows that maximizing egalitarian welfare across all contiguous cake divisions is {\rm APX}-hard. 
Complementing this hardness result, we develop an algorithm that, under MLRP, maximizes egalitarian welfare (up to an arbitrary precision) and runs in $\mathcal{O} \left( {\rm poly } \left(n, \log \lambda \right) \right)$ time (Theorem~\ref{theorem:Maxmin}). Our algorithm for maximizing egalitarian welfare is based on a ``moving-knife'' procedure. This procedure, for a given target value $\tau >0$, iteratively selects points $x_0 = 0, x_1, x_2, \ldots, x_n \leq 1$ such that each interval $[x_{i-1}, x_i]$ is of value $\tau$ to agent $i \in [n]$. Let $\tau^*$ denote the optimal egalitarian welfare in the given cake-division instance. The useful observation here is that this moving-knife procedure will succeed for all $\tau \leq \tau^*$. This follows from the fact that here the intervals are assigned (left to right) in the MLRP order\footnote{Note that the agents are indexed accordingly.} and this ordering is also satisfied by an egalitarian welfare maximizing (in particular, a Pareto optimal) division. Therefore, by performing a binary search with $\tau$, we can find a contiguous division with egalitarian welfare arbitrarily close to the optimal. \\ \noindent \emph{Nash Social Welfare:} A balance between social and egalitarian welfare is obtained by considering the Nash social welfare \cite{nash1950bargaining, kaneko1979nash}. This welfare objective is defined as the geometric mean of the agents' values. It is known that, in general, it is {\rm APX}-hard to find a contiguous cake division that maximizes Nash social welfare \cite{arunachaleswaran2019fair}. Under MLRP, however, the problem of maximizing Nash social welfare admits a fully polynomial-time approximation scheme (Theorem~\ref{theorem:NSW}). We obtain this result via a dynamic program that considers the agents in the MLRP order.
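The welfare-maximizing dynamic programs described above share a common shape: discretize the candidate cut points and, exploiting the MLRP order, assign intervals left to right. The sketch below is a simplified grid-based variant (not the paper's exact algorithm) for social welfare; the table entry $W[i][b]$ stores the best welfare achievable by giving the prefix $[0, b/m]$ to the first $i$ agents in the MLRP order.

```python
def max_social_welfare(densities, m=200):
    """Grid-based dynamic program over candidate cut points, assigning
    intervals left to right in the MLRP order (agent 1 leftmost).
    Returns (best welfare on the grid, list of cut points)."""
    n = len(densities)
    # per-cell masses and prefix sums, so val(i, a, b) is O(1)
    cell = [[f((j + 0.5) / m) / m for j in range(m)] for f in densities]
    pref = []
    for masses in cell:
        p = [0.0]
        for w in masses:
            p.append(p[-1] + w)
        pref.append(p)

    def val(i, a, b):               # agent i's value for [a/m, b/m]
        return pref[i][b] - pref[i][a]

    NEG = float("-inf")
    W = [[NEG] * (m + 1) for _ in range(n + 1)]
    back = [[0] * (m + 1) for _ in range(n + 1)]
    W[0][0] = 0.0
    for i in range(1, n + 1):
        for b in range(m + 1):
            for a in range(b + 1):
                if W[i - 1][a] == NEG:
                    continue
                cand = W[i - 1][a] + val(i - 1, a, b)
                if cand > W[i][b]:
                    W[i][b], back[i][b] = cand, a
    cuts, b = [1.0], m              # recover the cuts by backtracking
    for i in range(n, 0, -1):
        b = back[i][b]
        cuts.append(b / m)
    return W[n][m], cuts[::-1]

# The MLRP pair f1 = 1, f2 = 3x^2 from the examples above: the optimal
# single cut is the switching point 1/sqrt(3) ~ 0.577.
welfare, cuts = max_social_welfare([lambda x: 1.0, lambda x: 3 * x**2])
assert abs(cuts[1] - 3 ** -0.5) < 0.01
```

The same table structure supports other objectives (e.g., Nash social welfare) by changing how partial solutions are scored; the grid resolution plays the role of the precision parameter.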
\\ \noindent {\bf Additional Related Work:} Recently, approximation algorithms---with both additive \cite{hollender2019contiguous} and multiplicative \cite{arunachaleswaran2019fair} approximation guarantees---have been developed for finding contiguous envy-free cake divisions. The work of Brânzei and Nisan \cite{branzei2017query} develops query complexity upper and lower bounds for computing approximately envy-free allocations. In contrast to these results, the current work focuses on cake-division instances with MLRP and shows that in such settings arbitrarily low envy can be achieved among the agents. The work of Bei et al. \cite{bei2012optimal} also studies contiguous cake division and provides computational results for maximizing social welfare subject to proportional fairness. Under this fairness constraint, each agent $i \in [n]$ must receive an interval of value at least $1/n$ times $i$'s total value for the cake. Bei et al. \cite{bei2012optimal} show that, if the value densities are linear, then this problem admits a fully polynomial-time approximation scheme (FPTAS). We note that every pair of linear densities bears MLRP and, hence, such value-density functions fall within the purview of the current work. However, our algorithm is incomparable to the FPTAS of Bei et al. \cite{bei2012optimal}---we focus on maximizing social welfare without the fairness constraints. Also, envy-freeness is not addressed in \cite{bei2012optimal}. Another well-studied fairness notion is that of \emph{equitability}. Specifically, a cake division $\{I_1, I_2, \ldots, I_n \}$ is said to be \emph{equitable} iff all the agents derive the same value from the intervals assigned to them, $v_i(I_i) = v_j(I_j)$ for all $i$ and $j$ \cite{dubins1961cut, alon1987splitting}. In other words, equitability ensures that all the agents are equally well-off.
Cechl{\'a}rov{\'a} and Pill{\'a}rov{\'a} \cite{cechlarova2012computability} consider the computation of equitable cake divisions with connected pieces. They showed that---given access to ``reverse'' cutting queries---such divisions can be efficiently computed, up to an arbitrary precision. We note that value densities that satisfy MLRP have, by definition, full support over the cake. In such a case, the reverse cutting queries can be simulated by standard (cut) queries in the Robertson-Webb model. Hence, under MLRP, strong algorithmic results hold for equitability as well. Cake-division algorithms for specific classes of valuations have been studied in \cite{cohler2011optimal} and \cite{kurokawa2013cut}. The work of Kurokawa et al.~\cite{kurokawa2013cut} provides a query-efficient algorithm for envy-free, noncontiguous cake division under piecewise linear densities. Cohler et al.~\cite{cohler2011optimal} also address the noncontiguous version of the problem, and for piecewise constant densities they develop a polynomial-time algorithm that computes an envy-free division with optimal social welfare. In contrast to these results our focus is on contiguous cake division. \section{Main Results} \label{section:MainResults} This section presents the statements of our key results. \\ \noindent \textbf{Envy-Freeness:} In Section~\ref{EFdivisions} we prove that for cake-division instances, in which the value densities satisfy MLRP, the problem of finding an envy-free allocation {essentially admits a polynomial-time algorithm}. \begin{restatable}{theorem}{EFdivision} \label{theorem:EFdivision} Let $\mathcal{C} = \langle [n], (f_i )_{i \in [n] } \rangle $ be a cake-division instance in which the value-density functions satisfy the monotone likelihood ratio property. 
Then, in the Robertson-Webb query model, an envy-free allocation of $\mathcal{C}$ can be computed (up to an arbitrary precision) in $\mathcal{O}\left( {\rm poly} ( n, \log \lambda ) \right)$ time; here $\lambda \in \mathbb{R}_+$ is the Lipschitz constant of the cut and eval queries. \end{restatable} Recall that, under MLRP, the cut and eval queries are necessarily $\lambda$-Lipschitz (Proposition~\ref{Lip}). \\ \noindent \textbf{Pareto Optimality:} Weller's theorem~\cite{weller1985fair} asserts that there always exists some cake division (though, not necessarily with connected pieces) which is both envy-free and Pareto optimal (among all cake divisions, with or without connected pieces). We show that, in the context of MLRP, \emph{every} envy-free allocation is in fact Pareto optimal (among all cake divisions). Therefore, for cake-division instances with MLRP, the allocation computed by our algorithm is not only envy-free (fair) but also Pareto optimal, up to an arbitrary precision. \begin{restatable}{theorem}{EFPO} \label{theorem:ef-po} Let $\mathcal{C}$ be a cake-division instance wherein the value-density functions satisfy the monotone likelihood ratio property. Then, every envy-free allocation in $\mathcal{C}$ is also Pareto optimal (over the set of all cake divisions). \end{restatable} \noindent \textbf{Social Welfare:} In Section \ref{section:social welfare} we show that, up to an arbitrary precision, a social welfare maximizing allocation can be computed efficiently under MLRP. \begin{restatable}{theorem}{SocialWelfare} \label{theorem:SocialWelfare} Let $\mathcal{C} =\langle [n], (f_i)_{i \in [n]} \rangle$ be a cake-division instance in which the value-density functions satisfy the monotone likelihood ratio property.
Then, in the Robertson-Webb query model, an allocation that achieves the optimal social welfare in $\mathcal{C}$ can be computed (up to an arbitrary precision) in $\mathcal{O}\left( {\rm poly} ( n, \log \lambda ) \right)$ time; here $\lambda \in \mathbb{R}_+$ is the Lipschitz constant of the cut and eval queries. \end{restatable} \noindent \textbf{Egalitarian Welfare:} Section~\ref{section:max-min} addresses the problem of maximizing egalitarian welfare. Specifically, we prove that, in cake-division instances with MLRP, an allocation with egalitarian welfare arbitrarily close to the optimal can be computed efficiently. \begin{restatable}{theorem}{Maxmin} \label{theorem:Maxmin} Let $\mathcal{C} =\langle [n], (f_i)_{i \in [n]} \rangle$ be a cake-division instance in which the value-density functions satisfy the monotone likelihood ratio property. Then, in the Robertson-Webb query model, an allocation that achieves the optimal egalitarian welfare in $\mathcal{C}$ can be computed (up to an arbitrary precision) in $\mathcal{O}\left( {\rm poly} ( n, \log \lambda ) \right)$ time; here $\lambda \in \mathbb{R}_+$ is the Lipschitz constant of the cut and eval queries. \end{restatable} \noindent {\bf Nash Social Welfare:} In Section~\ref{section:nsw} we show that, under MLRP, the problem of maximizing Nash social welfare admits a fully polynomial-time approximation scheme (FPTAS). \begin{restatable}{theorem}{NashSocialWelfare} \label{theorem:NSW} Let $\mathcal{C} =\langle [n], (f_i)_{i \in [n]} \rangle$ be a cake-division instance in which the value-density functions satisfy the monotone likelihood ratio property. Then, in the Robertson-Webb query model, an allocation with Nash social welfare at least $(1- \varepsilon)$ times the optimal (Nash social welfare) in $\mathcal{C}$ can be computed in time that is polynomial in $1/\varepsilon$, $\log \lambda$, and $n$; here $\lambda \in \mathbb{R}_+$ is the Lipschitz constant of the cut and eval queries.
\end{restatable} \section{Egalitarian Welfare} \label{section:max-min} This section presents an algorithm for maximizing egalitarian welfare in cake-division instances with MLRP (Theorem~\ref{theorem:Maxmin}). Towards this end, we will define a ``moving-knife'' procedure that leads to the desired algorithm. Given a target value $\tau >0$, we consider cut points obtained by iteratively selecting intervals that are of value $\tau$ to agents $1$ through $n$, respectively. That is, the first cut point $x_1$ is chosen to satisfy $v_1(0,x_1) = \tau$. Inductively, for $i \geq 2$, the cut point $x_i$ is selected such that $v_i(x_{i-1}, x_i ) = \tau$. Recall that the agents are indexed following the MLRP order. To specify these points, we recursively define $n$ functions, $\mathrm{MK}_i: \mathbb{R}_+ \mapsto [0,1]$, for $i \in [n]$: \begin{align*} \mathrm{MK}_1(\tau) &\coloneqq \mathrm{Cut}_1(0, \tau) \nonumber \\ \mathrm{MK}_i(\tau) &\coloneqq \mathrm{Cut}_i(\mathrm{MK}_{i-1}(\tau), \tau) \qquad \text{for } 2 \leq i \leq n \end{align*} Here, $\mathrm{MK}_n(\tau)$ denotes the last cut point that we obtain by executing the moving-knife procedure with target value $\tau \in [0,1]$ for every agent. Given that the $\mathrm{MK}_i$s can be expressed as a composition of cut queries, these functions can be efficiently computed in the Robertson-Webb query model. The following proposition establishes that each $\mathrm{MK}_i$ is monotone. \begin{proposition} \label{proposition:MKn} For any cake-division instance $\mathcal{C} = \langle [n], \{f_i\}_i \rangle$ and each $i \in [n]$, the function $\mathrm{MK}_i$ is monotonically increasing. \end{proposition} \begin{proof} Consider two target values $\tau, \tau' \in \mathbb{R}_+$ such that $\tau \leq \tau'$. We will show by inducting over $i \in [n]$ that $\mathrm{MK}_i(\tau) \leq \mathrm{MK}_i(\tau')$.
For the base case $i =1$, observe that $\mathrm{Cut}_1(0,\tau) \leq \mathrm{Cut}_1(0, \tau')$, since $f_1(x) \geq 0$ for all $x \in [0,1]$. In other words, we have $\mathrm{MK}_1(\tau) \leq \mathrm{MK}_1(\tau')$. Next, with the induction hypothesis $\mathrm{MK}_{i-1}(\tau) \leq \mathrm{MK}_{i-1}(\tau')$ in hand, we will establish $\mathrm{MK}_{i}(\tau) \leq \mathrm{MK}_{i}(\tau')$: \begin{align*} \mathrm{MK}_i(\tau) = \mathrm{Cut}_i(\mathrm{MK}_{i-1}(\tau), \tau) \leq \mathrm{Cut}_i(\mathrm{MK}_{i-1}(\tau'), \tau) \leq \mathrm{Cut}_i(\mathrm{MK}_{i-1}(\tau'), \tau') = \mathrm{MK}_{i}(\tau') \end{align*} Here, the first inequality follows from the induction hypothesis along with the fact that the value densities are nonnegative, and the second inequality holds since $\tau \leq \tau'$. This establishes the stated claim. \end{proof} We now establish the main result for egalitarian welfare. \Maxmin* \begin{proof} For the given cake-division instance $\mathcal{C}$, write $\mathcal{R}^* = \{R^*_1, R^*_2, \ldots, R^*_n\}$ to denote an allocation that maximizes egalitarian welfare among all allocations in $\mathcal{C}$. Without loss of generality, we can assume that $\mathcal{R}^*$ is Pareto optimal and conforms to the MLRP order (Lemma~\ref{theorem:POorder}). Write $\tau^* \coloneqq \mathrm{EW}(\mathcal{R}^*) = \min_i v_i(R^*_i)$. Note that $\mathrm{MK}_n(0) = 0$ and $\mathrm{MK}_n(1) = 1$; by convention, $\mathrm{Cut}_i( \ell, \tau)$ is truncated to $1$ iff $\tau$ is greater than the entire value to the right of $\ell$, i.e., if $v_i(\ell, 1) < \tau$. We will say that a target value $\tau$ is \emph{feasible} iff $v_i(\mathrm{MK}_{i-1}(\tau),\mathrm{MK}_i(\tau)) = \tau$ for all $i \in [n]$. That is, executing the moving-knife procedure with target value $\tau$ ensures that each agent receives an interval $[\mathrm{MK}_{i-1}(\tau), \mathrm{MK}_i (\tau)]$ of value $\tau$. Otherwise, we say that $\tau$ is \emph{infeasible}.
Note that if $\tau$ is infeasible, then for some $i \in [n]$ the cut point $\mathrm{MK}_i(\tau)$ gets truncated at $1$; in such a case, agent $i$ and subsequent agents $j >i$ do not receive intervals of value $\tau$. Hence, $\tau=0$ is a feasible target value, whereas $\tau=1$ is infeasible. Note that we can efficiently determine whether a given $\tau$ is feasible or not.\footnote{As mentioned previously, the functions $\mathrm{MK}_i$s can be computed efficiently in the Robertson-Webb model.} Proposition~\ref{proposition:MKn} implies that we have monotonicity with respect to the feasibility of target values. Specifically, ({\rm P}): for all $\tau \leq \tau'$, if $\tau$ is infeasible, then so is $\tau'$. To establish property ({\rm P}) we note that, for an infeasible $\tau$, there exists, by definition, an agent $i \in [n]$ such that $v_i(\mathrm{MK}_{i-1}(\tau), \mathrm{MK}_i(\tau)) < \tau$. This happens when $\mathrm{Cut}_i(\mathrm{MK}_{i-1}(\tau), \tau) = \mathrm{MK}_i(\tau)$ gets truncated to $1$. Applying Proposition~\ref{proposition:MKn} to agent $i-1$, with $\tau \leq \tau'$, we obtain $\mathrm{MK}_{i-1}(\tau) \leq \mathrm{MK}_{i-1}(\tau')$. Therefore, for agent $i$ the value of the remaining cake $[\mathrm{MK}_{i-1}(\tau'),1]$ is less than $\tau'$. Once again $\mathrm{Cut}_i(\mathrm{MK}_{i-1}(\tau'), \tau')$ gets truncated at $1$ and we obtain $v_i \left(\mathrm{MK}_{i-1}(\tau'), \mathrm{MK}_{i}(\tau') \right) < \tau'$, i.e., $\tau'$ is infeasible. With a precision parameter $\eta >0$ in hand, we perform binary search over integer multiples of ${\eta}$, i.e., over the set $\left\{k \eta \right\}_{k = 0}^{1/\eta}$. Recall that the values of the agents are normalized and, hence, $\tau^* \in [0,1]$.
Also, $\tau=0$ is feasible, while $\tau=1$ is infeasible. This switch in feasibility (between $0$ and $1$), along with property ({\rm P}), implies that there exists a unique index $k_0 \in \{0, 1, \ldots, 1/\eta \}$ with the property that $k_0 \eta$ is feasible and $(k_0+1) \eta$ is infeasible. We can identify $k_0$ by $\mathcal{O}\left( \log (1/\eta) \right)$ iterations of binary search. The relevant observation here is that $\tau^* \in [k_0 \eta, (k_0 + 1) \eta]$. Indeed, the optimal value $\tau^*$ is feasible: $\mathcal{R}^*=\{R^*_1, \ldots, R^*_n\}$ conforms to the MLRP order and $v_i(R^*_i) \geq \tau^*$ for all $i \in [n]$. Hence, property ({\rm P}) ensures that $\tau^*$ cannot be greater than the infeasible target $(k_0 + 1) \eta$. Furthermore, the optimality of $\tau^*$ gives us $ \tau^* \geq k_0 \eta$. For any feasible $\tau$, the moving-knife procedure provides an allocation with egalitarian welfare at least $\tau$. Specifically, for the computed index $k_0$, consider allocation $\widehat{\mathcal{R}} = \{\widehat{R}_1, \widehat{R}_2, \ldots, \widehat{R}_n\}$ in which $\widehat{R}_i = [\mathrm{MK}_{i-1}(k_0 \eta), \mathrm{MK}_i(k_0 \eta) ]$, for $i \in [n-1]$, and $\widehat{R}_n = \left[\mathrm{MK}_{n-1}(k_0 \eta), 1 \right]$, i.e., agent $n$ additionally receives the leftover piece $[\mathrm{MK}_{n}(k_0 \eta), 1]$. For notational convenience, here we set $\mathrm{MK}_0(k_0 \eta) = 0$. Feasibility of $k_0 \eta$ ensures that $v_i(\widehat{R}_i) \geq k_0 \eta \geq \tau^* - \eta$ for all $i \in [n]$. Therefore, we have $\mathrm{EW}(\widehat{\mathcal{R}}) \geq \tau^* - \eta$. For the runtime analysis, note that the binary search finds the desired index $k_0$ in $\mathcal{O}({\rm poly}(n, \log \frac{1}{\eta}))$ time. The dependency on $\lambda$ stems from the fact that the bit-complexity of the output (i.e., of the computed cut points) can be $\mathcal{O} \left( \log \frac{\lambda}{\eta}\right)$.
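Under property ({\rm P}), the binary search above amounts to locating the switch point of a monotone predicate. A minimal sketch, where the feasibility predicate is a stand-in rather than the paper's moving-knife test:

```python
def largest_feasible_index(feasible, eta):
    """Return k0 with feasible(k0 * eta) true and feasible((k0 + 1) * eta)
    false. Assumes feasible(0) holds and feasibility is monotone, i.e.,
    once the predicate turns false it stays false (property (P))."""
    lo, hi = 0, int(1 / eta)  # tau = 0 is feasible; tau = 1 is infeasible
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if feasible(mid * eta):
            lo = mid
        else:
            hi = mid
    return lo

# stand-in monotone predicate: targets up to (roughly) 0.375 are feasible
k0 = largest_feasible_index(lambda tau: tau <= 0.375, eta=0.01)  # k0 = 37
```

The loop maintains the invariant that `lo * eta` is feasible and `hi * eta` is infeasible, so it terminates after $\mathcal{O}(\log(1/\eta))$ feasibility tests.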
Overall, these arguments show that we can find an allocation $\widehat{\mathcal{R}}$ with egalitarian welfare $\eta$-close to the optimal in time $\mathcal{O}({\rm poly}(n, \log \lambda, \log \frac{1}{\eta}))$. The precision parameter $\eta$ can be driven exponentially close to zero in time that is polynomial in $\log \frac{1}{\eta}$ and, hence, the stated claim follows. \end{proof} \section{Implications of MLRP} \subsection{Proof of Lemma~\ref{lemma:R2R}} \label{appendix:proof-race-to-ratio} In this section we restate and prove Lemma~\ref{lemma:R2R}. \RacetotheRatios* \begin{proof} Given that $f_i$ and $f_j$ bear MLRP, we will first prove that they satisfy property (i). For $b \leq c$, we have \begin{align} \label{ineq:MLRP} \frac{f_{j}(x)}{f_i(x)} \leq \frac{f_{j}(b)}{f_i(b)} \leq \frac{f_{j}(y)}{f_i(y)} \quad \text{for all} \ x \in [a,b] \ \ \text{and for all} \ y \in [c,d] \end{align} Recall that MLRP value densities are, by definition, positively valued, $f_i(x) > 0$ for all $x \in [0,1]$. Therefore, inequality (\ref{ineq:MLRP}) gives us $f_{j}(x) \leq \ \frac{f_{j}(b)}{f_i(b)} \ f_i(x)$ for all $x \in [a,b]$. Integrating we obtain $\int \limits_a^b f_{j}(x) dx \leq \int \limits_a^b \frac{f_{j}(b)}{f_i(b)} \ f_i(x) dx$ and, hence,\footnote{Recall that the integral of a positive function is positive.} \begin{align} \frac{\int_a^b f_{j}(x)dx}{\int_a^b f_i(x)dx} \leq & \ \frac{f_{j}(b)}{f_i(b)} \label{ineq:left-sandwich} \end{align} Starting with inequality (\ref{ineq:MLRP}) and integrating over the interval $[c,d]$, we can also establish the following inequality \begin{align} \frac{f_{j}(b)}{f_i(b)} \leq \frac{\int_c^d f_{j}(x)dx}{\int_c^d f_i(x)dx} \label{ineq:right-sandwich} \end{align} Inequalities (\ref{ineq:left-sandwich}) and (\ref{ineq:right-sandwich}) lead to property (i): \begin{align*} \frac{\int_a^b f_{j}(x)dx}{\int_a^b f_i(x)dx} \leq \frac{\int_c^d f_{j}(x)dx}{\int_c^d f_i(x)dx}. \end{align*} Next we will prove that properties (i) and (ii) are equivalent.
Since $f_i$ and $f_j$ satisfy property (i), the equivalence of properties (i) and (ii) will imply that they satisfy property (ii) as well, thereby completing the proof. To establish that property (i) implies property (ii), we instantiate (i) over the intervals $[a,x]$ and $[x,b]$: $\frac{\int_a^x f_{j}}{\int_a^x f_{i}} \leq \frac{\int_x^b f_{j}}{\int_x^b f_{i}}$. Cross multiplying the terms\footnote{Recall that the value densities are strictly positive.} and adding one to both sides of the inequality gives us $\frac{\int_a^x f_{j}}{\int_x^b f_{j}} + 1 \leq \frac{\int_a^x f_{i}}{\int_x^b f_{i}} + 1$. Simplifying further we obtain $\frac{\int_a^x f_{j} + \int_x^b f_{j}}{\int_x^b f_{j}} \leq \frac{\int_a^x f_{i} + \int_x^b f_{i}}{\int_x^b f_{i}}$. Taking reciprocals of both sides (all the integrals involved are positive) gives us the desired bound \begin{align} \frac{\int_x^b f_{i}}{\int_a^b f_i} \leq \frac{\int_x^b f_{j}}{\int_a^b f_{j}} \label{ineq:prop-three} \end{align} For the reverse direction, i.e., (ii) implies (i), we begin by cross multiplying the terms in inequality (\ref{ineq:prop-three}); subtracting one from both sides then yields $\frac{\int_a^b f_{j}}{\int_x^b f_{j}} - 1 \leq \frac{\int_a^b f_i }{ \int_x^b f_i} - 1$. This inequality simplifies to $\frac{\int_a^x f_{j}}{\int_x^b f_{j}} \leq \frac{\int_a^x f_i}{\int_x^b f_i}$. That is, we obtain property (i) for intervals $[a,x]$ and $[x,b]$. Reapplying this bound (with $a$, $x$, and $b$ set appropriately) shows that (ii) implies (i). This completes the proof. \end{proof} \subsection{Efficiently Finding the MLRP Order} \label{appendix:find-mlrp-order} This section shows that, if in a cake-division instance the underlying value densities bear MLRP, then we can efficiently find the MLRP order in the Robertson-Webb query model.
Specifically, through eval queries $\{\mathrm{Eval}_i(1/2, 1)\}_{i\in [n]}$ we find the value that each agent $i \in [n]$ has for the piece $[1/2,1]$ of the cake, and sort the agents in increasing order of these values (with ties broken arbitrarily). Lemma~\ref{MLRPorder} shows that this sorting order is in fact the MLRP order of the underlying value densities. To prove this claim, we first state a well-known result (in Lemma~\ref{SD}) that MLRP implies first-order stochastic dominance; we provide a proof here for completeness. Recall that, given two probability density functions $f_i$ and $f_j$ over $[0,1]$, density $f_j$ is said to have \emph{first-order stochastic dominance} over $f_i$ iff $\int_t^1 f_j(x)dx \geq \int_t^1 f_i(x)dx$, for all $t \in [0,1]$, and there exists at least one $t' \in [0,1]$ such that $\int_{t'}^1 f_j(x)dx > \int_{t'}^1 f_i(x)dx$. \begin{lemma} \label{SD} Let $f_i$ and $f_j$ be two distinct (ordered) value-density functions that satisfy the monotone likelihood ratio property: for every $ 0 \leq x \leq y \leq 1$ we have $\frac{f_j(x)}{f_i(x)} \leq \frac{f_j(y)}{f_i(y)}$. Then, the density $f_j$ has first-order stochastic dominance over $f_i$. \end{lemma} \begin{proof} Given that $f_i$ and $f_j$ bear MLRP, we consider property (ii) of Lemma~\ref{lemma:R2R}, with $a = 0$, $b=1$, and $x=t$, for any $0 \leq t \leq 1$, to obtain ${\int \limits_t^1 f_i} \leq \int \limits_t^1 f_j $. Here, we use the fact that the valuations are normalized, $\int_0^1 f_i = \int_0^1 f_j = 1$. Furthermore, since the two densities are distinct, there exists a point $t' \in [0,1]$ such that $\int_{t'}^1 f_j $ is not equal to $\int_{t'}^1 f_i$. For such a point $t'$, combined with the weak inequality established above, the strict inequality must hold, $\int_{t'}^1 f_j > \int_{t'}^1 f_i$. \end{proof} Note that, since both MLRP and first-order stochastic dominance are transitive properties, the above lemma directly extends to a collection of probability density functions.
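Lemma~\ref{SD} can be sanity-checked numerically on a concrete MLRP pair. The normalized exponential densities below are illustrative choices, not from the paper: $f_\theta(x) \propto e^{\theta x}$ on $[0,1]$, whose likelihood ratio $e^{(\theta_j - \theta_i)x}$ is increasing whenever $\theta_j > \theta_i$.

```python
import math

def exp_density(theta):
    """Normalized exponential density on [0, 1]: e^{theta x} / norm."""
    norm = (math.exp(theta) - 1) / theta  # = integral of e^{theta x} over [0, 1]
    return lambda x: math.exp(theta * x) / norm

def tail(f, t, n=10**4):
    """Midpoint-rule approximation of the tail integral of f over [t, 1]."""
    step = (1 - t) / n
    return sum(f(t + (k + 0.5) * step) * step for k in range(n))

# MLRP order: f_i before f_j, since theta_i = 1 < 3 = theta_j
f_i, f_j = exp_density(1.0), exp_density(3.0)

# the likelihood ratio f_j / f_i should be non-decreasing ...
ratios = [f_j(x) / f_i(x) for x in (0.0, 0.25, 0.5, 0.75, 1.0)]
```

Per the lemma, every tail of $f_j$ should then dominate the corresponding tail of $f_i$; the checks below confirm both the monotone ratio and the tail dominance.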
Next we prove that, given a cake-division instance, with the promise that the underlying value densities satisfy MLRP, one can find the MLRP order efficiently. \begin{lemma} \label{MLRPorder} Let $\mathcal{C}$ be a cake-division instance in which the value-density functions satisfy the monotone likelihood ratio property. Then, in the Robertson-Webb query model, we can find the MLRP order of $\mathcal{C}$ in polynomial time. \end{lemma} \begin{proof} Through eval queries $\{\mathrm{Eval}_i(1/2, 1)\}_{i\in [n]}$, we find the value that each agent $i \in [n]$ has for the piece $[1/2,1]$ of the cake, and sort the agents in increasing order of these values (with ties broken arbitrarily). We will show that this sorting order, say $\pi :[n] \mapsto [n]$, is in fact the MLRP order of the underlying value densities. This directly provides an efficient algorithm for finding the MLRP order. Consider two agents $i, j \in [n]$ such that $i$ appears before $j$ in the MLRP order, i.e., the likelihood ratio $f_j(x)/f_i(x)$ is non-decreasing in $x \in [0,1]$. Below, we will address the settings in which the value densities of $i$ and $j$ are distinct. Otherwise, if the value densities are identical, then $\mathrm{Eval}_i(1/2, 1) = \mathrm{Eval}_j(1/2,1)$ and they will be placed appropriately in the sorting order $\pi$. Given that $f_i$ and $f_j$ are distinct, we get, via Lemma~\ref{SD}, that $f_j$ has first-order stochastic dominance over $f_i$. Therefore, agent $j$'s value for the piece $[1/2,1]$ is at least as large as agent $i$'s value for this piece, $\int_{1/2}^1 f_j \geq \int_{1/2}^1 f_i$, i.e., $\mathrm{Eval}_j(1/2, 1) \geq \mathrm{Eval}_i(1/2, 1)$. We will show that, under MLRP, this inequality is in fact strict and, hence, $\pi$ correctly identifies the MLRP order among each pair of agents $j$ and $i$.
The first-order stochastic dominance between $f_j$ and $f_i$ also ensures that there exists a point $t' \in [0,1]$ such that \begin{align} \int_{t'}^1 f_j > \int_{t'}^1 f_i \label{ineq:defn-t-prime} \end{align} We consider two complementary cases: $(i)$ $t'\leq 1/2$ and $(ii)$ $t'>1/2$. In both of these cases we assume, towards a contradiction, that $\int_{1/2}^1 f_j = \int_{1/2}^1 f_i$. \noindent \emph{Case (i):} $t' \leq 1/2$. Note that in this case inequality (\ref{ineq:defn-t-prime}) expands to $\int_{t'}^{1/2} f_j + \int_{1/2}^1 f_j > \int_{t'}^{1/2} f_i + \int_{1/2}^1 f_i$. Since $\int_{1/2}^1 f_j = \int_{1/2}^1 f_i$, we have $\int_{t'}^{1/2} f_j > \int_{t'}^{1/2} f_i$. On the other hand, applying Lemma~\ref{lemma:R2R}, property (i), to intervals $[t',1/2]$ and $[1/2,1]$ gives us $ \frac{\int_{t'}^{1/2} f_j}{\int_{t'}^{1/2} f_i} \leq \frac{\int_{1/2}^1 f_j}{\int_{1/2}^1 f_i} = 1$. This inequality leads to the desired contradiction, $\int_{t'}^{1/2} f_j \leq \int_{t'}^{1/2} f_i$. \noindent \emph{Case (ii):} $t' > 1/2$. Here, inequality (\ref{ineq:defn-t-prime}) and the assumed equality $\int_{1/2}^1 f_j = \int_{1/2}^1 f_i$ give us $\int_{1/2}^{t'} f_j < \int_{1/2}^{t'} f_i$. Note that (due to normalization) we have $\int_0^{1/2} f_j = \int_0^{1/2} f_i$. This equality contradicts an application of Lemma~\ref{lemma:R2R}, property (i), to the intervals $[0,1/2]$ and $[1/2, t']$: \begin{align*} \frac{\int_0^{1/2} f_j}{\int_0^{1/2} f_i} & \leq \frac{\int_{1/2}^{t'} f_j}{\int_{1/2}^{t'} f_i} < 1 \end{align*} Therefore, by sorting the values of the eval queries $\{\mathrm{Eval}_i(1/2, 1)\}_{i\in [n]}$, we can efficiently find the MLRP order. \end{proof} \subsection{Lipschitzness of Cut and Eval Queries under MLRP} \label{appendix:mlrp-lipschitz} Here, we prove that if the value densities satisfy MLRP, then the cut and eval queries of the corresponding cake-division instance are Lipschitz continuous.
Recall that the value densities $f_i$s are assumed to be Riemann integrable and hence, by definition, $f_i$s are bounded. Furthermore, as mentioned previously, MLRP mandates that the value densities are strictly positive over the cake. Therefore, for each agent $i \in [n]$ and $x \in [0,1]$ we have $0 < L \leq f_i(x) \leq U$ for some $L, U \in \mathbb{R}_+$. Proposition~\ref{Lip} below shows that we can express the Lipschitz constant $\lambda$ in terms of these bounding parameters, $\lambda \leq \max\{1/L, U, U/L\}$. This establishes the Lipschitz continuity of the cut and eval queries under MLRP. \begin{proposition} \label{Lip} Let $\mathcal{C} = \langle [n], \{f_i \}_{i \in [n] } \rangle$ be a cake-division instance in which the value densities are positively-bounded, i.e., there exist $L,U \in \mathbb{R}_+$ such that $0 < L\leq f_i(x) \leq U$ for each agent $i \in [n]$ and all $x \in [0,1]$. Then the cut and eval queries, $\{\mathrm{Cut}_i\}_i$ and $\{\mathrm{Eval}_i\}_i$, are $\lambda$-Lipschitz with $\lambda \leq \max\{1/L, U, U/L\}$. \end{proposition} \begin{proof} Fix an agent $i \in [n]$. First, we will prove that the eval query (function) $\mathrm{Eval}_i$ is $U$-Lipschitz. Then, we will prove that $\mathrm{Cut}_i$ is $\max\{1/L, U/L\}$-Lipschitz. These two results imply the stated claim. Recall that $\mathrm{Eval}_i: [0,1] \times [0,1] \mapsto \mathbb{R}_+$ is said to be $U$-Lipschitz iff $|\mathrm{Eval}_i(\ell, r) - \mathrm{Eval}_i(\ell', r')| \leq U \max\{ |\ell - \ell'|, |r-r'|\}$. We will establish this inequality component-wise. For a fixed first component $\ell_0 \in [0,1]$, consider the function $g(r) \coloneqq \mathrm{Eval}_i(\ell_0, r) = \int_{\ell_0}^r f_i(x) dx$ for $r \in [0,1]$. For $r, r' \in [0,1]$, the definition of $g$ allows us to write $|g(r) - g(r')| = |\int_r^{r'} f_i(x) dx|$. Since $f_i(x) \leq U$ for all $x \in [0,1]$, we obtain the inequality $|g(r) - g(r')| \leq U |r-r'|$.
Therefore, the function $g(r)$ and hence $\mathrm{Eval}_i(\ell,r)$ is $U$-Lipschitz in the second argument $r$. Similarly, considering the function $\widehat{g}(\ell) \coloneqq \mathrm{Eval}_i(\ell, r_0)$, with a fixed $r_0 \in [0,1]$, we obtain the desired inequality, $|\widehat{g}(\ell) - \widehat{g}(\ell')| = |\int_{\ell}^{\ell'} f_i(x) dx| \leq U |\ell-\ell'|$ for $\ell, \ell' \in [0,1]$. Hence, $\mathrm{Eval}_i(\ell, r)$ is $U$-Lipschitz in the first argument $\ell$ as well. The two parts imply that $\mathrm{Eval}_i$ is $U$-Lipschitz. For $\mathrm{Cut}_i(\ell, \tau)$ we again establish Lipschitz continuity by considering the two arguments $\ell$ and $\tau$ separately. Fix $\ell_0 \in [0,1]$ and for target values $\tau \leq \tau'$, let $y \coloneqq \mathrm{Cut}_i( \ell_0, \tau) $ and $y' \coloneqq \mathrm{Cut}_i( \ell_0, \tau')$. Note that $y \leq y'$ and $\tau' - \tau$ is equal to agent $i$'s value for the interval $[y, y']$. This value can be lower bounded by observing that $f_i(x) \geq L$, for all $x \in [0,1]$; specifically, agent $i$'s value for the interval $[y,y']$ is at least $L (y' - y)$. Therefore, we get that $\mathrm{Cut}_i( \ell_0, \tau') - \mathrm{Cut}_i( \ell_0, \tau) = y' - y \leq \frac{1}{L} (\tau' - \tau)$. That is, $\mathrm{Cut}_i(\ell, \tau)$ is $\frac{1}{L}$-Lipschitz in $\tau \in \mathbb{R}_+$. For the remaining analysis, fix target value $\tau_0 \in \mathbb{R}_+$, and for $\ell \leq \ell'$, let $y \coloneqq \mathrm{Cut}_i( \ell, \tau_0) $ and $y' \coloneqq \mathrm{Cut}_i( \ell', \tau_0)$. A useful observation here is that agent $i$'s value for the interval $[y, y']$ is equal to $i$'s value for the interval $[\ell, \ell']$, i.e., $v_i(y, y') = v_i(\ell, \ell')$. Specifically, if $y \leq \ell'$, then we can write $\delta$ to denote agent $i$'s value for the interval $[y, \ell']$ (i.e., $\delta \coloneqq v_i(y, \ell')$) and note that $v_i(\ell, \ell') = v_i(\ell, y) + v_i(y, \ell') = \tau_0 + \delta$. 
The equality then follows from observing that $v_i(y, y') = v_i(y, \ell') + v_i(\ell',y') = \delta + \tau_0$. Complementarily, if $y > \ell'$, then we write $\delta \coloneqq v_i(\ell', y)$ and use the following equations to obtain the desired equality: $v_i(\ell, \ell') = v_i(\ell, y) - v_i(\ell', y) = \tau_0 - \delta$ and $v_i(y, y') = v_i(\ell', y') - v_i(\ell', y) = \tau_0 - \delta$. The bounds on the value densities, $0< L \leq f_i(x) \leq U$, and the equality $v_i(y, y') = v_i(\ell, \ell')$ imply $L (y' - y) \leq v_i(y, y') = v_i(\ell, \ell') \leq U (\ell' - \ell)$. Hence, we have $ | \mathrm{Cut}_i( \ell', \tau_0) - \mathrm{Cut}_i( \ell, \tau_0)| = |y' - y| \leq \frac{U}{L} |\ell' - \ell|$. That is, $\mathrm{Cut}_i(\ell, \tau)$ is $\frac{U}{L}$-Lipschitz in $\ell \in [0,1]$. Combining the above-mentioned bounds we get that the $\mathrm{Cut}_i$ and $\mathrm{Eval}_i$ queries are $\lambda$-Lipschitz with $\lambda \leq \max \{U, 1/L, U/L\}$. \end{proof} \section{Notation and Preliminaries} \label{section:notations} This work studies the problem of dividing a cake $[0,1]$ among $n$ agents. Throughout, we will focus on a well-studied formulation of cake cutting which requires that each agent should receive a contiguous piece of the cake, i.e., the goal is to partition the cake $[0,1]$ into $n$ pairwise disjoint intervals and assign them among the $n$ agents in a fair/efficient manner. The cardinal preferences of each agent $i \in [n]$ are induced by a value-density function $f_i : [0,1] \mapsto \mathbb{R}_+$. Following standard conventions, we will assume that each value-density function $f_i$ is (Riemann) integrable. In particular, the (finite) integral of $f_i$ induces agent $i$'s valuation function over the intervals contained in $[0,1]$ (i.e., over the pieces of the cake): $v_i(I) \coloneqq \int_\ell^r f_i(x) \ {d}x$ denotes the value that agent $i \in [n]$ has for any interval $I =[\ell, r] \subset [0,1]$.
For notational convenience, we will write $v_i(a, b)$ to denote agent $i$'s value for interval $[a,b] \subseteq [0,1]$. The integrability\footnote{Our results hold for integrable value densities and do not necessarily require $f_i$s to be continuous. Recall that, by definition, Riemann integrable functions are bounded. Also, every (bounded) continuous function on a (bounded) interval is Riemann integrable, but the converse is not true.} and nonnegativity of value-densities $f_i$ imply that the corresponding valuations $v_i$ are (i) nonnegative, (ii) divisible: for every interval $[\ell,r]$ and parameter $ \kappa \in [0,1]$, there exists a $z \in [\ell,r]$ with the property that $v_i(\ell,z) = \kappa \ v_i(\ell,r)$,\footnote{This implication can be obtained by applying the intermediate value theorem to the antiderivative of $f_i$.} and (iii) sigma additive: $v_i(I \cup J) = v_i(I) + v_i(J)$, for all disjoint intervals $I, J \subset [0,1]$. The divisibility property ensures that the valuations are non-atomic, i.e., $v_i([x,x]) = 0$ for all $i \in [n]$ and $x \in [0,1]$. Furthermore, this property allows us, as a convention, to regard two intervals as disjoint even if they intersect exactly at an endpoint. We additionally assume that the valuations are normalized such that the value of the entire cake is equal to one for every agent $i \in [n]$, i.e., $\int_0^1 f_i(x)dx = v_i(0,1) =1$. Hence, the value-densities $f_i$s constitute probability density functions over the cake $[0,1]$. {We note that this is a standard assumption in the cake-cutting framework and we conform to it for the purpose of brevity. All of our results hold even without this normalization.} \\ \noindent {\bf Problem Instances:} A \emph{cake-division instance} $\mathcal{C}$ is a tuple $\langle [n], \{f_i \}_{i \in [n] } \rangle$ where $[n]=\{1,2, \ldots, n\}$ denotes the set of $n \in \mathbb{Z}_+$ agents and $f_i$s denote the value-density functions of the agents.
We will use the notation $(f_i)_{i \in [n]}$ to denote an ordered set of value-density functions of $n$ agents.\\ \noindent {\bf Robertson-Webb Query Model:} While, for exposition, we specify $f_i$s as part of the problem instance, our algorithms only require oracle access to the valuations. In particular, the developed algorithms operate under the Robertson-Webb model~\cite{robertson1998cake}, which supports oracle access to agents' valuations in the form of \emph{eval} and \emph{cut} queries: \\ \noindent (i) \emph{Eval queries:} for each agent $i \in [n]$, we have (blackbox) access to function $\mathrm{Eval}_i: [0,1] \times [0,1] \mapsto \mathbb{R}_+$, which when queried with any interval $[\ell, r]$ returns (in unit time) the value that agent $i$ has for this interval, i.e., $ \mathrm{Eval}_i (\ell, r) = v_i (\ell, r)$. \\ \noindent (ii) \emph{Cut queries:} for each agent $i \in [n]$, we can also query function $\mathrm{Cut}_i: [0,1] \times \mathbb{R}_+ \rightarrow [0,1]$, which given an initial point $\ell \in [0,1]$ and a target value $\tau \in \mathbb{R}_+$, returns $\mathrm{Cut}_i(\ell, \tau) = y$ where $y \in [0,1]$ is the leftmost point with the property that $v_i(\ell, y) = \tau$. {If for a given $\ell \in [0,1]$ and $\tau \in \mathbb{R}_+$, there does not exist a $y \in [\ell,1]$ such that $v_i(\ell, y) = \tau$, then we have, by convention, $\mathrm{Cut}_i(\ell, \tau) = 1$.} \\ \noindent {\bf Allocations and Cake Divisions:} As mentioned previously, the goal is to assign each agent a single interval. Towards this end, for any cake-division instance with $n$ agents, we define an \emph{allocation} to be a collection of $n$ pairwise-disjoint intervals, $\mathcal{I} = \{I_1, I_2, \ldots, I_n \}$, where interval $I_i$ is assigned to agent $i \in [n]$ and $\cup_{i \in [n]} \ I_i = [0,1]$. 
{Note that here the subscript of each interval identifies the unique agent who has been assigned this interval.} In addition, we will refer to a collection of pairwise-disjoint intervals $\mathcal{J} = \{J_1, J_2, \ldots, J_n\}$ as a \emph{partial allocation} if they do not cover the entire cake, $\cup_{i=1}^n J_i \subsetneq [0,1]$. For an allocation $\mathcal{I}=\{I_1, \ldots, I_n\}$, the endpoints of the constituent intervals will be referred to as the \emph{cut points} of $\mathcal{I}$, i.e., if $I_i = [x_{i-1}, x_i]$ for $ 1 \leq i \leq n$, then the cut points are $\{x_0 = 0, x_1, \ldots, x_n = 1\}$. We will throughout use the term allocation to specifically refer to partitions of the cake in which each agent receives a connected piece, i.e., receives exactly one interval. More generally, a \emph{cake division} will be used to denote partitions of the cake $\mathcal{D}=\{D_1, D_2, \ldots, D_n\}$ in which agent $i$ receives $D_i$, a finite collection of intervals. Here, the bundles $D_i$s are pairwise disjoint and their union covers the entire cake $[0,1]$. \\ \noindent In this work we develop algorithmic results for the following notions of fairness and economic efficiency. \noindent {\bf Envy-Freeness:} For a cake-division instance $\mathcal{C}$, an allocation $\mathcal{I} = \{I_1, \ldots, I_n \}$ is said to be \emph{envy-free} iff each agent prefers its own interval over that of any other agent, $v_i(I_i) \geq v_i(I_j)$ for all agents $i, j \in [n]$. \\ \noindent {\bf Pareto Optimality:} Given a cake-division instance $\mathcal{C}$, a division $\mathcal{D}=\{D_1, \ldots, D_n \}$ is said to \emph{Pareto dominate} another division $\mathcal{D}'=\{D'_1, \ldots, D'_n \}$ iff $v_i(D_i) \geq v_i(D'_i)$ for all agents $i \in [n]$ and there exists at least one agent $k \in [n]$ such that $v_k(D_k) > v_k(D'_k)$. Consequently, a cake division is said to be \emph{Pareto optimal} iff it is not Pareto dominated by any other division.
Recall that a cake division refers to a partition of the cake in which each agent receives a finite collection of intervals. By contrast, in an allocation each agent receives a single interval. The algorithms developed in this work compute allocations. Interestingly, though, the Pareto optimality guarantees achieved by our algorithms are stronger in the sense that optimality holds across all cake divisions; specifically, under MLRP, we establish that particular allocations are Pareto optimal not only among the set of all allocations but also among all cake divisions. \\ \noindent {\bf Social Welfare:} Social welfare is a standard measure of collective value. Specifically, \emph{social welfare} for an allocation $\mathcal{I} = \{I_1, \ldots, I_n\}$ is defined to be the sum of the agents' valuations, $\mathrm{SW}(\mathcal{I}) \coloneqq \sum_{i=1}^n v_i(I_i)$. \\ \noindent {\bf Egalitarian (Rawlsian) Welfare:} For an allocation $\mathcal{I} = \{I_1, \ldots, I_n\}$, the \emph{egalitarian welfare} is defined as the minimum value achieved across the agents, $\mathrm{EW}(\mathcal{I}) \coloneqq \min_{i \in [n]} v_i(I_i)$. \\ \noindent {\bf Nash Social Welfare:} For an allocation $\mathcal{I}=\{I_1, \ldots, I_n\}$, the \emph{Nash social welfare} is defined to be the geometric mean of the agents' valuations, $\mathrm{NSW}(\mathcal{I}) \coloneqq \big( \prod_{i=1}^n v_i(I_i) \big)^{1/n}$. \\ Finding allocations that maximize the above-mentioned welfare notions is known to be {\rm APX}-hard, in general; see, e.g.,~\cite{aumann2013computing, arunachaleswaran2019fair}. Complementing these negative results, a key contribution of this work is to identify a broad class of cake-division instances that admit strong algorithmic results for these welfare objectives and envy-freeness. Specifically, we focus on value densities (distributions) that satisfy the {monotone likelihood ratio property} (MLRP).
We will next define this property and note that our results hold for multiple distribution families that satisfy MLRP. \\ \noindent {\bf Monotone Likelihood Ratio Property:} Probability density functions $f_i$ and $f_j$ (in order) are said to satisfy the \emph{monotone likelihood ratio property} (MLRP) iff, for every $x \leq y$ in the domain, we have $\nicefrac{f_j(x)}{f_i(x)} \leq \nicefrac{f_j(y)}{f_i(y)}$. That is, the {likelihood ratio} $\nicefrac{f_j(x)}{f_i(x)}$ is non-decreasing in the argument $x \in \mathbb{R}$. Note that MLRP does not require $f_i$ and $f_j$ to be monotonic (or unimodal) by themselves. We also observe that MLRP is transitive: if two pairs of distributions $(f_i, f_j)$ and $(f_j, f_k)$ satisfy MLRP separately, then the pair $(f_i, f_k)$ also conforms to MLRP. Furthermore, this property continues to hold under positive scaling: if $f_i$ and $f_j$ satisfy MLRP, then so do $\gamma_i f_i$ and $\gamma_j f_j$, for any positive scalars $\gamma_i, \gamma_j \in \mathbb{R}_+$. This fact, in particular, allows us to restrict MLRP densities (which are typically defined over the real line) to the cake $[0,1]$ and, at the same time, assume normalization $\int_{0}^1 f_i = 1$. In the cake-division context, we will say that a given collection $\{f_i\}_{i \in [n]}$ of value-density functions satisfies the \emph{monotone likelihood ratio property} iff there exists an order $\pi: [n] \rightarrow [n]$, among the $f_i$s, such that, for all $i \in [n-1]$, the consecutive likelihood ratios $\frac{f_{\pi (i+1)} \ (x)}{f_{\pi(i)} \ (x)}$ are non-decreasing in $x \in [0,1]$. That is, for each $i \in [n-1]$, the densities $f_{\pi(i)}$ and $f_{\pi(i+1)}$ bear MLRP over $[0,1]$. We will refer to this order $\pi$ as the {MLRP order} of the value densities. 
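For linear value densities, both the pairwise MLRP condition and the Eval-based ordering (Lemma~\ref{MLRPorder}) admit closed forms, which gives a quick numeric illustration. The coefficients below are hypothetical; the check assumes normalized linear densities $f(x) = ax + b$ on $[0,1]$ with $a/2 + b = 1$ and $b, a + b > 0$.

```python
def mlrp_pair(ai, bi, aj, bj):
    """True iff linear densities f_i = ai*x + bi and f_j = aj*x + bj, in this
    order, satisfy MLRP: the derivative of (aj*x + bj) / (ai*x + bi) has the
    sign of aj*bi - ai*bj, so the ratio is non-decreasing iff that is >= 0."""
    return aj * bi - ai * bj >= 0

def eval_half(a, b):
    """Closed form for Eval(1/2, 1) = integral of a*x + b over [1/2, 1]."""
    return 3 * a / 8 + b / 2

# hypothetical agents: slopes a, with b = 1 - a/2 so each density integrates to 1
slopes = [0.8, -0.4, 0.0, 1.2]
agents = [(a, 1 - a / 2) for a in slopes]

# sorting by the value of the piece [1/2, 1] recovers the MLRP order
order = sorted(agents, key=lambda ab: eval_half(*ab))
```

Since $b = 1 - a/2$ here, $\mathrm{Eval}(1/2,1) = 1/2 + a/8$, so the sort orders the agents by increasing slope, and every consecutive pair in `order` indeed satisfies the pairwise MLRP test.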
Lemma~\ref{MLRPorder} (in Appendix~\ref{appendix:find-mlrp-order}) shows that, given a cake-division instance in the Robertson-Webb query model, with the promise that the underlying value densities satisfy MLRP (i.e., given a promise problem), one can efficiently find the MLRP order $\pi$. Hence, without loss of generality, we will throughout assume that the $n$ agents are indexed such that $\pi$ is the identity permutation, i.e., for all $i \in [n-1]$, the likelihood ratio $\frac{f_{i+1} (x)}{f_i(x)}$ is non-decreasing in $x \in [0,1]$. It is relevant to note that, to be well defined, MLRP requires the value densities $f_i$s to be strictly positive over the cake $[0,1]$. Hence, for cake-division instances $\langle [n], \{f_i\}_{i \in [n]} \rangle$ with MLRP, we have $f_i(x) >0$ for all $i \in [n]$ and $x \in [0,1]$. \\ \noindent {\bf Instantiations of MLRP:} MLRP induces a total order on linear value densities $f_i(x) = a_i x + b_i$; see Appendix~\ref{appendix:mlrp-use-cases} for details. Hence, our results imply that if, in a cake-division instance, the value densities of all the agents are linear, then an envy-free (or welfare-maximizing) allocation can be computed efficiently. Many other distribution families are also known to bear MLRP, e.g., Gaussian distributions (with the same variance), Poisson distributions, and single-parameter exponentials. Therefore, our algorithmic results address, in particular, cake-division instances wherein all the agents have Gaussian value densities with the same variance, but different means. In fact, it is known that linear translations of any log-concave function $g$---i.e., densities of the form $f_{\theta}(x) \coloneqq g(x-\theta)$, for $\theta \in \mathbb{R}$---satisfy MLRP~\cite{saumard2014log}. Hence, our results also hold for linear translations of the following (log-concave) distributions: Laplace, uniform, multivariate Gaussian, gamma, beta, Subbotin, chi-square, Dirichlet, and logistic. 
These instantiations substantiate the applicability of our algorithmic results which, through MLRP, address a wide range of cake-division instances. \\ \noindent {\bf Lipschitz Constant of {Cut} and {Eval} Queries:} We say that the cut and eval queries in a cake-division instance are $\lambda$-Lipschitz iff the following inequalities hold for each agent $i \in [n]$: \begin{align*} |\mathrm{Eval}_i(\ell',r') - \mathrm{Eval}_i(\ell,r)| & \leq \lambda \ \| (\ell',r') - (\ell,r) \|_{\infty} \ \ \quad \text{for all} \ \ (\ell',r'), (\ell,r) \in [0,1] \times [0,1]\\ |\mathrm{Cut}_i(\ell',\tau') - \mathrm{Cut}_i(\ell,\tau)| & \leq \lambda \ \| (\ell',\tau') - (\ell,\tau) \|_{\infty} \ \ \quad \text{for all} \ \ (\ell',\tau'), (\ell,\tau) \in [0,1] \times \mathbb{R}_+ \end{align*} A useful consequence of MLRP (and the integrability of the value densities) is that the corresponding {cut} and {eval} queries are in fact $\lambda$-Lipschitz, for a finite $\lambda \geq 1$;\footnote{If a function is $\lambda'$-Lipschitz then, it is $\lambda$-Lipschitz as well, for all $\lambda \geq \lambda'$.} see Proposition~\ref{Lip}. This proposition follows from the fact that (Riemann) integrable value densities $f_i$s are, by definition, bounded. Furthermore, as mentioned previously, MLRP mandates that the value densities are strictly positive over the cake. Therefore, for each agent $i \in [n]$ and $x \in [0,1]$ we have $0 < L \leq f_i(x) \leq U$ for some $L, U \in \mathbb{R}_+$. Proposition~\ref{Lip} asserts that the Lipschitz constant $\lambda$ can be expressed in terms of these bounding parameters, $\lambda \leq \max\{1/L, U, U/L\}$. It is worth pointing out that besides MLRP (and, hence, the positivity of the value densities), all the other assumptions made in this work are standard. 
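A small worked example of the bound $\lambda \leq \max\{1/L, U, U/L\}$ from Proposition~\ref{Lip}, using an illustrative density not taken from the paper: $f(x) = x + 1/2$ on $[0,1]$ is normalized and linear, with $L = 1/2$ and $U = 3/2$, so $\lambda \leq 3$.

```python
def lipschitz_bound(L, U):
    """Upper bound on the Lipschitz constant of cut/eval, per Proposition Lip."""
    return max(1 / L, U, U / L)

def eval_query(ell, r):
    """Closed form of Eval(ell, r) = integral of (x + 1/2) over [ell, r]."""
    return (r * r - ell * ell) / 2 + (r - ell) / 2

lam = lipschitz_bound(0.5, 1.5)  # = 3

# perturb the query arguments and record the change in the answer
pairs = [((0.0, 0.4), (0.0, 0.5)),    # right endpoint moved by 0.1
         ((0.2, 0.9), (0.25, 0.9))]   # left endpoint moved by 0.05
gaps = [abs(eval_query(*p) - eval_query(*q)) for p, q in pairs]
```

In both perturbations the change in the eval answer stays within $\lambda$ times the size of the input perturbation, as the proposition guarantees.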
The time complexities of our algorithms depend polynomially on the bit complexity of the Lipschitz constant $\lambda \geq 1$.\footnote{In the case of linear value densities, $f_i(x) = a_i x + b_i$, the bit complexity of the Lipschitz constant $\lambda$ is proportional to the bit complexity of the coefficients $a_i$s and $b_i$s (Proposition~\ref{Lip}). Hence, if linear densities are explicitly given as input, then we have a polynomial (in the input size) runtime bound.} As mentioned previously, in general, such a runtime dependency on $\log \lambda$ is unavoidable; Appendix~\ref{appendix:example-precision-loss} provides an illustrative example. Furthermore, it is possible---even with rational value densities that satisfy MLRP---that the exact envy-free/welfare-maximizing allocations are induced by irrational cuts (Appendix~\ref{appendix:example-precision-loss}). That is, in general, one cannot expect an efficient algorithm that outputs an {exact} envy-free (or welfare-maximizing) allocation. Therefore, when considering efficient algorithms for cake division, a precision loss is inevitable. However, our algorithms ensure that this loss is arbitrarily small. Specifically, given a cake-division instance $\mathcal{C}$ in which the value densities satisfy MLRP, we can find, in time that is polynomial in $\log (1/\eta)$ (along with $n$ and $\log \lambda$), an envy-free allocation $\mathcal{I} = \{I_1,\ldots, I_n \}$ such that $v_i(I_i) \geq v_i(I_j) - \eta$ for all agents $i, j \in [n]$. Since the precision parameter $\eta$ can be driven exponentially close to zero in polynomial (in the bit complexity of $\eta$) time, we will say that an envy-free allocation can be computed {up to an arbitrary precision}.
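The following sketch illustrates both phenomena on a hypothetical one-agent example with density $f(x) = 2x$: the exact half-value cut is the irrational point $1/\sqrt{2}$, and binary search reaches precision $\eta$ in roughly $\log_2(1/\eta)$ iterations, i.e., in time polynomial in the bit complexity of $\eta$:

```python
def bisect_cut(value, target, eta):
    """Binary search for x in [0, 1] with value(x) = target, up to precision
    eta; each iteration halves the interval, so ~log2(1/eta) steps suffice."""
    lo, hi, steps = 0.0, 1.0, 0
    while hi - lo > eta:
        mid = (lo + hi) / 2
        if value(mid) < target:
            lo = mid
        else:
            hi = mid
        steps += 1
    return (lo + hi) / 2, steps

# Hypothetical density f(x) = 2x, so v(0, x) = x^2.  The point splitting the
# cake into two equal-value halves is 1/sqrt(2): irrational, even though the
# density has integer coefficients.
eta = 1e-9
x, steps = bisect_cut(lambda z: z * z, 0.5, eta)
```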
Similarly, in the context of maximizing (social or egalitarian) welfare, given a cake-division instance wherein the value densities bear MLRP, we can find---in time that is polynomial in $ \log(1/\eta)$---an allocation whose (social or egalitarian) welfare is $\eta$ (additively) close to the optimal. Hence, as in the case of envy-freeness, we assert that a welfare-maximizing allocation can be computed efficiently, up to an arbitrary precision. \section{Nash Social Welfare } \label{section:nsw} This section presents an FPTAS for maximizing Nash social welfare in cake-division instances with MLRP. \NashSocialWelfare* \begin{proof} Given a cake-division instance $\mathcal{C}$ with MLRP, write $\mathcal{A}^* = \{A^*_1, A^*_2, \dots, A^*_n\}$ to denote an allocation that maximizes the Nash social welfare in $\mathcal{C}$. Arunachaleswaran et al.~\cite{arunachaleswaran2019fair} have shown that the bundles in any Nash optimal allocation $\mathcal{A}^*= \{A^*_1, A^*_2, \dots, A^*_n\}$ satisfy $v_i(A^*_i) \geq \frac{1}{4} v_i(A^*_j)$ for all $i,j \in[n]$; this result holds even in the absence of MLRP. For a fixed agent $i \in [n]$, we sum the inequalities $v_i(A^*_i) \geq \frac{1}{4} v_i(A^*_j)$ over all $j \in [n]$ to obtain $v_i(A^*_i) \geq \frac{1}{4n}$. {Recall that $v_i(0,1)=1$ for all $i \in [n]$.} For an approximation parameter $\varepsilon >0$, we consider points $c_0= 0 < c_1 < c_2 < \ldots < c_N=1$ such that $v_i(c_{t-1}, c_{t}) \leq \frac{\varepsilon}{8n}$ for all $t \in [N]$ and for all agents $i \in [n]$. In particular, we start with $c_0=0$ and iteratively select points $c_t \coloneqq \min_i \ \mathrm{Cut}_i \left(c_{t-1}, \frac{\varepsilon}{8n} \right)$, for all $t \geq 1$. Note that the total number of selected points satisfies $N \leq \frac{8 n^2}{\varepsilon}$. We round each cut point in the optimal allocation $\mathcal{A}^*$ to its closest point in the collection $\{c_t\}_{t=0}^N$.
This leads us to another allocation, $\widehat{\mathcal{A}} = \{\widehat{A}_1, \widehat{A}_2, \ldots, \widehat{A}_n\}$, wherein interval $\widehat{A}_i$ is assigned to agent $i \in [n]$. By construction, the cut points of $\widehat{\mathcal{A}}$ are contained in the set $\{c_t\}_{t=0}^N$. Also, since $\mathcal{A}^*$ conforms to the MLRP order (Lemma~\ref{theorem:POorder}), so does $\widehat{\mathcal{A}}$. Next we show that the Nash social welfare of $\widehat{\mathcal{A}}$ is comparable to that of $\mathcal{A}^*$. For all $i \in [n]$, we have \begin{align*} v_i(\widehat{A}_i) &\geq v_i(A^*_i) - \frac{\varepsilon}{4n} \tag{since $v_i(c_{t-1}, c_{t}) \leq \frac{\varepsilon}{8n}$ for all $t \in [N]$}\\ & \geq v_i(A^*_i) - \varepsilon v_i(A^*_i) \tag{since $v_i(A^*_i) \geq \frac{1}{4n}$ } \\ & \geq (1 - \varepsilon) \ v_i(A^*_i) \end{align*} Multiplying the above inequality over $i\in [n]$, we obtain $\mathrm{NSW}(\widehat{\mathcal{A}}) = \left( \prod_{i=1}^n v_i(\widehat{A}_i) \right)^{1/n} \geq (1-\varepsilon) \left( \prod_{i=1}^n v_i(A^*_i) \right)^{1/n} = (1 - \varepsilon) \ \mathrm{NSW}(\mathcal{A}^*)$. Therefore, there exists an allocation $\widehat{\mathcal{A}}$ with the properties that (i) $\widehat{\mathcal{A}}$ has Nash social welfare at least $(1- \varepsilon)$ times the optimal (Nash social welfare), (ii) the cut points of $\widehat{\mathcal{A}}$ are contained in the set $\{ c_t \}_t$, and (iii) $\widehat{\mathcal{A}}$ conforms to the MLRP order. To complete the proof of the theorem we will show that, among all the allocations that satisfy (ii) and (iii), we can find (via a dynamic program) one that maximizes the Nash social welfare. For $t \in [N]$ and $k \in [n]$, we write $H(k,t)$ to denote the optimal Nash product (i.e., the product of valuations) that one can achieve by allocating the interval $[0,c_t]$ among the first $k$ agents (in order). 
\begin{align*} H(k,t) \coloneqq \max_{1 \leq t' \leq t} \left\{ H(k-1, t') \cdot v_k(c_{t'},c_t) \right\} \end{align*} Here, we initialize the dynamic program by setting $H(1,t) \coloneqq v_1(0,c_t)$ for all $1 \leq t \leq N$. One can show, via induction, that $H(n, N)$ is the optimal Nash product among allocations that satisfy properties (ii) and (iii). That is, $(H(n, N))^{1/n} \geq (1 - \varepsilon) \ \mathrm{NSW}(\mathcal{A}^*)$. Hence, the dynamic program computes the desired allocation using $\mathcal{O}(n^2 N)$ eval queries. Therefore, we can find an allocation with Nash social welfare at least $(1- \varepsilon)$ times the optimal in $\mathcal{O} \left( {\rm poly} \left(n, 1/\varepsilon, \log \lambda \right) \right)$ time; the dependency on $\log \lambda$ stems from the fact that the bit complexity of the output (i.e., of the computed cut points) can be $\mathcal{O} \left( \log \frac{\lambda}{\varepsilon}\right)$. Overall, we get that maximizing Nash social welfare admits an FPTAS under MLRP. \end{proof} \section{Pareto Optimality} This section shows that, with MLRP in hand, one does not lose out on Pareto optimality by imposing the contiguity requirement. That is, under MLRP, there always exist allocations (i.e., cake divisions with connected pieces) that are Pareto optimal among all cake divisions, with or without connected pieces. Moreover, such allocations conform to the MLRP order. This structural result implies that for maximizing welfare we can restrict attention to allocations wherein the intervals are assigned (left to right on the cake) in accordance with the MLRP order. Intuitively, this leads us to a welfare-maximizing algorithm---specifically, a dynamic program---that recursively finds optimal allocations for intervals placed at the left end of the cake (i.e., for intervals of the form $[0,x]$); see Section~\ref{section:social welfare} and Appendix~\ref{section:nsw} for details.
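As an aside, the dynamic program from the previous section can be sketched directly. The code below is a sketch, not the paper's implementation: it uses hypothetical linear densities in MLRP order, a fixed uniform grid standing in for the cut-query grid $\{c_t\}_t$, and a brute-force table fill; it computes the optimal Nash product over grid-aligned allocations that conform to the agent order and recovers the cut points via back-pointers:

```python
def nsw_dp(vals, grid):
    """Maximize the Nash product over allocations whose cut points lie on
    `grid` and whose intervals are assigned left to right in the fixed agent
    order; vals[k](a, b) returns agent k's value for the interval [a, b]."""
    n, N = len(vals), len(grid) - 1
    H = [[0.0] * (N + 1) for _ in range(n)]       # H[k][t]: best Nash product
    back = [[0] * (N + 1) for _ in range(n)]      # arg max for backtracking
    for t in range(N + 1):
        H[0][t] = vals[0](grid[0], grid[t])
    for k in range(1, n):
        for t in range(N + 1):
            best, arg = 0.0, 0
            for tp in range(t + 1):
                cand = H[k - 1][tp] * vals[k](grid[tp], grid[t])
                if cand > best:
                    best, arg = cand, tp
            H[k][t], back[k][t] = best, arg
    cuts, t = [grid[N]], N                        # recover the cut points
    for k in range(n - 1, 0, -1):
        t = back[k][t]
        cuts.append(grid[t])
    cuts.append(grid[0])
    return H[n - 1][N], cuts[::-1]

# Hypothetical MLRP pair: f_1(x) = 2 - 2x, f_2(x) = 2x (ratio x/(1-x) rises).
v1 = lambda a, b: 2 * (b - a) - (b * b - a * a)   # integral of f_1 over [a, b]
v2 = lambda a, b: b * b - a * a                   # integral of f_2 over [a, b]

best, cuts = nsw_dp([v1, v2], [t / 10 for t in range(11)])
```

For this pair the continuous optimum places the single cut at $1/2$, which the grid happens to contain, so the sketch recovers it exactly.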
Subsequently, Section~\ref{section:weller} (Theorem~\ref{theorem:ef-po}) establishes a strong connection between fairness and (Pareto) efficiency in the MLRP context: if the value densities bear MLRP, then \emph{every} envy-free allocation is necessarily Pareto optimal. \begin{restatable}{lemma}{MLRPorder} \label{theorem:POorder} Let $\mathcal{C} =\langle [n], (f_i)_i \rangle$ be a cake-division instance in which the value densities satisfy the monotone likelihood ratio property. Then, for every cake division $\mathcal{D}=\{D_1, \ldots, D_n\}$ in $\mathcal{C}$ there exists an allocation $\mathcal{J} = \{J_1, \ldots, J_n \}$ such that $v_i(J_i) \geq v_i(D_i)$, for $1 \leq i \leq n$. Furthermore, for every Pareto optimal allocation $\mathcal{I} = \{I_1, \ldots, I_n \}$ in $\mathcal{C}$, there exists an allocation $\mathcal{I'} = \{I'_1, \ldots, I'_n \}$ with $v_i(I'_i) = v_i(I_i)$ that conforms to the MLRP order, i.e., if $ \ f_{i+1}/f_i $ is nondecreasing in $[0,1]$, then the interval assigned to agent $i$ (i.e., $I'_i$) appears to the left of the interval assigned to the agent $i+1$ (i.e., $I'_{i+1}$). \end{restatable} \begin{proof} Consider a cake division $\mathcal{D}=\{D_1, \ldots, D_n\}$ wherein two consecutive intervals are assigned violating the MLRP order: say, interval $[p,q]$ is assigned to agent $j$ (i.e., this interval is contained in the bundle $D_j$), the adjacent interval $[q,r]$ is assigned to agent $i$, and agent $i$ appears before $j$ in the MLRP order ($f_{j}/f_i$ is non-decreasing over $[0,1]$). We will show that in such a case there always exists a point $q' \in [p, r]$ such that $v_i(p,q') \geq v_i(q, r)$ and $v_j(q', r) \geq v_j(p,q)$. That is, one can swap the allocation order between $i$ and $j$ (in the interval $[p,q] \cup [q, r]$) without decreasing the agents' values. 
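As a numeric aside (not part of the proof), the swap can be checked for a hypothetical MLRP pair $f_i(x) = 2 - 2x$ and $f_j(x) = 2x$, for which $f_j/f_i = x/(1-x)$ is increasing. The sketch below finds $q'$ by bisection and verifies that neither agent's value decreases:

```python
def value(density, a, b, steps=4000):
    """Midpoint-rule valuation of [a, b] (exact for linear densities)."""
    h = (b - a) / steps
    return sum(density(a + (j + 0.5) * h) for j in range(steps)) * h

def swap_point(fi, fj, p, q, r, iters=60):
    """Find q' in [p, r] with v_j(p, q') / v_j(p, r) = v_i(q, r) / v_i(p, r)."""
    beta = value(fi, q, r) / value(fi, p, r)
    lo, hi = p, r
    for _ in range(iters):
        mid = (lo + hi) / 2
        if value(fj, p, mid) / value(fj, p, r) < beta:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

fi = lambda x: 2 - 2 * x      # agent i's density
fj = lambda x: 2 * x          # agent j's density; f_j / f_i is increasing

p, q, r = 0.0, 0.3, 1.0       # before: j holds [p, q], i holds [q, r]
qp = swap_point(fi, fj, p, q, r)
# After the swap, i holds [p, q'] and j holds [q', r]; neither value drops.
i_ok = value(fi, p, qp) >= value(fi, q, r) - 1e-6
j_ok = value(fj, qp, r) >= value(fj, p, q) - 1e-6
```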
Moreover, we note that, if $f_j/f_i$ is strictly increasing in the interval $[p,q] \cup [q, r]$, then this update leads to a strict increase in agent $i$'s or agent $j$'s value. Hence, starting with any cake division $\mathcal{D}=\{D_1, \ldots, D_n\}$, we can repeatedly apply the above-mentioned resolution towards the MLRP order and obtain an \emph{allocation} $\mathcal{J} = \{J_1, \ldots, J_n \}$ with the desired property, $v_i(J_i) \geq v_i(D_i)$ for all $i \in [n]$. Note that this resolution process also establishes the second part of the lemma, i.e., for every Pareto optimal allocation $\mathcal{I}$, there exists an allocation $\mathcal{I'}$ with $v_i(I'_i) = v_i(I_i)$ that conforms to the MLRP order. The remainder of the proof constructs the desired point $q' \in [p,r]$. In particular, we will identify $q'$ such that assigning interval $[p,q']$ to agent $i$ (instead of $[q,r]$) and assigning $[q',r]$ to agent $j$ (instead of $[p,q]$) preserves (or increases) the agents' values. Write $\beta_i \coloneqq \frac{v_i(q,r)}{v_i(p,r)}$ to denote the normalized value of agent $i$ under the initial assignment. Define $q' \in [p,r]$ to be the point that satisfies \begin{align} \frac{v_j(p,q')}{v_j(p,r)} = \beta_i \label{eq:defn-q-prime} \end{align} Since the value densities satisfy MLRP, property (ii) of Lemma~\ref{lemma:R2R}, applied to the interval $[p,r]$ and $q' \in [p,r]$, gives us $\frac{v_i(q',r)}{v_i(p,r)} \leq \frac{v_j(q',r)}{v_j(p,r)} $.
Simplifying further we obtain $1- \frac{v_j(q',r)}{v_j(p,r)} \leq 1- \frac{v_i(q',r)}{v_i(p,r)}$, i.e., \begin{align} \frac{v_j(p,q')}{v_j(p,r)} \leq \frac{v_i(p,q')}{v_i(p,r)} \label{ineq:norm-2} \end{align} Therefore, we obtain a value bound for agent $i$ \begin{align*} \frac{v_i(p,q')}{v_i(p,r)} \geq \frac{v_j(p,q')}{v_j(p,r)} & = \beta_i = \frac{v_i(q, r)}{v_i(p,r)} \tag{via equations (\ref{ineq:norm-2}), (\ref{eq:defn-q-prime}) and the definition of $\beta_i$} \end{align*} That is, agent $i$'s value is preserved through the reassignment, $v_i(p,q') \geq v_i(q,r)$. For agent $j$, via property (ii) of Lemma~\ref{lemma:R2R}, with interval $[p,r]$ and $q \in [p,r]$, we have \begin{align*} \frac{v_j(q,r)}{v_j(p,r)} \geq \frac{v_i(q,r)}{v_i(p,r)} & = \beta_i = \frac{v_j(p,q')}{v_j(p,r)} \tag{by the definition of $\beta_i$ and equation (\ref{eq:defn-q-prime})} \end{align*} This inequality reduces to $1 - \frac{v_j(p,q')}{v_j(p, r)} \geq 1 - \frac{v_j(q,r)}{v_j(p, r)}$, i.e., $\frac{v_j(p, r) - v_j(p,q')}{v_j(p, r)} \geq \frac{v_j(p, r) - v_j(q,r)}{v_j(p, r)}$. Therefore, we have the desired inequality $v_j(q',r) \geq v_j(p,q)$ and the stated claims follow. \end{proof} \begin{remark} For cake-division instances with MLRP, we can prove that any allocation $\mathcal{K}$ that conforms to the MLRP order is Pareto optimal (over the set of all cake divisions). Write $0=k_0<k_1< \dots <k_n=1$ to denote the cut points of $\mathcal{K}$. For contradiction, we assume that $\mathcal{K}$ is not Pareto optimal. That is, there exists a cake division $\mathcal{L}$ that dominates $\mathcal{K}$ and is Pareto optimal. By Lemma~\ref{theorem:POorder}, we know there exists another Pareto optimal allocation $\mathcal{M}$ with $v_i(M_i)=v_i(L_i)$ for all $i\in[n]$ that conforms to the MLRP order. Write $0=m_0<m_1< \dots <m_n=1$ to denote the cut points of $\mathcal{M}$.
Since $\mathcal{M}$ Pareto dominates $\mathcal{K}$ and both allocations conform to the MLRP order, we have $k_i \leq m_i$ for all $i\in[n]$, with at least one strict inequality. This contradicts the fact that $k_n=m_n=1$. Therefore, $\mathcal{K}$ is Pareto optimal. \end{remark} \subsection{Extending Weller's Theorem} \label{section:weller} Weller's theorem~\cite{weller1985fair} is a notable result in the cake-cutting literature: it asserts that there always exists some cake division---though, not necessarily with connected pieces---which is both envy-free (fair) and Pareto optimal. While this theorem holds in general, it does not guarantee that envy-freeness and Pareto optimality can be achieved together with allocations. Indeed, there are cake-division instances wherein none of the envy-free {allocations} are Pareto optimal. We show that, by contrast, under MLRP {every} envy-free allocation is Pareto optimal among all cake divisions (Theorem~\ref{theorem:ef-po}). Therefore, given a cake-division instance with MLRP, the allocation computed by our algorithm (Algorithm \ref{alg:BinSearch}) is not only envy-free but also Pareto optimal, up to an arbitrary precision. \EFPO* \begin{proof} Write $\mathcal{I} = \{I_1, I_2, \dots, I_n\}$ to denote an envy-free allocation in $\mathcal{C}$; here interval $I_i$ is assigned to agent $i \in [n]$. We assume, towards a contradiction, that there exists a cake division $\mathcal{D} = \{D_1, \ldots, D_n\}$ that Pareto dominates $\mathcal{I}$. Lemma~\ref{theorem:POorder} implies that in such a case there exists an allocation $\mathcal{J} = \{J_1, \ldots, J_n\}$ which also Pareto dominates $\mathcal{I}$. That is, we have $v_i(J_i) \geq v_i(D_i) \geq v_i(I_i)$, for all agents $i \in [n]$, and there exists some agent $k \in [n]$ such that $v_k(J_k) \geq v_k(D_k) >v_k(I_k)$. Recall that for an allocation the endpoints of all the constituent intervals are referred to as its {cut points}.
We break our analysis into the following two cases depending on whether $\mathcal{I}$ and $\mathcal{J}$ have the same set of cut points. \noindent $\emph{Case 1:}$ The cut points of the allocations $\mathcal{I}$ and $\mathcal{J}$ are identical. In this case, there must exist a permutation $\sigma: [n] \mapsto [n]$ such that $I_{\sigma(i)} = J_i$ for all $i \in [n]$. Since $\mathcal{I}$ is envy-free, we have $ v_i(I_i) \geq v_i(I_{\sigma(i)}) = v_i(J_i)$ for all agents $i \in [n]$. However, this contradicts the fact that $\mathcal{J}$ Pareto dominates the allocation $\mathcal{I}$. \noindent $\emph{Case 2:}$ The cut points of $\mathcal{I}$ and $\mathcal{J}$ are not identical. Since both the allocations form a partition of the same cake $[0,1]$, there must exist some $s, t \in [n]$ such that the interval $J_s$ is a strict subset of the interval $I_t$, i.e., $J_s \subset I_t$. Envy-freeness of $\mathcal{I}$ gives us $v_s(I_s) \geq v_s(I_t) > v_s(J_s)$. The last strict inequality follows from the fact that the value density $f_s$ of agent $s$ has full support over $[0,1]$ and $J_s$ is a strict subset of $I_t$. This bound $v_s(I_s) > v_s(J_s)$ contradicts the fact that $\mathcal{J}$ Pareto dominates $\mathcal{I}$ and completes the proof. \end{proof} \section{Robustness of MLRP} \label{appendix:structured-perturbations-for-MLRP} This section shows that our framework extends to the class of non-full-support value densities considered in \cite{alijani2017envy}. In particular, Alijani et al. \cite{alijani2017envy} established that a contiguous envy-free cake division can be efficiently computed if every agent $i \in [n]$ uniformly values a single interval $[\ell_i, r_i] \subseteq [0,1]$ and these intervals satisfy the following ordering property {\rm OP}: for all $i, j \in [n]$ we have $\ell_i \leq \ell_j$ iff $r_i \leq r_j$. This ordering property {\rm OP} ensures that, in particular, any interval $[\ell_i, r_i]$ is not a strict subset of $[\ell_j,r_j]$ for $i, j \in [n]$. 
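The ordering property is straightforward to test; the sketch below (with hypothetical interval profiles) checks the defining biconditional directly:

```python
def satisfies_op(intervals):
    """Ordering property OP: for all i, j, l_i <= l_j iff r_i <= r_j."""
    return all((a[0] <= b[0]) == (a[1] <= b[1])
               for a in intervals for b in intervals)

# Hypothetical interval profiles (each pair is (l_i, r_i)).
chain_ok = satisfies_op([(0.0, 0.4), (0.1, 0.5), (0.3, 0.9)])
nested_bad = satisfies_op([(0.0, 0.9), (0.2, 0.4)])   # strict containment
```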
Let $\mathcal{K} = \langle [n], \{f_i \}_{i \in [n] } \rangle $ denote a cake-division instance with such value densities and note that, for each $i \in [n]$,\begin{equation} \label{equation:pconstant} f_i(x) \coloneqq \begin{cases} \frac{1}{r_i-{\ell}_i} &\quad \text{ if } \ x \in [\ell_i, r_i] \\ 0 & \quad \text{otherwise}\\ \end{cases} \end{equation} Write $h_i \coloneqq 1/(r_i - \ell_i)$. The work of Alijani et al. \cite{alijani2017envy} shows that an envy-free allocation of $\mathcal{K}$ can be computed efficiently. Indeed, the value densities in $\mathcal{K}$ do not have full support over the cake. However, we will show that we can perturb these densities $f_i$s (in a structured manner) to obtain an instance $\widehat{\mathcal{K}} = \langle [n], \{\widehat{f}_i \}_{i \in [n] } \rangle $ such that (i) the value densities $\widehat{f}_i$s in $\widehat{\mathcal{K}}$ have full support and bear MLRP (Claim~\ref{claim:perturbation-mlrp}) and (ii) any envy-free allocation in $\widehat{\mathcal{K}}$ forms an envy-free allocation, up to a small precision loss, in the original instance $\mathcal{K}$ (Claim~\ref{claim:perturbation-envy-bound}). Index the agents such that $0 \leq \ell_1 \leq \ell_2\leq \ldots \leq \ell_n \leq 1$. Property {\rm OP} ensures that, under this indexing, the right endpoints $r_i$s are also sorted: $r_1 \leq r_2 \ldots \leq r_n$. To construct the densities $\widehat{f}_i$, for each agent $i$, we will define counting functions $c_i: [0, 1] \mapsto \{0, 1, \ldots, i\}$ and $d_i: [0, 1] \mapsto \{0, 1, 2, \ldots, n-i\}$ as follows: $c_i(x) \coloneqq \sum_{k=1}^i \mathbbm{1} \{ \ell_k \leq x \}$ and $d_i(x) \coloneqq \sum_{k=i}^n \mathbbm{1} \{ r_k \leq x \}$, i.e., $c_i(x)$ is the number of left endpoints in $\{\ell_1, \ell_2, \ldots, \ell_i \}$ that appear before $x$ and $d_i(x)$ is equal to the number of right endpoints in $\{r_i, r_{i+1}, \ldots, r_n \}$ that appear before $x$. 
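As an aside, the counting functions admit a direct implementation. The sketch below (hypothetical OP instance with three agents; agents 1-indexed as in the text) confirms that $c_i(x) = i$ and $d_i(x) = 0$ for points strictly inside $[\ell_i, r_i]$:

```python
def make_counters(lefts, rights):
    """Counting functions for a profile sorted under property OP (1-indexed):
    c(i, x) = #{k <= i : l_k <= x} and d(i, x) = #{k >= i : r_k <= x}."""
    def c(i, x):
        return sum(1 for k in range(i) if lefts[k] <= x)
    def d(i, x):
        return sum(1 for k in range(i - 1, len(rights)) if rights[k] <= x)
    return c, d

# Hypothetical OP instance with three agents.
lefts, rights = [0.0, 0.2, 0.5], [0.4, 0.6, 1.0]
c, d = make_counters(lefts, rights)

# Strictly inside its own interval, agent i sees c_i(x) = i and d_i(x) = 0.
inside_ok = all(c(i, x) == i and d(i, x) == 0
                for i, x in [(1, 0.1), (2, 0.3), (3, 0.7)])
past_two = d(1, 0.65)   # right endpoints 0.4 and 0.6 lie before 0.65
```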
For a sufficiently large parameter $H \geq 1$ (which we will set in Claim~\ref{claim:perturbation-envy-bound}), we define $\widehat{f}_i: [0,1] \mapsto \mathbb{R}_+$ as follows\footnote{The integrals of the value densities $\widehat{f}_i$s are not normalized to $1$, though, as mentioned previously, MLRP is preserved under scaling and, hence, we do not have to explicitly enforce normalization.} \begin{equation} \label{equation:perturbation} \widehat{f}_i(x) \coloneqq h_i \ H^{(c_i(x) - i - d_i(x)) } \end{equation} In other words, we initialize $\widehat{f}_i(0) = {h_i} {H}^{-i}$ and scale up this density by a multiplicative factor of ${H}$ whenever an interval $[\ell_k, r_k]$ starts, for any agent $k \leq i$. Note that $c_i(x) = i$ and $d_i(x) = 0$ for all $x \in [\ell_i, r_i]$ and, hence, in this interval $\widehat{f}_i(x) = h_i = f_i(x)$. Beyond this range, i.e., for $x \geq r_i \geq \ell_i$, we have $c_i(x) = i$. Therefore, once the interval $[\ell_i, r_i]$ ends, we scale down $\widehat{f}_i$ by a multiplicative factor of $H$ whenever an interval $[\ell_j,r_j]$ ends, for any agent $j \geq i$. Furthermore, for all $x \notin [\ell_i, r_i]$, we have $\widehat{f}_i(x) \leq h_i/H$. Also, as required, the $\widehat{f}_i$s have full support over $[0,1]$. Applying Proposition~\ref{Lip}, we obtain that the Lipschitz constant of cut and eval queries in $\widehat{\mathcal{K}}$ is $\lambda = H^n \ \max_{1 \leq i \leq n} \ h_i$. The next claim shows that the value densities in $\widehat{\mathcal{K}}$ satisfy MLRP. \begin{claim} \label{claim:perturbation-mlrp} Let $\mathcal{K} = \langle [n], \{f_i \}_{i \in [n] } \rangle$ be a cake-division instance in which the value densities satisfy equation~(\ref{equation:pconstant}) and the ordering property {\rm OP}. Then, for parameter ${H} \geq 1$, the value densities $\widehat{f}_i$s (as defined in equation~(\ref{equation:perturbation})) satisfy MLRP.
\end{claim} \begin{proof} In this constructed instance $\widehat{\mathcal{K}} = \langle [n], \{\widehat{f}_i \}_{i \in [n] } \rangle$ the agents are indexed such that $0 \leq \ell_1 \leq \ell_2\leq \dots \leq \ell_n \leq 1$ and, by {\rm OP}, we have $r_1 \leq r_2 \leq \ldots \leq r_n$. Fix any two agents $i, j \in [n]$ such that $i<j$. The indexing among the agents ensures that $\ell_i \leq \ell_j$ and $r_i \leq r_j$. To prove the stated claim it suffices to show that the likelihood ratio $\frac{\widehat{f}_j(x)}{\widehat{f}_i(x)}$ is nondecreasing throughout $[0,1]$. We establish the monotonicity of this likelihood ratio by considering three different ranges in the cake: $[0, \ell_i]$, $[\ell_i, r_j]$, and $[r_j, 1]$. Initially, the likelihood ratio is $\frac{\widehat{f}_j(0)}{\widehat{f}_i(0)} = \left(\frac{h_j}{h_i}\right) \frac{1}{{H}^{j-i}}$. For all $x \in [0, \ell_i]$, whenever the density $\widehat{f}_i(x)$ experiences a multiplicative increase, so does $\widehat{f}_j(x)$, by the same factor $H$. In particular, if $c_i$ increases then so does $c_j$ throughout the interval $[0, \ell_i]$: if at a point $x \in [0, \ell_i]$ the density $\widehat{f}_i$ is scaled up by a multiplicative factor ${H}$ (i.e., $c_i$ increases by one), then it must have been the case that an interval $[\ell_k, r_k]$, with $k \leq i$, starts at $\ell_k = x$. In such a case, by definition, $\widehat{f}_j$ also increases by a multiplicative factor ${H}$ (i.e., $c_j$ increases by one as well). Therefore, for all $x \in [0, \ell_i]$, the likelihood ratio stays constant at $\frac{\widehat{f}_j(0)}{\widehat{f}_i(0)}$. In the interval $[\ell_i, r_j]$, the density $\widehat{f}_i$ does not increase, since here $c_i$ saturates at $i$ after $\ell_i$ and $d_i$ increases from zero beyond $r_i$. At the same time, in this interval $[\ell_i, r_j]$ the density $\widehat{f}_j$ is nondecreasing. Therefore, the likelihood ratio continues to be monotonic in the interval $[\ell_i, r_j]$ as well.
Finally, for $x \geq r_j$, we note that $d_i$ and $d_j$ increase synchronously. Also, for points $x \in [r_j, 1]$ we have $c_j(x) = j$ and $c_i(x) = i$. Therefore, in this range the likelihood ratio stays constant at $\frac{\widehat{f}_j(r_j)}{\widehat{f}_i(r_j)}$. Overall, we obtain the monotonicity of the likelihood ratio and the MLRP guarantee follows. \end{proof} \begin{claim} \label{claim:perturbation-envy-bound} Let $\mathcal{K} = \langle [n], \{f_i \}_{i \in [n] } \rangle$ be a cake-division instance in which the value densities satisfy equation~(\ref{equation:pconstant}) and the ordering property {\rm OP}. Let $\widehat{\mathcal{K}} = \langle [n], \{\widehat{f}_i \}_{i \in [n] } \rangle$ be the cake-division instance defined above, with parameter $H \coloneqq \frac{2}{\eta} \ \max_{1 \leq i \leq n} h_i$, and suppose that allocation $\mathcal{I}=\{I_1, \ldots, I_n\}$ is envy-free up to an additive factor of $\eta$ in $\widehat{\mathcal{K}}$ (i.e., $\widehat{v}_i(I_i) \geq \widehat{v}_i(I_j) - \eta$ for all $i, j \in [n]$). Then, allocation $\mathcal{I}$ is envy-free up to $2\eta$ in $\mathcal{K}$. \end{claim} \begin{proof} For agent $i \in [n]$, write $v_i$ and $\widehat{v}_i$ to denote the valuation function of agent $i$ under $f_i$ and $\widehat{f}_i$, respectively. By construction, the densities $f_i$ and $\widehat{f}_i$ coincide over the interval $[\ell_i ,r_i]$; in particular, $f_i(x) = \widehat{f}_i(x)=h_i$ for all $x \in [\ell_i ,r_i]$. Hence, for any interval $I$ we have $v_i(I \cap [\ell_i, r_i]) = \widehat{v}_i(I \cap [\ell_i, r_i])$. Furthermore, the construction (equation~(\ref{equation:perturbation})) of $\widehat{f}_i$ ensures that $\widehat{v}_i(I \setminus [\ell_i,r_i]) \leq \frac{h_i}{{H}} \leq \eta/2$. The last inequality follows from the choice of the parameter $H$. Note that $f_i(x) = 0$ for all $x \notin [\ell_i, r_i]$ and, hence, $v_i(I \setminus [\ell_i,r_i]) =0$.
These bounds imply that, for any agent $i \in [n]$, the difference in values of any interval $I$ is at most $\eta/2$: \begin{align} \label{equation:perturb-value} |\widehat{v}_i(I) - v_i(I)| & \leq \frac{\eta}{2} \end{align} Hence, an allocation $\mathcal{I}=\{I_1, \ldots, I_n\}$ that is envy-free up to an additive factor of $\eta$ in $\widehat{\mathcal{K}}$ (i.e., $\widehat{v}_i(I_i) \geq \widehat{v}_i(I_j) - \eta$ for all $i, j \in [n]$) is envy-free up to $2\eta$ in $\mathcal{K}$. Specifically, for any $i, j \in [n]$, \begin{align*} v_i({I}_i) & \geq \widehat{v}_i({I}_i) - \eta/2 \tag{using inequality~(\ref{equation:perturb-value}) with interval $I_i$} \\ & \geq \widehat{v}_i({I}_j)- \frac{3\eta}{2} \tag{by $\eta$-envy-freeness of $\mathcal{I}$} \\ &\geq v_i({I}_j) - 2\eta \tag{using inequality (\ref{equation:perturb-value}) with interval ${I}_j$} \end{align*} That is, ${\mathcal{I}}$ is envy-free, up to $2\eta$ precision, in $\mathcal{K}$. This completes the proof. \end{proof} Since the constructed instance $\widehat{\mathcal{K}}$ satisfies MLRP, we can use Algorithm~\ref{alg:BinSearch} (Section~\ref{EFdivisions}) to efficiently compute, up to an arbitrary precision, an envy-free allocation $\mathcal{I}$ in the instance $\widehat{\mathcal{K}}$. The previous claim ensures that $\mathcal{I}$ is an envy-free allocation (up to an arbitrary precision) in the original instance $\mathcal{K}$ as well. This observation highlights the fact that the ideas developed in this work are somewhat robust and extend to other value-density settings. \section{Envy-Free Cake Division Under Piecewise Linear Valuations} \label{appendix:piecewise-linear} In this section we extend the MLRP framework to find envy-free cake divisions (with disconnected pieces and up to an arbitrary precision) for piecewise linear value densities.\footnote{A function is said to be piecewise linear iff it can be expressed as a union of a finite number of linear functions. 
Hence, piecewise linear densities contain a finite number of breakpoints and do not necessarily satisfy MLRP.} The work of Kurokawa et al.~\cite{kurokawa2013cut} shows that if there are $k \geq 1$ breakpoints in total across all $n$ agents' piecewise linear value densities, then an envy-free cake division can be computed using $\mathcal{O}\left(n^6k \ln k\right)$ Robertson-Webb queries. We provide an improved query-complexity dependence on $n$. Specifically, we show that (up to an arbitrary precision) an envy-free cake division can be found using $\mathcal{O}\left(n^2 k \log^2 (k U) \right) $ queries (Theorem \ref{theorem:PL-EF}); here $U \geq 1$ is an upper bound on the value densities, i.e., $f_i(x) \leq U$ for all $x \in [0,1]$ and $i \in [n]$. Our recursive algorithm $\textsc{PL-EF}(a,b)$ aims to compute envy-free divisions for the left and the right half of the given interval $[a,b]$, respectively, and then takes the union of these divisions to obtain a fair one for $[a,b]$. Towards this end, if $\textsc{BinSearch}$ (Algorithm \ref{alg:BinSearch}) by itself finds (in a specified number of iterations) a division of $[a,(a+b)/2]$ wherein the envy is appropriately small, then $\textsc{PL-EF}(a,b)$ does not recurse over this subinterval; otherwise it does. Identical steps are executed for the right half $[(a+b)/2,b]$. Lemma~\ref{lemma:recurse} below shows that a recursive call is made only if the subinterval under consideration contains one or more breakpoints. Indeed, if all the value densities are linear within, say, $[a,(a+b)/2]$, then $\textsc{BinSearch}$ will necessarily find the desired division, and $\textsc{PL-EF}(a,b)$ will not recurse on $[a,(a+b)/2]$. Furthermore, note that if the length of the interval (i.e., $|b-a|$) is small enough, then the Lipschitz continuity of the densities implies that any division will have appropriately small envy.
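The recursion pattern just described can be simulated: descend into a half only if it contains a breakpoint, and stop once intervals are short. The sketch below (hypothetical breakpoints; depth bound $B$ chosen arbitrarily) counts the calls and checks them against the $k(B+1)$ bound established later in this section:

```python
def count_calls(a, b, breakpoints, min_len):
    """Simulate the recursion pattern of PL-EF: recurse into a half only if
    it contains a breakpoint; stop once the interval is short enough."""
    if b - a <= min_len:
        return 1
    calls, mid = 1, (a + b) / 2
    if any(a < t < mid for t in breakpoints):
        calls += count_calls(a, mid, breakpoints, min_len)
    if any(mid < t < b for t in breakpoints):
        calls += count_calls(mid, b, breakpoints, min_len)
    return calls

k, B = 3, 10                                   # hypothetical parameters
calls = count_calls(0.0, 1.0, [0.30, 0.31, 0.77], 2.0 ** -B)
within_bound = calls <= k * (B + 1)            # cf. N(k, B+1) <= k(B+1)
```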
Using these observations we can bound the total number of recursive calls of $\textsc{PL-EF}(0,1)$, i.e., establish the stated upper bound on the query complexity of finding an envy-free partition (up to an arbitrary precision) of the cake $[0,1]$ for piecewise linear densities. Throughout this section, we will use $k \geq 1$ to denote the total number of breakpoints across the piecewise linear $f_i$s and $U \geq 1$ to denote an upper bound on these densities, i.e., $f_i(x) \leq U$ for all $x \in [0,1]$ and $i \in [n]$. Also, for parameter $\eta >0$, we will say that a division $\mathcal{D}=\{D_1, \ldots, D_n\}$ is $\eta$-envy-free iff $v_i(D_i) \geq v_i (D_j) - \eta$ for all $i, j \in [n]$. \begin{algorithm} {\bf Input:} A cake-division instance, $\mathcal{C} = \langle [n], \{f_i \}_{i} \rangle$, with piecewise linear densities, in the Robertson-Webb query model. \\ {\bf Output:} An $\eta$-envy-free cake division $\mathcal{D} = \{ D_1,\ldots, D_n\} $ \caption{\textsc{PL-EF}$(a,b)$} \label{alg:PL-EF} \begin{algorithmic}[1] \STATE Set $\widehat{\eta} \coloneqq \left( \frac{\eta}{k} \right)^2 \frac{1}{U}$ and $\lambda \coloneqq \max\left\{U, \frac{U}{\widehat{\eta}}, \frac{1}{\widehat{\eta}} \right\}$. \STATE \label{step:small-length} If $|b-a| \leq \frac{\eta^2}{k^2 U^2}$, then return division $\{D_j\}_{j=1}^n$, with $D_1= [a, b]$ and $D_i = \emptyset$ for all $i >1$, and exit. \STATE \label{step:prep} Let $K_L \coloneqq \left\{i \in [n] : \mathrm{Eval}_i\left(a, \frac{a+b}{2}\right) \geq \widehat{\eta} \right\}$. Normalize the valuations of the agents in $K_L$ over $[a,b]$ and compute a potential MLRP order among these agents using Lemma \ref{MLRPorder}. 
\STATE \label{step:subroutine} With this order in hand and for parameter $\delta \coloneqq \frac{\widehat{\eta}}{\lambda}$, run $\textsc{BinSearch}$ (Algorithm \ref{alg:BinSearch}), for at most $2 n \log \left( \frac{2 \lambda}{\widehat{\eta}}\right)$ iterations, over the interval $[a, (a+b)/2]$ and with agents $K_L$. \STATE \label{step:ef} If Step \ref{step:subroutine} is unable to find an $\widehat{\eta}$-envy-free allocation, then set $\{L_1, L_2, \ldots, L_n\} \coloneqq \textsc{PL-EF} \left(a, \frac{a+b}{2}\right)$; else, set $\{L_1, L_2, \ldots, L_n\}$ to be the $\widehat{\eta}$-envy-free allocation computed in the previous step. \STATE Repeat Steps \ref{step:prep} to \ref{step:ef} for the right half $[\frac{a+b}{2},b]$ and compute allocation $\mathcal{R} = \{R_1, R_2, \ldots, R_n\}$. \RETURN \label{step:righthalf} cake division $\mathcal{D} = \{ L_i \cup R_i\}_{i=1}^n$. \end{algorithmic} \end{algorithm} \begin{lemma} \label{lemma:recurse} Consider a cake-division instance with piecewise linear value densities and an interval $[a,b] \subseteq [0,1]$. If there are no breakpoints in the left half of the interval $[a, (a+b)/2]$, then Step \ref{step:subroutine} succeeds in finding an $\widehat{\eta}$-envy-free division and, hence, $\textsc{PL-EF}(a,b)$ does not recurse on $[a, (a+b)/2]$. An identical claim holds for the right half $[(a+b)/2, b]$. \end{lemma} \begin{proof} We note that, in Step \ref{step:subroutine} of $\textsc{PL-EF}(a,b)$, $\textsc{BinSearch}$ is executed over the set of agents $K_L \coloneqq \{i \in [n] : \mathrm{Eval}_i\left(a, \frac{a+b}{2}\right) \geq \widehat{\eta} \}$. Since there are no breakpoints in $[a, (a+b)/2]$, the value density of each agent in this subinterval is linear.
Moreover, for each agent in $K_L$ the cut and eval queries are $\lambda$-Lipschitz: each agent $i \in K_L$ satisfies $\mathrm{Eval}_i(a, \frac{a+b}{2}) \geq \widehat{\eta}$, and the densities are upper bounded by $U$; hence, after normalizing the valuations over $[a,b]$, Proposition~\ref{Lip} yields a Lipschitz constant of at most $\lambda = \max\left\{U, \frac{U}{\widehat{\eta}}, \frac{1}{\widehat{\eta}} \right\}$. Therefore, via Lemma \ref{RDcomputation}, we get that in Step \ref{step:subroutine} $\textsc{BinSearch}$ finds an $\left( \frac{\widehat{\eta}}{\lambda} \right)$-ripple division $\mathcal{L} = \{L_1,\ldots, L_n\}$ in at most $2 n \log \left( \frac{2 \lambda}{\widehat{\eta}}\right)$ iterations. Linear densities always satisfy MLRP; hence, as in the proof of Theorem \ref{theorem:EFdivision}, we get that $\mathcal{L}$ is an $\widehat{\eta}$-envy-free division. In such a case $\textsc{PL-EF}(a,b)$ will not recurse on $[a, (a+b)/2]$ and the lemma stands proved. \end{proof} We next upper bound the number of recursive calls made by the algorithm. For precision parameter $\eta >0$ and density upper bound $U \geq 1$, write $\widehat{\eta} \coloneqq \left(\frac{\eta}{k}\right)^2 \frac{1}{U}$ and $B \coloneqq 2 \log \left( \frac{kU}{\widehat{\eta}} \right)$. Also, let $N(k', \ell)$ denote the maximum number of nodes in the recursion tree obtained by executing $\textsc{PL-EF}(\cdot)$ on any interval with $k'$ breakpoints (of the piecewise linear densities) and of length $\frac{1}{2^{B + 1 -\ell}}$. Hence, the overall recursion tree obtained under $\textsc{PL-EF}(0,1)$ contains at most $N(k, B+1)$ nodes (calls). The following lemma implies the useful upper bound $N(k, B + 1) \leq k (B+1)$. \begin{lemma} \label{lemma: recursive-calls} For all $k' \geq 1$ and $\ell \in \{1, \ldots, B\}$, we have $N(k', \ell) \leq k' \ell$. \end{lemma} \begin{proof} We prove the lemma via induction on $k'$ and $\ell$. In particular, consider an interval $[a,b]$ with $k'$ breakpoints and of length $\frac{1}{2^{B+1-\ell}}$.
To prove the base case of the induction, observe that if $[a,b]$ contains a single breakpoint, i.e., $k'=1$, then $\textsc{PL-EF}(a,b)$ will only recurse on whichever half of $[a,b]$ contains the breakpoint (Lemma \ref{lemma:recurse}). Furthermore, this process can continue for at most $\ell$ recursive calls, after which the interval length becomes $\frac{1}{2^{B}} = \frac{\eta^2}{k^2 U^2}$ (see Step \ref{step:small-length}). Therefore, we have $N(1,\ell) \leq \ell$ for all $\ell \in [B]$. We will now assume that the statement holds true for $N(t, \ell-1)$ with $t \leq k'$, and will inductively prove it for $N(k',\ell)$. We break our analysis into the following two cases, depending on the distribution of the breakpoints in $[a,b]$: \begin{enumerate}[noitemsep] \item[(i)] Either the left or the right half of $[a,b]$ contains no breakpoint. In such a case, $\textsc{PL-EF}(a,b)$ can recurse only on the half that contains all $k'$ breakpoints, while the other half terminates. Hence, we have $N(k', \ell) = N(k', \ell-1) +1$. \item[(ii)] The left half $[a, \frac{a+b}{2}]$ contains $k_1 \geq 1$ breakpoints and the right half $[\frac{a+b}{2},b]$ contains $k_2\geq 1$ breakpoints, with $k_1+k_2=k'$. In such a setting, we can write $N(k', \ell) = N(k_1, \ell-1) +N(k_2, \ell-1)+1$. \end{enumerate} Therefore, we can bound $N(k', \ell)$ as \begin{align*} N(k', \ell) & \leq \max\{ N(k',\ell-1)+1, N(k_1, \ell-1) +N(k_2, \ell-1)+1\} \\ & \leq \max\{ k'(\ell-1) +1, k_1(\ell-1) + k_2(\ell-1)+1\} \tag{by the induction hypothesis} \\ & \leq \max\{k' \ell, k'(\ell-1) +1\}\\ & \leq k' \ell \end{align*} Therefore, if $N(t, \ell-1) \leq t(\ell-1)$ for all $t \leq k'$, then we have $N(k', \ell) \leq k' \ell.$ Finally, we will assume that the statement holds true for $N(t-1, h)$ with $t\leq k'$ and $h \leq \ell$, and will inductively prove it for $N(k',\ell)$.
We once again break our analysis into the following two complementary and exhaustive cases: \begin{enumerate} \item [(i)] For $\ell$ consecutive recursive calls, either the left or the right half of the considered sub-interval contains no breakpoint. That is, we can write $N(k', \ell) = N(k',0)+\ell$. \item [(ii)] For $m < \ell$ consecutive recursive calls, either the left or the right half of the considered sub-interval contains no breakpoint (i.e., we have $ N(k', \ell) = N(k', \ell-m) +m$). After these $m$ recursive calls, there are $k'_1 \geq 1$ breakpoints in the left half and $k'_2 \geq 1$ breakpoints (with $k'_1+k'_2=k'$) in the right half of the current sub-interval of length $\frac{1}{2^{B+1-(\ell-m)}}$. We can therefore write $ N(k', \ell) = N(k', \ell-m) +m = N(k'_1,\ell-m-1)+N(k'_2,\ell-m-1)+m+1$. \end{enumerate} Overall, we can bound $N(k', \ell)$ as \begin{align*} N(k', \ell) & \leq \max\{N(k',0)+\ell, N(k'_1,\ell-m-1)+N(k'_2,\ell-m-1)+m+1\} \\ & \leq \max\{ \ell , k'_1(\ell-m-1) + k'_2(\ell-m-1)+m+1\} \tag{by the induction hypothesis and the base case} \\ & \leq \max\{ \ell, k' (\ell-m-1) + m+1\}\\ & \leq k' \ell \end{align*} Therefore, if $N(t-1, h) \leq (t-1)h$ for all $t\leq k'$ and $h \leq \ell$, then we have $N(k',\ell) \leq k' \ell$. This completes our induction argument and establishes the stated claim. \end{proof} Using Lemma~\ref{lemma: recursive-calls}, we next upper bound the maximum number of intervals assigned to any agent by $\textsc{PL-EF}(0,1)$. \begin{lemma} \label{lemma:number of intervals} Let $\mathcal{C}=\langle [n], \{f_i \}_i \rangle$ be a cake-division instance with piecewise linear value densities that have $k$ breakpoints. Then, the cake division $\mathcal{D} = \{D_1, \ldots, D_n\}$ computed for $\mathcal{C}$ by $\textsc{PL-EF}(0,1)$ (Algorithm~\ref{alg:PL-EF}) satisfies \begin{align*} |D_i| \leq 2 N(k,B+1) & \leq 2 k(B+1) \quad \text{ for all } i \in [n].
\end{align*} \end{lemma} \begin{proof} In $\textsc{PL-EF}(\cdot)$, at each node of the recursion tree at most two additional intervals are assigned to any agent. Since the number of nodes in the recursion tree is at most $N(k, B+1)$, the lemma follows. \end{proof} Theorem~\ref{theorem:PL-EF} presents our main result of this section. \begin{theorem} \label{theorem:PL-EF} Let $\mathcal{C} = \langle [n], \{f_i \}_{i} \rangle $ be a cake-division instance in which the value densities are piecewise linear and have $k$ breakpoints overall. Then, for precision parameter $\eta>0$ and in the Robertson-Webb query model, an $\eta$-envy-free cake division of $\mathcal{C}$ can be computed using $\mathcal{O}\left( n^2k \log^2 \left(\frac{kU}{\eta} \right) \right) $ queries. Here, $U \geq 1$ is an upper bound on the value densities, i.e., $f_i(x) \leq U$ for all $x \in [0,1]$ and $i \in [n]$. \end{theorem} \begin{proof} We will first prove that Algorithm~\ref{alg:PL-EF} outputs an $\eta$-envy-free cake division $\mathcal{D} = \{D_1, D_2, \dots, D_n\}$. Recall that $\widehat{\eta} \coloneqq \left(\frac{\eta}{k}\right)^2 \frac{1}{U}$, and $B+1 \coloneqq \log \frac{U}{\widehat{\eta}}$. Also, at any terminating call in $\textsc{PL-EF}(\cdot)$, an $\widehat{\eta}$-envy-free division (of some subinterval) is assigned among the agents. Specifically, in Step \ref{step:small-length}, the fact that $|b-a| \leq \frac{\eta^2}{k^2 U^2}$ and the bound $f_i(x) \leq U$ (for all $x \in [0,1]$) imply that $v_i(a, b) \leq \widehat{\eta}$ for all $i \in [n]$ and, hence, the returned division is $\widehat{\eta}$-envy-free.
The allocation $\{D_1, D_2, \dots, D_n\}$ computed by $\textsc{PL-EF}(0,1)$ is the union of at most $2 k(B+1)$ such $\widehat{\eta}$-envy-free divisions (Lemma~\ref{lemma:number of intervals}). Recall that, at each terminating call (see Steps~\ref{step:small-length} and \ref{step:update}) during the execution of Algorithm~\ref{alg:PL-EF}, an $\widehat{\eta}$-EF allocation is added to the cake division $\mathcal{D}$. Hence, envy between agents in the cake division $\mathcal{D}$ can be bounded by summing their envy in each of the (at most) $2k(B+1)$ $\widehat{\eta}$-EF allocations that are added to $\mathcal{D}$. For any $i,j \in [n]$, we have \begin{align*} |v_i(D_i) - v_i(D_j)| & \leq 2k(B+1) \widehat{\eta} \tag{by Lemma~\ref{lemma:number of intervals}} \\ & \leq \frac{2\eta^2}{k} \frac{1}{U} \log\left(\frac{k^2 U^2}{\eta^2}\right) \tag{by definition of $B+1$ and $\widehat{\eta}$} \\ & \leq \eta \left(\frac{\eta}{k U}\right) 4 \log \left(\frac{k U}{\eta}\right) \\ & \leq \eta \end{align*} where the last inequality follows from the fact that $4 \log x \leq x$ for all $x \geq 16$; here we use that $ x=\frac{k U}{\eta} \geq 16$, which holds for sufficiently small $\eta$. Therefore, $\mathcal{D}$ is an $\eta$-EF cake division in $\mathcal{C}$. Next, we will analyze the query complexity of Algorithm~\ref{alg:PL-EF}. Note that Lemma~\ref{lemma: recursive-calls} proves that the maximum number of recursive calls made by $\textsc{PL-EF}(0,1)$ is at most $N(k, B+1) \leq k(B+1)$. Recall that $\textsc{PL-EF}(a,b)$ performs identical operations on the two equal halves (see Step~\ref{step:righthalf}) of interval $[a,b]$ during its recursive call on $[a,b]$. Hence, we can bound the maximum number of queries required for the left half $[a, (a+b)/2]$ and multiply it by $2$ to get the total number of queries made by $\textsc{PL-EF}(a,b)$. To begin with, it requires $n$ queries to find the set $K_L$ (in Step~\ref{step:subroutine}) of agents valuing the interval $[a, \frac{a+b}{2}]$ at least $\widehat{\eta}$.
Next, at most $n$ queries are made to normalize the value densities of the agents over $[a,\frac{a+b}{2}]$, and another $n$ queries to compute a potential candidate for the MLRP order for the agents in $[n] \setminus K_L$ (Step~\ref{step:subroutine}). Then, \textsc{BinSearch} \ is executed for at most $2(n-1) \log \left(\frac{2 \lambda}{\widehat{\eta}}\right)$ iterations (in Step~\ref{step:subroutine}), where each such iteration requires $2n$ queries. Finally, it takes at most $n^2$ queries to verify whether an $\widehat{\eta}$-EF allocation has been computed (in Step~\ref{step:ef}). Overall, the maximum number of queries, denoted by $Q(n,k)$, made by Algorithm~\ref{alg:PL-EF} can be bounded as \begin{align*} Q(n,k) & \leq k(B+1) \left( 2 \left( n+n+ 2n \left(2(n-1) \log \left(\frac{2 \lambda}{\widehat{\eta}}\right)\right) + n^2 \right) \right) \\ & \leq k(B+1) \left(14n^2\log \left(\frac{2 \lambda}{\widehat{\eta}}\right) \right) \\ & \leq \mathcal{O}(n^2 k) \log \left(\frac{ k^2 U^2}{\eta^2}\right) \log \left(\frac{2 \lambda k^2U}{\eta^2}\right) \\ & \leq \mathcal{O}(n^2 k) \log \left(\frac{ kU}{\eta}\right) \left( \log (2\lambda )+2 \log \left(\frac{ kU}{\eta}\right)\right) \tag{since $U \geq 1$} \\ & \leq \mathcal{O}(n^2 k) \left( 2\log^2 \left(\frac{kU}{\eta}\right) + \log (2\lambda )\log \left(\frac{kU}{\eta}\right) \right) \end{align*} Recall that $\lambda = \max\{U, \frac{k^2 U}{\eta^2},\frac{k^2 U^2}{\eta^2} \}$. Assuming that the precision parameter satisfies $\eta \leq 1$, we have $\lambda = \frac{k^2U^2}{\eta^2}$ and, hence, $\log(2 \lambda) = \log \left(\frac{2k^2U^2}{\eta^2} \right)$, since $U \geq 1$. That is, we have $\log(2 \lambda) = \mathcal{O} \left(\log \left(\frac{kU}{\eta}\right)\right)$. Overall, we obtain the required bound, $Q(n,k) = \mathcal{O}\left( n^2k \log^2 \left(\frac{kU}{\eta} \right) \right)$. Hence, Algorithm~\ref{alg:PL-EF} requires $\mathcal{O}\left( n^2k \log^2 \left(\frac{kU}{\eta} \right) \right)$ queries to compute an $\eta$-EF cake division.
Theorem~\ref{theorem:PL-EF} now stands proved. \end{proof} The query complexity obtained via Theorem~\ref{theorem:PL-EF} implies that the precision parameter $\eta$ can be driven exponentially close to zero using a number of queries that is polynomial in $\log \frac{1}{\eta}$ (i.e., in the bit complexity of $\eta$). Hence, we can find an envy-free cake division, up to arbitrary precision, using $\mathcal{O}(n^2k \log^2 (kU)) $ queries. \section{Social Welfare} \label{section:social welfare} This section develops an algorithm for social welfare maximization. Recall that, under MLRP, Pareto optimal allocations conform to the MLRP order (Lemma \ref{theorem:POorder}). Hence, for maximizing social welfare we can restrict attention to allocations wherein the intervals are assigned (left-to-right on the cake) in accordance with the MLRP order. This observation, in and of itself, leads to a fully polynomial-time approximation scheme for maximizing social welfare: we can partition the cake into ${\rm poly} (n, 1/\varepsilon)$ contiguous intervals, each of value at most $\varepsilon$, and then solve the problem using a dynamic program. We show that, instead of considering a general partition, we can identify a set $P$---of $\mathcal{O}(n^2)$ points---such that the cut points of an optimal allocation are contained in $P$. This enables us to execute a dynamic program that focuses only on the points in $P$ and, thereby, establish Theorem~\ref{theorem:SocialWelfare}. In sharp contrast to the FPTAS described above, our dynamic program finds an allocation whose social welfare is within $\eta>0$ of the optimal, in time that depends only logarithmically on $1/\eta$. For value densities $f_i$ and $f_j$, write $L_{ij}$ to denote the set of points at which $f_j$ is at least as large as $f_i$; specifically, $L_{ij} \coloneqq \{x \in [0,1]: {f_j(x)} \geq {f_i(x)} \}$.
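As a toy numerical illustration of why the cut points can be restricted to such distinguished points (a minimal sketch; the linear densities $f_1(x) = 2(1-x)$ and $f_2(x) = 2x$ are hypothetical examples, not taken from the formal development): for this pair the set $L_{12}$ equals $[1/2, 1]$, and a grid search confirms that the welfare-maximizing single cut for the two agents sits exactly at its infimum $1/2$.

```python
# Hypothetical two-agent instance with normalized linear densities
# f1(x) = 2(1 - x) and f2(x) = 2x on the cake [0, 1]. The likelihood
# ratio f2/f1 = x / (1 - x) is non-decreasing, so the densities bear
# MLRP with agent 1 preceding agent 2, and L_{12} = [1/2, 1].

def v1(a, b):
    """Eval query of agent 1: integral of 2(1 - x) over [a, b]."""
    return (2 * b - b ** 2) - (2 * a - a ** 2)

def v2(a, b):
    """Eval query of agent 2: integral of 2x over [a, b]."""
    return b ** 2 - a ** 2

def social_welfare(c):
    """Welfare of the MLRP-ordered allocation {[0, c], [c, 1]}."""
    return v1(0.0, c) + v2(c, 1.0)

# Grid search over candidate cuts: the maximizer is the infimum of L_{12}.
grid = [i / 1000 for i in range(1001)]
best_cut = max(grid, key=social_welfare)
print(best_cut, social_welfare(best_cut))  # 0.5 1.5
```

Here $\mathrm{SW}(c) = 1 + 2c - 2c^2$, which is maximized at $c = 1/2$; with $n$ agents the same reasoning singles out the $\mathcal{O}(n^2)$ pairwise points of the set $P$ introduced below.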
Since the densities $f_i$ and $f_j$ are normalized, there must exist a point $x \in [0,1]$ such that $f_j(x) \geq f_i(x)$; in other words, $L_{ij} \neq \emptyset$. Also, observe that this set is bounded below by $0$. Therefore, the greatest lower bound property of $\mathbb{R}$ implies that $L_{ij}$ admits an infimum. We will refer to this infimum as the \emph{switching point} between the two value densities, $p_{ij} \coloneqq \inf_{x \in L_{ij}} \ x$.\footnote{As noted, $p_{ij}$ exists and satisfies $p_{ij} \in [0,1]$. However, since the value densities $f_i$s are not necessarily continuous (they are only assumed to be Riemann integrable), a point $x$ with the property that $f_i(x) = f_j(x)$ might not exist.} For a cake-division instance $\mathcal{C}=\langle[n], \{f_i\}_i \rangle$, we will write $P$ to denote the collection of all switching points $p_{ij}$s, with $ 1\leq i<j \leq n$ (along with $0$ and $1$), i.e., $P\coloneqq \{ p_{ij} \in [0,1] : 1 \leq i < j \leq n \} \cup \{0, 1\}$; we include the endpoints of the cake in $P$ for notational convenience. Also, recall the convention that the agents are indexed following the MLRP order. The next lemma provides a useful property of the switching points in the MLRP context, which is reminiscent of the `single-crossing' type conditions used in social choice; see \cite{athey2001single, gans1996majority, elkind2014characterization}. \begin{lemma} \label{switch} Let $f_i$ and $f_j$ be two value-density functions that satisfy MLRP, i.e., ${f_j}/{f_i}$ is non-decreasing over $[0,1]$. Then, \begin{itemize} \item[(a)] For all $y \in [0, p_{ij})$, the likelihood ratio satisfies ${f_j(y)}/{f_i(y)} <1$. \item[(b)] For all $z \in (p_{ij}, 1]$, the likelihood ratio satisfies ${f_j(z)}/{f_i(z)} \geq 1$. \end{itemize} Here, $p_{ij}$ is the switching point between $f_i$ and $f_j$. \end{lemma} \begin{proof} As observed previously, $p_{ij} \in [0,1]$ exists and is unique. We begin by proving part (a) of the stated claim.
Consider any point $y \in [0, p_{ij})$ and assume, towards a contradiction, that the likelihood ratio satisfies $\frac{f_j(y)}{f_i(y)} \geq 1$. This implies that the point $y$ belongs to the set $L_{ij} =\{x \in [0,1]: {f_j(x)} \geq {f_i(x)} \}$. Since $y<p_{ij}$, we get a contradiction to the fact that $p_{ij}$ is the infimum of the set $L_{ij}$. For proving part (b), consider any point $z \in (p_{ij}, 1]$. Assume, towards a contradiction, that at $z$ we have $\frac{f_j(z)}{f_i(z)} < 1$. Since the likelihood ratio $\frac{f_j(x)}{f_i(x)}$ is non-decreasing over $[0,1]$ (by the definition of MLRP), we have $\frac{f_j(t)}{f_i(t)} < 1$ for all $0 \leq t \leq z$. That is, there does not exist a point $t \in [0, z]$ with the property that $f_j(t) \geq f_i(t)$. Hence, $z$ constitutes a lower bound for the set $L_{ij}=\{x \in [0,1]: {f_j(x)} \geq {f_i(x)} \}$. Since $p_{ij} <z$, we get a contradiction to the fact that $p_{ij}$ is the infimum (greatest lower bound) of the set $L_{ij}$. This completes the proof. \end{proof} The following corollary asserts that, up to an arbitrary precision, each switching point can be determined efficiently. \begin{corollary} \label{infirmumswitch} Let $f_i$ and $f_j$ be two value densities that bear MLRP, and let $\gamma \in (0,1)$ be a precision parameter. Then, in the Robertson-Webb model, we can find an interval of length $\gamma$ that contains the switching point $p_{ij}$ in $\mathcal{O} \left( \log \left( 1/ \gamma \right) \right)$ time. \end{corollary} \begin{proof} Consider intervals of the form $B_k = \left[ (k-1) \frac{\gamma}{2}, \ k \frac{\gamma}{2} \right]$ for $k \in \{1,2, \ldots, \frac{2}{\gamma}\}$, i.e., for the analysis, we discretize the cake $[0,1]$ evenly into intervals, each of length $\gamma/2$. Write $k^*$ to denote the index with the property that $p_{ij} \in B_{k^*} = \left[ (k^*-1) \frac{\gamma}{2}, \ k^* \frac{\gamma}{2} \right]$. Part (a) of Lemma~\ref{switch} implies that $v_j(B_k) < v_i(B_k)$ for all $k < k^*$.
Similarly, part (b) of Lemma~\ref{switch} gives us $v_j(B_k) \geq v_i(B_k)$ for all $k >k^*$. Therefore, applying binary search, we can, in $\mathcal{O} \left( \log \left( 1/ \gamma \right) \right)$ iterations, find the smallest index $k' \in \{1,2, \dots, \frac{2}{\gamma}\}$ such that $v_j(B_{k'}) < v_i(B_{k'})$ and $v_j(B_{k'+1}) \geq v_i(B_{k'+1})$. Note that the computed index $k'$ satisfies $k' \in \{k^*-1, k^* \}$: if, for contradiction, we have $k' \leq k^*-2$, then it must be the case that $v_j(B_{k'+1}) < v_i(B_{k'+1})$. This inequality contradicts the selection criterion of $k'$. Also, the inequality $k' \geq k^* +1$ would lead to the contradiction $v_j(B_{k'}) \geq v_i(B_{k'})$. The value comparisons required to execute the binary search can be performed using eval queries. Hence, in $\mathcal{O}\left( \log (1/\gamma) \right)$ iterations we can find an interval $B_{k'} \cup B_{k'+1}$ of length $\gamma$ that contains $p_{ij}$. \end{proof} This corollary implies that we can efficiently compute the set of switching points $P = \{ p_{ij} \in [0,1] : 1 \leq i < j \leq n \}$, up to an arbitrary precision. Next, we will establish the usefulness of $P$. \begin{lemma} \label{switchingset} Let $\mathcal{C}$ be a cake-division instance in which the value densities satisfy the monotone likelihood ratio property. Then, in $\mathcal{C}$, there exists a social welfare maximizing allocation all of whose cut points belong to the set of switching points $P$. \end{lemma} \begin{proof} Among all allocations that maximize social welfare, consider the ones that conform to the MLRP order; Lemma~\ref{theorem:POorder} ensures that this collection is nonempty. Furthermore, among these optimal allocations, we select one $\mathcal{S} = \{S_1, S_2, \ldots, S_n\}$ that minimizes $\left| \{s_0=0, s_1, s_2, \ldots, s_n=1 \} \setminus P \right|$; here the $s_i$s denote the cut points of $\mathcal{S}$.
That is, $\mathcal{S}$ is a social welfare maximizing allocation that uses as many points from $P$ as possible. We will show that $\{s_i \}_i \setminus P = \emptyset$ and, hence, the claim follows. Towards a contradiction, assume that there exists a cut point $s_t$ of the allocation $\mathcal{S}$ that does not belong to $P$. Let $S_i =[s, s_t]$ and $S_j=[s_t, s']$ be the two \emph{nonempty} intervals in $\mathcal{S}$ that are separated by $s_t$. Interval $S_i$ is to the (immediate) left of $S_j$ and, since the allocation conforms to the MLRP order, we have that $i <j$. Given that $s_t \notin P$, we know that $s_t \neq p_{ij}$; here $p_{ij}$ is the switching point between $f_i$ and $f_j$. We will show that in this case we can always move $s_t$ towards $p_{ij}$ and obtain another social welfare maximizing allocation that uses more cut points from $P$ than $\mathcal{S}$. This contradicts the choice of $\mathcal{S}$ and establishes the stated claim. Towards this goal, consider the following two complementary cases. \\ \noindent \emph{Case (i):} $s_t < p_{ij}$. In this case we can move $s_t$ to the right without decreasing the social welfare. In particular, if $p_{ij} \in S_i \cup S_j = [s, s']$, then, instead of $S_i$ and $S_j$, we can assign the intervals $[s, p_{ij}]$ and $[p_{ij}, s']$ to agents $i$ and $j$, respectively. Since ${f_j(x)} < {f_i(x)}$ for all $x \in [s_t, p_{ij})$ (Lemma~\ref{switch}, part (a)), such an update increases the social welfare. This contradicts the optimality (with respect to social welfare) of $\mathcal{S}$. A similar argument holds if $p_{ij} > s'$. Here, we can assign $[s, s']$ entirely to agent $i$ (and an empty set to agent $j$). For all $x \leq s' < p_{ij}$, we have ${f_j(x)} < {f_i(x)}$ (Lemma~\ref{switch}, part (a)). Therefore, the reassignment increases the social welfare and leads to a contradiction. \\ \noindent \emph{Case (ii):} $s_t > p_{ij}$. In this case we can move $s_t$ to the left (towards $p_{ij}$).
If we have $p_{ij} \in S_i \cup S_j$, then assigning the intervals $[s, p_{ij}]$ and $[p_{ij}, s']$ to agents $i$ and $j$, respectively, does not decrease the social welfare (Lemma~\ref{switch}, part (b)). At the same time, it provides an allocation that uses more cut points from $P$ and, hence, contradicts the choice of $\mathcal{S}$. On the other hand, if $p_{ij} <s$, then we can assign the entire interval $[s,s']$ to agent $j$. As before, the reassignment does not decrease the social welfare. However, it does decrease the cardinality of the set difference between the cut points and $P$. This contradicts the selection criterion of $\mathcal{S}$. Hence, the cut points of $\mathcal{S}$ satisfy $\{s_i\}_i \subseteq P$ and the stated claim follows. \end{proof} We now present the main result of this section. \SocialWelfare* \begin{proof} Given a cake-division instance $\mathcal{C}$ with MLRP, write $\mathcal{S}^* = \{S^*_1, S^*_2, \dots, S^*_n\}$ to denote the allocation identified in Lemma~\ref{switchingset}; in particular, $\mathcal{S}^*$ is a social welfare maximizing allocation whose cut points $\{0=s^*_0, s^*_1, \dots, s^*_n=1\}$ belong to the set of switching points $P$. For a precision parameter $\eta >0$ and for each switching point $p_{ij} \in P$, we invoke Corollary~\ref{infirmumswitch} to find $\widehat{p}_{ij} \in [0,1]$ with the property that $|\widehat{p}_{ij}-p_{ij}| \leq \frac{\eta}{n \lambda}$. Write $\widehat{P}$ to denote the set of these estimates, $\widehat{P} \coloneqq \{ \widehat{p}_{ij} : 1 \leq i < j \leq n \} \cup \{0,1\}$.\footnote{As in the case of $P$, we include $0$ and $1$ in $\widehat{P}$ for ease of presentation.} Applying Corollary~\ref{infirmumswitch} for each switching point $p_{ij}$, we get that the set $\widehat{P}$ can be computed in $\mathcal{O}({\rm poly}(n, \log \lambda, \log \frac{1}{\eta}))$ time.
Next, we will show that there exists an allocation $\widehat{\mathcal{S}}$ whose cut points are contained in $\widehat{P}$ and whose social welfare is near-optimal, $\mathrm{SW}(\widehat{\mathcal{S}}) \geq \mathrm{SW}(\mathcal{S}^*) - \eta$. Specifically, for each cut point $s^*_i$ of $\mathcal{S}^*$, let $\widehat{s}_i$ denote its closest point in $\widehat{P}$, i.e., $\widehat{s}_i \coloneqq \argmin_{ \widehat{p} \in \widehat{P}} | s^*_i - \widehat{p} |$; here we break ties, say, lexicographically. By construction of $\widehat{P}$, we have $|s^*_i - \widehat{s}_i | \leq \frac{\eta}{n \lambda}$, for each $0 \leq i \leq n$. For the optimal allocation $\mathcal{S}^* = \{S^*_1, \ldots, S^*_n \}$ we have $S^*_i = [s^*_{i-1}, s^*_i]$ for all $i \in [n]$. Write $\widehat{\mathcal{S}} \coloneqq \{\widehat{S}_1, \widehat{S}_2, \ldots, \widehat{S}_n\}$ for the allocation in which the interval $\widehat{S}_i = [\widehat{s}_{i-1}, \widehat{s}_i]$ is assigned to agent $i \in [n]$. The cut points of the allocation $\widehat{\mathcal{S}}$ are contained in $\widehat{P}$. Also, given that $\mathcal{S}^*$ conforms to the MLRP order, so does $\widehat{\mathcal{S}}$. Using the bounds $|\widehat{s}_i - s^*_i| \leq \frac{\eta}{n \lambda}$, for $0 \leq i \leq n$, and the fact that eval queries are $\lambda$-Lipschitz, we obtain $|v_i(\widehat{S}_i) - v_i(S^*_i)| \leq \frac{\eta}{n}$, for all agents $i \in [n]$. Summing over all $i\in [n]$, we obtain the desired social-welfare bound: $|\sum_{i \in [n]} v_i(\widehat{S}_i) - \sum_{i \in [n]}v_i(S^*_i)| = |\mathrm{SW}(\widehat{\mathcal{S}}) - \mathrm{SW}(\mathcal{S}^*)| \leq \eta$. Therefore, there exists an allocation $\widehat{\mathcal{S}}$ with the properties that (i) $\widehat{\mathcal{S}}$ has near-optimal social welfare, (ii) the cut points of $\widehat{\mathcal{S}}$ are contained in $\widehat{P}$, and (iii) $\widehat{\mathcal{S}}$ conforms to the MLRP order.
To complete the proof of the theorem, we will show that, among all allocations that satisfy properties (ii) and (iii), we can find one that maximizes social welfare. We accomplish this algorithmic result via a simple dynamic program. Recall that the cardinality of the set $\widehat{P}$ is $\mathcal{O}(n^2)$ and this set can be computed in $\mathcal{O}({\rm poly}(n, \log \lambda, \log \frac{1}{\eta}))$ time using eval queries. We index the elements of the computed set $\widehat{P} = \{ \widehat{p}_t \}_t$ such that $0 = \widehat{p}_0 < \widehat{p}_1 < \ldots < \widehat{p}_{|\widehat{P}|} = 1$. For each $k \in [n]$ and $1 \leq t \leq |\widehat{P}|$, we write $M(k, t)$ to denote the maximum social welfare that one can achieve by allocating the interval $[0, \widehat{p}_t]$ among the first $k$ agents (in order).\footnote{By convention, the agents are indexed following the MLRP order.} The following recursive equation for $M(k, t)$ gives us the desired dynamic program \begin{align*} M(k, t) & \coloneqq \max_{1 \leq t' \leq t } \left\{ M(k-1, t') + v_k(\widehat{p}_{t'}, \widehat{p}_t) \right\} \end{align*} Here, we initialize $ M(1,t) \coloneqq v_1(0, \widehat{p}_t) $ for all $ 1 \leq t \leq |\widehat{P}|$. One can directly show, via induction, that $M(n, |\widehat{P}|)$ is equal to the optimal social welfare among allocations that satisfy the above-mentioned properties (ii) and (iii). That is, $M(n, |\widehat{P}|) \geq \mathrm{SW}(\mathcal{S}^*) - \eta$. Hence, the dynamic program gives us the desired allocation. For the runtime analysis, note that the dynamic program runs in $\mathcal{O}( n^2 |\widehat{P}|)$ time and only requires eval queries. Overall, we can find an allocation with social welfare $\eta$ close to the optimal in time $\mathcal{O}({\rm poly}(n, \log \lambda, \log \frac{1}{\eta}))$.
Since the precision parameter $\eta$ can be driven exponentially close to zero in time that is polynomial in $\log \frac{1}{\eta}$ (i.e., in the bit complexity of $\eta$), the stated claim follows. \end{proof} \begin{remark2} For cake-division instances with MLRP, Theorem~\ref{theorem:SocialWelfare} also shows that we can efficiently maximize social welfare among all cake divisions, and not just among allocations. In particular, let $\mathcal{D}^* = \{D^*_1, D^*_2, \dots, D^*_n\}$ denote a cake division that maximizes social welfare in a given instance. We know that there exists an allocation $\mathcal{J}=\{J_1, \ldots, J_n \}$ such that $v_i(J_i) \geq v_i(D^*_i)$, for all $i \in [n]$ (Theorem~\ref{theorem:POorder}). Summing over $i$, we get $\mathrm{SW}(\mathcal{J}) \geq \sum_{i=1}^n v_i(D^*_i)$. Therefore, the allocation computed through Theorem~\ref{theorem:SocialWelfare} is (near) optimal among all cake divisions. \end{remark2} \section{Distribution Families with MLRP} \label{appendix:mlrp-use-cases} This section highlights that various well-studied distribution families bear MLRP. Recall that two probability density functions $f_i$ and $f_j$ are said to satisfy MLRP iff, for every $x \leq y$ in the domain, we have \begin{align*} \frac{f_j(x)}{f_i(x)} & \leq \frac{f_j(y)}{f_i(y)}. \end{align*} That is, the {likelihood ratio} $\nicefrac{f_j(x)}{f_i(x)}$ is non-decreasing in the argument $x \in \mathbb{R}$. It is relevant to note that this property can be verified analytically. We support this observation through two examples: binomial polynomials (and, hence, linear value densities) in Proposition~\ref{prop:binomial-polynomials} and Gaussian densities in Proposition~\ref{prop:gaussian}. Similar analytic arguments can be used to show that many other classes of value densities satisfy MLRP \cite{larsen2001introduction, casella2002statistical,saumard2014log}.
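Alongside the analytic verification, MLRP for a concrete pair of densities can also be sanity-checked numerically, by testing that the likelihood ratio is non-decreasing on a discrete grid. The sketch below is illustrative only (the helper name and the example densities are not part of the paper); the chosen pairs match the two families treated in the propositions that follow.

```python
import math

def mlrp_on_grid(f_i, f_j, num=1000, tol=1e-12):
    """Numerically check, on a uniform grid over [0, 1], that the
    likelihood ratio f_j(x) / f_i(x) is non-decreasing in x."""
    xs = [k / num for k in range(num + 1)]
    ratios = [f_j(x) / f_i(x) for x in xs]
    return all(r2 >= r1 - tol for r1, r2 in zip(ratios, ratios[1:]))

# Binomial polynomials a x^s + b x^t with s = 1, t = 0 (linear densities);
# here a_i b_j - a_j b_i = 1*2 - 2*2 = -2 <= 0, matching the MLRP condition.
f_i = lambda x: 1.0 * x + 2.0
f_j = lambda x: 2.0 * x + 2.0
print(mlrp_on_grid(f_i, f_j))  # True

# Equal-variance Gaussian densities with means 0.3 <= 0.7.
g_i = lambda x: math.exp(-((x - 0.3) ** 2) / 0.02)
g_j = lambda x: math.exp(-((x - 0.7) ** 2) / 0.02)
print(mlrp_on_grid(g_i, g_j))  # True
print(mlrp_on_grid(g_j, g_i))  # False (the reversed order violates MLRP)
```

Note that normalization is irrelevant for such a check, since MLRP is preserved under positive scaling of the densities.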
\\ \noindent \textbf{Binomial Polynomials:} The following proposition shows that every pair of binomial polynomials bears MLRP over $[0,1]$, i.e., we have a total order over this family of density functions with respect to MLRP.\footnote{As mentioned previously, MLRP is preserved under scaling and, hence, here we do not have to explicitly enforce normalization.} Hence, the results developed in this work hold for cake-division instances in which the value densities are binomial. Also, setting the exponent parameters $s=1$ and $t = 0$ in this proposition, we observe that linear functions form a special case of binomial polynomials. \begin{proposition} \label{prop:binomial-polynomials} With integer exponents $s > t$, let $f_i (x) = a_i x^s + b_i x^t$ and $f_j (x) = a_j x^s + b_j x^t$ be two binomial polynomials. Then, $f_i$ and $f_j$ bear MLRP iff $a_i b_j - a_j b_i \leq 0$. \end{proposition} \begin{proof} For any two points $x, y \in [0,1]$, such that $x \leq y$, the MLRP condition for binomials corresponds to the following inequality \begin{align*} \frac{a_j x^s + b_j x^t}{a_i x^s + b_i x^t} \leq \frac{a_j y^s + b_j y^t}{a_i y^s + b_i y^t} \end{align*} Since $f_i$ and $f_j$ constitute value densities with full support, the values of these functions are positive over the cake. Hence, the previous inequality can be rewritten as \begin{align*} (a_j x^s + b_j x^t)(a_i y^s + b_i y^t) \leq (a_j y^s + b_j y^t)(a_i x^s + b_i x^t) \end{align*} This inequality further simplifies to \begin{align*} (a_i b_j - a_j b_i) x^t y^s \leq (a_i b_j - a_j b_i) x^s y^t \end{align*} Finally, we rewrite the last inequality to obtain that, for binomials, MLRP is equivalent to \begin{align} \label{equation:binomial-mlrp} x^ty^t (a_i b_j - a_j b_i) (y^{s-t} - x^{s-t}) \leq 0 \end{align} Since $y-x \geq 0$ and $(s-t) \in \mathbb{Z}_+$, we have that\footnote{For $(s-t) \in \mathbb{N}$, write $y^{s-t} - x^{s-t} = (y-x)(y^{s-t-1} + y^{s-t-2}x+ \dots + y x^{s-t-2} + x^{s-t-1})$.
Hence, $(y-x)$ and $(y^{s-t} - x^{s-t})$ have the same sign, for $0 \leq x \leq y \leq 1$.} $y^{s-t}-x^{s-t} \geq 0 $. Furthermore, since $x^ty^t \geq 0$, inequality (\ref{equation:binomial-mlrp}) holds iff $ a_i b_j - a_j b_i \leq 0$. That is, $f_i$ and $f_j$ bear MLRP iff $ a_i b_j - a_j b_i \leq 0$. \end{proof} \noindent \textbf{Gaussian distributions:} The next proposition shows that Gaussian distributions with different means, but the same variance, bear MLRP. \begin{proposition} \label{prop:gaussian} Let $f_i$ and $f_j$ be two Gaussian density functions with the same variance $\sigma^2$ and means $\mu_i \leq \mu_j$, respectively. Then, $f_i$ and $f_j$ satisfy MLRP. \end{proposition} \begin{proof} Here, the two density functions are $f_i(x) = \frac{1}{\sigma \sqrt{2 \pi}} e^{- \frac{(x - \mu_i)^2}{2 \sigma^2}}$ and $f_j(x) = \frac{1}{\sigma \sqrt{2 \pi }} e^{- \frac{(x - \mu_j)^2}{2 \sigma^2}}$, for $x \in \mathbb{R}$. Note that the likelihood ratio of $f_j$ and $f_i$ is equal to \begin{align*} \frac{e^{- (x - \mu_j)^2/2 \sigma^2}}{e^{- (x - \mu_i)^2/2 \sigma^2}} = {\rm exp} \left( \frac{1}{2 \sigma^2} \left( (x-\mu_i)^2-(x-\mu_j)^2 \right) \right) \end{align*} Write $g(x) \coloneqq \frac{1}{2 \sigma^2} \left( (x-\mu_i)^2-(x-\mu_j)^2 \right)$ and note that the derivative of this function, $g'(x) = \frac{(\mu_j-\mu_i)}{\sigma^2}$, is nonnegative iff $\mu_i \leq \mu_j$. Hence, $g$ is a non-decreasing function of $x$---and so is ${\exp} \left( g(x) \right)$---iff $\mu_i \leq \mu_j$. That is, $f_i$ and $f_j$ bear MLRP iff $\mu_i \leq \mu_j$. \end{proof} An analogous result holds for Gaussians with mean zero, but distinct variances.
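For equal-variance Gaussians the switching point admits a closed form: $f_j(x) \geq f_i(x)$ iff $x \geq (\mu_i + \mu_j)/2$, so $p_{ij}$ is the midpoint of the means. This makes the pair a convenient test case for the block-wise binary search of Corollary~\ref{infirmumswitch}. The sketch below is illustrative only (function names are hypothetical; Gaussian masses computed from the error function play the role of the eval queries).

```python
import math

def gauss_value(mu, sigma, a, b):
    """Mass of a Gaussian density (mean mu, std sigma) on [a, b]; this
    plays the role of the eval query v(a, b) for that agent."""
    def cdf(x):
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    return cdf(b) - cdf(a)

def locate_switching_point(mu_i, mu_j, sigma, gamma):
    """Binary search over blocks B_k = [(k-1)g, k*g] of width g = gamma/2
    for the first block on which agent j's value reaches agent i's value;
    returns an interval of length at most gamma containing p_ij."""
    g = gamma / 2.0
    n_blocks = int(round(2.0 / gamma))
    lo, hi = 1, n_blocks
    while lo < hi:
        mid = (lo + hi) // 2
        a, b = (mid - 1) * g, mid * g
        if gauss_value(mu_j, sigma, a, b) >= gauss_value(mu_i, sigma, a, b):
            hi = mid      # first block with v_j >= v_i is at or before mid
        else:
            lo = mid + 1
    # p_ij lies in B_{lo-1} U B_lo = [(lo-2) g, lo g] (clamped to [0, 1])
    return max(0.0, (lo - 2) * g), min(1.0, lo * g)

left, right = locate_switching_point(0.3, 0.7, 0.1, 1e-4)
# For equal variances the switching point is (mu_i + mu_j) / 2 = 0.5.
print(left <= 0.5 <= right, right - left <= 1e-4 + 1e-12)  # True True
```

The search performs $\mathcal{O}(\log(1/\gamma))$ value comparisons, matching the corollary's bound.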
2,877,628,088,695
arxiv
\section{Introduction} There is a lot of current interest in the problem of elastic lattices in the presence of substrate disorder. This is important in various physical systems such as flux lattices in superconductors \cite{blatter-vortex-review}, charge density waves \cite{gruner-revue-cdw}, colloids and magnetic bubbles \cite{seshadri-bubbles-thermal}, Wigner crystals \cite{wigner-millis}, interface roughening with disorder and solid friction of surfaces \cite{hwa-cule}. An important distinction is whether topological defects such as dislocations are present in the system or not. These defects may appear spontaneously because of disorder or not, or may be explictly excluded from the model (e.g flux lines in d=1+1 geometry \cite{nattermann-flux-creep,carpentier-bglass-layered,nattermann-kierfeld}, tethered networks with permanent bonds). Systems which are both elastic and periodic and do {\it not} contain topological defects are believed to form glass states with many metastable states, diverging barriers and slow dynamics \cite{villain-cosine-realrg,nattermann-pinning,giamarchi-vortex-short,giamarchi-vortex-long,fisher-cdw}. There are very few analytical methods to study these states and their physics is far from being completely elucidated, despite help from extensive numerical simulations. When topological defects are present the problem is even more difficult and very little is known. A crucial question of course is whether these topological defects, when allowed in the model, will appear spontaneously at low temperature because of disorder. The general problem of a lattice on a disordered substrate was addressed in a previous work \cite{giamarchi-vortex-short,giamarchi-vortex-long} and it was predicted that in $d=3$ and for weak pointlike (uncorrelated) disorder a thermodynamic phase free of unbound dislocations should exist and be stable. 
In this phase, called the Bragg glass, translational order decays very slowly beyond a translational quasi-order length $R_a$ (at most algebraically). These results \cite{giamarchi-vortex-short,giamarchi-vortex-long} being obtained by mapping the lattice problem onto a multicomponent version of a random field XY model, the very same predictions (i.e absence of vortices and quasi long range order) obviously hold for the usual random field XY model as well. These results are at variance with previous studies which either argued for \cite{fisher-vortexglass-long} or assumed \cite{chudnovsky-pinning} the spontaneous generation of dislocations, or did not address the issue \cite{blatter-vortex-review,feigelman-collective,fisher-vortexglass-short,nattermann-pinning}. Predicted consequences \cite{giamarchi-vortex-long} for the phase diagram of type II superconductors, i.e the existence of a low temperature, low field Bragg glass phase of vortices undergoing a transition into an amorphous state upon raising the field, seem to be in agreement \cite{giamarchi-diagphas} with the most recent experiments. The existence of a $d=3$ defect free, Bragg glass type phase was confirmed in several numerical simulations either in the context of superconductors \cite{ryu-diagphas-numerics} or for the random field XY model \cite{gingras-dislocations-numerics}. Though a rigorous proof is still lacking, there is also at present further theoretical support for the stability of the Bragg glass: analytical variational \cite{carpentier-bglass-layered,nattermann-kierfeld} and RG studies \cite{carpentier-bglass-layered} in a layered geometry and, very recently, a proof using improved scaling and energy arguments \cite{fisher-bragg-proof}. The Bragg glass is thus an example of an elastic glass phase with internal periodicity. In the physics of this type of glass the dimension $d=2$ plays a particular role.
In addition, further analytical methods become available \cite{cardy-desordre-rg,goldschmidt-houghton,toner-log-2} in $d=2$. When dislocations are {\it excluded} by hand, it is believed that the lower critical dimension of these elastic glasses with internal periodicity is $d_{lc}=2$, with a glass phase existing for $T< T_g$ in $d=2$. For the $N=1$ component model (random field XY model) this is the result of the Cardy Ostlund (CO) RG calculation \cite{cardy-desordre-rg}. By analogy it can be argued \cite{giamarchi-vortex-long} that the same holds in the case of the triangular lattice, but it has not yet been directly verified. A triangular lattice requires a fully coupled $N=2$ component model. The CO $N=1$ calculation could in principle describe an $N=2$ decoupled square lattice, except that such a lattice usually has a more complicated elasticity tensor which again couples the components. When topological defects are allowed, it was shown in \cite{cardy-desordre-rg} that for the $N=1$ component model they are perturbatively relevant near the glass transition $T_g$. As argued in \cite{giamarchi-vortex-long} $T_g$ for the triangular lattice is well above the KTNHY melting temperature $T_m$ and dislocations should then be relevant near $T_g$ for the $N=2$ triangular lattice as well. At low temperature however, much less is known about the importance of dislocations. The common belief \cite{blatter-vortex-review}, which is by no means rigorously established, is that if dislocations are allowed, no glass phase will exist in $d=2$. This is also hinted at by the results obtained in \cite{rubinstein-nelson-shraiman} on the simpler case of the random phase shift model, relevant to describe a lattice with {\it internal} (i.e structural) disorder \cite{nelson-elastic-disorder} (a subset of the present problem).
In these early studies \cite{rubinstein-nelson-shraiman,nelson-elastic-disorder} the high temperature phase with unbound dislocations was found to be {\it reentrant} at low temperatures, suggesting the importance of topological defects at low temperature. However, it was pointed out in \cite{giamarchi-vortex-long}, from a careful study of the CO RG flow, that at low temperature the scale at which the lattice is effectively dislocation-free (i.e the distance between unpaired dislocations) can be {\it much larger} than the translational length $R_a$. Thus even in $d=2$ the Bragg glass fixed point may be useful to describe the physics, as a very long crossover or maybe directly at $T=0$. Furthermore, it was also pointed out in \cite{giamarchi-vortex-long2} that the conventional CO RG will not be adequate at low temperature since it assumes a thermalized description of the vortices and neglects important effects such as the pinning of dislocations by disorder (i.e that the fugacity depends on the position). A similar idea was recently proposed by Nattermann et al. \cite{natt1} who reconsidered the simpler random phase shift model. They explicitly showed that the results of \cite{rubinstein-nelson-shraiman,nelson-elastic-disorder} were incorrect at low temperature and proposed a modified Coulomb gas RG. It is still an open question how this new set of ideas and techniques applies to the present problem. In the present paper we study the statics and the dynamics of two dimensional isotropic lattices interacting with point like disorder {\it excluding dislocations}. We use a (replica symmetric) Renormalisation Group (RG) approach as well as equilibrium dynamics RG. Though we will not attempt to include dislocations, this study is still interesting for several reasons. First, it is the natural continuation to $d=2$ of the non trivial fixed point which describes the Bragg glass phase for $d=3$.
The only other analytical approaches available to describe the Bragg glass phase are the replica variational method \cite{giamarchi-vortex-short,korshunov-variational-short,giamarchi-vortex-long} and Fisher's functional RG \cite{fisher-functional-rg} in a $d=4-\epsilon$ expansion \cite{giamarchi-vortex-short,giamarchi-vortex-long}. Thus, the study of the present fixed point in $d=2$ is also useful by comparison. Second, it is of general interest for a systematic study of the $d=2$ glass phases and allows us to show that the lower critical dimension is $d=2$. It is also a first step towards the study of the more difficult problem of lattices with disorder in $d=2$ with dislocations allowed. Also, since it has more parameters than the $N=1$ component model and can be similarly studied numerically, it may lead to further useful numerical checks of the replica symmetric RG in this problem as well as the dynamics (see discussion below). Finally, it may be possible to realize such tethered networks experimentally. It was argued recently \cite{hwa-cule} that {\it internal disorder} which breaks the internal periodicity of the network (and may occur in e.g polymerization) drives the system to another universality class. Even if this claim, which we will not investigate here, is correct, crossover lengths will probably be large for weak internal disorder. The $N=1$ Cardy-Ostlund model, which is a subcase of our present calculation, has also generated a lot of attention recently, for different reasons. The gaussian replica variational method (GVM) applied to this model leads to a one step replica symmetry breaking (RSB) solution below $T_g$ and to mean squared displacements which grow as $u^2 \sim T_g \ln r$, a different result from the replica symmetric (RS) RG prediction $u^2 \sim A_1 \ln^2 r$. Note that the GVM, being by construction an approximation which neglects some nonlinearities, has no reason to yield the exact result (see however \cite{korshunov96}).
However it was also shown \cite{ledoussal-rsb-lettre,kierfeld} that the Cardy Ostlund RS-RG flow is unstable to an infinitesimal RSB perturbation at and below $T_g$. The issue was thus raised \cite{ledoussal-rsb-lettre} of whether the RS-RG may miss some of the physics related to RSB. It is thus important to perform numerical simulations and carefully check the predictions of the RS-RG, as well as carefully define the observables to be computed. The numerical studies on the random phase sine-Gordon model presently available show discrepancies, and their analysis is not satisfactory. In \cite{numRSGM1} no effect of disorder was found, while in \cite{numRSGM2} the results seem close to the predictions of the GVM. In the more recent \cite{numRSGM3,numRSGM4} the results seem to agree qualitatively with the RS-RG but with an amplitude {\it much smaller} than the prediction for $A_1$ quoted in \cite{hwa-fisher-co,batrouni-numerical-cardy} to which it was compared. Let us also point out the two very recent studies \cite{rieger-zerot-co,middleton-zerot-co} of the random substrate SOS model (believed to be in the same universality class) directly at {\it zero temperature} which also yield $u^2 \sim A_1 \ln^2 r$. In view of the above interest, it is thus important to compute accurately the amplitude $A_1$ predicted by the RS-RG. This is one of the results of the present paper. Our result, smaller by a factor of $4$ than the previous result \cite{hwa-fisher-co,batrouni-numerical-cardy}, is different from any of the previously published results \cite{hwa-fisher-co,batrouni-numerical-cardy,goldschmidt-houghton} and, we argue, is correct. This seems to reconcile, at least in order of magnitude, the result of the RS-RG with the result of the numerical simulations of \cite{numRSGM3,numRSGM4}.
A finer and more precise analysis than the one performed in \cite{numRSGM3,numRSGM4} is however needed, including predicted finite size scaling corrections, since a more careful treatment of the effects of RSB may reveal that deviations from the RS-RG result are small \cite{footnote5}, e.g only in the amplitude of the $\log^2 r$ \cite{ledou-giamarchi}. Another consequence of our results is that the distribution of rescaled displacements $u/\ln r$ should be {\it gaussian} at large scale, which we propose as a useful numerical check of the RS-RG method. Along these lines, one notes that the equilibrium dynamical RG studied here, because it obeys by construction the Fluctuation Dissipation Theorem (FDT), gives, as we illustrate explicitly, identical answers for static quantities to those of the replica symmetric static RG (it thus provides a useful check on our statics calculations). As discussed above, it is possible that the static RG does need an RSB structure (either strong, i.e in the ground state as in mean field - or weak, i.e only in the low lying excitations) and a proposal for its construction was made in \cite{ledoussal-rsb-lettre}. Also, we interpret the recent results \cite{natt1} on the simpler random phase shift model, i.e its mapping on some version of Derrida's random energy model, as another hint that RSB may be important in these models. If this is the case, it is then obvious that an equivalent statement should exist in the dynamics, i.e that one may need to construct an RG which violates FDT and replaces it with a generalized FDT structure, a program which was successfully carried out in mean field dynamics \cite{meanfield-dynamics1,meanfield-dynamics2}. We will not address this question further here but will concentrate on the predictions of the replica symmetric RG which is certainly a good starting point in this problem. The paper is organized as follows.
In Section (\ref{themodel}) we define the model for a triangular lattice on a disordered substrate. In Section (\ref{section3}) we study the statics of the problem. First we use a mapping onto the replicated Coulomb gas and derive the RG equations (Section (\ref{section3a})). Then we analyze the RG flow and compute the correlation functions (\ref{section3b}). The results for the triangular lattice can be found in (\ref{section3b1}). In (\ref{section3b2}) we give the results for the $N=1$ Cardy Ostlund model and review the discrepancy with other published results. In Section (\ref{section4}) we study the dynamics. In (\ref{section4a}) we carry out first order perturbation theory and in (\ref{section4b}) we obtain the results for the dynamical RG. We then compute the dynamical exponent, and obtain the scaling behaviour of transport quantities at the transition. In particular we identify an ``effective critical force'' at the transition. Section (\ref{section5}) gives the conclusion. Technicalities are relegated to the appendices. In Appendix (\ref{appendixa}) we discuss in detail the regularization, obtain the RG equations and show explicitly the cutoff independence of some universal ratios. In Appendix (\ref{appendixb}) we perform the RG directly on the static replica effective action. In Appendix (\ref{appendixc}) we perform the dynamical RG to second order. In Appendix (\ref{appendixe}) we study some consequences of the statistical tilt symmetry. \section{The model} \label{themodel} We consider a set of identical ``atoms''. The interaction between them is such that the ground state configuration in the absence of disorder is a perfect triangular lattice (see fig. 1) of equilibrium positions $R_i^0$ and lattice spacing $a_0$. Both thermal fluctuations and disorder will result in displacements of the atoms from their ideal positions $R_i^0$ to new positions $R_i=R_i^0 + u(R_i^0)$.
Furthermore the connectivity of the lattice is fixed: each site will have exactly six neighbors (see fig 1). This can be realized in principle by considering a network of identical and permanent nearest neighbor bonds (tethers) with the topology of a perfect triangular lattice. Dislocations are thus excluded in this model by construction. As discussed in the Introduction, the model can also be relevant at length scales and in regions of the phase diagram where dislocations can be neglected. \begin{figure} \centerline{\fig{6cm}{fig-lattice.eps}} \caption{ \label{fig-lattice} \narrowtext Representation of the elastic lattice and its reciprocal lattice vectors of minimal modulus.} \end{figure} The interaction energy can be described in terms of the local displacement field $\vec{u}(\vec{r})$ by the harmonic hamiltonian \begin{equation} \label{elastic-interaction} H_0 = {1 \over 2} \int {d^2 \vec{q} \over (2 \pi)^2 } u_i(\vec{q}) \Phi_{ij}(\vec{q}) u_j(-\vec{q}) \end{equation} where $\Phi_{ij}(\vec{q})$ is the elastic matrix of the 2D lattice. In this paper we will consider isotropic ({\it i.e} triangular) lattices which can be described by only two independent elastic coefficients: \begin{equation} \label{phi} \Phi_{ij} (\vec{q})= c_{11} q^2 P_{ij}^L + c_{66} q^2 P_{ij}^T \end{equation} where $P_{ij}^L = \hat{q}_i \hat{q}_j = q_i q_j / q^2$ and $P_{ij}^T= \delta_{ij} - \hat{q}_i \hat{q}_j = (\hat{q}_{\perp})_i(\hat{q}_{\perp})_j$ are respectively the longitudinal and transverse projectors. The interaction of the lattice with the impurity disorder of the substrate is modelled by adding to the hamiltonian: \begin{equation} H_V = \int d^2 \vec{x} ~ \rho(\vec{x}) V(\vec{x}) \end{equation} where $\rho(\vec{x})$ is the local density of atoms, $\rho(\vec{x}) = \sum_i \delta^{(2)}(\vec{x} - \vec{R}_i)$. The random potential is gaussian with short range correlator $\overline{V(x) V(x')} = h(x-x')$. The symbol $\overline{..}$ denotes average over disorder, while $\langle ..
\rangle$ denotes thermal averages. This model can be transformed, as described in detail in \cite{giamarchi-vortex-long} (see Section II-B), into the following random phase model: \begin{eqnarray} \overline{Z} & = & \int d[u] exp ~ - {1 \over T} \left( H_0 + H_{dis} \right) \\ \label{def-model} {1 \over T} H_{dis} & = & \int_{\vec{x}} \frac{1}{2} \sigma_{ij}(x) (\partial_i u_j + \partial_j u_i) \\ \nonumber & & + 2 \sqrt{g} \sum_{\nu} \cos\left( \vec{K}_{\nu}. \vec{u}(x)+\phi_{\nu}(x) \right) \end{eqnarray} The first term corresponds to random local stresses and comes from the $q \sim 0$ part of the disorder\cite{giamarchi-vortex-long}. The effect of only this term was studied in \cite{nelson-elastic-disorder} (in the presence of dislocations), and would also arise in a problem of structural disorder (with no substrate). The second term originates from the Fourier components of the substrate potential $V_{K}$ close to one of the reciprocal lattice vectors \cite{giamarchi-vortex-long}. The random field is a phase distributed uniformly over $[0,2 \pi]$ and satisfies: \begin{equation} \overline{\langle e^{i(\phi_{\nu}(x)-\phi_{\nu'}(x')) } \rangle } =\delta_{\nu,\nu'} \delta^{(2)}(x-x') \end{equation} Note that we are keeping only the terms which are relevant in the RG sense near $T_g$. There are additional higher harmonic terms which are irrelevant in $d=2$ near $T_g$. There are also higher nonlinear terms which are small in the elastic limit $|\vec{u}(\vec{R}_{i+1})-\vec{u}(\vec{R}_{i})|\ll a$ where the derivation of (\ref{elastic-interaction}) is valid, and which are also irrelevant by power counting.
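The longitudinal and transverse projectors introduced in (\ref{phi}) satisfy $P^L+P^T=1$ and $P^L P^T=0$, so that the elastic matrix is inverted mode by mode, $\Phi^{-1}_{ij}(\vec{q})=P^L_{ij}/(c_{11}q^2)+P^T_{ij}/(c_{66}q^2)$. A minimal numerical sketch of this projector algebra (the wavevector and elastic constants are arbitrary illustrative values, not taken from the model):

```python
# Check the projector algebra behind Phi_ij(q) = c11 q^2 P^L_ij + c66 q^2 P^T_ij.
qx, qy = 0.7, -0.3           # arbitrary wavevector components
c11, c66 = 2.0, 0.5          # arbitrary elastic constants, c11 > c66 > 0

q2 = qx * qx + qy * qy
# Longitudinal projector P^L_ij = q_i q_j / q^2 and transverse P^T = 1 - P^L.
PL = [[qx * qx / q2, qx * qy / q2], [qy * qx / q2, qy * qy / q2]]
PT = [[1.0 - PL[0][0], -PL[0][1]], [-PL[1][0], 1.0 - PL[1][1]]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(A, B, eps=1e-12):
    return all(abs(A[i][j] - B[i][j]) < eps for i in range(2) for j in range(2))

identity = [[1.0, 0.0], [0.0, 1.0]]
zero = [[0.0, 0.0], [0.0, 0.0]]

# P^L and P^T are idempotent, mutually orthogonal and sum to the identity.
assert close(matmul(PL, PL), PL)
assert close(matmul(PL, PT), zero)
assert close([[PL[i][j] + PT[i][j] for j in range(2)] for i in range(2)], identity)

# Hence Phi is inverted mode by mode: Phi^{-1} = P^L/(c11 q^2) + P^T/(c66 q^2).
Phi = [[c11 * q2 * PL[i][j] + c66 * q2 * PT[i][j] for j in range(2)]
       for i in range(2)]
Phi_inv = [[PL[i][j] / (c11 * q2) + PT[i][j] / (c66 * q2) for j in range(2)]
           for i in range(2)]
assert close(matmul(Phi, Phi_inv), identity)
```

The same decomposition is used below for the random stress correlator and for the displacement correlations.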
These terms will correct the bare values of the relevant coupling constants \cite{footnote3}. The random stress field has the general correlator: \begin{eqnarray} \overline{ \sigma_{ij}(x) \sigma_{kl}(x') } & = & \frac{1}{T} [ (\Delta_{11} - 2 \Delta_{66}) \delta_{ij} \delta_{kl} \\ \nonumber && + \Delta_{66} ( \delta_{ik} \delta_{jl} + \delta_{il} \delta_{jk} ) ] \end{eqnarray} The bare values are $\Delta_{66}=0$ and $\Delta_{11}=\rho_0^2 h_{k=0}/T $ (from \cite{giamarchi-vortex-long}), but they will flow under renormalization. We will study both the statics and the dynamics of this model. The statics of this model can be studied using replicas to average over the disorder potential $V(\vec{x})$ \cite{edwards-replica}. The replicated model reads: \begin{eqnarray} \label{replica-model} \overline{Z^n} & = & \int d[u^a] exp ~- {1 \over T} \left( H_0^{(n)} + H_I\right) \\ H_0^{(n)} & = & \frac{1}{2} \sum_{ab} \int_q [ ( c_{11} P_{ij}^L + c_{66} P_{ij}^T ) \delta_{ab} \\ \nonumber & & - ( \Delta_{11} P_{ij}^L + \Delta_{66} P_{ij}^T ) ] q^2 u^a_i(q) u^b_j(-q) \\ \frac{H_I}{T} & = & - g \sum_{\nu = 1,2,3} \sum_{a,b=1}^n \cos \vec{K}_{\nu}. \left[ \vec{u}^a (\vec{x})-\vec{u}^b (\vec{x})\right] \end{eqnarray} $g$ is related to the Fourier coefficient $h_{K_0}$ and the constant density $\rho_0$: $g = \rho_0^2 h_{K_0} / T^2 $. We will also use the dimensionless coupling constant $\tilde{g} = g a^2$. \section{Study of the Statics using a Coulomb gas formulation} \label{section3} In this Section we study the statics of the model. We first derive the renormalization group equations from a Coulomb gas formulation. In the second part we compute the static displacement correlation functions.
\subsection{derivation of the RG equations from the Coulomb gas} \label{section3a} The model (\ref{replica-model}) is the analogue for a two-component field of the two dimensional random field XY model whose RG equations have been derived using the Coulomb Gas approach by Cardy and Ostlund\cite{cardy-desordre-rg}. In contrast to theirs, our Coulomb gas formulation\cite{nienhuis,nelson-halperin,young} is obtained by first using a Villain approximation\cite{villain} before averaging over the disorder. This results in a natural coupling between the continuous replicated displacement field $u^a(\vec{x})$ and vector charges $\vec{n}^a_{\nu}(\vec{x})= n^a(\vec{x},\nu).\vec{K}_{\nu}$ where $\vec{K}_{\nu}$ is one of the three ($ \nu=1,2,3$) reciprocal lattice vectors of minimal modulus (see fig 1) and $ n^a(\vec{x},\nu) \in {\Bbb Z}^n$ : \begin{eqnarray} \overline{{Z}^{n}} = \int d[u^{a}] \ \sum_{[n_{\nu}^{a}(\vec{x})] } \ \exp \ \Biggl( && -{1 \over T} H_0^n -i \ \int_{\vec{r}} \vec{n}_{\nu}^{a}.\vec{u}^{a}(\vec{r}) \\ \nonumber & & +\int_{\vec{r}} \ln (\sqrt{g}) \sum_{a,\nu} \left( \vec{n}^{a} \right)^{2} (\vec{r},\nu)\Biggr) \end{eqnarray} with the condition $ \forall (\vec{x},\nu) \ \ \sum_{a} n^{a}(\vec{x},\nu)=0 $. The RG equations will be derived to lowest order in the charge fugacity $\sqrt{g}$, since higher order operators are irrelevant near the critical point\cite{amit}: thus we need only consider charges of the form $n^{a}(x,\nu) =\delta_{\alpha}^{a}-\delta_{\beta}^{a}\equiv \delta_{\alpha,\beta}^a$ where $\alpha$ and $\beta$ are replica integers between $0$ and $n$. The effective fugacity of these minimal charges is $g$.
Integrating over the smooth field $u(\vec{x})$ one recovers a 2D vector Coulomb Gas with charges having both a spatial and a replica structure, and whose action is (omitting the fugacity term): \begin{eqnarray} S[n] & = & { T \over 2}\sum_{\nu,\nu'} \int_{\vec{q}} \vec{n}^{a}_{\nu,i}(q) (\Phi^{-1})_{ij}^{ab} \vec{n}^{b}_{\nu',j}(-q) \\ & = & - { T \over 2}\sum_{\nu,\nu'} \int_{\vec{r},\vec{r}'} \vec{n}_{\nu,i}^{a}(\vec{r})V_{ij}^{ab}(\vec{r}-\vec{r}')\vec{n}_{\nu,j}^{b}(\vec{r}') \end{eqnarray} where the interaction $V(\vec{r})$ is obtained by a Fourier transform. We have incorporated the replica off diagonal terms in $\Phi^{-1}_{ab}$. In this section, as is usually done, we will use\cite{footnote6}, instead of the full interaction, its asymptotic form\cite{nelson-halperin} ($r\gg a$) (see Appendix A) : \begin{equation} \label{CG-interaction} V_{ij}(\vec{r}) = \delta_{ij} \left( {\kappa}^{ab}_{1} \ln {r \over a} + \frac{1}{2} {\kappa}^{ab}_{2}\right) \ \ -{\kappa}^{ab}_{2}\ { r_{i}r_{j} \over r^{2} } \end{equation} and ${\kappa}^{ab}_{1} = \left( c_{66}^{-1} +c_{11}^{-1}\right)^{ab}/4 \pi$, ${\kappa}^{ab}_{2} = \left( c_{66}^{-1} -c_{11}^{-1}\right)^{ab}/4 \pi $. We will use the notation ${\kappa}^{ab}_{1}= {\kappa}_1 \delta_{ab} -\overline{{\kappa}_1}$ and $\left( c_{11}\right)^{ab}= c_{11}\delta^{ab} -\Delta_{11}$. We will not deal with the possible appearance of dislocations in this model; this would lead to an electromagnetic vector CG which needs more work\cite{work-in-progress}. This simplification allows us to derive RG equations by working on the 2-point correlation function\cite{nelson-in-domb} instead of the replicated partition function\cite{young,nelson-halperin}. These two schemes are equivalent\cite{footnote} to order (at least) $g^2$.
The renormalised elastic matrix can be defined as \begin{eqnarray} q^2 \left(\Phi_{R}^{-1}\right)_{ij}^{ab} & = & (c_{11,R}^{-1})^{ab} P_{ij}^{L} + (c_{66,R}^{-1})^{ab} P_{ij}^{T} \\ &= & \lim_{q \rightarrow 0} { q^{2}\over T} \langle u_{i}^{a}(\vec{q})u_{j}^{b}(-\vec{q})\rangle \end{eqnarray} To be consistent this definition must be independent of the direction $\hat{q}$: this is simply a direct consequence of the isotropic nature of the 2nd order invariant tensors on the triangular lattice. The explicit expression for the renormalised elastic coefficients naturally follows: \begin{eqnarray} \label{renorm-elastic} & & \left(c_{11,R}^{-1} -c_{11}^{-1}\right)^{ab} = \\ \nonumber & & -T \lim_{q\rightarrow 0} q^{2} \langle \vec{n}^{c}_{\nu,l}(\vec{q}) \vec{n}^{d}_{\nu',m}(-\vec{q})\rangle _{S[n]} P_{ij}^{L} \left(\Phi^{-1}\right)_{il}^{ac}\left(\Phi^{-1}\right)_{jm}^{bd} \end{eqnarray} where $i,j,l,m$ are cartesian coordinates, and we used the notation $ \vec{n}^{c}_{\nu,l} = n^c \times \left(\vec{K}_{\nu}\right)_l$. The same holds for $c_{66}^{-1}$ by simply replacing $P_{ij}^L$ by $P_{ij}^T$. \begin{figure}[htb] \centerline{ \fig{6cm}{fig-charge.eps} } \caption{ \label{fig-correl} \narrowtext Representation of the three different configurations that contribute to the correlation function of the Coulomb Gas charge $\vec{n}$. Configuration (a) is of order $g^{2}$ whereas (b) and (c) are of order $g^{3}$. } \end{figure} Replacing $\Phi^{-1}$ by its expression (\ref{phi}) and using the Fourier transform of the charge correlation, we obtain \begin{eqnarray} \label{perturb-elastic} \left(c_{11,R}^{-1}\right)^{ab} = \left(c_{11}^{-1}\right)^{ab} & + & {T \over 2} P_{ij}^L \left( c_{11}^{-1}\right)^{ac}\left( c_{11}^{-1}\right)^{bd} \\ \nonumber & \times & \int_{\vec{r}} (\hat{q}.\vec{r})^{2} \langle \vec{n}_{\nu,i}^{c}(\vec{0})\vec{n}_{\nu',j}^{d}(\vec{r})\rangle_{S[n]} \end{eqnarray} This equation is the starting point of our RG study.
Note that it is a definition of the renormalised elastic coefficients, exact to all orders in $g$. We now proceed as follows: first we expand perturbatively in $g$ the right-hand side of this equation. One can then apply a coarse-graining procedure which leaves the form of this perturbative expansion unchanged, thus defining coarse-grained elastic coefficients. To order $g^3$ only three terms appear (see fig.2): (a) of order $g^2$, and (b) and (c) of order $g^3$. Corrections coming from the coarse-graining of configuration (a) define the coarse-grained elastic constant $c_{11}^{-1}(l)$ to order $g^2$ and $g(l)$ to order $g$, where $l$ is the scaling parameter. In the spirit of Coulomb Gas renormalisation\cite{nienhuis}, this corresponds to the annihilation of charges (screening of the Coulomb interaction), whereas the fusion of charges is given by the contributions of configurations (b) and (c) when $|\vec{R}|\ll |\vec{r}|$ or when $|\vec{R}-\vec{r}|\ll |\vec{r}|$. This gives the correction to $g(l)$ at order $g^2$.
First, let us develop the two point correlation function $\langle \vec{n}_{\nu,i}^{c}(\vec{0})\vec{n}_{\nu',j}^{d}(\vec{r})\rangle$ to order $g^3$ : it is given by the three contributions (a), (b), and (c) (see fig 2) : \begin{equation} \label{correl-real} \langle \vec{n}_{\nu,l}^{c}(\vec{0})\vec{n}_{\nu',m}^{d}(\vec{r})\rangle_{S[n]} = g^2 {\cal A}_1 (\vec{0},\vec{r})+ g^3 {\cal A}_2 (\vec{0},\vec{r}) \end{equation} where the first term corresponds to the only neutral 2-charges configuration : \begin{equation} {\cal A}_1 (\vec{0},\vec{r}) = \sum_{\nu}\sum_{\alpha\neq\beta}^n \delta^c_{\alpha,\beta} \delta^d_{\beta,\alpha}(K_{\nu})_l(K_{\nu})_m e^{-S_a} \end{equation} and configurations (b) and (c) contribute to the factor ${\cal A}_2$ : \begin{eqnarray*} {\cal A}_2 (\vec{0},\vec{r}) & = & 2 \int_{\vec{R}} \sum_{\nu\neq \nu'}\sum_{\alpha\neq\beta}^n \delta^c_{\alpha,\beta} \delta^d_{\alpha,\beta} (K_{\nu})_l(K_{\nu'})_m e^{-S_b} \\ & + & 2 \int_{\vec{R}} \sum_{\nu}\sum_{\alpha\neq\beta\neq \gamma }^n \delta^c_{\alpha,\beta} (\delta^d_{\beta,\gamma}+\delta^d_{\gamma,\alpha}) (K_{\nu})_l(K_{\nu})_m e^{-S_c} \end{eqnarray*} In these expressions, $S_a,S_b,S_c$ are the respective actions of the three configurations : $ S_a = 2T |\vec{K}|^2 \left( {\kappa}_1 \ln (r / a) - {\kappa}_2 (\hat{K}.\hat{r})^2 +\kappa_2/2 \right) $. The two others can be easily expressed in terms of $S_a$. We can put this back in the expression (\ref{perturb-elastic}) of $c_{11,R}^{-1}$. The angular integral of the $g^2$ term can be performed, using the parameter $\alpha = {\kappa}_2 |\vec{K}|^2 T$ : \begin{eqnarray} \label{def-alpha} B_{\stackrel{{\scriptstyle 11}}{\scriptstyle 66}}(\alpha) & = &\sum_{\nu} \int d\hat{r} \left( \hat{q} . \hat{r} \right)^2 \left( \hat{q}_{\stackrel{{\scriptstyle \parallel}}{\scriptstyle \perp}} . 
K_{\nu} \right)^2 e^{2 \alpha (\hat{K}.\hat{r})^2 -\alpha} \\ & = & K^2 {3 \pi \over 4} \left(\pm I_1(\alpha) + 2 I_0(\alpha) \right) \end{eqnarray} where $I_0$ and $I_1$ are the modified Bessel functions. The function $B_{11}$ appears in the expression of $c_{11,R}^{-1}$ while $c_{66,R}^{-1}$ involves $B_{66} $. Using the following results on the replicated charges \begin{eqnarray*} &&\sum_{\alpha,\beta} \delta^a_{\alpha,\beta}\delta^b_{\beta,\alpha}= -2\left(n \delta^{ab}-1\right)\\ &&\sum_{\alpha\neq\beta\neq \gamma}\delta^a_{\alpha,\beta}\delta^b_{\beta,\gamma}= (2-n)\left(n \delta^{ab}-1\right) \end{eqnarray*} and the definition $\tilde{g}=g a^2$, we can express $c_{11,R}^{-1}$ as \begin{eqnarray} \label{develop} && \left( c_{11,R}^{-1} \right)^{ab} = \left( c_{11}^{-1} \right)^{ab}(l) \\ \nonumber &-& { T \over c_{11}^2} \tilde{g}^2(l) (n\delta^{ab}-1)B_{11}(\alpha) \int_{a}^{\infty} {dr \over a} \left( {r \over a} \right)^{3-2 |\vec{K}|^2 K_1 T} \\ \nonumber \nonumber & & + {T \over c_{11}^2} \tilde{g}^3(l)(n\delta^{ab}-1) \int_{|\vec{r}|\geq a} {d^2 \vec{r} \over a^2 } K^2 \Lambda(\vec{r}) \end{eqnarray} The integrand in the last term is given by : \begin{eqnarray*} \Lambda(\vec{r}) = \int_{|\vec{R}|\geq a} {d^2 \vec{R} \over a^2 } {3 \over 8} && \left({r\over a}\right)^2 \left( \cos 2(\hat{K},\hat{r}) + 2 \right) \\ \nonumber & & \times \left( (2-n) e^{-S_c}- e^{-S_b}\right) \end{eqnarray*} We can then apply the coarse-graining procedure to this self-consistent equation. We thus rescale the hard-core cut-off from $a$ to $\tilde{a}=ae^{dl} \approx a + adl$. 
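Two ingredients of the expansion (\ref{develop}) can be checked numerically: the replica-charge sums quoted above, and the elementary angular integral $\int_0^{2\pi} d\theta \, \cos^2\theta \, e^{2\alpha\cos^2\theta-\alpha} = \pi\left( I_0(\alpha)+I_1(\alpha)\right)$ which generates the Bessel functions appearing in $B_{11,66}(\alpha)$. A self-contained sketch (the values of $n$ and $\alpha$ are arbitrary illustrative choices):

```python
from itertools import permutations
from math import exp, factorial, pi, cos, isclose

# (i) Replica-charge sums for minimal charges delta^a_{alpha,beta},
# checked by brute force at an arbitrary finite replica number n.
n = 5

def delta(a, alpha, beta):
    # Component a of the minimal charge delta^a_{alpha,beta}.
    return (alpha == a) - (beta == a)

for a in range(n):
    for b in range(n):
        s2 = sum(delta(a, al, be) * delta(b, be, al)
                 for al, be in permutations(range(n), 2))
        assert s2 == -2 * (n * (a == b) - 1)
        s3 = sum(delta(a, al, be) * delta(b, be, ga)
                 for al, be, ga in permutations(range(n), 3))
        assert s3 == (2 - n) * (n * (a == b) - 1)

# (ii) The elementary angular integral behind B_{11,66}: using
# 2 cos^2(t) - 1 = cos(2t), one has
#   int_0^{2pi} cos^2(t) exp(2*alpha*cos^2(t) - alpha) dt
#     = pi * (I0(alpha) + I1(alpha)).
def besseli(k, x, terms=40):
    # Series for the modified Bessel function I_k of the first kind.
    return sum((x / 2) ** (2 * m + k) / (factorial(m) * factorial(m + k))
               for m in range(terms))

alpha = 1.3   # arbitrary test value
N = 400       # midpoint rule, spectrally accurate for periodic integrands
quad = sum(cos(t) ** 2 * exp(alpha * cos(2 * t))
           for t in ((j + 0.5) * 2 * pi / N for j in range(N))) * 2 * pi / N
assert isclose(quad, pi * (besseli(0, alpha) + besseli(1, alpha)), rel_tol=1e-9)
```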
The first integral splits into two terms $(\eta =|\vec{K}|^2 {\kappa}_{1} T )$ : \begin{eqnarray*} \int_{a}^{\infty}{dr \over a} \left( {r\over a}\right)^{3-2\eta} & \longrightarrow & \int_{a}^{ae^{dl}} \frac{dr}{a} \left( {r\over a}\right)^{3-2\eta} + \int_{ae^{dl}}^{\infty} \frac{dr}{a} \left( {r\over a}\right)^{3-2\eta} \\ & \approx & dl + e^{l(4-2\eta)} \int_{\tilde{a}}^{\infty} \frac{dr}{\tilde{a}} \left( {r\over \tilde{a}}\right)^{3-2\eta} \end{eqnarray*} Hence the first term contributes to $c_{11}^{-1}(l)$ while the second renormalises $\tilde{g}(l)$ : \begin{eqnarray} \label{RG-eqs} d\ [c_{11}^{-1}]^{ab} (l) & = & - \tilde{g}^{2} (n \delta^{ab}-1) T B_{11}(\alpha)c_{11}^{-2}~dl\\ d \ \tilde{g}^2(l) & = & (4- 2 |\vec{K}|^2 {\kappa}_{1} T) \tilde{g}^2 ~dl \end{eqnarray} For the purpose of this paper, we are only interested in the contribution of the last term of eq. (\ref{develop}) to $dg^2$. It corresponds to configurations (b) and (c) of fig.~2, when $a \leq |\vec{R}|\leq a(1+dl)$ or $a \leq |\vec{R}-\vec{r}|\leq a(1+dl)$. The renormalisability of the model, which one can see directly in our procedure, ensures that this correction can be written as a contribution to $\tilde{g}^2(l)$ of order $\tilde{g}^3$.
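The splitting of the radial integral can be checked in closed form: for $\eta>2$ the whole integral equals $1/(2\eta-4)$, the inner shell contributes $dl$ to first order, and the remainder is exactly $e^{(4-2\eta)dl}$ times the same integral with the rescaled cutoff. A brief sketch (the values of $\eta$ and $dl$ are arbitrary):

```python
from math import exp, isclose

# eta = |K|^2 kappa_1 T; any eta > 2 makes the radial integral converge.
eta, dl = 3.2, 1e-4   # arbitrary illustrative values

# Pieces of int_1^inf x^(3-2*eta) dx after splitting the lower cutoff
# at e^dl (lengths measured in units of the lattice spacing a).
full = 1.0 / (2 * eta - 4)                              # whole integral
shell = (exp((4 - 2 * eta) * dl) - 1) / (4 - 2 * eta)   # 1 <= x <= e^dl
rest = full - shell                                     # x >= e^dl

# The thin shell contributes dl to first order in dl ...
assert isclose(shell, dl, rel_tol=1e-3)
# ... and the remainder is exactly e^{(4-2*eta) dl} times the same
# integral over the rescaled cutoff, as used in the text.
assert isclose(rest, exp((4 - 2 * eta) * dl) * full, rel_tol=1e-12)
```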
First of all we check these renormalisation properties of the hamiltonian: $$ \lim_{|\vec{R}| \rightarrow a} e^{-S_b(\vec{r},\vec{R}) }= e^{- S_a (\vec{r})} \ e^{-2 \alpha (\hat{K}_1.\hat{R})(\hat{K}_2.\hat{R}) -\frac{\alpha}{2}}$$ $$\lim_{|\vec{R}| \rightarrow a} e^{-S_c(\vec{r},\vec{R})}= e^{-S_a(\vec{r})} \ e^{\alpha (\hat{K}_1.\hat{K})^2-\frac{\alpha}{2}}$$ The corresponding correction is \begin{equation} \delta \Lambda(\vec{r}) = {3 \over 4} \left( {r \over a}\right)^2 \left( \cos 2(\hat{K},\hat{r}) + 2 \right) e^{-S_a} B_g(\alpha) \end{equation} where the function $B_g(\alpha)$ is given by \begin{eqnarray} B_g(\alpha) & = & \int d\hat{R} e^{-\alpha/2} [ (2-n) e^{\alpha \cos^2 (\hat{K},\hat{R})} \\ \nonumber & & ~~~~~~~~~~~~~~~- e^{-2 \alpha \cos(\hat{K},\hat{R}) \cos(\hat{K}',\hat{R}) } ] \\ \nonumber & = & 2 \pi \left[ (2-n) I_0(\alpha /2) - I_0(\alpha) \right] \end{eqnarray} Integration over $\vec{r}$ leads to $$ \int {d^2 \vec{r} \over a^2} K^2 \delta \Lambda(\vec{r}) = 2 B_g(\alpha)B_{11}(\alpha) \int_{a}^{\infty} {dr \over a} \left( {r \over a} \right)^{3-2K^2 {\kappa}_{1} T} $$ This is exactly the expected form, which can be written as a correction to $g^2$ : \begin{equation} d(g^2)= - dl~ \tilde{g}^3 2 B_g(\alpha) \end{equation} where $$ B_g(\alpha) =- 2 \pi \left ( I_0(\alpha) + ( n-2) I_0(\alpha/2) \right) $$ The final equations, obtained after the $n\rightarrow 0$ limit, are \begin{mathletters} \label{rg1} \begin{eqnarray} \label{RG-final} {d\ c_{11} \over d\ l} & = & {d\ c_{66} \over d\ l} =0 \\ {d\ \Delta_{11}(l) \over d \ l} & = & \tilde{g}^{2} T B_{11}(\alpha) \\ {d\ \Delta_{66}(l) \over d \ l} & = & \tilde{g}^{2} T B_{66}(\alpha) \\ {d \ \tilde{g} \over d\ l} & = & (2- K^2 {\kappa}_{1} T ) \tilde{g} - \tilde{g}^2 B_g(\alpha) \end{eqnarray} \end{mathletters} with $\tilde{g}=g a^2$. From now on $B_g(\alpha)$ denotes its value at $n=0$. It is useful to compare these equations with the one for the $n=1$ component model of Cardy and Ostlund.
To obtain them we set $c_{11}=c_{66}=c$ (i.e $\alpha=0$), $\Delta_{11}=\Delta_{66}=\Delta$ and consider the two reciprocal lattice vectors (instead of three) of a square lattice in the interaction term with $K^2=1$. This gives two decoupled $N=1$ component models with RG equations obtained by setting $B_{11}= B_{66} \to \pi$ and dropping the term not proportional to $n-2$ in the $g^2$ correction. \begin{mathletters} \label{RG-co} \begin{eqnarray} {d c \over dl} & = & 0 \\ {d \Delta(l) \over dl} & = & \pi \tilde{g}^2 T \\ {d \tilde{g} \over d\ l} & = & (2 - \frac{T}{2 \pi c} ) \tilde{g} - 4 \pi \tilde{g}^2 \end{eqnarray} \end{mathletters} The transition temperature is thus $T^{CO}_g = 4 \pi c$. \subsection{Analysis of the RG flow and static correlation functions} \label{section3b} We now study the RG flow and compute the correlations. In the case of the $N=1$ model this was first done in \cite{cardy-desordre-rg,goldschmidt-houghton} and later reconsidered in \cite{toner-log-2,denis-bernard}. \subsubsection{triangular lattice} \label{section3b1} The flow defined by the above RG equations (\ref{rg1}) has similarities with the one obtained for the random field XY model\cite{cardy-desordre-rg}. There is a transition at $T_g = 2/( K^2 \kappa_1) = 8 \pi K^{-2} c_{11} c_{66}/ (c_{11} + c_{66} )$ and one has $K^2 = 16 \pi^2/3 a_0^2$ for the triangular lattice. In the high temperature phase $T> T_g$ the disorder renormalizes to zero. At low temperature $T < T_g$ the coupling constant $\tilde{g}(l)$ converges towards a perturbative fixed point $\tilde{g}^*$. Introducing the reduced temperature $\tau = (T_g - T)/T_g $, one has: $\tilde{g}^* = 2 \tau/B_g(\alpha)$ with $ B_g(\alpha) = 2 \pi ( 2 I_0(\alpha/2) - I_0(\alpha) )$. It depends continuously on the value of $\alpha$, defined in (\ref{def-alpha}) and which we can evaluate at $T_g$ since we work near $T_g$.
Using the above value for $T_g$ one finds that $\alpha(T_g) = \alpha_g = 2 (c_{11}-c_{66})/(c_{11}+c_{66}) = 2 (1+\sigma)/(3-\sigma)$, where $\sigma$ is the Poisson ratio defined as usual as $\sigma = 1 - 2 c_{66}/c_{11} = \lambda/(\lambda + 2 \mu)$. Thus the fixed point depends continuously on the Poisson ratio. Since the Poisson ratio is not renormalized, one now has a {\it plane} of fixed points, parametrized by the temperature $\tau$ and the Poisson ratio $\sigma$, rather than a line as in the case of the $N=1$ Cardy-Ostlund model. One must check however that the perturbative fixed point does indeed exist, i.e., that $ B_g(\alpha) >0$ on the allowed domain of variation of $\alpha$. This is indeed the case, since one finds that $ B_g(\alpha) >0$ as long as $\alpha \leq \alpha^*\approx 2.218$, a condition fulfilled since $c_{66} >0$ implies $\alpha <2$. At this fixed point both $\Delta_{11}(l)$ and $\Delta_{66}(l)$ grow to non-perturbative values. This is not a problem since, due to statistical symmetry \cite{statistical-inv}, they do not feed back into the RG equations. It does however produce a change in the correlation functions (see, e.g., Appendix \ref{appendixe}). We can integrate the above RG equations and the solution reads: \begin{eqnarray} \label{solution-RG} \tilde{g}(l) &=& \frac{ g_0 e^{ 2 \tau l} }{ 1 + \chi (e^{2 \tau l} -1) } \\ \Delta_{11,66} (l)&=& \Delta(0) + \frac{D_{11,66}}{2 \tau} \Biggl( \ln ( 1 + \chi (e^{2 \tau l} -1) ) \\ \nonumber & & -\chi( 1 -\chi ) \frac{ e^{2 \tau l} -1 }{ 1 + \chi(e^{2 \tau l} -1) } \Biggr) \end{eqnarray} where we defined $D_{11,66}= 4\tau^2 T B_{11,66}(\alpha) / B_g^2(\alpha)$ and $\chi = g_0 B_g(\alpha)/(2 \tau)$. Note that $\chi =g_0/\tilde{g}^*$ for $\tau>0$.
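Both the positivity of $B_g(\alpha)$ on the physical range and the closed-form solution (\ref{solution-RG}) are easy to check numerically. The sketch below (illustrative only, with arbitrary parameter values, not part of the derivation) verifies $B_g(\alpha)>0$ for $\alpha<2$, locates the zero near $\alpha^*\approx 2.218$, and compares $\tilde{g}(l)$ with a direct integration of the flow:

```python
# Illustrative check (not part of the derivation): positivity of
# B_g(alpha) = 2*pi*(2*I0(alpha/2) - I0(alpha)) on the allowed range
# alpha < 2, and the closed-form flow solution vs direct integration.
import numpy as np
from scipy.special import i0
from scipy.optimize import brentq
from scipy.integrate import solve_ivp

def B_g(alpha):
    return 2 * np.pi * (2 * i0(alpha / 2) - i0(alpha))

# B_g > 0 on 0 <= alpha < 2; its only zero lies near alpha* ~ 2.218.
assert all(B_g(a) > 0 for a in np.linspace(0, 2, 201))
alpha_star = brentq(B_g, 2.0, 2.5)

# Closed-form solution of d g/dl = 2*tau*g - B_g*g^2 vs numerics
# (tau and g0 are arbitrary illustrative values).
tau, g0, Bg = 0.1, 0.01, B_g(1.0)
chi = g0 * Bg / (2 * tau)

def g_exact(l):
    return g0 * np.exp(2 * tau * l) / (1 + chi * (np.exp(2 * tau * l) - 1))

sol = solve_ivp(lambda l, g: 2 * tau * g - Bg * g**2, (0, 40), [g0],
                dense_output=True, rtol=1e-10, atol=1e-12)
ls = np.linspace(0, 40, 9)
assert np.allclose(sol.sol(ls)[0], g_exact(ls), rtol=1e-6)
```

At large $l$ the solution saturates at the fixed point $\tilde{g}^* = 2\tau/B_g$, as stated in the text.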
Thus at large $l$ one has in the glass phase ($\tau > 0$): \begin{eqnarray} && \tilde{g}(l) = \tilde{g}^* + {\cal O}(e^{-2 \tau l}) \\ \label{delta-glass} && \Delta_{11,66}(l) = D_{11,66} \ln \left( \frac{a e^l}{\xi} \right) + \Delta_{11,66}(0) + {\cal O}(e^{-2 \tau l}) \end{eqnarray} where we have introduced the length: \begin{eqnarray} \xi = a \exp \left( \frac{1 - \chi - \ln \chi}{2 \tau} \right) \end{eqnarray} For weak disorder $g_0 \ll 2 \tau/B_g$ it corresponds to the length $\xi \sim R_a \sim (2 \tau/g_0)^{1/(2 \tau)} \gg a$ at which translational order decays asymptotically (also equal to the Larkin length $R_c$ for this lowest-harmonic model \cite{carpentier-bglass-layered}, valid near $T_g$). Note that one can define another length scale $\xi_c$, the crossover length between a $\log(r)$ behaviour of the mean squared displacements (see below) for $r < \xi_c$ and a large-$r$ regime which corresponds to a $\log^2(r)$ asymptotic correlation. This length diverges exponentially, as $\xi_c \sim \exp( {\rm cst} /\tau^2)$, at the transition $\tau=0$. We now compute the correlation functions \cite{toner-log-2}. We will be interested in: \begin{eqnarray} B_{ij}(r) & = & \overline{ \langle ( u_i(\vec{r})-u_i(\vec{0}) ) ( u_j(\vec{r})-u_j(\vec{0}) ) \rangle } \\ \nonumber & = & 2 \int {d^2 \vec{q} \over (2 \pi )^2 } (1-e^{i\vec{q}.\vec{r}}) \overline{ \langle u_i(\vec{q})u_j(-\vec{q}) \rangle } \\ \nonumber & = & B_L(r) \tilde{P}_{ij}^L + B_T(r) \tilde{P}_{ij}^T \end{eqnarray} where the projectors are defined by $\tilde{P}_{ij}^L = \hat{r}_i \hat{r}_j$ and $\tilde{P}_{ij}^T = (\hat{r}_{\perp})_i (\hat{r}_{\perp})_j$.
The longitudinal correlator is thus given by \begin{equation} B_L(r) = 2 \int { d^2 \vec{q} \over (2 \pi)^2 } \left( 1- e^{i \vec{q}.\vec{r}}\right) \tilde{P}_{ij}^L \overline{ \langle u_i(\vec{q})u_j(-\vec{q}) \rangle } \end{equation} The transverse correlation function follows by replacing $\tilde{P}^L$ by $\tilde{P}^T$. We define \begin{equation} \Gamma_{ij}(\vec{q},\Delta_{11,66}(0),\tilde{g}(0))= \overline{ \langle \vec{u}_i(\vec{q})\vec{u}_j(-\vec{q}) \rangle } \end{equation} Using the usual dimensional scaling relations, we can write $$ \Gamma_{ij}(\vec{q},\Delta_{11,66}(0),\tilde{g}(0)) = e^{2l}~\Gamma_{ij}(e^l \vec{q},\Delta_{11,66}(l),\tilde{g}(l))$$ We then choose the scaling parameter $l$ such that\cite{toner-log-2} $qe^l = 1/a$. The large-$r$ behaviour of the correlation function thus corresponds to the limit $e^l \rightarrow \infty$, in which the RG flow approaches its fixed point: $\tilde{g} \rightarrow \tilde{g}^{\infty}\approx \tilde{g}^*$. Assuming that $\Gamma_{ij}(1/a,\Delta_{11,66}(l),\tilde{g}^*)$ can be evaluated perturbatively in $\tilde{g}^*$ near $T_g$, we can then evaluate the correlation function: \begin{eqnarray} &&\Gamma_{ij}(\vec{q},\Delta_{11,66}(0),\tilde{g}(0)) = \\ \nonumber && {T \over q^2} \left( (c_{11}^{-1} +{\Delta_{11}(l) \over c_{11}^{2}} ) P^L_{ij} + (c_{66}^{-1} +{\Delta_{66}(l) \over c_{66}^2}) P^T_{ij} \right) \end{eqnarray} The correlation function then takes the following form near $T_g$: \begin{eqnarray} \overline{ \langle \vec{u}_i(\vec{q})\vec{u}_j(-\vec{q}) \rangle } \stackrel{q \rightarrow 0}{\simeq} & & -{T_g \over q^2} \ln ( q \xi) \left( { D_{11} \over c_{11}^{2}} P_{ij}^L + { D_{66} \over c_{66}^{2}} P_{ij}^T \right) \end{eqnarray} The angular integrals can be easily performed using the formula: \begin{equation} \label{angular-int-0} {1 \over 2 \pi} \int d\hat{q} ( 1-e^{i \vec{q}.\vec{r}}) P_{ij}^{\alpha} \tilde{P}_{ij}^{\beta} = {1 \over 2} \left( 1-J_0(qr) + \epsilon_{\alpha \beta} J_2(qr)\right)
\end{equation} with $\alpha, \beta = L,T$ and $\epsilon_{LL}=\epsilon_{TT}=1$ and $\epsilon_{LT}=\epsilon_{TL}=-1$. This gives: \begin{eqnarray} \label{angular-int} B_{L,T}(r) & = & {1 \over 2 \pi} (\tilde{B}_{11} + \tilde{B}_{66}) \int \frac{dq}{q} \ln(1/(q \xi)) (1-J_0(qr)) \\ \nonumber & &\pm {1 \over 2 \pi} (\tilde{B}_{11} - \tilde{B}_{66}) \int \frac{dq}{q} \ln(1/(q \xi)) J_2(qr) \end{eqnarray} where $\tilde{B}_{11,66}= T_g D_{11,66}/c_{11,66}^2$. The integrals can be evaluated as: \begin{eqnarray*} & & {1 \over 2 \pi} \int_0^{{2 \pi\over a}} dq {\ln (q \xi) \over q} \left( 1-J_0(qr)\right) \\ & &= -{1 \over 4 \pi} \ln^2\left( {r \over \xi} \right) + {\cal O}\left( \ln \left( {r \over a} \right) \right) \\ & & {1 \over 2 \pi} \int_0^{{2 \pi\over a}} dq {\ln (q \xi) \over q} J_2(qr) \\ && = -\frac{1}{4\pi} \left( \ln \left( {r \over \xi}\right) \right) + {\cal O}\left( 1 \right) \end{eqnarray*} The difference between the expressions of $B_L$ and $B_T$ is the sign of the second-order Bessel function $J_2(qr)$. Both correlations have the same leading term in the limit of large $r$, given to lowest order in $\tau$ by: \begin{eqnarray} \label{correl-long} B_L(r) \sim B_T(r) \sim \frac{b(\alpha)}{K^2} \tau^2 \ln^2\left( {r \over \xi} \right) \end{eqnarray} with a universal coefficient (the cut-off dependence drops out) $b(\alpha) = (\tilde{B}_{11} + \tilde{B}_{66})/(4 \pi)$. Another universal quantity (in the limit of large $r$) is the difference $B_T(r)-B_L(r)$, since the leading integral, with all its cutoff-dependent corrections in $\ln (r/a)$, cancels exactly. One obtains: \begin{equation} \label{correl-trans} B_T(r)-B_L(r) \sim \frac{\tilde{b}(\alpha)}{K^2} \tau^2 \ln \left( {r \over \xi}\right) \end{equation} The coefficient of the log is a universal function of $c_{11}$ and $c_{66}$, $\tilde{b}(\alpha) = (\tilde{B}_{66} - \tilde{B}_{11})/(4 \pi)$.
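The angular formula (\ref{angular-int-0}) amounts to standard Bessel identities, $\frac{1}{2\pi}\int d\theta\, e^{ix\cos\theta}\cos(n\theta) = i^n J_n(x)$. For the $LL$ case the projector contraction is $\cos^2\theta$, and the identity $\frac{1}{2\pi}\int_0^{2\pi} d\theta\,(1-e^{ix\cos\theta})\cos^2\theta = \frac{1}{2}(1-J_0(x)+J_2(x))$ can be checked numerically; a sketch (illustrative only):

```python
# Numerical check of the LL case of the angular integral:
# (1/2pi) Int_0^{2pi} dtheta (1 - e^{i x cos(theta)}) cos^2(theta)
#     = (1/2) * (1 - J0(x) + J2(x))
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def lhs(x):
    # The imaginary part integrates to zero by symmetry, so only the
    # real part 1 - cos(x*cos(theta)) contributes.
    val, _ = quad(lambda th: (1 - np.cos(x * np.cos(th))) * np.cos(th) ** 2,
                  0.0, 2.0 * np.pi)
    return val / (2.0 * np.pi)

for x in (0.5, 1.0, 3.0, 7.0):
    assert abs(lhs(x) - 0.5 * (1.0 - jv(0, x) + jv(2, x))) < 1e-7
```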
Specifically one finds (see fig.3 for a plot of these functions): \begin{eqnarray} && b(\alpha) = 6 \frac{ 2 I_0(\alpha) (1 + \frac{\alpha^2}{4}) - \alpha I_1(\alpha) }{ (2 I_0(\alpha/2) - I_0(\alpha) )^2 } \\ && \tilde{b}(\alpha) = 6 \frac{ I_1(\alpha) (1 + \frac{\alpha^2}{4}) - 2 \alpha I_0(\alpha) }{ (2 I_0(\alpha/2) - I_0(\alpha) )^2 } \end{eqnarray} where $\alpha = \alpha_g = 2 (c_{11}-c_{66})/(c_{11}+c_{66})$. It is interesting to remark that, in addition to the two terms (\ref{correl-long},\ref{correl-trans}), $B_{L,T}(r)$ contains a non-universal term proportional to $\ln (r/a)$ (e.g., a term like $\Delta(0) \ln (r/a)$). Therefore, when comparing the RG predictions with, e.g., numerical results, one should be careful in identifying the different components of the spatial correlations. \begin{figure} \centerline{\fig{8cm}{fig-RGcoef.eps}} \caption{ \label{fig-RGcoef} \narrowtext Plot of the numerical coefficients of the correlation functions $B_{T,L}(r)$ and $B_T(r)-B_L(r)$: $b(\alpha)$ and $\tilde{b}(\alpha)$ as a function of the elastic ratio $\alpha = 2(1+\sigma)/(3-\sigma)$. We have also plotted the coefficient of the dynamical exponent, $\theta(\alpha)= (z-2)/ \tau$. $\theta_{co}$, the corresponding value for the planar random field XY model (the Cardy-Ostlund model), is shown as a reference. See the text for details.} \end{figure} As we discuss in Appendix \ref{appendixe}, the distribution of rescaled displacements $u/\ln r$ should be {\it Gaussian}. Thus we have also obtained the decay of the structure factor at large $r$: \begin{eqnarray} C_{K_0}(r) = \overline{ e^{i K_0.u(r)} e^{- i K_0.u(0)} } \sim \exp( - \frac{1}{2} b(\alpha) \tau^2 \ln^2( {r \over \xi}) ) \end{eqnarray} The corresponding result for the $N=1$ CO model structure factor is given in (\ref{result-structure}) of Appendix \ref{appendixe}. Though the $\ln^2 r$ term is isotropic, there is a subdominant anisotropy.
For instance one has for the decay in different directions: \begin{eqnarray} \frac{C_{K_0}(r \parallel K_0)}{C_{K_0}(r \perp K_0)} \sim (\frac{\xi}{r})^{\frac{\tilde{b}(\alpha) \tau^2}{2}} \end{eqnarray} which is the $d=2$ analogue of Eq. (4.32) of Ref. \cite{giamarchi-vortex-long}. {\it High temperature phase.} Finally, the high temperature phase is characterised by $\tilde{g}(l\rightarrow \infty)=0$ and a non-universal value of $\Delta_i(l)\rightarrow \Delta^{\infty}_i$. From (\ref{solution-RG}) we find \begin{equation} \Delta^{\infty}_{11,66} = \Delta_{11,66}(0) + \frac{D_{11,66}}{2\tau} \left( \ln(1-\chi) +\chi\right) \end{equation} which depends on the initial bare values of $\Delta$ but remains finite at the transition $\tau \to 0^{-}$. Since the renormalised theory is Gaussian, the correlation function is straightforwardly given in the limit of large $r$ by: \begin{eqnarray} && \overline{ \langle ( u_i(\vec{r})-u_i(\vec{0}) ) ( u_j(\vec{r})-u_j(\vec{0}) ) \rangle } \sim \\ \nonumber && {T \over \pi} \delta_{ij} \ln \left( { r \over a}\right) \left( (c_{11}^{-1} +{\Delta_{11}^{\infty} \over c_{11}^{2}} ) + (c_{66}^{-1} +{\Delta_{66}^{\infty} \over c_{66}^2}) \right) \end{eqnarray} \subsubsection{Cardy-Ostlund model} \label{section3b2} In the case of the Cardy-Ostlund $N=1$ problem, defined in its replicated version as: \begin{eqnarray} \label{cardyostlund} \frac{H}{T} & = & \sum_{ab} \int_r ( \frac{1}{2 T} (c \delta_{ab} - \Delta_{ab}) \nabla u_a \nabla u_b - g \cos(K (u_a - u_b)) ) \end{eqnarray} one finds, using (\ref{RG-co}): \begin{eqnarray} \label{result-co} B(r) = \overline{\langle (u(r) - u(0))^2 \rangle } \sim \frac{2}{K^2} \tau^2 \ln^2 (r/\xi) \end{eqnarray} where, we recall, $\tau = (T_g-T)/T_g$. As announced in the Introduction, this result differs from all the results previously published (to our knowledge). Since it is important for comparison with several existing numerical simulations, we now discuss this discrepancy in more detail.
Our result is smaller by a factor of $2$ than the original result (5.24) of Ref. \cite{goldschmidt-houghton}. It is smaller by a factor of $4$ than the result quoted in Ref. \cite{hwa-fisher-co}, and by a factor of $2$ than its later corrected value \cite{hwa-privcomm} (it is also larger by a factor of $4$ than the result obtained in \cite{denis-bernard}). As discussed above, the RG equations, which read at $T_g$ $d\Delta/dl = c_1 g^2$, $dg/dl = c_2 g^2$, are not universal, but the amplitude ratio defined in \cite{hwa-fisher-co}, $R = T_g c_1/(c c_2)^2$, is universal. We find here that $R=\pi$ (as in Ref. \cite{carpentier-bglass-layered}). This is also the value we inferred from the static and dynamic RG equations of \cite{goldschmidt-houghton,goldschmidt-dynamics-co}, and we thus agree with their RG equations. This value was incorrectly given as $R=2 \pi$ in \cite{hwa-fisher-co} but later corrected back to $R=\pi$ \cite{hwa-privcomm}. The origin of the discrepancy between (\ref{result-co}) and the result (5.24) of Ref. \cite{goldschmidt-houghton} thus lies in the calculation of the correlation function. It can probably be traced to the algebraic mistake between equations (5.23) and (5.24) of Ref. \cite{goldschmidt-houghton}. Finally, the discrepancy with \cite{denis-bernard} lies first in their RG equations for $N=1$ (we extracted their amplitude ratio as $R = \pi/2$) and second in a factor of two in the calculation of the correlations (correcting for the additional misprint between formulae (13) and (14)). As discussed in the Introduction, this improves the comparison between numerical simulations and the RS-RG. Indeed, in Ref. \cite{numRSGM3} a result smaller by a factor of $5$ than \cite{hwa-fisher-co} was found, and we find here a result smaller than \cite{hwa-fisher-co} by a factor of $4$.
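The value $R=\pi$ follows mechanically from the $N=1$ flow equations (\ref{RG-co}); a small symbolic sketch (illustrative only; note that the sign of $c_2$ is immaterial since it enters squared):

```python
# Universal amplitude ratio R = T_g*c1/(c*c2)^2 from the CO flow equations,
# with c1, c2 read off at T_g:  d Delta/dl = pi*T*g^2  and  dg/dl = -4*pi*g^2.
import sympy as sp

c = sp.symbols('c', positive=True)
Tg = 4 * sp.pi * c        # transition temperature of (RG-co)
c1 = sp.pi * Tg           # coefficient of g^2 in d Delta/dl at T_g
c2 = -4 * sp.pi           # coefficient of g^2 in dg/dl at T_g
R = sp.simplify(Tg * c1 / (c * c2) ** 2)
assert R == sp.pi         # the elastic constant c drops out
```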
Concerning the recently quoted results \cite{rieger-zerot-co} of the numerical determination of the ground state, we find that it is about twice the naive continuation of our amplitude to $T=0$, i.e., setting $\tau=1$ in the above result (\ref{result-co}). \section{Dynamics} \label{section4} In this section we study the dynamics of the triangular lattice on a disordered substrate. We study directly the dynamics of the original random sine-Gordon model (\ref{def-model}). The method we use is to perform the renormalisation on the dynamical effective action associated to (\ref{def-model}) and to compute dynamical quantities. We find that this method is more convenient, and leads to more tractable calculations, than the method \cite{goldschmidt-dynamics-co,shapir-dynamics-co} originally used by Goldschmidt and Schaub to study the dynamics of the Cardy-Ostlund model. In order to obtain the dynamical exponent $z$ one needs, in addition to purely dynamical quantities, some information on the statics. It would be potentially erroneous to attempt to extract the necessary information on the statics from the previous Section, since there we used a different method with a different regularisation scheme. Instead, the consistent approach is to use the same method and the same regularisation scheme in the statics and the dynamics. We will thus carefully reobtain the results for the statics within the same dynamical RG. This is possible, as we will demonstrate, because we are studying the {\it equilibrium} dynamics. Using the fluctuation dissipation theorem (FDT), which then holds exactly by assumption, one can reobtain the statics. We will also show (see Appendix \ref{appendixb}) that the same effective action method, applied to the replica symmetric theory, gives the same results for the statics as the FDT equilibrium dynamics.
We also demonstrate clearly how the cutoff dependence enters the RG equations and how, at $T=T_g$, the physical amplitudes become universal, as detailed in Appendix \ref{appendixa}. \subsection{perturbation theory on the dynamical action} \label{section4a} The dynamics of the model (\ref{def-model}) can be described by a Langevin-type equation: \begin{eqnarray} \eta \frac{\partial}{\partial t} u^{\alpha}(x,t) = - \frac{\delta H}{\delta u^{\alpha}(x,t)} + \zeta^{\alpha}(x,t) \end{eqnarray} where $\zeta$ is the thermal noise, with $\langle \zeta^{\alpha}(x,t) \zeta^{\beta}(x',t') \rangle = 2 \eta T \delta_{\alpha \beta} \delta(x-x') \delta(t-t')$, and $\eta$ is the friction coefficient. The Hamiltonian $H$ is the sum $H_0+H_{dis}$ defined in (\ref{def-model}). The equation of motion reads, specifically: \begin{eqnarray} && \eta \frac{\partial}{\partial t} u^{\alpha} = c_{66} \nabla^2 u^{\alpha} + (c_{11} - c_{66}) \partial_{\alpha} \partial_{\beta} u^{\beta} \\ \nonumber & & ~~~~~~~~~~~~+ f_1^{\alpha}(x) + f_2^{\alpha}(x) + \zeta^{\alpha}(x,t) \\ && f_1^{\alpha}(x) = - 2 T \sqrt{g} \sum_{\nu} \vec{K}^{\alpha}_{\nu} \sin \left( \vec{K}_{\nu}. \vec{u}(x)+\phi_{\nu}(x) \right) \\ && f_2^{\alpha}(x) = \frac{T}{2} ( \partial_{\beta} \sigma_{\alpha \beta} + \partial_{\beta} \sigma_{\beta \alpha } ) \end{eqnarray} A convenient method to study the dynamics is to use the de Dominicis-Janssen (or Martin-Siggia-Rose) generating functional. Using the Ito prescription, it can be readily averaged over disorder.
The disorder-averaged functional reads: \begin{eqnarray}\label{action2} && Z[h,\hat{h}] = \int Du D\hat{u} e^{- S[u,\hat{u}] + h u + i \hat{h} \hat{u}} \\ && S[u,\hat{u}] = S_0[u,\hat{u}] + S_2[u,\hat{u}] + S_{int}[u,\hat{u}] \nonumber \\ && S_0[u,\hat{u}] = \int_{q,t} i \hat{u}^{\alpha}_{- q t} ( \eta \partial_t + c_{11} q^2 P_{\alpha \beta}^L + c_{66} q^2 P_{\alpha \beta}^T ) u^{\beta}_{q,t} \\ \nonumber && ~~~~~~~~~~~~~~~~~~~~~~~~ - \eta T \int_{r,t} (i \hat{u}^{\alpha}_{rt}) (i \hat{u}^{\alpha}_{rt}) \nonumber \\ && S_2[u,\hat{u}] = - \frac{T}{2} \int_{q,t,t'} (i \hat{u}^{\alpha}_{q,t}) (i \hat{u}^{\beta}_{-q,t'}) q^2 \\ \nonumber && ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\times (\Delta_{66} P_{\alpha \beta}^T(q) + \Delta_{11} P_{\alpha \beta}^L(q) ) \nonumber \\ && S_{int}[u,\hat{u}] = - \frac{1}{2} \int_{r t t'} (i \hat{u}^{\alpha}_{rt}) (i \hat{u}^{\beta}_{rt'}) \Delta^{\alpha \beta}(u_{rt} - u_{rt'}) \end{eqnarray} where the correlator of the pinning force is $\overline{ f_1^{\alpha}(r,u_{rt}) f_1^{\beta}(r',u_{r't'}) } = \delta^{(2)}(r-r') \Delta^{\alpha \beta}(u_{rt} - u_{r't'})$. From this functional one can obtain the disorder-averaged correlation function $C^{\alpha,\beta}_{rt,r't'} = \overline{ \langle u^{\alpha}_{rt} u^{\beta}_{r't'} \rangle }$ and response function $R^{\alpha,\beta}_{rt,r't'} = \delta \overline{ \langle u^{\alpha}_{rt} \rangle }/ \delta h^{\beta}_{r't'} $, which measures the response to a perturbation applied at a previous time. They are obtained from the above functional as $C^{\alpha \beta}_{rt,r't'} = \langle u^{\alpha}_{rt} u^{\beta}_{r't'} \rangle_S $ and $R^{\alpha \beta}_{rt,r't'} = \langle u^{\alpha}_{rt} i \hat{u}^{\beta}_{r't'} \rangle_S$, respectively. Causality imposes that $R_{rt,r't'}=0$ for $t'>t$ and the Ito prescription imposes that $R_{rt,r't}=0$. All correlations $\hat{u} \hat{u}$ vanish.
We will assume here time and space translational invariance and denote $C_{rt,r't'} =C_{r-r',t-t'}$ and $R_{rt,r't'} =R_{r-r',t-t'}$, as well as their Fourier transforms, by the same symbols when no confusion is possible. Note that in this problem $C_{r,-t}=C_{r,t}$. In the absence of disorder the action is simply quadratic, $S=S_0$, and the response function is thus (for $t>0$): \begin{equation} R^{\alpha \beta}_{q,t} = P^L_{\alpha \beta}(q) \mu e^{- c_{11} q^2 \mu t} \theta(t) + P^T_{\alpha \beta}(q) \mu e^{- c_{66} q^2 \mu t} \theta(t) \end{equation} where we have introduced the mobility $\mu = 1/\eta$. The correlation function is: \begin{equation} C^{\alpha \beta}_{q,t} = P^L_{\alpha \beta}(q) \frac{T}{c_{11} q^2} e^{- c_{11} q^2 \mu t} + P^T_{\alpha \beta}(q) \frac{T}{c_{66} q^2} e^{- c_{66} q^2 \mu t} \end{equation} They satisfy the fluctuation dissipation theorem (FDT), i.e.: \begin{equation} \label{fdt} R^{\alpha \beta}_{r,t} = - \theta(t) \frac{1}{T} \partial_t C^{\alpha \beta}_{r,t} \end{equation} In the presence of disorder one studies perturbation theory by expanding in the interaction term $S_{int}$, using the quadratic part $S_0 + S_2$ as the bare action. The disorder has a quadratic part $S_2$ which is purely static and is immaterial in the perturbation theory. Indeed, the response function of $S_0 + S_2$ is identical to the one of $S_0$ and the correlation function is changed as $C^{\alpha \beta}_{q,t} \to C^{\alpha \beta}_{q,t} + {C_{stat}}^{\alpha \beta}_{q,t}$ with: \begin{equation} {C_{stat}}^{\alpha \beta}_{q,t} = T \frac{\Delta_{66}}{c_{66}^2 q^2} P_{\alpha \beta}^T(q) + T \frac{\Delta_{11}}{c_{11}^2 q^2} P_{\alpha \beta}^L(q) \end{equation} which is purely static and does not appear in any diagram of perturbation theory. Thus for any practical purpose one can consider that $S_0$ is used as the bare action.
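As a sanity check, the FDT relation (\ref{fdt}) is straightforward to verify symbolically for the free response and correlation functions above; a sketch for a single polarization, with $c$ standing for either $c_{11}$ or $c_{66}$ and $t>0$ (illustrative only):

```python
# Symbolic check of the FDT, R = -(1/T) d/dt C for t > 0, in the free
# theory for one polarization (c stands for c11 or c66).
import sympy as sp

T, c, q, mu, t = sp.symbols('T c q mu t', positive=True)
R = mu * sp.exp(-c * q**2 * mu * t)                 # response function (t > 0)
C = (T / (c * q**2)) * sp.exp(-c * q**2 * mu * t)   # correlation function

# FDT: R + (1/T) dC/dt must vanish identically.
assert sp.simplify(R + sp.diff(C, t) / T) == 0
```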
We will perform the calculations with the general form for the correlator of the disorder: \begin{equation} \Delta^{\alpha \beta}(u) = \sum_K \Delta^{\alpha \beta}_K e^{i K.u} \end{equation} For the triangular lattice it is convenient to let the sum run over the six reciprocal lattice vectors $K=1,2,3,-1,-2,-3$. We will specialize at the end to the case of interest here, i.e., a triangular lattice with the model introduced in Section II: \begin{equation} \Delta^{\alpha \beta}(u) = 2 g T^2 \sum_{K=1,2,3} K_{\alpha} K_{\beta} \cos(K.u) \end{equation} where we denote by $1,2,3$ the three reciprocal lattice vectors $K_{\nu}$ with $\nu = 1,2,3$ of (\ref{def-model}). In fact, the condition $\Delta^{\alpha \beta}_K = \Delta_K K_{\alpha} K_{\beta}$ is a potential condition which is preserved under RG and describes the dynamics of the model studied in the previous Section. With no loss of generality $\Delta(u)$ is an even function, and thus $\Delta_K = \Delta_{-K}$ and $\Delta_K$ is real. The model of Section II is thus obtained for $\Delta_K = g T^2$, and more generally the relation with the statics studied in Appendix C is $\Delta_K =2 T^2 g_K$. To establish the dynamical RG equations we will compute the effective action in perturbation of $S_{int}$, using $S_0 + S_2$ as the bare theory. The calculation to second order is performed in the Appendix and we will only quote the results. Here we compute the effective action $\Gamma[u,\hat{u}]$ to lowest order in the interacting part $S_{int}$: \begin{eqnarray} \Gamma[u,\hat{u}] & = & S_0[u,\hat{u}] + S_2[u,\hat{u}] \\ \nonumber & & + \langle S_{int}[u+\delta u, \hat{u} + \delta \hat{u}] \rangle_{\delta u, \delta \hat{u}} + O(S_{int}^2) \end{eqnarray} where the averages over $\delta u$, $\delta \hat{u}$ are performed with the action $S_0 + S_2$.
The calculation gives: \begin{eqnarray} \nonumber \Gamma[u,\hat{u}] & = & S_0 + S_2 - \int_{r t t'} R^{\gamma \beta}_{rt,rt'} (i \hat{u}^{\alpha}_{rt}) \langle \Delta^{\alpha \beta ; \gamma}(u_{rt} - u_{rt'}) \rangle \\ & & - \frac{1}{2} \int_{r t t'} (i \hat{u}^{\alpha}_{rt}) (i \hat{u}^{\beta}_{rt'}) \langle \Delta^{\alpha \beta}(u_{rt} - u_{rt'}) \rangle \end{eqnarray} Here the symbol $\langle F[u]\rangle $ means $\langle F[u+ \delta u]\rangle _{\delta u}$ and we have used the symmetry $\Delta^{\alpha \beta} = \Delta^{\beta \alpha}$. We denote $\Delta^{\alpha \beta ; \gamma}(u) = \partial_\gamma \Delta^{\alpha \beta}(u)$ etc.. This can be rewritten as: \begin{eqnarray} \nonumber \Gamma[u,\hat{u}] & = & S_0 + S_2 + \int_{r t t'} (i \hat{u}^{\alpha}_{rt}) \Sigma(u_{rt} - u_{rt'}, t-t') u^{\beta}_{rt'} \\ & &- \frac{1}{2} \int_{r t t'} (i \hat{u}^{\alpha}_{rt}) (i \hat{u}^{\beta}_{rt'}) D^{\alpha \beta}(u_{rt} - u_{rt'}, t-t') \end{eqnarray} with: \passage \begin{eqnarray} && \Sigma^{\alpha \beta}(u_{rt} - u_{rt'}, t-t') = R^{\gamma \delta}_{rt,rt'} ( \langle \Delta^{\alpha \beta ; \gamma \delta}(u_{rt} - u_{rt'}) \rangle - \delta_{tt'} \int_{t''} R^{\gamma \delta}_{rt,rt''} \langle \Delta^{\alpha \beta ; \gamma \delta}(u_{rt} - u_{rt''}) \rangle ) \\ && =- \sum_K \Delta^{\alpha \beta}_Ke ^{i K (u_{rt} - u_{rt'})} ( K.R_{0,t-t'}.K e^{- K.(C_{0,0} - C_{0,t-t'}).K } - \delta(t-t') \int_\tau K.R_{0,\tau}.K e^{- K.(C_{0,0} - C_{0,\tau}).K } )\\ && D^{\alpha \beta}(u_{rt} - u_{rt'}, t-t') = \langle \Delta^{\alpha \beta}(u_{rt} - u_{rt'}) \rangle = \sum_K \Delta^{\alpha \beta}_K e^{- K.(C_{0,0} - C_{0,t-t'}).K } e^{i K (u_{rt} - u_{rt'})} \end{eqnarray} \retour One can check that the exact FDT equalities are satisfied (for $t>t'$): \begin{eqnarray} \partial_{t'} D^{\alpha \beta}(u_{rt} - u_{rt'}, t-t') = - T \Sigma^{\alpha \beta}(u_{rt} - u_{rt'}, t-t') \end{eqnarray} where the time derivative acts only on the explicit time dependence (i.e second argument) of the 
function \cite{footnote2}. In the above effective action to first order we keep only the relevant terms, namely: (i) the disorder, by definition: \begin{eqnarray} \Delta_R^{\alpha \beta} = D^{\alpha \beta}(u_{rt} - u_{rt'}, \infty ) \end{eqnarray} (ii) the thermal noise, by definition: \begin{eqnarray} \delta(\eta T)_{\alpha \beta} = \frac{1}{2} \int_\tau ( D^{\alpha \beta}(0, \tau) - D^{\alpha \beta}(0, \infty) ) \end{eqnarray} which will in practice be a diagonal tensor $\delta(\eta T)_{\alpha \beta} = \delta(\eta T) \delta_{\alpha \beta}$. (iii) the friction coefficient, by definition: \begin{eqnarray} \delta \eta_{\alpha \beta} = \lim_{\omega \to 0} i \partial_{\omega} \Sigma^{\alpha \beta}(0, \omega) = - \int_{\tau >0} \tau \Sigma^{\alpha \beta}(0, \tau) \end{eqnarray} Let us note that, in the same way that the disorder term $D$ generated in the effective action can be split into a time-persistent (i.e., non-local in time) operator (the disorder (i)), which becomes relevant at $T_g$, and an operator local in time (the thermal noise (ii)), the term $\Sigma$ gives a local operator (the friction (iii)) and an additional time-persistent kinetic term (iv) not written above. In the present approach these terms are directly related via the FDT, and we do not need to consider (iv) separately, though it may be important in other cases. Using the above FDT relation, integrating by parts and using the symmetry $C_{r,-t} = C_{r,t}$, one immediately finds that $T \delta \eta_{\alpha \beta} = \delta(\eta T)_{\alpha \beta}$, i.e., that temperature is not renormalized: \begin{eqnarray} \delta T = 0 \end{eqnarray} The non-renormalization of temperature holds to all orders and is guaranteed by the FDT relations.
Thus, to first order one has simply to consider the disorder: \begin{eqnarray} \Delta^R_K= \Delta_K e^{- K.(C_{0,0}- C_{0,t=\infty}).K } \end{eqnarray} and the correction to the friction: \begin{eqnarray*} \delta \eta_{\alpha \beta} = \frac{1}{T} \sum_K \Delta_K^{\alpha \beta} \int_{0}^{+\infty} dt && ( e^{- K.(C_{0,0} - C_{0,t}).K } \\ &&- e^{- K.(C_{0,0} - C_{0,t=\infty}).K} ) \end{eqnarray*} Using that $\Delta_K^{\alpha \beta} = \Delta_K K_\alpha K_\beta$, $\Delta_K = g T^2$ for the model of interest and isotropy one gets that $\delta \eta_{\alpha \beta} = \delta \eta \delta_{\alpha \beta}$ with: \begin{eqnarray} \label{eta-first} \delta \eta = \frac{g T}{2} \sum_K K^2 \int_{0}^{+\infty} dt && ( e^{- K.(C_{0,0} - C_{0,t}).K } \\ &&- e^{- K.(C_{0,0} - C_{0,t=\infty}).K} ) \end{eqnarray} which becomes infrared divergent below $T_g$. We now turn to the full result up to second order and the renormalization. \subsection{RG equations to second order and calculation of the dynamical exponent} \label{section4b} The calculation of the effective action $\Gamma$ to second order in $S_{int}$ is performed in the Appendix. We now summarize all the results up to second order, specializing to the case of interest here, i.e model (\ref{def-model}) with $\Delta_K = g T^2$. We have also introduced the dimensionless disorder strength $\tilde{g} = g a^2$. We have specified a short-scale regularization, which we keep as general as possible (see Appendix for all details): \begin{eqnarray} C_{ij}(\vec{r},t,a) = && T \int {d^2 \vec{q} \over (2 \pi)^2} { \phi(a q) \over q^2} e^{i \vec{q} .\vec{r}} \\ \nonumber & \times & \left( c_{11}^{-1} P_{ij}^L e^{- c_{11} q^2 \mu t } +c_{66}^{-1} P_{ij}^T e^{- c_{66} q^2 \mu t } \right) \end{eqnarray} with $\phi(0)=1$. 
One has: \begin{eqnarray} \label{definition-b} B_{ij} (r,t,a) & = & 2 ( C_{ij}(0,0,a) - C_{ij}(r,t,a) )\\ & = & B_{ij}(r/a,t/a^2,1) \end{eqnarray} With these definitions one finds: \passage \begin{mathletters} \label{resultfinal} \begin{eqnarray} \label{g-renorm} && g_R = e^{- \frac{1}{2} K.B(0,t=\infty,a).K } a^{-2} ( \tilde{g} + \tilde{g}^2 a^{-2} \int_{r} ( e^{K.B(r,0,a).K'} - 2 e^{- \frac{1}{2} K.B(r,0,a).K} )) \\ \label{d66-renorm} && \Delta_{66,R} = \Delta_{66} + \frac{1}{2} \tilde{g}^2 a^{-4} T \sum_K \int_r (\frac{3}{8} K^2 r^2 - \frac{1}{4} (K.r)^2 ) e^{- K.B(r,0,a).K} \\ \label{d11-renorm} && \Delta_{11,R} = \Delta_{11} + \frac{1}{2} \tilde{g}^2 a^{-4} T \sum_K \int_r (\frac{1}{8} K^2 r^2 + \frac{1}{4} (K.r)^2 ) e^{- K.B(r,0,a).K} \\ \label{eta-renorm} && \eta_R = \eta + \frac{1}{2} T \tilde{g} a^{-2} \sum_K K^2 \int_{0}^{+\infty} dt e^{- \frac{1}{2} K.B(0,t,a).K } \end{eqnarray} \end{mathletters} \retour In the first line $K' \neq K$. The last line is our previous first order result (\ref{eta-first}). Studying the variation with respect to an infinitesimal change of cutoff $a \to a' = e^l a$ leads to the following RG equations, derived in detail in the Appendix: \begin{mathletters} \label{rgfinal} \begin{eqnarray} \label{rg-d66} && \frac{d \Delta_{66}(l)}{dl} = A_{66}(\phi) T \tilde{g}(l)^2 \\ \label{rg-d11} && \frac{d \Delta_{11}(l)}{dl} = A_{11}(\phi) T \tilde{g}(l)^2 \\ \label{rg-g} && \frac{d \tilde{g}(l)}{dl} = (2 - \kappa_1 K^2 T) \tilde{g}(l) + \tilde{g}(l)^2 A_{g}(\phi) \\ \label{rg-eta} && \frac{d \eta (l)}{dl} = A_{\eta}(\phi) T \tilde{g}(l) \end{eqnarray} \end{mathletters} with $T_c = 2/\kappa_1 K^2$. The amplitudes $A_i(\phi)$ in general depend on the details of the regularisation and are computed in the Appendix. 
When evaluated at $T=T_g$, however, they have a simple form: \begin{mathletters} \label{constantes} \begin{eqnarray} && A_{66} = \frac{3 \pi}{4} K^2 e^{8 \pi C(\phi)} (2 I_0(\alpha) - I_1(\alpha)) \\ && A_{11} = \frac{3 \pi}{4} K^2 e^{8 \pi C(\phi)} (2 I_0(\alpha) + I_1(\alpha)) \\ && A_g = 2 \pi e^{4 \pi C(\phi)} (I_0(\alpha) - 2 I_0(\alpha/2)) \\ && A_{\eta} = \frac{3}{2} e^{\gamma + 4 \pi C(\phi)} K^2 \frac{1}{c_{66}} ( \frac{c_{66}}{c_{11}} )^{\frac{c_{66}}{c_{11}+c_{66}}} \eta = B \eta \end{eqnarray} \end{mathletters} One sees that all cutoff dependence drops out of the universal ratios determining the critical exponents. The dynamical exponent $z$ below $T_g$ is determined from: \begin{eqnarray} \frac{d \ln \eta(l)}{d l} = T B \tilde{g}(l) \end{eqnarray} At the fixed point $\tilde{g}(l) = \tilde{g}^* = - 2 \tau /A_g$, with $\tau = (T_c-T)/T_c$, one has $\eta(L) \sim L^{z-2}$ where the dynamical exponent is: \begin{eqnarray*} z - 2 = -2 \tau \frac{T_g B}{A_g} = \frac{12 e^{\gamma} \tau}{2 I_0(\alpha/2) - I_0(\alpha)} ( \frac{c_{66}}{c_{11}} )^{\frac{c_{66}}{c_{11} + c_{66}}} \frac{c_{11}}{c_{11} + c_{66} } \end{eqnarray*} using $K^2 T_g = 8 \pi/( c_{11}^{-1} + c_{66}^{-1} )$. This yields our result for the dynamical exponent to lowest order in $\tau$: \begin{eqnarray} \label{theta} \frac{z - 2}{\tau} = \theta = 3 e^{\gamma} \frac{ (2+ \alpha) (\frac{2-\alpha}{2+ \alpha})^{\frac{2-\alpha}{4}} }{ 2 I_0(\alpha/2) - I_0(\alpha)} \end{eqnarray} with $\alpha(T_g) = \alpha_g = 2 (c_{11}-c_{66})/(c_{11}+c_{66}) = 2 (1+\sigma)/(3-\sigma)$, where $\sigma$ is the Poisson ratio defined as usual as $\sigma = 1 - 2 c_{66}/c_{11} = \lambda/(\lambda + 2 \mu)$. See fig.3 for a plot of this exponent $\theta$. One checks that in the case of the $N=1$ Cardy-Ostlund model one recovers the previous result.
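As an aside, the exponent formula (\ref{theta}) is easy to evaluate numerically, e.g. to reproduce the curve $\theta(\alpha)$ of fig.3; a minimal sketch (illustrative only), which also recovers the value $\theta(0)=6e^{\gamma}$ obtained by setting $c_{11}=c_{66}$ (i.e., $\alpha=0$) in (\ref{theta}):

```python
# Evaluate theta(alpha) = (z-2)/tau from Eq. (theta); illustrative sketch.
import numpy as np
from scipy.special import i0

def theta(alpha):
    return (3 * np.exp(np.euler_gamma) * (2 + alpha)
            * ((2 - alpha) / (2 + alpha)) ** ((2 - alpha) / 4)
            / (2 * i0(alpha / 2) - i0(alpha)))

# At alpha = 0 (c11 = c66) the formula reduces to 6*e^gamma.
assert abs(theta(0.0) - 6 * np.exp(np.euler_gamma)) < 1e-12
```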
Performing the substitutions explained before equation (\ref{RG-co}), with $\tilde{g}^* = \tau/(2 \pi)$, $B^{CO} = e^\gamma/c$, $T^{CO}_g = 4 \pi c$, one finds: \begin{eqnarray} z^{CO} - 2 = \tilde{g}^* T^{CO}_g B^{CO} = 2 e^{\gamma} \tau \end{eqnarray} which is the result of Goldschmidt and Schaub\cite{goldschmidt-dynamics-co,shapir-dynamics-co} (they use $\sqrt{c} = e^{\gamma}/2$). One can obtain further dynamical quantities near the transition. Explicitly integrating the RG flow (\ref{solution-RG}) one has: \begin{eqnarray} \int_0^l dl \tilde{g}(l) = \frac{1}{B_g} \ln(1 + \chi (e^{2 \tau l}-1) ) \end{eqnarray} This yields the diffusion coefficient $D=c/\eta$ at scale $L=a e^{l}$. We denote by $D_0=D(a)$ the bare diffusion coefficient. Exactly at $T_g$ one finds: \begin{eqnarray} \frac{D(L)}{D_0} = \exp( - B T \int_0^l dl \tilde{g}(l) ) = \frac{1}{(1 + \frac{\ln(L/a)}{\ln(\xi_0/a)})^{\theta/2}} \end{eqnarray} where $\xi_0 = a \exp(1/(g_0 B_g))$ is the characteristic length at $T_g$ and the exponent $\theta = \lim_{\tau \to 0^{+}} (z-2)/\tau$ is given by the formula (\ref{theta}) above. Above and near $T_g$ one has: \begin{eqnarray} \label{diffusion} \frac{D}{D_0} = \exp( - \frac{\theta}{2} \ln(1 + \frac{g_0 B_g}{2 |\tau|} ) ) \end{eqnarray} Finally, below $T_g$ one has: \begin{eqnarray} \frac{D}{D_0} = (1 + \frac{g_0 B_g}{2 \tau} (e^{2 \tau l} -1) )^{-\theta/2} \sim e^{(2 - z) l} \end{eqnarray} One can also study the velocity $v$ of the system in response to a small force $f$. We follow the analysis of \cite{shapir-dynamics-co}, based on stopped-RG arguments as in \cite{nozieres-gallet}. Above $T_g$ one has $v \sim \mu f$ with $\mu = D$ (Einstein relation), and $D$ was obtained in (\ref{diffusion}). In the glass phase $T \leq T_g$ one iterates the RG until a scale $L \sim \sqrt{c/f}$ and then matches the RG to an asymptotic ohmic regime. One thus obtains the $v-f$ characteristics as $v = \mu(L = \sqrt{c/f}) f$.
For $\tau > 0$ one has: \begin{eqnarray} v \sim \frac{\mu_0 f}{(1 + (\frac{f_c}{f})^{\tau} - (\frac{f_c}{f_0})^{\tau})^{\frac{z-2}{2 \tau}}} \end{eqnarray} with $f_0 = c/a^2$ and $f_c$ the {\it effective critical force}. Since we work at $T>0$ there cannot be a true critical force, but one can always define an effective critical force \cite{blatter-vortex-review}. In the case of vortices in superconductors it gives the critical current $j_c$. It is also possible to relate $f_c$ to the Larkin length $R_c = a (2 \tau/g_0 B_g)^{1/(2\tau)}$, which is similar to $R_a$ in this single-cosine model. The relation is $f_c/f_0 = (a/R_c)^2$. Thus for weak disorder one has: \begin{eqnarray} v \sim \mu_0 f \left(\frac{f}{f_c}\right)^{\frac{z-2}{2}} \end{eqnarray} Exactly at the transition $\tau=0$ one finds: \begin{eqnarray} v \sim \mu_0 \frac{f}{\left(1 + \frac{\ln(f_0/f)}{\ln(f_0/f_c)}\right)^{\theta}} \end{eqnarray} with $f_c = c \xi_0^{-2}$. Note that we have used near-equilibrium arguments which assume the irrelevance of KPZ-type terms near the transition and ignore possible violations of the Einstein relation. A more detailed analysis goes beyond the scope of this paper \cite{shapir-dynamics-co}. \section{Conclusion} \label{section5} In this paper we have studied the problem of a triangular elastic lattice on a disordered substrate, excluding dislocations. This problem is relevant to vortex lattices in superconductors, friction of surfaces, and magnetic bubbles. We have constructed the $N=2$ component model necessary to describe the triangular lattice correctly, and we have studied both the statics and the dynamics using several methods of renormalization. These methods have yielded consistent results. We have studied several regularizations and showed explicitly that the amplitude ratios which determine the exponents become independent of the regularization procedure at $T_g$.
We have found that there is a glass phase for $T<T_g$, in which disorder is perturbatively relevant and leads to behaviour qualitatively similar to, but quantitatively more complex than, the $N=1$ component version of this model studied so far. We found that the glass phase is described by a plane of fixed points, parametrized by temperature and the Poisson ratio. We obtained a $u^2 \sim A_1 \ln^2 r$ growth of the static displacements and computed the amplitude $A_1$ to lowest order in $T-T_g$. We found that it is a universal function of the Poisson ratio. We showed that the asymptotic behaviour of the correlation function in the glass phase is isotropic, $u_T \sim u_L$, but also found a universal subdominant anisotropy $u^2_T - u^2_L \sim A_2 \ln r$. These behaviours are reminiscent of those of the Bragg glass in $d=4-\epsilon$ or in $d=3$ obtained from the GVM, with the difference that $u^2 \sim A_d \ln r$ in that case and $u^2_T - u^2_L \sim \mathrm{const}$. We have also studied the equilibrium dynamics of the model. We have obtained the dynamical exponent $z$, which is also a function of temperature and the Poisson ratio. This value is larger than the one for the $N=1$ component model, indicating that the system is more glassy. We also computed the effective critical current in the glass phase and related it to the Larkin length. Finally, we also reexamined the statics of the $N=1$ Cardy-Ostlund model (planar random field XY model). We obtained an expression for the amplitude of the $u^2 \sim A \ln^2 r$ growth of the displacements which seems compatible in order of magnitude with the numerical simulations, though more extensive simulations would be useful. Another prediction is that the distribution of rescaled displacements $u/\ln r$ is Gaussian at large scale. We propose that this could provide a new and non-trivial numerical check of the validity of the replica-symmetric Cardy-Ostlund RG. We thank T. Giamarchi, L. Cugliandolo, and J.
Kurchan for useful discussions. {\it Note added}: While this manuscript was being completed, we received a preprint by C. Carraro and D.R. Nelson (cond-mat/9607184), who study the same model (\ref{def-model}). Their results differ from ours. Their static RG calculation does not take into account angular integrals and thus shows no dependence on the Poisson ratio. The two calculations may coincide only in the case $\alpha=0$, but a direct comparison was difficult since they did not compute the $\ln^2 r$ amplitude. Our result for $z$, obtained by carefully using the same regularization in the statics and the dynamics, also disagrees with theirs.
\section{Introduction} Click prediction is a central component of any online advertisement system. Predicting the probability of a click and the click-through rate is at the heart of sponsored search and display advertising, and several downstream systems, including our auction mechanism, rely on being able to predict the probability of a click accurately and reliably. Most click prediction systems are modeled via the standard machine learning classification framework. We design features relevant to the user, ads and query, with the goal of predicting whether a given user will click a given ad for a given search query. A training data period is selected, and the click prediction model is trained and validated. Machine learning scientists then analyze the model performance offline, and if things look good, deploy the models in production. \begin{figure} \includegraphics[width = 0.5\textwidth]{Model_Staleness.png} \caption{Effect of Staleness on Model Metrics and gains with retraining} \end{figure} The main caveat is that the models get stale quickly and their performance degrades over time. The distribution of users, queries and ads changes over time, so the models are evaluated on a data distribution different from the one they have learnt from. For this reason, we need to retrain the model periodically. Figure 1 demonstrates that we can achieve a significant gain in model metrics by retraining the model after just a month. This paper addresses the issue of model staleness via online learning. We investigate a unified model adaptation framework, where the models continuously and gradually adapt over time to learn the distribution changes. A natural alternative is to simply run an automated experimental pipeline which continuously retrains the model from scratch.
We not only show that gradual learning outperforms this, but also that, via continuous adaptation, we are able to control how much the model adapts and ensure the model does not overfit to a single distribution. For example, we know that the distribution of queries, ads and users is quite different on Labor Day compared to other week days. Training a model from scratch on data containing Labor Day can be problematic, since it will learn a potentially different input distribution that is not seen on other days. This problem is intensified when there are data or system corruption issues. Our comprehensive evaluation shows that our batch online learning framework, via gradual and continuous model adaptation, not only improves performance but is also more reliable and safer than automatic model retraining. \subsection{Related Work} The problem of online learning has been heavily studied in the theoretical machine learning community. Most of this work has revolved around regret minimization, where the regret of an online learning algorithm is defined as the gap in performance of the online algorithm compared to the solution obtained from an offline algorithm which has access to all the data in hindsight~\cite{zinkevich2003online,shalev2012online,shalev2007online}. Since several machine learning problems involve solving convex optimization problems, online machine learning can naturally be posed as online convex optimization. This is the case with all linear models, such as logistic regression and SVMs. Zinkevich~\cite{zinkevich2003online} was one of the first to study online gradient descent for online learning and to show that OGD enjoys low regret. Following this seminal work, several papers have extended this paradigm (see~\cite{shalev2012online,shalev2007online} for a survey). While most of the literature around online learning has focused on proving theoretical bounds, a few of these methods have proven successful in real-world problems.
\cite{mcmahan2013ad,mcmahan2011follow} proposed a Follow the Regularized Leader (FTRL) scheme for online learning on a logistic regression model. \arxiv{Their problem setting consists of a sparse feature set (with more than a million features) with an L1 logistic regression model. The authors argue how their framework naturally handles both L1 and L2 regularization, and in the case of L2 regularization, boils down to Online Gradient Descent.} The authors provide extensive empirical validation of their framework and some hints into the deployment of such a large scale system for serving ads at Google. Following this, \cite{he2014practical} from Facebook provide a framework for online learning with a combined decision tree and logistic regression model. Similar to~\cite{mcmahan2013ad}, the authors go into considerable detail on the deployment of a real-world online learning system in production. \cite{ciaramita2008online} propose an online learning click prediction system based on multi-layer neural networks. Another large scale online learning system for click prediction was proposed by~\cite{graepel2010web}, based on Bayesian Probit Regression models, with compelling details on deploying such a system in practice. Similarly,~\cite{liu2017pbodl} describes the click prediction system at Tencent, which uses a Bayesian online learning scheme similar to~\cite{graepel2010web}. \cite{cheng2010personalized} investigate the role of personalization in click prediction systems. \cite{mcmahan2014delay} investigate a distributed online learning framework for large scale click prediction problems. Beyond click prediction in search advertisement, online learning schemes have been used in other scenarios as well. \cite{chapelle2011empirical,chapelle2015simple} propose a Thompson Sampling based contextual bandit scheme for display advertisement. They propose a proximal update algorithm similar to the one discussed in this paper.
Similarly, \cite{kirkpatrick2017overcoming} look into the problem of learning from a new domain while not forgetting the previous domain, and \cite{ma2009identifying} investigate online learning for identifying suspicious URLs. \subsection{Our Contributions} The following are our main contributions. \begin{itemize} \item This paper studies two different views of Online Learning: one which performs iterative training (like Online Gradient Descent, FTRL etc.) with early stopping, and another which minimizes, at every round, a proximal regularized objective function (which ensures the current solution does not move too far from the previous solution). Both approaches provide a tradeoff between historical and new data: via the learning rate and the number of iterations in the early stopping scheme, and via the regularization parameter in the proximal scheme. \item We empirically and theoretically show that these two paradigms are closely related. In particular, we show that with the right choice of these parameters (learning rate, number of iterations and proximal regularization), the two OL paradigms achieve very similar solutions. \item We next demonstrate the benefit of incremental learning schemes by showing how they can substantially improve upon simple retraining of models. We argue that online learning not only ensures automatic model updates, but can also improve model metrics because it retains a longer history. Moreover, we also show that it is much more robust to data corruption and other distributional changes than simple model retraining. \item We then look into several important challenges of production systems: we study the effect of online learning with data delays and how different initializations affect the performance of OL, and we conclude by discussing engineering issues in deploying such a model in production systems serving search ads to hundreds of millions of users.
\end{itemize} \section{System Overview} \begin{wrapfigure}{L}{0.25\textwidth} \centering \includegraphics[width = 0.22\textwidth]{Featurization.png} \caption{GBDT as a Feature Extractor for Linear Models} \end{wrapfigure} In this section, we go over our modeling framework, features and evaluation metrics, and give an overview of our system. Given a user, ad and query, our task is to accurately predict the probability that the user will click on this ad. It is not just important to rank the ads correctly; the resulting probability must also be calibrated (in that the predicted probability must match the true click-through rate). For this reason, we shall compare both the Area Under the Curve (AUC), which measures the ranking of the ads, and the Relative Information Gain (RIG), which measures the calibration. The RIG of a model $M$ can be defined as \begin{equation} RIG_M = \frac{LogLoss_M - LogLoss_{CTR}}{LogLoss_{CTR}} \end{equation} where $LogLoss_{CTR}$ is the LogLoss of the empirical CTR of the data. Since $LogLoss_{CTR}$ is a constant, RIG is proportional to the LogLoss of the model. Next, we go over the features for our problem. Our features include ad, query and user features. Ad features include the ad title, ad id, decoration information, etc. Query features include the query category, query text, etc. User features include IP address, browser, location, age/gender information, etc. We encode our features as counting features~\cite{ling2017model}, representing the click-through rate for that feature. We use two of the most popular supervised learning techniques, namely gradient boosted decision trees (GBDTs) and neural networks. Both these techniques outperform other non-linear and generalized linear models on our data. To incrementally train models over time, however, it is more natural to do so over generalized linear models. We achieve the best of both worlds by training a generalized linear model over features extracted from non-linear models.
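Before moving on, the RIG metric defined above can be made concrete with a short sketch (our own illustration, using hypothetical label and prediction arrays, not code from the production system):

```python
import math

def log_loss(labels, preds):
    """Mean negative log-likelihood of binary labels under predicted probabilities."""
    eps = 1e-12  # clip probabilities to avoid log(0)
    return -sum(y * math.log(max(p, eps)) + (1 - y) * math.log(max(1 - p, eps))
                for y, p in zip(labels, preds)) / len(labels)

def rig(labels, preds):
    """Relative Information Gain: compare the model's log loss against the
    baseline that predicts the empirical CTR for every example."""
    ctr = sum(labels) / len(labels)
    ll_model = log_loss(labels, preds)
    ll_ctr = log_loss(labels, [ctr] * len(labels))
    return (ll_model - ll_ctr) / ll_ctr
```

A model that predicts the empirical CTR everywhere scores exactly zero; under the sign convention of the equation above, a better-calibrated model yields a more negative value.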
For example, we can extract tree and leaf value features from a GBDT (shown in Figure 2), and train a Logistic Regression model on these features. This can also be done if we use a neural network as the feature extractor and extract features, say, from the last layer. In this paper, we shall focus on online learning over a Logistic Regression model using features extracted from a fixed GBDT model. \section{Our Online Learning Framework} In click prediction systems, we get near-instant feedback from users based on whether they click on an ad or not. Assume we have a base LR model $M_{\mathcal S}$, trained on a given dataset $\mathcal S$ (say, one week of data). Once a user searches for a query on a search engine, the system sees a feature vector $x_t \in \mathbf{R}^d$. Using the model currently in production, the system then predicts the probability of click $p_t$. The auction then ranks ads based on the pClick, bid and other factors, finally creating a set of ads which are shown to the user. We then receive the feedback $y_t$, i.e. whether the user clicks or not. This data is then collected in batches. Denote by $\mathcal B_1, \mathcal B_2, \cdots, \mathcal B_n$ the different batches of data (each batch is, for example, a day or four hours of data). The predictions made in batch $\mathcal B_i$ are made using the model from the previous batch -- i.e. $M_{i-1} = M_{\mathcal B_{i-1}}$. The most important piece of this story is how we update the model. A critical challenge here is to be able to learn from the incremental data coming in, and yet not forget what was learned in the past. In the sections below, we describe two schemes for incremental updates of the models. \subsection{Early Stopping Incremental Learning} The first scheme is what we call the {\em early stopping} scheme, abbreviated as {\sc ES}. We initialize the model with the base model $M_{\mathcal S}$.
At round $i$, we initialize the incremental learning algorithm $\mbox{Alg}$ with the model from the previous round $M_{i-1}$, and limit the number of passes over the data to $k$. We denote this by \begin{equation} \label{esupdate} M_i = \mbox{Alg}(M_{i-1}, \mathcal B_i, k). \end{equation} \arxiv{\noindent \textbf{LBFGS/TRON:} One example of $\mbox{Alg}$ is LBFGS~\cite{liu1989limited}. The limited-memory BFGS (LBFGS) algorithm belongs to a family of quasi-Newton methods which approximate the BFGS algorithm with limited memory. The BFGS algorithm itself is an iterative technique, where the Hessian matrix is updated at every iteration using the past gradient evaluations. BFGS requires storing a dense $n \times n$ approximation of the inverse Hessian matrix, while L-BFGS stores only the past $m$ updates of the positions and gradients and uses them for the updates. In practice, $m$ is chosen around $10$--$30$. Another example of a similar algorithm is Trust Region Newton~\cite{lin2007trust}. } \noindent \textbf{OGD/SGD/GD: } \narxiv{One}\arxiv{Another} choice of $\mbox{Alg}$ is Online Gradient Descent~\cite{zinkevich2003online} or Stochastic Gradient Descent~\cite{bottou2018optimization}. This is akin to a gradient descent scheme, except that the (stochastic) gradient is computed on a single example or a minibatch, rather than on the entire batch. There are several flavors of this: a fixed learning rate, a decaying learning rate, or an adaptive learning rate (as in AdaGrad~\cite{duchi2011adaptive}). In this paper, we focus on the simplest version of fixed learning rates for SGD or GD. The main hyper-parameters under consideration for early stopping algorithms are $k$ and the learning rate, which determine the tradeoff between the new data and the history. Having too large a $k$ implies that we overfit to the distribution in the current batch, thereby generalizing poorly to the next batch.
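As a sketch of the early-stopping update of Equation~(\ref{esupdate}) with plain full-batch gradient descent on the logistic loss (our own illustration with toy shapes; the production system is not this code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def es_update(w_prev, X, y, alpha=1e-5, k=10):
    """One round of early-stopping incremental learning: warm-start from the
    previous model w_prev and take k gradient steps on the new batch (X, y)."""
    w = w_prev.copy()
    for _ in range(k):
        grad = X.T @ (sigmoid(X @ w) - y)  # gradient of the logistic log loss
        w -= alpha * grad
    return w
```

Larger $k$ or $\alpha$ pulls the model further towards the new batch, which is exactly the trade-off discussed here.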
Having a small $k$ implies that we might learn the data changes too slowly. We shall demonstrate the interplay between these quantities in detail in our experiments. Similarly, a large learning rate can cause the incremental learning to diverge, and a small learning rate could mean slow learning. One can also have a per-coordinate learning rate~\cite{mcmahan2013ad,he2014practical}. One way to define a per-coordinate rate is to set $\alpha^i_k = 1/n^i_k$, where $n^i_k$ is the total number of times feature $i$ is seen up to round $k$~\cite{he2014practical}.\\ \noindent \textbf{FTRL: } Follow The Regularized Leader (FTRL)~\cite{mcmahan2013ad} can be seen as another instance of this paradigm. In the case of L2 regularization, FTRL updates are equivalent to those of OGD. \subsection{Proximal Regularization based Incremental Learning} Given a batch $\mathcal B_i = \{(x^i_1, y^i_1), \cdots, (x^i_l, y^i_l)\}$, the {\em proximal} based incremental learning scheme, abbreviated as {\sc Prox}, minimizes the following objective function: \begin{equation} \label{proxl2} G(w) = \sum_{j = 1}^l L(w, x^i_j, y^i_j) + \frac{\lambda}{2} ||w - w^{i-1}||^2 \end{equation} This formulation ensures that we minimize the objective function on the current batch, while not moving too far away from the previous solution. Here, $\lambda$ controls the tradeoff between the new data and the history. If $\lambda$ is too small, we overfit completely to the current data (similar to a large $k$ in the early stopping scheme). Similarly, if $\lambda$ is too large, we will not move much from the initial model. In Equation~\ref{proxl2}, each coordinate has the same weight $\lambda$. Often, however, we want some of the coordinates to move less than others. For example, coordinates that have covered many training examples in the recent history can have a higher penalty for change compared to parameters covering relatively fewer examples.
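A minimal sketch of the L2 proximal update of Equation~(\ref{proxl2}) on the logistic loss (our own illustration; a simple gradient-descent solver stands in for LBFGS/TRON here):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def prox_update(w_prev, X, y, lam, n_steps=2000, step=0.01):
    """Minimize G(w) = sum_j L(w; x_j, y_j) + lam/2 * ||w - w_prev||^2,
    where L is the logistic loss, by plain gradient descent."""
    w = w_prev.copy()
    for _ in range(n_steps):
        grad = X.T @ (sigmoid(X @ w) - y) + lam * (w - w_prev)
        w -= step * grad
    return w
```

A large `lam` pins the solution to `w_prev`, while a small `lam` lets it fit the new batch, mirroring the trade-off discussed above.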
The proximal update equation is the same as Equation~\ref{proxl2}, except that we have a per-coordinate regularization $\lambda_r$: \begin{equation} \label{proxewc} G(w) = \sum_{j = 1}^l L(w, x^i_j, y^i_j) + \sum_{r = 1}^d \lambda_r (w_r - w_r^{i-1})^2 \end{equation} where $w_r$ is the $r$th coordinate of the weight vector. This is similar to the per-coordinate learning rate in an Online Gradient Descent scheme above. One way of setting the per-coordinate regularization parameter is to use the diagonal of the Fisher Information of the data~\cite{kirkpatrick2017overcoming}. This scheme is called Elastic Weight Consolidation in~\cite{kirkpatrick2017overcoming}. It arises naturally as an approximation of the posterior of the weights, which captures which parameters are important to the historical data. In the case of Logistic Regression, this is exactly the second derivative of the log-likelihood function. Incidentally, this scheme was also proposed as an online learning scheme with Thompson Sampling for click prediction problems~\cite{chapelle2011empirical}. The individual optimization problems of the L2 proximal update (Equation~\ref{proxl2}) and the per-coordinate one (EWC) are convex optimization problems and can be optimized via methods like LBFGS~\cite{liu1989limited} or Trust Region Newton~\cite{lin2007trust}. \subsection{Relationship between Early Stopping and Proximal Update Schemes} We next study the relationship between the early stopping algorithms and the proximal update scheme. For simplicity, we shall analyze the case when the {\sc ES} algorithm is gradient descent with a fixed learning rate. The learning rate $\alpha$ and the number of iterations $k$ for {\sc ES}, and the regularization parameter $\lambda$ for {\sc Prox}, determine the trade-off and the performance. We show here that there is a close relationship between the two. Assume we initialize {\sc ES} and {\sc Prox} with $w_0$, i.e.
{\sc Prox} minimizes $G(w) = F(w) + \lambda/2 ||w - w_0||^2$, where $F(w) = \sum_{j = 1}^l L(w, x_j, y_j)$. Denote by $w_1, \cdots, w_k$ the weights obtained via the ES scheme; for this analysis, we assume gradient descent is used. We then show the following result. \begin{theorem} Denote $w^*$ as the optimal solution of the {\sc Prox} objective function $G$ with regularization parameter $\lambda$. Denote by $w_k$ the solution obtained by running {\sc ES} on $F$ with a learning rate $\alpha$ for $k$ iterations. If $\lambda, \alpha$ and $k$ satisfy $\alpha \lambda k = 1$, then the solution $w_k$ satisfies \begin{align} |G(w_k) - G(w^*)| \leq \epsilon (k - 1) ||w_k - w^*|| \end{align} where $\epsilon = \max_i || \nabla F(w_i) - \nabla F(w_{i-1}) ||$. \end{theorem} The above theorem shows that as long as the gradients of the loss function $F$ do not change much from iteration $i$ to $i+1$ in the early stopping scheme, $w_k$ is close to the optimal solution of {\sc Prox}, provided the parameters satisfy $\alpha \lambda k = 1$. The proof of this result is in the \arxiv{Appendix.}\narxiv{extended version.} Theorem 1 can be extended to show a relationship between a per-coordinate learning rate and a per-coordinate regularization. In particular, a per-coordinate learning rate $\alpha_1, \cdots, \alpha_m$ is closely related to a per-coordinate regularization $\lambda_1, \cdots, \lambda_m$ if $\forall r, \lambda_r \alpha_r k = 1$. \begin{corollary} Denote $w^*$ as the optimal solution of the {\sc Prox} objective function $G$ with per-coordinate regularization $\lambda_r$. Denote by $w_k$ the solution obtained by running {\sc ES} on $F$ with a per-coordinate learning rate $\alpha_r$ for $k$ iterations. If $\lambda_r, \alpha_r, \forall r$ and $k$ satisfy $\alpha_r \lambda_r k = 1, \forall r$, then the solution $w_k$ satisfies \begin{align} |G(w_k) - G(w^*)| \leq \epsilon (k - 1) ||w_k - w^*|| \end{align} where $\epsilon = \max_i || \nabla F(w_i) - \nabla F(w_{i-1}) ||$.
\end{corollary} We make several important remarks about Theorem 1. Firstly, as noted earlier, $w_k$ (obtained via $k$ rounds of the ES scheme) is close to the optimal solution of the proximal scheme if $\epsilon$ is small. The quantity $\epsilon$ being small implies that the gradients of successive iterations of the early stopping scheme are close to each other. Secondly, notice that the bound also depends on $k$. We expect the bound to be looser if $k$ is large (everything else remaining the same). In the next section, we investigate this relationship empirically. We observe that for several parameter values of $\alpha, \lambda$ and $k$ satisfying $\lambda \alpha k = 1$, the ES and {\sc Prox} methods obtain similar solutions. We show that in those cases, the gradient differences are small. We also show cases where the solutions of {\sc Prox} and those of ES are not close to each other, and argue how in those cases the bound from Theorem 1 is weaker. \begin{figure} \includegraphics[width = 0.5\textwidth]{OLGain_AUC.png} \includegraphics[width = 0.5\textwidth]{OLGain_RIG.png} \caption{Gains from Online Learning relative to a Stale (fixed) Model over a span of three months} \label{OLgains} \end{figure} \begin{figure} \arxiv{\includegraphics[width = 0.5\textwidth]{ES_Algo_AUC.png} \includegraphics[width = 0.5\textwidth]{ES_Algo_RIG.png}} \includegraphics[width = 0.5\textwidth]{ES_Iter_AUC.png} \includegraphics[width = 0.5\textwidth]{ES_Iter_RIG.png} \includegraphics[width = 0.5\textwidth]{ES_LR_AUC.png} \includegraphics[width = 0.5\textwidth]{ES_LR_RIG.png} \caption{\arxiv{The top two figures compare the different algorithms for ES schemes.
The third and fourth figures (from top) show the AUC and RIG gains for different numbers of iterations ($k$), while the bottom two figures show the AUC and RIG gains, respectively, for the different learning rate parameters ($\alpha$).}\narxiv{The top two figures show the AUC and RIG gains for different numbers of iterations ($k$), while the bottom two figures show the AUC and RIG gains, respectively, for the different learning rate parameters ($\alpha$).}} \label{escomparisons} \end{figure} \begin{figure} \includegraphics[width = 0.5\textwidth]{Prox_Lambda_AUC.png} \includegraphics[width = 0.5\textwidth]{Prox_Lambda_RIG.png} \caption{The top figure compares the AUC gains and the second figure compares the RIG gains of the Proximal scheme for different values of the proximal regularization $\lambda$.} \label{proxcomparisons} \end{figure} \begin{figure} \includegraphics[width = 0.5\textwidth]{ES_Prox_AUC.png} \includegraphics[width = 0.5\textwidth]{ES_Prox_RIG.png} \caption{Comparing the Proximal Update algorithm and the early stopping algorithms over different numbers of iterations, learning rates and regularizations.} \label{proxes} \end{figure} \begin{figure} \includegraphics[width = 0.5\textwidth]{ES_Prox_Diff.png} \caption{Comparing the difference between the ES and the {\sc Prox} solutions $|G(w_k) - G(w^*)|$ and the upper bound from Theorem 1 for different values of $\alpha, k$ and $\lambda$.
Results are in log scale.} \label{proxesdiff} \end{figure} \begin{figure} \includegraphics[width = 0.5\textwidth]{OL_DifferentInitializations.png} \caption{Demonstrating the effect of starting OL with different initializations.} \label{diffinitial} \end{figure} \begin{figure} \includegraphics[width = 0.5\textwidth]{OLvsMW_AUC.png} \includegraphics[width = 0.5\textwidth]{OLvsMW_RIG.png} \caption{The two figures compare Online Learning and the Moving Window baseline relative to the base (stale) model on both AUC and RIG, respectively.} \label{OLvsMW} \end{figure} \begin{figure} \includegraphics[width = 0.5\textwidth]{OL_Delay.png} \caption{Comparison of different OL models having predictions delayed by different time periods.} \label{delay} \end{figure} \section{Experiments and Results} This section provides details of our extensive evaluation of our online learning framework, with the goal of providing a better understanding of the model performance in various scenarios and of the theoretical results discussed above. The experiments shared below have been run for over a year in our production systems. We have evaluated the models on various feature sets, at various times of the year (holiday and regular time periods), and with various parameter choices. The model performance is consistent over all these experiments. In the interest of space, we provide only a summary of the results below. The results do not change drastically with different batch sizes (daily, four-hourly, etc.); smaller batch sizes only ensure quicker model updates. All our results are on batch sizes of one day. Also, all the results below were obtained over a span of 15 days to three months with daily updates. Each day of data consists of around 2 million instances. We show the results as time series graphs to demonstrate the gains of online learning over time.
Our C++ code (built on top of~\cite{iyer2018jensen}) and the dataset used for our experiments are available at \url{https://github.com/rishabhk108/jensen-ol}. \subsection{Batch OL as a solution for Model Staleness} Figure~\ref{OLgains} demonstrates the gains of online learning by comparing the model metrics to the stale model. We show the gains in both AUC (ranking) and RIG. We see relative RIG gains of about 0.5\% in the middle of the period and close to 1.5\% towards the end. We also observe AUC gains of around 0.1\%. These experiments are run over a span of three months. Both of these are significant gains in our system, and are better than the gains we would expect from retraining the models (we shall compare both in later sections). \subsection{Tradeoff parameters for Early Stopping Online Learning} This section investigates the critical trade-off parameters for early stopping, namely the choice of the ES algorithm, the learning rate ($\alpha$) and the number of iterations ($k$). Figure~\ref{escomparisons} shows the results. \arxiv{We first compare the different ES algorithms. In this setup, we compare LBFGS, SGD and Gradient Descent. SGD and GD are gradient descent style algorithms and in both cases, we use a fixed learning rate, wherever applicable, and compare their performance for varying numbers of iterations. LBFGS adapts the learning rate as the algorithm proceeds. We see that incremental training with GD and SGD performs similarly for the same learning rate and number of iterations -- we run both algorithms with $\alpha = 1e-05$ and obtain results for $k = 3, 5$ and $10$. LBFGS, however, performs worse than both of these (with $k = 10$ we see that LBFGS already overfits to the new data). The added benefit of SGD and GD comes from the flexibility of a fixed learning rate, whereas LBFGS tries to minimize the objective function completely as quickly as possible.
In our case, we do not want to overfit to the new data, and it is desirable to have the right knobs to trade off between the historical and new data. This consideration does not favor LBFGS as the algorithm for use in an ES scheme. This comparison is shown in the top two graphs in Figure~\ref{escomparisons}.} We \arxiv{next}\narxiv{first} compare different numbers of iterations. We set the ES algorithm to be SGD and fix the learning rate to $\alpha = 1e-05$. With a small number of iterations ($k = 2, 3$), the model does not learn enough of the new data, while with a large number of iterations ($k = 50, 100, 1000$), the model overfits to the new data and we see a loss in performance. The optimal performance is achieved for $k = 5, 10$. This parameter, along with the learning rate, needs to be tuned for each model, depending on the amount of historical and incremental data. We see the RIG and AUC gains in the \arxiv{third and fourth graphs (from top)}\narxiv{top two graphs} in Figure~\ref{escomparisons}. Finally, we compare the learning rate. A large learning rate ($\alpha = 1e-02, 1e-03$) causes the weights to diverge, while with a small learning rate ($\alpha = 1e-07, 1e-08$), the learning is slow. We achieve the best results with $\alpha = 1e-05$. The RIG and AUC gains are shown in the last two plots (from top) in Figure~\ref{escomparisons}. \narxiv{In the interest of space, the comparison of the different early stopping algorithms is in the extended version.} \subsection{Trade-off Parameters for the Proximal Scheme} We next compare the effect of different regularization parameters for {\sc Prox}. Using a small regularization parameter ($\lambda = 100, 1000$) tends to make the model overfit to the new data, while with a large regularization ($\lambda = 100000, 500000$ and above), the model hardly learns. The optimal performance comes from $\lambda = 10000$ and $\lambda = 20000$ in this case.
Again, this parameter will need to be tuned depending on the amount of historical and new data. The results are shown in Figure~\ref{proxcomparisons}. \subsection{Comparing Early Stopping and Proximal Updates} In this section, we look into the early stopping and proximal updates, and their connection. The goal of this exercise is to compare several ES schemes (for different $\alpha, k$) and different {\sc Prox} schemes by varying $\lambda$. The results of this are in Figure~\ref{proxes}. Firstly, we compare the following sets of ES and {\sc Prox} schemes, 1) $\alpha = 1e-05, k = 5$ and $\lambda = 20000$, 2) $\alpha = 1e-05, k = 10$ and $\lambda = 10000$, 3) $\alpha = 5e-06, k = 5$ and $\lambda = 40000$, and 4) $\alpha = 1e-06, k = 10$ and $\lambda = 100000$. Notice that all these sets satisfy $\alpha k \lambda = 1$. We see that the {\sc Prox} and {\sc ES} gains are very similar to each other (the blue, orange, red and dark gray lines in Figure~\ref{proxes}). The results hold for both the AUC and RIG gains. We next consider two additional settings: $\alpha = 1e-04, k = 10$ and $\lambda = 1000$ and $\alpha = 1e-05, k = 100, \lambda = 1000$. We see here that there is a gap between the {\sc Prox} and ES schemes with the {\sc Prox} method consistently outperforming the ES schemes in both these cases (the three green lines in Figure~\ref{proxes}). To understand this better, we plot the difference in loss function $|G(w_k) - G(w^*)|$, and the upper bound from Theorem 1 in Figure~\ref{proxesdiff}. We see that the settings, $\alpha = 1e-05, k = 5, \lambda = 20000$ and $\alpha = 2e-05, k = 5, \lambda = 10000$ have small values of the loss function difference, as expected. We see that the upper bound estimate is also small in this case (around 1e-02). However, with the settings $\alpha = 1e-04, k = 10$ and $\lambda = 1000$ and $\alpha = 1e-05, k = 100, \lambda = 1000$, we see a larger difference between $G(w^*)$ and $G(w_k)$ (i.e. the {\sc Prox} and the {\sc ES} solutions). 
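The near-coincidence of the two schemes when $\alpha k \lambda = 1$ can be checked on a one-dimensional quadratic toy loss (a sketch with illustrative numbers, not our production settings):

```python
import numpy as np

# One-dimensional quadratic new-batch loss F(w) = 0.5*c*(w - b)^2,
# starting from the historical weight w0; all numbers are illustrative.
c, b, w0 = 1.0, 1.0, 0.0
alpha, k = 1e-3, 10
lam = 1.0 / (alpha * k)            # the alpha * k * lam = 1 relation

# ES: k gradient steps with fixed learning rate alpha.
w_es = w0
for _ in range(k):
    w_es -= alpha * c * (w_es - b)

# Prox: exact minimizer of F(w) + 0.5*lam*(w - w0)^2.
w_prox = (c * b + lam * w0) / (c + lam)
```

For small $\alpha k c$ the two solutions agree to first order; pushing $\alpha$ or $k$ up widens the gap, consistent with the bound of Theorem 1.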
Note that all four of these satisfy $\alpha k \lambda = 1$. We see that the upper bound estimate is also larger. With a larger learning rate ($\alpha = 1e-04$), the gradient difference between subsequent iterations will be larger, and so will $\epsilon$. In the second case, the learning rate is smaller ($\alpha = 1e-05$), but we run it for more iterations. Correspondingly, the bound (which depends on both $\epsilon$ and $k$) is larger. {\sc ES} and {\sc Prox} yield very similar update rules if we choose the right set of hyper-parameters. Fortunately, in practice, we observe that the optimal performance comes from smaller values of $\alpha$ and $k$, and in those settings the {\sc Prox} and the {\sc ES} schemes coincide. Unlike the {\sc Prox} scheme, the {\sc ES} scheme does not require solving a convex optimization problem to completion. We just need to run a few iterations of SGD with the right learning rate. On the flip side, the ES scheme has more hyper-parameters ($\alpha$, $k$), which makes it slightly harder to tune compared to {\sc Prox}, where we just need to tune the regularization. In the rest of the paper, we choose the setting with $\alpha = 1e-05$ and $k = 10$ and use the {\sc ES} update scheme. \subsection{Comparing OL with Regular Model Retrains} We have illustrated how batch OL can help resolve the issue of model staleness that a fixed model suffers from. Another obvious alternative to resolve this problem is to automatically retrain and update the base model periodically. To compare these two approaches, we set up a baseline where the model was completely retrained daily on the previous week’s data and evaluated on just the next day. This was compared against a batch OL model being incrementally trained daily. Figure~\ref{OLvsMW} demonstrates the results. Firstly, we notice that both models follow the same trend over time. Next, we see that on several days, online learning outperforms the moving window baseline.
This can be attributed to the fact that online learning is done incrementally in each batch, so the model has seen more data than just the previous period. This gives the OL model the ability to generalize better to trends it has learnt in earlier batches, while also having the ability to learn and adapt to recent trends. The retrained model, however, learns solely from the last few days and may overfit to this distribution. We also see that the moving window baseline is more sensitive to overfitting to data corruption and distributional changes compared to online learning. The reader will notice large drops in AUC and RIG owing to distributional changes (see the two large drops for the moving window approach). This effect can be even more pronounced with data corruption issues (like system bugs or livesites). Since online learning adapts slowly while remembering the past historical data, it does not overfit as much. Moreover, it is also easier to implement validation checks and safeguards to ensure it does not learn from corrupt data. \subsection{Impact of when we start Online Learning} We next study the effect of online learning on different initializations. We consider different starting points of online learning. In this experiment, we train four different base models with one week of initial data each. We start four different OL schemes, each of which begins one week apart. We see that after about a month of online learning, all four models converge to roughly the same performance with respect to a fixed base model. Figure~\ref{diffinitial} shows the results of this. \subsection{Delay Analysis in Online Learning} In this section, we investigate the effect of delayed predictions made by online learning models. In other words, we fix the model evaluation period and compare the performance of multiple OL models trained till different time periods. The most recent model was trained on daily batches till one day before this period.
The other models in comparison are trained till one week, 15 days, and so on, up to more than two months before the evaluation period. We also compare the performance of a base model trained from scratch approximately one month before the evaluation period. The results are in Figure~\ref{delay}. Here we can see that for both AUC and RIG, the performance degrades with increased delay. This inference is intuitive since the delayed models have not seen the latest trends in data closer to the evaluation period. The reduction in performance, however, is small as we move to older models. Even the model trained till 03/05, which is more than two months before the evaluation period, retains most of the gains in AUC and RIG over the base model. We also compare these delayed models to a fixed baseline model trained to completion around one month before the evaluation period (marked as April Retrained, trained on 04/01 to 04/07). Notice that there are a few OL model snapshots that have been updated till before this time period, namely till 03/25 and 03/05. As seen in the figure, even these OL models perform better than the retrained baseline, even though the baseline model is trained closer to the evaluation period. The reason for this is that the OL models are trained incrementally and have actually seen data across several months; hence they generalize better. The fixed model, on the other hand, trains on just one week of data and learns only from the distribution in this time period. This again underscores the point that online learning models are superior to simple retrained models with a data refresh. \section{Conclusions and Lessons Learned} This paper presents a unified framework for online learning, by showing how two seemingly different views of online learning, namely an iterative early stopping scheme and a proximal update algorithm (both of which have been studied extensively in the literature for this problem), are closely related.
We provide conditions under which the two algorithms achieve the same updates and empirically validate them. We demonstrate several results proving the benefit of online learning, by understanding the tradeoff between historical and new data and the impact of initializations and delay in the system, and by showing that online learning is a superior and more stable method than model retraining. Finally, we discuss some important validation and safeguard mechanisms required for online learning in production systems. This is important since models are updated automatically. Some of our validation checks include: \begin{itemize} \item Check daily differences in model metrics such as RIG and AUC (day-over-day differences). We do not expect the day-over-day differences to be large due to the incremental nature of our online learning schemes. \item Check differences to the base model. We expect to see non-trivial improvements compared to the stale initial model. \item Comparison to the moving window baseline ensures that online learning is no worse than a retrained baseline model. \item Day-over-day CTR and other data checks are required to ensure we do not incrementally train models on livesite data. For data checks, we check the volume of the training data over various slices, as we do not expect a drastic difference in the volume of the input data. \end{itemize} We also have monitoring dashboards to monitor daily model metrics, input data volumes, CTR, etc. These dashboards allow us to monitor the daily performance of the models and investigate potential issues. In case of model issues, it is also easy to roll back to previous snapshots of the model. \bibliographystyle{aaai}
\section{Introduction} Advanced LIGO~\cite{TheLIGOScientific:2014jea} has just completed its first science run. Results from the first 5 weeks of data have been made public, and include the binary black hole (BBH) coalescence GW150914~\cite{GW150914-DETECTION}. Advanced Virgo~\cite{TheVirgo:2014hva} is expected to join LIGO in the second science run~\cite{2016LRR....19....1A}, starting in Fall 2016. KAGRA~\cite{PhysRevD.88.043007} and LIGO India~\cite{M1100296} are expected to join the network before the end of this decade. Ground based interferometers will detect gravitational radiation from several kinds of sources and start gravitational-wave (GW\xspace) astrophysics. Compact binary coalescences (CBC) of two neutron stars (BNS), two black holes or a neutron star and a black hole (NSBH) have traditionally been among the most promising sources, and will be detected at a rate of several tens per year, although significant uncertainty still exists in the astrophysical rates, both predicted and measured~\cite{GW150914-RATES,2010CQGra..27q3001A}. Analysis of many such signals promises to shed light on several open problems in astrophysics. For example, direct mass measurement with GWs\xspace could allow for an accurate reconstruction of black hole and neutron star mass functions. GWs\xspace also represent our first chance to perform strong field tests of general relativity. An idea of what can be done is given by the recent analyses of GW150914~\cite{GW150914-PARAMESTIM,GW150914-TESTOFGR,GW150914-ASTRO}. In the last few years a pipeline has been created, called TIGER~\cite{2012PhRvD..85h2003L,2012JPhCS.363a2028L,2014PhRvD..89h2001A}, which can look for unmodeled deviations from general relativity in detected GW\xspace signals. Elsewhere~\cite{2013PhRvL.111g1101D} it has been shown how GWs\xspace could be used to test proposed equations of state for neutron stars. 
Most of this work, and others~\cite{2012PhRvD..86d4023L,2014arXiv1410.3852L}, relies on Bayesian model selection, instead of simple parameter estimation. This has the advantage that information can be aggregated from all detected signals, in a cumulative way, resulting in more powerful tests. In this letter we show that GWs\xspace can be used to check whether spins are preferentially aligned with the orbital angular momentum in BBH and NSBH systems (we will not consider BNS, since known neutron stars in binaries do not have large spins~\cite{2003Natur.426..531B}). This is of fundamental importance for astrophysics and for understanding the formation mechanisms of compact binaries, for which there still are many open questions~\cite{2010ApJ...725.1984L,2015MNRAS.447.2181I,2014arXiv1405.4662Z,2005ApJ...632.1035O,2016PhRvD..93h4029R,2016MNRAS.458.2634M,2016arXiv160404254R}. It is believed that two main formation channels exist for compact binaries (see~\cite{GW150914-ASTRO} for a review). Common envelope evolution is expected to happen in galactic fields, whereas dynamical capture could happen in dense environments such as globular clusters. Critically, it has been suggested that common envelope evolution in binaries will align the spins with the orbital angular momentum~\cite{2007ApJ...661L.147B}~\footnote{Others suggest that kicks introduced in the system when the progenitor stars undergo core collapse supernovae could result in spins being significantly misaligned~\cite{2000ApJ...541..319K}}. Spins are instead expected to be randomly oriented for CBCs formed dynamically. Ultimately, being able to verify if and how often spins are aligned could significantly help us understand the formation patterns of binary systems, verify which channel happens more frequently, and assess the efficiency of the common envelope evolution phase in aligning spins.
In this paper we consider a scenario where a fraction $f_{a}$\xspace of signals have spins nearly aligned, while the rest are non-aligned. This accounts for two possible formation patterns for CBCs, one of which is efficient in aligning spins with the orbital angular momentum. We find that the posterior distribution for the mixture parameter $f_{a}$\xspace can be accurately estimated with a couple hundred sources, with precision of the order of $10\%$. \section{Method} Let us assume two main formation channels for CBC exist, which result in a fraction $f_{a}$\xspace of systems having spins nearly aligned, and a fraction (1-$f_{a}$\xspace) having misaligned spins. We will assume $N$ GW\xspace detections are made, denoted by their data streams $d_i\mbox{ with } i=1 \ldots N$, and show how they can be used to estimate $f_{a}$\xspace. We introduce two mutually exclusive models: $\mathcal{H}_a$\xspace corresponding to spins nearly aligned with the orbital angular momentum, and $\mathcal{H}_{\bar{a}}$\xspace corresponding to non-aligned spins (we will define these models more precisely later). Given an event and the corresponding data stream $d_k$, we can calculate the evidence of the data for the models above, $Z^m_k\equiv p(d_k|\mathcal{H}_m)$, with $m=a,\bar{a}$. The evidence must be calculated by integrating a non-trivial likelihood function over a multidimensional parameter space~\cite{2014arXiv1409.7215V}. Calling $\vec\theta$ the unknown parameters on which a CBC depends, we have: \beq\label{eq.evidence} Z^m_k = \int_{\Theta_m}{ p(d_k| \vec\theta\, \mathcal{H}_m)p_m(\vec\theta|\mathcal{H}_m)}d\vec\theta \:. \end{equation} We solve this integral by using the Nested Sampling and the BAMBI flavors of \texttt{lalinference}~\cite{2014arXiv1409.7215V}. We stress that the hypervolume we integrate over, $\Theta_m$, depends on the model being considered (see next section). We can now show how the aligned fraction of events can be calculated. 
We start by applying Bayes' theorem and the product rule to the posterior distribution of $f_{a}$\xspace: \beq p(f_{a}|\vec{d} )\propto p(\vec{d}|f_{a} )p(f_{a})=p(f_{a})\prod_{k=1}^{N}p(d_k|f_{a} )\label{Eq.PostMixture} \end{equation} where $p(f_{a})$ is the prior on the mixture parameter, that we take as flat on $[0,1]$ since we do not have any previous astrophysical information. With $\vec{d}$ we have denoted the set of N detected events, $\vec{d}\equiv \{d_1, d_2, \cdots, d_N\}$. The factors inside the product can be expanded by noticing that $\mathcal{H}_a$\xspace and $\mathcal{H}_{\bar{a}}$\xspace are mutually exclusive for each detection, i.e. that $p(\mathcal{H}_a) + p(\mathcal{H}_{\bar{a}})=1$. Thus: \beq p(d_k|f_{a})=\sum_{j=a,\bar{a}}{p(d_k|\mathcal{H}_j f_{a})p(\mathcal{H}_j|f_{a})} \end{equation} We notice that $\mathcal{H}_j f_{a}$ can be written simply as $\mathcal{H}_j$, since knowing if a signal was aligned or not makes knowing $f_{a}$\xspace irrelevant. Next, we need to calculate $p(\mathcal{H}_a|f_{a})$ and $p(\mathcal{H}_{\bar{a}}|f_{a})$. These are trivially $p(\mathcal{H}_a|f_{a})=f_a$ and $p(\mathcal{H}_{\bar{a}}|f_{a})=(1-f_a)$: if a fraction $f_a$ of events is aligned, the probability that the aligned model applies to any event, before looking at the data, is $f_a$. Modulo a normalization constant, the log of Eq.~\ref{Eq.PostMixture} then reads: \beqa \log p(f_{a}|\vec{d})&=&\log(p(f_a))\label{Eq.logpw}\\ &+&\sum_{k=1}^{N}{\left(\log Z^a_k +\log\left[f_{a}+ (1-f_{a}) \frac{Z^{\bar{a}}_k}{Z^a_k}\right]\right)}\nonumber \eeqa \section{Implementation}\label{Sec.Implementation} In this section we describe the parameters of the sources we simulate. We consider GWs\xspace emitted by BBH and NSBH, and for each type of source we generate two catalogs of GW\xspace signals, one corresponding to nearly aligned spins and the other to non-aligned spins. 
For the BBH, we use the so called IMRphenomPv2 waveform approximant~\cite{Hannam:2013oca,Khan:2015jqa} (this is one of the two families used for the analysis of GW150914~\cite{GW150914-PARAMESTIM}). For the lighter NSBH we use the SpinTaylorT4 approximant~\cite{2003PhRvD..67j4025B,2006PhRvD..74b9904B}. Unlike IMRphenomPv2, SpinTaylorT4 is an inspiral-only approximant and thus cannot model the merger and ringdown phase of CBCs. However, since the frequency at which the merger happens is roughly inversely proportional to the total mass of the system, merger and ringdown can be neglected as long as the total mass is below $\sim 20M_\odot$~\cite{2014arXiv1404.2382M}, for reasonable signal-to-noise ratios (SNRs). All the NSBH we simulate have total mass below 13$M_\odot$\xspace. In both cases, we work at the highest known post-Newtonian phase order, while neglecting higher-order amplitude corrections. We also neglect tidal contributions in neutron stars~\cite{Lattimer:2013,2014PhRvD..89b1303Y,2014PhRvD..89j3012W,2013PhRvL.111g1101D}. Both these limitations are due to computational considerations and will not impact our main result. For the BBH, we choose to consider heavy black holes of a few tens of solar masses, which we know will be detected in large number by ground based detectors in the coming months~\cite{GW150914-RATES}. We thus generate component masses uniformly from the range $[30,50]$$M_\odot$\xspace (in the source frame). The dimensionless spins, $a_i\equiv \frac{c|\vec{ S}_i|}{G m_i^2}$, are uniformly generated in the range $[0,0.98]$, compatible with the range of validity of the waveform approximant~\cite{Hannam:2013oca,Khan:2015jqa}. For the NSBH signals, BH masses are in the range $[6,11]$$M_\odot$\xspace, and NS masses in the range $[1.2,2]$$M_\odot$\xspace. 
Black hole spins are uniform in the range $[0,1]$ while for the neutron stars we restrict possible spins to $[0,0.3]$ (the largest measured spin of a NS in a binary system is $0.02$~\cite{2008LRR....11....8L}). For the NSBH sources, our choice for the mass range of the BH is mainly driven by the use of inspiral-only waveforms (IMRphenomPv2 waveforms are not considered reliable for mass ratios above $4-5$~\cite{Hannam:2013oca,Khan:2015jqa}). The distances are uniform in comoving volume, with a lower network SNR (that is, the root-sum of squares of the SNRs in each instrument) cut at $8\sqrt{3}\sim13.9$ for NSBH and $7\sqrt{3}\sim12.1$ for BBH. These correspond to distances up to \si1.2~Gpc for NSBH and \si12~Gpc for BBH. For both BBH and NSBH, the sky position and orientation-polarization of the systems are uniform on the unit sphere. To verify that the test we propose is self-consistent and does not rely on the exact definition of ``aligned'', we define it in a different way for NSBH and BBH. For NSBH, the nearly aligned (henceforth just aligned) catalog is made of signals with tilt angles (i.e. the angles between spin vectors and orbital angular momentum) isotropic in the interval $[0,10]^\circ$, i.e. close to the positive direction of the orbital angular momentum. For the BBH, the tilts in the aligned catalog are in the range $[0,10]^\circ \cup [170,180]^\circ$, i.e. the spin vectors can be along both the positive and negative direction of the orbital angular momentum. For both BBH and NSBH, the non-aligned model is the logical negation of the corresponding aligned model. For example, for NSBH the tilts were isotropic in the range $[10,180]^\circ$. The priors on the tilt angles for the $\mathcal{H}_a$\xspace and $\mathcal{H}_{\bar{a}}$\xspace models, eq.~\ref{eq.evidence}, are isotropic with cuts that match these intervals.
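The tilt-angle draws for the aligned catalogs can be sketched as follows; ``isotropic in an interval'' means $\cos\tau$ is uniform over the corresponding range. The helper and the equal-weight interval choice below are illustrative (for the two BBH cones the equal weights happen to be exactly isotropic, since the cones subtend equal solid angles).

```python
import numpy as np

def draw_tilts(n, deg_intervals, rng):
    """Draw n tilts (degrees), isotropic within a union of [lo, hi] cones."""
    idx = rng.integers(len(deg_intervals), size=n)   # pick a cone per sample
    lo, hi = deg_intervals[idx].T
    # cos(tilt) uniform over [cos(hi), cos(lo)] within each chosen cone
    c = rng.uniform(np.cos(np.radians(hi)), np.cos(np.radians(lo)))
    return np.degrees(np.arccos(c))

rng = np.random.default_rng(0)
bbh_aligned = np.array([[0.0, 10.0], [170.0, 180.0]])
nsbh_aligned = np.array([[0.0, 10.0]])

tilts_bbh = draw_tilts(10_000, bbh_aligned, rng)
tilts_nsbh = draw_tilts(10_000, nsbh_aligned, rng)
```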
Each event is added into simulated Gaussian noise corresponding to the design sensitivity of the two Advanced LIGO detectors and the Advanced Virgo detector~\cite{2016LRR....19....1A}. We analyze all events in the catalogs twice, once with a prior that matches the $\mathcal{H}_a$\xspace model, and once with a prior that matches $\mathcal{H}_{\bar{a}}$\xspace. These runs provide the evidences of eq.~\ref{eq.evidence} that we can use in eq.~\ref{Eq.logpw} to calculate the posterior of the mixture fraction. \section{Results} To show how the method performs for some representative values of $f_a$, we generate (for both BBH and NSBH) 5 catalogs with increasing fraction of aligned events, from 0 (no events aligned) to 1 (all events aligned), in steps of 0.25. These catalogs are trivially created from our initial set of NSBH and BBH by randomly drawing aligned and non-aligned signals with the desired ratio until 100 sources for NSBH or 200 for BBH are obtained. The evidences of these events are then used in eq.~\ref{Eq.logpw} to obtain the posterior distribution of $f_{a}$\xspace. The main results are shown in Fig.~\ref{Fig.Mixture}, where for pedagogical purposes we keep BBH and NSBH sources separated. We see how the posterior distributions for $f_a$ peak at or very close to the corresponding true values, given in the legend, with $1\sigma$ uncertainties of the order of $10\%$. The small offsets of some of the curves can be explained by the limited number of events we consider (the offset would be zero in the limit of an infinite number of sources). By regenerating the catalogs a few times we saw that the peaks can shift by a few percent on either side of the true values. We have verified that halving the number of sources in the catalogs (50 NSBH and 100 BBH) broadens the posterior distributions, while leaving them centered around or close to the true values.
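As a sketch of how the posterior of eq.~\ref{Eq.logpw} is evaluated, the snippet below computes $p(f_a|\vec d)$ on a grid from per-event log evidence ratios $\log(Z^{\bar a}_k/Z^a_k)$. The ratios here are synthetic placeholders (the real ones come from the nested-sampling runs), so only the mechanics are meaningful.

```python
import numpy as np

# Grid evaluation of eq. (logpw) from per-event log evidence ratios.
# The log ratios below are synthetic placeholders for illustration only.
rng = np.random.default_rng(1)
n_events, f_true = 200, 0.75
aligned = rng.random(n_events) < f_true
log_r = np.where(aligned,                      # log(Z_abar_k / Z_a_k)
                 rng.normal(-3.0, 1.0, n_events),
                 rng.normal(+3.0, 1.0, n_events))

f_grid = np.linspace(1e-4, 1.0 - 1e-4, 999)
# log p(f|d) = const + sum_k log(f + (1 - f) * Z_abar_k / Z_a_k)
log_post = np.sum(np.log(f_grid[None, :]
                         + (1.0 - f_grid[None, :]) * np.exp(log_r)[:, None]),
                  axis=0)
post = np.exp(log_post - log_post.max())
post /= post.sum() * (f_grid[1] - f_grid[0])   # normalize on the grid

f_map = f_grid[np.argmax(post)]                # peaks near f_true
```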
One might be surprised that the NSBH distributions are narrower in spite of the fact that fewer (100) sources are used for NSBH than for BBH (200). This can be explained by recalling that for BBH we used a slightly lower SNR threshold (12.1 vs 13.9), thus increasing the number of weak signals that do not contribute much to the measurement, while broadening the posterior distributions. Furthermore, the characteristic effects of misaligned spins (e.g. amplitude precession) are more visible in NSBH, making them ideal sources for this test. \begin{figure}[htb] \includegraphics[width=0.5\textwidth]{1.png} \caption{Posterior distribution for the mixture parameter $f_{a}$\xspace after 100 NSBH (dashed) and 200 BBH (solid) detections. Several underlying values of $f_{a}$\xspace (given in the legend) are considered. $f_a=0$ corresponds to a catalog where none of the sources had aligned spins, while $f_a=1$ refers to a catalog where all events had aligned spins.} \label{Fig.Mixture} \end{figure} We stress that we do \emph{not} assume that the priors in eq.~\ref{eq.evidence} perfectly match the corresponding distributions in the simulated events, a mismatch that will likely be present in the first years of gravitational-wave astrophysics. Two important examples are the prior distributions for distance and masses. While geometrical arguments led us to use a prior for the luminosity distance uniform in comoving volume, in reality, since far away sources would not be detectable, the distance distribution of \emph{detected} events will first increase with distance, reach a maximum, and then decrease. The distances corresponding to the maximum of the distribution and the length of the tail depend on the true astrophysical distribution of masses (heavy CBCs will be visible farther away), which we do not yet know (but will hopefully measure in the coming years).
Similarly for the mass prior: we used priors on the component masses which were a factor of a few larger than the range used to simulate the sources. It will be interesting to verify how the test performs if the true distribution of tilt angles for the aligned model is different from what is used to split the two models, or if the true distributions are not mutually exclusive\footnote{In our simulations we assumed isotropic tilt distributions $p(\tau)\propto \sin{\tau}$, with cuts at $10^\circ$. This makes our non-aligned distribution basically equal to a fully isotropic distribution, since the probability that both tilts are small (or close to $\pi$ for BBH) is negligible.}. Given the amount of simulations that would be necessary to fully explore those scenarios, we leave it for future work. It is worth remarking that while we consider two possible formation channels, this framework can be extended to take into account more models, provided they are mutually exclusive. Similarly, if one believes only one formation channel is possible in the universe, and thus \emph{all} events will either have aligned or non-aligned spins (corresponding to $f_a=0,1$), model selection can be used to quantify how many detections are required before the model can be proven right. Although we believe considering a mixture of two models is more consistent with today's understanding of binaries' formation, we give an example of a single-channel test. For this, we simulate a situation in which all sources have non-aligned spins and we calculate the cumulative odds ratio: \beq O^a_{\bar{a}}\equiv\frac{p(\mathcal{H}_a | \vec d)}{p(\mathcal{H}_{\bar{a}} | \vec{d})}= \frac{p(\vec{d}| \mathcal{H}_a) p(\mathcal{H}_a)}{p(\vec{d}| \mathcal{H}_{\bar{a}}) p(\mathcal{H}_{\bar{a}})}=B^a_{\bar{a}}~\frac{p(\mathcal{H}_a)}{p(\mathcal{H}_{\bar{a}})}\nonumber \end{equation} where $B^a_{\bar{a}}$ is the cumulative Bayes factor for aligned vs non-aligned models.
Since the data corresponding to the $N$ detections are statistically independent, the cumulative Bayes factor can be written as a product over the single events: \beq B^a_{\bar{a}}= \prod_{k=1}^{N} \frac{p(d_k| \mathcal{H}_a)}{p(d_k| \mathcal{H}_{\bar{a}})} \equiv \prod_{k=1}^{N} \frac{Z^a_k}{Z^{\bar{a}}_k} \end{equation} The logarithm of the odds ratio is shown in Fig.~\ref{Fig.Odds} as a function of the number of events, for random sub-catalogs of 10 NSBH and 50 BBH. We have assumed $p(\mathcal{H}_a)=p(\mathcal{H}_{\bar{a}})$. We see that for both types of sources the correct non-aligned model is favored in a significant way (log odds below the solid horizontal line favor the non-aligned model at a $>2.7 \sigma$ level). NSBH curves go down faster since for NSBH the effects of spin misalignment are stronger in the waveform, and thus harder to match with an aligned model. \begin{figure}[htb] \includegraphics[width=0.5\textwidth]{2.png} \caption{(color online) Cumulative odds ratio for NSBH (red) and BBH (blue), with non-aligned injections. Each line is a sub-catalog. Cumulative odds values below the solid horizontal thick line favor the (correct) non-aligned model with a significance larger than \si2.7 $\sigma$. We notice that we cut the y axis at -250 to improve clarity. Some NSBH catalogs go down to cumulative odds of -1000.} \label{Fig.Odds} \end{figure} \section{Conclusions}\label{Sec.Conclusions} Two formation channels are commonly considered for CBCs: common envelope evolution, which should result in spins being preferentially along (or very close to) the direction of the orbital angular momentum, and dynamical capture, which should result in randomly oriented spins. In fact, there is not complete agreement on whether common envelope evolution is efficient enough in aligning spins, or whether instead eventual kicks from the core collapse supernovae of the progenitor stars will be the dominant factor.
It would thus be of importance to calculate which fraction of the compact binaries have spins nearly aligned with the orbital angular momentum, which could be used to expand our understanding of formation channels. In this paper, we have shown how gravitational waves emitted by compact binaries containing a black hole, and detected by Advanced LIGO and Virgo, can be used to verify if spins are preferentially aligned with the orbital angular momentum. We considered neutron star - black hole and binary black hole systems, and created catalogs of sources with increasingly large fraction of aligned sources (from 0 to 100\%). Black holes in NSBH were of low mass (up to 11$M_\odot$\xspace), while for BBH we simulated heavy objects, comparable to GW150914 (masses in the range $[30-50]$$M_\odot$\xspace), which will be detected in large number in the coming months and years. We showed how a couple hundred signals are enough to pinpoint the underlying value of the aligned fraction with \si10\% uncertainty, which suggests GWs represent a viable way of gaining insight into the orientation of spins in compact binaries, and ultimately on their evolution. We have verified the robustness of the test against some common prior mismatch (distance, masses). Future work includes introducing a mismatch between the definition of aligned in the test and the true distribution of aligned sources. We also stress that if more information is available which could help distinguish between the two channels (e.g. the resulting mass ratio distribution), it could be folded in an extended version of this test. This is LIGO document P1500022. \section{Acknowledgments} SV and RL acknowledge the support of the National Science Foundation and the LIGO Laboratory. LIGO was constructed by the California Institute of Technology and Massachusetts Institute of Technology with funding from the National Science Foundation and operates under cooperative agreement PHY-0757058. 
RS is supported by the FAPESP grant 2013/04538-5. PG was supported by NASA grant NNX12AN10G. SV, RL and RS acknowledge the FAPESP-MIT grant 2014/50727-7. The authors would like to acknowledge the LIGO Data Grid clusters, without which the simulations could not have been performed. Specifically, these include the Syracuse University Gravitation and Relativity cluster, which is supported by NSF awards PHY-1040231 and PHY-1104371, the Leonard E Parker Center for Gravitation, Cosmology and Astrophysics at the University of Wisconsin-Milwaukee (NSF-0923409) and the Atlas cluster of the Albert Einstein Institute in Hanover. We would also like to thank M.~Branchesi, W.~Del~Pozzo, T.~Dent, R.~Essick, W.~Farr, V.~Grinberg, E.~Katsavounidis, F.~Lacasa, T.~G.~F.~Li, I.~Mandel, M.~Mapelli, C.~Van~Den Broeck, A.~Weinstein, R.~Weiss and the CBC parameter estimation group for useful comments and suggestions.
\subsection{Data representation} Following~\cite{ctgan&tvae}, we use mode-specific normalization for numerical features and one-hot encoding for categorical features. Mode-specific normalization preserves the multi-modal distribution of numerical features and improves the performance of tabular data synthesizers~\cite{ctgan&tvae, zhao2021ctab}. One-hot encoding is a simple yet effective way to convert categorical values to numerical ones without losing too much information. Admittedly, one-hot encoding and mode-specific normalization cause sparse input, but the autoencoder in AE-GAN maps the sparse encoded input into compact latent vectors, mitigating this issue. \subsection{Encoder and decoder} Since we identify sparsity as one of the main reasons for tabular data synthesizers' sensitivity to column permutations, one natural solution is to use an autoencoder to extract the features of tabular data and compress them into compact latent vectors. Such an advantage can apply to all kinds of tabular data synthesizers. We use two three-layer fully connected networks as $Enc$ and $Dec$. Based on our study of mainstream open-source implementations of autoencoders~\cite{liao_2020, zhao_2020, zhou_2020}, fully connected networks with 2-4 layers are common choices for autoencoders. Another important design choice is the length of the latent vector, i.e., the output size of $Enc$ and the input size of $Dec$. This determines the autoencoder's capacity to represent high-dimensional data. We choose this parameter based on the size of the input dataset. For datasets with a large number of columns, we increase the length of the latent vector to ensure that complex relations between columns can be well represented, therefore helping the generator to synthesize realistic data. \subsection{Generator and discriminator} The core of AE-GAN is a GAN, which has two competing networks, namely the generator, $G$, and the discriminator, $D$.
Figure~\ref{fig:ae_gan_architecture_g&d} shows their architectures. Both $G$ and $D$ have three fully-connected layers, which are more resilient to {permutations of elements in the (more compact) latent vector.} In the discriminator, the fully-connected layers are followed by leaky ReLU activation and the final output is the validity of the input. In the generator, the fully-connected layers are followed by batch normalization and leaky ReLU activation, and the final output is the synthetic latent vector. The design is based on the architecture of WGAN-GP~\cite{wgan_gp_github_2019}. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{images/ae-gan-components-generator-and-discriminator-drawio.png} \caption{Architectures of the discriminator and the generator in AE-GAN.} \label{fig:ae_gan_architecture_g&d} \end{figure} \subsection{Auxiliary classifier} To enhance the training of $G$ and thus the quality of the generated tabular data, we introduce an auxiliary classifier $C$ to the GAN. This design is inspired by~\cite{tablegan}, where an auxiliary classifier is added to maintain the semantic consistency of synthetic data. Note that the input to $C$ is the reconstructed data from $Dec$, rather than the latent data from $Enc$ and $G$, because we want $C$ to learn the semantic relations between columns directly. Specifically, given a categorical target column with several categories, $C$ learns to classify which category a sample belongs to according to the columns other than the target column. This is how $C$ differs from $D$: $D$ determines the ``realness'' of a sample based on all columns in the latent space, whereas $C$ learns the relationship between the target column and all other columns in the reconstructed space. By adding $C$ to the GAN, we combine the flexibility of unsupervised training with the control provided by supervised training, thereby improving the quality of the synthetic data.
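A minimal PyTorch sketch of the AE-GAN components described above is given below. The layer widths, latent length, noise dimension, and number of target classes are illustrative assumptions, not the paper's exact hyperparameters:

```python
import torch
import torch.nn as nn

DATA_DIM, LATENT_DIM, NOISE_DIM, N_CLASSES = 151, 64, 64, 2  # illustrative sizes

def fc(n_in, n_out):
    return nn.Sequential(nn.Linear(n_in, n_out), nn.LeakyReLU(0.2))

# Encoder/decoder: three-layer fully-connected networks.
Enc = nn.Sequential(fc(DATA_DIM, 128), fc(128, 128), nn.Linear(128, LATENT_DIM))
Dec = nn.Sequential(fc(LATENT_DIM, 128), fc(128, 128), nn.Linear(128, DATA_DIM))

# Generator: fully-connected layers with batch normalization and leaky ReLU,
# producing a synthetic latent vector.
G = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.BatchNorm1d(128), nn.LeakyReLU(0.2),
    nn.Linear(128, 128), nn.BatchNorm1d(128), nn.LeakyReLU(0.2),
    nn.Linear(128, LATENT_DIM))

# Discriminator: fully-connected layers with leaky ReLU; scalar validity output.
D = nn.Sequential(fc(LATENT_DIM, 128), fc(128, 128), nn.Linear(128, 1))

# Auxiliary classifier: predicts the target column from the *reconstructed*
# data (all columns except the target's one-hot block).
C = nn.Sequential(fc(DATA_DIM - N_CLASSES, 128), nn.Linear(128, N_CLASSES))

z = torch.randn(16, NOISE_DIM)
fake_latent = G(z)                 # synthetic latent vectors, (16, LATENT_DIM)
validity = D(fake_latent)          # (16, 1)
reconstructed = Dec(fake_latent)   # back to data space, (16, DATA_DIM)
```

The discriminator operates entirely in the latent space, while the classifier sees decoded samples, mirroring the split described in the text.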
\subsection{Loss functions} The training of AE-GAN requires four loss functions: autoencoder loss $\mathbb{L}_{AE}$, generator loss $\mathbb{L}_{G}$, discriminator loss $\mathbb{L}_{D}$, and classifier loss $\mathbb{L}_{C}$. \subsubsection{Autoencoder loss} The autoencoder loss is the reconstruction loss, i.e., the element-wise mean squared error between the input and the reconstructed output. It is defined as follows: \begin{equation} \mathbb{L}_{AE} = \mathbb{E}||x-\tilde{x}||_2^2, \end{equation} where $x$ and $\tilde{x}$ are the input and the reconstructed output. \subsubsection{Generator loss} The generator receives feedback from both the discriminator and the classifier. Therefore, its loss function is the sum of two terms: the discriminator feedback $\mathbb{L}_{G}^{D}$ and the classifier feedback $\mathbb{L}_{G}^{C}$. \begin{equation} \mathbb{L}_{G} = \mathbb{L}_{G}^{D} + \mathbb{L}_{G}^{C} \end{equation} The discriminator feedback is the validity of synthetic samples: \begin{equation} \mathbb{L}_{G}^{D} = - \mathbb{E}[D(G(z))], \end{equation} where $G(z)$ is the generator output and $D(G(z))$ is the discriminator output. The classifier feedback is the cross-entropy between the predicted value and the actual value of the target column: \begin{equation} \mathbb{L}_{G}^{C} = H(m,m'), \end{equation} where $m$ and $m'$ are the actual and predicted values of the target column, and $H(\cdot)$ is the cross-entropy operator. \subsubsection{Discriminator loss} The discriminator loss measures how well $D$ distinguishes real samples from synthetic samples. We use Wasserstein loss with gradient penalty to improve the training stability and alleviate the vanishing gradient problem of GANs~\cite{gulrajani2017improved}.
It is calculated by: \begin{equation} \mathbb{L}_{D} = -\mathbb{E}[D(x) - D(G(z)) - \lambda \cdot (||\nabla D(\hat{x})||_2-1)^2], \end{equation} where $D(x), D(G(z))$, and $D(\hat{x})$ are the discriminator outputs on real samples, synthetic samples, and the interpolates between real and synthetic samples, respectively. $\lambda$ is the gradient penalty coefficient, and $\nabla D(\hat{x})$ is the gradient of $D(\hat{x})$ with respect to $\hat{x}$. \subsubsection{Classifier loss} The classifier loss also has two parts: the loss on real samples $\mathbb{L}_{C}^{R}$ and the loss on synthetic samples $\mathbb{L}_{C}^{S}$. \begin{equation} \mathbb{L}_{C} = \mathbb{L}_{C}^{R} + \mathbb{L}_{C}^{S}. \end{equation} The calculation of $\mathbb{L}_{C}^{R}$ and $\mathbb{L}_{C}^{S}$ is similar to that of $\mathbb{L}_{G}^{C}$. \subsection{Training algorithm} {We test two training strategies: disjoint training and joint training. For disjoint training, we first train the AE until convergence and then train the GAN with the classifier while utilizing the compression power of the AE. For joint training, inspired by TimeGAN~\cite{yoon2019time} and the hypothesis of possible training synergies, we first pre-train the autoencoder for a certain number of epochs and then co-train it with the GAN and the classifier. Ablation tests, detailed in Section~\ref{ssec:ablation}, show that disjoint training achieves lower training losses. Hence, we use disjoint training for all results unless otherwise specified.} \subsection{Generative Adversarial Network} Generative adversarial networks (GANs)~\cite{goodfellow2014generative} are a widely used approach to synthetic data generation. A GAN consists of two components: a generator ($G$) that learns to produce realistic synthetic data, and a discriminator ($D$) that tries to distinguish real data from synthetic (fake) data. Both $G$ and $D$ are neural networks, e.g., Fully-Connected Networks (FCNs) or Convolutional Neural Networks (CNNs).
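The discriminator loss with gradient penalty defined above maps directly onto automatic differentiation. A minimal sketch, assuming latent vectors are 2-D tensors of shape `(batch, latent_dim)` and using the common default $\lambda = 10$:

```python
import torch

def gradient_penalty(D, real, fake, lam=10.0):
    """lam * E[(||grad_{x_hat} D(x_hat)||_2 - 1)^2] on random interpolates
    between real and synthetic samples."""
    alpha = torch.rand(real.size(0), 1, device=real.device)
    x_hat = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    d_hat = D(x_hat)
    grads = torch.autograd.grad(outputs=d_hat, inputs=x_hat,
                                grad_outputs=torch.ones_like(d_hat),
                                create_graph=True)[0]
    return lam * ((grads.norm(2, dim=1) - 1) ** 2).mean()

def discriminator_loss(D, real, fake):
    # L_D = E[D(G(z))] - E[D(x)] + gradient penalty, matching the equation
    # above after distributing the leading minus sign.
    return D(fake).mean() - D(real).mean() + gradient_penalty(D, real, fake)
```

Note that `create_graph=True` is required so the penalty term itself remains differentiable when the discriminator is updated.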
In the training process, $G$ and $D$ play an adversarial game described as follows: \begin{equation} \begin{split} \underset{G}{min}\ \underset{D}{max}\ V(G,D) = & \mathbb{E}[log D(x)]_{x \sim p_{data}(x)} \\ & + \mathbb{E}[log(1-D(G(z)))]_{z \sim p(z)}, \end{split} \end{equation} where $x$ is the real sample, $z$ is the random input signal given to $G$, $G(z)$ is the synthetic sample, and $D(\cdot)$ is the probability of a sample being real from the perspective of $D$. The goal of $G$ is to minimize the chance that its generated samples are identified as synthetic, whereas $D$ maximizes the chance of correctly distinguishing real and synthetic samples. \subsection{Autoencoder} An autoencoder (AE) is an unsupervised learning algorithm that learns a mapping from high-dimensional inputs to low-dimensional representations~\cite{tschannen2018recent, ng2011sparse}, namely latent vectors. It consists of two models, an encoder ($Enc$) and a decoder ($Dec$). $Enc$ takes a high-dimensional input and compresses it to a latent vector, and $Dec$ uses the latent vector to reconstruct the original input. $Enc$ and $Dec$ are trained as a whole and penalized for creating output that deviates from the input. The loss function is defined as follows: \begin{equation} \underset{\theta, \phi}{min} \ L(\theta, \phi) = \frac{1}{N} \sum_{i=1}^{N} ||x_i - Dec_{\theta}(Enc_{\phi}(x_i))||_{2}^{2}, \end{equation} where $\theta$ and $\phi$ are the parameters of $Dec$ and $Enc$, $x_{i}$ is a high-dimensional input, $Enc_{\phi}(x_i)$ is the latent vector, $Dec_{\theta}(Enc_{\phi}(x_i))$ is the reconstructed input, and $N$ is the total number of samples. \subsection{Experimental Setup} \subsubsection{Datasets} We use five tabular datasets that are common in the machine learning community. Table~\ref{tab: dataset_overview} summarizes their main statistics.
The Loan dataset\footnote{https://www.kaggle.com/code/pritech/bank-personal-loan-modelling/data} contains the demographic information about bank customers and their response to a personal loan campaign. The Adult dataset\footnote{https://archive.ics.uci.edu/ml/datasets/Adult} contains census data and is used to predict whether the income of an adult exceeds \$50k/year. The Credit dataset\footnote{https://www.kaggle.com/datasets/mlg-ulb/creditcardfraud} consists of anonymized credit card transactions labeled as fraudulent or genuine. The Intrusion dataset\footnote{https://archive.ics.uci.edu/ml/datasets/Unmanned+Aerial+Vehicle+\%28U\\AV\%29+Intrusion+Detection} contains encrypted WiFi traffic records and classifies whether a record is from an unmanned aerial vehicle. The Covtype dataset\footnote{https://archive.ics.uci.edu/ml/datasets/Covertype} contains the cover type of forests and the related geographical information. Every dataset has a target column for classification tasks. Due to limited computational resources, we randomly select 50k samples from the Credit, Intrusion, and Covtype datasets. \begin{table}[t] \centering \caption{Statistics of datasets} \begin{tabular}{@{}ccccc@{}} \toprule Datasets & \begin{tabular}[c]{@{}c@{}}\# Continuous\\ columns\end{tabular} & \begin{tabular}[c]{@{}c@{}}\# Categorical \\ columns\end{tabular} & \begin{tabular}[c]{@{}c@{}}\# Columns\\ after encoding\end{tabular} & \# Samples \\ \midrule Loan & 5 & 8 & 55 & 5k \\ Adult & 5 & 9 & 151 & 48k \\ Credit & 30 & 1 & 301 & 50k \\ Covtype & 10 & 45 & 205 & 50k \\ Intrusion & 22 & 20 & 322 & 50k \\ \bottomrule \end{tabular} \label{tab: dataset_overview} \end{table} \subsubsection{Baselines} Four state-of-the-art tabular data synthesizers are selected as the baseline models, namely table-GAN~\cite{tablegan}, CTGAN~\cite{ctgan&tvae}, TVAE~\cite{ctgan&tvae}, and CTAB-GAN~\cite{zhao2021ctab}.
We use the same hyperparameters as the original papers, and every experiment is repeated three times to obtain reliable results. \subsubsection{Computational Environment} We implemented our proposed solutions using PyTorch on a server equipped with an Intel(R) Core(TM) i9-10900KF CPU @3.70GHz and a GeForce RTX 2080 Ti GPU. \subsection{Evaluation Metrics} Our evaluation of tabular data synthesizers focuses on the statistical difference and the machine learning utility difference between real and synthetic data. Two metrics quantify the statistical difference. {\bf Wasserstein-1 Distance (WD)} measures the difference between two continuous/discrete 1-dimensional distributions. We use this metric to compare the per-feature difference between real and synthetic data. {\bf Difference in Correlation Matrix (Dif. Corr.)} measures how well the cross-column correlations\footnote{Note that we use ``correlation'' as a general term. For two numerical features, it refers to the Pearson correlation coefficient; for two categorical features, it is their Cramer's V; and for a categorical feature and a numerical feature, it means their correlation ratio.} are captured by a tabular data synthesizer. We calculate the difference between the correlation matrices of the real and synthetic table as follows: \begin{equation} Dif. \ Corr. = \sqrt{\sum_{i,j}(Corr^{R}_{i,j} - Corr^{F}_{i,j})^2}, \end{equation} where $Corr^{R}_{i,j}$ and $Corr^{F}_{i,j}$ are the correlation coefficients between features $i$ and $j$ in the real and synthetic correlation matrices. We measure machine learning utility as the performance difference of machine learning models trained on the real and synthetic data. Specifically, we first train four machine learning models with real and synthetic data separately. Then we obtain their average prediction accuracy and compute the difference. The difference is small if the synthetic data has high machine learning utility.
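The Dif. Corr. metric above is simply a Frobenius norm of the difference of two correlation matrices. The sketch below uses Pearson correlation only (for numerical columns), whereas the full metric mixes Pearson, Cramer's V, and the correlation ratio depending on column types:

```python
import numpy as np
import pandas as pd

def dif_corr(real: pd.DataFrame, fake: pd.DataFrame) -> float:
    """Frobenius norm of the difference between the real and synthetic
    correlation matrices (numerical columns only in this simplified sketch)."""
    corr_r = real.corr().to_numpy()
    corr_f = fake.corr().to_numpy()
    return float(np.sqrt(((corr_r - corr_f) ** 2).sum()))
```

Identical tables give a Dif. Corr. of zero; inverting a perfect correlation between two columns (from $+1$ to $-1$) yields $\sqrt{8}$, since both off-diagonal entries change by 2.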
\subsection{AE-GAN} AE-GAN is evaluated on four aspects: column permutation invariance, statistical similarity and machine learning utility of the synthetic data, and training time. We aim to verify if AE-GAN is robust to column permutations and achieves good synthesis quality compared with the state-of-the-art. Additionally, we evaluate the scalability of AE-GAN by analyzing its training time. Table~\ref{tab:all_result_in_one} summarizes the results averaged over five datasets. \begin{table}[t] \centering \caption{AE-GAN evaluation results against the state-of-the-art. For all metrics, a lower value is better.} \resizebox{0.5\textwidth}{!}{% \begin{tabular}{@{}cccccc@{}} \toprule \multirow{2}{*}{Model} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Sensitivity to \\ permutations\end{tabular}} & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Stat. diff. \end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}ML utility\\ diff. \end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Training time \\ (mins)\end{tabular}} \\ \cmidrule(lr){3-4} & & WD & Dif. Corr. & & \\ \midrule table-GAN & \textbf{6.82\%} & 4.481 & 3.651 & 21.14\% & \textbf{1.31} \\ CTAB-GAN & 38.67\% & \textbf{1.039} & \textbf{1.905} & \textbf{9.11\%} & 63.70 \\ CTGAN & 18.62\% & 1.857 & 3.079 & 14.99\% & 10.47 \\ TVAE & 17.29\% & 1.723 & 2.848 & 12.84\% & 7.00 \\ \midrule AE-GAN & 11.71\% & 2.699 & 2.331 & 9.98\% & 9.89 \\ \bottomrule \end{tabular} } \label{tab:all_result_in_one} \end{table} \textbf{Column Permutation Invariance}. Similar to our empirical analysis, we arrange the training data in three different orders, i.e., original order, order by type, and order by correlation, and test the performance of AE-GAN. The second column of Table \ref{tab:all_result_in_one} shows the sensitivity to column permutations of the baseline models and AE-GAN, averaged over five datasets. We find AE-GAN ranks second in permutation invariance among the five models.
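The sensitivity values follow from the per-order WD measurements. One formula consistent with the numbers reported in the per-model tables later in the paper is the maximum relative WD change across the three column orders:

```python
def permutation_sensitivity(wd_per_order):
    """Maximum relative change in WD across column orders: (max - min) / min.
    This formula is our reading of the 'Max diff.' columns; it reproduces
    the reported percentages."""
    return (max(wd_per_order) - min(wd_per_order)) / min(wd_per_order)

# table-GAN on the Adult dataset (original order, by type, by correlation)
sens = permutation_sensitivity([12.153, 12.563, 11.512])
print(f"{100 * sens:.2f}%")  # 9.13%, matching the table's "Max diff." column
```

The same formula reproduces, e.g., the 0.93\% value for table-GAN on the Loan dataset.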
Table-GAN is the most permutation-invariant model because it does not have the sparsity issue caused by one-hot encoding and mode-specific normalization. In contrast, CTAB-GAN, TVAE, CTGAN, and AE-GAN adopt one-hot encoding for categorical features and mode-specific normalization for numerical features and thus have sparse input, leading to higher sensitivity. However, since AE-GAN has an autoencoder to compress the input, it is more robust to column permutations than CTAB-GAN, TVAE, and CTGAN. \textbf{Synthesis Quality Comparison}. We evaluate the quality of the synthesized data with two metrics: the statistical difference and the ML utility difference between real and synthetic data. Table~\ref{tab:all_result_in_one} shows that CTAB-GAN is the best model in synthesis quality because its synthetic data have the lowest statistical difference and ML utility difference compared with real data. Table-GAN is the worst among all models. Among the remaining three models, AE-GAN is better than CTGAN and TVAE on Dif. Corr., but worse on WD. However, the ML utility of AE-GAN is better than that of CTGAN and TVAE. The results show that the autoencoder of AE-GAN helps preserve the correlation between different features, but for each feature, the statistical difference between real and synthetic data may increase due to the information loss caused by data compression. {Future work may look into optimizing the trade-off between the performance boost for the GAN and the compression loss caused by compact representations.} \textbf{Training Time Analysis}. A model with a short training time can scale up to large datasets. It also requires fewer hardware resources than slow models given the same input. We compare the scalability of AE-GAN with the baseline models by analyzing their total training time. Table~\ref{tab:all_result_in_one} shows that AE-GAN is faster than CTGAN and CTAB-GAN, but slower than table-GAN and TVAE.
Table-GAN has the shortest training time because of its simple data representation, which leads to a small input size. TVAE is faster than AE-GAN because AE-GAN has an auxiliary classifier. Nonetheless, AE-GAN is significantly faster than CTAB-GAN, with a speedup of up to 6 times. \textbf{Summary}. AE-GAN achieves the best trade-off between permutation invariance, synthesis quality, and training time compared with state-of-the-art tabular data synthesizers. It is more permutation-invariant than CTAB-GAN, CTGAN, and TVAE, and it has better synthesis quality than table-GAN, CTGAN, and TVAE in terms of ML utility. Although table-GAN is less sensitive to column permutations and takes less time to train than AE-GAN, its synthesis quality is much worse. In a similar vein, although CTAB-GAN is better than AE-GAN in synthesis quality, it is much slower and more sensitive to column permutations. Compared with CTGAN and TVAE, AE-GAN is more permutation-invariant, has better ML utility, and takes a similar time to train. \subsection{Ablation Study} \label{ssec:ablation} We conduct an ablation study to understand the influence of the design choices we made with AE-GAN. We change the data representations, model architecture, and training algorithm to test their effects. Table~\ref{tab: ablation study} summarizes the results of the ablation study.
\begin{table*}[htp] \centering \caption{Ablation study results on synthesis quality and permutation invariance of AE-GAN} \label{tab: ablation study} \resizebox{\textwidth}{!}{% \begin{tabular}{@{}ccccccccc@{}} \toprule \multirow{2}{*}{Dataset} & \multicolumn{5}{c}{WD between real and synthetic data} & \multicolumn{3}{c}{Sensitivity to column permutations} \\ \cmidrule(l{4pt}r{4pt}){2-6} \cmidrule(l{4pt}r{4pt}){7-9} & AE-GAN & w/o MSN & w/o one-hot \& MSN & w/o classifier & co-train AE \& GAN & AE-GAN & w/o MSN & w/o one-hot \& MSN \\ \midrule Loan & 1.374 & 2.309 & 3.749 & 1.308 & 1.880 & 7.28\% & 3.29\% & 4.45\% \\ Adult & 6.042 & 6.319 & 15.432 & 6.293 & 6.508 & 21.77\% & 39.20\% & 40.65\% \\ Credit & 0.341 & 1.650 & 1.505 & 0.344 & 0.534 & 9.09\% & 3.19\% & 2.70\% \\ Covtype & 1.408 & 3.099 & 5.954 & 1.428 & 2.437 & 10.16\% & 5.48\% & 0.79\% \\ Intrusion & 4.328 & 14.915 & 58.241 & 4.492 & 4.836 & 10.28\% & 26.84\% & 2.76\% \\ \midrule Avg. & 2.699 & 5.658 & 16.976 & 2.773 & 3.239 & 11.71\% & 15.60\% & 10.27\% \\ \bottomrule \end{tabular}} \end{table*} \textbf{Without Mode-Specific Normalization (MSN)}. We use mode-specific normalization in AE-GAN to normalize numerical features. Although it preserves the multi-modal distribution of numerical features, it increases the sparsity of the training data. We replace it with min-max normalization to understand its effect. Table~\ref{tab: ablation study} shows that after removing mode-specific normalization, the WD becomes worse on all datasets, meaning that mode-specific normalization improves the synthesis quality. However, it also makes AE-GAN more sensitive to column permutations: after removing it, the sensitivity to column permutations decreases on the Loan, Credit, and Covtype datasets. In conclusion, mode-specific normalization increases sensitivity to column permutations, but it improves synthesis quality. \textbf{Without One-hot and Mode-Specific Normalization}.
To further reduce sparsity in the input data, we remove one-hot encoding and mode-specific normalization together. Similar to table-GAN, we pre-process categorical and numerical features using min-max normalization. We find that the WD is worse than when only removing mode-specific normalization, which confirms that one-hot encoding enhances synthesis quality. Moreover, the sensitivity to column permutations decreases on all datasets except the Adult dataset after removing one-hot encoding and mode-specific normalization, especially on datasets with a high proportion of categorical features such as the Covtype and Intrusion datasets. The results verify again that reducing sparsity can enhance permutation invariance. \textbf{Without Auxiliary Classifier}. We use an auxiliary classifier to improve the synthesis quality of AE-GAN. After removing the auxiliary classifier, the Wasserstein distance between real and synthetic data worsens on all datasets except the Loan dataset. Overall, the average WD on five datasets increases from $2.699$ to $2.773$ after removing the auxiliary classifier, showing that the classifier improves synthesis quality. \textbf{Co-training AE and GAN}. The AE and GAN in AE-GAN are trained separately. To study whether co-training the AE and GAN can improve the synthesis quality, we first pre-train the AE for 300 epochs and then train it together with the GAN. Surprisingly, the results show that co-training makes the synthesis quality worse. We find that the training loss of the AE is already low after pre-training. However, during co-training, the feedback from the GAN increases the AE's loss and makes it unstable. \subsection{Feature sorting algorithm} We evaluate the proposed feature sorting algorithm on table-GAN and CTAB-GAN, two CNN-based tabular data synthesizers, because this algorithm is designed to alleviate the limitations of CNNs as explained in Section~\ref{feature_sorting_algorithm}. \textbf{Table-GAN}.
Table~\ref{tab: tablegan-feature-sorting} shows the effect of the feature sorting algorithm on table-GAN. A negative change in Dif. Corr. or WD means the difference between synthetic and real data becomes smaller. That is, the feature sorting algorithm helps tabular data synthesizers generate more realistic data. The results show that the feature sorting algorithm works best on the Credit dataset, where Dif. Corr. and WD are decreased by 12\% and 4\%. It also improves the results on the Intrusion dataset, where WD is reduced by 16\%, whereas Dif. Corr. slightly increases by 3\%. However, it has little influence on the Loan and Covtype datasets, where Dif. Corr. and WD change by less than 5\%. Moreover, the results on the Adult dataset become worse, with Dif. Corr. and WD increasing by 24\% and 3\%. The algorithm performs best on the Credit dataset because of the simple correlations between its features. Using $\pm 0.2$ as the threshold for high correlation, only \textit{Time} and \textit{Amount} are strongly correlated with other features. All other features have a close-to-0 correlation. Besides, \textit{Time} and \textit{Amount} are only correlated with 3 and 5 features, respectively. With such a small number of correlated features, capturing their relation in the convolution process is easy once we group them together. The algorithm also alleviates the CNN boundary effect on the highly-correlated features of the Credit dataset. In the original order, \textit{Time} and \textit{Amount} are the leftmost and rightmost columns in the table, and many of their correlated features are far apart. After feature sorting, however, these features are in the middle of the table, thereby reducing the boundary effect. In contrast to the Credit dataset, the other four datasets have a larger number of correlated features.
For example, in the Adult dataset, most features are correlated with at least one other feature, and seven features are correlated with more than three features. Due to the limited kernel size, it is challenging for CNNs to capture all the cross-column relations even after putting the highly-correlated features together. Besides, our algorithm is based on pairwise correlation, but putting one pair of highly-correlated features together could separate another pair of highly-correlated features, which explains why Dif. Corr. and WD sometimes become worse after applying the feature sorting algorithm. In this case, domain knowledge is required to effectively group the correlated features and arrange them in a good order. \begin{table}[t] \centering \caption{Table-GAN before and after the feature sorting algorithm} \label{tab: tablegan-feature-sorting} \begin{tabular}{@{}ccccccc@{}} \toprule \multirow{2}{*}{Dataset} & \multicolumn{2}{c}{Before sorting} & \multicolumn{2}{c}{After sorting} & \multicolumn{2}{c}{Change} \\ \cmidrule(l{4pt}r{4pt}){2-3} \cmidrule(l{4pt}r{4pt}){4-5} \cmidrule(l{4pt}r{4pt}){6-7} & Dif. Corr. & WD & Dif. Corr. & WD & Dif. Corr. & WD \\ \midrule Loan & 2.284 & 2.062 & 2.203 & 2.087 & -4\% & 1\% \\ Adult & 1.563 & 12.153 & 1.942 & 12.502 & 24\% & 3\% \\ Credit & 3.092 & 0.420 & 2.728 & 0.403 & -12\% & -4\% \\ Covtype & 4.885 & 1.282 & 4.915 & 1.348 & 1\% & 5\% \\ Intrusion & 6.433 & 6.486 & 6.597 & 5.418 & 3\% & -16\% \\ \midrule Avg. & 3.651 & 4.481 & 3.677 & 4.352 & 1\% & -3\% \\ \bottomrule \end{tabular} \end{table} \textbf{CTAB-GAN}. To understand whether our feature sorting algorithm works when sparsity is involved, we test it on CTAB-GAN; the results are summarized in Table \ref{tab: ctabgan-feature-sorting}. Surprisingly, the algorithm can reduce Dif. Corr. and WD by more than 10\% on all datasets except the Credit dataset. On the Loan dataset, the Dif. Corr.
and WD are decreased by 57\% and 29\% after feature sorting, meaning that the algorithm can effectively improve the statistical similarity between synthetic and real data. Compared with table-GAN, CTAB-GAN gains more from feature sorting. This is due to the sparsity issue caused by the encoding methods of CTAB-GAN, i.e., mode-specific normalization for numerical features and one-hot encoding for categorical features. Since the input data are sparse after encoding, putting the highly-correlated columns together can drastically reduce the distance between correlated columns, thereby improving CTAB-GAN's ability to capture the relation between highly-correlated columns. To summarize, our feature sorting algorithm can improve the performance of CNN-based table synthesizers, especially when the input tabular data are sparse. For dense tabular data, it also works if the relation between correlated features is relatively simple. \begin{table}[t] \centering \caption{CTAB-GAN before and after the feature sorting algorithm} \label{tab: ctabgan-feature-sorting} \begin{tabular}{@{}ccccccc@{}} \toprule \multirow{2}{*}{Dataset} & \multicolumn{2}{c}{Before sorting} & \multicolumn{2}{c}{After sorting} & \multicolumn{2}{c}{Change} \\ \cmidrule(l{4pt}r{4pt}){2-3} \cmidrule(l{4pt}r{4pt}){4-5} \cmidrule(l{4pt}r{4pt}){6-7} & {Dif. Corr.} & {WD} & {Dif. Corr.} & {WD} & {Dif. Corr.} & {WD} \\ \midrule Loan & 1.469 & 0.356 & 0.638 & 0.253 & -57\% & -29\% \\ Adult & 0.448 & 1.517 & 0.296 & 1.205 & -34\% & -21\% \\ Credit & 1.688 & 0.115 & 1.660 & 0.134 & -2\% & 17\% \\ Covtype & 1.948 & 0.539 & 1.442 & 0.475 & -26\% & -12\% \\ Intrusion & 3.969 & 2.668 & 3.385 & 1.999 & -15\% & -25\% \\ \midrule Avg.
& 1.904 & 1.039 & 1.484 & 0.813 & -22\% & -22\% \\ \bottomrule \end{tabular} \end{table} \subsection{Tabular data synthesizers} We focus on deep-learning approaches for tabular data synthesis and skip the discussion of classical methods such as Copulas~\cite{patki2016synthetic, li2020sync} and Bayesian Networks~\cite{zhang2017privbayes}. Table~\ref{tab: model_overview} summarizes the recently developed deep learning methods for tabular data synthesis in terms of models, network architecture, and datasets. MedGAN~\cite{choi2017generating} is designed for aggregated electronic health records (EHRs), which only have count and binary features. Since EHRs are high-dimensional and sparse~\cite{baowaly2019synthesizing}, medGAN uses a pre-trained autoencoder to learn compact representations of the input data and thereby simplifies the GAN's task. MedGAN is improved by~\cite{baowaly2019synthesizing}, where the standard GAN loss is replaced by Wasserstein loss with gradient penalty, and the new model is named medWGAN. However, {different from AE-GAN,} medGAN and medWGAN are limited in generalizing to real-world scenarios because they only consider count and binary features. A few recent tabular data synthesizers are suitable for general data types, including table-GAN~\cite{tablegan}, CTGAN~\cite{ctgan&tvae}, TVAE~\cite{ctgan&tvae}, and CTAB-GAN~\cite{zhao2021ctab}. CTGAN, TVAE, and CTAB-GAN use Variational Gaussian Mixture (VGM) to encode numerical features and one-hot encoding for categorical features. Moreover, CTAB-GAN defines the \textit{mixed} datatype and proposes a new encoding method. In addition, CTGAN, TVAE, and CTAB-GAN adopt the training-by-sampling technique to handle highly-imbalanced distributions. Despite their effectiveness in tabular data synthesis, these models overlook and do not abide by the key property of column permutation invariance. 
\begin{table}[t] \centering \caption{Deep learning methods for tabular data synthesis} \label{tab: model_overview} \resizebox{\columnwidth}{!}{% \begin{tabular}{@{}cccc@{}} \toprule Method & Model design & Network & Data \\ \midrule medGAN~\cite{choi2017generating} & AE + GAN & FCN & Medical records \\ table-GAN~\cite{tablegan} & DCGAN + Classifier & CNN & General \\ medWGAN~\cite{baowaly2019synthesizing} & AE + WGAN-GP & FCN & Medical records \\ CTGAN~\cite{ctgan&tvae} & Conditional WGAN-GP & FCN & General \\ TVAE~\cite{ctgan&tvae} & Conditional VAE & FCN & General \\ CTAB-GAN~\cite{zhao2021ctab} & Conditional DCGAN + Classifier & CNN, FCN & General \\ \bottomrule \end{tabular} } \end{table} \subsection{Column permutation invariance} \label{define_column_permutation_invariance} In computer vision, similar concepts have been investigated, including \textit{permutation invariance}~\cite{lee2019set, cohen2020regularizing}, \textit{translation invariance}~\cite{kayhan2020translation, kauderer2017quantifying, furukawa2017deep}, and \textit{translation equivalence}~\cite{weiler2018learning}. Permutation invariance means that the output of a neural network stays the same despite permutations of its input. For example, the classification of an image should not change after adjusting the object location in the image. Motivated by this, we define column permutation invariance in tabular data synthesis as follows: the performance of a tabular data synthesizer should not be affected by permutations of the input column order. To the best of our knowledge, column permutation invariance has not been studied in prior work on tabular data synthesis.
\subsection{Pitfall of CNNs for tabular data synthesis} \label{cnn_not_a_natural_fit} \begin{table}[t] \centering \caption{Table-GAN experiment results: Wasserstein-1 distance between real and synthetic data} \begin{tabular}{@{}ccccc@{}} \toprule \multirow{2}{*}{Dataset} & \multicolumn{3}{c}{Column order} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Max diff. \\ (\%)\end{tabular}} \\ \cmidrule(lr){2-4} & Original order & Order by type & Order by corr. & \\ \midrule Loan & 2.062 & \textbf{2.047} & 2.066 & \textbf{0.93\%} \\ Adult & 12.153 & 12.563 & \textbf{11.512} & 9.13\% \\ Credit & 0.420 & 0.410 & \textbf{0.403} & 4.22\% \\ Covtype & \textbf{1.282} & 1.284 & 1.345 & 4.91\% \\ Intrusion & 6.486 & 5.896 & \textbf{5.645} & 14.90\% \\ \midrule Avg. & 4.481 & 4.440 & \textbf{4.194} & 6.82\% \\ \bottomrule \end{tabular} \label{tab:tablegan_sensitivity} \end{table} Initially designed for images, CNNs use a set of convolution kernels to slide over the input feature space, abstract high-dimensional features, and then aggregate them into knowledge about the input. Due to the limited kernel size, CNNs only learn local relations between neighboring features and fall short of capturing global dependencies. This focus on local relations hinders high-quality tabular data synthesis. In contrast to image data, tabular data do not necessarily have strong local relations. Highly-correlated features can be very far apart, and their dependencies can be complex and irregular~\cite{zhu2021converting}. These characteristics make modeling tabular data extra challenging for CNNs~\cite{katzir2021netdnf, pmlr-v97-rahaman19a}, despite their remarkable performance in many machine learning tasks~\cite{sultana2018advancements}. Since CNNs capture mainly local relations, CNN-based tabular data synthesizers are sensitive to column permutations. We use table-GAN to verify this assumption.
We test it with five datasets arranged using three column orders, namely the original order, order by type, and order by correlation. Order by type means putting all continuous columns on the left of the table and all categorical columns on the right. Order by correlation means placing highly-correlated columns on the left and weakly-correlated columns on the right. For each order, we train the model separately and calculate the Wasserstein-1 distance (WD) between real and synthetic data. Every experiment is repeated 5 times. Table~\ref{tab:tablegan_sensitivity} summarizes our results. The best, i.e., lowest, distance values are highlighted in bold. The last column shows the maximum WD change in percent across all three column permutations. The results show that table-GAN is most sensitive on the Intrusion dataset with a maximum difference in WD of 14.90\%. \subsection{Sparsity vs. sensitivity} Efficient representation of categorical features is one of the main challenges in tabular data synthesis. In the state-of-the-art, table-GAN uses label encoding to transform categorical features into numerical ones and normalizes them with min-max normalization. This method often leads to sub-optimal performance due to the artificial order imposed on categorical features~\cite{hancock2020survey}. In contrast, CTGAN, CTAB-GAN, and TVAE use one-hot encoding to represent categorical features. Despite its simplicity and effectiveness, one-hot encoding introduces many zeros to the input data and thus increases sparsity. Representing numerical features in tabular data is relatively straightforward. The most common method is mapping them into $[-1, 1]$ with min-max normalization. However, the authors of CTGAN, TVAE, and CTAB-GAN adopt \textit{mode-specific normalization}, which uses a Variational Gaussian Mixture to represent multi-modal numerical features.
Although this method improves the quality of the synthetic data, we find it leads to sparse input because one-hot encoding is used to represent the multiple modes. Our experiments show that sparse tabular data causes sensitivity to column permutations. We compare CTAB-GAN, a model using one-hot encoding and mode-specific normalization, with table-GAN, a model using label encoding and min-max normalization. Note that label encoding and min-max normalization do not change the dimensionality of the input data, whereas one-hot encoding and mode-specific normalization increase sparsity. Table~\ref{tab:ctabgan_sensitivity} shows our experimental results for CTAB-GAN. Compared to table-GAN (see Table~\ref{tab:tablegan_sensitivity}), CTAB-GAN {synthesizes more realistic data (shown by a lower WD, roughly a quarter of table-GAN's), but with higher} sensitivity to column permutations on all datasets. The average maximum change in WD across the five datasets is 38.67\%, whereas for table-GAN it is 6.82\%. \begin{table}[t] \centering \caption{CTAB-GAN experiment results: Wasserstein-1 distance between real and synthetic data} \begin{tabular}{@{}ccccc@{}} \toprule \multirow{2}{*}{Dataset} & \multicolumn{3}{c}{Column order} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Max diff. \\ (\%)\end{tabular}} \\ \cmidrule(lr){2-4} & Original order & Order by type & Order by corr. & \\ \midrule Loan & 0.356 & 0.283 & \textbf{0.216} & 64.81\% \\ Adult & 1.517 & \textbf{0.934} & 1.203 & 62.42\% \\ Credit & \textbf{0.115} & 0.144 & 0.137 & 25.22\% \\ Covtype & 0.539 & \textbf{0.514} & 0.583 & \textbf{13.42\%} \\ Intrusion & \textbf{2.668} & 3.401 & 2.831 & 27.47\% \\ \midrule Avg. & 1.039 & 1.055 & \textbf{0.994} & 38.67\% \\ \bottomrule \end{tabular} \label{tab:ctabgan_sensitivity} \end{table} To further highlight the sparsity of the encoded data, we visualize the input of table-GAN and CTAB-GAN in Figure~\ref{fig: visualize_input_to_tablegan&ctabgan}.
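The dimensionality blow-up caused by one-hot encoding and mode-specific normalization can be quantified directly. The following sketch uses a toy schema of our own choosing (the column cardinalities and mode counts are illustrative assumptions, not the actual Adult encoding):

```python
# Toy illustration (hypothetical schema, not the real Adult dataset):
# six categorical columns with the given cardinalities and eight
# continuous columns, each modeled with four Gaussian modes.
cat_cardinalities = [9, 16, 7, 15, 6, 5]
n_continuous = 8
modes_per_continuous = 4

# Label encoding + min-max normalization: one number per column.
label_width = len(cat_cardinalities) + n_continuous

# One-hot encoding + mode-specific normalization: each categorical
# column becomes a one-hot vector; each continuous column becomes one
# scalar plus a one-hot mode indicator.
onehot_width = sum(cat_cardinalities) + n_continuous * (1 + modes_per_continuous)

# Per row, a one-hot vector of length k contributes k - 1 zeros, and a
# mode indicator of length m contributes m - 1 zeros.
zeros = sum(k - 1 for k in cat_cardinalities) \
    + n_continuous * (modes_per_continuous - 1)
sparsity = zeros / onehot_width

print(label_width, onehot_width, round(sparsity, 2))  # 14 98 0.78
```

Even this small hypothetical schema is already about 78\% zeros per row, illustrating how quickly the encoded input becomes sparse as cardinalities and mode counts grow.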
We note that the sparsity is determined by the sum of all levels of all variables, i.e., the number of modes per continuous variable and the number of discrete levels per discrete variable. We reshape each row of a table into a square matrix to make it compatible with CNNs. In table-GAN, one row of the Adult dataset is represented by a $4 \times 4$ matrix, but in CTAB-GAN a $24 \times 24$ matrix is needed due to one-hot encoding and mode-specific normalization, which increase the number of zeros, i.e., the purple matrix cells in the figure, which account for roughly 97\% of the area. This increase in sparsity makes CTAB-GAN more sensitive to column permutations than table-GAN, since the average distance between related columns (pixels) is increased. \begin{figure}[t] \centering \includegraphics[width=.5\linewidth]{images/tablegan/adult_real_corr_tablegan.png}\hfill \includegraphics[width=.5\linewidth]{images/ctabgan/adult_real_corr_ctabgan.png} \caption{Visualization of the encoded input to table-GAN (left) and to CTAB-GAN (right) using the Adult dataset.} \label{fig: visualize_input_to_tablegan&ctabgan} \end{figure} \subsection{Column permutation invariance of FCNs} Theoretically, fully-connected networks (FCNs) should be robust to column permutations because every input feature is connected to every unit of the first layer. However, we find that FCNs are not fully permutation invariant. Tests with CTGAN and TVAE, two FCN-based tabular data synthesizers, on five datasets show an average WD of $1.87$ with an average maximum change of 18.62\% for CTGAN, and $1.80$ with 14.89\% for TVAE (details skipped due to space constraints).
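The Wasserstein-1 distances reported in this section can be estimated per column from equally sized real and synthetic samples: for one-dimensional empirical distributions of the same size, $W_1$ reduces to the mean absolute difference between the sorted samples. A minimal sketch (the helper names are ours, not those of any of the evaluated synthesizers):

```python
def wasserstein_1d(real, synth):
    """W1 between two equal-size 1-D samples: mean gap after sorting."""
    assert len(real) == len(synth)
    return sum(abs(a - b) for a, b in zip(sorted(real), sorted(synth))) / len(real)

def avg_column_wd(real_cols, synth_cols):
    """Average per-column W1 over a table given as a list of columns."""
    return sum(wasserstein_1d(r, s)
               for r, s in zip(real_cols, synth_cols)) / len(real_cols)

real = [[0.0, 1.0, 2.0], [10.0, 10.0, 10.0]]
synth = [[0.5, 1.5, 2.5], [10.0, 11.0, 12.0]]
print(avg_column_wd(real, synth))  # column WDs are 0.5 and 1.0 -> 0.75
```

In practice one would normalize columns to a common scale before averaging, so that no single wide-range column dominates the aggregate distance.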
{Overall, state-of-the-art tabular data synthesizers either provide high-quality synthetic data or are resilient to column permutations, but struggle to achieve both.} \section{Introduction} \input{chapters/introduction} \section{Background and Related Work} \input{chapters/background} \input{chapters/related_work} \section{Empirical analysis} \input{chapters/empirical_analysis} \section{AE-GAN} \input{chapters/ae-gan} \section{Feature Sorting Algorithm} \input{chapters/feature_sorting_v2} \section{Experimental evaluation} \input{chapters/experimental_evaluation} \section{Conclusion} \input{chapters/conclusion} \bibliographystyle{ieeetr}
\section{Introduction} The notion of sparsification is ubiquitous in applied mathematics and combinatorial optimization is no exception. For instance, shortest paths to a fixed root vertex in a graph $G=(V,E)$ are usually stored as a tree directed towards the root. Another classical application is that of Gomory-Hu (GH) Trees \cite{gomory1961multi} which encode all of the minimum cuts of an edge-capacitated undirected graph $G=(V,E)$, with capacities $c: E \rightarrow \mathbb{R}^+$. For each $s,t \in V$, we denote by $\lambda(s,t)$ the capacity of a minimum cut separating $s$ and $t$. Equivalently $\lambda(s,t)$ is the maximum flow that can be sent between $s,t$ in $G$ with the given edge capacities. Gomory and Hu showed that one may encode the $O(n^2)$ minimum cuts by a tree on $V$. A \emph{spanning edge-capacitated tree} for $G$ is a spanning tree $T = (V,E')$ together with a capacity function $c' : E' \rightarrow \mathbb{R}^+$. Any edge $e \in E'$ induces a \emph{fundamental cut} $G(A,B)$, where $A$ and $B$ are the vertex sets of the two components of $T \setminus e$. Here we use $G(A,B)$, or occasionally $\delta(A)=\delta(B)$, to denote the associated cut in $G$, that is, $G(A,B) = \{e \in E(G) : \mbox{$e$ has one endpoint in $A$ and the other in $B$}\}$. \begin{definition} \label{defn:encoding} Let $T$ be a spanning edge-capacitated tree. An edge $e=ab \in E(T)$ is \emph{encoding} if its fundamental cut $G(A,B)$ is a minimum $ab$-cut and its capacity is $c'(e)$, that is, $c(G(A,B))=c'(e)$. \end{definition} A \emph{Gomory-Hu tree} (GH tree for concision) is a spanning edge-capacitated tree whose edges are all encoding. In this case, it is an exercise to prove that any minimum cut can be found as follows. For $s,t \in V$ we have that $\lambda(s,t)=\min\{ c'(e): e \in T(st) \}$, where $T(st)$ denotes the unique path joining $s,t$ in $T$. In some applications we only specify a subset $Z \subseteq V$ for which we need cut information.
We refer to $Z$ as the {\em terminals} of the instance. The Gomory-Hu method allows one to store a compressed version of the GH tree which only captures the cut values $\lambda(s,t)$ for $s,t \in Z$. Namely, a GH $Z$-tree has $V(T)=Z$. It is well-known that there may not always exist a GH tree which is a subgraph of $G$. For instance, every GH tree for the vertices of $K_{3,3}$ is a $5$-star (cf. \cite{schrijver2003combinatorial}). Our first main result characterizes the graphs which admit GH subtrees. More precisely, we say that $G$ has the {\em GH Property} if any subgraph $G'$ of $G$ with any edge-capacity function $c$ has a Gomory-Hu tree $T$ that is a subgraph of $G'$. \begin{theorem}\label{thm:1sum} $G$ has the GH Property if and only if $G$ is the $1$-sum of outerplanar and $K_4$ graphs. \end{theorem} We then turn our attention to the generalized version where we are given a graph-terminal pair $(G,Z)$. Let $G$ be endowed with edge capacities. A {\em GH $Z$-tree} is then a capacitated tree $T=(V(T),E(T))$ (cf. \cite{korte2012combinatorial}). Formally, the vertices of $T$ form a partition $\{B(v) : v \in Z\}$ of $V(G)$, with $z \in B(z)$ for all $z \in Z$. Hence Definition~\ref{defn:encoding} extends as follows. An edge $B(s)B(t)$ of $T$ is {\em encoding} if its fundamental cut $(B(S),B(U))$ induces a minimum $st$-cut in $G$, where $S$ and $U$ are the sets of terminals on the two sides of the edge in $T$, with $s \in S$ and $t \in U$, and $B(S) := \bigcup_{z \in S} B(z)$. As before, if all edges are encoding, then $T$ determines the minimum cuts for all pairs $s,t \in Z$. We characterize those pairs $(G,Z)$ which admit a GH $Z$-tree as a minor for any edge capacities on $G$. We call such a tree a {\em GH $Z$-minor} (a formal definition is delayed to Section~\ref{sec:last}). Our starting point is the following elementary observation. \begin{proposition}\label{prop:notree} $K_{2,3}$ has no Gomory-Hu tree that is a subgraph of itself. \end{proposition} Even if GH $Z$-minors always existed in a graph $G$, it may still contain a $K_{2,3}$ minor.
The proposition implies, however, that it should not have a $K_{2,3}$ minor in which all nodes of the minor are terminals. Given a set $Z$ of terminals, we say that $H$ is a {\em terminal minor}, or {\em $Z$-minor}, of $G$ if the nodes of $H$ correspond to terminals of $G$. In other words, it is a minor such that each $v \in V(H)$ arises by contracting a connected subgraph which contains a vertex from $Z$. Hence a natural necessary condition for $G$ to always contain GH $Z$-minors is that $G$ must not contain a terminal-$\ensuremath{K_{2,3}}$ minor. We show that this is also sufficient (see Section~\ref{sec:last} for the formal statement). \begin{theorem} \label{thm:minorGH} Let $Z \subseteq V$. $G$ admits a Gomory-Hu $Z$-tree that is a $Z$-minor, for any capacity function, if and only if $(G,Z)$ is terminal-$\ensuremath{K_{2,3}}$ minor free. \end{theorem} Establishing the sufficiency requires a better understanding of terminal minor-free graphs. We show that the family of pairs $(G,Z)$ which forbid such terminal-$\ensuremath{K_{2,3}}$ minors arises precisely as subgraphs of \emph{$Z$-webs}. A $Z$-web is built from a planar graph whose outer face contains all the terminals $Z$ and whose inner faces are triangles; inside each inner face we may add an arbitrary graph connected to the three vertices of that face. These additional arbitrary graphs are called \emph{$3$-separated subgraphs}. Subgraphs of $Z$-webs are called {\em Extended Okamura-Seymour Instances}. \begin{theorem}\label{thm:k23char} Let $G$ be a $2$-connected terminal-$\ensuremath{K_{2,3}}$ minor free graph. Then either $G$ has at most $4$ terminals or it is an Extended Okamura-Seymour Instance. \end{theorem} This immediately implies the following. \begin{corollary}\label{cor:k23char} $G$ is terminal-$\ensuremath{K_{2,3}}$ free if and only if for any $2$-connected block $B$, the subgraph obtained by contracting every edge not in $B$ is terminal-$\ensuremath{K_{2,3}}$ free.
\end{corollary} These results also yield the following consequence for multiflow problems. Let $G,H$ be graphs such that $V(H) \subseteq V(G)$. Call a pair $(G,H)$ {\em cut-sufficient} if the cut condition is sufficient to characterize the existence of a multiflow for any demands on edges of $H$ and any edge capacities on $G$. If $Z \subseteq V(G)$, we also call $(G,Z)$ {\em cut-sufficient} if $(G,H)$ is cut-sufficient for any simple graph on $Z$. \begin{corollary}\label{cor:flows} $(G,Z)$ is cut-sufficient if and only if it is terminal-$\ensuremath{K_{2,3}}$ free. \end{corollary} One can compare this to results of Lomonosov and Seymour (\cite{lomonosov1985combinatorial,seymour1980four}, cf. Corollary 72.2a \cite{schrijver2003combinatorial}) which characterize the class of demand graphs $H$ such that every supply graph $G$ ``works'', i.e. for which $(G,H)$ is cut-sufficient for {\em any} graph $G$ with $V(H) \subseteq V(G)$. They prove that any such $H$ is (a subgraph of) either $K_4, C_5$ or the union of two stars. A related question asks for which graphs $G$ is it the case that $(G,H)$ is cut-sufficient for every $H$ which is a subgraph of $G$; Seymour \cite{seymour1981matroids} shows that this is precisely the class of $K_5$ minor-free graphs. We refer the reader to \cite{chekuri2013flow} for discussion and conjectures related to cut-sufficiency. The paper is structured as follows. In the next section we prove that every outerplanar instance has a GH tree which is a subgraph. In Section~\ref{sec:1sum} we present the proof of Theorem~\ref{thm:1sum}. In Section~\ref{sec:k23char} we provide the proofs for Theorem~\ref{thm:k23char} and Corollary~\ref{cor:flows}. Section~\ref{sec:last} wraps up with a proof of Theorem~\ref{thm:minorGH}. 
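To make the tree query from the introduction concrete, the following sketch computes $\lambda(s,t)=\min\{ c'(e): e \in T(st) \}$ on a hand-built capacitated tree (the data layout and names are our own, purely illustrative):

```python
from collections import deque

def gh_query(tree, s, t):
    """Minimum edge capacity on the unique s-t path of a capacitated tree.

    `tree` maps each vertex to a list of (neighbor, capacity) pairs.
    """
    # BFS from s, remembering each vertex's parent and parent-edge capacity.
    parent = {s: (None, float("inf"))}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v, cap in tree[u]:
            if v not in parent:
                parent[v] = (u, cap)
                queue.append(v)
    # Walk back from t to s, tracking the bottleneck capacity.
    lam = float("inf")
    v = t
    while v != s:
        u, cap = parent[v]
        lam = min(lam, cap)
        v = u
    return lam

# A path tree a - b - c with capacities 3 and 5: lambda(a, c) = 3.
tree = {"a": [("b", 3)], "b": [("a", 3), ("c", 5)], "c": [("b", 5)]}
print(gh_query(tree, "a", "c"))  # -> 3
```

The fundamental cut of an edge attaining the minimum then realizes a minimum $st$-cut, which is exactly the encoding property of Definition~\ref{defn:encoding}.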
\subsection{Some Notation and a Lemma} \label{sec:notation} \begin{figure} \label{fig:middle} \begin{center} \begin{tikzpicture}[x=0.35cm,y=0.35cm] \fill[color=red,nearly transparent] plot[smooth cycle, tension=.7] coordinates {(2.6,4.6) (0.2,4.8) (-1.8,4.4) (-3.4,3.4) (-4.4,1.8) (-3.8,-0.2) (-2.7,0.2) (-1.6,2.4) (0.2,3.2) (2,3.2) (3.8,3.2) (3.8,4.2)}; \fill[color=blue, nearly transparent] plot[smooth cycle, tension=.7] coordinates {(-2.6,3.0) (-2,2.6) (-1,2.6) (1,3) (4.3,2.3) (5.8,0.8) (7,0.6) (7,2.1) (4.8,4.2) (2,5) (-0.6,5) (-2,4.6) (-2.6,3.8)}; \draw (1.5,1) ellipse (5 and 3); \draw [dashed] plot[smooth cycle, tension=.7] coordinates {(-0.1,-1.4) (0,-2.2) (-2.2,-1.4) (-3.2,-0.6) (-2.6,-0.2) (-1.4,-0.8)}; \draw [dashed] plot[smooth cycle, tension=.7] coordinates {(-3,0.2) (-3.6,0.2) (-4,1.8) (-3,2.8) (-2.4,2.4) (-3,1.2)}; \draw [dashed] plot[smooth cycle, tension=.7] coordinates {(1.4,-1.8) (1.4,-2.4) (3.4,-2.4) (5,-1.6) (6.6,-0.2) (6,0.4) (4.8,-0.6) (3.2,-1.4)}; \draw [dashed] plot[smooth cycle, tension=.7] coordinates {(4.4,3.8) (4.2,3.2) (5.2,2.6) (6,1.8) (6,1) (6.8,0.8) (6.8,2) (5.6,3.2)}; \draw [dashed] plot[smooth cycle, tension=.7] coordinates {(0.6,4.4) (-0.8,4.2) (-2,3.8) (-2.4,3.4) (-1.8,2.8) (-0.4,3.4) (1.4,3.6) (3.4,3.4) (3.4,4) (3,4.2) (1.6,4.4)}; \node[bigvertex] (v) at (0.6,-2) {}; \node[smallvertex] (x2) at (-3.2,2) {}; \node[smallvertex] (x4) at (5.65,2.65) {}; \node[smallvertex] (x1) at (-0.3,-1.8) {}; \node[smallvertex] (x5) at (1.6,-2) {}; \draw (v) -- (x4); \draw plot[smooth, tension=.7] coordinates {(0.6,-2) (-0.8,0.8) (-3.2,2)}; \path (v) node[anchor=north] {\Large $v$}; \node at (-2.6,-2) {\large $X_1$}; \node at (0.6,6) {\large $X_3$}; \node at (6.8,3.4) {\large $X_4$}; \node at (-5,2.8) {\large $X_2$}; \node at (4.6,-2.6) {\large $X_5$}; \end{tikzpicture} \end{center} \end{figure} We always work with connected graphs and usually assume (without loss of generality) that the edge capacities $c(e)$ have been adjusted so that no two cuts 
have the same capacity.\footnote{This can be achieved in a standard way by adding multiples of $2^{-\delta}$ where $\delta = O(|E|)$.} In particular, the minimum $st$-cut is unique for any vertices $s,t$. Moreover, we may assume any minimum cut $\delta(X)$ to be {\em central}, a.k.a.\ a {\em bond}. That is, $G[X]$ and $G[V \setminus X]$ are connected. For any $X \subseteq V(G)$ we use the shorthand $c(X)$ to denote the capacity of the cut $\delta(X)$, and if $Y \subseteq V(G)$, then $d(X,Y)$ denotes the sum of capacities of all edges with one endpoint in $X$ and the other in $Y$. We consistently use $c'(e)$ to denote the computed capacities on edges $e$ in some Gomory-Hu tree. As we use the following lemma several times throughout, we introduce it now. \begin{lemma} \label{lem:middle} Let $t \in V(G)$ and let $X,Y$ be disjoint subsets which induce respectively a minimum $xt$-cut and a minimum $yt$-cut, where $x \in X, y \in Y$. For any non-empty subset $M$ of $V$ which is disjoint from $X \cup Y \cup \{t\}$, we have $d(M,V \setminus (X \cup Y \cup M)) > 0$. \end{lemma} \begin{proof} We have \begin{align*} &\quad c(M \cup X) + c(M \cup Y) \\ &= c(X) + c(Y) + 2d(M, V \setminus (X \cup Y \cup M))\\ &< c(M \cup X) + c(M \cup Y) + 2d(M, V \setminus (X \cup Y \cup M)) \end{align*} \noindent where the strict inequality follows from the fact that $\delta(M \cup X)$ (respectively $\delta(M \cup Y)$) separates $x$ from $t$ (respectively $y$ from $t$) while $M \cup X \neq X$ (respectively $M \cup Y \neq Y$), so that $c(X) < c(M \cup X)$ and $c(Y) < c(M \cup Y)$ by uniqueness of minimum cuts. Cancelling $c(M \cup X) + c(M \cup Y)$ on both sides gives $d(M, V \setminus (X \cup Y \cup M)) > 0$. \end{proof} \section{Outerplanar graphs have Gomory-Hu Subtrees}\label{sec:gh-tree} \label{sec:outerplanar} \begin{theorem}\label{th:gh-subtrees} Any $2$-connected outerplanar graph $G$ has a Gomory-Hu tree that is a subgraph of $G$. \end{theorem} \begin{proof} Let $G$ be an outerplanar graph with outer cycle $C = v_1, v_2, \ldots, v_n$. As discussed in Section~\ref{sec:notation}, we assume that no two cuts have the same capacity, so let $T$ be the unique Gomory-Hu tree of $G$.
We want to prove that $T$ is a subgraph of $G$. Notice that the shore of any min-cut in $G$ must be a subpath $v_i,v_{i+1},\ldots,v_{j-1},v_j$ (indices taken modulo $n$) because we may assume any min-cut $\delta(S)$ to be {\em central} (a.k.a. a {\em bond}), that is, both $S$ and $V-S$ induce connected subgraphs. \begin{figure} \label{fig:ordering} \begin{center} \begin{tikzpicture}[x=0.4cm,y=0.4cm] \draw (1.5,1) ellipse (5 and 3); \node[bigvertex] (v1) at (0.6,-2) {}; \node[smallvertex] (x2) at (-3.2,2) {}; \node[smallvertex] (x3) at (0.8,4) {}; \node[smallvertex] (x4) at (5.65,2.65) {}; \node[smallvertex] (x1) at (-0.1,-1.9) {}; \node[smallvertex] (x5) at (1.6,-2) {}; \draw (v1) -- (x3); \draw (v1) -- (x4); \draw (v1) .. controls (-0.8,0.5) .. (x2); \draw plot[smooth cycle, tension=.7] coordinates {(0.2,-1.6) (-0.2,-2.3) (-2.2,-1.4) (-3.2,-0.6) (-2.6,-0.2) (-1.4,-0.8)}; \draw plot[smooth cycle, tension=.7] coordinates {(-3,0.2) (-3.6,0.2) (-4,1.8) (-3,2.8) (-2.4,2.4) (-3,1.2)}; \draw plot[smooth cycle, tension=.7] coordinates {(1.4,-1.8) (1.4,-2.4) (3.4,-2.4) (5,-1.6) (6.6,-0.2) (6,0.4) (4.8,-0.6) (3.2,-1.4)}; \draw plot[smooth cycle, tension=.7] coordinates {(4.4,3.8) (4,3.2) (5.2,2.6) (6,1.8) (6,1) (6.8,0.8) (6.8,2) (5.6,3.2)}; \draw plot[smooth cycle, tension=.7] coordinates {(0.6,4.4) (-0.8,4.2) (-2,3.8) (-2.4,3.4) (-2,2.6) (-0.4,3.4) (1.4,3.6) (3.6,3.4) (3.8,4) (3,4.2) (1.6,4.4)}; \path (v1) node[anchor=north] {\Large $v$}; \node at (-2.6,-2) {\large $X_1$}; \node at (0.6,5.2) {\large $X_3$}; \node at (6.2,3.4) {\large $X_4$}; \node at (-4.6,2.4) {\large $X_2$}; \node at (4.6,-2.6) {\large $X_5$}; \end{tikzpicture} \end{center} \end{figure} Let $v$ be any vertex and consider the fundamental cuts associated with the edges incident to $v$ in the Gomory-Hu tree. The shores (not containing $v$) of these cuts define a partition $X_1,X_2,\ldots X_k$ of $V \setminus \{v\}$ where each $X_i$ is a subpath of $C$. 
We may choose the indices such that $v, X_1, \ldots, X_k$ appear in clockwise order on $C$. \begin{claim}\label{claim:xXiconnected} For each $i \in \{1,\ldots,k\}$, there is an edge in $G$ from $v$ to some vertex in $X_i$. \end{claim} \begin{proof} By contradiction, assume there is no edge from $v$ to $X_i$. Notice $i \notin \{1,k\}$ because of the edges of $C$. Let $j \in \{1,\ldots,i-1\}$ be maximum with $d(v,X_j) > 0$, and let $j' \in \{i+1,\ldots,k\}$ be minimum with $d(v,X_{j'}) > 0$; hence $d(v,M) = 0$ where $M:= X_{j+1} \cup X_{j+2} \cup \ldots \cup X_{j'-1}$. By taking $X=X_j$, $Y=X_{j'}$, $t=v$, Lemma~\ref{lem:middle} implies that $d(M,V \setminus (X_j \cup X_{j'} \cup M))>0$. However, outerplanarity and the existence of edges from $v$ to both $X_j$ and $X_{j'}$ imply that any such edge leaving $M$ must end at $v$, cf. Figure~\ref{fig:usingouterplanar}. This contradicts $d(v,M) = 0$. \end{proof} \begin{figure} \label{fig:usingouterplanar} \begin{center} \begin{tikzpicture}[x=0.4cm,y=0.4cm] \draw (1.5,1) ellipse (5 and 3); \node[bigvertex] (v1) at (0.6,-2) {}; \node[smallvertex] (x1) at (-0.6,-1.7) {}; \node[smallvertex] (x2) at (-3.2,2) {}; \node[smallvertex] (x4) at (5.7,2.7) {}; \node[smallvertex] (x5) at (1.8,-2) {}; \draw (v1) -- (x4); \draw (v1) ..controls (-0.8,0.5) ..
(x2); \draw[dashed] plot[smooth cycle, tension=.7] coordinates {(-0.2,-1.4) (-0.2,-2.2) (-2.2,-1.4) (-3.2,-0.6) (-2.6,-0.2) (-1.4,-0.8)}; \draw[very thick] plot[smooth cycle, tension=.7] coordinates {(-3,0.2) (-3.6,0.2) (-4,1.8) (-3,2.8) (-2.4,2.4) (-3,1.2)}; \draw[dashed] plot[smooth cycle, tension=.7] coordinates {(1.4,-1.8) (1.4,-2.4) (3.4,-2.4) (5,-1.6) (6.6,-0.2) (6,0.4) (4.8,-0.6) (3.2,-1.4)}; \draw[very thick] plot[smooth cycle, tension=.7,] coordinates {(4.4,3.8) (4.2,3.2) (5.2,2.6) (6,1) (6.8,0.8) (6.8,2) (5.6,3.5)}; \draw[dashed] plot[smooth cycle, tension=.7] coordinates {(0.6,4.4) (-0.8,4.2) (-2,3.8) (-2.4,3.4) (-1.8,2.8) (-0.4,3.4) (1.4,3.6) (3.4,3.4) (3.4,4) (3,4.2) (1.6,4.4)}; \path (v1) node[anchor=north] {\Large $v$}; \node at (-2.6,-2) {\large $X_1$}; \node at (0.6,5) {\large $M:=X_3$}; \node at (6.2,3.4) {\large $X_4$}; \node at (-5,2.8) {\large $X_2$}; \node at (4.6,-2.6) {\large $X_5$}; \node at (0.7,2) {?}; \draw [color=red] (0.6,3.6) -- (0.6,0.8); \draw [color=red] plot[smooth, tension=.7] coordinates {(0,3.6) (-0.4,2.4) (-0.8,1.6)}; \draw [color=red] plot[smooth, tension=.7] coordinates {(1.8,3.8) (2,2.6) (3.4,1.6)}; \end{tikzpicture} \end{center} \end{figure} Let $xy \in E(T)$ be an edge of the Gomory-Hu tree. We must prove that $xy \in E(G)$. Let $\delta(X)$ be the fundamental cut associated with $xy$, with $x \in X$, define $Y = V \setminus X$. As in the preceding arguments we may use the fundamental cuts associated to edges incident to $x$ and partition $X \setminus \{x\}$ into min-cut shores $X_1,X_2,\ldots, X_k$; we do this by ignoring the one shore $Y$. Similarly, we may partition $Y \setminus \{y\}$ into min-cut shores $Y_1,Y_2,\ldots,Y_l$. We can label these so that $X_1,X_2,\ldots,X_k,Y_1,\ldots,Y_l$ appear in clockwise order around $C$ - see Figure~\ref{fig:shores}. 
There is also some $i \in \{1,\ldots,k\}$ and $j \in \{1,\ldots,l\}$ such that $x$ is between $X_i$ and $X_{i+1}$ (or $Y_1$ if $i = k$) and $y$ is between $Y_j$ and $Y_{j+1}$ (or $X_1$ if $j = l$). \begin{figure}[htbp] \label{fig:shores} \begin{center} \begin{tikzpicture}[x=0.6cm,y=0.6cm] \draw (1.6,0.9) ellipse (5 and 3); \node[bigvertex] (x) at (0.6,-2.03) {}; \node[smallvertex] (x4) at (-2.2,-1) {}; \node[smallvertex] (x5) at (-3.1,2) {}; \node[smallvertex] (x1) at (4.3,-1.6) {}; \node[bigvertex] (y) at (3.6,3.7) {}; \node[smallvertex] (y1) at (2.6,3.85) {}; \node[smallvertex] (y2) at (4.5,3.4) {}; \node[smallvertex] (x3) at (-0.1,-1.9) {}; \node[smallvertex] (x2) at (1.5,-2.1) {}; \draw (x) .. controls (2.45,-0.8) .. (x1); \draw (x) .. controls (-0.8,-0.5) .. (x4); \draw (x) .. controls (-1.2,1) .. (x5); \draw[thick] plot[smooth cycle, tension=.7] coordinates {(-1,-1.2) (-1.4,-1.8) (-2.2,-1.4) (-3.2,-0.6) (-2.6,-0.2) (-1.4,-0.8)}; \draw [thick] plot[smooth cycle, tension=.7] coordinates {(-3,0.2) (-3.6,0.2) (-4,1.8) (-3,2.8) (-2.4,2.4) (-3,1.2)}; \draw[thick] plot[smooth cycle, tension=.7] coordinates {(2.9,-1.9) (3.4,-2.4) (5,-1.8) (6.6,-0.2) (6,0.4) (4.8,-0.6) (3.4,-1.3)}; \draw[thick] plot[smooth cycle, tension=.7] coordinates {(4.4,3.8) (4.2,3.2) (5.5,2.3) (6,1) (6.8,0.8) (6.8,2) (5.6,3.2)}; \draw[thick] plot[smooth cycle, tension=.7] coordinates {(1,4.4) (-1,3.9) (-2.2,3.3) (-1.8,2.8) (-0.4,3.4) (1.4,3.6) (2.5,3.5) (3,4) (2.5,4.5)}; \draw[thick] plot[smooth cycle, tension=.7] coordinates {(-0.6,-1.4) (-1,-2) (0,-2.2) (0,-1.6)}; \draw[thick] plot[smooth cycle, tension=.7] coordinates {(1.2,-1.8) (1.2,-2.2) (2.2,-2.4) (2.2,-1.6)}; \draw[dashed,color=blue,very thick] plot[smooth, tension=.7] coordinates {(-3.7,4) (-1.5,2.3) (2.9,0.6) (7.1,0.7)}; \path (x) node[anchor=north] {\Large $x$}; \path (y) node[anchor=south] {\Large $y$}; \node at (5.4,-2.2) {\large $X_1$}; \node at (-1,-2.5) {\large $X_3$}; \node at (-2.8,-1.6) {\large $X_4$}; \node at (1.8,-3) 
{\large $X_2$}; \node at (-4.6,1.6) {\large $X_5$}; \node at (-1,4.5) {\large $Y_1$}; \node at (6.5,3.3) {\large $Y_2$}; \draw (2.6,1.2) -- (2.2,0.1); \draw (4.4,1) -- (4.4,0); \draw (0.3,2.1) -- (-0.4,1.2); \node at (1.8,2) {\LARGE $\delta(X)$}; \end{tikzpicture} \end{center} \caption{An arbitrary edge $xy \in T$.} \end{figure} By contradiction suppose $xy \notin E(G)$. By Claim~\ref{claim:xXiconnected}, there is an edge $e$ from $x$ to $Y$; let $m \in \{1,\ldots,l\}$ be such that $e$ joins $x$ to $Y_m$. If $m \notin \{1,l\}$, by outerplanarity either $d(y,Y_1) = 0$ or $d(y,Y_l) = 0$; this contradicts Claim~\ref{claim:xXiconnected}. By symmetry we may assume that $e$ joins $x$ to $Y_1$. By a similar argument there is an edge $e'$ joining $y$ to $X_1$. By Claim~\ref{claim:xXiconnected}, there are also two edges $e''$ joining $x$ to $X_1$ and $e'''$ joining $y$ to $Y_1$. \begin{figure} \begin{center} \begin{tikzpicture}[x=0.6cm,y=0.6cm] \fill[color=red,nearly transparent] plot[smooth cycle, tension=.7] coordinates {(-2.9,3.1) (-2,2.4) (-2.7,1.3) (-2.6,0.3) (-0.9,-0.8) (1.2,-1.4) (2.2,-1.4) (2.6,-2.2) (2,-2.8) (-0.4,-2.6) (-2.3,-1.9) (-3.6,-1) (-4.2,0.6) (-4.1,2.4)}; \fill[color=blue, nearly transparent] plot[smooth cycle, tension=.7] coordinates {(3.8,4.3) (3.2,3.6) (3.4,3.1) (4.6,2.6) (5.4,2.1) (5.8,0.9) (6.5,0.6) (7,0.7) (7.1,1.6) (6.8,2.5) (5.5,3.7)}; \draw (1.6,0.9) ellipse (5 and 3); \node[bigvertex] (x) at (0.6,-2) {}; \node[bigvertex] (y) at (3.6,3.7) {}; \node[smallvertex] (x5) at (-3.1,2) {}; \node[smallvertex] (x4) at (-2.2,-1) {}; \node[smallvertex] (x3) at (-0.1,-1.9) {}; \node[smallvertex] (x2) at (1.5,-2.1) {}; \node[smallvertex] (y1) at (2.2,3.9) {}; \node[smallvertex] (x1) at (4.3,-1.6) {}; \node[smallvertex] (y2) at (4.4,3.4) {}; \node[smallvertex] (y1purple) at (0.7,3.85) {}; \node[smallvertex] (x1purple) at (5.5,-1) {}; \draw[line width =2,color=purple] (x) .. controls (2.45,-0.8) ..
(x1); \node[purple] at (2.3,-0.5) {\Large $e''$}; \draw[line width =2,color=purple] (x) -- (y1purple) node[pos=0.5,left] {\Large $e$}; \draw[line width =2,color=purple] (y) -- (x1purple) node[pos=0.5,left] {\Large $e'$}; \draw[line width =2,color=purple] (y1) -- (y) node[pos=0.5,below] {\Large $e'''$}; \draw (x) .. controls (-0.6,-0.2) .. (x4); \draw (x) .. controls (-0.8,1.2) .. (x5); \draw[thick] plot[smooth cycle, tension=.7] coordinates {(1,4.4) (-1,3.9) (-2.2,3.3) (-1.8,2.8) (-0.4,3.4) (1.4,3.6) (2.5,3.5) (3,4) (2.5,4.5)}; \draw[thick] plot[smooth cycle, tension=.7] coordinates {(2.9,-1.9) (3.4,-2.4) (5,-1.8) (6.6,-0.2) (6,0.4) (4.8,-0.6) (3.4,-1.3)}; \draw[thick] plot[smooth cycle, tension=.7] coordinates {(-1,-1.2) (-1.4,-1.8) (-2.2,-1.4) (-3.2,-0.6) (-2.6,-0.2) (-1.4,-0.8)}; \draw[thick] plot[smooth cycle, tension=.7] coordinates {(-3,0.2) (-3.6,0.2) (-4,1.8) (-3,2.8) (-2.4,2.4) (-3,1.2)}; \draw[thick] plot[smooth cycle, tension=.7] coordinates {(4.4,3.8) (4.2,3.2) (5.2,2.6) (6,1.8) (6,1) (6.8,0.8) (6.8,2) (5.6,3.2)}; \draw[thick] plot[smooth cycle, tension=.7] coordinates {(-0.6,-1.4) (-1,-2) (0,-2.2) (0,-1.6)}; \draw[thick] plot[smooth cycle, tension=.7] coordinates {(1.2,-1.8) (1.2,-2.2) (2.2,-2.4) (2.2,-1.6)}; \path (x) node[anchor=north] {\Large $x$}; \path (y) node[anchor=south] {\Large $y$}; \node at (5.6,-2.1) {\large $X_1$}; \node at (-1,-2.9) {\large $X_3$}; \node at (-3.3,-1.8) {\large $X_4$}; \node at (1.8,-3.2) {\large $X_2$}; \node at (-4.9,1.6) {\large $X_5$}; \node at (-0.9,4.6) {\large $Y_1$}; \node at (7.6,1.9) {\large $Y_2$}; \end{tikzpicture} \end{center} \caption{Showing that $xy \in T$ must be an edge of $G$.} \end{figure} Let $X' = \{x\} \cup X_2 \cup \ldots \cup X_k$ and $Y' = \{y\} \cup Y_2 \cup \ldots \cup Y_l$, $\delta(X')$ is a cut separating $x$ from $X_1$ and similarly $\delta(Y')$ separates $y$ from $Y_1$. 
As $\delta(X_1)$ is the fundamental cut between $x$ and $X_1$, we have that $c(X_1) < c(X')$, and similarly $c(Y_1) < c(Y')$. Now, because of the edges $e, e', e'', e'''$, by outerplanarity there is no edge between $X'$ and $Y'$, hence $$c(X_1) + c(Y_1) = c(X') + c(Y') + 2d(X_1,Y_1) > c(X_1) + c(Y_1) + 2d(X_1, Y_1),$$ which gives $0 > 2d(X_1,Y_1) \geq 0$, a contradiction. \end{proof} \section{Which Instances have Gomory-Hu Subtrees?} \label{sec:1sum} The previous result leads to a characterization of graphs with the {\em GH Property}: that is, graphs whose capacitated subgraphs always contain a Gomory-Hu tree as a subgraph. In Section~\ref{sec:last}, we extend this result to the case where a subset of terminals is specified. We start with a simple observation that $\ensuremath{K_{2,3}}$ does not have a GH subtree. \begin{repproposition}{prop:notree} $\ensuremath{K_{2,3}}$, when all edges have capacity $1$, has no Gomory-Hu tree that is a subgraph of itself. \end{repproposition} \begin{proof} Let $\{u_1,u_2\}, \{v_1, v_2,v_3\}$ be the bipartition. Since the minimum $u_1u_2$-cut has capacity $3$, a GH tree must contain a $u_1u_2$ path all of whose edges have capacity at least $3$. Since the tree is a subgraph of $\ensuremath{K_{2,3}}$, this path must be $u_1v_iu_2$ for some $i$; say it is $u_1v_1u_2$. Then the tree's fundamental cut associated with $u_1v_1$ should be a minimum $u_1v_1$-cut of capacity at least $3$. But this is impossible since $\delta(v_1)$ is a $u_1v_1$-cut of capacity $2$. \end{proof} This leads to the desired characterization. \begin{reptheorem}{thm:1sum} $G$ has the GH Property if and only if $G$ is the $1$-sum of outerplanar and $K_4$ graphs. \end{reptheorem} \begin{proof} First suppose that $G$ is such a $1$-sum. Each outerplanar block in this sum has the GH Property by Theorem~\ref{th:gh-subtrees}. So consider a $K_4$ block and a subgraph $G'$ with edge capacities. If $G'$ is $K_4$ itself, then any GH tree is a subgraph since $K_4$ is complete. Otherwise $G'$ is a proper subgraph of $K_4$ and hence is outerplanar. It follows that each block has the GH Property.
It is not hard to see that the $1$-sum of Gomory-Hu trees of two graphs is a Gomory-Hu tree of the $1$-sum of the graphs. Applying this argument to the blocks, we find that $G$ itself satisfies the GH Property. Suppose now that a $2$-connected graph $G$ has the GH Property. By Proposition~\ref{prop:notree}, $G$ has no $\ensuremath{K_{2,3}}$ minor. Outerplanar graphs are exactly the graphs with no $\ensuremath{K_{2,3}}$ and no $K_4$ minor. Hence if $G$ is not outerplanar, then it has a $K_4$ minor. Notice that any proper subdivision of $K_4$ contains a $\ensuremath{K_{2,3}}$ subdivision, as does any graph built from $K_4$ by adding a path between two distinct vertices. Hence $G$ must be $K_4$ itself. The result now follows. \end{proof} \section{Characterization of terminal-$\ensuremath{K_{2,3}}$ free graphs} \label{sec:k23char} In this section we prove Theorem~\ref{thm:k23char}. Throughout, we assume we have an undirected graph $G$ with terminals $Z \subseteq V(G)$. We refer to $G$ as being terminal-$H$ free (for some graph $H$) with respect to this fixed terminal set $Z$. We first check sufficiency of the condition of Theorem~\ref{thm:k23char}. Any graph with at most $4$ terminals is automatically terminal-$\ensuremath{K_{2,3}}$ free, and one easily checks that any Extended Okamura-Seymour Instance cannot contain a terminal-$\ensuremath{K_{2,3}}$ minor. Hence we focus on proving the other direction: any terminal-$\ensuremath{K_{2,3}}$ minor-free graph $G$ lies in the desired class. To this end, we assume that $|Z| \geq 5$ and we ultimately derive that $G$ must be an Extended Okamura-Seymour Instance. We start by excluding the existence of certain $K_4$ minors. \begin{proposition} If $|Z| \geq 5$ and $G$ has a terminal-$K_4$ minor, then $G$ has a terminal-$\ensuremath{K_{2,3}}$ minor. \end{proposition} \begin{proof} Let $K_4^+$ be the graph obtained from $K_4$ by subdividing one of its edges. By removing the edge opposite to the subdivided edge, we see that $K_4^+$ contains $\ensuremath{K_{2,3}}$.
Hence it suffices to prove that $G$ contains a terminal-$K_4^+$ minor. Consider a terminal-$K_4$ minor on terminals $T' = \{s,t,u,v\}$. Thus we have vertex-disjoint trees $T_x$ for each terminal $x \in T'$, such that for any $x,y \in T'$, there is an edge $e_{xy}$ having one extremity in $T_x$ and one in $T_y$. We may assume that $T_x = \bigcup_{y \in T' \setminus \{x\}} P[x,y]$, where $P[x,y]$ is a path from $x$ to an end of $e_{xy}$ and not containing $e_{xy}$. Denote $U := \bigcup_{x \in T'} V(T_x)$. Assume that the minor has been chosen so that $|U|$ is minimized. As $|Z| \geq 5 > |T'|$, there is some terminal $w \not\in T'$. First, suppose that $w$ is contained in one of the subtrees, say $T_s$, of the representation of the terminal-$K_4$ minor. Note that $w$ could not lie in all three of the paths $P[s,u]$, $P[s,v]$, $P[s,t]$ since we could obtain a smaller terminal-$K_4$ minor by replacing $s$ by $w$. If $w$ lies in exactly one of these paths, say $P[s,u]$, then we obtain a terminal-$K_4^+$ minor where $w$ is the terminal which subdivides the minor edge $su$. The last case is where $w$ lies in exactly $2$ of the paths, say $P[s,u],P[s,v]$. In this case, one may replace $s$ and use $w$ as the degree $3$ vertex of the $K_4$ minor, and hence $s$ can play the role of the degree $2$ vertex in a terminal-$K_4^+$ minor, and we are done. Now we assume that $w$ is not contained in any of the subtrees. Then by $2$-connectivity, there are two disjoint paths from $w$ to two vertices $a$ and $b$ in $U$. If $a$ and $b$ are in different subtrees, we easily get a terminal-$K_4^+$ minor, and we are done. Assume $a,b \in T_s$. Now suppose that one of $a$ and $b$ is in exactly one of $P[s,u]$, $P[s,v]$, $P[s,t]$, say $P[s,u]$. If both $a,b$ have this property, then choose the one which is closer to the edge $e_{su}$. Let this be $a$. Then by contracting the $wa$-path we get a terminal-$K_4^+$ minor (where $w$ has degree $2$).
Next assume that $a$ lies in precisely two of these three paths, say $P[s,u],P[s,v]$. Then we also get a terminal-$K_4^+$ minor where $s$ is now a degree $2$ terminal vertex on a subdivided edge between $w$ and $t$, a contradiction. In the last case we may assume that $a, b \in R$ where $R := P[s,u] \cap P[s,v] \cap P[s,t]$. Let $z$ be the end of $R$ that is not $s$, and denote $U' := U \setminus (V(R) \setminus z)$. Let $Q_1,Q_2$ be openly vertex-disjoint paths from $s$ to $U'$. Without loss of generality $Q_1$ contains a vertex $z' \in R$ which is closest to $z$ amongst all vertices in $Q_1 \cup Q_2$. Hence replacing $Q_1$ by the path which follows the subpath of $R$ from $z'$ to $z$ also produces a vertex-disjoint pair of paths. Thus we may assume the endpoints of $Q_1$ are $s$ and $z$. Now consider following one of the paths from $w$ until it first hits a vertex of $Q_1 \cup Q_2$. If there is no such vertex, then it hits $R$ and we may follow $R$ until it hits $Q_1 \cup Q_2$. In all cases this produces a path from $w$ to $U'$ which is disjoint from exactly one of $Q_1,Q_2$. Let $P',Q'$ be the resulting vertex-disjoint paths and note that one of them terminates at $z$; without loss of generality $P'$. Hence its other endpoint can now play the role of $s$ as a degree $3$ vertex in a terminal-$K_4$ minor. Therefore, the terminal on $Q'$ has a path to $U'$ which is disjoint from $P'$. Thus we are back to one of the previous cases. \end{proof} Now that we have ruled out the existence of terminal-$K_4$ minors, we begin building up the minors which are possible. \begin{proposition}\label{prop:2conn-terminal-minor} Any $2$-connected graph with terminals $Z$, with $|Z| \geq 3$, has a $2$-connected minor $H$ with $V(H)=Z$. \end{proposition} \begin{proof} Let $H$ be a minimal $2$-connected terminal-minor of $G$ containing $Z$ and assume, for contradiction, that there is a non-terminal vertex in $H$. In particular we may assume there is an edge $sv$ with $s \in Z$, $v \notin Z$.
By minimality, contracting $sv$ decreases the connectivity to $1$. Hence, $\{s,v\}$ is a cut separating two vertices $t$ and $t'$. Thus, there are two disjoint $tt'$-paths, one containing $s$ and the other $v$. That is, there is a circuit $C$ containing $s,t,v,t'$ in that order. By minimality of $H$, we also have that $H-sv$ is not $2$-connected. It follows that $H-sv$ contains a cut vertex $z$ such that $s,v$ lie in distinct components of $H-sv-z$. This contradicts the existence of $C$, since $C-z$ still contains a path between $s$ and $v$ avoiding the edge $sv$, and this completes the proof. \end{proof} As $|Z| \geq 5$, the previous lemma implies that there is a terminal-$C_4$ minor. Let $k$ be maximum such that $G$ contains a terminal-$C_k$ minor. \begin{proposition} $k = |Z|$. \end{proposition} \begin{proof} By Proposition~\ref{prop:2conn-terminal-minor}, let $H$ be a 2-connected terminal-minor of $G$ with $V(H)=Z$. Consider an ear-decomposition of $H$, starting with a longest cycle $C_0$ and ears $P_1,\ldots, P_r$. Then all ears are single edges (from which the proposition follows); otherwise let $P_i$ be an ear that is not a single edge, with $i$ minimum. The two ends of $P_i$ are vertices $x,y$ of $C_0$. If $x$ and $y$ are consecutive in $C_0$, this contradicts the maximality of $C_0$. If they are not consecutive, $C_0 \cup P_i$ is a subdivision of $\ensuremath{K_{2,3}}$; since every vertex of $H$ is a terminal, this yields a terminal-$\ensuremath{K_{2,3}}$ minor, a contradiction. \end{proof} We let $k=|Z|$ henceforth. A terminal-$C_k$ minor of $G$ can also be represented as a collection of $k$ vertex-disjoint subtrees $T_1, \ldots,T_k$, where each $T_i$ contains exactly one terminal $t_i$. There also exist edges $e_1,\ldots, e_k$, where $e_i$ has one extremity $u_i$ in $T_i$ and the other, $v_{i+1}$, in $T_{i+1}$. The subscript $k+1$ is taken to be $1$; the edges in the subtrees are the contracted edges and the edges $e_1,\ldots,e_k$ are the undeleted edges. We define $s_i$ as the only vertex in $V(P[t_i,u_i]) \cap V(P[u_i,v_i]) \cap V(P[v_i,t_i])$, where $V(P[x,y])$ is the vertex set of the path with ends $x$ and $y$ in the tree $T_i$.
Thus, $T_i$ is $P[s_i,u_i] \cup P[s_i,v_i] \cup P[s_i,t_i]$. We denote by $S_i$ the path from $t_i$ to $s_i$ in $T_i$, and we take our representation so that $\sum_{i=1}^k |S_i|$ is minimized. We denote by $P_i$ the path from $s_i$ to $s_{i+1}$. \begin{proposition} $\sum_{i=1}^k |S_i| = 0$. \end{proposition} \begin{proof} By contradiction, suppose (without loss of generality) that $|S_1| > 0$, and so $t_1$ does not lie in the graph induced by $D=P_1 \cup \ldots \cup P_k \cup S_2 \cup \ldots \cup S_k$. By $2$-connectivity, there are two disjoint minimal paths from $t_1$ to distinct vertices $x$ and $y$ in $D$. Moreover we can assume that $x=s_1$. To see this, suppose that $z \in S_1$ is the closest vertex to $s_1$ which is used by one of the paths (possibly $z=t_1$). We may then re-route one of the paths to use the subpath of $S_1$ from $z$ to $s_1$. If $y$ is contained in one of $P_k, P_1$, it is routine to get another representation of the minor where all the $S_i$ are at least as short, and $S_1$ is empty, contradicting the minimality of our choice of representation. A similar argument holds if $y \in S_k \cup S_2$. So we assume $y \in D \setminus (P_k \cup P_1 \cup S_k \cup S_2)$. We now find a terminal-$\ensuremath{K_{2,3}}$ minor, and that is again a contradiction. To see this, let $T_i$ be the tree which contains the second vertex $y$. As $k \geq 5$, we may assume either $i \in [4,k-1]$, or $i \in [3,k-2]$. Suppose the latter as the two cases are similar. We obtain a terminal-$\ensuremath{K_{2,3}}$ minor where the two degree-3 vertices correspond to the terminals in $T_i$ and $T_k$. The degree-2 vertices will correspond to $t_1,t_2$ and $t_{k-1}$ --- see Figure~\ref{fig:k23-minor-in-sunny-graph}. \end{proof} Hence there is a circuit $C$ containing every terminal, in cyclic order $t_1$, $t_2$, \ldots, $t_k$.
\begin{figure} \begin{center} \begin{tabular}{cp{1cm}c} \begin{tikzpicture}[x=0.7cm,y=0.7cm] \draw[line cap=round,line width=0.2cm,color=LightPink] (18:2) arc[start angle=18,delta angle=36,radius=1.4cm] (54:2) (90:2) arc[start angle=90,delta angle=72,radius=1.4cm] (162:2) (162:2) arc[start angle=162,delta angle=72,radius=1.4cm] (234:2) (234:2) arc[start angle=234,delta angle=72,radius=1.4cm] (306:2) (306:2) arc[start angle=306,delta angle=72,radius=1.4cm] (18:2) (18:2) -- (18:3) (90:2) -- (0,0) (0,0) -- (54:2) (162:2) -- (162:3) (234:2) -- (234:3) (306:2) -- (306:3); \draw (0,0) circle[radius=1.4cm]; \node[smallvertex] (x) at (54:2) {}; \foreach \theta in {18,90,...,306} { \node[smallvertex] (s\theta) at (\theta:2) {}; } \node[terminal] (t90) at (0,0) {}; \foreach \theta/\i in {18/2,162/k,234/{k-1},306/3} { \node[terminal] (t\theta) at (\theta:3) {}; \path (t\theta) node[anchor=north] {$t_{\i}$}; \draw (t\theta) -- (s\theta); } \draw (s90) -- (t90) (t90) -- (x); \draw (s90) node[anchor=south] {$s_1$}; \draw (t90) node[anchor=east] {$t_1$}; \draw (x) node[anchor=south west] {$y$}; \end{tikzpicture} & & \begin{tikzpicture}[x=0.7cm,y=0.7cm] \draw[line cap=round,line width=0.2cm,color=LightPink] (90:2) arc[start angle=90,delta angle=72,radius=1.4cm] (162:2) (162:2) arc[start angle=162,delta angle=72,radius=1.4cm] (234:2) (234:2) arc[start angle=234,delta angle=72,radius=1.4cm] (306:2) (306:2) arc[start angle=306,delta angle=72,radius=1.4cm] (18:2) (18:2) -- (18:3) (90:2) -- (90:3) (162:2) -- (162:3) (234:2) -- (234:3) (306:2) -- (306:3) (90:3) to[out=330,in=120] (18:2.5); \draw (0,0) circle[radius=1.4cm]; \node[smallvertex] (x) at (18:2.5) {}; \foreach \theta in {18,90,...,306} { \node[smallvertex] (s\theta) at (\theta:2) {}; } \node[terminal] (t90) at (90:3) {}; \foreach \theta/\i in {18/2,162/k,234/{k-1},306/3} { \node[terminal] (t\theta) at (\theta:3) {}; \path (t\theta) node[anchor=north] {$t_{\i}$}; \draw (t\theta) -- (s\theta); } \draw (s90) -- (t90); \draw 
(90:3) to[out=330,in=120] (18:2.5); \draw (s90) node[anchor=north] {$s_1$}; \draw (t90) node[anchor=east] {$t_1$}; \draw (x) node[anchor=north] {$y$}; \end{tikzpicture} \\ \begin{tikzpicture}[x=0.7cm,y=0.7cm] \draw[line cap=round,line width=0.2cm,color=LightPink] (90:2) arc[start angle=90,delta angle=72,radius=1.4cm] (162:2) (306:2) arc[start angle=306,delta angle=36,radius=1.4cm] (342:2) (18:2) -- (18:3) (162:2) -- (162:3) (234:2) -- (234:3) (306:2) -- (306:3); \fill[LightPink] (0,0) circle[radius=6pt]; \draw (0,0) circle[radius=1.4cm]; \foreach \theta in {18,90,...,306} { \node[smallvertex] (s\theta) at (\theta:2) {}; } \node[terminal] (t90) at (0,0) {}; \node[smallvertex] (x) at (342:2) {}; \foreach \theta/\i in {18/2,162/k,234/{k-1},306/3} { \node[terminal] (t\theta) at (\theta:3) {}; \path (t\theta) node[anchor=north] {$t_{\i}$}; \draw (t\theta) -- (s\theta); } \draw (s90) -- (t90); \draw (t90) -- (x); \draw (s90) node[anchor=south] {$s_1$}; \draw (t90) node[anchor=north east] {$t_1$}; \draw (x) node[anchor=north west] {$y$}; \end{tikzpicture} & & \begin{tikzpicture}[x=0.7cm,y=0.7cm] \draw[line cap=round,line width=0.2cm,color=LightPink] (90:2) arc[start angle=90,delta angle=72,radius=1.4cm] (162:2) (270:2) arc[start angle=270,delta angle=36,radius=1.4cm] (306:2) (18:2) -- (18:3) (162:2) -- (162:3) (234:2) -- (234:3) (306:2) -- (306:3); \fill[LightPink] (0,0) circle[radius=6pt]; \draw (0,0) circle[radius=1.4cm]; \foreach \theta in {18,90,...,306} { \node[smallvertex] (s\theta) at (\theta:2) {}; } \node[terminal] (t90) at (0,0) {}; \node[smallvertex] (x) at (270:2) {}; \foreach \theta/\i in {18/2,162/k,234/{k-1},306/3} { \node[terminal] (t\theta) at (\theta:3) {}; \path (t\theta) node[anchor=north] {$t_{\i}$}; \draw (t\theta) -- (s\theta); } \draw (s90) -- (t90); \draw (t90) -- (x); \draw (s90) node[anchor=north] {$s_1$}; \draw (t90) node[anchor=east] {$t_1$}; \draw (x) node[anchor=north] {$y$}; \end{tikzpicture} \end{tabular} \end{center} \caption{Reducing 
$|S_1|$ or finding a terminal-$\ensuremath{K_{2,3}}$ minor, depending on the position of $y$.} \label{fig:k23-minor-in-sunny-graph} \end{figure} \begin{proposition}\label{prop:no-2-linkage} There are no two vertex-disjoint paths, one from $t_i$ to $t_{i'}$, the other from $t_{j}$ to $t_{j'}$, with $i < j < i' < j'$. \end{proposition} \begin{proof} By contradiction, suppose two such paths exist. For convenience, denote $s = t_i$, $t = t_{i'}$, $s' = t_{j}$ and $t' = t_{j'}$. Let $P$ be the $st$-path and $Q$ the $s't'$-path. We may assume that we choose $P$ and $Q$ to minimize their total number of maximal subpaths disjoint from $C$. We consider the set (not multi-set) of edges $E(C) \cup E(P) \cup E(Q)$, and only keep $s, s', t, t'$ as terminals. This defines a subgraph $G'$ of $G$ of maximum degree $4$ by construction. Contract edges in $E(C) \cap (E(P) \cup E(Q))$, and then contract edges so that vertices of degree 2 are eliminated. This gives a minor $H$ where the only vertices not of degree 4 are $s,t,s',t'$, which have degree 3. $E(H) \cap E(P)$ induces an $st$-path $P'$ in $H$, and $E(H) \cap E(Q)$ induces an $s't'$-path $Q'$ in $H$. $P'$ and $Q'$ are again vertex-disjoint. We call the remaining edges of $E(C)$ in $H$ {\em $C$-edges}. They induce a cycle which alternates between vertices of $P'$ and $Q'$. To see this, suppose that $e$ is such an edge joining $x,y \in V(P')$ (the case for $Q'$ is the same). We could then replace the subpath of $P$ between $x,y$ by the subpath of $C$ which was contracted to form $e$. This would reduce, by at least $1$, the number of maximal subpaths of $P$ disjoint from $C$, a contradiction. Consider the two vertices $u'$ and $v'$ of $Q'$ adjacent to $s$, such that $s',u',v',t'$ appear in that order on $Q'$. Each of $u'$ and $v'$ has one more incident $C$-edge, whose other extremities, $u$ and $v$ respectively, must then lie in $V(P') \setminus \{s\}$.
We create a terminal-$K_4$ minor on $s,s',t,t'$ as follows --- see Figure~\ref{fig:k4}, where $u,v$ may be in either order on $P'$. We contract all the edges of $P'$ except the edge $e_s$ incident to $s$, and all the edges of $Q'$ except the edge $e_{u'}$ incident to $u'$ in the direction of $t'$. Together with the $C$-edges $su'$, $sv'$, $u'u$ and $v'v$, the edges $e_s$ and $e_{u'}$ then yield a terminal-$K_4$ minor on $s,s',t,t'$. This contradiction completes the proof. \end{proof} \begin{figure}[htbp] \begin{center} \begin{tikzpicture}[x=1cm,y=1cm] \draw[line cap=round, line width = 0.8cm, color=LightPink] (-0.05,2) -- (0.05,2) (1,2) -- (11,2) (0,0) -- (3,0) (4,0) -- (11,0); \foreach \i in {0,1,6,8,11} { \node[smallvertex] (p\i) at (\i,2) {}; } \foreach \i in {0,3,4,6,11} { \node[smallvertex] (q\i) at (\i,0) {}; } \draw[blue,thick] (p0) -- (p1) (q3) -- (q4) (p0) -- (q3) -- (p6) (p0) -- (q6) -- (p8); \draw[black] (p1) -- (p11) (q0) -- (q3) (q4) -- (q11); \draw (p0) node[anchor=south] {$s$}; \draw (p6) node[anchor=south] {$u$}; \draw (p8) node[anchor=south] {$v$}; \draw (p11) node[anchor=south] {$t$}; \draw (q0) node[anchor=north] {$s'$}; \draw (q3) node[anchor=north] {$u'$}; \draw (q6) node[anchor=north] {$v'$}; \draw (q11) node[anchor=north] {$t'$}; \draw (5.5,3) node {$P'$}; \draw (5.5,-1) node {$Q'$}; \end{tikzpicture} \end{center} \caption{How to get a terminal-$K_4$ minor: red parts are contracted into single nodes, the blue edges will then form a $K_4$.} \label{fig:k4} \end{figure} To conclude the characterization of terminal-$\ensuremath{K_{2,3}}$ minor free graphs, we use (a generalization of) the celebrated 2-linkage theorem. Take a planar graph $H$, whose outer face boundary is the cycle $t_1,t_2,\ldots,t_k$, and whose inner faces are triangles. For each inner triangle, add a new clique of arbitrary size, and connect each vertex of the clique to the vertices of the triangle.
Any graph built this way is called a \emph{$(t_1,\ldots,t_k)$-web}, or a $\{t_1, \ldots ,t_k\}$-web if we do not specify the ordering. Note that a $Z$-web, for some set $Z$, can be described via \emph{Okamura-Seymour instances} (OS-instances). An OS-instance is a planar graph where all terminals appear on the boundary of the outer face. An {\em Extended OS Instance} is obtained from an OS-instance by adding arbitrary graphs, called \emph{$3$-separated sets}, each connected to up to three vertices of some inner face of the Okamura-Seymour instance. We also require that any two $3$-separated sets in a common face do not cross each other in that face. Extended OS instances are precisely the $Z$-webs. \begin{theorem}[Seymour~\cite{seymour1980disjoint}, Shiloach~\cite{shiloach1980polynomial}, Thomassen~\cite{thomassen19802}]\label{th:linkage} Let $G$ be a graph, and $s_1,\ldots,s_k \in V(G)$. Suppose there are no two disjoint paths, one with extremities $s_i$ and $s_{i'}$, the other with extremities $s_j$ and $s_{j'}$, with $i < j < i' < j'$. Then $G$ is a subgraph of an $(s_1,s_2,\ldots,s_k)$-web. \end{theorem} The linkage theorem is usually stated in the special case when $k = 4$, but the extension presented here is folklore. One can reduce the general case to the case $k=4$ by identifying the vertices $s_1,\ldots,s_k$ with every other inner vertex of a ring grid with $7$ circular layers and $2k$ rays, and choosing 4 vertices of the outer layer, labelling them $s,t,s',t'$ in this order, and connecting them in a square --- see Figure~\ref{fig:linkage-reduction}.
It is easy to prove that there are two vertex-disjoint paths, one with extremities $s$ and $s'$, the other with extremities $t$ and $t'$, in the graph built this way if and only if there are two disjoint paths as in the theorem in the original graph (for instance, use the middle layer to route the path from $s$ to $s_i$ with only 2 bends; then the remaining graph is a sufficiently large subgrid to route the three other paths). Because the grid is 3-connected, its embedding is unique and we get that $G$ is embedded inside the inner layer of the ring, from which the general version of the theorem is deduced. \begin{figure}[htbp] \begin{center} \begin{tikzpicture}[x=0.25cm,y=0.25cm] \foreach \theta in {0,20,...,340} { \foreach \distance in {4,5,...,10} { \node[smallvertex] (n\theta\distance) at (\theta:\distance) {}; } \foreach \distance in {4,5,...,9} { \draw (\theta:\distance) -- +(\theta:1); } } \foreach \theta in {0,40,...,320} { \node[terminal] (t\theta) at (\theta:4) {}; } \foreach \distance in {4,5,...,10} { \draw (0,0) circle[radius=\distance]; } \draw[thick,looseness=1.5] (n4010) to[out=110,in=50] (n12010); \draw[thick,looseness=1.5] (n12010) to[out=190,in=170] (n24010); \draw[thick,looseness=1.5] (n24010) to[out=290,in=250] (n32010); \draw[thick,looseness=1.5] (n32010) to[out=30,in=330] (n4010); \draw (n4010) node[anchor = south west] {$s$}; \draw (n12010) node[anchor = south east] {$t$}; \draw (n24010) node[anchor = north east] {$s'$}; \draw (n32010) node[anchor = north west] {$t'$}; \end{tikzpicture} \end{center} \caption{Gadget for the proof of the general linkage theorem.} \label{fig:linkage-reduction} \end{figure} By using Theorem~\ref{th:linkage} with Proposition~\ref{prop:no-2-linkage}, we get that any $2$-connected terminal-$\ensuremath{K_{2,3}}$ free graph is a subgraph of a $Z$-web where $Z$ is the set of terminals. This now completes the proof of Theorem~\ref{thm:k23char}. \hspace{\stretch{1}} $\Box$ We now establish Corollary~\ref{cor:k23char}.
\begin{proof} If $G$ is terminal-$\ensuremath{K_{2,3}}$ minor-free, then clearly contracting all blocks but one must create a terminal-$\ensuremath{K_{2,3}}$ free instance. Conversely, suppose that $G$ has a terminal-$\ensuremath{K_{2,3}}$ minor. Since this minor is $2$-connected, it must be a minor of a graph obtained by contracting or deleting all the edges of every $2$-connected component except one. Call this last block $B$. Hence the terminal-$\ensuremath{K_{2,3}}$ minor is a minor of the graph obtained by contracting all the edges not in $B$. \end{proof} \subsection{A Consequence for Multiflows} \medskip Recall from the introduction that for a graph $G$ and $Z \subseteq V(G)$, we call $(G,Z)$ cut-sufficient if for any multi-flow instance (capacities on $G$, demands between terminals in $Z$), we have feasibility if and only if the cut condition holds. \noindent \begin{repcorollary}{cor:flows} $(G,Z)$ is cut-sufficient if and only if it is terminal-$\ensuremath{K_{2,3}}$ free. \end{repcorollary} \begin{proof} We first establish a lemma which we use again in the next section. \begin{lemma} Let $G$ be an extended OS instance and $F$ be a $3$-separated graph whose attachment vertices to the planar part are $\{x,y,z\}$. We may define a new graph $G'$ from $G$ by removing $V(F) \setminus \{x,y,z\}$ and adding a new vertex $s$ with edges $sx,sy,sz$ of capacities $c_x,c_y,c_z$, so that minimum cuts separating disjoint sets of terminals in $Z$ have the same capacities in $G'$ and in $G$. \end{lemma} \begin{proof} For each $\alpha \in \{x,y,z\}$, let $c_\alpha$ be the value of a minimum cut in $F$ separating $\alpha$ from $\{x,y,z\} \setminus \{\alpha\}$. We use $S_\alpha$ to denote the shore of such a cut in $F$, where $\alpha \in S_{\alpha}$. That is, we replace $F$ in $G$ by a claw whose central vertex is the new vertex $s$ and whose leaves are $x$, $y$ and $z$, where the capacity of $s\alpha$ is $c_\alpha$ for each $\alpha \in \{x,y,z\}$.
We claim that this transformation preserves the values of minimum cuts between sets of terminals. Notice that $c_\alpha \leq \sum_{\beta \in \{x,y,z\} \setminus \{\alpha\}} c_\beta$, hence a minimum cut $S$ of $G'$ containing $x$ but none of $y$, $z$, does not contain $s$. For a cut $S'$ in $G'$ with $x \in S'$ and $s, y,z \notin S'$, we may then associate a cut $S$ in $G$ with the same capacity, by taking $S := S' \cup S_x$. Conversely, given a cut $S$ of $G$ with $x \in S$, $y, z \notin S$, the cut $S' := S \setminus (V(F) \setminus \{x,y,z\})$ has capacity at most the capacity of $S$. Thus the values of minimum terminal cuts are preserved. \end{proof} Since the $3$-separated graphs are non-crossing, we may iterate the process to obtain the following. \begin{lemma}\label{lem:upshot} For any extended OS instance $G$ we may replace each $3$-separated graph by a degree-3 vertex to obtain an equivalent (planar) OS instance $G'$. It is equivalent in that for any partition $Z_1 \cup Z_2 = Z$, the value of a minimum cut separating $Z_1,Z_2$ in $G$ is the same as it is in $G'$. \end{lemma} We now return to the proof of the corollary. First, if there is a terminal-$\ensuremath{K_{2,3}}$ minor then we obtain a ``bad'' multiflow instance as follows. Each deleted edge is assigned a capacity of $0$, and each contracted edge a capacity of $\infty$. The remaining 6 edges have unit capacity. We now define four unit demands: one between the two degree-3 nodes of the terminal minor, and a triangle of demands on the remaining three nodes. It is well-known that this instance has a flow-cut gap of $\frac{4}{3}$, cf.~\cite{chekuri2010flow,chakrabarti2012cut}. Now suppose that $G$ is terminal-$\ensuremath{K_{2,3}}$ free and consider a multiflow instance with demands on $Z$. By Lemma~\ref{lem:upshot}, we may replace each 3-separated graph by a degree-3 vertex and this new OS instance will satisfy the cut condition if the old one did.
Hence the Okamura-Seymour Theorem~\cite{Okamura81} yields a half-integral multiflow in the new instance. We now show that the flow in the modified instance can be mapped back to the original extended OS instance. We do this one $3$-separated graph at a time. Consider the total flow on paths that use the new edges through $s$ obtained via the reduction, and let $d(xy),d(yz),d(zx)$ be the amounts of this flow routed between each pair. We claim that these can be routed in the original $F$. First, it is easy to see that this instance on $F$ satisfies the cut condition. Any violated cut $\delta_F(S)$ would contain exactly one of $x,y,z$, say $x$. Hence this cut would have capacity less than $d(xy) + d(xz)$; but since this flow is routed through $s$, this value must be at most $c_x$, which is a contradiction. Finally, the cut condition is sufficient to guarantee a multiflow in any graph if demands only arise on the edges of $K_4$, cf. Corollary 72.2a~\cite{schrijver2003combinatorial}. Hence we can produce the desired flow paths in $F$. \end{proof} \section{General Case: Gomory-Hu Terminal Trees in terminal-$\ensuremath{K_{2,3}}$ minor free graphs} \label{sec:last} In this section we prove Theorem~\ref{thm:minorGH} using the characterization of terminal-$\ensuremath{K_{2,3}}$ minor free graphs. The high level idea is a reduction to Theorem~\ref{th:gh-subtrees} by contracting away the non-terminal nodes in the graph. In the following we let $(G,Z)$ denote a connected graph $G$ and terminals $Z \subseteq V(G)$. Recall that the classical Gomory-Hu Algorithm produces a GH $Z$-Tree $T=(V(T),E(T))$ where formally $V(T)$ is a partition $\mathcal{P}=\{B(v): v \in Z\}$ of $V(G)$. We call $B(v)$ the {\em bag} for terminal $v$ and informally one often thinks of $V(T)=Z$. In addition each edge $st \in E(T)$ identifies a minimum $st$-cut in $G$, i.e., it is {\em encoding} as per the discussion following Definition~\ref{defn:encoding}.
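For intuition, the classical construction recalled above can be made concrete. The following is an illustrative Python sketch of Gusfield's simplification of the Gomory-Hu algorithm for the unrestricted case $Z = V(G)$ (so every vertex is its own bag), which uses only $|V|-1$ max-flow computations and no contractions. The graph encoding, the Edmonds-Karp max-flow subroutine and all function names are our own assumptions, not notation from this paper.

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow; returns (flow value, source shore of a min s-t cut).
    An undirected edge {u,v} of capacity c is modelled as arcs u->v and v->u,
    each of capacity c, the standard reduction for undirected flow."""
    res = {u: dict(nbrs) for u, nbrs in cap.items()}  # residual capacities
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:          # BFS for a shortest augmenting path
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:                        # no augmenting path: done
            return flow, set(parent)               # reachable set = min-cut shore
        b, v = float('inf'), t                     # bottleneck along the path
        while parent[v] is not None:
            b = min(b, res[parent[v]][v])
            v = parent[v]
        v = t
        while parent[v] is not None:               # push b units of flow
            u = parent[v]
            res[u][v] -= b
            res[v][u] = res[v].get(u, 0) + b
            v = u
        flow += b

def gomory_hu(nodes, edges):
    """Gusfield's variant: tree edge (v, parent[v]) has weight weight[v], and
    the minimum weight on the tree path between two vertices equals the
    capacity of a minimum cut separating them in the graph."""
    cap = defaultdict(dict)
    for u, v, c in edges:
        cap[u][v] = cap[u].get(v, 0) + c
        cap[v][u] = cap[v].get(u, 0) + c
    for x in nodes:
        cap.setdefault(x, {})
    nodes = list(nodes)
    parent = {v: nodes[0] for v in nodes[1:]}
    weight = {}
    for i, v in enumerate(nodes[1:], start=1):
        f, shore = max_flow(cap, v, parent[v])
        weight[v] = f
        for w in nodes[i + 1:]:                    # re-hang later nodes cut off with v
            if parent[w] == parent[v] and w in shore:
                parent[w] = v
    return parent, weight
```

The returned tree is flow-equivalent: the smallest weight on the tree path between any two vertices equals their minimum cut capacity, and the shores computed along the way supply the encoded cuts.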
We say that GH $Z$-tree $T$ occurs as a {\em bag minor} in $G$ if (i) each bag induces a connected graph $G[B(v)]$ and (ii) for each $st \in E(T)$, there is an edge of $G$ between $B(s)$ and $B(t)$. We say that $T$ occurs as a {\em weak bag minor} if it occurs as a bag minor after deletion of some non-terminal vertices (from its bags and $G$). \begin{definition} The pair $(G,Z)$ has the {\em GH Minor Property} if for any subgraph $G'$ with capacities $c'$, there is a GH Tree which occurs as a bag minor in $G'$. The pair $(G,Z)$ has the {\em weak} GH Minor Property if such GH trees occur as a weak bag minor. \end{definition} An example where we have the weak but not the (strong) property is for $\ensuremath{K_{2,3}}$ where $Z$ consists of the degree $2$ vertices and one of the degree $3$ vertices, call it $t$. Clearly this is terminal-$\ensuremath{K_{2,3}}$ minor free since it only has $4$ terminals. The unique GH Tree $T$ is obtained from $G$ by deleting the non-terminal vertex and assigning capacity $2$ to all edges in the $3$-star. Hence $T$ is obtained as a minor (in fact a subgraph) of $G$. However, the bag $B(t)$ consists of the $2$ degree-$3$ vertices which do not induce a connected subgraph. Hence $T$ does not occur as a bag minor. Fortunately, such instances are isolated and arise primarily due to instances with at most $4$ terminals. We handle these separately. \begin{proposition} \label{prop:blech} Let $G$ be an undirected, connected graph and $Z$ be a subset of at most $4$ terminals. Suppose that edge-capacities are given so that no two central cuts have the same capacity. Then the unique GH Tree $T$ occurs as a weak bag minor, and if $T$ is a path, then it occurs as a bag minor. \end{proposition} We defer the proof of this proposition to the very end. In the following it is useful to see how a GH $Z$-Tree bag minor (weak or strong) immediately implies such a minor for some $Z' \subseteq Z$.
\begin{lemma} \label{lem:subset} Let $T$ be a GH $Z$-Tree bag minor for some capacitated graph $G$ and let $v \in Z$. Let $uv \in E(T)$ be an edge incident to $v$ of maximum weight $c'(uv)$ in $T$. If we set $B'(u) = B(v) \cup B(u)$ and $B'(x)=B(x)$ for each $x \in Z \setminus \{u,v\}$, then the resulting partition defines a GH $(Z \setminus v)$-Tree $T'$ which is a bag minor. \end{lemma} \begin{proof} Clearly $T'$ is a bag minor and every fundamental cut of $T$, other than that of $uv$, is still a fundamental cut of $T'$. It remains to show that for any $a,b \in Z \setminus v$, there is a minimum $ab$-cut that does not correspond to the fundamental cut of $uv$. This is immediate if the unique $ab$-path $P$ in $T$ does not contain $uv$. If it does contain $uv$, then since $a,b \neq v$, the $ab$-path in $T$ contains some edge $vw$. But since $c'(vw) \leq c'(uv)$, the result follows. \end{proof} \begin{reptheorem}{thm:minorGH} Let $G$ be an undirected graph and $Z \subseteq V(G)$. $(G,Z)$ has the weak GH Minor Property if and only if $(G,Z)$ is a terminal-$\ensuremath{K_{2,3}}$ minor free graph. Moreover, if none of $G$'s blocks is a $4$-terminal instance, then $(G,Z)$ has the GH Minor Property. \end{reptheorem} \begin{proof} If $G$ has a terminal-$\ensuremath{K_{2,3}}$ minor, then by appropriately setting edge capacities to $0,1$ or $\infty$ we find a case where $G$ does not have the desired bag minor. So we now assume that $G$ is a terminal-$\ensuremath{K_{2,3}}$ minor free graph. Let $G'$ be some subgraph of $G$ with edge capacities $c(e)>0$, perturbed so that all minimum cuts are unique. We show that the unique GH $Z$-tree occurs as a bag minor. We deal first with the case where $G'$ has cut vertices. Note that one may iteratively remove any leaf blocks which do not contain terminals. This operation essentially does not impact the GH $Z$-Tree. Now consider any block $L$. Contracting all other blocks into $L$ will put a terminal at each cut vertex in $L$.
Since such minors must be terminal-$\ensuremath{K_{2,3}}$ free, we may actually add all cut vertices to $Z$ in the original graph, and the resulting configuration is still terminal-$\ensuremath{K_{2,3}}$ free. Henceforth we assume that $Z$ includes these extra vertices and show the desired bag minor exists for this terminal set. This is sufficient since we can then retrieve the bag minor for the original terminal set via Lemma~\ref{lem:subset}. One checks that a GH $Z$-Tree is obtained by gluing together the appropriate GH terminal trees in each block. Moreover, since each cut vertex is a terminal, if each block's tree is a bag minor (resp. weak bag minor), then the whole tree is a bag minor (resp. weak bag minor). Therefore it is now sufficient to prove the result in the case where $G'$ is $2$-connected. If $G'$ has at most $4$ terminals, then Proposition~\ref{prop:blech} asserts that it has a weak bag minor for a GH tree. Moreover, if it has fewer than $4$ terminals, then its GH Tree is a path and hence occurs as a bag minor. So we now assume that $G'$ contains at least $5$ terminals and hence it is an extended OS instance whose outside face is a simple cycle. Lemma~\ref{lem:upshot} implies that we may replace each $3$-separated set by a degree-$3$ vertex and the resulting graph is planar and has the same pairwise connectivities amongst vertices in $Z$. It is easy to check that any $Z$-tree bag minor in this new graph is also such a minor in the original instance. Therefore, it is sufficient to show that any planar OS instance with terminals on the outside face has the desired GH tree bag minor. Denote by $t_1,t_2,\ldots,t_{|Z|}$ the terminals in the order in which they appear on the boundary of the outer face. Let $\{B(t): t \in Z\}$ be the bags associated with the (necessarily unique) GH $Z$-tree $T$. We show that (i) each $G'[B(t)]$ is connected and (ii) for any $st \in T$, there is some edge of $G'$ between $B(s)$ and $B(t)$.
Consider the fundamental cuts associated with edges incident to some terminal $t$. Let $X_1,X_2, \ldots X_k$ be their shores which do not contain $t$. Since any min-cut is central, each $X_i$ intersects the outside face in a subpath of its boundary. Hence, similar to Claim~\ref{claim:xXiconnected} (cf. Figure~\ref{fig:ordering}), we can order them $X_1,\ldots,X_k$ in clockwise order on the boundary with $t$ between $X_k$ and $X_1$. The next two claims complete the proof of the theorem. \begin{claim} For each terminal $t$, $G'[B(t)]$ is connected. \end{claim} \begin{proof} By contradiction, let $C$ be a component of $G' \setminus (X_1 \cup \ldots \cup X_k)$ which does not contain $t$. If $N(C) \subseteq X_i$ for some $i \in \{1,\ldots,k\}$, then $\delta(C \cup X_i)$ is a cut separating $t$ from any vertex in $X_i$ with capacity smaller than that of $\delta(X_i)$, contradicting the minimality of $X_i$. Otherwise choose $j < j'$ such that $N(C) \cap X_j, N(C) \cap X_{j'}$ are non-empty and $j'-j$ is maximized. Call $(j,j')$ the {\em span} of $C$. Without loss of generality $C$ has the largest span amongst all components other than $B(t)$ (whose span is $(1,k)$ incidentally). Moreover, amongst those (non $B(t)$) components with span $(j,j')$ we may assume that $C$ was selected to maximize the graph ``inside'' the embedding of $G'[X_j \cup X_{j'} \cup M']$, where $M'=C \cup X_{j+1} \cup \ldots \cup X_{j'-1}$. In particular, any component $C'$ with neighbours in $M'$ has $N(C') \subseteq M' \cup X_j \cup X_{j'}$. Let $M$ be the union of $M'$ and all such components $C'$. By construction $M$ is non-empty and $t \not\in M$, however $d(M,V \setminus (X_j \cup X_{j'} \cup M))=0$, which contradicts Lemma~\ref{lem:middle} if we take $X=X_j,Y=X_{j'}$. \end{proof} \begin{claim}\label{claim:shores} For each $i \in \{1,\ldots,k\}$, there is an edge from a vertex in $B(t)$ to a vertex in $X_i$.
\end{claim} \begin{proof} By contradiction, suppose $\delta(B(t),X_i) = \emptyset$, for some $i \in \{1,\ldots,k\}$. Let $j$ be maximum and $j'$ be minimum such that $j < i < j'$, $\delta(B(t),X_j) \neq \emptyset$ and $\delta(B(t),X_{j'}) \neq \emptyset$. Note that $j$ and $j'$ are well-defined because $X_1$ and $X_k$ are adjacent to $B(t)$ by the outer cycle. If we define $M := X_{j+1} \cup \ldots \cup X_{j'-1}$, then $d(M, V \setminus (M \cup X_j \cup X_{j'}))=0$, contradicting Lemma~\ref{lem:middle} where we take $X=X_j,Y=X_{j'}$. \end{proof} \end{proof} Finally, we provide the proof for Proposition~\ref{prop:blech}. \begin{proof} We first consider the case where we have $4$ terminals and let $T$ be the unique GH tree. Suppose that $T$ is a star with center vertex $1$ and let $B_1,B_2, B_3,B_4$ be the bags. Since each fundamental cut of $T$ is central (in $G$) we have that $B_2,B_3,B_4$ each induces a connected subgraph of $G$. Let $Y \subseteq B_1$ be those vertices (if any) which do not lie in the same component of $G[B_1]$ as $1$. We may try to produce $T$ as a weak bag minor of $G$ by deleting $Y$. This fails only if for some $j \geq 2$, $d(B_1 \setminus Y,B_j)=0$; without loss of generality $j=2$. Let $R=B_2 \cup Y \cup B_3, S=B_2 \cup Y \cup B_4$. It follows that $d(R \cap S,V-(R \cup S))=0$ and hence $c(R)+c(S)=c(R \setminus S)+c(S \setminus R)=c(B_3)+c(B_4)$. But $\delta(R)$ is a $13$-cut distinct from the fundamental cut $\delta(B_3)$, and so $c(R) > c(B_3)$ by uniqueness of minimum cuts. Similarly, $c(S) > c(B_4)$. But this now contradicts the previously derived equality. Consider now the case where $T$ is a path, say $1,2,3,4$. Since each fundamental cut is central, $G[B_1],G[B_4]$ are connected. Now suppose that $G[B_2]$ is not connected. Let $M$ be the set of vertices which do not lie in the same component as $2$. If we define $X=B_1,Y=B_3 \cup B_4$ and $t=2,x=1,y=3$, then Lemma~\ref{lem:middle} implies that $d(M,B_2 \setminus M)>0$, a contradiction. It remains to show that $d(B_i,B_{i+1})>0$ for each $i=1,2,3$.
Suppose first that $d(B_1,B_2)=0$. Then $c(B_1 \cup B_3 \cup B_4) \leq c(B_3 \cup B_4)$, contradicting the fact that $B_3 \cup B_4$ induces the unique minimum $23$-cut. Hence $d(B_1,B_2)>0$ and by symmetry $d(B_3,B_4)>0$. Finally suppose that $d(B_2,B_3)=0$. One then easily checks that $c(B_1)+c(B_4) \geq c(B_2)+c(B_3)$. But then either $B_2$ induces a second minimum $12$-cut, or $B_3$ induces another minimum $34$-cut. In either case, we have a contradiction. The final cases where $|Z| \leq 3$ follow easily by the same methods. \end{proof} \section{Acknowledgements} This paper is dedicated to T.C. Hu for his elegant and fundamental contributions to combinatorial optimization. Some of this work was completed during a visit to a thematic semester in combinatorial optimization at the Hausdorff Research Institute for Mathematics in Bonn. We are grateful to the institute and the organizers of the semester. We thank Bruce Reed, who informed us about the reduction used to derive the general linkage theorem.
\section{Introduction} \subsection{} Let $W$ be a finite complex reflection group. Associated to $W$ is a family of noncommutative algebras, the rational Cherednik algebras. These algebras depend on a pair of parameters, $t$ and $\mbf{c}$ (precise definitions are given in section \ref{subsection:defns}). At $t = 0$ the algebras are finite modules over their centres. The aim of this paper is to continue the study of a certain finite dimensional quotient of the rational Cherednik algebra at $t = 0$, the restricted rational Cherednik algebra. The blocks of the restricted rational Cherednik algebra induce a partitioning of the set $\LW$ of irreducible $W$-modules, called the Calogero-Moser partition. Using the geometry of certain quiver varieties, Gordon and Martino \cite{12} have given an explicit combinatorial description of the Calogero-Moser partition when $W = C_m \wr S_n$. We show that Clifford theoretic arguments can be used to extend this result to the normal subgroups $G(m,d,n)$ of $C_m \wr S_n$. In their paper \cite{12}, Gordon and Martino conjecture that the Calogero-Moser partition should be related, in some precise way, to the Rouquier blocks of a particular Hecke algebra associated to the same complex reflection group $W$. This conjecture is refined in \cite{Mo} and, by comparing the combinatorial description of these partitions, is shown to be true when $W = C_m \wr S_n$. A consequence of the main result of this paper is that the conjecture as stated in \cite[Conjecture 2.7 (i)]{Mo} is true for all $G(m,d,n)$. However it is important to note here that, when $n = 2$ and $d$ is even, there are certain ``unequal parameter'' cases where our methods fail (see (\ref{sec:unequal}) for details).
In these cases it is not known what the Calogero-Moser partition is.\typeout{Mathematics Subject Classification (2010) 16G99,05E10.} \section{The rational Cherednik algebra at $t = 0$} \subsection{Definitions and notation}\label{subsection:defns} Let $W$ be a complex reflection group, $\mathfrak{h}$ its reflection representation over $\mathbb{C}$ with $\dim \mathfrak{h} = n$, and $\mathcal{S}(W)$ the set of all complex reflections in $W$. Let $( \cdot, \cdot ) : \mathfrak{h} \times \mathfrak{h}^* \rightarrow \mathbb{C}$ be the natural pairing defined by $(y,x) = x(y)$. For $s \in \mathcal{S}(W)$, fix $\alpha_s \in \mathfrak{h}^*$ to be a basis of the one dimensional space $\Im (s - 1)|_{\mathfrak{h}^*}$ and $\alpha_s^{\vee} \in \mathfrak{h}$ a basis of the one dimensional space $\Im (s - 1)|_{\mathfrak{h}}$, normalised so that $\alpha_s(\alpha_s^\vee) = 2$. Choose $\mathbf{c} : \mathcal{S}(W) \rightarrow \mathbb{C}$ to be a $W$-equivariant function and $t$ a complex number. The \textit{rational Cherednik algebra}, $H_{t,\mathbf{c}}(W)$, as introduced by Etingof and Ginzburg \cite[page 250]{1}, is the quotient of the skew group algebra of the tensor algebra, $T(\mf{h} \oplus \mf{h}^*) \rtimes W$, by the ideal generated by the relations \begin{equation}\label{eq:rel} [x_1,x_2] = 0, \qquad [y_1,y_2] = 0, \qquad [x_1,y_1] = t (y_1,x_1) - \sum_{s \in \mathcal{S}(W)} \mathbf{c}(s) (y_1,\alpha_s)(\alpha_s^\vee,x_1) s, \end{equation} \noindent for all $x_1,x_2 \in \mathfrak{h}^* \textrm{ and } y_1,y_2 \in \mathfrak{h}$.\\ \noindent For any $\nu \in \mathbb{C} \backslash \{ 0 \}$, the algebras $H_{\nu t,\nu \mathbf{c}}(W)$ and $H_{t,\mathbf{c}}(W)$ are isomorphic. In this article we will only consider the case $t = 0$, therefore we are free to rescale $\mbf{c}$ by $\nu$ whenever this is convenient.\\ \noindent A fundamental result for rational Cherednik algebras, proved by Etingof and Ginzburg \cite[Theorem 1.3]{1}, is that the PBW property holds for all $t, \mbf{c}$.
That is, there is a vector space isomorphism \beq\label{eq:PBW} H_{t, \mbf{c}}(W) \stackrel{\sim}{\rightarrow} \mathbb{C} [\mathfrak{h}] \otimes \mathbb{C} W \otimes \mathbb{C} [\mathfrak{h}^*]. \eeq \subsection{The restricted rational Cherednik algebra}\label{sec:restricteddefinition} Let us now concentrate on the case $t = 0$; we omit $t$ from the notation from now on. In this case the algebra $H_{\mbf{c}}(W)$ is a finite module over its centre $Z_{\mbf{c}}(W)$. By \cite[Proposition 4.5]{1}, we have an inclusion $A = \mathbb{C}[\mf{h}]^W\otimes\mathbb{C}[\mf{h}^*]^W \subset Z_{\mathbf{c}}(W)$. This allows us to define the \textit{restricted rational Cherednik algebra} $\HW$ as \begin{displaymath} \HW = \frac{H_{\mathbf{c}}(W)}{A_+H_{\mathbf{c}}(W)}, \end{displaymath} \noindent where $A_+$ denotes the ideal in $A$ of elements with zero constant term. The PBW property (\ref{eq:PBW}) implies that $\HW \cong \mathbb{C} [\mathfrak{h}]^{co W} \otimes \mathbb{C} W \otimes \mathbb{C} [\mathfrak{h}^*]^{coW}$ as vector spaces, where $\mathbb{C} [\mathfrak{h}]^{co W} := \mathbb{C}[ \mathfrak{h}] / \langle \mathbb{C}[\mathfrak{h}]^W_+ \rangle$ is the coinvariant ring of $\mathfrak{h}$ with respect to $W$. In particular, $\dim \HW = |W|^3$. The inclusion $\mathbb{C}[\mathfrak{h}]^W \otimes \mathbb{C}[\mathfrak{h}^*]^W \hookrightarrow Z_{\mathbf{c}}(W)$ defines a surjective, finite morphism $\Upsilon \, : \,\textrm{Spec}\, (Z_{\mathbf{c}}(W)) \twoheadrightarrow \mathfrak{h} / W \times \mathfrak{h}^*/W$. \subsection{The Calogero-Moser partition}\label{sec:partitions} Fix a complete set of non-isomorphic simple $W$-modules and denote it by $\LW$. Following \cite{12} we define the \textit{Calogero-Moser partition} of $\textrm{Irr} \, \HW$ to be the set of equivalence classes of $\textrm{Irr} \, \HW$ under the equivalence relation $L \sim M$ if and only if $L$ and $M$ belong to the same block of $\HW$. The set of equivalence classes will be denoted $\CMW$.
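To fix ideas, we record the smallest example of the objects defined above (an illustration only; it is not used in the sequel). Take $W = \mathbb{Z}/2\mathbb{Z}$ acting on $\mathfrak{h} = \mathbb{C}$ by $x \mapsto -x$. Then \begin{displaymath} \mathbb{C}[\mathfrak{h}]^W = \mathbb{C}[x^2], \qquad \mathbb{C}[\mathfrak{h}]^{co W} = \mathbb{C}[x]/(x^2), \qquad \dim \HW = 2 \cdot 2 \cdot 2 = 8 = |W|^3, \end{displaymath} and $\LW$ consists of the trivial and sign representations, so $\CMW$ consists either of two singletons or of a single part, according to whether or not the two corresponding simple modules lie in distinct blocks of $\HW$.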
It has been shown, \cite[Proposition 4.3]{Baby}, that $\textrm{Irr} \, \HW$ can be naturally identified with $\LW$. Thus the Calogero-Moser partition $\CMW$ will be thought of as a partition of $\textrm{Irr}(W)$ throughout this article. Given $\lambda, \mu \in \LW$ we say that $\lambda$ and $\mu$ belong to the same part of $\CMW$ if they are in the same equivalence class. \section{Blocks of normal subgroups} \subsection{} Throughout this section we fix an irreducible complex reflection group $W$ with reflection representation $\mathfrak{h}$. Moreover we assume that there exists a normal subgroup $K \triangleleft W$ such that $K$ acts, via inclusion in $W$, on $\mathfrak{h}$ as a complex reflection group (though $\mathfrak{h}$ need not be irreducible as a $K$-module) and that $W/K \cong C_d$, the cyclic group of order $d$. Since $K$ is normal in $W$, the group $W$ acts on $\mc{S}(K)$ by conjugation. Let us fix a $W$-equivariant function $\mbf{c} \, : \, \mc{S}(K) \rightarrow \mathbb{C}$. We extend this to a $W$-equivariant function $\mbf{c} \, : \, \mc{S}(W) \rightarrow \mathbb{C}$ by setting $\mbf{c}(s) = 0$ for $s \in \mc{S}(W) \backslash \mc{S}(K)$. Note that the partition of $\mc{S}(K)$ into $K$-orbits can be finer than the corresponding partition into $W$-orbits. Thus a $K$-equivariant function on $\mc{S}(K)$ is not always $W$-equivariant. However, as will be shown below, this problem does not occur in the cases we consider. For our choice of parameter $\mbf{c}$, the defining relations (\ref{eq:rel}) show that the natural map $T (\mathfrak{h} \oplus \mathfrak{h}^*) \rtimes K \rightarrow H_{t,\mbf{c}}(W)$ descends to an algebra morphism $H_{t,\mbf{c}}(K) \rightarrow H_{t,\mbf{c}}(W)$. The PBW property (\ref{eq:PBW}) shows that this map is injective. \begin{prop} For $\mbf{c}$ as defined above, the algebra $H_{t,\mbf{c}}(K)$ is a subalgebra of $H_{t,\mbf{c}}(W)$.
\end{prop} \subsection{} As explained in the introduction, the goal of this article is to relate the Calogero-Moser partition of $K$ to the Calogero-Moser partition of $W$. However the algebra $\HK$ is not a subalgebra of $\HW$. To overcome this we study an intermediate algebra, $\HtK$, which is defined to be the image of $H_{\mbf{c}}(K)$ in $\HW$. Thus we are in the following setup: \begin{displaymath} \vcenter{ \xymatrix{ H_{0,\mbf{c}}(K) \ar@{->>}[d] \ar@{->}[r] & H_{0,\mbf{c}}(W) \ar@{->>}[d] \\ \HtK \ar@{->>}[d] \ar@{->}[r] & \HW \\ \HK & } } \end{displaymath} where the horizontal arrows are inclusions. To be precise, $\HtK := H_{0,\mbf{c}}(K) / A_+ \cdot H_{0,\mbf{c}}(K)$, where $A = \mathbb{C}[\mathfrak{h}]^W \otimes \mathbb{C}[\mathfrak{h}^*]^W$ and $A_+$ the ideal of polynomials with constant term zero. The PBW property (\ref{eq:PBW}) implies that $\HtK \cong \mathbb{C}[ \mathfrak{h}]^{co W} \otimes \mathbb{C} K \otimes \mathbb{C} [\mathfrak{h}^*]^{co W}$ and hence has dimension $|K| \cdot |W|^2$. The idea is to relate the block partition of $\HtK$ to $\CMW$ via the formalism of twisted symmetric algebras. The Proposition below shows that this allows us to deduce information about the partition $\CMK$. \subsection{}\label{thm:lifting} As noted in (\ref{sec:partitions}), the set $\{ L(\lambda) \, | \, \lambda \in \LK \}$ is a complete set of non-isomorphic simple modules for $\HK$. There is a natural surjective map $\HtK \twoheadrightarrow \HK$ and the kernel of this map is generated by certain central nilpotent elements of $\HtK$. Therefore the kernel is contained in the radical of $\HtK$. This implies that $\{ L(\lambda) \, | \, \lambda \in \LK \}$ is also a complete set of non-isomorphic simple modules for $\HtK$ and the block partition of $\HtK$ corresponds to a partition of the set $\LK$. In particular, the space $L(\lambda)$ is both a simple $\HK$ and $\HtK$-module. 
However, when we wish to consider $L(\lambda)$ as an $\HtK$-module we will denote it by $\tilde{L}(\lambda)$. \begin{prop} The Calogero-Moser partition $\CMK$ of $\LK$ equals the block partition of $\HtK$ on $\LK$; indeed, the blocks of $\HtK$ are the preimages of the blocks of $\HK$ under the natural map $\HtK \twoheadrightarrow \HK$. \end{prop} \begin{proof} Let us again denote by $A$ the algebra $\mathbb{C}[\mathfrak{h}]^W \otimes \mathbb{C}[\mathfrak{h}^*]^W$ and define $B = \mathbb{C}[\mathfrak{h}]^K \otimes \mathbb{C}[\mathfrak{h}^*]^K$. Then we have inclusions $A \subset B \subset Z(H_\mbf{c}(K)) \subset H_\mbf{c}(K)$. The Proposition will follow from an application of a result of B. M\"uller; the version which we use here is stated in \cite[Proposition 2.7]{Ramifications}. Recall that $A_+$ is the maximal ideal of elements with constant term zero in $A$. Let $B_+$ be the maximal ideal of elements with constant term zero in $B$. Fix $Z := Z(H_\mbf{c}(K))$ and $H := H_\mbf{c}(K)$. M\"uller's Theorem says that the primitive central idempotents of $H / A_{+} \cdot H$ are the images of the primitive idempotents of $Z / A_{+} \cdot Z$, and similarly the primitive central idempotents of $H / B_{+} \cdot H$ are the images of the primitive idempotents of $Z / B_{+} \cdot Z$. However $A_+ \cdot Z \subset B_+ \cdot Z$ and $B_+ \cdot Z / A_+ \cdot Z$ is a nilpotent ideal in $Z / A_+ \cdot Z$; therefore the primitive idempotents of $Z / B_+ \cdot Z$ are the images of the primitive idempotents of $Z / A_+ \cdot Z$. This implies that the primitive central idempotents of $H / B_+ \cdot H$ are the images of the primitive central idempotents of $H / A_+ \cdot H$. This is equivalent to the statement of the Proposition. \end{proof} \subsection{}\label{lem:algpoly} The following lemma will be required later.
\begin{lem} We can choose a set $\{ f_1, \ds, f_n \}$ of homogeneous, algebraically independent generators of $\mathbb{C}[\mathfrak{h}]^K$ and positive integers $a_1, \ds, a_n$ such that $\{ f_1^{a_1}, \ds, f_n^{a_n} \}$ is a set of homogeneous, algebraically independent generators of $\mathbb{C}[\mathfrak{h}]^W$ and $a_1 \cdots a_n = d$. \end{lem} \begin{proof} The ring $\mathbb{C}[\mathfrak{h}]^K$ is $\mathbb{N}$-graded with $(\mathbb{C}[\mathfrak{h}]^K)_0 = \mathbb{C}$. Therefore $\mf{m} := \mathbb{C}[\mathfrak{h}]^K_+$, the ideal of polynomials with zero constant terms, is the unique maximal, graded ideal of $\mathbb{C}[\mathfrak{h}]^K$. The group $W$ acts on $\mf{m}$ and hence also on $\mf{m}^2$. Let $U$ be a homogeneous, $W$-stable complement to $\mf{m}^2$ in $\mf{m}$. By \cite[Lemme 2.1]{BessisBonnafeRouquier}, $U$ generates $\mathbb{C}[\mathfrak{h}]^K$ and so $\mathbb{C}[\mathfrak{h}]^K = \mathbb{C}[U^*]$. The action of $W$ on $U^*$ factors through $C_d$. Since $\mathbb{C}[U^*]^{C_d} = \mathbb{C}[\mathfrak{h}]^W$ is a polynomial ring, the Chevalley-Shephard-Todd Theorem, \cite[Theorem 1.2]{CohenReflections}, says that $C_d$ acts on $U^*$ as a complex reflection group. Therefore we can decompose $U$ into a direct sum of one-dimensional, homogeneous $C_d$-modules, $U = \oplus_{i = 1}^n \mathbb{C} \cdot f_i$, and $C_d = C_{a_1} \times \cdots \times C_{a_n}$ such that the action of $C_d$ on $\mathbb{C} \cdot f_i$ factors through $C_{a_i}$ (with $C_{a_i}$ acting faithfully on $\mathbb{C} \cdot f_i$). Then $\mathbb{C}[\mathfrak{h}]^W = \mathbb{C}[f_1^{a_1}, \ds, f_n^{a_n}]$ and the fact that $\mathbb{C}[\mathfrak{h}]^W$ is a polynomial ring in $n$ variables means that the polynomials $f_1^{a_1}, \ds, f_n^{a_n}$ are algebraically independent. \end{proof} \begin{remark} For $W = G(m,1,n)$ and $K = G(m,d,n)$ (as defined in Section \ref{sec:example}) we can make an explicit choice of invariant polynomials as described in Lemma \ref{lem:algpoly}. 
Let $e_i(x_1, \dots, x_n)$ denote the $i^{th}$ elementary symmetric polynomial in $x_1, \dots , x_n$. By \cite[page 387]{CohenReflections}, the following is a choice of algebraically independent, homogeneous generators for $\mathbb{C} [\mathfrak{h}]^W$: \begin{displaymath} e_i(x_1^m, \dots , x_n^m), \quad 1 \le i < n \qquad \textrm{and} \qquad (x_1 \dots x_n)^{m}. \end{displaymath} In Lemma \ref{lem:algpoly} we take $f_n$ to be $(x_1 \dots x_n)^{\frac{m}{d}}$ and $f_i = e_i(x_1^m, \dots , x_n^m)$ for $1 \le i < n$ so that $a_i = 1$ for $1 \le i < n$ and $a_n = d$. \end{remark} \section{Automorphisms of rational Cherednik algebras} \subsection{} The group $W$ is a finite subgroup of $GL(\mathfrak{h})$. Let us choose an element $\sigma \in N_{GL(\mathfrak{h})}(W) \subset GL(\mathfrak{h})$. Then conjugation by $\sigma$ defines an automorphism of $W$, and we can regard $\sigma$ as an algebra automorphism of $\mathbb{C} W$ by making $\sigma$ act trivially on $\mathbb{C}$. Moreover $\sigma$ acts naturally on $\mathfrak{h}^*$ as $(\sigma \cdot x)(y) = x(\sigma^{-1} \cdot y)$ for $x \in \mathfrak{h}^*$ and $y \in \mathfrak{h}$. Therefore $\sigma$ also acts on $\mathbb{C}[\mathfrak{h}^*]$ and $\mathbb{C}[\mathfrak{h}]$. Let us explicitly write $\mc{S}(W) = \{ C_1, \ds , C_k \}$ for the set of conjugacy classes of reflections in $W$. Then $\sigma$ permutes the $C_i$'s and, regarding $\sigma$ as an element of the symmetric group $S_k$, we write $\sigma \cdot C_i = C_{\sigma(i)}$. It can be checked from the defining relations (\ref{eq:rel}) that the maps \bdm x \mapsto \sigma(x), \qquad y \mapsto \sigma(y), \qquad w \mapsto \sigma(w), \qquad x \in \mathfrak{h}^*, y \in \mathfrak{h}, w \in W \edm define an algebra isomorphism \bdm \sigma \, : \, H_{t, \mbf{c}}(W) \stackrel{\sim}{\longrightarrow} H_{t,\sigma(\mbf{c})}(W), \edm where $\sigma(\mbf{c}) = \sigma(c_1 , \ds , c_k) = (c_{\sigma^{-1}(1)}, \ds , c_{\sigma^{-1}(k)})$.
Since $\sigma$ normalizes $W$, there is a well defined action of $\sigma$ on $\mathbb{C}[\mathfrak{h}]^W \otimes \mathbb{C}[\mathfrak{h}^*]^W$. Hence $\sigma$ descends to an isomorphism $\sigma \, : \, \HW \stackrel{\sim}{\rightarrow} \bar{H}_{\sigma(\mbf{c})}(W)$. \subsection{} Now let us consider $K$. By definition $W \subset N_{GL(\mathfrak{h})}(K)$, therefore elements of $W$ act as isomorphisms between the various rational Cherednik algebras associated to $K$. Moreover, if we once again make the assumption that the parameter $\mbf{c}$ is $W$-equivariant then the elements of $W$ actually define automorphisms of $H_{t,\mbf{c}}(K)$. These induce automorphisms of $\HK$ and $\HtK$. Let $M$ be a module for one of the three algebras $\mathbb{C} K,\HK$ or $\HtK$. Then ${}^\sigma M$ is also a module for that algebra, where $M = {}^\sigma M$ as vector spaces, and if $a$ is an element of the algebra and $m \in M$, then $a \cdot_{\sigma} m = \sigma^{-1}(a) \cdot m$. The following lemma is standard. \begin{lem} Let $\lambda$ be a $K$-module and $\sigma \in W$. Then ${}^\sigma L(\lambda) \cong L({}^\sigma \lambda)$ and ${}^\sigma \tilde{L}(\lambda) \cong \tilde{L}({}^\sigma \lambda)$. \end{lem} \subsection{Clifford theory}\label{sec:Clifford} We now define an action of $C_d$ on $\HtK$. For $\eta \in C_d$, choose a lift $\sigma$ of $\eta$ in $W$ and let $\lambda \in \LK$. Define \bdm \eta \cdot \lambda = {}^\sigma \lambda, \quad \eta \cdot \tilde{L}(\lambda) = {}^\sigma \tilde{L}(\lambda). \edm Note that the action of $C_d$ is only well-defined up to isomorphism; therefore $C_d$ can be considered as acting on the isomorphism classes of the objects in $\HtK \text{-}\mathrm{mod}$. Given $\mu \in \LK$, the stabilizer subgroup of $C_d$ with respect to $\mu$ will be denoted $C_\mu$. Let $C_d^\vee = \Hom_{\textrm{gp}} (C_d, \mathbb{C}^*)$ be the group of characters of $C_d$. There is an action of $C_d^\vee$ on the isomorphism classes of the objects in $\HW \text{-}\mathrm{mod}$.
First let us define an action of $C_d^\vee$ on $\LW$: $\delta \cdot \lambda = \lambda \otimes \delta$, for $\delta \in C^\vee_d$ and $\lambda \in \LW$. The stabilizer subgroup of $C_d^\vee$ with respect to $\lambda$ will be denoted $C^\vee_\lambda$. We choose coset representatives $w_1, \ds , w_d$ of $K$ in $W$, then Lemma \ref{lem:algpoly} implies that $\HW = \bigoplus_{i} \HtK w_i$. Given a $\HW$-module $M$ we define $\delta \cdot M = M \otimes \delta$ with action \bdm h w_i \cdot( m \otimes \delta) = \delta(K w_i) (h w_i \cdot m) \otimes \delta. \edm This action does not depend on the choice of coset representatives and one can define $\delta$ as a functor on $\HW \text{-}\mathrm{mod}$, though we will not require this level of generality. \subsection{}\label{prop:Clifford} Let $\Res^W_K$ and $\Ind_K^W$ be the restriction and induction functors $\mathbb{C} K \text{-}\mathrm{mod} \leftrightarrows \mathbb{C} W \text{-}\mathrm{mod}$. Then Clifford's Theorem allows one to compare $\mathbb{C} K$ and $\mathbb{C} W$-modules via the induction and restriction functors, see \cite[Chapter 7]{CR} for details. When the quotient group is cyclic it is possible to deduce the following result (the proof of which can be found in \cite[Proposition 6.1]{7}). \begin{prop} Fix $\lambda \in \LW$ and write $\Res_K^W \lambda = \mu_1 \oplus \ds \oplus \mu_k$, where each $\mu_i$ is nonzero and irreducible. Then \begin{enumerate} \item $C_{\mu_i} = (C^\vee_d / C_\lambda^\vee)^\vee \subset C_d$, hence $|C_{\mu_i}| \cdot |C^\vee_\lambda| = d$, \item $C_d$ acts transitively on the set $\{ \mu_1, \ds , \mu_k \}$, \item the $\mu_i$ are pairwise non-isomorphic, \item $\Ind_K^W \mu_i = \bigoplus_{\delta \in C^\vee_d / C_\lambda^\vee} \delta \cdot \lambda$.
\end{enumerate} \end{prop} \subsection{} To relate the action of $C_d$ on $\HtK \text{-}\mathrm{mod}$ and $C_d^\vee$ on $\HW \text{-}\mathrm{mod}$ let us introduce the semisimple algebras \bdm A_W := \HW / \mbox{\textrm{rad}}\, \HW \qquad \textrm{ and } \qquad A_K := \HtK / \mbox{\textrm{rad}}\, \HtK. \edm Note that $A_K \subset A_W$ and there are natural induction and restriction functors, $\Ind_{A_K}^{A_W}$ and $\Res_{A_K}^{A_W}$. The functors \bdm E_W \, : \, \mathbb{C} W \text{-}\mathrm{mod} \rightarrow A_W \text{-}\mathrm{mod}, \qquad E_W(\lambda) := A_W \otimes_{\HW} \HW \otimes_{\mathbb{C}[\mathfrak{h}^*]^{co W} \rtimes W} \lambda \edm \bdm E_K \, : \, \mathbb{C} K \text{-}\mathrm{mod} \rightarrow A_K \text{-}\mathrm{mod}, \qquad E_K(\mu) := A_K \otimes_{\HtK} \HtK \otimes_{\mathbb{C}[\mathfrak{h}^*]^{co W} \rtimes K} \mu \edm are equivalences of categories with $E_W(\lambda) = L(\lambda)$ and $E_K(\mu) = \tilde{L}(\mu)$ for $\lambda \in \LW$ and $\mu \in \LK$. \begin{lem} The following diagram commutes up to natural equivalence. \beq\label{eq:commute} \vcenter{ \xymatrix{ \mathbb{C} W \text{-}\mathrm{mod} \ar[r]^{ E_W } \ar[d]^{\Res_K^W} & A_W \text{-}\mathrm{mod} \ar[d]^{\Res_{A_K}^{A_W}} \\ \mathbb{C} K \text{-}\mathrm{mod} \ar[r]_{E_K} \ar[u]^{\Ind_K^W} & A_K \text{-}\mathrm{mod} \ar[u]^{\Ind_{A_K}^{A_W}} } } \eeq \end{lem} \begin{proof} Let us write $\LW = \{ \lambda_1 , \ds, \lambda_k \}$, $\LK = \{ \mu_1, \ds, \mu_l \}$ and $a_{ij} \in \mathbb{N}$ such that $\Res_{K}^{W} \lambda_i = \oplus_j \, \mu_j^{\oplus a_{ij}}$. We begin by showing that the functors $E_W \circ \Ind_K^W$ and $\Ind_{A_K}^{A_W} \circ E_K$ are equivalent. The fact that $\mathbb{C} W = \bigoplus_{i} \lambda_i \otimes \lambda_i^*$ as a $\mathbb{C} W$-$\mathbb{C} W$-bimodule implies that $E_W (\mathbb{C} W) =\bigoplus_{i} L(\lambda_i) \otimes \lambda_i^*$ as an $A_W$-$\mathbb{C} W$-bimodule.
Similarly, $E_K (\mathbb{C} K) = \bigoplus_{j} \tilde{L}(\mu_j) \otimes \mu_j^*$ as an $A_K$-$\mathbb{C} K$-bimodule. Frobenius reciprocity implies that \bdm E_W \circ \Ind_K^W \mathbb{C} K \simeq \bigoplus_{ij} L(\lambda_i) \otimes (\mu_j^*)^{\oplus a_{ij}} \edm as an $A_W$-$\mathbb{C} K$-bimodule. The isomorphism $\HW \otimes_{\HtK} \tilde{\Delta}(\mu_j) \simeq \Delta( \Ind_K^W \mu_j)$ implies that \bdm \Ind_{A_K}^{A_W} \tilde{L}(\mu_j) \simeq \bigoplus_i L(\lambda_i)^{\oplus a_{ij}}, \edm and thus \bdm \Ind_{A_K}^{A_W} \circ E_K (\mathbb{C} K) \simeq \bigoplus_{ij} L(\lambda_i) \otimes (\mu_j^*)^{\oplus a_{ij}}, \edm as an $A_W$-$\mathbb{C} K$-bimodule. Since the functors $E_W \circ \Ind_K^W$ and $\Ind_{A_K}^{A_W} \circ E_K$ are exact, Watts' Theorem (\cite[Theorem 5.45]{Rotman}) says that $E_W \circ \Ind_K^W$ is naturally isomorphic to $E_W \circ \Ind_K^W (\mathbb{C} K) \otimes_{\mathbb{C} K} -$ and $\Ind_{A_K}^{A_W} \circ E_K$ is naturally isomorphic to $\Ind_{A_K}^{A_W} \circ E_K (\mathbb{C} K) \otimes_{\mathbb{C} K} -$. The required equivalence now follows from the general fact that if $A_1$ and $A_2$ are algebras and $B,C$ are isomorphic $A_1$-$A_2$-bimodules, then fixing an isomorphism $B \rightarrow C$ defines a natural isomorphism \bdm B \otimes_{A_2} - \, \stackrel{\sim}{\longrightarrow} C \otimes_{A_2} - \, : \, A_2 \text{-}\mathrm{mod} \longrightarrow A_1 \text{-}\mathrm{mod}. \edm The fact that the functors $E_K \circ \Res_{K}^{W}$ and $\Res_{A_K}^{A_W} \circ E_W$ are equivalent follows from the facts that $E_W \circ \Ind_K^W$ and $\Ind_{A_K}^{A_W} \circ E_K$ are equivalent, $(\Ind_K^W, \Res_K^W)$ and $(\Ind_{A_K}^{A_W},\Res_{A_K}^{A_W})$ are pairs of adjoint functors and that $E_K$ and $E_W$ are equivalences of categories. \end{proof} \subsection{}\label{lem:equiv} The functors $E_W$ and $E_K$ behave well with respect to the groups $C_d^\vee$ and $C_d$.
More precisely: \begin{lem} Let $\delta \in C_d^\vee$, $g \in C_d$, $\lambda \in \mathbb{C} W \text{-}\mathrm{mod}$ and $\mu \in \mathbb{C} K \text{-}\mathrm{mod}$, then \bdm E_W(\delta \cdot \lambda) \simeq \delta \cdot E_W(\lambda) \quad \textrm{ and } \quad E_K(g \cdot \mu) \simeq g \cdot E_K(\mu). \edm \end{lem} \begin{proof} We prove that $E_W(\delta \cdot \lambda) = \delta \cdot E_W(\lambda)$, the argument for $E_K$ being similar. Consider the space $1 \otimes \lambda \otimes \delta \subset \delta \cdot \Delta(\lambda)$. For $\mathfrak{h} \subset \mathbb{C}[\mathfrak{h}^*]^{co W} \subset \HW$ we have $\mathfrak{h} \cdot (1 \otimes \lambda \otimes \delta) = 0$, thus there is a nonzero map $\Delta(\delta \cdot \lambda) \rightarrow \delta \cdot \Delta(\lambda)$. The space $1 \otimes \lambda \otimes \delta$ generates $\delta \cdot \Delta(\lambda)$ therefore the map is an isomorphism. The head of $\Delta(\delta \cdot \lambda)$ is $E_W(\delta \cdot \lambda)$ and the head of $\delta \cdot \Delta(\lambda)$ is $\delta \cdot E_W(\lambda)$. This proves the result. \end{proof} \subsection{}\label{prop:Clifford2} Combining Proposition \ref{prop:Clifford}, the commutativity of diagram (\ref{eq:commute}) and Lemma \ref{lem:equiv} we can conclude that \begin{prop} Fix $\lambda \in \LW$ and write $\Res_{A_K}^{A_W} L(\lambda) = \tilde{L}(\mu_1) \oplus \ds \oplus \tilde{L}(\mu_k)$, where each $\tilde{L}(\mu_i)$ is nonzero, irreducible. Then \begin{enumerate} \item $C_{\tilde{L}(\mu_i)} = C_{\mu_i}$ and $C_{L(\lambda)}^\vee = C_{\lambda}^\vee$. \item $C_{\tilde{L}(\mu_i)} = (C^\vee_d / C_{L(\lambda)}^\vee)^\vee \subset C_d$, hence $|C_{\tilde{L}(\mu_i)}| \cdot |C^\vee_{L(\lambda)}| = d$, \item $C_d$ acts transitively on the set $\{ \tilde{L}(\mu_1), \ds , \tilde{L}(\mu_k) \}$, \item the $\tilde{L}(\mu_i)$ are pairwise non-isomorphic, \item $\Ind_{A_K}^{A_W} \tilde{L}(\mu_i) = \bigoplus_{\delta \in C^\vee_d / C_{L(\lambda)}^\vee} \delta \cdot L(\lambda)$. 
\end{enumerate} \end{prop} \subsection{}\label{lem:trivialaction} Since $C_d^\vee$ acts on the isomorphism classes of objects in $\HW \text{-}\mathrm{mod}$ and $C_d$ acts on the isomorphism classes of objects in $\HtK \text{-}\mathrm{mod}$, these groups also permute the blocks of the corresponding algebras. Hence there is an action of $C_d^\vee$ on the set $\CMW$ and an action of the group $C_d$ on the block partition of $\LK$ with respect to $\HtK$. \begin{lem} The action of $C_d^\vee$ on $\CMW$ is trivial; that is, each part of $\CMW$ is a union of $C_d^\vee$-orbits. \end{lem} \begin{proof} Let $\delta$ be a generator of $C_d^\vee$. Fix $B$ to be a block of $\HW$ and $\lambda \in \LW$ such that $L(\lambda)$ is a simple module for $B$. Then we must show that $L(\delta \cdot \lambda)$ is also a simple module for $B$. Since the baby Verma modules $\Delta(\lambda)$ and $\Delta(\delta \cdot \lambda)$ are indecomposable it suffices to show that there is a nonzero map $\Delta(\delta \cdot \lambda) \rightarrow \Delta(\lambda)$. In the notation of Lemma \ref{lem:algpoly}, $\mathbb{C}[U^*]^{co C_d}$ is isomorphic to the regular representation as a $C_d$-module. Let $\{ f_1, \ds, f_n \}$ be the set of generators described in Lemma \ref{lem:algpoly}. Then there exist $u_1, \ds, u_n$ with $0 \le u_i < a_i$ such that $C_d$ acts on the line spanned by $g := f_1^{u_1} \cdots f_n^{u_n}$ via the character $\delta$. Moreover the image of $g$ in $\mathbb{C}[\mathfrak{h}]^{co W}$ is non-zero. The polynomial $g$ is $K$-invariant, therefore it is central in $\HtK$. Since $\HtK \subset \HW$, $g$ commutes with the elements $\mathfrak{h} \subset \HW$. Therefore the required map exists and is uniquely defined by $1 \otimes \delta \cdot \lambda \stackrel{\sim}{\longrightarrow} g \otimes \lambda$. \end{proof} \subsection{Twisted symmetric algebras} We shall show that $\HW$ is an example of a twisted symmetric algebra with respect to the group $C_d$.
We follow the exposition given in \cite[Section $1$]{Chlou2} (see also \cite{Chlou3}). Although we do not use the properties of $\HW$ derived from the fact that it is a symmetric algebra we recall the relevant definitions for completeness. Let $A$ be a finite dimensional $\mathbb{C}$-algebra. \bdefn A trace function on $A$ is a linear map $t \, : \, A \rightarrow \mathbb{C}$ such that $t(ab)= t(ba)$ for all $a, b \in A$. It is called a symmetrizing form on $A$, and $A$ itself is said to be a symmetric algebra, if the morphism \bdm \hat{t} \, : \, A \rightarrow \Hom_{\mathbb{C}}(A,\mathbb{C}), \qquad a \mapsto (\hat{t}(a) \, : \, b \mapsto t(ab)) \edm is an isomorphism of $(A,A)$-bimodules. \edefn \begin{prop}[\cite{BGS}, Corollary 3.7] The restricted rational Cherednik algebra $\HW$ is a symmetric algebra. \end{prop} \subsection{}\label{lem:subsymm} Let $A$ be a symmetric algebra with form $t$ and $B$ a subalgebra of $A$. Then $B$ is said to be a symmetric subalgebra of $A$ if the restriction of $t$ to $B$ is a symmetrizing form for $B$ and $A$ is free as a left $B$-module. \begin{lem} The algebra $\HtK$ is a symmetric subalgebra of $\HW$. \end{lem} \begin{proof} If $w_1, \ds , w_d$ are left coset representatives of $K$ in $W$, then the PBW property (\ref{eq:PBW}) implies that $\HW$ is a free left $\HtK$-module with basis $w_1, \ds , w_d$. The fact that the restriction of $t$ to $\HtK$ is symmetrizing is clear from the proof of \cite[Lemma 3.5]{BGS}. 
\end{proof} \begin{defn} Following \cite[Definition 1.10]{Chlou2} we say that the symmetric algebra $(A,t)$ is a twisted symmetric algebra of a finite group $G$ over the subalgebra $B$ if $B$ is a symmetric subalgebra of $A$ and there is a family of vector subspaces $\{ A_g \, | g \in G \}$ of $A$ such that the following conditions hold: \begin{enumerate} \item $A = \bigoplus_{g \in G} A_g,$ \item $A_g A_h = A_{gh}$ for all $g,h \in G$, \item $A_1 = B$, \item $t(A_g) = 0$ for all $g \in G, \, g \neq 1$, \item $A_g \cap A^{\times} \neq \emptyset$ for all $g \in G$ (here $A^{\times}$ are the units of $A$). \end{enumerate} \end{defn} \begin{prop} The symmetric algebra $\HW$ is a twisted symmetric algebra of the group $C_d$ over the subalgebra $\HtK$. \end{prop} \begin{proof} As in Lemma \ref{lem:subsymm}, let $w_1, \ds , w_d$ be left coset representatives of $K$ in $W$ and write $C_d = \{ g_1, \ds , g_d\}$ with $Kw_i = g_i$ in $W/K = C_d$. Then we set $\HW_{g_i} := \HtK \cdot w_i$. Conditions $(1), \, (3)$ and $(5)$ are clear. Since conjugation by $w_i$ defines an automorphism of $\HtK$, condition $(2)$ is also clear. Finally condition $(4)$ follows from the definition of the symmetrizing form $\Phi$ given in \cite[(3.5)]{BGS}. \end{proof} \subsection{}\label{thm:compare1} We are now in a situation where we can apply \cite[Proposition 2.3.18]{Chlou3}. \begin{thm} For $\mc{S} \subset \LW$, let $\Gamma(\mc{S})$ be the set of all $\mu \in \LK$ occurring as a summand of $\Res_{K}^{W} \, \lambda$ for some $\lambda \in \mc{S}$. Let $\mc{P} \in \CMW$. Then there exists $\mc{Q} \in \CMK$ such that $\Gamma(\mc{P}) = C_d \cdot \mc{Q}$. This implies that there is a bijection \bdm \CMW \stackrel{1 : 1}{\longleftrightarrow} \CMK / C_d. \edm \end{thm} \begin{proof} Proposition \ref{thm:lifting} tells us that $\{ \textrm{blocks of $\HtK$} \} = \CMK$. This identification is $C_d$-equivariant.
Therefore it suffices to show that the theorem holds with $\CMK$ replaced by $\{ \textrm{blocks of $\HtK$} \}$. In \cite{Chlou3} Chlouveraki makes use of the existence of a field extension of the base field of the twisted symmetric algebra $A$ such that the extended symmetric algebra is split-semisimple. This fact is used to prove \cite[Proposition 2.3.15]{Chlou3}. Such an extension does not exist for $\HW$ but Proposition \ref{prop:Clifford2} is our substitute result. Now \cite[Proposition 2.3.18]{Chlou3} is applicable, with $A = \HW$ and $\bar{A} = \HtK$, since its proof does not explicitly rely on the existence of a ``splitting field extension''. This result says that the rule $C_d^\vee \cdot \mc{P} \mapsto \Gamma( C_d^\vee \cdot \mc{P})$ defines a bijection between the set of $C^\vee_d$-orbits in $\CMW$ and the $C_d$-orbits in $\{ \textrm{blocks of $\HtK$} \}$. However, Lemma \ref{lem:trivialaction} says that the action of $C_d^\vee$ on $\CMW$ is trivial. \end{proof} \subsection{}\label{lem:singletons} Let us note a particular situation where we can give a more precise result. \begin{lem} Let $\lambda \in \LW$ such that $\{ \lambda \} \in \CMW$. Then $\Res_K^W \lambda = \oplus_{i = 1}^d \mu_i$, $\mu_i \not\cong \mu_j$ for $i \neq j$ and $\{ \mu_i \} \in \CMK$ for $1 \le i \le d$. \end{lem} \begin{proof} Again, since Proposition \ref{thm:lifting} tells us that $\{ \textrm{blocks of $\HtK$} \} = \CMK$, it suffices to show that the statement holds with $\CMK$ replaced by $\{ \textrm{blocks of $\HtK$} \}$. Proposition \ref{prop:Clifford} tells us that $\Res_K^W \lambda = \oplus_{i = 1}^e \mu_i$ for some $e$ dividing $d$ and $\mu_i \not\cong \mu_j$ for $i \neq j$. Moreover, there exists $g \in C_d$ such that ${}^g \mu_i = \mu_j$ and hence ${}^g \tilde{L}(\mu_i) = \tilde{L}(\mu_j)$. In particular, $\dim \tilde{L}(\mu_i) = \dim \tilde{L}(\mu_j) = r$ for all $i,j$ and some $r \le |K|$.
It is shown in \cite[(5.3)]{Baby} that $\dim L(\lambda) = |W|$ if and only if $\{ \lambda \}$ is a partition of $\CMW$. Proposition \ref{prop:Clifford2} says that $\Res_{A_K}^{A_W} L(\lambda) = \oplus_{i = 1}^e \tilde{L}(\mu_i)$. Comparing the dimensions of both sides gives \bdm |W| = e \cdot r \le d \cdot |K| = |W|. \edm Thus $e = d$ and $r = |K|$. Again, by \cite[(5.3)]{Baby}, $\dim \tilde{L}(\mu_i) = |K|$ implies that $\{ \mu_i \}$ is a block of $\HtK$. \end{proof} \begin{remark} In this article we focus on the particular case of $W = G(m,1,n)$ and $K = G(m,d,n)$ (details are given in section \ref{sec:example}). However, we believe that it is advantageous to present Theorem \ref{thm:compare1} at the level of generality chosen here, since there are many pairs $(W,K)$ of this kind among the $34$ exceptional irreducible complex reflection groups. Therefore, in order to calculate the Calogero-Moser partition for all exceptional groups, it would suffice to consider only certain groups. We refer the reader to the appendix of \cite{Chlou3} for a list of many such pairs $(W,K)$. \end{remark} \section{The imprimitive groups $G(m,d,n)$}\label{sec:example} \subsection{} The irreducible complex reflection groups are divided into two classes, the primitive complex reflection groups and the imprimitive complex reflection groups. These groups were classified by Shephard and Todd in \cite{5}. There are 34 primitive complex reflection groups, which in the classification of \cite{5} are labelled $G_4, \dots , G_{37}$. They are also known as the exceptional complex reflection groups. In this section we will consider instead the imprimitive complex reflection groups. These form one infinite family $G(m,d,n)$, where $m,d,n \in \mathbb{N}$ and $d$ divides $m$. Let $S_n$ be the symmetric group on $n$ elements, considered as the group of all $n \times n$ permutation matrices.
Let $A(m,d,n)$ be the group of all diagonal matrices whose diagonal entries are powers of a fixed primitive $m^{th}$ root of unity and whose determinant is a $(m/d)^{th}$ root of unity. The group $S_n$ normalizes $A(m,d,n)$ and $G(m,d,n)$ is defined to be the semidirect product of $A(m,d,n)$ by $S_n$. Note that $G(m,1,n)$ is the wreath product group $C_m \wr S_n$. Fix $p = m/d$. \subsection{The conjugacy classes of reflections} Fix $\zeta$ a primitive $m^{th}$ root of unity. Let $s_{(i,j)} \in S_n$ denote the transposition swapping $i$ and $j$ and let $\varepsilon_i^k$ be the matrix in $A(m,1,n)$ which has ones all along the diagonal except in the $i^{th}$ position where its entry is $\zeta^k$. The conjugacy classes of reflections in $G(m,1,n)$ are \begin{displaymath} R = \{ s_{(i,j)} \varepsilon_i^k \varepsilon_j^{-k} : 1 \le i \neq j \le n, 0 \le k \le m-1 \}, \end{displaymath} \begin{displaymath} S_i = \{ \varepsilon_j^i : 1 \le j \le n \}_{1 \le i \le m-1}. \end{displaymath} The $G(m,1,n)$-conjugacy classes of reflections in $G(m,d,n)$ are \begin{displaymath} R = \{ s_{(i,j)} \varepsilon_i^k \varepsilon_j^{-k} : 1 \le i \neq j \le n, 0 \le k \le m-1 \}, \end{displaymath} \begin{displaymath} S_{id} = \{ \varepsilon_j^{id} : 1 \le j \le n \}_{1 \le i \le p-1}. \end{displaymath} The following is an application of \cite[Theorem 3]{16}. \begin{prop} If $n > 2$, or $n = 2$ and $d$ is odd, then the $G(m,1,n)$-conjugacy classes of reflections in $G(m,d,n)$ coincide with the $G(m,d,n)$-conjugacy classes of reflections in $G(m,d,n)$. When $n = 2$ and $d$ is even the $G(m,d,2)$-conjugacy classes of reflections in $G(m,d,2)$ are \begin{displaymath} R_1 = \{ s_{(1,2)} \varepsilon_1^k \varepsilon_2^{-k} : 0 \le k \le m-1, k \textrm{ even} \}, \qquad R_2 = \{ s_{(1,2)} \varepsilon_1^k \varepsilon_2^{-k} : 0 \le k \le m-1, k \textrm{ odd} \}, \end{displaymath} and \begin{displaymath} S_{id} = \{ \varepsilon_j^{id} : 1 \le j \le n \}_{1 \le i \le p-1}.
\end{displaymath} \end{prop} \subsection{}\label{sec:unequal} The group $G(m,d,n)$ is a normal subgroup of $G(m,1,n)$ of index $d$ and the quotient group is the cyclic group $C_d$. Therefore we are in the situation considered in the previous sections. If $\mathbf{c}$ is a $G(m,d,n)$-conjugate invariant function on the set of reflections of that group then, provided $n \neq 2$ or $n = 2$ and $d$ is odd, $\mathbf{c}$ extends by zero to a $G(m,1,n)$-conjugate invariant function on the set of reflections of $G(m,1,n)$. If $n = 2$ and $d$ is even, we are restricted to considering $\mathbf{c}$ such that $\mathbf{c}(R_1) = \mathbf{c}(R_2)$. The group $C_d = \langle \varepsilon_1^p \rangle$ is a cyclic subgroup of $G(m,1,n)$ and normalises $G(m,d,n)$. If $d$ is co-prime to $p$ then $G(m,1,n) = G(m,d,n) \rtimes C_d$; an important example of this behaviour is $G(m,m,n) \triangleleft G(m,1,n)$. In such situations there exists an algebra isomorphism \begin{displaymath} H_{t,\mathbf{c}}(G(m,1,n)) \cong H_{t,\mathbf{c}}(G(m,d,n)) \rtimes C_d. \end{displaymath} A specific example of this is $H_{t,(c,0)}(B_n) \cong H_{t,c}(D_n) \rtimes C_2$, where $B_n$ and $D_n$ are the Weyl groups of type $B$ and $D$ respectively (they correspond to $G(2,1,n)$ and $G(2,2,n)$). \subsection{Representations of $G(m,d,n)$}\label{sec:reps} We begin by giving an explicit description of the simple $G(m,1,n)$-modules. This will allow us to give a combinatorial description of the action of the groups $C_d$ and $C_d^\vee$ as defined in (\ref{sec:Clifford}). Recall that a \textit{partition} of $n$ is a sequence of positive integers $\lambda = (\lambda_1 \ge \lambda_2 \ge \ds \ge \lambda_k > 0)$ such that $n = | \lambda | := \sum_{i = 1}^k \lambda_i$. We call $k$ the \textit{length} of $\lambda$. The simple $S_n$-modules are parametrized by partitions of $n$. Let $V_{\lambda}$ denote the simple $S_n$-module labelled by the partition $\lambda$.
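For example, with this labelling convention, $V_{(n)}$ is the trivial $S_n$-module, $V_{(1,1, \ds ,1)}$ is the sign module and $V_{(n-1,1)}$ is the standard $(n-1)$-dimensional reflection representation of $S_n$.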
The simple $C_m$-modules will be denoted $\mathbb{C} \cdot \omega_i$ (or simply $\omega_i$), $0 \le i < m$. If $C_m = \langle \varepsilon \rangle$ then $\varepsilon \cdot \omega_i = \zeta^i \omega_i$ (we may think of $C_m \subset G(m,1,n)$ such that $\varepsilon = \varepsilon_1$). Now let $U$ be any $C_m$-module and $V$ an $S_n$-module. The \textit{wreath product} $U \wr V$ is the $G(m,1,n)$-module which, as a vector space, is $U^{\otimes n} \otimes V$ and whose module structure is uniquely defined by \bdm \varepsilon_i \cdot (u_1 \otimes \ds \otimes u_n \otimes v) = u_1 \otimes \ds \otimes \varepsilon \cdot u_i \otimes \ds \otimes u_n \otimes v, \edm and for $\sigma \in S_n$: \bdm \sigma \cdot (u_1 \otimes \ds \otimes u_n \otimes v) = u_{\sigma^{-1}(1)} \otimes \ds \otimes u_{\sigma^{-1}(n)} \otimes \sigma \cdot v. \edm If $U$ and $V$ are simple modules then $U \wr V$ is a simple $G(m,1,n)$-module. However, not every simple $G(m,1,n)$-module can be written in this way. A complete set of simple modules was originally constructed by Specht \cite{Specht}. The precise result is stated below, and a proof can be found in \cite[Theorem 4.3.34]{JK}. An $m$-multipartition $\underline{\lambda}$ of $n$ is an ordered $m$-tuple of partitions $(\lambda^0, \dots, \lambda^{m-1})$ such that $|\lambda^0| + \dots + |\lambda^{m-1}| = n$. Let $\mathcal{P}(m,n)$ denote the set of all $m$-multipartitions of $n$. To each decomposition $n_0 + \ds + n_{m-1} = n$ there is a corresponding \textit{Young subgroup} $G_{(n)} = C_m \wr (S_{n_0} \times \ds \times S_{n_{m-1}})$ of $G(m,1,n)$. \begin{thm}\label{thm:Spetch} To each $\underline{\lambda}$ in $\mathcal{P}(m,n)$ we can associate the $G(m,1,n)$-module \bdm V_{\underline{\lambda}} := \Ind_{G_{(n)}}^{G(m,1,n)} \, (\omega_0 \wr V_{\lambda^0}) \otimes \ds \otimes (\omega_{m-1} \wr V_{\lambda^{m-1}}), \edm where $G_{(n)}$ is the Young subgroup corresponding to the decomposition $|\lambda^0| + \dots + |\lambda^{m-1}| = n$.
Each $V_{\underline{\lambda}}$ is simple, $V_{\underline{\lambda}} \not\simeq V_{\underline{\mu}}$ for $\underline{\lambda} \neq \underline{\mu}$, and every simple $G(m,1,n)$-module is isomorphic to $V_{\underline{\lambda}}$ for some $\underline{\lambda}$. \end{thm} \subsection{} Note that in the case $n_i = |\lambda^i| = 0$, the module $\omega_i \wr V_{\lambda^i}$ should be regarded as the one-dimensional trivial module. An element of $G(m,1,n)$ can be thought of as a permutation matrix but with the unique $1$ in each row replaced by an element of $C_m$. The rule that takes each such matrix to the product of its non-zero entries defines a character $\delta' \, : \, G(m,1,n) \rightarrow \mathbb{C}^*$ (this is not the determinant of the matrix). Fix $\delta := (\delta')^p$. Then $C_d^\vee = \langle \delta \rangle$ and it follows from (\ref{sec:Clifford}) that $(\omega_i \wr V) \otimes \delta \simeq \omega_{i + p} \wr V$. If we define the action of $C_d^\vee$ on $\underline{\lambda}$ by \bdm \delta \cdot (\lambda^0, \dots, \lambda^{m-1}) = (\lambda^{m - p}, \lambda^{m+1-p}, \dots ,\lambda^{m-2},\lambda^{m-1},\lambda^0,\lambda^1, \dots, \lambda^{m-p-1}), \edm then Theorem \ref{thm:Spetch} implies that $\delta \cdot V_{\underline{\lambda}} = V_{\delta \cdot \underline{\lambda}}$. We denote the orbit $C_d^\vee \cdot \underline{\lambda}$ by $\{ \underline{\lambda} \}$. Since $(C^\vee_d / C^\vee_{\underline{\lambda}})^\vee \subset C_d$ is the stabilizer $C_{\mu}$ of $\mu$, an irreducible summand of $\Res_{G(m,d,n)}^{G(m,1,n)} \, V_{\underline{\lambda}}$, we see by Proposition \ref{prop:Clifford} that the set of all irreducible summands of $\Res_{G(m,d,n)}^{G(m,1,n)} \, V_{\underline{\lambda}}$ is parametrized by elements of the quotient $C_d / C_{\mu}$. This quotient can be identified with $C^\vee_{\underline{\lambda}}$, hence irreducible representations of $G(m,d,n)$ are parametrized by distinct pairs $(\{ \underline{\lambda} \},\epsilon)$, where $\epsilon \in C^\vee_{\underline{\lambda}}$.
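For example, take $m = d = 2$, so that $p = 1$, $G(2,1,n) = B_n$ and $G(2,2,n) = D_n$. Here $\delta = \delta'$ is the linear character of $B_n$ with kernel $D_n$, and the action reads \bdm \delta \cdot (\lambda^0, \lambda^1) = (\lambda^1, \lambda^0). \edm Thus $\Res_{D_n}^{B_n} \, V_{(\lambda^0,\lambda^1)}$ is irreducible when $\lambda^0 \neq \lambda^1$, whereas when $\lambda^0 = \lambda^1$ (which forces $n$ to be even) it is a direct sum of two non-isomorphic simple $D_n$-modules, corresponding to the pairs $(\{ \underline{\lambda} \}, \epsilon)$ with $\epsilon \in C_2^\vee$.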
If we fix $C_d = \langle \, \overline{\varepsilon^p_1} \, \rangle$ and define the bijection $C_d \leftrightarrow C^\vee_d$ by $(\overline{\varepsilon^p_1})^i \leftrightarrow \delta^i$ then $C_d / C_\mu \leftrightarrow C^\vee_{\underline{\lambda}}$ and the action of $C_d$ on pairs $(\{ \underline{\lambda} \},\epsilon)$ is given by \bdm \eta \cdot (\{ \underline{\lambda} \},\epsilon) = (\{ \underline{\lambda} \},\eta \cdot \epsilon) \quad \textrm{ where } \quad (\eta \cdot \epsilon)( \nu ) = \epsilon (\eta \nu), \quad \textrm{ for } \eta, \nu \in C_d. \edm \section{Combinatorics} \subsection{} In this section we apply Theorem \ref{thm:compare1} to the combinatorial description of the partition $\mathsf{CM}_{\mbf{c}}(G(m,1,n))$ given in \cite{12} and deduce a similar description of the partition $\mathsf{CM}_{\mbf{c}}(G(m,d,n))$. First we must introduce some combinatorial objects. \subsection{Young diagrams and $\beta$-numbers} Let $\lambda$ be a partition of $n$ of length $k$. The \textit{Young diagram} of $\lambda$ is defined to be the subset $Y(\lambda) := \{ (a,b) \in \Z^2 \, | \, 1 \le a \le k, \, 1 \le b \le \lambda_a \}$ of $\Z^2$. Each box in the diagram is called a \textit{node} and the \textit{content} of a node $(a,b)$ is defined to be the integer $\cont(a,b) := b - a$. The Young diagram should be visualized as a stack of boxes, justified to the left; for example, the partition $(4,3,1)$, with the content of each node displayed, is: \bdm \Yboxdim18pt \young(\minustwo :::,\minusone 01:,0123) \edm \subsection{Residues}\label{subsection:residue} Given a partition $\lambda$, we define the \textit{residue} of $\lambda$ to be the Laurent polynomial in $\Z [ x^{\pm 1}]$ given by \bdm \textrm{Res}_{\lambda}(x) := \sum_{(a,b) \in Y(\lambda)} x^{\cont (a,b)}. \edm For $r \in \Z$, the \textit{$r$-shifted} residue of $\lambda$ is defined to be $\textrm{Res}_{\lambda}^r(x) := x^r \textrm{Res}_{\lambda}(x)$. Let $\underline{\lambda} \in \mc{P}(m,n)$ and fix $\mbf{r} \in \Z^m$.
Then the \textit{$\mbf{r}$-shifted residue} of $\underline{\lambda}$ is defined to be \bdm \textrm{Res}^{\mbf{r}}_{\underline{\lambda}} (x) := \sum_{i = 0}^{m-1} \textrm{Res}_{\lambda^i}^{r_i} (x). \edm \subsection{}\label{subsection:translate} In order to use the combinatorics described in \cite{12} and \cite{Mo} we must change the basis of our parameter space. Recall that we have labelled the conjugacy classes of complex reflections in $G(m,1,n)$ as $R$ and $S_i$. We fix $\mathbf{c}(R) = k$ and $\mathbf{c}(S_i) = c_i$. The parameters of the rational Cherednik algebra $H_{\mbf{c}}(G(m,1,n))$ as used in \cite{12} are $\mbf{h} = (h,H_0, \dots , H_{m-1})$. We wish to find an expression for these parameters in terms of $k$ and $c_1, \dots ,c_{m-1}$. For the remainder of this section we make the assumption that $k \neq 0$. Without loss of generality $k = -1$. The parameter $H_0$ is chosen so that $H_0 + H_1 + \dots + H_{m-1} = 0$. Recall that $\zeta$ is a primitive $m^{th}$ root of unity. By \cite[(2.7)]{10} we know that $h = k$ and \begin{displaymath} c_i = \sum_{j = 0}^{m-1} \zeta^{-ij} H_j. \end{displaymath} Noting that \begin{displaymath} \sum_{i = 1}^{m-1} \zeta^{-i(r + j)} = \left\{ \begin{array}{lcl} m-1 & & \textrm{if } r + j \equiv 0 \, \mod \, m\\ -1 & & \textrm{otherwise}, \end{array} \right. \end{displaymath} we have for $1 \le r \le m-1$: \begin{displaymath} \zeta^{-r} c_1 + \zeta^{-2r} c_2 + \dots + \zeta^{-(m-1)r} c_{m-1} = \sum_{i = 1}^{m-1} \zeta^{-ri} \sum_{j = 0}^{m-1} \zeta^{-ij} H_j \qquad \end{displaymath} \begin{displaymath} \qquad = \sum_{j = 0}^{m-1} H_j \sum_{i = 1}^{m-1} \zeta^{-i(r + j)} = (m-1)H_{m - r} - \sum_{\substack{j = 0 \\ j \neq m - r}}^{m-1} H_j = mH_{m-r}. \end{displaymath} Thus for $1 \le r \le m-1$: \begin{displaymath} H_r = \frac{1}{m} \sum_{i = 1}^{m-1} \zeta^{-i(m - r)} c_i = \frac{1}{m} \sum_{i = 1}^{m-1} \zeta^{ir} c_i. 
\end{displaymath} \subsection{The Calogero-Moser partition for $C_m \wr S_n$}\label{thm:Mo} The results in \cite{12} and \cite{Mo} are only valid for rational values of $\mbf{h}$. Therefore, for the remainder of this article, we restrict to those parameters $\mbf{c}$ for $G(m,1,n)$ such that $\mbf{h} = (-1, H_0, H_1, \ds, H_{m-1}) \in \mathbb{Q}^{m+1}$. Choose $e \in \mathbb{N}$ such that $e H_i \in \Z$ for all $0 \le i \le m-1$ and fix \bdm \mbf{s} = (0, eH_1, e H_1 + eH_2, \ds, e H_1 + \ds + e H_{m-1}) \in \Z^m. \edm Combining \cite[Theorem 2.5]{12} with the wonderful but difficult combinatorial result \cite[Theorem 3.13]{Mo} gives: \begin{thm} The multipartitions $\underline{\lambda}, \underline{\mu} \in \mc{P}(m,n)$ belong to the same partition of $\mathsf{CM}_{\mbf{c}}(G(m,1,n))$ if and only if \bdm \textrm{Res}^{\mbf{s}}_{\underline{\lambda}}(x^e) = \textrm{Res}^{\mbf{s}}_{\underline{\mu}}(x^e). \edm \end{thm} \subsection{}\label{lem:cyclicparameters} The $G(m,1,n)$-conjugacy classes of reflections in $G(m,d,n)$ are $R$ and $S_{id}$, where $1 \le i \le p - 1$. Thus a parameter $\mbf{c}$ for $G(m,1,n)$ is an extension by zero of a parameter for $G(m,d,n)$ if and only if $c_i = 0$ for all $i \not\equiv 0 \, \mod \, d$. Let us therefore assume that $c_i = 0$ for $i \not\equiv 0 \, \mod \, d$. \begin{lem} We have $c_i = 0$ for all $i \not \equiv 0 \, \mod \, d$ if and only if $H_{i + p} = H_i$ for all $i$. \end{lem} \begin{proof} First assume that $c_i = 0$ for all $i \not \equiv 0 \, \mod \, d$. Then \begin{displaymath} H_{i + p} = \frac{1}{m} \sum_{r = 1}^{p-1} \zeta^{dr(i + p)} c_{dr} = \frac{1}{m} \sum_{r = 1}^{p-1} \zeta^{dri} c_{dr} = H_i. \end{displaymath} Conversely, if $H_{i + p} = H_i$ for all $i$ then \begin{displaymath} c_i = \sum_{j = 0}^{m-1} \zeta^{-ij} H_j = \sum_{j = 0}^{p-1} H_j \sum_{r = 0}^{d-1} \zeta^{-i(j + rp)}.
\end{displaymath} The result now follows from \begin{displaymath} \sum_{r = 0}^{d-1} \zeta^{-i(j + rp)} = \zeta^{-ij} \sum_{r = 0}^{d-1} (\zeta^{-ip})^r = \left\{ \begin{array}{lcl} d \zeta^{-ij} & & \textrm{if } i \equiv 0 \, \mod \, d\\ 0 & & \textrm{otherwise}. \end{array} \right. \end{displaymath} \end{proof} \subsection{}\label{lem:Sdorbit} We will say that the parameter $\mbf{h} = (-1,H_0, \ds , H_{m-1})$ is \textit{$p$-cyclic} if $H_{i + p} = H_i$ for all $i$. Let $\underline{\lambda} = (\lambda^0, \dots , \lambda^{m-1})$ be an $m$-multipartition of $n$. We rewrite $\underline{\lambda}$ as $\underline{\lambda} = (\underline{\lambda}_0, \dots , \underline{\lambda}_{d-1})$ where $\underline{\lambda}_i = (\lambda^{ip}, \dots , \lambda^{(i+1)p - 1})$. Now the action of $C^\vee_d$ on $\underline{\lambda}$ as defined in (\ref{sec:reps}) can be expressed as \bdm \delta \cdot (\underline{\lambda}_0, \ds , \underline{\lambda}_{d-1}) = (\underline{\lambda}_{d-1}, \underline{\lambda}_{0}, \ds , \underline{\lambda}_{d-2}). \edm An $m$-multipartition of $n$ is called \textit{$d$-stuttering} if $\underline{\lambda}_i = \underline{\lambda}_j$ for all $0 \le i,j \le d-1$. The group $C_d^\vee$ can be considered as a subgroup of $\mf{S}_d$, the symmetric group on $d$ elements, acting on $\mc{P}(m,n)$ as: \bdm \sigma \cdot (\underline{\lambda}_0, \ds , \underline{\lambda}_{d-1}) = (\underline{\lambda}_{\sigma(0)}, \ds , \underline{\lambda}_{\sigma(d-1)}). \edm \begin{lem} Let $\mbf{c}$ be a parameter for $G(m,1,n)$ such that $\mbf{h} \in \mathbb{Q}^{m+1}$ is $p$-cyclic. Then the partitions of $\mathsf{CM}_{\mbf{c}}(G(m,1,n))$ are unions of $\mf{S}_d$-orbits, since \bdm \textrm{Res}^{\mbf{s}}_{\underline{\lambda}}(x^e) = \textrm{Res}^{\mbf{s}}_{\sigma \cdot \underline{\lambda}}(x^e), \edm where $\underline{\lambda} \in \mc{P}(m,n)$, $\sigma \in \mf{S}_d$ and $\mbf{s}$ is defined in (\ref{thm:Mo}).
\end{lem} \begin{proof} If $\mbf{h}$ is $p$-cyclic then the corresponding parameter $\mbf{s}$ has the form \bdm \mbf{s} = (\mbf{s}' , \ds , \mbf{s}') \quad \textrm{ where } \quad \mbf{s}' = (0, eH_1, e H_1 + eH_2, \ds, e H_1 + \ds + e H_{p-1}), \edm and thus \bdm \textrm{Res}^{\mbf{s}}_{\underline{\lambda}}(x^e) = \sum_{i = 0}^{d-1} \textrm{Res}^{\mbf{s}'}_{\underline{\lambda}_i}(x^e) \qquad \forall \, \underline{\lambda} \in \mc{P}(m,n) . \edm Since the action of $\mf{S}_d$ simply reorders this sum, the result is clear. \end{proof} \subsection{}\label{lem:primedivisor} The following technical result will be needed later. \begin{lem} Let $\mbf{h}$ be a $p$-cyclic parameter and choose $\underline{\lambda} \in \mc{P}(m,n)$ to be a non $d$-stuttering $m$-multipartition of $n$. For each prime divisor $q$ of $d$, there exists an $m$-multipartition $\underline{\lambda}(q)$ of $n$ such that $\underline{\lambda}$ and $\underline{\lambda}(q)$ belong to the same partition of $\mathsf{CM}_{\mbf{c}}(G(m,1,n))$ and the order of the stabilizer of $\underline{\lambda}(q)$ under the action of $C^\vee_d$ is not divisible by $q$. \end{lem} \begin{proof} We follow the argument given in \cite[Lemma 3.5]{14}. Since $\underline{\lambda}$ is not $d$-stuttering, there exists an $i > 0$ such that $\underline{\lambda}_i \neq \underline{\lambda}_0$. If $d = q$ there is nothing to prove, so assume $d > q$ and set $l = d/q$, $l > 1$. Let $\sigma$ be the transposition in $\mf{S}_d$ that swaps $\underline{\lambda}_i$ and $\underline{\lambda}_{l-1}$ in $\underline{\lambda}$. We set $\underline{\lambda}(q) = \sigma \cdot \underline{\lambda}$. Then $\underline{\lambda}(q)$ is not fixed by any of the generators of the unique subgroup of $C^\vee_d$ of order $q$ and hence the stabilizer subgroup of $\underline{\lambda}(q)$ has order co-prime to $q$.
Since $\underline{\lambda}$ and $\underline{\lambda}(q)$ are in the same $\mf{S}_d$-orbit, Lemma \ref{lem:Sdorbit} says that they are in the same partition of $\mathsf{CM}_{\mbf{c}}(G(m,1,n))$. \end{proof} \subsection{}\label{lem:dstuttering} We will also require the following result. \begin{lem} Let $\mbf{c}$ be a parameter for $G(m,1,n)$ such that $\mbf{h} \in \mathbb{Q}^{m+1}$ is $p$-cyclic and choose $\underline{\lambda} \in \mc{P}(m,n)$ to be $d$-stuttering. If $\{ \underline{\lambda} \}$ is not a partition of $\mathsf{CM}_{\mbf{c}}(G(m,1,n))$ then there exists a non $d$-stuttering $m$-multipartition $\underline{\mu}$ that is in the same partition as $\underline{\lambda}$. \end{lem} \begin{proof} Since $\{ \underline{\lambda} \}$ is not a partition of $\mathsf{CM}_{\mbf{c}}(G(m,1,n))$ there must exist an $m$-multipartition $\underline{\lambda}' \neq \underline{\lambda}$ that is in the same partition as $\underline{\lambda}$. If $\underline{\lambda}'$ is not $d$-stuttering then we are done. Therefore we assume that $\underline{\lambda}'$ is $d$-stuttering. As noted in the proof of Lemma \ref{lem:Sdorbit}, $\mbf{h}$ being $p$-cyclic implies that \bdm \textrm{Res}^{\mbf{s}}_{\underline{\mu}}(x^e) = \sum_{i = 0}^{d-1} \textrm{Res}^{\mbf{s}'}_{\underline{\mu}_i}(x^e) \qquad \forall \, \underline{\mu} \in \mc{P}(m,n) . \edm Hence $\textrm{Res}^{\mbf{s}}_{\underline{\lambda}}(x^e) = d \, \textrm{Res}^{\mbf{s}'}_{\underline{\lambda}_0}(x^e)$ and $\textrm{Res}^{\mbf{s}}_{\underline{\lambda}'}(x^e) = d \, \textrm{Res}^{\mbf{s}'}_{(\underline{\lambda}')_0}(x^e)$. It follows from Theorem \ref{thm:Mo} that \bdm \textrm{Res}^{\mbf{s}'}_{\underline{\lambda}_0}(x^e) = \textrm{Res}^{\mbf{s}'}_{(\underline{\lambda}')_0}(x^e). \edm Set $\underline{\mu} = (\underline{\lambda}_0, (\underline{\lambda}')_0, \underline{\lambda}_0, \ds, \underline{\lambda}_0)$; it is a non $d$-stuttering $m$-multipartition.
Again by Theorem \ref{thm:Mo}, $\textrm{Res}^{\mbf{s}}_{\underline{\lambda}}(x^e) = \textrm{Res}^{\mbf{s}}_{\underline{\mu}}(x^e)$ implies that $\underline{\lambda}$ and $\underline{\mu}$ belong to the same partition of $\mathsf{CM}_{\mbf{c}}(G(m,1,n))$. \end{proof} \subsection{The main result}\label{thm:mainresult} Recall that for $\mc{P} \in \CMW$, $\Gamma(\mc{P})$ was defined to be the set of all $\mu \in \LK$ occurring as a summand of $\Res_{K}^{W} \, \lambda$ for some $\lambda \in \mc{P}$. In the case $W = G(m,1,n)$ and $K = G(m,d,n)$, $\Gamma$ is given combinatorially by $\Gamma(\mc{P}) = \{ \, ( \, \{ \underline{\lambda} \} \, , \epsilon ) \, | \, \underline{\lambda} \in \mc{P}, \, \epsilon \in C^\vee_{\underline{\lambda}} \}$. \begin{thm} Let $\mbf{c} \, : \, \mc{S}(G(m,d,n)) \rightarrow \mathbb{C}$ be a $G(m,1,n)$-equivariant function such that $k \neq 0$ and $\mbf{h} \in \mathbb{Q}^{m+1}$. The $\mathsf{CM}_{\mbf{c}}(G(m,d,n))$ partition of $\textsf{Irr} \, (G(m,d,n))$ is described as follows. Let $\mc{Q}$ be a partition in $\mathsf{CM}_{\mbf{c}}(G(m,1,n))$: \begin{enumerate} \item If $\underline{\lambda}$ is a $d$-stuttering $m$-multipartition such that $\mc{Q} = \{ \underline{\lambda} \}$ then the sets $\{ ( \{ \underline{\lambda} \} , \epsilon ) \}$ where $\epsilon \in C^\vee_d$ are partitions of $\mathsf{CM}_{\mbf{c}}(G(m,d,n))$; \item Otherwise $\Gamma (\mc{Q})$ is a $\mathsf{CM}_{\mbf{c}}(G(m,d,n))$ partition of $\textsf{Irr} \, (G(m,d,n))$. \end{enumerate} \end{thm} \begin{proof} Rescaling if necessary, we may assume that $k = -1$. It is clear that the sets described in $(1)$ and $(2)$ of the theorem define a partition of the set $\textsf{Irr} \, (G(m,d,n))$. Therefore we just have to show that the sets describe the blocks of $\bar{H}_{\mbf{c}}(G(m,d,n))$.
Proposition \ref{thm:lifting} says that it is sufficient to prove that $(1)$ and $(2)$ describe the equivalence classes of $\textsf{Irr} \, (G(m,d,n))$ with respect to the blocks of $\tilde{H}_{\mbf{c}}(G(m,d,n))$. Lemma \ref{lem:singletons} shows that the sets described in $(1)$ are indeed blocks of $\tilde{H}_{\mbf{c}}(G(m,d,n))$. So let us assume that $\mc{Q}$ is not of the form described in $(1)$. The group $C_d$ acts on the set $\Gamma(\mc{Q})$ and Theorem \ref{thm:compare1} says that there exists a block $B$ of $\tilde{H}_{\mbf{c}}(G(m,d,n))$ such that $C_d \cdot B = \Gamma(\mc{Q})$. We wish to show that $C_d \cdot B = B$. The fact that $g \cdot \tilde{L} \in g \cdot B$ for $\tilde{L} \in B$ and $g \in C_d$ implies that \bdm \bigcup_{\tilde{L} \in B} \textrm{Stab}_{C_d} \, \tilde{L} \subseteq \textrm{Stab}_{C_d} \, B. \edm To show that $\textrm{Stab}_{C_d} \, B = C_d$ we will show that for every prime $q$ dividing $d$ there exists an $\tilde{L}(\mu) \in B$ such that the highest power of $q$ dividing $d$ also divides $|\textrm{Stab}_{C_d} \, \tilde{L}(\mu)|$. This will imply $C_d \cdot B = B$, i.e.\ $\Gamma(\mc{Q}) = B$. Let $L(\lambda) \in \mc{Q}$ and let $\tilde{L}(\mu)$ be a summand of $\Res_{A_{G(m,d,n)}}^{A_{G(m,1,n)}} \, L(\lambda)$; then $\tilde{L}(\mu) \in g \cdot B$ for some $g \in C_d$. This means that $g^{-1} \cdot \tilde{L}(\mu) \in B$ is also a summand of $\Res_{A_{G(m,d,n)}}^{A_{G(m,1,n)}} \, L(\lambda)$. Thus $\Res_{A_{G(m,d,n)}}^{A_{G(m,1,n)}} \, L(\lambda)$ contains a summand that lives in $B$, for all $L(\lambda) \in \mc{Q}$.
Since $\textrm{Stab}_{C_d} \, \tilde{L}(\mu) = \textrm{Stab}_{C_d} \, \tilde{L}(\mu')$ for any two summands $\tilde{L}(\mu)$ and $\tilde{L}(\mu')$ of $\Res_{A_{G(m,d,n)}}^{A_{G(m,1,n)}} \, L(\lambda)$, it will suffice to show that, for every prime $q$ dividing $d$, there exists an $L(\lambda) \in \mc{Q}$ such that the highest power of $q$ dividing $d$ also divides $| \textrm{Stab}_{C_d} \, \tilde{L}(\mu) |$ for some summand $ \tilde{L}(\mu)$ of $\Res_{A_{G(m,d,n)}}^{A_{G(m,1,n)}} \, L(\lambda)$. Proposition \ref{prop:Clifford2} $(1)$ says that \bdm |\textrm{Stab}_{C_d} \, \tilde{L}(\mu)| \cdot | \textrm{Stab}_{C^\vee_d} \, L(\lambda)| = d. \edm Therefore it suffices to show that we can find $L(\lambda) \in \mc{Q}$ such that $q$ does not divide $| \textrm{Stab}_{C^\vee_d} \, L(\lambda)|$. Since $\mc{Q}$ is not of the form $\{ \underline{\lambda} \}$ for a $d$-stuttering multipartition $\underline{\lambda}$, Lemma \ref{lem:dstuttering} says that there exists a non $d$-stuttering multipartition in $\mc{Q}$. Lemma \ref{lem:primedivisor} now says that the module $L(\lambda)$ we require exists in $\mc{Q}$. \end{proof} \begin{cor} Let $\mbf{c} \, : \, \mc{S}(G(m,d,n)) \rightarrow \mathbb{C}$ be a $G(m,1,n)$-equivariant function such that $k = -1$ and $\mbf{h} \in \mathbb{Q}^{m+1}$, extend it by zero to a function $\mbf{c} \, : \, \mc{S}(G(m,1,n)) \rightarrow \mathbb{C}$, and define $\mbf{s}$ as in (\ref{thm:Mo}).
Choose $( \{ \underline{\lambda} \}, \epsilon) , ( \{ \underline{\mu} \}, \eta) \in \textsf{Irr} \, (G(m,d,n))$. Then: \begin{itemize} \item if $\{ \underline{\lambda} \} \neq \{ \underline{\mu} \}$, then $( \{ \underline{\lambda} \}, \epsilon)$ and $( \{ \underline{\mu} \}, \eta)$ are in the same partition of $\mathsf{CM}_{\mbf{c}}(G(m,d,n))$ if and only if \bdm \textrm{Res}^{\mbf{s}}_{\underline{\lambda}}(x^e) = \textrm{Res}^{\mbf{s}}_{\underline{\mu}}(x^e); \edm \item if $\underline{\lambda} = \underline{\mu}$ is a $d$-stuttering multipartition and $\textrm{Res}^{\mbf{s}}_{\underline{\lambda}}(x^e) \neq \textrm{Res}^{\mbf{s}}_{\underline{\nu}}(x^e)$ for all $\underline{\lambda} \neq \underline{\nu} \in \mc{P}(m,n)$ then $( \{ \underline{\lambda} \}, \epsilon)$ and $( \{ \underline{\lambda} \}, \eta)$ are in the same partition of $\mathsf{CM}_{\mbf{c}}(G(m,d,n))$ if and only if $\epsilon = \eta$; \item otherwise $( \{ \underline{\lambda} \}, \epsilon)$ and $( \{ \underline{\lambda} \}, \eta)$ are in the same partition of $\mathsf{CM}_{\mbf{c}}(G(m,d,n))$. \end{itemize} \end{cor} \subsection{}\label{lem:generic} It was shown by the author in \cite{Singular} that the partition $\mathsf{CM}_{\mbf{c}}(G(m,d,n))$ is never trivial, even for generic values of $\mbf{c}$. Here we describe $\mathsf{CM}_{\mbf{c}}(G(m,d,n))$ for generic $\mbf{c}$. \begin{lem} Let $\mbf{c}$ be a generic parameter for $H_{\mbf{c}}(G(m,d,n))$ such that $k \neq 0$ and $\mbf{h} \in \mathbb{Q}^{m+1}$. Choose $( \{ \underline{\lambda} \}, \epsilon) , ( \{ \underline{\mu} \}, \eta) \in \textsf{Irr} \, (G(m,d,n))$. Then: \begin{itemize} \item if $\underline{\lambda}$ is a $d$-stuttering multipartition then $\{ \, ( \{ \underline{\lambda} \}, \epsilon) \, \}$ is a partition of $\mathsf{CM}_{\mbf{c}}(G(m,d,n))$.
\item otherwise $( \{ \underline{\lambda} \}, \epsilon)$ and $( \{ \underline{\mu} \}, \eta)$ are in the same partition of $\mathsf{CM}_{\mbf{c}}(G(m,d,n))$ if and only if \beq\label{eq:sum} \sum_{i = 0}^{d-1} \textrm{Res}_{\lambda^{j + pi}}(x^e) = \sum_{i = 0}^{d-1} \textrm{Res}_{\mu^{j + pi}}(x^e) \quad \forall \, 0 \le j \le p-1. \eeq \end{itemize} Note that the expressions in (\ref{eq:sum}) are independent of the choice of representatives $\underline{\lambda} \in \{ \underline{\lambda} \}$ and $\underline{\mu} \in \{ \underline{\mu} \}$. \end{lem} \begin{proof} Since $\mbf{h}$ is $p$-cyclic, we note once again that the vector $\mbf{s}$ as defined in (\ref{thm:Mo}) has the form \bdm \mbf{s} = (\mbf{s}' , \ds , \mbf{s}') \quad \textrm{ where } \quad \mbf{s}' = (0, eH_1, e H_1 + eH_2, \ds, e H_1 + \ds + e H_{p-1}). \edm Therefore \bdm \textrm{Res}^{\mbf{s}}_{\underline{\lambda}}(x^e) = \sum_{j = 0}^{p-1} x^{e \mbf{s}_j} \left( \sum_{i = 0}^{d-1} \textrm{Res}_{\lambda^{j + pi}}(x^e) \right), \edm and thus the genericity of $\mbf{c}$ implies that \bdm \textrm{Res}^{\mbf{s}}_{\underline{\lambda}}(x^e) = \textrm{Res}^{\mbf{s}}_{\underline{\mu}}(x^e) \Leftrightarrow \sum_{i = 0}^{d-1} \textrm{Res}_{\lambda^{j + pi}}(x^e) = \sum_{i = 0}^{d-1} \textrm{Res}_{\mu^{j + pi}}(x^e) \quad \forall \, 0 \le j \le p-1. \edm If $\underline{\lambda}$ is $d$-stuttering then $\sum_{i = 0}^{d-1} \textrm{Res}_{\lambda^{j + pi}}(x^e) = d \, \textrm{Res}_{\lambda^{j}}(x^e)$, $\forall \, 0 \le j \le p-1$. It can easily be shown that if \bdm d \, \textrm{Res}_{\lambda^{j}}(x^e) = \sum_{i = 0}^{d-1} \textrm{Res}_{\mu^{j + pi}}(x^e) \edm then $\mu^{j + pi} = \lambda^j$ for all $i$. Therefore each $d$-stuttering multipartition forms a singleton partition in $\mathsf{CM}_{\mbf{c}}(G(m,1,n))$. The lemma now follows from Corollary \ref{thm:mainresult}.
\end{proof} \section{Relation to Rouquier families} \subsection{Generic Hecke algebras} In this section we show that Theorem \ref{thm:mainresult} confirms Martino's conjecture when $W = G(m,d,n)$. To each complex reflection group it is possible to associate a generic Hecke algebra. We recall the definition as given in \cite{Mo} (see also \cite{BMR}). Denote by $\mc{K}$ the set of all hyperplanes in $\mathfrak{h}$ that are the fixed point sets of the complex reflections in $W$. The group $W$ acts on $\mc{K}$. Given $H \in \mc{K}$, the parabolic subgroup of $W$ that fixes $H$ pointwise is a rank one complex reflection group and thus isomorphic to the cyclic group $C_e$ for some $e$. Therefore an orbit of hyperplanes $\mc{C} \in \mc{K}/W$ corresponds to a conjugacy class of rank one parabolic subgroups, all isomorphic to a cyclic group $C_{e_{\mc{C}}}$; we write $e_{\mc{C}}$ for the common order of these parabolic subgroups. For every $d > 1$, fix $\eta_d = e^{\frac{2 \pi i}{d}}$ and let $\mu_{d}$ be the group of all $d^{th}$ roots of unity in $\mathbb{C}$. If $\mu_{\infty}$ is the group of all roots of unity in $\mathbb{C}$ then we choose $K$ to be some finite field extension of $\mathbb{Q}$ contained in $\mathbb{Q} (\mu_{\infty})$ such that $K$ contains $\mu_{e_{\mc{C}}}$ for all $\mc{C} \in \mc{K} / W$. The group of roots of unity in $K$ is denoted $\mu(K)$ and the ring of integers in $K$ is $\Z_K$. \subsection{} Fix a point $x_0 \in \mathfrak{h}_{\textrm{reg}} := \mathfrak{h} \backslash \bigcup_{H \in \mc{K}} H$ and denote by $\bar{x}_0$ its image in $\mathfrak{h}_{\textrm{reg}} / W$. Let $B$ denote the fundamental group $\Pi_1 (\mathfrak{h}_{\textrm{reg}} / W , \bar{x}_0)$. Let $\mbf{u} = \{ \, (u_{\mc{C},j}) \, : \mc{C} \in \mc{K} / W, \, 0 \le j \le e_{\mc{C}} - 1 \}$ be a set of indeterminates, and denote by $\Z[\mbf{u},\mbf{u}^{-1}]$ the ring $\Z[u_{\mc{C},j}^{\pm 1} \, : \mc{C} \in \mc{K} / W, \, 0 \le j \le e_{\mc{C}}-1]$.
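To illustrate the notation in the smallest case, let $W = G(m,1,1) = C_m$ acting on $\mathfrak{h} = \mathbb{C}$. Then $\mc{K} = \{ \{ 0 \} \}$ forms a single $W$-orbit $\mc{C}$ with $e_{\mc{C}} = m$, one may take $K = \mathbb{Q}(\eta_m)$, and $\mathfrak{h}_{\textrm{reg}} = \mathbb{C}^{\times}$, so that \bdm B = \Pi_1 (\mathbb{C}^{\times} / C_m , \bar{x}_0) \cong \Z. \edm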
The \textit{generic Hecke algebra}, $\mc{H}_W$, is the quotient of $\Z[ \mbf{u}, \mbf{u}^{-1}]B$ by the relations of the form \bdm (\mbf{s} - u_{\mc{C},0})(\mbf{s} - u_{\mc{C},1}) \cdots (\mbf{s} - u_{\mc{C},e_{\mc{C}} - 1}), \edm where $\mc{C} \in \mc{K}/W$ and $\mbf{s}$ runs over the set of monodromy generators around the images in $\mathfrak{h}_{\textrm{reg}} / W$ of the hyperplane orbit $\mc{C}$. The following properties are known to hold for all but finitely many complex reflection groups (it is conjectured that they hold for all complex reflection groups). In particular, they hold for the infinite series $G(m,d,n)$. \begin{itemize} \item $\mc{H}_W$ is a free $\Z[\mbf{u}, \mbf{u}^{-1}]$-module of rank $|W|$. \item $\mc{H}_W$ has a symmetrizing form $t \, : \, \mc{H}_W \rightarrow \Z[\mbf{u}, \mbf{u}^{-1}]$ that coincides with the standard symmetrizing form on $\Z_K W$ after specializing $u_{\mc{C},j}$ to $\eta^j_{e_\mc{C}}$. \item Let $\mbf{v} = \{ (v_{\mc{C},j}) \, : \mc{C} \in \mc{K} / W, \, 0 \le j \le e_{\mc{C}}-1 \}$ be a set of indeterminates such that $u_{\mc{C},j} = \eta^j_{e_{\mc{C}}} v_{\mc{C},j}^{| \mu(K)|}$. Then the $K(\mbf{v})$-algebra $K(\mbf{v})\mc{H}_W$ is split semisimple. \end{itemize} Note that Tits' deformation theorem, \cite[Theorem 7.2]{GeckPff}, implies that the specialization $v_{\mc{C},j} \mapsto 1$ induces a bijection $\LW \leftrightarrow \textsf{Irr} \, K(\mbf{v})\mc{H}_W$. \begin{remark} When $W = G(m,1,n)$ the set $\mc{K}/W$ is $\{ \mc{R}, \mc{S} \}$, where $\mc{R}$ is the orbit of hyperplanes that define the reflections in the conjugacy class $R$ and $\mc{S}$ is the orbit of hyperplanes defining the reflections in the conjugacy classes $S_1, \ds, S_{m-1}$. Therefore $e_{\mc{R}} = 2$ and $e_{\mc{S}} = m$.
Similarly, when $W = G(m,d,n)$ with $n \neq 2$, or with $n = 2$ and $p$ odd, the set $\mc{K}/W$ is $\{ \mc{R}, \mc{S} \}$ where $\mc{R}$ is the orbit of hyperplanes that define the reflections in the conjugacy class $R$ and $\mc{S}$ is the orbit of hyperplanes defining the reflections in the conjugacy classes $S_d, \ds, S_{d(p-1)}$. Therefore $e_{\mc{R}} = 2$ and $e_{\mc{S}} = p$. However, when $W = G(m,d,2)$ with $d$ even, the set $\mc{K}/W$ is $\{ \mc{R}_1, \mc{R}_2, \mc{S} \}$, where $\mc{R}_1$, $\mc{R}_2$ are the orbits of the hyperplanes that define the reflections in the conjugacy classes $R_1$ and $R_2$. Here $e_{\mc{R}_1} = e_{\mc{R}_2} = 2$ and $e_{\mc{S}} = p$. \end{remark} \subsection{Cyclotomic Hecke algebras} The cyclotomic Hecke algebras are certain specializations of the generic Hecke algebra. Let $y$ be an indeterminate. \begin{defn} A cyclotomic Hecke algebra is the $\Z_K[y,y^{-1}]$-algebra induced from $\Z[\mbf{v},\mbf{v}^{-1}] \mc{H}_W$ by an algebra homomorphism of the form \bdm \Z_K [\mbf{v},\mbf{v}^{-1}] \rightarrow \Z_K[y,y^{-1}], \qquad v_{\mc{C},j} \mapsto y^{n_{\mc{C},j}}, \edm where the tuple $\mbf{n} := \{ (n_{\mc{C},j} \in \Z) \, : \mc{C} \in \mc{K} / W, \, 0 \le j \le e_{\mc{C}}-1 \}$ is chosen such that the following property holds. Set $x := y^{|\mu (K)|}$ and let $z$ be an indeterminate. Then the element of $\Z_K[y,z]$ defined by \bdm \Gamma_{\mc{C}}(y,z) = \prod^{e_{\mc{C}} - 1}_{j = 0} (z - \eta^j_{e_{\mc{C}}} y^{n_{\mc{C},j}}) \edm is required to be invariant under $\textrm{Gal} \, (K(y)/ K(x))$ for all $\mc{C} \in \mc{K}/W$. In other words, $\Gamma_{\mc{C}}(y,z)$ is contained in $\Z_K[x^{\pm 1},z]$. The cyclotomic Hecke algebra corresponding to $\mbf{n}$ is denoted $\mc{H}_W(\mbf{n})$. \end{defn} The symmetrizing form $t$ on $\mc{H}_W$ induces a symmetrizing form on $K(y)\mc{H}_W(\mbf{n})$ and this algebra is split semisimple by \cite[(4.3)]{Chlou3}.
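To see what the invariance requirement on $\Gamma_{\mc{C}}(y,z)$ amounts to in the simplest situation, take $K = \mathbb{Q}$ and $e_{\mc{C}} = 2$, so that $\eta_2 = -1$, $|\mu(K)| = 2$, $x = y^2$, and $\textrm{Gal}(K(y)/K(x))$ is generated by $y \mapsto -y$. The following stdlib-only sketch (our own toy verification; the restriction to $e_{\mc{C}} = 2$ and the function names are ours, not from \cite{Mo}) expands $\Gamma(y,z) = (z - y^{n_0})(z + y^{n_1})$ and tests invariance:

```python
def poly_mul(p, q):
    """Multiply polynomials in y, z stored as {(deg_y, deg_z): coeff}."""
    r = {}
    for (a, b), c in p.items():
        for (d, e), f in q.items():
            r[(a + d, b + e)] = r.get((a + d, b + e), 0) + c * f
    return {k: v for k, v in r.items() if v != 0}

def gamma(n0, n1):
    """Gamma_C(y, z) = (z - y^n0)(z + y^n1) for e_C = 2, eta_2 = -1, K = Q."""
    return poly_mul({(0, 1): 1, (n0, 0): -1}, {(0, 1): 1, (n1, 0): 1})

def galois_invariant(p):
    """Invariance under the generator y -> -y of Gal(Q(y)/Q(y^2))."""
    return p == {k: c * (-1) ** k[0] for k, c in p.items()}
```

For instance, the tuple $\mbf{n} = (2,4)$ passes the test (every power of $y$ in the expansion is even, so $\Gamma$ lies in $\Z[x^{\pm 1},z]$), while $\mbf{n} = (1,2)$ fails; only tuples satisfying this condition define a cyclotomic specialization.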
Therefore Tits' deformation theorem implies that we have bijections \bdm \LW \leftrightarrow \textsf{Irr} \, K(y)\mc{H}_W(\mbf{n}) \leftrightarrow \textsf{Irr} \, K(\mbf{v})\mc{H}_W. \edm \subsection{Rouquier families} The \textit{Rouquier ring} is defined to be $\mc{R}(y) = \Z_K[y,y^{-1},(y^n - 1)^{-1} \, : \, n \in \mathbb{N}]$. Since $\mc{H}_W$ is free of rank $|W|$, $\mc{R}(y)\mc{H}_W(\mbf{n}) \subset K(y)\mc{H}_W(\mbf{n})$ is also free of rank $|W|$. We define an equivalence relation on $\textsf{Irr} \, K(y)\mc{H}_W(\mbf{n}) = \LW$ by saying that $\lambda \sim \mu$ if and only if $\lambda$ and $\mu$ belong to the same block of $\mc{R}(y)\mc{H}_W(\mbf{n})$. The equivalence classes of this relation are called \textit{Rouquier families}. \subsection{}\label{sec:fixparametersconjecture} Fix a parameter $\mbf{c}$ for $G(m,d,n)$ that extends to a parameter $\mbf{c}$ for $G(m,1,n)$, translated into the form $\mbf{h} = (h,H_0, \ds , H_{m-1})$ as described in (\ref{subsection:translate}). Again we make the assumption that $h = -1$ and $\mbf{h} \in \mathbb{Q}^{m+1}$. Choose $e \in \mathbb{N}$ such that $eh$ and $eH_i \in \Z$ for all $0 \le i \le m-1$. Then $\mbf{n} = (n_{\mc{R},0},n_{\mc{R},1},n_{\mc{S},0}, \ds , n_{\mc{S},m-1})$ is fixed to be $n_{\mc{R},0} = e, n_{\mc{R},1} = 0$ and $n_{\mc{S},j} = e \sum_{i = 1}^j H_i$ for $0 \le j \le m - 1$. From now on we fix $K = \mathbb{Q} (\eta_m)$ and $\Z_K = \Z[\eta_m]$. Recall the morphism $\Upsilon$ defined in (\ref{sec:restricteddefinition}). \begin{conjecture}[Martino, \cite{Mo}, (2.7)] Let $\mbf{c},\mbf{h}$ and $\mbf{n}$ be as above. \begin{enumerate} \item The partition of $\textsf{Irr} \, G(m,d,n)$ into Rouquier families associated to $\mc{H}_{G(m,d,n)}(\mbf{n})$ refines the\\ $\textsf{CM}_{\mbf{c}}(G(m,d,n))$ partition. For generic values of $\mbf{c}$ the partitions are equal. \item Let $q \in \Upsilon^{-1}(0)$ and let $K(y)B_1 \oplus \cdots \oplus K(y)B_k$ be the sum of the corresponding Rouquier blocks.
Then $\dim \, (\mathbb{C}[ \Upsilon^{*}(0)_q]) = \dim_{K(y)} \, K(y)B_1 \oplus \cdots \oplus K(y)B_k$. \end{enumerate} \end{conjecture} \subsection{}\label{sec:essential} The Rouquier families for $G(m,1,n)$ are calculated by Chlouveraki \cite{Chlou1} using the idea of \textit{essential hyperplanes}. The essential hyperplanes for $G(m,1,n)$ in $\Z^{m+1}$ are of the form $(k n_{\mc{R},0} + n_{\mc{S},i} - n_{\mc{S},j} = 0)$ for $0 \le i < j \le m-1$ and $-m < k < m$, and $(n_{\mc{R},0} = 0)$. \begin{defn} Let $\mbf{n} \in \Z^{m+1}$. \begin{itemize} \item The hyperplane $(k n_{\mc{R},0} + n_{\mc{S},i} - n_{\mc{S},j} = 0)$ is said to be \textit{essential} if there exists a prime ideal $\mf{p}$ of $\Z [\eta_m]$ such that $\eta_m^i - \eta_m^j \in \mf{p}$. The hyperplane $(n_{\mc{R},0} = 0)$ is always assumed to be essential. \item If $\mbf{n}$ does not belong to any essential hyperplane then $\mbf{n}$ is said to be \textit{generic}. \item If $\mbf{n}$ belongs to the essential hyperplane $(k n_{\mc{R},0} + n_{\mc{S},i} - n_{\mc{S},j} = 0)$ and $\mbf{n}$ does not belong to any other essential hyperplane then $\mbf{n}$ is said to be a \textit{generic} element of $(k n_{\mc{R},0} + n_{\mc{S},i} - n_{\mc{S},j} = 0)$. \end{itemize} \end{defn} \subsection{}\label{prop:hyperpartition} If $\mbf{n} \in \Z^{m+1}$ does not belong to any essential hyperplane then the corresponding Rouquier families are independent of the choice of $\mbf{n}$. Similarly, if $\mbf{n}$ is a generic element in some essential hyperplane then the Rouquier families for $\mbf{n}$ are independent of the choice of $\mbf{n}$. A general element $\mbf{n} \in \Z^{m+1}$ will belong to a collection of essential hyperplanes $H_1, \ds, H_k = 0$. It has been shown by Chlouveraki \cite{Chlou3} that Rouquier families have the property of \textit{semi-continuity}. 
This means that the partition of $\textsf{Irr} \, G(m,1,n)$ into Rouquier families for $\mbf{n}$ is the finest partition of $\textsf{Irr} \, G(m,1,n)$ that is refined by the Rouquier families partition of $\textsf{Irr} \, G(m,1,n)$ associated to each of the essential hyperplanes $H_i = 0$. Therefore if $\underline{\lambda}$ and $\underline{\mu}$ are in the same Rouquier family for some essential hyperplane $H_i = 0$ then they are in the same Rouquier family for $\mbf{n}$. \begin{prop}[\cite{Chlou1}, Proposition 3.15] Let $(n_{\mc{S},i} - n_{\mc{S},j} = 0)$ be an essential hyperplane and choose $\mbf{n}$ to be a generic element of $(n_{\mc{S},i} - n_{\mc{S},j} = 0)$. Then $\underline{\lambda}, \underline{\mu} \in \mc{P}(m,n)$ are in the same Rouquier family of $\mc{R}(y)\mc{H}_{G(m,1,n)}(\mbf{n})$ if and only if \begin{enumerate} \item $\lambda^{a} = \mu^a$ for all $a \neq i,j$; and \item $\textrm{Res} \, {}_{(\lambda^i,\lambda^j)}(x) = \textrm{Res} \, {}_{(\mu^i,\mu^j)}(x)$. \end{enumerate} \end{prop} \begin{proof} The result \cite[Proposition 3.15]{Chlou1} is stated in terms of weighted content but \cite[Proposition 3.4]{BroueKim} shows that we can reformulate the result in terms of residues. The weighting is $(0,k)$, which in our case becomes $(0,0)$ since $k = 0$. \end{proof} \begin{lem} Let $\underline{\lambda}, \underline{\mu} \in \mc{P}(m,n)$. We write $\underline{\lambda} \sim \underline{\mu}$ if there exists $0 \le i \le p-1$ and $0 \le j < k \le d-1$ such that $\lambda^a = \mu^a$ for all $a \neq i + jp, i + kp$ and \bdm \textrm{Res} \, {}_{(\lambda^{i + jp},\lambda^{i + kp})}(x) = \textrm{Res} \, {}_{(\mu^{i + jp},\mu^{i + kp})}(x). \edm Now choose $\mbf{n}$ to be a generic parameter for $\mc{H}_{G(m,d,n)}$. Then the partition of $\textsf{Irr} \, G(m,1,n)$ into Rouquier families for $\mc{R}(y)\mc{H}_{G(m,1,n)}(\mbf{n})$ is the set of equivalence classes in $\textsf{Irr} \, G(m,1,n)$ under the transitive closure of $\sim$.
\end{lem} \begin{proof} Since $\mbf{n}$ is generic for $\mc{H}_{G(m,d,n)}$, the parameter $\mbf{h}$ satisfies $H_{i + p} = H_i$ for all $i$ and no other linear relations. Therefore it follows from (\ref{sec:fixparametersconjecture}) that the only hyperplanes that might be essential for $\mbf{n}$ (now considered a parameter for $\mc{H}_{G(m,1,n)}$) are of the form $(n_{\mc{S},i +jp} - n_{\mc{S},i + kp} = 0)$ for $0 \le i \le p-1$ and $0 \le j < k \le d-1$. However not all of these hyperplanes will be essential. Let us say that the $m$-multi-partition $\underline{\lambda}$ is \textit{linked} to the $m$-multi-partition $\underline{\mu}$ if there exists an essential hyperplane $(n_{\mc{S},i +jp} - n_{\mc{S},i + kp} = 0)$ containing $\mbf{n}$ such that $$ \textrm{Res} \, {}_{(\lambda^{i + jp},\lambda^{i + kp})}(x) = \textrm{Res} \, {}_{(\mu^{i + jp},\mu^{i + kp})}(x). $$ Then, by Proposition \ref{prop:hyperpartition} and the principle of semi-continuity, the Rouquier families for $\mc{R}(y)\mc{H}_{G(m,1,n)}(\mbf{n})$ are the set of equivalence classes in $\textsf{Irr} \, G(m,1,n)$ under the transitive closure of ``linked''. Since $\underline{\lambda}$ being linked to $\underline{\mu}$ implies that $\underline{\lambda} \sim \underline{\mu}$, the Rouquier families refine the partition defined by $\sim$. Therefore we must show that if $\underline{\lambda} \sim \underline{\mu}$ (via $i + jp, i+ kp$ say) then there exists a chain of $m$-multi-partitions $\underline{\lambda} = \underline{\lambda}_1, \ds, \underline{\lambda}_q = \underline{\mu}$ such that $\underline{\lambda}_{\alpha}$ is linked to $\underline{\lambda}_{\alpha+1}$ for all $1 \le \alpha \le q-1$.
For each $0 \le i \le p-1$ and $0 \le j \le d-1$, the result \cite[Lemma 3.6]{Chlou2} says that the multi-partitions $\underline{\lambda}$ and $(i, i+ j p) \cdot \underline{\lambda}$ belong to the same Rouquier family for $\mc{R}(y)\mc{H}_{G(m,1,n)}(\mbf{n})$, where $(i, i + jp)$ is the transposition swapping the partitions $\lambda^{i}$ and $\lambda^{i + j p}$. In particular, this result (assuming that $d > 1$) shows that there exists some $l \neq 0$ such that the hyperplane $(n_{\mc{S},i} - n_{\mc{S},i + l p } = 0)$ is essential. Applying the result \cite[Lemma 3.6]{Chlou2}, we see that $\underline{\lambda}$ is in the same Rouquier family as $$ \underline{\lambda}' := (i, i+ kp) \cdot (i + l p, i + j p) \cdot \underline{\lambda} $$ and $\underline{\mu}$ is in the same Rouquier family as $$ \underline{\mu}' := (i, i+ kp) \cdot (i + l p, i + j p) \cdot \underline{\mu}. $$ Now $(\lambda')^a = (\mu')^a$ for all $a \neq i, i + lp$ and \bdm \textrm{Res} \, {}_{((\lambda')^{i},(\lambda')^{i + lp})}(x) = \textrm{Res} \, {}_{((\mu')^{i},(\mu')^{i + lp})}(x). \edm Since the hyperplane $(n_{\mc{S},i} - n_{\mc{S},i + l p } = 0)$ is essential, this implies that $\underline{\lambda}'$ is linked to $\underline{\mu}'$ and there must be a chain from $\underline{\lambda}$ to $\underline{\mu}$ as required. \end{proof} \subsection{}\label{sec:residueHecke} We will require the following combinatorial result. The proof uses the representation theory of cyclotomic Hecke algebras; it would be interesting to have a direct combinatorial proof. \begin{lem} Let $\underline{\lambda}$ and $\underline{\mu}$ be two $m$-multi-partitions of $n$.
Then $\textrm{Res}_{\underline{\lambda}}(x) = \textrm{Res}_{\underline{\mu}}(x)$ if and only if there exists a sequence of multipartitions $\underline{\lambda} = \underline{\lambda}(1), \ds, \underline{\lambda}(k) = \underline{\mu} \in \mc{P}(m,n)$ and $s(i) \neq t(i) \in \{1, \ds, m \}$, $1 < i \le k$, such that \begin{enumerate} \item $\lambda(i-1)^a = \lambda(i)^a$ for all $a \neq s(i), t(i)$; and \item $\textrm{Res} \, {}_{(\lambda(i-1)^{s(i)},\lambda(i-1)^{t(i)})}(x) = \textrm{Res} \, {}_{(\lambda(i)^{s(i)},\lambda(i)^{t(i)})}(x), \quad \forall \, 1 < i \le k$. \end{enumerate} \end{lem} \begin{proof} Let us fix $\mbf{n} = (n_{\mc{R},0},n_{\mc{R},1},n_{\mc{S},0}, \ds , n_{\mc{S},m-1})$ with $n_{\mc{R},0} = 1$, $n_{\mc{R},1} = 0$ and $n_{\mc{S},i} = 0$ for all $0 \le i \le m-1$. Then the Lemma is the result \cite[Proposition 3.19]{Chlou1} for our special parameter $\mbf{n}$, noting once again that \cite[Proposition 3.4]{BroueKim} allows us to rephrase \cite[Proposition 3.19]{Chlou1}, which is stated in terms of weighted content, in the language of residues. \end{proof} \subsection{} We can now confirm the first part of Martino's conjecture for $G(m,d,n)$. \begin{thm} Let $\mbf{c} \, : \, \mc{S}(G(m,d,n)) \rightarrow \mathbb{C}$ be a $G(m,1,n)$-equivariant function such that $h = -1$ and $\mbf{h} \in \mathbb{Q}^{m+1}$. Choose $e \in \mathbb{N}$ such that $eh$ and $eH_i \in \Z$ for all $0 \le i \le m-1$. Fix $n_{\mc{R},0} = e, n_{\mc{R},1} = 0$ and $n_{\mc{S},j} = e \sum_{i = 1}^j H_i$ for $0 \le j \le m - 1$. Then \begin{enumerate} \item the partition of $\textsf{Irr} \, G(m,d,n)$ into Rouquier families associated to $\mc{H}_{G(m,d,n)}(\mbf{n})$ refines the\\ $\textsf{CM}_{\mbf{c}}(G(m,d,n))$ partition; \item the partition of $\textsf{Irr} \, G(m,d,n)$ into Rouquier families associated to $\mc{H}_{G(m,d,n)}(\mbf{n})$ equals the\\ $\textsf{CM}_{\mbf{c}}(G(m,d,n))$ partition for generic values of the parameter $\mbf{c}$.
\end{enumerate} \end{thm} \begin{proof} It is shown in \cite[Theorem 3.10]{Chlou2} that if $\underline{\lambda}$ is a $d$-stuttering $m$-multi-partition of $n$ such that $\{ \underline{\lambda} \}$ is a Rouquier family for $\mc{R}(y) \mc{H}_{G(m,1,n)}(\mbf{n})$ then the sets $\{ (\underline{\lambda}, \epsilon) \}$, $\epsilon \in C^\vee_{d}$, are Rouquier families for $\mc{R}(y) \mc{H}_{G(m,d,n)}(\mbf{n})$. This agrees with Theorem \ref{thm:mainresult} (1). The second part of \cite[Theorem 3.10]{Chlou2} shows that if $\mc{P}$ is a Rouquier family for $\mc{R}(y) \mc{H}_{G(m,1,n)}(\mbf{n})$ not of the type just described then, in the notation of Theorem \ref{thm:compare1}, $\Gamma( \mc{P})$ is a Rouquier family for $\mc{R}(y) \mc{H}_{G(m,d,n)}(\mbf{n})$. The result \cite[Corollary 3.13]{Mo} shows that the partition of $\textsf{Irr} \, G(m,1,n)$ into Rouquier families associated to $\mc{H}_{G(m,1,n)}(\mbf{n})$ refines the $\textsf{CM}_{\mbf{c}}(G(m,1,n))$ partition. Therefore there exists a $\textsf{CM}_{\mbf{c}}(G(m,1,n))$-partition $\mc{Q}$ such that $\mc{P} \subseteq \mc{Q}$. By Theorem \ref{thm:mainresult} (2), $\Gamma(\mc{Q})$ is a $\textsf{CM}_{\mbf{c}}(G(m,d,n))$-partition. Thus $\Gamma(\mc{P}) \subseteq \Gamma(\mc{Q})$ implies that the partition of $\textsf{Irr} \, G(m,d,n)$ into Rouquier families refines the $\textsf{CM}_{\mbf{c}}(G(m,d,n))$ partition.\\ Now let $\mbf{c}$ be a generic parameter for the rational Cherednik algebra associated to $G(m,d,n)$. We think of $\mbf{c}$ as a parameter for the rational Cherednik algebra associated to $G(m,1,n)$. Thus it is a generic point of the subspace defined by $c_j = 0$ for all $j \not\equiv 0 \, \mod \, d$. Correspondingly, $\mbf{n}$ is a generic point in the sublattice of $\Z^{m+1}$ defined by the equations $n_{\mc{S},i + kp} - n_{\mc{S},i + lp} = 0$ for $0 \le i \le p-1$ and $0 \le k < l \le d-1$.
We wish to show that the Calogero-Moser partition of $\textsf{Irr} \, G(m,d,n)$ equals the partition of $\textsf{Irr} \, G(m,d,n)$ into Rouquier families. As explained in the previous paragraph, \cite[Theorem 3.10]{Chlou2} and Theorem \ref{thm:mainresult} imply that it suffices to show that the Calogero-Moser partition of $\textsf{Irr} \, G(m,1,n)$ for $\mbf{c}$ equals the partition of $\textsf{Irr} \, G(m,1,n)$ into Rouquier families for $\mbf{n}$. The proof of Lemma \ref{lem:generic} shows that $\underline{\lambda}, \underline{\mu} \in \mc{P}(m,n)$ are in the same Calogero-Moser partition of $\textsf{Irr} \, G(m,1,n)$ if and only if \bdm \sum_{j = 0}^{d-1} \textrm{Res}_{\lambda^{i + pj}}(x^e) = \sum_{j = 0}^{d-1} \textrm{Res}_{\mu^{i + pj}}(x^e) \quad \forall \, 0 \le i \le p-1. \edm Combining Lemma \ref{prop:hyperpartition} and Lemma \ref{sec:residueHecke} shows that $\underline{\lambda}, \underline{\mu} \in \mc{P}(m,n)$ are in the same Rouquier family of $\mc{R}(y)\mc{H}_{G(m,1,n)}(\mbf{n})$ if and only if the same condition holds. \end{proof} \section*{Acknowledgements} The research described here was done both at the University of Edinburgh with the financial support of the EPSRC and during a visit to the University of Bonn with the support of a DAAD scholarship. This material will form part of the author's PhD thesis for the University of Edinburgh. The author would like to express his gratitude to his supervisor, Professor Iain Gordon, for his help, encouragement and patience. He also thanks Dr. Maurizio Martino, Dr. Maria Chlouveraki and Professor Ken Brown for many fruitful discussions and Professor C\'edric Bonnaf\'e for pointing out an error in an earlier version of the article.
\section{Introduction}\label{sec:Intro} The LHCb Collaboration has recently presented evidence~\cite{Aaij:2020fnh} for the observation of at least one resonance in the $J/\psi$-pair spectrum at about 6900~MeV, and the likely presence of at least one additional resonance lying below this mass but above the 6200~MeV $J/\psi$-pair threshold. Such states are naturally assigned the valence-quark content $c\bar c c\bar c$, making them the first all-heavy multiquark exotic candidates claimed to date in the experimental literature. Theoretical studies of $c\bar c c\bar c$ states have a much longer history, dating indeed to a time only two years after the discovery of the $J/\psi$~\cite{Iwasaki:1976cn} and followed by a smattering of papers in the 1980s~\cite{Chao:1980dv,Ader:1981db, Badalian:1985es}. The current interest in $c\bar c c\bar c$ states, starting in 2011~\cite{Berezhnoy:2011xy,Berezhnoy:2011xn} and particularly ramping up since 2016~\cite{Wu:2016vtq,Chen:2016jxd, Karliner:2016zzc,Wang:2017jtz,Bicudo:2017usw,Debastiani:2017msn, Anwar:2017toa,Wang:2018poa,Liu:2019zuc,Wang:2019rdo,Bedolla:2019zwg, Chen:2020lgj,Liu:2020eha,Wang:2020ols,Jin:2020jfc,Yang:2020rih, Becchi:2020uvq,Lu:2020cns,Chen:2020xwe,Wang:2020gmd}, emerged from the expectation of dedicated searches at the LHC\@. A notable feature of the all-heavy multiquark exotics $Q_1 \overline Q_2 Q_3 \overline Q_4$ ($Q_i \! = \! c$ or $b$), in contrast to the known exotics $Q \overline Q q \bar q^\prime$~\cite{Lebed:2016hpi} ($q, q^\prime \! \in \! \{ u,d \}$), is the lack of a plausible molecular structure for the states. The lightness of the quarks $q, \bar q^\prime$ in the $Q \overline Q q \bar q^\prime$ case suggests the possibility of $(Q \bar q^\prime) (\overline Q q)$ molecules, bound by the exchange of light mesons with valence content $(q \bar q^\prime)$ and possessing a spatial extent at least as large as the light-meson wave function, of order $1/\Lambda_{\rm QCD} \! \simeq \! O(1)$~fm. 
If the state lies especially close to the $(Q \bar q^\prime)(\overline Q q)$ threshold [{\it e.g.}, $X(3872)$], then its spatial extent is determined by the inverse of the binding energy and can be quite substantial, possibly as large as several fm. Moreover, Yukawa-like light-meson binding exchanges as an explanation for such near-threshold states begin to appear implausibly fine-tuned, and instead {\it threshold rescattering effects\/} (loop exchanges of virtual particles between the constituent mesons that numerically enhance the amplitude near the threshold) provide a mechanism for binding the state. In contrast, the case of all-heavy $Q_1 \overline Q_2 Q_3 \overline Q_4$ states lacks a light-meson exchange mechanism, both for Yukawa-type exchanges and for threshold effects. The $X(6900)$ is noted~\cite{Aaij:2020fnh} to lie in the vicinity of the $\chi_{c0} \chi_{c0}$ and $\chi_{c1} \chi_{c0}$ thresholds, but to our knowledge no calculation has yet suggested the ability of such a threshold rescattering to produce a strong resonance. In general, one expects the lowest-lying $Q_1 \overline Q_2 Q_3 \overline Q_4$ states to exhibit comparable distances between all four heavy quarks. If, say, the $Q_1 \overline Q_2$ and $Q_3 \overline Q_4$ pairs are formed with substantially smaller internal separations than the distance between the two pairs, then one expects the immediate formation of two conventional quarkonium states rather than a single resonance, even if both pairs are in color octets and require gluon exchange (which has a range comparable to that of light-meson exchange) in order for $Q_1 \overline Q_2$ and $Q_3 \overline Q_4$ to hadronize as color singlets. As a result, the most common models for $Q_1 \overline Q_2 Q_3 \overline Q_4$ states assume a diquark-antidiquark [($Q_1 Q_3$)($\overline Q_2 \overline Q_4$)] structure, typically exploiting the attractive color-antitriplet quark-quark coupling.
One should keep in mind, however, that if all four quarks have comparable separations (as is anticipated for the ground states), then a combination of different color structures should be expected to appear for those states ({\it e.g.}, as in the lattice simulation of Ref.~\cite{Bicudo:2017usw}). Beyond the ground states, the separations between the quarks can become differentiated. As noted above, closer association of the $Q \overline Q$ pairs is expected to lead to an immediate dissociation into quarkonium pairs, while the configuration ($Q_1 Q_3$)($\overline Q_2 \overline Q_4$) with color-triplet diquarks becomes the only one that features an attractive interaction between the component constituents (the quarks within the diquarks), but must still remain bound due to confinement, independent of the exchange of any number of gluons. These features define the {\it dynamical diquark picture\/} of multiquark exotics~\cite{Brodsky:2014xia,Lebed:2015tna}. In the original picture, the diquark separation is a consequence of the production process; for example, $c\bar c q\bar q^\prime$ tetraquarks can be manifested due to the large momentum release between the $c\bar c$ pair in $B$-meson decays into a $(cq)(\bar c \bar q^\prime)$ structure. To be more precise, the diquark-antidiquark state couples most strongly to the portion of the four-quark momentum-space wave function for which the relative momentum between the quasiparticles $\delta \! \equiv \! (Q_1 Q_3)$ and $\bar\delta \! \equiv \! (\overline Q_2 \overline Q_4)$ is significantly larger than the relative momenta within them. The dynamical diquark picture is elevated to a full model by identifying its mass eigenstates with those of the gluon field connecting the diquarks~\cite{Lebed:2017min}.
Explicitly, confinement limits the eventual separation of the $\delta$-$\bar\delta$ pair even though they may form with a large relative momentum, and the specific stationary states of the full system are supplied by the quantized modes of the gluon field stretching between the two heavy, (eventually) nearly stationary sources $\delta,\bar\delta$. This approach uses the Born-Oppenheimer (BO) approximation in precisely the same manner as is done for simulations of heavy-quark hybrids on the lattice ({\it e.g.}, Refs.~\cite{Juge:1997nc,Juge:1999ie,Juge:2002br, Morningstar:2019,Capitani:2018rox}). Indeed, the specific form of the static potential $V_\Gamma(r)$ between the heavy sources for a particular BO glue configuration $\Gamma$ is precisely the same one computed in each lattice simulation just referenced. The corresponding coupled Schr\"{o}dinger equations were first numerically solved for $c\bar c q\bar q^\prime$ states in Ref.~\cite{Giron:2019bcs}. Typical diquark models approximate the quasiparticles $\delta, \bar\delta$ to be pointlike, even though they are expected to have spatial extents comparable to those of mesons carrying the same valence-quark flavor content. Nevertheless, model calculations in Ref.~\cite{Giron:2019cfc} for $c\bar c q\bar q$ states show that finite diquark size has a surprisingly mild effect on the spectrum for a $\delta \! = \! (cq)$ radius as large as 0.4~fm. The dynamical diquark model also selects a very specific set of spin-dependent couplings as the ones deemed most physically significant.
In this model the $\delta, \bar\delta$ pair form distinguishable, separate entities within the full state, so that the dominant spin-spin couplings are taken to be the ones between quarks within each diquark~\cite{Maiani:2014aja}, while typical existing models for $c\bar c c\bar c$ states ({\it e.g.}, Refs.~\cite{Berezhnoy:2011xy,Berezhnoy:2011xn}) treat all quark spin-spin interactions on equal footing, or consider only couplings to full diquark spins ({\it e.g.}, Ref.~\cite{Bedolla:2019zwg}). The more restrictive paradigm used here leads to very simple predictions for the spectrum of $c\bar c c\bar c$ states, particularly in $S$-wave multiplets, which will become immediately testable once the quantum numbers of the $c\bar c c\bar c$ states are known. On the other hand, the dominant operators in this model for $c\bar c c\bar c$ states carrying orbital angular momentum dependence (relevant to $P$- and higher-wave states) are taken to couple only to the diquarks as units, since $\delta,\bar\delta$ are assumed to have no internal orbital excitation for all low-lying $c\bar c c\bar c$ states.\footnote{In contrast, the tensor operator for $P$-wave $c\bar c q\bar q^\prime$ states in Ref.~\cite{Giron:2020fvd}, owing its origin to a pionlike exchange within the state, was chosen to couple only to the light-quark spins within the diquarks. Nevertheless, the matrix elements for an alternative tensor operator that couples only to the full diquark spins (as to be used here) are also computed in that work.} The resultant spin-orbit and tensor operators for the low-lying spectrum are the same as those used in Ref.~\cite{Bedolla:2019zwg}, but differ from those used in Ref.~\cite{Liu:2020eha}, which instead are chosen to couple to all individual quark spins. Again, a very simple spectrum arises in this model for the $P$-wave states, whose degree of validity will become immediately apparent with further data.
Our purpose in this paper is therefore not to compete with detailed calculations of spectra that are based upon assuming specific forms for all operators contributing to the Hamiltonian of $c\bar c c\bar c$ states ({\it e.g.}, using a one-gluon-exchange potential to obtain an explicit functional form for the coefficient for every operator, as in Ref.~\cite{Bedolla:2019zwg}). Rather, we describe the most significant features in the spectrum parametrically, identifying particular spin-spin, spin-orbit, or tensor terms to pinpoint their origin, while remaining agnostic as to the precise dynamical origin of these operators. We nevertheless also present an initial fit to the $c\bar c c\bar c$ spectrum, using numerical values for the Hamiltonian parameters obtained from the analogous operators in other sectors of exotics to which the model has previously been applied. Specifically, the strength of the spin-spin operator is obtained from a recent fit to $c\bar c s\bar s$ candidates~\cite{Giron:2020qpb}, and the spin-orbit and tensor strengths are taken from a recent fit to $P$-wave $c\bar c q\bar q^\prime$ candidates~\cite{Giron:2020fvd}. This paper is organized as follows. In Sec.~\ref{sec:Spectrum} we review the spectroscopy of the model for $S$- and $P$-wave $Q_1 \overline Q_2 Q_3 \overline Q_4$ states, identifying quantum-number restrictions arising from spin statistics. Section~\ref{sec:MassOps} presents the Hamiltonian and tabulates all matrix elements for the allowed states, and we identify features of the spectrum that appear based upon their parametric analysis. In Sec.~\ref{sec:Num} we present a numerical prediction for the $c\bar c c\bar c$ spectrum, using as described above the results of previous work; and in Sec.~\ref{sec:Concl} we conclude. 
\section{Spectroscopy of $Q\bar Q Q\bar Q$ Exotics} \label{sec:Spectrum} The spectroscopy of $\delta$-$\bar\delta$ states in which the diquarks $\delta,\bar\delta$ contain no internal orbital angular momentum, but that allow for arbitrary orbital excitation and gluon-field excitation between the $\delta$-$\bar\delta$ pair, is presented in Ref.~\cite{Lebed:2017min}. For the all-heavy states with distinguishable quarks in $\delta$ and $\bar\delta$ ({\it i.e.}, $b\bar b c\bar c$, or for that matter, $c\bar c s\bar s$), precisely the same enumeration of states occurs. The core states, expressed in the basis of good diquark-spin eigenvalues with labels such as $1_\delta$, are given by \begin{eqnarray} J^{PC} = 0^{++}: & \ & X_0 \equiv \left| 0_\delta , 0_{\bar\delta} \right>_0 \, , \ \ X_0^\prime \equiv \left| 1_\delta , 1_{\bar\delta} \right>_0 \, , \nonumber \\ J^{PC} = 1^{++}: & \ & X_1 \equiv \frac{1}{\sqrt 2} \left( \left| 1_\delta , 0_{\bar\delta} \right>_1 \! + \left| 0_\delta , 1_{\bar\delta} \right>_1 \right) \, , \nonumber \\ J^{PC} = 1^{+-}: & \ & Z \ \equiv \frac{1}{\sqrt 2} \left( \left| 1_\delta , 0_{\bar\delta} \right>_1 \! - \left| 0_\delta , 1_{\bar\delta} \right>_1 \right) \, , \nonumber \\ & \ & Z^\prime \, \equiv \left| 1_\delta , 1_{\bar\delta} \right>_1 \, , \nonumber \\ J^{PC} = 2^{++}: & \ & X_2 \equiv \left| 1_\delta , 1_{\bar\delta} \right>_2 \, , \label{eq:Swavediquark} \end{eqnarray} with the outer subscripts on the kets indicating total quark spin $S$.
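The counting in Eq.~(\ref{eq:Swavediquark}), together with the spin-statistics restriction discussed below for identical quarks within a diquark, amounts to elementary bookkeeping that can be automated. In the following sketch (our own illustration; the function names are ours), the $C$ assignment for equal diquark spins $s_1 = s_2$ uses the sign $(-1)^{s_1 + s_2 + S}$, while mixed-spin combinations split into one symmetric ($C = +$) and one antisymmetric ($C = -$) state, matching Eq.~(\ref{eq:Swavediquark}):

```python
def s_wave_states(identical_quarks):
    """Enumerate S-wave diquark-antidiquark states |s1, s2>_S with J^PC = S^{+C}.

    For identical quarks within each diquark (e.g. cc), Fermi statistics for a
    color-antitriplet, orbitally unexcited diquark forces s1 = s2 = 1.
    """
    spins = (1,) if identical_quarks else (0, 1)
    states = []
    for s1 in spins:
        for s2 in spins:
            if s2 < s1:
                continue                       # count unordered spin pairs once
            for S in range(abs(s1 - s2), s1 + s2 + 1):
                if s1 == s2:
                    C = "+" if (s1 + s2 + S) % 2 == 0 else "-"
                    states.append((s1, s2, S, "%d+%s" % (S, C)))
                else:                          # symmetric / antisymmetric combos
                    states.append((s1, s2, S, "%d++" % S))
                    states.append((s1, s2, S, "%d+-" % S))
    return states
```

Calling it with `identical_quarks=False` returns the six states of Eq.~(\ref{eq:Swavediquark}); with `identical_quarks=True` only the $0^{++}$, $1^{+-}$, and $2^{++}$ states survive.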
On their own, these 6 states fill the lowest multiplet $\Sigma^+_g(1S)$ within the Born-Oppenheimer (BO) approximation for the gluon-field potential connecting the $\delta$-$\bar\delta$ pair. Higher BO potentials (like $\Sigma^-_u$, where standard BO quantum-number labels such as these are defined in Ref.~\cite{Lebed:2017min}) produce the multiquark analogues to hybrid mesons, and thus are expected to lie about 1~GeV above the $\Sigma^+_g(1S)$ ground states. For phenomenological reasons to be discussed in Sec.~\ref{sec:Num}, we do not discuss such states further here. The diquarks $\delta,\bar\delta$ in this model transform as color (anti)triplets, which are antisymmetric under quark-color exchange. If the quarks within $\delta$ or $\bar\delta$ are identical, then the space-spin wave function of the corresponding diquark must be symmetric in order to satisfy Fermi statistics for the complete $\delta$ or $\bar\delta$ wave function; however, since the model assumes no orbital excitation within the diquarks, their spatial wave function and hence also their spin wave function alone must be symmetric, which thus requires the corresponding diquark spin to equal unity: Only $1_\delta$ and $1_{\bar\delta}$ survive. In the $c\bar c c\bar c$ or $b\bar b b\bar b$ case, one immediately sees from Eq.~(\ref{eq:Swavediquark}) that the states $X_0$, $X_1$, and $Z$ are forbidden by spin statistics.\footnote{One may also consider truly exotic states like $b\bar b b\bar c$, in which $0_\delta$ is forbidden but $0_{\bar\delta}$ is allowed, in which case only the state $X_0$ is eliminated. For such states $C$ also ceases to be a good quantum number, so that $X_1$ and $Z$ become the same $1^+$ state, thus leaving a total of 4 states in the multiplet $\Sigma^+_g(1S)$.
In contrast, the case $b\bar b c\bar c$ (considered in, {\it e.g.}, Ref.~\cite{Bedolla:2019zwg}) retains the $C$ quantum number and all 6 $\Sigma^+_g(1S)$ states.} The ground-state multiplet $\Sigma^+_g(1S)$ is thus halved: Only the three states $X^\prime_0$ ($0^{++}$), $Z^\prime$ ($1^{+-}$), and $X_2$ ($2^{++}$) survive. An identical analysis applies to all radial-excitation multiplets $\Sigma^+_g(nS)$. One immediate conclusion of this model becomes evident: If the full state wave function contains a component that allows either diquark to appear in the (symmetric) color sextet, then that diquark in the low-lying states must appear in the antisymmetric spin-0 combination $0_\delta^{\vphantom{1}}} \def\bde{{\bar\delta}$ or $0_\bde$. In that case, the full spectrum of 6 states from Eq.~(\ref{eq:Swavediquark}), most notably a state with $J^{PC} \! = \! 1^{++}$, must appear. {\em The observation of a $1^{++}$ $c\bar c c\bar c$ state in the lowest multiplet (or any $S$-wave multiplet) would provide direct evidence of dynamics lying outside the most restrictive diquark models.} The addition of a nonzero orbital-excitation quantum number $L$ is now straightforward. Since the intrinsic parity factor $(-1)$ for an antiquark appears twice, the parity eigenvalue of the full state is just given by the usual spatial factor $(-1)^L$. All $S$-wave, $D$-wave, {\it etc.}\ states therefore have $P \! = \! +$, and all $P$-wave, $F$-wave, {\it etc.}\ states have $P \! = \! -$. Starting with the $S$-wave ``core'' states $X^\prime_0$, $Z^\prime$, and $X_2$ of Eqs.~(\ref{eq:Swavediquark}), one invokes the usual angular momentum addition rules to produce states of good total $J$ (indicated by a superscript ``($J$)'', using the notation developed in Ref.~\cite{Lebed:2017min}). 
Explicitly, the 7 $P$-wave $c\bar c c\bar c$ states, accompanied by their $J^{PC}$ eigenvalues, are \begin{eqnarray} & & X^{\prime \, (1)}_{0 \, P} \, (1^{--}) , \ Z^{\prime \, (0)}_P (0^{-+}) , \ Z^{\prime \, (1)}_P (1^{-+}) , \ Z^{\prime \, (2)}_P (2^{-+}) , \nonumber \\ & & X^{(1)}_{2 \, P} \, (1^{--}) , \; \ X^{(2)}_{2 \, P} \, (2^{--}) , \ X^{(3)}_{2 \, P} \, (3^{--}) . \end{eqnarray} For completeness, we note that the $D$-wave, $F$-wave, {\it etc.}\ multiplets each contain precisely 9 $c\bar c c\bar c$ states. In particular, the $\Sigma^+_g(1D)$ multiplet is the lowest one to contain a $1^{++}$ state, $X_{2 \, D}^{(1)}$. \section{Mass Hamiltonian} \label{sec:MassOps} The full mass spectrum of all states in the dynamical diquark model is computed by the following procedure: First, a particular BO potential $\Gamma$ ($= \! \Sigma^+_g$, $\Pi_u$, {\it etc.}) that gives rise to a multiplet of states [$\Sigma^+_g(1P)$, $\Pi_u(2P)$, {\it etc.}] is specified. The corresponding potentials $V_\Gamma (r)$ have been computed numerically on the lattice~\cite{Juge:1997nc,Juge:1999ie,Juge:2002br,Morningstar:2019, Capitani:2018rox}. One specifies a diquark mass $m_{\delta,\bar\delta}$ (or in the case of pentaquarks, a color-triplet {\em triquark\/} mass as well), and solves the resulting Schr\"{o}dinger equation for this Hamiltonian $H_0$ numerically~\cite{Giron:2019bcs},\footnote{In some cases the BO potentials mix, leading to coupled Schr\"{o}dinger equations that require a more involved numerical solution technique.} giving rise to a multiplet-average mass eigenvalue $M_0(nL)$ for particular radial ($n$) and orbital ($L$) quantum numbers attached to the particular BO potential $\Gamma$. In this paper we are interested only in the $\Sigma^+_g$ potential, and primarily in the levels within the lowest multiplets $\Sigma^+_g(1S)$, $\Sigma^+_g(1P)$, and $\Sigma^+_g(2S)$.
The next step is to identify and compute fine-structure corrections to the spectrum of each such multiplet. In the dynamical diquark model the dominant spin-dependent, isospin-independent operator is taken to be the spin-spin coupling between quarks in the diquark, and between the antiquarks in the antidiquark. In the case of $Q\overline Q q\bar q^\prime$ states (where $q, q^\prime \! \in \! \{ u,d \}$) the model also includes a spin-dependent, isospin-dependent operator that mimics the form present in pion exchange. The analysis of the $\Sigma^+_g(1S)$ multiplet of $c\overline c q\bar q^\prime$ states in Ref.~\cite{Giron:2019cfc} uses a Hamiltonian consisting only of $H_0$ and the 2 operators thus described: \begin{eqnarray} H & = & H_0 + 2 \kappa_{qQ} ({\bf s}_q \! \cdot \! {\bf s}_Q + {\bf s}_{\bar q^\prime} \! \cdot \! {\bf s}_{\bar Q}) + V_0 \, {\bm \tau}_q \! \cdot \! {\bm \tau}_{\bar q^\prime} \; {\bm \sigma}_q \! \cdot \! {\bm \sigma}_{\bar q^\prime} \, , \hspace{1em} \label{eq:Ham} \end{eqnarray} where of course $Q \! = \! c$, and $\kappa_{qQ}$ is assumed to be isospin-symmetric. This very simple Hamiltonian is used to great effect in Ref.~\cite{Giron:2019cfc}, where it provides a natural explanation for the $1^{++}$ $X(3872)$ being the lightest observed state in the $\Sigma^+_g(1S)$ multiplet and for the appearance of the preferential decay patterns $Z_c(3900) \! \to \! J/\psi$ and $Z_c(4020) \! \to \! h_c$. In the intermediate case of $c\bar c s\bar s$ states in Ref.~\cite{Giron:2020qpb} as well as in the all-heavy case $Q\overline Q Q\overline Q$ considered here (or more generally, $Q_1 \overline Q_2 Q_3 \overline Q_4$), the isospin-dependent term $V_0$ is absent. In addition, the coefficients $\kappa_{qQ}$, $\kappa_{sQ}$, and $\kappa_{QQ}$ refer to spin couplings within diquarks containing increasingly heavy quarks, and therefore the diquarks are expected to be increasingly spatially compact. 
Since the fundamental quark spins thus interact at increasingly close range, one may expect the numerical size of these couplings to increase for heavier quark combinations, a point to which we return in Sec.~\ref{sec:Num}. The $S$-wave Hamiltonian for $Q \overline Q Q \overline Q$ therefore contains only one new parameter, \begin{equation} H = H_0 + 2 \kappa_{QQ} ({\bf s}_Q \! \cdot \! {\bf s}_Q + {\bf s}_{\bar Q} \! \cdot \! {\bf s}_{\bar Q}) \, , \label{eq:Swavecccc} \end{equation} where the two factors of ${\bf s}_Q$ and of ${\bf s}_{\bar Q}$ are each understood to apply to a separate heavy quark. The eigenvalues of $H$ are trivially computed in the basis of good diquark spin: \begin{equation} M = M_0 + \kappa_{QQ} \left[ s_\delta (s_\delta + 1) + s_{\bar\delta} (s_{\bar\delta} + 1) - 3 \right] \, . \end{equation} Since, as noted above, $s_\delta \! = \! s_{\bar\delta} \! = \! 1$ in any state for which diquarks have negligible coupling to the color-sextet channel, we immediately obtain a strong result: {\em The 3 states of each $\Sigma^+_g(nS)$ multiplet, $0^{++}$, $1^{+-}$, and $2^{++}$, are degenerate in this model, with a common mass eigenvalue given by} \begin{equation} M(nS) = M_0 + \kappa_{QQ} \, , \label{eq:SwaveMass} \end{equation} where of course both $M_0$ and $\kappa_{QQ}$ may vary with the radial excitation number $n$. The measurement of nonzero mass splittings between these three states would therefore provide direct evidence that the quarks within different diquarks have nonnegligible spin-spin couplings between them.\footnote{This result is parametrically apparent from the first equations of Sec.~IIB in Ref.~\cite{Berezhnoy:2011xn} (setting their $\kappa_+ \! = \! 0$). However, since all spin-spin couplings are numerically comparable in their model, this feature was not commented upon there.} Turning now to $L \! > \!
0$ states, the new operators appearing in the Hamiltonian are pure orbital [{\bf L}$^2$, which is the same for all states in the $\Sigma^+_g(nL)$ multiplet and therefore provides a contribution to $M_0$], spin-orbit, and tensor operators. Both of the latter operators are considered in Ref.~\cite{Giron:2020fvd} for $P$-wave $c\bar c q\bar q^\prime$ states. The spin-orbit operator in this model appears as \begin{eqnarray} \Delta H_{LS} & = & V_{LS} \; {\bf L} \cdot \! \left( {\bf s}_\delta + {\bf s}_{\bar\delta} \right) = V_{LS} \, {\bf L} \cdot {\bf S}\, , \\ \nonumber \label{eq:SpinOrbit} \end{eqnarray} where $S$ is the total spin carried by the quarks [the state subscripts in Eqs.~(\ref{eq:Swavediquark}), or 1 for $Z^{(\prime)}$], which trivially gives the matrix elements \begin{equation} \Delta M_{LS} = \frac{V_{LS}}{2} [J(J+1)-L(L+1)-S(S+1)] \, . \end{equation} Note that according to Eq.~(\ref{eq:SpinOrbit}) the model treats all four quarks on the same footing, each interacting with the same total {\bf L} operator since the individual diquarks are assumed to have no internal excitation. Thus, only one separation coordinate (${\bf r}_\delta - {\bf r}_{\bar\delta}$) and only one orbital angular momentum operator {\bf L} is relevant.\footnote{Alternate $c\bar c c\bar c$ tetraquark models ({\it e.g.}, Refs.~\cite{Badalian:1985es,Liu:2020eha}) have been presented in which all four quarks and their 3 relative separations are significant for a full description of the state.} The final operator in the model for $L \! > \! 0$ states is the tensor coupling $S_{12}$ between the $\delta$-$\bar\delta$ pair, defined by \begin{equation} \label{eq:TensorHam} \Delta H_T = V_T \, S_{12} \, , \end{equation} where \begin{equation} \label{eq:Tensor} S_{12} \equiv 3 \, {\bm \sigma}_1 \! \cdot {\bm r} \, {\bm \sigma}_2 \! \cdot {\bm r} / r^2 - {\bm \sigma}_1 \! \cdot {\bm \sigma}_2 \, .
\end{equation} ${\bm \sigma}$ here and below denotes twice the canonically normalized spin operator of the full entity coupling to the tensor force. In the study of $P$-wave $c\bar c q\bar q^\prime$ states in Ref.~\cite{Giron:2020fvd} the tensor operator is assumed to originate as an analogue to the corresponding operator in nucleon-nucleon interactions arising from pion exchange, and therefore ${\bm \sigma}$ couples only to the light quarks within $\delta$ and $\bar\delta$, just as for the spin-spin $V_0$ operator in Eq.~(\ref{eq:Ham}). The assumption of coupling only to the light quarks rather than to the full $\delta,\bar\delta$ as units is viable in the dynamical diquark model because again, the diquarks are {\em not\/} treated as completely pointlike. Nevertheless, the alternative hypothesis of coupling the isospin-dependent spin-spin and tensor operators to $\delta,\bar\delta$ as units was also studied in Refs.~\cite{Giron:2019cfc,Giron:2020fvd} and found to be incompatible with known phenomenology [{\it e.g.}, in predicting a degenerate $I \! = \! 1$ partner to the $X(3872)$, which is known not to exist]. In the all-heavy case one not only expects that $\delta,\bar\delta$ are more compact than in the $Q\overline Q q\bar q^\prime$ case, but also notes that the privileged position of light quarks with respect to isospin no longer occurs. In this case, the spin operators ${\bm \sigma}$ in the tensor operator of Eq.~(\ref{eq:Tensor}) refer to the full $QQ$ or $\overline Q \, \overline Q$ diquark spins. The matrix elements in that case are computed in Appendix~A of Ref.~\cite{Giron:2020fvd}: \begin{widetext} \begin{eqnarray} \label{eq:S12general} \left< L',S',J \right| S_{12} \left| L,S,J \right> & = & (-1)^{S+J} \! \sqrt{30[L][L'][S][S']} \left\{ \begin{array}{ccc} J & S' & L' \\ 2 & L & S \end{array} \right\} \! \left( \!
\begin{array}{ccc} L' & 2 & L \\ 0 & 0 & 0 \end{array} \! \right) \! \left\{ \! \begin{array}{ccc} s_\delta & s_{\bar\delta} & S \\ s_{\delta^\prime} & s_{\bar\delta^\prime} & S' \\ 1 & 1 & 2 \end{array} \! \right\} \! \left< s_{\delta^\prime} || \bm{\sigma}_1 || s_\delta \right> \left< s_{\bar\delta^\prime} || \bm{\sigma}_2 || s_{\bar\delta} \right> , \nonumber \\ \end{eqnarray} \end{widetext} where $[j] \! \equiv \! 2j \! + \! 1$. The reduced matrix elements of the angular momentum generators are given by \begin{equation} \label{eq:Jreduced} \left< j^\prime || \, {\bf j} \, || \, j \right> = \sqrt{j(2j+1)(j+1)} \, \delta_{j^\prime j} \, . \end{equation} The tensor operator of Eq.~(\ref{eq:Tensor}) does not change individual diquark spins [as is evident from Eq.~(\ref{eq:Jreduced})], and vanishes if $s_\delta \! = \! 0$ or $s_{\bar\delta} \! = \! 0$ [as is evident from the $9j$ symbol in Eq.~(\ref{eq:S12general})]. It does however allow the total quark spin $S$ to change, as well as the orbital excitation $L$. In summary, the full Hamiltonian of the dynamical diquark model for all-heavy states $Q\overline Q Q \overline Q$ (and with small modifications, for general all-heavy states $Q_1 \overline Q_2 Q_3 \overline Q_4$) is given by the sum of Eqs.~(\ref{eq:Swavecccc}), (\ref{eq:SpinOrbit}), and (\ref{eq:TensorHam}): \begin{eqnarray} H & = & H_0 + 2 \kappa_{QQ} ({\bf s}_Q \! \cdot \! {\bf s}_Q + {\bf s}_{\bar Q} \! \cdot \! {\bf s}_{\bar Q}) + V_{LS} \, {\bf L} \cdot {\bf S} + V_T \, S_{12}^{(\delta \bar\delta)} . \nonumber \\ \label{eq:FullHam} \end{eqnarray} Only the first two terms are required for $\Sigma^+_g(nS)$ states, while the latter two terms are needed for $L \! > \! 0$ states.
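These matrix elements are simple enough to cross-check with a computer-algebra package. The sketch below is our own code (not the authors'), using SymPy's Wigner-symbol routines in the standard conventions; with $\bm\sigma$ equal to twice the spin operator, the reduced matrix element for a spin-1 diquark is $\left< 1 || \bm{\sigma} || 1 \right> \! = \! 2\sqrt{6}$:

```python
# Sketch (ours): evaluate Delta M_LS and the diagonal tensor coefficients
# Delta M_T of Eq. (S12general) with SymPy, fixing both diquark spins to 1
# and using <1||sigma||1> = 2*sqrt(6) (sigma = twice the spin operator).
from sympy import sqrt, Rational, simplify
from sympy.physics.wigner import wigner_3j, wigner_6j, wigner_9j

def delta_m_ls(J, L, S):
    """Coefficient of V_LS from <L.S> = [J(J+1) - L(L+1) - S(S+1)]/2."""
    return Rational(J*(J + 1) - L*(L + 1) - S*(S + 1), 2)

def delta_m_t(J, L, S, s_d=1, s_db=1):
    """Diagonal (L'=L, S'=S) coefficient of V_T from Eq. (S12general)."""
    red = 2*sqrt(s_d*(s_d + 1)*(2*s_d + 1))   # <s||sigma||s>
    me = ((-1)**(S + J)
          * sqrt(30*(2*L + 1)**2*(2*S + 1)**2)
          * wigner_6j(J, S, L, 2, L, S)
          * wigner_3j(L, 2, L, 0, 0, 0)
          * wigner_9j(s_d, s_db, S, s_d, s_db, S, 1, 1, 2)
          * red**2)
    return simplify(me)

# Two entries of Table I: X2^(1) (J=1, L=1, S=2) and Z'^(0) (J=0, L=1, S=1)
print(delta_m_ls(1, 1, 2), delta_m_t(1, 1, 2))  # -3  -28/5
print(delta_m_ls(0, 1, 1), delta_m_t(0, 1, 1))  # -2  -8
```

The remaining diagonal $S \! \neq \! 0$ entries of Table~\ref{tab:MassParams} follow in the same way.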
The matrix elements ({\it i.e.}, mass eigenvalues) for the 3 $S$-wave states are degenerate and are given in Eq.~(\ref{eq:SwaveMass}), while those for the $7$ $P$-wave states are presented in Table~\ref{tab:MassParams}. They are listed in a particular order that recognizes another interesting feature of this model: {\em If $V_{LS} \! \gg \! V_T$, then the $P$-wave states fill an equal-spaced multiplet.} Assuming that $V_{LS} \! > \! 0$ (as occurs in Ref.~\cite{Giron:2020fvd}) means that the states in Table~\ref{tab:MassParams} may be expected to appear in order of increasing mass. This ordering almost precisely matches the corresponding (unmixed) numbers in Ref.~\cite{Liu:2020eha}, despite the fact that the latter calculation includes not only tensor terms, but also couplings between all of the quarks.\footnote{In their full calculation, Ref.~\cite{Liu:2020eha} also includes color-sextet combinations.} The only $\Sigma^+_g (1P)$ states degenerate in $J^{PC}$ are the $1^{--}$ pair $X_2^{(1)}$ and $X^{\prime \, (1)}_0$. In that case, for $V_T \! \neq \! 0$ the states form a $2 \! \times \! 2$ mass matrix whose diagonal values are given in Table~\ref{tab:MassParams}, and whose off-diagonal element is \begin{equation} \Delta M_{X_2^{(1)} \mbox{-} X^{\prime \, (1)}_0} = +\frac{8}{\sqrt{5}} V_T \, . \label{eq:Mix} \end{equation} \begin{table}[h] \caption{Mass eigenvalues of the 7 $\Sigma^+_g(nP)$ states, which assume the simple forms $M \! = \! M_0 \! + \! \kappa_{QQ} \! + \! \Delta M_{LS} \! + \! \Delta M_T$. 
The two $1^{--}$ states $X_2^{(1)}, X^{\prime \, (1)}_0$ also have an off-diagonal mixing term given by Eq.~(\ref{eq:Mix}).} \label{tab:MassParams} \centering \setlength{\extrarowheight}{1.2ex} \begin{tabular}{rcrr} \hline\hline State & $J^{PC}$ & $\Delta M_{LS}$ & $\Delta M_T$ \\[0.5ex] \hline $X_2^{(1)}$ & $ 1^{--}$ & $-3V_{LS}$ & $-\frac{28}{5} V_T$ \\ $Z^{\prime \, (0)}$ & $ 0^{-+}$ & $-2V_{LS}$ & $-8V_T$ \\ $Z^{\prime \, (1)}$ & $ 1^{-+}$ & $-V_{LS}$ & $+4V_T$ \\ $X_2^{(2)}$ & $ 2^{--}$ & $-V_{LS}$ & $+\frac{28}{5} V_T$ \\ $X_0^{\prime \, (1)}$ & $ 1^{--}$ & $0 V_{LS}$ & $0V_T$ \\ $Z^{\prime \, (2)}$ & $ 2^{-+}$ & $+V_{LS}$ & $-\frac 4 5 V_T$ \\ $X_2^{(3)}$ & $ 3^{--}$ & $+2V_{LS}$ & $-\frac 8 5 V_T$ \\[0.5ex] \hline\hline \end{tabular} \end{table} \section{Numerical Analysis} \label{sec:Num} LHCb analyzes the results of their observations~\cite{Aaij:2020fnh} by providing fits to two model scenarios: \renewcommand{\labelenumi}{\Roman{enumi}.} \begin{enumerate} \item $X(6900)$ has $m \! = \! 6905 \! \pm \! 11$~MeV and $\Gamma \! = \! 80 \! \pm \! 19$~MeV\@. The second resonance, hereinafter labeled $X(6500)$, lies at $6490 \! \pm \! 15$~MeV.\footnote{This value is not stated in Ref.~\cite{Aaij:2020fnh}, but rather is estimated by us using their Fig.~3(b).} The mass splitting between these states is $\Delta m_{\rm I} \! = \! 415 \! \pm \! 19$~MeV\@. \item $X(6900)$ has $m \! = \! 6886 \! \pm \! 11$~MeV and $\Gamma \! = \! 168 \! \pm \! 33$~MeV\@. The second resonance, hereinafter labeled $X(6740)$, has $m \! = \! 6741 \! \pm \! 6$~MeV and $\Gamma \! = \! 288 \! \pm \! 16$~MeV\@. The mass splitting between these states is $\Delta m_{\rm II} \! = \! 145 \! \pm \! 15$~MeV\@. \end{enumerate} We now show that the scenario of Model~II appears to support a much more favorable interpretation within the dynamical diquark model.
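For orientation, the splittings quoted in each scenario follow directly from the central mass values (a trivial check of our own; the uncertainties are ignored here):

```python
# Central-value check (ours) of the quoted peak splittings in the two
# LHCb fit scenarios; uncertainties are omitted.
masses_I = {"X(6900)": 6905, "X(6500)": 6490}   # MeV, Model I
masses_II = {"X(6900)": 6886, "X(6740)": 6741}  # MeV, Model II

dm_I = masses_I["X(6900)"] - masses_I["X(6500)"]
dm_II = masses_II["X(6900)"] - masses_II["X(6740)"]
print(dm_I, dm_II)  # 415 145
```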
For this analysis we first assume that $X(6900)$ is not a $1S$ state, because it would then lie 700~MeV above the $J/\psi$-pair threshold, which would represent an astonishing mass gap for the appearance of the lowest $c\bar c c\bar c$ resonances. Similar conclusions appear in Refs.~\cite{Liu:2020eha,Wang:2020ols,Jin:2020jfc,Yang:2020rih, Becchi:2020uvq,Lu:2020cns,Chen:2020xwe}. We discuss the fate of the $1S$ states in our model later in this section; the subsequent multiplets in order of increasing mass turn out to be $1P$, $2S$, $1D$, $2P$, and $2D$, as confirmed below. The next required input of the analysis is a reliable value of the internal diquark spin-spin coupling $\kappa_{cc}$ appearing in Eqs.~(\ref{eq:Swavecccc})--(\ref{eq:SwaveMass}). The closest available analogue to a $c\bar c c\bar c$ state is found among $c\bar c s\bar s$ candidates such as $X(4140)$, which have been analyzed using this model very recently in Ref.~\cite{Giron:2020qpb}. In that work, $\kappa_{cs}$ is found to be quite large (114.2~MeV) compared to the fit value for $\kappa_{cq}$ or $\kappa_{bq}$ (17.9--22.5~MeV). We observed in Ref.~\cite{Giron:2020qpb} that this pattern is explained by the diquark coupling being strongly dependent upon the lighter quark flavor ($\kappa_{cs}$ {\it vs.}\ $\kappa_{cq}$) and much less sensitive to the heavy-quark flavor ($\kappa_{cq}$ {\it vs.}\ $\kappa_{bq}$). We argued that the $s$ quark, being much heavier than $u$ or $d$, has less Fermi motion within $\delta$, permitting $\delta$ to be substantially more compact and thus enhancing the strength of spin couplings within it. Therefore, it is reasonable to assume that the ($cc$) diquark has a similarly large spin-spin coupling (and possibly even larger, if $s$ is insufficiently heavy to reach the point of flavor independence for the lighter quark in $\delta$).
Hence, for all states in this fit we take the spin-spin coupling to be \begin{equation} \kappa_{cc} = 114.2 \ {\rm MeV} \, . \label{eq:KappaValue} \end{equation} Note from Eq.~(\ref{eq:SwaveMass}) or Table~\ref{tab:MassParams} that such a large value of $\kappa_{cc}$ leads to the interesting consequence of predicting $M_0$, and hence the diquark mass $m_\delta$, to be rather smaller than in fits from other works. We now possess sufficient information to study $S$-wave multiplet masses, as well as $P$-wave multiplet masses, ignoring for the moment the spin-orbit and tensor terms. Two natural assignments for $X(6900)$ may be considered: as a $\Sigma_g^{+}(1P)$ or as a $\Sigma_g^{+}(2S)$ state. One then calculates for each case the mass splittings to lower multiplets, in order to confirm whether one or both of these assignments matches the mass splittings $\Delta m_{\rm I}$ and/or $\Delta m_{\rm II}$ between peaks from LHCb's Model~I or II, respectively. First we investigate the possibility that $X(6900)$ is a $\Sigma_g^{+}(1P)$ state. Since the $J/\psi$ pair has $C \! = \! +$, Table~\ref{tab:MassParams} suggests that the lightest allowed candidate (assuming $V_{LS}, V_T \! > \! 0$, as is used below) is $Z^{\prime \, (0)} (0^{-+})$. To be quantitative, we adopt the numerical results obtained from the $P$-wave $c\bar c q\bar q$ states in Ref.~\cite{Giron:2020fvd}. Specifically, we use values obtained from Cases~3 and 5 of Ref.~\cite{Giron:2020fvd} for $V_{LS}$ and $V_T$, which are \begin{equation} V_{LS}=42.9\;\mathrm{MeV}, \; V_T=5.5\;\mathrm{MeV}, \label{eq:NumFit1} \end{equation} and \begin{equation} V_{LS}=49.0\;\mathrm{MeV}, \; V_T=3.8\;\mathrm{MeV}, \label{eq:NumFit2} \end{equation} respectively. These cases were deemed in Ref.~\cite{Giron:2020fvd} to be the ones most likely to accurately represent the true $P$-wave $c\bar c q\bar q^\prime$ spectrum. Their application to the $c\bar c c\bar c$ system deserves some discussion.
The spin-orbit term in this model connects two separated heavy diquarks in either case [($cq$) or ($cc$)], and therefore we assume the size of the coupling $V_{LS}$ to depend upon the source only through its spin and not its flavor content, so long as the diquarks are heavy. The tensor term, on the other hand, is an entirely different matter. In Ref.~\cite{Giron:2020fvd} the tensor operator was chosen to couple only to light-quark spins [see the discussion below Eq.~(\ref{eq:TensorHam})], while the $c\bar c q\bar q^\prime$ analogue to the form of Eq.~(\ref{eq:TensorHam}) used here for $c\bar c c\bar c$ was found to be phenomenologically irrelevant. We therefore take as our final assumption that $V_T$ for $c\bar c c\bar c$ is numerically no larger than the $V_T$ values obtained from $c\bar c q\bar q^\prime$. Using the values for $\kappa_{cc}, V_{LS}, V_T$ in Eqs.~(\ref{eq:KappaValue})--(\ref{eq:NumFit2}), one then needs only the mass expressions in Table~\ref{tab:MassParams} and Eqs.~(\ref{eq:SwaveMass}) and (\ref{eq:Mix}). Fixing the $Z^{\prime \, (0)}$ mass eigenvalue to the (Model~I) $X(6900)$ mass, we implement the Schr\"{o}dinger equation-solving numerical techniques applied to lattice-calculated potentials, as described in Ref.~\cite{Giron:2019bcs}. We thus obtain \begin{equation} M_0(1P) = 6931.3 \ {\rm MeV \ and} \ 6954.0 \ {\rm MeV} \, , \end{equation} using the inputs of Eqs.~(\ref{eq:NumFit1}) and (\ref{eq:NumFit2}), respectively.\footnote{The variation of these particular eigenvalues with the lattice potentials obtained in Refs.~\cite{Juge:1997nc, Juge:1999ie,Juge:2002br,Morningstar:2019,Capitani:2018rox} amounts to only about 0.07~MeV\@. 
The specific values presented here use Ref.~\cite{Morningstar:2019}.} Further computing $M_0(1S)$ and $M_0(2S)$ in the same calculation, we obtain the $M_0$ mass differences \begin{eqnarray} \Delta m_{1P - 1S} & = & +343.3 \ {\rm MeV} \, , \nonumber \\ \Delta m_{1P - 2S} & = & -156.9 \ {\rm MeV} \, , \label{eq:1PFit} \end{eqnarray} using Eqs.~(\ref{eq:NumFit1}). The corresponding values obtained using Eqs.~(\ref{eq:NumFit2}) are hardly changed, being +343.2~MeV and $-156.7$~MeV, respectively. In comparison with the LHCb results, the first of Eqs.~(\ref{eq:1PFit}) is too small to match Model~I ({\it i.e.}, $\Delta m_{1P - 1S} \! < \! \Delta m_{\rm I}$), especially since $M_0(1P)$ lies rather higher than the $Z^{\prime \, (0)}$ mass we fix to $X(6900)$, while the second has the right magnitude but the wrong sign to match Model~II ({\it i.e.}, $\Delta m_{\rm II} \! \approx \! -\Delta m_{1P - 2S}$), since we predict that $2S$ states lie above $1P$ states. We therefore conclude that the assignment of $X(6900)$ as a $\Sigma_g^+(1P)$ state is heavily disfavored in the dynamical diquark model. We therefore turn to the alternate possibility that $X(6900)$ is one of the states in the multiplet $\Sigma_g^{+}(2S)$ (which again, are degenerate in this model). Then using Eqs.~(\ref{eq:SwaveMass}), (\ref{eq:KappaValue}), and the Model-II mass value, we obtain \begin{equation} M_0(2S) = 6771.8 \ {\rm MeV} \, . \label{eq:True6900} \end{equation} Once again implementing the techniques developed in Ref.~\cite{Giron:2019bcs}, we calculate the $M_0$ mass differences \begin{eqnarray} \Delta m_{2S - 1P} & = & 160.4 \ {\rm MeV} \, , \nonumber \\ \Delta m_{2S - 1S} & = & 505.7 \ {\rm MeV} \, . \end{eqnarray} In this case we observe that the latter mass splitting is too large to agree with Model~I ({\it i.e.}, $\Delta m_{2S - 1S} \! > \Delta m_{\rm I}$), but the former agrees very well with Model~II ({\it i.e.}, $\Delta m_{2S - 1P} \! \approx \! \Delta m_{\rm II}$). 
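The bookkeeping behind this comparison is compact enough to verify directly (our own arithmetic, using $\kappa_{cc}$ from Eq.~(\ref{eq:KappaValue}) and the $M_0$ values quoted in the text, at the endpoints corresponding to the potential of Ref.~\cite{Morningstar:2019}):

```python
# Arithmetic check (ours) of the 2S assignment: M_0(2S) is the Model-II
# X(6900) mass minus kappa_cc, and the multiplet splittings then follow
# from the M_0(1P) and M_0(1S) values quoted in the text.
kappa_cc = 114.2               # MeV, Eq. (KappaValue)
m_X6900_II = 6886.0            # MeV, LHCb Model II central value

M0_2S = m_X6900_II - kappa_cc
M0_1P, M0_1S = 6611.4, 6266.1  # MeV, quoted in the text

print(round(M0_2S, 1))          # 6771.8, Eq. (True6900)
print(round(M0_2S - M0_1P, 1))  # 160.4 ~ Delta m_II = 145 +/- 15
print(round(M0_2S - M0_1S, 1))  # 505.7 >> Delta m_I = 415 +/- 19
```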
Therefore, assuming that LHCb's Model~II is confirmed to be the correct interpretation of the data, we find that $X(6900)$ is favored in the dynamical diquark model to be a $\Sigma_g^{+}(2S)$ state and $X(6740)$ a $\Sigma_g^{+}(1P)$ state. Concluding from these calculations that $X(6900)$ is indeed a $\Sigma_g^{+}(2S)$ state with $M_0(2S)$ given by Eq.~(\ref{eq:True6900}), the corresponding diquark masses are computed to be \begin{equation}\label{diquark_mass_2S} m_\delta=m_{\bar\delta}=3126.4 \mbox{-} 3146.4\;\mathrm{MeV} , \end{equation} which is only slightly larger than $m_{J/\psi}$. Using this value of $m_\delta$, we further obtain \begin{eqnarray} M_0(1S) & = & 6264.0 \mbox{-} 6266.1\;\mathrm{MeV}, \nonumber \\ M_0(1P) & = & 6611.4\;\mathrm{MeV}, \nonumber \\ M_0(1D) & = & 6860.5 \mbox{-} 6862.4\;\mathrm{MeV}, \nonumber \\ M_0(2P) & = & 7010.8 \mbox{-} 7013.0 \; \mathrm{MeV} . \end{eqnarray} The variation here arises from using the differing lattice results of Refs.~\cite{Juge:1997nc,Juge:1999ie,Juge:2002br,Morningstar:2019, Capitani:2018rox}. The prediction for $M_0(1S)$ deserves special discussion, because the expected spatial size of a $1S$ state according to this model is calculated to be $\langle r \rangle \! \approx \! 0.3$~fm, the same magnitude as (or even smaller than) that of the $J/\psi$. In this scenario all 4 of the quarks have comparable spatial separation, a configuration that runs afoul of the original separated-diquark motivation of the dynamical diquark model. At present, the LHCb data in the $\sim$~6300~MeV mass region are not yet sufficiently resolved to discern particular structures, so it will be interesting to see how well the model works even in situations for which it is expected to fail.
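The $M_0(nL)$ values above come from numerically solving the radial Schr\"{o}dinger equation in the lattice-extracted $\Sigma^+_g$ potential, following Ref.~\cite{Giron:2019bcs}. The finite-difference sketch below is our own illustration of that step only: it substitutes a generic Cornell-type potential with placeholder parameters (the values of \texttt{a}, \texttt{b}, and the grid are assumptions for illustration, not fit results from this work), so its eigenvalues should only be expected to land at the right general scale.

```python
# Illustrative only: multiplet-average masses M_0(nL) from the lowest
# eigenvalue of the reduced radial equation
#   -u''/(2 mu) + [V(r) + L(L+1)/(2 mu r^2)] u = E u,
# with a generic Cornell stand-in V(r) = -a/r + b*r replacing the lattice
# Sigma_g^+ potential.  All parameters below are placeholders.
import numpy as np

hbar_c = 197.327        # MeV fm
m_delta = 3136.0        # MeV, diquark mass (placeholder)
mu = m_delta / 2.0      # reduced mass of the delta-deltabar pair
a = 0.4 * hbar_c        # Coulomb-like coefficient (MeV fm), placeholder
b = 1100.0              # linear (string-tension) slope (MeV/fm), placeholder

N, rmax = 1500, 8.0     # uniform radial grid on (0, rmax] fm
r = np.linspace(rmax / N, rmax, N)
h = r[1] - r[0]

def m0(L):
    """Lowest eigenvalue for orbital momentum L, shifted by 2*m_delta."""
    V = -a / r + b * r + hbar_c**2 * L * (L + 1) / (2.0 * mu * r**2)
    main = hbar_c**2 / (mu * h**2) + V              # finite-difference diagonal
    offd = -hbar_c**2 / (2.0 * mu * h**2) * np.ones(N - 1)
    H = np.diag(main) + np.diag(offd, 1) + np.diag(offd, -1)
    return 2.0 * m_delta + np.linalg.eigvalsh(H)[0]

print(m0(0), m0(1))  # 1S and 1P multiplet averages (illustrative scale only)
```

With these stand-in parameters the $1P$--$1S$ spacing comes out at the few-hundred-MeV scale of Eq.~(\ref{eq:1PFit}), but the absolute values carry no significance.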
Having identified $X(6900)$ with one of the (degenerate) $\Sigma_g^{+}(2S)$ states, we use the values of $V_{LS}$ and $V_T$ given by Eqs.~(\ref{eq:NumFit1}) and (\ref{eq:NumFit2}) and the expressions in Table~\ref{tab:MassParams} and Eq.~(\ref{eq:Mix}) to compute the full $\Sigma^+_g(1P)$ spectrum. The results are presented in Table~\ref{tab:BestFit}. One notes that the variation in mass for any given state between the two fits [excepting $X_2^{(2)}(2^{--})$] is $\alt \! 13$~MeV, and that the ordering of the states in mass is nearly identical to the one expected parametrically from the equal-spacing rule identified in Table~\ref{tab:MassParams}, even though the equal-spacing itself is numerically not so well supported. Since the values of $V_T$ in Eqs.~(\ref{eq:NumFit1})--(\ref{eq:NumFit2}) are based upon a naive assumption, the equal-spacing rule might turn out to be much better in practice if the actual $V_T$ value is smaller. An interesting feature of LHCb Model~II is the enormous width $\Gamma \! = \! 288$~MeV given for $X(6740)$ (twice the width of $\rho$, for example). From Table~\ref{tab:BestFit} we note that all $P$-wave states that could decay to a $J/\psi$ pair ($C \! = \! +$) have masses consistent with appearing within this wide peak, meaning that the broad $X(6740)$ peak could easily turn out to be a superposition of several narrower $1P$-state peaks. \begin{table}[h] \caption{Mass eigenvalues (in MeV) of the 7 $\Sigma^+_g(1P)$ states, using the expressions given in Table~\ref{tab:MassParams} and Eq.~(\ref{eq:Mix}). 
$M_0(1P)$ is obtained from the same numerical fit identifying $X(6900)$ as a $\Sigma^+_g(2S)$ state (specifically, using the lattice simulation of Ref.~\cite{Morningstar:2019}), $\kappa_{cc}$ is given in Eq.~(\ref{eq:KappaValue}), and the columns represent two different choices for the $V_{LS}$ and $V_T$ values.} \label{tab:BestFit} \centering \setlength{\extrarowheight}{1.2ex} \begin{tabular}{rcrr} \hline\hline State & $J^{PC}$ & Eq.~(\ref{eq:NumFit1}) & Eq.~(\ref{eq:NumFit2}) \\ [0.5ex] \hline $X_2^{(1)}$ & $ 1^{--}$ & $6563.70$ & $6556.22$ \\ $Z^{\prime \, (0)}$ & $ 0^{-+}$ & $6595.79$ & $6597.19$ \\ $Z^{\prime \, (1)}$ & $ 1^{-+}$ & $6704.69$ & $6691.79$ \\ $X_2^{(2)}$ & $ 2^{--}$ & $6713.49$ & $6687.87$ \\ $X_0^{\prime \, (1)}$ & $ 1^{--}$ & $6727.98$ & $6726.68$ \\ $Z^{\prime \, (2)}$ & $ 2^{-+}$ & $6764.09$ & $6771.55$ \\ $X_2^{(3)}$ & $ 3^{--}$ & $6802.59$ & $6817.51$ \\[0.5ex] \hline\hline \end{tabular} \end{table} Finally, a notable enhancement in the LHCb data appears slightly above $7200$~MeV\@. This value coincides with the $\Xi_{cc}$-$\overline{\Xi}_{cc}$ threshold 7242.4~MeV, at which sufficient energy becomes available to create the lightest hadronic state containing both $c\bar c c\bar c$ and an additional light $q\bar q$ valence pair, namely, the baryon pair $(ccq)(\bar c \bar c \bar q)$. Above this threshold one expects no further narrow resonances decaying dominantly to $J/\psi$ pairs, since new open-flavor decay channels become kinematically available. This prediction is particularly easy to see in the dynamical diquark model; it is the point at which the gluon flux tube connecting the $\delta$-$\bar\delta$ pair gains enough energy to fragment through $q\bar q$ pair creation, and was anticipated in Ref.~\cite{Brodsky:2014xia} for $c\bar c q\bar q$ states to occur at the $\Lambda_c^+$-$\bar{\Lambda}_c^-$ threshold.
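The only nontrivial step behind Table~\ref{tab:BestFit} is the $2 \! \times \! 2$ diagonalization of the two $1^{--}$ states; as a check of our own, the rounded central inputs quoted in the text reproduce the first column to within $\sim \! 0.01$~MeV:

```python
# Check (ours) of the two 1-- eigenvalues in the first column of Table II:
# diagonal entries from Table I, off-diagonal element (8/sqrt(5)) V_T from
# Eq. (Mix), using the rounded central inputs quoted in the text.
import math

M0_1P, kappa_cc = 6611.4, 114.2   # MeV
V_LS, V_T = 42.9, 5.5             # MeV, Eq. (NumFit1)

base = M0_1P + kappa_cc
m_X2 = base - 3.0 * V_LS - 28.0 / 5.0 * V_T  # X2^(1) diagonal entry
m_X0p = base                                 # X0'^(1): no L.S or tensor shift
off = 8.0 / math.sqrt(5.0) * V_T             # mixing element, Eq. (Mix)

mean = 0.5 * (m_X2 + m_X0p)
shift = math.hypot(0.5 * (m_X0p - m_X2), off)
print(round(mean - shift, 2), round(mean + shift, 2))  # ~6563.71 ~6727.99
```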
Interestingly, we find the $2D$ states to have a common multiplet mass of \begin{equation} M_0(2D)=7213.3 \mbox{-} 7216.7\;\mathrm{MeV}, \end{equation} meaning that the enhancement in the data above 7200~MeV may be a combination of some $2P$ and/or $2D$ $c\bar c c\bar c$ states [not forgetting the large mass offset due to $\kappa_{cc}$ from Eqs.~(\ref{eq:FullHam}) and (\ref{eq:KappaValue})] with threshold effects in the form of rescattering of $\Xi_{cc}$-$\overline{\Xi}_{cc}$ pairs to $J/\psi$ pairs. In addition, the $c\bar c c\bar c$ states in higher BO multiplets than $\Sigma^+_g$ ({\it i.e.}, analogues to hybrid mesons) would also occur at or above the $\Xi_{cc}$-$\overline{\Xi}_{cc}$ threshold. \section{Conclusions} \label{sec:Concl} The recent LHCb discovery of resonance-like structures in the $J/\psi$-pair spectrum opens a whole new arena for hadronic spectroscopy. The $X(6900)$ represents the first clear candidate for a multiquark exotic hadron that contains only heavy valence quarks. This paper and multiple prior works referenced here suggest that numerous other such states, carrying a variety of quantum numbers, await discovery as experimental observations are refined. Furthermore, the all-heavy sector is particularly interesting from a theoretical point of view, since the molecular binding paradigm popular for light-flavor-containing multiquark states like $X(3872)$ is much less viable (particularly for states that lie so far above the $J/\psi$-pair threshold), leaving a diquark-antidiquark binding structure as the leading candidate. This paper has explored the basic spectroscopic properties of the all-heavy 4-quark states $Q_1 \overline Q_2 Q_3 \overline Q_4$ in the dynamical diquark model. Its defining features for this system are (1) the dominance of the color-triplet binding between $\delta \! \equiv \! Q_1 Q_3$ and between $\bar\delta \! \equiv \!
\overline Q_2 \overline Q_4$, which for the identical-quark cases $c\bar c c\bar c$ or $b\bar b b\bar b$ leads to the absence of $1^{++}$ $S$-wave states; (2) the dominance of spin-spin couplings within $\delta$ and within $\bar\delta$, but not between quarks and antiquarks, which leads to the degeneracy of all 3 states in each $Q\overline Q Q\overline Q$ $S$-wave multiplet; and (3) a spin-orbit coupling for $L \! > \! 0$ that couples to all quarks with the same strength. If the strength of the tensor coupling is substantially smaller than that of the spin-orbit coupling, then the 7 states of the $P$-wave $Q\overline Q Q\overline Q$ multiplet exhibit a remarkable equally spaced spectrum. These features clearly provide simple and immediate tests of various aspects of the model. We have also produced numerical predictions of the full spectrum for the $1S$, $1P$, and $2S$ multiplets, and multiplet-averaged masses for $1D$, $2P$, and $2D$, using lattice-calculated confining potentials, the spin-spin coupling obtained from $c\bar c s\bar s$ candidate states, and the spin-orbit and tensor couplings obtained from $P$-wave $c\bar c q\bar q^\prime$ states, all within this model. In attempting different assignments for the $X(6900)$, we find that the only one compatible with the model identifies $X(6900)$ with a state or states within the $2S$ multiplet, with the lower structure at about 6740~MeV from LHCb's ``Model II'' being some combination of the $C \! = \! +$ states within the $1P$ multiplet.
Evidence for the $1S$ multiplet is obscure, possibly because it is predicted to occur at masses at which the $\delta$-$\bar\delta$ structure is no longer viable, since all interquark distances become comparable not far above the $J/\psi$-pair threshold. Meanwhile, $1D$ states could easily be obscured by the large $X(6900)$ peak, and some $2P$ and $2D$ states are predicted to lie at or above the $\Xi_{cc}$-$\overline{\Xi}_{cc}$ threshold (which coincides with a structure in the LHCb results), at which point the $c\bar c c\bar c$ states are expected to become much wider. The resolution of the newly observed $J/\psi$-pair structures (possibly into several peaks) and the measurement of specific $J^{PC}$ quantum numbers will contribute immeasurably to an understanding of the structure of these states. Future studies of other charmonium-pair structures (including $\chi_c$, $h_c$, and $\eta_c$) will be no less valuable in this regard. \begin{acknowledgments} This work was supported by the National Science Foundation (NSF) under Grant No.\ PHY-1803912. \end{acknowledgments} \bibliographystyle{apsrev4-1}
\section{Introduction and Discussion} Symmetry has proved to be of utmost importance in unveiling the remarkable beauty hidden in $\mathcal{N}=4$ super Yang-Mills. Two examples illustrate this rather nicely: \begin{enumerate} \item The study of the planar spectrum of this gauge theory is mapped to the study of an integrable model \cite{first}. Particle excitations in this model transform under an extended $SU(2|2)$ symmetry algebra which completely constrains the $2$-body S-matrix \cite{second}, the main ingredient in the computation of the exact spectrum of the theory.\footnote{In an integrable theory, finding the S-matrix is the main step towards the computation of the exact spectrum which follows a (not completely) standard recipe \cite{Zamolodchikov:1989cf}, carried out in the AdS/CFT context in \cite{first,second,BES,third} and references therein.} \item A second example, closely related to the subject of this paper, concerns planar scattering amplitudes in $\mathcal{N}=4$ SYM. Both at weak and strong coupling, these amplitudes possess an anomalous {\it dual conformal symmetry} \cite{dualpapers,dualpapers2,dualpapers3,AM, BM}. For $4$ and $5$ particles, this anomalous symmetry fixes the form of the Maximally Helicity Violating (MHV) amplitudes at any value of the 't Hooft coupling in terms of the cusp anomalous dimension \cite{dualpapers2}, which can be computed from Integrability \cite{BES}. For more than $5$ particles, this symmetry is still very constraining. It fixes the form of the MHV planar amplitudes to be given by the BDS ansatz \cite{BDS} times a remainder function that depends only on the dual conformal cross-ratios and becomes trivial in collinear limits. Surprisingly, it is the mysterious dual superconformal symmetry that is under control at loop level, whereas the usual superconformal symmetry is understood at tree level only \cite{tree1,tree2,Bargheer:2009qu,Korchemsky:2009hm}.
\end{enumerate} There seem to be two deep connections between these two points. First, the usual conformal symmetry as well as the dual conformal symmetry of $\mathcal{N}=4$ SYM form a Yangian \cite{Ystrong1,BM,Ystrong2,tree2} -- the structure of higher charges arising in integrable models. Second, as emphasized in \cite{Bargheer:2009qu}, the superconformal generators, which act on the generating function of scattering amplitudes, are expected to share many features with the length-changing higher charges that act on single-trace operators. These appear in the context of computing the planar spectrum of the theory. The possibility of making such nice connections precise in the future, as well as the remarkable success observed so far, entitles us to high expectations. At tree level, partial scattering amplitudes were shown to be invariant under superconformal transformations \cite{Witten,tree2}. However, due to the so-called holomorphic anomaly \cite{CSW}, superconformal transformations fail to annihilate the tree level amplitudes whenever two adjacent momenta become collinear. As was shown in \cite{Bargheer:2009qu}, that failure can be corrected by adding a term to the superconformal generators that splits one massless particle into two collinear ones. As we will show in this paper, already at tree level, there are additional points in phase space where superconformal transformations fail to annihilate a tree level amplitude. For example, the points where the amplitude factorizes on a multi-particle pole and, in addition, the on-shell internal momentum becomes collinear to one of the external momenta. At these points, the failure can be corrected by adding a term to the superconformal generators that, at tree level, joins two disconnected amplitudes with a shared collinear momentum into a single connected amplitude. The resulting symmetry is therefore not a symmetry of an individual amplitude but, instead, a symmetry of the tree level S-matrix.
That correction of the tree level generators might seem like a picky detail -- after all, for generic momenta, the tree level amplitudes are indeed symmetric. However, at loop level this detail becomes of major importance. The reason is that internal loop momenta scan over all of phase space and, in particular, over the points where they become collinear to an external momentum. As a result, the superconformal generators fail to annihilate {\it any} loop amplitude. That is clearly \textit{not} a picky detail! A further complication of loop amplitudes relative to tree level ones is the presence of IR divergences and the resulting need for regularization. These IR divergences arise from the region of integration where an internal momentum becomes collinear to an external one. Therefore, the IR divergences and the failure of superconformal invariance are closely related. The goal of this paper is to promote the superconformal symmetry to loop level. We will assume that the MHV diagrammatic expansion \cite{CSW} holds at any loop order, which, although very plausible, has only been checked in the literature at one loop \cite{Brandhuber:2004yw}. Under that assumption, we will find a correction to the generators such that they annihilate the full S-matrix. The analysis will first be done without a regulator. Such an analysis is only formal because ${\cal N}=4$ YM is conformal and therefore does not have asymptotic particles. As a result, the probability for scattering some fixed number of massless particles into another fixed number of massless particles is zero. Technically, the corresponding perturbative analysis is plagued with IR divergences. The S-matrix is, however, non-trivial in the sense that there are physical observables constructed from the would-be S-matrix. These are IR safe quantities such as inclusive cross sections\footnote{These usually involve an external probe. See \cite{Bork:2009ps} for a recent study of these in ${\cal N}=4$ SYM.}.
In perturbation theory, the only known way to construct these observables is from the S-matrix elements which, for massless particles, are not good observables.\footnote{Ideally, one would like to have an alternative formulation of physical IR safe observables that does not pass through the IR unsafe S-matrix. Identifying these observables in the T-dual variables \cite{AM} may help in finding such a formulation.} To overcome that problem, one first introduces an IR regulator. The resulting IR regulated theory has an S-matrix from which the desired observables are computed. A good IR regulator is a regulator that drops out of IR safe physical quantities, leaving a consistent answer behind. We will argue that these physical observables are superconformal covariant. That is, we will show that no violation of superconformal invariance emerges from their (regulated) S-matrix element building blocks. A similar issue arises in the study of dual conformal invariance of planar amplitudes. There, in analogy with conformal symmetry, loop amplitudes are formally dual conformal covariant. However, any regularization results in a dual conformal anomaly \cite{dualpapers2,Elvang:2009ya}. The anomaly, however, can be recast as a correction to the dual conformal generators such that they act on the regulator.\footnote{As far as we know, that point was not illustrated in the literature. We have checked it explicitly in two different regularizations. One is the regularization in which the external particles are given a small mass. The other is the Alday-Maldacena regularization \cite{AM}, where scattered particles are charged under a small gauge group on the Coulomb branch of the large $N$ one.
Note added to this footnote: As this paper was being completed, the work \cite{Alday:2009zm} appeared, where this scenario was checked at one loop in the Coulomb Branch regularization; see \cite{Alday:2008yw} for a related discussion at strong coupling.} In other words, the corrected generators are anomaly free. Planar IR safe quantities are therefore dual superconformal covariant, as no violation of dual conformal invariance arises from their S-matrix element building blocks. The situation with conformal transformations is more involved. The reason is that the terms in the superconformal generators that, at tree level, join two disconnected amplitudes can also act on a connected part of an amplitude. When they do, a new loop is formed within the connected amplitude. It is therefore not enough to regulate the amplitudes; we must also regulate the superconformal generators. In the last section, we will regulate the amplitudes and repeat the calculation, now identifying a regularized form of the generators. The calculation will be done in a new regularization in which the external particles are given a small mass. The regulated amplitude is then computed using the CSW prescription, now treating the external particles in the same way as the internal ones. We expect that planar dual-conformal invariance can be proved to all orders in perturbation theory using the same techniques. The paper is organized as follows: In section \ref{sec:tree} we review the superconformal invariance of tree level amplitudes \cite{tree2,Bargheer:2009qu}. In section \ref{sec:fincuts} we find the unregularized form of the generators by demanding that they annihilate finite unitarity cuts of the MHV one loop generating function. In section \ref{sec:formal} we show that the generators found in section \ref{sec:fincuts} formally annihilate the unregularized one loop MHV generating function.
In section \ref{all} we use the tree level results and a conjectured loop level CSW generalization to formally derive the superconformal invariance of the full S-matrix at any loop order. In section \ref{sec:reg} we will propose a new ``holomorphic anomaly friendly'' regularization named {\it sub MHV regularization}. It will allow us to keep the symmetry, found in the previous sections, under control and read off a regularized form of the generators. Appendix \ref{AppA} contains some technical details relevant to section \ref{sec:formal}. \section{Superconformal Invariance of Tree Level Amplitudes}\la{sec:tree} This section is a quick review of tree level partial scattering amplitudes in ${\cal N}=4$ SYM, their generating function, and superconformal invariance. We assume the reader is familiar with the spinor helicity formalism and only set notation and highlight a few essential points. On-shell states in ${\cal N}=4$ are most conveniently represented in spinor helicity superspace. The light-like momentum is decomposed into a product of a positive chirality spinor $\lambda^a$ and a negative chirality spinor $\widetilde\lambda^{\dot a}$ through $k^{a\dot a}=\lambda_a\widetilde\lambda_{\dot a}=s\lambda^a\bar\lambda^{\dot a}$. Here $\bar\lambda=(\lambda)^*=s\widetilde\lambda$ and $s$ is the energy sign of $k$. We will work in $(+,-,-,-)$ signature so that $s={\rm sign}(k^0)={\rm sign}(k_0)$. In superspace, the scattering amplitude of $n$ particles is a function $$ {\cal A}_n(\lambda_1,\widetilde\lambda_1,\eta_1,\dots,\lambda_n,\widetilde\lambda_n,\eta_n)~, $$ where in our convention all particles are out-going and $\eta^A$, $A=1,\dots,4$ is a superspace coordinate transforming in the fundamental representation of the $SU(4)$ R-symmetry. Amplitudes can be classified by their helicity charge \begin{equation} h(n,k)=2n-4k~. \la{helicity} \end{equation} Here $4k$ counts the number of $\eta$'s.
It ranges between 8 for MHV amplitudes and $4n-8$ for $\overline{\rm MHV}$ amplitudes. The MHV partial amplitudes take the simple form \cite{MHVpapers} \begin{equation}\label{MHVgen} \mathcal{A}^{(0)\, \rm MHV}_n(1,2,\dots,n)=\delta^4(P_n)A_n^{(0)\,\rm MHV}(1,2,\dots,n) =i\frac{\delta^8(Q_n)\delta^4(P_n)}{\<12\>\<23\>\dots\<n1\>} ~, \end{equation} where $P_n=\sum_{j=1}^n k_j$ and $Q_n=\sum_{j=1}^n \lambda_j^a \eta_j^B$. Here, to keep notation simple, we omitted a factor of $g^{n-2}(2\pi)^4$. The dependence on the YM coupling $g$ will be restored below when defining the generating functional, whereas the $(2\pi)^4$ factor is systematically removed from vertices and propagators in our conventions. The superconformal generators that will be most relevant for us are the special conformal generator and its fermionic counterparts, the special conformal supercharges\footnote{We use Latin letters to denote the superconformal generators. Vive la r\'esistance.} \begin{equation}\la{Gen} S_{a A}=\sum_{i=1}^n {\d\over\d\lambda^a_i}{\d\over\d\eta_i^A}~,\qquad\bar S_{\dot a}^{A} = \sum_{i=1}^n \eta_i^A{\d\over\d\widetilde\lambda^{\dot a}_i}~,\qquad K_{a\dot a} =\sum_{i=1}^n {\d\over\d\lambda^a_i}{\d\over\d\widetilde\lambda^{\dot a}_i}~. \end{equation} They satisfy the commutation relation \begin{equation} \{S_{aA},\bar S_{\dot a}^B\}=\delta_A^BK_{a\dot a}~. \la{SSK} \end{equation} Furthermore, the $S$ generator can be obtained from $\bar S$ by conjugation. For this reason, in the next sections, we will mostly focus on the special conformal supercharge $\bar S$. The superconformal generators annihilate all tree level amplitudes provided the external momenta are generic \cite{Witten,tree2}.
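The algebra (\ref{SSK}) can be checked numerically in a stripped-down toy model: one particle, one bosonic pair $(\lambda,\widetilde\lambda)$, and a single Grassmann coordinate $\eta$ with $\eta^2=0$, so a superfield $f=a+\eta b$ is just a pair of ordinary functions $(a,b)$. The sketch below (ours, not part of the original derivation; all function names are illustrative) implements $S$, $\bar S$ and $K$ as finite-difference operators on such pairs and verifies $\{S,\bar S\}=K$ on a test superfield.

```python
import math

# Toy model of {S, Sbar} = K: one particle, one bosonic pair (lam, lamt)
# and a single Grassmann coordinate eta (eta^2 = 0).
# A superfield f = a(lam, lamt) + eta * b(lam, lamt) is stored as (a, b).
#   S    = d/dlam d/deta   : (a, b) -> (d_lam b, 0)
#   Sbar = eta d/dlamt     : (a, b) -> (0, d_lamt a)
#   K    = d/dlam d/dlamt  : (a, b) -> (d_lam d_lamt a, d_lam d_lamt b)

H = 1e-4  # step for central finite differences

def d_lam(f):
    return lambda x, y: (f(x + H, y) - f(x - H, y)) / (2 * H)

def d_lamt(f):
    return lambda x, y: (f(x, y + H) - f(x, y - H)) / (2 * H)

zero = lambda x, y: 0.0

def S(field):
    a, b = field
    return (d_lam(b), zero)

def Sbar(field):
    a, b = field
    return (zero, d_lamt(a))

def K(field):
    a, b = field
    return (d_lam(d_lamt(a)), d_lam(d_lamt(b)))

def anticommutator(field):
    u = S(Sbar(field))
    v = Sbar(S(field))
    return (lambda x, y: u[0](x, y) + v[0](x, y),
            lambda x, y: u[1](x, y) + v[1](x, y))

# test superfield
a = lambda x, y: x**2 * y + math.sin(x * y)
b = lambda x, y: x * y**2 + math.cos(x)
lhs = anticommutator((a, b))
rhs = K((a, b))
pt = (0.7, -0.3)
print(lhs[0](*pt) - rhs[0](*pt), lhs[1](*pt) - rhs[1](*pt))
```

Both differences vanish up to finite-difference error, mirroring how the Grassmann cross terms cancel in the anticommutator.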
However, due to the presence of the so-called holomorphic anomaly \cite{CSW} \begin{equation}\la{holomorphicAnomaly} {\d\over\d\bar\lambda^{\dot a}} \frac{1}{\< \lambda,\mu\>} = \pi\bar\mu_{\dot a} \delta^2(\<\lambda,\mu\>) \end{equation} the action of the generators (\ref{Gen}) on the tree level amplitude results in extra terms supported at the points in phase space where two adjacent momenta are collinear. At these points, the generators (\ref{Gen}) fail to annihilate an individual tree level amplitude. Physically, the reason is that two on-shell collinear massless particles and a single particle carrying their momenta and quantum numbers are indistinguishable and are mixed by the generators (\ref{Gen}). Therefore, a more suitable object to act on is the generating function of all connected amplitudes whose ordered momenta form the same polygon shape in momentum space.\footnote{That is, the same polygon Wilson loop dual in the sense of \cite{AM}.} For simplicity, one may consider instead the generating function of all connected tree level amplitudes \cite{Bargheer:2009qu} \begin{equation} \mathcal{A}^{(0)}_c[J] = \sum_{n=4}^\infty{g^{n-2}\over n} \int d^{4|4}\Lambda_1\, \dots d^{4|4}\Lambda_n\sum_{s_j=\pm} {\rm Tr} \( J(\Lambda_1^{s_1})\ldots J(\Lambda_n^{s_n}) \) \, \mathcal{A}_n^{(0)}(\Lambda_1^{s_1},\ldots,\Lambda_n^{s_n})~, \la{genfun} \end{equation} where $\Lambda^\pm=(\lambda,\pm\bar\lambda,\eta)$ parametrizes the null momenta and polarizations and the $(0)$ superscript stands for tree level. The sum over $s_j=\pm $ accounts for positive and negative energy particles. The $n$-particle partial amplitude is then given by $$ {\cal A}_n^{(0)}(\Lambda_1,\dots,\Lambda_n)=\left.{1\over N_c^n}{\rm Tr}\({\delta\over\delta J(\Lambda_n)}\dots{\delta\over\delta J(\Lambda_1)}\){\cal A}_c^{(0)}[J]\right|_{J=0}~.
$$ When acting on the generating function, the special conformal supercharges take the form \beqa\la{bare} ({S}_{{1\to 1}})_{aA}&=&\sum_{s=\pm} \int d^{4|4}\Lambda\, {\rm Tr}\[\partial_a\partial_A J(\Lambda^s)\, \check{J}(\Lambda^s)\]~,\\(\bar{{S}}_{{1\to 1}})_{\dot a}^A&=&-\sum_{s=\pm}s\int d^{4|4}\Lambda\, \eta^A \, {\rm Tr}\[\bar\partial_{\dot a}J(\Lambda^s) \check{J}(\Lambda^s)\] \,,\nonumber \eeqa where $$\check{J}(\Lambda)=\frac{\delta}{\delta J(\Lambda)}~,\qquad s\bar\d_{\dot a}=s{\d\over\d\bar\lambda^{\dot a}}={\d\over\d\widetilde\lambda^{\dot a}}\,, $$ $\partial_A=\partial/\partial \eta^A$ and the $(1\to1)$ subscript indicates that they preserve the number of external particles. In \cite{Bargheer:2009qu} it was shown that a corrected version of the generators does annihilate the generating function. That is \begin{equation}\la{treecorr} \({\bar S}_{1\to 1}+g{\bar S}_{1\to 2}\)\mathcal{A}^{(0)}_c[J]=0~, \end{equation} where ${\bar S}_{1\to 2}$ splits a particle into two collinear ones. For the special conformal supercharges, these are given by\footnote{Here written in a slightly different way than in \cite{Bargheer:2009qu}, using for example $\bar\lambda\eta'=\bar\lambda_1\eta_2-\bar\lambda_2\eta_1$.
Moreover, the overall sign $s$ in $\bar S$ seems to have been overlooked in \cite{Bargheer:2009qu}.} \beqa \la{treegen} &&(\bar S_{1\to2})_{\dot a}^A=+2\pi^2{\sum_{s,s_1,s_2=\pm}}\!\!\!\!\!'~s\int d^{4|4}\Lambda d^4\eta'd\alpha\bar\lambda_{\dot a}{\eta'}^A{\rm Tr}\[\check J(\Lambda^s)\hat J(\Lambda_1^{s_1}) J(\Lambda_2^{s_2})\]~, \\ &&(S_{1\to 2})_{Aa}=-2\pi^2{\sum_{s,s_1,s_2=\pm}}\!\!\!\!\!'~\int d^{4|4}\Lambda d^4\eta'd\alpha\delta^{(4)}(\eta')\lambda_a\d'_A{\rm Tr}\[\check J(\Lambda^s)\hat J(\Lambda_1^{s_1}) J(\Lambda_2^{s_2})\]~, \eeqa where $$ \hat J(\Lambda)={1\over 2\pi}\int_0^{2\pi} d\varphi e^{2\varphi i}J(e^{i\varphi}\Lambda) $$ and $$ {\sum_{s,s_1,s_2=\pm}}\!\!\!\!\!'~=\sum_{s,s_1,s_2=\pm} \delta_{0,(s-s_1)(s-s_2)} $$ is a sum over the energy signs $s$, $s_1$ and $s_2$ such that $s\in\{s_1,s_2\}$. For $s_1=s_2$ \begin{equation}\label{collinearityportion} \begin{array}{l} \lambda_1=\lambda\sin\alpha\\ \lambda_2=\lambda\cos\alpha \end{array}~, \qquad \begin{array}{l} \eta_1=\eta\sin\alpha-\eta'\cos\alpha\\ \eta_2=\eta\cos\alpha+\eta'\sin\alpha \end{array}~,\qquad\alpha\in[0,{\pi\over 2}]~. \end{equation} The other two cases, where $s=s_1=-s_2$ and $s=s_2=-s_1$, are related to (\ref{collinearityportion}) by replacing $\sin(\alpha)\to{\rm sinh}(\alpha)$ and $\sin(\alpha)\to{\rm cosh}(\alpha)$, respectively. Moreover, the corrected generators were shown to close the same superconformal algebra (see \cite{Bargheer:2009qu} for details). At tree level, (\ref{treecorr}) was claimed to hold at {\it any} point in phase space \cite{Bargheer:2009qu}. As we will see in section 5, there are extra points in phase space where the holomorphic anomaly contributes. These are the points where the tree level amplitude factorizes on a multi-particle pole and an internal momentum is collinear to one of the neighboring momenta.
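The splitting (\ref{collinearityportion}) preserves both the momentum and the supermomentum of the parent particle, and also realizes the identity $\bar\lambda\eta'=\bar\lambda_1\eta_2-\bar\lambda_2\eta_1$ quoted in the footnote. The short numerical sketch below is our own check, not part of the original text: for the $s_1=s_2$ configuration we take $\widetilde\lambda_{1,2}=\widetilde\lambda\sin\alpha,\widetilde\lambda\cos\alpha$ (as follows from $k_i=\lambda_i\widetilde\lambda_i$ for collinear particles), and treat the $\eta$'s as ordinary numbers, which is harmless because the identities are linear in $(\eta,\eta')$.

```python
import math, random

random.seed(1)
rand2 = lambda: [random.uniform(-1.0, 1.0) for _ in range(2)]

# parent spinors; eta, etap stand in for two of the four SU(4) components
lam, lamt = rand2(), rand2()
eta, etap = rand2(), rand2()
alpha = random.uniform(0.1, 1.4)
s, c = math.sin(alpha), math.cos(alpha)

lam1,  lam2  = [s * x for x in lam],  [c * x for x in lam]
lamt1, lamt2 = [s * x for x in lamt], [c * x for x in lamt]
eta1 = [s * e - c * ep for e, ep in zip(eta, etap)]
eta2 = [c * e + s * ep for e, ep in zip(eta, etap)]

# momentum conservation: k1 + k2 = k, with k^{a adot} = lam^a lamt^adot
k_ok = all(abs(lam1[a] * lamt1[b] + lam2[a] * lamt2[b] - lam[a] * lamt[b]) < 1e-12
           for a in range(2) for b in range(2))
# supermomentum conservation: lam1 eta1 + lam2 eta2 = lam eta
q_ok = all(abs(lam1[a] * eta1[A] + lam2[a] * eta2[A] - lam[a] * eta[A]) < 1e-12
           for a in range(2) for A in range(2))
# footnote identity: lamt1 eta2 - lamt2 eta1 = lamt eta'
f_ok = all(abs(lamt1[b] * eta2[A] - lamt2[b] * eta1[A] - lamt[b] * etap[A]) < 1e-12
           for b in range(2) for A in range(2))
print(k_ok, q_ok, f_ok)
```

All three checks reduce to $\sin^2\alpha+\cos^2\alpha=1$, which is why the split is exact for any $\alpha$.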
Similarly to $\bar S_{1\to2}$ correcting $\bar S_{1\to1}$, these are accounted for by the inclusion of two new corrections $\bar S_{2\to1}$ and $\bar S_{3\to0}$. These, however, act (at tree level) on two or three disconnected tree level partial amplitudes, joining them into a single connected amplitude. Therefore, the object that is superconformal invariant is not the generating function of all connected partial amplitudes (\ref{genfun}), but instead the generating function of all partial amplitudes $$ {\mathbb S}^{\rm tree}[J]=\exp{{\cal A}_c^{(0)}[J]}\,, $$ connected and disconnected. That is the interacting part of the tree level S-matrix (see section 5 for more details). For example, the generator $\bar S_{2\to 1}$ reads $$ \bar S_{2\to1}=2\pi^2\sum_{s=\pm}s\int d^{4|4}\Lambda d^4\eta'd\alpha\bar\lambda\eta'{\rm Tr}\[J(\Lambda^s)\check J(\Lambda_1^s)\check J(\Lambda_2^s)\]~, $$ where $\Lambda_1$ and $\Lambda_2$ are given in (\ref{collinearityportion}). The corrected tree level symmetry is therefore \begin{equation}\la{coeetreesymmetry} \({\bar S}_{1\to 1}+g{\bar S}_{1\to 2}+g\bar S_{2\to1}+g\bar S_{3\to0}\){\mathbb S}^{\rm tree}[J]=0~. \end{equation} This structure generalizes to loop level \begin{equation}\la{coeeloopsymmetry} \({\bar S}_{1\to 1}+g{\bar S}_{1\to 2}+g\bar S_{2\to1}+g\bar S_{3\to0}\){\mathbb S}[J]=0~, \end{equation} where ${\mathbb S}=\exp{\cal A}_c[J]$ and \begin{equation} \mathcal{A}_c[J]= \sum_{n=4}^\infty \sum_{l=0}^\infty g^{2l}\mathcal{A}_n^{(l)}[J] \end{equation} is the connected generating function of scattering amplitudes. Here, $(l)$ stands for the number of loops and ${\cal A}^{(l)}_n[J]$ is defined as in (\ref{genfun}). Note in particular that the generators do not receive higher loop corrections. The full ``${\cal N}=4$ S-matrix'' is obtained from ${\mathbb S}$ by adding the forward amplitudes where some of the particles do not interact.
The quotation marks are to remind the reader that, before regularization, ${\cal N}=4$ SYM is conformal and therefore has no S-matrix. The correction of the formal relation (\ref{coeeloopsymmetry}) due to the IR regularization will be discussed in the last section. The aim of this paper is to show that indeed the tree level symmetry (\ref{coeetreesymmetry}) generalizes to the loop level (\ref{coeeloopsymmetry}). \section{Superconformal Invariance of One Loop Unitarity Cuts}\label{sec:fincuts} Unitarity cuts of an amplitude are physical observables that compute the total cross section in the corresponding channels \cite{Cutkosky:1960sp,cut2}. These are always less divergent than the full loop amplitude and therefore can provide finite, regularization-independent information. In this section we will compute the finite cuts of $\bar S_{1\to 1} {\cal A}$ at one loop and for $n$ final particles. By doing so, we will obtain an unregularized version of the generators and postpone the issue of regularization to later sections. We start by acting with the generator $\bar S_{1\to 1}$ on a finite cut of the one loop amplitude. To isolate the cut in a specific momentum channel $t_i^{[m]}=(k_i+\dots+k_{i+m-1})^2$, we consider the amplitude in the (unphysical) kinematical regime where $t_i^{[m]}>0$ and all other momentum invariants are negative. Without loss of generality, we assume that $i=1$ and the energy of $k_1+\dots+k_m$ is positive. In that kinematical regime, the discontinuity of the amplitude is computed by \begin{equation} \label{cut} \Delta_1^{[m]}{\cal A}\equiv {\cal A}(t_1^{[m]}+i0^+)-{\cal A}(t_1^{[m]}-i0^+)=2i\,{\rm Im}\,{\cal A}(t_1^{[m]}+i0^+)~.
\end{equation} For the one loop amplitude, the result is given by (see figure \ref{unitaritycut}) \beqa \label{cutamp} \Delta_1^{[m]}{\cal A}_n^{(1)}= (2\pi)^2 \int d^4l_1d^4l_2\delta^{(+)}(l_1^2)\delta^{(+)}(l_2^2)\int d^4\eta_{l_1}d^4\eta_{l_2} \mathcal{A}_L \mathcal{A}_R \eeqa where \beqa {\cal A}_L={\cal A}_{m+2}^{(0)}(l_1,l_2,\dots,m)~,\qquad {\cal A}_R={\cal A}_{n-m+2}^{(0)}(-l_2,-l_1,\dots,n)~,\qquad\eta_{-l_{1,2}}=-\eta_{l_{1,2}}~.\nonumber \eeqa The finite cuts are the ones in multi-particle channels $2<m<n-2$, and in this section we restrict our discussion to that range. \begin{figure}[t] \epsfxsize=7cm \centerline{\epsfbox{cut.eps}} \caption{ \small\small\textsl{The cut of the one loop amplitude in the $t_1^{[m]}$ channel.}}\label{unitaritycut} \end{figure} We now act on $\Delta_1^{[m]}{\cal A}_n^{(1)}$ with the generator $\bar S^A_{\dot a}$ given in (\ref{treegen}). In the following, we will omit the indices from $\bar S$ since they can be trivially re-introduced. First, note that $\bar S$ fails to annihilate the discontinuity only due to the holomorphic anomaly. To see this, we first define \begin{equation}\la{Baregen} \bar S_L=\sum_{i=1}^ms_i\eta_i\bar\d_i~,\qquad\bar S_R=\sum_{i=m+1}^ns_i\eta_i\bar\d_i~,\qquad\bar S_{l_1,l_2}=\sum_{i=1}^2\eta_{l_i}\bar\d_{l_i}=-\sum_{i=1}^2\eta_{-l_i}\bar\d_{l_i}~. \end{equation} Then \begin{eqnarray}\label{total} \bar S_{1\to 1}\Delta_1^{[m]}{\cal A}_n^{(1)}=(2\pi)^2\int dLIPS(l_1,l_2)\int d^4\eta_{l_1}d^4\eta_{l_2}\[\(\bar S_L+\bar S_R+\bar S_{l_1,l_2}\)-\bar S_{l_1,l_2}\] {\cal A}_L{\cal A}_R~. \end{eqnarray} If we ignore the anomaly, the term in parentheses does not contribute because $(\bar S_L+\bar S_{l_1l_2}){\cal A}_L=\bar S_R\, {\cal A}_L=0$ with similar expressions for ${\cal A}_R$.\footnote{Note that the internal momenta entering ${\cal A}_R$ are $-l_1$ and $-l_2$. These have negative energy.
However, since $\eta_{-l_{1,2}}=-\eta_{l_{1,2}}$, ${\cal A}_R$ is annihilated by the sum $S_R+S_{l_1,l_2}$ and not by the difference $S_R-S_{l_1,l_2}$.} The last term, outside the parentheses, also vanishes since it is a total derivative $$ \int d^4l\delta^{(+)}(l^2)\bar\d_l^{\dot a}f(l^{b\dot b})=\int_0^\infty dt\,t\int_{\widetilde\lambda=t\bar\lambda}\<\lambda_l,d\lambda_l\>[\widetilde\lambda_l,d\widetilde\lambda_l]{\d\over\d\widetilde\lambda_l}f((t\lambda_l^b)(\widetilde\lambda_l^{\dot b}))=0\,. $$ We conclude that $\bar S\Delta_1^{[m]}{\cal A}_n^{(1)}$ is nonzero only due to the holomorphic anomaly. Moreover, for non-collinear external momenta, it is supported on the region of integration where one of the internal momenta is collinear to one of the external momenta adjacent to the cut. For simplicity, we will only consider the case where the $n$ particle amplitude is MHV. In that case, both tree level sub-amplitudes in (\ref{cutamp}) are MHV. Using the tree level MHV generating function (\ref{MHVgen}), acting with $\bar S$ and picking the contribution from the holomorphic anomaly, we find \cite{Bena:2004xu,Korchemsky:2009hm} \begin{figure}[t] \epsfxsize=15cm \centerline{\epsfbox{Son1loop.eps}} \caption{ \small\small\textsl{The action of the superconformal generator $\bar S_ {1\to 1}^{(0)}$ on a one loop unitarity cut. The holomorphic anomalies set an internal momentum to be collinear to an external one, giving rise to an $(n+1)$-point tree level amplitude with two collinear particles.
We deduce that the correction to that generator must be of the form $\bar S^{(0)}_{2\to 1}$.}}\label{figSonloop} \end{figure} \begin{eqnarray}\label{barScut} \bar S_{1\to 1}\Delta_1^{[m]}{\cal A}_n^{(1)}= &-&i4\pi^3\[\eta_1P_L^2-\sum_{i=1}^m\eta_i\<i|\relax{\rm P\kern-0.7em /}_L|1]\]\widetilde\lambda_1{\<m,m+1\>\theta(l_1^0)\theta(s_1x_1)\over\<m|\relax{\rm P\kern-0.7em /}_L|1]\<m+1|\relax{\rm P\kern-0.7em /}_L|1]}{\cal A}_n^{(0)}\\ &+&i4\pi^3\[\eta_nP_L^2-\sum_{i=1}^m\eta_i\<i|\relax{\rm P\kern-0.7em /}_L|n]\]\widetilde\lambda_n{\<m,m+1\>\theta(l_n^0)\theta(s_nx_n)\over\<m|\relax{\rm P\kern-0.7em /}_L|n]\<m+1|\relax{\rm P\kern-0.7em /}_L|n]}{\cal A}_n^{(0)}\nonumber\\ &+&i4\pi^3\[\eta_mP_L^2-\sum_{i=1}^m\eta_i\<i|\relax{\rm P\kern-0.7em /}_L|m]\]\widetilde\lambda_m{\<n1\>\theta(l_m^0)\theta(s_mx_m)\over\<n|\relax{\rm P\kern-0.7em /}_L|m]\<1|\relax{\rm P\kern-0.7em /}_L|m]}{\cal A}_n^{(0)}\nonumber\\ &-&i4\pi^3\[\eta_{m+1}P_L^2-\sum_{i=1}^m\eta_i\<i|\relax{\rm P\kern-0.7em /}_L|m+1]\]\widetilde\lambda_{m+1}{\<n1\>\theta(l_{m+1}^0)\theta(s_{m+1}x_{m+1})\over\<n|\relax{\rm P\kern-0.7em /}_L|m+1]\<1|\relax{\rm P\kern-0.7em /}_L|m+1]}{\cal A}_n^{(0)}~,\nonumber \end{eqnarray} where $|i]$ stands for $\tilde \lambda_i=s_i \bar \lambda_i$ and \begin{equation}\label{xiPl} P_L=\sum_{i=1}^mk_i~,\qquad x_i={P_L^2\over 2k_i\cdot P_L}~,\qquad l_i=P_L-x_i k_i~. \end{equation} Notice that each line in (\ref{barScut}) has a clear origin, represented in figure \ref{figSonloop}. Namely, the first line comes from the holomorphic anomaly that sets $l_2$ and $\lambda_1$ to be collinear, i.e. it stems from the action of the superconformal generator on $1/\<1 l_2\>$.\footnote{The holomorphic anomaly that sets $l_1$ collinear to $l_2$ does not contribute after performing the Grassmann integration over $\eta_{l_1}$ and $\eta_{l_2}$.} The other three lines, from top to bottom, come from the action on $1/\<l_2 n\>$, $1/\<m l_1\>$ and $1/\< l_1 m+1\>$.
The relative signs originate from the sign difference between $1/\<\lambda_l \lambda_\alpha\>$ and $1/\<\lambda_\alpha \lambda_l\>$, where $l=l_1,l_2$ and $\alpha=1,m,m+1,n$. The two step functions in each term restrict the energy of the two internal on-shell momenta to flow from the left to the right. In the kinematical regime we consider here, these step functions are automatically satisfied. That is because $l_1+l_2=P_L$ is a positive energy time-like momentum. We chose to write these step functions explicitly because later they will be used for understanding the recipe for cutting $\bar S_{2\to 1}^{(0)}{\cal A}^{(0)}$ in a general kinematical regime. The calculation above is valid only for $2<m<n-2$. The cases $m=2,n-2$ deserve a more delicate treatment and will not be considered in this section. Next, we will show that the same expression (\ref{barScut}) is obtained from the $(n+1)$-point tree level amplitude, by the action of \begin{equation}\la{allgen} \bar S_{2\to 1}=2\pi^2\sum_{s=\pm}s\int d^{4|4}\Lambda d^4\eta'd\alpha\bar\lambda\eta'{\rm Tr}\[J(\Lambda^s)\check J(\Lambda_1^s)\check J(\Lambda_2^s)\]~, \end{equation} where \begin{equation}\label{Grassman} \begin{array}{l} \lambda_1=\lambda\sin\alpha\\ \lambda_2=\lambda\cos\alpha \end{array}, \qquad \begin{array}{l} \eta_1=\eta\sin\alpha-\eta'\cos\alpha \\ \eta_2=\eta\cos\alpha+\eta'\sin\alpha \end{array} \,. \end{equation} Notice that this is indeed the structure indicated by figure \ref{figSonloop}: we act on an $(n+1)$-point tree level amplitude rendering two of its legs collinear.
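The kinematics in (\ref{xiPl}) can be checked numerically: $l_1=P_L-x_1k_1$ is automatically on shell, since $l_1^2=P_L^2-2x_1\,k_1\cdot P_L=0$ by the definition of $x_1$, and $x_1$ equals the ratio $t_1^{[m]}/(t_1^{[m]}-t_2^{[m-1]})$ because $t_1^{[m]}-t_2^{[m-1]}=P_L^2-(P_L-k_1)^2=2k_1\cdot P_L$. The sketch below, with random massless momenta, is our own verification and not part of the derivation.

```python
import math, random

random.seed(2)

def null_momentum():
    # random massless momentum E (1, n_hat)
    E = random.uniform(0.5, 2.0)
    cth = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    sth = math.sqrt(1.0 - cth * cth)
    return [E, E * sth * math.cos(phi), E * sth * math.sin(phi), E * cth]

def dot(a, b):  # (+,-,-,-) signature, as in the paper
    return a[0] * b[0] - a[1] * b[1] - a[2] * b[2] - a[3] * b[3]

m = 4
ks = [null_momentum() for _ in range(m)]
PL = [sum(k[mu] for k in ks) for mu in range(4)]

x1 = dot(PL, PL) / (2.0 * dot(ks[0], PL))
l1 = [PL[mu] - x1 * ks[0][mu] for mu in range(4)]

# l1 is on shell, and x1 equals t1[m] / (t1[m] - t2[m-1])
PLm1 = [PL[mu] - ks[0][mu] for mu in range(4)]   # k2 + ... + km
print(dot(l1, l1), x1 - dot(PL, PL) / (dot(PL, PL) - dot(PLm1, PLm1)))
```

Both printed numbers vanish up to rounding, for any choice of the $m$ massless external momenta.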
Since in the previous section we restricted the one-loop $n$-point amplitude to be MHV, we have to show that \begin{equation}\label{toshow} \Delta_1^{[m]}\[\bar S_{1\to 1}{\cal A}_n^{(1)MHV}+\bar S_{2\to 1}^{+}{\cal A}_{n+1}^{(0)NMHV}\]=0~, \end{equation} where the number of $\pm$'s stands for the change in the helicity charge (\ref{helicity}) with multiplicity of two\footnote{In (\ref{helicity}), $n$ is the number of particles and $4k$ is the number of $\eta$'s. Hence removing a leg reduces the helicity by 2 and integrating over $\eta'$ increases the helicity by 4.}. There are three other terms that in principle could have appeared: $\bar S_{1\to 1}^{(1)}{\cal A}_n^{(0)MHV}$, $\bar S_{2\to 1}^-{\cal A}_{n+1}^{(0)MHV}$ and $\bar S_{2\to 1}^{+++}{\cal A}_{n+1}^{(0)N^2MHV}$, where $\bar S_{1\to 1}^{(1)}$ is a one loop correction of $\bar S_{1\to 1}$. The first one does not have a cut and the validity of (\ref{toshow}) means that $\bar S_{2\to 1}^-=\bar S_{2\to 1}^{+++}=0$. We would like to compare (\ref{barScut}) with the cut of $\bar S_{2\to 1}{\cal A}_{n+1}^{(0)}$. The collinear $(n+1)$-point amplitude is divergent. However, the cut of the generator is finite. That is because the divergent pieces of the collinear amplitude do not have a discontinuity and therefore drop out. The action of $\bar S_{2\to 1}$ on ${\cal A}_{n+1}^{(0)}$ produces a sum of terms in which one of the $n$ particles is replaced by two collinear adjacent particles. Only four of these terms have a discontinuity in the $t_1^{[m]}$ channel. These are the terms in which the particles adjacent to the cut are $\{1,m,m+1,n\}$; see figure \ref{figSonloop}. Notice that these contributions are indeed finite. For simplicity, we isolate from the action of $\bar S_{2\to 1}$ on ${\cal A}_{n+1}^{(0)}$ the term in which the two collinear momenta are collinear to $k_1$. The other terms are related to that by relabeling of the legs.
We label the two collinear legs $1'$ and $(n+1)'$ to distinguish them from their sum $1=1'+(n+1)'$ which becomes leg $1$ of the $n$-particle amplitude. Using (\ref{allgen}) we find that the cut of that term in the $t_1^{[m]}$ channel is \begin{eqnarray} && \nonumber \Delta_1^{[m]}\[\bar S_{2\to 1}{\cal A}_{n+1}^{NMHV}\]_1=2\pi^2\Delta_1^{[m]}\int d^4\eta' \eta' \widetilde \lambda_1 \int_0^1 \frac{dx}{\sqrt{x(1-x)}} {\cal A}_{n+1}^{NMHV}(1',\dots,(n+1)')\,, \end{eqnarray} where $x=\cos(\alpha)$. It is clear that we can move $\Delta_1^{[m]}$ freely into the integral and equivalently write \begin{eqnarray} &&\la{actionONcut} \[\bar S_{2\to 1}\Delta_1^{[m]}{\cal A}_{n+1}^{(0)NMHV}\]_1=2\pi^2\int d^4\eta' \eta' \widetilde \lambda_1 \int_0^1 \frac{dx}{\sqrt{x(1-x)}} \Delta_1^{[m]}{\cal A}_{n+1}^{(0)NMHV}(1',\dots,(n+1)')~\,. \end{eqnarray} Next, we express the tree level amplitude on the right hand side as a CSW sum\footnote{The same result can be obtained using BCFW \cite{BCFW} instead. Here, we chose to use CSW because it has a straightforward generalization to loop level which we will use in the next section.} \cite{CSW}, i.e. as a sum over MHV vertices connected by off-shell propagators. As the $(n+1)$-amplitude at hand is NMHV, any term in the CSW sum consists of two MHV vertices connected by a single propagator. In the kinematical regime we are working in, only one term has a discontinuity in the $t_1^{[m]}$ channel.
That is the term (see figure \ref{treecut}) \begin{figure}[t] \epsfxsize=7cm \centerline{\epsfbox{J21cut.eps}} \caption{\small\small\textsl{The term in the CSW sum of ${\cal A}^{(0)}_{n+1}$ that has a cut in $t_1^{[m]}=(k_1+k_2+\dots+k_m)^2$, when integrated over the collinearity portion of leg $1'$ and leg $(n+1)'$.}}\label{treecut} \end{figure} \beqa \Delta_1^{[m]}\!{\cal A}^{(0)}((1\!-\!x)1,2,\!\dots\!,n,x1) \la{almost}\!=\!\Delta_1^{[m]}\!\delta(P_n)\!\!\int \!\!d^4\eta_l {A^{(0)}((1\!-\!x)1,2,\!\dots\!,m,-l)A^{(0)}(l,m\!+\!1,\!\dots\!,n,x1)\over P_L^2-2xk_1\cdot P_L+i0}~, \eeqa where $$ (\lambda_l)_a=(P_L-xk_1)_{a\dot a}\chi^{\dot a} $$ and $\chi$ is an arbitrary fixed null vector. We have removed the subscripts $n+1$, $m+1$ and $n-m+2$ since they can be easily read off by counting the number of arguments of the corresponding amplitude. Using (\ref{cut}) and the relation \begin{equation} {i\over y+i0}-{i\over y-i0}=2\pi \delta(y)~. \la{delta} \end{equation} we simplify $\Delta_1^{[m]}\!{\cal A}^{(0)}((1\!-\!x)1,2,\!\dots\!,n,x1)$ in (\ref{almost}) to \beqa 2\pi i\delta(P_n){\rm sign}(P_L\cdot k_1)\delta(x-x_1)\int d^4\eta_l A^{(0)}((1-x)1,2,\dots,m,-l_1){x\over P_L^2}A^{(0)}(l_1,m+1,\dots,n,x1)~,\label{CSWterm} \eeqa where $x_1$ and $l_1$ are given in (\ref{xiPl})\footnote{Note that the dependence on $\chi$ has dropped out.}. In the kinematical regime we are working in, $P_L^2>0$. For $x_1\in[0,1]$ it means that ${\rm sign}(P_L\cdot k_1)=1$. Next, we plug (\ref{CSWterm}) back into (\ref{actionONcut}) \beqa &&\[\bar S_{2\to 1}\Delta_1^{[m]}\!{\cal A}_{n+1}^{(0)NMHV}\]_1\label{cutbarStree}\\&=&\!\!\!\!i4\pi^3{\widetilde \lambda_1\over P_L^2}\int_0^1 dx\delta(x-x_1)\sqrt{\!{x\over 1-x}}\!\int\!\! 
d^4\eta_l d^4\eta' \eta' A^{(0)}((1\!-\!x)1,2,\!\dots\!,m,-l)A^{(0)}(l,m\!+\!1,\!\dots\!,n,x1)\nonumber\\&=&\!\!\!\!i4\pi^3{\cal A}_n^{(0)}\[\eta_1P_L^2-\sum_{i=1}^m\eta_i\<i|\relax{\rm P\kern-0.7em /}_L|1]\]\widetilde\lambda_1{\<m,m+1\>\over\<m|\relax{\rm P\kern-0.7em /}_L|1]\<m+1|\relax{\rm P\kern-0.7em /}_L|1]}~,\nonumber \eeqa where in the last step we performed the Grassmannian integration using (\ref{Grassman}). Note that in the kinematical regime we are working in, $x_1=t_1^{[m]} /( t_1^{[m]}-t_2^{[m-1]})\in [0,1]$ is always inside the region of integration. The final result in \rf{cutbarStree} is exactly minus the first line of (\ref{barScut})! A summation over the three other terms corresponding to particles $m$, $m+1$ and $n$ reproduces (\ref{barScut}) and confirms (\ref{toshow}). For (\ref{toshow}) to hold in a general kinematical regime, we must reproduce the step functions in (\ref{barScut}). Physically, these step functions restrict the energy flow in the cut lines of figure \ref{unitaritycut} to flow from left to right. The $\theta(l_1^0)$ is reproduced by cutting the tree level propagator between the two MHV vertices $$ {1\over L^2}\quad\rightarrow\quad \delta^{(+)}(L^2)~. $$ The second step function, $\theta(s_1x_1)$, has to be associated with the procedure of cutting a leg connecting ${\cal J}_{2\to 1}$ to the amplitude. It restricts the energy component of the corresponding collinear particle to be positive (see Fig \ref{treecut}). We suggest that this is the general procedure for taking unitarity cuts of ${\cal J}_{n \to m} {\cal A}$. \section{Formal Superconformal Invariance of One Loop MHV Amplitudes}\la{sec:formal} In the previous section we demonstrated the superconformal invariance of unitarity cuts. In this section we will show that a formal invariance continues to hold for the full one loop MHV amplitude. The invariance will only be formal because some of the integrals we will consider are divergent.
This is, however, not the first time that non-trivial information has been obtained from formal manipulations of divergent integrals. For example, in \cite{Bern:1994zx} Bern, Dixon, Dunbar and Kosower computed the one loop MHV amplitudes of ${\cal N}=4$ SYM by formal manipulations of their unitarity cuts. What allowed them to do so was the independent knowledge that these amplitudes are given by a sum of box integrals. In our case the logic is different. That is, we will use these formal manipulations to define the superconformal generators. Then, in section \ref{sec:reg}, we will show that, up to a conformal anomaly, the structure survives regularization. We start by repeating the computation of $\bar S_{2\to 1}{\cal A}_{n+1}^{(0)}$ above but without taking its unitarity cut. That is, we formally remove the cut from (\ref{actionONcut}) \begin{equation}\label{Nocut} \[\bar S_{2\to 1}{\cal A}_{n+1}^{(0){\rm NMHV}}\]_1=2\pi^2\int d^4\eta' \eta' \widetilde \lambda_1 \int_0^1 \frac{dx}{\sqrt{x(1-x)}} {\cal A}_{n+1}^{(0){\rm NMHV}}(1',\dots,(n+1)') \end{equation} and represent the tree amplitude as a CSW sum \cite{CSW} \begin{equation}\label{CSWsum} {\cal A}_{n+1}^{(0){\rm NMHV}}(1,\dots,n+1)=-i\sum_{i=1}^{n+1}\sum_{m=2}^{n-1}\int d^4\eta_l\int {d^4L\over L^2}\widetilde{\cal A}_L^{(0){\rm MHV}}\widetilde{\cal A}_R^{(0){\rm MHV}}~, \end{equation} where \begin{eqnarray} \widetilde{\cal A}_L^{(0){\rm MHV}}&=&\delta^4(P_L+L) A_{m+1}^{(0){\rm MHV}}(l,i,\dots,i+m-1) \,, \la{ALRtree}\\ \widetilde{\cal A}_R^{(0){\rm MHV}}&=&\delta^4(P_R-L) A_{n-m+1}^{(0){\rm MHV}}(-l,i+m,\dots,i-1)~,\nonumber \end{eqnarray} and \begin{equation}\label{PLPRl} P_L=\sum_{r=i}^{i+m-1}k_r \,,\qquad P_R=\sum_{r=i+m}^{i-1}k_r \, ,\qquad l=L-y\chi~,\qquad y={P_L^2\over 2P_L\cdot\chi}~, \end{equation} where $\chi$ is an arbitrary null vector.
The only difference between the $\widetilde{\cal A}^{(0){\rm MHV}}$ and tree level MHV amplitudes ${\cal A}^{(0){\rm MHV}}$ is in the momentum conservation delta function, where the off-shell momentum $L$ enters rather than the on-shell momentum $l$.\footnote{Not only are the off-shell amplitudes $\widetilde{\cal A}_n^{(0){\rm MHV}}$ still annihilated by $\bar S_{1\to 1}$ for generic external momenta, but the correction $\bar S_{1\to 2}$ required to account for collinear external momenta also comes from the same holomorphic anomaly and is therefore trivially generalized to act on these amplitudes. The generators $\bar S_{1\to 1}$ and $\bar S_{1\to 2}$ generalized to act on these amplitudes will be given later in (\ref{offS11}) and (\ref{offS12}).} The superconformal generator in (\ref{Nocut}) sets two of the momenta in (\ref{CSWsum}) to be collinear; these two momenta can belong to different sub-amplitudes (one in $\mathcal{A}_L$, the other in $\mathcal{A}_R$) or they can be on the same side (both in $\mathcal{A}_L$ or both in $\mathcal{A}_R$). Terms where the two collinear momenta are on the same side vanish when plugged into (\ref{Nocut}). That is because these terms are proportional to $\<1'(n+1)'\>^{-1}$ whereas the Grassmann integral over $\eta'$ produces a factor of $\<1'(n+1)'\>^3$ (resulting in a total factor of $\<1'(n+1)'\>^2$). Of course (\ref{CSWsum}) can be simplified using $$ \int{d^4L\over L^2}\delta^4(P_L+L)\delta^4(P_R-L)={\delta^4(P_L+P_R)\over P_L^2} \,. $$ We will not do so here but instead express it as \cite{Brandhuber:2004yw} \begin{equation}\label{onshelldecomposition} \int{d^4L\over L^2}\delta^4(P_L+L)=\int \frac{dy}{y} \int d^4 l\, \delta(l^2)\,\delta^{(4)}(P_L+y \chi+l){\rm sign}(\chi\cdot l) \,.
\end{equation} Performing the Grassmann integrations over $\eta_l$ and $\eta'$, we get \beqa\nonumber \[\bar S_{2\to 1}{\cal A}_{n+1}^{(0){\rm NMHV}}\]_1&=&i2\pi^2\widetilde\lambda_1{\cal A}_n^{(0)}\sum_{m=2}^{n-1} \int_0^1 dx x \int \frac{dy}{y} \int d^4l\, \delta(l^2) \delta^{(4)}(P_{L,y}-xk_1 - l){\rm sign}(\chi\cdot l)\\&\times& {1\over P_{L,y}^2}\[\eta_1P_{L,y}^2-\sum_{j=1}^m\eta_j\<j | \relax{\rm P\kern-0.7em /}_{L,y} |1 ]\]\frac{\<l1\>^2\<m,m+1\>}{\<m l\> \< l,m+1\>}~,\nonumber \eeqa where\footnote{When acting with the superconformal generator, the momentum $k_1 \in P_L$ becomes $k_{1'}=(1-x)k_1$, hence justifying the extra term $-x k_1$ inside the delta function.} \begin{equation} P_{L,y}=k_1+\dots + k_m-y \chi~. \end{equation} We can now integrate over $y$ and $l$ to obtain \begin{equation}\label{barSontree} \hspace{-2mm} \boxed{ \[\bar S_{2\to 1}{\cal A}_{n+1}^{(0){\rm NMHV}}\]_1=i2\pi^2\widetilde\lambda_1{\cal A}_n^{(0)}\sum_{m=2}^{n-1}\int_0^1{dx\over x}{P_{L,y}^2\over P_{L,x}^2} \[\eta_1P_{L,y}^2-\sum_{j=1}^m\eta_j\<j | \relax{\rm P\kern-0.7em /}_{L,y} |1]\]\frac{\<m,m+1\>}{\<m| \relax{\rm P\kern-0.7em /}_{L,y} |1] \<m+1| \relax{\rm P\kern-0.7em /}_{L,y} | 1]}}~, \end{equation} where\footnote{Note that $\int{dx\over x}{P_{L,y}^2\over P_{L,x}^2}=\int{dy\over y}$.} $$ y={P_{L,x}^2\over 2P_{L,x}\cdot \chi}~,\qquad P_{L,x}=k_1+\dots + k_m-xk_1~. $$ Before we move on and consider the action of the superconformal generators on the one loop amplitude, a few comments are in order: \begin{enumerate} \item[$\bullet$] Taking the cut of (\ref{barSontree}) in the $t_1^{[m]}$ channel localizes the $y$-integral at $y=0$, yielding (\ref{cutbarStree}). \item[$\bullet$] Any term in the sum depends on the arbitrarily chosen null vector $\chi$. The sum is, however, $\chi$-independent. \item[$\bullet$] For compactness of the expressions above, we have dropped the explicit $i\epsilon$ prescription of the Feynman propagator. It is trivial to add it back as will be done below.
\item[$\bullet$] For $m=1,2,n-2,n-1$ the integrals in (\ref{barSontree}) are divergent. In section \ref{sec:reg} we will deal with their regularization. \end{enumerate} Next, we would like to compare (\ref{barSontree}) with the action of $\bar S_{1\to 1}$ on the $n$-particle MHV amplitude ${\cal A}_n^{(1){\rm MHV}}$. In \cite{Brandhuber:2004yw}, a generalization of the CSW formula to one-loop MHV amplitudes was given as \begin{equation} {\cal A}_n^{(1){\rm MHV}}=-2\pi i\sum_{i=1}^n\sum_{m=1}^{n-1}\int {dy\over y+i0}\int d^4l_1d^4l_2\delta^{(+)}(l_1^2)\delta^{(+)}(l_2^2)\int d^4\eta_{l_1}d^4\eta_{l_2}\widetilde{\cal A}_L^{(0){\rm MHV}}\widetilde{\cal A}_R^{(0){\rm MHV}}~, \la{oneloopcut} \end{equation} where \begin{eqnarray}\label{PLPR} \widetilde{\cal A}_L^{(0){\rm MHV}}&=&\delta^4(P_L-l_1-l_2-y\chi) A_{m+1}^{(0){\rm MHV}}(-l_1,-l_2,i,\dots,i+m-1) \,, \\ \widetilde{\cal A}_R^{(0){\rm MHV}}&=&\delta^4(P_R+l_1+l_2+y\chi) A_{n-m+1}^{(0){\rm MHV}}(l_2,l_1,i+m,\dots,i-1)\,,\nonumber \end{eqnarray} the left and right momenta $P_L$, $P_R$ are given in (\ref{PLPRl}) and we have chosen $\chi$ to have positive energy ($\chi^0>0$).\footnote{The step functions $\theta(l_1^0)\theta(l_2^0)$ imply that $P_L-y \chi$ is the sum of two positive energy null momenta and must therefore be a (positive energy) time-like vector. Thus, the integrand in each of the summands in (\ref{oneloopcut}) is nonzero for $y\ge -\frac{P_L^2}{2\chi\cdot P_L}$ (for $m=1$ this yields $y\ge 0$).} For any fixed value of $y$, the dLIPS integral in (\ref{oneloopcut}) computes the discontinuity of a one loop amplitude in the $P_{L,y}$ channel (where the tree level amplitude has been factored out). It depends on $y\chi$ only through the momentum conservation delta functions in (\ref{PLPR}). We can therefore apply the result of section \ref{sec:fincuts} directly to the loop amplitude. 
As before, $\bar S_{1\to 1}$ fails to annihilate the one loop amplitude only due to the holomorphic anomaly and we isolate the term in which an internal on-shell momentum is collinear to $k_1$ \begin{equation}\label{barSonloop} \hspace{-2mm} \boxed{ \[\bar S_{1\to 1}{\cal A}_n^{(1){\rm MHV}}\]_1=-i2\pi^2\widetilde\lambda_1{\cal A}_n^{(0)}\sum_{m=2}^{n-1}\int_0^1{dx\over x}{P_{L,y}^2\over P_{L,x}^2}\[\eta_1P_{L,y}^2-\sum_{j=1}^m\eta_j\<j | \relax{\rm P\kern-0.7em /}_{L,y} |1]\]\frac{\<m,m+1\> }{\<m| \relax{\rm P\kern-0.7em /}_{L,y} |1] \<m+1| \relax{\rm P\kern-0.7em /}_{L,y} | 1]}}~, \end{equation} where $$ x={P_{L,y}^2\over 2P_{L,y}\cdot k_1}~. $$ Comparing with (\ref{barSontree}) we see that, at the level of formal integrals, we obtain a match between the two expressions, i.e., \begin{equation}\label{FormalSummary} \bar S_{1\to 1}{\cal A}_n^{(1){\rm MHV}}+\bar S_{2\to 1}{\cal A}_{n+1}^{(0){\rm NMHV}}=0~. \end{equation} In obtaining (\ref{barSonloop}) there are two points that deserve explanation. \begin{enumerate} \item[$\bullet$] It is quite nontrivial that we obtain precisely the same region of integration $0<x<1$ in (\ref{barSonloop}) and in (\ref{barSontree}). Each $m$ summand in (\ref{barSonloop}) originates from four terms in the action of $\bar S_{1\to 1}$ on (\ref{oneloopcut}). These are the terms in which $(i,m)$ in (\ref{oneloopcut}) are equal to $\{(1,m),(m+1,n-m),(2,m-1),(m+1,n-m+1)\}$, where the first two are drawn in figure \ref{BarS21}.a and the last two in figure \ref{BarS21}.b. Each of these four terms is the same as (\ref{barSonloop}) with the integrand multiplied by the following step functions (see (\ref{barScut})) \begin{figure}[t] \epsfxsize=16cm \centerline{\epsfbox{BarS21.eps}} \caption{\small\small\textsl{{\bf a} and {\bf b} are terms that appear in the action of $\bar S_{1\to 1}^{(0)}$ on the one loop MHV $n$-particle amplitude ${\cal A}_n^{(1)}$, see figure \ref{figSonloop}.
They correspond to two different terms in the CSW sum of the loop amplitude where an internal on-shell momentum becomes collinear to $k_1$. Their sum is equal to a term in $\bar S_{2\to 1}{\cal A}_{n+1}^{(0)}$ where the two collinear particles are collinear to $k_1$. That term is drawn on the left.}}\label{BarS21} \end{figure} \beqa\la{Regions} (1,m):&\qquad&+s_1\theta(+(P_{L,y}^0-xk_1^0))\theta(+xk_1^0)\nonumber\\ (m+1,n-m):&\qquad&-s_1\theta(-(P_{L,y}^0-xk_1^0))\theta(-xk_1^0)\\ (2,m-1):&\qquad&-s_1\theta(+(P_{L,y}^0-xk_1^0))\theta(-(1-x)k_1^0)\nonumber\\ (m+1,n-m+1):&\qquad&+s_1\theta(-(P_{L,y}^0-xk_1^0))\theta(+(1-x)k_1^0)~.\nonumber \eeqa When summing these four terms the dependence on $P_L$ and $s_1$ drops out leaving just \begin{equation}\la{Cancelation} \theta(x)\theta(1-x)~, \end{equation} as in (\ref{barSonloop}). The cancelation of the $P_L$ dependence in the region of the $x$ integration is essential for the locality of the correction $\bar S_{2\to1}$. Notice that for (\ref{FormalSummary}) to hold, it was crucial that in the definition of $\bar S_{2\to1}$ the portion of collinearity $x$ was integrated only between 0 and 1. Physically this means that the two collinear particles have the same energy sign. \item[$\bullet$] Note that the sum in (\ref{barSonloop}) starts at $m=2$. On the other hand, the CSW one loop sum (\ref{CSWsum}) contains a term where $(m,i)=(1,1)$. That is the term where one of the MHV vertices is a three vertex connecting particle $1$ to the two internal propagators. It already contributes to (\ref{barSonloop}) from the regions of integration where $l_1$ becomes collinear to $k_2$ or $l_2$ becomes collinear to $k_n$. One may expect that it will also contribute to (\ref{barSonloop}) from the region of integration where an internal momentum becomes collinear to $k_1$. In that term, the momentum conservation delta function of the MHV three vertex is $\delta^4(k_1-y\chi-l_1-l_2)$. Suppose $l_1=tk_1$.
The only way to have momentum conservation is if $y=0$ or $t=1$.\footnote{That is, generically the holomorphic anomaly is supported outside the region of integration.} The point $t=1$ is however a point of measure zero in the dLIPS integration. The corresponding holomorphic anomaly is therefore supported at the point where $y=0$. That is the point where the original $y$-integrand is divergent. A formal manipulation of that contribution is therefore invalid. In other words, to make any sense out of that contribution, we must first introduce a regulator. After regularization, that contribution vanishes. As the details are technical and involve a regularization not yet introduced, we present them in Appendix A. \end{enumerate} \section{Generalizations to All Loops and Helicities} \la{all} In \cite{Brandhuber:2004yw,Brandhuber:2005kd} a generalization of the CSW formula to one-loop MHV amplitudes was given as\footnote{The relation between (\ref{oneloopCSW}) and (\ref{oneloopcut}) will be reviewed in detail in the next section.} \begin{equation} \label{oneloopCSW} {\cal A}_n^{(1){\rm MHV}}=-\sum_{i=1}^n\sum_{m=1}^{n-1}\int{d^4L_1\over L_1^2+i0}\int{d^4L_2\over L_2^2+i0}\int d^4\eta_{l_1}d^4\eta_{l_2}\widetilde{\cal A}_{L}\widetilde{\cal A}_{R}~, \end{equation} where \begin{eqnarray}\label{PLPRCSW} \widetilde{\cal A}_L&=&\delta^4(P_L+L_1+L_2) A_{m+1}^{(0){\rm MHV}}(l_2,l_1,i,\dots,i+m-1) \\ \widetilde{\cal A}_R&=&\delta^4(P_R-L_1-L_2) A_{n-m+1}^{(0){\rm MHV}}(-l_1,-l_2,i+m,\dots,i-1)\nonumber \end{eqnarray} and \begin{equation}\la{offshelltonull} L_i=l_i+y_i\chi~. \end{equation} Here $\chi$ is an arbitrarily chosen null vector.
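The decomposition (\ref{offshelltonull}) is the same off-shell parametrization that underlies (\ref{onshelldecomposition}). As a minimal sketch of why it converts the off-shell measure into an on-shell one (we are schematic about metric and orientation conventions), write $$ L=l+y\chi~,\qquad l^2=\chi^2=0\qquad\Longrightarrow\qquad L^2=2y\,\chi\cdot l~,\qquad d^4L=2|\chi\cdot l|\,dy\,d^4l\,\delta(l^2)~, $$ so that $$ {d^4L\over L^2}={dy\over y}\,{\rm sign}(\chi\cdot l)\,d^4l\,\delta(l^2)~, $$ which reproduces the ${\rm sign}(\chi\cdot l)$ factor of (\ref{onshelldecomposition}) and, for null $l$ and positive energy $\chi$, the ${\rm sign}(l_1^0)$ factors appearing below.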
For every fixed value of $i$ and every $m$ in the sum, we can express the $L_1$ integral as \beqa\la{Ltol} \int{d^4 L_1\over L_1^2+i0}\delta^4(P_{L,m}+ L_1) =\int d^4l_1\delta(l_1^2){\rm sign}(l_1^0)\int{dy\over y+i0\ {\rm sign}(l_1^0)}\delta^4(P_{L,m}+l_1+y\chi)~,\nonumber \eeqa where $$ P_{L,m}=L_2+\sum_{j=i}^{i+m-1}k_j $$ and the energy sign of $\chi$ was chosen to be positive. Now, for every fixed value of $i$, $y$ and $l_1$, the sum over $m$ reproduces the CSW formula for an $(n+2)$ tree level (off-shell continued) NMHV amplitude $\widetilde{\cal A}_{n+2}^{(0){\rm NMHV}}$ with two adjacent legs being $l_1$ and $-l_1$, with momentum insertions $l_1+y\chi$ and $-l_1-y\chi$ respectively\footnote{To be more precise, the CSW tree level sum also includes terms where the legs $l_1$ and $-l_1$ are attached to the same MHV vertex. These terms are proportional to the tree level splitting function that diverges as $1/\<l_1,-l_1\>$. However, the Grassmannian integration over $\eta_{l_1}$ produces a factor of $(|l_1\>-|-l_1\>)^4$, killing these terms \cite{Brandhuber:2005kd}. Note that even if we multiply first by $\eta_{l_1}^A$, these terms will not contribute.} (when expressing $\widetilde{\cal A}_{n+2}^{(0){\rm NMHV}}$ as a CSW sum, the null momentum used to go off-shell must be $\chi$ and cannot be chosen independently). We then have \beqa\nonumber {\cal A}_n^{(1){\rm MHV}}\!=\!-i\sum_{i=1}^n\int\! d^4l\delta(l^2){\rm sign}(l^0)\!\int\!{dy\over y+i0\ {\rm sign}(l^0)}\!\int \!d^4\eta_l\widetilde{\cal A}_{n+2}^{(0){\rm NMHV}}\![(l,y\chi),(-l,-y\chi),i,\dots,i+n-1]\nonumber \eeqa where $\widetilde{\cal A}^{(0)}$ is the off-shell continuation of tree-level amplitudes by means of momentum conservation at MHV vertices. That is, the external momenta entering $\widetilde{\cal A}$ can be off-shell and are treated in the same way as internal off-shell momenta via the CSW prescription \cite{CSW}.
We can replace the ${\rm sign}(l^0)$ in this expression by explicitly summing over positive and negative energy momenta $l$, thus obtaining \beqa\la{looptree} {\cal A}_n^{(1){\rm MHV}}=-i\sum_{i=1}^n\sum_{s=\pm} \int{dy\over y+i0}\int {d^{4|4}\Lambda\over2\pi}\widetilde{\cal A}_{n+2}^{(0){\rm NMHV}}[(sl,sy\chi),(-sl,-sy\chi),i,\dots,i+n-1]~, \eeqa where $d^{4|4}\Lambda=d^4l\delta^{(+)}(l^2) d^4\eta_ld\varphi$. We will now use the above observation to conjecture a generalization of CSW \cite{CSW} to any loop order and any helicity configuration (not necessarily MHV). To that end, we first introduce a couple of definitions: \begin{itemize} \item We define the generating function of all generalized MHV vertices $$ \widetilde{\cal A}^{(0)\rm MHV}_c[J]=\sum_{n=3}^\infty{g^{n-2}\over n}\int\prod_{i=1}^n d^{4|4}\Lambda_i dy_i{\rm Tr}[J(\widetilde\Lambda_{1,y_1})\dots J(\widetilde\Lambda_{n,y_n})]\widetilde{\cal A}^{(0){\rm MHV}}_n(\Lambda_{1,y_1},\dots,\Lambda_{n,y_n})~, $$ where $\widetilde\Lambda^\pm_y=(\Lambda^\pm,y)$ and $$ \widetilde{\cal A}^{(0){\rm MHV}}_n(\Lambda_{1,y_1},\dots,\Lambda_{n,y_n})=\delta^4\(\sum_{i=1}^n(k_i+y_i\chi)\)A_n^{(0)\rm MHV}(\Lambda_1,\dots,\Lambda_n)~. $$ \item Next, we define the ``propagator inserting operator" \beqa\la{loopraising} {\cal L}=i\int{dy \over y+i0}\int {d^{4|4}\Lambda\over2\pi}{\rm Tr}[\check J(\widetilde\Lambda_{y})\check J(-\widetilde\Lambda_y)]~,\nonumber \eeqa where $\check J(\widetilde\Lambda_y)={\delta\over\delta J(\widetilde\Lambda_y)}$ and $(-\widetilde\Lambda_y)=(\lambda,-\bar\lambda,-\eta,-y)$.
\item Finally, we express the full ${\cal N}=4$ S-matrix ${\mathbb S}_{matrix}$ (generating all connected, disconnected, planar and non-planar amplitudes) as \begin{equation} {\mathbb S}_{matrix}[J]=e^{F[J]}{\mathbb S}[J]~, \la{Smatrix} \end{equation} where \cite{Kim:1996nd} \begin{equation} F[J]=\int d^{4|4}\Lambda{\rm Tr}[J(\Lambda^+) \hat J(\Lambda^-)] \la{eqF} \end{equation} is introduced to take into account sub-processes where some of the particles fly by unscattered and ${\mathbb S}$ is the interacting part of the S-matrix. It is equal to the exponential of the sum of all connected amplitudes with three or more external particles.\footnote{Note that ${\mathbb S}$ is not the transfer matrix (the latter only excludes the process where \textit{all} particles fly by unscattered).} \end{itemize} Using these definitions, the conjectured CSW generalization reads \begin{equation}\la{CSWloop} \hspace{-2mm} \boxed{ {\mathbb S}[J]=e^{{\cal L}}~e^{\widetilde{\cal A}^{(0)\rm MHV}_c[J]}}~. \end{equation} Note that at tree level and for one loop MHV amplitudes this is \textit{not} a conjecture, see respectively \cite{BCFW,Risager:2005vk} and \cite{Brandhuber:2004yw}.\footnote{The CSW construction was argued to hold for any helicity configuration in \cite{Brandhuber:2005kd}. Furthermore, the existence of an MHV Lagrangian which is obtained from the usual one by a field redefinition after light-cone gauge fixing \cite{axial} provides additional strong evidence towards the exactness of this expansion, see \cite{Ettle:2007qc} where the (non-local) field redefinitions were argued to be mild enough not to raise any issues at both tree level and one loop.} We will now study the symmetry transformations of this object; the transformation properties of the full S-matrix ${\mathbb S}_{matrix}$ will then be read off from these. \begin{figure}[t] \epsfxsize=16cm \centerline{\epsfbox{S21fromS12.eps}} \caption{\small\small\textsl{ Result of the action of $\bar S_{1\to 1}$ on ${\mathbb S}^{(1)}$.
The operator $\bar S_{1\to 1}$ goes through ${\cal L}$ thus acting on the MHV tree level generating function ${\mathbb S}^{(0)}$. From \cite{Bargheer:2009qu} this gives rise to the action of $\bar S_{1\to 2}$ on ${\mathbb S}^{(0)}$. The two legs created by $\bar S_{1\to 2}$ can (a) be unrelated to the legs on which $\mathcal{L}$ acts, thus yielding terms which vanish for generic external momenta, (b) be acted upon by $\mathcal{L}$, giving a vanishing contribution due to the Grassmannian integration, or (c,d) one of them can become an external leg while the other is acted upon by $\mathcal{L}$. The latter two contributions are identified with $\bar S_{2\to 1}$ (see figure \ref{Blue}). }}\label{LoopOperator} \end{figure} We start by writing a recursive relation for the number of propagators between CSW MHV vertices. To do so, we first introduce a parameter $x$ counting the number of such propagators $$ {\mathbb S}[x,J]=e^{x{\cal L}}~e^{\widetilde{\cal A}^{(0)\rm MHV}_c[J]}=\sum_{m=0}^\infty x^m{\mathbb S}^{(m)}[J]~. $$ To obtain the S-matrix, we set $x=1$ $$ {\mathbb S}[J]={\mathbb S}[1,J]~. $$ We can now write a recursive relation for these coefficients\footnote{Here and everywhere the tilde stands for generalized off-shell amplitudes in the CSW sense.} \begin{equation}\la{recursive} {\mathbb S}^{(m)}[J]={{\cal L}\over m} \widetilde {\mathbb S}^{(m-1)}[J]~. \end{equation} Let us now explain how we can recover and generalize our previous results assuming (\ref{CSWloop}).
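The combinatorial factor in (\ref{recursive}) follows from simply expanding the exponential of $x{\cal L}$; schematically (suppressing the tilde bookkeeping of the off-shell continuation), $$ {\mathbb S}[x,J]=\sum_{m=0}^\infty {x^m\over m!}\,{\cal L}^m\,e^{\widetilde{\cal A}^{(0)\rm MHV}_c[J]}\qquad\Longrightarrow\qquad {\mathbb S}^{(m)}[J]={{\cal L}^m\over m!}\,e^{\widetilde{\cal A}^{(0)\rm MHV}_c[J]}={{\cal L}\over m}\,{\mathbb S}^{(m-1)}[J]~. $$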
First, we note that for $\bar S$, the results of \cite{Bargheer:2009qu} apply as well to generalized tree level MHV amplitudes (these are the CSW vertices)\footnote{Note that MHV amplitudes don't have multi-particle poles.} \begin{equation}\la{firststep} \(\bar S_{1\to1}+g\bar S_{1\to2}\){\mathbb S}^{(0)}[J]=0~, \end{equation} where for generalized off-shell legs \beqa \bar{{S}}_{{1\to 1}}&=&-\sum_{s=\pm}s\int d^{4|4}\Lambda\,dy\, \eta \, {\rm Tr}\[\bar\partial J(\widetilde\Lambda^s_y) \check{J}(\widetilde\Lambda^s_y)\] ~, \la{offS11} \\ \bar S_{1\to2}&=&2\pi^2{\sum_{s,s_1,s_2=\pm}}\!\!\!\!\!'~s\int dy_1dy_2\int d^{4|4}\Lambda d^4\eta'd\alpha\bar\lambda\eta'{\rm Tr}\[\check J(\widetilde\Lambda^s_{y_1+y_2})\hat J(\widetilde\Lambda_{1,y_1}^{s_1}) J(\widetilde\Lambda_{2,y_2}^{s_2})\]~. \la{offS12} \eeqa Now we recursively act with the bare generator $\bar S_{1\to1}$ on ${\mathbb S}^{(1)}[J]$ using (\ref{recursive}) \beqa \bar S_{1\to1}{\mathbb S}^{(1)}[J]&=&\bar S_{1\to1}{\cal L}\widetilde {\mathbb S}^{(0)}[J]={\cal L}\bar S_{1\to1}\widetilde {\mathbb S}^{(0)}[J]=-g{\cal L}\bar S_{1\to2}\widetilde {\mathbb S}^{(0)}[J]\nonumber\\&=&-g\bar S_{1\to2}\widetilde {\mathbb S}^{(1)}[J]-g[{\cal L},\bar S_{1\to2}]\widetilde {\mathbb S}^{(0)}[J]~,\nonumber \eeqa where, when commuting ${\cal L}$ through $\bar S_{1\to1}$, we used the fact that $l$ is not an external leg and that \begin{equation}\la{Sbarl} \bar S_l=\eta_l{\d\over\d\widetilde\lambda_l}=\eta_{-l}{\d\over\d\widetilde\lambda_{-l}}=\bar S_{-l} \end{equation} is a total derivative.
\begin{figure}[t] \epsfxsize=10cm \centerline{\epsfbox{S21blue.eps}} \caption{\small\small\textsl{The combined action of the propagator inserting operator $\mathcal{L}$ and the leg splitting operator $\bar S_{1\to 2}$ gives rise to the leg joining operator $\bar S_{2\to 1}$.}}\label{Blue} \end{figure} Now, (see figure \ref{Blue}) \beqa\la{UsingBiesert} [{\cal L},\bar S_{1\to2}]&=&i\pi\int{dy\over y+i0}{\sum_{s,s_1,s_2=\pm}}\!\!\!\!\!'~s\int d^{4|4}\Lambda d^4\eta'd\alpha\bar\lambda\eta'{\rm Tr}\[\check J(\widetilde\Lambda^s_{s_1y})\check J(-\widetilde\Lambda^{s_1}_{1,s_1y}) J(\Lambda_2^{s_2})\]\\&+&i\pi\int{dy\over y+i0}{\sum_{s,s_1,s_2=\pm}}\!\!\!\!\!'~s\int d^{4|4}\Lambda d^4\eta'd\alpha\bar\lambda\eta'{\rm Tr}\[\check J(-\widetilde\Lambda^{s_2}_{2,s_2y})\check J(\widetilde\Lambda^s_{s_2y}) J(\Lambda_1^{s_1})\]~.\nonumber \eeqa The two terms in (\ref{UsingBiesert}) come from contracting the right $(1)$ and the left $(2)$ legs of $\bar S_{1\to 2}$ with a neighboring leg using $\mathcal{L}$. These are drawn in figures \ref{Blue}.a and \ref{Blue}.b respectively. By the following change of variables in the last line $$ (s,s_1,s_2,y)\quad\rightarrow\quad(-s_1,s_2,-s,ss_1y)~, $$ the sum of the two terms can be rewritten as \beqa\la{intygen} [{\cal L},\bar S_{1\to2}] =\pi{\sum_{s,s_1,s_2=\pm}}\!\!\!\!\!'~s\int dy\[{i\over y+i0}-{i\over y+iss_10}\]\int d^{4|4}\Lambda d^4\eta'd\alpha\bar\lambda\eta'{\rm Tr}\[\check J(\widetilde\Lambda^s_{s_1y})\check J(-\widetilde\Lambda^{s_1}_{1,s_1y}) J(\Lambda^{s_2}_2)\]~.\nonumber \eeqa For $s=s_1$ the two terms cancel. That is the same cancelation obtained in (\ref{Cancelation}). For $s=-s_1$ (and therefore $s=s_2$), the two terms give a delta function (\ref{delta}).
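To make the last two statements explicit, the bracket in (\ref{intygen}) evaluates, using (\ref{delta}), to $$ {i\over y+i0}-{i\over y+iss_10}=\left\{\begin{array}{ll} 0~, & ss_1=+1~,\\ 2\pi\,\delta(y)~, & ss_1=-1~,\end{array}\right. $$ so only the $s=-s_1$ terms survive, with the $y$-integral localized at $y=0$.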
We conclude that \begin{equation}\la{corrAssume} \(\bar S_{1\to1}+g\bar S_{1\to 2}\){\mathbb S}^{(1)}[J]+g\bar S_{2\to1}{\mathbb S}^{(0)}[J]=0~, \end{equation} where \begin{equation} \hspace{-2mm} \boxed{ \bar S_{2\to1}=2\pi^2\sum_{s=\pm}s\int d^{4|4}\Lambda d^4\eta'd\alpha\bar\lambda\eta'{\rm Tr}\[J(\Lambda^s)\check J(\Lambda_1^s)\check J(\Lambda_2^s)\]}~, \end{equation} is independent of $\chi$ and \begin{equation}\label{Grassman2} \begin{array}{l} \lambda_1=\lambda\sin\alpha\\ \lambda_2=\lambda\cos\alpha \end{array}, \qquad \begin{array}{l} \eta_1=\eta\sin\alpha-\eta'\cos\alpha \\ \eta_2=\eta\cos\alpha+\eta'\sin\alpha \end{array}~. \end{equation} This is precisely the form of the generator \rf{allgen} derived in the previous sections from the direct action on one loop MHV amplitudes. Next, we use (\ref{recursive}) and (\ref{corrAssume}) to act with $\bar S_{1\to1}$ on ${\mathbb S}^{(2)}[J]$ \beqa\la{ordertwo} \bar S_{1\to1}{\mathbb S}^{(2)}[J]&=&-g{{\cal L}\over 2}\(\bar S_{1\to2}\widetilde {\mathbb S}^{(1)}[J]+\bar S_{2\to1}\widetilde {\mathbb S}^{(0)}[J]\)\\&=&-g\bar S_{1\to2}\widetilde {\mathbb S}^{(2)}[J]-g\bar S_{2\to1}\widetilde {\mathbb S}^{(1)}[J]- g \bar S_{3\to 0} \widetilde {\mathbb S}^{(0)}[J]~.\nonumber \eeqa The new term appearing in (\ref{ordertwo}) is (see figure \ref{S30fig}) \beqa\la{Sthreezerobare} \bar S_{3\to 0} &=&{1\over 2}[{\cal L},\bar S_{2\to1}]={1\over 2}[{\cal L},[{\cal L},\bar S_{1\to2}]]\\ &=&{1\over 2}{\sum_{s,s_1,s_2=\pm}}\!\!\!\!\!'~ss_1s_2\int dy_1dy_2\,G_{12}\int d^{4|4}\Lambda d^4\eta'd\alpha\bar\lambda\eta'{\rm Tr}\[\hat{\check J}(\widetilde\Lambda^s_{y_1+y_2})\check J(-\widetilde\Lambda^{s_2}_{2,y_2})\check J(-\widetilde\Lambda^{s_1}_{1,y_1})\]~,\nonumber \eeqa where $$ G_{12}=-{1\over 3}\[{1\over y_1+is_10}{1\over y_2+is_20}-{1\over y_1+is_10}{1\over y_1+y_2+is0}-{1\over y_1+y_2+is\,0}{1\over y_2+is_2\,0}\] \nonumber $$ represents the propagators in the three possible combinations drawn in figure \ref{S30fig}.
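Pointwise, away from the loci $y_1=0$, $y_2=0$ and $y_1+y_2=0$, the bracket in $G_{12}$ vanishes by partial fractions, so $G_{12}$ can only be supported on these loci, where the $i0$ prescriptions matter. A quick numerical sketch of the underlying rational identity (the function and variable names here are ours):

```python
# Check the partial-fraction identity behind G_12 (i*eps prescriptions dropped):
#   1/(y1*y2) - 1/(y1*(y1+y2)) - 1/((y1+y2)*y2) = 0
# away from y1 = 0, y2 = 0 and y1 + y2 = 0.
from fractions import Fraction
import random

def g12_rational(y1, y2):
    """The three products of G_12 without their i*eps prescriptions."""
    return (1 / (y1 * y2)
            - 1 / (y1 * (y1 + y2))
            - 1 / ((y1 + y2) * y2))

random.seed(1)
checked = 0
while checked < 100:
    y1 = Fraction(random.randint(-9, 9), random.randint(1, 9))
    y2 = Fraction(random.randint(-9, 9), random.randint(1, 9))
    if 0 in (y1, y2, y1 + y2):
        continue  # stay away from the singular loci
    assert g12_rational(y1, y2) == 0
    checked += 1
```

The coefficients of the delta functions supported on the singular loci still require the careful treatment of the $i0$ prescriptions.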
Similarly to (\ref{delta}) we now have $G_{12}= \pi ^2\delta(y_1)\delta(y_2)\(1+s_1s_2\)\(1-ss_1\)/3$. Plugging it back into (\ref{Sthreezerobare}), we conclude that \begin{equation} \hspace{-2mm} \boxed{ \bar S_{3\to0}={2\pi^2\over 3}\sum_{s=\pm}s\int d^{4|4}\Lambda d^4\eta'd\alpha\bar\lambda\eta'{\rm Tr}\[\check J(\Lambda^s)\check J(-\Lambda^s_2)\check J(-\Lambda^s_1)\]}~. \la{S30fin} \end{equation} \begin{figure}[t] \epsfxsize=15cm \centerline{\epsfbox{S30.eps}} \caption{\small\small\textsl{The generator $\bar S_{3\to 0}$ obtained by composing twice the propagator inserting operator $\mathcal{L}$ with the splitting generator $\bar S_{1\to 2}$. }}\label{S30fig} \end{figure} Since $\bar S_{3\to0}$ does not have external legs (to be contracted with ${\cal L}$), it commutes with the propagator inserting operator $$ [{\cal L},\bar S_{3\to0}]=0~. $$ Thus, if we now go to higher orders in our recursive argument, no new corrections to the special superconformal generator $\bar S$ are produced. Collecting the terms in the $x$ expansion of $\bar S_{1\to1}{\mathbb S}[J]$ we conclude that \begin{equation} \(\bar S_{1\to1}+g\bar S_{1\to2}+gx\bar S_{2\to1}+gx^2\bar S_{3\to0}\){\mathbb S}[x,J]=0~. \nonumber \end{equation} By setting $x=1$, we obtain \begin{equation} \hspace{-2mm} \boxed{ \(\bar S_{1\to1}+g\bar S_{1\to2}+g\bar S_{2\to1}+g\bar S_{3\to0}\){\mathbb S}[J]=0}~. \la{main} \end{equation} A few comments are in order: \begin{enumerate} \item[$\bullet$] Note that even though in our derivation we used a generalization of CSW (\ref{CSWloop}) that technically involved a choice of a reference null vector $\chi$, it dropped out of all our final results. \item[$\bullet$] Naively, the action of ${\cal L}\bar S_{1\to1}$ on a connected amplitude also produces a holomorphic anomaly proportional to $\delta^2(\<l,-l\>)$. Due to the Grassmannian integration, there is no such contribution (see Fig 6.b).
\item[$\bullet$] The operator $\bar S_{2\to1}$ contributes only when the two collinear particles have the same energy sign. The reason for that is a cancelation between the two terms in $[{\cal L},\bar S_{1\to2}]$ (see figure 7). In that sense, $\bar S_{2\to1}$ is different from the tree level corrections $S_{1\to2}$ where the two collinear particles can have opposite energy sign \cite{Bargheer:2009qu}. Taking these two generators together with $\bar S_{3\to0}$, we see that all corrections to the generators are built from the same three vertex connected to the amplitude by cut propagators. \end{enumerate} The corrections to the conjugate special superconformal generator $S$ are obtained in an identical way using anti-MHV CSW rules \begin{equation}\la{Scorrections} \boxed{ \begin{array}{ll}\displaystyle S_{2\to1}& \displaystyle =-2\pi^2\sum_{s=\pm}\int d^{4|4}\Lambda d^4\eta' d\alpha \,\delta(\eta') \lambda \partial/\partial \eta' {\rm Tr}\[J(\Lambda^s)\check J(\Lambda_1^s)\check J(\Lambda_2^s)\]\\ \displaystyle S_{3\to0}& \displaystyle =-{2\pi^2\over 3}\sum_{s=\pm}\int d^{4|4}\Lambda d^4\eta'd\alpha\delta(\eta') \lambda \partial/\partial \eta'{\rm Tr}\[\check J(-\Lambda^{s})\check J(\Lambda^s_2)\check J(\Lambda^s_1)\] \end{array}}~, \end{equation} where $\Lambda_1$ and $\Lambda_2$ are given in (\ref{Grassman2}). The special conformal generator can be obtained by commuting the superconformal generators as in (\ref{SSK}). This is one of the advantages of the approach of this section. Namely, since we obtain the corrected generators at one loop by acting on the tree level generators with the propagator inserting operator ${\cal L}$, we automatically get the correct commutation relations for free: it suffices to conjugate (a straightforward off-shell generalization of) the commutation relations of \cite{Bargheer:2009qu} by the propagator inserting operator ${\cal L}$.
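To see how this works, note that by (\ref{Sthreezerobare}) and $[{\cal L},\bar S_{3\to0}]=0$, the Baker--Campbell--Hausdorff expansion of the conjugation truncates (schematically, suppressing the off-shell bookkeeping), $$ e^{{\cal L}}\(\bar S_{1\to1}+g\bar S_{1\to2}\)e^{-{\cal L}}=\bar S_{1\to1}+g\bar S_{1\to2}+g\bar S_{2\to1}+g\bar S_{3\to0}~, $$ where we used $[{\cal L},\bar S_{1\to1}]=0$ and $\bar S_{2\to1}=[{\cal L},\bar S_{1\to2}]$. Since conjugation preserves commutators, $e^{{\cal L}}[A,B]e^{-{\cal L}}=\[e^{{\cal L}}A e^{-{\cal L}},e^{{\cal L}}B e^{-{\cal L}}\]$, the corrected generators automatically inherit the tree level algebra.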
We can now read off the transformation of the full S-matrix ${\mathbb S}_{matrix}$ (\ref{Smatrix}) by multiplying (\ref{main}) by $e^F$ (\ref{eqF}) and commuting this through the generators. In this way we obtain\footnote{Similarly to $[{\cal L},\bar S_{1\to1}]=0$, we have $[F,\bar S_{1\to1}]=0$ because (\ref{Sbarl}) is a total derivative.} \begin{equation} \hspace{-2mm} \boxed{ \(\bar S_{1\to1}+g\bar S_{3\to0}+g\bar S_{0\to3} \){\mathbb S}_{matrix}[J]=0}~, \la{main2} \end{equation} where \begin{equation}\nonumber \boxed{ \bar S_{0\to3} \displaystyle=-{2\pi^2\over 3}\sum_{s=\pm}s\int d^{4|4}\Lambda d^4\eta'd\alpha\bar\lambda\eta'{\rm Tr}\[J(\Lambda^s) J(\Lambda^s_2) J(\Lambda^s_1)\]}~. \end{equation} The correction $\bar S_{0\to3}$ does not contain functional derivatives $\check J$ and in this respect it is distinct from $\bar S_{1\to 1}$ and $\bar S_{3\to 0}$. Note also that $\bar S_{2\to 1}$ and $\bar S_{1\to 2}$ are automatically reproduced from commuting $\bar S_{3\to 0}$ through $e^F$ and need not be included in \rf{main2}. \subsection{Superconformal Invariance of the Tree Level S-matrix} \begin{figure}[t] \epsfxsize=16cm \centerline{\epsfbox{S21tree.eps}} \caption{\small\small\textsl{The action of the bare superconformal generator $\bar S_{1\to1}$ on an amplitude at the point where it factorizes on a multi particle pole results in a new anomaly. The anomaly is identified with the correction $\bar S_{2\to1}$ to $\bar S_{1\to1}$, acting on two disconnected amplitudes. In the T-dual polygon Wilson loop picture of \cite{AM}, the collinear multi particle factorization points correspond to configurations where a cusp collides with an edge. The superconformal generator mixes that configuration with two disjoint polygons touching at a point.} }\label{newTree} \end{figure} Our derivation of the main result \rf{main} is valid at any loop order. In this section we will consider the implications of this relation for tree level amplitudes.
The first term is the only one which survives for generic configurations of external momenta and was considered for MHV amplitudes in \cite{Witten} and for all tree level amplitudes in \cite{tree1, dualpapers3}. The second term arises when two of the external momenta become collinear and was proposed in \cite{Bargheer:2009qu} as the correction to the bare superconformal generator. The last two terms in (\ref{main}) contribute already at tree level and were overlooked in the literature. In this subsection we shall explain for which configurations of external momenta they become relevant. We shall start with $\bar S_{2\to 1}$. Whenever a subset of adjacent momenta becomes on-shell $$ (k_i+\dots+k_{i+m-1})^2=P^2=0~, $$ the amplitude factorizes as \begin{equation}\la{Multiparticlefactorization} {\cal A}_n(k_1,\dots,k_n)\quad\rightarrow\quad-i\int d^4L\int d^4\eta_P{\cal A}(L,k_i,\dots,k_{i+m-1}){1\over L^2+i0}{\cal A}(-L,m+i,\dots,i-1)~. \end{equation} This property of scattering amplitudes follows directly from the unitarity of the S-matrix and is called multi particle factorization. The right hand side of (\ref{Multiparticlefactorization}) is nothing but two disconnected amplitudes joined by our propagator inserting operator ${\cal L}$. When acting with $\bar S_{1\to1}$, we can follow exactly the same steps as in the previous subsection (see figure \ref{newTree}). The result is therefore equal to $\bar S_{2\to1}$ acting on two disconnected amplitudes. It is non-zero whenever $P$ becomes collinear to one of the neighboring momenta $k_i$, $k_{i-1}$, $k_{i+m}$ or $k_{i+m+1}$. The only difference from acting on a connected piece of the amplitude is the absence of additional propagators connecting the two sub-amplitudes. These, however, played no role in our previous derivation.
\begin{figure}[t] \epsfxsize=16cm \centerline{\epsfbox{S30tree_2.eps}} \caption{\small\small\textsl{The action of the bare superconformal generator $\bar S_{1\to1}$ on a tree level amplitude with a multi-particle pole. The result can be recast as $\bar S_{3\to 0}$ connecting three disconnected tree amplitudes. At one loop level the correction $\bar S_{3\to 0}$ plays a role at a single multi-particle pole whereas starting from two loops this correction becomes generically present (similarly to $\bar S_{2\to 1}$ at one loop). } }\label{newTree2} \end{figure} The last contribution, $\bar S_{3\to 0}$, arises at a double multi particle pole \begin{equation} (k_{i}+\dots+k_{j})^2,(k_{j+1}+\dots+k_{l})^2 \to 0 \,\,\, , \,\,\, i<j<l \end{equation} where the two subsets of null momenta also become collinear\footnote{At that point, due to momentum conservation, the remaining momenta $k_l+\dots+k_n$ will automatically become null and collinear with the two previous null subsets of momenta.}. The derivation of this correction follows precisely as before. When the bare generator acts on an amplitude with such kinematics the relevant CSW diagrams are those in figure \ref{newTree2}.\footnote{This figure represents the relevant part of a bigger CSW graph, i.e. the dots could stand for extra propagators connecting to more MHV vertices} Two collinear legs are connected to one of the MHV vertices. The holomorphic anomaly associated with these two collinear legs generates a leg splitting operator $\bar S_{1\to 2}$ acting on an MHV vertex with one fewer leg. The two legs coming out of $\bar S_{1\to 2}$ are then connected to the other pieces of the graph. The sum over the three possible diagrams in figure \ref{newTree2} generates the correction $\bar S_{3\to 0}$ by exactly the same mechanism as explained in the main text.
\section{Regularized Generators and Conformal Invariance of the Regularized S-matrix}\la{sec:reg} In the previous section we have seen that formally, the S-matrix is superconformally invariant. That analysis was formal because ${\cal N}=4$ SYM is conformal and therefore does not have asymptotic particles. However, the S-matrix observables we are interested in are IR safe quantities like an inclusive cross section. To compute these and argue for their superconformal covariance, one must first introduce an IR regulator. The IR regulated theory has an S-matrix from which the desired observables are computed. A good IR regulator is a regulator that drops out of IR safe physical quantities leaving a consistent answer behind.\footnote{The most commonly used regularization is dimensional regularization in which the external particles and helicities are kept four dimensional while the internal momenta are continued to $D=4-2\epsilon$ dimensions (where $\epsilon<0$). It is a good regularization; however, it smears the holomorphic anomaly, which makes it hard to separate the correction to the generators from a conformal anomaly. Moreover, we have seen that the bare generators mix internal momenta with external ones. Therefore, the regularized generator $\bar S_{2\to1}$, for example, must act on an amplitude where the external momenta are treated in the same way as the internal ones (and therefore can carry momentum in the $-2\epsilon$ directions).} \begin{figure}[t] \epsfxsize=7cm \centerline{\epsfbox{subMHV.eps}} \caption{\small\small\textsl{An MHV sub-diagram inside a larger CSW graph. The total off-shellness entering the sub-diagram is zero (\ref{constraint}). For generic off-shell shifts of the "external" particles this sub-amplitude is finite. } }\label{MHVfig} \end{figure} In this section we will introduce an apparently new regularization scheme which we call {\it sub MHV regularization}.
The advantages of this regularization scheme are that it is ``holomorphic anomaly friendly" and that the external momenta are treated in the same way as the internal ones. It is important to keep in mind that IR safe quantities should exhibit the symmetries of the theory -- in the case at hand superconformal symmetry -- however we can have perfectly suitable regulators which break part of the symmetry when considering intermediate regulator-dependent quantities\footnote{E.g. lattice regularization typically breaks most of the symmetries of the continuous theory and yet it is the regularization which (almost always) leads to the most reliable and rigorous results.}. This is certainly the case for our proposal: we suggest a regulator which preserves the superconformal symmetry generated by $\bar S$; however the conjugate symmetry generated by $S$ (and therefore also special conformal transformations) will not be a symmetry of the regulated S-matrix. As for Lorentz invariance, the CSW procedure picks up a particular null momentum $\chi$ which can be thought of as a choice of light-cone gauge \cite{axial}. The regularized S-matrix depends on $\chi$ and the Lorentz generators also rotate this vector. Similarly we could design a conjugate regularization which would preserve the symmetry spanned by $S$. Assuming our proposed regularization to be a good regulator, this implies that IR safe (regulator-independent) quantities will be invariant under both $S$ and $\bar S$ (and therefore also $K$). We saw before that in order to go from trees to loops it was quite useful to generalize scattering amplitudes to their tilded counterparts where external legs are put off-shell by means of the CSW prescription. These off-shell amplitudes appear quite naturally when we consider a sub-diagram inside a larger CSW expansion (see figure \ref{MHVfig}).
In this sub-diagram each ``external" leg is characterized by a null momentum $l_i$ and an off-shell momentum $L_i=l_i+\delta y_i\ \chi$ such that \begin{equation} \sum_{j=1}^n \delta y_j =0 \la{constraint}~, \end{equation} where $n$ is the number of ``external" legs. For generic values of the off-shell shifts ($\delta y_j$) the sub-amplitudes obtained in this way are finite. This leads us to suggest a \textit{sub MHV regularization} of scattering amplitudes: we replace any external (on-shell) momentum $l_i$ by an off-shell momentum with a small mass $m_i$ such that\footnote{The mass $m_i$ is taken small with respect to the amplitude Lorentz invariants.} \begin{equation}\la{Reg} l_i\quad\longrightarrow\quad L_i=l_i+\delta y_i\ \chi~,\qquad\delta y_i={m_i^2\over 2l_i\cdot\chi}~, \end{equation} where $\chi$ is an arbitrary null momentum and (\ref{constraint}) is imposed. The regulated amplitude is then given by the corresponding CSW sum where the external off-shell momenta are treated on the same footing as internal ones. Our proposal for the regulated S-matrix is just the same as considered before (\ref{CSWloop}). The only difference is that now we use it to generate slightly off-shell (\ref{Reg}) processes. That is, like the internal $J$'s, all external $J$'s are functions of the on-shell superspace coordinate $\Lambda_i$ and the off-shellness parameter $y_i$. Notice that (\ref{constraint}) automatically leaves the total momentum undeformed.
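As a quick consistency check (an elementary consequence of (\ref{Reg}), spelled out here for clarity), the shift indeed endows the external leg with the small mass $m_i$: since $l_i^2=\chi^2=0$,
\begin{equation}\nonumber
L_i^2=\left(l_i+\delta y_i\,\chi\right)^2=2\,\delta y_i\, l_i\cdot\chi={m_i^2\over 2l_i\cdot\chi}\,2\, l_i\cdot\chi=m_i^2~.
\end{equation}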
Furthermore, it can be implemented at the level of the tree level functional $\exp \widetilde{\cal A}^{(0)\rm MHV}_c[J,\chi] $ because the propagator inserting operator $\mathcal{L}$ always acts on legs with opposite $y$'s and thus does not spoil this condition.\footnote{We expect that up to sub-leading terms in the regulator, this regularization is equivalent to the more conventional regularization in which the external particles are given a small mass $m_i$.} It is instructive to understand what the sources of divergence in the one loop MHV amplitude (\ref{oneloopcut}) are and why these are regulated by (\ref{Reg}). For generic values of $y$, and after factoring out the tree level amplitude, the dLIPS integral in (\ref{oneloopcut}) computes the discontinuity of a one-loop amplitude in a multi-particle channel. The result is finite, unless $P_{L,y}$ is equal to a linear combination of two momenta adjacent to the cut (these are $k_i$, $k_{i+m-1}$, $k_{i+m}$ and $k_{i-1}$). Near such a point $y_\star$, the dLIPS integral behaves as $\log(y-y_\star)$. It will lead to a divergence of the $y$-integral only if $y_\star=0$, where the $\log$ singularity coincides with the simple pole in the measure. That is the case for $m=1$ and $m=2$ only\footnote{Other than that, the simple pole at $y=0$ (with an $i0$ prescription) can lead to a divergence only if $y=0$ is a limit of integration. That is the case for $m=1$ only. We conclude that before regularization, the only divergent integrals in the sum (\ref{oneloopcut}) are the ones with $m=1$ and $m=2$.}. In both cases the divergences come from the region of the $y$-integration near $y=0$ where the integrand behaves as $ {\log(y-i0)\over y+i0}\,. $ For $m=1$, the point $y=0$ is also the limit of integration (this means that the divergences from both sides of the pole do not cancel each other and therefore the $m=1$ term diverges more severely than the $m=2$ contribution).
For $m=2$, the point $y=0$ is not the limit of integration; however, the contour of integration is trapped between the $\log$ and the pole singularities. After regularization (\ref{Reg}), the simple pole and the log singularity are separated by $\delta y_i$ for $m=1$ and by $\delta y_i+\delta y_{i+1}$ for $m=2$. Moreover, for $m=1$ the location of the simple pole differs from the lower limit of integration by $\delta y_i$. The resulting integrals (with $i0$ prescription) are therefore finite. In what follows, we will assume sub MHV regularization to be a good regularization.\footnote{We plan to consider this regularization in greater detail elsewhere.} We will now show that the superconformal generator $\bar S$ can be easily deformed in such a way that it is still a symmetry of the regularized S-matrix. The derivation of the regulated generators follows exactly the same steps as in the previous formal section. The only difference is that now the external momenta also carry an off-shell component parametrized by $\delta y_i$ (\ref{Reg}). Again, since sub MHV regularization leaves the MHV vertices untouched, the holomorphic anomalies are also unchanged. The resulting regularized forms of the generators $\bar S_{1\to 1}$ and $\bar S_{2\to 1}$ are almost identical to their on-shell counterparts and read \beqa\la{Reggen} (\bar{{S}}_{{1\to 1}})_{Reg}&=&-\sum_{s=\pm}s\int d^{4|4}\Lambda\,dy\, \eta \, {\rm Tr}\[\bar\partial J(\widetilde\Lambda^s_y) \check{J}(\widetilde\Lambda^s_y)\] \\ (\bar S_{2\to1})_{Reg}&=& \pi{\sum_{s,s_1,s_2=\pm}}\!\!\!\!\!'~s\int d^{4|4}\Lambda d^4\eta'd\alpha\ dy\ \bar\lambda\eta'\\ &\times&\int dy'\[{i\over y'+y+i0}-{i\over y'-s_1s_2i0}\] {\rm Tr}\[J(\widetilde\Lambda^s_y)\check J(\widetilde\Lambda^{s_1}_{1,y'+y})\check J(\widetilde\Lambda^{s_2}_{2,-y'})\]~. \nonumber \eeqa When acting with $(\bar S_{2\to1})_{Reg}$ on the regularized amplitude a new loop is formed. In that loop, $y$ is the off-shell regulator.
For any non-zero value of $y$, we do not have the cancellation obtained in the formal discussion around (\ref{intygen}). Instead, $(\bar S_{2\to1})_{Reg}$ is connected to the amplitude by the difference between two off-shell propagators. As we try to remove the regulator, these propagators become more and more concentrated around the on-shell point. The generator $\bar S_{3\to 0}$ does not involve external legs and therefore stands as it is (\ref{S30fin}) after regularization; the off-shell generator $\bar S_{1\to 2}$ is given in (\ref{offS12}). We conclude that the formal structure obtained in the previous section survives regularization. It would be very interesting to repeat our analysis, both formal and regularized, for the dual conformal symmetry. We expect to be able to prove, under the same assumptions as in this paper, dual conformal invariance of scattering amplitudes at any loop order. Furthermore, it would be very interesting to understand how these two symmetries combine into a Yangian at loop level. Finally, an obvious question is of course to what extent we can use all these symmetries for computational purposes. We plan to address these issues in a separate publication. \section*{Acknowledgements} We would like to thank N.Beisert, N.Gromov, J.Henn, V.Kazakov, T.McLoughlin, J.Penedones, J.Plefka and D. Skinner for interesting discussions. We would especially like to thank F.Cachazo for many enlightening discussions, suggestions and for referring us to many relevant references. The research of AS has been supported in part by the Province of Ontario through ERA grant ER 06-02-293. PV thanks the Perimeter Institute for warm hospitality during the concluding part of this work.
\section*{Introduction} Higher education institutions are nationally assessed in a periodic manner across the globe; examples include the Research Excellence Framework~\cite{REF} in the United Kingdom, the Excellenzinitiative~\cite{Excellen} in Germany and Star Metrics~\cite{StarMetrics} in the United States, and tremendous effort has been put into maximising research output, as assessment outcomes often have a direct financial impact on revenue~\cite{geuna2006university,leru2012}. Bibliometrics are commonly used for this kind of performance evaluation~\cite{barabasi2012publishing,wang2013quantifying,deville2014career}, and the volume of grant income is generally seen as a good indicator of performance. While many studies have examined the collaboration patterns originating from publication information~\cite{moody2004structure,newman2004best,newman2004coauthorship,wagner2005network,glanzel2005analysing,guimera2005team,sonnenwald2007scientific}, little is known about the characteristics of project collaborations supported by research funding, which is undoubtedly a type of research output in its own right, but also the origin of other research outputs. The availability of funding is often subject to direct and indirect constraints arising from internal research strategy and different levels of policy set out by the funding bodies and ultimately the government, manifesting in different emphases on both research areas and modes of collaboration, and this potentially influences the way we form a project team. We have already seen examples of adaptive changes in our collaboration practices; for instance, research in the science and engineering sector is said to be increasingly inter-organisational~\cite{jones2008multi}. In addition, there are different theories on the factors that may affect the establishment of a collaboration and how well a research team operates.
Elite universities were recognised as catalysts for facilitating large scale multi-partner research collaborations~\cite{jones2008multi}, and multidisciplinary collaborations were found to have higher potential to foster research outcomes~\cite{cummings2005collaborative}. As a result, the setup of a project consortium for a grant application might require considerable strategic planning, as with whom and how we collaborate can potentially impact the outcome of a bid, and we are yet to fully understand the mechanics of success. In order to shed light on the relations between funding landscapes and scientific collaborations, here we examined over 43,000 collaborative projects funded between 1985 and 2013 by the Engineering and Physical Sciences Research Council (EPSRC), the government body in the United Kingdom that provides funding to universities to undertake research and postgraduate degrees in engineering and the physical sciences, including mathematics, chemistry, materials science, energy, information and communications technology, and innovative manufacturing. For each year, we constructed two different types of collaboration networks in which the nodes are investigators and their affiliations respectively, and an edge represents a funded project partnership between two nodes. We applied a novel network-based approach to analyse the local and global interlinkage in these networks; the former was performed by calculating the effective size~\cite{ronald1992structural,latora2013social} of individual nodes, which gauges the connectivity in the neighbourhood of a node. As for the global level, we calculated the rich-club coefficient~\cite{zhou2004rich, colizza2006detecting} of the network and characterised the members of such a core structure using a novel profile technique~\cite{ma2014rich}.
In addition, we explored how these patterns evolved over time with the availability of funding and how they correlated with research performance~\cite{citation,hirsch2005index, hirsch2007does}. Our results allow us to gain an insight into how changes in the funding landscape shaped the way we form research partnerships, providing a case study that is highly reflective of other countries in the European Union and possibly other developed countries worldwide. \section*{Results} \subsection*{Changes in the funding landscape} During the period of study we found that the overall funding increased steadily over time, until it peaked in 2009 (Fig.~\ref{fig:grant}a). The number of grants fluctuated over time with a general decline after 2001 (Fig.~\ref{fig:grant}b) and, in essence, there was an obvious trend of fewer grants but of much larger monetary values (Fig.~\ref{fig:grant}c). This coincided with the emergence of larger research teams (see Supplementary Fig. S1 online) and the timing of this tied in with the EPSRC's initiative on developing larger specialist units in the UK, such as establishing Doctoral Training Centres in selected universities~\cite{EPSRC2010}. Investigators associated with a grant were classified into Principal Investigators (PIs), Co-Investigators (CIs) and Other Investigators (OIs). There were a total of 13,275 PIs from 201 different affiliations (out of a total of 1834), and the average number of grants per PI was 3.25. About half of the grants were associated with one or more CIs and/or OIs, and there was a noticeable rise in the average numbers of investigators since 2000, which is in line with the observed increase in the typical number of collaborators in scientific publications~\cite{grossman1995portion,newman2004best}. However, we did not observe the same degree of widening participation in affiliations, as the average number of affiliations associated with a grant only marginally increased (see Supplementary Fig. S1 online).
\begin{figure}[!htb] \centering \includegraphics[width=10cm]{Figure1.pdf} \caption{\textbf{The funding landscape over the last three decades.} (a). The total amount of funding awarded by EPSRC showed a general tendency to increase, with a peak in 2009, and then a decrease. \emph{Actual} refers to the actual value on record while \emph{adjusted} refers to the value after the adjustment made with reference to the Consumer Prices Index. (b). The total number of grants awarded by EPSRC peaked in 2001. (c). The average amount of funding per grant continued to rise over time. (d). Overall distribution of funding among PIs and affiliations. Individual awardees (PIs or affiliations) were sorted in descending order of their total funding, and the percentage of funding was plotted against the corresponding percentage of awardees. The dotted line denotes where true equality lies and both cases showed strong inequality. (e). Distribution of funding consistently showed a high degree of inequality over time, in both cases of PIs and affiliations.} \label{fig:grant} \end{figure} The way in which funding has been awarded was highly skewed, as we found that in both cases of PIs and affiliations about half of the available funding was awarded to the top 8\% (Fig.~\ref{fig:grant}d); though the distribution of funding among the remaining PIs showed a much greater homogeneity. On the contrary, we observed over 90\% of all the funding was awarded just to the top 20\% of the affiliations, suggesting a high level of focussed funding in selected places. Furthermore, we referred to the Gini coefficient~\cite{RePEc:oxp:obooks:9780198281931} in order to measure how the distribution of funding over a population deviates from a perfectly uniform distribution, with values $G=0$ and $G=1$ denoting maximum equality and inequality respectively. Overall, we observed substantial inequality in both cases of PIs and affiliations of PIs. 
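The Gini coefficient can be computed directly from the sorted list of awarded sums. The following is a minimal sketch (the function name and toy values are ours, purely for illustration):

```python
def gini(values):
    """Gini coefficient of a non-negative sample.

    G = 0 means perfect equality (everyone receives the same amount);
    G approaches 1 as one awardee captures everything.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Rank-weighted formula: G = 2*sum_i(i*x_i)/(n*sum_i x_i) - (n+1)/n,
    # with ranks i = 1..n over the sorted sample.
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

# A perfectly uniform distribution gives G = 0, while a single awardee
# taking everything gives the maximal value (n-1)/n for a sample of size n.
print(gini([1, 1, 1, 1]))    # 0.0
print(gini([0, 0, 0, 100]))  # 0.75
```

Applied per year to the awarded sums of PIs (or of affiliations), this yields the inequality curves discussed above.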
By examining the Gini coefficient over time, we found that the level of inequality intensified as the coefficient became closer to 1 (Fig. \ref{fig:grant}e), and this was particularly the case among the affiliations. \subsection*{Network brokerage} We examined the two constructed collaboration networks in which nodes are investigators and affiliations, respectively, and an edge represents a funded partnership between two nodes, and found that network properties evolved over time depending on the total available funding (see Supplementary Fig. S2 and Fig. S3 online). The investigator network appeared to be sparsely connected and consisted of a large number of disjoint parts, which means that collaborations were largely localised to small groups of investigators. In contrast, the affiliation network was found to be well connected with a well-defined giant component. To further examine this noticeable level of interconnectedness in the affiliation network, we examined the extent of local cohesion between affiliations and their partners in the form of brokerage~\cite{ronald1992structural,latora2013social}; a broker is said to occupy an advantageous location in the network for detecting and developing opportunities through its connections to non-overlapping clusters. This was performed by computing the \emph{normalised effective size} $\zeta_i$ of the neighbourhood of each affiliation $i$ in the collaboration network. Such a quantity ranges from 0 to 1. It takes its smallest value when $i$ is part of a clique (i.e. a fully cohesive structure), while it is equal to 1 when $i$ is the centre of a star, and there is no link between any of its partners. Generally, the larger the value of $\zeta_i$, the less connected the neighbourhood of $i$ is, and consequently, the higher the \emph{brokerage opportunities} of affiliation $i$.
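For an unweighted network, Burt's effective size of node $i$ reduces to its degree minus the average number of links each partner has to the other partners; dividing by the degree gives the normalised quantity $\zeta_i$ used here. A minimal sketch (the adjacency structure and node labels are illustrative, not our actual data):

```python
def normalised_effective_size(adj, i):
    """Burt's effective size of node i divided by its degree.

    adj: dict mapping each node to the set of its neighbours.
    Returns 1 when i is the centre of a star (no links among its
    partners) and its minimum, 1/degree, when i sits inside a clique.
    """
    neighbours = adj[i]
    n = len(neighbours)
    if n == 0:
        return 0.0
    # t = number of edges among the neighbours of i (each counted once)
    t = sum(1 for u in neighbours for v in adj[u] if v in neighbours) // 2
    effective = n - 2.0 * t / n  # Burt's effective size, unweighted case
    return effective / n

# Star: the hub brokers between otherwise unconnected partners.
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(normalised_effective_size(star, 0))    # 1.0

# Clique of four: every partner already reaches every other partner.
clique = {u: {v for v in range(4) if v != u} for u in range(4)}
print(normalised_effective_size(clique, 0))  # 1/3
```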
Overall, we found that the effective size of affiliations increases with the total funding (Fig.~\ref{fig:richcore}a), with the top-funded universities occupying brokerage positions between otherwise disconnected affiliations, potentially playing a key role in developing new access to information and opportunities. \begin{figure}[!h] \centering \includegraphics[width=17.5cm]{Figure2.pdf} \caption{\textbf{The brokers of the affiliation network.} (a). The level of brokerage, as measured by the yearly average effective size $\zeta$, is reported for the 50 top-funded affiliations ranked in descending order of their total funding. (b). The level of brokerage of Imperial and Heriot-Watt exhibited opposite trends with the rise of funding. (c). Connectivity between Imperial College and its partners when the total sum was \pounds250 million and over \pounds1000 million as indicated. Both non-connecting and inter-connecting neighbours of Imperial College are shown. (d). Same as (c) but for Heriot-Watt University and its partners. (e). Funding obtained by Imperial and Heriot-Watt continued to rise linearly with the total funding available.} \label{fig:richcore} \end{figure} In order to gain a better insight into the factors that may contribute towards success in terms of awarded grants, we examined how the brokerage behaviour of the leading affiliations has changed with the total funding over the last three decades. We found that the rise in the volume of the total funding, which coincided with the emergence of investment on focussed research, has resulted in further centralisation of the local networks of the top 10 funded affiliations, as reflected by the increase of $\zeta$ with funding.
This is clearly shown in the case of Imperial College (Fig.~\ref{fig:richcore}b), as it increasingly acted as an information broker between otherwise unconnected neighbours (Fig.~\ref{fig:richcore}c); most likely thanks to its ability to facilitate new research partnerships~\cite{jones2009burden}. On the contrary, the lesser funded affiliations were found to exhibit the opposite trend, as shown in the case of Heriot-Watt University (Fig.~\ref{fig:richcore}b), as their $\zeta$ decreased with the volume of funding, mainly due to the rise of (about 24\%) interconnecting neighbours (Fig.~\ref{fig:richcore}d). Overall, we also observed a higher level of interconnectedness among the neighbours of Heriot-Watt University. The two different collaboration strategies we observed in response to changes in funding policy appeared to be equally effective ways of securing research grants as, among the well funded affiliations we examined, both the top 10 and those less funded have directly benefited from their respective strategies, as their awarded sums continued to rise with the available funds (Fig.~\ref{fig:richcore}e and Supplementary Fig. S4 online). \subsection*{The rich core of leading universities} We examined the level of global interconnectedness among the leading PIs and affiliations, which correspond to the nodes with the largest degree, the so-called hubs, by investigating the presence of a rich-club phenomenon in the networks~\cite{zhou2004rich, colizza2006detecting}. In order to do so, we evaluated the number of edges $E(k)$ among the $N(k)$ nodes of degree larger than a given value $k$, and we computed the rich-club coefficient, $\phi_{norm}(k)$, normalised with respect to the case of random graphs with the same node degrees as in the original networks. A value $\phi_{norm}(k)>1$ for large $k$ is an indication that the hubs of a network have organised into a rich club.
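The unnormalised rich-club coefficient is simply the edge density among the nodes of degree larger than $k$, i.e. $\phi(k)=2E(k)/[N(k)(N(k)-1)]$. A minimal sketch (the normalisation against degree-preserving random graphs, obtained e.g. by repeated double-edge swaps, is noted in a comment but not implemented; the toy network is illustrative):

```python
def rich_club_phi(adj, k):
    """Unnormalised rich-club coefficient phi(k) = 2 E(k) / [N(k)(N(k)-1)]:
    the density of edges among the N(k) nodes whose degree exceeds k."""
    rich = {u for u in adj if len(adj[u]) > k}
    n = len(rich)
    if n < 2:
        return 0.0
    e = sum(1 for u in rich for v in adj[u] if v in rich) // 2
    return 2.0 * e / (n * (n - 1))

# phi_norm(k) in the text is phi(k) divided by its average over random
# graphs with the same degree sequence (degree-preserving rewiring);
# phi_norm(k) > 1 at large k then signals a rich club.

# Toy network: hubs 0 and 1 link to each other and to all three leaves.
adj = {0: {1, 2, 3, 4}, 1: {0, 2, 3, 4},
       2: {0, 1}, 3: {0, 1}, 4: {0, 1}}
print(rich_club_phi(adj, 2))  # hubs 0 and 1 (degree 4 > 2) -> phi = 1.0
```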
Indeed, we observed this phenomenon in the affiliation network, but not in the case of investigators (Fig.~\ref{fig:RCncore}a). This suggests that the affiliations with a high number of grants tended preferentially to collaborate with one another, forming tightly interconnected communities, more than we would expect in a random case. We applied a core profiling method~\cite{ma2014rich} to extract the members of the rich club for each of the 28 years and obtained a total of 45 affiliations (see Supplementary Table~S1 online). Imperial College and University of Oxford were among those in the so-called rich core during the entire period of study, closely followed by other frequently found members which were only absent during the early years and have remained in the rich core ever since. Figure~\ref{fig:RCncore}b shows the rich core in 2010, which comprised predominantly the leading affiliations. In particular, the top funded affiliations were not only well connected with the rest of the core, but also showed strong interlinkages among themselves. \begin{figure}[!htb] \centering \includegraphics[width=14cm]{Figure3.pdf} \caption{\textbf{Elitism in scientific collaborations. }(a). The affiliation network exhibited a rich-club phenomenon, with $\phi_{norm}(k) > 1$ for high degree nodes, while such a behaviour was not present in the PI network (the year 2010 is shown; other years give the same results). (b). The rich core for year 2010 showing the cohesive community formed by the high degree nodes. Each edge is weighted by the frequency of the partnerships found in that year. (c). The relative size of the rich core of the affiliation network, in general, shrank over time while the density of partnerships among the affiliations in the rich core continued to rise. } \label{fig:RCncore} \end{figure} We also found that the relative size of the rich core decreased gradually over time (Fig.~\ref{fig:RCncore}c).
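The core-profiling idea of~\cite{ma2014rich} can be sketched as follows: rank nodes by decreasing degree, count each node's links to higher-ranked nodes, and place the core boundary where this count peaks. A minimal sketch under that reading (tie-breaking between equal degrees and the toy network are our own conventions):

```python
def rich_core(adj):
    """Extract a rich core by degree-based profiling: rank nodes by
    decreasing degree, compute k_plus[r] = number of links node at rank
    r has to higher-ranked nodes, and cut the core at the rank where
    k_plus is maximal."""
    ranked = sorted(adj, key=lambda u: len(adj[u]), reverse=True)
    position = {u: r for r, u in enumerate(ranked)}
    k_plus = [sum(1 for v in adj[u] if position[v] < r)
              for r, u in enumerate(ranked)]
    boundary = max(range(len(ranked)), key=lambda r: k_plus[r])
    return set(ranked[:boundary + 1])

# Toy network: a triangle of hubs, each hub with one pendant leaf.
adj = {0: {1, 2, 3}, 1: {0, 2, 4}, 2: {0, 1, 5},
       3: {0}, 4: {1}, 5: {2}}
print(rich_core(adj))  # the three mutually linked hubs 0, 1, 2
```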
The rich core initially contained over 25\% of the affiliations and the rise in focussed funding since the start of the millennium has caused the rich core to further shrink and to be maintained at a relatively small size since (4--8\%), and this agrees with existing theory that stress in a system is often manifested in a reduction in core size~\cite{csermely2013structure,liu2011controllability}. The emergence of focussed funding, however, has fostered higher interlinkage among the affiliations in the rich core, as the density of links was initially found to be low and gradually increased during the same period. The density was found to be the highest when the rich core was the smallest. The presence of a rich core of the leading affiliations is a reflection of close collaborations among their researchers, and the increased level of interlinkage among these affiliations demonstrates a tendency for their researchers to collaborate with their peers in comparably reputable places, suggesting that homophily~\cite{mcpherson2001birds,kossinets2009origins} with respect to affiliation excellence is one of the driving forces of scientific collaborations. \subsection*{Research performance} Could being in the rich core be a key to success in research? We investigated this in terms of the inputs versus outputs of its member affiliations by examining the amount of funding they received and their overall research performance~\cite{jones2008multi,deville2014career}. Firstly, we found that the awarded sum of an affiliation over the 28 years linearly correlated with the number of times, $N_{core}$, the affiliation was present in the rich core (Fig.~\ref{fig:performa}a and Supplementary Fig. S5 online), with the exception of Imperial College, University of Cambridge and University of Manchester, which have accumulated funding beyond what is expected from a linear relation between $N_{core}$ and total funding.
In addition, we referred to the number of research areas of an affiliation as a measure of the {\em breadth} of its research, and we observed that this quantity generally declines as $N_{core}$ decreases. We also considered the relative citation score~\cite{citation} and the h-index~\cite{hirsch2005index, hirsch2007does} of each affiliation to capture the {\em quantity} and the {\em depth} of research respectively; the former gauges the volume of citations among researchers in an affiliation, and the latter additionally captures quality. We found that the citation score increased linearly with $N_{core}$ (Fig.~\ref{fig:performa}b). The absence of an obvious deviation from the average trend here suggests the capital required to develop a full breadth of research areas is nontrivial; while the top 3 affiliations appeared to have successfully done so by scoring in all areas, they did not receive exceedingly high citation scores. As for the h-index, again we observed that Imperial College, University of Cambridge and University of Manchester outperformed the rest of the universities in this metric, showing a marked deviation from the linear behaviour (Fig.~\ref{fig:performa}c), much like that observed for the awarded sum. Their outstanding funding profiles appeared to have enabled them to develop the depth of research which has led to the generation of high quality papers; University of Cambridge in particular achieved the highest average h-index. \begin{figure}[!htb] \centering \includegraphics[width=17cm]{Figure4.pdf} \caption{\textbf{Funding and research performance of the rich core.} (a). The total funding received by each university over the entire 28 years is plotted as a function of the number of times an affiliation was found in the rich core, $N_{core}$. Node size is proportional to the total funding of the affiliation, while the colour denotes the number of active research areas which have received a score.
Dark and light blue lines denote the mean and one standard deviation from the mean respectively. (b). The citation scores of the most frequently found affiliations increased with their appearance in the rich core. (c). Similarly, the appearance in the core is indicative of an affiliation's h-index.} \label{fig:performa} \end{figure} \section*{Discussion} There is today an open question on the kind of collaboration mechanics that underpins success in science, and up to now, such an issue has been mainly explored by characterising the structural patterns of collaborations based on how scientists publish together~\cite{moody2004structure,newman2004best,newman2004coauthorship}. Our approach complements those studies based on coauthorship networks, addressing the question by looking at how scientists collaborate in order to obtain funding for their research. Our results demonstrate that there has been a striking change in the funding landscape in the UK in the last three decades, with the largest engineering and physical sciences funding agency providing focussed research investment to selected places. This has also shaped the anatomy of collaborations among investigators and among universities in terms of both the local cohesiveness and global interlinkage among them. In particular, the network analysis of successful project partnerships among affiliations has shown that elite universities have been able to capitalise on this competitive and rapidly changing environment: they have extensively become the brokers of the network, orchestrating partnerships from diverse sources. This is likely to have arisen due to their ability to develop a broad range of expertise and extend partnerships with specialist entities~\cite{jones2009burden}. In addition, elite universities formed the very centre of a rich core through their strong reliance on one another.
Indeed, our findings show the presence of an elite group of affiliations over-attracting resources, and many in the research community would find such inequality in funding distribution highly controversial~\cite{fortin2013big}. However, the effect appears not to be entirely adverse: arguably, those elite affiliations which have successfully become very rich seem to have repaid this in both the variety of their research and, unmistakably, in its quality. The prominent role of elite universities is often said to have a wider impact in driving science forward~\cite{jones2008multi,clauset2015systematic}. Here, this is demonstrated by the fact that other well-funded places, which might have less capacity to expand, have consistently benefited from their association with the elites through the rich core. This agrees with previous claims on the importance of elite universities in facilitating multi-partner collaborations~\cite{jones2008multi}. Here, we revealed how collaboration networks constantly undergo adaptive organisation in response to funding, resulting in an evident shift in network configuration across different scales. These shifts in collaboration patterns may potentially mean a change in the nature of these partnerships, altering their strengths (e.g. frequency and financial value). This line of research remains unexplored, and hence, a comprehensive study of the weighted version of these networks would provide a leap in understanding the magnitude of these variations in patterns. Furthermore, studying other attributes that may also covary with funding, such as the degree of interdisciplinary research, would allow us to gain insight into how mechanisms~\cite{pan2012world,sun2013social} for formulating partnerships have evolved, as the impact of funding is highly likely to be multidimensional. Funding is an essential part of research as it provides the much-sought-after resources to support related activities.
Applying for funding is highly competitive~\cite{Cimini2014}, as suggested by the low average success rate, and in many countries the economic recession observed in recent years has led to considerable pressure on research budgets. A better understanding of successful project formation therefore provides insight into how to thrive in such a challenging landscape. \section*{Acknowledgements} AM acknowledges partial financial support from ImpactQM. We thank Ursula Martin, David Arrowsmith, Pietro Panzarasa and Mario Chavez for their comments and discussions.
\section{Introduction} The unusual title of this paper is meant to indicate the emotional aspects of being a coronal loops modeler. Whenever we start to feel confident that the problem is solved, new observations come along and force us to modify our thinking. It can be frustrating, but it is also very rewarding when we gain improved physical understanding of this fascinating phenomenon. The coronal loops problem is an outstanding example of how the greatest progress is made when observation and theory work together, one feeding off of the other. The loops problem can be thought of as a puzzle, with the pieces of the puzzle being observational constraints. The goal is to fit the pieces together into a physically consistent picture (there may be more than one solution). Five key pieces are: (1) density, (2) lifetime, (3) thermal distribution, (4) flows, and (5) intensity profile. For the density, we are particularly interested in how the observed density compares with the density that is expected for static equilibrium. Thermal distribution refers to whether and how the temperature varies over the loop cross section, i.e., across the loop axis, and intensity profile refers to the variation of brightness along the loop axis. For many years, our picture of coronal loops was relatively simple and the puzzle seemed easy to solve. The observational constraints came primarily from soft X-ray (SXR) observations of hot ($> 2$ MK) loops. These loops were found to be long-lived \citep[e.g.,][]{pk95jk} and to satisfy static equilibrium scaling laws \citep[e.g.,][]{rtv78jk,kt96jk}. The most straightforward explanation was that these loops are heated in a steady fashion. The picture became much more confused with new observations of warm ($\sim 1$ MK) loops made in the EUV by {\it SOHO}/EIT and {\it TRACE}. These warm loops can appear to occupy the same volume as hot loops---though not necessarily at the same time---but their properties are fundamentally different. 
Besides the obvious temperature difference, EUV loops are over dense relative to static equilibrium, they have super-hydrostatic scale heights, and they have exceptionally flat temperature profiles when measured with the filter ratio technique \citep[e.g.,][]{aetal99jk,letal99jk,asa01jk,wwm03jk}. These loops are clearly not in static equilibrium. This paper describes the logical progression that has been followed by the loops community in attempting to explain the observations, especially those of the more challenging EUV loops. We represent this progression with the flowchart in Figure 1, which is in many ways a recent history of how the discipline has evolved. \begin{figure}[!ht] \centering\includegraphics[width=10.0cm, angle=-90, trim = 50 130 20 0]{klimchuk_fig1.eps} \caption{Flow chart showing the logical progression used to infer the physical nature and heating of a coronal loop. Some boxes indicate observational questions and others indicate conclusions that are drawn from the answers.} \end{figure} \section{Density} Suppose we wish to investigate an observed loop. We can start by asking the question ``Is the loop over dense relative to static equilibrium?'' Given the observed temperature and length, static equilibrium theory predicts a unique density. We want to know whether the observed density is larger than this value. If it is not, and if the loop does not evolve rapidly, then steady heating is a possible, though not unique, explanation. This was essentially where things stood through the {\it Yohkoh} mission in the 1990s. As we have already indicated, however, most EUV loops are indeed over dense. This is indicated in Figure 2 \citep[reproduced from ][]{k06jk}, which reveals the physics of what is going on. The figure shows the ratio of the radiative to conductive cooling times plotted against temperature for a large sample of loops. The warm loops were observed by {\it TRACE}, and the hot loops were observed by {\it Yohkoh}/SXT.
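As an aside on this step, the unique static-equilibrium density can be estimated by inverting the scaling law of \citep{rtv78jk}; the sketch below assumes the standard cgs form of that law with hypothetical input values, so it is an illustration rather than the analysis performed here.

```python
# Hedged sketch: inverting the RTV static-equilibrium scaling law,
# T_max ~ 1.4e3 * (p * L)**(1/3) in cgs units (T in K, pressure p in
# dyn cm^-2, halflength L in cm), to get the unique density expected
# for an observed temperature and halflength.  The factor of 2 assumes
# a fully ionised hydrogen plasma; the inputs below are hypothetical.
K_B = 1.38e-16  # Boltzmann constant, erg/K

def static_equilibrium_density(T, L):
    p = (T / 1.4e3) ** 3 / L       # pressure implied by the RTV law
    return p / (2.0 * K_B * T)     # electron density, cm^-3

# hypothetical warm loop: T = 1 MK, halflength 5e9 cm
n_static = static_equilibrium_density(1.0e6, 5.0e9)  # ~2.6e8 cm^-3
```

Comparing an observed density against this prediction answers the flowchart's first question for the loop in hand.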
The ratio of the cooling times is determined from the measured temperature, density, and length according to $\tau_{rad}/\tau_{cond} = T^4/(n^2L^2)$, although the power of temperature depends weakly on the radiative loss function and is slightly different in different temperature regimes. \begin{figure}[!ht] \centering\includegraphics[width=11.0cm]{klimchuk_fig2.ps} \caption{Ratio of radiative to conductive cooling times versus temperature for many observed loops. Solid line is the cooling track of an impulsively heated loop strand simulation. From Klimchuk (2006).} \end{figure} Coronal energy losses from radiation and thermal conduction are comparable for loops that are in static equilibrium \citep{vau79jk}, and such loops would fall along a horizontal line near 0 in the plot. Loops that lie above the line are under dense, and loops that lie below the line are over dense. The observed loops follow a clear trend ranging from hot and under dense in the upper-right to warm and over dense in the lower-left. Note, however, that the densities used for the cooling time ratios were measured using emission measures and loop diameters and assuming a filling factor of unity, $n = [EM / (df)]^{1/2}$, so they are lower limits. Smaller filling factors would shift the points downward in the plot. Thus, the hot loops could be in static equilibrium, and the warm loops could be even more over dense than indicated. It is abundantly clear that static equilibrium cannot explain warm loops. An explanation relying on steady end-to-end flows is also not viable \citep{pkm04jk}. Thermal nonequilibrium is a possibility that we return to later. The most promising explanation for the observed over densities of warm loops is impulsive heating. This can also explain the under densities of hot loops, if they are real. The solid curve that fits the points so well in Figure 2 is the evolutionary track from a 1D hydrodynamic simulation of a loop that has been heated impulsively by a nanoflare.
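The two relations just quoted can be combined in a few lines; the sketch below uses hypothetical numbers and drops constant prefactors, so only the trends (not the absolute values) are meaningful.

```python
import math

# Illustrative scalings from the text: the cooling-time ratio
# tau_rad/tau_cond ~ T^4 / (n^2 L^2) (constant prefactors dropped) and
# the emission-measure density n = [EM/(d f)]^(1/2).  All numbers are
# hypothetical.
def density_from_em(EM, d, f=1.0):
    # f = 1 (filling factor of unity) gives a lower limit on n
    return math.sqrt(EM / (d * f))

def cooling_time_ratio(T, n, L):
    return T ** 4 / (n ** 2 * L ** 2)

EM, d, L, T = 1.0e27, 1.0e8, 5.0e9, 1.0e6
n_unity = density_from_em(EM, d, f=1.0)
n_small_f = density_from_em(EM, d, f=0.1)
# a smaller filling factor raises the inferred density and lowers the
# ratio, shifting the point downward in Figure 2
```

This makes explicit why sub-unity filling factors can only move loops toward (or deeper into) the over-dense regime.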
Cooling begins at the upper-right end of the track and progresses downward and to the left. The early stages are dominated by thermal conduction and are characterized by under densities, while the late stages are dominated by radiation and are characterized by over densities. The ability of nanoflare models to reproduce the observed densities of loops is well established \citep{k02jk,wwh02jk,wwm03jk,ck04jk,k06jk}. \section{Lifetime} If a loop is heated impulsively, then we might expect it to exist for approximately a cooling time (combining the effects of conduction and radiation), as determined from the observed temperature, density, and length. This is the next question in the flowchart. If the lifetime and cooling time are similar, we can conclude that the loop is a monolithic structure that heats and cools as a homogeneous unit, with uniform temperature over the cross section. Observations show that this is not the case, however. The vast majority of loops live longer than a cooling time and sometimes much longer \citep[e.g.,][]{wws03jk,lkm07jk}. If these loops contain cooling plasma, then they cannot be monolithic. Rather, they must be bundles of thin, unresolved strands that are heated impulsively at different times. Although each strand cools rapidly, the composite bundle appears to evolve slowly \citep[e.g.,][]{wwm03jk}. Multi-stranded bundles of this type can explain a number of observed properties of warm loops: over density, long lifetime, super-hydrostatic scale height, and flat temperature profile. They can also explain the observed under density of hot loops. Realizing this was a time of rejoicing in the modeling community! But.... \section{Thermal Distribution} An important prediction of the multi-strand model is that loops should have multi-thermal cross-sections. Since the unresolved strands are heated at different times, they will be in different stages of cooling and out of phase with each other. A critical question became ``Are loops multi-thermal?''
An intense debate ensued and continues to this day. Some have answered with a resounding yes \citep[the ``Schmelz camp,'' e.g.,][]{sm06jk} and others have answered with a resounding no \citep[the ``Aschwanden camp,'' e.g.,][]{an05jk}. As we now demonstrate, however, it is not especially useful to phrase the multi-thermal question in a way that requires a binary response. Imagine that a loop bundle is heated by a ``storm'' of nanoflares that occur randomly over a finite window in time. It is easy to see that the range of strand temperatures that are present at any given moment depends on the duration of the storm. For a very short storm, all of the strands will be heated at about the same time and will cool together. The instantaneous thermal distribution of the loop will be narrow. In contrast, a storm that lasts longer than a cooling time will produce a much wider thermal distribution. Some strands will have just been heated and will be very hot; others will have cooled to intermediate temperatures; and still others will have had time to cool to much lower temperatures. The flowchart in Figure 1 therefore asks the more meaningful question ``How multi-thermal is the loop?'' A broad thermal distribution implies a long-duration nanoflare storm, and a narrow distribution implies a short-duration storm. It now appears that the multi-thermal and isothermal camps may both be correct. The duration of the nanoflare storm also determines the lifetime of the loop bundle, so the thermal width and lifetime will be closely related. Figure 3 shows results for simulated nanoflare storms lasting 500, 2500, and 5000 s, top to bottom. The left column has light curves (intensity versus time) as would be observed in the 195 channel of {\it TRACE}, with sensitivity peaking near 1 MK. The right column has emission measure distributions, EM($T$) = $T \times \!$DEM($T$) cm$^{-5}$, at the time of peak 195 intensity.
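The effect of storm duration on the composite light curve can be illustrated with a toy superposition model; the triangular strand profile below is a stand-in for a cooling strand (it is not the EBTEL output), and all numbers are hypothetical.

```python
import random

# Toy model of a nanoflare storm: the composite light curve is a sum of
# identical single-strand light curves with random start times inside
# the storm window.  Longer storms should give longer-lived composites.
def strand_lightcurve(t, t0, rise=500.0, decay=1500.0):
    dt = t - t0               # hypothetical triangular pulse
    if dt < 0.0:
        return 0.0
    if dt < rise:
        return dt / rise
    return max(0.0, 1.0 - (dt - rise) / decay)

def storm_lightcurve(times, storm_duration, n_strands=200, seed=1):
    rng = random.Random(seed)
    starts = [rng.uniform(0.0, storm_duration) for _ in range(n_strands)]
    return [sum(strand_lightcurve(t, t0) for t0 in starts) for t in times]

def fwhm(times, flux):
    half = max(flux) / 2.0
    above = [t for t, f in zip(times, flux) if f >= half]
    return above[-1] - above[0]

times = [10.0 * i for i in range(1200)]  # 0--12000 s grid
fwhm_short = fwhm(times, storm_lightcurve(times, 500.0))
fwhm_long = fwhm(times, storm_lightcurve(times, 5000.0))
# the longer storm yields the broader composite light curve
```

The same bookkeeping, applied to strand temperatures instead of intensities, would broaden the instantaneous thermal distribution with storm duration, as in Figure 3.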
Only the coronal part of the loop is included; the transition region footpoints are neglected. All three of the storms are comprised of identical nanoflares that have triangular heating profiles lasting 500 s. They were simulated with our ``0D'' hydro code EBTEL and are the same as example 4 in \citet{kpc08jk}. In actuality, Figure 3 was produced with only one simulation. The light curves and EM distributions were constructed using sliding time windows that correspond to the storm durations. \begin{figure}[!ht] \centering\includegraphics[width=13.cm, trim = 25 0 0 0]{klimchuk_fig3.ps} \caption{Simulated 195 light curves (left) and emission measure distributions (right) for nanoflare storms lasting 500, 2500, and 5000 s, top to bottom. The instantaneous EM distributions are from the times of peak 195 intensity (t = 3958, 4705, 5445 s for the three storms).} \end{figure} As expected, both the lifetime and thermal width increase as the storms get longer. The full widths at half maximum (FWHM) of the light curves are 1098, 2579, and 5008 sec for the 500, 2500, and 5000 s storms, respectively. The FWHM of the EM distributions are 0.13, 0.23, and 0.36 in $\log T$. The full widths at the 1\% levels are 0.24, 0.62, and 1.14 in $\log T$. It may seem surprising at first that the EM distributions do not all reach the same maximum temperature, since the nanoflares are the same in all three storms. This is {\it not} because individual strands are reheated multiple times in the longer storms; all strands are heated only once. Rather, it is because the distributions are from the time of peak 195 intensity. In the short duration storm, all of the strands have cooled appreciably by the time the peak intensity is reached. Had we chosen to plot the distribution at an earlier time, it would still have been narrow, but it would be shifted to higher temperature. \citet{wetal08jk} have made Gaussian fits to EM distributions observed by {\it Hinode}/EIS.
They find a typical central temperature of 1.4 MK and a typical Gaussian half width of 0.3 MK. This corresponds to a FWHM in $\log T$ of roughly 0.24, which by Figure 3 implies a 195 lifetime of roughly 2500 s. Although Warren et al. did not measure the lifetimes of their loops, this value is consistent with the small number of 195 lifetimes that have been reported for other cases \citep{ww05jk,uww06jk}. To our knowledge, there does not exist a single published example where both the thermal width and lifetime have been measured for the same loop. Making such measurements should be a high priority. It is a crucial consistency check of the nanoflare concept. Density measurements should be made at the same time. \section{Very Hot and Very Faint Plasma} The nanoflare model makes two observational predictions in addition to the ones we have already discussed. First, it predicts that small amounts of very hot ($> 5$ MK) plasma should be present. Figure 4 shows two examples of long (infinite) duration storms, one comprised of relatively weak nanoflares and the other comprised of nanoflares that are ten times stronger. The solid curve in each case is the EM distribution for the whole loop, while the dashed and dot-dashed curves are the contributions from the coronal section and footpoints, respectively. We see that the EM of the hottest plasma is 1.5-2 orders of magnitude smaller than that of the most prevalent plasma. The reason is two-fold. First, the initial cooling after the nanoflare has occurred is very rapid, so the hottest plasma persists for a relatively brief period. Second, the densities are low during this early phase, because chromospheric evaporation has only just begun to fill the loop strand with plasma. 
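The width conversion quoted above for the Warren et al. fits can be checked numerically; the sketch below assumes that the ``Gaussian half width'' is the Gaussian $\sigma$ (our interpretation, not stated explicitly in the text), which gives a value close to the quoted 0.24.

```python
import math

# Convert a Gaussian width in MK to a FWHM in log10 T about the central
# temperature, assuming (our assumption) that the quoted "half width"
# is the Gaussian sigma: FWHM in T is 2*sqrt(2 ln 2)*sigma.
def fwhm_log10T(T0_MK, sigma_MK):
    fwhm_T = 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma_MK
    return (math.log10(T0_MK + fwhm_T / 2.0)
            - math.log10(T0_MK - fwhm_T / 2.0))

width = fwhm_log10T(1.4, 0.3)  # ~0.22, near the quoted ~0.24
```

The small residual difference from 0.24 is within the precision of the quoted numbers and of our interpretation of the reported width.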
\begin{figure}[!ht] \psfig{figure=klimchuk_fig4.ps,bbllx=80pt,bblly=540pt,bburx=470pt,bbury=720pt,width=11cm} \caption{Emission measure distributions for long (infinite) duration nanoflare storms comprised of weak (left) and strong (right) nanoflares: coronal section (dashed), transition region footpoints (dot-dashed), and whole loop (solid).} \end{figure} As a consequence of the small emission measures, the intensities of hot spectral lines and channels are predicted to be very faint. The intensities may be reduced still further by ionization nonequilibrium effects \citep{bc06jk,ro08jk}. Low levels of super-hot emission have nonetheless been detected recently by the {\it CORONAS}, {\it RHESSI}, and {\it Hinode} missions \citep{zetal06jk,m09jk,pk09jk,ketal09jk}. In particular, EM distributions inferred from multi-filter XRT observations of two active regions suggest that the distributions may have two distinct components \citep{setal09jk,retal09jk}. The implications are considerable, since this would rule out a simple power-law energy distribution for the responsible nanoflares. Detailed modeling is now underway. \section{Flows} High-speed upflows that reach or exceed 100 km s$^{-1}$ are predicted during the early evaporation phase of a nanoflare event. Depending on the geometry of the observations, these can produce highly blue-shifted emission. The emission will be very faint, however, for the reasons given above. A composite spectral line profile from a bundle of unresolved strands will be dominated by the weakly red-shifted emission produced during the much longer radiative cooling phase, when the plasma slowly drains and condenses back onto the chromosphere. Signatures of evaporation take the form of blue wing enhancements on this main component \citep{pk06jk}. They can be very subtle, and they only appear in lines that are well tuned to the temperature of the evaporating plasma. 
Significantly hotter and cooler lines are not expected to show evidence of evaporation. We have performed sit-and-stare observations with {\it Hinode}/EIS and find blue wing asymmetries in Fe XVII ($T \approx 5$ MK) similar to those predicted by our nanoflare models \citep{pk06jk}. The measurements are very challenging, however, due to the faint nature of the line. \citet{hetal08jk} also report blue-wing asymmetries that are suggestive of nanoflares. \section{Thermal Nonequilibrium} We have worked our way down the flowchart of Figure 1 and concluded that the observed properties of many loops can be explained by storms of nanoflares occurring within bundles of unresolved strands. There remains the possibility, indicated in the upper right, that many loops can also be explained by thermal nonequilibrium. We consider this possibility now. Thermal nonequilibrium is a fascinating phenomenon in which dynamic behavior is produced by perfectly steady heating \citep{ak91jk,ketal01jk,mph04jk,kak06jk}. No equilibrium exists if the steady heating is sufficiently concentrated near the loop footpoints. Instead, the loop goes through periodic convulsions as it searches for a nonexistent equilibrium. Cold, dense condensations form, slide down the loop leg, and later reform in a cycle that repeats with periods of several tens of minutes to several hours. We have recently explored whether thermal nonequilibrium can explain the observed properties of EUV loops \citep{kk09jk}. We first considered a monolithic loop, which we simulated with our 1D hydro code ARGOS \citep{skaetal99jk}. The code uses adaptive mesh refinement, which is critical for resolving the thin transition regions that exist on either side of the dynamic condensations. We imposed a steady heating that decreases exponentially with distance from both footpoints. The heating scale length of 5 Mm is one-fifteenth of the loop halflength.
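The imposed footpoint-concentrated heating can be written down explicitly; the functional form and normalisation below are our assumptions, consistent with the description of an exponential fall-off (scale length 5 Mm, halflength 75 Mm) and a scaled right-leg amplitude.

```python
import math

# Hedged sketch of footpoint-concentrated steady heating: exponential
# fall-off from each footpoint with scale length lam; 'asym' scales the
# right-leg amplitude (the run described in the text used 0.75).  The
# exact functional form and the normalisation E0 are assumptions.
def heating_rate(s, L_half=75.0, lam=5.0, asym=0.75, E0=1.0):
    L = 2.0 * L_half  # full loop length, Mm; s from the left footpoint
    return E0 * (math.exp(-s / lam) + asym * math.exp(-(L - s) / lam))
```

With these parameters the apex heating is negligible compared to the footpoints, which is precisely the regime in which no equilibrium exists and condensations form.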
We introduced a small asymmetry by making the amplitude of the heating on the right side only 75\% of that on the left. Figure 5 shows the evolution of temperature, density, and intensity as would be observed in the 171 channel of {\it TRACE}. These are averages over the upper 80\% of the loop. The behavior is typical of the several cycles that we simulated. The loop is visible in 171 for only about 1000 s. This is a factor of 2-4 shorter than observed lifetimes \citep{ww05jk,uww06jk}. A more serious problem is the distribution of emission along the loop (the intensity profile), which disagrees dramatically with observations. Figure 6 shows 171 intensity and temperature as a function of position along the loop at $t = 5000$ s. The emission is strongly concentrated in transition region layers at the loop footpoints ($s = 45$ and $203$ Mm) and to either side of a cold condensation at $s = 163$ Mm. In stark contrast, most observed 171 loops have a fairly uniform brightness along their length. \begin{figure}[!ht] \centering\includegraphics[width=10.cm]{klimchuk_fig5.ps} \caption{Evolution of temperature (dashed), density (dotted), and 171 intensity (solid) for a monolithic loop undergoing thermal nonequilibrium. All quantities are normalized. The steady heating is 75\% as strong in the right leg as in the left.} \end{figure} \begin{figure}[!ht] \centering\includegraphics[width=10.cm]{klimchuk_fig6.ps} \caption{Temperature (dashed, MK) and 171 intensity (solid, arbitrary units) as a function of position along the loop at $t = 5000$ s in the simulation of Figure 5.} \end{figure} The maximum temperature in the loop is 4.4 MK and occurs before the condensation forms. We performed another simulation with a reduced heating rate that has a maximum temperature of only 1.8 MK. Neither the light curve nor the intensity profile is consistent with observations.
We conclude that EUV loops are not monolithic structures undergoing thermal nonequilibrium, at least not under conditions that lead to cold condensations. We note, however, that \citet{metal08jk} report a different type of nonequilibrium behavior. One prominent loop in their 3D simulation of an active region exhibits a cooling and heating cycle, but the temperature never drops to the point where a condensation forms. The reasons for the differing behavior are yet to be understood. Whether the loop has properties matching observed loops (density, lifetime, thermal width) is unknown. The simulation of Figures 5 and 6 may nonetheless have some relevance to the Sun. The condensation falls onto the right footpoint at $t = 5600$ s. Falling condensations have been seen in the C IV channel of {\it TRACE} \citep{s01jk}. They are relatively rare, however, and occur in only a small fraction of loops. We next considered the possibility of a multi-stranded loop bundle in which the individual strands undergo thermal nonequilibrium in an out-of-phase fashion. To approximate such a loop, we performed two additional simulations, similar to the first but with heating imbalances of 50\% and 90\% instead of 75\%. We then averaged all three simulations in time and added them together along with their mirror images to form a composite loop. The resulting 171 intensity profile is shown in Figure 7. It is reasonably uniform except for the very intense spikes at the footpoints (note the logarithmic scale). A more realistic loop bundle with a wider variety of heating imbalances would be even more uniform. We tentatively conclude that the intensity profile is consistent with observations, although we are concerned because bright 171 moss emission is generally observed at the footpoints of SXR loops rather than the footpoints of EUV loops. 
\begin{figure}[!ht] \centering\includegraphics[width=10.cm]{klimchuk_fig7.ps} \caption{Logarithm of 171 intensity as a function of position along a composite loop bundle comprised of individual strands undergoing thermal nonequilibrium. See text for details.} \end{figure} \begin{figure}[!ht] \centering\includegraphics[width=10.cm]{klimchuk_fig8.ps} \caption{Temperature as a function of position along the composite loop bundle of Figure 7. Solid is the actual mean temperature, while dashed and dotted are the temperatures inferred from {\it TRACE} and {\it Yohkoh}/SXT filter ratios, respectively.} \end{figure} Figure 8 shows three temperature profiles for the composite loop: the average of the actual temperatures in the individual strands (solid), the temperature that would be inferred from {\it TRACE} 171/195 intensity ratios (dashed), and the temperature that would be inferred from {\it Yohkoh}/SXT Al12/AlMg intensity ratios (dotted). They are different because {\it Yohkoh}/SXT is more sensitive to the hotter plasma and {\it TRACE} is more sensitive to the warmer plasma. Notice that the profiles are very flat. This is a well-known property of EUV loops. We have also inferred densities from the simulated {\it TRACE} observations using exactly the same procedure that was used for the real loops in Figure 2. The model loop is over dense by a factor of 23, consistent with observed values. We have repeated this exercise using reduced heating in the strands and find that the over density is a factor of 10 in this case. Although there is some reason for encouragement, it is not obvious that bundles of unresolved strands undergoing thermal nonequilibrium can explain all the salient properties of observed EUV loops. Reproducing the lifetimes is especially challenging. The condensations in the different strands must be sufficiently out of phase to give a uniform intensity profile, but they cannot be so out of phase as to produce a composite loop lifetime longer than 1 hour.
Even if the phasing is correct for one condensation cycle, it is likely to be incorrect for subsequent cycles because the interval between condensations depends on both the amplitude of the heating and its left-right imbalance. The imbalance determines the location where the condensation forms, and it must be appreciably different among the strands in order to get a uniform intensity profile. Note that the results shown in Figures 7 and 8 make use of temporal averages over complete cycles, and therefore the lifetime of the equivalent loop bundle is effectively infinite. \section{Conclusions} We have described how a combination of observational and modeling work has led to the conclusion that warm ($\sim 1$ MK) EUV loops can be explained as bundles of unresolved strands that are heated by storms of nanoflares. Static equilibrium is out of the question. The observed lifetimes and thermal distributions of the plasma indicate that the storms last for typically 2-4$\times10^3$ s. Additional support for this picture is provided by the shapes of hot spectral line profiles and by the observation that line intensities peak at slightly later times for lines of progressively cooler temperature \citep{uwb09jk}. Also, there is now good evidence for very hot and very faint plasma, as predicted by the nanoflare models. It is not clear whether most hot ($> 2$ MK) SXR loops are also heated by nanoflares. If they are, the storms must be long duration in order to explain the observed lifetimes. The loops would then be expected to have co-spatial EUV counterparts, and it is not obvious that they do. One possibility is that the frequency of nanoflares is much higher in long-lived SXR loops, so that the plasma in a strand never cools to EUV temperatures before being reheated. It is worth noting that virtually all of the proposed coronal heating mechanisms predict impulsive energy release on individual magnetic field lines \citep{k06jk}. 
We considered the possibility that EUV loops can be explained by thermal nonequilibrium. We concluded that this is not a viable mechanism for monolithic loops under the conditions we have considered---although the results of \citet{metal08jk} are very intriguing---but that it may have application in multi-stranded bundles. Serious questions remain that require further investigation. We close by pointing out that distinct loops are only one component of the corona and that the diffuse component contributes at least as much emission. It is not generally appreciated that the intensity of EUV and SXR loops is typically much less than that of the background (of order 10-40\%). The diffuse component may also be made up of individual strands, but we must explain why the strands have a higher concentration in loops. \acknowledgements I am very pleased to acknowledge useful discussions with many people, but I especially wish to thank Spiros Patsourakos, Harry Warren, and Judy Karpen, my collaborator on the thermal nonequilibrium study that is being published here for the first time. I benefited greatly from participation in the Coronal Loops Workshop Series and the International Space Science Institute team led by Susanna Parenti. Financial support came primarily from the NASA Living With a Star program.
\section*{Supplemental Material} \setcounter{equation}{0} \subsection{Solution of the Low equation} The Low equation with $V_{p,k}$ neglected reads \begin{align} T_{p,k} = \frac{g(p)\, g^*(k)}{h_k-E_B } + \int_0^\infty \frac{q^2dq}{(2\pi)^3} \frac{T_{p,q}T_{k,q}^*}{h_k+i\varepsilon-h_q} \,. \label{eq:appT} \end{align} As the only $T$-independent term of this equation is of a separable form, it is reasonable to consider also a separable ansatz: \begin{align} T_{p,k} = t_k\, g(p)\, g^*(k) \,. \end{align} Then the Low equation becomes \begin{align} t_k = \frac{1}{h_k - E_B} + \int_0^\infty \frac{q^2dq}{(2\pi)^3} \frac{|t_q|^2|g(q)|^2}{h_k+i\varepsilon-h_q} \,. \end{align} We also define the analytic continuation of $t_k$ as \begin{align} \tau(W) := \frac{1}{W - E_B} + \int_0^\infty \frac{q^2dq}{(2\pi)^3} \frac{|t_q|^2|g(q)|^2}{W-h_q} \,. \end{align} Now, it is better to work with $\tau^{-1}(W)$ because we have \begin{align} \mathrm{Im}\, \tau^{-1}(h_p+i\varepsilon) = \frac{\pi\, p\,\mu}{(2\pi)^3} |g(p)|^2 \theta(h_p)\,, \end{align} as a consequence of unitarity, where we have used $\tau(h_p+i\varepsilon)=t_p$. $\tau^{-1}(W)$ is a real analytic function with possible singularities residing only on the real axis. These singularities correspond to zeros of $\tau(W)$, which are known as the Castillejo-Dalitz-Dyson (CDD) zeros~\cite{Castillejo:1955ed}. As Weinberg did, we will look for a solution without such zeros (for a discussion of the impact of CDD zeros on the compositeness, see Refs.~\cite{Baru:2010ww,Hanhart:2011jz,Kang:2016jxw}). Then a twice-subtracted dispersive relation gives \begin{align} \tau^{-1}(W) =\, (W-E_B) + (W-E_B)^2 \int_0^\infty \frac{q^2dq}{(2\pi)^3} \frac{|g(q)|^2}{(h_q-E_B)^2(h_q-W)} \,, \end{align} where we have used \begin{align} \tau^{-1}(E_B) = 0 \,,\qquad \tau^{-1'}(E_B) = 1 \,. 
\end{align} Finally, one gets \begin{align} \tau(W) = \frac{1}{1-F(W)}\,\frac{1}{W-E_B}, \end{align} with \begin{align} F(W) &:= (W-E_B) \int_0^\infty \frac{q^2dq}{(2\pi)^3} \frac{|g(q)|^2}{(h_q-E_B)^2(W-h_q)} \,. \end{align} One can further work out a dispersive representation for the function $F(W)$. For that, it is convenient to define \begin{align} F_1(W):= \frac{\ln \left[1-F(W)\right]}{W-E_B}\,. \end{align} When $E>0$, one has \begin{align} \mathrm{Im}\,F_1(E+i\varepsilon) = - \frac{\delta_B(E)}{E-E_B} \, \end{align} with $\delta_B$ the phase of the on-shell $T$-matrix $T_{k,k}$ as given by the solution of Eq.~\eqref{eq:appT}; see Eq.~\eqref{eq:argdelta} in the main text. With the convention $\delta_B(0)=0$ that we have adopted, one can show that $F_1(E)$ is real for $E\leq0$ by noticing that $F(E)$ is monotonically increasing in the same region and $F(0)\leq0$. So it is a real analytic function, allowing for a standard dispersion relation, which gives \begin{align} F_1(W) = -\frac{1}{\pi}\int_0^\infty dE \frac{\delta_B(E)}{(E-W)(E-E_B)} \,. \end{align} Then one gets \begin{align} F(W) = 1 - \exp\left(\frac{W-E_B}{\pi}\int_0^\infty dE \frac{-\delta_B(E)}{(E-W)(E-E_B)}\right) \,. \end{align} \end{document}
\section{INTRODUCTION} The implementation of a collision mitigation or collision avoidance system requires in general the computation of a measure of criticality in order to assess the current traffic situation as well as its evolution in the short-term future. There are many criticality measures available, for example time-to-go (TTG) or time-to-collision (TTC), see e. g. \cite{jansson2008framework},\cite{Muntzinger_et_al_09}, or the brake threat number, see e. g. \cite{stellet2015uncertainty}. All those measures are based on models, of varying degrees of complexity, of touching or penetrating the boundary of the potentially colliding object, e. g. both the TTC $=-{x(0)\over \dot x(0)}$ (for a constant velocity model) and the brake threat number $a_{req}= -{\dot x^2(0)\over 2 x(0)}$ are based on the one-dimensional collision event $x(t) = 0$. In this paper we focus on this underlying collision event -- the boundary penetration -- in a fully probabilistic manner, i. e. we propose a new approach to compute the collision probability for automotive applications. The use of this collision probability for decision making in collision mitigation or avoidance systems is not the subject of this investigation. There are two different approaches to computing a collision probability for automotive applications that are known to the authors: \begin{enumerate} \item probability of the spatial overlap of the host vehicle with the colliding vehicle's probability distribution, see e. g. \cite{jansson2005collision}, \cite{lambert2008fast}, and \item probability of penetrating a boundary around the host vehicle, see \cite{nordlund2008probabilistic}. \end{enumerate} There is currently no satisfying way to compute an automotive collision probability over a time period: there is a heuristic proposal to pick the maximal collision probability over that period as the collision probability for that time period \cite{jansson2008framework}, and there are calculations relying on strong assumptions (e. g. constant velocity models) that directly compute the collision probability over a time period \cite{nordlund2008probabilistic}. On the other hand, in the field of collision risk modeling for air traffic scenarios (for a recent overview see \cite{mitici2018mathematical}) a mathematical result on multi-dimensional stochastic processes \cite{belyaev1968number} has been applied to air-traffic-specific setups \cite{blom2002conflict},\cite{blom2003collision}. This theory allows for the computation of a collision probability over an extended period of time. Another approach, based on a result for a one-dimensional stochastic process with particular dynamics, has been suggested in \cite{prandini2000probabilistic}. In the following, based on the formalism in \cite{belyaev1968number}, we will derive an expression for the upper bound of the probability of penetrating a boundary around the host vehicle in a time period $\Delta T = [t_1, t_2]$. This will be the result of the temporal integration of an upper bound of the probability {\it rate}, for which we derive a general expression valid for arbitrary prediction models including process noise. Inclusion of process noise is crucial for collision avoidance systems since it makes it possible to encode the uncertainty in the relative motion of the host and the colliding vehicle. This uncertainty is particularly relevant for predictions over several seconds, where it is unknown whether the colliding vehicle maintains its motion, accelerates or slows down, or whether the host vehicle driver perceives the risk and slows down, for example. The basis of our derivations is the set of time-dependent distributions $p_t( x, y, \dot x, \dot y, \dots), t\in\Delta T$. Those distributions characterize a non-stationary vector stochastic process that represents the predicted relative state $\xi^-(t)$ of the colliding vehicle.
The stochastic process can be the result of a dynamical system \eq{ \dot\xi = f\left( \xi, u(t), \nu(t), t \right) } whose right-hand side $f$ can depend upon the state $\xi$, a time-dependent control input $u(t)$, process noise $\nu(t)$, and time $t$. In the remainder of this paper the time dependence of $\xi^-(t)$ and its elements will be suppressed, however the temporal dependence of probability distributions will be indicated by $p \rightarrow p_t$ where appropriate. The main contributions of this paper are as follows: \begin{itemize} \item[-] incorporation of the mathematical theory of level crossings of multi-dimensional stochastic processes developed in \cite{belyaev1968number} into the computation of a collision probability for automotive applications and derivation of upper bounds of the collision probability rate as well as the collision probability based on the entry intensity from \cite{belyaev1968number} \item[-] derivation of approximate formulae for the collision probability rate \item[-] numerical study with special emphasis on the accuracy of this approximation as well as on the upper bound and its saturation \item[-] proposal of an adaptive method to efficiently sample the collision probability rate \item[-] application of the computation of collision probability to a probabilistic treatment of extended objects by representative salient points of a vehicle's geometry \item[-] identification of the distribution of the collision probability rate as the distribution of time-to-collision \end{itemize} In the next section, Monte-Carlo simulations of two collision scenarios are performed and it is shown that the result is naturally represented by a collision probability rate.
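A single prediction step of such a stochastic dynamical system can be sketched as follows. This is a minimal illustration only, assuming a linear one-dimensional constant-velocity model with white-noise acceleration as process noise (the model actually used later in the paper is richer); the function name and all numerical values are ours:

```python
def predict_cv(mean, cov, dt, q):
    """One prediction step for the 1D constant-velocity state (x, xdot):
    mean' = F mean, cov' = F cov F^T + Q, with F = [[1, dt], [0, 1]] and Q
    the standard discretized white-noise-acceleration covariance of intensity q."""
    x, v = mean
    new_mean = (x + dt * v, v)
    pxx, pxv = cov[0]
    pvx, pvv = cov[1]
    # F cov F^T written out by hand for the 2x2 case.
    nxx = pxx + dt * (pxv + pvx) + dt * dt * pvv
    nxv = pxv + dt * pvv
    nvx = pvx + dt * pvv
    nvv = pvv
    # Process noise inflates the covariance at every step.
    new_cov = ((nxx + q * dt**3 / 3.0, nxv + q * dt**2 / 2.0),
               (nvx + q * dt**2 / 2.0, nvv + q * dt))
    return new_mean, new_cov

# Predict 1 s ahead in ten 0.1 s steps from x = 10 m, xdot = -5 m/s.
mean, cov = (10.0, -5.0), ((0.5, 0.0), (0.0, 0.25))
for _ in range(10):
    mean, cov = predict_cv(mean, cov, 0.1, q=1.0)
```

The growth of the predicted covariance with the prediction horizon is exactly the mechanism by which process noise encodes the increasing uncertainty of the relative motion.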
\section{The collision probability ground truth: large-scale Monte-Carlo simulations} \label{section_Collision_probability_Monte_Carlo_simulation} Here, we want to investigate two examples of possible collision scenarios, one where the target vehicle is currently in front of the host vehicle and one where it is on the front right side. In order to obtain ground-truth data for the future collision probability, Monte-Carlo simulations are performed. The target vehicle on a possibly colliding path with the host vehicle is modeled by the state vector $\xi = ( x\ y\ \dot x\ \dot y\ \ddot x\ \ddot y)^\top$ and the dynamical system as specified in appendix \ref{app_vehicleModel}. The target vehicle is chosen to be detected by a radar sensor mounted at the middle of the front bumper of the host vehicle with standard radar measurements also specified in app. \ref{app_vehicleModel}. Note however that this state vector as well as the dynamical system specified in appendix \ref{app_vehicleModel} constitute just an example -- the central results in section \ref{sec_derivation_collision_prob} hold for general non-stationary as well as non-Gaussian stochastic processes. In particular, the absence of assumptions on the stationarity of the stochastic process means that processes derived from more general dynamical systems -- including systems with explicit time dependence or time-dependent control inputs $u(t)$ -- are covered. The starting point for an individual simulation is a sample point in state space $\xi_i^-$ where the target vehicle is some distance away from the host vehicle -- either directly in front or coming from the right side, see fig. \ref{fig_Monte_Carlo_sample_trajectory}. This sample point is drawn from a multivariate distribution characterized by its mean vector and covariance matrix, which is usually the output of a probabilistic filter that takes into account the history of all previous sensor measurements that have been associated with this object.
Instead of arbitrarily picking specific values for this {\it initial} covariance matrix we take its values from steady state\footnote{Strictly speaking there is no steady state at those points since the system is non-linear and the relative speed is not zero. Nevertheless the solution of the Riccati equation is still representative if the filter settles within a smaller time period than the time period in which the state changes significantly.} at this mean vector using the discrete algebraic Riccati equation. An instance $\xi_i^-$ of an initial state of the target vehicle is drawn as a sample of ${\mathcal N}(\xi^-; \mu_{\xi}^-, P^-_{\infty})$. This state is predicted using the stochastic differential equation \Ref{eq_diff_equation} until it crosses the host vehicle boundary or a certain time limit is exceeded. Hence a collision event is defined as the crossing of the target vehicle's path with the host vehicle boundary. The time until the crossing is recorded and a new simulation with a new sample of initial conditions is started. Examples of colliding trajectories starting from an initial position in front of the host vehicle are depicted in fig. \ref{fig_Monte_Carlo_sample_trajectory}. \begin{figure*}[ht] \centering \null\hfill \subfloat[The target is coming from the front $\left(\mu_x,\mu_y\right)=\left(10,0\right)m$. The parameters for the time-dependent input as specified in app. \ref{app_vehicleModel} are $b_1 = -0.2 m s^{-3}, b_2 = -0.3 m s^{-3}, \omega = 0.5 s^{-1}$.]{\includegraphics[width = .9 \columnwidth]{fig1_a.pdf}} \hfill \subfloat[The target is coming from the front right $\left(\mu_x,\mu_y\right)=\left(10,10\right)m$. The parameters for the time-dependent input as specified in app.
\ref{app_vehicleModel} are $b_1 = -0.4 m s^{-3}, b_2 = -0.5 m s^{-3}, \omega = 0.5 s^{-1}$.]{\includegraphics[width = .9 \columnwidth]{fig1_b.pdf}} \hfill\null \caption{Samples of simulated colliding trajectories for vehicles initially coming from the front (a) and from the front right (b) side.} \label{fig_Monte_Carlo_sample_trajectory} \end{figure*} We have performed simulations of $N_{traj} = 3\cdot 10^6$ trajectories for the two starting points. The result is represented by a histogram of the number of collisions that occur within a histogram bin, i. e. time interval, with respect to time. Hence simulating colliding trajectories naturally leads to a collision probability {\it rate}. An example is given in fig. \ref{fig_Monte_Carlo_collision_probability_rate} where the bins are normalized by the total number of trajectories $N_{traj}$ and the chosen bin width of $dt=0.05s$ to obtain a collision probability rate. In addition, the collision probability rate integrated by simple midpoint quadrature from 0 to time $t$ is shown. In this example the probability of collision with the target vehicle exceeds $60\%$ within the first $6s$. The asymptotic value of the collision probability as $t \rightarrow \infty$ indicates the overall probability of collision over all times. \begin{figure}[ht] \centering \includegraphics[width = .9 \columnwidth]{fig2_0_7.pdf} \caption{Collision probability rate as a function of time for $\left(\mu_x,\mu_y\right)=\left(10,0\right)m$ based upon $N_{traj} = 3\cdot 10^6$ trajectories. Also shown is the collision probability obtained by integrating over time.} \label{fig_Monte_Carlo_collision_probability_rate} \end{figure} The main contribution of this paper will be to derive formulae to obtain bounds of the collision probability and collision probability rate for a general dynamical system. 
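The simulation loop described in this section can be sketched as follows. This is a deliberately reduced sketch: a one-dimensional constant-velocity model perturbed by white-noise acceleration stands in for the full model of app. \ref{app_vehicleModel}, the boundary is the single point $x=0$, and all parameter values are illustrative:

```python
import math
import random

random.seed(0)

def first_crossing(x0, v0, sigma_a=1.0, dt=0.02, t_max=10.0):
    """Euler-Maruyama integration of a 1D constant-velocity model with
    white-noise acceleration; returns the first time x(t) reaches 0 from
    above (the collision event), or None if no crossing occurs."""
    x, v, t = x0, v0, 0.0
    while t < t_max:
        v += sigma_a * math.sqrt(dt) * random.gauss(0.0, 1.0)
        x += v * dt
        t += dt
        if x <= 0.0:
            return t
    return None  # no collision within the simulated horizon

# Sample initial conditions, record crossing times, histogram them.
n_traj, bin_w, t_max = 2000, 0.5, 10.0
bins = [0] * int(t_max / bin_w)
for _ in range(n_traj):
    t = first_crossing(random.gauss(10.0, 1.0), random.gauss(-2.0, 0.5))
    if t is not None:
        bins[min(int(t / bin_w), len(bins) - 1)] += 1

rate = [c / (n_traj * bin_w) for c in bins]  # collision probability *rate* [1/s]
p_coll = sum(r * bin_w for r in rate)        # midpoint-style temporal integration
```

Here `rate` plays the role of the normalized histogram of fig. \ref{fig_Monte_Carlo_collision_probability_rate} and `p_coll` of its temporal integral; as in the text, the rate is the quantity that the simulation produces naturally.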
We will also show that these formulae in many cases not only provide bounds but accurate approximations by comparing the applied formulae with Monte-Carlo simulations using the specific scenarios described in this section. In the following two sections we will review existing approaches to computing a collision probability. \section{Collision probability from 2D spatial overlap} \label{section_Collision_probability_from_2D_horizontal_integration} This is the probability of the spatial overlap between the host vehicle and the colliding vehicle as proposed in \cite{jansson2005collision}, \cite{lambert2008fast}. In \cite{jansson2005collision} the variables of the relevant probability distribution are either defined by the relative two-dimensional position and relative orientation $(x, y, \psi)$, or by the distribution of the difference of independent probability distributions of global two-dimensional position and orientation as in \cite{lambert2008fast}. It is not explained how higher order derivatives necessary for prediction such as $\dot x, \dot y, ...$ have been dealt with (e. g. by marginalization). Then in \cite{jansson2005collision} the collision probability is obtained as the integral over the pdf of relative position and orientation over a collision volume $D$ \eq{ P_C(t) = \iiint_{x, y, \psi \in D} p\left( x, y, \psi \right) dx dy d\psi \label{eq_2D_integral} } which in turn is approximated as the convolution of independent probability distributions of global two-dimensional position of both host and colliding vehicle - orientation is ignored. Note that even this two-dimensional integral, i. e. the cumulative distribution function of a bivariate Gaussian, cannot be solved in closed form; however, numerical approximation schemes exist \cite{genz2004numerical}. By further ignoring the $x-y$-covariance the multivariate Gaussian decouples so that integration can be factored into one-dimensional Gaussian distributions \cite{jansson2005collision}. 
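The fully decoupled case just described can be sketched as follows: ignoring orientation and all covariances, the overlap integral reduces to a product of two one-dimensional Gaussian integrals expressible through the error function. The rectangle bounds and parameter values below are illustrative, not taken from the cited works:

```python
import math

def norm_cdf(z):
    """Standard normal CDF built from the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def overlap_probability(mu_x, sig_x, mu_y, sig_y, rect):
    """Instantaneous probability that the target's relative position lies in
    the host rectangle, with the x-y covariance ignored so the bivariate
    Gaussian factorizes into two one-dimensional integrals."""
    x_lo, x_hi, y_lo, y_hi = rect
    p_x = norm_cdf((x_hi - mu_x) / sig_x) - norm_cdf((x_lo - mu_x) / sig_x)
    p_y = norm_cdf((y_hi - mu_y) / sig_y) - norm_cdf((y_lo - mu_y) / sig_y)
    return p_x * p_y

# Host rectangle: 4.5 m long, 2 m wide, front bumper at x = 0.
host = (-4.5, 0.0, -1.0, 1.0)
print(overlap_probability(-2.0, 0.3, 0.0, 0.3, host))  # target well inside: close to 1
print(overlap_probability(10.0, 0.3, 0.0, 0.3, host))  # target far ahead: close to 0
```

This factorized form makes explicit why the quantity is an {\it instantaneous} probability: it depends only on the positional distribution at one time $t$.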
In \cite{lambert2008fast}, the collision probability is computed as the integral of the product of the two global distributions. \begin{figure}[ht] \centering \includegraphics[width = .9 \columnwidth]{fig3.pdf} \caption{Example of an instantaneous collision probability over time derived from a collision defined by spatial overlap as described in section \ref{section_Collision_probability_from_2D_horizontal_integration}. This is based on the first scenario described in sec. \ref{section_Collision_probability_Monte_Carlo_simulation} with initial condition in front of the host vehicle.} \label{fig_collision_probability_spatial} \end{figure} A problem of deriving a collision probability from 2D spatial overlap is that this approach directly yields a collision probability for a specific time, see fig. \ref{fig_collision_probability_spatial}. Hence it does not allow one to answer the question ``What is the probability of collision within the next three seconds?'' because integration of the collision probability over time does not yield a collision probability over a certain time period, as already pointed out in \cite{nordlund2008probabilistic}. In particular, time is not a random variable that can be marginalized over, and an integral over a time interval $\Delta T$: $\int_{\Delta T}P_C(t)dt$ has the dimension of time and does not constitute a probability. A heuristic proposal to solve this problem has been to pick the maximal collision probability over a time period as the collision probability for that period \cite{prandini2000probabilistic},\cite{jansson2008framework}. Another issue is that an instantaneous collision probability based on the overlap of a spatial probability distribution with the area of the host vehicle is determined by those sample trajectories whose current end points, i. e. the position at the current time, lie within the area of the host vehicle.
But this is independent of when the trajectory has crossed the host vehicle boundary; hence all end points except those exactly on the boundary (whose contribution to the two-dimensional integral is zero) correspond to a collision event in the past and are therefore too late for collision avoidance, see also fig. \ref{fig_collision_probability_spatial} for an example where the maximum of the instantaneous collision probability from spatial overlap occurs after the TTC in x-direction. Also, by only considering trajectories with current end points within the area of the host vehicle, other colliding trajectories with current end points outside the host vehicle area that have already entered and exited the boundary are unaccounted for. What we are actually interested in is the probability of the colliding object touching and/or penetrating the boundary of the host vehicle. This requires a different approach than integration over state space as in eq. \Ref{eq_2D_integral} since the integral over a lower-dimensional subspace would always be zero. Some existing approaches that consider a boundary instead of a state space volume for the computation of a collision probability are reviewed in the next section. \section{Collision probability at boundary} A probabilistic approach to computing the probability of penetrating a boundary -- instead of the probability of a spatial overlap -- has been proposed in \cite{nordlund2008probabilistic}. Their method is based on the probability density of a so-called {\it time-to-go}, which is the time to cross a straight, axis-aligned boundary assuming a constant velocity model. The derived collision probability refers to a time period and not just a time instant. It is only applicable to straight paths or combinations of piecewise straight paths and does not take into account more complex geometries such as a rectangle. It also relies on a separation of longitudinal and lateral motion.
Another limitation is that the stochastic nature of their conflict detection approach only comes from the distribution of the initial condition of their state $( x\ y\ \dot x\ \dot y)^\top$ -- process noise is not considered. Although a central result on level crossings -- namely Rice's formula, albeit in one dimension, see e. g. \cite{lindgren2012stationary} -- is cited, it is not used in its original context to compute the intensity of crossings of a scalar variable but as a technical tool to derive the quotient distribution (for time-to-go) of two correlated random variables. A somewhat complementary approach is taken in \cite{prandini2000probabilistic} for aircraft conflict detection in the sense that process noise is incorporated whereas the uncertainty of the initial condition is not. They propose two different algorithms, one for mid-range and one for short-range conflict detection. For mid-range conflict detection their measure of criticality is an instantaneous probability of conflict and is similar to the 2D spatial overlap discussed in the previous section. It is computed by a specific Monte-Carlo scheme.\\ On the other hand, their short-range conflict detection is based on the penetration of a spherical boundary around the aircraft as the criticality measure. The dynamics is a constant velocity model perturbed by Brownian motion.
Their closed-form expression for a collision probability over a time period requires additional assumptions, the strongest of which are the assumed decoupling of the 2D motion into separate 1D motions in longitudinal and lateral directions and the formulation of the collision probability $P_C$ over a time period $[0, t_f ]$ as a factorization into a probability density $p_\tau( t )$ of the minimal time to cross a threshold along a one-dimensional, longitudinal direction\footnote{Using a result on the temporal distribution of the crossing of a threshold for a one-dimensional constant velocity model perturbed by Brownian motion (sometimes referred to as Bachelier-Levy, see e. g. \cite{lerche2013boundary}).} and the distribution of a one-dimensional Wiener process $p_{Lateral}(y,t)$ along the lateral direction: \eq{ P_C(0,t_f) \approx \int_{0}^{t_f} p_\tau( t ) \int_{Lat.\ conflict\ width}p_{Lateral}(y,t) dy dt } The many strong assumptions, in particular constant velocity motion, specific Brownian noise model, and decoupling into one-dimensional motions make this approach hard to generalize. A different, semi-probabilistic approach is taken in \cite{Muntzinger_et_al_09} where the time-to-collision of a 2D constant velocity model with respect to the host vehicle's front boundary at $x=0$ is computed. This TTC is then inserted into the prediction equation to arrive at \eq{ \begin{pmatrix} x + t \dot x \cr y + t \dot y \cr \dot x \cr \dot y \end{pmatrix} \rightarrow \begin{pmatrix} 0 \cr y - {x \over \dot x} \dot y\cr \dot x \cr \dot y \end{pmatrix} } Then the second component of this vector is singled out and interpreted as a probabilistic expression. Its distribution as a function of the initial condition is determined by Monte-Carlo simulation and integrated over the host vehicle's front boundary to obtain a collision probability at the TTC. More complex geometries as well as process noise are not considered. 
The approaches discussed above are limited to constant velocity models with assumptions on the coupling of longitudinal and lateral motion; they either incorporate specific process noise or no process noise at all or exclude the uncertainty of the initial condition. Additionally, they all rely on a time-to-go or TTC as a prerequisite quantity -- either probabilistic or non-probabilistic. As we will show in the next section, such a temporal collision measure is not necessary for the computation of a collision probability. Instead, we show that a fundamental quantity to compute the collision probability for stochastic processes is the collision probability {\it rate}. The mathematical foundation for this approach was provided in \cite{belyaev1968number}. \section{Collision probability rate at boundary} \subsection{Derivation of an upper bound for the collision probability rate} \label{sec_derivation_collision_prob} We have seen that simulating colliding trajectories naturally gives us a probability rate and that a collision probability rate allows us to perform temporal integration to arrive at a collision probability for an extended period of time. An expression for the upper bound of the collision probability rate will be derived on the basis of a theorem on boundary crossings of stochastic vector processes. For the sake of clarity we restrict ourselves to one of the four straight boundaries of the host vehicle, see fig. \ref{fig_front_boundary}; extension to the other boundaries is straightforward. We start with the prediction of the pdf of a state vector that at least contains relative position and its derivative, i. e.
$\xi = ( x\ y\ \dot x\ \dot y\ \cdots)^\top$ for a two-dimensional geometry, of a colliding object from an initial condition at $t=0$ to a future time $t$ where process noise $\nu(t)$ is explicitly incorporated: \eq{ {\rm prediction:}\ p_0( x, y, \dot x, \dot y, \dots) \stackrel{t, \nu(t)}{\longmapsto} p_t( x, y, \dot x, \dot y, \dots) } Note that we do not make any assumptions about the prediction model, the noise model, or explicit temporal dependencies; hence the stochastic dynamical system that gives rise to the pdf could also explicitly depend upon time or a time-dependent control input $u(t)$. In order to cast the following expressions into a more readable format we define a probability distribution that only depends upon relative position and its derivative by marginalization (see app. \ref{app_Partitioned_Gaussian_densities} for marginalization of Gaussian densities, for example) of the predicted pdf over the other variables:\footnote{The state vector $\xi = ( x\ y\ \dot x\ \dot y\ \ddot x\ \ddot y)^\top$ specified in app. \ref{app_vehicleModel} is an obvious extension of the minimal state vector above with corresponding white noise jerk model described in eq. \Ref{eq_diff_equation} and is used as an example to illustrate the computation of collision probability rate. It is however by no means specific to the results stated in this paper.} \eq{ p_t( x, y, \dot x, \dot y ) := \!\!\!\!\!\!\!\!\int\limits_{\rm other\ var.}\!\!\!\!\!\!\!\!p_t( x, y, \dot x, \dot y, {\rm other\ var.}) d({\rm other\ var.}). } Given the pdf $p_t( x, y, \dot x, \dot y )$, what we are looking for is an expression for \eq{ {dP_C^+ \over dt }( \Gamma_{front}, t ) } i. e. the collision probability rate ${dP_C^+ \over dt }$ with dimension $[{s}^{-1}]$ at time $t$ for the front boundary $\Gamma_{front}$. The superscript $+$ is used to denote that this probability rate is referring to boundary crossings from outside to inside.
\subsubsection{An Intuitive Motivation} \label{subsub_motivation} We start with the probability of the colliding object being inside an infinitesimally thin strip at the boundary $\Gamma_{front}$ (see fig. \ref{fig_front_boundary}) \eq{ dP_C^+( \Gamma_{front}, t ) = \int\limits_{y \in I_y}\int\limits_{\dot x \leq 0}\int\limits_{\dot y \in \mathbb R} p_t( x_0, y, \dot x, \dot y )\, dx dy d\dot x d\dot y \nonumber } Here, since we are only interested in colliding trajectories, i. e. trajectories that cross the boundary from outside to inside, we do not fully marginalize over $\dot x$ but restrict the $x$-velocity to negative values at the boundary. A collision probability rate can now be obtained by dividing the unintegrated differential $dx$ by $dt$; in that way the ``flow'' of the target vehicle through the host vehicle boundary is described at $x_0$ with velocity $\dot x \leq 0$: \eq{ {dP_C^+ \over dt }( \Gamma_{front}, t ) \simeq -\!\!\!\!\!\int\limits_{y \in I_y}\int\limits_{\dot x \leq 0}\int\limits_{\dot y \in \mathbb R} p_t( x_0, y, \dot x, \dot y )\, \dot x\, dy d\dot x d\dot y\label{eq_intuitive_collision_probability_rate} } Here, since the velocity is restricted to negative values, a minus sign is required to obtain a positive rate. \subsubsection{Derivation based upon the theory of level crossings} This intuitive derivation can be amended as well as generalized in a mathematically rigorous way by invoking a result on crossings of a surface element by a stochastic vector process stated in \cite{belyaev1968number} and generalized in \cite{lindgren1980model}. First we need to set up the notations and definitions for entries and exits (level crossings) across the boundary of a region. Let $\zeta(t)$ be a continuously differentiable $n-$dimensional vector stochastic process with values ${\bf x} \in \mathbb R^n$.
The probability densities $p_t({\bf x})$ and $p_t(\bf{\dot x},\bf{x})$ exist where $\bf{\dot x} \in \mathbb R^n$ are the values of $\dot\zeta(t)$.\footnote{Further technical assumptions on the stochastic process and its probability densities apply \cite{belyaev1968number}.} Let the region $S \subset \mathbb R^n$ be bounded by the smooth surface $\partial S$ defined by the smooth function $g$ as $\partial S = \{{\bf x}: g({\bf x}) = 0\}$ and let $\Gamma \subseteq \partial S$ be a subset of that surface. Let ${\bf n}_\Gamma({\bf x})$ be the surface normal at ${\bf x}$ directed towards the interior of the region. A sample function ${\bf x}(t)$ of $\zeta(t)$ has an entry (exit) across the boundary $\Gamma$ at $t_0$ if $g({\bf x}(t)) > 0\ (g({\bf x}(t)) < 0)\ \forall t \in (t_0 - \epsilon, t_0)$ and $g({\bf x}(t)) < 0\ (g({\bf x}(t)) > 0)\ \forall t \in (t_0 , t_0 + \epsilon)$ for some $\epsilon > 0$. For a temporal interval $\Delta T = [t_1,t_2]$ the number of entries/exits across $\Gamma$ in this interval is denoted by $N^\pm(\Gamma, t_1, t_2 )$. The importance of this mathematical setup is that using the number of entries a collision probability over $\Delta T$ can be defined\footnote{This definition is motivated by the probability distribution of the maximum of a continuous process, see e. g. \cite{lindgren2012stationary}.} as \al{ P^+_C( \Gamma, t_1, t_2 ) &\!\!\!:=\!\!\!& P\left( g\left(x(t_1)\right) \geq 0, N^+(\Gamma, t_1, t_2 ) \geq 1 \right) \nonumber\\ &\!\!\!\leq\!\!\!& P\left( N^+(\Gamma, t_1, t_2 ) \geq 1\right) \label{eq_coll_prob_def_upper_bound} } i. e. the probability that the stochastic process enters the boundary in $\Delta T$ at least once with initial value outside the boundary.
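On a discrete time grid, the entry and exit counts just defined can be sketched by detecting sign changes of the boundary function $g$ between consecutive samples. This is a toy illustration only; the sample path below is invented:

```python
def count_entries_exits(g_values):
    """Count entries (g changes from + to -) and exits (g changes from - to +)
    of a sampled path, approximating N^+ and N^- on a discrete time grid."""
    entries = exits = 0
    for g_prev, g_next in zip(g_values, g_values[1:]):
        if g_prev > 0.0 >= g_next:
            entries += 1
        elif g_prev <= 0.0 < g_next:
            exits += 1
    return entries, exits

# A sampled path that dips inside the region (g < 0) twice: N^+ = 2, N^- = 2.
g = [3.0, 1.0, -0.5, -1.0, 0.5, 1.5, -0.2, 0.8, 2.0]
print(count_entries_exits(g))  # -> (2, 2)
```

The example makes concrete that a single trajectory can enter the region more than once, which is the reason the expected number of entries only bounds, rather than equals, the probability of at least one entry.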
The probability that the process is outside the boundary at initial time $t_1$ should be almost one: $P\left( g\left(x(t_1)\right) \geq 0 \right) \approx 1$ in automotive applications where a collision probability is to be computed for a time interval that begins at a time when the collision has not happened yet. The first moment of $N^+(\Gamma, t_1, t_2 )$ can be used to obtain an upper bound for $P( N^+(\Gamma, t_1, t_2 ) \geq 1 )$:\footnote{Using Markov's generalized inequality also a lower bound can be derived in terms of the first and second factorial moments \cite{belyaev1968number}.} \eq{ P( N^+(\Gamma, t_1, t_2 ) \geq 1 ) \leq {\rm E}\left\{ N^+(\Gamma, t_1, t_2 ) \right\} \label{eq_upper_bound} } This becomes obvious by writing out the expressions above: \eq{ P( N^+\!\!\geq 1 ) = \sum_{k=1}^\infty P( N^+\!\!= k ) \leq \sum_{k=0}^\infty k P( N^+\!\!= k ) = {\rm E}\left\{ N^+\!\right\} \label{eq_expected_number_inequality} } It also shows that if the probabilities for two or more entries are much smaller than for one entry then ${\rm E}\left\{ N^+(\Gamma, t_1, t_2 ) \right\}$ is not just an upper bound but a good approximation to $P( N^+(\Gamma, t_1, t_2 ) \geq 1 )$. It remains to compute the first moments for entry and exit which can be obtained via temporal integration of the entry/exit intensities $\mu^\pm$ as defined below: \eq{ \int\limits_{t_1}^{t_2} \mu^\pm (\Gamma, t ) dt := {\rm E}\left\{ N^\pm(\Gamma, t_1, t_2 ) \right\} } By combining eqs. \Ref{eq_coll_prob_def_upper_bound} and \Ref{eq_upper_bound} and evaluating the temporal derivative with respect to $t_2$ at $t_1$ we obtain \eq{ {dP^+_C\over dt}\left( \Gamma, t_1 \right) \leq \mu^+ (\Gamma, t_1 ) \label{eq_collision_probability_rate_upper_bound} } i. e. we have derived an upper bound for the collision probability {\it rate}. 
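Inequality \Ref{eq_upper_bound} can be checked empirically with the same kind of simulation as in sec. \ref{section_Collision_probability_Monte_Carlo_simulation}: count the entries per noisy trajectory, then compare the empirical $P(N^+ \geq 1)$ with the empirical ${\rm E}\left\{ N^+\right\}$. The one-dimensional toy dynamics and all parameter values below are illustrative:

```python
import math
import random

random.seed(42)

def entries_of_trajectory(x0, v0, sigma_a=3.0, dt=0.01, t_max=6.0):
    """Number of + -> - crossings of x(t) for a noisy constant-velocity path;
    strong process noise makes repeated entries likely."""
    x, v, n, t = x0, v0, 0, 0.0
    while t < t_max:
        v += sigma_a * math.sqrt(dt) * random.gauss(0.0, 1.0)
        x_new = x + v * dt
        if x > 0.0 >= x_new:
            n += 1
        x, t = x_new, t + dt
    return n

counts = [entries_of_trajectory(random.gauss(3.0, 1.0), random.gauss(-1.0, 1.0))
          for _ in range(2000)]
p_at_least_one = sum(1 for n in counts if n >= 1) / len(counts)
mean_entries = sum(counts) / len(counts)
# Bound from the inequality: P(N+ >= 1) <= E{N+}.
```

If multiple entries were rare, the two numbers would nearly coincide, which is exactly the regime in which ${\rm E}\left\{ N^+\right\}$ is not only a bound but a good approximation.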
This upper bound can be further evaluated using the explicit expression for the entry/exit intensities $\mu^\pm$ from \cite{belyaev1968number}: \eq{ \mu^\pm\left( \Gamma, t \right) =\!\!\!\int\limits_{{\bf x}\in\Gamma} {\rm E}\left\{\left. \langle{\bf n}_\Gamma({\bf x}),\dot\zeta(t)\rangle^\pm\right|\zeta(t) = {\bf x}\right\}p_t({\bf x})ds_\Gamma({\bf x}) \label{eq_entry_exit_intensity} } where $\langle\cdot, \cdot \rangle$ is the scalar product, $ds_\Gamma({\bf x})$ is an infinitesimal surface element of $\Gamma$ at ${\bf x}$, and $(\cdot)^+ := \max(\cdot,0)$ and $(\cdot)^- := -\min(\cdot,0)$. Equation \Ref{eq_entry_exit_intensity} holds for general non-Gaussian as well as non-stationary stochastic processes. In order to apply eq. \Ref{eq_entry_exit_intensity} to the front boundary $\Gamma_{front}$ as in fig. \ref{fig_front_boundary} we need to perform the following identifications:\footnote{From now on we do not distinguish between a stochastic process and its sample values.} \al{ \zeta(t) &\!\!\!=\!\!\!& \left( x, y \right)^\top \nonumber\\ \Gamma_{front} &\!\!\!=\!\!\!& \{(x, y): x - x_0 = 0 \wedge y\in I_y \} \nonumber\\ g_{\Gamma_{front}}({\bf x}) &\!\!\!=\!\!\!& x - x_0 \nonumber\\ {\bf n}_{\Gamma_{front}}({\bf x}) &\!\!\!=\!\!\!& \left( -1, 0 \right)^\top \nonumber\\ ds_{\Gamma_{front}}({\bf x}) &\!\!\!=\!\!\!& dy } \begin{figure}[t] \centering \includegraphics[viewport = 7cm 3cm 20cm 14.2cm, clip, width = 0.7 \columnwidth]{fig4.pdf} \caption{Horizontal view of the host vehicle rectangle with local Cartesian coordinate system and coordinate origin at the middle of the front boundary characterized by $x=0$ and $y \in [y_L, y_R] = I_y$.} \label{fig_front_boundary} \end{figure} Hence we obtain for the intermediate expectation operator \begin{multline} {\rm E}\left\{\left.
\langle{\bf n}_{\Gamma_{front}}({\bf x}),\dot\zeta(t)\rangle^+\right|\zeta(t) = {\bf x}\right\} =\\ -\int\limits_{\dot x \leq 0}\int\limits_{\dot y \in \mathbb R} \dot x\, p_t( \dot x, \dot y | x, y )\, d\dot x d\dot y \end{multline} and the entry intensity becomes \begin{align} \!\mu^+\!\left( \Gamma_{front}, t \right)&\!=\!-\!\!\!\!\!\int\limits_{y \in I_y}\!\!\!\!\!\left(\, \int\limits_{\dot x \leq 0}\int\limits_{\dot y \in \mathbb R}\!\!\! \dot x\, p_t( \dot x, \dot y | x_0, y ) d\dot x d\dot y\!\!\right)\!\! p_t( x_0, y ) dy \nonumber\\ &=\!-\!\!\!\!\!\int\limits_{y \in I_y} \int\limits_{\dot x \leq 0}\int\limits_{\dot y \in \mathbb R} \dot x\, p_t( x_0, y, \dot x, \dot y )\, dy d\dot x d\dot y\label{eq_front_boundary_entry_intensity} \end{align} This shows that the intuitive derivation of the collision probability rate (eq. \Ref{eq_intuitive_collision_probability_rate}) results in the correct expression for the upper bound. It should be noted, however, that the application of the formalism above to a rectangular boundary of the host vehicle is just an example. By the theorem stated above the formula can be applied to any subsets of smooth surfaces, including higher dimensional ones for three-dimensional objects, for example.\footnote{The results in \cite{belyaev1968number} have been extended to polyhedral \cite{veneziano1977vector} and other regions $S$ with a non-smooth surface $\partial S$, for an overview see \cite{illsley1998moments}.} The computation above applies to the front boundary of the host vehicle. In order to cover all four boundaries of the host vehicle the entry intensities of the four boundaries are added. 
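For a concrete feel for eq. \Ref{eq_front_boundary_entry_intensity}, the following sketch evaluates the front-boundary entry intensity by brute-force grid integration of an assumed jointly Gaussian density over $(x, y, \dot x)$ (after marginalizing over $\dot y$); the distribution parameters and boundary geometry are illustrative, not the paper's vehicle model:

```python
import numpy as np

# assumed joint Gaussian over (x, y, xdot) at a fixed time t (illustrative)
mean = np.array([2.0, 0.0, -3.0])   # relative position and closing speed
cov = np.diag([1.0, 0.5, 0.25])     # diagonal for simplicity

x0, y_L, y_R = 0.0, -1.0, 1.0       # front boundary: x = x0, y in [y_L, y_R]
inv_cov = np.linalg.inv(cov)
norm = 1.0 / np.sqrt((2.0 * np.pi) ** 3 * np.linalg.det(cov))

def pdf(x, y, xd):
    d = np.stack(np.broadcast_arrays(x - mean[0], y - mean[1], xd - mean[2]),
                 axis=-1)
    return norm * np.exp(-0.5 * np.einsum('...i,ij,...j', d, inv_cov, d))

# mu+ = -int_{y in I_y} int_{xdot <= 0} xdot p(x0, y, xdot) dxdot dy
y = np.linspace(y_L, y_R, 201)
xd = np.linspace(-10.0, 0.0, 401)
Y, XD = np.meshgrid(y, xd, indexing='ij')
dy, dxd = y[1] - y[0], xd[1] - xd[0]
mu_plus = np.sum(-XD * pdf(x0, Y, XD)) * dy * dxd
```

For the zero-correlation case used here, the result can be checked directly against the closed-form approximation derived in sec. \ref{sec_entry_intensity_approx}.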
Hence the total entry intensity is given by \al{ \!\!\!\!\!\!\!\!\!\mu^+(\Gamma_{host\ vehicle}, t ) &\!\!\!=\!\!\!& \mu^+(\Gamma_{front}, t ) + \mu^+(\Gamma_{right}, t )\nonumber\\ && + \mu^+(\Gamma_{left}, t ) + \mu^+(\Gamma_{rear}, t ) } With these expressions the collision probability rate and collision probability for the surface subset $\Gamma$ within a time interval $\Delta T = [t_1,t_2]$ are bounded by \al{ \!\!\!\!\!\!\!\!\! {dP^+_C\over dt}\left( \Gamma_{host\ vehicle}, t_1 \right) &\!\!\!\leq\!\!\!& \mu^+ (\Gamma_{host\ vehicle}, t_1 ) \\ \!\!\!\!\!\!\!\!\! P^+_C( \Gamma_{host\ vehicle}, t_1, t_2 ) &\!\!\!\leq\!\!\!& \int\limits_{t_1}^{t_2} \mu^+\left( \Gamma_{host\ vehicle}, t \right) dt \label{eq_temporal_integration} } In summary the upper bounds are due to the approximation that the starting point is outside the boundary (inequality \Ref{eq_coll_prob_def_upper_bound}) and the estimation of the probability of one or more boundary entries by the expected number of boundary entries (inequality \Ref{eq_upper_bound}). Note that the stochastic process $\xi$ representing the state of the colliding object needs to contain 2D relative position $(x\ y)^\top$ and 2D relative velocity $(\dot x\ \dot y)^\top$. In many ADAS applications the target vehicle dynamics is modeled directly in relative coordinates. For state vectors that do not contain the 2D relative velocity but other quantities such as the velocity over ground (see e. g. \cite{Altendorfer09}), a probabilistic transformation to relative velocities must be performed first. Here, the relative state of the colliding object is modeled by a point distribution. In many ADAS applications the object state is indeed modeled by a single reference point and possibly additional attributes such as width and length. However, the described approach can be executed in parallel for distributions of points representative of the colliding vehicle's geometry (see for example \cite{schueler2012360}) as detailed in sec. 
\ref{sec_salient_point_entry_intensities} or for individual point distributions of a Gaussian mixture model. \subsection{Implementation for Gaussian distributions} \label{sec_entry_intensity_approx} For further computations - especially in the Gaussian case - it will be convenient to marginalize over $\dot y$ and rewrite eq. \Ref{eq_front_boundary_entry_intensity} in terms of a conditional probability: \eq{ \mu^+( \Gamma_{front}, t ) = -p_t( x_0 )\int\limits_{\dot x \leq 0} \int\limits_{y \in I_y} \dot x\, p_t( \dot x, y | x_0 )\, d\dot x dy \label{eq_front_boundary_entry_intensity_cond} } For general distribution functions the integral in eq. \Ref{eq_front_boundary_entry_intensity_cond} cannot be computed in closed form and numerical integration methods must be used. Even in the bivariate Gaussian case there is no explicit solution known to the authors. However, by a Taylor-expansion with respect to the off-diagonal element of the inverse covariance matrix of $p( y, \dot x | x_0 )$ as detailed in app. \ref{app_computation_integral}, the integral can be factorized into one-dimensional Gaussians and solved in terms of the standard normal one-dimensional cumulative distribution function $\Phi$. 
To zeroth order the integration yields: \begin{align} \mu^+( \Gamma_{front}, t )=&-{\mathcal N}( x_0; \mu_x, \sigma_{x} ) \nonumber\\ & \cdot \Bigg(\bigg( \mu_{\dot x|x_0} \Phi\left({ - \mu_{\dot x|x_0} \over \tilde\sigma_{\dot x|x_0}}\right) \nonumber \\ & \quad -\tilde\sigma_{\dot x|x_0}^2 {\mathcal N}( 0; \mu_{\dot x|x_0}, \tilde\sigma_{\dot x|x_0} ) \bigg)\cdot \nonumber \\ & \cdot \left( \Phi\left({ y_R - \mu_{y|x_0} \over \tilde\sigma_{y|x_0}}\right) - \Phi\left({ y_L - \mu_{y|x_0} \over \tilde\sigma_{y|x_0}}\right) \right) \nonumber\\ & + {\mathcal O}\left(\Sigma^{-1}_{12}\right) \Bigg)\label{eq_coll_prob_rate_num_approx} \end{align} Here, if $\Sigma\in \mathbb R^{2\times 2}$ is the covariance matrix of $p( \dot x, y | x_0 )$, then $\tilde\sigma_{\dot x|x_0} = \sqrt{|\Sigma|\over \Sigma_{yy}}$ and $\tilde\sigma_{y|x_0} = \sqrt{|\Sigma|\over \Sigma_{\dot x\dot x}}$, see app. \ref{app_computation_integral} where the integration has also been carried out to first order in $\Sigma^{-1}_{12}$. Expression \Ref{eq_coll_prob_rate_num_approx} can be computed on an embedded platform using the complementary error function available in the C math library.\footnote{$\Phi$ is related to the error function ${\rm erf}$ and complementary error function ${\rm erfc}$ by $\Phi(x) = {1 \over 2}{\rm erfc}\left( {-x \over \sqrt{2}} \right) = {1 \over 2} - {1 \over 2}{\rm erf}\left( {-x \over \sqrt{2}} \right) $.} In the next section an extensive numerical study using the above formulae and Monte-Carlo simulations is presented. \section{Numerical Study} A numerical study has been carried out to address the following questions: \begin{itemize} \item Is the expression for calculating the entry intensity from eq. \Ref{eq_entry_exit_intensity} consistent with the results from large scale Monte-Carlo simulations? 
\item How does the approximation \Ref{eq_coll_prob_rate_num_approx} perform in comparison with the numerical integration of the derived expression \Ref{eq_front_boundary_entry_intensity_cond} for the entry intensity? \item Can the computational effort be reduced by increasing $\Delta t$ while still calculating the entry intensity accurately? \item Does the entry intensity still reproduce results from Monte-Carlo simulations after non-linear transformation from a reference point to representative salient points of the colliding vehicle's geometry? \end{itemize} \subsection{Is the upper bound of the collision probability rate corroborated by Monte-Carlo simulation?} \label{sec_numerical_corroboration} In order to address the first question, large scale Monte-Carlo simulations as described in sec. \ref{section_Collision_probability_Monte_Carlo_simulation} have been performed. Entry intensities were calculated based on $3\cdot 10^6$ sample trajectories for each of the two initial conditions ${\mathcal N}_i(\xi^-; {\mu_{\xi}^-}_i, {P^-_{\infty}})$, where ${\mu_{\xi}^-}_i$ is shown in table \ref{tab_intial_conditions_for_simulation}, and ${P^-_{\infty}}$ is calculated using the discrete Riccati equation with the matrices defined in appendix \ref{app_vehicleModel}. The two initial conditions $\left(i\in\left\{f, fr\right\}\right)$ describe a starting point directly in front of the host vehicle, and in front to the right at an angle of $45$ degrees with respect to the host vehicle. Note that in contrast to the study in \cite{blom2003collision} we include process noise and we employ a dynamical system as specified in appendix \ref{app_vehicleModel} that allows for multiple entries; this enables us to assess the influence of multiple entries on the accuracy of the upper bounds derived above.
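For reference, the zeroth-order closed-form approximation \Ref{eq_coll_prob_rate_num_approx} that enters these comparisons can be sketched as follows, with $\Phi$ implemented via the complementary error function as in the footnote of sec. \ref{sec_entry_intensity_approx}; the numeric inputs are illustrative, not the scenario parameters of table \ref{tab_intial_conditions_for_simulation}:

```python
import math

def phi(z):  # standard normal CDF via the complementary error function
    return 0.5 * math.erfc(-z / math.sqrt(2.0))

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def entry_intensity_front(mu_x, sigma_x, x0, mu_xd, sig_xd, mu_y, sig_y, y_L, y_R):
    """Zeroth-order closed form: conditional moments given x = x0,
    off-diagonal term of the inverse covariance neglected."""
    speed_term = (mu_xd * phi(-mu_xd / sig_xd)
                  - sig_xd ** 2 * normal_pdf(0.0, mu_xd, sig_xd))
    lateral_term = phi((y_R - mu_y) / sig_y) - phi((y_L - mu_y) / sig_y)
    return -normal_pdf(x0, mu_x, sigma_x) * speed_term * lateral_term

mu_plus = entry_intensity_front(mu_x=2.0, sigma_x=1.0, x0=0.0,
                                mu_xd=-3.0, sig_xd=0.5,
                                mu_y=0.0, sig_y=0.7, y_L=-1.0, y_R=1.0)
```

On an embedded target the same structure carries over directly to C via `erfc` from the C math library.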
\begin{table}[h] \caption{Mean of initial conditions for Monte-Carlo simulations} \begin{center} \label{tab_intial_conditions_for_simulation} \begin{tabular}{ l | c | c } \hline \backslashbox{${\mu_{\xi}^-}$}{$i$} & $f$ & $fr$ \\ \hline ${\mu_{x}^-} \left[m\right]$ & $10$ & $10$ \\ \hline ${\mu_{y}^-} \left[m\right]$ & $0$ & $10$ \\ \hline ${\mu_{\dot x}^-} \left[\frac{m}{s}\right]$ & $-2$ & $-2$ \\ \hline ${\mu_{\dot y}^-} \left[\frac{m}{s}\right]$ & $0.4$ & $-1.6$ \\ \hline ${\mu_{\ddot x}^-} \left[\frac{m}{s^2}\right]$ & $-0.2$ & $-0.001$ \\ \hline ${\mu_{\ddot y}^-} \left[\frac{m}{s^2}\right]$ & $0.0$ & $-0.01$ \\ \hline \end{tabular} \end{center} \end{table} Table \ref{tab_number_collisions} shows the number of collisions divided into the respective boundaries of the host vehicle where the impact or boundary crossing occurred for the two different simulations. \begin{table}[h] \caption{Number of collisions at host vehicle boundaries for $3\cdot10^6$ simulated trajectories with different initial conditions.} \begin{center} \label{tab_number_collisions} \begin{tabular}{ l | c | c } \hline \backslashbox{$\Gamma_{j}$}{$i$} & $f$ & $fr$ \\ \hline $front$ & $1.50\cdot10^6$ & $8.31\cdot10^5$ \\ \hline $right$ & $4.25\cdot10^5$ & $1.39\cdot10^6$ \\ \hline $left$ & $0$ & $0$ \\ \hline $rear$ & $0$ & $0$ \\ \hline $\sum$ & $1.93\cdot10^6$ & $2.22\cdot10^6$ \\ \end{tabular} \end{center} \end{table} The resulting histograms of the collision probability rates are shown in fig. \ref{fig_Monte_Carlo_Histogram} together with the entry intensity obtained by numerical integration of the bivariate Gaussian in \Ref{eq_front_boundary_entry_intensity_cond} as well as the difference between the simulation and the calculation. The difference is calculated by evaluating the entry intensity at the same time as the mid points of the histogram bins. \begin{figure*}[h!] 
\centering \null\hfill \subfloat[Front scenario total collision probability rate and entry intensity]{\includegraphics[width = .7 \columnwidth]{fig5a_0_7.pdf}} \hfill \subfloat[Front-Right scenario total collision probability rate and entry intensity]{\includegraphics[width = .7 \columnwidth]{fig5b_0_1.pdf}} \hfill\null\\ \null\hfill \subfloat[Front scenario: difference between total collision probability rate and entry intensity]{\includegraphics[width = .7 \columnwidth]{fig5c.pdf}} \hfill \subfloat[Front-Right scenario: difference between total collision probability rate and entry intensity]{\includegraphics[width = .7 \columnwidth]{fig5d.pdf}} \hfill\null \caption{The histogram resulting from Monte-Carlo simulation is shown together with the entry intensity obtained by numerical integration of the bivariate Gaussian for front (a) and front-right (b) scenario. The differences between simulation and numerical integration are calculated by evaluating the numerical integration at the same time as the mid points of the histogram bins and shown in (c) and (d).} \label{fig_Monte_Carlo_Histogram} \end{figure*} As can be seen in fig. \ref{fig_Monte_Carlo_Histogram}, the entry intensity obtained by numerical integration of the exact expression (eq. \Ref{eq_front_boundary_entry_intensity_cond}) accurately reproduces the collision probability rate from Monte-Carlo simulations. In order to illustrate the increase in accuracy as a function of the number of simulated trajectories, fig. \ref{fig_Monte_Carlo_Histogram_front_right_side} shows the differences between simulation and numerical integration with an increasing number of simulated trajectories for collisions at the right side of the host vehicle in the front scenario. \begin{figure*}[tb!]
\centering \null\hfill \subfloat[Simulation based on $1\cdot10^5$ trajectories.]{\includegraphics[width = .6 \columnwidth]{fig6a.pdf}} \hfill \subfloat[Simulation based on $1\cdot10^6$ trajectories.]{\includegraphics[width = .6 \columnwidth]{fig6b.pdf}} \hfill \subfloat[Simulation based on $1\cdot10^7$ trajectories.]{\includegraphics[width = .6 \columnwidth]{fig6c.pdf}} \hfill\null \caption{The collision probability rate for the right side of the host vehicle for the front scenario is shown comparing the results from Monte-Carlo simulation with an increasing number of simulated trajectories (a)-(c) with the entry intensity obtained by numerical integration of the bivariate Gaussian distribution (eq. \Ref{eq_front_boundary_entry_intensity_cond}). The process noise PSD for both coordinates is $\tilde q_x = \tilde q_y = 1.0125 m^2 s^{-5}$.} \label{fig_Monte_Carlo_Histogram_front_right_side} \end{figure*} The reason why the entry intensity approximates the observed collision probability rates so well is the very low occurrence of higher order entries, i. e. entries where the trajectory enters the boundary more than once (see statistics of a Monte-Carlo simulation in table \ref{tab_coll_prob_full_boundary}). In the absence of higher order entries the expected number of entries becomes equal to the probability of entering the boundary at least once, see eq. \Ref{eq_expected_number_inequality}. Since the corresponding time interval is arbitrary this equality propagates to an equality of the rates (compare to eq. \Ref{eq_collision_probability_rate_upper_bound}). In this context, we want to point out a subtlety concerning the number of entries regarding the entire vehicle boundary $\Gamma_{host\ vehicle}$ versus entries through one of the boundary segments such as $\Gamma_{right}$. In Monte-Carlo simulations we have observed trajectories as shown in fig.
\ref{fig_trajectories_one_and_two_entries} where the trajectory first enters the front boundary, exits the right boundary and then enters the right boundary again. With respect to the entire vehicle boundary $\Gamma_{host\ vehicle}$ this is a second entry -- however with respect to the individual right boundary segment $\Gamma_{right}$ this is a first entry. This is illustrated in fig. \ref{fig_histogram_both_entries} where the entry intensity and Monte-Carlo histogram for $\Gamma_{right}$ are plotted. Only by taking into account all entries for $\Gamma_{right}$, i. e. entries of $\Gamma_{right}$ that are first crossings of $\Gamma_{right}$, as well as entries of $\Gamma_{right}$ that are second or higher crossings does the entry intensity for $\Gamma_{right}$ match the histogram from Monte-Carlo simulation. \begin{table}[h!] \caption{Absolute frequency $H$ and relative frequency $P$ of the number of entries $N^+$ of colliding trajectories for $\Gamma_{host\ vehicle}$ based on $1\cdot10^7$ simulated trajectories for $\Delta T = [0,8s]$.} \begin{center} \label{tab_coll_prob_full_boundary} \begin{tabular}{ l | r | c | l } \hline \backslashbox{$X$}{$\Gamma_{host\ vehicle}$} & \multicolumn{1}{c|}{$H(X)$} & \multicolumn{1}{c|}{$P(X)$} & \multicolumn{1}{c}{$\frac{P(X)}{P(N^+\geq 1)}$} \\ \hline $N^+\!\!= 1$ & $4,493,419$ & $0.4493$ & $0.9981$ \\ \hline $N^+\!\!= 2$ & $8,772$ & $0.0009$ & $0.0019$ \\ \hline $N^+\geq 1$ & $4,502,191$ & $0.4502$ & $1$ \\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[ht] \centering \includegraphics[width = .9 \columnwidth]{fig7.pdf} \caption{Observed simulated trajectory entering the entire vehicle boundary $\Gamma_{host\ vehicle}$ once and trajectory entering twice.} \label{fig_trajectories_one_and_two_entries} \end{figure} \begin{figure}[h] \centering \includegraphics[width = .9 \columnwidth]{fig8.pdf} \caption{The entry intensity of the right side $\Gamma_{right}$ of the host vehicle for the front scenario is shown together 
with the Monte-Carlo histogram where entries by trajectories that have previously exited $\Gamma_{right}$ from inside $\Gamma_{host\ vehicle}$ are marked in dark gray.} \label{fig_histogram_both_entries} \end{figure} \subsection{Does the approximation by Taylor-expansion accurately reproduce the exact result?} \label{sec_numerical_study_Taylor} In order to be able to compute the entry intensity efficiently on an embedded platform, an approximation of the exact expression (eq. \Ref{eq_front_boundary_entry_intensity}) was derived in eq. (\ref{eq_coll_prob_rate_num_approx}). Fig. \ref{fig_numerical_approximation_diff} shows the differences between this approximation as well as a higher-order approximation where the pdf is Taylor-expanded to linear order with respect to the off-diagonal element of the inverse covariance matrix around $0$ (see app. \ref{app_computation_integral}) and the numerical integration of \Ref{eq_front_boundary_entry_intensity_cond}. As can be seen, the higher-order approximation reduces the error to a large extent while it can be still calculated efficiently on an embedded platform using the complementary error function. \begin{figure*}[h] \centering \null\hfill \subfloat[Front scenario]{\includegraphics[width = .9 \columnwidth]{fig9a.pdf}} \hfill \subfloat[Front-Right scenario]{\includegraphics[width = .9 \columnwidth]{fig9b.pdf}} \hfill\null \caption{Differences between numerical integration of the bivariate Gaussian in the expression of the entry intensity in eq. \Ref{eq_front_boundary_entry_intensity_cond} and two approximations. 
The first approximation (solid line) ignores the non-diagonal entries of the covariance matrix while the second approximation (dashed line) Taylor-expands the pdf to linear order with respect to the off-diagonal elements of the inverse covariance matrix around $0$.} \label{fig_numerical_approximation_diff} \end{figure*} \subsection{An adaptive method to sample the entry intensity over $\Delta T$} The above approximations of the exact expression for the entry intensity were evaluated at small time increments of $\Delta t=0.05s$. Thus, the calculation over the entire time period of interest (e. g. $8s$ as used above) and for every relevant object could induce a substantial computational burden. In order to reduce this effort, we propose an adaptive method to sample the entry intensity function with variable -- i. e. in general larger -- time increments $\Delta t$ over the time period of interest while still capturing the characteristics of this function, in particular its shape around the maximum. The sampling starting point is based upon the non-probabilistic TTCs for single, straight boundaries using a one-dimensional constant acceleration model. Those TTCs for penetrating the front, left, and right boundaries can then be used as initial conditions for the start of the sampling iteration of the entry intensity.\footnote{Due to the low probability of penetration the non-probabilistic TTC for the rear boundary is not considered for the determination of the sampling starting point.} To reproduce the entry intensity without substantial loss of information but with lower computational effort, the following algorithm is proposed: \begin{itemize} \item Calculate the times of penetrating the front, left and right boundaries based upon the non-probabilistic TTCs described above. \item Calculate the entry intensity for each time. Pick the time with the maximum entry intensity as a starting point.
\item Move left and right from this starting point with equally spaced $\Delta t_1 > \Delta t$ and calculate the entry intensity at these time points. Stop on each side if the entry intensity has reached a lower threshold of ${dP_C^+ \over dt }_{low}$. \item While moving left and right, check if the slope of the entry intensity has changed its sign. \item On every slope sign change, calculate the entry intensity around this time interval with decreased $\Delta t_2 < \Delta t_1$. \end{itemize} Examples of this implementation can be found in fig. \ref{fig_delta_t_simulation} for the front and front-right scenarios. It can be seen that the entry intensity as well as the entry intensity integrated over a certain time period can be determined with considerably fewer sampling points while still capturing the shape of the functions to be approximated. \begin{figure*}[!ht] \centering \null\hfill \subfloat[Front scenario entry intensity]{\includegraphics[width = .7 \columnwidth]{fig10a.pdf}} \hfill \subfloat[Front-Right scenario entry intensity]{\includegraphics[width = .7 \columnwidth]{fig10b.pdf}} \hfill\null\\ \null\hfill \subfloat[Front scenario integrated entry intensity]{\includegraphics[width = .7 \columnwidth]{fig10c.pdf}} \hfill \subfloat[Front-Right scenario integrated entry intensity]{\includegraphics[width = .7 \columnwidth]{fig10d.pdf}} \hfill\null \caption{Examples for reducing the number of calculations to determine the entry intensity and the integrated entry intensity. (a) and (c) show the results for the front scenario and (b) and (d) for the front-right scenario. The parameters in these examples are $\Delta t_1=0.5s$, $\Delta t_2=0.2s$ and ${dP_C^+ \over dt }_{low} = 0.01$. 
In doing so the number of calculations for the entry intensity could be reduced from $120$ (using a fixed sampling increment of $\Delta t = 0.05s$) to $13$ for the front scenario and to $12$ for the front-right scenario, respectively.} \label{fig_delta_t_simulation} \end{figure*} \subsection{Entry intensities of representative salient points of colliding vehicle's geometry} \label{sec_salient_point_entry_intensities} In the previous sections the colliding vehicle was modeled as a point distribution with a single reference point (e.g. the middle of the vehicle's rear bumper or the middle of the rear axle). In this section, we investigate the entry intensities for representative salient points of the colliding vehicle's two-dimensional geometry, i.e. the four corner points of a vehicle's rectangular shape incorporating width and length information. \begin{figure*}[h!] \centering \null\hfill \subfloat[Front left corner of colliding vehicle]{\includegraphics[width = .7 \columnwidth]{fig11a.pdf}} \hfill \subfloat[Front right corner of colliding vehicle]{\includegraphics[width = .7 \columnwidth]{fig11b.pdf}} \hfill\null\\ \null\hfill \subfloat[Rear left corner of colliding vehicle]{\includegraphics[width = .7 \columnwidth]{fig11c.pdf}} \hfill \subfloat[Rear right corner of colliding vehicle]{\includegraphics[width = .7 \columnwidth]{fig11d.pdf}} \hfill\null \caption{Collision probability rate and entry intensity of four corner points in the front scenario with process noise PSD for both coordinates of $\tilde q_x = \tilde q_y = 0.0101 m^2 s^{-5}$ and input gain $B$ set to zero. Results for the entry intensity are given for numerical integration of the approximate 2d Gaussian distribution as well as two approximations to this integration as detailed in app. \ref{app_computation_integral}. 
It can be observed that, due to the second order linearization of the probability distribution transformations to salient points, deviations arise with respect to Monte-Carlo simulations.} \label{fig_corner_points_of_colliding_vehicle} \end{figure*} After prediction of the reference point's state distribution to a certain time it needs to be transformed to representative salient points as described in app. \ref{app_state_transformation_salient}. In order to apply the approximate formulae for Gaussian distributions as in sec. \ref{sec_entry_intensity_approx} the transformation is performed by the usual second order linearization, i. e. using the full nonlinear transformation for the mean and its Jacobian for the covariance matrix propagation. For this investigation, three approaches are compared in fig. \ref{fig_corner_points_of_colliding_vehicle}: the numerical integration of the resulting 2d Gaussian distribution and two closed-form approximations derived in app. \ref{app_computation_integral} by Taylor-expansion. Contrary to the investigations in sec. \ref{sec_numerical_corroboration} and \ref{sec_numerical_study_Taylor} even the numerical integration of the 2d Gaussian distribution cannot fully match the Monte-Carlo simulations due to the Gaussian approximation of the non-Gaussian transformed predicted distributions. Also both closed-form approximations to the 2d Gaussian integral show deviations from the Monte-Carlo simulation, which describes the front scenario with process noise PSD for both coordinates of $\tilde q_x = \tilde q_y = 0.0101 m^2 s^{-5}$ and input gain $B$ set to zero. The closed-form approximations by Taylor-expansion with respect to the off-diagonal element of the covariance matrix and the inverse covariance matrix show similar accuracy with respect to the Monte-Carlo simulations except for the salient point in fig. \ref{fig_corner_points_of_colliding_vehicle}d where the former expansion is favored.
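The moment transformation just described (full nonlinear map for the mean, Jacobian propagation for the covariance) can be sketched generically; the corner-point map below, with heading inferred from the velocity direction and assumed vehicle dimensions, is a hypothetical stand-in for the transformation of app. \ref{app_state_transformation_salient}:

```python
import numpy as np

def transform_moments(f, mean, cov, eps=1e-6):
    """Propagate (mean, cov) through a nonlinear map f by second-order
    linearization: f for the mean, a numerical Jacobian for the covariance."""
    mean = np.asarray(mean, dtype=float)
    y0 = np.asarray(f(mean), dtype=float)
    J = np.zeros((y0.size, mean.size))
    for i in range(mean.size):
        d = np.zeros_like(mean)
        d[i] = eps
        J[:, i] = (np.asarray(f(mean + d)) - np.asarray(f(mean - d))) / (2.0 * eps)
    return y0, J @ cov @ J.T

# hypothetical map from a reference point to the front-left corner; heading
# is taken from the velocity direction, dimensions L and W are assumed
def front_left_corner(state):  # state = (x, y, xdot, ydot)
    x, y, xd, yd = state
    psi = np.arctan2(yd, xd)
    L, W = 4.5, 1.8
    return np.array([x + np.cos(psi) * L / 2 - np.sin(psi) * W / 2,
                     y + np.sin(psi) * L / 2 + np.cos(psi) * W / 2])

m_c, P_c = transform_moments(front_left_corner,
                             [10.0, 0.0, -2.0, 0.4],
                             np.diag([1.0, 1.0, 0.25, 0.25]))
```

A Gaussian with moments `(m_c, P_c)` can then be fed into the approximate entry-intensity formulae; as noted in the text, this Gaussian approximation of the transformed density is one source of the observed deviations.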
Nevertheless in these cases both Taylor-expansions approximately capture the shape and the location of the maximum of the intensity distributions. \section{What is the TTC?} There are various approaches to computing the TTC. In \cite{Muntzinger_et_al_09}, the TTC is computed as the mean of the time distribution of reaching the $x_0$ boundary of the car as a function of the initial conditions assuming a constant speed model; process noise is not considered. This is also presented in \cite{jansson2008framework}; in addition the time distribution for reaching the $x_0$ boundary as a function of the initial conditions assuming a constant acceleration model is calculated by Monte-Carlo-simulation and its mean values depending upon the initial condition setup are given - again, process noise for this motion model is not considered. As a notable exception, in \cite{stellet2015uncertainty} the covariance of the distribution of TTC (or the related time-to-go in \cite{nordlund2008probabilistic}) has been augmented by standard error propagation and judicious use of the implicit function theorem to include the effect of process noise. Nevertheless their TTC is still based on a reduction to a one-dimensional, longitudinal motion. As will be shown below, these restricted temporal quantities do not fully capture the characteristics of horizontal plane collision scenarios. What is required is a distribution of the TTC that takes into account process noise as well as two- or higher-dimensional geometries. In the following figures collision probability rates obtained by Monte-Carlo simulations as well as entry intensities are plotted together with initial condition TTC-distributions from Monte-Carlo simulations similar to \cite{jansson2008framework}.
These Monte-Carlo simulations are based on TTC values for the front boundary $\Gamma_{front}$ ($x$-direction) and the right boundary $\Gamma_{right}$ ($y$-direction) as solutions of the deterministic equations \begin{align} x_{0} = x(TTC_{front})& = x(0) + \dot x(0) TTC_{front} + { \ddot x(0) \over 2 } TTC_{front}^2 \nonumber\\ y_{R} = y(TTC_{right})& = y(0) + \dot y(0) TTC_{right} + { \ddot y(0) \over 2 } TTC_{right}^2 \label{eq_deterministic_TTCs} \end{align} As an extension of the one-dimensional Monte-Carlo setup in \cite{jansson2008framework} the following conditions and constraints need to be considered for consistent TTC-histograms for one-dimensional boundaries embedded in two-dimensional space: \begin{itemize} \item[-] for arbitrary initial conditions and values of $x_0, y_R$ all real, positive solutions of the quadratic equations above need to be considered \item[-] a real, positive solution for $TTC_{front}$ is only valid if $\left( x(TTC_{front}), y(TTC_{front}) \right) \in \Gamma_{front}$, and a real, positive solution for $TTC_{right}$ is only valid if $\left( x(TTC_{right}), y(TTC_{right}) \right) \in \Gamma_{right}$ \item[-] the trajectory must enter the boundary from outside, e. g. for $TTC_{right}$ it is checked that $y(TTC_{right} - \epsilon) > y_R$ for a small $\epsilon > 0$ \end{itemize} Since time-dependent input cannot be handled in Monte-Carlo simulations based only on stochastic initial conditions, we restrict the dynamical model in this section for comparison to a constant acceleration model, i. e. the input gain $B$ in app. \ref{app_vehicleModel} is set to zero. The deterministic TTC solutions are also the mean values of distributions derived by promoting the deterministic expressions of eq.
\Ref{eq_deterministic_TTCs} to probability distributions due to the distribution of the initial conditions and obtaining the mean by the usual first-order approximation of non-Gaussian densities.\footnote{Note that the augmented TTC-computation in \cite{stellet2015uncertainty} does not alter the mean but only the covariance.} As a central result of this section, we show in fig. \ref{fig_Monte_Carlo_TTC_versus_entry_intensity} that initial condition TTC-distributions from Monte-Carlo simulations match the corresponding entry intensities where process noise is zero. This shows that the entry intensity can be interpreted (if contributions of higher order entries are negligible as discussed in sec. \ref{sec_numerical_corroboration}) as a TTC-probability density. It is also noteworthy that in this case the entry intensity in its approximate version from sec. \ref{sec_entry_intensity_approx} affords a closed-form expression for a distribution that hitherto had to be obtained by Monte-Carlo simulation. \begin{figure}[ht] \centering \includegraphics[width = .9 \columnwidth]{fig12.pdf} \caption{Entry intensities, TTC Monte-Carlo simulations, and deterministic TTCs for an initial condition at the front, right side of the vehicle: $(x,y)=(10,10)m$. For comparability, process noise had to be set to zero in the computation of the entry intensities.} \label{fig_Monte_Carlo_TTC_versus_entry_intensity} \end{figure} In fig. \ref{fig_collision_probability_rate_and_TTCs} the collision probability rate is plotted with initial condition TTC-distributions and deterministic TTCs for the $x$- and $y$-directions for an initial position at the front, right side. Both the deterministic TTCs in $x$- and $y$-direction are significantly different from the time of the maximum of the collision probability rate. Likewise, the initial condition TTC-distributions do not resemble the entry intensity and reach their maxima at later times. 
Since the bulk of the colliding trajectories go through two sides - front and right (see also fig. \ref{fig_Monte_Carlo_sample_trajectory}b) - only a collision model that takes into account process noise and the full geometry of the host vehicle can yield accurate results. \begin{figure}[ht] \centering \includegraphics[width = .9 \columnwidth]{fig13.pdf} \caption{Collision probability rate from Monte-Carlo simulation, entry intensity, initial condition TTC-distributions and deterministic TTCs for an initial condition at the front, right side of the vehicle: $(x,y)=(10,10)m$. The process noise PSD for both coordinates is $\tilde q_x = \tilde q_y = 0.0405 m^2 s^{-5}$.} \label{fig_collision_probability_rate_and_TTCs} \end{figure} In fig. \ref{fig_collision_probability_rate_and_TTCx_low_process} the collision probability rate is plotted together with initial condition TTC-distribution and deterministic TTC for the $x$-direction for an initial position that is straight in front of the vehicle hence almost all trajectories pass through the front boundary. Nevertheless the collision probability rate is lower and shifted to the left of the initial condition TTC-distribution. Also the maximum of the probability rate as well as the initial condition TTC-distribution occurs before the deterministic TTC. These differences increase as the process noise increases as can be seen in fig. \ref{fig_collision_probability_rate_and_TTCx_high_process}. This is due to the fact that the time of the maximum is strongly influenced by the factor $p_t( x_0 )$ in eq. \Ref{eq_front_boundary_entry_intensity_cond}; an increased level of process noise leads to a faster spreading of $p_t( x_0 )$ and hence the maximum is reached earlier. 
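The initial-condition TTC histograms used in these comparisons can in principle be reproduced by a sketch like the following, which samples Gaussian initial conditions, solves the quadratic equations \Ref{eq_deterministic_TTCs} for the front boundary, and applies the validity checks listed above (real positive root, boundary membership, entry from outside); all numbers are illustrative, and a constant lateral velocity is assumed for the crossing-point check:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100000
x0_b, y_L, y_R = 0.0, -1.0, 1.0  # illustrative front-boundary geometry

# sample Gaussian initial conditions (illustrative, constant acceleration)
x = rng.normal(10.0, 1.0, n)
xd = rng.normal(-2.0, 0.3, n)
ax = rng.normal(-0.2, 0.05, n)
y = rng.normal(0.0, 1.0, n)
yd = rng.normal(0.4, 0.3, n)

# solve x0 = x + xd*T + ax/2*T^2 for the smallest real, positive root
a, b, c = ax / 2.0, xd, x - x0_b
disc = b * b - 4.0 * a * c
ttc = np.full(n, np.nan)
quad = (np.abs(a) > 1e-12) & (disc >= 0.0)
sq = np.sqrt(disc[quad])
roots = np.stack([(-b[quad] + sq) / (2.0 * a[quad]),
                  (-b[quad] - sq) / (2.0 * a[quad])])
roots[roots <= 0.0] = np.inf      # discard non-positive roots
ttc[quad] = np.min(roots, axis=0)
lin = (np.abs(a) <= 1e-12) & (b != 0.0)
ttc[lin] = -c[lin] / b[lin]

# validity: crossing point on the segment, entry from outside (xdot < 0 there)
y_at = y + yd * ttc               # constant lateral velocity assumed
valid = np.isfinite(ttc) & (ttc > 0.0)
valid &= (y_at >= y_L) & (y_at <= y_R) & (xd + ax * ttc < 0.0)
hist, edges = np.histogram(ttc[valid], bins=40, range=(0.0, 8.0))
```

Because only initial conditions are random, this corresponds to the zero-process-noise comparison of fig. \ref{fig_Monte_Carlo_TTC_versus_entry_intensity}.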
\begin{figure}[h] \centering \includegraphics[width = .9 \columnwidth]{fig14.pdf} \caption{Collision probability rate from Monte-Carlo simulation, entry intensity, initial condition TTC-distribution and deterministic TTC for an initial condition in front of the vehicle: $(x,y)=(10,0)m$. The process noise PSD for both coordinates is $\tilde q_x = \tilde q_y = 0.0101 m^2 s^{-5}$.} \label{fig_collision_probability_rate_and_TTCx_low_process} \end{figure} \begin{figure}[h] \centering \includegraphics[width = .9 \columnwidth]{fig15.pdf} \caption{Collision probability rate from Monte-Carlo simulation, entry intensity, initial condition TTC-distribution and deterministic TTC for an initial condition in front of the vehicle: $(x,y)=(10,0)m$. The process noise PSD for both coordinates has been increased to $\tilde q_x = \tilde q_y = 1.0125 m^2 s^{-5}$.} \label{fig_collision_probability_rate_and_TTCx_high_process} \end{figure} The above discussion shows that temporal collision characteristics are encoded by the distribution of the collision probability rate, which incorporates the full geometry of the host vehicle as well as process noise during prediction. A scalar quantity called {\it TTC} can then be obtained as one of the characteristic properties of this distribution such as the mode, the mean, or the median, or as a property of the integrated collision probability rate, e. g. the time when the collision probability exceeds a certain threshold. \section{Conclusions} As detailed in our literature review, a common approach to computing a collision probability is via temporal collision measures such as time-to-collision or time-to-go. In this paper, however, we have pursued a different approach, namely the investigation of a collision probability rate without temporal collision measures as an intermediate or prerequisite quantity. A collision probability rate then yields, by temporal integration, a collision probability over an extended period of time.
An expression for an upper bound of the collision probability rate has been derived based on the theory of level crossings for vector stochastic processes. The condition under which the upper bound is saturated, i. e. is a good approximation of the collision probability rate, has been discussed. While the expression was exemplified with Gaussian distributions on a two-dimensional rectangular surface, the formalism holds for general non-stationary as well as non-Gaussian stochastic processes and can be applied to any subsets of multidimensional smooth surfaces. We have also shown that computations of TTCs using assumptions based on the shape of trajectories or the prediction dynamics (disregard of process noise), or on simple geometries such as a single line segment, do not properly characterize the true collision probability rate distributions. The ground truth collision probability rate distribution has been obtained by Monte-Carlo simulations and approximated by our derived bound for the collision probability rate. We have also implemented an approximation of the collision probability rate bound that can be computed in closed form on an embedded platform. This approximate formula provided bounds of the collision probability rate distributions that are almost indistinguishable from distributions obtained by numerical integration for the scenarios considered in this paper. In order to efficiently sample this probability rate distribution for the determination of its characteristic shape, we have worked out an adaptive method to obtain the sampling points. In our discussion of approaches to computing a TTC, we illustrated the correspondence between classical TTC-distributions derived by Monte-Carlo simulations based on stochastic initial conditions and the entry intensity. We also showed that those classical one-dimensional TTC-distributions do not properly represent collision statistics in the case of two-dimensional geometries and in the presence of process noise.
We have identified the distribution of the collision probability rate as the distribution of the TTC. Point estimators derived from this distribution (e. g. the mode, mean, or median) as input signals to collision avoidance decision making could be investigated in the context of a complete collision avoidance system. \section{ACKNOWLEDGEMENTS} Helpful clarifications by Prof. Georg Lindgren are gratefully acknowledged. \begin{appendix} \label{app_collision_probability} \subsection{Partitioned Gaussian densities} \label{app_Partitioned_Gaussian_densities} In many calculations in stochastic estimation there is a need to marginalize over certain elements of a state vector or to obtain lower dimensional distributions by conditioning with respect to certain elements. For these calculations the original state vector $\xi$ can be rearranged or partitioned such that $x_r$ denotes the remaining state vector and $x_m$ denotes the states to be marginalized over or which are used for conditioning. \eq{ \xi = \begin{pmatrix} x_r \cr x_m \end{pmatrix} } Hence the mean vector $\mu$ and covariance matrix $\Sigma$ can be partitioned into \eq{ \mu = \begin{pmatrix} \mu_r \cr \mu_m \end{pmatrix},\quad \Sigma = \begin{pmatrix} \Sigma_{rr} & \Sigma_{rm} \cr \Sigma_{rm}^\top & \Sigma_{mm} \end{pmatrix} } The following two well-known results on multivariate Gaussians are used in this paper: \paragraph{Marginalization} The probability density of $\xi$ marginalized with respect to $x_m$ is \eq{ p\left( x_r \right) = \int_{x_m} p\left( \xi \right) dx_m = {\mathcal N}\left( x_r; \mu_r, \Sigma_{rr} \right) } \paragraph{Conditioning} The probability density of $\xi$ conditioned on $x_m$ is \al{ p\left( \xi | x_m \right) &\!\!\!=\!\!\!& p\left( x_r | x_m \right) \nonumber\\ &\!\!\!=\!\!\!& {\mathcal N}\left( x_r; \mu_{r|m}, \Sigma_{r|m} \right) } with \al{ \mu_{r|m} &\!\!\!=\!\!\!& \mu_r + \Sigma_{rm}\Sigma_{mm}^{-1}\left(x_m - \mu_m \right) \label{eq_conditional_mu}\\ \Sigma_{r|m} 
&\!\!\!=\!\!\!& \Sigma_{rr} - \Sigma_{rm}\Sigma_{mm}^{-1}\Sigma_{rm}^\top \label{eq_conditional_cov_matrix} } \subsection{Dynamical system} \label{app_vehicleModel} The example vehicle kinematics is characterized by a six-dimensional state vector \eq{ \xi = \begin{pmatrix} x & y & \dot x & \dot y & \ddot x & \ddot y \end{pmatrix}^\top \label{eq_state_vector} } The continuous dynamics is given by a continuous white noise jerk model (see e. g. \cite{BarShalomKirubarajan01}) with additional time-dependent control input $u(t)$: \eq{ \dot\xi = F\xi + L\nu + B u \label{eq_diff_equation} } where \eq{ F = \begin{pmatrix} 0 & 0 & 1 & 0 & 0 & 0 \cr 0 & 0 & 0 & 1 & 0 & 0 \cr 0 & 0 & 0 & 0 & 1 & 0 \cr 0 & 0 & 0 & 0 & 0 & 1 \cr 0 & 0 & 0 & 0 & 0 & 0 \cr 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix} \nonumber } \eq{ L = B = \begin{pmatrix} 0 & 0 \cr 0 & 0 \cr 0 & 0 \cr 0 & 0 \cr 1 & 0 \cr 0 & 1 \end{pmatrix} \nonumber } and \eq{ u(t) = \begin{pmatrix} b_1 \sin( \omega t ) \cr b_2 \sin( \omega t ) \end{pmatrix} \nonumber } Process noise $\nu$ is characterized by the jerk power spectral density (PSD) $\tilde Q = {\rm diag}( \tilde q_x, \tilde q_y )$. The discrete dynamics, i. e. the solution of this differential equation, can be obtained by standard linear system techniques. The covariance matrix of discrete-time equivalent process noise is given by (see e. g. \cite{BarShalomKirubarajan01}) \eq{ Q( t_{k+1}, t_k ) = \int\limits_{t_k}^{t_{k+1}}\Phi(t_{k+1}, \tau )L \tilde Q L^\top \Phi^\top(t_{k+1}, \tau ) d\tau } where $\Phi$ is the transition matrix of the homogeneous differential equation. 
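This integral can be checked numerically; the following sketch (our own sanity check, not part of the original derivation) exploits the fact that $F$ is nilpotent, so $\Phi(s) = I + Fs + \tfrac{1}{2}F^2s^2$ holds exactly:

```python
import numpy as np

# Continuous white-noise jerk model: state (x, y, xdot, ydot, xddot, yddot)
F = np.zeros((6, 6))
F[0, 2] = F[1, 3] = F[2, 4] = F[3, 5] = 1.0
L = np.zeros((6, 2))
L[4, 0] = L[5, 1] = 1.0
qx, qy = 0.0405, 0.0405                      # jerk PSDs, as in the figures
Qtilde = np.diag([qx, qy])

def Phi(s):
    # F is nilpotent (F^3 = 0), so the matrix exponential truncates exactly
    return np.eye(6) + F * s + 0.5 * (F @ F) * s * s

def Q_numeric(dt, n=2000):
    # Composite trapezoid rule for the process-noise covariance integral
    taus = np.linspace(0.0, dt, n + 1)
    vals = np.array([Phi(dt - t) @ L @ Qtilde @ L.T @ Phi(dt - t).T
                     for t in taus])
    w = np.full(n + 1, dt / n)
    w[0] *= 0.5
    w[-1] *= 0.5
    return np.einsum('i,ijk->jk', w, vals)

dt = 0.5
Q = Q_numeric(dt)
# e.g. Q[0, 0] reproduces the closed-form entry (dt^5 / 20) * qx
```

The numerically integrated entries agree with the closed-form values, e.g. $Q_{11} = \frac{\Delta t_k^5}{20}\tilde q_x$.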
The closed-form expression for this covariance matrix reads \small\eq{ Q(\Delta t_k) = \begin{pmatrix} \frac{\Delta t_k^5}{20}\tilde q_x & 0 & \frac{\Delta t_k^4}{8}\tilde q_x & 0 & \frac{\Delta t_k^3}{6}\tilde q_x & 0 \cr 0 & \frac{\Delta t_k^5}{20}\tilde q_y & 0 & \frac{\Delta t_k^4}{8}\tilde q_y & 0 & \frac{\Delta t_k^3}{6}\tilde q_y \cr \frac{\Delta t_k^4}{8}\tilde q_x & 0 & \frac{\Delta t_k^3}{3}\tilde q_x & 0 & \frac{\Delta t_k^2}{2}\tilde q_x & 0 \cr 0 & \frac{\Delta t_k^4}{8}\tilde q_y & 0 & \frac{\Delta t_k^3}{3}\tilde q_y & 0 & \frac{\Delta t_k^2}{2}\tilde q_y \cr \frac{\Delta t_k^3}{6}\tilde q_x & 0 & \frac{\Delta t_k^2}{2}\tilde q_x & 0 & \Delta t_k\tilde q_x & 0 \cr 0 & \frac{\Delta t_k^3}{6}\tilde q_y & 0 & \frac{\Delta t_k^2}{2}\tilde q_y & 0 & \Delta t_k \tilde q_y \end{pmatrix} \nonumber }\normalsize with $\Delta t_k = t_{k+1} - t_k$. The measurement model is \eq{ z(t_k) = h(\xi^-(t_k)) + r(t_k) \label{eq_measurement_equation} } where $z(t_k)$ is the measurement and the measurement noise $r(t_k)$ is modeled by a white, zero-mean Gaussian process with covariance matrix $R(t_k)$. The example measurement function $h$ is given by typical radar measurements $(r, \phi, \dot r)$, i. e. \al{ h(\xi) &\!\!\!=\!\!\!& \begin{pmatrix} \sqrt{x^2 + y^2} \cr \arctan\left( y \over x \right) \cr { x \dot x + y \dot y \over \sqrt{x^2 + y^2} }\end{pmatrix} \nonumber\\ \label{eq_measurement_model_radar} } and $H = {\partial h \over \partial\xi}$ is its linearization. For the illustration of our main results in the numerical study we have chosen a linear dynamical model with a time-dependent control input that can be solved in closed form for both the prediction of the mean and the covariance matrix. However, the dynamical system above is just an example to illustrate the application of the results in sec.
\ref{sec_derivation_collision_prob} to a concrete setup; other in general non-linear dynamical systems and state vectors can be used as long as they contain relative position and its first derivative. \subsection{Evaluation of the 2D integral for the entry intensity} \label{app_computation_integral} The integral \eq{ \int\limits_{\dot x \leq 0} \int\limits_{y \in I_y} \dot x\, p_t( y, \dot x | x_0 )\, dy d\dot x } in eq. \Ref{eq_front_boundary_entry_intensity_cond} for the entry intensity cannot be computed in closed form if the covariance matrix of $p_t( y, \dot x | x_0 )$ is not diagonal. Here, we Taylor-expand the 2D pdf with respect to the off-diagonal element of the covariance matrix around 0 to a certain order and then integrate. For a general 2D Gaussian pdf $p( x_1, x_2 )={\mathcal N}(\xi;\mu,\Sigma)$ with $\xi = ( x_1, x_2 )^\top$ and mean $\mu$ and covariance matrix $\Sigma$ the Taylor-expansion to linear order with respect to $\Sigma_{12}$ reads \al{ {\mathcal N}(\xi;\mu,\Sigma) &\!\!\!=\!\!\!& {\mathcal N}\left( x_1; \mu_1, \sqrt{\Sigma_{11}} \right) {\mathcal N}\left( x_2; \mu_2, \sqrt{\Sigma_{22}} \right)\nonumber\\ &\!\!\! \!\!\!& + \Sigma_{12}\left( { x_1 - \mu_1 \over \Sigma_{11}} {\mathcal N}\left( x_1; \mu_1, \sqrt{\Sigma_{11}} \right) \right)\cdot \nonumber\\ &\!\!\! \!\!\!& \qquad\cdot\left( { x_2 - \mu_2 \over \Sigma_{22}} {\mathcal N}\left( x_2; \mu_2, \sqrt{\Sigma_{22}} \right) \right)\nonumber\\ &\!\!\! 
\!\!\!& + {\mathcal O}\left((\Sigma_{12})^2\right) \nonumber } which leads to the following integral \begin{multline*} \int\limits_{x_{1l}}^{x_{1u}}\int\limits_{x_{2l}}^{x_{2u}} x_1 p( x_1, x_2 ) dx_1 dx_2 = \\ \left[ \mu_1 \Phi\left({ x_1 - \mu_1 \over \sqrt{\Sigma_{11}}}\right) - \Sigma_{11} {\mathcal N}(x_1; \mu_1, \sqrt{\Sigma_{11}}) \right]_{x_{1l}}^{x_{1u}}\cdot\\ \cdot\left[ \Phi\left({ x_2 - \mu_2 \over \sqrt{\Sigma_{22}}}\right) \right]_{x_{2l}}^{x_{2u}}\\ + \Sigma_{12}\left[ \Phi\left({ x_1 - \mu_1 \over \sqrt{\Sigma_{11}}}\right) - x_1 {\mathcal N}(x_1; \mu_1, \sqrt{\Sigma_{11}}) \right]_{x_{1l}}^{x_{1u}}\cdot \\ \cdot\left[ - {\mathcal N}(x_2; \mu_2, \sqrt{\Sigma_{22}}) \right]_{x_{2l}}^{x_{2u}} + {\mathcal O}\left((\Sigma_{12})^2\right) \end{multline*} The quality of the approximation depends asymptotically upon the size of $\Sigma_{12}$. An alternative Taylor-expansion would be an expansion with respect to the off-diagonal element of the {\it inverse} covariance matrix. Its off-diagonal element $\Sigma^{-1}_{12} := \left(\Sigma^{-1}\right)_{12} = -{\Sigma_{12}\over |\Sigma|}$ has the determinant of $\Sigma$ in the denominator, hence for large determinants (i. e. large uncertainties as expected for long prediction times) this approximation is expected to be more accurate. For a general 2D Gaussian pdf $p( x_1, x_2 ) = {\mathcal N}(\xi;\mu,\Sigma)$ with $\xi = ( x_1, x_2 )^\top$ and mean $\mu$ and covariance matrix $\Sigma$ the Taylor-expansion to linear order with respect to $\Sigma^{-1}_{12}$ reads \al{ {\mathcal N}(\xi;\mu,\Sigma) &\!\!\!=\!\!\!& {\mathcal N}\left( x_1; \mu_1, \sqrt{\tilde\Sigma_{11}} \right) {\mathcal N}\left( x_2; \mu_2, \sqrt{\tilde\Sigma_{22}} \right)\nonumber\\ &\!\!\! \!\!\!& - \Sigma^{-1}_{12}\left( ( x_1 - \mu_1 ) {\mathcal N}\left( x_1; \mu_1, \sqrt{\tilde\Sigma_{11}} \right) \right)\cdot \nonumber\\ &\!\!\! 
\!\!\!& \qquad\cdot\left( ( x_2 - \mu_2 ) {\mathcal N}\left( x_2; \mu_2, \sqrt{\tilde\Sigma_{22}} \right) \right)\nonumber\\ &\!\!\! \!\!\!& + {\mathcal O}\left((\Sigma^{-1}_{12})^2\right) \nonumber } with $\tilde\Sigma_{11} = {|\Sigma|\over \Sigma_{22}}, \tilde\Sigma_{22} = {|\Sigma|\over \Sigma_{11}}$. This leads to the following integral \begin{multline*} \int\limits_{x_{1l}}^{x_{1u}}\int\limits_{x_{2l}}^{x_{2u}} x_1 p( x_1, x_2 ) dx_1 dx_2 = \\ \left[ {\mu_1\over 2} {\rm erf}\left({ x_1 - \mu_1 \over \sqrt{2\tilde\Sigma_{11}}}\right) - \tilde\Sigma_{11} {\mathcal N}\left(x_1; \mu_1, \sqrt{\tilde\Sigma_{11}}\right) \right]_{x_{1l}}^{x_{1u}}\cdot\\ \cdot\left[ {1\over 2}{\rm erf}\left({ x_2 - \mu_2 \over \sqrt{2\tilde\Sigma_{22}}}\right) \right]_{x_{2l}}^{x_{2u}}\\ - \Sigma^{-1}_{12}\left[x_1 \tilde\Sigma_{11} {\mathcal N}\left(x_1; \mu_1, \sqrt{\tilde\Sigma_{11}}\right) -{\tilde\Sigma_{11}\over 2}{\rm erf}\left({ x_1 - \mu_1 \over \sqrt{2\tilde\Sigma_{11}}}\right) \right]_{x_{1l}}^{x_{1u}}\cdot \\ \cdot\left[ \tilde\Sigma_{22}{\mathcal N}\left(x_2; \mu_2, \sqrt{\tilde\Sigma_{22}}\right) \right]_{x_{2l}}^{x_{2u}} + {\mathcal O}\left((\Sigma^{-1}_{12})^2\right) \end{multline*} If the covariance matrix of $p_t( y, \dot x | x_0 )$ is diagonal, i. e. $\Sigma_{12} = 0$, the integrand factorizes into Gaussians and can be integrated in a straightforward manner. \subsection{State vector transformation to salient points} \label{app_state_transformation_salient} In order to transform the state distribution describing the object's reference point (such as the middle of the rear bumper or the middle of the rear axle) to other points such as the four corners, the deterministic state transformation is needed, which can be used either by propagation of the mean and covariance using linear system techniques or by Monte-Carlo sampling.
Transformation to other points of an extended object requires knowledge of its orientation which can be derived in the Ackermann limit from the angle of the velocity vector. This is an appropriate setup if the vehicle's reference point is the middle of the rear axle and side-slip at the rear wheels can be neglected. \begin{figure}[t] \centering \includegraphics[viewport = 7cm 5cm 20cm 14cm, clip, width = 1.0 \columnwidth]{fig16.pdf} \caption{Horizontal view of the object rectangle with local Cartesian coordinate system and coordinate origin at the middle of the rear axle. The translation to the rear left corner as a salient point of the object's geometry is also drawn.} \label{fig_object_rectangle_with_transformation} \end{figure} Taking into account the state vector as defined in eq. \Ref{eq_state_vector} and translating the state along $( \Delta \tilde x\ \Delta \tilde y )^\top$ in the object's local coordinate system (see fig. \ref{fig_object_rectangle_with_transformation}) the position transformation reads \eq{ \begin{pmatrix} x \cr y\end{pmatrix}_{sal} = \begin{pmatrix} x \cr y\end{pmatrix}_{ref} + \bf{R} \begin{pmatrix} \Delta \tilde x \cr \Delta \tilde y\end{pmatrix} } with \eq{ \bf{R} = \begin{pmatrix} \cos\alpha & -\sin\alpha \cr \sin\alpha & \cos\alpha \end{pmatrix} } and $\alpha = \arctan{\dot y \over \dot x}$ the orientation angle as explained above. 
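The position part of this transformation can be sketched in a few lines (an illustration with hypothetical corner offsets; \texttt{arctan2} is used instead of $\arctan$ to resolve the quadrant of $\alpha$):

```python
import numpy as np

def to_salient(x, y, xdot, ydot, d_tilde):
    """Translate the reference-point position to a salient point.

    d_tilde = (dx, dy) is the offset in the object's local frame;
    the orientation follows the velocity vector (Ackermann limit).
    """
    alpha = np.arctan2(ydot, xdot)
    R = np.array([[np.cos(alpha), -np.sin(alpha)],
                  [np.sin(alpha),  np.cos(alpha)]])
    return np.array([x, y]) + R @ np.asarray(d_tilde)

# Rear-left corner of a 2 m wide object whose reference point is the
# middle of the rear axle, moving in +x direction (alpha = 0):
corner = to_salient(10.0, 0.0, 5.0, 0.0, (-0.5, 1.0))
# corner == array([9.5, 1.0])
```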
Then we have \al{ \begin{pmatrix} \dot x \cr \dot y\end{pmatrix}_{sal} &\!\!\!=\!\!\!& \begin{pmatrix} \dot x \cr \dot y\end{pmatrix}_{ref} + \dot\alpha \bf{R}^\prime \begin{pmatrix} \Delta \tilde x \cr \Delta \tilde y\end{pmatrix} \\ \begin{pmatrix} \ddot x \cr \ddot y\end{pmatrix}_{sal} &\!\!\!=\!\!\!& \begin{pmatrix} \ddot x \cr \ddot y\end{pmatrix}_{ref} - {\dot\alpha}^2 \bf{R} \begin{pmatrix} \Delta \tilde x \cr \Delta \tilde y\end{pmatrix} + \ddot\alpha \bf{R}^\prime \begin{pmatrix} \Delta \tilde x \cr \Delta \tilde y\end{pmatrix} } with \al{ \bf{R}^\prime &\!\!\!=\!\!\!& {d\over d\alpha} \bf{R} = \begin{pmatrix} -\sin\alpha & -\cos\alpha \cr \cos\alpha & -\sin\alpha \end{pmatrix} \nonumber\\ \dot\alpha &\!\!\!=\!\!\!& { \dot x \ddot y - \dot y \ddot x \over \dot x^2 + \dot y^2 } \nonumber\\ \ddot\alpha &\!\!\!=\!\!\!& 2 { \dot x \dot y ( {\ddot x}^2 - {\ddot y}^2 ) - \ddot x \ddot y ( {\dot x}^2 - {\dot y}^2 ) \over (\dot x^2 + \dot y^2)^2 } + { \dot x \dddot y - \dot y \dddot x \over \dot x^2 + \dot y^2 } \label{eq_ddot_alpha} } Note that this transformation is non-linear, hence propagation of a multivariate Gaussian distribution by this transformation will result in a non-Gaussian distribution. \end{appendix} \bibliographystyle{IEEEtran}
\section{System and Adversary Model} \label{sec:model} We assume a system equipped with Intel SGX, i.e., a hardware mechanism to isolate data and execution of a software component from the rest of the system's software that is considered untrusted. The resources which are used to execute the isolated component (or enclave), however, are shared with the untrusted software on the system. The system's resources are managed by untrusted, privileged software. In this work, we assume a system running Linux. For managing enclaves the system relies on the Intel SGX Software Development Kit (SDK). \cref{fig:highlevel} shows an abstract view of the adversary model: an enclave executing on a system with a compromised operating system, sharing a CPU core with an attacker process (Prime+Probe\xspace). The adversary's objective is to learn secret information from the enclave, e.g., a secret key generated \emph{inside} the enclave through a hardware random number generator, or sensitive data supplied to the enclave \emph{after} initialization through a secure channel. The attacker leverages his control over the system to minimize noise in the side channel. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{Highlevel.pdf} \caption{High-level view of our side channel attack; the victim enclave and the attacker's Prime+Probe\xspace code are running in parallel on a dedicated core.
The attacker-controlled OS ensures that no other code is executed on that core to minimize noise in its L1/L2 cache.} \label{fig:highlevel} \vspace*{-.4cm} \end{figure} \vspace{0.2cm} \noindent\textbf{Adversary capabilities.} The adversary is in control of all system software, except for the software executed inside the enclave.\footnote{Due to integrity verification, the adversary cannot modify the software executed inside the enclave, since SGX remote attestation would reveal tampering.} Although the attacker cannot control the program inside the enclave, he does know the initial state of the enclave, i.e., the program code of the enclave and the initial data. The attacker knows the mapping of memory addresses to cache lines and can reinitialize the enclave and replay inputs, hence, run the enclave arbitrarily often. Further, since the adversary has control over the OS, he controls the allocation of resources to the enclave, including the time of execution, and the processing unit (CPU core) the enclave is running on. Similarly, the adversary can arbitrarily configure the system's hardware, e.g., define the system's behavior on interrupts, or set the frequency of timers. However, the adversary cannot directly access the memory of an enclave. Moreover, he cannot retrieve the register state of an enclave, either during the enclave's execution or on interrupts. \vspace{0.2cm} \noindent\textbf{Attack scenarios.} We consider two attack scenarios in this work (\cref{sec:attack} and \cref{sec:genome}). The attacker knows the code and memory layout of the victim enclave, and hence knows the memory locations accessed by the victim enclave. The access pattern to the different memory locations allows him to draw conclusions about sensitive data processed by the victim. For instance, a cryptographic algorithm uses precomputed data stored in different memory locations and accesses these values depending on the secret key.
The attacker observing the order of accesses to the precomputed values learns the key. Similarly, an algorithm that inserts genomic data into a hash table allows the attacker to observe the insertion of genome sequences by monitoring which parts of the table are accessed. This allows the attacker to detect subsequences within the genome that can be used, for instance, to identify persons. \section{RSA Decryption Attack} \label{sec:attack} \renewcommand{\algorithmicrequire}{\textbf{Input: }} \renewcommand{\algorithmicensure}{\textbf{Output: }} In this section we describe how we apply the above attack techniques to the canonical example of key recovery from RSA decryption. We first describe our victim algorithm and implementation, then our attack details, and finally the key extraction results. \subsection{Victim Enclave} \label{subsec:victim} As our victim enclave we chose an RSA implementation from the Intel IPP crypto library in the Intel SGX SDK. The attacked decryption variant is a fixed-size sliding-window exponentiation; the code is available online at \cite{intel-exp-fast}. The Intel IPP library also includes a variant of RSA that is hardened against cache attacks~\cite{intel-exp-secure}. We discuss such defenses and their limitations in \cref{sec:countermeasures}. In this section we focus on demonstrating how effective our attack techniques can be against standard cryptographic implementations. The chosen decryption algorithm uses the Chinese Remainder Theorem (CRT) optimization, where two values $d_p$ and $d_q$ are pre-computed from the private key primes $p$ and $q$. To decrypt a message, separate exponentiation operations are performed using $d_p$ and $d_q$. For our experiments we use an RSA key size of 2048 bits, which means that the decryption performs two 1024-bit exponentiations.
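The CRT optimization is easy to illustrate; the following toy sketch (textbook parameters, not the library code, and Python's built-in \texttt{pow} in place of the windowed exponentiation) shows how $d_p$ and $d_q$ are derived and recombined:

```python
# Toy CRT-RSA decryption (hypothetical textbook parameters; real keys
# use 1024-bit primes p and q).
p, q = 61, 53
N = p * q                          # public modulus, 3233
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

# Pre-computed CRT exponents and recombination constant
dp, dq = d % (p - 1), d % (q - 1)
q_inv = pow(q, -1, p)

def decrypt_crt(c):
    m1 = pow(c, dp, p)             # exponentiation with d_p
    m2 = pow(c, dq, q)             # exponentiation with d_q
    h = (q_inv * (m1 - m2)) % p    # Garner recombination
    return m2 + h * q

c = pow(42, e, N)                  # encrypt the message 42
print(decrypt_crt(c))              # 42
```

Each decryption performs two half-size exponentiations; in the attacked implementation each of these is a fixed-window exponentiation over a multiplier table, and the table accesses are what leak through the cache.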
\begin{algorithm}[th] \caption{Fixed-window exponentiation} \label{alg:exp} \algorithmicrequire{$a, e, N \in \mathbb{N}$} \newline \algorithmicensure{$x \gets {a}^{e} \mod N$} \begin{algorithmic}[1] \State Precompute $g[i] \gets a^i$ for $1 \leq i < 2^k$ \State Let $e = (e_j, e_{j-1}, \ldots, e_1, e_0)$ be the base $2^k$ representation of the exponent $e$ with $e_j \neq 0$ \State Initialize $x \gets g[e_j]$ \For {$i \gets j - 1$ \textbf{ down to } 0} \State $x \gets x^{2^k} \mod N$ \If {$e_i \neq 0$} \State $x \gets g[e_i] \cdot x \mod N$ \EndIf \EndFor \end{algorithmic} \end{algorithm} Pseudo code of the targeted exponentiation algorithm is shown in Algorithm~\ref{alg:exp}. Inputs of the algorithm are the base value $a$, the exponent $e$ (when CRT is used, $d_p$ or $d_q$), and the public parameter $N$. The first step of the algorithm is a pre-computation of a multiplier table $g$ from the base value $a$. After that, a base-$2^k$ representation of the exponent $e$ is computed, i.e., the exponent is divided into $\lceil n/k \rceil$ windows $(e_j, e_{j-1}, \ldots, e_1, e_0)$ of fixed size $k$ bits each, where $n$ is the bit length of $e$. The algorithm iterates over all exponent windows starting from the most significant window (line 4 in Algorithm~\ref{alg:exp}) and, depending on the window value, it may perform a multiplication with a value from the multiplier table $g$. The value of the exponent window determines which pre-computed multiplier is accessed from the table $g$ on each iteration (line 7). Figure~\ref{fig:rsa} illustrates memory accesses and cache updates in the algorithm. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{Multiplier.pdf} \caption{Memory accesses and cache updates in RSA exponentiation.
The processed window value from exponent $e$ determines the accessed entry from table $g$ in memory which defines the updated cache line.} \label{fig:rsa} \vspace*{-.4cm} \end{figure} We compiled the RSA decryption implementation as an enclave with default optimization flags and compiler settings. When started, the enclave decrypts a single encrypted message. The private key was randomly chosen. \begin{figure*}[th] \centering \includegraphics[trim={6cm 2.2cm 5cm 0.9cm}, clip, width=\textwidth]{RSA-multipliers.png} \caption{Access patterns of RSA key multipliers. Each dot represents 16 repeated memory accesses that correspond to a single multiplier in the precomputed table (see Algorithm~\ref{alg:exp}) and are observed from two monitored cache sets. We plot each monitored multiplier with a separate color. The monitoring process for each multiplier is repeated 15 times and each horizontal row in the plot represents one complete monitoring round. Most multiplier accesses are clearly distinguishable as separate colored vertical lines.} \label{fig:multipliers} \vspace*{-.2cm} \end{figure*} \subsection{Attack Details} \label{subsec:attack-details} Our attack proceeds as follows. Using the attack techniques described in Section~\ref{sec:design}, we monitor a single multiplier access at a time. Because each pre-computed multiplier is 1024 bits, this memory range corresponds to two cache sets. We probe two monitored cache sets every $c$ cycles and divide the observed memory accesses into epochs of $p$ probes. Because each multiplier in the table is 1024 bits, accessing the multiplier causes 16 repeated memory accesses to the memory range of the table entry. If we observe 16 repeated accesses within one epoch, we mark the multiplier as a potential access candidate. We repeat this process for a subset of all possible multipliers (10 out of 16 in our case), because extracting a sufficiently large fraction of the key bits is enough to derive the entire key. 
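The per-epoch detection rule can be sketched as follows (a toy illustration; in the attack the trace entries come from the Prime+Probe\xspace measurements of the two monitored cache sets, and the threshold of 16 corresponds to the repeated accesses of one 1024-bit multiplier):

```python
# Detect candidate multiplier accesses in a probe trace.
# trace[i] = number of observed accesses to the monitored cache-set pair
# in probe i; an epoch is p consecutive probes.
def access_epochs(trace, p=33, repeats=16):
    candidates = []
    for e in range(len(trace) // p):
        epoch = trace[e * p:(e + 1) * p]
        if sum(epoch) >= repeats:      # 16 repeated accesses -> candidate
            candidates.append(e)
    return candidates

# Synthetic trace: 4 epochs, accesses only in epochs 1 and 3
trace = [0] * 33 + [1] * 16 + [0] * 17 + [0] * 33 + [2] * 8 + [0] * 25
print(access_epochs(trace))            # [1, 3]
```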
We also observed significant cache access interference in some of the monitored cache sets\footnote{Presumably caused by the victim.}, and therefore we opted not to monitor them. Finally, we repeat the entire process $t$ times. Through experiments we observed that monitoring every $c=500$ cycles and dividing the monitoring into epochs of $p=33$ probes gave accurate results. To extract a sufficiently large fraction of the key, we needed to repeat the process $t=15$ times. Monitoring more than one multiplier at a time decreased multiplier access detection accuracy significantly. Similarly, performing monitoring more often than every $c=500$ cycles caused significant noise in measurements. The monitoring epoch $p=33$ probes was determined by the average execution time of a single exponentiation iteration. \subsection{Attack Results} \label{subsec:rsa-results} Figure~\ref{fig:multipliers} shows our results on extracting the accessed pre-computed multipliers, which in turn determine the private key. Each colored dot represents a multiplier access candidate. We plot different candidates in separate colors. Each horizontal row in the plot represents one complete monitoring round, where the monitoring process is performed separately for each multiplier (two cache sets). Because the entire monitoring process is repeated $t=15$ times, the plot has 15 horizontal lines. As can be seen from Figure~\ref{fig:multipliers}, most multiplier accesses are clearly distinguishable as colored vertical lines. To recover the multiplier access pattern, we analyze this plot manually. We use a simple heuristic of determining an access: if more than half of the monitoring rounds have the same value for the same epoch, we consider this value the accessed multiplier. If we observe no multiplier accesses in one epoch, then we conclude that the exponent window for this iteration of the exponentiation was zero (line 6 in Algorithm~\ref{alg:exp}).
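This majority-vote heuristic can be written down compactly (an illustrative sketch; \texttt{None} marks an epoch of a round without a detected access):

```python
from collections import Counter

def vote(rounds):
    """rounds[r][e] = multiplier seen in epoch e of round r (None = no access)."""
    n_rounds, n_epochs = len(rounds), len(rounds[0])
    windows = []
    for e in range(n_epochs):
        seen = Counter(r[e] for r in rounds if r[e] is not None)
        if seen and seen.most_common(1)[0][1] > n_rounds // 2:
            windows.append(seen.most_common(1)[0][0])   # majority value
        else:
            windows.append(0)   # no majority access -> zero window
    return windows

rounds = [[3, None, 7], [3, None, 7], [3, 5, None]]
print(vote(rounds))             # [3, 0, 7]
```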
From the multipliers we construct a key candidate and compare it to the private key used by the enclave. Our attack extracts 70\% of the key bits correctly. This matches the fraction of monitored cache sets $\frac{10+1}{16}=0.69$, where the $+1$ comes from the fact that we learn the exponent window value zero without monitoring. From the extracted key bits, the complete private key can be efficiently recovered \cite{boneh-98}. The closest previous cache attack is by Liu et al.~\cite{Liu2015}.\footnote{Percival~\cite{Per2005} demonstrates an attack against CRT RSA using a sliding window on the L1 cache, but does not report the number of decryptions.} They attack a sliding-window RSA implementation through the Last Level Cache (LLC), because the attacker and the victim are running in different VMs. They are able to extract the key with tens of thousands of repeated decryptions, while we need 300 decryptions (10 observed multipliers, 15 repetitions, and two exponents). Although these two attack scenarios are not directly comparable, they do demonstrate that cache-based side-channel vulnerabilities are more severe in the SGX attacker model. \section{Background} \label{sec:background} This section provides the necessary background for the rest of the paper. We will start by describing Intel SGX, followed by a description of the cache architecture of current Intel x86 processors. Afterwards we will introduce performance monitoring counters (PMC), a hardware feature that allows software to retrieve information about the state of hardware units. \subsection{Intel SGX} \label{sec:background:sgx} SGX introduces a set of new CPU instructions for creating and managing isolated software components~\cite{Intel_SGX1,SGX_Ref}, called \emph{enclaves}, that are isolated from all software running on the system including privileged software like the operating system (OS) and hypervisor.
SGX assumes the CPU itself to be the only trustworthy hardware component of the system, i.e., enclave data is handled in plain-text only \emph{inside} the CPU. Data is stored unencrypted in the CPU's caches and registers; however, whenever data is moved out of the CPU, e.g., into the DRAM, it is encrypted and integrity protected. This protects enclaves, for instance, from being attacked by malicious hardware components with direct memory access (DMA). The OS, although untrusted, is responsible for creating and managing enclaves. It allocates memory for the enclaves from a dedicated region of the physical memory called Enclave Page Cache (EPC). It manages virtual to physical address translation for the enclave's memory and copies the initial data and code into the enclave. However, all actions of the OS are recorded securely by SGX and can be verified by an external party through (remote) attestation~\cite{Intel_SGX3}. The sealing capability of SGX enables the persistent secure storage of enclave data, such that the data is only available to correctly created instances of one specific enclave. During the runtime of an enclave, the OS can interrupt and resume the enclave like a normal process. Usually, upon an interrupt the OS is responsible for storing the current register content (context) of the interrupted process to free the registers for use by the OS itself. To prevent information leakage, SGX handles the context saving of enclaves in hardware and erases the register content before passing control to the OS; this is called an asynchronous enclave exit (AEX). When an enclave is resumed, again the hardware is responsible for restoring the enclave's context, preventing manipulations. \subsection{Cache Architecture} \label{sec:cache} In the following we will describe details of the Intel x86 cache architecture~\cite{Intel-manual,OptManual2012} required to understand the rest of the paper.
We focus on the cache architecture of the Intel Skylake processor generation, i.e., the type of CPU we used for our implementation and evaluation.\footnote{At the time of writing Intel SGX is only available on Intel Skylake and Kaby Lake CPUs, hence, only those two processor generations are relevant for this work. To the best of our knowledge there are no differences in the cache architecture between Skylake and Kaby Lake.} Memory caching ``hides'' the latency of memory accesses to the system's dynamic random access memory (DRAM) by keeping a copy of currently processed data in the cache. When a memory operation is performed, the cache controller checks whether the requested data is already cached; if so, the request is served from the cache, called a \emph{cache hit}, otherwise it is a \emph{cache miss}. Due to higher cost (production, energy consumption), caches are orders of magnitude smaller than DRAM. Hence, only a subset of the memory content can be present in the cache at any point in time. The cache controller aims to maximize the cache hit rate by predicting which data are used next by the CPU core. This prediction is based on the assumption of temporal and spatial locality of memory accesses. \begin{figure}[t] \centering \includegraphics[width=0.75\linewidth]{Cache.pdf} \caption{Cache hierarchy and configuration of Intel Skylake processors. The L3 cache is inclusive, i.e., all data stored in any per-core L1/L2 is also stored in L3. The L1 cache is divided into separate parts for data and instructions.} \label{fig:cache_overview} \vspace*{-0.4cm} \end{figure} \cref{fig:cache_overview} shows the mapping of the main memory to the cache. For each memory access the cache controller has to check if the data are present in the cache. Sequentially iterating through the entire cache would be very expensive.
Therefore, the cache is divided into \emph{cache lines} and for each memory address the corresponding cache line can be quickly determined: the lower bits of a memory address select the cache line. Hence, multiple memory addresses map to the same cache line; in \cref{fig:cache_overview} the first line of each \emph{cache page} in memory maps to the first cache line. Having one cache entry per cache line quickly leads to conflicts, i.e., if memory from the first line of pages $0$ and $m-1$ is used at the same time, the accesses conflict and the controller must evict data from a cache line to replace it with the newly requested data. Current Intel CPUs have a three-level hierarchy of caches (\cref{fig:cache_overview}). The last level cache (LLC), also known as level 3 (L3) cache, is the largest and slowest cache; it is shared between all CPU cores. Each CPU core has a dedicated L1 and L2 cache, but they are shared between the core's Simultaneous Multithreading (SMT) execution units (also known as hyper-threading). A unique feature of the L1 cache is the separation into data and instruction cache. Code fetches only affect the instruction cache and leave the data cache unmodified, and the other way around for data accesses. In the L2 and L3 caches code memory and data memory compete for the available cache space. \subsection{Performance Monitoring Counters} \label{sec:background:pmc} Performance Monitoring Counters (PMCs) are a feature of the CPU for recording hardware events. Their primary goal is to give software developers insight into their program's effects on the hardware in order for them to optimize their programs. The CPU has a set of PMCs, which can be configured to count different events, for instance, executed cycles, cache hits or cache misses for the different caches, mis-predicted branches, etc. PMCs are configured by selecting the event to monitor as well as the mode of operation.
This is done by writing to model specific registers (MSRs), which can only be written with the \texttt{WRMSR} instruction (write to model specific register). Hence, PMCs can only be set up by privileged software. PMCs are read via the \texttt{RDPMC} instruction (read performance monitoring counters), which can be configured to be available in unprivileged mode.\footnote{``CR4.PCE -- Performance-monitoring counter enable. Enables execution of the RDPMC instruction at any protection level''~\cite{Intel-manual}.} Hardware events recorded by PMCs could be misused as side-channels, e.g., to monitor cache hits or misses of a victim process or enclave. Therefore, SGX enclaves can disable PMCs on entry by activating a feature known as ``Anti Side-channel Interference'' (ASCI)~\cite{Intel-manual}. This suppresses all thread-specific performance monitoring, except for fixed cycle counters. Hence, hardware events triggered by an enclave cannot be monitored through the PMC feature. For instance, cache misses of memory loaded by the enclave will not be recorded in the PMCs. \section{Conclusion} \label{sec:conclusion} Researchers have assumed that SGX may be vulnerable to cache-based information leakage. However, before our work, the practicality and the extent of such leakage were not well understood. In this paper we have demonstrated that cache attacks on SGX are indeed a serious concern. Our goal was to develop an attack that cannot be mitigated by the known countermeasures, and therefore we mount the attack on uninterrupted enclave execution. Such an attack approach involves significant technical challenges. To address them, we developed a set of novel noise reduction techniques. We demonstrated them on RSA decryption and human genome indexing. Our attacks are more efficient than previous cache attacks and harder to mitigate than previous SGX side-channel attacks. \section{Our Attack Design} \label{sec:design} Our attack is based on the Prime+Probe\xspace cache side-channel attack technique.
We will first explain the ``classical'' variant of Prime+Probe\xspace, then we discuss our improvements to that approach. \subsection{Prime+Probe\xspace} \label{sec:background:sidechannel} All cache-based side-channel attacks follow a similar approach. The victim application and the attacker compete for the available cache, either by executing concurrently or interleaved. The attacker aims to learn about the victim's cache usage by observing the effects of cache availability on its own program. Different attack techniques have been developed that operate on different caches (L1 -- L3, instruction caches, virtual memory translation caches, etc.). \begin{figure*}[th] \centering \includegraphics[width=.75\textwidth]{prime_probe.pdf} \vspace*{-.2cm} \caption{Prime+Probe\xspace side-channel attack technique; first the attacker primes the cache, next the victim executes and occupies some of the cache, afterwards the attacker probes to identify which cache lines have been used by the victim. This information allows the attacker to draw conclusions about secret data processed by the victim process.} \label{fig:prime_and_probe} \vspace*{-.2cm} \end{figure*} For our attack we adapted the Prime+Probe\xspace approach for learning about the victim's memory accesses; \cref{fig:prime_and_probe} shows the main steps. First, the attacker \emph{primes} the cache, i.e., the attacker accesses memory such that the entire cache is filled with data of the attacker process. At time $t_0$ the attacker writes to all cache lines, e.g., on a current x86 CPU he writes to $4\,KB$ of consecutive memory.\footnote{To prime all cache sets the attacker needs to write to $\#cachesets$ cache pages, see \cref{sec:cache} for details.} Afterwards, at time $t_1$, the victim executes code with memory accesses that depend on the sensitive data processed by the victim. In this example the victim processes a cryptographic key, which is sensitive data.
The victim accesses different memory locations depending on the currently processed key-bit. In the example in \cref{fig:prime_and_probe} the key-bit is zero, therefore address $X$ is read. Address $X$ is mapped to \emph{cache line 2}, hence, the data stored at $X$ are loaded into the cache and the data that were present in that cache line before get evicted. However, the data at address $Y$ are not accessed and therefore the data in \emph{cache line 0} remain unchanged. At time $t_2$ the attacker probes which of his cache lines got evicted, i.e., which cache lines were used by the victim. A common technique to check for cache line eviction is to measure access times: The attacker reads from memory mapped to each cache line and measures the access time. If the read operation returns the data fast, they were still cached; if the read operation takes longer, the data were evicted from the cache. In the example in \cref{fig:prime_and_probe}, the attacker will observe an increased access time for \emph{cache line 2}. Since the attacker knows the code and access pattern of the victim, he knows that address $X$ of the victim maps to \emph{cache line 2}, and that the sensitive key-bit must be zero. The attacker repeats this cycle for each sensitive key-bit processed by the victim and thus learns all bits of the key. \subsection{Prime+Probe\xspace for SGX} Extracting information through a side-channel is challenging due to noise. The core idea of our attack is to reduce this noise. We exploit the design of SGX where the OS (adversary) has control over the system configuration, and the scheduling and management of enclaves. As mentioned before, we adapt the Prime+Probe\xspace approach to identify cache conflicts, which we use as a side-channel, i.e., we infer the victim's access to specific memory addresses based on the presence or absence of the corresponding entries in the cache.
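This eviction-based inference can be illustrated with a toy simulation. The following Python sketch models a direct-mapped cache with hypothetical parameters (8 lines of 64 bytes; real L1 caches are set-associative and much larger), so it demonstrates the prime/victim/probe cycle, not real hardware behavior:

```python
# Toy simulation of one Prime+Probe cycle on a direct-mapped cache.
# Parameters are hypothetical: 8 cache lines of 64 bytes each.
LINE_SIZE = 64
NUM_LINES = 8

def cache_line(addr):
    # The lower bits select the line: drop the offset bits, keep the index bits.
    return (addr // LINE_SIZE) % NUM_LINES

def prime():
    # t0: the attacker fills every line with his own data ("A" = attacker-owned).
    return ["A"] * NUM_LINES

def victim_access(cache, addr):
    # t1: the victim's load evicts whatever occupied the corresponding line.
    cache[cache_line(addr)] = "V"

def probe(cache):
    # t2: lines no longer holding attacker data were evicted by the victim.
    return [i for i, owner in enumerate(cache) if owner != "A"]

cache = prime()
X = 2 * LINE_SIZE        # secret-dependent address: key-bit 0 -> victim reads X
victim_access(cache, X)
print(probe(cache))      # -> [2]: the attacker infers the key-bit
```

Mapping the observed line index back to the known layout of the victim's code (address $X$ maps to line 2) yields the key-bit, exactly as in the figure.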
To detect whether a cache line was used by the victim, the attacker accesses the same cache line and checks if his own cache entry was evicted, i.e., if the victim used that cache line. To minimize the noise in the side-channel, we ensure that the cache is isolated and not affected by any system component except the victim enclave. \cref{fig:highlevel} shows our approach to isolate the victim enclave on a dedicated CPU core, which only executes the victim and our attacker Prime+Probe\xspace code. This way the per-core caches (L1/L2) are not influenced by any other process. Furthermore, we need to ensure that the operating system itself does not pollute the cache of our \emph{attack core}. \paragraph{Challenges.} Reducing the noise of the cache side-channel faces a number of technical challenges: \begin{compactitem} \item[1.] Isolating the attack core from use by other processes \item[2.] Minimizing cache pollution caused by the victim itself \item[3.] Running the victim uninterrupted to counter side-channel protection techniques and to prevent cache pollution by the OS \item[4.] Reliably identifying cache evictions caused by the victim \item[5.] Performing cache monitoring at a high frequency \end{compactitem} Below we explain how we tackled each of these challenges. \subsection{Noise Reduction Techniques} \paragraph{(1.) Isolated attack core.} By default Linux schedules all processes of a system to run on any available CPU core, thereby affecting all caches. The attacker cannot distinguish between cache evictions caused by the victim and those caused by any other process. Which processes can cause an eviction differs depending on whether one considers the Last Level Cache (LLC), i.e., the Level~3 (L3) cache,\footnote{The LLC is synonymous with the Level~3 (L3) cache in current processors.} or the Level~1 and Level~2 (L1/L2) caches.
By modifying the Linux scheduler, the adversary can make sure that one core (we call it the \emph{attacker core}) is exclusively used by the victim and the attacker (``Core 0'' in \cref{fig:highlevel}). This way no other process can pollute this core's L1/L2 cache. Besides other processes, the OS can pollute the cache as well; we discuss this challenge below. \paragraph{(2.) Self-pollution.} The attacker needs to observe specific cache lines that correspond to memory locations relevant for the attack. From the attacker's point of view it is undesirable if those cache lines are used by the victim for any reason other than accessing these specific memory locations, e.g., by accessing unrelated data or code that map to the same cache line. In our attack we use the L1 cache. It has the advantage of being divided into a data cache (L1D) and an instruction cache (L1I). Therefore, code accesses, regardless of the memory location of the code, never map to the cache lines of interest to the attacker. Victim accesses to unrelated data mapping to relevant cache lines lead to noise in the side-channel. \paragraph{(3.) Uninterrupted execution.} Interrupting the victim enclave causes two relevant problems. (1) When an enclave is interrupted, an asynchronous enclave exit (AEX) is performed and the operating system's interrupt service routine (ISR) is invoked (see \cref{sec:background:sgx}). Both the AEX and the ISR use the cache and hence induce noise into it. (2) By means of transactional memory accesses an enclave can detect that it has been interrupted. This feature has been used for a side-channel defense mechanism~\cite{t-sgx,incognito2017}. We discuss the details in \cref{sec:countermeasures}. Hence, letting the enclave execute uninterrupted ensures that it remains unaware of the side-channel attack. In order to monitor the changes in the victim's cache throughout the execution, we need to access the cache of the attack core in parallel.
For this we execute the attacker code on the same core. The victim is running on the first SMT (Simultaneous Multithreading) execution unit while the attacker is running on the second SMT execution unit (see \cref{fig:highlevel}). As the victim and attacker code compete for the L1 cache, the attacker can observe the victim's effect on the cache. The attacker code, like the victim code, must be executed uninterrupted by the OS. However, interrupts usually occur at a high frequency, e.g., due to arriving network packets, user input, etc. By default interrupts are handled by all available CPU cores, including the attack core, and thus the victim and attacker code are likely to be interrupted. The OS code executed on arrival of an interrupt would pollute the cache, or the victim enclave could detect its interruption, assume an attack, and stop itself. To overcome this problem we configured the interrupt controller such that interrupts are not delivered to the attack core, i.e., it can run uninterrupted. The only exception is the timer interrupt, which is delivered per-core. Each CPU core has a dedicated timer and the interrupt generated by the timer can only be handled by the associated core. However, we reduced the interrupt frequency of the timer to $100\,Hz$, which allows victim and attacker code to run for $10\,ms$ uninterrupted. This time frame is sufficiently large to run the complete attack undisturbed (with high probability).\footnote{If an interrupt happens to occur during the attack phase, the run can be repeated. If the attack phase is longer than $10\,ms$ the timer frequency can be reduced further.} As a result, the OS is not executed on the attack core while the attack is in progress, which is shown by the dashed-line OS-box above the attack core in \cref{fig:highlevel}. Also, the victim is not interrupted, thus, it remains unaware of the attack. \paragraph{(4.
Monitoring cache evictions.} In previous Prime+Probe\xspace attacks, the attacker determines the eviction of a cache line by measuring the time required for accessing memory that maps to that cache line. These timing-based measurements represent an additional source of noise in the side-channel. Distinguishing between a cache hit and a miss requires precise time measurements; for instance, an L1 cache hit takes at least $4~cycles$. If the data got evicted from the L1 cache, they can still be present in the L2 cache. In this case, when the data are accessed, they will be read from the L2 cache, which takes $12~cycles$ in the best case.\footnote{Reported values for the Skylake architecture, however, ``Software-visible latency will vary depending on access patterns and other factors''~\cite{OptManual2012}.} This small difference in access times makes it challenging to distinguish between a cache hit in the L1 cache and a cache miss in L1 that is served from the L2 cache. Reading the time stamp counter to determine the access time itself suffers from noise on the order of the effect to be observed. Thus, when the timing measurement does not allow for a definitive distinction between a cache hit and a cache miss, the observation has to be discarded. To eliminate this noise we use the existing Performance Monitoring Counters (PMCs) to determine if a cache line got evicted by the victim. This is possible in the SGX adversary model because the attacker controls the OS and can freely configure and use the PMCs. The intuitive approach of directly monitoring the victim's cache-related events is prevented by the fact that PMCs are disabled for enclave code (cf. \cref{sec:background:pmc}). However, the attacker's Prime+Probe\xspace code shares the cache with the victim. The attacker primes the entire cache before the victim is executed. Next the victim executes and evicts a subset of cache lines. Hence, when the attacker probes the cache, these lines will result in cache misses.
The attacker uses PMCs to identify these cache misses, learning which cache lines were used by the victim. \paragraph{(5.) Monitoring frequency.} As discussed before, the victim should run uninterrupted while its cache accesses are monitored in parallel. Hence, we need to execute priming and probing of the cache at a high frequency so as not to miss relevant cache events. In particular, probing each cache line to decide whether it has been evicted by the victim is time consuming and leads to a reduced sampling rate. The required monitoring frequency depends on the frequency at which the victim is accessing the secret-dependent memory locations. To not miss any access the attacker has to complete one prime and probe cycle before the next access occurs. In our implementation the access to the PMCs is the most expensive operation in the Prime+Probe\xspace cycle. To tackle this challenge we monitor individual cache lines (or a small subset of them) over the course of multiple executions of the victim. In the first run we learn the victim's accesses to the first cache line, in the second run accesses to the second cache line, and so on. By aligning the results of all runs we learn the complete cache access pattern of the victim. \section{Discussion} \label{sec:discussion} \paragraph{Other algorithms.} In this paper we have demonstrated information leakage through secret-dependent data accesses in RSA decryption and human genome indexing. Both of these target algorithms construct a table that is repeatedly accessed while the algorithm processes the confidential data. The same high-level algorithmic pattern is not limited to these two applications, but is also found in many other domains, such as database indexing, compression, and image processing. Based on our results, there is reason to believe that many of these algorithms would be vulnerable to cache-based information leakage, but we leave the demonstration of practical attacks as future work.
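The table-based pattern can be made concrete with a toy sketch. The following Python fragment is a simplified fixed-window modular exponentiation (a hypothetical 4-bit-window variant, not the SDK's sliding-window code); the `trace` list stands in for the sequence of table entries, and hence cache sets, an attacker would observe:

```python
# Sketch of the leaky pattern: a precomputed table indexed by secret windows.
def modexp_windowed(base, exp_windows, mod, trace):
    table = [pow(base, i, mod) for i in range(16)]  # precomputed powers of base
    result = 1
    for w in exp_windows:           # w is a 4-bit window of the secret exponent
        result = pow(result, 16, mod)
        trace.append(w)             # models the load of table[w]: the access
        result = (result * table[w]) % mod  # lands in a w-dependent cache set
    return result

trace = []
windows = [0xB, 0x3, 0xE]           # secret exponent 0xB3E, split into windows
r = modexp_windowed(7, windows, 1000003, trace)
# An attacker observing which table entries hit the cache recovers `trace`,
# i.e., the windows of the secret exponent.
```

A hash-table index over a genome exhibits the same structure: the index of each lookup is a function of the (confidential) input sequence.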
\paragraph{Lessons learned.} Through our experiments we observed certain key factors that determine how vulnerable a particular algorithm is to cache-based information leakage. The size of the constructed table determines whether, and how many, table entries map to the same cache set, and thus cause increased cache monitoring interference. The frequency of table accesses defines the available time budget for monitoring in each iteration of the algorithm, and thus the probability of catching the data access. Large table entries and repeating patterns in the processed confidential data cause repeated data accesses that make the algorithm (and data) more vulnerable to our attacks. \section{Countermeasure Analysis} \label{sec:countermeasures} In this section we discuss potential countermeasures against cache-based side-channel attacks and elaborate on their applicability to the protection of SGX enclaves. \paragraph{Cache disabling.} The most straightforward countermeasure against cache-based side channels is to disable caching entirely~\cite{Aciicmez2010}. This approach, however, defeats the performance optimization for which cache memory was intended in the first place, resulting in severe performance degradation. A more fine-grained approach is to disable the cache only when security-critical code is scheduled for execution. In the context of SGX, this would mean disabling caching during enclave execution, which may still be prohibitively expensive given that SGX enclaves may need to process large datasets (e.g., human DNA), perform expensive computation (e.g., cryptography), or run large applications. For instance, the Haven architecture~\cite{baumann_shielding_2014} loads an entire database management system (DBMS) into an enclave. \paragraph{Architectural changes to cache organization.} Other approaches propose to mitigate cache-based side channels with low overhead through a redesign of the cache hardware.
The first line of work proposes new cache designs that apply the idea of cache partitioning, so that security-sensitive code never shares cache memory with untrusted processes (e.g., \cite{Page2003,Page05partitionedcache,WangLee2006,WangLee2007,DoJaLoAbPo2012}), while another concentrates on access randomization within the cache memory~\cite{WangLee2007,WangLee2008,Keramidas2008,LiLe2014}. However, these approaches would require a radical change to current cache designs, which cannot be easily implemented in practice. In particular, Intel processors with SGX extensions do not implement any countermeasures against cache side-channel attacks at the architectural level. Sanctum~\cite{sanctum} flushes the L1 cache on switches between enclave and non-enclave mode. This approach does not stop our attack since our attack runs in parallel to the enclave. The enclave is not interrupted to probe the cache, and hence, no mode switch and no cache flushing is triggered. \paragraph{Obfuscation techniques.} \label{sec:countermeasures:oblivious} The state-of-the-art obfuscation technique to defeat information leakage via side channels is Oblivious RAM (ORAM) \cite{GolOst1996,stefanov-ccs13,permuteRAM}, which provides means to hide the memory access patterns of programs by continuously shuffling and re-encrypting data as they are accessed in RAM, on disk, or on a remote server. ORAM is typically applied in server-client models and requires the client to store some state that is updated throughout the subsequent execution. While one could think of using similar techniques for cache protection, they are not directly applicable, as it is challenging to store the ORAM's internal state securely. Without hardware support this would require storing the client state in a cache side-channel oblivious way, which is unfeasible given the small size of a cache line.
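As a caricature of the shuffling idea only (not a real ORAM construction, which additionally re-encrypts data and typically uses tree-based layouts), the following Python sketch keeps data in randomly permuted slots and re-shuffles after every access, so that the physical slot an observer sees touched is decoupled from the logical index requested. The class and method names are illustrative:

```python
import random

class ShuffledStore:
    """Toy store: data lives in permuted slots, re-shuffled on every access."""
    def __init__(self, data, rng):
        self.rng = rng
        self.slots = list(data)
        self.pos = list(range(len(data)))   # pos[logical] = physical slot
        self._shuffle()

    def _shuffle(self):
        # Pick a fresh permutation and move every value to its new slot.
        perm = list(range(len(self.slots)))
        self.rng.shuffle(perm)
        new_slots = [None] * len(self.slots)
        for logical, old_slot in enumerate(self.pos):
            new_slots[perm[logical]] = self.slots[old_slot]
        self.slots, self.pos = new_slots, perm

    def read(self, logical_index, observed):
        slot = self.pos[logical_index]
        observed.append(slot)               # what a side-channel observer sees
        value = self.slots[slot]
        self._shuffle()                     # hide correlation across accesses
        return value

store = ShuffledStore([100, 200, 300, 400], random.Random(0))
observed = []
assert store.read(1, observed) == 200
assert store.read(1, observed) == 200  # same logical index, but the observed
                                       # physical slots are independent of it
```

The sketch also makes the cost visible: the position map (`pos`) is exactly the client state that, for cache protection, would itself have to be stored obliviously.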
Other obfuscation techniques suggest performing periodic scrubbing and flushes of shared caches~\cite{Zhang2013} or adding noise to memory accesses~\cite{Page2003,Osv2006} to interfere with the signal observable by the attacker. These techniques, however, introduce a significant overhead and do not necessarily eliminate the attack we presented. In particular, these countermeasures are less effective on systems supporting simultaneous multithreading, where two threads or processes execute truly simultaneously rather than in a time-sharing fashion. In this case the attacker process running in parallel with the victim can still observe memory access patterns between scrubbing and flushing rounds. Furthermore, an attacker may collect multiple execution traces and process them to filter out the injected noise. \paragraph{Application-level hardening.} \label{sec:countermeasures:hardening} Application-level hardening techniques modify applications in order to protect their secrets from side-channel leakage. Such solutions can be classified into two categories: (i)~side-channel free implementations (e.g., for the cryptographic algorithms AES and RSA~\cite{BrGrSe2006,Konighofer2008}) and (ii)~automated tools that can be applied to existing programs and do not require manual program modification~\cite{CoVeBoSu2009,Cleemput2012,CHBLF2015}. However, side-channel free implementations are application-specific and require significant manual effort and a thorough understanding of the subject matter, while application developers generally cannot be expected to be security experts. On the other hand, approaches that rely on automated processing, e.g., compiler transformations for limiting branching on sensitive data~\cite{CoVeBoSu2009} or reducing/masking timing variability~\cite{Cleemput2012,CHBLF2015}, typically cannot eliminate side channels entirely, since opportunities to do so automatically are limited.
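The first category, side-channel free implementations, typically replaces secret-indexed lookups with code whose memory access pattern is independent of the secret. A minimal sketch of this idea (written in Python only for readability; a real implementation must be branch-free native code, since a high-level runtime gives no constant-time guarantees):

```python
def ct_lookup(table, secret_index):
    # Read every entry and select the wanted one arithmetically, instead of
    # indexing on the secret, so the set of touched cache lines is the same
    # for every value of secret_index.
    result = 0
    for i, entry in enumerate(table):
        mask = -(i == secret_index)   # -1 (all ones) if selected, else 0
        result |= entry & mask        # every entry is read, one survives
    return result

sbox = [0x63, 0x7C, 0x77, 0x7B]       # first entries of the AES S-box
print(ct_lookup(sbox, 2))             # -> 119 (0x77), after touching all entries
```

The cost is proportional to the table size on every lookup, which is why such hardening is usually reserved for small, hot tables in cryptographic code.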
In the context of SGX, Shinde et al.~\cite{ShChNaSa2016} proposed application hardening as a mitigation technique against the page-fault based side channel. The solution relies on developer-assisted compiler optimizations, or, if applied generally, imposes a performance overhead of up to 4000x. While a similar approach could be used to defeat cache-based side channels, the associated drawbacks (either manual effort or impact on performance) limit its practicality. \paragraph{Randomization.} \label{sec:countermeasures:randomization} Address Space Layout Randomization (ASLR)~\cite{PaX-ASLR} is another alternative, which might provide a viable solution against cache-based side channels. Although it was designed as a defense mechanism against code-reuse attacks, similarly to ORAM it can hide access patterns to secret-dependent code and data if applied to randomize the enclave's memory layout. ASLR randomizes the \emph{base addresses} of loaded code and data in memory, making the memory layout of the vulnerable process different across different instances and even across different runs. In this form, ASLR is deployed on most mainstream computing platforms for PCs and mobile devices, including Windows, Linux, iOS and Android. However, in recent years many attacks have shown that randomization of base addresses provides insufficient entropy and can be brute-forced~\cite{ShGoMoPfBo2004,LiJiDeJiDa2011}, or that information about the randomized addresses can be obtained via information leakage attacks, e.g., by exploiting information used for linking dynamic libraries~\cite{acsac09:saratoga} or by exploiting information leakage bugs (e.g.,~\cite{teso2001}).
These attacks motivated the development of more fine-grained forms of memory randomization, which randomize application binaries at the granularity of functions~\cite{KiJuBoXuNi2006}, basic blocks\footnote{A basic block is a sequence of machine instructions with a single entry and exit instruction, where the latter can be any branch instruction the processor supports.}~\cite{WaMoHaLi2012,TUD-CS-2013-0042} and even single instructions~\cite{PaPoKe2012,Hi2012}. Fine-grained memory randomization techniques were undermined by Snow et al.~\cite{TUD-CS-2013-0026}, who demonstrated a dynamic code-reuse attack that discloses the memory layout of the victim application through repeated exploitation of a memory leakage vulnerability and constructs the attack payload at the time the attack is executed. Doing so requires a certain amount of time, which motivated new approaches that perform periodic re-randomization at runtime~\cite{LNBL2016}. Recently, Seo et al.~\cite{SGXShield2017} proposed the SGX Shield framework that enables code randomization for SGX enclaves. While the primary goal of SGX Shield is to protect enclaves from exploitable software bugs, the authors mention that randomization imposes an additional burden on side-channel attackers; in particular, it provides reasonable protection against page-fault side-channel attacks, as it forces an attacker to brute force $2^{7}$ times in order to identify a single address value. However, this argumentation does not directly apply to the case of cache-based side channels, because SGX Shield concentrates on the randomization of code, but does not randomize data. Hence, SGX Shield cannot hide data-dependent memory access patterns.
On the other hand, randomization of data segments is challenging due to dynamic data allocations, large data objects (e.g., tables) that need to be split up and randomized, and the pointer arithmetic that is typically used to access parts of large data objects (e.g., base-pointer offsets are often used to access table entries). \paragraph{Attack detection.} \label{sec:countermeasures:detection} Recently, two interesting works proposed detection methods for side-channel attacks that are based on frequent interruption of the victim enclave~\cite{t-sgx,incognito2017}. In particular, both solutions aim at mitigating side channels based on page faults~\cite{Xu2015}, where the OS induces page faults during enclave execution and learns the execution flow of the enclave from the requested pages. To this end, both works suggest using a hardware implementation of transactional memory in Intel processors, called Intel Transactional Synchronization Extensions (TSX), to notify an enclave about a (page fault) exception without interference by the system software. This generally enables enclaves to detect if their execution was preempted or interrupted. D{\'e}j{\'a} Vu~\cite{incognito2017} also aims at defeating cache-based side-channel attacks that preempt the victim enclave frequently to more accurately observe the victim's cache accesses. However, as we show in our work, cache-based side channels do not necessarily require preemption of the protected application to make side-channel observations. Hence, such countermeasures cannot defeat our attack. \paragraph{Summary.} We believe that system-level defense mechanisms like memory randomization are more plausible, as they provide protection to any program, independently of whether it was implemented by security experts, and are more effective in closing side channels entirely. They do not require changes to the underlying hardware and impose moderate performance overhead.
However, the only memory randomization solution for SGX enclaves, SGX Shield~\cite{SGXShield2017}, does not support randomization of data objects, which is challenging to achieve, as we elaborated above. We aim to explore possible designs and ways to overcome the associated challenges in our future work. \section{Introduction} \label{sec:intro} Intel Software Guard Extensions (SGX) \cite{Cos2016, ISCA2015} enables the execution of security-critical application code, called enclaves, in isolation from the untrusted system software. Protections in the processor ensure that a malicious OS cannot directly read or modify enclave memory at runtime. Through a mechanism called sealing, enclaves can encrypt and authenticate data for persistent storage. Processors are also equipped with certified keys that can issue remotely verifiable attestation statements on the enclave's software configuration. These SGX mechanisms (isolation, sealing, attestation) enable the development of applications and online services with improved security. The SGX architecture is especially useful in cloud computing applications. Data and computation can be outsourced to an external computing infrastructure without having to fully trust the cloud provider and the entire software stack.\footnote{Compared to other protection mechanisms SGX can provide significant advantages. Solutions based on special-purpose encryption offer limited functionality (e.g., searchable encryption \cite{bellare-crypto07}). Generic techniques (e.g., fully homomorphic encryption \cite{gentry-stoc09} and secure multi-party computation \cite{huang-usenix11}) are, for most applications, too slow.} \paragraph{SGX information leakage.} However, previous research has demonstrated that SGX isolation also has weaknesses. The limited protected memory is used by an unlimited number of enclaves, and therefore memory management, including paging, is left to the OS \cite{Cos2016}.
Consequently, the OS can force page faults at any point of enclave execution and, from the requested pages, learn the secret-dependent control flow of the enclave \cite{Xu2015}. Information leakage is a serious concern, as it can defeat one of the main benefits of SGX -- the ability to compute over private data on an untrusted platform. Recent research has attempted to find ways to prevent such leakage. Currently the most promising system-level approach is to detect when the OS is intervening in enclave execution. For example, T-SGX~\cite{t-sgx} and D{\'e}j{\'a} Vu~\cite{incognito2017} detect page faults and allow the enclave to defend itself from a possible attack (i.e., to stop its execution). Sanctum \cite{sanctum} is an alternative security architecture, where the protected application itself is responsible for memory management, and thus able to prevent similar attacks. Researchers \cite{Cos2016} and Intel \cite[p.~35]{intel-dev-guide-2016} have assumed that information may also leak through other side-channels such as caches that are shared between the enclave and the untrusted software. However, before our work, such leakage had not been demonstrated and evaluated. \paragraph{Our cache attack on SGX.} In this paper we demonstrate that SGX is indeed vulnerable to cache attacks. As a first use case we mount our attack on the canonical example of RSA decryption, targeting a standard sliding-window RSA implementation from the SGX SDK \cite{intel-exp-fast}. Using the Prime+Probe\xspace cache monitoring technique \cite{Osv2006, OsShTr2006} we can extract 70\% of the 2048-bit key with 300 repeated executions. From the extracted bits, the full RSA key can be effectively recovered \cite{boneh-98}. To the best of our knowledge, this work is the first to show that cache-based side-channel attacks are both practical and effective on SGX.
Although cache-based attacks and cache monitoring techniques, such as Prime+Probe\xspace, are well studied, executing them in our setting involved a set of significant technical challenges. In particular, because our primary design goal was to explore attack techniques that cannot be easily mitigated by the recently suggested defensive approaches~\cite{t-sgx,incognito2017}, we opted to run both the victim and the attacker uninterrupted in parallel, so that the victim enclave is unaware of the attack and cannot take measures to defend itself. Hence, the victim cache monitoring needs to be fast, although monitoring all relevant cache sets can be slow. Furthermore, benign interrupts due to OS timers cause periodic enclave exits that severely interfere with cache monitoring. Moreover, the execution of the victim itself can interfere with the monitored cache sets. To overcome these challenges, we developed novel attack techniques. For instance, we leverage the capabilities of the privileged adversary to assign the victim process to a dedicated core, reduce the number of benign interrupts, and perform fast cache monitoring using CPU performance counters. Note that the SGX adversary model includes the capabilities of the OS. \paragraph{Current defenses.} Recent system-level defenses such as T-SGX~\cite{t-sgx} and D{\'e}j{\'a} Vu~\cite{incognito2017} can prevent those side-channel attacks that rely on frequent enclave interruption. Our attack, however, does not require the interruption of the victim enclave and hence remains undetected by these defenses. Besides system-level defenses, cache attacks can be tackled on the application level. Many cryptographic libraries provide encryption algorithm variants that have been specifically hardened against cache attacks. For every secret-dependent memory access, the enclave can issue a set of memory accesses that manifest as changes in all the monitored cache sets.
The accessed memory location is effectively hidden from the adversary. For instance, the NaCl library \cite{nacl} provides such side-channel resilient crypto implementations. Also the SGX SDK includes cryptographic algorithm variants that have been hardened against cache attacks \cite{intel-exp-secure}. While such defenses can be effective, they require significant expertise and effort from the enclave developer. Assuming that every developer is aware of possible information leakage and able to harden their implementation against cache attacks is unrealistic. Automated tools that require no developer effort (e.g., oblivious execution \cite{maas-ccs13, liu-csf13, liu2015ghostrider} and ORAM \cite{stefanov-ccs13}) are difficult to deploy securely in the SGX context and cause very high runtime overhead. Disabling caching is not practical either. We argue that large classes of non-cryptographic SGX applications are vulnerable to cache attacks and illustrate this through our second use case, a genome indexing algorithm called PRIMEX~\cite{primex}, which uses hash tables to index a genome sequence. By monitoring the genome-dependent hash table accesses we can reliably identify whether the processed human genome (DNA) includes a particular repeating sequence called a microsatellite. Microsatellites are often used in applications such as forensics, genetic fingerprinting and kinship analysis \cite{Ballantyne-2010}. We review known countermeasures and conclude that all of them have serious limitations, and none of them prevents our attacks effectively in practice. \paragraph{Contributions.} To summarize, this paper makes the following contributions: \begin{compactitem} \item \textbf{Effective SGX cache attack.} We demonstrate that cache attacks are practical on SGX. Interestingly, our attack is more effective than previous comparable attacks. As part of our attack, we develop novel techniques to reduce side-channel noise.
\item \textbf{Leakage from non-cryptographic applications.} We show that non-cryptographic applications deployed within SGX are vulnerable to cache attacks. We demonstrate this through a case study on a genome analysis enclave. \item \textbf{Countermeasure analysis.} We show that none of the known defenses mitigates our attacks effectively in practice. \end{compactitem} The rest of this paper is organized as follows. In \cref{sec:background} we provide background information. \cref{sec:model} introduces the system and adversary model, and \cref{sec:design} explains our attack design. In \cref{sec:attack} we provide RSA decryption attack details and results. \cref{sec:genome} focuses on the genome enclave case study. We analyse countermeasures in \cref{sec:countermeasures}, discuss other algorithms and lessons learned in \cref{sec:discussion}, and review related work in \cref{sec:relwork}. \cref{sec:conclusion} concludes the paper. \titlespacing{\section}{0pt}{6pt plus 4pt minus 2pt}{6pt plus 2pt minus 2pt} \usepackage{algorithm} \usepackage{algpseudocode} \usepackage{authblk} \usepackage{xspace} \usepackage{hyperref} \def\UrlBreaks{\do\/\do-} \usepackage[capitalise,noabbrev]{cleveref} \usepackage{paralist} \usepackage{titlesec} \titlespacing{\section}{0pt}{0.6\baselineskip}{5px} \titlespacing{\subsection}{0pt}{0.6\baselineskip}{5px} \titlespacing{\subsubsection}{0pt}{0.6\baselineskip}{5px} \titlespacing{\paragraph}{0pt}{0.15\baselineskip}{8px} \newcommand{Prime+Probe\xspace}{Prime+Probe\xspace} \begin{document} \title{Software Grand Exposure: SGX Cache Attacks Are Practical} \author[1]{\rm Ferdinand Brasser} \author[2]{\rm Urs M\"{u}ller} \author[2]{\rm Alexandra Dmitrienko} \author[2]{\rm Kari Kostiainen} \author[2]{\rm Srdjan Capkun} \author[1]{\rm Ahmad-Reza Sadeghi} \affil[1]{System Security Lab, Technische Universit\"{a}t Darmstadt, Germany} \affil[ ]{\{ferdinand.brasser,ahmad.sadeghi\}@trust.tu-darmstadt.de} \affil[ ]{\vspace*{-1.5ex}} \affil[2]{Institute of Information
Security, ETH Zurich, Switzerland} \affil[ ]{[email protected], \{alexandra.dmitrienko,kari.kostiainen,srdjan.capkun\}@inf.ethz.ch} \date{} \maketitle \newcommand{\todo}[1]{\textcolor{red}{TODO: #1}} \newcommand{\red}[1]{\textcolor{red}{#1}} \input{abstract.tex} \input{introduction.tex} \input{background.tex} \input{advmodel.tex} \input{design.tex} \input{attack.tex} \input{usecase.tex} \input{discussion.tex} \input{discussion2.tex} \input{relatedwork.tex} \input{conclusion.tex} \section{Related Work} \label{sec:relwork} In this section we review work related to Intel SGX applications, to side-channel attacks mounted against SGX enclaves, and to cache-based side-channel attacks on non-SGX platforms. \paragraph{SGX applications.} The first applications leveraging SGX support have already been developed; they consider cloud scenarios~\cite{baumann_shielding_2014,ScCoGkPeMaRu2015,DiSaChOoZh2015,DeaGhe2008,HuZhXuPeWi2016,YSDCMOONF2009} and beyond~\cite{KiShHaKiHa2015,Shih2016}. All these applications are potential targets of cache-based side-channel attacks, and if not designed to be side-channel resistant, they may leak application secrets in a similar way to the genome processing application that we investigated in this paper (cf.~Section~\ref{sec:genome}). \paragraph{Side-channel attacks on SGX.} The SGX architecture was analyzed by Costan and Devadas~\cite{Cos2016}, who mentioned that SGX is likely to be vulnerable to side-channel attacks that could potentially be used to leak protected secrets from within SGX enclaves. Xu et al.~\cite{Xu2015} demonstrated page-fault based side-channel attacks on SGX, where an untrusted operating system exfiltrates secrets from protected applications by tracking memory accesses at the granularity of memory pages.
While cache-based side-channel attacks, which we study in this paper, generally achieve more precise tracking of memory accesses at the granularity of cache lines, they have not been investigated in the context of SGX in previous works. \paragraph{Cache attacks.} The first cache-based side-channel attack~\cite{Per2005} demonstrated information leakage via the L1 cache and was successfully applied to reveal RSA keys of the OpenSSL implementation by monitoring accesses to the table of precomputed multipliers, which the algorithm uses throughout the exponentiation. A detailed performance comparison to this attack is not possible, as the paper does not report details such as how many repetitions are needed to extract the key, and the attack was performed on a platform that is more than 10 years old. A side-channel-free implementation of RSA was proposed by Brickell et al.~\cite{BrGrSe2006}. It relies on a technique called \emph{scatter-gather} to interleave the multipliers in memory, which ensures that the same cache lines are accessed irrespective of the multiplier. However, memory accesses within the same cache line at different offsets may still exhibit timing variations~\cite{OptManual2012}. This was exploited by the CacheBleed attack~\cite{CacheBleed2016}, which successfully recovered 60\% of the exponent bits of the RSA key after observing 16,000 decryptions. We hypothesize that a side-channel attack based on cache-bank conflicts may also be applicable to SGX enclaves, although we have not investigated this aspect in our work. Osvik et al.~\cite{Osv2006} formalized two cache-based side-channel attack techniques, \emph{Evict+Time} and \emph{Prime+Probe}, which have since been used to attack various cryptographic implementations~\cite{NeSeWa2006,OsShTr2006}, were applied to the last-level cache, and were used to build cross-core side channels~\cite{IrEiSu2015,Liu2015}.
Furthermore, they were also shown to be applicable to mobile and embedded platforms~\cite{BoEiPaWi2010,WeHeSt2012,SpPl2013,SpGe2014}. In the context of cross-core attacks, new and more complex attack techniques were developed, such as Flush+Reload~\cite{YarFal2014}, Evict+Reload~\cite{GrSpMa2015}, and Flush+Flush~\cite{MarWag2015}. Similarly to us, some of the cross-core attacks~\cite{Liu2015} target RSA decryption. These attacks require tens of thousands of repetitions, while our attack requires only about 300 executions. Uhsadel et al.~\cite{UGV08} study the use of hardware performance counters (HPCs) for side-channel attacks. They use HPCs to observe the behavior of their victim directly, e.g., to record cache hit/miss events of the victim. This approach is not suitable for SGX enclaves because enclaves do not update HPCs. In contrast, we use HPCs to record cache events of the attacker's Prime+Probe\xspace code. \section{Genomic Data Processing Attack} \label{sec:genome} In this section we describe our second side-channel attack, on a genome data processing enclave. Genome data processing is an emerging field that highly benefits from cloud computing due to the large amounts of data being processed. At the same time, genome data is highly sensitive, as it may allow the identification of persons and carry information about whether a person is predisposed to a specific disease. Thus, maintaining the confidentiality of genomic data is paramount, in particular when it is processed in untrusted cloud environments. In the remainder of the section we first introduce the general concept of the genome processing algorithm we used. Then, we describe the implementation of the algorithm on SGX, followed by attack details and our results. Genome processing algorithms are just one representative of a large class of algorithms that produce memory accesses based on sensitive data, as we discuss in more detail in \cref{sec:discussion}.
\subsection{Victim Enclave} \label{sec:genome:highlevel} Genome sequence analysis is an important technique for identifying individuals, whether persons or animals. By locating particular sequences in different locations of a genome, individuals can be distinguished. Genome sequences are represented by the order of the four nucleotides adenine, cytosine, guanine and thymine, usually abbreviated by their first letters (A, C, G, T). Microsatellites, i.e., repetitive nucleotide base sequences, are commonly used for identifying individuals. They usually range from two to five base pairs, occurring five to 50 times in a row in the genome. Efficient search of large genome sequences is vital for these analysis methods. Therefore, the data are usually preprocessed before the actual analysis is performed. One common way of preprocessing is to divide the genome sequence into substrings of a fixed length $k$, called \emph{$k\textrm{-mers}$\xspace}. The $k\textrm{-mers}$\xspace represent a sliding window over the input string of genome bases. In \cref{fig:hashtable} the input \texttt{AGCGC$\dots$} is split into $2\textrm{-mers}$\xspace. Starting from the left, the first is \texttt{AG}; next, the sliding window is moved by one character, resulting in the second $2\textrm{-mer}$\xspace \texttt{GC}, and so on. \begin{figure}[tb] \centering \includegraphics[width=0.9\linewidth]{Hashtable.pdf} \caption{Genome sequence analysis based on hash tables; subsequences of the genome (called $k\textrm{-mers}$\xspace) are inserted into a hash table for statistical analysis and fast search for $k\textrm{-mers}$\xspace.} \label{fig:hashtable} \vspace*{-.4cm} \end{figure} The $k\textrm{-mers}$\xspace are inserted into a hash table; usually, for each $k\textrm{-mer}$\xspace its position in the genome sequence is stored in the hash table. Thus, given a $k\textrm{-mer}$\xspace that is part of a microsatellite, one can quickly look up at which position it appears in the input genome sequence.
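The sliding-window splitting and hash-table construction described above can be sketched in a few lines of Python (an illustration of the preprocessing, not PRIMEX's actual code; the base-4 encoding of nucleotides follows Algorithm~1):

```python
# Illustrative sketch of k-mer hash-index preprocessing (not PRIMEX code).
BASE = {"A": 0, "C": 1, "G": 2, "T": 3}  # nucleotide encoding from Algorithm 1

def kmer_index(kmer):
    # Interpret the k-mer as a base-4 number to get its table index.
    idx = 0
    for n in kmer:
        idx = 4 * idx + BASE[n]
    return idx

def build_hash_index(genome, k):
    # Slide a window of length k over the genome and record, for each
    # k-mer, the positions at which it occurs.
    table = {}  # table index -> list of positions in the genome
    for pos in range(len(genome) - k + 1):
        idx = kmer_index(genome[pos:pos + k])
        table.setdefault(idx, []).append(pos)
    return table

# The example from the figure: "AGCGC" split into 2-mers AG, GC, CG, GC.
table = build_hash_index("AGCGC", 2)
assert table[kmer_index("GC")] == [1, 3]  # "GC" occurs at positions 1 and 3
```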
Another use case is statistics of the input genome sequence; for instance, the distribution of $k\textrm{-mers}$\xspace in the sequence can easily be extracted from the hash table. \paragraph{Primex.} Our victim enclave implements the preprocessing step for a genome sequence analysis algorithm, as described above. We used an open-source implementation of a $k\textrm{-mer}$\xspace analysis tool called PRIMEX~\cite{primex}.\footnote{\url{https://www.researchgate.net/publication/233734306_mex-099tar}} The tool inserts each $k\textrm{-mer}$\xspace into the hash table. Each hash table entry holds a pointer to an array, which is used to store the positions of each $k\textrm{-mer}$\xspace. \subsection{Attack Details} \algnewcommand\algorithmicswitch{\textbf{switch}} \algnewcommand\algorithmiccase{\textbf{case}} \algnewcommand\algorithmicassert{\texttt{assert}} \algnewcommand\Assert[1]{\State \algorithmicassert(#1)}% \algdef{SE}[SWITCH]{Switch}{EndSwitch}[1]{\algorithmicswitch\ #1\ \algorithmicdo}{\algorithmicend\ \algorithmicswitch}% \algdef{SE}[CASE]{Case}{EndCase}[1]{\algorithmiccase\ #1}{\algorithmicend\ \algorithmiccase}% \algtext*{EndSwitch}% \algtext*{EndCase}% \begin{algorithm}[tb] \caption{Hash-Index Generation} \label{alg:idxgen} \algorithmicrequire{Genome G with $\text{G}_i \in \{A, C, G, T\}$, $k \in \mathbb{N}_{> 0}$} \newline \algorithmicensure{Hash-Index H} \begin{algorithmic}[1] \State Let H $\gets$ HashTable with $4^k$ entries \For {\textbf{each } $k$-mer M $\in \text{G}$} \State Let $pos$ be the offset of M in G \State Let $idx \gets 0$ \For {\textbf{each} nucleotide $n$ $\in$ M} \Switch{$n$} \Case{A: } $\overset{\sim}{n} \gets 0$ \EndCase \Case{C: } $\overset{\sim}{n} \gets 1$ \EndCase \Case{G: } $\overset{\sim}{n} \gets 2$ \EndCase \Case{T: } $\overset{\sim}{n} \gets 3$ \EndCase \EndSwitch \State $idx \gets 4 \cdot idx + \overset{\sim}{n}$ \EndFor \State H[$idx$].append($pos$) \EndFor \end{algorithmic} \end{algorithm} Our attack aims at detecting whether
a specific subsequence, or microsatellite, is contained in the input genome sequence processed by the victim enclave. The microsatellite's position in the genome is revealed by the point in time when it is observed. Due to the controlled environment of our attack, the execution time of the victim is very deterministic, allowing precise positioning of the observation within the input sequence. Additionally, the attack can be repeated for different microsatellites, which allows the identification of individuals. Through our cache side channel we can observe cache activities that can be linked to the victim's insertion operation into the hash table (\cref{alg:idxgen}). \cref{fig:hashtable} shows that insertions into the hash table affect different cache lines. For each $k\textrm{-mer}$\xspace the victim looks up a pointer to the associated array from the hash table. From the source code we learn the hash function used to determine the table index for each $k\textrm{-mer}$\xspace; by reversing this mapping we can infer the input based on the accessed table index. Unfortunately, individual table entries do not map to unique cache lines. Multiple table entries fit within one cache line, so from observing the cache line accesses we cannot directly conclude which index was accessed. This problem is illustrated in \cref{fig:hashtable}. Here four table indexes map to a single cache line. When the attacker observes the eviction of cache line $0$ (meaning it was accessed by the victim), she does not learn the exact table index of the inserted $k\textrm{-mer}$\xspace, but a set of candidate $k\textrm{-mers}$\xspace that could have been inserted (\texttt{\{AA,AC,AG,AT\}}). However, the attacker can split up the microsatellite she is interested in into $k\textrm{-mers}$\xspace and determine which cache lines will be used when it appears in the input sequence.
In \cref{fig:hashtable} the microsatellite is split into four $2\textrm{-mers}$\xspace, where the first $2\textrm{-mer}$\xspace (\texttt{AT}) will be inserted in the first quarter of the table, hence, cache line 0 (L0) will be used by the victim enclave. The second $2\textrm{-mer}$\xspace (\texttt{TC}) will be inserted into the last quarter of the hash table, thus activating cache line 3 (L3). Following this scheme the attacker determines a sequence of cache lines which will reveal to her that the microsatellite sequence was processed by the enclave. \subsection{Attack Results} We provided a real genome sequence string to the victim enclave and ran it in parallel to our Prime+Probe\xspace attack code. We chose $k = 4$ for the $k\textrm{-mers}$\xspace, leading to $4^4 = 256$ $4\textrm{-mers}$\xspace (four nucleotides possible for each of the four positions). Each $4\textrm{-mer}$\xspace is represented by a unique table entry, each table entry is a pointer ($8\,byte$), and thus each cache line contains $64\,byte / 8\,byte = 8$ table entries. In our attack we were searching for a tetra-nucleotide microsatellite of length ten ($(\texttt{ATCG})_{10}$). First, the four $4\textrm{-mers}$\xspace occurring repeatedly in the microsatellite are determined, and for each $4\textrm{-mer}$\xspace the corresponding cache line: $\texttt{ATCG} \Rightarrow \mathrm{cache\ line\ }62$; $\texttt{TCGA} \Rightarrow \mathrm{cache\ line\ }63$; $\texttt{CGAT} \Rightarrow \mathrm{cache\ line\ }22$; $\texttt{GATC} \Rightarrow \mathrm{cache\ line\ }39$. We monitor these four cache lines individually and align them, as shown in \cref{fig:satellite}. When the microsatellite appears in the input string, the cache lines $62$, $63$, $22$ and $39$ will all be used repeatedly by the victim enclave. This increase in utilization of these cache sets can be observed in the measurements. In \cref{fig:satellite} at $x \approx 25,000$ the increased density of observed cache events is visible.
Since all four cache lines are active at the same time, one can conclude that the microsatellite did occur in the input sequence. \begin{figure*}[th] \centering \includegraphics[trim={6.6cm 2.4cm 5.8cm 1.15cm}, clip, width=\textwidth]{Microsatellite_with_noise.png} \caption{Pattern of hash table accesses by PRIMEX processing a genome sequence~\cite{primex}. Four cache sets are shown in different colors, with 20 repeated measurements for each cache set. The cache sets correspond to the $4\textrm{-mers}$\xspace of the microsatellite \texttt{ATCG}. At $x \approx 25,000$ increased activity in all four cache sets indicates the occurrence of the microsatellite in the processed genome sequence.} \label{fig:satellite} \vspace*{-.2cm} \end{figure*} \paragraph{False positive analysis.} False positives can occur for two reasons: (1) sequences that map to the same cache lines as the microsatellite we are searching for, and (2) noise in the cache. We calculated the set of accessed cache lines for all possible tetra-nucleotide microsatellites and found no collisions. The only exceptions are $4\textrm{-mers}$\xspace that are one of the three possible rotations of the microsatellite sequence we are searching for. This means that no other sequence of $4\textrm{-mers}$\xspace would produce activity in the same set of cache lines and cause a false positive. False positives due to noise are very unlikely, because we observe four cache lines. \cref{fig:satellite} shows extensive activation in the top cache line (pink) in the interval $x \approx 80,000$ to $x \approx 95,000$. However, in the three other cache lines there is low activity, making this event clearly distinguishable from a true positive event.
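The index-to-cache-line reasoning above can be sketched as follows. This is illustrative Python, not the attack code: the absolute cache-set numbers observed in practice additionally depend on where the table lands in memory, so the sketch works with line numbers relative to the table base rather than the concrete numbers 62, 63, 22, 39.

```python
# Sketch of the table-index -> cache-line mapping used in the attack
# (line numbers relative to the hash-table base; illustrative only).
from itertools import product

ENTRY_SIZE, LINE_SIZE = 8, 64              # 8-byte pointers, 64-byte lines
ENTRIES_PER_LINE = LINE_SIZE // ENTRY_SIZE # = 8 table entries per line

BASE = {"A": 0, "C": 1, "G": 2, "T": 3}

def kmer_index(kmer):
    # Base-4 table index of a k-mer, as in Algorithm 1.
    idx = 0
    for n in kmer:
        idx = 4 * idx + BASE[n]
    return idx

def cache_line(kmer):
    return kmer_index(kmer) // ENTRIES_PER_LINE

def candidate_kmers(line, k):
    # All k-mers whose table entry falls into one observed cache line:
    # observing a single line only narrows the k-mer down to this set.
    return {"".join(p) for p in product("ACGT", repeat=k)
            if cache_line("".join(p)) == line}

def satellite_lines(satellite, k):
    # The distinct lines activated by the k-mers of a repeating satellite.
    doubled = satellite * 2  # wrap around to cover all rotations
    return {cache_line(doubled[i:i + k]) for i in range(len(satellite))}

# One observed line yields 8 candidate 4-mers, but the repeating
# satellite ATCG activates four distinct lines simultaneously, which
# is what makes its occurrence unambiguous.
assert len(candidate_kmers(cache_line("ATCG"), 4)) == ENTRIES_PER_LINE
assert len(satellite_lines("ATCG", 4)) == 4
```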
\section{Conclusion}\label{sec:Conclusion} In this paper, we have investigated the problem of improving the timeliness of collective road awareness, concentrating on the VRU use case and focusing on a freeway segment under cellular network coverage. With the aim of minimizing \ac{E2E} signaling latency, we have proposed a MEC-assisted network architecture, according to which MEC hosts are collocated with \ac{eNB}s and can thus receive and process VRU messages at the edge of the access network. Towards quantifying the benefits of the new approach, we have defined the latencies related to radio transmission and message processing, driven by realistic assumptions. By means of numerical evaluation, it has been observed that, for some of the investigated system parameterizations, the proposed overlaid deployment of MEC hosts offers up to 80\% average gains in latency reduction, as compared to the conventional network architecture. Interestingly, it is shown that the performance benefits remain significant for different vehicle/VRU deployment densities, as well as for different vehicle cluster sizes when VRU-to-vehicle distance-dependent multicast signaling is performed. \section*{Acknowledgment} The research leading to these results has been performed under the framework of the Horizon 2020 project ONE5G (ICT-760809) receiving funds from the European Union. \section{Introduction}\label{section:introduction} \ac{V2X} communication paves the way for drastically improved road safety and driving experience via reliable and low-latency wireless services \cite{5GAA} \cite{2016}. Efficient \ac{V2X} system development is based on a plethora of reliably functioning sensors, which provide an enhanced environmental perception by means of exchanging critical messages among vehicles, pedestrians and road infrastructure \cite{Gunther2015}.
Such a system, as depicted in Fig.~\ref{fig:future_systems}, incorporates different information exchange paths, namely, \ac{V2I}, \ac{V2N}, \ac{V2P} and \ac{V2V} communication. These signaling paths can be established either via \ac{DSRC}, or assisted by the cellular \ac{LTE} network providing coverage (\ac{C-V2X}), or through an interworking of the two technologies \cite{Abboud2016}. Focusing on the \ac{C-V2X} technology, the architecture of the cellular network is expected to have a vital impact on the support of delay-intolerant \ac{V2X} services. This occurs because the \ac{E2E} latency of \ac{C-V2X} signaling is limited by the quality and dimensioning of the cellular infrastructure, i.e., the capacity of backhaul connections, as well as the delays introduced by both the \ac{CN} and the \ac{TN}. As one would expect, these latency bottlenecks will be more prominent for high loads corresponding to coverage areas of high vehicular/pedestrian densities. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{introduction/figures/future_systems/future_systems.eps} \caption{Envisioned \ac{5G} \ac{V2X} system.} \label{fig:future_systems} \end{center} \end{figure} To cope with such requirements, extensive research has recently taken place to enhance the experience of \ac{V2X} communication, with emphasis on latency shortening. For instance, in \cite{Safiulin2016}, the packet delivery latency and network utilization, focusing on an \ac{LTE} system, were investigated for \ac{MBSFN}. Furthermore, in \cite{Cattoni2015}, considering an \ac{LTE} network architecture, \ac{CN} gateway relocation is proposed for \ac{V2X} latency improvement.
Finally, with reference to implementation aspects, the authors in \cite{Lee2017} investigated latency-reduction techniques, such as \ac{TTI} shortening and self-contained sub-frames, in \ac{C-V2X} systems, whereas, in \cite{Cao2017}, a \ac{5G} implementation testbed for autonomous vehicles based on \ac{SDR}, incorporating different solutions, was presented. Nevertheless, in contrast to the above-mentioned works, we argue that the stringent latency requirements posed by the \ac{V2X} system motivate the utilization of \emph{\ac{MEC} technology}. Leveraging its ability to provide processing capabilities at the edge of the network, an overlaid \ac{MEC} deployment is expected to assist in obtaining low packet delays, due to its close proximity to end users \cite{Emara2017}. As a consequence, in this paper, concentrating on the \ac{VRU} use case, which studies the safe interaction between vehicles and non-vehicle road users (pedestrians, motorbikes, etc.) \cite{Sabella2017} via the exchange of \emph{periodic \ac{CAM}}, we aim to highlight the latency-related benefits of introducing a \ac{MEC} system deployment over a state-of-the-art cellular network. Our study assumes Uu-based \ac{V2X} communication, which is one of the \ac{LTE} solutions exploiting the existing cellular infrastructure \cite{September2015}. The remainder of this paper is organized as follows: in Section \ref{sec:system_model}, we present an overview of the studied system model; Section \ref{sec:latency_model} provides a detailed description of the \ac{E2E} latency components and Section \ref{sec:simulation_results} presents the relevant numerical results. Finally, Section \ref{sec:Conclusion} concludes the paper. \section{Latency Modeling}\label{sec:latency_model} As mentioned earlier, the objective of this work is to investigate the \ac{E2E} latency performance achieved through the collocated deployment of \ac{MEC} hosts and cellular network \ac{eNB}s.
Towards accomplishing this aim, in this section, we model the various latency components related to \ac{CAM} transmission, routing and processing for both system approaches. Regarding the conventional cellular network architecture approach (Fig.~\ref{fig:latency_model}), the one-way \ac{CAM} messaging latency is modeled as $T_\text{one-way} = T_{\text{UL}} + T_{\text{BH}} + T_{\text{TN}} + T_{\text{CN}} + T_{\text{Exc}}$, where $T_{\text{UL}}$ is the radio \ac{UL} transmission latency, $T_{\text{BH}}$ is the \ac{BH} network latency, $T_{\text{TN}}$ is the \ac{TN} latency, $T_{\text{CN}}$ is the \ac{CN} latency and $T_{\text{Exc}}$ is the \ac{CAM} processing latency. Consequently, the \ac{E2E} latency is expressed as: \begin{equation}\label{eq:total_latency} T_\text{E2E} = T_{\text{UL}} + \underbrace{2(T_{\text{BH}} + T_{\text{TN}} + T_{\text{CN}})}_\textrm{Network latency} + T_{\text{Exc}} + T_{\text{DL}}, \end{equation} where $T_{\text{DL}}$ represents the \ac{DL} transmission latency. For the proposed \ac{MEC}-enabled network approach, the network latency can be avoided by processing the \ac{CAM} packets at the \ac{MEC} host collocated with the connected \ac{eNB}. In what follows, we provide further explanations regarding the aforementioned latency components. \subsection{Radio Latency} As described in Section \ref{sec:system_model}, each \ac{VRU} generates a packet for transmission within a random offset time index. The time required for the $k$-th \ac{VRU} to transmit a packet of $l_k$ bits to its serving \ac{eNB} is calculated as follows \begin{align} T_{\text{UL},k} &= \frac{l_k}{r^{UL}_k}, \\ r^{UL}_k &= \eta_k \text{log}_2(1+\text{SNR}_k), \end{align} where $r^{UL}_k$ is the achievable \ac{UL} rate, $\eta_k$ is the number of allocated \ac{PRBs} and $\text{SNR}_k$ represents the received \ac{SNR} at the \ac{eNB}.
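The \ac{UL} latency model above can be sketched numerically as follows. All values are illustrative, and the per-\ac{PRB} bandwidth factor (180~kHz, the \ac{LTE} \ac{PRB} width) is an assumption we add to convert the spectral-efficiency expression into bits per second; it is not stated explicitly in the text.

```python
import math

# Illustrative sketch of the UL latency model: T_UL = l_k / r_UL with
# r_UL = eta_k * log2(1 + SNR), PRBs shared equally among the VRUs
# that transmit in the same time index. The 180 kHz per-PRB bandwidth
# is an LTE-motivated assumption added here to obtain bits/s.

def ul_latency(l_k, prbs_total, n_sharing, snr_linear, prb_bw_hz=180e3):
    eta_k = prbs_total / n_sharing               # equal PRB allocation
    r_ul = eta_k * prb_bw_hz * math.log2(1 + snr_linear)  # bits/s
    return l_k / r_ul                            # seconds

# 10 kbit CAM packet, 50 PRBs shared by 5 concurrent VRUs, SNR = 10 dB:
t_ul = ul_latency(l_k=10e3, prbs_total=50, n_sharing=5, snr_linear=10.0)
assert 0 < t_ul < 0.01  # on the order of a few milliseconds
```

Doubling the number of concurrently transmitting VRUs halves the allocated PRBs and hence doubles the per-VRU transmission time, which is the scaling behavior discussed in the simulation results.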
Throughout this work, we assume fair resource allocation, where the total number of available \ac{PRBs} is shared equally among the \ac{VRUs} transmitting at the same time index. As a result, the number of these \ac{VRUs}, denoted by $\hat{N}_k$, sharing the resources with the $k$-th \ac{VRU} is computed as follows \begin{align} \hat{N}_k &= \sum_{i=1}^{N} \mathbbm{1} (\tau_i = \tau_k), \; \forall k=\{1,2,\cdots,N \}, \end{align} where $\mathbbm{1}(\cdot)$ is the indicator function. Due to the periodic nature of message generation, the computation of shared resources is carried out for each time window (i.e., $[T_j, T_{j+1}],\; \forall j=\{1,2,\cdots\}$). As mentioned in Section \ref{sec:system_model}, for \ac{DL} transmissions, after successful packet processing at the server, we resort to the concept of cluster-based multicast transmission \cite{Luoto2017}. The main idea is to select a set of existing vehicles in the system for transmission, in order to avoid large latencies caused by cell-edge vehicles, which would not be of high criticality for the \ac{VRU}, as the set of \ac{VRUs} is assumed to be located close to the cell center. Consequently, the vehicle cluster for the $k$-th \ac{VRU}, denoted as $\mathcal{S}_k$, will consist of the $M$ closest vehicles to that \ac{VRU}. Thus, the \ac{DL} latency can be expressed as follows \begin{equation}\label{eq:furthest} T_{\text{DL},k} = \text{max}_{(\forall i \in \mathcal{S}_k)}\{\frac{l_k}{r^{DL}_i} \}, \end{equation} where the maximum operator is used to measure the farthest vehicle's packet reception delay in cluster $\mathcal{S}_k$. Regardless of the \ac{eNB} location, having the $k$-th \ac{VRU} position as a reference, the maximum radio \ac{DL} latency serves as a cluster-wide metric, which we aim to minimize. As will be shown later, the effect of the cluster size is significant, since the available radio resources in the \ac{DL} have to be shared among all vehicles within cluster $\mathcal{S}_k$.
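The cluster-based multicast \ac{DL} latency of eq.~(\ref{eq:furthest}) reduces to a maximum over the cluster members: the packet is considered delivered only when the slowest (typically farthest) vehicle has received it. A minimal sketch, with illustrative per-vehicle rates:

```python
# Sketch of eq. (4): the DL multicast latency is set by the slowest
# vehicle in the cluster S_k (rates in bits/s are illustrative).

def dl_latency(l_k, cluster_rates):
    # cluster_rates: achievable DL rate of each vehicle in the cluster
    return max(l_k / r for r in cluster_rates)

# M = 5 vehicles; the nearest vehicle is fastest, the farthest slowest.
rates = [8e6, 6e6, 5e6, 3e6, 2e6]
t_dl = dl_latency(l_k=10e3, cluster_rates=rates)
assert t_dl == 10e3 / 2e6  # the slowest vehicle dictates the latency
```

This also illustrates why the cluster size $M$ matters: enlarging the cluster both adds slower (more distant) members and further splits the shared \ac{DL} resources.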
\begin{figure} \begin{center} \input{latency_calculation/figures/latency_model/latency_model.tex} \caption{One-way signaling latency for two \ac{VRUs} - conventional approach.} \label{fig:latency_model} \end{center} \end{figure} \subsection{Network Latency} As mentioned earlier, the following latency components are non-existent for the \ac{MEC}-assisted \ac{CAM} signaling case, since there is no involvement of the \ac{TN} and the \ac{CN} in \ac{CAM} packet routing. \subsubsection{Backhaul Latency} The \ac{BH} latency $T_{\text{BH}}$ represents the time required for packets to be routed through the \ac{BH} network, which has a finite capacity, denoted by $C_{\text{BH}}$. It is assumed that the \ac{BH} capacity is equally shared among the $\hat{N}_k$ \ac{VRUs} concurrently uploading their messages at time instant $\tau_k$. As a result, the \ac{BH} latency for the $k$-th \ac{VRU} is \begin{equation} T_{\text{BH},k} = \frac{l_k\hat{N}_k}{C_{BH}}. \end{equation} \subsubsection{Transport and Core Latency} In order to provide realistic modeling of the \ac{TN} and \ac{CN} latencies, we resorted to the recent results reported in \cite{Sabella}, where a proof-of-concept was implemented for an \ac{LTE} environment with commercial terminals, running a real-time adaptive video streaming service routed through a \ac{MEC} host and several \ac{eNB} agents placed at different locations relative to the \ac{MEC} host. Inspired by the results presented in this work, the two latency components are assumed to be uniformly distributed over a range of realistic values, as will be shown in the numerical evaluation section. \subsection{Execution Latency} Finally, we model the time required for processing a packet of $l_k$ bits at a server, either collocated with the \ac{eNB} or at the distant cloud.
Assuming that the input packet requires $\beta_k$ cycles/bit for processing and the server has a processing capacity denoted by $F$, the execution latency for the $k$-th \ac{VRU} is expressed as \begin{equation} T_{\text{Exc},k} = \frac{\hat{N}_k l_k\beta_k}{F}. \end{equation} \section{Simulation Results}\label{sec:simulation_results} In order to illustrate the latency improvements via \ac{MEC} deployment within cellular systems for \ac{V2X} communications, we provide different simulation scenarios by varying two main system parameters, namely the spatial densities of the vehicles and of the \ac{VRU}s. Moreover, we also aim at observing the impact of the vehicles' cluster size on the experienced latency. For both the proposed and conventional cellular network architectures, the metric of interest is the \ac{E2E} latency, as well as its individual components, as explained in eq.~(\ref{eq:total_latency}). The values of all involved parameters are presented in Table \ref{Table:simulation_parameters}, unless otherwise stated. \begin{table} \centering \caption{Simulation parameters} {\def\arraystretch{1}\tabcolsep=8pt \begin{tabular}{|c|c|c|} \hline \textit{Entity} & \textit{Parameter} & \textit{Value} \\ \hline \multirow{4}*{Vehicles} & Speed (km/h) & $\sim \mathcal{U}(70,140)$ \\ & Inter-veh. distance (m) & 10 \\ & $\lambda$ (vehicle/m) & 0.01 \\ & Cluster size & 5 \\ \hline \multirow{4}*{VRU} & Number of VRUs ($N$) & 100 \\ & x-coordinates (m) & $\sim \mathcal{U}(1200,1800)$ \\ & Tx power (dBm) & 23 \\ & $l_k$ (kbits) & $\sim \mathcal{U}(8,12)$ \\ & $\beta_k$ (cycles/bit) & $\sim \mathcal{U}(100,300)$ \\ \hline \multirow{4}*{\ac{eNB} / \ac{MEC} host} & Tx power (dBm) & 46 \\ & Bandwidth (MHz) & 9 \\ & $C_{\text{BH}}$ (Mbps) & 10 \\ & $F$ (Gcycles/sec) & 9 \\ \hline \multirow{10}*{General} & Frequency (GHz) & 5.9 \\ & Number of lanes & 2 \\ & Lane length (km) & 3 \\ & Lane width (m) & 4 \\ & Pathloss exponent & 3 \\ & Shadowing std. dev. (dB) & 3 \\ & Fast fading std. dev.
(dB) & 4 \\ & Thermal noise (dBm) & -110 \\ & Additional losses (dB) & 15 \\ & \ac{TN}+\ac{CN} latency (ms) & $\sim \mathcal{U}(15,35)$\\ \hline \end{tabular} } \label{Table:simulation_parameters} \end{table} \subsection{Effect of VRU Density} First, we look into the case of an increasing number of \ac{VRU}s. As explained in the previous sections, each \ac{VRU} is assigned a random timing offset for transmission. Thus, the generated periodic message traffic grows with the number of \ac{VRU}s. In Figure~\ref{fig:inc_VRUs}, the average \ac{E2E} signaling latency with and without \ac{MEC} host deployment is shown, both as a whole and component-wise. Clearly, \ac{MEC} utilization provides a lower \ac{E2E} latency (the observed gains are in the range of 66\%-80\%), due to the exploitation of the processing resource proximity offered by the \ac{MEC} host. Additionally, we observe that the latency increases with the \ac{VRU} density, due to the increasing demand for the available resources. Regarding the radio transmission latency components, as the number of \ac{VRU}s increases, the available resources per \ac{VRU} decrease, due to the equal allocation assumption. Similar explanations hold for the \ac{BH} and the execution latencies. It should be noted that the \ac{TN} and \ac{CN} latencies were modeled as random variables, independent of the system parameters, which can be further refined in future work. \begin{figure} \centering \input{simulation_results/figures/inc_vehicles/Inc_VRU.tex} \caption{(a) Average \ac{E2E} latency for increasing number of VRUs. (b) Component-wise latency breakdown.} \label{fig:inc_VRUs} \end{figure} \subsection{Effect of Vehicles Density} In this part, an alternative scenario of fixing the number of \ac{VRU}s and increasing the spatial density of the vehicles is studied, as per Fig.~\ref{fig:inc_vehicles}.
Since the \ac{VRU}s in the investigated use case are the active agents and the vehicles are the passive ones, i.e., transmission is always initiated by the \ac{VRU}s, the \ac{E2E} latency depends on the vehicles' spatial density only through the \ac{DL}. As discussed in Section \ref{sec:system_model}, the vehicles' density (i.e., $\lambda$) only plays a role in the radio \ac{DL} latency. Since a location-based multicast transmission is employed, where the cluster size (i.e., $|\mathcal{S}_k|$) is fixed, as the number of vehicles increases, the probability of having the cluster closer to the \ac{VRU} of interest increases as well. Hence, as expected, the \ac{DL} latency decreases with increasing $\lambda$. \begin{figure} \centering \input{simulation_results/figures/inc_vehicles/Inc_vehicles.tex} \caption{(a) Average \ac{E2E} latency for increasing vehicles' deployment densities. (b) Component-wise latency breakdown.} \label{fig:inc_vehicles} \end{figure} Since the cluster size highly affects the \ac{E2E} latency through its contribution to the \ac{DL} radio latency, the experienced \ac{DL} latency for increasing vehicle cluster sizes is simulated and presented in Fig.~\ref{fig:clustersize}. The \ac{DL} latency (eq.~(\ref{eq:furthest})) is determined by the cluster's farthest vehicle to successfully receive the packet; hence, as the cluster size increases, the probability of vehicles being far from the focused \ac{VRU} increases as well. This explains the increasing trend of the radio \ac{DL} latency depicted in Fig.~\ref{fig:clustersize}. \begin{figure} \centering \input{simulation_results/figures/inc_vehicles/k_neighbour.tex} \caption{Average cluster-related radio \ac{DL} latency as a function of the vehicle cluster size.} \label{fig:clustersize} \end{figure} \section{System Model}\label{sec:system_model} In this section, different aspects of the evaluation platform will be presented.
First, we identify the multiple entities constituting the system setup and then we describe the \ac{VRU} use case. Finally, we review the link model and its accompanying assumptions. \subsection{System Setup} Throughout this work, a freeway road environment is assumed, consisting of one lane per direction, as shown in Fig. \ref{fig:full_scenario}. To provide a basis for possible future analytical work, which is, however, outside the scope of this paper, the vehicles are placed at the start of each system realization following a Mat\'ern hard-core point process over one dimension \cite{Haenggi2012}, with speeds drawn from a uniformly distributed random variable (i.e., $\sim \mathcal{U}(\text{v}_{min}, \text{v}_{max})$). To model the inter-vehicle distance, we resort to the hard-core parameter of the mentioned point process, which represents the repulsion between any two generated points. Moreover, a cluster of $N$ \ac{VRUs} is on a pedestrian area between the two lanes; such a populated area can be mapped to real-world scenarios like gas stations or other service points along a freeway. At the network side, it is assumed that the focused freeway segment is under \ac{LTE} coverage; given that, for brevity, we consider a single-cell setup, the occurrence of any handover events is not taken into account by the evaluation platform. The serving \ac{eNB} is assumed to be collocated with a \ac{MEC} host of given processing capabilities, as will be explained later on. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{system_model/figures/system_deployment/system_deployment.eps} \caption{The investigated two-lane freeway scenario.} \label{fig:full_scenario} \end{center} \end{figure} \subsection{Vulnerable Road User - signaling model} As highlighted in Section \ref{section:introduction}, a \ac{VRU} is assumed to interact with vehicles and, possibly, other users on the road.
A straightforward example is the one of safety-related applications \cite{Kawasaki2017}, in which periodically generated \ac{VRU} messages (e.g., \ac{CAM}) can be exploited for crash prevention purposes. In order to model the generation of those periodic messages, we assume that the $k$-th \ac{VRU} generates data packets of size $l_k \sim \mathcal{U}(l_\text{min},l_\text{max})$ bits at random starting time offsets, denoted as $\tau_k$. Such \ac{CAM} transmission randomness is used to model the nature of road-safety applications. Due to the \ac{CAM} signaling periodicity, this cycle is repeated every $T$ seconds with newly generated transmission offsets. A visualization of the messaging scheme for two \ac{VRUs} is shown in Fig. \ref{fig:packet_generation}. It should be mentioned that, depending on the periodicity of packet generation and the number of \ac{VRUs} existent at the focused service point, the available \ac{UL} radio resources will need to be shared among the \ac{VRUs}. Once a given \ac{VRU} transmits its \ac{CAM} in the \ac{UL} exploiting the Uu interface, the corresponding input packet will be processed by the \ac{MEC} host collocated with the serving \ac{eNB} and, then, the processed information (output packet) will be forwarded to vehicles in the vicinity of the \ac{VRU} by means of \ac{DL} Uu-based transmission. According to the key results in \cite{Luoto2017}, the main challenge in designing efficient \ac{C-V2X} \ac{CAM} signaling is to serve the cell-edge vehicles. Due to their low-quality channels, these vehicles require a large number of \ac{PRBs}, as compared to their cell-center counterparts. Therefore, accounting for the nature of \ac{CAM} messages, where the \ac{E2E} latency depends on the successful reception of the packets by the destination vehicles, we resort to the concept of \emph{location-based vehicle clustering}.
According to this approach and, based on location availability, each \ac{VRU} defines a cluster of close-by vehicles and a cluster-based multicast transmission takes place in the \ac{DL}. \begin{figure} \begin{center} \input{system_model/figures/packets_generation/packets_generation.tex} \caption{Packet generation procedure for two \ac{VRUs} (black square and red cross, respectively) with random transmission timing offsets.} \label{fig:packet_generation} \end{center} \end{figure} \subsection{Link Model} All considered vehicles and \ac{VRUs} are assumed to be served by an \ac{eNB}, based on the pathloss model adopted from the \textit{WINNER+} project \cite{September2007}, as follows \begin{align} \text{PL (dB)} &= 22.7\log_{10}(d) - 17.3\log_{10}(\tilde{h}_{\text{eNB}}) -17.3\log_{10}(\tilde{h}_{\text{VRU}}) \nonumber\\ &+ 2.7\log_{10}(f_\text{c}) - 7.56, \end{align} where $d$ is the distance between the transmitter and receiver, $f_\text{c}$ is the center carrier frequency and $\tilde{h}_{\text{eNB}}$ and $\tilde{h}_{\text{VRU}}$ represent the effective antenna heights at the \ac{eNB} and \ac{VRU}, respectively. The latter quantities are computed as follows: $\tilde{h}_{\text{eNB}} = h_{\text{eNB}} - 1.0$ and $\tilde{h}_{\text{VRU}} = h_{\text{VRU}} - 1.0$, with $h_{\text{eNB}}$ and $h_{\text{VRU}}$ being the actual antenna heights (in meters). Additionally, independent and identically distributed (i.i.d.) random variables are used to model the fast fading and shadowing-based attenuation phenomena. Also, it should be noted that the scheduler employed in our work equally distributes the available \ac{PRBs} over all scheduled VRUs and vehicles. In the following section, a thorough \ac{E2E} latency analysis is presented, focusing on both the proposed, \ac{MEC}-assisted network architecture, as well as the conventional, ``distant-cloud''-based cellular architecture, which will serve as a comparison benchmark for the numerical evaluations.
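For concreteness, the pathloss expression above can be sketched as a small helper function (a hypothetical implementation for illustration; the argument units follow the conventions assumed by the WINNER+ formula in the text):

```python
import math

def winner_pathloss_db(d, h_enb, h_vru, fc):
    """WINNER+ pathloss of the link model, in dB; the effective antenna
    heights are the physical heights reduced by 1.0 m, as in the text."""
    h_e = h_enb - 1.0   # effective eNB antenna height
    h_v = h_vru - 1.0   # effective VRU antenna height
    return (22.7 * math.log10(d)
            - 17.3 * math.log10(h_e)
            - 17.3 * math.log10(h_v)
            + 2.7 * math.log10(fc)
            - 7.56)
```

As a sanity check, doubling the distance increases the pathloss by $22.7\log_{10}2 \approx 6.8$ dB, as expected from the distance term.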
\section*{Acknowledgment} \end{document}
\section{Introduction} \label{intro} Magnetic fields hosted by nearby spiral galaxies are observed to have a coherent large-scale component spanning kilo-parsec length scales, with strengths of several micro-Gauss \citep{fletcher_nearby_2010,Beck2012,beck_wielebinski,krause2018chang}. Such large-scale magnetic fields are thought to be maintained by a mean-field or large-scale dynamo through the combined action of helical interstellar turbulence and galactic differential rotation \citep[see eg.][and references therein]{beck_1996,shukurov_2005,anvar_2004}. The mathematical modelling of the large-scale dynamo relies upon mean-field electrodynamics, where the magnetic field $\mathbf{B}$ is split into a mean field $\mean{\mathbf{B}}$ and a fluctuation $\mathbf{b}$, and similarly for the velocity field $\mathbf{U}=\mean{ \mathbf{U}} +\mathbf{u}$, with the mean defined by some suitable averaging \citep{Mof78,BS05}. The averaged induction equation then picks up a new contribution, the mean turbulent EMF $\mean{\mathbf{ \mathcal{E}}}= \mean{ \mathbf{ u } \times \mathbf{b}} $, which is the cross correlation between the fluctuating velocity and magnetic fields and is crucial for driving the large-scale dynamo. In order to get a closed equation for the mean magnetic field, using a two-scale approach, $\mean{\mathbf{\mathcal{E}}} $ is expressed as a linear expansion in the mean magnetic field and its derivatives. The resulting expansion coefficients encapsulate the various properties of the underlying turbulence, such as the $ \alpha$-effect (which depends on the turbulent helicity) and the turbulent diffusivity, which then determine the mean-field evolution \citep[see][for the details of formulation]{radler2014}. It is important to determine these turbulent transport coefficients both to compare with theoretical expectations and to understand the working of the dynamo. This will be our aim here.
A number of different methods have been formulated and implemented so far to extract the dynamo coefficients. \citet{CH96} calculated the random magnetic field generated when a uniform field $\mean{ \mathbf{B}}$ is imposed in a helical turbulent flow, used it to find $\mean{\mathbf{\mathcal{E}}}=\mean{{\mathbf{u}\times \mathbf{b}}}$, and inverted the relation $\mean{\cal E}_i=\alpha_{i\!j} \mean{ B }_j$ to estimate the turbulent coefficients $\alpha_{i\!j}$. \citet{angstrom} adapted an experimental method developed by {\AA}ngstrom for measuring the conductivity of solids to determine the large-scale diffusivity of magnetic fields in two-dimensional systems. In another approach, which can also handle additive noise, \citet{BranSok02} (hereafter BS02) and \citet{Kowal06} computed different moments of the mean fields with themselves and with the EMF, and fitted the corresponding linear relations to the data to extract the dynamo coefficients. More sophisticated methods like the test-field method ({TF}) have also been used previously to estimate dynamo coefficients in direct numerical simulations of forced helical turbulence, ISM turbulence driven by supernovae, accretion disk turbulence, and convective turbulence in the context of solar and geo-dynamos \citep{schriner_test,schriner_test1,Bran05,sur2007kinetic,gressel_2008,kapala_test,bendre2015dynamo,GP15,War17}. This method relies on the idea that the fluctuating velocity $\mathbf{u}$, determined by solving the magnetohydrodynamic equations in any turbulence simulation, contains all the information about the turbulent transport coefficients, in both the kinematic and dynamic phases. One then solves for the small-scale magnetic field $\mathbf{b}_T$ induced by $\mathbf{u}$ acting on additional passive large-scale test fields $\mean{\mathbf{B}}_T$ with well-defined functional forms, along with the direct simulations.
The relation of the associated additional turbulent EMF $\mean{ \mathbf{\mathcal{E}}}_T=\mean{ \mathbf{u}\times \mathbf{b }_T}$ to $\mean{\mathbf{B}}_T$ is then used to determine the underlying dynamo coefficients \cite[See][for an overview and more remarks on the TF method]{Brandenburg2009,brandenburg_2018}. An alternative direct approach has been implemented by \citet{racine2011mode} and \citet{simard2016characterisation}, wherein the computation of dynamo coefficients is handled as a problem of least-squares minimisation. Specifically, the time series of the EMF is fitted as a linear function of the mean-field and mean-current time series using the singular value decomposition (SVD) method. One convenience of this method over {TF} is that it can be used as a post-processing tool on the simulation data, thereby making it computationally less expensive. In contrast to the {TF} method, where one solely uses $\mathbf{u}$ from the DNS, here one additionally uses the actual $\mathbf{b}$ obtained directly from the simulation to calculate $\mean{\mathbf{\mathcal{E}}}$ and fits its relation to $\mean{\mathbf{B}}$, also obtained from the simulation, to estimate the dynamo coefficients. Thus there is no ambiguity in the applicability of the SVD method, at least in regimes where the transport coefficients can be assumed to be constant -- that is, both in the kinematic and fully-quenched regimes of the dynamo. In view of these possible advantages, we explore here the SVD method as a tool to recover the turbulent transport coefficients in the previously published galactic dynamo simulation of \citet*{bendre2015dynamo}. These were magnetohydrodynamic simulations of a local box of stratified ISM, with turbulence driven via SN explosions. Specific parameters in the simulation domain were set to partially mimic the conditions in a galaxy like the Milky Way. In these simulations, it was found that large-scale magnetic fields emerge with an e-folding time of about 200\Myr.
We analyze specifically the time series data from one of these runs to estimate the values of the turbulent transport coefficients using the SVD method. An added advantage is that we can also compare the results obtained here using the SVD method with the results obtained earlier for the same run from the {TF} method. The paper is structured as follows. In \sref{sec:nirvana_setup} we describe the numerical setup for the direct numerical simulations (DNS) of the turbulent interstellar medium (ISM), followed by a brief discussion of its results in \sref{subsec:dns_results}. In \sref{sec:dynamo} we summarize the mean-field formulation and the algorithm we adopt for the extraction of dynamo coefficients. {\aref{app_mockdata} tests the SVD algorithm on mock data.} Results of our SVD analysis are discussed in \sref{sec:dns_dynamo_coefficients} and \sref{sec:1-d_dynamo}. {A comparison of these results with those obtained previously with the {TF} method in the kinematic phase is given in \aref{alphas_tf}. \aref{sec:regr} compares the SVD results with those from a simple regression analysis using the method of { BS02}.} The final section presents a discussion of our results and our conclusions. \section{Direct Numerical Simulations} \label{sec:nirvana_setup} We briefly recall the setup and results of the DNS of the galactic dynamo that is analyzed here. A detailed description of the numerical setup is also presented in \citet{bendre2015dynamo,Bendre2016}. The NIRVANA code \citep{nirvana} was used to simulate the multi-phase ISM in a local Cartesian box ($ L_x = L_y = 0.8 $\kpc) of the Galaxy. To study the vertical distribution of the turbulent properties, we use a disc of thickness $\sim 4 $ \kpc ($-2.12$\kpc $< z < 2.12$ \kpc). The simulations use $96\times96\times512$ grid cells, which gives a numerical resolution of $\sim8.3$ \pc.
We impose shearing periodic boundary conditions in the radial, $x$ direction to match the differential rotation, and periodic boundary conditions in the azimuthal, $y$ direction to account for the approximately axisymmetric azimuthal flows observed in disc galaxies. The galactic rotation curve is taken to be flat, with angular velocity decreasing with radius $R$ as $\Omega \propto 1/R$, and having a value $\Omega_0=100$ \kms\kpc$^{-1}$ at the centre of the simulation box. Furthermore, outflow conditions are used at the vertical boundaries to allow the outflow of gas from the boundaries while preventing inflow. Turbulence is driven via SN explosions, the locations of which are chosen randomly with a prescribed rate of $\sim 7.5$ \kpc$^{-2}$\Myr$^{-1}$, almost a quarter of the average SN rate of the Milky Way. The SN explosions are simulated as localized Gaussian expulsions of thermal energy. A stratified vertical profile of ISM mass density with a scale-height of $\sim300$\pc (and midplane value of $10^{-24}$\g\pcc) is set up as the initial condition and is initially in hydrostatic equilibrium under gravity. The vertical profile of gravity is adapted from \citet{gravity_1,gravity_2,gravity_3}. To further capture the multi-phase morphology of the ISM, we adopt an optically thin radiative cooling function as a piecewise power law, $\Lambda \left( T\right)=\Lambda_iT^{\beta_i}$. The cooling coefficients $\Lambda_i$ of the different ISM thermal phases are chosen similar to \citet{radiative_cooling}. This prescription does not quite capture the detailed cooling processes in highly dense cold environments, although the primary goal here is to simulate the dynamical aspects of the ISM at moderate densities and large length scales. Our initial magnetic field profile was chosen to have a net vertical flux, and a strength of $\sim$\nG, which is 3-4 orders of magnitude smaller than the equipartition strength.
This numerical set-up corresponds to model `Q' from our previous analysis \citep{gressel2012magnetic,bendre2015dynamo,Bendre2016} and details of the various source and sink terms are included therein. \subsection{Evolution of Mean Fields in the DNS} \label{subsec:dns_results} The magnetic field, $\mathbf{B}$, in this setup amplifies exponentially during the initial kinematic phase with an e-folding time of $\sim200 $\Myr, until it grows to approximately equipartition strength within $ \sim1$\Gyr. After reaching near-equipartition values, the magnetic field continues to grow exponentially, albeit at a slower rate. We refer to the initial amplification phase of $\sim1$\Gyr as the kinematic phase and to the later one as the dynamical phase. To explore the behaviour of large and small-scale fields separately, we define the mean components of $\mathbf{B}$ and $\mathbf{U }$ by averaging them over the $x$-$y$ (i.e., radial-azimuthal) plane, so as to have only $ z$ as an independent variable. Thus we define \begin{align} \overline{\mathbf{B}}\left(z,t\right)=\frac{1}{L_x L_y}\,\iint \mathbf{B}\left(x,y,z,t\right) \,dx\,dy ,\nonumber \\ \overline{\mathbf{U}}\left(z,t\right)=\frac{1}{L_x L_y}\,\iint \mathbf{U}\left(x,y,z,t\right) \,dx\,dy. \label{avg} \end{align} This definition of averaging in the current setup satisfies the Reynolds averaging rules. Moreover, the $z$ component of $\overline{\mathbf{B}}$ stays unchanged throughout the evolution, as required by the solenoidality constraint. Also, the $x$ component of the mean velocity stays negligibly small compared to the $z$ component -- the outward wind, which has a linear profile in the $z$ direction. Both $\overline{\mathbf{B}}$ and the turbulent field, ${\mathbf{b}}$, in the DNS have the same e-folding time of $\sim200$\Myr during the kinematic phase, and this growth later slows down in the dynamical phase, identical to the behaviour of the total magnetic field \citep[see Figure 2. from][]{gressel2012magnetic}.
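The planar averaging of \eref{avg} can be sketched numerically as follows (a schematic numpy fragment, not the NIRVANA diagnostics; the array shapes are our assumption):

```python
import numpy as np

def planar_average(field):
    """Mean field per eq. (avg): average a snapshot over the radial-azimuthal
    (x-y) plane, leaving a profile in z alone.
    `field` has shape (3, Nx, Ny, Nz); the result has shape (3, Nz)."""
    return field.mean(axis=(1, 2))

def fluctuation(field):
    """Fluctuating component b = B - B_bar, with B_bar broadcast back
    over the x-y plane."""
    mean = planar_average(field)
    return field - mean[:, None, None, :]
```

By construction the fluctuation has zero planar average, so this split satisfies the Reynolds averaging rules mentioned above.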
Further, both $\overline{\mathbf{B}}$ and ${\mathbf{b}}$ have approximately bell-shaped vertical profiles that peak at the midplane \citep[see Fig. 3.3 and 3.4 from][]{Bendre2016}. The scale height and peak strength of the mean field are approximately $0.6$\kpc and $3$\muG, respectively, at $t=2.5$ Gyr, i.e., the end of the simulation. The growth of the mean magnetic field energy density is shown below in the right panel of Fig.~\ref{comparison_1}. Further, the space-time diagrams of $\mean{B}_x$ and $\mean{B}_y$ are shown in the bottom panels of \fref{comparison_2}. These will be discussed further below while comparing with the SVD predictions. \section{The Mean field dynamo} \label{sec:dynamo} The amplification of large-scale magnetic fields is generally understood using mean-field dynamo theory \citep{Mof78}. In mean-field theory, the magnetic $\mathbf{B}$ and velocity $\mathbf{U}$ fields are separated into their corresponding large and small-scale components. In particular, as described above, we write $\mathbf{ B}= \overline{\mathbf{B }}+ \mathbf{ b } $ and $\mathbf{U} = \overline{\mathbf{U}} + \mathbf{ u } $, where the average is calculated over a suitable domain (in our case, over the $x$-$y$ plane as defined in \eref{avg}). The evolution of the mean magnetic field is then governed by the averaged induction equation, \begin{align} \frac{\partial \overline{\mathbf{B}}}{\partial t} = \nabla \times \left( \overline{\mathbf{U}} \times \overline{\mathbf{B}} + \mean{\mathbf{\mathcal{E}}} - \eta_m \nabla \times \overline{\mathbf{B}} \right) \label{mfe} \end{align} where $\mean{\mathbf{\mathcal{E}}}=\mean{\mathbf{u}\times \mathbf{b}}$ is the turbulent EMF and $\eta_m$ the microscopic diffusivity.
Using the well-established Second-Order Correlation Approximation \citep[SOCA, ][]{Mof78,radler2014}, the turbulent EMF can be expanded in terms of the mean field and its gradient as \begin{equation} \mean{\mathcal{ E}}_i =\alpha_{i\!j} \mean{B}_j- \eta_{i\!j}(\nabla\times\mean{\mathbf{B}})_j\,. \end{equation} For brevity of notation, in what follows, we set $\nabla \times\mean{\mathbf{B}}=\overline{\mathbf{J}}$, the mean current density, adopting $\mu_0 =1$. The dynamo coefficients $\alpha_{i\!j}\left(z,t\right)$ and $ \eta_{i\!j}\left(z , t\right) $ are tensorial quantities that depend on the properties of the background turbulence. More explicitly, the turbulent EMF for this numerical setup is written as, \begin{align} \begin{pmatrix} \,\, \overline{\mathcal{E}}_{x} \,\,\\ \overline{\mathcal{E}}_{y} \end{pmatrix} = \begin{pmatrix} \alpha_{xx} & \alpha_{xy}\\ \alpha_{yx} & \alpha_{yy} \end{pmatrix}\, \begin{pmatrix} \,\,\, \overline{{B}}_x \,\,\\ \,\, \overline{{B}}_y \end{pmatrix} - \begin{pmatrix} \eta_{xx} & \eta_{xy} \\ \eta_{yx} & \eta_{yy} \end{pmatrix}\, \begin{pmatrix} \,\,\, \overline{{J}}_x \,\,\\ \overline{{J}}_y \end{pmatrix} \label{e1} \end{align} The diagonal elements of the $\alpha$ tensor represent the alpha-effect, proportional to the kinetic helicity of the turbulence when the magnetic field is not dynamically important and the turbulence is isotropic. The anti-symmetric part of the off-diagonal components, $\alpha_{xy}$ and $\alpha_{yx}$, can be combined to produce the so-called gamma-effect, $\gamma= 0.5\, \left(\alpha_{yx} - \alpha_{xy}\right)$, sometimes called ``turbulent pumping''. It leads to a component of the EMF, $\mean{\mathbf{\mathcal{E}}}_{\rm pump} = \gamma\, \hat{\mathbf{z}} \times \mean{\mathbf{B}}$, and so advects the mean magnetic field in the same way as the mean velocity $\mean{\mathbf{U}}$.
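A schematic numerical evaluation of \eref{e1} and of the pumping term reads as follows (our own illustrative code, not from the analysis pipeline; the $2\times2$ tensors act on the $(x,y)$ components at a single height and time):

```python
import numpy as np

def emf_model(alpha, eta, B, J):
    """Turbulent EMF of eq. (e1): E_i = alpha_ij B_j - eta_ij J_j."""
    return alpha @ B - eta @ J

def gamma_effect(alpha):
    """Turbulent pumping speed gamma = 0.5 * (alpha_yx - alpha_xy),
    from the antisymmetric part of the off-diagonal alpha components."""
    return 0.5 * (alpha[1, 0] - alpha[0, 1])
```

For a purely diagonal $\alpha$ tensor the pumping speed vanishes, as the definition makes explicit.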
The diagonal components of the $\eta$ tensor represent the turbulent diffusivity of the mean magnetic field by small-scale motions, and the off-diagonal terms can lead to, for example, the \citet{Radler69} effect. \subsection{Determination of Dynamo Coefficients} \label{sec:coefficients} In order to invert \eref{e1} and compute all eight dynamo coefficients, one needs a sufficient number of independent data points. In our previous work \citep*{bendre2015dynamo}, we used the {TF} method to measure these coefficients. The current analysis, in contrast, relies only upon the simulation data, and uses the SVD method to perform a least-squares fit of the EMF data to the mean-field and mean-current data to extract the dynamo coefficients. This is similar to the method used by \citet{simard2016characterisation,racine2011mode} for the analysis of thermally driven convective turbulence in solar MHD simulations. We now turn to the detailed implementation of SVD in the present setting. \subsection{The Singular Value Decomposition Method} \label{sec:svd_method} The SVD method relies only upon the information about the turbulent EMF and mean fields generated from the DNS. Here we compute the vertical profiles of $\mean{\mathcal{E}}\left(z,t\right)$, $\mean{\mathbf{B }}\left(z,t \right)$ and $ \mean{\mathbf{ J}}\left(z,t \right ) $ at various times, by averaging the DNS data over the $x$-$y$ plane, and treat the time series as the data. The extraction of the $\alpha_{i\!j}$ and $\eta_{i\!j}$ tensors is then achieved by fitting this data to the model described by \eref{e1}, minimising the square of the residual vector components, \begin{align} R_{i} =\mean{\mathcal{E}}_{i} - \alpha_{i\!j} \, \mean{B}_{j} + \eta_{i\!j} \, \mean{J}_{j} \label{residual_vector} \end{align} using the following algorithm.
Specifically, we extract the time series of the different components of the turbulent EMF ($ \mean{ \mathcal{E}}_x$ and $ \mean{\mathcal{E}}_y $), of the mean field ($\mean{B}_x$ and $\mean{B}_y$) and of the mean current ($\mean{J}_x$ and $ \mean{J}_y $) at a given $z=z'$ at independent times. If $ N$ is the length of these extracted time series $ \left( t_1,\,t_2 , ... ,\,t_N\right) $, a design matrix $\mathcal{A}$ is defined as follows, \begin{align} \mathcal{A} = \begin{pmatrix} \overline{{B}}_x\left(t_1,z'\right) &\overline{{B}}_y\left(t_1,z'\right) &-\overline{{J}}_x\left(t_1,z'\right) &-\overline{{J}}_y\left(t_1,z'\right) \\ \overline{{B}}_x\left(t_2,z'\right) & \overline{{B}}_y\left(t_2,z'\right) &-\overline{{J}}_x\left(t_2,z'\right) &-\overline{{J}}_y\left(t_2,z'\right) \\ \vdots & \vdots &\vdots &\vdots \\ \overline{{B}}_x\left(t_N,z'\right) & \overline{{B}}_y\left(t_N,z'\right) &-\overline{{J}}_x\left(t_N,z'\right) &-\overline{{J}}_y\left(t_N,z'\right) \\ \end{pmatrix}\, \label{design_matrix} \end{align} We note that the time series of the mean-field components and the mean-current components form the different columns of $\mathcal{A} $, and each row corresponds to the values at a particular time, which makes $\mathcal{A}$ a matrix of dimensions $N\times4$. Since each of these columns is a function of $ z $, the matrix $\mathcal{A}$ also has a $ z $ dependence.
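The construction of the design matrix $\mathcal{A}$ can be sketched as follows (illustrative only; the column ordering follows \eref{design_matrix}, with the mean-current columns entering with a minus sign):

```python
import numpy as np

def design_matrix(Bx, By, Jx, Jy):
    """Design matrix A of eq. (design_matrix) at one height z':
    columns are the time series of Bx, By, -Jx, -Jy; shape (N, 4)."""
    return np.column_stack([Bx, By, -np.asarray(Jx), -np.asarray(Jy)])
```

Stacking the four time series column-wise reproduces the $N\times4$ structure described in the text.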
This definition is used to write the following set of equations at $z=z'$, motivated by the model for the EMF given in \eref{e1}, \begin{align} \mathcal{Y}\left(z'\right) = \mathcal{A}\left(z'\right)\, \mathcal{X}\left(z'\right) + \mathcal{N}\left(z'\right) \label{svd_equation} \end{align} where, \begin{align} \mathcal{Y}\left(z'\right) = \begin{pmatrix} \overline{\mathcal{E}}_x\left(t_1,z'\right) &\overline{\mathcal{E}}_y\left(t_1,z'\right)\\ \overline{\mathcal{E}}_x\left(t_2,z'\right) &\overline{\mathcal{E}}_y\left(t_2,z'\right)\\ \vdots & \vdots \\ \overline{\mathcal{E}}_x\left(t_N,z'\right) &\overline{\mathcal{E}}_y\left(t_N,z'\right)\\ \end{pmatrix}\, \label{data_matrix} \end{align} \begin{align} \mathcal{X}\left(z'\right) = \begin{pmatrix} \alpha_{xx}\left(z'\right) & \alpha_{yx}\left(z'\right)\\ \alpha_{xy}\left(z'\right) & \alpha_{yy}\left(z'\right)\\ \eta_{xx}\left(z'\right) & \eta_{yx}\left(z'\right)\\ \eta_{xy}\left(z'\right) & \eta_{yy}\left(z'\right)\\ \end{pmatrix}\, \label{coefficients_matrix} \end{align} and the matrix $\mathcal{N}$ is the noise matrix, the rows of which represent the level of noise in the data at different times; it is also a function of $z$. Note that \eref{svd_equation} is completely consistent with the SOCA model for the EMF, \eref{e1}, at each time. It generalizes \eref{e1} to include a ``noise'' component of the EMF independent of the mean field, which we can infer from the SVD algorithm. We also assume that the matrix $\mathcal{X}$, comprising the dynamo coefficients, is time independent. This is expected to hold in the kinematic regime (or during any period when the mean field grows exponentially) and in the steady-state saturation, but not during the transition between growth and saturation. Matrices $\mathcal{Y}$ and $\mathcal{X}$ are of dimensions $N\times2$ and $4\times2$ respectively, while $\mathcal{N}$ has the same dimensions as $\mathcal{ Y}$.
For the $i^{\rm th}$ component ($i\in \{x,y\}$), $\overline{\mathcal{E}}_i$, of the turbulent EMF, at a given height $z =z'$, the data vector $\mathbf{y}_i \left(z',t\right)$ is defined simply as the $i^{\rm th}$ column (i.e., the $1^{\rm st}$ and $2^{\rm nd}$ column for the $x$ and $y$ component, respectively) of the data matrix $\mathcal{Y}\left(z'\right)$, that is, \begin{align} \mathbf{y}_i\left(z'\right)& = \begin{pmatrix} \overline{\mathcal{E}}_i \left(z',t_1\right) \\ \overline{\mathcal{E}}_i \left(z',t_2\right) \\ \vdots \\ \overline{\mathcal{E}}_i \left(z',t_N\right) \\ \end{pmatrix}\,. \label{data_vector} \end{align} This data vector, $\mathbf{y}_i$, is also related to the coefficient vector $\mathbf{ x}_i$ (the $i^{\rm th}$ column of matrix $\mathcal{X}$ in \eref{coefficients_matrix}). With these definitions \eref{svd_equation} can be rewritten separately for each column of $\mathcal{Y}$ as, \begin{align} {\mathbf{y}_i}\left(z'\right)& = \mathcal{A}\left(z'\right) \, \mathbf{x}_i\left(z'\right) + \Hat{\mathbf{n}}_i\left(z'\right)\,, \label{e3} \end{align} where the vector $\Hat{ \mathbf{n}}_i $ represents the $ i^{\rm th}$ column vector of the matrix $\mathcal{N}$. With this representation of the EMF, the problem of estimating the dynamo coefficients is one of determining the vector $\mathbf{x}_i$ that satisfies \eref{e3} at each $ z'$ separately. We note that \eref{e3} comprises $N$ simultaneous equations in four unknowns. The least-squares solution $\Hat{\mathbf{ x }}_i\left( z' \right)$ is the one that minimizes \begin{align} \chi_i^2 \left(z'\right)&= \frac{1}{N}\,\displaystyle\sum_{n=1}^{N}\, \left[ \frac{\left[\mathbf{y}_i\left(z'\right)\right]_n - \left[\mathcal{A}\left(z'\right) \, \mathbf{x}_i\left(z'\right)\right]_n}{\sigma_i} \right]^2 \label{e5} \end{align} at each height $z'$, and for each component $i$ (which can either be $x$ or $y$ in our case).
Moreover, $\sigma_i^2$ is the variance associated with the noise vector $\Hat{\mathbf{n}}_i$, which we assume to be independent of $n$ and will estimate post facto from the fit itself (see below). The least-squares solution is obtained by employing SVD, which relies upon the unique decomposition of the matrix $\mathcal{A}$ in the form, \begin{align} \mathcal{A}& = \mathbf{U}\, \mathbf{w}\, \mathbf{V}^\top\,, \label{e6} \end{align} where $\mathbf{U}$ and $\mathbf{V}$ are orthonormal matrices and the matrix $\mathbf{w}$ is diagonal. One advantage of representing $\mathcal{A}$ as in \eref{e6} is that the least-squares solution vector $\Hat{\mathbf{x}}_i$ (the components of the dynamo coefficient tensors) is given simply by ``pseudo-inverting'' $\mathcal{A}$ \citep{mendel_svd,recepies} to yield \begin{align} \Hat{\mathbf{x}}_i = \mathbf{V}\, \mathbf{w}^{-1}\, \mathbf{U}^\top \mathbf{y}_i\,. \label{e7} \end{align} The hat notation in the above equation denotes the least-squares solution of \eref{e3} (which is different from $\mathbf{x}_{i}$ in \eref{e3}). The SVD method also gives an estimate for the covariance between the components of $\Hat{\mathbf{x}}_i$, in terms of the matrices $\mathbf{V}$ and $\mathbf{w}$. The covariance between the $l^{\rm th}$ and $m^{\rm th}$ elements of the vector $\Hat{\mathbf{x}}_j$ is given by \begin{align} \mathrm{Cov}\Big([\Hat{\mathbf{x}}_j]_l, [\Hat{\mathbf{x}}_j]_m\Big) = \sum_{i} \frac{\mathbf{V}_{li} \mathbf{V}_{mi}}{ \mathbf{w}_{ii}^{2}}\,. \label{cov} \end{align} Note that the elements of the covariance matrix are the same for both $\Hat{\mathbf{x}}_i$ ($i=1$ or $i=2$), since the associated design matrix, $\mathcal{A}$, is the same for both of them.
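Equations \eref{e7} and \eref{cov} translate almost verbatim into numpy. The sketch below (our own naming, not the paper's code) returns the pseudo-inverse solution and the unscaled covariance matrix, which is later multiplied by $\sigma_i^2$ to obtain parameter variances:

```python
import numpy as np

def svd_solve(A, y):
    """Least-squares solution x_hat = V w^{-1} U^T y of A x = y, and the
    unscaled covariance matrix Cov_lm = sum_k V_lk V_mk / w_k^2."""
    U, w, Vt = np.linalg.svd(A, full_matrices=False)
    x_hat = Vt.T @ ((U.T @ y) / w)   # V w^{-1} U^T y
    cov = (Vt.T / w**2) @ Vt         # equals inv(A^T A) for full-rank A
    return x_hat, cov
```

In practice one would also zero the reciprocal of any singular value below a tolerance, the standard guard against a nearly rank-deficient design matrix.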
Therefore the covariance between identically indexed pairs of components of $\Hat{\mathbf{x}}_1$ and $\Hat{\mathbf{x}}_2$ is the same, that is, \begin{eqnarray} \mathrm{Cov}\left(\alpha_{xx}, \alpha_{xy} \right) & = & \mathrm{Cov}\left(\alpha_{yx}, \alpha_{yy} \right)\,,\nonumber\\ \mathrm{Cov}\left(\alpha_{xx}, \eta_{xx} \right) & = & \mathrm{Cov}\left(\alpha_{yx}, \eta_{yx} \right)\,, \end{eqnarray} and so on. The diagonal elements of \eref{cov} further provide a measure of the variance in the determination of the individual fitting parameters. For instance, the error in the estimation of the $l^{\rm th}$ component of each $\Hat{\mathbf{x}}_i$ vector is given by, \begin{align} \mathrm{Var}\Big([{\Hat{\mathbf{x}}_i}]_l\Big) &= \sum_{k} \left[\frac{\mathbf{V}_{lk}}{\mathbf{w}_{kk}}\right]^2\sigma_i^2\,. \label{var} \end{align} The sum over the squared terms in square brackets is the $l^{\rm th}$ diagonal element of the covariance matrix defined in \eref{cov}. Here $\sigma_i^2$ is determined from the data and the fitted parameter vector $\Hat{\mathbf{x}}_i$, that is, \begin{align} \sigma_i^2 = \frac{1}{N}\left(\mathbf{y}_i - \mathcal{A}\Hat{\mathbf{x}}_i\right)^\top\left(\mathbf{y}_i - \mathcal{A}\Hat{\mathbf{x}}_i\right). \end{align} We note that the term in square brackets is the same for both columns $\Hat{\mathbf{x}}_i$ ($i=1,2$), but $\sigma_i$ is different for the two. In \aref{app_mockdata}, we have tested the SVD algorithm on noisy mock data, generated by assuming dynamo coefficient profiles very similar to those we recover from the real data, running a 1-D mean-field dynamo model, and adding noise. We find the SVD method to be quite robust in recovering the input parameters if independent noise is added to the field and the current.
If we add noise only to the magnetic field and derive the current from it, we find that the SVD method still recovers the $\alpha_{i\!j}$ tensor with good accuracy and the $\eta_{i\!j}$ tensor with less accuracy, the latter effect arising from the extra noise introduced in calculating the current from the field. \section{Results of the SVD Reconstruction} \label{sec:dns_dynamo_coefficients} We use the 3-D DNS data for model Q, described in \sref{sec:nirvana_setup}, and construct the vertical profiles of $ \mean{\mbox{\boldmath ${\cal E}$} {}} = \mean{ \mathbf{u} \times \mathbf{b}}$, $\mean{\mathbf{B}}$ and $\mean{\mathbf{J}}$ at each time step using \eref{avg}. We further smooth the profiles by applying a box filter with a window size equal to the SNR scale (approximately the size of 4 grid cells; this essentially filters out all noise below the turbulence forcing scale). We note that this smoothing preserves the Reynolds rules. We then choose a time period corresponding to the range $0.1-1$\Gyr in the kinematic phase of the DNS model, and extract the time series of $\mean{\mbox{\boldmath ${\cal E}$} {}}$, $\mean{\mathbf{B}}$ and $\mean{\mathbf{J}}$ independently at each $z$. The time interval between successive data points is smaller than the expected correlation time of $10$ Myr \citep{anvar_2004,Bendre2016}. We therefore choose subsets of the full time series in which the data points are more than $10$ Myr apart, such that each data point in a subset can be considered independent. We construct 9 such time series, referred to as $S1,S2,...,S9$, by starting from different initial times. For comparison, we also carry out the SVD analysis of the full time series, which we refer to as $S$. From any of these time series, the data matrices $\mathcal{Y}$ and design matrices $\mathcal{A}$ are constructed using the definitions \eref{data_matrix} and \eref{design_matrix} respectively (i.e., separately at each $z$).
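The construction of the decorrelated subsets can be sketched as a greedy selection of samples separated by at least the correlation time. This is a schematic with our own naming; the actual output cadence of the DNS may differ:

```python
import numpy as np

def independent_subset(times, t_corr, t_start):
    """Indices of samples at least t_corr apart, starting from the
    first sample at or after t_start (times assumed sorted)."""
    idx, t_last = [], -np.inf
    for n, t in enumerate(times):
        if t >= t_start and t - t_last >= t_corr:
            idx.append(n)
            t_last = t
    return np.array(idx)
```

Starting the selection at nine successive offsets `t_start` yields nine subsets analogous to $S1$ to $S9$.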
Since the chosen time period corresponds to the kinematic phase of the magnetic field, we expect the dynamo coefficients to be constant in this period, i.e.\ the coefficient matrix $\mathcal{X}$ (\eref{coefficients_matrix}) to be time independent. The mean field, current and EMF components from the DNS all grow exponentially, and increase by almost three orders of magnitude by the end of the kinematic phase compared to their initial values. To compensate for this large growth, we multiply the time series of all mean field, current and EMF components by $\exp{\left(-t/200{\rm \Myr}\right)}$, thereby taking out the exponential growth factor before fitting the data. {The choice of this particular scaling factor is based on the approximate exponential growth of the mean magnetic energy seen in the actual DNS data. We find the evolution of the mean magnetic energy to be roughly proportional to $\exp{\left(t/100{\rm \Myr}\right)}$ (see the right panel of \fref{comparison_1}); the square root of this factor is therefore used to approximate the exponential amplification of the mean-field and EMF components. We have checked that the results are insensitive to the exact choice of the scaling factor}. We apply the SVD algorithm described in \sref{sec:svd_method} to these data and determine the vertical profiles and variances of $\alpha_{i\!j}(z)$ and $\eta_{i\!j}(z)$ using \eref{e7}. In particular, we use the `svdcmp' algorithm described in \citet{recepies} to decompose the design matrix $\mathcal{A}$. \begin{figure} \centering\includegraphics[width=\columnwidth]{figures_with_z_smoothing/alpha_eta_aniso_kin.eps} \caption{The solid red lines show the average of the vertical profiles of the different dynamo coefficients, computed by applying the SVD method to the nine different time series $S1, S2,...,S9$ during the initial kinematic phase. Orange regions show the $1$-$\sigma$ variances obtained from these nine vertical profiles.
The inherent error in the SVD estimate of the individual coefficients (from \eref{var}) is almost an order of magnitude smaller than the orange regions in the graph. The dashed blue lines indicate the vertical profiles of the corresponding coefficients, calculated using the entire time series $S$ in the kinematic phase. We point out that the red solid lines and blue dashed lines almost coincide for all coefficients. Note that $\alpha_{xy}\approx-\alpha_{yx}$, giving rise to a vertical pumping term.} \label{alpha_eta} \end{figure} The profiles of all the components of the $\alpha_{i\!j}$ and $\eta_{i\!j}$ tensors, recovered from the various time series using the SVD method, are shown in \fref{alpha_eta}. The solid line in each panel shows the average profile obtained by averaging the individual profiles recovered from the time series $S1$ to $S9$, while the dashed line shows the dynamo coefficients obtained from the full time series $S$. We see a close correspondence between the two, showing that the oversampling implicit in the full time series $S$ does not affect the results. {In plotting, we smooth these profiles over a window of size $100$\pc, which corresponds roughly to the turbulent correlation length scale.} Also shown in \fref{alpha_eta}, by the orange shaded regions, are the variances of the dynamo coefficients recovered using the time series $S1$ to $S9$. We note that the formal errors on the $\alpha_{i\!j}$ and $\eta_{i\!j}$ coefficients from the SVD analysis, obtained using Eq.~\ref{var}, are much smaller than this variance. We see from \fref{alpha_eta} that the profiles of the diagonal $\alpha$ components are approximately linear in the inner disc, from about $-0.8$\kpc to $0.8$\kpc, and that they have opposite signs above and below the midplane.
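The exponential detrending applied to the time series before the fit can be sketched as follows; here the e-folding time of the energy is fit from the data rather than fixed, and the function name is ours:

```python
import numpy as np

def detrend_exponential(t, energy, series):
    """Estimate the e-folding time of the mean magnetic energy from a
    log-linear fit, then remove exp(t / (2 tau_E)) from a field-like
    series, since the field grows as the square root of the energy."""
    slope = np.polyfit(t, np.log(energy), 1)[0]
    tau_E = 1.0 / slope              # e-folding time of the energy
    return series * np.exp(-t / (2.0 * tau_E)), tau_E
```

For an energy growing as $\exp(t/100\,{\rm Myr})$ this removes the factor $\exp(t/200\,{\rm Myr})$ from the field, current and EMF series, as described above.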
The magnitude of $\alpha_{yy}$, which is the crucial part of the $\alpha$-effect for the $\alpha-\Omega$ dynamo, is zero at the midplane (as expected) and rises with $z$ to attain a maximum of about $3 {\rm \km\s^{-1}}$ by $z=1$\kpc. A curious feature is that $\alpha_{xx}$ is larger than, and opposite in sign to, $\alpha_{yy}$. Another significant result of this analysis is the emergent antisymmetry of the $\alpha$ tensor, which is to say that the off-diagonal elements have opposite signs and similar magnitudes. As already found previously \citep{gressel_2008}, these two combine to constitute a turbulent pumping term $\gamma\sim10$\kms at a height of $z=1$\kpc, which acts to transport mean magnetic fields towards the equator, against the outward advection by the vertical velocity $\overline{U}_z$. The profiles of the $\eta$ tensor, as recovered by the SVD method, are, as expected, much noisier. Both diagonal components of the $\eta$ tensor have approximately inverted-bell-shaped profiles, with a maximum turbulent diffusivity of $\simeq 10^{26} {\rm \cm^2 \s^{-1}}$ at a distance of a kpc from the disk midplane. These values compare favorably with theoretical expectations \citep{anvar_2004}. The off-diagonal component $\eta_{xy}$ oscillates around zero, while curiously $\eta_{yx}$ is also bell shaped, positive, and similar to $\eta_{yy}$. We have also checked the robustness of the SVD results in an alternative manner, by inter-comparing the coefficients computed within different sub-intervals of the original time series. In particular, we divide the kinematic-phase time series (100 to 1000\Myr) of the mean field, mean current and EMF components into four sections of equal length, and compute all dynamo coefficients corresponding to each of these sections using the SVD method. These four profiles are then used to compute the average profile and the dispersion about the average.
In \fref{alpha_eta_slots} we show, as red-solid curves, the average vertical profiles of the dynamo coefficients obtained from the four different sub-intervals of the kinematic phase. These can be readily compared with the blue-dashed curves, which show the same dynamo coefficients computed for the entire kinematic-phase time series (the same as the profiles shown in \fref{alpha_eta}). We see reasonably good agreement between the two, which shows the robustness of the SVD method and the validity of the assumption that the dynamo coefficients are approximately constant during the kinematic phase. Represented by the shaded orange region is the 1-$\sigma$ interval for these four vertical profiles of the dynamo coefficients. They also show that the fluctuation of $\alpha_{i\!j}$ and $\eta_{i\!j}$ with time is larger than the formal error given by the SVD analysis using the full time series. \begin{figure} \centering\includegraphics[width=\linewidth]{figures_with_z_smoothing/err_sigma_coeff.eps} \caption{Plotted as red-solid lines are the average vertical profiles of the dynamo coefficients computed for four different sections of the kinematic phase (0.1 to 1 \Gyr), each of length 225\Myr. Blue-dashed lines represent the same dynamo coefficients, but computed for the entire kinematic-phase time series; these are identical to the ones shown in \fref{alpha_eta}. Notice the significant agreement between the red-solid and the blue-dashed curves. Orange regions in each panel represent the corresponding 1-$\sigma$ interval computed using the profiles of the dynamo coefficients in the four sections of the kinematic phase.} \label{alpha_eta_slots} \end{figure} These profiles of the dynamo coefficients recovered via SVD are in qualitative agreement with those recovered from the {TF} method for the same model \citep{bendre2015dynamo, Bendre2016}. For ready reference, we have summarized the {TF} results derived using the same DNS data in \aref{alphas_tf}.
The magnitude of $\alpha_{yy}$, recovered by the SVD method, is within a $1$-$\sigma$ confidence interval of its {TF} counterpart. However, the magnitude of $\alpha_{xx}$ from the SVD method is systematically larger, by a factor of about 3. Furthermore, a bell-shaped profile of both diagonal components of the $\eta_{i\!j}$ tensor is obtained in both the SVD and {TF} methods. The magnitudes of $\eta_{xx}$ and $\eta_{yy}$, however, are substantially smaller in the SVD reconstruction than in the {TF} results. One possible explanation for this discrepancy is that the SVD and {TF} methods may, in fact, be sampling very different length scales in the problem, and that the different values for $\eta$ simply reflect this aspect. We will discuss this issue further in Section~\ref{discussion}. {In addition, in \sref{sec:regr} the results from the SVD analysis are also compared with those from a simple regression method due to {BS02}. The mean values of the coefficients obtained with this method are in agreement with those obtained using SVD. The standard deviations, however, tend to be somewhat larger. Moreover, as we will see below, we also use the SVD to systematically compute the full covariance matrix of the coefficients.} \subsection{Covariance of the Dynamo Coefficients} In principle, the overdetermined system defined by \eref{svd_equation} has no unique solution, in the sense that the same EMF time series could be produced by different sets of parameters. The SVD algorithm provides the solution that is the best approximation in a least-squares sense. There could, however, be a degeneracy in the determination, which can be probed quantitatively by comparing the off-diagonal elements of the covariance matrix to the diagonal ones.
Since the time series of both the $x$ and $y$ components of the EMF (the columns of $\mathcal{Y}$) depend on the first and second columns of $\mathcal{X}$ through the same design matrix $\mathcal{A}$, the covariances between the elements of $\Hat{\mathbf{x}}_1$ and $\Hat{\mathbf{x}}_2$ are identical (see \eref{cov}) and the following relations hold: \begin{align*} \mathrm{Cov}\Big(\alpha_{xx},\alpha_{xy}\Big) = \mathrm{Cov}\Big(\alpha_{yx},\alpha_{yy}\Big)\,,\\ \mathrm{Cov}\Big(\alpha_{xx},\eta_{xx}\Big) = \mathrm{Cov}\Big(\alpha_{yx},\eta_{yx}\Big)\,, \\ \mathrm{Cov}\Big(\alpha_{xx},\eta_{xy}\Big) = \mathrm{Cov}\Big(\alpha_{yx},\eta_{yy}\Big)\,, \\ \mathrm{Cov}\Big(\alpha_{xy},\eta_{xx}\Big) = \mathrm{Cov}\Big(\alpha_{yy},\eta_{yx}\Big)\,, \\ \mathrm{Cov}\Big(\alpha_{xy},\eta_{xy}\Big) = \mathrm{Cov}\Big(\alpha_{yy},\eta_{yy}\Big)\,, \\ \mathrm{Cov}\Big(\eta_{xx},\eta_{xy}\Big) = \mathrm{Cov}\Big(\eta_{yx},\eta_{yy}\Big)\,. \end{align*} \begin{figure} \centering \includegraphics[width=\linewidth]{figures_with_z_smoothing/cov.eps} \caption{Vertical profiles of all off-diagonal elements of the covariance tensor, normalized by the respective diagonal elements.} \label{covar} \end{figure} In \fref{covar}, we plot the vertical profiles of the off-diagonal elements of the covariance matrix (calculated using \eref{cov}), normalized by the corresponding diagonal elements. If two parameters are uncorrelated, the corresponding normalized covariance will be zero; full correlation corresponds to $+1$ and complete anti-correlation to $-1$. From this figure, we see that the diagonal and off-diagonal elements of the $\alpha_{i\!j}$ tensor, like $\alpha_{xx}$ and $\alpha_{xy}$ (or $\alpha_{yy}$ and $\alpha_{yx}$), are correlated for all $z$, while the corresponding diagonal and off-diagonal elements of the $\eta_{i\!j}$ tensor are anti-correlated (see the top-left and bottom-right panels of \fref{covar}).
Moreover, the correlations between the first and third elements and between the second and fourth elements of $\Hat{\mathbf{x}}_i$ (equivalently $\mathrm{Cov}\left(\alpha_{xx},\eta_{xx}\right)$ and $\mathrm{Cov}\left(\alpha_{yy},\eta_{yy}\right)$) are non-negligible within the central half a kpc of the disk, where the mean field is strong. These correlations can arise if there are definite correlations among the mean-field components themselves and between the field and current components. For example, suppose we have a definite eigenmode of the mean-field dynamo with, say, $\mean{B}_y \approx - c_1\mean{B}_x$ for a constant $c_1$. Then $\mean{\mathcal{E}}_x = \alpha_{xx}\mean{B}_x + \alpha_{xy}\mean{B}_y+...\approx (\alpha_{xx} -c_1 \alpha_{xy}) \mean{B}_x + ...$, and there can be a mixing (or correlation) between $\alpha_{xx}$ and $\alpha_{xy}$, as indeed observed. Similarly, if the field is partially helical, then there would be a correlation between $\mean{B}_x$ and $\mean{J}_x$ (and also of $\mean{B}_y$ with $\mean{J}_y$). This could indeed induce a partial correlation between $\alpha_{xx}$ and $\eta_{xx}$ (and of $\alpha_{yy}$ with $\eta_{yy}$), as is indeed observed in the top-right panel of \fref{covar}. We see therefore that the turbulent dynamo coefficients determined by the SVD, by directly fitting the $\mean{\mathbf{\mathcal{E}}}$ data from the numerical simulation, need not be completely independent. This will be the case if the mean fields and currents are partially correlated. The coefficients determined by the SVD do, however, give the best fit to the data in a least-squares sense. This also explains why the {TF} and SVD methods, even if they differ somewhat in the amplitudes of the different coefficients, could lead to a very similar $\mean{\mathbf{\mathcal{E}}}$ and hence predict a similar evolution for the mean field.
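The mixing argument can be made concrete with a toy example (ours, not the DNS data): when $\mean{B}_y = -c_1\mean{B}_x$ exactly, only the combination $\alpha_{xx}-c_1\alpha_{xy}$ is constrained, and distinct coefficient pairs produce identical EMF data:

```python
import numpy as np

# Toy illustration: perfectly correlated mean-field components make the
# individual alpha coefficients degenerate.
rng = np.random.default_rng(2)
c1 = 0.7
Bx = rng.normal(size=100)
By = -c1 * Bx                        # B_y ~ -c1 B_x, as for an eigenmode
E1 = 1.0 * Bx + 0.5 * By             # (alpha_xx, alpha_xy) = (1.0, 0.5)
delta = 0.3                          # shift both coefficients together
E2 = (1.0 + c1 * delta) * Bx + (0.5 + delta) * By
# both pairs share the same alpha_xx - c1*alpha_xy, so the EMFs agree
assert np.allclose(E1, E2)
```

In this degenerate limit the SVD pseudo-inverse returns the minimum-norm member of the family; the finite correlations seen in \fref{covar} are the milder, partial version of this effect.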
\subsection{Quenching of the Dynamo Coefficients} \begin{figure} \centering \includegraphics[width=\columnwidth]{figures_with_z_smoothing/alpha_eta_aniso_dyn.eps} \caption{Same as \fref{alpha_eta} but for the dynamical phase.} \label{alpha_quench_all} \end{figure} \begin{figure*} \centering \includegraphics[width=0.8\linewidth]{figures_with_z_smoothing/gamma_ayy_quenching_aniso_3_steps} \caption{\textit{Left Panel:} Vertical profile of the $\alpha_{yy}$ coefficient in the kinematic phase (black-solid line) and in the dynamical phases ($1.5$ - $2.0$) Gyr (red-dashed line) and ($2.0$ - $2.5$) Gyr (blue-dash-dotted line). \textit{Right Panel:} The same, but for the turbulent pumping (i.e., $\gamma$) term, with the same colour coding.} \label{alpha_quench} \end{figure*} It is of interest to examine the behaviour of the dynamo coefficients as the mean magnetic field grows and Lorentz forces become important. Our previous analysis based on the {TF} method \citep{gressel2012magnetic,Bendre2016} indicated that both the $\alpha$ and $\eta$ coefficients quench drastically in the presence of dynamically significant mean fields, as an algebraic function of the relative field strength. We perform a similar analysis using the SVD method here. To quantify the strength of the mean fields relative to the turbulent kinetic energy, we use the dimensionless ratio of the magnetic to kinetic energy defined by, \begin{align} \beta^2 = \frac{\mean{\mathbf{B}}\cdot\mean{\mathbf{B}}}{\mu_0 \,\rho\, u^2}\,. \end{align} To investigate the effect of a strong mean field on the dynamo coefficients, we examine their behaviour in two regimes of $\beta$. We choose the time series of $\mean{\mathbf{B}}$, $\mean{\mathbf{J}}$ and $\mean{\mbox{\boldmath ${\cal E}$} {}}$ from the DNS corresponding to $0.1$ to $1$\Gyr, representing the initial kinematic phase ($\beta\leq0.01$). For the dynamical phase ($\beta\geq 1$), we use the time series between $1.5$\Gyr and $2.54$\Gyr.
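For reference, the diagnostic $\beta^2$ is a one-liner; the sketch below uses our own naming, with the constants in whatever consistent unit system the fields are given in:

```python
import numpy as np

def beta_squared(B_mean, rho, u_rms, mu0=1.0):
    """beta^2 = |B_mean|^2 / (mu0 rho u^2): ratio of mean-field magnetic
    energy to turbulent kinetic energy, in consistent units."""
    B_mean = np.asarray(B_mean, dtype=float)
    return np.sum(B_mean**2, axis=0) / (mu0 * rho * u_rms**2)
```

$\beta^2=1$ then marks equipartition between the mean field and the turbulence.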
Equipped with these data, we compute the vertical profiles of all dynamo coefficients using the SVD method, as discussed in the previous section. It should be noted that the SVD algorithm we have employed requires the dynamo coefficients to stay constant over the chosen range of time. Consequently, in choosing the time slots at various values of $\beta$, we are tacitly assuming the constancy of the dynamo coefficients over that range. This assumption may be justified in the kinematic phase, where the exponential growth of the magnetic field is consistent with the solution of an $\alpha-\Omega$ dynamo with constant dynamo coefficients. Moreover, in the dynamical phase above $\sim 1.5$\Gyr, the average growth of the mean fields appears to be approximately exponential, albeit with a drastically reduced growth rate. This also hints at the existence of dynamo action with a set of approximately constant (but quenched) dynamo coefficients. For the intermediate phase between approximately 1.0 and 1.5\Gyr, however, where the transition between kinematic and quenched dynamo coefficients is expected to occur, the assumption of constancy may not hold. We therefore focus here on the dynamo coefficients in the kinematic and dynamical phases using this method, and not in the intermediate phase of transition. The dynamo coefficients in the dynamical phase are shown in \fref{alpha_quench_all}. The plots show a drastic suppression of the mean $\alpha_{yy}$ and the off-diagonal terms, $\alpha_{xy}$ and $\alpha_{yx}$, for $z < 0$, which in turn implies a drastic suppression of the pumping term $\gamma=\left(\alpha_{yx} - \alpha_{xy}\right)/2$. It appears that the mean values of these coefficients are less affected for $z>0$. However, the lower envelope of $\alpha_{yy}$ is closer to zero in the dynamical phase compared to the kinematic phase, even for $z>0$.
We find that the other coefficients do not undergo any systematic quenching, although the variances in these coefficients are much larger in the dynamical phase than in the kinematic phase, {and it is therefore harder to quantitatively determine the quenching}. To elucidate the behaviour of the dynamo coefficients in the dynamical phase further, we have split this period into two sub-periods, $1.5$ to $2$\Gyr and $2$ to $2.5$\Gyr. In \fref{alpha_quench} we compare the vertical profiles of the $\alpha_{yy}$ and $\gamma$ coefficients for these two dynamical regimes with those in the kinematic regime. The figure shows clearly that, for the final period of $2$ to $2.5$\Gyr, the suppression of $\alpha_{yy}$ and the pumping term $\gamma$ is drastic for all values of $z$. This also shows that the turbulent coefficients continue to evolve in the dynamical phase, with \fref{alpha_quench_all} showing their average behaviour under the assumption that they are constant. Note that in the kinematic stage the $\gamma$ term pumped the mean field back into the disk, counteracting the outflowing wind that was carrying it out into the halo. The decrease of $\alpha_{yy}$ and $\gamma$ could therefore be the cause of the dynamical saturation of the magnetic field seen in the DNS. We should also point out that the mean vertical velocity $\overline{U}_z$ is suppressed as $\beta$ (or the mean magnetic field) increases, as already shown in \citet*{bendre2015dynamo}. \subsection{Comparison of recovered \texorpdfstring{$\mean{\mbox{\boldmath ${\cal E}$} {}}$}{E} with the DNS} \label{DNS_SVD_emf} \begin{figure*} \centering\includegraphics[width=0.9\linewidth]{figures_with_z_smoothing/emf_contour.eps} \caption{\textit{Left Panels:} The bottom panel shows the time evolution of the vertical profile of $\overline{\mathcal{E}}_x$ obtained from the DNS, and the top panel depicts the same for the $x$ component of the EMF reconstructed using the SVD estimates of the $\alpha$ and $\eta$ tensors and the profiles of the mean-field components from the DNS.
\textit{Right Panels:} The same quantities as in the left panels, but for the $y$ component of the EMF. Note that the SOCA model for the EMF components roughly reproduces the actual DNS data. The white patch ranging from 1 to $1.5$\Gyr corresponds to the transition between the kinematic and dynamical phases of the field evolution; since the dynamo coefficients in this range cannot be reliably extracted by the present SVD method, we have omitted this patch from the comparison. The colour code is normalized by the exponential scaling factor $\exp{\left(t/200 {\rm Myr} \right)}$ in the kinematic phase, to compensate for the exponential amplification of the EMF components as mentioned in \sref{sec:svd_method}.} \label{emf_contour} \end{figure*} It is important to ask how well $\mean{\mbox{\boldmath ${\cal E}$} {}}$, obtained from the turbulent transport coefficients recovered with the SVD method according to \eref{e1} or \eref{svd_equation}, agrees with $\mean{\mbox{\boldmath ${\cal E}$} {}}$ in the DNS, and what the level of residual noise in this fit is. We compare in \fref{emf_contour} the evolution of $\overline{\mathcal{E}}_x \left(z\right)$ and $\overline{\mathcal{E}}_y\left(z\right)$ obtained from the DNS (bottom panels) with that reconstructed using the SVD estimates of the $\alpha_{i\!j}$ and $\eta_{i\!j}$ tensors (top panels). We see that there is reasonable agreement between the two, especially in the kinematic stage. \begin{figure} \centering \includegraphics[width=\linewidth]{figures_with_z_smoothing/noise_hist_kin.eps} \caption{Black-solid lines show the residual noise `probability' distributions obtained by subtracting the {kinematic phase} time series of $\mean{\mathbf{\mathcal{E}}}$ {obtained in the SVD reconstruction} from {that} obtained in the DNS. The noise is expressed in units of a percentage of the DNS $\mean{\mathbf{\mathcal{E}}}$ value. The red lines show the Gaussian best fit of the respective distributions.
Mean (avg) and standard deviation ($\sigma$) of each fit are given in each box, {and the square brackets show the errors in their determination}.} \label{emf_noise_kin} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{figures_with_z_smoothing/noise_hist_dyn.eps} \caption{Same as \fref{emf_noise_kin} but for the dynamical-phase time series (restricted to the time period $t \ge1.5$\Gyr) of the EMF components.} \label{emf_noise_dyn} \end{figure} In order to make this comparison more quantitative and to estimate the level of residual noise in the SVD fit, we have also compared the time series of both components of the EMF obtained from the DNS ($\mean{\mathcal{E}}_i=\mean{\left(\mathbf{u}\times \mathbf{b}\right)_i}$) with the ones reconstructed using the dynamo coefficients ($\mean{\mathcal{E}}_i = \alpha_{i\!j}\mean{B}_j-\eta_{i\!j}\mean{J}_j$) at specific locations. This is shown in \fref{emf_noise_kin} and \fref{emf_noise_dyn}, where we plot the histograms of the relative differences (in percent) between the $x$ and $y$ components of the EMF obtained from the DNS and the corresponding SVD estimates, {for the kinematic ($0.1$ to $1$\Gyr) and the dynamical phase (above $1.5$\Gyr) respectively. The left and right panels of each figure correspond respectively to the distribution of this residual noise in the $x$ and $y$ components of the EMF. We do this analysis at various heights, and the panels from bottom to top show the results at $z=-1$\kpc, $-0.5$\kpc, $0$\kpc, $0.5$\kpc and $1.0$\kpc}. We see that the relative difference between the two is mostly normally distributed at all heights. We {then} fit these histograms with Gaussian functions {(shown with red curves)}, and determine both the mean and the dispersion $\sigma$ of the distributions. We see from \fref{emf_noise_kin} that the mean of the residual noise is very close to zero during the kinematic phase, while the $\sigma$ values turn out to be less than a few percent at all locations.
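Since the maximum-likelihood Gaussian fit to a sample is simply its mean and standard deviation, the residual-noise statistics can be sketched as below (our naming; the paper's histogram-based fits may differ in detail):

```python
import numpy as np

def residual_noise_stats(E_dns, E_svd):
    """Relative residual (in per cent) between the DNS EMF and its SVD
    reconstruction, summarized by the Gaussian maximum-likelihood
    parameters, i.e. the sample mean and standard deviation."""
    res = 100.0 * (E_dns - E_svd) / E_dns
    return res.mean(), res.std()
```

A mean consistent with zero and a small dispersion then indicate an unbiased, tight reconstruction, as found here for the kinematic phase.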
Therefore the EMF reconstructed using the SVD method does give a good fit to that obtained directly in the simulations during the kinematic evolution. On the other hand, both the mean residual noise and its dispersion are larger during the dynamical phase, as can be seen from \fref{emf_noise_dyn}. The mean noise ranges from a few percent to 30\% for the $y$ component of the EMF, with the dispersion $\sigma$ less than 30\%. For the $x$ component, the mean residual noise has a similar range, except around $z\sim -0.5$ kpc, where it becomes as much as 70\%. The dispersion in the noise is also larger at this location. At all heights, however, the zero value is within the 1-$\sigma$ range of the noise distribution. These features indicate that while the SVD method provides an excellent fit in the kinematic phase, it does not provide as good a fit in the dynamical phase. At the same time, we shall see in Section~\ref{sec:1-d_dynamo} that a 1-D mean-field dynamo model using the turbulent dynamo coefficients obtained from the SVD method reproduces reasonably well the evolution of the mean magnetic field, not only in the kinematic phase but also in the dynamical phase. \section{Comparison of a 1-D dynamo model with the DNS} \label{sec:1-d_dynamo} The validity of the computed profiles of the dynamo coefficients in the kinematic and dynamical phases can also be verified, self-consistently, by demonstrating that a 1-D dynamo model using these coefficients gives results very similar to the DNS.
The 1-D dynamo equations are, \begin{align} \frac{\partial\overline{{B}}_x}{\partial t}& = \frac{\partial}{\partial z} \Bigg( -\left(\overline{U}_z +\alpha_{yx}\right) \overline{{B}}_x -\alpha_{yy} \, \overline{{B}}_y -\eta_{yy} \, \overline{{J}}_y -\eta_{yx} \, \overline{{J}}_x \Bigg)\nonumber\\\nonumber \frac{\partial\overline{{B}}_y}{\partial t} &= \frac{\partial}{\partial z} \Bigg( -\left(\overline{U}_z -\alpha_{xy}\right) \overline{{B}}_y +\alpha_{xx} \, \overline{{B}}_x +\eta_{xx} \, \overline{{J}}_x -\eta_{xy} \, \overline{{J}}_y \Bigg)\nonumber\\&+q\,\Omega\,\overline{{B}}_x. \label{e_dynamo} \end{align} Note that from $\nabla\cdot{\mathbf{B}}=0$, $\mean{B}_z={\rm const.}$ for the $x$-$y$ averaged mean field. We solve \eref{e_dynamo} with a resolution of 512 grid points, similar to the DNS. Adopting a continuous-gradient boundary condition for $\mean{B}_x$ and $\mean{B}_y$, we evolve \eref{e_dynamo} on a staggered grid using a finite-difference method. The initial profiles of $\mean{B}_x$ and $\mean{B}_y$ are taken directly from the respective DNS data averaged over the first $150$\Myr. The vertical profiles of the dynamo coefficients used for the first gigayear are those determined from the SVD analysis and given in \fref{alpha_eta}. For the later period of 1 to $1.5$\Gyr, we use profiles of $\alpha_{yy}$ and the pumping term linearly interpolated between the black-solid and blue-dashed curves shown in \fref{alpha_quench}, to roughly mimic the transition between the kinematic and dynamical phases, and keep the other coefficients the same. For the period after $1.5$\Gyr, to simulate the dynamical quenching of the coefficients, we further replace these with the ones shown by the blue-dashed curves in \fref{alpha_quench}, keeping the rest of the coefficients constant. We run this model up to $2.54$\Gyr.
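As an illustration of such a solver, the sketch below advances the planar-averaged fields with one explicit Euler step. It is a minimal toy, not the paper's staggered-grid solver: it assumes the convention $\mean{\mathcal{E}}_i=\alpha\mean{B}_i-\eta\mean{J}_i$ with scalar $\alpha$ and $\eta$, $\mu_0\mean{\mathbf{J}}=\nabla\times\mean{\mathbf{B}}$, and no wind term:

```python
import numpy as np

def dynamo_step(Bx, By, z, alpha, eta, q_Omega, dt):
    """One explicit Euler step of the 1-D mean-field equations
    dBx/dt = -dEy/dz and dBy/dt = dEx/dz + q Omega Bx,
    with E_i = alpha B_i - eta J_i and mu0 J = curl B."""
    dz = z[1] - z[0]
    ddz = lambda f: np.gradient(f, dz)
    Jx, Jy = -ddz(By), ddz(Bx)          # curl of (Bx(z), By(z), 0)
    Ex = alpha * Bx - eta * Jx
    Ey = alpha * By - eta * Jy
    Bx_new = Bx + dt * (-ddz(Ey))       # (curl E)_x = -dEy/dz
    By_new = By + dt * (ddz(Ex) + q_Omega * Bx)
    return Bx_new, By_new
```

With $\alpha=0$ this reduces to turbulent diffusion of each component; an explicit step of course requires $dt \lesssim dz^2/\eta$ for stability.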
\begin{figure*} \centering \includegraphics[width=0.8\linewidth]{figures_with_z_smoothing/by_z.eps} \caption{\textit{Left Panel:} Black-solid lines show the vertical profile of $\overline{{B}}_y$ seen in the DNS, while their dashed counterparts show the same for the 1-D dynamo model. The approximately symmetric profiles with respect to the midplane correspond to $\overline{{B}}_y$ at the end of the dynamical phase, 2.5\Gyr, and the antisymmetric ones correspond to 0.8\Gyr. \textit{Right Panel:} The black-solid line represents the time evolution of the mean magnetic energy, expressed in logarithmic units, for the DNS, while the dashed line represents the same for the 1-D model.} \label{comparison_1} \end{figure*} The results of these simulations are shown in \fref{comparison_1} and \fref{comparison_2}. After an initial period of $\sim80$\Myr, when the initial transients decay, the overall evolution of the magnetic field in the 1-D model is reasonably consistent with the outcome of the DNS. To corroborate this, in \fref{comparison_1} (right panel) we first compare the time evolution of the mean magnetic energies in the DNS (solid line) and the 1-D simulation (dashed line). We see that the mean magnetic energy curves from the direct and 1-D simulations overlap closely in both the kinematic and dynamical phases. This clearly shows the overall similarity in the magnetic energy growth, with an e-folding time of $\sim100$\Myr in the kinematic phase and $\sim520$\Myr in the dynamical phase, as in the DNS. Furthermore, in \fref{comparison_2} we compare the space-time butterfly diagrams of the azimuthal field component ($z$-$t$ evolution). This also shows the qualitative similarity with which the field profile is reproduced, along with the reversals and the emergence of the final symmetric mode.
To supplement this, we additionally compare in \fref{comparison_1} (left panel) the vertical profiles of $\overline{B}_y$ from the DNS (solid lines) and the 1-D simulation (dashed lines) at an intermediate time of $0.8$ \Gyr (when an antisymmetric mode with respect to the mid-plane obtains) and near the end of the simulation at $2.5$ \Gyr (when a symmetric mode is prevalent). This comparison also shows that the approximate shape of the $\overline{B}_y$ profile is well replicated in the 1-D dynamo simulations. Overall, the similarity in the evolution of the mean magnetic field between the DNS and the 1-D model, which uses the dynamo coefficients determined with the SVD method, supports the robustness of the chosen approach. \begin{figure*} \centering \includegraphics[width=0.9\linewidth]{figures_with_z_smoothing/b_contour.eps} \caption{\textit{Left Panel:} Time evolution of the vertical profile of $\overline{B}_x$ seen in the DNS (bottom panel) and in the 1-D dynamo model (top panel). \textit{Right Panel:} The same, but for the $y$ component of the mean field. We have normalized the colour scale by the mean magnetic energy to compensate for the exponential amplification of the mean field and make its initial features visible.} \label{comparison_2} \end{figure*} \section{Discussion and Conclusions} \label{discussion} The determination of the turbulent transport coefficients that give rise to large-scale dynamo action in direct simulations is important for understanding how the dynamo operates to grow and maintain large-scale magnetic fields. Several methods have been suggested in the past, including what is known as the test-field ({TF}) method and the singular value decomposition (SVD) method. The SVD is particularly useful for post-processing analysis of simulation data. We have presented in this paper an SVD analysis of the simulation of SNe-driven ISM turbulence of \citet*{bendre2015dynamo}, which had led to large-scale field generation.
{TF} results for this simulation were presented there, which also makes it possible to compare the results obtained from these two very different methods. The profiles of the dynamo coefficient tensors $\alpha_{i\!j}(z)$ and $\eta_{i\!j}(z)$ in the SVD method are obtained from the turbulence data by minimising the least squares of a residual vector $R_{i}=\mean{\mathcal{E}}_{i}-\alpha_{i\!j} \,\mean{B}_{j}+\eta_{i\!j}\,\mean{J}_{j}$. As a consistency check, we verify the efficacy of the SVD algorithm in \aref{app_mockdata}, by using the exact data produced in 1-D simulations, adding random white noise up to a level of 50\% of the actual data, and showing that the SVD effectively reconstructs the dynamo coefficient tensors. The profiles of the $\alpha_{i\!j}(z)$ and $\eta_{i\!j}(z)$ tensors, calculated using the SVD method, are shown in \fref{alpha_eta} for the kinematic phase and in \fref{alpha_quench_all} for the dynamical phase, when the dynamo growth has slowed. We also show that the turbulent EMF components predicted using the reconstructed $\alpha_{i\!j}$ and $\eta_{i\!j}$ tensors match the actual DNS data quite well for the kinematic phase, with very little residual noise. The match is not as good for the dynamical phase. However, we show that the evolution of the mean magnetic fields, predicted by solving the 1-D mean-field dynamo equations using the reconstructed dynamo coefficients, matches remarkably well with that determined from the DNS, as can be seen from \fref{comparison_1} and \fref{comparison_2}. Thus the SVD does indeed give a reasonable reconstruction of the dynamo coefficients. The predicted magnitude of $\alpha_{yy}$, crucial for regenerating $\mean{B}_x$ from $\mean{B}_y$ in the $\alpha-\Omega$ dynamo, is zero at the midplane (as expected) and rises with $z$ to attain a maximum of about $3 {\rm\km\s^{-1}}$ by $z = 1$ \kpc.
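The least-squares fit underlying the residual $R_i$ above can be sketched in a few lines: at each height $z$, one fits $\mean{\mathcal{E}}_i(t)=\alpha_{i\!j}\mean{B}_j(t)-\eta_{i\!j}\mean{J}_j(t)$ to the time series by SVD-based linear least squares. Synthetic data with known coefficients stands in for the DNS time series here, and the coefficient values are made up for illustration.

```python
import numpy as np

# Sketch of the coefficient reconstruction at a single height z: fit
#   E_i(t) = alpha_ij B_j(t) - eta_ij J_j(t),   i, j in {x, y},
# to a time series by linear least squares (numpy.linalg.lstsq is SVD-based).
# Synthetic data with known coefficients stands in for the DNS time series,
# with white noise added as in the mock-data test described in the text.
rng = np.random.default_rng(42)
nt = 500
B = rng.standard_normal((nt, 2))               # columns: Bx(t), By(t)
J = rng.standard_normal((nt, 2))               # columns: Jx(t), Jy(t)
alpha_true = np.array([[1.0, 3.0], [-0.5, 2.0]])
eta_true = np.array([[10.0, 1.0], [0.0, 8.0]])
E = B @ alpha_true.T - J @ eta_true.T
E += 0.5 * rng.standard_normal(E.shape)        # measurement noise

# Each EMF component is linear in (Bx, By, -Jx, -Jy): one design matrix.
A = np.hstack([B, -J])                         # shape (nt, 4)
coef, *_ = np.linalg.lstsq(A, E, rcond=None)   # shape (4, 2)
alpha_fit, eta_fit = coef[:2].T, coef[2:].T    # recover alpha_ij, eta_ij
```

With enough time samples the fitted tensors recover the input values to within the noise level, which is the essence of the mock-data test in the appendix.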
We also predict a turbulent pumping term $\gamma \sim 10$ \kms at a height of one \kpc, which acts to transport mean fields towards the equator, against the outward advection by the vertical velocity $\overline{U}_z$. The diagonal components of the turbulent diffusion tensor, as recovered by the SVD method, are much noisier, with approximately inverted-bell-shaped profiles and a maximum turbulent diffusivity of $\simeq 10^{26} {\rm \cm^2 \s^{-1}}$ at a distance of one \kpc from the disk midplane. These numbers compare favorably with values expected on the basis of simple estimates for galactic dynamos \citep{anvar_2004}. We find that as the mean field becomes stronger, both $\alpha_{yy}$ and $\gamma$ are suppressed, while the other coefficients remain largely unaltered. The vertical profiles of all the dynamo coefficients constructed in the SVD analysis (\fref{alpha_eta}) are qualitatively similar to their {TF} counterparts (shown in \fref{alpha_eta_tf}) during the kinematic phase. {In this phase, they are also similar to those obtained from the simple regression method of {BS02}, as shown in \fref{fig:alpha_eta_regression}.} The amplitude of $\alpha_{xx}$ determined in the SVD analysis, however, is larger by a factor of 3 {compared to the {TF} method}, while $\eta_{xx}$ and $\eta_{yy}$ are substantially smaller. We stress that the SVD method is likely most sensitive to vertical gradients in the mean fields on relatively {smaller} scales. This is because it is restricted to the length scales that are actually sampled by the fields present in the DNS, which are confined to a few hundred parsec around the midplane. In contrast, the {TF} method (in the form used here) is essentially a spectral method, in which one is free to choose the length scale probed. Note that the particular choice here was to use test fields that vary on the largest vertical scales accessible in the tall simulation domain.
\citet{GP15} have demonstrated the scale-dependence of the {TF} coefficients for the case of magnetorotational turbulence, where the coefficients were found to decay from their peak value at the largest available scales by a factor of a few when approaching the smallest scales accessible to the simulation. This scale-dependence may in fact fully explain the tension between the SVD and {TF} results, which can be tested by running the latter method at higher vertical wavenumber. In the dynamically quenched phase, the {TF} method predicts suppression not only of $\alpha_{yy}$ and $\gamma$ but also of the turbulent diffusion tensor, with the latter feature not having been {clearly} obtained in the SVD analysis. Nevertheless, the 1-D model using the {TF} results also matched the evolution of the mean magnetic fields obtained in the DNS \citep*{bendre2015dynamo}. This seems to indicate a degeneracy in the determination of the magnitudes of the turbulent transport coefficients using different methods. A potential explanation for this feature is the existence of partial correlations between some of the parameters, as explicitly shown in \fref{covar}. Our work here has demonstrated that the SVD method is a self-consistent and useful way of determining turbulent transport coefficients. In contrast to the {TF} method, it is computationally inexpensive, since it is applied merely as a post-processing tool. However, as a potential downside of the SVD method, some of the determined parameters can become correlated if the different components of the mean fields and currents have definite correlations. The reconstruction of the coefficients that couple to the current is also noisier in the SVD method. These latter issues are tacitly avoided in the {TF} method, which studies the inductive response to known functional forms of additional mean test fields.
The {TF} method itself is believed to have difficulties in dealing with a strong small-scale dynamo, where small-scale fields are generated independently of the mean magnetic field. Here such fields would merely appear as an extra noise term to be accounted for self-consistently in the SVD reconstruction. It will be important to study a case where both large- and small-scale dynamos are active (e.g., the case in \citet{BSB19}) with the SVD method. Also of interest will be to recover the dynamo coefficients when the mean field is defined differently, as in the filtering approach \citep[e.g.,][]{gent_2013}. In addition, it would be useful to implement Bayesian priors for the dynamo coefficients, perhaps using the information field theory approach \citep[e.g.,][]{IFT_torsten}, while doing the least-squares minimisation of the data; a study which is left for the future. \section*{Acknowledgements} We used the NIRVANA code version 3.3, developed by Udo Ziegler at the Leibniz-Institut f\"ur Astrophysik Potsdam (AIP). For computations we used the Leibniz Computer Cluster, also at AIP. We thank Dipankar Bhattacharya, Torsten En{\ss}lin and Anvar Shukurov for very insightful discussions. \bibliographystyle{mnras}
\section{Introduction} The task of Visual Question Answering has received growing interest in recent years (see \cite{malinowski2014multi,antol2015vqa,wu2016survey} for example). One of the more interesting aspects of the problem is that it combines computer vision, natural language processing, and artificial intelligence. In its \textit{open-ended} form, a question is provided as text in natural language together with an image, and a correct answer must be predicted, typically in the form of a single word or a short phrase. In the \textit{multiple-choice} variant, an answer is selected from a provided set of candidates, alleviating evaluation issues related to synonyms and paraphrasing. \begin{figure}[t] \begin{center} \includegraphics[width=0.99\linewidth]{fig-teaser.pdf} \end{center} \label{fig:teaser} \vspace{-6pt} \caption{We encode the input scene as a graph representing the objects and their spatial arrangement, and the input question as a graph representing words and their syntactic dependencies. A neural network is trained to reason over these representations, and to produce a suitable answer as a prediction over an output vocabulary.} \vspace{-8pt} \end{figure} Multiple datasets for VQA have been introduced with either real \cite{antol2015vqa,krishnavisualgenome,malinowski2014multi,ren2015image,zhu2015visual7w} or synthetic images \cite{antol2015vqa,zhang2015balanced}. Our experiments use the latter: clip art or ``cartoon'' images created by humans to depict realistic scenes (usually referred to as ``abstract scenes'', despite this being a misnomer). We focus on this dataset of clip art scenes as it allows us to study semantic reasoning and vision-language interactions in isolation from the performance of visual recognition (see examples in Fig.~\ref{fig:qualitative}). It also allows the manipulation of the image data so as to better illuminate algorithm performance.
A particularly attractive VQA dataset was introduced in \cite{zhang2015balanced} by selecting only the questions with binary answers (\emph{e.g.}~ yes/no) and pairing each (synthetic) image with a minimally-different complementary version that elicits the opposite (no/yes) answer (see examples in Fig.~\ref{fig:qualitative}, bottom rows). This strongly contrasts with other VQA datasets of real images, where a correct answer is often obvious without looking at the image, by relying on systematic regularities of frequent questions and answers \cite{antol2015vqa,zhang2015balanced}. Performance improvements reported on such datasets are difficult to interpret as actual progress in scene understanding and reasoning as they might similarly be taken to represent a better modeling of the language prior of the dataset. This hampers, or at best obscures, progress toward the greater goal of general VQA. In our view, and despite obvious limitations of synthetic images, improvements on the aforementioned ``balanced'' dataset constitute an illuminating measure of progress in scene-understanding, because a language model alone cannot perform better than chance on this data. \paragraph{Challenges} The questions in the clip-art dataset vary greatly in their complexity. Some can be directly answered from observations of visual elements, \emph{e.g.}~ \textit{Is there a dog in the room ?}, or \textit{Is the weather good ?}. Others require relating multiple facts or understanding complex actions, \emph{e.g.}~ \textit{Is the boy going to catch the ball?}, or \textit{Is it winter?}. An additional challenge, which affects all VQA datasets, is the sparsity of the training data. Even a large number of training questions (almost 25,000 for the clip art scenes of \cite{antol2015vqa}) cannot possibly cover the combinatorial diversity of possible objects and concepts. 
Adding to this challenge, most methods for VQA process the question through a recurrent neural network (such as an LSTM) trained from scratch solely on the training questions. \paragraph{Language representation} The above reasons motivate us to take advantage of the extensive existing work in the natural language community to aid processing the questions. First, we identify the syntactic structure of the question using a dependency parser \cite{demarneffe2008parser}. This produces a graph representation of the question in which each node represents a word and each edge a particular type of dependency (\emph{e.g.}~ \textit{determiner}, \textit{nominal subject}, \textit{direct object}, \emph{etc}\onedot} \def\vs{\emph{vs}\onedot). Second, we associate each word (node) with a vector embedding pretrained on large corpora of text data \cite{pennington2014glove}. This embedding maps the words to a space in which distances are semantically meaningful. Consequently, this essentially regularizes the remainder of the network to share learned concepts among related words and synonyms. This particularly helps in dealing with rare words, and also allows questions to include words absent from the training questions/answers. Note that this pretraining and \textit{ad hoc} processing of the language part mimics a practice common for the image part, in which visual features are usually obtained from a fixed CNN, itself pretrained on a larger dataset and with a different (supervised classification) objective. \paragraph{Scene representation} Each object in the scene corresponds to a node in the scene graph, which has an associated feature vector describing its appearance. The graph is fully connected, with each edge representing the relative position of the objects in the image. \paragraph{Applying Neural Networks to graphs} The two graph representations feed into a deep neural network that we will describe in Section~\ref{sec:architecture}. 
The advantage of this approach with text- and scene-graphs, rather than more typical representations, is that the graphs can capture relationships between words and between objects which are of semantic significance. This enables the graph neural network to exploit (1)~the \textit{unordered} nature of scene elements (the objects in particular) and (2)~the \emph{semantic relationships} between elements (and the grammatical relationships between words in particular). This contrasts with the typical approach of representing the image with CNN activations (which are sensitive to individual object locations but less so to relative position) and processing the words of the question serially with an RNN (despite the fact that grammatical structure is very non-linear). The graph representation ignores the order in which elements are processed, but instead represents the relationships between different elements using different edge types. Our network uses multiple layers that iterate over the features associated with every node, then ultimately identifies a soft matching between nodes from the two graphs. This matching reflects the correspondences between the words in the question and the objects in the image. The features of the matched nodes then feed into a classifier to infer the answer to the question (Fig.~\ref{fig:teaser}). \vspace{1ex} \noindent The main contributions of this paper are four-fold. \begin{enumerate}[topsep=2pt,itemsep=-1ex,partopsep=1ex,parsep=1ex,label={\arabic*)},leftmargin=3.0ex] \item We describe how to use graph representations of scene and question for VQA, and a neural network capable of processing these representations to infer an answer. \item We show how to make use of an off-the-shelf language parsing tool by generating a graph representation of text that captures grammatical relationships, and by making this information accessible to the VQA model.
This representation uses a pre-trained word embedding to form node features, and encodes syntactic dependencies between words as edge features. \item We train the proposed model on the VQA ``abstract scenes'' benchmark \cite{antol2015vqa} and demonstrate its efficacy by raising the state-of-the-art accuracy from 71.2\% to 74.4\% in the multiple-choice setting. On the ``balanced'' version of the dataset, we raise the accuracy from 34.7\% to 39.1\% in the hardest setting (requiring a correct answer over \textit{pairs} of scenes). \item We evaluate the uncertainty in the model by presenting --~for the first time on the task of VQA~-- precision/recall curves of predicted answers. Those curves provide more insight than the single accuracy metric and show that the uncertainty estimated by the model about its predictions correlates with the ambiguity of the human-provided ground truth. \end{enumerate} \section{Related work} \begin{figure*}[t] \vspace{-3pt} \includegraphics[width=.95\linewidth]{fig-network.pdf} \vspace{-1pt} \caption{Architecture of the proposed neural network. The input is provided as a description of the scene (a list of objects with their visual characteristics) and a parsed question (words with their syntactic relations). The scene-graph contains a node with a feature vector for each object, and edge features that represent their spatial relationships. The question-graph reflects the parse tree of the question, with a word embedding for each node, and a vector embedding of types of syntactic dependencies for edges. A recurrent unit (GRU) is associated with each node of both graphs. Over multiple iterations, the GRU updates a representation of each node that integrates context from its neighbours within the graph. Features of all objects and all words are combined (concatenated) pairwise, and they are weighted with a form of attention. That effectively matches elements between the question and the scene. 
The weighted sum of features is passed through a final classifier that predicts scores over a fixed set of candidate answers.} \label{fig:network} \vspace{-10pt} \end{figure*} The task of {visual question answering} has received increasing interest since the seminal paper of Antol \emph{et al.} \cite{antol2015vqa}. Most recent methods are based on the idea of a \textbf{joint embedding} of the image and the question using a deep neural network. The image is passed through a convolutional neural network (CNN) pretrained for image classification, from which intermediate features are extracted to describe the image. The question is typically passed through a recurrent neural network (RNN) such as an LSTM, which produces a fixed-size vector representing the sequence of words. These two representations are mapped to a joint space by one or several non-linear layers. They can then be fed into a classifier over an output vocabulary, predicting the final answer. Most recent papers on VQA propose improvements and variations on this basic idea. Consult \cite{wu2016survey} for a survey. A major improvement to the basic method is to use an \textbf{attention mechanism} \cite{zhu2015visual7w,xu2015ask,Chen2015ABC,Jiang2015compositional,andreas2015deep,yang2015stacked}. It models interactions between specific parts of the inputs (image and question) depending on their actual contents. The visual input is then typically represented as a spatial feature map, instead of holistic, image-wide features. The feature map is used with the question to determine spatial weights that reflect the most relevant regions of the image. Our approach uses a similar weighting operation, which, with our graph representation, we equate to a subgraph matching. Graph nodes representing question words are associated with graph nodes representing scene objects and vice versa.
Similarly, the co-attention model of Lu \emph{et al.} \cite{lu2016hierarchical} determines attention weights on both image regions and question words. Their best-performing approach proceeds in a sequential manner, starting with question-guided visual attention followed by image-guided question attention. In our case, we found that a joint, one-pass version performs better. A major contribution of our model is to use \textbf{structured representations} of the input scene and the question. This contrasts with typical CNN and RNN models, which are limited to spatial feature maps and sequences of words, respectively. The dynamic memory network (DMN), applied to VQA in \cite{xiong2016}, also maintains a set-like representation of the input. As in our model, the DMN models interactions between different parts of the input. Our method can additionally take, as input, features characterizing arbitrary relations between parts of the input (the edge features in our graphs). This specifically allows making use of syntactic dependencies between words after pre-parsing the question. Most VQA systems are trained end-to-end from questions and images to answers, with the exception of the visual feature extractor, which is typically a CNN pretrained for image classification. For the \textbf{language processing} part, some methods address the semantic aspect with word embeddings pretrained on a language modeling task (\emph{e.g.}~~\cite{shih2015look,fukui2016multimodal}). The syntactic relationships between the words in the question are typically overlooked, however. In \cite{zhang2015balanced}, hand-designed rules serve to identify primary and secondary objects of the questions. In the Neural Module Networks \cite{andreas2015deep,andreas2016learning}, the question is processed by a dependency parser, and fragments of the parse, selected with \textit{ad hoc} fixed rules, are associated with modules that are assembled into a full neural network.
In contrast, our method is trained to make direct use of the output of a syntactic parser. \textbf{Neural networks on graphs} have received significant attention recently \cite{duvenaud2015molecular,jain2016graphs,li2015graph}. The approach most similar to ours is the Gated Graph Sequence Neural Network \cite{li2015graph}, which associates a gated recurrent unit (GRU \cite{cho2014learning}) with each node, and updates the feature vector of each node by iteratively passing messages between neighbours. Also related is the work of Vinyals \emph{et al.}~\cite{vinyals2015sets} for embedding a set into a fixed-size vector, invariant to the order of its elements. They do so by feeding the entire set through a recurrent unit multiple times. Each iteration uses an attention mechanism to focus on different parts of the set. Our formulation similarly incorporates information from neighbours into each node feature over multiple iterations, but we did not find any advantage in using an attention mechanism within the recurrent unit. \section{Graph representation of scenes and questions} \vspace{-4pt} The input data for each training or test instance is a question, and a parameterized description of the contents of the scene. The question is processed with the Stanford dependency parser \cite{demarneffe2008parser}, which outputs the following. \begin{enumerate}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1.0ex,label={\tiny$\bullet$},leftmargin=2.0ex] \vspace{1pt} \item A set of $N^\mathrm{Q}$ words that constitute the nodes of the question graph. Each word is represented by its index in the input vocabulary, a token $x_i^\mathrm{Q} \in \mathbb{Z}$ ($i\in1..N^\mathrm{Q}$). \vspace{1pt} \item A set of pairwise relations between words, which constitute the edges of our graph. An edge between words $i$ and $j$ is represented by $e_{ij}^\mathrm{Q} \in \mathbb{Z}$, an index among the possible types of dependencies.
\end{enumerate} \vspace{1pt} The dataset provides the following information about the image: \begin{enumerate}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1.0ex,label={\tiny$\bullet$},leftmargin=2.0ex] \vspace{1pt} \item A set of $N^\mathrm{S}$ objects that constitute the nodes of the scene graph. Each node is represented by a vector $x_i^{\mathrm{S}} \in \mathbb{R}^C$ of visual features ($i\in1..N^\mathrm{S}$). Please refer to the supplementary material for implementation details. \vspace{1pt} \item A set of pairwise relations between all objects. They form the edges of a fully-connected graph of the scene. The edge between objects $i$ and $j$ is represented by a vector $e_{ij}^\mathrm{S} \in \mathbb{R}^D$ that encodes relative spatial relationships (see supp. mat.). \end{enumerate} \vspace{1pt} Our experiments are carried out on datasets of clip art scenes, in which descriptions of the scenes are provided in the form of lists of objects with their visual features. The method is equally applicable to real images, with the object list replaced by candidate object detections. Our experiments on clip art allow the effect of the proposed method to be isolated from the performance of the object detector. The features of all nodes and edges are projected to a vector space $\mathbb{R}^H$ of common dimension (typically $H$=300).
The question nodes and edges use vector embeddings implemented as look-up tables, and the scene nodes and edges use affine projections: \abovedisplayskip=3pt \belowdisplayskip=3pt \begin{flalign} & x_i^{'\mathrm{Q}} = W_1\big[ x^\mathrm{Q}_i \big] ~~~~~~\;\; e_{ij}^{'\mathrm{Q}} = W_2\big[ e_{ij}^\mathrm{Q} \big] \\ & x_i^{'\mathrm{S}} = W_3 x_i^\mathrm{S} + b_3 ~~~~~ e_{ij}^{'\mathrm{S}} = W_4 e_{ij}^\mathrm{S} + b_4 \end{flalign} with $W_1$ the word embedding (usually pretrained, see supplementary material), $W_2$ the embedding of dependencies, $W_3 \in \mathbb{R}^{h \times c}$ and $W_4 \in \mathbb{R}^{h \times d}$ weight matrices, and $b_3, b_4 \in \mathbb{R}^h$ biases. \section{Processing graphs with neural networks} \label{sec:architecture} We now describe a deep neural network suitable for processing the question and scene graphs to infer an answer. See Fig.~\ref{fig:network} for an overview. The two graphs representing the question and the scene are processed independently in a recurrent architecture. We drop the superscripts $\mathrm{S}$ and $\mathrm{Q}$ for this paragraph as the same procedure applies to both graphs. Each node $x_i$ is associated with a gated recurrent unit (GRU \cite{cho2014learning}) and processed over a fixed number $T$ of iterations (typically $T$=4): \begin{flalign} \label{eq:gru} & h^0_i = 0\\ & n_i = \pool\nolimits_j (\,e'_{ij} \circ x'_j\,)\\ & h^t_i = \gru \hspace{-.2ex} \big( \, h^{t-1}_i, \; [ x'_i \,;\, n_i ] \, \big) ~~~~~~t \in [1,T]. \end{flalign} Square brackets with a semicolon represent a concatenation of vectors, and $\circ$ the Hadamard (element-wise) product. The final state of the GRU is used as the new representation of the nodes: $x''_i~=~h^T_i$. The $\pool$ operation transforms features from a variable number of neighbours (\emph{i.e.}~ connected nodes) to a fixed-size representation. Any commutative operation can be used (\emph{e.g.}~ sum, maximum).
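The recurrent update above can be made concrete with a toy numpy sketch: neighbour features are gated by edge features (Hadamard product), pooled by averaging, concatenated with the node's own features, and fed to a shared GRU cell. All weights below are random stand-ins for the learned parameters of the actual model, and the sizes are arbitrary.

```python
import numpy as np

# Toy sketch of the per-node recurrent graph update described in the text.
rng = np.random.default_rng(0)
H, T, n_nodes = 8, 4, 5                         # feature size, iterations, nodes
x = rng.standard_normal((n_nodes, H))           # projected node features x'_i
e = rng.standard_normal((n_nodes, n_nodes, H))  # projected edge features e'_ij

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Shared GRU cell: state h of size H, input [x'_i ; n_i] of size 2H.
Wz = 0.1 * rng.standard_normal((H, 3 * H))      # update-gate weights
Wr = 0.1 * rng.standard_normal((H, 3 * H))      # reset-gate weights
Wc = 0.1 * rng.standard_normal((H, 3 * H))      # candidate weights

def gru_step(h, u):
    zt = sigmoid(Wz @ np.concatenate([h, u]))
    rt = sigmoid(Wr @ np.concatenate([h, u]))
    ct = np.tanh(Wc @ np.concatenate([rt * h, u]))
    return (1.0 - zt) * h + zt * ct

h = np.zeros((n_nodes, H))                      # h^0_i = 0
for _ in range(T):
    for i in range(n_nodes):
        n_i = np.mean(e[i] * x, axis=0)         # pool_j( e'_ij o x'_j ), average
        h[i] = gru_step(h[i], np.concatenate([x[i], n_i]))

x_out = h                                       # x''_i = h^T_i
```

Note that each node's GRU state depends only on its own previous state and the fixed projected features, so the nodes can be updated in any order within an iteration.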
In our implementation, we found the best performance with the average function, taking care to average over the variable number of connected neighbours. An intuitive interpretation of the recurrent processing is to progressively integrate context information from connected neighbours into each node's own representation. A node corresponding to the word `ball', for instance, might thus incorporate the fact that the associated adjective is `red'. Our formulation is similar to, but slightly different from, the gated graph networks \cite{li2015graph}, as the propagation of information in our model is limited to the first order. Note that our graphs are typically densely connected. We now introduce a form of attention, which constitutes an essential part of the model. The motivation is two-fold: (1)~to identify parts of the input data most relevant to produce the answer and (2)~to align specific words in the question with particular elements of the scene. Practically, we estimate the relevance of each possible pairwise combination of words and objects. More precisely, we compute scalar ``matching weights'' between node sets $\{x_i^{'\mathrm{Q}}\}$ and $\{x_i^{'\mathrm{S}}\}$. These weights are comparable to the ``attention weights'' in other models (\emph{e.g.}~~\cite{lu2016hierarchical}). Therefore, $\forall ~~i\in1..N^\mathrm{Q}, j\in1..N^\mathrm{S}$: \begin{gather} \label{eq:weights} a_{ij} = \sigma \Bigg( W_5 \Big( \frac{x_i^{'\mathrm{Q}}}{\norm{x_i^{'\mathrm{Q}}}} \circ \frac{x_j^{'\mathrm{S}}}{\norm{x_j^{'\mathrm{S}}}} \Big) + b_5 \Bigg) \end{gather} where $W_5 \in \mathbb{R}^{1 \times h}$ and $b_5 \in \mathbb{R}$ are learned weights and biases, and $\sigma$ the logistic function that introduces a non-linearity and bounds the weights to $(0,1)$. The formulation is similar to a cosine similarity with learned weights on the feature dimensions. Note that the weights are computed using the initial embedding of the node features (pre-GRU).
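The matching weights of Eq.~\ref{eq:weights}, together with the weighted pooling into answer scores that follows them in the model, can be sketched in numpy. All learned parameters ($W_5$--$W_7$, $b_5$--$b_7$) and node features are random stand-ins here, and the sizes are arbitrary.

```python
import numpy as np

# Sketch of the learned cosine-similarity attention and weighted pooling.
rng = np.random.default_rng(0)
H, nq, ns, n_ans = 8, 4, 6, 3
xq = rng.standard_normal((nq, H))    # question node features (initial, pre-GRU)
xs = rng.standard_normal((ns, H))    # scene node features (initial, pre-GRU)
xq2 = rng.standard_normal((nq, H))   # x'' features after the recurrent updates
xs2 = rng.standard_normal((ns, H))
W5, b5 = rng.standard_normal(H), 0.0
W6, b6 = 0.1 * rng.standard_normal((H, 2 * H)), np.zeros(H)
W7, b7 = 0.1 * rng.standard_normal((n_ans, H)), np.zeros(n_ans)

sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

# a_ij = sigma( W5 (x_i/|x_i| o x_j/|x_j|) + b5 ): a learned cosine similarity
qn = xq / np.linalg.norm(xq, axis=1, keepdims=True)
sn = xs / np.linalg.norm(xs, axis=1, keepdims=True)
a = sigmoid(np.einsum('h,ih,jh->ij', W5, qn, sn) + b5)         # (nq, ns)

# y_ij = a_ij [x''q_i ; x''s_j]; then sum over scene, then question nodes
pairs = np.concatenate([np.repeat(xq2[:, None, :], ns, axis=1),
                        np.repeat(xs2[None, :, :], nq, axis=0)], axis=2)
y = a[:, :, None] * pairs                                      # (nq, ns, 2H)
y1 = np.maximum(0.0, y.sum(axis=1) @ W6.T + b6)                # ReLU projection
logits = y1.sum(axis=0) @ W7.T + b7                            # answer scores
scores = np.exp(logits) / np.exp(logits).sum()                 # softmax f'
```

The two summations reduce the variable numbers of words and objects to a fixed-size vector, which is why the model accommodates questions and scenes of any size.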
We apply the scalar weights $a_{ij}$ to the corresponding pairwise combinations of question and scene features, thereby focusing and giving more importance to the matched pairs~(Eq.\,\ref{eq:out1}). We sum the weighted features over the scene elements~(Eq.\,\ref{eq:out2}) then over the question elements~(Eq.\,\ref{eq:out3}), interleaving the sums with affine projections and non-linearities to obtain a final prediction: \begin{flalign} & y_{ij} ~=~ a_{ij} \, . \, [ x^{''\mathrm{Q}}_i \,;\, x^{''\mathrm{S}}_j ] \label{eq:out1} \\ & y'_i \, ~=~ f \big( W_6 \textstyle\sum_j^{N^\mathrm{S}} y_{ij} + b_6 \big) \label{eq:out2} \\ & y'' ~=~ f' \big( W_7 \textstyle \sum_i^{N^\mathrm{Q}} y'_i + b_7 \big) \label{eq:out3} \end{flalign} with $W_6$, $W_7$, $b_6$, $b_7$ learned weights and biases, $f$ a ReLU, and $f'$ a softmax or a logistic function (see experiments, Section~\ref{sec:pr}). The summations over the scene elements and question elements are a form of pooling that brings the variable number of features (due to the variable number of words and objects in the input) to a fixed-size output. The final output vector $y'' \in \mathbb{R}^T$ contains scores for the possible answers, and has a number of dimensions equal to 2 for the binary questions of the ``balanced'' dataset, or to the number of all candidate answers in the ``abstract scenes'' dataset. The candidate answers are those appearing at least $5$ times in the training set (see supplementary material for details). \section{Evaluation} \label{sec:evaluation} \paragraph{Datasets} Our evaluation uses two datasets: the original ``abstract scenes'' from Antol \emph{et al.} \cite{antol2015vqa} and its ``balanced'' extension from \cite{zhang2015balanced}. They both contain scenes created by humans in a drag-and-drop interface for arranging clip art objects and figures.
The original dataset contains $20k/10k/20k$ scenes (for training$/$validation$/$test respectively) and $60k/30k/60k$ questions, each with 10 human-provided ground-truth answers. Questions are categorized based on the type of the correct answer into \textit{yes/no}, \textit{number}, and \textit{other}, but the same method is used for all categories, the type of the test questions being unknown. The ``balanced'' version of the dataset contains only the subset of questions which have binary (yes/no) answers and, in addition, complementary scenes created to elicit the opposite answer to each question. This is significant because guessing the modal answer from the training set will then succeed only about half of the time (slightly more than $50\%$ in practice because of disagreement between annotators) and give $0\%$ accuracy over complementary pairs. This contrasts with other VQA datasets where blind guessing can be very effective. The pairs of complementary scenes also typically differ by only one or two objects being displaced, removed, or slightly modified (see examples in Fig.~\ref{fig:qualitative}, bottom rows). This makes the questions very challenging by requiring the model to take into account subtle details of the scenes. \paragraph{Metrics} The main metric is the average ``VQA score'' \cite{antol2015vqa}, which is a soft accuracy that takes into account variability of ground truth answers from multiple human annotators. Let us refer to a test question by an index $q=1..M$, and to each possible answer in the output vocabulary by an index $a$. The \textbf{ground truth score} $s(q,a)=1.0$ if the answer $a$ was provided by $m \geq 3$ annotators. Otherwise, $s(q,a)=m/3$\footnote{Ground truth scores are also averaged in a 10--\textit{choose}--9 manner~\cite{antol2015vqa}.}.
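The soft score $s(q,a)$ just defined amounts to $\min(m/3, 1)$ for $m$ matching annotators, and averaging it over the top prediction per question gives the overall accuracy. A minimal sketch (the 10--\textit{choose}--9 averaging of the official evaluation is omitted, and `soft_accuracy` is our own illustrative name):

```python
from collections import Counter

def vqa_score(answer, human_answers):
    """Soft ground-truth score s(q, a): 1.0 if at least 3 of the (typically
    10) annotators gave this answer, else m/3 for m matching annotators.
    (The 10-choose-9 averaging of the official evaluation is omitted here.)"""
    m = Counter(human_answers)[answer]
    return min(m / 3.0, 1.0)

def soft_accuracy(predictions, ground_truths):
    """Average ground-truth score of the top predicted answer per question."""
    scores = [vqa_score(p, g) for p, g in zip(predictions, ground_truths)]
    return sum(scores) / len(scores)

gts = [["yes"] * 7 + ["no"] * 3,    # 7 matches: score("yes") = 1.0
       ["no"] * 9 + ["yes"]]        # 1 match:  score("yes") = 1/3
acc = soft_accuracy(["yes", "yes"], gts)   # (1.0 + 1/3) / 2
```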
Our method outputs a \textbf{predicted score} $\hat{s}(q,a)$ for each question and answer ($y''$ in Eq.\,\ref{eq:out3}) and the overall accuracy is the average ground truth score of the highest prediction per question, \emph{i.e.}~ $\frac{1}{M} \sum_q^{M} s(q,\argmax_a \hat{s}(q,a) )$. It has been argued that the ``balanced'' dataset can better evaluate a method's level of visual understanding than other datasets, because it is less susceptible to the use of language priors and dataset regularities (\emph{i.e.}~ guessing from the question \cite{zhang2015balanced}). Our initial experiments confirmed that the performance of various algorithms on the balanced dataset was indeed better separated, and we used it for our ablative analysis. We also focus on the hardest evaluation setting \cite{zhang2015balanced}, which measures the accuracy over \emph{pairs} of complementary scenes. This is the only metric in which blind models (guessing from the question) obtain null accuracy. This setting also does not consider pairs of test scenes deemed ambiguous because of disagreement between annotators. Each test scene is still evaluated independently, however, so the model cannot increase its performance by forcing opposite answers to pairs of questions. The metric is then a standard ``hard'' accuracy, \emph{i.e.}~ all ground truth scores $s(i,j)\in\{0,1\}$. Please refer to the supplementary material for additional details. \begin{figure*}[t] \begin{center} \includegraphics[width=0.29\linewidth]{pr-abstractmc.pdf}\hspace{2.0em} \includegraphics[width=0.29\linewidth]{pr-abstractoe.pdf}\hspace{2.0em} \includegraphics[width=0.29\linewidth]{pr-balanced.pdf} \end{center} \caption{Precision/recall on the ``abstract scenes'' (\textbf{left}: multiple choice, \textbf{middle}: open-ended) and ``balanced'' datasets (\textbf{right}).
The scores assigned by the model to predicted answers are a reliable measure of its certainty: a strict threshold (low recall) filters out incorrect answers and produces a very high precision. On the ``abstract scenes'' dataset (left and middle), a slight advantage is brought by training for soft target scores that capture ambiguities in the human-provided ground truth.} \label{fig:pr} \end{figure*} \subsection{Evaluation on the ``balanced'' dataset} We compare our method against the three models proposed in \cite{zhang2015balanced}. They all use an ensemble of models exploiting either an LSTM for processing the question, or an elaborate set of hand-designed rules to identify two objects as the focus of the question. The visual features in the three models are respectively empty (blind model), global (scene-wide), or focused on the two objects identified from the question. These models are specifically designed for binary questions, whereas ours is generally applicable. Nevertheless, we obtain significantly better accuracy than all three (Table~\ref{tab:balanced}). Differences in performance are mostly visible in the ``pairs'' setting, which we believe is more reliable as it discards ambiguous test questions on which human annotators disagreed. During training, we take care to keep pairs of complementary scenes together when forming mini-batches. This has a significant positive effect on the stability of the optimization. Interestingly, we did not notice any tendency toward overfitting when training on balanced scenes. We hypothesize that the pairs of complementary scenes have a strong regularizing effect that forces the learned model to focus on relevant details of the scenes. In Fig.~\ref{fig:qualitative} (and in the supplementary material), we visualize the matching weights between question words and scene objects (Eq.\,\ref{eq:weights}).
As expected, these tend to be larger between semantically related elements (\emph{e.g.}~ daytime$\leftrightarrow$sun, dog$\leftrightarrow$puppy, boy$\leftrightarrow$human), although some are more difficult to interpret. Our best performance of about $39\%$ is still low in absolute terms, which is understandable given the wide range of concepts involved in the questions (see examples in Fig.~\ref{fig:qualitative} and in the supplementary material). It seems unlikely that these concepts could be learned from training question/answers alone, and we suggest that any further significant improvement in performance will require external sources of information at training and/or test time. \paragraph{Ablative evaluation} We evaluated variants of our model to measure the impact of various design choices (see numbered rows in Table~\ref{tab:balanced}). On the \textbf{question side}, we evaluate (row 1) our graph approach without syntactic parsing, building question graphs with only two types of edges, \textit{previous} and \textit{next}, linking consecutive nodes. This shows the advantage of using the graph method together with syntactic parsing. Optimizing the word embeddings from scratch (row 2) rather than initializing them from pretrained Glove vectors \cite{pennington2014glove} produces a significant drop in performance. On the \textbf{scene side}, we removed the edge features (row 3) by setting $e_{ij}^\mathrm{S}=1$. This confirms that the model makes use of the spatial relations between objects encoded by the edges of the graph. In rows 4--6, we disabled the recurrent graph processing ($x''_i=x'_i$) for either the question, the scene, or both. We finally tested the model with uniform matching weights ($a_{ij}=1$, row 10). As expected, it performed poorly.
Our weights act similarly to the attention mechanisms in other models (\emph{e.g.}~\cite{zhu2015visual7w,xu2015ask,Chen2015ABC,Jiang2015compositional,yang2015stacked}) and our observations confirm that such mechanisms are crucial for good performance. \renewcommand{\arraystretch}{1.2} \begin{table} \vspace{-4pt} \small \renewcommand{\tabcolsep}{0.1mm} \renewcommand{\arraystretch}{1.3} \begin{center} \begin{tabular*}{\linewidth}{@{}>{}l@{\extracolsep{\fill}}*{2}{c}@{}} ~ & Avg. score & Avg. accuracy \\ Method & over scenes & over pairs \\ \hline Zhang \emph{et al.} \cite{zhang2015balanced} blind & 63.33 & 0.00 \\ ~~with global image features & 71.03 & 23.13 \\ ~~with attention-based image features & 74.65 & 34.73 \\ \hline \textbf{Graph VQA} (full model) & \textbf{74.94} & \textbf{39.1} \\ \end{tabular*} \begin{tabular*}{\linewidth}{@{}>{}l@{\extracolsep{\fill}}*{2}{c}@{}} \scriptsize{(1)}\small~~Question: no parsing (graph with previous/next edges) & 37.9\\ \scriptsize{(2)}\small~~Question: word embedding not pretrained & 33.8\\ \scriptsize{(3)}\small~~Scene: no edge features ($e_{ij}^{'\mathrm{S}}$=$1$) & 36.8\\ \scriptsize{(4)}\small~~Graph processing: disabled for question ($x^{''\mathrm{Q}}_i$=$x^{'\mathrm{Q}}_i$) & 37.1\\ \scriptsize{(5)}\small~~Graph processing: disabled for scene ($x^{''\mathrm{S}}_i$=$x^{'\mathrm{S}}_i$) & 37.0\\ \scriptsize{(6)}\small~~Graph processing: disabled for question/scene & 35.7\\ \scriptsize{(7)}\small~~Graph processing: only 1 iteration for question ($T^\mathrm{Q}$=1) & 39.0\\ \scriptsize{(8)}\small~~Graph processing: only 1 iteration for scene ($T^\mathrm{S}$=1) & 37.9\\ \scriptsize{(9)}\small~~Graph processing: only 1 iteration for question/scene & 39.1\\ \scriptsize{(10)}\small~~Uniform matching weights ($a_{ij}$=$1$) & 24.4\\ \hline \end{tabular*} \end{center} \caption{Results on the test set of the ``balanced'' dataset \cite{zhang2015balanced} (in percent, using balanced versions of both training and test sets).
Numbered rows report accuracy over pairs of complementary scenes for ablated versions of our method.} \label{tab:balanced} \end{table} \begin{figure}[t] \begin{center} \includegraphics[width=0.95\linewidth]{reducedTrainingSet.pdf} \end{center} \vspace{-7pt} \caption{Impact of the amount of training data on performance (accuracy over pairs on the ``balanced'' dataset). Language preprocessing always improves generalization: pre-parsing and pretrained word embeddings both have a positive impact individually, and their effects are complementary to each other.} \label{fig:limitedTrainingData} \vspace{-3pt} \end{figure} \paragraph{Precision/recall} \label{sec:pr} We are interested in assessing the confidence of our model in its predicted answers. Most existing VQA methods treat the answering as a hard classification over candidate answers, and almost all reported results consist of a single accuracy metric. To provide more insight, we produce precision/recall curves for predicted answers. A precision/recall point $(p,r)$ is obtained by setting a threshold $t$ on predicted scores such that \begin{flalign} & p = \frac{ \sum_{i,j} \mathbbm{1}\big( \hat{s}(i,j) \hspace{-0.4ex}>\hspace{-0.4ex} t \big) \; s(i,j) }{\sum_{i,j} \mathbbm{1}( \hat{s}(i,j) \hspace{-0.4ex}>\hspace{-0.4ex} t) } \\ & r = \frac{ \sum_{i,j} \mathbbm{1}\big( \hat{s}(i,j) \hspace{-0.4ex}>\hspace{-0.4ex} t \big) \; s(i,j) }{\sum_{i,j} s(i,j)} \end{flalign} where $\mathbbm{1}(\cdot)$ is the $0/1$ indicator function. We plot precision/recall curves in Fig.~\ref{fig:pr} for both datasets\footnote{The ``abstract scenes'' test set is not available publicly, and precision/recall can only be provided on its validation set.}. The predicted score proves to be a reliable indicator of the model's confidence: a strict threshold (low recall) achieves near-perfect precision (Fig.~\ref{fig:pr}, left and middle) by filtering out harder and/or ambiguous test cases.
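A sketch of how one precision/recall point follows from the two equations above. The flattened lists of per-(question, answer) scores and the convention returned for an empty selection are our own assumptions.

```python
def precision_recall_at(t, pred_scores, gt_scores):
    """One precision/recall point for threshold t, following the two equations
    above. pred_scores[k] and gt_scores[k] are the predicted score s_hat and
    the ground-truth score s of the k-th (question, answer) entry."""
    # Keep the ground-truth scores of entries whose prediction exceeds t.
    selected = [s for s_hat, s in zip(pred_scores, gt_scores) if s_hat > t]
    if not selected:
        return 1.0, 0.0  # our convention for an empty selection
    total_gt = sum(gt_scores)
    precision = sum(selected) / len(selected)
    recall = sum(selected) / total_gt if total_gt > 0 else 0.0
    return precision, recall
```

Sweeping $t$ from low to high traces the curve from high recall to high precision.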
We compare models trained with either a softmax or a sigmoid as the final non-linearity (Eq.\,\ref{eq:out3}). The common practice is to train the softmax for a hard classification objective, using a cross-entropy loss and the answer of highest ground truth score as the target. In an attempt to make better use of the multiple human-provided answers, we propose to use the \emph{soft} ground truth scores as the target with a logarithmic loss. This shows an advantage on the ``abstract scenes'' dataset (Fig.~\ref{fig:pr}, left and middle). In that dataset, the soft target scores reflect frequent ambiguities in the questions and the scenes, as well as cases where synonyms constitute multiple acceptable answers. In those cases, we can avoid the potential confusion induced by a hard classification for one specific answer. The ``balanced'' dataset, by nature, contains almost no such ambiguities, and there is no significant difference between the two training objectives (Fig.~\ref{fig:pr}, right). \paragraph{Effect of training set size} Our motivation for introducing language parsing and pretrained word embeddings is to better generalize the concepts learned from the limited training examples. Words representing semantically close concepts ideally get assigned close word embeddings. Similarly, paraphrases of similar questions should produce parse graphs with more similarities than a simple concatenation of words would reveal (as in the input to traditional LSTMs). We trained our model with limited subsets of the training data (see Fig.~\ref{fig:limitedTrainingData}). Unsurprisingly, the performance grows steadily with the amount of training data, which suggests that larger datasets would improve performance. In our opinion, however, it seems unlikely that sufficient data, covering all possible concepts, could be collected in the form of question/answer examples. More data can, however, be brought in with other sources of information and supervision.
Our use of parsing and word embeddings is a small step in that direction. Both techniques clearly improve generalization (Fig.~\ref{fig:limitedTrainingData}). The effect may be particularly visible in our case because of the relatively small number of training examples (about $20$k questions in the ``balanced'' dataset). It is unclear whether huge VQA datasets could ultimately negate this advantage. Future experiments on larger datasets (\emph{e.g.}~ \cite{krishnavisualgenome}) may answer this question. \subsection{Evaluation on the ``abstract scenes'' dataset} We report our results on the original ``abstract scenes'' dataset in Table~\ref{tab:benchmark}. The evaluation is performed on an automated server that does not allow for an extensive ablative analysis. Anecdotally, performance on the validation set corroborates all findings presented above, in particular the strong benefit of pre-parsing, pretrained word embeddings, and graph processing with a GRU. At the time of our submission, our method occupies the top place on the leaderboard in both the open-ended and multiple-choice settings. The advantage over existing methods is most pronounced on the binary and the counting questions. Refer to Fig.~\ref{fig:qualitative} and to the supplementary material for visualizations of the results.
\begin{table*} \small \renewcommand{\tabcolsep}{0.1mm} \begin{center} \begin{tabularx}{\textwidth}{l c *8{>{\Centering\arraybackslash}X}} ~ & \multicolumn{4}{l}{\st{~~~~~~~~~~~~~~~~~~~}~~~~Multiple choice~~~~\st{~~~~~~~~~~~~~~~~~}} & \multicolumn{4}{r}{\st{~~~~~~~~~~~~~~~~~~~~~~~~~}~~~~Open-ended~~~~\st{~~~~~~~~~~~~~~~~~~~~~~~~}} \\ Method & Overall & Yes/no & Other & Number~~~ & ~~~Overall & Yes/no & Other & Number \\ \hline LSTM blind \cite{antol2015vqa} & 61.41 & 76.90 & 49.19 & 49.65 & 57.19 & 76.88 & 38.79 & 49.55 \\ LSTM with global image features \cite{antol2015vqa}~~~~~~~& 69.21 & 77.46 & 66.65 & 52.90 & 65.02 & 77.45 & 56.41 & 52.54 \\ Zhang \emph{et al.} \cite{zhang2015balanced} (yes/no only) & 35.25 & 79.14 & --- & --- & 35.25 & 79.14 & --- & --- \\ Multimodal residual learning \cite{kim2016multimodal} & 67.99 & 79.08 & 61.99 & 52.57 & 62.56 & 79.10 & 48.90 & 51.60 \\ U. Tokyo MIL (ensemble) \cite{saito2016dualnet,vqaleaderboard} & 71.18 & 79.59 & 67.93 & 56.19 & 69.73 & 80.70 & \textbf{62.08} & 58.82 \\ \hline \textbf{Graph VQA} (full model) & \textbf{74.37} & \textbf{79.74} & \textbf{68.31} & \textbf{74.97} & \textbf{70.42} & \textbf{81.26} & 56.28 & \textbf{76.47} \\ \hline \end{tabularx} \end{center} \vspace{-3pt} \caption{Results on the test set of the ``abstract scenes'' dataset (average scores in percents).} \label{tab:benchmark} \vspace{25pt} \end{table*} \begin{figure*}[t] \begin{tabularx}{\textwidth}{*{4}{>{\Centering\arraybackslash}X}} \renewcommand{\tabcolsep}{0.1mm} \renewcommand{\arraystretch}{1.0} \footnotesize \resultsAttentionAbstractSmall{20020}{20046}\\ \resultsAttentionBalancedSmall{8057-11746}{8058-900117462}\\ \resultsAttentionBalancedSmall{7166-837}{7167-900008371} \end{tabularx} \vspace{-9pt} \caption{Qualitative results on the ``abstract scenes'' dataset (top row) and on ``balanced'' pairs (middle and bottom row). We show the input scene, the question, the predicted answer, and the correct answer when the prediction is erroneous. 
We also visualize the matrices of matching weights (Eq.\,\ref{eq:weights}; brighter corresponds to higher values) between question words (vertically) and scene objects (horizontally). The matching weights are also visualized over the objects in the scene, after summation over words, giving an indication of their estimated relevance. The ground truth object labels are for reference only, and are not used for training or inference.} \label{fig:qualitative} \vspace{-9pt} \end{figure*} \section{Conclusions} We presented a deep neural network for visual question answering that processes graph-structured representations of scenes and questions. This enables leveraging existing natural language processing tools, in particular pretrained word embeddings and syntactic parsing. The latter showed a significant advantage over a traditional sequential processing of the questions, \emph{e.g.}~ with LSTMs. In our opinion, VQA systems are unlikely to learn everything from question/answer examples alone. We believe that any significant improvement in performance will require additional sources of information and supervision. Our explicit processing of the language part is a small step in that direction. It has clearly been shown to improve generalization without resting entirely on VQA-specific annotations. We have so far applied our method to datasets of clip art scenes. Its direct extension to real images will be addressed in future work, by replacing nodes in the input scene graph with proposals from pretrained object detectors. \section*{Graph-Structured Representations for\\Visual Question Answering} \section*{Supplementary material} \section{Implementation} \label{implDetails} \noindent We provide below practical details of our implementation of the proposed method. \begin{enumerate}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1.5ex,label={\tiny$\bullet$},leftmargin=2.0ex] \item Size of vector embeddings of node features, edge features, and all hidden states within the network: $H$\sheq300.
Note that smaller values such as $H$\sheq200 also give very good results (not reported in this paper) at a fraction of the training time. \item Number of recurrent iterations to update the graph node representations: $T^\mathrm{Q}$=$T^\mathrm{S}$\sheq4. Anecdotally, we observed that processing the scene graph benefits from more iterations than the question graph, for which performance nearly saturates with 2 or more iterations. As reported in the ablative evaluation (Table~\ref{tab:balanced}), the use of at least a single iteration has a stronger influence than its exact number. \item All weights except word embeddings are initialized randomly following \cite{glorot2010understanding}. \item Word embeddings are initialized with Glove vectors \cite{pennington2014glove} of dimension 300, available publicly \cite{gloveWebsite} and trained on 6 billion words from Wikipedia and Gigaword. The word embeddings are fine-tuned with a learning rate of $1/10$ of that of the other weights. \item Dropout with ratio 0.3 is applied between the weighted sum over scene elements (Eq.\,\ref{eq:out2}) and the final classifier (Eq.\,\ref{eq:out3}). \item Weights are optimized with Adadelta \cite{zeiler2012adadelta} with mini-batches of 128 questions. We run the optimization until convergence (typically 20 epochs on the ``abstract scenes'' dataset, 100 epochs on the ``balanced'' dataset) and report performance on the test set from the epoch with the highest performance on the validation set (measured by VQA score on the ``abstract scenes'' dataset, and by accuracy over pairs on the ``balanced'' dataset). \item The edges between word nodes in the input question graph are labeled with the dependency labels identified by the Stanford parser \cite{demarneffe2008parser,stanfordParserWebsite}. These dependencies are directed, and we supplement all of them with their symmetric counterparts, tagged with a different set of labels. The output of the parser includes the propagation of conjunct dependencies (its default setting).
This yields quite densely connected graphs. \item The input features of the object nodes are those directly available in the datasets. They represent: the object category (human, animal, small or large object) as a one-hot vector, the object type (table, sun, dog, window, ...) as a one-hot vector, the expression/pose/type (various depictions being possible for each object type) as a one-hot vector, and 10 scalar values describing the pose of human figures (the X/Y position of arms, legs, and head relative to the torso). Altogether, they form a feature vector of dimension 159. The edge features between objects represent: the signed difference in their X/Y position, the inverse of their absolute difference in X/Y position, and their relative position on depth planes as +1 if closer (potentially occluding the other), -1 otherwise. \item All input features are normalized for zero mean and unit variance. \item When training for the ``balanced'' dataset, care is taken to keep each pair of complementary scenes in the same mini-batch when shuffling training instances. This has a noticeable effect on the stability of the optimization. \item In the open-ended setting, the output space is made of all answers that appear at least 5 times in the training set. These correspond to 623 possible answers, which cover 96\% of the training questions. \item Our model was implemented in Matlab from scratch. Training takes on the order of 5 to 10 hours on one CPU, depending on the dataset and on the size $H$ of the internal representations. \end{enumerate} \section{Additional details} \noindent\textbf{\textit{Why do we choose to focus on abstract scenes? Does this method extend to real images?}} \noindent The balanced dataset of abstract scenes was the only one allowing an evaluation free from dataset biases. Abstract scenes also enabled removing confounding factors (visual recognition).
It is not unreasonable to view the scene descriptions (provided with abstract scenes) as the output of a ``perfect'' vision system. The proposed model could be extended to real images by building graphs of the images in which scene nodes are candidates from an object detection algorithm. \vspace{5pt} \noindent\textbf{\textit{The multiple-choice (M.C.) setting should be easier than open-ended (O.E.). Therefore, why is the accuracy not better for binary and number questions in the M.C. setting (rather than O.E.)?}} \noindent This intuition is incorrect in practice. The wording of binary and number questions (``\textit{How many ...}'') can easily narrow down the set of possible answers, whether evaluated in a M.C. or O.E. setting. One thus cannot qualify one as strictly easier than the other. Other factors can then influence the performance either way. Note also that, for example, most choices of number questions are not numbers. \vspace{5pt} \noindent\textbf{\textit{In Table 1, why is there a large improvement of the metric over balanced pairs of scenes, but not of the metric over individual scenes?}} \noindent The metric over pairs is much harder to satisfy and should be regarded as more meaningful. The other metric (over scenes) essentially saturates at the same point for the two methods. \vspace{5pt} \noindent\textbf{\textit{How do precision/recall curves help to better understand a model, compared to a simple accuracy number?}} \noindent A P/R curve shows the confidence of the model in its answers. A practical VQA system will need to provide an indication of certainty, including the possibility of answering ``I don't know''. Reporting P/R is a step in that direction. P/R curves also contain more information and can show differences between methods (\emph{e.g.}~ Fig.3 left) that may otherwise not be appreciable through an aggregate metric.
\vspace{5pt} \noindent\textbf{\textit{Why is attention computed with pre-GRU node features?}} \noindent This performed slightly better than the alternative. The intuition is that the identity of each node is sufficient, and the context (transferred by the GRU from neighbouring nodes) is probably less useful to compute attention. \vspace{5pt} \noindent\textbf{\textit{Why are the largest performance gains obtained with ``number'' questions?}} \noindent We could not draw definitive conclusions. Competing methods seem to rely on dataset biases (predominance of \textit{2} and \textit{3} as answers). Ours was developed (cross-validated) for the balanced dataset, which requires \emph{not} relying on such biases, and may simply be better at utilizing the input rather than biases. This may in turn explain the minimal gains on other questions, which could benefit from using biases (because of a larger pool of reasonable answers). \section{Additional results} We provide below additional example results in the same format as in Fig.~\ref{fig:qualitative}.
\clearpage \subsection{Additional results: abstract scenes dataset} \vspace{10mm} \resultsAttentionAbstractPageSix{26817}{27250}{27555}{27578}{27709}{27778} \clearpage \resultsAttentionAbstractPage{29551}{29627}{29685}{29758}{29804}{29819}{29865}{29907} \clearpage \resultsAttentionAbstractPage{22937}{23128}{23273}{23301}{23328}{23359}{23429}{23521} \clearpage \clearpage \subsection{Additional results: balanced dataset} \vspace{10mm} \resultsAttentionBalancedPageSix{7269-14758}{7270-900147580}{7309-18242}{7310-900182422}{7586-16901}{7587-900169010} \clearpage \resultsAttentionBalancedPage{8057-11746}{8058-900117462}{8458-5536}{8459-900055360}{8466-14578}{8467-900145780}{9149-3819}{9150-900038192} \clearpage \resultsAttentionBalancedPage{14965-5372}{14966-900053720}{14967-10677}{14968-900106771}{15752-7867}{15753-900078671}{16088-4845}{16089-900048451} \section{Abstract scenes dataset} \vspace{10mm} \resultsAttentionAbstractPageSix{26817}{27250}{27555}{27578}{27709}{27778}\clearpage \resultsAttentionAbstractPage{20020}{20046}{20200}{20220}{20331}{20427}{20474}{20781}\clearpage \resultsAttentionAbstractPage{21098}{21322}{21326}{21358}{21392}{21547}{21553}{21768}\clearpage \resultsAttentionAbstractPage{21900}{22056}{22060}{22088}{22264}{22479}{22746}{22769}\clearpage \resultsAttentionAbstractPage{22937}{23128}{23273}{23301}{23328}{23359}{23429}{23521}\clearpage \resultsAttentionAbstractPage{23911}{23966}{24278}{24374}{24453}{24508}{24775}{24915}\clearpage \resultsAttentionAbstractPage{25081}{25147}{25248}{25300}{25302}{25345}{25459}{25506}\clearpage \resultsAttentionAbstractPage{25841}{25883}{26002}{26132}{26216}{26447}{26616}{26722}\clearpage \resultsAttentionAbstractPage{26817}{27250}{27555}{27578}{27709}{27778}{28492}{29403}\clearpage \resultsAttentionAbstractPage{29551}{29627}{29685}{29758}{29804}{29819}{29865}{29907}\clearpage \section{Balanced dataset} \vspace{10mm} 
\resultsAttentionBalancedPageSix{10873-13131}{10874-900131310}{1123-12079}{1124-900120790}{11323-10510}{11324-900105102}\clearpage \resultsAttentionBalancedPage{11657-4819}{11658-900048190}{11958-8858}{11959-900088580}{11983-4002}{11984-900040022}{12225-1930}{12226-900019300}\clearpage \resultsAttentionBalancedPage{12459-16227}{12460-900162272}{12532-16383}{12533-900163832}{12552-8490}{12553-900084901}{12599-3843}{12600-900038431}\clearpage \resultsAttentionBalancedPage{1278-5264}{1279-900052642}{13058-3390}{13059-900033900}{13427-13615}{13428-900136151}{13557-12188}{13558-900121882}\clearpage \resultsAttentionBalancedPage{14434-9530}{14435-900095302}{14504-17659}{14505-900176591}{148-8470}{149-900084701}{14901-15544}{14902-900155441}\clearpage \resultsAttentionBalancedPage{14965-5372}{14966-900053720}{14967-10677}{14968-900106771}{15752-7867}{15753-900078671}{16088-4845}{16089-900048451}\clearpage \resultsAttentionBalancedPage{16192-5870}{16193-900058702}{16205-17429}{16206-900174291}{1623-6519}{1624-900065190}{16619-3622}{16620-900036220}\clearpage \resultsAttentionBalancedPage{16745-2820}{16746-900028202}{16831-14053}{16832-900140531}{16840-11055}{16841-900110550}{16936-17607}{16937-900176072}\clearpage \resultsAttentionBalancedPage{17333-8625}{17334-900086252}{17804-12203}{17805-900122032}{18066-5025}{18067-900050252}{18173-7759}{18174-900077591}\clearpage \resultsAttentionBalancedPage{18324-6762}{18325-900067621}{18399-13857}{18400-900138571}{18417-1879}{18418-900018792}{18773-505}{18774-900005050}\clearpage \resultsAttentionBalancedPage{19527-9248}{19528-900092482}{20079-13794}{20080-900137942}{20447-15749}{20448-900157491}{20642-8769}{20643-900087690}\clearpage \resultsAttentionBalancedPage{20814-9405}{20815-900094050}{20882-6603}{20883-900066030}{2099-11890}{2100-900118900}{21111-13283}{21112-900132831}\clearpage 
\resultsAttentionBalancedPage{21382-14671}{21383-900146711}{21788-7343}{21789-900073431}{2919-19778}{2920-900197782}{2930-2634}{2931-900026342}\clearpage \resultsAttentionBalancedPage{3148-11816}{3149-900118161}{3175-18362}{3176-900183621}{3351-87}{3352-900000872}{343-10451}{344-900104510}\clearpage \resultsAttentionBalancedPage{3616-11169}{3617-900111690}{3821-16937}{3822-900169371}{3825-11652}{3826-900116520}{3897-10133}{3898-900101331}\clearpage \resultsAttentionBalancedPage{4352-18589}{4353-900185892}{4393-18336}{4394-900183361}{4688-15266}{4689-900152662}{5176-7412}{5177-900074122}\clearpage \resultsAttentionBalancedPage{5277-13578}{5278-900135780}{5522-8679}{5523-900086791}{5597-6127}{5598-900061271}{6000-8226}{6001-900082261}\clearpage \resultsAttentionBalancedPage{629-6464}{630-900064642}{6415-206}{6416-900002061}{6442-6382}{6443-900063820}{6495-3730}{6496-900037302}\clearpage \resultsAttentionBalancedPage{6644-12328}{6645-900123282}{6891-10878}{6892-900108780}{7108-17397}{7109-900173971}{7166-837}{7167-900008371}\clearpage \resultsAttentionBalancedPage{7269-14758}{7270-900147580}{7309-18242}{7310-900182422}{7586-16901}{7587-900169010}{77-17312}{78-900173122}\clearpage \resultsAttentionBalancedPage{8057-11746}{8058-900117462}{8458-5536}{8459-900055360}{8466-14578}{8467-900145780}{9149-3819}{9150-900038192}\clearpage \end{document}
\section{\label{sec:level1}Introduction} Amorphous packings of particles occur in many contexts, ranging from glassy polymers to colloidal gels and geological sediments. These materials are well known to have complex deformation behavior. For example, their mechanical response is often strain history dependent~\cite{1977_lade,2002_viasnoff,2009_lee,tighe2014shear}. The amorphous nature of the microstructure of these systems makes it notoriously challenging to understand the origin of such strain history dependence and mechanical behavior in general~\cite{berthier2011dynamical}. Notably, these ``granular'' material systems are characterized by a length scale proportional to the particle size, which makes their theoretical description using classical continuum physics concepts particularly challenging. For accurate descriptions of amorphous packing mechanics, the traditional views of continuum mechanics desperately need updating from a microscopic point of view. One route towards a more general continuum description considers material point rotations and has its origin in the work of the Cosserat brothers~\cite{cosserat1909theory}. Indeed, it has long been recognized that the rotational motion of particles in thermally driven amorphous packings can be linked to slowdown effects and glassy dynamics~\cite{1984_schwartz, 1994_Stillinger, 2012_Edmond}. Nevertheless, much progress is needed to include (particle) rotational degrees of freedom in continuum mechanics approaches~\cite{2003_matsushima, poorsolhjouy19jmps}. Here we show experimentally that even in a completely athermal amorphous packing, rotational degrees of freedom are directly coupled to the mechanical response, both at the particle level and via mesoscopic spatial anticorrelations in the rotation field. Our data suggest that particle rotations are an essential yet overlooked kinematical quantity in the study of dense amorphous packings.
In addition, the spatial autocorrelation analysis of rotations can reveal essential features in the materials science of a large variety of materials with intrinsic length scales. To study the role of rotational degrees of freedom, decoupling rotation from translation is challenging. In many circumstances the molecules, colloids, or grains involved are not spherically symmetric, hence their rotation also requires \emph{spatial displacement} of their neighbors, particularly in high-density, jammed granular materials. To probe the role of \emph{only} the rotational degrees of freedom in the strain dependence of amorphous packings, athermal packings of round particles are an optimal prototypical choice. Such particles can be designed to experience contact friction, which directly couples rotational degrees of freedom to displacements. In an athermal packing of frictional disks, shear is thus directly coupled to rotations even for circular particles, without necessarily requiring particle displacements. While the rotational dynamics of spherical particles in athermal packings has been probed via wave-propagation measurements~\cite{2011_merkel}, particle-level experimental evidence that links rotational degrees of freedom directly to mechanics in amorphous packings has so far not been obtained. The unique set of experimentally measured data analyzed in this work shows that the particles' micro-rotation dynamics are linked to both the packing density and the particle surface characteristics (or ``friction''), which directly mediate the tangential motion of contacting circular grain pairs. We will see that particle rotations display strain-induced diffusive behavior even at very small strain amplitudes. The diffusive behavior changes with particle packing density and friction coefficient in a manner consistent with previously observed packing properties~\cite{ren13prl, 2018_trimer_dong, 2020_wang}.
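Such strain-induced diffusive rotation is typically quantified by a mean-squared angular displacement as a function of strain-step lag. A minimal sketch of this analysis follows; the data layout (cumulative per-particle angles per frame) is our own assumption, not a description of the actual processing pipeline.

```python
def rotational_msd(theta_frames):
    """Mean-squared angular displacement of particle micro-rotations versus
    strain-step lag, a sketch of the diffusive-rotation analysis described
    in the text.
    theta_frames: list of frames, each a list of cumulative per-particle
    rotation angles theta_i (rad) after successive shear steps."""
    n_steps = len(theta_frames)
    n_particles = len(theta_frames[0])
    msd = []
    for lag in range(1, n_steps):
        # Average squared angular displacement over all time origins
        # and all particles, for this lag.
        sq = [(theta_frames[t + lag][i] - theta_frames[t][i]) ** 2
              for t in range(n_steps - lag)
              for i in range(n_particles)]
        msd.append(sum(sq) / len(sq))
    # For diffusive rotations, msd[lag-1] grows linearly with lag.
    return msd
```

A linear growth of this quantity with lag signals rotational diffusion, while quadratic growth would indicate ballistic (steadily rolling) rotation.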
Additionally, particle rotations display non-local correlations, as revealed by spatial autocorrelation measures~\cite{moran1950notes}, indicating that the non-local mechanical effects well known to exist in sheared glassy granular media and amorphous materials in general~\cite{hebraud1998modecoupling,2009_bocquet, 2013_henann} can be mediated by rotational dynamics. Our results show that rotational degrees of freedom are a crucial element to be considered in the quest to understand the flow behavior of amorphous materials. Granular material systems exhibit many non-standard physical phenomena, such as negative group velocity \cite{nejadsadeghi2020role,WEI2020105433}, frequency band gaps \cite{MOURAILLE2008498,boechler2011tunable,goncu2012exploiting,misra16cmt}, chirality~\cite{misra2020chiral} and load path dependency. The latter is the key physical phenomenon underlying our results: collections of discrete athermal particles have the remarkable property that they can become \emph{rigid} when assembled into certain arrangements. Referred to as granular media, these loose particle packings can resist shear or compression when a sufficient number of particles is present per unit volume, a feature commonly called \emph{jamming}. Even when not spatially confined, a packing of particles can enhance its rigidity when the assembly is subjected to shear strain. This shear-induced rigidification is known as \emph{shear jamming}. The peculiar strain dependence of granular media is reflected in the correspondence between rotational particle motion and collective packing mechanics. 
Indeed, some work has already hinted at the relevance of the rotational degrees of freedom of particles in loose particle assemblies, for both slow~\cite{misra1997measured,2003_matsushima, kuhn2004contact, ando2012grain} and fast granular flows~\cite{seto13prl,lin2015hydrodynamic, abhi_2020_roll_friction}, and recently also for the statistical mechanics of sheared packings~\cite{2012_ciamarra,sun2020frictioncontrolled}. \section{Rotation Kinematics} \begin{figure} \centering \includegraphics[width=\linewidth]{Fig_example.eps} \caption{(a) Example of a UV image taken to track particle orientation. Blue bars show the actual UV marks and white lines indicate the tracked orientation. Red circles mark the edges of the disks. (b - d) Examples of the photoelastic particles used for the different inter-particle friction coefficients $\mu_{l,m,h}$ respectively. (e) The microrotation of grains induced by the first shear step; particle locations shown in their initial configuration. Results obtained for $\gamma = 0.27$, $\mu_m$ with packing fraction $\phi$=0.816. (f) Evolution of the mean $\langle\theta_i^m\rangle$ and standard deviation $\sigma_m$ of the microrotation, and the rigid body rotation $\theta^R$, for five shear tests at a given density. $\theta^R$ grows linearly at a rate of 0.0013~rad per frame, as expected from the imposed strain.} \label{fig:example} \end{figure} Packings of disk-shaped particles at different packing fractions $\phi$ were subjected to quasi-static stepwise shear to observe the diffusive rotational dynamics of the particles. The packings were imaged after each strain step. For details of the experiments, see the Methods section. In our experiments we have used quasi-two-dimensional packings, as three-dimensional shear experiments have the propensity to form finite shear localization bands in which the particle rotations typically concentrate~\cite{cheng2019quantification,hall2010discrete,alshibli2006microscopic,ando2012grain}. 
In contrast, our two-dimensional geometry with an articulated base allows us to suppress shear localization entirely~\cite{ren13prl,2018_trimer_dong}. A fluorescent bar placed on each particle allows us to track the absolute orientation of every particle in the packing; see Fig.~\ref{fig:example}a. We use three different particle friction coefficients and refer to these as $\mu_l$, $\mu_m$ and $\mu_h$; examples of these particle types are shown in Fig.~\ref{fig:example}b-d. We probe the dynamics of the orientations obtained from image analysis of each frame. An example rotation field from a single shear step is shown in Fig.~\ref{fig:example}e for $\mu_m$ particles at packing fraction $\phi=0.816$. While the grain displacements tend to follow the imposed macro-scale deformation field, there exist fluctuations of the grain centroid displacements about the imposed linear macro-scale deformation field; these follow a complex process dictated not only by the nearest or contacting grains, but also by their neighbors and, by extension, an increasingly larger neighborhood of every grain~\cite{MISRA20171,nima2020MMS, 2020_wang}. \begin{figure} \includegraphics[width=\linewidth]{Fig_std.eps} \caption{\label{fig:Fig_fit15} Standard deviation of rotations as a function of imposed shear strain at three different packing fractions for the (a) $\mu_l$, (b) $\mu_m$, and (c) $\mu_h$ particles. All experiments are repeated five times. (d) and (e) show, respectively, the variation of the parameters $D$ and $n$ as a function of packing fraction for the sets $\mu_{l,m,h}$. The data highlighted in yellow are those used in panels a-c and Fig.~\ref{fig:var-global}.} \end{figure} Similarly, the grain rotations observed in Fig.~\ref{fig:example}e can be decomposed into two parts. One part of the rotation of each grain results from the imposed affine macro-scale deformation field, which contributes an overall rigid body rotation. 
The second contribution is due to micro-scale phenomena, such as individual grain spin, that we call \emph{microrotation}. Denoting the rotation of grain $i$ by $\theta_i$ and the macro-scale rigid body rotation by $\theta^R$, the microrotation of grain $i$ is obtained as $\theta_i^m = \theta_i-\theta^R$. The mean of the microrotations $\langle\theta_i^m\rangle$ and their standard deviation $\sigma_{m} = \langle(\theta_i^m)^2\rangle^{1/2}$ change as a function of strain, as observed in Fig.~\ref{fig:example}f for all five repeats done for $\mu_m$ at $\phi=0.816$. The initial frame is taken as the reference configuration to obtain the evolution of the grain-spin measures as a function of imposed strain. The rigid body rotation $\theta^R$ grows linearly with strain, as expected from the linearly increasing strain field imposed on the packing. Notably, the mean microrotation $\langle\theta_i^m\rangle$ is zero for the packing: that is, there is no preferred direction for grain rotation \emph{fluctuations}. This null result is highly reproducible between different repeats of the experiments and consistent with earlier numerical simulations~\cite{aharonov2002shear, kuhn2004contact,calvetti1997experimental,misra1997measured,alshibli2006microscopic}. It is also noteworthy that the microrotations follow a nearly Gaussian distribution in all cases (see results in SI). However, the amplitude of the grain rotation fluctuations increases strongly with strain. Note that similar shear-induced rotational fluctuations have been observed in the experimental work of Matsushima~\cite{2003_matsushima} on nonspherical particles. Focusing further on the growth of $\sigma_m(\gamma)$, we see that its strong growth with $\gamma$, and its reproducibility among different initial configurations, is also observed for different $\phi$ over the entire range of relevant densities and $\mu$, as shown in Fig.~\ref{fig:Fig_fit15}a-c. 
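The decomposition above can be sketched numerically. The following is a minimal illustration with synthetic, zero-mean Gaussian fluctuations on top of a rigid body rotation; the function name and numerical values are ours, not from the experiments:

```python
import numpy as np

def microrotations(theta, theta_R):
    """Subtract the rigid body rotation theta_R from each grain's
    measured rotation to obtain the microrotation theta_i^m."""
    return np.asarray(theta, dtype=float) - theta_R

# Synthetic example mimicking Fig. 1f: a rigid body rotation of
# 0.0013 rad/frame after 10 frames, plus zero-mean Gaussian
# fluctuations (the amplitude 0.02 rad is an arbitrary choice).
rng = np.random.default_rng(0)
theta_R = 0.0013 * 10
theta = theta_R + rng.normal(0.0, 0.02, size=1000)

theta_m = microrotations(theta, theta_R)
mean_m = theta_m.mean()                  # ~0: no preferred direction
sigma_m = np.sqrt(np.mean(theta_m**2))   # fluctuation amplitude
```

The mean microrotation vanishes to within sampling noise while $\sigma_m$ recovers the fluctuation amplitude, mirroring the null mean and growing standard deviation reported above.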
Up to a strain of 0.15, $\sigma_m$ is well described by the empirical relation $\sigma_m = D\gamma^n+\sigma_0$, as shown by the good quality of the fits. It is tempting to interpret the prefactor $D$ as a diffusion constant, as done previously for rotations induced by thermal fluctuations~\cite{kim2011colloidal, shanda2017decoupling}. The power-law index $n$ indicates the (weakly) nonlinear strain dependence, and $\sigma_0$ is a possible offset that is negligible in all experiments. The strength of the fluctuations, captured by $D(\phi)$ and $n(\phi)$, is very sensitive to the friction $\mu$, as shown in Fig.~\ref{fig:Fig_fit15}d,e. The friction dependence does, however, also capture the mechanical behavior of the packing: at large $\mu$, particle interactions associated with rotation are stronger even at smaller $\phi$, and this trend is observed in both $D(\phi)$ and $n(\phi)$. If $D$ is considered a diffusion constant, its thermal analogue would be given by the ratio of thermal fluctuations to viscous damping. A similar competition can be seen in the rotational diffusion: both $D$ and $n$ indicate that two mechanisms play a role, which is especially visible for $\mu_m$. Initially, $D(\phi)$ and $n(\phi)$ grow with $\phi$, indicating enhanced particle interactions that generate larger rotation fluctuations. However, above a certain $\phi_c(\mu)$, $D$ decreases, and above a packing fraction $\phi \approx 0.80$, the parameters $D,n$ tend towards a plateau, indicating that a competing mechanism emerges at high packing fractions which suppresses the further growth of rotational fluctuations. Steric hindrance plays a role for the gear-shaped $\mu_h$ particles, but less so for the much smoother $\mu_{l,m}$ particles. The exponent for $\mu_h$ can be as high as 1.4, indicating superdiffusive behavior if we consider $\gamma$ a time variable and $\sigma_{m}^2 = \langle(\theta_i^m)^2\rangle$ a displacement fluctuation metric. 
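The empirical fit can be sketched as follows. Since $\sigma_0$ is reported to be negligible, a simple log-log linear regression suffices; the values $D=0.5$ and $n=1.2$ below are illustrative choices of ours, not fitted experimental parameters:

```python
import numpy as np

def fit_power_law(gamma, sigma_m):
    """Fit sigma_m = D * gamma**n by linear regression in log-log
    space (the offset sigma_0 is neglected, as it is reported to be
    negligible in all experiments)."""
    n, logD = np.polyfit(np.log(gamma), np.log(sigma_m), 1)
    return np.exp(logD), n

# Synthetic data generated with D = 0.5, n = 1.2.
gamma = np.linspace(0.005, 0.15, 30)
sigma = 0.5 * gamma**1.2

D, n = fit_power_law(gamma, sigma)
```

On noise-free synthetic data the regression recovers $D$ and $n$ exactly; on experimental data one would fit each $(\phi,\mu)$ series separately to obtain the curves $D(\phi)$ and $n(\phi)$.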
Interestingly, the values of $n(\phi)$ become \emph{independent} of $\mu$ above a packing fraction of about 0.81, tending towards linear behavior at very high packing fraction. These trends are even more visible at smaller strains; see the Supplementary Information. \begin{figure} \centering \includegraphics[width=\linewidth]{Fig_moran.eps} \caption{(a) The average neighborhood variance $S_{n}(\gamma)$; different colors indicate different $\mu$; different hues represent different $\phi$; the color scheme applies to all panels. (b) $I$ as a function of $\gamma$. (c) The standard deviation $\sigma_I$ of $I_i$ as a function of $\gamma$. (d) Local Moran's $I$ for $\mu_l$, $\phi = 0.828$. (e) Local Moran's $I$ for $\mu_h$, $\phi = 0.807$.} \label{fig:var-global} \end{figure} \section{Correlations in micro-rotations} The observed Gaussian nature of the particle rotation fluctuations belies the underlying correlations in particle motion expected to exist in dense amorphous packings where many grains are in contact. Long-range correlations in particle rotations become visible through our two-step approach to quantifying the spatial autocorrelation of rotations, which leads to a mathematically well-defined, system-averaged measure widely used for geographical data: Moran's $I$~\cite{moran1950notes}. We first compute the particle-averaged neighborhood rotational variance $S_{n}$. For details, see the Methods section. A positive/negative value of $S_{n}$ means that, on average, a particle rotates in the same/opposite direction as its neighborhood. Fig.~\ref{fig:var-global}(a) shows the average neighborhood variance for materials with different $\mu$ and packing fraction $\phi$. The average neighborhood variance depends monotonically on both $\mu$ and $\phi$ and grows in magnitude with $\gamma$: the higher the friction or density, the lower the average neighborhood variance, which means a greater difference between a particle's micro-rotation and that of its neighborhood. 
This anticorrelation makes mechanical sense: gear-like motion forces rotations of opposite direction in interlocking particles. A large absolute value of $S_{n}$ may, however, be caused either by (dis)similarity between neighboring particles' micro-rotations or by a large variance of the micro-rotation itself. To focus only on the dissimilarity among neighboring particles' micro-rotations across the packing, we normalize $S_{n}$ by the variance of the particle micro-rotation, $\sigma_m^2$. We showed the dynamics of $\sigma_m^2$, and its non-monotonic dependence on friction and packing fraction, in Fig.~\ref{fig:Fig_fit15}. By computing $S_{n}/\sigma_m^2$, we arrive immediately at the system-wide spatial autocorrelation metric called Moran's $I$. Generally, the micro-rotations of the grains in all the materials analyzed here are negatively autocorrelated: the grains rotate, to some extent, like a chain of gears. Figure~\ref{fig:var-global}(b) shows the trends of $I$ as the shear strain increases. The differences in behavior for $\mu_{l,m,h}$ are evident: low-friction particles have a weak spatial autocorrelation, whereas particles with higher friction coefficients develop stronger anticorrelations, with $I$ decreasing to $-0.3$. The difference between packings with different $\phi$ is small but not insignificant. In general, the anticorrelations increase with larger packing fractions. Strikingly, the rotational correlations are also very strain sensitive, with 3\% strain being enough to reveal significant differences between packings of different $\phi$ and $\mu$. We go one step further and use the normalized neighborhood variance to gain insight into the local mechanics of sheared amorphous packings. Spatial autocorrelations as captured by $I$ are not the same everywhere; in fact, there are clusters of (anti)correlated rotations in the local measure $I_i$. 
We can quantify the local variability of these correlations by computing the standard deviation $\sigma_I$ of $I_i$. This metric captures the rotational ``floppiness'' of the packing: at large $\phi$, in a highly overconstrained system, interlocking grains must all have the same rotational behavior, so the variability of $I_i$ across the packing should be small. At smaller $\phi$, there are more ways to reach mechanical equilibrium, hence the variability among correlations should be higher. Similarly, $\sigma_I$ should express the phenomenon of shear jamming: at small strain, the shear jamming mechanism has not yet been activated, so $\sigma_I$ is small. As strain increases, the packing moves from partially to completely constrained and should thus again achieve a small $\sigma_I$. Finally, the role of $\mu$ should also be nonlinear: at small and large $\mu$, the rotational variability should be high as per the previous arguments, so $\sigma_I(\mu)$ should have an optimum. We indeed observe all these mechanically reasonable trends in $\sigma_I(\phi,\gamma,\mu)$. The standard deviation of $I_i$ is strain dependent, exhibiting a distinct peak in floppiness at about 3\% strain. Note that at these strain levels the system-level pressure is undetectable, highlighting again the sensitivity of the rotations. The connection to the rotational diffusion also remains visible in the fluctuations of the anticorrelated micro-rotations: observe how, for $\phi > 0.80$, $\sigma_I$ is small for all $\mu$, precisely where the diffusivity of rotations becomes independent of $\mu$. Finally, we show two examples of the spatial distribution $I_i(x,y)$, for $\mu_l$, $\phi = 0.828$ and for $\mu_h$, $\phi = 0.807$, in Fig.~\ref{fig:var-global}d,e. These examples clearly show clusters of isotropic and anisotropic shapes emerging along the boundaries and in the bulk of the packing. 
Spatial fluctuations can span up to ten particle diameters and can be string-like or globular, highlighting again the spatial anisotropy that can build up in the amorphous system (see Supplementary Information videos). While the complete spatial dynamics of the neighborhood rotation similarity is challenging to interpret, owing to the dual and non-monotonic roles of both friction and density, our data clearly show that particle rotation is an essential parameter to include in continuum modeling theories with non-local mechanical couplings inside sheared amorphous packings. \section{Conclusions} We have shown that simple shear induces spatially correlated fluctuations in the rotational dynamics of round, frictional particles. Individual particle motion is diffusive, and this diffusive motion is $\mu$- and $\phi$-dependent, as one would expect based on the mechanical characteristics of the packing. The local neighborhood of particles shows on-average anticorrelated motion, revealing that two distinct mechanisms affect the mechanics of individual grains. Rotational motion fluctuations indicate the state of the system early in the deformation regime, after only a few percent shear strain, even though the \emph{average} particle micro-rotation is zero. Our results indicate that rotational motion is a highly relevant field in the study of amorphous particulate materials, ranging from sands to frictional emulsions, colloids and even molecular glasses. Beyond materials analyses, the results have a broader relevance to spatial data science, particularly in reference to the ``first law of geography''~\cite{tobler1970computer}, which states that nearby things are similar. The value of the widely used geographical spatial autocorrelation measure, Moran's $I$, is negative for granular material systems, with a clear physical interpretation related to particle friction. 
This finding is in contrast with the large majority of spatial datasets from human-scale natural systems, which have positive spatial autocorrelation. Intriguingly, the role of absolute interparticle orientations has long been recognized for system mechanics: the bond angle is recognized as essential in constraint-counting approaches for glassy polymeric systems~\cite{1995_thorpe} and is also relevant for protein folding dynamics~\cite{Jacobs2002}. Not surprisingly, rotational dynamics has been measured indirectly on a system scale via dielectric spectroscopy~\cite{madden1984consistent}, for example to probe glassy dynamics in the rotational degrees of freedom of non-spherically symmetric glass-forming molecules. Note that friction is not the only mechanism that makes the rotational degrees of freedom relevant for packing dynamics. Rotations also play a role in particle packings composed of aspherical, adhesive or deformable particles, which covers many types of particulate materials, ranging from granular materials to colloids, proteins~\cite{haradisan2019rotationalprotein}, emulsions and even metamaterials in which the node hinges are not ideal~\cite{misra2020chiral}. In particular, it is of interest to explore how energy is stored in sheared granular packings and what role rotations and contact friction play in this. Our work thus suggests that rotational dynamics is a potentially unifying characteristic through which the often-suggested similarity among amorphous materials can be understood.\\ \begin{acknowledgments} We thank the organizers and participants of the Lorentz Center workshop ``Granular Matter Across Scales'' for fostering an environment where the seeds for this work were planted. We are grateful to the late Robert Behringer for always reminding us of the importance of grain rotations and friction. AM and NN are supported in part by the United States National Science Foundation grants CMMI-1727433 and EEC-1840432 (which also involves SS). 
YL is supported by the University of Minnesota Doctoral Dissertation Fellowship. \end{acknowledgments} \clearpage \section{Methods} \subsection{Experimental Setup} We analyze a series of experiments that allow the tracking of the rotation of every disk-shaped particle in a $\sim$1000-particle shear environment in which shear bands and other large-scale inhomogeneities have been completely eliminated, as reported elsewhere~\cite{ren13prl,2018_trimer_dong}. Shear is applied quasi-statically from an isotropic, stress-free state and tracked during the initial shear transient up to a strain of 0.5. Previous experiments have described the dilatancy and displacement dynamics of these packings~\cite{ren13prl,2018_trimer_dong}. In the current work, we study the effect of inter-particle friction, using granular assemblies with controlled variations of the friction coefficient, as well as the effect of the initial packing fraction on the response of the granular assembly. One set of particles was cut from photoelastic sheets as in previous experiments~\cite{ren13prl}, with an inter-particle friction coefficient $\mu_m$ of approximately 0.7. After conducting experiments with this set, we wrapped these particles with teflon tape; dry teflon-teflon contacts have a friction coefficient $\mu_l \sim 0.15$~\cite{2020_wang}. A third set of data was obtained with photoelastic disks cut with fine teeth on their circumference, so that the particles interlock when they come into contact. Such a particle shape mimics an extremely large friction coefficient; we refer to these particles as $\mu_h$. The diameter ratio of big to small disks is 1.25:1, and the number ratio is roughly 1:3.3 (big to small) for each packing. Particles were first randomly placed in the shear cell and manually relaxed until no inter-particle contact force was visible by eye. 
Then, starting from either a parallelogram or a rectangle, the shear cell was deformed in strain steps of 0.0027. The system was relaxed for 10 seconds, after which three kinds of pictures were taken: one with white light, one with polarized light, and one with UV light. These three pictures reveal the particle positions, the particle contact forces/pressure, and the particle orientations, respectively. This process of shearing, relaxing and picture taking was repeated until a target total shear strain was reached. For each packing fraction and friction coefficient, we repeated the experiment five times, with the exception of the lowest-density $\mu_l$ runs. Note that analysis of the images acquired during the experiment reveals that not all grains were detected in all frames, as some grains move in or out of the image boundaries from one frame to another. As a result, only grains common to all frames were considered in the analysis of the current paper; grains present in one frame but not detected in another were excluded. Moreover, grains on the boundary were removed from the analysis. \subsection{Rotations} The rigid body rotation between any two frames is measured as half the difference in slope of straight lines fitted to the coordinates of the grain centroids in the two frames. We note that, in general, the relation between the measured change in slope and the rigid rotation is nonlinear, especially at finite deformation; a linear relation is, however, a good approximation over the shear strain range considered here. \subsection{Neighborhood Variance} The neighborhood variance of a particle is the product of the deviation of its micro-rotation from the mean micro-rotation and the corresponding mean deviation of its Voronoi neighborhood from the mean micro-rotation. 
We compute the ``average neighborhood variance'' by \begin{equation} S_{n} = \frac{Z^TWZ}{N-1},\text{ and }Z = \Theta^m - \langle \theta_i^m \rangle, \end{equation} where $\Theta^m$ is the vector of particle micro-rotations whose $i$th element is $\theta_i^m$, $\langle \theta_i^m \rangle$ is the mean of all particle micro-rotations, and $N$ is the number of particles. $W$ is the row-wise normalized spatial weight matrix. A commonly used spatial weight matrix is the adjacency matrix, whose element in the $i$th row and $j$th column indicates whether the $i$th particle is adjacent to the $j$th particle: if the particles are adjacent, the element is 1; otherwise, it is 0. A row-wise normalized spatial weight matrix is obtained by dividing each row of a spatial weight matrix by its row sum. We conducted analyses of the materials with different surfaces and densities: $\mu_l$: $\phi=0.783, 0.810, 0.828$; $\mu_m$: $\phi=0.692, 0.758, 0.816$; $\mu_h$: $\phi=0.713, 0.744, 0.807$. The observation for each grain was its micro-rotation. We constructed Delaunay triangles to link grains with their neighboring grains, and removed links whose length was greater than the sum of the radii of the two grains they connect. \subsection{Global Moran's $I$} Spatial autocorrelation is a measure of the correlation between spatially proximate observations. Positive spatial autocorrelation is the tendency for spatially proximate observations to be similar, while negative spatial autocorrelation means spatially proximate observations tend to be different. Global Moran's $I$ is defined as follows: \begin{equation} I = \frac{Z^TWZ}{Z^TZ} = S_{n}/\sigma_m^2, \text{ and } Z=\Theta^m-\langle \theta_i^m \rangle, \end{equation} where $\Theta^m$ is the vector of particle micro-rotations whose $i$th element is $\theta_i^m$, and $\langle \theta_i^m \rangle$ is the mean of all particle micro-rotations, which is negligible. $W$ is the row-wise normalized spatial weight matrix. 
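A minimal sketch of these two formulas, using a plain adjacency matrix in place of the Delaunay-based neighborhoods of the experiments; the four-grain ``gear chain'' example is our own toy configuration, not experimental data:

```python
import numpy as np

def row_normalize(W):
    """Row-normalize a spatial weight (adjacency) matrix."""
    W = np.asarray(W, dtype=float)
    return W / W.sum(axis=1, keepdims=True)

def neighborhood_variance(theta_m, W):
    """Average neighborhood variance S_n = Z^T W Z / (N - 1)."""
    z = np.asarray(theta_m, dtype=float) - np.mean(theta_m)
    return z @ row_normalize(W) @ z / (len(z) - 1)

def morans_I(theta_m, W):
    """Global Moran's I = Z^T W Z / Z^T Z = S_n / sigma_m^2."""
    z = np.asarray(theta_m, dtype=float) - np.mean(theta_m)
    return (z @ row_normalize(W) @ z) / (z @ z)

# Toy example: four grains on a ring, each touching two neighbours and
# rotating gear-like with alternating sign -> perfect anticorrelation.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
theta_m = np.array([+1.0, -1.0, +1.0, -1.0])
S_n = neighborhood_variance(theta_m, W)   # negative: gear-like motion
I = morans_I(theta_m, W)                  # -> -1.0
```

The perfectly alternating rotations give $I=-1$, the strongest possible anticorrelation; the experimental values of $I\gtrsim-0.3$ correspond to a partial, friction-mediated version of this gear chain.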
This metric measures the average spatial autocorrelation of the entire dataset. The expected value of global Moran's $I$ under the null hypothesis of no spatial autocorrelation is $E(I)=-\frac{1}{N-1}$, where $N$ is the number of observations; the more observations there are, the closer the expectation is to 0. Values of $I$ usually range from $-1$ to $+1$. Values significantly below $E(I)$ indicate negative spatial autocorrelation, and values significantly above $E(I)$ indicate positive spatial autocorrelation. \subsection{Local Moran's $I$} There are cases in which there is no global trend of spatial autocorrelation, but there are local communities where the spatial autocorrelation is strong. Local Moran's $I$ represents the spatial autocorrelation within the local neighborhood of each observation and is defined as follows: \begin{equation} I_i = \frac{z_i W_{i:}Z}{Z^TZ/(N-1)}, \text{ and } z_i = \theta_i^m - \langle \theta_i^m \rangle, \end{equation} where $\theta_i^m$ is the $i$th particle's micro-rotation. A positive value of $I_i$ means that within the $i$th observation's neighborhood the observations are similar, while a negative value means they are different. To analyze whether the local communities in a dataset are homogeneous with respect to spatial autocorrelation, we compute the standard deviation of local Moran's $I$, defined as $\sigma_I$. The greater this standard deviation, the greater the differences between local communities.
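A minimal sketch of the local measure and its spread $\sigma_I$, again on a toy four-grain gear-like configuration of our own rather than experimental data:

```python
import numpy as np

def local_morans_I(theta_m, W):
    """Local Moran's I_i = z_i (W z)_i / (Z^T Z / (N - 1)),
    with W row-normalized and z the mean-centred micro-rotations."""
    z = np.asarray(theta_m, dtype=float) - np.mean(theta_m)
    Wn = np.asarray(W, dtype=float)
    Wn = Wn / Wn.sum(axis=1, keepdims=True)
    return z * (Wn @ z) / (z @ z / (len(z) - 1))

# Toy configuration: four gear-like grains with alternating rotations.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
theta_m = np.array([+1.0, -1.0, +1.0, -1.0])
I_i = local_morans_I(theta_m, W)   # every grain locally anticorrelated
sigma_I = I_i.std()                # zero here: perfectly homogeneous
```

In this idealized case every grain has the same (negative) $I_i$, so $\sigma_I=0$; in the experiments, spatial heterogeneity of the gear-like clusters is precisely what a nonzero $\sigma_I$ quantifies.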
\section{Introduction} When spacetime is noncommutative, it is often the case that diffeomorphisms do not act as a {\it group} of automorphisms of the spacetime algebra. Instead it can be the case that symmetries act on the spacetime algebra as a {\it Hopf} or a {\it quasi-Hopf} algebra \cite{Dimitrijevic:2004rf, Aschieri:2005yw, Chaichian:2004za}. A prominent example is provided by the Groenewald-Moyal (GM) plane $\mathcal{A}_\theta(\mathbb{R}^d)$ and the Poincar\'e symmetry. The algebra $\mathcal{A}_\theta(\mathbb{R}^d)$ is the algebra of functions on $\mathbb{R}^d$ with the ``$\ast$'' product \begin{align} f_1\ast f_2=& f_1 e^{\frac{i}{2}\overleftarrow{\partial_\mu}\theta^{\mu\nu}\overrightarrow{\partial_\nu}}f_2,\\ f_i\ \in\ \mathcal{A}_\theta(\mathbb{R}^d),&\ \ \theta^{\mu\nu}=-\theta^{\nu\mu}=\textrm{ constant}.\nonumber \end{align} Let $\hat{\mathcal{P}}_+^\uparrow$ be the standard universal cover of (let us say) the connected Poincar\'e group. Then $\hat{\mathcal{P}}_+^\uparrow$ does not act as a standard group of automorphisms on $\mathcal{A}_\theta(\mathbb{R}^d)$ since $\theta^{\mu\nu}$ are constants. There is however a Hopf algebra $(\mathbb{C}\hat{\mathcal{P}}_+^\uparrow, \Delta_\theta)$ where $\mathbb{C}\hat{\mathcal{P}}_+^\uparrow$ is the group algebra of $\hat{\mathcal{P}}_+^\uparrow$ and $\Delta_\theta$ is a deformed coproduct: \begin{align} \Delta_\theta(g)&=F_\theta^{-1}g\otimes g F_\theta,\ g\in \hat{\mathcal{P}}_+^\uparrow\quad,\\ F_\theta&= e^{\frac{i}{2}\partial_\mu\theta^{\mu\nu}\otimes \partial_\nu}=\textrm{Drinfel'd's twist factor}\quad . \end{align} (We do not include the counit $\epsilon$ and the antipode $S$ in the notation for simplicity). The twist of the coproduct implies a twist of the statistics as well. 
Its effects can be accounted for by ``dressing''~\cite{Zamolodchikov:1978xm,Grosse:1977ua, Faddeev:1980zy} the quantum field $\phi_0$ of matter for $\theta^{\mu\nu}=0$: \begin{align} \phi_\theta &= \textrm{quantum field of matter for noncommutativity parameter} \ \theta^{\mu\nu}\nonumber\\ &=\phi_0 e^{\frac{1}{2}\overleftarrow{\partial_\mu}\theta^{\mu\nu}P_\nu},\ P_\nu =\textrm{total momentum of {\it all} fields.}\nonumber \end{align} (Although here we focus on just one field, this formula is valid in an interacting field theory with many fields. Then $P_\nu$ refers to the full four-momentum of the interacting field theory.) Hereafter $A\wedge B$ will denote $A_\mu\theta^{\mu\nu}\otimes B_\nu$. A remarkable feature of the dressing transformation is its self-reproducing property: \begin{equation} \phi_\theta\ast\chi_\theta=(\phi_0\chi_0)e^{\frac{1}{2}\overleftarrow{\partial}\wedge P}. \end{equation} In particular, for the interaction Hamiltonian density, it implies \cite{Balachandran:2005pn, Balachandran:2006pi} that \begin{equation} \mathcal{H}_I^\theta=\mathcal{H}_I^0 e^{\frac{1}{2}\overleftarrow{\partial}\wedge P} \end{equation} and that the interaction representation $S$-operator is independent of $\theta^{\mu\nu}$: $S_\theta=S_0$. But scattering amplitudes show time delays which depend on $\theta^{\mu\nu}$ \cite{Balachandran:2005pn,Buchholz:2008rd}. The above approach has no major physical problems in the absence of gauge fields. When gauge fields are introduced, new issues arise. In the covariant derivative $D_\mu=\partial_\mu+A_\mu$, it seems natural at first sight to regard $A_\mu$ as $\underline{G}$-valued functions on $\mathcal{A}_\theta(\mathbb{R}^d)$, where $\underline{G}$ is the Lie algebra of the compact simple group $G$ underlying the gauge theory. Unfortunately, as is well known, this point of view cannot be sustained, since $[D_\mu, D_\nu]$ is valued in the enveloping algebra $\mathcal{U}(\underline{G})$ of $\underline{G}$. 
If we work in an $N$-dimensional irreducible representation of $\underline{G}$, $[D_\mu, D_\nu]$ is generally valued in $\underline{U(N)}$. One {\it may} thus be obliged to introduce new gauge fields \cite{Aschieri:2006ye} causing problems in formulating for example the standard model on $\mathcal{A}_\theta(\mathbb{R}^d)$. We remark however that new gauge degrees of freedom may not be necessary. Vassilevich \cite{Vassi} has found new gauge invariant expressions which vanish as $\theta^{\mu\nu}\to 0$ and which can be added to the action. With their inclusion, it may be possible to avoid new gauge degrees of freedom. In past work \cite{Balachandran:2007kv, Balachandran:2007yf}, we developed an alternative formulation. There the gauge fields $A_\mu$ are $\underline{G}$-valued functions on the commutative algebra $\mathcal{A}_0(\mathbb{R}^d)$. The fields $A_\mu$ are thus not twisted: $A_\mu^\theta=A_\mu^0$. Matter fields are still based on $\mathcal{A}_\theta(\mathbb{R}^d)$ and are given by $\phi_\theta$ where $P_\nu$ is now the total momentum including that of gauge fields. Such a formulation is possible since $\mathcal{A}_\theta(\mathbb{R}^d)$ is an $\mathcal{A}_0(\mathbb{R}^d)$-module. It has specific consequences such as the appearance of new types of diagrams, UV-IR mixing of a new sort and CPT violation \cite{Balachandran:2007yf,Jo,Invariance,Anosh}. Thus gauge fields are based on the commutative algebra of functions $\mathcal{A}_0(\mathbb{R}^d)$. Hence Poincar\'e transformations act on gauge fields with the untwisted coproduct $\Delta_0$. The corresponding P$\widehat{{\rm oincar}}$\'e Hopf algebra is $(\mathds{C}\hat{\mathcal{P}}_+^\uparrow, \Delta_0)$ whereas it is $(\mathds{C}\hat{\mathcal{P}}_+^\uparrow, \Delta_\theta)$ for matter fields. (The hat on P$\widehat{{\rm oincar}}$\'e is to show that we deal with its covering group.) 
As gauge and matter fields interact, the existence of two different P$\widehat{{\rm oincar}}$\'e Hopf algebras raises consistency questions regarding our treatment of Poincar\'e symmetry. In this paper, we formulate a {\it single} P$\widehat{{\rm oincar}}$\'e quasi-Hopf symmetry acting on both matter and gauge fields \cite{Mack:1991sr,Mack:1991tg,Mack:1992ez,Majid}. The coproduct on this symmetry algebra is not coassociative. As a result, the product on the spacetime algebra is not associative. The statistics group too is changed: it is neither the permutation nor the braid group. Quasi-Hopf algebras were formulated by Drinfel'd. They were later studied by Mack and Schomerus \cite{Mack:1991sr,Mack:1991tg,Mack:1992ez}, Majid \cite{Majid} and others. But perhaps it is here that they appear for the first time in the context of relativistic quantum field theories. In this note, we describe the preceding new results, indicating all the necessary steps. There are, however, several aspects not elaborated here, such as the properties of the $\mathcal{R}$-matrix and the construction of ``covariant products of quantum fields'' \cite{Mack:1991sr,Mack:1991tg,Mack:1992ez}. Elsewhere we will give a full treatment, basing our considerations on the work of Mack and Schomerus \cite{Mack:1991sr,Mack:1991tg,Mack:1992ez}. For now, in the interests of simplicity, we highlight just the main points. This paper has been written with the Lehmann-Symanzik-Zimmermann (LSZ) formalism of quantum field theories (qft's) on $\mathcal{A}_{\theta}(\mathbb{R}^d)$ in \cite{Balachandran:2009gx} in mind. It works with interacting fields and total energy-momentum operators $P_{\mu}$ which include interactions. But it is easily adapted to the perturbative approach of \cite{Balachandran:2007kv} by replacing $P_{\mu}$ by their free-field counterparts.
\section{The Drinfel'd Twists and Quasi-Hopf Algebras}\label{sec2} Drinfel'd gives a general procedure to obtain new Hopf algebras starting from a given Hopf algebra using twists. The construction of the coproduct $\Delta_\theta$ is an example of this general theory of twisting. This section follows the treatment of Drinfel'd's work as given in \cite{Majid}. We always assume that a quasitriangular structure (the $\mathcal{R}$-matrix) exists. For completeness, we give here only the definitions and properties which are essential to follow the later sections. For details, see \cite{Majid}. Consider a Hopf algebra $H$ with a coproduct $\Delta$, which acts in another algebra $\mathcal{A}$ with multiplication map $m_0$. Now consider an invertible element $F\in H\otimes H$ (the twist element) which is a counital 2-cocycle (a condition which we will describe shortly). Then one can define a new Hopf algebra with the same algebra structure as $H$, but with the new coproduct \begin{equation} \Delta_F=F^{-1}\Delta F, \end{equation} and this algebra acts in a new carrier algebra $\mathcal{A}_F$ where the multiplication rule is now given by \begin{equation} m_F=m_0F. \end{equation} The new coproduct is generally not cocommutative (even if the original untwisted coproduct is), i.e., if we flip the entries in the tensor product which appears in $\Delta_F(\cdot)$, we do not get back the original coproduct: \begin{equation} \Delta'_F\equiv s\Delta_F\neq\Delta_F, \end{equation} where $s$ is the transposition map which flips the entries in the tensor product in the coproduct. Hence the usual symmetrization/antisymmetrization in the tensor products of the carrier algebra (that is, the statistics) is not compatible with the coproduct. Rather, in any theory with multiparticle states, the statistics is governed by the $\mathcal{R}$-matrix associated with the coproduct.
The $\mathcal{R}$-matrix has the property \begin{equation} \mathcal{R}\Delta=\Delta'\mathcal{R}.\label{rmatrix} \end{equation} Therefore the correct statistics operator $\tau$ on $\mathcal{A}\otimes \mathcal{A}$ which is compatible with a general coproduct is given by \begin{equation} \tau=\sigma\circ(\rho\otimes\rho)(\mathcal{R}). \end{equation} Here $\sigma$ is the flip operator on the tensor product $V\otimes V$ of the representation carrier space $V$ and $\rho$ is a representation by which $H$ acts in $V$. The diagonalization with respect to $\tau$ gives states which are superselected. It is easy to see that the $\mathcal{R}$-matrix for the coproduct obtained by the twisting procedure from a trivial coproduct is given by \begin{equation} \mathcal{R}=F_{21}^{-1}F,\label{rtwist} \end{equation} where \begin{equation} F_{21}^{-1}=s F^{-1} \end{equation} and $s$ again flips the entries in the tensor product of $F^{-1}$. So $\tau$ can be written as \begin{equation} \tau=\sigma\circ(\rho\otimes\rho)( F_{21}^{-1} F). \end{equation} We will often omit the representation symbol $\rho$ when it is clear from the context. Thus we see that the twisting procedure works at three levels. It not only twists the coproduct of the symmetry group and the product in the spacetime algebra, but it also changes the usual bosonic/fermionic statistics to twisted bosonic/fermionic statistics. \subsection{Coassociativity and Quasi-Hopf Algebras} The coassociativity of the coproduct is defined by \begin{equation} (id\otimes \Delta)\Delta=(\Delta\otimes id)\Delta. \end{equation} By duality, this represents the associativity of the carrier algebra \cite{Balachandran:2009st}. Drinfel'd has defined more general algebraic structures where the above condition fails to hold, called quasi-Hopf algebras.
However, this failure is controlled by an intertwiner $\phi \in H\otimes H\otimes H$ (fulfilling certain properties which we will not discuss) such that \begin{equation} (id\otimes \Delta)\Delta(h)=\phi\big((\Delta\otimes id)\Delta(h)\big)\phi^{-1}\label{quasi} \end{equation} for all $h\in H$. The definitions of the antipode, counit and quasi-triangular structure are also appropriately modified, but we will not discuss those here either. It is (\ref{quasi}) that is the central element leading to the definition of quasi-Hopf algebras. These quasi-Hopf algebras can actually be obtained by twisting with a twist element $F$ which is required to be counital, i.e., \begin{equation} (id\otimes \epsilon)F=(\epsilon\otimes id)F=\mathds{1},\label{counital} \end{equation} where $\epsilon$ is the counit. It is the 2-cocycle condition on the twist element $F$, \begin{equation} (F\otimes \mathds{1})\cdot(\Delta\otimes id)(F)=(\mathds{1}\otimes F)\cdot (id\otimes \Delta)(F),\label{cocycle} \end{equation} which ensures the coassociativity of the twisted coproduct. If the twist element $F$ does not fulfill this condition, the resulting algebra is only a quasi-Hopf algebra. Notice that $F$ only needs to obey (\ref{counital}) to qualify as a twist for a resulting quasi-Hopf algebra. It is important to note that even in a quasi-Hopf algebra, the $\mathcal{R}$-matrix still obeys (\ref{rmatrix}) and is still obtained via (\ref{rtwist}) from the twist operator $F$. Hence the twisted statistics for a twisted quasi-Hopf algebra is again given by \begin{equation} \tau=\sigma F^{-1}_{21}F, \end{equation} where we have omitted the symbol $\circ$ after $\sigma$. In general, a quasi-Hopf algebra is a complicated object. However, if it is obtained from a twist $F$, it is easy to use, as all the structures of the quasi-Hopf algebra follow from this twist.
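For the Moyal twist $F_\theta=e^{-\frac{i}{2}P_\mu\theta^{\mu\nu}\otimes P_\nu}$ used in the matter sector, all of these structures reduce to momentum-dependent phases on plane waves $e_p$ and can be checked numerically. The sketch below is an illustration with arbitrary momenta and a hypothetical value of $\theta^{01}$ (not code from the paper): it verifies that the twisted statistics operator squares to one and that the Moyal twist satisfies the 2-cocycle condition (\ref{cocycle}), which is why the twisted coproduct of the matter sector alone is still coassociative.

```python
import numpy as np

theta = 0.7  # hypothetical noncommutativity scale; two momentum components suffice

def wedge(p, q):
    # p_mu theta^{mu nu} q_nu with theta^{01} = -theta^{10} = theta
    return theta * (p[0]*q[1] - p[1]*q[0])

def F(p, q):
    # the Moyal twist acting on e_p (x) e_q is multiplication by this phase
    return np.exp(-0.5j * wedge(p, q))

rng = np.random.default_rng(0)
p, q, r = rng.normal(size=(3, 2))  # three arbitrary momenta

# R-matrix from the twist, R = F_21^{-1} F: the phase exp(-i p^q) on e_p (x) e_q
R = F(p, q) / F(q, p)
assert np.isclose(R, np.exp(-1j * wedge(p, q)))

# the statistics operator tau = sigma.R squares to one, since p^q = -q^p
tau_squared = R * (F(q, p) / F(p, q))
assert np.isclose(tau_squared, 1.0)

# 2-cocycle condition (F(x)1)(Delta(x)id)F = (1(x)F)(id(x)Delta)F on e_p (x) e_q (x) e_r:
# momenta add under the untwisted coproduct, so both sides are phases
lhs = F(p, q) * F(p + q, r)
rhs = F(q, r) * F(p, q + r)
assert np.isclose(lhs, rhs)  # holds by bilinearity of the wedge
print("Moyal twist: tau^2 = 1 and the cocycle condition holds")
```

The cocycle check passes identically for any momenta because the wedge is bilinear; it is the grading projector introduced in the next sections that will spoil this property.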
\section{The Twisted Fields} Twisted fields such as $\phi_\theta$ contain all the information on statistics, and hence on the coproduct on the symmetry algebra $\mathds{C}\hat{\mathcal{P}}_+^\uparrow$ and the product on the spacetime algebra. This is fully explained in \cite{Balachandran:2007yf,Balachandran:2007vx}. Therefore we first focus on a uniform construction of the twisted fields. For this purpose, we have to enlarge $\mathds{C}\hat{\mathcal{P}}_+^\uparrow$ by introducing a central element $u$. We call the extended algebra $\overline{\mathds{C}\hat{\mathcal{P}}_+^\uparrow}$. The central element $u$ is effectively a grading operator for the quantum fields. It behaves like a pure group element under the coproduct $\overline{\Delta_\theta}$, counit $\overline{\epsilon}$ and antipode $\overline{S}$ of the extended algebra. Thus \begin{align}\label{Mario1} \overline{\Delta_\theta}(u)&= u\otimes u,\\ \overline{\epsilon}(u)&=\mathds{1},\\ \overline{S}(u)&=u^{-1}. \end{align} The $\ast$-operator on $\mathds{C}\hat{\mathcal{P}}_+^\uparrow$ is extended to $\overline{\mathds{C}\hat{\mathcal{P}}_+^\uparrow}$ by setting \begin{equation} u^{\ast}u=uu^{\ast}=\mathds{1}. \end{equation} It is thus a unitary element. Let $\chi_0^g$ and $\chi_0^m$ generically denote a basic untwisted gauge and matter field. The element $u$ acts on the fields by conjugation as usual. This representation of $u$ on fields is denoted by $Ad$. Thus \begin{equation} Ad\,u\ \chi_0^{g,m}\ :=\ u\chi_0^{g,m}u^{-1}. \end{equation} We set \begin{equation} Ad\,u\ \chi_0^g=+\chi_0^g\ \ ,\ \ Ad\,u\ \chi_0^m=-\chi_0^m.\label{set} \end{equation} Thus $u$ is a grading operator, with $\chi_0^g$ being even and $\chi_0^m$ being odd. We complete the definition of $u$ in quantum field theory by setting $u=\mathds{1}$ on the vacuum: \begin{equation} u|0\rangle=|0\rangle.
\end{equation} It follows from (\ref{set}) that \begin{equation} \delta_{Ad\,u,-1}\equiv\frac{1}{2}[\mathds{1}-Ad\,u] \end{equation} acts as $0$ on $\chi_0^g$ and as the identity on $\chi_0^m$: \begin{equation} \delta_{Ad\,u,-1}\chi_0^g=0\ ,\ \delta_{Ad\,u,-1}\chi_0^m=\chi_0^m.\label{charges} \end{equation} It is thus a projector. We avoid denoting it by $P$, since $P$ stands for the momentum operator elsewhere. We set as usual \begin{align} Ad\Delta_0(u)&\equiv (Ad\otimes Ad) (u\otimes u)\\ &=Ad\,u\otimes Ad\,u. \end{align} We now write the twisted field $\chi_\theta^{g,m}$, which can be matter or gauge, as \begin{equation} \chi_\theta^{g,m}=\chi_0^{g,m}e^{\frac{1}{2}\overleftarrow{\partial}\wedge P(\overleftarrow{\delta_{Ad\,u,-1}})} \end{equation} where the left arrow indicates action on $\chi_0^{g,m}$. In view of (\ref{charges}), \begin{equation} \chi_\theta^g=\chi_0^g\ ,\ \chi_\theta^m=\chi_0^m e^{\frac{1}{2}\overleftarrow{\partial}\wedge P}. \end{equation} These are exactly what we want. The representation $Ad$ extends to $\chi_\theta^{g,m}$ in a natural way: \begin{equation} Ad\,u\ \chi_\theta^{g,m}=u\chi_\theta^{g,m}u^{-1}. \end{equation} {\it Remark:} The introduction of a new element to convert a symmetry algebra into a Hopf algebra has occurred before. For instance, the SUSY algebra is not Hopf. Now let $N_F$ be the fermion number and consider $(-1)^{N_F}$. It is the grading operator, commuting with even and anticommuting with odd SUSY generators. Mack and Schomerus \cite{Mack:1991sr} extend SUSY to $\overline{\textrm{SUSY}}$ by including this element and show that $\overline{\textrm{SUSY}}$, unlike SUSY, is Hopf. \section{The Coproduct $\overline{\Delta_\theta}$ on $\overline{\mathds{C}\hat{\mathcal{P}}_+^\uparrow}$} In the previous section, we did not specify the twisted coproduct $\overline{\Delta_\theta}$ on $\mathds{C}\hat{\mathcal{P}}_+^\uparrow\subset\overline{\mathds{C}\hat{\mathcal{P}}_+^\uparrow}$. We take up that task here.
We know that the coproduct on the gauge sector is just the usual coproduct without any twist, \begin{equation} \overline{\Delta_\theta}\mid_{\textrm{Gauge fields}}=\Delta_0\mid_{\textrm{Gauge fields}}, \end{equation} where \begin{equation} \Delta_0(g)=g\otimes g \end{equation} for a Poincar\'e group element $g\in \overline{\mathds{C}\hat{\mathcal{P}}_+^\uparrow}$. For the matter sector, the coproduct is given by \begin{align} \overline{\Delta_\theta}\mid_{\textrm{Matter fields}}&=\Delta_\theta=F_\theta^{-1}\Delta_0F_\theta,\\ F_\theta &=e^{-\frac{i}{2}P_\mu\theta^{\mu\nu}\otimes P_\nu}. \end{align} We want to write a coproduct which reduces to the corresponding coproducts on each sector using a single twist operator: \begin{equation} \overline{\Delta_\theta}=\overline{\mathfrak{F}_{\theta}}^{-1} \Delta_0 \overline{\mathfrak{F}_\theta}. \end{equation} In this way we will be defining a new Hopf symmetry structure on the full theory. The twist operator $\overline{\mathfrak{F}_{\theta}}$ which does this job is given by \begin{equation} \overline{\mathfrak{F}_{\theta}}=e^{-\frac{i}{2}P_\mu\theta^{\mu\nu}\otimes P_\nu (\delta_{Ad\,u,-1}\otimes\mathds{1})}.\label{twistfact} \end{equation} It reduces to the corresponding twist factors in the respective sectors. We can check that this is indeed the twist factor for our coproduct. We know that for a field $\phi$ to carry a representation of any coproduct $\Delta$, it must fulfill \begin{equation} U(g)\phi=\phi( U\otimes \overleftarrow{\rho})(id\otimes S)\Delta(g),\label{coprodrep} \end{equation} where $S$ is the antipode (the inverse for pure group elements), $U(g)$ is the operator representative of $g$ on the Hilbert space and $\rho$ is the representation of the group on the field $\phi$: \begin{equation} (\rho(g)\phi)(x)=\phi(g^{-1}x),\ \ \ g\in\hat{\mathcal{P}}_+^\uparrow. \end{equation} The argument for $\rho$ comes from the second factors in $(id\otimes S)\Delta(g)$. They act as usual from left to right on $\phi$.
For the untwisted coproduct $\Delta_0(g)=g\otimes g$, (\ref{coprodrep}) produces the standard result, \begin{equation} U(g)\phi(x)U(g)^\dagger =\phi(gx). \end{equation} In previous papers \cite{Balachandran:2007yf,Balachandran:2007vx}, we have shown that the usual expressions for the operators $U(g)$ in terms of untwisted oscillators fulfill equation (\ref{coprodrep}) with the twisted coproduct when acting on the twisted fields. Thus one knows that the algebraic structure of $\overline{\mathds{C}\hat{\mathcal{P}}_+^\uparrow}$ is not changed by changing the coproduct. Also, since the operators act on the same Hilbert space as before, it is expected that the operators $U(g)$ do not change when written in terms of untwisted oscillators, because otherwise they would not satisfy the $\overline{\mathds{C}\hat{\mathcal{P}}_+^\uparrow}$ algebra. But the remarkable fact is that when they act on twisted fields, they reproduce the twisted coproduct. In other words, the transformations of the twisted fields with the correct coproduct can be obtained by simply transforming the untwisted field and the $P$'s in the standard manner. Let us show this for spinless fields and $g$ a Lorentz transformation. The results for more general fields follow easily. Consider the twisted field $\chi_\theta^{g,m}$ given by \begin{equation} \chi_\theta^{g,m}\equiv\chi_0^{g,m}e^{\frac{1}{2}\overleftarrow{\partial}\wedge P\big(\overleftarrow{\delta}_{Ad\,u,-1}\big)}. \end{equation} We can explicitly write the generators of $\hat{\mathcal{P}}_+^\uparrow$ in terms of in (or out) fields. Thus since $P_\mu$ is time-independent, at least formally, we have, on letting $x_0\longrightarrow -\infty$, \begin{equation} \chi_\theta^{g,m,in}=\chi_0^{g,m,in}e^{\frac{1}{2}\overleftarrow{\partial}\wedge P\big(\overleftarrow{\delta}_{Ad\,u,-1}\big)}. \end{equation} The Poincar\'e generators (of the fully interacting theory) have the same expansions as in the free-field case, but in terms of $\chi_0^{in}$.
Therefore we can calculate how $\chi_\theta^{g,m,in}$ transforms, and that is enough to find the coproduct on $\mathds{C}\hat{\mathcal{P}}_+^\uparrow$. Now, acting by $U(g)$ on $\chi_\theta^{g,m,in}$ and transforming the untwisted field $\chi_0^{g,m,in}$ and $P_\mu$ in the standard way, we have, for $g$ a Lorentz transformation, \begin{align} U(g)\chi_\theta^{g,m,in}(x)&=U(g)\chi_0^{g,m,in}(x)e^{\frac{1}{2}\overleftarrow{\partial}\wedge P\big(\overleftarrow{\delta}_{Ad\,u,-1}\big)}\\ &=\chi_0^{g,m,in}(gx)U(g)e^{\frac{1}{2}\overleftarrow{\partial}\wedge P\big(\overleftarrow{\delta}_{Ad\,u,-1}\big)}\\ &=\chi_0^{g,m,in}(gx)e^{\frac{1}{2}(\overleftarrow{g\partial})\wedge P\big(\overleftarrow{\delta}_{Ad\,u,-1}\big)}e^{-\frac{1}{2}(\overleftarrow{g\partial})\wedge P\big(\overleftarrow{\delta}_{Ad\,u,-1}\big)}U(g)e^{\frac{1}{2}\overleftarrow{\partial}\wedge P\big(\overleftarrow{\delta}_{Ad\,u,-1}\big)}\\ &=\chi_\theta^{g,m,in}(gx)e^{-\frac{1}{2}(\overleftarrow{g\partial})\wedge P\big(\overleftarrow{\delta}_{Ad\,u,-1}\big)}U(g)e^{\frac{1}{2}\overleftarrow{\partial}\wedge P\big(\overleftarrow{\delta}_{Ad\,u,-1}\big)}\label{last2} \end{align} Now if we recall that on any field $\phi$, the representation is \begin{equation} \rho(g^{-1})\phi(x)=\phi(gx),\ \ \ \rho(P_\mu)\phi(x)=i\partial_\mu\phi(x), \end{equation} then (\ref{last2}) is exactly the same as (\ref{coprodrep}) with $\Delta_\theta=\overline{\mathfrak{F}_{\theta}}^{-1}(g\otimes g)\overline{\mathfrak{F}_{\theta}}$. (Note that $S$ is an anti-homomorphism.) \section{On the Lack of Coassociativity of the Coproduct $\overline{\Delta_\theta}$}\label{lack} The coproduct $\overline{\Delta_\theta}$ is not coassociative.
We can see this by evaluating $(id\otimes \overline{\Delta_\theta})\overline{\Delta_\theta}(g)$ and $(\overline{\Delta_\theta}\otimes id )\overline{\Delta_\theta}(g)$ on vectors $e_p\otimes e_q\otimes e_r \in V_{\textrm{Gauge}}\otimes V_{\textrm{Matter}}\otimes V_{\textrm{Matter}}$, where $V_{\textrm{Gauge}}$ and $V_{\textrm{Matter}}$ denote the vector spaces with $u=1$ and $u=-1$ and $e_k\ \ (k=p,q,r)$ denote plane wave vectors: $e_k(x)=e^{ik\cdot x}$. Hence $P_\mu e_k=k_\mu e_k$. Consider $\overline{\Delta_\theta}(g)$: \begin{align} \overline{\Delta_\theta}(g)=&e^{\frac{i}{2}P_\mu\theta^{\mu\nu}\otimes P_\nu \big(\delta_{u,-1}\otimes\mathds{1}\big)}(g\otimes g)\nonumber\\ &e^{-\frac{i}{2}P_\mu\theta^{\mu\nu}\otimes P_\nu\big(\delta_{u,-1}\otimes\mathds{1}\big)}\\ =&(g\otimes g)e^{\frac{i}{2}(\Lambda(g)P)_\mu\theta^{\mu\nu}\otimes (\Lambda(g)P)_\nu\big(\delta_{u,-1}\otimes\mathds{1}\big)}\nonumber\\ &e^{-\frac{i}{2}P_\mu\theta^{\mu\nu}\otimes P_\nu \big(\delta_{u,-1}\otimes\mathds{1}\big)} \end{align} where $\Lambda:g\longrightarrow \Lambda(g)$ is the homomorphism from $SL(2,\mathbb{C})$ to $\mathcal{L}_+^\uparrow$. Now apply $id\otimes \overline{\Delta_{\theta}}$ on the above vectors and collect the exponentials with no $\Lambda(g) P$. They come from the last term: \begin{align} (id\otimes \overline{\Delta_{\theta}})&e^{-\frac{i}{2}P_\mu\theta^{\mu\nu}\otimes P_\nu \big(\delta_{u,-1}\otimes\mathds{1}\big)}\nonumber\\ =&exp\{-\frac{i}{2}P_\mu\otimes(\mathds{1}\otimes\theta^{\mu\nu} P_\nu+\theta^{\mu\nu}P_\nu\otimes \mathds{1})\times\nonumber\\ &\times (\delta_{u,-1}\otimes\mathds{1}\otimes\mathds{1})\}\quad. \end{align} Applying this to $e_p\otimes e_q\otimes e_r \in V_{\textrm{Gauge}}\otimes V_{\textrm{Matter}}\otimes V_{\textrm{Matter}}$, \begin{equation} \textrm{left-hand side acting on } e_p\otimes e_q\otimes e_r=e_p\otimes e_q\otimes e_r\quad.
\end{equation} Also \begin{align} (\overline{\Delta_\theta}\otimes id)&e^{-\frac{i}{2}P_\mu\theta^{\mu\nu}\otimes P_\nu \big(\delta_{u,-1}\otimes\mathds{1}\big)}e_p\otimes e_q\otimes e_r\nonumber\\ =&exp\{-\frac{i}{2}(P_\mu\otimes\mathds{1}+\mathds{1}\otimes P_\mu)\otimes \theta^{\mu\nu} P_\nu\times\nonumber\\ &\times (\delta_{(u\otimes u,-1)}\otimes\mathds{1})\}e_p\otimes e_q\otimes e_r\\ =&e^{-\frac{i}{2}(p+q)_\mu\theta^{\mu\nu}r_\nu}e_p\otimes e_q\otimes e_r \end{align} so that \begin{equation}\label{Bab1} (id\otimes\overline{\Delta_\theta})\overline{\Delta_\theta}\neq(\overline{\Delta_\theta}\otimes id)\overline{\Delta_\theta}. \end{equation} \section{The Algebra of Functions} Let us denote the algebra of functions on spacetime by $\mathcal{B}_\theta(\mathbb{R}^N)$. It has two components, with gradings $u=+1$ and $-1$: \begin{equation} \mathcal{B}_\theta(\mathbb{R}^N)=\mathcal{B}_\theta^{+1}(\mathbb{R}^N)\oplus\mathcal{B}_\theta^{-1}(\mathbb{R}^N). \end{equation} The $\ast$-product on functions $\alpha,\beta \in \mathcal{B}_\theta(\mathbb{R}^N)$ is \begin{equation} \alpha\ast\beta=m_0[\overline{F_\theta}\ \alpha\otimes\beta], \end{equation} where $m_0$ is the point-wise multiplication map and \begin{equation} \overline{F_\theta}=e^{\frac{i}{2}\partial_\mu\otimes \theta^{\mu\nu}\partial_\nu\{\delta_{u,-1}\otimes\mathds{1}\}}\quad, \end{equation} as follows in the standard manner from the coproduct. It is easy to check that this product is not associative; the calculation is similar to the one leading to (\ref{Bab1}). Thus \begin{equation} e_p\ast(e_q\ast e_r)\neq (e_p\ast e_q)\ast e_r \end{equation} for \begin{equation} e_p\in \mathcal{B}_\theta^{+1}(\mathbb{R}^N),\ \ e_{q,r}\in \mathcal{B}_\theta^{-1}(\mathbb{R}^N). \end{equation} The loss of associativity also follows from general considerations and the non-coassociativity of the coproduct \cite{Mack:1991sr, Mack:1991tg, Mack:1992ez}.
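This graded non-associativity can also be seen numerically. In the sketch below (an illustration with arbitrary momenta and a hypothetical $\theta^{01}$; not code from the paper), plane waves carry a momentum and a grading $u$, and the Moyal phase is switched on only when the left factor is odd, mimicking the projector structure of $\overline{F_\theta}$. The two bracketings of a gauge $\otimes$ matter $\otimes$ matter product then differ by exactly the kind of phase found in the calculation above.

```python
import numpy as np

theta = 0.5  # hypothetical value of theta^{01}

def wedge(p, q):
    return theta * (p[0]*q[1] - p[1]*q[0])

def star(a, b):
    """Graded *-product on plane waves (momentum, grading u): the Moyal
    phase acts only when the LEFT factor is odd (u = -1, matter)."""
    (p, up), (q, uq) = a, b
    delta = 1.0 if up == -1 else 0.0
    phase = np.exp(-0.5j * wedge(p, q) * delta)
    return (p + q, up * uq), phase

def bracket_left(a, b, c):    # (a*b)*c, accumulating the phases
    ab, ph1 = star(a, b)
    abc, ph2 = star(ab, c)
    return abc, ph1 * ph2

def bracket_right(a, b, c):   # a*(b*c)
    bc, ph1 = star(b, c)
    abc, ph2 = star(a, bc)
    return abc, ph1 * ph2

e_p = (np.array([1.0, 0.4]), +1)    # gauge plane wave (even)
e_q = (np.array([0.3, 2.0]), -1)    # matter (odd)
e_r = (np.array([-1.2, 0.8]), -1)   # matter (odd)

(_, uL), phL = bracket_left(e_p, e_q, e_r)
(_, uR), phR = bracket_right(e_p, e_q, e_r)
print(phL, phR)  # the two bracketings differ by the phase exp(-(i/2) p^r)
```

The momenta and gradings of the two bracketings agree; only the accumulated phases differ, which is precisely the failure of associativity.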
\section{The Quasi-Hopf Structure of $\overline{\hat{\mathcal{P}}_+^\uparrow}$} We have a new coproduct on $\overline{\hat{\mathcal{P}}_+^\uparrow}$ which is obtained by twisting with the twist element $\overline{\mathfrak{F}_{\theta}}$. For this twist to generate at least a quasi-Hopf algebra, it must satisfy (\ref{counital}), which it does, owing to the fact that \begin{equation} \epsilon(P_\mu)=0. \end{equation} But we also saw that the resulting coproduct $\overline{\Delta_\theta}$ is not coassociative. This means that the resulting algebra is only a quasi-Hopf algebra. Indeed, for the twist to generate an ordinary Hopf algebra, it must satisfy (\ref{cocycle}), and we can show that the twist element $\overline{\mathfrak{F}_{\theta}}$ does not satisfy it. By simple algebra one can calculate that \begin{align} (\overline{\mathfrak{F}_{\theta}}\otimes\mathds{1})(\Delta_0\otimes id)\overline{\mathfrak{F}_{\theta}}=exp[-\frac{i}{2}\theta^{\mu\nu}\big(&(P_\mu\otimes \mathds{1}\otimes P_\nu+\mathds{1}\otimes P_\mu\otimes P_\nu)(\delta_{u\otimes u,-1}\otimes\mathds{1})\nonumber\\ &+(P_\mu\otimes P_\nu\otimes\mathds{1})( \delta_{u,-1}\otimes\mathds{1}\otimes\mathds{1})\big)].\label{2} \end{align} On the other hand, \begin{align} (\mathds{1}\otimes\overline{\mathfrak{F}_{\theta}})(id\otimes \Delta_0)\overline{\mathfrak{F}_{\theta}}=exp[-\frac{i}{2}\theta^{\mu\nu}\big(&(P_\mu\otimes P_\nu\otimes \mathds{1}+P_\mu\otimes\mathds{1}\otimes P_\nu)(\delta_{u,-1}\otimes\mathds{1}\otimes\mathds{1})\nonumber\\ &+(\mathds{1}\otimes P_\mu\otimes P_\nu)(\mathds{1}\otimes \delta_{u,-1}\otimes\mathds{1})\big)].\label{1} \end{align} It is clear that (\ref{2}) and (\ref{1}) are not equal: the terms involving $\delta_{(u\otimes u),-1}$ in (\ref{2}) are absent in (\ref{1}). Hence \begin{equation} (\mathds{1}\otimes\overline{\mathfrak{F}_{\theta}})(id\otimes \Delta_0)\overline{\mathfrak{F}_{\theta}}\neq(\overline{\mathfrak{F}_{\theta}}\otimes\mathds{1})(\Delta_0\otimes id)\overline{\mathfrak{F}_{\theta}}.
\end{equation} Thus this twist does not give an ordinary Hopf algebra, but we do get a quasi-Hopf algebra: as we explained in section \ref{sec2}, all one needs is the property (\ref{counital}) to get a quasi-Hopf algebra. Indeed, if (\ref{cocycle}) were satisfied, $\overline{\Delta_\theta}$ would be coassociative. Its failure, proved in section \ref{lack}, thus already shows that (\ref{cocycle}) is not fulfilled. \section{Final Remarks} We have shown the existence of a quasi-Hopf symmetry structure in a quantum gauge field theory where only matter fields feel the noncommutativity of spacetime. In such a theory, in the presence of both matter and gauge fields, the coproduct is not coassociative and the $\ast$-product on the (two-sheeted) spacetime is not associative. \section{Acknowledgements} We want to thank Alberto Ibort and the Universidad Carlos III de Madrid for their wonderful hospitality and support. A.P.B. thanks T. R. Govindarajan and the Institute of Mathematical Sciences, Chennai for very friendly hospitality as well. A.P.B. is most grateful to Mario Martone for help in preparing the manuscript. The work of A.P.B. was supported in part by DOE under the grant number DE-FG02-85ER40231 and by the Department of Science and Technology, India. The work of B.Q. was supported by an IRCSET fellowship and by the Perimeter Institute.
\section{Introduction} Our use of lattice field theory to calculate the structure of the nucleon from first principles has two complementary objectives. One goal is to achieve quantitative agreement with experimental observables such as nucleon form factors and parton distributions, to confirm the quantitative precision of our solution of QCD and to establish the credibility to make predictions and guide future experiments. However, merely producing a black box that only reproduces experiment would be unfulfilling. Hence, our second goal is to obtain insight into how QCD works, revealing, for example, the origin of the nucleon spin, physical mechanisms, such as instantons, responsible for essential features of hadron structure, and the dependence on parameters of QCD like $N_C$, $N_f$, $m_q$, and the gauge group. A crucial issue in making quantitative contact with experiment is calculating sufficiently far into the chiral regime that reliable chiral extrapolations to the physical pion mass are possible. Hence, in this work we have utilized the extensive set of configurations with dynamical improved staggered quarks generated by the MILC collaboration~\cite{Bernard:2001av} to explore nucleon structure in the chiral regime. \section{Mixed Action Lattice Calculation} As explained in Ref.~\cite{Renner:2004ck}, we utilize a hybrid action combining domain wall valence fermions with improved staggered sea quarks. Improved staggered sea quarks offer the advantage that, due to the relative economy of the algorithm, lattices with large volumes, small pion masses, and several lattice spacings are publicly available from the MILC collaboration. Although the fourth root of the fermion determinant remains controversial, current evidence suggests it is manageable~\cite{Bernard:2006ee,Sharpe:2006re}.
Renormalization group arguments indicate that the coefficient of the nonlocal term approaches zero in the continuum limit~\cite{Shamir:2006nj}, partially quenched staggered chiral perturbation theory accounts well for the lattice artifacts at finite lattice spacing~\cite{Bernard:2006zw}, and the action has the advantage of being improved to ${\cal{O}}(a^2)$. Domain wall valence quarks offer equally compelling advantages to justify investing resources in calculating hadron observables on staggered configurations that are roughly comparable to the resources required to generate the configurations themselves. Domain wall fermions prevent mixing of quark observables through chiral symmetry, are accurate to ${\cal{O}}(a^2)$, and possess a conserved five-dimensional axial current that facilitates the calculation of renormalization factors. In addition, hybrid action (often referred to as mixed action) chiral perturbation theory results are available for many observables, and by virtue of an exact lattice chiral symmetry, one-loop results have the simple chiral behavior observed in the continuum. The parameters of the configurations used in this work are shown in Table~[\ref{table}], and details can be found in Refs.~\cite{Renner:2004ck,Edwards:2005kw}.
\begin{table}[htb] \begin{center} \begin{minipage}{18pc} \begin{center} \begin{tabular}{|l|l|l|l|l|} \hline $a m_{u/d}^{\mathrm{asqtad}}$ & $L/a$ & $L$ & $m_\pi^{\mathrm{DWF}}$ & \# \\ \hline & & $\mathrm{fm}$ & $\mathrm{MeV}$ & \\ \hline $0.05$ & $20$ & $2.52$ & $761$ & $425$ \\ \hline $0.04$ & $"$ & $"$ & $693$ & $350$ \\ \hline $0.03$ & $"$ & $"$ & $594$ & $564$ \\ \hline $0.02$ & $"$ & $"$ & $498$ & $486$ \\ \hline $0.01$ & $"$ & $"$ & $354$ & $656$ \\ \hline $0.01$ & $28$ & $3.53$ & $353$ & $270$ \\ \hline \end{tabular} \end{center} \caption{\label{table}Lattice parameters used in this work.} \end{minipage} \end{center} \end{table} \section{Moments of Parton Distributions} Parton distributions measure forward matrix elements of the gauge invariant light cone operators \begin{equation} {\cal O}_\Gamma(x) =\int \!\frac{d \lambda}{4 \pi} e^{i \lambda x} \overline q (-\lambda n/2) \Gamma \,{\cal P} e^{-ig \int_{\lambda / 2}^{-\lambda / 2} d \alpha \, n \cdot A(\alpha n)}\! q(\lambda n/2), \label{LCop} \end{equation} where $x$ is a momentum fraction, $n$ is a light cone vector and $\Gamma = \fslash{n}$ or $\Gamma =\fslash{n} \gamma_5$. Using the operator product expansion, the operators in Eq.~[\ref{LCop}] yield towers of symmetrized, traceless local operators that can be evaluated on a Euclidean lattice \begin{equation} {\cal O }_{[\gamma_5]}^{\{\mu_1\ldots\mu_n\}} =\overline q \gamma^{\{\mu_1} [\gamma_5] i\stackrel{\leftrightarrow}{D}{}^{\!\mu _{2}}\!\cdots i\stackrel{\leftrightarrow}{D}{}^{\!\mu _{n}\}} q\,, \label{LocalOp} \end{equation} where $[\gamma_5]$ denotes the possible inclusion of $\gamma_5$, the curly brackets represent symmetrization over the indices $\mu_i$ and subtraction of traces, and $\stackrel{\leftrightarrow}{D}=1/2 (\stackrel{\rightarrow}{D}-\stackrel{\leftarrow}{D})$. 
A related operator for transversity distributions is \begin{equation} {\cal O }_{\sigma}^{\mu \{\mu_1\ldots\mu_n\}} =\overline q \gamma_5 \sigma^{\mu \{\mu_1} i\stackrel{\leftrightarrow}{D}{}^{\!\mu _{2}}\!\cdots i\stackrel{\leftrightarrow}{D}{}^{\!\mu _{n}\}} q\,. \label{LocalOpt} \end{equation} Using the notation and normalization of Ref.~\cite{Dolgov:2002zm}, the forward matrix elements $\langle P,S| {\cal O }^{\{\mu_1\ldots\mu_{n+1}\}} |P,S\rangle$ yield moments of the unpolarized quark distribution: \begin{equation} \langle x^n \rangle_q = \int_0^1\!\!dx\, x^n [q(x) + (-1)^{n+1}\overline{q}(x)], \end{equation} the forward matrix elements $\langle P,S| {\cal O }_{\gamma_5}^{\{\mu_1\ldots\mu_{n+1}\}} |P,S \rangle $ yield moments of the helicity distribution: \begin{equation} \langle x^n \rangle_{\Delta q} = \int_0^1\!\!dx\, x^n [\Delta q(x)+ (-1)^{n}\Delta \overline{q}(x)], \end{equation} and the forward matrix elements $\langle P,S| {\cal O }_{\sigma}^{\mu \{\mu_1\ldots\mu_{n+1}\}} |P,S \rangle $ yield moments of the transversity distribution: \begin{equation} \langle x^n \rangle_{\delta q} = \int_0^1\!\!dx\, x^n [\delta q(x)+ (-1)^{n+1}\delta \overline{q}(x)]. \end{equation} In this work, we calculate only connected diagrams, and hence concentrate as much as possible on isovector quantities. All quark bilinear operators in Eqs.~[\ref{LocalOp}] and [\ref{LocalOpt}] are renormalized as follows~\cite{Bistrovic}. The axial current is renormalized exactly using the conserved five dimensional axial current. By virtue of the suppression of loop integrals by HYP smearing, the ratio of the one-loop perturbative renormalization factor for a general bilinear operator to the renormalization factor for the axial current is within a few percent of unity, suggesting adequate convergence at one-loop level. 
Hence the complete renormalization factor is written as the exact axial current renormalization factor times the ratio of the perturbative renormalization factor for the desired operator divided by the perturbative renormalization factor for the axial current. \subsection{Chiral Perturbation Theory} Ideally, we would like to perform high statistics calculations at pion masses below 350 MeV and extrapolate them in pion mass and volume using a chiral perturbation theory expansion of sufficiently high order to provide a quantitatively controlled approximation. In practice, our most convincing chiral extrapolation has been for $g_A$ using the finite volume results including $\Delta$ intermediate states of Ref.~\cite{Beane:2004rf}, where a fit involving 6 low energy parameters described the lattice data well up to pion masses of order 700 MeV and agreed with experiment with 6.8\% errors~\cite{Edwards:2005ym}. Similar extrapolations of $g_A$ have been performed by other groups~\cite{Khan:2006de}. This success for $g_A$ is particularly relevant to the subsequent discussion of nucleon spin, because it involves the same operators $\langle 1 \rangle_q $ as $\Delta \Sigma$. We note that because the nucleon and $\Delta$ should be included together at large $N_c$~\cite{Dashen:1993jt} and indeed show large cancellations in the axial charge, we prefer to include the $\Delta$ as an explicit degree of freedom in the analysis. An unresolved puzzle in calculating moments of structure functions is the relatively flat behavior of the momentum fraction $\langle x \rangle$ at a constant value substantially higher than experiment~\cite{Detmold:2001jb,Orginos:2005uy}. Hence, it is particularly interesting to ask whether a chiral perturbation theory fit determined without knowledge of the experimental result is in fact statistically consistent with experiment.
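To make the objects under discussion concrete, the short sketch below evaluates the moments $\langle x^n\rangle$ for toy valence distributions $q_v(x)=N\,x^{\alpha}(1-x)^{\beta}$, where the integrals reduce to Euler Beta functions. The exponents and normalizations are purely illustrative (chosen only to give a momentum fraction of roughly the right size), not a fit to data or to the lattice results.

```python
from math import gamma

def beta_fn(a, b):
    # Euler Beta function: B(a, b) = integral_0^1 x^{a-1} (1-x)^{b-1} dx
    return gamma(a) * gamma(b) / gamma(a + b)

# toy valence distributions q_v(x) = N x^alpha (1-x)^beta, normalized to the
# valence-quark numbers (2 for u, 1 for d); exponents are illustrative only
alpha, beta_u, beta_d = -0.5, 3.0, 4.0
N_u = 2.0 / beta_fn(alpha + 1, beta_u + 1)
N_d = 1.0 / beta_fn(alpha + 1, beta_d + 1)

def moment(n, N, a, b):
    # <x^n> = N * integral_0^1 x^{n+a} (1-x)^b dx = N * B(n+a+1, b+1)
    return N * beta_fn(n + a + 1, b + 1)

x_umd = moment(1, N_u, alpha, beta_u) - moment(1, N_d, alpha, beta_d)
print(x_umd)  # 2/9 - 1/11 = 13/99, approximately 0.131 for these toy exponents
```

The resulting toy $\langle x\rangle_{u-d}$ comes out in the right ballpark; the puzzle noted above is that the lattice values sit noticeably higher than the experimental momentum fraction.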
Since there is presently insufficient data to perform a full analysis including the $\Delta$, here we present a simple self-consistent improved one-loop analysis using only nucleon degrees of freedom that appears to work very well in our regime. The details will be presented in a future publication~\cite{Renner}, but the basic idea is as follows. We begin with the one loop expression at scale $\mu$~\cite{Arndt:2001ye,Chen:2001eg}. \begin{equation} \langle x^n \rangle_{u-d} = a_n \left( 1 - \frac{(3 g^2_{A,0} + 1)}{(4\pi f_{\pi,\scriptscriptstyle{0}})^2} m_\pi^2 \ln \left( \frac{m_\pi^2}{\mu^2} \right) \right) + b^\prime_n(\mu) m_\pi^2 \end{equation} in which we explicitly note that $g_{A,0}$ and $f_{\pi,\scriptscriptstyle{0}}$ are $g_A$ and $f_\pi$ in the chiral limit. We are free to choose the scale $\mu$ to be $f_\pi$. Additionally we replace $g_{A,0}$ and $f_{\pi,\scriptscriptstyle{0}}$ with their values at the given pion mass $g_{A,m_\pi}$ and $f_{\pi,m_\pi}$, so that the result may be rewritten as \begin{equation} \langle x^n \rangle_{u-d} = a_n \left( 1 - \frac{(3 g_{A,m_\pi}^2 + 1)}{(4\pi)^2} \frac{m_\pi^2}{f_{\pi,m_\pi}^2} \ln \left( \frac{m_\pi^2}{f_{\pi,m_\pi}^2} \right) \right) + b_n \frac{m_\pi^2}{f_{\pi,m_\pi}^2}. 
\end{equation} \begin{figure}[tb] \begin{minipage}{17.75pc} \includegraphics[width=17pc,angle=270,scale=0.8]{lpol_0_umd} \caption{\label{lpol_0_umd}Zeroth moment of helicity distribution.} \end{minipage} \hspace{0pc} \begin{minipage}{17.75pc} \includegraphics[width=17pc,angle=270,scale=0.8]{tpol_0_umd} \caption{\label{tpol_0_umd}Zeroth moment of transversity distribution.} \end{minipage} \end{figure} \begin{figure}[tb] \begin{minipage}{17.75pc} \includegraphics[width=17pc,angle=270,scale=0.8]{upol_1_b_umd} \caption{\label{upol_1_b_umd}First moment of unpolarized distribution.} \end{minipage} \hspace{0pc} \begin{minipage}{17.75pc} \includegraphics[width=17pc,angle=270,scale=0.8]{lpol_1_b_umd} \caption{\label{lpol_1_b_umd}First moment of helicity distribution.} \end{minipage} \end{figure} If we view this as an expansion in the ratio $r = \frac{m_\pi^2}{(4\pi)^2f^2_{\pi, m_\pi}}$, one can show that shifting to an expansion around $g_A$ and $f_\pi$ defined at another mass only introduces changes of ${\cal{O}}(r^2)$. Hence, to leading order, we may write an expression in which we use the values $g_{A,lat}$, $f_{\pi,lat}$, and $m_{\pi,lat}$ calculated on the lattice at specific values of the quark mass. 
Then, the expressions for the moments of the unpolarized, helicity, and transversity distributions are the following: \begin{eqnarray} \langle x^n \rangle_{u-d} & = & a_n \left( 1 - \frac{(3 g_{A,\mathrm{lat}}^2 + 1)}{(4\pi)^2} \frac{m_{\pi,\mathrm{lat}}^2}{f_{\pi,\mathrm{lat}}^2} \ln \left( \frac{m_{\pi,\mathrm{lat}}^2}{f_{\pi,\mathrm{lat}}^2} \right) \right) + b_n \frac{m_{\pi,\mathrm{lat}}^2}{f_{\pi,\mathrm{lat}}^2} \\ \langle x^n \rangle_{\Delta u-\Delta d} & = & \Delta a_n \left( 1 - \frac{(2 g_{A,\mathrm{lat}}^2 + 1)}{(4\pi)^2} \frac{m_{\pi,\mathrm{lat}}^2}{f_{\pi,\mathrm{lat}}^2} \ln \left( \frac{m_{\pi,\mathrm{lat}}^2}{f_{\pi,\mathrm{lat}}^2} \right) \right) + \Delta b_n \frac{m_{\pi,\mathrm{lat}}^2}{f_{\pi,\mathrm{lat}}^2} \\ \langle x^n \rangle_{\delta u-\delta d} & = & \delta a_n \left( 1 - \frac{(4 g_{A,\mathrm{lat}}^2 + 1)}{2(4\pi)^2} \frac{m_{\pi,\mathrm{lat}}^2}{f_{\pi,\mathrm{lat}}^2} \ln \left( \frac{m_{\pi,\mathrm{lat}}^2}{f_{\pi,\mathrm{lat}}^2} \right) \right) + \delta b_n \frac{m_{\pi,\mathrm{lat}}^2}{f_{\pi,\mathrm{lat}}^2}\,. \end{eqnarray} These results allow a least-squares two-parameter fit to the lattice data for moments and provide an extrapolation to the physical pion mass with a corresponding error band. Note that the series is substantially rearranged, because the calculated values of $g_A$, $f_\pi$, and $m_\pi$ are used at each value of the bare quark mass. Although we cannot prove that this self-consistent improved one-loop result should be accurate throughout the range of our data, to the extent to which it is successful, we believe its success arises from this self-consistent rearrangement. Additionally, the use of physical rather than chiral limit values for $f_\pi$ was first tried in~\cite{Beane:2005rj} and has since been studied in chiral perturbation theory~\cite{Chen:2005ab,O'Connell:2006sh} and applied to a variety of lattice calculations~\cite{Beane:2006gj,Beane:2006kx,Beane:2006fk,Beane:2006pt,Beane:2006mx}.
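Because the expressions above are linear in the fit parameters, the two-parameter fit reduces to an ordinary least-squares solve. The sketch below exercises that machinery for $\langle x \rangle_{u-d}$; the "lattice" points are synthetic placeholders generated from the model itself with assumed parameters $a = 0.17$, $b = 0.008$, not our actual data:

```python
import numpy as np

# Self-consistent one-loop fit of <x>_{u-d} (sketch with SYNTHETIC data).
# Model: y = a * (1 - (3 g^2 + 1)/(4 pi)^2 * R * ln R) + b * R,
# where R = m_pi^2 / f_pi^2 and g = g_A, all evaluated at each quark mass.

def design_row(g_a, r):
    chiral_log = (3.0 * g_a**2 + 1.0) / (4.0 * np.pi) ** 2 * r * np.log(r)
    return [1.0 - chiral_log, r]

g_lat = np.array([1.20, 1.18, 1.16, 1.15])   # lattice g_A (placeholders)
r_lat = np.array([6.0, 7.5, 9.0, 10.5])      # m_pi^2/f_pi^2 (placeholders)
a_mat = np.array([design_row(g, r) for g, r in zip(g_lat, r_lat)])

a_true, b_true = 0.17, 0.008                 # assumed synthetic parameters
y_lat = a_mat @ np.array([a_true, b_true])   # synthetic "measured" moments

# Linear in (a_n, b_n), so ordinary least squares suffices.
(a_fit, b_fit), *_ = np.linalg.lstsq(a_mat, y_lat, rcond=None)

# Extrapolate to the physical point, m_pi ~ 139.6 MeV, f_pi ~ 92.4 MeV.
r_phys = (139.6 / 92.4) ** 2
y_phys = float(np.dot(design_row(1.2695, r_phys), [a_fit, b_fit]))
print(a_fit, b_fit, y_phys)
```

With real data, one would propagate jackknife errors through the same linear solve to obtain the error bands shown in the figures.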
\subsection{Lattice Results} Here we show the results of this one-loop analysis. Figure~[\ref{lpol_0_umd}] shows the result for $g_A$, which is nearly as good as the complete analysis of Ref.~\cite{Edwards:2005ym}, and yields a comparable extrapolation and error bar. Reassured by this result, we show analogous results for $\langle 1 \rangle_{\delta u - \delta d}$, $\langle x \rangle_{u - d}$, $\langle x \rangle_{\Delta u - \Delta d}$, $\langle x \rangle_{\delta u - \delta d}$, and $\langle x^2 \rangle_{\delta u - \delta d}$ in Figs.~[\ref{upol_1_b_umd}]-[\ref{lpol_2_umd}]. \begin{figure}[htb] \begin{minipage}{17.75pc} \includegraphics[width=17pc,angle=270,scale=0.8]{tpol_1_umd} \caption{\label{tpol_1_umd}First moment of transversity distribution.} \end{minipage} \hspace{0pc} \begin{minipage}{17.75pc} \includegraphics[width=17pc,angle=270,scale=0.8]{lpol_2_umd} \caption{\label{lpol_2_umd}Second moment of helicity distribution.} \end{minipage} \end{figure} Note that in every case for which there is experimental data, this analysis, which in no way includes the experimental result in the fit, yields an extrapolation consistent with experiment. The results are collected together in Fig.~[\ref{summary_mod}], where, because experimental results are not available for all cases, we have normalized all results to the corresponding lattice result. \begin{figure}[htb] \begin{center} \includegraphics[width=24pc]{summary_mod}\vspace{-3pc} \caption{\label{summary_mod}The six moments considered in this work.
Lattice results are shown in blue and experimental measurements in red, and each is normalized to the corresponding lattice result.} \end{center} \end{figure} \section{Form Factors} Form factors are interesting physically because at low momentum transfer they characterize the spatial size of charge and current distributions and at high momentum transfer they measure the ability of the nucleon to absorb a large momentum and distribute it to all the constituents such that the system remains in its ground state. Although for a relativistic system, the slope of the form factor is not precisely related to the rms radius, we will adhere to the common usage and refer to the slope as the rms radius. (We note in passing that the slope of $F_1$ is in fact related to the transverse rms radius in the infinite momentum frame.) Our primary focus here is on the qualitative approach to experiment as the quark mass is decreased, and to avoid uncalculated contributions of disconnected diagrams, we consider isovector form factors. The nucleon vector current form factors $F_1$ and $F_2$ are defined by \begin{equation} \langle p | \overline{\psi} \gamma^\mu \psi | p' \rangle = \overline{u} (p) [F_1(q^2)\gamma^\mu + F_2(q^2)\frac{i \sigma^{\mu\nu} q_\nu}{2 m} ] u(p')\,. \end{equation} Figure~[\ref{F1}] shows the lattice data and dipole fits for $F_1$ at five pion masses, and one observes that the lattice results systematically approach the experimental curve as the pion mass decreases to 359~MeV. One can quantitatively observe how the rms radius $\langle r^2 \rangle^{n-p}$ defined from the slope approaches the experimental value by fitting it with the simple chiral extrapolation formula~\cite{Dunne:2001ip} \begin{equation} \langle r^2 \rangle^{u-d} = a_0 -\frac{1+5g_A^2}{(4 \pi f_\pi)^2}\log\left(\frac{m^2_\pi}{m^2_\pi + \Lambda^2}\right), \label{rms-extrap} \end{equation} where $\Lambda$ is a phenomenological cutoff or equivalently, coefficient of an $m_\pi^2$ term in the expansion. 
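As a quick numerical illustration of Eq.~[\ref{rms-extrap}], one can tabulate the predicted isovector radius as the pion mass is lowered. The values of $a_0$ and $\Lambda$ below are invented placeholders, not our fitted parameters:

```python
import math

HBARC = 197.327          # MeV fm, to convert MeV^-2 to fm^2
GA, FPI = 1.2695, 92.4   # physical axial charge and pion decay constant (MeV)

def r2_isovector(m_pi, a0_fm2, lam):
    """<r^2>^{u-d} in fm^2 from the one-loop formula above; a0_fm2 and the
    cutoff lam (MeV) are illustrative placeholders, not fitted values."""
    loop = -(1.0 + 5.0 * GA**2) / (4.0 * math.pi * FPI) ** 2 \
           * math.log(m_pi**2 / (m_pi**2 + lam**2))
    return a0_fm2 + loop * HBARC**2

for m_pi in (760.0, 500.0, 350.0, 139.6):
    print(m_pi, r2_isovector(m_pi, a0_fm2=0.3, lam=500.0))
# The log(m_pi^2) divergence makes <r^2> grow rapidly toward the chiral limit.
```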
Note that, in contrast to most chiral extrapolations which contain finite terms of the form $m_\pi^2\log m_\pi^2$, the isovector radius diverges like $\log m_\pi^2$, rendering the variation of the radius quite substantial near the physical pion mass. Figure~[\ref{rms}] shows the results of fitting the lattice data with Eq.~[\ref{rms-extrap}], and one notes that without having been constrained to do so, the chiral extrapolation is consistent with experiment. \begin{figure}[tb] \begin{minipage}{17.5pc} \includegraphics[width=17pc,angle=0,scale=1.0]{F1_u-d_vs_expt_5} \caption{\label{F1}F$_1$ isovector form factor at five masses compared with experiment~\cite{Kelly:2004hm}.} \end{minipage} \hspace{0.5pc} \begin{minipage}{17.5pc} \includegraphics[width=17pc,angle=0,scale=1]{charge-radius_5} \caption{\label{rms}Chiral extrapolation of isovector form factor slope.} \end{minipage} \end{figure} \begin{figure}[tb] \begin{minipage}{17.5pc} \includegraphics[width=17pc,angle=0,scale=1]{Ratio_F2_F1new} \caption{\label{F2_F1}Isovector form factor ratio $F_2/F_1$ at three masses compared with experiment~\cite{Kelly:2004hm}.} \end{minipage} \hspace{0.5pc} \begin{minipage}{17.5pc} \raisebox{0cm}{\includegraphics[width=17pc,angle=0,scale=1.0]{ratio_GA_F1_u-d_vs_expt}} \caption{\label{GA_F1}Isovector form factor ratio $G_A/F_1$ at four masses compared with experiment.} \end{minipage} \end{figure} \begin{figure}[tb] \begin{minipage}{17.5pc} \includegraphics[width=17pc,angle=0,scale=0.95]{ratio_GP_GA_u-d_vs_expt_zoom} \caption{\label{GP_GA} Isovector form factor ratio $G_P/G_A$ at six masses compared with experiment.} \end{minipage} \hspace{0.5pc} \begin{minipage}{17.5pc} \includegraphics[width=17pc,angle=-90,scale=0.79]{gP_o_gA_010}\vspace{0.35pc} \caption{\label{GP_GA_pole} Isovector pion pole contribution, normalized as \hbox{ $(q^2+m_\pi^2)G_P / 4 m^2 G_A$}, at $m_\pi = 350$ MeV.}
\end{minipage} \end{figure} The form factor $F_2$ is of particular interest following the observation at JLab~\cite{Gayou:2001qd} that measurement of spin transfer yields a form factor that decreases more slowly with momentum transfer than the traditional Rosenbluth separation, which is now believed to suffer from substantial contamination from two-photon exchange contributions. Figure~[\ref{F2_F1}] shows that the lattice results indeed approach the experimental ratio $F_2/F_1$ from spin transfer as the pion mass decreases. The nucleon axial vector current form factors $G_A$ and $G_P$ are defined by \begin{equation} \langle p | \overline{\psi} \gamma^\mu \gamma_5 \psi | p' \rangle = \overline{u}(p)[ G_A(q^2)\gamma^\mu \gamma_5 + \frac{q^\mu}{2m} \gamma_5 G_P(q^2) + \sigma^{\mu\nu}\gamma_5 q_\nu G_M(q^2) ] u(p'). \end{equation} Figure~[\ref{GA_F1}] shows lattice calculations of the isovector ratio $G_A/F_1$ compared with experimental form factors extracted from pion photoproduction and neutrino scattering. Again, to within the roughly 10\% discrepancy between the two experiments, the lattice results are qualitatively consistent with experiment. In the soft-pion limit, $G_P$ is dominated by the pion pole: \begin{equation} G_P(q^2) \sim \frac{4 m^2 G_A(q^2)}{q^2 +m^2_\pi}. \label{pole} \end{equation} Figure~[\ref{GP_GA}] shows lattice results for the ratio $G_P/G_A$ compared with experiment. To demonstrate the degree to which Eq.~[\ref{pole}] is satisfied, Fig.~[\ref{GP_GA_pole}] shows the ratio $(q^2+m_\pi^2)G_P / 4 m^2 G_A$ for lattice data at pion mass 350 MeV, which is consistent with unity. \section{Generalized Parton Distributions} \begin{figure}[tb] \begin{center} \includegraphics[width=17pc,angle=90,scale=1.8]{ABC_0606}\vspace{1.8pc} \caption{\label{ABC} Generalized form factors $A_{20}, B_{20}$ and $C_{20}$ for $u-d$ and $u+d$ at four masses.}
\end{center} \end{figure} Off-diagonal matrix elements of the tower of operators in Eq.~[\ref{LocalOp}] yield the generalized form factors \begin{eqnarray} \langle P' | {\cal O}^{\mu_1} | P \rangle &=&\dlangle \gamma^{\mu_1 }\rangle\!\rangle A_{10}(t) + \frac{\imag}{2 m} \dlangle \sigma^{\mu_1 \alpha} \rangle\!\rangle \Delta_{\alpha} B_{10}(t)\,, \nonumber \\ [.5cm] \langle P' | {\cal O}^{\lbrace \mu_1 \mu_2\rbrace} | P \rangle &=& \overline{P}{}^{\lbrace\mu_1}\dlangle \gamma^{\mu_2\rbrace}\rangle\!\rangle A_{20}(t) + \frac{\imag}{2 m} \overline{P}{}^{\lbrace\mu_1} \dlangle \sigma^{\mu_2\rbrace\alpha}\rangle\!\rangle \Delta_{\alpha} B_{20}(t) +\frac{1}{m}\Delta^{\{ \mu_1} \Delta^{ \mu_2 \} } C_{20}(t)\,, \nonumber \\[.5cm] \langle P' | {\cal O}^{\lbrace\mu_1 \mu_2 \mu_3\rbrace} | P \rangle &=& \overline{P}{}^{\lbrace\mu_1}\overline{P}{}^{\mu_2} \dlangle \gamma^{\mu_3\rbrace} \rangle\!\rangle A_{30}(t) + \frac{\imag}{2 m} \overline{P}{}^{\lbrace \mu_1}\overline{P}{}^{\mu_2} \dlangle \sigma^{\mu_3\rbrace\alpha} \rangle\!\rangle \Delta_{\alpha} B_{30}(t) \nonumber \\ &+& \Delta^{\lbrace \mu_1}\Delta^{\mu_2} \dlangle \gamma^{\mu_3\rbrace}\rangle\!\rangle A_{32}(t) + \frac{\imag}{2 m} \Delta^{\lbrace\mu_1}\Delta^{\mu_2} \dlangle \sigma^{\mu_3\rbrace\alpha}\rangle\!\rangle \Delta_{\alpha} B_{32}(t), \label{Para1} \end{eqnarray} where we use the short-hand notation $\dlangle \Gamma \rangle\!\rangle=\overline U(P',\Lambda ')\Gamma U(P,\Lambda)$ for matrix elements of Dirac spinors $U$ and where $\Delta=P'-P$ and $t=\Delta^2$. Of particular interest in this work is the relationship between the generalized form factors and the origin of the nucleon spin. The contribution of the spin of the up and down quarks to the total spin of the nucleon is given by the zeroth moment of the spin-dependent structure function $\langle 1\rangle_{\Delta q}$ as \begin{equation} \frac{1}{2} \Delta \Sigma = \frac{1}{2}\langle 1\rangle_{\Delta u + \Delta d}\,.
\label{spin} \end{equation} Note both that our previous calculation of $g_A$ confirms our ability to calculate the connected contributions to $\langle 1\rangle_{\Delta q}$ accurately on the lattice and that $\Delta \Sigma$ requires the calculation of disconnected as well as connected contributions. Throughout this section, we will only discuss the results of connected diagrams, so all results will eventually need to be corrected for the effect of disconnected diagrams as well. The total contribution to the nucleon spin from both the spin and orbital angular momentum of quarks is given by the Ji sum rule~\cite{Ji:1996ek}: \begin{equation} J_q = \frac{1}{2}\left(A_{20}^{u+d}(0) + B_{20}^{u+d}(0)\right), \end{equation} so that the contribution of the orbital angular momentum is given by $L_q = J_q - \frac{1}{2}\Delta \Sigma$. Earlier calculations in the heavy quark regime showed that for 700 MeV pions, roughly 68\% of the spin of the nucleon arises from the spin of quarks, 0\% arises from orbital angular momentum, and hence the remaining 32\% must come from gluons~\cite{Hagler:2003jd,Gockeler:2003jf,Mathur:1999uf}. Figure~[\ref{ABC}] shows the recent lattice data for $A_{20}$, $B_{20}$ and $C_{20}$ for lighter pion masses, and the results for $\Delta \Sigma$ and $L_q $ are shown in Fig.~[\ref{OAM1}]. The full decomposition showing the contribution of spin and orbital angular momentum from up and down quarks is shown in Fig.~[\ref{OAM2}]. Here, one observes that the angular momentum contributions of both up and down quarks are separately substantial, and it is only the sum that is extremely small. In previous work in the heavy quark regime~\cite{Hagler:2003is}, we have emphasized the dramatic differences in the slopes of $A_{10}$, $A_{20}$, and $A_{30}$ and the fact that this reflects a sharp decrease in the transverse size of the nucleon as $x$ approaches 1.
To show that this behavior also arises in the chiral regime, Fig.~[\ref{A30_A10}] shows that the slope of the form factor ratio $ A_{30}/A_{10}$ differs substantially from unity as the pion mass is decreased and also approaches the result given by a phenomenological parameterization of the generalized parton distributions~\cite{Diehl:2004cx}. \begin{figure}[tb] \begin{minipage}{17.5pc} \includegraphics[width=17pc,angle=0,scale=1]{OAM1_mod} \caption{\label{OAM1}Nucleon spin decomposition. Squares denote $\Delta \Sigma^{u+d} /2$, the star indicates the experimental quark spin contribution, and diamonds denote $L^{u+d}$.} \end{minipage} \hspace{0.5pc} \begin{minipage}{17.5pc} \includegraphics[width=17pc,angle=0,scale=1]{OAM2_mod} \caption{\label{OAM2} Nucleon spin decomposition by flavor. Squares denote $\Delta \Sigma^u /2$, diamonds denote $\Delta \Sigma^d /2$, triangles denote $L^u$, and circles denote $L^d$.} \end{minipage} \end{figure} \begin{figure}[tb] \begin{center} \includegraphics[width=17pc,angle=0,scale=1.8]{A30_A10}\vspace{2pc} \caption{\label{A30_A10} Comparison of the ratio $ A_{30} / A_{10}$ for $u-d$ and $u+d$ at four masses with a phenomenological fit to generalized parton distributions.} \end{center} \end{figure} \section{Conclusions} In summary, the hybrid combination of valence domain wall quarks on an improved staggered sea has enabled us to begin to enter the era of quantitative solution of full lattice QCD in the chiral regime. The axial charge, $g_A$, represents a successful, ``gold-plated'' test, which bodes well for the prospects of quantitative control of a range of important nucleon observables. The chiral extrapolation of moments of quark distributions using our self-consistently improved one-loop analysis is encouraging, but of course we would still like to directly calculate the turnover in the approach to the chiral regime.
Similarly, the general agreement between lattice and experiment for the nucleon form factors of the vector and axial currents is highly encouraging. Generalized form factors are also being calculated well into the chiral regime, and hold promise for understanding the origin of the nucleon spin and the transverse structure of the nucleon. In the long term, since lattice calculations determine moments of generalized parton distributions and experiments measure convolutions of generalized parton distributions, there is an excellent opportunity for synergy between experiment and theory in jointly determining generalized parton distributions. The final analysis of all the data shown in this work is currently being completed, and full results will be published in the near future. This work also indicates obvious challenges for the future. Clearly the calculations must be extended to lower quark masses and finer lattices and analyzed with partially quenched hybrid chiral perturbation theory, and disconnected diagrams must be calculated. In addition, it is important to develop new techniques to explore form factors at high momentum transfer, gluon observables, and transition form factors for unstable states. \section{Acknowledgments} This work was supported by the DOE Office of Nuclear Physics under contracts DE-FC02-94ER40818, DE-AC05-06OR23177 and DE-AC05-84150, the EU Integrated Infrastructure Initiative Hadron Physics (I3HP) under contract RII3-CT-2004-506078, the DFG under contract FOR 465 (Forschergruppe Gitter-Hadronen-Ph\"anomenologie) and the DFG Emmy-Noether program. Computations were performed on clusters at Jefferson Laboratory and at ORNL using time awarded under the SciDAC initiative. We are indebted to the members of the MILC collaboration for providing the dynamical quark configurations which made our full QCD calculations possible.
\section{Introduction} Filamentous actin (F-actin) is a semiflexible biopolymer that has been the object of intensive research in several fields. As a major constituent of the cytoskeleton, F-actin networks play a key role in the ongoing puzzle of cell mechanics \cite{howard01,bausch06} and cell motility \cite{pantaloni01}. Depending on the presence of binding proteins, F-actin strands at medium concentration can form both chemical (cross-linked) and physical (entangled) networks with different elastic properties \cite{gardel04,wagner06,hinner98}. Besides rheological methods, single polymer visualizations are also feasible, as the strands exceed most synthetic polymers in length. This facilitated the observation of tube-like regions along which DNA or F-actin filaments reptate \cite{perkins94,kas94}. The confinement of polymers to these cylindrical cages confirmed the ``tube model'' postulated earlier by de Gennes \cite{gennes79} and Doi and Edwards \cite{doi86}. This long-standing paradigm has proven a successful concept to reduce the complex structure of entangled networks to a single polymer problem. \begin{figure}[htbp] \center \includegraphics[width=\columnwidth]{./cartoon.eps} \caption{The effect of all surrounding polymers that hinder the test polymer's transverse displacement is described by a hypothetical tube.} \label{fig:cartoon} \end{figure} In such entangled networks, polymers can effortlessly slide past each other but are not allowed to cross. Their interaction is thus mainly of entropic nature, as entanglements mutually restrict the accessible configuration space. Grasping this feature in a single polymer model has led to the famous tube model \cite{gennes79,doi86}. The suppression of transverse undulations of a test polymer by the surrounding polymers (Fig.~\ref{fig:cartoon}) is modelled by a tube. This tube follows the average path of the test polymer and its profile is frequently modelled by a harmonic potential.
The average strength of this potential is determined by the local density of the network. The tube concept has proven a successful tool to derive scaling laws for several network properties \cite{doi86,odijk83}. For example, due to the confinement energy of the filament inside, the tube diameter can be connected to mechanical properties of the network, e.g. the different moduli \cite{hinner98,mackintosh95,isambert96}. However, due to the phenomenological nature of the tube model, most of its benefits have been mainly qualitative. Recently, quantitative predictions of the plateau modulus and the tube diameter of flexible polymer melts were also achieved by a novel approach based on the microscopic foundations and the topological structure of the network \cite{everaers04,tzoumanekas06}. Even though most concepts developed for flexible polymers cannot be carried over to the semiflexible case with its large persistence length, the tube model is perfectly applicable as well. However, while in general scaling laws of the tube diameter \cite{semenov86} or the plateau modulus \cite{isambert96} are well established, quantitative theories are still under debate and have yet to be confirmed by measurements of sufficient accuracy. Again the challenge is to make the successful tube model quantitative by connecting the phenomenological tube and its microscopic origins. In the present work, we contribute to the discussion by supplying an absolute value for the tube diameter from a theory supported by extensive computer simulations. We will proceed as follows: in Section \ref{sec:model-definition} the model under investigation is defined and all relevant length scales are discussed. By analyzing the free energy cost for confining a polymer to a hypothetical tube, the tube diameter is derived as a function of Odijk's deflection length for finite-length polymers.
The appropriate deflection length for a given polymer concentration and persistence length is derived in the following sections. To this end, the polymer is modeled by a sequence of independent rods in Section \ref{sec:indep-rod-model}. Criteria for the correct choice of the independent rod length and a self-consistent determination of the tube diameter are developed, before a final result for the tube diameter is obtained in Section \ref{sec:plugging-it-all}. Extensive numerical simulations supporting these results and providing additional insight are presented in Section \ref{sec:simulations}, followed by our conclusion in Section \ref{sec:conclusion}. \section{Model Definition}\label{sec:model-definition} We consider a monodisperse network of physically entangled polymers with a particular focus on pure solutions of the biopolymer F-actin. The polymer density is given by the number $\nu$ of polymers of length $L$ per unit volume. The polymers are of bending stiffness $\kappa$ corresponding to a persistence length $l_{\rm p}=\kappa/k_{\rm B} T$. A single polymer's configuration ${\bf r}(s)$ is parameterized by the arc length $s$, and the average distance between the polymer chains can be characterized by a mesh size $\xi:=\sqrt{3/(\nu L)}$ \footnote{The mesh size has the unit of length and can be interpreted as an average distance between network constituents. While the factor $\sqrt{1/(\nu L)}$ ensures the correct scaling, the numerical prefactor is a mere definition.}. We will describe the constituent polymers by the worm-like chain model \cite{kratky49,saito67} and exploit the tube model concept \cite{gennes79,doi86} to reduce the description of the network to a single polymer and its neighbors. In the following we will begin our analysis with an investigation of the different length scales involved in the system.
\subsection{Length Scales}\label{sec:length-scales} Typical F-actin solutions are polydisperse with a mean filament length $L \approx 22 \mu$m \cite{kaufmann92}. With a persistence length $l_{\rm p} \approx 17 \mu$m \cite{gittes93,goff02} comparable to its length, it is the textbook example of a semiflexible polymer. At a concentration of $c=0.5$~mg/ml corresponding to $\nu \approx 1~\mu$m$^{-3}$ \cite{schmidt89}, the average mesh size equals $\xi \approx 0.4 \mu$m. We can thus state that the persistence length of a filament is much larger than the distance to its neighbors, $l_{\rm p} \gg \xi$. Since the tube diameter $L_\perp$ is at most of the order of the mesh size, this additionally implies $l_{\rm p} \gg L_\perp$. The polymer will thus not deviate far from the tube center. Consequently, configurations where the polymer folds back onto itself are rendered unlikely. This is a minimal requirement to model the tube by a harmonic potential of strength $\gamma$. The potential has to be seen as a hypothetical tube representing the joint contribution of all surrounding polymers which constrain the transverse undulations of a given polymer (see Fig.~\ref{fig:cartoon}). The energy of a certain polymer contour ${\bf r}(s)$ is the sum of the bending energy of the polymer and its confinement into the harmonic potential and is given in the weakly-bending rod approximation by \begin{equation} \label{eq:hamilton} H(\gamma,\kappa)=\int_0^L ds \left[\frac{\kappa}{2}({\bf r}_\perp^{\prime \prime}(s))^2+\frac{\gamma}{2}{\bf r}^2_\perp(s) \right] \;. \end{equation} Here ${\bf r}(s)=(s,{\bf r}_\perp(s))$ is a parameterization in arc-length $s$ and transverse displacement ${\bf r}_\perp(s)=(y(s),z(s))$ from the tube center. The prime denotes a derivative with respect to $s$. This harmonic approximation to the Hamiltonian of the worm-like chain model is valid as long as $\vert {\bf r}_\perp^{\prime} \vert \ll 1$, i.e.
as long as the polymer contour can be considered to remain single valued in the transverse coordinates. With the thermal average $\langle \cdot \rangle$ the tube diameter $L_\perp$ can now be defined by \begin{equation} L_\perp^2 := \frac{1}{L} \, \big\langle \int_0^L ds \, {\bf r}_\perp^2 (s) \big\rangle \;. \end{equation} So far we have identified two classes of length scales: persistence length and total polymer length, describing the properties of one specific polymer, and mesh size and tube diameter, describing the properties of the network structure. Additionally we introduce the deflection length $L_{\rm d} := (\kappa/\gamma)^{1/4}$ as a third useful length scale. It is interpreted below as the length scale on which interactions between the single polymer and the network occur. More precisely, it is a measure of the average distance between contacts of the polymer with the tube walls. For large confinement strength $\gamma$ the tube is small, making interaction with the encaged polymer more likely and therefore resulting in a small deflection length. On the other hand, for a large polymer rigidity $\kappa$ transverse undulations allowing contacts with the tube walls are energetically unfavorable, and the distance between contacts will increase. For $l_{\rm p} \gg L_\perp$ we expect the deflection length to be distinctly smaller than the polymer length, but also larger than the tube diameter. For quantification we consider the free energy cost $\Delta F(\gamma)$ of confining the polymer to the tube. It can be found from the partition sum that is obtained as a path integral over all polymer configurations: \begin{equation} \label{eq:free_energy} \exp \left[ -\beta \Delta F(\gamma) \right] = \int {\cal D}[{\bf r}_\perp(s)] \exp[-\beta H(\kappa,\gamma)] \end{equation} with $\beta=1/k_{\rm B} T$.
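To make the hierarchy of scales concrete, the sketch below plugs in the F-actin numbers quoted above ($L \approx 22\,\mu$m, $l_{\rm p} \approx 17\,\mu$m, $\nu \approx 1\,\mu$m$^{-3}$). As a rough upper-bound assumption (not a result), the tube diameter is taken to be of order the mesh size; the deflection length then follows by inverting the infinite-length relation $L_\perp^2 = L_{\rm d}^3/(\sqrt{2}\, l_{\rm p})$ derived below:

```python
import math

# Length scales for an F-actin solution at c = 0.5 mg/ml, using the values
# quoted in the text: L ~ 22 um, l_p ~ 17 um, nu ~ 1 um^-3.
L_CONTOUR = 22.0   # filament length [um]
L_P = 17.0         # persistence length [um]
NU = 1.0           # polymer number density [um^-3]

xi = math.sqrt(3.0 / (NU * L_CONTOUR))   # mesh size xi = sqrt(3/(nu L))
print(xi)                                # ~0.37 um, i.e. the quoted ~0.4 um

# Upper-bound assumption (not a result): tube diameter of order the mesh size.
l_perp = xi
# Invert L_perp^2 = L_d^3 / (sqrt(2) l_p) for the deflection length L_d.
l_d = (math.sqrt(2.0) * L_P * l_perp**2) ** (1.0 / 3.0)
print(l_d)     # ~1.5 um: the ordering L_perp << L_d << L indeed holds
```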
In the limit of infinitely long polymers the free energy cost is \cite{burkhardt95} \begin{equation} \label{eq:free_energy_result} \Delta F = \sqrt{2} k_{\rm B} T \, \frac{L}{L_{\rm d}} \;. \end{equation} This result fits into the picture of the deflection length as a measure for the average distance between successive collisions of the polymer with its tube. If the typical distance between two collisions is given by $L_{\rm d}$, the free energy cost results as the sum over all $L/L_{\rm d}$ points of contact, where every collision costs roughly one $k_{\rm B} T$. The free energy now allows one to derive the tube diameter as \begin{equation} L_\perp^2 = \frac{2}{L} \, \frac{\partial \Delta F}{\partial \gamma} = \frac{L_{\rm d}^3}{\sqrt{2}l_{\rm p}} \;. \end{equation} In the limit of infinite polymer length we have thus derived the tube diameter as a function of the deflection length by differentiation of the free energy cost. The above consideration also sets the road map for the remaining work. To calculate the tube diameter for the network, we first need to connect free energy and tube diameter for polymers of finite length and then derive the deflection length for the model under investigation. \subsection{Finite-Length Polymers} For finite-size polymers the path integral in Eq.(\ref{eq:free_energy}) can be evaluated exactly \cite{burkhardt95,kleinert86} and with the dimensionless deflection length $l_{\rm d} := L_{\rm d}/L$ results in \begin{equation} \Delta F= - 2 k_{\rm B} T g(l_{\rm d}) \end{equation} with \begin{equation} \label{eq:definition_g} g(l_{\rm d})=\ln(l_{\rm d}^2)-\frac{1}{2} \ln \left(\sinh^2 \frac{1}{\sqrt{2} l_{\rm d}}-\sin^2 \frac{1}{\sqrt{2} l_{\rm d}} \right) \;.
\end{equation} The limit of small $l_{\rm d}$, which is guaranteed by $L \gg L_{\rm d}$ as stated above, allows the expansion \begin{equation} \label{eq:approx_g} g(l_{\rm d})=-\frac{1}{\sqrt{2}l_{\rm d}}+\ln(l_{\rm d}^2) + \ln 2 +{\cal O} (e^{-1/l_{\rm d}}) \quad \mathrm{for} \quad l_{\rm d} \to 0 \;, \end{equation} where the first term is just the result for polymers with infinite length (\ref{eq:free_energy_result}), and the additive constant is irrelevant for the tube diameter below. Upon again using the relation $L_\perp^2 = (2/L) (\partial \Delta F (l_d)/ \partial \gamma)$ with the inner derivative $\partial l_{\rm d}/ \partial \gamma = - (L^4/4\kappa)l_{\rm d}^5$ the tube diameter becomes \begin{equation} L_\perp^2=\frac{L^3}{2 l_{\rm p}} l_{\rm d}^5 g^\prime(l_{\rm d}) \;. \end{equation} For later convenience we simplify this to $l_\perp^2=h(l_{\rm d})$ by introducing a dimensionless tube width $l_\perp$ and function $h(x)$ as \begin{equation} \label{eq:definition_h} l_\perp^2 := \frac{L_\perp^2 l_{\rm p}}{L^3} \qquad \mathrm{and} \qquad h(x):=\frac{x^5 g^\prime(x)}{2} \;. \end{equation} This relation connects the desired tube diameter to the deflection length and hence to the hypothetical tube potential $\gamma$ at a given bending rigidity. In the following we will further investigate the tube properties and set up a model that allows one to derive the deflection length and thereby the hypothetical harmonic tube potential strength from the polymer concentration and persistence length. \section{Independent Rod Model}\label{sec:indep-rod-model} For simplification and as an anticipation of the computer simulations, consider for the time being a polymer in a two-dimensional (2D) plane. In this case the transverse displacement vector ${\bf r_\perp}$ reduces to a single component. The undulations of the test polymer in 2D are hindered by point-like obstacles as depicted in Fig.~\ref{fig:irm} (top). These obstacles represent the cuts of the surrounding polymers in three dimensions with the chosen fluctuation plane.
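As a quick numerical check of the functions $g$ and $h$ introduced above (a sketch in the dimensionless units of the text), one can compare the exact $g(l_{\rm d})$ with its small-$l_{\rm d}$ expansion; the expansion generates an additive constant $\ln 2$, which is irrelevant for the tube diameter since only $g^\prime$ enters the $\gamma$-derivative:

```python
import math

def g_full(x):
    # g(l_d) = ln(l_d^2) - (1/2) ln( sinh^2(u) - sin^2(u) ), u = 1/(sqrt(2) l_d)
    u = 1.0 / (math.sqrt(2.0) * x)
    return math.log(x**2) - 0.5 * math.log(math.sinh(u) ** 2 - math.sin(u) ** 2)

def g_expand(x):
    # small-l_d expansion, including the additive constant ln 2 that the
    # expansion produces (it drops out of d(Delta F)/d(gamma))
    return -1.0 / (math.sqrt(2.0) * x) + math.log(x**2) + math.log(2.0)

def h(x, dx=1.0e-6):
    # h(x) = x^5 g'(x) / 2, with g' evaluated by central differences
    gp = (g_full(x + dx) - g_full(x - dx)) / (2.0 * dx)
    return 0.5 * x**5 * gp

x = 0.05
print(g_full(x), g_expand(x))   # agree to exponential accuracy for small l_d
print(h(x))                     # dimensionless tube width l_perp^2 = h(l_d)
```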
Given an appropriate number of 2D obstacles, equivalent to the density of surrounding polymers in 3D, the transverse displacement will correspond to one of the two components of the displacement vector ${\bf r_\perp}$, if we assume the fluctuations of these components to be independent. Bearing in mind that the persistence length is large compared to the mesh size, the surrounding polymers in 3D are modelled as rigid rods, and the area density $\rho_{\rm MC}$ of obstacles in 2D corresponding to a polymer concentration $\nu$ in the 3D network is $\rho_{\rm MC}=2 \nu L/\pi$. It is computed in Appendix \ref{sec:density} and will be explicitly needed for the comparison with simulation results. \begin{figure}[htbp] \center \includegraphics[width=\columnwidth]{./irm2.eps} \caption{(top) The fluctuation tube of a semiflexible polymer in a network of constraints is determined by a delicate balance of entropy and bending energy. (middle) Scheme of the decomposition of a semiflexible polymer into rigid rods of length $\bar L$. The flexibility is localized to the joints between independent rods. Given the proper choice of $\bar L$, both models produce the same transverse fluctuation area. (bottom) A small rod length overestimates and a large rod length underestimates the fluctuations.} \label{fig:irm} \end{figure} Recalling the Hamiltonian (\ref{eq:hamilton}), the polymer's free energy has a bending and an entropic contribution. To minimize the free energy it can be favorable to trade in bending energy for a wider tube. Thereby entropy is gained due to a larger available free volume, but the polymer has to pay bending energy for the resulting curvature (see Fig.~\ref{fig:irm} (top)). This competition defines a characteristic length $\bar L$ that has to be of the order of the deflection length $L_{\rm d}$, since this is the length scale characterizing the interaction of the test polymer with its environment.
In the following we will develop an analytical theory based on an independent rod model (IRM) that is inspired by the competition we have just discussed. To this end, we use a simplified model of a semiflexible polymer, in which the flexibility is localized to the joints of a sequence of independent stiff rods of length $\bar L$. After deriving the transverse fluctuations of a single independent rod in an environment of fluctuating neighbors, we apply a self-consistency argument to arrive at the corresponding tube width of the full-length semiflexible polymer. Note that the analysis is carried out for three dimensions; the 2D simplification only serves for illustration and for the simulations later on. To begin with, consider the test polymer to be divided into independent segments of length $\bar L$ that are assumed to be completely rigid rods and are only allowed to undergo transverse fluctuations. As the flexibility in the IRM depends on the number of joints, it is obvious that the choice of $\bar L$ is crucial for the resulting tube diameter. Picturing the decomposition of the test polymer as in Fig.~\ref{fig:irm} (middle), it can be seen that the transverse fluctuations of the independent rods are hindered by the two closest obstacle polymers normal to either side of each segment of length $\bar L$. If $\bar L$ is chosen too large (e.g. $\bar L=L$ in the worst case) the area of transverse fluctuations will be much smaller than for a true semiflexible polymer because flexibility is underestimated (Fig.~\ref{fig:irm} (bottom, right)). Conversely, if $\bar L$ is chosen too small, the normal distance to the nearest obstacle can be quite large (Fig.~\ref{fig:irm} (bottom, left)). This overestimation of flexibility results in a transverse fluctuation area that is large compared to the polymer we are trying to model. Before we further discuss the proper choice of $\bar L$, we will focus on the behavior of a single independent rod in more detail.
The transverse fluctuations of a single stiff rod in the $(y,z)$-plane are constrained by the projections of the surrounding network constituents onto this plane, as depicted in Fig.~\ref{fig:crossection} (left). Since the mesh size is much smaller than the persistence length, the surrounding polymers can be assumed to be straight, and ``dangling ends'' are neglected. The size of the shaded cross section will decrease with increasing density of polymers, i.e. with a decreased mesh size. Thus the tube diameter is of the order of the mesh size and scales as $L_\perp \propto \xi$ for a given 2D plane. Furthermore, an increase of the length $\bar L$ of the rigid rod increases the number of obstacles that are projected onto the plane. As the average distance between surrounding polymers in the direction of the test rod is also given by the mesh size $\xi$, the average number projected onto the plane increases as $\bar L/\xi$. As this reduces the cross section area, we finally arrive at an overall scaling of the tube diameter as $L_\perp \propto \xi^2/\bar L$. \begin{figure}[htbp] \center \includegraphics[width=\columnwidth]{./crosssection.eps} \caption{(left) Projection of constraining polymers onto the plane of transverse fluctuations of a test polymer (black dot). As the mesh size is much smaller than the persistence length, the constraining filaments can be assumed to be straight. The shaded area is the accessible tube area for a specific obstacle configuration. (right) Corresponding setup for a simplified geometry where obstacles can only be aligned with the coordinate axes. } \label{fig:crossection} \end{figure} Before we quantify this scaling result in the next section, let us first have a closer look at the obstacles. In a self-consistent treatment these evidently have to be regarded as semiflexible polymers of the network themselves and therefore undergo fluctuations around an average position as well.
This causes the cross section area to smear out, as the test polymer now has a non-vanishing probability to take on values behind the average obstacle position. In terms of a confinement potential, the cross section is no longer described by an infinite well, but by some continuous potential, which has earlier been assumed to be harmonic with strength $\gamma$ per unit length of a polymer. The obstacle fluctuations will also be modelled as Gaussian and, to distinguish the test polymer's mean square displacement $L_\perp^2$ from that of the obstacles, the latter is denoted as $\sigma^2$. In a self-consistent treatment of the network the average tube width $L_\perp$ of the test polymer is then determined as a function of the obstacle fluctuations $\sigma$, where $\sigma$ is chosen such that $L_\perp=\sigma$. Of course, the value $L_\perp$ of a single obstacle configuration will not only depend on $\sigma$ but also on the obstacle positions in that specific configuration. Consequently, averaging over all obstacle configurations will result in a distribution $P(L_\perp)$, and self-consistency would then also require a distribution $P(\sigma)$. However, if we assume these distributions to be reasonably peaked, we can use their averages as a good approximation. The self-consistency of distributed tube widths is verified by simulations in Section \ref{sec:simulations}. \subsection{Single Stiff Rod in Simplified Geometry} According to the assumptions made above, the obstacles (in a top view) are completely described by a normal distance $r_k$ from the test polymer and an orientation $\alpha_k$; compare Fig.~\ref{fig:crossection}. We will neglect correlations and assume the obstacles to be uniformly distributed. The probability to find an obstacle with a certain direction at a specified point is independent of both the direction and that point. This corresponds to a complete factorization of the network distribution function into single polymer distribution functions.
Consider first a simplified geometry in which all obstacles are either parallel to the $y$ or the $z$ axis, as depicted in Fig.~\ref{fig:crossection} (right). As fluctuations in both coordinates are assumed to be independent and equivalent, the task of computing the tube width is reduced to a one-dimensional problem with a single coordinate $r$. The network density or mesh size enters as the number $\rho$ of obstacles per unit length. This density should be chosen such that the average number of obstacles at a certain distance $r$ from the test rod in the IRM is the same as the average number of obstacle polymers featuring a minimal distance $r$ from the test polymer. This density is proportional to the length $\bar L$ of the stiff segment and to the number of surrounding polymers in a unit volume, $\nu L$. The exact relation $\rho=(\pi/2) (\nu L \bar L/4)$ is calculated in Appendix \ref{sec:density2}. As the obstacles are assumed to undergo Gaussian fluctuations around their average position $r_k$, the corresponding probability density is \begin{equation} P_0(r-r_k,\sigma):=\left( 2 \pi \sigma^2 \right)^{-1/2} e^{\frac{-(r-r_k)^2}{2 \sigma^2}} \;. \end{equation} If the test rod interacts with only a single obstacle, we can state that the probability to find the test rod at a certain position is given by the fraction of realizations still accessible to the obstacle. In this case \begin{equation} P_+(r,r_k,\sigma)=\int_r^\infty dr^\prime P_0(r^\prime-r_k,\sigma) \end{equation} is the fraction of configuration space still accessible to the obstacle if the test rod is placed at $r$ (for $r_k>0$). Carrying out the integral yields \begin{equation} P_+(r,r_k,\sigma) = \frac{1}{2} \mathrm{erfc} \left( \frac{r-r_k}{\sqrt{2}\sigma} \right) \end{equation} and the corresponding probability for obstacles at negative positions, $P_-(r,r_k,\sigma)$, is simply obtained by an inverted sign of the argument.
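For concreteness, the accessible-fraction probabilities can be evaluated with standard error-function routines. The following sketch (our own illustration, not code from the original work; the function names are ours) implements $P_\pm$ with SciPy and illustrates the hard-wall limit $\sigma \to 0$:

```python
import math
from scipy.special import erfc

def P_plus(r, rk, sigma):
    # fraction of obstacle realizations still accessible for an obstacle
    # with average position rk > 0 when the test rod sits at r
    return 0.5 * erfc((r - rk) / (math.sqrt(2.0) * sigma))

def P_minus(r, rk, sigma):
    # mirror image for obstacles at negative average positions (rk < 0)
    return 0.5 * erfc((rk - r) / (math.sqrt(2.0) * sigma))

# hard-wall limit: for sigma -> 0, P_plus approaches a step at rk
p_inside  = P_plus(0.0, 1.0, 1e-6)   # test rod well inside the allowed region
p_outside = P_plus(2.0, 1.0, 1e-6)   # test rod beyond the obstacle
p_at_wall = P_plus(1.0, 1.0, 0.3)    # exactly at the average obstacle position
```

For small $\sigma$ the complementary error function turns into a sharp step, so `p_inside` approaches one, `p_outside` approaches zero, and the value at the average obstacle position is $1/2$ for any $\sigma$.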
The probability to find the test rod at a position $r$ for a given configuration of obstacles $\{r_k\}$ is then given by the product of all probabilities \begin{equation} \label{eq:product} P(r,\{r_k\},\sigma) = \frac{1}{{\it N}} \prod_{k,r_k>0} P_+(r,r_k,\sigma) \prod_{k,r_k<0} P_-(r,r_k,\sigma) \;. \end{equation} The normalization ${\it N}={\it N}(\{r_k\},\sigma)$ is determined by the condition $\int dr P(r,\{r_k\},\sigma)=1$ and depends on the obstacle configuration. As the function $P_+(r,r_k,\sigma)$ reduces to a Heaviside function in the case of $\sigma \to 0$, the product in Eq.~(\ref{eq:product}) can be written as $\theta(r-r_-)\theta(r_+-r)/(r_+-r_-)$, where $r_+$ and $r_-$ are the positions of the two closest obstacles. This reduction is justified because all further obstacles are completely shadowed by the two nearest neighbors. In the case of a non-vanishing $\sigma$ the probability distribution $P(r,\{r_k\},\sigma)$ will no longer be rectangular but will smear out. The test rod has a non-vanishing probability to be found behind the average position of the closest obstacle and thus a chance to feel the interaction of further network constituents. However, sketching the distribution in Fig.~\ref{fig:probability}, it becomes intuitively clear that this probability rapidly approaches zero for distant obstacles or small fluctuation amplitudes $\sigma$. We will exploit this fact in the numerical analysis below and in the simulations. \begin{figure}[htbp] \center \includegraphics[width=\columnwidth]{./probability.eps} \caption{Probability density to find the test rod at a spatial position for mutual interaction with a single obstacle (solid lines) and resulting probability in an environment of all obstacles (dashed line). The x-axis tics mark the center position of each obstacle. Distant obstacles only have a negligible influence on the overall probability function.
} \label{fig:probability} \end{figure} With the distribution function, Eq.~(\ref{eq:product}), for the test rod at hand, averages of any function $f(r)$ can now be calculated for a single realization of obstacles as \begin{equation} \label{eq:test_average} \overline{f(r)}_{\{r_k\}}=\int dr f(r) P(r,\{r_k\},\sigma) \;, \end{equation} where the index $\{r_k\}$ denotes the specific obstacle configuration. The tube center of the test rod is then \begin{equation} \overline{r} (\{r_k\},\sigma):= \overline{r}_{\{r_k\}} \end{equation} and the width of the probability distribution is the desired tube diameter \begin{equation} L_\perp^2(\{r_k\},\sigma) := \overline{r^2}_{\{r_k\}}-\overline{r}^2_{\{r_k\}} \;. \end{equation} The derived tube diameter of the test rod is not only a function of the fluctuation width $\sigma$ but also of the specific obstacle configuration. Consequently, sampling over different obstacle sets will result in a distribution of values for $L_\perp$. As mentioned earlier, this distribution should be described by a single characteristic value, consistent with the obstacle fluctuations, which have also been assumed to be of equal size. Since the obstacles are uniformly distributed, they can be fully described by the density, encoded in the average number of obstacles $\rho$ per unit length. This is achieved by integrating out all obstacle positions and orientations in $L_\perp^2(\{r_k\},\sigma)$ to arrive at $L_\perp^2(\rho,\sigma)$, which depends on the density only. We choose a simple average over a large number $N$ of obstacle sets $\{r_k\}$ of the form \begin{equation} \label{eq:obstacle_integration} \langle f(\{r_k\}) \rangle_\rho = \left( \prod_{k=1}^N \int_{-R/2}^{R/2} \frac{dr_k}{R}\right) f(\{r_k\}) \;, \end{equation} where $R=N/\rho$. In this nomenclature the average tube diameter~\footnote{Of course, one could also imagine a different characterization of the average tube diameter, e.g. the median or the maximal diameter.
We choose the average as the most obvious quantity experimental groups might measure, e.g. in analyzing different fluorescence microscopy images.} is obtained as $L_\perp^2(\rho,\sigma)=\langle L_\perp^2(\{r_k\},\sigma) \rangle_\rho$. Self-consistency is now expressed as \begin{equation} L_\perp^2(\rho,\sigma)=\sigma^2 \end{equation} at the point of self-consistency (PSC) $\sigma=\sigma^*$. By measuring lengths in units of $1/\rho$ we can rewrite this as a dimensionless master curve $l(\rho \sigma)$, since $L_\perp$, $1/\rho$ and $\sigma$ are all lengths: \begin{equation} L_\perp^2(\rho,\sigma)=\frac{1}{\rho^2} l(\rho \sigma) \;. \end{equation} The task of finding the self-consistent tube width $L_\perp(\rho,\sigma^*)=\sigma^*$ translates to finding $l(C)=C^2$, where the constant $C=\rho \sigma^*$. As soon as this is achieved, the self-consistent tube diameter is available as a function of the density $\rho$ only and hence depends as \begin{equation} \label{eq:tube_diameter_of_rho} L_\perp=\frac{C}{\rho}=\frac{2 C}{3 \pi} \frac{\xi^2}{\bar L} \end{equation} on the rod length $\bar L$ and the mesh size $\xi$. The numerical determination of $C$ is achieved by an integration using a Monte Carlo procedure. It includes the $N$-fold integrals over the obstacle positions $r_k$ from Eq.~(\ref{eq:product}) as well as the integration over the test polymer position $r$ from Eq.~(\ref{eq:test_average}). As mentioned above, the probability distribution rapidly decreases at distances far from the closest obstacle. Hence, we have restricted the range of the $dr$ integration in the Monte Carlo samples to values $[y_{min}-5 \sigma,y_{max}+5 \sigma]$, where $y_{min}$ and $y_{max}$ are the closest obstacles on either side. Furthermore, the fast decrease of the probability distribution renders the contribution of distant obstacles practically zero. We can therefore drop all obstacles with $y_k \not\in [y_{min}-10 \sigma, y_{max}+10 \sigma]$.
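Since the average over obstacle configurations contains no length scales other than $1/\rho$ and $\sigma$, the combination $\rho^2 L_\perp^2$ can depend only on the product $\rho\sigma$. A small Monte Carlo sketch (our own illustration with parameters chosen for speed, not the production code behind Fig.~\ref{fig:mciso}) verifies this collapse onto a master curve by evaluating $\rho^2 \langle L_\perp^2 \rangle$ for two densities at equal $\rho\sigma$:

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(1)

def Lperp2_config(obstacles, sigma, npts=400):
    # variance of the normalized product distribution for one obstacle set
    pos = obstacles[obstacles > 0]
    neg = obstacles[obstacles < 0]
    if len(pos) == 0 or len(neg) == 0:
        return None                       # need obstacles on both sides
    rp, rm = pos.min(), neg.max()         # nearest obstacles on either side
    keep = obstacles[(obstacles > rm - 10 * sigma) & (obstacles < rp + 10 * sigma)]
    r = np.linspace(rm - 5 * sigma, rp + 5 * sigma, npts)
    dr = r[1] - r[0]
    P = np.ones_like(r)
    for rk in keep:                       # product of accessible fractions
        s = 1.0 if rk > 0 else -1.0
        P *= 0.5 * erfc(s * (r - rk) / (np.sqrt(2.0) * sigma))
    P /= P.sum() * dr                     # normalization N
    m = (r * P).sum() * dr
    return (r**2 * P).sum() * dr - m**2

def master(rho, sigma, nconf=4000, N=30):
    R = N / rho                           # obstacles uniform in [-R/2, R/2]
    vals = []
    while len(vals) < nconf:
        v = Lperp2_config(rng.uniform(-R / 2, R / 2, N), sigma)
        if v is not None:
            vals.append(v)
    return rho**2 * np.mean(vals)         # estimate of l(rho * sigma)

l_a = master(rho=1.0, sigma=0.5)
l_b = master(rho=2.0, sigma=0.25)         # same rho * sigma = 0.5
```

Up to Monte Carlo noise, `l_a` and `l_b` agree, because rescaling all lengths maps one parameter set exactly onto the other.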
The results are depicted in Fig.~\ref{fig:mciso} and a graphical solution for the PSC constant yields \begin{equation} C \approx 3.64 \;. \end{equation} Special attention should be paid to the behavior at $\rho \sigma=0$. It provides a good test of whether the IRM used is adequate and allows for a verification of the numerics. At a finite density, as required by the tube concept, $l(0)$ reflects the situation of immobile obstacles with $\sigma=0$. At this point the tube diameter should remain finite and its value should be given by the density of obstacles. From the obstacle statistics and the density $\rho/4$ per unit length and side, the probability to find the first obstacle at position $r_\pm$ is known to be $P(r_\pm) = \exp(-r_\pm \rho/4)$. In the case of fixed obstacles the accessible fluctuation interval is $r_+-r_-$ and, since $L_\perp^2$ is the variance of the then uniform distribution, the expectation value $\langle L_\perp^2 \rangle=\langle (r_+-r_-)^2 \rangle/12$ can be computed from the probability density above. Taking care of the normalization one arrives at $L_\perp=\sqrt{8}/\rho$. The master function yields $l(0)=\rho^2 L_\perp^2=8$, a value in good agreement with the data (circles) in Fig.~\ref{fig:mciso}. \subsection{Generic 2D Geometry} If the simplification of axis-parallel obstacle polymers is dropped, the obstacle configuration needs to be specified by a set of radii $\{r_k\}$ and angles $\{\alpha_k\}$. The probability to find the test rod at a position $(y,z)$ for a given configuration of obstacles $\{r_k,\alpha_k\}$ in the two-dimensional case as in Fig.~\ref{fig:crossection} (left) is then again given by the product of all probabilities, where the different angles have to be accounted for: \begin{equation} \label{eq:product_iso} P(y,z,\{r_k,\alpha_k\},\sigma)=\frac{1}{{\it N}} \prod_k P_{\pm}(y \cos \alpha_k + z \sin \alpha_k, r_k, \sigma) \;. \end{equation} The normalization factor ${\it N} = {\it N}(\{r_k,\alpha_k\},\sigma)$ is again determined by the condition $\int dy dz P(y,z,\{r_k,\alpha_k\},\sigma)=1$.
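Returning briefly to the consistency check at $\rho\sigma=0$: the value $l(0)=8$ for immobile obstacles can be reproduced in a few lines. The sketch below (our own check, not from the original work) takes the nearest-obstacle distances on either side as exponentially distributed with density $\rho/4$ and uses that $L_\perp^2$ is the variance of the uniform distribution between the two nearest obstacles:

```python
import math
from scipy.integrate import quad

rho = 1.3                      # arbitrary test density
lam = rho / 4.0                # obstacle density per unit length and side

# moments of the nearest-obstacle distance, r ~ lam * exp(-lam * r)
m1, _ = quad(lambda r: r * lam * math.exp(-lam * r), 0.0, math.inf)
m2, _ = quad(lambda r: r * r * lam * math.exp(-lam * r), 0.0, math.inf)

# <(r_+ - r_-)^2> for independent sides; L_perp^2 is the variance of the
# uniform distribution between the two obstacles, hence the factor 1/12
w2 = 2.0 * m2 + 2.0 * m1 * m1
Lperp2 = w2 / 12.0
l0 = rho**2 * Lperp2           # master function at rho * sigma = 0
```

The result $l_0 = 8$ is independent of the chosen $\rho$, as it must be for the rescaled master function.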
In a single obstacle configuration the tube diameters $L_{\perp y,z}$ in the $y$ and $z$ direction will in general be different. However, in averaging over all configurations isotropy must be recovered, so that \begin{equation} L_\perp^2(\rho,\sigma)=\langle L_{\perp y}^2(\{r_k,\alpha_k\},\sigma) \rangle_\rho=\langle L_{\perp z}^2(\{r_k,\alpha_k\},\sigma) \rangle_\rho \;. \end{equation} The average over obstacle configurations at fixed density of uniformly distributed obstacles is performed as \begin{equation} \label{eq:iso_obstacle_integration} \langle f(\{r_k,\alpha_k\}) \rangle_\rho = \left( \prod_{k=1}^N \int_0^R \frac{dr_k}{R} \int_0^{2 \pi} \frac{d\alpha_k}{2 \pi }\right) f(\{r_k,\alpha_k\}) \end{equation} with the integration range again being $R=N/\rho$. Note that, contrary to the simplified geometry, the obstacle density per unit length in this case is given by $\rho=(\pi/2) (\nu L \bar L)$. Evaluating the integrals in Eq.~(\ref{eq:iso_obstacle_integration}) again by the Monte Carlo method results in the data plotted in Fig.~\ref{fig:mciso} (triangles), where the suppression of irrelevant obstacles was applied in analogy to the simplified geometry. \begin{figure}[htbp] \center \includegraphics[width=\columnwidth]{./psc.eps} \caption{Master curve $l(\rho \sigma)$ of the tube diameter rescaled by the obstacle density, obtained by MC simulation for the simplified (circles) and the generic geometry (triangles); the intersection with the quadratic obstacle fluctuation amplitude marks the point of self-consistency. The error of the simplified geometry is surprisingly small.} \label{fig:mciso} \end{figure} The results do not deviate much from the data obtained earlier (circles), i.e. the error made by using the simplified geometry is surprisingly small. Again the value of the PSC is obtained graphically. It yields \begin{equation} C \approx 3.52 \end{equation} and this value will be used in the remainder of this work.
\subsection{Choice of Independent Rod Length} As discussed before and illustrated in Fig.~\ref{fig:irm} (bottom), the choice of $\bar L$ is crucial for the success of the IRM. The number $L/\bar L$ of independent rods can be regarded as a measure for the flexibility of the modeled polymer and has to be chosen such that the transverse excursions of the ensemble of stiff rods equal the fluctuations of the actual semiflexible polymer. To this end we consider both systems in a generic harmonic potential \begin{equation} U[y(s)]=\frac{\gamma}{2}\left[y(s)-y^0(s) \right]^2 \end{equation} with the potential minimum $y^0(s)$ as a Gaussian variable with $\langle y^0(s) y^0(s') \rangle = \alpha \delta(s-s')$. This corresponds to the assumption of a ``Gaussian random backbone'' as a general property of the tube. We use this intuitive assumption as one possible criterion to determine the segment length $\bar L$. Of course, other possibilities can be imagined. Note that the simulations in Sec.~\ref{sec:simulations} will justify this assumption a posteriori. The average position $\overline{y(s)}$ as a functional of a given potential $y^0(s)$ is obtained as an average over all polymer configurations in this potential. Averaging then over all potential conformations yields the mean square of the polymer's transverse fluctuations $\langle \overline{y(s)}^2 \rangle$. The overline thus denotes an average in a given potential and the brackets denote an average over all potentials. While the transverse fluctuations of a rigid rod are a function of the potential parameters $\alpha, \gamma$ only, the response of a semiflexible polymer will additionally depend on its stiffness. This evidently provides a tool to connect the semiflexible polymer's persistence length and the length $\bar L$ from the IRM by demanding that the fluctuations $\langle \overline{y(s)}^2 \rangle$ for given potential parameters $\alpha, \gamma$ are the same in both cases.
Starting with the IRM, it is sufficient to consider only one stiff rod, as the individual rods are statistically independent. The average position is then \begin{equation} \overline{y}=\frac{1}{\bar L} \int_0^{\bar L} ds \, y^0(s) \end{equation} and the transverse fluctuations are \begin{equation} \langle \overline{y}^2 \rangle = \frac{1}{\bar L^2} \int_0^{\bar L} ds \int_0^{\bar L} ds' \langle y^0(s) y^0(s') \rangle = \frac{\alpha}{\bar L} \;. \end{equation} For the semiflexible polymer the fluctuations of polymer and tube potential are decomposed into modes (Appendix \ref{sec:mode-analys-polym}): \begin{equation} \langle \overline{y}^2 \rangle = \frac{1}{L} \int_0^L ds \, \langle \overline{y(s)}^2 \rangle = \frac{1}{L} \sum_k \langle \overline{y_k}^2 \rangle \;, \end{equation} where the mode analysis yields $\overline{y_k}=y_k^0/(1+q_k^4 l_{\rm d}^4)$ (compare Eq.~(\ref{eq:app2_1})) with $q_k \approx \pi (k+1/2)$. Using now the correlations of the Gaussian random tube profile and the identity (\ref{eq:app2_3}), the polymer fluctuations can be related to the deflection length as \begin{equation} \langle \overline{y}^2 \rangle=\frac{1}{L} \sum_k \frac{\langle (y_k^0)^2 \rangle}{(1+q_k^4 l_{\rm d}^4)^2}=\frac{\alpha}{L}\frac{h'(l_{\rm d})}{4 l_{\rm d}^3} \;. \end{equation} Equating the fluctuations for the IRM and the semiflexible polymer fixes the segment length to \begin{equation} \label{eq:rod_length} \bar L=L \frac{4 l_{\rm d}^3}{h'(l_{\rm d})} \;. \end{equation} To conclude this section: we have obtained the tube diameter for a sequence of independent rods of length $\bar L$ and derived a condition for fixing this length so as to correctly mimic the behavior of a semiflexible polymer in a network of the same mesh size. It has turned out that the criterion for the correct rod length is a function of the deflection length. \section{Results}\label{sec:plugging-it-all} If we recall that the tube diameter for a semiflexible polymer was derived in Sec.
\ref{sec:model-definition} from the Hamiltonian with a similar dependence on the deflection length, we are now equipped to set up an implicit equation that determines this deflection length. Afterwards the tube diameter follows from a simple calculation. Equating the expressions for the tube diameter of the polymer (\ref{eq:definition_h}) and of the IRM (\ref{eq:tube_diameter_of_rho}) yields \begin{equation} L_\perp^2=\frac{L^3}{l_{\rm p}} h(l_{\rm d})=\frac{4 C^2}{9 \pi^2} \frac{\xi^4}{\bar L^2} \;. \end{equation} With the correct rod length (\ref{eq:rod_length}) the implicit equation for the dimensionless deflection length is \begin{equation} \label{eq:implicit_of_h} h(l_{\rm d}) = \frac{C^2}{32 \pi^2} \frac{[h'(l_{\rm d})]^2}{l_{\rm d}^6} \frac{l_{\rm p} \xi^4}{L^5} \;. \end{equation} Solving this equation determines $l_{\rm d}$ from the system's parameters $l_{\rm p}$, $L$ and $\xi$. This is achieved by introducing a dimensionless function \begin{equation} l_{\rm p} \xi^4/L^5=j(l_{\rm d}) :=\frac{l_{\rm d}^6 32 \pi^2 h(l_{\rm d})}{C^2 [h'(l_{\rm d})]^2 } \;. \end{equation} Inversion then yields \begin{equation} l_{\rm d}=j^{-1}(l_{\rm p} \xi^4/L^5) \;. \end{equation} With the abbreviation $D=3 C/\pi$ we finally obtain for the dimensional deflection length, to first and second order in the argument of $j^{-1}$, \begin{equation} L_{\rm d}=\frac{D^{2/5}}{2^{13/10}} \xi^{4/5} l_{\rm p}^{1/5} + \frac{D^{4/5}}{2^{11/10}3}\frac{\xi^{8/5}l_{\rm p}^{2/5}}{L} \;. \end{equation} By application of (\ref{eq:definition_h}), the tube diameter is easily obtained as \begin{equation} \label{eq:tube2order} L_\perp=\frac{D^{3/5}}{2^{27/10}}\frac{\xi^{6/5}}{l_{\rm p}^{1/5}} + \frac{D}{2^{5/2}} \frac{\xi^2}{L} \;. \end{equation} Evaluation of the numerical factors yields the following results to first order: \begin{equation} L_{\rm d} \approx 0.66 \xi^{4/5} l_{\rm p}^{1/5} \;, \qquad L_\perp \approx 0.32 \frac{\xi^{6/5}}{l_{\rm p}^{1/5}} \;.
\end{equation} Note that this also fixes the confinement free energy of the polymer to $\Delta F \approx 2.14 \, k_{\rm B} T \, \frac{L}{\xi^{4/5} l_{\rm p}^{1/5}}$. The leading term of the tube diameter agrees with the established scaling \cite{semenov86}. The additional term's dependence on the inverse polymer length indicates a finite length effect. It can be traced back to the partition sum of a finite polymer (\ref{eq:approx_g}) and accounts for boundary effects at the end of the tube. If the free energy of infinite polymers (\ref{eq:free_energy_result}) is used throughout the calculations, all higher order terms vanish accordingly. In an earlier work \cite{morse01} another prefactor for the scaling term, $L_\perp \approx 0.53 \xi^{6/5} l_{\rm p}^{-1/5}$, has been predicted by a rather different accounting of obstacles. \begin{figure}[htbp] \center \includegraphics[width=\columnwidth]{./deviations.eps} \caption{Relative correction obtained by the second order term of the tube diameter (\ref{eq:tube2order}) for different biopolymers as a function of the mesh size $\xi$. fd-viruses ($L \approx 0.9 \mu$m, $l_{\rm p} \approx 2.2 \mu$m \cite{schmidt00}) show a large correction due to their small length compared to the mesh size, while this effect is rather small in microtubules ($L \approx 50 \mu$m, $l_{\rm p} \approx 5000 \mu$m \cite{pampaloni06,gittes93}). The correction for F-actin has been plotted for different lengths from a typical length distribution.} \label{fig:dev} \end{figure} It is important to be aware of the subtle difference between the explicit length dependence of the second order terms and the implicit dependence on $L$ that enters via the mesh size $\xi=\sqrt{3/\nu L}$. In a polydisperse polymer solution the $L$ in the mesh size has to be the average polymer length, while the $L$ in the second order terms is the length of the actually observed filament in the tube. In a monodisperse solution, as in our theory, these quantities are identical.
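The chain of results can be cross-checked numerically. Using the small-$l_{\rm d}$ asymptotics $h(x) \approx x^3/(2\sqrt{2})$, the rod length (\ref{eq:rod_length}) reduces to $\bar L \approx (8\sqrt{2}/3)\, l_{\rm d} L$, i.e. it is indeed of the order of the deflection length, and the quoted prefactors $0.66$, $0.32$ and $2.14$ follow from $C \approx 3.52$ alone. A short check (our own sketch, with numerical derivatives in place of the analytical ones):

```python
import math

def g(l):
    # Eq. (definition_g); valid as long as sinh does not overflow
    a = 1.0 / (math.sqrt(2.0) * l)
    return math.log(l**2) - 0.5 * math.log(math.sinh(a)**2 - math.sin(a)**2)

def h(x, dx=1e-7):
    # h(x) = x^5 g'(x) / 2, with g' from a central difference
    return 0.5 * x**5 * (g(x + dx) - g(x - dx)) / (2.0 * dx)

def hprime(x, dx=1e-5):
    return (h(x + dx) - h(x - dx)) / (2.0 * dx)

# independent-rod length, Eq. (rod_length), in units of L
ld = 0.01
Lbar = 4.0 * ld**3 / hprime(ld)
leading = 8.0 * math.sqrt(2.0) / 3.0 * ld    # from h(x) ~ x^3 / (2 sqrt 2)

# first-order prefactors from C alone
C = 3.52
D = 3.0 * C / math.pi
c_Ld    = D**(2.0 / 5.0) / 2**(13.0 / 10.0)  # L_d    ~ 0.66 xi^{4/5} lp^{1/5}
c_Lperp = D**(3.0 / 5.0) / 2**(27.0 / 10.0)  # L_perp ~ 0.32 xi^{6/5} lp^{-1/5}
c_dF    = math.sqrt(2.0) / c_Ld              # Delta F ~ 2.14 k_B T L xi^{-4/5} lp^{-1/5}
```

At $l_{\rm d}=0.01$ the numerically evaluated $\bar L$ already agrees with its leading-order form to within a few percent, and the three prefactors reproduce the rounded values quoted in the text.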
The importance of the second order term depends heavily on the nature of the polymers making up the network. In Fig.~\ref{fig:dev} the relative tube width correction obtained from the second order term is displayed for several semiflexible biopolymers as a function of the mesh size $\xi$. It is interesting to note that the intuitive dependence on the relative persistence length $l_{\rm p}/L$ present in the second order term of the deflection length is rather negligible. The largest effect of the correction term is found not for the stiffest biopolymer, the microtubule, but for the short fd-virus. This is due to its small length to mesh size ratio: finite length effects will then influence a large fraction of the polymer strand and not only the boundaries. Given proper control of the polymer length, this effect should be experimentally observable in F-actin solutions. Focussing back on F-actin, Fig.~\ref{fig:results} displays the result of our model in comparison to experimental data \cite{kas96,dichtl99}. While theoretical and experimental results are certainly qualitatively comparable, a more detailed discussion is difficult due to the large fluctuations of the measurements. However, it seems reasonable to interpret these measurements, regardless of their ambiguity, as an upper limit to the tube diameter. Two main reasons cause experimental observations of tube widths systematically higher than in the presented theory: from a technical point of view, the microscope resolution broadens the observed tubes. Additionally, this effect is further enhanced by collective fluctuations of the complete elastic medium that remain unaccounted for in our approach. In contrast, the computer simulations presented below can be tailored to avoid these effects and to study the exact model system used by the theory.
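The size of the finite-length correction follows from the ratio of the two terms in (\ref{eq:tube2order}), which evaluates to $D^{2/5} 2^{1/5} \xi^{4/5} l_{\rm p}^{1/5}/L$. The sketch below (our own estimate; the mesh size value is an illustrative assumption, while the polymer parameters are those quoted in the caption of Fig.~\ref{fig:dev}) reproduces the qualitative statement that the correction is far more important for fd-viruses than for microtubules:

```python
import math

C = 3.52
D = 3.0 * C / math.pi

def rel_correction(xi, lp, L):
    # ratio of the second- to the first-order term of the tube diameter
    return D**0.4 * 2**0.2 * xi**0.8 * lp**0.2 / L

xi = 0.2  # illustrative mesh size in micrometers (an assumption of ours)
rel_fd = rel_correction(xi, lp=2.2,    L=0.9)   # fd-virus
rel_mt = rel_correction(xi, lp=5000.0, L=50.0)  # microtubule
```

Despite the much larger persistence length of the microtubule, its relative correction stays small, while for the short fd-virus the correction is substantial at the same mesh size.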
\begin{figure}[htbp] \center \includegraphics[width=\columnwidth]{./compare.eps} \caption{Comparison of the tube diameter from theory, numerical simulations (squares) and reanalyzed experimental measurements (triangles) from \cite{kas96,dichtl99}. While Dichtl has directly measured potential strengths, K\"as has recorded the maximal tube width $a$. From this we estimated a lower bound of $\sigma=a/6$.} \label{fig:results} \end{figure} \section{Simulations} \label{sec:simulations} We have conducted extensive numerical simulations of the model system for several reasons: on the one hand, they serve as a tool to verify the validity of several approximations used in the theoretical description developed above, for example the harmonic description of the tube potential or the assumption of a single fluctuation amplitude for the obstacles. Furthermore, the comparison between the simulated transverse fluctuations and the final result of our theory can show whether we have succeeded in correctly predicting the tube diameter in a network of semiflexible polymers. Finally, the simulations give us the chance to analyze observables that go beyond the analytical theory presented; these are in particular distribution functions, which open up a further possibility for comparison with experiments. We use a Monte Carlo simulation of a single polymer in two dimensions that is surrounded by point-like obstacles. This reduction will result in the same fluctuation amplitude as in the 3D model, because we have assumed the fluctuations along the different coordinates to be independent. Simulating a test polymer in 2D and measuring its transverse displacement will thus on average correspond to either $L_{\perp y}$ or $L_{\perp z}$, given that the number of obstacle points has been chosen correctly.
We calculate this number as the number of stiff rods that cut an arbitrary plane of unit area, if these rods are of length $L$ and density $\nu$ per unit volume and are uniformly distributed both in position and orientation. The approximation as rigid rods is justified by the large persistence length compared to the mesh size. The relation between the polymer concentration $\nu$ and the point density $\rho_{\rm MC}$ in the simulations then yields (\ref{eq:app1_1}): \begin{equation} \rho_{\rm MC}=\frac{2}{\pi} \nu L \;. \end{equation} Of course, the obstacles will cut the plane at different angles. These could be incorporated via different statistics of the obstacle fluctuations. However, simulations show that no significant differences compared with orthogonal cuts occur. This can be explained by an averaging out of anisotropies in performing ensemble averages. We therefore choose to assume orthogonal intersections of the obstacle polymers with the plane of simulation. Having defined a suitable conversion for the polymer density, we proceed to implement the other contribution to the test polymer's Hamiltonian in our simulations, i.e. the persistence length. The test polymer is modelled as a chain of $N$ rigid segments that approximate the continuous contour of a worm-like chain. The joint angle between two segments gives rise to a bending energy summed over all bonds: \begin{equation} \beta H(\{{\bf t}_i\})=-k \sum_{i=1}^{N-1} {\bf t}_i \cdot {\bf t}_{i+1} \;, \end{equation} where the ${\bf t}_i$ are the tangent vectors and $k$ is chosen so as to reproduce the energy of a semiflexible polymer of persistence length $l_{\rm p}$. In two dimensions the relation is \begin{equation} \frac{L}{l_{\rm p}}=-N \ln \left[ \frac{I_1(k)}{I_0(k)} \right] \end{equation} with $I_0$ the modified Bessel function of the first kind and $I_1$ its derivative. The simulations start from an equilibrium conformation of the test polymer and with obstacle centers $r_i^0$ that are uniformly distributed.
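In practice the bond stiffness $k$ for a given persistence length is obtained by root finding. A short sketch (our own, not the simulation code itself; it uses SciPy's exponentially scaled Bessel function \texttt{ive}, for which the ratio $I_1/I_0$ stays finite even at large $k$):

```python
import math
from scipy.special import ive
from scipy.optimize import brentq

def bond_stiffness(N, L_over_lp):
    # solve L/l_p = -N ln[I_1(k)/I_0(k)] for k; the ratio I_1/I_0 equals
    # ive(1, k)/ive(0, k), since the exponential scaling factors cancel
    f = lambda k: -N * math.log(ive(1, k) / ive(0, k)) - L_over_lp
    return brentq(f, 1e-8, 1e8)

# stiff chain, L = l_p: for large k the relation gives k of order N/2
k = bond_stiffness(N=100, L_over_lp=1.0)
```

The bracket $[10^{-8}, 10^{8}]$ is safe because the left-hand side decreases monotonically from a large positive value to nearly $-L/l_{\rm p}$ as $k$ grows.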
At this point, obstacles are discriminated into those in the half-spaces left and right of the test polymer. During the subsequent evolution of the system, every move of an obstacle and every conformational change of the test polymer is rejected if it would result in a reclassification of any obstacle into the other half-space. Besides this constraint, the evolution is governed only by the bending energy of the test polymer and a harmonic potential of width $\sigma$, $\beta U(r_i)=(r_i-r_i^0)^2/(2\sigma^2)$, for every obstacle. During the evolution the transverse displacements $L_\perp$ of every bond from the average contour are recorded. To avoid boundary phenomena, this is done only in the bulk. The whole procedure is then carried out repeatedly for different initial sets of random obstacles and random test-polymer conformations. If the computation is repeated for different values of $\sigma$, a function $L_\perp(\sigma)$ is obtained from which the point of self-consistency $L_\perp(\sigma)=\sigma$ and its error can be deduced graphically. Repeating the procedure for different parameters yields the tube diameter as a function of persistence length $l_{\rm p}$ and concentration $\nu L$, which can be compared to the theoretical prediction and the scarce available experimental data. As displayed in Fig.~\ref{fig:results} the simulation results and the theoretical prediction to both first and second order agree remarkably well. On the basis of the available data any discrimination between first and second order would be presumptuous. However, it has to be kept in mind that any deviations due to a lack of simulation time or shortcomings in the Monte Carlo moves will tend to reduce the observed tube width. The obtained simulation results are thus a lower bound on the real tube diameter. 
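The graphical self-consistency step can be sketched as follows. Here the Monte Carlo response curve $L_\perp(\sigma)$ is replaced by a hypothetical smooth stand-in (in the real procedure it comes from the simulation data), so only the fixed-point logic is illustrated:

```python
def find_self_consistent(L_perp_of_sigma, s_lo=1e-6, s_hi=10.0, iters=200):
    """Locate the self-consistency point L_perp(sigma) = sigma by bisection on
    g(sigma) = L_perp(sigma) - sigma, which is positive for small sigma
    (a finite tube survives even for frozen obstacles) and negative for large sigma."""
    for _ in range(iters):
        s_mid = 0.5 * (s_lo + s_hi)
        if L_perp_of_sigma(s_mid) > s_mid:
            s_lo = s_mid
        else:
            s_hi = s_mid
    return 0.5 * (s_lo + s_hi)

def demo_response(s):
    """Hypothetical stand-in response: the tube widens with obstacle softness
    but sublinearly. NOT the simulated curve, purely for illustration."""
    return 0.3 + 0.5 * s ** 0.5

sigma_star = find_self_consistent(demo_response)
```

In the actual procedure each evaluation of $L_\perp(\sigma)$ is a full Monte Carlo run, so the crossing is located from a handful of $\sigma$ values and interpolation rather than by fine bisection.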
Even though the good agreement between the theoretically predicted tube diameter and the values observed in numerical simulations suggests that our theoretical description is valid, we employ the developed algorithms to explicitly check some of the assumptions made in the course of deriving the tube diameter. One central assumption in the realm of the tube model is the substitution of an ensemble of neighboring polymers by an effective tube potential. This tube potential is modelled by a harmonic function of strength $\gamma$ as in the Hamiltonian (\ref{eq:hamilton}). This harmonic assumption seems sensible and is also supported by preliminary experiments with colloidal probes \cite{dichtl99}. Our numerical simulations can provide further evidence on the exact form of the potential. To this end, we have monitored the transverse displacement as a function of arc length. In the resulting histogram (see Fig.~\ref{fig:harmonic} (top) for some examples) we identify the distribution's maximum as the center position and analyze the form of the potential. Evidently, the resulting profiles in the test polymer's bulk are reasonably Gaussian shaped, while deviations at the boundaries (compare the data for $s=0.08$ in Fig.~\ref{fig:harmonic} (top)) occur but are negligible for a tube model where $L>L_\perp$. For quantification, the ratio of the fourth moment to the square of the second moment of the transverse fluctuations \begin{equation} Q = \frac{\overline{(y-\bar y)^4}}{\overline{(y-\bar y)^2}^2} \end{equation} was considered. For a perfect Gaussian distribution this quantity equals $Q=3$. As shown in Fig.~\ref{fig:harmonic} (bottom) this value is also asymptotically attained in the simulations after sufficient simulation time. These results clearly support the validity of a harmonic tube potential. 
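The moment ratio $Q$ is straightforward to estimate from a sample of transverse excursions. A minimal check with synthetic Gaussian samples (our own helper, standing in for the recorded simulation data):

```python
import random

def moment_ratio(samples):
    """Q = mean[(y - ybar)^4] / mean[(y - ybar)^2]^2; equals 3 for a Gaussian."""
    n = len(samples)
    mean = sum(samples) / n
    m2 = sum((y - mean) ** 2 for y in samples) / n
    m4 = sum((y - mean) ** 4 for y in samples) / n
    return m4 / (m2 * m2)

random.seed(1)
# synthetic transverse excursions, drawn from a Gaussian of width 0.4
gauss_sample = [random.gauss(0.0, 0.4) for _ in range(200000)]
print(moment_ratio(gauss_sample))  # close to 3 for Gaussian data
```

Applied to the recorded excursions, convergence of this estimator towards 3 is exactly the diagnostic shown in Fig.~\ref{fig:harmonic} (bottom).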
\begin{figure}[htbp] \center \includegraphics[width=\columnwidth]{./histo.eps} \hspace{0.5cm} \includegraphics[width=\columnwidth]{./time.eps} \caption{(top) Distribution of transverse excursions at different arc lengths $s$ shows a Gaussian potential profile with rather large variability in the potential width. At the test polymer's boundaries deviations occur. (bottom) After sufficient simulation time the ratio $Q$ (solid) approaches the characteristic value $Q=3$ of a Gaussian distribution. The transverse fluctuation area (dashed) converges likewise.} \label{fig:harmonic} \end{figure} In contrast to the classical picture of an Edwards tube with a fairly homogeneous diameter, the simulations reveal a rather large variability in the local tube diameter, as has also been observed experimentally \cite{kas96,dichtl99}. Carrying out extensive simulations in a large number of different obstacle environments allows one to record the distribution function of the tube diameters. This is of crucial importance, as our theoretical description has assumed that the tube diameter (and hence, by self-consistency, also the obstacle fluctuation width) can be described by a single characteristic value. This approach only seems feasible if the distribution described by the characteristic value is reasonably well peaked. The simulations show that the resulting distribution indeed exhibits a well-defined peak (Fig.~\ref{fig:distribution}). However, the variability of the observed tube diameters is rather large, with a half-width comparable to the average tube diameter itself. We observe a sharp cut-off at small tube widths, while the distribution's tail towards wide tubes is longer. The behavior at small tube widths is dominated by the energy cost of confining a polymer into an increasingly narrow tube and can thus be considered a polymer property. By contrast, the distribution at tube widths larger than the average diameter is due to void spaces. 
These will follow an exponential decay and are therefore a characteristic of the network architecture. \begin{figure}[htbp] \center \includegraphics[width=\columnwidth]{./distribution.eps} \caption{Distribution of $L_\perp$ sampled over polymer arc length and different obstacle environments. Distributions are well peaked and exhibit longer tails at large tube widths. The noise at diameters far from the distribution's maximum is an artefact of the numerical discretization.} \label{fig:distribution} \end{figure} Finally, the numerical simulations provide a means to explicitly check whether self-consistency is guaranteed despite the simplifying assumption of a single fluctuation width. To this end, we have used the resulting histogram of tube diameters from above to compute a normalized distribution function. The fluctuation widths of the obstacles are now initialized according to this very distribution. The resulting histogram of tube diameters is then again fed back into the simulation as the obstacle fluctuation distribution. This procedure is carried out until both distributions converge to each other in a self-consistent manner. Surprisingly, this is already the case after the first iteration step of the process, as displayed in Fig.~\ref{fig:distribution}. This gives strong evidence that, owing to the self-averaging over obstacles, modelling a network with a Gaussian tube profile and a single average tube diameter is sufficient to describe the physical reality. \section{Conclusion}\label{sec:conclusion} We have presented a new approach to determine the absolute value of the tube diameter in semiflexible polymer networks, supported by computer simulations. To this end the deflection length of a polymer in a hypothetical harmonic tube was connected to the tube's diameter via the free energy cost for finite-length polymers. The assumption of a harmonic tube was confirmed by simulation results. 
By decomposition into independent stiff rods of appropriate length, we were able to establish an implicit equation for the deflection length. The resulting tube width $L_\perp$ is in agreement with the established scaling law $L_\perp= c \xi^{6/5}l_{\rm p}^{-1/5}$ with mesh size $\xi=\sqrt{3/(\nu L)}$ and persistence length $l_{\rm p}$. Our theory provides a prefactor of $c \approx 0.32$ and a higher-order term that accounts for finite-length effects and scales as $\xi^2/L$. The available experimental data are consistent with our predictions; however, their quality does not allow for a detailed comparison. To provide a precise validation, we have complemented our theoretical work by extensive Monte Carlo simulations of a test polymer in an environment of obstacles. The resulting self-consistent tube widths match the predicted theoretical values very well. This strongly supports the validity of the absolute value of the concentration-dependent tube diameter. Furthermore, we have employed simulations to observe properties beyond the analytical theory. We have recorded the distribution function of tube widths in a network for different concentrations. Thereby we were able to explicitly confirm the self-consistency of the simplified model with a fixed tube diameter. Both our theoretical predictions, e.g. the finite-length contributions to the tube diameter, and our simulation data, e.g. the distribution functions, offer feasible opportunities for experimental comparison. On the theoretical side, the significance of correlations and collective fluctuations of the complete medium, as well as an analytical model of the distribution functions, may open up promising continuations of this work. \begin{acknowledgement} We kindly acknowledge helpful discussions with M. Degawa, M. Giesen, R. Merkel and M. Romanoska. Financial support of the German Excellence Initiative via the program ``Nanosystems Initiative Munich (NIM)'' is gratefully acknowledged. 
HH acknowledges support from the international graduate program Nano-Bio-Technology funded by the Elite Network of Bavaria. \end{acknowledgement}
\section{Introduction} The present paper contains a probabilistic analysis of a mathematical model of an asynchronous algorithm for parallel simulation. For a detailed discussion of synchronization issues in parallel and distributed algorithms we refer to~\cite{JeWI,BeTsi}. Here we give only a brief description of the problem. In large-scale parallel computation it is necessary to coordinate the activities of different processors which are working together on some common task. Usually such coordination is implemented by using a so-called message-passing system. This means that a processor shares data with other processors by sending timestamped messages. Between sending or receiving messages the processors work independently. It can happen that by the moment a message is received some processor has already proceeded further in performing its program than the value of the timestamp indicated in this newly received message; in this case the state of the processor should be \emph{rolled back} to the indicated value. It is clear that due to possible rollbacks the mean speed of a given processor in the computing network will be lower than its proper speed. One of the most important performance characteristics of the system is the progress of the computing network on large time intervals. Probabilistic models for such systems have been studied for twenty years already. From the probabilistic point of view these models consist of many relatively independent components which synchronize their states from time to time according to some special algorithm. A detailed review of all existing publications is beyond the scope of this paper. We would like to mention only that the bibliography on this subject consists mostly of two groups of papers. The first group of publications \cite{MitMit,KuSh,GrShSt,GuKuSh,ShKuRe,GuKu} is devoted to the case of two processors. 
The paper~\cite{MitMit} is of special interest since it contains an exhaustive basic analysis of the two-dimensional model and had a big influence on further research. The case of many processors was studied in \cite{MadWalMes,GuAkFu,AkChFuSer,PGM,GrShSt,T1,T3}. An important difference of such models from the two-dimensional case is that in a realistic model with more than two processors one message can provoke a multiple rollback of a chain of processors. Since the multi-dimensional model is much more complicated for a rigorous study, in the above papers the authors deal with a set of identical processors, and their mathematical results are contained in preparatory sections before large numerical simulations. It should also be noted that probabilistic models with a synchronization mechanism are interesting for modelling database systems as well~(see, for example, \cite{JeWI}). Moreover, synchronization-like interactions are now considered also in the framework of interacting particle systems~\cite{MM-RR,MShCh,MM-TVP}. The model considered in the present paper is of special interest for the following reasons. We deal with a nonhomogeneous model consisting of several different processors. We consider a message-passing topology different from the complete-graph topology which was considered in all previous papers. Our main interest is the cascade model, which presupposes a subordination between processors. We put forward a conjecture on the cluster behavior of the system: processors can be divided into separate groups which are asymptotically independent and have their own proper performance characteristics. Our main goal is to justify this conjecture. One should point out that in the case of the complete-graph topology the cluster decomposition into groups is degenerate and, thus, not interesting. We describe our model in terms of a multi-dimensional continuous-time Markov process. 
To get the asymptotic performance characteristics of the model we combine two probabilistic methods (stochastic comparison and Foster--Lyapunov functions). The paper is organized as follows. In Section~\ref{sec:Description-of-continuous} we introduce a general continuous-time Markov model and define a cascade model as a special subclass of the general model. In~Section~\ref{sec:Definition-of-discrete} we pass to the embedded Markov chain. The main problem is then to study the long-time behavior of a Markov chain with highly nonhomogeneous transition probabilities. To do this we consider relative coordinates and find groups of processors whose evolution is ergodic (converges to a steady state) in these relative coordinates. In our opinion the method of Foster--Lyapunov functions seems to be the only one available to prove stability in relative coordinates for the Markov chain under consideration. First of all, in Section~\ref{sec:Case-N-2} we start with the case of two processors ($N=2$); the analysis here is rather simple and similar to~\cite{MitMit}. In the study of the three-dimensional case (Section~\ref{sec:N-3-Lyap}) the main point is the proof of ergodicity. We propose an explicit construction of a nonsmooth Foster--Lyapunov function. Our construction is rather nontrivial, as can be seen by comparison with the already existing explicit examples of Lyapunov functions (see~\cite{FMM}). All this analysis brings us to the conclusions presented in Section~\ref{sec:Conclusions}. This section contains the decomposition into groups (clusters) in the case of the cascade model with any number of processors~$N$ and our main Conjecture~\ref{con-N-main}. We show that the proof of this conjecture is related to progress in the explicit construction of multi-dimensional Foster--Lyapunov functions. Analysis of random walks in~$\bZ _{+}^{n}$ (which was done in~\cite{FMM}) shows that, in general, this technical problem may be very difficult. 
In the next papers we hope to overcome these difficulties by using specific features of our concrete Markov processes. \paragraph*{Acknowledgements. } The first author is very grateful to the team TRIO (INRIA--Lorraine) and to l'Ecole des Mines de Nancy for their hospitality during his stay at Nancy in summer 2004 when the main results of this paper were obtained. \section{Description of continuous time model} \label{sec:Description-of-continuous} \subsection{General model} We present here some mathematical model for parallel computations. There are $N$ computing units (processors) working together on some common task. The state of a processor $k$ is described by an integer variable $x_{k}\in \mathbf{Z} $ which is called a local (or inner) time of the processor~$k$ and has a meaning of amount of job done by the processor $k$ up to the given time moment. Assume that the state $\left(x_{1},x_{2},\ldots ,x_{N}\right)$ of the system evolves in continuous time~$t\in \mathbf{R} _{+}$. Any change of a state is possible only at some special random time instants. Namely, with any processor~$k$ we associate a Poissonian flow $\Pi ^{k}=\left\{ 0=\sigma _{0}^{k}<\sigma _{1}^{k}<\cdots <\sigma _{n}^{k}<\cdots \right\} $ with intensity~$\lambda _{k}$ and with a pair $(k,l)$ of processors we associate a Poissonian flow $\Pi ^{kl}=\left\{ 0=\sigma _{0}^{kl}<\sigma _{1}^{kl}<\cdots <\sigma _{n}^{kl}<\cdots \right\} $ with intensity~$\beta _{kl}$. This means, for example, that $\left\{ \sigma _{n}^{k}-\sigma _{n-1}^{k}\right\} _{n=1}^{\infty }$ is a sequence of independent exponentially distributed random variables with mean~$\lambda _{k}^{-1}$: $\forall n=1,2,\ldots \quad \mathsf{P} \left\{ \sigma _{n}^{k}-\sigma _{n-1}^{k}>s\right\} =\exp \left(-\lambda _{k}s\right)$, and similarly for the flows $\Pi ^{kl}$. We also assume that all these flows $\Pi ^{k}$ and $\Pi ^{kl}$ are mutually independent. 
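The Poissonian flows $\Pi^{k}$ and $\Pi^{kl}$ can be sampled directly as cumulative sums of independent exponential gaps. A quick sketch (helper names are our own):

```python
import random

def poisson_flow(rate, horizon, rng):
    """Event instants sigma_1 < sigma_2 < ... of a Poisson flow with the given
    intensity on [0, horizon]: cumulative sums of Exp(rate)-distributed gaps."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate)   # gap with mean 1/rate
        if t > horizon:
            return times
        times.append(t)

rng = random.Random(7)
flow = poisson_flow(2.0, 10000.0, rng)
# the number of events is about rate * horizon, and gaps average about 1/rate
```

Mutually independent flows, as assumed above, are obtained simply by drawing each flow from its own independent stream of exponentials.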
Let us now define a stochastic process $\left(X(t)=\left(x_{1}(t),\ldots ,x_{N}(t)\right),t\in \mathbf{R} _{+}\right)$ on the state space $\bZ ^{N}$ according to the following rules. 1) At time instants $\sigma _{n}^{k}$ the processor $k$ increases its local time~$x_{k}$ by $1$:~ $x_{k}(\sigma _{n}^{k}+0)=x_{k}(\sigma _{n}^{k})+1$. 2) There is an exchange of information between different processors. At time instant~$\sigma _{i}^{kl}$ the processor~$k$ sends a message $m_{kl}^{(x_{k})}$ to the processor~$l$. We assume that messages reach their destination immediately. A message $m_{kl}^{(x_{k})}$ coming to node $l$ from node~$k$ contains information about the local time $x_{k}(\sigma _{i}^{kl})=x_{k}$ of the sender~$k$. \emph{If} at the time instant~$\sigma _{i}^{kl}$ (when the message $m_{kl}^{(x_{k})}$ arrives at the node~$l$) we have $x_{l}(\sigma _{i}^{kl})>x_{k}(\sigma _{i}^{kl})$, \emph{then} the local time~$x_{l}$ rolls back to the value $x_{k}$: $x_{l}(\sigma _{i}^{kl}+0)=x_{k}(\sigma _{i}^{kl})$. Moreover, if the processor~$l$ rolls back, then all messages sent by the processor $l$ during the time interval~$\mathcal{I}=(\theta _{l}(x_{k},\sigma _{i}^{kl}),\sigma _{i}^{kl})$, where $\theta _{l}(x,u):=\max \left\{ s\leq u:\, x_{l}(s)=x,\, x_{l}(s+0)=x+1\right\} $, should be eliminated. This may generate a cascading rollback of local times for some subset of processors. For example, assume that there is a processor $q$ which received a message $m_{lq}^{(x'_{l})}$ at some time instant $s'\in \mathcal{I}$ and $x_{q}(\sigma _{i}^{kl})>x_{l}(s')=x'_{l}$. Then the local clock~of~$q$ should be rolled back to the value $x_{l}(s')$: $x_{q}(\sigma _{i}^{kl}+0)=x_{l}(s')$ and, moreover, all messages sent by $q$ during the interval~$(\theta _{q}(x_{l}(s'),\sigma _{i}^{kl}),\sigma _{i}^{kl})$ should be deleted, and so on. 
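The rules just stated can be exercised in a minimal Gillespie-style sketch for $N=2$, where rollback chains play no role (a single message either rolls processor 2 back or does nothing). This is our own illustration with hypothetical rates, not the authors' code:

```python
import random

def simulate_two(lam1, lam2, beta12, horizon, rng):
    """Continuous-time two-processor model: local-time ticks at rates lam1,
    lam2 (rule 1); messages 1 -> 2 at rate beta12 roll x2 back to x1
    whenever x2 > x1 (rule 2). Returns the empirical speeds x_k / horizon."""
    x1 = x2 = 0
    t = 0.0
    Z = lam1 + lam2 + beta12          # total event rate
    while True:
        t += rng.expovariate(Z)       # time to the next event of any kind
        if t > horizon:
            return x1 / horizon, x2 / horizon
        u = rng.random() * Z          # pick the event type by its rate
        if u < lam1:
            x1 += 1
        elif u < lam1 + lam2:
            x2 += 1
        elif x2 > x1:                 # message from 1 to 2 triggers rollback
            x2 = x1

rng = random.Random(0)
v1, v2 = simulate_two(1.0, 2.0, 1.0, 20000.0, rng)
# with lam1 < lam2 both empirical speeds come out close to lam1
```

The observed behavior, both processors progressing at roughly the slower rate $\lambda_1$, matches the ergodic regime established later for $N=2$.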
Hence, at time instant $\sigma _{i}^{kl}$ a message from~$k$ to $l$ can provoke a multiple rollback of processor~$l,q,\ldots $ in the system. \subsection{Cascade model} \label{sub:Cascade-model} From now we shall consider the following special subclass of the above general model. A chain of processors $1,2,\ldots ,N$ is called a \emph{cascade} if any processor $j$ can send a message only to its right neighbour $j+1$. Hence, the processor~$N$ does not send any message and the processor~$1$ does not receive any message. In other words, $\beta _{ij}\not =0\, \Leftrightarrow \, (j=i+1)$. A message sent from $j$ to $j+1$ can provoke a cascading roll-back of processors $j+2,\ldots $~. Recall that all above time intervals are exponentially distributed and assumed to be independent. Obviously, the stochastic process $X_{c}^{(N)}(t)=\left(\, x_{1}(t),\ldots ,x_{N}(t)\, \right)$ is Markovian. A very important property is that any {}``truncated'' marginal process $X_{c}^{(N_{1})}(t)=\left(\, x_{1}(t),\ldots ,x_{N_{1}}(t)\, \right)$, $N_{1}\leq N$, is also Markovian. Assume that for any $j$ the following limit \begin{equation} v^{*} _{j}=\lim _{t\rightarrow +\infty }\frac{x_{j}(t)}{t}\qquad (\textrm{in probability})\label{eq:v*j-lim}\end{equation} exists. Then the numbers $v^{*} _{j}$, $j=1,\ldots ,N$, characterize \emph{performance} of the model. The main goal of the present paper is to prove the existence of these limits and to calculate them. Note that if we uniformly transform the absolute time scale $t=cs$, where $c>0$ is a constant and $s$ is a new absolute time scale, the performance characteristics~(\ref{eq:v*j-lim}) will not change. \section{Definition of the discrete time cascade model} \label{sec:Definition-of-discrete} Consider a sequence \[ 0=\tau _{0}<\tau _{1}<\tau _{2}<\cdots <\cdots \] of time moments when changes of local time at nodes may happen (we mean local time updates and moments of sending of messages). 
It is clear that $\left\{ \tau _{r+1}-\tau _{r}\right\} _{r=0}^{\infty }$ is a sequence of independent identically distributed r.v. having exponential distribution with parameter\[ Z=\sum _{i=1}^{N}\lambda _{i}+\sum _{i=1}^{N-1}\beta _{i,i+1}.\] Observing the continuous time Markov process $(x_{1}(t),\ldots ,x_{N}(t))$ at the epochs $\tau _{n}$ we get the so-called embedded discrete time Markov chain $\left\{ \Xd{n},n=0,1,\ldots \right\} $ with state space $\mathbf{Z} _{+}^{N}$. In the sequel we will be interested in the long-time behaviour of the chain $\left\{ \Xd{n},n=0,1,\ldots \right\} $. \paragraph*{Transition probabilities.} In the MC $\left\{ \Xd{n},n=0,1,\ldots \right\} $ there are transitions produced by the free dynamics and transitions generated by rollbacks. By the free dynamics we mean updating of local times\[ P\left\{ \Xd{n+1}=x+\mathbf{e} _{j}\, |\, \Xd{n}=x\right\} =\lambda _{j}Z^{-1},\quad j=1,\ldots ,N\, ,\] where $\mathbf{e} _{j}=(0,\ldots ,0,\underbrace{1}_{j},0,\ldots ,0)$ is the $j$-th unit coordinate vector. 
It is easy to see that if a state $x=(x_{1},\ldots ,x_{N})$ is such that for some $j$ ~ $x_{j}<x_{j+1}$ then a message sent from $j$ to $j+1$ produces a transition of the following form \begin{equation} (x_{1},\ldots ,x_{j},x_{j+1},\ldots ,x_{l},x_{l+1},\ldots ,x_{N})\rightarrow (x_{1},\ldots ,x_{j},w_{j+1},\ldots ,w_{l},x_{l+1},\ldots ,x_{N})\label{eq:x-tran}\end{equation} with probability\begin{equation} Z^{-1}\beta _{j,j+1}\, \prod _{q=j+1}^{l-1}p(w_{q},x_{q};w_{q+1},x_{q+1})\, \times \, \left(1-b_{l}\right)^{\min \left(x_{l},x_{l+1}-1\right)-w_{l}+1},\label{eq:pr-x-tran}\end{equation} where \begin{itemize} \item sequence $\left(w_{j+1},\ldots ,w_{l}\right)$ is admissible in the following sense:\[ j<l\leq N,\quad \quad w_{j+1}=x_{j}\qquad w_{q}\leq w_{q+1}\leq \min \left(x_{q},x_{q+1}-1\right),\quad (j<q<l)\] \item $p(w_{q},x_{q};w_{q+1},x_{q+1})=b_{q}\left(1-b_{q}\right)^{w_{q+1}-w_{q}}$ \item $\displaystyle b_{q}=\frac{\lambda _{q}}{\lambda _{q}+\beta _{q,q+1}}$, $q<N$. \end{itemize} Here $b_{q}$ is the probability of an event that processor~$q$ in state~$x_{q}$ sends at least one message to~$q+1$ before updating its state $x_{q}\rightarrow x_{q}+1$. For $q=N$ we put $b_{N}=0$. So in the case $l=N$ the probability~(\ref{eq:pr-x-tran}) takes the form \[ Z^{-1}\beta _{j,j+1}\, \prod _{q=j+1}^{N-1}p(w_{q},x_{q};w_{q+1},x_{q+1})\, .\] \paragraph*{Relative coordinates.} Note that the first processor $x_{1}(t)$ evolves independently of other processors. It is useful to introduce new process $Y_{c}(t)=(y_{2}(t),\ldots ,y_{N}(t))\in \mathbf{Z} ^{N-1}$ in relative coordinates as viewing by an observer sitting at the point $x_{1}(t)$:\[ y_{j}(t):=x_{j}(t)-x_{1}(t),\quad j=2,\ldots ,N\, .\] In a similar way we define $\Yd{n}=Y_{c}(\tau _{n})$, $n=0,1,\ldots $~. 
The free dynamics produce the following transitions of $\Yd{n}$: \begin{eqnarray} P\left\{ \Yd{n+1}=y+\mathbf{e} _{j}\, |\, \Yd{n}=y\right\} & = & \lambda _{j}Z^{-1},\quad j=2,\ldots ,N\, ,\label{eq:trYo}\\ P\left\{ \Yd{n+1}=y-{\textstyle \sum _{j=2}^{N}}\mathbf{e} _{j}\, |\, \Yd{n}=y\right\} & = & \lambda _{1}Z^{-1}.\label{eq:trYd} \end{eqnarray} Since a rollback does not affect the first processor, the corresponding transitions have the same form and the same probabilities as~(\ref{eq:x-tran}) and~(\ref{eq:pr-x-tran}). \section{Stochastic monotonicity} All statements of this section are valid for both Markov processes $X_{c}^{(N)}(t)$, $t\in \mathbf{R} _{+}$, and $X(n)$, $n\in \mathbf{Z} _{+}$. For the sake of brevity we give here results only for the continuous time model $X_{c}^{(N)}(t)$. The following results will play a significant part in the proof of Theorem~\ref{t:N-3} in Section~\ref{sec:N-3}. \begin{theorem}\label{t-st-mon} \noindent Let us consider two cascade models (say $X_{c,1}^{(n)}(t)$ and $X_{c,2}^{(n)}(t)$~) with processors $1,2,\ldots ,n$ and parameters $\lambda _{1},\ldots ,\lambda _{n}$ and $\beta _{12}^{(1)},\beta _{23}^{(1)},\ldots ,\beta _{n-1,n}^{(1)}$ for the first model $X_{c,1}^{(n)}(t)$ and parameters $\lambda _{1},\ldots ,\lambda _{n}$ and $\beta _{12}^{(2)},\beta _{23}^{(2)},\ldots ,\beta _{n-1,n}^{(2)}$ for the second model $X_{c,2}^{(n)}(t)$. 
Assume that \[ \beta _{i,i+1}^{(1)}\leq \beta _{i,i+1}^{(2)}\qquad \forall i\, .\] Then $X_{c,1}^{(n)}$ is stochastically larger than $X_{c,2}^{(n)}$~, that is: if $X_{c,1}^{(n)}(0)=X_{c,2}^{(n)}(0)$ then $X_{c,1}^{(n)}(t)\geq _{{st}} X_{c,2}^{(n)}(t)$ for any~$t$.~% \footnote{It means that there exists a \emph{coupling} $\left(\widetilde{X}_{1}^{(n)}(t,\omega ),\widetilde{X}_{2}^{(n)}(t,\omega )\right)$ of the stochastic processes $X_{1}^{(n)}(t)$ and $X_{2}^{(n)}(t)$ such that $P\left\{ \omega :\, \widetilde{X}_{1}^{(n)}(t,\omega )\geq \widetilde{X}_{2}^{(n)}(t,\omega )\, \, \forall t\right\} =1$. If $w,z\in \mathbf{R} ^{n}$ we say $w\geq z$ if $w_{i}\geq z_{i}$ for all $i=1,\ldots ,n$~(partial order).% } \noindent \end{theorem} \smallskip{} \noindent \textbf{Proof} may be given by an explicit coupling construction of the processes $X_{c,1}^{(n)}(t)$ and $X_{c,2}^{(n)}(t)$ on the same probability space. The following fact should be used: a~Poisson flow with intensity $\beta _{12}^{(1)}$ can be obtained from a Poisson flow with intensity $\beta _{12}^{(2)}$ in which every point (independently of the others) is killed with probability $1-\beta _{12}^{(1)}/\beta _{12}^{(2)}$. 
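The thinning fact used in the proof is easy to state in code: keeping each point of a rate-$\beta_{12}^{(2)}$ flow independently with probability $\beta_{12}^{(1)}/\beta_{12}^{(2)}$ yields a rate-$\beta_{12}^{(1)}$ flow. A sketch with our own helper names:

```python
import random

def thin_flow(times, keep_prob, rng):
    """Keep each event independently with probability keep_prob: a Poisson
    flow of intensity b2 becomes a Poisson flow of intensity b2 * keep_prob."""
    return [t for t in times if rng.random() < keep_prob]

rng = random.Random(3)
horizon, b2, b1 = 5000.0, 2.0, 0.5

# rate-b2 flow as cumulative sums of exponential gaps
times, t = [], 0.0
while True:
    t += rng.expovariate(b2)
    if t > horizon:
        break
    times.append(t)

thinned = thin_flow(times, b1 / b2, rng)
# len(thinned) / horizon is close to b1
```

Because the thinned flow is a sub-flow of the original one, the coupled model with the larger message intensity never has fewer rollback opportunities, which is exactly the monotonicity exploited in the theorem.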
\medskip{} \begin{cor}[Solid barriers]\label{cor-barrier} \noindent Fix some $1\leq r_{1}<r_{2}<\cdots <r_{b}<n$ and consider two cascade models: $X_{c,1}^{(n)}(t)$ with parameters $\left(\, \lambda _{1},\ldots ,\lambda _{n}\, ;\, \beta _{12}^{(1)},\beta _{23}^{(1)},\ldots ,\beta _{n-1,n}^{(1)}\, \right)$ and $X_{c,2}^{(n)}(t)$ with parameters $\left(\, \lambda _{1},\ldots ,\lambda _{n}\, ;\, \beta _{12}^{(2)},\beta _{23}^{(2)},\ldots ,\beta _{n-1,n}^{(2)}\, \right)$, where\[ \beta _{i,i+1}^{(2)}=\beta _{i,i+1}^{(1)}\quad \forall i\not \in \left\{ r_{1},\ldots ,r_{b}\right\} ,\qquad \beta _{i,i+1}^{(2)}=0\quad \forall i\in \left\{ r_{1},\ldots ,r_{b}\right\} .\] We can say that the model $X_{c,2}^{(n)}(t)$ differs from the model $X_{c,1}^{(n)}(t)$ by the presence of $b$ solid barriers between processors $r_{1}$ and $r_{1}+1$, \ldots, $r_{b}$ and $r_{b}+1$. Then by Theorem~\ref{t-st-mon} we have that \[ X_{c,1}^{(n)}(t)\leq _{\mathrm{s}t} X_{c,2}^{(n)}(t)\, .\] \end{cor} \section{Case $N=2$} \label{sec:Case-N-2} We start with the Markov chain $X_{c}^{(2)}(t)$. Since processor $1$ works independently, it is enough to consider the Markov chain $Y_{c}^{(2)}(t)=x_{2}(t)-x_{1}(t)$. Bearing in mind the remark at the end of Subsection~\ref{sub:Cascade-model}, for brevity of notation let us rescale absolute time in such a way that $Z=1$. Then the Markov chain $Y(n)$ has the following transition probabilities\[ p_{i,i+1}=\lambda _{2},\quad p_{i,i-1}=\lambda _{1},\quad p_{i,0}=\beta _{12}\, \, (i\geq 0),\quad p_{i,i}=\beta _{12}\, \, (i<0)\] and $p_{i,j}=0$ for any another pair $i,j$~. \begin{center}\includegraphics [width=6in]{n-2.eps}\end{center} \begin{theorem}\label{t:n-2} If $\lambda _{1}<\lambda _{2}$ then the Markov chain $\Yd{n}$ is ergodic and we have $v^{*} _{1}=v^{*} _{2}=\lambda _{1}$. If $\lambda _{1}>\lambda _{2}$ then the Markov chain $\Yd{n}$ is transient and we have $v^{*} _{1}=\lambda _{1}$, $v^{*} _{2}=\lambda _{2}$. 
\end{theorem} \begin{proof}The Markov chain $Y(n)$ is one-dimensional and its analysis is quite easy. To~establish ergodicity under the assumption $\lambda _{1}<\lambda _{2}$ we use the Foster--Lyapunov criterion (Theorem~\ref{t:Foster}, see~Appendix) with the test function $f(y)=|y|$, $y\in \mathbf{Z} $. This implies that $x_{2}(t)-x_{1}(t)$ has a limit in distribution as $t\rightarrow \infty $. Recall that $x_{1}(t)$ is a Poissonian process, hence the limit $\displaystyle v^{*} _{1}=t^{-1}\lim _{t}x_{1}(t)=\lambda _{1}$ exists (in probability). It follows from this that $\displaystyle v^{*} _{2}=t^{-1}\lim _{t}x_{2}(t)=\lambda _{1}$. Under the assumption $\lambda _{1}>\lambda _{2}$ we get transience by choosing the function $f(y)=\min (e^{\delta y},1)$, $y\in \mathbf{Z} $, where we fix sufficiently small~$\delta >0$, and applying Theorem~\ref{t:kr-trans} from the Appendix. Therefore any trajectory of $Y(n)$ spends a finite time in any prefixed domain $\{y\geq C\}$, entailing $\lim _{t\rightarrow \infty }x_{2}(t)-x_{1}(t)=-\infty $ (a.s.). It means that after some time the messages from $1$ to~$2$ cannot produce a rollback anymore, so $x_{1}(t)$ and $x_{2}(t)$ become asymptotically independent and hence $\displaystyle v^{*} _{2}=t^{-1}\lim _{t}x_{2}(t)=\lambda _{2}$. \end{proof} \section{Case $N=3$} \label{sec:N-3} \begin{theorem}\label{t:N-3} Four situations are possible. \begin{enumerate} \item If $\lambda _{1}<\min \left(\lambda _{2},\lambda _{3}\right)$ then $v^{*} _{1}=v^{*} _{2}=v^{*} _{3}=\lambda _{1}$. \item If $\lambda _{2}>\lambda _{1}>\lambda _{3}$ then $v^{*} _{1}=v^{*} _{2}=\lambda _{1}$, $v^{*} _{3}=\lambda _{3}$. \item If $\lambda _{2}<\min \left(\lambda _{1},\lambda _{3}\right)$ then $v^{*} _{1}=\lambda _{1}$, $v^{*} _{2}=v^{*} _{3}=\lambda _{2}$. \item If $\lambda _{1}>\lambda _{2}>\lambda _{3}$ then $v^{*} _{1}=\lambda _{1}$, $v^{*} _{2}=\lambda _{2}$, $v^{*} _{3}=\lambda _{3}$. 
\end{enumerate} \end{theorem} Items 2, 3 and 4 can be reduced in some sense to the results of the case $N=2$ (see Theorem~\ref{t:n-2}). We prove them in the current section. The proof of item~1 is much more intricate and relies heavily on the construction of an adequate Lyapunov function, requiring lengthy developments deferred to Section~\ref{sec:N-3-Lyap}. \paragraph{Proof of Theorem~\ref{t:N-3} (items 2--4).} We start with item~2: $\lambda _{2}>\lambda _{1}>\lambda _{3}$. Since the first two processors are governed by the Markov chain $X_{c}^{(2)}(t)$ and do not depend on the state of processor~3, we apply Theorem~\ref{t:n-2} and conclude that $X_{c}^{(2)}(t)$ is ergodic and $v^{*} _{1}=v^{*} _{2}=\lambda _{1}$. Let us compare the following two cascade models \[ X_{c}^{(3)}(t):\qquad 1\stackrel{\beta _{1,2}}{\, \longrightarrow }\, 2\stackrel{\beta _{2,3}}{\, \longrightarrow }\, 3\] \[ X_{c,2}^{(3)}(t):\qquad 1\stackrel{\beta _{1,2}}{\, \longrightarrow }\, 2\stackrel{0}{\, \longrightarrow }\, 3\] (the parameters $\lambda _{1}$, $\lambda _{2}$ and $\lambda _{3}$ are the same for both models $X_{c}^{(3)}(t)$ and $X_{c,2}^{(3)}(t)$~). In the model $X_{c,2}^{(3)}$ the groups of processors $\{1,2\}$ and $\{3\}$ evolve independently. Evidently, the asymptotic speed of processor~$3$ in the model $X_{c,2}^{(3)}$ exists and is equal to~$\lambda _{3}$. By Corollary~\ref{cor-barrier} $X_{c}^{(3)}(t)\leq _{\mathrm{s}t} X_{c,2}^{(3)}(t)$. Hence in the model $X_{c}^{(3)}$ the asymptotic speed of processor~$3$ is \emph{not greater} than $\lambda _{3}$. Since $\lambda _{3}<\lambda _{1}$ we conclude that there exists some time moment $T_{0}$ such that for $t\geq T_{0}$ in the model~$X_{c}^{(3)}$ messages from $2$ to $3$ that roll back processor~3 will be very {}``rare''. So these rare rollbacks will not be essential for the asymptotic speed of processor~3. 
In other words, as $t\rightarrow \infty $ the groups of processors $\{1,2\}$ and $\{3\}$ of the model $X_{c}^{(3)}$ become asymptotically independent, so processor~3 moves with the average speed $\lambda _{3}$. Items 3 and 4 can be treated in a similar way. Note that item~3 consists of two subcases: $\lambda _{1}>\lambda _{3}>\lambda _{2}$ and $\lambda _{3}>\lambda _{1}>\lambda _{2}$. We omit the details. \section{Explicit construction of a Lyapunov function} \label{sec:N-3-Lyap} In this section we prove item~1 of Theorem~\ref{t:N-3}. Recall that our key assumption here is \begin{equation} \lambda _{1}<\lambda _{2},\quad \lambda _{1}<\lambda _{3}.\label{eq:osn-predp}\end{equation} The main idea is to prove that the Markov chain~$Y(n)$ is ergodic. To do this we apply the Foster-Lyapunov criterion (see Theorem~\ref{t:Foster} in Appendix). As in the case of Theorem~\ref{t:n-2}, ergodicity of $Y(n)$ implies that $v^{*} _{j}=\lambda _{1}$, $j=1,2,3$~. \subsection{Transition probabilities} \label{sub:Transition-probabilities} Consider the embedded Markov chain $Y(n)$. The stochastic dynamics produced by this Markov chain consists of two components: transitions generated by the free dynamics and transitions generated by rollbacks. For each transition probability $p_{\alpha \beta }$, $\alpha \not =\beta $, we have the following representation:\begin{equation} p_{\alpha \beta }=s_{\alpha \beta }+r_{\alpha \beta }\, ,\label{eq:p-s-r}\end{equation} where $s_{\alpha \beta }\geq 0$ corresponds to a transition $\alpha \rightarrow \beta $ which occurs due to the free dynamics and $r_{\alpha \beta }$ corresponds to a rollback transition $\alpha \rightarrow \beta $. Taking into account the remark at the end of Subsection~\ref{sub:Cascade-model}, without loss of generality we assume that time is rescaled in such a way that $Z=1$. This slightly simplifies the notation for transition probabilities.
For example, the free dynamics transition probabilities~(\ref{eq:trYo})--(\ref{eq:trYd}) are equal to $\lambda _{2}$, $\lambda _{3}$ and $\lambda _{1}$ respectively. In the next figure we show all non-zero transitions $\alpha \rightarrow \beta $, $(\alpha \not =\beta )$. Of course, $p_{\alpha \alpha }=1-\sum _{\beta \not =\alpha }p_{\alpha \beta }$, but we omit this information from the picture. Below we give the explicit form of the rollback transition probabilities: \begin{eqnarray*} 1\rightarrow 2: & \qquad & r_{yz}=\beta _{12}\quad \quad \textrm{for }0<y_{2}\\ 2\rightarrow 3: & & r_{yz}=\beta _{23}\quad \quad \textrm{for }y_{2}<y_{3}\\ 1\rightarrow 2\rightarrow 3: & & r_{yz}=\left\{ \begin{array}{rl} \beta _{12}\left(1-b_{2}\right)^{z_{3}}b_{2},\quad & z_{3}<y_{3}\\ \beta _{12}\left(1-b_{2}\right)^{y_{3}},\quad & z_{3}=y_{3}\end{array}\right.\quad \textrm{for }0<y_{3}\leq y_{2}\\ 1\rightarrow 2\rightarrow 3: & & r_{yz}=\left\{ \begin{array}{rl} \beta _{12}\left(1-b_{2}\right)^{z_{3}}b_{2},\quad & z_{3}\leq y_{2}\\ \beta _{12}\left(1-b_{2}\right)^{y_{2}+1},\quad & z_{3}=y_{3}\end{array}\right.\quad \textrm{for }0<y_{2}<y_{3} \end{eqnarray*} where $y=(y_{2},y_{3})$, $z=(z_{2},z_{3})$. \includegraphics [width=5in]{n-3.eps} \subsection{Contour of level 1} In the plane $Oy_{2}y_{3}$ consider the ellipse\[ e(y_{2},y_{3})=ay_{2}^{2}+b\left(y_{2}-y_{3}\right)^{2}=1,\qquad a>0,b>0,\] and draw a tangent line to it with normal vector $\left(-\Delta ,1\right)$. Evidently, there exist two tangent lines with the same normal vector $\left(-\Delta ,1\right)$. If $\Delta >0$ is sufficiently large then one of these tangent lines touches the ellipse at some point $T_{3}$ of the domain $y_{2}<0$, $y_{3}<0$. Take a segment on this line from the point $T_{3}$ to the point $K_{3}=(0,u_{3})$ of intersection with the coordinate axis $Oy_{3}$. Now let us draw tangent lines to the ellipse corresponding to a normal vector $\left(1,-\Delta \right)$.
If $\Delta >0$ is sufficiently large, then one of these lines touches the ellipse at some point $T_{2}$ of the domain $y_{3}<0$. Let us take this tangent line and fix a segment on it from the point $T_{2}$ to the point $K_{2}=(u_{2},0)$ of intersection with the coordinate axis~$Oy_{2}$. It is evident that $[K_{2}K_{3}]=\mathbf{R} _{+}^{2}\cap \left\{ (y_{2},y_{3}):\, y_{2}/u_{2}+y_{3}/u_{3}=1\right\} $. Let us now consider a closed contour $L$, consisting of the subsequently joined segment $K_{3}K_{2}$, segment $K_{2}T_{2}$, arc $T_{2}T_{3}$ of the ellipse and segment $T_{3}K_{3}$. This contour has the following property: any ray of the form $\{cv,\, c>0\}$, where $v\in \mathbf{R} ^{2}$, $v\not =0$, has exactly one common point with the contour $L$. \begin{center}\includegraphics [width=5in]{elli3.eps}\end{center} We denote by $n(y)$ the outer unit normal vector of the contour $L$ at the point $y\in L$; $n(y)$ is well defined at all points of~$L$ except the points $K_{2}$ and $K_{3}$ and, moreover, this function is continuous on~$L$ except at the points~$K_{2}$ and $K_{3}$. The~behaviour of $n(y)$ on the arc $T_{2}T_{3}$ is of prime interest: \[ n(y)=\frac{\nabla e(y)}{\left\Vert \nabla e(y)\right\Vert },\qquad \nabla e(y)=2(\, ay_{2}+b(y_{2}-y_{3}),-b(y_{2}-y_{3})\, ),\quad y\in T_{2}T_{3}\subset L\, .\] It is easy to see that $n(y)=n(T_{2})$ for $y\in (K_{2}T_{2}]$, $n(y)=n(T_{3})$ for $y\in [T_{3}K_{3})$ and\[ n(y)=\left(u_{2}^{-1},u_{3}^{-1}\right)\qquad y\in (K_{3}K_{2}).\] For the sequel it is important to point out the following points of the arc $T_{2}T_{3}$: $y^{(3)}=(-a^{-1/2},-a^{-1/2})$ and $y^{(2)}$, $\left\{ y^{(2)}\right\} =T_{2}T_{3}\cap \left\{ y_{3}^{(2)}=\frac{a+b}{b}y_{2}^{(2)}\right\} $. It is easy to check that \[ n(y^{(2)})\, \Vert \, Oy_{3},\quad n(y^{(3)})\, \Vert \, Oy_{2}\, .\] Obviously, both points belong to the domain $\left\{ y_{2}<0,\, y_{3}<0\right\} $.
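The parallelism claims $n(y^{(2)})\, \Vert \, Oy_{3}$ and $n(y^{(3)})\, \Vert \, Oy_{2}$ can also be confirmed by a direct numerical computation. The following sketch uses the gradient formula above; the ellipse parameters $a=2$, $b=3$ are arbitrary values chosen only for illustration.

```python
import math

# Ellipse e(y2, y3) = a*y2^2 + b*(y2 - y3)^2 = 1; gradient as in the text:
# grad e(y) = 2*(a*y2 + b*(y2 - y3), -b*(y2 - y3)).
def grad_e(a, b, y2, y3):
    return (2 * (a * y2 + b * (y2 - y3)), -2 * b * (y2 - y3))

a, b = 2.0, 3.0  # arbitrary positive parameters (illustration only)

# y^(3) = (-a^{-1/2}, -a^{-1/2}): both coordinates are equal, so the
# second gradient component vanishes and n(y^(3)) is parallel to Oy2.
s = -1 / math.sqrt(a)
g3 = grad_e(a, b, s, s)
assert abs(g3[1]) < 1e-12 and g3[0] != 0

# y^(2) lies on the ellipse and on the line y3 = ((a+b)/b)*y2; then
# y2 - y3 = -(a/b)*y2, so the first gradient component vanishes and
# n(y^(2)) is parallel to Oy3.  Solve for the point with y2 < 0, y3 < 0.
y2 = -1 / math.sqrt(a + a * a / b)   # from a*y2^2 + b*(a/b)^2*y2^2 = 1
y3 = (a + b) / b * y2
assert abs(a * y2 ** 2 + b * (y2 - y3) ** 2 - 1) < 1e-12  # point is on the ellipse
g2 = grad_e(a, b, y2, y3)
assert abs(g2[0]) < 1e-12 and g2[1] != 0
```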
\begin{lemma} ~\label{l-normal} The function $n(y)$ has the following properties: \begin{itemize} \item $\skp{n(y)}{y}\not =0$ $\forall y\in L\backslash \{K_{2},K_{3}\}$ \item If $\Delta >0$ is sufficiently large then there exist continuous functions $c_{2}(y)$ and $c_{3}(y)$ such that \[ c_{2}(T_{2})=c_{3}(T_{3})=1,\qquad c_{2}(T_{3})=c_{3}(T_{2})=0,\qquad c_{2}(y)>0,\, c_{3}(y)>0\quad \textrm{for }y\in (T_{2},T_{3})\] \[ n(y)=c_{2}(y)n(T_{2})+c_{3}(y)n(T_{3}),\qquad y\in (T_{2},T_{3}).\] \item $\skp{n(y)}{(0,-1)}<0$ if $y_{2}<0$, $y_{3}>y_{2}$, and $\skp{n(y)}{(-1,0)}<0$ if $y_{2}>0$, $y_{3}<0$. \end{itemize} \end{lemma} \subsection{Definition of the function $\varphi $} For any point $(y_{2},y_{3})\in \mathbf{R} ^{2}\backslash \{0\}$ define $\varphi (y_{2},y_{3})>0$ such that \[ \frac{(y_{2},y_{3})}{\varphi (y_{2},y_{3})}\, \in \, L\, .\] For $(y_{2},y_{3})=0$ we put $\varphi (0,0)=0$. The function $\varphi (y_{2},y_{3})$ is well-defined and has the following properties: \begin{itemize} \item $\varphi :\, \mathbf{R} ^{2}\rightarrow \mathbf{R} _{+}$ (positivity) \item $\varphi (ry_{2},ry_{3})=r\varphi (y_{2},y_{3})$, $r>0$, (homogeneity) \item $L=\{y:\, \varphi (y)=1\}$. \end{itemize} To any point $y=(y_{2},y_{3})$ we associate the point $y^{*}:=\frac{y}{\varphi (y)}\in L$. Therefore, $\varphi (y^{*})=1$. \begin{center}\includegraphics [width=5in]{elli4.eps}\end{center} \begin{lemma} \label{l-grad-fi}~ \begin{itemize} \item The gradient $\nabla \varphi (y)$ exists at all points except those $y$ for which $y^{*}=K_{2}$ or $K_{3}$ and, moreover, the gradient is constant on rays of the form $\{cv,\, c>0\}$, $v\in \mathbf{R} ^{2}$:\[ \nabla \varphi (y)=\frac{n(y^{*})}{\skp{y^{*}}{n(y^{*})}}\, .\] \item Let $y=(y_{2},y_{3})$ be such that $y^{*}\in T_{2}T_{3}$.
Then\begin{equation} \left|\varphi (w)-\skp{\nabla \varphi (y^{*})}{w}\right|\leq \frac{const}{\varphi (y)}\, \left\Vert w-y\right\Vert ^{2}.\label{eq:lagrange-ineq}\end{equation} In other words, in a neighbourhood of the point $y$ the function $\varphi $ can be approximated by the linear function $\skp{\nabla \varphi (y^{*})}{\cdot }$. \end{itemize} \end{lemma} In particular, $\varphi (y)=\skp{\nabla \varphi (y^{*})}{y}$. The proof of Lemma~\ref{l-grad-fi} is a straightforward computation. \subsection{Modification of the principle of local linearity} For any state $\alpha $ define a set $T_{\alpha }=\{\beta :\, p_{\alpha \beta }>0\}$. Recall decomposition~(\ref{eq:p-s-r}) and define $F_{\alpha }=\{\beta :\, s_{\alpha \beta }>0\}$ and $R_{\alpha }=\{\beta :\, r_{\alpha \beta }>0\}$. It is evident that $T_{\alpha }=F_{\alpha }\cup R_{\alpha }$. The simplest case is $F_{\alpha }\cap R_{\alpha }=\varnothing $. The case $F_{\alpha }\cap R_{\alpha }\not =\varnothing $ can be reduced to the previous one by a dilatation of the state space. Thus we assume that $F_{\alpha }\cap R_{\alpha }=\varnothing $ and consider the events $\left\{ Y (n+1)\in F_{\alpha }\right\} $ and $\left\{ Y (n+1)\in R_{\alpha }\right\} $. On the set $\left\{ \omega \in \Omega :\, Y (n,\omega )=\alpha \right\} $ we have $\IF{\alpha }(\omega )+\IR{\alpha }(\omega )\equiv 1$. Hence, \begin{eqnarray*} \mathsf{E}\, \left(f(Y (n+1))\, |\, Y (n)=\alpha \right)-f(\alpha ) & = & \mathsf{E}\, \left(\left(f(Y (n+1))-f(\alpha )\right)\IF{\alpha }\, |\, Y (n)=\alpha \right)+\\ & & +\mathsf{E}\, \left(\left(f(Y (n+1))-f(\alpha )\right)\IR{\alpha }\, |\, Y (n)=\alpha \right) \end{eqnarray*} It follows from the definition of the Markov chain~$Y(n)$ (see Subsect.~\ref{sub:Transition-probabilities}) that the diameters $d_{\alpha }:=\mathrm{diam}\, F_{\alpha }$ are uniformly bounded in~$\alpha $: $d=\max _{\alpha }d_{\alpha }<+\infty $.
Define a vector\[ M_{F}(\alpha )=\mathsf{E}\, \left(\left(Y (n+1)-\alpha \right)\IF{\alpha }\, |\, Y (n)=\alpha \right)=\sum _{\beta \in F_{\alpha }}(\beta -\alpha )p_{\alpha \beta }\, .\] This is an analogue of the notion of mean jump (see~(\ref{eq:M-alpha}) in Appendix). In the next subsection we shall need the following modification of the principle of local linearity from~\cite{FMM} (see also Subsect.~\ref{sub:Principle-of-local} in Appendix). \begin{lemma}\label{l-mod-loclin}Assume that the following condition holds \[ \inf _{l}\, \sup _{\tilde{\alpha }\in \mathbf{R} ^{n},\Vert \tilde{\alpha }-\alpha \Vert \leq d_{\alpha }}\, \left|f(\tilde{\alpha })-l(\tilde{\alpha })\right|\, <\, \varepsilon \, ,\] where $\inf $ is taken over all linear functions~$l$. If \[ f\left(\alpha +M_{F}(\alpha )\right)-f(\alpha )<-5\varepsilon \, ,\] then the following inequality\[ \mathsf{E}\, \left(\, \left(\, f(Y (n+1))-f(\alpha )\, \right)\IF{\alpha }\, |\, Y (n)=\alpha \, \right)<-\varepsilon \, \] holds. \end{lemma} The proof of this statement repeats the proof of the principle of local linearity presented in~\cite{FMM} and is omitted. \subsection{Proof of the Foster condition} The validity of the Foster condition will follow from several ancillary lemmas dealing with the following different domains of the state space: \begin{eqnarray*} E_{-} & := & \left\{ y=(y_{2},y_{3}):\, \min (y_{2},y_{3})<0\right\} ,\\ E_{1} & := & \left\{ y=(y_{2},y_{3}):\, y_{2}>0,y_{3}>0\right\} ,\\ E_{1,2} & := & \{y_{2}>0,\, y_{3}=0\},\\ E_{1,3} & := & \{y_{2}=0,\, y_{3}>0\}. \end{eqnarray*} \begin{lemma}\label{l-leftlower} Consider the domain $E_{-}=\left\{ y=(y_{2},y_{3}):\, \min (y_{2},y_{3})<0\right\} $.
There exists $\CC{l-leftlower}>0$ such that if $\varphi (y)>\CC{l-leftlower}$, then ~\quad ~a) ~\[ \mathsf{E}\, \left(\left(\varphi (Y (n+1))-\varphi (y)\right)\IR{y}\, |\, Y (n)=y\right)\leq 0\, ,\] ~\quad ~b) there exists $\varepsilon >0$ such that\[ \varphi \left(y+M_{F}(y)\right)-\varphi (y)<-5\varepsilon \, .\] \end{lemma} \begin{proof} It is evident that the vector\[ M_{F}(y)=\left(\lambda _{2}-\lambda _{1},\lambda _{3}-\lambda _{1}\right)\] is constant (does not depend on $y$). Since the vector $n(T_{2})$ is co-directed with the vector $\left(1,-\Delta \right)$, the vector $n(T_{3})$ is co-directed with the vector $\left(-\Delta ,1\right)$, and the conditions $\lambda _{2}>\lambda _{1}$, $\lambda _{3}>\lambda _{1}$ hold, we can find a large $\Delta _{1}>0$ such that \[ \skp{M_{F}(y)}{n(T_{2})}<0,\quad \skp{M_{F}(y)}{n(T_{3})}<0\, ,\qquad \forall \Delta >\Delta _{1}\, .\] Fix this $\Delta _{1}$. Hence, by Lemma~\ref{l-normal} there exists $\varepsilon >0$ such that\[ \skp{M_{F}(y)}{n(y)}<-6\varepsilon \quad \textrm{if }\, \min (y_{2},y_{3})<0.\] Put $w=y+M_{F}(y)$ and consider\begin{eqnarray*} \varphi (w)-\varphi (y) & = & \varphi (w)-\skp{\nabla \varphi (y^{*})}{y}\\ & = & \varphi (w)-\skp{\nabla \varphi (y^{*})}{w}+\skp{\nabla \varphi (y^{*})}{w-y}\\ & = & \varphi (w)-\skp{\nabla \varphi (y^{*})}{w}+\skp{\nabla \varphi (y^{*})}{M_{F}(y)}\, . \end{eqnarray*} By~(\ref{eq:lagrange-ineq}), for any given $\varepsilon >0$ we can choose $C_{0}>0$ such that \[ \left|\varphi (y+M_{F}(y))-\skp{\nabla \varphi (y^{*})}{y+M_{F}(y)}\right|\leq \frac{const}{\varphi (y)}\, d^{2}\leq \varepsilon \, \qquad \textrm{if }\varphi (y)\geq C_{0}\, .\] Now item b) of the lemma easily follows. Let us prove item a) of the lemma. Note that in the domain $y_{3}>y_{2},\, y_{2}<0$ a rollback decreases the coordinate $y_{3}$: $(y_{2},y_{3})\rightarrow \left(y'_{2},y'_{3}\right)=(y_{2},y_{2})$.
From the geometrical properties of the level sets of the function $\varphi $ and item~3 of Lemma~\ref{l-normal} it follows that any transition generated by a rollback decreases the value of the function~$\varphi $: $\varphi (\, (y'_{2},y'_{3})\, )<\varphi (\, (y_{2},y_{3})\, )$. In the domain $y_{3}<0,\, y_{2}>0$ a rollback has the following form: $(y_{2},y_{3})\rightarrow \left(y'_{2},y'_{3}\right)=(0,y_{3})$. For similar reasons we again have $\varphi (\, (y'_{2},y'_{3})\, )<\varphi (\, (y_{2},y_{3})\, )$. In the domain $y_{3}\leq y_{2}<0$ there is no rollback. Now item a) easily follows.\end{proof} \begin{lemma}\label{l-posoct} Consider the domain $E_{1}=\left\{ y=(y_{2},y_{3}):\, y_{2}>0,y_{3}>0\right\} $. \begin{enumerate} \item The conditional expectation\[ \mathsf{E}\, \left(\left(\varphi (Y (n+1))-\varphi (y)\right)\IF{y}\, |\, Y (n)=y\right)=\skp{\left(u_{2}^{-1},u_{3}^{-1}\right)}{M_{F}(y)}\] does not depend on $y$. \item There exist constants $\CC{l-posoct},\ga{l-posoct}>0$ such that \begin{equation} \mathsf{E}\, \left(\left(\varphi (Y (n+1))-\varphi (y)\right)\IR{y}\, |\, Y (n)=y\right)\leq -\ga{l-posoct}\varphi (y)\, \quad \textrm{if}\, \textrm{ }\varphi (y)>\CC{l-posoct}\label{eq:pos-quat-ineq}\end{equation} \end{enumerate} \end{lemma} \begin{proof} The first statement follows from the fact that in this domain $\varphi (y)=\skp{\left(u_{2}^{-1},u_{3}^{-1}\right)}{y},$ and the vector $M_{F}(y)$ does not depend on $y$. Let us prove~(\ref{eq:pos-quat-ineq}). Fix some level set \[ L_{C}^{+}:=\left\{ y:\, \varphi (y)=C\right\} \cap E_{1}\equiv \left\{ y_{2}/u_{2}+y_{3}/u_{3}=C,\, y_{2}>0,y_{3}>0\right\} \] and consider the action of rollbacks for $y\in L_{C}^{+}$. We have three different situations. a) Let $y$ be such that $y_{2}\geq y_{3}>0$.
It follows that $\displaystyle y_{2}\geq \frac{C}{\frac{1}{u_{2}}+\frac{1}{u_{3}}}\, .$ As can easily be concluded from Subsection~\ref{sub:Transition-probabilities}, with probability $\beta _{12}$ we have a rollback of the following form $(y_{2},y_{3})\rightarrow (0,y_{3}')$ where $0\leq y_{3}'\leq y_{3}$. Then we obtain\begin{eqnarray*} \varphi \left((0,y_{3}')\right)-\varphi \left((y_{2},y_{3})\right) & = & \left(\frac{0}{u_{2}}+\frac{y_{3}'}{u_{3}}\right)-\left(\frac{y_{2}}{u_{2}}+\frac{y_{3}}{u_{3}}\right)\\ & \leq & -\frac{y_{2}}{u_{2}}\, \leq \, -\frac{C}{\frac{1}{u_{2}}+\frac{1}{u_{3}}}\, \, \left(u_{2}\right)^{-1}=-\frac{C}{1+u_{2}/u_{3}}\, \end{eqnarray*} uniformly in $y_{3}'$ such that $y_{3}'\leq y_{3}$. In other words, with probability~$\beta _{12}$ the increment of $\varphi (y)$ is less than or equal to $\, -\, \displaystyle \frac{C}{1+u_{2}/u_{3}}$~. Hence the conditional mean \begin{equation} \mathsf{E}\, \left(\left(\varphi (Y (n+1))-\varphi (y)\right)\IR{y}\, |\, Y (n)=y\right)\label{eq:c-mean}\end{equation} does not exceed the value $\, -\displaystyle \frac{\beta _{12}C}{1+u_{2}/u_{3}}$ if $y\in L_{C}^{+}$, $y_{2}\geq y_{3}>0$. b) Let $y\in L_{C}^{+}$ be such that $0<\frac{1}{2}y_{3}\leq y_{2}<y_{3}$. It follows that $y_{2}\geq \frac{C}{2\left(\frac{1}{u_{2}}+\frac{1}{u_{3}}\right)}$. With probability $\beta _{12}$ we have a rollback $(y_{2},y_{3})\rightarrow (0,y_{3}')$ where $0\leq y_{3}'\leq y_{3}$ and with probability~$\beta _{23}$ we have a rollback $(y_{2},y_{3})\rightarrow (y_{2},y_{2})$. Both of them give negative increments of the function $\varphi $, but the first rollback gives the increment $\varphi \left((0,y_{3}')\right)-\varphi \left((y_{2},y_{3})\right)$, which is less than or equal to~$\, -\, \displaystyle \frac{C}{2(1+u_{2}/u_{3})}$. So we conclude that the above conditional mean~(\ref{eq:c-mean}) does not exceed the value $\, -\frac{1}{2}\beta _{12}\left(1+u_{2}/u_{3}\right)^{-1}C$.
c) Now let $y\in L_{C}^{+}$ be such that $0<y_{2}\leq \frac{1}{2}y_{3}$. It follows that $y_{3}-y_{2}\geq K(C)$ where\[ K(C):=\frac{1}{2}\cdot \frac{C}{\left(\frac{1}{2u_{2}}+\frac{1}{u_{3}}\right)}=\frac{Cu_{3}}{u_{3}/u_{2}+2}\, .\] With probability $\beta _{12}$ we have a rollback $(y_{2},y_{3})\rightarrow (0,y_{3}')$, $0\leq y_{3}'\leq y_{3}$, and with probability~$\beta _{23}$ we have a rollback $(y_{2},y_{3})\rightarrow (y_{2},y_{2})$. The first rollback gives a negative increment of the function~$\varphi $, and the second rollback gives the increment $\varphi \left((y_{2},y_{2})\right)-\varphi \left((y_{2},y_{3})\right)$, which is less than or equal to $\, -\, K(C)/u_{3}$. Hence the conditional expectation~(\ref{eq:c-mean}) does not exceed the value $\, -\beta _{23}K(C)/u_{3}=-\beta _{23}\left(u_{3}/u_{2}+2\right)^{-1}C$. The proof of the lemma is complete. \end{proof} \begin{lemma}\label{l-axes} Consider the cases when $y$ belongs to the axes: $y\in E_{1,3}$, $y\in E_{1,2}$. Here\[ \mathsf{E}\, \left(\left(\varphi (Y (n+1))-\varphi (y)\right)\IR{y}\, |\, Y (n)=y\right)\leq -\ga{l-axes}\varphi (y)\, ,\] and\[ \mathsf{E}\, \left(\left(\varphi (Y (n+1))-\varphi (y)\right)\IF{y}\, |\, Y (n)=y\right)\] does not depend on $y$, where $y\in \{y:\, \varphi (y)>\CC{l-axes}\}$. \end{lemma} \begin{proof} We consider in detail the case $E_{1,3}=\{y_{2}=0,\, y_{3}>0\}$. We start with the free dynamics. The following transition \[ (0,y_{3})\rightarrow (y_{2}',y_{3}')=(-1,y_{3}-1)\in E_{3}=\{y_{2}<0,y_{3}>y_{2}\}\] occurs with probability $\relax \lambda _{1}$. It is easy to see that for $y_{3}>\CC{l-axes}$ the values of the function $\varphi (\cdot )$ at both points $(0,y_{3})$ and $(-1,y_{3}-1)$ coincide with the values of the linear function $\skp{n(T_{3})}{\cdot }$.
With probability $\relax \lambda _{2}$ we have a transition\[ (0,y_{3})\rightarrow (y_{2}',y_{3}')=(1,y_{3})\in E_{1}=\{y_{2}>0,y_{3}>0\},\] and with probability $\relax \lambda _{3}$ we have a transition\[ (0,y_{3})\rightarrow (y_{2}',y_{3}')=(0,y_{3}+1)\in E_{1,3}.\] Evidently, at the points $(0,y_{3})$, $(1,y_{3})$ and $(0,y_{3}+1)$ the values of $\varphi (\cdot )$ coincide with the values of a linear function $\skp{n_{1}}{\cdot }$. Hence, \begin{eqnarray*} \mathsf{E}\, \left(\left(\varphi (Y (n+1))-\varphi (y)\right)\IF{y}\, |\, Y (n)=y\right) & = & \lambda _{1}\skp{n(T_{3})}{(-1,-1)}\\ & & \null +\lambda _{2}\skp{n_{1}}{(1,0)}+\lambda _{3}\skp{n_{1}}{(0,1)}. \end{eqnarray*} Since the r.h.s.\ does not depend on~$y$ we get the second statement of the lemma. Due to a rollback the Markov chain $Y(n)$ goes from the state $(0,y_{3})\in E_{1,3}$ to the state $(0,0)$ with probability $\relax \beta _{23}$. Note that the values of $\varphi (\cdot )$ at these two points can be calculated by using the linear function $\skp{n_{1}}{\cdot }$. Obviously, the increment of~$\varphi $ corresponding to this rollback is equal to $\varphi \left(\, (0,0)\, \right)-\varphi \left(\, (0,y_{3})\, \right)=-C$, where $C=\varphi (\, (0,y_{3})\, )$. The first statement of the lemma is proved. The case of the domain $E_{1,2}=\{y_{2}>0,\, y_{3}=0\}$ is similar. \end{proof} \begin{lemma}\label{l-origin} For any $C_{0}>0$\[ \sup _{y:\, \varphi (y)\leq C_{0}}\mathsf{E}\, \left(\varphi (Y (n+1))\, |\, Y (n)=y\right)<+\infty \, .\] \end{lemma} \begin{proof} This statement follows from the fact that the jumps from any fixed neighbourhood of $(0,0)$ are bounded and the fact that the function $\varphi $ is continuous. \end{proof} In view of Lemmas \ref{l-mod-loclin}--\ref{l-origin}, the Foster-Lyapunov criterion (Theorem~\ref{t:Foster}, Appendix) is fulfilled with $f(y)=\varphi (y)$; therefore the Markov chain~$Y(n)$ is ergodic and the proof of item~1 of Theorem~\ref{t:N-3} is complete.
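Case a) of Lemma~\ref{l-posoct} lends itself to a quick numerical sanity check. The sketch below takes $\varphi (y)=y_{2}/u_{2}+y_{3}/u_{3}$ on $E_{1}$, as in the proof, and verifies that every rollback $(y_{2},y_{3})\rightarrow (0,y_{3}')$ with $y_{2}\geq y_{3}>0$ on a level set decreases $\varphi $ by at least $C/(1+u_{2}/u_{3})$; the values of $u_{2}$, $u_{3}$ and $C$ are arbitrary, chosen only for illustration.

```python
# Sanity check of case a): on the level set y2/u2 + y3/u3 = C with
# y2 >= y3 > 0, the rollback (y2, y3) -> (0, y3') with 0 <= y3' <= y3
# decreases phi(y) = y2/u2 + y3/u3 by at least C / (1 + u2/u3).
u2, u3, C = 2.0, 3.0, 6.0          # arbitrary parameters (illustration only)

def phi(y2, y3):                   # phi restricted to the closure of E1
    return y2 / u2 + y3 / u3

bound = -C / (1 + u2 / u3)         # the decrement claimed in the text
worst = float("-inf")
steps = 1000
for k in range(1, steps + 1):
    # parametrize the part of the level set where y2 >= y3 > 0
    y3 = k / steps * C / (1 / u2 + 1 / u3)
    y2 = (C - y3 / u3) * u2        # stay on the level set
    if not (y2 >= y3 > 0):
        continue
    # the least favourable rollback target keeps y3' = y3
    inc = phi(0.0, y3) - phi(y2, y3)
    worst = max(worst, inc)
assert worst <= bound + 1e-9
```

The worst case is attained on the diagonal $y_{2}=y_{3}$, where the inequality of the text holds with equality.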
\section{Conclusions, conjectures and perspectives} \label{sec:Conclusions} \subsection{Decomposition into groups} We shall always assume that all $\lambda _{1},\ldots ,\lambda _{N}$ are different. Define a function \[ \ell (m):=\min _{i\leq m}\lambda _{i}\, .\] Evidently, this function has the following property:\[ \ell (1)=\lambda _{1}\geq \cdots \geq \ell (m)\geq \ell (m+1)\geq \cdots \geq \min (\lambda _{1},\ldots ,\lambda _{N})=\ell (N)\, .\] Level sets of the function~$\ell $ generate a partition of the set $\left\{ 1,\ldots ,N\right\} $. Namely, there exists a sequence $j_{1}=1<j_{2}<\cdots <j_{K}<j_{K+1}=N+1$ such that the set of all processors can be divided into several nonintersecting groups \begin{equation} \left\{ 1,\ldots ,N\right\} =\bigcup _{k=\overline{1,K}}G_{k}\, ,\label{eq:part-Gk}\end{equation} \[ G_{k}:=\, \left\{ j_{k},j_{k}+1,\ldots ,j_{k+1}-1\right\} ,\, \quad \, \ell (j_{k}-1)>\ell (j_{k})=\cdots =\ell (j_{k+1}-1)>\lambda _{j_{k+1}}\, \] (with the conventions $\ell (0):=+\infty $ and $\lambda _{N+1}:=-\infty $). \begin{rem} \noindent \emph{An equivalent description of the groups is possible. We say, for example, that $1,2,\ldots ,k$ is a group if \begin{equation} \lambda _{1}\leq \min \left(\lambda _{2},\ldots ,\lambda _{k}\right),\quad \lambda _{1}>\lambda _{k+1}.\label{eq:char-group}\end{equation} } \end{rem} \subsection{Long-time behaviour of the groups} Taking into account Theorems~\ref{t:n-2} and~\ref{t:N-3} and the above notion of groups of processors, we put forward the following conjecture. \noindent \begin{conj}\label{con-N-main} Assume that all $\lambda _{1},\ldots ,\lambda _{N}$ are different. For any $j$ the limit $\displaystyle v^{*} _{j}=\lim _{t\rightarrow +\infty }\frac{x_{j}(t)}{t}$ exists and $v^{*} _{j}=\ell (j)$. \noindent \end{conj} Therefore this conjecture entails $v^{*} _{j}=\ell (j_{k})$ for $j\in G_{k}$. If for some $k$ the group $G_{k}$ consists of \emph{more than one} processor, we may say that the processors of the group $G_{k}$ are \emph{synchronized}.
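Conjecture~\ref{con-N-main} can also be probed empirically. The toy discrete-event simulation below assumes, purely for illustration, that a message $i\rightarrow i+1$ is delivered instantaneously and rolls processor $i+1$ back to $\min (x_{i+1},x_{i})$; the actual cascade model involves message delays and cascading rollbacks, so this is only a caricature of the dynamics. The empirical speeds are compared with the predicted values $\ell (j)$.

```python
import random

def ell(lams):
    """l(m) = min_{i <= m} lambda_i: the speeds predicted by the conjecture."""
    out, cur = [], float("inf")
    for lam in lams:
        cur = min(cur, lam)
        out.append(cur)
    return out

def simulate(lams, betas, horizon=20000.0, seed=0):
    """Toy continuous-time cascade (illustrative assumptions only):
    processor i jumps at rate lams[i]; a message i -> i+1 arrives at
    rate betas[i] and instantaneously rolls processor i+1 back to
    processor i's position whenever it is ahead."""
    rng = random.Random(seed)
    n = len(lams)
    x = [0.0] * n
    rates = list(lams) + list(betas)
    total = sum(rates)
    t = 0.0
    while t < horizon:
        t += rng.expovariate(total)        # time to the next event
        u = rng.uniform(0.0, total)
        acc, k = 0.0, 0
        for k, r in enumerate(rates):      # sample which event occurred
            acc += r
            if u <= acc:
                break
        if k < n:
            x[k] += 1.0                    # free dynamics: jump of processor k
        else:
            i = k - n                      # rollback caused by a message i -> i+1
            x[i + 1] = min(x[i + 1], x[i])
    return [xi / t for xi in x]

lams = [1.0, 3.0, 2.0]     # a single group {1,2,3}: all speeds predicted to be 1.0
predicted = ell(lams)
speeds = simulate(lams, betas=[1.0, 1.0])
assert all(abs(s - p) < 0.2 for s, p in zip(speeds, predicted))
```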
\begin{rem}[On monotone cases] \noindent \emph{If $\lambda _{1}<\cdots <\lambda _{N}$ then $v^{*} _{j}=\lambda _{1}$ for any~ $j$. \\ If $\lambda _{1}>\cdots >\lambda _{N}$ then for all~ $j$ we have $v^{*} _{j}=\lambda _{j}$. } \smallskip{} \end{rem} Let us briefly discuss the prospects for a rigorous proof of the above conjecture for large values of~$N$. In fact, we have already proved this conjecture for a wide class of cascade models. \begin{theorem} Assume that all $\lambda _{1},\ldots ,\lambda _{N}$ are different and that the partition~(\ref{eq:part-Gk}) of the set of processors $\left\{ 1,\ldots ,N\right\} $ is such that $\left|G_{k}\right|\leq 3$ for all~$k$. Then the limits $\displaystyle v^{*} _{j}=\lim _{t\rightarrow +\infty }\frac{x_{j}(t)}{t}$ exist and $v^{*} _{j}=\ell (j)$. \end{theorem} The proof of this statement is just a combination of the result of item~1 of Theorem~\ref{t:N-3} and the arguments in the proof of items 2--4 of Theorem~\ref{t:N-3}. We will not pursue this further. So the key to the proof of the conjecture consists in generalizing item~1 of Theorem~\ref{t:N-3}. As was seen in Section~\ref{sec:N-3-Lyap}, a possible way of achieving such a generalization is an explicit construction of a Foster-Lyapunov function in high dimensions. This seems to be a difficult technical problem which is outside the scope of this paper.
\section{Introduction} Among non-functional properties of programs, bounds on the amount of resources (like computation time and space) programs need when executed are particularly significant. The problem of deriving such bounds is indeed crucial in safety-critical systems, but is undecidable whenever non-trivial programming languages are considered. If the units of measurement become concrete and close to the physical ones, the problem becomes even more complicated and architecture-dependent. A typical example is the one of WCET techniques adopted in real-time systems~\cite{wcet}, which need to deal not only with how many machine instructions a program corresponds to, but also with how much time each instruction costs when executed by possibly complex architectures (including caches, pipelining, etc.), a task which is becoming even harder with the current trend towards multicore architectures. A different approach consists in analysing the \emph{abstract} complexity of programs. As an example, one can take the number of instructions executed by the program as a measure of its execution time. This is of course a less informative metric, which however becomes more accurate if the actual time taken \emph{by each instruction} is kept low. One advantage of this analysis is its independence from the specific hardware platform executing the program at hand: the latter only needs to be analysed once. A variety of \emph{complexity analysis} techniques have been employed in this context, from abstract interpretation~\cite{speed} to type systems~\cite{HOAA10} to program logics~\cite{deBakker80} to interactive theorem proving. Properties of programs written in higher-order functional languages are for various reasons well-suited to be verified by way of type systems. This includes not only safety properties (e.g. well-typed programs do not go wrong), but also more complex ones, including resource bounds~\cite{HOAA10,BaillotTeruiIC,GaboardiRonchi,DalLago2011}.
In this paper, we delineate a methodology for complexity analysis of higher-order programs \emph{with control operators}. The latter are constructs which are available in most concrete functional programming languages (including \textsf{Scheme} and \textsf{OCaml}), and allow control to flow in non-standard ways. The technique we introduce takes the form of a type system for de Groote's $\lambda\mu$-calculus~\cite{de1994relation} derived from Girard, Scedrov and Scott's Bounded Linear Logic~\cite{GSS92TCS} ($\mathsf{BLL}$ in the following). We prove it to be sound: typable programs can indeed be reduced in a number of steps less than or equal to a (polynomial) bound which can be read from the underlying type derivation. A similar result can be given when the cost model is the one induced by an abstract machine. To the authors' knowledge, this is the first example of a complexity analysis methodology coping well not only with higher-order functions, but also with control operators. In the rest of this section, we explain the crucial role Linear Logic has in this work, in the meantime delineating its main features. \subsection{Linear Logic and Complexity Analysis} Linear Logic~\cite{Girard87} is one of the most successful tools for characterizing complexity classes in a higher-order setting, through the Curry-Howard correspondence. Subsystems of it can indeed be shown to correspond to the polynomial time computable functions~\cite{GSS92TCS,Girard98IC,Lafont04TCS} or the logarithmic space computable functions~\cite{Schoepp07LICS}. Many of the introduced fragments can then be turned into type systems for the $\lambda$-calculus~\cite{BaillotTeruiIC,GaboardiRonchi}, some of them being relatively complete in an intensional sense~\cite{DalLago2011}.
The reason for this success lies in the way Linear Logic decomposes intuitionistic implication into a linear implication, which has low complexity, and an \emph{exponential modality}, which marks those formulas to which structural rules can be applied. This gives a proper status to proof duplication, without which cut-elimination can be performed in a linear number of steps. By tuning the rules governing the exponential modality, then, one can define logical systems for which cut-elimination can be performed within appropriate resource bounds. Usually, this is coupled with an encoding of all functions in a complexity class $\mathcal{C}$ into the system at hand, which makes the system a \emph{characterization} of $\mathcal{C}$. Rules governing the exponential modality $!$ can be constrained in (at least) two different ways:\begin{varitemize} \item On the one hand, one or more of the rules governing $!$ (e.g., dereliction or digging) can be \emph{dropped} or \emph{restricted}. This is what happens, for example, in Light Linear Logic~\cite{Girard98IC} or Soft Linear Logic~\cite{Lafont04TCS}. \item On the other, the logic can be further refined and \emph{enriched} so as to control the number of times structural rules are applied. In other words, rules for $!$ are still all there, but in a refined form. This is what happens in Bounded Linear Logic~\cite{GSS92TCS}. Similarly, one could control so-called modal impredicativity by a system of levels~\cite{BaillotMazza10}. \end{varitemize} The first approach corresponds to cutting the space of proofs with an axe: many proofs, and among them many corresponding to efficient algorithms, will not be part of the system because they require one of the forbidden logical principles. The second approach is milder in terms of the class of good programs that are ``left behind'': there is strong evidence that with this approach one can obtain a quite expressive logical system~\cite{DalLago2009,DalLago2011}. 
Not much is known about whether this approach scales to languages in which not only functions but also first-class continuations and control operators are present. Understanding the impact of these features on the complexity of programs is an interesting research topic, which however has received little attention in the past. \subsection{Linear Logic and Control Operators} On the other hand, more than twenty years have passed since classical logic was shown to be amenable to the Curry-Howard paradigm~\cite{Griffin90}. And, interestingly enough, classical axioms (e.g. Peirce's law or the law of the Excluded Middle) can be seen as the type of control operators like \textsf{Scheme}'s \texttt{callcc}. In the meantime, the various facets of this new form of proofs-as-programs correspondence have been investigated in detail, and many extensions of the $\lambda$-calculus for which classical logic naturally provides a type discipline have been introduced (e.g. \cite{Parigot,CurienHerbelin}). Moreover, the decomposition provided by Linear Logic is known to scale up to classical logic~\cite{Girard91}. Actually, Linear Logic was known to admit an involutive notion of negation from its very inception~\cite{Girard87}. A satisfying embedding of Classical Logic into Linear Logic, however, requires restricting the latter by way of polarities~\cite{phdlaurent}: this way one is left with a logical system with most of the desirable dynamical properties. In this paper, we define $\mathsf{BLLP}$, a polarized version of Bounded Linear Logic. The kind of enrichment resource polynomials provide in $\mathsf{BLL}$ is shown to cope well with polarization. Following the close relationship between polarized linear logic and the $\lambda\mu$-calculus~\cite{laurent2003polarized}, $\mathsf{BLLP}$ gives rise to a type system for the $\lambda\mu$-calculus.
Proofs and typable $\lambda\mu$-terms are both shown to be reducible to their cut-free or normal forms in a number of steps bounded by a polynomial weight. Such a result for the former translates to a similar result for the latter, since any reduction step in $\lambda\mu$-terms corresponds to one or more reduction steps in proofs. The analysis is then extended to the reduction of $\lambda\mu$-terms by a Krivine-style abstract machine~\cite{de1998environment}. \section{Bounded Polarized Linear Logic as a Sequent Calculus} In this section, we define $\mathsf{BLLP}$ as a sequent calculus. Although this section is self-contained, some familiarity with both bounded~\cite{GSS92TCS} and polarized~\cite{laurent2003polarized} linear logic would certainly help. \shortv{Some more details can be found in an extended version of the present paper~\cite{EV}.} \subsection{Polynomials and Formulas}\label{sect:polform} A \emph{resource monomial} is any (finite) product of binomial coefficients in the form $\displaystyle\prod_{i=1}^{m} \left({x_i \atop n_i}\right)$, where the $x_i$ are distinct variables and the $n_i$ are non-negative integers. A \emph{resource polynomial} is any finite sum of resource monomials. Given resource polynomials $p, q$ we write $p\sqsubseteq q$ to denote that $q-p$ is a resource polynomial. If $p\sqsubseteq r$ and $q\sqsubseteq s$ then also $q\circ p \sqsubseteq s\circ r$. Resource polynomials are closed under addition, multiplication, bounded sums and composition~\cite{GSS92TCS}.
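To make these definitions concrete, the following sketch evaluates resource monomials as products of binomial coefficients and checks, pointwise, a simple instance of $p\sqsubseteq q$ in which the difference $q-p$ is itself exhibited as a resource polynomial. The dictionary representation is our own choice for illustration, not one made in the paper.

```python
from math import comb

# A resource monomial is a finite product of binomial coefficients
# binom(x_i, n_i); here it is represented as a dict {variable: n_i},
# and a resource polynomial as a list of such monomials.
def eval_monomial(m, env):
    prod = 1
    for var, n in m.items():
        prod *= comb(env[var], n)
    return prod

def eval_poly(p, env):
    return sum(eval_monomial(m, env) for m in p)

# p = x  and  q = x + binom(x, 2): the difference q - p = binom(x, 2)
# is itself a resource polynomial, hence p ⊑ q.
p = [{"x": 1}]
q = [{"x": 1}, {"x": 2}]
diff = [{"x": 2}]
for v in range(10):
    env = {"x": v}
    assert eval_poly(q, env) - eval_poly(p, env) == eval_poly(diff, env)
    assert eval_poly(p, env) <= eval_poly(q, env)   # pointwise consequence
```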
A \emph{polarized formula} is a formula (either positive or negative) generated by the following grammar \shortv{ \begin{align*} P&::=V\midd P\otimes P\; \; \mbox{\Large{$\mid$}}\;\; 1 \; \; \mbox{\Large{$\mid$}}\;\; !_{x<p}N;\\ N&::=\NOT{V}\midd N\parr N\; \; \mbox{\Large{$\mid$}}\;\; \bot \; \; \mbox{\Large{$\mid$}}\;\; ?_{x<p}P; \end{align*}} \longv{ \begin{align*} P&::=V(\vec p) \; \; \mbox{\Large{$\mid$}}\;\; P\otimes P \; \; \mbox{\Large{$\mid$}}\;\; \remove{P\oplus P \; \; \mbox{\Large{$\mid$}}\;\;} 1 \; \; \mbox{\Large{$\mid$}}\;\; \exists V P \; \; \mbox{\Large{$\mid$}}\;\; !_{x<p}N;\\ N&::=\NOT{V}(\vec p) \; \; \mbox{\Large{$\mid$}}\;\; N\parr N \; \; \mbox{\Large{$\mid$}}\;\; \remove{N\with N \; \; \mbox{\Large{$\mid$}}\;\;} \bot \; \; \mbox{\Large{$\mid$}}\;\; \forall V N \; \; \mbox{\Large{$\mid$}}\;\; ?_{x<p}P. \end{align*} }where $V$ ranges over a countable set of atoms. Throughout this paper, formulas (but also terms, contexts, etc.) are considered modulo $\alpha$-equivalence. Formulas (either positive or negative) are ranged over by metavariables like $A,B$. Formulas like $\NOT{V}$ are sometimes denoted as $X,Y$. In a polarized setting, contraction can be performed on any negative formula. As a consequence, we need the notion of a \emph{labelled\footnote{Keep in mind that linear logic contains a subset of formulas which is isomorphic to (polarized) classical logic. $\LABEL{N}{x}{p}$ (resp. $\LABEL{P}{x}{p}$) can be thought of roughly as $?_{x < p}\NOT{N}$ (resp. $!_{x < p}\NOT{P}$), i.e., in a sense we can think of labelled formulas as formulas hiding an implicit exponential modality.} formula} $\LABEL{A}{x}{p}$, namely the \emph{labelling} of the formula $A$ with respect to $x$ and $p$. All occurrences of $x$ in $A$ are bound in $\LABEL{A}{x}{p}$. Metavariables for labellings of positive (respectively, negative) formulas are $\mathbf{P},\mathbf{Q},\mathbf{R}$ (respectively, $\mathbf{N},\mathbf{M},\mathbf{L}$).
Labelled formulas are sometimes denoted with metavariables $\mathbf{A},\mathbf{B}$ when their polarity is not essential. Negation, as usual in classical linear systems, can be applied to any (possibly labelled) formula, \emph{\`a la} De Morgan. When the resource variable $x$ does not appear in $A$, then we do not need to mention it when writing $\LABEL{A}{x}{p}$, which becomes $\LABEL{A}{}{p}$. Similarly for $!_{x<p}N$ and $?_{x<p}P$. Both the space of formulas and the space of labelled formulas can be seen as partial orders by stipulating that two (labelled) formulas can be compared iff they have \emph{exactly} the same skeleton and the polynomials occurring in them can be compared. \shortv{ As an example, \begin{align*} !_{x<p}N\sqsubseteq\; !_{x<q}M &\mbox{ iff } q\sqsubseteq p \wedge N \sqsubseteq M;\\ ?_{x<p}P\sqsubseteq\; ?_{x<q}Q &\mbox{ iff } p\sqsubseteq q \wedge P\sqsubseteq Q. \end{align*}}\longv{ Formally, \begin{align*} V(p_1, \dots, p_n)&\sqsubseteq V(q_1, \dots, q_n) \mbox{ iff } \forall i.p_i\sqsubseteq q_i;\\ \NOT{V}(p_1, \dots, p_n)&\sqsubseteq \NOT{V}(q_1, \dots, q_n) \mbox{ iff } \forall i.q_i\sqsubseteq p_i;\\ 1 &\sqsubseteq 1;\\ \bot &\sqsubseteq \bot;\\ P \otimes Q &\sqsubseteq R \otimes S \mbox{ if{}f }P \sqsubseteq R \wedge Q \sqsubseteq S;\\ N \parr M &\sqsubseteq O \parr K \mbox{ if{}f } N \sqsubseteq O \wedge M \sqsubseteq K;\\ !_{x<p}N&\sqsubseteq !_{x<q}M \mbox{ if{}f } q\sqsubseteq p \wedge N \sqsubseteq M;\\ ?_{x<p}P&\sqsubseteq ?_{x<q}Q \mbox{ if{}f } p\sqsubseteq q \wedge P\sqsubseteq Q;\\ \forall V.N&\sqsubseteq \forall V.M \mbox{ if{}f } N \sqsubseteq M;\\ \exists V.P&\sqsubseteq \exists V.Q \mbox{ if{}f } P \sqsubseteq Q. \end{align*} } In a sense, then, polynomials occurring next to atoms or to the \emph{whynot} operator are in positive position, while those occurring next to the \emph{bang} operator are in negative position.
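The polarity-sensitive directions above can be made concrete. In the following Python sketch (entirely our illustration: formulas are tagged tuples, polynomials are functions, and $p\sqsubseteq q$ is merely approximated by a pointwise comparison on a few sample assignments, whereas the actual definition requires $q-p$ to be a resource polynomial), the bound of a \emph{bang} is compared contravariantly and that of a \emph{whynot} covariantly:

```python
# Formulas as tagged tuples, e.g. ("bang", p, body); resource
# polynomials as functions from assignments to integers.  This whole
# representation is illustrative, not part of the calculus.

SAMPLES = [{"x": k} for k in range(6)]

def poly_leq(p, q):
    # Heuristic stand-in for p ⊑ q (really: q - p must itself be a
    # resource polynomial); here we just compare pointwise on samples.
    return all(p(env) <= q(env) for env in SAMPLES)

def sub(a, b):
    """a ⊑ b: same skeleton, polynomials compared per polarity."""
    if a[0] != b[0]:
        return False                      # skeletons must agree
    tag = a[0]
    if tag in ("one", "bot"):
        return True                       # units are only below themselves
    if tag in ("tensor", "par"):          # component-wise on multiplicatives
        return sub(a[1], b[1]) and sub(a[2], b[2])
    if tag == "bang":     # !_{x<p}N ⊑ !_{x<q}M iff q ⊑ p and N ⊑ M
        return poly_leq(b[1], a[1]) and sub(a[2], b[2])
    if tag == "whynot":   # ?_{x<p}P ⊑ ?_{x<q}Q iff p ⊑ q and P ⊑ Q
        return poly_leq(a[1], b[1]) and sub(a[2], b[2])
    return False

one = ("one",)
wider  = ("bang", lambda e: e["x"] + 1, one)   # !_{y<x+1} 1
narrow = ("bang", lambda e: e["x"], one)       # !_{y<x} 1
print(sub(wider, narrow), sub(narrow, wider))  # True False
```

The example shows that a box with a \emph{larger} bound is \emph{smaller} in the order: it promises more copies, hence it is more generally usable.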
In all the other cases, $\sqsubseteq$ is defined component-wise, in the natural way, e.g. $P\otimes Q\resleq R\otimes S$ iff both $P\resleq R$ and $Q\resleq S$. Finally $\LABEL{N}{x}{p}\sqsubseteq \LABEL{M}{x}{q}$ iff $N \sqsubseteq M \wedge p \sqsupseteq q$. And dually, $\LABEL{P}{x}{p}\sqsubseteq \LABEL{Q}{x}{q}$ iff $P \sqsubseteq Q \wedge p \sqsubseteq q$. \longv{ \begin{lemma}\label{Lem:subneg} $A\sqsubseteq B$ iff $\NOT{B}\sqsubseteq\NOT{A}$. Moreover, $\mathbf{A}\sqsubseteq\mathbf{B}$ iff $\NOT{\mathbf{B}}\sqsubseteq\NOT{\mathbf{A}}$. \end{lemma} \begin{proof} $A\sqsubseteq B$ iff $\NOT{B}\sqsubseteq\NOT{A}$ can be proved by induction on the structure of $A$. Consider now the second part of the statement. Suppose that $A, B$ are positive, and call them $P, Q$ respectively. Then \begin{align*} \LABEL{P}{x}{p} \sqsubseteq \LABEL{Q}{x}{q}&\Leftrightarrow P \sqsubseteq Q \wedge p \sqsubseteq q\\ &\Leftrightarrow\NOT{Q}\sqsubseteq\NOT{P} \wedge p \sqsubseteq q\\ &\Leftrightarrow\LABEL{\NOT{Q}}{x}{q}\sqsubseteq\LABEL{\NOT{P}}{x}{p}. \end{align*} The case when $A,B$ are negative is similar. \end{proof} } Certain operators on resource polynomials can be lifted to formulas. As an example, we want to be able to \emph{sum} labelled formulas provided they have an appropriate form: $$ \LABEL{N}{x}{p}\uplus\LABEL{\subst{N}{x}{y+p}}{y}{q}:= \LABEL{N}{x}{p+q}. $$ We are assuming, of course, that $x,y$ are not free in either $p$ or $q$. This construction can be generalized to \emph{bounded} sums: suppose that a labelled formula is in the form $$ \LABEL{M}{y}{r}=\LABEL{N\{x/y+\sum_{u<z} \subst{r}{z}{u}\}}{y}{r}, $$ where $y$ and $u$ are not free in $N$ nor in $r$ and $z$ is not free in $N$. Then the labelled formula $\sum_{z<q}\LABEL{M}{y}{r}$ is defined as $\LABEL{N}{x}{\sum_{z<q}r}$. See \cite[\S 3.3]{GSS92TCS} for more details about the above constructions.
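As a concrete instance of the sum of labelled formulas (our own example), take $N=\;?_{z<x}P$ and constant polynomials $p=2$, $q=3$:

```latex
% A worked instance of the sum of labelled formulas (our example):
% with N = ?_{z<x} P, p = 2 and q = 3, the definition yields
\LABEL{?_{z<x}P}{x}{2} \uplus \LABEL{?_{z<y+2}P}{y}{3}
   := \LABEL{?_{z<x}P}{x}{5}
% The left summand accounts for the instances x \in \{0,1\}, the right
% one for x = y+2 with y \in \{0,1,2\}, i.e. x \in \{2,3,4\}: together
% they cover exactly the instances x < 5 = p + q.
```

In other words, the second summand is the first one "shifted" past the instances already accounted for, which is why its body is $N$ with $x$ replaced by $y+p$.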
\longv{ An \emph{abstraction formula} of arity $n$ is simply a formula $A$, where the $n$ resource variables $x_1,\ldots,x_n$ are meant to be bound. $\LET{A}{V}{B}$ is the result of substituting a second order abstraction term $B$ (of arity $n$) for all free occurrences of the propositional variable $V$ (of the same arity) in $A$. This can be defined formally by induction on the structure of $A$, but the only interesting clauses are the following two: \begin{align*} \LET{V(p_1,\dots,p_n)}{V}{B}&=\subst{B}{x_1,\ldots,x_n}{p_1,\dots,p_n}\\ \LET{\NOT{V}(p_1,\dots,p_n)}{V}{B}&=\subst{\NOT{B}}{x_1,\ldots,x_n}{p_1,\dots,p_n} \end{align*} } \subsection{Sequents and Rules}\label{sect:sequents} The easiest way to present $\mathsf{BLLP}$ is to give a sequent calculus for it. Actually, proofs will be structurally identical to proofs of Laurent's $\mathsf{LLP}$. Of course, only \emph{some} $\mathsf{LLP}$ proofs are legal $\mathsf{BLLP}$ proofs --- those giving rise to an exponential blow-up cannot be decorated according to the principles of Bounded Linear Logic. A \emph{sequent} is an expression in the form $\vdash\Gamma$, where $\Gamma=\mathbf{A}_1,\ldots \mathbf{A}_n$ is a multiset of labelled formulas such that at most one among $\mathbf{A}_1,\ldots,\mathbf{A}_n$ is positive. If $\Gamma$ only contains (labellings of) negative formulas, we indicate it with metavariables like $\mathcal{N},\mathcal{M}$. The operator $\uplus$ can be extended to one on multisets of formulas component-wise, so we can write expressions like $\mathcal{N} \uplus \mathcal{M}$: this amounts to summing the polynomials occurring in $\mathcal{N}$ and those occurring in $\mathcal{M}$. Similarly for bounded sums. The rules of the sequent calculus for $\mathsf{BLLP}$ are in Figure~\ref{fig:sequentcalc}.
\begin{figure*} \fbox{\shortv{\scriptsize} \begin{minipage}{.98\textwidth} \centering \ \vspace{5pt} \\ \AxiomC{$\mathbf{N} \sqsubseteq \mathbf{M}$} \AxiomC{$\NOT{\mathbf{M}} \sqsupseteq \mathbf{P}$} \RightLabel{$\mathsf{Ax}$} \BinaryInfC{$\vdash \mathbf{N}, \mathbf{P}$} \DisplayProof \qquad \AxiomC{$\vdash \Gamma, \mathbf{N}$} \AxiomC{$\vdash \mathcal{N}, \NOT{\mathbf{N}}$} \RightLabel{$\mathsf{Cut}$} \BinaryInfC{$\vdash \Gamma, \mathcal{N}$} \DisplayProof \\ \vspace{5pt} \AxiomC{$\vdash \Gamma, \LABEL{N}{x}{p}, \LABEL{M}{x}{q}$} \AxiomC{$p\resleq r$\quad$q\resleq r$} \RightLabel{$\parr$} \BinaryInfC{$\vdash \Gamma, \LABEL{N\parr M}{x}{r}$} \DisplayProof \qquad \AxiomC{$\vdash \mathcal{N}, \LABEL{P}{x}{p}$} \AxiomC{$\vdash \mathcal{M}, \LABEL{Q}{x}{q}$\quad$r\resleq p$\quad$r\resleq q$} \RightLabel{$\otimes$} \BinaryInfC{$\vdash \mathcal{N}, \mathcal{M}, \LABEL{P\otimes Q}{x}{r}$} \DisplayProof \\ \vspace{5pt} \AxiomC{$\vdash \mathcal{N}, \LABEL{N}{x}{p}$} \AxiomC{$\mathcal{M} \sqsubseteq \sum_{y<q}\mathcal{N}$} \RightLabel{!} \BinaryInfC{$\vdash \mathcal{M}, \LABEL{!_{x<p}N}{y}{q}$} \DisplayProof \qquad \AxiomC{$\vdash \mathcal{N}, \LABEL{\subst{P}{y}{0}}{x}{\subst{p}{y}{0}}$} \AxiomC{$\mathbf{N}\sqsubseteq\LABEL{?_{x<p}P}{y}{1}$} \RightLabel{$?d$} \BinaryInfC{$\vdash \mathcal{N},\mathbf{N}$} \DisplayProof \\ \vspace{5pt} \AxiomC{$\vdash \Gamma$} \RightLabel{$?w$} \UnaryInfC{$\vdash \Gamma,\mathbf{N}$} \DisplayProof \qquad \AxiomC{$\vdash \Gamma,\mathbf{N},\mathbf{M}$} \AxiomC{$\mathbf{L}\sqsubseteq\mathbf{N}\uplus\mathbf{M}$} \RightLabel{$?c$} \BinaryInfC{$\vdash \Gamma,\mathbf{L}$} \DisplayProof \qquad \AxiomC{$\vdash \Gamma$} \RightLabel{$\bot$} \UnaryInfC{$\vdash \Gamma, \LABEL{\bot}{x}{p}$} \DisplayProof \qquad \AxiomC{$\vphantom{\vdash \Gamma}$} \RightLabel{$1$} \UnaryInfC{$\vdash \LABEL{1}{x}{p}$} \DisplayProof \shortv{ \\ \ \vspace{5pt}} \longv{ \\ \vspace{5pt} \AxiomC{$\vdash \Gamma, \LABEL{N}{x}{p}$} \AxiomC{$V\not\in\FV{N}$} \RightLabel{$\forall$}
\BinaryInfC{$\vdash \Gamma, \LABEL{\forall V N}{x}{p}$} \DisplayProof \qquad \AxiomC{$\vdash\mathcal{N}, \LABEL{\LET{P}{V}{Q}}{x}{p}$} \RightLabel{$\exists$} \UnaryInfC{$\vdash\mathcal{N}, \LABEL{\exists V P}{x}{p}$} \DisplayProof\\ \vspace{5pt} \ }\\ \end{minipage}} \caption{$\mathsf{BLLP}$, Sequent Calculus Rules}\label{fig:sequentcalc} \end{figure*} Please observe that: \begin{varitemize} \item The relation $\sqsubseteq$ is implicitly applied to both formulas and polynomials whenever possible in such a way that ``smaller'' formulas can always be derived (see Section \ref{sect:malleability}). \item As in $\mathsf{LLP}$, structural rules can act on any negative formula, and not only on exponential ones. Since all formulas occurring in sequents are labelled, however, we can still keep track of how many times formulas are ``used'', in the spirit of $\mathsf{BLL}$. \item A byproduct of taking sequents as multisets of \emph{labelled} formulas is that multiplicative rules themselves need to deal with labels. As an example, consider rule $\otimes$: the resource polynomial labelling the conclusion $P\otimes Q$ is any polynomial smaller than or equal to those labelling the two premises. \end{varitemize} \shortv{ The sequent calculus we have just introduced could be extended with second-order quantifiers and additive logical connectives. For the sake of simplicity, however, we have kept the language of formulas very simple here. The interested reader can check \cite{phdlaurent} for a treatment of these connectives in a polarized setting or \cite{EV} for more details.} \longv{ The sequent calculus we have just introduced could be extended with additive logical connectives. For the sake of simplicity, however, we have kept the language of formulas very simple here. } As already mentioned, $\mathsf{BLLP}$ proofs can be seen as obtained by decorating proofs from Laurent's $\mathsf{LLP}$~\cite{laurent2003polarized} with resource polynomials.
Given a proof $\pi$, $\BLLPtoLLP{\pi}$ is the $\mathsf{LLP}$ proof obtained by erasing all resource polynomials occurring in $\pi$. If $\pi$ and $\rho$ are two $\mathsf{BLLP}$ proofs, we write $\pi\sim\rho$ iff $\BLLPtoLLP{\pi}\equiv\BLLPtoLLP{\rho}$, i.e., iff $\pi$ and $\rho$ are two decorations of the same $\mathsf{LLP}$ proof. Even if structural rules can be applied to all negative formulas, only certain proofs will be copied or erased along the cut-elimination process, as we will soon realize. A \emph{box} is any proof which ends with an occurrence of the $!$ rule. In non-polarized systems, only boxes can be copied or erased, while here the process can be applied to \emph{$\otimes$-trees}, which are proofs inductively defined as follows: \begin{varitemize} \item Either the last rule in the proof is $\mathsf{Ax}$ or $!$ or $1$; \item or the proof is obtained from two $\otimes$-trees by applying the rule $\otimes$. \end{varitemize} A $\otimes$-tree is said to be \emph{closed} if it contains neither axioms nor boxes having auxiliary doors (i.e., no formula in the context of the $!$ rules). \subsection{Malleability}\label{sect:malleability} The main reason for the strong (intensional) expressive power of $\mathsf{BLL}$~\cite{DalLago2009} is its \emph{malleability}: the conclusion of any proof $\pi$ can be modified in many different ways without altering its structure. Malleability is not only crucial to make the system expressive, but also to prove that $\mathsf{BLLP}$ enjoys cut-elimination. In this section, we give four different ways of modifying a sequent in such a way as to preserve its derivability. Two of them are expected anyway and also hold in $\mathsf{BLL}$, while the other two only make sense in a polarized setting. First of all, taking smaller formulas (i.e., more general --- cf. \cite[\S 3.3, p.
21]{GSS92TCS}) preserves derivability: \begin{lemma}[Subtyping]\label{lem:subtyping} If $\pi\;\triangleright\;\vdash \Gamma, \mathbf{A}$ and $\mathbf{A}\sqsupseteq\mathbf{B}$, then there is $\rho\;\triangleright\;\vdash\Gamma,\mathbf{B}$ such that $\pi\sim\rho$. \end{lemma} \longv{ \begin{proof} By a simple induction on $\pi$. The crucial cases: \begin{varitemize} \item If the last rule used is an axiom: \begin{prooftree} \AxiomC{$\mathbf{N} \sqsubseteq \mathbf{M}$} \AxiomC{$\NOT{\mathbf{M}} \sqsupseteq \mathbf{P}$} \RightLabel{$\mathsf{Ax}$} \BinaryInfC{$\vdash \mathbf{N}, \mathbf{P}$} \end{prooftree} If $\mathbf{B}\sqsubseteq\mathbf{N}$, then we know that $\mathbf{B}\sqsubseteq\mathbf{N}\sqsubseteq\mathbf{M}$, from which it follows that $\mathbf{B}\sqsubseteq\mathbf{M}$. We can thus take $\rho$ as \begin{prooftree} \AxiomC{$\mathbf{B} \sqsubseteq \mathbf{M}$} \AxiomC{$\NOT{\mathbf{M}} \sqsupseteq \mathbf{P}$} \RightLabel{$\mathsf{Ax}$} \BinaryInfC{$\vdash \mathbf{B}, \mathbf{P}$} \end{prooftree} If $\mathbf{B}\sqsubseteq\mathbf{P}$, then we know that $\mathbf{B}\sqsubseteq\mathbf{P}\sqsubseteq\NOT{\mathbf{M}}$, from which it follows that $\mathbf{B}\sqsubseteq\NOT{\mathbf{M}}$. We can thus take $\rho$ as \begin{prooftree} \AxiomC{$\mathbf{N} \sqsubseteq \mathbf{M}$} \AxiomC{$\NOT{\mathbf{M}} \sqsupseteq \mathbf{B}$} \RightLabel{$\mathsf{Ax}$} \BinaryInfC{$\vdash \mathbf{N}, \mathbf{B}$} \end{prooftree} \item Suppose the last rule used is $!$: \begin{prooftree} \AxiomC{$\sigma\;\triangleright\;\vdash \mathcal{N}, \LABEL{N}{y}{r}$} \AxiomC{$\mathcal{M}\sqsubseteq\sum_{x<q} \mathcal{N}$} \RightLabel{$!$} \BinaryInfC{$\vdash \mathcal{M}, \LABEL{!_{y<r}N}{x}{q}$} \end{prooftree} If $\mathbf{B}\sqsubseteq\LABEL{!_{y<r}N}{x}{q}$, then necessarily $\mathbf{B}=\LABEL{!_{y<s}M}{x}{p}$, where $N \sqsupseteq M$, $q\sqsupseteq p$ and $s \sqsupseteq r$.
Hence $\LABEL{N}{y}{r} \sqsupseteq\LABEL{M}{y}{s}$ and, by induction hypothesis, there is $\lambda$ such that $\lambda\sim\sigma$ and $\lambda\;\triangleright\;\vdash\mathcal{N}, \LABEL{M}{y}{s}$. As a consequence, since $\sum_{x<q}\mathcal{N}\sqsubseteq\sum_{x<p}\mathcal{N}$, $\rho$ can simply be defined as: \begin{prooftree} \AxiomC{$\lambda\;\triangleright\;\vdash\mathcal{N}, \LABEL{M}{y}{s}$} \AxiomC{$\mathcal{M}\sqsubseteq\sum_{x<p} \mathcal{N}$} \RightLabel{$!$} \BinaryInfC{$\vdash \mathcal{M}, \LABEL{!_{y<s}M}{x}{p}$} \end{prooftree} If $\mathbf{B}\sqsubseteq\mathbf{N}\in\mathcal{M}$, then we can just derive the thesis from transitivity of $\sqsubseteq$. \item If the last rule used is $?d$: \begin{prooftree} \AxiomC{$\sigma\;\triangleright\;\vdash\mathcal{N}, \LABEL{\subst{P}{x}{0}}{y}{\subst{r}{x}{0}}$} \AxiomC{$\mathbf{N}\sqsubseteq\LABEL{?_{y<r}P}{x}{1}$} \RightLabel{$?d$} \BinaryInfC{$\vdash \mathcal{N},\mathbf{N}$} \end{prooftree} Then the induction hypothesis immediately yields the thesis. \end{varitemize} This concludes the proof. \end{proof} } Substituting polynomials for resource variables also preserves derivability: \begin{lemma}[Substitution]\label{lem:subst} Let $\pi\;\triangleright\;\vdash\Gamma$. Then there is a proof $\subst{\pi}{x}{p}$ of $\vdash\subst{\Gamma}{x}{p}$. Moreover, $\subst{\pi}{x}{p}\sim \pi$. \end{lemma} \longv{ \begin{proof} By an easy induction on the structure of $\pi$. \end{proof}} \shortv{ Both Lemma~\ref{lem:subtyping} and Lemma~\ref{lem:subst} can be proved by easy inductions on the structure of $\pi$.
} \longv{ \begin{lemma}\label{Lem:submon} $\LABEL{A}{x}{p} \sqsupseteq \LABEL{B}{x}{p} \Rightarrow \LABEL{\subst{A}{x}{y+q}}{y}{p} \sqsupseteq \LABEL{\subst{B}{x}{y+q}}{y}{p}.$ \end{lemma} \begin{proof} $\LABEL{A}{x}{p}\sqsupseteq \LABEL{B}{x}{p} \Rightarrow A \sqsupseteq B \Rightarrow \subst{A}{x}{y+q} \sqsupseteq \subst{B}{x}{y+q} \Rightarrow \LABEL{\subst{A}{x}{y+q}}{y}{p} \sqsupseteq \LABEL{\subst{B}{x}{y+q}}{y}{p}$ \end{proof} } As we have already mentioned, one of the key differences between ordinary Linear Logic and its polarized version is that in the latter, arbitrary proofs can potentially be duplicated (and erased) along the cut-elimination process, while in the former only special ones, namely boxes, can. This is, again, a consequence of the fundamentally different nature of structural rules in the two systems. Since $\mathsf{BLLP}$ is a refinement of $\mathsf{LLP}$, this means that the same phenomenon is expected. But beware: in a bounded setting, contraction is not symmetric, i.e., the two copies of the proof $\pi$ we are duplicating are not identical to $\pi$. What we need to prove, then, is that proofs can indeed be \emph{split} in $\mathsf{BLLP}$: \longv{ But preliminary to that is the following technical lemma: \begin{lemma}[Shifting Sums]\label{Lem:splitpro2} If $\sumlf{z}{q}{\LABEL{M}{y}{r}}=\LABEL{N}{y}{\sum_{z<q}r}$, then the formula $\mathbf{N}=\subst{\LABEL{M}{y}{r}}{z}{z+q}$ is such that $$ \sumlf{z}{p}{\mathbf{N}}= \LABEL{\subst{N}{x}{x+\sum_{z<q}r}}{y}{\sum_{z<p} \subst{r}{z}{z+q}} $$ \end{lemma} \begin{proof} The fact that $\sumlf{z}{q}{\LABEL{M}{y}{r}}$ exists implies that there exist $N,x,u$ such that $$ M=\subst{N}{x}{y+\sum_{u<z}\subst{r}{z}{u}} $$ and $y,z\notin\FV{N}$ and $y\notin\FV{r}$.
As a consequence: \begin{align*} \subst{\LABEL{M}{y}{r}}{z}{z+q} &=\LABEL{\subst{\subst{N}{x}{y+\sum_{u<z} \subst{r}{z}{u}}}{z}{z+q}}{y}{\subst{r}{z}{z+q}}\\ &=\LABEL{\subst{N}{x}{y+\sum_{u<z+q} \subst{r}{z}{u}}}{y}{\subst{r}{z}{z+q}}\\ &=\LABEL{\subst{N}{x}{y+\sum_{u<q}\subst{r}{z}{u}+\sum_{u<z} \subst{r}{z}{u+q}}}{y}{\subst{r}{z}{z+q}}\\ &=\LABEL{\subst{\subst{N}{x}{x+\sum_{u<q}\subst{r}{z}{u}}}{x}{y+\sum_{u<z} \subst{r}{z}{u+q}}}{y}{\subst{r}{z}{z+q}}\\ &=\LABEL{\subst{\subst{N}{x}{x+\sum_{u<q}\subst{r}{z}{u}}}{x}{y+\sum_{u<z} \subst{\subst{r}{z}{z+q}}{z}{u}}}{y}{\subst{r}{z}{z+q}} \end{align*} Call the last formula $\mathbf{N}$. Therefore, $\sumlf{z}{p}{\mathbf{N}}$ exists and is equal to $$ \LABEL{\subst{N}{x}{x+\sum_{u<q}\subst{r}{z}{u}}}{y}{\sum_{z<p}\subst{r}{z}{z+q}}. $$ This concludes the proof. \end{proof} } \begin{lemma}[Splitting]\label{lem:splitting} If $\pi\;\triangleright\;\vdash \mathcal{N}, \LABEL{P}{x}{p}$ is a $\otimes$-tree and $p \sqsupseteq r+s$ then there exist $\mathcal{M},\mathcal{O}$ such that $\rho\;\triangleright\;\vdash \mathcal{M},\LABEL{P}{x}{r}$, $\sigma\;\triangleright\;\vdash \mathcal{O},\LABEL{\subst{P}{x}{y+r}}{y}{s}$. Moreover, $\mathcal{N}\sqsubseteq\mathcal{M}\uplus\mathcal{O}$ and $\rho \sim \pi\sim \sigma$. \end{lemma} \longv{ \begin{proof} By induction on $\pi$: \begin{varitemize} \item If the last rule used is an axiom then it is in the form \begin{prooftree} \AxiomC{$\LABEL{N}{x}{q}\sqsubseteq \LABEL{M}{x}{t}$} \AxiomC{$\LABEL{\NOT{M}}{x}{t}\sqsupseteq \LABEL{P}{x}{p}$} \RightLabel{$\mathsf{Ax}$} \BinaryInfC{$\vdash \LABEL{N}{x}{q}, \LABEL{P}{x}{p}$} \end{prooftree} for some $M, t$. We know that $$ r+s\sqsubseteq p\sqsubseteq t \sqsubseteq q.
$$ Observe that we can form the following derivations: \begin{prooftree} \AxiomC{$\LABEL{N}{x}{r}\sqsubseteq \LABEL{M}{x}{r}$} \AxiomC{$\LABEL{\NOT{M}}{x}{r}\sqsupseteq \LABEL{P}{x}{r}$} \RightLabel{$\mathsf{Ax}$} \BinaryInfC{$\vdash \LABEL{N}{x}{r}, \LABEL{P}{x}{r}$} \end{prooftree} \begin{prooftree} \AxiomC{$\LABEL{\subst{N}{x}{y+r}}{y}{s}\sqsubseteq \LABEL{\subst{M}{x}{y+r}}{y}{s}$} \AxiomC{$\LABEL{\subst{\NOT{M}}{x}{y+r}}{y}{s}\sqsupseteq \LABEL{\subst{P}{x}{y+r}}{y}{s}$} \RightLabel{$\mathsf{Ax}$} \BinaryInfC{$\vdash \LABEL{\subst{N}{x}{y+r}}{y}{s}, \LABEL{\subst{P}{x}{y+r}}{y}{s}$} \end{prooftree} where in building the second one we made use, in particular, of Lemma~\ref{Lem:submon}. \item If the last rule used is $\otimes$ then we can write $\pi$ as \begin{prooftree} \AxiomC{$\lambda_1\;\triangleright\;\vdash\mathcal{N}_1, \LABEL{P_1}{x}{p_1}$} \AxiomC{$\lambda_2\;\triangleright\;\vdash\mathcal{N}_2, \LABEL{P_2}{x}{p_2}$} \RightLabel{$\otimes$} \BinaryInfC{$\vdash \mathcal{N}_1, \mathcal{N}_2,\LABEL{P_1 \otimes P_2}{x}{p}$} \end{prooftree} where $p\resleq p_1$ and $p\resleq p_2$. As a consequence, $p_1\resgeq r+s$ and $p_2\resgeq r+s$, and we can thus apply the induction hypothesis to $\lambda_1,\lambda_2$, easily reaching the thesis.
\item If the last rule used is promotion $!$ then $\pi$ has the following shape: \begin{prooftree} \AxiomC{$\lambda\;\triangleright\;\vdash \mathcal{N}, \LABEL{N}{z}{q}$} \AxiomC{$\mathcal{M}\sqsubseteq\sum_{x<p}\mathcal{N}$} \RightLabel{$!$} \BinaryInfC{$\vdash \mathcal{M}, \LABEL{!_{z<q}N}{x}{p}$} \end{prooftree} Then $\rho$ is simply \begin{prooftree} \AxiomC{$\lambda\;\triangleright\;\vdash \mathcal{N}, \LABEL{N}{z}{q}$} \RightLabel{$!$} \UnaryInfC{$\vdash\sum_{x<r}\mathcal{N}, \LABEL{!_{z<q}N}{x}{r}$} \end{prooftree} About $\sigma$, observe that $\subst{\lambda}{x}{y+r}$ has conclusion $$ \vdash \subst{\mathcal{N}}{x}{y+r}, \LABEL{\subst{N}{x}{y+r}}{z}{\subst{q}{x}{y+r}}. $$ By Lemma~\ref{Lem:splitpro2}, we can form $\sumlf{y}{s}{\subst{\mathcal{N}}{x}{y+r}}$. As a consequence, $\sigma$ is \begin{prooftree} \AxiomC{$\vdash \subst{\mathcal{N}}{x}{y+r}, \LABEL{\subst{N}{x}{y+r}}{z}{\subst{q}{x}{y+r}}$} \RightLabel{$!$} \UnaryInfC{$\vdash \sumlf{y}{s}{\subst{\mathcal{N}}{x}{y+r}}, \LABEL{\subst{(!_{z<q}N)}{x}{y+r}}{y}{s}$} \end{prooftree} Observe that the conclusions of $\rho$ and $\sigma$ are in the appropriate relation, again because of Lemma~\ref{Lem:splitpro2}. \end{varitemize} This concludes the proof. \end{proof} } Observe that not every proof can be split, but only $\otimes$-trees can. \shortv{ The proof of Lemma~\ref{lem:splitting} is not trivial and requires some auxiliary results (see~\cite{EV} for more details). } A parametric version of splitting is also necessary here: \begin{lemma}[Parametric Splitting]\label{Lem:parsplitting} If $\pi\;\triangleright\;\vdash \mathcal{N}, \LABEL{P}{x}{p}$, where $\pi$ is a $\otimes$-tree and $p \sqsupseteq \sum_{x<r}s$, then there exists $\rho\;\triangleright\;\vdash \mathcal{M}, \LABEL{P}{x}{s}$ such that $\sum_{x<r}\mathcal{M}\sqsupseteq\mathcal{N}$ and $\rho \sim \pi$.
\end{lemma} While splitting allows us to cope with duplication, parametric splitting implies that an arbitrary $\otimes$-tree proof can be modified so as to be lifted into a box through one of its auxiliary doors. Please observe that $p^\pi$ continues to be such an upper bound even if any natural number is substituted for any of its free variables, an easy consequence of Lemma~\ref{lem:subst}. The following is useful when dealing with cuts involving the rule $?d$: \begin{lemma}\label{lem:tensornew} If $q\sqsupseteq 1$, then $\sumlf{z}{q}{\LABEL{M}{y}{r}} \sqsubseteq\subst{\LABEL{M}{y}{r}}{z}{0}$. \end{lemma} \longv{ \begin{proof} By hypothesis, we have that $\sumlf{z}{q}{\LABEL{M}{y}{r}}=\LABEL{N}{y}{p}$ for some $N,p$. As a consequence $$ M\equiv\subst{N}{x}{y+\sum_{u<z}\subst{p}{z}{u}}. $$ Now: \begin{align*} \subst{\LABEL{M}{y}{r}}{z}{0}&\equiv\LABEL{\subst{N}{x}{y+\sum_{u<0}\subst{p}{z}{u}}}{y}{\subst{r}{z}{0}}\\ &\equiv\LABEL{\subst{N}{x}{y}}{y}{\subst{r}{z}{0}}\equiv\LABEL{N}{x}{\subst{r}{z}{0}}\sqsupseteq\LABEL{N}{x}{\sum_{z<q}r} \end{align*} This concludes the proof. \end{proof} } \section{Cut Elimination} In this section, we \longv{show how a cut-elimination procedure for $\mathsf{BLLP}$ can be defined}\shortv{give some ideas about how cuts can be eliminated from $\mathsf{BLLP}$ proofs}. \longv{ We start by showing how \emph{logical} cuts can be reduced, where a cut is logical when the two immediate subproofs end with a rule introducing the formula involved in the cut.
We describe how logical cuts can be reduced in the critical cases in Figure~\ref{fig:logcutelim}, which needs to be further explained: \begin{figure*} \fbox{ \begin{minipage}{.98\textwidth} \centering \vspace{5pt} \ \\ \textsl{Multiplicatives}\\ \vspace{5pt} {\scriptsize \AxiomC{$\vdash \Gamma, \LABEL{N}{x}{p}, \LABEL{M}{x}{q}$} \RightLabel{$\parr$} \UnaryInfC{$\vdash \Gamma, \LABEL{N\parr M}{x}{t}$} \AxiomC{$\vdash \mathcal{N}, \LABEL{\NOT{N}}{x}{r}$} \AxiomC{$\vdash \mathcal{M}, \LABEL{\NOT{M}}{x}{s}$} \RightLabel{$\otimes$} \BinaryInfC{$\vdash \mathcal{N}, \mathcal{M}, \LABEL{\NOT{N}\otimes\NOT{M}}{x}{t}$} \RightLabel{$\mathsf{Cut}$} \BinaryInfC{$\vdash \Gamma, \mathcal{N}, \mathcal{M}$} \DisplayProof} $\longmapsto$ {\scriptsize \AxiomC{$\vdash \Gamma, \LABEL{M}{x}{t}, \LABEL{N}{x}{t}$} \AxiomC{$\vdash \mathcal{N}, \LABEL{\NOT{N}}{x}{t}$} \RightLabel{$\mathsf{Cut}$} \BinaryInfC{$\vdash \Gamma, \mathcal{N}, \LABEL{M}{x}{t}$} \AxiomC{$\vdash \mathcal{M}, \LABEL{\NOT{M}}{x}{t}$} \RightLabel{$\mathsf{Cut}$} \BinaryInfC{$\vdash \Gamma, \mathcal{N}, \mathcal{M}$} \DisplayProof}\\ \vspace{10pt} \textsl{Dereliction}\\ \vspace{5pt} {\scriptsize \AxiomC{$\pi\;\triangleright\;\vdash \mathcal{N}, \LABEL{N}{y}{p}$} \RightLabel{$!$} \UnaryInfC{$\vdash\mathcal{M}, \LABEL{!_{y<p}N}{x}{q}$} \AxiomC{$\rho\;\triangleright\;\vdash \mathcal{O}, \LABEL{\subst{\NOT{M}}{x}{0}}{y}{\subst{r}{x}{0}}$} \RightLabel{$?d$} \UnaryInfC{$\vdash \mathcal{O}, \LABEL{?_{y<p} \NOT{N}}{x}{q}$} \RightLabel{$\mathsf{Cut}$} \BinaryInfC{$\vdash \mathcal{M}, \mathcal{O}$} \DisplayProof $\longmapsto$ \AxiomC{$\sigma\;\triangleright\;\vdash\mathcal{M}, \LABEL{\subst{N}{x}{0}}{y}{\subst{p}{x}{0}}$} \AxiomC{$\lambda\;\triangleright\;\vdash\mathcal{O},\LABEL{\subst{\NOT{N}}{x}{0}}{y}{\subst{p}{x}{0}}$} \RightLabel{$\mathsf{Cut}$} \BinaryInfC{$\vdash\mathcal{M}, \mathcal{O}$} \DisplayProof}\\ \vspace{10pt} \textsl{Contraction}\\ \vspace{5pt} {\scriptsize \AxiomC{$\pi\;\triangleright\;\vdash\mathcal{N},
\LABEL{\NOT{N}}{x}{r}$} \AxiomC{$\rho\;\triangleright\;\vdash \Gamma, \LABEL{O}{x}{p}, \LABEL{\subst{O}{x}{y+p}}{y}{q}$} \RightLabel{$?c$} \UnaryInfC{$\vdash \Gamma, \LABEL{N}{x}{r}$} \RightLabel{$\mathsf{Cut}$} \BinaryInfC{$\vdash \mathcal{N}, \Gamma$} \DisplayProof\\ \vspace{3pt} $\longmapsto$\\ \vspace{3pt} \AxiomC{$\lambda\;\triangleright\;\vdash \mathcal{O}, \LABEL{\subst{\NOT{O}}{x}{y+p}}{y}{q}$} \AxiomC{$\sigma\;\triangleright\;\vdash \mathcal{M}, \LABEL{\NOT{O}}{x}{p}$} \AxiomC{$\rho\;\triangleright\;\vdash \Gamma, \LABEL{O}{x}{p}, \LABEL{\subst{O}{x}{y+p}}{y}{q}$} \RightLabel{$\mathsf{Cut}$} \BinaryInfC{$\vdash \mathcal{M},\Gamma, \LABEL{\subst{O}{x}{y+p}}{y}{q}$} \RightLabel{$\mathsf{Cut}$} \BinaryInfC{$\vdash \mathcal{M},\mathcal{O},\Gamma$} \doubleLine \RightLabel{$?c$} \UnaryInfC{$\vdash \mathcal{N}, \Gamma$} \DisplayProof}\\ \vspace{10pt} \textsl{Digging}\\ \vspace{5pt} {\scriptsize \AxiomC{$\pi\;\triangleright\;\vdash\mathcal{N},\LABEL{\NOT{N}}{x}{r}$} \AxiomC{$\rho\;\triangleright\;\vdash \mathcal{O},\LABEL{O}{y}{s},\LABEL{M}{y}{p}$} \RightLabel{$!$} \UnaryInfC{$\vdash\mathcal{M},\LABEL{N}{x}{r},\LABEL{!_{y<p}M}{x}{q}$} \RightLabel{$\mathsf{Cut}$} \BinaryInfC{$\vdash\mathcal{N},\mathcal{M},\LABEL{!_{y<p}M}{x}{q}$} \DisplayProof \vspace{3pt} $\longmapsto$ \vspace{3pt} \AxiomC{$\sigma \;\triangleright\; \vdash \mathcal{K}, \LABEL{\NOT{O}}{y}{s}$} \AxiomC{$\rho\;\triangleright\;\vdash \mathcal{O},\LABEL{O}{y}{s},\LABEL{M}{y}{p}$} \RightLabel{$\mathsf{Cut}$} \BinaryInfC{$\vdash \mathcal{K}, \mathcal{O}, \LABEL{M}{y}{p}$} \RightLabel{$!$} \UnaryInfC{$\vdash\mathcal{N},\mathcal{M},\LABEL{!_{y<p}M}{x}{q}$} \DisplayProof}\\ \vspace{5pt} \ \\ \end{minipage}} \caption{Some Logical Cut-Elimination Steps}\label{fig:logcutelim} \end{figure*} \begin{varitemize} \item When reducing multiplicative logical cuts, we extensively use the Subtyping Lemma.
\item In the dereliction reduction step, $\subst{\pi}{x}{0}$ (obtained through Lemma~\ref{lem:subst}) has conclusion $\vdash\subst{\mathcal{N}}{x}{0}, \LABEL{\subst{N}{x}{0}}{y}{\subst{p}{x}{0}}$. By Lemma~\ref{lem:tensornew}, $\mathcal{M}\sqsubseteq\sum_{x<q}\mathcal{N}\sqsubseteq\subst{\mathcal{N}}{x}{0}$, and as a consequence, there is $\sigma\;\triangleright\;\vdash\mathcal{M},\LABEL{\subst{N}{x}{0}}{y}{\subst{p}{x}{0}}$. From $\LABEL{?_{y<p}\NOT{N}}{x}{q}\sqsupseteq\LABEL{?_{y<r}\NOT{M}}{x}{1}$, it follows that $\LABEL{\subst{\NOT{M}}{x}{0}}{y}{\subst{r}{x}{0}} \longv{\linebreak[1]} \sqsupseteq \LABEL{\subst{\NOT{N}}{x}{0}}{y}{\subst{p}{x}{0}}$, and there is a proof $\lambda\;\triangleright\;\vdash \mathcal{O},\LABEL{\subst{\NOT{N}}{x}{0}}{y}{\subst{p}{x}{0}}$. \item In the contraction reduction step, we suppose that $\pi$ is a $\otimes$-tree. Then we can apply Lemma \ref{lem:splitting} and Lemma \ref{lem:subtyping}, and obtain $\sigma\;\triangleright\;\vdash\mathcal{M}, \LABEL{\NOT{O}}{x}{p}$ and $\lambda\;\triangleright\;\vdash\mathcal{O}, \LABEL{\subst{\NOT{O}}{x}{y+p}}{y}{q}$ such that $\mathcal{M}\uplus\mathcal{O}\sqsupseteq\mathcal{N}$. \item In the digging step, by Lemma~\ref{Lem:parsplitting}, from $\pi$ we can find $\sigma \;\triangleright\; \vdash \mathcal{K}, \LABEL{\NOT{O}}{y}{s}$, where $\mathcal{N} \sqsubseteq \sum_{x < q} \mathcal{K}$. \end{varitemize}} \shortv{\emph{Logical cuts} (i.e., those in which the two immediate subproofs end with a rule introducing the formula involved in the cut) can be reduced as in $\mathsf{LLP}$~\cite{phdlaurent}, but exploiting malleability whenever polynomials need to be modified. This defines the reduction relation $\longmapsto$ (see~\cite{EV} for more details).} All instances of the $\mathsf{Cut}$ rule which are not logical are said to be \emph{commutative}, and induce a relation \shortv{$\cong$} on proofs.
\longv{ As an example, the proof \begin{prooftree} \AxiomC{$\pi\;\triangleright\;\vdash\Gamma,\mathbf{N},\LABEL{N}{x}{p}, \LABEL{M}{x}{q}$} \RightLabel{$\parr$} \UnaryInfC{$\vdash \Gamma,\mathbf{N},\LABEL{N\parr M}{x}{r}$} \AxiomC{$\rho\;\triangleright\;\vdash\mathcal{N},\NOT{\mathbf{N}}$} \RightLabel{$\mathsf{Cut}$} \BinaryInfC{$\vdash\Gamma,\mathcal{N},\LABEL{N\parr M}{x}{r}$} \end{prooftree} is equivalent to \begin{prooftree} \AxiomC{$\pi\;\triangleright\;\vdash\Gamma,\mathbf{N},\LABEL{N}{x}{p}, \LABEL{M}{x}{q}$} \AxiomC{$\rho\;\triangleright\;\vdash\mathcal{N},\NOT{\mathbf{N}}$} \RightLabel{$\mathsf{Cut}$} \BinaryInfC{$\vdash\Gamma,\mathcal{N},\LABEL{N}{x}{p}, \LABEL{M}{x}{q}$} \RightLabel{$\parr$} \UnaryInfC{$\vdash \Gamma,\mathcal{N},\LABEL{N\parr M}{x}{r}$} \end{prooftree} This way we can define an equivalence relation $\cong$ on the space of proofs.} In general, not all cuts in a proof are logical, but any cut can be turned into a logical one: \begin{lemma}\label{lemma:logcut} Let $\pi$ be any proof containing an occurrence of the rule $\mathsf{Cut}$. Then, there are two proofs $\rho$ and $\sigma$ such that $\pi\cong\rho\longmapsto\sigma$, where $\rho$ can be effectively obtained from $\pi$. \end{lemma} The proof of Lemma~\ref{lemma:logcut} goes as follows: given any instance of the $\mathsf{Cut}$ rule \begin{prooftree} \AxiomC{$\pi\;\triangleright\; \vdash \Gamma, \LABEL{N}{x}{p}$} \AxiomC{$\rho\;\triangleright\; \vdash \mathcal{N}, \LABEL{P}{x}{p}$} \RightLabel{$\mathsf{Cut}$} \BinaryInfC{$\vdash \Gamma, \mathcal{N}$} \end{prooftree} consider the path (i.e., the sequence of formula occurrences) starting from $\LABEL{N}{x}{p}$ and going upward inside $\pi$, and the path starting from $\LABEL{P}{x}{p}$ and going upward inside $\rho$. Both paths end either at an $\mathsf{Ax}$ rule or at an instance of a rule introducing the main connective in $N$ or $P$.
The game to play is then to show that these two paths can always be \emph{shortened} by way of commutations, thus exposing the underlying logical cut. Lemma~\ref{lemma:logcut} is implicitly defining a cut-elimination procedure: given any instance of the $\mathsf{Cut}$ rule, turn it into a logical cut by the procedure from Lemma~\ref{lemma:logcut}, then fire it. This way we are implicitly defining another reduction relation $\longrightarrow$. The next question is the following: is this procedure going to terminate for every proof $\pi$ (i.e., is $\longrightarrow$ strongly, or weakly, normalizing)? How many steps does it take to turn $\pi$ into its cut-free form? Actually, $\longrightarrow$ can produce very long reduction sequences, but it is nonetheless strongly normalizing. A relatively easy way to prove it goes as follows: any $\mathsf{BLLP}$ proof $\pi$ corresponds to an $\mathsf{LLP}$ sequent calculus proof $\BLLPtoLLP{\pi}$, and the latter itself corresponds to a polarized proof net $\BLLPtoLLPpn{\pi}$~\cite{laurent2003polarized}. Moreover, $\pi\longrightarrow\rho$ implies that $\BLLPtoLLPpn{\pi}\mapsto\BLLPtoLLPpn{\rho}$, where $\mapsto$ is the canonical cut-elimination relation on polarized proof-nets. Finally, $\BLLPtoLLPpn{\pi}$ is identical to $\BLLPtoLLPpn{\rho}$ whenever $\pi\cong\rho$. Since $\mapsto$ is known to be strongly normalizing, $\longrightarrow$ does not admit infinite reduction sequences: \begin{proposition}[Cut-Elimination] The relation $\longrightarrow$ is strongly normalizing. \end{proposition} This does not mean that cut-elimination can be performed in (reasonably) bounded time. Already in $\mathsf{BLL}$ this can take hyperexponential time: the whole of Elementary Linear Logic~\cite{Girard98IC} can be embedded into it. \subsection{Soundness}\label{subsec:soundness} To get a soundness result, then, we somehow need to restrict the underlying reduction relation $\longrightarrow$.
Following~\cite{GSS92TCS}, one could indeed define a subset of $\longrightarrow$ just by imposing that in dereliction, contraction, or box cut-elimination steps, the involved $\otimes$-trees are closed. Moreover, we could stipulate that reduction is external, i.e., that it cannot take place inside boxes. Closed and external reduction, however, is not enough to simulate head reduction in the $\lambda\mu$-calculus, and not being able to reduce under the scope of $\mu$-abstractions does not make much sense anyway. We are forced, then, to consider an extension of closed reduction. The fact that this new notion of reduction still guarantees polynomial bounds is technically a remarkable strengthening with respect to $\mathsf{BLL}$'s Soundness Theorem~\cite{GSS92TCS}. There is a quite natural notion of \emph{downward} path in proofs: from any occurrence of a negative formula $\mathbf{N}$, just proceed downward until you either find (the main premise of) a $\mathsf{Cut}$ rule, or a conclusion. In the first case, the occurrence of $\mathbf{N}$ is said to be \emph{active}, in the second it is said to be \emph{passive}. Proofs can then be endowed with a new notion of reduction: a dereliction, contraction, or box digging cut can be fired only if the negative formula occurrences in its rightmost argument are all passive. In the literature, this is sometimes called a \emph{special cut} (e.g.~\cite{Baillot11}). Moreover, reduction needs to be external, as usual. This notion of reduction, as we will see, is enough to mimic head reduction, and is denoted by $\Longrightarrow$. The next step consists in associating a weight, in the form of a resource polynomial, with every proof, similarly to what happens in $\mathsf{BLL}$. The \emph{pre-weight} $\pfd{\pi}$ of a proof $\pi$ with conclusion $\vdash\mathbf{A}_1,\ldots,\mathbf{A}_n$ consists of: \begin{varitemize} \item a resource polynomial $p^\pi$.
\item $n$ disjoint sets of resource variables $S_1^\pi,\ldots,S_n^\pi$, each corresponding to a formula in $\mathbf{A}_1,\ldots,\mathbf{A}_n$; if this does not cause ambiguity, the set of resource variables corresponding to a formula $\mathbf{A}$ will be denoted by $\srvp{\pi}{\mathbf{A}}$. Similarly for $\srvp{\pi}{\Gamma}$, where $\Gamma$ is a multiset of formulas. \end{varitemize} If $\pi$ has pre-weight $p^\pi,S_1^\pi,\ldots,S_n^\pi$, then the \emph{weight} $q^\pi$ of $\pi$ is simply $p^\pi$ where, however, all the variables in $S_1^\pi,\ldots,S_n^\pi$ are substituted with $0$: $\subst{p^\pi}{\cup_{i=1}^nS_i^\pi}{0}$. \longv{ The pre-weight of a proof $\pi$ is defined by induction on the structure of $\pi$, following the rules in Figure~\ref{figure:preweights}. \begin{figure*} \centering \begin{tabular}{|c|c|}\hline\hline \vspace{-10pt} \\ $\pi$ & $\pfd{\pi}$\\ \vspace{-20pt} \\ \\ \hline\hline \vspace{-5pt} & \\ \longv{\scriptsize} \AxiomC{$\LABEL{M}{x}{q} \sqsubseteq \LABEL{N}{x}{p}$} \AxiomC{$\LABEL{\NOT{N}}{x}{p} \sqsupseteq \LABEL{P}{x}{r}$} \RightLabel{$\mathsf{Ax}$} \BinaryInfC{$\vdash \LABEL{M}{x}{q},\LABEL{P}{x}{r}$} \DisplayProof & \longv{\scriptsize} $\{y\},\emptyset,y$ \\ \vspace{-5pt} & \\ \longv{\scriptsize} \AxiomC{$\rho\;\triangleright\;\vdash \Gamma, \LABEL{N}{x}{p}$} \AxiomC{$\sigma\;\triangleright\;\vdash \mathcal{N}, \LABEL{\NOT{N}}{x}{p}$} \RightLabel{$\mathsf{Cut}$} \BinaryInfC{$\vdash \Gamma, \mathcal{N}$} \DisplayProof & \longv{\scriptsize} $\srvp{\rho}{\Gamma},\srvp{\sigma}{\mathcal{N}}, \subst{p^\rho}{\srvp{\rho}{\LABEL{N}{x}{p}}}{1}+ \subst{p^\sigma}{\srvp{\sigma}{\LABEL{\NOT{N}}{x}{p}}}{1}$ \\ \vspace{-5pt} & \\ \longv{\scriptsize} \AxiomC{$\rho\;\triangleright\;\vdash \Gamma, \LABEL{N}{x}{p}, \LABEL{M}{x}{q}$} \AxiomC{$p\sqsubseteq r$\quad$q\sqsubseteq r$} \RightLabel{$\parr$} \BinaryInfC{$\vdash \Gamma, \LABEL{N\parr M}{x}{r}$} \DisplayProof & \longv{\scriptsize} $ \srvp{\rho}{\Gamma},\srvp{\rho}{\LABEL{N}{x}{p}}\cup \srvp{\rho}{\LABEL{M}{x}{p}}\cup\{y\},p^\rho+y $ \\ \vspace{-5pt} & \\ \longv{\scriptsize} \AxiomC{$\rho\;\triangleright\;\vdash \mathcal{N}, \LABEL{P}{x}{p}$} \AxiomC{$\sigma\;\triangleright\;\vdash \mathcal{M}, \LABEL{Q}{x}{p}$\quad$r\sqsubseteq p$\quad$r\sqsubseteq q$} \RightLabel{$\otimes$} \BinaryInfC{$\vdash \mathcal{N}, \mathcal{M}, \LABEL{P\otimes Q}{z}{r}$} \DisplayProof & \longv{\scriptsize} $ \srvp{\rho}{\mathcal{N}},\srvp{\sigma}{\mathcal{M}}, \srvp{\rho}{\LABEL{P}{x}{p}}\cup \srvp{\sigma}{\LABEL{Q}{x}{p}}, p^\rho+p^\sigma $ \\ \vspace{-5pt} & \\ \longv{\scriptsize} \AxiomC{$\rho\;\triangleright\;\vdash\mathbf{N}_1,\ldots,\mathbf{N}_n,\LABEL{M}{x}{p}$} \AxiomC{$\mathbf{M}_i\sqsubseteq\sum_{y<q}\mathbf{N}_i$} \RightLabel{!} \BinaryInfC{$\vdash\mathbf{M}_1,\ldots,\mathbf{M}_n, \LABEL{!_{x<p}M}{y}{q}$} \DisplayProof & \longv{\scriptsize} $ \srvp{\rho}{\mathbf{N}_1}\cup\{y_1\},\ldots,\srvp{\rho}{\mathbf{N}_n}\cup\{y_n\},p\cdot p^\rho+ y_1+\ldots+y_n $ \\ \vspace{-5pt} & \\ \longv{\scriptsize} \AxiomC{$\rho\;\triangleright\;\vdash \mathcal{N}, \LABEL{\subst{P}{y}{0}}{x}{\subst{p}{y}{0}}$} \AxiomC{$\mathbf{N}\sqsubseteq\LABEL{?_{x<p}P}{y}{1}$} \RightLabel{$?d$} \BinaryInfC{$\vdash \mathcal{N},\mathbf{N}$} \DisplayProof & \longv{\scriptsize} $ \srvp{\rho}{\mathcal{N}},\srvp{\rho}{\LABEL{\subst{P}{y}{0}}{x}{\subst{p}{y}{0}}}\cup\{y\}, p^\rho+y $ \\ \vspace{-5pt} & \\ \longv{\scriptsize} \AxiomC{$\rho\;\triangleright\;\vdash \Gamma \vphantom{\LABEL{N}{x}{p}}$} \RightLabel{$?w$} \UnaryInfC{$\vdash \Gamma,\mathbf{N}$} \DisplayProof & \longv{\scriptsize} $ \srvp{\rho}{\Gamma},\{y\} $ \\ \vspace{-5pt} & \\ \AxiomC{$\rho\;\triangleright\;\vdash \Gamma,\mathbf{N},\mathbf{M}$} \AxiomC{$\mathbf{L}\sqsubseteq\mathbf{N}\uplus\mathbf{M}$} \RightLabel{$?c$} \BinaryInfC{$\vdash \Gamma,\mathbf{L}$} \DisplayProof & \longv{\scriptsize} $ \srvp{\rho}{\Gamma},\srvp{\rho}{\mathbf{N}}\cup\srvp{\rho}{\mathbf{M}}\cup\{y\} $ \\ \vspace{-5pt} & \\ \longv{\scriptsize}
\AxiomC{$\rho\;\triangleright\;\vdash \Gamma$} \RightLabel{$\bot$} \UnaryInfC{$\vdash \Gamma,\LABEL{\bot}{x}{p}$} \DisplayProof & \longv{\scriptsize} $ \srvp{\rho}{\Gamma},\{y\},p^\rho+y $ \\ \vspace{-5pt} & \\ \longv{\scriptsize} \AxiomC{$\vphantom{\vdash \Gamma}$} \RightLabel{$1$} \UnaryInfC{$\vdash \LABEL{1}{x}{p}$} \DisplayProof & \longv{\scriptsize} $ \emptyset,0 $ \\ \vspace{-5pt} & \\ \hline\hline \end{tabular} \vspace{5pt} \caption{Pre-weights for Proofs.}\label{figure:preweights} \end{figure*} Please notice how any negative formula $\mathbf{N}$ in the conclusion of $\pi$ is associated with some fresh variables, each accounting for the application of a rule to it. When $\mathbf{N}$ is then applied to a cut, all these variables are set to $1$. } \shortv { The pre-weight of a proof $\pi$ is defined by induction on the structure of $\pi$ (see~\cite{EV} for more details). The idea is that every occurrence of a negative formula is attributed a fresh variable, which is later instantiated with either $0$ (if the formula is passive) or $1$ (if it is active). } This allows us to discriminate between the case in which rules can ``produce'' time complexity along cut-elimination, and the case in which they do not. Ultimately, this leads to: \begin{lemma}\label{lemma:monotone} If $\pi\cong\rho$, then $q^\pi=q^\rho$. If $\pi\Longrightarrow\rho$, then $q^\rho\sqsubset q^\pi$. \end{lemma} The main idea behind Lemma~\ref{lemma:monotone} is that even if the logical cut we perform when going from $\pi$ to $\rho$ is ``dangerous'' (e.g. a contraction) \emph{and} the involved $\otimes$-tree is not closed, the residual negative rules have null weight, because they are passive. We can conclude that: \begin{theorem}[Polystep Soundness]\label{theo:polystepbllp} For every proof $\pi$, if $\pi\Longrightarrow^n\rho$, then $n\leq q^\pi$.
\end{theorem} In a sense, then, the weight of any proof $\pi$ is a resource polynomial which can be easily computed from $\pi$ \longv{(rules in \longv{Figure~\ref{figure:preweights}}\shortv{\cite{EV}} are anyway inductively defined)} but which is also an upper bound on the number of logical cut-elimination steps separating $\pi$ from its normal form. Please observe that $q^\pi$ continues to be such an upper bound even if any natural number is substituted for any of its free variables, an easy consequence of Lemma~\ref{lem:subst}. Why, then, are we talking about \emph{polynomial} bounds? In $\mathsf{BLL}$, and as a consequence also in $\mathsf{BLLP}$, one can write programs in such a way that the size of the input is reflected by a resource variable occurring in its type. \shortv{ As an example, the type of (Church encodings of) binary strings of length at most $x$ could be the following in $\mathsf{BLLP}$: $$ (X\arrow{}{1}X)\arrow{}{x}(X\arrow{}{1}X)\arrow{}{x}(X\arrow{}{1}X) $$ (where $N\arrow{}{p}M$ stands for $?_p\NOT{N}\parr M$). The weight, then, turns out to be a tool to study the behavior of terms seen as functions taking arguments of varying length. A more in-depth discussion about these issues is outside the scope of this paper.} Please refer to \cite{GSS92TCS}. \section{A Type System for the $\lambda\mu$-Calculus} We describe here a version of the $\lambda\mu$-calculus as introduced by de Groote~\cite{DBLP:conf/caap/Groote94}. Terms are as follows: $$ t,u\;::=\ x\; \; \mbox{\Large{$\mid$}}\;\;\lambda x.t\; \; \mbox{\Large{$\mid$}}\;\;\mu\alpha.t\; \; \mbox{\Large{$\mid$}}\;\;[\alpha]t\; \; \mbox{\Large{$\mid$}}\;\;(t)t, $$ where $x$ and $\alpha$ range over two infinite disjoint sets of variables (called $\lambda$-variables and $\mu$-variables, respectively). In contrast with the $\lambda\mu$-calculus as originally formulated by Parigot~\cite{Parigot}, $\mu$-abstraction is not restricted to terms of the form $[\alpha]t$ here.
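The following is a purely illustrative sketch (ours, not the paper's): the grammar above rendered as a small abstract syntax tree, together with the computation of the set $\FV{t}$ of free $\lambda$- and $\mu$-variables, which the side conditions of the reduction rules below refer to. All class and function names are our own.

```python
# Hypothetical rendering of the term grammar  t ::= x | λx.t | μα.t | [α]t | (t)t
from dataclasses import dataclass

class Term: pass

@dataclass
class Var(Term):     # a λ-variable x
    name: str

@dataclass
class Lam(Term):     # λx.t
    var: str
    body: Term

@dataclass
class Mu(Term):      # μα.t  (the body is NOT restricted to the form [α]t)
    mvar: str
    body: Term

@dataclass
class Named(Term):   # [α]t
    mvar: str
    body: Term

@dataclass
class App(Term):     # (t)u
    fun: Term
    arg: Term

def free_vars(t):
    """The set FV(t) of free λ- and μ-variables of a term."""
    if isinstance(t, Var):
        return {t.name}
    if isinstance(t, Lam):
        return free_vars(t.body) - {t.var}
    if isinstance(t, Mu):
        return free_vars(t.body) - {t.mvar}
    if isinstance(t, Named):
        return free_vars(t.body) | {t.mvar}
    if isinstance(t, App):
        return free_vars(t.fun) | free_vars(t.arg)

# μα.[α]x: α is bound, x is free
t = Mu("a", Named("a", Var("x")))
assert free_vars(t) == {"x"}
```

For instance, the side condition of the $\theta$-rule below ($\alpha\not\in\FV{t}$) becomes a simple membership test on `free_vars`.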
\subsection{Notions of Reduction} The reduction rules we consider are the following ones: \newcommand{\tow}{\to_{\mathsf{w}}} \newcommand{\toh}{\to_{\mathsf{h}}} \newcommand{\lsubst}[3]{{#1}[{}^{#3}/{}_{#2}]} \begin{align*} (\lambda x.t)u&\to_\beta t[{}^u/{}_x]; & & & (\mu \alpha.t)u&\to_\mu \mu\alpha.t[{}^{[\alpha](v)u}/{}_{[\alpha]v}]; & & & \mu\alpha.[\alpha]t&\to_\theta t; \end{align*} where, as usual, $\to_\theta$ can be fired only if $\alpha \not \in \FV{t}$. In the following, $\to$ is just $\to_{\beta\mu\theta}$. In so-called \emph{weak reduction}, denoted $\to_{\mathsf{w}}$, reduction simply cannot take place in the scope of binders, while \emph{head reduction}, denoted $\to_{\mathsf{h}}$, is a generalization of the same concept from pure $\lambda$-calculus~\cite{de1998environment}. Details are in Figure~\ref{fig:whred}. \begin{figure*} \fbox{ \begin{minipage}{.98\textwidth} \vspace{5pt} \begin{center} \AxiomC{$t\to u$} \UnaryInfC{$t\to_{\mathsf{w}} u$} \DisplayProof \hspace{1pt} \AxiomC{$t\to_{\mathsf{w}} u$} \UnaryInfC{$tv\to_{\mathsf{w}} uv$} \DisplayProof \hspace{1pt} \AxiomC{$t\to_{\mathsf{w}} u$} \UnaryInfC{$[\alpha]t\to_{\mathsf{w}}[\alpha]u$} \DisplayProof \longv{\\ \ \vspace{5pt} \\}\shortv{\hspace{1pt}} \AxiomC{$t\to_{\mathsf{w}} u$} \UnaryInfC{$t\to_{\mathsf{h}} u$} \DisplayProof \hspace{1pt} \AxiomC{$t\to_{\mathsf{h}} u$} \UnaryInfC{$\lambda x.t\to_{\mathsf{h}}\lambda x.u$} \DisplayProof \hspace{1pt} \AxiomC{$t\to_{\mathsf{h}} u$} \UnaryInfC{$\mu\alpha.t\to_{\mathsf{h}}\mu\alpha.u$} \DisplayProof \end{center} \vspace{-3pt} \end{minipage}} \caption{Weak and Head Notions of Reduction}\label{fig:whred} \end{figure*} Please notice how in head reduction, redexes can indeed be fired even if they lie in the scope of $\lambda$- or $\mu$-abstractions, which, however, cannot themselves be involved in a redex. This harmless restriction, which corresponds to taking the \emph{outermost} reduction order, is needed for technical reasons that will become apparent soon.
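As a small worked example (ours, not the paper's), all three rules fire in sequence on the term $(\mu\alpha.[\alpha]\lambda z.z)y$, and every step is a head step:

```latex
\begin{align*}
(\mu\alpha.[\alpha]\lambda z.z)y
  &\to_\mu \mu\alpha.[\alpha](\lambda z.z)y
    && \text{each named subterm $[\alpha]v$ becomes $[\alpha](v)y$}\\
  &\to_\beta \mu\alpha.[\alpha]y
    && \text{fired under $\mu\alpha$ and $[\alpha]$, as head reduction allows}\\
  &\to_\theta y
    && \text{since $\alpha\not\in\FV{y}$.}
\end{align*}
```

Note that weak reduction alone could not perform the $\beta$-step, since the redex lies in the scope of $\mu\alpha$.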
\subsection{The Type System} Following Laurent~\cite{laurent2003polarized}, types are just negative formulas. Not all of them can be used as types, however: in particular, $N\parr M$ is a legal type only if $N$ is in the form $?_{x<p}\NOT{O}$, and we use the following abbreviation in this case: $N \arrow{x}{p} M\equiv (?_{x<p}\NOT{N})\parr M$. In particular, if $M$ is $\bot$, then $N \arrow{x}{p} \bot$ can be abbreviated as $\pneg{x}{p}{N}$. \emph{Typing formulas} are negative formulas which are either $\bot$, or $X$, or in the form $N \arrow{x}{p}M$ (where $N$ and $M$ are typing formulas themselves). A \emph{modal formula} is one in the form $?_{x<p}\NOT{N}$ (where $N$ is a typing formula). Please observe that all the constructions from Section~\ref{sect:polform} (including labellings, sums, etc.) easily apply to typing formulas. Finally, we use the following abbreviation for labelled modal formulas: $\DLABEL{y}{q}{N}{x}{p} \equiv \LABEL{?_{y<q}\NOT{N}}{x}{p}$. A \emph{typing judgement} is a statement in the form $\tj{\Gamma}{t}{\mathbf{N}}{\Delta}$, where: \begin{varitemize} \item $\Gamma$ is a context assigning labelled modal formulas to $\lambda$-variables; \item $t$ is a $\lambda\mu$-term; \item $\mathbf{N}$ is a typing formula; \item $\Delta$ is a context assigning labelled typing formulas to $\mu$-variables. \end{varitemize} The way typing judgments are defined allows one to see them as $\mathsf{BLLP}$ sequents. This way, again, various concepts from Section~\ref{sect:sequents} can be lifted from sequents to judgments, and this remarkably includes the subtyping relation $\sqsubseteq$. Typing rules are in Figure~\ref{fig:typingadditive}.
\begin{figure*} \fbox{\shortv{\scriptsize} \begin{minipage}{.98\textwidth} \centering \ \vspace{10pt} \\ \AxiomC{$1\sqsubseteq p, \subst{r}{y}{0}\sqsubseteq q, M \sqsubseteq \subst{N}{y}{0}$} \RightLabel{\textsf{var}} \UnaryInfC{$\Gamma, x: \DLABEL{z}{r}{N}{y}{p} \vdash x: \LABEL{M}{z}{q} \mid\Delta$} \DisplayProof \qquad \AxiomC{$\Gamma, x : \DLABEL{z}{s}{N}{y}{p} \vdash t: \LABEL{M}{y}{q}\mid\Delta$} \AxiomC{$r\sqsupseteq q, r\sqsupseteq p$} \RightLabel{\textsf{abs}} \BinaryInfC{$\Gamma \vdash \lambda x.t: \LABEL{N \arrow{z}{s}M}{y}{r}\mid\Delta$} \DisplayProof \\ \ \vspace{10pt} \\ \AxiomC{$\Theta \vdash t : \LABEL{N \arrow{x}{p} M}{y}{q}\mid\Psi$} \AxiomC{$\Xi \vdash u : \LABEL{N}{x}{p}\mid\Phi$} \AxiomC{\parbox{120pt}{ $ h \sqsupseteq q\qquad k \sqsupseteq q\\ \Gamma \sqsubseteq \Theta \uplus \Upsilon\qquad\Upsilon \sqsubseteq \sum_{b<h} \Xi\\ \Delta \sqsubseteq \Psi \uplus \Pi\qquad\Pi \sqsubseteq \sum_{b<h}\Phi $}} \RightLabel{\textsf{app}} \TrinaryInfC{$\Gamma \vdash (t)u: \LABEL{M}{y}{k}\mid\Delta$} \DisplayProof \\ \ \vspace{10pt} \\ \AxiomC{$\Gamma \vdash t : \mathbf{N}\mid\alpha: \mathbf{M}, \Delta$} \AxiomC{$\mathbf{L}\sqsubseteq\mathbf{N}\uplus\mathbf{M}$} \RightLabel{$\mu$\textsf{-name}} \BinaryInfC{$\Gamma \vdash [\alpha]t: \LABEL{\bot}{z}{q}\mid\alpha: \mathbf{L}, \Delta$} \DisplayProof \qquad \AxiomC{$\Gamma \vdash t: \LABEL{\bot}{z}{q}\mid\beta: \mathbf{N}, \Delta$} \RightLabel{$\mu$\textsf{-abs}} \UnaryInfC{$\Gamma \vdash \mu \beta.t: \mathbf{N}\mid\Delta$} \DisplayProof \\ \ \vspace{10pt} \end{minipage}} \longv{\caption{(Additive) Type Assignment Rules}}\shortv{\caption{Type Assignment Rules}}\label{fig:typingadditive} \end{figure*} The typing rule for applications, in particular, can be seen as overly complicated. In fact, all premises except the first two are there to allow the necessary degree of malleability for contexts, without which even subject reduction would be in danger.
Alternatively, one could consider an explicit subtyping rule, the price being the loss of syntax-directedness. Indeed, all malleability results from Section~\ref{sect:malleability} can be transferred to the just defined type assignment system. \subsection{Subject Reduction and Polystep Soundness} The aim of this Section is to show that \emph{head} reduction preserves types, and as a corollary, that the number of reduction steps to normal form is bounded by a polynomial, along the same lines as in Theorem~\ref{theo:polystepbllp}. Actually, the latter will easily follow from the former, because so-called Subject Reduction will be formulated (and in a sense proved) with a precise correspondence between type derivations and proofs in mind. In order to facilitate this task, Subject Reduction is proved on a modified type-assignment system, called $\mathsf{BLLP}_{\lambda\mu}^\mathsf{mult}$, which can be proved equivalent to $\mathsf{BLLP}_{\lambda\mu}$. The only fundamental difference between the two systems lies in how structural rules, i.e., contraction and weakening, are reflected into the type system. As we have already noticed, $\mathsf{BLLP}_{\lambda\mu}$ has an \emph{additive} flavour, since structural rules are implicitly applied in binary and $0$-ary typing rules. This, in particular, makes the system syntax-directed and type derivations more compact. The only problem with this approach is that the correspondence between type derivations and proofs is too weak to be directly lifted to a dynamic level (e.g., one step in $\to_{\mathsf{h}}$ could correspond to possibly many steps in $\Longrightarrow$). In $\mathsf{BLLP}_{\lambda\mu}^\mathsf{mult}$, on the contrary, structural rules are explicit, which turns it into a useful technical tool to prove properties of $\mathsf{BLLP}_{\lambda\mu}$.
\shortv{The rules of $\mathsf{BLLP}_{\lambda\mu}^\mathsf{mult}$ are in~\cite{EV}.} \longv{ $\mathsf{BLLP}_{\lambda\mu}^\mathsf{mult}$'s typing judgments are precisely the ones of $\mathsf{BLLP}_{\lambda\mu}$. What changes are typing rules, which are in Figure~\ref{fig:typingmultiplicative}. \begin{figure*} \centering \fbox{ \begin{minipage}{.98\textwidth} \begin{center} \ \vspace{5pt} \\ \AxiomC{$1\sqsubseteq p, \subst{r}{y}{0}\sqsubseteq q, M \sqsubseteq \subst{N}{y}{0}$} \RightLabel{\textsf{var}} \UnaryInfC{$x: \DLABEL{z}{r}{N}{y}{p} \vdash x: \LABEL{M}{z}{q} |$} \DisplayProof \qquad \AxiomC{$\Gamma, x : \DLABEL{z}{s}{N}{y}{p} \vdash t: \LABEL{M}{y}{q} | \Delta$} \AxiomC{$r\sqsupseteq q, r\sqsupseteq p$} \RightLabel{\textsf{abs}} \BinaryInfC{$\Gamma \vdash \lambda x.t: \LABEL{N \arrow{z}{s}M}{y}{r} | \Delta$} \DisplayProof \\ \ \vspace{10pt} \\ \AxiomC{$\Gamma \vdash t : \LABEL{N \arrow{x}{p} M}{y}{q} | \Delta$} \AxiomC{$\Theta \vdash u : \LABEL{N}{x}{p} | \Xi$} \AxiomC{\parbox{2.6cm}{ $ h \sqsupseteq q, k \sqsupseteq q,\\ \Psi \sqsubseteq \sum_{b<h} \Theta,\\ \Phi \sqsubseteq \sum_{b<h}\Xi $}} \RightLabel{\textsf{app}} \TrinaryInfC{$\Gamma, \Psi \vdash (t)u: \LABEL{M}{y}{k} | \Delta, \Phi$} \DisplayProof \\ \ \vspace{10pt} \\ \AxiomC{$\Gamma \vdash t : \mathbf{N} | \Delta$} \RightLabel{$\mu$\textsf{-name}} \UnaryInfC{$\Gamma \vdash [\alpha]t: \LABEL{\bot}{z}{q} | \alpha: \mathbf{N}, \Delta$} \DisplayProof \qquad \AxiomC{$\Gamma \vdash t: \LABEL{\bot}{z}{q} | \beta: \mathbf{N}, \Delta$} \RightLabel{$\mu$\textsf{-abs}} \UnaryInfC{$\Gamma \vdash \mu \beta t: \mathbf{N} | \Delta$} \DisplayProof \\ \ \vspace{10pt} \\ \AxiomC{$\Gamma \vdash t:\mathbf{N} \mid \Delta$} \RightLabel{$?w^\lambda$} \UnaryInfC{$\Gamma, y:\mathbf{M} \vdash t:\mathbf{N} \mid \Delta$} \DisplayProof \qquad \AxiomC{$\Gamma, x: \mathbf{N}, y: \mathbf{M} \vdash t: \mathbf{O} \mid \Delta$} \AxiomC{$\mathbf{L} \sqsubseteq \mathbf{N} \uplus \mathbf{M}$} \RightLabel{$?c^\lambda$} \BinaryInfC{$\Gamma, 
z: \mathbf{L} \vdash \subst{\subst{t}{x}{z}}{y}{z}:\mathbf{O} \mid \Delta$} \DisplayProof \\ \ \vspace{10pt} \\ \AxiomC{$\Gamma \vdash t:\mathbf{N} \mid \Delta$} \RightLabel{$?w^\mu$} \UnaryInfC{$\Gamma \vdash t:\mathbf{N} \mid \Delta, \alpha:\mathbf{M} $} \DisplayProof \qquad \AxiomC{$\Gamma \vdash t: \mathbf{O} \mid \Delta, \alpha: \mathbf{N}, \beta: \mathbf{M}$} \AxiomC{$\mathbf{L} \sqsubseteq \mathbf{N} \uplus \mathbf{M}$} \RightLabel{$?c^\mu$} \BinaryInfC{$\Gamma\vdash \subst{\subst{t}{\alpha}{\gamma}}{\beta}{\gamma}: \mathbf{O} \mid \Delta, \gamma: \mathbf{L} $} \DisplayProof\\ \ \vspace{5pt} \\ \end{center} \end{minipage}} \caption{(Multiplicative) Type Assignment Rules}\label{fig:typingmultiplicative} \end{figure*}} Whenever derivability in one of the systems needs to be distinguished from derivability in the other, we will put the system's name in subscript position (e.g. $\tjp{\Gamma}{\mathsf{BLLP}_{\lambda\mu}^\mathsf{mult}}{t}{\mathbf{N}}{\Delta}$). Not so surprisingly, the two systems $\mathsf{BLLP}_{\lambda\mu}$ and $\mathsf{BLLP}_{\lambda\mu}^\mathsf{mult}$ type exactly the same class of terms: \begin{lemma} $\tjp{\Gamma}{\mathsf{BLLP}_{\lambda\mu}^\mathsf{mult}}{t}{\mathbf{N}}{\Delta}$ iff $\tjp{\Gamma}{\mathsf{BLLP}_{\lambda\mu}}{t}{\mathbf{N}}{\Delta}$. \end{lemma} \begin{proof} The left-to-right implication follows from weakening and contraction lemmas for $\mathsf{BLLP}_{\lambda\mu}$, which are easy to prove. The right-to-left implication is more direct, since additive $\textsf{var}$ and $\textsf{app}$ are multiplicatively derivable.\shortv{\qed} \end{proof} Given a $\mathsf{BLLP}_{\lambda\mu}^\mathsf{mult}$ type derivation $\pi$, one can define a $\mathsf{BLLP}$ proof $\pfd{\pi}$ \longv{following the rules in Figure~\ref{fig:mapping}, which work by induction on the structure of $\pi$.
\begin{figure*} \centering {\footnotesize \begin{tabular}{|c|c|}\hline\hline \vspace{-5pt} & \\ $\pi$ & $\pfd{\pi}$\\ \vspace{-5pt} & \\ \hline\hline \vspace{-1pt} & \\ \AxiomC{$1\sqsubseteq p, \subst{r}{y}{0}\sqsubseteq q, M \sqsubseteq \subst{N}{y}{0}$} \RightLabel{\textsf{var}} \UnaryInfC{$x: \DLABEL{z}{r}{N}{y}{p} \vdash x: \LABEL{M}{z}{q} |$} \DisplayProof & \AxiomC{} \UnaryInfC{$\vdash\LABEL{\subst{\NOT{N}}{y}{0}}{z}{\subst{r}{y}{0}},\LABEL{M}{z}{q}$} \UnaryInfC{$\vdash\LABEL{?_{z<r}\NOT{N}}{y}{p},\LABEL{M}{z}{q}$} \DisplayProof \\ \vspace{-1pt} & \\ \AxiomC{$\rho\;\triangleright\;\Gamma, x : \DLABEL{z}{s}{N}{y}{p} \vdash t: \LABEL{M}{y}{q} | \Delta$} \RightLabel{\textsf{abs}} \UnaryInfC{$\Gamma \vdash \lambda x.t: \LABEL{N \arrow{z}{s}M}{y}{r} | \Delta$} \DisplayProof & \AxiomC{$\pfd{\rho}\;\triangleright\;\vdash\Gamma, \LABEL{?_{z<s}N}{y}{p},\LABEL{M}{y}{q},\Delta$} \UnaryInfC{$\vdash\Gamma, \LABEL{?_{z<s}N\parrM}{y}{r},\Delta$} \DisplayProof \\ \vspace{-1pt} & \\ \AxiomC{ \parbox{120pt}{ \begin{center} $\rho\;\triangleright\;\Gamma \vdash t : \LABEL{N \arrow{x}{p} M}{y}{q} | \Delta$\\ $\sigma\;\triangleright\;\Theta \vdash u : \LABEL{N}{x}{p} | \Xi$\end{center}}} \RightLabel{\textsf{app}} \UnaryInfC{$\Gamma, \Psi \vdash (t)u: \LABEL{M}{y}{k} | \Delta, \Phi$} \DisplayProof & \AxiomC{$\pfd{\rho}\;\triangleright\;\vdash\Gamma,\LABEL{?_{x<p}\NOT{N}\parrM}{y}{q},\Delta$} \AxiomC{$\pfd{\sigma}\;\triangleright\;\vdash\Theta,\LABEL{N}{x}{p},\Xi$} \UnaryInfC{$\vdash\Psi,\LABEL{!_{x<p}N}{y}{h},\Phi$} \AxiomC{$\vdash\LABEL{\NOT{M}}{y}{k},\LABEL{M}{y}{k}$} \BinaryInfC{$\vdash\Psi,\LABEL{!_{x<p}N\otimes\NOT{M}}{y}{q},\Phi,\LABEL{M}{y}{k}$} \BinaryInfC{$\vdash\Gamma,\Psi,\LABEL{M}{y}{k},\Delta,\Phi$} \DisplayProof \\ \vspace{-1pt} & \\ \AxiomC{$\rho\;\triangleright\;\Gamma \vdash t : \mathbf{N} | \Delta$} \RightLabel{$\mu$\textsf{-name}} \UnaryInfC{$\Gamma \vdash [\alpha]t: \LABEL{\bot}{z}{q} | \alpha: \mathbf{N}, \Delta$} \DisplayProof & 
\AxiomC{$\pfd{\rho}\;\triangleright\;\Gamma,\mathbf{N},\Delta$} \UnaryInfC{$\pfd{\rho}\;\triangleright\;\Gamma,\LABEL{\bot}{z}{q},\mathbf{N},\Delta$} \DisplayProof \\ \vspace{-1pt} & \\ \AxiomC{$\rho\;\triangleright\;\Gamma \vdash t: \LABEL{\bot}{z}{q} | \beta: \mathbf{N}, \Delta$} \RightLabel{$\mu$\textsf{-abs}} \UnaryInfC{$\Gamma \vdash \mu \beta t: \mathbf{N} | \Delta$} \DisplayProof & \AxiomC{$\pfd{\rho}\;\triangleright\;\vdash\Gamma,\LABEL{\bot}{z}{q},\mathbf{N},\Delta$} \AxiomC{$\vdash\LABEL{1}{z}{q}$} \BinaryInfC{$\vdash\Gamma,\mathbf{N},\Delta$} \DisplayProof \\ \vspace{-1pt} & \\ \AxiomC{$\rho\;\triangleright\;\Gamma \vdash t:\mathbf{N} \mid \Delta$} \RightLabel{$?w^\lambda$} \UnaryInfC{$\Gamma, y:\mathbf{M} \vdash t:\mathbf{N} \mid \Delta$} \DisplayProof & \AxiomC{$\pfd{\rho}\;\triangleright\;\vdash\Gamma,\mathbf{N},\Delta$} \UnaryInfC{$\vdash\Gamma,\mathbf{M},\mathbf{N},\Delta$} \DisplayProof \\ \vspace{-1pt} & \\ \AxiomC{$\rho\;\triangleright\;\Gamma \vdash t:\mathbf{N} \mid \Delta$} \RightLabel{$?w^\mu$} \UnaryInfC{$\Gamma \vdash t:\mathbf{N} \mid \Delta, \alpha:\mathbf{M} $} \DisplayProof & \AxiomC{$\pfd{\rho}\;\triangleright\;\vdash\Gamma,\mathbf{N},\Delta$} \UnaryInfC{$\vdash\Gamma,\mathbf{N},\mathbf{M},\Delta$} \DisplayProof \\ \vspace{-1pt} & \\ \AxiomC{$\rho\;\triangleright\;\Gamma, x: \mathbf{N}, y: \mathbf{M} \vdash t: \mathbf{O} \mid \Delta$} \RightLabel{$?c^\lambda$} \UnaryInfC{$\Gamma, z: \mathbf{L} \vdash \subst{\subst{t}{x}{z}}{y}{z}: \mathbf{O} \mid \Delta$} \DisplayProof & \AxiomC{$\pfd{\rho}\;\triangleright\;\vdash\Gamma,\mathbf{N},\mathbf{M},\mathbf{O},\Delta$} \UnaryInfC{$\vdash\Gamma,\mathbf{L},\mathbf{O},\Delta$} \DisplayProof \\ \vspace{-1pt} & \\ \AxiomC{$\rho\;\triangleright\;\Gamma \vdash t: \mathbf{O} \mid \Delta, \alpha: \mathbf{N}, \beta: \mathbf{M}$} \RightLabel{$?c^\mu$} \UnaryInfC{$\Gamma\vdash \subst{\subst{t}{\alpha}{\gamma}}{\beta}{\gamma}: \mathbf{O} \mid \Delta, \gamma: \mathbf{L} $} \DisplayProof & 
\AxiomC{$\pfd{\rho}\;\triangleright\;\vdash\Gamma,\mathbf{O},\mathbf{N},\mathbf{M},\Delta$} \UnaryInfC{$\vdash\Gamma,\mathbf{O},\mathbf{L},\Delta$} \DisplayProof \\ \vspace{-1pt} & \\ \hline\hline \end{tabular}} \vspace{5pt} \caption{Mapping of (multiplicative) derivations into $\mathsf{BLLP}$ proofs} \label{fig:mapping} \end{figure*}} \shortv{by induction on the structure of $\pi$, closely following Laurent's translation~\cite{laurent2003polarized}.} This way one not only gets some guiding principles for subject reduction, but can also prove that the underlying transformation process is nothing more than cut-elimination: \longv{ \begin{lemma}[$\lambda$-Substitution]\label{Lem:lamsub} If $\pi\;\triangleright\;\Gamma, x: \DLABEL{z}{s}{N}{y}{p}\vdash t: \LABEL{M}{y}{q}\mid\Delta$ and $\rho\;\triangleright\;\Theta\vdash u: \LABEL{N}{z}{s}\mid\Xi$, then for all $h\sqsupseteq q$ there is $\sigma_h$ such that $$ \sigma_h\;\triangleright\;\Gamma, \sum_{b<h}\Theta\vdash \subst{t}{x}{u}: \LABEL{M}{y}{q}\mid\Delta, \sum_{b<h} \Xi. $$ Moreover, the proof obtained by $h$-boxing $\pfd{\rho}$ and cutting it against $\pfd{\pi}$ is guaranteed to $\Longrightarrow$-reduce to $\sigma_h$. \end{lemma} \begin{proof} As usual, this is an induction on the structure of $\pi$. We only need to be careful and generalize the statement to the case in which a \emph{simultaneous} substitution for many variables is needed.
\end{proof} \begin{lemma}[$\mu$-Substitution]\label{Lem:musub} If $\pi\;\triangleright\;\Gamma \vdash t: \LABEL{\bot}{y}{q}\mid\Delta, \alpha: \LABEL{N\arrow{z}{s} M}{y}{p}$ and $\rho\;\triangleright\; \Theta \vdash u: \LABEL{N}{z}{s}\mid\Xi$, then for all $h\sqsupseteq q$ there is $\sigma_h$ such that $$ \Gamma,\sum_{b < h}\Theta\vdash\subst{t}{[\alpha]w}{[\alpha](w)u}: \LABEL{\bot}{y}{q}\mid\Delta, \alpha: \LABEL{M}{y}{p}, \sum_{b < h} \Xi. $$ Moreover, the proof obtained by $h$-boxing $\pfd{\rho}$, tensoring it with an axiom and cutting the result against $\pfd{\pi}$ is guaranteed to $\Longrightarrow$-reduce to $\sigma_h$. \end{lemma} } \begin{theorem}[Subject Reduction]\label{theo:subjred} Let $\pi\;\triangleright\;\tj{\Gamma}{t}{\mathbf{N}}{\Delta}$ and suppose $t\to_{\mathsf{h}}u$. Then there is $\rho\;\triangleright\;\tj{\Gamma}{u}{\mathbf{N}}{\Delta}$. Moreover, $\pfd{\pi}\Longrightarrow^+\pfd{\rho}$. \end{theorem} \longv{ \begin{proof} By induction on the structure of $\pi$. Here are some interesting cases: \begin{varitemize} \item If the term is an application $(t)v$ and reduction takes place inside $t$, then $\pi$ is as follows \begin{prooftree} \AxiomC{$\Gamma \vdash t : \LABEL{N \arrow{x}{p} M}{y}{q} | \Delta$} \AxiomC{$\Theta \vdash v : \LABEL{N}{y}{p} | \Xi$} \AxiomC{$h \sqsupseteq q, k \sqsupseteq q$} \RightLabel{app} \TrinaryInfC{$\Gamma \uplus \sum_{b<h} \Theta \vdash (t)v: \LABEL{M}{z}{k} | \Delta \uplus \sum_{b<h}\Xi$} \end{prooftree} then $\rho$ is \begin{prooftree} \AxiomC{$\Gamma \vdash u : \LABEL{N \arrow{x}{p} M}{y}{q} | \Delta$} \AxiomC{$\Theta \vdash v : \LABEL{N}{y}{p} | \Xi$} \AxiomC{$h \sqsupseteq q, k \sqsupseteq q$} \RightLabel{app} \TrinaryInfC{$\Gamma \uplus \sum_{b<h} \Theta \vdash (u)v: \LABEL{M}{z}{k} | \Delta \uplus \sum_{b<h}\Xi$} \end{prooftree} which exists by induction hypothesis. We omit the other trivial cases.
\item If $t$ is a $\beta$-redex, then $\pi$ looks as follows: \begin{prooftree} \AxiomC{$\Gamma, x : \DLABEL{z}{s}{N}{y}{p} \vdash t: \LABEL{M}{y}{q} | \Delta$} \AxiomC{$r\sqsupseteq q, r\sqsupseteq p$} \RightLabel{abs} \BinaryInfC{$\Gamma \vdash \lambda x.t: \LABEL{N \arrow{y}{s} M}{u}{r} | \Delta$} \AxiomC{$\Theta \vdash u : \LABEL{N}{z}{s} | \Xi$} \AxiomC{$h \sqsupseteq q, k \sqsupseteq q$} \RightLabel{app} \TrinaryInfC{$\Gamma,\sum_{b<h} \Theta \vdash (\lambda x.t)u: \LABEL{M}{y}{k} | \Delta,\sum_{b<h}\Xi$} \end{prooftree} Lemma~\ref{Lem:lamsub} ensures that the required type derivation actually exists: \begin{prooftree} \AxiomC{$\Gamma,\sum_{b<h} \Theta \vdash \subst{t}{x}{u}: \LABEL{M}{y}{k} | \Delta,\sum_{b<h}\Xi$} \end{prooftree} \item If $t$ is a $\mu$-redex, then $\pi$ looks as follows: \begin{prooftree} \AxiomC{$\Gamma \vdash t: \LABEL{\bot}{z}{q} | \beta: \LABEL{N \arrow{z}{s} M}{y}{p}, \Delta$} \RightLabel{$\mu$-abs} \UnaryInfC{$\Gamma \vdash \mu \beta t: \LABEL{N \arrow{z}{s} M}{y}{p} | \Delta$} \AxiomC{$\Theta \vdash u : \LABEL{N}{y}{s} | \Xi$} \AxiomC{$h \sqsupseteq p, k \sqsupseteq p$} \RightLabel{app} \TrinaryInfC{$\Gamma, \sum_{b<h} \Theta \vdash (\mu \beta.t)u: \LABEL{M}{z}{k} | \Delta,\sum_{b<h}\Xi$} \end{prooftree} and Lemma~\ref{Lem:musub} ensures us that $\rho$ exists for \begin{prooftree} \AxiomC{$\Gamma \uplus \sum_{b<h} \Theta \vdash \mu \beta.\lsubst{t}{[\beta]v}{[\beta](v)u}: \LABEL{M}{z}{k} | \Delta \uplus \sum_{b<h}\Xi$} \end{prooftree} \item If $t$ is a $\theta$-redex, then $\pi$ looks as follows: \begin{prooftree} \AxiomC{$\pi \;\triangleright\; \Gamma \vdash t: \LABEL{N}{x}{p} \mid \Delta$} \RightLabel{$?w^\mu$} \UnaryInfC{$\Gamma \vdash t: \LABEL{N}{x}{p} \mid \Delta, \alpha: \LABEL{N}{y}{q}$} \AxiomC{$r \sqsupseteq p+q$} \RightLabel{\textsf{$\mu$-name}} \BinaryInfC{$\Gamma \vdash [\alpha]t: \LABEL{\bot}{x}{s} \mid \Delta, \alpha: \LABEL{N}{y}{r}$} \RightLabel{\textsf{$\mu$-abs}} \UnaryInfC{$\Gamma \vdash t: \LABEL{N}{x}{r} \mid 
\Delta$} \end{prooftree} Since $r=p+q \sqsupseteq p$ we know that \begin{prooftree} \AxiomC{$\pi^S \;\triangleright\; \Gamma \vdash t: \LABEL{N}{x}{r} \mid \Delta$} \end{prooftree} where $\pi^S$ is the derivation obtained from $\pi$ by applying the Subtyping Lemma. \end{varitemize} This concludes the proof. \end{proof} } Observe how performing head reduction corresponds to \remove{following }$\Longrightarrow$, instead of the more permissive $\longrightarrow$. The following, then, is an easy corollary of Theorem~\ref{theo:subjred} and Theorem~\ref{theo:polystepbllp}: \begin{theorem}[Polystep Soundness for Terms]\label{theo:polystepterms} Let $\pi\;\triangleright\;\tj{\Gamma}{t}{\mathbf{N}}{\Delta}$ and let $t\to_{\mathsf{h}}^n u$. Then $n\leq p_{\pfd{\pi}}$. \end{theorem} \section{Control Operators} In this section, we show that $\mathsf{BLLP}_{\lambda\mu}$ is powerful enough to type (the natural encoding of) two popular control operators, namely \ensuremath{\mathsf{Scheme}}'s \texttt{callcc}\ and Felleisen's {\ensuremath{\mathcal{C}}}~\cite{ariola2003minimal,laurent2003polarized}. Control operators change the evaluation context of an expression. This is simulated by the operators $\mu$ and $[\cdot]$ which can, respectively, save and restore a stack of arguments to be passed to subterms. This idea, by the way, is the starting point of an extension of Krivine's machine for de Groote's $\lambda\mu$ \cite{de1998environment} (see Section~\ref{sec:absmac}). \longv{ An extension of de Groote's calculus named $\Lambda\mu$-calculus \cite{saurin2005separation} satisfies a B\"ohm separation theorem that fails for Parigot's calculus \cite{david2001calculus}. Hence in an untyped setting the original $\lambda\mu$ of Parigot is strictly less expressive than de Groote's calculus.
} \subsection{\large \texttt{callcc}} An encoding of \texttt{callcc}\ into the $\lambda\mu$-calculus could be, e.g., $\kappa=\lambda x.\mu\alpha.[\alpha](x)\lambda y.\mu\beta.[\alpha]y$. Does $\kappa$ have the operational behavior we would expect from \texttt{callcc}? First of all, it should satisfy the following property (see \cite{felleisen1990expressive}). If $k\not\in\FV{e}$, then $(\kappa)\lambda k.e\to^* e$. Indeed: $$ (\lambda x.\mu\alpha.[\alpha](x)\lambda y.\mu\beta.[\alpha]y)\lambda k.e\to_{\mathsf{h}}\mu\alpha.[\alpha](\lambda k.e)\lambda y.\mu\beta.[\alpha]y \to_{\mathsf{h}} \mu\alpha.[\alpha]e\to_{\mathsf{h}} e, $$ where the second $\beta$-reduction step yields $\subst{e}{k}{\lambda y.\mu\beta.[\alpha]y}$, which equals $e$ since $k\not\in \FV{e}$ by hypothesis. It is important to observe that the second step substitutes for a variable a term with a free $\mu$-variable, hence weak reduction gets stuck. (Actually, our notion of weak reduction is even more restrictive than the one proposed by de Groote in \cite{de1998environment}.) Head reduction, on the contrary, is somehow more liberal. Moreover, it is also straightforward to check that the reduction of {\texttt{callcc}} in \cite[\S 3.4]{Parigot} can be simulated by head reduction on $\kappa$. But is $\kappa$ typable in $\mathsf{BLLP}_{\lambda\mu}$? The answer is positive: a derivation typing it with (an instance of) Peirce's law is in Figure~\ref{fig:callcc}, where $\pi$ is the obvious derivation of $$ x: \DLABEL{}{r}{(X \arrow{}{s} Y)\arrow{}{1}X}{}{1} \vdash x: \LABEL{(X \arrow{}{s} Y)\arrow{}{1} X}{}{r}\mid\alpha: \LABEL{X}{}{0}.
$$ \begin{figure*} \fbox{ \scriptsize \begin{minipage}{\textwidth} \centering \vspace{5pt} \begin{prooftree} \AxiomC{$\pi$} \AxiomC{} \RightLabel{\textsf{var}} \UnaryInfC{$y: \DLABEL{}{s}{X}{}{1} \vdash y: \LABEL{X}{}{s}\mid\alpha: \LABEL{X}{}{0}, \beta: \LABEL{Y}{}{0}$} \RightLabel{$\mu$\textsf{-name}} \UnaryInfC{$y: \DLABEL{}{s}{X}{}{1} \vdash [\alpha]y: \LABEL{\bot}{}{0}\mid\alpha: \LABEL{X}{}{s}, \beta: \LABEL{Y}{}{0}$} \RightLabel{$\mu$\textsf{-abs}} \UnaryInfC{$y: \DLABEL{}{s}{X}{}{1} \vdash \mu\beta.[\alpha]y: \LABEL{Y}{}{0}\mid\alpha: \LABEL{X}{}{s}$} \RightLabel{\textsf{abs}} \UnaryInfC{$\vdash \lambda y.\mu\beta.[\alpha]y: \LABEL{X\arrow{}{s} Y}{}{1}|\alpha: \LABEL{X}{}{s}$} \RightLabel{\textsf{app}} \BinaryInfC{$x: \DLABEL{v}{r}{(X\arrow{}{s} Y)\arrow{}{1} X}{}{1} \vdash (x)\lambda y.\mu\beta.[\alpha]y: \LABEL{X}{v}{r}|\alpha: \LABEL{X}{v}{\sum_{v<r} s}$} \AxiomC{$\begin{array}{rcl}k&\sqsupseteq&r+\sum_{v<r}s \\ k &\sqsupseteq& 1\end{array}$} \RightLabel{$\mu$\textsf{-name}} \BinaryInfC{$x: \DLABEL{v}{r}{(X\arrow{}{s} Y)\arrow{}{1} X}{}{0} \vdash [\alpha](x)\lambda y.\mu\beta.[\alpha]y: \LABEL{\bot}{}{1}\mid\alpha: \LABEL{X}{}{k}$} \RightLabel{$\mu$\textsf{-abs}} \UnaryInfC{$x: \DLABEL{v}{r}{(X\arrow{v}{s} Y)\arrow{}{1} X}{}{1} \vdash \mu\alpha.[\alpha](x)\lambda y.\mu\beta.[\alpha]y: \LABEL{X}{}{k}\mid$} \RightLabel{\textsf{abs}} \UnaryInfC{$\vdash \lambda x.\mu\alpha.[\alpha](x)\lambda y.\mu\beta.[\alpha]y: \LABEL{((X\arrow{}{s} Y)\arrow{}{1} X)\arrow{v}{r} X}{}{k}\mid$} \end{prooftree}\ \end{minipage}} \caption{A Type Derivation for $\kappa$} \label{fig:callcc} \end{figure*} \subsection{Felleisen's {\large \ensuremath{\mathcal{C}}}} The canonical way to encode Felleisen's \ensuremath{\mathcal{C}}\ as a $\lambda\mu$-term is as the term $\aleph=\lambda f.\mu\alpha.(f)\lambda x.[\alpha]x$.
Its behavior should be something like $(\aleph)w t_1 \dots t_k\to (w)\lambda x.(x)t_1 \dots t_k$, where $x\not\in FV(t_1, \dots, \linebreak[1] t_k)$, i.e., $x$ is a fresh variable. Indeed $$ (\aleph)w t_1 \dots t_k\to_{\mathsf{h}}(\mu\alpha\remove{:K}.(w)\lambda x.[\alpha](x))t_1 \dots t_k\to_{\mathsf{h}}^k \mu\alpha\remove{:N}.(w)\lambda x\remove{:(N_1\arrow{y_1}{p_1} \dots N_k\arrow{y_k}{p_k} N)}.[\alpha](x)t_1\dots t_k. $$ A type derivation for $\aleph$ is in Figure~\ref{fig:fellc}, where $\sigma$ is a derivation for $$ f:\DLABEL{v}{h}{\pneg{}{1}{\pneg{}{r}{X}}}{}{1} \vdash f:\LABEL{\pneg{}{1}{\pneg{}{r}{X}}}{v}{h}\mid\alpha:\LABEL{X}{}{0}. $$ \begin{figure*}[bpt] \fbox{ \scriptsize \begin{minipage}{0.98\textwidth} \centering \vspace{5pt} \begin{prooftree} \AxiomC{$\sigma$} \AxiomC{} \RightLabel{\textsf{var}} \UnaryInfC{$x: \DLABEL{}{r}{X}{}{1} \vdash x: \LABEL{X}{}{r}$} \RightLabel{$\mu$\textsf{-name}} \UnaryInfC{$x: \DLABEL{}{r}{X}{}{1} \vdash [\alpha]x: \LABEL{\bot}{}{0} \mid \alpha: \LABEL{X}{}{r}$} \RightLabel{\textsf{abs}} \UnaryInfC{$\vdash \lambda x.[\alpha]x: \LABEL{\pneg{}{r}{X}}{}{1} \mid \alpha: \LABEL{X}{}{r}$} \RightLabel{\textsf{app}} \BinaryInfC{$f: \DLABEL{v}{h}{\pneg{}{1}{\pneg{}{r}{X}}}{}{1} \vdash (f)\lambda x.[\alpha]x: \LABEL{\bot}{v}{h} \mid \alpha: \LABEL{X}{}{\sumlf{v}{h}{r}}$} \RightLabel{$\mu$\textsf{-abs}} \UnaryInfC{$f: \DLABEL{v}{h}{\pneg{}{1}{\pneg{}{r}{X}}}{}{1} \vdash \mu\alpha.(f)\lambda x.[\alpha]x: \LABEL{X}{}{\sumlf{v}{h}{r}}\mid$} \AxiomC{$\begin{array}{rcl}k&\sqsupseteq& 1\\k&\sqsupseteq&\sumlf{v}{h}{r}\end{array}$} \RightLabel{\textsf{abs}} \BinaryInfC{$\vdash \lambda f.\mu\alpha.(f)\lambda x.[\alpha]x: \LABEL{\pneg{}{1}{\pneg{}{r}{X}}\arrow{v}{h}X}{}{k}$} \end{prooftree}\ \end{minipage} } \caption{A Type Derivation for $\aleph$} \label{fig:fellc} \end{figure*} It is worth noting that weak reduction is strong enough to properly simulate the operational behavior of $\ensuremath{\mathcal{C}}$.
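The behaviour of \texttt{callcc} and $\ensuremath{\mathcal{C}}$ sketched above can also be checked against their standard continuation-passing-style reading. The following sketch uses Python purely for illustration; \texttt{ret}, \texttt{bind}, \texttt{run}, \texttt{callcc} and \texttt{C} are hypothetical helper names, not part of the calculus:

```python
# Minimal CPS sketch: a computation is a function expecting a continuation k.
def ret(v):            # lift a value into CPS
    return lambda k: k(v)

def bind(m, f):        # sequencing: run m, feed its value to f
    return lambda k: m(lambda v: f(v)(k))

def run(m):            # run with the identity (top-level) continuation
    return m(lambda v: v)

def callcc(f):
    # callcc captures the current continuation k; the reified continuation
    # ignores the continuation of its call site, i.e. it "jumps" back to k.
    return lambda k: f(lambda v: lambda _ignored: k(v))(k)

def C(f):
    # Felleisen's C also captures k, but runs f with the (here: identity)
    # top-level continuation, so the surrounding context is discarded.
    return lambda k: f(lambda v: lambda _ignored: k(v))(lambda v: v)

# (kappa)(lambda k.e) -->* e when k is not free in e:
assert run(callcc(lambda k: ret(5))) == 5

# Invoking the captured continuation escapes the rest of the computation:
m = callcc(lambda k: bind(k(1), lambda _: ret(99)))
assert run(bind(m, lambda x: ret(x + 1))) == 2   # 99 is never reached

# C without invoking k aborts the pending context (the "+ 1" is dropped):
assert run(bind(C(lambda k: ret(7)), lambda x: ret(x + 1))) == 7
```

The second assertion mirrors the escaping behaviour of $\kappa$, while the last one mirrors the abortive behaviour of $\aleph$.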
It is not possible to type $\mathcal{C}$ in Parigot's $\lambda\mu$, unless an open term is used. Alternatively, a free continuation constant must be used (obtaining yet another calculus \cite{ariola2003minimal}). This is one of the reasons why we picked the version of $\lambda\mu$-calculus proposed by de Groote over other calculi. See \cite{de1994relation} for a discussion about $\lambda\mu$-and-$\lambda$-calculi and Felleisen's $\mathcal{C}$. \section{Abstract Machines}\label{sec:absmac} Theorem~\ref{theo:polystepterms}, the main result of this paper so far, tells us that the number of \emph{head-reduction steps} performed by terms typable in $\mathsf{BLLP}_{\lambda\mu}$ is bounded by the weight of the underlying type derivation. One may wonder, however, whether taking the number of reduction steps as a measure of term complexity is sensible or not --- substitutions involve arguments which can possibly be much bigger than the original term. Recent work by Accattoli and the first author~\cite{accattoli2012invariance}, however, shows that in the case of $\lambda$-calculus endowed with head reduction, the unitary cost model is polynomially invariant with respect to Turing machines. We conjecture that those invariance results can be extended to the $\lambda\mu$-calculus. \shortv{It can be shown that $\mathsf{BLLP}_{\lambda\mu}$ is polystep sound for another cost model, namely the one induced by de Groote's $\mathsf{K}$, an abstract machine for the $\lambda\mu$-calculus. This is done following a similar proof for $\mathsf{PCF}$ typed with linear dependent types~\cite{DalLago2011} and Krivine's Abstract Machine (of which $\mathsf{K}$ is a natural extension). The main idea consists in extending $\mathsf{BLLP}$ to a type system for $\mathsf{K}$'s configurations, this way defining a \emph{weight} for each of them in the form of a resource polynomial. The weight, as expected, can then be shown to decrease at each $\mathsf{K}$'s computation step. 
It is worth noting that the weight defined this way is fundamentally different from the one from Section~\ref{subsec:soundness}. See \cite{EV} for some more details.} \longv{In this Section, we show that $\mathsf{BLLP}_{\lambda\mu}$ is polystep sound for another cost model, namely the one induced by de Groote's $\mathsf{K}$, an abstract machine for the $\lambda\mu$-calculus. This will be done following a similar proof for $\mathsf{PCF}$ typed with linear dependent types~\cite{DalLago2011} and Krivine's Abstract Machine (of which $\mathsf{K}$ is a natural extension). } \longv{ Configurations of $\mathsf{K}$ are built around environments, closures and stacks, which are defined mutually recursively as follows: \begin{varitemize} \item \emph{Environments} are partial functions which make $\lambda$-variables correspond to closures and $\mu$-variables correspond to stacks; metavariables for environments are $\mathscr{E},\mathscr{F}$, etc.; \item \emph{Closures} are pairs whose first component is a $\lambda\mu$-term and whose second component is an environment; metavariables for closures are $\mathcal{C},\mathcal{D}$, etc. \item \emph{Stacks} are just finite sequences of closures; metavariables for stacks are $\mathscr{S},\mathscr{T}$, etc. \end{varitemize} Configurations are pairs whose first component is a closure and whose second component is a stack, and are indicated with $C,D$, etc. Reduction rules for configurations are in Figure~\ref{fig:kmachine}.
\begin{figure*} \begin{center} \fbox{ \begin{minipage}{.98\textwidth} \begin{align*} ((x,\mathscr{E}),\mathscr{S})&\hookrightarrow(\mathscr{E}(x),\mathscr{S});\\ ((\lambda x.t,\mathscr{E}),\mathcal{C}\cdot\mathscr{S}) &\hookrightarrow((t,\subst{\mathscr{E}}{x}{\mathcal{C}}),\mathscr{S});\\ ((tu,\mathscr{E}),\mathscr{S})&\hookrightarrow ((t,\mathscr{E}),(u,\mathscr{E})\cdot\mathscr{S});\\ ((\mu\alpha.t,\mathscr{E}),\mathscr{S})&\hookrightarrow ((t,\subst{\mathscr{E}}{\alpha}{\mathscr{S}}),\varepsilon);\\ (([\alpha]t,\mathscr{E}),\varepsilon)&\hookrightarrow ((t,\mathscr{E}),\mathscr{E}(\alpha)).\\ \end{align*} \end{minipage}} \end{center} \caption{$\mathsf{K}$-machine Transitions.}\label{fig:kmachine} \end{figure*} The $\mathsf{K}$-machine is sound and complete with respect to head reduction~\cite{de1998environment}, where, however, reduction can take place in the scope of $\mu$-abstractions, but not in the scope of $\lambda$-abstractions.\longv{\footnote{The authors are aware of the work in \cite{streicher1998classical}, in which a Krivine machine for $\lambda\mu$ is derived semantically rather than syntactically (independently of de Groote). In the same paper there is also a further extension of the machine which allows one to reduce under $\mu$- and even $\lambda$-abstractions. The paper is not essential for our purposes since the abstract machine of de Groote is enough to work with control operators. Still, even though there are some important differences with respect to our setting (the calculus considered is an untyped variant of Parigot's $\lambda\mu$), it might be worthwhile to investigate it in the future.}} Actually, $\mathsf{BLLP}_{\lambda\mu}$ can be turned into a type system for $\mathsf{K}$'s \emph{configurations}. \longv{ We closely follow Laurent~\cite{Laurent03note} here. } The next step is to assign a weight $q^\pi$ to every type derivation $\pi\;\triangleright\;C:\mathbf{N}$, similarly to what we have done in type derivations for \emph{terms}.
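As an aside, the five transitions of Figure~\ref{fig:kmachine} can be prototyped directly. The following sketch uses Python purely for illustration, with terms encoded as tagged tuples (an encoding not used in the paper); each branch implements one rule:

```python
# Terms: ('var',x) | ('lam',x,t) | ('app',t,u) | ('mu',a,t) | ('name',a,t)
# A closure is (term, env); a configuration is (closure, stack); stacks are
# tuples of closures; env maps lambda-variables to closures and
# mu-variables to stacks.

def step(conf):
    (term, env), stack = conf
    tag = term[0]
    if tag == 'var':                      # ((x,E),S) -> (E(x),S)
        return (env[term[1]], stack)
    if tag == 'lam' and stack:            # ((lx.t,E),C.S) -> ((t,E[x:=C]),S)
        _, x, body = term
        return ((body, {**env, x: stack[0]}), stack[1:])
    if tag == 'app':                      # ((tu,E),S) -> ((t,E),(u,E).S)
        _, t, u = term
        return ((t, env), ((u, env),) + stack)
    if tag == 'mu':                       # ((ma.t,E),S) -> ((t,E[a:=S]),eps)
        _, a, body = term
        return ((body, {**env, a: stack}), ())
    if tag == 'name' and not stack:       # (([a]t,E),eps) -> ((t,E),E(a))
        _, a, body = term
        return ((body, env), env[a])
    return None                           # no rule applies: final state

def evaluate(t, fuel=1000):
    conf = ((t, {}), ())
    for _ in range(fuel):
        nxt = step(conf)
        if nxt is None:
            return conf
        conf = nxt
    raise RuntimeError('out of fuel')

# (mu a.[a] lx.x) I evaluates like the identity applied to I: the mu rule
# saves the stack, the naming rule restores it.
I = ('lam', 'z', ('var', 'z'))
t = ('app', ('mu', 'a', ('name', 'a', ('lam', 'x', ('var', 'x')))), I)
(final_term, _), final_stack = evaluate(t)
assert final_term == I and final_stack == ()
```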
The idea then is to prove that the weight of (typable) configurations decreases at every transition step: \begin{lemma} If $C\toconf D$, then $q^C\resgt q^D$. \end{lemma} This allows us to generalize polystep soundness to $\mathsf{K}$: \begin{theorem}[Polystep Soundness for the $\mathsf{K}$]\label{theo:polystepmachine} Let $\pi\;\triangleright\;\tjc{C}{\mathbf{N}}$ and let $C\hookrightarrow^n D$. Then $n\leq q^{\pi}$. \end{theorem} Please observe how Theorem~\ref{theo:polystepmachine} holds in particular when $C$ is the initial configuration for a typable term $t$, i.e., $\pair{\pair{t}{\emptyset}}{\varepsilon}$. \section{Conclusions} In this paper\remove{, some evidence has been given on the fact} we have presented some evidence that the enrichment to intuitionistic linear logic provided by bounded linear logic is robust enough to be lifted to polarized linear logic and the $\lambda\mu$-calculus. This paves the way towards a complexity-sensitive type system, which on the one hand guarantees that typable terms can be reduced to their normal forms in a number of reduction steps which can be read from their type derivation, and on the other allows us to naturally type useful control operators. Many questions have been purposely left open here: in particular, the language of programs is the pure, constant-free $\lambda\mu$-calculus, whereas the structure of types is minimal, not allowing any form of polymorphism. We expect that endowing $\mathsf{BLLP}$ with second order quantification or $\mathsf{BLLP}_{\lambda\mu}$ with constants and recursion should not be particularly problematic, although laborious: the same extensions have already been considered in similar settings in the absence of control~\cite{GSS92TCS,DalLago2011}. Actually, a particularly interesting direction would be to turn $\mathsf{BLLP}_{\lambda\mu}$ into a type system for Ong and Stewart's $\mu\mathsf{PCF}$~\cite{OngS97POPL}, this way extending the linear dependent paradigm to a language with control.
This is of course outside the scope of this paper, whose purpose was only to delineate the basic ingredients of the logic and the underlying type system. As we stressed in the introduction, we are convinced that this work is the first to give a time complexity analysis methodology for a programming language with higher-order functions \emph{and control}.\longv{\footnote{Tatsuta has investigated the maximum length of $\mu$-reduction for a language \emph{without $\lambda$-abstractions} (RTA 2007).}} One could of course object that complexity analysis of $\lambda\mu$-terms could be performed by translating them into equivalent $\lambda$-terms, e.g.\ by way of a suitable CPS-transform~\cite{DBLP:conf/caap/Groote94}. This, however, would force the programmer (or whoever performs the complexity analysis) to deal with programs which are structurally different from the original one. And of course, translations could introduce inefficiencies, which are maybe harmless from a purely qualitative viewpoint, but which could make a difference for complexity analysis.\remove{\longv{\footnote{CPS translations in general complicate types, e.g., the CPS translation in \cite{DBLP:conf/caap/Groote94} corresponds to the negative translation. Thus existing techniques for complexity analysis on $\lambda$ calculus are harder to apply.}}} \longv{\bibliographystyle{plain}} \shortv{\bibliographystyle{abbrv}}
\section{Introduction} In 1969, Vitaly Efimov, following a work by Thomas (1935) \cite{0Thomas}, first predicted a puzzling quantum-mechanical effect, in which a resonant pairwise interaction gives rise to an infinite number of three-body loosely bound states even though each particle pair is unable to bind \cite{1Efimov:1970zz,8efimov:nature09}. These properties are universal and independent of the details of the short-range interaction when the two-body scattering length `a' is much larger than the range of the interaction potential `r$_0$'. The existence of resonant two-body forces is the basic condition for the Efimov effect. Although there has been an extensive search in many different physical systems including atoms, molecules and nuclei, the experimental confirmation of the existence of Efimov states has proved to be challenging, especially for nuclei \cite{0Thomas,1Efimov:1970zz,8efimov:nature09, 2Braaten2015, 3Tumino:2015jaa, 4Kraemer2006, 5Zaccanti2009, 6Huang2014, 7Gattobigio2014}. Recently, Tumino {\it et al.} reported the discovery of triple-alpha resonances, very close to the Efimov scenario, by studying $^{6}$Li+$^{6}$Li$\rightarrow$3$\alpha$ reactions at low beam energy and using the hyperspherical formalism. A geometrical interpretation of these mechanisms \cite{HuaarXiv2018} suggests that the Thomas state corresponds to three equal energies, while a sequential decay mechanism ($^{12}$C$\rightarrow$$^{8}$Be+$\alpha$$\rightarrow$3$\alpha$) might correspond to Efimov states \cite{1Efimov:1970zz}. This prescription refers mainly to $^{12}$C levels in the vicinity of the breakup threshold of three $\alpha$-particles or $\alpha$+$^{8}$Be, taking into account the Coulomb force among $\alpha$ particles, which destroys the 1/R$^2$ (R is the hyperradius) scaling at large distances, where the Coulomb force is dominant \cite{1Efimov:1970zz}.
This is surely relevant to stellar processes, where the $^{12}$C nucleus is formed; this might occur inside a dense star or on its surface, thus in different density and temperature conditions. A way to simulate some stellar conditions is to collide two heavy ions at beam energies near the Fermi energy. In this work, we present a possible signature of Efimov (Thomas) states at the reconstructed 7.458 MeV excitation energy of $^{12}$C from the reactions $^{70(64)}$Zn($^{64}$Ni) + $^{70(64)}$Zn($^{64}$Ni) at a beam energy of E/A=35 MeV/nucleon \cite{SuyalatuarXiv2018}. \section{Experiment} The experiment was performed at the Cyclotron Institute, Texas A$\&$M University. $^{64}$Zn, $^{70}$Zn, and $^{64}$Ni beams at 35 MeV/nucleon from the K-500 superconducting cyclotron were used to irradiate self-supporting $^{64}$Zn, $^{70}$Zn, and $^{64}$Ni targets, respectively. The 4$\pi$ NIMROD-ISiS setup \cite{9Wuenschel2009, 10Schmitt1995} was used to collect charged particles and free neutrons produced in the reactions. A detailed description of the experiment can be found in Refs. \cite{11Kholey:2010phd, 12Kohley2011, 13Kohley2012}. \begin{center} \includegraphics[scale=0.35]{Figure1_CoMD_densityVStime.eps} \figcaption{\label{fig1_density} The time evolution of the average density in the central region in $^{70}$Zn + $^{70}$Zn at 35 MeV/nucleon. } \end{center} When two heavy ions at 35 MeV/nucleon collide, the excitation energy deposited in the system is large enough for the system to get gently compressed at the beginning; it then expands and enters an instability region, the spinodal region, similarly to the liquid-gas (LG) phase transition \cite{14Giuliani2014, 15Bonasera:1994pr, mayg:1999prl,mayg:2005prc,mayg:2018ppnp}. Fig.
\ref{fig1_density} shows the time evolution of the average density in the central region $[-3,3]^{3} fm^{3}$ at an incident energy of 35 MeV/nucleon in collisions of $^{70}$Zn + $^{70}$Zn with the Constrained Molecular Dynamics approach (CoMD) \cite{14Giuliani2014}. The average density increases in the compression stage and decreases in the expansion stage. The maximum average density is reached around 60 fm/c when the initial distance between projectile and target nuclei is set to 15 fm. In such conditions, fragments of different sizes are formed and can later be detected. The NIMROD detector used in this experiment can distinguish charge numbers from 1 to 30 and masses up to 50 \cite{11Kholey:2010phd}. A typical result is plotted in Fig. \ref{fig2_yield} \cite{11Kholey:2010phd} together with the CoMD results \cite{14Giuliani2014}, showing a satisfactory agreement with the data. In order to test whether some fragments are formed in excited states, an evaporation model, Gemini \cite{14Giuliani2014,mayg:2002prc,mayg:2018nst,mayg:2006cpc,gemini:2018cpc}, is applied. The reaction was followed up to a maximum time t$_{max}$ in the CoMD model. Within the same model, the excitation energy of each fragment formed at t$_{max}$ is obtained and fed into the Gemini model, which gives the final de-excited fragments. As can be seen from the figure, the effect of secondary evaporation is negligible after t$_{max} >$600 fm/c. The abundance of $^{12}$C fragments is about two orders of magnitude less than those of protons and $\alpha$-particles. These ions survive the violence of the collision, while other $^{12}$C nuclei might be in one of the excited states and decay before reaching the detector, or collide with other fragments and get destroyed. Our technique is tailored to select the $^{12}$C$\rightarrow$3$\alpha$ decay channel among all the possibilities.
\begin{center} \includegraphics[scale=0.45]{Figure2_CoMD_yield.eps} \figcaption{\label{fig2_yield} (color online) The charge (Z) and mass (A) distributions from the $^{70}$Zn+$^{70}$Zn system are shown for the filtered CoMD simulation in comparison to the experimental data. The results have been normalized by the total number of events \cite{11Kholey:2010phd}. } \end{center} In the experiments, it is straightforward to select all the events where one or more $\alpha$ particles are detected. In Fig. \ref{fig3_mult}, we plot the $\alpha$ particle multiplicity distribution for the three colliding systems considered. The total number of events is $\sim 2.7 \times 10^8$ and we have observed events where at least 15 $\alpha$ particles are produced. In Refs. \cite{Schmidt:2016lpt, Mabiala:2016gpt, Marini:2015zwa}, an analysis was performed for events as in Fig. \ref{fig3_mult} in terms of Boson-Fermion mixtures, i.e. including all fragments as reported in Fig. \ref{fig2_yield}, which can also give a signature of Bose-Einstein Condensation (BEC) \cite{SchuckPRL2001, SchuckPRL2008}. Temperature, density and excitation energy are recovered using different approaches \cite{14Giuliani2014}, with most of the events in the high excitation energy region up to about 8 MeV/nucleon. We notice that most of the novel techniques discussed in this work might be easily generalized to cases where the $\alpha$ multiplicity is larger than 3. We plan to discuss this in future papers. More conventional analyses based on Dalitz plots \cite{Freer:1994zz,Raduta:2011yz,Kirsebom:2012zza,Manfredi:2012zz,Rana:2013hka,Itoh:2014mwa,DellAquila:2017ppe,Smith:2017jub} cannot be easily generalized when the $\alpha$-multiplicity is larger than 3. For the purpose of our work, we further selected all the events with only three $\alpha$ particles detected. It is important to stress that multiple $\alpha$ particles in an event are accepted only if they hit different detectors, i.e. all possible double hits in an event are excluded.
Furthermore, in the present analysis, a random position on the surface of the detector was assigned to each $\alpha$. This limits the precision of $\alpha$-$\alpha$ correlations, especially when their relative energies or momenta are very small. A critical comparison of different methods to assign the hitting position on the detector will be discussed in a future work; here it is sufficient to say that the results discussed are independent of the different methods. In these cases, the total number of events is reduced to $\sim 4.5 \times 10^7$. From the above discussion, it is clear that if only three $\alpha$s are in an event, other fragments must be present, and the sum of all the fragment masses is up to 140 (maximum) including the three $\alpha$s. This is a rich environment: depending on the proximity of different fragments to the $\alpha$, $^{8}$Be or $^{12}$C ions, the properties and shell structure of the fragments might be modified. In particular, short living states of $^{12}$C or $^{8}$Be might be modified by the presence of other nearby fragments. On the other hand, long living states such as the Hoyle state of $^{12}$C might not be influenced at all, since its lifetime is much longer than the reaction time. Of course, in such a `soup', $\alpha$-particles might come from the decay of $^{12}$C or $^{8}$Be and from different excited fragments, or be directly produced during the reaction; thus it is crucial to implement different methods to distinguish among different decay channels. \begin{center} \includegraphics[scale=0.3]{Figure3_alphaMult.eps} \figcaption{\label{fig3_mult} $\alpha$ multiplicity distribution for the $^{70(64)}$Zn($^{64}$Ni) + $^{70(64)}$Zn($^{64}$Ni) at 35 MeV/nucleon from the NIMROD detector. } \end{center} In order to distinguish different decay channels, the kinetic energy of the $\alpha$ particles must be measured with good precision.
The kinetic energy distribution from the NIMROD detector for the events with $\alpha$ multiplicity equal to three is given in Fig. \ref{fig4_Ek}. It extends above 100 MeV/nucleon and displays a large yield around 8 MeV/nucleon. Since the kinetic energies are relatively large, the detector is performing at its best, and the error estimate (including the instrumental error, especially due to the detector granularity and the detectors' energy, position, and angle resolution) gives errors of less than 1$\%$ of the kinetic energy value. The error becomes larger for smaller kinetic energies, and particles whose kinetic energy is below a threshold (about 1 MeV/nucleon) are not detected \cite{11Kholey:2010phd}. Thus it is a clear advantage to use a beam of heavy ions near or above the Fermi energy. Fragments are emitted in the laboratory frame with high kinetic energies (due to the center of mass motion) and can be carefully detected. When we reconstruct $^{8}$Be from $\alpha$-$\alpha$ correlations, the center of mass motion is cancelled out and small relative kinetic energies can be obtained, with an estimated error of about 45 keV for the smallest relative kinetic energies. This error is due to the detector granularity as discussed above. \begin{center} \includegraphics[scale=0.3]{Figure4_alphaKinetic.eps} \figcaption{\label{fig4_Ek} $\alpha$-kinetic energy distribution in the laboratory frame from all the events with $\alpha$-multiplicity equal to three. Inset: zoom of the lower energy region in this figure. } \end{center} \section{Method} For the three body system with equal masses, we can define the excitation energy E${^*}$ as: \begin{equation} E^* = \frac{2}{3}\sum^3_{i=1,j>i} E_{ij} - Q \label{Estar} \end{equation} where E$_{ij}$ is the relative kinetic energy of the two particles and Q is the Q-value.
Notice that the important ingredients entering Eq.(\ref{Estar}) are the relative kinetic energies; since we have three indistinguishable bosons, we analyze the E$_{ij}$ distribution by cataloging for each event the smallest relative kinetic energy, $E_{ij}^{Min.}$, the middle relative kinetic energy, $E_{ij}^{Mid.}$, and the largest relative kinetic energy, $E_{ij}^{Lar.}$. In this work, we reconstruct the E${^*}$=7.458 MeV state of $^{12}$C from 3$\alpha$s when the sum of the three E$_{ij}$ is 0.276 MeV (0.092$\times$3 MeV, i.e. 0.092 MeV is the relative energy of 2$\alpha$s corresponding to the ground state decay of $^{8}$Be \cite{HuaPLB2018, SuyalatuarXiv2018}), with the Q-value= -7.275 MeV. In Fig. \ref{fig5_Eijmin}, the minimum relative kinetic energy distribution is shown. In the top panel, the solid black circles give the distribution obtained from the real events. They show bumps but no real structures. This is due to the fact that in the fragmentation region, some $\alpha$s may come from the decay of $^{8}$Be or $^{12}$C, or from completely non-correlated processes, for example, the $\alpha$ emission from a heavy fragment. To distinguish correlated from non-correlated events, we randomly choose three different $\alpha$s from three different events and build the distribution displayed in Fig. \ref{fig5_Eijmin} (mixing events, red open circles). The total numbers of real and mixing events are each normalized to one. We fit the highest points of Fig. \ref{fig5_Eijmin} (top) with an exponential function. This allows us to derive the instrumental error $\Delta$E=1/22 MeV=0.045 MeV. By subtracting the fit function from the real events, we obtain the open squares in Fig. \ref{fig5_Eijmin} (top), which can be considered as the real events corrected by the detector acceptance. The ratio (1+R$_{3}$) of the real and mixing events is plotted in the bottom panel of Fig. \ref{fig5_Eijmin}, together with Breit-Wigner fits.
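The reconstruction just described can be checked numerically against Eq.~(\ref{Estar}). A minimal sketch (Python used purely for illustration; the 0.092 MeV $^{8}$Be ground-state relative energy and Q = -7.275 MeV are the values quoted above):

```python
Q = -7.275  # MeV, Q-value for 12C -> 3 alpha

def excitation_energy(e12, e13, e23):
    """Eq. (1): E* = (2/3) * (sum of pairwise relative kinetic energies) - Q."""
    return 2.0 / 3.0 * (e12 + e13 + e23) - Q

# Efimov (Thomas) candidate: all three relative energies at the
# 8Be ground-state value of 0.092 MeV.
e_thomas = excitation_energy(0.092, 0.092, 0.092)
assert abs(e_thomas - 7.459) < 0.005   # ~7.458 MeV level within rounding

# 1:2:3 configuration (0.092, 0.184, 0.276 MeV) lands near the Hoyle state.
e_hoyle = excitation_energy(0.092, 2 * 0.092, 3 * 0.092)
assert abs(e_hoyle - 7.643) < 0.005    # ~7.64 MeV, close to the 7.65 MeV state
```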
As one can see, the first peak, around 0.088 MeV (very close to 0.092 MeV) and with a width of 1192 fm/c, corresponds to the ground state of $^{8}$Be, and it somehow depends on the detector correction given by the exponential fit. The second peak, around 3.05 MeV and with a width of 14.2 fm/c (independent of the detector correction), corresponds to the first excited state of $^{8}$Be; higher energy peaks are also visible above 10 MeV. \begin{center} \includegraphics[scale=0.35]{Figure5_Eijmin.eps} \figcaption{\label{fig5_Eijmin} (color online) (Top) Relative kinetic energy distribution as a function of the minimum relative kinetic energy. The solid black circles represent data from real events, red open circles are from mixing events, and the blue open squares represent the difference between the real events and the exponential fit function (solid line), which takes into account the experimental error. (Bottom) The real data (pink open triangles) and the real data minus the fit function (green solid squares), each divided by the mixing events, as a function of the minimum relative kinetic energy. The solid lines are Breit-Wigner fits. } \end{center} In order to determine whether we have events with equal relative kinetic energies, we have selected 3$\alpha$ events with $E_{ij}^{Min.} = E_{ij}^{Mid.} = 0.092\pm\frac{\delta E}{3}$ MeV and decreased the value of $\delta E$ to the smallest value allowed by statistics. In Fig. \ref{fig6_ESEk}, we plot the results for the real (solid black circles) and the mixing (red open circles) events in the upper panels, and their ratio (1+R$_3$) in the bottom panels.
Even though the number of real events decreases to about 90 in the $\delta$E=0.06 MeV case, we can see a hint of a signal around ($E_{ij}^{Lar.} + E_{ij}^{Mid.} + E_{ij}^{Min.} )\times \frac{2}{3}$ - Q $\leq$ 7.47 MeV, which is consistent with the suggested Efimov (Thomas) state \cite{HuaPLB2018,SuyalatuarXiv2018,HuaarXiv2018} at an excitation energy of $^{12}$C of about 7.458 MeV. Similarly to Fig. \ref{fig6_ESEk}, we have selected 3$\alpha$ events with $E_{ij}^{Min.} = 0.092\pm\frac{\delta E}{3}$ MeV, $E_{ij}^{Mid.} = 0.092\times2\pm\frac{\delta E}{3}$ MeV in Fig. \ref{fig7_123Ek}. We can also observe events where the largest relative energy is three times the minimum one, around ($E_{ij}^{Lar.} + E_{ij}^{Mid.} + E_{ij}^{Min.} )\times \frac{2}{3}$ - Q $=$ 7.64 MeV, for different values of $\delta E$. This suggests that there are events where the 3$\alpha$ relative energies are in the ratio 1:2:3. In Figs. \ref{fig6_ESEk} and \ref{fig7_123Ek}, we can see significant signals around E$^{*}$=7.65 MeV, which is consistent with the famous 0$^{+}$ Hoyle state of $^{12}$C predicted by Fred Hoyle in 1953 \cite{FHoyle1954}. \begin{center} \includegraphics[scale=0.45]{Figure6_ESTotalEk.eps} \figcaption{\label{fig6_ESEk} (color online) The reconstructed excitation energy distributions of $^{12}$C from 3$\alpha$s with $E_{ij}^{Min.} = E_{ij}^{Mid.} = 0.092\pm\frac{\delta E}{3}$ MeV. The solid black circles are from the real events, red open circles are the mixing events, and pink open triangles indicate the ratios of the real events to the mixing events.} \end{center} \begin{center} \includegraphics[scale=0.45]{Figure7_123Ek.eps} \figcaption{\label{fig7_123Ek} (color online) The reconstructed excitation energy distributions of $^{12}$C from 3$\alpha$s with $E_{ij}^{Min.} = 0.092\pm\frac{\delta E}{3}$ MeV, $E_{ij}^{Mid.} = 0.092\times2\pm\frac{\delta E}{3}$ MeV.
The solid black circles denote the real events, red open circles the mixing events, and pink open triangles indicate the ratios of the real events to the mixing events. } \end{center} \section{Summary} We have discussed the Efimov (Thomas) states in excited $^{12}$C nuclei in the reactions $^{70(64)}$Zn($^{64}$Ni) + $^{70(64)}$Zn($^{64}$Ni) at a beam energy of E/A = 35 MeV. In order to investigate $^{12}$C, we analyzed the events with $\alpha$ multiplicity equal to three. The excitation energies of $^{12}$C are reconstructed from the three $\alpha$ relative kinetic energies. The interaction between any two of the three $\alpha$ particles provides events with one, two or three $^{8}$Be interfering levels. Events with the three relative kinetic energies equal to the ground state energy of $^{8}$Be are found when decreasing the acceptance width. This might be a signature of the Efimov (Thomas) states at a $^{12}$C excitation energy of 7.458 MeV. Dedicated experiments with better experimental resolution are suggested in order to exclude any possible experimental effects in the data analysis. \end{multicols} \vspace{-1mm} \centerline{\rule{80mm}{0.1pt}} \vspace{2mm} \begin{multicols}{2}
\section{Introduction} \label{intro} Open charm baryon masses are presently the subject of much experimental and theoretical interest. In particular, most doubly charmed baryons (i.e.\ $ccq$ states) have not been experimentally observed even though as stable states in QCD they must exist. Indeed there has only been the observation of a candidate state, $\Xi_{cc}^+$ (with quark content $ccd$), by the SELEX Collaboration, \cite{selex02a}. However this state was not seen by the BaBar \cite{aubert06a} or BELLE \cite{chistov06a} Collaborations. Very recently, however, the LHCb Collaboration, \cite{LHCb17a}, has announced a state -- the $\Xi_{cc}^{++}$ baryon -- with quark content $ccu$. It is unlikely that isospin breaking effects are significant (i.e.\ QED effects and $m_u \not= m_d$), or that the states have been misidentified, so between the SELEX and LHCb results there remains an unexplained and puzzling mass difference of $\sim 100\,\mbox{MeV}$. In this talk we shall describe the QCDSF-UKQCD approach to determining the hadron mass spectrum, with particular emphasis on the charm sector and the open doubly charmed baryon masses. This continues the programme initialised in \cite{horsley13a}. $2+1$ flavour dynamical lattice simulations consist of two mass degenerate (i.e.\ $m_u = m_d$) light flavour $u$, $d$ quarks and a heavier flavour $s$ quark. Since the light quark masses are typically larger than the `physical' masses required for the experimental spectrum, we are forced to consider how we can approach the physical $u$, $d$, $s$ quark masses. While many current simulations determine the physical $s$ quark mass and then extrapolate the $u$, $d$ quark masses to the physical quark mass, another possibility, as suggested in \cite{bietenholz11a}, is to consider an SU(3) flavour breaking expansion from a point $m_0$ on the flavour symmetric line keeping the average quark mass $\overline{m} = (m_u+m_d+m_s)/3$ constant.
This procedure significantly reduces the number of expansion coefficients allowed. (Also, not considered here, the expansion coefficients remain the same whether we consider $m_u \not= m_d$ or $m_u = m_d$, which allows for the possibility of finding the pure QCD contribution to isospin breaking effects using just $n_f = 2+1$ numerical simulations.) As the charm quark is considerably heavier than the up, down and strange quarks, an SU(4) flavour breaking expansion is poorly convergent (in distinction to the SU(3) flavour breaking expansion). Another possibility is to make independent SU(3) flavour breaking expansions in each charm sector (but this is not a very unified approach). We adopt an intermediate approach here, first noting that as the charm quark mass is much heavier than the $u$, $d$ and $s$ quark masses, it contributes little to the dynamics of the sea of the hadron. Thus we can regard the charm quark as a `Partially Quenched' or PQ quark. By this we mean that the quarks making up the meson or baryon do not necessarily have the same mass as the sea quarks. The SU(3) flavour breaking expansion can also be extended to valence quark masses. As the expansion coefficients are just functions of $\overline{m}$, provided this is kept constant the coefficients are unchanged. We shall speak of the `unitary limit' when the masses of the valence quarks coincide with those of the sea quarks. PQ determinations have the advantage of not being expensive compared to dynamical simulations. This can also help in the determination of the expansion coefficients, as a wider range of quark masses than just the unitary masses can be used. Briefly, the method employed here is to first determine the expansion coefficients of the pseudoscalar mesons and octet baryons. Extrapolating to the physical pseudoscalar masses determines the `physical' quark masses, which are then employed in the baryon expansions to determine the (open) charm masses.
\section{SU(3) flavour breaking expansions} We shall only consider here hadrons which lie on the outer ring of their associated multiplet and not the central hadrons, so no mixing or quark--line disconnected correlation functions are considered here. The SU(3) flavour symmetry breaking expansions are given in terms of \begin{eqnarray} \delta \mu_q = \mu_q - \overline{m} \,, \quad \overline{m} = \third(2m_l + m_s) \,, \nonumber \end{eqnarray} where $\mu_q$ are the PQ masses for quark $q$. In the unitary limit we have $\mu_q \to m_q$, where $m_q$ is the unitary quark mass (i.e.\ equal to the sea quark mass). Here we have the obvious constraint $2\delta m_l + \delta m_s = 0$. For the pseudoscalar mesons with valence quarks $q = a$ and $b$, their masses are given to NLO (or quadratic in $\delta\mu_q$) by \begin{eqnarray} M^2(a\overline{b}) = M^2_{0\pi} + \alpha(\delta\mu_a + \delta\mu_b) + \beta_0\sixth(2\delta m_l^2 + \delta m_s^2) + \beta_1(\delta\mu_a^2 + \delta\mu_b^2) + \beta_2(\delta\mu_a - \delta\mu_b)^2 + \ldots \,, \nonumber \label{M2ps_expan} \end{eqnarray} (cubic or NNLO terms are given in \cite{horsley12a}). The expansion coefficients are functions of $\overline{m}$ only, so if we keep $\overline{m}$ fixed we have constrained fits between the pseudoscalar mesons. Numerically it is advantageous to use scale invariant quantities. Useful additional quantities with this method are flavour singlet or flavour blind quantities, which are only defined in the unitary limit.
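The structure of this expansion is easy to check numerically. The following sketch (with invented coefficient values, not fitted lattice data) evaluates the NLO formula and verifies that the $O(\delta m_l)$ terms cancel in the flavour-singlet combination $\third(2M_K^2+M_\pi^2)$ along the $\overline{m}=\mbox{const}$ line.

```python
# NLO SU(3) flavour-breaking expansion for M^2(a b-bar); the coefficients
# below are invented for demonstration only.

def m2_ps(dmu_a, dmu_b, dml, dms, M2_0, alpha, beta0, beta1, beta2):
    """M^2(a b-bar) to NLO about the flavour-symmetric point,
    with delta mu_q = mu_q - mbar."""
    return (M2_0
            + alpha * (dmu_a + dmu_b)
            + beta0 * (2.0 * dml**2 + dms**2) / 6.0
            + beta1 * (dmu_a**2 + dmu_b**2)
            + beta2 * (dmu_a - dmu_b)**2)

if __name__ == "__main__":
    # Unitary point with mbar held fixed: 2*dml + dms = 0.
    M2_0, alpha, beta0, beta1, beta2 = 0.1, 2.0, 0.3, 0.4, 0.5
    dml = 0.01
    dms = -2.0 * dml
    mpi2 = m2_ps(dml, dml, dml, dms, M2_0, alpha, beta0, beta1, beta2)
    mk2 = m2_ps(dml, dms, dml, dms, M2_0, alpha, beta0, beta1, beta2)
    # Flavour singlet (2 M_K^2 + M_pi^2)/3: the linear terms cancel,
    # leaving M2_0 up to O(dml^2), as stated in the text.
    print((2.0 * mk2 + mpi2) / 3.0 - M2_0)
```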
There are many possibilities; for example, for pseudoscalar meson quantities a convenient one is \begin{eqnarray} X_\pi^2 = \third(2M_K^2 + M_\pi^2) = M_{0\pi}^2 + O(\delta m_l^2) \,. \nonumber \end{eqnarray} Now forming dimensionless ratios, $\tilde{M}^2 = M^2/X_\pi^2$, $\tilde{\alpha} = \alpha/M_{0\pi}^2$, $\tilde{\beta}_i = \beta_i/M_{0\pi}^2 \,, \ldots$ leads to the modified expansion \begin{eqnarray} \tilde{M}^2(a\overline{b}) = 1 + \tilde{\alpha}(\delta\mu_a + \delta\mu_b) - (\twothird\tilde{\beta}_1 + \tilde{\beta}_2) (2\delta m_l^2 + \delta m_s^2) + \tilde{\beta}_1(\delta\mu_a^2 + \delta\mu_b^2) + \tilde{\beta}_2(\delta\mu_a - \delta\mu_b)^2 + \ldots \,. \nonumber \label{Mps2twid_expan} \end{eqnarray} Similarly for the outer ring of the baryon octet, we have the expansion \begin{eqnarray} \tilde{M}_{\Sigma}^2(aab) &=& 1 + \tilde{A}_1(2\delta\mu_a + \delta\mu_b) + \tilde{A}_2(\delta\mu_b - \delta\mu_a) \nonumber \\ & & - (\tilde{B}_1 + \tilde{B}_3) (2\delta m_l^2+\delta m_s^2) + \tilde{B}_1(2\delta\mu_a^2 + \delta\mu_b^2) + \tilde{B}_2(\delta\mu_b^2 - \delta\mu_a^2) + \tilde{B}_3(\delta\mu_b - \delta\mu_a)^2 \,, \nonumber \label{M2N_expan} \end{eqnarray} where we have collectively denoted these baryons with a $\Sigma$ index. Similarly to the meson case, we have formed dimensionless quantities by normalising with the singlet quantity \begin{eqnarray} X_N^2 = \third( M_N^2 + M_\Sigma^2 + M_\Xi^2 ) = M_{0N}^2 + O(\delta m_l^2) \,. \nonumber \label{XN_def} \end{eqnarray} The expansion is given here to NLO (quadratic in $\delta\mu_q$); the expansion at NNLO (the next order) is given, for example, in Appendix C of \cite{horsley14a}. NNLO is used in the analysis presented here.
Provided $m_u = m_d$, for the $\Lambda$ we have \begin{eqnarray} \tilde{M}^2_{\Lambda}(aa^\prime b) &=& 1 + \tilde{A}_1(2\delta\mu_a + \delta\mu_b) - \tilde{A}_2(\delta\mu_b - \delta\mu_a) \nonumber \\ & & - (\tilde{B}_1 + \tilde{B}_3) (2\delta m_l^2+\delta m_s^2) + \tilde{B}_1(2\delta\mu_a^2 + \delta\mu_b^2) - \tilde{B}_2(\delta\mu_b^2 - \delta\mu_a^2) + \tilde{B}_4(\delta\mu_b - \delta\mu_a)^2 \,, \nonumber \label{M2L_expan} \end{eqnarray} (the NNLO terms are also given in \cite{horsley14a}). We shall use a prime, such as $a^\prime$, to denote a distinct quark, but with the same mass as quark $a$. (It turns out that the fits are slightly better if the square of the mass is used rather than just the mass. The SU(3) flavour breaking expansions are valid for any function of the mass.) Finally note that for open charm masses, investigating mass splittings can give information on SU(3) mass splittings, as to LO there is no influence from the charm quark mass. For example for the pseudoscalar mesons \begin{eqnarray} \tilde{M}(a\overline{c}) - \tilde{M}(b\overline{c}) = \half \tilde{\alpha}(\delta\mu_a - \delta\mu_b) + \ldots \,, \nonumber \end{eqnarray} while for the octet baryons we have \begin{eqnarray} \tilde{M}(aac) - \tilde{M}(bbc) &=& (\tilde{A}_1 - \half\tilde{A}_2) (\delta\mu_a-\delta\mu_b) + \ldots \,, \nonumber \\ \tilde{M}(cca) - \tilde{M}(ccb) &=& \half(\tilde{A}_1+\tilde{A}_2)(\delta\mu_a-\delta\mu_b) + \ldots \,. \nonumber \end{eqnarray} \section{Lattice} We use an $O(a)$ non-perturbatively improved clover fermion action, together with a Symanzik tree level improved glue. The fermions are also mildly stout smeared. Further details may be found in \cite{cundy09a}.
Thus the quark mass is given by \begin{eqnarray} \mu_q = {1 \over 2} \left( {1\over \kappa_q} - {1\over \kappa_{0c}} \right) \,, \nonumber \end{eqnarray} where $\kappa_q$ is the hopping parameter and $\kappa_0$ is the hopping parameter along the symmetric line, with $\kappa_{0c}$ being its chiral limit. Note that in $\delta\mu_q$, the `distance' from the initial SU(3) flavour symmetric point, $\kappa_{0c}$ cancels and so does not have to be determined. We have presently analysed four lattice spacings with $a \sim 0.052$ -- $0.074\,\mbox{fm}$, but are aiming for five spacings. In the LH panel of Fig.~\ref{X2_const} we show $X_S^{{\rm lat}\,2}$ for various \begin{figure}[!h] \begin{center} \begin{minipage}{0.45\textwidth} \vspace*{-0.25in} \begin{center} \includegraphics[width=6.25cm] {FIGS/b5p50_mps2oXpi2_a2X2_150608_MpiLgt4_lat15} \end{center} \end{minipage}\hspace*{0.05\textwidth} \begin{minipage}{0.45\textwidth} \begin{center} \includegraphics[width=6.25cm]{FIGS/beta_a2+RGline_170528} \end{center} \end{minipage} \end{center} \caption{Left panel: $(X^{\rm lat}_S)^2$ for $S = t_0$, $N$, $w_0$, $\rho$ and $\pi$ along the unitary line, from the symmetric point $\delta m_l = 0$ down to the physical point (vertical dashed line), together with constant fits (for $\beta = 5.50$, $\kappa_0 = 0.120900$). Right panel: $a^2(\beta)$ values normalised to $a^2(5.80)$ ($a^2 \sim 0.035\,\mbox{fm}^2$). The line shows the two-loop $\beta$-function.} \label{X2_const} \end{figure} singlet quantities, $S = t_0$, $N$, $w_0$, $\rho$, $\pi$. This can be used to determine both the lattice spacing and $\kappa_0$ -- the point on the SU(3) flavour symmetric line from which to start the path to the physical point, \cite{bornyakov15a}. For example, this could be achieved by tuning $X_\pi^{{\rm lat}\,2}/X_N^{{\rm lat}\,2}$ to its `physical' value, using wherever possible the `pure' QCD values in FLAG3, \cite{Aoki:2016frl}, otherwise from the PDG, \cite{PDG17a}.
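The cancellation of $\kappa_{0c}$ in $\delta\mu_q$ can be made explicit in a few lines; the hopping-parameter values below are invented for illustration.

```python
# Lattice quark mass from hopping parameters, and a check that the
# chiral-limit parameter kappa_0c drops out of delta mu_q = mu_q - mbar.

def quark_mass(kappa_q, kappa_0c):
    """mu_q = (1/kappa_q - 1/kappa_0c)/2."""
    return 0.5 * (1.0 / kappa_q - 1.0 / kappa_0c)

def delta_mu(kappa_q, sea_kappas, kappa_0c):
    """delta mu_q = mu_q - mbar, with mbar = (2 m_l + m_s)/3 obtained
    from the sea hopping parameters (kappa_l, kappa_l, kappa_s).
    The 1/kappa_0c terms cancel between mu_q and mbar."""
    mbar = sum(quark_mass(k, kappa_0c) for k in sea_kappas) / len(sea_kappas)
    return quark_mass(kappa_q, kappa_0c) - mbar
```

Evaluating `delta_mu` with two different values of `kappa_0c` gives the same result, which is the statement in the text that $\kappa_{0c}$ does not have to be determined.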
In the RH panel we show $a^2$ values normalised by the value at $\beta = 5.80$, using the two-loop $\beta$-function. We first determine the pseudoscalar meson expansion coefficients. In the LH panel of Fig.~\ref{expan_coeff} we show \begin{figure}[h] \begin{center} \begin{minipage}{0.45\textwidth} \begin{center} \includegraphics[width=6.25cm] {FIGS/b5p80_dmua+dmubo2_mpsab2oXps2_161024_k0p122810_lat17} \end{center} \end{minipage}\hspace*{0.05\textwidth} \begin{minipage}{0.45\textwidth} \begin{center} \includegraphics[width=6.25cm] {FIGS/b5p80_2dmua+dmubo3_mNmLamaaboXN2_170530_NNLO_k0p122005_lat17} \end{center} \end{minipage} \end{center} \caption{Left panel: $\tilde{M}^2(a\overline{b}) = M^2(a\overline{b})/X_\pi^2$ versus $(\delta\mu_a+\delta \mu_b)/2$ for $(\beta,\kappa_0) = (5.80, 0.122810)$. The PQ masses are given by opaque red circles (the unitary data by filled red circles). Removing the higher order terms (leaving the LO term) gives the blue opaque circles and the linear fit line. Right panel: Similarly for $(\tilde{M}_{\Sigma}^2(aab) + \tilde{M}_\Lambda^2(aa^\prime b))/2$ versus $(2\delta\mu_a+\delta\mu_b)/3$.} \label{expan_coeff} \end{figure} $\tilde{M}^2(a\overline{b}) = M^2(a\overline{b})/X_\pi^2$ versus $(\delta\mu_a+\delta\mu_b)/2$ (for $(\beta,\kappa_0) = (5.80, 0.122810)$). To illustrate the fit (which at LO is a function of $(\delta\mu_a+\delta \mu_b)/2$ only), after fitting we subtract the higher order terms to leave only the LO term. Similarly in the RH panel of Fig.~\ref{expan_coeff} we show the equivalent result for the octet baryon case by plotting $(\tilde{M}_{\Sigma}^2(aab) + \tilde{M}_\Lambda^2(aa^\prime b))/2$ versus $(2\delta\mu_a+\delta\mu_b)/3$. To determine $\delta m_l^*$, the `physical' quark mass, we set the pseudoscalar meson combination $M_\pi^2/X_\pi^2$ to its physical value. Similarly we use $M_{\eta_c}^2/X_\pi^2$ to determine $\delta m_c^*$.
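Schematically, the determination of $\delta m_l^*$ amounts to solving the tilded NLO expansion for the quark mass at which $M_\pi^2/X_\pi^2$ takes its physical value. The sketch below does this by bisection, with invented coefficients and the $\overline{m}=\mbox{const}$ constraint $\delta m_s = -2\delta m_l$; it is an illustration of the procedure, not the analysis code.

```python
# Solve Mpi^2/Xpi^2 = target for delta m_l^* along the mbar = const line.

def mpi2_over_xpi2(dml, at, b1t, b2t):
    """Tilded NLO expansion for the pion (a = b = l), with dms = -2*dml."""
    dms = -2.0 * dml
    sq = 2.0 * dml**2 + dms**2           # singlet quadratic combination
    return (1.0 + at * 2.0 * dml
            - (2.0 / 3.0 * b1t + b2t) * sq
            + b1t * 2.0 * dml**2)        # the b2 term vanishes for a = b

def solve_dml(target, at, b1t, b2t, lo=-0.05, hi=0.0, tol=1e-12):
    """Bisection for delta m_l^* (negative, since Mpi^2 < Xpi^2)."""
    f = lambda x: mpi2_over_xpi2(x, at, b1t, b2t) - target
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

In the actual analysis the coefficients come from the constrained PQ fits described above, and an analogous condition on $M_{\eta_c}^2/X_\pi^2$ fixes $\delta m_c^*$.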
Although we have scanned in detail the appropriate region to determine the initial point $\kappa_0$ for the trajectory, some fine tuning is possible. Bearing in mind that changes will come mostly from the valence quark masses (rather than the sea quark masses), we do not consider exactly the unitary limit, but allow $\delta\mu_l^*$, $\delta\mu_s^*$ to be slightly different to $\delta m_l^*$, $\delta m_s^*$ by fitting $X_\pi^{pq\,2}/X^{pq\,2}_N$ and $M_\pi^{pq\,2}/X^{pq\,2}_\pi$ to their physical values, while keeping the splitting between $\delta \mu_l^*$, $\delta \mu_s^*$ the same as that between $\delta m_l^*$, $\delta m_s^*$. Presently we see little difference, \cite{QCDSF18a}, and shall not discuss this further here. \section{Results} Having determined the expansion coefficients and the `physical' quark masses, we can first determine the open charm pseudoscalar meson masses. All errors shown in the following plots are statistical (however we expect systematic errors, if any, to be small). In the LH panel of Fig.~\ref{D0_DD_pD_ps} we show preliminary results for \begin{figure}[!h] \begin{center} \begin{minipage}{0.45\textwidth} \begin{center} \includegraphics[width=6.25cm] {FIGS/a2_Ds+D_170611_inf_lat17} \end{center} \end{minipage}\hspace*{0.05\textwidth} \begin{minipage}{0.45\textwidth} \begin{center} \includegraphics[width=6.25cm]{FIGS/MM_c_pdg_170615_lat17} \end{center} \end{minipage} \end{center} \caption{Left panel: Preliminary $D_s(s\overline{c})$ and $D(l\overline{c})$ masses (upper and middle plot), together with their mass difference (lower plot), versus $a^2$, together with linear fits in $a^2$. The experimental results are shown with a star (slightly displaced in the negative $x$ direction for clarity). Right panel: Comparison of the extrapolated results to other recent results, \protect\cite{Davies:2010ip,Bali:2015lka,Cichy:2016bci,Cheung:2016bym}.
The opaque red circle for the $\eta_c$ mass just indicates that this mass has been used in the determination of the charm quark mass.} \label{D0_DD_pD_ps} \end{figure} the continuum extrapolation of the $D_s(s\overline{c})$ and $D(l\overline{c})$ masses and their mass difference (upper to lower plots). Also shown is a linear extrapolation in $a^2$ to the continuum limit. The phenomenological values are denoted by a star. The mass differences in particular are sensitive to the unknown $u$--$d$ quark mass difference and QED effects (the present computation is for pure QCD only). From the plots we see that there do not seem to be strong scaling violations present. The RH panel of Fig.~\ref{D0_DD_pD_ps} shows a comparison with some other determinations. Presently we have determined charm baryon states with nucleon-like wavefunctions ${\cal B} = \epsilon q(q^TC\gamma_5c)$ ($q = l$, $s$). In Table~\ref{charm_wfs} we show the possible states in the isospin \begin{table}[!htb] \begin{center} \begin{tabular}{ccrcc} $C$ & $S$ & $I$ & baryon & wavefunction \\ \hline 0 & 0 & $\half$ & $N(lll^\prime)$ & $\epsilon(l^TC\gamma_5l^\prime)l$ \\ 0 & 1 & $1$ & $\Sigma(lls)$ & $\epsilon(l^TC\gamma_5s)l$ \\ 0 & 2 & $\half$ & $\Xi(ssl)$ & $\epsilon(s^TC\gamma_5l)s$ \\ 0 & 1 & $0$ & $\Lambda(ll^\prime s)$ & $\oosqrtsix\epsilon[2(l^TC\gamma_5l^\prime)s + (l^TC\gamma_5s)l^\prime - (l^{\prime\,T}C\gamma_5s)l]$ \\ \hline 1 & 0 & $1$ & $\Sigma_c(llc)$ & $\epsilon(l^TC\gamma_5c)l$ \\ 1 & 1 & $\half$ & $\Xi_c^{\prime}(lsc)$ & $\oosqrttwo\epsilon[(s^TC\gamma_5c)l + (l^TC\gamma_5c)s]$ \\ 1 & 2 & $0$ & $\Omega_c(ssc)$ & $\epsilon(s^TC\gamma_5c)s$ \\ 1 & 0 & $0$ & $\Lambda_c(ll^\prime c)$& $\oosqrtsix\epsilon[2(l^TC\gamma_5l^\prime)c + (l^TC\gamma_5c)l^\prime - (l^{\prime\,T}C\gamma_5c)l]$ \\ 1 & 1 & $\half$ & $\Xi_c(csl)$ & $\oosqrtsix\epsilon[2(s^TC\gamma_5l)c + (s^TC\gamma_5c)l - (l^TC\gamma_5c)s]$ \\ \hline 2 & 0 & $\half$ & $\Xi_{cc}(ccl)$ & $\epsilon(c^TC\gamma_5l)c$ \\ 2 & 1 & $0$ & $\Omega_{cc}(ccs)$&
$\epsilon(c^TC\gamma_5s)c$ \\ \end{tabular} \end{center} \caption{The possible $C=0$, $1$ and $2$ baryon octet states in the isospin symmetric limit, $m_u = m_d \equiv m_l$. A prime is used to denote a distinct quark, but with the same mass.} \label{charm_wfs} \end{table} symmetric limit, $m_u = m_d \equiv m_l$. (A prime denotes a distinct quark in the wavefunction, but with the same mass.) Thus in the charm sector, we have presently investigated: $\Sigma_c(llc)$, $\Omega_c(ssc)$, $\Xi_{cc}(ccl)$ and $\Omega_{cc}(ccs)$. In the left panel of Fig.~\ref{charm_baryon} we show the $C=1$ open charmed baryons \begin{figure}[h] \begin{center} \begin{minipage}{0.45\textwidth} \begin{center} \includegraphics[width=6.25cm] {FIGS/spectrum_MB_c_a2_170611_inf_lat17} \end{center} \end{minipage}\hspace*{0.05\textwidth} \begin{minipage}{0.45\textwidth} \begin{center} \includegraphics[width=6.25cm] {FIGS/spectrum_MB_cc_a2_170611_inf_lat17+LHCb} \end{center} \end{minipage} \end{center} \caption{Left panel: Preliminary $\Omega_c(ssc)$, $\Sigma_c(llc)$ and mass difference results versus $a^2$ (upper to lower plots), together with linear extrapolations to the continuum limit. Right panel: similarly for the $C=2$ open charm states $\Xi_{cc}(ccl)$, $\Omega_{cc}(ccs)$. The LHCb result, \protect\cite{LHCb17a}, is also shown in the centre plot.} \label{charm_baryon} \end{figure} $\Omega_c(ssc)$, $\Sigma_c(llc)$ masses, together with their mass splitting. For the single open charm states there is reasonable agreement with the experimental results. Again, as for the open charm pseudoscalar masses, the scaling violations seem to be moderate. In the right panel of Fig.~\ref{charm_baryon} we show the $C=2$ open charmed baryons. We examine the result for the mass of the $\Xi_{cc}$ in greater detail in Fig.~\ref{large_scale}.
Our tentative conclusion \begin{figure}[!h] \begin{center} \includegraphics[width=13.00cm] {FIGS/spectrum_Xicc_a2_170611_inf_lat17+SELEX+LHCb} \end{center} \caption{An enlarged plot of the $\Xi_{cc}$ mass from Fig.~\protect\ref{charm_baryon}. Also indicated are the LHCb result, \protect\cite{LHCb17a}, and the SELEX result, \cite{selex02a}.} \label{large_scale} \end{figure} is that our results can differentiate between LHCb and SELEX and tend to support the LHCb result. (As mentioned before, we assume that isospin breaking effects are small.) This is also in agreement with other recent results, such as \cite{Namekawa:2013vu,Alexandrou:2014sha,Bali:2015lka,Alexandrou:2017xwd}. \section{Conclusions} For the $u$, $d$, $s$ quarks, we have developed a method to approach the physical point on a path starting from a point on the SU(3) flavour symmetric line. We have developed precise SU(3) flavour symmetry breaking expansions -- nothing is ad-hoc. The expansions have been extended to the PQ case (i.e.\ when the valence quark masses are not equal to the sea quark masses). This enables a better determination of the expansion coefficients and also allows the expansion to be applied in the region of the $c$ quark mass. We have data for four lattice spacings and have applied the method to determine some open charm state masses, in particular the mass of the recently discovered $C=2$ open charm state $\Xi_{cc}^{++}$. The preliminary results are displayed in Fig.~\ref{large_scale}. In the future we plan to extend the results to the other states shown in Table~\ref{charm_wfs}, increasing the number of lattice spacings to five. Furthermore, the expansions are also valid for $m_u \not= m_d$, so with the determined coefficients it may be possible to investigate pure QCD isospin breaking effects, in particular to generalise to include mixing, \cite{horsley14a}, for example $\Sigma_c^+$ - $\Lambda_c^+$ and $\Xi_c^0$ - $\Xi_c^{\prime 0}$ mixing. Further possibilities include an investigation of the charmed baryon decuplet and QED effects.
For the latter we are in the process of investigating using our recently generated ensemble of fully dynamical QCD+QED configurations \cite{Horsley:2015eaa,Horsley:2015vla}. \section*{Acknowledgements} The numerical configuration generation (using the BQCD lattice QCD program \cite{nakamura10a}) and data analysis (using the Chroma software library \cite{edwards04a}) was carried out on the IBM BlueGene/Qs using DIRAC 2 resources (EPCC, Edinburgh, UK), and at NIC (J\"ulich, Germany) and the SGI ICE 8200 and Cray XC30 at HLRN (The North-German Supercomputer Alliance) and on the NCI National Facility in Canberra, Australia (supported by the Australian Commonwealth Government). HP was supported by DFG Grant No. SCHI 422/10-1 and GS was supported by DFG Grant No. SCHI 179/8-1. PELR was supported in part by the STFC under contract ST/G00062X/1 and JMZ was supported by the Australian Research Council Grant No. FT100100005 and DP140103067. We thank all funding agencies.
\section*{Acknowledgments}\label{sec:acknowledgements} This work was funded by grants from the German Government, Federal Ministry of Education and Research for the project D-Werft (03WKCJ4D). {\footnotesize \bibliographystyle{splncs03} \enlargethispage{1.0cm} \section{Introduction} \label{sec:introduction} Many datasets on the Web of Data reflect data related to current events or ongoing activities. Thus, such datasets are dynamic and evolve over time \cite{kafer-t-2013-dyldo}. As a consequence, query results that have been obtained from a SPARQL endpoint may become outdated. Therefore, long-running applications that cache and repeatedly use query results have to resubmit the queries regularly to ensure up-to-dateness of the~results. There would be no need for such regular tests if SPARQL endpoints would provide information about dataset modifications. There exist manifold approaches for providing such information. Examples are cache validators for SPARQL requests~(using HTTP header fields such as \texttt{\footnotesize Last-Modified} or \texttt{\footnotesize ETag})~\cite{williams-g-2011-enabling} and publicly available dataset update logs~(as provided by DBpedia Live at {\footnotesize \url{http://live.dbpedia.org/changesets/}}). Unfortunately, existing SPARQL endpoints rarely support such approaches~\cite{kjernsmo-k-2015-survey}, nor is update information provided in any other form by the dataset providers. The information needed has to be generated by the datastore underlying the SPARQL endpoint or by dataset wrappers that exclusively control all the updates applied to the dataset, which is often not possible, e.g.\ in the case of popular RDB2RDF servers, as they typically work as one-way RDF exporters. Without information about dataset modifications and changes from dataset side, the only viable alternative is to re-execute the respective SPARQL queries and check whether the obtained results have changed. 
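A minimal sketch of this fallback (helper names are hypothetical, and the result rows are assumed to be parsed into variable-binding dictionaries, e.g.\ from the SPARQL JSON results format): cache a canonical fingerprint of the result set and compare it after re-execution. The fingerprint below is deliberately order-independent, which is appropriate for queries without ORDER BY.

```python
import hashlib
import json

def result_fingerprint(bindings):
    """Order-independent SHA-256 fingerprint of a SPARQL SELECT result
    set, given as a list of {variable: value} dictionaries."""
    canonical = sorted(json.dumps(row, sort_keys=True) for row in bindings)
    return hashlib.sha256("\n".join(canonical).encode("utf-8")).hexdigest()

def has_changed(cached_fingerprint, new_bindings):
    """True if a re-executed query produced a different result set."""
    return cached_fingerprint != result_fingerprint(new_bindings)
```

Storing only the fingerprint keeps the per-query state small, at the cost of having to transfer the full result set on every re-execution.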
This approach is feasible only if the number of such regular refresh queries is manageable. With an increasing number of applications adopting this approach, the SPARQL endpoint might become overloaded with the refresh queries. A~more scalable approach would be to use a middle-ware component at which the applications register their queries and get notified once the query results have changed. Then, this middle-ware is able to schedule the repeated execution of the refresh queries without risking an overload of~the~endpoint. A main use case of such a middle-ware is the sparqlPuSH approach to provide a notification service for data updates in RDF stores \cite{passant-a-2010-sparqlpush}. sparqlPuSH relies on SPARQL queries and tracks changes of the result sets, which are then published as an RSS feed and broadcast via the PubSubHubbub protocol \cite{fitzpatrick-b-2010-pubsubhubbub}. However, the existing implementation of sparqlPuSH is limited to the particular use case of micro-posts and circumvents the problem of detecting changes by expecting dataset updates to be performed via the sparqlPuSH interface~\cite{knuth-m-2015-towards}. To generalize the idea of sparqlPuSH, scheduling the re-eval\-u\-a\-tion of SPARQL queries has been identified as an unsolved research~problem~\cite{knuth-m-2015-towards}. In this paper, we study this problem of scheduling refresh queries for a large number of registered SPARQL queries; as an overload-avoiding constraint we assume an upper bound on the length of time slots during which sequences of refresh queries can be run. We investigate various scheduling strategies and compare them experimentally. For our experiments, we use a highly dynamic real-world dataset over a period of three months, in combination with a dedicated set of queries. The dataset~(DBpedia~Live) comprises all real-time changes in the Wikipedia that are relevant for DBpedia.
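The overload-avoiding constraint can be sketched as follows (query names and cost estimates are invented for illustration): within each time slot, registered queries are executed round-robin until the slot's time budget is exhausted, and unexecuted queries stay at the front of the queue for the next slot.

```python
from collections import deque

def schedule_slot(queue, est_cost, max_exec_time):
    """Execute queries round-robin within one time slot.

    `queue` is a deque of query ids; `est_cost[q]` is the estimated
    execution time of query q; `max_exec_time` is the slot's budget.
    Executed queries are moved to the back of the queue; the function
    returns the list of queries executed this slot."""
    executed, used = [], 0.0
    for _ in range(len(queue)):
        q = queue[0]
        if used + est_cost[q] > max_exec_time:
            break                 # budget exhausted; q waits for next slot
        queue.popleft()
        executed.append(q)
        used += est_cost[q]
        queue.append(q)           # re-register for future slots
    return executed
```

This corresponds to the round-robin baseline; the selective strategies discussed later additionally decide *which* queries enter the queue in a given slot.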
The main contributions of the paper are an empirical evaluation of a corpus of real-world SPARQL queries regarding result set changes on a dynamic dataset and an experimental evaluation of different query re-evaluation strategies. Our experiments show that the change history of query results is the main influencing factor, and scheduling strategies based on the extent of previously recognized changes (dynamics) and an adaptively allocated maximum lifetime for individual query results provide the best performance. The remainder of the paper is structured as follows: Sec.~\ref{sec:related} discusses related work. Sec.~\ref{sec:definitions} provides definitions and prerequisites. These are needed for Sec.~\ref{sec:strategies} which introduces the scheduling strategies used for the experiments. Sec.~\ref{sec:setup} describes the experimental setup, including the dataset and queryset that we used and the applied evaluation metrics. Sec.~\ref{sec:experiments} and Sec.~\ref{sec:conclusions} present the experimental results and discuss them, respectively. Sec.~\ref{sec:outlook} concludes the paper with an outlook on ongoing and future~work. \section{Related Work} \label{sec:related} A variety of existing applications is related to change detection of query results on dynamic RDF datasets, such as (external) query caching~\cite{martin-m-2010-improving}, partial dataset update~\cite{endris-k-2015-interest}, as well as notification services~\cite{passant-a-2010-sparqlpush}. However, even though Williams and Weaver show how the \texttt{\footnotesize Last-Modified} date can be computed with reasonable modifications to a state-of-the-art SPARQL processor~\cite{williams-g-2011-enabling}, working implementations are rare. In fact, Kjernsmo has shown in an empirical survey that only a minuscule fraction of public SPARQL endpoints actually support caching mechanisms on a per-query basis~\cite{kjernsmo-k-2015-survey}.
To overcome this lack of direct cache indicators, alternative approaches are required to recognize dataset updates. The most common approach is to redirect updates through a wrapper that records all changes~\cite{martin-m-2010-improving,passant-a-2010-sparqlpush}. However, this approach is not applicable to datasets published by someone else. If data publishers provide information on dataset updates, this information can be analyzed. For instance, Endris et al.\ introduce an approach to monitor the changesets of DBpedia~Live for relevant updates~\cite{endris-k-2015-interest}~(such a changeset is a log of removed and inserted triples). Tools for dataset update notification, such as \emph{DSNotify}~\cite{popitsch-n-2011-dsnotify} and \emph{Semantic Pingback}~\cite{tramp-s-2010-weaving}, are available but extremely rarely deployed. Further hints for possible changes may be obtained from metadata about datasets; for instance, the DCAT recommendation suggests using \ttlinline{dcterms:modified} or \ttlinline{dcterms:accrualPeriodicity} to describe update frequencies of a dataset.\footnote{\url{http://www.w3.org/TR/vocab-dcat/}} Since the aforementioned cache indicators and hints for change detection are missing almost entirely in practice, we rely on re-execution of queries. Obviously, such an approach causes overhead in terms of additional network traffic and server load. In order to reduce this overhead we investigate effective scheduling strategies in this paper. A similar investigation in the context of updates of Linked Data has been presented by Dividino~et~al.~\cite{dividino-r-2015-strategies}. The authors show that change-aware strategies are suitable to keep local \emph{data caches} up-to-date. We also evaluate a strategy adopted from Dividino~et~al.'s \emph{dynamicity} measure. We observe that, in our context, this strategy performs well for highly dynamic queries, but it is prone to starvation for less dynamic~queries.
Query result caches are also used for database systems, where the main use case is to enhance the scalability of backend databases for dynamic da\-ta\-base-driven websites. The most prominent system is \emph{Memcached}\footnote{\url{http://www.memcached.org/}} which supports the definition of an expiration time for individual cache entries, as well as local cache invalidation, e.\,g. when a client itself performs an update. Consequently, cache entries affected by updates from other sources cannot be invalidated. More sophisticated systems, such as the proxy-based query result cache \emph{Ferdinand} \cite{garrod-c-2008-queryresult}, use update notifications to invalidate local caches. To determine the queries that are affected by an update it is necessary to solve the query-update dependence problem~\cite{levy-a-1993-queries}. This process demands access to the dataset updates, which, as said, are not available in the general case for externally published~Linked~Datasets. \section{Experimental Results} \label{sec:experiments} We have conducted the experiment for three different values of $K_\mathsf{maxExecTime}$: 10~sec, 50~sec, and 1,000~sec. This variation of the upper bound on the execution time allows us to emulate different workloads: as we assume a fixed one-hour interval stepping with 10,000 queries, the workload can be scaled in terms of the number of queries and the time interval, respectively. In the following we present the results for each configuration. The metrics as introduced in Sec.~\ref{sec:metrics} are listed in tabular form. The two best and worst achieved results per metric are highlighted in shades of green and red, respectively. Table~\ref{tab:1000sec} shows the results for $K_\mathsf{maxExecTime}=$ 1,000~sec, which, for our query set, is equivalent to unlimited runtime; that is, all queries could be executed for every revision. Consequently, the theoretically optimal CV policy has no misses and no delay, and executes only relevant queries.
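One plausible reading of a selective time-to-live (TTL) policy of the kind compared below can be sketched as follows (the exact update rule may differ from the implementation evaluated here): a query's TTL doubles, up to a maximum, whenever its result is unchanged, and in the `reset' variant drops back to 1 when a change is observed.

```python
class TTLEntry:
    """Per-query time-to-live state, measured in dataset revisions."""

    def __init__(self, ttl_max, reset_on_change):
        self.ttl, self.countdown = 1, 1
        self.ttl_max = ttl_max
        self.reset_on_change = reset_on_change

    def tick(self):
        """Advance one revision; return True if the query is due."""
        self.countdown -= 1
        return self.countdown <= 0

    def record(self, changed):
        """Update the TTL after the query has been executed."""
        if changed and self.reset_on_change:
            self.ttl = 1                              # react quickly again
        elif not changed:
            self.ttl = min(2 * self.ttl, self.ttl_max)  # back off
        self.countdown = self.ttl
```

The cap `ttl_max` corresponds to the subscripts in the TTL$_{max=\ldots}$ configurations, and bounds how stale a rarely changing query's cached result can become.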
In contrast, while the non-selective scheduling policies (RR/SJF/LJF/CR/DJ) execute all queries and therefore detect all relevant changes, they execute a massive amount of irrelevant queries as overhead, resulting in a low effectiveness. The selective TTL policy reduces the number of query executions effectively, and more updates are detected when a query's time-to-live is reset after a detected change. The best performing configuration tested (TTL$_{{max}=32,reset}$) detects 81\,\% of all changes (12,311 of 15,256) while performing only 3.4\,\% of the query executions of the non-selective policies~(750,877 vs.\ 22,080,000). TTL$_{{max}=256}$ still detects 75\,\% (11,459) with only 0.75\,\% of the query executions (165,644). The reduced query overhead comes at the expense of more delay and in particular higher maximum delay times. \begin{table}[ht] \centering \small \caption{Config $K_\mathsf{maxExecTime}=1000sec$} \label{tab:1000sec} \begin{tabular}{l||r|r|r|r||r|r||r|r} & \begin{tabular}{@{}c@{}} total qe \end{tabular} & \begin{tabular}{@{}c@{}} irrelevant \end{tabular} & \begin{tabular}{@{}c@{}} relevant \end{tabular} & \begin{tabular}{@{}c@{}} eff.
(\%) \end{tabular} & \begin{tabular}{@{}c@{}} abs\\ delay \end{tabular} & \begin{tabular}{@{}c@{}} max\\ delay \end{tabular} & \begin{tabular}{@{}c@{}} abs\\ miss \end{tabular} & \begin{tabular}{@{}c@{}} max\\ miss \end{tabular} \\ \hline CV & 15,256 & 0 & 15,256 & 100 & 0 & 0 & 0 & 0 \\ \hline RR/SJF/LJF/CR/DJ & 22,080,000 &\cellcolor{red!25}} %{0.9 22,064,744 &\cellcolor{green!25}} %{0.9 15,256 &\cellcolor{red!25}} %{0.9 .07 &\cellcolor{green!25}} %{0.9 0 &\cellcolor{green!25}} %{0.9 0 &\cellcolor{green!25}} %{0.9 0 &\cellcolor{green!25}} %{0.9 0 \\ TTL$_{{max}=32}$ & 744,565 & 732,685 & 11,880 &\cellcolor{red!10}} %{0.9 1.60 & 26,866 &\cellcolor{green!10}} %{0.9 31 & 3,376 &\cellcolor{red!10}} %{0.9 19 \\ TTL$_{{max}=32,reset}$ & 750,877 &\cellcolor{red!10}} %{0.9 738,566 &\cellcolor{green!10}} %{0.9 12,311 & 1.64 &\cellcolor{green!10}} %{0.9 23,492 &\cellcolor{green!10}} %{0.9 31 &\cellcolor{green!10}} %{0.9 2,945 &\cellcolor{red!10}} %{0.9 19 \\ TTL$_{{max}=64}$ & 405,175 & 393,507 & 11,668 & 2.88 & 40,747 & 63 & 3,588 &\cellcolor{red!10}} %{0.9 19 \\ TTL$_{{max}=128}$ & 245,246 &\cellcolor{green!10}} %{0.9 233,683 &\cellcolor{red!10}} %{0.9 11,563 & 4.71 &\cellcolor{red!10}} %{0.9 61,639 &\cellcolor{red!10}} %{0.9 127 &\cellcolor{red!10}} %{0.9 3,693 &\cellcolor{red!10}} %{0.9 19 \\ TTL$_{{max}=128,reset}$& 252,714 & 240,550 & 12,164 &\cellcolor{green!10}} %{0.9 4.81 & 53,655 &\cellcolor{red!10}} %{0.9 127 & 3,092 &\cellcolor{red!10}} %{0.9 19 \\ TTL$_{{max}=256}$ & 165,644 &\cellcolor{green!25}} %{0.9 154,185 &\cellcolor{red!25}} %{0.9 11,459 &\cellcolor{green!25}} %{0.9 6.92 &\cellcolor{red!25}} %{0.9 86,202 &\cellcolor{red!25}} %{0.9 255 &\cellcolor{red!25}} %{0.9 3,797 &\cellcolor{red!10}} %{0.9 19 \end{tabular} \end{table} \iffalse config strategy revs q qe rel irrel delay max miss max maxRuntime=1000000,all ESClairvoyant 1771 352 15256 15256 0 0 0 0 0 maxRuntime=1000000,all ESTtl,initialTtl=1,maxTtl=32 2196 10000 744565 11880 732685 26866 31 3376 19 
maxRuntime=1000000,all ESTtl,initialTtl=1,maxTtl=64 2191 10000 405175 11668 393507 40747 63 3588 19 maxRuntime=1000000,all ESTtl,initialTtl=1,maxTtl=128 2186 10000 245246 11563 233683 61639 127 3693 19 maxRuntime=1000000,all ESTtl,initialTtl=1,maxTtl=256 2185 10000 165644 11459 154185 86202 255 3797 19 maxRuntime=1000000,all ESTtl-1,initialTtl=1,maxTtl=32 2206 10000 750877 12311 738566 23492 31 2945 19 maxRuntime=1000000,all ESTtl-1,initialTtl=1,maxTtl=128 2206 10000 252714 12164 240550 53655 127 3092 19 \fi Table~\ref{tab:50sec} shows the evaluation results for a runtime limitation of 50~seconds, which corresponds roughly to the maximum runtime needed for executing all relevant queries of the query set (cf.\ Sec.~\ref{sec:queries}). The CV policy has no misses, but it cannot execute all queries on time; instead, it delays three relevant executions for one revision each. As expected, SJF has the most and LJF the fewest query executions under an execution time limitation, because short-running and long-running queries, respectively, are preferred. As the decay factor $\lambda$ is increased, the number of executed queries of both policies tends towards that of RR. Nevertheless, neither strategy outperforms RR regarding relevant query executions, delay, or number of misses. The change-rate-based policies (CR) demonstrate that the result history is a good indicator, as a significant number of changes were detected: 92.9\,\% for CR$_{\lambda=0.0}$ and 66.7\,\% for CR$_{\lambda=0.5}$. The dynamicity-based policy (DJ) detects by far the most result updates (99.7\,\%) and produces the least delay; its effectiveness is above that of CR. The TTL configurations show results comparable to the 1,000~seconds runtime limitation, i.\,e., the number of total query executions, detected changes, and the delay remain relatively stable with the 50~seconds limit. Again, most result updates are detected by the TTL$_{{max}=32,reset}$ configuration.
\begin{table}[ht] \centering \small \caption{Config $K_\mathsf{maxExecTime}=50sec$} \label{tab:50sec} \begin{tabular}{l||r|r|r|r||r|r||r|r} & \begin{tabular}{@{}c@{}} total qe \end{tabular} & \begin{tabular}{@{}c@{}} irrelevant \end{tabular} & \begin{tabular}{@{}c@{}} relevant \end{tabular} & \begin{tabular}{@{}c@{}} eff. (\%) \end{tabular} & \begin{tabular}{@{}c@{}} abs\\ delay \end{tabular} & \begin{tabular}{@{}c@{}} max\\ delay \end{tabular} & \begin{tabular}{@{}c@{}} abs\\ miss \end{tabular} & \begin{tabular}{@{}c@{}} max\\ miss \end{tabular} \\ \hline CV & 15,256 & 0 & 15,256 & 100 & 3 & 1 & 0 & 0 \\ \hline LJF$_{\lambda=0.5}$ & 977,922 & 974,512 &\cellcolor{red!25}} %{0.9 3,410 & .35 & 36,677 & 23 &\cellcolor{red!25}} %{0.9 11,846 &\cellcolor{red!10}} %{0.9 19 \\ LJF$_{\lambda=1.0}$ & 1,535,835 & 1,531,663 &\cellcolor{red!10}} %{0.9 4,172 & .27 & 29,797 & 15 &\cellcolor{red!10}} %{0.9 11,084 & 13 \\ RR & 2,860,301 & 2,855,350 & 4,951 & .17 & 24,206 &\cellcolor{green!25}} %{0.9 10 & 10,305 & 9 \\ SJF$_{\lambda=1.0}$ & 4,334,228 &\cellcolor{red!10}} %{0.9 4,329,712 & 4,516 &\cellcolor{red!10}} %{0.9 .10 & 24,578 & 12 & 10,740 & 11 \\ SJF$_{\lambda=0.5}$ & 5,661,022 &\cellcolor{red!25}} %{0.9 5,657,123 & 3,899 &\cellcolor{red!25}} %{0.9 .07 & 26,591 & 17 & 11,357 & 15 \\ CR$_{\lambda=0.0}$ & 2,395,472 & 2,381,306 &\cellcolor{green!10}} %{0.9 14,166 & .59 &\cellcolor{green!10}} %{0.9 9,734 &\cellcolor{green!10}} %{0.9 11 &\cellcolor{green!10}} %{0.9 1,090 &\cellcolor{green!10}} %{0.9 7 \\ CR$_{\lambda=0.5}$ & 2,645,302 & 2,635,132 & 10,170 & .38 & 16,979 &\cellcolor{green!25}} %{0.9 10 & 5,086 & 8 \\ DJ & 1,986,100 & 1,970,895 &\cellcolor{green!25}} %{0.9 15,205 & .77 &\cellcolor{green!25}} %{0.9 2,449 & 26 &\cellcolor{green!25}} %{0.9 51 &\cellcolor{green!25}} %{0.9 5 \\ TTL$_{{max}=32}$ & 734,555 & 722,749 & 11,806 & 1.61 & 26,908 & 32 & 3,450 &\cellcolor{red!25}} %{0.9 20 \\ TTL$_{{max}=32,reset}$ & 740,847 & 728,559 & 12,288 & 1.66 & 23,283 & 32 & 2,968 
&\cellcolor{red!25}} %{0.9 20 \\ TTL$_{{max}=64}$ & 404,840 & 393,213 & 11,627 & 2.87 & 39,416 & 64 & 3,629 &\cellcolor{red!10}} %{0.9 19 \\ TTL$_{{max}=128}$ & 245,192 &\cellcolor{green!10}} %{0.9 233,635 & 11,557 & 4.71 &\cellcolor{red!10}} %{0.9 57,970 &\cellcolor{red!10}} %{0.9 127 & 3,699 &\cellcolor{red!10}} %{0.9 19 \\ TTL$_{{max}=128,reset}$& 252,483 & 240,387 & 12,096 &\cellcolor{green!10}} %{0.9 4.79 & 48,981 &\cellcolor{red!10}} %{0.9 127 & 3,160 &\cellcolor{red!10}} %{0.9 19 \\ TTL$_{{max}=256}$ & 165,681 &\cellcolor{green!25}} %{0.9 154,191 & 11,490 &\cellcolor{green!25}} %{0.9 6.94 &\cellcolor{red!25}} %{0.9 86,713 &\cellcolor{red!25}} %{0.9 255 & 3,761 &\cellcolor{red!10}} %{0.9 19 \end{tabular} \end{table} \iffalse config strategy revs q qe rel irrel delay max miss max maxRuntime=50000,all ESClairvoyant 1771 352 15256 15256 0 3 1 0 0 maxRuntime=50000,all ESChangeRateX,k=5 2207 10000 2395472 14166 2381306 9734 11 1090 7 maxRuntime=50000,all ESChangeRate,k=5 2207 10000 3229274 7974 3221300 430770 2172 7282 1070 maxRuntime=50000,all ESChangeRateDecay,k=5,lambda=0.5 2207 10000 2645302 10170 2635132 16979 10 5086 8 maxRuntime=50000,all ESLongestJobFirst,k=5,lambda=0.5 2207 10000 977922 3410 974512 36677 23 11846 19 maxRuntime=50000,all ESRoundRobin 2207 10000 2860301 4951 2855350 24206 10 10305 9 maxRuntime=50000,all ESShortestJobFirst,k=5,lambda=0.5 2207 10000 5661022 3899 5657123 26591 17 11357 15 maxRuntime=50000,all ESChangeRatioJ,k=5000,lambda=0.0 2207 10000 1986100 15205 1970895 2449 26 51 5 maxRuntime=50000,all ESTtl,initialTtl=1,maxTtl=32 2205 10000 734555 11806 722749 26908 32 3450 20 maxRuntime=50000,all ESTtl,initialTtl=1,maxTtl=64 2204 10000 404840 11627 393213 39416 64 3629 19 maxRuntime=50000,all ESTtl,initialTtl=1,maxTtl=128 2203 10000 245192 11557 233635 57970 127 3699 19 maxRuntime=50000,all ESTtl,initialTtl=1,maxTtl=256 2202 10000 165685 11490 154195 87005 255 3766 19 maxRuntime=50000,all ESChangeRateDecay,k=5,lambda=1.0 2207 10000 
2692565 8290 2684275 19538 9 6966 8 maxRuntime=50000,all ESLongestJobFirst,k=5,lambda=1.0 2207 10000 1535835 4172 1531663 29797 15 11084 13 maxRuntime=50000,all ESShortestJobFirst,k=5,lambda=1.0 2207 10000 4334228 4516 4329712 24578 12 10740 11 maxRuntime=50000,all ESTtl-1,initialTtl=1,maxTtl=32 2207 10000 740847 12288 728559 23283 32 2968 20 maxRuntime=50000,all ESTtl-1,initialTtl=1,maxTtl=128 2206 10000 252483 12096 240387 48981 127 3160 19 \fi Looking at the results for the most restrictive execution time limit of 10~seconds in Table~\ref{tab:10sec}, we observe that even an optimal scheduling algorithm is no longer able to detect all result updates in the dataset: the CV policy misses 772 query updates. LJF slightly outperforms RR regarding update detection. RR again has the smallest maximum delay per query. SJF is worse than both LJF and RR in all aspects. The change-rate-based policy (CR) detects updates more effectively. Without decay ($\lambda=0.0$), however, queries whose results have not changed so far are executed very rarely, which results in high delays. Since the maximum miss is relatively high while the total miss is low, we infer that only a small number of frequently changing queries is affected. The dynamicity-based policy (DJ) detects relatively many updates without executing too many irrelevant queries and, thus, is most effective under the scarce time limitation. Nevertheless, this policy is not starvation-free; it neglects queries with fewer updates. Due to the low dynamicity measure these queries reach at some point, they henceforth receive a very low rank and are not executed anymore. In contrast, queries with more frequently changing results are preferred and get executed repeatedly. The policy actually selected only 6,282 queries\footnote{The number of distinct executed queries is not shown in the table, since it is usually 10,000 for all policies except CV.} from the query set in total, which indicates a cold start problem.
As a result, both the maximum delay and the maximum miss grow significantly. The TTL policies show higher detection rates under short runtime limitations as well. The maximum delay grows with the maximum time-to-live, and the configuration TTL$_{{max}=32,reset}$ shows the lowest total delay. More changes are detected with a larger time-to-live, but this comes at the cost of delayed update recognition. It has to be noted that the maximum numbers of missed updates are low for all TTL configurations compared to the other policies, even though the delay increases. \begin{table}[ht] \centering \small \caption{Config $K_\mathsf{maxExecTime}=10sec$} \label{tab:10sec} \begin{tabular}{l||r|r|r|r||r|r||r|r} & \begin{tabular}{@{}c@{}} total qe \end{tabular} & \begin{tabular}{@{}c@{}} irrelevant \end{tabular} & \begin{tabular}{@{}c@{}} relevant \end{tabular} & \begin{tabular}{@{}c@{}} eff. (\%) \end{tabular} & \begin{tabular}{@{}c@{}} abs\\ delay \end{tabular} & \begin{tabular}{@{}c@{}} max\\ delay \end{tabular} & \begin{tabular}{@{}c@{}} abs\\ miss \end{tabular} & \begin{tabular}{@{}c@{}} max\\ miss \end{tabular} \\ \hline CV & 14,484 & 0 & 14,484 & 100 & 2,481 & 2 & 772 & 2 \\ \hline LJF$_{\lambda=0.5}$ & 690,086 & 687,542 & 2,544 & .37 & 45,619 &\cellcolor{green!10}} %{0.9 35 & 12,712 & 31 \\ LJF$_{\lambda=1.0}$ & 780,738 & 778,204 & 2,534 & .32 & 43,750 &\cellcolor{green!25}} %{0.9 31 & 12,722 & 28 \\ RR & 865,105 & 862,632 & 2,473 & .29 & 43,097 &\cellcolor{green!25}} %{0.9 31 & 12,783 & 28 \\ SJF$_{\lambda=1.0}$ & 934,182 &\cellcolor{red!10}} %{0.9 931,795 &\cellcolor{red!10}} %{0.9 2,387 &\cellcolor{red!10}} %{0.9 .26 & 43,681 & 36 &\cellcolor{red!10}} %{0.9 12,869 & 31 \\ SJF$_{\lambda=0.5}$ & 1,001,825 &\cellcolor{red!25}} %{0.9 999,526 &\cellcolor{red!25}} %{0.9 2,299 &\cellcolor{red!25}} %{0.9 .23 & 43,498 & 38 &\cellcolor{red!25}} %{0.9 12,957 & 34 \\ CR$_{\lambda=0.0}$ & 109,715 &\cellcolor{green!10}} %{0.9 99,791 &\cellcolor{green!25}} %{0.9
9,924 &\cellcolor{green!10}} %{0.9 9.05 &\cellcolor{red!10}} %{0.9 152,346 &\cellcolor{red!10}} %{0.9 678 &\cellcolor{green!25}} %{0.9 5,332 &\cellcolor{red!10}} %{0.9 210 \\ CR$_{\lambda=0.5}$ & 676,868 & 671,640 & 5,228 & .77 & 45,489 & 58 & 10,028 & 46 \\ DJ & 17,519 &\cellcolor{green!25}} %{0.9 11,363 & 6,156 &\cellcolor{green!25}} %{0.9 35.1 &\cellcolor{red!25}} %{0.9 499,860 &\cellcolor{red!25}} %{0.9 2,206 & 9,100 &\cellcolor{red!25}} %{0.9 1750 \\ TTL$_{{max}=32}$ & 621,510 & 615,662 & 5,848 & .94 &\cellcolor{green!10}} %{0.9 37,332 & 39 & 9,408 &\cellcolor{green!25}} %{0.9 15 \\ TTL$_{{max}=32,reset}$ & 621,250 & 615,380 & 5,870 & .94 &\cellcolor{green!25}} %{0.9 34,097 & 38 & 9,386 &\cellcolor{green!25}} %{0.9 15 \\ TTL$_{{max}=64}$ & 375,209 & 366,929 & 8,280 & 2.21 & 45,342 & 67 & 6,976 & 18 \\ TTL$_{{max}=128}$ & 231,409 & 222,265 & 9,144 & 3.95 & 61,796 & 131 & 6,079 & 18 \\ TTL$_{{max}=128,reset}$& 236,734 & 227,531 & 9,203 & 3.89 & 57,172 & 130 & 6,053 &\cellcolor{green!10}} %{0.9 16 \\ TTL$_{{max}=256}$ & 162,407 & 152,767 &\cellcolor{green!10}} %{0.9 9,640 & 5.94 & 95,893 & 258 &\cellcolor{green!10}} %{0.9 5,574 & 18 \end{tabular} \end{table} \iffalse config strategy revs q qe rel irrel delay max miss max maxRuntime=10000,all ESClairvoyant 1880 352 14484 14484 0 2481 2 772 2 maxRuntime=10000,all ESChangeRateX,k=5 2207 10000 109715 9924 99791 152346 678 5332 210 maxRuntime=10000,all ESChangeRate,k=5 2207 10000 432848 7124 425724 311444 2067 5346 884 maxRuntime=10000,all ESChangeRateDecay,k=5,lambda=0.5 2207 10000 676868 5228 671640 45489 58 10028 46 maxRuntime=10000,all ESChangeRateDecay,k=5,lambda=1.0 2207 10000 745365 3962 741403 44275 49 11294 47 maxRuntime=10000,all ESLongestJobFirst,k=5,lambda=0.5 2207 10000 690086 2544 687542 45619 35 12712 31 maxRuntime=10000,all ESLongestJobFirst,k=5,lambda=1.0 2207 10000 780738 2534 778204 43750 31 12722 28 maxRuntime=10000,all ESRoundRobin 2207 10000 865105 2473 862632 43097 31 12783 28 
maxRuntime=10000,all ESShortestJobFirst,k=5,lambda=1.0 2207 10000 934182 2387 931795 43681 36 12869 31 maxRuntime=10000,all ESShortestJobFirst,k=5,lambda=0.5 2207 10000 1001825 2299 999526 43498 38 12957 34 maxRuntime=10000,all ESChangeRatioJ,k=5000,lambda=0.0 2207 6282 17519 6156 11363 499860 2206 9100 1750 maxRuntime=10000,all ESTtl,initialTtl=1,maxTtl=32 2207 10000 621510 5848 615662 37332 39 9408 15 maxRuntime=10000,all ESTtl,initialTtl=1,maxTtl=64 2207 10000 375209 8280 366929 45342 67 6976 18 maxRuntime=10000,all ESTtl,initialTtl=1,maxTtl=128 2206 10000 231409 9144 222265 61796 131 6079 18 maxRuntime=10000,all ESTtl,initialTtl=1,maxTtl=256 2205 10000 162407 9640 152767 95893 258 5574 18 maxRuntime=10000,all ESTtl-1,initialTtl=1,maxTtl=32 2207 10000 621250 5870 615380 34097 38 9386 15 maxRuntime=10000,all ESTtl-1,initialTtl=1,maxTtl=64 2207 10000 378711 8245 370466 39514 68 7011 18 maxRuntime=10000,all ESTtl-1,initialTtl=1,maxTtl=128 2207 10000 236734 9203 227531 57172 130 6053 16 maxRuntime=10000,all ESTtl-1,initialTtl=1,maxTtl=256 2207 10000 168301 9725 158576 87817 260 5531 16 \fi \section{Conclusions} \label{sec:conclusions} This paper investigates multiple performance metrics of scheduling strategies for the re-execution of queries on a dynamic dataset. The experiments use query results gathered from a large corpus of SPARQL queries executed at more than 2,000 time points of the DBpedia~Live dataset, which covers a period of three months. The data collected in the experiments has been made public for comparison with other scheduling~approaches. From the experimental results we conclude that there is no absolute winner. The execution-time-based policies, \emph{Longest-Job-First} and \emph{Shortest-Job-First}, are not able to compete. Compared to \emph{Round-Robin} they generally perform worse. 
The main advantage of \emph{Round-Robin}, besides its simplicity, is its consistently short maximum delay, but in no setting is it competitive regarding total delay and change detection. \emph{Change-Rate} is able to detect a fair amount of changes; under scarce execution time restrictions, an aging factor should be used to prevent long delays. Given a limited execution time, the \emph{Dynamics-Jaccard} policy shows the best change recognition rates. Our results confirm the effectiveness of this policy reported in prior work. However, as the execution time limit becomes shorter, this policy tends to disregard queries with low update frequencies; it is therefore also not starvation-free. Since Dividino~et~al.~\cite{dividino-r-2015-strategies} considered only four iterations, this starvation effect could not be observed there, but it is likely to occur in the dataset update scenario as well. The \emph{Time-To-Live} policy shows good update detection performance and can be tuned towards a certain maximum delay; it keeps the maximum number of missed changes constant. The alternative configuration that resets the time-to-live value instead of halving it when a change has been detected performs better, with higher detection rates and reduced delays. Our experiments show that scheduling strategies based on previously observed changes produce better predictions. The \emph{Time-To-Live} policy can be adapted well to required response times. While the \emph{Change-Rate} and \emph{Dynamics} policies detect the most updates, they tend to neglect less frequently changing queries. Given a less strict execution time limit, \emph{Dynamics-Jaccard} is the best candidate; otherwise \emph{Time-To-Live} can be recommended because it is starvation-free. For future applications it seems reasonable to combine these scheduling approaches into a hybrid scheduler.
\section{Outlook and Future Work} \label{sec:outlook} In future work we plan to apply the gathered insights as part of a notification service for query result changes. To improve the selection of queries in the early stage, we will analyze how change characteristics can be estimated from a~priori knowledge. We observed that the change history is an influential factor for the scheduling strategies. This brings difficulties such as the cold start problem. To investigate whether the change characteristics of a query can be derived from a~priori knowledge~(such as the query itself and its initial execution) we conducted a preliminary analysis: We computed correlations of the query's change probability with different query characteristics, including \emph{query type}, \emph{ordering}, \emph{result limit} and \emph{offset}, \emph{number of result variables}, \emph{number of triple patterns}, \emph{run time} and \emph{result size} at initial execution. However, no significant correlation with any of these features could be identified. A deeper examination is needed to determine whether and how the change probability can be predicted from such query characteristics. \section{Preliminaries} \label{sec:definitions} In this paper we consider a dynamic dataset, denoted by $\mathcal{D}$, that gets updated continuously or in regular time intervals. We assume a sequence $\vv{\mathcal{T}} = (t_1, t_2, \dots, t_n)$ of consecutive points in time at which the dataset constitutes differing revisions. Additionally, we consider a finite set~$Q$ of SPARQL queries. Then, for every time point $t_i$ in $\vv{\mathcal{T}}$ and for every query~$q \in Q$, we write $\textsf{\footnotesize result}(q,i)$ to denote the query result that one would obtain when executing $q$ over $\mathcal{D}$ at $t_i$.
Furthermore, let $C_i \subseteq Q$ be the subset of the queries whose result at $t_i$ differs from the result at the previous time point $t_{i-1}$, i.e., $$ C_i = \big\lbrace q \in Q \mid \textsf{\footnotesize result}(q,i) \neq \textsf{\footnotesize result}(q,i-1) \big\rbrace . $$ The overall aim is to identify as large a subset of $C_i$ as possible at each time point~$t_i$. A~trivial solution to achieve this goal would be to execute all queries from $Q$ at all time points. While this exhaustive approach may be possible for a small set of queries, we assume that the size of $Q$ is large enough for the exhaustive approach to seriously stress, or even overload, the query processing service. Therefore, we consider an additional politeness constraint that any possible approach has to satisfy. For the sake of simplicity, in this paper we use as such a constraint an upper bound on the size of the time slots within which approaches are allowed to execute a selected sequence of queries for the different time points. Hereafter, let $K_\mathsf{maxExecTime}$ be this upper bound, and, for any possible approach, let $E_i \subseteq Q$ be the~(refresh) queries that the approach executes in the time slot for time point~$t_i$. Hence, if we let $\textsf{\footnotesize execTime}(q,i)$ denote the time for executing $q$ over the snapshot of $\mathcal{D}$ at~$t_i$, then for all past time points we have $$K_\mathsf{maxExecTime} \geq \sum_{q \in E_i} \textsf{\footnotesize execTime}(q,i) . $$ To select a sequence of queries to be executed within the time slot for the next time point, the approaches may use any kind of information obtained by the query executions performed during previous time slots for earlier time points.
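To make the politeness constraint concrete, the per-slot selection can be sketched in a few lines of Python (a simplified illustration; the callables \texttt{rank}, \texttt{exec\_time}, and \texttt{run\_query} are hypothetical placeholders for components defined in this and the next section):

```python
# Sketch of one refresh time slot under the politeness constraint
# K_maxExecTime: queries are executed in ranking order until the
# time budget is exhausted. All callables are hypothetical placeholders.

def refresh_slot(queries, i, rank, exec_time, run_query, k_max_exec_time):
    """Return E_i, the queries executed in the time slot for t_i."""
    executed = []
    budget = k_max_exec_time
    for q in sorted(queries, key=lambda q: rank(q, i)):
        cost = exec_time(q, i)
        if cost > budget:   # executing q would exceed the time slot
            break
        run_query(q, i)     # obtain result(q, i)
        budget -= cost
        executed.append(q)
    return executed
```

Non-selective policies differ only in the \texttt{rank} function, whereas selective policies additionally restrict the set of queries passed in.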
For instance, to select the sequence of queries for a time point $t_i$, an approach may use any query result $\textsf{\footnotesize result}(q,j)$ with $j < i$ and $q \in E_j$, but it cannot use any $\textsf{\footnotesize result}(q'\!,j')$ with $q' \notin E_{j'}$ or with $j' \geq i$. As a last preliminary, in the definition of some of the approaches that we are going to introduce in the next section we write $\textsf{\footnotesize prevExecs}(q,i)$ to denote the set of all time points for which the corresponding approach executed query $q \in Q$ before arriving at time point $t_i$; i.e.\ $ \textsf{\footnotesize prevExecs}(q,i) = \lbrace j < i \mid q \in E_j \rbrace. $ In addition, we write $\textsf{\footnotesize lastExec}(q,i)$ to denote the most recent of these time points, i.e.\ $ \textsf{\footnotesize lastExec}(q,i) = \max\bigl( \textsf{\footnotesize prevExecs}(q,i) \bigr) . $ \section{Scheduling Strategies} \label{sec:strategies} This section presents the scheduling strategies implemented for our evaluation. We begin by introducing features that may affect the behavior of such strategies. Typically, dataset providers do not offer any mechanism to inform clients about data updates, neither whether the data has changed nor to what extent. Therefore, we focus on scheduling strategies that are dataset-agnostic, i.\,e., strategies that do not assume information about what has changed since the last query execution. Hence, all features that such a strategy can exploit to schedule queries for the next refresh time slot originate from (a)~the queries themselves, (b)~an initial execution of each query, and (c)~the ever-growing history of successful executions of the queries during previous time~slots. Given these constraints, we have implemented different scheduling policies using the following features: \begin{itemize} \item \emph{Age} describes the actual time passed since the last query execution.
\smallskip \item \emph{Estimated execution time} is computed from the median query execution time over the last query executions and relates to the politeness constraint $K_\mathsf{maxExecTime}$. \smallskip \item \emph{Change Rate} indicates ``how often'' a query result has changed. It is derived from the recognition of result changes within the last query executions. \smallskip \item \emph{Change Dynamics} indicates ``to what extent'' a query result has changed. It is an aggregation of result changes over the last query executions~\cite{dividino-r-2014-changes}. We compute this metric by using the \emph{Jaccard distance} between known subsequent results. \end{itemize} We have implemented seven scheduling policies known from the literature. We classify them into two groups: \emph{non-selective} and \emph{selective} policies. With a \emph{non-selective} scheduling policy, potentially all registered queries are evaluated according to a ranking order until the execution time limit~($K_\mathsf{maxExecTime}$) has been reached. For every time point $t_i$ in $\vv{\mathcal{T}}$\!, a new ranking for all queries is determined. The queries are ranked in ascending order using a ranking function $\textsf{\footnotesize rank}(q,i)$. In a tie situation, the decision is made based on the age of the query, and finally the query id. \begin{description} \item [Round-Robin (RR)] treats all queries equally, disregarding their change behavior and execution times. It executes the queries for which the least current result is available. \begin{equation} \textsf{\footnotesize rank}_{RR}(q,i) = \frac{1}{i- \lastExecXXX(q,i)} \end{equation} \item [Shortest-Job-First (SJF)] prefers queries with a short estimated runtime~(to execute as many queries per time slot as possible). The runtime is estimated using the median value of runtimes from previous executions.
Additionally, the exponential decay function $e^{-\lambda(i- \lastExecXXX(q,i))}$ is used as an aging factor to prevent starvation. \begin{equation} \textsf{\footnotesize rank}_{SJF}(q,i) = e^{-\lambda (i- \lastExecXXX(q,i))} \mathrm{median}_{j \in \prevExecsXXX(q,i)}\bigl( \textsf{\footnotesize execTime}(q,j) \bigr) \end{equation} \item [Longest-Job-First (LJF)] uses the same runtime estimation and aging as SJF but prefers long estimated runtimes, assuming such queries are more likely to produce a result. \begin{equation} \textsf{\footnotesize rank}_{LJF}(q,i) = \frac{ e^{-\lambda (i- \lastExecXXX(q,i))} }{ \mathrm{median}_{j \in \prevExecsXXX(q,i)}\bigl( \textsf{\footnotesize execTime}(q,j) \bigr) } \end{equation} \item [Change-Rate (CR)] prioritizes queries that have changed most frequently in the past. A decay function $e^{-\lambda t}$ is used to weight each observed change by its age. \begin{align} \textsf{\footnotesize rank}_{CR}(q,i) &= \sum_{j \in \prevExecsXXX(q,i)} \!\left( e^{-\lambda (i-j)} \cdot \mathsf{change}(q,j)\right), \\ \text{where:} \quad \mathsf{change}(q,j) &= \begin{cases} 1 & \text{if } \textsf{\footnotesize result}(q,j) \neq \textsf{\footnotesize result}(q,\textsf{\footnotesize lastExec}(q,j) ), \\ -1 & \text{else}. \end{cases} \end{align} \item [Dynamics-Jaccard (DJ)] has been proposed as a best-effort scheduling policy for data\-set updates~\cite{dividino-r-2015-strategies}. Here, for \texttt{\footnotesize DESCRIBE} and \texttt{\footnotesize CONSTRUCT} queries we compute the \emph{Jaccard distance} on RDF triples, and on the query solutions for \texttt{\footnotesize SELECT} queries. For \texttt{\footnotesize ASK} queries, the distance is either $0$ or $1$.
\begin{align} \textsf{\footnotesize rank}_{DJ}(q,i) &= \sum_{j \in \prevExecsXXX(q,i)} \!\left(e^{-(i-j)} \cdot \textsf{\footnotesize jaccard}(q,j) \right) \\ \text{where:} \quad \textsf{\footnotesize jaccard}(q,j) &= 1 - \frac{\bigl| \textsf{\footnotesize result}(q,j) \cap \textsf{\footnotesize result}(q,\textsf{\footnotesize lastExec}(q,j)) \bigr|}{\bigl| \textsf{\footnotesize result}(q,j) \cup \textsf{\footnotesize result}(q,\textsf{\footnotesize lastExec}(q,j)) \bigr|} \end{align} \end{description} Instead of ranking all queries, the \emph{selective} scheduling policies select a (potentially ranked) subset of queries for evaluation at a given point in time $t_i$. Queries from this subset that do not get evaluated due to the execution time limit~($K_\mathsf{maxExecTime}$) are privileged in the next time slot~$t_{i+1}$. \begin{description} \item [Clairvoyant (CV)] is assumed to have full knowledge of all query results at every point in time and, thus, is able to determine the optimal schedule. \item [Time-To-Live (TTL)] determines specific time points when a query should be executed. To this end, each query is associated with a value indicating a time interval after which the query needs to be re-evaluated. After an evaluation, if the query result has changed, this time-to-live value is divided in half or, alternatively, reset to the initial value of $1$; if the result did not change, the value is doubled up to a fixed maximum value~($max$). We investigate different values for the maximum time-to-live. \end{description} \section{Experimental Setup} \label{sec:setup} We evaluated the performance of the scheduling strategies experimentally. In this section, we explain the test setup. The setup consists of a highly dynamic dataset and a corresponding set of SPARQL queries. The individual characteristics of the dataset and the query set are analyzed in detail, before we focus on the evaluation metrics.
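To make two of the policy ingredients from the previous section concrete, the decayed change-rate score of CR and the TTL update rule can be sketched as follows (a simplified sketch, not the evaluated implementation; the bookkeeping structures \texttt{prev\_execs} and \texttt{changed} are hypothetical names):

```python
import math

def change_rate_score(i, prev_execs, changed, lam):
    """rank_CR: decayed sum of +1 (changed) / -1 (unchanged) observations.

    prev_execs -- time points j < i at which the query was executed
    changed    -- maps each such j to whether the result changed at j
    """
    return sum(math.exp(-lam * (i - j)) * (1 if changed[j] else -1)
               for j in prev_execs)

def ttl_update(ttl, result_changed, max_ttl, reset=False):
    """TTL rule: halve (or reset to 1) on a change, otherwise double."""
    if result_changed:
        return 1 if reset else max(1, ttl // 2)
    return min(max_ttl, 2 * ttl)
```

For example, with $\lambda = 0$ every observation counts equally, so a query with one observed change and one unchanged re-execution scores $0$.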
\subsection{Dataset} \label{sec:dataset} For our experiments we use the \emph{DBpedia~Live} dataset \cite{hellmann-s-2009-dbpedialive} because it provides continuous fine-grained changesets, which are necessary to reproduce a sufficient number of dataset revisions. Moreover, while \emph{DBpedia~Live} and \emph{DBpedia} share the same structural backbone -- both make use of the same vocabularies and are extracted from English Wikipedia articles -- the main difference is that the real-time extraction of \emph{DBpedia~Live} makes use of different article revisions. Therefore, queries for \emph{DBpedia} can be expected to work for \emph{DBpedia~Live} as well, as we show in Sec.~\ref{sec:queries}. We selected the three-month period August--October 2015 for replaying the changesets, starting from a dump of June 2015~({\footnotesize \url{http://live.dbpedia.org/dumps/dbpedia_2015_06_02.nt.gz}}) with the subsequent updates for June and July 2015 applied. After each fully~replayed hour, we collect dataset statistics and execute the full query set. All statistics and results are recorded in a database for the actual evaluation of the scheduling~strategies. \begin{figure}[htp] \centering \input{./plots/ds_rev_stats} \vspace{-4em} \caption{Revision statistics} \label{fig:revstats} \end{figure} As shown in Fig.~\ref{fig:revstats}, the dataset contains between 398M and 404M triples. The dataset changes are not homogeneous: starting from 08/18 we observe an increased number of triple updates, and from 08/27 to 08/31 there were exceptionally many insertions and even more deletions~(the reason for this pattern could not be determined from the changesets). In total we have 2,208 hourly updates for our three-month period (92 days $\times$ 24 hours), and there are 437 revisions (hours) without any changes. \subsection{Queries} \label{sec:queries} To perform SPARQL query executions on a dynamic dataset it is essential to use queries that match the dataset.
We use a set of real-world queries from the \emph{Linked SPARQL Queries dataset} (LSQ)~\cite{saleem-m-2015-lsq}, which contains 782,364 queries for DBpedia. Though the queries originate from the year 2010 (DBpedia 3.5.1), they still match the current dataset structure. We randomly selected 10,000 queries from LSQ after filtering out those having a runtime of more than 10 minutes or producing parse or runtime errors. The query set contains 11~\texttt{\footnotesize DESCRIBE}, 93~\texttt{\footnotesize CONSTRUCT}, 438~\texttt{\footnotesize ASK}, and 9,458~\texttt{\footnotesize SELECT} queries, and is available at {\footnotesize \url{https://semanticmultimedia.github.io/RefreshQueries/data/queries.txt}}. DBpedia~Live changes gradually, but obviously the structural backbone of DBpedia remains. As a result, 4,423 out of our 10,000 queries deliver a non-empty query result on the first examined revision (4,440 over all examined revisions). \begin{figure}[htp] \centering \input{./plots/rs_changes_per_rev} \vspace{-4em} \caption{Queries with update per revision (bars) and distinctly aggregated (line)} \label{fig:rschange} \end{figure} We consider a result as \emph{changed} if it is not isomorphic to the result returned for this query in the previous evaluation. For queries having the \texttt{\footnotesize ORDER BY} feature we also check for an equal bindings sequence. If \texttt{\footnotesize ORDER BY} is not used in the query, the binding order is ignored as SPARQL result sets are then expected in no specific order~\cite{harris-s-2013-sparql}. Concerning the \emph{result changes} (cf.\ Fig.~\ref{fig:rschange}) we observe that only a small fraction~of the queries is affected by the dataset updates~(up to 32~queries per revision, 352~queries within all revisions). Furthermore, the continuously increasing number of distinct queries with changed results shows that query results may also change after being constant for a long time.
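The result-change test described above can be sketched as follows (a simplification that represents solutions as hashable tuples and ignores blank-node isomorphism; all names are illustrative):

```python
from collections import Counter

def result_changed(old_bindings, new_bindings, has_order_by):
    """True if the new result differs from the previously seen one."""
    if has_order_by:
        # with ORDER BY, the binding sequence itself is significant
        return list(old_bindings) != list(new_bindings)
    # without ORDER BY, compare the results as multisets of bindings
    return Counter(old_bindings) != Counter(new_bindings)
```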
Periods with higher data update frequency~(e.g., from~08/27 to~08/31) also show up as periods with more query result~changes. As illustrated in Fig.~\ref{fig:rschangeruntime}, the overall runtime of all queries per revision ranges from 440 to 870 seconds, whereas the runtime for the affected queries ranges up to 50.1 seconds~(consuming at most 8.9\,\% of the total runtime). \begin{figure}[htb] \centering \input{./plots/rs_change_rt_per_rev} \vspace{-4em} \caption{Runtime of queries total vs. with update} \label{fig:rschangeruntime} \end{figure} Fig.~\ref{fig:rschangedist} shows the individual points in time at which the query results change, for a subset of the analyzed query set\footnote{Details on the individual queries can be retrieved from the LSQ dataset, accessible at \url{http://lsq.aksw.org/page/res/DBpedia-q<QUERY_ID>}.}. The majority (191) of the 352 queries affected by the dataset updates change exactly once, and 38 queries change twice. The result of the query\footnote{Shortened, find the original query at \url{http://lsq.aksw.org/page/res/DBpedia-q312238}.}\\ \sparqlinline{SELECT ?res ?v WHERE \{ ?res dbo:abstract ?v \} ORDER BY ?res ?v}\\ changes most often, namely 1,765 times. The query results change at very irregular intervals, with a high variation between the individual queries. The average interval between subsequent changes is 27.6 hours (standard deviation 145.6 hours) for the 352 queries affected by dataset updates.
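The interval statistics can be reproduced from the recorded change revisions along the following lines (an illustrative sketch; the query ids are hypothetical, and each query's first interval is counted from revision~0, as in our aggregation):

```python
import statistics

def change_intervals(change_revisions):
    """`change_revisions` maps each query id to the sorted revision
    numbers (hours) at which its result changed.  Returns all intervals
    between successive changes, counting each query's first interval
    from revision 0."""
    intervals = []
    for revs in change_revisions.values():
        prev = 0
        for rev in revs:
            intervals.append(rev - prev)
            prev = rev
    return intervals

# hypothetical example: one query changing at hours 5 and 10, another at hour 7
ivals = change_intervals({"q1": [5, 10], "q2": [7]})
mean, std = statistics.mean(ivals), statistics.stdev(ivals)
```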
\begin{figure}[htb] \centering \input{./plots/rs_changes_per_query} \vspace{-4em} \caption{Result changes per query (examples)} \label{fig:rschangedist} \end{figure} The dataset replay and the query executions have been performed on a 48-core Intel(R)~Xeon(R) machine using the AKSW Jena SPARQL API\footnote{\url{https://github.com/AKSW/jena-sparql-api}} and an OpenLink Virtuoso Server~07.10 with 32GB of reserved~RAM. \subsection{Publication of Experimental Data} \label{sec:datapub} We provide the data gathered from the experiments in the form of a MySQL database dump and an RDF dump with the query executions as planned by the evaluated strategies\footnote{Both datasets are available at \url{https://semanticmultimedia.github.io/RefreshQueries/}}. The database dump includes the plain results of all query executions, while the RDF dataset refers to their SHA256 hash values. The RDF dataset uses the LSQ vocabulary\footnote{\url{https://github.com/AKSW/LSQ/blob/gh-pages/LSQ_Vocab.rdf}}. We extended the vocabulary to describe relevant metadata such as the delay and the missed updates of individual query executions.
\subsection{Evaluation metrics} \label{sec:metrics} An ideal scheduling strategy should satisfy a number of requirements: \begin{itemize} \item \emph{Effectiveness}: It should only evaluate queries whose results have changed, which avoids unnecessary load on the SPARQL endpoint. \smallskip \item \emph{Efficiency}: It should evaluate queries whose results have changed as soon as possible, which reduces the out-of-date time and helps not to miss result changes. \smallskip \item \emph{Avoid starvation}: Results of queries that are susceptible to change~(i.e., there is no reason to believe the query will always produce the same result) may change at any point in time, even if the results have been constant so far. It should be ensured that such queries are executed at some point. \end{itemize} To compare the query execution strategies we simulate their query selection with different configurations over all 2,208 dataset revisions ($t_1, \dots, t_{2208}$). The initial query results $\lbrace \forall q \in Q: \textsf{\footnotesize result}(q,0) \rbrace$ for $t_0 < 08/01$ are available to every scheduling strategy right from the start. We compute the following key metrics: \begin{description} \item [Total query executions] the number of query executions performed. \smallskip \item [Irrelevant executions] query executions that did not detect a result change; equals the total number of executions minus the relevant ones. Irrelevant executions create unnecessary load on the endpoint and reduce the \emph{effectiveness}. \smallskip \item [Relevant executions] query executions for which a change could be detected compared to the last execution, i.\,e.,\ there was at least one result change since that execution; if there was more than a single change, the additional updates are counted as missed. \smallskip \item [Effectivity] the ratio of relevant query executions to the total number of executions.
\smallskip \item [Absolute delay] the time between the optimal and the actual re-execution $(q,i)$, summed over all queries; it measures the overall \emph{efficiency} of the scheduling strategy. \smallskip \item [Maximum delay] the longest delay for an individual query execution; it determines the maximum out-of-date time to be expected from the scheduling strategy for an individual query result. Overly long out-of-date times indicate a \emph{starvation} problem. \smallskip \item [Absolute miss] the number of result changes that were missed, i.e., not recognized by any execution, summed over all queries. \smallskip \item [Maximum miss] the maximum number of missed result updates across all queries. \end{description}
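For a single query, these metrics can be computed from the true change revisions and a strategy's execution schedule roughly as follows (an illustrative sketch; the function name is ours, delays are counted in revisions from the first undetected change, and changes after the last execution are not scored):

```python
def score_schedule(changes, executions):
    """Score one query's schedule.  `changes` and `executions` are sorted
    lists of revision numbers holding the true result changes and the
    strategy's re-executions.  Returns (relevant, irrelevant, missed,
    delay): relevant/irrelevant executions, missed result updates, and
    the summed delay between a change and the execution detecting it."""
    relevant = irrelevant = missed = delay = 0
    pending = []                 # changes that happened but were not yet seen
    ci = 0
    for e in executions:
        while ci < len(changes) and changes[ci] <= e:
            pending.append(changes[ci])
            ci += 1
        if pending:
            relevant += 1
            missed += len(pending) - 1    # only the newest state is observed
            delay += e - pending[0]       # out-of-date since the first change
            pending = []
        else:
            irrelevant += 1
    return relevant, irrelevant, missed, delay
```

Summing these tuples over all queries yields the absolute metrics; the maxima are taken per query.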
\section{Introduction} Over the last several decades an extensive literature has developed describing Monte Carlo simulations of both localized (e.g.~Heisenberg) and itinerant (e.g.~Hubbard) models of quantum magnetism. An important subset of these studies has considered situations where the exchange constants $J_\alpha$ or electron repulsion $U_\alpha$ can take on multiple values, with the attendant possibility of quantum phase transitions as the ratio of these energy scales is altered. For example, in the case of the one-fifth depleted square lattice model of CaV$_4$O$_9$, the quotient $J/J'$ of the exchange constants on the two different vanadium bonds tunes the associated Heisenberg hamiltonian from a disordered dimer phase, to N\'eel order, and then back to a disordered plaquette phase\cite{troyer96}, lending an understanding of spin gapped behavior in this material. Likewise, bilayer Heisenberg\cite{sandvik94} and Hubbard\cite{scalettar94} models have a singlet to N\'eel transition depending on the ratio of the inter- and intra-plane energies. In addition to describing systems in which long range order can be destroyed, multiple $J_\alpha$ and $U_\alpha$ can also give rise to transitions between different ordered states, such as charge density versus spin density wave patterns. Simulations of models with several interaction energy scales are especially relevant to heterostructures, where the growth of distinct sheets of the same or different materials offers the possibility of tuned magnetic properties. In this paper we present a quantum Monte Carlo investigation of a mixed localized-itinerant magnetic model in which we couple a 2D square layer of Ising spins to several metallic planes. Our interest is both in how, potentially, the additional fluctuations of the free electrons suppress the Ising transition temperature and in how the magnetic layer initiates order amongst the free fermions.
We also explore whether the coupling of the metal to the localized spins can open a gap in the electronic spectrum, driving a metal to insulator transition, and how deeply the magnetic order penetrates into the metal. Our work is related to simulations of multilayer Hubbard models in which the on-site interaction $U$ can distinguish metallic from magnetic layers\cite{euverte12}. However, by treating the correlated layers as classical, localized spins we are able to explore a greater range of parameter space and, in particular, to go to lower temperatures away from half-filling of the metallic band, where a sign problem would otherwise prevent simulations. This idea of coupling classical spins to itinerant electrons has been used extensively, e.g.~in multiband models of the manganites and iron pnictides, where the sign problem similarly precludes treating fully quantum mechanical models \cite{dagotto01,lv10,yin10}. Numerical approaches to these models allow easier access to dynamical behavior, and hence greater possibility of contact with spectroscopy and neutron scattering experiments\cite{liang12}, than do direct path integral treatments of many-electron systems, which require a difficult analytic continuation to get real time information. A number of recent experiments have examined electronic reconstruction at the interface of different transition metal oxides using scanning tunneling microscopy with high spatial and energy resolution. Some of these experiments focus on interfaces of paramagnetic metals and antiferromagnetic insulators \cite{takahashi01,freeland10,tan13}. The Hamiltonian we consider here, in which tight binding layers couple to classical, localized spins, is the simplest model of such a situation, and will clearly require considerable refinement before being able to make any sort of quantitative contact. Nevertheless, it can lend a first qualitative insight into the sort of trends one might expect, e.g.~for magnetic order.
Moreover, the study of fluctuating classical spins coupled to fermionic degrees of freedom has recently been suggested as a generally promising approach to move beyond mean-field treatments of interacting electron systems \cite{dagotto14}, providing further motivation for this work. The remainder of this paper is organized as follows: In Sec.~\ref{sec:model} we write down the fermion-Ising hamiltonian along with a brief summary of the numerical methods employed, and definitions of the observables which characterize the phases. Section \ref{sec:results} describes the results when the Fermi level is at the band center, first for the case of antiferromagnetic (AF) Ising spins and then for ferromagnetic coupling, followed by a discussion (Sec.~\ref{sec:resultsdoped}) of the effect of doping away from half-filling. A particularly interesting charge disproportionation is shown to occur where metallic layers become unequally populated to allow for an optimal magnetic response. Section \ref{sec:conclusion} presents a conclusion and some future directions of research. \section{Model} \label{sec:model} We consider the hamiltonian \begin{align} \hat H = -&t \sum_{\langle ij\rangle \ell,\sigma} \big( \, c_{i \ell \sigma}^{\dagger} c_{j \ell \sigma}^{\phantom{\dagger}} + c_{j \ell \sigma}^{\dagger} c_{i \ell \sigma}^{\phantom{\dagger}} \, \big)-\mu \sum_{i \ell,\sigma} n_{i \ell\sigma} \nonumber \\ -&t_\perp \sum_{i \langle \ell \ell'\rangle,\sigma} \big( \, c_{i \ell \sigma}^{\dagger} c_{i \ell' \sigma}^{\phantom{\dagger}} + c_{i \ell' \sigma}^{\dagger} c_{i \ell \sigma}^{\phantom{\dagger}} \, \big) \nonumber \\ -& J_H\sum_{i} {s}^z_{i,\ell=0} \, {S}_i +J\sum_{\langle ij\rangle}{S}_i \, {S}_j , \label{eq:hamiltonian} \end{align} where $c_{i\ell\sigma}^{\dagger}\,(c_{i\ell\sigma}^{\phantom{\dagger}})$ are creation(destruction) operators for fermions of spin $\sigma$ on lattice site $i$ of layer $\ell=0,1,\cdots,N_{\rm layer}-1$.
Our convention is that layer $\ell=0$ is adjacent to the classical spins. The intra-layer hopping $t$ is on nearest-neighbor sites (denoted by $\langle ij \rangle$) of each layer $\ell$; the inter-layer hopping between neighboring fermionic layers $\langle \ell \ell' \rangle$ is $t_\perp$, and the density of fermions is tuned by the chemical potential $\mu$. The geometry of each layer is that of a 2D square lattice of linear length $L$. The remaining degrees of freedom are Ising spins which populate a single layer\cite{footnote1} and are coupled by exchange constant $J$. The Ising spins interact with the $z$ component of the fermion spin ${s}^z_{i,0}= n_{i,0, \uparrow} - n_{i,0, \downarrow}$ in the interface layer $\ell=0$, via a second exchange constant, $J_H$. The lattice geometry is sketched in Fig.~\ref{fig:cartoon}. We choose periodic boundary conditions in the planes, and open boundary conditions in the direction perpendicular to the planes. Our results will be for two metallic layers (i.e.~$N_{\rm layer}=2$), since, as we shall show, such a situation already allows us to address many of the key questions concerning the interface between a magnetic and a metallic layer. We have chosen $|J|/t=0.2$ (both signs of $J$ will be studied) so that the temperature scale for the development of correlations in the classical spins is comparable to that in the metallic layer and, consequently, possible competing phases are most readily discerned. There are different ways to understand this. The simplest is to note that, if $J=t$, the 2D square lattice Ising $T_c \sim 2.27 J$ is much higher than the typical temperature scales at which short range correlations become robust for noninteracting fermions on a square lattice. Indeed, for the half-filled $U=0$ Hubbard hamiltonian, short range antiferromagnetic correlations, associated with the Fermi surface nesting vector $\mathbf{k} = (\pi,\pi)$, do not set in until the temperature falls below $T \sim 0.25\, t$.
Even when electron-electron interactions, which are not considered here, are turned on, nearest neighbor spin correlations do not begin to grow substantially until $T \sim 0.5 t$ (for the $U/t=4$ Hubbard model). Thus in either case, a choice $|J|/t \sim 0.2$ (Ising $T_c \sim 0.45\, t$) is required to make the classical spin and fermionic spin ordering scales roughly equal. An alternative to Eq.~\ref{eq:hamiltonian} would be to consider continuous planar $\vec {\bf S} = (S_i^x,S_i^y)$ or Heisenberg $\vec {\bf S} = (S_i^x,S_i^y,S_i^z)$ spins, with an $\vec {\bf S}_i \cdot \vec {\bf S}_j$ spin-spin coupling between pairs of local spins, and an $\vec {\bf S}_i \cdot \vec {\bf s}_j$ coupling of the local spins to the fermion spins.\cite{dagotto01} The restriction used here, to a single ($z$) component, has been considered in other problems involving electronic correlation, from mean field approaches\cite{chiesa13} to the study of the $t$-$J_z$ model.\cite{zhang88} The choice of Ising spins also ensures a robust magnetic phase transition in which true long range order occurs at finite $T_c$ in the spin plane. This will be discussed further in the conclusions. It is worth noting several symmetries of the hamiltonian Eq.~\ref{eq:hamiltonian}. Consider first a combined particle-hole transformation $\,c_{i\sigma}^{\phantom{\dagger}} \rightarrow (-1)^i c_{i\sigma}^{\dagger}$ and inversion of the localized spins $\big(\,S_i \rightarrow-S_i\,\big)$. Here $(-1)^i$ denotes a staggered $\pm 1$ phase taking opposite values on the two sublattices of the bipartite square lattice. This transformation leaves each of the terms in the hamiltonian (the fermion kinetic energy, the Ising interaction, and the local spin-fermion coupling) invariant. Thus, if $\mu=0$, the whole hamiltonian is unchanged, and the lattice is half-filled ($\rho=1.0$).
\begin{figure}[t] \vspace{+0.5cm} \epsfig{figure=cartoon.pdf,width=9.0cm,angle=-0,clip} \caption{(Color online) Lattice geometry for the fermion-Ising model. A single layer of Ising spins residing on a 2D square lattice is superposed on several layers of noninteracting fermions. The nearest-neighbor spin-spin interaction between the free fermions of layer $\ell=0$ and the Ising spins is proportional to the parameter $J_H$. \label{fig:cartoon} } \end{figure} The finite temperature properties of the system can be obtained from its partition function and associated expectation values. The partition function is, \begin{equation} Z=\sum_{S_i=\pm 1} e^{\beta J \sum_{\langle ij \rangle} S_i S_j} \cdot Z_{\rm f} (\{S\}), \label{eq:Ztotal} \end{equation} where $Z_{\rm f}(\{S\})={\mathrm{Tr}} \, e^{-\beta({\hat H}_{{\rm f}\uparrow} +{\hat H}_{{\rm f}\downarrow})}$ represents the grand-canonical partition function of the fermionic part of the hamiltonian for a particular Ising field configuration $\{S\}$. Since the hamiltonian Eq.~\ref{eq:hamiltonian} is bilinear in the fermionic operators, each $\hat H_{\rm f \sigma}$ can be written as the product of a vector of creation operators, a real-valued matrix ${\cal M}^\sigma$,\cite{footnote2} and a vector of destruction operators. The fermion contributions to $Z$ can then be expressed in terms of the eigenvalues $\lambda_j^\sigma$ of ${\cal M}^\sigma$, \begin{equation} Z_{\rm f}=\prod_{\sigma=\uparrow,\downarrow}\prod_{j}\left(1+e^{ \beta\lambda^\sigma_j}\right) . \end{equation} From this expression, it is clear that the summand in Eq.~\ref{eq:Ztotal} is positive definite and there is no ``sign problem'' (for any $\mu$). Of course, this is simply a consequence of the fact that the spin field to which the fermions are coupled does not vary in imaginary time, as it would, for example, if $\{S\} = S_{i\tau}$ were a Hubbard-Stratonovich field used to decouple a fermion-fermion interaction. 
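To make the sampling weight concrete, the following sketch builds ${\cal M}^\sigma$ for a single $L\times L$ layer coupled to a given Ising configuration and evaluates $\ln Z_{\rm f}$ from its eigenvalues. The restriction to one layer is our simplification, and ${\cal M}^\sigma$ is taken with the overall sign for which $Z_{\rm f}=\prod_j(1+e^{\beta\lambda_j})$ holds as written above:

```python
import numpy as np

def m_sigma(L, t, mu, JH, spins, sigma):
    """Matrix M^sigma for one periodic L x L layer coupled to Ising spins
    S_i (sigma = +1 for up, -1 for down).  Defined as minus the
    coefficient matrix of H_f, so that its eigenvalues lambda_j enter
    Z_f = prod_j (1 + exp(beta * lambda_j))."""
    N = L * L
    M = np.zeros((N, N))
    for x in range(L):
        for y in range(L):
            i = x * L + y
            M[i, i] = mu + sigma * JH * spins[i]   # minus (-mu - sigma*JH*S_i)
            for j in (((x + 1) % L) * L + y, x * L + (y + 1) % L):
                M[i, j] = M[j, i] = t              # minus the -t hopping
    return M

def log_weight(spins, L, t, mu, JH, beta):
    """ln Z_f(S) of Eq. (2); every factor 1 + e^{beta*lambda} is
    positive, so the summand carries no sign problem for any mu."""
    lw = 0.0
    for sigma in (+1, -1):
        lam = np.linalg.eigvalsh(m_sigma(L, t, mu, JH, spins, sigma))
        lw += float(np.sum(np.log1p(np.exp(beta * lam))))
    return lw
```

A single-spin-flip Metropolis update would then accept a proposed configuration $S'$ with probability $\min\{1, w(S')/w(S)\}$, where $w(S)$ is the summand of Eq.~\ref{eq:Ztotal}.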
The largest computational effort arises from diagonalizing the two $N \times N$ matrices ${\cal M}^\sigma$ for each update to the configuration of the Ising spins $S_i$. An alternative method to direct matrix diagonalization employs a representation of the density of states $\rho(\lambda)$ in terms of Chebyshev polynomials \cite{motome99,alvarez05,cen06}. The moments of $\rho(\lambda)$ are computed recursively in a way that involves only sparse matrix-vector multiplications. This approach improves the scaling with system size to linear in $N$, at the cost of a significant prefactor. It also involves a well-controlled approximation, namely the truncation of the expansion at some maximum order. Here we use exact diagonalization, as opposed to the Chebyshev method. The results of the simulations presented below were obtained by averaging over 5--10 independent simulations, each of which was composed of 35,000 Monte Carlo sweeps of the Ising variables. Typically, the linear lattice size $L$ was varied over the range $4 \le L \le 12$, with one Ising plane stacked on top of two fermionic ones, so that $N=2\,L^2$. Expectation values of the Ising variables are averaged in the usual way over the configurations generated in the course of the simulation. For example, to address directly whether there is long range ferromagnetic order in the Ising plane in the case $J<0$, we calculate the fourth order Binder cumulant~\cite{Binder81}, $B_4(T)=1-\langle M^4\rangle/\left(3\langle M^2\rangle ^2\right)$. Here $M=1/N\sum_i S_i$ is the magnetization per site. When $J>0$ (the antiferromagnetic case) we replace $M$ by the staggered magnetization, $1/N\sum_i (-1)^i S_i$. Crossings of $B_4(T)$ obtained for different lattice sizes determine the critical temperature for magnetic ordering of the classical spins.
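The cumulant itself is straightforward to evaluate from the sampled magnetizations; in a perfectly ordered phase ($M=\pm 1$ in every configuration) it takes the value $2/3$, while Gaussian fluctuations of $M$ drive it to zero (a minimal sketch):

```python
import numpy as np

def binder_cumulant(m_samples):
    """Fourth-order Binder cumulant B4 = 1 - <M^4> / (3 <M^2>^2),
    computed from a series of magnetization measurements (one value
    per Monte Carlo configuration)."""
    m = np.asarray(m_samples, dtype=float)
    return 1.0 - np.mean(m ** 4) / (3.0 * np.mean(m ** 2) ** 2)
```

Curves of $B_4(T)$ for different $L$ cross at the critical temperature, which is how $T_c$ is extracted.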
When the interaction $J_H$ between the Ising and fermionic spins is nonzero, we expect a shift away from the 2D square lattice Ising $T_c=2.269 \, |J|$. Fermionic measurements like the kinetic energy, double occupancy, and spin-spin correlations can be written in terms of combinations of the single particle Green's functions, $G_{ij}^\sigma=\langle \, c^{\phantom{\dagger}}_{i\sigma} c_{j\sigma}^{\dagger} \, \rangle = \big[({\cal M}^\sigma)^{-1}\big]_{ij}$, for every configuration of the classical spins. The elements of $G^{\sigma}$ are easily obtained from the diagonalization of ${\cal M}^\sigma$, which is already in hand from the update of the spin variables. Further details of the numerical algorithm for coupled classical spin-fermion systems are contained in Refs.~[\onlinecite{motome99,cen06,dagotto01}]. It is known from related simulations of Hubbard hamiltonians that fermions with no direct interaction $U$ have large finite size effects: the discrete (and often highly degenerate) $U=0$ energy levels $E(k_x,k_y)$ are readily visible in measurements, especially in dynamic quantities like the density of states. Although the $U=0$ metal considered here is coupled to classical spins, and hence does have interactions, we still observe significant finite size effects, especially in the metallic portions of the phase diagram. We overcome this difficulty through the introduction of a small magnetic field $B=\Phi_0/L^2$ along the direction perpendicular to the planes. Here $\Phi_0$ is the magnetic flux quantum. With this choice, the intralayer hopping terms are changed by a Peierls-like phase factor, $t_{\bf ij}\rightarrow t \, \exp \big(\, \frac{2\pi i}{\Phi_0} \int_\mathbf{i}^\mathbf{j}\mathbf{A}\cdot d\mathbf{l}\,\big)$. We use the Landau gauge in order to set the values of the vector potential $\mathbf{A}$. This procedure can be considered as an improvement/generalization of ``boundary condition averaging'' \cite{gammel92,gros96,chiesa08}.
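The flux insertion amounts to attaching phases to the hopping matrix elements. A sketch for a single $L\times L$ layer in the Landau gauge follows (our index conventions; the boundary bonds carry the compensating phases required by the periodic boundary conditions):

```python
import numpy as np

def hopping_with_flux(L, t=1.0, nphi=1):
    """Hopping matrix of one periodic L x L layer threaded by nphi flux
    quanta, i.e. flux per plaquette phi = nphi / L**2, in Landau gauge.
    nphi = 0 recovers the plain tight-binding matrix."""
    phi = nphi / L ** 2
    H = np.zeros((L * L, L * L), dtype=complex)
    idx = lambda x, y: (x % L) * L + (y % L)
    for x in range(L):
        for y in range(L):
            i = x * L + y
            # x-bond: phase only on the boundary column (gauge choice)
            px = np.exp(-2j * np.pi * phi * L * y) if x == L - 1 else 1.0
            H[i, idx(x + 1, y)] += -t * px
            H[idx(x + 1, y), i] += -t * np.conj(px)
            # y-bond: Peierls phase exp(i 2 pi phi x)
            py = np.exp(2j * np.pi * phi * x)
            H[i, idx(x, y + 1)] += -t * py
            H[idx(x, y + 1), i] += -t * np.conj(py)
    return H
```

Every elementary plaquette then encloses the same flux $2\pi\phi$, which lifts the degeneracies of the discrete $E(k_x,k_y)$ levels on a finite lattice.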
For a more complete description, see Ref.~[\onlinecite{Assaad}]. Nevertheless, it is important to distinguish this field, which couples to the `orbital' motion of the electrons (i.e.\ their hopping), from a Zeeman field that couples to the spin and affects magnetic order. The orbital field introduced here reduces finite size effects by providing an additional averaging over the discrete allowed momenta on a finite lattice. The coupling to the classical spins, on the other hand, produces a Zeeman field for the electrons, whose role in ordering we will determine. This reduction in the finite size effects is especially evident in the single particle density of states, \begin{equation} N(\omega)=\frac{1}{N}\, \mathrm{Im}\, \sum_{r} \sum_{j} \frac{|U_{j,r}|^2} {\lambda_j-\omega-i\delta}. \label{eq:dos} \end{equation} Here $U_{j,r}$ are the components of the eigenvectors corresponding to the eigenvalue $\lambda_j$ of the matrices ${\cal M}^\sigma$ defining the (quadratic) hamiltonian, which now contain the phase factors described above. The outer sum averages over all equivalent sites in order to recover translational invariance. Instead of displaying well-separated discrete delta-function peaks, $N(\omega)$ becomes nearly continuous on relatively small lattices, even for free fermions, and has a form much closer to that of the thermodynamic limit.\cite{Assaad} In our hamiltonian, turning on $J_H$ further reduces residual finite size effects. \section{Results: Half-filling} \label{sec:results} Because of Fermi surface nesting with vector ${\bf k}=(\pi,\pi)$, the dominant magnetic instability of the half-filled square lattice Hubbard hamiltonian is antiferromagnetic. Indeed, the noninteracting susceptibility $\chi_0(\pi,\pi)$ diverges as temperature $T \rightarrow 0$ so that, within the Random Phase Approximation (RPA), the ground state exhibits AF order for any finite $U$.
Similarly, in the strong coupling (Heisenberg) limit, the exchange interaction $J$ favors near neighbor spins which are anti-aligned. Since the $U=0$ fermion sheets exhibit this strong AF preference, we expect a rather different response to the coupling of an AF versus a ferromagnetic (F) Ising plane to the metal. We begin with the AF case. \begin{figure}[t] \epsfig{figure=B_4_vsT.pdf,width=9.0cm,angle=-0,clip} \caption{(Color online) Crossing plot of the Binder ratio of an AF Ising sheet coupled with $J_H=3t$ (top) and $J_H=10t$ (bottom) to two metallic layers. The interlayer hopping between the fermion layers was set to $t_\perp = t$. For $J_H=3t$, the crossing occurs at $T/t \sim 0.62$, well above the critical temperature of a free Ising sheet ($J_H=0$), $T/t = (J/t) \cdot (T_c/J) = 0.2 \cdot 2.269 = 0.454$. The behavior of $T_c$ with $J_H$ is nonmonotonic: the critical temperature for $J_H=10t$ drops to $T/t \sim 0.54$ (see Fig.~\ref{fig:T_cvsJ_H}). \label{fig:Binder1} } \vspace{-0.5cm} \end{figure} \begin{figure}[t] \epsfig{figure=Tcover2269JvsJHrestricted.pdf,width=9.0cm,angle=-0,clip} \caption{(Color online) Critical temperature $T_c$ (normalized to the 2D Ising value) for the magnetic transition in the Ising spin plane as a function of the interaction $J_H$ with the metal. Several different choices of the hopping parameter $t_\perp$ connecting the two metallic planes are shown. $T_c$ is obtained by the crossing of the Binder ratio $B_4(T)$ for different lattice sizes. (See Fig.~\ref{fig:Binder1}.) Coupling of the spin layer to the metal enhances $T_c$. The degree of enhancement is strongest when the fermionic layers are most weakly coupled to each other (small $t_\perp$). Lines are guides to the eye only.
\label{fig:T_cvsJ_H} } \vspace{-0.5cm} \end{figure} \subsection{The antiferromagnetic case} Figure~\ref{fig:Binder1} shows the Binder ratio $B_4(T)$ for two metallic planes of linear size $L=4$, 6, 8, 10, and 12, coupled in each case to a single Ising plane of the same dimensions. The interlayer hopping between the fermionic layers is set to $t_\perp=t$ and the coupling $J_H$ between the local spins and the fermions is $J_H=3 t$. The Binder ratios for the different lattice sizes cross at a common point, $T_c/t \sim 0.62$, representing a 36\% enhancement over the free spin plane result $T_c \sim 2.27 J = 0.454\,t$ for $J=0.2\,t$. Similar Binder crossing plots for other choices of $J_H$ and interlayer hopping yield analogous transition temperatures, which are shown in Fig.~\ref{fig:T_cvsJ_H}. The enhancement in $T_c$ over that of an independent spin plane is nontrivial, because there is a competition between the additional entropy which results from fluctuations of fermionic variables in the metallic plane and the tendency, noted above, towards antiferromagnetism of the $U=0$ Hubbard model, due to Fermi surface nesting. Evidently, the latter tendency wins: $T_{\rm N\acute{e}el}$ is enhanced. The universality class of the transition to an ordered phase as the temperature is lowered remains an open question. Our results are consistent with an Ising transition, and we believe that is the most likely scenario, but the available system sizes do not allow us to draw a definitive conclusion. \begin{figure}[t] \epsfig{figure=c_isingvsT.pdf,width=9.0cm,angle=-0,clip} \caption{(Color online) Temperature dependence of the Ising spin-spin correlation between the most distant neighbors in the lattice, for several values of the interaction $J_H$, at $L=8$ with one Ising layer stacked over two fermion ones.
These correlations display a similar trend to the critical temperature: they initially become more robust with the coupling between the different planes; for large $J_H$, they return to values similar to the decoupled Ising model ($J_H=0$). The inset shows the same quantity as a function of $J_H$, where the non-monotonic effect on the Ising spin correlations is unequivocal. \label{fig:c_isingvsT} } \vspace{-0.5cm} \end{figure} There are several additional interesting features in the data. First, the enhancement in $T_c$ is non-monotonic in $J_H$. The transition temperature reaches a maximum at $J_H/t \sim 3$ for both $t_\perp=0 $ and $t_\perp = t$. Although data are not shown, even for $t_\perp =5t $ the enhancement of $T_c$ comes back down at large $J_H$. We note that the band structure of the two sheet Hubbard model is $\epsilon(k_x,k_y) = -2 t \big(\,\cos k_x + \cos k_y \, \big) \pm t_\perp$. The two bands overlap for $t_\perp < 4t$ and have a band gap $t_\perp - 4\,t$ otherwise. Thus the choice $t_\perp = 5\,t$ represents the coupling of an Ising spin layer to a band insulator rather than a metal. Figure~\ref{fig:T_cvsJ_H} indicates that the magnetic response of the Ising layer is qualitatively the same in the two situations (metal with $t_{\perp}<4t$ or band insulator with $t_{\perp}>4t$), although coupling to a band insulator produces less of an effect, as might be expected. This is likely due to the fact that the bilayer Fermi surface is still nested with ${\bf k}=(\pi,\pi)$ for $t_\perp > 4\,t$, even though the density of states at $E_F$ vanishes. \begin{figure}[t] \epsfig{figure=kxyvsT.pdf,width=9.0cm,angle=-0,clip} \caption{(Color online) Intra-plane kinetic energy $\langle K\rangle$ as a function of temperature for different values of the interaction $J_H$. Here $t_\perp=t$. In (a), the fermionic plane $\ell=0$ directly in contact with the Ising spin layer; in (b), the more distant fermionic plane $\ell=1$.
The trend with increasing $J_H$ is opposite in (a) and (b). For $\ell=0$, the connection to the Ising spins reduces the kinetic energy at all temperatures. For $\ell=1$, the kinetic energy increases. The vertical arrows in panel (a) indicate the critical temperature below which magnetic ordering of the Ising spins takes place (Fig.~\ref{fig:T_cvsJ_H}). \label{fig:kxyvsT} } \vspace{-0.5cm} \end{figure} The non-monotonic behavior of $T_c$ with $J_H$ is reflected also in the evolution of the farthest-neighbor intraplane spin correlation function $c(i,j)=\langle S_i S_j\rangle$. Fig.~\ref{fig:c_isingvsT} shows $c(i,j)$ vs.~$T/t$ for several values of $J_H$ at $t_{\perp}=t$ on an $8\times 8$ lattice. This quantity, which in the thermodynamic limit would equal the square of the order parameter, evolves rather sharply from zero to one as $T/t$ is lowered. The position where the switch in values occurs moves to larger $T/t$ as $J_H$ changes from $J_H=0$ to $J_H/t \sim 3$--$4$, but then comes back down, in agreement with the maximal $T_c$ in Fig.~\ref{fig:T_cvsJ_H}. The inset of Fig.~\ref{fig:c_isingvsT} displays the same quantity as a function of $J_H$ and shows, unequivocally, this non-monotonic effect. \begin{figure}[t] \epsfig{figure=nudvsT.pdf,width=9.0cm,angle=-0,clip} \caption{(Color online) Temperature dependence of the double occupancy in the case $t_\perp=t$, in (a) for $\ell=0$ and in (b) for $\ell=1$. In the former, increasing $J_H$ decreases the double occupancy since the electrons are strongly coupled to the Ising spins and, as a consequence, become more localized. For $\ell=1$ the effect is very small (note the vertical scale): a slight decrease in double occupancy and then a recovery towards the $J_H=0$ value beginning at $J_H/t \gtrsim 2$.
\label{fig:nudvsT} } \vspace{-0.5cm} \end{figure} Having described the effect of the interaction $J_H$ between the Ising spin plane and the metal on the ordering transition of the classical spins, we turn now to the issue of the effect of $J_H$ on the metal. We calculate several quantities that characterize both the magnetism and the transport in the fermionic planes. We begin by showing, in Fig.~\ref{fig:kxyvsT}, the intra-plane kinetic energy\cite{footnote3} as a function of temperature for the different values of $J_H$. In (a), which gives the kinetic energy of electrons in the fermionic plane right at the interface, the increase of the interaction with the Ising spins localizes the electrons, eventually driving their kinetic energy to small values. This trend is monotonic in $J_H$. In (b), the farthest plane from the interface with the Ising spins, we find a much weaker effect, as is expected in the absence of direct contact with the classical spin layer. There is a steady increase of the absolute value of the kinetic energy, the opposite of the effect seen in layer $\ell=0$. The sharp crossover temperature in the fermion kinetic energy aligns with $T_c$ for the classical spins, as given in Fig.~\ref{fig:T_cvsJ_H}. The double occupancy $\langle n_\uparrow n_\downarrow \rangle$ provides complementary information to the kinetic energy, and in particular offers insight into the formation of local moments $\langle m^2 \rangle$ and the possibility of Mott metal-insulator behavior. Specifically, $\langle m^2 \rangle = 1 - 2 \langle n_\uparrow n_\downarrow \rangle$ at half-filling, so that vanishing double occupancy implies a well-formed moment on every site, and a non-zero double occupancy implies moments which are partially suppressed by charge fluctuations. $\langle n_\uparrow n_\downarrow \rangle(T)$ is shown in Fig.~\ref{fig:nudvsT}. Data for plane 0 and plane 1 are shown in the top and bottom panels, respectively.
In both cases $\langle n_\uparrow n_\downarrow \rangle$ takes on its uncorrelated value $\langle n_\uparrow n_\downarrow \rangle =\langle n_\uparrow \rangle \langle n_\downarrow \rangle = 1/4$ for $J_H=0$, as should be the case for a metal with no interactions. In plane 0, there is a monotonic suppression of double occupancy with $J_H$, and hence a steady development of local moments. By the time $J_H/t = 4$, double occupancy has decreased to $\langle n_\uparrow n_\downarrow \rangle \sim 0.05$, implying $\langle m^2 \rangle(T) \sim 0.90$. The reason for this behavior is clear: the classical Ising spin $S_i$ acts as a local magnetic field for the fermions on site $i$ in plane 0, enhancing(suppressing) the occupation of the electron spin parallel(antiparallel) to it. As we shall see, this induced moment formation aids in magnetic ordering. \begin{figure}[t] \epsfig{figure=safvs1_L.pdf,width=9.0cm,angle=-0,clip} \caption{(Color online) Finite size extrapolation for the $z$ component of the antiferromagnetic structure factor $S_{\rm af}^z$ for planes 0 and 1, in (a) and (b), respectively. Here $t_\perp=t$ and $T/t = 0.10$. A parabolic fit is used to obtain the value in the thermodynamic limit, $m_{{\rm af},z}^2$. \label{fig:safvs1_L} } \vspace{-0.5cm} \end{figure} In plane 1, more isolated from the classical spins, the double occupancy is barely modified from its $J_H=0$ value. Nevertheless, despite exhibiting only a small effect, the onset of deviations provides a nice signal of the N\'eel transition temperature. Indeed, the non-monotonicity of $T_{\rm N\acute{e}el}$ observed in Fig.~\ref{fig:T_cvsJ_H} is reflected in a similar non-monotonicity in the double occupancy in fermionic plane 1. Presumably, the large response of the double occupancy to the effective field in plane 0, which is evident far above $T_{\rm N\acute{e}el}$, masks the more subtle signature of the onset of long range order.
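The relation between the local moment and the double occupancy is simple enough to verify numerically. A minimal sketch (the function name is ours; the input values are the ones quoted above):

```python
def local_moment_sq(double_occ):
    """<m^2> = 1 - 2 <n_up n_dn>, valid at half-filling where <n> = 1."""
    return 1.0 - 2.0 * double_occ

# Uncorrelated metal: <n_up n_dn> = <n_up><n_dn> = 1/4 gives <m^2> = 1/2.
print(local_moment_sq(0.25))  # 0.5
# Strong-coupling value quoted for plane 0 at J_H/t = 4:
print(local_moment_sq(0.05))  # 0.9
```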
Long range order of the spin in the metallic planes can be analyzed by a finite size scaling of the antiferromagnetic structure factor,\cite{footnote4} \begin{equation} S_{\rm af}^z=\frac{1}{L^2} \, \sum_{i,j} (-1)^{i+j} \langle s_i^z s_j^z\rangle = m_{{\rm af},z}^2 + \frac{A}{L} + \frac{B}{L^2} \label{eq:fss} \end{equation} Here $m_{{\rm af},z}$ represents the magnetic order parameter in the metallic layer, and the sum over $i,j$ is restricted to that same layer. The coupling of the Ising spins with the $z$ component of the fermionic spins breaks the $SU(2)$ symmetry of the Hubbard hamiltonian, leading to the possibility of long range order at finite temperature. We do not, however, attempt to discern this possibility, and restrict ourselves to examining the ground state magnetism by setting $T=t/10$, where the structure factor has saturated to its ground state value. Figure~\ref{fig:safvs1_L} shows the extrapolation according to Eq.~\ref{eq:fss}; we chose $t_\perp=t$, and separate the contributions of planes 0 and 1 in (a) and (b), respectively. The values of $m_{{\rm af},z}^2$ in the two layers, obtained from the thermodynamic limit $1/L\rightarrow0$ extrapolation, are displayed in Fig.~\ref{fig:m_AFvsJ_H} for the same three cases of the interplane hopping appearing in Fig.~\ref{fig:T_cvsJ_H}. Again, the plot is separated into (a) and (b), corresponding to planes 0 and 1, respectively. \begin{figure}[t] \vspace{-0.5cm} \epsfig{figure=m_AFvsJ_H_restricted.pdf,width=9.0cm,angle=-0,clip} \caption{(Color online) Antiferromagnetic order parameter as a function of $J_H$ for temperature $T/t =0.10$. In (a) for plane 0 and in (b) the same for plane 1. Plane 0 exhibits a rapid and monotonic saturation with $J_H$. $m^{(1)}_{\rm af\, z}$ in plane 1 first increases with $J_H$ and then falls.
\label{fig:m_AFvsJ_H} } \vspace{-0.5cm} \end{figure} \begin{figure}[t] \epsfig{figure=dosAFv2.pdf,width=9.0cm,angle=-0,clip} \vspace{-0.5cm} \caption{(Color online) \underbar{Left columns:} Density of states $N(\omega)$ of fermions in plane 0 for different temperatures $T/t=1.00, 0.67, 0.50, 0.40$ (a-d). The linear lattice size $L=12$, Ising exchange coupling $J=0.2t$, classical spin-fermion spin coupling $J_H=3t$, and the interplane hopping $t_\perp=t$. There are two interesting features in $N(\omega)$: a pile-up of density at $\omega \sim \pm J_H$, which is present even at high $T$, and a gap which opens in the vicinity of $\omega=0$ when $T$ is decreased. Insets display the finite size dependence around $\omega=0$. See text for further discussion. \underbar{Right columns:} Density of states $N(\omega)$ of fermions in plane 1 for different $J_H/t=0.0, 1.0, 3.0, 10.0$ (e-h). The temperature $T/t = 0.10, t_\perp=t,$ and $L=12$. A gap is present for finite $J_H$, but gets filled for larger $J_H$. (See text for discussion.) \label{fig:dosAFv2} } \vspace{-0.5cm} \end{figure} While plane 0, which directly interacts with the Ising spins, becomes easily ``saturated'' ($m_{{\rm af},z}\rightarrow1$) with the increase of $J_H$, the fermions on plane 1, farther from the classical spins, are less easily aligned. For smaller values of $J_H$, the long-range order present in the plane at the interface is propagated farther inwards. However, for $J_H/t \gtrsim 3.0$, the magnetism in plane 1 becomes less robust. Indeed, the reduction of magnetic order in plane 1 coincides with saturation of magnetic order in plane 0. We turn now to the issue of how the coupling to the classical Ising spins affects the density of states $N(\omega)$ of the metal. There are two separate issues to consider. First, even at high temperatures, the fluctuating classical Ising spins act as a random site energy $\pm J_H$ for the fermions.
In the limit $t=t_\perp=0$, we expect $N(\omega)=\frac{1}{2} \big( \, \delta(\omega+J_H) + \delta(\omega-J_H) \, \big)$. Nonzero hopping will broaden this distribution. Second, as $T$ is lowered, the Ising spins no longer fluctuate randomly but instead, for $J>0$, form an ordered antiferromagnetic pattern. This staggered site energy opens a gap in the fermionic spectrum. Figure~\ref{fig:dosAFv2} shows $N(\omega)$ for $L=12$ and $t_\perp=t$. The left panels give $N(\omega)$ in plane 0 for fixed $J_H/t=3$ (the density of states for individual planes is obtained by the appropriate restriction of the spatial sum in Eq.~\ref{eq:dos}) and decreasing temperature. Both features discussed above are present: peaks in the density of states at $\pm J_H$ at all temperatures, and, near $\omega=0$, an insulating gap which opens only below the Ising $T_c \sim 0.615$ (Fig.~\ref{fig:Binder1}) for $J_H/t=3$. The insulating gap is substantially less than one might expect from a strictly rigid staggered site energy. Presumably, this reflects some residual fluctuations of the Ising spins. In the right panels of Fig.~\ref{fig:dosAFv2} the density of states in the plane farther from the interface is shown. In the topmost panel Fig.~\ref{fig:dosAFv2}(e), where $J_H=0$, we recover the analytic result for the DOS of a bilayer with $t_\perp=t$ (displayed as a thick black curve), with some additional structure associated with the discrete peaks of the finite lattice. For $J_H$ nonzero, the antiferromagnetic gap induced in layer 0 propagates to layer 1, rendering it insulating as well. The size of the gap in $N(\omega)$ for layer $\ell=1$ goes down for large $J_H$, consistent with the decrease in the AF order parameter (Fig.~\ref{fig:m_AFvsJ_H}(b)). One picture of the induced antiferromagnetism, and associated gap, in the layer not adjacent to the Ising spins, is the following: when the Ising spins order they induce antiferromagnetism in plane 0 via $J_H$.
It is preferable to have a fermion in plane 1 of opposite spin from the one above it in plane 0, because it can then hop in the perpendicular direction, a lowering of the kinetic energy which the Pauli principle forbids if the plane 1 fermion has spin parallel to the plane 0 fermion. \subsection{The ferromagnetic case} The antiferromagnetic tendency of tight binding electrons on a square lattice at half-filling can be understood from a weak coupling perspective: the Fermi surface is nested at the antiferromagnetic ordering vector $(\pi,\pi)$ and, as a consequence, the non-interacting susceptibility \begin{equation} \chi_0(\mathbf{q})=\frac{1}{L^2}\sum_{\mathbf{k}} \frac {f(\epsilon_\mathbf{k})-f(\epsilon_{\mathbf{k} +\mathbf{q}})} {\epsilon_{\mathbf{k}+\mathbf{q}} -\epsilon_\mathbf{k}}, \label{eq:rpa} \end{equation} diverges there as $T \rightarrow 0$. This reasoning suggests $T_c$ might be suppressed for ferromagnetically coupled Ising spins, whose ordering wave-vector conflicts with the one the half-filled metallic fermion spins prefer. Figure \ref{fig:Tc_FvsJ_H} shows the transition temperature $T_c$ of ferromagnetically coupled Ising spins in contact with a half-filled metallic layer. It confirms that $T_c$ is suppressed, consistent with the qualitative argument given above, and in contrast to the enhancement seen in the antiferromagnetic case of Fig.~\ref{fig:T_cvsJ_H}. The maximal suppression of $T_c$ occurs at $J_H/t \approx 4$, and reveals a lowering of $T_c$ by almost a factor of two. $T_c$ ultimately recovers, but only for very large values $J_H/t \gtrsim 5$. \begin{figure}[t] \epsfig{figure=Tc_F_over_2269JvsJH_restricted.pdf,width=9.0cm,angle=-0,clip} \caption{(Color online) Curie temperature for the ferromagnetic Ising model with $J=-0.2t$ coupled with two fermionic planes ($t_\perp=t$), as a function of $J_H$. Unlike the antiferromagnetic case, the coupling with the metal decreases the ferromagnetic critical temperature.
Lines are guides to the eye. \label{fig:Tc_FvsJ_H} } \end{figure} \begin{figure}[t] \epsfig{figure=m_FvsJ_H_restricted.pdf,width=9.0cm,angle=-0,clip} \caption{(Color online) Ferromagnetic order parameter $m_F$ of the itinerant spins as a function of the interaction $J_H$ with a ferromagnetic Ising plane. We choose temperature $T = t/10$. Here $t_\perp=t$ and $J=-0.2t$. \label{fig:m_FvsJ_H} } \end{figure} \begin{figure}[t] \epsfig{figure=kxynudvsT_Ferro.pdf,width=9.0cm,angle=-0,clip} \caption{(Color online) In (a), temperature dependence of the intra-plane kinetic energy for $\ell=0$ in the case $t_\perp=t$ and ferromagnetic interaction between the Ising spins with exchange constant $J=-0.2t$. $J_H/t\gtrsim3.0$ marks a distinct behavior where the kinetic energy decreases rapidly with decreasing temperature, in contrast to the cases where $J_H$ is small. Analogously to Fig.~\ref{fig:nudvsT}, (b) and (c) show the temperature dependence of the double occupancy in planes $\ell=0$ and $\ell=1$, respectively. In the former, similar to the antiferromagnetic case, the increase in moment localization due to the interaction with the neighboring Ising spins can be readily seen. In the latter, despite the upturn of the double occupancy at low temperatures and large interactions, the subsequent downturn at even lower temperatures indicates that the ferromagnetism of the Ising layer starts to propagate through the more distant fermionic region. Again, in panels (a) and (b), the vertical arrows depict the Ising critical temperature $T_c$. \label{fig:kxynudvsT_Ferro} } \vspace{-0.5cm} \end{figure} When ferromagnetic order is present in layer 0, we can ask whether it will induce similar order in the more distant layer 1, something which occurred with antiferromagnetic coupling $J$. We calculated the ferromagnetic structure factor ($S^z_{\rm f}=(1/L^2)\sum_{i,j}\langle s^z_is^z_j\rangle$) for the same values of the interaction $J_H$ considered in the previous case.
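The $1/L \rightarrow 0$ extrapolation of Eq.~\ref{eq:fss} amounts to a quadratic fit in $1/L$ whose intercept is the squared order parameter. A minimal sketch with synthetic data (the numbers are illustrative, not simulation output):

```python
import numpy as np

# Fit S(L) = m2 + A/L + B/L**2 and read off the intercept.
L = np.array([6, 8, 10, 12, 14])
x = 1.0 / L
S = 0.4 + 1.2 * x + 0.8 * x**2      # stand-in for measured structure factors
B, A, m2 = np.polyfit(x, S, 2)      # polyfit returns highest power first
print(round(m2, 6))                 # 0.4, the assumed order parameter squared
```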
A similar finite size analysis, Eq.~\ref{eq:fss}, was performed; the order parameter $m_F$ for \emph{ferromagnetism} in each of the fermionic planes was obtained as a function of $J_H$ at low temperature, and is shown in Fig.~\ref{fig:m_FvsJ_H}. Ferromagnetic order is induced in both planes, although $m_F$ is an order of magnitude smaller for plane 1 than for plane 0, in contrast to the antiferromagnetic case where there was only a factor of two difference. As they order, the fermions in direct contact with the Ising spins ($\ell=0$) start to localize, as seen in their reduced double occupancy (Fig.~\ref{fig:kxynudvsT_Ferro}(b)). For any $J_H$ and $T$, the double occupancy for $\ell=1$ is basically unchanged from its uncorrelated value $1/4$ (Fig.~\ref{fig:kxynudvsT_Ferro}(c)). This is similar to what happened in the AF case of Fig.~\ref{fig:nudvsT}(b). The behavior of the $\ell=0$ kinetic energy (Fig.~\ref{fig:kxynudvsT_Ferro}(a)), on the other hand, is quite different from the AF case (Fig.~\ref{fig:kxyvsT}). Although in both cases there is a systematic suppression with $J_H$, in the ferromagnetic case the magnitude of the kinetic energy decreases as $T$ is lowered for $J_H/t \gtrsim 3$. This is likely a consequence of the Pauli principle: in the F case, ordering of the Ising spins promotes polarization of the fermions in layer $\ell=0$, and as this polarization becomes more and more complete the fermions can no longer hop on the lattice. Finally, we analyze the influence of the magnetically ordered plane of Ising spins on the metallic density of states, Fig.~\ref{fig:dosFv2}. Similar to the AF case (Fig.~\ref{fig:dosAFv2}), there are peaks at $\omega \sim \pm J_H$ for layer $\ell=0$. The increase of $J_H$ induces a pseudogap; however, the insets to (c) and (d) indicate that $N(\omega=0)$ remains finite, in contrast to the AF case.
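The absence of a true gap in the ferromagnetic case can be traced to band overlap: with the Ising spins frozen ferromagnetically (the idealization behind the dashed reference curves of Fig.~\ref{fig:dosFv2}), the two fermion spin species see uniform shifts $\mp J_H$, and the shifted bands only separate once $J_H > 4t$. A numerical sketch of this counting (grid and bin sizes are arbitrary choices of ours):

```python
import numpy as np

def dos_frozen_fm(J_H, t=1.0, n_k=512, bins=200):
    """DOS of one square-lattice plane with a frozen ferromagnetic Ising
    layer acting as a uniform Zeeman shift: E(k) = eps(k) -/+ J_H."""
    k = 2.0 * np.pi * np.arange(n_k) / n_k
    kx, ky = np.meshgrid(k, k)
    eps = -2.0 * t * (np.cos(kx) + np.cos(ky))
    E = np.concatenate([(eps - J_H).ravel(), (eps + J_H).ravel()])
    hist, edges = np.histogram(E, bins=bins, density=True)
    return hist, 0.5 * (edges[:-1] + edges[1:])

def n_at_zero(J_H):
    hist, w = dos_frozen_fm(J_H)
    return hist[np.argmin(np.abs(w))]   # weight in the bin nearest omega = 0

# Bands span [-4t -/+ J_H, 4t -/+ J_H]: they still overlap at omega = 0 for
# J_H < 4t (pseudogap at most), and a full gap opens only for J_H > 4t.
print(n_at_zero(3.0) > 0)
print(n_at_zero(5.0) == 0)
```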
The dashed line gives the density of states for a single fermionic plane coupled to a perfectly ordered ferromagnetic arrangement of the Ising spins, which is derived from the dispersion $E(\mathbf{k})=-2t(\cos(k_x)+\cos(k_y))\pm J_H$. The DOS for plane $\ell=1$ is approximately given by that of a fermionic bilayer with $t_\perp=t$. The effect of $J_H$ is to slowly decrease the distance between the van Hove singularities at $\omega = \pm t_\perp$. This trend would ultimately result in a single van Hove singularity at $\omega=0$, similar to that of an isolated free fermion plane. Increasing $J_H$ thus helps ``disconnect'' the planes $\ell > 0$ which are not right at the interface. Similar decoupling can be seen in layered Hubbard models\cite{euverte12}. \begin{figure}[t] \epsfig{figure=dosFv2.pdf,width=9.0cm,angle=-0,clip} \vspace{-0.5cm} \caption{(Color online) Density of states $N(\omega)$ of fermions in plane $\ell=0$ for different values of the interaction $J_H/t=1.0, 3.0, 4.0, 5.0$ (a-d) at temperature $T/t=0.25$. Panels (e-h) show the corresponding results for plane $\ell=1$. The linear lattice size is $L=12$, Ising exchange coupling $J=-0.2t$ and the interplane hopping $t_\perp=t$. Insets in (c) and (d) include also $L=6,8$ and $10$ near the region $\omega=0$. Also displayed, as a dashed line, is the corresponding density of states resulting from the dispersion of one plane under the influence of a fixed global chemical potential, as if the configuration of the Ising spins were ``frozen'' in the ferromagnetic state. Worth noting is that there is a pile-up of density at $\omega \sim \pm J_H$, and a pseudogap which opens only for values of $J_H/t\gtrsim4$. Insets in (c) and (d) show a finite size comparison of this gap. \label{fig:dosFv2} } \vspace{-0.5cm} \end{figure} \section{Results- Doped Lattice} \label{sec:resultsdoped} In the previous section we analyzed how attaching a metal to the Ising plane influences its critical temperature.
Its enhancement(suppression) could be explained by the preferred wave-vector of the ordering in this metallic region. Since there is a natural tendency toward short-ranged antiferromagnetic order for free fermions on a tight-binding bipartite lattice at half-filling, these fermions and the Ising spins act cooperatively to boost the critical temperature of the antiferromagnetic Ising model. The same argument shows that when the Ising spins have a ferromagnetic coupling, the critical temperature is reduced, once again due to the antiferromagnetic tendency introduced by the contact with the fermionic spins. We now examine the doped lattice, where the dominant AF response in the noninteracting $\chi_0({\bf q})$ of Eq.~\ref{eq:rpa} is no longer present. Performing an analysis similar to that of Fig.~\ref{fig:Binder1}, we computed the crossings in the Binder ratios for several values of the interaction $J_H$ when the metal has a fixed total density $\rho=0.87$. The global chemical potential $\mu$ in Eq.~\ref{eq:hamiltonian} is tuned in order to select this density for each of the lattice sizes and temperature values calculated. Figure~\ref{fig:T_c_dopedvsJ_H} shows the dependence of the critical temperature of the Ising spins for $t_\perp=t$ and $t_\perp = 0$. When $t_\perp=t$, so that two metallic layers are coupled to the Ising magnetic layer, the qualitative behavior is similar to that at half-filling (Figs.~\ref{fig:T_cvsJ_H} and \ref{fig:Tc_FvsJ_H}). Indeed, the values of the transition temperatures are quantitatively similar. This is true in both the ferromagnetic and antiferromagnetic cases. However, when $t_\perp=0$, so that only one metallic plane is coupled, doping appears to change the behavior of $T_c$ quite substantially.
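The weakening of the $(\pi,\pi)$ response away from half-filling can be made quantitative by evaluating Eq.~\ref{eq:rpa} on a finite grid. A sketch (the discretization and the degenerate-denominator handling are our own choices; at these grid sizes $T$ should not be taken too small):

```python
import numpy as np

def chi0_q(qx, qy, mu=0.0, t=1.0, T=0.25, n_k=64):
    """Non-interacting susceptibility chi_0(q) for a single square-lattice
    tight-binding band, following the Lindhard form of Eq. (rpa)."""
    k = 2.0 * np.pi * np.arange(n_k) / n_k
    kx, ky = np.meshgrid(k, k)
    e1 = -2.0 * t * (np.cos(kx) + np.cos(ky)) - mu
    e2 = -2.0 * t * (np.cos(kx + qx) + np.cos(ky + qy)) - mu
    f = lambda e: 1.0 / (np.exp(e / T) + 1.0)
    de = e2 - e1
    with np.errstate(divide='ignore', invalid='ignore'):
        r = (f(e1) - f(e2)) / de
    deg = np.abs(de) < 1e-9                       # e(k+q) ~ e(k):
    r[deg] = f(e1[deg]) * (1.0 - f(e1[deg])) / T  # use the -df/de limit
    return r.mean()

# Nesting at half-filling (mu = 0): chi_0(pi,pi) grows as T is lowered,
# and is cut off when the band is doped away from half-filling:
print(chi0_q(np.pi, np.pi, mu=0.0) > chi0_q(np.pi, np.pi, mu=-0.5))   # True
print(chi0_q(np.pi, np.pi, T=0.2) > chi0_q(np.pi, np.pi, T=1.0))      # True
```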
While for small values of $J_H$ the increase(decrease) of the critical temperature of the antiferromagnetic(ferromagnetic) Ising model is the same as for $\rho=1$, once higher values of $J_H$ are reached ($J_H/t \sim 4$ in the AF case and $J_H/t \sim 10$ in the F one) the scenario changes. An antiferromagnetic Ising plane has its critical temperature decreased by the coupling with the free electron spins, while in the ferromagnetic case the critical temperature is enhanced. This is not completely unexpected since, as commented earlier, the peak in $\chi_0({\bf q})$ moves away from $(\pi,\pi)$ upon doping, so that the fermions in the metal no longer so strongly favor AF order. \begin{figure}[t] \epsfig{figure=T_c_dopedvsJ_H.pdf,width=9.0cm,angle=-0,clip} \vspace{-0.5cm} \caption{(Color online) Dependence on $J_H$ of the critical temperature for long-range order of the Ising spins when coupled to fermions at total density $\rho=0.87$ for different scenarios: antiferromagnetic(ferromagnetic) interaction between the Ising spins and two fermionic planes coupled by a hopping $t_\perp=t$, and the same for the interaction with a single plane. In the situation with two fermionic planes, this dependence is quantitatively similar to the half-filled case. In the single-plane scenario, in the regime of larger interactions, the coupling with the fermions is detrimental (benign) to the critical temperature when the antiferromagnetic(ferromagnetic) Ising model is considered, in clear contrast with the half-filled case. \label{fig:T_c_dopedvsJ_H} } \vspace{-0.5cm} \end{figure} The reason that this does not happen in the two layer case $t_\perp=t$ is that the second metallic layer $\ell=1$ acts as a charge reservoir for the layer $\ell=0$ at the interface. That is, the electron density is imbalanced, as seen in Fig.~\ref{fig:rho_vs_J_H_doped}. Plane $\ell=0$, adjacent to the magnetic layer, has a tendency to become half-filled, leaving the farthest plane less populated.
For larger values of $J_H$ the occupations tend to $1.0$ and $0.75$, for $\ell=0$ and $\ell=1$ respectively. Throughout this evolution the total density is preserved at $\rho=0.87$. The half-filling of layer $\ell=0$ allows for the enhancement(suppression) of the critical temperature of an antiferromagnetic(ferromagnetic) Ising plane. \begin{figure}[t] \epsfig{figure=rho_vs_J_H_doped.pdf,width=9.0cm,angle=-0,clip} \vspace{-0.5cm} \caption{(Color online) Dependence of the density on $J_H/t$ in each of the fermionic planes at temperatures close to where the magnetic transition of the Ising spins takes place. The lattice size is $L=8$, $t_\perp = t$, and the interaction among the Ising spins is either ferro- or antiferromagnetic with value $|J|=0.2t$. \label{fig:rho_vs_J_H_doped} } \vspace{-0.5cm} \end{figure} \section{Conclusion} \label{sec:conclusion} We studied magnetic order at the interface between an insulator and a metal using quantum Monte Carlo. Specifically, we considered a 2D Ising plane coupled to a lattice of noninteracting (metallic) fermions. In the case of an antiferromagnetic Ising model, and a half-filled metal, the coupling enhanced the Ising critical temperature. Antiferromagnetic order was also induced in the metal, both in the layer immediately at the interface with the classical spins and also deeper within. This enhancement occurs even in the case where the interlayer hopping in the fermionic sheet is made large enough that the fermions become a band insulator, namely a bilayer with interplane hopping $t_\perp$ bigger than $4t$. In contrast, the critical temperature of ferromagnetic Ising spins is reduced by the coupling to the fermions at half-filling. We attribute these distinct effects to be a consequence of the perfect nesting of the square lattice fermion tight binding hamiltonian, which favors antiferromagnetism.
Indeed, studies of the doped lattice demonstrate that the system's desire to optimize `magnetic consistency', that is, to have an AF metallic response when the magnetic layer is AF, is so great that, if available, charge will be pulled from a second metallic layer into the interface metallic layer so that half-filling is maintained there. In the absence of such a reservoir, the AF transition is suppressed by this mismatch with the metallic ordering wave vector. A central consideration of our work has been the consistency of the order in the classical spin plane with the ordering tendency of the metal. In ``unfrustrated'' cases where the metal and local spins prefer the same wave-vector, transition temperatures are enhanced, and {\it vice versa}. Recent experiments \cite{langridge14} have explored the importance of these considerations on the decoupling of surface and bulk magnetism in UO$_2$. The distinct surface behavior observed is attributed to the different symmetry of its ordered phase relative to the bulk. Other 3D systems in which 2D order occurs due to frustration are certain doped cuprate superconductors \cite{tranquada12, tranquada13}. In individual CuO$_2$ sheets, stripes of $d$-wave order coexist with intervening antiferromagnetic stripes. The orientation of the $d$-wave phases alternates from stripe to stripe in a given layer. In adjacent CuO$_2$ sheets, the same stripe pattern occurs, but, because of structural effects, the stripes are oriented perpendicular to those of the neighboring sheet. The result is that the intersheet Josephson coupling tends to cancel and 2D superconductivity is observed. A natural progression of the work reported here would be to consider the case of continuous XY (planar) or Heisenberg spins. As previously noted, in this case an isolated 2D spin plane has no transition to long range order, owing to the Mermin-Wagner theorem.
One interesting question will be how the less robust power law correlations which develop at low $T$ in the XY case are qualitatively affected by coupling to the metal. For $J<0$, where we find the Ising $T_c$ suppressed, will the Kosterlitz-Thouless transition survive? Because the fermion determinant depends only on the spin degrees of freedom in the interface layer, adding additional spin layers has relatively little computational cost. Thus it is feasible to study a 3D lattice of Heisenberg spins, which has a finite ordering temperature, coupled to one or more metallic layers. \vskip0.15in {\bf Acknowledgements:} This work was supported by DOE DE-NA0001842-0, and by the Office of the President of the University of California. Support from CNPq (TP and RM), FAPERJ (TP), and the CNPq Science Without Borders project (RS) is gratefully acknowledged.
\section{Introduction} Frustration in magnetic materials offers a fertile ground for studying interesting phenomena in strongly correlated systems~\cite{Diep,Lacroix}. Competing interactions under frustration often lead to an extensive number of energetically degenerate states. Even a small perturbation to the degeneracy can result in remarkable effects, such as phase transitions and colossal responses to external fields. These unique properties have stimulated intensive studies of competing orders and fluctuations in frustrated systems. One such example of frustrated systems is spin ice~\cite{Harris1997,Ramirez1999,GingrasPreprint}. In spin ice, spins with strong Ising-type anisotropy along the sublattice-dependent local $[111]$ direction reside on the pyrochlore lattice, which consists of corner-sharing tetrahedra [see Figs.~\ref{fig:diagram}(c)-\ref{fig:diagram}(f)]. In the spin-ice compounds, the effective interaction for nearest-neighbor (NN) spins becomes ferromagnetic, which enforces a two-in two-out spin configuration in each tetrahedron~\cite{Siddharthan1999,Bramwell2001,denHertog2005}. The local constraint, called the ice rule~\cite{Pauling1935}, is not sufficient to establish a long-range order, and results in a macroscopic degeneracy of the ground state. The long-range dipolar interactions lead to significant changes in the phase diagram~\cite{Melko2004} and the emergence of peculiar excitations called magnetic monopoles~\cite{Castelnovo2008}. \begin{figure} \begin{center} \includegraphics[width=\linewidth]{fig1v04.eps} \end{center} \caption{(Color online) (a) Phase diagram of the effective RKKY Ising model in Eq.~(\ref{eq:Hspin}). The lines are guides for the eyes; the solid (dotted) lines show the first(second)-order transitions. The bottom strip shows the ground state phase diagram obtained by a variational calculation comparing the ground state energy between the different magnetic states.
(b) The RKKY interactions for nearest-, second-, and third-neighbor pairs. Figures (c)-(f) show schematic pictures of the magnetic orders in the four phases present in the phase diagram: (c) ice-ferro, (d) ice-($0,0,2\pi$), (e) 32-sublattice, and (f) all-in/all-out orders. } \label{fig:diagram} \end{figure} On the other hand, recently, the physics of spin-charge coupling in geometrically frustrated systems has also gained interest. These studies were stimulated by recent experiments on the Mo and Ir metallic pyrochlore oxides, which show rich phase diagrams~\cite{Hanasaki2007,Iguchi2009,Matsuhira2011}. Indeed, some of their transport properties were theoretically discussed in the context of spin-charge coupling~\cite{Motome2010a,Motome2010b,Udagawa2012}. At the same time, the magnetism of a pyrochlore Ising model with the Ruderman-Kittel-Kasuya-Yosida (RKKY)~\cite{Ruderman1954,Kasuya1956,Yosida1957} interactions~\cite{Ikeda2008} and a Kondo lattice model on a pyrochlore lattice~\cite{Ishizuka2012} were studied. In the latter study, the magnetic phase diagram of the Kondo lattice model was mapped out by using a Monte Carlo (MC) simulation, and it was pointed out that the change of the RKKY interactions depending on the electron density plays an important role in understanding the phase diagram. However, a detailed investigation of the effective RKKY Ising model was not presented. In this contribution, we numerically study the magnetic phase diagram of the effective Ising model with long-range RKKY interactions using a MC simulation. We present the phase diagram of the effective model and show that the result reproduces well the phase diagram of the Kondo lattice model studied in Ref.~\citen{Ishizuka2012}. In addition, we investigate the critical behavior of the phase transition to each magnetic phase, which was unclear in the previous study on the Kondo lattice model due to computational limitations.
\section{Model and Method} \subsection{Model} The Hamiltonian for the spin-ice type Kondo lattice model considered in the previous studies~\cite{Udagawa2012,Ishizuka2012} is given by \begin{eqnarray} H = -t \! \sum_{\langle i,j \rangle, \sigma} \! ( c^\dagger_{i\sigma} c_{j\sigma} + \text{H.c.} ) -J \sum_{i} {\bf S}_i \cdot {\boldsymbol \sigma_i}. \label{eq:Hkondo} \end{eqnarray} Here, the first term is the hopping of itinerant electrons, where $c_{i\sigma}$ ($c^\dagger_{i\sigma}$) is the annihilation (creation) operator of an itinerant electron with spin $\sigma= \uparrow, \downarrow$ at the $i$th site. The sum $\langle i,j \rangle$ is taken over NN sites on the pyrochlore lattice, and $t$ is the NN transfer integral. The second term is the onsite interaction between localized spins and itinerant electrons, where ${\bf S}_i$ and ${\boldsymbol \sigma}_i$ represent the localized Ising spin and itinerant electron spin at the $i$th site, respectively ($|{\bf S}_i|=1$), and $J$ is the coupling constant (the sign of $J$ does not matter in the present model, as it can be reversed by a time-reversal transformation). The anisotropy axis of the Ising spin is given along the local [111] direction at each site, i.e., along the line connecting the centers of the two tetrahedra to which the spin belongs [see Figs.~\ref{fig:diagram}(c)-\ref{fig:diagram}(f)]. In the model in Eq.~(\ref{eq:Hkondo}), the kinetic motion of electrons induces effective magnetic interactions between the localized Ising spins. In the weak coupling limit of $J/t \ll 1$, the effective interactions are obtained by second-order perturbation theory in terms of the second term in Eq.~(\ref{eq:Hkondo}). They are called the RKKY interactions~\cite{Ruderman1954,Kasuya1956,Yosida1957}. Thus, the perturbation gives an effective Ising spin model with long-range RKKY interactions.
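We do not reproduce the pyrochlore calculation here, but the structure of the second-order result, an induced coupling $J_{ij} \propto J^2 \chi_0(r_{ij})$ with an oscillating, decaying tail, can be illustrated on a 1D tight-binding chain. A hedged sketch (the chain, the filling, and the discretization are illustrative stand-ins for the pyrochlore band structure):

```python
import numpy as np

def rkky_1d(n=256, t=1.0, filling=0.3, T=0.05):
    """J(r) ~ J^2 * chi_0(r) for a 1D tight-binding chain -- an
    illustrative stand-in for the actual pyrochlore bands."""
    k = 2.0 * np.pi * np.arange(n) / n
    eps = -2.0 * t * np.cos(k)
    mu = np.sort(eps)[int(filling * n)]          # crude Fermi level
    f = 1.0 / (np.exp((eps - mu) / T) + 1.0)
    chi_q = np.empty(n)
    for m in range(n):                           # q_m = 2*pi*m/n
        e2, f2 = np.roll(eps, -m), np.roll(f, -m)
        de = e2 - eps
        with np.errstate(divide='ignore', invalid='ignore'):
            r = np.where(np.abs(de) > 1e-9,
                         (f - f2) / de,          # Lindhard ratio
                         f * (1.0 - f) / T)      # degenerate (de -> 0) limit
        chi_q[m] = r.mean()
    return np.real(np.fft.ifft(chi_q))           # back to real space

J_r = rkky_1d()
# The induced couplings oscillate in sign with distance (2 k_F oscillations):
print(np.any(J_r[1:20] > 0), np.any(J_r[1:20] < 0))
```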
For simplicity, by omitting interactions beyond third neighbors and projecting the local spin axis along the direction of the anisotropy axis~\cite{Moessner1998}, we consider the Hamiltonian in the form \begin{eqnarray} H = - J_1 \sum_{\langle i,j \rangle} S_i^z S_j^z - J_2 \sum_{\left\{ i,j \right\}} S_i^z S_j^z - J_3 \sum_{\left[ i,j \right]} S_i^z S_j^z, \label{eq:Hspin} \end{eqnarray} where $S_i^z=\pm1$ is the projected collinear Ising moment at the $i$th site and the sum $\left\{ i,j \right\}$ ($\left[ i,j \right]$) is taken over second(third)-neighbor pairs. Here, $J_1$, $J_2$, and $J_3$ are the nearest-, second-, and third-neighbor interactions, respectively, which depend on the electron density $n=\frac1{2N}\sum_{i\sigma} \langle c_{i\sigma}^\dagger c_{i\sigma}\rangle$ in the original Kondo lattice model in Eq.~(\ref{eq:Hkondo}); the estimates are shown in Fig.~\ref{fig:diagram}(b) as functions of $n$ (we set $J^2/t=1$ as the energy unit hereafter). We call this model the effective RKKY Ising model. Note that the previous study on a similar model assumed the RKKY interaction for free electrons~\cite{Ikeda2008}, while our effective model takes account of the band structure of the pyrochlore lattice. In the following, we take the lattice constant of the cubic unit cell $a = 1$ [see Fig.~\ref{fig:diagram}(f)], and the Boltzmann constant $k_{\rm B} = 1$. \subsection{Monte Carlo method} We investigate the phase diagram of the effective RKKY Ising model in Eq.~(\ref{eq:Hspin}) by a classical MC simulation for $J_1$, $J_2$, and $J_3$ at each $n$ plotted in Fig.~\ref{fig:diagram}(b). The single-spin flip update using the heat-bath method was employed for the MC sampling.
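The heat-bath update draws a new spin value from its local Boltzmann distribution, independently of the current value. A minimal sketch for a NN Ising ferromagnet on a periodic square lattice (an illustrative stand-in: the pyrochlore geometry and the $J_1$, $J_2$, $J_3$ couplings of Eq.~(\ref{eq:Hspin}) are omitted for brevity):

```python
import numpy as np

def heat_bath_sweep(S, J, T, rng):
    """One heat-bath sweep for a NN Ising model, H = -J sum S_i S_j,
    on a periodic square lattice."""
    L = S.shape[0]
    for i in range(L):
        for j in range(L):
            h = J * (S[(i+1) % L, j] + S[(i-1) % L, j]
                     + S[i, (j+1) % L] + S[i, (j-1) % L])
            # Heat-bath rule: resample the spin from its local weight.
            p_up = 1.0 / (1.0 + np.exp(-2.0 * h / T))
            S[i, j] = 1 if rng.random() < p_up else -1
    return S

rng = np.random.default_rng(0)
S = rng.choice([-1, 1], size=(8, 8))
for _ in range(500):
    heat_bath_sweep(S, J=1.0, T=1.0, rng=rng)
print(abs(S.mean()))  # close to 1 deep in the ordered phase (T < Tc ~ 2.27)
```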
Most of the calculations were started from random spin configurations, while the calculations in the ice-$(0,0,2\pi)$ region were started from a mixed initial spin configuration of low-temperature ($T$) ordered and high-$T$ disordered states in order to cope with the severe freezing at low $T$. The typical system sizes for the calculations were $N=4\times6^3$ to $4\times12^3$ sites, whereas calculations with up to $N=4\times24^3$ sites were done for cases in which weak first-order transitions were expected. The calculations were typically done with $5\times10^6$ MC steps after thermalization of $10^6$ MC steps, and the error bars were estimated by dividing the MC data into five bins and calculating the corrected sample standard deviation among the bins. \section{Results} \subsection{Phase diagram} Figure~\ref{fig:diagram}(a) shows the phase diagram of the effective RKKY Ising model in Eq.~(\ref{eq:Hspin}) obtained by the MC calculations. We identify four magnetic phases at low $T$: (i) the ice-ferro state for $n\lesssim 0.054$, (ii) the ice-$(0,0,2\pi)$ state for $0.054\lesssim n \lesssim 0.150$, (iii) the 32-sublattice ordered state for $0.150\lesssim n \lesssim 0.222$, and (iv) the all-in/all-out ordered state for $0.222 \lesssim n$. Schematic pictures of each magnetic order are shown in Figs.~\ref{fig:diagram}(c)-\ref{fig:diagram}(f). The symbols in the phase diagram show the critical temperatures $T_c$ estimated from the $T$ dependence of the order parameters (see the next section for details). The strip at the bottom of the figure shows the ground-state phase diagram obtained by a variational calculation comparing the ground-state energies of the four magnetic orders. The variational estimates of the range of $n$ for each phase are in good agreement with the MC results, as shown in Fig.~\ref{fig:diagram}(a).
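The binning procedure used for the error bars above can be sketched as follows; this is a minimal Python illustration of the five-bin estimate described in the Monte Carlo method section, not the actual analysis code.

```python
import math

def binned_error(samples, n_bins=5):
    """Error bar from binning: split the MC time series into n_bins blocks,
    average within each block, and return the corrected (ddof = 1) sample
    standard deviation of the block means.
    """
    m = len(samples) // n_bins
    means = [sum(samples[k * m:(k + 1) * m]) / m for k in range(n_bins)]
    grand = sum(means) / n_bins
    var = sum((x - grand) ** 2 for x in means) / (n_bins - 1)
    return math.sqrt(var)
```

Binning suppresses the effect of autocorrelations in the MC time series, since block means become approximately independent once the block length exceeds the autocorrelation time.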
The magnetic phase diagram in Fig.~\ref{fig:diagram}(a) is in good accordance with the phase diagram of the Kondo lattice model in Eq.~(\ref{eq:Hkondo}) obtained in Ref.~\citen{Ishizuka2012}. Indeed, all four magnetic states that appeared in the Kondo lattice model at $J/t=2$ are found in the effective RKKY Ising model. The range of $n$ for each phase is in good agreement between the two models, except for the electronic phase separation specific to the Kondo lattice model. The relative values of $T_c$ for the different phases also show reasonable accordance between the two models, although the magnitude of $T_c$ is overestimated by a factor of 2--3 in the present results when compared at the value of $J/t=2$ taken in the Kondo lattice model. These results indicate that the effective model with the RKKY interactions up to third neighbors semi-quantitatively describes the magnetic properties of the original Kondo lattice model in the weak-coupling regime. Although it was unclear how the ice-ferro and ice-$(0,0,2\pi)$ ordered phases meet each other at low $T$ in the previous study of the Kondo lattice model~\cite{Ishizuka2012}, our result in Fig.~\ref{fig:diagram}(a) suggests that $T_c$ goes to zero on both sides and the two phase boundaries meet at $T=0$. This peculiar behavior can be understood as follows. The ground-state energies per site for the ice-ferro and ice-$(0,0,2\pi)$ ordered states are given by $E_\text{if}=4J_1+8J_2-12J_3$ and $E_{\rm 2\pi}=4J_1-8J_2+4J_3$, respectively. At the boundary between the two states, $E_\text{if} = E_{\rm 2\pi}$, which implies $J_2=J_3$ at the phase boundary. This is indeed the case, as shown in Fig.~\ref{fig:diagram}(b).
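The variational comparison behind this argument amounts to comparing the two quoted energies; since $E_\text{if}-E_{\rm 2\pi}=16(J_2-J_3)$, the boundary lies exactly at $J_2=J_3$. A small Python sketch, using only the formulas quoted above (the function names are ours, not from the paper's code):

```python
def ice_state_energies(J1, J2, J3):
    """Ground-state energies per site quoted in the text for the two
    ice-rule ordered states (projected Ising variables)."""
    E_if = 4 * J1 + 8 * J2 - 12 * J3    # ice-ferro
    E_2pi = 4 * J1 - 8 * J2 + 4 * J3    # ice-(0,0,2pi)
    return E_if, E_2pi

def lower_ice_state(J1, J2, J3):
    """Which of the two ice states wins variationally.
    E_if - E_2pi = 16*(J2 - J3), so the boundary is exactly J2 = J3."""
    E_if, E_2pi = ice_state_energies(J1, J2, J3)
    if E_if < E_2pi:
        return "ice-ferro"
    if E_2pi < E_if:
        return "ice-(0,0,2pi)"
    return "degenerate"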
It was recently pointed out that the model in Eq.~(\ref{eq:Hspin}) with $J_2=J_3$ can be rewritten by introducing the magnetic charge for each tetrahedron $p$, $Q_p=\sum_{i\in p}S_i^z$, into the form of~\cite{Ishizuka2013} \begin{eqnarray} H=-J_2 \sum_{\langle p, q\rangle} Q_p Q_q - \left( \frac{J_1}4 - \frac{J_2}2 \right) \sum_p Q_p^2 + {\rm const}. \end{eqnarray} Here, $p$ and $q$ are the indices for the tetrahedra in the pyrochlore lattice; the first sum is taken over all the pairs of NN tetrahedra, and the second sum over all tetrahedra. Rewriting the Hamiltonian in this form, it is easily seen that all the two-up two-down spin configurations are energetically degenerate since $Q_p=0$ for all $p$. Hence, when $J_1$ is dominantly negative and favors the two-up two-down configurations, the system remains disordered to $T\to0$ when $J_2=J_3$. This explains the reason why $T_c$ goes to zero at the phase boundary between the ice-ferro and ice-($0,0,2\pi)$ phases. \subsection{Temperature dependence of physical quantities} \begin{figure} \begin{center} \includegraphics[width=0.86\linewidth]{fig2v03.eps} \end{center} \caption{(Color online) Temperature dependence of the order parameters in each phase: (a) $m_\text{if}$ at $n=0.039$, (b) $m_{\rm 2\pi}$ at $n=0.123$, (c) $m_\text{32}$ at $n=0.192$, and (d) $m$ at $n=0.230$. The insets in each figure show the temperature dependence of the Binder parameters for each parameter in the main panel. } \label{fig:mc} \end{figure} Let us next look into the critical behavior of the phase transition for each phase while changing $T$. The temperature dependence of the order parameters for each magnetic phase is shown in Fig.~\ref{fig:mc}. Figure~\ref{fig:mc}(a) shows the result of the order parameter for the ice-ferro state, $m_\text{if}=\langle \{ 3\sum_i S_{ii}({\bf 0}) - 2\sum_{i>j}S_{ij}({\bf 0}) \}/4N \rangle$, at $n=0.039$; the inset shows the result of the Binder parameter~\cite{Binder1981} for $m_\text{if}$. 
Here, $S_{ij}({\bf q})$ ($i,j=0,1,2,3$) is the spin structure factor for the Ising spins on $i$th and $j$th sublattices. The rapid increase of $m_\text{if}$ around $T_c=0.013$ indicates the phase transition to the ice-ferro state. At the same time, the Binder parameter for large system sizes $N\ge 4\times 10^3$ shows a dip and takes a negative value near $T_c$, indicating that the transition is of first order. $T_c$ plotted in Fig.~\ref{fig:diagram}(a) is estimated from the position of the dip for the Binder parameter (the finite size effect is negligibly small). The discontinuous transition is consistent with the result in the previous study using RKKY interactions for free electron gas~\cite{Ikeda2008}. In the previous study, the reason for the discontinuity was discussed in relation to the six-state models. On the other hand, our previous result on the Kondo lattice model appeared to show a continuous change of the order parameter at the phase transition. This is presumably due to the small system size. Indeed, the dip of the Binder parameter does not appear for small sizes in Fig.~\ref{fig:mc}(a). With increasing the electron density, a sharp first order transition to ice-$(0,0,2\pi)$ ordered state is seen. Figure~\ref{fig:mc}(b) and its inset show the result of $m_{\rm 2\pi}^2= \frac1{4N} [ 2\sum_{i,j} S_{ii}({\bf q}_j^{2\pi}) -4 \{ S_{01}({\bf q}_1^{2\pi}) + S_{23}({\bf q}_1^{2\pi}) + S_{02}({\bf q}_2^{2\pi})+ S_{13}({\bf q}_2^{2\pi}) + S_{03}({\bf q}_3^{2\pi}) + S_{12}({\bf q}_3^{2\pi}) \} ]$ and its Binder parameter at $n=0.123$, respectively. Here, ${\bf q}_1^{2\pi}=(0,\pi,\pi)$, ${\bf q}_2^{2\pi}=(\pi,0,\pi)$, and ${\bf q}_3^{2\pi}=(\pi,\pi,0)$. The abrupt jump of $m_{\rm 2\pi}$ at $T_c=0.0524(4)$ clearly indicates that the transition is of first order. As the system size dependence of $T_c$ is negligible, we used $T_c$ for $N=4\times 12^3$ in the plot in Fig.~\ref{fig:diagram}(a). 
The discontinuous transition is consistent with the result in the Kondo lattice model in Ref.~\citen{Ishizuka2012}. On the other hand, the phase transition to the all-in/all-out state in the higher density region is second order. Figure~\ref{fig:mc}(d) shows $T$ dependence of $m=\sum_{i,j}S({\bf q}={\bf 0})/4N$ at $n=0.230$. $m$ shows continuous increase with decreasing $T$, and the Binder parameter for different sizes shows a crossing at a temperature [the inset of Fig.~\ref{fig:mc}(d)], indicating the second order transition; $T_c$ is determined from the crossing of the Binder parameter shown in the inset. The continuous phase transition was also seen for the Kondo lattice model in Ref.~\citen{Ishizuka2012}. A second order transition also takes place in the phase transition to the 32-sublattice ordered state. Figure~\ref{fig:mc}(c) shows $T$ dependence of $m_{32}=\langle \sum_i S_{ii}({\bf q}_i^{32})/N \rangle$ at $n=0.192$, where ${\bf q}_0^{32}=(\pi,\pi,\pi)$, ${\bf q}_1^{32}=(\pi,0,0)$, ${\bf q}_2^{32}=(0,\pi,0)$, and ${\bf q}_3^{32}=(0,0,\pi)$. The results of the Binder parameters for $m_{32}$ are also shown in the inset. Smooth increase of $m_{32}$ and the crossing of the Binder parameter for different system sizes indicate a second order transition. Up to the calculation of system sizes $N=4\times 24^3$, the second order transition takes place for electron densities $0.18\lesssim n \lesssim 0.21$ as shown by the dotted lines in Fig.~\ref{fig:diagram}(a). On the other hand, by the Binder analysis, the phase transition to the 32-sublattice ordered state becomes first order for $0.15\lesssim n \lesssim 0.18$ and $0.21\lesssim n \lesssim 0.22$. This implies the presence of the tricritical points on the phase boundary for the 32-sublattice ordered phase~\cite{note_32sub}. On the other hand, in the study on the Kondo lattice model, no first order transition was observed~\cite{Ishizuka2012}. The difference is presumably due to the finite size effect. 
Indeed, in the current calculations, we found that the system size above $N=4\times 12^3$ is necessary for distinguishing the first order transition. On the other hand, due to the computational time, previous calculations on the Kondo lattice model were done with $N=4\times 8^3$ at the largest. \section{Summary} To summarize, we numerically investigated an effective RKKY Ising model for the spin-ice Kondo lattice model, which was studied in Ref.~\citen{Ishizuka2012}, by using the Monte Carlo simulation. We showed that the effective model with up to the third-neighbor RKKY interaction well reproduces the phase diagram of the Kondo lattice model. In addition, we presented the detailed results on the nature of the phase transition to each magnetically ordered state. We showed that the critical temperature goes to zero between the two ice phases, and that the tricritical points are present on the phase boundary of the 32-sublattice ordered state. \section*{Acknowledgement} H.I. is supported by Grant-in-Aid for JSPS Fellows. This research was supported by KAKENHI (No. 21340090, 22540372, 24340076, and 24740221), the Strategic Programs for Innovative Research (SPIRE), MEXT, and the Computational Materials Science Initiative (CMSI), Japan.
2,877,628,088,722
arxiv
\section{Introduction} There are two mathematical frameworks in widespread use for modeling beliefs in multi-agent systems. One approach, popular among computer scientists and logicians, utilizes the \emph{possible worlds} paradigm (see, e.g., \cite{Hal31}). Roughly speaking, a \emph{probability frame} consists of a set of \emph{worlds}, each of which is associated with a set of probability measures (one for each agent), defined on the set of worlds. These probability measures are interpreted as encoding beliefs. Hierarchical beliefs---for example, beliefs about what another agent believes---are naturally captured by the recursive structure of this framework, namely the fact that worlds encode beliefs about worlds. The second approach, more standard in game theory, uses \emph{type spaces}, introduced by Harsanyi \cite{Harsanyi}. Roughly speaking, types spaces are composed of \emph{states}, encoding ``basic'' facts about the world (typically including which strategies the players are using), together with \emph{types}, encoding the beliefs of each player in the form of a probability measure defined over the states and the types of her opponents. What is the relationship between probability frames and type spaces? Aside from a few measure-theoretic technicalities, it is relatively straightforward to transform a type space into a probability frame: essentially, the worlds are state-type pairs. Reversing this transformation is not so straightforward. Given a probability frame, the key question is how to ``factor'' worlds into states and types. Probability frames encode beliefs about worlds, beliefs about beliefs about worlds, and so on, but this never ``bottoms out'' in anything like the states in a type space. That is, there is no obvious component of a world that encodes facts such as what strategies the agents are using or the value participants in an auction might assign to an item up for bid. Thus, there seems to be a mismatch between the two approaches. 
In this paper, we resolve this mismatch by adding a \emph{language}---a set of basic facts (such as what strategy is used by each agent), represented by primitive propositions---to the picture. In the terminology of modal logic, we pass from frames to models. Given a language, a model is simply a frame together with an \emph{interpretation} that determines for each world $w$ and primitive proposition $p$ in the language whether $p$ is true in world $w$. But then we must decide which language to use. We show that the right choice of language can provide exactly the additional structure needed to ``cleanly'' factor worlds into states and types. Specifically, we define a transformation on probability models that takes language as a parameter, and show that it produces the familiar type space construction when the language is appropriately expressive. The value of forging such a connection between the two major mathematical frameworks for modeling belief is obvious: improved communication between researchers working in these respective traditions, and the prospect of importing insights and results from one paradigm to another. And indeed, one immediate application of our language-sensitive translation is a link between two fundamental notions: that of a \emph{canonical} model from the world of modal logic (see, e.g., \cite{BRV01}) and that of a \emph{universal} type space from the theoretical economics literature \cite{MZ}. Each of these constructions plays a central role in the subfield to which it belongs, and these roles are very similar: each is, in a precise sense, the ``largest'' structure of its kind---a structure that essentially contains all other such structures. 
It is perhaps not surprising that they are effectively the \textit{same} structure: roughly speaking, we show in Section \ref{sec:trs} that canonical models are transformed into universal type spaces.% \footnote{We remark that Meier \cite{Meier12} already observed this connection in the case of an infinitary language.} Moreover, since canonical model constructions are highly sensitive to the underlying logical language, this result suggests a new view into a landscape of universal type spaces parametrized by language. Much of this work was inspired by a beautiful paper of Heifetz and Samet \cite{HeSa98}. In it, they construct a measure-theoretic universal type space by a process that closely mimics a standard canonical-model construction (though they do not describe it that way). Our work can be viewed as generalizing their construction to produce a translation from \textit{arbitrary} probability frames to type spaces; our Theorem \ref{thm:can} is then the special case of applying this translation to the canonical model associated with a certain specific logic. In order to emphasize this connection, much of the notation and terminology of this paper duplicates or parallels that used by Heifetz and Samet. In fact, our ``canonical model'' construction differs in small but significant ways from the standard construction in modal logic. Typically, worlds in the canonical model are realized as maximal \textit{consistent} sets of formulas from the language, where consistency is, of course, defined relative to some background axiom system. However, the standard finitary axiom system used to reason about probability frames has a problem, namely, it is not \emph{compact}: there exists an infinite set $F$ of formulas that is not satisfiable such that every finite subset of $F$ is satisfiable (which means that $F$ is consistent with the axioms). This renders the corresponding canonical model not a model at all. 
To avoid this issue, we replace ``consistency'' with ``satisfiability'' in our canonical model construction. (Aumann \cite{Aumann99a} uses an analogous construction.) Meier~\cite{Meier12} considers an alternative approach: changing the axiom system. Specifically, he considers an \textit{infinitary} axiom system (with infinitary rules of inference) with respect to which consistency and satisfiability coincide, and constructs a universal type space using a canonical model style construction over this infinitary logic. Although Meier's logic is infinitary (he allows uncountable conjunctions and disjunctions) and our language is finitary, his canonical model is essentially isomorphic to ours (see Section~\ref{sec:uni} for further discussion).% \footnote{We thank Martin Meier for pointing this important connection between our work, his work, and that of Aumann.} Conceptually, however, our goals are somewhat different from those of Aumann and Meier. Aumann and Meier focus on the construction of the canonical model. By way of contrast, we approach the issue as a problem first of how to transform an arbitrary probability frame into a type space, and observe afterwards that this translation connects a (suitably defined) notion of canonical model to that of a universal type space. We are not the first to study the general relationship between type spaces and possible-worlds--style structures. One connection via logic is well known. Sound and complete axiomatizations have been provided for various logics of probability: Heifetz and Mongin \cite{HeifetzM01} considered a finitary logic where the basic statements have the form $B_i^\theta \phi$ (agent $i$ believes that the probability of $\phi$ is at least $\theta$)---this is the same logic that we consider---and provided a sound and complete axiomatization in their logic for type systems; Meier \cite{Meier12} did the same for an infinitary logic. 
Since the axioms are easily seen to be sound in probability frames, and every type structure can be viewed as a probability frame, soundness and completeness of these axiomatizations for probability frames follows. Fagin, Halpern, and Megiddo \cite{FHM,FH3} provided a sound and complete axiomatization of a logic that allowed reasoning about linear combinations of probabilities (i.e., statements such as $2\ell_i(\phi) + 3 \ell_i(\psi) \ge 1.5$, which can be read as ``twice agent $i$'s probability of $\phi$ plus three times agent $i$'s probability of $\psi$ is at least 1.5'') in probability frames. Since their axioms are easily seen to be sound in type spaces and statements about linear combinations can be expresssed in Meier's infinitary logic, it follows that this axiomatization is also complete for type spaces. The work on axiomatizations does not produce an explicit translation between type spaces and possible-worlds structures. In more recent work, Galeazzi and Lorini \cite{GL16} develop a translation between the two and prove a semantic equivalence result. They, too, work at the level of models rather than frames (though they do not explicitly discuss this choice); however, their translations are defined model-theoretically with respect to a single fixed language, rather than taking language as a parameter, making the approach we develop more flexible and more broadly applicable. While the translation they propose from (what we call) probability models into type spaces is not a special case of ours, it is similar in spirit. However, there is one significant difference: in passing through language, our approach effectively identifies worlds that satisfy all the same formulas, while theirs does not (in particular, ``duplicate'' worlds produce duplicate types under their translation, but not under ours). 
Semantically speaking, provided we fix an appropriately expressive language, the type spaces we produce are equivalent, once we identify types that satisfy the same formulas. By varying the language, however, our translations take on different characters---they preserve more or less of the type space structure in accordance with what is expressible in the language. Moreover, Galeazzi and Lorini restrict their attention to countable structures, which effectively precludes consideration of structures like universal type spaces or canonical models. The rest of the paper is organized as follows. Section \ref{sec:pre} presents the basic mathematical frameworks within which we work. Section \ref{sec:tra} motivates and defines the translations from type spaces to probability frames and vice-versa. Section \ref{sec:uni} presents the connection between universal type spaces and canonical models discussed above. Section \ref{sec:con} concludes. Some proofs have been omitted or abridged due to length requirements. \section{Preliminaries} \label{sec:pre} The definition of a type space typically includes various topological assumptions that make it easier to prove certain results of interest within that framework \cite{DS15}. Since our goal is to understand the connection between type spaces and probability frames, we opt instead to work in as minimal a setting as possible, so as not to obscure the translations between the two with additional topological bookkeeping. In particular, following Heifetz and Samet \cite{HeSa98}, we work with a purely measure-theoretic definition of types spaces. A \defin{measurable space} is a set $X$ together with a $\sigma$-algebra $\Sigma_{X}$ over $X$; elements of $\Sigma_{X}$ are called \defin{measurable sets} or \defin{events}. We often drop explicit mention of $\Sigma_{X}$ and refer simply to ``the measurable space $X$''. 
We denote by $\Delta(X)$ the measurable space of all probability measures on $X$ equipped with the $\sigma$-algebra generated by all sets of the form $$\mathcal{B}^{\theta}(E) \coloneqq \{\mu \in \Delta(X) \: : \: \mu(E) \geq \theta\},$$ where $\theta \in [0,1]$ and $E \in \Sigma_{X}$ is an event. Given measurable spaces $X_{1}, \ldots, X_{k}$, the measurable space $X_{1} \times \cdots \times X_{k}$ is just the usual product space equipped with the $\sigma$-algebra generated by all sets of the form $E_{1} \times \cdots \times E_{k}$, where each $E_{i} \in \Sigma_{X_{i}}$. Given a probability measure $\mu$ on $X$, the associated \defin{outer measure}, denoted $\mu^{*}$, is defined on arbitrary subsets of $X$ as follows: $$\mu^{*}(A) \coloneqq \inf\{\mu(E) \: : \: E \in \Sigma_{X} \textrm{ and } E \supseteq A\}.$$ Obviously, if $A \in \Sigma_{X}$, then $\mu^{*}(A) = \mu(A)$. Otherwise, if $A$ is not a measurable set, the outer measure of $A$ can be thought of as a kind of approximation of the measure of $A$ from above: every event containing $A$ has probability at least $\mu^{*}(A)$, and for all $\varepsilon > 0$, there is an event $E \supseteq A$ with $\mu(E) - \mu^{*}(A) < \varepsilon$. Fix a finite set $I = \{1, \ldots, n\}$ of agents. 
We adopt the usual notational game-theoretic conventions for tuples over $I$: Given $(X_{i})_{i \in I}$, we write $$X \coloneqq \prod_{i \in I} X_{i} \quad \textrm{and} \quad X_{-i} \coloneqq \prod_{j \neq i} X_{j}.$$ We also write $X_{i}' \times X_{-i}$ for $$X_{1} \times \cdots \times X_{i-1} \times X_{i}' \times X_{i+1} \times \cdots \times X_{n}$$ and similarly $(x_{i}', x_{-i})$ for $$(x_{1}, \ldots, x_{i-1}, x_{i}', x_{i+1}, \ldots, x_{n}).$$ A \defin{type space (over $I$)} is a tuple $\mathcal{T} = (X, (T_{i})_{i \in I}, (\beta_{i})_{i \in I})$ where \begin{itemize} \item $X$ is a measurable space of \emph{states}; \item $T_{i}$ is a measurable space of \emph{$i$-types}; \item $\beta_{i}: T_{i} \to \Delta(X \times T)$ is a measurable function such that the marginal of $\beta_{i}(t_{i})$ on $T_{i}$ is $\delta_{t_{i}}$, the point-mass measure concentrated on $t_{i}$. \end{itemize} Intuitively, $X$ captures the basic facts about which the agents may be uncertain, while $i$-types represent the beliefs of agent $i$ via the function $\beta_{i}$. These beliefs are not just about the states, but also about the types (and therefore the beliefs) of the agents. In this context, the requirement that $\beta_{i}$ be measurable can be thought of as a closure condition on events: for all events $E \subseteq X \times T$, the set of points where agent $i$ assigns $E$ probability at least $\theta$, namely $$X \times \beta_{i}^{-1}(\mathcal{B}^{\theta}(E)) \times T_{-i},$$ is itself an event. The extra condition on $\beta_{i}$ is meant to ensure that agent $i$ is \emph{introspective}: that is, sure of her own beliefs. The point-mass measure $\delta_{t_{i}}$ is defined on the measurable subsets of $T_{i}$ by $$ \delta_{t_{i}}(E) = \left\{ \begin{array}{ll} 1 & \textrm{if $t_{i} \in E$}\\ 0 & \textrm{if $t_{i} \notin E$.} \end{array} \right. $$ Thus, $\delta_{t_{i}}$ assigns probability 1 to all and only the events containing $t_{i}$. 
Note that in general we cannot simply say that $\{t_{i}\}$ has probability 1 according to agent $i$, since $\{t_{i}\}$ may not be measurable; instead, we can say that every event incompatible with $t_{i}$ has probability 0 according to agent $i$.\footnote{This subtlety does not typically arise in the richer topological setting: provided $T_{i}$ is a $T_{1}$-space (see, e.g., \cite{Munkres}; there is an unfortunate clash of notation here), $\{t_{i}\}$ is closed and therefore part of the Borel $\sigma$-algebra associated with $T_{i}$.} Equivalently, $\delta_{t_{i}}$ is the unique probability measure on $T_{i}$ that assigns $\{t_{i}\}$ outer measure $1$. A \defin{probability frame (over $I$)} is a tuple $\mathcal{F} = (\Omega, (\mathit{Pr}_{i})_{i \in I})$ where \begin{itemize} \item $\Omega$ is a measurable space of \emph{worlds}; \item $\mathit{Pr}_{i}: \Omega \to \Delta(\Omega)$ is a measurable function such that, for each $\omega \in \Omega$, $\mathit{Pr}_{i}(\omega)^{*}(\mathit{Pr}_{i}^{-1}(\mathit{Pr}_{i}(\omega))) = 1$. \end{itemize} Here, all information is encoded in $\Omega$, basic facts and beliefs alike. As with type spaces, the measurability of $\mathit{Pr}_{i}$ yields a closure condition on events: for all events $E \subseteq \Omega$, the set of points where agent $i$ assigns $E$ probability at least $\theta$ is given by $\mathit{Pr}_{i}^{-1}(\mathcal{B}^{\theta}(E))$ and is therefore measurable. And as above, the additional condition on $\mathit{Pr}_{i}$ amounts to the stipulation that agent $i$ is sure of her own beliefs in the sense that at each world $\omega$, $\mathit{Pr}_{i}(\omega)$ assigns outer measure $1$ to the set $$\mathit{Pr}_{i}^{-1}(\mathit{Pr}_{i}(\omega)) = \{\omega' \: : \: \mathit{Pr}_{i}(\omega') = \mathit{Pr}_{i}(\omega)\},$$ namely, the set of worlds where her beliefs are given by the measure $\mathit{Pr}_{i}(\omega)$. If this set is measurable, of course, then it is itself assigned probability $1$. 
In much of the literature the measurability of this set is simply assumed. We adopt the slightly more cumbersome definition given above using outer measure because it is more general and because it parallels the introspection condition assumed in type spaces in a way that helps to streamline the translation between the two. \section{Translations} \label{sec:tra} Informally, a type space looks like a probability frame where the set of worlds $\Omega$ has been ``factored'' into a component representing basic facts---the states---and components representing the beliefs of the agents---the types. As discussed in the introduction, given a probability frame, it is not clear how to perform such a factorization; most of this section is concerned with developing a solution to this problem. The reverse construction, on the other hand, is straightforward, so we begin with it. \commentout{ } \begin{proposition} \label{pro:t-f} Let $\mathcal{T} = (X, (T_{i})_{i \in I}, (\beta_{i})_{i \in I})$ be a type space, and define $\Omega \coloneqq X \times T$ and $\mathit{Pr}_{i}(x,t) \coloneqq \beta_{i}(t_{i})$. Then $\F_{\T} \coloneqq (\Omega, (\mathit{Pr}_{i})_{i \in I})$ is a probability frame. \end{proposition} \begin{proof} This is the obvious construction; all that needs to be checked is that $\mathit{Pr}_{i}$ satisfies the appropriate conditions. Measurability of this function is an easy consequence of the measurability of $\beta_{i}$, since $\mathit{Pr}_{i}^{-1}(\mathcal{E}) = X \times \beta_{i}^{-1}(\mathcal{E}) \times T_{-i}$. For introspection, observe that $$\mathit{Pr}_{i}(x,t)^{*}(\mathit{Pr}_{i}^{-1}(\mathit{Pr}_{i}(x,t))) = \beta_{i}(t_{i})^{*}(\{(x',t') \: : \: \beta_{i}(t_{i}') = \beta_{i}(t_{i})\}) = 1, $$ since every measurable set containing $\{(x',t') \: : \: \beta_{i}(t_{i}') = \beta_{i}(t_{i})\}$ is of the form $X \times U \times T_{-i}$, where $U \subseteq T_{i}$ is measurable and contains $t_{i}$. 
\end{proof} In what sense is $\F_{\T}$ the ``right'' translation of $\mathcal{T}$? Intuitively, we want to say that the relevant properties of agents and their beliefs that are captured by $\mathcal{T}$ are also captured by $\F_{\T}$, and in some sense preserved by this translation. To make this precise, we formalize the notion of ``relevant properties'' by identifying them with formulas in a suitably expressive logical language; we then show that the map $\mathcal{T} \mapsto \F_{\T}$ is truth-preserving with respect to this language (Proposition \ref{pro:t-m}). In addition to providing a formal standard by which to evaluate purported translations between models, making the background language explicit lays the groundwork for the reverse translation, which makes essential use of this structure. \subsection{Language} \label{sec:lan} Fix a set $\Phi$ of \emph{primitive propositions} and a set $\Theta \subseteq [0,1]$ of \emph{thresholds}; let $\mathcal{L}_{B}^{\Theta}(\Phi, I)$ be the language recursively generated by the grammar $$\phi ::= p \, | \, \lnot \phi \, | \, \phi \land \psi \, | \, B_{i}^{\theta} \phi,$$ where $p \in \Phi$, $i \in I$, and $\theta \in \Theta$. The parameters $\Phi$ and $I$ are omitted when they are clear from context. The other Boolean connectives can be defined in the standard way. We read $B_{i}^{\theta} \phi$ as ``agent $i$ believes that the probability of $\phi$ is at least $\theta$''. Intuitively, $\Theta$ collects the set of thresholds that the language can express beliefs up to. There is a standard way of interpreting formulas of $\mathcal{L}_{B}^{\Theta}(\Phi, I)$ in probability frames. A \defin{probability model (over $(\Phi, I)$)} is a tuple $\mathcal{M} = (\mathcal{F}, \pi)$ where $\mathcal{F}$ is a probability frame (over $I$) and $\pi: \Phi \to \Sigma_{\Omega}$ is an \emph{interpretation}. 
Recall that $\Sigma_{\Omega}$ denotes the $\sigma$-algebra associated with the measurable space $\Omega$; the event $\pi(p) \subseteq \Omega$ is conceptualized as the set of worlds where the primitive proposition $p$ is true. We can extend this notion of truth to all formulas by defining $\valM{\cdot}: \mathcal{L}_{B}^{\Theta} \to \Sigma_{\Omega}$ recursively as follows: \begin{eqnarray*} \valM{p} & = & \pi(p)\\ \valM{\lnot \phi} & = & \Omega \mathbin{\mathchoice{\mysetminusD}{\mysetminusT}{\mysetminusS}{\mysetminusSS}} \valM{\phi}\\ \valM{\phi \land \psi} & = & \valM{\phi} \cap \valM{\psi}\\ \valM{B_{i}^{\theta} \phi} & = & \{\omega \in \Omega \: : \: \mathit{Pr}_{i}(\omega)(\valM{\phi}) \geq \theta\}. \end{eqnarray*} Of course, the final clause of this definition only makes sense if $\valM{\phi}$ is measurable, which follows from an easy induction on formulas using the fact that $$\valM{B_{i}^{\theta} \phi} = \mathit{Pr}_{i}^{-1}(\mathcal{B}^{\theta}(\valM{\phi})).$$ We say that a formula $\phi$ is \defin{true at $\omega$ (in $\mathcal{M}$)} if $\omega \in \valM{\phi}$, and that a set $F$ of formulas is true at $\omega$ if each $\phi \in F$ is true at $\omega$. A formula or set of formulas is \defin{valid in $\mathcal{M}$} if it is satisfied at all worlds in $\mathcal{M}$, and \defin{satisfiable in $\mathcal{M}$} if it is true at some world in $\mathcal{M}$; it is \defin{valid} if it is valid in all probability models, and \defin{satisfiable} if it is satisfiable in some probability model. 
It is worth noting that the introspection condition on frames, which says that every event containing $\mathit{Pr}_{i}^{-1}(\mathit{Pr}_{i}(\omega))$ has probability $1$ according to $\mathit{Pr}_{i}(\omega)$, allows us to deduce the following for all probability models $\mathcal{M}$ (assuming $1 \in \Theta$): \begin{eqnarray*} \omega \in \valM{B_{i}^{\theta} \phi} & \Rightarrow & \mathit{Pr}_{i}^{-1}(\mathit{Pr}_{i}(\omega)) \subseteq \valM{B_{i}^{\theta} \phi}\\ & \Rightarrow & \mathit{Pr}_{i}(\omega)(\valM{B_{i}^{\theta} \phi}) = 1\\ & \Rightarrow & \omega \in \valM{B_{i}^{1} B_{i}^{\theta} \phi}. \end{eqnarray*} This implies that the formula $B_{i}^{\theta} \phi \rightarrow B_{i}^{1} B_{i}^{\theta} \phi$ is valid: whenever agent $i$ believes the probability of $\phi$ is at least $\theta$, she is sure that she has this belief. A similar argument shows that $\lnot B_{i}^{\theta} \phi \rightarrow B_{i}^{1} \lnot B_{i}^{\theta} \phi$ is valid. Of course, this also follows from the stronger assumption that $\mathit{Pr}_{i}^{-1}(\mathit{Pr}_{i}(\omega))$ is itself measurable and has probability $1$, but relative to this logical language, such an assumption is overkill. We can also interpret $\mathcal{L}_{B}^{\Theta}(\Phi, I)$ in type spaces. Although this is not typically done in the literature (though Galeazzi and Lorini \cite{GL16} do), it allows us to state formally the connection between $\mathcal{T}$ and $\F_{\T}$ as defined in Proposition \ref{pro:t-f}, and it highlights the analogies between type spaces and probability frames that we exploit below. An \defin{interpreted type space (over $(\Phi, I)$)} is a pair $\mathcal{I} = (\mathcal{T}, \nu)$ where $\mathcal{T}$ is a type space and $\nu: \Phi \to \Sigma_{X}$ is an \emph{interpretation}; intuitively, $\nu(p)$ specifies the states of nature where $p$ is true. 
As above, $\nu$ induces a function $\valI{\cdot}: \mathcal{L}_{B}^{\Theta} \to \Sigma_{X \times T}$ as follows: \begin{eqnarray*} \valI{p} & = & \nu(p) \times T\\ \valI{\lnot \phi} & = & (X \times T) \mathbin{\mathchoice{\mysetminusD}{\mysetminusT}{\mysetminusS}{\mysetminusSS}} \valI{\phi}\\ \valI{\phi \land \psi} & = & \valI{\phi} \cap \valI{\psi}\\ \valI{B_{i}^{\theta} \phi} & = & \{(x,t) \in X \times T \: : \: \beta_{i}(t_{i})(\valI{\phi}) \geq \theta\}. \end{eqnarray*} Now we can formalize the sense in which the map $\mathcal{T} \mapsto \F_{\T}$ is truth-preserving. \begin{proposition} \label{pro:t-m} Let $\mathcal{I} = (\mathcal{T}, \nu)$ be an interpreted type space, and let $\F_{\T}$ be the probability frame corresponding to $\mathcal{T}$ as defined in Proposition \ref{pro:t-f}. Define $\pi(p) \coloneqq \nu(p) \times T$. Then $\M_{\I} \coloneqq (\F_{\T}, \pi)$ is a probability model, and for all $\phi \in \mathcal{L}_{B}^{\Theta}$, we have $\val{\phi}_{\M_{\I}} = \valI{\phi}$. \end{proposition} \begin{proof} Proposition \ref{pro:t-f} tells us that $\F_{\T}$ is a probability frame, and since $\nu(p) \in \Sigma_{X}$, it is clear that $\pi(p) \in \Sigma_{X \times T}$; it follows that $\M_{\I}$ is a probability model. The equality $\val{\phi}_{\M_{\I}} = \valI{\phi}$ is proved by an easy structural induction on $\phi$. The base case, where $\phi \in \Phi$, follows from the definition of $\pi$, and the induction steps are all trivial. \end{proof} Proposition \ref{pro:t-m} is parametrized by the choice of primitive propositions $\Phi$ and the interpretation $\nu$: it says that for any such choice, the correspondence $\mathcal{T} \mapsto \F_{\T}$ can be extended to a correspondence $\mathcal{I} \mapsto \M_{\I}$ that is truth-preserving with respect to the language $\mathcal{L}_{B}^{\Theta}(\Phi)$. It is worth emphasizing a special case of this result.
Given a type space $\mathcal{T} = (X, (T_{i})_{i \in I}, (\beta_{i})_{i \in I})$, recall that the set $X$ of states is often conceptualized as representing the ``basic facts'' about the game; for example, the strategy profiles that may be played. As such, when $X$ is finite (or even just when $\Sigma_{X}$ contains all singletons), it is natural to take $\Phi = X$ and define $\nu(x) = \{x\}$; in this case, intuitively, the primitive propositions simply say what the true state is. \subsection{Factoring worlds} \label{sec:fac} We turn now to the reverse translation: the construction of a suitable type space from a given probability frame. As we have observed, the difficulty lies in ``factoring'' worlds into states and types. Given a probability frame $\mathcal{F} = (\Omega, (\mathit{Pr}_{i})_{i \in I})$, we might hope to identify types for player $i$ with probability measures of the form $\mathit{Pr}_{i}(\omega)$ for $\omega \in \Omega$, but what are the states? This is the crux of the problem: there is nothing in the definition of $\mathcal{F}$ that allows us to distinguish the ``part'' of a world $\omega$ that represents basic facts; indeed, there is no notion of a ``basic fact'' at all in a probability frame. A sufficiently rich logical language, however, such as $\mathcal{L}_{B}^{\Theta}$, \textit{does} distinguish ``basic'' facts from facts about beliefs. For this reason, the construction of a type space naturally operates at the level of probability \textit{models} (which can interpret languages) rather than frames, and depends crucially on the background language. An \defin{$\mathcal{L}_{B}^{\Theta}$-description} is a set $D \subseteq \mathcal{L}_{B}^{\Theta}$ of formulas that is satisfiable and also \emph{maximal} in the sense that, for each $\phi \in \mathcal{L}_{B}^{\Theta}$, either $\phi \in D$ or $\lnot \phi \in D$. 
Given a probability model $\mathcal{M}$ and a world $\omega$ in $\mathcal{M}$, define the \defin{$\mathcal{L}_{B}^{\Theta}$-description of $\omega$ in $\mathcal{M}$} to be $$D(\omega) \coloneqq \{\phi \in \mathcal{L}_{B}^{\Theta} \: : \: \omega \in \valM{\phi}\}.$$ We omit mention of the language and the model when it is safe to do so. It is easy to see that $D(\omega)$ is an $\mathcal{L}_{B}^{\Theta}$-description; we call $D$ the \emph{description map} for $\mathcal{M}$. Intuitively, $D(\omega)$ records all the information about the world $\omega$ expressible in the language $\mathcal{L}_{B}^{\Theta}$. Let $d_{0}(\omega)$ denote the subset of $D(\omega)$ consisting of the \emph{purely propositional} formulas: that is, Boolean combinations of the primitive propositions. Let $d_{i}(\omega)$ consist of the formulas in $D(\omega)$ that are Boolean combinations of formulas of the form $B_{i}^{\theta} \phi$. Call these the \defin{$0$-description} and the \defin{$i$-description} of $\omega$, respectively. We think of the former as recording the basic facts about $\omega$ (expressible in $\mathcal{L}_{B}^{\Theta}$), and the latter as recording the beliefs of agent $i$ in $\omega$ (again, expressible in $\mathcal{L}_{B}^{\Theta}$). Fix a probability model $\mathcal{M} = ((\Omega,(\mathit{Pr}_{i})_{i \in I}),\pi)$. We construct a type space out of $\mathcal{M}$ by identifying states with $0$-descriptions and $i$-types with $i$-descriptions. Formally, set $$ \textrm{$X \coloneqq \{d_{0}(\omega) \: : \: \omega \in \Omega\}$ and $T_{i} \coloneqq \{d_{i}(\omega) \: : \: \omega \in \Omega\}$.} $$ Intuitively, each state and each type is constituted by a fragment of information about some world $\omega$ in $\mathcal{M}$. 
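Since full descriptions are infinite sets of formulas, any concrete computation must restrict to a finite stock of formulas. The following sketch (again over a small hypothetical model, not from this paper) computes such finite fragments of $d_{0}(\omega)$ and $d_{i}(\omega)$ and collects the distinct fragments as states and types.

```python
from fractions import Fraction

# A hypothetical three-world model: p holds at w1 and w3; the agent "a" has
# the same beliefs at w1 and w2, and different beliefs at w3.
OMEGA = {"w1", "w2", "w3"}
PI = {"p": {"w1", "w3"}}
PR = {"a": {"w1": {"w1": Fraction(1, 2), "w2": Fraction(1, 2), "w3": Fraction(0)},
            "w2": {"w1": Fraction(1, 2), "w2": Fraction(1, 2), "w3": Fraction(0)},
            "w3": {"w3": Fraction(1)}}}

def holds(phi, w):
    kind = phi[0]
    if kind == "prop":
        return w in PI[phi[1]]
    if kind == "not":
        return not holds(phi[1], w)
    if kind == "B":  # B_i^theta psi
        _, i, theta, psi = phi
        ext = [v for v in OMEGA if holds(psi, v)]
        return sum(PR[i][w].get(v, Fraction(0)) for v in ext) >= theta

# Finite stocks of propositional and belief formulas standing in for the
# (infinite) 0- and a-descriptions.
PROPOSITIONAL = [("prop", "p"), ("not", ("prop", "p"))]
BELIEF = [("B", "a", Fraction(1, 2), ("prop", "p")),
          ("B", "a", Fraction(1, 1), ("prop", "p"))]

def d0(w):  # propositional fragment of D(w)
    return frozenset(phi for phi in PROPOSITIONAL if holds(phi, w))

def di(w):  # agent a's belief fragment of D(w)
    return frozenset(phi for phi in BELIEF if holds(phi, w))

X = {d0(w) for w in OMEGA}    # states: distinct 0-descriptions
T_a = {di(w) for w in OMEGA}  # agent a's types: distinct a-descriptions
print(len(X), len(T_a))
```

Here $w_{1}$ and $w_{3}$ share a $0$-description while $w_{1}$ and $w_{2}$ share an $a$-description, so three worlds yield only two states and two types; this is exactly the "factoring" the construction relies on.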
We also use this information to define the measure structure: for each $\phi \in \mathcal{L}_{B}^{\Theta}$, set $$ \textrm{$E_{0}(\phi) \coloneqq \{x \in X \: : \: \phi \in x\}$ and $E_{i}(\phi) \coloneqq \{t_{i} \in T_{i} \: : \: \phi \in t_{i}\}$;} $$ we consider $X$ and $T_{i}$ as measurable spaces equipped with the $\sigma$-algebras generated by the collections $\{E_{0}(\phi) \: : \: \phi \in \mathcal{L}_{B}^{\Theta}\}$ and $\{E_{i}(\phi) \: : \: \phi \in \mathcal{L}_{B}^{\Theta}\}$, respectively. The reason we use formulas to pick out events is that, ultimately, we will define each probability measure $\beta_{i}(t_{i})$ on $X \times T$ using the information encoded in $t_{i}$ about the likelihoods of formulas. For example, if $B_{i}^{\theta}\phi \in t_{i}$, this tells us that $\beta_{i}(t_{i})$ must assign probability at least $\theta$ to the subset of $X \times T$ where $\phi$ holds. Of course, in order to make sense of this, we must first define the event in $X \times T$ that corresponds to $\phi$. As a first step toward this, we show that given a state-type tuple $(x,t) \in X \times T$, the collection of formulas obtained by taking the union of all these partial descriptions, namely $x \cup \bigcup_{i} t_{i}$, is satisfiable. It is obvious that every $0$-description $x \in X$ and $i$-description $t_{i} \in T_{i}$ is individually satisfiable since, by definition, each is satisfied at some world in $\mathcal{M}$. On the other hand, there is no guarantee that they are all satisfied at the \textit{same} world in $\mathcal{M}$ (and in general they may not be), so their joint satisfiability is not so obvious. \begin{lemma} \label{lem:sat} For all $(x,t) \in X \times T$, the collection $x \cup \bigcup_{i} t_{i}$ is satisfiable. \end{lemma} \begin{proof} As observed, there are worlds $\omega_0, \ldots, \omega_n$ in $\mathcal{M}$ such that $\omega_{0}$ satisfies $x$ and $\omega_{i}$ satisfies $t_{i}$ for $i = 1, \ldots, n$.
We now construct a model $\mathcal{M}^*$ and world $\omega^*$ in $\mathcal{M}^*$ such that $\mathcal{M}^*$ consists of $n$ disjoint copies of $\mathcal{M}$ together with the world $\omega^*$; formally, $\mathcal{M}^* = ((\Omega^*, (\mathit{Pr}_{i}^*)_{i \in I}), \pi^*)$, where \begin{itemize} \item $\Omega^* = \{(\omega,j): \omega \in \Omega, j \in \{1,\ldots, n\}\} \cup \{\omega^*\}$; \item $\pi^*(p) = \left\{\begin{array}{ll} \cup_{j=1}^n (\pi(p) \times \{j\}) &\mbox{if $\omega_0 \notin \pi(p)$}\\ \cup_{j=1}^n (\pi(p) \times \{j\}) \cup \{\omega^*\} &\mbox{if $\omega_0 \in \pi(p)$}\\ \end{array}\right.$ \item $\mathit{Pr}_{i}^*((\omega,j))(U \times \{j\}) = \mathit{Pr}_i(\omega)(U)$ for $\omega \in \Omega$ and $j \in \{1,\ldots, n\}$, and $\mathit{Pr}_i^*(\omega^*)(U \times \{i\}) = \mathit{Pr}_i(\omega_i)(U)$ (so the support of $\mathit{Pr}_{i}^{*}((\omega,j))$ is contained in $\Omega \times \{j\}$, and that of $\mathit{Pr}_i^*(\omega^*)$ in $\Omega \times \{i\}$). \end{itemize} It is easy to check that $\omega^*$ agrees with $\omega_0$ on propositional formulas and with $\omega_i$ on $i$-descriptions. Thus, the desired result holds. \end{proof} In fact, not only is $x \cup \bigcup_{i} t_{i}$ satisfiable, but it determines a unique $\mathcal{L}_{B}^{\Theta}$-description. \begin{lemma} \label{lem:det} For all $(x,t) \in X \times T$, there is a unique $\mathcal{L}_{B}^{\Theta}$-description $D$ such that $D \supseteq x \cup \bigcup_{i} t_{i}$. \end{lemma} \begin{proof} By Lemma \ref{lem:sat}, such a $D$ exists (take $D = D(\omega)$ for some $\omega$ that satisfies $x \cup \bigcup_{i} t_{i}$). Uniqueness follows from the following observation, easily proved by structural induction on $\phi$: for all $\phi \in \mathcal{L}_{B}^{\Theta}$, either $x \cup \bigcup_{i} t_{i}$ entails $\phi$ or $x \cup \bigcup_{i} t_{i}$ entails $\lnot \phi$. \end{proof} Let $D(x,t)$ denote the unique description determined by $x \cup \bigcup_{i} t_{i}$ as in Lemma \ref{lem:det}. It is easy to see that $D(d_{0}(\omega), d(\omega)) = D(\omega)$.
On the other hand, as mentioned above, the collection of descriptions of the form $D(x,t)$ may be strictly larger than those of the form $D(\omega)$, since some tuples $(x,t)$ may combine partial descriptions that are not simultaneously satisfied at any world in $\mathcal{M}$. The description $D(x,t)$ provides a natural way to associate formulas with events in $X \times T$. For each $\phi \in \mathcal{L}_{B}^{\Theta}$, define $$\textstyle [\phi] \coloneqq \{(x,t) \in X \times T \: : \: \phi \in D(x,t)\}.$$ \begin{lemma} \label{lem:gen} $\Sigma_{X \times T}$ is generated by the collection $\{[\phi] \: : \: \phi \in \mathcal{L}_{B}^{\Theta}\}$. \end{lemma} \begin{proof} It is easy to see that every $\phi \in \mathcal{L}_{B}^{\Theta}$ is a Boolean combination of primitive propositions and formulas of the form $B_{i}^{\theta}\psi$; it follows that $\{[\phi] \: : \: \phi \in \mathcal{L}_{B}^{\Theta}\}$ is the algebra generated by all sets of the form $[p]$ and $[B_{i}^{\theta}\psi]$. Now observe that $(x,t) \in [p]$ iff $p \in x$, so $[p] = E_{0}(p) \times T$, and similarly, $(x,t) \in [B_{i}^{\theta}\psi]$ iff $B_{i}^{\theta}\psi \in t_{i}$, so $[B_{i}^{\theta}\psi] = X \times E_{i}(B_{i}^{\theta}\psi) \times T_{-i}$. Thus, $\{[\phi] \: : \: \phi \in \mathcal{L}_{B}^{\Theta}\} \subseteq \Sigma_{X \times T}$. To see that $\Sigma_{X \times T}$ is in fact generated by this collection, it suffices to observe that if each of $E_{0}(\phi_{0})$, $E_{1}(\phi_{1})$, \ldots, $E_{n}(\phi_{n})$ is nonempty, then $$E_{0}(\phi_{0}) \times E_{1}(\phi_{1}) \times \cdots \times E_{n}(\phi_{n}) = [\phi_{0} \land \phi_{1} \land \cdots \land \phi_{n}].$$ \end{proof} We turn now to defining the probability measures $\beta_{i}(t_{i})$. Each $t_{i} \in T_{i}$ is a collection of formulas in $\mathcal{L}_{B}^{\Theta}$ that bear on agent $i$'s beliefs. We can use these formulas to constrain the space of possible outputs of $\beta_{i}(t_{i})$. 
Moreover, provided $\mathcal{L}_{B}^{\Theta}$ is rich enough, these constraints yield a unique probability measure. Let $\P_{t_{i}}$ denote the set of all probability measures $\mu$ on $X \times T$ such that, for each $\phi \in \mathcal{L}_{B}^{\Theta}$ and all $\theta \in \Theta$, \begin{equation} \label{eqn:con} \mu([\phi]) \geq \theta \; \Leftrightarrow \; B_{i}^{\theta} \phi \in t_{i}. \end{equation} \begin{lemma} \label{lem:pti} $\P_{t_{i}} \neq \emptyset$. Moreover, if $\Theta$ is dense in $[0,1]$, then $|\P_{t_{i}}| = 1$. \end{lemma} \begin{proof} First we show that $\P_{t_{i}}$ is nonempty. Let $\omega$ be a world in $\mathcal{M}$ such that $d_{i}(\omega) = t_{i}$. For each $\phi \in \mathcal{L}_{B}^{\Theta}$, define $$\mu_{i,\omega}([\phi]) = \mathit{Pr}_{i}(\omega)(\valM{\phi}).$$ One can check that $\mu_{i,\omega}$ is a pre-measure on the algebra $\{[\phi] \: : \: \phi \in \mathcal{L}_{B}^{\Theta}\}$ and satisfies (\ref{eqn:con}). By Carath\'eodory's extension theorem \cite[Theorem 1.14]{Folland}, there is a unique extension $\tilde{\mu}_{i,\omega}$ of $\mu_{i,\omega}$ to the $\sigma$-algebra generated by $\{[\phi] \: : \: \phi \in \mathcal{L}_{B}^{\Theta}\}$, which by Lemma \ref{lem:gen} is just $\Sigma_{X \times T}$. Therefore, by construction, $\tilde{\mu}_{i,\omega} \in \P_{t_{i}}$. If $\Theta$ is dense in $[0,1]$, then it is easy to see that for all $\phi \in \mathcal{L}_{B}^{\Theta}$, if $\mu \in \P_{t_{i}}$ then $$\mu([\phi]) = \sup\{\theta \in \Theta \: : \: B_{i}^{\theta} \phi \in t_{i}\}.$$ It follows that $\P_{t_{i}} = \{\tilde{\mu}_{i,\omega}\}$. \end{proof} Let us restrict our attention for the time being to the case where $\Theta$ is a countable, dense subset of $[0,1]$; indeed, it is common to assume that $\Theta = [0,1] \cap \mathbb{Q}$.
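The role of density can be illustrated numerically: a type records only which thresholds $\theta \in \Theta$ a probability clears, and when $\Theta$ is dense that probability is recovered as the supremum of the cleared thresholds. The sketch below uses an arbitrary target value (not from this paper) and approximates $\Theta = [0,1] \cap \mathbb{Q}$ by rationals of bounded denominator.

```python
from fractions import Fraction

# Hypothetical true probability that a type's beliefs encode about [phi].
true_prob = Fraction(5, 7)

def thresholds(max_den):
    """Rational thresholds in [0, 1] with denominator <= max_den."""
    return sorted({Fraction(n, d)
                   for d in range(1, max_den + 1)
                   for n in range(0, d + 1)})

# The i-description only tells us, for each theta, whether B_i^theta phi
# holds, i.e. whether true_prob >= theta; we recover the value as a sup.
for max_den in (5, 50, 500):
    cleared = [th for th in thresholds(max_den) if true_prob >= th]
    recovered = max(cleared)  # sup over this finite approximation of Theta
    print(max_den, recovered, float(true_prob - recovered))
```

As the approximation of $\Theta$ is refined, the recovered value converges to (and here exactly reaches) the encoded probability; with a sparse $\Theta$, by contrast, many distinct measures clear the same thresholds, which is why $|\P_{t_{i}}| = 1$ needs density.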
Countability ensures that $\mathcal{L}_{B}^{\Theta}$ contains only countably-many modalities, and by Lemma \ref{lem:pti}, density allows us to define $\beta_{i}(t_{i})$ to be the unique element of $\P_{t_{i}}$. We then have the following: \begin{proposition} Let $\mathcal{M}$ be a probability model, and let $\T_{\M} \coloneqq (X, (T_{i})_{i \in I}, (\beta_{i})_{i \in I})$ as defined above. Then $\T_{\M}$ is a type space. Define $\nu(p) \coloneqq E_{0}(p)$. Then $\I_{\M} \coloneqq (\T_{\M},\nu)$ is an interpreted type space, and for all $\phi \in \mathcal{L}_{B}^{\Theta}$, we have $$\omega \in \valM{\phi} \; \Rightarrow \; (d_{0}(\omega), d(\omega)) \in \val{\phi}_{\I_{\M}}.$$ \end{proposition} \begin{proof} First we observe that $\Sigma_{\Delta(X \times T)}$ is generated by all events of the form $\mathcal{B}^{\theta}([\phi])$; this follows from Lemma \ref{lem:gen} together with \cite[Lemma 4.5]{HeSa98}. Thus, to prove that $\beta_{i}$ is measurable it suffices to prove that each set $\beta_{i}^{-1}(\mathcal{B}^{\theta}([\phi]))$ is measurable. By definition, we know that $\beta_{i}(t_{i})([\phi]) \geq \theta$ iff $B_{i}^{\theta} \phi \in t_{i}$; it follows that $$\beta_{i}^{-1}(\mathcal{B}^{\theta}([\phi])) = E_{i}(B_{i}^{\theta} \phi),$$ which is measurable by definition. That $\beta_{i}(t_{i})$ concentrates on $t_{i}$ follows from the fact that $$(x,t) \in [B_{i}^{\theta}\psi] \Leftrightarrow B_{i}^{\theta}\psi \in t_{i} \Leftrightarrow B_{i}^{1}B_{i}^{\theta}\psi \in t_{i} \Leftrightarrow \beta_{i}(t_{i})([B_{i}^{\theta}\psi]) = 1.$$ Finally, the semantic equivalence follows by structural induction on $\phi$. \end{proof} \section{Universal Type Spaces and Canonical Models} \label{sec:uni} \subsection{Universal type spaces} The existence of a \emph{universal type space} \cite{MZ} underpins the use of type spaces as a general framework for modeling beliefs: roughly speaking, it guarantees that they do not rule out any possible collection of beliefs. 
Individual type spaces, of course, can be quite small and omit many configurations of beliefs. The universal type space, by contrast, essentially includes all possible configurations of belief; in particular, this means we need not be concerned with gaps in our representation of games. Formally, given type spaces $\mathcal{T} = (X, (T_{i})_{i \in I}, (\beta_{i})_{i \in I})$ and $\mathcal{T}' = (X, (T_{i}')_{i \in I}, (\beta'_{i})_{i \in I})$ (with a common set $X$ of states), a profile of functions $f_{i}: T_{i} \to T_{i}'$ constitutes a \defin{type morphism} from $\mathcal{T}$ to $\mathcal{T}'$ provided that, for each $i \in I$, $t_{i} \in T_{i}$, and each event $E \subseteq X \times T'$, $$\beta_{i}'(f_{i}(t_{i}))(E) = \beta_{i}(t_{i})(f^{-1}(E)),$$ where $f: X \times T \to X \times T'$ is defined by $f = (id_{X}, f_{1}, \ldots, f_{n})$. Roughly speaking, this says that each $f_{i}$ assigns to each $t_{i} \in T_{i}$ a type $f_{i}(t_{i}) \in T_{i}'$ that agrees with $t_{i}$ on the probabilities of all events, where events in $\mathcal{T}$ and $\mathcal{T}'$ are identified via the correspondence given by $f$. A type space $\mathcal{T}^{*}$ is called \defin{universal (for $X$)} if, for every type space $\mathcal{T} = (X, (T_{i})_{i \in I}, (\beta_{i})_{i \in I})$, there exists a unique type morphism from $\mathcal{T}$ to $\mathcal{T}^{*}$. Thus, each such $\mathcal{T}$ can be thought of as existing ``inside'' $\mathcal{T}^{*}$ (via the mapping $f$). Type morphisms are defined so as to preserve the structure of belief. 
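For finite type spaces, the type-morphism condition can be checked exhaustively by enumerating events. The sketch below is a hypothetical one-agent example (not from this paper) in which two source types with identical beliefs are collapsed onto a single target type.

```python
from itertools import chain, combinations
from fractions import Fraction

# Tiny hypothetical one-agent type spaces over a common state space X.
X = ("x1", "x2")
T = ("t1", "t2")       # agent's types in the source space
T_PRIME = ("u1",)      # agent's types in the target space

# beta[t]: probability measure on X x T, given as point masses.
BETA = {"t1": {("x1", "t1"): Fraction(1, 2), ("x2", "t1"): Fraction(1, 2)},
        "t2": {("x1", "t2"): Fraction(1, 2), ("x2", "t2"): Fraction(1, 2)}}
BETA_PRIME = {"u1": {("x1", "u1"): Fraction(1, 2), ("x2", "u1"): Fraction(1, 2)}}

F = {"t1": "u1", "t2": "u1"}   # candidate type morphism f_i

def f(point):                  # induced map (id_X, f_i) on X x T
    x, t = point
    return (x, F[t])

def is_type_morphism():
    """Check beta'(f_i(t))(E) == beta(t)(f^{-1}(E)) for every event E."""
    points_prime = [(x, u) for x in X for u in T_PRIME]
    events = chain.from_iterable(
        combinations(points_prime, r) for r in range(len(points_prime) + 1))
    for E in map(set, events):
        for t in T:
            lhs = sum(BETA_PRIME[F[t]].get(pt, Fraction(0)) for pt in E)
            preimage = [pt for pt in ((x, s) for x in X for s in T)
                        if f(pt) in E]
            rhs = sum(BETA[t].get(pt, Fraction(0)) for pt in preimage)
            if lhs != rhs:
                return False
    return True

print(is_type_morphism())
```

The check passes because the target measure is exactly the pushforward of each source measure under $f$; collapsing belief-equivalent types in this way is precisely the kind of structure preservation the definition demands.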
Indeed, given any interpretation $\nu: \Phi \to \Sigma_{X}$, it is easy to see that if $(f_{1}, \ldots, f_{n})$ is a type morphism from $\mathcal{T}$ to $\mathcal{T}'$, then for any $(x,t) \in X \times T$ and any $\phi \in \mathcal{L}_{B}^{[0,1]}(\Phi)$, we have $$(x,t) \in \val{\phi}_{(\mathcal{T},\nu)} \Leftrightarrow f(x,t) \in \val{\phi}_{(\mathcal{T}',\nu)}.$$ As a consequence, the universal type space for $X$ satisfies all the $\mathcal{L}_{B}^{[0,1]}(\Phi)$-descriptions that are satisfied in \textit{some} type space over $X$. It is natural to wonder whether this property characterizes the universal type space; the connection with canonical models we now present essentially amounts to a formalization of this idea. \subsection{Canonical models} \label{sec:can} The classical canonical model construction is used to prove completeness of various modal systems. Given some axiom system $\mathsf{AX}$ of interest, a model is constructed wherein each world corresponds to a maximal $\mathsf{AX}$-consistent set of formulas, with additional structure derived from the properties of these sets of formulas. The construction we present here differs in that we are not concerned with axiomatics---indeed, for logics that fail to be compact (such as, notably, the logic of $\mathcal{L}_{B}^{[0,1]}$ as interpreted in probability frames), consistent sets of formulas need not be satisfiable, so the canonical model construction fails. 
Nonetheless, we can adapt this construction by replacing ``consistent'' with ``satisfiable''; in other words, we can build a model in which the worlds are exactly the $\mathcal{L}_{B}^{[0,1]}$-descriptions.% \footnote{As we said above, a similar construction appears in \cite{Aumann99a}, though the connection to type spaces is not explored in any depth.} Intuitively, such a model contains a world satisfying every such description; ultimately, we will show that we can obtain a universal type space by constructing such a model and then translating it into a type space as in Section \ref{sec:fac}. Consider a fixed language $\mathcal{L}_{B}^{[0,1]}(\Phi)$ and a class of probability models $\mathscr{C}$; let $\bar{\Omega}$ denote the set of all $\mathcal{L}_{B}^{[0,1]}(\Phi)$-descriptions satisfiable in some model in $\mathscr{C}$. Define $\hat{\phi} = \{\bar{\omega} \in \bar{\Omega} \: : \: \phi \in \bar{\omega}\}$, and let $\Sigma_{\bar{\Omega}}$ be the $\sigma$-algebra generated by the collection $\mathcal{A} = \{\hat{\phi} \: : \: \phi \in \mathcal{L}_{B}^{[0,1]}(\Phi)\}$. Define $\mu_{i, \bar{\omega}}: \mathcal{A} \to [0,1]$ by $$\mu_{i, \bar{\omega}}(\hat{\phi}) = \sup\{\theta \in [0,1] \: : \: B_{i}^{\theta}\phi \in \bar{\omega}\}.$$ It is not hard to check that $\mu_{i,\bar{\omega}}$ is a pre-measure on the algebra $\mathcal{A}$, so, by Carath\'eodory's extension theorem, it can be extended to a unique probability measure on $\Sigma_{\bar{\Omega}}$; let $\bar{\mathit{Pr}}_{i}(\bar{\omega})$ denote this extension. Finally, for each $p \in \Phi$, set $\bar{\pi}(p) = \hat{p}$. \begin{proposition} $\bar{\mathcal{M}} = ((\bar{\Omega}, (\bar{\mathit{Pr}}_{i})_{i \in I}), \bar{\pi})$ is a probability model, and for all $\phi \in \mathcal{L}_{B}^{[0,1]}$, we have $\val{\phi}_{\bar{\mathcal{M}}} = \hat{\phi}$.
Moreover, $\bar{\mathcal{M}}$ is universal for $\mathscr{C}$ in the sense that, for all $\mathcal{M} \in \mathscr{C}$, there is a truth-preserving map (namely, $D$, the description map for $\mathcal{M}$) from $\mathcal{M}$ to $\bar{\mathcal{M}}$. \end{proposition} Call $\bar{\mathcal{M}}$ the \defin{universal probability model for $\mathscr{C}$ over $\mathcal{L}_{B}^{[0,1]}(\Phi)$}. As we mentioned earlier, Meier \cite{Meier12} works with an infinitary version of the language $\mathcal{L}_{B}^{[0,1]}(\Phi)$ and constructs a canonical model for that language. Call his language $\mathcal{L}_{B}^{\infty,[0,1]}$. Although $\mathcal{L}_{B}^{\infty,[0,1]}$ is infinitary, as observed in \cite[Lemma 4.1]{HP08a}, every $\mathcal{L}_{B}^{[0,1]}$-description can be uniquely extended to an $\mathcal{L}_{B}^{\infty,[0,1]}$-description. It follows that the canonical model for the language $\mathcal{L}_{B}^{[0,1]}$ is isomorphic to the canonical model for $\mathcal{L}_{B}^{\infty,[0,1]}$. Meier shows that the canonical model for $\mathcal{L}_{B}^{\infty,[0,1]}$ is universal. Of course, it follows that the canonical model for $\mathcal{L}_{B}^{[0,1]}$ is also universal. We give an independent proof of this result here, since it allows us to connect universal type spaces to the language considerations discussed earlier. \subsection{Translation} \label{sec:trs} Let $X$ be a measurable space of states where $\Sigma_{X}$ is generated by the singletons $\{x\}$.% \footnote{It is possible to weaken this condition to the following: for every $x,y \in X$, there exists a ``separating event'' $E \in \Sigma_{X}$ such that $x \in E$ and $y \notin E$. The issue here is that if $X$ contains points that are not separated in this way, they will not differ on any description and so the universal model construction we employ below will end up identifying them.
Notice, however, that this is only a problem because the universal type space for state space $X$ is required to use $X$ as the state space, even when $X$ contains ``redundant'' states that are not separated by any event. Intuitively, however, this is unnecessary---a slightly relaxed notion of a universal type space would simply require that its state space be rich enough to reflect the measure structure of $X$, rather than its set-theoretic structure. And indeed, this is essentially what you get by running the construction below without the separability requirement articulated above.} We construct a universal type space for $X$ by first constructing a universal model as in Section \ref{sec:can}. Consider the language $\mathcal{L}_{B}^{[0,1]}(X)$ (i.e., where $\Phi = X$) and the class $\mathscr{C}_{X}$ of probability models such that $\{\pi(x) \: : \: x \in X\}$ partitions $\Omega$. Intuitively, this condition hard-codes the constraint that exactly one state $x \in X$ is the ``true'' state of the world. \begin{theorem} \label{thm:can} Let $\bar{\mathcal{M}}$ be the universal probability model for $\mathscr{C}_{X}$ over $\mathcal{L}_{B}^{[0,1]}(X)$. Then the type space $\mathcal{T}_{\bar{\mathcal{M}}}$ is universal for $X$. \end{theorem} \begin{proof} The state space for $\mathcal{T}_{\bar{\mathcal{M}}}$ is, by definition, the collection $\{d_{0}(\bar{\omega}) \: : \: \bar{\omega} \in \bar{\Omega}\}$; it is easy to see that each set $d_{0}(\bar{\omega})$ contains exactly one element of $X$, and this correspondence is a measurable bijection with measurable inverse. So $\mathcal{T}_{\bar{\mathcal{M}}}$ has the ``right'' state space. Next, let $\mathcal{T} = (X, (T_{i})_{i \in I}, (\beta_{i})_{i \in I})$ be any type space based on $X$. We must produce a (unique) type morphism from $\mathcal{T}$ to $\mathcal{T}_{\bar{\mathcal{M}}}$. 
To do so, define $\nu: \Phi \to \Sigma_{X}$ by $\nu(x) = \{x\}$, let $\mathcal{I} = (\mathcal{T},\nu)$ be the corresponding interpreted type space, and consider the model $\mathcal{M}_{\mathcal{I}}$ obtained from $\mathcal{I}$ as in Proposition \ref{pro:t-m}. It is easy to see that $\mathcal{M}_{\mathcal{I}} \in \mathscr{C}_{X}$, and because of this, for each $(x,t) \in X \times T$ and $i \in I$, there is a unique $d_{i}(\bar{\omega})$ that is satisfied at $(x,t)$. In this case, define $f_{i}(t_{i}) = d_{i}(\bar{\omega})$. \end{proof} Theorem \ref{thm:can} realizes the intuition that the universal type space for $X$ is precisely the type space that satisfies all and only the $\mathcal{L}_{B}^{[0,1]}(\Phi)$-descriptions that are satisfied in some type space over $X$. Thinking of universal type spaces in this way makes the dependence on language plain, and suggests alternative notions of ``universal type spaces'' obtained by varying the language over which the universal quantification takes place. That is, given a class of type spaces $\mathscr{T}$ and a language $\mathcal{L}$ interpretable in the type spaces in $\mathscr{T}$, we can define a type space $\mathcal{T}^{*}$ to be \defin{universal for $\mathscr{T}$ with respect to $\mathcal{L}$} provided every $\mathcal{L}$-description satisfiable in $\mathscr{T}$ is (uniquely) satisfied in $\mathcal{T}^{*}$. Naturally, we might hope to construct $\mathcal{T}^{*}$ by transforming an appropriate canonical/universal model. The translation defined in Section \ref{sec:tra} does the job for languages of the form $\mathcal{L}_{B}^{\Theta}$ when $\Theta$ is dense in $[0,1]$. Generalizing this result to other languages, both richer and poorer, is the subject of ongoing research. One natural way to coarsen the language is by dropping the assumption that $\Theta$ is dense in $[0,1]$.
An extreme case of this would be to take $\Theta = \{1\}$, corresponding to a standard modal language of qualitative, ``probability $1$'' belief (see, e.g., \cite{Hal31}). In this case, the sets of measures $\P_{t_{i}}$ defined in Section \ref{sec:tra} encode only information regarding those events that $t_{i}$ assigns probability 1 to. Another natural modification to the language is to enrich it with a knowledge modality. Logics of knowledge and belief have been well-studied, and canonical models certainly exist in such settings (see \cite{Len} and the references in \cite[Chapter 8]{Hal31}). By contrast, \emph{knowledge spaces}, an epistemic analogue to type spaces, have been shown \textit{not} to permit a universal object \cite{FHV1,HeSa1}. What is the source of this mismatch? Does the translation technique we present fundamentally fail to generalize to models of knowledge? Or can the canonical model construction in the modal case inform a new, type-theoretic representation of knowledge that does enjoy a universal model? We leave these questions to future work. \section{Conclusion} \label{sec:con} We have related probability frames and type spaces in a way that makes clear the critical role of language. Our approach allows us to show the deep connections between the canonical models that are standard in the modal logic community and the universal type spaces that play a critical role in epistemic game theory. We believe that further work, considering different choices of language, will further illuminate the connections between these two modeling paradigms. \bibliographystyle{eptcs}
\section{Introduction} Source separation is a signal processing problem that consists in recovering individual superimposed \textit{sources} from a \textit{mixture}. Since 2008, the role of the Signal Separation Evaluation Campaign (SiSEC) has been to compare performance of separation systems on a voluntary and community-based basis, by defining tasks, datasets and metrics to evaluate methods~\cite{sassec2007,sisec2008,sisec0710,sisec2011,sisec2013,sisec2015,sisec2016}. Although source separation may find applications in several domains, the focus of SiSEC has always mostly been on audio source separation. This year, we decided to drop the legacy speech separation and denoising tasks UND and BGN, because they are now the core focus of other very large and successful campaigns such as CHiME~\cite{chime,chime2,chime3}. Instead, most of our efforts were spent on music separation, where the SiSEC MUS task is playing an important role, both in terms of datasets and participation. However, we also maintained the ASY task of asynchronous separation, due to its originality and its alignment with the objectives of SiSEC. While the primary objective of SiSEC is to regularly report on the progress made by the community through standardized evaluations, its secondary objective is also to provide useful resources for research in source separation, even outside the scope of the campaign itself. This explains why the SiSEC data has always been made public, to be used for related publications. Since 2015, the scope of the SiSEC MUS data has been significantly widened, so that it can serve not only for evaluation, but also for the design of music separation systems. This important shift is motivated by the recent development of systems based on deep learning, which now define the state of the art and require large amounts of learning data.
This led to the proposal of the MSD~\cite{sisec2015} and DSD100~\cite{sisec2016} datasets in the previous editions. This year's SiSEC presents several contributions. First, the computation of oracle performance goes beyond the usual Ideal Binary Mask (IBM) to also include the Ideal Ratio Mask (IRM) and the Multichannel Wiener Filter (MWF). Second, we released MUSDB18, which comprises almost 10~h of music with separated stems. Third, we released version~4 of the \texttt{BSS~Eval} toolbox, which handles time-invariant distortion filters, significantly speeding up computations\footnote{\url{sisec.inria.fr}.}. \section{Oracle performance for audio separation} \label{sec:oracle} We write $I$ for the number of channels of the audio mixture: $I=2$ for stereo. We write $x$ for the 3-dimensional complex array obtained by stacking the Short-Time Fourier Transforms (STFT) of all channels. Its dimensions are $F\times T\times I$, where $F,T$ stand for the number of frequency bands and time frames, respectively. Its values at Time-Frequency (TF) bin $\left(f,t\right)$ are written $x\left(f,t\right)\in\mathbb{C}^I$, with entries $x_i\left(f,t\right)$. The mixture is the sum of the source \textit{images}: $x\left(f,t\right)=\sum_j y_j\left(f,t\right)$, which are also multichannel. A filtering method $\sboxed{m}$ usually computes estimates $\hat{y}_j^{\sboxed{m}}$ for the source images linearly from $x$: \begin{equation} \hat{y}_j^{\sboxed{m}}\ftt{m}=M_j^{\sboxed{m}}\ftt{m} x\left(f,t\right),\label{eq:TFmask} \end{equation} where $\thet{m}$ are some parameters specific to $\sboxed{m}$ and $M_j\ftt{m}$ is an $I\times I$ complex matrix called a TF \textit{mask}, computed using $\thet{m}$ in a way specific to method~$\sboxed{m}$. Once given the filtering strategy $\sboxed{m}$, the objective of a source separation system is to analyze the mixture to obtain parameters $\thet{m}$ that yield good separation performance.
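Eq.~(\ref{eq:TFmask}) amounts to a per-bin multiplication of the mixture STFT by a mask. The following single-channel toy sketch (random toy spectrograms, not the released oracle code) applies a soft magnitude-ratio mask and checks that the masked estimates sum back to the mixture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy complex STFTs for two sources (F=4 bands, T=5 frames, one channel);
# in the multichannel case the mask would be an I x I matrix per TF bin.
F, T = 4, 5
y = [rng.standard_normal((F, T)) + 1j * rng.standard_normal((F, T))
     for _ in range(2)]
x = y[0] + y[1]                      # mixture STFT: sum of source images

# A soft mask in [0, 1] per TF bin (here: magnitude ratio), applied to the
# mixture as in Eq. (1): y_hat_j = M_j * x.
eps = np.finfo(float).eps
M = [np.abs(s) / (np.abs(y[0]) + np.abs(y[1]) + eps) for s in y]
y_hat = [m * x for m in M]

# The masks sum to one, so the estimates add back up to the mixture.
assert np.allclose(y_hat[0] + y_hat[1], x)
```

In the stereo case, each $M_j\ftt{m}$ becomes an $I\times I$ matrix per bin and the elementwise product is replaced by a matrix-vector product, as in Eq.~(\ref{eq:TFmask}).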
For evaluation purposes, it is useful to know how good a filtering strategy can be, i.e. to have some upper bound on its performance, which is what an \textit{oracle} provides~\cite{vincent2007oracle}: \begin{equation} \thet{m}^{\star}=\underset{\thet{m}}{\text{argmin}}\sum_{f,t,j}\left\Vert y_{j}\left(f,t\right)-\hat{y}_{j}^{\sboxed{m}}\ftt{m}\right\Vert, \end{equation} where $\Vert\cdot\Vert$ is any norm deemed appropriate. In this SiSEC, we covered the three most commonly used filtering strategies, and assessed the performance of their respective oracles: \begin{enumerate} \item The \textbf{Ideal Binary Mask} (\textit{IBM},~\cite{wang2005}) is arguably the simplest filtering method. It processes all $\left(f,t,i\right)$ of the mixture independently and simply assigns each of them to one source only: $M_{ij}^\sboxed{IBM}\left(f,t\right)\in\left\{0,1\right\}$. The IBM1 method is defined as $M_{ij}=1$ iff source $j$ has a magnitude $\left|y_{ij}(f,t)\right|$ that is at least half the sum of all source magnitudes. IBM2 is defined similarly with the source power spectrograms $\left|y_{ij}(f,t)\right|^2$. \item The \textbf{Ideal Ratio Mask} (IRM), also called the $\alpha$-Wiener filter~\cite{liutkus15}, relaxes the binary nature of the IBM. It processes all $\left(f,t,i\right)$ through multiplication by $M_{ij}^\sboxed{IRM}\in\left[0,1\right]$ defined as: \begin{equation} M^{\sboxed{IRM}}_{ij}\left(f,t\right)=\frac{v_{ij}\left(f,t\right)}{\sum_{j'}v_{ij'}\left(f,t\right)}, \end{equation} where $v_{ij}\left(f,t\right)=\left|y_{ij}\left(f,t\right)\right|^\alpha$ is the fractional power spectrogram of the source image $y_{ij}$. Particular cases include the \textit{IRM2} Wiener filter for $\alpha=2$ and the \textit{IRM1} magnitude ratio mask for $\alpha=1$. \item The \textbf{Multichannel Wiener Filter} (\textit{MWF}, \cite{duong10}) exploits multichannel information, while IBM and IRM do not.
$M^{\sboxed{MWF}}_{j}\left(f,t\right)$ is an $I\times I$ complex matrix given by: \begin{equation} M_{j}^{\sboxed{MWF}}\left(f,t\right)=C_{j}\left(f,t\right) C_{x}^{-1}\left(f,t\right), \end{equation} where $C_j\left(f,t\right)$ is the $I\times I$ covariance matrix for source $j$ at TF bin $\left(f,t\right)$ and $C_x=\sum_j C_j$. In the classical local Gaussian model \cite{duong10}, the further parameterization $C_j\left(f,t\right)=v_j\left(f,t\right) R_j\left(f\right)$ is adopted, with $R_j$ being the $I\times I$ \textit{spatial covariance matrix}, encoding the average correlations between channels at frequency bin $f$, and $v_j\left(f,t\right)\geq0$ encoding the power spectral density at $\left(f,t\right)$. The optimal values for these parameters are easily computed from the true sources $y_j$ \cite{liutkus2013}. \end{enumerate} These five oracle systems IBM1, IBM2, IRM1, IRM2, and MWF have been implemented in Python and released under an open-source license\footnote{\url{github.com/sigsep/sigsep-mus-oracle}}. \section{Data and metrics} \subsection{The MUSDB18 Dataset} For the organization of the present SiSEC, the MUSDB18 corpus was released~\cite{musdb18}, comprising tracks from MedleyDB~\cite{medleydb}, DSD100~\cite{sisec2015,sisec2016}, and other material. It contains $150$ full-length tracks, totaling approximately~$10$~h of audio. \begin{itemize} \item All items are full-length tracks, enabling the handling of long-term musical structures, and the evaluation of quality over regions where sources are silent. \item All signals are stereo and mixed using professional digital audio workstations, and are thus representative of real application scenarios. \item All signals are split into 4 predefined categories: bass, drums, vocals, and other. This fixed set of targets promotes automation of the algorithms. \item Many musical genres are represented: jazz, electro, metal, etc. \item It is split into a training set (100 tracks, 6.5~h) and a test set (50 tracks, 3.5~h), for the design of data-driven methods.
\end{itemize} The dataset is freely available online, along with Python development tools\footnote{\url{https://sigsep.github.io/musdb}}. \subsection{BSS Eval version~4} \label{ssec:bssevalv4} The BSS~Eval metrics, as implemented in the MATLAB toolboxes~\cite{bssevalv2,bssevalv3}, are widely used in the separation literature. They assess separation quality through $3$~criteria: the Source to Distortion, Source to Artefact, and Source to Interference Ratios (SDR, SAR, SIR), complemented by the source Image to Spatial distortion Ratio (ISR) in the \texttt{BSS~Eval v3} toolbox~\cite{bssevalv3}. One particularity of BSS~Eval is to compute the metrics after optimally matching the estimates to the true sources through linear \textit{distortion filters}. This provides some robustness to linear mismatches. This matching is the reason for most of the computational cost of BSS~Eval, especially considering that it is done for each evaluation window. In this SiSEC, we decided to drop the assumption that distortion filters could vary over time, and considered instead that they are fixed for the whole length of the track. First, this significantly reduces the computational cost, because matching is done only once for the whole signal. Second, this introduces more dynamics in the evaluation, because time-varying matching filters over-estimate performance, as we show later. Third, this makes matching more stable, because sources are never silent throughout the whole recording, whereas they often are within short windows. This new $4^{th}$ version of the \texttt{BSS~Eval} toolbox was implemented in Python\footnote{\texttt{pip install museval}} and is fully compatible with earlier MATLAB-based versions, up to a tolerance of $10^{-12}$~dB, in case time-varying filters are selected.
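Before turning to results, the binary and ratio masks of Section~\ref{sec:oracle} can be sketched compactly once the oracle spectrograms are in hand (an illustrative sketch only; the released \texttt{sigsep-mus-oracle} implementations differ in details):

```python
import numpy as np

def irm_masks(mags, alpha=1.0, eps=1e-12):
    """Ideal Ratio Masks (alpha-Wiener filters), one per source.

    mags : (J, F, T, I) magnitudes |y_j(f,t)| of the true source images.
    alpha = 1 gives IRM1, alpha = 2 gives IRM2; masks sum to ~1 per bin.
    """
    v = mags ** alpha                      # fractional power spectrograms
    return v / (v.sum(axis=0) + eps)

def ibm1_masks(mags):
    """Ideal Binary Masks: a bin is assigned to source j iff |y_j| is at
    least half the sum of all source magnitudes at that bin."""
    return (mags >= 0.5 * mags.sum(axis=0)).astype(float)
```

The same layout extends to IBM2 by squaring the magnitudes before comparison, and the MWF oracle additionally requires per-bin spatial covariance estimates.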
\section{Separation results} \subsection{Oracle performance with \texttt{BSS Eval v4}} \label{ssec:bsseval-results} To the best of our knowledge, the results presented in Figure~\ref{fig:boxplots_bsseval} are the first fair comparison between the different and widely used oracle systems presented in Section~\ref{sec:oracle}. In this figure, we show boxplots of the \texttt{BSS~Eval} scores obtained by IBM1, IBM2, IRM1, IRM2 and MWF on the $4$~sources considered in MUSDB18. The scores were computed on 1~second windows, taken over the whole test set. The most striking fact in Figure~\ref{fig:boxplots_bsseval} is that IBM does \textit{not} achieve the best scores on any metric except ISR. In particular, we notice that IBM systematically induces a small loss of a few dBs in SDR and SIR compared to soft masks for most sources, and a significant loss in SAR, which can get as bad as around~$5$~dB for the accompaniment source. This is in line with the strong \textit{musical noise} produced by IBM whenever the source to separate is \textit{dense} and cannot be assumed stronger in magnitude or energy than all others whenever it is active. This also happens for the bass, which is usually weaker than all other sources at high frequencies, yielding significant distortion with IBM. Furthermore, we suspect the strong ISR scores obtained by IBM on vocals and bass to be mostly due to the zeroing of large amounts of frequency bands in those estimates. Indeed, zero estimates lead the projection filters of \texttt{BSS~Eval} to totally cancel those frequencies in the reference as well, artificially boosting ISR performance. Now, comparing soft masks, it appears that IRM2 and MWF produce the best overall performance compared to IRM1. However, this result is expected: \texttt{BSS~Eval} scores are ultimately based on squared-error criteria, which are precisely optimised by those filters.
Previous perceptual studies showed that IRM1 may be preferred in some cases~\cite{liutkus15}. This may be reflected in the slightly better performance that IRM1 obtains for SAR. Finally, although IRM2 seems slightly better than MWF for most metrics, we highlight that it also comes with twice as many parameters: power spectral densities for the left and right channels, instead of just one for MWF, shared across channels. \begin{figure}[ht] \begin{center} \includegraphics[width=0.7\linewidth]{fig/evaluation_v3v4.pdf} \end{center} \caption{Vocals SIR score vs vocals energy for BSS~eval v3 and v4.}% \label{fig:v3v4} \end{figure} Concerning the discrepancies between \texttt{BSS~Eval} v3 and v4 (time-invariant distortion filters), we observe several differences. First, computations were $8$~times faster for v4 than for v3, which allowed using small $1$~s frames and thus getting an estimate of the performance along time at a reasonable computing cost. Second, computing distortion filters only once for the whole duration of the signal brings an interesting side-effect, which can be visualized in Figure~\ref{fig:v3v4}. The new v4 yields a much wider dynamic range for the scores: we clearly see that lower energy for the true source brings lower performance. However, the marginal distributions of the scores over the whole dataset were not statistically different between v3 and v4, which validates the use of fewer distortion filters to reduce computing time while reaching similar conclusions. \subsection{Comparison of systems submitted to SiSEC-MUS 2018} This year's participation has been the strongest ever observed for SiSEC, with $30$ systems submitted in total. Due to space constraints, we cannot detail all the methods here, but refer the interested reader to the corresponding papers.
We may distinguish three broad groups of methods: \begin{description} \item[Model-based] These methods exploit prior knowledge about the spectrograms of the sources to separate and do not use the MUSDB18 training data for their design. They are: MELO as described in \cite{MELO}, as well as all the methods implemented in NUSSL \cite{NUSSL}: 2DFT \cite{2DFT}, RPCA \cite{RPCA}, REP1 \cite{REP1}, REP2 \cite{REP2}, HPSS \cite{HPSS}. \item[No additional data] These methods are data-driven and exploit only the MUSDB18 training data to learn their models. They are: RGT1-2 \cite{RGT1}, STL, HEL1 \cite{HEL1}, MDL1 \cite{MDL1}, MDLT \cite{MDLT}, JY1-3 \cite{JY1}, WK \cite{WK}, UHL1-2 \cite{UHL}, TAK1 \cite{TAK12}. \item[With additional data] These methods are also data-driven, and exploit additional training data on top of the MUSDB18 training set. They are: UHL3 \cite{UHL}, TAK2 \cite{TAK12}, TAK3 \cite{TAK3}, TAU \cite{TAK3,UHL}. \end{description} As may be seen, the vast majority of methods submitted this year to SiSEC MUS are based on deep learning, reflecting a shift in the community's methodology. The MIX method additionally serves as a negative anchor, corresponding to using the mixture as an estimate for all sources. \begin{figure} \begin{center} \includegraphics[height=\textheight]{fig/boxplot.pdf} \end{center} \caption{Details of results for all metrics, targets and methods.} \label{fig:boxplots_bsseval} \end{figure} In the first set of results, depicted in Figure \ref{fig:boxplots_bsseval}, we display boxplots of the \texttt{BSS~Eval} scores for the evaluation. For each track, the median value of the score was taken and used for the boxplots. Inspecting these results, we immediately see that data-driven methods clearly outperform model-based approaches by a large margin. This fact is noticeable for most targets and metrics.
\begin{figure} \begin{center} \includegraphics[width=0.85\linewidth]{fig/heatmaps.pdf} \end{center} \caption{Vocals (top) and accompaniment (below) SDR for all tracks and methods.} \label{fig:trackwise_scores} \end{figure} In the second set of results, displayed in Figure \ref{fig:trackwise_scores}, we computed the track-wise median SDR score for all methods on the vocals (top) and accompaniment (bottom) targets. The striking fact we notice there is that methods exploiting additional training data (UHL3, TA*) perform comparably to the oracles for approximately half of the tracks. After inspection, it turns out that room for improvement mostly lies in tracks featuring significant amounts of distortion in either the vocals or the accompaniment. We may also notice on these plots that tracks where accompaniment separation is easy often come with a challenging estimation of vocals. After inspection, this is the case when vocals are rarely active. Consequently, correctly detecting vocal presence appears to be a valuable asset for separation methods. \begin{figure}[h] \begin{center} \includegraphics[width=1\linewidth]{fig/pairwise.pdf} \end{center} \caption{Pair-wise statistical significance of the differences between separation quality. Left: vocals SDR. Right: accompaniment SDR.} \label{fig:pairwise_matrix} \end{figure} Our third round of analysis concerns the pair-wise post-hoc Conover-Iman test, displayed in Figure \ref{fig:pairwise_matrix}, used to assess which methods perform significantly better than others, for both vocals and accompaniment separation. In this plot, an obvious fact is that DNN-based methods exploiting additional training data perform best. Remarkably, they do not perform significantly differently from the oracles for accompaniment, suggesting that the automatic karaoke problem can now be considered solved to a large extent, given sufficient amounts of training data. On the contrary, vocals separation still shows room for improvement.
Concerning model-based methods, we notice that they perform worse, but that among them, MELO stands out for vocals separation, while it is comparable to the others for accompaniment. For DNN approaches not using additional training data, we notice different behaviours for vocals and accompaniment separation. We may summarize the results by mentioning that RGT1-2, STL and MDL1 do not behave as well as MDLT, STL1, JY1-3, WK and UHL1-2, which all behave comparably. It is noteworthy that TAK1 and UHL2 compare well with methods exploiting additional data for vocals separation. This evaluation highlights a methodological question that should be investigated in future campaigns, namely the relative importance of the system architecture and the amount of training data. It indeed appears that very different architectures behave comparably and that the gap in performance now rather comes from additional training data, as exemplified by the difference between UHL2 and UHL3. This confirms the importance of using standard training and test datasets such as MUSDB18 for evaluation, and we believe that obtaining good performance with reduced training data remains an interesting and challenging machine learning problem. \subsection{Comparison of systems submitted to SiSEC-ASY 2018} As shown in Table~\ref{table}, there was one submission to the task ``Asynchronous recordings of speech mixtures'' by Corey {\it et al.}~\cite{corey}. This method does not resample the microphone signals in order to separate them. Rather, it uses a separate time-varying two-channel Wiener filter for each synchronous pair of microphones. The remaining asynchronous microphone pairs are used to compute a speech presence probability for each source in each time-frequency bin. The speech presence information from the remote microphone pairs allows the reference recorder to separate more than two speech signals using a two-channel filter.
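The general flavour of such presence-steered two-channel Wiener filtering can be schematized as follows. This is a rough sketch under our own simplifying assumptions, with hypothetical variable names; it is not the authors' implementation:

```python
import numpy as np

def presence_weighted_wiener(x, q, eps=1e-12):
    """Schematic two-channel Wiener filter steered by a speech
    presence probability (illustrative only).

    x : (F, T, 2) complex STFT of a synchronous microphone pair
    q : (F, T) presence probability of the target source, assumed
        here to be supplied by the asynchronous microphone pairs
    """
    # Per-bin outer products, averaged over time with the presence
    # probability acting as a soft mask for the target covariance.
    xx = np.einsum('fti,ftj->ftij', x, np.conj(x))   # (F, T, 2, 2)
    Cs = (q[..., None, None] * xx).mean(axis=1)      # (F, 2, 2) target
    Cx = xx.mean(axis=1) + eps * np.eye(2)           # (F, 2, 2) mixture
    W = Cs @ np.linalg.inv(Cx)                       # (F, 2, 2) Wiener gains
    return np.einsum('fij,ftj->fti', W, x)           # filtered pair
```

With a presence probability of one everywhere, the filter degenerates to (approximately) the identity, so the mixture passes through unchanged.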
\begin{table} \centering \caption{Results for the task ``Asynchronous recordings of speech mixtures''. The result by Miyabe {\it et al.} in SiSEC2015 is also shown as a reference.} \label{table} \begin{tabular}{|c|c|ccc|ccc|}\hline systems&criteria&\multicolumn{3}{|c|}{3src}&\multicolumn{3}{|c|}{4src}\\ &&realmix&sumrefs&mix&realmix&sumrefs&mix\\\hline Corey~\cite{corey}&SDR&$-4.0$&$-4.0$&$-4.1$&$3.1$&$2.9$&$1.7$\\ &ISR&$-0.1$&$-0.1$&$-0.1$&$7.0$&$6.7$&$5.8$\\ &SIR&$-2.2$&$-1.7$&$-1.9$&$5.4$&$5.0$&$2.4$\\ &SAR&$-13.2$&$-13.1$&$-12.4$&$7.9$&$7.8$&$6.1$\\\hline Miyabe&SDR&6.9&6.8&10.6&4.0&3.8&3.3\\ &ISR&11.2&11.1&15.1&8.8&8.5&7.3\\ &SIR&11.0&10.9&14.9&6.7&6.4&6.0\\ &SAR&11.7&11.6&15.5&7.8&7.6&7.4\\\hline \end{tabular} \end{table} \section{Conclusion} \label{sec:concl} We reported on the organization of SiSEC 2018, which comprised the development of a new Python version~$4$ of BSS~Eval to assess performance, fully compatible with earlier MATLAB versions while additionally allowing for time-invariant distortion filters, significantly reducing the computational load. Furthermore, we presented the new MUSDB18 dataset, which gathers 150 music tracks with isolated stems, totaling almost $10$~h of music. We also provided open-source implementations of $3$ popular oracle methods, giving various upper bounds on performance. We then reported the impact of choosing time-invariant distortion filters for BSS~Eval over time-varying ones, and briefly summarized the discrepancies between the performance of the proposed oracle methods under BSS~Eval v3 and v4. Finally, we gave an overall presentation of the scores obtained by the participants in this year's edition. More detailed analysis and sound excerpts can be accessed online on the SiSEC webpage\footnote{\url{sisec18.unmix.app}}. \footnotesize \bibliographystyle{plain}
\section{Introduction} In this paper we consider the question of classifying the homomorphisms from $C_0(0,1]$ to a C*-algebra $A$. In \cite{ciuperca-elliott}, Ciuperca and Elliott show that if $A$ has stable rank 1 then this classification is possible---up to approximate unitary equivalence---by means of the Cuntz semigroup functor. They define a pseudometric $d_W$ on the morphisms from $\mathrm{Cu}(C_0(0,1])$ to $\mathrm{Cu}(A)$, and show that if $A$ has stable rank 1 then $d_W(\mathrm{Cu}(\phi),\mathrm{Cu}(\psi))=0$ for $\phi,\psi\colon C_0(0,1]\to A$ if and only if $\phi$ and $\psi$ are approximately unitarily equivalent by unitaries in $A^\sim$ (the unitization of $A$). A classification result in the same spirit as Ciuperca and Elliott's result is Thomsen's \cite[Theorem 1.2]{thomsen}. Thomsen shows that if $X$ is a locally compact Hausdorff space such that $\dim X\leq 2$ and $\check H^2(X)=0$, then the approximate unitary equivalence class of a positive element in $M_n(C_0(X))$ is determined by its eigenvalue functions. Theorem \ref{1} below applies to a class of C*-algebras that contains both the stable rank 1 C*-algebras and the C*-algebras considered by Thomsen. For this class of algebras the classification of homomorphisms by the functor $\mathrm{Cu}(\cdot)$ must be rephrased in terms of stable approximate unitary equivalence. Given $\phi,\psi\colon C_0(0,1]\to A$ we say that $\phi$ and $\psi$ are stably approximately unitarily equivalent if there are unitaries $u_n\in (A\otimes \mathcal K)^\sim$, $n=1,2,\dots$, such that $u_n\phi u_n^*\to \psi$ pointwise (where $A$ is identified with the top left corner of $A\otimes \mathcal K$). If $A$ is stable or has stable rank 1, then stable approximate unitary equivalence coincides with approximate unitary equivalence, but these relations might differ in general.
The following theorem characterizes the C*-algebras for which the pseudometric $d_W$ (defined in the next section) determines the stable approximate unitary equivalence classes of homomorphisms from $C_0(0,1]$ to the algebra. \begin{theorem}\label{1} Let $A$ be a C*-algebra. The following propositions are equivalent. (I) For all $x,e\in A$ with $e$ a positive contraction and $ex=xe=x$, we have that $x^*x+e$ is stably approximately unitarily equivalent to $xx^*+e$. (II) If $\phi,\psi\colon C_0(0,1]\to A$ are such that $d_W(\mathrm{Cu}(\phi),\mathrm{Cu}(\psi))=0$ then $\phi$ is stably approximately unitarily equivalent to $\psi$. If (I) and (II) hold then \begin{equation}\label{inequalities} d_W(\phi,\psi)\leq d_{U}(\phi,\psi)\leq 4d_W(\phi,\psi). \end{equation} \end{theorem} In \eqref{inequalities} $d_{U}$ denotes the distance between the stable unitary orbits of $\phi(\mathrm{id})$ and $\psi(\mathrm{id})$, where $\mathrm{id}\in C_0(0,1]$ is the identity function. The inequalities \eqref{inequalities} are derived in \cite{ciuperca-elliott} for the stable rank 1 case, though their factor of 8 has now been improved to 4. By the bijective correspondence $\phi\mapsto \phi(\mathrm{id})$ between homomorphisms $\phi\colon C_0(0,1]\to A$ and positive contractions of $A$, proposition (II) of the previous theorem may be restated as a classification of the stable unitary orbits of positive contractions in terms of the Cuntz equivalence relation on positive elements. The following theorem extends Ciuperca and Elliott's classification result beyond the stable rank 1 case. \begin{theorem}\label{2} Suppose that $(A\otimes \mathcal K)^{\sim}$ has the property (I) of Theorem \ref{1}. Let $h_A\in A^+$ be strictly positive.
Then for every $\alpha\colon \mathrm{Cu}(C_0(0,1])\to \mathrm{Cu}(A)$, morphism in the category $\mathbf{Cu}$, with $\alpha([\mathrm{id}])\leq [h_A]$, there is $\phi\colon C_0(0,1]\to A$, unique up to stable approximate unitary equivalence, such that $\mathrm{Cu}(\phi)=\alpha$. \end{theorem} The class of algebras that satisfy (I) is closed under the passage to quotients, hereditary subalgebras, and inductive limits (see Proposition \ref{classI} below). This class is strictly larger than the class of stable rank 1 C*-algebras. Any commutative C*-algebra satisfies (I). If $X$ is a locally compact Hausdorff space with $\dim X\leq 2$ and $\check H^2(X)=0$ (the \v{C}ech cohomology with integer coefficients), then we deduce from \cite[Theorem 1.2]{thomsen} that $(C_0(X)\otimes \mathcal K)^\sim$ satisfies (I) (and so Theorem \ref{2} is applicable to $C_0(X)\otimes \mathcal K$). On the other hand, the C*-algebra $M_2(C(S^2))$, with $S^2$ the 2-dimensional sphere, does not satisfy (I). In fact, there exists a pair of homomorphisms $\phi,\psi\colon C_0(0,1]\to M_2(C(S^2))$ such that $\mathrm{Cu}(\phi)=\mathrm{Cu}(\psi)$ but $\phi$ is not stably approximately unitarily equivalent to $\psi$ (see Example \ref{sphere} below). This phenomenon is not restricted to non-simple AH C*-algebras: by a slight variation---to suit our purposes---of the inductive limit systems constructed by Villadsen in \cite{villadsen}, we construct a simple, stable, AH C*-algebra for which the Cuntz semigroup functor does not classify the homomorphisms from $C_0(0,1]$ into the algebra (see Theorem \ref{example}). These counterexamples raise the question of what additional data is necessary to classify, up to stable approximate unitary equivalence, the homomorphisms from $C_0(0,1]$ to an arbitrary C*-algebra. In the last section of this paper we take a step in this direction by proving the following theorem.
\begin{theorem}\label{extension} Let $A$ be an inductive limit of the form $\varinjlim C(X_i)\otimes\mathcal K$, with $X_i$ compact metric spaces, and $\dim X_i\leq 2$ for all $i=1,2,\dots$. Let $\phi,\psi\colon C_0(0,1]\to A$ be homomorphisms such that $\mathrm{Cu}(\phi\otimes \mathrm{Id})=\mathrm{Cu}(\psi\otimes \mathrm{Id})$, where $\mathrm{Id}\colon C_0(0,1]\to C_0(0,1]$ is the identity homomorphism. Then $\phi$ and $\psi$ are approximately unitarily equivalent. \end{theorem} \section{Preliminary definitions and results} In this section we collect a number of definitions and results that will be used throughout the paper. \subsection{Relations on positive elements.} Let $A$ be a C*-algebra and let $a$ and $b$ be positive elements of $A$. Let us say that (i) $a$ is Murray-von Neumann equivalent to $b$ if there is $x\in A$ such that $a=x^*x$ and $b=xx^*$; we denote this by $a\sim b$, (ii) $a$ is approximately Murray-von Neumann equivalent to $b$ if there are $x_n\in A$, $n=1,2\dots$, such that $x_n^*x_n\to a$ and $x_nx_n^*\to b$; we denote this by $a\sim_{ap} b$, (iii) $a$ is stably approximately unitarily equivalent to $b$ if there are unitaries $u_n\in (A\otimes \mathcal K)^\sim$, such that $u_n^*au_n\to b$, where $A$ is identified with the top left corner of $A\otimes \mathcal K$, (iv) $a$ is Cuntz smaller than $b$ if there are $d_n\in A$, $n=1,2\dots$, such that $d_n^*bd_n\to a$; we denote this by $a\preccurlyeq_{Cu} b$, (v) $a$ is Cuntz equivalent to $b$ if $a\preccurlyeq_{Cu} b$ and $b\preccurlyeq_{Cu} a$, and we denote this by $a\sim_{Cu} b$. We have (i)$\Rightarrow$(ii)$\Rightarrow$(v). By \cite[Remark 1.8]{thomsen2}, approximate Murray-von Neumann equivalence is the same as stable approximate unitary equivalence. We will make frequent use of this fact throughout the paper. 
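To build intuition for the relations (iv) and (v), it can help to look at the finite-dimensional illustration $A=M_n(\mathbb{C})$, where Cuntz subequivalence of positive matrices reduces to a comparison of ranks and the cut-downs $(a-\epsilon)_+$ are computed functionally. The following numerical sketch is ours, for illustration only, and is not part of the paper's arguments:

```python
import numpy as np

def cut(a, eps):
    """(a - eps)_+ : apply e_eps(x) = max(x - eps, 0) functionally
    to a positive (Hermitian) matrix a."""
    w, U = np.linalg.eigh(a)
    return (U * np.maximum(w - eps, 0.0)) @ U.conj().T

def cuntz_below(a, b, tol=1e-9):
    """In M_n(C), a positive matrix a is Cuntz below b iff
    rank(a) <= rank(b); equivalently rank((a - eps)_+) <= rank(b)
    for every eps > 0."""
    return np.linalg.matrix_rank(a, tol=tol) <= np.linalg.matrix_rank(b, tol=tol)
```

For instance, diag(1,1,0) is not Cuntz below diag(2,0,0), although the reverse comparison holds, showing that Cuntz comparison ignores the sizes of the eigenvalues and retains only their supports.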
The relations (i), (ii), and (iii) will also be applied to homomorphisms from $C_0(0,1]$ to $A$, via the bijection $\phi\mapsto \phi(\mathrm{id})$ from these homomorphisms onto the positive contractions of $A$. We will make frequent use of the following proposition. \begin{proposition}\label{rordam1} Let $a\in A^+$ and $x\in A$ be such that $\|a-x^*x\|<\epsilon$ for some $\epsilon>0$. Then there is $y$ such that $(a-\epsilon)_+=y^*y$, $yy^*\leq xx^*$, and $\|y-x\|<C\epsilon^{1/2}\|a\|$. The constant $C$ is universal. \end{proposition} \begin{proof} The proof works along the same lines as the proof of \cite[Lemma 2.2]{kirchberg-rordam} (see also \cite[Lemma 1]{robert}). We briefly sketch the argument here. We have $a-\epsilon_1\leq x^*x$, with $\epsilon_1$ such that $\|a-x^*x\|<\epsilon_1<\epsilon$. So $(a-\epsilon)_+\leq ex^*xe$, with $e\in C^*(a)$ such that $e(a-\epsilon_1)e=(a-\epsilon)_+$. Set $xe=\widetilde x$ and let $\widetilde x=v|\widetilde x|$ be its polar decomposition. Then $y=v(a-\epsilon)_+^{1/2}$ has the properties stated in the proposition. \end{proof} It follows from the previous proposition (or from \cite[Lemma 2.2]{kirchberg-rordam}) that Cuntz comparison can be described in terms of Murray-von Neumann equivalence as follows: $a\preccurlyeq_{Cu} b$ if and only if for every $\epsilon>0$ there is $b'$ such that $(a-\epsilon)_+\sim b'\in \mathrm{Her}(b)$. Here $\mathrm{Her}(b)$ denotes the hereditary subalgebra generated by $b$. We also have the following corollary of Proposition \ref{rordam1}. \begin{corollary}\label{mvnher} If $a,b\in B^+\subseteq A^+$, where $B$ is a hereditary subalgebra of $A$, then $a\sim_{ap} b$ in $A$ if and only if $a\sim_{ap} b$ in $B$. \end{corollary} \begin{proof} If $w^*w$ and $ww^*$ belong to $B$ for some $w\in A$, then $w\in B$. Thus, the corollary follows if $a$ and $b$ are Murray-von Neumann equivalent. Suppose that $a\sim_{ap} b$. We may assume without loss of generality that $a$ and $b$ are contractions.
For $\epsilon>0$ let $x\in A$ be such that $\|a-x^*x\|<\epsilon$ and $\|b-xx^*\|<\epsilon$. Then by Proposition \ref{rordam1} there exists $y$ such that $(a-\epsilon)_+=y^*y$ and $\|yy^*-b\|\leq C_1\sqrt{\epsilon}$ for some constant $C_1$. Applying Proposition \ref{rordam1} again we get that there exists $z\in A$ such that $(yy^*-\epsilon)_+=z^*z$, $\|zz^*-b\|\leq C_2\sqrt[4]{\epsilon}$, and $zz^*\leq b$, for some constant $C_2$. Set $zz^*=b'$. We have $(a-2\epsilon)_+\sim (yy^*-\epsilon)_+\sim b'$ and $b'\in B$. So there is $w\in B$ such that $(a-2\epsilon)_+=w^*w$ and $b'=ww^*$. Since $\|b'-b\|\leq C_2\sqrt[4]{\epsilon}$ and $\epsilon$ is arbitrary, the desired result follows. \end{proof} \subsection{The Cuntz semigroup.} Let us briefly recall the definition of the (stabilized) Cuntz semigroup in terms of the positive elements of the stabilization of the algebra (see \cite{rordam1} and \cite{coward-elliott-ivanescu}). Let $A$ be a C*-algebra. Given $a\in (A\otimes \mathcal K)^+$ let us denote by $[a]$ the Cuntz equivalence class of $a$. The Cuntz semigroup of $A$ is defined as the set of Cuntz equivalence classes of positive elements of $A\otimes \mathcal K$. This set, denoted by $\mathrm{Cu}(A)$, is endowed with the order such that $[a]\leq [b]$ if $a\preccurlyeq_{Cu} b$, and the addition operation $[a]+[b]:=[a'+b']$, where $a'$ and $b'$ are mutually orthogonal and Murray-von Neumann equivalent to $a$ and $b$, respectively. If $\phi\colon A\to B$ is a homomorphism then $\mathrm{Cu}(\phi)\colon \mathrm{Cu}(A)\to \mathrm{Cu}(B)$ is defined by $\mathrm{Cu}(\phi)([a]):=[\phi(a)]$. Coward, Elliott, and Ivanescu showed in \cite{coward-elliott-ivanescu} that $\mathrm{Cu}(\cdot)$ is a functor from the category of C*-algebras to a certain category of ordered semigroups denoted by $\mathbf{Cu}$. In order to describe this category, let us first recall the definition of the far below relation. Let $S$ be an ordered set such that the suprema of increasing sequences always exist in $S$.
For $x$ and $y$ in $S$, let us say that $x$ is far below $y$, and denote this by $x\ll y$, if for every increasing sequence $(y_n)$ such that $y\leq \sup_n y_n$, we have $x\leq y_k$ for some $k$. An ordered semigroup $S$ is an object of the Cuntz category $\mathbf{Cu}$ if it has a 0 element and satisfies that (1) if $(x_n)$ is an increasing sequence of elements of $S$ then $\sup_n x_n$ exists in $S$, (2) if $(x_n)$ and $(y_n)$ are increasing sequences in $S$ then $\sup_n (x_n+y_n)=\sup_n x_n+\sup_n y_n$, (3) for every $x\in S$ there is a sequence $(x_n)$ with supremum $x$ and such that $x_n\ll x_{n+1}$ for all $n$, (4) if $x_1,x_2,y_1,y_2\in S$ satisfy $x_1\ll y_1$ and $x_2\ll y_2$, then $x_1+x_2\ll y_1+y_2$. \noindent The morphisms of the category $\mathbf{Cu}$ are the order preserving semigroup maps that also preserve the suprema of increasing sequences, the far below relation, and the 0 element. \subsection{The pseudometrics $d_U$ and $d_W$.} Let us identify the C*-algebra $A$ with the top left corner of $A\otimes \mathcal K$. Given positive elements $a,b\in A$ let us denote by $d_U(a,b)$ the distance between the unitary orbits of $a$ and $b$ in $A\otimes \mathcal K$ (with the unitaries taken in $(A\otimes \mathcal K)^\sim$). Following Ciuperca and Elliott (see \cite{ciuperca-elliott}), let us define a pseudometric on the morphisms from $\mathrm{Cu}(C_0(0,1])$ to $\mathrm{Cu}(A)$ as follows: \begin{align}\label{defdWmor} d_W(\alpha,\beta):=\inf\left\{r\in \mathbb{R}^+\left| \begin{array}{c} \alpha([e_{t+r}])\le\beta([e_t]),\\ \beta([e_{t+r}])\le\alpha([e_t]), \end{array} \hbox{ for all }t\in \mathbb{R}^+\right\}, \right. \end{align} where $\alpha,\beta\colon \mathrm{Cu}(C_0(0,1])\to \mathrm{Cu}(A)$ are morphisms in the Cuntz category and $e_t$ is the function $e_t(x)=\max(x-t,0)$, for $x\geq 0$. It is easily shown that $d_W$ is a pseudometric.
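As a concrete illustration (our own finite-dimensional reduction, not part of the general theory developed here): for positive matrices, the condition $\alpha([e_{t+r}])\le\beta([e_t])$ for all $t$ becomes a comparison of ranks of spectral cut-downs, which works out to $\lambda_k(a)\leq\lambda_k(b)+r$ for eigenvalues listed in decreasing order. The infimum over $r$ is then the sup-distance between the ordered eigenvalue lists:

```python
import numpy as np

def d_W(a, b):
    """d_W between positive matrices a, b of the same size:
    the maximal absolute gap between the decreasingly ordered
    eigenvalue lists (finite-dimensional illustration only)."""
    la = np.sort(np.linalg.eigvalsh(a))[::-1]
    lb = np.sort(np.linalg.eigvalsh(b))[::-1]
    return float(np.max(np.abs(la - lb)))
```

Consistently with the lemma below, this quantity is dominated by $\|a-b\|$, by Weyl's eigenvalue perturbation inequality.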
\emph{Notation convention.} Throughout the paper we will use the notations $(a-t)_+$ and $e_t(a)$ interchangeably, both meaning the positive element obtained by evaluating the function $e_t(x)$ on a given selfadjoint element $a$. The pseudometric $d_W$ may be used to define a pseudometric---that we also denote by $d_W$---on the positive elements of norm at most 1 by setting $d_W(a,b):=d_W(\mathrm{Cu}(\phi),\mathrm{Cu}(\psi))$, where $\phi,\psi\colon C_0(0,1]\to A$ are such that $\phi(\mathrm{id})=a$ and $\psi(\mathrm{id})=b$. We have \begin{align}\label{defdW} d_W(a,b)=\inf\left\{r\in \mathbb{R}^+\left| \begin{array}{c} e_{t+r}(a)\preccurlyeq_{Cu} e_t(b),\\ e_{t+r}(b)\preccurlyeq_{Cu} e_t(a), \end{array} \hbox{ for all }t\in \mathbb{R}^+\right\}. \right. \end{align} Notice that \eqref{defdW} makes sense for arbitrary positive elements $a$ and $b$ without assuming that they are contractions. We extend $d_W$ to all positive elements using \eqref{defdW}. The following lemma relates the metrics $d_U$ and $d_W$ in a general C*-algebra (this is \cite[Corollary 9.1]{ciuperca-elliott}). \begin{lemma}\label{continuous} For all $a,b\in A^+$ we have $d_{W}(a,b)\leq d_U(a,b)\leq \|a-b\|$. \end{lemma} \begin{proof} Let $r$ be such that $\|a-b\|<r$ and choose $r_1$ such that $\|a-b\|<r_1<r$. Then for all $t\geq 0$ we have $a-t-r_1\leq b-t$. Multiplying this inequality on the left and the right by $e^{1/2}$, where $e\in C^*(a)$ is such that $e(a-t-r_1)=(a-t-r)_+=e_{t+r}(a)$, we have \[ e_{t+r}(a)\leq e^{1/2}(b-t)e^{1/2}\leq e^{1/2}(b-t)_+e^{1/2}\preccurlyeq_{Cu} e_t(b), \] for all $t\geq 0$. Similarly we deduce that $e_{t+r}(b)\preccurlyeq_{Cu} e_t(a)$ for all $t\geq 0$. It follows that $d_W(a,b)\leq \|a-b\|$. Since $d_{W}$ is invariant under stable unitary equivalence, $d_W(a,b)\leq \|a-ubu^*\|$ for any unitary $u$ in $(A\otimes \mathcal K)^\sim$. Hence $d_W(a,b)\leq d_U(a,b)$.
\end{proof} The question of whether $d_W$---as defined in \eqref{defdWmor}---is a metric is linked to the property of weak cancellation in the Cuntz semigroup. Let us say that a semigroup in the category $\mathbf{Cu}$ has weak cancellation if $x+z\ll y+z$ implies $x\leq y$ for elements $x$, $y$, and $z$ in the semigroup. It was proven in \cite{ciuperca-elliott} that if $\mathrm{Cu}(A)$ has weak cancellation then $d_W$ is a metric on the morphisms from $\mathrm{Cu}(C_0(0,1])$ to $\mathrm{Cu}(A)$. Since this result is not explicitly stated in that paper, we reprove it here. \begin{proposition} (Ciuperca, Elliott \cite{ciuperca-elliott}) If $\mathrm{Cu}(A)$ has weak cancellation then $d_W$ is a metric on the Cuntz category morphisms from $\mathrm{Cu}(C_0(0,1])$ to $\mathrm{Cu}(A)$. \label{metric} \end{proposition} \begin{proof} By \cite[Theorem 1]{robert}, the map $[f]\mapsto (t\mapsto \rank f(t))$ is a well-defined isomorphism from $\mathrm{Cu}(C_0(0,1])$ to the ordered semigroup of lower semicontinuous functions from $(0,1]$ to $\mathbb{N}\cup\{\infty\}$. This isomorphism maps $[e_t]$ to $\mathds{1}_{(t,1]}$ for all $t\in [0,1]$, with $\mathds{1}_{(t,1]}$ the characteristic function of $(t,1]$. Let us identify $\mathrm{Cu}(C_0(0,1])$ with the semigroup of lower semicontinuous functions from $(0,1]$ to $\mathbb{N}\cup\{\infty\}$ in this way. Then $d_W(\alpha,\beta)=0$ implies that $\alpha(\mathds{1}_{(t,1]})=\beta(\mathds{1}_{(t,1]})$ for all $t$. In order to show that $\alpha$ and $\beta$ are equal it suffices to show that they agree on the functions $\mathds{1}_{(s,t)}$ (their overall equality then follows by additivity and preservation of suprema of increasing sequences). Let $\epsilon>0$.
We have \begin{align*} \alpha(\mathds{1}_{(s+\epsilon, t-\epsilon)})+\alpha(\mathds{1}_{(t-\epsilon, 1]})&\ll \alpha(\mathds{1}_{(s, 1]})=\beta(\mathds{1}_{(s, 1]})\\&\le \beta(\mathds{1}_{(s,t)})+\beta(\mathds{1}_{(t-\epsilon,1]})\\ &=\beta(\mathds{1}_{(s,t)})+\alpha(\mathds{1}_{(t-\epsilon,1]}). \end{align*} Since $\mathrm{Cu}(A)$ has weak cancellation, $\alpha(\mathds{1}_{(s+\epsilon, t-\epsilon)})\le\beta(\mathds{1}_{(s,t)})$. Passing to the supremum over $\epsilon>0$ we get that $\alpha(\mathds{1}_{(s, t)})\le\beta(\mathds{1}_{(s,t)})$. By symmetry we also have $\beta(\mathds{1}_{(s,t)})\le\alpha(\mathds{1}_{(s, t)})$. Hence, $\alpha(\mathds{1}_{(s, t)})=\beta(\mathds{1}_{(s,t)})$. \end{proof} R\o rdam and Winter showed in \cite[Theorem 4.3]{rordam-winter} that if $A$ has stable rank 1 then $\mathrm{Cu}(A)$ has weak cancellation. In the next section we will extend this result to the case when the property (I) of Theorem \ref{1} holds in $(A\otimes\mathcal K)^\sim$. \section{Proofs of Theorems 1 and 2} \subsection{Proof of Theorem 1} In this subsection we prove Theorem \ref{1} of the introduction. For positive elements $a,b\in A^+$ we use the notation $a\lhd b$ to mean that $b$ is a unit for $a$, that is to say, $ab=ba=a$. We start with a lemma. \begin{lemma} Let $A$ be a C*-algebra such that the property (I) of Theorem \ref{1} holds in $A$. Let $e,f,\alpha,\beta\in A^+$ be such that $e$ is a contraction, and \[ \alpha\lhd e,\, \alpha\sim \beta\lhd f, \hbox{ and $f\sim f'\lhd e$ for some $f'\in A^+$.} \] Then for every $\delta>0$ there are $\alpha',e'\in A^+$ such that \[\alpha'\lhd e'\lhd e, \quad\beta+f\sim \alpha'+e', \hbox{ and }\|\alpha-\alpha'\|<\delta.\] \end{lemma} \begin{proof} Since $f\sim f'$ there exists $x$ such that $f=x^*x$ and $xx^*=f'$. Let $x=w|x|$ be the polar decomposition of $x$ in the bidual of $A$. We have $wfw^*=f'$. Set $w\beta w^*=\alpha_1$. Then $\alpha_1\sim \alpha$, $\alpha_1\lhd e$, and $\alpha\lhd e$. Hence $\alpha_1+e\sim_{ap} \alpha+e$.
By Proposition \ref{rordam1} this implies that for every $\delta'>0$ there is $z\in A$ such that \begin{align} &(\alpha_1+e-\delta')_+=z^*z, \quad zz^*\leq \alpha+e,\hbox{ and}\\ &\| zz^*-(\alpha+e)\|<C\sqrt{\delta'}. \label{z} \end{align} Let $z=w_1|z|$ be the polar decomposition of $z$ in the bidual of $A$. Since $e$ is a unit for $\alpha_1$ we have $(\alpha_1+e-\delta')_+=\alpha_1+(e-\delta')_+$. It follows that the map $c\mapsto w_1cw_1^*$ sends the elements of $\mathrm{Her}((e-\delta')_+)$ into $\mathrm{Her}(e)$. By \eqref{z}, if we let $\delta'\to 0$ then $(zz^*-1)_+$ can be made arbitrarily close to $(\alpha+e-1)_+$. Since $(zz^*-1)_+=w_1(\alpha_1-\delta')_+w_1^*$ and $(\alpha+e-1)_+=\alpha$, this means that we can choose $\delta'$ small enough so that $\|w_1\alpha_1w_1^*-\alpha\|<\delta$. Let $\alpha'=w_1\alpha_1w_1^*$, $e'=w_1f'w_1^*$, and $y=w_1w(\beta+f)^{1/2}$. Then $\beta +f=y^*y$ and $yy^*= \alpha'+e'$. \end{proof} \begin{proof}[Proof of Theorem \ref{1}] (II) $\Rightarrow$ (I). Let $\phi, \psi\colon C_0(0,1]\to A$ be the homomorphisms such that \[ \phi(\mathrm{id})=\frac{1}{\|x\|^2+1}(x^*x+e)\hbox{ and } \psi(\mathrm{id})=\frac{1}{\|x\|^2+1}(xx^*+e). \] From the definition of the pseudometric $d_W$ we see that $d_W(\mathrm{Cu}(\phi),\mathrm{Cu}(\psi))=\frac{1}{\|x\|^2+1}d_W(x^*x+e, xx^*+e)$. In order to prove that $x^*x+e$ is stably approximately unitarily equivalent to $xx^*+e$ it is enough to show that \[ d_W(x^*x+e, xx^*+e)=0. \] That is, $(x^*x+e-t)_+\sim_{Cu} (xx^*+e-t)_+$ for all $t\geq 0$. Using that $e$ is a unit for $x^*x$ and $xx^*$ we deduce that \begin{align*} (x^*x+e-t)_+=x^*x+(e-t)_+, \quad (xx^*+e-t)_+=xx^*+(e-t)_+, \end{align*} for $0\le t<1$. Also, $x^*x(e-t)_+=x^*x(1-t)$ and $xx^*(e-t)_+=xx^*(1-t)$. It follows that $x^*x$ and $xx^*$ belong to the hereditary algebra generated by $(e-t)_+$. Therefore, \[ (x^*x+e-t)_+\sim_{Cu}(e-t)_+\sim_{Cu} (xx^*+e-t)_+, \hbox{ for }0\leq t<1.
\] If $t\ge1$ then $(x^*x+e-t)_+=(x^*x+1-t)_+$ and $(xx^*+e-t)_+=(xx^*+1-t)_+$. Hence, $(x^*x+e-t)_+\sim_{Cu} (xx^*+e-t)_+$ for $t\geq 1$. (I) $\Rightarrow$ (II). Set $\phi(\mathrm{id})=a$ and $\psi(\mathrm{id})=b$. Let $r$ be such that $d_W(a,b)<r$. Let $m\in \mathbb{N}$ be the number such that $mr\leq 1<(m+1)r$. Finally, let the sequences $(a_i)_{i=1}^{m+1}$, $(b_i)_{i=1}^{m+1}$ be defined as $a_i=\xi_{m-i+1}(a)$, $b_i=\xi_{m-i+1}(b)$ for $i=1,2,\dots,m+1$, where $\xi_i\in C_0(0,1]$ is such that $\mathds{1}_{(ir+\epsilon,1]}\le\xi_i\le\mathds{1}_{(ir,1]}$ and $\epsilon>0$ is chosen small enough so that $d_W(a,b)+2\epsilon<r$. The sequences $(a_i)_{i=1}^{m+1}$ and $(b_i)_{i=1}^{m+1}$ satisfy that \begin{align*} a_i\lhd a_{i+1}, \quad b_i\lhd b_{i+1}, \hbox{ for $i=1,\dots,m$},\\ a_i\sim d_i\lhd b_{i+1},\quad b_i\sim c_i\lhd a_{i+1},\hbox{ for $i=1,\dots,m$}, \end{align*} for some positive elements $c_i$ and $d_i$. The first line follows trivially from the definition of the elements $a_i$ and $b_i$. Let us prove the second line. From $d_W(a,b)<r-2\epsilon$ we get \[ e_{(m-i+1)r-\epsilon}(a)\preccurlyeq_{Cu} e_{(m-i)r+\epsilon}(b)\lhd b_{i+1}. \] By the definition of Cuntz comparison there exists $d\in A^+$ such that $e_{(m-i+1)r}(a)\sim d\lhd b_{i+1}$. Since $a_i$ is expressible by functional calculus as a function of $e_{(m-i+1)r}(a)$, we get that there exists $d_i\in A^+$ such that $a_i\sim d_i\lhd b_{i+1}$. We reason similarly to get the existence of $c_i$. 
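The step just carried out uses the following standard fact, which we record for the reader's convenience: if $c=v^*v$ and $d=vv^*$ for some $v\in A$, and $f\in C_0(0,\infty)$ is positive, then $f(c)\sim f(d)$. Indeed, if $v=w|v|$ is the polar decomposition of $v$ in the bidual of $A$, then $y=wf(c)^{1/2}$ belongs to $A$ and \[ y^*y=f(c),\qquad yy^*=wf(c)w^*=f(wcw^*)=f(d). \]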
Let us now show by induction on $n$, for $n=1,2,\dots,m$, that there are sequences of elements $(a_i')_{i=1}^n$ and $(b_i')_{i=1}^n$ such that \begin{align} a_i'\lhd a_{i+1}', \quad b_i'\lhd b_{i+1}', &\hbox{ for }i=1,2,\dots,n-1,\label{kk1}\\ \|a_i-a_i'\|<\epsilon, &\hbox{ for $i$ odd, }i\leq n,\label{kk2}\\ \|b_i-b_i'\|<\epsilon, &\hbox{ for $i$ even, }i\leq n,\label{kk3}\\ \sum_{i=1}^n a_i' \sim &\sum_{i=1}^n b_i',\label{kk4} \end{align} and $a_n'=a_n$, $b_n'\lhd b_{n+1}$ if $n$ is odd, and $b_n'=b_n$, $a_n'\lhd a_{n+1}$ if $n$ is even. Since $a_1\sim d_1\lhd b_2$, the induction hypothesis holds for $n=1$ taking $a_1'=a_1$ and $b_1'=d_1$. Suppose the induction hypothesis holds for $n$ and let us show that it also holds for $n+1$. Let us consider the case that $n$ is odd (the case that $n$ is even is dealt with similarly). We set $b_{n+1}'=b_{n+1}$ and leave the sequence $(b_i')_{i=1}^n$ unchanged. We are going to modify the sequence $(a_i')_{i=1}^n$ in order to complete the induction step. Let $\alpha=\sum_{i=1}^n a_i'$, $e=a_{n+2}$, $\beta=\sum_{i=1}^n b_i'$, $f=b_{n+1}'$. Then the conditions of the previous lemma apply. We thus have that for every $\delta>0$ there are $\alpha'$ and $e'$ such that \[\alpha'\lhd e'\lhd a_{n+2}, \quad \|\alpha-\alpha'\|<\delta, \hbox{ and }\beta+f\sim \alpha'+e'. \] It follows that $\beta\sim \alpha'$, and so $\alpha'=\sum_{i=1}^n a_i''$, with $a_i''\lhd a_{i+1}''$. We remark that the elements $a_i'$ are all in the C*-algebra generated by $\alpha$ and the elements $a_i''$ are in the C*-algebra generated by $\alpha'$. In fact, \begin{align} (\alpha-(n-i))_+-(\alpha-(n-i+1))_+=a_i',\label{f1}\\ (\alpha'-(n-i))_+-(\alpha'-(n-i+1))_+=a_i''.\label{f2} \end{align} Therefore, we may choose the number $\delta$ sufficiently small so that $\|a_i-a_i''\|<\epsilon$ for $i$ odd, $i\leq n$. We now rename the sequence $(a_i'')_{i=1}^n$ as $(a_i')_{i=1}^n$ and set $a_{n+1}'=e'$. From $\beta+f\sim \alpha'+e'$ we get that $\sum_{i=1}^{n+1} b_i'\sim \sum_{i=1}^{n+1} a_i'$.
This completes the induction. Continuing the induction up to $n=m$ we find $(a_i')_{i=1}^m$ and $(b_i')_{i=1}^m$ that satisfy \eqref{kk1}-\eqref{kk4}. For the last part of the proof we split the analysis into two cases, $m$ even and $m$ odd. Suppose that $m=2k+1$. We have \begin{align}\label{mo} \frac{\sum_{i=1}^{2k+1} a_i'}{2k+1}\sim \frac{\sum_{i=1}^{2k+1} b_i'}{2k+1}. \end{align} Let $a'$ denote the left-hand side of \eqref{mo} and $b'$ the right-hand side. Let us show that $\|a'-a\|<2r+2\epsilon$ and $\|b-b'\|<2r+2\epsilon$. Since $a_i'\lhd a_{i+1}'$ for all $i$ and $\|a_i'\|\le 1$ for all $i$, we have $a_i'\le a_{i+1}'$ for all $i$. Hence, \begin{align*} \frac{2\sum_{i=1}^ka_{2i-1}'+a_{2k+1}'}{2k+1}\le\frac{\sum_{i=1}^{2k+1} a_i'}{2k+1}\le \frac{a_1'+2\sum_{i=1}^ka_{2i+1}'}{2k+1}. \end{align*} Using that $\|a_i'-a_i\|<\epsilon$ for $i$ odd in the above inequalities we obtain \begin{align*} \frac{2\sum_{i=1}^ka_{2i-1}+a_{2k+1}}{2k+1}-\epsilon \le \frac{\sum_{i=1}^{2k+1} a_i'}{2k+1}\le \frac{a_1+2\sum_{i=1}^ka_{2i+1}}{2k+1}+\epsilon. \end{align*} It follows now from the inequalities \begin{align*} \frac{2\sum\limits_{i=1}^k\xi_{2i-1}(t)+\xi_{2k+1}(t)}{2k+1}\le t+2r+\epsilon,\quad t-2r-\epsilon\le \frac{\xi_1(t)+2\sum\limits_{i=1}^k\xi_{2i+1}(t)}{2k+1}, \end{align*} that \begin{align*} a-2r-2\epsilon\le\frac{\sum_{i=1}^{2k+1} a_i'}{2k+1}\le a+2r+2\epsilon. \end{align*} Therefore $\|a-a'\|<2r+2\epsilon$. Let us show that $\|b-b'\|<2r+2\epsilon$. Using that $b_i'\leq b_{i+1}'$ for $i=1, 2, \ldots, 2k$, that $b_{2k+1}'\le b_{2k+2}$, and that $\|b_i'-b_i\|<\epsilon$ for all $i$ even, we obtain the inequalities \begin{align*} \frac{2\sum_{i=1}^kb_{2i}}{2k+1}-\epsilon\le \frac{\sum_{i=1}^{2k+1}b_i'}{2k+1}\le \frac{2\sum_{i=1}^{k}b_{2i}+b_{2k+2}}{2k+1}+\epsilon.
\end{align*} It follows from the estimates \begin{align*} \frac{2\sum_{i=1}^k\xi_{2i}(t)}{2k+1}\ge t-2r-\epsilon,\quad \frac{\xi_0(t)+2\sum_{i=1}^k\xi_{2i}(t)}{2k+1}\le t+2r+\epsilon, \end{align*} that \begin{align*} b-2r-2\epsilon\le \frac{\sum_{i=1}^{2k+1}b_i'}{2k+1}\le b+2r+2\epsilon. \end{align*} Hence $\|b-b'\|<2r+2\epsilon$. We have found $a',b'\in A^+$ such that $a'\sim b'$, $\|a'-a\|<2r+2\epsilon$ and $\|b-b'\|<2r+2\epsilon$. Therefore $d_U(a,b)\leq 4r+4\epsilon$. Since $\epsilon>0$ is arbitrary the desired result follows. For the case that $m=2k$ we take $a'=\frac{1}{2k}\sum_{i=1}^{2k} a_{i}'$ and $b'=\frac{1}{2k}\sum_{i=1}^{2k} b_{i}'$, and we reason similarly to how we did in the odd case to obtain that $\|a'-a\|<2r+2\epsilon$ and $\|b-b'\|<2r+2\epsilon$. \end{proof} \begin{corollary}\label{completeness} Let $A$ be a C*-algebra with the property (I) of Theorem \ref{1}. The following propositions hold true: (i) If $a$ and $b$ are positive elements of $A$ such that $d_W(a,b)<r$, then for all $\epsilon>0$ there exists $b'\in A^+$ such that $\|a-b'\|<4r$ and $d_U(b,b')<\epsilon$. (ii) The set of positive elements of $A$ is complete with respect to the pseudometric $d_U$. \end{corollary} \begin{proof} (i) We may assume without loss of generality that $a$ and $b$ are contractions. We may also assume that $A$ is $\sigma$-unital by passing to the subalgebra $\mathrm{Her}(a,b)$ if necessary (the property (I) holds for hereditary subalgebras by Proposition \ref{classI}). Let $c\in A^+$ be strictly positive. By the continuity of the pseudometrics $d_U$ and $d_W$ (see Lemma \ref{continuous}), it is enough to prove the desired proposition assuming that $a$ and $b$ belong to a dense subset of $A^+$. Thus, we may assume that $a,b\in \mathrm{Her}((c-\delta)_+)$ for some $\delta>0$. From $d_W(a,b)<r$ and the proof of Theorem \ref{1} we get that there is $x\in \mathrm{Her}((c-\delta)_+)$ such that \[ \|a-x^*x\|<2r,\quad\|b-xx^*\|<2r.
\] Let $e\in A^+$ be a positive contraction that is a unit for the subalgebra $\mathrm{Her}((c-\delta)_+)$. Then $x^*x+e\sim_{ap} xx^*+e$. This implies that for all $\epsilon>0$ there is a unitary $u$ in $(A\otimes \mathcal{K})^\sim$ such that \[ \|u^*eu-e\|<\epsilon,\quad \|u^*x^*xu-xx^*\|<\epsilon. \] Set $eubu^*e=b'$. If we take $\epsilon$ small enough such that \[ \|a-x^*x\|<2r-\epsilon,\quad\|b-xx^*\|<2r-\epsilon, \] we then have the following estimates: \begin{align*} & \|a-b'\|\le \|a-ubu^*\|<4r-2\epsilon+\|uxx^*u^*-x^*x\|<4r,\\ & \|u^*b'u-b\|\le \|u^*eubu^*eu-ebu^*eu\|+\|bu^*eu-be\|<2\epsilon. \end{align*} From here part (i) of the corollary follows. (ii) Let $(c_i)_{i=1}^\infty$ be a sequence of positive elements of $A$ that is Cauchy with respect to the pseudometric $d_U$. In order to show that $(c_i)_{i=1}^\infty$ converges it is enough to show that it has a convergent subsequence. We may assume, by passing to a subsequence if necessary, that $d_U(c_i,c_{i+1})<\frac{1}{2^i}$ for all $i\geq 1$. Using mathematical induction we will construct a new sequence $(c_i')_{i=1}^\infty$ such that \begin{align}\label{sequence} \| c'_i-c'_{i+1}\|<\frac{1}{2^{i-3}}, \quad d_U(c_i, c_i')<\frac{1}{2^i}, \end{align} for all $i$. For $n=1$ we set $c_1'=c_1$. Suppose that we have constructed $c_i'$, for $i=1, 2,\ldots, n$, and let us construct $c_{n+1}'$. We have $d_U(c_{n+1},c_{n})<\frac{1}{2^n}$ and $d_U(c_{n}',c_{n})<\frac{1}{2^n}$ (by the induction hypothesis). Hence $d_U(c_{n}',c_{n+1})<\frac{1}{2^{n-1}}$, and so $d_W(c_n', c_{n+1})<\frac{1}{2^{n-1}}$ (by Lemma \ref{continuous}). Applying part (i) of the corollary to $a=c_n'$ and $b=c_{n+1}$, we find a positive element $d$ such that \[ \|c_{n}'-d\|<\frac{1}{2^{n-3}},\quad d_U(c_{n+1}, d)<\frac{1}{2^{n+1}}. \] Setting $c_{n+1}'=d$ completes the induction. By \eqref{sequence} the sequence $(c'_i)_{i=1}^\infty$ is a Cauchy sequence with respect to the norm of $A$. Hence, it converges to an element $c\in A^+$.
Also by \eqref{sequence} we have that $d_U(c_i, c_i')<\frac{1}{2^i}$ for all $i$. Hence, $d_U(c_i,c)\le d_U(c_i, c_i')+d_U(c_i',c)\to 0$. That is, $(c_i)_{i=1}^\infty$ converges to $c$ in the pseudometric $d_U$. Thus, $A^+$ is complete with respect to $d_U$. \end{proof} \subsection{Approximate existence theorem.} Let $A$ be a C*-algebra and $h_A$ a strictly positive element of $A$. The main result of this subsection, Theorem \ref{existence} below, states that every morphism $\alpha\colon \mathrm{Cu}(C_0(0,1])\to \mathrm{Cu}(A)$ in the category $\mathbf{Cu}$ such that $\alpha([\mathrm{id}])\leq [h_A]$ may be approximated in the pseudometric $d_W$ by a morphism of the form $\mathrm{Cu}(\phi)$, with $\phi\colon C_0(0,1]\to A$ a C*-algebra homomorphism. \begin{lemma}\label{cuntzorder} Let $A$ be a C*-algebra. The following propositions hold true: (i) If $a$ and $b$ are two positive elements of $A$ such that $a\preccurlyeq_{Cu} b$, then for every $\epsilon>0$ there is $b'\in M_2(A)^+$ such that $b'\sim_{Cu} b$ and \[ \left\| \begin{pmatrix} a & 0\\ 0 & 0 \end{pmatrix}-b' \right\|<\epsilon. \] (ii) If $a$ and $b$ are two positive elements of $A\otimes \mathcal K$ such that $a\preccurlyeq_{Cu} b$ then for every $\epsilon>0$ there exists $b'\in (A\otimes \mathcal K)^+$ such that $b'\sim_{Cu} b$ and $\|a-b'\|<\epsilon$. \end{lemma} \begin{proof} (i) Let $\epsilon>0$ be given. Since $a\preccurlyeq_{Cu} b$, by \cite[Lemma 2.2]{kirchberg-rordam} there exists $d\in A$ such that $(a-\epsilon/2)_+=d^*bd$. Consider the row vector $c=(b^{\frac{1}{2}}d, \delta b^{\frac{1}{2}})$, where $\delta>0$. Then \begin{align*} cc^*=b^{\frac{1}{2}}dd^*b^{\frac{1}{2}}+\delta^2 b\quad\text{and}\quad c^*c= \begin{pmatrix} (a-\epsilon/2)_+ & \delta d^*b\\ \delta bd & \delta^2 b \end{pmatrix}. \end{align*} We may choose $\delta$ small enough such that \[ \left\| \begin{pmatrix} a & 0\\ 0 & 0 \end{pmatrix}-c^*c \right\|<\epsilon.
\] Since $\delta^2b\leq cc^*\leq (\delta^2+\|d\|^2)b$, we have $cc^*\sim_{Cu} b$. Thus, the desired result follows letting $b'=c^*c$. (ii) We may assume without loss of generality that $A$ is stable. This implies that for every $b\in (A\otimes \mathcal K)^+$ there is $b'\in A^+$ that is Murray-von Neumann equivalent to $b$, where $A$ is being identified with the top left corner of $A\otimes \mathcal K$. Thus, we may assume without loss of generality that $b\in A^+$. Every positive element $a$ in $(A\otimes \mathcal K)^+$ is approximated by the elements $p_nap_n\in M_n(A)$ (with $p_n$ the unit of $M_n(A^\sim)$). Therefore, we may also assume without loss of generality that $a\in M_n(A)$ for some $n$. So we have $a,b\in M_n(A)^+$ for some $n$. Now the existence of $b'\in M_{2n}(A)^+$ with the desired properties is guaranteed by part (i) of the lemma. \end{proof} \begin{lemma}\label{interpolation} Let $A$ be a C*-algebra and let $(x_k)_{k=0}^{n}$ be elements of $\mathrm{Cu}(A)$ such that $x_{k+1}\ll x_k$ for all $k$. There is $a\in (A\otimes \mathcal K)^+$, with $\|a\|\leq 1$, such that $[a]=x_0$ and $x_{k+1}\ll[(a-k/n)_+]\ll x_k$ for $k=1,\ldots, n-1$. \end{lemma} \begin{proof} Let $\epsilon>0$. Let $a'_{n}\in (A\otimes \mathcal K)^+$ be such that $[a'_n]=x_n$ and $\|a'_n\|\leq \epsilon$. Repeatedly applying Lemma \ref{cuntzorder} (ii), we can find positive elements $(a'_k)_{k=0}^{n-1}$ such that $[a'_k]=x_k$ and $\|a'_k-a'_{k+1}\|<\epsilon$ for $k=1,\ldots, n-1$. For all $k$ we have $\|a'_0-a'_k\|<k\epsilon$. It follows from Lemma \ref{continuous} that $d_W(a_0',a_k')<k\epsilon$. Hence, \[ (a'_k-2k\epsilon)_+\preccurlyeq_{Cu} (a'_0-k\epsilon)_+\preccurlyeq_{Cu} a'_k. \] Since $x_{k+1}\ll x_k$ for all $k$, we can choose $\epsilon$ small enough such that \begin{align*} x_0=[a'_0]\geq x_1 \geq [(a'_0-\epsilon)_+] \geq x_2 &\geq [(a'_0-2\epsilon)_+] \geq \ldots \\ \ldots &\geq [(a'_0-(n-1)\epsilon)_+]\geq x_n. \end{align*} Set $a'_0/(n\epsilon)=a$.
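Let us note the elementary scaling identity behind the normalization just made: for a positive element $c$, a scalar $\lambda>0$, and $t\geq 0$, \[ (\lambda c-\lambda t)_+=\lambda (c-t)_+, \] and the Cuntz class of a positive element is unchanged under multiplication by a positive scalar. Applied to $a=a'_0/(n\epsilon)$, this gives $(a-k/n)_+=\frac{1}{n\epsilon}(a'_0-k\epsilon)_+$ for all $k$.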
Then $[(a'_0-k\epsilon)_+]=[(a-k/n)_+]$ for all $k$. The lemma now follows by noticing that $\|a'_n\|\leq \epsilon$ and $\|a'_0-a'_n\|<(n-1)\epsilon$ imply that $\|a\|\leq 1$. \end{proof} \begin{theorem}\label{existence} Let $A$ be a C*-algebra and let $h_A$ be a strictly positive element of $A$. Let $\alpha\colon \mathrm{Cu}(C_0(0,1])\to \mathrm{Cu}(A)$ be a morphism in $\mathbf{Cu}$ such that $\alpha([\mathrm{id}])\le [h_A]$. Then for every $\epsilon>0$ there exists a homomorphism $\phi\colon C_0(0,1]\to A$ such that $d_W(\mathrm{Cu}(\phi),\alpha)<\epsilon$. \end{theorem} \begin{proof} Let $\epsilon>0$ be given and let $n$ be such that $1/2^{n-1}<\epsilon$. Set $\alpha([e_t])=x_t$ for $t\in [0,1]$. By Lemma \ref{interpolation}, we can find $a\in (A\otimes \mathcal K)^+$ such that $\|a\|\leq 1$, $[a]=x_0$, and \begin{align}\label{int} x_{(k+1)/2^n}\ll [(a-k/2^n)_+]\ll x_{k/2^n} \end{align} for $k=1,\ldots, 2^n-1$. Let $\delta>0$ be such that \eqref{int} still holds after replacing $a$ by $(a-\delta)_+$. This is possible since \[ [(a-k/2^n)_+]=\sup_{\delta>0} [(a-\delta-k/2^n)_+]. \] We have $[a]=\alpha([\mathrm{id}])\le [h_A]$. By \cite[Lemma 2.2]{kirchberg-rordam}, there exists $d\in A\otimes\mathcal K$ such that $(a-\delta)_+=dh_Ad^*$. Set $h_A^{1/2}d^*dh_A^{1/2}=a'$. Then $a'$ is in $A^+$ and is Murray-von Neumann equivalent to $(a-\delta)_+$. It follows that $(a'-t)_+$ is Murray-von Neumann equivalent to $(a-\delta-t)_+$ for all $t\in [0,1]$. Therefore, $[(a'-k/2^n)_+]=[(a-\delta-k/2^n)_+]$ for $k=1,\ldots, 2^n-1$. So we have found a positive element $a'$ in $A^+$ such that \begin{align*} x_{(k+1)/2^n}\ll [(a'-k/2^n)_+]\ll x_{k/2^n} \end{align*} for $k=1,\ldots, 2^n-1$. Notice also that $\|a'\|=\|(a-\delta)_+\|<1$. Let $\phi\colon C_0(0,1]\to A$ be such that $\phi(\mathrm{id})=a'$. Then \begin{align*} \mathrm{Cu}(\phi)([e_{k/2^n}])\le \alpha([e_{k/2^n}])\,\text{ and }\,\alpha([e_{(k+1)/2^n}])\le \mathrm{Cu}(\phi)([e_{k/2^n}]).
\end{align*} Any interval of length $1/2^{n-1}$ contains an interval of the form $(k/2^n, (k+1)/2^n)$ for some $k$. Thus, for every $t\in [0,1]$ there exists $k$ such that $(k/2^n, (k+1)/2^n)\subseteq (t, t+1/2^{n-1})$. It follows that \begin{align*} \mathrm{Cu}(\phi)([e_{t+1/2^{n-1}}])\le \mathrm{Cu}(\phi)([e_{k/2^n}])\le \alpha([e_{k/2^n}])\le \alpha([e_t]) \end{align*} and \begin{align*} \alpha([e_{t+1/2^{n-1}}])\le \alpha([e_{(k+1)/2^n}])\le \mathrm{Cu}(\phi)([e_{k/2^n}])\le \mathrm{Cu}(\phi)([e_t]). \end{align*} These inequalities imply that $d_W(\mathrm{Cu}(\phi),\alpha)\le 1/2^{n-1}<\epsilon$. \end{proof} \subsection{Weak cancellation in $\mathrm{Cu}(A)$} \begin{proposition}\label{cancellation} Suppose that $(A\otimes \mathcal K)^\sim$ has the property (I) of Theorem \ref{1}. Then $\mathrm{Cu}(A)$ has weak cancellation. \end{proposition} \begin{proof} Suppose that $[a]+[c]\ll [b]+[c]$ for $[a]$, $[b]$, and $[c]$ in $\mathrm{Cu}(A)$. Let us choose representatives $a$, $b$, and $c$ such that $ac=bc=0$. Since $[b]+[c]=\sup_{\delta>0}\big([(b-\delta)_+]+[(c-\delta)_+]\big)$ and $[a]+[c]\ll [b]+[c]$, we get that $[a]+[c]\le [(b-\delta)_+]+[(c-\delta)_+]$ for some $\delta>0$. Hence, for every $\epsilon>0$ there are $a_1$ and $c_1$ in $(A\otimes \mathcal K)^+$ such that \begin{align*} a_1+c_1 \in \mathrm{Her}&((b-\delta)_+ +(c-\delta)_+),\\ a_1\sim (a-\epsilon)_+,& \quad c_1\sim (c-\epsilon)_+, \hbox{ and }a_1c_1=0. \end{align*} We assume that $\epsilon<\delta/2$. Let us show that $a_1$ is Cuntz smaller than $b$. Let $g\in C_0(0,1]$ be such that $0\le g(t)\le 1$, $g(t)=1$ for $t\geq \delta-\epsilon$ and $g(t)=0$ for $t\leq \delta/2$. Then $g((c-\epsilon)_+)+g(b)$ is a unit for $a_1$ and $c_1$. We have $g(c_1)\sim g((c-\epsilon)_+)$. Let $x$ be such that $g(c_1)=xx^*$ and $g((c-\epsilon)_+)=x^*x$. From $(g(b)+x^*x)xx^*=xx^*$ we deduce that $(1-(g(b)+x^*x))x=0$. Let $w\in (A\otimes \mathcal K)^\sim$ be given by \[ w=x+\sqrt{1-(g(b)+x^*x)}. \] Then we have $w^*w=1-g(b)$.
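Indeed, note that $g(b)$ and $x^*x=g((c-\epsilon)_+)$ are orthogonal positive contractions, so $g(b)+x^*x\leq 1$ and $w$ is well defined. Moreover, since $(1-(g(b)+x^*x))x=0$ and $1-(g(b)+x^*x)\geq 0$, we also have $\sqrt{1-(g(b)+x^*x)}\,x=0$, so the cross terms vanish in the expansion \begin{align*} w^*w&=x^*x+x^*\sqrt{1-(g(b)+x^*x)}+\sqrt{1-(g(b)+x^*x)}\,x+1-(g(b)+x^*x)\\ &=x^*x+1-g(b)-x^*x=1-g(b). \end{align*}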
From $a_1g(c_1)=0$ and $g(c_1)=xx^*$ we get that $a_1x=0$. Also $a_1(1-(g(b)+x^*x))=0$. We conclude that $a_1w=0$. Let $\tilde b$ be defined by $ww^*=1-\tilde b$. Since we have assumed that the property (I) holds in $(A\otimes \mathcal K)^\sim$, we have $w^*w+1\sim_{ap} ww^*+1$. From this we deduce $1-w^*w\sim_{ap} 1-ww^*$, i.e., $g(b)\sim_{ap} \tilde b$. So $\tilde b\sim_{Cu} g(b)\preccurlyeq_{Cu} b$. On the other hand, from $a_1w=0$ we deduce that $a_1\tilde b=a_1$, and so $a_1\leq \|a_1\|\tilde b$. Hence $a_1\preccurlyeq_{Cu}\tilde b\preccurlyeq_{Cu} b$. We have $[(a-\epsilon)_+]=[a_1]\leq [b]$ for all $\epsilon>0$. Letting $\epsilon\to 0$ we get $[a]\leq [b]$ as desired. \end{proof} \subsection{Proof of Theorem \ref{2}} \begin{proof}[Proof of Theorem \ref{2}] The uniqueness of the homomorphism $\phi$ is clear by Theorem \ref{1}. Let us prove its existence. By Theorem \ref{existence}, for every $n$ there exists $\phi_n\colon C_0(0,1]\to A$ such that $d_W(\mathrm{Cu}(\phi_n),\alpha)<1/2^{n+2}$. It follows from Theorem \ref{1} that \begin{align*} d_U(\phi_n(\mathrm{id}), \phi_{n+1}(\mathrm{id}))\le 4d_W(\mathrm{Cu}(\phi_n), \mathrm{Cu}(\phi_{n+1}))<1/2^{n-1}. \end{align*} This implies that $(\phi_n(\mathrm{id}))_n$ is a Cauchy sequence with respect to the pseudometric $d_U$. By Corollary \ref{completeness}, $A^+$ is complete with respect to $d_U$. Hence, there exists $\phi\colon C_0(0,1]\to A$ such that $d_U(\phi(\mathrm{id}), \phi_n(\mathrm{id}))\to 0$. We have \begin{align*} d_W(\mathrm{Cu}(\phi),\alpha) &\le d_W(\mathrm{Cu}(\phi), \mathrm{Cu}(\phi_n))+d_W(\mathrm{Cu}(\phi_n), \alpha)\\ & \le d_U(\phi(\mathrm{id}), \phi_n(\mathrm{id}))+d_W(\mathrm{Cu}(\phi_n), \alpha)\to 0. \end{align*} So $d_W(\mathrm{Cu}(\phi),\alpha)=0$. By Propositions \ref{cancellation} and \ref{metric}, $d_W$ is a metric. Therefore $\mathrm{Cu}(\phi)=\alpha$.
\end{proof} \section{Examples and counterexamples} \subsection{Algebras with the property (I)} The following proposition provides us with examples of C*-algebras with the property (I) of Theorem \ref{1}. \begin{proposition} \label{classI} The following propositions hold true: (i) If $A$ is a C*-algebra of stable rank 1 then (I) holds in $A$. (ii) If $X$ is a locally compact Hausdorff space such that $\dim X\leq 2$ and $\check H^2(X)=0$, then (I) holds in $(C_0(X)\otimes \mathcal K)^\sim$. (iii) If (I) holds in $A$ it also holds in every hereditary subalgebra and every quotient of $A$. (iv) If $A\cong \varinjlim A_i$ and (I) holds in the C*-algebras $A_i$ then it also holds in $A$. \end{proposition} \begin{proof} (i) Let $x,e\in A$ be as in Theorem \ref{1} (I). Let $B$ be the smallest hereditary subalgebra of $A$ containing $x^*x$ and $xx^*$. Then $B$ has stable rank 1, and $e$ is a unit for $B$. It is well known that in a C*-algebra of stable rank 1 Murray-von Neumann equivalent positive elements are approximately unitarily equivalent in the unitization of the algebra. Therefore, there are unitaries $u_n\in B^\sim$, $n=1,2,\dots$, such that $u_n^*x^*xu_n\to xx^*$. We also have $u_n^*eu_n=e$ for all $n$, since $e$ is a unit for $B$. Hence $u_n^*(x^*x+e)u_n\to xx^*+e$, as desired. (ii) Let $x,e\in (C_0(X)\otimes \mathcal K)^\sim$ be as in Theorem \ref{1} (I). For every $t\in X$ the operators $x^*(t)x(t)+e(t)$ and $x(t)x^*(t)+e(t)$, in $\mathcal K^\sim$, are approximately unitarily equivalent, since $\mathcal K^\sim$ has stable rank 1. Let us denote by $\lambda\in \mathbb{R}$ the scalar such that $x^*x+e-\lambda \cdot 1\in C_0(X)\otimes \mathcal K$ and $xx^*+e-\lambda \cdot 1\in C_0(X)\otimes \mathcal K$. Then the selfadjoint elements $x^*x+e-\lambda \cdot 1$ and $xx^*+e-\lambda \cdot 1$ have the same eigenvalues at every point $t\in X$, and so by Thomsen's theorem \cite[Theorem 1.2]{thomsen} they are approximately unitarily equivalent in $C_0(X)\otimes \mathcal K$.
(Thomsen's result is stated for selfadjoint elements of $C_0(X)\otimes M_n$, but it easily extends to selfadjoint elements of $C_0(X)\otimes \mathcal K$). It follows that $x^*x+e$ and $xx^*+e$ are approximately unitarily equivalent in $(C_0(X)\otimes \mathcal K)^\sim$. (iii) The property (I) passes to hereditary subalgebras because approximate Murray-von Neumann equivalence does (by Corollary \ref{mvnher}). In order to consider quotients by closed two-sided ideals we first make the following claim: for every $\epsilon>0$ there is $\delta>0$ such that if $\|x(1-e)\|<\delta$ and $\|(1-e)x\|<\delta$, with $e$ a positive contraction, then $d_W(x^*x+e,xx^*+e)<\epsilon$. In order to prove this we notice that the inequality $d_W(x^*x+e,xx^*+e)<\epsilon$ is implied by a finite set of relations of Cuntz comparison on positive elements obtained by functional calculus on $x^*x+e$ and $xx^*+e$ (see the proofs of Theorem \ref{existence} and Lemma \ref{WTlimit} (ii)). Using the continuity of the functional calculus, the argument used in the implication (II)$\Rightarrow$(I) of Theorem \ref{1} can still be carried out, approximately, to obtain this finite set of Cuntz comparisons. Let us suppose that the algebra $A$ has the property (I). Let $x,e\in A/I$ be elements in a quotient of $A$ such that $ex=xe=x$, and $e$ is a positive contraction. Let $\tilde x$ and $\tilde e$ be lifts of $x$ and $e$, with $\tilde e$ a positive contraction. Let $(i_\lambda)$ be an approximate identity of $I$. Let $\tilde e_\lambda\in A$ be the positive contraction defined by $1-\tilde e_\lambda=(1-\tilde e)^{1/2}(1-i_\lambda)(1-\tilde e)^{1/2}$. Then $\tilde e_\lambda$ is a lift of $e$ for all $\lambda$, and $(1-\tilde e_\lambda)\tilde x, \tilde x(1-\tilde e_\lambda)\to 0$. Thus, we can find lifts $\tilde x$ and $\tilde e_\lambda$ of $x$ and $e$, such that $\|(1-\tilde e_\lambda)\tilde x\|<\delta$ and $\|\tilde x(1-\tilde e_\lambda)\|<\delta$ for any given $\delta>0$. 
By the claim made in the previous paragraph we can choose $\delta$ such that $d_W(\tilde x^*\tilde x+\tilde e_\lambda,\tilde x\tilde x^*+\tilde e_\lambda)<\epsilon$, for any given $\epsilon>0$. Since $A$ has the property (I), we have by Theorem \ref{1} that $d_U(\tilde x^*\tilde x+\tilde e_\lambda,\tilde x\tilde x^*+\tilde e_\lambda)<4\epsilon$. Passing to the quotient by $I$ we get $d_U(x^*x+e,xx^*+e)<4\epsilon$, and since $\epsilon$ is arbitrary we are done. (iv) Let $x,e\in A$ be as in Theorem \ref{1} (I). We may approximate these elements by the images of elements $x',e'\in A_n$, with $e'$ a positive contraction, to any desired degree of accuracy. By possibly moving the elements $x'$ and $e'$ further along the inductive limit, we may assume that $e'$ is approximately a unit for $x'$. We can then use the claim established in the proof of (iii) to get that $d_W((x')^*x'+e',x'(x')^*+e')$ can be made arbitrarily small (choosing $x'$ and $e'$ suitably). Since $A_n$ has the property (I), we have that $d_U((x')^*x'+e',x'(x')^*+e')$ can be arbitrarily small. Going back to the limit algebra this implies that $d_U(x^*x+e,xx^*+e)$ is arbitrarily small, and so it is 0. \end{proof} \begin{example} Let $D$ denote the unit disc in $\mathbb{R}^2$ and $U$ its interior. Let $B\subseteq M_2(C(D))$ be the hereditary subalgebra \[\begin{pmatrix} C(D) & C_0(U)\\ C_0(U) &C_0(U)\end{pmatrix}.\] By Proposition \ref{classI} (ii) and (iii), (I) holds in $B$. Thus, the Cuntz semigroup functor classifies the homomorphisms from $C_0(0,1]$ to $B$ up to stable approximate unitary equivalence. Let us show that, unlike the case of stable rank 1 algebras, stable approximate unitary equivalence and approximate unitary equivalence do not agree in $B$.
Let $p\in B$ be the rank 1 projection $\begin{pmatrix} 1 & 0\\ 0 &0\end{pmatrix}$ and let $q\in B$ be a rank 1 projection that agrees with $p$ on the boundary of $D$, and such that the projection induced by $1-q$ in $D/\!\!\sim$, the disc with the boundary points identified, is nontrivial. Then $p$ and $q$ are Murray-von Neumann equivalent projections, and so they are stably unitarily equivalent. However, if there were a unitary $u\in B^\sim$ such that $u^*pu=q$, then the partial isometry $v=u^*(1-p)$ would be constant on the boundary $\mathbb{T}$ of $D$ and would satisfy $v^*v=1-p$ and $vv^*=1-q$. Since the projection induced by $1-p$ in $D/\!\!\sim$ is trivial, this would contradict the nontriviality of the projection induced by $1-q$ in $D/\!\!\sim$. \end{example} Examples of C*-algebras that do not have the property (I) are not hard to come by. If a unital C*-algebra $A$ has (I), then for any two projections $p$ and $q$ in $A$ such that $p\sim q$, we have that $p+1\sim_{ap} q+1$ by (I). From this we deduce by functional calculus on $p+1$ and $q+1$ that $1-p\sim 1-q$. Thus, any unital C*-algebra where Murray-von Neumann equivalence of projections does not imply that they are unitarily equivalent does not have (I). In particular, the algebra $B^\sim$, with $B$ as in the previous example, does not have (I). \subsection{The isometry question} The following question was posed to us by Andrew Toms: if $A$ has stable rank 1, is it true that $d_W=d_U$? We formulate this question here for the algebras covered by Theorem \ref{1}. \emph{Question.} Suppose that $A$ has the property (I) of Theorem \ref{1}. Is it true that $d_W=d_U$? We do not know the answer to this question, even in the case of stable rank 1 algebras. Proposition \ref{isometry} below provides some evidence that the answer is yes. \begin{lemma}\label{WTlimit} Let $A=\varinjlim (A_i, \phi_{i,j})$ be the C*-algebra inductive limit of the sequence of C*-algebras $(A_i)_{i=1}^\infty$ with connecting homomorphisms $\phi_{i,j}\colon A_i\to A_j$. Let $a,b\in A_k^+$ for some $k$.
Then (i) $d_{U}^{A_i}(a_i,b_i)\to d_{U}^A(a_\infty,b_\infty)$ as $i\to \infty$, and (ii) $d_{W}^{A_i}(a_i,b_i)\to d_{W}^A(a_\infty,b_\infty)$ as $i\to \infty$, \noindent where $a_i$ and $b_i$ denote the images of $a$ and $b$ by the homomorphism $\phi_{k,i}$, for $i=k+1, k+2,\ldots, \infty$. \end{lemma} \begin{proof} (i) We clearly have $d_{U}^{A_n}(a_n,b_n)\geq d_U^{A_{n+1}}(a_{n+1},b_{n+1})\geq d_U^A(a_\infty,b_\infty)$ for all $n\geq 1$. Therefore, it is enough to show that for every $\epsilon>0$ there is $n$ such that $d_U^{A_n}(a_n,b_n)\leq d_U^A(a_\infty,b_\infty)+\epsilon$. Let us denote $d_U^A(a_\infty,b_\infty)$ by $r$ and let $\epsilon>0$. Let $u\in (A\otimes \mathcal K)^\sim$ be a unitary such that $\|ua_\infty u^*-b_\infty\|<\epsilon+r$. Since $A\otimes \mathcal K=\varinjlim A_i\otimes \mathcal K$, there are $n$ and a unitary $u'\in (A_n\otimes\mathcal K)^\sim$ such that $\|u'a_n(u')^*-b_n\|<\epsilon+r$. Hence $d_U^{A_n}(a_n,b_n)\leq d_U^A(a_\infty,b_\infty)+\epsilon$. (ii) We may assume without loss of generality that $k=1$. As before, we have $d_{W}^{A_n}(a_n,b_n)\geq d_W^{A_{n+1}}(a_{n+1},b_{n+1})\geq d_W^A(a_\infty ,b_\infty )$ for all $n\geq 1$. Thus, we need to show that for every $\epsilon>0$ there is $n$ such that $d_W^{A_n}(a_n,b_n)\leq d_W^A(a_\infty ,b_\infty)+\epsilon$. Let us denote $d_W(a_\infty,b_\infty)$ by $r$ and let $\epsilon>0$. Let us choose a grid of points $\{t_i\}_{i=1}^m$ in $(0,1]$ such that $t_i<t_{i+1}$ and $|t_i-t_{i+1}|<\epsilon$ for $i=1,\ldots,m-1$ (e.g., choose $m\geq 1/\epsilon$ and $t_i=i/m$ for $i=1,\ldots,m$). From the Cuntz inequality $e_{t_i+r+\epsilon/4}(a_\infty)\preccurlyeq_{Cu} e_{t_i}(b_\infty)$ and \cite[Lemma 2.2]{kirchberg-rordam}, we deduce that there exists $d_i\in A$ such that $e_{t_i+r+\epsilon/2}(a_\infty)=d_ie_{t_i}(b_\infty)d_i^*$. Since $A$ is the inductive limit of the C*-algebras $A_n$, we can find $n$ and $d_i'\in A_n$ such that \[ \|e_{t_i+r+\epsilon/2}(a_n)-d_i'e_{t_i}(b_n)(d_i')^*\|<\epsilon/2.
\] By \cite[Lemma 2.2]{kirchberg-rordam} applied in the algebra $A_n$, we have that $e_{t_i+r+\epsilon}(a_n)\preccurlyeq_{Cu} e_{t_i}(b_n)$ in $A_n$. Let us choose a value of $n$ such that this inequality holds in $A_n$ for all $i=1,2,\dots,m$, and such that we also have $e_{t_i+r+\epsilon}(b_n)\preccurlyeq_{Cu} e_{t_i}(a_n)$ for all $i=1,2,\dots,m$. Let $t\in [0,1]$. Let $i$ be the smallest integer such that $t\leq t_i$. Then $[t_i,t_i+r+\epsilon]\subseteq [t,t+r+2\epsilon]$. We have the following inequalities in $A_n$: \[ e_{t+r+2\epsilon}(a_n)\preccurlyeq_{Cu} e_{t_i+r+\epsilon}(a_n)\preccurlyeq_{Cu} e_{t_i}(b_n)\preccurlyeq_{Cu} e_t(b_n). \] The same inequalities hold after interchanging $a_n$ and $b_n$. Thus, $d_W^{A_n}(a_n,b_n)\leq r+2\epsilon$. \end{proof} \begin{proposition}\label{isometry} Let $A$ be such that $A\otimes \mathcal K$ is an inductive limit of algebras of the form $C_0(X_i)\otimes \mathcal K$, with $\dim X_i\leq 2$, $\check H^2(X_i)=0$. Then the pseudometrics $d_U$ and $d_W$ agree on the positive elements of $A$. \end{proposition} \begin{proof} We may assume without loss of generality that $A$ is stable. Let $A=\varinjlim (C_0(X_i)\otimes \mathcal K,\phi_{i,i+1})$. Since both $d_U$ and $d_W$ are continuous (by Lemma \ref{continuous}), it is enough to show that they are equal on a dense subset of $A^+$. Thus, we may assume that $a$ and $b$ belong to the image in $A$ of some algebra $C_0(X_i)\otimes \mathcal K$. Furthermore, in order to show that $d_U(a,b)=d_W(a,b)$, it is enough to show, by Lemma \ref{WTlimit}, that this equality holds on all the algebras $C_0(X_j)\otimes \mathcal K$, with $j\geq i$. Thus, we may assume that the algebra $A$ is itself of the form $C_0(X)\otimes \mathcal K$, with $\dim X\le 2$ and $\check H^2(X)=0$. Finally, since $\bigcup_{n=1}^\infty M_n(C_0(X))$ is dense in $C_0(X)\otimes \mathcal K$, we may assume that $a,b\in M_n(C_0(X))$ for some $n\in \mathbb{N}$. So let $a,b\in M_n(C_0(X))$ be positive elements.
Set $d_W(a,b)=r$. Then for every $x\in X$ we have $d_W(a(x),b(x))\leq r$, where $d_W$ is now taken in the C*-algebra $M_n(\mathbb{C})$. From the definition of $d_W$ we see that this means that for every $t>0$, the number of eigenvalues of $a(x)$ that are less than $t$ (counted with multiplicity) is at most the number of eigenvalues of $b(x)$ that are less than $t+r$, and vice versa, the number of eigenvalues of $b(x)$ less than $t$ is at most the number of eigenvalues of $a(x)$ less than $t+r$. By the Marriage Lemma, this means that the eigenvalues of $a(x)$ and $b(x)$ may be matched in such a way that the distance between the paired eigenvalues is always at most $r$. By \cite[Theorem 1.2]{thomsen}, this implies that $d_U(a,b)\leq r=d_W(a,b)$. Since the inequality $d_W\leq d_U$ always holds, the two pseudometrics agree. \end{proof} \subsection{Counterexamples.} The counterexamples of this subsection are C*-algebras that not only do not have the property (I), but moreover the Cuntz semigroup functor does not distinguish the stable approximate unitary classes of homomorphisms from $C_0(0,1]$ to the algebra. \begin{example}\label{sphere} Let $S^2$ denote the 2-dimensional sphere. Let us show that there are homomorphisms $\phi,\psi\colon C_0(0,1]\to M_2(C(S^2))$ such that $\mathrm{Cu}(\phi)=\mathrm{Cu}(\psi)$ but $\phi$ is not stably approximately unitarily equivalent to $\psi$. Let $\lambda_1$ and $\lambda_2$ be continuous functions from $S^2$ to $[0,1]$ such that $\lambda_1>\lambda_2$, $\min \lambda_2=0$, and $\min\lambda_1\leq \max \lambda_2$. Let $P$ and $E$ be rank one projections in $M_2(C(S^2))$ such that $E$ is trivial and $P$ is non-trivial. Consider the positive elements \[ a=\lambda_1 P+\lambda_2(1_2-P)\,\hbox{ and }\, b=\lambda_1 E+\lambda_2(1_2-E), \] where $1_2$ denotes the unit of $M_2(C(S^2))$. Let us show that for every non-zero function $f\in C_0(0,1]$ we have $f(a)\sim f(b)$. In view of the computation of the Cuntz semigroup of $S^2$ obtained in \cite{robert}, it is enough to show that the rank functions of $f(a)$ and $f(b)$ are equal and non-constant.
We have $f(a)=f(\lambda_1)P+f(\lambda_2)(1-P)$ and $f(b)=f(\lambda_1)E+f(\lambda_2)(1-E)$. It is easily verified that the rank functions of $f(a)$ and $f(b)$ are both equal to $\mathds{1}_U+\mathds{1}_V$, where $U=\{x\mid f(\lambda_1(x))\neq 0\}$, $V=\{x\mid f(\lambda_2(x))\neq 0\}$, and $\mathds{1}_U$ and $\mathds{1}_V$ denote the characteristic functions of $U$ and $V$. Since $\min \lambda_2=0$, the open set $V$ is a proper subset of $S^2$. So if $V$ is non-empty, then the function $\mathds{1}_U+\mathds{1}_V$ is non-constant. On the other hand, if $V$ is empty, then $f$ is 0 on the interval $[0,\max\lambda_2]$; in particular, $f(\min\lambda_1)=0$. Thus, $U$ is a proper subset of $S^2$ in this case, and so $\mathds{1}_U+\mathds{1}_V$ is again non-constant. Let $\phi,\psi\colon C_0(0,1]\to M_2(C(S^2))$ be the homomorphisms such that $\phi(\mathrm{id})=a$ and $\psi(\mathrm{id})=b$. It follows from the discussion in the previous paragraph that $\mathrm{Cu}(\phi)=\mathrm{Cu}(\psi)$. Let us show that $\phi$ and $\psi$ are not stably approximately unitarily equivalent. Let $t=\max(\lambda_2/\lambda_1)$ and $r>0$. Then \[ e_t\left(\frac{a}{\lambda_1}\right)=(1-t)P\, \hbox{ and }\, e_{t+r}\left(\frac{b}{\lambda_1}\right)=(1-t-r)E. \] In order that $e_{t+r}(b/\lambda_1)$ be Cuntz smaller than $e_t(a/\lambda_1)$, the value of $r$ must be at least $1-t$. Thus, $d_W(\frac{a}{\lambda_1},\frac{b}{\lambda_1})\geq 1-t$. Hence $\frac{a}{\lambda_1}\nsim_{ap}\frac{b}{\lambda_1}$, and so $a\nsim_{ap}b$. It follows that $\phi$ and $\psi$ are not stably approximately unitarily equivalent. \end{example} Next we construct a simple AH C*-algebra for which the Cuntz semigroup functor does not classify the homomorphisms from $C_0(0,1]$ into the algebra. Let us recall the definition given in \cite{villadsen} of a diagonal homomorphism from $C(X)\otimes \mathcal K$ to $C(Y)\otimes \mathcal K$ (here $X$ and $Y$ are compact Hausdorff spaces).
Let $(p_i)_{i=1}^n$ be mutually orthogonal projections in $C(Y)\otimes\mathcal K$ and let $\lambda_i\colon Y\to X$, $i=1,2,\dots,n$, be continuous maps. Let us define a homomorphism $\phi\colon C(X)\to C(Y)\otimes\mathcal K$ by \[\phi(f)=\sum_{i=1}^n (f\circ\lambda_i)p_i.\] The homomorphism $\phi$ gives rise to a homomorphism $\tilde\phi$ from $C(X)\otimes\mathcal K$ to $C(Y)\otimes\mathcal K$ as follows: $\tilde\phi$ is the composition of $\phi\otimes \mathrm{id}\colon C(X)\otimes \mathcal K\to C(Y)\otimes\mathcal K\otimes\mathcal K$ with $\mathrm{id}\otimes \alpha\colon C(Y)\otimes\mathcal K\otimes \mathcal K\to C(Y)\otimes\mathcal K$, where $\alpha$ is some isomorphism from $\mathcal K\otimes \mathcal K$ to $\mathcal K$. A homomorphism $\tilde\phi$ obtained in this way is said to be a diagonal homomorphism arising from the data $(p_i,\lambda_i)_{i=1}^n$ (the choice of $\alpha$ does not change the approximate unitary equivalence class of $\tilde\phi$). \begin{theorem}\label{example} There exists a simple stable AH C*-algebra $A$, and homomorphisms $\phi,\psi\colon C_0(0,1]\to A$, such that $\mathrm{Cu}(\phi)=\mathrm{Cu}(\psi)$ but $\phi$ and $\psi$ are not approximately unitarily equivalent. \end{theorem} \begin{proof} Let us define the sequence of topological spaces $(X_i)_{i=1}^\infty$ by $X_1=\mathrm{CP}(1)$ and $X_{i+1}=X_i\times \mathrm{CP}(n_i)$, where $n_i=2\cdot(i+1)!$ and $\mathrm{CP}(n)$ denotes the complex projective space of real dimension $2n$. For every $n$ let us denote by $\eta_{n}$ the rank one projection in $C(\mathrm{CP}(n))\otimes \mathcal K$ associated to the canonical line bundle of $\mathrm{CP}(n)$. For every $i$ let $\pi_i\colon X_{i+1}\to X_i$ denote the projection map onto $X_i$.
Let $\tilde\phi_i\colon C(X_i)\otimes \mathcal K\to C(X_{i+1})\otimes \mathcal K$ denote the diagonal homomorphism given by the data $(1,\pi_i)\cup (\eta_{n_i}^j,\delta_{y_i^j})_{j=1}^i$, where $(\eta_{n_i}^j)_{j=1}^i$ are mutually orthogonal projections all Murray-von Neumann equivalent to $\eta_{n_i}$, and $\delta_{y_i^j}\colon X_{i+1}\to X_i$ is the constant map equal to $y_i^j\in X_i$ for $j=1,2,\dots,i$. It is possible, and well known, to choose the points $y_i^j$ in such a way that the inductive limit $A=\varinjlim (C(X_i)\otimes \mathcal K,\tilde\phi_i)$ is a simple C*-algebra (see \cite{villadsen}). Let us show that this inductive limit $A$ provides us with the desired example. Let $a,b\in C(X_1)\otimes \mathcal K$ be the two positive elements constructed in the proof of Example \ref{sphere} (notice that $X_1$ is homeomorphic to $S^2$). Set $\phi_{1,i}(a)=a_i$ and $\phi_{1,i}(b)=b_i$ for $i=2,3,\dots,\infty$. For $i=2,\dots,\infty$, let us denote by $\phi_{a_i}$ and $\psi_{b_i}$ the homomorphisms from $C_0(0,1]$ to $A_i$ (where $A_i=C(X_i)\otimes\mathcal K$ and $A_\infty=A$) associated to the positive elements $a_i$ and $b_i$. Since $\mathrm{Cu}(\phi_{a_i})=\mathrm{Cu}(\psi_{b_i})$, we have $\mathrm{Cu}(\phi_{a_\infty})=\mathrm{Cu}(\psi_{b_\infty})$. Let us show, on the other hand, that the homomorphisms $\phi_{a_\infty}$ and $\psi_{b_\infty}$ are not approximately unitarily equivalent. Equivalently, let us show that $d_U(a_\infty,b_\infty)>0$. By Lemma \ref{WTlimit}, it suffices to show that $d_U(a_n,b_n)$ does not tend to 0. Let us show that $d_U(a_n,b_n)\geq (\min \lambda_1)(1-\max(\lambda_2/\lambda_1))$ for all $n$, where $\lambda_1$ and $\lambda_2$ are the functions used in the definition of $a$ and $b$ in Example \ref{sphere}. Let us denote by $\tilde\eta_i\in C(X_i)\otimes \mathcal K$ the projection $e_0\otimes 1\otimes\dots \otimes \eta_i\otimes \dots\otimes 1$, where $\eta_i$ is placed in the $i$-th position of the tensor product.
Here we view $ C(X_i)\otimes \mathcal K$ as the tensor product \[ (C(\mathrm{CP}(1))\otimes \mathcal K)\otimes C(\mathrm{CP}(n_2))\otimes\dots \otimes C(\mathrm{CP}(n_i)). \] Let $p$ be an arbitrary projection in $C(X_1)\otimes \mathcal K$. It was observed in \cite{villadsen} that the image of $p$ by $\phi_{1,i}$ is Murray-von Neumann equivalent to the projection \begin{align*} (p\otimes 1\otimes \dots \otimes 1)\oplus k_1\tilde\eta_1 \oplus k_2\tilde\eta_2\oplus \dots\oplus k_i\tilde\eta_i, \end{align*} where $k_i\in \mathbb{N}$. In this expression the multiplication by the coefficients $k_i$ indicates the orthogonal sum of $k_i$ copies of the projection $\tilde\eta_i$. In a similar manner, one can show that for every scalar function $\lambda\in C(X_1)$ the image of $\lambda p$ by $\phi_{1,i}$ is Murray-von Neumann equivalent to \begin{align*} \lambda (p\otimes 1\otimes \dots \otimes 1)\oplus \bigoplus_j \lambda(y_1^j)\tilde\eta_1 \oplus \bigoplus_j \lambda(y_2^j)\tilde\eta_2\oplus\dots\oplus \bigoplus_j \lambda(y_i^j)\tilde\eta_i. \end{align*} Since $a$ and $b$ both have the form $\lambda_1 p\oplus \lambda_2 q$, for some projections $p$ and $q$, and scalar functions $\lambda_1$ and $\lambda_2$, the formula above allows us to compute the images of $a$ and $b$ in $C(X_i)\otimes \mathcal K$, i.e., the elements $a_i$ and $b_i$, up to Murray-von Neumann equivalence. Thus, $a_i$ is Murray-von Neumann equivalent to \begin{align*} &\lambda_1 \tilde\eta_1\oplus \bigoplus_j \lambda_1(y_1^j)\tilde\eta_1 \oplus \bigoplus_j \lambda_1(y_2^j)\tilde\eta_2\oplus\dots\oplus \bigoplus_j \lambda_1(y_i^j)\tilde\eta_i\oplus\\ &\lambda_2 \tilde\eta_1'\oplus \bigoplus_j \lambda_2(y_1^j)\tilde\eta_1 \oplus \bigoplus_j \lambda_2(y_2^j)\tilde\eta_2\oplus\dots\oplus \bigoplus_j \lambda_2(y_i^j)\tilde\eta_i, \end{align*} where $\tilde\eta_1'=(1_2-\eta_1)\otimes 1\otimes\dots\otimes 1$. A similar expression holds for $b_i$. Let $a_i'=a_i/\lambda_1$ and $b_i'=b_i/\lambda_1$.
Let $t=\max(\lambda_2/\lambda_1)$. Let us show that $d_W(a_i',b_i')\geq 1-t$. We have that $(a_i'-t)_+$ is Murray-von Neumann equivalent to \begin{align}\label{bigoplus} (1-t)\tilde\eta_1\oplus&\bigoplus_j \alpha_{1,j}(y)\tilde\eta_1 \oplus \bigoplus_j \alpha_{2,j}(y)\tilde\eta_2\oplus\dots \oplus\bigoplus_j \alpha_{i,j}(y)\tilde\eta_i\oplus\\ &\bigoplus_j \beta_{1,j}(y)\tilde\eta_1 \oplus\bigoplus_j \beta_{2,j}(y)\tilde\eta_2\oplus\dots \oplus\bigoplus_j \beta_{i,j}(y)\tilde\eta_i,\nonumber \end{align} where $\alpha_{k,j}(y)=(\frac{\lambda_1(y_k^j)}{\lambda_1(y)}-t)_+$ and $\beta_{k,j}(y)=(\frac{\lambda_2(y_k^j)}{\lambda_1(y)}-t)_+$ for $k,j=1,\dots,i$. It follows that \[ [(a_i'-t)_+]\leq [\tilde\eta_1]+\sum_{j=2}^i 2k_j[\tilde\eta_j] \] in the Cuntz semigroup of $C(X_i)\otimes \mathcal K$. For the element $(b_i'-t)_+$ an expression identical to \eqref{bigoplus} may be found, except that the first summand of \eqref{bigoplus} is replaced with the term $(1-t)(1\otimes \dots\otimes 1)$. It follows that for all $r<1-t$ we have $[1\otimes\dots \otimes 1]\leq [(b_i'-t-r)_+]$. Since we do not have $[1\otimes\dots \otimes 1]\leq [\tilde\eta_1]+\sum_{j=2}^i 2k_j[\tilde\eta_j]$ (because the total Chern class of the projection on the right-hand side is nonzero), we conclude that $d_W(a_i',b_i')\geq 1-t$. By Lemma \ref{continuous} we have $d_U(a_i',b_i')\geq 1-t$. Hence \[ d_U(a_i,b_i)\geq (\min \lambda_1)\cdot d_U(a_i',b_i')\geq (\min \lambda_1)\cdot (1-\max(\lambda_2/\lambda_1)).\qedhere \] \end{proof} \section{Classification by the functor $\mathrm{Cu}(\cdot\otimes \mathrm{Id})$} Let $A$ and $B$ be C*-algebras. For $a\in (A\otimes \mathcal K)^+$ a contraction, let us denote by $d_W^a$ the pseudometric on the Cuntz category morphisms from $\mathrm{Cu}(A)$ to $\mathrm{Cu}(B)$ given by \[ d_W^a(\alpha,\beta):= d_W(\alpha\circ\mathrm{Cu}(\phi_a),\beta\circ\mathrm{Cu}(\phi_a)), \] where $\phi_a\colon C_0(0,1]\to A\otimes \mathcal K$ is such that $\phi_a(\mathrm{id})=a$.
We consider the set $\mathrm{Mor}(\mathrm{Cu}(A),\mathrm{Cu}(B))$ endowed with the uniform structure induced by all the pseudometrics $d_W^a$. A basis of entourages for this uniform structure is given by the sets \[ U_{F,\epsilon}=\{(\alpha,\beta)\mid d_W^a(\alpha,\beta)<\epsilon\,\hbox{ for all }\,a\in F\}, \] where $\epsilon>0$ and $F$ runs through the finite subsets of positive contractions of $A\otimes \mathcal K$. We will prove the following theorem, of which Theorem \ref{extension} of the introduction is an obvious corollary. \begin{theorem}\label{extension2} For every $\epsilon>0$ there is a finite set $F\subset C_0(0,1]\otimes C_0(0,1]$, and $\delta>0$, such that \[ (\mathrm{Cu}(\phi\otimes \mathrm{Id}),\mathrm{Cu}(\psi\otimes \mathrm{Id}))\in U_{F,\delta}\Rightarrow d_U(\phi(\mathrm{id}),\psi(\mathrm{id}))<\epsilon, \] for any pair of homomorphisms $\phi,\psi\colon C_0(0,1]\to A$, where the C*-algebra $A$ is an inductive limit of the form $\varinjlim C(X_i)\otimes\mathcal K$, with $X_i$ compact metric spaces and $\dim X_i\leq 2$ for all $i=1,2,\dots$. \end{theorem} Before proving Theorem \ref{extension2} we need some preliminary definitions and results. We will consider the relation of Murray-von Neumann equivalence on projections in matrix algebras over possibly non-compact spaces. If $P$ and $Q$ are projections in the algebra $M_n(C_b(X))$ of continuous, bounded, matrix-valued functions on $X$, we say that $P$ and $Q$ are Murray-von Neumann equivalent, and denote this by $P\sim Q$, if there is $v\in M_n(C_b(X))$ such that $P=vv^*$ and $Q=v^*v$. For a subset $U$ of $X$, assumed either open or closed, we say that $P$ is Murray-von Neumann equivalent to $Q$ on the set $U$ if the restrictions of $P$ and $Q$ to $U$ are Murray-von Neumann equivalent in the algebra $M_n(C_b(U))$. \begin{lemma}\label{pointsCW} Let $X$ be a finite CW-complex of dimension at most 2, and let $C$ be a closed subset of $X$.
If $P$ and $Q$ are projections in $M_n(C(X))$ such that $P$ is Murray-von Neumann equivalent to $Q$ on the set $C$, then there exists a finite subset $F$ of $X\backslash C$ such that $P$ is Murray-von Neumann equivalent to $Q$ on $X\backslash F$. \end{lemma} \begin{proof} Let $X_1$ denote the 1-skeleton of $X$ and $(\Delta_i)_{i=1}^m$ the 2-cells of $X$. Suppose that $(\Delta_i)_{i=1}^{m_0}$ are the 2-cells intersected by the open set $X\backslash C$. Choose points $x_i\in \mathring{\Delta}_i\backslash C$ for $i\leq m_0$, and let $F$ be the set of these points. Since $X\backslash F$ deformation retracts onto $X_1\cup \bigcup_{i>m_0} \Delta_i$, it is enough to show that $P$ is Murray-von Neumann equivalent to $Q$ on $X_1\cup \bigcup_{i>m_0} \Delta_i$ (see \cite[Theorem 1]{vaserstein}). Let $v$ be a partial isometry defined on $\bigcup_{i>m_0} \Delta_i$ such that $P=vv^*$ and $Q=v^*v$ on $\bigcup_{i>m_0} \Delta_i$ ($v$ exists by hypothesis). Let us show that $v$ extends to $X_1\cup \bigcup_{i>m_0}\Delta_i$. For this, it is enough to show that $v$ extends from $X_1\cap \bigcup_{i>m_0} \Delta_i$ to $X_1$. This is true by \cite[Proposition 4.2 (1)]{phillips} (applied to 1-dimensional spaces). \end{proof} \begin{proposition}\label{CWcompare} Let $X$ be a finite CW-complex of dimension at most 2. Let $\epsilon>0$. Suppose that $a,b\in M_n(C(X))^+$ are of the form \begin{align}\label{aandb} a=\sum_{j=1}^n P_j\lambda_j, \quad b=\sum_{j=1}^n Q_j\lambda_j, \end{align} where $(P_j)_{j=1}^n$ and $(Q_j)_{j=1}^n$ are sequences of orthogonal projections of rank 1, $(\lambda_j)_{j=1}^n$ is a sequence of scalar functions such that $\lambda_j\geq \lambda_{j+1}$ for $j=1,2,\dots,n-1$, and \begin{align} \sum_{j=1}^i P_j\sim\sum_{j=1}^i Q_j \hbox{ on the set }\{x\in X\mid\lambda_i(x)-\lambda_{i+1}(x)\geq \epsilon\}, \label{lammu2} \end{align} for $i=1,\dots,n$ (for $i=n$ we take $\lambda_{i+1}=0$ in \eqref{lammu2}). Then $d_U(a,b)<2\epsilon$.
\end{proposition} \begin{proof} Let $\epsilon>0$ and $a$ and $b$ be as in the statement of the proposition. Let us perturb the elements $a$ and $b$ by modifying the functions $(\lambda_i)_{i=1}^n$ in the following way: For $i=1,2,\dots,n$, let us denote by $C_i$ the set $\{x\in X\mid \lambda_i(x)-\lambda_{i+1}(x)\geq \epsilon\}$. By \eqref{lammu2} and Lemma \ref{pointsCW}, there are finite sets $F_i\subseteq X\backslash C_i$ such that $\sum_{j=1}^i P_j$ is Murray-von Neumann equivalent to $\sum_{j=1}^i Q_j$ on $X\backslash F_i$ for $i=1,2,\dots,n$. Let us choose the sets $F_i$ so that they are disjoint for different values of $i$ (it is clear from the proof of Lemma \ref{pointsCW} that this is possible). Furthermore, for every $x\in \bigcup_{i=1}^n F_i$ let us choose an open neighbourhood $U(x)$ of $x$ such that $U(x)\cap U(x')=\varnothing$ for $x\neq x'$ and $U(x)\cap C_i=\varnothing$ for $x\in F_i$. Starting with $i=1$, and proceeding to $i=2,\dots,n$, let us perturb the function $\lambda_{i+1}$ on the set $\bigcup_{x\in F_i} U(x)$ by an amount less than $\epsilon$, and so that $\lambda_{i+1}(x)=\lambda_i(x)$ for $x$ in some open set $V_i$ such that $F_i\subset V_i$ and $\overline {V_i}\subseteq\bigcup_{x\in F_i} U(x)$. Since the sets $\bigcup_{x\in F_i} U(x)$ are disjoint for different values of $i$, the resulting perturbations of $a$ and $b$ are within a distance of $\epsilon$ of their original values. These perturbations, which we continue to denote by $a$ and $b$, satisfy that \begin{align}\label{abconds} &a =\sum_{j=1}^n P_j\lambda_j ,\quad b =\sum_{j=1}^n Q_j \lambda_j,\\ \label{abconds2} &\sum_{j=1}^i P_j \sim \sum_{j=1}^i Q_j \hbox{ on }X\backslash V_i, \hbox{ for }i=1,2,\dots,n,\\ &V_i\subseteq \{x\mid \lambda_i(x)=\lambda_{i+1}(x)\}, \hbox{ and }V_i \hbox{ is open.}\label{abconds3} \end{align} The proposition will be proved once we show that, under the conditions \eqref{abconds}-\eqref{abconds3}, the elements $a$ and $b$ are Murray-von Neumann equivalent.
This amounts to finding a sequence of orthogonal projections $(R_i)_{i=1}^n$ in $M_n(C(X))$ such that $a=\sum_{j=1}^n R_j\lambda_j$, and $R_i\sim Q_i$ for $i=1,\dots,n$. Let us show that this is possible. The sequence $(R_i)_{i=1}^n$ will be obtained by a series of modifications on the sequence $(P_i)_{i=1}^n$. Let $k_0$ be the smallest index such that $P_{k_0}\nsim Q_{k_0}$. From $\sum_{j=1}^{k_0-1}P_j\sim \sum_{j=1}^{k_0-1}Q_j$ and \eqref{abconds2}, we get that $P_{k_0}\sim Q_{k_0}$ on $X\backslash V_{k_0}$ (since there is cancellation of projections over spaces of dimension at most 2). Let $v$ be the partial isometry defined on $X\backslash V_{k_0}$ such that $P_{k_0}=vv^*$ and $Q_{k_0}=v^*v$ on $X\backslash V_{k_0}$. It is guaranteed by \cite[Proposition 4.2 (1)]{phillips} that $v$ can be extended to a partial isometry $w$ on $X$ such that $w^*w=Q_{k_0}$ and $ww^*\leq P_{k_0}+P_{k_0+1}$. Let $w$ be such an extension of $v$, and set $P_{k_0}'=ww^*$. Then $P_{k_0}'$ is such that $P_{k_0}'\sim Q_{k_0}$, $P_{k_0}'\leq P_{k_0}+P_{k_0+1}$, and $P_{k_0}'(x)=P_{k_0}(x)$ for all $x\in X\backslash V_{k_0}$. Let $P_{k_0+1}'$ be the projection such that $P_{k_0}'+P_{k_0+1}'=P_{k_0}+P_{k_0+1}$. We have \[ P_{k_0}\lambda_{k_0}+P_{k_0+1}\lambda_{k_0+1}=P_{k_0}'\lambda_{k_0}+P_{k_0+1}'\lambda_{k_0+1}. \] Thus, replacing $P_{k_0}$ and $P_{k_0+1}$ by $P_{k_0}'$ and $P_{k_0+1}'$ respectively, we obtain a new sequence of projections $(P_i)_{i=1}^n$ that satisfies \eqref{abconds} and \eqref{abconds2}, and also $P_{k}\sim Q_{k}$ for $k\leq k_0$. Continuing this process we obtain the desired sequence $(R_i)_{i=1}^n$. \end{proof} \begin{proof}[Proof of Theorem \ref{extension2}] Let $\epsilon>0$ (and assume $\epsilon<1$). Let $g_\epsilon\in C_0(0,1]$ be a function such that $g_\epsilon(t)=\frac{\epsilon}{t}$ for $t\in [\epsilon,1]$, and $0\le g_\epsilon(t)\le 1$ for $t\in (0,1]$.
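For concreteness, one admissible choice of $g_\epsilon$ (this explicit formula is only an illustration; any function with the stated properties will do) is the piecewise definition

```latex
g_\epsilon(t)=
\begin{cases}
t/\epsilon, & 0<t\le \epsilon,\\
\epsilon/t, & \epsilon\le t\le 1.
\end{cases}
```

Both branches equal $1$ at $t=\epsilon$, so $g_\epsilon$ is continuous; it tends to $0$ as $t\to 0$ and satisfies $0\le g_\epsilon\le 1$ on $(0,1]$, so indeed $g_\epsilon\in C_0(0,1]$.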
Let $F\subseteq C_0(0,1]\otimes C_0(0,1]$ be the set $F=\{\mathrm{id}\otimes \mathrm{id},\mathrm{id}\otimes g_\epsilon\}$. Let us prove that \[ (\mathrm{Cu}(\phi\otimes \mathrm{Id}),\mathrm{Cu}(\psi\otimes \mathrm{Id}))\in U_{F,\frac{\epsilon^2}{2}}\Rightarrow d_U(\phi(\mathrm{id}),\psi(\mathrm{id}))<2\epsilon+\frac{\epsilon^2}{2}, \] where $\phi$, $\psi$, and $A$ are as in the statement of the theorem. Let us express what we wish to prove in terms of positive contractions (via the bijection $\phi\mapsto \phi(\mathrm{id})$). For $a,b\in A$ positive contractions, we have \begin{align*} d_W^{\mathrm{id}\otimes\mathrm{id}}(a\otimes \mathrm{id},b\otimes\mathrm{id})=d_W(a\otimes\mathrm{id},b\otimes\mathrm{id}),\\ d_W^{\mathrm{id}\otimes g_\epsilon}(a\otimes\mathrm{id},b\otimes\mathrm{id})=d_W(a\otimes g_\epsilon,b\otimes g_\epsilon). \end{align*} Thus, we want to show that \begin{align}\label{withcontractions} \begin{array}{l} d_W(a\otimes\mathrm{id},b\otimes\mathrm{id})<\frac{\epsilon^2}{2},\\ d_W(a\otimes g_\epsilon,b\otimes g_\epsilon)<\frac{\epsilon^2}{2}, \end{array} \Rightarrow d_U(a,b)<2\epsilon+\frac{\epsilon^2}{2}, \end{align} for $a$ and $b$ positive contractions. Let us first show that if we have \eqref{withcontractions} for the C*-algebras $(A_i)_{i=1}^\infty$ of a sequential inductive system, then we also have \eqref{withcontractions} for their inductive limit $A$. By the continuity of the pseudometrics $d_W$ and $d_U$ (see Lemma \ref{continuous}), it is enough to prove \eqref{withcontractions} assuming that $a$ and $b$ belong to a dense subset of the positive contractions of $A$. Thus, we may assume that $a$ and $b$ are the images in $A$ of positive contractions in some C*-algebra $A_i$, $i\in \mathbb{N}$. Suppose we have $a',b'\in A_i$ such that their images in $A$ satisfy the inequalities of the left side of \eqref{withcontractions}. 
By Lemma \ref{WTlimit} (ii), it is possible to move $a'$ and $b'$ along the inductive limit to a C*-algebra $A_j$, $j\geq i$, so that these same inequalities hold in the C*-algebra $A_j$. We conclude that $d_U^{A_j}(\phi_{i,j}(a'),\phi_{i,j}(b'))<2\epsilon+\frac{\epsilon^2}{2}$. Moving $a'$ and $b'$ back to the limit we get the right side of \eqref{withcontractions}. From the discussion of the previous paragraph, it is enough to prove \eqref{withcontractions} for $A=C(X)\otimes \mathcal K$, with $X$ a compact metric space of dimension at most 2. Moreover, since a compact metric space of dimension at most 2 is a sequential projective limit of finite CW-complexes of dimension at most 2 (see \cite[Theorem 1.13.5]{engelking}), we are reduced to proving \eqref{withcontractions} for the case that $A=C(X)\otimes \mathcal K$, where $X$ is a finite CW-complex of dimension at most 2. Let us then suppose that $A=C(X)\otimes \mathcal K$ is of this form. It is enough to prove \eqref{withcontractions} assuming that $a,b\in M_n(C(X))$ for some $n\in \mathbb{N}$. Moreover, by Choi and Elliott's \cite[Theorem 1]{choi-elliott}, we may assume that $a(x)$ and $b(x)$ have distinct eigenvalues (as matrices in $M_n(\mathbb{C})$) for all $x\in X$. (Choi and Elliott's Theorem implies that such a set is dense in the set of positive contractions of $M_n(C(X))$ for $\dim X\leq 2$.) This implies (see the proof of \cite[Theorem 1.2]{thomsen}) that $a$ and $b$ have the form \begin{align}\label{abPQ} a=\sum_{j=1}^n P_j\lambda_j \hbox{\, and \,}b=\sum_{j=1}^n Q_j\mu_j, \end{align} for some sequences of orthogonal projections of rank 1 $(P_i)_{i=1}^n$ and $(Q_i)_{i=1}^n$, and scalar eigenfunctions $(\lambda_i)_{i=1}^n$ and $(\mu_i)_{i=1}^n$, such that $1\geq \lambda_1(x)>\lambda_2(x)>\dots>0$ and $1\geq \mu_1(x)>\mu_2(x)>\dots>0$.
From $d_W(a\otimes\mathrm{id},b\otimes\mathrm{id})<\frac{\epsilon^2}{2}$ we deduce that $d_W(a,b)<\frac{\epsilon^2}{2}$ (evaluating $\mathrm{id}$ at $t=1$), and so $\|\lambda_i-\mu_i\|<\frac{\epsilon^2}{2}$ for all $i$ (see the proof of Proposition \ref{isometry}). Let $b'\in M_n(C(X))$ be given by $b'=\sum_{i=1}^n Q_i\lambda_i$. Then $d_U(b,b')<\epsilon^2/2$ and \[ d_W(a\otimes g_\epsilon,b'\otimes g_\epsilon)\leq d_W(a\otimes g_\epsilon,b\otimes g_\epsilon)+d_W(b\otimes g_\epsilon,b'\otimes g_\epsilon)<\epsilon^2. \] The implication \eqref{withcontractions} will be proven once we have shown that \[ d_W(a\otimes g_\epsilon,b'\otimes g_\epsilon)<\epsilon^2\Rightarrow d_U(a,b')<2\epsilon. \] In order to prove this, it is enough to show that the left side of this implication implies \eqref{lammu2} of Proposition \ref{CWcompare} (applied to the elements $a$ and $b'$). Let us choose $\epsilon'>0$ such that $d_W(a\otimes g_\epsilon,b'\otimes g_\epsilon)<\epsilon^2-\epsilon'\epsilon$. By the definition of $d_W$ we have that \[ (a\otimes g_\epsilon-(\epsilon-\epsilon'\epsilon))_+\preccurlyeq_{Cu} (b'\otimes g_\epsilon-(\epsilon-\epsilon^2))_+. \] Let us identify $M_n(C(X))\otimes C_0(0,1]$ with $M_n(C_0(X\times (0,1]))$ and express the Cuntz comparison above in terms of the projections $(P_i)_{i=1}^n$ and $(Q_i)_{i=1}^n$, and the eigenfunctions $(\lambda_i)_{i=1}^n$. We get \begin{equation}\label{cuntzineq} \sum_{j=1}^n P_j(x)(\lambda_j(x)g_\epsilon(t)-\epsilon+\epsilon'\epsilon)_+\preccurlyeq_{Cu} \sum_{j=1}^n Q_j(x)(\lambda_j(x)g_\epsilon(t)-\epsilon+\epsilon^2)_+, \end{equation} for $(x,t)\in X\times (0,1]$. Note that this Cuntz comparison is not to be understood pointwise, but rather as a relation in the C*-algebra $M_n(C_0(X\times (0,1]))$. For $i\leq n$ let us define the set \[ T_i=\{x\in X\mid \lambda_{i+1}(x)/\lambda_{i}(x)\leq 1-\epsilon\hbox{ and }\lambda_i(x)\geq \epsilon\}.
\] Let $C_i\subseteq X\times (0,1]$ be the closed set $C_i=\{(x,\lambda_i(x))\mid x\in T_i\}$. Restricting the Cuntz comparison \eqref{cuntzineq} to the set $C_i$, and using the definition of $g_\epsilon$, we get that \begin{align*} P_1 &\left(\frac{\lambda_1}{\lambda_i} - (1-\epsilon')\right)_+ + P_2 \left(\frac{\lambda_2}{\lambda_i}-(1-\epsilon')\right)_+ +\dots +\epsilon' P_i \preccurlyeq_{Cu} \\ & Q_1 \left(\frac{\lambda_1}{\lambda_i}-(1-\epsilon)\right)_+ + Q_2 \left(\frac{\lambda_2}{\lambda_i}-(1-\epsilon)\right)_+ +\dots + \epsilon Q_i, \end{align*} on the closed set $T_i$. It follows that $\sum_{j=1}^i P_j\preccurlyeq_{Cu} \sum_{j=1}^i Q_j$ on $T_i$. In the same way we can prove that $\sum_{j=1}^i Q_j\preccurlyeq_{Cu} \sum_{j=1}^i P_j$ on $T_i$, and so $\sum_{j=1}^i P_j\sim \sum_{j=1}^i Q_j$ on $T_i$. If $\lambda_i(x)-\lambda_{i+1}(x)\geq \epsilon$ then $\lambda_{i+1}(x)/\lambda_{i}(x)\leq 1-\epsilon$ and $\lambda_i(x)\geq \epsilon$. Hence, $\{x\in X\mid \lambda_i(x)-\lambda_{i+1}(x)\geq \epsilon\}\subseteq T_i$. Therefore, the elements $a$ and $b'$ satisfy the condition \eqref{lammu2} of Proposition \ref{CWcompare}. This completes the proof of the theorem. \end{proof} \begin{bibdiv} \begin{biblist} \bib{choi-elliott}{article}{ author={Choi, M. D.}, author={Elliott, G. A.}, title={Density of the selfadjoint elements with finite spectrum in an irrational rotation C*-algebra}, journal={Math. Scand.}, volume={67}, date={1990}, pages={73--86}, } \bib{ciuperca-elliott}{article}{ author={Ciuperca, A.}, author={Elliott, G. A.}, title={A remark on invariants for C*-algebras of stable rank one}, journal={Int. Math. Res. Not. IMRN}, date={2008}, number={5}, } \bib{coward-elliott-ivanescu}{article}{ author={Coward, K. T.}, author={Elliott, G. A.}, author={Ivanescu, C.}, title={The Cuntz semigroup as an invariant for C*-algebras}, journal={J. Reine Angew.
Math.}, volume={623}, date={2008}, pages={161--193}, } \bib{engelking}{book}{ author={Engelking, R.}, title={Dimension theory}, publisher={North-Holland Publishing Co.}, place={Amsterdam}, date={1978}, pages={x+314 pp. (loose errata)}, } \bib{kirchberg-rordam}{article}{ author={Kirchberg, E.}, author={R{\o}rdam, M.}, title={Infinite non-simple C*-algebras: absorbing the Cuntz algebras $\scr O\sb \infty$}, journal={Adv. Math.}, volume={167}, date={2002}, pages={195--264}, issn={0001-8708}, } \bib{phillips}{article}{ author={Phillips, N. C.}, title={Recursive subhomogeneous algebras}, journal={Trans. Amer. Math. Soc.}, volume={359}, date={2007}, number={10}, pages={4595--4623 (electronic)}, } \bib{robert}{article}{ author={Robert, L.}, title={The Cuntz semigroup of some spaces of dimension at most 2}, status={preprint, arXiv:0711.4396v2}, date={2007}, } \bib{rordam1}{article}{ author={R{\o}rdam, M.}, title={The stable and the real rank of $\scr Z$-absorbing C*-algebras}, journal={Internat. J. Math.}, volume={15}, date={2004}, pages={1065--1084}, } \bib{rordam-winter}{article}{ author={R{\o}rdam, M.}, author={Winter, W.}, title={The Jiang-Su algebra revisited}, journal={ J. Reine Angew. Math.}, status={to appear}, } \bib{thomsen}{article}{ author={Thomsen, K.}, title={Homomorphisms between finite direct sums of circle algebras}, journal={Linear and Multilinear Algebra}, volume={32}, date={1992}, number={1}, pages={33--50}, } \bib{thomsen2}{article}{ author={Thomsen, K.}, title={Inductive limits of interval algebras: unitary orbits of positive elements}, journal={Math. Ann.}, volume={293}, date={1992}, number={1}, pages={47--63}, } \bib{vaserstein}{article}{ author={Vaserstein, Leonid N.}, title={Vector bundles and projective modules}, journal={Trans. Amer. Math. Soc.}, volume={294}, date={1986}, pages={749--755}, issn={0002-9947}, } \bib{villadsen}{article}{ author={Villadsen, J.}, title={On the stable rank of simple C*-algebras}, journal={J. Amer. Math. 
Soc.}, volume={12}, date={1999}, number={4}, pages={1091--1102}, } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction}\label{intro} Upcoming neutral hydrogen (H\,{\small I}) surveys \citep[e.g.,][]{Apertif3, duffy} will deliver large datasets. The daily data-flow will be of the order of terabytes, and several hundred galaxies will be detected. To find and characterize H\,{\small I} objects, automated processing methods must use all of the three-dimensional (3-D) information (two positional dimensions and one spectral dimension) that the surveys make available. In this context, 3-D visualization techniques provide a powerful tool to inspect the sources under study. In fact, the 3-D view of a galaxy simultaneously presents both its H\,{\small I} distribution and its kinematics, providing an immediate overview of the structures and coherence in the data \citep{Oosterloo, Goodman, Punzo2015}. In addition, user interaction in the 3-D environment provides capabilities which astronomers can use to quickly analyze complex sources found by automated pipelines \citep[e.g., $\tt{Duchamp}$ and $\tt{SoFiA}$;][]{Whiting, sofia}. These sources include interacting galaxies, tidal tails, filaments, and stripped galaxies, and the majority will not exceed $10^8$ voxels\footnote{Voxels are 3-D pixels.}. Performing interactive 3-D rendering (and analysis) of H\,{\small I} sources is computationally affordable on a modern desktop \citep{Punzo2015}. This has stimulated further development of 3-D visualization tools for astronomical purposes.
For example, several packages have recently been developed, exploiting: the rendering engine of $\tt{Blender}$\footnote{\url{https://www.blender.org/}}, an open-source package for 3-D animation \citep{frelled,KentBook,AstroBlend}; indirect volume rendering\footnote{In scientific visualization and computer graphics, volume rendering is a set of techniques used to display a 2-D projection of a 3-D discretely sampled dataset.} available in the Visualization ToolKit, $\tt{VTK}$\footnote{\url{http://www.vtk.org/}}, and $\tt{Mayavi2}$\footnote{\url{http://code.enthought.com/projects/mayavi/}} \citep{X3D}; stereoscopic visualization and 3-D interaction hardware using the gaming engine $\tt{Unity}$\footnote{\url{https://unity3d.com/}} \citep{Ferrand}; and a large-scale, hybrid visualization and supercomputing environment \citep{Vohl}. Although these packages have introduced 3-D rendering solutions for visualizing astronomical datasets, they do not fully satisfy our visualization requirements (see Section~\ref{design}). In this paper, we present $\tt{SlicerAstro}$\footnote{\url{https://github.com/Punzo/SlicerAstro}} \citep{SlicerAstro}, an extension of $\tt{3DSlicer}$\footnote{\url{https://www.slicer.org/}} \citep[a multi-platform open-source software package for visualization and medical image processing;][]{Slicer}, that aims to provide an interactive 3-D visual analytics tool based on traditional 2-D input/output hardware. In Section~\ref{design} we describe the design of $\tt{SlicerAstro}$. In Section~\ref{filtering} we show how interactive filtering and 3-D visualization can boost the inspection of faint complex sources. In Section~\ref{masking} we describe the interactive 3-D masking capabilities available in $\tt{SlicerAstro}$. In Section~\ref{modeling} we show how 3-D visualization, coupled with interactive modeling, provides additional capabilities that help the discovery and analysis of subtle structures in the 3-D domain.
In Section~\ref{conclusion} we discuss the efficiency of such visual analytics techniques for helping astronomers in the analysis of complex sources. \section{The $\tt{SlicerAstro}$ environment}\label{design} An exhaustive review of open-source 3-D visualization packages in \cite{Punzo2015} led to the choice of $\tt{3DSlicer}$ as the preferred platform for the development of $\tt{SlicerAstro}$. The most important deciding factors included the following: \begin{enumerate}[I)] \item $\tt{3DSlicer}$ is an open-source platform with a Berkeley Software Distribution (BSD) license, which allows for free utilization of the software; \item the software has a flexible environment for code development and collaboration; \item $\tt{3DSlicer}$ has adequate documentation for both developers and users; \item the $\tt{3DSlicer}$ software has a large number of active developers; \item the $\tt{3DSlicer}$ interface already has numerous quantitative features (e.g., data probing, setting fiducial markups\footnote{A fiducial markup or fiducial is an object placed in the field of view of an imaging system which appears in the image produced, for use as a point of reference or a measure.} and listing their positions, 2-D/3-D rulers, and calculating statistics in a selected volume). \end{enumerate} Several of the medical visualization tools present in $\tt{3DSlicer}$ suit the needs of astronomical applications. For example, $\tt{3DSlicer}$ optimizes the display layout and the process of navigating through data for parallel two-dimensional visualizations (e.g., movies of channel maps). In addition, $\tt{3DSlicer}$ has been adopted by Kitware\footnote{\url{https://www.kitware.com/}} as a key open-source platform, similar to $\tt{VTK}$, $\tt{ITK}$\footnote{\url{https://itk.org/}} and $\tt{Paraview}$\footnote{\url{http://www.paraview.org/}}, which Kitware has been supporting for more than 15 years. This guarantees long-term support and future updates of $\tt{3DSlicer}$.
\subsection{Design}\label{requirements} \cite{Punzo2015} analyzed and reviewed the requirements for the visualization of H\,{\small I} in and around galaxies. These include handling the loading and writing of Flexible Image Transport System (FITS) files \citep{Pence}, the ability to display the astronomical World Coordinate System \citep[WCS;][]{Calabretta, Greisen}, interactive 3-D high-quality rendering capabilities \citep[i.e., \textit{graphics processing unit} (GPU)-accelerated ray casting rendering;][]{Roth, Schroeder} and interactive linking between 1-D/2-D/3-D views. Interactive visualization that allows the user to extract quantitative information directly from the visual presentation is also of primary importance: probing the data with a cursor; displaying coordinate axes in the 2-D views; performing 3-D segmentation\footnote{Image segmentation is the process of partitioning an image into disjoint regions that are uniform with respect to some property.} techniques; linked 1-D/2-D/3-D region of interest (ROI) selection and the ability to calculate statistics (e.g., mean, $rms$, maximum, minimum, etc.) in a specific area or volume. Another requirement is to couple analysis techniques such as interactive smoothing and tilted-ring model fitting to visualization. Therefore, comparative visualization (multiple views, overlaid visualizations, etc.) is fundamental for comparing the raw data with the smoothed version and/or the models. The last requirement is interoperability\footnote{Interoperability is the ability of different information technology systems and software applications to communicate, exchange data, and use the information that has been exchanged.} with virtual observatory (VO) tools \citep{samp}. Moreover, in order to facilitate collaborative work, the source code must be open, modular, well documented, and well maintained.
The current version of the $\tt{3DSlicer}$ software provides several of these capabilities: CPU and GPU rendering based on $\tt{VTK}$, an interface optimized for 2-D visualization with a high level of linking between the 2-D and 3-D views, 2-D and 3-D segmentation techniques, a high level of modularity in the source code, an embedded $\tt{Python}$ console in the user interface for fast interaction with the $\tt{3DSlicer}$ application programming interface (API)\footnote{The API is a set of subroutine definitions, protocols, and tools for building application software.}, and detailed documentation for both users and developers. In addition, we made a number of contributions to the $\tt{3DSlicer}$ source: we added more types of units to the $\tt{3DSlicer}$ standards and factorized the $\tt{DataProbe}$ module and widgets that control the 2-D views to allow their customization by $\tt{3DSlicer}$ extensions. In addition, to fulfill the requirements, the following capabilities have to be added: \begin{enumerate}[I)] \item proper visualization of astronomical data-cubes using the FITS data format; \item enabling interactive smoothing in all three dimensions; \item interactive 3-D selection of H\,{\small I} sources; \item interactive H\,{\small I} data modeling coupled to visualization; \item generation of flux density profiles and histograms of the voxel intensities; \item introduction of the SAMP protocol to enable interoperability with $\tt{Topcat}$ \citep{Topcat} and other VO tools and catalogs. \end{enumerate} These software capabilities are particular to astronomical applications and, therefore, it is optimal to implement them in an extension of $\tt{3DSlicer}$, i.e.~$\tt{SlicerAstro}$, rather than in its core.
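As a minimal illustration of capability V above, a global flux density profile and an intensity histogram can be computed with $\tt{numpy}$ in a few lines; the data-cube here is a hypothetical random array standing in for real data:

```python
import numpy as np

# hypothetical stand-in for an H I data-cube: axes (velocity, dec., r.a.)
rng = np.random.default_rng(42)
cube = rng.normal(0.0, 1.0, size=(64, 32, 32))

# flux density profile: sum over the two spatial axes, one value per channel
profile = cube.sum(axis=(1, 2))

# histogram of the voxel intensities, e.g., to inspect the noise distribution
counts, edges = np.histogram(cube, bins=50)
```

In a real pipeline the sum would additionally be corrected for the beam area and multiplied by the channel width to obtain integrated fluxes.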
In the next sections we will discuss the implementation and deployment of such capabilities and use the H\,{\small I} emission in and around WEIN069 \citep{Mpati}, a galaxy in a region in the sky where a filament of the Perseus-Pisces Supercluster (PPScl) crosses the plane of the Milky Way, as an example. \subsection{Implementation} The $\tt{3DSlicer}$ plug-in mechanism enables the rapid development of custom modules in different programming languages and for different levels of integration: \begin{enumerate}[1)] \item The \textit{command-line interface modules} are standalone executables with a limited input/output argument complexity (simple argument types and no user interaction). \item The \textit{loadable modules} are plugins implemented in the {C\nolinebreak[4]\hspace{-.05em}\raisebox{.4ex}{\tiny\bf ++}}\; language that are integrated tightly in the $\tt{3DSlicer}$ core software. These modules have access to all other $\tt{3DSlicer}$ core modules and the internals of the application and they can define custom, interactive graphical user interfaces. \item The \textit{scripted modules} are written in the $\tt{Python}$ language. These modules can be developed and modified without rebuilding or restarting $\tt{3DSlicer}$ and they have similar access to the application internals as loadable modules. \end{enumerate} All objects (volumetric images, surface models, transforms, etc.) in $\tt{3DSlicer}$ are stored in a hierarchical structure of nodes encoded in the Medical Reality Modeling Language (MRML). Each MRML node has its own list of custom attributes that can be used to specify additional characteristics for the data object. This method of storage enables the modules to have access to the MRML tree, allowing new extensions to leverage existing processing and visualization functions without directly interfering with other modules. 
In addition, $\tt{3DSlicer}$ and its extensions are developed using a $\tt{CMake}$-based\footnote{\url{https://cmake.org/}} build system, which greatly helps the development, packaging and testing of multi-platform software. \begin{figure}[!ht] \centering \includegraphics[width=0.48\textwidth]{SlicerAstroDesign} \caption{The architecture of $\tt{SlicerAstro}$ is shown in the diagram. The dashed arrows indicate the dependency of a component on another one. The loadable modules are the main components of $\tt{SlicerAstro}$. The $\tt{AstroVolume}$ module is the core module of $\tt{SlicerAstro}$ and it provides an interface for handling the loading and writing of FITS files, the control of the 2-D and 3-D color transfer functions, and the display of the astronomical world coordinates system \citep[WCS;][]{Calabretta, Greisen}. The $\tt{AstroSmoothing}$ and $\tt{AstroModeling}$ modules take care of specific operations (smoothing and modeling, respectively), with their own interface widgets for user interaction. The scripted modules have the role of utilities such as downloading sample datasets. } \label{SlicerAstroDesign} \end{figure} The $\tt{SlicerAstro}$ functionality is implemented as multiple plug-in modules, bundled as one downloadable extension. This modularization makes development and maintenance faster and affordable. Moreover, extensions are built every day for the nightly build of $\tt{3DSlicer}$ to identify breakage with the core. The architecture of $\tt{SlicerAstro}$ is shown in Fig.~\ref{SlicerAstroDesign}. $\tt{SlicerAstro}$ uses the $\tt{CTK}$\footnote{\url{http://www.commontk.org}} and $\tt{Qt}$\footnote{\url{https://www.qt.io/}} packages for user interface widgets, and the $\tt{VTK}$ library for the visualization (i.e.~2-D and 3-D rendering). $\tt{SlicerAstro}$ also depends on $\tt{CFITSIO}$ \citep{cfitsio}, $\tt{WCSLIB}$ \citep{wcslib} and $\tt{^{\rm3D}\,Barolo}$ \citep{bbarolo}.
The loadable modules are the main components of $\tt{SlicerAstro}$, while the scripted modules have the role of utilities such as presenting a welcoming interface and capabilities to download sample datasets. The $\tt{AstroVolume}$ component is the core module (see Section~\ref{framework}); the $\tt{AstroSmoothing}$ and $\tt{AstroModeling}$ modules take care of specific operations (smoothing and modeling, respectively), with their own interface widgets for user interaction (see Sections \ref{filtering} and \ref{modeling}). $\tt{SlicerAstro}$ development focuses on H\,{\small I} datasets. Therefore, we currently provide modules mainly aimed at the analysis of H\,{\small I} data-cubes. However, $\tt{SlicerAstro}$ can potentially also enhance the inspection of other datasets such as mm/submm molecular line data and optical integral field spectroscopic data. We will elaborate more on the potential of $\tt{SlicerAstro}$ for such datasets in Section~\ref{conclusion}. \subsection{Interface framework}\label{framework} The $\tt{AstroVolume}$ module provides an interface for handling the loading and writing of FITS files, MRML nodes that store the data in the $\tt{3DSlicer}$ object-tree, the display of the WCS and the control of the 2-D and 3-D color transfer functions. In Fig.~\ref{SlicerAstroFig1}, we show the implementation of the $\tt{3DSlicer}$ and $\tt{SlicerAstro}$ interface. On the top, the main menu shows several options for loading and writing files (including FITS files) and for editing the $\tt{3DSlicer}$ settings. The data loaded from a FITS file are stored in a $\tt{vtkMRMLAstroVolumeNode}$ object. The instantiated MRML nodes and their properties can be inspected in the $\tt{SubjectHierarchy}$ module (see Fig.~\ref{SlicerAstroFig0}). The outputs of source finder pipelines, that is, \textit{object masks}, are loaded as $\tt{vtkMRMLAstroLabelMapVolumeNode}$ objects.
These masks are delivered as a data-cube where non-detected voxels in the original data-cube have a value of 0 and detected voxels have an integer value corresponding to the ID of the object they belong to. Due to the complex 3-D nature of the sources \citep{Sancisi} and the noisy character of the data, constructing a fully automated and reliable pipeline is not trivial \citep{Popping} and visualization can help in identifying or rejecting very faint signals \citep{Punzo2016}. For example, in Fig.~\ref{SlicerAstroFig1}, $\tt{SlicerAstro}$ shows the visualization of the H\,{\small I} emission in and around WEIN069 and its mask, generated with $\tt{SoFiA}$ \citep{sofiaCode}. The data-cube contains three sources, WEIN069 and two companions, each identified as a separate source with its own mask. In addition, there is a tidal tail and a very faint filament that connects two of the galaxies. \begin{figure}[!ht] \centering \includegraphics[width=0.48\textwidth]{SlicerAstroFig0} \caption{The interface widgets of the $\tt{SubjectHierarchy}$ module. In the top panel, the interface includes the widgets for selecting MRML nodes representing the data-cubes. In the bottom panel, the interface includes a tool to inspect and modify the FITS keywords. } \label{SlicerAstroFig0} \end{figure} The left panel in Fig.~\ref{SlicerAstroFig1} includes the widgets for changing the 2-D and 3-D color transfer functions for $\tt{vtkMRMLAstroVolumeNode}$ objects. In the case of $\tt{vtkMRMLAstroLabelMapVolumeNode}$ objects, volume rendering is not available, but it is possible to use the $\tt{MaskVisualization}$ widget to convert the $\tt{vtkMRMLAstroLabelMapVolumeNode}$ object to a $\tt{vtkMRMLSegmentationNode}$ object.
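The mask encoding described above (0 for non-detected voxels, an integer source ID for detected ones) can be sketched with $\tt{numpy}$; the mask below is hypothetical:

```python
import numpy as np

# hypothetical object mask: 0 = not detected, integer ID = source membership
mask = np.zeros((16, 16, 16), dtype=np.int32)
mask[2:6, 2:6, 2:6] = 1      # voxels assigned to source 1
mask[9:14, 9:14, 9:14] = 2   # voxels assigned to source 2

source_ids = np.unique(mask[mask > 0])            # IDs of the detected objects
n_voxels_source2 = int(np.count_nonzero(mask == 2))
```

Selecting `mask == ID` in this way is how one source can be isolated from the cube for display or statistics.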
The $\tt{vtkMRMLSegmentationNode}$ class is a core class of $\tt{3DSlicer}$ that handles the display of data segmentations in both the 2-D and 3-D views, as shown in Fig.~\ref{SlicerAstroFig1}; the segmentations can be overlaid on the data of a $\tt{vtkMRMLAstroVolumeNode}$ object. The segmentation objects can also be interactively modified (see Section~\ref{masking} for more information) and can be exported for 3-D printing (or imported into $\tt{Blender}$) by saving them in the STL file format. Moreover, the layout includes interface widgets to control the display properties (e.g., user interaction to rotate the 3-D view), and a window displaying the 3-D world coordinates and data value at the position of the data probe in the linked 2-D views. The 2-D views also have quantitative World Coordinate axes. Finally, the MRML infrastructure allows the user to save the session as a \textit{scene}. Reloading such a \textit{scene} restores the session. One can also share interesting visualizations with colleagues using the $\tt{Datastore}$ module. This module saves a bundle with all the necessary information (the data, the visualization views, screenshots and text comments) on the Kitware servers. Other users can download these bundles. \begin{sidewaysfigure*} \centering \includegraphics[width=1.\textwidth]{SlicerAstroFig1} \caption{Visualization of the H\,{\scriptsize I} emission in and around WEIN069 \citep{Mpati} and its mask, generated with $\tt{SoFiA}$, in $\tt{SlicerAstro}$. The data-cube contains three sources, i.e., WEIN069 and two companion galaxies, a tidal tail and a very faint filament that connects two galaxies. In the 3-D view the data are rendered in green and highlighted at an intensity level equal to 3 times the root mean square ($rms$) noise. The colored segmentations represent the mask and each color refers to a specific source ID as shown in the table widget in the left panel.
The left panel also includes interface widgets to control the 2-D and 3-D color transfer functions and a data probe window. Quantitative information such as WCS coordinates is shown both in the data probe window and along the axes in the 2-D views. The white labels in the 3-D view represent the four cardinal directions (\textit{N}, \textit{S}, \textit{E}, \textit{W}) and the line-of-sight direction (representing frequency/wavelength or velocity/redshift, z, hence the symbol \textit{Z}). } \label{SlicerAstroFig1} \end{sidewaysfigure*} \subsection{Rendering and user interactions}\label{rendering} In $\tt{3DSlicer}$ the visualization representations are rendered with the Visualization Toolkit, $\tt{VTK}$ \citep{Schroeder}. In $\tt{SlicerAstro}$ the data are rendered in 3-D with the $\tt{VTK}$ implementation of the ray casting algorithm, a direct volume rendering method \citep{Roth}. Ray casting offers very high-quality results (i.e., free of artifacts), but it is computationally expensive. On the other hand, ray casting is a massively parallel algorithm. On modern desktops the $\tt{VTK}$ GPU implementation offers interactive rendering at a high frame rate ($> 5$ frames per second) for data-cubes not exceeding $10^9$ voxels. The use of such high-quality rendering is mandatory in our case. In fact, other methods can produce many rendering artifacts in the noisy regions of an H\,{\small I} data-cube. In particular, indirect volume rendering techniques are very ineffective at signal-to-noise ratio $\lesssim 2$, because they have to fit geometries to very noisy data that do not have well-defined closed borders \citep{Punzo2015}. In $\tt{SlicerAstro}$, the masks are visualized as segmentations (i.e., $\tt{3DSlicer}$ renders them with indirect volume rendering), because they are supposed to be noise-free by definition.
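Purely as an illustration of the principle behind direct volume rendering, the following $\tt{numpy}$ sketch performs a toy front-to-back alpha compositing along one axis (one ray per 2-D pixel); the opacity transfer function and the data are invented, and the actual GPU ray casting in $\tt{VTK}$ is far more sophisticated:

```python
import numpy as np

# hypothetical emission-like cube (positive values only)
rng = np.random.default_rng(0)
cube = np.abs(rng.normal(0.0, 1.0, size=(32, 32, 32)))

# front-to-back alpha compositing along axis 0: one ray per 2-D pixel
color = np.zeros(cube.shape[1:])
alpha = np.zeros(cube.shape[1:])
vmax = cube.max()
for sl in cube:                               # march all rays one slice at a time
    a = np.clip(sl / vmax, 0.0, 1.0) * 0.1    # toy opacity transfer function
    color += (1.0 - alpha) * a * sl           # accumulate weighted emission
    alpha += (1.0 - alpha) * a                # accumulated opacity stays below 1
```

Because each 2-D pixel is processed independently, the loop over slices is trivially parallelizable, which is why GPU implementations achieve interactive frame rates.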
$\tt{3DSlicer}$ offers several 2-D/3-D linked navigation and interaction tools such as \textit{crosshair}, \textit{fiducials}, \textit{region of interest} (ROI), \textit{ruler} and slice views linked with 3-D views \citep[for more information, we refer to][ and the $\tt{3DSlicer}$ online documentation\footnote{ \url{https://www.slicer.org/wiki/Documentation/Nightly}}]{Slicer}. All these features are extremely useful for navigating and probing the data. However, the 3-D visualization paradigm used in $\tt{3DSlicer}$ and $\tt{SlicerAstro}$ is limited by the use of 2-D input and output hardware such as a standard monitor and mouse. An obvious limitation in 3-D is that it is not straightforward to select features or pick positions (i.e., voxels) in the 3-D space in an intuitive manner. Complementary visualization in 2-D (linked to the 3-D one) can partially address these deficiencies. In $\tt{3DSlicer}$ all the modules are accessible at run-time from the $\tt{Python}$ console ($\tt{Python}$ version 2.7.11 is bundled and delivered together with the $\tt{3DSlicer}$ binaries). Note, however, that of the packages often used in astronomy only $\tt{numpy}$ is part of this bundle. This allows additional flexibility for user-customized visualization and analysis tasks using all $\tt{3DSlicer}$ and $\tt{SlicerAstro}$ capabilities. The $\tt{Python}$ console and automated $\tt{Python}$ scripts are a very powerful tool for interacting with the data itself. Some examples are: accessing the array containing the data, modifying the data and calculating statistics in a region of interest. Moreover, the MRML objects store everything that the user visualizes and changes in the interface. This allows the user to perform the same actions by using $\tt{Python}$ scripts. An example for applying smoothing to a data-cube, performing the rendering, and saving the result as a video is shown in the appendix, Section~\ref{Appendix}. 
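The kind of scripted interaction described above can be sketched as follows. Inside $\tt{3DSlicer}$ the array would be obtained from the MRML node through the application API; here a hypothetical $\tt{numpy}$ array stands in for the loaded data-cube:

```python
import numpy as np

# stand-in for the data array of a loaded data-cube
rng = np.random.default_rng(1)
cube = rng.normal(0.0, 1.0, size=(64, 64, 64))

# statistics in a region of interest, selected here by simple slicing
roi = cube[10:30, 20:40, 20:40]
stats = {
    "mean": float(roi.mean()),
    "rms": float(np.sqrt(np.mean(roi ** 2))),
    "max": float(roi.max()),
    "min": float(roi.min()),
}
```

The same few lines, typed in the embedded console or placed in a script, can be looped over many sources to batch-produce statistics.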
For example, this framework is extremely useful for creating screenshots and videos for a large number of sources. For more information, we also refer to the online documentation\footnote{\url{https://github.com/Punzo/SlicerAstro/wiki}}. \section{Interactive filtering}\label{filtering} Future blind H\,{\small I} surveys will detect a large variety of galaxies with additional complex features such as tails, extra-planar gas, and filaments. These faint structures can be found in nearby, well resolved galaxies and groups of marginally resolved galaxies. They have a very low signal-to-noise ratio ($\sim 1$), but are extended over many pixels. Efficiently separating such signals from the noise is not straightforward \citep{Punzo2016}. Moreover, in the case of Apertif \citep{Apertif3} and ASKAP \citep{askap}, it is estimated that tens of such sub-cubes will be collected weekly \citep{Punzo2015}. This is a large volume of data, and a coupling between the filtering algorithms and 3-D visualization can enhance the inspection process of large numbers of galaxies and masks provided by source finder algorithms. \begin{figure}[!ht] \centering \includegraphics[width=0.48\textwidth]{SlicerAstroFig2Widgets} \caption{The interface widgets of the $\tt{AstroSmoothing}$ module. The interface includes a widget for changing the input parameters for the smoothing and a table showing the output segmentations generated after the smoothing process. } \label{SlicerAstroFig2Widgets} \end{figure} \begin{figure*}[!ht] \centering \includegraphics[width=1.\textwidth]{SlicerAstroFig2} \caption{A comparative layout of the output generated by the $\tt{AstroSmoothing}$ module. The layout is composed of two 3-D views and three 2-D views. In the left 3-D view and the 2-D views the data are shown. In the right 3-D view the filtered version of the data is shown. The data are rendered in different colors that highlight the data at different intensity levels: green, blue and red correspond to 3, 7 and 15 times the $rms$ noise, respectively. The colored segmentations represent masks automatically calculated by the filtering algorithm. The light blue and yellow segmentations (visualized as contour plots in the 2-D views) are a 3 $rms$ thresholding of the input data and the filtered data, respectively. } \label{SlicerAstroFig2} \end{figure*} In Fig.~\ref{SlicerAstroFig2Widgets} we show the interface of the $\tt{AstroSmoothing}$ module, which includes the widgets for changing the input (such as the filter choice, the computational hardware and the smoothing parameters) and for visualizing the output segmentation objects generated by the smoothing process. Three filters are currently available in the $\tt{AstroSmoothing}$ module: \begin{enumerate}[1)] \item \textit{Box} (or \textit{mean}) filter: it replaces each pixel value in the volume with the mean value of its neighbors, including the value of the pixel itself. Both isotropic (where the 3-D kernel has the same dimensions along all three axes) and anisotropic implementations are available. \item \textit{Gaussian}: it applies a 3-D convolution operator to the volume. It preserves the shape of the objects better than the box filter, but is computationally more expensive. Both isotropic and anisotropic (the kernel can have different dimensions along the 3 axes and it can be rotated) implementations are available. \item \textit{Intensity-Driven Gradient}: it uses an adaptive diffusion process (i.e., operates on the differences between neighboring pixels, rather than on the pixel values directly). High signal-to-noise regions are unaffected, but low signal-to-noise, extended regions are enhanced. \end{enumerate} These algorithms are available in $\tt{SlicerAstro}$ as parallelized implementations on both CPU and GPU hardware, offering interactive performance when processing data-cubes of dimensions up to $10^7$ voxels and very fast performance ($< 3.5$ sec) for larger ones (up to $10^8$ voxels).
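As an illustration of the first filter, an isotropic 3-D box filter can be written in pure $\tt{numpy}$; edge handling is periodic here for brevity, and the actual $\tt{SlicerAstro}$ implementation differs and is parallelized on CPU and GPU:

```python
import numpy as np

def box_smooth(cube, size=3):
    """Isotropic 3-D box (mean) filter; periodic edge handling via np.roll."""
    half = size // 2
    out = np.zeros(cube.shape, dtype=float)
    n = 0
    # accumulate every shifted copy of the cube inside the kernel footprint
    for dz in range(-half, half + 1):
        for dy in range(-half, half + 1):
            for dx in range(-half, half + 1):
                out += np.roll(cube, (dz, dy, dx), axis=(0, 1, 2))
                n += 1
    return out / n  # divide by the number of kernel elements (size**3)
```

A constant cube is left unchanged, while uncorrelated noise is suppressed by roughly $\sqrt{27}$ for `size=3`, which is why box smoothing raises the local signal-to-noise ratio of extended faint emission.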
The intensity-driven gradient filter, due to its adaptive characteristics, is the optimal choice for H\,{\small I} data \citep{Punzo2016}. Therefore, it is the default method when the automatic mode has been chosen. This algorithm preserves the detailed structure of the signal with high signal-to-noise ratio ($> 3$) at the highest resolution, while smoothing only the faint part of the signal (signal-to-noise ratio $< 3$). For more information regarding the filters and their performance, default parameters, advantages and disadvantages, we refer to \cite{Punzo2016}. After running the smoothing process, $\tt{SlicerAstro}$ automatically displays a comparative layout composed of two 3-D views, one of the original data (top left panel) and one of the filtered data (top right panel), and three 2-D views of the original data (lower three panels) for the inspection of the data, as shown in Fig.~\ref{SlicerAstroFig2}. In this particular case, the 3-D visualization of the filtered data immediately highlights the presence of the faint filament between two galaxies that was hardly visible in the original version of the data. Moreover, the coupling between 3-D visualization and interactive filtering enables a user to manually and iteratively search for the smoothing parameters that maximally enhance the local signal-to-noise ratio of the very faint component. We will show in the next section how any segmentations generated by the smoothing module (or converted from loaded masks as shown in Section~\ref{framework}) can be interactively modified in the $\tt{SegmentationEditor}$ module of $\tt{3DSlicer}$. \section{Interactive 3-D masking}\label{masking} Twenty years ago, \cite{Norris} pointed out that the main challenge for visualizing astronomical data in 3-D was to develop a 3-D visualization tool with interactive capabilities for data inspection and with interactive and quantitative analysis capabilities.
Nowadays, 3-D interactive visualization is achievable thanks to the use of massively parallel hardware such as GPUs (see Section~\ref{rendering}). On the other hand, volumetric data interaction tools (e.g., picking a voxel or selecting a region of interest in 3-D) are necessary for performing data analysis in a 3-D environment. An optimized 3-D selection technique, based on 2-D input/output hardware, is still a partially open problem, not only in astronomy, but also in medical visualization and computer science. Moreover, the optimal selection technique highly depends on the specifications of the use case. Our requirements for a 3-D selection tool are interactivity and a minimal number of user operations for achieving the selection (i.e., user-friendliness). For a review of the state-of-the-art 3-D selection algorithms we refer to \cite{Yu1} and \cite{Yu2}. For our application, we opt for the $\tt{CloudLasso}$ technique \citep{Yu1}. The $\tt{CloudLasso}$, operated on grid data, is based on the application of the Marching Cubes (MC) algorithm \citep{Wyvill, Lorensen} for the identification of regions of voxels with signal inside a user-drawn lasso; i.e., $\tt{CloudLasso}$ is a lasso-constrained Marching Cubes method. The $\tt{CloudLasso}$ method allows us to spatially select structures with high signal-to-noise ratio ($> 3$) within a lasso region. Even if disjoint structures lie visually behind one another, they can all be selected without including the noisy regions in between. To perform the intended selection, a threshold has to be chosen. The $\tt{CloudLasso}$ algorithm, therefore, comprises the following two steps: \begin{figure}[!ht] \centering \includegraphics[width=0.48\textwidth]{SlicerAstroFig3Widgets} \caption{The interface widgets of the $\tt{SegmentationEditor}$ module and $\tt{AstroCloudLasso}$ segmentation effects.
The interface includes widgets for selecting the segment to modify, the segmentation editor effect and the parameters relative to the chosen effect. } \label{SlicerAstroFig3Widgets} \end{figure} \begin{enumerate}[A)] \item Volume selection: the subset of the volume where the intensities of the voxels exceed a threshold is computed using Marching Cubes. \item Threshold tuning: interactive adjustment of the intensity threshold. \end{enumerate} The selection operated by the $\tt{CloudLasso}$ algorithm is highly dependent on the user interactions. In fact, the user has to choose the orientation of the camera which gives the best view of the data, perform the 2-D selection on the screen and select the optimal threshold. Therefore, the user experience and knowledge of the data are of primary importance in the $\tt{CloudLasso}$ selection. The technique allows the user to perform a refinement of the selection interactively by tuning the threshold or through Boolean Operations. \begin{figure}[!ht] \centering \includegraphics[width=0.395\textwidth]{SlicerAstroFig3} \includegraphics[width=0.395\textwidth]{SlicerAstroFig3b} \includegraphics[width=0.395\textwidth]{SlicerAstroFig4} \caption{Usage of the $\tt{AstroCloudLasso}$ segmentation editor effect in 3-D. A smoothed version of the WEIN069 data is rendered in green in the top and middle panels. In the bottom panel, the original version of WEIN069 data is rendered. The three renderings highlight the data at the intensity level equal 3 times the $rms$. In the top panel, the colored segmentations represent the mask shown in Fig.~\ref{SlicerAstroFig1}. In order to visualize clearly both the data and the mask, the data are rendered with a higher opacity in the bottom panel compared to the upper panels. Similarly, the opacity of the segmentations is decreased. The $\tt{AstroCloudLasso}$ selection tool is visualized as a yellow tube drawn by the user with the 2-D cursor indicated by the blue cloud. 
This tool computes a selection in 3-D space from the 2-D user-selection. It builds a closed surface at the value of the intensity level specified in the settings widget (Fig.~\ref{SlicerAstroFig3Widgets}) and visualizes the modified segment as shown in the middle panel. In the bottom panel, we show all the modified segments.} \label{SlicerAstroFig3} \end{figure} Although the threshold tuning step can be improved or replaced by more complex techniques to identify and classify the signal in the selection, the $\tt{CloudLasso}$ technique is the most reliable choice in our case, because it leaves any classification to the user (leveraging his/her knowledge about the data). For example, connectivity operators \citep{Heijmans} can be applied after the thresholding to distinguish the various islands of signal and to label them with IDs. Moreover, $\tt{MAX}$-$\tt{TREE}$ algorithms \citep{moschini2014} can automatically provide a tree classification of the data. Finally, more advanced selection techniques can be employed \citep[e.g., $\tt{Cast}$ selections;][]{Yu2}. The common element in these techniques is the idea of classifying (in different ways) the information in the data. However, due to the very noisy nature of H\,{\small I} data, separating the H\,{\small I} signal from the noise is not trivial \citep{Punzo2016} and, therefore, it is quite challenging to build an automated algorithm to classify the data \citep[see also][]{Giese2016}. In the $\tt{SegmentationEditor}$ module of $\tt{3DSlicer}$ we implemented an $\tt{AstroCloudLasso}$ segmentation editing capability, optimized and specialized for the selection of H\,{\small I} data. A segmentation editor is a $\tt{3DSlicer}$ tool that enables user interaction with the data and creation/modification of segmentations both in the 2-D and 3-D views. In Fig.~\ref{SlicerAstroFig3Widgets} we show the interface widgets of the $\tt{AstroCloudLasso}$ segmentation editor.
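The essence of the lasso-constrained selection can be sketched in a few lines of plain $\tt{numpy}$. The sketch below is a simplification under stated assumptions: the viewing direction is axis-aligned (a full implementation projects through the camera transform), the surface extraction is reduced to plain voxel thresholding instead of Marching Cubes, and all function names are illustrative:

```python
# Sketch of a lasso-constrained threshold selection (the idea behind
# CloudLasso), simplified to an axis-aligned view: a voxel is selected when
# its projection on the screen plane falls inside the 2-D lasso polygon AND
# its intensity exceeds a threshold (e.g., 3 times the rms noise).
import numpy as np

def points_in_polygon(px, py, poly):
    """Vectorized ray-casting point-in-polygon test for arrays px, py."""
    poly = [(float(x), float(y)) for x, y in poly]
    inside = np.zeros(px.shape, dtype=bool)
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if y1 == y2:       # horizontal edge: never crosses a horizontal ray
            continue
        crosses = ((y1 > py) != (y2 > py)) & \
                  (px < (x2 - x1) * (py - y1) / (y2 - y1) + x1)
        inside ^= crosses  # an odd number of crossings means "inside"
    return inside

def cloud_lasso_select(cube, lasso_xy, threshold):
    """Boolean selection mask for a cube indexed as (x, y, z)."""
    nx, ny, _ = cube.shape
    xx, yy = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    inside = points_in_polygon(xx, yy, lasso_xy)
    # Broadcasting the 2-D lasso mask along the projection axis selects
    # disjoint structures lying behind one another, while the intensity
    # threshold excludes the noisy regions in between.
    return inside[:, :, None] & (cube > threshold)

# Toy example: two bright blobs along the same line of sight, noise elsewhere.
rng = np.random.default_rng(0)
cube = 0.1 * rng.standard_normal((32, 32, 32))   # rms = 0.1
cube[10:14, 10:14, 5:8] += 5.0     # blob 1
cube[10:14, 10:14, 20:23] += 5.0   # blob 2, visually behind blob 1
lasso = [(8, 8), (16, 8), (16, 16), (8, 16)]
mask = cloud_lasso_select(cube, lasso, threshold=3 * 0.1)
```

Both blobs inside the lasso are selected, while voxels outside the lasso are never selected regardless of their intensity.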
The default value of the threshold is set to 3 times the $rms$ value of the data-cube under study. In Fig.~\ref{SlicerAstroFig3}, we show how the selection procedure is performed in a 3-D view of $\tt{3DSlicer}$ and show the results for each segment (i.e., we repeated the selection procedure four times). The tool can also perform 2-D selections (on the 2-D views), it can erase the segment under the selection (both in the 2-D and 3-D views) if the erase mode has been enabled, and it can interactively adjust the selection of the intensity threshold if the automatic updating mode has been enabled. The $\tt{AstroCloudLasso}$ segmentation editor effect can be used for two applications: \begin{enumerate}[A)] \item interactively modifying a mask, as shown in Fig.~\ref{SlicerAstroFig3} (note that $\tt{SlicerAstro}$ can save the new mask as a FITS file). This framework can be used as a tool for modifying the masks generated by source finder pipelines. \item selecting regions of interest for further analysis. \end{enumerate} In the next section, we will apply the segmentation as a selection for operating tilted-ring modeling in the region of interest. \section{Interactive modeling}\label{modeling} In the case of H\,{\small I} in galaxies one can extract additional information from fitting the observations with a so-called \textit{tilted-ring} model \citep{Warner}. Such a model describes the observed H\,{\small I} distribution of the galaxy as a set of concentric, inclined, and rotating rings. Each ring is characterized by the following parameters: the spatial coordinates of the center, the systemic velocity, the rotation velocity, the velocity dispersion, the inclination, and the position angle, each as a function of the galactocentric radius. A model is specified by a set of ring (radially varying) parameters plus a set of global parameters (e.g., ring width). To compute a model, the rings are populated with an ensemble of H\,{\small I} clouds using a Monte Carlo method.
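As an illustration of this Monte Carlo step, the sketch below populates a single ring with random clouds and projects their positions and line-of-sight velocities onto the sky. The function name is illustrative, and the angle conventions (azimuth origin, position-angle zero-point) are one common choice among several, not necessarily those of any particular modeling package:

```python
# Monte Carlo sketch of one ring of a tilted-ring model: clouds are placed at
# random azimuths on a circular ring and projected onto the sky. Angles are in
# radians; the position angle pa is measured from the x axis to the major axis.
import numpy as np

def ring_clouds(n_clouds, radius, inc, pa, v_rot, v_sys, rng):
    """Sky positions (x, y) and line-of-sight velocities of random clouds."""
    theta = rng.uniform(0.0, 2.0 * np.pi, n_clouds)  # azimuth in the ring plane
    # Inclination compresses the apparent minor axis by cos(inc).
    x = radius * (np.cos(theta) * np.cos(pa)
                  - np.sin(theta) * np.cos(inc) * np.sin(pa))
    y = radius * (np.cos(theta) * np.sin(pa)
                  + np.sin(theta) * np.cos(inc) * np.cos(pa))
    # Only the projected component of the circular velocity is observed: it
    # peaks on the major axis and vanishes on the minor axis (theta = pi/2).
    v_los = v_sys + v_rot * np.sin(inc) * np.cos(theta)
    return x, y, v_los

rng = np.random.default_rng(1)
x, y, v = ring_clouds(10000, radius=30.0, inc=np.radians(60.0),
                      pa=np.radians(45.0), v_rot=120.0, v_sys=1000.0, rng=rng)
```

A full model repeats this for a set of concentric rings before gridding the clouds into a cube and convolving with the observing beam.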
The cloud ensembles are integrated along each line-of-sight in the data-cube and convolved with a 3D-Gaussian representing the properties of the observing beam and the resolution in the frequency domain. A tilted-ring model is necessarily an oversimplification of the H\,{\small I} distribution inside galaxies. When the orbits are significantly non-circular, for example in the presence of a bar \citep{Bosma}, the tilted-ring model will not be able to represent the data accurately. Furthermore, there is a degree of degeneracy between some of the ring parameters (e.g., inclination, position angle and rotational velocity). In many cases, however, the tilted-ring model serves as a good approximation and can provide a deeper understanding of the kinematics and morphology of a galaxy, including asymmetries in surface density and velocity, the presence of gas at anomalous velocities, of extra-planar gas, of inflows and outflows, etc. It is for example rather easy to locate the presence of extra-planar gas once the symmetric and regularly rotating disk is modeled (see Section~\ref{UseCaseB}). It is, therefore, very useful to add model fitting capabilities to a visual analytics tool for H\,{\small I} data. Such a capability enables an interactive comparison between the data and models so that the quality of the model can be assessed interactively. This is possible by embedding the model routine in the visualization interface. This will also enable interactive tuning of the model parameters using the visualization interface. Modern 3-D tilted-ring modeling software can generate symmetric models that reproduce the data with a minimal user input and interaction \citep{Teodoro, Kamphuis}. In the next section we will briefly review such software libraries. We will also describe the integration of one of them in $\tt{SlicerAstro}$ and show how it provides additional capabilities for the detection and analysis of subtle structures in the 3-D domain. 
Two use cases will be investigated in Sections \ref{UseCaseA} and \ref{UseCaseB}: using the 3-D selection tool (shown in Section~\ref{masking}) to perform the tilted-ring model fitting only in a region of interest (i.e., excluding non-symmetric, non-regular, H\,{\small I} structures such as tidal tails) and using the symmetrical properties of automated tilted-ring model fitting to locate extra-planar gas. \subsection{Requirements} Tilted-ring model fitting is rather complex. Therefore, we chose to rely on an external state-of-the-art package rather than designing a new one. In order to be able to wrap an external model fitting package into $\tt{SlicerAstro}$, the following requirements can be formulated: \begin{enumerate}[I)] \item 3-D model fitting capabilities; \item automatic estimation of the initial parameters for the fitting; \item parallelization on CPU and/or GPU for fast execution; \item developed as a modern, modular source code, preferably {C\nolinebreak[4]\hspace{-.05em}\raisebox{.4ex}{\tiny\bf ++}}. \end{enumerate} Currently, two software packages are available: \newline $\tt{^{\rm3D}\,Barolo}$ \citep{bbarolo}, an automated procedure which fits tilted-ring models to H\,{\small I} data-cubes; and $\tt{Fat}$ \citep{FAT}, a similar package built on top of $\tt{TiRiFic}$ \citep{TiRiFic,Jozsa}. \begin{table}[!ht] \centering \begin{tabular}{| c | c | c |} \hline Requirements & $\tt{TiRiFic/Fat}$ & $\tt{^{\rm3D}\,Barolo}$ \\ \hline\hline I: fitting & & \\ capabilities & \ding{51} & \ding{51} \\ \hline II: parameter & & \\ estimation & \ding{51} & \ding{51} \\ \hline III: CPU/GPU & & \\ parallelization & \ding{55} & \ding{55} \\ \hline IV: {C\nolinebreak[4]\hspace{-.05em}\raisebox{.4ex}{\tiny\bf ++}} & & \\ development & \ding{55} & \ding{51} \\ \hline \end{tabular} \caption{Requirements for the model fitting external library, described in Section~\ref{modeling}.
We compare two software packages: $\tt{TiRiFic/Fat}$ \citep{Jozsa,Kamphuis} and $\tt{^{\rm3D}\,Barolo}$ \citep{Teodoro}. } \label{ModelingRequirements} \end{table} As shown in Table~\ref{ModelingRequirements}, $\tt{^{\rm3D}\,Barolo}$ is currently our optimal choice because, being developed in {C\nolinebreak[4]\hspace{-.05em}\raisebox{.4ex}{\tiny\bf ++}}, it satisfies the fourth requirement. The fourth requirement ensures high performance, a rather simple and smooth integration process, and long-term maintainability. On the contrary, $\tt{Fat}$ has been developed in $\tt{IDL}$, which introduces several compilation and linking issues (also, the license of $\tt{IDL}$ is not compatible with the open-source BSD license of $\tt{SlicerAstro}$). Although both state-of-the-art packages lack a concrete parallelization strategy, they still have sufficiently fast performance for fitting the data for sources in data volumes up to $10^6$ voxels and for generating a single model for sources representing a data volume of up to $10^8$ voxels (see Section~\ref{UseCaseA}). However, user interaction with the modeling routine in the $\tt{AstroModeling}$ module will greatly benefit from a parallel 3-D tilted-ring model fitting source code. In the following use cases\footnote{In software and systems engineering, a use case is a list of actions or event steps, typically defining the interactions between a role (known in the Unified Modeling Language as an actor) and a system, to achieve a goal. The actor can be a human or other external system.} we will show how the $\tt{Astro}$-$\tt{Modeling}$ module, exploiting 3-D interactive visualization and $\tt{^{\rm3D}\,Barolo}$, helps in the modeling and analysis of complex sources. \subsection{Use Case A: analysis of sources with tidal tails}\label{UseCaseA} Although $\tt{^{\rm3D}\,Barolo}$ is a powerful fitting routine, it is designed to fit models of galaxies with a thin regularly rotating disk.
Therefore, $\tt{^{\rm3D}\,Barolo}$ (or other current tilted-ring modeling algorithms) cannot recognize, for example, tidal tail structures and separate them from the central regularly rotating body of the galaxy. In this section, we show how to use the $\tt{AstroModel}$-$\tt{ing}$ module for the manual quality control of the models. This framework enhances the analysis of gravitationally perturbed galaxies such as WEIN069. In fact, the 3-D selection tool described in Section~\ref{masking} can be used to select a region of interest for which $\tt{^{\rm3D}\,Barolo}$ provides the best results. For example, in the case of WEIN069 the user can separate the two kinematic components, i.e., the regularly rotating disk and the tidal tail, and perform the calculations only on the central disk. \begin{figure}[!ht] \centering \includegraphics[width=0.48\textwidth]{SlicerAstroFig5} \caption{3-D view of WEIN069 rendered in green. It highlights the data at the intensity level equal to 3 times the $rms$. The blue segmentation represents a 3-D selection (see Section~\ref{masking}).} \label{SlicerAstroFig5} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=0.42\textwidth]{SlicerAstroFig5Widgets} \includegraphics[width=0.42\textwidth]{SlicerAstroFig6Widgets} \caption{The interface widgets of the $\tt{AstroModeling}$ module. In the top panel, the interface includes the widgets for selecting a segment that will be used as mask for the modeling and the input parameters for the model fitting. In manual mode one can specify the fitting method (i.e.~the kind of residuals between the model and the data to be minimized) and the weighting function, i.e.~the weights (as a function of angle from the minor axis) given to the residuals before fitting. These weights correct for the effect that the line of sight component of the circular velocity in a rotating disk approaches zero when approaching the minor axis \citep[for more information see][]{bbarolo}.
In the bottom panel, the interface includes a \textit{Contour level} widget to choose the threshold value for the segmentation of the model, an editable table with the parameters of the rings of the output model (see Fig.\ref{SlicerAstroFig6}), and push buttons for updating the model.} \label{SlicerAstroFig5Widgets} \end{figure} \begin{figure*}[!ht] \centering \includegraphics[width=0.89\textwidth]{SlicerAstroFig6} \caption{Comparative layout of the output generated by the $\tt{AstroModeling}$ module. The layout is composed of three 3-D views, three 2-D views and a chart view. The WEIN069 data are shown in the 3-D view and the 2-D views. In the 3-D views the data are rendered in green. Each 3-D view has a different camera position: top-left, viewing direction along the velocity axis; top-right, the origin of the camera is in the center of the data-cube and the view is parallel to the geometrical major axis of the galaxy; middle-left, the viewing direction is along the RA axis. The white labels \textit{z} and \textit{Z} indicate the line of sight velocity (or redshift z) direction (i.e., increasing from \textit{z} to \textit{Z}). The green arrow in the lower right corner of the top right panel points to the plane indicated by the symbol \textit{z}. The middle-right view has plotting capabilities and the different parameters of the rings of the output model can be shown. The bottom views are slices of the data-cube: bottom-left, XY; bottom-middle, XZ; bottom-right, ZY. In the 2-D views the data are displayed with a grayscale color function. The yellow segmentation (in the 3-D and 2-D views) represents the fitted model. The red segmentation (in the 2-D views) is a contour plot of the data. The rendering and the segmentations highlight the data and model at the $rms$ value chosen in the \textit{Contour level} widget (Fig.~\ref{SlicerAstroFig5Widgets}). 
In the chart view, the values of the ring parameters of the fitted model are plotted (it is possible to switch the plots in the chart view menu under the awl widget). The values of the parameters are also reported in the table widget on the $\tt{AstroModeling}$ module window interface (see Fig.~\ref{SlicerAstroFig5Widgets}). The values in the table are editable and can be used to refine the model. } \label{SlicerAstroFig6} \end{figure*} Figs.~\ref{SlicerAstroFig5} and \ref{SlicerAstroFig5Widgets} show in blue a selection of the central body of WEIN069 and the parameters chosen for running the fitting routine in $\tt{^{\rm3D}\,Barolo}$. The fitting results are shown in Figs.~\ref{SlicerAstroFig5Widgets} and \ref{SlicerAstroFig6}: the yellow segmentation, in the 2-D and 3-D views, represents the model, while the green rendering, in the 3-D view, represents the data. The visualization highlights the model and the data at the intensity level chosen in the \textit{contour level} interface widget (in this case three times the value of the $rms$ noise in the input data-cube). The overlay of the segmentation of the model on the 3-D rendering of the data facilitates the inspection of the model. In the case of Fig.~\ref{SlicerAstroFig6}, the horizontal and vertical axes of the third 3-D view (middle left panel) are the velocity and the declination dimensions, respectively. It is immediately clear that the rotation curve of the model in the inner rings does not rise fast enough. User interactions with the 3-D view such as camera zooming and rotation enhance the 3-D perspective giving an even better overview of the differences. On the other hand, for checking the data pixel by pixel (e.g., for data probing) it is better to use a two-dimensional representation. 
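Such a two-dimensional representation is simply an orthogonal slice of the cube. Assuming axes ordered as (x, y, z) with z the velocity axis, the XY, XZ and ZY planes of the bottom views can be sketched as (function name illustrative):

```python
# Minimal sketch of extracting the three orthogonal 2-D slices (XY, XZ, ZY)
# used for pixel-by-pixel inspection of a data-cube, assuming axes (x, y, z)
# with z the velocity axis.
import numpy as np

def orthogonal_slices(cube, x, y, z):
    """Return the XY, XZ and ZY planes through voxel (x, y, z)."""
    return cube[:, :, z], cube[:, y, :], cube[x, :, :].T

cube = np.arange(3 * 4 * 5, dtype=float).reshape(3, 4, 5)
xy, xz, zy = orthogonal_slices(cube, x=1, y=2, z=3)
```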
\begin{figure*}[!ht] \centering \includegraphics[width=1.\textwidth]{SlicerAstroFig7} \caption{Illustration of the H\,{\scriptsize I} data and model, fitted by $\tt{^{\rm3D}\,Barolo}$, of NGC2403 from the THINGS survey \citep{Walter}. The galaxy is very well resolved. The comparative layout is the same as used in Fig.~\ref{SlicerAstroFig6}. The top-left view is a 3-D view of NGC2403. The white labels represent the four cardinal directions (\textit{N}, \textit{S}, \textit{E}, \textit{W}). The green arrow points along the line of sight. The white segmentation (in the 3-D view) represents the model that fits the regular disk. The model has been fitted in \textit{automatic} mode (i.e., no mask and no input parameters have been provided to $\tt{^{\rm3D}\,Barolo}$ from the graphical user interface). The dark green rendering of the data, from an intensity level of 3 times the $rms$, clearly shows unsettled gas in the inner region. The top-right view has plotting capabilities and the different parameters of the rings of the output model can be shown. The bottom views are slices of the data-cube: bottom-left, XY; bottom-middle, XZ; bottom-right, ZY. The blue and the pink segmentations (in the 2-D views) are contours of the data and model, respectively, at the $rms$ value chosen in the \textit{Contour level} widget (see Fig.~\ref{SlicerAstroFig5Widgets}). } \label{SlicerAstroFig7} \end{figure*} Finally, the $\tt{AstroModeling}$ module provides a table widget in the interface (see Fig.~\ref{SlicerAstroFig5Widgets}) that can be used to refine the model and update the visualization. All the ring parameters of the model are available in the table. The refining process of the output model is crucial. In fact, the tilted-ring model fitting is a process with a high degree of degeneracy between the parameters, and the fitting results strongly depend on the value of the initial parameters, especially for the inclination. 
Therefore, the models must be carefully checked, compared with the data, and refined. The computational time needed by $\tt{^{\rm3D}\,Barolo}$ to fit the data depends on several factors: the number of voxels, the number of rings of the model and the \textit{goodness} (i.e., whether the error in the estimate is $< 10\%$) of the initial parameters. However, it is not possible to provide unbiased benchmarks for a fitting routine (i.e., the performance highly depends on the input parameters). To give an example, the time for fitting a source extended up to $10^6$ voxels, using 20 rings and having a reliable estimation of the input parameters is $\sim 2.5$ min (exploiting 1 CPU core at 2.60 GHz). On the other hand, once the fitting has been performed, recalculating a single new model with the same size in voxels and number of rings takes less than 2 seconds (the process includes getting the model from $\tt{^{\rm3D}\,Barolo}$ and creating the 3-D segmentation in $\tt{SlicerAstro}$ as well). The computational complexity of this second step is $O(Nr)$, where $N$ is the number of voxels of the source and $r$ the number of rings. Despite the fact that the framework is not interactive, it is still fast enough to provide a powerful tool to refine models and compare them with the data. \subsection{Use Case B: finding anomalous velocity gas}\label{UseCaseB} It has been demonstrated that the gas distribution of some spiral galaxies \citep[e.g., NGC2403;][]{Fraternali} is not composed of just a cold \textit{regular} thin disk. Stellar winds and supernovae can produce extra-planar gas \citep[e.g., a galactic fountain;][]{Bregman}. In this case, modeling is used to constrain the 3-D structure and kinematics of the extra-planar gas which is visible in the data as a faint kinematic component in addition to the disk. The $\tt{AstroModeling}$ module uses the output model of $\tt{^{\rm3D}\,Barolo}$ for visually highlighting the different components in the data-cube. 
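The highlighting itself amounts to a Boolean comparison between the data and model cubes. The following schematic sketch uses synthetic arrays standing in for the data and the fitted model, with thresholds at $3\,rms$ mirroring the contour levels used in the figures (function name and toy values are illustrative):

```python
# Schematic sketch of using a fitted model to highlight anomalous gas: voxels
# with significant signal in the data that are NOT covered by the symmetric,
# regularly rotating model cube are candidates for extra-planar or otherwise
# unsettled gas.
import numpy as np

def anomalous_gas_mask(data, model, rms, n_sigma=3.0):
    """Boolean mask of voxels detected in the data but absent from the model."""
    detected = data > n_sigma * rms
    in_model = model > n_sigma * rms
    return detected & ~in_model

# Toy cubes: the model reproduces the disk but misses a faint extra component.
rms = 0.1
data = np.zeros((16, 16, 16))
model = np.zeros_like(data)
data[4:12, 4:12, 6:10] = 2.0   # regularly rotating disk ...
model[4:12, 4:12, 6:10] = 2.0  # ... reproduced by the model
data[4:12, 4:12, 2:4] = 0.5    # anomalous-velocity gas, not in the model
mask = anomalous_gas_mask(data, model, rms)
```

Voxels flagged by the mask are detected in the data but not reproduced by the symmetric disk model, and are thus candidates for anomalous-velocity or extra-planar gas.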
After visualizing the model of the symmetric cold thin disk as a segmentation, it is immediately possible to locate any unusual features in the data-cube of interest and already get an idea of their properties, thus directing further modeling. For example, a model of the extra-planar gas above or below the disk with a slower rotation and a vertical motion provides quantitative information about the rotation and the infall velocity of such gas. In Fig.~\ref{SlicerAstroFig7} we show as an example the analysis that we performed on NGC2403. The input parameters for the fitting have not been edited; therefore, $\tt{^{\rm3D}\,Barolo}$ performed an automatic estimation of the initial parameters. In the 3-D view $\tt{SlicerAstro}$ illustrates the data of the NGC2403 observations rendered in green and the tilted-ring model generated with $\tt{^{\rm3D}\,Barolo}$ as a white segmentation. The white segmentation is rendered with the maximum opacity in order to obscure all the data that have been fitted by $\tt{^{\rm3D}\,Barolo}$. This combination gives an immediate overview of the extra-planar gas present in the NGC2403 observations \citep{Fraternali}. Since $\tt{^{\rm3D}\,Barolo}$ mostly fits the symmetric regularly rotating part of the galaxy, it is a powerful tool for locating anomalous features in the data, such as the extra-planar gas in NGC2403. \section{Summary}\label{conclusion} $\tt{SlicerAstro}$ is an open-source project and its binaries are currently available in the extensions manager of $\tt{3DSlicer}$\footnote{The user guide is available at the following link: \url{https://github.com/Punzo/SlicerAstro/wiki\#get-slicerastro}}. The novelty of $\tt{SlicerAstro}$ over traditional astronomical viewers is the use of 3-D interactive tools for the visualization and analysis of H\,{\small I} in and around galaxies.
$\tt{SlicerAstro}$ has been designed with a strong, stable and modular {C\nolinebreak[4]\hspace{-.05em}\raisebox{.4ex}{\tiny\bf ++}}\; core, but it can also be used via $\tt{Python}$ scripting, allowing great flexibility for user-customized visualization and analysis tasks (see Section~\ref{design}). Although $\tt{SlicerAstro}$ is still under development, it already offers several new qualitative and quantitative visualization and analysis features for the inspection of H\,{\small I} and other spectral line data. The overall advantage of $\tt{SlicerAstro}$ compared to traditional viewers \citep[e.g., $\tt{KARMA}$, $\tt{Casaviewer}$ and $\tt{VISIONS}$;][]{Karma, CASA, Gipsy} is that it bundles analytical operations such as smoothing and modeling with the visualization. These visual analytics techniques enhance the visualization itself. More important, in our view, is the interactivity offered by $\tt{SlicerAstro}$. Interactivity is key to enhancing the inspection and analysis of complex datasets. In fact, it is precisely the interactive and coupled 3-D/2-D visualization aspects (e.g., volume rendering, navigation, changing color/opacity function, selecting regions of interest in the 3-D space) that are (partially) missing in the traditional tools and that disclose powerful visual analytics capabilities. In Section~\ref{framework}, we presented the main module, $\tt{Astro}$-$\tt{Volume}$. This module provides: a user interface for loading and writing FITS files; the display of astronomical World Coordinates; control of 2-D and 3-D color transfer functions; MRML nodes for storing the data; and data conversion tools for masks and $\tt{3DSlicer}$ segmentation objects. Fig.~\ref{SlicerAstroFig1} showed how 3-D visualization gives an immediate overview of the H\,{\small I} emission in and around WEIN069 \citep[i.e.~three interacting galaxies and a tidal tail;][]{wein,Mpati} and the mask generated by automated source finder pipelines such as $\tt{SoFiA}$ or $\tt{Duchamp}$.
3-D visualization highly enhances and accelerates the inspection of the data and of the masks, allowing efficient manual quality control of part (i.e., complex galaxies or groups of galaxies) of the large data sets that will be provided by the SKA precursors. In addition, we presented the $\tt{AstroSmoothing}$ module in Section~\ref{filtering}. Fig.~\ref{SlicerAstroFig2} showed the filtered version of WEIN069, obtained with a newly implemented intensity-driven gradient filter \citep{Punzo2015}. The 3-D visualization immediately highlights the presence of a faint filament between two galaxies that was hardly visible in the original data-cube. The coupling between the interactive smoothing algorithms (available in the parallelized version both on CPUs and GPUs) and the 3-D visualization allows for a detailed inspection of the result and a manual, iterative search for the best smoothing parameters for maximally enhancing the local signal-to-noise ratio of the very faint signal. Moreover, we introduced the $\tt{AstroCloudLasso}$ selection tool in Section~\ref{masking}. This is a 3-D interactive selection tool \citep{Yu1}, optimized for H\,{\small I} data, added by $\tt{Slicer}$-$\tt{Astro}$ to the $\tt{SegmentationEditor}$ of $\tt{3DSlicer}$. We showed how to use this tool to create and modify segmentation objects in the 3-D views (Fig.~\ref{SlicerAstroFig3}). The tool can also be used in the 2-D views for a 2-D selection. $\tt{CloudLasso}$ is an intuitive and efficient 3-D selection method, which is crucial for allowing manual modification of masks generated automatically by source finder pipelines (e.g., adding very faint signal missed by automated pipelines). A second application of the tool is to select a region of interest (ROI). The ROI can subsequently be used to perform calculations, such as tilted-ring model fitting, in the selection.
In Section~\ref{modeling}, we demonstrated that 3-D visualization, coupled to modeling, provides additional capabilities that help the discovery and analysis of subtle structures in the 3-D domain. We integrated $\tt{^{\rm3D}\,Barolo}$, a tilted-ring model fitting package, in $\tt{SlicerAstro}$, providing an interface to set the input parameters for the fitting (Fig.~\ref{SlicerAstroFig5}). Moreover, the interface includes a widget for editing the ring parameters of the output model, for recalculating and visualizing the model on top of the data. We also showed that 3-D is a powerful tool not only to provide a region of interest for the calculations, but also for the inspection of the model (Fig.~\ref{SlicerAstroFig6}) and the data not fitted by the model (e.g.~extra-planar gas in NGC2403, Fig.~\ref{SlicerAstroFig7}). The efficiency and the effectiveness of the visual analytics techniques implemented in $\tt{SlicerAstro}$ have been tested. Quantifying the results for the efficiency of the modeling capabilities in $\tt{SlicerAstro}$ is not straightforward as the speed depends on the size of the data-cube and on the input parameters. However, even in the case of a moderately large and well resolved object such as NGC2403 (dimension $\sim 1.4 \times 10^6$ voxels), the model fitting is performed in less than 2 minutes. In addition, manually modifying the parameters of the output model is interactive. The 3-D smoothing algorithms in $\tt{SlicerAstro}$ have interactive performance. An adaptive smoothing operation on the NGC2403 data-cube is performed in less than 0.1 seconds, exploiting the computing power of a GPU (i.e., GeForce GTX860M). The main advantage of $\tt{SlicerAstro}$ is that the combination of these smoothing and modeling capabilities with \textit{interactive} 3-D visualization provides an immediate overview of all the coherent 3-D structures of the data, masks and models.
This is very powerful and definitely increases the effectiveness of the visualization and, thus, the efficiency of astronomers in the manual exploration of many datasets. We conclude that interactive 3-D quantitative and comparative visualization, 3-D user interaction and analysis capabilities available in $\tt{SlicerAstro}$ form an effective new tool that will boost, in terms of both efficiency and quality, the analysis of complex sources in the context of large data-flows that will be provided by the SKA precursors. However, in order to fulfill all the visualization requirements defined in Section~\ref{design} \citep[and extensively discussed in][]{Punzo2015}, some quantitative features still have to be incorporated in $\tt{SlicerAstro}$. For example, a tool displaying the histogram of the flux intensities of the data-cube will greatly help the user in setting the 2-D color function. The capability to display flux density profiles (i.e.~linked 1-D visualization) is also necessary, especially when dealing with unresolved sources. In addition, capabilities for overlaying (in an automated way) other datasets (including datasets with different grids and projection systems) will enhance the inspection of multi-waveband datasets and are under development. Furthermore, a dedicated tool in $\tt{SlicerAstro}$ for easily displaying position-velocity (P-V) diagrams will improve the inspection and comparison of models. Specialized analysis tasks on 3-D selections (e.g., calculating statistics, moment maps, etc., in regions of interest) can be performed by running scripts in the $\tt{3DSlicer}$ $\tt{Python}$ console. On the other hand, customized quantitative tasks can also be added as core modules in $\tt{SlicerAstro}$, similar to the implementation of the $\tt{AstroModeling}$ module. These capabilities will be integrated in future updates of $\tt{SlicerAstro}$.
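As an example of such a scripted analysis task, the sketch below computes moment maps restricted to a 3-D selection with plain $\tt{numpy}$. Inside the $\tt{3DSlicer}$ $\tt{Python}$ console the arrays would be taken from the scene (e.g., via \texttt{slicer.util.arrayFromVolume}); here they are synthetic, and the function name is illustrative:

```python
# Moment maps of a data-cube restricted to a 3-D selection (boolean mask):
# moment 0 is the integrated flux along the spectral axis, moment 1 the
# intensity-weighted mean velocity.
import numpy as np

def moment_maps(cube, mask, velocities):
    """Moments 0 and 1 along the spectral (last) axis, inside the mask only."""
    selected = np.where(mask, cube, 0.0)
    mom0 = selected.sum(axis=2)
    # Pixels with no selected signal give 0/0 -> NaN; suppress the warnings.
    with np.errstate(invalid="ignore", divide="ignore"):
        mom1 = (selected * velocities).sum(axis=2) / mom0
    return mom0, mom1

nv = 8
velocities = np.linspace(1000.0, 1070.0, nv)  # one velocity (km/s) per channel
cube = np.zeros((4, 4, nv))
cube[1, 1, 2] = 1.0   # channel at 1020 km/s
cube[1, 1, 3] = 3.0   # channel at 1030 km/s
mask = cube > 0.5     # a trivial 3-D selection
mom0, mom1 = moment_maps(cube, mask, velocities)
```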
The implementation of VO interoperability and the advantages of such connectivity will be considered and analyzed further in the case of $\tt{SlicerAstro}$. In fact, the SAMP protocol and the FITS format are no longer globally accepted standards \citep{Fits1,Fits2,Fits3}. Other scientific fields such as medical imaging can provide insights on how to improve the astronomical standards. For example, the Digital Imaging and Communications in Medicine (DICOM) \citep{dicom} protocol is a remarkable example of a standard for handling, storing, printing, and transmitting information in medical imaging. DICOM includes a file format definition and a network communications protocol universally accepted and used by the medical scientific community. $\tt{SlicerAstro}$ is a project under continuous development and we have adopted an agile development approach (i.e.~development cycles are driven by user feedback). In addition, the software is open-source and third parties are encouraged to contribute. More importantly, any idea, feedback, criticism or bug can be reported in the issue tracker at the following link\footnote{\url{https://github.com/Punzo/SlicerAstro/issues}}. Finally, although the development of $\tt{SlicerAstro}$ thus far mainly focused on 3-D H\,{\small I} data, it will also be a useful tool for any other type of 3-D astronomical data such as mm/submm molecular line data and optical integral field spectroscopic data. Molecular line data and optical/NIR spectroscopic data have the additional complication that often more than one spectral line is present in a single spectral window. This makes the visualization more complex, though clever stacking of the known spectral lines can e.g.~enhance the signal-to-noise ratio and stacking tools can help to bring out the kinematic behavior of gas which emits multiple spectral lines. One can also think of additional tools to visualize line ratios by e.g.~superposing different spectral lines in different colors interactively.
In conclusion, though $\tt{SlicerAstro}$ is useful for other types of astronomical 3-D data, additional tools will be required, tailored to the kind of data and the scientific questions to be addressed. \section{Appendix}\label{Appendix} Below, the $\tt{Python}$ code provides an example for applying smoothing to a data-cube, performing the rendering and saving the latter as a video. The script can be copied and pasted into the $\tt{3DSlicer}$ $\tt{Python}$ console, or it can be launched from the command line with the following command: \begin{lstlisting}[language=bash]
./Slicer --python-script script.py
\end{lstlisting} More information is provided at the following link: \url{https://github.com/Punzo/SlicerAstro/wiki}. \begin{python}
# Load a data-cube in SlicerAstro
slicer.util.loadVolume("/full_path/WEIN069.fits", {"center":True})
mw = slicer.util.mainWindow()
ms = mw.moduleSelector()

# Smooth the data-cube in automatic mode (CPU)
ms.selectModule('AstroSmoothing')
smowidget = slicer.modules.astrosmoothing.widgetRepresentation()
smowidget.onApply()

# Set up the rendering for the data-cube and its filtered version
ms.selectModule('AstroVolume')
astrovolumewidget = slicer.modules.astrovolume.widgetRepresentation()
astrovolumewidget.onCurrentQualityControlChanged(1)
volumes = slicer.mrmlScene.GetNodesByClass("vtkMRMLAstroVolumeNode")
volumefiltered = volumes.GetItemAsObject(1)
# Retrieve the AstroSmoothing parameters node from the scene
smomrmlpara = slicer.mrmlScene.GetNodesByClass(
    "vtkMRMLAstroSmoothingParametersNode").GetItemAsObject(0)
smomrmlpara.SetInputVolumeNodeID(volumefiltered.GetID())
astrovolumewidget.onCurrentQualityControlChanged(1)

# Create videos
ms.selectModule('ScreenCapture')
screencapturewidget = slicer.modules.screencapture.widgetRepresentation()
instance = screencapturewidget.self()

# For the data-cube
viewNode = slicer.util.getNode('vtkMRMLViewNode1')
instance.viewNodeSelector.setCurrentNode(viewNode)
instance.numberOfStepsSliderWidget.setValue(360)
instance.videoExportCheckBox.setChecked(1)
instance.videoFormatWidget.setCurrentIndex(1)
instance.videoFileNameWidget.setText("WEIN069.mp4")
instance.videoLengthSliderWidget.setValue(6)
instance.onCaptureButton()

# For the filtered version
viewNode = slicer.util.getNode('vtkMRMLViewNode2')
instance.viewNodeSelector.setCurrentNode(viewNode)
instance.numberOfStepsSliderWidget.setValue(360)
instance.videoExportCheckBox.setChecked(1)
instance.videoFormatWidget.setCurrentIndex(1)
instance.videoFileNameWidget.setText("WEIN069_smoothed.mp4")
instance.videoLengthSliderWidget.setValue(6)
instance.onCaptureButton()
\end{python} \section{Acknowledgments} We thank M.A. Ramatsoku and M.A.W. Verheijen for providing us with the H\,{\small I} data of WEIN069. Support also came from S. Pieper (Isomics, Inc.), A. Lasso (Laboratory of Percutaneous Surgery at Queen's University) and E. di Teodoro (Australian National University) in the form of feedback and assistance. Finally, we thank the reviewers for their constructive comments, which helped us to improve the paper. D. Punzo and J.M. van der Hulst acknowledge support from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013)/ERC Grant Agreement nr. 291-531. L. Yu was partially supported by the National Natural Science Foundation of China (Grant No. 61502132). We are grateful to the various agencies and programs that funded support and development of $\tt{3DSlicer}$ over the years. \section{References} \bibliographystyle{elsart-num-names}
\section{Introduction} We present a new completely elementary model which describes creation, annihilation and motion of non-interacting electrons and positrons along a line (see Definitions~\ref{def-anti-alg}, \ref{def-anti-combi}, and~\ref{def-multipoint}). It is a modification of the model known as Feynman checkers, or one-dimensional quantum walk, or Ising model at imaginary temperature (see Definition~\ref{def-mass} and surveys~\cite{Kempe-09,Konno-20,SU-22,Venegas-Andraca-12}). This modification preserves known identities (see \S\ref{ssec-identities}) and the Fourier integral representation (see Proposition~\ref{th-equivalence}) but adds antiparticles (and thus is called \emph{Feynman anticheckers}). The discrete model is consistent with the continuum quantum field theory, namely, it reproduces the known expected charge density as the lattice step tends to zero (see Figure~\ref{fig-approx-q} and Corollary~\ref{cor-anti-uniform}). It is exactly solvable via hypergeometric functions (see Proposition~\ref{p-mass5}) and is described asymptotically by Bessel and trigonometric functions (see Theorems~\ref{th-continuum-limit}--\ref{th-anti-ergenium}). We introduce an interaction resembling Fermi theory and get a perturbation expansion (see Definition~\ref{def-fermi} and Proposition~\ref{p-perturbation}). \subsection{Background} One of the main open problems in mathematics is a rigorous definition of quantum field theory. For instance, the case of $4$-dimensional Yang--Mills theory is a Millennium Problem. A promising approach to the problem is \emph{constructive field theory}, which constructs a continuum theory as a limit of discrete ones \cite{Glimm-Jaffe-12}. This leads to the \emph{consistency} question of whether the discrete objects indeed approximate the desired continuum ones. Constructive field theory is actually as old as quantum field theory itself.
The most elementary model of electron motion known as \emph{Feynman checkers} or \emph{quantum walk} was introduced by R.~Feynman in the 1940s and first published in 1965 \cite{Feynman-Gibbs}. Consistency with continuum quantum mechanics was posed as a problem there \cite[Problem~2.6]{Feynman-Gibbs}; it was solved mathematically only recently \cite{SU-22}. See also surveys~\cite{Kempe-09,Konno-20,SU-22,Venegas-Andraca-12} on Feynman's model and its generalizations. In the 1970s F.~Wegner and K.~Wilson introduced \emph{lattice gauge theory} as a computational tool for gauge theory describing all known interactions (except gravity); see \cite{Maldacena-16} for a popular-science introduction. This theory is \emph{Euclidean} in the sense that it involves \emph{imaginary} time. Euclidean lattice field theory became one of the main computational tools \cite{Rothe-12} and culminated in determining the proton mass theoretically with an error of less than $2\%$ in a certain sense. Methods were developed to establish consistency, such as the \emph{Reisz power counting theorem} \cite[\S13.3]{Rothe-12}. This led to rigorous constructions of some field theories in dimensions 2 and 3 \cite{Glimm-Jaffe-12}. Euclidean theories are related to statistical physics via the equivalence between imaginary time and temperature \cite[\S V.2]{Zee-10}. For instance, Feynman checkers can be viewed as an Ising model at imaginary temperature \cite[\S2.2]{SU-22}, whereas the \emph{Euclidean} theory of an electron is an Ising model at real temperature. S.~Smirnov and coauthors proved consistency for a wide class of $2$-dimensional models including the Ising one and some loop $O(n)$ models \cite{Smirnov-10}. Euclidean theories involving electrons suffer from \emph{fermion doubling}, which is unavoidable by the \emph{Nielsen--Ninomiya no-go theorem} and often makes them inconsistent \cite[\S4.2]{Rothe-12}.
A new promising direction is \emph{Minkowskian} lattice quantum field theory, where time is \emph{real} and fermion doubling is possibly avoided \cite[\S4.1.1]{Foster-Jacobson-17} (cf.~\emph{staggered fermions} \cite[\S4.4]{Rothe-12}). Feynman checkers is a reasonable starting point. It was an old dream to incorporate creation and annihilation of electron-positron pairs in it (see \cite[p.~481--483]{Schweber-86}, \cite{Jacobson-85}), celebrating a passage from quantum mechanics to quantum field theory (\emph{second quantization}). One looked for a combinatorial model reproducing the \emph{Feynman propagator}~\eqref{eq-feynman-propagator} in the continuum limit. Known constructions (such as the \emph{hopping expansion} \cite[\S12]{Rothe-12}) did not lead to the Feynman propagator because of fermion doubling. In the \emph{massless} case, a \emph{non-combinatorial} construction was given by C.~Bender--L.~Mead--K.~Milton--D.~Sharp in \cite[\S9F]{Bender-etal-94} and~\cite[\S IV]{Bender-etal-85}. The desired combinatorial construction is finally given in this paper (realizing two steps of the program outlined in \cite[\S\S8--9]{SU-22}). It follows a classical approach known from the Kirchhoff matrix-tree theorem and the Kasteleyn and Kenyon theorems \cite{Levine-11,Kenyon-02}. In this approach, a physical system (an electrical network, a moving electron, etc.) is described by a difference equation on the lattice (the lattice Laplace equation, the lattice Dirac equation from Feynman checkers, etc.). The solution is expressed through determinants, interpreted combinatorially via loop expansion \cite[\S12.3]{Rothe-12}, and computed explicitly by Fourier transform. In our setup, the solution is not unique and has to be \emph{regularized} first by introducing a small imaginary mass ``living'' on the dual lattice. \subsection{Organization of the paper} In \S\ref{sec-statements} we define the new model and list its properties.
In \S\ref{sec-variations} we discuss its generalizations and in \S\ref{sec-proofs} we give proofs. In Appendices~\ref{ssec-proofs-alternative} and~\ref{sec-axioms} we give alternative definitions/proofs and put the model in the general framework of quantum field theory, respectively. The paper assumes no background in physics. The definitions in \S\ref{sec-statements}--\ref{sec-variations} are completely elementary (in particular, we use neither Hilbert spaces nor Grassmann variables). The paper is written at a mathematical level of rigor, in the sense that all the definitions, conventions, and theorems (including corollaries, propositions, lemmas) should be understood literally. Theorems remain true even if cut out from the text. The proofs of theorems use the statements but not the proofs of the other ones. Most statements are much less technical than the proofs; hence the proofs are kept in a separate section (\S\ref{sec-proofs}) and long computations are kept in~\cite{SU-3}. Remarks are informal and usually not used elsewhere (hence skippable). Text outside definitions, theorems, and proofs is not used formally either. \addcontentsline{toc}{myshrink}{} \section{Feynman anticheckers: statements} \label{sec-statements} \subsection{Construction outline} \label{ssec-outline} Let us recall the definition of Feynman's original model, and then outline how it is modified. \begin{definition} \label{def-mass} Fix $\varepsilon>0$ and $m\ge 0$ called \emph{lattice step} and \emph{particle mass} respectively. Consider the lattice $\varepsilon\mathbb{Z}^2 =\{\,(x,t):x/\varepsilon,t/\varepsilon\in\mathbb{Z}\,\}$ (see Figure~\ref{fig-paths} to the left). A \emph{checker path} $s$ is a finite sequence of lattice points such that the vector from each point (except the last one) to the next one equals either $(\varepsilon,\varepsilon)$ or $(-\varepsilon,\varepsilon)$.
Denote by $\mathrm{turns}(s)$ the number of points in $s$ (not the first and not the last one) such that the vectors from the point to the next and to the previous ones are orthogonal. To the path $s$, assign the complex number $$ a(s):=(1+m^2\varepsilon^2)^{1-n/2}\,i (-im\varepsilon)^{\mathrm{turns}(s)}, $$ where $n$ is the total number of points in $s$. For each $(x,t)\in\varepsilon\mathbb{Z}^2$, where $t>0$, denote by \begin{equation*} {a}(x,t,m,\varepsilon) :=\sum_s a(s) \end{equation*} the sum over all checker paths $s$ from $(0,0)$ to $(x,t)$ containing $(\varepsilon,\varepsilon)$. An empty sum is set to be zero by definition. Denote $$ a_1(x,t,m,\varepsilon):=\mathrm{Re}\,a(x,t,m,\varepsilon), \quad a_2(x,t,m,\varepsilon):=\mathrm{Im}\,a(x,t,m,\varepsilon). $$ \end{definition} \begin{physicalinterpretation*} One interprets $|{a}(x,t,m,\varepsilon)|^2$ as the probability to find an electron of mass $m$ in the interval of length $\varepsilon$ around the point $x$ at the time $t>0$, if the electron was emitted from the origin at the time $0$. Hereafter we work in a natural system of units where $\hbar=c=e=1$ (setting all those constants to $1$ is possible for vacuum permittivity $\varepsilon_0\ne 1$). \end{physicalinterpretation*} \begin{figure}[htbp] \centering \includegraphics[height=2.4cm]{path-v3.png} \qquad \includegraphics[height=2.4cm]{generalized-path-v3.png} \caption{A checker path (left). A generalized checker path (right).
See Definitions~\ref{def-mass} and~\ref{def-anti-combi}.} \label{fig-paths} \end{figure} We have the following recurrence relation called the \emph{lattice Dirac equation} \cite[Proposition~5]{SU-22}: \begin{align*} a_1(x,t,m, \varepsilon) &= \frac{1}{\sqrt{1+m^2\varepsilon^2}} (a_1(x+\varepsilon,t-\varepsilon,m, \varepsilon) + m \varepsilon\, a_2(x+\varepsilon,t-\varepsilon,m, \varepsilon)),\\ a_2(x,t,m, \varepsilon) &= \frac{1}{\sqrt{1+m^2\varepsilon^2}} (a_2(x-\varepsilon,t-\varepsilon,m, \varepsilon) - m \varepsilon\, a_1(x-\varepsilon,t-\varepsilon,m, \varepsilon)). \end{align*} Informally, the new model is obtained by the following modification of the lattice Dirac equation: \begin{description} \item[Step 0:] the functions $a_1$ and $a_2$ are extended to the \emph{dual} lattice, shifts by $\pm\varepsilon$ are replaced by shifts by $\pm\varepsilon/2$ in their arguments, and a term vanishing outside the origin is added; \item[Step 1:] the particle mass acquires a small imaginary part which we eventually tend to zero; \item[Step 2:] on the dual lattice, the mass is replaced by its imaginary part. \end{description} This makes the lattice Dirac equation uniquely solvable in $L^2$, and the solution is quite different from $(a_1,a_2)$: we get two complex functions rather than components of one complex function. The elementary combinatorial definition (see Definition~\ref{def-anti-combi}) is obtained from Feynman's one (see Definition~\ref{def-mass}) by a slightly more involved modification starting with the same Step~1: \begin{description} \item[Step 2${}'$:] just like the real mass is ``responsible'' for turns at the points of the lattice, the imaginary one allows turns at the points of the \emph{dual} lattice (see Figure~\ref{fig-paths} to the right); \item[Step 3${}'$:] the infinite lattice is replaced by a torus with the size eventually tending to infinity; \item[Step 4${}'$:] the sum over checker paths is replaced by a ratio of sums over loop configurations.
\end{description} The resulting loops are interpreted as the \emph{Dirac sea} of electrons filling the whole space, and the edges not in the loops form paths of holes in the sea, that is, positrons. In this informal outline, Steps~2 and~2${}'$ are completely new whereas the other ones are standard. The former reflect a general principle that the real and the imaginary part of a quantity should always be put on dual lattices. Thus in what follows we consider a new lattice which is the disjoint union of the initial lattice $\varepsilon\mathbb{Z}^2$ and its dual; the latter become sublattices. \vspace{-0.3cm} \subsection{Axiomatic definition} \label{ssec-alg-def} \begin{definition} \label{def-anti-alg} Fix $\varepsilon,m,\delta>0$ called \emph{lattice step, particle mass}, and \emph{small imaginary mass} respectively. Assume $\delta<1$. For two elements $x,y$ of the same set denote \begin{equation*} \delta_{xy}:= \begin{cases} 1, &\text{for }x=y,\\ 0, &\text{for }x\ne y. \end{cases} \end{equation*} Define a pair of complex-valued functions $A_k(x,t)=A_k(x,t,m,\varepsilon,\delta)$, where $k\in\{1,2\}$, on the set $\{\,(x,t)\in\mathbb{R}^2: 2x/\varepsilon,2t/\varepsilon,(x+t)/\varepsilon\in\mathbb{Z}\,\}$ by the following $3$ conditions: \begin{description} \item[axiom~1:] for each $(x,t)$ with $2x/\varepsilon$ and $2t/\varepsilon$ even, \begin{equation*} \begin{aligned} A_1(x,t) &= \frac{1}{\sqrt{1+m^2\varepsilon^2}} \left(A_1\left(x+\frac{\varepsilon}{2},t-\frac{\varepsilon}{2}\right) + m \varepsilon\, A_2\left(x+\frac{\varepsilon}{2},t-\frac{\varepsilon}{2}\right)\right),\\ A_2(x,t) &= \frac{1}{\sqrt{1+m^2\varepsilon^2}} \left(A_2\left(x-\frac{\varepsilon}{2},t-\frac{\varepsilon}{2}\right) - m \varepsilon\, A_1\left(x-\frac{\varepsilon}{2},t-\frac{\varepsilon}{2}\right)\right) +2\delta_{x0}\delta_{t0}; \end{aligned} \end{equation*} \item[axiom~2:] for each $(x,t)$ with $2x/\varepsilon$ and $2t/\varepsilon$ odd, \begin{equation*} \begin{aligned} A_1(x,t) &=
\frac{1}{\sqrt{1-\delta^2}} \left(A_1\left(x+\frac{\varepsilon}{2},t-\frac{\varepsilon}{2}\right) -i\delta\, A_2\left(x+\frac{\varepsilon}{2},t-\frac{\varepsilon}{2}\right)\right),\\ A_2(x,t) &= \frac{1}{\sqrt{1-\delta^2}} \left(A_2\left(x-\frac{\varepsilon}{2},t-\frac{\varepsilon}{2}\right) +i\delta\, A_1\left(x-\frac{\varepsilon}{2},t-\frac{\varepsilon}{2}\right)\right); \end{aligned} \end{equation*} \item[axiom~3:] $\sum_{(x,t)\in\varepsilon\mathbb{Z}^2} \left(\left|A_1(x,t)\right|^2+\left|A_2(x,t)\right|^2\right)<\infty.$ \end{description} For each $k\in\{1,2\}$ and $(x,t)\in\varepsilon\mathbb{Z}^2$ define the \emph{lattice propagator} to be the limit \begin{equation}\label{eq-def-anti-alg} \widetilde{A}_k(x,t):=\widetilde{A}_k(x,t,m,\varepsilon) :=\lim_{\delta\searrow0} A_k(x,t,m,\varepsilon,\delta). \end{equation} \end{definition} \begin{theorem}[Consistency of the axioms and concordance to Feynman's model] \label{th-well-defined} The functions $A_k(x,t,m,\varepsilon,\delta)$ and the lattice propagator $\widetilde{A}_k(x,t,m,\varepsilon)$ are well-defined, that is, there exists a unique pair of functions satisfying axioms~1--3, and limit~\eqref{eq-def-anti-alg} exists for each $(x,t)\in\varepsilon \mathbb{Z}^2$ and $k\in\{1,2\}$. For $(x+t)/\varepsilon+k$ even, the limit is real and given by \begin{align*} \widetilde{A}_1(x,t,m,\varepsilon) &=a_1(x,|t|+\varepsilon,m,\varepsilon), &&\text{for $(x+t)/\varepsilon$ odd,}\\ \widetilde{A}_2(x,t,m,\varepsilon)&= \pm a_2(\pm x+\varepsilon,|t|+\varepsilon,m,\varepsilon), &&\text{for $(x+t)/\varepsilon$ even,} \end{align*} where the minus signs are taken when $t<0$. For $(x+t)/\varepsilon+k$ odd, limit~\eqref{eq-def-anti-alg} is purely imaginary. \end{theorem} Once again we see that the real and imaginary parts live on dual sublattices.
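For intuition, the recurrence of axiom~1 without the source term is (up to halved shifts) the lattice Dirac equation of Definition~\ref{def-mass}, and the latter is easy to iterate numerically. The following sketch (the function name and array indexing are ours; index $j$ corresponds to $x=(j-t/\varepsilon)\varepsilon$) starts from $a_2(\varepsilon,\varepsilon)=1$, i.e. the first move to $(\varepsilon,\varepsilon)$:

```python
import numpy as np

def feynman_checkers(t_steps, m=1.0, eps=1.0):
    """Iterate the lattice Dirac equation for a_1, a_2 of Feynman's
    original model up to time t = t_steps*eps.

    Index j of the returned arrays corresponds to x = (j - t_steps)*eps;
    the walk starts with the first move to (eps, eps), where a(eps,eps) = i.
    """
    norm = 1.0 / np.sqrt(1.0 + (m * eps) ** 2)
    size = 2 * t_steps + 1
    a1, a2 = np.zeros(size), np.zeros(size)
    a2[t_steps + 1] = 1.0                              # a_2(eps, eps) = 1
    for _ in range(t_steps - 1):
        b1, b2 = np.zeros(size), np.zeros(size)
        b1[:-1] = norm * (a1[1:] + m * eps * a2[1:])   # uses (x+eps, t-eps)
        b2[1:] = norm * (a2[:-1] - m * eps * a1[:-1])  # uses (x-eps, t-eps)
        a1, a2 = b1, b2
    return a1, a2
```

Since the one-step transformation is unitary, $\sum_x\bigl(a_1^2+a_2^2\bigr)=1$ at every step; for $m=\varepsilon=1$ and $t=2$ the iteration reproduces $a_1(0,2,1,1)=a_2(2,2,1,1)=1/\sqrt{2}$, which one can also read off the two two-step paths.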
\begin{physicalinterpretation*} One interprets \begin{equation}\label{eq-q} Q\left(x,t,m,\varepsilon\right):=\frac{1}{2}\left|\widetilde{A}_1\left(x,t,m,\varepsilon\right)\right|^2+ \frac{1}{2}\left|\widetilde{A}_2\left(x,t,m,\varepsilon\right)\right|^2 \end{equation} as the \emph{expected charge} in the interval of length $\varepsilon$ around the point $x$ at the time $t>0$, if an electron of mass $m$ was emitted from the origin at the time $0$ (or a positron is absorbed there). Unlike the original Feynman checkers, this value cannot be interpreted as a probability \cite[\S9.2]{SU-22}: virtual electrons and positrons also contribute to the charge. \end{physicalinterpretation*} \subsection{Exact solution} \label{ssec-analytic} Now we state a result which reduces the investigation of the new model to the analysis of a certain integral. One can use it as an alternative definition of the model. The integral coincides with the one arising in the original Feynman model \cite[Proposition~12]{SU-22} but is now computed for arbitrary parity of $(x+t)/\varepsilon$. \begin{proposition}[Fourier integral]\label{th-equivalence} For each $m,\varepsilon>0$ and $(x,t)\in\varepsilon\mathbb{Z}^2$ we have \begin{equation*} \begin{aligned} \widetilde{A}_1(x,t,m,\varepsilon)&= \pm\frac{im\varepsilon^2}{2\pi} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \frac{e^{i p x-i\omega_pt}\,dp} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}};\\ \widetilde{A}_2(x,t,m,\varepsilon)&= \pm\frac{\varepsilon}{2\pi}\int_{-\pi/\varepsilon}^{\pi/\varepsilon} \left(1+ \frac{\sin (p\varepsilon)} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}}\right) e^{ipx-i\omega_pt}\,dp, \end{aligned} \end{equation*} where the minus sign in the expression for $\widetilde{A}_k$ is taken for $t<0$ and $(x+t)/\varepsilon+k$ even, and \begin{equation}\label{eq-omega} \omega_p:=\frac{1}{\varepsilon}\arccos\left(\frac{\cos p\varepsilon}{\sqrt{1+m^2\varepsilon^2}}\right).
\end{equation} \end{proposition} \begin{physicalinterpretation*} The Fourier integral represents a wave emitted by a point source as a superposition of waves of wavelength $2\pi/p$ and frequency~$\omega_p$. The \emph{Planck} and \emph{de Broglie relations} assert that $\omega_p$ and $p$ are the energy and the momentum of the waves. As $\varepsilon\to 0$, the energy $\omega_p$ tends to the expression $\sqrt{m^2+p^2}$ (see~Lemma~\ref{l-g-difference}). The latter expression is standard; it generalizes the \emph{Einstein formula} $\hbar\omega_0=mc^2$ relating particle energy and mass (recall that $\hbar=c=1$ in our units). The Fourier integral resembles the \emph{spin-$1/2$ Feynman propagator} describing creation, annihilation and motion of non-interacting electrons and positrons along a line in quantum field theory \cite[(6.45),(6.50)--(6.51)]{Folland}: \begin{equation}\label{eq-feynman-propagator-fourier} G^F(x,t,m)= \frac{1}{4\pi} \int_{-\infty}^{+\infty} \begin{pmatrix} \frac{im}{\sqrt{m^2+p^2}} & 1+\frac{p}{\sqrt{m^2+p^2}} \\ -1+\frac{p}{\sqrt{m^2+p^2}} & \frac{im}{\sqrt{m^2+p^2}} \end{pmatrix} e^{i p x-i\sqrt{m^2+p^2}t}\,dp \qquad\text{for }t>0. \end{equation} Here the integral is understood as the Fourier transform of matrix-valued tempered distributions. (We do not use~\eqref{eq-feynman-propagator-fourier} in this paper and hence do not define those notions.
Cf.~\eqref{eq-feynman-propagator}.) \end{physicalinterpretation*} \begin{example}[Massless and heavy particles]\label{p-massless} \textup{(Cf.~\cite{Bender-etal-85})} For each $(x,t)\in \varepsilon\mathbb{Z}^2$ we have \begin{align} \notag \widetilde{A}_k(x,t,\infty,\varepsilon)&:= \lim_{m \to +\infty}\widetilde{A}_k(x,t,m,\varepsilon)= \delta_{x0}(-i)^{|t/\varepsilon+k-1|-1}, \\ \notag \widetilde{A}_1(x,t,0,\varepsilon)&:= \lim_{m \searrow 0}\widetilde{A}_1(x,t,m,\varepsilon)=0,\\ \label{eq-massless-lattice} \widetilde{A}_2(x,t,0,\varepsilon)&:= \lim_{m \searrow 0}\widetilde{A}_2(x,t,m,\varepsilon)= \begin{cases} 1, &\text{for }x=t\ge 0,\\ -1, &\text{for }x=t<0,\\ {2i\varepsilon}/{\pi(x-t)},&\text{for } {(x+t)}/{\varepsilon}\text{ odd},\\ 0, &\text{otherwise.} \end{cases} \end{align} Physically this means that an infinitely heavy particle stays at the origin forever, and a massless particle forms a charged ``cloud'' moving with the speed of light. The massless lattice propagator is proportional to the \emph{massless spin-$1/2$ Feynman propagator} defined by \begin{equation}\label{eq-massless} G^{F}(x,t,0):= \left(\begin{smallmatrix} 0 & i/2\pi(x-t) \\ i/2\pi(x+t) & 0 \end{smallmatrix}\right) \qquad\text{for }|x|\ne |t|. \end{equation} \end{example} \begin{example}[Unit mass and lattice step] \label{ex-simplest-values} The value \begin{align}\label{eq-Gauss} \widetilde{A}_1(0,0,1,1)/i&=\Gamma(\tfrac{1}{4})^2/(2\pi)^{3/2}= \tfrac{2}{\pi}K(i)=:G\approx 0.83463 \intertext{is the Gauss constant and} \label{eq-lemniscate} \widetilde{A}_2(1,0,1,1)/i&=2\sqrt{2\pi}/\Gamma(\tfrac{1}{4})^2 =\tfrac{2}{\pi}(E(i)-K(i))=1/\pi G=:L'\approx 0.38138 \end{align} is the inverse lemniscate constant (cf.~\textup{\cite[\S6.1]{Finch-03}}), where $K(z):=\int_{0}^{\pi/2}d\theta/\sqrt{1-z^2\sin^2\theta}$ and $E(z):=\int_{0}^{\pi/2}\sqrt{1-z^2\sin^2\theta}\,d\theta$ are the complete elliptic integrals of the 1st and 2nd kind respectively.
The other values are even more complicated irrationalities (see Table~\ref{table-a4}). \end{example} \begin{proposition}[Rational basis] \label{p-basis} For each $k\in\{1,2\}$ and $(x,t)\in \mathbb{Z}^2$ the values $2^{|t|/2}\mathrm{Re}\,\widetilde{A}_k(x,t,1,1)$ are integers and $2^{|t|/2}\mathrm{Im}\,\widetilde{A}_k(x,t,1,1)$ are rational linear combinations of numbers~\eqref{eq-Gauss} and~\eqref{eq-lemniscate}. \end{proposition} \begin{table}[ht] \centering $\widetilde{A}_1(x,t,1,1)$ \\[0.1cm] \begin{tabular} {|c*{8}{|>{\centering\arraybackslash}p{55pt}}|} \hline $2$& $0$ & ${i\frac{2G-3L'}{3}}$& $\frac{1}{2}$ & ${-iL'}$& $\frac{1}{2}$ & ${i\frac{2G-3L'}{3}}$& $0$ \\ \hline $1$ & ${i\frac{7G-15L'}{3\sqrt{2}}}$ & $0$ & ${i\frac{G-L'}{\sqrt{2}}}$& $\frac{1}{\sqrt{2}}$ & ${i\frac{G-L'}{\sqrt{2}}}$ & $0$ & ${i\frac{7G-15L'}{3\sqrt{2}}}$ \\ \hline $0$& $0$ & ${i(G-2L')}$& $0$ & ${iG}$& $0$ & ${i(G-2L')}$& $0$ \\ \hline $-1$& ${i\frac{7G-15L'}{3\sqrt{2}}}$ & $0$ & ${i\frac{G-L'}{\sqrt{2}}}$& $\frac{1}{\sqrt{2}}$ & ${i\frac{G-L'}{\sqrt{2}}}$& $0$ & ${i\frac{7G-15L'}{3\sqrt{2}}}$\\ \hline $-2$ & $0$ & ${i\frac{2G-3L'}{3}}$& $\frac{1}{2}$ & ${-iL'}$& $\frac{1}{2}$ & ${i\frac{2G-3L'}{3}}$& $0$ \\ \hline \diagbox[dir=SW,height=25pt]{$t$}{$x$}&$-3$&$-2$&$-1$&$0$&$1$&$2$&$3$\\ \hline \end{tabular} \\[0.2cm] $\widetilde{A}_2(x,t,1,1)$\\[0.1cm] \begin{tabular}{|c*{8}{|>{\centering\arraybackslash}p{55pt}}|} \hline $2$& $i\frac{5G-12L'}{15}$ & $0$ & $-i\frac{G}{3}$ & $-\frac{1}{2}$ & $-iG$ & $\frac{1}{2}$ & $i\frac{-5G+12L'}{3}$ \\ \hline $1$& $0$ & $i\frac{G-3L'}{3\sqrt{2}}$ & $0$ & $-i\frac{G+L'}{\sqrt{2}}$ & $\frac{1}{\sqrt{2}}$ & $i\frac{-G+3L'}{\sqrt{2}}$ & $0$ \\ \hline $0$ & $i\frac{4G-9L'}{3}$ & $0$ & $-iL'$ & $1$ & $iL'$ & $0$ & $i\frac{-4G+9L'}{3}$ \\ \hline $-1$& $0$ & $i\frac{G-3L'}{\sqrt{2}}$ & $-\frac{1}{\sqrt{2}}$ & $i\frac{G+L'}{\sqrt{2}}$ & $0$ & $i\frac{-G+3L'}{3\sqrt{2}}$ & $0$ \\ \hline $-2$& $i\frac{5G-12L'}{3}$ & $-\frac{1}{2}$ & $iG$ & $\frac{1}{2}$ & 
$i\frac{G}{3}$ & $0$ & $i\frac{-5G+12L'}{15}$\\ \hline \diagbox[dir=SW,height=25pt]{$t$}{$x$}&$-3$&$-2$&$-1$&$0$&$1$&$2$&$3$\\ \hline \end{tabular} \qquad \caption{The values $\widetilde{A}_1(x,t,1,1)$ and $\widetilde{A}_2(x,t,1,1)$ for small $x,t$ (see Definition~\ref{def-anti-combi} and Example~\ref{ex-simplest-values})} \label{table-a4} \end{table} In general, the propagator is ``explicitly'' expressed through the Gauss hypergeometric function. Denote by ${}_2F_{1}(p,q;r;z)$ the principal branch of the function defined by \cite[9.111]{Gradstein-Ryzhik-63}. \begin{proposition} \textup{(``Explicit'' formula)} \label{p-mass5} For each $m,\varepsilon>0$ and $(x,t)\in\varepsilon\mathbb{Z}^2$ we have \begin{align*} \widetilde{A}_1(x,t,m,\varepsilon) &=\pm i \frac{(-im\varepsilon)^{\frac{t-|x|}{\varepsilon}}} {\left({1+m^2\varepsilon^2}\right)^{\frac{t}{2\varepsilon}}} \binom{\frac{t+|x|}{2\varepsilon}-\frac{1}{2}}{|x|/\varepsilon} {}_2F_1\left(\frac{1}{2} + \frac{|x|-t}{2\varepsilon}, \frac{1}{2} + \frac{|x|-t}{2\varepsilon}; 1+\frac{|x|}{\varepsilon}; -\frac{1}{m^2\varepsilon^2}\right),\\ \widetilde{A}_2(x,t,m,\varepsilon) &=\pm \frac{(-im\varepsilon)^{\frac{t-|x|}{\varepsilon}}} {\left({1+m^2\varepsilon^2}\right)^{\frac{t}{2\varepsilon}}} \binom{\frac{t+|x|}{2\varepsilon}-1+\theta(x)}{|x|/\varepsilon} {}_2F_1\left(\frac{|x|-t}{2\varepsilon}, 1+\frac{|x|-t}{2\varepsilon}; 1+\frac{|x|}{\varepsilon}; -\frac{1}{m^2\varepsilon^2}\right), \end{align*} where the minus sign in the expression for $\widetilde{A}_k$ is taken for $t<0$ and $(x+t)/\varepsilon+k$ even, and \begin{align*} \theta(x)&:= \begin{cases} 1, & \mbox{if } x\ge0, \\ 0, & \mbox{if } x<0; \end{cases} & \binom{z}{n}&:=\frac{1}{n!}\prod_{j=1}^{n}(z-j+1). 
\end{align*} \end{proposition} \begin{remark} \label{rem-jacobi-functions} Depending on the parity of $(x+t)/\varepsilon$, those expressions can be rewritten as \emph{Jacobi polynomials} (see~\cite[Remark~3]{SU-22}) or as \emph{Jacobi functions of the 2nd kind} of half-integer order (see the definition in \cite[(4.61.1)]{Szego-39}). For instance, for each $(x,t)\in\varepsilon\mathbb{Z}^2$ such that $|x|>t$ and $(x+t)/\varepsilon$ is even we have $$ \widetilde{A}_1(x,t,m,\varepsilon) =\frac{2m\varepsilon}{\pi} \left({1+m^2\varepsilon^2}\right)^{t/2\varepsilon} Q_{(|x|-t-\varepsilon)/2\varepsilon}^{(0,t/\varepsilon)}(1+2m^2\varepsilon^2). $$ \end{remark} \begin{remark} \label{rem-probability-to-die} For even $x/\varepsilon$, the value $\widetilde{A}_1(x,0,m,\varepsilon)$ equals $i(1+\sqrt{1+m^2\varepsilon^2})/m\varepsilon$ times the probability that a planar simple random walk over the sublattice $\{(x,t)\in\varepsilon\mathbb{Z}^2:(x+t)/\varepsilon\text{ even}\}$ dies at $(x,0)$, if it starts at $(0,0)$ and dies with the probability $1-1/\sqrt{1+m^2\varepsilon^2}$ before each step. Nothing like that is known for $t\ne 0$. \end{remark} \subsection{Asymptotic formulae} \label{ssec-asymptotic} The main result of this work is consistency of the model with continuum quantum field theory, that is, the convergence of the lattice propagator to the continuum one as the lattice becomes finer and finer (see Figure~\ref{fig-approx-b} and Theorem~\ref{th-continuum-limit}). More precisely, the former propagator converges to the real part of the latter on certain sublattice, and it converges to the imaginary part on the dual sublattice. The limit involves Bessel functions $J_n(z)$ and $Y_n(z)$ of the 1st and 2nd kind respectively, and modified Bessel functions $K_n(z)$ of the 2nd kind defined in \cite[\S8.40]{Gradstein-Ryzhik-63}. 
\begin{theorem}[Asymptotic formula in the continuum limit] \label{th-continuum-limit} For each $m,\varepsilon>0$ and $(x,t)\in\varepsilon\mathbb{Z}^2$ such that $|x|\ne |t|$ we have \begin{align} \label{eq1-th-continuum-limit} \hspace{-0.2cm}\widetilde{A}_1\left(x,t,{m},{\varepsilon}\right) &= \begin{cases} m\varepsilon \left(J_0(ms)+O\left(\varepsilon\Delta\right)\right), &\text{for }|x|<|t|\text{ and }(x+t)/\varepsilon\text{ odd};\\ -im\varepsilon \left(Y_0(ms)+O\left(\varepsilon\Delta\right)\right), &\text{for }|x|<|t|\text{ and }(x+t)/\varepsilon\text{ even};\\ 0, &\text{for }|x|>|t|\text{ and }(x+t)/\varepsilon\text{ odd};\\ \mathrlap{2im\varepsilon \left(K_0(ms)+O\left(\varepsilon\Delta\right)\right)/{\pi},} \hphantom{2im\varepsilon(t+x)\left(K_1(ms)+O\left(\varepsilon\Delta\right)\right)/\pi s,} &\text{for }|x|>|t|\text{ and }(x+t)/\varepsilon\text{ even}; \end{cases} \end{align} \begin{align} \label{eq2-th-continuum-limit} \hspace{-0.2cm}\widetilde{A}_2\left(x,t,{m},{\varepsilon}\right) &= \begin{cases} -m\varepsilon(t+x)\left(J_1(ms)+O\left(\varepsilon\Delta\right)\right)/{s}, &\text{for }|x|<|t|\text{ and }(x+t)/\varepsilon\text{ even};\\ im\varepsilon(t+x)\left(Y_1(ms)+O\left(\varepsilon\Delta\right)\right)/{s}, &\text{for }|x|<|t|\text{ and }(x+t)/\varepsilon\text{ odd};\\ 0, &\text{for }|x|>|t|\text{ and }(x+t)/\varepsilon\text{ even};\\ 2im\varepsilon(t+x)\left(K_1(ms)+O\left(\varepsilon\Delta\right)\right)/\pi s, &\text{for }|x|>|t|\text{ and }(x+t)/\varepsilon\text{ odd}. \end{cases} \end{align} $$ \text{where}\qquad s:=\sqrt{\left|t^2-x^2\right|}\qquad\text{and}\qquad \Delta:=\frac{1}{\left|\,|x|-|t|\,\right|}+m^2(|x|+|t|).
$$ \end{theorem} Recall that $f(x,t,{m},{\varepsilon})=g(x,t,{m},{\varepsilon})+{O}\left(h(x,t,{m},{\varepsilon})\right)$ means that there is a constant $C$ (not depending on $x,t,{m},{\varepsilon}$) such that for each $x,t,{m},{\varepsilon}$ satisfying the assumptions of the theorem we have $|f(x,t,{m},{\varepsilon})-g(x,t,{m},{\varepsilon})|\le C\,h(x,t,{m},{\varepsilon})$. \begin{physicalinterpretation*} The theorem means that in the continuum limit, the model reproduces the spin-$1/2$ Feynman propagator (cf.~\eqref{eq-feynman-propagator-fourier}) \begin{equation}\label{eq-feynman-propagator} \hspace{-0.2cm}G^F(x,t,m):= \begin{cases} \dfrac{m}{4}\, \begin{pmatrix} J_0(ms)-iY_0(ms) & -{\frac{t+x}{s}}\left(J_1(ms)-iY_1(ms)\right) \\ {\frac{t-x}{s}}\left(J_1(ms)-iY_1(ms)\right) & J_0(ms)-iY_0(ms) \end{pmatrix}, &\text{if }|x|<|t|;\\ \dfrac{im}{2\pi}\, \begin{pmatrix} K_0(ms) & {\frac{t+x}{s}}\,K_1(ms) \\ {\frac{x-t}{s}}\,K_1(ms) & K_0(ms) \end{pmatrix}, &\text{if }|x|>|t|, \end{cases} \end{equation} where again $s:=\sqrt{\left|t^2-x^2\right|}$. (A common definition involves also a generalized function supported on the lines $t=\pm x$ which we drop because those lines are excluded anyway.) The value $|G_{11}^F(x,t,m)|^2+|G_{12}^F(x,t,m)|^2$ is the expected charge density at the point $x$ at the moment~$t$. Recall that Feynman's original model reproduces just \emph{retarded propagator} \cite[Theorem~5]{SU-22}. \end{physicalinterpretation*} The asymptotic formulae in Theorem~\ref{th-continuum-limit} were known earlier (with a slightly weaker error estimates) on the sublattice, where the new model coincides with Feynman's original one \cite[Theorem~5]{SU-22}. Extension to the dual sublattice has required different methods. 
In the following corollary, we approximate a point $(x,t)\in\mathbb{R}^2$ by the lattice point \begin{equation}\label{eq-lattice-approx} (x_\varepsilon,t_\varepsilon):=\left(2\varepsilon\!\left\lceil \frac{x}{2\varepsilon}\right\rceil,2\varepsilon\!\left\lceil \frac{t}{2\varepsilon}\right\rceil\right). \end{equation} \begin{corollary}[Uniform continuum limit; see Figures~\ref{fig-approx-q} and~\ref{fig-approx-b}] \label{cor-anti-uniform} For each fixed $m\ge 0$ we have \begin{gather} \notag \begin{aligned} \frac{1}{4\varepsilon}\,\widetilde{A}_1\left(x_\varepsilon+\varepsilon,t_\varepsilon,{m},{\varepsilon}\right) &\rightrightarrows \mathrm{Re}\,G^F_{11}(x,t,m); & \frac{1}{4\varepsilon}\,\widetilde{A}_1\left(x_\varepsilon,t_\varepsilon,{m},{\varepsilon}\right) &\rightrightarrows i\mathrm{Im}\,G^F_{11}(x,t,m);\\ \frac{1}{4\varepsilon}\,\widetilde{A}_2\left(x_\varepsilon+\varepsilon,t_\varepsilon,{m},{\varepsilon}\right) &\rightrightarrows i\mathrm{Im}\,G^F_{12}(x,t,m); & \frac{1}{4\varepsilon}\,\widetilde{A}_2\left(x_\varepsilon,t_\varepsilon,{m},{\varepsilon}\right) &\rightrightarrows \mathrm{Re}\,G^F_{12}(x,t,m); \end{aligned}\\ \label{eq-cor-anti-uniform} \frac{1}{8\varepsilon^2}\,\left(Q\left(x_\varepsilon,t_\varepsilon,{m},{\varepsilon}\right) +Q\left(x_\varepsilon+\varepsilon,t_\varepsilon,{m},{\varepsilon}\right)\right) \rightrightarrows |G^F_{11}(x,t,m)|^2+|G^F_{12}(x,t,m)|^2 \end{gather} as $\varepsilon\to 0$ uniformly on compact subsets of $\mathbb{R}^2\setminus\{|t|=|x|\}$, under notation~\eqref{eq-lattice-approx},\eqref{eq-feynman-propagator},\eqref{eq-q},\eqref{eq-massless-lattice},\eqref{eq-massless}.
\end{corollary} \begin{figure}[htbp] \centering \includegraphics[width=0.36\textwidth]{ima1-approx-v2.pdf} \includegraphics[width=0.36\textwidth]{logima1-approx-v1.pdf} \\ \includegraphics[width=0.36\textwidth]{ima2-approx-v2.pdf} \includegraphics[width=0.36\textwidth]{logima2-approx-v1.pdf} \caption{Plots of (the normalized imaginary part of) the lattice propagators $\mathrm{Im}\,\widetilde{A}_1(x,6,4,0.03)/0.12$ (top, dots) and $\mathrm{Im}\,\widetilde{A}_2(x,6,4,0.03)/0.12$ (bottom, dots) for $x/0.06\in\mathbb{Z}$ and $x/0.06+1/2\in\mathbb{Z}$ respectively, their analytic approximations from Theorems~\ref{th-continuum-limit} (dark curve) and~\tobereplaced{\ref{th-anti-ergenium}}{\ref{th-anti-Airy}} (light curve). The former approximation is the imaginary part of the spin-$1/2$ Feynman propagator $\mathrm{Im}\,G^F_{11}(x,6,4)$ (top, dark) and $\mathrm{Im}\,G^F_{12}(x,6,4)$ (bottom, dark) given by~\eqref{eq-feynman-propagator}.} \label{fig-approx-b} \end{figure} The following result follows from Proposition~\ref{th-equivalence} and~\cite[Theorems~2 and~7]{SU-22}; cf.~\cite{Ozhegov-21}. 
\begin{theorem}[Large-time asymptotic formula between the peaks; see Figure~\ref{fig-approx-b}] \label{th-anti-ergenium} For each $\Delta>0$ there is $C_\Delta>0$ such that for each $m,\varepsilon>0$ and each $(x,t)\in\varepsilon\mathbb{Z}^2$ satisfying \begin{equation}\label{eq-case-A} |x|/t<1/\sqrt{1+m^2\varepsilon^2}-\Delta, \qquad \varepsilon\le 1/m, \qquad t>C_\Delta/m, \end{equation} we have \begin{align*} \widetilde{A}_1\left(x,t,m,\varepsilon\right) &= \begin{cases} {\varepsilon}\sqrt{\frac{2m}{\pi}} \frac{\sin (\theta(x,t,m,\varepsilon)+\pi/4)} {\left(t^2-(1+m^2\varepsilon^2)x^2\right)^{1/4}} +O_\Delta\left(\frac{\varepsilon}{m^{1/2}t^{3/2}}\right), &\text{for }(x+t)/\varepsilon\text{ odd};\\ \mathrlap{i{\varepsilon}\sqrt{\frac{2m}{\pi}} \frac{\cos (\theta(x,t,m,\varepsilon)+\pi/4)} {\left(t^2-(1+m^2\varepsilon^2)x^2\right)^{1/4}} +O_\Delta\left(\frac{\varepsilon}{m^{1/2}t^{3/2}}\right),} \hphantom{-i{\varepsilon}\sqrt{\frac{2m}{\pi}} \sqrt{\frac{t+x}{t-x}}\frac{\sin (\theta(x,t,m,\varepsilon)+\pi/4)} {\left(t^2-(1+m^2\varepsilon^2)x^2\right)^{1/4}} +O_\Delta\left(\frac{\varepsilon}{m^{1/2}t^{3/2}}\right),} &\text{for }(x+t)/\varepsilon\text{ even}; \end{cases} \\ \widetilde{A}_2\left(x,t,m,\varepsilon\right) &= \begin{cases} {\varepsilon}\sqrt{\frac{2m}{\pi}} \sqrt{\frac{t+x}{t-x}}\frac{\cos (\theta(x,t,m,\varepsilon)+\pi/4)} {\left(t^2-(1+m^2\varepsilon^2)x^2\right)^{1/4}} +O_\Delta\left(\frac{\varepsilon}{m^{1/2}t^{3/2}}\right), &\text{for }(x+t)/\varepsilon\text{ even};\\ -i{\varepsilon}\sqrt{\frac{2m}{\pi}} \sqrt{\frac{t+x}{t-x}}\frac{\sin (\theta(x,t,m,\varepsilon)+\pi/4)} {\left(t^2-(1+m^2\varepsilon^2)x^2\right)^{1/4}} +O_\Delta\left(\frac{\varepsilon}{m^{1/2}t^{3/2}}\right), &\text{for }(x+t)/\varepsilon\text{ odd}, \end{cases} \intertext{where} \theta(x,t,m,\varepsilon)&:= \frac{t}{\varepsilon}\arcsin \frac{m\varepsilon t} {\sqrt{\left(1+m^2\varepsilon^2\right)\left(t^2-x^2\right)}} -\frac{x}{\varepsilon}\arcsin \frac{m\varepsilon x}{\sqrt{t^2-x^2}}
\end{align*} \end{theorem} Here the notation $f(x,t,m,\varepsilon) =g(x,t,m,\varepsilon)+{O}_{\Delta}\left(h(x,t,m,\varepsilon)\right)$ means that there is a constant $C(\Delta)$ (depending on $\Delta$ but \emph{not} on $x,t,m,\varepsilon$) such that for each $x,t,m,\varepsilon$ satisfying the assumptions of the theorem we have $|f(x,t,m,\varepsilon)-g(x,t,m,\varepsilon)|\le C(\Delta)\,h(x,t,m,\varepsilon)$. \tobeadded The following result is an immediate consequence of Proposition~\ref{th-equivalence} and \cite[Remark after Theorem~3]{Zakorko-21}; cf.~\cite{Ozhegov-21}. \mscomm{!!! Update references and theorem numbers !!!} \begin{theorem}[Large-time asymptotic formula] \label{th-anti-Airy} \textup{(See \cite[Remark after Theorem~3]{Zakorko-21}.)} For each $m,\varepsilon>0$ and each $(x,t)\in\varepsilon\mathbb{Z}^2$ satisfying $0<|x/t|<1/\sqrt{1+m^2\varepsilon^2}$ we have \begin{align*} \tilde{A}_1\left(x,t,m,\varepsilon\right) &= \frac{i^{\frac{|x|-|t|+\varepsilon}{\varepsilon}}\varepsilon\sqrt{m}\,\theta(x,t,m,\varepsilon)^{1/2}} {\sqrt{3}(t^2-(1+m^2\varepsilon^2)x^2)^{1/4}} \left(J_{ 1/3}\left(\theta(x,t,m,\varepsilon)\right)+ J_{-1/3}\left(\theta(x,t,m,\varepsilon)\right)\right)+ O_{m,\varepsilon}\left(\frac{1}{|t|}\right),\\ \tilde{A}_2\left(x,t,m,\varepsilon\right) &= \sqrt{\frac{t + x}{t - x}} \frac{i^{\frac{|x|-|t|}{\varepsilon}}\varepsilon\sqrt{m}\,\theta(x,t,m,\varepsilon)^{1/2}} {\sqrt{3}(t^2-(1+m^2\varepsilon^2)x^2)^{1/4}} \left(J_{ 1/3}\left(\theta(x,t,m,\varepsilon)\right)+ J_{-1/3}\left(\theta(x,t,m,\varepsilon)\right)\right)+ O_{m,\varepsilon}\left(\frac{1}{|t|}\right),\\ \intertext{where} \theta(x,t,m,\varepsilon)&:= \frac{t}{\varepsilon}\arctan\frac{\sqrt{t^2-(1+m^2\varepsilon^2)x^2}}{m \varepsilon t} -\frac{x}{\varepsilon}\arctan\frac{\sqrt{t^2-(1+m^2\varepsilon^2)x^2}}{m \varepsilon x}.
\end{align*} \end{theorem} Here the notation $f(x,t,m,\varepsilon) =g(x,t,m,\varepsilon)+{O}_{m,\varepsilon}\left(h(x,t)\right)$ means that there is a constant $C(m,\varepsilon)$ (depending on $m,\varepsilon$ but \emph{not} on $x,t$) such that for each $x,t,m,\varepsilon$ satisfying the assumptions of the theorem we have $|f(x,t,m,\varepsilon)-g(x,t,m,\varepsilon)|\le C(m,\varepsilon)\,h(x,t)$. \endtobeadded \begin{physicalinterpretation*} Here $-\theta(x,t,m,\varepsilon)$ has the meaning of \emph{action}. As $\varepsilon\to 0$, it tends to the action $-m\sqrt{t^2-x^2}$ of a free relativistic particle (moving from the origin to $(x,t)$ with constant speed). If we introduce the \emph{Lagrangian} $\mathcal{L}(v):=-\theta(vt,t,m,\varepsilon)/t$, then the well-known relation between the energy~\eqref{eq-omega}, the momentum $p$, and the Lagrangian $\mathcal{L}$ holds: $\omega_p=pv-\mathcal{L}$ for $p=\partial\mathcal{L}/\partial v$. \end{physicalinterpretation*} \subsection{Identities} \label{ssec-identities} Now we establish the following informal assertion (see Propositions~\ref{p-mass}--\ref{p-triple} for formal ones). \begin{consistencyprinciple*} The new model satisfies the same identities as Feynman's one. \end{consistencyprinciple*} In particular, the identities in this subsection are known (and easy to deduce from Definition~\ref{def-mass}) for the sublattice, where the new model coincides with the original one \cite[Propositions~5--10]{SU-22}. For the dual sublattice, these results are not so easy to prove. Further, for the former sublattice, there are a few ``exceptions'' to the identities; but on the dual sublattice, the imaginary part $b_k(x,t,m,\varepsilon)$ defined in \cite[Definition~5]{SU-22} satisfies the known identities \cite[Propositions~5--8 and 10]{SU-22} literally for both $t>0$ and $t\le 0$. In what follows we fix $m,\varepsilon>0$ and omit the arguments $m,\varepsilon$ of the propagators.
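For instance, the Dirac equation (Proposition~\ref{p-mass} below) implies the Klein--Gordon equation (Proposition~\ref{p-Klein-Gordon-mass}); here is a sketch of ours for $k=1$, not spelled out in what follows. Substituting~\eqref{eq-Dirac-source1} at the points $(x,t+\varepsilon)$ and $(x-\varepsilon,t)$, and then~\eqref{eq-Dirac-source2} at the point $(x,t)\ne(0,0)$, we get

```latex
\begin{align*}
&\sqrt{1+m^2\varepsilon^2}\,\widetilde{A}_1(x,t+\varepsilon)
 +\sqrt{1+m^2\varepsilon^2}\,\widetilde{A}_1(x,t-\varepsilon)
 -\widetilde{A}_1(x+\varepsilon,t)-\widetilde{A}_1(x-\varepsilon,t)\\
&\qquad=m\varepsilon\,\widetilde{A}_2(x,t)
 +\frac{m^2\varepsilon^2}{\sqrt{1+m^2\varepsilon^2}}\,\widetilde{A}_1(x,t-\varepsilon)
 -\frac{m\varepsilon}{\sqrt{1+m^2\varepsilon^2}}\,\widetilde{A}_2(x-\varepsilon,t-\varepsilon)=0.
\end{align*}
```

The case $k=2$ is analogous; the exceptional points in Proposition~\ref{p-Klein-Gordon-mass} come from the source term $2\delta_{x0}\delta_{t0}$ in~\eqref{eq-Dirac-source2}.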
\begin{proposition}[Dirac equation]\label{p-mass} For each $(x,t)\in \varepsilon\mathbb{Z}^2$ we have \begin{align}\label{eq-Dirac-source1} \tilde A_1(x,t) &= \frac{1}{\sqrt{1+m^2\varepsilon^2}} (\tilde A_1(x+\varepsilon,t-\varepsilon) + m \varepsilon\, \tilde A_2(x,t-\varepsilon)),\\ \label{eq-Dirac-source2} \tilde A_2(x,t) &= \frac{1}{\sqrt{1+m^2\varepsilon^2}} (\tilde A_2(x-\varepsilon,t-\varepsilon) - m \varepsilon\, \tilde A_1(x,t-\varepsilon))+2\delta_{x0}\delta_{t0}. \end{align} \end{proposition} In the limit $\varepsilon\to 0$, this reproduces the \emph{Dirac equation in $1$ space- and $1$ time-dimension} $$\begin{pmatrix} m & \partial/\partial x-\partial/\partial t \\ \partial/\partial x+\partial/\partial t & m \end{pmatrix} \begin{pmatrix} G^F_{12}(x,t) \\ G^F_{11}(x,t) \end{pmatrix} = \begin{pmatrix} 0\\ \delta(x)\delta(t) \end{pmatrix} .$$ \begin{proposition}[Klein--Gordon equation] \label{p-Klein-Gordon-mass} For each $k\in\{1,2\}$ and each $(x,t)\in \varepsilon\mathbb{Z}^2$, where $(x,t)\ne(0,0)$ for $k=1$ and $(x,t)\ne(-\varepsilon,0),(0,-\varepsilon)$ for $k=2$, we have \begin{align*} \sqrt{1+m^2\varepsilon^2}\,\widetilde{A}_k(x,t+\varepsilon) +\sqrt{1+m^2\varepsilon^2}\,\widetilde{A}_k(x,t-\varepsilon) -\widetilde{A}_k(x+\varepsilon,t) -\widetilde{A}_k(x-\varepsilon,t)=0. \end{align*} \end{proposition} In the limit $\varepsilon\to 0$, this gives the \emph{Klein--Gordon equation} $\left(\tfrac{\partial^2}{\partial t^2}-\tfrac{\partial^2}{\partial x^2}+m^2\right)G^F_{1k}(x,t)=0$. The infinite-lattice propagator has the same reflection symmetries as the continuum one. \begin{proposition}[Skew-symmetry]\label{p-symmetry-mass} For each $(x,t)\in \varepsilon\mathbb{Z}^2$, where $(x,t)\ne(0,0)$, we have \begin{gather*} \widetilde{A}_1(x,t)=\widetilde{A}_1(-x,t)=\widetilde{A}_1(x,-t)=\widetilde{A}_1(-x,-t),\\ \widetilde{A}_2(x,t)=-\widetilde{A}_2(-x,-t), \qquad\qquad (t-x)\,\widetilde{A}_2(x,t) =(t+x)\,\widetilde{A}_2(-x,t).
\end{gather*} \end{proposition} \begin{proposition}[Charge conservation] \label{p-mass2} For each $t\in\varepsilon\mathbb{Z}$, $\sum\limits_{x\in\varepsilon\mathbb{Z}} \dfrac{|\widetilde{A}_1\left(x,t\right)|^2+ |\widetilde{A}_2\left(x,t\right)|^2}{2}=1$. \end{proposition} There are two versions of Huygens' principle (cf.~\cite[Proposition~9]{SU-22}). \begin{proposition}[Huygens' principle]\label{p-Huygens2} For each $x,t,t'\in\varepsilon\mathbb{Z}$, where $t\ge t'\ge 0$, we have \begin{align*} \widetilde{A}_1(x,t) &=\frac{1}{2}\sum\limits_{\substack{x'\in\varepsilon\mathbb{Z}}} \left(\widetilde{A}_2(x',t')\widetilde{A}_1(x-x',t-t') +\widetilde{A}_1(x',t')\widetilde{A}_2(x'-x,t-t')\right),\\ \widetilde{A}_2(x,t) &=\frac{1}{2}\sum\limits_{\substack{x'\in\varepsilon\mathbb{Z}}} \left( \widetilde{A}_2(x',t')\widetilde{A}_2(x-x',t-t') -\widetilde{A}_1(x',t')\widetilde{A}_1(x'-x,t-t')\right). \end{align*} \end{proposition} In the following version of Huygens' principle, the sums are actually finite. \begin{proposition}[Huygens' principle]\label{p-Huygens} For each $x,t,t'\in\varepsilon\mathbb{Z}$, where $t\ge t'\ge 0$, we have \begin{align*} \widetilde{A}_1(x,t) &=\sum\limits_{\substack{x'\in\varepsilon\mathbb{Z}:\\ (x+x'+t+t')/\varepsilon \text{ odd}}} \widetilde{A}_2(x',t')\widetilde{A}_1(x-x',t-t') +\sum\limits_{\substack{x'\in\varepsilon\mathbb{Z}:\\ (x+x'+t+t')/\varepsilon \text{ even}}} \widetilde{A}_1(x',t')\widetilde{A}_2(x'-x,t-t'),\\ \widetilde{A}_2(x,t) &= \sum\limits_{\substack{x'\in\varepsilon\mathbb{Z}:\\ (x+x'+t+t')/\varepsilon \text{ even}}} \widetilde{A}_2(x',t')\widetilde{A}_2(x-x',t-t') - \sum\limits_{\substack{x'\in\varepsilon\mathbb{Z}:\\ (x+x'+t+t')/\varepsilon \text{ odd}}} \widetilde{A}_1(x',t')\widetilde{A}_1(x'-x,t-t').
\end{align*} \end{proposition} \begin{proposition}[Equal-time mixed recurrence]\label{l-mean} For each $(x,t)\in \varepsilon\mathbb{Z}^2$ we have \begin{align}\label{eq-mean1} 2m\varepsilon x\widetilde{A}_1(x,t) &=(x-t-\varepsilon)\widetilde{A}_2(x-\varepsilon,t) -(x-t+\varepsilon)\widetilde{A}_2(x+\varepsilon,t), \\ \label{eq-mean2} 2m\varepsilon x\widetilde{A}_2(x,t) &=(x+t)\widetilde{A}_1(x-\varepsilon,t)-(x+t)\widetilde{A}_1(x+\varepsilon,t). \end{align} \end{proposition} In particular, $\widetilde{A}_2(x,0) =\left(\widetilde{A}_1(x-\varepsilon,0)-\widetilde{A}_1(x+\varepsilon,0)\right)/(2m\varepsilon)$ for $x\ne 0$. \begin{proposition}[Equal-time recurrence relation]\label{p-triple} For each $(x,t)\in\varepsilon\mathbb{Z}^2$ we have \begin{multline*} (x+\varepsilon)((x-\varepsilon)^2-t^2) \widetilde{A}_1(x-2\varepsilon,t) +(x-\varepsilon)((x+\varepsilon)^2-t^2) \widetilde{A}_1(x+2\varepsilon,t)= \\ =2x \left((1+2m^2\varepsilon^2)(x^2-\varepsilon^2)-t^2\right) \widetilde{A}_1(x,t), \end{multline*} \vspace{-1.0cm} \begin{multline*} (x+\varepsilon)((x-\varepsilon)^2-(t+\varepsilon)^2) \widetilde{A}_2(x-2\varepsilon,t) +(x-\varepsilon)((x+\varepsilon)^2-(t-\varepsilon)^2) \widetilde{A}_2(x+2\varepsilon,t)= \\ =2x\left((1+2m^2\varepsilon^2)(x^2-\varepsilon^2)-t^2+\varepsilon^2\right) \widetilde{A}_2(x,t). \end{multline*} \end{proposition} Analogous identities can be written for any $3$ neighboring lattice points by means of Proposition~\ref{p-mass5} and Gauss contiguous relations \cite[9.137]{Gradstein-Ryzhik-63}. \begin{figure}[htbp] \centering \includegraphics[height=2.2cm]{1x1v2.png} \includegraphics[height=2.2cm]{2x2v2.png} \includegraphics[height=2.2cm]{a0f1f2v2.png} \includegraphics[height=2.2cm]{complementary-v2.png} \caption{Lattices of sizes $1$ and $2$ (left); see Example~\ref{ex-1x1}.
Notation for edges (right) } \label{fig-1x1} \end{figure} \subsection{Combinatorial definition} \label{ssec-combi-def} Now we realize the plan from the end of~\S\ref{ssec-outline}, but switch the role of the lattice and its dual. \begin{definition} \label{def-anti-combi} Fix $T\in\mathbb{Z}$ and $\varepsilon,m,\delta>0$ called \emph{lattice size, lattice step, particle mass}, and \emph{small imaginary mass} respectively. Assume $T>0$ and $\delta<1$. The \emph{lattice} is the quotient set $$ \faktor{ \{\,(x,t)\in[0,T\varepsilon]^2: 2x/\varepsilon,2t/\varepsilon,(x+t)/\varepsilon\in\mathbb{Z}\,\} } { \forall x,t:(x,0)\sim (x,T\varepsilon)\,\&\,(0,t)\sim (T\varepsilon,t). } $$ (This is a finite subset of the torus obtained from the square $[0,T\varepsilon]^2$ by an identification of the opposite sides; see Figure~\ref{fig-1x1} to the left.) A lattice point $(x,t)$ is \emph{even} (respectively, \emph{odd}), if $2x/\varepsilon$ is even (respectively, odd). An \emph{edge} is a vector starting from a lattice point $(x,t)$ and ending at the lattice point $(x+\varepsilon/2,t+\varepsilon/2)$ or $(x-\varepsilon/2,t+\varepsilon/2)$. A \emph{generalized checker path} (or just a \emph{path}) is a finite sequence of distinct edges such that the endpoint of each edge is the starting point of the next one. A \emph{cycle} is defined analogously, only the sequence has the unique repetition: the first and the last edges coincide, and there is at least one edge in between. (In particular, a path such that the endpoint of the last edge is the starting point of the first one is \emph{not} yet a cycle; coincidence of the first and the last \emph{edges} is required. The first and the last edges of a \emph{path} coincide only if the path has a single edge. Thus in our setup, a path is \emph{never} a cycle.) 
\emph{Changing the starting edge} of a cycle means removal of the first edge from the sequence, then a cyclic permutation, and then adding the last edge of the resulting sequence at the beginning. A \emph{loop} is a cycle up to changing of the starting edge. A \emph{node} of a path or loop $s$ is {an ordered} pair of consecutive edges in $s$ {(the order of the edges in the pair is the same as in~$s$)}. A \emph{turn} is a node such that the two edges are orthogonal. A node or turn is \emph{even} (respectively, \emph{odd}), if the {endpoint of the first edge in the pair} is even (respectively, odd). Denote by $\mathrm{eventurns}(s)$, $\mathrm{oddturns}(s)$, $\mathrm{evennodes}(s)$, $\mathrm{oddnodes}(s)$ the number of even and odd turns and nodes in $s$. The \emph{arrow} (or \emph{weight}) of $s$ is \begin{equation}\label{eq-def-anti3} {A}(s):={A}(s,m,\varepsilon,\delta) :=\pm\frac{(-im\varepsilon)^{\mathrm{oddturns}(s)} (-\delta)^{\mathrm{eventurns}(s)}} {(1+m^2\varepsilon^2)^{\mathrm{oddnodes}(s)/2} (1-\delta^2)^{\mathrm{evennodes}(s)/2}}, \end{equation} where the overall minus sign is taken when $s$ is a loop. A set of checker paths or loops is \emph{edge-disjoint}, if no two of them have a common edge. An edge-disjoint set of loops is a \emph{loop configuration}. A \emph{loop configuration with a source $a$ and a sink $f$} is an edge-disjoint set of any number of loops and exactly one path starting with the edge $a$ and ending with the edge $f$. The \emph{arrow} $A(S):=A(S,m,\varepsilon,\delta)$ of a loop configuration $S$ (possibly with a source and a sink) is the product of arrows of all loops and paths in the configuration. An empty product is set to be $1$.
The \emph{arrow from an edge $a$ to an edge $f$} (or \emph{finite-lattice propagator}) is \begin{equation}\label{eq-def-finite-lattice-propagator} {A}(a\to f):={A}(a\to f;m,\varepsilon,\delta,T) :=\frac{\sum\limits_{\substack{\text{loop configurations $S$}\\ \text{with the source $a$ and the sink $f$}}}A(S,m,\varepsilon,\delta)} {\sum\limits_{\substack{\text{loop configurations $S$}}}A(S,m,\varepsilon,\delta)}. \end{equation} Now take a point $(x,t)\in \varepsilon\mathbb{Z}^2$ and set $ x':=x\mod T\varepsilon$, $t':=t\mod T\varepsilon$. Denote by $a_0,f_1=f_1(x,t),f_2=f_2(x,t)$ the edges starting at $(0,0)$, $(x',t')$, $(x',t')$ and ending at $(\varepsilon/2,\varepsilon/2)$, $(x'-\varepsilon/2,t'+\varepsilon/2)$, $(x'+\varepsilon/2,t'+\varepsilon/2)$ respectively; see Figure~\ref{fig-1x1} to the middle-right. The \emph{arrow of the point $(x,t)$} (or \emph{infinite-lattice propagator}) is the pair of complex numbers \begin{equation}\label{eq-def-infinite-lattice-propagator} \widetilde{A}_k^\mathrm{loop}(x,t,m,\varepsilon):=-2(-i)^k\, \lim_{\delta\searrow 0}\lim_{\substack{T\to\infty}} {A}(a_0\to f_k(x,t);m,\varepsilon,\delta,T) \qquad\text{for $k=1,2$.} \end{equation} \end{definition} \begin{table}[htb] \centering \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|} \hline $S$ & $\varnothing$ & $\{aba\}$ & $\{cdc\}$ & $\{aca\}$ & $\{bdb\}$ & $\{abdca\}$ & $\{acdba\}$ & $\{aba,cdc\}$ & $\{aca,bdb\}$ \\ $A(S)$ & $1$ & ${-im\varepsilon\delta}/n$ & ${-im\varepsilon\delta}/n$ & ${-1}/n$ & ${-1}/n$ & ${m^2\varepsilon^2}/n^2$ & ${-\delta^2}/n^2$ & ${-m^2\varepsilon^2\delta^2}/n^2$ & ${1}/n^2$ \\ \hline \end{tabular} \caption{All loop configurations on the lattice of size $1$ and their arrows; see Example~\ref{ex-1x1}. }\label{tab-1x1} \end{table} \begin{example}[Lattice $1\times 1$; see Figure~\ref{fig-1x1} to the left] \label{ex-1x1} The lattice of size $1$ lies on the square $[0,\varepsilon]^2$ with the opposite sides identified. 
The lattice has $2$ points: the midpoint and the identified vertices of the square. It has $4$ edges $a,b,c,d$ shown in Figure~\ref{fig-1x1} to the left. Note that the paths $abdc$, $acdb$, $bacd$ are distinct although they contain the same edges. Their arrows are $\tfrac{-m^2\varepsilon^2}{\sqrt{1-\delta^2}(1+m^2\varepsilon^2)}$, $\tfrac{-\delta}{\sqrt{1-\delta^2}(1+m^2\varepsilon^2)}$, $\tfrac{\delta^2}{(1-\delta^2)\sqrt{1+m^2\varepsilon^2}}$ respectively. Those paths are not the same as the cycles $acdba$, $bacdb$. The two cycles determine the same loop with the arrow $\tfrac{-\delta^2}{(1-\delta^2)(1+m^2\varepsilon^2)}$. All the $9$ loop configurations and their arrows are listed in Table~\ref{tab-1x1}, where $n:={\sqrt{1-\delta^2}\sqrt{1+m^2\varepsilon^2}}$. We obtain \begin{align*} {A}(a\to b) &= \frac{-im\varepsilon\sqrt{1-\delta^2}-\delta\sqrt{1+m^2\varepsilon^2}} {2(\sqrt{1+m^2\varepsilon^2}\sqrt{1-\delta^2}-1-im\varepsilon\delta)}, & {A}(a\to c) &= \frac{\sqrt{1-\delta^2}-\sqrt{1+m^2\varepsilon^2}} {2(\sqrt{1+m^2\varepsilon^2}\sqrt{1-\delta^2}-1-im\varepsilon\delta)}, \\ {A}(a\to a) &= \frac{1}{2}, & {A}(a\to d) &= \frac{-im\varepsilon-\delta} {2(\sqrt{1+m^2\varepsilon^2}\sqrt{1-\delta^2}-1-im\varepsilon\delta)}. \end{align*} \end{example} \begin{theorem}[Equivalence of definitions]\label{p-real-imaginary} Both the finite- and infinite-lattice propagators are well-defined, that is, the denominator of~\eqref{eq-def-finite-lattice-propagator} is nonzero, limit~\eqref{eq-def-infinite-lattice-propagator} exists and equals $\widetilde{A}_k(x,t,m,\varepsilon)$. \end{theorem} We conclude this section with a few identities for the finite-lattice propagator. The first one is an analogue of the equality $a(\varepsilon,\varepsilon,m,\varepsilon)=1$ up to a factor of $2$ coming from~\eqref{eq-def-infinite-lattice-propagator}. \begin{proposition}[Initial value] \label{p-initial} For each edge $a$ we have $ {A}(a\to a)={1}/{2}.
$ \end{proposition} \begin{proposition}[Skew-symmetry] \label{p-symmetry} For each pair of edges $a\ne f$ we have $$ {A}(a\to f)= \begin{cases} {A}(f\to a), &\text{for $a\perp f$};\\ -{A}(f\to a), &\text{for $a\parallel f$}. \end{cases} $$ \end{proposition} Now we state the crucial property analogous to \cite[Proposition~5]{SU-22} and \cite[Definition~3.1]{Smirnov-10}. \begin{proposition}[Dirac equation/S-holomorphicity] \label{p-dirac-finite} Let $f$ be an edge starting at a lattice point $P$. Denote by $e$ and $e'$ the two edges ending at $P$ such that $e\parallel f$ and $e'\perp f$ (see Figure~\ref{fig-1x1} to the right). Then for each edge $a$ we have \begin{align*} {A}(a\to f) &= {A}(a\to e)A(ef)+{A}(a\to e')A(e'f)+\delta_{af} \\&= \begin{cases} \frac{1}{\sqrt{1-\delta^2}} ({A}(a\to e) -\delta\, {A}(a\to e'))+\delta_{af}, &\text{for $P$ even;}\\ \frac{1}{\sqrt{1+m^2\varepsilon^2}} ({A}(a\to e) -im\varepsilon\, {A}(a\to e'))+\delta_{af}, &\text{for $P$ odd.} \end{cases} \end{align*} \end{proposition} Let us state a simple corollary of the previous three identities. \begin{proposition}[Adjoint Dirac equation] \label{p-dirac-adjoint} Under the assumptions of Proposition~\ref{p-dirac-finite}, \begin{equation*} {A}(f\to a)= A(ef)\left({A}(e\to a)-\delta_{ea}\right) -A(e'f)\left({A}(e'\to a)-\delta_{e'a}\right). \end{equation*} \end{proposition} The following proposition is a simple generalization of the Dirac equation. \begin{proposition}[Huygens' principle] \label{cor-dirac} For each $n\le 2T$ and each pair of edges $a, f$ we have $$ A(a\to f)=\sum_{e\dots f\text{ of length }n}A(a\to e)A(e\dots f)+ \sum_{a\dots f\text{ of length }<n}A(a\dots f), $$ where the first sum is over all the paths $e\dots f$ of length exactly $n$ ending with $f$ and the second sum is over all the paths $a\dots f$ of length less than $n$ starting with $a$ and ending with $f$.
\end{proposition} Note that the \emph{finite}-lattice propagator does \emph{not} exhibit charge conservation (see Example~\ref{ex-2x2}). \addcontentsline{toc}{myshrink}{} \section{Generalizations to several particles} \label{sec-variations} In this section we upgrade the model to describe the motion of \emph{several} non-interacting electrons and positrons, then introduce interaction and establish a perturbation expansion. \subsection{Identical particles in Feynman checkers} \label{sec-identical1} As a warm-up, we upgrade Feynman's original model (see Definition~\ref{def-mass}) to two identical electrons. This upgrade takes into account the \emph{chirality} of electrons, which can be either \emph{right} or \emph{left} \cite[\S4]{SU-22}, but does not yet incorporate creation and annihilation of electron-positron pairs. \begin{definition}\label{def-identical} Under the notation of Definition~\ref{def-mass}, take $m=\varepsilon=1$. Fix integer points $A=(0,0)$, $A'=(x_0,0)$, $F=(x,t)$, $F'=(x',t)$ and their diagonal neighbors $B=(1,1)$, $B'=(x_0+1,1)$, $E=(x-1,t-1)$, $E'=(x'-1,t-1)$, where $x_0\ne 0$ and $x'\ge x$. Denote $$ {a}(AB,A'B'\to EF,E'F'):= \sum_{\substack{s=AB\dots EF\\s'=A'B'\dots E'F'}} {a}(s){a}(s')- \sum_{\substack{s=AB\dots E'F'\\s'=A'B'\dots EF}} {a}(s){a}(s'), $$ where the first sum is over all pairs consisting of a checker path $s$ starting with the move $AB$ and ending with the move $EF$, and a path $s'$ starting with the move $A'B'$ and ending with the move $E'F'$, whereas in the second sum the final moves are interchanged. The length square $P(AB,A'B'\to EF,E'F'):=\left|{a}(AB,A'B'\to EF,E'F')\right|^2$ is called the \emph{probability to find right electrons at $F$ and $F'$, if they are emitted from $A$ and~$A'$}. (In particular, $P(AB,A'B'\to EF,EF)=0$, i.e., two right electrons cannot be found at the same point; this is called the \emph{exclusion principle}.) Define $P(AB,A'B'\to EF,E'F')$ similarly for $E=(x\pm 1,t-1)$, $E'=(x'\pm 1,t-1)$.
Here we require $x'\ge x$, if both signs in $\pm$ are the same, and allow arbitrary $x'$ and $x$, otherwise. (The latter requirement is introduced not to count twice the contribution of physically indistinguishable final states $(EF,E'F')$ and $(E'F',EF)$.) \end{definition} \begin{proposition}[Locality] \label{p-independence-1} For $x_0\ge 2t$, $x'>x$, $E=(x-1,t-1)$, and $E'=(x'-1,t-1)$ we have $P(AB,A'B'\to EF,E'F')=|a_2(x,t,1,1)|^2|a_2(x'-x_0,t,1,1)|^2$. \end{proposition} This means that two sufficiently distant electrons move independently. \begin{proposition}[Probability conservation] \label{p-transfer-matrix} For each $t>0$ and $x_0\ne 0$ we have the identity $\sum_{E,E',F,F'} P(AB,A'B'\to EF,E'F')=1$, where the sum is over all quadruples $F=(x,t)$, $F'=(x',t)$, $E=(x\pm 1,t-1)$, $E'=(x'\pm 1,t-1)$, such that $x'\ge x$, if the latter two signs in $\pm$ are the same. \end{proposition} \subsection{Identical particles in Feynman anticheckers} \label{sec-identical} Now we generalize the new model (see Definition~\ref{def-anti-combi}) to several non-interacting electrons and positrons which can be created and annihilated during motion. \begin{definition} \label{def-multipoint} A \emph{loop configuration $S$ with sources $a_1,\dots,a_n$ and sinks $f_1,\dots,f_n$} is an edge-disjoint set of any number of loops and exactly $n$ paths starting with the edges $a_1,\dots,a_n$ and ending with the edges $f_{\sigma(1)},\dots,f_{\sigma(n)}$ respectively, where $\sigma$ is an arbitrary permutation of $\{1,\dots,n\}$. The \emph{arrow} $A(S,m,\varepsilon,\delta)$ of $S$ is the permutation sign $\mathrm{sgn}(\sigma)$ times the product of arrows of all loops and paths in the configuration. Define ${A}(a_1,\dots,a_n\to f_1,\dots,f_n)$ analogously to $A(a\to f)$, only the sum in the numerator of~\eqref{eq-def-finite-lattice-propagator} is over all loop configurations $S$ with the sources $a_1,\dots,a_n$ and the sinks $f_1,\dots,f_n$.
(This expression vanishes, if some two sources or some two sinks coincide.) Given edges $a,e,f$, define ${A}(a\to f \text{ pass } e)$ analogously to ${A}(a\to f)$, only the sum in the numerator of~\eqref{eq-def-finite-lattice-propagator} is now over loop configurations with the source $a$ and the sink $f$ containing the edge $e$ (the sum in the denominator remains the same). \end{definition} \begin{physicalinterpretation*} Assume that the edges $a_1,\dots,a_k,f_1,\dots,f_l$ start on a horizontal line $t=t_1$ and the remaining sources and sinks start on a horizontal line $t=t_2$, where $t_2>t_1$. Then the model describes a transition from a state with $k$ electrons and $l$ positrons at the time $t_1$ to $n-l$ electrons and $n-k$ positrons at the time $t_2$. Beware that analogously to \cite[\S9.2]{SU-22} one cannot speak of any transition probabilities. \end{physicalinterpretation*} \begin{proposition}[Determinant formula] \label{p-det} For all edges $a_1,\dots,a_n,f_1,\dots,f_n$ we have $$ {A}(a_1,\dots,a_n\to f_1,\dots,f_n) =\sum_{\sigma} \mathrm{sgn}(\sigma) A(a_1\to f_{\sigma(1)})\dots A(a_n\to f_{\sigma(n)}) =\det\left(A(a_k\to f_l)\right)_{k,l=1}^n, $$ where the sum is over all permutations $\sigma$ of $\{1,\dots,n\}$. \end{proposition} \begin{proposition}[Pass-or-loop formula] \label{cor-pass} For all edges $a,e,f$ we have $$ {A}(a\to f \text{ pass } e)= {A}(a\to f) {A}(e\to e)+ {A}(a\to e){A}(e\to f) =\frac{1}{2} {A}(a\to f) + {A}(a\to e){A}(e\to f). $$ \end{proposition} \subsection{Fermi theory and Feynman diagrams} \label{sec-fermi} Now we couple two copies of the model in a way resembling Fermi theory, which describes one type of weak interaction between electrons and muons, heavier analogues of electrons. \begin{definition} \label{def-fermi} Fix $g, m_\mathrm{e}, m_\mu>0$ called \emph{coupling constant, electron}, and \emph{muon mass} respectively.
Denote by $\mathrm{commonedges}(S_\mathrm{e},S_\mu)$ the number of common edges in two loop configurations $S_\mathrm{e},S_\mu$ (possibly with sources and sinks). The \emph{arrow from edges $a_\mathrm{e}$ and $a_\mu$ to edges $f_\mathrm{e}$ and $f_\mu$} is \begin{multline}\label{eq-def-fermi} {A}(a_\mathrm{e},a_\mu\to f_\mathrm{e},f_\mu) :=\\:=\frac{ \sum\limits_{\substack{\text{loop configurations $S_\mathrm{e}$}\\ \text{with the source $a_\mathrm{e}$}\\ \text{and the sink $f_\mathrm{e}$}}}\quad \sum\limits_{\substack{\text{loop configurations $S_\mu$}\\ \text{with the source $a_\mu$}\\ \text{and the sink $f_\mu$}}} A(S_\mathrm{e},m_\mathrm{e},\varepsilon,\delta)A(S_\mu,m_\mu,\varepsilon,\delta) (1+g)^{\mathrm{commonedges}(S_\mathrm{e},S_\mu)}} {\sum\limits_{\substack{\text{loop configurations $S_\mathrm{e}$}}}\quad \sum\limits_{\substack{\text{loop configurations $S_\mu$}}} A(S_\mathrm{e},m_\mathrm{e},\varepsilon,\delta)A(S_\mu,m_\mu,\varepsilon,\delta) (1+g)^{\mathrm{commonedges}(S_\mathrm{e},S_\mu)}}. \end{multline} \end{definition} Informally, the powers of $(1+g)$ are explained as follows: an interaction may or may not occur on each common edge of $S_\mathrm{e}$ and $S_\mu$. Each occurrence gives a factor of $g$. \begin{proposition}[Perturbation expansion] \label{p-perturbation} For $g$ sufficiently small in terms of $m_\mathrm{e},m_\mu,\varepsilon,\delta,T$ and for all edges $a_\mathrm{e},a_\mu,f_\mathrm{e},f_\mu$ the arrow from $a_\mathrm{e}$ and $a_\mu$ to $f_\mathrm{e}$ and $f_\mu$ is well-defined, that is, the denominator of~\eqref{eq-def-fermi} is nonzero. We have \begin{multline*} {A}(a_\mathrm{e},a_\mu\to f_\mathrm{e},f_\mu) ={A}(a_\mathrm{e}\overset{\mathrm{e}}{\to} f_\mathrm{e}) {A}(a_\mu\overset{\mu}{\to} f_\mu) +g\sum_{e} \left( {A}(a_\mathrm{e}\overset{\mathrm{e}}{\to} e) {A}(e\overset{\mathrm{e}}{\to} f_\mathrm{e}) {A}(a_\mu\overset{\mu}{\to}e) {A}(e\overset{\mu}{\to} f_\mu) \right.+\\+\left.
{A}(a_\mathrm{e}\overset{\mathrm{e}}{\to} f_\mathrm{e}) {A}(e\overset{\mathrm{e}}{\to} e) {A}(a_\mu\overset{\mu}{\to} e) {A}(e\overset{\mu}{\to} f_\mu) + {A}(a_\mathrm{e}\overset{\mathrm{e}}{\to} e) {A}(e\overset{\mathrm{e}}{\to} f_\mathrm{e}) {A}(a_\mu\overset{\mu}{\to} f_\mu) {A}(e\overset{\mu}{\to} e) \right) +o(g), \end{multline*} where the sum is over all edges $e$ and we denote $A(a\overset{\mathrm{e}}{\to} f):= A(a\to f;m_\mathrm{e},\varepsilon,\delta,T)$ and $A(a\overset{\mu}{\to} f):= A(a\to f;m_\mu,\varepsilon,\delta,T)$. \end{proposition} \begin{figure}[htbp] \centering \small \begin{tabular}{|c|c|} \hline ${A}(a_\mathrm{e}\overset{\mathrm{e}}{\to} f_\mathrm{e}) {A}(a_\mu\overset{\mu}{\to} f_\mu)$ & $g\,{A}(a_\mathrm{e}\overset{\mathrm{e}}{\to} e) {A}(e\overset{\mathrm{e}}{\to} f_\mathrm{e}) {A}(a_\mu\overset{\mu}{\to}e) {A}(e\overset{\mu}{\to} f_\mu)$ \\[3pt] \includegraphics[height=2.2cm]{feynman0.png} & \includegraphics[height=2.2cm]{feynman1.png} \\ \includegraphics[height=2.2cm]{configuration0.png} & \includegraphics[height=2.2cm]{configuration1.png} \\ \hline $g\,{A}(a_\mathrm{e}\overset{\mathrm{e}}{\to} f_\mathrm{e}) {A}(e\overset{\mathrm{e}}{\to} e) {A}(a_\mu\overset{\mu}{\to} e) {A}(e\overset{\mu}{\to} f_\mu)$ & $g\,{A}(a_\mathrm{e}\overset{\mathrm{e}}{\to} e) {A}(e\overset{\mathrm{e}}{\to} f_\mathrm{e}) {A}(a_\mu\overset{\mu}{\to} f_\mu) {A}(e\overset{\mu}{\to} e)$\\[3pt] \includegraphics[height=2.2cm]{feynman2.png} & \includegraphics[height=2.2cm]{feynman3.png} \\ \includegraphics[height=2.2cm]{configuration2.png} & \includegraphics[height=2.2cm]{configuration3.png} \\ \hline \end{tabular} \caption{Terms in perturbation expansion, their Feynman diagrams, and collections of loop configurations contributing to the terms (from top to bottom in each cell); see Proposition~\ref{p-perturbation}.} \label{fig-feynman} \end{figure} The perturbation expansion can be extended to any order in $g$.
The terms are depicted as so-called \emph{Feynman diagrams} as follows (see Figure~\ref{fig-feynman}). For each edge in the left side, draw a white vertex. For each edge that is a summation variable in the right side, draw a black vertex. For each factor of the form $A(a\overset{\mathrm{e}}{\to} f)$ or $A(a\overset{\mu}{\to} f)$, draw an arrow from the vertex drawn for $a$ to the vertex drawn for $f$ (a loop, if $a=f$) labeled by the letter ``$\mathrm{e}$'' or ``$\mu$'' respectively. We conjecture that those Feynman diagrams have the usual properties: each black vertex is the starting point of exactly two arrows labeled by ``$\mathrm{e}$'' and ``$\mu$'', is the endpoint of exactly two arrows also labeled by ``$\mathrm{e}$'' and ``$\mu$'', and is joined with a white vertex by a sequence of arrows. Let us give a few comments for specialists. As $\varepsilon\to 0$, the contribution of Feynman diagrams involving loops blows up because $A(e\to e)=1/2$ by Proposition~\ref{p-initial}, whereas the other arrows are of order $\varepsilon$ by Theorem~\ref{th-continuum-limit}. This suggests that the model has no \emph{naive} continuum limit. As usual, the \emph{true} continuum limit requires \emph{renormalization}, that is, choosing a lattice-dependent coupling $g(\varepsilon)$ in a wise way. The Fermi model in $1$ space and $1$ time dimension is known to be \emph{renormalizable} \cite[\S III.3, top of p.~180]{Zee-10}; thus one expects that the true continuum limit exists. Proving the existence mathematically is as hard as for any other model with interaction. \subsection{Open problems} The new model is only a starting point of the missing Minkowskian lattice quantum field theory. Here we pick out a few informal open problems from a variety of research directions. We start with the ones relying on Definition~\ref{def-anti-alg} only. As a warm-up, we suggest the following.
\begin{pr} (Cf.~\cite[Theorem~1]{Novikov-20}, \cite{Kuyanov-Slizkov-22}) Does the expected charge~\eqref{eq-q} vanish anywhere? \end{pr} The most pressing problem is to find a large-time asymptotic formula for $|x|>|t|/\sqrt{1+m^2\varepsilon^2}$, and especially for $|x|>|t|$. \begin{pr} (Cf.~Theorem~\ref{th-anti-Airy}) Prove that for each $m,\varepsilon>0$ and each $(x,t)\in\varepsilon\mathbb{Z}^2$ satisfying $1/\sqrt{1+m^2\varepsilon^2}<|x/t|<1$ we have \begin{align*} \tilde{A}_1\left(x,t,m,\varepsilon\right) &= \frac{i^{\frac{|x|-|t|+\varepsilon}{\varepsilon}}\varepsilon\sqrt{m}\,\theta(x,t,m,\varepsilon)^{1/2}} {\pi((1+m^2\varepsilon^2)x^2-t^2)^{1/4}} K_{1/3}\left(\theta(x,t,m,\varepsilon)\right) \left(1+O_{m,\varepsilon}\left(\frac{1}{|t|}\right)\right),\\ \tilde{A}_2\left(x,t,m,\varepsilon\right) &= \sqrt{\frac{t + x}{t - x}} \frac{i^{\frac{|x|-|t|}{\varepsilon}}\varepsilon\sqrt{m}\,\theta(x,t,m,\varepsilon)^{1/2}} {\pi((1+m^2\varepsilon^2)x^2-t^2)^{1/4}} K_{1/3}\left(\theta(x,t,m,\varepsilon)\right) \left(1+O_{m,\varepsilon}\left(\frac{1}{|t|}\right)\right),\\ \intertext{where} \theta(x,t,m,\varepsilon)&:= -\frac{t}{\varepsilon}\,\mathrm{artanh}\,\frac{\sqrt{(1+m^2\varepsilon^2)x^2-t^2}}{m \varepsilon t} +\frac{x}{\varepsilon}\,\mathrm{artanh}\,\frac{\sqrt{(1+m^2\varepsilon^2)x^2-t^2}}{m \varepsilon x}. \end{align*} \end{pr} The limit of small lattice step also deserves attention. Corollary~\ref{cor-anti-uniform} assumes $|x|\ne |t|$, hence misses the main contribution to the charge. Now we ask for the weak limit detecting the peak.
\begin{pr} \label{p-weak} (Cf.~Corollary~\ref{cor-anti-uniform}) Find the distributional limits $\lim_{\varepsilon\searrow 0} \widetilde{A}\left(x_\varepsilon,t_\varepsilon,{m},{\varepsilon}\right)/{4\varepsilon}$ and $\lim_{\varepsilon\searrow 0} {Q}\left(x_\varepsilon,t_\varepsilon,{m},{\varepsilon}\right)/{8\varepsilon^2}$ on the whole $\mathbb{R}^2$. Is the former limit equal to propagator~\eqref{eq-feynman-propagator-fourier}, including the generalized function supported on the lines $t=\pm x$? \end{pr} The infinite-lattice propagator seems to be the unique one satisfying the variety of properties from \S\S\ref{ssec-asymptotic}--\ref{ssec-identities}. But there could still be different \emph{finite}-lattice propagators with the same limit. \begin{pr} (Cf.~Definition~\ref{def-anti-combi}, Example~\ref{ex-2x2}) Find a combinatorial construction of a finite-lattice propagator having the following properties: \begin{description} \item[consistency:] it has the same limit~\eqref{eq-def-infinite-lattice-propagator}; \item[charge conservation:] it satisfies an analogue of Proposition~\ref{p-mass2} before passing to the limit; \item[other boundary conditions:] it does not require time-periodic boundary conditions. \end{description} \end{pr} The new model describes a free massive spin-$1/2$ quantum field but can be easily adapted to other spins via known relations between propagators for different spins. For instance, \emph{spin-$0$ and spin-$1$ massive infinite-lattice propagators} are defined to be $\frac{\sqrt{1+m^2\varepsilon^2}}{2m\varepsilon}\widetilde{A}_1\left(x,t,{m},{\varepsilon}\right)$ and $\frac{\sqrt{1+m^2\varepsilon^2}}{2m\varepsilon}\left( \begin{smallmatrix} \widetilde{A}_1\left(x,t,{m},{\varepsilon}\right) & 0 \\ 0 & \widetilde{A}_1\left(x,t,{m},{\varepsilon}\right) \end{smallmatrix} \right)$ respectively. Consistency with continuum theory is automatic by Corollary~\ref{cor-anti-uniform} and Proposition~\ref{p-Klein-Gordon-mass}.
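Numerical experiments on such problems need $\widetilde{A}$ only through its Fourier representation. As a minimal illustration (not part of any proof), the following numpy sketch evaluates the regularized double integrals of Proposition~\ref{cor-double-fourier-anti} from \S\ref{sec-proofs} below at a fixed $\delta>0$ on a periodic trapezoidal grid, and checks the symmetries of Corollaries~\ref{cor-alternation}--\ref{cor-skew-symmetry}; those identities already hold for each fixed $\delta>0$, because they come from changes of variables and integration by parts performed before the limit $\delta\searrow 0$. All parameter values below are illustrative.

```python
import numpy as np

# Illustrative parameters (not from the paper): m = eps = 1, a fixed
# regularization delta > 0, and an N x N periodic trapezoidal grid,
# which is spectrally accurate for this smooth periodic integrand.
m, eps, delta, N = 1.0, 1.0, 0.3, 256
h = 2 * np.pi / (eps * N)
p = -np.pi / eps + h * np.arange(N)
P, W = np.meshgrid(p, p, indexing="ij")           # momentum and frequency grids
D = np.sqrt(1 + (m * eps) ** 2) * np.cos(W * eps) - np.cos(P * eps) - 1j * delta

def A1(x, t):
    # regularized double integral for A~_1 (Proposition cor-double-fourier-anti)
    return m * eps**3 / (4 * np.pi**2) * h**2 * np.sum(np.exp(1j * (P * x - W * t)) / D)

def A2(x, t):
    # same for A~_2, at a lattice point (x,t) != (0,0)
    num = np.sqrt(1 + (m * eps) ** 2) * np.sin(W * eps) + np.sin(P * eps)
    return -1j * eps**2 / (4 * np.pi**2) * h**2 * np.sum(num * np.exp(1j * (P * x - W * t)) / D)

x, t = 1.0, 2.0
assert abs(A1(x, t) - A1(x, -t)) < 1e-8           # Corollary cor-symmetry
assert abs(A1(x, t) - A1(-x, -t)) < 1e-8
assert abs(A2(x, t) + A2(-x, -t)) < 1e-8
assert abs(A1(x, t).imag) < 1e-8                  # Corollary cor-alternation: (x+t)/eps + 1 even
assert abs((t - x) * A2(x, t) - (t + x) * A2(-x, t)) < 1e-8   # Corollary cor-skew-symmetry
```

For Problem~\ref{p-weak} one would, in addition, let $\varepsilon$ and $\delta$ tend to $0$ jointly, which requires more care than this sketch provides.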
However, it is natural to modify the combinatorial definition. \begin{pr} (Cf.~\S\ref{ssec-proofs-fourier}) Find a combinatorial construction of $\widetilde{A}_1\left(x,t,{m},{\varepsilon}\right)$ starting from the Klein--Gordon equation (Proposition~\ref{p-Klein-Gordon-mass}) instead of the Dirac one, to make the construction symmetric with respect to the time reversal $t\mapsto -t$. \end{pr} \begin{pr} (Cf.~Example~\ref{p-massless}) Find a combinatorial construction of \emph{massless} spin-$0$, spin-$1/2$, and spin-$1$ infinite-lattice propagators (obtained from the massive ones in the limit $m\to 0$). \end{pr} \begin{pr} Modify the combinatorial definition of the model with several identical particles (Definition~\ref{def-multipoint}) for spin $0$ so that the determinant in Proposition~\ref{p-det} is replaced by the permanent. \end{pr} The next challenge is to introduce interaction and to prove that the continuum limit of the resulting model has natural physical properties (at least, satisfies the Wightman axioms recalled in Appendix~\ref{sec-axioms}). Particular goals could be quantum electrodynamics and the Fermi model (see \S\ref{sec-fermi}). \addcontentsline{toc}{myshrink}{} \section{Proofs}\label{sec-proofs} Let us present a chart showing the dependence among the above results and the subsections below: $$ \xymatrix{ \boxed{\text{\ref{ssec-proofs-fourier-series}. (Theorem~\ref{th-well-defined}, Proposition~\ref{th-equivalence})}} \ar[d]\ar[r]\ar[dr] & \boxed{\text{\ref{ssec-proofs-continuum}. (Theorem~\ref{th-continuum-limit}, Corollary~\ref{cor-anti-uniform})}} \\ \boxed{\text{\ref{ssec-proofs-identities}. (Propositions~\ref{p-basis}--\ref{p-triple})}} & \boxed{\text{\ref{ssec-proofs-fourier}. (Theorem~\ref{p-real-imaginary}, Propositions~\ref{p-initial}--\ref{cor-dirac})}} \ar[dl] \\ \boxed{\text{\ref{ssec-proofs-variations}. (Propositions~\ref{p-independence-1}--\ref{p-perturbation})}} & \boxed{\text{Appendix~\ref{ssec-proofs-alternative}.
(Propositions~\ref{p-initial}--\ref{cor-dirac})}} } $$ } Section~\ref{ssec-proofs-variations} relies only on Theorem~\ref{p-real-imaginary}, Proposition~\ref{p-initial}, and~Lemma~\ref{l-loop-expansion} among the results proved in \S\S\ref{ssec-proofs-fourier-series} and~\ref{ssec-proofs-fourier}. Appendix~\ref{ssec-proofs-alternative} contains alternative proofs and is independent of \S\ref{sec-proofs}. Throughout this section we use notation~\eqref{eq-omega}. \subsection{Fourier integral (Theorem~\ref{th-well-defined}, Proposition~\ref{th-equivalence})} \label{ssec-proofs-fourier-series} In this section we compute the functions in Definition~\ref{def-anti-alg} by the Fourier method (see Proposition~\ref{l-double-fourier-finite}). Then we obtain Proposition~\ref{th-equivalence} by contour integration (this step has already been performed in \cite{SU-22}). Finally, we discuss direct consequences (Corollaries~\ref{cor-scaling}--\ref{cor-halve-interval} and Theorem~\ref{th-well-defined}). Although the method is analogous to the computation of the continuum propagator, it is the new idea of putting an imaginary mass on the dual lattice that makes it successful (see Remark~\ref{rem-delta-zero}). \begin{proposition}[Full space-time Fourier transform] \label{l-double-fourier-finite} There exists a unique pair of functions satisfying axioms 1--3 in Definition~\ref{def-anti-alg}.
Under the notation $x^*:=x+\frac{\varepsilon}{2}$, $t^*:=t+\frac{\varepsilon}{2}$, it is given by \begin{equation*} \begin{aligned} A_1(x,t)&= \begin{cases} \frac{\varepsilon^2}{4\pi^2} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \frac{m\varepsilon-i\delta e^{ip\varepsilon}} {\sqrt{1-\delta^2}\sqrt{1+m^2\varepsilon^2}\cos(\omega\varepsilon) -\cos(p\varepsilon)-im\varepsilon\delta} e^{i p x-i\omega t}\,d\omega dp, &\text{for $2x/\varepsilon$ even},\\ \frac{\varepsilon^2}{4\pi^2} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \frac{m\varepsilon\sqrt{1-\delta^2} e^{i\omega\varepsilon}-i\delta\sqrt{1+m^2\varepsilon^2}} {\sqrt{1-\delta^2}\sqrt{1+m^2\varepsilon^2}\cos(\omega\varepsilon) -\cos(p\varepsilon)-im\varepsilon\delta}e^{i p x^*-i\omega t^*}\,d\omega dp, &\text{for $2x/\varepsilon$ odd}; \end{cases} \\ A_2(x,t)&= \begin{cases} \frac{\varepsilon^2}{4\pi^2} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \frac{\sqrt{1-\delta^2}\sqrt{1+m^2\varepsilon^2}e^{-i\omega\varepsilon}-e^{ip\varepsilon}-im\varepsilon\delta} {\sqrt{1-\delta^2}\sqrt{1+m^2\varepsilon^2} \cos(\omega\varepsilon) -\cos(p\varepsilon)-im\varepsilon\delta} e^{i p x-i\omega t} \,d\omega dp, &\text{for $2x/\varepsilon$ even},\\ \frac{\varepsilon^2}{4\pi^2} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \frac{\sqrt{1+m^2\varepsilon^2}e^{-ip\varepsilon} -\sqrt{1-\delta^2}e^{i\omega\varepsilon}} {\sqrt{1-\delta^2}\sqrt{1+m^2\varepsilon^2}\cos(\omega\varepsilon) -\cos(p\varepsilon)-im\varepsilon\delta} e^{i p x^*-i\omega t^*}\,d\omega dp, &\text{for $2x/\varepsilon$ odd}. \end{cases} \end{aligned} \end{equation*} \end{proposition} \begin{proof} Substituting axiom~2 into axiom~1 for each $(x,t)\in\varepsilon\mathbb{Z}^2$, we get \begin{equation}\label{eq-substituted} \begin{aligned} A_1(x,t)&= \frac{ A_1(x+\varepsilon,t-\varepsilon)
+im\varepsilon\delta A_1(x,t-\varepsilon) -i\delta A_2(x+\varepsilon,t-\varepsilon) +m\varepsilon A_2(x,t-\varepsilon) }{\sqrt{1+m^2\varepsilon^2}\sqrt{1-\delta^2}},\\ A_2(x,t)&= \frac{ i\delta A_1(x-\varepsilon,t-\varepsilon) -m\varepsilon A_1(x,t-\varepsilon) +A_2(x-\varepsilon,t-\varepsilon) +im\varepsilon\delta A_2(x,t-\varepsilon) }{\sqrt{1+m^2\varepsilon^2}\sqrt{1-\delta^2}}+2\delta_{x0}\delta_{t0}. \end{aligned} \end{equation} It suffices to solve system~\eqref{eq-substituted} on $\varepsilon\mathbb{Z}^2$ (that is, for $2x/\varepsilon$ even) under the restriction given by axiom~3; then the values for $2x/\varepsilon$ odd are uniquely determined (and computed) by axiom~2. We use the Fourier method. To a function $A_k(x,t)$ satisfying axiom~3 we assign the Fourier series $$ \widehat{A}_k(p,\omega):\overset{L^2}{=} \sum_{(x,t)\in\varepsilon\mathbb{Z}^2} A_k(x,t)e^{-ipx+i\omega t}\in L^2([-\pi/\varepsilon,\pi/\varepsilon]^2). $$ Here the summands are understood as functions on $[-\pi/\varepsilon,\pi/\varepsilon]^2$ and mean-square convergence of the series is assumed. By the Plancherel theorem, this assignment is a bijection between the space of functions satisfying axiom~3 and the space $L^2([-\pi/\varepsilon,\pi/\varepsilon]^2)$ of square-integrable functions $[-\pi/\varepsilon,\pi/\varepsilon]^2\to\mathbb{C}$ up to a change on a set of measure zero. Under this bijection, the shifts $x\mapsto x\pm\varepsilon$ and $t\mapsto t-\varepsilon$ are taken to multiplication by $e^{\pm ip\varepsilon}$ and $e^{i\omega\varepsilon}$ respectively, and $\delta_{x0}\delta_{t0}$ is taken to $1$.
Thus~\eqref{eq-substituted} is transformed to the following equality almost everywhere: \begin{equation}\label{eq-transformed-equation} \begin{pmatrix} \widehat A_1(p,\omega) \\ \widehat A_2(p,\omega) \end{pmatrix} = \frac{e^{i\omega\varepsilon}} {\sqrt{1+m^2\varepsilon^2}\sqrt{1-\delta^2}} \begin{pmatrix} e^{ip\varepsilon}+im\varepsilon\delta & -i\delta e^{ip\varepsilon}+m\varepsilon \\ i\delta e^{-ip\varepsilon}-m\varepsilon & e^{-ip\varepsilon}+im\varepsilon\delta \end{pmatrix} \begin{pmatrix} \widehat A_1(p,\omega) \\ \widehat A_2(p,\omega) \end{pmatrix} + \begin{pmatrix} 0 \\ 2 \end{pmatrix}. \end{equation} The resulting $2\times 2$ linear system has the unique solution (this is checked in \cite[\S1]{SU-3}) \begin{align*} \widehat A_1(p,\omega) &=\frac{m\varepsilon - i\delta e^{ip\varepsilon}} {\sqrt{1-\delta^2}\sqrt{1+m^2\varepsilon^2}\cos(\omega\varepsilon) - \cos(p\varepsilon) - im\varepsilon\delta},\\ \widehat A_2(p,\omega) &=\frac{ \sqrt{1-\delta^2}\sqrt{1+m^2\varepsilon^2}e^{-i\omega\varepsilon} - e^{ip\varepsilon}-im\varepsilon\delta} {\sqrt{1-\delta^2}\sqrt{1+m^2\varepsilon^2}\cos(\omega\varepsilon) - \cos(p\varepsilon) - im\varepsilon\delta}. \end{align*} It belongs to $L^2([-\pi/\varepsilon,\pi/\varepsilon]^2)$ because $m,\varepsilon,\delta>0$ and the denominator vanishes nowhere. Now the formula for the Fourier coefficients gives the desired expressions in the proposition. \end{proof} \begin{remark}\label{rem-delta-zero} This argument shows that for $\delta=0$ axioms~1--3 are inconsistent even if $m$ has an imaginary part, because $\widehat A_k(p,\omega)$ blows up at $(\pi/2\varepsilon,\pi/2\varepsilon)$. Thus Step~2 in \S\ref{ssec-outline} is necessary. \end{remark} Passing to the limit $\delta\searrow 0$ in Proposition~\ref{l-double-fourier-finite} and using $\frac{\varepsilon}{2\pi}\int_{-\pi/\varepsilon}^{\pi/\varepsilon}e^{ipx}\,dp=\delta_{x0}$ for $x\in\varepsilon\mathbb{Z}$, we get the following result.
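As an aside, that the stated pair indeed solves system~\eqref{eq-transformed-equation} can also be checked by machine (this is independent of the check in \cite[\S1]{SU-3}): with $E:=e^{ip\varepsilon}$ and $F:=e^{i\omega\varepsilon}$ treated as free symbols, the residual of the system at $(\widehat A_1,\widehat A_2)$ is a rational expression, which the following sympy sketch evaluates at generic sample values (all sample values are arbitrary).

```python
import sympy as sp

m, eps, delta = sp.symbols("m varepsilon delta", positive=True)
E, F = sp.symbols("E F")   # stand-ins for exp(I*p*eps) and exp(I*omega*eps)
r = sp.sqrt(1 - delta**2) * sp.sqrt(1 + m**2 * eps**2)
cos_p, cos_w = (E + 1 / E) / 2, (F + 1 / F) / 2
den = r * cos_w - cos_p - sp.I * m * eps * delta

# the claimed Fourier transforms (hat A_1, hat A_2)
A = sp.Matrix([(m * eps - sp.I * delta * E) / den,
               (r / F - E - sp.I * m * eps * delta) / den])
# the matrix of system (eq-transformed-equation)
M = sp.Matrix([[E + sp.I * m * eps * delta, -sp.I * delta * E + m * eps],
               [sp.I * delta / E - m * eps, 1 / E + sp.I * m * eps * delta]])
residual = A - (F / r) * M * A - sp.Matrix([0, 2])

# evaluate at generic sample values; the residual is rational in E and F,
# so vanishing to high precision here is strong evidence of the identity
vals = {m: sp.Rational(7, 10), eps: sp.Rational(9, 10), delta: sp.Rational(3, 10),
        E: sp.exp(sp.I * sp.Rational(2, 5)), F: sp.exp(sp.I * sp.Rational(1, 3))}
assert all(abs(complex(v)) < 1e-20 for v in residual.subs(vals).evalf(30))
```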
\begin{proposition}[Full space-time Fourier transform] \label{cor-double-fourier-anti} For each $(x,t)\in \varepsilon\mathbb{Z}^2$ we have \begin{equation*} \begin{aligned} \tilde A_1(x,t)&= \lim_{\delta\searrow 0} \frac{m\varepsilon^3}{4\pi^2} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \frac{ e^{i p x-i\omega t}\,d\omega dp} {\sqrt{1+m^2\varepsilon^2}\cos(\omega\varepsilon) -\cos(p\varepsilon)-i\delta},\\ \hspace{-0.5cm}\tilde A_2(x,t)&= \lim_{\delta\searrow 0} \frac{-i\varepsilon^2}{4\pi^2} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \frac{\sqrt{1+m^2\varepsilon^2}\sin(\omega\varepsilon)+\sin(p\varepsilon)} {\sqrt{1+m^2\varepsilon^2}\cos(\omega\varepsilon) -\cos(p\varepsilon)-i\delta} e^{i p x-i\omega t} \,d\omega dp+\delta_{x0}\delta_{t0}. \end{aligned} \end{equation*} \end{proposition} \begin{proof}[Proof of Proposition~\ref{th-equivalence}] This follows from Proposition~\ref{cor-double-fourier-anti} and~\cite[Proposition~17]{SU-22}, which states that the right-hand sides of Proposition~\ref{cor-double-fourier-anti} and Proposition~\ref{th-equivalence} are equal (and in particular, the limits in Proposition~\ref{cor-double-fourier-anti} exist). \end{proof} Performing the changes of variables $(p,\omega)\mapsto (p\varepsilon,\omega\varepsilon)$, $(\pi/\varepsilon-p,\pi/\varepsilon-\omega)$, $(\pm p,-\omega)$ in the integrals from Proposition~\ref{cor-double-fourier-anti}, one gets the following three immediate corollaries. \begin{corollary}[Scaling symmetry] \label{cor-scaling} For each $k\in\{1,2\}$ and $(x,t)\in\varepsilon\mathbb{Z}^2$ we have $\widetilde{A}_k(x,t,m,\varepsilon) =\widetilde{A}_k(x/\varepsilon,t/\varepsilon,m\varepsilon,1)$. \end{corollary} \begin{corollary}[Alternation of real and imaginary values] \label{cor-alternation} Let $k\in\{1,2\}$ and $(x,t)\in\varepsilon\mathbb{Z}^2$.
If $(x+t)/\varepsilon+k$ is even (respectively, odd), then $\widetilde{A}_k(x,t)$ is real (respectively, purely imaginary). \end{corollary} \begin{corollary}[Skew symmetry] \label{cor-symmetry} For each $(x,t)\in\varepsilon\mathbb{Z}^2$, where $(x,t)\ne (0,0)$, we have $\widetilde{A}_1(x,t)=\widetilde{A}_1(x,-t)=\widetilde{A}_1(-x,-t)$ and $\widetilde{A}_2(x,t)=-\widetilde{A}_2(-x,-t)$. \end{corollary} \begin{proof}[Proof of Theorem~\ref{th-well-defined}] The existence and uniqueness of $A_k(x,t,m,\varepsilon,\delta)$ are given by Proposition~\ref{l-double-fourier-finite}. The existence of limit~\eqref{eq-def-infinite-lattice-propagator} and the required equalities follow from Proposition~\ref{th-equivalence}, Corollary~\ref{cor-symmetry}, and \cite[Proposition~12]{SU-22}, which states that the integrals from Proposition~\ref{th-equivalence} equal $a_1(x,t+\varepsilon,m,\varepsilon)$ and $a_2(x+\varepsilon,t+\varepsilon,m,\varepsilon)$ for $t\ge 0$ and appropriate parity of $(x+t)/\varepsilon$. The rest is Corollary~\ref{cor-alternation}. \end{proof} \begin{corollary}[Symmetry] \label{cor-skew-symmetry} For each $(x,t)\in\varepsilon\mathbb{Z}^2$ we have $(t-x)\widetilde{A}_2(x,t)=(t+x)\widetilde{A}_2(-x,t)$. \end{corollary} \begin{proof} Assume that $x\ne 0$. Changing the sign of the variable $p$ in Proposition~\ref{cor-double-fourier-anti} we get $$ \widetilde{A}_2(-x,t) = \lim_{\delta\searrow 0} \frac{-i\varepsilon^2}{4\pi^2} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \frac{\sqrt{1+m^2\varepsilon^2}\sin(\omega\varepsilon)-\sin(p\varepsilon)} {\sqrt{1+m^2\varepsilon^2}\cos(\omega\varepsilon) -\cos(p\varepsilon)-i\delta} e^{i p x-i\omega t} \,d\omega dp.
$$ Adding the expression for $\widetilde{A}_2(x,t)$ from Proposition~\ref{cor-double-fourier-anti} and integrating by parts twice we get \begin{multline*} \widetilde{A}_2(x,t)+\widetilde{A}_2(-x,t) = \lim_{\delta\searrow 0} \frac{-i\varepsilon^2}{2\pi^2} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \frac{\sqrt{1+m^2\varepsilon^2}\sin(\omega\varepsilon)} {\sqrt{1+m^2\varepsilon^2}\cos(\omega\varepsilon) -\cos(p\varepsilon)-i\delta}e^{i p x-i\omega t} \,d\omega dp =\\= \lim_{\delta\searrow 0} \frac{-i\varepsilon^2}{2\pi^2} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \frac{\sqrt{1+m^2\varepsilon^2}\sin(\omega\varepsilon)\sin(p\varepsilon)\varepsilon } {(\sqrt{1+m^2\varepsilon^2}\cos(\omega\varepsilon) -\cos(p\varepsilon)-i\delta)^2}\cdot\frac{e^{i p x-i\omega t}}{ix} \,d\omega dp =\\= \lim_{\delta\searrow 0} \frac{-i\varepsilon^2}{2\pi^2} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \frac{\sin(p\varepsilon) (it) e^{i p x-i\omega t} \,d\omega dp} {(\sqrt{1+m^2\varepsilon^2}\cos(\omega\varepsilon) -\cos(p\varepsilon)-i\delta)(ix)} =\frac{t}{x} \left(\widetilde{A}_2(x,t)-\widetilde{A}_2(-x,t)\right). \end{multline*} Here in the second equality, we integrate the exponential and differentiate the remaining factor with respect to $p$. In the third equality, we differentiate the exponential and integrate the remaining factor with respect to $\omega$. The resulting identity is equivalent to the required one. \end{proof} For the proof of Theorem~\ref{th-continuum-limit}, we halve the integration interval $[-\pi/\varepsilon,\pi/\varepsilon]$ in Proposition~\ref{th-equivalence}.
\begin{corollary} \label{cor-halve-interval} For each $(x,t)\in\varepsilon\mathbb{Z}^2$, where $t\ge 0$, we have \begin{align*} \widetilde{A}_1(x,t)&= \begin{cases} \mathrm{Re}\,\left(\frac{im\varepsilon^2}{\pi} \int_{-\pi/2\varepsilon}^{\pi/2\varepsilon} \frac{e^{i p x-i\omega_pt}\,dp} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}}\right), &\text{for }(x+t)/\varepsilon\text{ odd};\\ \mathrlap{i\mathrm{Im}\,\left(\frac{im\varepsilon^2}{\pi} \int_{-\pi/2\varepsilon}^{\pi/2\varepsilon} \frac{e^{i p x-i\omega_pt}\,dp} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}}\right),} \hphantom{i\mathrm{Im}\,\left(\frac{\varepsilon}{\pi}\int_{-\pi/2\varepsilon}^{\pi/2\varepsilon} \left(1+ \frac{\sin (p\varepsilon)} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}}\right) e^{ipx-i\omega_pt}\,dp \right),} &\text{for }(x+t)/\varepsilon\text{ even}; \end{cases}\\ \widetilde{A}_2(x,t)&= \begin{cases} \mathrm{Re}\,\left(\frac{\varepsilon}{\pi}\int_{-\pi/2\varepsilon}^{\pi/2\varepsilon} \left(1+ \frac{\sin (p\varepsilon)} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}}\right) e^{ipx-i\omega_pt}\,dp \right), &\text{for }(x+t)/\varepsilon\text{ even};\\ i\mathrm{Im}\,\left(\frac{\varepsilon}{\pi}\int_{-\pi/2\varepsilon}^{\pi/2\varepsilon} \left(1+ \frac{\sin (p\varepsilon)} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}}\right) e^{ipx-i\omega_pt}\,dp \right), &\text{for }(x+t)/\varepsilon\text{ odd}. \end{cases} \end{align*} \end{corollary} \begin{proof} This follows from Proposition~\ref{th-equivalence} by the change of variable $p\mapsto \pi/\varepsilon-p$.
For instance, \begin{multline*} \widetilde{A}_1(x,t) =\frac{im\varepsilon^2}{2\pi} \int_{-\pi/2\varepsilon}^{\pi/2\varepsilon} \frac{e^{i p x-i\omega_pt}\,dp} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}} + \frac{im\varepsilon^2}{2\pi} \int_{\pi/2\varepsilon}^{3\pi/2\varepsilon} \frac{e^{i p x-i\omega_pt}\,dp} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}} =\\=\frac{im\varepsilon^2}{2\pi} \int_{-\pi/2\varepsilon}^{\pi/2\varepsilon} \frac{e^{i p x-i\omega_pt}\,dp} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}} +(-1)^{(x+t)/\varepsilon} \frac{im\varepsilon^2}{2\pi} \int_{-\pi/2\varepsilon}^{\pi/2\varepsilon} \frac{e^{-i p x+i\omega_pt}\,dp} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}}, \quad\text{as required.}\\[-1.3cm] \end{multline*} \end{proof} \subsection{Asymptotic formulae (Theorem~\ref{th-continuum-limit}, Corollary~\ref{cor-anti-uniform})} \label{ssec-proofs-continuum} In this subsection we prove Theorem~\ref{th-continuum-limit} and Corollary~\ref{cor-anti-uniform}. First let us outline the plan of the argument. We perform the Fourier transform and estimate the difference of the resulting oscillatory integrals for the discrete and continuum models, using exchange of tails and the non-stationary-phase method. The proof of~\eqref{eq1-th-continuum-limit} consists of $3$ steps: \begin{description} \item[Step 1:] we replace the integration interval in the Fourier integral for the continuum model by the one from the discrete model (cutting off large momenta); \item[Step 2:] we replace the phase in the discrete model by the one from the continuum model; \item[Step 3:] we estimate the difference of the resulting integrals for the continuum and discrete models for small and intermediate momenta. \end{description} In the proof of~\eqref{eq2-th-continuum-limit}, we first subtract the massless propagator from the massive one to make the Fourier integral convergent. The integral for the continuum model is as follows.
\begin{lemma}\label{l-feynman-fourier} Under the notation~\eqref{eq-feynman-propagator}, for each $m>0$, $t\ge 0$, and $x\ne \pm t$ we have \begin{align*} G^F_{11}(x,t,m)&= \frac{im}{4\pi} \int_{-\infty}^{+\infty} \frac{e^{i p x-i\sqrt{m^2+p^2}t}\,dp} {\sqrt{m^2+p^2}}, \end{align*} where the integral is understood as a conditionally convergent improper Riemann integral. \end{lemma} \begin{proof} For $t>|x|$, use the change of variables $q=tp-x\sqrt{m^2+p^2}$ and \cite[8.421.11, 8.405.1]{Gradstein-Ryzhik-63}: $$ \frac{im}{4\pi} \int_{-\infty}^{+\infty} \frac{e^{i p x-i\sqrt{m^2+p^2}t}\,dp} {\sqrt{m^2+p^2}} =\frac{im}{4\pi} \int_{-\infty}^{+\infty} \frac{e^{-i\sqrt{m^2(t^2-x^2)+q^2}}\,dq} {\sqrt{m^2(t^2-x^2)+q^2}} =G^F_{11}(x,t,m). $$ For $0\le t<|x|$, use the change of variables $q=xp-t\sqrt{m^2+p^2}$ and \cite[8.432.5]{Gradstein-Ryzhik-63}. \end{proof} \begin{proof}[Proof of formula~\eqref{eq1-th-continuum-limit} in Theorem~\ref{th-continuum-limit}] In the case when $|x|>|t|$ and $(x+t)/\varepsilon$ is odd, formula~\eqref{eq1-th-continuum-limit} follows from Theorem~\ref{th-well-defined} and Definition~\ref{def-mass}; thus we exclude this case in what follows. We may assume that $x,t\ge 0$ because~\eqref{eq1-th-continuum-limit} is invariant under the transformations $(x,t)\mapsto (\pm x,\pm t)$ by Corollaries~\ref{cor-symmetry}--\ref{cor-skew-symmetry}. Assume that $(x+t)/\varepsilon$ is even; otherwise the proof is the same up to an obvious modification of the very first inequality below. Use notation~\eqref{eq-feynman-propagator}.
Formula~\eqref{eq1-th-continuum-limit} will follow from \begin{multline*} \left| \widetilde{A}_1(x,t,m,\varepsilon) -4\varepsilon\, \mathrm{Im}G^F_{11}(x,t,m) \right| \le\left| \frac{im\varepsilon^2}{\pi} \int\limits_{-\pi/2\varepsilon}^{\pi/2\varepsilon} \frac{e^{i p x-i\omega_pt}\,dp} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}} - \frac{im\varepsilon}{\pi} \int\limits_{-\infty}^{+\infty} \frac{e^{i p x-i\sqrt{m^2+p^2}t}\,dp} {\sqrt{m^2+p^2}} \right|\\ \le \frac{m\varepsilon}{\pi} \left| \int_{|p|\ge\pi/2\varepsilon} \frac{e^{i p x-i\sqrt{m^2+p^2}t}\,dp} {\sqrt{m^2+p^2}} \right| + \frac{m\varepsilon}{\pi} \left| \int_{|p|\le \pi/2\varepsilon} \frac{\varepsilon\left(e^{i p x-i\omega_pt} -e^{i p x-i\sqrt{m^2+p^2}t}\right)\,dp} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}} \right| \\ + \frac{m\varepsilon}{\pi} \left| \int_{|p|\le\pi/2\varepsilon} \left( \frac{\varepsilon}{\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}} -\frac{1}{\sqrt{m^2+p^2}} \right) e^{i p x-i\sqrt{m^2+p^2}t}\,dp \right| =\\=m\varepsilon\, O\left(\frac{\varepsilon}{|x-t|}+\frac{m\varepsilon(x+t)}{s}\right) + m\varepsilon\, O\left(m^2t\varepsilon\right) + m\varepsilon\, O\left(\frac{\varepsilon}{|x-t|}+\frac{m\varepsilon(x+t)}{s}\right) = m\varepsilon\, O\left(\varepsilon \Delta\right). \end{multline*} Here the first inequality follows from Corollary~\ref{cor-halve-interval}, Lemma~\ref{l-feynman-fourier}, and the inequality $|\mathrm{Im}\,z-\mathrm{Im}\,w|\le |z-w|$. The second inequality is straightforward. The obtained $3$ integrals are estimated below in Steps~1--3 respectively. The last bound follows from $2m(x+t)/s\le \Delta$. Below we restrict the integrals to $p\ge 0$; the argument for $p<0$ is analogous. The estimates are slightly different for $\varepsilon<s/m(x+t)$ and $\varepsilon>s/m(x+t)$. Denote \begin{equation}\label{eq-eps-plus} \begin{aligned} \varepsilon_+&:=\max\{\varepsilon,s/m(x+t)\};\\ \varepsilon_-&:=\min\{\varepsilon,s/m(x+t)\}.
\end{aligned} \end{equation} We use the following known result. \begin{lemma}[First derivative test] \label{l-first-derivative-test} \textup{\cite[Lemma~5.1.2]{Huxley-96}} Let $\alpha,\beta\in\mathbb{R}$ and $\alpha<\beta$. Assume that $f\in C^1[\alpha,\beta]$ has a monotone nonvanishing derivative; then for each $g\in C^0[\alpha,\beta]$ we have \begin{equation}\label{eq-l-first-derivative-test} \left|\int_{\alpha}^{\beta}g(p)e^{if(p)}\,dp\right| \le \frac{2\max_{[\alpha,\beta]}|g|+2V_\alpha^\beta (g)} {\min_{[\alpha,\beta]}|f'|}. \end{equation} \end{lemma} \textbf{Step 1.} Apply Lemma~\ref{l-first-derivative-test} for $\alpha=\pi/2\varepsilon_-$, $\beta\to+\infty$, $f(p):=p x-\sqrt{m^2+p^2}\,t$, and $g(p):=1/\sqrt{m^2+p^2}$. The derivative $f'(p)$ is monotone because $f''(p)=-m^2t/(m^2+p^2)^{3/2}\le 0$. Since $g(p)\searrow 0$ as $p\to +\infty$, it follows that the numerator on the right side of~\eqref{eq-l-first-derivative-test} tends to $4g(\pi/2\varepsilon_-)\le 8\varepsilon_-/\pi =O(\varepsilon)$ as $\beta\to+\infty$. To bound the denominator from below (and in particular to check the assumption $f'(p)\ne 0$), we need a lemma. \begin{lemma} \label{l-omegaprime} If $p\ge \pi m(x+t)/2s$ then $|x-tp/\sqrt{m^2+p^2}|\ge|x-t|/4$. \end{lemma} \begin{proof} We may assume that $x\ne0$. Since $p\ge \pi m(x+t)/2s\ge \pi mx/2\sqrt{|t^2-x^2|}$ it follows that $m/p\le 1$ and ${m^2}/{p^2}=\eta{(t^2-x^2)}/{x^2}$ for some $\eta\in [-4/\pi^2;4/\pi^2]\subset[-1/2;1/2]$. Thus $$ \left|x-t\frac{p}{\sqrt{m^2+p^2}}\right|= \frac{\left|x^2\left(1+m^2/p^2\right)-t^2\right|} {x\left(1+m^2/p^2\right)+t\sqrt{1+m^2/p^2}} =\frac{(1-\eta)|t^2-x^2|}{x\left(1+m^2/p^2\right)+t\sqrt{1+m^2/p^2}} \ge \frac{|x-t|}{4}. \vspace{-0.6cm} $$ \end{proof} Since $\alpha=\pi/2\varepsilon_-\ge \pi m(x+t)/2s$, by Lemmas~\ref{l-first-derivative-test}--\ref{l-omegaprime} we get $\int_{\pi/2\varepsilon_-}^{+\infty} \frac{e^{i p x-i\sqrt{m^2+p^2}t}\,dp}{\sqrt{m^2+p^2}} =O\left(\frac{\varepsilon}{|x-t|}\right)$.
For $\varepsilon\le s/m(x+t)$, which is equivalent to $\varepsilon_-=\varepsilon$, this completes step~1. For $\varepsilon>s/m(x+t)$ we need an additional bound $$ \left|\int_{\pi/2\varepsilon}^{\pi/2\varepsilon_-} \frac{e^{i p x-i\sqrt{m^2+p^2}t}\,dp} {\sqrt{m^2+p^2}}\right| \le \int_{\pi/2\varepsilon}^{\pi/2\varepsilon_-} \frac{dp} {\pi/2\varepsilon} \le \frac{\pi/2\varepsilon_-}{\pi/2\varepsilon} =\frac{m\varepsilon(x+t)}{s}. $$ \textbf{Step 2.} Using that $1-e^{iz}=O(|z|)$ for $z\in\mathbb{R}$ and $\sin z> z/2$ for $0<z<\pi/2$ we get \begin{multline*} \int_{0}^{\pi/2\varepsilon} \frac{\varepsilon\left(e^{i p x-i\omega_pt} -e^{i p x-i\sqrt{m^2+p^2}t}\right)\,dp} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}} = \int_{0}^{\pi/2\varepsilon} \frac{ \left(1-e^{i\omega_pt-i\sqrt{m^2+p^2}t}\right) \varepsilon e^{i p x-i\omega_pt}\,dp} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}} =\\= O\left(\int_{0}^{\pi/2\varepsilon} \frac{\left|\omega_p-\sqrt{m^2+p^2}\right|\varepsilon t \,dp} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}}\right) = O\left(\int_{0}^{\pi/2\varepsilon}\frac{m^2\varepsilon^2t\sqrt{m^2+p^2}\,dp} {\sqrt{m^2+p^2}}\right) = O\left(\frac{\pi}{2\varepsilon}\cdot m^2\varepsilon^2t\right) = O(m^2 t\varepsilon). \end{multline*} Here the third estimate is proved in the following lemma. \begin{lemma} \label{l-g-difference} For $|p|\le \pi/2\varepsilon$ we have $$\omega_p=\sqrt{p^2+m^2}+O\left(m^2\varepsilon^2\sqrt{p^2+m^2}\right) \qquad\text{ and }\qquad \frac{\partial \omega_p}{\partial p} = \frac{\sin p\varepsilon}{\sqrt{\sin^2p\varepsilon+m^2\varepsilon^2}} =\frac{p}{\sqrt{p^2+m^2}}+O(m^2\varepsilon^2). $$ \end{lemma} \begin{proof} First we estimate the derivative. By the Lagrange theorem, there is $\varepsilon'\in[0,\varepsilon]$ such that \begin{multline*} \frac{\sin p\varepsilon}{\sqrt{\sin^2p\varepsilon+m^2\varepsilon^2}} -\frac{p}{\sqrt{p^2+m^2}} = \left. 
\frac{\partial }{\partial \varepsilon}\left(\frac{\sin p\varepsilon}{\sqrt{\sin^2p\varepsilon+m^2\varepsilon^2}}\right) \right|_{\varepsilon=\varepsilon'}\cdot\varepsilon =\\= \left.\frac{m^2\varepsilon(p\varepsilon\cos p\varepsilon-\sin p\varepsilon)}{(\sin^2p\varepsilon+m^2\varepsilon^2)^{3/2}} \right|_{\varepsilon=\varepsilon'}\cdot\varepsilon = O(m^2\varepsilon^2) \end{multline*} because $\sin z-z\cos z=O(z^3)$ and $\sin z> z/2$ for $0<z<\pi/2$. Now we estimate $\omega_p$. By the Lagrange theorem, there is $m'\in[0,m]$ such that $$ \omega_0=\left. \frac{\partial \omega_0}{\partial m}\right|_{m=m'}\cdot m =\left. \frac{1}{1+m^2\varepsilon^2} \right|_{m=m'}\cdot m=m+O(m^3\varepsilon^2). $$ Then by the estimate for the derivative $\frac{\partial \omega_p}{\partial p}$, for some $p'\in[0,p]$ we have $$ \omega_p-\sqrt{p^2+m^2}=\omega_0-m+\left. \left(\frac{\partial \omega_p}{\partial p}-\frac{p}{\sqrt{p^2+m^2}}\right)\right|_{p=p'}\cdot p =O(m^2\varepsilon^2\sqrt{p^2+m^2}). \vspace{-1.1cm} $$ \end{proof} \textbf{Step 3.} We have \begin{multline*} \hspace{-1.2cm}\int\limits_{0}^{\pi/2\varepsilon_+} \left( \frac{\varepsilon}{\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}} -\frac{1}{\sqrt{m^2+p^2}} \right) e^{i p x-i\sqrt{m^2+p^2}t}\,dp =O\left(\int\limits_{0}^{\pi/2\varepsilon_+}p\varepsilon^2\,dp\right) =O\left(\frac{\varepsilon^2}{\varepsilon_+^2}\right) =O\left(\frac{\varepsilon m(x+t)}{s}\right). \end{multline*} Here the first estimate is proved in the following lemma. \begin{lemma} \label{l-difference} For $0\le p\le \pi/2\varepsilon$ we have $\frac{\varepsilon}{\sqrt{\sin^2p\varepsilon+m^2\varepsilon^2}}-\frac{1}{\sqrt{p^2+m^2}}=O(p\varepsilon^2).
$ \end{lemma} \begin{proof} By the Lagrange theorem, for some $\varepsilon'\in (0;\varepsilon)$ we have \begin{multline*} \hspace{-0.5cm} \frac{\varepsilon}{\sqrt{\sin^2p\varepsilon+m^2\varepsilon^2}} -\frac{1}{\sqrt{p^2+m^2}} = \left.\frac{\partial}{\partial\varepsilon} \frac{\varepsilon}{\sqrt{\sin^2p\varepsilon+m^2\varepsilon^2}} \right|_{\varepsilon=\varepsilon'} \cdot \varepsilon =\\= \left.\frac{\sin p\varepsilon (\sin p\varepsilon - p\varepsilon\cos p\varepsilon)} {(\sin^2 p\varepsilon+m^2\varepsilon^2)^{3/2}} \right|_{\varepsilon=\varepsilon'} \cdot \varepsilon = O(p\varepsilon^2) \end{multline*} because $\sin z-z\cos z=O(z^3)$ and $z/2<\sin z<z$ for $0<z<\pi/2$. \end{proof} For $\varepsilon<s/m(x+t)$ we need an additional bound: \begin{equation*} \int_{\pi/2\varepsilon_+}^{\pi/2\varepsilon} \left( \frac{\varepsilon}{\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}} -\frac{1}{\sqrt{m^2+p^2}} \right) e^{i p x-i\sqrt{m^2+p^2}t}\,dp =O\left( \frac{\varepsilon}{|x-t|}\right) \end{equation*} obtained by applying Lemma~\ref{l-first-derivative-test} for $g(p):={\varepsilon}/{\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}}-1/\sqrt{m^2+p^2}$ and $f(p)=p x-\sqrt{m^2+p^2}\,t$. The lower bound for the denominator of~\eqref{eq-l-first-derivative-test} is obtained by Lemma~\ref{l-omegaprime}. The numerator is at most $4g(\pi/2\varepsilon)=O(\varepsilon+2\varepsilon/\pi)=O(\varepsilon)$ by the following lemma. \end{proof} \begin{lemma} The function $g(p):={\varepsilon}/{\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}}-1/\sqrt{m^2+p^2}$ increases on $[0,\pi/2\varepsilon]$. \end{lemma} \begin{proof} It suffices to prove that $$ \frac{\partial g}{\partial p} = -\frac{\varepsilon^2\sin(p\varepsilon)\cos(p\varepsilon)}{(m^2\varepsilon^2+\sin^2(p\varepsilon))^{3/2}} +\frac{p}{(m^2+p^2)^{3/2}}\ge 0.
$$ Since this expression clearly tends to $0$ as $\varepsilon\to 0$, it suffices to prove that $$ \frac{\partial^2 g}{\partial p\partial\varepsilon} = \frac{\varepsilon\left(2\sin^2 p \varepsilon \left(p\varepsilon(1-\cos p\varepsilon)^2 +2(p\varepsilon-\sin p \varepsilon)\cos p\varepsilon\right) +m^2 \varepsilon^2 (\sin 2 p \varepsilon-2p \varepsilon \cos 2 p\varepsilon)\right)} {2(m^2\varepsilon^2+\sin^2(p\varepsilon))^{5/2}} \ge 0. $$ The equality and the inequality follow from \cite[\S3]{SU-3} and $\sin z\le z\le \tan z$ for $z\in[0,\pi/2]$. \end{proof} This completes the proof of~\eqref{eq1-th-continuum-limit}. For the proof of~\eqref{eq2-th-continuum-limit} we need two lemmas establishing the Fourier integral for the continuum model. \begin{lemma}\label{l-feynman-fourier2} Under notation~\eqref{eq-feynman-propagator} and \eqref{eq-massless}, for each $m, t\ge 0$ and $x\ne \pm t$ we have \begin{align*} G^F_{12}(x,t,m)-G^F_{12}(x,t,0)&= \frac{1}{4\pi} \int_{-\infty}^{+\infty} \left( \left(1+\frac{p} {\sqrt{m^2+p^2}}\right) e^{-i\sqrt{m^2+p^2}t}- \left(1+\mathrm{sgn}(p) \right) e^{-i|p|t} \right) e^{i p x}\,dp, \end{align*} where the integral is understood as a conditionally convergent improper Riemann integral.
\end{lemma} \begin{proof} This is the limiting case $n\searrow 0$ of the following formula: \begin{multline*} G^F_{12}(x,t,m)-G^F_{12}(x,t,n) =\left( \frac{\partial}{\partial t}-\frac{\partial}{\partial x} \right) \left(\frac{G^F_{11}(x,t,m)}{m}-\frac{G^F_{11}(x,t,n)}{n}\right) =\\ \hspace{-0.5cm}= \int_{-\infty}^{+\infty} \left( \frac{\partial}{\partial t}-\frac{\partial}{\partial x} \right) \left(\frac{e^{i p x-i\sqrt{m^2+p^2}t}} {\sqrt{m^2+p^2}}- \frac{e^{i p x-i\sqrt{n^2+p^2}t}} {\sqrt{n^2+p^2}} \right)\frac{i \,dp}{4\pi} = \int_{-\infty}^{+\infty} \left( e^{ipx-i\sqrt{m^2+p^2}t}- e^{ipx-i\sqrt{n^2+p^2}t} \right) \,\frac{dp}{4\pi} +\\+ \int_{-\infty}^{+\infty} \left( \frac{p} {\sqrt{m^2+p^2}} e^{-i\sqrt{m^2+p^2}t}- \frac{p} {\sqrt{n^2+p^2}} e^{-i\sqrt{n^2+p^2}t} \right) \frac{e^{i p x}\, dp}{4\pi} \end{multline*} Here we first applied \cite[8.473.4,5]{Gradstein-Ryzhik-63}, then Lemma~\ref{l-feynman-fourier}. We can change the order of the differentiation and the integration (and pass to the limit $n\searrow 0$ under the integral) by \cite[Proposition~6 in \S2.3 in Ch.~7]{Zorich-97} because the latter two integrals converge uniformly on compact subsets of $\mathbb{R}^2\setminus\{|x|=|t|\}$ by the following lemma. \end{proof} \begin{lemma} \label{l-tail} For each $m,n,x,t\ge 0$, $\alpha>0$, and $x\ne t$ we have \begin{align*} \int_{\alpha}^{+\infty} \left( \frac{p} {\sqrt{m^2+p^2}} e^{-i\sqrt{m^2+p^2}t}- \frac{p} {\sqrt{n^2+p^2}} e^{-i\sqrt{n^2+p^2}t} \right) e^{i p x}\,dp &=O\left(\frac{(m^2+n^2)t}{\alpha|x-t|}\right);\\ \int_{\alpha}^{+\infty} \left( e^{ipx-i\sqrt{m^2+p^2}t}- e^{ipx-i\sqrt{n^2+p^2}t} \right) \,dp&=O\left(\frac{(m^2+n^2)t}{\alpha|x-t|}\right). \end{align*} \end{lemma} \begin{proof} Assume $m\ge n$ without loss of generality. Let us prove the first formula; the second one is proved analogously. 
Rewrite the integral as a sum of two integrals: \begin{multline*} \hspace{-0.5cm}\int_{\alpha}^{+\infty} \left(\frac{p} {\sqrt{m^2+p^2}}-\frac{p} {\sqrt{n^2+p^2}}\right) e^{ipx-i\sqrt{n^2+p^2}t} \,dp+ \int_{\alpha}^{+\infty} \frac{p\left( e^{ipt-i\sqrt{m^2+p^2}t}- e^{ipt-i\sqrt{n^2+p^2}t} \right) e^{i p (x-t)}\,dp} {\sqrt{m^2+p^2}}. \end{multline*} The first integral is estimated immediately as $\int_{\alpha}^{+\infty}(m^2-n^2)\,dp/p^2=O\left((m^2+n^2)/\alpha\right)$. To estimate the second integral, apply Lemma~\ref{l-first-derivative-test} for $\beta\to+\infty$, $f(p):=p(x-t)$, and \begin{align*} g(p)&:=\frac{p}{\sqrt{m^2+p^2}} \left(e^{ipt-i\sqrt{m^2+p^2}t}-e^{ipt-i\sqrt{n^2+p^2}t}\right). \end{align*} (Clearly, the lemma remains true for a complex-valued function $g$, if the right side of~\eqref{eq-l-first-derivative-test} is multiplied by $2$.) The right side of~\eqref{eq-l-first-derivative-test} is not greater than \begin{multline*} \frac{4\int_{\alpha}^{+\infty}|g'(p)|\,dp}{|x-t|} =\\=O\left(\int_{\alpha}^{+\infty} \left( \frac{m^2}{(m^2+p^2)^{3/2}} \left(\sqrt{m^2+p^2}-\sqrt{n^2+p^2}\right) +2-\frac{p}{\sqrt{n^2+p^2}}-\frac{p}{\sqrt{m^2+p^2}}\right) \frac{t\,dp}{|x-t|}\right) =\\= O\left(\int_{\alpha}^{+\infty} \left(1-\frac{p}{\sqrt{m^2+p^2}}\right)\frac{t\,dp}{|x-t|}\right) =O\left(\int_{\alpha}^{+\infty} \frac{m^2t\,dp}{|x-t| p^2}\right) =O\left( \frac{(m^2+n^2)t}{|x-t|\alpha}\right), \end{multline*} where we use the Leibniz differentiation rule and the bounds $e^{iz}-e^{iw}=O(|z-w|)$ for $z,w\in\mathbb{R}$, $\frac{p}{\sqrt{m^2+p^2}}=O(1)$, and $1-\frac{p}{\sqrt{m^2+p^2}}=O(\frac{m^2}{p^2})$. \end{proof} \begin{proof}[Proof of formula~\eqref{eq2-th-continuum-limit} in Theorem~\ref{th-continuum-limit}] This is a modification of the proof of formula~\eqref{eq1-th-continuum-limit} above. In particular, we use conventions from the first paragraph of that proof except that now we assume that $(x+t)/\varepsilon$ is odd.
Use notation~\eqref{eq-feynman-propagator}, \eqref{eq-massless-lattice}, \eqref{eq-massless}, \eqref{eq-eps-plus}. Formula~\eqref{eq2-th-continuum-limit} follows from the estimates obtained from Example~\ref{p-massless}, Corollary~\ref{cor-halve-interval}, Lemma~\ref{l-feynman-fourier2} and Steps~1--3 below: \begin{multline*} \hspace{-0.5cm}\left| \widetilde{A}_2(x,t,m,\varepsilon) -4\varepsilon \, \mathrm{Im}G^F_{12}(x,t,m) \right| = \left| \widetilde{A}_2(x,t,m,\varepsilon)-\widetilde{A}_2(x,t,0,\varepsilon) -4\varepsilon\, \mathrm{Im}\left(G^F_{12}(x,t,m)-G^F_{12}(x,t,0)\right) \right| \\ \le\left| \frac{\varepsilon}{\pi} \int_{-\pi/2\varepsilon}^{\pi/2\varepsilon} \left(\left(1+\frac{\sin p\varepsilon } {\sqrt{m^2\varepsilon^2+\sin^2 p\varepsilon}}\right) e^{-i\omega_pt} - \left(1+\mathrm{sgn}(p) \right) e^{-i|p|t} \right) e^{i p x}\,dp -\right.\\ -\left. \frac{\varepsilon}{\pi} \int_{-\infty}^{+\infty} \left( \left(1+\frac{p} {\sqrt{m^2+p^2}}\right) e^{-i\sqrt{m^2+p^2}t}- \left(1+\mathrm{sgn}(p) \right) e^{-i|p|t} \right) e^{i p x}\,dp \right|\\ \le \frac{\varepsilon}{\pi} \left|\int_{|p|\ge \pi/2\varepsilon} \left( \left(1+\frac{p} {\sqrt{m^2+p^2}}\right) e^{-i\sqrt{m^2+p^2}t}- \left(1+\mathrm{sgn}(p) \right) e^{-i|p|t} \right) e^{i p x}\,dp \right| +\\+ \frac{\varepsilon}{\pi} \left| \int_{|p|\le \pi/2\varepsilon} \left( 1+\frac{\sin p\varepsilon}{\sqrt{m^2\varepsilon^2+\sin^2 p\varepsilon}} \right) \left(e^{-i\omega_pt}-e^{-i\sqrt{m^2+p^2}t}\right) e^{i p x}\,dp \right| +\\+ \frac{\varepsilon}{\pi} \left| \int_{|p|\le \pi/2\varepsilon} \left( \frac{\sin p\varepsilon}{\sqrt{m^2\varepsilon^2+\sin^2p\varepsilon}} -\frac{p}{\sqrt{m^2+p^2}} \right) e^{i p x-i\sqrt{m^2+p^2}t}\,dp \right| =\\=\varepsilon\, O\left(\frac{m^2\varepsilon t}{|x-t|}\right) +\varepsilon\, O\left(\frac{m^3\varepsilon (x+t)^2}{s} +\frac{m^2\varepsilon t}{|x-t|}\right) + \varepsilon\, O\left(m^2\varepsilon\right) = \frac{m\varepsilon(x+t)}{s}\,O\left(\varepsilon\Delta\right).
\end{multline*} Below we restrict the integrals to $p\ge 0$; the argument for $p<0$ is analogous. \textbf{Step 1.} The integral over $p\ge \pi/2\varepsilon$ is estimated in Lemma~\ref{l-tail} for $n=0$ and $\alpha=\pi/2\varepsilon$. \textbf{Step 2.} By Lemma~\ref{l-g-difference} we have \begin{multline*} \hspace{-0.5cm}\int_{0}^{\pi/2\varepsilon_+} \left( 1+\frac{\sin p\varepsilon}{\sqrt{m^2\varepsilon^2+\sin^2 p\varepsilon}} \right) \left(e^{-i\omega_pt}-e^{-i\sqrt{m^2+p^2}t}\right) e^{i p x}\,dp =O\left( \int_{0}^{\pi/2\varepsilon_+} \left|1-e^{it(\omega_p-\sqrt{m^2+p^2})}\right| \,dp\right) =\\=O\left( \int_{0}^{\pi/2\varepsilon_+} m^2\varepsilon^2(p+m)t\,dp\right) =O\left(m^2\varepsilon^2t \left(\frac{1}{\varepsilon_+^2}+\frac{m}{\varepsilon_+}\right)\right) =O\left(\frac{m^2\varepsilon t}{\varepsilon_+}+m^3\varepsilon t\right) =O\left(\frac{m^3\varepsilon (x+t)^2}{s}\right). \end{multline*} For $\varepsilon<s/m(x+t)$ in addition apply Lemma~\ref{l-first-derivative-test} for $\alpha:=\pi/2\varepsilon_+$, $\beta:=\pi/2\varepsilon$, $f(p):=px-\sqrt{m^2+p^2}\,t$, $g(p):=\left( 1+\frac{\sin p\varepsilon}{\sqrt{m^2\varepsilon^2+\sin^2 p\varepsilon}} \right) \left(e^{i\sqrt{m^2+p^2}t-i\omega_pt}-1\right)$.
The maximum in~\eqref{eq-l-first-derivative-test} is estimated analogously to the previous paragraph using the inequality $\varepsilon<s/m(x+t)\le 1/m$: \begin{multline*} \max_{[\pi/2\varepsilon_+;\pi/2\varepsilon]}|g| =O\left(\max_{p\in [\pi/2\varepsilon_+;\pi/2\varepsilon]} m^2\varepsilon^2(p+m)t\right) =O\left( m^2\varepsilon^2\left(\frac{\pi}{2\varepsilon}+m\right)t\right) = O(m^2\varepsilon t). \end{multline*} The variation in~\eqref{eq-l-first-derivative-test} is estimated using the Leibniz rule and Lemma~\ref{l-g-difference}: \begin{multline*} V_{\pi/2\varepsilon_+}^{\pi/2\varepsilon}(g) =\int_{\pi/2\varepsilon_+}^{\pi/2\varepsilon}|g'(p)|\,dp =\\= O\left(\int_{\pi/2\varepsilon_+}^{\pi/2\varepsilon} \left(\frac{m^2\varepsilon^3\cos p\varepsilon}{(m^2\varepsilon^2+\sin^2 p\varepsilon)^{3/2}} \left|\omega_p-\sqrt{m^2+p^2}\right|+ \left|\frac{\partial\omega_p}{\partial p} -\frac{\partial \sqrt{m^2+p^2}}{\partial p}\right| \right) t\,dp\right) =\\= O\left(\frac{\pi}{2\varepsilon}\cdot m^2\varepsilon^2 t\right) =O\left(m^2\varepsilon t\right). \end{multline*} The denominator of~\eqref{eq-l-first-derivative-test} is estimated using Lemma~\ref{l-omegaprime}. Thus $\int_{\alpha}^{\beta}g(p)e^{if(p)}\,dp =O\left({m^2\varepsilon t}/{|x-t|}\right)$. \textbf{Step 3.} By Lemma~\ref{l-g-difference} we have \begin{multline*} \int_{0}^{\pi/2\varepsilon} \left( \frac{\sin p\varepsilon}{\sqrt{m^2\varepsilon^2+\sin^2p\varepsilon}} -\frac{p}{\sqrt{m^2+p^2}} \right) e^{i p x-i\sqrt{m^2+p^2}t}\,dp =O\left(\frac{\pi}{2\varepsilon}\cdot m^2\varepsilon^2\right) =O\left(m^2\varepsilon\right).\\[-1.6cm] \end{multline*} \end{proof} \begin{proof}[Proof of Corollary~\ref{cor-anti-uniform}] For $m>0$ this follows from Theorem~\ref{th-continuum-limit} because $\Delta$ is uniformly bounded and the limiting functions are uniformly continuous on each compact subset of $\mathbb{R}^2\setminus\{|x|=|t|\}$. For $m=0$ this follows from Example~\ref{p-massless}.
\end{proof} \subsection{Identities (Propositions~\ref{p-basis}--\ref{p-triple})} \label{ssec-proofs-identities} We first prove the results of \S\ref{ssec-identities} and then those of \S\ref{ssec-analytic} (except Proposition~\ref{th-equivalence} proved above). \begin{proof}[Proof of Proposition~\ref{p-mass}] This is obtained by substituting axiom~2 into axiom~1 in Definition~\ref{def-anti-alg} and passing to the limit $\delta\searrow 0$ (cf.~\eqref{eq-substituted}). \end{proof} Proposition~\ref{p-Klein-Gordon-mass} is deduced from the previous one similarly to \cite[Proof of Proposition~7]{SU-22}. \begin{proof}[Proof of Proposition~\ref{p-Klein-Gordon-mass}] Substituting $t+\varepsilon$ for $t$ in Proposition~\ref{p-mass}, Eq.~\eqref{eq-Dirac-source1}, we get $$ \tilde A_1(x,t+\varepsilon) = \frac{1}{\sqrt{1+m^2\varepsilon^2}} (\tilde A_1(x+\varepsilon,t) + m \varepsilon\, \tilde A_2(x,t)). $$ Changing the signs of both $x$ and $t$ and applying Corollary~\ref{cor-symmetry} we get for $(x,t)\ne (0,0)$ $$ \tilde A_1(x,t-\varepsilon) = \frac{1}{\sqrt{1+m^2\varepsilon^2}} (\tilde A_1(x-\varepsilon,t) - m \varepsilon\, \tilde A_2(x,t)). $$ Adding the resulting two identities we get the required identity for $k=1$. The one for $k=2$ is proved analogously but we start with~\eqref{eq-Dirac-source2}. The analogues of the above two identities hold for $(x,t)\ne (0,-\varepsilon)$ and $(x,t)\ne (-\varepsilon,0)$ respectively. \end{proof} \begin{proof}[Proof of Proposition~\ref{p-symmetry-mass}] This has been proved in Corollaries~\ref{cor-symmetry} and~\ref{cor-skew-symmetry}.
\end{proof} \begin{proof}[Proof of Proposition~\ref{p-mass2}] By Proposition~\ref{th-equivalence} and the Plancherel theorem we get \begin{multline*} \sum\limits_{x\in\varepsilon\mathbb{Z}} \left(|\widetilde{A}_1\left(x,t\right)|^2+ |\widetilde{A}_2\left(x,t\right)|^2 \right) =\\=\frac{\varepsilon}{2\pi} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \left(\left| \frac{m\varepsilon e^{-i\omega_pt}} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}} \right|^2 + \left| \left(1+ \frac{\sin (p\varepsilon)} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}}\right) e^{-i\omega_pt} \right|^2\right)\,dp =\\ =\frac{\varepsilon}{2\pi} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \left(2+\frac{2\sin(p\varepsilon)} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}}\right) \,dp =2, \end{multline*} because the second summand in the latter integral is an odd function in $p$. \end{proof} \begin{proof}[Proof of Proposition~\ref{p-Huygens2}] This follows from Proposition~\ref{th-equivalence} and the convolution theorem, because \begin{align*} 2\frac{im\varepsilon e^{-i\omega_pt}} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}} &= \left(1+ \frac{\sin (p\varepsilon)} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}}\right) e^{-i\omega_pt'}\cdot \frac{im\varepsilon e^{-i\omega_p(t-t')}} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}} \\&+ \frac{im\varepsilon e^{-i\omega_pt'}} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}} \cdot \left(1- \frac{\sin (p\varepsilon)} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}}\right) e^{-i\omega_p(t-t')},\\ \hspace{-0.5cm} 2\left(1+ \frac{\sin (p\varepsilon)} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}}\right) e^{-i\omega_pt} &=\left(1+ \frac{\sin (p\varepsilon)} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}}\right) e^{-i\omega_pt'}\cdot \left(1+ \frac{\sin (p\varepsilon)} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}}\right) e^{-i\omega_p(t-t')} \\&- \frac{im\varepsilon e^{-i\omega_pt'}} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}} \cdot \frac{im\varepsilon
e^{-i\omega_p(t-t')}} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}}. \\[-1.3cm] \end{align*} \end{proof} For the next proposition we need a lemma, which follows from Definition~\ref{def-mass} and Theorem~\ref{th-well-defined}. \begin{lemma}[Initial value] \label{l-initial} For $k+x/\varepsilon$ even we have $\widetilde{A}_k(x,0)=\delta_{k2}\delta_{x0}$. \end{lemma} \begin{proof}[Proof of Proposition~\ref{p-Huygens}] The proof is by induction on $t$. The base $t=t'$ is Lemma~\ref{l-initial}. The inductive step follows from \begin{multline*} \widetilde{A}_1(x,t+\varepsilon) =\frac{1}{\sqrt{1+m^2\varepsilon^2}} (\tilde A_1(x+\varepsilon,t) + m \varepsilon\, \tilde A_2(x,t)) =\\=\sum\limits_{\substack{x'\in\varepsilon\mathbb{Z}:\\ (x+\varepsilon+x'+t+t')/\varepsilon \text{ odd}}} \widetilde{A}_2(x',t')\widetilde{A}_1(x-x'+\varepsilon,t-t') +\sum\limits_{\substack{x'\in\varepsilon\mathbb{Z}:\\ (x+\varepsilon+x'+t+t')/\varepsilon \text{ even}}} \widetilde{A}_1(x',t')\widetilde{A}_2(x'-x-\varepsilon,t-t') +\\+ m \varepsilon \sum\limits_{\substack{x'\in\varepsilon\mathbb{Z}:\\ (x+\varepsilon+x'+t+t')/\varepsilon \text{ odd}}} \widetilde{A}_2(x',t')\widetilde{A}_2(x-x',t-t') - m \varepsilon \sum\limits_{\substack{x'\in\varepsilon\mathbb{Z}:\\ (x+\varepsilon+x'+t+t')/\varepsilon \text{ even}}} \widetilde{A}_1(x',t')\widetilde{A}_1(x'-x,t-t') =\\= \sum\limits_{\substack{x'\in\varepsilon\mathbb{Z}:\\ (x+x'+t+\varepsilon+t')/\varepsilon \text{ odd}}} \widetilde{A}_2(x',t')\widetilde{A}_1(x-x',t+\varepsilon-t') +\sum\limits_{\substack{x'\in\varepsilon\mathbb{Z}:\\ (x+x'+t+\varepsilon+t')/\varepsilon \text{ even}}} \widetilde{A}_1(x',t')\widetilde{A}_2(x'-x,t+\varepsilon-t') \end{multline*} and an analogous computation for $\widetilde{A}_2(x,t+\varepsilon)$. Here the first and the last equality follow from Proposition~\ref{p-mass}, and the middle equality is the inductive hypothesis.
\end{proof} \begin{proof}[Proof of Proposition~\ref{l-mean}] Assume that $t\ge 0$; otherwise perform the transformation $(x,t)\mapsto(-x,-t)$, which preserves~\eqref{eq-mean1}--\eqref{eq-mean2} by Proposition~\ref{p-symmetry-mass}. Identity~\eqref{eq-mean2} is then obtained from Propositions~\ref{p-symmetry-mass} and~\ref{th-equivalence} as follows: \begin{multline*} 2m\varepsilon x\widetilde{A}_2(x,t)= m\varepsilon(t+x)\left(\widetilde{A}_2(x,t)-\widetilde{A}_2(-x,t)\right) =\frac{(t+x)m\varepsilon^2}{\pi}\int_{-\pi/\varepsilon}^{\pi/\varepsilon} \frac{\sin (p\varepsilon) e^{ipx-i\omega_pt}\,dp} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}} =\\= \frac{-i(t+x)m\varepsilon^2}{2\pi} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \frac{\left(e^{ip(x+\varepsilon)}-e^{ip(x-\varepsilon)}\right)e^{-i\omega_pt}\,dp} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}} =(t+x)\left(\widetilde{A}_1(x-\varepsilon,t) -\widetilde{A}_1(x+\varepsilon,t)\right). \end{multline*} To prove~\eqref{eq-mean1}, apply Proposition~\ref{th-equivalence} and integrate by parts: \begin{align} \notag 2m\varepsilon x\widetilde{A}_1(x,t)&= \frac{m^2\varepsilon^3}{\pi} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \frac{\left(ixe^{ipx}\right)e^{-i\omega_pt}\,dp} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}} =-\frac{m^2\varepsilon^3}{\pi} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} e^{ipx}\frac{\partial}{\partial p}\left(\frac{e^{-i\omega_pt}} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}}\right)\,dp \\ \label{eq-proof-mean1} &= \frac{m^2\varepsilon^3}{\pi} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \frac{\sin (p\varepsilon)e^{ipx-i\omega_pt}} {m^2\varepsilon^2+\sin^2(p\varepsilon)} \left(it+\frac{\varepsilon\cos(p\varepsilon)} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}}\right) \,dp; \\ \notag (x-t)\widetilde{A}_2(x,t)&= \frac{-i\varepsilon}{2\pi} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \left(i(x-t)e^{ip(x-t)}\right)e^{-i(\omega_p-p)t} \left(1+\frac{\sin(p\varepsilon)} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}}\right)\,dp \\ \notag &= \frac{i\varepsilon}{2\pi} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} e^{ip(x-t)}\frac{\partial}{\partial p} \left(e^{-i(\omega_p-p)t} \left(1+\frac{\sin(p\varepsilon)} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}}\right)\right)\,dp \\ \label{eq-proof-mean2} &= \frac{im^2\varepsilon^3}{2\pi} \int_{-\pi/\varepsilon}^{\pi/\varepsilon} \frac{e^{ipx-i\omega_pt}\,dp} {m^2\varepsilon^2+\sin^2(p\varepsilon)} \left(it+\frac{\varepsilon\cos(p\varepsilon)} {\sqrt{m^2\varepsilon^2+\sin^2(p\varepsilon)}}\right). \end{align} Substituting $x\pm\varepsilon$ for $x$ in~\eqref{eq-proof-mean2}, subtracting the resulting equalities, and adding~\eqref{eq-proof-mean1}, we get~\eqref{eq-mean1}. \end{proof} \begin{proof}[Proof of Proposition~\ref{p-triple}] The first required identity follows from Proposition~\ref{l-mean} by substituting $x\pm\varepsilon$ for $x$ in~\eqref{eq-mean2} and inserting the resulting expressions into~\eqref{eq-mean1}.
The second one is obtained by the same argument with \eqref{eq-mean2} and \eqref{eq-mean1} interchanged. \end{proof} \begin{proof}[Proof of Example~\ref{p-massless}] This follows directly from Proposition~\ref{th-equivalence}. \end{proof} \begin{proof}[Proof of Example~\ref{ex-simplest-values}] Eq.~\eqref{eq-Gauss}--\eqref{eq-lemniscate} are checked directly. Table~\ref{table-a4} is filled inductively using Lemma~\ref{l-initial} and Propositions~\ref{p-symmetry-mass}, \ref{l-mean}, \ref{p-mass}. \end{proof} \begin{proof}[Proof of Proposition~\ref{p-basis}] The value $2^{|t|/2}\mathrm{Re}\,\widetilde{A}_k(x,t,1,1)$ is an integer by Theorem~\ref{th-well-defined} and Definition~\ref{def-mass}. It remains to prove that $2^{|t|/2}\mathrm{Im}\,\widetilde{A}_k(x,t,1,1)$ is a rational linear combination of $G$ and $L'$ for $x+t+k$ odd; otherwise the expression vanishes by Theorem~\ref{th-well-defined}. By Proposition~\ref{p-symmetry-mass} we may assume that $x,t\ge 0$. The proof is by induction on $t$. The base $t=0$ is proved by induction on $x$. The base $(x,t)=(0,0)$ and $(1,0)$ is Example~\ref{ex-simplest-values}. The step from $x$ to $x+1$ follows from Proposition~\ref{l-mean}. Thus the assertion holds for $t=0$ and each $x$. The step from $t$ to $t+1$ follows from Proposition~\ref{p-mass}. \end{proof} \begin{proof}[Proof of Proposition~\ref{p-mass5}] It suffices to prove the proposition for $x,t\ge 0$ and $\varepsilon=1$. Indeed, for $\varepsilon\ne 1$ perform the transformation $(x,t,m,\varepsilon)\mapsto (x/\varepsilon,t/\varepsilon,m\varepsilon,1)$ which preserves the required formulae by Corollary~\ref{cor-scaling}. For $x<0$ change the sign of $x$. The left sides transform as shown in Proposition~\ref{p-symmetry-mass}. By the identity $\binom{n}{k}=\frac{n}{n-k}\binom{n-1}{k}$ it follows that the right sides transform in the same way as the left sides. For $t<0$ change the sign of $t$.
The left sides transform as in Proposition~\ref{p-symmetry-mass}. By the Euler transformation ${}_2F_1\left(p,q;r;z\right) =(1-z)^{r-p-q}{}_2F_1\left(r-p,r-q;r;z\right) $ \cite[9.131.1]{Gradstein-Ryzhik-63} and the identity $\binom{n}{k}=(-1)^{k}\binom{k-n-1}{k}$, the right sides transform in the same way. For $x,t\ge 0$ and $\varepsilon=1$ the proof is by induction on $t$. Induction base: $t=0$. To compute $\widetilde{A}_k(x,0)$, consider the following $3$ cases: Case 1: $x+k$ even. The required formula holds by Lemma~\ref{l-initial} and the identities $\binom{(x+k-2)/2}{x}=\delta_{k2}\delta_{x0}$ and ${}_2F_1(0,1;1;z)=1$ for each $k\in\{1,2\}$, $0\le x\in\mathbb{Z}$, $z\in \mathbb{R}$. Case 2: $x$ even, $k=1$. Recall that $\varepsilon=1$. Then the required identity follows from \begin{align*} \hspace{-0.7cm} \widetilde{A}_1(x,0) &=\frac{im}{2\pi} \int_{-\pi}^{\pi} \frac{e^{i p x}\,dp} {\sqrt{m^2+\sin^2p}} =\frac{im}{\pi} \int_{0}^{2\pi} \frac{ \sqrt{z}\cos(q x/2)\,dq} {\sqrt{1-2z\cos q+z^2}} =\frac{4im z^{(x+1)/2}} {xB(1/2,x/2)}\cdot {}_2F_1\left(\frac{1}{2},\frac{x+1}{2}; \frac{x}{2}+{1};z^2\right) \\ &= im (-1)^{x/2}2^{x+1}z^{(x+1)/2} \binom{(x-1)/2}{x} {}_2F_1\left(\frac{x+1}{2},\frac{1}{2}; \frac{x}{2}+{1};z^2\right) \\ &= \frac{im (-1)^{x/2}2^{x+1} z^{(x+1)/2}} {(1-z)^{x+1}} \binom{(x-1)/2}{x} {}_2F_1\left(\frac{x+1}{2}, \frac{x+1}{2}; 1+{x};\frac{-4z}{(1-z)^2}\right) \\ &= i(-im)^{-x} \binom{{(x-1)}/{2}}{x} {}_2F_1\left(\frac{x+1}{2}, \frac{x+1}{2}; 1+x; -\frac{1}{m^2}\right). \end{align*} Here the first equality is Proposition~\ref{th-equivalence}. The second equality is obtained by the change of the integration variable $q:=2p$, a transformation of the denominator using the notation $z:=(\sqrt{1+m^2}-m)^2$, dropping the odd function containing $\sin (qx/2)$, and halving the integration interval for the remaining even function. The third equality is obtained by applying \cite[9.112]{Gradstein-Ryzhik-63} for $p=1/2$ and $n=x/2$.
The fourth equality is obtained by evaluation of the beta-function \cite[8.384.1,8.339.1--2]{Gradstein-Ryzhik-63} and applying \cite[9.131.1]{Gradstein-Ryzhik-63}. The fifth equality is obtained by applying \cite[9.134.3]{Gradstein-Ryzhik-63} (with the sign of $z$ changed). The last equality follows from $4z/(1-z)^2=1/m^2$. Case 3: $x$ odd, $k=2$. By Case~2, Proposition~\ref{l-mean}, and \cite[9.137.15]{Gradstein-Ryzhik-63} we get the identity \begin{multline*} \widetilde{A}_2(x,0) =\frac{1}{2m}\widetilde{A}_1(x-1,0) -\frac{1}{2m}\widetilde{A}_1(x+1,0) = \frac{i(-im)^{1-x}}{2m} \binom{x/{2}-1}{x-1} {}_2F_1\left(\frac{x}{2}, \frac{x}{2}; x; -\frac{1}{m^2}\right) -\\- \frac{i(-im)^{-x-1}}{2m} \binom{x/{2}}{x+1} {}_2F_1\left(\frac{x}{2}+1, \frac{x}{2}+1; x+2; -\frac{1}{m^2}\right) = (-im)^{-x} \binom{x/{2}}{x} {}_2F_1\left(\frac{x}{2}, 1+\frac{x}{2}; 1+x; -\frac{1}{m^2}\right). \end{multline*} Induction step. Using Proposition~\ref{p-mass}, the inductive hypothesis, and \cite[9.137.11]{Gradstein-Ryzhik-63} we get \begin{multline*} \tilde A_1(x,t) = \frac{1}{\sqrt{1+m^2}} (\tilde A_1(x+1,t-1) + m\, \tilde A_2(x,t-1)) =\\= i\left({1+m^2}\right)^{-\frac{t}{2}} (-im)^{{t-x-2}}\binom{\frac{t+x-1}{2}}{x+1} {}_2F_1\left(\frac{x-t+3}{2},\frac{x-t+3}{2};x+2;-\frac{1}{m^2}\right)+ \\+ m\left({1+m^2}\right)^{-\frac{t}{2}} (-im)^{t-x-1}\binom{\frac{t+x-1}{2}}{x} {}_2F_1\left(\frac{x-t+1}{2},\frac{x-t+3}{2};x+1;-\frac{1}{m^2}\right) =\\= i\left({1+m^2}\right)^{-\frac{t}{2}} (-im)^{{t-x}}\binom{\frac{t+x-1}{2}}{x} {}_2F_1\left(\frac{x-t+1}{2},\frac{x-t+1}{2};x+1;-\frac{1}{m^2}\right). \end{multline*} For $\tilde A_2(x,t)$ the step is analogous; it uses~\cite[9.137.18]{Gradstein-Ryzhik-63} for $x\ne 0$ and~\cite[9.137.12]{Gradstein-Ryzhik-63} for $x\!=\!0$.
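As an illustration of the resulting closed formula (a consistency check rather than a part of the induction), take $\varepsilon=1$ and $(x,t)=(0,1)$. Lemma~\ref{l-initial} gives $\tilde A_1(1,0)=0$ and $\tilde A_2(0,0)=1$, so the recursion and the closed form agree:
$$
\tilde A_1(0,1)
=\frac{\tilde A_1(1,0)+m\,\tilde A_2(0,0)}{\sqrt{1+m^2}}
=\frac{m}{\sqrt{1+m^2}}
=i\left(1+m^2\right)^{-\frac{1}{2}}(-im)^{1}\binom{0}{0}
{}_2F_1\left(0,0;1;-\frac{1}{m^2}\right),
$$
because $\binom{0}{0}=1$ and ${}_2F_1(0,0;1;z)=1$.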
\end{proof} \subsection{Combinatorial definition (Theorem~\ref{p-real-imaginary}, Propositions~\ref{p-initial}--\ref{cor-dirac})} \label{ssec-proofs-fourier} In this section we compute the full space-time Fourier transform of the finite-lattice propagator (Proposition~\ref{cor-double-fourier-finite}) and use it to prove some identities (Corollary~\ref{cor-symmetry-finite}, Propositions~\ref{p-initial}--\ref{cor-dirac}) and Theorem~\ref{p-real-imaginary}. We follow the classical approach known from the Kirchhoff matrix-tree theorem and the Kasteleyn and Kenyon theorems \cite{Levine-11,Kenyon-02}. Namely, the solution of the Dirac equation on the finite lattice is expressed through determinants, interpreted combinatorially via a loop expansion, and computed explicitly via the Fourier transform. \begin{notation*} Let $e_1=e_1(x,t)\perp(1,1)$ and $e_2=e_2(x,t)\parallel (1,1)$ be the two edges ending at a lattice point $(x,t)$; cf.~Figure~\ref{fig-1x1} to the right. Denote $b_k:=e_k(0,0)$ and $x^*:=x+\frac{\varepsilon}{2}$, $t^*:=t+\frac{\varepsilon}{2}$. \end{notation*} \begin{proposition}[Full space-time Fourier transform] \label{cor-double-fourier-finite} The denominator of~\eqref{eq-def-finite-lattice-propagator} is nonzero.
For each even lattice point $(x,t)$ we have \begin{align}\notag A(b_2\to e_1)&= \frac{-i}{2T^2} \sum_{p,\omega\in \faktor{(2\pi/T\varepsilon)\mathbb{Z}}{(2\pi/\varepsilon)\mathbb{Z}}} \frac{m\varepsilon -i\delta e^{ip\varepsilon}} {\sqrt{1-\delta^2}\sqrt{1+m^2\varepsilon^2}\cos(\omega\varepsilon) -\cos(p\varepsilon)-im\varepsilon\delta}e^{i p x-i\omega t},\\ \notag A(b_2\to e_2)&= \frac{-i}{2T^2} \sum_{p,\omega\in \faktor{(2\pi/T\varepsilon)\mathbb{Z}}{(2\pi/\varepsilon)\mathbb{Z}}} \frac{\sqrt{1-\delta^2}\sqrt{1+m^2\varepsilon^2}\sin(\omega\varepsilon)+\sin(p\varepsilon)} {\sqrt{1-\delta^2}\sqrt{1+m^2\varepsilon^2}\cos(\omega\varepsilon) -\cos(p\varepsilon)-im\varepsilon\delta} e^{i p x-i\omega t} +\frac{1}{2}\delta_{x0}\delta_{t0}.\\ \intertext{For each odd lattice point $(x,t)$ we have} \notag A(b_2\to e_1)&= \frac{-i}{2T^2} \sum_{p,\omega\in \faktor{(2\pi/T\varepsilon)\mathbb{Z}}{(2\pi/\varepsilon)\mathbb{Z}}} \frac{m\varepsilon\sqrt{1-\delta^2} e^{i\omega\varepsilon}-i\delta\sqrt{1+m^2\varepsilon^2}} {\sqrt{1-\delta^2}\sqrt{1+m^2\varepsilon^2}\cos(\omega\varepsilon) -\cos(p\varepsilon)-im\varepsilon\delta}e^{i p x^*-i\omega t^*},\\ \notag A(b_2\to e_2)&= \frac{1}{2T^2} \sum_{p,\omega\in \faktor{(2\pi/T\varepsilon)\mathbb{Z}}{(2\pi/\varepsilon)\mathbb{Z}}} \frac{\sqrt{1+m^2\varepsilon^2}e^{-ip\varepsilon} -\sqrt{1-\delta^2}e^{i\omega\varepsilon}} {\sqrt{1-\delta^2}\sqrt{1+m^2\varepsilon^2}\cos(\omega\varepsilon) -\cos(p\varepsilon)-im\varepsilon\delta} e^{i p x^*-i\omega t^*}. \end{align} \end{proposition} The proposition follows from the next $3$ lemmas. The first one is proved completely analogously to Proposition~\ref{l-double-fourier-finite}, only the Fourier series is replaced by the discrete Fourier transform. \begin{lemma}[Full space-time Fourier transform] \label{l-discrete-fourier} There exists a unique pair of functions $A_k(x,t)$ on the lattice of size $T$ satisfying axioms~1--2 from Definition~\ref{def-anti-alg}. 
It is given by the expressions from Proposition~\ref{l-double-fourier-finite}, only the integrals are replaced by the sums over $\faktor{(2\pi/T\varepsilon)\mathbb{Z}}{(2\pi/\varepsilon)\mathbb{Z}}$, and the factors $\varepsilon^2/4\pi^2$ are replaced by $1/T^2$. \end{lemma} For combinatorial interpretation, we pass from functions on the lattice to functions on edges. \begin{lemma}[Equivalence of equations] \label{l-equivalence-equations} Functions $A_k(x,t)$ on the lattice of size $T$ satisfy axioms~1--2 from Definition~\ref{def-anti-alg} if and only if the function $\alpha(e_k(x,t)):=-i^k A_k(x,t)/2$ on the set of edges satisfies the equation \begin{equation}\label{eq-l-equivalence-equations} \alpha(f)= \alpha(e)A(ef)+\alpha(e')A(e'f)+\delta_{b_2f} \end{equation} for each edge $f$, where $e\parallel f$ and $e'\perp f$ are the two edges ending at the starting point of $f$. \end{lemma} \begin{proof} Assume $f=e_2(x,t)$ and $(x,t)$ is even; the other cases are analogous. Then \begin{align*} e &=e_2\left(x-\frac{\varepsilon}{2},t-\frac{\varepsilon}{2}\right), & A(ef)&=\frac{1}{\sqrt{1+m^2\varepsilon^2}}, & \delta_{b_2f}&=\delta_{x0}\delta_{t0}, \\ e'&=e_1\left(x-\frac{\varepsilon}{2},t-\frac{\varepsilon}{2}\right), & A(e'f)&=\frac{-im\varepsilon}{\sqrt{1+m^2\varepsilon^2}}. & & \end{align*} Substituting $\alpha(e_k(x,t))=-i^k A_k(x,t)/2$, we get the second equation of axiom~2. \end{proof} Now we solve the system of equations~\eqref{eq-l-equivalence-equations} by Cramer's rule. \begin{lemma}[Loop expansion]\label{l-loop-expansion} Define two matrices with the rows and columns indexed by edges: $$ A_{fa}:=A(a\to f)\qquad\text{and}\qquad U_{fe} :=\begin{cases} A(ef), &\text{if the endpoint of $e$ is the starting point of $f$},\\ 0, &\text{otherwise}. \end{cases} $$ Denote by $Z$ the denominator of~\eqref{eq-def-finite-lattice-propagator}. Then $Z=\det (I-U)$. If $Z\ne 0$ then $A=(I-U)^{-1}$.
\end{lemma} \begin{proof} The first formula follows from $$ \det (I-U)=\sum_\sigma \mathrm{sgn}(\sigma)\prod_e (I-U)_{\sigma(e)e} =\sum_\sigma \mathrm{sgn}(\sigma)\prod_{e:\sigma(e)\ne e}(-U_{\sigma(e)e}) =\sum_S A(S)=Z. $$ Here the products are over all edges $e$, the first two sums are over all permutations $\sigma$ of edges, and the last sum is over all loop configurations $S$. All the equalities except the third one follow from definitions. To prove the third equality, take a permutation $\sigma$ of edges and decompose it into disjoint cycles. Take one of the cycles $e_1e_2\dots e_ke_1$ of length $k>1$. The contribution of the cycle to the product is nonzero only if the endpoint of each edge $e_i$ is the starting point of the next one. In the latter case the contribution is $$ (-U_{e_2e_1})\dots (-U_{e_1e_k}) = (-1)^k A(e_1e_{2})\dots A(e_ke_{1}) = (-1)^{k-1} A(e_1e_{2}\dots e_{1}), $$ where we have taken the minus sign in~\eqref{eq-def-anti3} into account. Multiply the resulting contributions over all cycles of length greater than $1$. The cycles form together a loop configuration $S$, and the product of their arrows is $A(S)$. Since $(-1)^{k-1}$ is the sign of the cyclic permutation, the product of such signs equals $\mathrm{sgn}(\sigma)$. Clearly, the resulting loop configurations $S$ are in bijection with all permutations giving a nonzero contribution to the sum. This proves that $\det (I-U)=Z$. To prove the formula $A=(I-U)^{-1}$, replace the entry $(I-U)_{af}$ of the matrix $I-U$ by $1$, and all the other entries in the row $a$ by $0$. Analogously to the previous argument, the determinant of the resulting matrix (the cofactor of $I-U$) equals the numerator of~\eqref{eq-def-finite-lattice-propagator}. By Cramer's rule we get $A=(I-U)^{-1}$. \end{proof} \begin{proof}[Proof of Proposition~\ref{cor-double-fourier-finite}] This follows from Lemmas~\ref{l-discrete-fourier}--\ref{l-loop-expansion}. 
In particular, $Z=\det(I-U)\ne 0$ because $I-U$ is the matrix of system~\eqref{eq-l-equivalence-equations} having a unique solution by Lemmas~\ref{l-discrete-fourier}--\ref{l-equivalence-equations}. \end{proof} \begin{remark} Using Lemma~\ref{l-loop-expansion}, the discrete Fourier transform, and multiplying the determinants of equations~\eqref{eq-transformed-equation} over all $p,\omega$, one can show that $$ Z=2^{T^2}\prod_{p,\omega\in \faktor{(2\pi/T\varepsilon)\mathbb{Z}}{(2\pi/\varepsilon)\mathbb{Z}}} \left( \cos(\omega\varepsilon) -\frac{\cos(p\varepsilon)+im\varepsilon\delta} {\sqrt{1-\delta^2}\sqrt{1+m^2\varepsilon^2}} \right). $$ This remains true for $m=0$ or $\delta=0$, implying that $Z=0$ for $T$ divisible by $4$ (because of the factor obtained for $p=\omega=\pi/2\varepsilon$). For $\delta=0$ the latter remains true even if $m$ has imaginary part, which shows that Step~2${}'$ in \S\ref{ssec-outline} is necessary. Moreover, by Proposition~\ref{cor-double-fourier-finite}, the limit $\lim_{\delta\searrow 0}A(b_2\to e_1;m,\varepsilon,\delta,T)$, hence $\lim_{\delta\searrow 0}A(a_0\to f_1;m,\varepsilon,\delta,T)$, does not exist for $T$ divisible by $4$ and, say, $x=t=0$. Thus one cannot change the order of limits in~\eqref{eq-def-infinite-lattice-propagator}. \end{remark} \begin{example}[No charge conservation on the $2\times 2$ lattice] \label{ex-2x2} For $T=2$ we have \textup{\cite[\S2]{SU-3}}: $$\sum_{f\text{ starting on }t=0}|{A}(a\to f)|^2 =\frac{(1+\delta^2)(1+m^2\varepsilon^2)}{4(m^2\varepsilon^2+\delta^2)} \ne \frac{(1-\delta^2)(1+m^2\varepsilon^2)}{4(m^2\varepsilon^2+\delta^2)} =\sum_{f\text{ starting on }t=\varepsilon}|{A}(a\to f)|^2, $$ where the sums are over all edges $f$ starting on the line $t=0$ and $t=\varepsilon$ respectively. \end{example} Performing the change of variables $(p,\omega)\mapsto (\pm p,-\omega)$ in Proposition~\ref{cor-double-fourier-finite}, we get the following. 
\begin{corollary}[Skew symmetry] \label{cor-symmetry-finite} For each $(x,t)\in\varepsilon\mathbb{Z}^2$, where $(x,t)\ne (0,0)$, we have the identities $A(b_2\to e_1(x,-t))=A(b_2\to e_1(x,t))$ and $A(b_2\to e_2(-x,-t))=-A(b_2\to e_2(x,t))$. \end{corollary} For the proof of the identities from~\S\ref{ssec-combi-def}, we need a lemma, which follows immediately from defining equations~\eqref{eq-def-anti3}--\eqref{eq-def-finite-lattice-propagator}. \begin{definition} The arrow $A(a\to f)$ is \emph{invariant} under a transformation $\tau$ of the lattice, if $A(\tau(a)\to \tau(f))=A(a\to f)$. Clearly, $A(a\to f)=A_{fa}(-im\varepsilon, -\delta, \sqrt{1+m^2\varepsilon^2},\sqrt{1-\delta^2})$ for some rational function $A_{fa}$ in $4$ variables, depending on the parameters $a$, $f$, $T$. A transformation $\tau$ \emph{acts as the replacement $\delta\leftrightarrow im\varepsilon$}, if $A(\tau(a)\to \tau(f))=A_{fa}(-\delta, -im\varepsilon, \sqrt{1-\delta^2}, \sqrt{1+m^2\varepsilon^2})$. \end{definition} \begin{lemma}[Invariance] \label{l-invariance} The arrow $A(a\to f)$ is invariant under the translations by the vectors $(\varepsilon,0)$ and $(0,\varepsilon)$ and under the reflection with respect to the line $x=0$. The translation by $(\varepsilon/2,\varepsilon/2)$ acts as the replacement $\delta\leftrightarrow im\varepsilon$. \end{lemma} \begin{proof}[Proof of Proposition~\ref{p-initial}] By Lemma~\ref{l-invariance} we may assume that $a=b_2$ because the required equation $A(a\to a)=1/2$ is invariant under the replacement $\delta\leftrightarrow im\varepsilon$. Apply Proposition~\ref{cor-double-fourier-finite} for $x=t=0$ so that $e_2=b_2$. The change of variables $(p,\omega)\mapsto (-p,-\omega)$ shows that the sum over $p,\omega$ in the expression for $A(b_2\to e_2)$ vanishes. The remaining term is $1/2$. \end{proof} \begin{proof}[Proof of Proposition~\ref{p-symmetry}] By Lemma~\ref{l-invariance} we may assume $a=b_2$. 
Assume that $(x,t)$ is even; otherwise the proof is analogous. Consider the following $2$ cases. Case 1: $f=e_1(x,t)$. Translate both $a$ and $f$ by $(-x,-t)$ and reflect with respect to the line $x=0$. By Lemma~\ref{l-invariance} and Corollary~\ref{cor-symmetry-finite} we get $A(e_1(x,t)\to b_2)=A(b_2\to e_1(x,-t))=A(b_2\to e_1(x,t))$, as required. Case 2: $f=e_2(x,t)\ne b_2$. Translating by $(-x,-t)$, applying Lemma~\ref{l-invariance} and Corollary~\ref{cor-symmetry-finite}, we get $A(e_2(x,t)\to b_2)=A(b_2\to e_2(-x,-t))=-A(b_2\to e_2(x,t))$, as required. \end{proof} \begin{proof}[Proof of Proposition~\ref{p-dirac-finite}] By Lemma~\ref{l-loop-expansion} we get $(I-U)A=I$, that is, $A_{fa}-\sum_e U_{fe}A_{ea}=\delta_{fa}$, which is equivalent to the required identity. \end{proof} \begin{proof}[Proof of Proposition~\ref{p-dirac-adjoint}] This follows from Propositions~\ref{p-initial}--\ref{p-dirac-finite} because $e\parallel f$ and $e'\perp f$. \end{proof} \begin{proof}[Proof of Proposition~\ref{cor-dirac}] By Lemma~\ref{l-loop-expansion} we get $(I-U^n)A=(I+U+\dots+U^{n-1})(I-U)A=I+U+\dots+U^{n-1}$, which is equivalent to the required identity for $n\le 2T$. \end{proof} \begin{proof}[Proof of Theorem~\ref{p-real-imaginary}] The denominator of~\eqref{eq-def-finite-lattice-propagator} does not vanish by Proposition~\ref{cor-double-fourier-finite}. Limit~\eqref{eq-def-infinite-lattice-propagator} is computed as follows: \begin{multline*} A(a_0\to f_k) =\sum_{j,l=1}^2 (-1)^l A(b_la_0)A(b_l\to e_j)A(e_jf_k) =\frac{1}{1-\delta^2}\sum_{j,l=1}^2 (-\delta)^{2-l}\delta^{|j-k|}A(b_l\to e_j) \\ =\frac{\sum_{j,l=1}^2 (-\delta)^{2-l}\delta^{|j-k|} A(b_2\to e_{j'}((-1)^l x,t))}{1-\delta^2} \overset{T\to\infty}{\to} -\frac{\sum_{j,l=1}^2 (-\delta)^{2-l}\delta^{|j-k|}i^{j'} A_{j'}((-1)^l x,t)}{2(1-\delta^2)} \overset{\delta\searrow 0}{\to} -\frac{i^k\widetilde{A}_k(x,t)}{2}, \end{multline*} where $j':=2-|j-l|$. 
Here the first two equalities follow from Propositions~\ref{p-dirac-finite}--\ref{p-dirac-adjoint}. The third one is obtained by a reflection. The convergence holds by Propositions~\ref{cor-double-fourier-finite}, \ref{l-double-fourier-finite}, and Definition~\ref{def-anti-alg}. \end{proof} \subsection{Generalizations to several particles (Propositions~\ref{p-independence-1}--\ref{p-perturbation})} \label{ssec-proofs-variations} The results of \S\ref{sec-identical1} are proved easily. \begin{proof}[Proof of Proposition~\ref{p-independence-1}] Due to the condition $x_0 \geq 2t$ there are no paths starting at $A$ and ending at $F'$ and no paths starting at $A'$ and ending at $F$. Therefore \begin{multline*} a (AB,A'B'\to EF,E'F') = \sum\limits_{\substack{s=AB\dots EF\\s'=A'B'\dots E'F'}} a(s)a(s') =\\= \sum\limits_{s=AB\dots EF}a(s)\sum\limits_{s=AB\dots E'F'}a(s')= a_2(x,t,1,1)a_2(x'-x_0,t,1,1). \end{multline*} Taking the norm square, we get the required formula. \end{proof} \begin{proof}[Proof of Proposition~\ref{p-transfer-matrix}] The proof is by induction on~$t$. The base $t=1$ is obvious. The step is obtained from the following identity by summation over all unordered pairs $E,E'$: \begin{equation}\label{eq-local-conservation} \sum_{F,F'}P(AB,A'B'\to EF,E'F') =\sum_{D,D'}P(AB,A'B'\to DE,D'E'), \end{equation} where the sums are over all ordered pairs $(F,F')$ and $(D,D')$ of integer points such that $\overrightarrow{EF},\overrightarrow{E'F'},\overrightarrow{DE}, \overrightarrow{D'E'}\in\{(1,1),(-1,1)\}$. To prove~\eqref{eq-local-conservation}, consider the following $2$ cases. Case 1: $E\ne E'$. Dropping the last moves of the paths $s$ and $s'$ from Definition~\ref{def-identical}, we get $$a(AB,A'B'\to EF,E'F')= \sum_{D,D'}a(AB,A'B'\to DE,D'E')\frac{1}{i}a(DEF)\frac{1}{i}a(D'E'F'). $$ Consider the $4\times 4$ matrix with the entries $\frac{1}{i}a(DEF)\frac{1}{i}a(D'E'F')$, where $(D,D')$ and $(F,F')$ run through all pairs as in~\eqref{eq-local-conservation}. 
A direct check shows that the matrix is unitary (actually a Kronecker product of two $2\times 2$ unitary matrices), which implies~\eqref{eq-local-conservation}. Case 2: $E=E'$. Dropping the last moves of the two paths, we get for $F\ne F'$ \begin{multline*} a(AB,A'B'\to EF,EF')= a(AB,A'B'\to DE,D'E)\left(a(DEF')a(D'EF)-a(DEF)a(D'EF')\right) =\\= a(AB,A'B'\to DE,D'E) \left(\frac{1}{\sqrt{2}}\frac{1}{\sqrt{2}}- \frac{i}{\sqrt{2}}\frac{i}{\sqrt{2}}\right) =a(AB,A'B'\to DE,D'E), \end{multline*} where the integer points $D$ and $D'$ are now defined by the conditions $\overrightarrow{DE}=\overrightarrow{EF}$ and $\overrightarrow{D'E}=\overrightarrow{EF'}$. Since $a(AB,A'B'\to EF,EF')=a(AB,A'B'\to DE,D'E)=0$ for $F=F'$, we get~\eqref{eq-local-conservation}. \end{proof} For the results of \S\ref{sec-identical} we need the following lemma, proved analogously to Lemma~\ref{l-loop-expansion}. \begin{lemma}[Loop expansion] \label{l-loop-expansion3} Let $a_1,\dots,a_n$ be distinct edges. In the matrix $I-U$, replace the entries $(I-U)_{a_1f_1},\dots,(I-U)_{a_nf_n}$ by $1$, and all the other entries in the rows $a_1,\dots,a_n$ by $0$. Then the determinant of the resulting matrix equals $ZA(a_1,\dots,a_n\to f_1,\dots,f_n)$. \end{lemma} \begin{proof}[Proof of Proposition~\ref{p-det}] By Theorem~\ref{p-real-imaginary} we get $Z\ne 0$. Then by Lemma~\ref{l-loop-expansion} we get $(I-U)^{-1}=A$ and $\det A=1/\det(I-U)=1/Z$. Then by the well-known relation between complementary minors of two inverse matrices, the determinant of the matrix from Lemma~\ref{l-loop-expansion3} equals $\det \left(A_{f_ja_i}\right)_{i,j=1}^n/\det A= Z\det \left(A(a_i\to f_j)\right)_{i,j=1}^n$. It remains to use Lemma~\ref{l-loop-expansion3} and cancel~$Z$.
\end{proof} \begin{proof}[Proof of Proposition~\ref{cor-pass}] The proposition follows from \begin{multline*} \hspace{-0.7cm}{A}(a\to f \text{ pass } e) ={A}(a\to f) -{A}(a,e\to f,e) ={A}(a\to f) -{A}(a\to f){A}(e\to e) +{A}(a\to e){A}(e\to f) =\\=\frac{1}{2} {A}(a\to f)+ {A}(a\to e){A}(e\to f). \end{multline*} Here the first equality holds because $S\mapsto S\cup\{e\}$ is a bijection between loop configurations $S$ with the source $a$ and the sink $f$ not passing through $e$ and loop configurations with the sources $a,e$ and the sinks $f,e$. This bijection preserves $A(S)$ because $A(S\cup\{e\})=A(S)A(e)=A(S)\cdot 1$. The rest follows from Propositions~\ref{p-det} and~\ref{p-initial}. \end{proof} For the result of \S\ref{sec-fermi} we need the following lemma. \begin{lemma} \label{l-half} For each edge $e$ we have $\sum_{S\ni e}A(S)=\frac{1}{2}\sum_{S}A(S)$, where the left sum is over loop configurations containing $e$ and the right sum is over all loop configurations. \end{lemma} \begin{proof} This follows from $ \frac{\sum_{S\ni e}A(S)}{\sum_{S}A(S)}= 1-\frac{\sum_{S\not\ni e}A(S)}{\sum_{S}A(S)} =1-A(e\to e)=\frac{1}{2}. $ Here the second equality holds because $S\mapsto S\cup\{e\}$ is a bijection between loop configurations $S$ not passing through $e$ and loop configurations with the source $e$ and the sink $e$. The third equality is Proposition~\ref{p-initial}. \end{proof} \begin{proof}[Proof of Proposition~\ref{p-perturbation}] For loop configurations $S_\mathrm{e}$ and $S_\mu$ denote $A(S_\mathrm{e}):=A(S_\mathrm{e},m_\mathrm{e},\varepsilon,\delta)$, $A(S_\mu):=A(S_\mu,m_\mu,\varepsilon,\delta)$, $Z_\mathrm{e}:=\sum_{S_\mathrm{e}}A(S_\mathrm{e})$, $Z_\mu:=\sum_{S_\mu}A(S_\mu)$.
Up to terms of order $g^2$, the denominator of~\eqref{eq-def-fermi} equals $$ \hspace{-0.5cm}\sum_{S_\mathrm{e},S_\mu}A(S_\mathrm{e})A(S_\mu) \left(1+\sum_{e\in S_\mathrm{e},S_\mu}g\right) = \sum_{S_\mathrm{e}}A(S_\mathrm{e})\sum_{S_\mu}A(S_\mu) +g\sum_{e}\sum_{S_\mathrm{e}\ni e}A(S_\mathrm{e})\sum_{S_\mu\ni e}A(S_\mu) =Z_\mathrm{e}Z_\mu\left(1+\sum_{e}\frac{g}{4}\right), $$ where the second sum is over all common edges $e$ of $S_\mathrm{e}$ and $S_\mu$, the fifth and the last sums are over all the edges $e$, and we applied Lemma~\ref{l-half}. In particular, the denominator of~\eqref{eq-def-fermi} is nonzero for $g$ sufficiently small in terms of $m_\mathrm{e},m_\mu,\varepsilon,\delta,T$ because $Z_\mathrm{e}Z_\mu\ne 0$ by Theorem~\ref{p-real-imaginary}. Up to terms of order $g^2$, the numerator of~\eqref{eq-def-fermi} equals \begin{multline*} Z_\mathrm{e}Z_\mu \left(A(a_\mathrm{e}\overset{\mathrm{e}}{\to} f_\mathrm{e}) A(a_\mu\overset{\mu}{\to} f_\mu) +g\sum_{e}A(a_\mathrm{e}\to f_\mathrm{e} \text{ pass }e)A(a_\mu\to f_\mu \text{ pass }e) \right) =\\=Z_\mathrm{e}Z_\mu A(a_\mathrm{e}\overset{\mathrm{e}}{\to} f_\mathrm{e}) A(a_\mu\overset{\mu}{\to} f_\mu) +\\ +gZ_\mathrm{e}Z_\mu\sum_{e} \left(A(a_\mathrm{e}\overset{\mathrm{e}}{\to} e) A(e\overset{\mathrm{e}}{\to} f_\mathrm{e}) +\frac{1}{2}A(a_\mathrm{e}\overset{\mathrm{e}}{\to} f_\mathrm{e})\right) \left(A(a_\mu\overset{\mu}{\to} e)A(e\overset{\mu}{\to} f_\mu) +\frac{1}{2}A(a_\mu\overset{\mu}{\to} f_\mu)\right), \end{multline*} where the sums are over all the edges $e$, and we applied Proposition~\ref{cor-pass}. Dividing the resulting expressions and applying Proposition~\ref{p-initial}, we get the result. \end{proof}
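In the final division, one may use the elementary expansion (our notation, spelling out the last step)
$$
\frac{X_0+gX_1}{Z_0+gZ_1}
=\frac{X_0}{Z_0}+g\,\frac{X_1Z_0-X_0Z_1}{Z_0^2}+O(g^2),
\qquad Z_0\ne 0,
$$
where $X_0,X_1$ (respectively $Z_0,Z_1$) denote the zeroth- and first-order coefficients in $g$ of the numerator (respectively denominator) of~\eqref{eq-def-fermi} computed above.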
\section{INTRODUCTION} Nowadays there are several semi-phenomenological NN potentials that may be considered realistic because they provide good descriptions of cross sections, scattering amplitudes and phase shifts. Nevertheless, important discrepancies become apparent when one directly compares their configuration-space profile functions. Of course, this situation is consistent with the venerable inverse scattering problem, whereby there are always many potentials that can explain a given set of observables. Therefore one must look elsewhere in order to assess the merits of the various possible models. In the case of NN interactions, there is a rather rich relationship between the potential and observables, involving several spin and isospin channels and different spatial regions. On the other hand, as all modern models represent the long range interaction by means of the one pion exchange potential (OPEP), one must go to inner regions in order to unravel the discrepancies among the various approaches. Models vary widely in the way they treat the non-OPEP part of the interaction and, in the literature, one finds potentials constructed by means of dispersion relations, field theory or just based on common sense guesses. In all cases, parameters are used which either reflect knowledge about other physical processes or are adjusted ad hoc. This leaves a wide space for personal whim and indicates the need for information with little model dependence about the inner part of the nuclear force. In the case of NN interactions, the complexity of the relevant physical processes increases very rapidly as the internucleon distance decreases, and hence the best source of information with little model dependence is the tail of the two-pion exchange potential ($TPEP$). This problem has a long history.
More than thirty years ago, Cottingham and Vinh Mau began a research program based on the idea that the $TPEP$ is related to the pion--nucleon ($\pi N$) amplitude~\cite{Vin63}. It led to the construction of the Paris potential~\cite{Cott73,Laco80}, where the intermediate part of the force is obtained from empirical $\pi N$ information treated by means of dispersion relations. This procedure minimizes the number of unnecessary hypotheses and hence yields results which can be considered model independent. Another important contribution was made by Brown and Durso~\cite{Brow71} who stressed, in the early seventies, that chiral symmetry plays a major role in the description of the intermediate $\pi N$ amplitude. In the last four years the interest in applications of chiral symmetry to nuclear problems has been renewed and several authors have reconsidered the construction of the $TPEP$. At first, only systems containing pions and nucleons were studied, by means of non-linear Lagrangians based on either PS or PV pion-nucleon couplings~\cite{Ordo92,Cele92,Fria94,Rocha94,Birse94}. Nowadays, the evaluation of this part of the potential in the framework of chiral symmetry has no important ambiguities and is quite well understood. This minimal $TPEP$ fulfills the expectations from chiral symmetry and, in particular, reproduces at the nuclear level the well-known cancellations of the intermediate $\,\pi N\,$ amplitude~\cite{Ballot96a,Ballot96b}. On the other hand, it fails to yield the qualitative features of the medium range scalar-isoscalar NN attraction~\cite{Rocha94,Rocha95}. This happens because a system containing just pions and nucleons cannot explain the experimental $\pi N$ scattering data~\cite{HohI} and one needs other degrees of freedom, especially those associated with the delta and the $\pi N$ $\sigma$-term.
The former possibility was considered by Ord\'{o}\~nez, Ray and Van Kolck~\cite{Ordo94,Ordo96}, and shown to improve the predictive power of chiral cancellations, but in their work they did not examine closely the experimental features of the intermediate $\pi N$ amplitude. Empirical information concerning the intermediate $\pi N$ process may be introduced into the $TPEP$ in a model independent way, with the help of the H\"ohler, Jacob and Strauss (HJS) subthreshold coefficients~\cite{HohI,Hoh72}. This kind of approach has already been extensively adopted in other problems. For instance, Tarrach and M.~Ericson used it in their study of the relationship between nucleon polarizability and nuclear Van der Waals forces~\cite{Tarra78}. In the case of three-body forces, it was employed in the construction of both model independent and model dependent two-pion exchange potentials~\cite{Coon75,Rob83,Ueda84}. Using the same strategy, we have recently shown that the knowledge of the $\pi N$ amplitude, constrained by both chiral symmetry and experimental information in the form of the HJS coefficients, provides an unambiguous and model independent determination of the long range part of the two-pion exchange NN potential~\cite{Rob96}. There we restricted ourselves to the general formulation of the problem and to the identification of the leading scalar-isoscalar potential. In the present work we explore the numerical consequences of the expressions derived in that paper and compare them with some existing potentials. Our presentation is organized as follows: in Sec. II, we briefly summarize the derivation of the potential and recall the main formulae for the sake of self-consistency, leaving details to Appendices A, B, and C. In Sec. III we discuss the main features of the loop integrals that determine the potential, emphasizing the approximations associated with chiral symmetry. In Sec. IV we relate our theoretical expressions to those of other authors and in Sec. 
V, results are compared with existing phenomenological potentials. Finally, in Sec. VI we present our conclusions. \section{TWO-PION EXCHANGE POTENTIAL} The construction of the $\pi \pi EP$ begins with the evaluation of the amplitude for on-shell NN scattering due to the exchange of two pions. In order to avoid double counting, we must subtract the term corresponding to the iterated OPEP and, in the centre of mass frame of the NN system, the resulting amplitude is already the desired potential in momentum space. As it depends strongly on the momentum transferred $\bbox{\Delta}$ and little on the nucleon energy $E$, we denote it by $T (\bbox{\Delta})$\footnote{The final relativistic expression for the amplitude depends also on powers of $\vec{z}=\vec{p}\,'+\vec{p}$, which yield ``non-local'' terms. We expand the amplitude in powers of $\vec{z}/m$ and keep just the first term, which gives the spin-orbit force.}. In this work we are interested in the central, spin-spin, spin-orbit and tensor components of the configuration space potential, which may be written in terms of local profile functions~\cite{Part70}. They are related to the appropriate amplitudes in momentum space by \begin{equation} V(r) = - {\left(\frac{\mu}{2m}\right)}^2 {\frac{\mu}{4\pi}} \int \frac{d^3 \bbox{\Delta}}{(2\pi)^3}\; e^{-i\protect\bbox{\Delta} \protect\cdot \bbox{ r}} \left[\left(\frac{4\pi }{\mu^3}\right) T (\bbox{\Delta})\right] . \label{eq1} \end{equation} In general, there are many processes that contribute to the $TPEP$. However, for large distances, the potential is dominated by the low energy amplitude for $\pi N$ scattering on each nucleon. When the external nucleons are on-shell, the amplitude for the process $\pi^a(k) N(p) \rightarrow \pi^b(k')N(p')$ is written as \begin{equation} F = F^+ \, \delta_{ab} + F^- \, i \, \epsilon_{bac} \, \tau_c \ , \label{eq2} \end{equation} where \begin{equation} F^\pm = \overline u \left( A^\pm + \frac{{\not \! k} + {\not \! k}'}{2} \, B^\pm \right) u \ . 
\label{eq3} \end{equation} The functions $A^\pm$ and $B^\pm$ depend on the variables \begin{equation} t = \left(p-p'\right)^2\;\;\text{ and }\;\;\nu = \frac{(p+p')\cdot (k+k')}{4m} \end{equation} \noindent or, alternatively, on \begin{equation} s=(p+k)^2\;\;\text{ and }\;\; u=(p-k')^2\;. \end{equation} \noindent When the pions are off-shell, they may also depend on $k^2$ and ${k'}^2$. However, as discussed in Ref.~\cite{Rob96}, off-shell pionic effects have short range and do not contribute to the asymptotic amplitudes. At low energies, $A^\pm$ and $B^\pm$ may be written as a sum of chiral contributions from the pure pion-nucleon sector, supplemented by a series in the variables $\nu$ and $t$~\cite{HohI}, as follows: \begin{eqnarray} && A^+ = \frac{g^2}{m} +\sum a^+_{mn} \, \nu^{2m} \, t^n \ \ , \label{eq4}\\ [0.3cm] && B^+ = - \,\frac{g^2}{s-m^2} + \frac{g^2}{u-m^2} + \sum b^+_{mn} \, \nu^{(2m+1)} \, t^n \ \ , \label{eq5}\\ [0.3cm] && A^- = \sum a^-_{mn} \, \nu^{(2m+1)} \, t^n \ \ , \label{eq6}\\ [0.3cm] && B^- = -\, \frac{g^2}{s-m^2} - \frac{g^2}{u-m^2} + \sum b^-_{mn} \, \nu^{2m} \, t^n\ \ . \label{eq7} \end{eqnarray} \noindent In these expressions, the nucleon contributions were calculated using a non-linear pseudoscalar (PS) $\pi N$ coupling \cite{Rob96} and, in writing $A^+$, we have made explicit the factor $(g^2/m)$, associated with chiral symmetry. This amounts to just a redefinition of the usual $a_{00}^+$, given in Refs.~\cite{HohI,Hoh72}. On the other hand, the use of a pseudovector (PV) $\pi N$ coupling would also imply a small redefinition of $b_{00}^-$. It is very important to note, however, that the values of the sub-amplitudes $A^\pm$ and $B^\pm$ are not at all influenced by this kind of choice and hence are completely model independent. In what follows, the terms in these expressions associated with the HJS coefficients will be denoted by $A^\pm_R$ and $B^\pm_R$, the subscript $R$ standing for ``remainder'', as indicated in Fig.~\ref{Fig.1}(A). 
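As a simple orientation regarding distance scales (a standard textbook check, not specific to the present formalism), applying transform~(\ref{eq1}) to a single pion pole, for which $T(\bbox{\Delta})\propto 1/(\bbox{\Delta}^2+\mu^2)$, yields the familiar Yukawa profile
$$
\int \frac{d^3 \bbox{\Delta}}{(2\pi)^3}\;
\frac{e^{-i\protect\bbox{\Delta} \protect\cdot \bbox{r}}}{\bbox{\Delta}^2+\mu^2}
= \frac{e^{-\mu r}}{4\pi r}\;,
$$
so the singularity of the amplitude closest to $t=0$ sets the range of the corresponding potential. For two-pion exchange the relevant singularities begin at $t=4\mu^2$, and the resulting profile functions fall roughly as $e^{-2\mu r}$.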
\begin{figure} \epsfxsize=8.5cm \epsfysize=8cm \centerline{\epsffile{art5b_1.eps}} \vspace{0.4cm} \caption{\it A) Diagrams contributing to the low-energy $\pi N$ amplitude, where R represents the processes associated with the HJS coefficients; B) the two-pion exchange amplitude; C) contributions to the two-pion exchange amplitude from the purely pionic sector (top) and from processes involving the HJS coefficients (bottom).} \label{Fig.1} \end{figure} The evaluation of the diagrams of Fig.~\ref{Fig.1}(B) yields the following general form for $T$ \begin{eqnarray} T &=& -\frac{i}{2} \int \frac{d^4 Q}{(2\pi)^4} \, \frac{1}{k^2-\mu^2} \; \frac{1}{{k'}^2-\mu^2} \nonumber \\ [0.3cm] &\times& \left[ 3 F^{+(1)} \, F^{+(2)} + 2 \bbox{\tau}^{(1)} \cdot \bbox{\tau}^{(2)} \, F^{-(1)} \, F^{-(2)} \right] \label{eq8} \end{eqnarray} \noindent where the $F^{(i)}$ are given in Eq.~(\ref{eq2}) and the factor $\case{1}{2}$ accounts for the symmetry under the exchange of the intermediate pions. The pion mass is represented by $\mu$ and the integration variable $Q$ is defined as \begin{equation} Q \equiv \frac{1}{2} \, (k+k') \ . \label{eq9}\\ \end{equation} \noindent In what follows, we will also need the variables \begin{eqnarray} W &\equiv& p_1 + p_2 = p'_1 + p'_2 \ , \label{eq10}\\ [0.3cm] \Delta &\equiv& k' - k = p'_1 - p_1 = p_2 - p'_2 \ , \label{eq11}\\ [0.3cm] z &\equiv& \frac{1}{2} \, \left[ (p_1 + p'_1) - (p_2+p'_2) \right], \label{eq12}\\ [0.3cm] V_1 &\equiv& \frac{1}{2m} \; (W+z) \ , \label{eq13}\\ [0.3cm] V_2 &\equiv& \frac{1}{2m} \; (W-z) . \label{eq14} \end{eqnarray} The evaluation of the diagrams of Fig.~\ref{Fig.1}(C) produces \begin{eqnarray} &&T = -i\,(2m)^2 \, \frac{1}{2} \int \frac{d^4 Q}{(2\pi)^4} \; \nonumber \\ [0.3cm] &\!\!\times& \frac{1}{\left[ \left( Q -\frac{1}{2}\, \Delta \right)^2 - \mu^2 \right] \left[ \left( Q + \frac{1}{2} \, \Delta \right)^2 - \mu^2 \right]} \nonumber \\ [0.3cm] & \!\! \times & \!\! 
\left\{ 3 \left[ \left( \frac{g^2}{m} + A^+_R \right) I + \left( - \, \frac{g^2}{s-m^2} + \frac{g^2}{u-m^2} + B^+_R \right) {\raisebox{0.2ex}{$\not$}}Q \right]^{(1)}\right. \nonumber \\ [0.3cm] & \!\! \times &\left[ \left( \frac{g^2}{m} + A^+_R \right) I + \left( - \,\frac{g^2}{s-m^2} + \frac{g^2}{u-m^2} + B^+_R \right) {\raisebox{0.2ex}{$\not$}}Q \right]^{(2)} \nonumber \\ [0.3cm] & \!\! +& 2\bbox{\tau}^{(1)} \cdot \bbox{\tau}^{(2)} \left[ A^-_R \,I + \left( - \,\frac{g^2}{s-m^2}- \frac{g^2}{u-m^2} + B^-_R \right) {\raisebox{0.2ex}{$\not$}}Q \right]^{(1)} \nonumber \\ [0.3cm] & \!\! \times &\left. \left[ A^-_R \, I + \left( - \, \frac{g^2}{s-m^2} - \frac{g^2}{u-m^2} + B^-_R \right) {\raisebox{0.2ex}{$\not$}}Q \right]^{(2)} \right\}, \label{eq15} \end{eqnarray} \noindent where $I$ and ${\raisebox{0.2ex}{$\not$}}Q$ are defined with non-relativistic normalizations as \begin{eqnarray} && I = \frac{1}{2m} \; \overline u u \ , \label{eq16} \\[0.3cm] && {\raisebox{0.2ex}{$\not$}}Q = \frac{1}{2m} \; \overline u \, Q_\mu \, \gamma^\mu \, u \ . \label{eq17} \end{eqnarray} \noindent The integrand also depends implicitly on $Q$ through the variables \begin{eqnarray} && s_{i}- m^2 = Q^2 + Q \cdot (W\pm z) - \frac{1}{4} \; \Delta^2 \ , \label{eq18} \\[0.2cm] && u_{i} - m^2 = Q^2 - Q \cdot (W\pm z) - \frac{1}{4} \; \Delta^2 \ , \label{eq19}\\[0.2cm] && \nu_{i} = Q \cdot V_i \ . \label{eq20} \end{eqnarray} \noindent The integration is symmetric under the operation $\,Q \rightarrow - Q$ and hence nucleon denominators involving $s$ and $u$ yield identical results. The evaluation of the potential in configuration space also requires an integration over $t$, and the pole structure of Eq.~(\ref{eq8}) implies that the leading contribution at very large distances comes from the region $t\approx 4\mu^2$ \cite{BrJack}, as is well known. 
Therefore the form of our results in configuration space becomes more transparent when the contribution of the HJS coefficients is reorganized in terms of the dimensionless variable \begin{equation} \theta \equiv \left(\frac{t}{4\mu^2}-1\right). \label{eq21} \end{equation} \noindent The amplitudes $A^\pm_R$ and $B^\pm_R$, associated with the HJS coefficients, are rewritten as \onecolumn \begin{eqnarray} && A^+_R = \frac{1}{\mu} \sum \alpha^+_{mn}\left(\frac{\nu}{\mu}\right)^{2m}\theta^n\; , \label{eq23} \\ [0.2cm] && B^+_R = \frac{1}{\mu^2} \sum \beta^+_{mn}\left(\frac{\nu}{\mu}\right)^{(2m+1)}\theta^n \; , \label{eq24} \\ [0.2cm] && A^-_R = \frac{1}{\mu} \sum \alpha^-_{mn}\left(\frac{\nu}{\mu}\right)^{(2m+1)}\theta^n \; , \label{eq25} \\ [0.2cm] && B^-_R = \frac{1}{\mu^2} \sum \beta^-_{mn}\left(\frac{\nu}{\mu}\right)^{2m}\theta^n \; . \label{eq26} \end{eqnarray} \noindent In defining the coefficients $\alpha^\pm_{mn}$ and $\beta^\pm_{mn}$, we have introduced powers of $\mu$ where appropriate so as to make them dimensionless. Their numerical values are given in Tab.~\ref{Tab.1}. \begin{table} \caption{\it Values for the dimensionless coefficients of Eqs.~(\protect\ref{eq23}-\protect\ref{eq26}) taken from Ref.~\protect\cite{HohI} and re-stated by Eq.~\protect\ref{eq21}.} \begin{tabular} {lcccccr} {$(m,n)$} & {$(0,0)$} & {$(0,1)$} & {$(0,2)$} & {$(1,0)$} & {$(1,1)$} & {$(2,0)$} \\ \tableline $\alpha^+_{mn}$ & $ 3.676\pm 0.138$ & $ 5.712\pm 0.096$ & $ 0.576\pm 0.048$ & $ 4.62 $ & $-0.04 $ & $ 1.2 \pm 0.02 $ \\ $\beta^+_{mn}$ & $ -2.98 \pm 0.10 $ & $ 0.40 \pm 0.04 $ & $-0.16 $ & $ -0.68 \pm 0.06 $ & $ 0.32 \pm 0.04 $ & $-0.31 \pm 0.02 $ \\ $\alpha^-_{mn}$ & $-10.566\pm 0.212$ & $-1.976\pm 0.144$ & $-0.240\pm 0.032$ & $ 1.222\pm 0.074$ & $ 0.208\pm 0.024$ & $-0.33 \pm 0.02 $ \\ $\beta^-_{mn}$ & $ 9.730\pm 0.172$ & $ 1.760\pm 0.104$ & $ 0.40 \pm 0.032$ & $ 0.86 \pm 0.07 $ & $ 0.22 \pm 0.02 $ & $ 0.25 \pm 0.02 $ \end{tabular} \label{Tab.1} \end{table} Eq. 
(\ref{eq15}) can be naturally decomposed into a piece proportional to $g^4$, which originates in the pure pion-nucleon sector and a remainder, labelled by R, as in Fig.~\ref{Fig.1}(C). The former was discussed in detail in Refs.~\cite{Rocha94,Rocha95}, where numerical expressions were produced, and will no longer be considered here. We concentrate on $T_R$, which encompasses all the other dynamical effects. The potential in configuration space may be written as \begin{equation} V_R = \left(V^+_{R1}+V^+_{R2}+V^+_{R3}+V^+_{R4}+V^+_{R5} +V^+_{R6}+V^+_{R7}+V^+_{R8}\right) +\bbox{\tau}^{(1)} \cdot \bbox{\tau}^{(2)} \left(V^-_{R1}+V^-_{R2}+V^-_{R3} +V^-_{R4}+V^-_{R5}+V^-_{R6}+V^-_{R7}+V^-_{R8}\right) \label{eq27} \end{equation} where the $V^\pm_{R_i}$ are integrals of the form \begin{equation} V^\pm_{Ri} = -\, {\left(\frac{\mu}{2m}\right)}^2 {\frac{\mu}{4\pi}} \int \frac{d^3 \bbox{\Delta}}{(2\pi)^3} e^{-i\protect\bbox{\Delta} \cdot \protect\bbox{ r}} {\left[-i\frac{4\pi}{\mu^3} \int \frac{d^4 Q}{(2\pi)^4} \; \frac{1}{\left[ \left( Q -\frac{1}{2}\, \Delta \right)^2 - \mu^2 \right] \left[ \left( Q + \frac{1}{2} \, \Delta \right)^2 - \mu^2 \right]} \; g^\pm_i \right]}, \label{eq22} \end{equation} \noindent and the $g^\pm_i$ are the polynomials in $\nu/\mu$ and $\theta$ given in Appendix A. 
Thus we obtain the following general result for the $V^\pm_i$ \begin{eqnarray} V^+_{R1}&=& -{\frac{\mu}{4\pi}}\frac{3}{2}\left\{g^2\frac{\mu}{m} \alpha^+_{mn} 2 S_{B(2m,n)} +\alpha^+_{k\ell} \alpha^+_{mn}S_{B(2k+2m,\ell+n)} \right\}I^{(1)}I^{(2)}, \label{eq28}\\ [0.3cm] V^+_{R2}&=& -{\frac{\mu}{4\pi}}\frac{3}{2} \left\{g^2\frac{\mu}{m}\beta^+_{mn}S^{\mu}_{B(2m+1,n)} +\alpha^+_{k\ell}\beta^+_{mn} S^{\mu}_{B(2k+2m+1,n)} \right\} I^{(1)}\gamma_{\mu}^{(2)}, \label{eq29}\\ [0.3cm] V^+_{R4}&=& -{\frac{\mu}{4\pi}}\frac{3}{2} \left\{\beta^+_{k\ell} \beta^+_{mn} S^{\mu\nu}_{B(2k+2m+2,\ell+n)} \right\}\gamma_{\mu}^{(1)}\gamma_{\nu}^{(2)}, \label{eq30}\\ [0.3cm] V^-_{R1}&=& -{\frac{\mu}{4\pi}} \left\{\alpha^-_{k\ell} \alpha^-_{mn}S_{B(2k+2m+2,\ell+n)} \right\}I^{(1)}I^{(2)}, \label{eq31}\\ [0.3cm] V^-_{R2}&=& -{\frac{\mu}{4\pi}} \left\{\alpha^-_{k\ell} \beta^-_{mn} S^{\mu}_{B(2k+2m+1,\ell+n)} \right\} I^{(1)}\gamma_{\mu}^{(2)}, \label{eq32}\\ [0.3cm] V^-_{R4}&=& -{\frac{\mu}{4\pi}} \left\{\beta^-_{k\ell}\beta^-_{mn} S^{\mu\nu}_{B(2k+2m,\ell+n)} \right\}\gamma_{\mu}^{(1)}\gamma_{\nu}^{(2)}, \label{eq33}\\ [0.3cm] V^+_{R5}&=& -{\frac{\mu}{4\pi}}\frac{3}{2}\frac{\mu}{m} \left\{g^2\alpha^+_{mn}S^{\mu}_{T(2m,n)} \right\}\gamma_{\mu}^{(1)}I^{(2)}, \label{eq34}\\ [0.3cm] V^+_{R7}&=& -{\frac{\mu}{4\pi}}\frac{3}{2}\frac{\mu}{m} \left\{g^2\beta^+_{mn}S^{\mu\nu}_{T(2m+1,n)} \right\}\gamma_{\mu}^{(1)}\gamma_{\nu}^{(2)}, \label{eq35}\\ [0.3cm] V^-_{R5}&=& {\frac{\mu}{4\pi}}\frac{\mu}{m} \left\{g^2\alpha^-_{mn}S^{\mu}_{T(2m+1,n)} \right\}\gamma_{\mu}^{(1)}I^{(2)}, \label{eq36}\\ [0.3cm] V^-_{R7}&=& {\frac{\mu}{4\pi}}\frac{\mu}{m} \left\{g^2\beta^-_{mn}S^{\mu\nu}_{T(2m,n)} \right\}\gamma_{\mu}^{(1)}\gamma_{\nu}^{(2)}. \label{eq37} \end{eqnarray} The expressions for $V^\pm_{R3}$, $V^\pm_{R6}$ and $V^\pm_{R8}$ are identical respectively to $V^\pm_{R2}$, $V^\pm_{R5}$ and $V^\pm_{R7}$ when the very small differences between $\nu_1$ and $\nu_2$ are neglected. 
In these results $S_{B(m,n)}$ and $S_{T(m,n)}$ represent integrals of bubble (B) and triangle (T) diagrams, with $m$ and $n$ indicating the powers of $(\nu/\mu)$ and $\theta$ respectively, whose detailed form is presented in appendix B. There, we show that the integrals with one free Lorentz index are proportional to $V^\mu_i$ whereas those with two indices may be proportional to either $V^\mu_iV^\nu_i$ or $g^{\mu\nu}$. Therefore we write for both bubble and triangle integrals \begin{eqnarray} S^\mu_{(m,n)}&=& V^\mu_iS^V_{(m,n)}, \label{eq38}\\ [0.3cm] S^{\mu\nu}_{(m,n)}&=& V^\mu_iV^\nu_iS^{VV}_{(m,n)}+g^{\mu\nu}S^{g}_{(m,n)}. \label{eq39} \end{eqnarray} Using the approximations described in Appendix B and the Dirac equation as in Eq.(B1), we obtain \begin{eqnarray} &&V^+_{R1}+V^+_{R5}+V^+_{R6}= -{\frac{\mu}{4\pi}}\frac{3}{2} \left\{g^2\frac{\mu}{m} \alpha^+_{mn} \left[2S_{B(2m,n)}+2S^V_{T(2m,n)}\right] +\alpha^+_{k\ell} \alpha^+_{mn}S_{B(2k+2m,\ell+n)} \right\}I^{(1)}I^{(2)}, \label{eq40}\\ [0.4cm] &&V^+_{R2}+V^+_{R3}+V^+_{R7}+V^+_{R8}= -{\frac{\mu}{4\pi}}\frac{3}{2} \left\{g^2\frac{\mu}{m}\beta^+_{mn} \left[2S^V_{B(2m+1,n)}+2S^{VV}_{T(2m+1,n)}\right] \right. 
\nonumber \\ [0.2cm] &&+\left.\alpha^+_{k\ell}\beta^+_{mn} S^{V}_{B(2k+2m+1,n+\ell)} \right\} I^{(1)}I^{(2)} -\ {\frac{\mu}{4\pi}}\frac{3}{2} \left\{g^2\frac{\mu}{m}\beta^+_{mn} 2S^{g}_{T(2m+1,n)}\right\} \gamma^{(1)}\cdot\gamma^{(2)}, \label{eq41}\\ [0.4cm] &&V^+_{R4}= -{\frac{\mu}{4\pi}}\frac{3}{2} \left\{\beta^+_{k\ell} \beta^+_{mn} S^{VV}_{B(2k+2m+2,\ell+n)}\right\}I^{(1)}I^{(2)} -{\frac{\mu}{4\pi}}\frac{3}{2} \left\{\beta^+_{k\ell} \beta^+_{mn} S^{g}_{B(2k+2m+2,\ell+n)}\right\} \gamma^{(1)}\cdot\gamma^{(2)}, \label{eq42}\\ [0.4cm] &&V^-_{R1}= -{\frac{\mu}{4\pi}} \left\{\alpha^-_{k\ell} \alpha^-_{mn}S_{B(2k+2m+2,\ell+n)} \right\}I^{(1)}I^{(2)}, \label{eq43}\\ [0.4cm] &&V^-_{R2}+V^-_{R3}= -{\frac{\mu}{4\pi}} \left\{\alpha^-_{k\ell} \beta^-_{mn} 2S^{V}_{B(2k+2m+1,\ell+n)} \right\} I^{(1)}I^{(2)}, \label{eq44}\\ [0.4cm] &&V^-_{R4}= -{\frac{\mu}{4\pi}} \left\{\beta^-_{k\ell}\beta^-_{mn} S^{VV}_{B(2k+2m,\ell+n)} \right\}I^{(1)}I^{(2)} -{\frac{\mu}{4\pi}} \left\{\beta^-_{k\ell}\beta^-_{mn} S^{g}_{B(2k+2m,\ell+n)} \right\}\gamma^{(1)}\cdot\gamma^{(2)}\, , \label{eq45}\\ [0.4cm] &&V^-_{R5}+V^-_{R6}={\frac{\mu}{4\pi}}\frac{\mu}{m} \left\{g^2\alpha^-_{mn}2S^{V}_{T(2m+1,n)} \right\}I^{(1)}I^{(2)}\, , \label{eq46}\\ [0.4cm] &&V^-_{R7}+V^-_{R8}= {\frac{\mu}{4\pi}}\frac{\mu}{m} \left\{g^2\beta^-_{mn}S^{VV}_{T(2m,n)} \right\}I^{(1)}I^{(2)} +{\frac{\mu}{4\pi}}\frac{\mu}{m} \left\{g^2\beta^-_{mn}S^{g}_{T(2m,n)} \right\}\gamma^{(1)}\cdot\gamma^{(2)}\, . 
\label{eq47} \end{eqnarray} In configuration space, the spin-dependence of the potential is obtained by means of the non-relativistic results~\cite{Part70} \begin{eqnarray} I^{(1)} \, I^{(2)} \; & \cong & \; 1 - \frac{\Omega_{S0}}{2m^2} \ \ , \label{eq48}\\ [0.3cm] \gamma^{(1)} \cdot \gamma^{(2)} & \cong & 1+3\frac{\Omega_{S0}}{2m^2}-\frac{\Omega_{SS}}{6m^2} -\frac{\Omega_T}{12m^2} \ \ , \label{eq49} \end{eqnarray} \noindent where \twocolumn \begin{eqnarray} \Omega_{S0} & = &\bbox{ L \cdot S}\left( \frac{1}{r}\; \frac{\partial}{\partial r} \right) \ , \label{eq50}\\ [0.3cm] \Omega_{SS} & = &- \bbox{\sigma}^{(1)} \cdot \bbox{\sigma}^{(2)} \left( \frac{\partial^2}{\partial r^2} + \frac{2}{r}\; \frac{\partial}{\partial r} \right) \ , \label{eq51}\\ [0.3cm] \Omega_T &\!\! = &\!\! \hat{S}_{12} \left( \frac{\partial^2}{\partial r^2} \, - \frac{1}{r} \; \frac{\partial}{\partial r} \right) \ , \label{eq52} \end{eqnarray} \noindent and \[ \hat{S}_{12}=\left(3 \bbox{\sigma}^{(1)} \cdot \hat{\bf r} \, \bbox{\sigma}^{(2)} \cdot \hat{\bf r} - \bbox{\sigma}^{(1)} \cdot \bbox{\sigma}^{(2)} \right)\ . \] An interesting feature of the partial contributions to the potential is that they are given by two sets of phenomenological parameters, the $\pi N$ coupling constant and the HJS coefficients, multiplying structure integrals. These integrals depend on just the pion and nucleon propagators and hence carry very little model dependence. Their main features are discussed in the next section. \section{INTEGRALS AND CHIRAL SYMMETRY} Our expressions for the $TPEP$, given by Eqs.~(\ref{eq40}-\ref{eq47}), contain both bubble and triangle integrals, which depend on the indices $m$ and $n$, associated respectively with the powers of $(\nu/\mu)$ and $\theta$, in the HJS expansion. The numerical evaluation of these integrals has shown that there is a marked hierarchy in their spatial behavior and that the functions with $m=n=0$ prevail at large distances. 
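As a quick numerical cross-check of the spin algebra entering Eqs.~(\ref{eq48})--(\ref{eq52}), the sketch below (illustrative only, not part of the potential code) builds the tensor operator $\hat{S}_{12}$ from Pauli matrices in the two-nucleon spin space and verifies two standard properties: it is traceless, and it annihilates the spin singlet, so the tensor force acts only in triplet channels.

```python
import numpy as np

# Pauli matrices and the 2x2 identity
sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
sy = np.array([[0.0, -1.0j], [1.0j, 0.0]], dtype=complex)
sz = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
pauli = [sx, sy, sz]
I2 = np.eye(2, dtype=complex)

def tensor_s12(rhat):
    """S12 = 3 (sigma1 . rhat)(sigma2 . rhat) - sigma1 . sigma2, acting on
    the four-dimensional two-nucleon spin space (particle 1 = left factor)."""
    s1r = sum(rhat[k] * np.kron(pauli[k], I2) for k in range(3))
    s2r = sum(rhat[k] * np.kron(I2, pauli[k]) for k in range(3))
    s1s2 = sum(np.kron(pauli[k], pauli[k]) for k in range(3))
    return 3.0 * s1r @ s2r - s1s2

rhat = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)   # arbitrary unit vector
S12 = tensor_s12(rhat)

# basis order: |uu>, |ud>, |du>, |dd>; the singlet is (|ud> - |du>)/sqrt(2)
singlet = np.array([0.0, 1.0, -1.0, 0.0], dtype=complex) / np.sqrt(2.0)

print(abs(np.trace(S12)))             # traceless
print(np.linalg.norm(S12 @ singlet))  # singlet is annihilated
```

Both printed numbers vanish to machine precision, confirming that $\hat{S}_{12}$ contributes only in spin-triplet channels.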
In order to provide a feeling for the distance scales of the various effects, in Figs.~\ref{Fig.2} and~\ref{Fig.3} we display the ratios $\left[S_{B(m,n)}/S_{B(0,0)}\right]$ and $\left[S_{T(m,n)}^V/S_{T(0,0)}^V\right]$, for some values of $m$ and $n$, as functions of $r$. \begin{figure} \epsfxsize=7.0cm \epsfysize=9.5cm \centerline{\epsffile{art5b_2.eps}} \caption{\it Asymptotic behavior of the bubble integrals $S_{B(m,n)}$. The ratios $S_{B(m,0)}/S_{B(0,0)}$ and $S_{B(0,n)}/S_{B(0,0)}$, for some values of $m$ and $n$, are indicated by solid and dashed lines, respectively. One sees that the integral $S_{B(0,0)}$ (unity line) is asymptotically dominant.} \label{Fig.2} \end{figure} \begin{figure} \epsfxsize=7.0cm \epsfysize=9.5cm \centerline{\epsffile{art5b_3.eps}} \caption{\it Asymptotic behavior of the triangle integrals $S_{T(m,n)}$. The ratios $S_{T(m,0)}/S_{T(0,0)}$ and $S_{T(0,n)}/S_{T(0,0)}$, for some values of $m$ and $n$, are indicated by solid and dashed lines, respectively. As in Fig.~\protect\ref{Fig.2}, the integral for $m=n=0$ is asymptotically dominant.} \label{Fig.3} \end{figure} When considering these figures, it is useful to bear in mind that the $(m,0)$ and $(0,n)$ curves convey different information. The former series represents the average values of $\left(\nu /\mu\right)^m$ and is related to the behavior of the intermediate $\pi N$ amplitude below threshold. For physical $\pi N$ scattering, the variable $\nu$ is always greater than $\mu$, whereas in the present problem the average values of $(\nu/\mu)^m$ are smaller than 1 for distances beyond 2.5 fm and tend to zero for very large values of $r$. This is the reason why the construction of the $TPEP$ cannot be based on raw scattering data, but rather, requires the use of dispersion relations in order to transform the $\pi N$ amplitude to the suitable kinematical region \cite{BrJack}.
One has, therefore, a situation similar to the case of three-body forces, as discussed by Murphy and Coon~\cite{Murp95}, which emphasizes the role of the HJS coefficients. Regarding the dependence of the integrals on the momentum transferred, one notes that the intermediate $\pi N$ amplitude in the momentum space is already in the physical $t<0$ region and does not require any extrapolations. On the other hand, when one goes to configuration space, the Fourier transform picks up values of the amplitude around the point $t=4\mu^2$. Thus, the $r$-space potential is not transparent as far as $t$ is concerned and the coherent physical picture only emerges when one uses it in the Schr\"odinger equation. This is a well-known property, which also applies to the OPEP. The fact that the integrals with $m=n=0$ dominate at large distances means that the main contribution to the isospin-symmetric central potential comes from Eq.~(\ref{eq40}) and is given by \begin{eqnarray} &&V^+_{R1}+V^+_{R5}+V^+_{R6}= -\frac{\mu}{4\pi}\,\frac{3}{2} \left\{g^2\,\frac{\mu}{m}\,\alpha^+_{00}\;\right. \nonumber \\ [0.3cm] && \times \left. 2\left[S_{B(0,0)}+S^V_{T(0,0)}\right] +\left(\alpha^+_{00}\right)^2 S_{B(0,0)} \right\}I^{(1)}I^{(2)}. \label{eq53} \end{eqnarray} The first term within curly brackets, proportional to $g^2$, is produced by the triangle and bubble diagrams in Fig.~\ref{Fig.1}(C)-bottom, containing nucleons on one side and HJS amplitudes on the other, whereas the second one is due to the last diagram of Fig.~\ref{Fig.1}(C)-bottom. Inspecting Tab.~\ref{Tab.1} one learns that $\left(g^2\mu/m\right)/\alpha_{00}^+ \approx 8$, which suggests the first class of diagrams should dominate. On the other hand, the first term is proportional to $\left[S_{B(0,0)}+S_{T(0,0)}^V\right]$ and, as discussed in appendix B, these two integrals have opposite signs and there is a partial cancellation between them.
These features of the leading contribution are displayed in Fig.~\ref{Fig.4}, which shows that the first term is indeed dominant. \begin{figure} \epsfxsize=7.0cm \epsfysize=7.0cm \centerline{\epsffile{art5b_4.eps}} \caption{\it Structure of the leading contribution to the central potential, as given by Eq.~(\protect\ref{eq53}). The continuous line represents the total effect, whereas the dashed, dotted, and dash-dotted lines correspond to the contributions proportional to $(g^2\mu/m)\alpha_{(00)}^+\,2S_B$, $(g^2\mu/m)\alpha_{(00)}^+\,2S_T$, and $\left(\alpha_{(00)}^+\right)^2\,S_B$, respectively.} \label{Fig.4} \end{figure} The cancellation noticed in the leading contributions is not a coincidence. Instead, it represents a deep feature of the problem, which is due to chiral symmetry and also occurs in various other terms of the potential. In appendix C we have shown that the asymptotic form of $S_{B(0,0)}$ is given by the analytic expression \begin{equation} S_{B(0,0)}^{\text{\scriptsize asymp}} = \frac{1}{(4\pi)^2}\; 2 \sqrt{\pi}\; \frac{e^{-2x}}{x^{5/2}} \left(1+\frac{3}{16}\frac{1}{x} -\frac{15}{512}\frac{1}{x^2}+\cdots\right)\; . \label{eq54} \end{equation} \noindent Its accuracy is better than 1\% for distances down to 1.2 fm. There, we also studied the form of the basic triangle integral $S_{T(0,0)}$ and have demonstrated that $S_{T(0,0)}^{\,\text{\scriptsize asymp}}= - S_{B(0,0)}^{\,\text{\scriptsize asymp}}$ when $(\mu/m)\rightarrow 0$. As the integrals with other values of $m$ and $n$ can be obtained from the leading ones, the same relationship holds for them as well. This explains why Figs.~\ref{Fig.2} and~\ref{Fig.3} are so similar. As we have discussed elsewhere~\cite{Ballot96a,Ballot96b}, important cancellations due to chiral symmetry also occur in the pure $\pi N$ sector.
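As an aside, Eq.~(\ref{eq54}) is simple enough to evaluate directly. The sketch below tabulates the asymptotic bubble integral; the only assumption made here is that $x$ denotes the dimensionless distance $x=\mu r$, and the truncated series is used as written.

```python
import math

def s_b00_asymp(x):
    """Asymptotic bubble integral of Eq. (54), with the series truncated
    at the 1/x^2 term; x is assumed to be the dimensionless distance mu*r."""
    prefac = 2.0 * math.sqrt(math.pi) / (4.0 * math.pi) ** 2
    series = 1.0 + 3.0 / (16.0 * x) - 15.0 / (512.0 * x * x)
    return prefac * math.exp(-2.0 * x) * x ** (-2.5) * series

# the e^{-2x}/x^{5/2} falloff dominates: each additional pion Compton
# wavelength suppresses the integral by roughly a factor of e^2
for x in (2.0, 4.0, 8.0):
    print(x, s_b00_asymp(x))
```

Since $S_{T(0,0)}^{\,\text{asymp}}=-S_{B(0,0)}^{\,\text{asymp}}$ in the chiral limit, the same routine with an overall minus sign also approximates the basic triangle integral.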
In order to stress this point, we have evaluated the contributions of the diagrams in the top line of Fig.~\ref{Fig.1}(C), denoted respectively by box $(\Box)$, crossed $(\Join)$, triangle $(\triangle)$ (twice) and bubble $((\!))$, for three different values of the ratio $\mu/m$, namely $$ \left(\frac{\mu}{m}\right)^{\text{exp}},\;\; \; \frac{1}{10}\left(\frac{\mu}{m}\right)^{\text{exp}}, \;\;\text{ and }\;\; \frac{1}{100}\left(\frac{\mu}{m}\right)^{\text{exp}}. $$ \begin{figure} \epsfxsize=7.0cm \epsfysize=8.0cm \centerline{\epsffile{art5b_5.eps}} \vspace{0.5cm} \caption{\it Contributions of the box, crossed, and triangle diagrams divided by that of the bubble, in the pure $\pi N$ sector, for the ratios of the pion over the nucleon mass equal to the experimental value $\mu/m$, to $0.1\mu/m$, and to $0.01\mu/m$.} \label{Fig.5} \end{figure} In Fig.~\ref{Fig.5} we display the ratios of the box, crossed and triangle contributions over the bubble result as functions of distance, where it is possible to notice two interesting features. The first is that these ratios tend to become flat as $\mu/m$ decreases. The other one is that as $(\mu/m)\rightarrow 0$, one obtains the following relations: $\Box=0.5 (\!)$, $\Join=0.5 (\!)$, and $\triangle=-(\!)$. Thus, for the amplitude in the pure $\pi N$ sector, we have $ \Box+\Join+2\triangle+(\!)=0$, a point also remarked by Friar and Coon~\cite{Fria94}. This result, when combined with the previous discussion concerning the bottom part of Fig.~\ref{Fig.1}(C), indicates that the two-pion exchange NN potential would vanish if chiral symmetry were exact, because the same would happen with the intermediate $\pi N$ amplitude. So, all the physics associated with the tail of the intermediate range interaction is due to chiral symmetry breaking. As a final comment, we would like to point out that in the evaluation of the $TPEP$ there are two different hierarchies that can be used to simplify calculations. 
One of them concerns the HJS coefficients, which are more important for low powers of $\nu$ and $t$. The other one is associated with the spatial behavior of the integrals as functions of $m$ and $n$. The combined use of these hierarchies allows many terms to be discarded. \section{RELATED WORKS} To our knowledge, only Ord\'o\~nez, Ray and van Kolck have so far attempted to derive realistic nucleon-nucleon phenomenology in the framework of chiral symmetry~\cite{Ordo94,Ordo96}. The potential obtained by these authors is based on a very general effective Lagrangian, which is approximately invariant under chiral symmetry to a given order in non-relativistic momenta and pion mass. They considered explicitly the degrees of freedom associated with pions, nucleons and deltas, whereas the effects of other interactions were incorporated into parameters arising from contact terms and higher-order derivatives. In principle the free parameters in their effective Lagrangian could be obtained from other physical processes, but at present only some of them are known\footnote{See Ref.~\cite{Ber95} for a comprehensive discussion of this point.}. In their work these parameters were obtained by fitting deuteron properties and NN observables for $j\le 2$, whereas loop integrals were regularized by means of non-covariant Gaussian cutoffs of the order of the $\rho$ meson mass. Thus they could show that the effective chiral Lagrangian approach is flexible enough to allow the data to be reproduced with an appropriate choice of dynamical parameters and cutoffs. Comparing their approach to ours, one notes several important differences. For instance, we use dimensional regularization, which is well known to preserve the symmetries of the problem, and our expressions are quite insensitive to short distance effects.
In the work of Ord\'o\~nez, Ray and van Kolck, on the other hand, ``variations in the cutoff are compensated to some extent by a redefinition of the free parameters in the theory.''~\cite{Ordo96}. Moreover, we use the HJS coefficients as input, which are determined by $\pi N$ scattering, and therefore our results yield predictions for interactions at large distances or, alternatively, for $j\ge 2$. The test of these predictions will be presented elsewhere. Another point in the present work that deserves to be discussed concerns the subtraction of the iterated OPEP. In our calculation of the $TPEP$ in the pure nucleonic sector, we have supplemented the results derived by Lomon and Partovi~\cite{Part70} for the pseudoscalar box and crossed box diagrams with bubble and triangle diagrams associated with chiral symmetry\cite{Rocha94}. We have also shown that the use of a pseudovector coupling yields exactly the same results and hence that the potential does not depend on how the symmetry is implemented. However, the Partovi and Lomon amplitude includes the subtraction of the OPEP by means of the Blankenbecler-Sugar reduction of the relativistic equation and hence our results are also affected by that procedure. This kind of choice should not influence measured quantities, since it amounts to just a selection of the conceptual basis to treat the problem~\cite{Desp92}. As discussed by Friar~\cite{Fri77} and more recently by Friar and Coon~\cite{Fria94}, the treatments of the iterated OPEP by Taketani, Machida and Ohnuma\cite{TMO} and by Brueckner and Watson\cite{BW} differ by terms which are energy dependent. However, in our calculation, energy-dependent terms can be translated into the variable $\nu$ and, in the previous section, we have shown that the $TPEP$ at large distances is dominated by the region where $\nu \approx 0$. Hence our results are not affected by the way the OPEP is defined.
Another indication that confirms this fact comes from two recent studies dealing with the relative weights of the various $TPEP$ contributions to NN phase shifts, which have shown that the role of the iterated OPEP is very small for $j\ge 2$ \cite{Ballot96a,Ballot96b}. The last comment we would like to make in this section concerns the dynamical significance of the HJS coefficients. It has long been known that a tree model for the intermediate $\pi N$ amplitude containing nucleons, deltas, rho mesons and an amplitude describing the $\sigma$-term can be made consistent with the experimental values of the HJS coefficients by means of a rather conservative choice of masses and coupling constants~\cite{HohI,Murp95,Olson75,Sca74,Mene85}. In general, there are two advantages of employing such a model in a nuclear physics calculation. The first is that it allows one to go beyond the HJS coefficients, especially as far as the pion off-shell behavior of the amplitude is concerned. However, as we have discussed above, this kind of off-shell effect is related to short-distance interactions and hence is not important for the asymptotic $TPEP$. It is in this sense that we consider our results to be model independent. The second motivation for using a model is that it may provide a dynamical picture involving the various degrees of freedom of the problem and shed light on their relative importance. As we show in the next section, the leading contribution to the scalar-isoscalar potential comes from the coefficient $\alpha^+_{00}\equiv\mu(a^+_{00}+4\mu^2a^+_{01}+16\mu^4a^+_{02})$. As expected, it is attractive and determined mostly by the $\,\pi N\,$ sigma term and by the delta. The former yields $\alpha^+_{00\Sigma}= 1.8$ whereas the latter is the outcome of a strong cancellation between pole and non-pole contributions $\alpha^+_{00\Delta}=(26.5-25.2)$\cite{HohI}.
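Written out, the delta contribution quoted above involves a strong cancellation between its pole and non-pole pieces:
\[
\alpha^+_{00\Delta} \;=\; 26.5 - 25.2 \;=\; 1.3\, ,
\]
a residue comparable to $\alpha^+_{00\Sigma}=1.8$, even though each delta piece is roughly twenty times larger than the net result.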
Thus the delta non-pole term plays a very important role in the interaction, and must be carefully considered in any model aiming to be realistic. \section{RESULTS AND CONCLUSIONS} In this work we have assumed that the $TPEP$ is due to both pure pion-nucleon interactions and processes involving other degrees of freedom, as represented in the top and bottom lines of Fig.~\ref{Fig.1}(C). The former class of processes was evaluated and studied elsewhere~\cite{Rocha94,Ballot96b} and hence we here concentrate on the latter. \begin{figure} \epsfxsize=6.3cm \epsfysize=6.0cm \centerline{\epsffile{art5b_6.eps}} \vspace{0.5cm} \caption{\it Structure of the central potential; the dot-dashed curve represents the leading contribution (Eq.~(\protect\ref{eq53})) whereas the dashed, big dotted and small dotted curves correspond to Eqs.~(\protect\ref{eq40}), (\protect\ref{eq41}), and (\protect\ref{eq42}), respectively; the solid line represents the full potential.} \label{Fig.6} \end{figure} As discussed in Sec. III, the leading contribution to the potential at large distances is due to the intermediate $\pi N$ amplitude around the point $\nu=0$, $t=4\mu^2$. In order to understand the role played by the other terms, in Fig.~\ref{Fig.6} we display the structure of the scalar-isoscalar potential, given by Eqs.~(\ref{eq40}-\ref{eq42}). There it is possible to see that Eq.~(\ref{eq40}), associated with the $\alpha^+_{m n}$ HJS coefficients, completely dominates the full potential. On the other hand, for moderate distances, there is a clear separation between the curves representing the leading contribution, given by Eq.~(\ref{eq53}), and the total potential. This indicates that corrections associated with higher powers of $\nu$ and $t$ are important there, a feature that could have been anticipated from Figs.~\ref{Fig.2} and~\ref{Fig.3}.
\begin{figure} \epsfxsize=6.3cm \epsfysize=5.8cm \centerline{\epsffile{art5b_7.eps}} \vspace{0.3cm} \caption{\it Contributions to the total $TPEP$, represented by the continuous line; the dashed line comes from the pure $\pi N$ sector (Fig.~\protect\ref{Fig.1}(C)-top), whereas that associated with other degrees of freedom falls on top of the continuous line and cannot be distinguished from it.} \label{Fig.7} \end{figure} The total potential, obtained by adding the results of Refs.~\cite{Rocha94,Rocha95} to those of this work, is given in Fig.~\ref{Fig.7}, where it is possible to see that the contribution from the pure nucleon sector is rather small. This information, when combined with that contained in the preceding figures, allows one to conclude that the strength of the scalar-isoscalar attraction at large distances is due mostly to diagrams involving the nucleon on one side and the remaining degrees of freedom on the other. \begin{figure} \epsfxsize=6.3cm \epsfysize=5.8cm \centerline{\epsffile{art5b_8.eps}} \vspace{0.3cm} \caption{\it Central components of various potentials: parametrized Paris~\protect\cite{Laco80} (solid, P), Argonne v14~\protect\cite{WSA84} (solid, A), dTRS~\protect\cite{TRS75} (dashed, d), Bonn~\protect\cite{MHE87} (dashed, B), and our full potential (solid, *).} \label{Fig.8} \end{figure} In Fig.~\ref{Fig.8} we compare our results for the scalar-isoscalar interaction with the corresponding components of some potentials found in the literature: parametrized Paris~\cite{Laco80}, Argonne v14~\cite{WSA84}, dTRS~\cite{TRS75}, and Bonn~\cite{MHE87}. The first thing that should be noted is that all curves but ours bend upwards close to the origin, indicating clearly that the validity of our results is restricted to large distances. Inspecting the medium- and long-distance regions, it is possible to see that every potential disagrees with all the others.
On the other hand, this does not prevent the realistic potentials from reproducing experimental data, something that is possible because there are compensating discrepancies in the short-distance region. It is for this reason that the \onecolumn \noindent accurate knowledge of the tail of the potential may yield indirect constraints on its short-distance part. Finally, in Fig.~\ref{Fig.9} we show the ratios of the realistic potentials to our full potential, where the discrepancies mentioned above appear again, in a different form. An interesting feature of this figure is that the realistic potentials come close together around 2 fm, suggesting that this region is important for the reproduction of experimental data. Moreover, all of them show inflections there, indicating that the physics in this region goes beyond the exchange of two uncorrelated pions. In the long-distance domain, the $r$ dependence of the Argonne potential is not too different from ours, because it is based on a squared-OPEP form. \begin{figure} \epsfxsize=12.0cm \epsfysize=10.0cm \centerline{\epsffile{art5b_9.eps}} \caption{\it Ratio of the central components of some realistic potentials to our full result (solid,*): parametrized Paris~\protect\cite{Laco80} (solid, P), Argonne v14~\protect\cite{WSA84} (solid, A), dTRS~\protect\cite{TRS75} (dashed, d), and Bonn~\protect\cite{MHE87} (dashed, B).} \label{Fig.9} \end{figure} In summary, in this work we have shown that the use of a chiral $\pi N$ amplitude, supplemented by experimental information, determines uniquely the long-distance features of the scalar-isoscalar component of the $NN$ potential. As it is well known, the kinematical regions relevant to this problem are not directly accessible by experiment and hence empirical information has to be treated by means of dispersion relations before being used as input in the calculations of the force.
From a purely mathematical point of view, our results are valid for $r >2.5$ fm, since in this region one has $\nu <\mu$ and the HJS coefficients may be safely employed. On the other hand, the determination of the dynamical validity of the results is much more difficult, since this requires a comparison with processes involving the mutual interaction of the exchanged pions, something that remains to be done in the framework of chiral symmetry. In general, a potential involves two complementary ingredients that deserve attention, namely geometry and dynamics. In our calculation, the former is associated with standard bubble and triangle integrals, which determine unambiguously the profile functions in configuration space, whereas dynamics is incorporated into the problem by means of coupling constants and empirical coefficients. Geometry and dynamics decouple in our final expressions and hence they would remain valid even if the values of the dynamical constants change in the future. In the case of Fig.~\ref{Fig.9}, such a change would amount to just a modification of the vertical scale, with no appreciable effect on the discrepancies found with phenomenological potentials. \section{ACKNOWLEDGEMENTS} M.R.R. would like to thank the Nuclear Theory Group of the Department of Physics of the University of Washington, Seattle, for its kind hospitality during the performance of this work. This work was partially supported by the U.S. Department of Energy. The work of C.A. da Rocha was supported by CNPq, a Brazilian agency.
\section{Introduction} Density functional theory\cite{PhysRev.140.A1133,RevModPhys.61.689,jones2015density} (DFT) and beyond-DFT methods are often used in combination with photoelectron spectroscopy to obtain physical insights into the electronic structure of molecules and solids. The Kohn-Sham (KS) eigenvalues are not electron removal energies except for the highest occupied one\cite{almbladh1985,PhysRevA.30.2745,Perdew1982,Perdew1997,PhysRevB.60.4545}, but they often, though not always, provide good approximations to electron binding energies (EBEs)\cite{PhysRevB.70.134422,C8SC03862G,PhysRevB.79.201205,PhysRevB.73.205407}. Eigenvalues of range-separated hybrid functionals\cite{doi:10.1063/1.1383587} using a tuned range-separation parameter generally provide good approximations to EBEs due to mitigation of self-interaction (SI) errors\cite{doi:10.1063/1.1383587,baer2010tuned,kronik2012excitation}. SI error plays a large role in the underestimation of the magnitude of the KS eigenvalues. For excitation energies, the HOMO-LUMO orbital-energy difference in time-dependent KS theory plays a significant role. The first ionization energy and electron affinity in the exact KS theory are determined by the HOMO (highest partly-occupied molecular orbital) energies of the system before and after the addition of a fraction of an electron. The HOMO-LUMO gap at fixed electron number is not equal to the first-excitation energy, but is close enough to it in a molecule to enable an accurate calculation in TDDFT (time-dependent density functional theory) of the charge-transfer excitation of donor and acceptor molecules. The standard implementations of self-interaction correction methods in the generalized Kohn-Sham scheme\cite{PhysRevB.28.5992, PhysRevA.95.052505} correct only the occupied orbitals, while the unoccupied orbitals see only the mean-field density-functional-approximation (DFA) potential. We here seek a way to make the unoccupied orbitals (including the Rydberg states) see a self-interaction-corrected potential.
For that purpose, we adapt and implement the density-consistent effective potential (DCEP) method of Kohut, Ryabinkin, and Staroverov\cite{Ryabinkin2015,Kohut2014} to obtain effective local potentials from the Perdew-Zunger\cite{PhysRevB.23.5048} (PZ) self-interaction corrected orbitals and density. The Perdew-Zunger self-interaction correction (PZSIC) approach is closer to the self-interaction correction schemes\cite{lindgren1971statistical,PhysRevA.15.2135,PhysRevB.23.5048} that remove self-interaction error on an orbital-by-orbital basis. The PZ correction results in an orbital-dependent energy functional. In the KS-DFT scheme, implementing such orbital-dependent functionals requires computing the multiplicative potential $v_{XC} (\vec{r})=\frac{\delta E_{XC}}{\delta \rho (\vec{r})}$. Here $E_{XC}$ is the exchange-correlation functional and $\rho (\vec{r})$ is the total electron density. When the functional $E_{XC}$ is not an explicit functional of the density, the potential cannot be obtained by a straightforward evaluation. Traditionally, an effective potential has been obtained by solving the optimized effective potential (OEP) integral equation\cite{Sharp1953,Talman1976,PhysRevA.46.5453}. The OEP approach is often not well suited to routine calculations due to numerical instabilities and difficulties solving it in finite basis sets\cite{ivanov2002finite,yang2002direct,heaton2007optimized,kummel2008orbital,hesselmann2007numerically}. The PZSIC method has also been implemented within the Krieger-Li-Iafrate (KLI) approximation\cite{doi:10.1063/1.481421, doi:10.1063/1.1370527, Tong1997, messud2008improved, PhysRevB.77.121204, PhysRevA.103.042811} and within OEP using the real-space approach\cite{Korzdorfer2008}. But as mentioned earlier, OEP implementations using finite Gaussian basis sets are usually fraught with numerical instabilities.
The recently developed method by Kohut, Ryabinkin, and Staroverov provides a practical alternative to the OEP method to obtain KS potentials in a straightforward manner\cite{Ryabinkin2015,Kohut2014,Ospadov2017,PhysRevLett.111.013001}. Kohut, Ryabinkin, and Staroverov in Ref. [\onlinecite{Kohut2014}] have laid out a hierarchy of successively more accurate approximations, ending in DCEP. In this work, we adapt the DCEP method and apply it to the PZSIC using the Fermi-L\"owdin orbital self-interaction correction scheme\cite{doi:10.1063/1.4907592} (FLOSIC) to obtain the self-interaction corrected eigenvalues and corresponding orbitals. The density of states (photoelectron spectra) and HOMO-LUMO gaps obtained using this approach are compared with experimental values for several molecules. In Sec. \ref{sec:rks} we describe the DCEP method and validate our implementation in the UTEP-NRLMOL code\cite{UTEPNRLMOL}. In Sec. \ref{sec:rks_flosic} we present our adaptation of the DCEP to the FLOSIC method. The computational details and results are presented in Sec.~\ref{sec:results}. \section{Ryabinkin-Kohut-Staroverov Method}\label{sec:rks} The simplest approximation to the OEP relies on the idea developed by Slater\cite{Slater1951} to construct an orbital-averaged potential weighted by $\vert\phi_i \vert^2/\rho $, where $\vert \phi_i \vert^2$ and $\rho$ are, respectively, the density of the $i^{th}$ orbital and the total electron density. In the context of the Hartree-Fock (HF) approximation, this results in the so-called Slater potential, \begin{equation} v_S(\vec{r}) = \frac{1}{\rho(\vec{r})}\sum_{i=1}^N \phi_i^*\hat{K}\phi_i =-\frac{1}{2\rho(\vec{r})} \int\frac{|\gamma(\vec{r},\vec{r}\,')|^2}{\vert\vec{r}-\vec{r}\,'\vert} d\vec{r}\,' \end{equation} with the Fock exchange operator $\hat{K}$ and reduced density matrix $\gamma(\vec{r},\vec{r}\,')=\sum_i^N\phi_i(\vec{r})\phi_i^*(\vec{r}\,')$, where $N$ is the number of occupied orbitals.
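The averaging that defines $v_S$ is a density-weighted mean of orbital-level potentials. The sketch below illustrates only this weighting on a toy 1D grid; the two Gaussian ``orbitals'' and the per-orbital potentials are made up for the illustration and do not come from an actual HF calculation.

```python
import numpy as np

# Toy illustration of the orbital averaging v(r) = sum_i |phi_i|^2 v_i(r) / rho(r).
# Orbitals and per-orbital potentials below are invented for the sketch; in the
# HF case the v_i would be the orbital-specific exchange potentials.
r = np.linspace(-5.0, 5.0, 201)
phi1 = np.exp(-r**2 / 2.0)          # hypothetical "s"-like orbital
phi2 = r * np.exp(-r**2 / 2.0)      # hypothetical "p"-like orbital

v1 = -1.0 / (1.0 + r**2)            # made-up orbital-level potentials
v2 = -0.5 / (1.0 + r**2)

rho = phi1**2 + phi2**2             # strictly positive on this grid
v_avg = (phi1**2 * v1 + phi2**2 * v2) / rho

# being a pointwise convex combination, the average is bounded by v1 and v2
print(bool(np.all(v_avg <= np.maximum(v1, v2) + 1e-12)))  # True
print(bool(np.all(v_avg >= np.minimum(v1, v2) - 1e-12)))  # True
```

In the DCEP hierarchy this averaged potential is only the starting point; the kinetic-energy-density and $\overline{I}$ corrections of Eq.~(\ref{eq:dcep}) are then added on top of it.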
Kohut \textit{et al.}\cite{Kohut2014} in 2014 defined a methodology to obtain higher-order approximations to the OEP for Hartree-Fock calculations. The method begins by rearranging the Fock equations as given by \begin{equation} \Bigg[ -\frac{1}{2} \nabla^{2} + v_{ext}(\vec{r}) + v_{H}^{HF}(\vec{r}) + \hat{K} \Bigg] \phi_i^{HF} = \epsilon _i^{HF} \phi_i^{HF} . \end{equation} Multiplying both sides of the above equation by $\phi_i^{HF*}$, summing over $i$ from 1 to $N$, and then dividing both sides by $\rho^{HF}(\vec{r})$ gives \begin{equation}\label{eq:I_HF} \frac{\tau_L^{HF}(\vec{r})}{\rho^{HF}(\vec{r})} + v_{ext}(\vec{r}) + v_{H}^{HF}(\vec{r}) + v_S^{HF}(\vec{r}) = -\overline{I}^{HF}(\vec{r}) \end{equation} where $\tau_L^{HF}(\vec{r})$ is the HF kinetic energy density in Laplacian form and $\bar{I}^{HF}(\vec{r})$ is the HF average local ionization energy. Likewise, repeating these steps for the KS equations gives \begin{equation}\label{eq:I_KS} \frac{\tau_L^{KS}(\vec{r})}{\rho^{KS}(\vec{r})} + v_{ext}(\vec{r}) + v_{H}^{KS}(\vec{r}) + v_X(\vec{r}) = -\overline{I}^{KS}(\vec{r}) . \end{equation} Here, the Laplacian kinetic energy density for a given wavefunction (WF) (e.g. HF or KS) is \begin{equation} \tau_L^{WF}(\vec{r}) = -\frac{1}{2}\sum_{i=1}^N \phi_i^{WF*}\nabla^2 \phi_i^{WF} \end{equation} and the average local ionization energy (sometimes denoted $\overline{\epsilon}^{WF}$) is defined by \begin{equation} \overline{I}^{WF}(\vec{r}) = -\frac{1}{\rho^{WF}(\vec{r})} \sum_{i=1}^N \epsilon_i^{WF} \vert \phi_i^{WF}\vert^2 . \end{equation} The highest level of approximation defined by Kohut, Ryabinkin, and Staroverov is DCEP. This approximation relies on the assumption that the ground state densities in the two schemes are equal. This equates to imposing the constraint $\rho^{KS}(\vec{r}) = \rho^{HF}(\vec{r})$. Subtracting Eq. (\ref{eq:I_KS}) from Eq.
(\ref{eq:I_HF}) leads to \begin{equation}\label{eq:dcep} v_{X}^{DCEP}(\vec{r}) =v_{S}^{HF}(\vec{r}) + \overline{I}^{HF}(\vec{r}) - \overline{I}^{KS}(\vec{r}) + \frac{\tau^{HF}(\vec{r})}{\rho^{HF}(\vec{r})} - \frac{\tau^{KS}(\vec{r})}{\rho^{KS} (\vec{r})} \end{equation} where $\tau$ is the positive-definite form of the kinetic energy density, such that $ \tau^{HF}(\vec{r}) = \frac{1}{2} \sum_{i=1}^N \vert\nabla\phi_i^{HF}\vert^2 $. In the DCEP method, the eigenvalues are then shifted so that $\epsilon_{HOMO}^{KS} = \epsilon_{HOMO}^{HF}$ to ensure the correct behavior of the asymptotic potential in the limit of $r\rightarrow \infty$. A full self-consistent calculation is performed subsequently. In this work we define the tolerance for self-consistent calculations such that self-consistency is reached when the relative difference between the potentials in two successive iterations is less than $10^{-8}$, that is, $\frac{||V_{n}-V_{n-1}||}{||V_n||} <10^{-8}.$ \subsection{DCEP-Hartree-Fock results} We validate our DCEP implementation in the UTEP-NRLMOL code\cite{UTEPNRLMOL} by comparing our results against the HF results from the original DCEP work.\cite{Kohut2014} The NRLMOL code\cite{PhysRevB.41.7453,PhysRevB.42.3276,Pederson2000}, on which UTEP-NRLMOL is based, is a pure density functional code. The Hartree-Fock exchange was introduced into the UTEP-NRLMOL code using a semi-analytic scheme similar to what is used for calculating the Coulomb energy.\cite{UTEPHF} Fig.~\ref{fig:HF-Vxc} shows the DCEP potentials for the Ar atom and Li$_2$ molecule obtained from the HF densities using the NRLMOL basis set\cite{PhysRevA.60.2840}. These potentials compare very well with the OEP exchange potentials reported by Kohut and coworkers\cite{Kohut2014}, thus validating the present implementation of the DCEP method. There is a slight difference around 0.1 a.u. where the first bump in the exchange potential is somewhat less conspicuous in the present result.
We find that this is a basis set artifact; uncontracting the basis closely reproduces the bump in the exchange potential. \begin{figure}[h] \includegraphics[width=\columnwidth]{Fig1.pdf} \caption{\label{fig:HF-Vxc} The exchange potential curves obtained from the Hartree-Fock densities for (a) the argon atom plotted radially and (b) Li$_2$ plotted along the molecular axis. The red curves are the OEP from Ref.~\onlinecite{Kohut2014}, and the black curves are obtained from the implementation in this work. The green circles represent grid points where the data points are calculated.} \end{figure} \section{DCEP in FLOSIC}\label{sec:rks_flosic} We propose to extend the DCEP method to obtain a multiplicative potential from PZSIC orbitals and density. To do so, we start with the PZSIC equations, \begin{equation}\label{eq:pzsic} \Bigg[ -\frac{1}{2} \nabla^{2} + v_{ext}(\vec{r}) + v_{H}(\vec{r}) + v_{XC}(\vec{r}) + V^{i}_{SIC}(\vec{r}) \Bigg] \phi_i= \epsilon _i \phi_i. \end{equation} Here $i$ is the orbital index, $v_{ext}$ is the external potential, $v_H$ is the Coulomb potential, and $V^{i}_{SIC}$ is the self-interaction-correction term for the $i$th orbital. $V^{i}_{SIC}$ consists of the self-Coulomb potential $ U[\rho_i]$ term and the self-exchange-correlation potential $v_{xc}[\rho_i,0]$ term as follows, \begin{eqnarray} V^{i}_{SIC}& = & - \{\, U[\rho_i] + v_{xc}[\rho_i,0] \,\} \\ & = & -\int d^3r\,' \frac{\rho_i(\vec{r}\,')}{\vert \vec{r} - \vec{r}\,'\vert} - v_{xc}[\rho_i,0]. \end{eqnarray} In our case, we use the Fermi-L\"owdin orbital (FLO) implementation of PZSIC\cite{PEDERSON2015153,PhysRevA.95.052505} to determine $V_{SIC}^i$, such that $\phi_i = \phi_i^{FLO}$ and $\rho_i = \rho_i^{FLO} = |\phi_i^{FLO}|^2$. To form FLOs, first, a set of Fermi orbitals $F_i$ is constructed from the density matrix and $N$ positions $a_i$ ($3N$ parameters), called Fermi orbital descriptor (FOD) positions, as \begin{equation} F_i(\vec{r})=\frac{\sum_j^{N}\psi_j(a_i)\psi_j(\vec{r})}{\sqrt{\rho(a_i)}}.
\end{equation} Then, the set of $F_i$'s is orthogonalized using L\"owdin's scheme to obtain the FLOs. As in Sec.~\ref{sec:rks}, we multiply both sides of Eq.~(\ref{eq:pzsic}) by $\phi_i$, sum over $i$ from 1 to $N$, and divide both sides by $\rho_{SIC}(\vec{r})$. This yields the averaged quantities \begin{equation}\label{eq:I_SIC} \frac{\tau(\vec{r})}{\rho_{SIC}(\vec{r})} + v_{ext}(\vec{r}) + v_{H}(\vec{r}) + v_{XC}(\vec{r}) + \overline{V}_{SIC}(\vec{r}) = -\overline{I}^{SIC}(\vec{r}) \end{equation} where $\rho_{SIC}=\sum_i \rho_i$. Here $\overline{V}_{SIC}$ is the density-weighted average of the orbital SIC potentials, \begin{equation}\label{eq:avg_sicv} \overline{V}_{SIC}(\vec{r}) = \sum_{i=1}^{N} V_{SIC}^{i}(\vec{r}) \frac{\rho_i(\vec{r})}{\rho_{SIC}(\vec{r})}. \end{equation} Finally, subtracting Eq. (\ref{eq:I_KS}) from Eq. (\ref{eq:I_SIC}) subject to the density constraint $\rho^{KS}(\vec{r}) = \rho_{SIC}(\vec{r})$, we obtain, in analogy with Eq. (\ref{eq:dcep}), the density-consistent effective potential \begin{equation} v_{XC}^{DCEP}(\vec{r}) =v_{XC}^{SIC}(\vec{r}) + \overline{V}_{SIC}(\vec{r}) + \overline{I}^{SIC} - \overline{I}^{KS} + \frac{\tau_{SIC}(\vec{r})}{\rho_{SIC}(\vec{r})} - \frac{\tau_{KS}(\vec{r})}{\rho_{KS} (\vec{r})}. \end{equation} These SIC terms are obtained from a self-consistent FLOSIC calculation with optimized sets of FODs. Once they are determined, a self-consistent calculation can be performed to obtain the SIC effective potential. \section{Results}\label{sec:results} \subsection{Computational Details} The DCEP method was implemented in a highly scalable version of the UTEP-NRLMOL\cite{UTEPNRLMOL} code developed at UTEP. The code uses a Gaussian orbital basis\cite{PhysRevA.60.2840} and an accurate numerical integration grid scheme\cite{PhysRevB.41.7453}. SI-corrected inputs were obtained from the FLOSIC code\cite{FLOSICcode,FLOSICcodep}, which is based on the NRLMOL code.
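The relative-change stopping criterion $\|V_{n}-V_{n-1}\|/\|V_{n}\|<10^{-8}$ defined in Sec.~II governs the self-consistent loops used throughout this work; a minimal sketch of such a check (the fixed-point update below is a toy stand-in, not the actual UTEP-NRLMOL potential update):

```python
import numpy as np

def scf_converged(v_new, v_old, tol=1e-8):
    """Relative-change test: ||V_n - V_{n-1}|| / ||V_n|| < tol."""
    return np.linalg.norm(v_new - v_old) / np.linalg.norm(v_new) < tol

# Toy fixed-point iteration standing in for a potential update;
# it converges to sqrt(2) at every grid point.
v = np.ones(100)
for iteration in range(200):
    v_new = 0.5 * (v + 2.0 / v)
    converged = scf_converged(v_new, v)
    v = v_new
    if converged:
        break
```

The same test applies unchanged whether $V$ is stored on a numerical grid or as a vector of matrix elements.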
The default grid in the FLOSIC code requires a much higher grid density than standard DFT calculations. The mesh generated with the FLOSIC code was reused for the DCEP method for each system. All calculations use the default NRLMOL\cite{PhysRevA.60.2840} basis set and the PBE exchange-correlation functional\cite{PhysRevLett.77.3865,PhysRevLett.78.1396}. \subsection{Eigenvalues in DCEP-SIC} In the FLOSIC scheme, self-consistency is obtained using a Jacobi-like iterative scheme\cite{PhysRevA.95.052505}, and only the occupied orbitals are affected directly by SIC. The DCEP method allows us to examine the effect of SI-corrected wavefunctions on unoccupied states. A study by Zhang and Musgrave\cite{Zhang2007} compares the HOMO, LUMO, and HOMO-LUMO gaps of 11 functionals with the experimental ionization potential (IP) and the lowest excitation energy for a set of molecules. Following Zhang and Musgrave, we define the experimental HOMO energy as the negative of the experimental ionization energy and the experimental LUMO energy as the sum of the experimental lowest excitation energy and the experimental HOMO energy. All the DFT and TDDFT (with adiabatic kernels) results in our tables and figures other than DCEP-SIC-PBE are from Zhang and Musgrave. Allen and Tozer\cite{allen2002eigenvalues} utilized the procedure of Zhao, Morrison, and Parr\cite{zhao1994electron} to determine the KS eigenvalues and HOMO-LUMO gaps from coupled-cluster BD (Brueckner Doubles) electron densities. For the sake of clarity, we mention that our goal is not to assess the various exchange-correlation functionals, which now number in the hundreds, but to test how good the DCEP-SIC unoccupied orbitals are. For this purpose, we restrict the comparison to the results reported by Zhang and Musgrave\cite{Zhang2007} and Allen and Tozer\cite{allen2002eigenvalues}. We have chosen the C$_2$H$_2$, CO, H$_2$, HF, C$_6$H$_6$, C$_{10}$H$_8$, C$_{14}$H$_{10}$, H$_2$O, NH$_3$, and C$_2$H$_4$ molecules.
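As a concrete illustration of these conventions, the short sketch below builds the experimental reference eigenvalues from an IP and a lowest excitation energy and evaluates the mean absolute error; every number here is a hypothetical placeholder and does not reproduce the data of Ref.~\onlinecite{Zhang2007} or our calculations:

```python
# Experimental references: HOMO_exp = -IP, LUMO_exp = HOMO_exp + E_exc.
# All energies in eV; all values below are illustrative placeholders.
data = {
    #           IP     E_exc   calc HOMO  calc LUMO
    "mol_A": (14.01,  8.51,   -13.20,    -4.10),
    "mol_B": (12.62,  7.40,   -12.10,    -4.50),
}

def mae(pairs):
    """Mean absolute error over (calculated, reference) pairs."""
    return sum(abs(calc - ref) for calc, ref in pairs) / len(pairs)

homo, lumo, gap = [], [], []
for ip, e_exc, h_calc, l_calc in data.values():
    h_exp = -ip              # experimental HOMO reference
    l_exp = h_exp + e_exc    # experimental LUMO reference
    homo.append((h_calc, h_exp))
    lumo.append((l_calc, l_exp))
    gap.append((l_calc - h_calc, e_exc))

mae_homo, mae_lumo, mae_gap = mae(homo), mae(lumo), mae(gap)
```

The gap MAE compares the calculated HOMO-LUMO gap directly against the lowest excitation energy, as in Table~\ref{tab:homolumo}.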
The HOMOs, LUMOs, and HOMO-LUMO gaps calculated with DCEP-SIC are compared with the experimental values and with the results from Ref. \onlinecite{Zhang2007} in Fig. \ref{fig:evals}. The mean absolute errors (MAEs) of the DCEP-SIC method for the HOMOs, LUMOs, and HOMO-LUMO gaps are compared with the MAEs of the 11 functionals used in Ref. \onlinecite{Zhang2007} in Table \ref{tab:homolumo}. We find that the DCEP-SIC eigenvalues perform well in all three categories. The HOMO eigenvalues of the FLOSIC scheme have previously shown good agreement with vertical ionization potentials from CCSD(T) and experiments.\cite{waterpolarizability,doi:10.1063/1.5120532,C9CP06106A,doi:10.1063/5.0041265,doi:10.1063/5.0024776,PhysRevA.103.042811} As detailed in Sec. \ref{sec:rks}, the eigenvalues in DCEP-SIC calculations are shifted such that the DCEP HOMO matches the FLOSIC HOMO value. Thus, the HOMO levels in DCEP-SIC are the same as those in the FLOSIC method. For the chosen systems, the MAE of the FLOSIC HOMO eigenvalues is 1.09 eV with respect to the experimental ionization energies. This MAE is relatively small when compared with the 11 functionals, with only the KMLYP functional, which has about 55.7\% Hartree-Fock exchange, providing a better HOMO prediction (MAE 0.83 eV) than DCEP-SIC (FLOSIC HOMO). Zhang and Musgrave\cite{Zhang2007} found that all 11 functionals provide rather poor estimates of the {\it experimental} LUMO eigenvalues. We find that DCEP-SIC provides good agreement with experiment, with an MAE of 0.73 eV. Although LDA and GGA functionals do not perform well for HOMOs and LUMOs, they all give much better estimates of the HOMO-LUMO gaps when compared with the experimental lowest excitation energies. The MAEs for the HOMO-LUMO gaps for these functionals range from 0.64 to 0.67 eV. These functionals benefit from error cancellation when taking the difference between the HOMO and LUMO energies.
In contrast, the errors are much larger for the hybrid functionals, which are typically implemented in the generalized Kohn-Sham scheme. The mixing of Hartree-Fock exchange with the DFA in hybrid functionals improves the occupied eigenvalues by mitigating self-interaction errors; as a result, the HOMO eigenvalues are more accurate in hybrid functionals, but the LUMO eigenvalues remain poor. The MAEs for the HOMO-LUMO gaps for the hybrid functionals range from 1.04 to 5.15 eV. The present DCEP-SIC method performs much better, with an MAE of 1.01 eV. Zhang and Musgrave have also compared the HOMO-LUMO gaps with the (lowest) TDDFT excitation energies. In the bottom right panel of Fig. \ref{fig:evals}, we show the comparison of the TDDFT excitation energies reported by Zhang and Musgrave with experiment to facilitate a comparison of the HOMO-LUMO gaps shown in the bottom left panel against the TDDFT excitation energies. The plots show that the DCEP-SIC HOMO-LUMO gaps compare well with the TDDFT excitation energies (cf. Table~\ref{tab:homolumo}) for these molecules. Overall, the results show that the functionals that perform best for HOMO eigenvalues (KMLYP, BH, and B1B95) perform worst for gaps, since their LUMO predictions are rather poor. At the same time, the functionals that perform best for gaps, such as PBE and BLYP, are among the worst at predicting HOMO eigenvalues accurately. In contrast, the DCEP-SIC method gives reliable results for all three quantities. By construction, it retains the accuracy of the PZSIC HOMO eigenvalues, and it yields accurate LUMO eigenvalues and thereby accurate HOMO-LUMO gaps, as borne out by the comparison of MAEs in Table~\ref{tab:homolumo}.
\begin{figure*}[hb] \includegraphics[width=1.8\columnwidth]{Fig2.pdf} \caption{\label{fig:evals} The calculated DCEP-SIC HOMO, LUMO, and HOMO-LUMO gaps (in eV) against experimental IPs and lowest excitation energies: (top left) HOMO energies, (top right) LUMO energies, (bottom left) HOMO-LUMO gaps, and (bottom right) TDDFT excitation energies. For comparison, values for the other functionals from Ref.~\onlinecite{Zhang2007} are shown. The solid line indicates the ideal agreement with experiments.} \end{figure*} \begin{table} \caption{\label{tab:homolumo}Mean absolute errors (in eV) of calculated HOMO, LUMO, HOMO-LUMO gaps, and TDDFT excitation energies with respect to experimental ionization potentials and the first excitation energies. } \begin{ruledtabular} \begin{tabular}{ccccc} Method & HOMO & LUMO & HOMO-LUMO & TDDFT\\ \hline DCEP-SIC-PBE & 1.09 & 0.73 & 1.01 & ---\\ SVWN$^\text{*}$ & 3.69 & 3.74 & 0.67 & 1.05\\ BLYP$^\text{*}$ & 4.41 & 4.30 & 0.66 & 0.76\\ BP86$^\text{*}$ & 4.20 & 4.29 & 0.66 & 1.03\\ BPW91$^\text{*}$ & 4.30 & 4.41 & 0.66 & 1.07\\ PW91$^\text{*}$ & 4.24 & 4.21 & 0.64 & 0.99\\ PBE$^\text{*}$ & 4.30 & 4.04 & 0.65 & 1.00\\ B3LYP$^\text{*}$ & 3.13 & 4.93 & 1.80 & 0.85\\ KMLYP$^\text{*}$ & 0.83 & 5.96 & 5.15 & 1.52\\ BH\&HLYP$^\text{*}$ & 1.58 & 6.04 & 4.49 & 1.49\\ O3LYP$^\text{*}$ & 3.67 & 4.67 & 1.04 & 1.05\\ B1B95$^\text{*}$ & 2.90 & 5.44 & 2.52 & 1.20\\ \end{tabular} \end{ruledtabular} \begin{flushleft} $^\text{*}$Errors obtained using the data provided in reference~\onlinecite{Zhang2007}. \end{flushleft} \end{table} \subsection{Approximate photoelectron spectra of polyacenes} We also examine the effect on all occupied eigenvalues by comparing the SIC and DCEP-SIC results to the photoelectron spectra of polyacenes obtained through Ultraviolet Photoelectron Spectroscopy (UPS).\cite{Yamauchi1998,Liu2011} The SIC eigenvalues, with LDA and PBE, shown in the top left plot for benzene in Fig. 
\ref{fig:spectra} are in qualitative agreement with experimental values, but on a much broader energy scale. SIC also introduces an additional peak around 13.0 eV not seen in the experimental spectra. DCEP-SIC compresses the eigenvalue spectrum and provides a much closer fit to the experimental results in terms of the number of peaks and the energy spacing between the eigenvalues. The LDA calculations result in slightly higher energies than PBE but retain similar spacing for both SIC and DCEP-SIC. Similar behavior of SIC versus DCEP-SIC and of DCEP-SIC-LDA versus DCEP-SIC-PBE eigenvalues is also found for the other three polyacenes (naphthalene, anthracene, and tetracene), so only the experimental and DCEP-SIC-PBE results are plotted in Fig. \ref{fig:spectra}. For the larger polyacenes, DCEP-SIC shows good agreement with the experimental spectra for the low- to mid-level energy states. For anthracene, the spectrum is rigidly shifted to higher energies relative to the experimental spectrum. In all cases, DCEP-SIC over-compresses the spectra, so the high-energy (core) states are underestimated. \begin{figure*}[hb] \includegraphics[width=1.8\columnwidth]{Fig3.pdf} \caption{\label{fig:spectra} Calculated spectra of polyacenes: (top left) benzene, (top right) naphthalene, (bottom left) anthracene, and (bottom right) tetracene. The calculated spectra are shifted to match the HOMO eigenvalue with the experimental IP. The DCEP-SIC-PBE spectra are shown in solid black and the DCEP-SIC-LDA results are shown in dashed lines. Experimental UPS results from Refs.~\onlinecite{Yamauchi1998,Liu2011} are shown in blue. For benzene, SIC results with PBE are shown at the top of the plot in solid red and SIC-LDA results are shown using dashed red lines.} \end{figure*} \section{Conclusion} We have implemented the DCEP method to generate multiplicative effective potentials from FLOSIC orbitals and densities.
While the FLOSIC method (within the generalized Kohn-Sham scheme) can provide accurate HOMO energies due to the explicit removal of self-interaction error, the unoccupied states are essentially the same as those of the uncorrected functional. The same behavior is seen for many hybrid functionals. The use of a multiplicative effective potential results in a much improved description of properties related to unoccupied orbitals and so gives an accurate description of the HOMO and LUMO as well as the HOMO-LUMO gap, whereas the 11 functionals tested by Zhang and Musgrave fail for one or more of these quantities. We also find the DCEP-SIC HOMO-LUMO gaps to be comparable to the TDDFT excitation energies of GGAs and hybrid functionals. The present results show that the DCEP-SIC eigenvalues provide a much better description of the experimental spectroscopic results than the standard PZSIC eigenvalues obtained with the FLOSIC formalism. For a set of polyacenes, we find that the DCEP-SIC eigenvalues provide a better approximation of the photoelectron spectra than the standard FLOSIC eigenvalues, as DCEP-SIC corrects the broadened spectra of standard FLOSIC calculations. \section*{Data Availability Statement} The data that support the findings of this study are available within the article. \begin{acknowledgments} The authors acknowledge Dr. Po-Hao Chang for discussions. This work was supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award No. DE-SC0018331. Computational time at the Texas Advanced Computing Center through NSF Grant No. TG-DMR090071 and at NERSC is gratefully acknowledged. \end{acknowledgments} \section*{References}
\section{Details of Minimum ISE} \section{Connection with Clustering Based Algorithms} \subsection{Gaussian Barycenter} \subsubsection{Gaussian Barycenter under KL Divergence} \label{sec:kl_barycenter} We prove Lemma~\ref{lemma:Gaussian_KL_barycenter} in this section. \begin{proof} The KL divergence between two Gaussians is given by \begin{equation*} \begin{split} &2D_{\text{KL}}(\Phi(\cdot|\mu_1,\Sigma_1)\|\Phi(\cdot|\mu_2,\Sigma_2))\\ =&\log \frac{|\Sigma_2|}{|\Sigma_1|} + \text{tr}(\Sigma_2^{-1}\Sigma_1) + (\mu_2-\mu_1)^{\tau}\Sigma_2^{-1}(\mu_2-\mu_1)-d \end{split} \end{equation*} where $|\Sigma|$ denotes the determinant of the matrix $\Sigma$. Therefore, we can write \begin{equation*} \begin{split} L(\mu,\Sigma) &= \sum_{n=1}^{N} \lambda_n D_{\text{KL}}(f_n\|f) \\ & = \frac{1}{2}\sum_{n} \lambda_n \left\{\log |\Sigma|+ \text{tr}(\Sigma^{-1}\Sigma_n)\right\} \\ &+ \frac{1}{2}\sum_{n} \lambda_n (\mu-\mu_n)^{\tau}\Sigma^{-1}(\mu-\mu_n) + C \end{split} \end{equation*} for some constant $C$. We now use the following linear algebra formulas $$ \frac{\partial \log |\Sigma|}{\partial \Sigma} = (\Sigma^{-1})^{\tau} = (\Sigma^{\tau})^{-1}, $$ $$ \frac{\partial \text{tr}(A\Sigma^{-1}B)}{\partial\Sigma} = -(\Sigma^{-1}BA\Sigma^{-1})^{\tau}, $$ and $$\frac{\partial}{\partial x} (x-\mu)^{\tau}\Sigma^{-1}(x-\mu) = 2\Sigma^{-1}(x-\mu) $$ to work out the partial derivatives of $L$ with respect to $\mu$ and $\Sigma$. They are given by \begin{align*} \frac{\partial L}{\partial \mu} &= \sum_{n}\lambda_n \Sigma^{-1}(\mu - \mu_n), \\ 2\frac{\partial L}{\partial \Sigma} &= \Big\{\sum_{n}\lambda_n\Big\}\Sigma^{-1} - \Sigma^{-1}\sum_{n} \lambda_n \left\{ \Sigma_n + (\mu-\mu_n)(\mu-\mu_n)^{\tau}\right\}\Sigma^{-1}. \end{align*} Setting both partial derivatives to $\mathbf{0}$, we obtain \[ \bar{\mu}= \{\sum_{n} \lambda_n\}^{-1}\sum_{n=1}^N\lambda_n\mu_n \] and the covariance \[ \bar{\Sigma} = \{\sum_{n} \lambda_n\}^{-1}\sum_{n=1}^N\lambda_n (\Sigma_n + (\mu_n-\bar{\mu})(\mu_n-\bar{\mu})^{\tau}).
\] Clearly, these solutions are the mean and covariance matrix of the $f$ that minimizes $L(\mu,\Sigma)$. This completes the proof. \end{proof} \subsubsection{Gaussian Barycenter under CS Divergence} \label{sec:CS_barycenter_proof} Similar to the Gaussian barycenter under the KL divergence, in this section we derive the Gaussian barycenter under the CS divergence. \begin{proof} Based on~\eqref{eq:cauchy-schwartz-divergence-gaussian}, let \begin{equation*} \begin{split} L(\mu,\Sigma)=&\sum_{n}\lambda_n D_{\text{CS}}(f_n\|f) \\ =&\sum_{n}\lambda_{n} \{-\log \phi(\mu_n|\mu, \Sigma_n+ \Sigma) - \frac{1}{4}\log|\Sigma|\} + C\\ =&\frac{1}{2}\sum_{n}\lambda_{n}(\mu_n-\mu)^{\tau}(\Sigma_n+\Sigma)^{-1}(\mu_n-\mu)\\ & +\frac{1}{2}\sum_{n}\lambda_{n} \{\log|\Sigma_n+\Sigma|- \frac{1}{2}\log |\Sigma|\} + C \end{split} \end{equation*} for some constant $C$. The gradients are given by $$\frac{\partial L}{\partial \mu} = -\sum_{n}\lambda_n (\Sigma_n+\Sigma)^{-1}(\mu_n-\mu)$$ and \[ \begin{split} &2\frac{\partial L}{\partial \Sigma} = -\frac{1}{2}\left\{\sum_{n} \lambda_n \right\}\Sigma^{-1}\\ &+\sum_{n}\lambda_n(\Sigma_n+\Sigma)^{-1}\left\{I-(\mu_n-\mu)(\mu_n-\mu)^{\tau}(\Sigma_n+\Sigma)^{-1}\right\}. \end{split} \] Setting both partial derivatives to $\mathbf{0}$, we obtain \[ \bar\mu=\left\{\sum_{n}\lambda_n (\Sigma_n+\bar\Sigma)^{-1}\right\}^{-1}\sum_{n}\lambda_n (\Sigma_n+\bar\Sigma)^{-1}\mu_n \] and \[ \bar{\Sigma}^{-1} = 2 \sum_{n}\tilde \lambda_n (\Sigma_n+\bar\Sigma)^{-1}\{ I -(\mu_n-\bar\mu)(\mu_n-\bar\mu)^{\tau}(\Sigma_n+\bar\Sigma)^{-1}\} \] where $\tilde \lambda_n = \lambda_n/\sum_{n}\lambda_n$. This completes the proof.
\end{proof} \subsection{Connection with Hard Clustering Algorithm in~\cite{schieferdecker2009gaussian}} \label{sec:connection_hard_clustering} In this section, we show that when the cost function is chosen to be \begin{equation*} c(\phi_n, \tilde \phi_m) = D_{\text{KL}}(\phi_n\|\tilde \phi_m), \end{equation*} our algorithm reduces to the hard clustering based algorithm in~\cite{schieferdecker2009gaussian}. According to the assignment step in Algorithm~\ref{alg:mm_reduction}, the transportation plan becomes \[ \pi_{nm} = \begin{cases} w_n&\text{if } m = C(n) := \argmin_{m'} c(\phi_n, \tilde \phi_{m'})\\ 0&\text{otherwise}. \end{cases} \] Then the mixing weights become \begin{equation*} \tilde w_m = \sum_{n=1}^{N}\pi_{nm} = \sum_{C(n)=m} w_{n} \end{equation*} and the $m$th subpopulation is updated via the KL barycenter \begin{equation*} \tilde \phi_{m} = \arginf_{\phi} \sum_{n=1}^{N}\pi_{nm} D_{\text{KL}}(\phi_n\| \phi). \end{equation*} By substituting $\lambda_n$ with $\pi_{nm}$ above, the updated subpopulation parameters based on our approach become \begin{equation*} \tilde\mu_{m} = \tilde w_m^{-1} \sum_{C(n)=m}w_n \mu_n \end{equation*} and \begin{equation*} \tilde\Sigma_{m} = \tilde w_m^{-1} \sum_{C(n)=m}w_n\{\Sigma_n + (\mu_n-\tilde\mu_m)(\mu_n-\tilde\mu_m)^{\tau}\}, \end{equation*} which are the same as the moment-matching updates in the hard clustering algorithm in Algorithm~\ref{alg:hard_clustering}. \section{Connection With Optimization Based Algorithms} \subsection{Proof for Equation~\ref{eq:barycenter_equiv}} \label{app:CTD_equiv} \begin{proof} We give the proof separately for the following two cases.
\noindent \textbf{ISE} When $c(\cdot,\cdot)=D_{\text{ISE}}(\cdot,\cdot)$, according to~\eqref{eq:ISE}, the objective function on the LHS of~\eqref{eq:barycenter_equiv} is \begin{align*} &\sum_{n=1}^{N} w_n D_{\text{ISE}}(\phi_n, \tilde\phi)\\ =&\sum_{n=1}^{N} w_n \left\{\int \phi_n^2(x) dx + \int \tilde \phi^2(x) dx - 2\int \phi_n(x) \tilde \phi(x) dx\right\}. \end{align*} The RHS of~\eqref{eq:barycenter_equiv} is \begin{align*} &D_{\text{ISE}}\left(\sum_{n=1}^{N} w_n \phi_{n}, \tilde \phi\right)= \int \left\{\sum_{n=1}^{N} w_n \phi_{n}(x) - \tilde \phi(x)\right\}^2 dx\\ =&\int \left\{ \sum_{n=1}^{N} w_n\phi_{n}(x)\right\}^2 dx + \int \tilde \phi^2(x) dx- 2\sum_{n} w_n \int \phi_{n}(x)\tilde \phi(x)dx\\ =& C + \sum_{n=1}^{N} w_n D_{\text{ISE}}(\phi_{n}, \tilde \phi) \end{align*} where $C$ is some constant that does not depend on $\tilde\phi$. This relationship implies that~\eqref{eq:barycenter_equiv} holds when the cost function is the ISE. \noindent \textbf{KL divergence} \begin{equation*} \begin{split} &D_{\mathrm{KL}}\left(\sum_{n=1}^{N} w_n\phi_{n} \| \tilde\phi\right)\\ =&\int \left\{\sum_{n=1}^{N} w_n\phi_{n}(x)\right\} \log \left \{\frac{\sum_{n} w_n\phi_{n}(x)}{\tilde \phi (x)}\right \} dx\\ =&C_1 - \sum_{n} w_n \int \phi_{n}(x) \log \tilde \phi (x) dx\\ =&C_2 + \sum_{n} w_n \int \phi_{n}(x) \log \frac{\phi_{n}(x)}{\tilde \phi (x)} dx \\ =&C_2 + \sum_{n} w_n D_{\mathrm{KL}}(\phi_{n}\|\tilde \phi) \end{split} \end{equation*} where $C_1$ and $C_2$ are constants that do not depend on $\tilde\phi$. This relationship implies that~\eqref{eq:barycenter_equiv} holds when the cost function is the KL divergence. \end{proof} \subsection{Proof for Theorem~\ref{thm:upper_bound}} \label{app:CTD_is_upper_bound} \begin{proof} Let ${\boldsymbol\pi}\in\Pi({\mathbf{w}}, \tilde {\mathbf{w}})$ be a transportation plan. Then we can write $p_{G} = \sum_{n} w_n \phi_n = \sum_{n}\sum_{m}\pi_{nm}\phi_n$ and, similarly, $p_{\tilde G} = \sum_{n,m}\pi_{nm}\tilde \phi_m$.
Therefore, \begin{equation*} \begin{split} c(p_{G}, p_{\tilde G}) &= c(\sum_{n}w_n\phi_n, \sum_{m} \tilde w_m \tilde \phi_m)\\ &= c(\sum_{n,m}\pi_{nm}\phi_n, \sum_{n,m} \pi_{nm} \tilde \phi_m)\\ &\leq \sum_{n,m}\pi_{nm} c(\phi_n, \tilde \phi_m). \end{split} \end{equation*} The last inequality holds because of the ``convexity'' property of the cost function. Since this inequality holds for any ${\boldsymbol\pi}$, taking the infimum with respect to ${\boldsymbol\pi}$ on the right-hand side gives $$c(p_{G}, p_{\tilde G})\leq \mathcal{T}_{c}^{0}(p_{G}, p_{\tilde G}),$$ which finishes the proof. \end{proof} \subsection{Convexity of Cost Functions} \subsubsection{ISE is Convex} \label{app:ISE_convexity} \begin{proof} Based on the definition of $D_{\text{ISE}}$, we have \begin{equation*} \begin{split} &D_{\text{ISE}}(\alpha f_1+(1-\alpha) f_2, \alpha \phi_1+(1-\alpha) \phi_2)\\ =&\int \{\alpha \{f_1(x)-\phi_1(x)\}+(1-\alpha) \{f_2(x) - \phi_2(x)\}\}^2 dx\\ =&\alpha^2D_{\text{ISE}}(f_1,\phi_1) +(1-\alpha)^2D_{\text{ISE}}(f_2,\phi_2) \\ &+ 2\alpha(1-\alpha)\int (f_1(x)-\phi_1(x))(f_2(x)-\phi_2(x))dx\\ :=&\alpha^2D_{\text{ISE}}(f_1,\phi_1) +(1-\alpha)^2D_{\text{ISE}}(f_2,\phi_2) \\ &+ 2\alpha(1-\alpha) \langle f_1-\phi_1, f_2-\phi_2\rangle. \end{split} \end{equation*} Therefore, we have \begin{equation*} \begin{split} &\alpha D_{\text{ISE}}(f_1,\phi_1) + (1-\alpha)D_{\text{ISE}}(f_2,\phi_2) \\ &- D_{\text{ISE}}(\alpha f_1+(1-\alpha) f_2, \alpha \phi_1+(1-\alpha) \phi_2)\\ =&\alpha(1-\alpha)D_{\text{ISE}}(f_1,\phi_1) + \alpha(1-\alpha)D_{\text{ISE}}(f_2,\phi_2)\\ &- 2\alpha(1-\alpha)\langle f_1-\phi_1, f_2-\phi_2\rangle\\ =&\alpha(1-\alpha)\{D_{\text{ISE}}(f_1,\phi_1) + D_{\text{ISE}}(f_2,\phi_2) \\ &- 2\langle f_1-\phi_1, f_2-\phi_2\rangle\}\\ =&\alpha(1-\alpha) D_{\text{ISE}}(f_1-\phi_1, f_2-\phi_2)\geq 0. \end{split} \end{equation*} The last inequality holds because the ISE is non-negative and $\alpha\in(0,1)$.
We therefore have $D_{\text{ISE}}(\alpha f_1+(1-\alpha) f_2, \alpha \phi_1+(1-\alpha) \phi_2)\leq \alpha D_{\text{ISE}}(f_1,\phi_1) + (1-\alpha)D_{\text{ISE}}(f_2,\phi_2)$, which finishes the proof. \end{proof} \subsubsection{CS Divergence is Nonconvex} \label{app:CS_convexity_counter_example} In this section, we present a counterexample to the convexity of the CS divergence. That is, we show that there exist $f_1$, $f_2$, $\phi_1$, $\phi_2$, and $\alpha$ such that $D_{\text{CS}}(\alpha f_1+(1-\alpha) f_2, \alpha \phi_1+(1-\alpha) \phi_2) > \alpha D_{\text{CS}}(f_1,\phi_1) + (1-\alpha)D_{\text{CS}}(f_2,\phi_2)$. Due to the form of the CS divergence, we equivalently show that \begin{equation} \label{eq:cs_nonconvex_obj} \begin{split} &-D_{\text{CS}}(\alpha f_1+(1-\alpha) f_2, \alpha \phi_1+(1-\alpha) \phi_2) \\ <&-\alpha D_{\text{CS}}(f_1,\phi_1) - (1-\alpha)D_{\text{CS}}(f_2,\phi_2). \end{split} \end{equation} Let $\alpha=0.5$, $f_1(x) = \phi(x|-1, \sigma^2)$, $f_2(x) = \phi(x|-\mu,1)$, $\phi_1(x)=\phi(x|1,1)$, and $\phi_2(x) = \phi(x|\mu,\sigma^2)$ for some $\mu>0$ and $\sigma>0$.
Then based on the closed form of the CS divergence, it is easy to see that \begin{equation*} \begin{split} D_{\text{CS}}(f_1, \phi_1) &= -\log \phi(1|-1, 1+\sigma^2) - \frac{1}{4}\log(16\pi^2\sigma^2)\\ D_{\text{CS}}(f_2, \phi_2) &= -\log \phi(\mu|-\mu, 1+\sigma^2) - \frac{1}{4}\log(16\pi^2\sigma^2) \end{split} \end{equation*} Therefore, the RHS of~\eqref{eq:cs_nonconvex_obj} becomes \begin{equation*} \text{RHS} =\frac{1}{2}\{\log \phi(1|-1,1+\sigma^2) + \log \phi(\mu|-\mu, 1+\sigma^2) + \log(4\pi\sigma)\} \end{equation*} We also have the LHS of~\eqref{eq:cs_nonconvex_obj} \begin{equation*} \begin{split} &\text{LHS}\\ =&\log\{\frac{1}{4}\{\phi(1|-1,1+\sigma^2) + \phi(\mu|-\mu,1+\sigma^2)\\ &+ \phi(\mu|-1,2\sigma^2)+\phi(-\mu|1,2)\}\}\\ &-\log \{\frac{1}{4}\{(4\pi\sigma^2)^{-1/2}+(4\pi)^{-1/2}\}+ \frac{1}{2}\phi(\mu|1,1+\sigma^2)\}\\ \end{split} \end{equation*} The difference between LHS and RHS for various values of $\mu$ and $\sigma$ is shown in Figure~\ref{fig:non_convexity_CS}. It can be seen from the figure that the surface plot is a saddle surface and is not always negative. Therefore, the CS divergence is not convex. \begin{figure}[htpb] \centering \includegraphics[width=\columnwidth]{./figure/misc/non_convexity_of_CS.png} \caption{The difference $\alpha D_{\text{CS}}(f_1,\phi_1) + (1-\alpha)D_{\text{CS}}(f_2,\phi_2)-D_{\text{CS}}(\alpha f_1+(1-\alpha) f_2, \alpha \phi_1+(1-\alpha) \phi_2)$ when $\alpha=0.5$, $f_1(x) = \phi(x|-1, \sigma^2)$, $f_2(x) = \phi(x|-\mu,1)$, $\phi_1(x)=\phi(x|1,1)$, and $\phi_2(x) = \phi(x|\mu,\sigma^2)$ for different values of $\mu$ and $\sigma$.} \label{fig:non_convexity_CS} \end{figure} \section{Generalization to Reduce Mixtures from Exponential Families} \label{app:generalization_exponential_family} In this section, we discuss the generalization of our proposed algorithm to the reduction of mixtures from full exponential families. 
\begin{table*}[htb] \centering \scriptsize \caption{Parameterization of widely used exponential families.} \begin{tabular}{llllll} \toprule ${\mathcal{F}}$ & $\theta$ & $T$ & $A$ & $h$ & ${\mathbb{E}}\{h\}$\\ \midrule \multicolumn{6}{c}{Univariate distributions}\\ Exponential & $-\lambda$ & $x$ & $-\frac{1}{\theta}$ & 1 & 1\\ Weibull (known $k$)& $-\frac{1}{\lambda^k}$ & $x^k$ & $-\frac{1}{\theta}$ & $kx^{k-1}$ & $\frac{k!}{\lambda^{k-1}}$ \\ Laplace (known $\mu$)& $-\frac{1}{b}$ & $|x-\mu|$ & $-\frac{2}{\theta}$ & $1$ & 1\\ Rayleigh& $-\frac{1}{2\sigma^2}$ & $x^2$ & $-\frac{1}{2\theta}$ & $x$ & $\sigma\sqrt{\frac{\pi}{2}}$\\ Log-normal& $(\frac{\mu}{\sigma^2}, -\frac{1}{2\sigma^2})^{\tau}$ & $(\log x, (\log x)^2)^{\tau}$ & $\exp(\frac{\theta_1^2}{\theta_2})\frac{1}{\sqrt{-2\theta_2}}$ & $\frac{1}{\sqrt{2\pi} x}$ & $\frac{1}{\sqrt{2\pi}}\exp(\frac{\theta_1}{2\theta_2}-\frac{1}{4\theta_2})$\\ Gamma&$(-\beta,\alpha-1)^{\tau}$ & $(x,\log x)$ & $\Gamma(\theta_2+1)(-\theta_1)^{-(\theta_2+1)}$ & $1$& $1$\\ Inverse Gamma& $(-\beta,-\alpha-1)^{\tau}$ & $(x,\log x)$ & $\Gamma(-\theta_2-1)(-\theta_1)^{\theta_2+1}$ & $1$& $1$\\ \midrule \multicolumn{6}{c}{Multivariate distributions}\\ Gaussian Gamma&$(\alpha-\frac{1}{2}, -\beta-\frac{\lambda\mu^2}{2}, \lambda\mu, -\frac{\lambda}{2})^{\tau}$ & $(\log \tau, \tau, \tau x, \tau x^2)^{\tau}$ & $\Gamma(\theta_1+\frac{1}{2})\frac{1}{\sqrt{-2\theta_4}} (\frac{\theta_3^2}{4\theta_4}-\theta_2)^{-(\theta_1+\frac{1}{2})}$ & $\frac{1}{\sqrt{2\pi}}$& $\frac{1}{\sqrt{2\pi}}$\\ Dirichlet& $\boldsymbol\alpha-1$ & $\log {\bm{x}}$ & $\exp\{\mathbbm{1}_K^{\tau}\log \Gamma(\boldsymbol\alpha) - \log \{\mathbbm{1}_K^{\tau}\Gamma(\boldsymbol\alpha)\}\}$ & $1$& $1$\\ \bottomrule \end{tabular} \label{tab:ISE_exp_family} \end{table*} \begin{definition}[Full Exponential Family] ${\mathcal{F}} = \{f(x|\theta): \theta\in \Theta\}$ is said to be a full exponential family if its density can be represented as \begin{equation} f(x|\theta) =
h(x)\exp(\theta^{\tau}T(x) - \log A(\theta)) \end{equation} where $\theta = (\theta_1, \theta_2,\ldots,\theta_m)^{\tau}$ is called the natural parameter and $T(x)=(T_1(x), T_2(x),\ldots, T_m(x))^{\tau}$ is called the natural sufficient statistic. The parameter space $\Theta$ is given by $$\Theta=\{\theta\in{\mathbb{R}}^{m}: \int \exp(\theta^{\tau}T(x))dx<\infty\}.$$ \end{definition} The natural parameters and the natural sufficient statistics for some widely used exponential families are given in Table~\ref{tab:ISE_exp_family}. We first consider the case where the cost function is the KL divergence. Within an exponential family, the KL divergence between two density functions has the form \begin{equation*} \begin{split} D_{\mathrm{KL}}(f(\cdot|\theta_1), f(\cdot|\theta_2)) =& (\theta_1-\theta_2)^{\tau} {\mathbb{E}}_{\theta_1}\{T(X)\} - \log \frac{A(\theta_1)}{A(\theta_2)}\\ :=&(\theta_1-\theta_2)^{\tau} \mu(\theta_1) - \log \frac{A(\theta_1)}{A(\theta_2)} \end{split} \end{equation*} where $\mu(\theta) = {\mathbb{E}}_{\theta}\{T(X)\}$. Therefore, the KL divergence can be computed in closed form as long as $\mu(\theta)$ has a closed form. The barycenter of $f_1,f_2,\ldots, f_N \in {\mathcal{F}}$ under the KL divergence, which minimizes $\sum_{n=1}^{N} \lambda_n D_{\mathrm{KL}}(f(\cdot|\theta_n), f(\cdot|\theta))$, is given by $f(\cdot|\bar{\theta})$ where $\bar\theta$ is the solution to \begin{equation} \label{eq:KL_barycenter} \frac{\partial}{\partial \theta} \log A(\theta) = \sum_{n=1}^{N}\frac{\lambda_n}{\sum_{n} \lambda_n} \mu(\theta_n). \end{equation} This equality can be further simplified since $\mu(\theta) = \frac{\partial}{\partial \theta}\log A(\theta)$, which can be shown by differentiating the normalization identity $\int f(x|\theta) dx = 1$ with respect to $\theta$.
Therefore, by simplifying~\eqref{eq:KL_barycenter}, we can see that the parameter of the barycenter under the KL divergence has the closed form \begin{equation} \label{eq:KL_barycenter_closed_form} \bar\theta = \mu^{-1}\left(\sum_{n=1}^{N}\frac{\lambda_n}{\sum_{n} \lambda_n} \mu(\theta_n)\right). \end{equation} This conclusion is also established in~\cite{liu2014distributed}, where the KL barycenter is used to combine local estimates in distributed learning. In the context of mixture reduction, a similar discussion is carried out in~\cite{ardeshiri2013reduction}. We next discuss the case where the cost function is the ISE. The ISE between two density functions has the form \begin{equation*} \begin{split} &D_{\text{ISE}}(f(\cdot|\theta_1), f(\cdot|\theta_2))\\ =&\int (f(x|\theta_1)-f(x|\theta_2))^2 dx\\ =&\sum_{i=1}^2 \frac{A(2\theta_i)}{A^2(\theta_i)} {\mathbb{E}}_{2\theta_i}\{h(X)\} - 2\frac{A(\theta_1+\theta_2)}{A(\theta_1)A(\theta_2)} {\mathbb{E}}_{\theta_1+\theta_2}\{h(X)\}. \end{split} \end{equation*} The ISE therefore has a closed form as long as ${\mathbb{E}}_{\theta}\{h(X)\}$ has a closed form. The form of $H(\theta)={\mathbb{E}}_{\theta}\{h(X)\}$ for the widely used exponential families is given in the last column of Table~\ref{tab:ISE_exp_family}. It can be seen from the table that for distributions from these exponential families, the ISE has a closed form. The ISE barycenter of $f_1,f_2,\ldots, f_N \in {\mathcal{F}}$ minimizes the objective function $${\mathcal{L}}(\theta) = \sum_{n} \lambda_n \left\{\frac{A(2\theta)}{A^2(\theta)} H(2\theta) - 2\frac{A(\theta_n+\theta)}{A(\theta_n)A(\theta)} H(\theta_n+\theta)\right\}.$$ This objective function is in general non-convex, and numerical methods are required to find a local minimum.
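As a concrete check of the closed form in Eq.~(\ref{eq:KL_barycenter_closed_form}), the sketch below computes the KL barycenter of exponential densities, for which $\theta=-\lambda$ and $\mu(\theta)=-1/\theta$ (the mean; see Table~\ref{tab:ISE_exp_family}), and verifies it against a direct grid minimization of the weighted KL objective. The weights and rates are arbitrary illustrative choices:

```python
import numpy as np

# Exponential family Exp(rate): theta = -rate, mu(theta) = -1/theta (the mean).
rates = np.array([1.0, 2.0, 4.0])   # component rate parameters (arbitrary)
lam = np.array([0.2, 0.3, 0.5])     # barycenter weights, already normalized

# Closed form: theta_bar = mu^{-1}( sum_n lam_n * mu(theta_n) ).
mean_bar = float(np.sum(lam / rates))   # weighted average of the means
rate_bar = 1.0 / mean_bar               # mu^{-1}(m) = -1/m, so rate = -theta

def kl_exp(a, b):
    """KL( Exp(a) || Exp(b) ) in closed form."""
    return np.log(a / b) + b / a - 1.0

# Direct minimization of sum_n lam_n * KL(f_n || f) over a grid of rates.
grid = np.linspace(0.5, 5.0, 4501)
objective = np.array([np.sum(lam * kl_exp(rates, b)) for b in grid])
rate_grid = float(grid[np.argmin(objective)])
```

The grid minimizer agrees with the closed-form barycenter up to the grid spacing, as expected since the weighted KL objective for this family has a unique stationary point.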
\section{Approximate Inference} \label{app:tracking} \begin{lemma}[Product of Gaussian Densities I] \label{lemma:product_gaussian_prediction} Let $\phi(x|A\mu+b,\Sigma)$ and $\phi(\mu|\mu_0,\Sigma_0)$ be two Gaussian densities, then $$\phi(x|A\mu+b,\Sigma)\phi(\mu|\mu_0,\Sigma_0) = C \phi(\mu|\hat \mu,\hat \Sigma)$$ and $$\int \phi(x|A\mu+b,\Sigma)\phi(\mu|\mu_0,\Sigma_0) d\mu = C$$ where $\hat \Sigma = (\Sigma_0^{-1}+A^{\tau}\Sigma^{-1}A)^{-1}$, $\hat \mu = \hat \Sigma(\Sigma_0^{-1}\mu_0 + A^{\tau}\Sigma^{-1}(x-b))$ and \begin{equation*} C = \phi(x|A\mu_0+b,\Sigma+A\Sigma_0A^{\tau}). \end{equation*} \end{lemma} \begin{lemma}[Product of Gaussian Densities II] \label{lemma:product_gaussian} Let $\phi(x|\mu_1,\Sigma_1)$ and $\phi(x|\mu_2,\Sigma_2)$ be two Gaussian densities, then $$\phi(x|\mu_1,\Sigma_1)\phi(x|\mu_2,\Sigma_2) = C \phi(x|\mu_3, \Sigma_3)$$ and $$\int \phi(x|\mu_1,\Sigma_1)\phi(x|\mu_2,\Sigma_2) dx = C$$ where $\Sigma_3 = (\Sigma_1^{-1}+\Sigma_2^{-1})^{-1}$, $\mu_3 = \Sigma_3(\Sigma_1^{-1}\mu_1 + \Sigma_2^{-1}\mu_2)$ and \begin{equation*} C = \phi(\mu_1|\mu_2,\Sigma_1+\Sigma_2). \end{equation*} \end{lemma} \section{Introduction}\label{sec:introduction}} \IEEEPARstart{L}{et} ${\mathcal{F}} = \{\phi(x|\theta) = |2\pi \Sigma|^{-1/2} \exp\{-\frac{1}{2} (x-\mu)^{\tau}\Sigma^{-1} (x-\mu)\}: \theta\in \Theta\}$ be the family of density functions of Gaussian distributions. For Gaussian densities, the parameter $\theta=(\mu,\Sigma)$ lies in $\Theta = {\mathbb{R}}^{d}\times {\mathcal{S}}_{d}$ where ${\mathcal{S}}_{d}$ is the space of symmetric positive definite matrices of dimension $d$. Let $G= \sum_{n=1}^{N} w_n \delta_{\theta_n}$ be a discrete probability measure on $\Theta$ for some integer $N>0$.
The density function of a finite Gaussian mixture model (GMM) is defined to be \begin{equation*} p_{G}(x) := \int \phi(x|\theta) dG(\theta) = \sum_{n=1}^{N} w_n \phi(x|\theta_n) := \sum_{n=1}^{N} w_n \phi_n(x). \end{equation*} For ease of notation, we use $\phi_{n}(x)$ for $\phi(x|\theta_n)$. The discrete probability measure $G$ is called the mixing distribution. The components of the vector $\mbox{\bf w} = (w_1,w_2,\cdots,w_N)^{\tau}$ are called the mixing weights and the atoms $\theta_{n}$ are called the subpopulation or component parameters. The space of mixing distributions with order up to $N$ is denoted as $${\mathbb{G}}_{N} = \big \{ G=\sum_{n=1}^N w_n\delta_{\theta_n}|\theta_n \in \Theta, w_n\geq 0, \sum_{n=1}^N w_n=1 \big\}.$$ An order $N$ mixture has its mixing distribution in the space ${\mathbb{G}}_{N}-{\mathbb{G}}_{N-1}$. It is often noted~\cite{titterington1985statistical,nguyen2020approximation} that a GMM can approximate any density function arbitrarily well; GMMs are therefore widely used as a parametric family to approximate distributions of complex shape. There is, however, a trade-off between the accuracy of the approximation and computational efficiency. A larger number of components improves the approximation precision but increases the computational cost for downstream applications. For instance, it increases the cost of evaluating the log-likelihood function. Moreover, when finite mixture models are used to approximate density functions in some Bayesian inference procedures, the number of components of the mixture increases exponentially~\cite{manzar2017recursive} due to recursive operations. An example is belief propagation for finding the marginal probabilities in graphical models.
When the messages--the reusable partial sums for the marginalization calculations--are modeled as Gaussian mixtures~\cite{sudderth2010nonparametric}, their number of components increases exponentially and quickly becomes intractable as the iterations proceed. In recursive Bayesian filtering under a hidden Markov model, when the transition density and the likelihood are both Gaussian mixtures, the posterior distribution is also a mixture whose order increases exponentially. In these cases, to make inferences within a reasonable amount of time, intermediate approximation steps can be used to prevent the number of components of the mixture from exploding. \emph{Gaussian mixture reduction} (GMR) is a technique useful for such purposes. GMR aims at approximating an $N$-component mixture $p_{G}(x) =\sum_{n=1}^{N} w_n \phi_n(x)$ by $p_{\tilde G}(x) = \sum_{m=1}^{M} \tilde w_m \phi(x|\tilde \theta_m)=\sum_{m=1}^{M} \tilde w_m \tilde \phi_m(x)$ where $M \leq N$. They are usually referred to as the original and the reduced mixtures respectively. There is a rich literature on GMR and most approaches belong to one of three general types: \emph{greedy algorithm based}~\cite{salmond1990mixture,runnalls2007kullback,huber2008progressive}, \emph{optimization based}~\cite{williams2006cost}, and \emph{clustering based}~\cite{schieferdecker2009gaussian,goldberger2005hierarchical,davis2007differential, assa2018wasserstein,zhang2010simplifying,vasconcelos1999learning,yu2018density} approaches. We refer to~\cite{crouse2011look} for a thorough review of these approaches. In this paper, we focus on the optimization based and clustering based approaches. The optimization based approach~\cite{williams2006cost} has the advantage of a well-defined optimality target but is computationally difficult to optimize. The clustering based approaches have the advantage of computational efficiency.
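The blow-up can be made concrete in one dimension: by the product-of-Gaussians identity (cf.\ Lemma~\ref{lemma:product_gaussian}), the renormalized product of an $N$-component and an $M$-component Gaussian mixture, as arises in one Bayesian update, is a mixture with $NM$ components. A minimal sketch, with function names and example mixtures of our own choosing:

```python
import math

def npdf(x, mu, var):
    # univariate Gaussian density
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def product_mixture(gmm1, gmm2):
    # Each gmm is a list of (weight, mean, variance) triples. The product
    # of the two mixture densities, renormalized, is again a Gaussian
    # mixture with len(gmm1) * len(gmm2) components.
    out = []
    for w1, m1, v1 in gmm1:
        for w2, m2, v2 in gmm2:
            v3 = 1.0 / (1.0 / v1 + 1.0 / v2)
            m3 = v3 * (m1 / v1 + m2 / v2)
            c = npdf(m1, m2, v1 + v2)          # product-lemma constant
            out.append((w1 * w2 * c, m3, v3))
    z = sum(w for w, _, _ in out)              # normalizing constant
    return [(w / z, m, v) for w, m, v in out]

prior = [(0.5, -1.0, 1.0), (0.5, 1.0, 1.0)]                        # 2 components
likelihood = [(0.3, 0.0, 0.5), (0.4, 2.0, 0.5), (0.3, -2.0, 0.5)]  # 3 components
posterior = product_mixture(prior, likelihood)
assert len(posterior) == 6    # 2 x 3 blow-up after a single update
```

Iterating this update multiplies the order at every step, which is precisely what GMR is designed to control.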
The clustering based approaches are motivated by the \emph{k-means} algorithm in Euclidean space. Despite their computational efficiency, it is unclear whether these clustering based algorithms converge or what optimality target they attain when they do. In this paper, we propose an optimization based approach for GMR that combines the advantages of the clustering based and optimization based approaches. Our proposed reduced mixture minimizes the \emph{composite transportation divergence} (CTD) from the original mixture over mixtures of the desired order. An efficient Majorization-Minimization (MM) algorithm is designed for the proposed optimization problem. We show that the clustering based algorithms in the literature are special cases of our proposed MM algorithm. By establishing this connection, we are able to show the \emph{convergence} and the \emph{optimality goal} of the clustering based algorithms. Our formulation for GMR also allows us to easily develop more ``effective'' approaches and to extend the method to the reduction of mixtures from other families. The rest of the paper is organized as follows. In Section~\ref{sec:overview}, we review the optimization based and clustering based approaches in the literature. The composite transportation divergence, which is the foundation of our proposed method, is introduced in Section~\ref{sec:composite_wdistance}. Our proposed method is given in Section~\ref{sec:problem_formulation}, where we also discuss its connection with existing methods and possible generalizations. Numerical experiments comparing different approaches are given in Section~\ref{sec:exp}. \section{Overview of Existing Algorithms} \label{sec:overview} We review the optimization based and clustering based approaches in the literature. \subsection{Optimization Based Algorithm} GMR is formulated as an optimization problem in~\cite{williams2006cost} where the reduced mixture minimizes some divergence to the original mixture.
The reduced mixture is defined as \begin{equation} \label{eq:minimum_distance} p_{\tilde G} = \argmin_{\tilde G \in {\mathbb{G}}_{M}} D(p_{G}, p_{\tilde G}) \end{equation} for some divergence $D(\cdot, \cdot)$. The advantage of the optimization based approach is that the reduced mixture is known to achieve an optimality goal. In~\cite{williams2006cost}, $D(\cdot, \cdot)$ is chosen to be the Integral Squared Error (ISE), the squared $L_2$ distance between the two Gaussian mixtures, due to its closed form: \begin{equation} \label{eq:ISE} \begin{split} D_{\text{ISE}}(p_{G}, p_{\tilde{G}}) &:= \int (p_{G}(x) - p_{\tilde{G}}(x))^2 dx\\ &=\mbox{\bf w}^{\tau}\mathbf{S}_{OO}\mbox{\bf w} -2\mbox{\bf w}^{\tau}\mathbf{S}_{OR} \tilde{\mbox{\bf w}} + \tilde{\mbox{\bf w}}^{\tau} \mathbf{S}_{RR}\tilde{\mbox{\bf w}} \end{split} \end{equation} where $\mathbf{S}_{OO}$, $\mathbf{S}_{OR}$, and $\mathbf{S}_{RR}$ are three matrices of sizes $N\times N$, $N\times M$, and $M\times M$ respectively. Their $(n,m)$th elements are respectively given by $\phi(\mu_n|\mu_m,\Sigma_n+\Sigma_m)$, $\phi(\mu_n|\tilde\mu_m,\Sigma_n+\tilde\Sigma_m)$, and $\phi(\tilde\mu_n|\tilde\mu_m,\tilde\Sigma_n+\tilde\Sigma_m)$. Although the objective function has a closed form, numerical algorithms must be employed for minimization. The Quasi-Newton Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is used in~\cite{williams2006cost} for optimization. In this approach, evaluating the gradient at each step costs $O(NMd^3)$ and the dimension of the parameter is of order $O(d^2)$. Therefore, directly minimizing the ISE becomes very expensive as the dimension $d$ increases. \subsection{Clustering Based Algorithm} The clustering based algorithm~\cite{schieferdecker2009gaussian} that mimics the idea of the \emph{k-means} algorithm is a computationally efficient approach for GMR. It aims at partitioning the components of the original mixture into clusters, with the number of clusters equal to the order of the reduced mixture.
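In the univariate case, the closed form in~\eqref{eq:ISE} follows from the Gaussian convolution identity $\int \phi(x|\mu_1,\sigma_1^2)\phi(x|\mu_2,\sigma_2^2)dx = \phi(\mu_1|\mu_2,\sigma_1^2+\sigma_2^2)$ and can be checked against numerical integration; the representation of mixtures as lists of triples and the test mixtures are ours.

```python
import math

def npdf(x, mu, var):
    # univariate Gaussian density
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def ise(gmm1, gmm2):
    # Closed-form ISE: w'S_OO w - 2 w'S_OR w~ + w~'S_RR w~ where the S
    # entries are npdf(m_i, m_j, v_i + v_j) in the univariate case.
    def cross(g, h):
        return sum(wg * wh * npdf(mg, mh, vg + vh)
                   for wg, mg, vg in g for wh, mh, vh in h)
    return cross(gmm1, gmm1) - 2 * cross(gmm1, gmm2) + cross(gmm2, gmm2)

orig = [(0.4, -1.0, 0.6), (0.6, 1.5, 0.8)]   # original mixture
red = [(1.0, 0.5, 2.0)]                      # a candidate reduced mixture
val = ise(orig, red)

# brute-force check by Riemann sum over a wide grid
def mix(x, g):
    return sum(w * npdf(x, m, v) for w, m, v in g)

dx = 0.001
num = sum((mix(-10.0 + i * dx, orig) - mix(-10.0 + i * dx, red)) ** 2
          for i in range(20001)) * dx
assert abs(val - num) < 1e-4
```

The closed form makes~\eqref{eq:ISE} cheap to evaluate, but, as noted above, its minimization over the reduced mixture's parameters still requires iterative numerical optimization.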
Given initial cluster centers, which are Gaussians, the clustering based algorithm consists of the following two steps: \begin{enumerate} \item \emph{Assignment step}: each component of the original mixture is assigned to a cluster based on its distance to the cluster centers; \item \emph{Update step}: the new cluster centers, which are Gaussians, are updated according to certain criteria. \end{enumerate} These two steps are applied iteratively until a convergence criterion has been met. The components of the reduced mixture are then these cluster centers, whose corresponding mixing weights are the sums of the weights of the components of the original mixture that belong to the same cluster. As in the case of clustering in a vector space, the clustering based algorithms for GMR can also be classified into hard and soft clustering. Each component of the original mixture is assigned to one and only one cluster in hard clustering and to each cluster with a certain probability in soft clustering. \noindent \textbf{Hard Clustering} At the assignment step, the original mixture components are partitioned based on their minimum KL divergence~\cite{schieferdecker2009gaussian} to the cluster centers. The $n$th component of the original mixture is assigned to the $j$th cluster if the $j$th cluster center is closest in the KL divergence. At the update step, the components in the same cluster are merged into a single distribution in ${\mathcal{F}}$ via moment matching. That is, the first two moments of the new cluster center, which is a Gaussian distribution, equal the first two moments of the mixture formed by the components belonging to the same cluster. See Algorithm~\ref{alg:hard_clustering}. Other similarity measures such as the Wasserstein distance have also been considered~\cite{assa2018wasserstein}.
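The moment-matching merge in the update step can be sketched as follows in one dimension (the cluster data and function name are ours):

```python
def moment_match(components):
    # Merge a cluster of Gaussian components (w_n, mu_n, var_n) into a
    # single Gaussian whose first two moments match the sub-mixture's.
    w = sum(wn for wn, _, _ in components)
    mu = sum(wn * mn for wn, mn, _ in components) / w
    var = sum(wn * (vn + (mn - mu) ** 2) for wn, mn, vn in components) / w
    return w, mu, var

cluster = [(0.3, 0.0, 1.0), (0.2, 2.0, 0.5)]
w, mu, var = moment_match(cluster)
assert abs(w - 0.5) < 1e-12
assert abs(mu - 0.8) < 1e-12           # (0.3*0 + 0.2*2) / 0.5
# var matches the sub-mixture's second central moment:
m2 = (0.3 * (1.0 + 0.0 ** 2) + 0.2 * (0.5 + 2.0 ** 2)) / 0.5
assert abs(var - (m2 - mu ** 2)) < 1e-12
```

The variance formula adds the between-component spread $(\mu_n-\tilde\mu_m)^2$ to the within-component variances, which is exactly what matching the second moment of the sub-mixture requires.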
\begin{algorithm}[htbp] \begin{algorithmic} \State {\bfseries Input:} $p_{G}(x) = \sum_{n} w_n\phi_{n}(x)$ \State {\bfseries Initialize:} $\tilde\phi_1,\tilde\phi_2,\ldots,\tilde\phi_M$ \Repeat \State \underline{\emph{Assignment step:}} \State Compute $d_{nm} = D_{\text{KL}}(\phi_n,\tilde\phi_m)$ \State Assign component $n$ to cluster $C(n)=\argmin_{j} d_{nj}$ \State \underline{\emph{Update step:}} update cluster center by moment matching \For{$m \in [M]$} \begin{align*} \tilde w_m &= \sum_{C(n)=m} w_n\\ \tilde \mu_m &= \tilde w_m^{-1}\sum_{C(n)=m}w_n\mu_n\\ \tilde \Sigma_m &= \tilde w_m^{-1}\sum_{C(n)=m} w_n\{\Sigma_n+(\mu_n-\tilde \mu_m)(\mu_n-\tilde\mu_m)^{\tau}\} \end{align*} \EndFor \State Let $p_{\tilde{G}}(x) = \sum_{m} \tilde{w}_m \tilde\phi_m(x)$ \Until the value of $D_{\text{ISE}}(p_G, p_{\tilde{G}})$ converges \end{algorithmic} \caption{Hard clustering based algorithm in~\cite{schieferdecker2009gaussian}} \label{alg:hard_clustering} \end{algorithm} \noindent \textbf{Soft Clustering} Instead of assigning each component of the original mixture to one cluster, let $z_{nm}$ be the probability that the $n$th component of the original mixture belongs to the $m$th cluster, so that $\sum_{m=1}^{M}z_{nm}=1$ for $n=1,2,\ldots,N$. The soft clustering based algorithm in~\cite{yu2018density} is given in Algorithm~\ref{alg:soft_clustering}. Compared with the hard clustering based algorithm, the update step is a weighted version of moment matching. When the hyper-parameter $I\rightarrow \infty$, $z_{nm} \rightarrow 1$ if $m=C(n)$ and $0$ otherwise; that is, the soft clustering based algorithm reduces to the hard clustering based algorithm.
\begin{algorithm}[htpb] \begin{algorithmic} \State {\bfseries Input:} $p_{G}(x) = \sum_{n} w_n\phi_n(x)$, hyper-parameter $I>0$ \State {\bfseries Initialize:} $\tilde\phi_1,\tilde\phi_2,\ldots,\tilde\phi_M$ \Repeat \State \underline{\emph{Assignment step:}} \State Let \begin{align} E_{nm} &= {\mathbb{E}}_{\phi_n}\{\log \tilde \phi_m(X)\}\label{eq:soft_clustering_similarity} \\ z_{nm} &=\{\tilde w_m \exp(IE_{nm})\}/\sum_{m'} \{\tilde w_{m'} \exp(IE_{nm'})\}\label{eq:soft_clustering_assignment} \end{align} \State \underline{\emph{Update step:}} \For{$m\in[M]$} \begin{equation*} \begin{split} \tilde w_m &= \sum_{n}z_{nm}w_n\\ \tilde\mu_m &= \tilde w_m^{-1}\sum_{n} z_{nm} w_n \mu_n\\ \tilde\Sigma_m &= \tilde w_m^{-1}\sum_{n} z_{nm} w_n \left\{\Sigma_n + (\mu_n - \tilde\mu_m)(\mu_n - \tilde\mu_m)^{\tau}\right\} \end{split} \end{equation*} \EndFor \Until $\sum_{n}w_n\sum_{m} z_{nm} \left\{\log \frac{\tilde w_m}{z_{nm}} + IE_{nm}\right\}$ converges \end{algorithmic} \caption{Soft clustering based algorithm in~\cite{yu2018density}} \label{alg:soft_clustering} \end{algorithm} \section{Composite Transportation Divergence} \label{sec:composite_wdistance} Our proposed framework for GMR is based upon the composite transportation divergence (CTD). In this section, we introduce its formal definition. \begin{definition}[Composite Transportation Divergence] \label{def:CTD} Let us denote by $G=\sum_{n=1}^{N} w_{n} \delta_{\theta_{n}}$ and $\tilde G=\sum_{m=1}^{M}\tilde w_{m} \delta_{\tilde \theta_{m}}$ two discrete measures on $\Theta$. Denote by ${\mathbf{w}}$ and $\tilde{{\mathbf{w}}}$ the corresponding weight vectors. Let $\phi_n(x) = \phi(x|\theta_n)$, $\tilde{\phi}_m(x) = \phi(x|\tilde \theta_m)$, and let the cost function $c(\cdot,\cdot): {\mathcal{F}}\times {\mathcal{F}} \rightarrow {\mathbb{R}}_{+}$ be a non-negative lower semi-continuous function.
Denote by $$ \Pi({\mathbf{w}} , \tilde{\mathbf{w}}) = \{{\boldsymbol\pi} \in {\mathbb{R}}_{+}^{N \times M}: {\boldsymbol\pi}\mathbbm{1}_{M} = {\mathbf{w}},{\boldsymbol\pi}^{\tau}\mathbbm{1}_{N} = \tilde {\mathbf{w}}\}. $$ The composite transportation divergence~\cite{chen2017optimal,delon2019wasserstein} between two mixtures is defined as \begin{equation} \label{eq:WD_distance} {\mathcal{T}}_{c}(p_{G},p_{\tilde G}) :=\inf_{{\boldsymbol\pi} \in \Pi ({\mathbf{w}} , \tilde{{\mathbf{w}}})}\sum_{n,m}\pi_{nm}c(\phi_n, \tilde \phi_m). \end{equation} \end{definition} The CTD between two finite mixtures can be explained by a canonical example in optimal transportation~\cite{peyre2019computational}. Suppose that an operator runs $N$ warehouses and $M$ factories in the space $\mathcal{F}$. The warehouse $n$ at location $\phi_n$ contains $w_n$ units of raw material. These raw materials are to be transported to the factories at locations $\tilde{\phi}_m$, with $\tilde w_m$ being the amount of material that the $m$th factory requires. The quantity $c(\phi_n,\tilde{\phi}_m)$ is the unit cost of transportation and $\pi_{nm}$ is the amount of material to be transported from warehouse $n$ to factory $m$. It is reasonable to assume the cost is proportional to the amount of material to be transported; therefore, $\sum_{n,m} \pi_{nm}c(\phi_n,\tilde{\phi}_m)$ gives the total transportation cost under the transportation plan ${\boldsymbol\pi}$. The set $\Pi({\mathbf{w}} , \tilde{\mathbf{w}})$ contains all transportation plans ${\boldsymbol\pi}$ that satisfy two marginal constraints: (a) the correct amount of material is taken from each warehouse, and (b) the correct amount of material is sent to each factory. The CTD $\mathcal{T}_{c}(p_{G},p_{\tilde G})$ is the lowest total transportation cost over all possible transportation plans.
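For a fixed cost matrix, the minimization in~\eqref{eq:WD_distance} is a small linear program over the plan ${\boldsymbol\pi}$; a sketch using an off-the-shelf LP solver, with example data of our own choosing (one column-marginal constraint is dropped because it is implied by the others):

```python
import numpy as np
from scipy.optimize import linprog

def ctd(cost, w, w_tilde):
    # Minimize sum_nm pi_nm * c_nm subject to row sums w and column
    # sums w_tilde, with pi >= 0.
    n, m = cost.shape
    A_eq, b_eq = [], []
    for i in range(n):                 # row-marginal constraints
        row = np.zeros(n * m)
        row[i * m:(i + 1) * m] = 1.0
        A_eq.append(row)
        b_eq.append(w[i])
    for j in range(m - 1):             # drop one redundant column constraint
        col = np.zeros(n * m)
        col[j::m] = 1.0
        A_eq.append(col)
        b_eq.append(w_tilde[j])
    res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * (n * m))
    return res.fun, res.x.reshape(n, m)

cost = np.array([[0.0, 2.0], [1.0, 0.5], [3.0, 1.0]])
w = np.array([0.5, 0.3, 0.2])
wt = np.array([0.6, 0.4])
val, plan = ctd(cost, w, wt)
assert np.allclose(plan.sum(axis=1), w) and np.allclose(plan.sum(axis=0), wt)
```

In the full GMR setting the entries of `cost` would be $c(\phi_n,\tilde\phi_m)$ evaluated between the mixture components.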
For a given cost function, the optimal transportation plan can be computed via linear programming, with the maximum number of iterations required being $\mathcal{O}(N^3 \log N)$ when $N=M$~\cite{panaretos2019statistical}. Therefore, the computation of the CTD becomes more and more expensive as the mixture order $N$ gets larger. To address the computational issue,~\cite{cuturi2014fast} considers an entropic regularized version which gives an approximate solution to the original transportation problem but can be computed much faster. We adopt the same idea and introduce the entropic regularized CTD as follows. \begin{definition}[Entropic Regularized CTD] With the same notation as before, let ${\mathcal{H}}({\boldsymbol\pi}) = -\sum_{i,j}\pi_{ij}(\log \pi_{ij} - 1 )$ be the entropy of the transportation plan ${\boldsymbol\pi}$. Then the entropic regularized CTD between two mixtures is defined as \begin{equation*} {\mathcal{T}}_{c}^\lambda(p_{G},p_{\tilde G})=\inf_{{\boldsymbol\pi} \in \Pi ({\mathbf{w}} , \tilde{{\mathbf{w}}})}\{\sum_{n,m}\pi_{nm}c(\phi_n,\tilde{\phi}_m) - \lambda {\mathcal{H}}({\boldsymbol\pi})\} \end{equation*} for some regularization parameter $\lambda \geq 0$. \end{definition} When $\lambda = 0$, the entropic regularized CTD reduces to the CTD; when $\lambda\rightarrow\infty$, ${\mathcal{T}}_{c}^\lambda(p_{G},p_{\tilde G})\rightarrow \sum_{n,m} \pi_{nm}^*c(\phi_n,\tilde{\phi}_m)$ where ${\boldsymbol\pi}^* = \arginf\{- {\mathcal{H}}({\boldsymbol\pi}): {\boldsymbol\pi} \in \Pi ({\mathbf{w}} , \tilde {\mathbf{w}})\}=\mbox{\bf w}\tilde\mbox{\bf w}^{\tau}$. For simplicity, we also refer to the entropic regularized divergence as the composite transportation divergence; the difference is highlighted when necessary.
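The entropic regularized plan can be computed by Sinkhorn's matrix-scaling iterations in the spirit of~\cite{cuturi2014fast}; the following is a minimal sketch with example data of our own choosing, not a production implementation:

```python
import numpy as np

def sinkhorn(cost, w, w_tilde, lam, iters=1000):
    # Entropic-regularized plan: pi = diag(u) K diag(v) with
    # K = exp(-cost/lam), where u and v are alternately rescaled so the
    # plan matches the two marginals.
    K = np.exp(-cost / lam)
    u = np.ones_like(w)
    for _ in range(iters):
        v = w_tilde / (K.T @ u)
        u = w / (K @ v)
    return u[:, None] * K * v[None, :]

cost = np.array([[0.0, 2.0], [1.0, 0.5], [3.0, 1.0]])
w = np.array([0.5, 0.3, 0.2])
wt = np.array([0.6, 0.4])
plan = sinkhorn(cost, w, wt, lam=0.2)
assert np.allclose(plan.sum(axis=1), w, atol=1e-6)
assert np.allclose(plan.sum(axis=0), wt, atol=1e-6)
```

As $\lambda$ shrinks, the regularized plan approaches the unregularized optimum; for very small $\lambda$ the kernel $K$ underflows and log-domain stabilization is needed in practice.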
\section{Proposed Reduction Approach} \label{sec:problem_formulation} Recall $p_{G}(x) = \sum_{n=1}^{N} w_n \phi_n(x)$ is the original mixture with order $N$ and we search for a $p_{\tilde G}(x) = \sum_{m=1}^{M} \tilde w_m \tilde\phi_m(x)$ with $M \leq N$ to approximate $p_{G}$. We propose to find the reduced mixture via an optimization based approach. Taking the divergence in~\eqref{eq:minimum_distance} to be the CTD, we propose to find the solution to GMR via \begin{equation} \label{eq:RCWD_GMR} \tilde G:=\arginf_{\tilde G\in {\mathbb{G}}_{M}} {\mathcal{T}}_{c}^\lambda(p_{G}, p_{\tilde G}) \end{equation} for a pre-specified cost function $c(\cdot,\cdot)$ and regularization strength $\lambda \geq 0$. The rest of the section is organized as follows. We first prescribe an efficient MM algorithm for~\eqref{eq:RCWD_GMR} in Section~\ref{sec:mm_algorithm}. We then establish its connection with existing clustering based algorithms in Section~\ref{sec:connection}. We finally show in Section~\ref{sec:generalization} how this framework generalizes with different cost functions. \subsection{Numerical Algorithm} \label{sec:mm_algorithm} Based on~\eqref{eq:RCWD_GMR}, it may appear that finding the reduced mixture involves two optimizations: 1) computing ${\mathcal{T}}_{c}^\lambda(p_{G}, p_{\tilde G})$ for each pair of $p_{G}$ and $p_{\tilde G}$, and 2) minimizing the CTD to find the optimal $\tilde G$. We show in this section that the optimization can be simplified by removing a redundant constraint on the transportation plan. At a high level, the optimization in ${\mathcal{T}}_{c}^\lambda(p_{G}, p_{\tilde G})$ involves searching for transportation plans ${{\boldsymbol\pi}}$ under the two marginal constraints specified by $\mbox{\bf w}$ and $\tilde \mbox{\bf w}$. While the constraint on $\mbox{\bf w}$ is fixed, $\tilde \mbox{\bf w}$ is a moving constraint.
Therefore, instead of searching for ${{\boldsymbol\pi}}$ satisfying the constraint $\tilde \mbox{\bf w}$, we move $\tilde \mbox{\bf w}$ to meet ${{\boldsymbol\pi}}$. This makes the marginal constraint $\tilde \mbox{\bf w}$ on ${{\boldsymbol\pi}}$ redundant. By removing this constraint, we obtain a closed-form transportation plan, which simplifies the optimization. Let $\Pi(\mbox{\bf w},\cdot) = \{{\boldsymbol\pi} \in {\mathbb{R}}_{+}^{N \times M}: {\boldsymbol\pi}\mathbbm{1}_{M} = {\mathbf{w}}\}$ be the set of transportation plans that only satisfy the first marginal constraint. Let us define two functions of $\tilde G$, with $G$ hidden in the background: \begin{align} \label{Jcal} {\mathcal{J}}_{c}^{\lambda}(\tilde G) &= \inf_{{\boldsymbol\pi} \in \Pi( \mbox{\bf w}, \cdot)}\{ \sum_{nm} {\pi}_{nm} c(\phi_n, \tilde \phi_m) - \lambda {\mathcal{H}}({\boldsymbol\pi}) \}, \\ {{\boldsymbol\pi}}^{\lambda}(\tilde G) \label{bpi-star} & = \arginf_{{\boldsymbol\pi} \in \Pi( \mbox{\bf w}, \cdot)}\{ \sum_{nm} {\pi}_{nm} c(\phi_n, \tilde \phi_m) - \lambda {\mathcal{H}}({\boldsymbol\pi})\}. \end{align} Both functions depend on $\tilde G$ through its subpopulations $\tilde{\phi}_m$ but are free of its mixing weights $\tilde \mbox{\bf w}$. The optimizations in~\eqref{Jcal} and~\eqref{bpi-star} only involve the linear constraint in terms of $\mbox{\bf w}$. Hence, for a given cost function $c(\cdot, \cdot)$, the optimal transportation plan ${\boldsymbol\pi}^{\lambda}(\tilde G)$ has an analytical form \begin{equation} \label{eq:bpi} \pi_{nm}^{\lambda}(\tilde G) = w_n \frac{\exp(-c(\phi_n, \tilde \phi_m)/\lambda)}{\sum_{m'}\exp(-c(\phi_n, \tilde \phi_{m'})/\lambda)}. \end{equation} Under the special case where $\lambda=0$, it can be easily shown that $\pi_{nm}^{0}(\tilde G) = \lim_{\lambda\rightarrow0} \pi_{nm}^{\lambda}(\tilde G)$. We therefore denote the optimal transportation plan as $\pi_{nm}^{\lambda}$ for ease of notation.
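The closed form~\eqref{eq:bpi} is a row-wise softmax over the cost matrix and is trivial to compute; a minimal sketch with an arbitrary cost matrix of our own choosing:

```python
import math

def plan_closed_form(cost, w, lam):
    # pi_nm = w_n * exp(-c_nm/lam) / sum_m' exp(-c_nm'/lam):
    # a row-wise softmax, so the row-marginal constraint holds by design.
    plan = []
    for wn, row in zip(w, cost):
        e = [math.exp(-c / lam) for c in row]
        z = sum(e)
        plan.append([wn * v / z for v in e])
    return plan

cost = [[0.0, 2.0], [1.0, 0.5], [3.0, 1.0]]
w = [0.5, 0.3, 0.2]
plan = plan_closed_form(cost, w, lam=0.01)

# Row sums always match w; for small lam each row concentrates on its
# cheapest destination, recovering the hard assignment at lam = 0.
assert all(abs(sum(row) - wn) < 1e-12 for row, wn in zip(plan, w))
assert plan[0][0] > 0.499 and plan[1][1] > 0.299 and plan[2][1] > 0.199
```

Note that only the row constraint is enforced; the column sums of `plan` then define the mixing weights of the reduced mixture, as formalized in the next theorem.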
When $\lambda=0$, the numerical algorithm reduces to the special case described in~\cite{zhang2020distributed}. The following theorem states the simplified optimization problem. \begin{theorem}[Simplified Optimization] \label{thm:ww_averaging_equivalent_obj} Let $G$, ${\mathcal{T}}_{c}^{\lambda}(\cdot)$, ${\mathcal{J}}_{c}^{\lambda}(\cdot)$, ${\boldsymbol\pi}^{\lambda}(\cdot)$, and the other notation be as given earlier. We have \begin{equation} \label{eq:equiv_optimization} \inf\{{\mathcal{T}}_{c}^{\lambda}(p_{G}, p_{\tilde G}): \tilde G \in {\mathbb{G}}_{M}\} = \inf\{{\mathcal{J}}_{c}^{\lambda}(\tilde G): \tilde G \in {\mathbb{G}}_{M}\}. \end{equation} The reduced mixture is hence given by \begin{equation} \label{eq:reduced_mixture_subpop} p_{\tilde G} = \arginf\{{\mathcal{J}}_{c}^{\lambda}(\tilde G): \tilde G \in {\mathbb{G}}_{M} \} \end{equation} and the mixing weights are given by \begin{equation} \label{eq:reduced_mixture_weight} \tilde w_m = \sum_{n} \pi_{nm}^{\lambda}(\tilde G). \end{equation} \end{theorem} Based on this theorem, the optimization reduces to searching for the $M$ subpopulations $\{\tilde \phi_{m},~m=1,2,\ldots,M\}$. The mixing proportions are then determined by~\eqref{eq:reduced_mixture_weight}. An iterative algorithm following the well-known majorization--minimization (MM) algorithm~\cite{hunter2004tutorial} becomes available. A brief overview of MM is provided. The MM algorithm starts at an initial point $x_0$; at the $t$th iteration, it minimizes a function that majorizes the objective function at $x_t$. \begin{definition}[Majorization Function] A function $h(x|x_0)$ majorizes $g(x)$ at $x_0$ if $h(x|x_0) \geq g(x)$, with equality holding when $x = x_0$. \end{definition} This iterative procedure ensures the objective function is non-increasing after each iteration.
The key to the MM algorithm is to find a majorizing function which is usually convex and easy to minimize. For the optimization problem in~\eqref{eq:reduced_mixture_subpop}, let $\tilde G^{(t)}$ be the mixing distribution after $t$ MM iterations. Define a majorization function of ${\mathcal{J}}_{c}^{\lambda}$ at $\tilde G^{(t)}$ to be \begin{equation} \label{eq:majorization_function} {\mathcal{K}}_{c}^{\lambda}(\tilde G|\tilde G^{(t)}) = \sum_{n,m} \pi_{nm}^{\lambda}(\tilde G^{(t)}) c(\phi_n, \tilde \phi_m) \end{equation} where $\pi_{nm}^{\lambda}(\tilde G^{(t)})$ is computed according to~\eqref{eq:bpi}. The subpopulations $\tilde \phi_m$ are separated in the majorization function~\eqref{eq:majorization_function}. This allows us to update the subpopulation parameters, one $\tilde \phi_m$ at a time and possibly in parallel, as the solutions to \begin{equation} \label{eq:support_update} \tilde \phi_m^{(t+1)} = \arginf_{\phi\in {\mathcal{F}}}\{\sum_{n} \pi_{nm}^{\lambda}(\tilde G^{(t)})c(\phi_n, \phi)\}. \end{equation} The mixing proportions are updated via \begin{equation} \label{eq:weight_update} \tilde w_m^{(t+1)} = \sum_n \pi_{nm}^{\lambda}(\tilde G^{(t)}). \end{equation} The MM algorithm then iterates between the majorization step~\eqref{eq:majorization_function} and the minimization steps~\eqref{eq:support_update} and~\eqref{eq:weight_update} until the change in the CTD is below some threshold. The algorithm is summarized in Algorithm~\ref{alg:mm_reduction}. The solution to~\eqref{eq:support_update} is usually called a barycenter and is defined as follows. \begin{definition}[Barycenter] \label{def:barycenter} Let $({\mathcal{F}}, \rho)$ be the space of Gaussian densities endowed with the divergence $\rho(\cdot, \cdot)$. Given positive constants $\lambda_l$ for $l\in [L]$, the (weighted) barycenter of $\phi_1, \ldots, \phi_{L} \in {\mathcal{F}}$ is a minimum point of $\sum_{l=1}^L \lambda_l \rho (\phi_l, \phi)$.
\end{definition} \begin{algorithm}[htp] \begin{algorithmic} \State {\bfseries Initialization:} $\tilde \phi_{m}$, $m\in [M]$ \Repeat \For {$m\in[M]$} \State \underline{\emph{Assignment step:}} $\pi_{nm}^{\lambda}=\frac{w_n\exp(-c(\phi_n, \tilde \phi_m)/\lambda)}{\sum_{m'}\exp(-c(\phi_n, \tilde \phi_{m'})/\lambda)}$ \State \underline{\emph{Update step:}} \State Let $\tilde \phi_{m} =\argmin_{\phi\in{\mathcal{F}}} \sum_{n=1}^{N} \pi_{nm}^{\lambda} c(\phi_n, \phi)$ \State Let $\tilde w_{m} = \sum_{n} \pi_{nm}^{\lambda}$ \EndFor \Until $\sum_{n,m}\pi_{nm}^{\lambda}c(\phi_n, \tilde \phi_m)-\lambda {\mathcal{H}}({\boldsymbol\pi}^{\lambda})$ converges \end{algorithmic} \caption{MM algorithm for GMR with CTD} \label{alg:mm_reduction} \end{algorithm} The convergence of the numerical algorithm can be found in~\cite[Theorem 7]{zhang2020distributed}. \subsubsection{Computational Complexity Analysis} The MM algorithm is iterative and we analyze the computational cost at each iteration. For a given cost function, the cost function needs to be evaluated $O(NM)$ times. For most of the cost functions we consider in this paper, evaluating the cost function once costs $O(d^3)$. Therefore, the total cost of evaluating the cost matrix is $O(NMd^3)$. The cost of computing the transportation plan is $O(NM)$. The computationally most expensive step is to find the $M$ barycenters. The cost depends on the specific cost function. For example, when the cost function is the KL divergence, then as shown in Lemma~\ref{lemma:Gaussian_KL_barycenter}, the barycenter has a closed form and the cost of computing one barycenter is $O(Nd^2)$, so the total cost of finding the $M$ barycenters is $O(NMd^2)$. Therefore, when the cost function is the KL divergence, the total cost at each iteration is $O(NMd^3)$.
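Putting the pieces together, Algorithm~\ref{alg:mm_reduction} with the KL cost can be sketched in one dimension: the assignment step uses the closed-form plan~\eqref{eq:bpi} and the update step uses moment matching (cf.\ Lemma~\ref{lemma:Gaussian_KL_barycenter}). The example mixture and all function names are ours.

```python
import math

def kl_gauss(m1, v1, m2, v2):
    # KL(N(m1,v1) || N(m2,v2)) for univariate Gaussians
    return 0.5 * (math.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

def reduce_gmm(orig, centers, lam=0.05, iters=100):
    # orig: list of (w_n, mu_n, var_n); centers: initial list of (mu_m, var_m).
    weights = []
    for _ in range(iters):
        # Assignment step: closed-form entropic plan, a row-wise softmax
        plan = []
        for wn, mn, vn in orig:
            e = [math.exp(-kl_gauss(mn, vn, mc, vc) / lam) for mc, vc in centers]
            z = sum(e)
            plan.append([wn * x / z for x in e])
        # Update step: KL barycenter of each cluster, i.e. moment matching
        new_centers, weights = [], []
        for m in range(len(centers)):
            col = [row[m] for row in plan]
            s = sum(col)
            mu = sum(p * mn for p, (_, mn, _) in zip(col, orig)) / s
            var = sum(p * (vn + (mn - mu) ** 2)
                      for p, (_, mn, vn) in zip(col, orig)) / s
            new_centers.append((mu, var))
            weights.append(s)
        centers = new_centers
    return weights, centers

orig = [(0.25, -2.0, 0.5), (0.25, -1.5, 0.6),
        (0.25, 2.0, 0.4), (0.25, 2.4, 0.7)]
weights, centers = reduce_gmm(orig, [(-1.0, 1.0), (1.0, 1.0)])
assert abs(sum(weights) - 1.0) < 1e-9
# The two well-separated groups of components are recovered.
assert abs(centers[0][0] - (-1.75)) < 0.1 and abs(centers[1][0] - 2.2) < 0.1
```

With a different cost function only the two inner formulas change: the pairwise cost evaluation and the barycenter computation.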
\subsection{Connection with Existing Algorithms} \label{sec:connection} We claim our proposed GMR approach is a unified framework that connects the optimization based and clustering based approaches in the literature; we establish these connections in this section. \subsubsection{Connection with Clustering Based Algorithm} Despite the computational efficiency of the clustering based algorithms,~\emph{it is unclear if these algorithms always converge or attain some optimal targets when they do.} We show existing clustering based algorithms are special cases of the MM algorithm with specific cost functions in the CTD. The connection helps explain the clustering based algorithms from the following aspects: \begin{enumerate} \item \textbf{Objective}: The ultimate goal of a clustering based algorithm is to minimize the CTD from the original mixture to the reduced mixture for some pre-specified cost function. \item \textbf{Convergence}: Since the clustering based algorithms are special cases of the proposed MM algorithm, the convergence of the MM algorithm implies the convergence of the clustering based algorithms. \item \textbf{Consistency}: The assignment and update steps need to be consistent with the cost function. The update step in our proposed algorithm corresponds to finding the barycenter of the components in the same cluster with respect to the cost function $c(\cdot, \cdot)$. If one assigns the components to clusters based on some divergence but nonetheless finds the cluster centers by moment matching, then the algorithm may fail to converge. \end{enumerate} As an example, we show that when the cost function is chosen to be \begin{equation*} c(\phi_n, \tilde \phi_m) = -\log\tilde w_m - I E_{nm} \end{equation*} where $E_{nm}$ is defined in~\eqref{eq:soft_clustering_similarity}, and $\lambda=1$, our proposed algorithm reduces to the soft clustering based algorithm in~\cite{yu2018density}.
Then according to the assignment step in Algorithm~\ref{alg:mm_reduction}, the transportation plan becomes \[ \begin{split} \pi_{nm} &= \frac{w_n\exp(-c(\phi_n, \tilde \phi_m))}{\sum_{m'}\exp(-c(\phi_n, \tilde \phi_{m'}))}=\frac{w_n\tilde w_{m}\exp(IE_{nm})}{\sum_{m'}\tilde w_{m'}\exp(IE_{nm'})}\\ &= w_nz_{nm}. \end{split} \] The mixing weights then become \begin{equation} \label{eq:mm_weighted_kl_mixing_weights} \tilde w_m = \sum_{n=1}^{N}\pi_{nm} = \sum_{n=1}^{N} w_nz_{nm} \end{equation} and the $m$th subpopulation is updated via \begin{equation*} \begin{split} \tilde \phi_{m} &= \arginf_{\phi} \sum_{n=1}^{N}\pi_{nm}c(\phi_n, \phi)\\ &= \arginf_{\phi} \sum_{n=1}^{N}\pi_{nm} \{-\log\tilde w_m - I \int \phi_n(x) \log \phi(x) dx\}\\ &= \arginf_{\phi} \sum_{n=1}^{N}\pi_{nm} \int \log\left\{\frac{\phi_n(x)}{\phi(x)}\right\} \phi_n(x)dx \\ &= \arginf_{\phi} \sum_{n=1}^{N}\pi_{nm} D_{\text{KL}}(\phi_n\| \phi), \end{split} \end{equation*} where the third equality holds because $\log\tilde w_m$ and $\int \phi_n(x)\log\phi_n(x)dx$ do not depend on $\phi$ and the positive factor $I$ does not change the minimizer. Therefore, the $m$th component of the reduced mixture is the barycenter of the original Gaussian components under the KL divergence with weights $\pi_{1m},\pi_{2m},\ldots, \pi_{Nm}$. We need the following result: the barycenter of Gaussians with respect to the KL divergence coincides with moment matching. The proof is deferred to Appendix~\ref{sec:kl_barycenter}. \begin{lemma}[Gaussian Barycenter under the KL Divergence is the Same as Moment Matching] \label{lemma:Gaussian_KL_barycenter} The minimizer of $L(\phi) = \sum_{n=1}^N \lambda_n D_{\text{KL}}(\phi_n\|\phi)$ when $\phi$ is constrained to be a Gaussian density has mean $\bar{\mu}= \{\sum_{n} \lambda_n\}^{-1}\sum_{n=1}^N\lambda_n\mu_n$ and covariance \begin{equation*} \bar{\Sigma} = \{\sum_{n} \lambda_n\}^{-1}\sum_{n=1}^N\lambda_n (\Sigma_n + (\mu_n-\bar{\mu})(\mu_n-\bar{\mu})^{\tau}).
\end{equation*} \end{lemma} By substituting $\lambda_n$ with $\pi_{nm}$ above, along with~\eqref{eq:mm_weighted_kl_mixing_weights}, the updated subpopulation parameters based on our approach become \begin{equation} \label{eq:mm_weighted_kl_mean} \tilde\mu_{m} = \tilde w_m^{-1} \sum_{n=1}^Nw_n z_{nm}\mu_n \end{equation} and \begin{equation} \label{eq:mm_weighted_kl_cov} \tilde\Sigma_{m} = \tilde w_m^{-1} \sum_{n=1}^Nw_n z_{nm}\{\Sigma_n + (\mu_n-\tilde\mu_m)(\mu_n-\tilde\mu_m)^{\tau}\}. \end{equation} It can be seen that~\eqref{eq:mm_weighted_kl_mixing_weights}--\eqref{eq:mm_weighted_kl_cov} match the cluster centers given in the soft clustering algorithm in Algorithm~\ref{alg:soft_clustering}. In summary, with this specific cost function, our proposed algorithm reduces to the soft clustering based algorithm in the literature. Similarly, the connection can be established for the hard clustering based algorithm in~\cite{schieferdecker2009gaussian}; see details in Appendix~\ref{sec:connection_hard_clustering}. A summary of the existing clustering based algorithms and their corresponding cost functions is given in Table~\ref{tab:cost_fct_CTD}. \begin{table}[htpb] \centering \scriptsize \caption{The relationship between the proposed GMR approach and existing clustering based GMR approaches according to the cost function $c(\cdot,\cdot)$ and regularization strength $\lambda$. Empty entries indicate these approaches are not explored.} \label{tab:cost_fct_CTD} \begin{tabular}{cccc} \toprule $c(\phi_n,\tilde \phi_m)$ & $D_{\text{KL}}(\phi_n\|\tilde \phi_m)$ & $-\log \tilde w_m -IE_{nm}$ & $W_2(\phi_n,\tilde \phi_m)$ \\ \midrule $\lambda=0$ & \cite{schieferdecker2009gaussian}& --&\cite{assa2018wasserstein}\\ \midrule $\lambda=1$ & -- & \cite{yu2018density} & --\\ \bottomrule \end{tabular} \end{table} \begin{remark}[Technical error in~\cite{yu2018density}] As a side note, we point out an error in~\cite{yu2018density}.
Let $X_1,X_2,\ldots,X_{I}\overset{\text{i.i.d.}}{\sim} p_{G}(x)$. The work~\cite{yu2018density} performs GMR by finding the maximizer of \begin{align} \ell_{I}(\tilde G) &= {\mathbb{E}}_{X\sim\prod_{i=1}^{I} p_{G}(x_i)} \left\{\sum_{i=1}^{I}\log p_{\tilde G}(X_i)\right\} \label{eq:yu_obj}\\ &= \sum_{n=1}^{N} w_n {\mathbb{E}}_{X\sim \prod_{i=1}^{I}\phi_n(x_i)} \left\{\sum_{i=1}^{I}\log p_{\tilde G}(X_i)\right\} \label{eq:yu_wrong_eq} \end{align} However,~\eqref{eq:yu_wrong_eq}, which is the second equality in (3) of~\cite{yu2018density}, does not hold. Let ${\bm{x}} = (x_1,x_2,\ldots, x_I)$; then \[ \int g({\bm{x}}) \prod_{i=1}^I \{\sum_{n=1}^N w_n \phi_n(x_i)\}d{\bm{x}} \neq \sum_{n=1}^N w_n \int g({\bm{x}}) \prod_{i=1}^I \phi_n(x_i)d{\bm{x}} \] because the summation and the product are not interchangeable. This mistake leads to an incorrect derivation in their variational inference. Moreover, they fail to observe that \begin{equation} \label{eq:yu_equivalent} \ell_{I}(\tilde G) = I {\mathbb{E}}_{p_{G}(x)}\{\log p_{\tilde G}(X)\} \end{equation} follows from~\eqref{eq:yu_obj}. For a fixed hyper-parameter $I$, the maximizer of the RHS is the same as the maximizer of the LHS. Therefore, from a maximum likelihood point of view, the parameter $I$ in~\cite{yu2018density} is redundant. Due to these errors, their interpretation of the algorithm is no longer valid; the algorithm itself remains useful, however, and the correct interpretation, from the minimum CTD point of view, is given in this paper. \end{remark} \subsubsection{Connection with Optimization Based Algorithms} In our optimization based formulation, the objective function is $\mathcal{T}_{c}^{\lambda}$. One could instead directly minimize $c(\cdot,\cdot)$ between the two mixtures. However, such objectives are rarely used in practice since they usually do not have closed forms and the corresponding optimization problems are computationally expensive. Is there any connection between $c(\cdot,\cdot)$ and ${\mathcal{T}}_{c}$?
Under the special case where $c(\cdot,\cdot) = D_{\text{ISE}}(\cdot, \cdot)$ or $c(\cdot,\cdot) = D_{\mathrm{KL}}(\cdot\|\cdot)$ and $M=1$, we have \begin{equation} \label{eq:barycenter_equiv} \argmin_{\tilde \phi} \sum_{n=1}^{N} w_n c(\phi_n, \tilde \phi) = \argmin_{\tilde \phi} c\left(\sum_{n=1}^{N}w_n\phi_n, \tilde \phi\right). \end{equation} The LHS of~\eqref{eq:barycenter_equiv} is the composite transportation divergence between the original mixture and the reduced mixture, and the RHS of~\eqref{eq:barycenter_equiv} is the divergence between the two mixtures. The equality shows that when the mixture is reduced to a single Gaussian, the two approaches coincide. The proof is deferred to Appendix~\ref{app:CTD_equiv}. In the general case, we show that when $c(\cdot,\cdot)$ satisfies the ``convexity'' property defined below, ${\mathcal{T}}_{c}$ is an upper bound of $c(\cdot,\cdot)$. Our formulation therefore minimizes an upper bound. \begin{theorem} \label{thm:upper_bound} Let $c(\cdot,\cdot):{\mathcal{F}}\times {\mathcal{F}} \to {\mathbb{R}}_{+}$ be a cost function that satisfies the ``convexity'' property: for any $\alpha\in(0,1)$, we have $c(\alpha f_1 + (1-\alpha) f_2, \alpha \phi_1 + (1-\alpha) \phi_2) \leq \alpha c(f_1, \phi_1) + (1-\alpha) c(f_2, \phi_2)$. Then for all $\tilde G$, we have $$c(p_{G}, p_{\tilde G})\leq {\mathcal{T}}_{c}^{0}(p_{G}, p_{\tilde G}).$$ \end{theorem} The proof of the theorem is deferred to Appendix~\ref{app:CTD_is_upper_bound}. The convexity of the cost function holds for most divergences, such as $f$-divergences, the ISE (see Appendix~\ref{app:ISE_convexity}), and the squared $2$-Wasserstein distance~\cite[Chapter 7]{villani2003topics}. We show in Appendix~\ref{app:CS_convexity_counter_example} that convexity does not hold for the Cauchy-Schwarz divergence discussed in the next section. \begin{remark} The same conclusion under some special cost functions has been shown in the literature from different viewpoints.
Examples include~\cite[Section 4.2]{delon2020wasserstein}, where the cost function $c(\phi_n,\tilde\phi_m) = W_2^2(\phi_n, \tilde \phi_m)$ is the squared $2$-Wasserstein distance between two Gaussians, and~\cite[Lemma 1]{nguyen2013convergence}, where the cost function is an $f$-divergence. \end{remark} \subsection{Generalization} \label{sec:generalization} The proposed Algorithm~\ref{alg:mm_reduction} can be generalized by selecting different cost functions $c(\cdot, \cdot)$. In this section, we propose several possible cost functions. Due to the space limit, we discuss the generalization to mixtures where $\mathcal{F}$ is an exponential family in Appendix~\ref{app:generalization_exponential_family}. A possible choice for the cost function is the Cauchy-Schwarz (CS) divergence. \begin{definition}[Cauchy-Schwarz Divergence] The Cauchy-Schwarz divergence~\cite{jenssen2006cauchy} between two density functions $p(x)$ and $q(x)$ is defined as \begin{equation*} D_{\text{CS}}(p,q) = -\log \frac{\int p(x)q(x) dx}{\sqrt{\int p^2(x) dx \int q^2(x) dx}}. \end{equation*} \end{definition} The CS divergence between two Gaussians has the closed form \begin{equation} \label{eq:cauchy-schwartz-divergence-gaussian} \begin{split} &D_{\text{CS}}(\phi(\cdot|\mu_1,\Sigma_1), \phi(\cdot|\mu_2,\Sigma_2))\\ =& -\log \phi(\mu_1|\mu_2, \Sigma_1+ \Sigma_2) - \frac{1}{4}\log \{|4\pi \Sigma_1| |4\pi \Sigma_2|\}. \end{split} \end{equation} Algorithm~\ref{alg:mm_reduction} also requires the barycenter of Gaussians under this divergence; we use the following iterative algorithm to find a stationary point.
\begin{example}[Gaussian Barycenter under the Cauchy-Schwarz Divergence] \label{eg:CS_barycenter} With the same notation as in Lemma~\ref{lemma:Gaussian_KL_barycenter}, $\sum_{n=1}^{N} \lambda_n D_{\text{CS}}(\phi_n,\phi)$ is minimized uniquely in the space of Gaussians with mean \begin{equation*} \bar\mu=\left\{\sum_{n} \lambda_n (\Sigma_n+\bar\Sigma)^{-1}\right\}^{-1}\sum_{n}\lambda_n (\Sigma_n+\bar\Sigma)^{-1}\mu_n \end{equation*} and covariance $\bar\Sigma$ the solution to \[ \bar{\Sigma}^{-1} = 2 \sum_{n}\tilde \lambda_n (\Sigma_n+\bar\Sigma)^{-1}\{ I -(\mu_n-\bar\mu)(\mu_n-\bar\mu)^{\tau}(\Sigma_n+\bar\Sigma)^{-1}\} \] where $\tilde \lambda_n=\lambda_n/\sum_n \lambda_n$. The proof is deferred to Appendix~\ref{sec:CS_barycenter_proof}. \end{example} We also consider using the ISE as the cost function, that is, $c(\phi_n,\tilde \phi_m)= D_{\text{ISE}}(\phi_n,\tilde \phi_m)$; the barycenter under this divergence is found by a numerical optimization algorithm. \section{Experiments} \label{sec:exp} In this section, we conduct numerical experiments to compare the performance of various reduction approaches. The reduction approaches that we consider include 1) the optimization based approach that minimizes $D_{\text{ISE}}(p_{G}, p_{\tilde G})$, which we refer to as ISE, and 2) our proposed approach with various cost functions, which we refer to as CTD-XX where XX is the name of the cost function. For example, if the cost function is the KL divergence, we refer to the approach as CTD-KL. The default value for $\lambda$ is $0$ unless otherwise specified. Unless otherwise specified, all the reduction algorithms are initialized by $10$ initial values obtained by fitting a Gaussian mixture of $M$ components by the MLE using random samples generated from the original mixture. All experiments are implemented in Python 3.7.4 on a cluster with Intel E5 CPUs at 2.1GHz.
\subsection{Simulated Dataset} In this experiment, we use simulated data and reduce mixtures of order $N=25$. The parameter values of the original mixture are generated as follows. We first pick $5$ locations uniformly at random within $[-10,10]\times[-10,10]$. For the $i$th location, we randomly generate $n_i$ components around it, where $(n_1,n_2,\ldots,n_5)\sim \text{Multinom}(N, 0.2\mathbbm{1}_{5})$. The mean parameter values of the components of the original mixture are generated uniformly at random within a circle of radius 2.5 centered at each location. An example of the randomly picked locations (solid red dots) and the $25$ component centers (black dots with different shapes) is given in Figure~\ref{fig:simulation} (a). We then generate the covariance matrix of each component by drawing the two diagonal elements $\Sigma_{11}$, $\Sigma_{22}$ from $\text{Gamma}(8,4)$ and setting the off-diagonal element to $\sqrt{\Sigma_{11}\Sigma_{22}}\cos(\pi\theta)$ where $\theta\sim U(0.2,0.8)$. We use equal mixing weights for the components. The heatmap of the density function of the mixture with the centers given in Figure~\ref{fig:simulation} (a) is shown in Figure~\ref{fig:simulation} (b). We randomly create $100$ mixtures as described above.
\begin{figure}[!htpb] \centering \subfloat[Component centers]{\includegraphics[width=0.44\columnwidth, height=0.44\columnwidth]{figure/simulated_data/locations.png}} \subfloat[Density heatmap]{\includegraphics[width=0.45\columnwidth, height=0.45\columnwidth]{figure/simulated_data/generated_pop_2.png}}\\ \subfloat[ISE]{\includegraphics[width=0.45\columnwidth]{figure/simulated_data/simulation_ISE.png}} \subfloat[Time]{\includegraphics[width=0.45\columnwidth]{figure/simulated_data/simulation_time.png}} \caption{(a) The component centers of a randomly generated $25$-component mixture, (b) the density heatmap of this mixture, (c) the ISE between the reduced and original mixtures, and (d) the computational time.} \label{fig:simulation} \end{figure} Based on our procedure for generating the components, it is natural to reduce the original mixture to $M=5$; we also consider reducing the order to $M=10$ and $M=15$. The reduction results for the original mixture in Figure~\ref{fig:simulation} (a) are given in Figure~\ref{fig:simulation_eg} for different reduction approaches and values of $M$.
\begin{figure}[htpb] \centering \setlength{\tabcolsep}{1pt} \begin{tabular}{cccc} \toprule &M=5&M=10&M=15\\ \midrule \rotatebox{90}{ISE}&\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_ISE_order_5.png} &\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_ISE_order_10.png} &\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_ISE_order_15.png}\\ \rotatebox{90}{CTD-ISE}&\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_CTD-ISE_order_5.png} &\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_CTD-ISE_order_10.png} &\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_CTD-ISE_order_15.png}\\ \rotatebox{90}{CTD-CS}&\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_CTD-CS_order_5.png} &\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_CTD-CS_order_10.png} &\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_CTD-CS_order_15.png}\\ \rotatebox{90}{CTD-KL}&\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_CTD-KL_order_5.png} &\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_CTD-KL_order_10.png} &\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_CTD-KL_order_15.png}\\ \rotatebox{90}{CTD-W2}&\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_CTD-W2_order_5.png} &\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_CTD-W2_order_10.png} &\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_CTD-W2_order_15.png}\\ \bottomrule \end{tabular} \caption{Heatmaps of the difference between the density functions of the reduced and original mixtures.} \label{fig:simulation_eg} \end{figure} We compute the ISE between the reduced and original mixtures over the $100$ repetitions and visualize the mean value and 95\% error bar in Figure~\ref{fig:simulation} (c).
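The ISE reported here has a closed form for Gaussian mixtures, since $\int \phi(x|\mu_1,\Sigma_1)\phi(x|\mu_2,\Sigma_2)dx = \phi(\mu_1|\mu_2,\Sigma_1+\Sigma_2)$, the same identity underlying the closed-form CS divergence above. A minimal sketch, with illustrative function names:

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def gmm_cross(w1, mu1, S1, w2, mu2, S2):
    """Compute the cross term: the integral of p(x)q(x) over x for two
    Gaussian mixtures, using the pairwise Gaussian product integral."""
    return sum(a * b * mvn.pdf(m1, mean=m2, cov=C1 + C2)
               for a, m1, C1 in zip(w1, mu1, S1)
               for b, m2, C2 in zip(w2, mu2, S2))

def gmm_ise(w1, mu1, S1, w2, mu2, S2):
    """Closed-form integrated squared error between two Gaussian mixtures."""
    return (gmm_cross(w1, mu1, S1, w1, mu1, S1)
            + gmm_cross(w2, mu2, S2, w2, mu2, S2)
            - 2 * gmm_cross(w1, mu1, S1, w2, mu2, S2))
```

The ISE of a mixture with itself is exactly zero, and the ISE between distinct mixtures is strictly positive.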
The computational time for each reduction method is given in Figure~\ref{fig:simulation} (d). By construction, the minimum ISE reduction attains the smallest ISE to the original mixture, and we use this method as a baseline for comparison; it achieves the smallest ISE at the cost of the longest computational time. It can be seen from the plot that, as $M$ increases, the ISE decreases and the computational time increases regardless of the reduction approach. For the minimum CTD based reduction approaches, the performance differs across cost functions. The relative performance of the CTD based reduction approaches is consistent regardless of the value of $M$. In terms of ISE, the preference for the cost functions from high to low is ISE, CS, KL, and W2. In terms of the computational time, the preference for the cost functions from high to low is KL, CS, W2, and ISE. When the cost function is the ISE between two Gaussians and $M=5$, the reduction result is almost as good as that of the minimum ISE approach in terms of ISE, while the computational time is only about 1/10 of that of the minimum ISE approach. The CTD-KL approach takes only about 1/10000 of the time of the minimum ISE approach. As seen from the reduced mixtures in Figure~\ref{fig:simulation_eg}, even though CTD-KL and CTD-W2 are not as good as CTD-ISE, they still give good reduction results, which shows the advantage of using the minimum CTD based approaches for reduction. \subsection{Approximate Inference for Belief Propagation} \label{sec:BP} Density functions of Gaussian mixtures are often used to approximate density functions in statistical inference; examples include belief propagation in arbitrarily structured graphical models and tracking in hidden Markov models. In both problems, the order of the Gaussian mixtures increases exponentially due to recursion.
To solve this issue, GMR is applied after each iteration to keep the order of the mixture at a reasonable size. In this section, we conduct experiments to compare the performance of different GMR approaches for belief propagation. \emph{Belief propagation} (BP) is an iterative algorithm for computing marginal distributions in a probabilistic graphical model. A graph consists of a node set $\mathcal{V}$ and an undirected edge set $\mathcal{E}$ made of pairs of nodes that are related. A probabilistic graphical model associates each node with a random variable, say $X_i$, and assumes that the joint density function of the random vector $X=\{X_i:i\in\mathcal{V}\}$ can be factorized into \begin{equation*} p(x) \propto \prod_{(i,j) \in \mathcal{E}} \psi_{ij}(x_i, x_j)\prod_{i\in \mathcal{V}} \psi_{i}(x_i) \end{equation*} for some non-negative valued functions $\psi_{ij}(\cdot,\cdot)$ and $\psi_{i}(\cdot)$. We call $\psi_{ij}(\cdot,\cdot)$ and $\psi_{i}(\cdot)$ the local potential and the local evidence potential, respectively. Let the neighborhood of a node $i$ be denoted as $\Gamma(i) := \{j: (i,j)\in\mathcal{E}\}$. Let $m_{ji}^{(t-1)}(\cdot)$ be the message at the $(t-1)$th iteration of the BP algorithm; the message is then updated by \begin{equation*} \label{eq:message_update} m_{ji}^{(t)}(x_i) \propto \int \psi_{ij}(x_i,x_j) \psi_j(x_j)\prod_{k\in\Gamma(j)\backslash i} m_{kj}^{(t-1)}(x_j) dx_j. \end{equation*} The belief, which is the approximate marginal density function of the random variable associated with node $i$, is updated as \begin{equation*} q_{i}^{(t)}(x_i)\propto \psi_i(x_i)\prod_{j\in\Gamma(i)}m_{ji}^{(t)}(x_i). \end{equation*} The messages and beliefs are iteratively updated until convergence. For tree-structured graphs, the beliefs converge to the true marginals. Closed forms for the messages generally do not exist, with a few exceptions.
For the same reason as in HMM tracking, GMMs are used to approximate the local potentials and local evidence potentials. In this section, we consider the graphical model in Figure~\ref{fig:BP_example}(a) following~\cite{yu2018density}. In this model, the local potential associated with the $(i,j)$th edge is given by $\psi_{ij}(x,y) = \phi(x|y,\phi_{ij}^{-1})$, where the $\phi_{ij}$ values are marked alongside the edges. The local evidence potential associated with the $i$th node is a two-component Gaussian mixture $\psi_{i}(x)=w_i\phi(x|\mu_{i1}, 1)+(1-w_i)\phi(x|\mu_{i2}, 1.5)$ with $w_i\sim U(0, 1)$, $\mu_{i1}\sim U(-4,0)$, and $\mu_{i2}\sim U(0, 4)$, all drawn independently. As the order of the message mixture grows exponentially with the number of iterations, the \emph{exact inference} becomes intractable after $4$ iterations. To overcome this difficulty, we use GMR to perform \emph{approximate inference}; that is, we reduce the order of the message mixture before the next belief update to keep the order at a manageable size. In our experiment, we reduce the message mixture to $M=4$ when $N>4$.
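The growth of the message order can be tracked with simple bookkeeping: the integrand of the message update is a product of the evidence potential and the incoming messages, so the orders multiply at every update, while the GMR cap keeps them bounded. The sketch below is rough bookkeeping under these assumptions (single-Gaussian pairwise potentials, order-2 evidence potentials), not an implementation of BP:

```python
def simulate_orders(edges, evidence_order=2, iters=4, cap=None):
    """Track the order of every directed message; the integrand of the
    update is a product of mixtures, so orders multiply (bookkeeping
    sketch only)."""
    nodes = {i for e in edges for i in e}
    nbrs = {i: [j for e in edges if i in e for j in e if j != i]
            for i in nodes}
    msg = {(j, i): 1 for j in nodes for i in nbrs[j]}  # trivial initial messages
    for _ in range(iters):
        new = {}
        for (j, i) in msg:
            order = evidence_order
            for k in nbrs[j]:
                if k != i:
                    order *= msg[(k, j)]
            new[(j, i)] = min(order, cap) if cap else order
        msg = new
    return max(msg.values())

# graph of Figure (a): orders explode without reduction, stay bounded with it
edges = [(1, 4), (1, 3), (2, 1), (3, 2), (4, 3)]
print(simulate_orders(edges, iters=4))          # exact inference
print(simulate_orders(edges, iters=4, cap=4))   # with GMR capped at M=4
```

On this graph, the maximum message order already exceeds one thousand after four iterations without reduction, consistent with exact inference becoming intractable.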
\begin{figure}[htpb] \centering \subfloat[Graphical model]{ \begin{tikzpicture}[auto, scale=0.5, node distance=3cm, transform shape, main node/.style={circle,draw,font=\sffamily\Large\bfseries}] \node[main node] (1) {$X_1$}; \node[main node] (2) [below left of=1] {$X_2$}; \node[main node] (3) [below right of=2] {$X_3$}; \node[main node] (4) [below right of=1] {$X_4$}; \path[every node/.style={font=\sffamily\small}] (1) edge node [left] {0.6} (4) edge node {0.4} (3) (2) edge node [right] {0.2} (1) (3) edge node [right] {0.01} (2) (4) edge node [left] {0.8} (3); \end{tikzpicture}} \hspace{10mm} \subfloat[Time]{\includegraphics[width=0.45\columnwidth]{figure/BP/BP_time.png}}\\ \subfloat[ISE]{\includegraphics[width=\columnwidth]{figure/BP/BP_ISE.png}} \caption{(a) The structure of the graphical model for belief propagation; (b) computational time for belief updates versus the number of iterations; and (c) the squared $L_2$ distance between the exact and approximate beliefs.} \label{fig:BP_example} \end{figure} We evaluate the performance of the GMR approaches by computing the ISE, averaged over nodes, between the exact and approximate beliefs. The comparison is computationally feasible only for the first $3$ iterations due to limited computer memory. Since no reduction is applied in the first iteration, we only show the results for the 2nd and 3rd iterations. The results are averaged over 100 trials. Figure~\ref{fig:BP_example} (c) gives the distance between the approximate beliefs and the exact beliefs at each node. As the number of iterations increases, the ISE at each node increases. It can also be seen from Figure~\ref{fig:BP_example} (c) that the approximate inference based on ISE is the best in terms of this distance. Among the minimum CTD based reduction approaches, the one with the ISE cost function gives the best approximate inference.
In terms of computational time, the ISE approach, whose beliefs are closest to the exact inference, saves little time. In the 3rd iteration, as the order of the message mixture in the exact inference grows exponentially, the CTD based reduction approaches save substantial computational time while the beliefs obtained from the approximate inference remain close to those of the exact inference. \subsection{Hand Gesture Recognition} We apply GMR to static hand gesture recognition. In this task, a set of labeled images of hand gestures is given as the training set, on which a classifier is trained to classify unseen images of the same set of hand gestures. \noindent \textbf{Dataset \& Pre-processing} We use the Jochen Triesch static hand posture database~\cite{triesch1996robust}, which is publicly available online. This dataset contains $128\times128$ gray-scale images of $10$ hand postures forming the alphabetic letters A, B, C, D, G, H, I, L, V, and Y for 24 persons with 3 different backgrounds. To remove additional noise caused by the background, in our experiment we use the same set of background-removed images as described in~\cite{kampa2011closed}. To reduce the classification error caused by mis-alignment of the hands, the authors of~\cite{kampa2011closed} center the hands by cropping. They manually crop each image into the smallest rectangle that contains only the hand and is centered at the hand. After this step, all hands are centered in the image but have different sizes due to differences in hand sizes in the original images. To make the classifiers more invariant to hand size, they resize each image into a square whose top-left and bottom-right pixels have coordinates $(0,0)$ and $(1,1)$, respectively. After these pre-processing steps, there are $168$ images in total with around $16$--$20$ images for each hand posture.
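The cropping and coordinate normalization described above can be sketched as follows for a background-removed image; the function name and the return format (pixel coordinates in the unit square with normalized intensities, ready for weighted GMM fitting) are our illustrative choices:

```python
import numpy as np

def hand_to_points(img):
    """Crop a background-removed gray-scale image to the bounding box of the
    hand and map pixel coordinates into the unit square (a sketch)."""
    ys, xs = np.nonzero(img)                      # non-background pixels
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    crop = img[y0:y1 + 1, x0:x1 + 1]
    h, w = crop.shape
    yy, xx = np.nonzero(crop)
    # normalize coordinates so corners map to (0,0) and (1,1)
    coords = np.column_stack([xx / max(w - 1, 1), yy / max(h - 1, 1)])
    weights = crop[yy, xx].astype(float)
    return coords, weights / weights.sum()
```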
\noindent \textbf{Gaussian Mixture \& Hand Gesture Recognition} In~\cite{kampa2011closed}, the intensity of each pixel is viewed as a function of the pixel location, which can be approximated by the density function of a Gaussian mixture up to a normalizing constant. They therefore fit a $10$-component GMM on each image using the non-background pixels; each image can thus be represented by a GMM. An example of an original image and the heat-map of the density function of the corresponding fitted mixture model is given in Figure~\ref{fig:hand_gesture_fitted_gmm}. \begin{figure}[htpb] \centering \subfloat[Pre-processed image]{\includegraphics[height=0.35\linewidth, width=0.45\linewidth]{figure/hand_gestures/demo_c_original.png}} \subfloat[Heat-map of GMM]{\includegraphics[width=0.35\linewidth]{figure/hand_gestures/demo_c_gmm.png}} \caption{An example of (a) a pre-processed image of hand posture ``C"; (b) the heat-map of the fitted 10-component mixture on the pre-processed image in (a).} \label{fig:hand_gesture_fitted_gmm} \end{figure} We propose to classify a new image, which is also represented by a GMM, with a minimum divergence classifier. In~\cite{kampa2011closed}, a new image is classified by computing the CS divergence between the test image and all training images, where both are represented by GMMs; the test image is then classified by the nearest neighbor. For example, a test image is classified as gesture ``A'' if the closest training image has hand gesture ``A''. We perform the classification slightly differently from~\cite{kampa2011closed}. We also use the minimum divergence classifier, but instead of computing the divergence from the test image to every single training image, we compute the divergence from the test image to a class ``prototype'' for each class. We summarize the training images within the same class into a single training image and call it the class prototype.
We combine all the images within the same class into a GMM with many components and reduce it to a $10$-component GMM; the reduced mixture is our class prototype. When there are many training examples, our approach saves evaluation time. Figure~\ref{fig:hand_gesture_prototype} gives the class prototypes of the hand gestures obtained by different reduction approaches. \begin{figure}[ht] \centering \setlength{\tabcolsep}{1pt} \begin{tabular}{ccccccccccc} \toprule Method&A&B&C&D&G&H&I&L&V&Y\\ \midrule ISE&\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/ISE_prototype_A.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/ISE_prototype_B.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/ISE_prototype_C.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/ISE_prototype_D.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/ISE_prototype_G.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/ISE_prototype_H.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/ISE_prototype_I.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/ISE_prototype_L.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/ISE_prototype_V.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/ISE_prototype_Y.png}\\ CTD-KL&\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-KL_prototype_A.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-KL_prototype_B.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-KL_prototype_C.png}
&\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-KL_prototype_D.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-KL_prototype_G.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-KL_prototype_H.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-KL_prototype_I.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-KL_prototype_L.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-KL_prototype_V.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-KL_prototype_Y.png}\\ CTD-W2&\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-W2_prototype_A.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-W2_prototype_B.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-W2_prototype_C.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-W2_prototype_D.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-W2_prototype_G.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-W2_prototype_H.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-W2_prototype_I.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-W2_prototype_L.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-W2_prototype_V.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-W2_prototype_Y.png}\\ CTD-CS&\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-CS_prototype_A.png} 
&\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-CS_prototype_B.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-CS_prototype_C.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-CS_prototype_D.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-CS_prototype_G.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-CS_prototype_H.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-CS_prototype_I.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-CS_prototype_L.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-CS_prototype_V.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-CS_prototype_Y.png}\\ CTD-ISE&\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-ISE_prototype_A.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-ISE_prototype_B.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-ISE_prototype_C.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-ISE_prototype_D.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-ISE_prototype_G.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-ISE_prototype_H.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-ISE_prototype_I.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-ISE_prototype_L.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-ISE_prototype_V.png} 
&\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-ISE_prototype_Y.png}\\ \bottomrule \end{tabular} \caption{The class prototype of each hand gesture obtained by different reduction approaches.} \label{fig:hand_gesture_prototype} \end{figure} \noindent \textbf{Results} The quality of the class prototypes affects the classification accuracy; we therefore compare the classification accuracy based on the various reduction methods and their corresponding runtimes. Since the training set is relatively small, we perform 5-fold cross validation, repeated 100 times, to estimate the classification accuracy. \begin{figure}[htpb] \centering \subfloat[Classification Accuracy]{\includegraphics[width=0.45\linewidth]{figure/hand_gestures/hand_gesture_accuracy.png}} \subfloat[Time]{\includegraphics[width=0.45\linewidth]{figure/hand_gestures/hand_gesture_time.png}}\\ \subfloat[Classification Accuracy]{\includegraphics[width=0.45\linewidth]{figure/hand_gestures/hand_gesture_cross_test.png}} \caption{The (a) classification accuracy when the same divergence is used during reduction and test, (b) computational time based on different reduction approaches, (c) classification accuracy when different divergences are used during reduction and test.} \label{fig:hand_gesture_final_result} \end{figure} We consider two schemes during the test: \begin{enumerate} \item During the test time, the divergence used for classification is the same as that used during the reduction. For example, if we minimize the ISE to obtain the class prototype, we then also use the ISE to measure the similarity between the test images and the class prototypes. The classification accuracy based on different divergences is given in Figure~\ref{fig:hand_gesture_final_result} (a). \item During the test time, we consider using all the divergences in this experiment.
Each combination of the divergence used in the reduction and the divergence used in the test gives a classification accuracy, which is visualized in Figure~\ref{fig:hand_gesture_final_result} (c). \end{enumerate} When the same divergence is used during the reduction and test, the ISE based approach gives the highest classification accuracy; however, its computational time is the longest. The CTD based approaches perform worse; however, with CS or ISE as the cost function, they outperform CTD-KL, the classical clustering based algorithm in the literature. CTD-KL takes the shortest time. When different divergences are used during the reduction and test, it can be seen that when the ISE is used during the test, four of the reduction approaches give the same classification accuracy. When the CTD-W2 approach is used during the reduction and the ISE is used during the test, the performance is not as good as the other approaches. Therefore, considering both the computational time and the classification accuracy, we recommend using CTD-KL for reduction and the ISE during the test. \section{Conclusion} In this paper, we propose an optimization based GMR approach that connects the clustering based and optimization based approaches for GMR in the literature. We use this formulation to establish the convergence of the clustering based algorithms. We also show how the formulation can be generalized to other cost functions, such as the ISE, to further improve the performance of the clustering based algorithms. Numerical experiments are conducted to illustrate the effectiveness of the proposed generalization. Although these algorithms maintain the computational simplicity of the clustering based algorithms, they are not as good as the minimum ISE based optimization approach in terms of statistical efficiency.
We leave it as future work to develop a computationally efficient algorithm that matches the minimum ISE approach and has theoretical guarantees. \ifCLASSOPTIONcompsoc \section*{Acknowledgments} \else \section*{Acknowledgment} \fi The authors would like to thank Trevor Campbell, Nhat Ho, Kittipat Kampa, Richard Schonberg, and Lei Yu for their help and discussion. This research was enabled in part by support provided by WestGrid (\url{www.westgrid.ca}) and Compute Canada Calcul Canada (\url{www.computecanada.ca}). \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran} \section{Introduction}\label{sec:introduction}} \IEEEPARstart{L}{et} ${\mathcal{F}} = \{\phi(x|\theta) = |2\pi \Sigma|^{-1/2} \exp\{-\frac{1}{2} (x-\mu)^{\tau}\Sigma^{-1} (x-\mu)\}: \theta\in \Theta\}$ be the family of density functions of Gaussian distributions. For Gaussian densities, the parameter $\theta=(\mu,\Sigma)$ lies in $\Theta = {\mathbb{R}}^{d}\times {\mathcal{S}}_{d}$, where ${\mathcal{S}}_{d}$ is the space of symmetric nonnegative definite matrices of dimension $d$. Let $G= \sum_{n=1}^{N} w_n \delta_{\theta_n}$ be a discrete probability measure on $\Theta$ for some integer $N>0$. The density function of a finite Gaussian mixture model (GMM) is defined to be \begin{equation*} p_{G}(x) := \int \phi(x|\theta) dG(\theta) = \sum_{n=1}^{N} w_n \phi(x|\theta_n) := \sum_{n=1}^{N} w_n \phi_n(x). \end{equation*} For ease of notation, we write $\phi_{n}(x)$ for $\phi(x|\theta_n)$. The discrete probability measure $G$ is called the mixing distribution. The components of the vector $\mbox{\bf w} = (w_1,w_2,\cdots,w_N)^{\tau}$ are called the mixing weights and the atoms $\theta_{n}$ are called the subpopulation or component parameters.
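For concreteness, the mixture density $p_{G}(x)$ can be evaluated directly from the definition above; the following is a minimal numerical sketch (the function name and the example mixture are illustrative choices of ours, not from any referenced implementation):

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_density(x, weights, means, covs):
    """Evaluate p_G(x) = sum_n w_n * phi(x | mu_n, Sigma_n)."""
    return sum(w * multivariate_normal.pdf(x, mean=m, cov=c)
               for w, m, c in zip(weights, means, covs))

# a two-component mixture on the real line
w, mu, cov = [0.3, 0.7], [0.0, 2.0], [1.0, 0.5]
```

The same function works in any dimension $d$ once the means and covariances are supplied as $d$-vectors and $d\times d$ matrices.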
The space of mixing distributions with order up to $N$ is denoted as $${\mathbb{G}}_{N} = \big \{ G=\sum_{n=1}^N w_n\delta_{\theta_n}|\theta_n \in \Theta, w_n\geq 0, \sum_{n=1}^N w_n=1 \big\}.$$ An order $N$ mixture has its mixing distribution in the space ${\mathbb{G}}_{N}-{\mathbb{G}}_{N-1}$. It is often cited~\cite{titterington1985statistical,nguyen2020approximation} that a GMM can approximate any density function arbitrarily well; GMMs are therefore widely used as parametric distributions to approximate distributions of complex shape. In practice, there is always a trade-off between the accuracy of the approximation and computational efficiency. A larger number of components improves the approximation precision but increases the computational cost for downstream applications. For instance, it increases the cost of evaluating the log-likelihood function. Moreover, when finite mixture models are used to approximate density functions in some Bayesian inference procedures, the number of components of the mixture increases exponentially~\cite{manzar2017recursive} due to recursive operations. An example is belief propagation for finding the marginal probabilities in graphical models. When the messages--the reusable partial sums for the marginalization calculations--are modeled as Gaussian mixtures~\cite{sudderth2010nonparametric}, their number of components increases exponentially and quickly becomes intractable as the iterations proceed. In recursive Bayesian filtering under a hidden Markov model, when the transition probability and the likelihood are both Gaussian mixtures, the posterior distribution is also a mixture whose order increases exponentially. In these cases, to make inferences within a reasonable amount of time, some intermediate approximation steps can be used to prevent the number of components of the mixture from exploding. \emph{Gaussian mixture reduction} (GMR) is a technique useful for such purposes.
GMR aims at approximating an $N$-component mixture $p_{G}(x) =\sum_{n=1}^{N} w_n \phi_n(x)$ by $p_{\tilde G}(x) = \sum_{m=1}^{M} \tilde w_m \phi(x|\tilde \theta_m)=\sum_{m=1}^{M} \tilde w_m \tilde \phi_m(x)$ where $M \leq N$. They are usually referred to as the original and the reduced mixtures respectively. There is a rich literature on GMR and most approaches belong to one of three general types: \emph{greedy algorithm based}~\cite{salmond1990mixture,runnalls2007kullback,huber2008progressive}, \emph{optimization based}~\cite{williams2006cost}, and \emph{clustering based}~\cite{schieferdecker2009gaussian,goldberger2005hierarchical,davis2007differential, assa2018wasserstein,zhang2010simplifying,vasconcelos1999learning,yu2018density} approaches. We refer to~\cite{crouse2011look} for a thorough review of these approaches. In this paper, we focus only on the optimization based and clustering based approaches. The optimization based approach~\cite{williams2006cost} has the advantage of an explicit optimality target but is computationally difficult to optimize. The clustering based approaches, motivated by the \emph{k-means} algorithm in Euclidean space, have the advantage of computational efficiency. Yet it is unclear whether these clustering based algorithms converge or attain some optimality target when they do. In this paper, we propose an optimization based approach for GMR that combines the advantages of the clustering based and optimization based approaches. Our proposed reduced mixture minimizes the \emph{composite transportation divergence} (CTD) from the original mixture to the mixture with a desired order. An efficient Majorization-Minimization (MM) algorithm is designed for the proposed optimization problem. We show the clustering based algorithms in the literature are special cases of our proposed MM algorithm.
By establishing this connection, we are able to show the \emph{convergence} and the \emph{optimality goal} of the clustering based algorithms. Our formulation for GMR also allows us to easily develop more ``effective'' approaches and to extend the technique to the reduction of mixtures from other families. The rest of the paper is organized as follows. In Section~\ref{sec:overview}, we review the optimization based and clustering based approaches in the literature. The composite transportation divergence, which is the foundation of our proposed method, is introduced in Section~\ref{sec:composite_wdistance}. Our proposed method is given in Section~\ref{sec:problem_formulation}, where we also discuss its connection with existing methods and its generalization. Numerical experiments comparing different approaches are given in Section~\ref{sec:exp}. \section{Overview of Existing Algorithms} \label{sec:overview} We review the optimization based and clustering based approaches in the literature. \subsection{Optimization Based Algorithm} GMR is formulated as an optimization problem in~\cite{williams2006cost} where the reduced mixture minimizes some divergence to the original mixture. The reduced mixture is defined as \begin{equation} \label{eq:minimum_distance} p_{\tilde G} = \argmin_{\tilde G \in {\mathbb{G}}_{M}} D(p_{G}, p_{\tilde G}) \end{equation} for some divergence $D(\cdot, \cdot)$. The advantage of the optimization based approach is that the reduced mixture is known to achieve an optimality goal. In~\cite{williams2006cost}, $D(\cdot, \cdot)$ is chosen to be the Integral Squared Error (ISE), the squared $L_2$ distance between the two Gaussian mixtures, because it has a closed form.
\begin{equation} \label{eq:ISE} \begin{split} D_{\text{ISE}}(p_{G}, p_{\tilde{G}}) &:= \int (p_{G}(x) - p_{\tilde{G}}(x))^2 dx\\ &=\mbox{\bf w}^{\tau}\mathbf{S}_{OO}\mbox{\bf w} -2\mbox{\bf w}^{\tau}\mathbf{S}_{OR} \tilde{\mbox{\bf w}} + \tilde{\mbox{\bf w}}^{\tau} \mathbf{S}_{RR}\tilde{\mbox{\bf w}} \end{split} \end{equation} where $\mathbf{S}_{OO}$, $\mathbf{S}_{OR}$, and $\mathbf{S}_{RR}$ are three matrices of sizes $N\times N$, $N\times M$, and $M\times M$ respectively. Their $(n,m)$th elements are respectively given by $\phi(\mu_n|\mu_m,\Sigma_n+\Sigma_m)$, $\phi(\mu_n|\tilde\mu_m,\Sigma_n+\tilde\Sigma_m)$, and $\phi(\tilde\mu_n|\tilde\mu_m,\tilde\Sigma_n+\tilde\Sigma_m)$. Although the objective function has a closed form, numerical algorithms must be employed for minimization. The Quasi-Newton Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is used in~\cite{williams2006cost} for optimization. In this approach, evaluating the gradient at each step costs $O(NMd^3)$ and the dimension of the parameter is of order $O(d^2)$. Therefore, directly minimizing the ISE becomes very expensive as the dimension $d$ increases. \subsection{Clustering Based Algorithm} The clustering based algorithm~\cite{schieferdecker2009gaussian} that mimics the idea of the \emph{k-means} algorithm is a computationally efficient approach for GMR. It aims at partitioning the components of the original mixture into clusters, with the number of clusters equal to the order of the reduced mixture. Given initial cluster centers, which are Gaussians, the clustering based algorithm consists of the following two steps: \begin{enumerate} \item \emph{Assignment step}: each component of the original mixture is assigned to a cluster based on its distance to the cluster centers; \item \emph{Update step}: the new cluster centers, which are Gaussians, are updated according to certain criteria. \end{enumerate} These two steps are applied iteratively until a convergence criterion is met.
Then the components of the reduced mixture are these cluster centers, whose corresponding mixing weights are the sums of the weights of the components of the original mixture that belong to the same cluster. As in the case of clustering in a vector space, the clustering based algorithms for GMR can also be classified into hard and soft clustering. Each component of the original mixture is assigned to one and only one cluster in hard clustering and to each cluster with a certain probability in soft clustering. \noindent \textbf{Hard Clustering} At the assignment step, the original mixture components are partitioned based on their minimum KL divergence~\cite{schieferdecker2009gaussian} to the cluster centers. The $n$th component of the original mixture is assigned to the $j$th cluster if the $j$th cluster center is the closest in KL divergence. At the update step, the components in the same cluster are merged into a single distribution in ${\mathcal{F}}$ via moment matching. That is, the first two moments of the new cluster center, which is a Gaussian distribution, equal the first two moments of the mixture formed by the components belonging to that cluster. See Algorithm~\ref{alg:hard_clustering}. Other similarity measures, such as the Wasserstein distance, have also been considered~\cite{assa2018wasserstein}.
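The moment matching merge used in the update step can be sketched numerically; this is an illustrative implementation of the merge formulas in Algorithm~\ref{alg:hard_clustering}, with names of our own choosing:

```python
import numpy as np

def moment_match(weights, means, covs):
    """Merge Gaussian components (w_n, mu_n, Sigma_n) into a single
    Gaussian whose first two moments match those of the sub-mixture."""
    w = np.asarray(weights, dtype=float)
    means = np.asarray(means, dtype=float)
    w_bar = w.sum()                                    # merged weight
    mu_bar = (w[:, None] * means).sum(axis=0) / w_bar  # merged mean
    diffs = means - mu_bar
    # merged covariance: within-component spread plus between-mean spread
    Sigma_bar = sum(
        wn * (C + np.outer(d, d)) for wn, C, d in zip(w, covs, diffs)
    ) / w_bar
    return w_bar, mu_bar, Sigma_bar
```

For example, merging $N(0,1)$ and $N(2,1)$ with equal weights gives a Gaussian with mean $1$ and variance $2$, the variance inflation coming from the between-mean term.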
\begin{algorithm}[htbp] \begin{algorithmic} \State {\bfseries Input:} $p_{G}(x) = \sum_{n} w_n\phi_{n}(x)$ \State {\bfseries Initialize:} $\tilde\phi_1,\tilde\phi_2,\ldots,\tilde\phi_M$ \Repeat \State \underline{\emph{Assignment step:}} \State Compute $d_{nm} = D_{\text{KL}}(\phi_n,\tilde\phi_m)$ \State Assign component $n$ to cluster $C(n)=\argmin_{j} d_{nj}$ \State \underline{\emph{Update step:}} update cluster centers by moment matching \For{$m \in [M]$} \begin{align*} \tilde w_m &= \sum_{C(n)=m} w_n\\ \tilde \mu_m &= \tilde w_m^{-1}\sum_{C(n)=m}w_n\mu_n\\ \tilde \Sigma_m &= \tilde w_m^{-1}\sum_{C(n)=m} w_n\{\Sigma_n+(\mu_n-\tilde \mu_m)(\mu_n-\tilde\mu_m)^{\tau}\} \end{align*} \EndFor \State Let $p_{\tilde{G}}(x) = \sum_{m} \tilde{w}_m \tilde\phi_m(x)$ \Until the value of $D_{\text{ISE}}(p_G, p_{\tilde{G}})$ converges \end{algorithmic} \caption{Hard clustering based algorithm in~\cite{schieferdecker2009gaussian}} \label{alg:hard_clustering} \end{algorithm} \noindent \textbf{Soft Clustering} Instead of assigning each component of the original mixture to one cluster, let $z_{nm}$ be the probability that the $n$th component of the original mixture belongs to the $m$th cluster, so that $\sum_{m=1}^{M}z_{nm}=1$ for $n=1,2,\ldots,N$. The soft clustering based algorithm in~\cite{yu2018density} is given in Algorithm~\ref{alg:soft_clustering}. Compared with the hard clustering based algorithm, the update step is a weighted version of moment matching. When the hyper-parameter $I\rightarrow \infty$, $z_{nm} \rightarrow 1$ if $m=C(n)$ and $0$ otherwise; that is, the soft clustering based algorithm reduces to the hard clustering based algorithm.
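For Gaussian components, $E_{nm}$ in~\eqref{eq:soft_clustering_similarity} has the standard closed form $E_{nm} = -\frac{1}{2}\{d\log 2\pi + \log|\tilde\Sigma_m| + \operatorname{tr}(\tilde\Sigma_m^{-1}\Sigma_n) + (\mu_n-\tilde\mu_m)^{\tau}\tilde\Sigma_m^{-1}(\mu_n-\tilde\mu_m)\}$, so the assignment step~\eqref{eq:soft_clustering_assignment} requires no sampling. A sketch (function names are illustrative):

```python
import numpy as np

def expected_loglik(mu_n, Sigma_n, mu_m, Sigma_m):
    """Closed form of E_{nm} = E_{phi_n}[log phi_m(X)] for Gaussians."""
    d = len(mu_n)
    S_inv = np.linalg.inv(Sigma_m)
    diff = mu_n - mu_m
    return -0.5 * (d * np.log(2 * np.pi)
                   + np.log(np.linalg.det(Sigma_m))
                   + np.trace(S_inv @ Sigma_n)
                   + diff @ S_inv @ diff)

def soft_assignment(E, w_tilde, I):
    """z_{nm} proportional to w_m * exp(I * E_{nm}), normalized over m."""
    logits = np.log(w_tilde)[None, :] + I * E
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    Z = np.exp(logits)
    return Z / Z.sum(axis=1, keepdims=True)
```

The max-subtraction before exponentiation avoids overflow when $I$ is large, which is exactly the regime in which soft clustering approaches hard clustering.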
\begin{algorithm}[htpb] \begin{algorithmic} \State {\bfseries Input:} $p_{G}(x) = \sum_{n} w_n\phi_n(x)$, hyper-parameter $I>0$ \State {\bfseries Initialize:} $\tilde\phi_1,\tilde\phi_2,\ldots,\tilde\phi_M$ \Repeat \State \underline{\emph{Assignment step:}} \State Let \begin{align} E_{nm} &= {\mathbb{E}}_{\phi_n}\{\log \tilde \phi_m(X)\}\label{eq:soft_clustering_similarity} \\ z_{nm} &=\{\tilde w_m \exp(IE_{nm})\}/\sum_{m'} \{\tilde w_{m'} \exp(IE_{nm'})\}\label{eq:soft_clustering_assignment} \end{align} \State \underline{\emph{Update step:}} \For{$m\in[M]$} \begin{equation*} \begin{split} \tilde w_m &= \sum_{n}z_{nm}w_n\\ \tilde\mu_m &= \tilde w_m^{-1}\sum_{n} z_{nm} w_n \mu_n\\ \tilde\Sigma_m &= \tilde w_m^{-1}\sum_{n} z_{nm} w_n \left\{\Sigma_n + (\mu_n - \tilde\mu_m)(\mu_n - \tilde\mu_m)^{\tau}\right\} \end{split} \end{equation*} \EndFor \Until $\sum_{n}w_n\sum_{m} z_{nm} \left\{\log \frac{\tilde w_m}{z_{nm}} + IE_{nm}\right\}$ converges \end{algorithmic} \caption{Soft clustering based algorithm in~\cite{yu2018density}} \label{alg:soft_clustering} \end{algorithm} \section{Composite Transportation Divergence} \label{sec:composite_wdistance} Our proposed framework for GMR is based upon the composite transportation divergence (CTD). In this section, we introduce its formal definition. \begin{definition}[Composite Transportation Divergence] \label{def:CTD} Let $G=\sum_{n=1}^{N} w_{n} \delta_{\theta_{n}}$ and $\tilde G=\sum_{m=1}^{M}\tilde w_{m} \delta_{\tilde \theta_{m}}$ be two discrete measures on $\Theta$, and denote by ${\mathbf{w}}$ and $\tilde{{\mathbf{w}}}$ the corresponding weight vectors. Let $\phi_n(x) = \phi(x|\theta_n)$, $\tilde{\phi}_m(x) = \phi(x|\tilde \theta_m)$, and let the cost function $c(\cdot,\cdot): {\mathcal{F}}\times {\mathcal{F}} \rightarrow {\mathbb{R}}_{+}$ be a non-negative semi-continuous function.
Denote by $$ \Pi({\mathbf{w}} , \tilde{\mathbf{w}}) = \{{\boldsymbol\pi} \in {\mathbb{R}}_{+}^{N \times M}: {\boldsymbol\pi}\mathbbm{1}_{M} = {\mathbf{w}},{\boldsymbol\pi}^{\tau}\mathbbm{1}_{N} = \tilde {\mathbf{w}}\}. $$ The composite transportation divergence~\cite{chen2017optimal,delon2019wasserstein} between two mixtures is defined as \begin{equation} \label{eq:WD_distance} {\mathcal{T}}_{c}(p_{G},p_{\tilde G}) :=\inf_{{\boldsymbol\pi} \in \Pi ({\mathbf{w}} , \tilde{{\mathbf{w}}})}\sum_{n,m}\pi_{nm}c(\phi_n, \tilde \phi_m). \end{equation} \end{definition} The CTD between two finite mixtures can be explained by a canonical example in optimal transportation~\cite{peyre2019computational}. Suppose that an operator runs $N$ warehouses and $M$ factories in the space $\mathcal{F}$. The warehouse $n$ at location $\phi_n$ contains $w_n$ units of raw material. These raw materials are to be transported to the factories at locations $\tilde{\phi}_m$, with $\tilde w_m$ being the amount of material that the $m$th factory requires. The element $c(\phi_n,\tilde{\phi}_m)$ is the unit transportation cost and $\pi_{nm}$ is the amount of material to be transported from warehouse $n$ to factory $m$. It is reasonable to assume the cost is proportional to the amount of material transported; therefore $\sum_{n,m} \pi_{nm}c(\phi_n,\tilde{\phi}_m)$ gives the total transportation cost under the transportation plan ${\boldsymbol\pi}$. The set $\Pi({\mathbf{w}} , \tilde{\mathbf{w}})$ consists of all transportation plans ${\boldsymbol\pi}$ that satisfy two marginal constraints: (a) the right amount of material is taken from each warehouse, and (b) the right amount of material is sent to each factory. The CTD $\mathcal{T}_{c}(p_{G},p_{\tilde G})$ is the lowest total transportation cost over all possible transportation plans.
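Once the cost matrix is given, the CTD in Definition~\ref{def:CTD} is a small linear program over the transportation plan; a sketch using a generic LP solver (illustrative only, not tuned for large $N$):

```python
import numpy as np
from scipy.optimize import linprog

def ctd(w, w_tilde, C):
    """CTD for cost matrix C[n, m] = c(phi_n, tilde_phi_m), solved as
    a linear program over transportation plans pi in Pi(w, w_tilde)."""
    N, M = C.shape
    A_eq = []
    for n in range(N):                        # row sums of pi equal w
        row = np.zeros((N, M)); row[n, :] = 1.0
        A_eq.append(row.ravel())
    for m in range(M):                        # column sums equal w_tilde
        col = np.zeros((N, M)); col[:, m] = 1.0
        A_eq.append(col.ravel())
    res = linprog(C.ravel(), A_eq=np.array(A_eq),
                  b_eq=np.concatenate([w, w_tilde]), method="highs")
    return res.fun, res.x.reshape(N, M)
```

In the warehouse--factory picture, the returned plan says how much material each warehouse ships to each factory at minimum total cost.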
For a given cost function, the optimal transportation plan can be computed via linear programming, with the maximum number of iterations required being $\mathcal{O}(N^3 \log N)$ when $N=M$~\cite{panaretos2019statistical}. Therefore, the computation of the CTD becomes increasingly expensive as the mixture order $N$ gets larger. To address this computational issue,~\cite{cuturi2014fast} considers an entropic regularized version which gives an approximate solution to the original transportation problem but can be computed much faster. We adopt the same idea and introduce the entropic regularized CTD as follows. \begin{definition}[Entropic Regularized CTD] With the same notation as before, let ${\mathcal{H}}({\boldsymbol\pi}) = -\sum_{i,j}\pi_{ij}(\log \pi_{ij} - 1 )$ be the entropy of the transportation plan ${\boldsymbol\pi}$. Then the entropic regularized CTD between two mixtures is defined as \begin{equation*} {\mathcal{T}}_{c}^\lambda(p_{G},p_{\tilde G})=\inf_{{\boldsymbol\pi} \in \Pi ({\mathbf{w}} , \tilde{{\mathbf{w}}})}\{\sum_{n,m}\pi_{nm}c(\phi_n,\tilde{\phi}_m) - \lambda {\mathcal{H}}({\boldsymbol\pi})\} \end{equation*} for some regularization parameter $\lambda \geq 0$. \end{definition} When $\lambda = 0$, the entropic regularized CTD reduces to the CTD; when $\lambda\rightarrow\infty$, ${\mathcal{T}}_{c}^\lambda(p_{G},p_{\tilde G})\rightarrow \sum_{n,m} \pi_{nm}^*c(\phi_n,\tilde{\phi}_m)$ where ${\boldsymbol\pi}^* = \arginf\{- {\mathcal{H}}({\boldsymbol\pi}): {\boldsymbol\pi} \in \Pi ({\mathbf{w}} , \tilde {\mathbf{w}})\}=\mbox{\bf w}\tilde\mbox{\bf w}^{\tau}$. For simplicity, we also refer to the entropy regularized divergence as the composite transportation divergence; the difference is highlighted when necessary.
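For $\lambda>0$, the regularized plan can be approximated by Sinkhorn scaling iterations in the spirit of~\cite{cuturi2014fast}; a minimal sketch with a fixed iteration budget and no convergence check (names are ours):

```python
import numpy as np

def entropic_ctd(w, w_tilde, C, lam, n_iter=500):
    """Entropic regularized CTD via Sinkhorn scaling iterations."""
    K = np.exp(-C / lam)                 # Gibbs kernel of the cost matrix
    u, v = np.ones_like(w), np.ones_like(w_tilde)
    for _ in range(n_iter):              # alternate marginal corrections
        u = w / (K @ v)
        v = w_tilde / (K.T @ u)
    pi = u[:, None] * K * v[None, :]
    # objective: sum pi*c - lam*H(pi), with H(pi) = -sum pi*(log pi - 1)
    val = np.sum(pi * C) + lam * np.sum(pi * (np.log(pi) - 1.0))
    return val, pi
```

Each iteration costs only $O(NM)$, which is the source of the speedup over exact linear programming; smaller $\lambda$ gives a tighter approximation but slower convergence.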
\section{Proposed Reduction Approach} \label{sec:problem_formulation} Recall $p_{G}(x) = \sum_{n=1}^{N} w_n \phi_n(x)$ is the original mixture with order $N$ and we search for a $p_{\tilde G}(x) = \sum_{m=1}^{M}\tilde w_m \tilde\phi_m(x)$ with $M \leq N$ to approximate $p_{G}$. We propose to find the reduced mixture via an optimization based approach. Taking the divergence in~\eqref{eq:minimum_distance} to be the CTD, we find the solution to GMR via \begin{equation} \label{eq:RCWD_GMR} \tilde G:=\arginf_{\tilde G\in {\mathbb{G}}_{M}} {\mathcal{T}}_{c}^\lambda(p_{G}, p_{\tilde G}) \end{equation} for a pre-specified cost function $c(\cdot,\cdot)$ and regularization strength $\lambda \geq 0$. The rest of the section is organized as follows. We first prescribe an efficient MM algorithm for~\eqref{eq:RCWD_GMR} in Section~\ref{sec:mm_algorithm}. We then establish its connection with existing clustering based algorithms in Section~\ref{sec:connection}. We finally show in Section~\ref{sec:generalization} how this framework generalizes to different cost functions. \subsection{Numerical Algorithm} \label{sec:mm_algorithm} Based on~\eqref{eq:RCWD_GMR}, it may appear that finding the reduced mixture involves two optimizations: 1) computing ${\mathcal{T}}_{c}^\lambda(p_{G}, p_{\tilde G})$ for each pair of $p_{G}$ and $p_{\tilde G}$, and 2) minimizing the CTD to find the optimal $\tilde G$. We show in this section that the optimization can be simplified by removing a redundant constraint on the transportation plan. At a high level, the optimization in ${\mathcal{T}}_{c}^\lambda(p_{G}, p_{\tilde G})$ involves searching for transportation plans ${{\boldsymbol\pi}}$ under two marginal constraints specified by $\mbox{\bf w}$ and $\tilde \mbox{\bf w}$. While the constraint on $\mbox{\bf w}$ is strict, $\tilde \mbox{\bf w}$ is a moving constraint.
Therefore, instead of searching for ${{\boldsymbol\pi}}$ satisfying the constraint $\tilde \mbox{\bf w}$, we move $\tilde \mbox{\bf w}$ to meet ${{\boldsymbol\pi}}$. This makes the marginal constraint $\tilde \mbox{\bf w}$ on ${{\boldsymbol\pi}}$ redundant. By removing this constraint, we obtain a closed-form transportation plan, which simplifies the optimization. Let $\Pi(\mbox{\bf w},\cdot) = \{{\boldsymbol\pi} \in {\mathbb{R}}_{+}^{N \times M}: {\boldsymbol\pi}\mathbbm{1}_{M} = {\mathbf{w}}\}$ be the set of transportation plans that satisfy only the first marginal constraint. Let us define two functions of $\tilde G$, with $G$ hidden in the background: \begin{align} \label{Jcal} {\mathcal{J}}_{c}^{\lambda}(\tilde G) &= \inf_{{\boldsymbol\pi} \in \Pi( \mbox{\bf w}, \cdot)}\{ \sum_{nm} {\pi}_{nm} c(\phi_n, \tilde \phi_m) - \lambda {\mathcal{H}}({\boldsymbol\pi}) \}, \\ {{\boldsymbol\pi}}^{\lambda}(\tilde G) \label{bpi-star} & = \arginf_{{\boldsymbol\pi} \in \Pi( \mbox{\bf w}, \cdot)}\{ \sum_{nm} {\pi}_{nm} c(\phi_n, \tilde \phi_m) - \lambda {\mathcal{H}}({\boldsymbol\pi})\}. \end{align} Both functions depend on $\tilde G$ through its subpopulations $\tilde{\phi}_m$ but are free of its mixing weights $\tilde \mbox{\bf w}$. The optimizations in~\eqref{Jcal} and~\eqref{bpi-star} only involve the linear constraint in terms of $\mbox{\bf w}$. Hence, for a given cost function $c(\cdot, \cdot)$, the optimal transportation plan ${\boldsymbol\pi}^{\lambda}(\tilde G)$ has the analytical form \begin{equation} \label{eq:bpi} \pi_{nm}^{\lambda}(\tilde G) = w_n \frac{\exp(-c(\phi_n, \tilde \phi_m)/\lambda)}{\sum_{m'}\exp(-c(\phi_n, \tilde \phi_{m'})/\lambda)}. \end{equation} Under the special case where $\lambda=0$, it can be easily shown that $\pi_{nm}^{0}(\tilde G) = \lim_{\lambda\rightarrow0} \pi_{nm}^{\lambda}(\tilde G)$. We therefore denote the optimal transportation plan by $\pi_{nm}^{\lambda}$ for ease of notation.
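The closed form~\eqref{eq:bpi} is a row-wise softmax of $-c(\phi_n,\tilde\phi_m)/\lambda$ scaled by $w_n$; a sketch covering both $\lambda>0$ and the $\lambda=0$ limit (hard assignment, ignoring ties; names are ours):

```python
import numpy as np

def transport_plan(w, C, lam):
    """pi[n, m] = w_n * softmax_m(-C[n, :] / lam): the optimal plan
    once the second marginal constraint is removed."""
    if lam == 0:  # limit lambda -> 0: all of w_n goes to the cheapest m
        pi = np.zeros_like(C)
        pi[np.arange(len(w)), C.argmin(axis=1)] = w
        return pi
    logits = -C / lam
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    P = np.exp(logits)
    return w[:, None] * P / P.sum(axis=1, keepdims=True)
```

Each row sums to $w_n$ by construction, so the first marginal constraint is always satisfied and no iterative solver is needed.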
When $\lambda=0$, the numerical algorithm reduces to the special case described in~\cite{zhang2020distributed}. The following theorem states the simplified optimization problem. \begin{theorem}[Simplified Optimization] \label{thm:ww_averaging_equivalent_obj} Let $G$, ${\mathcal{T}}_{c}^{\lambda}(\cdot)$, ${\mathcal{J}}_{c}^{\lambda}(\cdot)$, ${\boldsymbol\pi}^{\lambda}(\cdot)$, and the other notation be as given earlier. We have \begin{equation} \label{eq:equiv_optimization} \inf\{{\mathcal{T}}_{c}^{\lambda}(p_{G}, p_{\tilde G}): \tilde G \in {\mathbb{G}}_{M}\} = \inf\{{\mathcal{J}}_{c}^{\lambda}(p_{\tilde G}): \tilde G \in {\mathbb{G}}_{M}\}. \end{equation} The reduced mixture is hence given by \begin{equation} \label{eq:reduced_mixture_subpop} p_{\tilde G} = \arginf\{{\mathcal{J}}_{c}^{\lambda}(p_{\tilde G}): \tilde G \in {\mathbb{G}}_{M} \} \end{equation} and the mixing weights are given by \begin{equation} \label{eq:reduced_mixture_weight} \tilde w_m = \sum_{n} \pi_{nm}^{\lambda}(p_{\tilde G}). \end{equation} \end{theorem} Based on this theorem, the optimization reduces to searching for the $M$ subpopulations $\{\tilde \phi_{m},~m=1,2,\ldots,M\}$. The mixing proportions are then determined by~\eqref{eq:reduced_mixture_weight}. An iterative algorithm following the well-known majorization--minimization (MM) scheme~\cite{hunter2004tutorial} becomes available; a brief overview of MM is provided here. The MM algorithm starts at an initial point $x_0$; at the $t$th iteration, it minimizes a function that majorizes the objective function at $x_t$. \begin{definition}[Majorization Function] A function $h(x|x_0)$ majorizes $g(x)$ at $x_0$ if $h(x|x_0) \geq g(x)$, with equality holding when $x = x_0$. \end{definition} This iterative procedure ensures the objective function is non-increasing after each iteration.
The key to the MM algorithm is to find a majorizing function that is usually convex and easy to minimize. For the optimization problem in~\eqref{eq:reduced_mixture_subpop}, let $\tilde G^{(t)}$ be the mixing distribution after $t$ MM iterations. Define a majorization function of ${\mathcal{J}}_{c}^{\lambda}$ at $\tilde G^{(t)}$ to be \begin{equation} \label{eq:majorization_function} {\mathcal{K}}_{c}^{\lambda}(p_{\tilde G}|p_{\tilde G^{(t)}}) = \sum_{n,m} \pi_{nm}^{\lambda}(p_{\tilde G^{(t)}}) c(\phi_n, \tilde \phi_m) \end{equation} where $\pi_{nm}^{\lambda}(p_{\tilde G^{(t)}})$ is computed according to~\eqref{eq:bpi}. The subpopulations $\tilde \phi_m$ are separated in the majorization function~\eqref{eq:majorization_function}. This allows us to update the subpopulation parameters, one $\tilde \phi_m$ at a time and possibly in parallel, as the solutions to \begin{equation} \label{eq:support_update} \tilde \phi_m^{(t+1)} = \arginf_{\phi\in {\mathcal{F}}}\{\sum_{n} \pi_{nm}^{\lambda}(p_{\tilde G^{(t)}})c(\phi_n, \phi)\}. \end{equation} The mixing proportions of $\tilde G^{(t+1)}$ are updated via \begin{equation} \label{eq:weight_update} \tilde w_m^{(t+1)} = \sum_n \pi_{nm}^{\lambda}(p_{\tilde G^{(t)}}). \end{equation} The MM algorithm then iterates between the majorization step~\eqref{eq:majorization_function} and the minimization steps~\eqref{eq:support_update} and~\eqref{eq:weight_update} until the change in the CTD is below some threshold. The algorithm is summarized in Algorithm~\ref{alg:mm_reduction}. The solution to~\eqref{eq:support_update} is usually called a barycenter and is defined as follows. \begin{definition}[Barycenter] \label{def:barycenter} Let $({\mathcal{F}}, \rho)$ be the space of Gaussian densities endowed with the divergence $\rho(\cdot, \cdot)$. Given positive constants $\lambda_l$ for $l\in [L]$, the (weighted) barycenter of $\phi_1, \ldots, \phi_{L} \in {\mathcal{F}}$ is a minimum point of $\sum_{l=1}^L \lambda_l \rho (\phi_l, \phi)$.
\end{definition} \begin{algorithm}[htp] \begin{algorithmic} \State {\bfseries Initialization:} $\tilde \phi_{m}$, $m\in [M]$ \Repeat \For {$m\in[M]$} \State \underline{\emph{Assignment step:}} $\pi_{nm}^{\lambda}=\frac{w_n\exp(-c(\phi_n, \tilde \phi_m)/\lambda)}{\sum_{m'}\exp(-c(\phi_n, \tilde \phi_{m'})/\lambda)}$ \State \underline{\emph{Update step:}} \State Let $\tilde \phi_{m} =\argmin_{\phi} \sum_{n=1}^{N} \pi_{nm}^{\lambda} c(\phi_n, \phi)$ \State Let $\tilde w_{m} = \sum_{n} \pi_{nm}^{\lambda}$ \EndFor \Until $\sum_{n,m}\pi_{nm}^{\lambda}c(\phi_n, \tilde \phi_m)-\lambda {\mathcal{H}}({\boldsymbol\pi}^{\lambda})$ converges \end{algorithmic} \caption{MM algorithm for GMR with CTD} \label{alg:mm_reduction} \end{algorithm} The convergence of the numerical algorithm can be found in~\cite[Theorem 7]{zhang2020distributed}. \subsubsection{Computational Complexity Analysis} The MM algorithm is iterative and we analyze the computational cost of each iteration. For a given cost function, the cost function needs to be evaluated $O(NM)$ times. For most of the cost functions we consider in this paper, evaluating the cost function once costs $O(d^3)$. Therefore, the total cost of evaluating the cost matrix is $O(NMd^3)$. The cost of computing the transportation plan is $O(NM)$. The computationally most expensive step is to find the $M$ barycenters, whose cost depends on the specific cost function. For example, when the cost function is the KL divergence, then as shown in Lemma~\ref{lemma:Gaussian_KL_barycenter}, the barycenter has a closed form and the cost of computing one barycenter is $O(d^2)$; the total cost of finding the barycenters is then $O(Md^2)$. Hence, when the cost function is the KL divergence, the total cost of each iteration is $O(NMd^3)$.
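As a concrete instance of Algorithm~\ref{alg:mm_reduction}, the following sketch takes $c = D_{\text{KL}}$ and $\lambda=0$; the closed-form Gaussian KL divergence and the moment matching barycenter are standard, while the initialization scheme and all names are our own illustrative choices:

```python
import numpy as np

def kl_gauss(mu0, S0, mu1, S1):
    """Closed-form KL divergence between d-dimensional Gaussians."""
    d = len(mu0)
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def reduce_gmm_kl(w, mus, Sigmas, M, n_iter=50):
    """MM iterations with c = KL and lambda = 0 (hard clustering case)."""
    N = len(w)
    idx = np.linspace(0, N - 1, M).astype(int)   # crude initialization
    t_mus, t_Sigmas = mus[idx].copy(), Sigmas[idx].copy()
    t_w = np.zeros(M)
    for _ in range(n_iter):
        # assignment step: nearest reduced component in KL divergence
        C = np.array([[kl_gauss(mus[n], Sigmas[n], t_mus[m], t_Sigmas[m])
                       for m in range(M)] for n in range(N)])
        labels = C.argmin(axis=1)
        # update step: KL barycenter = moment matching (Lemma 1)
        for m in range(M):
            members = labels == m
            if not members.any():
                continue
            wm = w[members]
            t_w[m] = wm.sum()
            t_mus[m] = (wm[:, None] * mus[members]).sum(axis=0) / t_w[m]
            diffs = mus[members] - t_mus[m]
            t_Sigmas[m] = sum(a * (S + np.outer(dd, dd)) for a, S, dd
                              in zip(wm, Sigmas[members], diffs)) / t_w[m]
    return t_w, t_mus, t_Sigmas
```

On a toy mixture with two well-separated pairs of components, the iterations settle into the expected two clusters within a few passes.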
\subsection{Connection with Existing Algorithms} \label{sec:connection} We claim our proposed GMR approach is a unified framework that connects the optimization based and the clustering based approaches in the literature; we establish these connections in this section. \subsubsection{Connection with Clustering Based Algorithm} Despite the computational efficiency of the clustering based algorithms,~\emph{it is unclear if these algorithms always converge or attain some optimal targets when they do.} We show existing clustering based algorithms are special cases of the MM algorithm obtained by choosing specific cost functions in the CTD. The connection helps explain the clustering based algorithms from the following aspects: \begin{enumerate} \item \textbf{Objective}: The ultimate goal of a clustering based algorithm is to minimize the CTD from the original mixture to the reduced mixture for some pre-specified cost function. \item \textbf{Convergence}: Since the clustering based algorithms are special cases of the proposed MM algorithm, the convergence of the MM algorithm implies the convergence of the clustering based algorithms. \item \textbf{Consistency}: The assignment and update steps need to be consistent with the cost function. The update step in our proposed algorithm corresponds to finding the barycenter of the components in the same cluster with respect to the cost function $c(\cdot, \cdot)$. If one assigns the components to clusters based on some divergence but nonetheless finds the cluster centers by moment matching, then the algorithm may not converge. \end{enumerate} As an example, we show that when the cost function is chosen to be \begin{equation*} c(\phi_n, \tilde \phi_m) = -\log\tilde w_m - I E_{nm} \end{equation*} where $E_{nm}$ is defined in~\eqref{eq:soft_clustering_similarity}, and $\lambda=1$, our proposed algorithm reduces to the soft clustering based algorithm in~\cite{yu2018density}.
Then according to the assignment step in Algorithm~\ref{alg:mm_reduction}, the transportation plan becomes \[ \begin{split} \pi_{nm} &= \frac{w_n\exp(-c(\phi_n, \tilde \phi_m))}{\sum_{m'}\exp(-c(\phi_n, \tilde \phi_{m'}))}=\frac{w_n\tilde w_{m}\exp(IE_{nm})}{\sum_{m'}\tilde w_{m'}\exp(IE_{nm'})}\\ &= w_nz_{nm}. \end{split} \] The mixing weights then become \begin{equation} \label{eq:mm_weighted_kl_mixing_weights} \tilde w_m = \sum_{n=1}^{N}\pi_{nm} = \sum_{n=1}^{N} w_nz_{nm} \end{equation} and the $m$th subpopulation is updated via \begin{equation*} \begin{split} \tilde \phi_{m} &= \arginf_{\phi} \sum_{n=1}^{N}\pi_{nm}c(\phi_n, \phi)\\ &= \arginf_{\phi} \sum_{n=1}^{N}\pi_{nm} \{-\log\tilde w_m - I \int \phi_n(x) \log \phi(x) dx\}\\ &= \arginf_{\phi} \sum_{n=1}^{N}\pi_{nm} \int \log\left\{\frac{\phi_n(x)}{\phi(x)}\right\} \phi_n(x)dx \\ &= \arginf_{\phi} \sum_{n=1}^{N}\pi_{nm} D_{\text{KL}}(\phi_n\| \phi), \end{split} \end{equation*} where the third equality holds up to additive constants (the term $-\log\tilde w_m$ and the entropy term $\int \phi_n(x)\log\phi_n(x)dx$) and the positive factor $I$, none of which affects the minimizer. Therefore, the $m$th component of the reduced mixture is the barycenter of the original Gaussian components under the KL divergence with weights $\pi_{1m},\pi_{2m},\ldots, \pi_{Nm}$. We need the following result: the barycenter of Gaussians with respect to the KL divergence coincides with moment matching. The proof is deferred to Appendix~\ref{sec:kl_barycenter}. \begin{lemma}[Gaussian Barycenter under the KL Divergence is the Same as Moment Matching] \label{lemma:Gaussian_KL_barycenter} The minimizer of $L(\phi) = \sum_{n=1}^N \lambda_n D_{\text{KL}}(\phi_n\|\phi)$ when $\phi$ is constrained to be a Gaussian density has mean $\bar{\mu}= \{\sum_{n} \lambda_n\}^{-1}\sum_{n=1}^N\lambda_n\mu_n$ and covariance \begin{equation*} \bar{\Sigma} = \{\sum_{n} \lambda_n\}^{-1}\sum_{n=1}^N\lambda_n (\Sigma_n + (\mu_n-\bar{\mu})(\mu_n-\bar{\mu})^{\tau}).
\end{equation*} \end{lemma} By substituting $\lambda_n$ with $\pi_{nm}$ above, along with~\eqref{eq:mm_weighted_kl_mixing_weights}, the updated subpopulation parameters based on our approach become \begin{equation} \label{eq:mm_weighted_kl_mean} \tilde\mu_{m} = \tilde w_m^{-1} \sum_{n=1}^Nw_n z_{nm}\mu_n \end{equation} and \begin{equation} \label{eq:mm_weighted_kl_cov} \tilde\Sigma_{m} = \tilde w_m^{-1} \sum_{n=1}^Nw_n z_{nm}\{\Sigma_n + (\mu_n-\tilde\mu_m)(\mu_n-\tilde\mu_m)^{\tau}\}. \end{equation} It can be seen that~\eqref{eq:mm_weighted_kl_mixing_weights}-\eqref{eq:mm_weighted_kl_cov} match the cluster centers given in the soft clustering algorithm in Algorithm~\ref{alg:soft_clustering}. In summary, by choosing a specific cost function, our proposed algorithm reduces to the soft clustering based algorithm in the literature. Similarly, the connection can be established for the hard clustering based algorithm in~\cite{schieferdecker2009gaussian}; see details in Appendix~\ref{sec:connection_hard_clustering}. A summary of the existing clustering based algorithms and their corresponding cost functions is given in Table~\ref{tab:cost_fct_CTD}. \begin{table}[htpb] \centering \scriptsize \caption{The relationship between the proposed GMR approach and existing clustering based GMR approaches according to the cost function $c(\cdot,\cdot)$ and regularization strength $\lambda$. Empty entries indicate these approaches are not explored.} \label{tab:cost_fct_CTD} \begin{tabular}{cccc} \toprule $c(\phi_n,\tilde \phi_m)$ & $D_{\text{KL}}(\phi_n\|\tilde \phi_m)$ & $-\log \tilde w_m -IE_{nm}$ & $W_2(\phi_n,\tilde \phi_m)$ \\ \midrule $\lambda=0$ & \cite{schieferdecker2009gaussian}& --&\cite{assa2018wasserstein}\\ \midrule $\lambda=1$ & -- & \cite{yu2018density} & --\\ \bottomrule \end{tabular} \end{table} \begin{remark}[Technical error in~\cite{yu2018density}] As a side note, we point out an error in~\cite{yu2018density}.
Let $X_1,X_2,\ldots,X_{I}\overset{\text{i.i.d.}}{\sim} p_{G}(x)$. Then~\cite{yu2018density} performs GMR by finding the maximizer of \begin{align} \ell_{I}(\tilde G) &= {\mathbb{E}}_{X\sim\prod_{i=1}^{I} p_{G}(x_i)} \left\{\sum_{i=1}^{I}\log p_{\tilde G}(X_i)\right\} \label{eq:yu_obj}\\ &= \sum_{n=1}^{N} w_n {\mathbb{E}}_{X\sim \prod_{i=1}^{I}\phi_n(x_i)} \left\{\sum_{i=1}^{I}\log p_{\tilde G}(X_i)\right\} \label{eq:yu_wrong_eq} \end{align} However,~\eqref{eq:yu_wrong_eq}, which is the second equality in (3) of~\cite{yu2018density}, does not hold. Let ${\bm{x}} = (x_1,x_2,\ldots, x_I)$; then \[ \int g({\bm{x}}) \prod_{i=1}^I \{\sum_{n=1}^N w_n \phi_n(x_i)\}d{\bm{x}} \neq \sum_{n=1}^N w_n \int g({\bm{x}}) \prod_{i=1}^I \phi_n(x_i)d{\bm{x}} \] because the summation and the product cannot be interchanged. This mistake invalidates their variational inference derivation. Moreover, they fail to observe that \begin{equation} \label{eq:yu_equivalent} \ell_{I}(\tilde G) = I {\mathbb{E}}_{p_{G}(x)}\{\log p_{\tilde G}(X)\} \end{equation} in~\eqref{eq:yu_obj}. For a fixed hyper-parameter $I$, the maximizer of the RHS is the same as the maximizer of the LHS. Therefore, from the maximum likelihood point of view, the parameter $I$ in~\cite{yu2018density} is redundant. Due to these errors, their interpretation of the algorithm is no longer valid; the algorithm itself remains useful, and this paper gives its correct interpretation from the minimum CTD point of view. \end{remark} \subsubsection{Connection with Optimization Based Algorithms} In our optimization based formulation, the objective function is $\mathcal{T}_{c}^{\lambda}$. One may instead directly minimize $c(\cdot,\cdot)$ between the two mixtures. This is rarely done in practice, however, since such divergences usually do not have closed forms between mixtures and the corresponding optimization problems are computationally expensive. Is there any connection between $c(\cdot,\cdot)$ and ${\mathcal{T}}_{c}$?
Under the special case where $c(\cdot,\cdot) = D_{\text{ISE}}(\cdot, \cdot)$ or $c(\cdot,\cdot) = D_{\mathrm{KL}}(\cdot\|\cdot)$ and $M=1$, we have \begin{equation} \label{eq:barycenter_equiv} \argmin_{\tilde \phi} \sum_{n=1}^{N} w_n c(\phi_n, \tilde \phi) = \argmin_{\tilde \phi} c\left(\sum_{n=1}^{N}w_n\phi_n, \tilde \phi\right). \end{equation} The LHS of~\eqref{eq:barycenter_equiv} is the composite transportation divergence between the original mixture and the reduced mixture, and the RHS of~\eqref{eq:barycenter_equiv} is the divergence between the two mixtures. The equality shows that when the mixture is reduced to a single Gaussian, the two approaches coincide. The proof is deferred to Appendix~\ref{app:CTD_equiv}. In the general case, we show that when $c(\cdot,\cdot)$ satisfies the ``convexity'' property defined below, ${\mathcal{T}}_{c}$ is an upper bound of $c(\cdot,\cdot)$. Our formulation therefore minimizes an upper bound. \begin{theorem} \label{thm:upper_bound} Let $c(\cdot,\cdot):{\mathcal{F}}\times {\mathcal{F}} \to {\mathbb{R}}_{+}$ be a cost function that satisfies the ``convexity'' property: for any $\alpha\in(0,1)$, we have $c(\alpha f_1 + (1-\alpha) f_2, \alpha \phi_1 + (1-\alpha) \phi_2) \leq \alpha c(f_1, \phi_1) + (1-\alpha) c(f_2, \phi_2)$. Then for all $\tilde G$, we have $$c(p_{G}, p_{\tilde G})\leq {\mathcal{T}}_{c}^{0}(p_{G}, p_{\tilde G}).$$ \end{theorem} The proof of the theorem is deferred to Appendix~\ref{app:CTD_is_upper_bound}. The convexity of the cost function holds for most divergences, such as the $f$-divergences, the ISE (see Appendix~\ref{app:ISE_convexity}), and the squared $2$-Wasserstein distance~\cite[Chapter 7]{villani2003topics}. We show in Appendix~\ref{app:CS_convexity_counter_example} that convexity does not hold for the Cauchy-Schwarz divergence discussed in the next section. \begin{remark} The same conclusion has been shown in the literature for some special cost functions from different perspectives.
Examples include~\cite[Section 4.2]{delon2020wasserstein} when the cost function $c(\phi_n,\tilde\phi_m) = W_2^2(\phi_n, \tilde \phi_m)$ is the squared $2$-Wasserstein distance between two Gaussians, and~\cite[Lemma 1]{nguyen2013convergence} when the cost function is an $f$-divergence. \end{remark} \subsection{Generalization} \label{sec:generalization} The proposed Algorithm~\ref{alg:mm_reduction} can be generalized by selecting a different cost function $c(\cdot, \cdot)$. In this section, we propose several possible cost functions. Due to space limits, we discuss the generalization to mixtures where $\mathcal{F}$ is an exponential family in Appendix~\ref{app:generalization_exponential_family}. A possible choice for the cost function is the Cauchy-Schwarz (CS) divergence. \begin{definition}[Cauchy-Schwarz Divergence] The Cauchy-Schwarz divergence~\cite{jenssen2006cauchy} between two density functions $p(x)$ and $q(x)$ is defined as \begin{equation*} D_{\text{CS}}(p,q) = -\log \frac{\int p(x)q(x) dx}{\sqrt{\int p^2(x) dx \int q^2(x) dx}}. \end{equation*} \end{definition} The CS divergence between two Gaussians has the closed form \begin{equation} \label{eq:cauchy-schwartz-divergence-gaussian} \begin{split} &D_{\text{CS}}(\phi(\cdot|\mu_1,\Sigma_1), \phi(\cdot|\mu_2,\Sigma_2))\\ =& -\log \phi(\mu_1|\mu_2, \Sigma_1+ \Sigma_2) - \frac{1}{4}\log \{|4\pi \Sigma_1| |4\pi \Sigma_2|\}. \end{split} \end{equation} Algorithm~\ref{alg:mm_reduction} also requires the barycenter of Gaussians under this divergence. We use an iterative algorithm to find the stationary point as follows.
\begin{example}[Gaussian Barycenter under the Cauchy-Schwarz Divergence] \label{eg:CS_barycenter} With the same notation as in Lemma~\ref{lemma:Gaussian_KL_barycenter}, $\sum_{n=1}^{N} \lambda_n D_{\text{CS}}(\phi_n, \phi)$ is minimized uniquely in the space of Gaussians with mean \begin{equation*} \bar\mu=\left\{\sum_{n} \lambda_n (\Sigma_n+\bar\Sigma)^{-1}\right\}^{-1}\sum_{n}\lambda_n (\Sigma_n+\bar\Sigma)^{-1}\mu_n \end{equation*} and covariance $\bar\Sigma$ solving \[ \bar{\Sigma}^{-1} = 2 \sum_{n}\tilde \lambda_n (\Sigma_n+\bar\Sigma)^{-1}\{ I -(\mu_n-\bar\mu)(\mu_n-\bar\mu)^{\tau}(\Sigma_n+\bar\Sigma)^{-1}\} \] where $\tilde \lambda_n=\lambda_n/\sum_n \lambda_n$. The proof is deferred to Appendix~\ref{sec:CS_barycenter_proof}. \end{example} We also consider using the ISE as the cost function, that is, $c(\phi_n,\tilde \phi_m)= D_{\text{ISE}}(\phi_n,\tilde \phi_m)$; the barycenter under this divergence is found by numerical optimization. \section{Experiments} \label{sec:exp} In this section, we conduct numerical experiments to compare the performance of various reduction approaches. The reduction approaches that we consider include 1) the optimization based approach that minimizes $D_{\text{ISE}}(p_{G}, p_{\tilde G})$, which we refer to as ISE; and 2) our proposed approach with various cost functions, referred to as CTD-XX, where XX names the cost function. For example, if the cost function is the KL divergence, we refer to the approach as CTD-KL. The default value of $\lambda$ is $0$ unless specified otherwise. Unless otherwise specified, all the reduction algorithms are initialized by $10$ initial values obtained by fitting a Gaussian mixture of $M$ components by the MLE using random samples generated from the original mixture. All experiments are implemented in Python 3.7.4 on a cluster with Intel E5 CPUs at 2.1GHz.
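To make the iteration concrete, the following is a minimal sketch of the CTD-KL variant with hard assignments ($\lambda=0$): an assignment step that sends each original component to its closest reduced component under the KL cost, followed by the moment-matching barycenter updates of~\eqref{eq:mm_weighted_kl_mixing_weights}--\eqref{eq:mm_weighted_kl_cov}. The deterministic initialization from the first $M$ components and the fixed iteration count are simplifications for illustration; the experiments use multiple random initializations.

```python
import numpy as np

def kl_gauss(mu1, S1, mu2, S2):
    # closed-form KL divergence between two d-dimensional Gaussians
    d = len(mu1)
    S2inv = np.linalg.inv(S2)
    diff = mu2 - mu1
    return 0.5 * (np.log(np.linalg.det(S2) / np.linalg.det(S1))
                  + np.trace(S2inv @ S1) + diff @ S2inv @ diff - d)

def reduce_ctd_kl(w, mus, Ss, M, iters=50):
    # simplistic initialization from the first M components (illustrative only)
    t_mus, t_Ss = mus[:M].copy(), Ss[:M].copy()
    t_w = np.zeros(M)
    N = len(w)
    for _ in range(iters):
        # assignment step: hard-assign each original component to the
        # closest reduced component under the KL cost (lambda = 0)
        z = np.array([[kl_gauss(mus[n], Ss[n], t_mus[m], t_Ss[m])
                       for m in range(M)] for n in range(N)]).argmin(axis=1)
        # barycenter step: moment matching within each cluster
        for m in range(M):
            sel = (z == m)
            if not sel.any():
                continue
            t_w[m] = w[sel].sum()
            t_mus[m] = (w[sel] @ mus[sel]) / t_w[m]
            diffs = mus[sel] - t_mus[m]
            t_Ss[m] = sum(wn * (Sn + np.outer(dn, dn))
                          for wn, Sn, dn in zip(w[sel], Ss[sel], diffs)) / t_w[m]
    return t_w, t_mus, t_Ss
```

On two well-separated groups of components, the sketch recovers one moment-matched Gaussian per group, with the reduced mixing weights summing to one.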
\subsection{Simulated Dataset} In this experiment, we use simulated data and reduce mixtures of order $N=25$. The parameter values of the original mixture are generated as follows. We first pick $5$ locations uniformly at random within $[-10,10]\times[-10,10]$. For the $i$th location, we generate $n_i$ components around it, where $(n_1,n_2,\ldots,n_5)\sim \text{Multinom}(N, 0.2\mathbbm{1}_{5})$. The mean parameters of the components are generated uniformly at random within a circle of radius $2.5$ around each location. An example of the randomly picked locations (solid red dots) and the $25$ component centers (black dots with different shapes) is given in Figure~\ref{fig:simulation} (a). We then generate the covariance matrix of each component by drawing the two diagonal elements $\Sigma_{11}$, $\Sigma_{22}$ from $\text{Gamma}(8,4)$ and setting the off-diagonal element to $\sqrt{\Sigma_{11}\Sigma_{22}}\cos(\pi\theta)$, where $\theta\sim U(0.2,0.8)$. We use equal mixing weights for the components. The heatmap of the density function of the mixture with centers given in Figure~\ref{fig:simulation} (a) is given in Figure~\ref{fig:simulation} (b). We randomly create $100$ mixtures as described above.
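The generation protocol above can be sketched as follows. Two unstated details are filled in by assumption: component means are drawn uniformly over the disc, and $\text{Gamma}(8,4)$ is read as shape $8$ and rate $4$ (scale $1/4$). Since $|\cos(\pi\theta)| \le \cos(0.2\pi) < 1$ for $\theta\in(0.2,0.8)$, every generated covariance matrix is positive definite.

```python
import numpy as np

def random_mixture(N=25, n_clusters=5, seed=1):
    """Sketch of the simulated-mixture recipe; disc sampling and the
    Gamma parameterization (shape 8, scale 1/4) are our assumptions."""
    rng = np.random.default_rng(seed)
    locs = rng.uniform(-10, 10, size=(n_clusters, 2))          # cluster centers
    counts = rng.multinomial(N, np.full(n_clusters, 1 / n_clusters))
    mus, covs = [], []
    for loc, n_i in zip(locs, counts):
        for _ in range(n_i):
            # component mean uniform in a radius-2.5 disc around the center
            r, a = 2.5 * np.sqrt(rng.uniform()), rng.uniform(0, 2 * np.pi)
            mus.append(loc + r * np.array([np.cos(a), np.sin(a)]))
            s11, s22 = rng.gamma(shape=8, scale=0.25, size=2)  # diagonal entries
            s12 = np.sqrt(s11 * s22) * np.cos(np.pi * rng.uniform(0.2, 0.8))
            covs.append(np.array([[s11, s12], [s12, s22]]))
    w = np.full(N, 1 / N)                                      # equal weights
    return w, np.array(mus), np.array(covs)
```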
\begin{figure}[!htpb] \centering \subfloat[Component centers]{\includegraphics[width=0.44\columnwidth, height=0.44\columnwidth]{figure/simulated_data/locations.png}} \subfloat[Density heatmap]{\includegraphics[width=0.45\columnwidth, height=0.45\columnwidth]{figure/simulated_data/generated_pop_2.png}}\\ \subfloat[ISE]{\includegraphics[width=0.45\columnwidth]{figure/simulated_data/simulation_ISE.png}} \subfloat[Time]{\includegraphics[width=0.45\columnwidth]{figure/simulated_data/simulation_time.png}} \caption{(a) The component centers of a randomly generated $25$-component mixture, (b) the heatmap of its density function, (c) the ISE between the reduced and original mixtures, and (d) the computational time.} \label{fig:simulation} \end{figure} Based on our procedure for generating the components, it is natural to reduce the original mixture to $M=5$; we also consider reducing the order to $M=10$ and $M=15$. The reduction results for the original mixture in Figure~\ref{fig:simulation} (a) are given in Figure~\ref{fig:simulation_eg} for different reduction approaches and values of $M$.
\begin{figure}[htpb] \centering \setlength{\tabcolsep}{1pt} \begin{tabular}{cccc} \toprule &M=5&M=10&M=15\\ \midrule \rotatebox{90}{ISE}&\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_ISE_order_5.png} &\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_ISE_order_10.png} &\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_ISE_order_15.png}\\ \rotatebox{90}{CTD-ISE}&\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_CTD-ISE_order_5.png} &\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_CTD-ISE_order_10.png} &\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_CTD-ISE_order_15.png}\\ \rotatebox{90}{CTD-CS}&\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_CTD-CS_order_5.png} &\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_CTD-CS_order_10.png} &\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_CTD-CS_order_15.png}\\ \rotatebox{90}{CTD-KL}&\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_CTD-KL_order_5.png} &\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_CTD-KL_order_10.png} &\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_CTD-KL_order_15.png}\\ \rotatebox{90}{CTD-W2}&\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_CTD-W2_order_5.png} &\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_CTD-W2_order_10.png} &\includegraphics[width=0.3\columnwidth]{figure/simulated_data/reduced_CTD-W2_order_15.png}\\ \bottomrule \end{tabular} \caption{Heatmaps of the difference between the density functions of the reduced and original mixtures.} \label{fig:simulation_eg} \end{figure} We compute the ISE between the reduced and original mixtures over the $100$ repetitions and visualize the mean values and 95\% error bars in Figure~\ref{fig:simulation} (c).
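The ISE used as the evaluation metric has a closed form for Gaussian mixtures, since $\int \phi(x|\mu_1,\Sigma_1)\phi(x|\mu_2,\Sigma_2)dx = \phi(\mu_1|\mu_2,\Sigma_1+\Sigma_2)$, so $\int(p_G - p_{\tilde G})^2$ expands into sums of pairwise Gaussian overlap integrals. A sketch of this computation:

```python
import numpy as np

def gauss_overlap(mu1, S1, mu2, S2):
    # closed-form product integral of two Gaussian densities:
    # integral of phi(x|mu1,S1) phi(x|mu2,S2) dx = phi(mu1 | mu2, S1 + S2)
    S, diff, d = S1 + S2, mu1 - mu2, len(mu1)
    return np.exp(-0.5 * diff @ np.linalg.solve(S, diff)) / \
        np.sqrt((2 * np.pi) ** d * np.linalg.det(S))

def ise(w1, mus1, Ss1, w2, mus2, Ss2):
    """Integrated squared error between two Gaussian mixtures:
    int (p - q)^2 = int p^2 - 2 int pq + int q^2, each term a weighted
    sum of pairwise component overlaps."""
    def cross(wa, ma, Sa, wb, mb, Sb):
        return sum(wa[i] * wb[j] * gauss_overlap(ma[i], Sa[i], mb[j], Sb[j])
                   for i in range(len(wa)) for j in range(len(wb)))
    return (cross(w1, mus1, Ss1, w1, mus1, Ss1)
            - 2 * cross(w1, mus1, Ss1, w2, mus2, Ss2)
            + cross(w2, mus2, Ss2, w2, mus2, Ss2))
```

The closed form agrees with direct numerical integration of $\int(p-q)^2$ on a fine grid in the univariate case.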
The computational time for each reduction method is given in Figure~\ref{fig:simulation} (d). By construction, the ISE approach should attain the smallest ISE to the original mixture, and we use it as a baseline for comparison; it has the longest computational time but the smallest ISE. It can be seen from the plot that, as $M$ increases, the ISE decreases and the computational time increases for every reduction approach. For the minimum CTD based reduction approaches, the performance differs across cost functions, but the relative ranking is consistent regardless of the value of $M$. In terms of ISE, the preference for the cost functions from high to low is ISE, CS, KL, and W2. In terms of computational time, the preference from high to low is KL, CS, W2, and ISE. When the cost function is the ISE between two Gaussians and $M=5$, the reduction result is almost as good as the minimum ISE reduction in terms of ISE, at only about $1/10$ of its computational time. The CTD-KL approach takes only about $1/10000$ of the time of the minimum ISE approach. The reduced mixtures in Figure~\ref{fig:simulation_eg} show that even though CTD-KL and CTD-W2 are not as good as CTD-ISE, they still give good reduction results, which demonstrates the advantage of the minimum CTD based approaches for reduction. \subsection{Approximate Inference for Belief Propagation} \label{sec:BP} Gaussian mixture density functions are often used to approximate intractable density functions in statistical inference; examples include belief propagation in arbitrarily structured graphical models and tracking in hidden Markov models. In both problems, the order of the Gaussian mixtures increases exponentially due to recursion.
To address this issue, GMR is applied after each iteration to reduce the order of the mixture to a manageable size. In this section, we conduct experiments to compare the performance of different GMR approaches for belief propagation. \emph{Belief propagation} (BP) is an iterative algorithm used to compute the marginal distributions based on a probabilistic graphical model. A graph consists of a node set $\mathcal{V}$ and an undirected edge set $\mathcal{E}$ made of pairs of related nodes. A probabilistic graphical model associates each node with a random variable, say $X_i$, and assumes that the joint density function of the random vector $X=\{X_i:i\in\mathcal{V}\}$ can be factorized into \begin{equation*} p(x) \propto \prod_{(i,j) \in \mathcal{E}} \psi_{ij}(x_i, x_j)\prod_{i\in \mathcal{V}} \psi_{i}(x_i) \end{equation*} for some non-negative valued functions $\psi_{ij}(\cdot,\cdot)$ and $\psi_{i}(\cdot)$. We call $\psi_{ij}(\cdot,\cdot)$ and $\psi_{i}(\cdot)$ the local potential and the local evidence potential, respectively. Let the neighborhood of a node $i$ be denoted as $\Gamma(i) := \{j: (i,j)\in\mathcal{E}\}$. Let $m_{ji}^{(t-1)}(\cdot)$ be the message at the $(t-1)$th iteration of the BP algorithm; the message is then updated by \begin{equation*} \label{eq:message_update} m_{ji}^{(t)}(x_i) \propto \int \psi_{ij}(x_i,x_j) \psi_j(x_j)\prod_{k\in\Gamma(j)\backslash i} m_{kj}^{(t-1)}(x_j) dx_j. \end{equation*} The belief, which is the approximate marginal density function of the random variable associated with node $i$, is updated as \begin{equation*} q_{i}^{(t)}(x_i)\propto \psi_i(x_i)\prod_{j\in\Gamma(i)}m_{ji}^{(t)}(x_i). \end{equation*} The messages and beliefs are iteratively updated until convergence. For tree-structured graphs, the beliefs converge to the true marginals. Closed-form messages generally do not exist, with a few exceptions.
As in HMM tracking, GMMs are used to approximate the local potentials and local evidence potentials. In this section, we consider the graphical model in Figure~\ref{fig:BP_example}(a) following~\cite{yu2018density}. In this model, the local potential associated with the $(i,j)$th edge is given by $\psi_{ij}(x,y) = \phi(x|y,\phi_{ij}^{-1})$, where the precision values $\phi_{ij}$ are marked alongside the edges. The local evidence potential associated with the $i$th node is a two-component Gaussian mixture $\psi_{i}(x)=w_i\phi(x|\mu_{i1}, 1)+(1-w_i)\phi(x|\mu_{i2}, 1.5)$ with $w_i\sim U(0, 1)$, $\mu_{i1}\sim U(-4,0)$, and $\mu_{i2}\sim U(0, 4)$, all drawn independently. As the order of the message mixtures grows exponentially with the number of iterations, \emph{exact inference} becomes intractable after $4$ iterations. To overcome this difficulty, we use GMR to perform \emph{approximate inference}; that is, we reduce the order of each message before the next belief update to keep the order at a manageable size. In our experiment, we reduce the message mixture to $M=4$ when $N>4$.
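The order explosion can be seen by tracking component counts alone: since each edge potential is a single Gaussian, the order of $m_{ji}^{(t)}$ equals the order of $\psi_j$ (here $2$) times the product of the orders of the incoming messages. A count-only sketch on the graph of Figure~\ref{fig:BP_example}(a), under the assumption that all initial messages are single components, illustrates the growth and the effect of capping the message order by GMR:

```python
# undirected edges of the graphical model (Figure (a)) and their incidence
edges = [(1, 4), (1, 3), (2, 1), (3, 2), (4, 3)]
nbrs = {i: set() for i in range(1, 5)}
for i, j in edges:
    nbrs[i].add(j)
    nbrs[j].add(i)

def run(n_iter, cap=None):
    """Track the mixture order of every directed message; `cap` imitates
    reducing any message with more than `cap` components down to `cap`."""
    order = {(j, i): 1 for j in nbrs for i in nbrs[j]}  # initial single-component messages
    for _ in range(n_iter):
        new = {}
        for (j, i) in order:
            o = 2                             # order of the node potential psi_j
            for k in nbrs[j] - {i}:
                o *= order[(k, j)]            # incoming message orders multiply
            if cap is not None and o > cap:
                o = cap                       # the GMR step caps the order
            new[(j, i)] = o
        order = new
    return max(order.values())
```

Without the cap, the maximum message order grows as $2, 8, 64, \ldots$ over the iterations; with the cap, it stays at the target order.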
\begin{figure}[htpb] \centering \subfloat[Graphical model]{ \begin{tikzpicture}[auto, scale=0.5, node distance=3cm, transform shape, main node/.style={circle,draw,font=\sffamily\Large\bfseries}] \node[main node] (1) {$X_1$}; \node[main node] (2) [below left of=1] {$X_2$}; \node[main node] (3) [below right of=2] {$X_3$}; \node[main node] (4) [below right of=1] {$X_4$}; \path[every node/.style={font=\sffamily\small}] (1) edge node [left] {0.6} (4) edge node {0.4} (3) (2) edge node [right] {0.2} (1) (3) edge node [right] {0.01} (2) (4) edge node [left] {0.8} (3); \end{tikzpicture}} \hspace{10mm} \subfloat[Time]{\includegraphics[width=0.45\columnwidth]{figure/BP/BP_time.png}}\\ \subfloat[ISE]{\includegraphics[width=\columnwidth]{figure/BP/BP_ISE.png}} \caption{(a) The structure of the graphical model for belief propagation; (b) the computational time for the belief updates versus the number of iterations; and (c) the squared $L_2$ distance between the exact and approximate beliefs.} \label{fig:BP_example} \end{figure} We evaluate the performance of the GMR approaches by computing the ISE, averaged over nodes, between the exact and approximate beliefs. The comparison is computationally feasible only for the first $3$ iterations due to memory limits. Since no reduction is applied in the first iteration, we only show results for the 2nd and 3rd iterations. The results are averaged over $100$ trials. Figure~\ref{fig:BP_example} (c) gives, at each node, the distance between the belief from approximate inference and the true belief from exact inference. As the iterations proceed, the ISE at each node increases. It can also be seen from Figure~\ref{fig:BP_example} (c) that, in terms of this distance, approximate inference based on the ISE reduction is the best. Among the minimum CTD based reduction approaches, the one with the ISE cost function performs best.
In terms of computational time, the ISE approach, although closest to exact inference, does not save time. In the 3rd iteration, as the order of the message mixtures in exact inference grows exponentially, the CTD based reduction approaches save computational time while producing beliefs close to those from exact inference. \subsection{Hand Gesture Recognition} We apply GMR to static hand gesture recognition. In this task, a set of labeled images of hand gestures is given as the training set, and a classifier is trained on it to classify unseen images of the same set of hand gestures. \noindent \textbf{Dataset \& Pre-processing} We use the Jochen Triesch static hand posture database~\cite{triesch1996robust}, which is publicly available online. This dataset contains $128\times128$ gray-scale images of $10$ hand postures forming the alphabetic letters A, B, C, D, G, H, I, L, V, and Y for 24 persons with 3 different backgrounds. To remove additional noise caused by the background, in our experiment we use the same background-removed set of images as described in~\cite{kampa2011closed}. To reduce the classification error caused by the mis-alignment of the hands,~\cite{kampa2011closed} centers the hands by cropping. They manually crop each image into the smallest rectangle that contains only the hand and whose center is the center of the hand. After this step, all hands are centered in the image but have different sizes, owing to the differing hand sizes in the original images. To make the classifiers more invariant to hand size, they rescale the images into a square whose top-left and bottom-right pixels have coordinates $(0,0)$ and $(1,1)$, respectively. After these pre-processing steps, there are $168$ images in total, with around $16$--$20$ images for each hand posture.
\noindent \textbf{Gaussian Mixture \& Hand Gesture Recognition} In~\cite{kampa2011closed}, the pixel intensity is viewed as a function of the pixel location, which can be approximated by the density function of a Gaussian mixture up to a normalizing constant. They therefore fit a $10$-component GMM to the non-background pixels of each image, so each image can be represented by a GMM. An example of a pre-processed image and the heat-map of the density function of the corresponding fitted mixture model is given in Figure~\ref{fig:hand_gesture_fitted_gmm}. \begin{figure}[htpb] \centering \subfloat[Pre-processed image]{\includegraphics[height=0.35\linewidth, width=0.45\linewidth]{figure/hand_gestures/demo_c_original.png}} \subfloat[Heat-map of GMM]{\includegraphics[width=0.35\linewidth]{figure/hand_gestures/demo_c_gmm.png}} \caption{An example of (a) a pre-processed image of hand posture ``C''; (b) the heat-map of the fitted 10-component mixture on the pre-processed image in (a).} \label{fig:hand_gesture_fitted_gmm} \end{figure} We propose to classify a new image, which is also represented by a GMM, with a minimum divergence classifier. In~\cite{kampa2011closed}, a new image is classified by computing the CS divergence between the test image and all training images, where both are represented by GMMs; the test image is then classified by the nearest neighbor rule. For example, a test image is classified as gesture ``A'' if the training image closest to it shows hand gesture ``A''. We perform the classification slightly differently from~\cite{kampa2011closed}. We also use the minimum divergence classifier, but instead of computing the divergence from the test image to every single training image, we compute the divergence from the test image to a class ``prototype''. We summarize the training images within each class into a single representative and call it the class prototype.
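Given class prototypes, the minimum divergence classification step can be sketched as follows. The CS divergence between two Gaussian mixtures has a closed form built from the Gaussian product integral $\int \phi(x|\mu_1,\Sigma_1)\phi(x|\mu_2,\Sigma_2)dx = \phi(\mu_1|\mu_2,\Sigma_1+\Sigma_2)$; the toy prototypes in the usage below are illustrative stand-ins, not the fitted hand-gesture models.

```python
import numpy as np

def gauss_overlap(mu1, S1, mu2, S2):
    # integral of phi(x|mu1,S1) phi(x|mu2,S2) dx = phi(mu1 | mu2, S1 + S2)
    S, diff, d = S1 + S2, mu1 - mu2, len(mu1)
    return np.exp(-0.5 * diff @ np.linalg.solve(S, diff)) / \
        np.sqrt((2 * np.pi) ** d * np.linalg.det(S))

def cs_divergence(gmm_p, gmm_q):
    """Closed-form Cauchy-Schwarz divergence between two Gaussian mixtures,
    each given as a (weights, means, covariances) triple."""
    def inner(a, b):
        wa, ma, Sa = a
        wb, mb, Sb = b
        return sum(wa[i] * wb[j] * gauss_overlap(ma[i], Sa[i], mb[j], Sb[j])
                   for i in range(len(wa)) for j in range(len(wb)))
    return (-np.log(inner(gmm_p, gmm_q))
            + 0.5 * np.log(inner(gmm_p, gmm_p))
            + 0.5 * np.log(inner(gmm_q, gmm_q)))

def classify(test_gmm, prototypes):
    # nearest class prototype under the CS divergence
    return min(prototypes, key=lambda label: cs_divergence(test_gmm, prototypes[label]))
```

A test mixture is assigned the label of the prototype with the smallest CS divergence, and the divergence of any mixture to itself is zero.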
We combine all the images within the same class into a GMM with many components and reduce it to a $10$-component GMM; the reduced mixture is our class prototype. When there are many training examples, this approach saves evaluation time. Figure~\ref{fig:hand_gesture_prototype} gives the class prototypes of the hand gestures obtained by different reduction approaches. \begin{figure}[ht] \centering \setlength{\tabcolsep}{1pt} \begin{tabular}{ccccccccccc} \toprule Method&A&B&C&D&G&H&I&L&V&Y\\ \midrule ISE&\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/ISE_prototype_A.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/ISE_prototype_B.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/ISE_prototype_C.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/ISE_prototype_D.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/ISE_prototype_G.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/ISE_prototype_H.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/ISE_prototype_I.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/ISE_prototype_L.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/ISE_prototype_V.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/ISE_prototype_Y.png}\\ CTD-KL&\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-KL_prototype_A.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-KL_prototype_B.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-KL_prototype_C.png}
&\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-KL_prototype_D.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-KL_prototype_G.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-KL_prototype_H.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-KL_prototype_I.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-KL_prototype_L.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-KL_prototype_V.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-KL_prototype_Y.png}\\ CTD-W2&\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-W2_prototype_A.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-W2_prototype_B.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-W2_prototype_C.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-W2_prototype_D.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-W2_prototype_G.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-W2_prototype_H.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-W2_prototype_I.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-W2_prototype_L.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-W2_prototype_V.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-W2_prototype_Y.png}\\ CTD-CS&\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-CS_prototype_A.png} 
&\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-CS_prototype_B.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-CS_prototype_C.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-CS_prototype_D.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-CS_prototype_G.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-CS_prototype_H.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-CS_prototype_I.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-CS_prototype_L.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-CS_prototype_V.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-CS_prototype_Y.png}\\ CTD-ISE&\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-ISE_prototype_A.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-ISE_prototype_B.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-ISE_prototype_C.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-ISE_prototype_D.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-ISE_prototype_G.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-ISE_prototype_H.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-ISE_prototype_I.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-ISE_prototype_L.png} &\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-ISE_prototype_V.png} 
&\includegraphics[width=0.07\linewidth]{figure/hand_gestures/reduction_visualization/CTD-ISE_prototype_Y.png}\\ \bottomrule \end{tabular} \caption{The class prototypes of each hand gesture obtained by different reduction approaches.} \label{fig:hand_gesture_prototype} \end{figure} \noindent \textbf{Results} The quality of the class prototypes affects the classification accuracy; we therefore compare the classification accuracy of the various reduction methods and their corresponding runtimes. Since the training set is relatively small, we perform 5-fold cross validation, repeated $100$ times, to estimate the classification accuracy. \begin{figure}[htpb] \centering \subfloat[Classification Accuracy]{\includegraphics[width=0.45\linewidth]{figure/hand_gestures/hand_gesture_accuracy.png}} \subfloat[Time]{\includegraphics[width=0.45\linewidth]{figure/hand_gestures/hand_gesture_time.png}}\\ \subfloat[Classification Accuracy]{\includegraphics[width=0.45\linewidth]{figure/hand_gestures/hand_gesture_cross_test.png}} \caption{The (a) classification accuracy when the same divergence is used during reduction and test, (b) computational time of the different reduction approaches, and (c) classification accuracy when different divergences are used during reduction and test.} \label{fig:hand_gesture_final_result} \end{figure} We consider two schemes during the test: \begin{enumerate} \item At test time, the divergence used for classification is the same as that used during the reduction. For example, if we minimize the ISE to obtain the class prototypes, we also use the ISE to measure the similarity between the test images and the class prototypes. The classification accuracy based on different divergences is given in Figure~\ref{fig:hand_gesture_final_result} (a). \item At test time, we consider using all the divergences in this experiment.
Each combination of the divergence used in the reduction and the divergence used in the test gives a classification accuracy, which is visualized in Figure~\ref{fig:hand_gesture_final_result} (c). \end{enumerate} When the same divergence is used during the reduction and test, the ISE based approach gives the highest classification accuracy; however, its computational time is the longest. The CTD based approaches perform worse; however, with the CS or ISE cost function, they outperform CTD-KL, the classical clustering based algorithm in the literature. CTD-KL takes the shortest time. When different divergences are used during the reduction and test, it can be seen that when the ISE is used during the test, four of the reduction approaches give the same classification accuracy. When CTD-W2 is used during the reduction and the ISE during the test, the performance is not as good as the other approaches. Therefore, considering both the computational time and the classification accuracy, we recommend using CTD-KL for the reduction and the ISE during the test. \section{Conclusion} In this paper, we propose an optimization based GMR approach which connects the clustering based and optimization based approaches for GMR in the literature. We use this formulation to establish the convergence of the clustering based algorithms. We also show how the formulation can be generalized to other cost functions, such as the ISE, to further improve the performance of the clustering based algorithms. Numerical experiments are conducted to illustrate the effectiveness of the proposed generalization. Although these algorithms maintain the computational simplicity of the clustering based algorithms, they are not as good as the minimum ISE based optimization approach in terms of statistical efficiency.
We leave it as future work to develop a computationally efficient algorithm that is as good as the minimum ISE approach and has theoretical guarantees. \ifCLASSOPTIONcompsoc \section*{Acknowledgments} \else \section*{Acknowledgment} \fi The authors would like to thank Trevor Campbell, Nhat Ho, Kittipat Kampa, Richard Schonberg, and Lei Yu for their help and discussion. This research was enabled in part by support provided by WestGrid (\url{www.westgrid.ca}) and Compute Canada Calcul Canada (\url{www.computecanada.ca}). \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran} \section{Details of Minimum ISE} \section{Connection with Clustering Based Algorithms} \subsection{Gaussian Barycenter} \subsubsection{Gaussian Barycenter under KL Divergence} \label{sec:kl_barycenter} We prove Lemma~\ref{lemma:Gaussian_KL_barycenter} in this section. \begin{proof} The KL-divergence between two Gaussians is given by \begin{equation*} \begin{split} &2D_{\text{KL}}(\Phi(\cdot|\mu_1,\Sigma_1)\|\Phi(\cdot|\mu_2,\Sigma_2))\\ =&\log \frac{|\Sigma_2|}{|\Sigma_1|} + \text{tr}(\Sigma_2^{-1}\Sigma_1) + (\mu_2-\mu_1)^{\tau}\Sigma_2^{-1}(\mu_2-\mu_1)-d \end{split} \end{equation*} where $|\Sigma|$ denotes the determinant of the matrix $\Sigma$. Therefore, we can write \begin{equation*} \begin{split} L(\mu,\Sigma) &= \sum_{n=1}^{N} \lambda_n D_{\text{KL}}(f_n\|f) \\ & = \frac{1}{2}\sum_{n} \lambda_n \left\{\log |\Sigma|+ \text{tr}(\Sigma^{-1}\Sigma_n)\right\} \\ &+ \frac{1}{2}\sum_{n} \lambda_n (\mu-\mu_n)^{\tau}\Sigma^{-1}(\mu-\mu_n) + C \end{split} \end{equation*} for some constant $C$.
We now use the following linear algebra formulas $$ \frac{\partial \log |\Sigma|}{\partial \Sigma} = (\Sigma^{-1})^{\tau} = (\Sigma^{\tau})^{-1}, $$ $$ \frac{\partial \text{tr}(A\Sigma^{-1}B)}{\partial\Sigma} = -(\Sigma^{-1}BA\Sigma^{-1})^{\tau}, $$ and $$\frac{\partial}{\partial x} (x-\mu)^{\tau}\Sigma^{-1}(x-\mu) = 2\Sigma^{-1}(x-\mu) $$ to work out the partial derivatives of $L$ with respect to $\mu$ and $\Sigma$. They are given by \begin{align*} \frac{\partial L}{\partial \mu} &= \sum_{n}\lambda_n \Sigma^{-1}(\mu - \mu_n), \\ 2\frac{\partial L}{\partial \Sigma} &= \left\{\sum_{n}\lambda_n\right\}\Sigma^{-1} - \Sigma^{-1}\sum_{n} \lambda_n \left\{ \Sigma_n + (\mu-\mu_n)(\mu-\mu_n)^{\tau}\right\}\Sigma^{-1}. \end{align*} Setting both partial derivatives to $\mathbf{0}$, we obtain \[ \bar{\mu}= \{\sum_{n} \lambda_n\}^{-1}\sum_{n=1}^N\lambda_n\mu_n \] and the covariance \[ \bar{\Sigma} = \{\sum_{n} \lambda_n\}^{-1}\sum_{n=1}^N\lambda_n (\Sigma_n + (\mu_n-\bar{\mu})(\mu_n-\bar{\mu})^{\tau}). \] Clearly, these solutions are the mean vector and covariance matrix that minimize $L(\mu,\Sigma)$. This completes the proof. \end{proof} \subsubsection{Gaussian Barycenter under CS Divergence} \label{sec:CS_barycenter_proof} Similar to the Gaussian barycenter under the KL divergence, in this section, we derive the Gaussian barycenter under the CS divergence. \begin{proof} Based on~\eqref{eq:cauchy-schwartz-divergence-gaussian}, let \begin{equation*} \begin{split} L(\mu,\Sigma)=&\sum_{n}\lambda_n D_{\text{CS}}(f_n\|f) \\ =&\sum_{n}\lambda_{n} \{-\log \phi(\mu_n|\mu, \Sigma_n+ \Sigma) - \frac{1}{4}\log|\Sigma|\} + C\\ =&\frac{1}{2}\sum_{n}\lambda_{n}(\mu_n-\mu)^{\tau}(\Sigma_n+\Sigma)^{-1}(\mu_n-\mu)\\ & +\frac{1}{2}\sum_{n}\lambda_{n} \{\log|\Sigma_n+\Sigma|- \frac{1}{2}\log |\Sigma|\} + C \end{split} \end{equation*} for some constant $C$.
The gradients are given by $$\frac{\partial L}{\partial \mu} = -\sum_{n}\lambda_n (\Sigma_n+\Sigma)^{-1}(\mu_n-\mu)$$ and \[ \begin{split} &2\frac{\partial L}{\partial \Sigma} = -\frac{1}{2}\left\{\sum_{n} \lambda_n \right\}\Sigma^{-1}\\ &+\sum_{n}\lambda_n(\Sigma_n+\Sigma)^{-1}\left\{I-(\mu_n-\mu)(\mu_n-\mu)^{\tau}(\Sigma_n+\Sigma)^{-1}\right\} \end{split} \] Setting both partial derivatives to $\mathbf{0}$, we obtain \[ \bar\mu=\left\{\sum_{n}\lambda_n (\Sigma_n+\bar\Sigma)^{-1}\right\}^{-1}\sum_{n}\lambda_n (\Sigma_n+\bar\Sigma)^{-1}\mu_n \] and \[ \bar{\Sigma}^{-1} = 2 \sum_{n}\tilde \lambda_n (\Sigma_n+\bar\Sigma)^{-1}\{ I -(\mu_n-\bar\mu)(\mu_n-\bar\mu)^{\tau}(\Sigma_n+\bar\Sigma)^{-1}\}, \] where $\tilde \lambda_n = \lambda_n/\sum_{n}\lambda_n$. This completes the proof. \end{proof} \subsection{Connection with Hard Clustering Algorithm in~\cite{schieferdecker2009gaussian}} \label{sec:connection_hard_clustering} In this section, we show that when the cost function is chosen to be \begin{equation*} c(\phi_n, \tilde \phi_m) = D_{\text{KL}}(\phi_n\|\tilde \phi_m), \end{equation*} our algorithm reduces to the hard clustering based algorithm in~\cite{schieferdecker2009gaussian}. According to our assignment step in Algorithm~\ref{alg:mm_reduction}, the transportation plan becomes \[ \pi_{nm} = \begin{cases} w_n&\text{if } m = C(n) := \argmin_{m'} c(\phi_n, \tilde \phi_{m'})\\ 0&\text{otherwise}.
\end{cases} \] Then the mixing weights become \begin{equation*} \tilde w_m = \sum_{n=1}^{N}\pi_{nm} = \sum_{C(n)=m} w_{n} \end{equation*} and the $m$th subpopulation is updated via the KL barycenter \begin{equation*} \tilde \phi_{m} = \arginf_{\phi} \sum_{n=1}^{N}\pi_{nm} D_{\text{KL}}(\phi_n\| \phi). \end{equation*} By substituting $\lambda_n$ with $\pi_{nm}$ above, the updated subpopulation parameters based on our approach become \begin{equation*} \tilde\mu_{m} = \tilde w_m^{-1} \sum_{C(n)=m}w_n \mu_n \end{equation*} and \begin{equation*} \tilde\Sigma_{m} = \tilde w_m^{-1} \sum_{C(n)=m}w_n\{\Sigma_n + (\mu_n-\tilde\mu_m)(\mu_n-\tilde\mu_m)^{\tau}\}, \end{equation*} which are the same as the moment matching given in the hard clustering algorithm in Algorithm~\ref{alg:hard_clustering}. \section{Connection With Optimization Based Algorithms} \subsection{Proof for Equation~\ref{eq:barycenter_equiv}} \label{app:CTD_equiv} \begin{proof} We give the proof under the following two cases. \noindent \textbf{ISE} When $c(\cdot,\cdot)=D_{\text{ISE}}(\cdot,\cdot)$, then according to~\eqref{eq:ISE}, the objective function on the LHS of~\eqref{eq:barycenter_equiv} is \begin{align*} &\sum_{n=1}^{N} w_n D_{\text{ISE}}(\phi_n, \tilde \phi)\\ =&\sum_{n=1}^{N} w_n \left\{\int \phi_n^2(x) dx + \int \tilde \phi^2(x) dx - 2\int \phi_n(x) \tilde \phi(x) dx\right\}\\ \end{align*} The RHS of~\eqref{eq:barycenter_equiv} is \begin{align*} &D_{\text{ISE}}\left(\sum_{n=1}^{N} w_n \phi_{n}, \tilde \phi\right)= \int \left\{\sum_{n=1}^{N} w_n \phi_{n}(x) - \tilde \phi(x)\right\}^2 dx\\ =&\int \left\{ \sum_{n=1}^{N} w_n\phi_{n}(x)\right\}^2 dx + \int \tilde \phi^2(x) dx- 2\sum_{n} w_n \int \phi_{n}(x)\tilde \phi(x)dx\\ =& C + \sum_{n=1}^{N} w_n D_{\text{ISE}}(\phi_{n}, \tilde \phi) \end{align*} where $C$ is some constant that does not depend on $\tilde \phi$. This relationship implies that~\eqref{eq:barycenter_equiv} holds when the cost function is the ISE.
\noindent \textbf{KL divergence} \begin{equation*} \begin{split} &D_{\mathrm{KL}}\left(\sum_{n=1}^{N} w_n\phi_{n} \| \tilde\phi\right)\\ =&\int \left\{\sum_{n=1}^{N} w_n\phi_{n}(x)\right\} \log \left \{\frac{\sum_{n} w_n\phi_{n}(x)}{\tilde \phi (x)}\right \} dx\\ =&C_1 - \sum_{n} w_n \int \phi_{n}(x) \log \tilde \phi (x) dx\\ =&C_2 + \sum_{n} w_n \int \phi_{n}(x) \log \frac{\phi_{n}(x)}{\tilde \phi (x)} dx \\ =&C_2 + \sum_{n} w_n D_{\mathrm{KL}}(\phi_{n}\|\tilde \phi) \end{split} \end{equation*} where $C_1$ and $C_2$ are constants not dependent on $\tilde\phi$. This relationship implies that~\eqref{eq:barycenter_equiv} holds when the cost function is the KL divergence. \end{proof} \subsection{Proof for Theorem~\ref{thm:upper_bound}} \label{app:CTD_is_upper_bound} \begin{proof} Let ${\boldsymbol\pi}\in\Pi({\mathbf{w}}, \tilde {\mathbf{w}})$ be a transportation plan; then we can write $p_{G} = \sum_{n} w_n \phi_n = \sum_{n}\sum_{m}\pi_{nm}\phi_n$ and, similarly, $p_{\tilde G} = \sum_{n,m}\pi_{nm}\tilde \phi_m$. Therefore, \begin{equation*} \begin{split} c(p_{G}, p_{\tilde G}) &= c(\sum_{n}w_n\phi_n, \sum_{m} \tilde w_m \tilde \phi_m)\\ &= c(\sum_{n,m}\pi_{nm}\phi_n, \sum_{n,m} \pi_{nm} \tilde \phi_m)\\ &\leq \sum_{n,m}\pi_{nm} c(\phi_n, \tilde \phi_m). \end{split} \end{equation*} The last inequality holds because of the ``convexity'' property of the cost function. Since this inequality holds for any ${\boldsymbol\pi}$, taking the infimum with respect to ${\boldsymbol\pi}$ on the right-hand side gives $$c(p_{G}, p_{\tilde G})\leq \mathcal{T}_{c}^{0}(p_{G}, p_{\tilde G}),$$ which finishes the proof.
\end{proof} \subsection{Convexity of Cost Functions} \subsubsection{ISE is Convex} \label{app:ISE_convexity} \begin{proof} Based on the definition of $D_{\text{ISE}}$, we have \begin{equation*} \begin{split} &D_{\text{ISE}}(\alpha f_1+(1-\alpha) f_2, \alpha \phi_1+(1-\alpha) \phi_2)\\ =&\int \{\alpha \{f_1(x)-\phi_1(x)\}+(1-\alpha) \{f_2(x) - \phi_2(x)\}\}^2 dx\\ =&\alpha^2D_{\text{ISE}}(f_1,\phi_1) +(1-\alpha)^2D_{\text{ISE}}(f_2,\phi_2) \\ &+ 2\alpha(1-\alpha)\int (f_1(x)-\phi_1(x))(f_2(x)-\phi_2(x))dx\\ :=&\alpha^2D_{\text{ISE}}(f_1,\phi_1) +(1-\alpha)^2D_{\text{ISE}}(f_2,\phi_2) \\ &+ 2\alpha(1-\alpha) \langle f_1(x)-\phi_1(x), f_2(x)-\phi_2(x)\rangle \end{split} \end{equation*} Therefore, we have \begin{equation*} \begin{split} &\alpha D_{\text{ISE}}(f_1,\phi_1) + (1-\alpha)D_{\text{ISE}}(f_2,\phi_2) \\ &- D_{\text{ISE}}(\alpha f_1+(1-\alpha) f_2, \alpha \phi_1+(1-\alpha) \phi_2)\\ =&\alpha(1-\alpha)D_{\text{ISE}}(f_1,\phi_1) + \alpha(1-\alpha)D_{\text{ISE}}(f_2,\phi_2)\\ &- 2\alpha(1-\alpha)\langle f_1(x)-\phi_1(x), f_2(x)-\phi_2(x)\rangle\\ =&\alpha(1-\alpha)\{D_{\text{ISE}}(f_1,\phi_1) + D_{\text{ISE}}(f_2,\phi_2) \\ &- 2\langle f_1(x)-\phi_1(x), f_2(x)-\phi_2(x)\rangle\}\\ =&\alpha(1-\alpha) D_{\text{ISE}}(f_1-\phi_1, f_2-\phi_2)\geq 0 \end{split} \end{equation*} The last inequality holds because the ISE is non-negative and $\alpha\in(0,1)$. This shows that $D_{\text{ISE}}(\alpha f_1+(1-\alpha) f_2, \alpha \phi_1+(1-\alpha) \phi_2)\leq \alpha D_{\text{ISE}}(f_1,\phi_1) + (1-\alpha)D_{\text{ISE}}(f_2,\phi_2)$, which finishes the proof. \end{proof} \subsubsection{CS Divergence is Nonconvex} \label{app:CS_convexity_counter_example} In this section, we present a counterexample for the convexity of the CS divergence. That is, we show that there exist $f_1$, $f_2$, $\phi_1$, $\phi_2$, and $\alpha$ such that $D_{\text{CS}}(\alpha f_1+(1-\alpha) f_2, \alpha \phi_1+(1-\alpha) \phi_2) > \alpha D_{\text{CS}}(f_1,\phi_1) + (1-\alpha)D_{\text{CS}}(f_2,\phi_2)$.
Due to the form of the CS divergence, we equivalently show that \begin{equation} \label{eq:cs_nonconvex_obj} \begin{split} &-D_{\text{CS}}(\alpha f_1+(1-\alpha) f_2, \alpha \phi_1+(1-\alpha) \phi_2) \\ <&-\alpha D_{\text{CS}}(f_1,\phi_1) - (1-\alpha)D_{\text{CS}}(f_2,\phi_2). \end{split} \end{equation} Let $\alpha=0.5$, $f_1(x) = \phi(x|-1, \sigma^2)$, $f_2(x) = \phi(x|-\mu,1)$, $\phi_1(x)=\phi(x|1,1)$, and $\phi_2(x) = \phi(x|\mu,\sigma^2)$ for some $\mu>0$ and $\sigma>0$. Then based on the closed form of the CS divergence, it is easy to see that \begin{equation*} \begin{split} D_{\text{CS}}(f_1, \phi_1) &= -\log \phi(1|-1, 1+\sigma^2) - \frac{1}{4}\log(16\pi^2\sigma^2)\\ D_{\text{CS}}(f_2, \phi_2) &= -\log \phi(\mu|-\mu, 1+\sigma^2) - \frac{1}{4}\log(16\pi^2\sigma^2) \end{split} \end{equation*} Therefore, the RHS of~\eqref{eq:cs_nonconvex_obj} becomes \begin{equation*} \text{RHS} =\frac{1}{2}\{\log \phi(1|-1,1+\sigma^2) + \log \phi(\mu|-\mu, 1+\sigma^2) + \log(4\pi\sigma)\} \end{equation*} We also have the LHS of~\eqref{eq:cs_nonconvex_obj} \begin{equation*} \begin{split} &\text{LHS}\\ =&\log\{\frac{1}{4}\{\phi(1|-1,1+\sigma^2) + \phi(\mu|-\mu,1+\sigma^2)\\ &+ \phi(\mu|-1,2\sigma^2)+\phi(-\mu|1,2)\}\}\\ &-\log \{\frac{1}{4}\{(4\pi\sigma^2)^{-1/2}+(4\pi)^{-1/2}\}+ \frac{1}{2}\phi(\mu|1,1+\sigma^2)\}\\ \end{split} \end{equation*} The difference between the LHS and the RHS for various values of $\mu$ and $\sigma$ is shown in Figure~\ref{fig:non_convexity_CS}. It can be seen from the figure that the surface is a saddle surface; in particular, it takes negative values, so~\eqref{eq:cs_nonconvex_obj} holds at such points. Therefore, the CS divergence is not convex.
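The counterexample can also be verified numerically. The sketch below (our own check, not part of the paper's experiments) evaluates the CS divergence between Gaussian mixtures in closed form via the identity $\int \phi(x|m_1,v_1)\phi(x|m_2,v_2)\,dx = \phi(m_1|m_2,v_1+v_2)$, at the particular point $(\mu,\sigma)=(1,2)$, which is one point where we find the convexity inequality fails:

```python
import math

def gauss_overlap(m1, v1, m2, v2):
    # Closed form: int N(x|m1,v1) N(x|m2,v2) dx = N(m1 | m2, v1 + v2).
    v = v1 + v2
    return math.exp(-(m1 - m2) ** 2 / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)

def cs_divergence(f, g):
    # CS divergence between two Gaussian mixtures, each given as a
    # list of (weight, mean, variance) triplets.
    def inner(p, q):
        return sum(wp * wq * gauss_overlap(mp, vp, mq, vq)
                   for wp, mp, vp in p for wq, mq, vq in q)
    return (-math.log(inner(f, g))
            + 0.5 * math.log(inner(f, f)) + 0.5 * math.log(inner(g, g)))

# alpha = 0.5 as in the text; (mu, sigma) = (1, 2), i.e. var = sigma^2 = 4.
mu, var = 1.0, 4.0
f1, phi1 = [(1.0, -1.0, var)], [(1.0, 1.0, 1.0)]
f2, phi2 = [(1.0, -mu, 1.0)], [(1.0, mu, var)]
mix_f = [(0.5, -1.0, var), (0.5, -mu, 1.0)]
mix_phi = [(0.5, 1.0, 1.0), (0.5, mu, var)]

avg = 0.5 * cs_divergence(f1, phi1) + 0.5 * cs_divergence(f2, phi2)
mix = cs_divergence(mix_f, mix_phi)
print(mix > avg)  # True: the mixture divergence exceeds the convex combination
```

Here the divergence of the mixtures is about $0.544$ while the convex combination is about $0.512$, so the convexity inequality is violated at this point.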
\begin{figure}[htpb] \centering \includegraphics[width=\columnwidth]{./figure/misc/non_convexity_of_CS.png} \caption{The difference $\alpha D_{\text{CS}}(f_1,\phi_1) + (1-\alpha)D_{\text{CS}}(f_2,\phi_2)-D_{\text{CS}}(\alpha f_1+(1-\alpha) f_2, \alpha \phi_1+(1-\alpha) \phi_2)$ when $\alpha=0.5$, $f_1(x) = \phi(x|-1, \sigma^2)$, $f_2(x) = \phi(x|-\mu,1)$, $\phi_1(x)=\phi(x|1,1)$, and $\phi_2(x) = \phi(x|\mu,\sigma^2)$ for different values of $\mu$ and $\sigma$.} \label{fig:non_convexity_CS} \end{figure} \section{Generalization to Reduce Mixtures from Exponential Families} \label{app:generalization_exponential_family} In this section, we discuss the generalization of our proposed algorithm to the reduction of mixtures from full exponential families. \begin{table*}[htb] \centering \scriptsize \caption{Parameterization of widely used exponential families.} \begin{tabular}{llllll} \toprule ${\mathcal{F}}$ & $\theta$ &$ T$ & $A$& $h$ & ${\mathbb{E}}\{h\}$\\ \midrule \multicolumn{6}{c}{Univariate distribution}\\ Exponential & $-\lambda$ & $x$ & $-\frac{1}{\theta}$&1&1\\ Weibull (known $k$)& $-\frac{1}{\lambda^k}$ & $x^k$ & $-\frac{1}{\theta}$ & $kx^{k-1}$& $\frac{k!}{\lambda^{k-1}}$ \\ Laplace (known $\mu$)& $-\frac{1}{b}$ & $|x-\mu|$ & $-\frac{2}{\theta}$ & $1$ & 1\\ Rayleigh& $-\frac{1}{2\sigma^2}$ & $x^2$ & $-\frac{1}{2\theta}$ & $x$ & $\sigma\sqrt{\frac{\pi}{2}}$\\ Log-normal& $(\frac{\mu}{\sigma^2}, -\frac{1}{2\sigma^2})^{\tau}$ & $(\log x, (\log x)^2)^{\tau}$ & $\exp(\frac{\theta_1^2}{\theta_2})\frac{1}{\sqrt{-2\theta_2}}$ & $\frac{1}{\sqrt{2\pi} x}$ & $\frac{1}{\sqrt{2\pi}}\exp(\frac{\theta_1}{2\theta_2}-\frac{1}{4\theta_2})$\\ Gamma&$(-\beta,\alpha-1)^{\tau}$ & $(x,\log x)$ & $\Gamma(\theta_2+1)(-\theta_1)^{- (\theta_2+1)}$ & $1$& $1$\\ Inverse Gamma& $(-\beta,-\alpha-1)^{\tau}$ & $(x,\log x)$ & $\Gamma(-\theta_2-1)(-\theta_1)^{\theta_2+1}$ & $1$& $1$\\ \midrule \multicolumn{6}{c}{Multivariate distribution}\\ Gaussian Gamma&$(\alpha-\frac{1}{2},
-\beta-\frac{\lambda\mu^2}{2}, \lambda\mu, -\frac{\lambda}{2})^{\tau}$ & $(\log \tau, \tau, \tau x, \tau x^2)^{\tau}$ & $\Gamma(\theta_1+\frac{1}{2})\frac{1}{\sqrt{-2\theta_4}} (\frac{\theta_3^2}{4\theta_4}-\theta_2)^{-(\theta_1+\frac{1}{2})}$ & $\frac{1}{\sqrt{2\pi}}$& $\frac{1}{\sqrt{2\pi}}$\\ Dirichlet& $\boldsymbol\alpha-1$ & $\log {\bm{x}}$ & $\exp\{\mathbbm{1}_K^{\tau}\log \Gamma(\boldsymbol\alpha) - \log \{\mathbbm{1}_K^{\tau}\Gamma(\boldsymbol\alpha)\}\}$ & $1$& $1$\\ \bottomrule \end{tabular} \label{tab:ISE_exp_family} \end{table*} \begin{definition}[Full Exponential Family] ${\mathcal{F}} = \{f(x|\theta): \theta\in \Theta\}$ is said to be a full exponential family if its density can be represented as \begin{equation} f(x|\theta) = h(x)\exp(\theta^{\tau}T(x) - \log A(\theta)) \end{equation} where $\theta = (\theta_1, \theta_2,\ldots,\theta_m)^{\tau}$ is called the natural parameter and $T(x)=(T_1(x), T_2(x),\ldots, T_m(x))^{\tau}$ is called the natural sufficient statistics. The parameter space $\Theta$ is formed by $$\Theta=\{\theta\in{\mathbb{R}}^{m}: \int \exp(\theta^{\tau}T(x))dx<\infty\}.$$ \end{definition} The natural parameters and the natural sufficient statistics for some widely used exponential families are given in Table~\ref{tab:ISE_exp_family}. We first consider the case where the cost function is the KL divergence. Under an exponential family, the KL divergence between two density functions has the following form \begin{equation*} \begin{split} D_{\mathrm{KL}}(f(\cdot|\theta_1), f(\cdot|\theta_2)) =& (\theta_1-\theta_2)^{\tau} {\mathbb{E}}_{\theta_1}\{T(X)\} - \log \frac{A(\theta_1)}{A(\theta_2)}\\ :=&(\theta_1-\theta_2)^{\tau} \mu(\theta_1) - \log \frac{A(\theta_1)}{A(\theta_2)} \end{split} \end{equation*} where $\mu(\theta) = {\mathbb{E}}_{\theta}\{T(X)\}$. Therefore, the KL divergence can be computed in closed-form as long as $\mu(\theta)$ has a closed-form.
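As a quick sanity check of this closed form (our own illustration, using the exponential-distribution row of Table~\ref{tab:ISE_exp_family}, where $\theta=-\lambda$, $A(\theta)=-1/\theta$, and $\mu(\theta)=\mathbb{E}_{\theta}\{X\}=-1/\theta$), the formula reproduces the familiar KL divergence between two exponential distributions:

```python
import math

def kl_exp_family(theta1, theta2, mu, A):
    # D_KL(f(.|theta1) || f(.|theta2)) = (theta1 - theta2) mu(theta1)
    #                                    - log(A(theta1) / A(theta2)),
    # written here for a one-parameter family in its natural parameter.
    return (theta1 - theta2) * mu(theta1) - math.log(A(theta1) / A(theta2))

# Exponential distribution: theta = -lambda, A(theta) = -1/theta, and
# mu(theta) = E_theta[T(X)] = E[X] = -1/theta.
A = lambda t: -1.0 / t
mu_fn = lambda t: -1.0 / t

lam1, lam2 = 2.0, 5.0
closed = kl_exp_family(-lam1, -lam2, mu_fn, A)
direct = math.log(lam1 / lam2) + lam2 / lam1 - 1.0  # textbook formula
print(closed, direct)  # the two values agree
```

Both evaluate to $\log(\lambda_1/\lambda_2) + \lambda_2/\lambda_1 - 1$, as expected.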
The barycenter of $f_1,f_2,\ldots, f_N \in {\mathcal{F}}$ under the KL divergence, which minimizes $\sum_{n=1}^{N} \lambda_n D_{\mathrm{KL}}(f(\cdot|\theta_n), f(\cdot|\theta))$, is given by $f(\cdot|\bar{\theta})$ where $\bar\theta$ is the solution to \begin{equation} \label{eq:KL_barycenter} \frac{\partial}{\partial \theta} \log A(\theta) = \sum_{n=1}^{N}\frac{\lambda_n}{\sum_{n} \lambda_n} \mu(\theta_n) \end{equation} This equality can be further simplified since $\mu(\theta) = \frac{\partial}{\partial \theta}\log A(\theta)$, which can be easily shown by differentiating $A(\theta) = \int h(x)\exp(\theta^{\tau}T(x)) dx$ with respect to $\theta$. Therefore, by simplifying~\eqref{eq:KL_barycenter}, we can see that the parameter of the barycenter under KL divergence has the following closed form \begin{equation} \label{eq:KL_barycenter_closed_form} \bar\theta = \mu^{-1}\left(\sum_{n=1}^{N}\frac{\lambda_n}{\sum_{n} \lambda_n} \mu(\theta_n)\right). \end{equation} This conclusion is also established in~\cite{liu2014distributed} where the KL barycenter is used to combine local estimates under distributed learning. In the context of mixture reduction, a similar discussion is carried out in~\cite{ardeshiri2013reduction}. We next discuss the case where the cost function is the ISE. The ISE between two density functions has the following form \begin{equation*} \begin{split} &D_{\text{ISE}}(f(\cdot|\theta_1), f(\cdot|\theta_2))\\ =&\int (f(x|\theta_1)-f(x|\theta_2))^2 dx\\ =&\sum_{i=1}^2 \frac{A(2\theta_i)}{A^2(\theta_i)} {\mathbb{E}}_{2\theta_i}\{h(X)\} - 2\frac{A(\theta_1+\theta_2)}{A(\theta_1)A(\theta_2)} {\mathbb{E}}_{\theta_1+\theta_2}\{h(X)\}. \end{split} \end{equation*} The ISE therefore has a closed form as long as ${\mathbb{E}}_{\theta}\{h(X)\}$ has a closed form. The form of $H(\theta)={\mathbb{E}}_{\theta}\{h(X)\}$ under the widely used exponential families is given in the last column of Table~\ref{tab:ISE_exp_family}.
It can be seen from the table that for distributions from these exponential families, the ISE has a closed form. The ISE barycenter of $f_1,f_2,\ldots, f_N \in {\mathcal{F}}$ minimizes the objective function $${\mathcal{L}}(\theta) = \sum_{n} \lambda_n \left\{\frac{A(2\theta)}{A^2(\theta)} H(2\theta) - 2\frac{A(\theta_n+\theta)}{A(\theta_n)A(\theta)} H(\theta_n+\theta)\right\}.$$ This objective function is in general non-convex and numerical methods are required to find a local minimum. \section{Approximate Inference} \label{app:tracking} \begin{lemma}[Product of Gaussian Densities I] \label{lemma:product_gaussian_prediction} Let $\phi(x|A\mu+b,\Sigma)$ and $\phi(\mu|\mu_0,\Sigma_0)$ be two Gaussian densities; then $$\phi(x|A\mu+b,\Sigma)\phi(\mu|\mu_0,\Sigma_0) = C \phi(\mu|\hat \mu,\hat \Sigma)$$ and $$\int \phi(x|A\mu+b,\Sigma)\phi(\mu|\mu_0,\Sigma_0) d\mu = C$$ where $\hat \Sigma = (\Sigma_0^{-1}+A^{\tau}\Sigma^{-1}A)^{-1}$, $\hat \mu = \hat \Sigma(\Sigma_0^{-1}\mu_0 + A^{\tau}\Sigma^{-1}(x-b))$ and \begin{equation*} C = \phi(x|A\mu_0+b,\Sigma+A\Sigma_0A^{\tau}). \end{equation*} \end{lemma} \begin{lemma}[Product of Gaussian Densities II] \label{lemma:product_gaussian} Let $\phi(x|\mu_1,\Sigma_1)$ and $\phi(x|\mu_2,\Sigma_2)$ be two Gaussian densities; then $$\phi(x|\mu_1,\Sigma_1)\phi(x|\mu_2,\Sigma_2) = C \phi(x|\mu_3, \Sigma_3)$$ and $$\int \phi(x|\mu_1,\Sigma_1)\phi(x|\mu_2,\Sigma_2) dx = C$$ where $\Sigma_3 = (\Sigma_1^{-1}+\Sigma_2^{-1})^{-1}$, $\mu_3 = \Sigma_3(\Sigma_1^{-1}\mu_1 + \Sigma_2^{-1}\mu_2)$ and \begin{equation*} C = \phi(\mu_1|\mu_2,\Sigma_1+\Sigma_2). \end{equation*} \end{lemma}
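In one dimension, Lemma~\ref{lemma:product_gaussian} can be verified pointwise; the following sketch (with arbitrarily chosen parameters, not taken from the paper) checks the product identity numerically:

```python
import math

def npdf(x, m, v):
    # One-dimensional Gaussian density N(x | m, v).
    return math.exp(-(x - m) ** 2 / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)

# Parameters of the two factors (arbitrary choices for the check).
m1, v1 = -0.7, 2.0
m2, v2 = 1.3, 0.5

# The lemma: product = C * N(x | m3, v3) with the constants below.
v3 = 1.0 / (1.0 / v1 + 1.0 / v2)
m3 = v3 * (m1 / v1 + m2 / v2)
C = npdf(m1, m2, v1 + v2)

max_err = max(abs(npdf(x, m1, v1) * npdf(x, m2, v2) - C * npdf(x, m3, v3))
              for x in (-3.0, -1.0, 0.0, 0.4, 2.0))
print(max_err)  # numerically zero
```

The discrepancy is at the level of floating-point rounding, confirming the identity at the tested points.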
\section{Introduction} A connected graph is called transient (resp. recurrent) if the simple random walk on it is transient (resp. recurrent). Benjamini, Gurel-Gurevich and Lyons \cite{BGGL} proved the celebrated result that the trace of the simple random walk on a transient graph is recurrent almost surely. If a connected subgraph of an infinite connected graph is transient, then the infinite connected graph is transient. Therefore, the trace is somewhat ``smaller'' than the graph on which the simple random walk runs. Now we consider the following questions: How far apart are a transient graph $G$ and the trace of the simple random walk on $G$? More generally, how far apart are $G$ and a recurrent subgraph $H$ of $G$? How many edges of $G$ do we need to add to $H$ so that the enlargement of $H$ becomes transient? There are numerous choices of edges of $G$ to be added to $H$. If we add finitely many edges to $H$, then the enlarged graph is also recurrent. Therefore, we add {\it infinitely} many edges to $H$ and consider whether the enlarged graph is transient. In this paper, we add infinitely many edges of $G$ to $H$ {\it randomly}. Specifically, we add open edges of Bernoulli bond percolation on $G$ to $H$, and consider the probability that the enlargement of $H$ is transient. We state our purpose more precisely as follows. Let $\mathbb{P}_p$ be the Bernoulli measure on the space of configurations of Bernoulli bond percolation on $G$ such that each edge of $G$ is open with probability $p \in (0,1)$. Consider the probability that the number of vertices of $G$ connected by open edges from a fixed vertex is infinite under $\mathbb{P}_p$. Then Hammersley's critical probability $p_c$ is the infimum of $p$ such that the probability is positive.
Similarly, we consider the probability that the enlarged graph is transient under $\mathbb{P}_p$ and either of the following two values: the infimum of $p$ such that the probability is positive, or the infimum of $p$ such that the probability is one. We regard these two values as certain critical probabilities, and compare them with Hammersley's critical probability. We also consider questions of this kind, not only for transience, but also for other graph properties. Let $G$ be an infinite connected graph and $H$ be a subgraph of $G$. Let $\PP$ be a property of the subgraphs of $G$. Assume that $G$ satisfies $\PP$ and $H$ does not. Let $\U(H)$ be the graph obtained by adding open edges of Bernoulli bond percolation on $G$ to $H$. (See Definition \ref{enl} for a precise definition.) Let $\mathbb{P}_p$ be the Bernoulli measure on the space of configurations of Bernoulli bond percolation on $G$ such that each edge of $G$ is open with probability $p \in (0,1)$. Then we consider the probability that $\U(H)$ satisfies $\PP$ under $\mathbb{P}_p$. Let $p_{c,1}(G, H, \PP)$ (resp. $p_{c,2}(G, H, \PP)$) be the infimum of $p$ such that the probability is positive (resp. one). For example, if $\PP$ is being an infinite graph and $H$ is a subgraph consisting of a vertex of $G$ and no edges, then $p_{c,1}(G, H, \PP) = p_c(G)$ and $p_{c,2}(G, H, \PP) = 1$. The main purpose of this paper is to compare $p_{c,1}(G, H, \PP)$ and $p_{c,2}(G, H, \PP)$ with Hammersley's critical probability $p_{c}(G)$. We focus on the cases where $\PP$ is: being a transient subgraph, having finitely many cut points or no cut points, being a recurrent subset, or being connected. $p_{c,1}(G, H, \PP)$ and $p_{c,2}(G, H, \PP)$ depend heavily on the choice of $(G, H, \PP)$. Assume $\PP$ is being a transient subgraph. Then there is a triplet $(G, H, \PP)$ such that $p_{c,1}(G, H, \PP) = 1$. On the other hand, there is a triplet $(G, H, \PP)$ such that $p_{c,2}(G, H, \PP) = 0$.
There is also a triplet $(G, H, \PP)$ such that $p_{c,1}(G, H, \PP) = p_{c,2}(G, H, \PP) = p_{c}(G)$. See Theorem \ref{tr} for details. We also consider the case that $H$ is chosen {\it randomly}, specifically, $H$ is the trace of the simple random walk on $G$. Finally, we refer to related results. Benjamini, H\"aggstr\"om and Schramm \cite{BHS} considered questions of this kind with a different motivation from ours. Their original motivation was to consider the conjecture that for all $d \ge 2$, there is no infinite cluster in Bernoulli percolation on $\mathbb{Z}^d$ with probability one at the critical point. If an infinite cluster of Bernoulli percolation $\mathcal{C}_{\infty}$ satisfies $p_{c}(\mathcal{C}_{\infty}) < 1$ $\mathbb{P}_p$-a.s. for any $p$, then the conjecture holds. A question related to this is considering what kinds of conditions on a subgraph $H^{\prime}$ of $\mathbb{Z}^d$ ensure $p_c(H^{\prime}) < 1$. They introduced the concept of {\it percolating everywhere} (See Definition \ref{pe} for a precise definition.) and considered whether the following claim holds: if we add Bernoulli percolation to a percolating everywhere graph, then the enlarged graph is connected, and moreover, $p_{c}(\textup{the enlarged graph}) < 1$, $\mathbb{P}_p$-a.s. for any $p$. This case can be described using our terminology as follows. $G$ is $\mathbb{Z}^d$, $H$ is a percolating everywhere subgraph, and $\PP$ is being connected with $p_c(\U(H)) < 1$. They showed that if $d = 2$, then $p_{c, 2}(G, H, \PP) = 0$, and conjectured that it also holds for all $d \ge 2$. Recently, Benjamini and Tassion \cite{BeTa} showed the conjecture for all $d \ge 2$ by a method different from \cite{BHS}. In this paper, we will discuss the values $p_{c,i}(G, H, \PP), i = 1,2$, for percolating everywhere subgraphs $H$ of $G$. $G$ is not necessarily assumed to be $\mathbb{Z}^d$, and the result depends on whether $G$ satisfies a certain condition. See Theorem \ref{conn} for details.
\subsection{Framework and Main results} In this paper, a graph is a locally-finite simple graph. A simple graph is an unoriented graph in which neither multiple edges nor self-loops are allowed. $V(X)$ and $E(X)$ denote the sets of vertices and edges of a graph $X$, respectively. Whenever we consider the $d$-dimensional integer lattice $\mathbb{Z}^d$, we mean the nearest-neighbor model. Let $G$ be an infinite connected graph. In this paper, we consider Bernoulli {\it bond} percolation and do not consider site percolation. Denote a configuration of percolation by $\omega = (\omega_e)_{e \in E(G)} \in \{0,1\}^{E(G)}$. We say that an edge $e$ is open if $\omega_e = 1$ and closed otherwise. We say that an event $A \subset \{0,1\}^{E(G)}$ is increasing (resp. decreasing) if the following holds: if $\omega = (\omega_e) \in A$ and $\omega^{\prime}_{e} \ge \omega_{e}$ (resp. $\omega^{\prime}_{e} \le \omega_{e}$) for any $e \in E(G)$, then $\omega^{\prime} \in A$. Let $C_x$ be the open cluster containing $x \in V(G)$. We remark that $\{x\} \subset V(C_x)$ holds. By convention, we often denote the set of vertices $V(C_x)$ by $C_x$. Let $p_{c}(G)$ be Hammersley's critical probability of $G$. That is, for some $x \in V(G)$, \[ p_{c}(G) = \inf\left\{p \in (0,1) : \mathbb{P}_p(|C_x| = +\infty) > 0 \right\}. \] This value does not depend on the choice of $x$. \begin{Def}[Enlargement of subgraph]\label{enl} Let $H$ be a subgraph of $G$. Let $\U(H) = \U_{\omega}(H)$ be a random subgraph of $G$ such that \[ V(\U(H)) := \bigcup_{x \in V(H)} V(C_x) \text{ and } E(\U(H)) := E(H) \cup \left(\bigcup_{x \in V(H)} E(C_x)\right) .\] \end{Def} If $H$ is connected, then $\U(H)$ is also connected. If $H$ consists of a single vertex $x$ with no edges, then $\U(H)$ is identical to $C_x$. In this paper, a {\it property} $\PP$ is a subset of the class of subgraphs of $G$ which is invariant under any graph automorphism of $G$.
We consider a property which is well-defined {\it only} on a class of subgraphs of $G$, and call the class the {\it scope} of the property. For example, being a transient subgraph is defined only for connected subgraphs of $G$, and the scope of being transient is the class of connected subgraphs of $G$. We denote $X \in \PP$ (resp. $X \notin \PP$) if a subgraph $X$ of $G$ is in the scope of $\PP$ and satisfies (resp. does not satisfy) $\PP$. Let $\mathcal{F}$ be the cylindrical $\sigma$-algebra on the configuration space $\{0,1\}^{E(G)}$. \begin{Ass} We assume that an infinite connected graph $G$, a subgraph $H$ of $G$, and a property $\PP$ satisfy the following: \\ (i) $G$, $H$ and $\mathcal{U}(H)$ are in the scope of $\PP$.\\ (ii) $G \in \PP$ and $H \notin \PP$. \\ (iii) The event that $\U(H) \in \PP$ is $\mathcal{F}$-measurable and increasing. If $H$ is chosen according to a probability law $(\Omega^{\prime}, \mathcal{F}^{\prime}, \mathbb{P}^{\prime})$, then we assume that (i) and (ii) above hold $\mathbb{P}^{\prime}$-a.s., and the event $\U(H) \in \PP$ is $\mathcal{F}^{\prime} \otimes \mathcal{F}$-measurable and increasing for $\mathbb{P}^{\prime}$-a.s. $\mathcal{F}^{\prime} \otimes \mathcal{F}$ denotes the product $\sigma$-algebra of $\mathcal{F}^{\prime}$ and $\mathcal{F}$. \end{Ass} In Section 2, we will check that the event $\{\U(H) \in \PP\}$ is $\mathcal{F}$-measurable for those properties, and give an example of $(G, H, \PP)$ such that $\U(H) \in \PP$ is {\it not} $\mathcal{F}$-measurable. \begin{Def}[A certain kind of critical probability] \begin{equation*} p_{c, 1}(G, H, \PP) := \inf\left\{p \in [0,1] : \mathbb{P}_p(\U(H) \in \PP) > 0 \right\}. \end{equation*} \begin{equation*} p_{c, 2}(G, H, \PP) := \inf\left\{p \in [0,1] : \mathbb{P}_p(\U(H) \in \PP) = 1 \right\}. 
\end{equation*} If $H$ obeys a law $\mathbb{P}^{\prime}$, then we define $p_{c, i}(G, H, \PP)$, $i = 1,2$, by replacing $\mathbb{P}_p$ above with the product measure $\mathbb{P}^{\prime} \otimes \mathbb{P}_p$ of $\mathbb{P}^{\prime}$ and $\mathbb{P}_p$. \end{Def} The main purpose of this paper is to compare the values $p_{c, i}(G, H, \PP)$, $i = 1,2$, with $p_c(G)$. If $H$ is a single vertex and $\PP$ is being an infinite graph, then the definitions of $p_{c, 1}(G, H, \PP)$ and $p_c(G)$ are identical and, hence, $p_{c,1}(G, H, \PP) = p_c(G)$. It is easy to see that $p_{c,2}(G, H, \PP) = 1$. In this paper, we focus on each of the following properties: (i) being a transient subgraph, (ii) having finitely many cut points or having no cut points, (iii) being a recurrent subset, and (iv) being a connected subgraph. The scopes for properties (i) and (ii) are connected subgraphs of $G$, and the scopes for (iii) and (iv) are all subgraphs. We now state our main results informally. The following four assertions deal with the four properties (i) - (iv) above respectively. Some of the assertions are special cases of full and precise versions of them appearing in Sections 3 to 6. \begin{Thm}\label{tr} Let $\PP$ be being a transient graph. Then, \\ (i-a) There is a pair $(G, H)$ such that $p_{c,1}(G,H,\PP) = 1$.\\ (i-b) There is a pair $(G, H)$ such that $p_{c,2}(G,H,\PP) = 0$.\\ (ii) Let $G = \mathbb{Z}^d, d \ge 3$. Then \\ (ii-a) For any $\epsilon > 0$, there exists a subgraph $H_{\epsilon}$ such that $p_{c,2}(\mathbb{Z}^d,H_{\epsilon},\PP) \le \epsilon$. \\ (ii-b) If $H$ is the trace of the simple random walk on $\mathbb{Z}^d$, then \[ p_{c,1}(\mathbb{Z}^d, H, \PP) = p_{c,2}(\mathbb{Z}^d, H, \PP) = p_c(\mathbb{Z}^d). \] (iii) Let $G$ be an infinite tree. Then \\ (iii-a) $p_{c,1}(G, H, \PP) = p_c(G)$ for any $H$. \\ (iii-b) There is a subgraph $H$ such that $p_{c,2}(G, H, \PP) = p_c(G)$.\\ (iii-c) There is a subgraph $H$ such that $p_{c,2}(G, H, \PP) = 1$. 
\end{Thm} We now consider the number of cut points. Let $P^{x, y}$ be the law of two independent simple random walks on $G$ which start at $x$ and $y$, respectively. \begin{Thm} Let $G = \mathbb{Z}^d, d \ge 5$. Let $H$ be the trace of the two-sided simple random walk on $\mathbb{Z}^d$. Let $P^{0,0} \otimes \mathbb{P}_p$ be the product measure of $P^{0,0}$ and $\mathbb{P}_p$. Then,\\ (i) If $p < p_c(G)$, then $\U(H)$ has infinitely many cut points $P^{0,0} \otimes \mathbb{P}_p$-a.s.\\ (ii) If $p > p_c(G)$, then $\U(H)$ has no cut points $P^{0,0} \otimes \mathbb{P}_p$-a.s. \\ In particular, if $\PP$ is having finitely many cut points, or having no cut points, then \[ p_{c,1}(\mathbb{Z}^d, H, \PP) = p_{c,2}(\mathbb{Z}^d, H, \PP) = p_c(\mathbb{Z}^d). \] \end{Thm} The term cut points above is similar to the notion of a cut point of a random walk. See Definition \ref{cut-def} for a precise definition. It is known that the trace of the two-sided simple random walk on $\mathbb{Z}^d$ has infinitely many cut points $P^{0,0}$-a.s. (Cf. Lawler \cite[Theorem 3.5.1]{La}) The result above means that in the subcritical regime, there remain infinitely many cut points that are not bridged by open bonds of percolation. Now we consider the case that $\PP$ is being a recurrent subset. In this paper, we regard this as a subgraph and consider the induced subgraph of the subset. \begin{Thm} Let $\PP$ be being a recurrent subset. Then, \\ (i-a) There is a pair $(G, H)$ such that $p_{c,1}(G,H,\PP) = 1$.\\ (i-b) There is a pair $(G, H)$ such that $p_{c,2}(G,H,\PP) = 0$.\\ (ii) Let $G = \mathbb{Z}^d$ and $H$ be the trace of the simple random walk on $\mathbb{Z}^d$. Then \\ (ii-a) \[ p_{c,1}(\mathbb{Z}^d, H, \PP) = p_{c,2}(\mathbb{Z}^d, H, \PP) = p_c(\mathbb{Z}^d), \ d \ge 5. \] (ii-b) \[ p_{c,1}(\mathbb{Z}^d, H, \PP) = p_{c,2}(\mathbb{Z}^d, H, \PP) = 0, \ d = 3,4. \] (iii) Let $G$ be an infinite tree. Then $p_{c,1}(G, H, \PP) = 1$ for any $H$. 
\end{Thm} The following concerns the connectedness of the enlargement of a percolating everywhere subgraph. \begin{Thm}\label{conn} Let $\PP$ be being connected and $H$ be a percolating everywhere subgraph of an infinite connected graph $G$. \\ (i) Assume $G$ satisfies the following: for any infinite subsets $A, B \subset V(G)$ satisfying $V(G) = A \cup B$ and $A \cap B = \emptyset$, the number of edges connecting a vertex of $A$ and a vertex of $B$ is infinite. Then \[ p_{c,1}(G, H, \PP) = p_{c,2}(G, H, \PP).\] (ii) Otherwise, there is a percolating everywhere subgraph $H$ such that \[ p_{c,1}(G, H, \PP) = 0 \ \text{ and } \ p_{c,2}(G, H, \PP) = 1.\] \end{Thm} The remainder of this paper is organized as follows. Section 2 states some preliminary results including the measurability of $\{\U(H) \in \PP\}$. We consider the case that $\PP$ is being a transient graph, the case that $\PP$ is a property concerning the number of cut points of graphs, the case that $\PP$ is being a recurrent subset, and the case that $\PP$ is being connected and $H$ is percolating everywhere, in Sections 3 to 6, respectively. \section{Preliminaries} This section consists of three subsections. First we give a lemma estimating $p_{c, i}(G, H, \PP)$. Then we state some results concerning random walk and percolation. Finally we discuss the measurability of the event $\{\U(H) \in \PP\}$. \subsection{A lemma} Roughly speaking, we will show below that, under a certain condition, $p_{c, i}(G, H, \PP)$ can be made arbitrarily small if there is a ``suitable" subgraph $H$. Let $\mathcal{N}(v)$ be the set of neighbors of a vertex $v$. \begin{Lem}\label{epsilon} Fix an infinite connected graph $G$ and a property $\PP$ for subgraphs of $G$. Let $i = 1, 2$. Assume that there is a subgraph $H$ of $G$ such that \[ p_{c, i}(G, H, \PP) < 1, \ \text{ and } \] \begin{equation}\label{near} v \in V(H) \text{ or } \mathcal{N}(v) \subset V(H), \ \text{ for any } v \in V(G).
\end{equation} Then for any $\epsilon > 0$ there is a subgraph $H_{\epsilon}$ such that $p_{c, i}(G, H_{\epsilon}, \PP) \le \epsilon$. \end{Lem} \begin{proof} We show this assertion for $i=1$. Let $\Phi : \{0,1\}^{E(G)} \times \{0,1\}^{E(G)} \to \{0,1\}^{E(G)}$ be the map defined by \[ \Phi(\omega_{1}, \omega_{2}) = \omega_{1} \vee \omega_{2}.\] (Here and henceforth $\omega_{1} \vee \omega_{2}$ denotes the coordinatewise maximum of $\omega_1$ and $\omega_2$.) Then the push-forward measure of the product measure $\mathbb{P}_{q_1} \otimes \mathbb{P}_{q_2}$ on $\{0,1\}^{E(G)} \times \{0,1\}^{E(G)}$ by $\Phi$ is $\mathbb{P}_{q_1+q_2 - q_1q_2}$. Since $p_{c, 1}(G, H, \PP) < 1$, we have that for any $q_2 > 0$, there is $q_1 < p_{c, 1}(G, H, \PP)$ such that \[ q_1+q_2 - q_1q_2 > p_{c, 1}(G, H, \PP).\] It is easy to see that \[ \U_{\omega_{2}}(\U_{\omega_{1}}(H)) \subset \U_{\omega_{1} \vee \omega_{2}}(H).\] By (\ref{near}), \[ \U_{\omega_{2}}\left(\U_{\omega_{1}}(H)\right) = \U_{\omega_{1} \vee \omega_{2}}(H). \] Therefore, \[ \mathbb{P}_{q_1} \otimes \mathbb{P}_{q_2} \left( \U_{\omega_{2}}(\U_{\omega_{1}}(H)) \in \PP \right) = \mathbb{P}_{q_1+q_2-q_1q_2}\left( \U(H) \in \PP \right) > 0. \] Since $q_1 < p_{c, 1}(G, H, \PP)$, there is a configuration $\omega_{1}$ such that $\U_{\omega_{1}}(H) \notin \PP$ and \[ \mathbb{P}_{q_2}\left( \U_{\omega_{2}}(\U_{\omega_{1}}(H)) \in \PP \right) > 0.\] Hence \[ p_{c, 1}(G, \U_{\omega_{1}}(H), \PP) \le q_2.\] The case $i=2$ can be shown in the same manner. \end{proof} \subsection{Random walk and percolation} We now define recurrent and transient subsets of $G$ by following Lawler and Limic \cite[Section 6.5]{LL}. Here and henceforth $((S_n)_{n \ge 0}, (P^x)_{x \in V(G)})$ denotes the simple random walk on $G$. We regard a recurrent subset as a subgraph and consider the induced subgraph of the recurrent subset.
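The push-forward identity used in the proof of Lemma \ref{epsilon} reduces, edge by edge, to the elementary fact that the maximum of independent Bernoulli($q_1$) and Bernoulli($q_2$) variables is Bernoulli($q_1 + q_2 - q_1 q_2$). The following minimal Python sketch, which is not part of the argument (the function name is ours), verifies this single-edge identity by enumerating the four joint outcomes:

```python
from itertools import product

def union_prob(q1, q2):
    # Probability that the coordinatewise maximum of independent
    # Bernoulli(q1) and Bernoulli(q2) bits equals 1, computed by
    # enumerating the four joint outcomes.
    total = 0.0
    for b1, b2 in product([0, 1], repeat=2):
        weight = (q1 if b1 else 1 - q1) * (q2 if b2 else 1 - q2)
        if max(b1, b2) == 1:
            total += weight
    return total

# The push-forward of P_{q1} x P_{q2} under (w1, w2) -> w1 v w2
# is P_{q1 + q2 - q1*q2}, edge by edge:
for q1, q2 in [(0.3, 0.5), (0.1, 0.9), (0.25, 0.25)]:
    assert abs(union_prob(q1, q2) - (q1 + q2 - q1 * q2)) < 1e-12
```

Since the edges in $E(G)$ are independent under both measures, this edgewise identity determines the push-forward of the whole product measure.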
\begin{Def}[recurrent subset]\label{Def-recur} We say that a subset $A$ of $V(G)$ is a {\it recurrent subset} if for some $x \in V(G)$ \[ P^x\left(S_n \in A \text{ i.o. } n\right) > 0. \] Otherwise, $A$ is called a {\it transient subset}. This definition does not depend on the choice of the vertex $x \in V(G)$. \end{Def} For a graph $X$, we let $d_X(x, y)$ be the graph distance between $x$ and $y$ in $X$ and \[ B_{X}(x, n) := \{y \in V(X) : d_{X}(x, y) \le n\}.\] We now briefly recall the notion of a Cayley graph. Let $\Gamma$ be a finitely generated countable group and $\mathcal{S}$ be a symmetric finite generating subset of $\Gamma$ which does not contain the unit element. Then the {\it Cayley graph of $\Gamma$ with respect to $\mathcal{S}$} is the graph such that the set of vertices is $\Gamma$ and the set of edges is $\{\{x, y\} \subset \Gamma : x^{-1}y \in \mathcal{S}\}$. This graph depends on the choice of $\mathcal{S}$. In this paper, all results concerning Cayley graphs of groups do not depend on the choice of $\mathcal{S}$. We say that a graph $G$ has the {\it degree of growth} $d \in (0, +\infty)$ if for any vertex $x$ of $G$, \[ 0 < \liminf_{n \to \infty} \frac{|B_G (x,n)|}{n^d} \le \limsup_{n \to \infty} \frac{|B_G (x,n)|}{n^d} < +\infty. \] \begin{Lem}\label{Recur-cluster} Let $G$ be a Cayley graph of a finitely generated group with the degree of growth $d$. Let $o$ be the unit element of the finitely generated group. Assume $p_c(G) < 1$ and $p \in (p_c(G), 1)$. Then,\\ (i) There is a unique infinite cluster $\mathcal{C}_{\infty}$, $\mathbb{P}_p$-a.s. \\ (ii) $\mathcal{C}_{\infty}$ is a recurrent subset of $G$, that is, \begin{equation}\label{recur-posi} P^{o}\left(S_n \in \mathcal{C}_{\infty} \text{ i.o. } n\right) > 0, \ \text{ $\mathbb{P}_p$-a.s.} \end{equation} (iii) \begin{equation}\label{recur-one} P^{o}\left(S_n \in \mathcal{C}_{\infty} \text{ i.o.
} n\right) = 1, \ \text{ $\mathbb{P}_p$-a.s.} \end{equation} \end{Lem} Let $T_A$ be the first hitting time of $(S_n)_n$ to a subset $A \subset V(G)$. \begin{proof} By Woess \cite[Theorem 12.2 and Proposition 12.4]{Wo}, Cayley graphs of finitely generated groups with polynomial growth are amenable. Therefore, by Bollob\'as and Riordan \cite[Theorem 4 in Chapter 5]{BR}, the number of infinite clusters is $0$ $\mathbb{P}_p$-a.s. or it is $1$ $\mathbb{P}_p$-a.s. Since $p > p_c(G)$, the latter holds. Thus we have (i). We will show (ii). Let $P^{o} \otimes \mathbb{P}_p$ be the product measure of $P^{o}$ and $\mathbb{P}_p$. \[ P^{o} \otimes \mathbb{P}_p\left(S_{n} \in \mathcal{C}_{\infty} \text{ i.o. } n\right) = \lim_{N \to \infty} P^{o} \otimes \mathbb{P}_p\left(\bigcup_{n \ge N}\{S_{n} \in \mathcal{C}_{\infty}\}\right). \] Using the shift invariance of Bernoulli percolation and the Markov property of the simple random walk, \[ P^{o} \otimes \mathbb{P}_p\left(\bigcup_{n \ge N}\{S_{n} \in \mathcal{C}_{\infty}\}\right) = P^{o} \otimes \mathbb{P}_p\left(\bigcup_{n \ge 0} \left\{S_{N}^{-1} \cdot S_{N + n} \in \mathcal{C}_{\infty}\right\}\right)\] \[ = P^{o} \otimes \mathbb{P}_p\left(\bigcup_{n \ge 0} \{S_{n} \in \mathcal{C}_{\infty}\}\right). \] Here $S_{N}^{-1}$ is the inverse of the group element $S_N$. Hence, \[ P^{o} \otimes \mathbb{P}_p\left(S_{n} \in \mathcal{C}_{\infty} \text{ i.o. } n\right) = P^{o} \otimes \mathbb{P}_p\left(\bigcup_{n \ge 0} \{S_{n} \in \mathcal{C}_{\infty}\}\right). \] Since \[ \{S_{n} \in \mathcal{C}_{\infty} \text{ i.o. } n\} \subset \bigcup_{n \ge 0} \{S_{n} \in \mathcal{C}_{\infty}\},\] we have \[ P^{o}\left(S_{n} \in \mathcal{C}_{\infty} \text{ i.o. } n\right) = P^{o}\left(\bigcup_{n \ge 0} \{S_{n} \in \mathcal{C}_{\infty}\}\right) = P^{o}\left(T_{\mathcal{C}_{\infty}} < +\infty\right) > 0, \mathbb{P}_p\text{-a.s.} \] Thus we have (\ref{recur-posi}). By \cite[Corollary 25.10]{Wo}, all bounded harmonic functions on $G$ are constant.
By following the proof of \cite[Lemma 6.5.7]{LL}, we have (\ref{recur-one}). \end{proof} \subsection{Measurability of $\U(H) \in \PP$} Recall that $\mathcal{F}$ is the cylindrical $\sigma$-algebra of $\{0,1\}^{E(G)}$. First, we consider the case that $H$ is a non-random subgraph. \begin{Lem} (i) Let $H$ be a recurrent subgraph of a transient graph $G$. Then the event that $\U(H)$ is a transient subgraph of $G$ is $\mathcal{F}$-measurable. \\ (ii) Let $H$ be a recurrent subgraph of a transient graph $G$. Then the number of cut points of $\U(H)$ is an $\mathcal{F}$-measurable function. See Definition \ref{cut-def} in Section 4 for the definition of cut points. \\ (iii) Let $H$ be a transient subset of a transient graph $G$. Then the event that $\U(H)$ is a recurrent subset is $\mathcal{F}$-measurable. \\ (iv) Let $H$ be a non-connected subgraph of an infinite connected graph $G$. Then the event that $\U(H)$ is connected is $\mathcal{F}$-measurable. \end{Lem} \begin{proof} (i) Let $R_{\text{eff}}\left(x, \U(H) \setminus B_{\U(H)}(x,n)\right)$ be the effective resistance from $x$ to the outside of $B_{\U(H)}(x,n)$. It suffices to show that \\ $R_{\text{eff}}\left(x, \U(H) \setminus B_{\U(H)}(x,n)\right)$ is an $\mathcal{F}$-measurable function for each $n$. Since $\U(H)$ is a connected subgraph of $G$, $B_{\U(H)}(x,n)$ is contained in $B_{G}(x,n)$. Therefore, $R_{\text{eff}}\left(x, \U(H) \setminus B_{\U(H)}(x,n)\right)$ is determined by configurations in $B_{G}(x,n)$ and hence is $\mathcal{F}$-measurable. (ii) It suffices to show that for any $z \in V(G)$, the event that $z \in \U(H)$ and $z$ is a cut point of $\U(H)$ is $\mathcal{F}$-measurable. $z$ is a cut point of $\U(H)$ if and only if $z$ is a cut point of $\U(H \cap B_{G}(z, n))$ for any $n$. (iii) By Fubini's theorem, it suffices to see that $\{S_n \in \U(H)\}$ is $\mathcal{F}_{\text{SRW}} \otimes \mathcal{F}$-measurable. 
This follows from \[ \left\{S_n \in \U(H)\right\} = \bigcup_{y \in V(G)} \left\{S_n = y\right\} \times \left\{y \text{ is connected to } H \text{ by an open path}\right\}.\] (iv) If $x, y \in V(\U(H))$ are connected in $\U(H)$, then there is $n$ such that $x$ and $y$ are in a connected component of $\U(H) \cap B_{G}(x,n)$. This event is determined by configurations of edges in $B_{G}(x,n+1)$. \end{proof} We now consider the case that $H$ is a random subgraph of $G$. Let $\mathcal{F}_{\text{SRW}}$ be the $\sigma$-algebra on the path space defined by the simple random walk on $G$ and $\mathcal{F}_{\text{SRW}} \otimes \mathcal{F}$ be the product $\sigma$-algebra of $\mathcal{F}_{\text{SRW}}$ and $\mathcal{F}$. The following follows easily from the fact that the event that the trace of the simple random walk coincides with a given connected subgraph $H$ is $\mathcal{F}_{\text{SRW}}$-measurable. \begin{Lem} Assume that the event that $\U(H)$ satisfies $\PP$ is $\mathcal{F}$-measurable for any infinite connected subgraph $H$. Let $H$ be the trace of the simple random walk. Then the event that $\U(H)$ satisfies $\PP$ is $\mathcal{F}_{\text{SRW}} \otimes \mathcal{F}$-measurable. \end{Lem} \begin{Exa}[A triplet $(G, H, \PP)$ such that the event $\{\U(H) \in \PP\}$ is not measurable] We first show that there is a non-measurable subset of $\{0,1\}^{\mathbb{N}}$ with respect to the cylindrical $\sigma$-algebra of $\{0,1\}^{\mathbb{N}}$. Here and henceforth, $\mathbb{N}$ denotes the set of natural numbers. Let $\phi : \{0,1\}^{\mathbb{N}} \to \{0,1\}^{\mathbb{N}}$ be the one-sided shift and $A$ be an uncountable subset of $\{0,1\}^{\mathbb{N}}$ such that (i) \[ \bigcup_{n \ge 0} \phi^{-n}(A) = \{0,1\}^{\mathbb{N}} \setminus \left(\bigcup_{n \ge 1} \{x \in \{0,1\}^{\mathbb{N}} : \phi^n (x) = x\}\right),\] and (ii) for any $x, y \in A$ and any $n \ge 1$, $y \ne \phi^n (x)$. Assume that $A$ is measurable.
Let $\ell$ be the product measure of the probability measure $\mu$ on $\{0,1\}$ with $\mu(\{0\}) = \mu(\{1\}) = 1/2$. Since $\phi^{-i}(A) \cap \phi^{-j}(A) = \emptyset$ for $i \ne j$, \[ \ell\left(\bigcup_{i \ge 0} \phi^{-i}(A)\right) = \sum_{i \ge 0} \ell(\phi^{-i}(A)). \] Since $\cup_{n \ge 1} \{x \in \{0,1\}^{\mathbb{N}} : \phi^n (x) = x\}$ is countable, $\ell(\cup_{i \ge 0} \phi^{-i}(A)) = 1$. Since $\phi$ preserves $\ell$, we see that \[ \ell(\phi^{-i}(A)) = \ell(A)\] for any $i$, and \[ \sum_{i \ge 0} \ell(\phi^{-i}(A)) = 0 \text{ or } +\infty.\] But this is a contradiction. Hence $A$ is not measurable. Let $G$ be the connected subgraph of $\mathbb{Z}^2$ whose vertices are \[ \{(x, 0) : x \ge -2\} \cup \{(y, 1) : y \ge -1\} \cup \{(-1, -1)\}.\] Then any graph automorphism of $G$ is the identity map on the vertices of $G$. Let $H$ be the connected subgraph of $G$ whose vertices are \[ \{(x, 0) : x \ge -2\} \cup \{(-1, 1)\} \cup \{(-1, -1)\}.\] Then \[ E(G) \setminus E(H) = \{\{(n, 0), (n, 1)\} : n \in \mathbb{N}\}.\] Let $\widetilde\omega$ be the projection of $\omega \in \{0,1\}^{E(G)}$ to $\{0,1\}^{E(G) \setminus E(H)}$. Regard $E(G) \setminus E(H)$ as $\mathbb{N}$. Let $\PP$ be the property that a graph is isomorphic to a graph in the class $\{\U_{\omega}(H) : \widetilde\omega \in A\}$. Then \[ \{\U(H) \in \PP\} = A \times \{0,1\}^{E(H)}.\] This event is not measurable with respect to the cylindrical $\sigma$-algebra of $\{0,1\}^{E(G)}$.
\end{Exa} \begin{figure}[htbp] \centering \setlength\unitlength{0.8mm} \begin{picture}(70, 30) \put(0, 15){\line(10, 0){10}} \put(10, 15){\line(0, 10){10}\line(10, 0){10}} \put(10, 5){\line(0, 10){10}} \put(20, 15){\dashbox(0, 10)} \put(20, 15){\line(10, 0){10}} \put(20, 5){\line(0, 10){10}} \put(30, 15){\dashbox(0, 10)} \put(30, 15){\line(10, 0){10}} \put(30, 5){\line(0, 10){10}} \put(40, 15){\dashbox(0, 10)} \put(40, 15){\line(10, 0){10}} \put(40, 5){\line(0, 10){10}} \put(50, 15){\dashbox(0, 10)} \put(50, 15){\line(10, 0){10}} \put(50, 5){\line(0, 10){10}} \end{picture} \caption{\small Graph of $H$. The dashed lines are $E(G) \setminus E(H)$.} \end{figure} \section{$\PP$ is being a transient graph} In this section, we consider the case that $\PP$ is being a transient graph and assume that $H$ is connected. \subsection{The case that $H$ is a fixed subgraph} \begin{Thm}[Extreme cases]\label{extre-graph} (i) There is a graph $G$ such that for any recurrent subgraph $H$ of $G$, \[ 0 < p_{c}(G) < p_{c, 1}(G, H, \PP) = 1.\] (ii) There is a graph $G$ such that for any infinite recurrent subgraph $H$ of $G$, \[ p_{c,2}(G, H, \PP) = 0.\] \end{Thm} We remark that if $H$ is finite, then $p_{c,2}(G, H, \PP) = 1$. \begin{proof} (i) Let $G$ be the graph which is constructed as follows: take $\mathbb{Z}^{2}$ and attach a transient tree $T$ such that $p_{c}(T) = 1$ to the origin of $\mathbb{Z}^{2}$. This appears in H\"aggstr\"om and Mossel \cite[Section 6]{HM}. Then for any recurrent subgraph $H$, \[ p_{c}(G) < p_{c, 1}(G, H, \PP) = 1.\] $\U(H)$ is the union of $\U(H \cap \mathbb{Z}^{2})$ and $\U(H \cap T)$. $\U(H \cap \mathbb{Z}^{2})$ is recurrent. Let $p < 1 = p_c(T)$. Then $\mathbb{P}_p$-a.s., $\U(H \cap T)$ is the graph obtained by adding at most countably many finite graphs to $H \cap T$. Hence $\U(H \cap T)$ is also recurrent, $\mathbb{P}_p$-a.s. Since the intersection of $\U(H \cap T)$ and $\U(H \cap \mathbb{Z}^{2})$ is the origin, $\U(H)$ is recurrent, $\mathbb{P}_p$-a.s.
(ii) Let $G$ be an infinite connected line-graph in Benjamini and Gurel-Gurevich \cite[Section 2]{BGG}. In their paper, it is given as a graph with multiple edges, but we can obtain a simple graph by adding a new vertex on each edge. Let $H$ be an infinite connected recurrent subgraph of $G$. Then $\mathbb{N} \subset V(H)$. Let $p > 0$. If the number of edges between $k$ and $k+1$ is $2k^3$, then \[ \mathbb{P}_p\left(|\{\text{open two-edge paths connecting $k$ and $k+1$}\}| > k^2 \right) \to 1, \, k \to \infty, \text{ exponentially fast.} \] Hence \[ \mathbb{P}_p\left(\bigcap_{k \ge 1} \left\{ |\{\text{open two-edge paths connecting $k$ and $k+1$}\}| > k^2 \right\} \right) > 0. \] By this and the effective-resistance criterion for recurrence/transience (see \cite[Theorem 2.12]{Wo}, for example), \[ \mathbb{P}_p(\U(H) \text{ is transient}) > 0.\] Since $H$ is infinite, the 0-1 law gives \[ \mathbb{P}_p(\U(H) \text{ is transient}) = 1.\] \end{proof} We give rough figures of the two graphs in the proof above. \begin{figure}[htbp] \begin{minipage}{0.45\hsize} \centering \includegraphics[width = 6cm, height = 4cm, bb = 0 0 800 600]{enlarge-fig-1.eps} \caption{\small Graph in the proof of Theorem \ref{extre-graph} (i)} \end{minipage} \begin{minipage}{0.45\hsize} \centering \includegraphics[width = 6cm, height = 4cm, bb = 0 0 800 600]{enlarge-fig-2.eps} \caption{\small Graph in the proof of Theorem \ref{extre-graph} (ii)} \end{minipage} \end{figure} The proof of (ii) above heavily depends on the fact that $G$ has unbounded degrees. Now we consider the case that $G$ has bounded degrees. \begin{Thm}\label{epsilon-thm} Let $G = \mathbb{Z}^d, d \ge 3$. Then for any $\epsilon > 0$ there is a recurrent subgraph $H_{\epsilon}$ such that $p_{c, 2}(G, H_{\epsilon}, \PP) \le \epsilon$. \end{Thm} \begin{proof} Let $H$ be a recurrent subgraph of $\mathbb{Z}^d$ such that $V(H) = V(G)$. By \cite[(2.21)]{Wo}, such an $H$ exists.
If $p > p_{c}(\mathbb{Z}^d)$, then $\U(H)$ contains the unique infinite open cluster a.s. By Grimmett, Kesten and Zhang \cite{GKZ}, $\U(H)$ is transient. Hence \[ p_{c, 2}(G, H, \PP) \le p_{c}(\mathbb{Z}^d) < 1.\] Now the assertion follows from this and Lemma \ref{epsilon}. \end{proof} In the proof of Theorem \ref{epsilon-thm}, we choose a subgraph $H$ such that $V(H) = V(G)$ and apply Lemma \ref{epsilon}. However, if $H$ is a connected proper subgraph of an infinite tree $T$ with $\deg(x) \ge 2, \forall x \in V(T)$, then (\ref{near}) in Lemma \ref{epsilon} fails. \begin{Thm}\label{tree-graph} Let $T$ be an infinite transient tree. Then \\ (i) If $T^{\prime}$ is a recurrent subtree of $T$, then \[ p_{c, 1}(T, T^{\prime}, \PP) = p_{c}(T).\] (ii) If $T^{\prime}$ is an infinite recurrent subtree of $T$, then \[ p_{c,2}(T, T^{\prime}, \PP) = \sup\left\{p_{c}(H) : H \text{ is a transient subtree in } T \text{ and } E(H) \cap E(T^{\prime}) = \emptyset \right\}. \] \end{Thm} \begin{proof} (i) By Peres \cite[Exercise 14.7]{P}, if $p > p_{c}(T)$, then \[ \mathbb{P}_p(C_{v} \text{ is transient}) > 0 \] for any $v \in T$. Since $C_{v} \subset \U(T^{\prime})$ for any $v \in T$, \[ \mathbb{P}_p(\U(T^{\prime}) \text{ is transient}) > 0.\] Therefore, $p_{c, 1}(T, T^{\prime}, \PP) \le p_{c}(T)$. If $p < p_{c}(T)$, then $\mathbb{P}_p$-a.s., $\U(T^{\prime})$ is an infinite tree obtained by attaching at most countably many finite trees to $T^{\prime}$. Hence, $\U(T^{\prime})$ is also a recurrent graph $\mathbb{P}_p$-a.s. Therefore, $p_{c, 1}(T, T^{\prime}, \PP) \ge p_{c}(T)$. (ii) Assume that there is a transient subtree $H$ of $T$ such that $E(H) \cap E(T^{\prime}) = \emptyset$ and $p < p_c(H)$. There is a finite path from $o$ to a vertex of $H$. Since $H$ is transient, the probability that the random walk starts at $o$, then reaches a vertex of $H$, and remains in $H$ after hitting $H$ is positive. Hence the probability that $\U(T^{\prime})$ is still recurrent is positive.
Hence $p \le p_{c,2}(T, T^{\prime}, \PP)$. Assume that \[ p > \sup\left\{p_{c}(H) : H \text{ is a transient subtree in } T \text{ and } E(H) \cap E(T^{\prime}) = \emptyset \right\}.\] Since there are infinitely many transient connected subtrees $H$ of $T$ such that $E(H) \cap E(T^{\prime}) = \emptyset$, $\U(T^{\prime})$ contains at least one infinite transient cluster in some such $H$. \end{proof} Hereafter $\mathbb{T}_d$ denotes the $d$-regular tree, $d \ge 2$. By Theorem \ref{tree-graph}, we obtain the following. \begin{Cor} Let $T = \mathbb{T}_d, d \ge 3$ and $T^{\prime}$ be a recurrent subgraph. Then \[ p_{c, 1}(T, T^{\prime}, \PP) = p_{c, 2}(T, T^{\prime}, \PP) = p_c(T).\] \end{Cor} The value $p_{c, 2}$ depends on the choice of the subgraph $T^{\prime}$, as the following example shows. \begin{Exa} Let $T$ be the graph obtained by attaching a vertex of $\mathbb{T}_3$ to a vertex of $\mathbb{T}_4$. \\ (i) If $T^{\prime}$ is a subgraph of $\mathbb{T}_3$ which is isomorphic to $L = \left(\mathbb{N}, \{\{n, n+1\} : n \in \mathbb{N}\}\right)$, then \[ p_{c, 2}(T, T^{\prime}, \PP) = p_c(\mathbb{T}_3) = \frac{1}{2}.\] (ii) If $T^{\prime}$ is a subgraph of $\mathbb{T}_4$ which is isomorphic to $L$, then \[ p_{c, 2}(T, T^{\prime}, \PP) = p_c(\mathbb{T}_4) = \frac{1}{3}.\] \end{Exa} We give a short remark about stability with respect to rough isometry. Let $G$ be the graph obtained by attaching one vertex of the triangular lattice to a vertex of the $d$-regular tree $\mathbb{T}_d$. If $d = 3$, then \[ p_c(G) = p_c(\text{triangular lattice}) = 2 \sin \left(\frac{\pi}{18}\right) < \frac{1}{2} \] and \[ p_{c,1}(G, H, \PP) = p_c(\mathbb{T}_3) = \frac{1}{2}. \] If $d$ is large, then \[ p_c(G) = p_{c, 1}(G, H, \PP) = \frac{1}{d-1}.
\] As this remark shows, there is a pair $(G, H)$ such that \[ p_c(G) < p_{c,1}(G, H, \PP) < 1.\] We do not know whether there is a pair $(G, H)$ such that \[ 0 < p_{c,1}(G, H, \PP) < p_c(G).\] \subsection{The case that $H$ is the trace of the simple random walk} \begin{Thm}\label{trace3} Let $G$ be a Cayley graph of a finitely generated countable group with the degree of growth $d \ge 3$. Let $H$ be the trace of the simple random walk on $G$. Then \[ p_{c, 1}(G, H, \PP) \ge p_c(G).\] \end{Thm} Hereafter $E^{\mu}$ denotes the expectation with respect to a probability measure $\mu$. \begin{proof} Let $p < p_c(G)$. We show that the volume growth of $\U(H)$ is (at most) second order. We assume that the simple random walk starts at a vertex $o$. Since $B_{\U(H)}(o, n)$ is contained in $B_{G}(o, n)$, \[ B_{\U(H)}(o, n) \subset \bigcup_{x \in V(H) \cap B_{G}(o, n)} C_x. \] By Mensikov, Molchanov and Sidrenko \cite{MMS}, \[ E^{\mathbb{P}_p}\left[|C_o|\right] < +\infty, \ \ p < p_c(G).\] Therefore, \[ E^{P^o \otimes \mathbb{P}_p}\left[ |B_{\U(H)}(o, n)| \right] \le E^{\mathbb{P}_p}[|C_o|] E^{P^o}\left[\left|V(H) \cap B_{G}(o, n)\right|\right]. \] Using Hebisch and Saloff-Coste \cite[Theorem 5.1]{HSC} and summation by parts, \begin{align*} E^{P^o}\left[\left|V(H) \cap B_{G}(o, n)\right|\right] &\le \sum_{x \in B_{G}(o, n)} \sum_{m \ge 0} P^{o}(S_m = x) \\ &= O\left(\sum_{x \in B_{G}(o, n)} d_G(o,x)^{2-d}\right) = O(n^2). \end{align*} Using this and Fatou's lemma, \[ E^{P^o \otimes \mathbb{P}_p}\left[\liminf_{n \to \infty} \frac{| B_{\U(H)}(o, n) |}{n^2} \right] < +\infty. \] Hence \[ \liminf_{n \to \infty} \frac{| B_{\U(H)}(o, n) |}{n^2} < +\infty, \ \ P^o \otimes \mathbb{P}_p\text{-a.s.} \] The assertion follows from this and \cite[Lemma 3.12]{Wo}. \end{proof} Let $G = \mathbb{Z}^d$, $d \ge 3$, and $H$ be the trace of the simple random walk on $G$.
Then, by Lemma \ref{Recur-cluster} and the transience of the infinite cluster \cite{GKZ}, \[ p_{c, 2}(G, H, \PP) \le p_c(G).\] By this and Theorem \ref{trace3}, we obtain the following. \begin{Cor} Let $G = \mathbb{Z}^d$, $d \ge 3$, and $H$ be the trace of the simple random walk on $G$. Then \[ p_{c, 1}(G, H, \PP) = p_{c, 2}(G, H, \PP) = p_{c}(G). \] \end{Cor} \section{$\PP$ is a property concerning cut points} In this section, we assume that $G$ is a transient graph and $H$ is a recurrent subgraph of $G$. \begin{Def}[cut point]\label{cut-def} We say that a vertex $x \in V(G)$ is a {\it cut point} if removing an edge $e$ containing $x$ splits the graph into two {\it infinite} connected components. \end{Def} The graph appearing in the proof of Theorem \ref{extre-graph} (ii) (see Figure 3) has a vertex whose removal splits the graph into two connected components. However, it is {\it not} a cut point in the sense of the above definition. \begin{Thm}\label{Thm-cut} Let $G$ be a Cayley graph of a finitely generated countable group with the degree of growth $d \ge 5$. Let $H$ be the trace of the two-sided simple random walk on $G$. If $p < p_c(G)$, then $\U(H)$ has infinitely many cut points, $P^{o,o} \otimes \mathbb{P}_p$-a.s. \end{Thm} Let \[ S^i (A) := \{S^i_n : n \in A\} \text{ for } A \subset \mathbb{N}.\] \begin{proof} Fix a vertex $o$. First we will show that \begin{equation}\label{cut-1} P^{o,o} \otimes \mathbb{P}_p\left(\text{$o$ is a cut point of $\U(H)$}\right) > 0. \end{equation} We give a rough sketch of the proof of (\ref{cut-1}). First, we show that there exists a vertex $z$ such that, with positive probability, two simple random walks starting at $o$ and $z$, respectively, do not intersect. Then we ``make" all the edges in a large box closed and show that, with positive probability, the two random walks do not return to the box. Finally we choose a path connecting the two traces in a suitable way.
Using \cite{MMS}, $d \ge 5$, and \cite[Theorem 5.1]{HSC}, \[ \sum_{i, j \ge 0} P^{o,o} \otimes \mathbb{P}_p\left(\U(\{S^1_{i}\}) \cap \U(\{S^2_{j}\}) \ne \emptyset\right) = \sum_{x \in V(G)} \sum_{i, j \ge 0} P^{o}(S_{i+j} = x) \mathbb{P}_p(x \in C_o) < +\infty. \] Hence for large $N$ \[ P^{o,o} \otimes \mathbb{P}_p\left(\U\left(S^{1}\left([N, +\infty)\right)\right) \cap \U\left(S^2\left([N,+\infty)\right)\right) \ne \emptyset \right) \] \[ \le \sum_{i, j \ge N} P^{o,o} \otimes \mathbb{P}_p\left(\U(\{S^1_{i}\}) \cap \U(\{S^2_{j}\}) \ne \emptyset\right) < 1. \] Let \[ A := \left\{\U\left(S^{1} \left( [1, +\infty) \right)\right) \cap \U\left(S^{2}([0,+\infty))\right) = \emptyset \right\}.\] Then there is a vertex $z \in V(G)$ such that $P^{z, o}(A) > 0$. If $z = o$, then (\ref{cut-1}) holds. Assume $z \ne o$. Let $B := B_{G}(o, 3d_G(o, z))$ and $C$ be the event that all edges in $B$ are closed. Since $p < 1$ and $A$ is decreasing, \[ P^{z,o} \otimes \mathbb{P}_p \left(A \cap C\right) > 0. \] Since $S^1$ and $S^2$ are transient, there is $N$ such that \[ P^{z,o} \otimes \mathbb{P}_p \left(A \cap C \cap \bigcap_{i = 1,2}\left\{S^i((N, \infty)) \cap B = \emptyset\right\} \right) > 0. \] Now we can specify two finite paths of $S^i_j$. There are vertices $x^{i}_j, i=1,2, j = 0, \cdots, N$ such that \[ P^{z,o} \otimes \mathbb{P}_p \left(A \cap C \cap \bigcap_{i = 1,2}\left\{S^i_j = x^{i}_j, \forall j, S^i((N, \infty)) \cap B = \emptyset\right\} \right) > 0. \] Now we can pick up a path in $B$ connecting $\{x^{1}_j\}_{j} \cap B$ and $\{x^{2}_j\}_{j} \cap B$. We can let $S^i_j = x^i_j$, for $-m_i < j < 0$, $i = 1,2$, and $x^1_{-m_1} = x^2_{-m_2} = y_0$. \[ P^{y_0, y_0} \otimes \mathbb{P}_p \left(A \cap C \cap \bigcap_{i = 1,2}\left\{S^i_j = x^{i}_j, \forall j, S^i((N, \infty)) \cap B = \emptyset\right\} \right) > 0. \] This event is contained in the event $\left\{\U (S^1([-m_1, +\infty))) \cap \U (S^2([-m_2, +\infty))) = \emptyset\right\}$ and hence we have (\ref{cut-1}). 
Let $\mathcal{S}$ be the generating set of the Cayley graph $G$. Consider the following transformation $\Theta$ on $\mathcal{S}^{\mathbb{Z}} \times \{0,1\}^{E(G)}$ defined by \[ \Theta\left((a_j)_j, (\omega_e)_e\right) := \left((a_{j+1})_j, (\omega_{a_0 e})_e \right). \] Here we let $a e := \{a x, a y\}$ for an edge $e = \{x,y\}$ and an element $a \in \mathcal{S}$. We have that $\Theta$ preserves $P^{o,o} \otimes \mathbb{P}_p$. Define a transformation $\varphi_a$ on $\{0,1\}^{E(G)}$ by \[ \varphi_a \left((\omega_e)_e\right) := (\omega_{ae})_e.\] By following the proof of \cite[Lemma 1 in Chapter 5]{BR}, the family of maps $\{\varphi_a : a \in \mathcal{S}\}$ is ergodic. By Kakutani \cite[Theorem 3]{Ka}, $\Theta$ is ergodic with respect to $P^{o,o} \otimes \mathbb{P}_p$. By applying the Poincar\'e recurrence theorem (see Pollicott and Yuri \cite[Theorem 9.2]{PY}, for example) to the dynamical system $(\mathcal{S}^{\mathbb{Z}} \times \{0,1\}^{E(G)}, P^{o,o} \otimes \mathbb{P}_p, \Theta)$, we have \[ P^{o,o} \otimes \mathbb{P}_p \left(\U(\widetilde S((-\infty, n])) \cap \U(\widetilde S([n+1, +\infty))) = \emptyset \text{ for infinitely many } n \in \mathbb{Z} \right) = 1, \] where we let \[ \widetilde S_n := \begin{cases} S^{1}_{n}, \ \ n \ge 0 \\ S^2_{-n}, \ \ n < 0.\end{cases}\] \end{proof} The following considers this problem at the critical point in high dimensions. It was pointed out by Itai Benjamini (personal communication). \begin{Thm}\label{Itai} Let $G = \mathbb{Z}^d, d \ge 11$. Let $H$ be the trace of the two-sided simple random walk on $\mathbb{Z}^d$. Let $p = p_c(\mathbb{Z}^d)$. Then $\U(H)$ has infinitely many cut points $P^{o,o} \otimes \mathbb{P}_{p_c(\mathbb{Z}^d)}$-a.s. \end{Thm} \begin{proof} We will show that \[ \sum_{x \in \mathbb{Z}^d} \sum_{i, j \ge 0} P^{0}(S_{i+j} = x) \mathbb{P}_p(x \in C_0) < +\infty. \] Below, $c$, $c^{\prime}$, and $c^{\prime\prime}$ denote constants depending only on $d$ and $p$.
Fitzner and van der Hofstad \cite[Theorem 1.4]{FvdH} claim that the decay rate for the two-point function $\mathbb{P}_{p_c(\mathbb{Z}^d)}(0 \leftrightarrow x)$ is $|x|^{2-d}$ as $|x| \to +\infty$, if $d \ge 11$. Therefore, \[ \mathbb{P}_p(x \in C_0) \le c \cdot d_{\mathbb{Z}^d}(0, x)^{2-d}, \ \ \text{ for any $x$.} \] Since $P^{0}(S_k = x) = 0$ if $k < d_{\mathbb{Z}^d}(0,x)$, \begin{align*} \sum_{x \in \mathbb{Z}^d} \sum_{i, j \ge 0} P^{0}(S_{i+j} = x) \mathbb{P}_p(x \in C_0) &\le \sum_{k} c^{\prime} k^{1 - d/2} \left(\sum_{l = 1}^{k} c l^{2-d} \left|\left\{x : d_{\mathbb{Z}^d}(0,x) = l\right\}\right| \right)\\ &\le c^{\prime\prime} \sum_{k \ge 1} k^{3-d/2} < +\infty. \end{align*} The rest of the proof goes in the same way as in the proof of Theorem \ref{Thm-cut}. \end{proof} The following deals with the supercritical phase. \begin{Prop} Let $G = \mathbb{Z}^d, d \ge 3$. Let $H$ be the trace of the two-sided simple random walk on $\mathbb{Z}^d$. If $p > p_c(G)$, then $\U(H)$ has no cut points $P^{0,0} \otimes \mathbb{P}_p$-a.s. \end{Prop} \begin{proof} Using the two-arms estimate by Aizenman, Kesten and Newman \cite{AKN}, \[ \mathbb{P}_p \left(\text{$0 \in \mathcal{C}_{\infty}$ and $0$ is a cut point of $\mathcal{C}_{\infty}$}\right) = 0. \] By the shift invariance of $\mathbb{P}_p$, the unique infinite cluster $\mathcal{C}_{\infty}$ has no cut points $\mathbb{P}_p$-a.s. \end{proof} \section{$\PP$ is being a recurrent subset} In this section, we assume that $G$ is a transient graph. Recall Definition \ref{Def-recur}. We regard a recurrent subset as a subgraph and consider the induced subgraph of the recurrent subset. In other words, if $A$ is a recurrent subset of $V(G)$, then we consider the graph such that the set of vertices is $A$ and the set of edges is $\{\{x, y\} \in E(G) : x, y \in A\}$. \subsection{The case that $H$ is a fixed subgraph} We proceed with this subsection as in Subsection 3.1. The following corresponds to Theorem \ref{extre-graph}.
\begin{Thm}[Extreme cases]\label{extre-subset} (i) There is a graph $G$ such that for any transient subset $H$ of $G$, \[ 0 < p_{c}(G) < p_{c, 1}(G, H, \PP) = 1.\] (ii) There is a graph $G$ such that for any infinite transient subset $H$ of $G$, \[ p_{c,2}(G, H, \PP) = 0.\] \end{Thm} We show this in the same manner as in the proof of Theorem \ref{extre-graph}. \begin{proof} Even if we add one edge to a transient subset, the enlarged set is still a transient subset. If not, the random walk would hit an added vertex infinitely often with positive probability, which contradicts that $G$ is a transient graph. Therefore, we can show (i) in the same manner as in the proof of Theorem \ref{extre-graph} (i). Let $G$ be the graph defined in the proof of Theorem \ref{extre-graph} (ii). Then \[ |\mathbb{N} \cap V(\U(H))| = +\infty. \] Hence $\U(H)$ is a recurrent subset, $\mathbb{P}_p$-a.s. for any $p > 0$. \end{proof} Second, we consider the case $G = \mathbb{Z}^d$, $d \ge 3$. Lemma \ref{Recur-cluster} implies the following. \begin{Prop} Let $G = \mathbb{Z}^d$, $d \ge 3$. Then for any transient subset $H$ of $G$, \[ p_{c,1}(G, H, \PP) \le p_c(G).\] \end{Prop} Third, we consider the case that $G$ is a tree $T$. \begin{Thm} Let $T$ be an infinite tree and $H$ be a transient subset of $T$. Then, $p_{c, 1}(T, H, \PP) = 1$. \end{Thm} \begin{proof} For $e \in E(T)$ and $x \in e$, we let $T_{e, x}$ be the connected subtree of $T$ such that $x \in V(T_{e,x})$ and $e \notin E(T_{e,x})$. Since $H$ is a transient subset, there exist an edge $e$ and a vertex $x \in e$ such that $T_{e, x}$ is a transient {\it subgraph} of $T$ and $V(H) \cap V(T_{e,x}) = \{x\}$. Then we can take an infinite path $(x_0, x_1, x_2, \dots)$ in $T_{e,x}$ such that $x_0 = x$, and for each $i \ge 0$, $\{x_i, x_{i+1}\} \in E(T_{\{x_{i-1}, x_{i}\}, x_i})$, and $T_{\{x_{i-1}, x_{i}\}, x_i}$ is a transient subgraph. If $p < 1$, then there is a number $i$ such that $\U(H)$ does not intersect $T_{\{x_{i-1}, x_{i}\}, x_i}$, $\mathbb{P}_p$-a.s.
Hence $\U(H)$ is a transient subset of $T$, $\mathbb{P}_p$-a.s. \end{proof} We do not give an assertion corresponding to Theorem \ref{epsilon-thm}. We are not sure whether there is a recurrent subset whose induced subgraph satisfies (\ref{near}) in Lemma \ref{epsilon}. \subsection{The case that $H$ is the trace of the simple random walk} \begin{Thm} Let $G$ be a Cayley graph of a finitely generated countable group with the degree of growth $d \ge 3$. Let $H$ be the trace of the simple random walk on $G$. Then \\ (i) If $d \ge 5$, \[ p_{c, 1}(G, H, \PP) = p_{c, 2}(G, H, \PP) = p_{c}(G). \] (ii) If $d = 3, 4$, \[ p_{c, 1}(G, H, \PP) = p_{c, 2}(G, H, \PP) = 0. \] \end{Thm} \begin{proof} Let $o$ be the unit element of the group. Let \[ \theta(x) := \sum_{n \ge 0} P^{o}(S_n = x) \text{ and } \theta_p(x) := \sum_{n \ge 0} P^{o} \otimes \mathbb{P}_p \left(S_n \in C_x\right). \] We remark that $\theta(x) = \theta_0(x)$. First, we show that $p_c(G) \le p_{c,1}(G, H, \PP)$. Let $p < p_c(G)$. It follows from \cite[Theorem 5.1]{HSC} and \cite{MMS} that \[ \theta_p(x) = O\left(d_{G}(o,x)^{2-d}\right).\] By following the proof of \cite[Theorem 6.5.10]{LL}, \[ E^{P^o \otimes \mathbb{P}_p}\left[\sum_{x \in \U(H)} \theta(x)\right] = \sum_{x \in V(G)} \theta_p(x)\theta(x) = O\left(\sum_{x \in V(G)} d_{G}(o,x)^{4-2d}\right). \] Using $d \ge 5$ and summation by parts, \[ \sum_{x \in V(G)} d_{G}(o,x)^{4-2d} < +\infty. \] Hence \[ P^o \otimes \mathbb{P}_p\left(S_n \in \U(H), \text{ i.o. } n\right) = 0 \] and $p \le p_{c,1}(G, H, \PP)$. Thus we have \[ p_c(G) \le p_{c,1}(G, H, \PP).\] If $p_c(G) < 1$, then by Lemma \ref{Recur-cluster}, \[ p_c(G) \ge p_{c,2}(G, H, \PP).\] If $p_c(G) = 1$, this clearly holds. Thus we see (i). We show (ii) by following the proof of \cite[Theorem 6.5.10]{LL}. Let $S^{1}$ and $S^{2}$ be two independent simple random walks on $G$.
Let \[ Z_{k} := \left| \left(B_G(o, 2^k) \setminus B_G(o, 2^{k-1})\right) \cap S^{1}\left([0, T^{1}_{V(G) \setminus B_G(o, 2^k)})\right) \cap S^{2}\left([0, T^{2}_{V(G) \setminus B_G(o, 2^k)})\right) \right|. \] Let $E_{k}$ be the event that $Z_k$ is strictly positive. In what follows, $c_i$, $1 \le i \le 8$, denote positive constants depending only on $G$. It follows from a generalized Borel-Cantelli lemma that if \\ (1) \[ \sum_{k \ge 1} P^{o,o}(E_{3k}) = +\infty \ \ \text{ and } \] (2) for some constant $c_1$, \[ P^{o,o}(E_{3k} \cap E_{3m}) \le c_1 P^{o,o}(E_{3k}) P^{o,o}(E_{3m}), \ \ k \ne m \] hold, then $E_{3k}$ occurs for infinitely many $k$, $P^{o,o}$-a.s., and assertion (ii) follows. \cite[Theorem 5.1]{HSC} states that \[ c_2 \cdot d_{G}(o, x)^{2-d} \le \theta(x) \le c_3 \cdot d_{G}(o, x)^{2-d}.\] By Grigor\'yan and Telcs \cite[Proposition 10.1]{GT}, the elliptic Harnack inequality holds. Therefore, we have (2). Now we show (1). \begin{align*} E^{P^{o,o}}[Z_k] &= \sum_{x \in B_G(o, 2^k) \setminus B_G(o, 2^{k-1})} P^o\left(T_x < T_{V(G) \setminus B_G(o, 2^k)}\right)^2 \\ &= c_4 \sum_{x \in B_G(o, 2^k) \setminus B_G(o, 2^{k-1})} \theta_{B_G(o, 2^k)}(x)^2. \end{align*} Since $\theta_{B_G(o, 2^k)}(x) \ge c_5 2^{k(2-d)}$ for any $x \in B_G(o, 3 \cdot 2^{k-2}) \setminus B_G(o, 2^{k-1})$, \[ E^{P^{o,o}}[Z_k] \ge c_6 2^{k(4-2d)} \left|B_G(o, 3 \cdot 2^{k-2}) \setminus B_G(o, 2^{k-1}) \right|. \] Using this and an isoperimetric inequality (cf. \cite[Theorem 7.4]{HSC}), \begin{equation}\label{one} E^{P^{o,o}}[Z_k] \ge c_7 2^{k(4-d)}. \end{equation} We have \[ E^{P^{o,o}}[Z_k^2] = \sum_{x, y \in B_G(o, 2^k) \setminus B_G(o, 2^{k-1})} P^o\left(T_x \vee T_y < T_{V(G) \setminus B_G(o, 2^k)}\right)^2.
\] Since \begin{align*} P^o\left(T_x \vee T_y < T_{V(G) \setminus B_G(o, 2^k)}\right) &\le P^o\left(T_x \le T_y < T_{V(G) \setminus B_G(o, 2^k)}\right) \\ &+ P^o\left(T_y \le T_x < T_{V(G) \setminus B_G(o, 2^k)}\right)\\ &\le \theta(x)\theta(x^{-1} y) + \theta(y)\theta(y^{-1}x) \\ &= O\left(2^{k(2-d)} (1+d_G(x,y))^{2-d}\right), \end{align*} we have, by summation by parts, \[ \sum_{y \in B_G(o, 2^k) \setminus B_G(o, 2^{k-1})} (1+d_G(x,y))^{4-2d} = \begin{cases} O(2^{k}) \ \ \ d = 3, \\ O(k) \ \ \ d = 4.\end{cases} \] Therefore, \begin{equation}\label{two} E^{P^{o,o}}[Z_k^2] = \begin{cases} O(4^{k}) \ \ \ d = 3, \\ O(k) \ \ \ d = 4.\end{cases} \end{equation} Using (\ref{one}), (\ref{two}) and the second moment method, for $d = 3,4$, \[ P^{o,o}(E_k) = P^{o,o}(Z_k > 0) \ge \frac{E^{P^{o,o}}[Z_k]^2}{E^{P^{o,o}}[Z_k^2]} \ge \frac{c_8}{k}. \] Thus we have (1). \end{proof} \section{The property $\PP$ of being connected} We say that a subgraph $H$ of $G$ is {\it connected} if for any two vertices $x$ and $y$ of $H$ there are vertices $x_0, \dots, x_n$ of $H$ such that $x_0 = x$, $x_n = y$, and $\{x_{i-1}, x_{i}\}$ is an edge of $H$ for each $i$. By Definition \ref{enl}, if $H$ is connected, then $\U(H)$ is also connected. On the other hand, if $H$ is {\it not} connected, then $\U(H)$ can be disconnected. For example, if $(V(G), E(G)) = (\mathbb{Z}, \{\{n,n+1\} : n \in \mathbb{Z}\})$ and $(V(H), E(H)) = (\mathbb{Z}, \emptyset)$, then \[ \mathbb{P}_p(\U(H) \textup{ is connected}) = 0, \ \ p < 1. \] The following notion was introduced in \cite{BHS}. \begin{Def}[percolating everywhere]\label{pe} We say that a subgraph $H$ of $G$ is {\it percolating everywhere} if $V(H) = V(G)$ and every connected component of $H$ is infinite. \end{Def} We introduce a notion concerning connectivity.
For $A, B \subset V(G)$, we let \[ E(A, B) := \left\{\{y,z\} \in E(G) : y \in A, z \in B \right\}.\] \begin{Def}\label{sc} We say that $G$ satisfies (TI) if for every $A, B \subset V(G)$ satisfying \[ V(G) = A \cup B, \ A \cap B = \emptyset \text{ and } |A| = |B| = +\infty,\] $E(A, B)$ is an infinite set. \end{Def} \begin{Exa} (i) $\mathbb{Z}^d, d \ge 2$, satisfy (TI).\\ (ii) $\mathbb{T}_d, d \ge 2$, does not satisfy (TI).\\ (iii) The trace of the two-sided simple random walk on $\mathbb{Z}^d, d \ge 5$, does not satisfy (TI) a.s. \end{Exa} \begin{proof} Let $a_1 \in A$ and $b_1 \in B$. Since $\mathbb{Z}^d$ is connected, there is a path $\gamma_1$ connecting $a_1$ and $b_1$. Then $\gamma_1$ contains at least one edge in $E(A, B)$ and is contained in a box $B_{\mathbb{Z}^d}(o, N_1)$. Since $A$ and $B$ are infinite, there are points $a_{2} \in A \cap (\mathbb{Z}^d \setminus B_{\mathbb{Z}^d}(o, N_1))$ and $b_2 \in B \cap (\mathbb{Z}^d \setminus B_{\mathbb{Z}^d}(o, N_1))$. There is a path $\gamma_2$ connecting $a_2$ and $b_2$ in $\mathbb{Z}^d \setminus B_{\mathbb{Z}^d}(o, N_1)$. Then $\gamma_2$ contains at least one edge in $E(A, B)$ and is contained in a box $B_{\mathbb{Z}^d}(o, N_2)$. Since $A$ and $B$ are infinite, we can repeat this procedure and obtain infinitely many disjoint paths $(\gamma_n)_{n \ge 1}$. Thus we have (i). Since $\mathbb{T}_d$ and the trace have infinitely many cut points, (ii) and (iii) hold. \end{proof} \begin{Thm} (i) If $G$ satisfies (TI), then for any percolating everywhere subgraph $H$, \[ p_{c,1}(G,H,\PP) = p_{c,2}(G,H,\PP). \] If the number of connected components of $H$ is finite, then \[ p_{c,1}(G,H,\PP) = p_{c,2}(G,H,\PP) = 0. \] (ii) If $G$ does not satisfy (TI), then there is a percolating everywhere subgraph $H$ such that \[ p_{c,1}(G,H,\PP) = 0 \text{ and } p_{c,2}(G,H,\PP) = 1.
\] \end{Thm} \begin{proof} Let $\mathbb{P}_p^{H}$ be the product measure on $\{0,1\}^{E(G)}$ such that $\mathbb{P}_p^{H}(\omega_e = 1) = 1$ if $e \in E(H)$ and $\mathbb{P}_p^{H}(\omega_e = 1) = p$ if $e \notin E(H)$. Write $x \leftrightarrow y$ if $x$ and $y$ are connected by an open path in this percolation model. Define $x \sim y$ if and only if $\mathbb{P}_p^{H}(x \leftrightarrow y) = 1$. This defines an equivalence relation. Let $[x]$ be the equivalence class containing $x$. Let $G^{\prime}$ be the quotient graph of $G$ by $\sim$. This is a connected graph which may have multiple edges. The number of edges between any two vertices of $G^{\prime}$ is finite (otherwise the two classes would coincide). If $|V(G^{\prime})| = 1$, then \[ \mathbb{P}_p(\text{$\U(H)$ is connected}) = 1.\] Assume that $|V(G^{\prime})| \ge 2$, and let $V(G^{\prime}) = A \cup B$ with $A \cap B = \emptyset$ and $A, B \ne \emptyset$. Let $A^{\prime}$ (resp. $B^{\prime}$) be the subset of $V(G)$ consisting of the vertices whose equivalence classes belong to $A$ (resp. $B$). Then \[ V(G) = A^{\prime} \cup B^{\prime}, \ A^{\prime} \cap B^{\prime} = \emptyset \text{ and } |A^{\prime}| = |B^{\prime}| = +\infty.\] By (TI), $|E(A^{\prime}, B^{\prime})| = +\infty$. Since the number of edges between any two vertices of $G^{\prime}$ is finite, $|E(A, B)| = +\infty$. Let $p([x], [y])$ be the probability that $[x]$ and $[y]$ are connected by an open edge with respect to the measure induced from $\mathbb{P}_p$ by the quotient map. Then \[ p\left([x], [y]\right) = 1 - (1-p)^{\left|E([x], [y])\right|}. \] Hence \[ \sum_{[x] \in A, [y] \in B} p([x], [y]) \ge p \sum_{[x] \in A, [y] \in B} 1_{\left\{\text{$[x]$ and $[y]$ are connected by an edge of $G$}\right\}} = +\infty.\] By Kalikow and Weiss \cite[Theorem 1]{KW}, \[ \mathbb{P}_p\left(\text{the random graph on $G^{\prime}$ is connected}\right) \in \{0,1\}.\] Each connected component of $H$ is contained in an equivalence class, and conversely, each equivalence class contains at least one connected component of $H$, due to the percolating everywhere assumption.
Therefore, $\U(H)$ is connected if and only if the random graph on $G^{\prime}$ is connected. Hence, for any $p > 0$, \[ \mathbb{P}_p(\text{$\U(H)$ is connected}) \in \{0,1\}\] and hence \[ p_{c,1}(G, H, \PP) = p_{c,2}(G, H, \PP).\] If the number of connected components of $H$ is finite, then $|E(A^{\prime}, B^{\prime})| < +\infty$ for any decomposition $V(G^{\prime}) = A^{\prime} \cup B^{\prime}$. Therefore, $|V(G^{\prime})| = 1$, and hence, \[ p_{c,1}(G, H, \PP) = p_{c,2}(G, H, \PP) = 0.\] Thus we have (i). Assume that $G$ does not satisfy (TI). Then there are two disjoint infinite sets $A$ and $B$ such that $V(G) = A \cup B$ and $|E(A, B)| < +\infty$. Since $(A, E(A, A))$ and $(B, E(B, B))$ may have finite connected components, we modify $A$ and $B$. Let $\partial A$ and $\partial B$ be the inner boundaries of $A$ and $B$, respectively. For any vertex in $(A, E(A, A))$, there is a vertex in $\partial A$ such that they are connected in $(A, E(A, A))$. Since $A$ and $B$ are infinite, there are $a_0 \in \partial A$ and $b_0 \in \partial B$ such that infinitely many vertices of $A$ are connected to $a_0$ in $A$ and infinitely many vertices of $B$ are connected to $b_0$ in $B$. Let \[ E_{A, B} := \left\{\{a,b\} \in E_{A} : b \leftrightarrow b_0 \text{ in } E(G) \setminus E_{A} \right\}, \] where \[ E_{A} := \left\{\{a,b\} \in E(A, B) : a \leftrightarrow a_0 \text{ in } A \right\}. \] Let $H$ be a subgraph of $G$ such that $E(H) = E(G) \setminus E_{A, B}$. Then $a_0$ and $b_0$ are not connected in $H$. Assume that there is a vertex $x$ which is not connected to $a_0$ in $H$. Consider a path $\gamma$ from $x$ to $b_0$ {\it in $G$}. Let $\{a, b\}$ be the first edge of $E_{A,B}$ traversed by $\gamma$. Since $a \leftrightarrow a_0 \text{ in } A$, the path $\gamma$ passes $b$ before it passes $a$ (otherwise $x$ would be connected to $a_0$ in $H$ through $a$). There is a path from $b$ to $b_0$ which does not use any edge of $E_{A, B}$. Hence, there is a path from $x$ to $b_0$ in $H$.
Therefore, there are exactly two connected components of $H$, and due to the choices of $a_0$ and $b_0$, they are both infinite. Thus $H$ is percolating everywhere. Since $E_{A, B}$ is finite, $p_{c,1}(G,H,\PP) = 0$. Since $E_{A, B}$ is non-empty, $p_{c,2}(G,H,\PP) = 1$. Thus we have (ii). \end{proof} We are not sure whether, if $G$ satisfies (TI) and $H$ is a percolating everywhere subgraph with infinitely many connected components, it holds that \[ p_{c, 1}(G, H, \PP) = p_{c, 2}(G, H, \PP) = 0. \] \section*{Acknowledgements} The author wishes to express his gratitude to N. Kubota for stimulating discussions, to I. Benjamini for pointing out Theorem \ref{Itai}, and to H. Duminil-Copin for drawing his attention to the reference \cite{BeTa}. The author was supported by Grant-in-Aid for JSPS Research Fellow (24.8491) and Grant-in-Aid for Research Activity Start-up (15H06311) and for JSPS Research Fellows (16J04213).
\section{Introduction} In very simple terms we can define spinons as elementary excitations carrying spin but no charge. They emerge in two-dimensional Mott insulators as a consequence of electron fractionalization. \cite{Sachdev-Review} Similarly to quarks in QCD, spinons do not emerge so easily out of a confined state. As a consequence, {\it explicit} observation of spinons is very difficult. Quarks, though confined, are essential to explain the properties and structure of matter. In a similar way, spinons, in spite of their confinement, seem to be essential in explaining all the remarkable properties of Mott insulators, including the doped ones, opening the way for the understanding of high-$T_c$ superconductors.\cite{Anderson} In several systems spinons may deconfine at a quantum critical point.\cite{Sachdev-Review} One prominent example is the critical point separating different types of order in quantum antiferromagnets, like for example in the phase transition between a N\'eel and a valence-bond solid (VBS) state,\cite{RS} which is a paramagnetic state breaking lattice symmetries. Spinon deconfinement also occurs in some paramagnetic Mott insulating states where no symmetries are broken, like for example the spin liquid state in pyrochlore antiferromagnets,\cite{Moessner,Nussinov} and in some theories for the underdoped state of high-$T_c$ superconductors.\cite{Wen-RMP} A remarkable property of deconfined spinons is that they are interacting at low energies (i.e., they are asymptotically interacting in the infrared), in contrast with deconfined quarks, which are free at high energies (asymptotic freedom).\cite{AF} Therefore, the field theory of a deconfined quantum critical point is a highly nontrivial conformal field theory (CFT). \cite{Senthil-2004} This deconfined quantum criticality (DQC) arises due to a destructive interference between the Berry phases of a quantum antiferromagnet and the instantons.
This mechanism makes instanton events irrelevant at large distances, which in turn allows the spinons to deconfine. Furthermore, since deconfined spinons are strongly interacting, the critical exponents are nontrivial. Interestingly, the critical exponents following from such a theory have unusual values with respect to those arising from the Landau-Ginzburg-Wilson (LGW) paradigm of phase transitions.\cite{Senthil-2004} The difference can already be seen in mean-field theory. It is easy to see that mean-field theory based on the LGW paradigm gives an anomalous dimension $\eta_N=0$ for the N\'eel field ${\bf S}_i=(-1)^iS{\bf n}_i$, where $i$ is the lattice site, $S$ is the spin, and ${\bf n}_i^2=1$. However, this is not the case for a mean-field theory in the DQC scenario. There the direction field is written in terms of the spinon fields using a CP$^1$ representation, i.e., $n_{i,a}={\bf z}_i^\dagger\sigma_a{\bf z}_i$, $a=1,2,3$, where ${\bf z}_i=(z_{i1},z_{i2})$, and $\sigma_a$ are the Pauli matrices, and the constraint ${\bf n}_i^2=1$ implies ${\bf z}_i^\dagger{\bf z}_i=1$. In mean-field theory the dimension of fields is simply given by dimensional analysis of the action functional. In such a case this gives simply ${\rm dim}[z_{i\alpha}]=(d-2)/2$, where $d$ is the dimension of the spacetime. Therefore, mean-field theory leads to ${\rm dim}[n_{i,a}]=(d-2+\eta_N)/2=2{\rm dim}[z_\alpha]$, which implies $\eta_N=d-2$, or $\eta_N=1$ in $2+1$ dimensions. Thus, we see that the field theory of deconfined spinons leads to a large anomalous dimension of the N\'eel field already at the mean-field level. 
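This power counting can be displayed compactly; the following lines merely restate the dimensional analysis just described (the action is dimensionless, the kinetic term fixes ${\rm dim}[z_{i\alpha}]$, and the N\'eel field is bilinear in the spinons):

```latex
\begin{gather*}
{\rm dim}[z_{i\alpha}]=\frac{d-2}{2},\qquad
{\rm dim}[n_{i,a}]=2\,{\rm dim}[z_{i\alpha}]=d-2,\\
\frac{d-2+\eta_N}{2}=d-2
\;\Longrightarrow\;
\eta_N=d-2, \qquad \text{i.e., } \eta_N=1 \text{ in } 2+1 \text{ dimensions.}
\end{gather*}
```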
Quantum fluctuations typically reduce this value, but it still remains considerably larger than the prediction of the LGW scenario, \cite{Motrunich,Sandvik} whose fluctuation corrections lead to an anomalous dimension $\eta_N\approx 0.03$.\cite{KSF} A deconfined quantum critical point governs a second-order phase transition between different quantum states of matter at zero temperature, the paradigmatic situation in this case being the transition from a N\'eel state to a VBS. Thus, it is important to check whether the proposed models for DQC actually undergo a second-order phase transition with a large anomalous dimension. In recent years there has been a lot of activity in this direction. Indeed, some of the proposed models have been checked extensively in large-scale Monte Carlo (MC) simulations. \cite{Motrunich,Sandvik,Kuklov_2006,Kragset,Harada,Melko,Jiang} Particularly important are the MC simulations for the well-studied $S=1/2$ quantum antiferromagnet with easy-plane anisotropy. Due to its global $U(1)$ symmetry and the additional {\it local} $U(1)$ symmetry, the model is self-dual.\cite{Motrunich} This model was predicted to exhibit a deconfined quantum critical point.\cite{Motrunich,Senthil-2004} However, recent MC simulations \cite{Kuklov_2006,Kragset} have clearly shown that in this case a first-order phase transition actually takes place. This result is confirmed by a recent renormalization group (RG) analysis.\cite{Nogueira-Kragset-Sudbo} Remarkably, despite the absence of quantum criticality, the destructive interference mechanism between the Berry phases and instantons was shown to work,\cite{Kragset} so that we can still talk about spinon deconfinement. The situation here is reminiscent of the $Z_2$-Higgs theory in three spacetime dimensions, though in this case we have a discrete gauge symmetry rather than a continuous one.
This model is also self-dual,\cite{Balian} but is known to undergo a first-order phase transition along the self-dual line.\cite{Stack} However, the $Z_2$-Higgs theory has the peculiarity of having a deconfined phase bounded by two lines of second-order phase transitions that become first-order when they are very near each other and to the self-dual line. It is not known at present whether a similar behavior for the easy-plane frustrated quantum antiferromagnet is possible. For the $SU(2)$ case, a MC simulation on a Heisenberg model with hedgehog suppression \cite{Motrunich} seems to support the DQC scenario. Another interesting model is the $S=1$ Heisenberg antiferromagnet with a biquadratic interaction between the spins,\cite{Grover} where numerical evidence for DQC has also been found.\cite{Harada} Strong evidence for DQC has been reported in Ref. \onlinecite{Sandvik}, where a model featuring a four-spin interaction was simulated. There the anomalous dimension of the N\'eel field was found to be $\eta_N\approx 0.26$. Further MC studies \cite{Melko} confirm the analysis of Ref. \onlinecite{Sandvik}, in which a transition from a N\'eel state to a VBS occurs. There the obtained value of the anomalous dimension is $\eta_N\approx 0.35$. However, there is a recent paper \cite{Jiang} on the same model where MC simulations are reported to lead to a first-order phase transition. Due to the inherently non-perturbative character of deconfined quantum critical points, purely analytical and well controlled studies are not easy to perform. For instance, the perturbative RG applied to the $SU(N)$ case can only access a quantum critical point if $N>182.9$.\cite{Nogueira-Kragset-Sudbo} The smallest value of $N$ producing a critical point, $N=183$, leads to an anomalous dimension $\eta_N=609/671\approx 0.9076$. In principle the result of this RG analysis would mean that the phase transition for the $SU(2)$ case is a first-order one. However, it was argued in Ref. 
\onlinecite{Nogueira-Kragset-Sudbo} that, similarly to the Ginzburg-Landau (GL) superconductor with $N$ complex order parameter fields,\cite{Halperin-Lubensky-Ma} the actual critical behavior at low $N$ and in the strong coupling limit may be compatible with a second-order phase transition. For a GL model with a single complex order field, this expectation is confirmed by both a duality analysis and MC simulations of the model. \cite{Dasgupta,Kleinert-tric,Kiometzis,Herbut,Olsson,Hove,Neuhaus} There are RG studies where low-$N$ critical points were found,\cite{Berg,Folk,Herbut-Tesanovic,Kleinert-Nogueira-1} but they all have some kind of problem, being either not well controlled or involving ad hoc assumptions. Two-loop perturbation theory in the $\epsilon$-expansion should in principle be a well-controlled approach.\cite{Kolnberger} The resummation of the two-loop result \cite{Folk} led to a critical point for $N=1$. However, the unresummed result, though correct, contains unacceptable pathologies, like for example a critical exponent $\nu$ for $182.9<N\leq 200$ larger than the $N\to\infty$ result at fixed dimension, and the absence of a fixed point for the gauge coupling if $N<18\epsilon$. In view of these pathologies, the resummed two-loop result cannot be completely trusted. The ideal approach would be to obtain a resummed three-loop result. Unfortunately, to this order the only available RG function is the $\beta$ function for the gauge coupling.\cite{3-loop} Anyway, the important argument from Ref. \onlinecite{Nogueira-Kragset-Sudbo} to be retained here is the following. The existence of a critical value of $N$ in the weak coupling analysis in $d=4-\epsilon$ spacetime dimensions, above which a critical point exists, is taken as an indication that in the strong-coupling limit a quantum critical point at lower values of $N$ may exist. This is of course a conjecture that must be tested further. Note, however, that the easy-plane case, also studied in Ref.
\onlinecite{Nogueira-Kragset-Sudbo}, when generalized to a theory with a global $O(N)\times O(N)$ symmetry, does not have any fixed points at nonzero gauge coupling for all values of $N$, i.e., this theory does not have a critical value of $N$. The behavior of the $\epsilon$-expansion for the GL model with a single complex order field improves considerably if the theory is coupled via the gauge field to $N_f$ Dirac fermion species.\cite{Kleinert-Nogueira-2} Interestingly, it is not necessary to have a large value of $N_f$ to obtain an infrared stable fixed point. In the context of the present paper, it should be noted that the $SU(2)$ model for deconfined spinons is exactly the same as a GL model with two complex order fields. In this paper we will consider the Lagrangian: \begin{equation} \label{L} {\cal L}={\cal L}_b+{\cal L}_f, \end{equation} where \begin{eqnarray} \label{L-boson} {\cal L}_b&=&\frac{1}{2e_0^2}(\epsilon_{\mu\nu\lambda}\partial_\nu A_\lambda)^2+ \sum_{\alpha=1}^{N_b}|(\partial_\mu-iA_\mu)z_\alpha|^2\nonumber\\ &+&r_0\sum_{\alpha=1}^{N_b}|z_\alpha|^2+\frac{u_0}{2} \left(\sum_{\alpha=1}^{N_b}|z_\alpha|^2\right)^2, \end{eqnarray} and \begin{equation} \label{L-fermion} {\cal L}_f=\sum_{a=1}^{N_f}\bar \psi_a(\slashchar{\partial}+i\slashchar{A})\psi_a. \end{equation} The Lagrangian ${\cal L}_b$ is precisely the model for deconfined spinons proposed in Ref. \onlinecite{Senthil-2004}. In this context, we are primarily interested in the case with $N_b=2$. The Lagrangian ${\cal L}_b$ is supposed to govern the universality class of the phase transition between a N\'eel state and a VBS. The Lagrangian ${\cal L}_f$ contains $N_f$ species of four-component Dirac fermions in $2+1$ Euclidean dimensions.
\cite{Pisarski} The representation of $\gamma$ matrices in this case is given by \begin{equation} \gamma_0=\left( \begin{array}{cc} \sigma_3 & 0\\ \noalign{\medskip} 0 & -\sigma_3 \end{array} \right),~~~~~~~~~ \gamma_1=\left( \begin{array}{cc} \sigma_2 & 0\\ \noalign{\medskip} 0 & -\sigma_2 \end{array} \right), \nonumber \end{equation} \begin{equation} \gamma_2=\left( \begin{array}{cc} \sigma_1 & 0\\ \noalign{\medskip} 0 & -\sigma_1 \end{array} \right), \end{equation} where $\sigma_1$, $\sigma_2$, and $\sigma_3$ are the Pauli matrices. The Lagrangian ${\cal L}_f$ is typical of a so-called algebraic quantum liquid. There are three such quantum liquids, the algebraic spin liquid (ASL),\cite{Rantner} the algebraic Fermi liquid (AFL),\cite{FT} and the recently introduced algebraic charge liquid (ACL).\cite{Kaul} Let us briefly mention the differences between these algebraic quantum liquids. The ASL emerges out of the so-called staggered flux phase \cite{Affleck} in the large $N_f$ limit of an $SU(N_f)$ Heisenberg antiferromagnet. In this case the spin operators are written in terms of fermion bilinears such that the Heisenberg model reads $H=-(J/N_f)\sum_{\langle i,j\rangle}f_{i\alpha}^\dagger f_{j\alpha}f_{j\beta}^\dagger f_{i\beta}$, with the constraint $f_{i\alpha}^\dagger f_{i\alpha}=N_f/2$. A lattice gauge field arises from the phase fluctuations of the Hubbard-Stratonovich link field $\chi_{ij}=\langle f_{i\alpha}^\dagger f_{j\alpha}\rangle$. This gives rise to a compact $U(1)$ gauge theory. Linearizing around the nodes $\pm(\pi/2,\pi/2)$ of the quasi-particle spectrum $E_{\bf k}=2|\chi_0|\sqrt{\cos^2k_x+\cos^2k_y}$ and taking the continuum limit leads to the Lagrangian (\ref{L-fermion}). For large enough $N_f$ the fermionic spinons deconfine \cite{Hermele,Nogueira-Kleinert-1,Nogueira-Kleinert-2} and we obtain a Mott insulating state having no broken symmetries, i.e., a spin liquid.
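The linearization around a node mentioned above can be made explicit. Writing ${\bf k}=(\pi/2,\pi/2)+{\bf q}$ with $|{\bf q}|$ small (the label ${\bf q}$ and the velocity $v_F$ are our notation),

```latex
\begin{equation*}
\cos k_x=\cos\!\left(\frac{\pi}{2}+q_x\right)=-\sin q_x\approx -q_x,
\qquad
E_{\bf k}=2|\chi_0|\sqrt{\cos^2 k_x+\cos^2 k_y}\approx v_F\,|{\bf q}|,
\qquad v_F=2|\chi_0|,
\end{equation*}
```

i.e., a massless Dirac cone; with the velocity rescaled to unity, this is the relativistic dispersion encoded in the continuum Lagrangian (\ref{L-fermion}).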
For low enough $N_f$ the spin liquid is no longer stable \cite{Nogueira-Kleinert-2} and the chiral symmetry is probably broken due to spinon confinement. However, in this theory chiral symmetry breaking (CSB) takes place even in the non-compact case.\cite{Pisarski,Appelq} This CSB is associated to the development of an insulating antiferromagnetic state.\cite{Kim} The AFL, though having essentially the same Lagrangian ${\cal L}_f$, is physically very different from the ASL. Here the gauge field is not compact and has a completely different origin.\cite{FT,Herbut-AFL} While the ASL is associated to a Mott insulator, the AFL originates from a $d$-wave superconductor. The Dirac fermions are obtained by identifying the low-energy quasi-particles on the nodes of the $d$-wave gap. The gauge field follows from the coupling of these quasi-particles to vortices. The way this is done is very subtle and involves the manipulation of singular gauge transformations.\cite{FT} The fermions in the AFL, just like the ones of the ASL, carry no charge. Interestingly, the number $N_f$ in the AFL is related to the number of layers of the system. For example, a single-layer system has $N_f=2$, while a bilayer system has $N_f=4$. Note that in this case there is no need for other gauge fields in the corresponding layers, since the gauge field arises from a vortex-antivortex excitation. Thus, at large distances a vortex in one layer is always connected with a vortex in the second layer, the same happening for the antivortices. The end result is that Dirac fermions in different layers couple to the same gauge field.\cite{Note-1} Of course, similarly to the ASL, CSB also occurs here for small enough $N_f$,\cite{Herbut-AFL,Zlatko-CSB} and is once more associated to antiferromagnetism. 
In the ACL a fractional fermionic particle with charge $e$ and no spin, a {\it holon}, couples to bosonic spinons via a gauge field.\cite{Kaul} Therefore, it is necessarily related to the concept of DQC and in this case the whole Lagrangian (\ref{L}) has to be considered. Note that in contrast with the ASL and AFL, the Dirac fermions in the ACL carry charge. This new state of matter bears some resemblance to earlier ideas on spin-charge separation in the cuprates.\cite{Wen-RMP} An important difference is that in earlier theories superconductivity is obtained by doping a spin liquid, while in the ACL superconductivity arises by doping an antiferromagnetic state near the deconfined quantum critical point.\cite{Kaul-PRB} Thus, we can dope either a N\'eel or a VBS state. As in the two other algebraic quantum liquids, here the chiral symmetry can also be broken. However, since the Dirac fermions now carry charge, we have a situation where charge density waves develop as a result of CSB. It is important to note that the coupling to bosons leads to a reduction of the value of $N_f$ below which the chiral symmetry breaks.\cite{Kim} From the above discussion we see that among the three algebraic quantum liquids, only the ACL cannot exist without the coupling to bosons. The results of this paper will concern mainly this case. Quantum criticality in $U(1)$ gauge theories with both bosonic and fermionic matter has also been considered in a recent paper.\cite{Kaul-Sachdev} There the authors considered a CP$^{N_b-1}$ model coupled to $N_f$ Dirac fermion species and analyzed the model for large $N_b$ and $N_f$, while keeping $N_f/N_b$ arbitrary. Here we have followed Ref. \onlinecite{Senthil-2004} and softened the CP$^{N_b-1}$ constraint. This leads to a theory that is tractable in an RG framework in $d=4-\epsilon$ spacetime dimensions.
The main advantage of this approach is that we will be able to work with a fixed $N_b=2$, and $N_f$ does not need to be large to obtain a quantum critical point. Some of our results follow directly from Ref. \onlinecite{Kleinert-Nogueira-2}, the main difference being that here we are interested in $N_b=2$ rather than $N_b=1$.\cite{Note-2} However, in this paper we will improve substantially upon the previous analysis and present many new results which are relevant for the quantum critical behavior of the ACL. One of the main new results of this paper will be the calculation of $\eta_N$ for $N_b=2$ and $N_f\geq 4$ in terms of the crossover exponent $\varphi$. As shown in Ref. \onlinecite{Nogueira-Kragset-Sudbo}, the critical exponent $\eta_N$ is related to the crossover exponent by the formula \begin{equation} \label{eta-N} \eta_N=d+2(1-\varphi/\nu), \end{equation} where $\nu$ is the correlation length exponent. The crossover exponent is related to a mass anisotropy of the Lagrangian. Although the Lagrangian ${\cal L}_b$ does not have any mass anisotropy, the correlation function $\langle{\bf z}^*(x)\cdot{\bf z}(0)~{\bf z}(x)\cdot{\bf z}^*(0)\rangle$ is calculated through the insertion of the operator $z_\alpha^*z_\beta$. For $\alpha\neq\beta$ this insertion allows us to calculate $\varphi$.\cite{Amit} The plan of the paper is as follows. In order to explain why a second-order phase transition is easier to obtain when Dirac fermions are included, in Section II we will calculate the effective potential associated to the Lagrangian (\ref{L}). There we compare the cases with and without fermions. In the absence of fermions the one-loop effective potential typically describes a first-order phase transition. When fermions are included the situation changes and we can show that for small $N_f$ a first-order transition takes place, while for larger values of $N_f$ the effective potential features a second-order phase transition.
In Section III we proceed with the RG analysis and the calculation of the critical exponents. In Section IV we briefly discuss the effect of CSB and show that it does not affect the quantum critical regime analyzed in Section III. We also compute the anomalous dimension of the chiral holon susceptibility. Section V concludes the paper. An Appendix sketches the details of the calculations of the anomalous dimension $\eta_N$. \section{Effective potentials and the order of the phase transition} \subsection{A simple example} In order to illustrate the subtlety involving the determination of the order of the phase transition in models for deconfined spinons, we will consider the following simple Lagrangian for an interacting scalar theory in $d=2+1$ Euclidean dimensions: \begin{equation} {\cal L}=\frac{1}{2}(\partial_\mu\phi)^2+\frac{m_0^2}{2}\phi^2+\frac{u_0}{4!}\phi^4. \end{equation} At the mean-field level the above model obviously exhibits a second-order phase transition with a critical point at $m_0^2=0$. Let us calculate the effective potential for the above model at one-loop order. This is more easily done by writing $\phi=\bar \phi+\delta\phi$, where $\bar \phi$ is a constant background field, and integrating out the quadratic fluctuations in $\delta\phi$ while disregarding the higher order ones. This calculation is very simple, since it just involves a Gaussian integration in $\delta\phi$. We obtain, \begin{eqnarray} &U_{\rm eff}(\bar \phi)=\frac{m_0^2}{2}\bar \phi^2+\frac{u_0}{4!}\bar \phi^4 \nonumber\\ &+\frac{1}{2V}\left[{\rm Tr}\ln\left(-\partial^2+m_0^2+\frac{u_0}{2}\bar \phi^2\right) -{\rm Tr}\ln(-\partial^2)\right], \end{eqnarray} where $V$ is the (infinite) volume. 
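The tracelog can be evaluated in momentum space. The following sketch uses a sharp momentum cutoff $\Lambda$ (our intermediate steps; $c_\Lambda$ is a pure number whose value depends on the regularization and is absorbed into the mass renormalization, while the cubic term is universal):

```latex
\begin{align*}
\frac{1}{2V}\left[{\rm Tr}\ln\left(-\partial^2+M^2\right)
-{\rm Tr}\ln\left(-\partial^2\right)\right]
&=\frac{1}{2}\int\frac{d^3p}{(2\pi)^3}\,\ln\!\left(1+\frac{M^2}{p^2}\right)
=\frac{1}{4\pi^2}\int_0^{\Lambda}dp\,p^2\,\ln\!\left(1+\frac{M^2}{p^2}\right)\\
&=c_\Lambda\,\Lambda M^2-\frac{M^3}{12\pi}+O(\Lambda^{-1}),
\qquad M^2\equiv m_0^2+\frac{u_0}{2}\bar\phi^2 .
\end{align*}
```

The nonanalytic term $-M^3/(12\pi)$ is the one responsible for the structure of the effective potential obtained next.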
Explicit evaluation of the tracelog term yields \begin{equation} U_{\rm eff}(\bar \phi)=\frac{1}{2}\left(m_0^2+\frac{\Lambda u_0}{2\pi^2}\right)\bar \phi^2 +\frac{u_0}{4!}\bar \phi^4-\frac{1}{12\pi}\left(m_0^2+\frac{u_0}{2}\bar \phi^2\right)^{3/2}, \end{equation} where $\Lambda$ is an ultraviolet cutoff and we have neglected terms which are independent of $\bar \phi$. The above effective potential can be written in terms of renormalized quantities $m^2$ and $u$ defined by the normalization conditions $U''_{\rm eff}(0)=m^2$ and $U''''_{\rm eff}(0)=u$, where the primes denote derivatives with respect to $\bar \phi$. This leads to \begin{equation} m^2=m_0^2+\frac{\Lambda u_0}{2\pi^2}, \end{equation} and \begin{equation} \label{u} u=u_0-\frac{3u_0^2}{16\pi m}, \end{equation} where in the second term of Eq. (\ref{u}) we have replaced $m_0^2$ by $m^2$, since the error involved in this replacement contributes to an order higher than the one being calculated here. Thus, the effective potential becomes \begin{equation} U_{\rm eff}(\bar \phi)=\frac{m^2}{2}\bar \phi^2 +\frac{u_0}{4!}\bar \phi^4-\frac{1}{12\pi}\left(m^2+\frac{u_0}{2}\bar \phi^2\right)^{3/2}. \end{equation} The above potential is typical of a system exhibiting a second-order phase transition. The order parameter is obtained by extremizing the effective potential. It is given by \begin{equation} \bar \phi_\pm=\pm\frac{\sqrt{3}}{8\pi u_0}\left(3u_0^3-128\pi^2m^2u_0-\sqrt{9u_0^6-512\pi^2m^2u_0^4}\right)^{1/2}, \end{equation} and we see that the critical point is given by $m^2=0$. Looking at Eq. (\ref{u}) we might think that $u$ will get large and negative as the critical point is approached, such that the effective potential would become unstable against a $\phi^6$ interaction. However, this is not the case, since up to one-loop accuracy we can write the following RG equation for the dimensionless coupling $\hat u=u/(16\pi m)$, \begin{equation} m\frac{\partial\hat u}{\partial m}=-\hat u+3\hat u^2. 
\end{equation} We see that as $m\to 0$ the above $\beta$ function vanishes at the infrared stable fixed point $\hat u_*=1/3$. Thus, $u$ actually does not diverge as the critical point is approached. \subsection{Effective potential for the $SU(2)$ antiferromagnet} Let us now consider the one-loop effective potential for the Lagrangian (\ref{L-boson}). As in the example of a single scalar field theory, all we have to do is to integrate out the Gaussian fluctuations. Note that the Lagrangian is already quadratic in the gauge field, so that the latter can be immediately integrated out exactly. We will consider a nonzero background for the fields $z_1$ and $z_2$, leaving a vanishing background for the remaining $N_b-2$ fields. Thus, we will have $z_1=\bar z_1+\delta z_1$, $z_2=\bar z_2+\delta z_2$, and $z_\alpha=\delta z_\alpha$ for $\alpha\geq 3$. For $N_b=2$, the background spin orientation field is then $\bar {\bf n}=\bar z_\alpha^*{\mbox{\boldmath $\sigma$}}_{\alpha\beta}\bar z_\beta$. The result is \begin{widetext} \begin{eqnarray} \label{Ueff-AF} &U_{\rm eff}^{\rm AF}(\bar z_1,\bar z_2)=m^2(|\bar z_1|^2+|\bar z_2|^2)+\frac{u_0}{2}(|\bar z_1|^2+|\bar z_2|^2)^2 -\frac{\sqrt{2}e_0^3}{3\pi}\left(|\bar z_1|^2+|\bar z_2|^2\right)^{3/2}\nonumber\\ &-\frac{1}{12\pi}\left\{\left[m^2+u_0\left(3|\bar z_1|^2+|\bar z_2|^2\right)\right]^{3/2} +\left[m^2+u_0\left(|\bar z_1|^2+3|\bar z_2|^2\right)\right]^{3/2} +2(N_b-1)\left[m^2+u_0\left(|\bar z_1|^2+|\bar z_2|^2\right)\right]^{3/2}\right\}, \nonumber\\ \end{eqnarray} where $m^2$ is given by \begin{equation} \label{m2} m^2=r_0+\left(\frac{N_b+1}{2}u_0+\frac{4e_0^2}{3}\right)\frac{\Lambda}{\pi^2}. \end{equation} The third term in Eq. (\ref{Ueff-AF}) arises from the integration over the gauge field and is reminiscent of the fluctuation-corrected mean-field theory in the GL superconductor. 
\cite{Halperin-Lubensky-Ma,Nogueira-Kleinert-Lecture} Due to this term the symmetry will be broken for $m^2>0$, thereby leading to a weakly first-order phase transition. \cite{Halperin-Lubensky-Ma} Note that for $e_0=0$ we have a second-order phase transition in the universality class of an $O(2N_b)$ symmetric classical magnetic system in three dimensions. \subsection{Effective potential for the algebraic charge liquid} Finally, let us consider the one-loop effective potential for the Lagrangian (\ref{L}). First we note that the one-loop contribution following from integrating out the Dirac fermions is given by \begin{equation} {\cal L}_f^{\rm eff}=\frac{N_fe_0^2}{32}F_{\mu\nu}\frac{1}{\sqrt{-\partial^2}}F_{\mu\nu}, \end{equation} where $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$ (note that the above expression is multiplied by a factor two in the case of two-component Dirac fermions). The above expression follows simply by calculating the one-loop vacuum polarization in $2+1$ dimensions. The background fields are chosen in the same way as in the previous Subsection. After integrating out the quadratic gauge fluctuations for a fixed spinon background, we obtain the correction \begin{equation} \delta{\cal L}_f^{\rm eff}=\frac{1}{V}\left\{{\rm Tr}\ln\left[-\partial^2+\frac{N_fe_0^2}{8}\sqrt{-\partial^2}+2e_0^2 (|\bar z_1|^2+|\bar z_2|^2)\right]-{\rm Tr}\ln(-\partial^2)\right\}. 
\end{equation} After evaluating the above tracelog and integrating out the spinon Gaussian fluctuations, we obtain \begin{eqnarray} \label{Ueff-ACL} &&U_{\rm eff}^{\rm ACL}(\bar z_1,\bar z_2)=m^2 (|\bar z_1|^2+|\bar z_2|^2)+\frac{u_0}{2}(|\bar z_1|^2+|\bar z_2|^2)^2 +\frac{N_fe_0^4}{16\pi^2}\left(|\bar z_1|^2+|\bar z_2|^2-\frac{N_f^2e_0^2}{384}\right) \ln\left[\frac{2e_0^2(|\bar z_1|^2+|\bar z_2|^2)}{\Lambda^2}\right] \nonumber\\ &&+\frac{(N_fe_0^2/8)^3-(5N_fe_0^4/4)(|\bar z_1|^2+|\bar z_2|^2)+(128e_0^2/N_f)(|\bar z_1|^2+|\bar z_2|^2)^2}{ 12\pi^2\sqrt{1-\frac{512}{N_f^2e_0^2}(|\bar z_1|^2+|\bar z_2|^2)}}\ln\left[\frac{1-\sqrt{1-\frac{512}{N_f^2e_0^2}(|\bar z_1|^2+|\bar z_2|^2)}}{1+\sqrt{1-\frac{512}{N_f^2e_0^2}(|\bar z_1|^2+|\bar z_2|^2)}}\right] \nonumber\\ &&-\frac{1}{12\pi}\left\{\left[m^2+u_0\left(3|\bar z_1|^2+|\bar z_2|^2\right)\right]^{3/2} +\left[m^2+u_0\left(|\bar z_1|^2+3|\bar z_2|^2\right)\right]^{3/2} +2(N_b-1)\left[m^2+u_0\left(|\bar z_1|^2+|\bar z_2|^2\right)\right]^{3/2}\right\},\nonumber\\ \end{eqnarray} where $m^2$ is still given by Eq. (\ref{m2}). Note that the annoying third term of Eq. (\ref{Ueff-AF}) disappeared once Dirac fermions were included. For $N_f\to 0$ the above equation reduces to Eq. (\ref{Ueff-AF}), as it should. Thus, we can expect that for $N_f$ sufficiently small a first-order phase transition occurs. On the other hand, for $N_f$ above some critical value a second-order phase transition should take place. The leading small $N_f$ correction shifts the mass term and adds a logarithmic correction, resulting in \begin{equation} U_{\rm eff}^{\rm ACL}(\bar z_1,\bar z_2)=U_{\rm eff}^{\rm AF} +\frac{e_0^4N_f}{48\pi^2}(|\bar z_1|^2+|\bar z_2|^2)\left\{2+3\ln\left[\frac{2e_0^2(|\bar z_1|^2+|\bar z_2|^2)}{\Lambda^2}\right] \right\}+{\cal O}(N_f^2). \end{equation} The phase transition described by the above effective potential is certainly a first-order one. 
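The fluctuation-induced character of this first-order transition can be illustrated numerically. In the toy sketch below (illustrative couplings, not taken from the paper) only the tree-level terms and the $\rho^{3/2}$ term generated by the gauge field are kept, with $\rho=|\bar z_1|^2+|\bar z_2|^2$; bisection then locates the point where the nontrivial minimum becomes degenerate with $U=0$, which occurs at a positive $m^2_c=c^2/(2u_0)$ with $c=\sqrt{2}e_0^3/(3\pi)$:

```python
import math

# Toy potential: U(rho) = m^2 rho + (u0/2) rho^2 - c rho^{3/2},
# keeping only the gauge-fluctuation cubic-like term (parameters arbitrary).
e0, u0 = 1.0, 0.5
c = math.sqrt(2.0) * e0**3 / (3.0 * math.pi)

def U(rho, m2):
    return m2 * rho + 0.5 * u0 * rho**2 - c * rho**1.5

def min_value(m2, rho_max=2.0, n=20_000):
    # minimal value of U on a rho grid (the trivial minimum sits at U = 0)
    return min(U(rho_max * i / n, m2) for i in range(1, n + 1))

# Bisect for the m^2 at which the broken minimum degenerates with U = 0.
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if min_value(mid) < 0.0:
        lo = mid          # broken minimum still below zero: increase m^2
    else:
        hi = mid
m2c_analytic = c**2 / (2.0 * u0)
print(lo, m2c_analytic)
```

Since the degeneracy point $m^2_c$ is strictly positive, the transition occurs before the would-be critical point $m^2=0$ is reached, i.e., it is (weakly) first order.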
To see how the second-order phase transition takes place, we perform a large $N_f$ expansion with the limit $N_f\to\infty$ being taken with $N_fe_0^2$ kept fixed. Up to order $1/N_f$ we have \begin{eqnarray} \label{Ueff-largeN} &&U_{\rm eff}^{\rm ACL}(\bar z_1,\bar z_2)=(m^2-m^2_c)(|\bar z_1|^2+|\bar z_2|^2)+\left(\frac{u_0}{2}-\frac{8e_0^2}{N_f\pi^2}\right) (|\bar z_1|^2+|\bar z_2|^2)^2\nonumber\\ &&-\frac{1}{12\pi}\left\{\left[m^2+u_0\left(3|\bar z_1|^2+|\bar z_2|^2\right)\right]^{3/2} +\left[m^2+u_0\left(|\bar z_1|^2+3|\bar z_2|^2\right)\right]^{3/2} +2(N_b-1)\left[m^2+u_0\left(|\bar z_1|^2+|\bar z_2|^2\right)\right]^{3/2}\right\}\nonumber\\ &&+{\cal O}\left(\frac{1}{N_f^2}\right), \end{eqnarray} \end{widetext} where \begin{equation} m_c^2=\frac{e_0^4N_f}{24\pi^2}\left[3\ln\left(\frac{8\Lambda}{N_fe_0^2}\right)-1\right]. \end{equation} The $(|\bar z_1|^2+|\bar z_2|^2)^2$-term is reminiscent of the Landau expansion of the dual formulation of the GL model of a superconductor,\cite{Kleinert-tric,Kleinert-tric-1} where a tricritical point was shown to exist. By defining a ``Ginzburg parameter'' $\kappa^2=u_0/(2e_0^2)$, we see that the effective potential (\ref{Ueff-largeN}) is stable only for $\kappa^2>8/(N_f\pi^2)$, in which case a second-order phase transition occurs. The precise characterization of the change of behavior as a function of $N_f$ will be studied in the next Section by means of the RG. \section{Quantum critical behavior of the SU(2) algebraic charge liquid} The $\beta$ functions at one-loop order in $d=4-\epsilon$ dimensions are easily calculated using standard methods.\cite{KSF,ZJ} The dimensionless couplings are given as a function of the momentum scale $\mu$ by $f=Z_A\mu^{-\epsilon}e_0^2$ and $g=Z_z^2\mu^{-\epsilon}u_0/Z_u$, where $Z_A$ and $Z_z$ are the wave function renormalizations for the gauge field and spinon field, respectively. $Z_u$ is obtained from the vertex function associated to spinon self-interaction. 
The renormalization factor $Z_A$ is obtained directly from the one-loop vacuum polarization. The $\beta$ functions $\beta_f\equiv\mu\partial f/\partial\mu$ and $\beta_g\equiv\mu\partial g/\partial\mu$ are given by \begin{equation} \label{betaf} \beta_f=-\epsilon f+\frac{4N_f+N_b}{3}f^2, \end{equation} and \begin{equation} \label{betag} \beta_g=-\epsilon g-6fg+(N_b+4)g^2+6f^2, \end{equation} where $f$ and $g$ were rescaled as $f\to 8\pi^2f$ and $g\to 8\pi^2g$ in order to remove unnecessary geometrical factors. Note that at one-loop the number of fermion components $N_f$ appears only in Eq. (\ref{betaf}). Of course, for $N_f\to 0$ the above result coincides with the one for the Ginzburg-Landau superconductor. \cite{Halperin-Lubensky-Ma,Folk,Kolnberger} For $N_b=2$, and provided $N_f\geq 4$, there are two fixed points $(g_\pm,f_*)$ at nonzero gauge coupling whose coordinates are given by \begin{equation} f_*=\frac{3\epsilon}{2(2N_f+1)}, \end{equation} and \begin{equation} g_\pm=\frac{5+N_f\pm\sqrt{N_f^2+10N_f-56}}{6(2N_f+1)}\epsilon. \end{equation} From the above fixed points, only $(g_+,f_*)$ is infrared stable, thus corresponding to the quantum critical point where the critical exponents will be calculated. A schematic flow diagram is shown in Fig. 1. We see that the effect of Dirac fermions on the phase structure is quite remarkable, since neither $N_b$ nor $N_f$ need to be large in order to attain criticality. This behavior is in strong contrast with the one obtained in the limit $N_f\to 0$, where quantum criticality occurs only for $N_b>182.9$.\cite{Nogueira-Kragset-Sudbo} \begin{figure} \includegraphics[width=8cm]{scfixp.eps} \caption{Schematic flow diagram associated with the $\beta$ functions (\ref{betaf}) and (\ref{betag}) for $N_b=2$ and $N_f\geq 4$. In the figure $P_\pm=(g_\pm,f_*)$. Note that only $P_+$ is infrared stable, thus corresponding to a quantum critical point. 
For a vanishing gauge coupling we have a second-order phase transition governed by an $O(4)$ Wilson-Fisher fixed point. This fixed point is unstable for $f\neq 0$.} \end{figure} The case $N_f=4$ exhibits a peculiar critical behavior, as the calculations of the critical exponents will show. In fact, for $N_f=4$ the fixed points $(g_-,f_*)$ and $(g_+,f_*)$ coincide. The critical exponent $\nu$ is obtained through the calculation of the insertion of the operator $\sum_\alpha|z_\alpha|^2$ in the correlation function $\langle z_\alpha(x)z_\alpha^*(y)\rangle$.\cite{KSF,ZJ} This leads to new singularities, so that we have to introduce a new renormalization constant, $Z_2$, associated with this operator insertion. In pure scalar theories $Z_2$ is very simply calculated by differentiating the two-point function at zero momentum with respect to the mass. However, in general it is better to perform the renormalization at nonzero momenta in order to avoid certain infrared divergences, especially when the scalar fields are coupled to a gauge field, as in the case studied here. The standard way to calculate $\nu$ is to compute the following RG function at the infrared stable fixed point: \begin{equation} \label{gamma2} \gamma_2\equiv\mu\frac{\partial\ln(Z_2/Z_z)}{\partial\mu}, \end{equation} which leads to \begin{equation} \frac{1}{\nu}=2+\eta_2, \end{equation} where $\eta_2$ is the value of $\gamma_2$ at the infrared stable fixed point. At one-loop order, we have \begin{equation} \gamma_2=3f-(N_b+1)g. \end{equation} By inserting $f_*$ and $g_+$ for $N_b=2$ in the above equations and expanding up to first order in $\epsilon$, we obtain \begin{equation} \label{nu} \nu=\frac{1}{2}+\frac{N_f-4+\sqrt{N_f^2+10N_f-56}}{4(1+2N_f)}~\epsilon+ {\cal O}(\epsilon^2). \end{equation} The critical exponent $\eta_N$ is also derived through a quadratic operator insertion. However, in this case there is a mass anisotropy involved. 
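As a numerical cross-check (a sketch, not from the paper), the quoted fixed-point coordinates can be verified directly against the one-loop $\beta$ functions (\ref{betaf}) and (\ref{betag}), and Eq. (\ref{nu}) can be evaluated at $\epsilon=1$:

```python
import math

# One-loop beta functions for the rescaled couplings, Eqs. (betaf) and (betag)
def beta_f(f, Nf, eps=1.0, Nb=2):
    return -eps * f + (4 * Nf + Nb) / 3.0 * f**2

def beta_g(f, g, eps=1.0, Nb=2):
    return -eps * g - 6 * f * g + (Nb + 4) * g**2 + 6 * f**2

def fixed_points(Nf, eps=1.0):
    # closed-form coordinates quoted in the text (real for Nf >= 4 at Nb = 2)
    root = math.sqrt(Nf**2 + 10 * Nf - 56)
    f_star = 3 * eps / (2 * (2 * Nf + 1))
    g_plus = (5 + Nf + root) * eps / (6 * (2 * Nf + 1))
    g_minus = (5 + Nf - root) * eps / (6 * (2 * Nf + 1))
    return f_star, g_plus, g_minus

def nu(Nf, eps=1.0):
    # correlation-length exponent, Eq. (nu)
    root = math.sqrt(Nf**2 + 10 * Nf - 56)
    return 0.5 + (Nf - 4 + root) / (4 * (1 + 2 * Nf)) * eps

for Nf in (4, 5, 10):
    f_star, g_plus, g_minus = fixed_points(Nf)
    print(Nf, beta_f(f_star, Nf), beta_g(f_star, g_plus), nu(Nf))
```

Both $\beta$ functions vanish at $(g_\pm,f_*)$ to machine precision, and $\nu$ reproduces the values quoted later in Table I (e.g. $\nu=1/2$ at $N_f=4$ and $\nu\approx 0.62179$ at $N_f=5$).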
For a general $N_b$ the correlation function ${\cal G}(x)\equiv\langle{\bf n}(x)\cdot{\bf n}(0)\rangle$ is given in terms of spinon fields as \begin{equation} \label{G} {\cal G}(x)=2\langle{\bf z}^*(x)\cdot{\bf z}(0)~{\bf z}(x)\cdot{\bf z}^*(0)\rangle -\frac{2}{N_b}\langle|{\bf z}(x)|^2|{\bf z}(0)|^2\rangle. \end{equation} For $N_b=2$ we have simply \begin{eqnarray} {\cal G}(x)&=&2\langle z_1^*(x)z_1(0)z_2(x)z_2^*(0)\rangle+ 2\langle z_1(x)z_1^*(0)z_2^*(x)z_2(0)\rangle \nonumber\\ &-&2\langle|z_1(x)|^2|z_2(0)|^2\rangle+\langle|z_1(x)|^2|z_1(0)|^2\rangle\nonumber\\ &+&\langle|z_2(x)|^2|z_2(0)|^2\rangle. \end{eqnarray} The critical exponent $\eta_N$ is defined by the behavior of ${\cal G}(x)$ at the quantum critical point: \begin{equation} \label{G-scaling} {\cal G}(x)\sim\frac{1}{|x|^{d-2+\eta_N}}, \end{equation} where $\eta_N$ is given in terms of the crossover exponent $\varphi$ in Eq. (\ref{eta-N}). We also note the following scaling behavior at the quantum critical point: \begin{equation} \label{density} \left\langle\sum_\alpha|z_\alpha(x)|^2\sum_\beta|z_\beta(0)|^2\right \rangle\sim\frac{1}{|x|^{d-2+\eta_4}}, \end{equation} where \begin{equation} \eta_4=d+2(1-1/\nu). \end{equation} Thus the scaling dimension of the operator $\sum_\alpha|z_\alpha(x)|^2$ is given by \begin{equation} {\rm dim}\left[\sum_\alpha|z_\alpha(x)|^2\right]=d-1/\nu. \end{equation} That $\eta_4$ depends only on $\nu$ is very easy to see, since it follows from the correlation function between operators $\sum_\alpha|z_\alpha|^2$, whose singular behavior leads to the renormalization constant $Z_2$ and consequently to $\nu$. From the expression (\ref{nu}) for the critical exponent $\nu$, we obtain \begin{equation} \label{eta4} \eta_4=2-\frac{9+2\sqrt{N_f^2+10N_f-56}}{1+2N_f}~\epsilon+{\cal O}(\epsilon^2) \end{equation} The calculation of $\eta_N$ depends on the calculation of the crossover exponent $\varphi$. 
The calculation is facilitated by observing that the scaling dimension of each component of ${\bf n}$ is given by $d-\varphi/\nu$. In particular, we have \begin{equation} {\rm dim}\left[|z_1|^2-|z_2|^2\right]=d-\varphi/\nu. \end{equation} Thus, $\varphi$ can be calculated from the insertions of $|z_1|^2$ and $|z_2|^2$ in the correlation function $G_{11}(x)\equiv\langle z_1(x)z_1^*(0)\rangle$.\cite{Amit} This calculation is sketched in Appendix A. By denoting those insertions by $G_{11,\alpha}$, where $\alpha=1,2$, we can obtain the crossover exponent from the renormalization of $G_{11,1}(x)-G_{11,2}(x)$. This procedure leads to a new renormalization constant $Z_2'$ and, analogously to Eq. (\ref{gamma2}), we can define the RG function \begin{equation} \label{gamma2-prime} \gamma_2'\equiv\mu\frac{\partial\ln(Z_2'/Z_z)}{\partial\mu}, \end{equation} which at the infrared stable fixed point yields \begin{equation} \label{eta2-prime} \eta_2'=\frac{\varphi}{\nu}-2. \end{equation} By comparing the above equation with Eq. (\ref{eta-N}) we see that $\eta_N=d-2-2\eta_2'$. At one-loop order we have \begin{equation} \label{gamma2-prime-1} \gamma_2'=3f-g, \end{equation} and therefore \begin{equation} \eta_N=2+\frac{\sqrt{N_f^2+10N_f-56}-5N_f-25}{3(1+2N_f)}~\epsilon+{\cal O}(\epsilon^2), \end{equation} while the crossover exponent is given by \begin{equation} \varphi=1+\frac{5\left(N_f+\sqrt{N_f^2+10N_f-56}\right)-11}{12(1+2N_f)}~\epsilon +{\cal O}(\epsilon^2). \end{equation} \begin{table} \caption{Values of the critical exponents for $N_b=2$ and different values of $N_f\geq 4$. 
We have set $\epsilon=1$.} \begin{ruledtabular} \begin{tabular}{ccccc} $N_f$ & $\nu$ & $\eta_N$ & $\eta_4$ & $\varphi$\\ \hline 4 & $1/2$ & $1/3$ & 1 & $13/12$\\ 5 & $0.62179$ & $0.61694$ & $0.38929$ & $1.27117$\\ 6 & $0.66009$ & $0.75191$ & $0.33468$ & $1.3245$\\ 7 & $0.68229$ & $0.84305$ & $0.3417$ & $1.35381$\\ 8 & $0.69678$ & $0.90943$ & $0.36696$ & $1.37208$\\ 9 & $0.70689$ & $0.96007$ & $0.39749$ & $1.38429$\\ 10 & $5/7$ & $1$ & $3/7$ & $39/28$\\ 11 & $0.71988$ & $1.0323$ & $0.45837$ & $1.39907$\\ 20 & $0.73978$ & $1.17336$ & $0.64274$ & $1.41792$\\ 50 & $0.74817$ & $1.27148$ & $0.83646$ & $1.42103$ \end{tabular} \end{ruledtabular} \end{table} The critical exponents for three spacetime dimensions ($\epsilon=1$) are shown for several values of $N_f$ in Table I. As already mentioned, the case $N_f=4$ is peculiar. Indeed, for $N_f=4$ the critical exponents are rational and, moreover, it is the only case where $\eta_N<\eta_4$. Note that for $N_f=10$ the exponents are also rational, but in this case $\eta_N>\eta_4$, just like for all the other values of $N_f$. Interestingly, we have that $\eta_N>1$ for $N_f>10$. This result is consistent with the analysis at large $N_f$ and $N_b$ with $N_b/N_f$ arbitrary.\cite{Kaul-Sachdev} Indeed, in that case we find that $\eta_N>1$ if $N_f/N_b>3$ (if four-component Dirac fermions are used, the latter inequality becomes $N_f/N_b>3/2$). \section{Chiral symmetry breaking and chiral susceptibility} We have already mentioned in the Introduction that CSB constitutes an important physical aspect of both algebraic Fermi and spin liquids. In this Section we want to investigate the effect of CSB on the ACL.\cite{Liu} This is typically a phenomenon that takes place at low energies. In the case of the AFL and ASL it occurs for $|p|\ll N_fe_0^2$.\cite{Pisarski,Appelq} For the ACL this corresponds roughly to $|p|\ll\alpha\equiv(N_f+N_b/2)e_0^2$. 
In this regime the gauge field propagator is dominated by the one-loop vacuum polarization and is given by \begin{equation} D_{\mu\nu}(p)\approx\frac{16}{(2N_f+N_b)e_0^2|p|}\left(\delta_{\mu\nu}-\frac{p_\mu p_\nu}{p^2}\right), \end{equation} where we have used the Landau gauge and assumed a three-dimensional Euclidean spacetime. Following the standard literature,\cite{Pisarski,Appelq} we consider the one-loop fermion Green function with a dressed fermionic propagator and where the above gauge field propagator has been inserted. The inverse propagator is given by $G_\psi^{-1}(p)=i\slashchar{p}Z(p)+\Sigma(p)$ and the corresponding self-consistent equation reads \begin{widetext} \begin{equation} G_\psi^{-1}(p)=i\slashchar{p}+\frac{16}{2N_f+N_b}\left\{ \int\frac{d^3k}{(2\pi)^3}\frac{\gamma_\mu [\Sigma(k-p)+i(\slashchar{k}-\slashchar{p})Z(k-p)]\gamma_\mu} {[Z^2(k-p)(k-p)^2+\Sigma^2(k-p)]|k|} -\int\frac{d^3k}{(2\pi)^3}\frac{\slashchar{k}[\Sigma(k-p)+i(\slashchar{k}- \slashchar{p})Z(k-p)]\slashchar{k}}{[Z^2(k-p)(k-p)^2+\Sigma^2(k-p)]|k|^3}\right\}. \end{equation} Note that the only difference with respect to the standard analysis is the coefficient in front of the curly brackets, which gets modified due to the spinon loop contribution to the vacuum polarization. We will set $N_b=2$, which is the case of physical interest to us. 
Furthermore, in order to simplify the analysis we will assume that $Z(p)\approx 1$.\cite{Appelq,Gusynin-1} After standard manipulations involving the algebra of the gamma matrices, we obtain the self-consistent equation for the self-energy: \begin{equation} \Sigma(p)=\frac{32}{2N_f+1}\int\frac{d^3k}{(2\pi)^3}\frac{\Sigma(k)} {[k^2+\Sigma^2(k)]|k+p|}. \end{equation} Integrating over the angles, we obtain \begin{eqnarray} \label{sc-int-sig} \Sigma(p)&=&\frac{8}{(2N_f+1)\pi^2 |p|}\int_0^\alpha d|k| \frac{|k|\Sigma(k)(|k|+|p|-|k-p|)}{k^2+\Sigma^2(k)}\nonumber\\ &=&\frac{16}{(2N_f+1)\pi^2 |p|}\left[\int_0^{|p|}d|k|\frac{k^2\Sigma(k)}{k^2+\Sigma^2(k)} +|p|\int_{|p|}^\alpha d|k|\frac{|k|\Sigma(k)}{k^2+\Sigma^2(k)}\right], \end{eqnarray} \end{widetext} which can easily be converted to the differential equation \begin{equation} \label{d-Sig} \frac{d}{dp}\left[p^2\frac{d\Sigma(p)}{dp}\right]= -\frac{16}{\pi^2(2N_f+1)}\frac{p^2\Sigma(p)}{p^2+\Sigma^2(p)}. \end{equation} The above equation can be linearized in the regime $|p|\gg\Sigma(p)$ where it can be solved. We will omit the details here, since the analysis is well known.\cite{Appelq} The solution here is only slightly changed due to the spinon degrees of freedom. The result is a dynamically generated mass gap $\Sigma(0)\sim\langle\bar \psi\psi\rangle$ of the form \begin{equation} \Sigma(0)=\alpha\exp\left[-\frac{2\pi}{\sqrt{\frac{64}{\pi^2(2N_f+1)}-1}}\right]. \end{equation} This mass generation is associated with the development of a chiral condensate, which signals the appearance of holon charge density waves in the system. The above mass gap vanishes for $N_f>N_{\rm ch}$, where \begin{equation} N_{\rm ch}=\frac{1}{2}\left(\frac{64}{\pi^2}-1\right)\approx 2.74, \end{equation} which is smaller than the corresponding value in the absence of spinons.\cite{Note-CSB} Thus, CSB seems to occur only in the regime where no quantum critical point is found. 
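A minimal numerical sketch of these closed-form results (not part of the original analysis) evaluates $N_{\rm ch}$ and the ratio $\Sigma(0)/\alpha$ for integer $N_f$:

```python
import math

def gap_over_alpha(Nf):
    # Sigma(0)/alpha from the linearized gap equation; the gap vanishes
    # once the argument of the square root turns negative (Nf > N_ch)
    x = 64.0 / (math.pi**2 * (2 * Nf + 1)) - 1.0
    if x <= 0.0:
        return 0.0                       # no chiral symmetry breaking
    return math.exp(-2.0 * math.pi / math.sqrt(x))

N_ch = 0.5 * (64.0 / math.pi**2 - 1.0)   # critical fermion number, ~2.74
print(N_ch)
for Nf in (1, 2, 3, 4):
    print(Nf, gap_over_alpha(Nf))
```

For $N_f=2$ the gap is nonzero but exponentially small ($\Sigma(0)/\alpha\sim 10^{-5}$), while it vanishes for all integer $N_f\geq 3$, consistent with $N_{\rm ch}\approx 2.74$.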
Therefore, by accepting the calculations and underlying approximations employed so far in this paper, it seems that CSB does not spoil the quantum critical behavior. Another correlation function of interest is the chiral susceptibility \begin{equation} G_\chi(x)=\langle\bar \psi(x)\psi(x)\bar \psi(0)\psi(0)\rangle, \end{equation} whose scaling behavior for a large number of fermion components and $2+1$ dimensions has been calculated in Refs. \onlinecite{Gusynin-2} and \onlinecite{Franz}. At the quantum critical point the chiral susceptibility behaves like \begin{equation} \label{G-chiral} G_\chi(x)\sim\frac{1}{|x|^{d-2+\tilde \eta_4}}, \end{equation} which defines the critical exponent $\tilde \eta_4$. If fluctuations are ignored, we can write simply \begin{equation} G_\chi(x)\sim\frac{1}{|x|^{2(d-1)}}, \end{equation} which leads to $\tilde \eta_4=d$. We can obtain the anomalous dimension $\tilde \eta_4$ in $2+1$ dimensions for large $N_f$ and $N_b$, with $N_f/N_b$ arbitrary, directly from the result of Ref. \onlinecite{Franz} by the simple replacement $8/N\to 8/(N_f+N_b/2)$. The result agrees, of course, with the one obtained in Ref. \onlinecite{Kaul-Sachdev}, up to a trivial difference related to the fact that the latter reference works with two-component Dirac fermions. Let us now compute $\tilde \eta_4$ for the $SU(2)$ ACL to first order in $\epsilon$. Similarly to the case of four-spinon correlation functions, the calculation can be done by computing the insertion of the operator $\bar \psi\psi$ in the correlation function $G_\psi(x)$. This amounts to computing the scaling dimension of the operator $\bar \psi\psi$, i.e., \begin{equation} \label{dim-chiral} {\rm dim}[\bar \psi\psi]=d-1-\tilde \eta_2, \end{equation} where $\tilde \eta_2$ is the fixed point value of the RG function related to mass renormalization. Thus, the easiest way to compute the above scaling dimension is by adding a bare mass term to the Lagrangian ${\cal L}_f$. 
The calculation of the needed renormalization constant is standard and can be found in quantum field theory textbooks.\cite{ZJ} We obtain at one loop the result $\tilde \eta_2=3f_*$. From Eqs. (\ref{G-chiral}) and (\ref{dim-chiral}), we obtain \begin{equation} \tilde \eta_4=d-2\tilde \eta_2, \end{equation} and therefore, \begin{equation} \tilde \eta_4=2\left[2-\frac{N_f+5}{2N_f+1}\epsilon+{\cal O}(\epsilon^2)\right]. \end{equation} In particular, for $N_f=4$ and $\epsilon=1$ we obtain $\tilde \eta_4=2$. In this case we have that the chiral susceptibility behaves at the quantum critical point as \begin{equation} G_\chi(x)\sim\frac{1}{|x|^{3}}, \end{equation} which leads to a logarithmic behavior in momentum space. \section{Conclusion and discussion} In this paper we have studied the quantum critical behavior of a field theory associated with the algebraic charge liquid (abbreviated as ACL throughout this paper), which is a new type of quantum liquid proposed recently in Ref. \onlinecite{Kaul}. Our study was made in the framework of the $\epsilon$-expansion. To lowest order it leads to a quantum critical point provided the number of fermion components $N_f\geq 4$. Since we are considering a representation using four-component spinors, the physically relevant number of fermions is $N_f=2$.\cite{Note-3} Thus, in the framework of the present study, we are unable to find a deconfined quantum critical point for $N_f=2$. However, it is likely that an improved approximation will show that the theory is also critical for $N_f=2$. The point is that the theory described by the Lagrangian (\ref{L-boson}) should, in a nonperturbative framework, exhibit an infrared stable fixed point for all values of $N_b\geq 1$. The $\epsilon$-expansion cannot capture such a strong-coupling regime. At the moment there is no reliable nonperturbative scheme for this problem. 
As discussed in the introduction, a resummed three-loop $\epsilon$-expansion would provide the desired controlled approximation. The inclusion of Dirac fermions makes the $\epsilon$-expansion more reliable, since they essentially make the gauge coupling weaker. It is remarkable that in such a case it is no longer necessary to have a large number of spinons to obtain a quantum critical point. Interestingly, the number of Dirac fermions does not need to be large either. Thus, we were able to explicitly exhibit a deconfined quantum critical point for an $SU(2)$ theory of bosonic spinons. In order to convincingly show how the Dirac fermions make the full theory better behaved, let us have a look at the two-loop $\beta$ function for the gauge coupling. This result can be easily derived by using the earlier two-loop result for the Lagrangian (\ref{L-boson}),\cite{Kolnberger} combined with the higher order result for the Lagrangian (\ref{L-fermion}).\cite{Gorishny,Gracey} The result is \begin{equation} \beta_f=-\epsilon f+\frac{N_b+4N_f}{3}f^2+2(N_b+2N_f)f^3, \end{equation} which has a nontrivial fixed point at \begin{equation} f_*=\frac{3\epsilon}{N_b+4N_f}\left[1-\frac{18}{N_b+4N_f}\epsilon+{\cal O}(\epsilon^2)\right]. \end{equation} For $N_f=0$ we obtain that $f_*>0$ only for $N_b>18\epsilon$. Furthermore, in this limit the two-loop result for $\beta_g$ shows that we still have an infrared stable fixed point only for $N_b>182.9$. Thus, besides the two-loop result for $N_f=0$ being unable to produce a quantum critical point for low values of $N_b$, it introduces a new difficulty, since a nontrivial fixed point for $f$ no longer exists for all $N_b\geq 1$. However, the situation improves considerably if $N_f\neq 0$. Indeed, in this case we have that $f_*>0$ for $N_b>18\epsilon-4N_f$. Thus, for $N_b=2$ we should have $N_f>(9\epsilon-1)/2$, which after setting $\epsilon=1$ becomes $N_f>4$. 
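The positivity condition on the truncated two-loop fixed point can be checked with a short sketch (using only the expansion quoted above, not the full cubic $\beta$ function):

```python
def f_star_2loop(Nb, Nf, eps=1.0):
    # truncated two-loop expansion of the gauge fixed point, as quoted in the text
    A = Nb + 4 * Nf
    return 3.0 * eps / A * (1.0 - 18.0 * eps / A)

def Nf_threshold(Nb, eps=1.0):
    # f* > 0 in the truncated expansion requires Nb + 4 Nf > 18 eps
    return (18.0 * eps - Nb) / 4.0

print(Nf_threshold(2))                       # Nb = 2, eps = 1: need Nf > 4
print(f_star_2loop(2, 4), f_star_2loop(2, 5))
```

At $N_b=2$ and $\epsilon=1$ the truncated fixed point vanishes exactly at $N_f=4$ and is positive for $N_f=5$, in line with the bound $N_f>(9\epsilon-1)/2$.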
While this clearly shows that the introduction of Dirac fermions improves the behavior of the $SU(2)$ antiferromagnet, it is a little bit frustrating to see that at two loops no quantum critical point for $N_f=4$ is found. However, we should keep in mind that the theory in the absence of fermions is not being considered in a strong-coupling regime. Once this is done in some way, a critical regime even for values as low as $N_f=2$ should be expected to occur. Note that even if such a result is achieved, we may have to face another problem, namely, CSB. The calculation in Sect. IV indicates that CSB leads to gapped fermions for $N_f=2$. If this number is really correct, then there will be no deconfined quantum critical point anyway. Physically this means that the holon charge density waves will screen the interspinon interaction in such a way as to make the spinon correlation length finite, and the only possible phase transition, if any, would be a first-order one. Adopting a more optimistic point of view, let us mention that there is strong evidence that the values of $N_{\rm ch}$ calculated using methods like the one we have used in Sect. IV tend to be overestimated.\cite{Appel,Hands,Note-4} Since the spinons contribute further to reduce the value of $N_{\rm ch}$, we may hope that the quantum critical point for $N_f=2$ will survive. Larger values of $N_f$ are physically relevant for multilayer systems. As discussed in the Introduction, this is precisely the case with the AFL.\cite{FT} However, we must be aware of the fact that in the case of the ACL additional gauge fields may play a role, so that the analysis made here would have to be modified accordingly. The main achievement of this paper was the computation of the anomalous dimensions of composite operators associated with the quantum critical behavior of the Lagrangian (\ref{L}). Particularly important was the calculation of the anomalous dimension $\eta_N$ of the N\'eel field. 
The results presented here constitute an explicit and well-controlled example of a deconfined quantum critical point for a system having $SU(2)$ invariance. The computed values of $\eta_N$ are substantially larger than the ones that would follow from an LGW analysis, reflecting a typical feature of the DQC scenario. \acknowledgments The author would like to thank Hagen Kleinert and Zlatko Tesanovic for discussions. This work received partial support from the Deutsche Forschungsgemeinschaft (DFG), Grant No. KL 256/46-1.
\section{Introduction} \label{sec:intro} Early phase II studies in clinical oncology are conducted after investigation of safety and dosage of a new anti-tumor agent in preceding phase I studies. The objective of such early phase II trials is to identify substances that show promising anti-tumor activity warranting further research in larger phase II or confirmatory phase III studies. Usually, early phase II trials in clinical oncology are designed as single-arm studies with a binary endpoint indicating whether patients had substantial tumor remission or, at least, no progression during a defined follow-up period. Let $\rho$ be the probability of observing a favorable outcome (response) of the binary endpoint. The main interest lies in testing the null hypothesis $\mathcal{H}_0:\rho \leq \rho_0$ for some pre-specified value $\rho_0$ chosen as the maximal value still considered clinically uninteresting. Due to the early stage of clinical research at which this type of study is conducted, there is usually a high degree of uncertainty about the magnitude of $\rho$. In order to compensate for this uncertainty during planning, group sequential designs \citep{Jennison2000a} can be employed which allow early stopping for futility or efficacy at pre-defined stages. The large logistical effort of conducting interim analyses and the long follow-up times for oncological endpoints render multiple stages or strictly sequential testing impractical. Instead, designs involving a single interim analysis after observing a pre-specified number of patients are viable compromises between the requirements of clinical practice and the desire for the option of early stopping for either futility or efficacy. In his seminal paper, \cite{Simon1989} derived two types of two-stage group sequential designs which either minimize the expected sample size under the null hypothesis (Simon's optimal designs) or the maximal sample size (Simon's minimax designs). 
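Simon's optimality criterion can be illustrated with a brute-force search (a sketch for illustration only, not the binary linear programming approach developed in this paper; the convention ``reject $\mathcal{H}_0$ iff $X>r$, continue iff $X_1>r_1$'' and the parameter choice $\rho_0=0.05$, $\rho_1=0.25$, $\alpha=0.05$, $\beta=0.2$ are assumptions made for this example):

```python
from math import comb

def pmf_list(n, p):
    # binomial pmf as a list indexed by the number of responses
    return [comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)]

def tail_list(pmf):
    # tail[t] = P(X >= t), with tail[len(pmf)] = 0
    tail = [0.0] * (len(pmf) + 1)
    for x in range(len(pmf) - 1, -1, -1):
        tail[x] = tail[x + 1] + pmf[x]
    return tail

def reject_prob(pmf1, tail2, r1, r, n1, n2):
    # continue to stage two iff X1 > r1; reject H0 at the end iff X > r
    return sum(pmf1[x1] * tail2[min(max(r - x1 + 1, 0), n2 + 1)]
               for x1 in range(r1 + 1, n1 + 1))

def simon_optimal(p0, p1, alpha, beta, n_max=21):
    # exhaustive search minimizing E[N | p0] (Simon's optimality criterion)
    best = None
    for n1 in range(1, n_max):
        pmf1_0, pmf1_1 = pmf_list(n1, p0), pmf_list(n1, p1)
        for n in range(n1 + 1, n_max + 1):
            n2 = n - n1
            t2_0 = tail_list(pmf_list(n2, p0))
            t2_1 = tail_list(pmf_list(n2, p1))
            for r1 in range(n1):
                pet0 = sum(pmf1_0[:r1 + 1])      # P(early futility stop | p0)
                en0 = n1 + (1 - pet0) * n2       # expected sample size under H0
                if best is not None and en0 >= best[0]:
                    continue
                for r in range(r1, n):
                    if reject_prob(pmf1_0, t2_0, r1, r, n1, n2) > alpha:
                        continue                 # r too small: alpha violated
                    if reject_prob(pmf1_1, t2_1, r1, r, n1, n2) >= 1 - beta:
                        best = (en0, r1, n1, r, n)
                    break                        # larger r only lowers power
    return best

best = simon_optimal(0.05, 0.25, alpha=0.05, beta=0.20)
print(best)  # (E[N | p0], r1, n1, r, n)
```

The returned design stops for futility after stage one if $X_1\leq r_1$ and minimizes the expected sample size under the null hypothesis among all designs on the searched grid that respect the error-rate constraints.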
Other optimality criteria have been proposed in the literature. For example, \cite{Shuster2002} aimed at optimizing the maximal expected sample size over the complete range of success probabilities. In the following, we focus on the optimality criterion of Simon's optimal designs. This is due to its wide acceptance among practitioners and its intuitive appeal: When testing potentially harmful substances \textit{in vivo} one would like to minimize the number of patients exposed to non-active or even harmful drugs. Should, on the other hand, the drug under investigation be beneficial, administering it to a larger number of patients within the study is unproblematic. Therefore, minimizing expected sample size under the null hypothesis better meets the ethical requirements than, say, minimizing expected sample size under a pre-specified alternative value $\rho_1> \rho_0$. While two-stage designs allow early stopping in case of a lower or higher treatment effect than assumed in the planning stage, they require that the initially defined sample sizes and decision rules be followed strictly in order to ensure control of the type one error rate. In recent years, the development of more flexible methods for reacting to the observed interim outcome of a trial has led to an increasing research effort concerning so-called adaptive designs \citep{Bauer2015}. This class of designs allows not only stopping early for futility or efficacy but also adjusting the sample size of the second stage based on the observed interim outcome, as long as the conditional error of the initially planned design is preserved. Transferring the conditional error function principle \citep{Muller2004} to discrete data, \cite{Englert2012a, Englert2012} derived flexible designs for single-arm studies with binary endpoints. 
Furthermore, formulating the decision rules in terms of the discrete conditional error function results in counterparts of Simon's designs that are flexible and at the same time at least as efficient if the pre-defined sample size is not changed. The same theoretical framework can be used to find sample size adaptation rules that are optimal with respect to, e.g., the expected sample size under the null hypothesis, thus extending Simon's designs in a natural way. Previous approaches to the problem of finding optimal adaptive designs in the sense of \cite{Simon1989} needed to impose technical constraints in order to render the optimization feasible. \cite{Englert2013} imposed the additional constraint that the conditional error function of the design should be non-decreasing in the number of observed positive outcomes after stage one (\mbox{EK designs}). While it might be intuitive to `shift' type one error to more promising stage-one outcomes, this was primarily a constraint for technical convenience, as it also guarantees consistent regions for stopping for futility (conditional error of 0) or efficacy (conditional error of 1). Yet, in some situations, the conditional error constraint is not sufficient to prevent the sample size function of the designs from being shaped oddly. For example, the optimal design for $\rho_0=0.5$, a fixed alternative value of $0.7$, and $\alpha=0.05,\beta=0.1$ in \cite{Englert2013} requires 47 subjects for stage two upon observing 13 responses out of 20 subjects in stage one, 44 for 14 responses, but 47 again for 15 responses (Table~1 in \cite{Englert2013}). This non-smooth sample size function resulting from the discreteness of the underlying distribution is unintuitive and might impede adoption of these designs in practice. The goal of this paper is two-fold. Firstly, we demonstrate that the problem can be solved in feasible time without any additional technical constraints using binary linear programming, cf.
\cite{Garfinkel1972, Conforti2014}. Subsequently, we investigate how restrictive the monotone conditional error constraint of Englert and Kieser is in practice and suggest a novel approach to obtaining `nice' solutions which prevents the potential pathologies of the sample size function with negligible increase of the expected sample size under the null hypothesis. The remainder of this paper is organized as follows. In Section~\ref{sec:notation}, we introduce the notation used throughout the paper and describe the optimization problem for Simon's optimality criterion in an adaptive two-stage setting. Section~\ref{sec:formulation} explains how the problem can be formulated as a binary linear program with minimal constraints, and Section~\ref{sec:tuning} explains the link to previous algorithms and explores the possibility of improving the quality of the solutions obtained with respect to various aspects. A numerical comparison for a range of commonly encountered parameter configurations is given in Section \ref{sec:results}. The discussion in Section \ref{sec:discussion} highlights the main differences of our approach to previous work and gives a prospect of possible extensions. \section{Notation} \label{sec:notation} Throughout this paper, we consider two-stage single-arm clinical trial designs with a binary endpoint. A pre-defined number of patients is enrolled during the first stage. Based on the number of observed responses and a pre-defined sample size function, it is then decided how many patients are included in the second stage. It is possible to stop early either for futility or for efficacy after the first stage. Let $n_1$ be the number of patients enrolled in stage one, $X_1$ the number of responses observed in stage one, $X_2$ the number of responses observed in stage two, and $X=X_1 + X_2$ the overall observed number of such events.
The interest lies in testing $\mathcal{H}_0:\rho \leq \rho_0$ for a pre-specified type one error rate $\alpha$ and a type two error rate $\beta$ at an alternative parameter value $\rho_1>\rho_0$. Any two-stage design addressing this test problem can be described as a tuple $\big(n, c\big)$ of the total sample size function $n:\{0, 1, \ldots, n_1\} \to \{n_1, n_1 + 1, \ldots, n_{max}\}$ and the overall critical value function $c:\{0, 1, \ldots, n_1\} \to \{0, 1, \ldots, n_{max}\}\cup\{-\infty, \infty\}$, where $n_1$ is the number of patients pre-planned for the first stage and $n_{max}$ is the maximal total sample size. For any observed number of responses at interim $x_1$, the total sample size function $n(x_1)$ determines the number of patients enrolled in both stages and therefore the number needed for the second stage. After completing stage two, the null hypothesis is rejected if and only if $X>c(x_1)$. Besides $n_{max}\geq n_1$, the functions $n(\cdot)$ and $c(\cdot)$ must fulfill the following consistency constraints for all $x_1 \in \{0, \ldots, n_1\}$ \begin{align} & c(x_1) \in \{-\infty, \infty\} \Leftrightarrow n(x_1) = n_1 \label{eq:consistency}\\ & n(x_1) > n_1 \Rightarrow n(x_1) > c(x_1) \geq x_1, \nonumber \end{align} which ensure that the final test decision is not yet fixed when continuing to the second stage. \section{Optimal two-stage design} \label{sec:formulation} Finding the optimal adaptive two-stage design for given type~one and type~two error rates $\alpha,\beta$ using Simon's classical optimality criterion of minimal expected sample size under the null hypothesis can be formulated as the following optimization problem \begin{equation*} \begin{aligned} \underset{n_1,\,n(\cdot),\,c(\cdot)}{\text{minimize}}\quad & \boldsymbol{E}_{\rho_0}\big[\,n(X_1)\,\big] \\ \text{subject to}\quad & \mathbb{P}_{\rho}\big[\,X > c(X_1)\,\big] \leq \alpha \quad \forall\, \rho\leq\rho_0\\ & \mathbb{P}_{\rho_1}\big[\,X > c(X_1)\,\big] \geq 1 - \beta.
\end{aligned} \end{equation*} Note that all quantities involved are discrete and therefore specialized techniques are needed to solve the problem, which is not linear in the original variables $c(x_1),n(x_1), {x_1 = 0,\ldots,n_1}$. By reformulation as an assignment problem using auxiliary variables it can, however, be stated as a binary linear program. This class of problems can be solved efficiently with existing software. Firstly, we simplify the problem by considering the optimization problem only for a particular value of $n_1$. The optimization over $n_1$ can then be performed by solving the conditional problem to optimality for every $n_1=1,2,\ldots$ until $n_1>\boldsymbol{E}_{\rho_0}\big[\,n(X_1)\,\big]$. To this end, for fixed $n_1$, let \begin{align*} \big(y_{x_1,n_2,c}\big), \end{align*} $x_1=0\ldots n_1,n_2=0\ldots n_{max}-n_1, c=-\infty,0,\ldots,n_{max}-1,\infty$ be the three-dimensional binary assignment array with $y_{x_1,n_2,c}=1$ if and only if $n(x_1)-n_1=n_2$ and $c(x_1)=c$. The fact that $n(\cdot)$ and $c(\cdot)$ must be valid functions of $x_1$ is easily represented by constraints of the form \begin{align*} \sum_{n_2, c} y_{x_1,n_2,c} = 1 \quad \forall\, x_1 = 0\ldots n_1. \end{align*} The consistency constraints on $n(\cdot)$ and $c(\cdot)$ in equations (\ref{eq:consistency}) can be implemented as follows \begin{align*} y_{x_1,n_2,c}=0&\quad \text{if}\ c\in\{-\infty,\infty\}\wedge n_2 \in\{1,\ldots, n_{max}-n_1\} \\ y_{x_1,0,c}=0&\quad \text{if}\ c \not\in\{-\infty,\infty\}\\ y_{x_1,n_2,c}=0&\quad \text{if}\ n_2 > 0 \wedge \big(c<x_1\vee n_2 + n_1 \leq c\big).
\end{align*} Additionally, let $\big(ce_{x_1,n_2,c}\big)$ be the corresponding three-dimensional array holding the respective conditional errors $ce(x_1)$, i.e., \begin{align} ce_{x_1,n_2,c} := &\ \mathbb{P}_{\rho_0}\big[X_1 + X_2 > c\,|\,X_1=x_1\big] \nonumber \\ =&\ \mathbb{P}_{\rho_0}\big[X_2 > c - x_1\big], \end{align} and $\big(cp_{x_1,n_2,c}\big)$ the three-dimensional array holding the conditional power $cp(x_1)$ for each configuration, i.e., \begin{align} cp_{x_1,n_2,c} := &\ \mathbb{P}_{\rho_1}\big[X_1 + X_2 > c\,|\,X_1 = x_1\big] \nonumber \\ = &\ \mathbb{P}_{\rho_1}\big[X_2>c-x_1\big]. \end{align} Further, let $\big(n_{x_1,n_2,c}\big)$ be the three-dimensional array of corresponding sample sizes, i.e., \begin{align} n_{x_1,n_2,c} := n_2 + n_1. \end{align} The objective criterion can then be expressed as \begin{align} \underset{\big(y_{x_1,n_2,c}\big)}{\text{minimize}} \quad \sum_{x_1,n_2,c} \mathbb{P}_{\rho_0}\big[X_1=x_1\big]\cdot n_{x_1,n_2,c}\cdot y_{x_1,n_2,c} \end{align} which is linear in the binary assignment variables $y_{x_1,n_2,c}$. An overall power of $1-\beta$ at $\rho_1$ is guaranteed by the linear constraint \begin{align} &\ \sum_{x_1,n_2,c} \mathbb{P}_{\rho_1}\big[X_1=x_1\big]\cdot cp_{x_1,n_2,c}\cdot y_{x_1,n_2,c} \geq 1-\beta. \end{align} Controlling the overall maximal type~one~error rate at $\alpha$ is more complicated. Intuitively, the type~one~error rate should be largest at the boundary of the null hypothesis in which case the linear constraint \begin{align} \sum_{x_1,n_2,c} \mathbb{P}_{\rho_0}\big[X_1=x_1\big]\cdot ce_{x_1,n_2,c}\cdot y_{x_1,n_2,c} \leq \alpha \end{align} would be sufficient to maintain type~one~error rate control. Yet, we were unable to prove that this constraint is sufficient to achieve strict type~one~error rate control on $\mathcal{H}_0$. 
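For concreteness, these coefficient arrays can be tabulated directly from binomial tail probabilities. The sketch below is our own illustration with hypothetical helper names, not the authors' Julia implementation; it uses the parameter values $\rho_0=0.2$ and $\rho_1=0.4$ of the example discussed in this section:

```python
from math import comb, inf

def binom_tail(k, n2, p):
    """P[X2 > k] for X2 ~ Bin(n2, p); k may be +/-inf for early stops."""
    if k == inf:
        return 0.0
    if k == -inf:
        return 1.0
    return sum(comb(n2, x) * p ** x * (1 - p) ** (n2 - x)
               for x in range(max(int(k) + 1, 0), n2 + 1))

rho0, rho1 = 0.2, 0.4  # example parameter values used in this section

def ce(x1, n2, c):
    """Conditional error ce_{x1,n2,c} = P_rho0[X2 > c - x1]."""
    return binom_tail(c - x1, n2, rho0)

def cp(x1, n2, c):
    """Conditional power cp_{x1,n2,c} = P_rho1[X2 > c - x1]."""
    return binom_tail(c - x1, n2, rho1)
```

The linear type one error and power constraints are then simply the $y$-weighted sums of these cell-wise coefficients over all cells $(x_1, n_2, c)$.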
We therefore resort to solving the problem only controlling the type~one~error rate at the boundary of the null hypothesis and numerically verify strict type~one~error rate control for the solutions obtained. For all situations considered here the resulting designs maintained strict type~one~error rate control. As an example, let $n_1=10, n_{max} = 40, \alpha=0.05, \beta=0.2, \rho_0=0.2$ and $\rho_1=0.4$. Table \ref{tab:examples} and Figure \ref{fig:exampleDesigns} show the optimal adaptive design (`optimal') alongside various modifications using additional constraints which are discussed in Section~\ref{sec:tuning}. The optimal design has an expected sample size of 21.241. Although it violates monotonicity of $ce(\cdot)$, it still controls the type~one~error rate at $0.05$, which we checked numerically. Therefore, the example shows that a monotone conditional error function is not a necessary condition for strict type~one~error rate control. Also note that the stopping regions of the optimal design are not contiguous. In fact, the design stops early for efficacy upon observing 7 out of 10 responses in stage one, yet requires a second stage upon observing 8. Although optimal in the sense of the specified criterion, such a behavior is not acceptable in practice and needs to be addressed. In Section~\ref{sec:tuning} we first illustrate how the solutions of Englert~and~Kieser can be reproduced within the framework presented here and then explore alternative options for regularization of the optimization problem.
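The figures quoted for this example can be re-derived from Table~\ref{tab:examples} by direct binomial summation. The following sketch is our own check with hypothetical names, not the optimization code itself; it evaluates the rejection probability and the expected sample size of the `optimal' design:

```python
from math import comb, inf

def pmf(k, n, p):
    """Binomial point probability P[X = k] for X ~ Bin(n, p)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# 'optimal' design from Table 1 (n1 = 10, rho0 = 0.2, rho1 = 0.4)
n1 = 10
n = [10, 10, 17, 38, 40, 36, 39, 10, 27, 10, 10]          # n(x1)
c = [inf, inf, 5, 11, 12, 11, 11, -inf, 10, -inf, -inf]   # c(x1)

def rejection_prob(rho):
    """P_rho[X > c(X1)], summing over the interim outcomes x1."""
    total = 0.0
    for x1 in range(n1 + 1):
        if c[x1] == inf:            # early stop for futility: never reject
            continue
        if c[x1] == -inf:           # early stop for efficacy: always reject
            total += pmf(x1, n1, rho)
            continue
        n2 = n[x1] - n1             # second-stage sample size
        tail = sum(pmf(x2, n2, rho)
                   for x2 in range(int(c[x1]) - x1 + 1, n2 + 1))
        total += pmf(x1, n1, rho) * tail
    return total

def expected_sample_size(rho):
    return sum(pmf(x1, n1, rho) * n[x1] for x1 in range(n1 + 1))

print(round(expected_sample_size(0.2), 3))  # 21.241, matching the text
```

The same routine evaluated on a grid of $\rho$ values in $(0, \rho_0]$ is what a numerical check of strict type~one~error rate control amounts to.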
\begin{table} \centering \caption{Sample size function $n(\cdot)$ and critical value function $c(\cdot)$ of the unconstrained (`optimal'), the monotone conditional error (`EK'), and the unimodal sample size function (`nice') adaptive designs for the example given in Section~\ref{sec:formulation}.} \label{tab:examples} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}rrccccccccccc} \hline Design Type & $x_1$: & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline Optimal & $n(x_1)$: & 10 & 10 & 17 & 38 & 40 & 36 & 39 & 10 & 27 & 10 & 10 \\ &$c(x_1)$: & $\infty$ & $\infty$ & 5 & 11 & 12 & 11 & 11 & -$\infty$ & 10 & - $\infty$ & - $\infty$ \\ \rule{0pt}{4ex} EK &$n(x_1)$: & 10 & 10 & 17 & 38 & 40 & 36 & 39 & 22 & 10 & 10 & 10 \\ &$c(x_1)$: & $\infty$ & $\infty$ & 5 & 11 & 12 & 11 & 11 & 7 & -$\infty$ & -$\infty$ & -$\infty$ \\ \rule{0pt}{4ex} Nice & $n(x_1)$: & 10 & 10 & 17 & 38 & 40 & 37 & 35 & 19 & 10 & 10 & 10 \\ & $c(x_1)$: & $\infty$ & $\infty$ & 5 & 11 & 12 & 11 & 11 & 7 & -$\infty$ & -$\infty$ & -$\infty$ \\ \hline \end{tabular*} \end{table} \begin{figure} \centerline{ \begin{minipage}{\textwidth} \centering \includegraphics[width=\textwidth]{sampleSize.pdf} \includegraphics[width=\textwidth]{probReject.pdf} \end{minipage} } \caption{Sample size function $n(\cdot)$, critical value function $c(\cdot)$, conditional error function $ce(\cdot)$, and conditional power function $cp(\cdot)$ of the discussed adaptive designs for the example given in Section~\ref{sec:formulation}.} \label{fig:exampleDesigns} \end{figure} \section{Regularizing the solution with additional constraints} \label{sec:tuning} \subsection{Monotone conditional error function} The solutions obtained by the branch-and-bound algorithm suggested by \cite{Englert2013} can be reproduced by adding constraints to the optimization problem that enforce a monotone conditional error function. 
As a consequence, it is immediately clear that any solution obtained without these constraints (`optimal' designs) will be at least as good as the one obtained by the EK algorithm in terms of the expected sample size under the null hypothesis. Monotonicity of the conditional error function can be enforced by $n_1$ additional constraints of the form \begin{align*} \sum_{n_2, c} ce_{x_1, n_2, c}\,\big(y_{x_1, n_2, c} -\ y_{x_1 - 1, n_2, c}\big)\geq 0 \end{align*} for $x_1\in\{1,\ldots,n_1\}$. As the example in Table~\ref{tab:examples} shows, however, the monotone conditional error function constraints on their own do not suffice to ensure `nice' solutions with sufficient practical appeal. While the \mbox{EK design} achieves contiguous stopping regions, which are implied by a monotone conditional error function, it does not guarantee a smooth sample size function. Its expected sample size under the null hypothesis is only slightly larger than the optimal solution's (21.250 vs. 21.241). \subsection{Contiguous stopping regions} In order to obtain stopping regions which are connected to their respective boundary (in case of stopping for futility to $x_1=0$ and in case of stopping for efficacy to $x_1=n_1$), one needs to enforce that for any fixed $x_1^*\in\{0,\ldots,n_1\}$ the following property holds: $c(x_1^*) = \infty \Rightarrow c(x_1)=\infty$ for all $x_1 < x_1^*$ and \emph{vice versa} $c(x_1^*) = -\infty \Rightarrow c(x_1)=-\infty$ for all $x_1 > x_1^*$. This set of conditional constraints can be formalized via $2\,n_1$ new binary variables $y^{fut}_{x_1}\in\{0,1\}$ for $x_1 \in\{1,\ldots,n_1\}$ and $y^{eff}_{x_1}\in\{0,1\}$ for $x_1\in\{0,\ldots,n_1 - 1\}$. Let for $x_1 \in\{1,\ldots,n_1\}$ \begin{align*} y_{x_1, 0, \infty} - y^{fut}_{x_1} = 0. \end{align*} Then $y^{fut}_{x_1} = 1$ if and only if $c(x_1) = \infty$ (stopping for futility).
Adding a second constraint \begin{align*} y_{x_1 - 1, 0, \infty} - y^{fut}_{x_1} \geq 0 \end{align*} enforces that $c(x_1 - 1)=\infty$ whenever $y^{fut}_{x_1} = 1$. Therefore, by transitivity, all stage-one outcomes smaller than $x_1$ must also lead to stopping for futility. Similarly, constraints for $y^{eff}_{x_1}$, $x_1 \in\{0,\ldots,n_1-1\}$ can be constructed to ensure a contiguous stopping for efficacy region connected to $x_1=n_1$ \begin{align*} y_{x_1, 0, -\infty} - y^{eff}_{x_1} &= 0\\ y_{x_1 + 1, 0, -\infty} - y^{eff}_{x_1} &\geq 0. \end{align*} While contiguous stopping regions are obviously implied by a monotonically non-decreasing conditional error function, the opposite is not true, and therefore any design using only the contiguous stopping regions constraints instead of enforcing a monotone $ce(\cdot)$ has more flexibility between the stopping regions. In some cases, this additional flexibility might lead to a better performance in terms of minimal expected sample size under the null hypothesis. \subsection{Unimodal sample size function} Motivated by the characteristics of the optimal solutions and the EK designs, we propose to resolve the issue of potentially unintuitive sample size functions by enforcing unimodality of the sample size function, which can be obtained by restricting the number of sign changes from positive to negative of the first-order differences of $n(\cdot)$ to one. This implies that at most one strict local maximum of the sample size function exists. To this end, $n_1 + 1$ additional binary auxiliary variables $y_{x_1}^{\wedge}\in\{0,1\}, x_1\in\{0, \ldots, n_1\}$ and constraint sets \begin{align} \sum_{n_2,c} n_{x_1',n_2,c}\cdot y_{x_1',n_2,c} - n_{(x_1'-1),n_2,c}\cdot y_{(x_1'-1),n_2,c} - 2\cdot n_{max}\cdot y_{x_1}^{\wedge}\geq -2\cdot n_{max}, \end{align} $x_1'\in\{1,\ldots, x_1\}$ are needed. Whenever $y_{x_1}^{\wedge}=1$, these constraints enforce non-negative increments of $n(\cdot)$ up to $x_1$.
Similarly, non-positive increments for values larger than $x_1$ can be guaranteed by \begin{align} \sum_{n_2,c} n_{x_1',n_2,c}\cdot y_{x_1',n_2,c} - n_{(x_1'-1),n_2,c}\cdot y_{(x_1'-1),n_2,c} + 2\cdot n_{max}\cdot y_{x_1}^{\wedge}\leq 2\cdot n_{max} \end{align} for $x_1'\in\{x_1,\ldots,n_1\}$. Finally, one must ensure that at least one of the constraint sets defined above is active in the solution. This can be achieved by \begin{align} \sum_{x_1=0}^{n_1}y_{x_1}^{\wedge}\geq 1. \end{align} Jointly, the constraints for contiguous stopping regions and a unimodal sample size function result in designs which exhibit a `smooth' (unimodal) sample size function in all cases and guarantee that the stopping for efficacy and futility regions are contiguous and connected to their respective boundaries. For the example given above, the inflation of the expected sample size as compared to the optimal design is still negligible (21.252 vs. 21.241). \section{Results} \label{sec:results} We compared the expected sample size under the respective null hypotheses for four different sets of constraints over a range of parameter values for $\rho_0 = 0.1, 0.2, \ldots, 0.7$ and ${\rho_1=\rho_0 + 0.2}$. In all cases, we set $\alpha=0.05$ and $\beta=0.2$. The search space for $n_1$ and $n_{max}$ was chosen with reference to Simon's original designs \citep{Simon1989}. We allowed $n_{max}$ to be $10\%$ larger than the combined stage-one and stage-two sample size of the corresponding optimal design identified by Simon. The search range for $n_1$ was chosen from $5$ up to $n_{max}-5$. Table \ref{tab:results} shows the results. In all cases we numerically verified strict type~one~error rate control of the solutions. Note that the small deviations from the figures reported in \cite{Englert2013} originate from the fact that we did not need to restrict possible assignments by specifying a minimal or maximal conditional power in order to render the optimization feasible.
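Both regularity notions can be checked mechanically for the example designs of Table~\ref{tab:examples}. The sketch below uses our own hypothetical helper functions (not part of the published algorithms) to confirm that the EK design has a monotone conditional error function but a non-unimodal sample size function, while the `nice' design's sample size function is unimodal:

```python
from math import comb, inf

def binom_tail(k, n2, p):
    """P[X2 > k] for X2 ~ Bin(n2, p); k may be +/-inf for early stops."""
    if k == inf:
        return 0.0
    if k == -inf:
        return 1.0
    return sum(comb(n2, x) * p ** x * (1 - p) ** (n2 - x)
               for x in range(max(int(k) + 1, 0), n2 + 1))

def ce_values(n, c, n1, rho0):
    """Conditional error ce(x1): 0 at futility stops, 1 at efficacy stops."""
    return [binom_tail(cx - x1, nx - n1, rho0)
            for x1, (nx, cx) in enumerate(zip(n, c))]

def is_monotone(ce):
    return all(a <= b for a, b in zip(ce, ce[1:]))

def is_unimodal(n):
    d = [b - a for a, b in zip(n, n[1:]) if b != a]  # nonzero increments
    changes = sum(1 for a, b in zip(d, d[1:]) if (a > 0) != (b > 0))
    down_up = any(a < 0 < b for a, b in zip(d, d[1:]))
    return changes <= 1 and not down_up              # at most one + to - switch

n1, rho0 = 10, 0.2
designs = {  # (n(.), c(.)) pairs from Table 1
    "optimal": ([10, 10, 17, 38, 40, 36, 39, 10, 27, 10, 10],
                [inf, inf, 5, 11, 12, 11, 11, -inf, 10, -inf, -inf]),
    "EK":      ([10, 10, 17, 38, 40, 36, 39, 22, 10, 10, 10],
                [inf, inf, 5, 11, 12, 11, 11, 7, -inf, -inf, -inf]),
    "nice":    ([10, 10, 17, 38, 40, 37, 35, 19, 10, 10, 10],
                [inf, inf, 5, 11, 12, 11, 11, 7, -inf, -inf, -inf]),
}
```

Such checks are also how one can see which of the additional constraint sets is actually active for a given solution.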
All computations were conducted in the programming language Julia \citep{Bezanson2014} using its interface \citep{Lubin2015} to the commercial Gurobi solver \citep{Gurobi2015}. Graphics were produced using R \citep{r2015} and the ggplot2 package \citep{ggplot2009}. The reference for our comparison is the optimal adaptive design without any additional constraints. This design allows for maximal flexibility and must therefore exhibit the smallest expected sample size under the null hypothesis. Furthermore, we included designs equivalent to the ones obtained by Englert~and~Kieser by adding the monotone conditional error function constraint and `nice' designs which use the constraint sets for contiguous stopping and a unimodal sample size function. \begin{table} \centering \caption{ Results for three different adaptive designs using various combinations of the constraint sets discussed in Section~\ref{sec:tuning}, and for Simon's designs. `\,$\cdots$' indicates the same result as the entry to its left; $^*$: figures were taken from the original publication. } \label{tab:results} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}cccccc} \hline \multicolumn{2}{c}{Parameters} & \multicolumn{4}{c}{$\boldsymbol{E}_{\rho_0}\big[\,n(X_1)\,\big]$}\\ \cline{3-6} $\rho_0$ & $\rho_1$ & Optimal & EK & Nice & Simon's* \\ \hline 0.1 & 0.3 & 14.65107 & 14.72498 & $\cdots$ & 15.0 \\ 0.2 & 0.4 & 19.78640 & $\cdots$ & $\cdots$ & 20.6 \\ 0.3 & 0.5 & 23.02199 & $\cdots$ & 23.02448 & 23.6 \\ 0.4 & 0.6 & 24.08002 & 24.08640 & $\cdots$ & 24.5 \\ 0.5 & 0.7 & 22.94827 & $\cdots$ & 22.95923 & 23.5 \\ 0.6 & 0.8 & 19.71893 & $\cdots$ & $\cdots$ & 20.5 \\ 0.7 & 0.9 & 14.82367 & $\cdots$ & $\cdots$ & 14.8 \\ \hline \end{tabular*} \end{table} The large search space with relatively high $n_{max}$ as compared to the example from Section~\ref{sec:formulation} results in the EK design mostly coinciding with the optimal and the `nice' design.
This indicates that the monotone conditional error function constraint is most restrictive when $n_{max}$ is relatively small, as is the case in the example from Section~\ref{sec:formulation}. In practice, however, operational constraints may prevent $n_{max}$ from being chosen liberally. It is therefore important that the unimodal sample size constraint guarantees intuitive sample size functions in any situation. For the parameter constellations considered here, both for $\rho_0=0.3$ and $\rho_0=0.5$, the EK and `nice' designs differ slightly despite the relatively large $n_{max}$ because the EK design's sample size function is not unimodal, cf. Table~\ref{tab:results}. Overall, the differences in expected sample size between the optimal, EK, and nice designs are small for the situations considered here. This indicates that the conditional error function approach of Englert and Kieser is not unnecessarily restrictive. However, the small differences to the nice designs demonstrate that the occasional issues with unintuitive sample size functions can be resolved at minimal additional cost in terms of expected sample size under the null hypothesis.
Consequently, we conclude that the monotone conditional error constraint is not unnecessarily restrictive. Secondly, we utilized the binary linear program formulation to enforce certain desirable features of the solutions obtained, which resolve problems arising due to the discreteness of the underlying statistics while preserving most of the advantages in terms of Simon's optimality criterion. We compared the optimal solutions without any additional constraints to the ones obtained by adding `niceness' constraints such as unimodality of the sample size function or contiguous stopping regions. For many parameter settings, the solutions coincide with the ones found by the algorithm of Englert and Kieser, and where they differ due to a non-unimodal sample size function of the latter, the performance loss in terms of expected sample size under the null hypothesis is extremely small. Additionally, the fact that we formulated the problem in terms of a binary linear program allowed us to use commercial-grade software for its solution. Thus, the solutions can be obtained very quickly, which is a great improvement over naive implementations of the branch-and-bound algorithm. Note that the binary linear programming framework presented in this paper also allows the addition of further constraints which can be used to tailor the solution's properties towards custom needs or preferences. For example, one might choose to require a conditional power of at least $1-\beta$ upon continuing to the second stage, which would render sample size re-calculation based on conditional power unnecessary. We considered the expected sample size under the null hypothesis as the optimality criterion. However, we would like to mention that it is straightforward to instead minimize the expected sample size under, e.g., some Bayesian prior distribution over $\rho$, which could then be seen as an extension of the ideas of \cite{Dong2012}.
\cite{Simon1989} also considered so-called `minimax designs' which minimize the maximal total sample size of the design. Unfortunately, while it is theoretically possible to express minimax objective functions in terms of a binary linear program \citep{bisshop2015}, this is not feasible in practice as the number of additional constraints required is extremely large. Although adaptive versions of Simon's minimax designs are thereby practically out of reach of the generic binary linear programming approach, it is still possible to modify the objective function to favor designs with smaller maximal sample sizes. This can be achieved by minimizing $\boldsymbol{E}_{\rho_0}\big[\,n(X_1)^\gamma\,\big]$ for $\gamma>1$ or $\boldsymbol{E}_{\rho_0}\big[\operatorname{exp}\big(n(X_1)\big)\,\big]$ instead of $\boldsymbol{E}_{\rho_0}\big[\,n(X_1)\,\big]$. Each of these objective functions is easily obtained by piecewise modifications of the respective coefficients. Alternatively, the parameter $n_{max}$ might be used more restrictively to obtain a solution with acceptable maximal sample size. Although all methods presented in this paper are developed for rate comparisons of a single binomial random variable, an extension to more complex hypothesis tests based on, e.g., multinomial variables can be derived along the same lines.
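As an illustration of such a coefficient modification (our own sketch with hypothetical names), replacing $n_{x_1,n_2,c}$ by $n_{x_1,n_2,c}^{\gamma}$ leaves the program linear in $y$ while weighting large realized sample sizes more heavily:

```python
from math import comb

def pmf(k, n, p):
    """Binomial point probability P[X = k] for X ~ Bin(n, p)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

n1, rho0 = 10, 0.2
n = [10, 10, 17, 38, 40, 36, 39, 10, 27, 10, 10]  # n(x1) of the Table 1 example

def objective(n, gamma=1.0):
    """E_rho0[n(X1)**gamma]; gamma > 1 penalizes large realized sample sizes."""
    return sum(pmf(x1, n1, rho0) * n[x1] ** gamma for x1 in range(n1 + 1))
```

With $\gamma = 1$ this reduces to Simon's criterion; by Jensen's inequality, the $\gamma = 2$ objective exceeds the square of the expected sample size whenever $n(\cdot)$ is non-constant under the null, which is what pushes the solver towards designs with a smaller maximal sample size.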
\section{Introduction} It is with great pleasure that we contribute to this book in honor of Prof. Takeo Fujiwara. GTL enjoyed eighteen months of Prof. Fujiwara's hospitality at the University of Tokyo during the early 1990s. At that time the work of Prof. Fujiwara in the field of the electronic structure of quasicrystals had already made a major contribution to the literature (see for instance \cite{FujiwaraSofArt}). Since that time our research has owed much to his work. Prof. Fujiwara was the first to perform realistic calculations of the electronic structure of quasicrystalline materials without adjustable parameters (ab-initio calculations) \cite{Fujiwara89}. Indeed, these complex alloys \cite{Shechtman84} have very exotic physical properties (see Refs. \cite{Berger94,Grenet00_Aussois} and Refs. therein), and it rapidly appeared that realistic calculations on the actual quasicrystalline materials are necessary to understand the physical mechanisms that govern these properties. In particular, such calculations make it possible to analyze numerically the role of transition-metal elements, which is essential in these materials. In this paper, we briefly present our work on the role of transition-metal elements in the electronic structure and transport properties of quasicrystals and related complex phases. Several parts of this work were carried out or initiated in collaboration with Prof. T. Fujiwara. \section{Electronic structure} \subsection{Ab-initio determination of the density of states} A way to study the electronic structure of quasicrystals is to consider the case of approximants. Approximants are crystalline phases, with very large unit cells, which locally reproduce the atomic order of quasicrystals. Experiments indicate that approximant phases, like $\alpha$-AlMnSi, $\alpha$-AlCuFeSi, $R$-AlCuFe, etc., have transport properties similar to those of quasicrystals \cite{Berger94,Quivy96}. In 1989 and 1991, Prof.
Fujiwara performed the first numerical calculations of the electronic structure in realistic approximants of quasicrystals \cite{Fujiwara89,Fujiwara91,Fujiwara93}. He showed that their density of states (DOS, see figure \ref{Fig_DOS_Al6Mn_sugi}) is characterized by a depletion near the Fermi energy $E_{\rm F}$, called a {\it ``pseudo-gap''}, in agreement with experimental results (for reviews see Refs. \cite{Berger94,Mizutani02,Belin04}) and with a Hume-Rothery stabilization \cite{Massalski78,PMS05}. The electronic structure of simpler crystals, such as orthorhombic $\rm Al_6Mn$ and cubic $\rm Al_{12}Mn$, also presents a pseudo-gap near $E_{\rm F}$, which is less pronounced than in complex approximant phases (figure \ref{Fig_DOS_Al6Mn_sugi}) \cite{PMS05}. \begin{figure}[t] \begin{center} \includegraphics[width=7.5cm]{al6mn_DOS_arttransp.eps} \vskip 0.2cm \includegraphics[width=7.5cm]{sugi_dAlSi_Zijl_b_6_all.eps} \vspace{-0.4cm} \caption{Ab-initio total DOS of $\rm Al_6Mn$ (simple crystal) and $\alpha$-Al$_{69.6}$Si$_{13.0}$Mn$_{17.4}$ (approximant of icosahedral quasicrystals) \cite{PMS05,Zijlstra03}. \label{Fig_DOS_Al6Mn_sugi}} \end{center} \end{figure} \subsection{Models to analyze the role of transition-metal elements} \subsubsection*{$sp$--$d$ hybridization model} The role of transition-metal (TM = Ti, Cr, Mn, Fe, Co, Ni) elements in the pseudo-gap formation has been shown by experiments, ab-initio calculations and model analysis [4,13--19,11]. Indeed, the formation of the pseudo-gap results from a strong $sp$--$d$ coupling associated with an ordered sub-lattice of TM atoms \cite{Guy04_ICQ8,PMS05}. Consequently, the electronic structure, the magnetic properties and the stability depend strongly on the TM positions, as shown by ab-initio calculations [28--33,20,21].
\subsubsection*{How does an effective TM--TM interaction induce stability?} Just as for Hume-Rothery phases, a description of the band energy can be made in terms of pair interactions (figure \ref{PotMn_Mn_m0}) \cite{ZouPRL93,Guy04_ICQ8}. Indeed, it has been shown that an effective medium-range Mn--Mn interaction mediated by the $sp$(Al)--$d$(Mn) hybridization plays a decisive role in the occurrence of the pseudo-gap \cite{Guy04_ICQ8}. We have shown that this interaction, up to distances of 10--20\,$\rm \AA$, is essential in stabilizing these phases, since it can create a Hume-Rothery pseudo-gap close to $E_{\rm F}$. The band energy is then minimized, as shown in figure \ref{FigEBetaPhiAlphaAl6Mn} \cite{Guy03,PMS05}. \begin{figure}[t!] \begin{center} \includegraphics[width=8cm]{PotMn_Mn_m0_c.eps} \vspace{-0.4cm} \caption{Effective medium-range Mn--Mn interaction between two non-magnetic manganese atoms in a free electron matrix which models aluminum atoms. \cite{PMS05}} \label{PotMn_Mn_m0} \end{center} \vspace{0.3cm} \begin{center} \includegraphics[width=8cm]{Energie_Al6mnAlphaBeta.eps} \vspace{-0.4cm} \caption{Variation of the band energy due to the effective Mn--Mn interaction in $\rm o$-$\rm Al_6Mn$, $\rm \alpha$-AlMnSi and $\rm \beta$-$\rm Al_9Mn_3Si$. \cite{Guy03} } \label{FigEBetaPhiAlphaAl6Mn} \end{center} \end{figure} The effect of these effective Mn--Mn interactions has also been studied by several groups \cite{ZouPRL93,Guy03,GuyPRL00} (see also Refs. in \cite{PMS05}). It also explains the origin of large vacancies on some sites in the hexagonal $\beta$-$\rm Al_9Mn_3Si$ and $\varphi$-$\rm Al_{10}Mn_3$ phases, whereas equivalent sites are occupied by Mn in $\rm \mu$-$\rm Al_{4.12}Mn$ and $\rm \lambda$-$\rm Al_4Mn$, and by Co in $\rm Al_5Co_2$ \cite{Guy03}. On the other hand, a spin-polarized effective Mn--Mn interaction is also decisive for the existence (or not) of magnetic moments in AlMn quasicrystals and approximants \cite{GuyPRL00,Virginie2,Duc03}.
The analysis can be applied to any Al(rich)-Mn phases, where a small number of Mn atoms are embedded in the free-electron-like Al matrix. The studied effects are not specific to quasicrystals and their approximants, but they are more pronounced in these alloys. Such a Hume-Rothery stabilization, governed by the effective medium-range Mn--Mn interaction, might therefore be intrinsically linked to the emergence of quasi-periodicity in the Al(rich)-Mn system. \subsubsection*{Cluster virtual bound states} One of the main results of the ab-initio calculations performed by Prof. Fujiwara for realistic approximant phases is the small energy dispersion of electrons in reciprocal space. Consequently, the density of states of approximants is characterized by {\it ``spiky''} peaks \cite{Fujiwara89,Fujiwara91,Fujiwara93,GuyPRB94_AlCuFe}. In order to analyze the origin of this spiky structure of the DOS, we developed a model that shows a new kind of localization by atomic clusters \cite{GuyPRB97}. As for the local atomic order, one of the characteristics of quasicrystals and approximants is the occurrence of atomic clusters on a scale of 10--30 $\rm \AA$ \cite{Gratias00}. The role of clusters has been much debated, in particular by C. Janot \cite{Janot94} and G. Trambly de Laissardi\`ere \cite{GuyPRB97}. Our model is based on a standard description of inter-metallic alloys. Considering a cluster embedded in a metallic medium, the variation $\Delta n(E)$ of the DOS due to the cluster is calculated. For electrons with energy in the vicinity of the Fermi level, transition-metal atoms (such as Mn and Fe) are strong scatterers whereas Al atoms are weak scatterers. In figure \ref{DOS_Clusters}, the variation $\Delta n(E)$ of the density of states due to different clusters is shown. The Mn icosahedron is the actual Mn icosahedron of the $\alpha$-AlMnSi approximant. As an example of a larger cluster, we consider one icosahedron of Mn icosahedra.
\begin{figure}[t] \begin{center} \includegraphics[width=7cm]{DOS_cluster.eps} \end{center} \vspace{-0.4cm} \caption{\label{DOS_Clusters}Variation $\Delta n(E)$ of the DOS due to Mn atoms embedded in a metallic medium (Al matrix). From \cite{GuyPRB97}.} \end{figure} $\Delta n(E)$ of clusters exhibits strong deviations from the Virtual Bound State of a single Mn atom \cite{Friedel56}: several peaks and shoulders appear. The widths of the narrowest peaks ($50 - 100$\,meV) are comparable to those of the fine peaks in the calculated DOS of the approximants (figure \ref{Fig_DOS_Al6Mn_sugi}). Each peak indicates a resonance due to scattering by the cluster. These peaks correspond to states {\it ``localized''} by the icosahedron or by the icosahedron of icosahedra. They are not eigenstates; they have a finite lifetime of the order of $\hbar / \delta E$, where $\delta E$ is the width of the peak. Therefore, the stronger the localization by the cluster, the narrower the peak. A long lifetime is the signature of localization, but in real space these states have a rather large extension, on the length scale of the cluster. The physical origin of these states can be understood as follows. Electrons are scattered by the Mn atoms of a cluster. By an effect similar to that of a Faraday cage, electrons can be confined by the cluster provided that their wavelength $\lambda$ satisfies $\lambda \gtrsim l$, where $l$ is the distance between two Mn atoms. Consequently, we expect to observe such a confinement by the cluster. This is a multiple-scattering effect; it is not due to an overlap between $d$-orbitals, because the Mn atoms are not first neighbors. \section{Transport properties} Quasicrystals have many fascinating electronic properties, and in particular quasicrystals of high structural quality, such as the icosahedral AlCuFe and AlPdMn alloys, have unconventional conduction properties when compared with standard inter-metallic alloys.
Their conductivities can be as low as 150--200\,($\rm \Omega\,cm)^{-1}$ (see Refs. \cite{Berger94,Grenet00_Aussois,Mayou93} and Refs. therein). Furthermore, the conductivity increases with disorder and with temperature, a behavior just the opposite of that of standard metals. In a sense the most striking property is the so-called {\it ``inverse Matthiessen rule''}, according to which the {\it increases of conductivity} due to different sources of disorder seem to be {\it additive}. This is just the opposite of what happens in normal metals, where the increases of resistivity due to several sources of scattering are {\it additive}. An important result is also that many approximants of these quasicrystalline phases have similar conduction properties. For example, the crystalline $\alpha$-AlMnSi phase, with a unit cell size of about 12\,$\rm \AA$ and 138 atoms in the unit cell, has a conductivity of about 300\,($\rm \Omega\,cm)^{-1}$ at low temperature \cite{Berger94}. \begin{figure}[t!] \begin{center} \includegraphics[width=10cm]{Resistivite_schema.eps} \vspace{-.4cm} \caption{Schematic temperature dependences of the experimental resistivity of quasicrystals, amorphous alloys and metallic crystals. \label{Fig_Resistivite_schema}} \end{center} \vspace{.1cm} \begin{center} \begin{tabular}{cc} \begin{minipage}[c]{6.4cm} \includegraphics[width=6.3cm]{Resisvity_logT_Al_Al12Mn_Al6Mn_sugi_Z18b_prl2.eps} \end{minipage} & \begin{minipage}[c]{4.5cm} \caption{\label{Fig_ResistiviteAbInitio} Ab-initio electrical resistivity versus inverse scattering time, in the cubic approximant $\alpha$-Al$_{69.6}$Si$_{13.0}$Mn$_{17.4}$, pure Al (f.c.c.), and cubic Al$_{12}$Mn.} \end{minipage} \end{tabular} \end{center} \end{figure} \subsection{Small Boltzmann velocity} Prof.
Fujiwara {\it et al.} were the first to show that the electronic structure of AlTM approximants and related phases is characterized by two energy scales \cite{Fujiwara89,Fujiwara91,Fujiwara93,GuyPRB94_AlCuFe,GuyPRBAlCuCo} (see previous section). The larger energy scale, of about $0.5 - 1$\,eV, is the width of the pseudogap near the Fermi energy $E_{\rm F}$. It is related to the Hume--Rothery stabilization via the scattering of electrons by the TM sub-lattice, owing to a strong $sp$--$d$ hybridization. The smaller energy scale, less than 0.1\,eV, is characteristic of the small dispersion of the band energy $E({\bf k})$. This energy scale seems more specific to phases related to quasi-periodicity. The first consequence for transport is a small velocity at the Fermi energy, the Boltzmann velocity $V_{\mathrm{B}} =(\partial E/ \partial k)_{E=E_{\rm F}}$. From numerical calculations, Prof. Fujiwara {\it et al.} evaluated the Bloch--Boltzmann dc conductivity $\sigma_{\mathrm{B}}$ in the relaxation-time approximation. With a realistic value of the scattering time, $\tau \sim 10^{-14}$\,s~\cite{Mayou93}, one obtains $\sigma_{\mathrm{B}} \sim 10-150\,({\rm \Omega cm})^{-1}$ for an $\alpha$-AlMn model~\cite{Fujiwara93} and a 1/1-AlFeCu model~\cite{GuyPRB94_AlCuFe}. This corresponds to the measured values~\cite{Berger94,Quivy96}, which are anomalously low for metallic alloys. For decagonal approximants, the anisotropy found experimentally in the conductivity is also reproduced correctly~\cite{GuyPRBAlCuCo}. \subsection{Quantum transport in Quasicrystals and approximants} The semi-classical Bloch--Boltzmann description of transport gives interesting results for the intra-band conductivity in crystalline approximants, but it is insufficient to account for many aspects due to the special localization of electrons by quasi-periodicity (see Refs. [34--43] and Refs. therein).
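Before turning to these mechanisms, the quoted range $\sigma_{\mathrm{B}} \sim 10-150\,({\rm \Omega cm})^{-1}$ can be checked at the order-of-magnitude level from $\sigma_{\mathrm{B}} = e^2 n(E_{\rm F})\,V_{\mathrm{B}}^2\,\tau$. The DOS and Boltzmann velocity below are illustrative values of plausible magnitude for a pseudogapped AlMn approximant, not the ab-initio numbers of the cited works.

```python
# Order-of-magnitude check of the Boltzmann conductivity, in SI units.
# n_EF and V_B are ASSUMED illustrative values, not ab-initio results.
e = 1.602e-19        # electron charge (C)
n_EF = 7e46          # DOS at E_F in states/(J m^3), assumed (pseudogap-reduced)
V_B = 2.0e4          # Boltzmann velocity (m/s), assumed ~1e-2 of a normal metal
tau = 1.0e-14        # realistic relaxation time (s), as quoted in the text

sigma_SI = e**2 * n_EF * V_B**2 * tau    # conductivity in S/m
sigma_ohm_cm = sigma_SI / 100.0          # 1 (ohm cm)^-1 = 100 S/m
print(round(sigma_ohm_cm, 1))
```

With these inputs the result falls inside the quoted 10--150 $({\rm \Omega cm})^{-1}$ window, showing that the anomalously low conductivity follows directly from the very small Boltzmann velocity.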
Some specific transport mechanisms, like the temperature dependence of the conductivity (the inverse Matthiessen rule, the influence of defects, the proximity of a metal\,/\,insulator transition), require going beyond a Bloch--Boltzmann analysis. Thus, it appears that in quasicrystals and related complex metallic alloys a new type of breakdown of the semi-classical Bloch--Boltzmann theory operates. In the literature, two different unconventional transport mechanisms have been proposed for these materials. Transport could be dominated, for short relaxation times $\tau$, by hopping between ``{\it critical} localized states'', whereas for long $\tau$ the regime could be dominated by the non-ballistic propagation of wave packets between two scattering events. We have developed a theory of quantum transport that applies to a normal ballistic law but also to these specific diffusion laws. As we show, phenomenological models based on this theory correctly describe the experimental transport properties \cite{MayouPRL00,PRL06,MayouRevueTransp} (compare figures \ref{Fig_Resistivite_schema} and \ref{Fig_ResistiviteAbInitio}). \subsection{Ab-initio calculations of quantum transport} According to the Einstein relation, the conductivity $\sigma$ depends on the diffusivity $D(E)$ of electrons of energy $E$ and on the density of states $n(E)$ (summing the spin-up and spin-down contributions). We assume that $n(E)$ and $D(E)$ vary weakly on the thermal energy scale $kT$, which is justified here. In that case, the Einstein formula reads \begin{eqnarray} \sigma = e^2 n(E_{\mathrm{F}})D(E_{\mathrm{F}}) \end{eqnarray} where $E_{\mathrm{F}}$ is the chemical potential and $e$ is the electronic charge. The temperature dependence of $\sigma$ is due to the variation of the diffusivity $D(E_{\mathrm{F}})$ with temperature. The central quantity is thus the diffusivity, which is related to quantum diffusion.
Within the relaxation-time approximation, the diffusivity is written \cite{MayouPRL00} \begin{eqnarray} D(E) =\frac{1}{2} \int_{0}^{+\infty} C_0(E,t) \,{\rm e}^{-|t| / \tau}\,{\rm d} t \label{DRTA} \end{eqnarray} where $C_0(E,t)=\Big\langle V_x(t)V_x(0) + V_x(0)V_x(t) \Big\rangle_E$ is the velocity correlation function without disorder, and $\tau$ is the relaxation time. Here, the effects of defects and temperature (scattering by phonons\,...) are taken into account through the relaxation time $\tau$, which decreases as disorder increases. In the case of crystalline phases (such as approximants of quasicrystals), one obtains \cite{PRL06,MayouRevueTransp}: \begin{eqnarray} \sigma &=& \sigma_{\mathrm{B}} ~+~ \sigma_{\mathrm{NB}} \label{Eq_sigma0}\\ \sigma_{\mathrm{B}} = e^2 n(E_{\mathrm{F}})\,V_{\mathrm{B}}^2 \,\tau &{\rm and}& \sigma_{\mathrm{NB}} = e^2 n(E_{\mathrm{F}})\,\frac{L^2(\tau)}{\tau} \label{Eq_sigma} \end{eqnarray} where $\sigma_{\mathrm{B}}$ is the usual Boltzmann contribution to the conductivity and $\sigma_{\mathrm{NB}}$ a non-Boltzmann contribution. $L^2(\tau)$ is smaller than the square of the unit cell size $L_0$, and it can be calculated numerically from the ab-initio electronic structure \cite{PRL06}. From (\ref{Eq_sigma0}) and (\ref{Eq_sigma}), it is clear that the Boltzmann term dominates when $L_0 \ll V_{\mathrm{B}}\tau$: the diffusion of electrons is then ballistic, as is the case in normal metallic crystals. But when $L_0 \simeq V_{\mathrm{B}}\tau$, i.e. when the Boltzmann velocity $V_{\mathrm{B}}$ is very low, the non-Boltzmann term is essential. In the case of the $\alpha$-Al$_{69.6}$Si$_{13.0}$Mn$_{17.4}$ approximant (figure \ref{Fig_Conductivity_t6_T_e.012D.001}) \cite{PRL06}, with realistic values of $\tau$ ($\tau$ equal to a few $10^{-14}$\,s \cite{Mayou93}), $\sigma_{\mathrm{NB}}$ dominates and $\sigma$ increases when $1/\tau$ increases, i.e.
when defects or temperature increase, in agreement with experimental measurements (compare figures \ref{Fig_Resistivite_schema} and \ref{Fig_ResistiviteAbInitio}). \begin{figure}[t!] \begin{center} \includegraphics[width=8cm]{Conductivity_t6_T_e-012D-001.eps} \end{center} \vspace{-0.5cm} \caption{ \label{Fig_Conductivity_t6_T_e.012D.001} Ab-initio dc-conductivity $\sigma$ in the cubic approximant $\alpha$-Al$_{69.6}$Si$_{13.0}$Mn$_{17.4}$ versus inverse scattering time. \cite{PRL06} } \vspace{0.3cm} \begin{center} \includegraphics[width=8cm]{sugiAlCuSI_t10_T_sig_e-012D001.eps} \end{center} \vspace{-0.5cm} \caption{\label{Fig_Sig_aAlCuSi} Ab-initio dc-conductivity $\sigma$ in a hypothetical cubic approximant $\alpha$-Al$_{69.6}$Si$_{13.0}$Cu$_{17.4}$ versus inverse scattering time. \cite{MayouRevueTransp} } \end{figure} To evaluate the effect of TM elements on the conductivity, we have considered a hypothetical $\alpha$-Al$_{69.6}$Si$_{13.0}$Cu$_{17.4}$ phase, constructed by putting Cu atoms in place of the Mn atoms in the actual $\alpha$-Al$_{69.6}$Si$_{13.0}$Mn$_{17.4}$ structure. Cu atoms have almost the same number of $sp$ electrons as Mn atoms, but their $d$ DOS is very small at $E_{\rm F}$. Therefore in $\alpha$-Al$_{69.6}$Si$_{13.0}$Cu$_{17.4}$, the effect of the $sp$(Al)--$d$(TM) hybridization on electronic states with energy near $E_{\rm F}$ is very small. As a result, the pseudogap disappears from the total DOS, and the conductivity becomes ballistic (metallic), $\sigma \simeq \sigma_{\mathrm{B}}$, as shown in figure~\ref{Fig_Sig_aAlCuSi}. \section{Conclusion} In this article we have presented the effect of transition-metal atoms on the physical properties of quasicrystals and related complex phases. These studies lead us to consider these aluminides as $spd$ electron phases \cite{PMS05}, in which a specific electronic structure governs stability, magnetism and quantum transport properties.
The principal aspects of this new physics are now understood, particularly thanks to the seminal work of Prof. T. Fujiwara and the subsequent developments of his ideas. \bibliographystyle{plain}
\section{Introduction} In recent decades, many indications have been found that Galactic Globular Clusters (GCs) are not simple stellar populations (SSP), i.e. systems where all the stars share the same age and chemical composition, but that most of them host multiple stellar populations. The multiple populations (MPs) in a cluster differ in their abundances of light elements, with an ``anomalous'' component typically showing enhancements in He, N, and Na abundances, and depletions in C and O with respect to the field stars of the host galaxy \citep[e.g.][]{minniti1993,carretta2009,gratton2013, marino14}. However, in most cases the different populations are similar in their Fe content \citep{gratton2004}. In addition to the Milky Way (MW), these anomalous abundances have been observed in nearby galaxies such as the Large and Small Magellanic Clouds \citep[hereafter LMC and SMC;][]{mucciarelli2009}, Fornax \citep{larsen2014} and M31 \citep{nardiello2019}. In the literature, the stellar populations within GCs with a ``normal'' chemical composition, i.e. similar to that of field stars, are usually referred to as the first generation (FG), first population (1P) or primordial population, whereas the populations with anomalous abundances are labeled the second generation (SG), second population (2P) or enriched population \citep{bastian2018}. High-precision HST photometry of red giant branch stars has revealed that the average number fraction of SG stars is 0.38 and 0.68 in GCs of the MW and the Magellanic Clouds, respectively \citep[][]{milone2020}. Several scenarios have been proposed to explain the origin of multiple populations in GCs. At present, which scenario is favoured over the others is still a matter of debate \citep{renzini2015, bastian2018}. In most of these scenarios the chemical differences between FG and SG stars are explained by a framework in which gas enriched by the ejecta of FG stars fuels the formation of SG stars.
The various scenarios mainly differ in what are assumed to be the sources of the enriched gas. These include fast-rotating massive stars \citep{decressin2007}, massive binaries \citep{Mink2009,bastian2013}, supermassive stars \citep{denissenkov2013} and asymptotic giant branch (AGB) stars \citep{D'Ercole2008}. In the fast-rotating massive stars (FRMS) scenario, SG stars form from the enriched material ejected by massive stars (20--120 $M_{\odot}$) rotating at speeds near their break-up limit. Such material is slow enough to be retained by the cluster, and most of the SG star formation has to occur in a very short time, while the original cluster is younger than $3.5$ Myr (\citealt{krause2013}), before the first supernova explosions start to pollute the star-forming gas with Fe and eject most of the gas from the cluster, suppressing further star formation \citep{decressin2007}. In the massive binary scenario the sources of enriched material are massive interacting binaries (10--100 $M_{\odot}$). These systems eject most of the envelopes of their primary stars into the interstellar medium (ISM) during non-conservative mass transfer, and SG stars can then form from this enriched gas \citep{Mink2009}. The main advantage of this scenario is that it provides more enriched material than the AGB and FRMS scenarios. Supermassive stars with masses $\approx 10^4 M_{\odot}$ and significant mass loss have also been proposed as a viable source for the origin of MPs \citep{denissenkov2013}. Owing to their chemically homogeneous ejecta enriched in light elements, this model may account for the observed abundance anomalies in GCs. The problem is that such massive stars still exist only in the realm of theory, putting this scenario in significant doubt. An alternative scenario for SG formation was proposed by \citet{bastian2013}.
According to this model, there is only one generation of stars, and the reason some low-mass FG stars have anomalous metal abundances is that they accrete or sweep up metal-enriched ejecta from massive binaries during their lifetime. Another proposed scenario invokes massive (with masses in the range $9-500 M_{\odot}$) metal-poor supergiants which induce an early star-formation episode, occurring within the first 4 Myr, i.e. before the explosion of the first core-collapse SNe \citep{szecsi2019}. This scenario predicts a positive correlation between the SG fraction and cluster mass and, as luminous supergiants are expected to be present only at low metallicity, it predicts a limited presence of a polluted second generation at high metallicity. The AGB scenario \citep{D'Ercole2008,D'Ercole2016} is probably the one which has gained the most widespread attention and is the focus of this paper. Here, the SG of stars is formed out of AGB ejecta from the FG, which start to be ejected about $40$ Myr after the formation of the cluster, mixed with pristine\footnote{Throughout this paper, the pristine gas has the same helium abundance as FG stars, i.e. a helium mass fraction of Y=0.246.} gas accreted from the ambient ISM. By means of 1-D hydrodynamic simulations of this scenario, \cite{D'Ercole2008} showed that the SG stellar component is more concentrated than the FG stars and has enhanced He abundances. In this model, the pristine gas accreted by the cluster is a vital ingredient in explaining the observed relative fractions of FG to SG masses as well as their present-day abundance pattern. \citet{D'Ercole2011} showed that a large reservoir of pristine gas is also necessary to dilute the AGB ejecta and thus explain the correlations between light elements.
To form a massive SG cluster which incorporates a significant fraction of the AGB ejecta, an initially very massive cluster is required, larger than its present-day mass typically by factors between 5 and 20 \citep{D'Ercole2008,renzini2015}. Hence, within this scenario it becomes very important to study the capability of a cluster to retain its AGB ejecta and to accrete pristine gas. Moreover, this model has to cope with empirical evidence emerging from recent observations, which show a strong positive correlation between the number ratio of SG to FG stars and the present-day total mass of clusters hosting MPs \citep{milone2017,milone2020,baumgardt2018,bastian2018}. The inferred initial masses of such clusters appear to be between $10^5 M_{\odot}$ and $10^7 M_{\odot}$ \citep[e.g.][]{milone2009,mackey2008,baumgardt2019,carretta2010}. From these pieces of evidence, and the general expectation that more massive clusters accrete gas more easily, one can expect the initial mass of the cluster to play a fundamental role in determining its ability to form multiple stellar populations \citep{bastian2018}. Several studies have focused on the initial mass, modelling the main physical processes regulating star formation in a young cluster, such as its gravity \citep{pflamm2009}, gas accretion \citep{naiman2011}, ram-pressure stripping \citep{charlie2010}, dynamical interactions between individual stars and the accreted gas \citep{bekki2019}, and stellar feedback \citep{naiman2018,D'Ercole2008,vesperini2010}. Some of these processes depend directly or indirectly on the mass of the star cluster and on the properties of the external environment \citep{calura19, lin2007,charlie2010}, including the gas density and temperature, and the velocity of the cluster through the surrounding ISM.
By means of one-dimensional hydrodynamical simulations, \citet{naiman2018} studied, for a range of cluster masses, the role played by other likely relevant parameters, such as compactness, metallicity, and cluster age, in the capability of isolated, spherically symmetric clusters to retain the ejecta of their stars. They found that isolated clusters need special conditions to retain mass: they essentially need to be compact and massive, with a velocity dispersion $\sigma>25$ km/s, corresponding to a cluster with a mass of $10^7 \Msun$ and a core radius of 27 pc. Other studies have focused on a cluster moving through the ISM \citep[e.g.][]{lin2007,naiman2011}. These works confirm that if the central velocity dispersion of the cluster is greater than the sound speed of the ISM, as well as the relative velocity between the cluster and the ISM, the cluster can accrete a significant amount of the ambient gas. However, no direct prediction has been provided regarding the amount of gas which can be converted into new stars. Following up on previous results, \citet{calura19} (hereafter C19) recently presented the first set of 3-D hydrodynamic simulations, taking into account self-gravity and radiative cooling, aimed at studying the relative roles of stellar winds from AGB stars and the accretion of pristine gas in the formation of SG stars. The C19 simulations focus on the feasibility of the AGB scenario proposed by \citet{D'Ercole2016}, with a FG globular cluster of $10^7 M_{\odot}$ and a half-mass radius of $\approx 30$ pc. This cluster was placed in a uniform distribution of $10^4$ K gas with two different densities, $10^{-23}$ and $10^{-24} \ {\rm g/cm^3}$, through which the cluster was moving at a speed of $20$ km/s. In both these simulations a compact SG sub-component was formed, with a mass of $5\times10^6 M_{\odot}$ and $7\times10^5 M_{\odot}$ in the high-density and low-density cases, respectively.
A distinct feature of their results was that the first and most helium-enhanced SG stars were born at the centre of the cluster, whereas stars born later, with lower He content, were characterised by more extended spatial distributions. This predicted compactness of the SG stars is qualitatively in agreement with a few observational studies, in which clusters are found to retain some memory of their initial stellar distribution and the SG is found to be more spatially concentrated than the FG \citep[see e.g.][]{sollima2007,simioni2016}. Some of the standing problems of the AGB scenario require attention and are the subject of the present paper. To further explore the viability of the AGB scenario, the simulations of C19 now need to be extended to a further, more detailed investigation of structural parameters that can also be checked against observations. The aim of the present paper is to extend the study carried out by C19 to a range of initial cluster masses, allowing us to study the capability of less massive clusters to retain their stellar winds, to accumulate pristine gas and to transform this material into stars. We start from a setup similar to that of C19 and study SG star formation in clusters ten and a hundred times less massive, i.e. $10^6$ and $10^5$ $M_{\odot}$. By combining our results with those of C19, we cover a dynamic range of three orders of magnitude and investigate the scaling with initial cluster mass of some fundamental properties, such as the SG mass and the fraction of AGB ejecta incorporated into new stars. Having a range of cluster masses also allows us to investigate whether the AGB model can reproduce the observed correlation between the number ratio of SG to FG stars and cluster mass. The paper is organised as follows. In Section \ref{setup.sec} we describe our simulation setup and our most critical assumptions. In Section \ref{results.sec} we present the results of our set of simulations.
In Section \ref{discussion.sec} we discuss the most fundamental implications of our results and compare them with relevant observed properties of present-day GCs. Finally, in Section \ref{conclusions.sec} we present a summary and the main conclusions of our study. \section{Simulation setup}\label{setup.sec} The simulation setup used in this work is similar to that described in C19. We perform a series of 3-D simulations with the goal of studying the formation of a new generation of stars in a $\approx 30$ Myr old globular cluster, an age which roughly corresponds to the lifetime of the least massive type II SN progenitors of the FG. This means that at the beginning of our simulations, the energetic feedback of massive FG stars has completely cleared out the residual, metal-rich gas polluted by the SNe (e.g., \citealt{calura2015}). The simulations are intended to study two critical parameters of SG formation: the initial mass of the globular cluster (i.e. the FG mass) and the gas density of the surrounding ISM. Both these parameters determine whether the cluster can accumulate the required mix of material to form a SG of stars. We use the \ramses{} code \citep{teyssier2002} to follow the interactions of gas and SG stars via gravity, hydrodynamics, and radiative cooling on a uniform mesh. The gas advection is calculated by means of a second-order Godunov scheme for the Euler equations, and the dynamical evolution of the SG stars is calculated with a Particle-Mesh solver. We use the acoustic Riemann solver for gas advection, and an adiabatic index $\gamma = 5/3$ relating the internal energy and the gas pressure. For simplicity, we neglect the dynamical evolution of FG stars, which are modelled with a static Plummer density profile \citep{plummer1911}. In all our simulations, the size of the computational grid is $50$ pc, uniformly divided into $512^3$ cells, corresponding to a spatial resolution of $0.1$ pc.
Gas accretion and the subsequent formation of SG stars are only possible if we neglect feedback from massive SG stars (i.e. stars with mass $M > 8 M_{\odot}$). Such feedback efficiently suppresses SG formation \citep[see][]{D'Ercole2008}, and the supernova (SN) ejecta would produce a spread in heavy elements such as Fe, which is found only in a small subset of the sample of Galactic clusters studied so far \citep{renzini2015}. For SG stars, we therefore assume an initial mass function (IMF) truncated at $M \ge 8 M_{\odot}$. By necessity, the same assumption regarding the IMF has been made in previous theoretical works on the formation of SG stars \citep[e.g. C19, ][]{D'Ercole2008,D'Ercole2010}. The implication of this assumption is that stellar feedback from massive SG stars, in the form of stellar winds, SN explosions and radiative feedback, is not present. However, feedback from FG stars may still be present, e.g. in the form of ionising radiation. An investigation of the effects of this energy source is currently in progress and will be presented in a forthcoming work. A discussion of the realism of a truncated IMF for the SG will be presented in Sect. \ref{sec_IMF}. \begin{table*} \centering \begin{tabular}{lcccccc} \hline Simulation & M$_{\rm FG}$ \ ${\rm [M_{\odot}]}$ & $\sigma$ \ $[10^6 \ {\rm cm \ s^{-1}}]$ & $\rho_{\rm pg}$ \ $[\rm g \ cm^{-3}]$ & $R_{\rm eq}$ \ $[\rm pc]$ & $t_{\rm I}$ \ $[\rm Myr]$ & $t_0$ \ $[\rm Myr]$ \\ \hline \MHL{} & $10^6$ & 2.68& $10^{-24}$ & 414& 43.5 & 12.2\\ \MLL{} & $10^5$ & 0.85 & $10^{-24}$& 131& 34.3 & 3\\ \MHH{} & $10^6$ & 2.68& $10^{-23}$ & 131& 34.3 & 3\\ \MLH{} & $10^5$ & 0.85 & $10^{-23}$& 41& 31.3 & 0\\ \hline \end{tabular} \caption{Main simulation parameters. Here M$_{\rm FG}$ is the mass of the cluster, $\sigma$ is its velocity dispersion and $\rho_{\rm pg}$ is the ambient density.
The stalling radius $R_{\rm eq}$ and the time $t_{\rm I}$ at which the infall starts are computed by means of equations (\ref{eq:Req}) and (\ref{eq:tI}), respectively. $t_0$ is the infall time in the time reference frame assumed in this study (i.e. $t_{\rm I}-31.3$ Myr).} \label{tabl1} \end{table*} \subsection{Initial conditions} Following \citet{D'Ercole2016}, the cluster is assumed to belong to a high-redshift disky dwarf galaxy. Most globular clusters are indeed believed to be relics from the high-redshift Universe \citep{kravtsov2005,kruijssen2015}. After the rapid formation of FG stars, the most massive stars explode as supernovae (SNe), blowing the local gas out of the cluster, suppressing further star formation and powering a hot gas bubble \citep{weaver1977,calura2015,gavagnin2017}. Supported by continuous SN II explosions, the bubble expands, sweeps up the ambient gas and blows out of the disk, dispersing the metal-rich bubble material into the circum-galactic medium. A galactic disk is an ideal environment for our scenario because of the large reservoir of gas available for later accretion, and because its limited vertical extent allows the blowout of the bubble, preventing the incorporation of SN II ejecta into SG stars. The cluster can thus accrete mass from the galactic ISM while orbiting around the centre of the galaxy \citep[see, e.g.][]{goodman2018}. In principle, the cluster may be located in a gas distribution with a non-disky geometry, representing other real cases such as a reservoir of gas left over from a merger, or simply an irregular galaxy. At this stage, the shell confining the bubble is suddenly accelerated and becomes prone to the Rayleigh-Taylor instability, which may lead to its disruption. At break-out, the hot interior of the bubble leaks out and its pressure drops. The bubble loses its spherical shape and the wind due to continuous SN explosions impinges directly on the inner part of the shell.
When the SN explosions cease, the further expansion of the shell can continue, driven by the inertia of the swept-up mass. The bubble stalls as soon as the expansion velocity equals the velocity dispersion of the unperturbed ISM, which is of the order of the local sound speed \citep{D'Ercole2016}. At this stage, the shell merges with the external ISM, the bubble loses its initial structure and can be regarded as a hole in the disk carved by the feedback of massive FG stars. The cavity then starts to be re-filled by ISM gas pouring in at a velocity of the order of the local sound speed. The stalling radius of the bubble can be computed as (C19, \citealt{ D'Ercole2016}): \begin{align}\label{eq:Req} \Req=4.143 \times 10^3 \ {\rm pc } \ \ \left( \frac{L_{41}}{n_0 V_{w,8} \ (\sigma_{\rm pg,6}^2+v_{\rm pg,6}^2)}\right)^{1/2}. \end{align} Here $L_{41}$ is the mechanical luminosity of FG SNe in units of $10^{41}$ erg s$^{-1}$, $V_{w,8}$ is the velocity of the SN ejecta in units of $10^8 \ {\rm cm \ s^{-1}}$, $\sigma_{\rm pg,6}$ and $v_{\rm pg,6}$ are respectively the isothermal sound speed and the velocity of the pristine gas relative to the globular cluster (both in units of $10^6$ cm/s), and $n_0$ is the ISM number density in $\rm cm^{-3}$. We assume that at a time $\tinf$ after its birth, the cluster reaches the edge of the feedback bubble described above and starts moving through the unperturbed ISM. We refer to this moment as the infall time $\tinf{}$, defined as \begin{equation}\label{eq:tI} \tinf=\tSN+\frac{\Req}{\sigpg+\vpg}, \end{equation} where $\tSN\approx30$ Myr is the time at which we assume SN explosions to cease. Our aim is to mimic the motion of the cluster into the cavity carved by massive FG stars and the consequently asymmetric accretion of gas. As discussed in \cite{D'Ercole2016}, the accreted gas has not been enriched by FG core-collapse SNe.
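A quick numerical sketch of this bookkeeping, applying equation (\ref{eq:tI}) to the low-density, $10^6\,M_{\odot}$ run of Table \ref{tabl1}: since $\tSN$ is only approximate ($\approx 30$ Myr) and unit constants are rounded, only rough agreement with the tabulated $t_{\rm I}=43.5$ Myr is expected.

```python
# Sketch of the infall-time bookkeeping of equation (2), with the stalling
# radius and gas parameters quoted in the text and in Table 1.  t_SN ~ 30 Myr
# is approximate, so only rough agreement with the tabulated t_I is expected.
pc = 3.086e18        # parsec in cm
Myr = 3.156e13       # megayear in s

R_eq = 414.0 * pc    # stalling radius (cm), Table 1 low-density 1e6 Msun run
sigma_pg = 1.16e6    # isothermal sound speed of the pristine gas (cm/s)
v_pg = 2.3e6         # cluster velocity relative to the gas (cm/s)
t_SN = 30.0          # end of SN II activity (Myr), approximate

travel = R_eq / (sigma_pg + v_pg) / Myr   # time to reach the shell, in Myr
t_I = t_SN + travel
print(round(travel, 1), round(t_I, 1))
```

The result lands close to the tabulated 43.5 Myr; the residual difference is consistent with the rounding of $\tSN$ (the table's $t_0$ column uses $t_{\rm I}-31.3$ Myr).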
In fact, numerical simulations of nuclear starbursts have shown that the wind hitting the shell on the disc is deflected away from the plane, ablating the superficial gas layers of the inner side of the shell and dragging them away (\citealt{tenoriotagle1998}). Moreover, due to numerical diffusion, the mixing achieved in numerical simulations is stronger than in reality. In another model, \cite{tenoriotagle1996} have shown that, even if the metal-enriched blown-out gas mixes efficiently in a hot gaseous halo and is later re-accreted by the system, this does not occur before $\sim 10^8$ yr. It is therefore reasonable to assume that, over the time interval of interest in our model, the GC wind does not contaminate the composition of the host galaxy. In our reference frame the cluster is at rest and, at time $\tinf$, the infalling pristine gas enters the box on one side with a velocity $\vpg$ parallel to the x-axis. The chemical composition of the pristine gas is assumed to be the same as that of the FG stars, with a He mass fraction Y=0.246 and a metal mass fraction Z=0.001. As in C19, we assume $\sigpg= 1.16 \times 10^6$ cm/s and $\vpg=2.3 \times 10^6$ cm/s. As current observations indicate a flat relation between mass and half-mass radius for young massive clusters and GCs \citep{portegies2010,ryon2017,krumholz2019} with masses $<10^6~M_{\odot}$, for all our clusters we assume a half-mass radius of 4 pc, corresponding to a Plummer radius $r_{\rm P} = 3$ pc. The main model parameters are listed in Table \ref{tabl1}. \subsection{Star formation}\label{SF} The star-formation sub-grid model used in our simulations is described in \citet{rasera2006} and in C19. In this scheme, gas cells are eligible for star formation if the gas temperature is $T<2\times10^4$ K and the gas flow is converging, i.e. $\nabla \cdot\mbox{\boldmath{$v$}} < 0$, where $\mbox{\boldmath{$v$}}$ is the velocity of the gas.
The gas in eligible cells is converted into star particles with an average rate per unit time expressed by a \citet[][]{schmidt1959} law \begin{equation}\label{eq10} \dot \rho_{*}= \frac{\rho}{t_*}, \end{equation} where $t_*$ is the star formation timescale. We assume a uniform star formation timescale of $t_*=100$ Myr (C19). Our results do not depend significantly on the choice of this parameter, as shown in \citet{D'Ercole2008}. In each time-step, the gas may be converted into stellar particles, each having a mass equal to an integer multiple of $m_*=0.1 \ \Msun$, by sampling the Poisson probability distribution for conversion of the gas to stars via eq. \ref{eq10}. We allow no more than 90\% of the cell gas to be turned into stars. This condition implies a minimum density threshold for star formation \begin{equation} \rho_{{\rm th}}=\frac{m_*}{0.9 \ (\Delta x)^3}=7.6 \times 10^{-21} \ {\rm g \ cm^{-3}}, \end{equation} where $\Delta x=0.1$ pc is the cell width. The new star particle formed in a cell is placed at its centre and it is given a velocity equal to that of the cell gas. At the beginning of the simulation, we assume the FG to be already in place. The gravitational effect of the FG is modelled by means of an analytic potential. In our ``wind tunnel'' setup, we use a reference frame in which the cluster is at rest and is accreting mass from one of the simulation boundaries. FG stars are continuously distributed in space, therefore occupying all the cells, with a spatial distribution described by an analytic Plummer mass density profile: \begin{equation} \rho_{*,\rm FG}(r) = \frac{3 \ M_{\rm tot}}{4\pi\, a^3} \left(1+\frac{r^2}{a^2}\right)^{-\frac{5}{2}}, \label{plum} \end{equation} where $r$ is the radius from the centre of the cluster and $a=3$ pc. For the total mass of the FG cluster, $M_{\rm tot}$, we assume two different values, i.e. $10^5 \ M_{\odot}$ and $10^6 \ M_{\odot}$, in two sets of simulations.
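The threshold and the stochastic conversion described above can be summarised in a short sketch (a schematic re-implementation for illustration, not the actual \ramses{} code):

```python
import numpy as np

MSUN_G = 1.989e33   # solar mass in g
PC_CM  = 3.086e18   # parsec in cm
YR_S   = 3.156e7    # year in s

m_star = 0.1 * MSUN_G   # stellar particle mass quantum (0.1 Msun)
dx     = 0.1 * PC_CM    # cell width (0.1 pc)
t_star = 1e8 * YR_S     # star formation timescale (100 Myr)

# Minimum density to form one particle from at most 90% of the cell gas;
# ~7.5e-21 g cm^-3, matching the quoted 7.6e-21 to rounding of constants:
rho_th = m_star / (0.9 * dx**3)

def sample_star_formation(rho, dt_s, rng):
    """Poisson-sample the number of 0.1-Msun particles formed in one step."""
    # expectation from the Schmidt law, rho_* = rho / t_*:
    lam = rho * dx**3 * dt_s / (t_star * m_star)
    n = rng.poisson(lam)
    # never convert more than 90% of the cell gas into stars:
    n_max = int(0.9 * rho * dx**3 / m_star)
    return min(n, n_max)

rng = np.random.default_rng(42)
n = sample_star_formation(rho=1e-19, dt_s=1e4 * YR_S, rng=rng)
print(f"rho_th = {rho_th:.2e} g/cm^3, particles formed this step: {n}")
```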
Our simulations are stopped 100 Myr after the formation of the cluster. This time corresponds to the onset of FG type Ia SNe, whose energetic feedback is assumed to halt star formation \citep{D'Ercole2008, maoz2014}. \subsection{Heating and cooling} FG AGB stars act as a heating source in our simulations by injecting mass and energy into their surrounding medium. In each simulation cell, the mass injected by AGB stars per unit time and volume is \begin{equation}\label{AGB_ej} \dot{\rho}_{\rm AGB}=\alpha \rho_{*,\rm FG}, \end{equation} where $\rho_{*,\rm FG}$ is the local FG density and \begin{equation} \alpha(\tau)=0.065 ~\tau^{-1.01} \end{equation} is the specific mass return rate (in yr$^{-1}$) as a function of the age $\tau$ (expressed in yr) of the FG population, computed for a \cite{kroupa2001} IMF. Following C19, the rate of energy injection per unit volume from the FG AGB stars is \begin{equation}\label{eq14} S=0.5 \ \alpha \ \rho_{*,\rm FG} \ (3\sigma^2+v^2+v_{\rm wind}^2), \end{equation} where $\sigma$ is the 1D velocity dispersion of the cluster, whereas $v_{\rm wind}$ and $v$ are the wind velocity of AGB stars and the velocity of the gas, respectively. We assume $v_{\rm wind} =2\times 10^6 \ {\rm cm \ s}^{-1}$ \citep[see][]{D'Ercole2008}. We also include radiative cooling due to hydrogen, helium, and metals, in its native implementation in the \ramses{} code \citep[see][]{few2014}. We apply a temperature floor of $10^3$~K. The inflowing ISM gas, as well as all the gas in the initial conditions, has a temperature of $10^4$ K, which is typical of the warm photoionised ISM \citep[e.g.][]{haffner2009}. Regarding the chemical composition of SG stars, in our simulations we focus only on the evolution of their helium (He) abundance and do not track the abundances of the other elements. To track the gas He abundance, we use a passive scalar which is advected with the gas, and we likewise attach a helium mass fraction to each stellar particle.
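To get a feel for the AGB mass budget, $\alpha(\tau)$ can be integrated over the injection window assumed here (from the onset of the AGB phase at 39 Myr to the end of star formation at 100 Myr); the script below is a simple order-of-magnitude sketch:

```python
def alpha(tau_yr):
    """Specific AGB mass return rate (yr^-1) for a Kroupa IMF, FG age tau."""
    return 0.065 * tau_yr**-1.01

# Fraction of the FG mass returned between 39 and 100 Myr.
t1, t2 = 39e6, 100e6
# Analytic integral of 0.065 * tau^-1.01 over [t1, t2]:
f_return = (0.065 / 0.01) * (t1**-0.01 - t2**-0.01)

# Numerical cross-check with a midpoint rule:
n = 100000
dt = (t2 - t1) / n
f_num = sum(alpha(t1 + (i + 0.5) * dt) for i in range(n)) * dt

M_FG = 1e6  # Msun, one of the two FG masses explored in this work
print(f"returned fraction = {f_return:.3f} (numerical: {f_num:.3f})")
print(f"AGB ejecta for a {M_FG:.0e} Msun FG: {f_return * M_FG:.1e} Msun")
```

Roughly 5 per cent of the FG mass is returned in this window, i.e. of order $5\times10^4~M_{\odot}$ for a $10^6~M_{\odot}$ FG cluster; this sets the scale of the SG mass that can be built from the ejecta alone when they are efficiently retained.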
As in C19, the helium abundances used in our work are from \cite{ventura2011}. For the helium yield of the 8 $M_{\odot}$ progenitor we adopt a value approximately equal to the average of the yields of the models of \cite{ventura2011} and \cite{siess2010}, calculated for a metallicity Z = 0.001. This allows us to achieve a helium mass fraction in the AGB ejecta which varies between Y=0.36 at the beginning of the AGB phase (39 Myr) and Y=0.32 at the assumed end of star formation (i.e. 100 Myr, \citealt{D'Ercole2008, D'Ercole2010}). \section{Results}\label{results.sec} \begin{figure*} \centering \includegraphics[width=\linewidth]{den.pdf} \caption{Density slices computed in the x-y plane for our simulations (from top to bottom: \MHL{}, \MLL{}, \MHH{}, \MLH{}) at three different times, as indicated above each column. In each panel the black and orange contours enclose regions within which the SG stellar density is $>6 \times 10^{-6}$ and $>0.06$ times the maximum value, respectively. The physical scale is shown in the bottom-right panel. } \label{fig:dens} \end{figure*} \begin{figure*} \centering \includegraphics[width=\linewidth]{temp.pdf} \caption{Temperature slices computed in the x-y plane for our simulations, as indicated on the left, at three different times, as indicated on top. Gray arrows represent the gas velocity field.} \label{fig:temp} \end{figure*} \begin{figure*} \centering \includegraphics[width=\linewidth]{f3.pdf} \caption{First and SG stellar density profiles for various sub-components with different He abundance (see the legend in the bottom-left panel) computed at different times for our simulations. The times are indicated at the top and the names of the simulations are shown at the right of the figure.
} \label{fig:profil} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.7\linewidth]{dist.pdf} \caption{Mass distribution of SG stars as a function of their He abundance $Y$ at various times (see the legend in the top-right panel). From top-left, clockwise: results for models \MHL{}, \MLL{}, \MLH{}, and \MHH{}. The helium mass fraction of SG stars varies between Y=0.246, which is the helium mass fraction of the pristine ISM gas (and FG stars), and Y=0.36, originating purely from the most massive AGB ejecta at t=39 Myr.} \label{fig:dist} \end{figure*} We now describe the results of our simulations, which again are listed in Table \ref{tabl1}. We examine the effect of the initial cluster mass on the formation of SG stars and explore how and under what conditions ram pressure limits the ability of the gravitational potential to retain the stellar winds ejected by AGB stars and accumulate pristine gas. We expect that a cluster can retain its AGB ejecta only if it has a velocity dispersion higher than the velocity of the AGB ejecta, which may not always be the case, especially at low mass \citep{naiman2018}. Similarly, in the case of a cluster moving in a gaseous medium, only a sufficiently massive cluster will be able to accrete enough mass to form new stars. This will happen only when the central velocity dispersion of the cluster is greater than the sum of the sound speed and the relative velocity with respect to the external ISM \citep{naiman2011,lin2007}. Based on these approximate analytical arguments, it is likely, as concluded in C19, that only clusters with mass larger than $10^6 M_{\odot}$ have the conditions to produce a new stellar generation. However, factors such as gas self-gravity, radiative cooling, and star formation are not properly accounted for in the analytic model, and hence it becomes relevant to study numerically even the case of lower-mass clusters. As already stressed, $t_{\rm I}$, i.e.
the time at which we start the inflow of gas into the side of the simulation volume, varies with the FG cluster mass and the density of the unperturbed ISM. The minimum such time is $t_{\rm I}=31.3$ Myr. Thus, throughout this paper, times are measured with respect to this minimum $t_{\rm I}$; in this reference frame, the infall time of each model is $t_0=\tinf-31.3$ Myr. The values of both $t_{\rm I}$ and $t_0$ are given in the last two columns of Table \ref{tabl1}. Also, in this time reference frame the injection of AGB ejecta starts at $t_{\rm AGB}=7.7$ Myr in all simulations. \subsection{Low-density simulations} C19 showed that if a cluster with a mass of $10^7 M_{\odot}$ and velocity $v_{pg}=23$ km/s moves in a medium with a density of $10^{-24}~{\rm g~cm^{-3}}$, a compact, massive SG with both He-rich and He-intermediate stars can form within the cluster. This was found to be in good agreement with observational data from very massive GCs. Here we study the same ISM density, but lower FG cluster masses of $10^6$ and $10^5 M_{\odot}$. \subsubsection{Model \MHL{}}\label{M6-Infall24} This model starts with the injection of mass and energy from AGB stars at $t_{\rm AGB}=7.7$ Myr, followed 4.5 Myr later by the infall of pristine gas. The top row of Fig.~\ref{fig:dens} shows gas and SG stellar densities from this simulation in the x-y plane at $10$ Myr, $29$ Myr and $71$ Myr. The black and orange contours indicate regions within which the SG stellar density is $>6 \times 10^{-6}$ and $>0.06$ times the maximum central value, respectively. The aim of the contours is to visualise the extent of all SG stars and a dense, central region where the stellar density is very high (typically about $10^{-17} \ {\rm g \ cm}^{-3}$ or $10^5 \Msun \ {\rm pc}^{-3}$). The top row of Fig. \ref{fig:temp} shows the gas temperature and velocity, computed at the same times. At $10$ Myr, AGB stars have already started ejecting enriched gas.
The cluster can retain gas at radii $r$ where the cluster velocity dispersion $\sigma(r) > v_{\rm wind}$. Ignoring the self-gravity of the gas and SG stars, the gravitational field consists only of the static Plummer potential. It can be shown that this condition holds at radii $<20$ pc. This rough analytic estimate nicely matches the results shown in the top-left density map in Fig. \ref{fig:dens}. The gas within this radius flows inwards and cools increasingly as the density increases. The cooling flow directed toward the cluster core is visible from the gray arrows in the top-left panel of Fig. \ref{fig:temp}, showing that the centre of the cluster is occupied by cold and dense gas with temperature close to the temperature floor of $10^3$ K. This converging cold gas now meets the conditions to form a SG of He-rich stars, and indeed a nearly spherical stellar distribution of SG stars is visible at the centre of the box, represented by the black contour in the top-left panel of Fig. \ref{fig:dens}. At $12.2$ Myr, an infall of pristine gas with velocity $v_{pg}=23$ km/s enters the box from the left, moving along the x-direction to the right. The high-velocity gas crosses the centre of the cluster, creating a wide cone-shaped tail downstream of it. The accretion of gas now becomes regulated by the balance between the gravitational pull of the cluster and the ram pressure of the incoming gas. Since in this simulation $\sigma^2>v_{\rm pg}^2+c_s^2$ \citep{lin2007}, the cluster potential overcomes the ram pressure and, besides retaining AGB ejecta, the cluster begins to accrete pristine gas. At $29$ Myr a dense, symmetric cold tail is visible downstream of the cluster centre (top middle panels of Figs. \ref{fig:dens} and \ref{fig:temp}). The arrows in the temperature map show the motion of the incoming gas.
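The $\sim 20$ pc retention radius quoted above can be reproduced with a back-of-the-envelope calculation. In the sketch below we assume (our reading of the criterion) that gas is retained where the local escape speed of the static Plummer potential exceeds $v_{\rm wind}$:

```python
import math

G      = 6.674e-8    # gravitational constant, cm^3 g^-1 s^-2
MSUN_G = 1.989e33    # solar mass in g
PC_CM  = 3.086e18    # parsec in cm

M_FG   = 1e6 * MSUN_G   # FG mass of the 10^6 Msun model
a      = 3.0 * PC_CM    # Plummer radius
v_wind = 2e6            # AGB wind speed, cm/s

# Plummer escape speed: v_esc^2(r) = 2 G M / sqrt(r^2 + a^2).
# Setting v_esc = v_wind and solving for r:
s = 2.0 * G * M_FG / v_wind**2          # sqrt(r^2 + a^2) at the threshold
r_ret = math.sqrt(s**2 - a**2) / PC_CM  # retention radius in pc
print(f"retention radius ~ {r_ret:.0f} pc")
```

This returns $\sim 21$ pc for the $10^6~M_{\odot}$ cluster, consistent with the estimate quoted above.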
At the far left side of the box, the gas flows strictly towards the right, but as it flows further in, the gas is increasingly pulled towards the centre and the cold downstream tail. The tail itself flows in the opposite direction to the incoming ISM gas, towards the centre of the cluster from the right, in an ``accretion column'' \citep{bondi1944, shima1985}. It is mostly thanks to the dense accretion column that pristine gas flows towards the centre and mixes with the AGB ejecta. Stars are forming along the column. Once formed, the new stars are gravitationally bound to the centre. In their orbits, some of the newly formed stars cross the centre and may end up upstream of the cluster, moving in the opposite direction with respect to the gas flow. The elongated distribution of the stars reflects their motion as well as the dynamical state and extent of the gas out of which they form (see the black contour in the top-middle panel of Fig.~\ref{fig:dens}). At the final time ($t=71$ Myr), the accretion of pristine gas and confinement of AGB ejecta have allowed the stellar mass to grow considerably. The shape of the stellar distribution has become more circular and diffuse (top-right panel of Fig. \ref{fig:dens}). Fig.~\ref{fig:profil} shows density profiles of FG and SG stars, in addition to sub-components of SG stars with different He contents. The density profiles are plotted at the same times as in Figures \ref{fig:dens} and \ref{fig:temp}, and are computed from the centre of the FG cluster, using concentric shells around the centre and evaluating the stellar mass contained in each shell. The top row of the figure shows that early in the \MHL{} simulation, all the newly born SG stars are He-rich ($Y>0.33$), being fuelled almost entirely by AGB ejecta. The SG stars are located at the centre of the cluster, within the innermost $\approx 2$ pc. It is worth noting that SG stars already dominate the mass distribution in the cluster core.
At a later time ($29$ Myr, top middle panel of Fig. \ref{fig:profil}), the infall of pristine gas has started and the infalling matter mixed with the AGB ejecta is starting to be incorporated into new stars. Although the SG profile is still fuelled predominantly by AGB ejecta, we start to see the formation of stars with He abundances between those of the AGB ejecta ($Y>0.32$) and the ISM ($Y=0.246$), which we refer to as He-intermediate stars, and which in the beginning have a less concentrated profile than the more He-rich SG stars. At $71$ Myr, an additional sub-population with $Y < 0.27$ is starting to emerge out of pristine gas (top-right panel of Fig. \ref{fig:profil}). The SG density profiles show that with time their distribution becomes increasingly extended and, in general, the lower the helium abundance, the more extended the stellar distribution. Fig. \ref{fig:dist} shows the distribution of SG mass as a function of helium abundance. For \MHL{} (top left panel), a narrow, single-peaked distribution is seen at $\sim 10$ Myr, reflecting the He-rich population fuelled by AGB ejecta. At later times, as a result of the dilution of the AGB ejecta, the distribution becomes bimodal, with one ``mixed'' peak at $Y \approx 0.34$ and an ``enriched'' one at $ Y \approx 0.36$. The distribution broadens and the SG stars gradually become less He-enriched towards the final time of $71$ Myr. By the end of this simulation ($71$ Myr), a compact SG is present, with a total mass of $6\times 10^4 \ \Msun$, which gives a SG-to-FG mass ratio of about 0.06, and a half-mass radius of 0.33 pc. \subsubsection{Model \MLL{}} We now report on a cluster with a ten times less massive FG ($10^5 M_{\odot}$). In this case, a smaller pre-simulation feedback bubble is expected and the infall of pristine gas starts earlier than in the case of the more massive FG from the previous sub-section. By means of equations (\ref{eq:Req}) and (\ref{eq:tI}), we derive that the infall begins at $t_0=3$ Myr, i.e.
4.7 Myr before the onset of AGB ejecta at $t_{\rm AGB}=7.7$ Myr. In this case, the gravitational potential well of the cluster is weaker than in the previous simulation. Hence, in the earliest phases, the ram pressure prevents the cluster from retaining a significant amount of gas. As we can see from the second-row panels of Fig. \ref{fig:dens}, at $t=10$ Myr a density enhancement is visible in the central region. The accumulated material is composed of a mix of pristine gas and AGB ejecta and is cooled down to $T<10^4$ K (second row of Fig. \ref{fig:temp}). Despite the central over-density, in this case no SG stars have formed yet. This is because the gas density has not reached the density threshold for star formation, i.e. $7.6\times 10^{-21} \ {\rm g \ cm}^{-3}$ (see section \ref{SF}). At $t=29$ Myr, a very compact SG component is in place, visible from the black contour at the centre of the cluster. This component is mostly composed of He-rich stars ($Y \sim 0.35$, Figs. \ref{fig:profil} and \ref{fig:dist}), i.e. mostly made out of AGB ejecta. The infalling gas is only weakly affected by the presence of the cluster and as a result no significant tail has formed at this time. At 71 Myr, we see the appearance of a stellar component with an intermediate He abundance (second-row, right-column panel of Fig. \ref{fig:profil}), due to dilution of the AGB ejecta. The final $Y$ distribution is much narrower than for the more massive FG cluster and in the range $ Y\sim 0.31-0.36$ (top right panel of Fig. \ref{fig:dist}). The final SG stellar mass here is $ 3.2\times 10^3M_\odot$, corresponding to a SG-to-FG mass ratio of $0.03$, which is half of the ratio in the case of the more massive FG cluster from the previous sub-section. The final half-mass radius is 0.08 pc. In Table \ref{tabl2} we report the main properties of the SG population, computed at the end of our simulations.
This includes the final SG mass, the SG/FG ratio, the fraction of AGB ejecta and pristine gas incorporated in SG stars, the He abundance pattern of SG stars (as indicated by the minimum and maximum values of Y) and the half-mass radius of SG stars. \begin{table*} \centering \begin{tabular}{lcccccccr} \hline Model & FG mass \ [$M_{\odot}$] & SG mass \ [$M_{\odot}$] & $f_{ \rm SG/FG}$ & $f_{\rm AGB}$ &$f_{\rm P}$ &$Y_{\rm min}$ &$Y_{\rm max}$ & $r_{\rm h,SG}$ [pc]\\ \hline \MHL{} & $10^6$ & 6.3$\times10^4$ &0.063 &0.77 &0.23 &0.264 &0.36 & 0.33 \\ \MLL{} & $10^5$ & 3.2$\times10^3$ &0.032 &0.91 &0.09 &0.31 &0.36 & 0.08 \\ \MHH{} & $10^6$ & 2.0$\times10^5$ &0.205 &0.23 &0.77 &0.246 &0.331 & 1.52 \\ \MLH{} & $10^5$ & 2.3$\times10^3$ &0.023 &0.49 &0.51 &0.273 &0.314 & 0.14 \\ \hline \end{tabular} \caption{Main results obtained at the end of each simulation. From left to right, the columns show: the name of the model, the mass of the FG cluster, the final SG mass, the final SG mass fraction ($f_{ \rm SG/FG}$), the fractions of AGB ejecta and pristine gas incorporated in SG stars ($f_{\rm AGB}$ and $f_{\rm P}$, respectively), the minimum and maximum He abundances of SG stars ($Y_{\rm min}$ and $Y_{\rm max}$) and the half-mass radius of SG stars $r_{\rm h,SG}$.} \label{tabl2} \end{table*} \subsection{High-density simulations} A higher ISM density than the value $\rho=10^{-24} {\rm g \ cm}^{-3}$ explored in the previous sub-section is expected to have a strong impact on the capability of the cluster to accumulate mass \citep{naiman2011} and to form SG stars. According to C19, when a very massive cluster moves through a denser medium, a more massive and extended -- but also less He-enriched -- SG can form, due to a larger amount of accreted pristine gas. In this section we thus study simulated scenarios with the same two FG masses as in the previous subsection, but with a ten times higher ISM density, i.e. $\rho=10^{-23} {\rm g \ cm}^{-3}$.
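As a consistency check (not part of the simulations), the $f_{\rm AGB}$ values in Table \ref{tabl2} can be combined with a simple two-component mixing estimate: for AGB ejecta with $Y_{\rm AGB}\simeq0.32$--$0.36$ and pristine gas with $Y_{\rm P}=0.246$, the mass-weighted mean helium abundance of a fully mixed SG is

```python
Y_P = 0.246                        # pristine gas / FG helium mass fraction
Y_AGB_LO, Y_AGB_HI = 0.32, 0.36    # AGB ejecta range over 39-100 Myr

def mixed_Y(f_agb, y_agb):
    """Mass-weighted He fraction of fully mixed AGB ejecta + pristine gas."""
    return f_agb * y_agb + (1.0 - f_agb) * Y_P

# f_AGB values from Table 2, labelled by FG mass and ISM density:
models = [("1e6 Msun, low density",  0.77),
          ("1e5 Msun, low density",  0.91),
          ("1e6 Msun, high density", 0.23),
          ("1e5 Msun, high density", 0.49)]
for label, f_agb in models:
    lo, hi = mixed_Y(f_agb, Y_AGB_LO), mixed_Y(f_agb, Y_AGB_HI)
    print(f"{label}: mean Y in [{lo:.3f}, {hi:.3f}]")
```

In all four models the resulting interval falls within the $[Y_{\rm min}, Y_{\rm max}]$ range reported in Table \ref{tabl2}.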
\subsubsection{Model \MHH{}} Assuming a FG cluster with mass $10^6 M_{\odot}$ in a medium a factor of 10 denser than that described in section \ref{M6-Infall24}, the radius of the bubble formed by multiple SN explosions is a factor of $\sim 3$ smaller (see Eq. \ref{eq:Req}). As a consequence, the time required for the cluster to move out of the cavity generated by FG SNe and reach the pristine gas is shorter. By means of equation \ref{eq:tI}, we obtain $t_0=3$ Myr, the same value as for the \MLL{} model. The infall passes through the centre of the cluster at 4 Myr, which is also the time at which dense gas starts to be accumulated and star formation is ignited. This is before the onset of AGB ejecta, so He-poor stars begin forming very quickly out of the pristine ISM gas. When the AGB ejecta are injected at $7.7$ Myr, more enriched gas is available and stars then start to form with larger helium abundances. The third rows in Figures \ref{fig:dens} and \ref{fig:temp} illustrate the time evolution in the simulation. At $10$ Myr a dense tail of cold gas has already formed. The newly-formed SG stars are distributed across a wide region whose shape reflects the relative motion between the cluster and the ISM. At $29$ Myr, an elongated dense central stellar component is visible, while the new stellar population is settling into a more regular, nearly ellipsoidal distribution. This process continues towards later times and eventually SG stars can be seen almost everywhere in the box, while the central dense region slightly expands along the tail. The final stellar distribution is significantly more extended than in the case with lower ISM density, i.e. \MHL{}. This can better be seen in the third-row panels of Fig. \ref{fig:profil}, showing the SG density radial profile computed at different times.
At first, in the innermost $\sim 4$ pc, the dominant stellar population has an intermediate helium abundance, while stars belonging to the lowest $Y$ bins are also present in significant quantities. Almost no He-rich stars (with $Y>0.33$) are present, due to the dominant fraction of ISM gas fuelling the star formation. At distances $>10$~pc from the centre, the SG population becomes decreasingly He-enriched. Everywhere in the cluster, the stellar density is still dominated by FG stars. As time progresses, the SG starts to dominate the stellar density in the inner $1$ pc and the SG population becomes decreasingly He-enriched, i.e. increasingly fuelled by the pristine ISM gas, and much less enriched than in the previous models. The SG is more compact than the FG, but it is significantly less compact here than in the other models. The central density at $71$ Myr is $\gtrsim 10^{-17}$ g cm$^{-3}$ ($\gtrsim 1.5 \times 10^5$ $M_{\odot} \ {\rm pc}^{-3}$), about a factor of ten lower than in the \MHL{} model. The lower left panel of Fig. \ref{fig:dist} shows that the stellar distribution in $Y$ is totally dominated by ISM gas ($Y = 0.246$), though it also shows a flat tail of He-enriched stars with a secondary peak at the opposite end, at $Y \approx 0.32$. This evolves with time towards a slight but rather broad He enrichment due to dilution of AGB ejecta by the pristine gas, peaking at $ Y\approx 0.265$ at the final time. The final SG stellar mass formed in this model is $2~\times~10^5$ M$_{\odot}$ and the final half-mass radius of SG stars is 1.5 pc. \subsubsection{Model \MLH{}} This model presents the physical conditions in which the early SN feedback of massive FG stars is the least effective, due to the low mass of the FG population and the dense ISM gas it has to fight against. As a consequence, this is the model with the earliest infall. The time-evolution of this run is shown in the bottom rows of Figures \ref{fig:dens} and \ref{fig:temp}.
Due to the low FG stellar mass, the central accumulation of mass and star formation are significantly delayed with respect to the other simulations. The cluster accumulates matter at its centre rather slowly and starts to form stars at about $30$ Myr. A significant enhancement of the central density of the gas becomes visible at this time (central panel at the bottom of Figure \ref{fig:dens}), along with a compact, newly born stellar component. The temperature map highlights the appearance of a tenuous cold tail downstream of the centre (Figure \ref{fig:temp}), through which the cluster starts to slowly accrete gas. As this process continues, the cluster is able to form more SG stars. The bottom-row panels of Fig.~\ref{fig:profil} show that the emerging SG population grows slowly, is compact, and is dominated by intermediate He abundances. There is a rather strong contrast with the low-mass, low-density model (\MLL{}; second row), which has a much more enriched SG population, due to the cluster's ability to hold on to its AGB ejecta. In Fig. \ref{fig:dist}, we can see that the emerging stellar component is mildly enriched with He, due to a composition made of an approximately equal mixture of pristine gas and AGB ejecta, and, as in the other runs, it evolves towards a broader He distribution. Eventually, this model produces the smallest final SG mass (2.3 $\times~10^3$ M$_\odot$) and a final half-mass radius of $0.14$ pc. The mass fractions of stars born out of pristine gas and AGB ejecta are nearly equal (Table \ref{tabl2}). In summary, our high-density simulations present two different kinds of results. In the case of the higher mass cluster, more pristine matter can be accreted and the SG stellar mass is much higher than in the case with a more diffuse infall. SG stars are more extended with a denser ISM and they have lower He abundances. In general, this case gives results similar to the simulation with a very massive cluster (C19).
The case of the lower mass cluster is quite different. Not only do very few stars form, but the SG stars also remain concentrated within a radius of less than 1 pc. The SG-to-FG mass ratio is similar for the two masses in the diffuse case (Table \ref{tabl2}), but very different in the dense ISM case. In the low-mass cluster case, the results are fairly similar regardless of the ISM density. \subsection{Second generation star formation} \begin{figure} \centering \includegraphics[width=1.\linewidth]{sfr.pdf} \caption{\textit{Top panel}: SG star formation rates vs. time for our simulated clusters. \textit{Middle and lower panels}: cumulative SG stellar mass formed in the $10^6~M_{\odot}$ and $10^5~M_{\odot}$ clusters, respectively. The curves are colour-coded as indicated in the legend in the bottom panel. The solid, dashed and dotted lines show the total SG stellar mass, the mass formed from AGB ejecta, and from pristine gas, respectively. The coloured and grey vertical lines mark the times at which the infalling gas crosses the centre of the cluster in each model and the onset of the injection of AGB ejecta, respectively.} \label{fig:sfr} \end{figure} The top panel of Fig. \ref{fig:sfr} shows the star formation rates of our simulations. Star formation in the $10^6~M_{\odot}$ models starts when the accreted gas reaches the centre (high-density model) or when the injection of AGB ejecta begins (low-density model). This is different in the $10^5~M_{\odot}$ cases. Here, star formation begins with a delay relative to both the onset of the AGB phase and the time at which the infall crosses the cluster centre. This can be attributed to the weaker gravitational potential of these clusters and the smaller amount of ejected AGB gas. The time delay is longer in the case of \MLH{}, due to the higher ram pressure from the ISM. In Fig. \ref{fig:sfr}, we also show the rate of mass injection from AGB stars with dash-dotted lines.
The more massive cluster runs settle to star formation rates larger than the AGB mass injection rate, whereas the lower mass runs settle to SFRs lower than the injection rate (and approximately the same for both infall densities). This is because the more massive cluster is capable of retaining the AGB ejecta both at low and high density, but the amount of gas accretion is clearly larger if the accreting ISM density is larger. In \MHL, the amount of mass that can be retained by the cluster and transformed into stars is dominated by the injection of AGB ejecta, and hence the SFR is regulated mostly by the rate of stellar mass return. On the other hand, the lower mass cases show a much lower capability to retain both the AGB ejecta and the pristine gas, especially at the start of the simulations. As a result, this cluster exhibits lower SFRs than the rate of stellar mass return. The middle and bottom panels of Fig. \ref{fig:sfr} show the cumulative mass of SG stars (solid lines) formed as a function of time in our $10^6~M_{\odot}$ and $10^5~M_{\odot}$ simulations, respectively. In these plots, we also show how the SG mass is divided into pure AGB ejecta (dashed lines) and pristine gas (dotted lines). For reference, the cumulative AGB ejecta (computed from equation \ref{AGB_ej}) are shown as dash-dotted lines. The most massive SG component ($\sim 2 \times 10^5~M_{\odot}$) is obtained in the \MHH{} model, and is mostly formed out of pristine gas. In this model, star formation starts before FG stars enter their AGB phase. Here AGB ejecta compose about $25$ percent of the SG mass. In the \MHL{} model this is reversed, as at all times the majority of the gas forming the SG stars is AGB ejecta, composing about $80$ percent of the SG stellar mass at the final time (and more at earlier times).
Another notable feature of both the \MHH{} and \MHL{} simulations is that the AGB ejecta are retained and converted into stars very efficiently (as shown by the close vicinity of the dash-dotted and dashed lines in the middle panel of Fig. \ref{fig:sfr}). The $10^5~M_{\odot}$ cluster shows a much lower capability to retain mass and to form new stars. In the \MLL{} model, star formation starts a few Myr after the infall and, at all times, it is almost completely dominated by AGB ejecta. The higher-density \MLH{} run sees a later beginning of the SF phase, occurring at approximately 27 Myr. In this model the fractions of stars formed out of AGB ejecta and pristine gas are comparable at later times, whereas the former dominates at earlier times. In both cases, the cumulative AGB mass is significantly larger than the stellar mass, with a larger retained fraction in the lower density model. One interesting aspect of these models is the larger amount of stellar mass formed at low ISM density, the reverse of the results of the $10^6~M_{\odot}$ runs. This further underlines how the capability to form stars is sensitive to both accretion and ram pressure from the external ISM. \section{Discussion}\label{discussion.sec} \begin{figure*} \centering \includegraphics[width=\linewidth]{r_h.pdf} \caption{Correlations between the half-mass radius of SG stars and the FG cluster mass (left panel), the SG cluster mass (middle panel) and the velocity dispersion of FG stars (right panel), computed at the final time of our simulations (71 Myr). The orange stars and solid blue circles show the results obtained in our low-density and high-density models, respectively. The data shown for a FG mass of $10^7 M_{\odot}$ are from C19. Note that the half-mass radii of the FG clusters with masses of $10^5$ and $10^6 \Msun$ are set to 4 pc in our simulations, whereas a value of 30 pc is adopted for the $10^7 \Msun$ cluster taken from C19.
Depending on the density of the pristine gas, positive correlations can be seen in all panels. } \label{fig:rhm} \end{figure*} Our hydrodynamical simulations show how clusters with FG masses of $10^5~M_{\odot}$ and $10^6~M_{\odot}$ moving through diffuse gas can accrete enough mass to form a new stellar generation. This result has been found assuming two different, realistic values for the external ISM density, i.e. $\rho=10^{-24} \ {\rm g~cm}^{-3}$ and $\rho=10^{-23} \ {\rm g~cm}^{-3}$, or particle densities of $\approx 1$ and $10~{\rm cm^{-3}}$, typical of normal main-sequence star-forming galaxies at low and high redshifts \citep{federrath2017,wardlow2017}. After describing the results of our simulations, computed for FG components with masses spanning two orders of magnitude (from $10^5$ to $10^7~M_{\odot}$, including C19), in this Section we show how some important structural properties scale with cluster mass. Fig.~\ref{fig:rhm} shows three scaling relations for the SG half-mass radius, $r_{\rm h,SG}$, computed from our simulations, as well as for a more massive cluster from C19. These include correlations with the FG mass (left panel), the final SG mass (middle panel), and the FG velocity dispersion (right panel). Positive correlations are found in all three cases, with slopes which depend on the density of the pristine gas. The high-density simulations produce steeper slopes in all three plots. This suggests that the half-mass radius depends on both intrinsic and environmental properties and that the interplay between mass and gas density is crucial in determining the structural properties, such as the size of the SG component. We note that in all calculated models, $r_{\rm h,SG}$ is less than the FG half-mass radius, $r_{\rm h,FG}$, i.e. the SG stars are more concentrated. Since the ratio $r_{\rm h,SG}/r_{\rm h,FG}$ plays a key role in the mixing of the two populations and their dynamical evolution \citep{vesperini2021}, further parametric studies on $r_{\rm h,FG}$ need to be performed in the future.
We recall that some of the results shown in Fig.~\ref{fig:rhm} depend on the assumed initial conditions. In our simulations, for the FG we assume scaling radii supported by local observations, i.e. by the flat size-mass relation observed in a variety of stellar clusters of various types and ages \citep{krumholz2019}. It is currently not known whether the same relation also holds at high redshift, in physical conditions more similar to the ones in which GCs originated. A somewhat peculiar behaviour common to both cluster masses is that the central density of SG stars achieved in the lower density ISM case is higher than in the higher ISM density case (Fig. \ref{fig:profil}). A similar result was also found by C19 in a $10^7~M_{\odot}$ cluster, hence it seems to be independent of the FG mass. This is due to the larger ram pressure of the high-density model, in which the mass accumulation in the cluster centre is less efficient than in the low-density models. The final result is a more massive, but also more extended, cluster. In all cases, the final central density is $\gtrsim 10^{-17}$ g cm$^{-3}$ ($10^5$ $M_{\odot} \ {\rm pc}^{-3}$), consistent with the values observed today in GCs (e.g., \citealt{renzini2015}). The shape of the final He distribution in Fig. \ref{fig:dist} is found to be very sensitive to the model. A final, double-peaked distribution is obtained only with the $10^6~M_{\odot}$ cluster, as was found for a ten times more massive cluster in C19. The lower mass clusters cover a more limited range of helium enhancements for SG stars, showing that these cases do not present the right conditions for an effective dilution. On the other hand, since the gas inside the cluster in the low-density cases is predominantly the gas enriched by AGBs, the He distribution tends towards high helium enhancements.
However, in all simulations it gradually becomes broader towards lower helium enhancements over time, due to the dilution of AGB ejecta by the pristine gas. In Table \ref{tabl2} we present the main properties of the SG components formed in this paper, namely the final SG mass, the SG mass fractions and the He enrichment. As for the final SG mass fraction, the interplay between mass and ISM density is found to be complex. At fixed stellar mass, for the $10^6~M_{\odot}$ cluster the SG fraction increases with the ISM density, but the same is not true for the $10^5~M_{\odot}$ cluster, which presents a slightly more massive SG in the lower ISM density model. The reason for this behaviour is that, owing to the lower ram pressure, the \MLL{} model accumulates mass (mostly AGB ejecta) more easily than the \MLH{} model. A more detailed interpretation of the relative amount of SG stars obtained in our simulations as a function of cluster mass will be presented in Sect.~\ref{sec_SG_frac}, where our results will be compared with previous, observationally based estimates.

\subsection{SG fraction as a function of the initial mass}\label{sec:sgmf} \label{sec_SG_frac}

One of the most important observational discoveries in the field of multiple stellar generations in globular clusters is the strong correlation between the number ratio of SG to FG stars and the present-day cluster mass \citep{bastian2018}. We now investigate this correlation in our simulations and assess whether it agrees with observations. \cite{milone2017} and \cite{milone2020} studied the correlation between the SG-to-FG number ratio and cluster mass in globular clusters with various structural parameters. In their study, they used archival observational data, such as the comprehensive GC catalogues of \citet[][2010 edition]{harris1996} and \citet{mclaughlin2005}.
\citet{milone2017} analysed high-precision HST photometry along the RGB for 57 Galactic GCs and showed that the SG-to-FG number ratio in their sample correlates with the absolute luminosity and the mass of the GC. In \citet{milone2020} these studies were extended to also consider various multiple-generation GCs belonging to the Magellanic Clouds (MC), in order to assess whether this correlation depends on the properties of the host galaxy. They conclude that a strong correlation between the present-day number ratio of SG to FG stars and the present-day cluster mass is found for all GCs with MPs. The main dynamical effect of long-term cluster evolution is mass loss. In this process, due to stellar evolution and dynamical effects such as two-body relaxation, tidal stripping and shocks, the stars that are lost from clusters add to the population of field stars (e.g., \citealt{Heggie2003, lamers10}). The final result of this process can be a significant decrease of the mass of the cluster; as a consequence, GCs are expected to have been initially more massive than they are now. Recently, \citet{baumgardt2018} computed initial cluster masses for a comprehensive sample of Milky Way GCs. Based on these results, \citet{milone2020} showed that a correlation exists between the present-day SG-to-total number ratio $N_{\rm SG}/N_{\rm t}$, where $N_{\rm t}=N_{\rm FG}+N_{\rm SG}$, and the initial cluster mass. In Figure \ref{fig:frac}, the open circles and squares show the observed correlation between the present-day SG-to-total number ratio, $N_{\rm SG}/N_{\rm t}$, and the initial cluster mass for Galactic and Magellanic GCs. We also show the results of our high-density (blue circles) and low-density (orange stars) models obtained at the final simulation time of 71 Myr\footnote{To compare our results with the observed SG fractions, for each model a calculation of the \textit{number} ratios between SG and FG stars is required.
The derivation of these quantities from the mass fractions presented in Table \ref{tabl2} requires an assumption regarding the stellar IMF. For this purpose, for the sake of simplicity, we assume that FG and SG stars have the same IMF. However, we have ascertained that the assumption of different IMFs for FG and SG stars does not have a significant impact on the results discussed in Sect.~\ref{sec_SG_frac}.}. Both the simulations and the observed data show a positive correlation between the SG-to-total number ratio and cluster mass. The different slope in the models is mostly related to the relative amount of SG stars formed out of the AGB ejecta. In the low-density models, the SG stellar mass is dominated by the cumulative AGB ejecta which, to a first approximation, is proportional to the FG mass (Eq. \ref{AGB_ej}); the resulting $N_{\rm SG}/N_{\rm t}$ ratio is therefore only weakly dependent on this quantity. On the other hand, in the high-density models with mass $\ge 10^6 M_{\odot}$ the total SG mass is dominated by the pristine gas (Fig. \ref{fig:sfr}) and grows with the cluster mass, but its dependence on the FG mass is not linear. A qualitative explanation may be found in the analytic formula for the Bondi-Hoyle-Lyttleton accretion rate, namely $\dot{M} \propto G^2 M^2 \rho v^{-3}$, where $M$ is the mass of the accretor, $\rho$ is the ambient density and $v$ the relative velocity between the accretor and the ISM \citep{edgar04}. Therefore, if the accreted mass is dominated by the amount accreted from the pristine gas, to a first approximation the SG mass is proportional to $M_{\rm FG}^2$, which qualitatively accounts for the steeper increase of $N_{\rm SG}/N_{\rm t}$ as a function of mass.

\begin{figure} \centering \includegraphics[width=\linewidth]{Nfrac.pdf} \caption{SG-to-total number ratio ($N_{\rm SG}/N_{\rm t}$) as a function of cluster mass computed at the final time of our simulations (71 Myr), compared to the observational compilation of \citet{milone2020} of Galactic and Magellanic GCs.
The orange stars and solid blue circles show the results obtained in our low-density and high-density models, respectively. The simulation results for a FG mass of $10^7 M_{\odot}$ are from C19. The open grey circles and squares are number ratios observed in present-day MW and MC clusters, respectively. The blue (high-density models) and orange (low-density models) shaded regions show the areas covered by the SG/FG number ratios corrected by factors between 5 and 20 to account for the effects of long-term dynamical evolution (see Sect.~\ref{sec:sgmf} for details). } \label{fig:frac} \end{figure}

However, the observed $N_{\rm SG}/N_{\rm t}$ ratios are significantly higher than in our simulations. It is important to bear in mind that a direct comparison of our results with the observed present-day properties of GCs requires taking into account the long-term dynamical evolution of the clusters. The long-term dynamical evolution and the preferential depletion of FG stars, which are characterised by a more diffuse distribution than SG stars (see Fig.~\ref{fig:profil}), are expected to lead to a progressive increase of the $N_{\rm SG}/N_{\rm t}$ ratio. We attempt to approximately account for such an increase by means of an alternative, simplified approach, which consists in rescaling the initial SG-to-FG ratios by factors between 5 and 20 \citep{renzini2015}. Such factors are commonly regarded as indicative lower and upper limits for the ratio between the initial and present-day mass of GCs required to address the so-called `mass budget' problem. In this framework, GCs had to be initially much more massive than today in order to deliver significant amounts of matter with the composition needed to explain the observed chemical anomalies (\citealt{D'Ercole2008,renzini2015,bastian2018}, C19). This rescaling gives us an approximate range for the present-day value of the $N_{\rm SG}/N_{\rm t}$ ratio of our models.
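The rescaling just described amounts to boosting the simulated SG-to-FG number ratio by an assumed mass-loss factor and recomputing the SG-to-total fraction. A minimal sketch follows; the input ratio is an illustrative placeholder, not a value from our tables.

```python
def rescaled_sg_fraction(n_sg_over_n_fg, mass_loss_factor):
    """Present-day N_SG/N_t obtained by multiplying the initial SG-to-FG
    number ratio by the assumed factor (preferential loss of FG stars)."""
    r = n_sg_over_n_fg * mass_loss_factor
    return r / (1.0 + r)

# Hypothetical initial ratio, rescaled by the indicative factors 5 and 20.
r0 = 0.05
lo = rescaled_sg_fraction(r0, 5)    # 0.25 / 1.25 = 0.20
hi = rescaled_sg_fraction(r0, 20)   # 1.0 / 2.0 = 0.50
print(f"present-day N_SG/N_t in [{lo:.2f}, {hi:.2f}]")
```

The conversion $r \to r/(1+r)$ simply re-expresses the SG-to-FG ratio as a fraction of the total number of stars, which is the quantity plotted in Fig. \ref{fig:frac}.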
The rescaled $N_{\rm SG}/N_{\rm t}$ values are shown with blue and orange shaded regions in Fig. \ref{fig:frac} for the high-density and low-density models, respectively. Incorporating this correction produces good agreement between the high-density models and the observations. The fact that the slope of the observed relation is roughly accounted for is a further confirmation of the key role of the pristine gas accretion in the formation of SG stars, which is clearly of minor importance in the low-density models. It remains important in the future to model the long-term dynamical evolution of the systems emerging from our simulations. The use of an N-body approach and a detailed modelling of the external tidal field of the host galaxy will be useful to investigate the evolution of the $N_{\rm SG}/N_{\rm t}$ ratio, as well as the dynamical evolution of the various SG sub-populations characterised by different chemical abundances. \subsection{Second generation He enrichment versus cluster mass} Recent results from HST photometry \citep{milone2015,milone2017,lagioia2019,milone2020} reveal a correlation between the observed He enhancement and initial cluster mass in GCs hosting MPs, such that more massive clusters have larger He enhancements. \begin{figure} \centering \includegraphics[width=\linewidth]{DeltaY.pdf} \caption{Maximum helium enhancement in SG stars as a function of cluster mass. The orange stars and solid blue circles show the results obtained in our low-density and high-density simulations, respectively. The simulation results for a FG mass of $10^7 M_{\odot}$ are from C19. 
The open grey circles and squares are the helium enhancements observed in present-day MW and MC clusters, respectively.} \label{fig:DY} \end{figure}

In Figure \ref{fig:DY} we show the maximum He enhancement, defined as $\DeltaYmax=Y_{\rm max}-Y_0$ (where $Y_{\rm max}$ and $Y_0$ are the maximum and the FG He mass fraction, respectively), as a function of cluster mass in our high-density (filled blue circles) and low-density (filled orange stars) models, including also the models with a $10^7 \Msun$ FG mass of C19. This figure shows that the chemical composition of SG stars can vary with both the FG mass and the density of the external ISM. We also show the observational results assembled by \citet{milone2020} for MW and Magellanic GCs with open grey circles and squares, respectively. Observationally, $\DeltaYmax$ represents the difference in helium abundance between SG and FG stars and was computed by means of the method introduced by \citet{milone2013} (see also \citealt{lagioia2019}). In brief, the difference in He abundance between stars of different generations is measured from photometric data, i.e. by means of colour-magnitude diagrams in various HST filters covering a wide range of wavelengths and from the estimated colour difference between the separated sequences. By comparing the observed colours with those obtained from synthetic spectra, estimates of the relative He abundances can be derived. In the observational dataset, the maximum He enhancement is found to increase as a function of the FG mass, to lie in the range $\DeltaYmax \approx 0-0.2$ and to follow a similarly increasing trend in both Galactic and Magellanic GCs \citep{milone2020}. The low-density simulations show a $\DeltaYmax$ independent of cluster mass, whereas a positive correlation is found in the case of the high-density runs. Even though the predicted relation is shallower than the observed one, this is a promising result.
To our knowledge, no previous attempt has been made to reproduce this relation within any scenario for MP formation. In the low-density models, the accretion of pristine gas and dilution play a marginal role, therefore the maximum enrichment level is dictated mostly by the difference between the abundance of the most He-enriched stars and that of the FG, a quantity which is independent of mass. In fact, as seen in Fig.~\ref{fig:dist}, these models contain stars formed out of almost pure AGB ejecta. In the high-density models, more massive clusters are characterised by more He-enhanced SG stars. As seen in Fig.~\ref{fig:profil}, the infall prevents the birth of a substantial population of very He-rich stars in the $10^5~M_{\odot}$ and $10^6~M_{\odot}$ clusters. This does not occur in the $10^7~M_{\odot}$ model, in which dilution is delayed with respect to the lower mass models and which is more efficient in retaining, and transforming into new stars, the ejecta of the most massive AGBs (C19). This model presents a $\DeltaYmax$ value very similar to that of its lower-density counterpart. In order to better match the observed slope, a stronger dilution of stellar winds is required in lower mass clusters, perhaps achievable by considering an even denser ISM, or by preventing the retention of the most He-enriched ejecta in lower mass clusters by means of feedback processes such as ionising radiation (\citealt{gavagnin2017,chantereau2020}). The inclusion of such processes is currently in progress and their effects on the He abundance of SG stars will be presented in a future paper.

\subsection{Caveats}

The present study relies on a few simplifying assumptions. In this Section, we discuss the most important ones, their implications and how they could possibly be investigated further in the future. First, we model the FG of stars by means of a static Plummer density profile. This might lead us to omit a few dynamical effects.
In principle, the accretion of mass and subsequent star formation might lead to further contraction of the FG stellar distribution, which is not likely to affect the star formation history but could have effects on the long-term dynamical evolution. The study of the subsequent evolution requires methods different from the ones discussed in the present paper, i.e. an N-body approach to model the evolution of the clusters in an external tidal field (e.g., \citealt{mastrobuono19}). Work is already in progress to address these topics and will be the subject of a future paper. Moreover, we assume a constant value of 20 km s$^{-1}$ for the velocity of the pristine gas relative to the GC, of the order of the isothermal sound speed of a medium with a temperature of $10^4$~K (\citealt{D'Ercole2016}, C19). The adoption of a different velocity might affect the timescale over which the cluster starts accreting pristine gas, i.e. the time of the infall, and, if too large, even its capability to accrete mass. In principle, a different velocity could delay or anticipate the infall time, but it would not prevent gas accretion as long as the relation $\sigma^2>V^2_{\rm pg}+c^2_{\rm s}$ is satisfied \citep{lin2007}. Idealised core potentials moving at high velocities are expected to have density enhancements and characteristic radii that decrease with increasing velocity \citep{naiman2011}. However, even an approximate estimate of the effects of a different velocity on gas accretion in our model is not feasible. In fact, a detailed analytic formalism for the accretion of gas onto core potentials in realistic conditions does not exist, since real systems include physical processes (such as radiative cooling and star formation) that cannot be studied without a numerical approach.
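The accretion condition quoted above can be evaluated with order-of-magnitude numbers. In the sketch below, the unit coefficient in the virial estimate $\sigma^2 \sim G M / r_{\rm h}$ is a crude assumption of ours, so the outcome should be read as indicative only.

```python
import math

G = 6.674e-8        # gravitational constant, cgs
MSUN = 1.989e33     # solar mass in g
PC = 3.086e18       # parsec in cm
KB = 1.381e-16      # Boltzmann constant, erg/K
MH = 1.673e-24      # hydrogen mass in g

def sigma_virial(m_sun, r_h_pc):
    """Crude virial velocity dispersion (cm/s), sigma^2 ~ G M / r_h.
    The unit coefficient is an order-of-magnitude assumption."""
    return math.sqrt(G * m_sun * MSUN / (r_h_pc * PC))

def sound_speed(temp_k, mu=1.0):
    """Isothermal sound speed (cm/s) of gas at temperature temp_k."""
    return math.sqrt(KB * temp_k / (mu * MH))

v_pg = 20e5                  # relative gas velocity, cm/s (as in the text)
cs = sound_speed(1e4)        # roughly 9 km/s for mu = 1
for m in (1e5, 1e6):
    sigma = sigma_virial(m, 4.0)   # FG half-mass radius of 4 pc
    ok = sigma**2 > v_pg**2 + cs**2
    print(f"M = {m:.0e} Msun: sigma = {sigma/1e5:.1f} km/s, accretes: {ok}")
```

With these rough numbers the $10^6~M_{\odot}$ cluster satisfies the condition while the $10^5~M_{\odot}$ cluster does not, consistent with the delayed onset of accretion found in the lower mass models.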
The results of the lower mass models show that, even if the above relation is not satisfied at the beginning, the density field is perturbed by the presence of the cluster, and even a small density enhancement in the centre might lead to a local, slight increase of the velocity dispersion, which might eventually result in further accretion and ignite star formation. For the sake of simplicity, we consider only the feedback of FG AGB stars and neglect the injection of energy by other sources, such as SNe or other forms of feedback, e.g. ionising radiation. Our simulations are stopped at $\approx 100$ Myr after the formation of the cluster, corresponding to the onset of FG type Ia SNe. However, results based on 3-D simulations have shown that, in realistic conditions, stellar feedback from sparse energy sources is not very efficient in dispersing the cold star-forming gas \citep[][]{romano2019}. \cite{Lacchin21} studied how the explosions of type Ia SNe affect the star formation history and the chemical properties of SG stars in massive GCs of $10^7~M_{\odot}$. They used an initial setup similar to ours, i.e. a cluster in motion with respect to a uniform gas distribution, and tested different assumptions for the gas density and for the type Ia SN delay time distribution. Their results indicate that type Ia SNe are able to severely limit SF only assuming an ambient gas density of $\rho = 10^{-24}$ g cm$^{-3}$, whereas their higher-density case ($\rho = 10^{-23}$ g cm$^{-3}$) is weakly affected by SN explosions, with a final SG mass similar to the one obtained without SNe Ia.
In the future, it will be very important to extend the study of \cite{Lacchin21} to clusters of different masses, like the ones considered in this paper, in order to improve our still limited understanding of the effects of discrete feedback sources on the ISM and, most of all, to probe the effects of different numbers of SN explosions on mass accretion and on star formation in young GCs.

\subsection{On the requirement of a truncated IMF and on SG massive stars} \label{sec_IMF}

The adoption of an IMF lacking stars more massive than $8-10~M_{\odot}$ is probably the strongest assumption required by the AGB scenario. This assumption is necessary in order to prevent the SG from being significantly enriched with Fe with respect to FG stars, and also in order to avoid the suppression of star formation due to stellar feedback. In this section, we analyse a few theoretical and indirect arguments in support of this assumption. Current observational studies of the variation of the stellar mass function in GCs concern only a very limited range of low stellar masses ($\le 0.75 M_{\odot}$, \citealt{cadelano20}). Indirect arguments in support of this assumption can be found in the framework of the integrated galactic IMF theory (IGIMF, \citealt{weidner2005}). In this picture, stars can form only in clusters and, since available observations indicate that the cluster mass function is a power law with index $\sim -2$ \citep{zhang1999}, most stars in the Universe form in low-mass clusters. The requirement that the most massive star within a cluster cannot exceed the total cluster mass implies that low-mass clusters will present a deficiency of massive stars \citep{haas2010}. This also implies that if a system is characterised by a particularly low star formation rate, in principle a generation of stars devoid of massive stars can originate. These arguments can explain the absence of massive stars only in very low mass clusters, such as Taurus or IC348 (\citealt{luhman2004}).
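The scarcity of massive stars in low-mass systems can be illustrated by counting the expected number of stars above $8~M_{\odot}$ for a cluster of given mass. The sketch below assumes a single Salpeter power law over $0.1-100~M_{\odot}$, which is a simplification of the IMFs used in the works cited here.

```python
def n_massive_salpeter(m_total, m_lo=0.1, m_hi=100.0, m_cut=8.0, alpha=2.35):
    """Expected number of stars above m_cut (Msun) for a pure Salpeter IMF
    (dN/dm ~ m^-alpha) normalised to a total stellar mass m_total (Msun).
    A single power law over the full mass range is a simplifying assumption."""
    # Normalisation: integral of m * dN/dm over [m_lo, m_hi] equals m_total.
    mass_int = (m_hi**(2 - alpha) - m_lo**(2 - alpha)) / (2 - alpha)
    k = m_total / mass_int
    # Number of stars with m > m_cut.
    return k * (m_hi**(1 - alpha) - m_cut**(1 - alpha)) / (1 - alpha)

for m_cl in (1e2, 1e3, 1e5):
    print(f"M_cl = {m_cl:.0e} Msun -> N(>8 Msun) ~ {n_massive_salpeter(m_cl):.1f}")
```

With this normalisation a $\sim 100~M_{\odot}$ cluster is expected to host less than one type II SN progenitor, whereas a $10^5~M_{\odot}$ system hosts several hundred, which is why the IGIMF argument alone cannot justify a truncated IMF in the more massive clusters discussed below.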
A few works carried out in this framework have shown that the complete lack of massive stars (i.e. stars with $M\ge 8-10~M_{\odot}$) can occur in systems with an average SFR $\le 10^{-4}~M_{\odot}/yr$ (\citealt{calura2010}; \citealt{recchi2014}; \citealt{yan2017}). The average star formation rates in our $10^5 M_{\odot}$ cluster are between $\approx 3\times 10^{-5} ~M_{\odot}/yr$ and $4.5\times 10^{-5}~M_{\odot}/yr$, hence below this value. However, in the case of the $10^6 M_{\odot}$ cluster, SFR values are typically larger than $10^{-4}~M_{\odot}/yr$, hence the complete absence of massive stars cannot be justified within the IGIMF theory. Nevertheless, according to the same theory, in low SF regimes the number of massive stars is expected to be considerably reduced with respect to a standard IMF. In a recent theoretical study based on hydrodynamic simulations, \citet{bekki2019} has shown that in a dense stellar environment, the dynamical interaction between the gravitational potential of the globular cluster and collapsing star-forming gaseous clouds can suppress the formation of massive stars, leading to a truncated IMF for SG stars. The validity of this result requires confirmation in a broader context and needs to be tested further, perhaps via a comprehensive study of the initial conditions and of the role of crucial parameters such as cluster mass and size, and of processes such as turbulence and proto-stellar feedback. Another possible mechanism to avoid the effects of massive stellar feedback in dense cluster cores involves the dynamical decoupling of a small population of segregated massive stars, which might result in the eventual expulsion of some of them. If a few massive stars form in a dense environment, two-body relaxation is expected to be very fast and segregation might occur well before they can explode as SNe (\citealt{gurkan2004, allison2009}), or stars can even originate already segregated (\citealt{bonnell2002}).
Several works have shown that close dynamical encounters in a collisional subsystem formed by massive stars can result in the ejection of some of them (\citealt{gualandris2004,oh2015,wang2019}). On the observational side, isolated, runaway massive stars have frequently been observed near young massive clusters, with typical velocities in excess of $30$ km/s (e.g., \citealt{gies1986,stone1991}; see \citealt{andersson2020} and references therein), hence higher than the velocity dispersion values of the systems modelled here. Testing the possibility that a significant fraction of massive stars might be expelled from systems with features similar to the ones modelled here requires collisional N-body simulations, which have currently been performed only for systems much less massive than $10^5$ M$_\odot$ (see \citealt{oh2016}). Performing this kind of simulation for large numbers of particles is notoriously challenging \citep{rodriguez2018}; however, other less computationally demanding techniques exist to model collisional systems, such as Monte Carlo methods \citep{sollima2021, vesperini2021}. A model involving these techniques and adopting the outputs of our simulations as initial conditions is already under development. Addressing the problem of massive stars in the SG is the next step on the road towards a more complete theoretical understanding of the origin of MPs within GCs. This topic demands attention and represents an interesting subject for future work.

\section{Summary and Conclusions} \label{conclusions.sec}

By means of hydrodynamical simulations, we have studied the capability of clusters of different masses, moving through a uniform gas distribution, to form a new stellar generation. We consider FG clusters of mass $10^5$ and $10^6~M_{\odot}$ moving at a constant velocity through a homogeneous gas with densities $10^{-24}$ and $10^{-23}$ g cm$^{-3}$. These simulations are designed to mimic the encounter of a young cluster with a fresh reservoir of gas, e.g.
during its orbit through the disc of a host galaxy. The adopted values for the ISM density are meant to be typical of normal star-forming galaxies, whereas the cluster masses are chosen to complement and extend the study started by C19, who considered the same setup but a more massive cluster of $10^7 M_{\odot}$. As in C19, we assume that the SG population does not contain any type II SN progenitor stars (i.e. no stars with mass $>8 M_{\odot}$), and the simulations are stopped 100 Myr after the birth of the cluster, when the first type Ia SNe are expected to explode and halt the formation of stars. In addition to star formation, our simulations include mass and energy return from FG AGB stars, radiative cooling, and an infall of homogeneous pristine gas with the same composition as the FG stars, entering from one side of the box at constant velocity. We are able to study how a few important quantities depend on cluster mass and to model how the chemical properties of SG stars (expressed by the He mass fraction $Y$) depend on the intrinsic and environmental quantities analysed in our work. Our main results can be summarised as follows.

\begin{enumerate} \item The capability to accrete mass is determined by the interplay between (a) AGB feedback, through which the FG stars continuously restore gas, (b) the mass of the cluster and (c) the ram pressure exerted by the external gas. Initially, in the high-mass cluster with diffuse infall (\MHL) the cool AGB ejecta collect into a cooling flow directed towards the centre of the cluster, where the first SG stars are born, showing the highest He abundances ($Y>0.33$). When the infall starts, the fact that a compact stellar core is already present is key to driving further accretion and the growth of the SG component. Most of the mass is accreted through an ``accretion column'', namely a dense and cold trail of gas which develops after the infalling gas has crossed the cluster centre.
Soon after the start of the infall, SG stars with He content intermediate between that of the FG stars and that of the first SG stars begin to appear. In general, stars formed with a lower He fraction are characterised by a more extended distribution than the extreme, very He-rich stars. The final $Y$ distribution function is bimodal and broad, with two peaks at $Y\approx 0.3$ and 0.36. \item In the low-mass cluster with diffuse infall (\MLL) the mass accretion is initially less efficient due to its shallower gravitational potential. In the earliest phases, the ram pressure prevents the cluster from retaining a significant amount of matter to ignite star formation, until a slight density enhancement in the central regions of the cluster is sufficient to increase the accretion rate and to ignite star formation. At 28.8 Myr a very compact stellar component, mostly composed of He-rich stars, occupies the cluster centre. In this case, the infalling gas is weakly perturbed by the presence of the cluster and, at variance with the previous case, an accretion column does not develop. The final $Y$ distribution is flatter and narrower than in the previous model. The two low-density models have final SG masses between $3.2 \times 10^3~M_{\odot}$ and $6.3 \times 10^4~M_{\odot}$, which correspond to SG-to-FG mass ratios of $\approx$0.03 and 0.06, respectively. \item The two high-density models are subject to a stronger ram pressure than the lower-density models. The larger ambient density in which the higher mass cluster moves leads it to accrete more mass and form more stars than in the case with more diffuse infall. This is because the cluster immediately starts to accrete gas from the infalling ISM through a dense and narrow accretion column, causing the formation of a massive and extended stellar component which enhances the gravitational field of the cluster.
When the mass return from AGB stars begins, their ejecta are easily retained by the cluster, collecting in the centre and fuelling the formation of a compact stellar component. The early dilution of helium causes the central SG component to be less He-rich than in the case with more diffuse infall. The massive high-density model is characterised by a broad final $Y$ distribution, ranging from $Y\approx 0.25$ to 0.33. \item On the other hand, the lower mass cluster forms SG stars less efficiently when subjected to a denser infalling gas. This is the model in which star formation starts at the latest time, due to its low FG stellar mass and strong ram pressure. At later times, as more mass is accumulated at the centre, a compact central SG stellar component with intermediate He enhancement ($Y\approx 0.3$) forms, along with a smaller, less concentrated population of stars with semi-pristine abundances. This is the model with the narrowest He distribution at the final time, ranging from $Y\approx 0.27$ to $Y \approx 0.31$. The two high-density models are characterised by a broader range of final SG masses, i.e. between $2.3 \times 10^3~M_{\odot}$ and $2 \times 10^5~M_{\odot}$, corresponding to SG-to-FG mass ratios of $\approx$0.02 and 0.2, respectively. \item All the models considered here develop a second stellar generation, with a variety of star formation histories which depend on both the cluster mass and the density of the infalling gas. Combined with the previous results presented in C19, our simulations allow us to analyse cluster-related scaling relations across a dynamical range of two orders of magnitude in mass (from $10^5 M_{\odot}$ to $10^7 M_{\odot}$).
Positive correlations are found between the half-mass radius of the SG stellar component and the FG cluster mass, the SG cluster mass and the velocity dispersion of FG stars and, most importantly, for the final SG-to-total mass ratio as a function of both FG and SG mass, with slopes that depend on the assumed density of the pristine gas. \end{enumerate}

In all simulations, SG stars dominate the central density profile of the cluster, whereas FG stars dominate the outskirts. The most He-enriched SG stars are also the most concentrated ones, in agreement with observational studies (e.g., \citealt{simioni2016} and references therein). We have compared the $N_{\rm SG}/N_{\rm t}$ ratio as a function of mass with the observational dataset of \citet{milone2020}, which includes a sample of Galactic and Magellanic GCs containing multiple populations. After performing approximate corrections for long-term dynamical evolution, good agreement is found between the observational dataset and the high-density simulations, but not for the low-density simulations. Therefore, in order to account for the observed correlation between $N_{\rm SG}/N_{\rm t}$ and mass, our results indicate that the SG needs to be formed in dense environments. For the first time, we have compared simulation results to the observed correlation between the maximum He enhancement $\DeltaYmax$ and cluster mass. Our low-density models show a flat relation between $\DeltaYmax$ and cluster mass, whereas a positive correlation is found in the case of the high-density models. Although caution is needed in the comparison of our results with observations of real GCs, this is a further confirmation that the pristine gas accreted by the cluster plays a vital role in the AGB scenario in explaining the properties of GCs. Even if the $\DeltaYmax$--mass relation predicted in the high-density models is shallower than the observed one, this can be regarded as a promising result.
A fair comparison with observational data will require modelling the long-term dynamical evolution, which will be the subject of future work. Our models also need to be improved with the inclusion of relevant physical mechanisms that are expected to affect mass accretion and star formation, such as radiative feedback. Our understanding of the formation of multiple stellar populations will be improved by simulating the formation of GCs in a more realistic environment, e.g. by means of high-resolution cosmological simulations \citep{kimm2016}.

\section*{Data Availability Statement}

The data that support the findings of this study are available from the corresponding author upon reasonable request.

\section*{Acknowledgements}

We would like to thank Holger Baumgardt, Antonino Milone and Pouria Khalaj for useful discussions and Léo Michel-Dansac for assistance with computational facilities. AY is grateful to the Centre de Recherche Astrophysique de Lyon (CRAL) for hospitality during her visit. FC acknowledges support from grant PRIN MIUR 2017 - 20173ML3WW 001, from the INAF main-stream (1.05.01.86.31) and from PRIN INAF 1.05.01.85.01. We acknowledge support and computational resources from the Common Computing Facility (CCF) of the LABEX Lyon Institute of Origins (ANR-10-LABX-66). \bibliographystyle{mnras}
\section{Diophantine approximation on submanifolds of $\R^n$.} A vector $x \in \R^n$ is called \emph{extremal} (or \emph{not very well approximable}) if for every $\varepsilon>0$ there is $c_\varepsilon>0$ such that $$|q \cdot x + p| > \frac{c_\varepsilon }{\|q\|^{n+ \varepsilon}} $$ for all $p \in \Z$ and all $q \in \Z^n \setminus\{0\}$. Here $q \cdot x$ denotes the standard scalar product in $\R^n$ and $\|q\|:=\sqrt{q \cdot q}$ the standard Euclidean norm. As is well known (by the Borel--Cantelli lemma), Lebesgue almost every $x \in \R^n$ is extremal. An important question in metric diophantine approximation is that of understanding the diophantine properties of points $x$ that are allowed to vary inside a fixed submanifold $\mathcal{M}$ of $\R^n$. The submanifold $\mathcal{M}$ is called \emph{extremal} if Lebesgue almost every point on $\mathcal{M}$ is extremal. A key result here is \bigskip \begin{theorem}[Kleinbock-Margulis, \cite{kleinbock-margulis}]\label{km} Let $U$ be an open connected subset of $\R^k$ and $\mathcal{M}:=\{\mathbf{f}(x); x \in U\}$, where $\mathbf{f}:U \to \R^n$ is a real analytic map. If $\mathcal{M}$ is not contained in a proper affine subspace of $\R^n$, then $\mathcal{M}$ is extremal. \end{theorem} \bigskip This answered a conjecture of Sprindzuk. The proof made use of homogeneous dynamics via the so-called \emph{Dani correspondence} between diophantine exponents and the rate of escape to infinity of a diagonal flow in the space of lattices. We will also utilize these tools. \section{Diophantine approximation on submanifolds of matrices.} It is natural to generalize this setting to that of submanifolds of matrices, namely submanifolds $\mathcal{M} \subset M_{m,n}(\R)$. The diophantine problem now becomes that of finding good integer approximations (by a vector $p \in \Z^m$) of the image $M \cdot q$ of an integer vector $q \in \Z^n$ under the linear endomorphism $M \in M_{m,n}(\R)$.
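Although the discussion here is purely theoretical, the approximation quality that extremality constrains can be probed numerically. The sketch below is our own illustration, not part of the note: for the point $x=(\sqrt{2},\sqrt{3})$ (an arbitrary choice), it verifies by brute force the Dirichlet-type guarantee that for every $Q$ there is a nonzero $q \in \Z^n$ with $\|q\|_\infty \leq Q$ and $|q \cdot x + p| \leq Q^{-n}$ for a suitable $p \in \Z$, the complementary upper bound to the extremality lower bound.

```python
import itertools
import math

# Illustrative check (our own, not from the note): Dirichlet's pigeonhole
# guarantee for the linear form q . x with x = (sqrt 2, sqrt 3), n = 2.
n = 2
x = (math.sqrt(2.0), math.sqrt(3.0))

def frac_dist(q):
    """Distance from q . x to the nearest integer, i.e. |q.x + p| at the optimal p."""
    s = sum(qi * xi for qi, xi in zip(q, x))
    return abs(s - round(s))

def dirichlet_best(Q):
    """Smallest |q . x + p| over nonzero integer q with ||q||_inf <= Q."""
    return min(frac_dist(q)
               for q in itertools.product(range(-Q, Q + 1), repeat=n)
               if any(q))

for Q in (5, 10, 20):
    # By pigeonhole, the minimum never exceeds Q^(-n).
    print(Q, dirichlet_best(Q), Q ** -n)
```

For an extremal $x$, these minima cannot decay much faster than this pigeonhole rate, by the very definition of extremality.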
The case $m=1$ corresponds to the classical case above (that of linear forms), while the dual case $n=1$ corresponds to simultaneous approximation. It turns out that it is more natural to study the slightly more general problem of approximating $0$ by the image $M \cdot q$ of an integer vector $q$. One can pass from the old problem to the new one by embedding $\mathcal{M}$ inside $M_{m,m+n}(\R)$, via the embedding ($I_m$ denotes the $m \times m$ identity matrix) \begin{eqnarray*} M_{m,n}(\R) &\to& M_{m,m+n}(\R)\\ M &\mapsto& (I_m | M) \end{eqnarray*} From now on, we will consider an arbitrary connected analytic submanifold $\mathcal{M} \subset M_{m,m+n}(\R)$, given as $\mathcal{M}:=\{\mathbf{f}(x); x \in U\}$, where $\mathbf{f}:U \to M_{m,m+n}(\R)$ is a real analytic map from a connected open subset $U$ in some $\R^k$. \bigskip \begin{definition}[Diophantine exponent] We say that a matrix $M \in M_{m,m+n}(\R)$ has diophantine exponent $\beta(M) \geq 0$, if $\beta(M)$ is the supremum of all numbers $\beta \geq 0$ for which there are infinitely many $q \in \Z^{m+n}$ such that $$ \|M \cdot q\| < \frac{1}{\|q\|^{\beta}}.$$ \end{definition} \section{The pigeonhole argument and the obstructions to extremality.}\label{pigeon} By the pigeonhole principle (Dirichlet's theorem), the lower bound $\beta(M) \geq \frac{n}{m}$ holds for all $M$. Indeed one compares the number of integer points in a box of side length $T$ in $\Z^{m+n}$ with the volume occupied by the image of this box under $M$ in $\R^m$. Furthermore, instead of considering the full box of side length $T$ in $\Z^{m+n}$, we could have restricted attention to the intersection of this box with a \emph{rational} subspace $W \leq \R^{m+n}$. The same argument would then have given the lower bound $$\beta(M) \geq \frac{\dim W}{\dim MW} -1.$$ Of course it may happen, given $M$, that for some exceptional subspace $W$, $\frac{\dim W}{\dim MW} -1 > \frac{n}{m} = \frac{n+m}{m}-1$.
And this may well also happen for all $M \in \mathcal{M}$, provided $\mathcal{M}$ lies in the following algebraic subvariety $\mathcal{P}_{W,r}$ of $M_{m,m+n}(\R)$ \begin{equation}\label{pencil}\mathcal{P}_{W,r} : = \{ M \in M_{m,m+n}(\R); \dim MW \leq r\}, \end{equation} where $W$ is a fixed rational subspace of $\R^{m+n}$ and $r$ a non-negative integer such that \begin{equation}\label{ob} \frac{\dim W}{r} -1 > \frac{n}{m}. \end{equation} By convention, we agree that $(\ref{ob})$ is satisfied if $r=0$. We will call the subvariety $\mathcal{P}_{W,r}$ of $M_{m,m+n}(\R)$ a \emph{pencil of endomorphisms} with parameters $W$ and $r$ (defined also for arbitrary, non rational, subspaces $W$). Note that when $m=1$ and $r=0$, this notion reduces to the notion of linear subspace (the orthogonal of $W$) of $\R^{n+1}$ (or affine subspace of $\R^n$). Hence asking that the submanifold $\mathcal{M}$ not be contained in any of those pencils $\mathcal{P}_{W,r}$ satisfying $(\ref{ob})$ is analogous, in the matrix context, to the condition of Theorem \ref{km} that $\mathcal{M}$ not be contained in an affine subspace. \bigskip \begin{theorem}[Extremal submanifolds]\label{extremal} Let $\mathcal{M} \subset M_{m,m+n}(\R)$ be a connected real analytic submanifold. Assume that $\mathcal{M}$ is not contained in any of the pencils $\mathcal{P}_{W,r}$, where $W,r$ range over all non-zero linear subspaces $W \leq \R^{m+n}$ and non-negative integers $r$ such that $(\ref{ob})$ holds. Then $\mathcal{M}$ is extremal, i.e. $\beta(M)=\frac{n}{m}$ for Lebesgue almost every $M \in \mathcal{M}$. \end{theorem} \bigskip This result is close in spirit to that of \cite{beresnevich-kleinbock-margulis}, which gave a sufficient geometric condition for strong extremality. The condition in Theorem \ref{extremal} is strictly weaker. It does not imply strong extremality, but only extremality, and with regard to extremality it is the optimal condition, as shown below in Theorems \ref{closure} and \ref{rational}.
\subsection{Non extremal submanifolds} A general result of Kleinbock \cite{kleinbock-almostallversus} implies that the diophantine exponent of a random point of $\mathcal{M}$ is always well-defined. Namely there is $\beta= \beta(\mathcal{M})\in [0,+\infty]$ such that for Lebesgue almost every $x \in U$, $$\beta(\mathbf{f}(x)) = \beta(\mathcal{M}).$$ Our first result is a general upper bound: \begin{theorem}[Upper bound on the exponent] \label{general} Let $\mathcal{M} \subset M_{m,m+n}(\R)$ be an analytic submanifold as defined above. Then $$\beta(\mathcal{M}) \leq \max \{ \frac{\dim W}{r} -1 ; \mathcal{P}_{W,r} \supset \mathcal{M} \}.$$ \end{theorem} Of course Theorem \ref{extremal} is an immediate consequence of this bound. \bigskip In \cite{kleinbock-extremal-gafa,kleinbock-anextension} Kleinbock showed that the diophantine exponent of an analytic submanifold of $\R^n$ depends only on its linear span. Our next result is a matrix analogue of this fact. Note that the diophantine exponent of a matrix $M$ depends only on its kernel $\ker M$. As $M$ varies in the submanifold $\mathcal{M} \subset M_{m,m+n}(\R)$, consider the set of these kernels as a subset of the Grassmannian and take its linear span in the Pl\"ucker embedding. Denote by $\mathcal{H(M)}$ the set of matrices $M$ whose kernel lies in this linear span. The set $\mathcal{H(M)}$ is an algebraic subvariety containing $\mathcal{M}$ and contained in every pencil containing $\mathcal{M}$. \bigskip \begin{theorem}[Optimality of the exponent] \label{closure} We have: $$\beta(\mathcal{M}) = \beta(\mathcal{H(M)}).$$ In particular $\beta(\mathcal{M})= \beta(\textnormal{Zar}(\mathcal{M}))$, where $\textnormal{Zar}(\mathcal{M})$ denotes the Zariski closure of $\mathcal{M}$, and $\beta(\mathcal{M}) = \beta(\Omega)$ for any open subset $\Omega \subset \mathcal{M}$. \end{theorem} \bigskip In particular $\mathcal{M}$ is extremal if and only if $\mathcal{H(M)}$ is extremal.
\section{Lower bounds on the exponent and rationality} Theorem \ref{general} gives a general upper bound on the exponent. The pigeonhole argument described at the beginning of \S \ref{pigeon} yields a lower bound on $\beta(\mathcal{M})$ in terms of the exponents of the \emph{rational obstructions} in which $\mathcal{M}$ is contained, i.e. the pencils $\mathcal{P}_{W,r}$ with $W$ a rational subspace of $\R^{m+n}$. Hence, for a general analytic submanifold $\mathcal{M} \subset M_{m,m+n}(\R)$, we only have the following general upper and lower bounds: \begin{equation}\label{uplow} \max_{\mathcal{P}_{W,r} \supset \mathcal{M} , W \textnormal{ rational } } \frac{\dim W -r}{r} \leq \beta(\mathcal{M}) \leq \max_{\mathcal{P}_{W,r} \supset \mathcal{M}} \frac{\dim W -r}{r}. \end{equation} For a submanifold $\mathcal{M}$ in general position the upper and lower bounds are typically distinct. However we will prove: \bigskip \begin{theorem}[Subvarieties defined over $\Q$] \label{rational} Assume that the Zariski-closure of the connected real analytic submanifold $\mathcal{M} \subset M_{m,m+n}(\R)$ is defined over $\Q$. Then the upper and lower bounds in $(\ref{uplow})$ coincide, and hence are equal to $\beta(\mathcal{M})$. In particular $\beta(\mathcal{M}) \in \Q$. \end{theorem} \bigskip The proof of Theorem \ref{rational} is based on the following combinatorial lemma, which is used here with $G=\textnormal{Gal}(\C|\Q)$ and will be used once again later on in the applications to nilpotent groups with $G=\GL_k$. Let $V$ be a finite dimensional vector space over a field and $\phi: \textnormal{Grass}(V) \to \N \cup\{0\}$ a function on the Grassmannian, which is non-decreasing (with respect to set inclusion) and \emph{submodular} in the sense that for every two subspaces $W_1$ and $W_2$ we have $$\phi(W_1 + W_2) + \phi(W_1 \cap W_2) \leq \phi(W_1) + \phi(W_2).$$ \bigskip \begin{lemma}[Submodularity lemma]\label{submodular} Let $G$ be a group acting by linear automorphisms on $V$.
If $\phi$ is invariant under $G$, then the following minimum is attained on a $G$-invariant subspace $$\min_{W \in \textnormal{Grass}(V) \setminus\{0\}} \frac{\phi(W)}{\dim W}.$$ \end{lemma} \section{Diophantine approximation on Lie groups} Inspired by work of Gamburd-Jakobson-Sarnak \cite{gamburd-jakobson-sarnak} and Bourgain-Gamburd \cite{bourgain-gamburd} on the spectral gap problem for finitely generated subgroups of compact Lie groups, we defined in a previous article \cite{abrs} the notion of diophantine subgroup of an arbitrary Lie group $G$. The definition is as follows. Any finite symmetric subset $S:=\{1,s_1^{\pm 1}, \ldots,s_k^{\pm 1}\}$ in $G$ generates a subgroup $\Gamma \leq G$. If for all $n \in \N$ $$ \inf\{ d(1,\gamma) ; \gamma \in S^n \setminus\{1\}\} > \frac{1}{|S^n|^\beta},$$ then we say that $(\Gamma,S)$ is \emph{$\beta$-diophantine}. And we say that $\Gamma$ is \emph{diophantine} if it is $\beta$-diophantine for some finite $\beta$. Here $d(\cdot, \cdot)$ denotes a fixed Riemannian metric on $G$ and $|S^n|$ is the cardinality of the $n$-th product set $S^n:=S \cdot \ldots \cdot S$. It is easily seen that being diophantine does not depend on the choice of $S$ or $d(\cdot, \cdot)$. And if $G$ is nilpotent this is also true of being $\beta$-diophantine. \bigskip The connected Lie group $G$ is said to be \emph{diophantine on $k$ letters} if for almost every choice of $k$ group elements $s_1,\ldots,s_k$ chosen independently with respect to the Haar measure, the subgroup they generate is diophantine. Finally one says that $G$ is \emph{diophantine} if it is diophantine on $k$ letters for every integer $k$. \bigskip While it is conjectured that all semisimple Lie groups are diophantine, there are examples of non-diophantine Lie groups. Indeed a construction was given in \cite{abrs} for each integer $k \in \N$ of a connected Lie group which is diophantine on $k$ letters, but not on $k+1$ letters. 
Our examples are certain nilpotent Lie groups without a rational structure. We showed in that paper that the first examples arise in nilpotency class $6$ and higher. In fact every nilpotent Lie group $G$ with nilpotency class at most $5$, or derived length at most $2$ (i.e. metabelian), is diophantine. \section{Diophantine exponent of nilpotent Lie groups} If $G$ is nilpotent, $|S^n|$ grows like $n^{\alpha_S}$, where $\alpha_S$ is an integer given by the Bass-Guivarc'h formula. If the $k$ elements $s_i$'s forming $S$ are chosen at random with respect to Haar measure, then $\alpha_S$ is almost surely a fixed integer, which is a polynomial in $k$ (see \cite{abrs}). \bigskip \begin{proposition}[Zero-one law]\label{zerone} Let $G$ be a simply connected nilpotent Lie group, and pick an integer $k \geq \dim G/[G,G]$. There is a number $\beta_k \in [0,+\infty]$ such that if $\beta> \beta_k$ (resp. $\beta< \beta_k$), then with respect to Haar measure almost every (resp. almost no) $k$-tuple in $G$ generates a $\beta$-diophantine subgroup. \end{proposition} \bigskip The proof of this is based on the ergodicity of the group of rational automorphisms of the free Lie algebra on $k$ letters acting on $(\textnormal{Lie}(G))^k$. When the nilpotent Lie group $G$ is rational (i.e. admits a $\Q$-structure) the exponent $\beta_k$ can be computed explicitly using Theorem \ref{rational}. We have: \bigskip \begin{theorem}[A formula for the exponent]\label{formula} Assume that $G$ is a rational simply connected nilpotent Lie group. There is a rational function $F \in \Q(X)$ such that for all large enough $k$, $$\beta_{k} = F(k).$$ In particular $\beta_k \in \Q$. When $k \to \infty$, $\beta_k$ converges to a limit $\beta_{\infty}$ with $0< \beta_{\infty} \leq 1$. \end{theorem} \bigskip For example, if $G$ is the $(2m+1)$-dimensional Heisenberg group and $k \geq 2m$, then $\beta_k=1-\frac{1}{k}-\frac{2}{k^2}$.
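As a quick arithmetic sanity check on this formula (the code is ours, not part of the paper), one can evaluate it with exact rational arithmetic and confirm that $\beta_k$ is rational, that it is the $\dim[G,G]=1$ case of the general $2$-step formula stated below, and that $\beta_k \to 1$ as $k \to \infty$, in agreement with $0 < \beta_\infty \leq 1$ in Theorem \ref{formula}.

```python
from fractions import Fraction

# Exact evaluation (ours) of the exponents quoted in the text.
def beta_heisenberg(k):
    # beta_k = 1 - 1/k - 2/k^2 for the (2m+1)-dimensional Heisenberg group.
    k = Fraction(k)
    return 1 - 1 / k - 2 / k ** 2

def beta_two_step(k, dim_commutator):
    # beta_k = (1 - 1/k) / dim[G,G] - 2/k^2 for a 2-step nilpotent group.
    k = Fraction(k)
    return (1 - 1 / k) / dim_commutator - 2 / k ** 2

# The Heisenberg formula is the dim[G,G] = 1 case of the 2-step formula,
for k in range(2, 50):
    assert beta_heisenberg(k) == beta_two_step(k, 1)

# and each beta_k is an explicit rational number tending to 1.
print(beta_heisenberg(10))   # 22/25
```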
More generally if $G$ is any $2$-step nilpotent group not necessarily rational, then $\beta_k = (1-\frac{1}{k})\frac{1}{\dim[G,G]} - \frac{2}{k^2}$ for $k \geq \dim G/[G,G]$. \bigskip We also obtain closed formulas for $\beta_k$ in the case when $G$ is the group of $n \times n$ unipotent upper-triangular matrices, e.g. if $n=4$ and $k \geq 3$, then $\beta_k= \frac{k^3-k-3}{k^3+k^2-k}$. And in the case when $G$ is an $s$-step free nilpotent group on $m$ generators, e.g. if $m=2$ and $s=3$, then $\beta_k=\frac{k^3-k-6}{2(k^3+k^2-k)}$. These formulas involve the dimensions of the maximal (for the natural partial order on Young diagrams) irreducible $\GL_k$-submodule of the free Lie algebra on $k$ generators modulo the ideals of laws of $G$. \bigskip The reduction to Theorem \ref{rational} proceeds as follows. Since $k$ is large, one can restrict attention to the last term $G^{(s)}$ in the central descending series. Given a $\Z$-basis $e_1,\ldots,e_{m+n}$ of the $s$-homogeneous part of the relatively free Lie algebra of $G$ on $k$ generators $\mathcal{F}_{k,G}$ (see \cite{abrs}), the submanifold $\mathcal{M}_{k,G}$ of matrices to be considered is the image of $(\textnormal{Lie}(G))^k$ under the (polynomial) map sending ${\mathsf x} \in (\textnormal{Lie}(G))^k$ to the $(n+m) \times m$ matrix whose columns are the $e_i({\mathsf x})$. Here $m=\dim G^{(s)}$. Computing the exponent amounts to first identifying the pencils $\mathcal{P}_{W,r}$ in which $\mathcal{M}_{k,G}$ sits and then computing the maximum of the ratios $\frac{\dim W}{r}$. Using the submodularity lemma (Lemma \ref{submodular}) applied to the $\GL_k$-action by linear substitutions, we may restrict attention to those pencils corresponding to subspaces $W$ of $\mathcal{F}_{k,G}$ that are fully invariant ideals. Determining those ideals is usually possible, depending on $G$, thanks to the known representation theory of the free Lie algebra viewed as a $\GL_k$-module. \bibliographystyle{alpha}
\section{Introduction} \IEEEPARstart{O}{rthogonal} Frequency Division Multiplexing (OFDM) systems are widely used, and their signals are generated by the Inverse Fast Fourier Transform (IFFT) \cite{ofdmcdma}. One advantage of OFDM systems is their robustness to multi-path delay. Since OFDM systems are implemented with the IFFT, the effects of multi-path delay in frequency-selective channels can be removed with guard interval techniques and zero padding techniques \cite{zp}. Since each subcarrier then experiences approximately flat fading, Multiple-Input Multiple-Output systems combined with OFDM have also been investigated \cite{mimo}. While OFDM systems have these advantages, there are two main problems. One is that OFDM signals have relatively large side-lobes \cite{filterbank}; this is because OFDM signals consist of rectangularly windowed sine waves. The other is that OFDM signals have a large Peak-to-Average Power Ratio (PAPR), which is the ratio of the maximum instantaneous power of the RF signal to its average power. For low input power, the output power of an amplifier grows approximately linearly with the input power. However, for input signals with large power, the growth of the output power is no longer linear, and in-band distortion and out-of-band distortion are caused \cite{concept}. The PAPR of OFDM signals with independently chosen symbols has been investigated in \cite{ochiai} \cite{extreme}, and the PAPR with dependent symbols, for example Bose-Chaudhuri-Hocquenghem (BCH) codewords, has been investigated in \cite{general}. Further, the performance of OFDM systems with non-linear amplifiers has been investigated in \cite{clip} \cite{costa}. To reduce PAPR, many methods have been proposed and explored, for example, the selected mapping method \cite{slm}, the balancing method \cite{balance}, the active constellation extension method \cite{ace}, the tone injection method \cite{tone}, the iterative filtering method \cite{iter} and the compounding method \cite{airy}.
These methods are summarized in \cite{overview} \cite{litsyn}. In some of these methods, it is necessary to transmit some parameters as side information, since the receiver has to know them to recover the symbols. One of the methods to reduce PAPR is the Partial Transmit Sequence (PTS) technique \cite{pts}-\cite{chicago}. A PTS technique multiplies disjoint blocks of symbols by phase rotation factors chosen to reduce PAPR; the vector of rotation factors must therefore be transmitted to the receiver as side information. On the other hand, with PTS techniques, OFDM signals are not distorted since only the symbol phases are modified, and hence the side-lobes stay unchanged. A significant task with PTS techniques is to reduce the amount of calculation required to find a good rotation vector. In \cite{pts}, the rotation vector is chosen as the one achieving the lowest PAPR among all candidates, so the amount of calculation grows exponentially as the length of the vector increases. To reduce such calculations, some methods exist. In \cite{neighborhood}, a neighborhood search algorithm has been proposed, with which a locally optimal solution can be obtained. Another method is the phase random method \cite{phaserandom}, which generates random vectors whose phases are uniformly distributed over the candidate set. In this paper, we propose a method to search for a rotation vector achieving low PAPR. The main point of our method is to obtain the vector from a set of random vectors generated from a Gaussian distribution; in this respect, our method is similar to the phase random method \cite{phaserandom}. We derive an optimization problem for reducing PAPR, obtain a solution of the relaxed problem, regard this solution as a covariance matrix, and thereby determine the Gaussian distribution. This paper is organized as follows: Section II gives the definitions of PAPR and Peak-to-Mean Envelope Power Ratio (PMEPR).
In the literature, these two notions are sometimes assumed to coincide; however, they are different, since PAPR is defined for RF signals and PMEPR for baseband signals. In this section, we clarify the properties of the signals considered in this paper. Section III describes the partial transmit sequence technique and an optimization problem. It is not straightforward to solve this optimization problem since its feasible region is discrete. Therefore, in Section IV, we show a semidefinite relaxation technique to solve it, with which approximate solutions can be obtained. In Section V, we consider random vectors generated from the Gaussian distribution whose covariance matrix is the solution of the relaxed problem. In Section VI, we show the relation between our randomization method and the phase random method, which is a conventional method. Since the problem stated in Sections IV and V has a large number of constraints, in Section VII we propose another optimization problem that reduces an upper bound on the PAPR with fewer constraints. Finally, we compare the PAPR of our method with that of the existing phase random method. \section{OFDM System Model and PAPR} In this section, we fix the model and the quantities used throughout this paper. A complex baseband OFDM signal is written as \cite{ofdmcdma} \begin{equation} s(t) = \sum_{k=1}^{K} A_k \exp\left(2 \pi j \frac{k-1}{T}t\right), \hspace{2mm} 0 \leq t < T, \label{eq:ofdm} \end{equation} where $A_k$ is a transmitted symbol, $K$ is the number of symbols, $j$ is the imaginary unit and $T$ is the symbol duration. It is known that OFDM signals are generated by the Inverse Fast Fourier Transform (IFFT) \cite{ofdmcdma}. As seen in Eq. (\ref{eq:ofdm}), our OFDM signals have no cyclic prefixes.
With the cyclic prefix technique, the PAPR of OFDM signals is preserved since the cyclic prefix does not introduce any new peaks \cite{sharif}. Therefore, we consider OFDM signals written as in Eq. (\ref{eq:ofdm}). A Radio Frequency (RF) OFDM signal $\zeta(t)$ is written with Eq. (\ref{eq:ofdm}) as \begin{equation} \begin{split} \zeta(t) &= \operatorname{Re}\{s(t)\exp(2 \pi j f_c t)\}\\ &= \operatorname{Re}\left\{\sum_{k=1}^{K} A_k \exp\left(2 \pi j \left(\frac{k-1}{T} + f_c \right)t\right)\right\}, \end{split} \end{equation} where $\operatorname{Re}\{z\}$ is the real part of $z$, and $f_c$ is a carrier frequency. With RF signals, PAPR is defined as \cite{compute} \cite{exist} \begin{equation} \operatorname{PAPR} = \max_{0 \leq t < T}\frac{\left|\operatorname{Re}\left\{\displaystyle\sum_{k=1}^{K} A_k \exp\left(2 \pi j \left(\frac{k-1}{T} + f_c \right)t\right)\right\}\right|^2}{P_{\operatorname{av}}}, \label{eq:PAPR} \end{equation} where $P_{\operatorname{av}}$ is the average power of the signal, $P_{\operatorname{av}} = \sum_{k=1}^{K}\operatorname{E}\{|A_k|^2\}$, and $\operatorname{E}\{X\}$ is the average of $X$. Similarly, with baseband signals, PMEPR is defined as \cite{exist} \cite{compute} \begin{equation} \operatorname{PMEPR} = \max_{0 \leq t < T}\frac{\left|\displaystyle\sum_{k=1}^{K} A_k \exp\left(2 \pi j \frac{k-1}{T} t\right) \right|^2}{P_{\operatorname{av}}}. \label{eq:PMEPR} \end{equation} In the literature, PAPR and PMEPR have often been evaluated as probabilities, since they depend on the symbols $A_k$, which can be regarded as random variables \cite{ochiai} \cite{general}. Obviously, PAPR does not always coincide with PMEPR. Further, from Eqs. (\ref{eq:PAPR}) and (\ref{eq:PMEPR}), PAPR does not exceed PMEPR.
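To make the definition of Eq. (\ref{eq:PMEPR}) concrete, the following sketch (our own illustration; all parameter choices are ours) computes the PMEPR of a baseband OFDM signal from samples, with $P_{\operatorname{av}} = \sum_k |A_k|^2$ for deterministic symbols. All-ones symbols add coherently at $t = 0$, the worst case, giving $\operatorname{PMEPR} = K$.

```python
import cmath

# Sketch (ours): PMEPR of Eq. (eq:PMEPR) evaluated on J*K samples of s(t).
def pmepr(symbols, J=4):
    K = len(symbols)
    p_av = sum(abs(a) ** 2 for a in symbols)   # P_av for deterministic symbols
    peak = 0.0
    for n in range(J * K):
        t = n / (J * K)                        # t/T; the duration T cancels
        s = sum(a * cmath.exp(2j * cmath.pi * k * t)
                for k, a in enumerate(symbols))
        peak = max(peak, abs(s) ** 2)
    return peak / p_av

print(pmepr([1] * 8))                          # 8.0: coherent worst case, K = 8
print(pmepr([1, -1, 1, 1, -1, 1, 1, -1]))      # strictly smaller
```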
In \cite{sharif}, under some conditions described below, it has been proven that the following relation holds: \begin{equation} \left( 1 - \frac{\pi^2K^2}{2r^2}\right)\cdot \operatorname{PMEPR} \leq \operatorname{PAPR} \leq \operatorname{PMEPR}, \label{eq:rel1} \end{equation} where $r$ is an integer such that $f_c = r/T$. The conditions under which Eq. (\ref{eq:rel1}) holds are $K \ll r$ and $\exp(2 \pi j K/r) \approx 1$. In addition to these, another relation has been shown in \cite{litsyn}. Equation (\ref{eq:rel1}) implies that PMEPR approximately equals PAPR for sufficiently large $f_c$. For this reason, PMEPR is often evaluated instead of PAPR \cite{ochiai}. In what follows, we assume that the carrier frequency $f_c$ is sufficiently large, that is, we consider baseband OFDM signals instead of RF signals. \section{Partial Transmit Sequence Technique} For OFDM systems, the Partial Transmit Sequence (PTS) technique has been proposed to reduce PAPR. In this section, we describe the model and the details of PTS techniques. A PTS technique requires a rotation vector to reduce PAPR, and its disadvantage is that a large amount of calculation is necessary in some situations. The details of PTS techniques are described in \cite{litsyn} \cite{pts} \cite{pts2}. The symbols and their index sets are given as follows. We assume that the symbols $A_k$ are given and that the number of symbols is $K$. Further, the index set $\Lambda=\{1,2,\ldots,K\}$ corresponds to the set of unordered symbols $\{A_1,A_2,\ldots,A_{K}\}$. To apply a PTS technique, we divide the index set $\Lambda$ into $P$ disjoint subsets $\Lambda_1,\ldots,\Lambda_{P}$, that is, \begin{equation} \Lambda = \Lambda_1 \cup \cdots \cup \Lambda_{P}, \hspace{2mm}\Lambda_k \cap \Lambda_m = \emptyset\hspace{2mm}\mbox{if}\hspace{2mm}k \neq m \end{equation} for $k,m = 1,2,\ldots,P$. There are some discussions about how to divide the index set.
We refer the reader to \cite{mseq} \cite{radix} \cite{random}. To express the instantaneous power, we define some quantities as follows. For the subsets of symbols, we introduce a rotation vector $\mathbf{b} = (b_1,b_2,\ldots,b_{P})^\top$, where $\mathbf{x}^\top$ is the transpose of $\mathbf{x}$. The vector $\mathbf{b}$ is chosen such that $b_p = \exp(j \theta_p)$ for $p = 1,2,\ldots,P$, where $\theta_p \in [0,2\pi)$. This vector $\mathbf{b}$ plays a central role throughout this paper. For convenience, let us define the quantities \begin{equation} A_k^{(p)} = \left\{ \begin{array}{cc} A_k & k \in \Lambda_p\\ 0 & \mbox{otherwise} \end{array} \right. \end{equation} for $k = 1,2,\ldots,K$ and $p = 1,2,\ldots,P$. With $A_k^{(p)}$, the modified baseband OFDM signal $\hat{s}(t)$ is written as \begin{equation} \hat{s}(t) = \sum_{p=1}^{P} \sum_{k=1}^{K} A_k^{(p)}b_p \exp\left(2 \pi j \frac{k-1}{T}t \right). \label{eq:pts} \end{equation} Note that the average power of the modified signal is equal to that of the original OFDM signal since $|b_p|=1$. With a matrix and vectors, the above equation is rewritten as \begin{equation} \hat{s}(t) = \mathbf{v}^\top_t A \mathbf{b}, \end{equation} where \begin{equation} \begin{split} \mathbf{v}_t &= \left(\begin{array}{cccc} v_{1,t} & v_{2,t} & \cdots & v_{K,t}\\ \end{array} \right)^\top,\\ A &= \left( \begin{array}{cccc} A_1^{(1)} & A_1^{(2)} & \cdots & A_1^{(P)}\\ A_2^{(1)} & A_2^{(2)} & \cdots & A_2^{(P)}\\ \vdots & \vdots & \ddots & \vdots \\ A_{K}^{(1)} & A_{K}^{(2)} & \cdots & A_{K}^{(P)} \end{array} \right)\\ \end{split} \end{equation} and \begin{equation} v_{k,t} = \exp\left(2 \pi j \frac{k-1}{T}t \right). \end{equation} With these quantities, the instantaneous power $|\hat{s}(t)|^2$ is written as \begin{equation} |\hat{s}(t)|^2 = \mathbf{b}^*A^*(\mathbf{v}_t^*)^\top\mathbf{v}^\top_t A \mathbf{b}, \label{eq:inst} \end{equation} where $\mathbf{z}^*$ is the complex conjugate transpose of $\mathbf{z}$.
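The matrix form above can be checked numerically. The following sketch (toy sizes, partition, and symbol values are our own choices) verifies that the double sum of Eq. (\ref{eq:pts}) and the quadratic form of Eq. (\ref{eq:inst}) give the same instantaneous power.

```python
import cmath

# Toy check (ours) that |s_hat(t)|^2 = b^* C_t b, with C_t = A^* v_t^* v_t^T A.
K, P = 4, 2
subsets = [{1, 2}, {3, 4}]                  # Lambda_1, Lambda_2 (indices 1..K)
A_sym = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]  # A_1 .. A_K (QPSK, for instance)
b = [1, cmath.exp(1j * 2.3)]                # rotation vector, |b_p| = 1

# A is K x P with A[k][p] = A_k if k+1 is in Lambda_{p+1}, else 0.
A = [[A_sym[k] if (k + 1) in subsets[p] else 0 for p in range(P)]
     for k in range(K)]

def s_hat(t):                               # direct double sum of Eq. (eq:pts), T = 1
    return sum(A[k][p] * b[p] * cmath.exp(2j * cmath.pi * k * t)
               for k in range(K) for p in range(P))

def power_quadratic(t):                     # b^* C_t b, computed as |v_t^T A b|^2
    v = [cmath.exp(2j * cmath.pi * k * t) for k in range(K)]
    w = [sum(v[k] * A[k][p] for k in range(K)) for p in range(P)]  # v_t^T A
    return abs(sum(w[p] * b[p] for p in range(P))) ** 2

for t in (0.0, 0.13, 0.5, 0.77):
    assert abs(abs(s_hat(t)) ** 2 - power_quadratic(t)) < 1e-9
print("matrix form matches the double sum")
```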
We denote by $C_t$ the matrix $A^*(\mathbf{v}_t^*)^\top\mathbf{v}^\top_t A$. Note that $C_t$ is a positive semidefinite matrix since it is a Gram matrix. Each value of $b_p$ is chosen so as to achieve the lowest PAPR. At the receiver side, it is necessary to know the explicit values of $\mathbf{b}$ in order to recover the symbols. Note that the Signal-to-Noise Ratio (SNR) is preserved if the receiver knows $\mathbf{b}$. Hence, the vector $\mathbf{b}$ has to be transmitted as side information. To reduce the information content, the value of $b_p$ is usually restricted to \begin{equation} b_p \in \left\{1,\exp\left(2 \pi j \frac{1}{L}\right),\ldots,\exp\left(2 \pi j \frac{L-1}{L}\right)\right\}, \end{equation} where $L$ is a positive integer. We denote by $\Omega_L$ the set $\left\{1,\exp\left(2 \pi j \frac{1}{L}\right),\ldots,\exp\left(2 \pi j \frac{L-1}{L}\right)\right\}$. From the above definition, the vector $\mathbf{b} \in \Omega_L^P$, where $\Omega_L^P$ is the set of $P$-dimensional vectors whose elements are in $\Omega_L$. The information content of $\mathbf{b}$ is then $(P-1) \log_2 L$ [bits], since we can set $b_1 = 1$ without loss of generality. It is obvious that the number of elements in the set $\Omega_L^{P-1}$ is $L^{P-1}$. Let $\mathbf{b}^\star$ be the vector which realizes the minimum PAPR. Then, $\mathbf{b}^\star$ is the global solution of the optimization problem \begin{equation} \begin{split} (Q_L) \hspace{3mm} & \hspace{3mm}\min \max_{0 \leq t < T} |\hat{s}(t)|^2\\ \mbox{subject to}\hspace{3mm} & b_p \in \Omega_L \hspace{3mm} (p=1,2,\ldots,P). \end{split} \end{equation} Our aim is to find the vector $\mathbf{b}^\star$. There are two main obstacles to solving the problem $(Q_L)$. One obstacle is that the time $t$ is continuous. In \cite{sharif}, with baseband OFDM signals $s(t)$ defined in Eq.
(\ref{eq:ofdm}), it has been shown that the following relation holds between continuous signals and sampled signals \begin{equation} \max_{0 \leq t < T}|s(t)| < \sqrt{\frac{J^2}{J^2 - \pi^2/2}}\max_{0 \leq n < JK} \left|s\left(\frac{nT}{JK}\right)\right|, \label{eq:sample} \end{equation} where $J$ is an integer satisfying $J > \pi/\sqrt{2}$. Equation (\ref{eq:sample}) implies that PMEPR can be estimated precisely from signals sampled with a sufficiently large oversampling factor. For the maxima of continuous signals and sampled signals, other relations have been shown in \cite{local} \cite{tellambura}. The integer $J$ is often called the oversampling factor \cite{clip}, and $J \geq 4$ is often chosen. How to choose the oversampling factor $J$ has been discussed in \cite{sharif}. With sampled signals, the problem $(Q_L)$ is rewritten as \begin{equation} \begin{split} (\hat{Q}_L) \hspace{3mm} & \hspace{3mm} \min \lambda \\ \mbox{subject to}\hspace{3mm} & \mathbf{b}^* C_{nT/(JK)} \mathbf{b} \leq \lambda \hspace{2mm} (n = 0,1,\ldots,JK-1)\\ & b_p \in \Omega_L \hspace{3mm} (p=1,2,\ldots,P)\\ & \lambda \in \mathbb{R}. \end{split} \end{equation} Note that the variables in the problem $(\hat{Q}_L)$ are $\mathbf{b}$ and $\lambda$. The other obstacle is that the feasible region $\Omega_L$ is discrete. In \cite{pts}, a brute-force search has been used to find the global solution $\mathbf{b}^\star$. With this method, we have to find the vector $\mathbf{b}^\star$ among $L^{P-1}$ candidates, and the amount of calculation grows exponentially as $P$ increases. In \cite{neighborhood}, a neighborhood search algorithm has been proposed, with which a locally optimal solution can be obtained. However, it is only known that its amount of calculation is proportional to $_{P-1}C_r \cdot L^r$, where $r$ is an integer parameter expressing the distance of a neighborhood and $_aC_b$ is a binomial coefficient.
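The brute-force search over the $L^{P-1}$ candidates can be sketched as follows for $L=2$ (all sizes and symbol values here are toy choices of ours): with $b_1 = 1$ fixed, all $2^{P-1}$ sign patterns are scanned, and the sampled PMEPR from the constraints of $(\hat{Q}_L)$ serves as the objective.

```python
import cmath
import itertools

# Toy brute-force PTS search (ours) for L = 2, with oversampling factor J.
K, P, J = 8, 4, 4
subsets = [{1, 2}, {3, 4}, {5, 6}, {7, 8}]    # Lambda_1 .. Lambda_P
A_sym = [1] * K                               # all-ones BPSK: coherent worst case

def sampled_pmepr(b):
    peak = 0.0
    for n in range(J * K):
        t = n / (J * K)                       # t/T on the sampling grid
        s = sum(A_sym[k] * b[p] * cmath.exp(2j * cmath.pi * k * t)
                for p in range(P) for k in range(K) if (k + 1) in subsets[p])
        peak = max(peak, abs(s) ** 2)
    return peak / K                           # P_av = K for unit-power symbols

# Scan all L^(P-1) = 8 candidates with b_1 = 1 fixed.
candidates = [(1,) + tail for tail in itertools.product((1, -1), repeat=P - 1)]
best_b = min(candidates, key=sampled_pmepr)
print(sampled_pmepr((1, 1, 1, 1)))            # 8.0: without PTS
print(best_b, sampled_pmepr(best_b))          # strictly smaller after PTS
```

Even at these toy sizes the exponential growth is visible: doubling $P$ squares the number of candidates, which motivates the relaxation developed next.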
Another existing method is the phase random method \cite{phaserandom}, which generates random vectors whose phases are uniformly distributed over the region $\Omega^P_L$ and obtains a solution from them. \section{Semidefinite Relaxation} Since it is not straightforward to obtain the global solution, we propose an efficient method to obtain a solution which achieves low PAPR. Optimization problems such as the problem $(\hat{Q}_L)$ appear in MIMO detection \cite{detect}. Thus, methods that have already been developed there can be applied to our problem. One such method uses a semidefinite relaxation technique \cite{sdr}. In this section, we obtain a solution with such a semidefinite relaxation technique. We apply semidefinite relaxation techniques to the problem $(\hat{Q}_L)$. Our main aim is to replace the variable $\mathbf{b}$ with a positive semidefinite matrix $X$. The way to solve the problem $(\hat{Q}_L)$ depends on $\Omega_L$. Therefore, we consider the problem separately for various cases of $L$. \subsection{Optimization Problem for $L=2$} First, we consider the problem $(\hat{Q}_L)$ for $L=2$, denoted $(\hat{Q}_2)$. Then, $\Omega_2 = \{-1,1\}$. Note that $b^2=1$ for $b \in \Omega_2$. If we define the matrix $X = \mathbf{b}\mathbf{b}^\top$, then $X$ is a positive semidefinite matrix whose rank is 1, and the problem $(\hat{Q}_2)$ is rewritten as \begin{equation} \begin{split} (\hat{Q}_2) \hspace{3mm} & \hspace{3mm} \min \lambda \\ \mbox{subject to}\hspace{3mm} & \operatorname{Tr}(C_{nT/(JK)}X) \leq \lambda \hspace{2mm} (n = 0,1,\ldots,JK-1)\\ & X_{p,p} = 1 \hspace{3mm} (p=1,2,\ldots,P)\\ & \operatorname{rank}(X) = 1\\ & X \succcurlyeq 0\\ & X \in \mathbb{S}_P, \qquad \lambda \in \mathbb{R}, \end{split} \end{equation} where $\operatorname{Tr}(X)$ is the trace of $X$, $\operatorname{rank}(X)$ is the rank of $X$, $X \succcurlyeq 0$ indicates that $X$ is a positive semidefinite matrix and $\mathbb{S}_P$ is the set of symmetric matrices of dimension $P$.
Due to the constraint $\operatorname{rank}(X) = 1$, the problem $(\hat{Q}_2)$ is not convex. Note that the set of positive semidefinite matrices is convex \cite{boyd}. By dropping the rank constraint, we obtain the relaxed optimization problem \begin{equation} \begin{split} (\hat{Q}'_2) \hspace{3mm} & \hspace{3mm} \min \lambda \\ \mbox{subject to}\hspace{3mm} & \operatorname{Tr}(C_{nT/(JK)}X) \leq \lambda \hspace{2mm} (n = 0,1,\ldots,JK-1)\\ & X_{p,p} = 1 \hspace{3mm} (p=1,2,\ldots,P)\\ & X \succcurlyeq 0\\ & X \in \mathbb{S}_P, \qquad \lambda \in \mathbb{R}. \end{split} \end{equation} The problem $(\hat{Q}'_2)$ can be readily solved since it is convex. Let $X^\star_2$ be the global solution of the problem $(\hat{Q}'_2)$. If the rank of $X^\star_2$ is 1, then we obtain the global solution of the problem $(\hat{Q}_2)$, denoted by $\mathbf{b}^\star_2$. However, the rank of $X^\star_2$ is not always 1. To deal with this, we derive an approximate solution from $X^\star_2$. In general, the solution $X^\star_2$ is decomposed as \begin{equation} X^\star_2 = \sum_{i=1}^{r_2} \lambda_i \mathbf{q}_i \mathbf{q}_i^*, \end{equation} where $r_2 = \operatorname{rank}(X^\star_2)$, $\lambda_i$ is the $i$-th eigenvalue of $X^\star_2$, $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_{r_2}$ and $\mathbf{q}_i$ is the corresponding eigenvector. Then, the best rank-1 approximation in the least-squares sense is $\hat{X}^\star_2 = \lambda_1 \mathbf{q}_1 \mathbf{q}_1^*$. From this approximate solution, we obtain $\sqrt{\lambda_1}\mathbf{q}_1$ as a candidate solution of the original problem $(\hat{Q}_2)$. However, this candidate is not always in the feasible region of the problem $(\hat{Q}_2)$. To obtain a feasible approximate solution of the problem $(\hat{Q}_2)$, we project the candidate onto the feasible region.
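The rank-1 approximation and the subsequent projection can be sketched as follows. This Python snippet is our own illustration (the SDP solve producing $X^\star_2$ is not shown); it uses the element-wise sign projection for $L=2$.

```python
import numpy as np

def round_rank1_binary(X):
    """Approximate a relaxed SDP solution X by its principal eigenvector
    and project the result onto the feasible region {-1, +1}^P."""
    eigvals, eigvecs = np.linalg.eigh(X)   # eigenvalues in ascending order
    q1 = eigvecs[:, -1]                    # eigenvector of the largest eigenvalue
    return np.where(q1 >= 0, 1.0, -1.0)    # sgn(.), with sgn(0) = +1
```

If $X^\star_2$ happens to be exactly rank 1, i.e. $X^\star_2=\mathbf{b}\mathbf{b}^\top$, the projection recovers $\mathbf{b}$ up to a global sign, which leaves the PAPR unchanged.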
We arrive at the $p$-th element of the approximate solution of the problem $(\hat{Q}_2)$ as \begin{equation} \hat{b}_{2,p} = \operatorname{sgn}(q_{1,p}), \end{equation} where $q_{1,p}$ is the $p$-th element of $\mathbf{q}_1$ and \begin{equation} \operatorname{sgn}(x) = \left\{ \begin{array}{c c} 1 & x \geq 0\\ -1 & x < 0 \end{array} \right. . \end{equation} \subsection{Optimization Problem for $L=4$} Similarly, for $L=4$, we obtain the approximate solution of the problem $(\hat{Q}_4)$. For $L=4$, the set $\Omega_4$ is written as $\Omega_4 = \{1, \exp(j \pi/2),-1,\exp(j 3\pi/2)\}$. To obtain the relaxed problem, we rewrite the problem $(\hat{Q}_4)$ as follows. First, let us define the set \begin{equation} \hat{\Omega}_4 = \{+1+j,+1-j,-1+j,-1-j\}, \end{equation} which can be expressed as \begin{equation} \hat{\Omega}_4 = \{\sqrt{2}\exp(j \pi/4) \cdot a \mid a \in \Omega_4\}. \end{equation} Note that $\operatorname{Re}\{b\}^2=\operatorname{Im}\{b\}^2=1$ for $b \in \hat{\Omega}_4$. Second, since the set $\hat{\Omega}_4$ consists of complex elements, we rewrite the set $\hat{\Omega}_4$ in terms of real parts and imaginary parts. We introduce the following transformations for $\mathbf{z} \in \mathbb{C}^n$ and $Z \in \mathbb{H}_n$, where $\mathbb{H}_n$ is the set of Hermitian matrices of dimension $n$ \cite{maxcut}, \begin{equation*} \mathcal{T}(\mathbf{z}) = \left( \begin{array}{c} \operatorname{Re}\{\mathbf{z}\}\\ \operatorname{Im}\{\mathbf{z}\} \end{array} \right),\hspace{3mm}\mbox{and}\hspace{3mm} \mathcal{T}(Z) = \left( \begin{array}{cc} \operatorname{Re}\{Z\} & -\operatorname{Im}\{Z\}\\ \operatorname{Im}\{Z\} & \operatorname{Re}\{Z\} \end{array} \right). \end{equation*} Note that $\mathcal{T}(X) \in \mathbb{S}_{2n}$ if $X \in \mathbb{H}_n$ \cite{goemans}. Finally, with the above operations, we arrive at the relaxed problem for $L=4$.
\begin{equation} \begin{split} (\hat{Q}'_4) \hspace{3mm} & \hspace{3mm} \min \lambda \\ \mbox{subject to}\hspace{3mm} & \operatorname{Tr}(\hat{C}_{nT/(JK)}X) \leq \lambda \hspace{2mm} (n = 0,1,\ldots,JK-1)\\ & X_{p,p} = 1 \hspace{3mm} (p=1,2,\ldots,2P)\\ & X \succcurlyeq 0\\ & X \in \mathbb{S}_{2P}, \qquad \lambda \in \mathbb{R}, \end{split} \end{equation} where $\hat{C}_t = \mathcal{T}(C_t)$. Note that the problem $(\hat{Q}'_4)$ becomes equivalent to the problem $(\hat{Q}_4)$ if the rank constraint is imposed, and that the problem $(\hat{Q}'_4)$ is convex. Let $X^\star_4$ be the global solution of the problem $(\hat{Q}'_4)$. From $X^\star_4$, we can obtain an approximate solution as follows. Similar to the problem for $L=2$, $X^\star_4$ is decomposed as \begin{equation} X^\star_4 = \sum_{i=1}^{r_4} \lambda_i \mathbf{q}_i \mathbf{q}_i^*, \end{equation} where $r_4 = \operatorname{rank}(X^\star_4)$, $\lambda_i$ is the $i$-th eigenvalue of $X^\star_4$, $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_{r_4}$ and $\mathbf{q}_i$ is the corresponding eigenvector. From the vector $\mathbf{q}_1$, we can obtain the approximate solution $\hat{\mathbf{b}}_4 \in \mathbb{C}^P$ written as \begin{equation} \hat{b}_{4,p} = \frac{1}{\sqrt{2}}\exp(-j \pi/4)\left(\operatorname{sgn}(q_p) + j \cdot \operatorname{sgn}(q_{p+P})\right), \end{equation} where $\hat{b}_{4,p}$ and $q_p$ are the $p$-th elements of $\hat{\mathbf{b}}_4$ and $\mathbf{q}_1$, respectively. \subsection{Optimization Problem for General $L$} For general $L$, we consider the relaxed problem \begin{equation} \begin{split} (\hat{Q}'_L) \hspace{3mm} & \hspace{3mm} \min \lambda \\ \mbox{subject to}\hspace{3mm} & \operatorname{Tr}(C_{nT/(JK)}X) \leq \lambda \hspace{2mm} (n = 0,1,\ldots,JK-1)\\ & X_{p,p} = 1 \hspace{3mm} (p=1,2,\ldots,P)\\ & X \succcurlyeq 0\\ & X \in \mathbb{H}_P, \qquad \lambda \in \mathbb{R}.
\end{split} \end{equation} Note that the set of Hermitian positive semidefinite matrices is convex \cite{sdr}, so the above problem $(\hat{Q}'_L)$ is convex. For $L \neq 2,4$, the problem $(\hat{Q}'_L)$ is not equivalent to the problem $(\hat{Q}_L)$ even if the rank constraint is imposed, since the constraints $X_{p,p}=1$ only enforce $|b_p|=1$ rather than $b_p \in \Omega_L$. Similar to the problems for $L=2$ and $L=4$, an approximate solution can be obtained as follows. Let $X^\star_L$ be the global solution of the problem $(\hat{Q}'_L)$. Then, $X^\star_L$ is decomposed as \begin{equation} X^\star_L = \sum_{i=1}^{r_L} \lambda_i \mathbf{q}_i \mathbf{q}_i^*, \end{equation} where $r_L = \operatorname{rank}(X^\star_L)$, $\lambda_i$ is the $i$-th eigenvalue of $X^\star_L$, $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_{r_L}$ and $\mathbf{q}_i$ is the corresponding eigenvector. From the vector $\mathbf{q}_1$, we can obtain the approximate solution $\hat{\mathbf{b}}_L \in \mathbb{C}^P$ as \begin{equation} \hat{\mathbf{b}}_L = \mathop{\rm arg~min}\limits_{\mathbf{b} \in \Omega_L^P} \|\mathbf{q} - \mathbf{b}\|, \end{equation} where $\mathbf{q} = \sqrt{\lambda_1}\mathbf{q}_1$ and $\|\mathbf{z}\|$ is the Euclidean norm of $\mathbf{z}$. \section{Randomization Method} In Section IV, we have discussed the relaxed problems and how to obtain approximate solutions. However, approximate solutions are clearly not suitable if the global solutions of the relaxed problems have several large eigenvalues, that is, if their ranks cannot be regarded as unity. In this section, we introduce a randomization method. Such methods are used to analyze how far the optimal value of a relaxed problem is from that of the original problem \cite{randap}. With this method, we obtain solutions from random vectors generated from a Gaussian distribution. Similar to the discussions in Section IV, we consider the problem separately for various cases of $L$. \subsection{Randomization for $L=2$} First, we consider the problem for $L=2$.
Let $\boldsymbol{\xi}$ be a random vector generated from the Gaussian distribution $\mathcal{N}(\mathbf{0},X)$ with zero mean and covariance matrix $X$. The definition and properties of the Gaussian distribution are given in \cite{aspects}. To find an approximate solution, we rewrite the problem $(\hat{Q}'_2)$ as follows. With \begin{equation} \operatorname{E}\{\boldsymbol{\xi}^\top C_t\boldsymbol{\xi}\} = \operatorname{Tr}(C_tX), \end{equation} the problem $(\hat{Q}'_2)$ can be written as \begin{equation} \begin{split} (\hat{Q}'_2) \hspace{3mm} & \hspace{3mm} \min \lambda \\ \mbox{subject to}\hspace{3mm} & \operatorname{E}\{\boldsymbol{\xi}^\top C_{nT/(JK)}\boldsymbol{\xi}\} \leq \lambda \hspace{2mm} (n = 0,1,\ldots,JK-1)\\ & X_{p,p} = 1 \hspace{3mm} (p=1,2,\ldots,P)\\ & X \succcurlyeq 0\\ & \boldsymbol{\xi} \sim \mathcal{N}(0,X)\\ & X \in \mathbb{S}_P, \qquad \lambda \in \mathbb{R}. \end{split} \end{equation} Note that the variables of the above problem are $X$ and $\lambda$. Then, it is clear that the optimal matrix $X^\star_2$ defined in Section IV is also the optimal matrix of the above problem in the sense of a covariance matrix. This result suggests that a suitable solution can be obtained from a set of random vectors generated from the Gaussian distribution $\mathcal{N}(\mathbf{0},X^\star_2)$ \cite{sdprand}. We can then obtain the approximate solution as follows. \begin{enumerate} \item Solve the problem $(\hat{Q}'_2)$ and obtain the covariance matrix $X^\star_2$. \item Generate random vectors $\{\boldsymbol{\xi}\}$ from the Gaussian distribution $\mathcal{N}(\mathbf{0},X^\star_2)$ and project them onto the feasible region of the original problem $(\hat{Q}_L)$; that is, for $L=2$, obtain the projected solutions \begin{equation} \hat{b}_p = \operatorname{sgn}\left(\xi_p\right) \hspace{2mm}(p=1,2,\ldots,P), \end{equation} where $\xi_p$ is the $p$-th element of $\boldsymbol{\xi}$.
\item Choose the solution which achieves the minimum PAPR among all the random vectors and regard it as an approximate solution. \end{enumerate} \subsection{Randomization for $L=4$} Similar to the case for $L=2$, we can obtain the covariance matrix $X^\star_4$ for $L=4$ and generate random vectors $\{\boldsymbol{\xi}\}$ from $\mathcal{N}(\mathbf{0},X^\star_4)$. Since the dimension of the vectors $\{\boldsymbol{\xi}\}$ is $2P$, the projection is written as \begin{equation} \hat{b}_p = \frac{1}{\sqrt{2}}\exp(-j \pi/4)\left(\operatorname{sgn}(\xi_p) + j \cdot \operatorname{sgn}(\xi_{p+P})\right) \end{equation} for $p=1,2,\ldots,P$. From these random vectors, we choose as an approximate solution the one which achieves the minimum PAPR among them. \subsection{Randomization with General $L$} For general $L$, we can obtain the complex covariance matrix $X^\star_L$ as a solution of the problem $(\hat{Q}'_L)$. Similar to the methods for $L=2$ and $L=4$, our goal is to choose the solution from random vectors. The main part of our method is to obtain an approximate solution from random vectors generated from the complex Gaussian distribution $\mathcal{CN}(\mathbf{0},X^\star_L)$. The definition and details of the complex Gaussian distribution are given in \cite{graphical}. There are several methods to obtain an approximate solution from random vectors \cite{proj1}-\cite{proj3}, and our method is a special case of an algorithm in \cite{proj3}. From the complex Gaussian distribution $\mathcal{CN}(\mathbf{0},X^\star_L)$, we obtain the random vectors $\{\boldsymbol{\xi}\}$. Then, we have to transform the random vectors $\{\boldsymbol{\xi}\}$ into feasible solutions of the problem $(\hat{Q}_L)$. Our transformation method is written as follows.
Let $f_L$ be \begin{equation} f_L(z) = \left\{ \begin{array}{c l} 1 & \operatorname{Arg} z \in [0,\frac{1}{L}2\pi)\\ \omega_L & \operatorname{Arg} z \in [\frac{1}{L}2\pi,\frac{2}{L}2\pi)\\ \vdots & \\ \omega^l_L & \operatorname{Arg} z \in [\frac{l}{L}2\pi,\frac{l+1}{L}2\pi)\\ \vdots & \\ \omega^{L-1}_L & \operatorname{Arg} z \in [\frac{L-1}{L}2\pi,2\pi) \end{array} \right. , \end{equation} where $z \in \mathbb{C}$, $\omega_L = \exp(2 \pi j / L)$ and $\operatorname{Arg} z$ is the angle of $z$. With the function $f_L$, the random vector $\boldsymbol{\xi}$ generated from the complex Gaussian distribution $\mathcal{CN}(\mathbf{0},X^\star_L)$ is transformed to \begin{equation} \hat{b}_p = f_L(\xi_p) \hspace{2mm}(p=1,2,\ldots,P). \end{equation} It is clear that $\hat{b}_p \in \Omega_L$. Therefore, $\hat{\mathbf{b}}$ is a feasible solution of the problem $(\hat{Q}_L)$. With the above method, we can obtain feasible solutions from random vectors generated from the complex Gaussian distribution $\mathcal{CN}(\mathbf{0},X^\star_L)$. Then, we choose as the approximate solution the one which achieves the minimum PAPR among the set of random vectors. Our method is summarized in Algorithm \ref{algo:ours}. \begin{algorithm}[h] Obtain the relaxed problem with semidefinite relaxation techniques.\\ Obtain the positive semidefinite matrix $X^\star$ as the optimal solution of the relaxed problem.\\ Determine the Gaussian distribution $\mathcal{N}(\mathbf{0},X^\star)$ (for $L \neq 2,4$, the complex Gaussian distribution $\mathcal{CN}(\mathbf{0},X^\star)$ is determined). Then, generate $N$ samples from the Gaussian distribution as candidate solutions.\\ Project the samples onto the feasible region, and obtain the projected samples.\\ Choose the solution $\mathbf{b}^\star$ from the projected samples which achieves the minimum PAPR. Then, output $\mathbf{b}^\star$ as the solution.
\caption{Randomization Method with Semidefinite Relaxation} \label{algo:ours} \end{algorithm} \section{Relation between Our Method and Phase Random Method} In Section V, we have shown our randomization method. Similar to our method, a phase random method has been proposed \cite{phaserandom}. This method uses random vectors whose phases are uniformly distributed in $\Omega_L$. In this section, we discuss the relation between our method and the phase random method. First, we explain the phase random method. We define the probability mass function as \begin{equation} \operatorname{Pr}\left\{z = \omega^l_L\right\} = \frac{1}{L}\hspace{2mm}(l=0,1,\ldots,L-1), \end{equation} where $\omega_L = \exp(2 \pi j /L)$, as defined in Section V. Then, $\omega^l_L \in \Omega_L$ and the phases are uniformly distributed in $\Omega_L$. Further, let us discuss the complex Gaussian distribution. From \cite{graphical}, if $\mathbf{z} \in \mathbb{C}^n$ follows $\mathcal{CN}(\boldsymbol{\mu},\Sigma)$, then the probability density function of $\mathcal{T}(\mathbf{z}) \in \mathbb{R}^{2n}$ is the Gaussian distribution $\mathcal{N}\left(\mathcal{T}(\boldsymbol{\mu}),\frac{1}{2}\mathcal{T}(\Sigma)\right)$. Therefore, we can consider a real-valued Gaussian distribution instead of a complex Gaussian distribution. Let us consider the complex Gaussian distribution $\mathcal{CN}(\mathbf{0},I_P)$, where $I_P$ is the identity matrix of size $P$. It is clear that the matrix $\mathcal{T}(I_P)$ is the identity matrix of size $2P$. From the above discussion, since the covariance matrix is the identity matrix, the elements of $\mathbf{z}$ generated from $\mathcal{CN}(\mathbf{0},I_P)$ are uncorrelated. It is known \cite{prob} that uncorrelatedness is equivalent to independence for normal variables. Therefore, it is sufficient to consider a vector $\mathbf{z}$ whose elements are independently generated from the complex Gaussian distribution $\mathcal{CN}(0,1)$.
The variable $z$ which is an element of $\mathbf{z}$ can be decomposed as \begin{equation} z = x + jy, \end{equation} where $x$ and $y$ are real numbers independently following the Gaussian distribution $\mathcal{N}(0,1/2)$. Let us define $r \geq 0$ and $\theta \in [0,2\pi)$ so that \begin{equation} x + jy = r \exp(j\theta). \end{equation} Then, since $x$ and $y$ are normal variables following $\mathcal{N}(0,1/2)$, the probability density of $\theta \in [0,2\pi)$ is \cite{prob} \begin{equation} p(\theta) = \frac{1}{2\pi}, \end{equation} and hence the phase of a variable $z$ generated from $\mathcal{CN}(0,1)$ is uniformly distributed. From the above discussions and the definition of the function $f_L(z)$, the probability mass function of $f_L(z)$ is written as \begin{equation} \operatorname{Pr}\left\{f_L(z) = \omega^l_L\right\} = \frac{1}{L}. \end{equation} This result implies that the phase random method is equivalent to our method with the identity matrix as the covariance matrix and with the function $f_L(z)$. \section{Reducing Upper Bound of PAPR} We have discussed how to obtain a covariance matrix to determine a Gaussian distribution. In Section V, we have obtained the optimization problem $(\hat{Q}'_L)$. This problem contains the oversampling parameter $J$. As seen in Section III, the PAPR calculated from sampled signals converges to the true value of PAPR as $J \rightarrow \infty$. Therefore, a sufficiently large $J$ is necessary to evaluate PAPR tightly. However, the number of constraints in the optimization problem $(\hat{Q}'_L)$ grows as $J$ increases, and the optimization problem becomes more demanding. To overcome this obstacle, instead of PAPR itself, we consider an optimization problem that reduces an upper bound of PAPR which does not depend on the time $t$. From this problem, we obtain a covariance matrix as the solution. In this section, we consider general $L$.
Then, specifying $L=2,4$, we can obtain the same results as those in this section with the techniques discussed in Section IV: for $L=2$, the set of matrices is the set of symmetric matrices $\mathbb{S}_P$, and for $L=4$, we replace a positive semidefinite matrix $X$ with $\mathcal{T}(X)$. The upper bound of the signal envelope follows from Eq. (\ref{eq:ofdm}) as \cite{tellambura_bound} \begin{equation} |s(t)|^2 \leq \sum_{k=1}^K|A_k|^2 + 2\sum_{i=1}^{K-1}|\rho(i)|, \label{eq:bound_PAPR} \end{equation} where \begin{equation} \rho(i) = \sum_{k=1}^{K-i}A_k\overline{A}_{k+i} \end{equation} and $\overline{z}$ is the complex conjugate of $z$. We define $\rho(K)=0$. The right-hand side of Eq. (\ref{eq:bound_PAPR}) is independent of the time $t$. Let us define $\boldsymbol{\rho'}=(\rho(1),\rho(2),\ldots,\rho(K-1))^\top$. Note that the first term in the right-hand side of Eq. (\ref{eq:bound_PAPR}), $\sum_{k=1}^K|A_k|^2$, corresponds to $\rho(0)$, and this term is invariant under PTS techniques since each element $b_p$ satisfies $|b_p|=1$. From the above discussion, without taking convexity into account, PAPR is expected to decrease when we reduce $\|\boldsymbol{\rho'}\|_{l_1}$, where $\|\mathbf{z}\|_{l_1}$ is the $l_1$-norm of $\mathbf{z}$. However, this is not tractable since each $|\rho(i)|$ is not convex in the variables. Therefore, we use the $l_2$-norm of $\boldsymbol{\rho'}$, $\|\boldsymbol{\rho'}\|_{l_2}$. From the Cauchy-Schwarz inequality, it follows that \begin{equation} \|\boldsymbol{\rho'}\|_{l_1} \leq \sqrt{K-1}\|\boldsymbol{\rho'}\|_{l_2}. \end{equation} Therefore, $\|\boldsymbol{\rho'}\|_{l_1}$ is expected to be reduced when $\|\boldsymbol{\rho'}\|_{l_2}$ is reduced. Let us consider the vector $\hat{\boldsymbol{\rho}}=(\rho(0),\sqrt{2}\rho(1),\ldots,\sqrt{2}\rho(K-1))^\top$. It is clear that minimizing $\|\hat{\boldsymbol{\rho}}\|_{l_2}$ is equivalent to minimizing $\|\sqrt{2}\boldsymbol{\rho'}\|_{l_2}$ since $\rho(0)$ is constant.
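The bound in Eq. (\ref{eq:bound_PAPR}) can be checked numerically. The Python sketch below is our own illustration: it computes the aperiodic autocorrelations $\rho(i)$ for given carrier coefficients and compares the sampled peak power against the bound, assuming the carrier model $s(t)=\sum_k A_k e^{2\pi jkt/T}$.

```python
import numpy as np

def aperiodic_corr(A, i):
    """rho(i) = sum_{k=1}^{K-i} A_k * conj(A_{k+i}), written 0-based."""
    K = len(A)
    return np.sum(A[:K - i] * np.conj(A[i:]))

def envelope_bound(A):
    """Right-hand side of the bound: sum_k |A_k|^2 + 2 * sum_i |rho(i)|."""
    K = len(A)
    return np.sum(np.abs(A) ** 2) + 2.0 * sum(
        abs(aperiodic_corr(A, i)) for i in range(1, K))

def peak_power(A, J=16):
    """max_n |s(nT/(JK))|^2, evaluated via a zero-padded IFFT."""
    K = len(A)
    spectrum = np.concatenate([A, np.zeros((J - 1) * K, dtype=complex)])
    return np.max(np.abs(np.fft.ifft(spectrum) * (J * K)) ** 2)
```

In the fully coherent case $A_k \equiv 1$ the bound is tight: $\rho(i)=K-i$, so both sides equal $K^2$.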
Then, $\|\hat{\boldsymbol{\rho}}\|^2_{l_2}$ is written as \begin{equation} \begin{split} \| \hat{\boldsymbol{\rho}}\|^2_{l_2} =& 2\sum_{k=1}^{K-1}|\rho(k)|^2 + |\rho(0)|^2\\ =& \sum_{k=0}^{K-1}|\rho(k)|^2 + \sum_{k=0}^{K-1}|\rho(K-k)|^2 \\ =&\frac{1}{2}\left\{\sum_{k=0}^{K-1}|\rho(k) + \overline{\rho(K-k)}|^2\right.\\ &\left. + \sum_{k=0}^{K-1}|\rho(k) - \overline{\rho(K-k)}|^2\right\}. \end{split} \label{eq:l2_expression} \end{equation} From the above equations, $\|\hat{\boldsymbol{\rho}}\|^2_{l_2}$ is divided into a periodic correlation term and an odd periodic correlation term. With Eq. (\ref{eq:pts}), these terms are written as \begin{equation} \begin{split} \rho(k) + \overline{\rho(K-k)} &= \mathbf{b}^*A^* B^{(k)}_{1,1}A\mathbf{b}\\ \rho(k) - \overline{\rho(K-k)} &= \mathbf{b}^*A^* B^{(k)}_{-1,1}A\mathbf{b},\\ \end{split} \end{equation} where the matrices $B^{(k)}_{1,1}$ and $B^{(k)}_{-1,1}$ are written as \begin{equation} B^{(k)}_{1,1} = \left( \begin{array}{cc} O & I_{k}\\ I_{K-k} & O \end{array} \right),\hspace{3mm} B^{(k)}_{-1,1} = \left( \begin{array}{cc} O & -I_{k}\\ I_{K-k} & O \end{array} \right). \end{equation} These matrices are cyclic and negacyclic shift matrices, respectively, and hence can be diagonalized.
These matrices are decomposed as \cite{mypaper} \begin{equation} \begin{split} B^{(k)}_{1,1} &= V^*D^{(k)}V\\ B^{(k)}_{-1,1} &= \hat{V}^*\hat{D}^{(k)}\hat{V}, \end{split} \end{equation} where $V$ and $\hat{V}$ are unitary matrices whose $(m,n)$-th elements are \begin{equation} \begin{split} V_{m,n} &= \frac{1}{\sqrt{K}}\exp\left(-2 \pi j \frac{mn}{K}\right),\\ \hat{V}_{m,n} &= \frac{1}{\sqrt{K}}\exp\left(-2 \pi j n\left(\frac{m}{K} + \frac{1}{2K}\right)\right), \end{split} \end{equation} and $D^{(k)}$ and $\hat{D}^{(k)}$ are diagonal matrices whose $n$-th diagonal elements are \begin{equation} \begin{split} D_n^{(k)} &= \exp\left(-2 \pi j k\frac{n}{K}\right),\\ \hat{D}_n^{(k)} &= \exp\left(-2 \pi j k\left(\frac{n}{K} + \frac{1}{2K}\right)\right). \end{split} \end{equation} With these expressions, Eq. (\ref{eq:l2_expression}) is written as \begin{equation} \|\hat{\boldsymbol{\rho}}\|_{l_2}^2 = \frac{K}{2}\left\{\sum_{k=1}^{K}|\alpha_k|^4 + \sum_{k=1}^{K}|\beta_k|^4\right\}, \end{equation} where $\alpha_k$ and $\beta_k$ are the $k$-th elements of $\boldsymbol{\alpha}=VA\mathbf{b}$ and $\boldsymbol{\beta} = \hat{V}A\mathbf{b}$, respectively. In terms of the variable $\mathbf{b}$, the above equation is written as \begin{equation} \|\hat{\boldsymbol{\rho}}\|_{l_2}^2 = \frac{K}{2}\sum_{k=1}^K \left\{\left(\mathbf{b}^*A^*V^*G_kVA\mathbf{b}\right)^2 + \left(\mathbf{b}^* A^*\hat{V}^*G_k\hat{V}A\mathbf{b}\right)^2\right\}, \label{eq:l2_expression2} \end{equation} where $G_k$ is the matrix whose $(k,k)$-th element is unity and whose other elements are zero. Note that $G_k = G_k^*G_k$. Then, the matrices $A^*V^*G_kVA$ and $A^*\hat{V}^*G_k\hat{V}A$ are positive semidefinite since they are Gram matrices. Further, Eq. (\ref{eq:l2_expression2}) is convex with respect to the variable $\mathbf{b}$. This is proven in Appendix A.
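The decompositions $B^{(k)}_{1,1}=V^*D^{(k)}V$ and $B^{(k)}_{-1,1}=\hat{V}^*\hat{D}^{(k)}\hat{V}$ can be verified numerically. The Python snippet below is our own check, using $0$-based indices $m,n=0,\ldots,K-1$ and `np.roll` to build the shift matrices.

```python
import numpy as np

K, k = 8, 3
m = np.arange(K)

# B_{1,1}^{(k)}: cyclic shift, (B x)_i = x_{(i-k) mod K}
B_cyc = np.roll(np.eye(K), k, axis=0)
# B_{-1,1}^{(k)}: negacyclic shift -- the wrapped-around rows carry a minus sign
B_neg = B_cyc.copy()
B_neg[:k, :] *= -1.0

# DFT matrix V and half-sample-shifted (odd-frequency) DFT matrix V_hat
V = np.exp(-2j * np.pi * np.outer(m, m) / K) / np.sqrt(K)
V_hat = np.exp(-2j * np.pi * np.outer(m / K + 1 / (2 * K), m)) / np.sqrt(K)
D = np.diag(np.exp(-2j * np.pi * k * m / K))
D_hat = np.diag(np.exp(-2j * np.pi * k * (m / K + 1 / (2 * K))))
```

Both reconstructions match the shift matrices exactly, and both $V$ and $\hat{V}$ are unitary.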
From the above discussions, it follows that the squared $l_2$-norm of $\hat{\boldsymbol{\rho}}$ is a convex function with respect to the variable $\mathbf{b}$. Combining the discussions above, we obtain the optimization problem \begin{equation} \begin{split} (Q_{l_2}) \hspace{3mm} & \hspace{3mm}\min F(\mathbf{b})\\ \mbox{subject to}\hspace{3mm} & b_p \in \Omega_L \hspace{3mm} (p=1,2,\ldots,P), \end{split} \end{equation} where \begin{equation} F(\mathbf{b})=\sum_{k=1}^K \left\{\left(\mathbf{b}^*A^*V^*G_kVA\mathbf{b}\right)^2 + \left(\mathbf{b}^* A^*\hat{V}^*G_k\hat{V}A\mathbf{b}\right)^2\right\}. \label{eq:upper_obj} \end{equation} To deal with the discrete set $\Omega_L$, we obtain the relaxed problem with semidefinite relaxation techniques. This convex problem is written as \begin{equation} \begin{split} (Q'_{l_2}) \hspace{3mm} & \hspace{3mm}\min \hat{F}(X)\\ \mbox{subject to}\hspace{3mm} & X_{p,p} = 1 \hspace{3mm} (p=1,2,\ldots,P)\\ & X \succcurlyeq 0\\ & X \in \mathbb{H}_P, \end{split} \end{equation} where \begin{equation} \hat{F}(X)=\sum_{k=1}^K \left\{\operatorname{Tr}\left(A^*V^*G_kVAX\right)^2 + \operatorname{Tr}\left( A^*\hat{V}^*G_k\hat{V}AX\right)^2\right\}. \label{eq:upper_relax_obj} \end{equation} We have thus shown how to obtain the optimal solution as a positive semidefinite matrix. Next, we discuss the relation between our randomization method and the relaxed problem $(Q'_{l_2})$. Let $X^\star$ and $\{\boldsymbol{\xi}\}$ be the global solution of the problem $(Q'_{l_2})$ and the random vectors generated from the complex Gaussian distribution $\mathcal{CN}(\mathbf{0},X^\star)$, respectively. They satisfy $\operatorname{E}\{\boldsymbol{\xi}\boldsymbol{\xi}^*\} = X^\star$.
Then, the following relations hold: \begin{equation} \begin{split} & \sum_{k=1}^K \left\{\operatorname{Tr}\left(A^*V^*G_kVAX^\star\right)^2 + \operatorname{Tr}\left( A^*\hat{V}^*G_k\hat{V}AX^\star\right)^2\right\}\\ \leq & \sum_{k=1}^K \operatorname{E}\left\{\left(\boldsymbol{\xi}^*A^*V^*G_kVA\boldsymbol{\xi}\right)^2 + \left(\boldsymbol{\xi}^* A^*\hat{V}^*G_k\hat{V}A\boldsymbol{\xi}\right)^2\right\}\\ \leq & 3\sum_{k=1}^K \left\{\operatorname{Tr}\left(A^*V^*G_kVAX^\star\right)^2 + \operatorname{Tr}\left( A^*\hat{V}^*G_k\hat{V}AX^\star\right)^2\right\}. \end{split} \label{eq:upper_relation} \end{equation} The above relations are proven in Appendix B. Our main aim is to find $X^\star_{l_2}$ minimizing $\operatorname{E}\left\{F(\boldsymbol{\xi})\right\}$ under the constraints, where $\boldsymbol{\xi} \sim \mathcal{CN}(\mathbf{0},X^\star_{l_2})$. Two inequalities are involved in Eq. (\ref{eq:upper_relation}). The first inequality in Eq. (\ref{eq:upper_relation}) implies that the global solution of the relaxed problem $X^\star$ does not always correspond to $X^\star_{l_2}$. However, the last inequality in Eq. (\ref{eq:upper_relation}) implies that $X^\star$ will be an appropriate choice for our randomization method since $X^\star$ makes $\operatorname{E}\left\{F(\boldsymbol{\xi})\right\}$ small, where $\boldsymbol{\xi} \sim \mathcal{CN}(\mathbf{0},X^\star)$. From the above discussions, the global solution of the relaxed problem $X^\star$ is not necessarily the optimal covariance matrix for our randomization method minimizing the upper bound of PAPR. However, $X^\star$ will achieve low PAPR with our randomization method. \section{Numerical Results} We numerically solve the problems $(\hat{Q}'_L)$ and $(Q'_{l_2})$ with CVX \cite{cvx} and obtain approximate solutions with two kinds of methods, the $l_2$ approximation method discussed in Section IV and the randomization methods discussed in Sections V and VII, respectively.
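The randomization procedure of Algorithm \ref{algo:ours}, which underlies the curves below, can be sketched as follows. This Python snippet is our own illustration: `objective` stands for the (oversampled) PAPR evaluation of a candidate $\mathbf{b}$, the SDP solution $X^\star$ is assumed given, and the $\mathcal{CN}(\mathbf{0},X^\star)$ samples are generated through an eigenvalue factorization of $X^\star$.

```python
import numpy as np

def f_L(z, L):
    """The map f_L: quantize the angle of z down to the nearest multiple of 2*pi/L."""
    l = np.floor((np.angle(z) % (2 * np.pi)) / (2 * np.pi / L))
    return np.exp(2j * np.pi * l / L)

def randomized_pts(X_star, L, n_samples, objective, seed=0):
    """Draw xi ~ CN(0, X_star), project onto Omega_L^P via f_L,
    and keep the candidate with the smallest objective value."""
    rng = np.random.default_rng(seed)
    P = X_star.shape[0]
    w, U = np.linalg.eigh(X_star)                 # X_star = F F^*, F = U sqrt(w)
    F = U * np.sqrt(np.clip(w, 0.0, None))
    best_b, best_val = None, np.inf
    for _ in range(n_samples):
        g = (rng.standard_normal(P) + 1j * rng.standard_normal(P)) / np.sqrt(2)
        b = f_L(F @ g, L)                         # feasible candidate in Omega_L^P
        val = objective(b)
        if val < best_val:
            best_b, best_val = b, val
    return best_b, best_val
```

With $X^\star = I_P$ this reduces to the phase random method, consistent with the equivalence shown in Section VI.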
As the parameters, the number of carriers $K=256$ and the oversampling parameter $J=16$ are chosen. We obtain PAPR curves for three sets of parameters, $(P,L)=(16,2), (8,4)$ and $(8,8)$. The oversampling parameter $J$ is also used in calculating PAPR (see Eq. (\ref{eq:sample})). As the modulation scheme, each symbol is independently chosen from 16QAM symbols. The index sets $\Lambda_n$ are fixed and chosen as adjacent sets, that is, \begin{equation} \Lambda_n = \left\{\frac{K}{P}(n-1)+1,\ldots,\frac{K}{P}n\right\} \end{equation} for $n = 1,2,\ldots,P$. With our randomization methods discussed in Sections V and VII, and the phase random method, we generate 10 and 70 samples as solution candidates and choose the optimal solution among them (see Algorithm \ref{algo:ours}). For the brute-force method and the other methods, we draw the PAPR curves from 200 results and 2000 results, respectively. Note that the PAPR curve with the brute-force method is optimal. Figures \ref{fig:P16L2}, \ref{fig:P8L4} and \ref{fig:P8L8} show the PAPR curves of the original OFDM system, the brute-force method \cite{pts}, the $l_2$ approximation method discussed in Section IV, the randomization method discussed in Section V, the reducing-upper-bound method discussed in Section VII, and the phase random method \cite{phaserandom}. In the legends, ``$l_2$ approximation'', ``Ours (PAPR)'' and ``Ours (Upper Bound)'' mean the $l_2$ approximation method, the randomization method discussed in Section V, and the reducing-upper-bound method discussed in Section VII, respectively. In Fig. \ref{fig:P8L8}, the PAPR curve with the brute-force method is not drawn since it is not straightforward to obtain the optimal vector due to its prohibitively large computational cost. From these figures, the PAPR curve of the $l_2$ approximation method is far from that of the brute-force method.
This result shows that the optimal solution of the relaxed problem is far from a rank-1 matrix and tends to have several large eigenvalues. Therefore, we conclude that the $l_2$ approximation method is not suitable for PTS techniques. For the randomization methods, there are two PAPR curves, with 10 random vectors and with 70 random vectors. For both numbers of random vectors, the PAPRs of our two randomization methods are lower than that of the phase random method. As seen in Section VI, the phase random method is equivalent to our method with the identity matrix as the covariance matrix. Therefore, the performance of randomization methods can be improved when a suitable covariance matrix is chosen. In Section VII, we have discussed the method to reduce the upper bound of PAPR. From the numerical results, the PAPR with the reducing-upper-bound method is higher than that of the randomization method discussed in Section V. However, in terms of solving the optimization problems, the complexity of the reducing-upper-bound method is lower than that of the randomization method discussed in Section V. The reason is as follows. The main point of this method is that the problem reducing the upper bound of PAPR is independent of the oversampling parameter $J$. Since the number of constraints is thus invariant, the complexity of the solver does not increase as $J$ increases. As seen in Eq. (\ref{eq:sample}), a sufficiently large $J$ is necessary. Then, with the randomization method discussed in Section V, the necessary number of constraints is large since $J$ is sufficiently large. Therefore, the reducing-upper-bound method can achieve low PAPR with low complexity.
\begin{figure}[htbp] \centering \includegraphics[width=2.9in]{16QAM_adj_P16_L2_K256_J16.eps} \caption{PAPR with the parameters $(P,L)=(16,2)$} \label{fig:P16L2} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=2.9in]{16QAM_adj_P8_L4_K256_J16.eps} \caption{PAPR with the parameters $(P,L)=(8,4)$} \label{fig:P8L4} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=2.9in]{16QAM_adj_P8_L8_K256_J16.eps} \caption{PAPR with the parameters $(P,L)=(8,8)$} \label{fig:P8L8} \end{figure} \section{Conclusion} In this paper, we have discussed how to obtain a suitable vector for partial transmit sequence techniques and have proposed two kinds of randomization methods with semidefinite relaxation techniques. Further, we have shown the relation between our methods and the phase random method. Then, in our numerical results, we have shown the PAPR curves and that our methods can achieve lower PAPR than the phase random method. Moreover, our numerical results have implied that randomization methods can achieve even lower PAPR if a more suitable covariance matrix is obtained. A remaining issue is to explore how to obtain a suitable covariance matrix for a randomization method. One necessity to address this is to obtain an explicit form of a suitable covariance matrix. With such an explicit form, we expect an ideal method for obtaining low PAPR. \appendices \section{Proof of Convexity of Equation (\ref{eq:l2_expression2})} In this appendix, we prove that the function defined in Eq. (\ref{eq:l2_expression2}) is convex with respect to $\mathbf{b}$: \begin{equation*} \|\hat{\boldsymbol{\rho}}\|_{l_2}^2 = \frac{K}{2}\sum_{k=1}^K \left\{\left(\mathbf{b}^*A^*V^*G_kVA\mathbf{b}\right)^2 + \left(\mathbf{b}^* A^*\hat{V}^*G_k\hat{V}A\mathbf{b}\right)^2\right\}. \end{equation*} First, the matrices $A^*V^*G_kVA$ and $A^*\hat{V}^*G_k\hat{V}A$ are positive semidefinite since they are Gram matrices.
To prove the convexity of the above function, it is sufficient to prove that each term of the above function is convex, since the sum of convex functions is convex. Therefore, we prove \begin{equation} \begin{split} &\gamma\left(\mathbf{b}_1^*G\mathbf{b}_1\right)^2 + (1-\gamma)\left(\mathbf{b}_2^*G\mathbf{b}_2\right)^2\\ \geq & \left(\left(\gamma\mathbf{b}_1 + (1-\gamma)\mathbf{b}_2\right)^*G\left(\gamma\mathbf{b}_1 + (1-\gamma)\mathbf{b}_2\right)\right)^2, \end{split} \end{equation} where $\gamma \in [0,1]$, $\mathbf{b}_1, \mathbf{b}_2 \in \mathbb{C}^P$ and $G$ is a positive semidefinite matrix corresponding to either $A^*V^*G_nVA$ or $A^*\hat{V}^*G_n\hat{V}A$. Let us prove the convexity. Since $x^2$ is a convex and non-decreasing function for $x \geq 0$ and $\mathbf{b}^*G\mathbf{b}$ is convex and non-negative, the following inequalities are satisfied \begin{equation} \begin{split} & \left(\left(\gamma\mathbf{b}_1 + (1-\gamma)\mathbf{b}_2\right)^*G\left(\gamma\mathbf{b}_1 + (1-\gamma)\mathbf{b}_2\right)\right)^2\\ \leq & \left(\gamma\mathbf{b}_1^*G\mathbf{b}_1 + (1-\gamma)\mathbf{b}_2^*G\mathbf{b}_2\right)^2\\ \leq & \gamma\left(\mathbf{b}_1^*G\mathbf{b}_1\right)^2 + (1-\gamma)\left(\mathbf{b}_2^*G\mathbf{b}_2\right)^2. \end{split} \end{equation} Applying the above inequalities to each term of Eq. (\ref{eq:l2_expression2}), and noting that the sum of convex functions is convex, we have that $\|\hat{\boldsymbol{\rho}}\|^2_{l_2}$ in Eq. (\ref{eq:l2_expression2}) is convex. The same result can be obtained with Theorem 5.1 in \cite{cvx_analysis}. \section{Proof of Relations in Equation (\ref{eq:upper_relation})} In this appendix, we prove the relations written in Eq.
(\ref{eq:upper_relation}) \begin{equation*} \begin{split} & \sum_{k=1}^K \left\{\operatorname{Tr}\left(A^*V^*G_kVAX^\star\right)^2 + \operatorname{Tr}\left( A^*\hat{V}^*G_k\hat{V}AX^\star\right)^2\right\}\\ \leq & \sum_{k=1}^K \operatorname{E}\left\{\left(\boldsymbol{\xi}^*A^*V^*G_kVA\boldsymbol{\xi}\right)^2 + \left(\boldsymbol{\xi}^* A^*\hat{V}^*G_k\hat{V}A\boldsymbol{\xi}\right)^2\right\}\\ \leq & 3\sum_{k=1}^K \left\{\operatorname{Tr}\left(A^*V^*G_kVAX^\star\right)^2 + \operatorname{Tr}\left( A^*\hat{V}^*G_k\hat{V}AX^\star\right)^2\right\} \end{split} \end{equation*} for $\boldsymbol{\xi} \sim \mathcal{CN}(\mathbf{0},X^\star)$. The first inequality is proven as follows. From the Cauchy-Schwarz inequality, it holds that \begin{equation} \begin{split} & \sum_{k=1}^K \left\{\operatorname{Tr}\left(A^*V^*G_kVAX^\star\right)^2 + \operatorname{Tr}\left( A^*\hat{V}^*G_k\hat{V}AX^\star\right)^2\right\}\\ \leq & \sum_{k=1}^K \operatorname{E}\left\{\left(\boldsymbol{\xi}^*A^*V^*G_kVA\boldsymbol{\xi}\right)^2 + \left(\boldsymbol{\xi}^* A^*\hat{V}^*G_k\hat{V}A\boldsymbol{\xi}\right)^2\right\}. \end{split} \end{equation} The last inequality is proven as follows. Similar to Appendix A, it is sufficient to prove \begin{equation} \operatorname{E}\left\{\left(\boldsymbol{\xi}^* G \boldsymbol{\xi}\right)^2\right\} \leq 3 \operatorname{Tr}\left(G X^\star\right)^2, \label{eq:proof_obj} \end{equation} where $G$ is a Hermitian and positive semidefinite matrix and $\boldsymbol{\xi} \sim \mathcal{CN}(\mathbf{0},X^\star)$. In \cite{graphical}, it has been shown that $\mathcal{T}(\boldsymbol{z}) \sim \mathcal{N}\left(\mathcal{T}(\boldsymbol{\mu}),\frac{1}{2}\mathcal{T}(\Sigma)\right)$ if $\boldsymbol{z} \sim \mathcal{CN}(\boldsymbol{\mu},\Sigma)$. Note that the matrices $\mathcal{T}(X^\star)$ and $\mathcal{T}(G)$ are symmetric and positive semidefinite since $G$ and $X^\star$ are Hermitian and positive semidefinite \cite{telatar}.
From this result, it follows that $\mathcal{T}(\boldsymbol{\xi}) \sim \mathcal{N}\left(\mathcal{T}(\mathbf{0}),\frac{1}{2}\mathcal{T}(X^\star)\right)$. With this and discussions in \cite{telatar}, the left-hand side of Eq. (\ref{eq:proof_obj}) is rewritten as \begin{equation} \begin{split} &\operatorname{E}\left\{\left(\boldsymbol{\xi}^* G \boldsymbol{\xi}\right)^2\right\}\\ = & \operatorname{E}\left\{\left(\mathcal{T}(\boldsymbol{\xi})^\top \mathcal{T}(G) \mathcal{T}(\boldsymbol{\xi})\right)^2\right\}\\ = & \sum_{i,j,k,l} \hat{g}_{i,j}\hat{g}_{k,l}\operatorname{E}\{\hat{\xi}_i\hat{\xi}_j\hat{\xi}_k\hat{\xi}_l\}, \end{split} \end{equation} where $\hat{g}_{i,j}$ and $\hat{\xi}_k$ are the $(i,j)$-th element of $\mathcal{T}(G)$ and the $k$-th element of $\mathcal{T}(\boldsymbol{\xi})$, respectively. In \cite{multivariate}, for $\boldsymbol{x} \sim \mathcal{N}(\boldsymbol{\mu},\Sigma)$, the fourth moment about the mean has been derived as \begin{equation} \begin{split} & \operatorname{E}\{(x_i - \mu_i)(x_j - \mu_j)(x_k - \mu_k)(x_l - \mu_l)\}\\ = & \sigma_{i,j}\sigma_{k,l} + \sigma_{i,k}\sigma_{j,l} + \sigma_{i,l}\sigma_{j,k}, \end{split} \label{eq:4thmoment} \end{equation} where $x_i$, $\mu_i$ and $\sigma_{i,j}$ are the $i$-th element of $\boldsymbol{x}$, the $i$-th element of $\boldsymbol{\mu}$ and the $(i,j)$-th element of the real-valued covariance matrix $\Sigma$, respectively. With Eq. (\ref{eq:4thmoment}), Eq.
(\ref{eq:proof_obj}) is rewritten as \begin{equation} \begin{split} &\operatorname{E}\left\{\left(\boldsymbol{\xi}^* G \boldsymbol{\xi}\right)^2\right\}\\ = & \frac{1}{4}\left\{\sum_{i,j,k,l} \hat{g}_{i,j}\hat{g}_{k,l}(\hat{x^\star}_{i,j}\hat{x^\star}_{k,l} + \hat{x^\star}_{i,k}\hat{x^\star}_{j,l} + \hat{x^\star}_{i,l}\hat{x^\star}_{j,k})\right\}\\ = & \frac{1}{4}\left\{\operatorname{Tr}(\mathcal{T}(G)\mathcal{T}(X^\star))^2 + 2\operatorname{Tr}(\mathcal{T}(G)\mathcal{T}(X^\star)\mathcal{T}(G)\mathcal{T}(X^\star))\right\}, \end{split} \label{eq:4thmoment_expand} \end{equation} where $\hat{x^\star}_{i,j}$ is the $(i,j)$-th element of $\mathcal{T}(X^\star)$. In deriving the second equality above, we have used the property that the matrices $\mathcal{T}(G)$ and $\mathcal{T}(X^\star)$ are symmetric. Let $V$ be a matrix such that $VV^\top = \mathcal{T}(G)$; such a $V$ exists since $\mathcal{T}(G)$ is positive semidefinite. With this decomposition, the relations \begin{equation} \begin{split} & \operatorname{Tr}(\mathcal{T}(G)\mathcal{T}(X^\star)\mathcal{T}(G)\mathcal{T}(X^\star))\\ = & \operatorname{Tr}(VV^\top\mathcal{T}(X^\star)VV^\top\mathcal{T}(X^\star))\\ = & \operatorname{Tr}(V^\top\mathcal{T}(X^\star)V \cdot V^\top\mathcal{T}(X^\star)V)\\ \leq & \operatorname{Tr}(V^\top\mathcal{T}(X^\star)V)^2\\ = & \operatorname{Tr}(\mathcal{T}(X^\star)\mathcal{T}(G))^2 \end{split} \label{eq:4thmoment_right} \end{equation} are obtained. In the above relations, we have used the properties that the matrix $V^\top\mathcal{T}(X^\star)V$ is positive semidefinite and that $\operatorname{Tr}(XX) \leq \operatorname{Tr}(X)^2$ for any positive semidefinite matrix $X$. In \cite{maxcut}, it has been shown that $\operatorname{Tr}(\mathcal{T}(X)\mathcal{T}(Y)) = 2\operatorname{Tr}(XY)$ for positive semidefinite matrices $X$ and $Y$. Combining this result and Eqs.
(\ref{eq:4thmoment_expand}) and (\ref{eq:4thmoment_right}), we arrive at the relation \begin{equation} \operatorname{E}\left\{\left(\boldsymbol{\xi}^* G \boldsymbol{\xi}\right)^2\right\} \leq 3 \operatorname{Tr}\left(G X^\star\right)^2. \end{equation} This is the desired result. We have proven Eq. (\ref{eq:proof_obj}) for general $L$. For $L=2$ and $L=4$, the same expression as in Eq. (\ref{eq:proof_obj}) holds. \section*{Acknowledgment} The authors would like to thank Dr. Shin-itiro Goto for his advice. \ifCLASSOPTIONcaptionsoff \newpage \fi
\section{Model} Our model adopts the widely used two-stage pipelined approach for passage retrieval \cite{multi-stage-bert,rethink_bert_reranker, duoT5}, which consists of a retrieval stage and a reranking stage. While many prior works have shown its effectiveness, they use either a sparse model (e.g., BM25) or a dense model (e.g., a dense neural retriever) to generate training data for training the reranker. To the best of our knowledge, this is the first work to use a hybrid sparse-dense model to generate training data for rerankers. In our model, we use a hybrid of BM25 and dual encoder models in the retrieval stage, and then use a cross-attention reranking model, which is trained using candidates retrieved from the first stage. The architecture is shown in Figure \ref{fig:pipeline}. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{diagram.pdf} \caption{The architecture of our two-stage retrieval model.} \label{fig:pipeline} \end{figure} \subsection{Hybrid Retriever} \label{sec:retrieval} In this section we describe the hybrid retriever. We adapt Ma et al.'s \shortcite{QGen} hybrid first-stage retriever. Specifically, we use a BM25 model as the sparse retrieval model and a T5-based dual encoder model \cite{t5xr} as the dense retrieval model. For each document (and question), we concatenate the encodings from the two models to create a hybrid encoding. We perform maximal inner-product search (MIPS) using approximate nearest neighbor search over these hybrid encodings. This results in a different set of neighbors than independently selecting neighbors from BM25 and the dual encoder. For the BM25 model, we represent each question as a $|V|$-dimensional sparse encoding $\textbf{q}^{\text{bm25}}$, where $\textbf{q}^{\text{bm25}}[i]$ is $k$ if the $i$-th entry of vocabulary $V$ is seen $k$ times in the question, and 0 otherwise (i.e., this is a vector of question term counts).
We represent each passage as a sparse real-valued vector $\textbf{p}^{\text{bm25}}$: \begin{small} \begin{equation} \textbf{p}^\text{bm25}[i] = \frac{\text{IDF}(t_i)* \text{cnt}(t_i, P)*(k + 1)}{\text{cnt}(t_i, P) + k*(1 - b + b * \frac{m}{m_\text{avg}})} \nonumber \end{equation} \end{small} where $t_i$ is the token corresponding to the $i$-th vocabulary entry, cnt($t_i$, $P$) is $t_i$'s term frequency in passage $P$, $k$/$b$ are BM25 hyperparameters, IDF is the term's inverse document frequency from the document collection, $m$ is the number of tokens in $P$, and $m_\text{avg}$ is the collection's average passage length. We can recover the original BM25 score via a dot product between the question vector and the document vector. Our dual encoder is based on T5 \cite{2020t5}. We adapt the encoder-only structure from Ni et al.'s \shortcite{t5xr} GTR model. To encode a question, we feed the question text to the T5 encoder and use the mean pooling of the encoder output as the question encoding $\textbf{q}^{\text{de}}$. A passage encoding $\textbf{p}^{\text{de}}$ is generated in a similar way, but we concatenate the passage and the corresponding document title as the input to the T5 encoder. The question tower and the passage tower share parameters. The question-to-passage relevance is computed by the cosine similarity of their encodings. We optimize the model using the in-batch sampled softmax loss\footnote{The formula can be found in the Appendix.} \cite{in-batch-neg}. The structure is shown in Figure~\ref{fig:de}. \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{dual_encoder.pdf} \caption{The architecture of T5 dual encoder model.} \label{fig:de} \end{figure} To train the dual encoder, we use synthetically generated questions. We apply a T5-based question generation model (QGen) to the passages in the target corpus to generate (synthetic question, passage) pairs. We then use these data to train a dual encoder model $DE_{0}$.
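As a concrete illustration of this factorized BM25 (not the paper's implementation; the whitespace tokenizer and IDF table below are toy stand-ins for real corpus statistics), the passage-side weights can be precomputed once and scored against a question's term counts with a plain dot product:

```python
from collections import Counter

def bm25_passage_vector(passage_tokens, idf, m_avg, k=0.9, b=0.8):
    """Precompute one BM25 weight per distinct passage term (the p^bm25 vector)."""
    m = len(passage_tokens)
    counts = Counter(passage_tokens)
    return {
        t: idf.get(t, 0.0) * cnt * (k + 1) / (cnt + k * (1 - b + b * m / m_avg))
        for t, cnt in counts.items()
    }

def bm25_score(question_tokens, passage_vec):
    """Dot product of question term counts (q^bm25) with passage weights."""
    return sum(passage_vec.get(t, 0.0) * c
               for t, c in Counter(question_tokens).items())

# Toy corpus statistics, for illustration only.
idf = {"neural": 2.0, "retrieval": 1.5, "the": 0.1}
vec = bm25_passage_vector("the neural retrieval model".split(), idf, m_avg=4.0)
score = bm25_score("neural retrieval".split(), vec)  # 2.0 + 1.5 = 3.5 here
```

With $m = m_\text{avg}$ and unit term counts, the weight reduces to the term's IDF, which makes the example easy to check by hand.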
To filter low-quality questions, we apply round-trip consistency for retrieval \cite{roundtrip, paq}. Specifically, given a synthetic question in the training data, we run 1-nearest neighbor search based on scores between the question and all passages using $DE_{0}$. If the neighbor is the one from which the question is generated, we keep that (question, passage) pair. Otherwise, the (question, passage) pair is filtered. With the filtered data, we continue to fine-tune $DE_{0}$ to get the final dual encoder model $DE_{1}$. The QGen model is created by fine-tuning a general T5 model using question and passage pairs from Natural Questions (NQ) \cite{nq}. This is the only place that we use supervised data in our pipeline. In particular, we form the input of the encoder as ``Generate question \verb_>>>_ {title}.{passage} \verb_>>>_ {target sentence}'', and the output of the decoder is the corresponding question. Here ``target sentence'' is the sentence that contains the short answer span, and ``passage'' corresponds to the long answer passage of NQ. At inference time, given a passage, we iterate over every sentence as the target to generate diverse questions. Similar to PAQ \shortcite{paq}, we perform targeted generation, where knowing the location of the answer in a passage is important.
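The round-trip consistency filter can be sketched as follows; `encode_q`/`encode_p` are hypothetical stand-ins for the $DE_0$ towers, and exact 1-nearest-neighbor search replaces the approximate search used at scale:

```python
import numpy as np

def roundtrip_filter(pairs, encode_q, encode_p, passages):
    """Keep (question, src_idx) only if passage src_idx is the question's 1-NN."""
    P = np.stack([encode_p(p) for p in passages])      # (num_passages, dim)
    P = P / np.linalg.norm(P, axis=1, keepdims=True)
    kept = []
    for question, src_idx in pairs:                    # src_idx: generating passage
        q = encode_q(question)
        q = q / np.linalg.norm(q)
        if int(np.argmax(P @ q)) == src_idx:           # cosine-similarity 1-NN
            kept.append((question, src_idx))
    return kept

# Toy bag-of-words "encoder" over a fixed vocabulary, for illustration only.
vocab = ["apple", "fruit", "python", "language"]
enc = lambda text: np.array([float(text.split().count(w)) for w in vocab])
passages = ["apple fruit", "python language"]
pairs = [("apple", 0), ("python language", 1), ("fruit", 1)]
kept = roundtrip_filter(pairs, enc, enc, passages)     # ("fruit", 1) is filtered
```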
To benefit from both the sparse model and the dense neural model, we create the hybrid model by combining the encodings from the two models in a principled way: \begin{equation} \begin{split} \nonumber \text{sim}(\textbf{q}^\text{hyb}, \textbf{p}^\text{hyb}) &= \langle \textbf{q}^\text{hyb}, \textbf{p}^\text{hyb} \rangle \\ \nonumber &= \langle [\textbf{q}^\text{bm25},\lambda\textbf{q}^\text{de}], [\textbf{p}^\text{bm25}, \textbf{p}^\text{de}] \rangle \\ \nonumber &= \langle \textbf{q}^\text{bm25}, \textbf{p}^\text{bm25} \rangle + \lambda \langle \textbf{q}^\text{de}, \textbf{p}^\text{de} \rangle \end{split} \end{equation} where $\textbf{q}^\text{hyb}$ and $\textbf{p}^\text{hyb}$ are the hybrid encodings that concatenate the BM25 encodings ($\textbf{q}^\text{bm25}$/$\textbf{p}^\text{bm25}$) and the dual encoder encodings ($\textbf{q}^\text{de}$/$\textbf{p}^\text{de}$) described above; and $\lambda$ is an interpolation hyperparameter that trades off the relative weight of BM25 versus the dual encoder models. \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{reranker.pdf} \caption{The architecture of T5 reranking model.} \label{fig:reranker} \end{figure} \subsection{Hybrid Reranker} \label{sec:reranking} Our reranking model is also based on a T5 model. We use the encoder and discard the decoder, allowing us to exploit the encoder-decoder pretraining while not requiring a decoder for inference. We represent the question-passage pair as the input sequence ``Query: \{question\} Document: \{passage\}'' and feed it into the encoder. The output of the encoder is the encodings of the input sequence. We then apply a projection layer on the encoding of the first token and the output is used as the ranking score. For all passages retrieved for a question, the ranking result is obtained by sorting the passages based on their ranking scores.
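The hybrid similarity above is just a single dot product over concatenated encodings, so one MIPS index suffices. A minimal sketch, with toy vectors in place of the learned encodings:

```python
import numpy as np

def hybrid_encode(sparse_vec, dense_vec, lam, is_query):
    """Concatenate BM25 and dual-encoder encodings; lambda scales the query side."""
    dense = lam * dense_vec if is_query else dense_vec
    return np.concatenate([sparse_vec, dense])

lam = 600.0  # the interpolation weight searched over in the paper
q_sparse, q_dense = np.array([1.0, 0.0, 2.0]), np.array([0.1, -0.2])
p_sparse, p_dense = np.array([0.5, 1.0, 0.0]), np.array([0.3, 0.4])

q_hyb = hybrid_encode(q_sparse, q_dense, lam, is_query=True)
p_hyb = hybrid_encode(p_sparse, p_dense, lam, is_query=False)

# One inner product recovers: BM25 score + lambda * dense score.
hybrid_score = q_hyb @ p_hyb
```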
We optimize the model using the listwise softmax cross entropy loss function\footnote{The formula can be found in the Appendix.} \cite{ranking-softmax}. The structure is shown in Figure~\ref{fig:reranker}. To generate the training data for rerankers, for each question, we generate a list of one positive example and $N$ negative examples. We randomly sample negative examples from the top retrieved results, from rank $i$ to rank $j$, using the hybrid retrieval model described in Section~\ref{sec:retrieval}. We skip the first $i$ results to avoid false negatives.
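The construction of reranker training lists (skip the top $i$ results, then draw $N$ negatives from ranks $i$ to $j$) might look like the following sketch; the rank bounds and passage IDs are illustrative:

```python
import random

def build_training_list(positive_id, retrieved_ids, i, j, n_neg, seed=0):
    """One positive plus n_neg negatives sampled from retrieval ranks [i, j)."""
    rng = random.Random(seed)
    window = [pid for pid in retrieved_ids[i:j] if pid != positive_id]
    return [positive_id] + rng.sample(window, n_neg)

retrieved = [f"p{r}" for r in range(100)]   # ranking from the first-stage retriever
example = build_training_list("p3", retrieved, i=10, j=50, n_neg=4)
```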
\section{Introduction} Information retrieval (and the retrieve-and-read approaches to open-domain question answering (QA) built on it) has generally seen pipelined retrieve-and-rank approaches achieve the best accuracies. This paradigm has become so common that many of the TREC IR evaluations have placed an emphasis on reranking the top N (often 1000) results from a first-pass retriever \cite{trecdl2021,trec-fair2021}. There has been significant work recently showing that neural retrieval models can improve recall and produce state-of-the-art retrievers \cite{dpr,rocketqa,rocketqav2}, but this is not true across the board; there are still many tasks where simple term-based techniques such as BM25 \cite{bm25} significantly outperform sophisticated neural retrieval models. This motivates recent exploration of hybrid retrievers that combine the best of both worlds. Techniques which combine term-based models and neural models at either an early stage \cite{QGen} or a later stage \cite{dpr,ecir22} outperform each individual model.
No matter how much improvement is achieved by first-stage retrievers, the reranking model continues to provide an additional performance boost. In this work, we revisit the question of how best to train a reranking model for retrieval. We study two different settings: the supervised setting, where we have training data in the target domain, and the zero-shot setting, where we have no training data in the target domain. The conventional wisdom is that you should train the reranker on data that is similar to the distribution that you will observe at inference time, and that the best means to do so is to use the same retriever for inference as you do for training the reranker. That is to say, the first-stage retriever produces candidates for the reranker for both training and inference \cite{multi-stage-bert,reranking-seq2seq}. However, recent work shows that this does not always guarantee an effective reranker \cite{rethink_bert_reranker}. While we do see this approach achieving strong results in general, we show that by training a robust reranker which has been exposed to training data from a hybrid of term-based and neural retrieval models, we are able to achieve high performance on the MS MARCO passage ranking task and a large set of out-of-domain datasets (BEIR \cite{beir}), even when reranking candidates are generated by a BM25 retriever. The primary contributions of this paper are: \begin{itemize} \item We present \textbf{HYRR}, a training paradigm for rerankers based on hybrid term-based and neural retrievers. \item We show that this approach is effective in both the supervised and the zero-shot settings. \item We show that this approach results in a robust reranker which performs well across different retrievers, domains, and tasks, though there are still limitations which appear to stem from the query generation approach used in the zero-shot setting.
\end{itemize} \section{Related Work} \subsection{Neural ranking models} Many prior works explore using neural models for text ranking; most recently the focus has been on transformer-based models \cite{transformer}. Even though it is computationally expensive, a BERT-based cross-attention model is one of the most dominant models for text ranking \cite{bert-reranker, rethink_bert_reranker} because of its capability to model the interaction between the query and passage. Concretely, queries and passages are concatenated and fed into the BERT model, and a pairwise score is then obtained by projecting the encoding of the [CLS] token. The text ranking problem is cast as a binary classification problem. Recently, encoder-decoder language models, such as T5 \cite{2020t5}, have been adapted for text ranking. Nogueira et al. \shortcite{t5-reranker} proposed a model that takes a query and passage pair as input to the encoder, and the decoder produces the tokens ``true'' or ``false'' to indicate the relevance of a query and a given passage. Pradeep et al. \shortcite{duoT5} further proposed a pairwise ranking model that takes a query and two passages as input, where the decoder produces the token ``true'' if the first passage is more relevant than the second passage, and ``false'' otherwise. \newcite{rankT5} proposed T5 encoder-only and encoder-decoder rerankers that optimize ranking performance directly by outputting real-valued scores and using ranking losses. \newcite{sgpt} proposed SGPT, which directly uses a pre-trained language model as the reranking model; and \newcite{upr} proposed UPR, which uses a zero-shot question generation model, via prompting a large language model, to directly rerank passages.
\subsection{Multi-stage retrieval pipelines} Passage retrieval systems commonly adopt the multi-stage pipelined approach: in the first stage, an initial ranking is produced by an efficient model that searches through the entire corpus, which may contain millions of passages; in the later stages, the ranking is refined by more precise models. Many existing works use the model in the previous stage to generate training examples for the current stage \cite{bert-reranker,multi-stage-bert,t5-reranker,reranking-seq2seq,inpars}. Gao et al. \shortcite{rethink_bert_reranker} point out that an effective first-stage model may sometimes introduce false positives for the next stages, and propose localized contrastive estimation learning. \section{Model} Given a query $q_i$ and a list of candidate passages $C(i) = \{c_1, c_2, \ldots, c_n\}$ from a document collection $D$, the ranking task aims to sort the passages in $C(i)$ such that more relevant passages have higher scores. More formally, we aim to learn a scoring function $s(q_i, c_j)$ such that $c^* = \operatorname{argmax}_{c_j \in C(i)} s(q_i, c_j)$ is the most relevant passage to the query. \subsection{Model structure} \label{sec:reranking} We follow \newcite{rankT5} in using a T5-based cross-attention model. Specifically, we represent the query-passage pair as the input sequence ``Query: \{Query\} Document: \{Title. Passage\}'' and feed it into the encoder. The output of the encoder is the encodings of the input sequence. We then apply a projection layer to the encoding of the first token, and the output is used as the score. We use the encoder and discard the decoder, allowing us to exploit the encoder-decoder pretraining while not requiring a decoder for inference. During inference, we pair query $q_i$ with each passage in $C(i)$ and compute scores. The ranking result is obtained by sorting the passages based on their scores. The structure is shown in Figure~\ref{fig:reranker}.
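Inference then reduces to scoring each (query, passage) pair and sorting; a minimal sketch, with a stand-in `score_fn` in place of the T5 encoder and projection head:

```python
def rerank(query, candidates, score_fn):
    """Score each query-passage input sequence and sort descending by score."""
    scored = [(score_fn(f"Query: {query} Document: {c}"), c) for c in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored]

# Toy scoring function, for illustration only.
ranked = rerank("capital of France",
                ["Berlin is the capital of Germany.",
                 "Paris is the capital of France."],
                score_fn=lambda s: 1.0 if "Paris" in s else 0.0)
```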
The loss function we use is a listwise softmax cross entropy loss \cite{LTR-softmax}, defined for a query $q_i$ with a list of $m$ candidates as follows: \begin{equation} \ell = -\sum_{j=1}^{m} \hat{y}_{ij} \log\Big(\frac{e^{s_{ij}}}{\sum_{j'} e^{s_{ij'}}}\Big) \nonumber \end{equation} where $s_{ij}$ is the predicted ranking score on query $q_i$ and passage $c_j$, and $\hat{y}_{ij}$ is the relevance label. \begin{figure}[t] \centering \includegraphics[width=0.75\linewidth]{reranker.pdf} \caption{The architecture of T5-based reranking model.} \label{fig:reranker} \end{figure} \subsection{Training data generation} Usually the number of passages in $C(i)$ is much smaller than the total number of passages in the document collection. How to construct $C(i)$ is important and affects the performance of the ranking model. The widely used strategy is to sample passages returned by a retriever, such as BM25 \cite{sgpt,inpars}. In a pipelined multi-stage retrieval system consisting of retrieval and reranking stages, $C(i)$ is usually formed by using the first-stage retriever. However, a better first-stage retriever does not always guarantee a better training set for reranking models. In this work, we present a strategy to train robust rerankers. We use a hybrid retriever to generate passage lists, inspired by Ma et al.'s \shortcite{QGen} hybrid first-stage retriever. Specifically, we use a BM25 model as the sparse retrieval model and a T5-based dual encoder model \cite{t5xr} as the dense retrieval model. For each passage (and query), we concatenate the encodings from the two models to create a hybrid encoding. We perform maximal inner-product search (MIPS) using approximate nearest neighbor search over these hybrid encodings. This results in a different set of neighbors than independently selecting neighbors from BM25 and the dual encoder.
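For a single query, the listwise softmax cross entropy described above can be checked numerically; the scores below are toy values standing in for the reranker's outputs:

```python
import numpy as np

def listwise_softmax_ce(scores, labels):
    """Cross entropy between relevance labels and the softmax over list scores."""
    log_softmax = scores - np.log(np.sum(np.exp(scores)))
    return -np.sum(labels * log_softmax)

scores = np.array([2.0, 0.5, -1.0])   # s_ij for one candidate list
labels = np.array([1.0, 0.0, 0.0])    # single positive at rank 0
loss = listwise_softmax_ce(scores, labels)
```

With a single positive, the loss reduces to the negative log-probability that the softmax assigns to that positive.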
For the BM25 model, we represent each query as a $|V|$-dimensional sparse encoding $\textbf{q}^{\text{bm25}}$, where $\textbf{q}^{\text{bm25}}[i]$ is $k$ if the $i$-th entry of vocabulary $V$ is seen $k$ times in the query, and 0 otherwise (i.e., this is a vector of query term counts). We represent each passage as a sparse real-valued vector $\textbf{c}^{\text{bm25}}$: \begin{small} \begin{equation} \textbf{c}^\text{bm25}[i] = \frac{\text{IDF}(t_i)* \text{cnt}(t_i, c)*(k + 1)}{\text{cnt}(t_i, c) + k*(1 - b + b * \frac{m}{m_\text{avg}})} \nonumber \end{equation} \end{small} where $t_i$ are tokens from passage $c$, cnt($t_i$, $c$) is $t_i$'s term frequency in $c$, $k$/$b$ are BM25 hyperparameters, IDF is the term's inverse document frequency from the document collection, $m$ is the number of tokens in $c$, and $m_\text{avg}$ is the collection's average passage length. We can recover the original BM25 score via a dot product between the query vector and the passage vector. Our dual encoder is based on T5 \cite{2020t5} as well. We adapt the encoder-only structure from Ni et al.'s \shortcite{t5xr} GTR model. To encode a query, we feed the query text to the T5 encoder and use the mean pooling of the encoder output as the query encoding $\textbf{q}^{\text{de}}$. A passage encoding $\textbf{c}^{\text{de}}$ is generated in a similar way, but we concatenate the passage and the corresponding document title as the input to the T5 encoder. The query tower and the passage tower share parameters. The query-to-passage relevance is computed by the cosine similarity of their encodings. We optimize the model using the in-batch sampled softmax loss \cite{in-batch-neg}. The structure is shown in Figure~\ref{fig:de}.
\begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{dual_encoder.pdf} \caption{The architecture of T5 dual encoder model.} \label{fig:de} \end{figure} To benefit from both the sparse model and the dense neural model, we create the hybrid model by combining the encodings from the two models in a principled way: \begin{equation} \begin{split} \nonumber \text{sim}(\textbf{q}^\text{hyb}, \textbf{c}^\text{hyb}) &= \langle \textbf{q}^\text{hyb}, \textbf{c}^\text{hyb} \rangle \\ \nonumber &= \langle [\textbf{q}^\text{bm25},\lambda\textbf{q}^\text{de}], [\textbf{c}^\text{bm25}, \textbf{c}^\text{de}] \rangle \\ \nonumber &= \langle \textbf{q}^\text{bm25}, \textbf{c}^\text{bm25} \rangle + \lambda \langle \textbf{q}^\text{de}, \textbf{c}^\text{de} \rangle \end{split} \end{equation} where $\textbf{q}^\text{hyb}$ and $\textbf{c}^\text{hyb}$ are the hybrid encodings that concatenate the BM25 encodings ($\textbf{q}^\text{bm25}$/$\textbf{c}^\text{bm25}$) and the dual encoder encodings ($\textbf{q}^\text{de}$/$\textbf{c}^\text{de}$) described above; and $\lambda$ is an interpolation hyperparameter that trades off the relative weight of BM25 versus the dual encoder model. To generate the training data for the reranking model, we apply the hybrid retriever to the queries in the training set and retrieve the top-$K$ passages. We then sample $N$ negatives from the retrieved results. As a result, $C(i)$ is a passage list of size $N+1$ with one positive and $N$ negatives. \section{Experimental Setup} We evaluate our proposed approach on two tasks: the first is the MS MARCO passage ranking task, to understand how our approach performs in the supervised setting, when labeled training data is available; the second is zero-shot retrieval on BEIR, where no labeled data is available in the target domains. \subsection{MS MARCO Passage Ranking} \label{sec:msmarco_eval} The MS MARCO passage ranking task aims to retrieve passages from a collection of web documents containing about 8.8 million passages.
All questions in this dataset are sampled from real and anonymized Bing queries \cite{msmarco}. The dataset contains 532,761 and 6980 examples in the training and development set, respectively. We report our results using the MRR@10 metric on the development set. We use GTR-Large from \newcite{t5xr} as the dual encoder in the hybrid retriever. \subsection{Zero-shot Retrieval} We also perform evaluation on the BEIR corpus \cite{beir}, a benchmark for zero-shot evaluation, to understand how our approach generalizes to the out-of-domain setting. BEIR contains 18 evaluation datasets across 9 domains, and no training data is available for those datasets. To train the dual encoder used in the hybrid retriever, we follow Ma et al. \shortcite{QGen} to generate synthetic training data from a query generator, and extend this with iterative training. Specifically, following \newcite{Promptagator}, we pretrain the dual encoder on the C4 dataset \cite{2020t5} with the independent cropping task \cite{contriever}. We then fine-tune the dual encoder using synthetically generated queries. We apply a T5-based query generation model (QGen) to the passages in the target corpus to generate (synthetic query, passage) pairs. For each dataset, we iterate over every passage and treat every sentence as the target to generate synthetic queries. For large datasets, such as BioASQ and Climate-Fever, we randomly sample 2 million passages for query generation. The statistics of BEIR and the number of synthetic queries can be found in Table~\ref{tab:stats}. We then use these data to train a dual encoder model $DE_{0}$. To filter low-quality questions, we apply round-trip consistency for retrieval \cite{roundtrip, paq, Promptagator}. Specifically, given a synthetic query in the training data, we run 1-nearest neighbor search based on scores between the query and all passages using $DE_{0}$. If the neighbor is the one from which the query is generated, we keep that (query, passage) pair.
Otherwise, the (query, passage) pair is filtered. With the filtered data, we continue fine-tuning $DE_{0}$ to get the final dual encoder model $DE_{1}$. The QGen model is created by fine-tuning a general T5 model using question and passage pairs from Natural Questions (NQ) \cite{nq}. This is the only place where we use supervised data in the pipeline. In particular, we form the input of the encoder as ``Generate question \verb_>>>_ {title}.{passage} \verb_>>>_ {target sentence}'', and the output of the decoder is the corresponding question. Here ``target sentence'' is the sentence that contains the short answer span, and ``passage'' corresponds to the long-answer passage of NQ. At inference time, given a passage, we iterate over every sentence as the target to generate diverse queries. Similar to PAQ \shortcite{paq}, we perform targeted generation, where knowing the location of the answer in a passage is important. \begin{table}[] \footnotesize \resizebox{\linewidth}{!}{% \begin{tabular}{r|c|cccc} Datasets & Domain & \begin{tabular}[c]{@{}c@{}}\# Test \\ Query\end{tabular} & \begin{tabular}[c]{@{}c@{}}\# Corpus \\ Doc.\end{tabular} & \begin{tabular}[c]{@{}c@{}}\# Synth. \\ Ques. \end{tabular} & \begin{tabular}[c]{@{}c@{}}\# Synth. \\ Ques. \\ After \\ Filter \end{tabular} \\\hline NQ & Wiki & 3452 & 2.68M & 2.62M & 976K \\ MS MARCO & Misc. & 6980 & 8.84M & 3.74M & 1.08M \\ Trec-Covid & BioMed & 50 & 171K & 753K & 352K \\ BioASQ & BioMed & 500 & 14.91M & 10.75M & 3.44M \\ NFCorpus & BioMed & 323 & 3.6K & 22K & 13K \\ HotpotQA & Wiki & 7405 & 5.23M & 4.78M & 3.19M \\ FiQA-2018 & Finance & 648 & 57K & 255K & 157K \\ Signal-1M & Twitter & 97 & 2.86M & 2.58M & 1.03M \\ Trec-News & News & 57 & 595K & 1.64M & 618K \\ Robust04 & News & 249 & 528K & 2.65M & 1.05M \\ ArguAna & Misc. & 1406 & 8.67K & 41K & 21K \\ Touché-2020 & Misc.
& 49 & 382K & 1.34M & 348K \\ Quora & Quora & 10000 & 523K & 552K & 400K \\ DBPedia-entity & Wiki & 400 & 4.63M & 4.76M & 3.25M \\ SCIDOCS & Science & 1000 & 25K & 136K & 114K \\ Fever & Wiki & 6666 & 5.42M & 6.41M & 4.22M \\ Climate-Fever & Wiki & 1535 & 5.42M & 6.41M & 4.23M \\ SciFact & Science & 300 & 5K & 27.3K & 20K \\ CQADupStack & StackEx. & 13145 & 457K & 1.55M & 1.18M \end{tabular} } \caption{The statistics of the BEIR benchmark and the number of synthetic questions before and after filtering.} \label{tab:stats} \end{table} Results are obtained using the official TREC evaluation tool\footnote{https://github.com/usnistgov/trec\_eval}. We report normalised discounted cumulative gain (nDCG@10) and Recall@100 for all datasets. \subsection{Implementation Details} The query generation model is initialized from the T5 Version 1.1 XL model and fine-tuned using question and passage pairs from NQ \cite{nq} for 20,000 steps with a learning rate of 0.01, batch size 256 and dropout rate 0.1. The BM25 model in our hybrid retriever is a unigram model. We use the WordPiece tokenizer and vocabulary from uncased BERT\textsubscript{base}, with a vocabulary size of 30522. We use K=0.9 and b=0.8 on all datasets. We only index the passages. The dual encoder model is initialized from the public pre-trained T5 Version 1.1 Large model and further pre-trained on C4 for 200k steps with batch size 4096. We use input lengths of 64 for queries and 512 for passages for all datasets except Climate-Fever and ArguAna, for which we use query lengths of 100 and 512, respectively. We use a batch size of 5120 and train $DE_{0}$ for 100 epochs. $DE_{1}$ is initialized from $DE_{0}$ and trained for 10 epochs. For the hybrid retrieval model, we use $\lambda = 600$, selected by searching from 50 to 750 with step size 50. The reranking models are initialized from T5 Version 1.1 models: Base and Large. Since we only use the encoders, the numbers of parameters are about 125M for the Base model and 400M for the Large model.
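As a concrete illustration of the round-trip consistency filter described in the previous subsection, the following sketch keeps a synthetic (query, passage) pair only if the source passage is the query's 1-nearest neighbor under the dual encoder scores. The brute-force scoring and the stub embeddings stand in for $DE_{0}$ and are purely illustrative.

```python
import numpy as np

def round_trip_filter(pairs, passage_vecs, query_vec_fn):
    """Keep a (query, passage_idx) pair only if the source passage is the
    1-nearest neighbor of the query under dot-product scores."""
    kept = []
    for query, src_idx in pairs:
        scores = passage_vecs @ query_vec_fn(query)
        if int(np.argmax(scores)) == src_idx:
            kept.append((query, src_idx))
    return kept

# Toy corpus: passage embeddings as rows; the query embedder is a stub dict.
passages = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
embed = {"q_a": np.array([0.9, 0.1]),   # nearest to passage 0
         "q_b": np.array([0.5, 0.6])}   # nearest to passage 2, not 1
pairs = [("q_a", 0), ("q_b", 1)]

kept = round_trip_filter(pairs, passages, lambda q: embed[q])
# Only the round-trip-consistent pair survives.
assert kept == [("q_a", 0)]
```

At the scale of BEIR corpora this nearest-neighbor check is done with approximate search rather than a dense matrix product, but the keep/filter criterion is the same.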
We sample 50 negative examples from the top 250 retrieved passages for MS MARCO, and from the retrieved passages ranked 10 to 210 for BEIR. We use an input sequence length of 512 for all datasets except ArguAna, for which we use 1024. We train the models for 20,000 steps with batch size 64. All of the above hyperparameters were tuned on the MS MARCO development set and then applied to all other datasets, allowing for a true zero-shot setting (not to mention that not every dataset has a development set). From preliminary experiments, we found that tuning parameters for each dataset may achieve improved results due to the nature of each dataset. For example, when tuning the weight $\lambda$ between the BM25 and dual encoder models in the hybrid retriever, the value should be smaller if we observe more term overlap between the questions and the passage collection, as in SciFact and BioASQ, to emphasize the BM25 model. However, for the results in this paper, we use the parameters selected based on the MS MARCO dataset. We implement the neural models using T5X\footnote{https://github.com/google-research/t5x}, and we also use RAX \cite{rax}, a learning-to-rank framework, for implementing the ranking losses in the reranking models. For training, it takes about 6 hours to train a dual encoder model and 6.5 hours to train a reranking model of T5-Large size using Cloud TPU-V3. \begin{table}[] \begin{tabular}{c|cc} \toprule & Model size & MRR@10 \\ \hline {BM25 Anserini} & & 0.1874 \\ \hline HLATR & RoBERTa\textsubscript{Large} & 0.3680 \\ MiniLM & Distilled BERT & 0.3901 \\ monoT5 & T5\textsubscript{3B} & 0.3980 \\ RankT5-EncDec & T5\textsubscript{Large} & 0.3986 \\ RankT5 (Ours) & T5\textsubscript{Large} 1.1 & 0.4222 \\ HYRR & T5\textsubscript{Large} 1.1 & \textbf{0.4235} \\ \bottomrule \end{tabular} \caption{Reranking performance on the MS MARCO Dev set in MRR@10.} \label{tab:msmarco_result} \end{table} \begin{table*}[t!]
\centering \small \resizebox{\linewidth}{!}{% \begin{tabular}{r|c|ccc|cc|cccc} \toprule & \textbf{Retriever} & \multicolumn{9}{c}{\textbf{Reranker}} \\ \hline & & \multicolumn{3}{c}{\textbf{Supervised Baselines}} & \multicolumn{2}{|c}{\textbf{Unsupervised Baselines}} & \multicolumn{4}{|c}{\textbf{Ours}} \\ & BM25 & MiniLM & NQRR & MSHYRR & SGPT & UPR & HYRR & HYRR & HYRR\textsubscript{ft} & HYRR\textsubscript{ft} \\ \textbf{Rerank Top K} & (Anserini) & 100 & 100 & 100 & 100 & 1000 & 100 & 100 & 100 & 1000 \\ \textbf{Model Size} & & 22M & 400M & 400M & 6.1B & 3B & 125M & 400M & 400M & 400M \\ \hline NQ & 0.329 & 0.533 & 0.624\textsuperscript{\ddag} & 0.569 & 0.401 & 0.454 & 0.532 & 0.555 & 0.573 & \textbf{0.626} \\ \hline MS MARCO & 0.228 & 0.413\textsuperscript{\ddag} & 0.330 & \textbf{0.435\textsuperscript{\ddag}} & 0.290 & 0.302 & 0.307 & 0.309 & 0.319 & 0.344 \\ Trec-Covid & 0.656 & 0.757 & 0.787 & 0.798 & 0.791 & 0.688 & 0.796 & 0.820 & \textbf{0.822} & 0.817 \\ BioASQ & 0.465 & 0.523 & 0.507 & 0.554 & 0.547 & - & 0.551 & 0.549 & 0.554 & \textbf{0.565} \\ NFCorpus & 0.325 & 0.350 & 0.345 & 0.371 & 0.347 & 0.348 & 0.379 & 0.382 & 0.391 & \textbf{0.396} \\ HotpotQA & 0.603 & 0.707 & 0.665 & 0.717 & 0.699 & \textbf{0.733} & 0.706 & 0.707 & 0.708 & 0.730 \\ FiQA-2018 & 0.236 & 0.347 & 0.397 & 0.411 & 0.401 & 0.444 & 0.408 & 0.437 & 0.437 & \textbf{0.470} \\ Signal-1M & 0.330 & \textbf{0.338} & 0.271 & 0.264 & 0.323 & - & 0.307 & 0.318 & 0.304 & 0.280 \\ Trec-News & 0.398 & 0.431 & 0.419 & 0.452 & \textbf{0.466} & - & 0.437 & 0.453 & 0.441 & 0.418 \\ Robust04 & 0.407 & 0.475 & 0.400 & 0.505 & 0.480 & - & 0.501 & 0.544 & 0.534 & \textbf{0.552} \\ ArguAna & \textbf{0.414} & 0.311 & 0.107 & 0.351 & 0.286 & 0.372 & 0.344 & 0.342 & 0.382 & 0.326 \\ Touché-2020 & 0.367 & 0.271 & 0.363 & 0.467 & 0.234 & 0.206 & 0.368 & \textbf{0.384} & 0.366 & 0.287 \\ Quora & 0.789 & 0.825 & 0.807 & 0.637 & 0.794 & 0.831 & 0.861 & 0.867 & \textbf{0.869} & 0.868 \\ DBPedia-entity & 0.313 & 
0.409 & 0.391 & 0.402 & 0.370 & \textbf{0.534} & 0.385 & 0.403 & 0.408 & 0.443 \\ SCIDOCS & 0.158 & 0.166 & 0.182 & 0.184 & 0.196 & 0.170 & 0.183 & 0.187 & 0.198 & \textbf{0.201} \\ Fever & 0.753 & 0.819 & \textbf{0.871} & 0.825 & 0.725 & 0.591 & 0.868 & 0.861 & 0.851 & 0.856 \\ Climate-Fever & 0.213 & 0.253 & \textbf{0.321} & 0.262 & 0.161 & 0.117 & 0.272 & 0.294 & 0.275 & 0.271 \\ SciFact & 0.665 & 0.688 & 0.696 & 0.745 & 0.682 & 0.703 & 0.734 & 0.754 & 0.755 & \textbf{0.767} \\ CQADupStack & 0.299 & 0.370 & 0.397 & 0.368 & 0.420 & 0.416 & 0.398 & 0.416 & 0.421 & \textbf{0.444} \\ \hline Average & 0.418 & 0.473 & 0.467 & 0.490 & 0.453 & 0.461 & 0.491 & 0.504 & 0.506 & 0.508 \\ Average w/o NQ & 0.423 & 0.470 & 0.459 & 0.486 & 0.456 & 0.461 & 0.489 & 0.501 & 0.502 & 0.502 \\ \hline \multicolumn{2}{c|}{Avg. improvement on BM25} & 4.63\% & 3.54\% & 6.26\% & 3.29\% & 3.11\% & 6.58\% & 7.81\% & 7.86\% & 7.87\% \\ \bottomrule \end{tabular} } \caption{Reranking performance on BEIR in nDCG@10. \ddag~indicates in-domain performance. The results of baseline models are copied verbatim from the original papers. ``-'' indicates results that are not reported. Baseline models rerank the top-100 passages from BM25, except UPR, which reranks the top-1000 passages.} \label{tab:reranking} \end{table*} \section{Results and Discussion} \subsection{Results on MS MARCO} \label{sec:msmarcoexperiments} To understand the effectiveness of our proposed approach, we fix the first-stage retrieval system and compare reranking performance. Table~\ref{tab:msmarco_result} shows the performance of our proposed reranker when reranking the BM25 top-1000 results. The BM25 results in row 1 are obtained from the Anserini \cite{anserini} toolkit\footnote{https://github.com/castorini/anserini} with parameters k=0.82 and b=0.68, following other baselines. The results of several strong baselines are shown in rows 2-6.
\textbf{HLATR} \cite{hlatr} extends the retrieval-and-rerank pipeline with an additional ranking module that uses features from the retrieval and reranking stages; it achieves top performance on the MS MARCO leaderboard. \textbf{MiniLM} \cite{minilm} is a cross-encoder reranking model distilled from an ensemble of three teacher models: BERT-base, BERT-large and ALBERT-large. The other baselines are T5-based models: \textbf{monoT5} \cite{2020t5,nopara} and \textbf{RankT5-EncDec} \cite{rankT5} adopt the encoder-decoder architecture. Our model adopts RankT5's encoder-only variant, as described in Section \ref{sec:reranking}. To compare with the RankT5 model fairly, we implement our own version, \textbf{RankT5 (ours)}, which shares the same architecture and parameter settings as \textbf{HYRR}; the two differ only in how the training data is generated, with our reproduced RankT5 generating training data from the dual encoder retriever. From row 7, we can see that HYRR improves over the BM25 performance by 23.6\% in MRR@10 and outperforms the other baselines. This demonstrates that our proposed training framework is effective in the supervised setting. \subsection{Results on BEIR} As in the evaluation on MS MARCO, we fix the first-stage retrieval system and compare reranking performance. Table~\ref{tab:reranking} shows the reranking performance of our proposed reranker. The BM25 results in Col. 1 are obtained from the Anserini toolkit with parameters k=0.9 and b=0.4, following other baselines. As baselines, we consider several reranking models that can be categorized into two zero-shot settings. The first is to train a supervised reranker and perform inference on the target domains directly; the results are shown in Col. 2-4. We choose \textbf{MiniLM} \cite{minilm}, which reranks the top 100 retrieved passages from BM25. The result on MS MARCO is considered in-domain for this model.
We train a T5-based reranker, \textbf{NQRR}, using supervised NQ data; the model structure is the same as the one described in Section \ref{sec:reranking}. The result on the NQ dataset is considered in-domain for NQRR. We also use our reranker \textbf{MSHYRR}, trained using our proposed approach on MS MARCO in Section \ref{sec:msmarco_eval}, as a baseline. The second setting is unsupervised, and we choose two models to compare against. \textbf{SGPT} \cite{sgpt} uses a pre-trained GPT model in a cross-encoder setting for reranking. Specifically, the query text and passage text are concatenated and fed into the pre-trained GPT model, and the log probability is used as the score for reranking. Col. 5 shows the results of reranking the top 100 retrieved passages with a model of 6.1 billion parameters. \textbf{UPR} \cite{upr} also uses a pre-trained language model for reranking. UPR uses a T5 model, taking the passage and a simple prompt as input, and uses the average likelihood of the query tokens conditioned on the passage as the score for reranking. Col. 6 shows the results of reranking the top 1000 retrieved passages using the T0 model \cite{T0} with 3 billion parameters.
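UPR's scoring rule amounts to averaging per-token log-likelihoods of the query conditioned on each passage and sorting passages by that average. The sketch below illustrates the rule with a stub in place of the language model; `stub_lm` is a hypothetical stand-in for T0's per-token log-probabilities, not the actual model.

```python
import math

def upr_score(query_tokens, passage, token_logprobs):
    """Average log-likelihood of the query tokens conditioned on the passage."""
    lps = [token_logprobs(passage, t) for t in query_tokens]
    return sum(lps) / len(lps)

def rerank(query_tokens, passages, token_logprobs):
    """Sort passages by descending query likelihood."""
    return sorted(passages,
                  key=lambda p: upr_score(query_tokens, p, token_logprobs),
                  reverse=True)

# Stub "LM": a query token is more likely given a passage that contains it.
def stub_lm(passage, token):
    return math.log(0.5) if token in passage else math.log(0.05)

query = ["mlb", "team"]
passages = [["nfl", "season"], ["mlb", "team", "roster"]]
assert rerank(query, passages, stub_lm)[0] == ["mlb", "team", "roster"]
```

With a real model, `token_logprobs` would come from a single forward pass over "passage + prompt + query", which is why reranking the top 1000 passages with a 3B-parameter model is expensive.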
\begin{table*}[] \centering \small \begin{tabular}{r|cccc|cccc} \toprule Retriever $\downarrow$ & No Reranker & BM25RR & DERR & HYRR & No Reranker & BM25RR & DERR & HYRR \\\hline & \multicolumn{8}{c}{\textbf{MS MARCO}} \\ \hline & \multicolumn{4}{c|}{\textbf{MRR@10}} & \multicolumn{4}{c}{\textbf{nDCG@10}} \\ BM25 & 0.187 & 0.375 & 0.422 & \textbf{0.424} & 0.234 & 0.438 & 0.485 & \textbf{0.486} \\ DE & 0.378 & 0.350 & \textbf{0.440} & \textbf{0.440} & 0.445 & 0.411 & \textbf{0.507} & \textbf{0.507} \\ Hybrid & 0.390 & 0.351 & 0.438 & \textbf{0.440} & 0.457 & 0.413 & 0.506 & \textbf{0.508} \\ \hline & \multicolumn{8}{c}{\textbf{SciFact}} \\ \hline & \multicolumn{4}{c|}{\textbf{nDCG@10}} & \multicolumn{4}{c}{\textbf{Recall@100}} \\ BM25 & 0.677 & 0.750 & 0.742 & \textbf{0.752} & 0.918 & 0.916 & 0.923 & \textbf{0.946} \\ DE & 0.597 & \textbf{0.755} & 0.745 & 0.752 & 0.903 & 0.919 & 0.923 & \textbf{0.946} \\ Hybrid & 0.706 & 0.753 & 0.744 & \textbf{0.759} & 0.941 & 0.931 & 0.931 & \textbf{0.958} \\ \bottomrule \end{tabular} \caption{Ablation results on MS MARCO and SciFact.} \label{tab:ablation} \end{table*} For our models, we show results for two training settings: \textbf{HYRR} is a fine-tuned vanilla T5 model, while \textbf{HYRR\textsubscript{ft}} is a fine-tuned \textbf{NQRR} model. The results are shown in Col. 7-10. As we can see, our proposed model outperforms the baseline models on 10 of the 18 out-of-domain datasets. The best setting is reranking the top 1000 retrieved passages using \textbf{HYRR\textsubscript{ft}}: the improvement in nDCG@10 over BM25 ranges from 2\% to 23.4\%, and is 7.87\% on average. Comparing Col. 8 and Col. 9, we can see that \textbf{HYRR} already provides strong performance on many datasets, and fine-tuning from an out-of-domain supervised reranker achieves an additional performance boost.
We note that the results on NQ cannot be considered completely out-of-domain even for our \textbf{HYRR} as the question generation model used to generate training data for the hybrid retriever is trained on the NQ dataset. Similarly, \textbf{UPR}'s results on HotpotQA cannot be considered completely out-of-domain since T0 uses HotpotQA for training. The evaluation on BEIR demonstrates the effectiveness of our proposed method in zero-shot settings. \begin{table*}[] \centering \small \resizebox{\linewidth}{!}{ \begin{tabular}{l|p{16cm}} \toprule Query & what mlb team play in \\ \hline HYRR & \textbf{P1} major league baseball (\textbf{mlb}) is a professional baseball organization, ... a total of 30 teams now \textbf{play in} the american league (al) and national league (nl)... \\ & \textbf{P2} in the united states and canada, professional major league baseball (\textbf{mlb}) teams are divided into the national league (nl) and american league (al), each with three divisions..." \\ \hline BM25RR & \textbf{P3} major league baseball. the 16 teams were located in ten cities, all in the northeastern and midwestern united states: new york city had three teams and boston, chicago, .... \\ & \textbf{P4} list of current major league baseball stadiums. the newest \textbf{mlb} stadium is suntrust park in cumberland, georgia, home of the atlanta braves, which opened for the 2017 season. \\ \hline DERR & \textbf{P5} major league baseball. the arizona diamondbacks, ..., are a major league baseball (\textbf{mlb}) franchise that play in the west division.... \\ & \textbf{P6} boston red sox. the yankees compete in major league baseball (\textbf{mlb}) as a member club of the american league (al) east division.... \\ \bottomrule \end{tabular} } \caption{Reranker predictions on MS MARCO} \label{tab:example} \end{table*} \subsection{Ablation} To show the robustness of HYRR, we conduct an ablation experiment. 
We train rerankers using training data generated from either the BM25 model or the dual encoder model, namely \textbf{BM25RR} and \textbf{DERR}. These two variants are commonly seen in pipelined retrieval systems, where rerankers are simply trained on top of the first-stage retriever. We train them using the same training setting as for HYRR and then apply them to three retrievers: the BM25 model, the dual encoder model (DE) and the hybrid retriever, respectively. We experiment in both the supervised and the zero-shot settings. The results on MS MARCO are shown in the top section of Table~\ref{tab:ablation}. As we can see, HYRR provides the largest performance gain over all three retrievers on both MRR@10 and nDCG@10. BM25RR improves the performance on BM25 while hurting the other two. DERR achieves the best performance when applied to DE; it also improves the other two retrievers, but not as much as HYRR. This shows that HYRR not only outperforms the other two rerankers but is also effective across different retrievers. \begin{table*}[] \small \centering \resizebox{\linewidth}{!}{% \begin{tabular}{l|p{1.6cm}|p{6cm}|p{6cm}} \toprule \textbf{Corpus} & \textbf{Query Type} & \textbf{Test Queries} & \textbf{Synthetic Questions} \\ \hline FiQA-2018 & Financial Question & How does unemployment insurance work? & what is the advantage of stocks over bonds \\ \hline BioASQ & Biomedical Question & List types of DNA lesions caused by UV light. & how many subjects in the silent lacunar lesion study \\ & & Does deletion of cohesin change gene expression? & what happens to amino acids in the hypothermic kidney \\ \hline Fever & Claim & The Hunger Games are not based on a novel. & when did bag raiders album come out in australia \\ & & Pharmacology is a science.
& when did angela lien win her first olympic medal \\ \hline SCIDOCS & Paper Title & Multi-task Domain Adaptation for Sequence Tagging & what is the use of blockchain in software architecture \\ & & HF outphasing transmitter using class-E power amplifiers & what is the relationship between test anxiety and mathematics performance \\ \hline \hline \rule{-2pt}{10pt} Signal-1M & \begin{tabular}[c]{@{}c@{}} News \\ Headline \end{tabular} & Inside Trevor Noah’s final test run for the new ‘Daily Show’ & how many food items at the state fair \\ & & Fancy Cooking & where is book week held in the uk \\ \hline Touché-2020 & Controversial Questions & Is golf a sport? & who is the democrat in the 2020 presidential debate \\ & & Is drinking milk healthy for humans? & what kind of music can you debate on wikipedia \\ \hline Arguana & Argument & Small businesses need advertisements to make their products known. If there wasn't advertising then small businesses would have no chance at all to make their product well known. Adverts can actually level the playing field - if you have a good new product, and market it in a clever way then it doesn't matter how small your company is, you can still make consumers interested.... & who benefits from the mass media and advertising \\ \bottomrule \end{tabular} } \caption{Examples of test questions in some corpora and generated synthetic questions.} \label{tab:qgen} \end{table*} When examining the ranking outputs on MS MARCO, we find that the top-ranked predictions from HYRR are more semantically and lexically relevant to the query, while BM25RR and DERR sometimes return only partially related predictions. For example, for the query ``what mlb team play in'', which asks for general information about major league baseball (mlb), shown in Table~\ref{tab:example}, HYRR returns correct predictions, while DERR returns predictions about individual teams in mlb, and BM25RR returns less relevant information about mlb.
We pick SciFact from BEIR as an example for the zero-shot setting. The results are shown in the bottom part of Table~\ref{tab:ablation}, and we observe similar trends on the other datasets in BEIR. Similarly, HYRR improves both nDCG@10 and Recall@100 over all three retrievers. Specifically, it outperforms BM25RR and DERR not only when applied to the matched hybrid retriever but also when applied to the other two retrievers. We believe this is evidence that the robustness of the training data for the reranker carries over to the robustness of the reranker itself. In addition, to understand the benefit of using the hybrid retriever for training data generation, we conduct another ablation experiment. We mix the training data generated from the BM25 and the dual encoder models in a 1:1 ratio and train a reranker. We evaluate on MS MARCO, and the model achieves 0.417 in MRR@10 and 0.480 in nDCG@10 when reranking the BM25 top 1000. Comparing with the results in row 1 of Table~\ref{tab:ablation}, we can see that our approach significantly outperforms simply ensembling the training data. \subsection{Discussion} \textbf{Robustness against Initial Checkpoints.} While our best reranking model for reranking BM25 outputs in the zero-shot setting is fine-tuned from a supervised reranking model (\textbf{NQRR}), it is worth noting that the fully zero-shot reranking model also achieves strong performance and outperforms \textbf{NQRR} on many datasets, as shown in Table~\ref{tab:reranking}. This shows that our proposed training framework is useful in many practical situations where supervised data is not available. \textbf{Robustness against Model Sizes.} Regarding model size, recent work shows that increasing model size results in large gains in zero-shot performance for reranking models: a 10\% improvement in average nDCG@10 over all datasets in BEIR can be seen when the model size increases from T5-small to T5-3B \cite{nopara}.
From Tables~\ref{tab:msmarco_result} and \ref{tab:reranking}, we can see that our proposed model is smaller yet more effective than some larger models. \textbf{Robustness against Retrievers.} We also show that, despite the common wisdom that the best reranking model is one trained on the same retriever as will be used at inference time, a model trained with hybrid retrieval output is strong for both BM25 and dual encoder retrievers. Reranking BM25 resulted in superior performance despite the mismatch between the retriever used for generating training data and that used for retrieval. The ablation experiment also confirms this observation. \textbf{Robustness against Query Generation.} In addition, we analyze the synthetically generated questions and show that our proposed model generalizes to different domains and different query types. In Table~\ref{tab:qgen}, we show examples of queries from the test sets alongside synthetically generated questions. As we can see, when the synthetic questions are similar to the test queries, our model performs very well even though the target domain is very different from the domain where the query generator is trained. For example, questions in FiQA-2018 are on investment topics and questions in BioASQ are scientific questions in the biomedical domain; both the retriever and the reranker perform very well on these two datasets. Interestingly, we find that our model also performs well in some cases where the test queries and the synthetic questions are of different types. For example, Fever asks to retrieve passages to verify given claims, so the test queries are claims rather than questions. A similar pattern exists for SCIDOCS, where the queries are paper titles. Our retrieval and ranking models learn the relevance between queries and passages despite the test queries not being questions. However, when the synthetic questions are very different from the test queries, our model fails. For example, Signal-1M asks to retrieve relevant tweets given news headlines.
Another example is Touché-2020, where the task is to retrieve relevant arguments given a question on a controversial topic. Similarly, in ArguAna, the task is to retrieve a counterargument to an argument, and the queries are long text passages. The nuance of the task itself, and whether it requires term-based or semantic abstractions to match queries to passages, appears to determine where our QGen approach is ineffective. \section{Conclusion} We proposed a generic training framework for rerankers based on a hybrid retriever. While the hybrid retriever is composed of term-based and neural models, the reranker is a neural cross-attention model which learns from negative examples generated by the hybrid retriever. The proposed approach is robust and outperforms several strong baselines on the MS MARCO passage ranking task and the BEIR benchmark, which demonstrates that it is practical and generalizable. We observe that a model trained with robust training instances (in this case, from the hybrid retriever) produces a reranker that outperforms matched-training rerankers for term-based or neural retrievers.
\section{Introduction} The Boltzmann equation, introduced by L. Boltzmann \cite{boltzmann} and J.C. Maxwell \cite{maxwell}, describes the time evolution of the probability density of a rarefied, monoatomic gas in thermal non-equilibrium in $\mathbb{R}^d$, for $d\geq 2$. The Boltzmann equation accurately describes very dilute gases, since only \textbf{binary} interactions between particles are taken into account. However, when the gas is dense enough, higher-order interactions are much more likely to happen, and therefore they significantly affect the time evolution of the gas. A relevant example is a colloid, which is a homogeneous non-crystalline substance consisting of either large molecules or ultramicroscopic particles of one substance dispersed through a second substance. In \cite{ref 26}, the authors pointed out the importance of including higher-order interactions among particles in a colloidal gas. In particular, they show that, in addition to binary interactions, interactions among three particles significantly contribute to the grand potential of the colloid. A surprising result of \cite{ref 26}, but one of invaluable computational importance in numerical simulations, is that interactions among three particles are actually characterized by the sum of the distances between the particles, as opposed to depending on the different geometric configurations of the interacting particles. The results of \cite{ref 26} have been further verified experimentally, e.g. \cite{imp3}, and numerically, e.g. \cite{imp4}. Motivated by the observations of \cite{ref 26}, in \cite{ternary} we suggested a model which goes beyond binary interactions by incorporating sums of higher-order interaction terms.
In particular, we introduced the generalized equation \begin{equation}\label{intro:generalized Boltzmann} \begin{cases} \partial_t f+v\cdot\nabla_xf=\displaystyle\sum_{k=2}^m Q_k(\underbrace{f,f,\cdots,f\,}_\text{$k$-times}),\quad (t,x,v)\in(0,\infty)\times\mathbb{R}^{d}\times\mathbb{R}^d,\\ f(0,x,v)=f_0(x,v),\quad (x,v)\in\mathbb{R}^{d}\times\mathbb{R}^d, \end{cases} \end{equation} where, for $k=2,...,m$, the expression $Q_k(\underbrace{f,f,\cdots,f\,}_\text{$k$-times})$ is the $k$-th order collisional operator, and $m\in\mathbb{N}$ is the accuracy of the approximation, depending on the density of the gas. We note that equations similar to \eqref{intro:generalized Boltzmann} were studied for Maxwell molecules in the works of Bobylev, Gamba and Cercignani \cite{multiple gamba,multiple gamba 2} using Fourier transform methods. Notice that for $m=2$, equation \eqref{intro:generalized Boltzmann} reduces to the classical Boltzmann equation. The task of rigorously deriving an equation of the form \eqref{intro:generalized Boltzmann} from a classical many-particle system, even for the case $m=2$ (i.e. the Boltzmann equation), is a challenging problem that has been settled, for short times, only in certain situations; see e.g. \cite{lanford,king,cercignani paper,gallagher,pulvirenti-simonella,spohn,uchiya} for results in this direction. A relevant step towards rigorously deriving \eqref{intro:generalized Boltzmann} for $m=3$ has been recently obtained in \cite{ternary}, where we considered a certain type of three-particle interactions which led us to derive a purely ternary kinetic equation, which we called a ternary Boltzmann equation. However, the derivation of \eqref{intro:generalized Boltzmann} for $m=3$ has not been addressed yet, and that is exactly what we do in this paper.
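For the case $m=3$, which is the focus of this paper, equation \eqref{intro:generalized Boltzmann} takes the form
\begin{equation*}
\begin{cases}
\partial_t f+v\cdot\nabla_x f=Q_2(f,f)+Q_3(f,f,f),\quad (t,x,v)\in(0,\infty)\times\mathbb{R}^{d}\times\mathbb{R}^d,\\
f(0,x,v)=f_0(x,v),\quad (x,v)\in\mathbb{R}^{d}\times\mathbb{R}^d,
\end{cases}
\end{equation*}
so that the binary and ternary collisional operators appear as separate terms on the right-hand side.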
\subsection{Challenges of detecting both binary and ternary interactions}\label{subsec challenges} The first challenge we face in deriving \eqref{intro:generalized Boltzmann} for $m=3$ is to provide a mathematical framework allowing us to detect both binary and ternary interactions among particles. We achieve that by assuming the following: \begin{itemize} \item Binary interactions are modeled as elastic collisions of hard spheres of diameter $\epsilon$, i.e. two particles interact when the distance of their centers, defined as $$d_2(x_i,x_j):=|x_i-x_j|,$$ becomes equal to the diameter $\epsilon$. \item Ternary interactions are of interaction zone type, as in \cite{ternary}, by which we mean that the particle $i$ interacts with the particles $j$ and $k$ when the non-symmetric ternary distance $$d_3(x_i;x_j,x_k):=\sqrt{|x_i-x_j|^2+|x_i-x_k|^2 }$$ becomes $\sqrt{2}\epsilon$. \end{itemize} Simultaneous consideration of both binary and ternary interactions brings us to our first obstacle. In particular, in the works on the derivation of the binary Boltzmann equation for hard spheres, pioneered by Lanford \cite{lanford} and recently completed by Gallagher, Saint-Raymond and Texier \cite{gallagher}, the relevant scaling is the Boltzmann-Grad scaling \cite{Grad 1, Grad 2} \begin{equation}\label{Boltzmann-Grad intro} N\epsilon^{d-1}\simeq 1, \end{equation} as the number of particles $N\to\infty$ and their diameter $\epsilon\to 0^+$. On the other hand, the scaling used in \cite{ternary} to control ternary interactions is different: \begin{equation}\label{ternary scaling intro} N\epsilon^{d-1/2}\simeq 1. \end{equation} A crucial, conceptual obstacle is the apparent incompatibility of the Boltzmann-Grad scaling \eqref{Boltzmann-Grad intro} dictated by binary interactions and the scaling \eqref{ternary scaling intro} of ternary interactions, if both of them are of order $\epsilon$.
This incompatibility creates major difficulties even at the formal level. We overcome this scaling obstacle by assuming that, at the $N$-particle level, hard spheres are of diameter $\epsilon_2$ and that particles interact as triplets via an interaction zone $\epsilon_3$. Imposing the scalings \eqref{Boltzmann-Grad intro} with $\epsilon:=\epsilon_2$ and \eqref{ternary scaling intro} with $\epsilon:=\epsilon_3$, we obtain the common scaling \begin{equation}\label{common scaling intro} N\epsilon_2^{d-1}\simeq N\epsilon_3^{d-1/2}\simeq 1, \end{equation} as $N\to\infty$ and $\epsilon_2,\epsilon_3\to 0^+$. Notice that the scaling \eqref{common scaling intro} implies that, for sufficiently large $N$, we have \begin{equation}\label{inequality on epsilons intro} \epsilon_2<<\epsilon_3, \end{equation} which will have a prominent role in this paper. The next challenge we address is the need to decouple binary and ternary interactions for a system of finitely many particles. More precisely, our framework a priori allows, for instance, that particles $i$ and $j$ interact as hard spheres: $$d_2(x_i,x_j)=\epsilon_2,$$ while at the same time there is another particle $k$ such that the particle $i$ interacts with the particles $j$ and $k$: $$d_3(x_i;x_j,x_k)=\sqrt{2}\epsilon_3.$$ Such a configuration is illustrated in Figure \ref{both binary and ternary}.
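We note that inequality \eqref{inequality on epsilons intro} follows from \eqref{common scaling intro} by solving for each parameter and comparing exponents:
\begin{equation*}
\epsilon_2\simeq N^{-\frac{1}{d-1}},\qquad \epsilon_3\simeq N^{-\frac{2}{2d-1}},\qquad
\frac{\epsilon_2}{\epsilon_3}\simeq N^{\frac{2}{2d-1}-\frac{1}{d-1}}=N^{-\frac{1}{(2d-1)(d-1)}}\longrightarrow 0,\quad\text{as } N\to\infty,
\end{equation*}
since $\frac{1}{d-1}>\frac{2}{2d-1}$ for all $d\geq 2$.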
\begin{figure}[htp] \centering \captionsetup{justification=centering} \begin{tikzpicture}[scale=1] \coordinate (a) at (0,0); \coordinate (a') at (0,-0.7); \coordinate (b) at (0.25,0.25); \coordinate (c) at (2,0); \coordinate (b') at (0,0.5); \coordinate (c') at (0.5,1.1); \coordinate (e) at (-0.25,0.25); \coordinate (f) at (-0.5,0.5); \node (A) at (a) {\small{$\bullet$}}; \node (C) at (c) {\small{$\bullet$}}; \node (F) at (f) {\small{$\bullet$}}; \draw (a)--(c); \draw (a)--(f); \coordinate (d) at (2.25,0.25); \node(c7) at (a)[draw,circle through=(b),color=red] {}; \node(c8) at (c)[draw,circle through=(d),color=blue] {}; \node(c9) at (f)[draw,circle through=(e),color=teal] {}; \coordinate (x) at (-0.55,0.03); \node (x) at (x) {\small{$\epsilon_2$}}; \coordinate (f1) at (0.4,-0.3); \node (F1) at (f1){{\color{red}\small{$i$}}}; \coordinate (f2) at (2.5,0); \node (F2) at (f2){{\color{blue}\small{$k$}}}; \coordinate (f3) at (-0.9,0.85); \node (F3) at (f3){{\color{teal}\small{$j$}}}; \coordinate (f4) at (1,0.4); \node (F4) at (f4){\small{$\sqrt{2\epsilon_3^2-\epsilon_2^2}$}}; \end{tikzpicture} \caption{Both binary and ternary interactions at the same time} \label{both binary and ternary} \end{figure} Pathological configurations, including the one we just described, will be shown to be negligible. This is far from trivial; for more details on the microscopic dynamics, see Subsection \ref{subsec:intro dynamics} and Section \ref{sec:dynamics}. In particular, we shall show that as long as $0<\epsilon_2<\epsilon_3<1$, only the following two interaction scenarios are possible with non-trivial probability under time evolution: \begin{enumerate}[(i)] \item Two particles interact as hard spheres while all other particles are not involved in any binary or ternary interactions at the same time. This type of configuration generates the binary collisional operator. It is illustrated in Figure \ref{binary interaction mixed}.
\item Three particles interact via an interaction zone, while none of them is involved in a binary interaction with either of the other two particles of the interaction zone at the same time. The rest of the particles are not involved in any binary or ternary interactions. This type of configuration is responsible for generating the ternary collisional operator. It is illustrated in Figure \ref{ternary interaction mixed}. \end{enumerate} \begin{figure}[htp] \centering \captionsetup{justification=centering} \begin{tikzpicture}[scale=1] \coordinate (a) at (0,0); \coordinate (a') at (0,-0.7); \coordinate (b) at (0.25,0.25); \coordinate (c) at (4,2); \coordinate (b') at (0,0.5); \coordinate (c') at (0.5,1.1); \coordinate (e) at (-0.25,0.25); \coordinate (f) at (-0.5,0.5); \node (A) at (a) {\small{$\bullet$}}; \node (C) at (c) {\small{$\bullet$}}; \node (F) at (f) {\small{$\bullet$}}; \draw (a)--(c); \draw (a)--(f); \coordinate (d) at (4.25,2.25); \node(c7) at (a)[draw,circle through=(b),color=red] {}; \node(c8) at (c)[draw,circle through=(d),color=blue] {}; \node(c9) at (f)[draw,circle through=(e),color=teal] {}; \coordinate (x) at (-0.55,0.03); \node (x) at (x) {\small{$\epsilon_2$}}; \coordinate (f1) at (0.4,-0.3); \node (F1) at (f1){{\color{red}\small{$i$}}}; \coordinate (f2) at (4.55,1.9); \node (F2) at (f2){{\color{blue}\small{$k$}}}; \coordinate (f3) at (-0.9,0.85); \node (F3) at (f3){{\color{teal}\small{$j$}}}; \coordinate (f4) at (2,1.3); \node (F4) at (f4){\small{$\lambda_2$}}; \end{tikzpicture} \caption{Binary interaction: $\epsilon_2^2+\lambda_2^2>2\epsilon_3^2, \quad\lambda_2>\epsilon_2$.} \label{binary interaction mixed} \end{figure} \begin{figure}[htp] \centering \captionsetup{justification=centering} \begin{tikzpicture}[scale=1] \coordinate (a) at (0,0); \coordinate (a') at (0.25,0.25); \node (A) at (a) {\small{$\bullet$}}; \node(c7) at (a)[draw,circle through=(a'),color=red] {}; \coordinate (A1) at (0.5,0); \node (A1) at (A1){{\color{red}\small{$i$}}};
\coordinate (b) at (1,1); \coordinate (b') at (1.25,1.25); \node (B) at (b) {\small{$\bullet$}}; \node(c8) at (b)[draw,circle through=(b'),color=blue] {}; \coordinate (B1) at (1.5,1); \node (B1) at (B1){{\color{blue}\small{$k$}}}; \coordinate (c) at (-1,1); \coordinate (c') at (-1.25,1.25); \node (C) at (c) {\small{$\bullet$}}; \node(c9) at (c)[draw,circle through=(c'),color=teal] {}; \coordinate (C1) at (-1.5,1); \node (C1) at (C1){{\color{teal}\small{$j$}}}; \draw (a)--(c); \draw (a)--(b); \coordinate (f4) at (0.8,0.4); \node (F4) at (f4){\small{$\lambda_2$}}; \coordinate (f5) at (-0.6,0.4); \node (F5) at (f5){\small{$\lambda_1$}}; \end{tikzpicture} \caption{Ternary interaction: $\lambda_1^2+\lambda_2^2=2\epsilon_3^2,\quad\lambda_1,\lambda_2>\epsilon_2$.} \label{ternary interaction mixed} \end{figure} Finally, since we will eventually let the number of particles $N\to\infty$, the main challenge we need to address is the stability of a good configuration\footnote{by which we mean a configuration which does not run into any kind of interaction under backwards time evolution.} under the adjunction of one or two collisional particles. Assume, for a moment, that we have a good configuration of $m$ particles and we add $\sigma$ particles to the system, where $\sigma\in\{1,2\}$, such that a binary or ternary interaction is formed among one of the existing particles and the $\sigma$ new particles. In general, under backwards time evolution, the system could run into another binary or ternary interaction, see e.g. Figure \ref{recollision}, which illustrates the mathematically most difficult case where the newly formed $(m+2)$-configuration runs into a binary interaction. To the best of our knowledge, this is the first time one needs to address the possibility of a newly formed interacting configuration running into an interaction of a different type (binary to ternary or ternary to binary) backwards in time.
However, in Section \ref{sec:geometric} and Section \ref{sec:stability}, we develop novel algebraic and geometric tools which help us eliminate pathological scenarios, including the one described in Figure \ref{recollision}, by showing that outside of a small measure set, negligible in the limit, the newly formed configuration does not run into any additional interactions backwards in time. For more details on the technical difficulties faced, see Subsection \ref{subsec:difficulties}. \begin{figure}[htp] \centering \captionsetup{justification=centering} \begin{tikzpicture} \coordinate (a) at (0,0); \coordinate (a') at (0.25,0.25); \node (A) at (a) {\small{$\bullet$}}; \node(c7) at (a)[draw,circle through=(a'),color=red] {}; \coordinate (A1) at (0.5,0); \node (A1) at (A1){{\color{red}\small{$i$}}}; \coordinate (b) at (1,1); \coordinate (b') at (1.25,1.25); \node (B) at (b) {\small{$\bullet$}}; \node(c8) at (b)[draw,circle through=(b'),color=blue] {}; \coordinate (B1) at (1.8,1); \node (B1) at (B1){{\color{blue}\small{$m+2$}}}; \coordinate (c) at (-1,1); \coordinate (c') at (-1.25,1.25); \node (C) at (c) {\small{$\bullet$}}; \node(c9) at (c)[draw,circle through=(c'),color=teal] {}; \coordinate (C1) at (-1.8,1); \node (C1) at (C1){{\color{teal}\small{$m+1$}}}; \draw (a)--(c); \draw (a)--(b); \coordinate (f4) at (0.8,0.4); \node (F4) at (f4){\small{$\lambda_2$}}; \coordinate (f5) at (-0.6,0.4); \node (F5) at (f5){\small{$\lambda_1$}}; \node (G) at (0,-0.65) {\small{$\lambda_1^2+\lambda_2^2=2\epsilon_3^2,\quad\lambda_1,\lambda_2>\epsilon_2$}}; \coordinate (a) at (5,0); \coordinate (a') at (5.25,0.25); \node (A) at (a) {\small{$\bullet$}}; \node(c7) at (a)[draw,circle through=(a'),color=red] {}; \coordinate (A1) at (5.5,0); \node (A1) at (A1){{\color{red}\small{$i$}}}; \coordinate (b) at (5.5,0.5); \coordinate (b') at (5.25,0.25); \node (B) at (b) {\small{$\bullet$}}; \node(c7) at (b)[draw,circle through=(b'),color=blue] {}; \coordinate (B1) at (6.3,0.5); \node (B1) at 
(B1){{\color{blue}\small{$m+2$}}}; \draw (a)--(b); \node (D) at (5,0.5) {\small{$\epsilon_2$}}; \coordinate (d1) at (1.5,0.25); \coordinate (d2) at (4.5,0.25); \draw [->](d1)--(d2); \coordinate (d3) at (3,0.5); \node (D3) at (d3){{\small{backwards in time}}}; \end{tikzpicture} \caption{A newly formed ternary interaction running into a binary interaction backwards in time}\label{recollision} \end{figure} In the next subsection, we investigate more precisely what happens when a binary or a ternary interaction occurs and describe the time evolution of such a system. \subsection{Dynamics of finitely many particles}\label{subsec:intro dynamics} Let us describe the evolution in $\mathbb{R}^d$, $d\geq 2$, of a system of $N$ hard spheres of diameter $\epsilon_2$ and interaction zone $\epsilon_3$, where $0<\epsilon_2<\epsilon_3<1$. The assumption $\epsilon_2<\epsilon_3$ is necessary for ternary interactions to be of non-trivial probability, see Remark \ref{remark on phase space} for more details. \subsubsection{Interactions considered} We first define the interactions considered in this paper. \begin{definition}\label{intro: def of ternary interaction} Let $N\in\mathbb{N}$, with $N\geq 3$, and $0<\epsilon_2<\epsilon_3<1$. We define binary and ternary interactions, also referred to as collisions, as follows: \begin{itemize} \item Consider two particles $i,j\in\{1,...,N\}$ with positions $x_i,x_j\in\mathbb{R}^{d}$. We say that the particles $i,j$ are in an $(i,j)$ binary interaction if the following geometric condition holds: \begin{equation}\label{intro-binary collisions} d_2(x_i,x_j):=|x_i-x_j|=\epsilon_2. \end{equation} \item Consider three particles $i,j,k\in\{1,...,N\}$, with positions $x_i,x_j,x_k\in\mathbb{R}^{d}$. We say that the particles $i,j,k$ are in an $(i;j,k)$ interaction\footnote{we use the notation $(i;j,k)$ because the interaction condition is not symmetric. The particle $i$ is the central particle of the interaction i.e.
the one interacting with the particles $j$ and $k$ respectively.} if the following geometric condition holds: \begin{equation}\label{intro-triple collisions} d_3(x_i;x_j,x_k):=\sqrt{|x_i-x_j|^2+|x_i-x_k|^2}=\sqrt{2}\epsilon_3. \end{equation} \end{itemize} \end{definition} When an $(i,j)$ interaction occurs, the velocities $v_i,v_j$ of the $i$-th and $j$-th particles instantaneously transform according to the binary collisional law: \begin{equation}\label{intro:binary collisional law} \begin{aligned} v_i'&=v_i+\langle\omega_1,v_j-v_i\rangle\omega_1,\\ v_j'&=v_j-\langle\omega_1,v_j-v_i\rangle\omega_1, \end{aligned} \end{equation} where \begin{equation}\label{Intro:omega def binary} \omega_1:=\frac{x_j-x_i}{\epsilon_2}. \end{equation} Thanks to \eqref{intro-binary collisions}, we have $\omega_1\in\mathbb{S}_1^{d-1}$. The vector $\omega_1$ is called the binary impact direction, and it represents the scaled relative position of the colliding particles. Moreover, one can see that the binary momentum-energy system: \begin{equation}\label{intro:binary MEC} \begin{aligned} v_i'+v_j'&=v_i+v_j,\\ |v_i'|^2+|v_j'|^2&=|v_i|^2+|v_j|^2, \end{aligned} \end{equation} is satisfied.
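The conservation properties \eqref{intro:binary MEC} can also be checked directly; the following sketch (illustrative NumPy code, not part of the paper; function names are ours) applies the binary collisional law \eqref{intro:binary collisional law} to random velocities and a random unit impact direction:

```python
# Illustrative sketch: the binary collisional law conserves momentum and kinetic energy.
import numpy as np

def binary_collision(v_i, v_j, omega1):
    """v_i' = v_i + <omega1, v_j - v_i> omega1,  v_j' = v_j - <omega1, v_j - v_i> omega1."""
    proj = np.dot(omega1, v_j - v_i)
    return v_i + proj * omega1, v_j - proj * omega1

rng = np.random.default_rng(0)
d = 3
v_i, v_j = rng.normal(size=d), rng.normal(size=d)
omega1 = rng.normal(size=d)
omega1 /= np.linalg.norm(omega1)  # binary impact direction on the unit sphere S^{d-1}

vi_p, vj_p = binary_collision(v_i, v_j, omega1)
print(np.allclose(vi_p + vj_p, v_i + v_j))              # momentum: True
print(np.isclose(np.dot(vi_p, vi_p) + np.dot(vj_p, vj_p),
                 np.dot(v_i, v_i) + np.dot(v_j, v_j)))  # energy: True
```

The check succeeds for every impact direction, since the law exchanges exactly the component of the relative velocity along $\omega_1$.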
When an $(i;j,k)$ interaction happens, the velocities $v_i,v_j,v_k$ of the $i$-th, $j$-th and $k$-th particles instantaneously transform according to the ternary collisional law derived in \cite{ternary}: \begin{equation}\label{intro:ternary collisional law} \begin{aligned} v_i^*&=v_i+\frac{\langle\omega_1,v_j-v_i\rangle+\langle\omega_2,v_k-v_i\rangle}{1+\langle\omega_1,\omega_2\rangle}(\omega_{1}+\omega_{2}),\\ v_j^*&=v_j-\frac{\langle\omega_1,v_j-v_i\rangle+\langle\omega_2,v_k-v_i\rangle}{1+\langle\omega_1,\omega_2\rangle}\omega_{1},\\ v_k^*&=v_k-\frac{\langle\omega_1,v_j-v_i\rangle+\langle\omega_2,v_k-v_i\rangle}{1+\langle\omega_1,\omega_2\rangle}\omega_2, \end{aligned} \end{equation} where \begin{equation}\label{Intro:omega def ternary} (\omega_1,\omega_2):=\left(\frac{x_j-x_i}{\sqrt{2}\epsilon_3},\frac{x_k-x_i}{\sqrt{2}\epsilon_3}\right). \end{equation} Thanks to \eqref{intro-triple collisions}, we have $(\omega_1,\omega_2)\in\mathbb{S}_1^{2d-1}$. The vectors $(\omega_1,\omega_2)$ are called the ternary impact directions, and they represent the scaled relative positions of the interacting particles. Moreover, it has been shown in \cite{ternary} that the ternary momentum-energy system: \begin{equation}\label{intro:ternary MEC} \begin{aligned} v_i^*+v_j^*+v_k^*&=v_i+v_j+v_k,\\ |v_i^*|^2+|v_j^*|^2+|v_k^*|^2&=|v_i|^2+|v_j|^2+|v_k|^2, \end{aligned} \end{equation} is satisfied. \subsubsection{Phase space and description of the flow} Let $N\in\mathbb{N}$, with $N\geq 3$, and $0<\epsilon_2<\epsilon_3<1$. The natural phase space\footnote{upon symmetrization, one could define the phase space without ordering the particles and obtain a symmetrized version of the ternary operator (see \cite{thesis} for more details).
For simplicity, we opt to work upon ordering the particles.} to capture both binary and ternary interactions is: \begin{equation}\label{intro:mixed phase space} \begin{aligned} \mathcal{D}_{N,\epsilon_2,\epsilon_3}=\big\{Z_N=(X_N,V_N)\in\mathbb{R}^{2dN}: d_2(x_i,x_j)\geq\epsilon_2,\text{ }\forall (i,j)\in\mathcal{I}_N^2, \text{ and }d_3(x_i;x_j,x_k)\geq\sqrt{2}\epsilon_3,\text{ }\forall (i,j,k)\in\mathcal{I}_N^3\big\}, \end{aligned} \end{equation} where $X_N=(x_1,x_2,...,x_N)$, $V_N=(v_1,v_2,...,v_N),$ represent the positions and velocities of the $N$ particles, and the index sets $\mathcal{I}_N^2,\mathcal{I}_N^3$ are given by \begin{align*} \mathcal{I}_N^2=\{(i,j)\in\{1,...,N\}^2:i<j\},\quad \mathcal{I}_N^3=\{(i,j,k)\in\{1,...,N\}^3:i<j<k\}. \end{align*} Let us describe the evolution in time of such a system. Consider an initial configuration $Z_N\in\mathcal{D}_{N,\epsilon_2,\epsilon_3}$. The motion is described as follows: \begin{enumerate}[(I)] \item Particles are assumed to perform rectilinear motion as long as there is no interaction: \begin{equation*} \dot{x}_i=v_i,\quad \dot{v}_i=0,\quad\forall i\in\{1,...,N\}. \end{equation*} \item Assume now that an initial configuration $Z_N=(X_N,V_N)$ has evolved until time $t>0$, reaching $Z_N(t)=(X_N(t),V_N(t))$, and that there is an interaction at time $t$. We have the following cases: \begin{itemize} \item The interaction is binary: Assuming there is an $(i,j)$ interaction, the velocities of the interacting particles instantaneously transform according to the binary collisional law $(v_i(t),v_j(t))\to (v_i'(t),v_j'(t))$ given in \eqref{intro:binary collisional law}. \item The interaction is ternary: Assuming there is an $(i;j,k)$ interaction, the velocities of the interacting particles instantaneously transform according to the ternary collisional law $(v_i(t),v_j(t),v_k(t))\to (v_i^*(t),v_j^*(t),v_k^*(t))$ given in \eqref{intro:ternary collisional law}.
\end{itemize} \end{enumerate} Let us note that (I)-(II) are not sufficient to generate a global in time flow for the particle system, since the velocity transformations are not smooth. In general, pathologies might arise as time evolves, namely more than one type of interaction happening at the same time, grazing interactions, or infinitely many interactions in finite time. Although well-defined dynamics were shown to exist in \cite{alexander} for hard spheres and in \cite{ternary} for the purely ternary case, those results do not imply well-posedness of the flow for the mixed case, where both binary and ternary interactions are taken into account. The reason is that a binary interaction can be succeeded by a ternary interaction and vice versa, a situation which was addressed in neither \cite{alexander} nor \cite{ternary}. However, we show that a non-grazing interaction cannot be succeeded by the same interaction. In other words, when two particles $(i,j)$ interact, the next interaction could be anything, binary or ternary, except a binary recollision of the particles $(i,j)$. Similarly, when there is an $(i;j,k)$ interaction, the next interaction can be anything except a ternary $(i;j,k)$\footnote{any other permutation of the particles $i,j,k$ cannot form an interaction since $i<j<k$. In case one does not order the particles, a subsequent $(j;i,k)$ interaction, for instance, could possibly happen.} interaction. This observation allows us to define the flow locally a.e., and then run combinatorial covering arguments to geometrically exclude a zero Lebesgue measure set such that the flow is defined globally in time on the complement. Let us informally state this result. For a detailed statement, see Theorem \ref{global flow}. \textbf{\textit{Existence of a global flow}:}\textit{ Let $N\in\mathbb{N}$ and $0<\epsilon_2<\epsilon_3<1$.
There is a global in time measure-preserving flow $(\Psi_N^t)_{t\in\mathbb{R}}:\mathcal{D}_{N,\epsilon_2,\epsilon_3}\to\mathcal{D}_{N,\epsilon_2,\epsilon_3}$, described a.e. by (I)-(II), which preserves kinetic energy. This flow is called the $N$-particle $(\epsilon_2,\epsilon_3)$-interaction flow.} The global measure-preserving interaction flow yields the Liouville equation\footnote{in case $N=2$, the ternary boundary condition is not present in \eqref{intro:Liouville}, while if $N=1$, equation \eqref{intro:Liouville} is just the transport equation.} for the evolution $f_N$ of an initial $N$-particle probability density $f_{N,0}$: \begin{equation}\label{intro:Liouville} \begin{aligned} &\partial_tf_N+\sum_{i=1}^Nv_i\cdot\nabla_{x_i}f_N=0,\quad (t,Z_N)\in (0,\infty)\times\mathring{D}_{N,\epsilon_2,\epsilon_3},\\ &f_N(t,Z_N')=f_N(t,Z_N),\quad t\in[0,\infty),\quad Z_N\text{ is a simple binary interaction\footnotemark},\\ &f_N(t,Z_N^*)=f_N(t,Z_N),\quad t\in[0,\infty),\quad Z_N\text{ is a simple ternary interaction\footnotemark },\\ &f_N(0,Z_N)=f_{N,0}(Z_N),\quad Z_N\in\mathring{D}_{N,\epsilon_2,\epsilon_3}. \end{aligned} \end{equation} \addtocounter{footnote}{-1} \footnotetext{by simple binary interaction, we mean that the only interaction happening is an $(i,j)$ interaction. In this case, we write $Z_N'=(X_N,V_N')$, where $V_N'=(v_1,...,v_{i-1},v_i',v_{i+1},...,v_{j-1},v_j',v_{j+1},...,v_N)$.} \addtocounter{footnote}{1} \footnotetext{by simple ternary interaction, we mean that the only interaction happening is an $(i;j,k)$ interaction. In this case, we write $Z_N^*=(X_N,V_N^*)$, where $V_N^*=(v_1,...,v_{i-1},v_i^*,v_{i+1},...,v_{j-1},v_j^*,v_{j+1},...,v_{k-1},v_k^*,v_{k+1},...,v_N)$.} The Liouville equation provides a complete deterministic description of the system of $N$ particles. Although Liouville's equation is a linear transport equation, efficiently solving it is almost impossible in the case where the particle number $N$ is very large.
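In the same spirit as for the binary law, one can verify the ternary momentum-energy system \eqref{intro:ternary MEC} numerically by applying the ternary collisional law \eqref{intro:ternary collisional law} to random data; note that $(\omega_1,\omega_2)$ must be sampled as a single unit vector of $\mathbb{R}^{2d}$, since $(\omega_1,\omega_2)\in\mathbb{S}_1^{2d-1}$. A sketch (illustrative NumPy code, not part of the paper; function names are ours):

```python
# Illustrative sketch: the ternary collisional law conserves momentum and kinetic energy.
import numpy as np

def ternary_collision(v_i, v_j, v_k, omega1, omega2):
    """Common factor c multiplies (omega1 + omega2), -omega1, -omega2 respectively."""
    c = (np.dot(omega1, v_j - v_i) + np.dot(omega2, v_k - v_i)) \
        / (1.0 + np.dot(omega1, omega2))
    return v_i + c * (omega1 + omega2), v_j - c * omega1, v_k - c * omega2

rng = np.random.default_rng(1)
d = 3
v_i, v_j, v_k = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)
w = rng.normal(size=2 * d)
w /= np.linalg.norm(w)         # (omega1, omega2) lies on the unit sphere S^{2d-1}
omega1, omega2 = w[:d], w[d:]

vi_s, vj_s, vk_s = ternary_collision(v_i, v_j, v_k, omega1, omega2)
print(np.allclose(vi_s + vj_s + vk_s, v_i + v_j + v_k))       # momentum: True
print(np.isclose(sum(np.dot(u, u) for u in (vi_s, vj_s, vk_s)),
                 sum(np.dot(u, u) for u in (v_i, v_j, v_k))))  # energy: True
```

Since $|\omega_1|^2+|\omega_2|^2=1$, one has $|\langle\omega_1,\omega_2\rangle|\leq 1/2$, so the denominator $1+\langle\omega_1,\omega_2\rangle\geq 1/2$ and the division is always well defined.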
This is why an accurate kinetic description is welcome, and to obtain it one wants to understand the limiting behavior of the system as $N\to\infty$ and $\epsilon_2,\epsilon_3\to 0^+$, with the hope that qualitative properties will be revealed for a large but finite $N$. \subsection{The binary-ternary Boltzmann equation} To obtain such a kinetic description, we let the number of particles $N\to\infty$ and the diameter and interaction zone of the particles $\epsilon_2,\epsilon_3\to 0^+$ in the \textbf{common scaling} \eqref{common scaling intro}: \begin{equation*} N\epsilon_2^{d-1}\simeq N\epsilon_3^{d-\frac{1}{2}}\simeq 1, \end{equation*} which will lead to the binary-ternary Boltzmann equation \begin{equation}\label{intro:binary-ternary} \partial_t f+v\cdot\nabla_x f=Q_2(f,f)+Q_3(f,f,f), \quad (t,x,v)\in (0,\infty)\times\mathbb{R}^d\times\mathbb{R}^d. \end{equation} The operator $Q_2(f,f)$, see e.g. \cite{cercignani gases}, is the classical hard sphere binary collisional operator given by \begin{equation}\label{intro: quad Boltz kernel} Q_2(f,f)=\int_{\mathbb{S}_1^{d-1}\times\mathbb{R}^d}\langle\omega_1,v_1-v\rangle_+\left(f'f_1'-ff_1\right)\,d\omega_1\,dv_1, \end{equation} where \begin{equation*} f'=f(t,x,v'),\quad f=f(t,x,v),\quad f_1'=f(t,x,v_1'),\quad f_1=f(t,x,v_1). \end{equation*} The operator $Q_3(f,f,f)$, introduced for the first time in \cite{ternary}, is the ternary hard interaction zone operator given by \begin{equation}\label{intro:kernel} Q_3(f,f,f)=\int_{\mathbb{S}_1^{2d-1}\times\mathbb{R}^{2d}}b_+\left(f^*f_1^*f_2^*-ff_1f_2\right)\,d\omega_1\,d\omega_2\,dv_1\,dv_2, \end{equation} where \begin{equation}\label{intro:parameters boltzmann} \begin{aligned} b=b(\omega_1,\omega_2,v_1-v,v_2-v):=\langle\omega_1,v_{1}-v\rangle+\langle\omega_2,v_{2}-v\rangle,\quad b_+=\max\{b,0\},&\\ f^*=f(t,x,v^*),\quad f=f(t,x,v),\quad f_i^*=f(t,x,v_i^*),\quad f_i=f(t,x,v_i), \text{ for }i\in\{1,2\}.& \end{aligned} \end{equation} We should mention
that in \cite{gwp}, global well-posedness near vacuum has been shown for \eqref{intro:binary-ternary} for potentials ranging from moderately soft to hard, in spaces of functions bounded by a Maxwellian. In fact, in \cite{gwp} it is seen that the ternary collisional operator allows consideration of softer potentials than the binary operator. In other words, the ternary correction to the Boltzmann equation does not behave worse than the classical Boltzmann equation. It is important to point out that, upon symmetrization of the ternary collisional operator (see \cite{thesis}), the corresponding binary-ternary Boltzmann equation enjoys similar statistical and entropy production properties and conservation laws as the classical Boltzmann equation. Therefore, such a model could serve as a correction of the classical Boltzmann equation for denser gases. This follows after combining the properties of the classical binary operator, see e.g. \cite{cercignani gases}, with the properties of the symmetrized ternary collisional operator investigated for the first time in \cite{thesis}. \subsection{Strategy of the derivation and statement of the main result} In order to pass from the $N$-particle system dynamics to the kinetic equation \eqref{intro:binary-ternary}, we implement the program of constructing linear finite and infinite hierarchies of equations, pioneered by Lanford \cite{lanford} and refined by Gallagher, Saint-Raymond, Texier \cite{gallagher}, and connecting them to the new binary-ternary Boltzmann equation. In \cite{ternary}, we extended this program to include ternary interactions, which led to the rigorous derivation of a purely ternary kinetic equation for particles with a hard interaction zone in the scaling \eqref{ternary scaling intro}. However, a rigorous derivation of \eqref{intro:binary-ternary} follows neither from \cite{lanford, gallagher} nor from the ternary work \cite{ternary}.
As mentioned in Subsection \ref{subsec challenges}, the first difficulty is the apparent incompatibility of scalings \eqref{Boltzmann-Grad intro}-\eqref{ternary scaling intro}, which we overcome by introducing the common scaling \eqref{common scaling intro}. The most challenging task in making the argument rigorous, though, is the analysis of all the possible recollisions\footnote{by recollisions we mean the possible divergence of the backwards $(\epsilon_2,\epsilon_3)$-interaction flow from the backwards free flow.} of the backwards $(\epsilon_2,\epsilon_3)$-flow. In contrast to the binary or the ternary case, where each binary or ternary interaction is succeeded by a binary or ternary interaction respectively, here we can have any possible sequence of binary or ternary interactions. We keep track of this combinatorics using the set \begin{equation}\label{combinatorics intro} S_k=\{\sigma=(\sigma_1,...,\sigma_k):\sigma_i\in\{1,2\},\quad\forall i=1,...,k\}. \end{equation} In addition to more involved combinatorics, careful analysis of all the possible interaction sequences requires the development of novel geometric and algebraic tools, which we discuss in detail in Subsection \ref{subsec:difficulties}. For now, we continue to discuss the process of derivation. More specifically, we first derive a finite, linear, coupled hierarchy of equations for the marginal densities \begin{align*} f_N^{(s)}(Z_s)&=\int_{\mathbb{R}^{2d(N-s)}}f_N(Z_N)\mathds{1}_{\mathcal{D}_{N,\epsilon_2,\epsilon_3}}(Z_N)\,dx_{s+1}...\,dx_N\,dv_{s+1}...\,dv_N,\quad s\in\{1,...,N-1\}, \end{align*} of the solution $f_N$ to the Liouville equation, which we call the BBGKY\footnote{Bogoliubov, Born, Green, Kirkwood, Yvon} hierarchy.
This hierarchy is given by \begin{equation}\label{intro:BBGKY} \partial_t f_N^{(s)}+\sum_{i=1}^sv_i\cdot\nabla_{x_i}f_N^{(s)}=\mathcal{C}_{s,s+1}^Nf_N^{(s+1)}+\mathcal{C}_{s,s+2}^Nf_N^{(s+2)},\quad s\in\{1,...,N-1\}. \end{equation} For the precise form of the operators $\mathcal{C}_{s,s+1}^N$, $\mathcal{C}_{s,s+2}^N$, see \eqref{BBGKY operator binary}-\eqref{BBGKY operator triary}. Duhamel's formula yields that the BBGKY hierarchy can be written in mild form as follows: \begin{equation}\label{intro:mild BBGKY} \hspace{2cm} f_N^{(s)}(t,Z_s)=T_s^tf_{N,0}(Z_s)+\int_0^t T_s^{t-\tau}(\mathcal{C}_{s,s+1}^Nf_N^{(s+1)}+\mathcal{C}_{s,s+2}^Nf_N^{(s+2)})(\tau,Z_s)\,d\tau,\quad s\in\mathbb{N}, \end{equation} where for any continuous function $g_s:\mathcal{D}_{s,\epsilon_2,\epsilon_3}\to\mathbb{R}$, we write $T_s^tg_s(Z_s):=g_s(\Psi_s^{-t}Z_s),$ and $\Psi_s^t$ is the $s$-particle $(\epsilon_2,\epsilon_3)$-interaction flow. We then formally let $N\to\infty$ and $\epsilon_2,\epsilon_3\to 0^+$ in the scaling \eqref{common scaling intro} to obtain an infinite, linear, coupled hierarchy of equations, which we call the Boltzmann hierarchy. This hierarchy is given by \begin{equation}\label{intro:Boltzmann hierarchy} \partial_t f^{(s)}+\sum_{i=1}^sv_i\cdot\nabla_{x_i}f^{(s)}=\mathcal{C}_{s,s+1}^\infty f^{(s+1)}+\mathcal{C}_{s,s+2}^\infty f^{(s+2)},\quad s\in\mathbb{N}. \end{equation} For the precise form of the operators $\mathcal{C}_{s,s+1}^\infty$, $\mathcal{C}_{s,s+2}^\infty$, see \eqref{boltzmann hiera kernel binary}, \eqref{boltzmann hiera kernel ternary} respectively.
Duhamel's formula yields that the Boltzmann hierarchy can be written in mild form as follows: \begin{equation}\label{intro:mild Boltzmann} f^{(s)}(t,Z_s)=S_s^tf_{0}(Z_s)+\int_0^t S_s^{t-\tau}(\mathcal{C}_{s,s+1}^\infty f^{(s+1)}+\mathcal{C}_{s,s+2}^\infty f^{(s+2)})(\tau,Z_s)\,d\tau,\quad s\in\mathbb{N}, \end{equation} where for any continuous function $g_s:\mathbb{R}^{2ds}\to\mathbb{R}$, we write $S_s^tg_s(Z_s):=g_s(\Phi_s^{-t}Z_s),$ and $\Phi_s^t$ is the $s$-particle free flow defined by $\Phi_s^t Z_s=\Phi_s^t(X_s,V_s)=(X_s+tV_s,V_s).$ It can be observed that, for factorized initial data and assuming that the solution remains factorized in time\footnote{this is typically called the propagation of chaos assumption}, the Boltzmann hierarchy reduces to the binary-ternary Boltzmann equation \eqref{intro:binary-ternary}. This observation connects the Boltzmann hierarchy with the binary-ternary Boltzmann equation \eqref{intro:binary-ternary}. To make this argument rigorous, we first show that the BBGKY and Boltzmann hierarchies are well-posed in the scaling \eqref{common scaling intro}, at least for short times, and then that the convergence of the BBGKY hierarchy initial data to the Boltzmann hierarchy initial data propagates in the time interval of existence of the solutions. Showing convergence is a very challenging task, and is the heart of our contribution. We describe details in Subsection \ref{subsec:difficulties}. Now, we informally state our main result. For a rigorous statement of the result see Theorem \ref{convergence theorem}. \textbf{\textit{Statement of the main result}:} \textit{Let $F_0$ be initial data for the Boltzmann hierarchy \eqref{intro:Boltzmann hierarchy}, and $F_{N,0}$ be some BBGKY hierarchy \eqref{intro:BBGKY} initial data which ``approximate''\footnote{see Subsection \ref{subseq:approximation} for details} $F_0$ as $N\to\infty$, $\epsilon_2,\epsilon_3\to 0^+$ under the scaling \eqref{common scaling intro}.
Let $\bm{F_N}$ be the mild solution to the BBGKY hierarchy \eqref{intro:BBGKY} with initial data $F_{N,0}$, and $\bm{F}$ the mild solution to the Boltzmann hierarchy \eqref{intro:Boltzmann hierarchy}, with initial data $F_0$, up to a short time $T>0$. Then $\bm{F_N}$ converges in observables\footnote{for a precise definition of convergence in observables, see Subsection \ref{subsec:con in observables}} to $\bm{F}$ in $[0,T]$ as $N\to\infty$, $\epsilon_2,\epsilon_3\to 0^+$, under the scaling \eqref{common scaling intro}.} \vspace{0.2cm} The convergence obtained implies that the solution of the finite hierarchy indeed approximates the solution of the infinite hierarchy in $[0,T]$, as $N\to\infty$, $\epsilon_2,\epsilon_3\to 0^+$ in the scaling \eqref{common scaling intro}. For factorized initial data (initial chaos assumption), the Boltzmann hierarchy reduces to equation \eqref{intro:binary-ternary}. \subsection{Difficulties faced in the proof of the main result}\label{subsec:difficulties} The main idea to obtain convergence (Theorem \ref{convergence theorem}) is to inductively use the mild forms \eqref{intro:mild BBGKY}, \eqref{intro:mild Boltzmann} of the BBGKY hierarchy and Boltzmann hierarchy respectively, to formally obtain series expansions with respect to the initial data: \begin{align} f_N^{(s)}(t,Z_s)=T_s^tf_{N,0}^{(s)}(Z_s)+\sum_{k=1}^\infty\sum_{\sigma\in S_k}\int_0^t&\int_0^{t_1}...\int_0^{t_{k-1}}T_s^{t-t_1}\mathcal{C}_{s,s+\widetilde{\sigma}_1}^NT_{s+\widetilde{\sigma}_1}^{t_1-t_2}...\mathcal{C}_{s+\widetilde{\sigma}_{k-1},s+\widetilde{\sigma}_k}^NT_{s+\widetilde{\sigma}_k}^{t_k}f_{N,0}^{(s+\widetilde{\sigma}_k)}(Z_s)\,dt_k...\,dt_1,\label{intro:BBGKY expansion}\\ f^{(s)}(t,Z_s)=S_s^tf_{0}^{(s)}(Z_s)+\sum_{k=1}^\infty\sum_{\sigma\in S_k}\int_0^t&\int_0^{t_1}...\int_0^{t_{k-1}}S_s^{t-t_1}\mathcal{C}_{s,s+\widetilde{\sigma}_1}^\infty S_{s+\widetilde{\sigma}_1}^{t_1-t_2}...\mathcal{C}_{s+\widetilde{\sigma}_{k-1},s+\widetilde{\sigma}_{k}}^\infty
S_{s+\widetilde{\sigma}_k}^{t_k}f_{0}^{(s+\widetilde{\sigma}_k)}(Z_s)\,dt_k...\,dt_1,\label{intro:Boltzmann expansion} \end{align} where $S_k$ is defined in \eqref{combinatorics intro}, and given $\sigma\in S_k$, $\ell=1,...,k$, we write $\widetilde{\sigma}_\ell:=\sum_{i=1}^\ell \sigma_i$. We note that the summation over $S_k$ in \eqref{intro:BBGKY expansion}-\eqref{intro:Boltzmann expansion} allows us to keep track of the possible interaction sequences occurring by ``adding'' one or two particles in each time step. For more details, see Section \ref{sec: series expansion}. Comparing expressions \eqref{intro:BBGKY expansion}-\eqref{intro:Boltzmann expansion}, we expect to obtain the required convergence under the scaling \eqref{common scaling intro}, as long as $f_{N,0}^{(s)}$ ``approximates'' $f_0^{(s)}$ under the same scaling. However, it is not possible to directly compare \eqref{intro:BBGKY expansion}-\eqref{intro:Boltzmann expansion}, because of the possible divergence of the backwards interaction flow from the free flow, which we call recollisions. Although recollisions were also encountered in \cite{gallagher} and \cite{ternary}, the mixed case, where both binary and ternary interactions are considered, requires different conceptual treatment in many instances, and is not implied by the results of these works. The reason is that a binary interaction can be succeeded by a ternary interaction and vice versa, a situation which was not addressed in \cite{gallagher,ternary}. The key to overcoming these difficulties is that the diameter of the particles is much smaller than the interaction zone, as implied by the common scaling \eqref{common scaling intro}. This fact allows us to develop certain delicate algebraic and geometric arguments to extract a small measure set of pathological initial data which lead to recollisions.
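The combinatorics encoded by $S_k$ and the partial sums $\widetilde{\sigma}_\ell$ can be made concrete with a short sketch (illustrative code, not part of the paper; function names are ours): each $\sigma_\ell=1$ adds one particle (binary adjunction) and each $\sigma_\ell=2$ adds two (ternary adjunction):

```python
# Illustrative sketch: enumerating the interaction sequences sigma in S_k and the
# particle counts s + sigma~_l obtained by adjoining one (binary) or two (ternary)
# particles at each of the k time steps.
from itertools import product

def sequences(k):
    """All sigma = (sigma_1, ..., sigma_k) with sigma_i in {1, 2}; |S_k| = 2^k."""
    return list(product((1, 2), repeat=k))

def particle_counts(s, sigma):
    """Partial sums s + sigma~_l, l = 1, ..., k."""
    counts, total = [], s
    for sigma_l in sigma:
        total += sigma_l
        counts.append(total)
    return counts

print(len(sequences(3)))              # |S_3| = 2^3 = 8
print(particle_counts(1, (2, 1, 2)))  # [3, 4, 6]
```

The exponential count $|S_k|=2^k$ is what makes the combinatorics of the mixed expansions heavier than in the purely binary or purely ternary cases, where only one sequence occurs per $k$.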
On the complement of this pathological set, expansions \eqref{intro:BBGKY expansion}-\eqref{intro:Boltzmann expansion} are comparable and the required convergence is obtained. The main idea for eliminating recollisions is an inductive application, in each time step, of Proposition \ref{bad set double} and Proposition \ref{bad set double measure}, which treat the binary adjunction, or Proposition \ref{bad set triple} and Proposition \ref{bad set triple measure}, which treat the ternary adjunction. More precisely, we face the following cases: \vspace{0.1cm} \begin{enumerate}[(I)] \item\textbf{Binary adjunction}: One particle is added, forming a binary interaction with one of the existing particles. The pathological situations that might arise under backwards time evolution are the following: \begin{itemize} \item The newly formed binary collisional configuration runs into a binary interaction under time evolution. This pathological situation is eliminated using arguments inspired by \cite{gallagher}. This is actually the only case which is similar to the cases covered in \cite{gallagher}. \item The newly formed binary collisional configuration runs into a ternary interaction under time evolution. This pathological situation did not appear in any of the previous works, since merely binary or ternary interactions were studied. However, due to the fact that $\epsilon_2<<\epsilon_3$, which comes from the scaling \eqref{common scaling intro}, this pathological situation can be treated using techniques inspired by \cite{ternary}, adapted to the binary case. \end{itemize} \vspace{0.2cm} Proposition \ref{bad set double} and Proposition \ref{bad set double measure} are the relevant results controlling recollisions after a binary adjunction. \vspace{0.2cm} \item\textbf{Ternary adjunction}: Two particles are added, forming a ternary interaction with one of the existing particles.
The pathological situations that might arise under backwards time evolution are the following: \begin{itemize} \item The newly formed ternary collisional configuration runs into a ternary interaction under time evolution. This case was studied in depth in \cite{ternary}. We eliminate this pathological situation using Proposition \ref{bad set tilde triple}; for its proof, we refer to \cite{ternary}. \item The newly formed ternary collisional configuration runs into a binary interaction under time evolution. This is the most challenging case to treat, and the heart of the technical contribution, because the scaling \eqref{common scaling intro} does not directly help, in contrast to the binary adjunction, where one of the colliding particles enters an interaction zone. To treat this case, we use new algebraic tools (see Proposition \ref{bad set triple}) to exclude the sets of initial data which lead to these pathological trajectories, and we develop elaborate geometric estimates to control their measure. The geometric estimates needed are thoroughly presented in Section \ref{sec:geometric}. In particular, Subsection \ref{subsec:relying} is devoted to developing novel tools which rely on an appropriate representation of $(2d-1)$-spheres (see \eqref{representation of sphere for fixed omega_1}). More specifically, in \ref{subsubsec:ball} we perform some initial truncations of the impact directions; in \ref{subsubsec:conic} we establish certain spherical cap and conic region estimates needed to control the precollisional case; and \ref{subsubsec:annulus} focuses on developing the necessary annuli estimates, enabling us to control the postcollisional case using precollisional arguments. After establishing the necessary geometric tools, we employ them in Proposition \ref{bad set triple measure} to show that the corresponding set constructed in Proposition \ref{bad set triple} is negligible.
\end{itemize} \end{enumerate} \subsection{Notation} For convenience, we introduce some basic notation which will be frequently used throughout the manuscript: \begin{itemize} \item $d\in\mathbb{N}$ will be a fixed dimension with $d\geq 2$. \item Given $x,y\in\mathbb{R}$, we write \begin{align} x&\lesssim y\Leftrightarrow\exists C_d>0: x\leq C_d y,\label{not ineq}\\ x&\simeq y\Leftrightarrow\exists C_d>0: x=C_d y,\label{not eq}\\ x&\thickapprox y\Leftrightarrow\exists C_{1,d},C_{2,d}>0: C_{1,d}y\leq x\leq C_{2,d}y.\label{not thick} \end{align} \item Given $n\in\mathbb{N}$, $\rho>0$ and $w\in\mathbb{R}^n$, we write $B_\rho^n(w)$ for the closed ball in $\mathbb{R}^n$ of radius $\rho>0$, centered at $w\in\mathbb{R}^n$. In particular, we write $B_\rho^n:=B_\rho^n(0)$ for the $\rho$-ball centered at the origin. \item Given $n\in\mathbb{N}$ and $\rho>0$, we write $\mathbb{S}_\rho^{n-1}$ for the $(n-1)$-sphere of radius $\rho>0$. \item When we write $x<<y$, we mean that there is a small enough constant $0<c<1$, independent of $x,y$, such that $x<cy$. This constant $c$ is appropriately chosen for the calculations to make sense. \end{itemize} \subsection*{Acknowledgements} I.A. and N.P. acknowledge support from NSF grants DMS-1516228 and DMS-1840314. The authors are thankful to Irene M. Gamba, Maja Taskovi\'c, Thomas Chen, Alexis Vasseur and Philip Morrison for helpful discussions regarding physical and mathematical aspects of the problem. \section{Collisional transformations}\label{sec:collisional} In this section we define the collisional transformations of two and three interacting particles, respectively. In the two-particle case, particles will interact as regular hard spheres, while in the three-particle case, particles will interact as triplets of particles with an interaction zone. \subsection{Binary interaction} Here, we define the binary collisional transformation of two interacting hard spheres, induced by an impact direction $\omega_1\in\mathbb{S}_1^{d-1}$.
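As a quick sanity check of the collision law defined below, the following Python sketch (illustrative only; it is not part of the mathematical development, and all names are ours) implements the map $T_{\omega_1}$ of \eqref{binary formulas without} and verifies numerically the conservation and involution properties collected in Proposition \ref{binary scattering properties}.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def T_omega1(omega1, v1, v2):
    """Binary collision law: v1' = v1 + <omega1, v2 - v1> omega1 and
    v2' = v2 - <omega1, v2 - v1> omega1, for a unit impact direction omega1."""
    b2 = dot(omega1, [b - a for a, b in zip(v1, v2)])  # binary cross-section
    v1_out = [a + b2 * w for a, w in zip(v1, omega1)]
    v2_out = [a - b2 * w for a, w in zip(v2, omega1)]
    return v1_out, v2_out

# Example in dimension d = 2.
omega1 = [1.0, 0.0]                       # unit impact direction
v1, v2 = [0.3, -1.0], [-0.7, 0.5]
v1p, v2p = T_omega1(omega1, v1, v2)

momentum_in = [a + b for a, b in zip(v1, v2)]
momentum_out = [a + b for a, b in zip(v1p, v2p)]
energy_in = dot(v1, v1) + dot(v2, v2)
energy_out = dot(v1p, v1p) + dot(v2p, v2p)
v1pp, v2pp = T_omega1(omega1, v1p, v2p)   # involution: applying twice returns the input
```

Momentum and kinetic energy agree before and after the collision, and applying $T_{\omega_1}$ twice recovers the incoming velocities, in accordance with properties (i), (ii) and (v) of Proposition \ref{binary scattering properties}.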
This will be the law under which the velocities $(v_1,v_2)$ of two interacting hard spheres, with impact direction $\omega_1\in\mathbb{S}_1^{d-1}$, instantaneously transform. The impact direction will represent the scaled relative position of the colliding hard spheres. \begin{definition} Consider a binary impact direction $\omega_1\in\mathbb{S}_1^{d-1}$. We define the binary collisional transformation induced by $\omega_1\in\mathbb{S}_1^{d-1}$ as the map $ T_{\omega_1}:(v_1,v_2)\in\mathbb{R}^{2d}\to (v_1',v_2')\in\mathbb{R}^{2d}, $ where \begin{equation}\label{binary formulas without} \begin{aligned} v_1'&=v_1+\langle\omega_1,v_2-v_1\rangle\omega_1,\\ v_2'&=v_2-\langle\omega_1,v_2-v_1\rangle\omega_1. \end{aligned} \end{equation} \end{definition} Let us introduce some notation which we will be using constantly. We define the binary cross-section \begin{equation}\label{binary cross} b_2(\omega_1,\nu_1):=\langle\omega_1,\nu_1\rangle,\quad (\omega_1,\nu_1)\in\mathbb{S}_1^{d-1}\times\mathbb{R}^d. \end{equation} Under this notation, \eqref{binary formulas without} can be written as: \begin{equation}\label{binary formulas with} \begin{aligned} v_1'&=v_1+b_2(\omega_1,v_2-v_1)\omega_1,\\ v_2'&=v_2-b_2(\omega_1,v_2-v_1)\omega_1. \end{aligned} \end{equation} One can verify that \eqref{binary formulas with} provide the general solution, parametrized by $\omega_1\in\mathbb{S}_1^{d-1}$, of the binary momentum-energy conservation system: \begin{equation}\label{MEC binary} \begin{aligned} v_1'+v_2'&=v_1+v_2,\\ |v_1'|^2+|v_2'|^2&=|v_1|^2+|v_2|^2. \end{aligned} \end{equation} Given a binary impact direction $\omega_1\in\mathbb{S}_1^{d-1}$, the binary collisional transformation $T_{\omega_1}$ satisfies the following properties (see, e.g., \cite{cercignani gases}). \begin{proposition}\label{binary scattering properties}Consider a binary impact direction $\omega_{1}\in\mathbb{S}_{1}^{d-1}$.
The induced binary collisional transformation $T_{\omega_{1}}$ has the following properties: \begin{enumerate}[(i)] \item Conservation of momentum \begin{equation}\label{cons momentum binary} v_1'+v_2'=v_1+v_2. \end{equation} \item Conservation of energy \begin{equation}\label{cons energy binary} |v_1'|^2+|v_2'|^2=|v_1|^2+|v_2|^2. \end{equation} \item Conservation of relative velocities magnitude \begin{equation}\label{relative veloc binary} |v_1'-v_2'|=|v_1-v_2|. \end{equation} \item Micro-reversibility of the binary cross-section \begin{equation}\label{skew symmetry binary} b_2(\omega_1,v_2'-v_1')=-b_2(\omega_1,v_2-v_1). \end{equation} \item $T_{\omega_{1}}$ is a linear involution, i.e., $T_{\omega_{1}}$ is linear and $ T_{\omega_{1}}^{-1}=T_{\omega_{1}}. $ In particular, $ |\det T_{\omega_{1}}|=1, $ so $T_{\omega_{1}}$ is measure-preserving. \end{enumerate} \end{proposition} \subsection{Ternary interaction} Now we define the ternary collisional transformation, induced by a given pair of impact directions, and investigate its properties. The interaction considered will be an instantaneous interaction of three particles with an interaction zone (for more details see \cite{ternary}). This will be the law under which the velocities $(v_1,v_2,v_3)$ of three interacting particles, with impact directions $(\omega_1,\omega_2)\in\mathbb{S}_1^{2d-1}$, instantaneously transform. The impact directions will represent the scaled relative positions of the three particles in the interaction zone setting. \begin{definition} Consider a pair of impact directions $(\omega_{1},\omega_{2})\in\mathbb{S}_{1}^{2d-1}$.
We define the ternary collisional transformation induced by $(\omega_1,\omega_2)\in\mathbb{S}_1^{2d-1}$ as the map $ T_{\omega_1,\omega_2}:( v_1, v_2, v_3 )\in\mathbb{R}^{3d}\longrightarrow( v_1^*, v_2^*, v_3^*)\in\mathbb{R}^{3d}, $ where \begin{equation}\label{formulas ternary}\begin{cases} v_1^*=v_1+c_{\omega_{1},\omega_{2}, v_1,v_2,v_3}(\omega_{1}+\omega_{2}),\\ v_2^*=v_2-c_{\omega_{1},\omega_{2}, v_1,v_2,v_3}\omega_{1},\\ v_3^*=v_3-c_{\omega_{1},\omega_{2}, v_1,v_2,v_3}\omega_2, \end{cases} \end{equation} with \begin{equation}\label{definition of c} c_{\omega_1,\omega_2,v_1,v_2,v_3}=\frac{\langle\omega_1,v_2-v_1\rangle+\langle\omega_2,v_3-v_1\rangle}{1+\langle\omega_1,\omega_2\rangle}. \end{equation} \end{definition} We also define the ternary cross-section \begin{equation}\label{cross} b_3(\omega_1,\omega_2,\nu_1,\nu_2):=\langle\omega_1,\nu_1\rangle+\langle\omega_2,\nu_2\rangle,\quad (\omega_1,\omega_2)\in\mathbb{S}_1^{2d-1},\quad (\nu_1,\nu_2)\in\mathbb{R}^{2d}. \end{equation} Notice that, given $(\omega_1,\omega_2,v_1,v_2,v_3)\in\mathbb{S}_1^{2d-1}\times\mathbb{R}^{3d}$, we clearly have \begin{equation}\label{relation cross-c} b_3(\omega_1,\omega_2,v_2-v_1,v_3-v_1)=\left(1+\langle\omega_1,\omega_2\rangle\right)c_{\omega_1,\omega_2,v_1,v_2,v_3}. \end{equation} \begin{remark}\label{estimate for b and c} The Cauchy-Schwarz inequality and the fact that $(\omega_1,\omega_2)\in\mathbb{S}_1^{2d-1}$ yield \begin{equation}\label{bound on inverse quotient} \frac{2}{3}\leq\frac{1}{1+\langle\omega_1,\omega_2\rangle}\leq 2, \end{equation} hence, for all $(\omega_1,\omega_2,v_1,v_2,v_3)\in\mathbb{S}_1^{2d-1}\times\mathbb{R}^{3d}$ with $b_3(\omega_1,\omega_2,v_2-v_1,v_3-v_1)\geq 0$, relation \eqref{relation cross-c} implies \begin{equation}\label{bound for b relative to c} \frac{2}{3}b_3(\omega_1,\omega_2,v_2-v_1,v_3-v_1)\leq c_{\omega_1,\omega_2,v_1,v_2,v_3}\leq 2b_3(\omega_1,\omega_2,v_2-v_1,v_3-v_1).
\end{equation} \end{remark} It has been shown in \cite{ternary,thesis} that \eqref{formulas ternary} provide the general solution, parametrized by $(\omega_1,\omega_2)\in\mathbb{S}_1^{2d-1}$, of the ternary momentum-energy conservation system: \begin{equation}\label{ternary MEC} \begin{aligned} v_1^*+v_2^*+v_3^*&=v_1+v_2+v_3,\\ |v_1^*|^2+|v_2^*|^2+|v_3^*|^2&=|v_1|^2+|v_2|^2+|v_3|^2. \end{aligned} \end{equation} The main properties of the ternary collisional transformation are summarized in the following proposition; for the proof, see Proposition 2.3 in \cite{ternary}. \begin{proposition}\label{triary scattering properties} Consider a pair of impact directions $(\omega_{1},\omega_{2})\in\mathbb{S}_{1}^{2d-1}$. The induced collisional transformation $T_{\omega_{1},\omega_{2}}$ has the following properties: \begin{enumerate}[(i)] \item Conservation of momentum \begin{equation}\label{triary cons momentum} v_1^*+v_2^*+v_3^*=v_1+v_2+v_3. \end{equation} \item Conservation of energy \begin{equation}\label{triary cons energy} |v_1^*|^2+|v_2^*|^2+|v_3^*|^2=|v_1|^2+|v_2|^2+|v_3|^2. \end{equation} \item Conservation of relative velocities magnitude \begin{equation}\label{triary relative velocities} |v_1^*-v_2^*|^2+|v_1^*-v_3^*|^2+|v_2^*-v_3^*|^2=|v_1-v_2|^2+|v_1-v_3|^2+|v_2-v_3|^2. \end{equation} \item Micro-reversibility of the ternary cross-section \begin{equation}\label{skew symmetry triary} b_3(\omega_{1},\omega_{2},v_2^*-v_1^*,v_3^*-v_1^*)=-b_3(\omega_{1},\omega_{2},v_2-v_1,v_3-v_1). \end{equation} \item $T_{\omega_{1},\omega_{2}}$ is a linear involution, i.e., $T_{\omega_{1},\omega_{2}}$ is linear and $ T_{\omega_{1},\omega_{2}}^{-1}=T_{\omega_{1},\omega_{2}}. $ In particular, $ |\det T_{\omega_{1},\omega_{2}}|=1, $ so $T_{\omega_{1},\omega_{2}}$ is measure-preserving.
\end{enumerate} \end{proposition} \section{Dynamics of $m$-particles}\label{sec:dynamics} In this section we rigorously define the dynamics of $m$ hard spheres of diameter $\sigma_2$ and interaction zone $\sigma_3$, where $0<\sigma_2<\sigma_3<1$. Heuristically speaking, particles perform rectilinear motion as long as there is no interaction (binary or ternary), and they interact through the binary or ternary collision law when a binary or ternary interaction occurs, respectively. However, it is far from obvious that a global dynamics can be defined, since the system might run into pathological configurations, e.g. more than one interaction at a time, infinitely many interactions in finite time, or interactions which graze under time evolution. The goal of this section is to extract a set of measure zero such that, on its complement, a global-in-time, measure-preserving flow can be defined. Throughout this section we consider $m\in\mathbb{N}$ and $0<\sigma_2<\sigma_3<1$. \subsection{Phase space definitions} For convenience we define the following index sets: \begin{align} \text{For $m\geq 2$: }\mathcal{I}_m^2&=\left\{(i,j)\in\{1,...,m\}^2:i<j\right\}.\label{index 2}\\ \text{For $m\geq 3$: }\mathcal{I}_m^3&=\left\{(i,j,k)\in\{1,...,m\}^3:i<j<k\right\}.\label{index 3} \end{align} Given positions $(x_1,x_2)\in\mathbb{R}^{2d}$, we define the binary distance: \begin{equation}\label{binary distance} d_2(x_1,x_2):=|x_1-x_2|, \end{equation} and given positions $(x_1,x_2,x_3)\in\mathbb{R}^{3d}$, we define the ternary distance: \begin{equation}\label{triary distance} d_3(x_1;x_2,x_3)=\sqrt{|x_1-x_2|^2+|x_1-x_3|^2}.
\end{equation} For $m\geq 3$, we define the phase space of $m$-particles of diameter $\sigma_2>0$ and interaction zone $\sigma_3>0$, with $\sigma_2<\sigma_3<1$, as: \begin{equation}\label{phase space} \begin{aligned} \mathcal{D}_{m,\sigma_2,\sigma_3}=\big\{Z_m=(X_m,V_m)\in\mathbb{R}^{2dm}: d_2(x_i,x_j)\geq\sigma_2,\text{ }\forall (i,j)\in\mathcal{I}_m^2, \text{ and }d_3(x_i;x_j,x_k)\geq\sqrt{2}\sigma_3,\text{ }\forall (i,j,k)\in\mathcal{I}_m^3\big\}, \end{aligned} \end{equation} where $X_m=(x_1,...,x_m)\in\mathbb{R}^{dm}$ represents the positions of the $m$-particles, while $V_m=(v_1,...,v_m)\in\mathbb{R}^{dm}$ represents the velocities of the $m$-particles. For convenience we also define \begin{equation}\label{phase space m=2} \mathcal{D}_{2,\sigma_2,\sigma_3}=\left\{Z_2=(X_2,V_2)\in\mathbb{R}^{4d}:|x_1-x_2|\geq\sigma_2\right\},\quad \mathcal{D}_{1,\sigma_2,\sigma_3}=\mathbb{R}^{2d}. \end{equation} For $m\geq 3$, the phase space $\mathcal{D}_{m,\sigma_2,\sigma_3}$ decomposes as: $\mathcal{D}_{m,\sigma_2,\sigma_3}=\mathring{\mathcal{D}}_{m,\sigma_2,\sigma_3}\cup\partial\mathcal{D}_{m,\sigma_2,\sigma_3},$ where the interior is given by: \begin{equation}\label{interior phase space} \begin{aligned} \mathring{\mathcal{D}}_{m,\sigma_2,\sigma_3}=\big\{Z_m=(X_m,V_m)\in\mathbb{R}^{2dm}: d_2(x_i,x_j)>\sigma_2,\text{ }\forall (i,j)\in\mathcal{I}_m^2, \text{ and }d_3(x_i;x_j,x_k)>\sqrt{2}\sigma_3,\text{ }\forall (i,j,k)\in\mathcal{I}_m^3\big\}, \end{aligned} \end{equation} and the boundary is given by: \begin{equation}\label{boundary} \partial\mathcal{D}_{m,\sigma_2,\sigma_3}=\partial_2\mathcal{D}_{m,\sigma_2,\sigma_3}\cup\partial_3\mathcal{D}_{m,\sigma_2,\sigma_3}, \end{equation} where $\partial_2\mathcal{D}_{m,\sigma_2,\sigma_3}$ is the binary boundary: \begin{equation}\label{binary boundary} \partial_2\mathcal{D}_{m,\sigma_2,\sigma_3}=\left\{Z_m=(X_m,V_m)\in\mathcal{D}_{m,\sigma_2,\sigma_3}:\exists (i,j)\in\mathcal{I}_m^2\text{ with }d_2(x_i,x_j)=\sigma_2\right\},
\end{equation} and $\partial_3\mathcal{D}_{m,\sigma_2,\sigma_3}$ is the ternary boundary: \begin{equation}\label{triary boundary} \partial_3\mathcal{D}_{m,\sigma_2,\sigma_3}=\left\{Z_m=(X_m,V_m)\in\mathcal{D}_{m,\sigma_2,\sigma_3}:\exists (i,j,k)\in\mathcal{I}_m^3\text{ with }d_3(x_i;x_j,x_k)=\sqrt{2}\sigma_3\right\}. \end{equation} Elements of $\mathcal{D}_{m,\sigma_2,\sigma_3}$ are called configurations, elements of $\mathring{\mathcal{D}}_{m,\sigma_2,\sigma_3}$ are called noncollisional configurations, and elements of $\partial\mathcal{D}_{m,\sigma_2,\sigma_3}$ are called collisional configurations, or just collisions. Elements of $\partial_2\mathcal{D}_{m,\sigma_2,\sigma_3}$ are called binary collisions, while elements of $\partial_3\mathcal{D}_{m,\sigma_2,\sigma_3}$ are called ternary collisions. When we refer to a collision, it will be either binary or ternary. Clearly the binary boundary can be written as: $\partial_2\mathcal{D}_{m,\sigma_2,\sigma_3}=\bigcup_{(i,j)\in\mathcal{I}_m^2}\Sigma_{ij}^2,$ where $\Sigma_{ij}^2$ are the binary collisional surfaces given by \begin{equation}\label{binary collisional surfaces} \Sigma_{ij}^2:=\left\{Z_m\in\mathcal{D}_{m,\sigma_2,\sigma_3}:d_2(x_i,x_j)=\sigma_2\right\}. \end{equation} In the same spirit, the ternary boundary can be written as: $\partial_3\mathcal{D}_{m,\sigma_2,\sigma_3}=\bigcup_{(i,j,k)\in\mathcal{I}_m^3}\Sigma_{ijk}^3,$ where $\Sigma_{ijk}^3$ are the ternary collisional surfaces given by \begin{equation}\label{triary collisional surfaces} \Sigma_{ijk}^3:=\left\{Z_m\in\mathcal{D}_{m,\sigma_2,\sigma_3}:d_3(x_i;x_j,x_k)=\sqrt{2}\sigma_3\right\}. \end{equation} We now further decompose collisions into simple binary collisions, simple ternary collisions and multiple collisions.
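The distances $d_2$, $d_3$ and the boundary decomposition above are straightforward to test numerically. The following Python sketch (illustrative only; all names are ours) decides whether a position configuration lies outside the phase space, in its interior, on a binary collisional surface, or on a ternary collisional surface, for given $0<\sigma_2<\sigma_3$.

```python
from itertools import combinations
from math import sqrt

def d2(x, y):
    """Binary distance |x - y|."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def d3(x, y, z):
    """Ternary distance centered at x: sqrt(|x - y|^2 + |x - z|^2)."""
    return sqrt(d2(x, y) ** 2 + d2(x, z) ** 2)

def classify(X, sigma2, sigma3, tol=1e-9):
    """Classify positions X = [x_1, ..., x_m] as 'outside' the phase space,
    'interior', 'binary boundary', 'ternary boundary', or 'multiple'."""
    m = len(X)
    pair = [d2(X[i], X[j]) for i, j in combinations(range(m), 2)]
    trip = [d3(X[i], X[j], X[k]) for i, j, k in combinations(range(m), 3)]
    if any(p < sigma2 - tol for p in pair) or any(t < sqrt(2) * sigma3 - tol for t in trip):
        return "outside"
    on2 = sum(abs(p - sigma2) <= tol for p in pair)
    on3 = sum(abs(t - sqrt(2) * sigma3) <= tol for t in trip)
    if on2 + on3 == 0:
        return "interior"
    if on2 + on3 > 1:
        return "multiple"
    return "binary boundary" if on2 else "ternary boundary"

sigma2, sigma3 = 0.1, 0.2
interior = classify([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]], sigma2, sigma3)
binary = classify([[0.0, 0.0], [0.1, 0.0], [1.0, 0.0]], sigma2, sigma3)
ternary = classify([[0.0, 0.0], [0.2, 0.0], [0.0, 0.2]], sigma2, sigma3)
```

Note that, consistently with Remark \ref{remark on phase space}, the ternary boundary example requires $\sigma_2<\sigma_3$: both pair distances equal $2\sigma_2<\sqrt{2}\sigma_3\cdot\sqrt{2}$, yet strictly exceed $\sigma_2$.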
In particular, we define simple binary collisions as: \begin{equation}\label{simple binary collisions} \begin{aligned} \partial_{2,sc}\mathcal{D}_{m,\sigma_2,\sigma_3}:=&\big\{Z_m=(X_m,V_m)\in\mathcal{D}_{m,\sigma_2,\sigma_3}:\exists (i,j)\in\mathcal{I}_m^2 \text{ with }Z_m\in\Sigma_{ij}^2,\\ &Z_m\notin\Sigma_{i'j'}^2,\text{ }\forall (i',j')\in\mathcal{I}_m^2\setminus\{(i,j)\},\text{ }Z_m\notin\Sigma_{i'j'k'}^3,\text{ }\forall (i',j',k')\in\mathcal{I}_m^3 \big\}. \end{aligned} \end{equation} We also define simple ternary collisions as: \begin{equation}\label{simple triary collisions} \begin{aligned} \partial_{3,sc}\mathcal{D}_{m,\sigma_2,\sigma_3}:=&\big\{Z_m=(X_m,V_m)\in\mathcal{D}_{m,\sigma_2,\sigma_3}:\exists (i,j,k)\in\mathcal{I}_m^3 \text{ with }Z_m\in\Sigma_{ijk}^3,\\ &Z_m\notin\Sigma_{i'j'k'}^3,\text{ }\forall (i',j',k')\in\mathcal{I}_m^3\setminus\{(i,j,k)\}, Z_m\notin\Sigma_{i'j'}^2,\text{ }\forall (i',j')\in\mathcal{I}_m^2 \big\}. \end{aligned} \end{equation} \begin{remark}\label{remark on phase space} The assumption $\sigma_2<\sigma_3$ made at the beginning of the section is necessary for $\partial_{3,sc}\mathcal{D}_{m,\sigma_2,\sigma_3}$ to be non-empty. Indeed, let $\sigma_2\geq\sigma_3$ and assume that $\partial_{3,sc}\mathcal{D}_{m,\sigma_2,\sigma_3}\neq\emptyset$. Consider $Z_m\in \partial_{3,sc}\mathcal{D}_{m,\sigma_2,\sigma_3}$. Then, by \eqref{simple triary collisions}, there is $(i,j,k)\in\mathcal{I}_m^3$ such that \begin{equation}\label{bigger triple} |x_i-x_j|^2+|x_i-x_k|^2=2\sigma_3^2, \end{equation} and \begin{equation}\label{bigger double} |x_i-x_j|>\sigma_2,\quad |x_i-x_k|>\sigma_2. \end{equation} By \eqref{bigger triple}, at least one of $|x_i-x_j|$ or $|x_i-x_k|$ has to be smaller than or equal to $\sigma_3$. Assume, without loss of generality, that $|x_i-x_j|\leq\sigma_3$. Since $\sigma_2\geq\sigma_3$, we obtain $|x_i-x_j|\leq\sigma_2$, which contradicts \eqref{bigger double}.
Therefore, if $\sigma_2\geq\sigma_3$, we have $\partial_{3,sc}\mathcal{D}_{m,\sigma_2,\sigma_3}=\emptyset$. \end{remark} A simple collision will be a binary or ternary simple collision, i.e. \begin{equation}\label{simple collision boundary} \partial_{sc}\mathcal{D}_{m,\sigma_2,\sigma_3}:=\partial_{2,sc}\mathcal{D}_{m,\sigma_2,\sigma_3}\cup \partial_{3,sc}\mathcal{D}_{m,\sigma_2,\sigma_3}. \end{equation} Multiple collisions are configurations which are not simple, i.e. \begin{equation}\label{multiple collisions} \begin{aligned} \partial_{mu}\mathcal{D}_{m,\sigma_2,\sigma_3}&:=\partial\mathcal{D}_{m,\sigma_2,\sigma_3}\setminus\partial_{sc}\mathcal{D}_{m,\sigma_2,\sigma_3}. \end{aligned} \end{equation} \begin{remark} For $m=2$, there is only binary boundary. \end{remark} For the binary case, we give the following definitions: \begin{definition} Let $m\geq 2$ and $Z_m\in\partial_{2,sc}\mathcal{D}_{m,\sigma_2,\sigma_3}$. Then there is a unique $(i,j)\in\mathcal{I}_m^2$ such that $Z_m\in\Sigma_{ij}^{2}$ and $Z_m\notin\Sigma_{i'j'k'}^3$, for all $(i',j',k')\in\mathcal{I}_m^3$. In this case we will say $Z_m$ is an $(i,j)$ collision and we will write \begin{equation}\label{simple collision surfaces binary} \Sigma_{ij}^{2,sc}=\left\{Z_m\in\mathcal{D}_{m,\sigma_2,\sigma_3}: Z_m\mbox{ is an $(i,j)$ collision}\right\}. \end{equation} Clearly $\Sigma_{ij}^{2,sc}\cap\Sigma_{i'j'}^{2,sc}=\emptyset$, for all $ (i,j)\neq(i',j')\in\mathcal{I}_m^2$, and $\partial_{2,sc}\mathcal{D}_{m,\sigma_2,\sigma_3}$ decomposes as: \begin{equation}\label{decomposition simple binary} \partial_{2,sc}\mathcal{D}_{m,\sigma_2,\sigma_3}=\bigcup_{(i,j)\in\mathcal{I}_m^2}\Sigma_{ij}^{2,sc}. \end{equation} \end{definition} \begin{remark} Let $m\geq 2$, $(i,j)\in\mathcal{I}_m^2$ and $Z_m\in\Sigma_{ij}^{2,sc}$. Then \begin{equation}\label{def of omega binary} \omega_{1}:=\frac{x_j-x_i}{\sigma_2}\in\mathbb{S}_1^{d-1}.
\end{equation} Therefore, each $(i,j)$ collision naturally induces a binary impact direction $\omega_{1}\in\mathbb{S}_1^{d-1}$ and consequently a binary collisional transformation $T_{\omega_{1}}$. \end{remark} \begin{definition} Let $m\geq 2$, $(i,j)\in\mathcal{I}_m^2$ and $Z_m=(X_m,V_m)\in\Sigma_{ij}^{2,sc}$. We write $Z_m'=(X_m,V_m'), $ where \begin{equation*} V_m'=(v_1,...,v_{i-1},v_i',v_{i+1},...,v_{j-1},v_j',v_{j+1},...,v_{m}), \end{equation*} and $ (v_i',v_j')=T_{\omega_{1}}(v_i,v_j),\quad \omega_{1}\in\mathbb{S}_1^{d-1}\text{ is given by \eqref{def of omega binary}}. $ \end{definition} In the same spirit, for the ternary case, we give the following definitions: \begin{definition} Let $m\geq 3$ and $Z_m\in\partial_{3,sc}\mathcal{D}_{m,\sigma_2,\sigma_3}$. Then there is a unique $(i,j,k)\in\mathcal{I}_m^3$ such that $Z_m\in\Sigma_{ijk}^3$ and $Z_m\notin \Sigma_{i'j'}^2$, for all $(i',j')\in\mathcal{I}_m^2$. In this case we will say $Z_m$ is an $(i;j,k)$ collision and we will write \begin{equation}\label{simple collision surfaces triary} \Sigma_{ijk}^{3,sc}=\left\{Z_m\in\mathcal{D}_{m,\sigma_2,\sigma_3}: Z_m\mbox{ is an $(i;j,k)$ collision}\right\}. \end{equation} Clearly $\Sigma_{ijk}^{3,sc}\cap\Sigma_{i'j'k'}^{3,sc}=\emptyset$, for all $(i,j,k)\neq(i',j',k')\in\mathcal{I}_m^3$, and $\partial_{3,sc}\mathcal{D}_{m,\sigma_2,\sigma_3}$ decomposes as: \begin{equation}\label{decomposition of simple ternary} \partial_{3,sc}\mathcal{D}_{m,\sigma_2,\sigma_3}=\bigcup_{(i,j,k)\in\mathcal{I}_m^3}\Sigma_{ijk}^{3,sc}. \end{equation} \end{definition} \begin{remark} Let $m\geq 3$, $(i,j,k)\in\mathcal{I}_m^3$ and $Z_m\in\Sigma_{ijk}^{3,sc}$. Then \begin{equation}\label{def of omega triary} (\omega_{1},\omega_2):=\frac{1}{\sqrt{2}\sigma_3}\left(x_j-x_i,x_k-x_i\right)\in\mathbb{S}_1^{2d-1}.
\end{equation} Therefore, each $(i;j,k)$ collision naturally induces ternary impact directions $(\omega_{1},\omega_{2})\in\mathbb{S}_1^{2d-1}$ and consequently a ternary collisional transformation $T_{\omega_{1},\omega_{2}}$. \end{remark} \begin{definition} Let $m\geq 3$, $(i,j,k)\in\mathcal{I}_m^3$ and $Z_m=(X_m,V_m)\in\Sigma_{ijk}^{3,sc}$. We write $ Z_m^*=(X_m,V_m^*), $ where \begin{equation*} V_m^*=(v_1,...,v_{i-1},v_i^*,v_{i+1},...,v_{j-1},v_j^*,v_{j+1},...,v_{k-1},v_k^*,v_{k+1},...,v_{m}), \end{equation*} and $ (v_i^*,v_j^*,v_k^*)=T_{\omega_{1},\omega_{2}}(v_i,v_j,v_k),\quad (\omega_{1},\omega_{2})\in\mathbb{S}_1^{2d-1}\text{ are given by \eqref{def of omega triary}}. $ \end{definition} \subsection{Classification of simple collisions} We will now classify simple collisions in order to eliminate collisions which graze in time. For this purpose, we introduce the following definitions for the binary and the ternary case respectively. For the binary case: \begin{definition}\label{collision class binary} Let $m\geq 2$, $(i,j)\in\mathcal{I}_m^2$ and $Z_m\in\Sigma_{ij}^{2,sc}$. The configuration $Z_m$ is called: \begin{itemize} \item binary precollisional when $b_2(\omega_{1},v_j-v_i)<0,$ \item binary postcollisional when $b_2(\omega_{1},v_j-v_i)>0,$ \item binary grazing when $b_2(\omega_{1},v_j-v_i)=0,$ \end{itemize} where $\omega_1\in\mathbb{S}_1^{d-1}$ is given by \eqref{def of omega binary} and $b_2$ is given by \eqref{binary cross}. \end{definition} \begin{remark} Let $m\geq 2$, $(i,j)\in\mathcal{I}_m^2$ and $Z_m\in\Sigma_{ij}^{2,sc}$. Using \eqref{skew symmetry binary}, we obtain the following: \begin{enumerate}[(i)] \item $Z_m$ is binary precollisional iff $Z_m'$ is binary postcollisional.\vspace{0.2cm} \item $Z_m$ is binary postcollisional iff $Z_m'$ is binary precollisional.\vspace{0.2cm} \item $Z_m=Z_m'$ iff $Z_m$ is binary grazing.
\end{enumerate} \end{remark} For the ternary case: \begin{definition}\label{collision class triary} Let $m\geq 3$, $(i,j,k)\in\mathcal{I}_m^3$ and $Z_m\in\Sigma_{ijk}^{3,sc}$. The configuration $Z_m$ is called: \begin{itemize} \item ternary precollisional when $b_3(\omega_{1},\omega_{2},v_j-v_i,v_k-v_i)<0,$ \item ternary postcollisional when $b_3(\omega_{1},\omega_{2},v_j-v_i,v_k-v_i)>0,$ \item ternary grazing when $b_3(\omega_{1},\omega_{2},v_j-v_i,v_k-v_i)=0,$ \end{itemize} where $(\omega_{1},\omega_{2})\in\mathbb{S}_1^{2d-1}$ is given by \eqref{def of omega triary} and $b_3$ is given by \eqref{cross}. \end{definition} \begin{remark} Let $m\geq 3$, $(i,j,k)\in\mathcal{I}_m^3$ and $Z_m\in\Sigma_{ijk}^{3,sc}$. Using \eqref{skew symmetry triary}, we obtain the following: \begin{enumerate}[(i)] \item $Z_m$ is ternary precollisional iff $Z_m^*$ is ternary postcollisional.\vspace{0.2cm} \item $Z_m$ is ternary postcollisional iff $Z_m^*$ is ternary precollisional.\vspace{0.2cm} \item $Z_m=Z_m^*$ iff $Z_m$ is ternary grazing. \end{enumerate} \end{remark} We will just say precollisional, postcollisional or grazing configuration when it is implied whether a simple collision is binary or ternary. For $m\geq 2$, we refine the phase space defining \begin{equation}\label{refined phase space} \mathcal{D}_{m,\sigma_2,\sigma_3}^*:=\mathring{\mathcal{D}}_{m,\sigma_2,\sigma_3}\cup \partial_{sc,ng}\mathcal{D}_{m,\sigma_2,\sigma_3}, \end{equation} where $\partial_{sc,ng}\mathcal{D}_{m,\sigma_2,\sigma_3}$ denotes the part of $\partial\mathcal{D}_{m,\sigma_2,\sigma_3}$ consisting of simple, non-grazing collisions, i.e. defined as \begin{equation}\label{refined phase boundary} \partial_{sc,ng}\mathcal{D}_{m,\sigma_2,\sigma_3}:=\left\{Z_m\in\partial_{sc}\mathcal{D}_{m,\sigma_2,\sigma_3}: Z_m\text{ is non-grazing}\right\}.
\end{equation} It is immediate that $\mathcal{D}_{m,\sigma_2,\sigma_3}^*$ is a full measure subset of $\mathcal{D}_{m,\sigma_2,\sigma_3}$ and $\partial_{sc,ng}\mathcal{D}_{m,\sigma_2,\sigma_3}$ is a full surface measure subset of $\partial\mathcal{D}_{m,\sigma_2,\sigma_3}$, since its complement consists of lower-dimensional submanifolds of $\partial\mathcal{D}_{m,\sigma_2,\sigma_3}$, which have zero surface measure. \subsection{Construction of the local flow} The next lemma shows that the flow can be locally defined for any initial configuration $Z_m\in\mathcal{D}_{m,\sigma_2,\sigma_3}^*$ up to the time of the first collision. \begin{lemma}\label{elem dyn step} Let $m\geq 3$ and $Z_m\in\mathcal{D}_{m,\sigma_2,\sigma_3}^*$. Then there is a time $\tau^1_{Z_m}\in (0,\infty]$ such that defining $Z_m(\cdot):[0,\tau_{Z_m}^1]\to\mathbb{R}^{2dm}$ by: \begin{equation*} Z_m(t)= \begin{cases} (X_m+tV_m,V_m)\quad \text{if }Z_m\text{ is noncollisional or postcollisional,}\\ (X_m+tV_m',V_m'),\quad\text{if }Z_m\text{ is binary precollisional},\\ (X_m+tV_m^*,V_m^*),\quad\text{if }Z_m\text{ is ternary precollisional},\\ \end{cases} \end{equation*} the following hold: \begin{enumerate}[(i)] \item $Z_m(t)\in\mathring{\mathcal{D}}_{m,\sigma_2,\sigma_3},\quad\forall t\in (0,\tau^1_{Z_m})$.\vspace{0.2cm} \item if $\tau^1_{Z_m}<\infty$, then $Z_m(\tau^1_{Z_m})\in\partial\mathcal{D}_{m,\sigma_2,\sigma_3}$.\vspace{0.2cm} \item If $Z_m\in\Sigma_{ij}^{2,sc}$ for some $(i,j)\in\mathcal{I}_m^2$, then $Z_m(\tau^1_{Z_m})\notin\Sigma_{ij}^{2}$.\vspace{0.2cm} \item If $Z_m\in\Sigma_{ijk}^{3,sc}$ for some $(i,j,k)\in\mathcal{I}_m^3$, then $Z_m(\tau^1_{Z_m})\notin\Sigma_{ijk}^3$. \end{enumerate} An analogous statement holds in the case $m=2$, where we just neglect the ternary terms. \end{lemma} \begin{proof} Let us make the convention $\inf\emptyset=+\infty$.
We define \begin{equation*} \begin{aligned} \tau_{Z_m}^1= \begin{cases} \inf\left\{t>0:X_m+tV_m\in\partial\mathcal{D}_{m,\sigma_2,\sigma_3}\right\},\quad\text{if $Z_m$ is noncollisional or postcollisional} , \\ \inf\left\{t>0:X_m+tV_m'\in\partial\mathcal{D}_{m,\sigma_2,\sigma_3}\right\},\quad\text{if $Z_m$ is binary precollisional},\\ \inf\left\{t>0:X_m+tV_m^*\in\partial\mathcal{D}_{m,\sigma_2,\sigma_3}\right\},\quad\text{if $Z_m$ is ternary precollisional}. \end{cases} \end{aligned} \end{equation*} Since $\mathring{\mathcal{D}}_{m,\sigma_2,\sigma_3}$ is open, we get $\tau_{Z_m}^1>0,\quad\forall Z_m\in\mathring{\mathcal{D}}_{m,\sigma_2,\sigma_3}$ and claims \textit{(i)-(ii)} follow immediately for $Z_m\in\mathring{\mathcal{D}}_{m,\sigma_2,\sigma_3}$. Assume now $Z_m\in\partial_{sc,ng}\mathcal{D}_{m,\sigma_2,\sigma_3}$, so that $Z_m$ is a simple non-grazing collision. We distinguish the following cases: \begin{itemize} \item $Z_m$ is an $(i,j)$ binary postcollisional configuration: For any $t>0$, we have \begin{equation*} \begin{aligned} |x_i-x_j+(v_i-v_j)t|^2&= |x_i-x_j|^2+t^2|v_i-v_j|^2+2t\langle x_i-x_j,v_i-v_j\rangle\\ &\geq \sigma_2^2+2t\sigma_2b_2(\omega_1,v_j-v_i)\\ &>\sigma_2^2, \end{aligned} \end{equation*} since $\langle x_i-x_j,v_i-v_j\rangle=\sigma_2 b_2(\omega_1,v_j-v_i)$ and $b_2(\omega_1,v_j-v_i)>0$, where $\omega_1\in\mathbb{S}_1^{d-1}$ is given by \eqref{def of omega binary}. This inequality and the fact that $Z_m$ is a simple binary collision imply that $\tau_{Z_m}^1>0$ and claims $(i),(ii),(iii)$ as well.
\vspace{0.2cm} \item $Z_m$ is an $(i,j)$ binary precollisional configuration: We use the same argument for $Z_m'$, which is an $(i,j)$ binary postcollisional configuration.\vspace{0.2cm} \item $Z_m$ is an $(i;j,k)$ ternary postcollisional configuration: For any $t>0$, we have \begin{equation*} \begin{aligned} &|x_i-x_j+(v_i-v_j)t|^2+|x_i-x_k+(v_i-v_k)t|^2\\ &= |x_i-x_j|^2+|x_i-x_k|^2+t^2\left(|v_i-v_j|^2+|v_i-v_k|^2\right)+2t\left(\langle x_i-x_j,v_i-v_j\rangle +\langle x_i-x_k,v_i-v_k\rangle\right)\\ &\geq 2\sigma_3^2+2\sqrt{2}\sigma_3tb_3(\omega_1,\omega_2,v_j-v_i,v_k-v_i)\\ &>2\sigma_3^2, \end{aligned} \end{equation*} since $\langle x_i-x_j,v_i-v_j\rangle+\langle x_i-x_k,v_i-v_k\rangle=\sqrt{2}\sigma_3b_3(\omega_1,\omega_2,v_j-v_i,v_k-v_i)$ and $b_3(\omega_{1},\omega_{2},v_j-v_i,v_k-v_i)>0$, where $(\omega_1,\omega_2)\in\mathbb{S}_1^{2d-1}$ is given by \eqref{def of omega triary}. This inequality and the fact that $Z_m$ is a simple ternary collision imply that $\tau_{Z_m}^1>0$ and claims $(i),(ii),(iv)$ as well.\vspace{0.2cm} \item $Z_m$ is an $(i;j,k)$ ternary precollisional configuration: We use the same argument for $Z_m^*$, which is an $(i;j,k)$ ternary postcollisional configuration. \end{itemize} \end{proof} Let us make an elementary but crucial remark. \begin{remark}\label{remark on t1-t2} Clearly, for configurations with $\tau_{Z_m}^1=\infty$, the flow is globally defined as the free flow. In the case where $\tau_{Z_m}^1<\infty$ and $Z_m(\tau_{Z_m}^1)$ is a non-grazing $(i,j)$ collision or non-grazing $(i;j,k)$ collision, we may apply Lemma \ref{elem dyn step} once more and get a corresponding time $\tau_{Z_m}^2$ with the property that $Z_m(\tau_{Z_m}^2)\notin\Sigma_{ij}^2$ or $Z_m(\tau_{Z_m}^2)\notin\Sigma_{ijk}^3$ respectively, if $\tau_{Z_m}^2<\infty$. Therefore, in this case the flow can be defined up to time $\tau_{Z_m}^2$. \end{remark} \begin{remark} Note that Lemma \ref{elem dyn step} implies that, given a non-grazing $(i,j)$ collision, the next collision (if it happens) will not be an $(i,j)$ collision.
Similarly, given a non-grazing $(i;j,k)$ collision, the next collision (if it happens) will not be an $(i;j,k)$ collision. However, Lemma \ref{elem dyn step} does not imply that the same particles are not involved in a collision of a different type: for instance, one could have the sequence of collisions $(i,j)$ and $(i;j,k)$, or $(i;j,k)$ and $(i,j)$, etc. All these cases will be taken into account when establishing a global flow in Subsection \ref{subsec extension to a global flow}. \end{remark} \begin{remark} Similar results hold for the case $m=2$, where there are no ternary interactions. \end{remark} \subsection{Extension to a global flow}\label{subsec extension to a global flow} Now, we extract a zero-measure set from $\mathcal{D}_{m,\sigma_2,\sigma_3}^*$ such that the flow is globally defined on the complement. For this purpose, we first truncate positions and velocities using two parameters $1<<R<\rho$, and then perform a time truncation with a small parameter $\delta$, in the scaling: \begin{equation}\label{scaling dynamics} 0<\delta R<<\sigma_2<\sigma_3<1<<R<\rho. \end{equation} Throughout this subsection, we consider parameters satisfying the scaling \eqref{scaling dynamics}. Recall that, given $r>0$, we denote by $B_r^{dm}$ the closed ball in $\mathbb{R}^{dm}$ of radius $r$, centered at the origin. We first assume initial positions are in $B_\rho^{dm}$ and initial velocities in $B_R^{dm}$.
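The quantitative heart of the proof of Lemma \ref{elem dyn step} is that a non-grazing postcollisional pair strictly separates under free flow. This can be illustrated numerically: the Python sketch below (illustrative only; all names are ours) places two hard spheres at contact, $|x_1-x_2|=\sigma_2$, with positive binary cross-section $b_2(\omega_1,v_2-v_1)>0$, and checks that $|x_1-x_2+t(v_1-v_2)|^2>\sigma_2^2$ for positive times.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def separation_sq(x1, v1, x2, v2, t):
    """Squared mutual distance |x_1 - x_2 + t (v_1 - v_2)|^2 under free flow."""
    return sum((a - b + t * (c - d)) ** 2
               for a, b, c, d in zip(x1, x2, v1, v2))

sigma2 = 0.1
x1, x2 = [0.0, 0.0], [sigma2, 0.0]                    # contact: |x1 - x2| = sigma2
omega1 = [(b - a) / sigma2 for a, b in zip(x1, x2)]   # impact direction (x2 - x1)/sigma2
v1, v2 = [-0.5, 0.2], [0.8, -0.1]
b2 = dot(omega1, [b - a for a, b in zip(v1, v2)])     # binary cross-section, here 1.3 > 0

distances_sq = [separation_sq(x1, v1, x2, v2, t) for t in (0.01, 0.1, 1.0, 10.0)]
```

The squared distance starts at exactly $\sigma_2^2$ and is strictly increasing in $t$, matching the estimate $|x_i-x_j+(v_i-v_j)t|^2>\sigma_2^2$ in the proof above.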
For $m\geq 2$, we decompose $D_{m,\sigma_2,\sigma_3}^*\cap(B_\rho^{dm}\times B_R^{dm})$ in the following subsets: \begin{equation*} \begin{aligned} &I_{free}=\big\{Z_m=(X_m,V_m)\in D_{m,\sigma_2,\sigma_3}^*\cap(B_\rho^{dm}\times B_R^{dm}): \tau_{Z_m}^1>\delta\big\},\\ &I_{sc,ng}^1=\big\{Z_m=(X_m,V_m)\in D_{m,\sigma_2,\sigma_3}^*\cap(B_\rho^{dm}\times B_R^{dm}): \tau_{Z_m}^1\leq\delta,\text{ } Z_m(\tau_{Z_m}^1)\in\partial_{sc,ng}\mathcal{D}_{m,\sigma_2,\sigma_3} \text{ and } \tau_{Z_m}^2>\delta\big\},\\ &I_{sc,g}^1=\big\{Z_m=(X_m,V_m)\in D_{m,\sigma_2,\sigma_3}^*\cap(B_\rho^{dm}\times B_R^{dm}): \tau_{Z_m}^1\leq\delta,\text{ } Z_m(\tau_{Z_m}^1)\in\partial_{sc}\mathcal{D}_{m,\sigma_2,\sigma_3},\text{ and $Z_m(\tau_{Z_m}^1)$ is grazing}\big\},\\ &I_{mu}^1=\big\{Z_m=(X_m,V_m)\in D_{m,\sigma_2,\sigma_3}^*\cap(B_\rho^{dm}\times B_R^{dm}): \tau_{Z_m}^1\leq\delta,\text{ } Z_m(\tau_{Z_m}^1)\in\partial_{mu}\mathcal{D}_{m,\sigma_2,\sigma_3}\big\},\\ &I_{sc,ng}^2=\big\{Z_m=(X_m,V_m)\in D_{m,\sigma_2,\sigma_3}^*\cap(B_\rho^{dm}\times B_R^{dm}): \tau_{Z_m}^1\leq\delta,\text{ } Z_m(\tau_{Z_m}^1)\in\partial_{sc,ng}\mathcal{D}_{m,\sigma_2,\sigma_3},\text{ but } \tau_{Z_m}^2\leq\delta\big\}. \end{aligned} \end{equation*} We remark that there is a well-defined flow up to time $\delta$ for $Z_m\in I_{free}\cup I^1_{sc,ng}$, since in such cases one has at most one simple non-grazing collision in $[0,\delta]$. We aim to estimate the measure of the pathological set $I^1_{sc,g}\cup I^1_{mu}\cup I^2_{sc,ng}$, with respect to the truncation parameters. \begin{lemma}\label{zero measure} Assume $m\geq 2$. Then $I_{sc,g}^1$ is of zero Lebesgue measure. \end{lemma} \begin{proof} Assume first $m\geq 3$. 
Clearly $I_{sc,g}^1\subseteq \displaystyle\bigcup_{(i,j)\in\mathcal{I}_m^2}M_{ij}^2\cup\displaystyle\bigcup_{(i,j,k)\in\mathcal{I}_m^3}M_{ijk}^3,$ where \begin{equation*} \begin{aligned} M_{ij}^2&=\left\{Z_m\in D_{m,\sigma_2,\sigma_3}^*\cap(B_\rho^{dm}\times B_R^{dm}):Z_m(\tau_{Z_m}^1)\text{ is an $(i,j)$ grazing collision}\right\},\\ M_{ijk}^3&=\left\{Z_m\in D_{m,\sigma_2,\sigma_3}^*\cap(B_\rho^{dm}\times B_R^{dm}):Z_m(\tau_{Z_m}^1)\text{ is an $(i;j,k)$ grazing collision}\right\}. \end{aligned} \end{equation*} The above covering consists of lower-dimensional submanifolds of the phase space, so it has zero measure. For $m=2$, we use a similar argument. \end{proof} Before proceeding to the next result, let us note that the conservation of energy identities \eqref{cons energy binary}, \eqref{triary cons energy} imply the following elementary but useful remark: \begin{remark}\label{remark on conservation} The following hold: \begin{itemize} \item For $m\geq 2$: $Z_m\in\partial_{2,sc}\mathcal{D}_{m,\sigma_2,\sigma_3}\cap(\mathbb{R}^{dm}\times B_R^{dm})\Leftrightarrow Z_m'\in\partial_{2,sc}\mathcal{D}_{m,\sigma_2,\sigma_3}\cap(\mathbb{R}^{dm}\times B_R^{dm}).$ \item For $m\geq 3$: $Z_m\in\partial_{3,sc}\mathcal{D}_{m,\sigma_2,\sigma_3}\cap(\mathbb{R}^{dm}\times B_R^{dm})\Leftrightarrow Z_m^*\in\partial_{3,sc}\mathcal{D}_{m,\sigma_2,\sigma_3}\cap(\mathbb{R}^{dm}\times B_R^{dm}).$ \end{itemize} \end{remark} \begin{lemma}\label{covering} For $m\geq 3$, the following inclusion holds: \begin{equation}\label{required inclusion} I_{mu}^1\cup I_{sc,ng}^2\subseteq U_{22}\cup U_{23}\cup U_{32}\cup U_{33}, \end{equation} where \begin{align} U_{22}&:=\bigcup_{(i,j)\neq (i',j')\in\mathcal{I}_{m}^2}(U_{ij}^2\cap U_{i'j'}^2),\label{U_22}\\ U_{23}&:=\bigcup_{(i,j)\in\mathcal{I}_{m}^2, (i',j',k')\in\mathcal{I}_m^3}(U_{ij}^2\cap U_{i'j'k'}^3),\label{U_23}\\ U_{32}&:=\bigcup_{(i,j,k)\in\mathcal{I}_{m}^3, (i',j')\in\mathcal{I}_m^2}(U_{ijk}^3\cap U_{i'j'}^2),\label{U_32}\\ U_{33}&:=\bigcup_{(i,j,k)\neq
(i',j',k')\in\mathcal{I}_{m}^3}(U_{ijk}^3\cap U_{i'j'k'}^3)\label{U_33}, \end{align} and given $(i,j)\in\mathcal{I}_m^2$, $(i,j,k)\in\mathcal{I}_m^3$, we denote \begin{align} U_{ij}^2&:=\left\{Z_m=(X_m,V_m)\in B_\rho^{dm}\times B_R^{dm}: \sigma_2\leq d_2(x_i,x_j)\leq \sigma_2+2\delta R\right\},\label{U_ij^2}\\ U_{ijk}^3&:=\left\{Z_m=(X_m,V_m)\in B_\rho^{dm}\times B_R^{dm}: 2\sigma_3^2\leq d_3^2(x_i;x_j,x_k)\leq (\sqrt{2}\sigma_3+4\delta R)^2\right\}.\label{U_ijk^3} \end{align} For $m=2$, we have $I_{mu}^1=I^2_{sc,ng}=\emptyset.$ \end{lemma} \begin{proof} For $m=2$, we have that $\partial_{mu}\mathcal{D}_{2,\sigma_2,\sigma_3}=\emptyset$, hence $I^{1}_{mu}=\emptyset.$ Also, since $m=2$, we trivially obtain $\mathcal{I}_2^2=\{(1,2)\}$, hence Remark \ref{remark on t1-t2} implies that $\tau_{Z_m}^2=\infty$ i.e. $I_{sc,ng}^2=\emptyset$. Assume now that $m\geq 3$. We first assume that either $Z_m\in\mathring{\mathcal{D}}_{m,\sigma_2,\sigma_3}$ or $Z_m$ is postcollisional. Therefore, up to time $\tau_{Z_m}^1$, we have free flow i.e. $Z_m(t)=(X_m+tV_m,V_m),$ for all $t\in[0,\tau_{Z_m}^1].$ \textbf{Inclusion for $I^1_{mu}$:} We have $\tau_{Z_m}^1\leq\delta$ and $Z_m(\tau_{Z_m}^1)\in\partial_{mu}\mathcal{D}_{m,\sigma_2,\sigma_3}$. We claim the following, which clearly imply inclusion \eqref{required inclusion} for $I_{mu}^1$: \begin{enumerate}[(I)] \item $Z_m(\tau_{Z_m}^1)\in\Sigma_{ij}^2\cap\Sigma_{i'j'}^2\Rightarrow Z_m\in U_{ij}^2\cap U_{i'j'}^2,\quad\forall (i,j),(i',j')\in\mathcal{I}_m^2$. \item $Z_m(\tau_{Z_m}^1)\in\Sigma_{ij}^2\cap\Sigma_{i'j'k'}^3\Rightarrow Z_m\in U_{ij}^2\cap U_{i'j'k'}^3,\quad\forall (i,j)\in\mathcal{I}_m^2,\quad\forall (i',j',k')\in\mathcal{I}_m^3$. \item $Z_m(\tau_{Z_m}^1)\in\Sigma_{ijk}^3\cap\Sigma_{i'j'}^2\Rightarrow Z_m\in U_{ijk}^3\cap U_{i'j'}^2,\quad\forall (i,j,k)\in\mathcal{I}_m^3,\quad\forall (i',j')\in\mathcal{I}_m^2$.
\item $Z_m(\tau_{Z_m}^1)\in\Sigma_{ijk}^3\cap\Sigma_{i'j'k'}^3\Rightarrow Z_m\in U_{ijk}^3\cap U_{i'j'k'}^3,\quad\forall (i,j,k),(i',j',k')\in\mathcal{I}_m^3$. \end{enumerate} Without loss of generality, we prove claim \text{(III)}. We have $Z_m(\tau_{Z_m}^1)\in\Sigma_{ijk}^3\cap\Sigma_{i'j'}^2$, therefore \begin{align} d_3^2\left(x_i\left(\tau_{Z_m}^1\right); x_j\left(\tau_{Z_m}^1\right),x_k\left(\tau_{Z_m}^1\right)\right)&=2\sigma_3^2,\quad d_2\left(x_{i'}\left(\tau_{Z_m}^1\right), x_{j'}\left(\tau_{Z_m}^1\right)\right)=\sigma_2.\label{collision i'j'} \end{align} Since there is free motion up to $\tau_{Z_m}^1$, the triangle inequality implies \begin{equation}\label{fremotion step 1} |x_i-x_j|\leq |x_i(\tau_{Z_m}^1)-x_j(\tau_{Z_m}^1)|+\delta |v_i-v_j|\leq |x_i(\tau_{Z_m}^1)-x_j(\tau_{Z_m}^1)|+2\delta R. \end{equation} Since there is an $(i;j,k)$ ternary collision at $\tau_{Z_m}^1$, we have \begin{equation}\label{sub collision} |x_i(\tau_{Z_m}^1)-x_j(\tau_{Z_m}^1)|^2+|x_i(\tau_{Z_m}^1)-x_k(\tau_{Z_m}^1)|^2=2\sigma_3^2\Rightarrow |x_i(\tau_{Z_m}^1)-x_j(\tau_{Z_m}^1)|\leq\sqrt{2}\sigma_3. \end{equation} Combining \eqref{fremotion step 1}-\eqref{sub collision}, we obtain \begin{equation}\label{fremotion step 2} |x_i-x_j|^2\leq |x_i(\tau_{Z_m}^1)-x_j(\tau_{Z_m}^1)|^2 +4\sqrt{2}\sigma_3\delta R+4\delta^2 R^2. \end{equation} Using the same argument for the pair $(i,k)$, adding and recalling the fact that there is an $(i;j,k)$ collision at $\tau_{Z_m}^1$, we obtain \begin{align} 2\sigma_3^2\leq d_3^2(x_i;x_j,x_k)\leq 2\sigma_3^2+8\sqrt{2}\sigma_3 R\delta+8\delta^2 R^2&\leq 2\sigma_3^2+8\sqrt{2}\sigma_3 R\delta+16\delta^2 R^2= (\sqrt{2}\sigma_3+4\delta R)^2\nonumber\\ &\Rightarrow Z_m\in U_{ijk}^3,\label{free motion triary} \end{align} where the lower inequality holds trivially since $Z_m\in\mathcal{D}_{m,\sigma_2,\sigma_3}$.
For the pair $(i',j')$, \eqref{collision i'j'} and the triangle inequality yield \begin{equation}\label{free motion binary} \sigma_2\leq |x_{i'}-x_{j'}|=|x_{i'}(\tau_{Z_m}^1)-x_{j'}(\tau_{Z_m}^1)-\tau_{Z_m}^1(v_{i'}-v_{j'})|\leq \sigma_2+2\delta R\Rightarrow Z_m\in U_{i'j'}^2, \end{equation} where the lower inequality trivially holds because of the phase space. Combining \eqref{free motion triary}-\eqref{free motion binary}, we obtain $Z_m\in U_{ijk}^3\cap U_{i'j'}^2,$ and claim \text{(III)} is proved. The rest of the claims are proved by similar arguments, and we obtain the inclusion \begin{equation}\label{inclusion for multiple}I_{mu}^1\subseteq U_{22}\cup U_{23}\cup U_{32}\cup U_{33}. \end{equation} \textbf{Inclusion for $I_{sc,ng}^2$:} Remark \ref{remark on t1-t2} guarantees that \begin{equation}\label{guarantee of remark} \begin{cases} Z_m(\tau_{Z_m}^1)\in\Sigma_{ij}^2\Rightarrow Z_m(\tau_{Z_m}^2)\notin \Sigma_{ij}^2,\\ Z_m(\tau_{Z_m}^1)\in\Sigma_{ijk}^3\Rightarrow Z_m(\tau_{Z_m}^2)\notin \Sigma_{ijk}^3. \end{cases} \end{equation} We claim the following: \begin{enumerate}[(I)] \item $Z_m(\tau_{Z_m}^1)\in\Sigma_{ij}^2$, $Z_m(\tau_{Z_m}^2)\in\Sigma_{i'j'}^2\Rightarrow Z_m\in U_{ij}^2\cap U_{i'j'}^2,\quad\forall (i,j),(i',j')\in\mathcal{I}_m^2$. \item $Z_m(\tau_{Z_m}^1)\in\Sigma_{ij}^2$, $Z_m(\tau_{Z_m}^2)\in\Sigma_{i'j'k'}^3\Rightarrow Z_m\in U_{ij}^2\cap U_{i'j'k'}^3,\quad\forall (i,j)\in\mathcal{I}_m^2,\quad\forall (i',j',k')\in\mathcal{I}_m^3$. \item $Z_m(\tau_{Z_m}^1)\in\Sigma_{ijk}^3$, $Z_m(\tau_{Z_m}^2)\in\Sigma_{i'j'}^2\Rightarrow Z_m\in U_{ijk}^3\cap U_{i'j'}^2,\quad\forall (i,j,k)\in\mathcal{I}_m^3,\quad\forall (i',j')\in\mathcal{I}_m^2$. \item $Z_m(\tau_{Z_m}^1)\in\Sigma_{ijk}^3$, $Z_m(\tau_{Z_m}^2)\in\Sigma_{i'j'k'}^3\Rightarrow Z_m\in U_{ijk}^3\cap U_{i'j'k'}^3,\quad\forall (i,j,k),(i',j',k')\in\mathcal{I}_m^3$. \end{enumerate} By \eqref{guarantee of remark}, proving claims \text{(I)-(IV)} implies inclusion \eqref{required inclusion} for $I_{sc,ng}^2$.
Without loss of generality, we prove claim \text{(III)}. Clearly all particles perform free motion until $\tau_{Z_m}^1$, so the same argument we used to obtain \eqref{free motion triary} yields \begin{equation}\label{shell 1} 2\sigma_3^2\leq d_3^2(x_i;x_j,x_k)\leq (\sqrt{2}\sigma_3+4\delta R)^2\Rightarrow Z_m\in U_{ijk}^3. \end{equation} Moreover, particles keep performing free motion up to time $\tau_{Z_m}^2$, except particles $i,j,k$, whose velocities instantaneously transform because of the collision at $\tau_{Z_m}^1$. We wish to prove as well that $Z_m\in U_{i'j'}^2$ i.e. \begin{equation}\label{shell 2} \sigma_2\leq d_2(x_{i'},x_{j'})\leq \sigma_2+2\delta R.\end{equation} The first inequality trivially holds because of the phase space. To prove the second inequality, we distinguish the following cases: \begin{enumerate}[(i)] \item $i',j'\notin\{i,j,k\}$: Since particles $(i',j')$ perform free motion up to $\tau_{Z_m}^2$, a similar argument to the one we used to obtain \eqref{free motion binary} yields $Z_m\in U_{i'j'}^2$. The only difference is that we apply the argument up to time $\tau_{Z_m}^2\leq\delta$, instead of $\tau_{Z_m}^1$, hence claim \eqref{shell 2} is proved. \vspace{0.3cm} \item There is at least one recollision i.e. at least one of $i',j'$ belongs to $\{i,j,k\}$: The argument is similar to (i), the only difference being that the velocities of the recolliding particles transform at $\tau_{Z_m}^1$. Since the argument is similar for all cases, let us provide a detailed proof only for one recollisional case, for instance $(i',j')=(i,k)$. We have \begin{align*} x_i(\tau_{Z_m}^2)&=x_i(\tau_{Z_m}^1)+(\tau_{Z_m}^2-\tau_{Z_m}^1)v_i^*=x_i+\tau_{Z_m}^1v_i+(\tau_{Z_m}^2-\tau_{Z_m}^1)v_i^*,\\ x_k(\tau_{Z_m}^2)&=x_k(\tau_{Z_m}^1)+(\tau_{Z_m}^2-\tau_{Z_m}^1)v_k^*=x_k+\tau_{Z_m}^1v_k+(\tau_{Z_m}^2-\tau_{Z_m}^1)v_k^*, \end{align*} so \begin{equation*} x_i-x_k=x_i(\tau_{Z_m}^2)-x_k(\tau_{Z_m}^2)-\tau_{Z_m}^1(v_i-v_k)-(\tau_{Z_m}^2-\tau_{Z_m}^1)(v_i^*-v_k^*).
\end{equation*} Therefore, the triangle inequality implies \begin{align} |x_i-x_k|&\leq |x_i(\tau_{Z_m}^2)-x_k(\tau_{Z_m}^2)|+\tau_{Z_m}^1|v_i-v_k|+(\tau_{Z_m}^2-\tau_{Z_m}^1)|v_i^*-v_k^*|\nonumber\\ &\leq |x_i(\tau_{Z_m}^2)-x_k(\tau_{Z_m}^2)|+2\tau_{Z_m}^1 R+2(\tau_{Z_m}^2-\tau_{Z_m}^1)R\label{use of remark on cons}\\ &=|x_i(\tau_{Z_m}^2)-x_k(\tau_{Z_m}^2)|+2\tau_{Z_m}^2 R\nonumber\\ &\leq |x_i(\tau_{Z_m}^2)-x_k(\tau_{Z_m}^2)|+2\delta R,\label{tau leq delta ik} \end{align} where to obtain \eqref{use of remark on cons} we use the triangle inequality and Remark \ref{remark on conservation}, and to obtain \eqref{tau leq delta ik} we use the assumption $\tau_{Z_m}^2\leq\delta$. Therefore \eqref{shell 2} is proved. \end{enumerate} Combining \eqref{shell 1}, \eqref{shell 2}, we obtain $Z_m\in U_{ijk}^3\cap U_{i'j'}^2,$ and claim \text{(III)} follows. The remaining claims are proved in a similar way. We obtain \begin{equation}\label{inclusion for consecutive} I_{sc,ng}^2\subseteq U_{22}\cup U_{23}\cup U_{32}\cup U_{33}. \end{equation} Inclusions \eqref{inclusion for multiple}, \eqref{inclusion for consecutive} imply inclusion \eqref{required inclusion}. Assume now that $Z_m$ is precollisional. Therefore, we obtain $$Z_m(t)= \begin{cases}(X_m+tV_m',V_m'),\quad\forall t\in[0,\tau_{Z_m}^1],\mbox{ if } Z_m\in\partial_{2,sc}\mathcal{D}_{m,\sigma_2,\sigma_3},\\ (X_m+tV_m^*,V_m^*),\quad\forall t\in[0,\tau_{Z_m}^1],\mbox{ if } Z_m\in\partial_{3,sc}\mathcal{D}_{m,\sigma_2,\sigma_3}, \end{cases}$$ where the collisional transformation is taken with respect to the initial collisional particles. The proof follows the same lines, using Remark \ref{remark on conservation} for the initial collisional particles whenever needed. \end{proof} Now we wish to estimate the measure of $I_{sc,g}^1\cup I_{mu}^1\cup I_{sc,ng}^2$, in order to show that outside of a small measure set we have a well-defined flow. Let us first introduce some notation.
For $m\geq 2$, $(i,j)\in\mathcal{I}_m^2$, a permutation $\pi:\{i,j\}\to\{i,j\}$ and $x_{\pi_j}\in\mathbb{R}^{d}$, we define the set \begin{equation}\label{S perm binary} S_{\pi_i}(x_{\pi_j})=\{x_{\pi_i}\in\mathbb{R}^d: (x_i,x_j)\in U_{ij}^2\}. \end{equation} For $m\geq 3$, $(i,j,k)\in\mathcal{I}_m^3$, a permutation $\pi:\{i,j,k\}\to\{i,j,k\}$ and $(x_{\pi_j},x_{\pi_k})\in\mathbb{R}^{2d}$, we define the set \begin{equation}\label{S perm ternary} S_{\pi_i}(x_{\pi_j},x_{\pi_k})=\{x_{\pi_i}\in\mathbb{R}^d: (x_i,x_j,x_k)\in U_{ijk}^3\}. \end{equation} \begin{lemma}\label{estimate of S} The following hold: \begin{enumerate}[(i)] \item Let $m\geq 2$, $(i,j)\in\mathcal{I}_m^2$, a permutation $\pi:\{i,j\}\to\{i,j\}$ and $x_{\pi_j}\in\mathbb{R}^{d}$. Then \begin{equation}\label{estimate of S perm binary} |S_{\pi_i}(x_{\pi_j})|_d\leq C_{d,R}\delta. \end{equation} \item Let $m\geq 3$, $(i,j,k)\in\mathcal{I}_m^3$, a permutation $\pi:\{i,j,k\}\to\{i,j,k\}$ and $(x_{\pi_j},x_{\pi_k})\in\mathbb{R}^{2d}$. Then \begin{equation}\label{estimate of S perm ternary} |S_{\pi_i}(x_{\pi_j},x_{\pi_k})|_d\leq C_{d,R}\delta. \end{equation} \end{enumerate} \end{lemma} \begin{proof} For the proof of estimate \eqref{estimate of S perm ternary}, we refer to Lemma 3.10 in \cite{ternary}. Let us prove \eqref{estimate of S perm binary}. Consider $(i,j)\in\mathcal{I}_m^2$, and assume without loss of generality that $\pi(i,j)=(i,j)$. Let $x_j\in\mathbb{R}^d$. Recalling \eqref{S perm binary}, we obtain $$S_i(x_j)=\left\{x_i\in\mathbb{R}^d:\sigma_2\leq |x_i-x_j|\leq\sigma_2+2\delta R\right\},$$ thus $S_i(x_j)$ is a spherical shell in $\mathbb{R}^d$ of inner radius $\sigma_2$ and outer radius $\sigma_2+2\delta R$. Therefore, by the scaling \eqref{scaling dynamics}, we obtain \begin{align*} |S_i(x_j)|_d&\simeq (\sigma_2+2\delta R)^d-\sigma_2^d=2\delta R\sum_{\ell=0}^{d-1} (\sigma_2+2\delta R)^{d-1-\ell}\sigma_2^\ell\leq C_{d,R}\delta.
\end{align*} \end{proof} \begin{remark} The estimates of Lemma \ref{estimate of S} are not sufficient to generate a global flow, because $\delta$ represents the length of an elementary time step; therefore, iterating over time steps, we cannot eliminate the pathological sets. We will derive a better estimate, of order $\delta^2$, to achieve this elimination. \end{remark} \begin{lemma}\label{ellipse shell measure} Let $m\geq 2$, $1<R<\rho$ and $0<\delta R<\sigma_2<\sigma_3<1$. Then the following estimate holds: \begin{equation}\label{measure estimate dynamics} |I_{sc,g}^1\cup I_{mu}^1\cup I_{sc,ng}^2|_{2dm}\leq C_{m,d,R}\rho^{d(m-2)}\delta ^2. \end{equation} \end{lemma} \begin{proof} For $m=2$, the result comes trivially from Lemma \ref{zero measure} and Lemma \ref{covering}. For $m\geq 3$, we recall from Lemma \ref{zero measure} that $I^1_{sc,g}$ is of measure zero and that, by Lemma \ref{covering}, we have $$I_{mu}^1\cup I_{sc,ng}^2\subseteq U_{22}\cup U_{23}\cup U_{32}\cup U_{33},$$ where $U_{22},U_{23},U_{32},U_{33}$ are given by \eqref{U_22}-\eqref{U_33}. Therefore it suffices to estimate the measure of $U_{22},U_{23},U_{32},U_{33}$. We will strongly rely on Lemma \ref{estimate of S}. $\bullet$ \textbf{Estimate of $U_{22}$:} By \eqref{U_22}, we have $$U_{22}=\bigcup_{(i,j)\neq (i',j')\in\mathcal{I}_m^2}(U_{ij}^2\cap U_{i'j'}^2).$$ Consider $(i,j)\neq (i',j')\in\mathcal{I}_m^2$. We distinguish the following possible cases: \begin{enumerate}[(I)] \item $i',j'\notin\{i,j\}$: By \eqref{U_ij^2}, followed by Fubini's Theorem and part {\em (i)} of Lemma \ref{estimate of S}, we have \begin{align*} |&U_{ij}^2\cap U_{i'j'}^2|_{2dm}\lesssim R^{dm}\rho^{d(m-4)}\int_{B_\rho^{4d}}\mathds{1}_{S_{i}^2(x_j)\cap S_{i'}^2(x_{j'})}\,dx_{i}\,dx_{i'}\,dx_{j}\,dx_{j'}\\ &\leq R^{dm}\rho^{d(m-4)}\left(\int_{B_\rho^d}\int_{\mathbb{R}^d}\mathds{1}_{S_i^2(x_j)}\,dx_i\,dx_j\right)\left(\int_{B_\rho^d}\int_{\mathbb{R}^d}\mathds{1}_{S_{i'}^2(x_{j'})}\,dx_{i'}\,dx_{j'}\right)\\ &\leq C_{d,R}\rho^{d(m-2)}\delta^2.
\end{align*} \item Exactly one of $i',j'$ belongs to $\{i,j\}$: Without loss of generality we consider the case $(i',j')=(j,j')$, for some $j'>j$, and all other cases follow similarly. Fubini's Theorem and part {\em (i)} of Lemma \ref{estimate of S} imply \begin{align*} |&U_{ij}^2\cap U_{jj'}^2|_{2dm}\lesssim R^{dm}\rho^{d(m-3)}\int_{B_\rho^{3d}}\mathds{1}_{S_i^2(x_j)\cap S_{j}^2(x_{j'})}\,dx_{j}\,dx_{j'}\,dx_i\\ &\leq R^{dm}\rho^{d(m-3)}\int_{B_{\rho}^d}\left(\int_{\mathbb{R}^d}\mathds{1}_{S_i^2(x_j)}\,dx_i\right)\left(\int_{\mathbb{R}^d}\mathds{1}_{S_{j'}^2(x_j)}\,dx_{j'}\right)\,dx_j\\ &\leq C_{d,R}\rho^{d(m-2)}\delta^2. \end{align*} \end{enumerate} Combining cases \text{(I)-(II)}, we obtain \begin{equation}\label{estimate U_22} |U_{22}|_{2dm}\leq C_{m,d,R}\rho^{d(m-2)}\delta^2. \end{equation} $\bullet$ \textbf{Estimate of $U_{23}$:} By \eqref{U_23}, we have $$U_{23}=\bigcup_{(i,j)\in\mathcal{I}_m^2, (i',j',k')\in\mathcal{I}_m^3}(U_{ij}^2\cap U_{i'j'k'}^3).$$ Consider $(i,j)\in\mathcal{I}_m^2$, $(i',j',k')\in\mathcal{I}_m^3$. We distinguish the following possible cases: \begin{enumerate}[(I)] \item $i',j',k'\notin\{i,j\}$: By Fubini's Theorem and parts {\em (i)-(ii)} of Lemma \ref{estimate of S}, we obtain \begin{align*} |&U_{ij}^2\cap U_{i'j'k'}^3|_{2dm}\lesssim R^{dm}\rho^{d(m-5)}\int_{B_\rho^{5d}}\mathds{1}_{S_j^2(x_i)\cap S_{k'}^3(x_{i'},x_{j'})}\,dx_i\,dx_j\,dx_{i'}\,dx_{j'}\,dx_{k'}\\ &\leq R^{dm}\rho^{d(m-5)}\left(\int_{B_\rho^d}\int_{\mathbb{R}^d}\mathds{1}_{S_j^2(x_i)}\,dx_i\,dx_j\right)\left(\int_{B_\rho^d\times B_\rho^d}\int_{\mathbb{R}^d}\mathds{1}_{S_{k'}^3(x_{i'},x_{j'})}\,dx_{i'}\,dx_{j'}\,dx_{k'}\right)\\ &\leq C_{d,R}\rho^{d(m-2)}\delta^2. \end{align*} \item Exactly one of $i',j',k'$ belongs to $\{i,j\}$: Without loss of generality we consider the case $(i',j',k')=(i',i,k')$, for some $i'<i<k'$, and all other cases follow similarly.
Using Fubini's Theorem and parts {\em (i)-(ii)} of Lemma \ref{estimate of S}, we obtain \begin{align*} |&U_{ij}^2\cap U_{i'ik'}^3|_{2dm}\lesssim R^{dm}\rho^{d(m-4)}\int_{B_\rho^{4d}}\mathds{1}_{S_j^2(x_i)\cap S_{i'}^3(x_{i},x_{k'})}\,dx_i\,dx_j\,dx_{i'}\,dx_{k'}\\ &\leq R^{dm}\rho^{d(m-4)}\int_{B_\rho^d}\left(\int_{\mathbb{R}^d}\mathds{1}_{S_j^2(x_i)}\,dx_j\right)\left(\int_{B_\rho^d}\int_{\mathbb{R}^d}\mathds{1}_{S_{i'}^3(x_i,x_{k'})}\,dx_{i'}\,dx_{k'}\right)\,dx_i\\ &\leq C_{d,R}\rho^{d(m-2)}\delta^2. \end{align*} \item Exactly two of $i',j',k'$ belong to $\{i,j\}$: Without loss of generality we consider the case $(i',j',k')=(i',i,j)$, for some $i'<i$, and all other cases follow similarly. Using Fubini's Theorem and parts {\em (i)-(ii)} of Lemma \ref{estimate of S}, we obtain \begin{align*} |&U_{ij}^2\cap U_{i'ij}^3|_{2dm}\lesssim R^{dm}\rho^{d(m-3)}\int_{B_\rho^{3d}}\mathds{1}_{S_{i}^2(x_j)\cap S_{i'}^3(x_i,x_j)}\,dx_i\,dx_j\,dx_{i'}\\ &\leq R^{dm}\rho^{d(m-3)}\int_{B_\rho^d\times B_\rho^d}\left(\int_{\mathbb{R}^d}\mathds{1}_{S_{i}^2(x_j)}\mathds{1}_{S_{i'}^3(x_i,x_j)}\,dx_{i'}\right)\,dx_i\,dx_j\\ &=R^{dm}\rho^{d(m-3)}\int_{B_\rho^d\times B_\rho^d}\mathds{1}_{S_{i}^2(x_j)}\left(\int_{\mathbb{R}^d}\mathds{1}_{S_{i'}^3(x_i,x_j)}\,dx_{i'}\right)\,dx_i\,dx_j\\ &\leq C_{d,R}\rho^{d(m-3)}\delta\int_{B_\rho^{d}}\int_{\mathbb{R}^d}\mathds{1}_{S_i^2(x_j)}\,dx_i\,dx_j\\ &\leq C_{d,R}\rho^{d(m-2)}\delta^2. \end{align*} \end{enumerate} Combining cases \text{(I)-(III)}, we obtain \begin{equation}\label{estimate U_23} |U_{23}|_{2dm}\leq C_{m,d,R}\rho^{d(m-2)}\delta^2. \end{equation} $\bullet$ \textbf{Estimate of $U_{32}$:} We use a similar argument to the estimate for $U_{23}$ to obtain \begin{equation}\label{estimate U_32} |U_{32}|_{2dm}\leq C_{m,d,R}\rho^{d(m-2)}\delta^2. \end{equation} $\bullet$ \textbf{Estimate of $U_{33}$:} We refer to Lemma 3.11 from \cite{ternary} for a detailed proof.
We obtain \begin{equation}\label{estimate U_33} |U_{33}|_{2dm}\leq C_{m,d,R}\rho^{d(m-2)}\delta^2. \end{equation} Combining \eqref{estimate U_22}-\eqref{estimate U_33}, we obtain \eqref{measure estimate dynamics} and the proof is complete. \end{proof} We inductively use Lemma \ref{ellipse shell measure} to define a global flow which preserves energy for almost all configurations. For this purpose, given $Z_m=(X_m,V_m)\in\mathbb{R}^{2dm}$, we define its kinetic energy as: \begin{equation}\label{kinetic energy} E_m(Z_m)=\frac{1}{2}\sum_{i=1}^m|v_i|^2. \end{equation} For convenience, let us define the $m$-particle free flow: \begin{definition} Let $m\in\mathbb{N}$. We define the $m$-particle free flow as the family of measure-preserving maps $(\Phi_m^t)_{t\in\mathbb{R}}:\mathbb{R}^{2dm}\to\mathbb{R}^{2dm}$, given by \begin{equation}\label{free flow} \Phi_m^tZ_m=\Phi_m^t(X_m,V_m)=(X_m+tV_m,V_m). \end{equation} \end{definition} We are now in a position to state the Existence Theorem for the $m$-particle $(\sigma_2,\sigma_3)$-flow. \begin{theorem}\label{global flow} Let $m\in\mathbb{N}$ and $0<\sigma_2<\sigma_3<1$. There exists a family of measure-preserving maps $(\Psi_m^t)_{t\in\mathbb{R}}:\mathcal{D}_{m,\sigma_2,\sigma_3}\to\mathcal{D}_{m,\sigma_2,\sigma_3}$ such that \begin{align} &\Psi_m^{t+s}Z_m=(\Psi_m^t\circ \Psi_m^s)(Z_m)=(\Psi_m^s\circ \Psi_m^t)(Z_m),\quad\text{a.e. in }\mathcal{D}_{m,\sigma_2,\sigma_3},\quad\forall t,s\in\mathbb{R}\label{flow property},\\ &E_m\left(\Psi_m^t Z_m\right)=E_m(Z_m),\quad\mbox{a.e. in }\mathcal{D}_{m,\sigma_2,\sigma_3},\quad\forall t\in\mathbb{R},\text{ where $E_m$ is given by \eqref{kinetic energy}}\label{kinetic energy flow}. \end{align} Moreover, for $m\geq 3$, we have \begin{align} \Psi_m^t Z_m'&=\Psi_m^t Z_m,\quad\sigma-\text{a.e.
on }\partial_{sc,ng}\mathcal{D}_{m,\sigma_2,\sigma_3}\cap\partial_{2,sc}\mathcal{D}_{m,\sigma_2,\sigma_3},\quad\forall t\in\mathbb{R},\label{bc flow binary}\\ \Psi_m^t Z_m^*&=\Psi_m^t Z_m,\quad\sigma-\text{a.e. on }\partial_{sc,ng}\mathcal{D}_{m,\sigma_2,\sigma_3}\cap\partial_{3,sc}\mathcal{D}_{m,\sigma_2,\sigma_3},\quad\forall t\in\mathbb{R},\label{bc flow triary} \end{align} while for $m=2$, we have \begin{align} \Psi_2^t Z_2'&=\Psi_2^t Z_2,\quad\sigma-\text{a.e. on }\partial_{sc,ng}\mathcal{D}_{2,\sigma_2,\sigma_3}\cap\partial_{2,sc}\mathcal{D}_{2,\sigma_2,\sigma_3},\quad\forall t\in\mathbb{R},\label{bc flow binary m=2} \end{align} where $\sigma$ is the surface measure induced on $\partial\mathcal{D}_m$ by the Lebesgue measure. This family of maps is called the $m$-particle $(\sigma_2,\sigma_3)$-flow. For $m=1$, we define $\Psi_1^t:=\Phi_1^t\quad\forall t\in\mathbb{R}.$ \end{theorem} \begin{proof} The proof follows exactly the same steps as the proof of Theorem 3.14 from \cite{ternary} (for additional details, see Theorem 4.9.1 from \cite{thesis}). \end{proof} \begin{remark}\label{remark on dynamics notation} We have seen that the flow can be defined only a.e. in $\mathcal{D}_{m,\sigma_2,\sigma_3}$. However, to simplify the notation, without loss of generality, we may assume that the flow is well defined on the whole phase space $\mathcal{D}_{m,\sigma_2,\sigma_3}$. \end{remark} \subsection{The Liouville equation} Here, we formally derive the Liouville equation for $m$ hard spheres of diameter $\sigma_2$ and interaction zone $\sigma_3$, where $0<\sigma_2<\sigma_3<1$. Without loss of generality, we derive the equation for $m\geq 3$; for $m=2$ we follow a similar argument, neglecting the ternary terms. For $m=1$, the Liouville equation will be trivial, since the flow coincides with the free flow. We then introduce the $m$-particle $(\sigma_2,\sigma_3)$-flow operator and the $m$-particle free flow operator.
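Before doing so, let us record a minimal verification of the flow property and measure preservation for the free flow \eqref{free flow}, which clarifies the corresponding statements of Theorem \ref{global flow}:
\begin{equation*}
(\Phi_m^t\circ\Phi_m^s)Z_m=(X_m+sV_m+tV_m,V_m)=\Phi_m^{t+s}Z_m,\qquad
\det\frac{\partial\Phi_m^tZ_m}{\partial Z_m}=\det\begin{pmatrix} I_{dm} & tI_{dm}\\ 0 & I_{dm}\end{pmatrix}=1,
\end{equation*}
where $I_{dm}$ denotes the $dm\times dm$ identity matrix. The flow $\Psi_m^t$ interlaces such free shears with the collisional transformations on the boundary, which are themselves measure-preserving.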
Let $m\geq 3$ and consider an initial absolutely continuous Borel probability measure $P_0$ on $\mathbb{R}^{2dm}$, with a probability density $f_{m,0}$ satisfying the following properties: \begin{itemize} \item $f_{m,0}$ is supported in $\mathcal{D}_{m,\sigma_2,\sigma_3}$ i.e. \begin{equation}\label{support initial} \supp f_{m,0}:=\overline{\{Z_m\in\mathbb{R}^{2dm}:f_{m,0}(Z_m)\neq 0\}}\subseteq\mathcal{D}_{m,\sigma_2,\sigma_3}. \end{equation} \item $f_{m,0}$ is symmetric i.e. for any permutation $p_m$ of the $m$-particles, there holds: \begin{equation}\label{symmetry of initial density} f_{m,0}(Z_{p_m})=f_{m,0}(Z_m),\quad\forall Z_m\in\mathbb{R}^{2dm}. \end{equation} \end{itemize} The probability measure $P_0$ expresses the initial distribution in space and velocities of the $m$-particles. We are interested in the evolution of this measure under the flow. For this purpose, given $t\geq 0$ we define $P_t$ to be the push-forward of $P_0$ under the flow i.e. \begin{equation*} P_t(A)=P_0\left(\Psi_m^{-t}\left(A\right)\right),\quad A\subseteq\mathbb{R}^{2dm}\text{ Borel measurable}. \end{equation*} Conservation of measure under the flow implies that $P_t$ is absolutely continuous with probability density given by \begin{equation}\label{density of push-forward} f_m(t,Z_m)=\begin{cases} f_{m,0}\circ\Psi_m^{-t},\quad\text{a.e. in }\mathcal{D}_{m,\sigma_2,\sigma_3},\\ 0,\quad \text{a.e. in }\mathbb{R}^{2dm}\setminus\mathcal{D}_{m,\sigma_2,\sigma_3}. \end{cases} \end{equation} Clearly $f_m(t,Z_m)$ is symmetric and supported in $\mathcal{D}_{m,\sigma_2,\sigma_3}$, for all $t\geq 0$. Moreover, we have \begin{equation}\label{initial condition mild liouville} f_m(0,Z_m)=f_{m,0}\circ\Psi_m^0(Z_m)=f_{m,0}(Z_m),\quad Z_m\in\mathring{\mathcal{D}}_{m,\sigma_2,\sigma_3}. \end{equation} Since $m>2$, \eqref{bc flow binary} implies \begin{equation}\label{bc liouville binary} f_m(t,Z_m')=f_{m,0}\circ\Psi_m^{-t}(Z_m')=f_{m,0}\circ\Psi_m^{-t}(Z_m)=f_m(t,Z_m),\quad\sigma-\text{a.e. 
on }\partial_{2,sc}\mathcal{D}_{m,\sigma_2,\sigma_3},\quad\forall t\geq 0. \end{equation} Additionally, since $m\geq 3$, \eqref{bc flow triary} implies \begin{equation}\label{bc liouville triary} f_m(t,Z_m^*)=f_{m,0}\circ\Psi_m^{-t}(Z_m^*)=f_{m,0}\circ\Psi_m^{-t}(Z_m)=f_m(t,Z_m),\quad\sigma-\text{a.e. on }\partial_{3,sc}\mathcal{D}_{m,\sigma_2,\sigma_3},\quad\forall t\geq 0.\\ \end{equation} Finally, recall from \eqref{density of push-forward} that \begin{equation}\label{conservation of density} \begin{aligned} f_m(t,Z_m)=f_{m,0}\circ\Psi_m^{-t}(Z_m),\quad\text{a.e. in }\mathcal{D}_{m,\sigma_2,\sigma_3},\quad\forall t\geq 0. \end{aligned} \end{equation} Combining \eqref{initial condition mild liouville}-\eqref{conservation of density}, and formally assuming that $f_m$ is smooth in time, by the chain rule, we obtain that $f_m$ formally satisfies the $m$-particle Liouville equation in $\mathcal{D}_{m,\sigma_2,\sigma_3}$: \begin{equation}\label{Liouville equation} \begin{cases} \partial_tf_m+\displaystyle\sum_{i=1}^m v_i\cdot\nabla_{x_i}f_m=0,\quad(t,Z_m)\in(0,\infty)\times\mathring{\mathcal{D}}_{m,\sigma_2,\sigma_3}\\ f_m(t,Z_m')=f_m(t,Z_m),\quad (t,Z_m)\in[0,\infty)\times\partial_{2,sc}\mathcal{D}_{m,\sigma_2,\sigma_3},\\ f_m(t,Z_m^*)=f_m(t,Z_m),\quad (t,Z_m)\in[0,\infty)\times\partial_{3,sc}\mathcal{D}_{m,\sigma_2,\sigma_3},\\ f_m(0,Z_m)=f_{m,0}(Z_m),\quad Z_m\in\mathring{\mathcal{D}}_{m,\sigma_2,\sigma_3}. 
\end{cases} \end{equation} With similar arguments, we conclude that, in the case $m=2$, $f_2$ formally satisfies the $2$-particle Liouville equation in $\mathcal{D}_{2,\sigma_2,\sigma_3}$: \begin{equation}\label{Liouville equation binary} \begin{cases} \partial_tf_2+\displaystyle v_1\cdot\nabla_{x_1}f_2+v_2\cdot\nabla_{x_2}f_2=0,\quad(t,Z_2)\in(0,\infty)\times\mathring{\mathcal{D}}_{2,\sigma_2,\sigma_3},\\ f_2(t,Z_2')=f_2(t,Z_2),\quad (t,Z_2)\in[0,\infty)\times\partial_{2,sc}\mathcal{D}_{2,\sigma_2,\sigma_3},\\ f_2(0,Z_2)=f_{2,0}(Z_2),\quad Z_2\in\mathring{\mathcal{D}}_{2,\sigma_2,\sigma_3}. \end{cases} \end{equation} In the case $m=1$, we trivially have $ f(t,x_1,v_1)=f_0(\Phi^{-t}_1(x_1,v_1))=f_0(x_1-tv_1,v_1). $ Now, we introduce some notation, defining the $m$-particle free flow operator and the $m$-particle $(\sigma_2,\sigma_3)$-flow operator. For convenience, let us denote \begin{align} C^0(\mathcal{D}_{m,\sigma_2,\sigma_3})&:=\{g_m\in C^0(\mathbb{R}^{2dm}):\supp g_m\subseteq \mathcal{D}_{m,\sigma_2,\sigma_3}\}\label{continuous finite}. \end{align} \begin{definition} For $t\in\mathbb{R}$ and $0<\sigma_2<\sigma_3<1$, we define the $m$-particle $(\sigma_2,\sigma_3)$-flow operator $T_m^t:C^0(\mathcal{D}_{m,\sigma_2,\sigma_3})\to C^0(\mathcal{D}_{m,\sigma_2,\sigma_3})$ as: \begin{equation}\label{liouville operator} T_m^tg_m(Z_m)=\begin{cases} g_{m}(\Psi_m^{-t}Z_m),\quad \text{if }Z_m\in\mathcal{D}_{m,\sigma_2,\sigma_3},\\ 0,\quad \text{if }Z_m\notin\mathcal{D}_{m,\sigma_2,\sigma_3}, \end{cases} \end{equation} where $\Psi_m$ is the $m$-particle $(\sigma_2,\sigma_3)$-flow defined in Theorem \ref{global flow}. \end{definition} \begin{remark} Given an initial probability density $f_{m,0}$ satisfying \eqref{support initial}-\eqref{symmetry of initial density}, the function $f_m(t,Z_m)=T_m^tf_{m,0}(Z_m)$ is formally the unique solution to the Liouville equation \eqref{Liouville equation} with initial data $f_{m,0}$.
\end{remark} We also define the $m$-particle free flow operator. \begin{definition}For $t\in\mathbb{R}$ and $m\in\mathbb{N}$, we define the $m$-particle free flow operator $S_m^t:C^0(\mathbb{R}^{2dm})\to C^0(\mathbb{R}^{2dm})$ as: \begin{equation}\label{free flow operator} S_m^tg_m(Z_m)=g_m(\Phi_m^{-t}Z_m)=g_m(X_m-tV_m,V_m). \end{equation} \end{definition} \section{BBGKY hierarchy, Boltzmann hierarchy and the binary-ternary Boltzmann equation}\label{sec:BBGKY} \subsection{The BBGKY hierarchy} Consider $N$ particles of diameter $0<\epsilon_2<1$ and interaction zone $0<\epsilon_3<1$, where $N\geq 3$ and $\epsilon_2<\epsilon_3$. For $s\in\mathbb{N}$, we define the $s$-marginal of a symmetric probability density $f_N$, supported in $\mathcal{D}_{N,\epsilon_2,\epsilon_3}$, as \begin{equation}\label{def marginals} f_N^{(s)}(Z_s)= \begin{cases} \displaystyle\int_{\mathbb{R}^{2d\left(N-s\right)}}f_N(Z_N)\,dx_{s+1}...\,dx_N\,dv_{s+1}...\,dv_N,\text{ } 1\leq s< N,\\ f_N,\text{ } s=N,\\ 0,\text{ } s>N, \end{cases} \end{equation} where for $Z_s=(X_s,V_s)\in\mathbb{R}^{2ds}$, we write $Z_N=(X_s,x_{s+1},...,x_N,V_s,v_{s+1},...,v_N)$. One can see that, for all $1\leq s\leq N$, the marginals $f_N^{(s)}$ are symmetric probability densities, supported in $\mathcal{D}_{s,\epsilon_2,\epsilon_3}$, and \begin{equation*} f_N^{(s)}(Z_s)=\int_{\mathbb{R}^{2d}}f_N^{(s+1)}(Z_{s+1})\,dx_{s+1}\,dv_{s+1},\quad\forall 1\leq s\leq N-1. \end{equation*} Assume now that $f_N$ is formally the solution to the $N$-particle Liouville equation \eqref{Liouville equation} with initial data $f_{N,0}$. We seek to formally find a hierarchy of equations satisfied by the marginals of $f_N$.
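Before deriving the hierarchy, let us record a one-line sanity check that each marginal in \eqref{def marginals} is indeed a probability density: by Fubini's Theorem, for any $1\leq s\leq N$,
\begin{equation*}
\int_{\mathbb{R}^{2ds}}f_N^{(s)}(Z_s)\,dX_s\,dV_s=\int_{\mathbb{R}^{2dN}}f_N(Z_N)\,dX_N\,dV_N=1,
\end{equation*}
while the symmetry of $f_N^{(s)}$ with respect to permutations of the first $s$ particles is inherited from the symmetry of $f_N$.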
For $s\geq N$, by definition, we have \begin{equation}\label{BBGKY s=N} f_N^{(N)}=f_N,\text{ and } f_N^{(s)}= 0,\text{ for } s>N. \end{equation} We observe that $\partial \mathcal{D}_{N,\epsilon_2,\epsilon_3}$ is equivalent, up to surface measure zero, to $\Sigma^X\times\mathbb{R}^{dN}$, where \begin{equation}\label{decomposition of the boundary BBGKY}\Sigma^X:=\bigcup_{(i,j)\in\mathcal{I}_N^2}\Sigma_{ij}^{2,sc,X}\cup\bigcup_{(i,j,k)\in\mathcal{I}_N^3}\Sigma_{ijk}^{3,sc,X},\end{equation} \begin{align*} \Sigma_{ij}^{2,sc,X}:=\big\{X_N\in\mathbb{R}^{dN}:&d_2(x_i,x_j)=\epsilon_2,\text{ }d_2(x_{i'},x_{j'})>\epsilon_2,\quad\forall (i',j')\in\mathcal{I}_N^2\setminus\{(i,j)\}\\ &\text{and }d_3(x_{i'};x_{j'},x_{k'})>\sqrt{2}\epsilon_3,\quad\forall (i',j',k')\in\mathcal{I}_N^3\big\}, \end{align*} \begin{align*} \Sigma_{ijk}^{3,sc,X}:=\big\{X_N\in\mathbb{R}^{dN}:&d_3(x_i;x_j,x_k)=\sqrt{2}\epsilon_3,\text{ }d_2(x_{i'},x_{j'})>\epsilon_2,\quad\forall (i',j')\in\mathcal{I}_N^2\\ &\text{and }d_3(x_{i'};x_{j'},x_{k'})>\sqrt{2}\epsilon_3,\quad\forall (i',j',k')\in\mathcal{I}_N^3\setminus\{(i,j,k)\}\big\}. \end{align*} Notice that \eqref{decomposition of the boundary BBGKY} is a pairwise disjoint union. \begin{remark}\label{remark on ordering epsilons} The assumption $\epsilon_2<\epsilon_3$ made at the beginning of the section is necessary for the ternary contribution to be visible. Indeed, if $\epsilon_2\geq\epsilon_3$, Remark \ref{remark on phase space} and \eqref{decomposition of simple ternary} would imply that $\Sigma_{ijk}^{3,sc,X}=\emptyset$ for all $(i,j,k)\in\mathcal{I}_N^3$, therefore there would be no ternary collisional term. \end{remark} The hierarchy for $s<N$ will come after integrating the Liouville equation \eqref{Liouville equation} by parts. Consider $1\leq s\leq N-1$. The boundary and initial conditions can be easily recovered by integrating the boundary and initial conditions of the Liouville equation respectively, i.e.
\begin{equation}\label{bc of BBGKY}\begin{cases} f_N^{(s)}(t,Z_s')=f^{(s)}_N(t,Z_s),\quad (t,Z_s)\in[0,\infty)\times\partial_{2,sc}\mathcal{D}_{s,\epsilon_2,\epsilon_3},\quad s\geq 2,\\ f_N^{(s)}(t,Z_s^*)=f^{(s)}_N(t,Z_s),\quad (t,Z_s)\in[0,\infty)\times\partial_{3,sc}\mathcal{D}_{s,\epsilon_2,\epsilon_3},\quad s\geq 3,\\ f_N^{(s)}(0,Z_s)=f_{N,0}^{(s)}(Z_s),\quad Z_s\in\mathring{\mathcal{D}}_{s,\epsilon_2,\epsilon_3}. \end{cases} \end{equation} Notice that for $s=2$ there is no ternary boundary condition, while for $s=1$ there is no boundary condition at all. Consider now a smooth test function $\phi_s$, compactly supported in $(0,\infty)\times\mathcal{D}_{s,\epsilon_2,\epsilon_3}$, such that the following hold: \begin{itemize} \item For any $(i,j)\in\mathcal{I}_N^2$ with $j\leq s$, we have \begin{equation}\label{test boundary binary} \phi_s(t,p_sZ_N')=\phi_s(t,p_sZ_N)=\phi_s(t,Z_s),\quad\forall (t,Z_N)\in (0,\infty)\times\Sigma_{ij}^{2,sc,X}\times\mathbb{R}^{dN}, \end{equation} \item For any $(i,j,k)\in \mathcal{I}_N^3$ with $j\leq s$, we have \begin{equation}\label{test boundary triary}\phi_s(t,p_sZ_N^*)=\phi_s(t,p_sZ_N)=\phi_s(t,Z_s),\quad\forall (t,Z_N)\in (0,\infty)\times\Sigma_{ijk}^{3,sc,X}\times\mathbb{R}^{dN},\end{equation} \end{itemize} where $p_s:\mathbb{R}^{2dN}\to\mathbb{R}^{2ds}$ denotes the natural projection onto the positions and velocities of the first $s$ particles, given by $p_s(Z_N)=Z_s.$ Multiplying the Liouville equation by $\phi_s$ and integrating, we obtain its weak form \begin{equation}\label{weak form initial} \int_{(0,\infty)\times\mathcal{D}_{N,\epsilon_2,\epsilon_3}}\bigg(\partial_tf_N\left(t,Z_N\right)+\sum_{i=1}^Nv_i\nabla_{x_i}f_N\left(t,Z_N\right)\bigg)\phi_s(t,Z_s)\,dX_N\,dV_N\,dt=0.
\end{equation} For the time derivative in \eqref{weak form initial}, we use Fubini's Theorem, integration by parts in time, the fact that $f_N$ is supported in $(0,\infty)\times\mathcal{D}_{N,\epsilon_2,\epsilon_3}$ and the fact that $\phi_s$ is compactly supported in $(0,\infty)\times\mathcal{D}_{s,\epsilon_2,\epsilon_3}$, to obtain \begin{align} \int_{(0,\infty)\times\mathcal{D}_{N,\epsilon_2,\epsilon_3}}\partial_tf_N(t,Z_N)\phi_s(t,Z_s)\,dX_N\,dV_N\,dt&=\int_{(0,\infty)\times\mathcal{D}_{s,\epsilon_2,\epsilon_3}}\partial_tf_N^{(s)}(t,Z_s)\phi_s(t,Z_s)\,dX_s\,dV_s\,dt\label{time term BBGKY}. \end{align} For the material derivative term in \eqref{weak form initial}, the Divergence Theorem implies that \begin{align} &\int_{\mathcal{D}_{N,\epsilon_2,\epsilon_3}}\sum_{i=1}^Nv_i\nabla_{x_i}f_N\left(t,Z_N\right)\phi_s(t,Z_s)\,dX_N\,dV_N=\int_{\mathcal{D}_{N,\epsilon_2,\epsilon_3}}\diverg_{X_N}\left[f_N\left(t,Z_N\right)V_N\right]\phi_s(t,Z_s)\,dX_N\,dV_N\nonumber\\ &=-\int_{\mathcal{D}_{N,\epsilon_2,\epsilon_3}}V_N\cdot\nabla_{X_N}\phi_s(t,Z_s)f_N(t,Z_N)\,dX_N\,dV_N+\int_{\Sigma^X\times\mathbb{R}^{dN}}\hat{n}\left(X_N\right)\cdot V_Nf_N\left(t,Z_N\right)\phi_s\left(t,Z_s\right)\,dV_N\,d\sigma,\label{first int by parts diverg} \end{align} where $\Sigma^X$ is given by \eqref{decomposition of the boundary BBGKY}, $\hat{n}(X_N)$ is the outwards normal vector on $\Sigma^X$ at $X_N\in\Sigma^X$ and $\,d\sigma$ is the surface measure on $\Sigma^X$. 
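For orientation, the binary part of the boundary integrand can be made explicit. The following is a routine computation, not spelled out in the text, under the convention that $\hat{n}$ points out of $\mathcal{D}_{N,\epsilon_2,\epsilon_3}$ (i.e.\ towards decreasing separation), and assuming $d_2(x_i,x_j)=|x_i-x_j|$ denotes the Euclidean distance:

```latex
% Outward normal on the binary part of the boundary: only the i-th and j-th
% spatial slots of \hat{n}_{ij}^2 are nonzero, since d_2(x_i,x_j)=\epsilon_2 there.
\[
\hat{n}_{ij}^2(X_N)
=-\frac{1}{\sqrt{2}\,\epsilon_2}\Big(0,\dots,0,\underbrace{x_i-x_j}_{i\text{-th slot}},0,\dots,0,\underbrace{x_j-x_i}_{j\text{-th slot}},0,\dots,0\Big),
\qquad
\hat{n}_{ij}^2(X_N)\cdot V_N
=-\frac{\langle x_i-x_j,\,v_i-v_j\rangle}{\sqrt{2}\,\epsilon_2}.
\]
```

In particular, the sign of $\langle x_i-x_j,v_i-v_j\rangle$ distinguishes pre- from post-collisional configurations in the boundary terms below.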
Using the fact that $f_N$ is supported in $\mathcal{D}_{N,\epsilon_2,\epsilon_3}$, the Divergence Theorem, and the fact that $\phi_s$ is compactly supported in $(0,\infty)\times\mathcal{D}_{s,\epsilon_2,\epsilon_3}$, we obtain \begin{align} \int_{\mathcal{D}_{N,\epsilon_2,\epsilon_3}}V_N\cdot\nabla_{X_N}\phi_s(t,Z_s)f_N(t,Z_N)\,dX_N\,dV_N &=-\int_{\mathcal{D}_{s,\epsilon_2,\epsilon_3}}\sum_{i=1}^sv_i\nabla_{x_i}f_N^{(s)}(t,Z_s)\phi_s(t,Z_s)\,dX_s\,dV_s.\label{divergence transport term} \end{align} Combining \eqref{weak form initial}-\eqref{divergence transport term}, and recalling the space boundary decomposition \eqref{decomposition of the boundary BBGKY}, we obtain \begin{align} &\int_{(0,\infty)\times\mathcal{D}_{s,\epsilon_2,\epsilon_3}}\left(\partial_tf_N^{(s)}\left(t,Z_s\right)+\sum_{i=1}^sv_i\nabla_{x_i}f_N^{(s)}\left(t,Z_s\right)\right)\phi_s\left(t,Z_s\right)\,dX_s\,dV_s\,dt\nonumber\\ &=-\int_{(0,\infty)\times\Sigma^X\times\mathbb{R}^{dN}}\hat{n}\left(X_N\right)\cdot V_Nf_N\left(t,Z_N\right)\phi_s\left(t,Z_s\right)\,dV_N\,d\sigma\,dt\nonumber\\ &=:\int_0^\infty \bigg(\sum_{(i,j)\in\mathcal{I}_N^2} C_{ij}^2(t)+\sum_{(i,j,k)\in\mathcal{I}_N^3} C_{ijk}^3(t)\bigg)\,dt,\label{weak Liouville} \end{align} where for $(i,j)\in\mathcal{I}_N^2$, $t>0$, we denote \begin{equation}\label{c_ijk^2 def} C_{ij}^2(t)=-\int_{\Sigma_{i,j}^{2,sc,X}\times\mathbb{R}^{dN}}\hat{n}_{ij}^2\left(X_N\right)\cdot V_Nf_N\left(t,Z_N\right)\phi_s\left(t,Z_s\right)\,dV_N\,d\sigma_{ij}^2, \end{equation} for $(i,j,k)\in\mathcal{I}_N^3$, $t>0$, we denote \begin{equation}\label{c_ijk^3 def} C_{ijk}^3(t)=-\int_{\Sigma_{i,j,k}^{3,sc,X}\times\mathbb{R}^{dN}}\hat{n}_{ijk}^3\left(X_N\right)\cdot V_Nf_N\left(t,Z_N\right)\phi_s\left(t,Z_s\right)\,dV_N\,d\sigma_{ijk}^3, \end{equation} and $\hat{n}_{ij}^2(X_N)$ is the outwards normal vector on $\Sigma_{ij}^{2,sc,X}$ at $X_N\in\Sigma_{ij}^{2,sc,X}$, $\,d\sigma_{ij}^2$ is the surface measure on $\Sigma_{ij}^{2,sc,X}$, while $\hat{n}_{ijk}^3(X_N)$ is the outwards normal
vector on $\Sigma_{ijk}^{3,sc,X}$ at $X_N\in\Sigma_{ijk}^{3,sc,X}$ and $\,d\sigma_{ijk}^3$ is the surface measure on $\Sigma_{ijk}^{3,sc,X}$. Following similar calculations to \cite{gallagher} which treats the binary case, and \cite{ternary} which treats the ternary case, we formally obtain the BBGKY hierarchy: \begin{equation}\label{BBGKY}\begin{cases} \partial_tf_N^{(s)}+\displaystyle\sum_{i=1}^sv_i\nabla_{x_i}f_N^{(s)}=\mathcal{C}_{s,s+1}^Nf_N^{(s+1)}+\mathcal{C}_{s,s+2}^Nf_N^{(s+2)},\quad (t,Z_s)\in (0,\infty)\times\mathring{\mathcal{D}}_{s,\epsilon_2,\epsilon_3},\\ f_N^{(s)}(t,Z_s')=f_N^{(s)}(t,Z_s),\quad(t,Z_s)\in [0,\infty)\times\partial_{2,sc}\mathcal{D}_{s,\epsilon_2,\epsilon_3},\text{ whenever } s\geq 2,\\ f_N^{(s)}(t,Z_s^*)=f_N^{(s)}(t,Z_s),\quad(t,Z_s)\in [0,\infty)\times\partial_{3,sc}\mathcal{D}_{s,\epsilon_2,\epsilon_3},\text{ whenever } s\geq 3,\\ f_N^{(s)}(0,Z_s)=f_{N,0}^{(s)}(Z_s),\quad Z_s\in\mathring{\mathcal{D}}_{s,\epsilon_2,\epsilon_3}, \end{cases} \end{equation} where \begin{align} \mathcal{C}_{s,s+1}^N&=\mathcal{C}_{s,s+1}^{N,+}-\mathcal{C}_{s,s+1}^{N,-}\label{BBGKY operator binary},\\ \mathcal{C}_{s,s+2}^N&=\mathcal{C}_{s,s+2}^{N,+}-\mathcal{C}_{s,s+2}^{N,-}\label{BBGKY operator triary}. 
\end{align} and we use the following notation: $\bullet$ \textbf{Binary notation:} For $1\leq s\leq N-1$ we denote \begin{equation}\label{BBGKY operator+ binary} \begin{aligned} \mathcal{C}_{s,s+1}^{N,+}f_N^{(s+1)}(t,Z_s) =A_{N,\epsilon_2,s}^2\sum_{i=1}^s&\int_{\mathbb{S}_1^{d-1}\times\mathbb{R}^{d}}b_2^+(\omega_1,v_{s+1}-v_i) f_N^{(s+1)}\left(t,Z_{s+1,\epsilon_2,i}'\right)\,d\omega_1\,dv_{s+1}, \end{aligned} \end{equation} \begin{equation}\label{BBGKY operator- binary} \begin{aligned} \mathcal{C}_{s,s+1}^{N,-}f_N^{(s+2)}(t,Z_s)= A_{N,\epsilon_2,s}^2\sum_{i=1}^s&\int_{\mathbb{S}_1^{d-1}\times\mathbb{R}^{d}}b_2^+(\omega_1,v_{s+1}-v_i)f_N^{(s+1)}\left(t,Z_{s+1,\epsilon_2,i}\right)\,d\omega_1\,dv_{s+1}, \end{aligned} \end{equation} where \begin{equation}\label{A binary} \begin{aligned} &b_2(\omega_1,v_{s+1}-v_i)=\langle\omega_1,v_{s+1}-v_i\rangle,\\ &b_2^+=\max\{b_2,0\},\\ &A_{N,\epsilon_2,s}^2=(N-s)\epsilon_2^{d-1},\\ &Z_{s+1,\epsilon_2,i}=(x_1,...,x_i,...,x_s,x_i-\epsilon_2\omega_1,v_1,...v_{i-1},v_i,v_{i+1},...,v_s,v_{s+1}),\\ &Z_{s+1,\epsilon_2,i}'=(x_1,...,x_i,...,x_s,x_i+\epsilon_2\omega_1,v_1,...v_{i-1},v_i',v_{i+1},...,v_s,v_{s+1}'). \end{aligned} \end{equation} For $s\geq N$ we trivially define $ \mathcal{C}_{s,s+1}^{N}\equiv 0. 
$ $\bullet$ \textbf{Binary notation (continued for the limit):} We keep the binary notation of \eqref{A binary}.
$ Duhamel's formula implies that the BBGKY hierarchy can be written in mild form as follows: \begin{equation}\label{mild BBGKY}f_N^{(s)}(t,Z_s)=T_s^tf_{N,0}^{(s)}(Z_s)+\int_0^tT_s^{t-\tau}\left(\mathcal{C}_{s,s+1}^Nf_N^{(s+1)}+\mathcal{C}_{s,s+2}^Nf_N^{(s+2)}\right)(\tau,Z_s)\,d\tau,\quad s\in\mathbb{N},\end{equation} where $T_s^t$ is the $s$-particle $(\epsilon_2,\epsilon_3)$-flow operator given in \eqref{liouville operator}. \subsection{The Boltzmann hierarchy} We will now derive the Boltzmann hierarchy as the formal limit of the BBGKY hierarchy as $N\to\infty$ and $\epsilon_2,\epsilon_3\to 0^+$ under the scaling \begin{equation}\label{scaling} N\epsilon_2^{d-1}\simeq N\epsilon_3^{d-1/2}\simeq 1. \end{equation} This scaling implies that $\epsilon_2$, $\epsilon_3$ satisfy \begin{equation}\label{relation epsilons} \epsilon_2^{d-1}\simeq\epsilon_3^{d-1/2}. \end{equation} \begin{remark}\label{remark for epsilons} Using the scaling \eqref{scaling}, we obtain \begin{align} \epsilon_2\simeq N^{-\frac{1}{d-1}}\overset{N\to\infty}\longrightarrow 0,\quad \epsilon_3\simeq N^{-\frac{2}{2d-1}}\overset{N\to\infty}\longrightarrow 0,\label{epsilon with respect to N} \end{align} thus \begin{equation}\label{quotient of epsilon} \frac{\epsilon_2}{\epsilon_3}\simeq N^{-\frac{1}{(d-1)(2d-1)}}\overset{N\to\infty}\longrightarrow 0. \end{equation} Therefore, for $N$ large enough, we have $\epsilon_2\ll\epsilon_3$. \end{remark} \begin{remark}\label{remark on growth factors} The scaling \eqref{scaling} guarantees that for a fixed $s\in\mathbb{N}$, we have \begin{equation*} \begin{aligned} A_{N,\epsilon_2,s}^2&=(N-s)\epsilon_2^{d-1}\longrightarrow 1,\quad\text{as }N\to\infty,\\ A_{N,\epsilon_3,s}^3&=2^{d-2}(N-s)(N-s-1)\epsilon_3^{2d-1}\longrightarrow 1,\quad\text{as }N\to\infty.
\end{aligned} \end{equation*} \end{remark} Formally taking the limit under the imposed scaling, we may define the following collisional operators: $\bullet$ \textbf{Binary Boltzmann operator:} \begin{equation}\label{boltzmann hiera kernel binary} \mathcal{C}_{s,s+1}^{\infty}=\mathcal{C}_{s,s+1}^{\infty,+}-\mathcal{C}_{s,s+1}^{\infty,-}, \end{equation} where \begin{equation}\label{Boltzmann operator + binary} \mathcal{C}_{s,s+1}^{\infty,+}f^{(s+1)}(t,Z_s)=\sum_{i=1}^s\int_{(\mathbb{S}_1^{d-1}\times\mathbb{R}^{d})}b_2^+(\omega_1,v_{s+1}-v_i)f^{(s+1)}\left(t,Z_{s+1,i}'\right)\,d\omega_1\,dv_{s+1}, \end{equation} \begin{equation}\label{Boltzmann operator - binary} \mathcal{C}_{s,s+1}^{\infty,-}f^{(s+1)}(t,Z_s)=\sum_{i=1}^s\int_{(\mathbb{S}_1^{d-1}\times\mathbb{R}^{d})}b_2^+(\omega_1,v_{s+1}-v_i)f^{(s+1)}\left(t,Z_{s+1,i}\right)\,d\omega_1\,dv_{s+1}, \end{equation} where \begin{equation}\label{boltzmann notation binary} \begin{aligned} &b_2(\omega_1,v_{s+1}-v_i)=\langle\omega_1,v_{s+1}-v_i\rangle,\\ &b_2^+=\max\{0,b_2\},\\ &Z_{s+1,i}=(x_1,...,x_i,...,x_s,x_i,v_1,...v_{i-1},v_i,v_{i+1},...,v_s,v_{s+1}),\\ &Z_{s+1,i}'=(x_1,...,x_i,...,x_s,x_i,v_1,...v_{i-1},v_i',v_{i+1},...,v_s,v_{s+1}').
\end{aligned} \end{equation} $\bullet$ \textbf{Ternary Boltzmann operator:} \begin{equation}\label{boltzmann hiera kernel ternary} \mathcal{C}_{s,s+2}^{\infty}=\mathcal{C}_{s,s+2}^{\infty,+}-\mathcal{C}_{s,s+2}^{\infty,-}, \end{equation} where \begin{equation}\label{Boltzmann operator + triary} \begin{aligned} \mathcal{C}_{s,s+2}^{\infty,+}f^{(s+2)}(t,Z_s)=\sum_{i=1}^s\int_{(\mathbb{S}_1^{2d-1}\times\mathbb{R}^{2d})}\frac{b_3^+(\omega_1,\omega_2,v_{s+1}-v_i,v_{s+2}-v_i)}{\sqrt{1+\langle\omega_1,\omega_2\rangle}}f^{(s+2)}\left(t,Z_{s+2,i}^*\right)&\\ \times \,d\omega_1\,d\omega_2\,dv_{s+1}\,dv_{s+2}&, \end{aligned} \end{equation} \begin{equation}\label{Boltzmann operator - triary} \begin{aligned} \mathcal{C}_{s,s+2}^{\infty,-}f^{(s+2)}(t,Z_s)=\sum_{i=1}^s\int_{(\mathbb{S}_1^{2d-1}\times\mathbb{R}^{2d})}\frac{b_3^+(\omega_1,\omega_2,v_{s+1}-v_i,v_{s+2}-v_i)}{\sqrt{1+\langle\omega_1,\omega_2\rangle}}\times f^{(s+2)}\left(t,Z_{s+2,i}\right)&\\ \times\,d\omega_1\,d\omega_2\,dv_{s+1}\,dv_{s+2}&, \end{aligned} \end{equation} \begin{equation}\label{boltzmann notation triary} \begin{aligned} &b_3(\omega_1,\omega_2,v_{s+1}-v_i,v_{s+2}-v_i)=\langle\omega_1,v_{s+1}-v_i\rangle+\langle\omega_2,v_{s+2}-v_i\rangle,\\ &b_3^+=\max\{b_3,0\},\\ &Z_{s+2,i}=(x_1,...,x_i,...,x_s,x_i,x_i,v_1,...v_{i-1},v_i,v_{i+1},...,v_s,v_{s+1},v_{s+2}),\\ &Z_{s+2,i}^*=(x_1,...,x_i,...,x_s,x_i,x_i,v_1,...v_{i-1},v_i^*,v_{i+1},...,v_s,v_{s+1}^*,v_{s+2}^*). \end{aligned} \end{equation} Now we are ready to introduce the Boltzmann hierarchy. More precisely, given an initial probability density $f_0$, the Boltzmann hierarchy for $s\in\mathbb{N}$ is given by: \begin{equation}\label{Boltzmann hierarchy}\begin{cases} \partial_tf^{(s)}+\displaystyle\sum_{i=1}^sv_i\nabla_{x_i}f^{(s)}=\mathcal{C}_{s,s+1}^\infty f^{(s+1)}+\mathcal{C}_{s,s+2}^\infty f^{(s+2)},\quad(t,Z_s)\in (0,\infty)\times\mathbb{R}^{2ds},\\ f^{(s)}(0,Z_s)=f_0^{(s)}(Z_s),\quad\forall Z_s\in\mathbb{R}^{2ds}. 
\end{cases} \end{equation} Duhamel's formula implies that the Boltzmann hierarchy can be written in mild form as follows: \begin{equation}\label{mild Boltzmann} f^{(s)}(t,Z_s)=S_s^tf_0^{(s)}(Z_s)+\int_0^tS_s^{t-\tau}\left(\mathcal{C}_{s,s+1}^\infty f^{(s+1)}+\mathcal{C}_{s,s+2}^\infty f^{(s+2)}\right)(\tau,Z_s)\,d\tau,\quad s\in\mathbb{N}, \end{equation} where $S_s^t$ denotes the $s$-particle free flow operator given in \eqref{free flow operator}. \subsection{The binary-ternary Boltzmann equation} In most applications, particles are initially independently distributed. This translates to tensorized Boltzmann hierarchy initial data, i.e. \begin{equation}\label{tensorized initial data} f_0^{(s)}(Z_s)=f_0^{\otimes s}(Z_s)=\prod_{i=1}^sf_0(x_i,v_i),\quad s\in\mathbb{N}, \end{equation} where $f_0:\mathbb{R}^{d}\times\mathbb{R}^d\to\mathbb{R}$ is a given function. One can easily verify that the ansatz \begin{equation}\label{propagation of chaos} f^{(s)}(t,Z_s)=f^{\otimes s}(t,Z_s)=\prod_{i=1}^sf(t,x_i,v_i),\quad s\in\mathbb{N}, \end{equation} solves the Boltzmann hierarchy with initial data given by \eqref{tensorized initial data}, if $f:[0,\infty)\times\mathbb{R}^{d}\times\mathbb{R}^d\to\mathbb{R}$ satisfies the following nonlinear integro-differential equation: \begin{equation}\label{Boltzmann equation} \begin{cases} \partial_tf+v\cdot\nabla_xf=Q_2(f,f)+Q_3(f,f,f),\quad (t,x,v)\in (0,\infty)\times\mathbb{R}^{2d},\\ f(0,x,v)=f_0(x,v),\quad (x,v)\in\mathbb{R}^{2d}, \end{cases} \end{equation} which we call the binary-ternary Boltzmann equation.
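For the reader's convenience, we sketch this verification; it is a formal computation, with $Q_2$, $Q_3$ the operators defined immediately below. Since in the limiting collisional operators all spatial arguments of the added particles collapse to $x_i$, the factors with index $j\neq i$ pull out of the collision integrals, and substituting \eqref{propagation of chaos} yields:

```latex
% Formal verification of the chaos ansatz: substituting f^{\otimes s} into the
% Boltzmann hierarchy factors every term through the single-particle equation.
\begin{align*}
\Big(\partial_t+\sum_{i=1}^sv_i\cdot\nabla_{x_i}\Big)f^{\otimes s}(t,Z_s)
&=\sum_{i=1}^s\big(\partial_tf+v_i\cdot\nabla_{x_i}f\big)(t,x_i,v_i)\prod_{j\neq i}f(t,x_j,v_j),\\
\mathcal{C}_{s,s+1}^{\infty}f^{\otimes(s+1)}(t,Z_s)
&=\sum_{i=1}^sQ_2(f,f)(t,x_i,v_i)\prod_{j\neq i}f(t,x_j,v_j),\\
\mathcal{C}_{s,s+2}^{\infty}f^{\otimes(s+2)}(t,Z_s)
&=\sum_{i=1}^sQ_3(f,f,f)(t,x_i,v_i)\prod_{j\neq i}f(t,x_j,v_j).
\end{align*}
```

Matching these expressions term by term, the hierarchy for $f^{\otimes s}$ reduces to $s$ copies of the binary-ternary Boltzmann equation, one for each particle.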
The binary collisional operator $Q_2$ is given by \begin{equation}\label{Boltzmann operator binary} Q_2(f,f)(t,x,v)=\int_{\mathbb{S}_1^{d-1}\times\mathbb{R}^{d}}b_2^+(\omega_1,v_{1}-v)\left(f'f_1'-ff_1\right)\,d\omega_1\,dv_1, \end{equation} where \begin{equation}\label{notation binary} \begin{aligned} &b_2(\omega_1,v_1-v)=\langle\omega_1,v_1-v\rangle,\\ &b_2^+=\max\{0,b_2\},\\ &f'=f(t,x,v'),\quad f=f(t,x,v),\\ &f_1'=f(t,x,v_1'),\quad f_1=f(t,x,v_1). \end{aligned} \end{equation} The ternary collisional operator $Q_3$ is given by \begin{equation}\label{Boltzmann operator triary} Q_3(f,f,f)(t,x,v)=\int_{\mathbb{S}_1^{2d-1}\times\mathbb{R}^{2d}}\frac{b_3^+(\omega_1,\omega_2,v_1-v,v_2-v)}{\sqrt{1+\langle\omega_1,\omega_2\rangle}}\left(f^*f_1^*f_2^*-ff_1f_2\right)\,d\omega_1\,d\omega_2\,dv_1\,dv_2, \end{equation} where \begin{equation}\label{notation triary} \begin{aligned} &b_3(\omega_1,\omega_2,v_1-v,v_2-v)=\langle\omega_1,v_1-v\rangle+\langle\omega_2,v_2-v\rangle,\\ &b_3^+=\max\{0,b_3\},\\ &f^*=f(t,x,v^*),\quad f=f(t,x,v),\\ &f_1^*=f(t,x,v_1^*),\quad f_1=f(t,x,v_1),\\ &f_2^*=f(t,x,v_2^*),\quad f_2=f(t,x,v_2). \end{aligned} \end{equation} Duhamel's formula implies the binary-ternary Boltzmann equation can be written in mild form as \begin{equation}\label{mild Boltzmann equation} \begin{aligned} f(t,x,v)&=S_1^tf_0(x,v)+\int_0^tS_1^{t-\tau}\left(Q_2(f,f)+Q_3(f,f,f)\right)(\tau,x,v)\,d\tau, \end{aligned} \end{equation} where \begin{equation*} S_1^tg(x,v)=g(x-tv,v),\quad\forall (t,x,v)\in[0,\infty)\times\mathbb{R}^{2d},\quad g:\mathbb{R}^{2d}\to\mathbb{R}. \end{equation*} \begin{remark}\label{remark on propagation} We will see in Section \ref{sec:local} that both the Boltzmann hierarchy and the binary-ternary Boltzmann equation are well-posed in appropriate functional spaces.
It is not hard to see that if $f$ is formally a solution to the binary-ternary Boltzmann equation with initial data $f_0$, then the tensorized product $F:=(f^{\otimes s})_{s\in\mathbb{N}}$ is a solution to the Boltzmann hierarchy with initial data $F_0:=(f_0^{\otimes s})_{s\in\mathbb{N}}$. Therefore, the tensorized product of the unique solution to the binary-ternary Boltzmann equation with initial data $f_0$ will give the unique mild solution to the Boltzmann hierarchy with initial data $F_0$. \end{remark} \begin{remark}\label{remark on equation properties} It is important to point out that in \cite{thesis}, the ternary operator $Q_3$ was symmetrized to an operator $\widetilde{Q}_3$ which shares similar statistical and entropy production properties with the classical binary Boltzmann operator $Q_2$ (see \cite{cercignani gases}). In particular, it has a weak formulation which yields an $\mathcal{H}$-Theorem and local conservation laws. Hence, the operator $Q_2+\widetilde{Q}_3$ satisfies these statistical properties as well. This observation illustrates that the binary-ternary equation we are studying could serve as an extension of the classical Boltzmann equation for modeling denser gases. \end{remark} \section{Local well-posedness}\label{sec:local} In this section, we show that the BBGKY hierarchy, the Boltzmann hierarchy and the binary-ternary Boltzmann equation are well-posed for short times in Maxwellian-weighted $L^\infty$-spaces. To obtain these results, we combine the continuity estimates on the binary and ternary collisional operators, obtained in \cite{gallagher} and \cite{ternary} respectively. \subsection{LWP for the BBGKY hierarchy}\label{sub BBGKY well posedness} Consider $(N,\epsilon_2,\epsilon_3)$ in the scaling \eqref{scaling}, with $N\geq 3$.
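For concreteness, the scaling \eqref{scaling} can be written out in the physical dimension $d=3$; this particular value is purely illustrative and plays no special role in the arguments below:

```latex
% The common scaling \eqref{scaling} made explicit for d=3,
% in accordance with \eqref{epsilon with respect to N} and \eqref{quotient of epsilon}.
\[
N\epsilon_2^{2}\simeq N\epsilon_3^{5/2}\simeq 1
\quad\Longrightarrow\quad
\epsilon_2\simeq N^{-1/2},\qquad
\epsilon_3\simeq N^{-2/5},\qquad
\frac{\epsilon_2}{\epsilon_3}\simeq N^{-1/10}\overset{N\to\infty}\longrightarrow 0.
\]
```

In particular, the binary diameter shrinks strictly faster than the ternary interaction zone, which is why both collisional mechanisms survive in the limit.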
For $s\in\{1,...,N\}$, recall from \eqref{continuous finite} the space of functions \begin{align*} C^0(\mathcal{D}_{s,\epsilon_2,\epsilon_3})&:=\{g_s\in C^0(\mathbb{R}^{2ds}):\supp g_s\subseteq \mathcal{D}_{s,\epsilon_2,\epsilon_3}\}. \end{align*} For $\beta> 0$ we define the Banach space \begin{equation*} X_{N,\beta,s} :=\left\{g_{N,s}\in C^0(\mathcal{D}_{s,\epsilon_2,\epsilon_3}):|g_{N,s}|_{N,\beta,s}<\infty\right\}, \end{equation*} with norm $|g_{N,s}|_{N,\beta,s}=\sup_{Z_s\in\mathbb{R}^{2ds}}|g_{N,s}(Z_s)|e^{\beta E_s(Z_s)},$ where $E_s(Z_s)$ is the kinetic energy of the $s$-particles given by \eqref{kinetic energy}. For $s>N$ we trivially define $X_{N,\beta,s}:=\left\{0\right\}. $ \begin{remark}\label{T_s isometry} Given $t\in\mathbb{R}$ and $s\in\mathbb{N}$, conservation of energy under the flow \eqref{kinetic energy flow} implies that the $s$-particle $(\epsilon_2,\epsilon_3)$-flow operator $T_s^t:X_{N,\beta,s}\to X_{N,\beta,s}$, given in \eqref{liouville operator}, is an isometry i.e. \begin{equation*} |T_s^tg_{N,s}|_{N,\beta,s}=|g_{N,s}|_{N,\beta,s},\quad\forall g_{N,s}\in X_{N,\beta,s}. \end{equation*} \end{remark} \begin{proof} Let $g_{N,s}\in X_{N,\beta,s}$ and $Z_s\in\mathbb{R}^{2ds}$. If $Z_s\notin \mathcal{D}_{s,\epsilon_2,\epsilon_3}$, the result is trivial since $ g_{N,s}$ is supported in $\mathcal{D}_{s,\epsilon_2,\epsilon_3}$. Assume $Z_s\in\mathcal{D}_{s,\epsilon_2,\epsilon_3}$. Then Theorem \ref{global flow} yields \begin{equation*} \begin{aligned} e^{\beta E_s(Z_s)}|T_s^tg_{N,s}(Z_s)|&=e^{\beta E_s(Z_s)}|(g_{N,s}\circ\Psi_s^{-t})(Z_s)|=e^{\beta E_s\left(\Psi_s^{-t}Z_s\right)}|g_{N,s}(\Psi_s^{-t}Z_s)|\leq |g_{N,s}|_{N,\beta,s}, \end{aligned} \end{equation*} hence $|T_s^tg_{N,s}|_{N,\beta,s}\leq |g_{N,s}|_{N,\beta,s}$. The reverse inequality follows similarly, using the fact that $Z_s=\Psi_s^{-t}(\Psi_s^tZ_s)$. \end{proof} Consider as well $\mu\in\mathbb{R}$.
We define the Banach space \begin{equation*} X_{N,\beta,\mu}:=\left\{G_N=(g_{N,s})_{s\in\mathbb{N}}:\|G_N\|_{N,\beta,\mu}<\infty\right\},\end{equation*} with norm $\|G_N\|_{N,\beta,\mu}=\sup_{s\in\mathbb{N}}e^{\mu s}|g_{N,s}|_{N,\beta,s}=\max_{s\in\{1,...,N\}}e^{\mu s}|g_{N,s}|_{N,\beta,s}.$ \begin{remark}\label{T_N isometry} Given $t\in\mathbb{R}$, Remark \ref{T_s isometry} implies that the map $\mathcal{T}^t:X_{N,\beta,\mu}\to X_{N,\beta,\mu}$ given by \begin{equation}\label{T_N definition} \mathcal{T}^tG_N:=\left(T_s^tg_{N,s}\right)_{s\in\mathbb{N}}, \end{equation} is an isometry i.e. $ \|\mathcal{T}^tG_N\|_{N,\beta,\mu}=\|G_N\|_{N,\beta,\mu},$ for any $G_N\in X_{N,\beta,\mu}. $ \end{remark} Finally, given $T>0$, $\beta_0> 0$, $\mu_0\in\mathbb{R}$ and $\bm{\beta},\bm{\mu}:[0,T]\to\mathbb{R}$ decreasing functions of time with $\bm{\beta}(0)=\beta_0$, $\bm{\beta}(T)> 0$, $\bm{\mu}(0)=\mu_0$, we define the Banach space \begin{equation*} \bm{X}_{N,\bm{\beta},\bm{\mu}}:=L^\infty\left([0,T],X_{N,\bm{\beta}(t),\bm{\mu}(t)}\right), \end{equation*} with norm $|||\bm{G_N}|||_{N,\bm{\beta},\bm{\mu}}=\sup_{t\in[0,T]}\|\bm{G_N}(t)\|_{N,\bm{\beta}(t),\bm{\mu}(t)}.$ Similarly to Proposition 6.2 in \cite{thesis}, one can obtain the following bounds: \begin{proposition}\label{remark for initial} Let $T>0$, $\beta_0>0$, $\mu_0\in\mathbb{R}$ and $\bm{\beta},\bm{\mu}:[0,T]\to\mathbb{R}$ decreasing functions with $\beta_0=\bm{\beta}(0)$, $\bm{\beta}(T)> 0$, $\mu_0=\bm{\mu}(0)$. Then for any $G_N=\left(g_{N,s}\right)_{s\in\mathbb{N}}\in X_{N,\beta_0,\mu_0}$, the following estimates hold: \begin{enumerate}[(i)] \item $|||G_N|||_{N,\bm{\beta},\bm{\mu}}\leq\|G_N\|_{N,\beta_0,\mu_0}$.\vspace{0.2cm} \item $\left|\left|\left|\displaystyle\int_0^t\mathcal{T}^{\tau}G_N\,d\tau\right|\right|\right|_{N,\bm{\beta},\bm{\mu}}\leq T\|G_N\|_{N,\beta_0,\mu_0}.$ \end{enumerate} \end{proposition} From Proposition 5.3.1. in \cite{gallagher} and Lemma 5.1.
in \cite{ternary}, we have the following continuity estimates for the binary and ternary collisional operators respectively: \begin{lemma}\label{a priori lemma for C BBGKY} Let $m\in\mathbb{N}$, $\beta>0$. For any $Z_m\in\mathcal{D}_{m,\epsilon_2,\epsilon_3}$ and $k\in\{1,2\}$, the following estimate holds: \begin{equation*} \left|\mathcal{C}_{m,m+k}^{N}g_{N,m+k}(Z_m)\right|\lesssim \beta^{-kd/2}\left(m\beta^{-1/2}+\sum_{i=1}^m|v_i|\right)e^{-\beta E_m(Z_m)}|g_{N,m+k}|_{N,\beta,m+k},\quad\forall g_{N,m+k}\in X_{N,\beta,m+k}. \end{equation*} \end{lemma} Let us now define mild solutions to the BBGKY hierarchy: \begin{definition}\label{def of mild bbgky} Consider $T>0$, $\beta_0> 0$, $\mu_0\in\mathbb{R}$ and the decreasing functions $\bm{\beta},\bm{\mu}:[0,T]\to\mathbb{R}$ with $\bm{\beta}(0)=\beta_0$, $\bm{\beta}(T)> 0$, $\bm{\mu}(0)=\mu_0$. Consider also initial data $G_{N,0}=\left(g_{N,s,0}\right)_{s\in\mathbb{N}}\in X_{N,\beta_0,\mu_0}$. A map $\bm{G_N}=\left(g_{N,s}\right)_{s\in\mathbb{N}}\in\bm{X}_{N,\bm{\beta},\bm{\mu}}$ is a mild solution of the BBGKY hierarchy in $[0,T]$, with initial data $G_{N,0}$, if it satisfies: \begin{equation*}\bm{G_N}(t)=\mathcal{T}^tG_{N,0}+\int_0^t \mathcal{T}^{t-\tau}\mathcal{C}_N\bm{G_N}(\tau)\,d\tau,\end{equation*} where, given $\beta>0$, $\mu\in\mathbb{R}$ and $G_{N}=(g_{N,s})_{s\in\mathbb{N}}\in X_{N,\beta,\mu}$, we write \begin{align*} \mathcal{C}_N G_N:=(\mathcal{C}_N^2 +\mathcal{C}_N^3) G_N,\quad\mathcal{C}_N^2 G_N:=\left(\mathcal{C}_{s,s+1}^N g_{N,s+1}\right)_{s\in\mathbb{N}},\quad\mathcal{C}_N^3 G_N:=\left(\mathcal{C}_{s,s+2}^N g_{N,s+2}\right)_{s\in\mathbb{N}}, \end{align*} and $\mathcal{T}^t$ is given by \eqref{T_N definition}. \end{definition} Using Lemma \ref{a priori lemma for C BBGKY}, we obtain the following a-priori bounds: \begin{lemma}\label{a priori lemma for T BBGKY} Let $\beta_0> 0$, $\mu_0\in\mathbb{R}$, $T>0$ and $\lambda\in (0,\beta_0/T)$.
Consider the functions $\bm{\beta}_\lambda,\bm{\mu}_\lambda:[0,T]\to\mathbb{R}$ given by \begin{equation}\label{beta_lambda-mu_lambda} \begin{aligned} \bm{\beta}_\lambda(t)=\beta_0-\lambda t,\quad\bm{\mu}_\lambda(t)=\mu_0-\lambda t. \end{aligned} \end{equation} Then for any $\mathcal{F}(t)\subseteq [0,t]$ measurable, $\bm{G_N}=\left(g_{N,s}\right)_{s\in\mathbb{N}}\in\bm{X}_{N,\bm{\beta}_\lambda,\bm{\mu}_\lambda}$ and $k\in\{1,2\}$ the following bounds hold: \begin{align} \left|\left|\left|\displaystyle\int_{\mathcal{F}(t)}\mathcal{T}^{t-\tau}\mathcal{C}_N^{k+1}\bm{G_N}(\tau)\,d\tau\right|\right|\right|_{N,\bm{\beta}_\lambda,\bm{\mu}_\lambda}&\leq C_{k+1}|||\bm{G_N}|||_{N,\bm{\beta}_\lambda,\bm{\mu}_\lambda},\label{both bbkky with constant} \end{align} \begin{align} C_{k+1}=C_{k+1}(d,\beta_0,\mu_0,T,\lambda)&=C_d\lambda^{-1}e^{-k\bm{\mu}_\lambda(T)}\bm{\beta}_\lambda^{-kd/2}(T)\left(1+\bm{\beta}_{\lambda}^{-1/2}(T)\right)\label{constant of WP binary}. \end{align} \end{lemma} \begin{proof} For the proof of \eqref{both bbkky with constant} for $k=1$, see Lemma 5.3.1. from \cite{gallagher} and for the proof for $k=2$ see Lemma 6.4. from \cite{thesis}. \end{proof} Choosing $\lambda=\beta_0/2T$, Lemma \ref{a priori lemma for T BBGKY} implies well-posedness of the BBGKY hierarchy up to short time. The proof follows similar steps to the proof of Theorem 6 from \cite{gallagher} and Theorem 6.4.1 from \cite{thesis}. \begin{theorem}\label{well posedness BBGKY} Let $\beta_0> 0$ and $\mu_0\in\mathbb{R}$. 
Then there is $T=T(d,\beta_0,\mu_0)>0$ such that for any initial datum $F_{N,0}=(f_{N,0}^{(s)})_{s\in\mathbb{N}}\in X_{N,\beta_0,\mu_0}$ there is a unique mild solution $\bm{F_N}=(f_N^{(s)})_{s\in\mathbb{N}}\in\bm{X}_{N,\bm{\beta},\bm{\mu}}$ to the BBGKY hierarchy in $[0,T]$ for the functions $\bm{\beta},\bm{\mu}:[0,T]\to\mathbb{R}$ given by \begin{equation}\label{beta mu given lambda} \begin{aligned} \bm{\beta}(t)=\beta_0-\frac{\beta_0}{2T}t,\quad \bm{\mu}(t)=\mu_0-\frac{\beta_0}{2T}t. \end{aligned} \end{equation} The solution $\bm{F_N}$ satisfies the bound: \begin{equation} \label{a priori bound F_N,0}|||\bm{F_N}|||_{N,\bm{\beta},\bm{\mu}}\leq 2\|F_{N,0}\|_{N,\beta_0,\mu_0}. \end{equation} Moreover, for any $\mathcal{F}(t)\subseteq[0,t]$ measurable and $k\in\{1,2\}$, the following bound holds: \begin{align} \label{a priori binary bound F_N}\left|\left|\left|\int_{\mathcal{F}(t)}\mathcal{T}^{t-\tau}\mathcal{C}_N^{k+1}\bm{G_N}(\tau)\,d\tau\right|\right|\right|_{N,\bm{\beta},\bm{\mu}}&\leq\frac{1}{16}|||\bm{G_N}|||_{{N,\bm{\beta},\bm{\mu}}},\quad\forall \bm{G_N}\in\bm{X}_{N,\bm{\beta},\bm{\mu}}. \end{align} The time $T$ is explicitly given by: \begin{equation}\label{time} T\simeq\beta_0\left(e^{-\mu_0-\frac{\beta_0}{2}}(\frac{\beta_0}{2})^{-d/2}+e^{-2\mu_0-\beta_0}(\frac{\beta_0}{2})^{-d}\right)^{-1}\left(1+(\frac{\beta_0}{2})^{-1/2}\right)^{-1}. \end{equation} \end{theorem} \subsection{LWP for the Boltzmann hierarchy} Similarly to Subsection \ref{sub BBGKY well posedness}, here we establish a-priori bounds and local well-posedness for the Boltzmann hierarchy. We omit the proofs, since they are identical to the BBGKY hierarchy case.
Given $s\in\mathbb{N}$ and $\beta> 0$, we define the Banach space \begin{equation*} X_{\infty,\beta,s} :=\left\{g_{s}\in C^0(\mathbb{R}^{2ds}):|g_{s}|_{\infty,\beta,s}<\infty\right\}, \end{equation*} with norm $|g_{s}|_{\infty,\beta,s}=\sup_{Z_s\in\mathbb{R}^{2ds}}|g_{s}(Z_s)|e^{\beta E_s(Z_s)},$ where $E_s(Z_s)$ is the kinetic energy of the $s$-particles given by \eqref{kinetic energy}. \begin{remark}\label{S_s isometry} Given $t\in\mathbb{R}$ and $s\in\mathbb{N}$, conservation of energy under the free flow implies that the $s$-particle free flow operator $S_s^t:X_{\infty,\beta,s}\to X_{\infty,\beta,s}$, given in \eqref{free flow operator}, is an isometry i.e. \begin{equation*} |S_s^tg_{s}|_{\infty,\beta,s}=|g_{s}|_{\infty,\beta,s},\quad\forall g_{s}\in X_{\infty,\beta,s}. \end{equation*} \end{remark} Consider as well $\mu\in\mathbb{R}$. We define the Banach space \begin{equation*} X_{\infty,\beta,\mu}:=\left\{G=(g_{s})_{s\in\mathbb{N}}:\|G\|_{\infty,\beta,\mu}<\infty\right\},\end{equation*} with norm $\|G\|_{\infty,\beta,\mu}=\sup_{s\in\mathbb{N}}e^{\mu s}|g_{s}|_{\infty,\beta,s}.$ \begin{remark}\label{S isometry} Given $t\in\mathbb{R}$, Remark \ref{S_s isometry} implies that the map $\mathcal{S}^t:X_{\infty,\beta,\mu}\to X_{\infty,\beta,\mu}$ given by \begin{equation}\label{S definition} \mathcal{S}^tG:=\left(S_s^tg_{s}\right)_{s\in\mathbb{N}}, \end{equation} is an isometry i.e. 
$ \|\mathcal{S}^tG\|_{\infty,\beta,\mu}=\|G\|_{\infty,\beta,\mu},$ for any $G\in X_{\infty,\beta,\mu}.$ \end{remark} Finally, given $T>0$, $\beta_0> 0$, $\mu_0\in\mathbb{R}$ and $\bm{\beta},\bm{\mu}:[0,T]\to\mathbb{R}$ decreasing functions of time with $\bm{\beta}(0)=\beta_0$, $\bm{\beta}(T)> 0$, $\bm{\mu}(0)=\mu_0$, we define the Banach space \begin{equation*} \bm{X}_{\infty,\bm{\beta},\bm{\mu}}:=L^\infty\left([0,T],X_{\infty,\bm{\beta}(t),\bm{\mu}(t)}\right), \end{equation*} with norm $|||\bm{G}|||_{\infty,\bm{\beta},\bm{\mu}}=\sup_{t\in[0,T]}\|\bm{G}(t)\|_{\infty,\bm{\beta}(t),\bm{\mu}(t)}.$ \begin{proposition}\label{remark for initial boltzmann hierarchy} Let $T>0$, $\beta_0>0$, $\mu_0\in\mathbb{R}$ and $\bm{\beta},\bm{\mu}:[0,T]\to\mathbb{R}$ decreasing functions with $\beta_0=\bm{\beta}(0)$, $\bm{\beta}(T)> 0$, $\mu_0=\bm{\mu}(0)$. Then for any $G=\left(g_{s}\right)_{s\in\mathbb{N}}\in X_{\infty,\beta_0,\mu_0}$, the following estimates hold: \begin{enumerate}[(i)] \item $|||G|||_{\infty,\bm{\beta},\bm{\mu}}\leq\|G\|_{\infty,\beta_0,\mu_0}$.\vspace{0.2cm} \item $\left|\left|\left|\displaystyle\int_0^t\mathcal{S}^{\tau}G\,d\tau\right|\right|\right|_{\infty,\bm{\beta},\bm{\mu}}\leq T\|G\|_{\infty,\beta_0,\mu_0}.$ \end{enumerate} \end{proposition} Similarly to Lemma \ref{a priori lemma for C BBGKY}, we obtain: \begin{lemma}\label{a priori lemma for C Boltzmann} Let $m\in\mathbb{N}$ and $\beta>0$.
For any $Z_m\in\mathbb{R}^{2dm}$ and $k\in\{1,2\}$, the following continuity estimate holds: \begin{equation} \left|\mathcal{C}_{m,m+k}^{\infty}g_{m+k}(Z_m)\right|\lesssim \beta^{-kd/2}\left(m\beta^{-1/2}+\sum_{i=1}^m|v_i|\right)e^{-\beta E_m(Z_m)}|g_{m+k}|_{\infty,\beta,m+k},\quad\forall g_{m+k}\in X_{\infty,\beta,m+k}.\label{cont estimate both boltzmann} \end{equation} \end{lemma} Let us now define mild solutions to the Boltzmann hierarchy: \begin{definition}\label{def mild solution boltzmann} Consider $T>0$, $\beta_0> 0$, $\mu_0\in\mathbb{R}$ and the decreasing functions $\bm{\beta},\bm{\mu}:[0,T]\to\mathbb{R}$ with $\bm{\beta}(0)=\beta_0$, $\bm{\beta}(T)> 0$, $\bm{\mu}(0)=\mu_0$. Consider also initial data $G_{0}=\left(g_{s,0}\right)_{s\in\mathbb{N}}\in X_{\infty,\beta_0,\mu_0}$. A map $\bm{G}=\left(g_{s}\right)_{s\in\mathbb{N}}\in\bm{X}_{\infty,\bm{\beta},\bm{\mu}}$ is a mild solution of the Boltzmann hierarchy in $[0,T]$, with initial data $G_0$, if it satisfies: \begin{equation*}\bm{G}(t)=\mathcal{S}^tG_{0}+\int_0^t \mathcal{S}^{t-\tau}\mathcal{C}_\infty\bm{G}(\tau)\,d\tau,\end{equation*} where, given $\beta>0$, $\mu\in\mathbb{R}$ and $G=(g_{s})_{s\in\mathbb{N}}\in X_{\infty,\beta,\mu}$, we write \begin{align*} \mathcal{C}_\infty G:=(\mathcal{C}_\infty^2 +\mathcal{C}_\infty^3) G,\quad \mathcal{C}_\infty^2 G:=\left(\mathcal{C}_{s,s+1}^\infty g_{s+1}\right)_{s\in\mathbb{N}},\quad \mathcal{C}_\infty^3 G:=\left(\mathcal{C}_{s,s+2}^\infty g_{s+2}\right)_{s\in\mathbb{N}}, \end{align*} and $\mathcal{S}^t$ is given by \eqref{S definition}. \end{definition} Using Lemma \ref{a priori lemma for C Boltzmann}, we obtain the following a-priori bounds: \begin{lemma}\label{a priori lemma for S boltzmann} Let $\beta_0> 0$, $\mu_0\in\mathbb{R}$, $T>0$ and $\lambda\in (0,\beta_0/T)$. Consider the functions $\bm{\beta}_\lambda,\bm{\mu}_\lambda:[0,T]\to\mathbb{R}$ given by \eqref{beta_lambda-mu_lambda}.
Then for any $\mathcal{F}(t)\subseteq [0,t]$ measurable, $\bm{G}=\left(g_{s}\right)_{s\in\mathbb{N}}\in\bm{X}_{\infty,\bm{\beta}_\lambda,\bm{\mu}_\lambda}$ and $k\in\{1,2\}$, the following bound holds: \begin{align} \left|\left|\left|\displaystyle\int_{\mathcal{F}(t)}\mathcal{S}^{t-\tau}\mathcal{C}_\infty^{k+1}\bm{G}(\tau)\,d\tau\right|\right|\right|_{\infty,\bm{\beta}_\lambda,\bm{\mu}_\lambda}&\leq C_{k+1}|||\bm{G}|||_{\infty,\bm{\beta}_\lambda,\bm{\mu}_\lambda},\label{both boltzmann with constant} \end{align} where the constant $C_{k+1}=C_{k+1}(d,\beta_0,\mu_0,T,\lambda)$ is given by \eqref{constant of WP binary}. \end{lemma} Choosing $\lambda=\beta_0/2T$, Lemma \ref{a priori lemma for S boltzmann} directly implies well-posedness of the Boltzmann hierarchy for short times. \begin{theorem}\label{well posedness boltzmann} Let $\beta_0> 0$ and $\mu_0\in\mathbb{R}$. Then there is $T=T(d,\beta_0,\mu_0)>0$ such that for any initial datum $F_{0}=(f_{0}^{(s)})_{s\in\mathbb{N}}\in X_{\infty,\beta_0,\mu_0}$ there is a unique mild solution $\bm{F}=(f^{(s)})_{s\in\mathbb{N}}\in\bm{X}_{\infty,\bm{\beta},\bm{\mu}}$ to the Boltzmann hierarchy in $[0,T]$ for the functions $\bm{\beta},\bm{\mu}:[0,T]\to\mathbb{R}$ given by \eqref{beta mu given lambda}. The solution $\bm{F}$ satisfies the bound: \begin{equation} \label{a priori bound F_0 Boltzmann}|||\bm{F}|||_{\infty,\bm{\beta},\bm{\mu}}\leq 2\|F_{0}\|_{\infty,\beta_0,\mu_0}. \end{equation} Moreover, for any $\mathcal{F}(t)\subseteq[0,t]$ measurable and $k\in\{1,2\}$, the following bound holds: \begin{align} \label{a priori binary bound F Boltzmann}\left|\left|\left|\int_{\mathcal{F}(t)}\mathcal{S}^{t-\tau}\mathcal{C}_\infty^{k+1}\bm{G}(\tau)\,d\tau\right|\right|\right|_{\infty,\bm{\beta},\bm{\mu}}&\leq\frac{1}{16}|||\bm{G}|||_{\infty,\bm{\beta},\bm{\mu}},\quad\forall \bm{G}\in\bm{X}_{\infty,\bm{\beta},\bm{\mu}}, \end{align} and the time $T$ is explicitly given by \eqref{time}.
\end{theorem} \subsection{LWP for the binary-ternary Boltzmann equation and propagation of chaos} Now, we show local well-posedness for the binary-ternary Boltzmann equation, and that, for chaotic initial data, the tensorized powers of its solution produce the unique mild solution of the Boltzmann hierarchy. Uniqueness therefore implies that the mild solution to the Boltzmann hierarchy remains factorized under time evolution, hence chaos is propagated in time. For $\beta>0$ and $\mu\in\mathbb{R}$, let us define the Banach space \begin{equation*} X_{\beta,\mu}:=\left\{g\in C^0(\mathbb{R}^{2d}):|g|_{\beta,\mu}<\infty\right\}, \end{equation*} with norm $ |g|_{\beta,\mu}=\sup_{(x,v)\in\mathbb{R}^{2d}} |g(x,v)|e^{\mu+\frac{\beta}{2} |v|^2}. $ Notice that for any $t\in[0,T]$, the map $S_1^t:X_{\beta,\mu}\to X_{\beta,\mu}$ is an isometry; indeed, the free flow \eqref{free flow operator} preserves velocities and acts by translation in space, so $|S_1^tg|_{\beta,\mu}=|g|_{\beta,\mu}$ for all $g\in X_{\beta,\mu}$. Consider $\beta_0>0$, $\mu_0\in\mathbb{R}$, $T>0$ and $\bm{\beta},\bm{\mu}:[0,T]\to\mathbb{R}$ decreasing functions of time with $\bm{\beta}(0)=\beta_0$, $\bm{\beta}(T)>0$ and $\bm{\mu}(0)=\mu_0$. We define the Banach space \begin{equation*} \bm{X}_{\bm{\beta},\bm{\mu}}:=L^\infty\left([0,T],X_{\bm{\beta}(t),\bm{\mu}(t)}\right), \end{equation*} with norm $ \|\bm{g}\|_{\bm{\beta},\bm{\mu}}=\sup_{t\in[0,T]}|\bm{g}(t)|_{\bm{\beta}(t),\bm{\mu}(t)}. $ One can see that the following estimate holds: \begin{remark}\label{remark for initial equation} Let $T>0$, $\beta_0>0$, $\mu_0\in\mathbb{R}$ and $\bm{\beta},\bm{\mu}:[0,T]\to\mathbb{R}$ decreasing functions with $\beta_0=\bm{\beta}(0)$, $\bm{\beta}(T)> 0$ and $\mu_0=\bm{\mu}(0)$. Then for any $g\in X_{\beta_0,\mu_0}$, the following estimate holds: \begin{equation*} \|g\|_{\bm{\beta},\bm{\mu}}\leq |g|_{\beta_0,\mu_0}. \end{equation*} \end{remark} To prove LWP for the binary-ternary Boltzmann equation \eqref{Boltzmann equation}, we will need certain continuity estimates on the binary and ternary collisional operators. The binary estimate we provide below is the bilinear analogue of Proposition 5.3.2 in \cite{gallagher}.
For the ternary operator, continuity estimates have been derived in \cite{thesis}, Lemma 6.10. Combining these results, we derive continuity estimates for the binary-ternary collisional operator $Q_2+Q_3$: \begin{lemma}\label{continuity boltzmann lemma} Let $\beta>0$, $\mu\in\mathbb{R}$. Then for any $g,h\in X_{\beta,\mu}$ and $(x,v)\in\mathbb{R}^{2d}$, the following nonlinear continuity estimate holds: \begin{align*} \big|&\left[Q_2(g,g)+Q_3(g,g,g)\right](x,v)-\left[Q_2(h,h)+Q_3(h,h,h)\right](x,v)\big|\\ &\lesssim \left(e^{-2\mu}\beta^{-d/2}+e^{-3\mu}\beta^{-d}\right)\left(\beta^{-1/2}+|v|\right)e^{-\frac{\beta}{2}|v|^2}\left(|g|_{\beta,\mu}+|h|_{\beta,\mu}\right)(1+|g|_{\beta,\mu}+|h|_{\beta,\mu})|g-h|_{\beta,\mu}. \end{align*} \end{lemma} We define mild solutions to the binary-ternary Boltzmann equation \eqref{Boltzmann equation} as follows: \begin{definition} Consider $T>0$, $\beta_0> 0$, $\mu_0\in\mathbb{R}$ and $\bm{\beta},\bm{\mu}:[0,T]\to\mathbb{R}$ decreasing functions of time, with $\bm{\beta}(0)=\beta_0$, $\bm{\beta}(T)> 0$, $\bm{\mu}(0)=\mu_0$. Consider also initial data $g_{0}\in X_{\beta_0,\mu_0}$. A map $\bm{g}\in\bm{X}_{\bm{\beta},\bm{\mu}}$ is a mild solution to the binary-ternary Boltzmann equation \eqref{Boltzmann equation} in $[0,T]$, with initial data $g_0$, if it satisfies \begin{equation}\label{mild boltzmann equation} \bm{g}(t)=S_1^tg_0+\int_0^tS_1^{t-\tau}\left[Q_2(\bm{g},\bm{g})+Q_3(\bm{g},\bm{g},\bm{g})\right](\tau)\,d\tau, \end{equation} where $S_1^t$ denotes the free flow of one particle given in \eqref{free flow operator}. \end{definition} An argument similar to the proof of Lemma \ref{a priori lemma for T BBGKY} gives the following: \begin{lemma}\label{Lemma for integral boltzmann} Let $\beta_0> 0$, $\mu_0\in\mathbb{R}$, $T>0$ and $\lambda\in (0,\beta_0/T)$. Consider the functions $\bm{\beta}_\lambda,\bm{\mu}_\lambda:[0,T]\to\mathbb{R}$ given by \eqref{beta_lambda-mu_lambda}.
Then for any $\bm{g},\bm{h}\in\bm{X}_{\bm{\beta}_\lambda,\bm{\mu}_\lambda}$, the following bound holds: \begin{equation*} \begin{aligned} &\left\|\int_0^tS_1^{t-\tau}\left[Q_2(\bm{g},\bm{g})-Q_2(\bm{h},\bm{h})+Q_3(\bm{g},\bm{g},\bm{g})-Q_3(\bm{h},\bm{h},\bm{h})\right](\tau)\,d\tau\right\|_{\bm{\beta}_\lambda,\bm{\mu}_\lambda} \\ &\hspace{1cm}\leq C\left(\|\bm{g}\|_{\bm{\beta}_\lambda,\bm{\mu}_\lambda}+\|\bm{h}\|_{\bm{\beta}_\lambda,\bm{\mu}_\lambda}\right)\left(1+\|\bm{g}\|_{\bm{\beta}_\lambda,\bm{\mu}_\lambda}+\|\bm{h}\|_{\bm{\beta}_\lambda,\bm{\mu}_\lambda}\right) \|\bm{g}-\bm{h}\|_{\bm{\beta}_\lambda,\bm{\mu}_\lambda}, \end{aligned} \end{equation*} where $C=C(d,\beta_0,\mu_0,T,\lambda)=C_2+C_3$ and $C_2,C_3$ are given by \eqref{constant of WP binary} for $k=1,2$ respectively. \end{lemma} Choosing $\lambda=\beta_0/2T$, this estimate implies local well-posedness of the binary-ternary Boltzmann equation up to short times. Let us write $B_{\bm{X}_{\bm{\beta},\bm{\mu}}}$ for the unit ball of $\bm{X}_{\bm{\beta},\bm{\mu}}$. \begin{theorem}[LWP for the binary-ternary Boltzmann equation]\label{lwp boltz eq} Let $\beta_0> 0$ and $\mu_0\in\mathbb{R}$. Then there is $T=T(d,\beta_0,\mu_0)>0$ such that for any initial data $f_0\in X_{\beta_0,\mu_0}$, with $|f_0|_{\beta_0,\mu_0}\leq 1/2$, there is a unique mild solution $\bm{f}\in B_{\bm{X}_{\bm{\beta},\bm{\mu}}}$ to the binary-ternary Boltzmann equation in $[0,T]$ with initial data $f_0$, where $\bm{\beta},\bm{\mu}:[0,T]\to\mathbb{R}$ are the functions given by \eqref{beta mu given lambda}. The solution $\bm{f}$ satisfies the bound: \begin{equation}\label{bound on initial data boltzmann equ} \|\bm{f}\|_{\bm{\beta},\bm{\mu}}\leq 4|f_0|_{\beta_0,\mu_0}.
\end{equation} Moreover, for any $\bm{g,h}\in\bm{X}_{\bm{\beta},\bm{\mu}}$, the following estimate holds: \begin{align} &\left\|\int_0^tS_1^{t-\tau}\left[Q_2(\bm{g},\bm{g})-Q_2(\bm{h},\bm{h})+Q_3(\bm{g},\bm{g},\bm{g})-Q_3(\bm{h},\bm{h},\bm{h})\right](\tau)\,d\tau\right\|_{\bm{\beta},\bm{\mu}}\nonumber \\ &\hspace{1cm}\leq \frac{1}{8}\left(\|\bm{g}\|_{\bm{\beta},\bm{\mu}}+\|\bm{h}\|_{\bm{\beta},\bm{\mu}}\right)\left(1+\|\bm{g}\|_{\bm{\beta},\bm{\mu}}+\|\bm{h}\|_{\bm{\beta},\bm{\mu}}\right) \|\bm{g}-\bm{h}\|_{\bm{\beta},\bm{\mu}}.\label{a-priori BE 1} \end{align} The time $T$ is explicitly given by \eqref{time}. \end{theorem} \begin{proof} Choosing $T$ as in \eqref{time}, we obtain $C(d,\beta_0,\mu_0,T,\beta_0/2T)=1/8.$ Thus, Lemma \ref{Lemma for integral boltzmann} implies estimate \eqref{a-priori BE 1}. Therefore, for any $\bm{g}\in B_{\bm{X}_{\bm{\beta},\bm{\mu}}}$, using \eqref{a-priori BE 1} for $\bm{h}=0$, we obtain \begin{equation}\label{inequality for cubic boltzmann 1} \left\|\int_0^tS_1^{t-\tau}\left[Q_2(\bm{g},\bm{g})+Q_3(\bm{g},\bm{g},\bm{g})\right](\tau)\,d\tau\right\|_{\bm{\beta},\bm{\mu}}\leq \frac{1}{8}(1+\|\bm{g}\|_{\bm{\beta},\bm{\mu}})\|\bm{g}\|^2_{\bm{\beta},\bm{\mu}}\leq \frac{1}{4}\|\bm{g}\|_{\bm{\beta},\bm{\mu}}. \end{equation} Let us define the nonlinear operator $\mathcal{L}:\bm{X}_{\bm{\beta},\bm{\mu}}\to\bm{X}_{\bm{\beta},\bm{\mu}}$ by \begin{equation*} \mathcal{L}\bm{g}(t)=S_1^tf_0+\int_0^tS_1^{t-\tau}\left[Q_2(\bm{g},\bm{g})+Q_3(\bm{g},\bm{g},\bm{g})\right](\tau)\,d\tau.
\end{equation*} By the triangle inequality, the fact that the free flow is isometric, Remark \ref{remark for initial equation}, bound \eqref{inequality for cubic boltzmann 1} and the assumption $|f_0|_{\beta_0,\mu_0}\leq 1/2$, for any $\bm{g}\in B_{\bm{X}_{\bm{\beta},\bm{\mu}}}$ and $t\in[0,T]$, we have \begin{equation*} \begin{aligned} |\mathcal{L}\bm{g}|_{\bm{\beta}(t),\bm{\mu}(t)}\leq |S_1^tf_0|_{\bm{\beta}(t),\bm{\mu}(t)}+\frac{1}{4}\|\bm{g}\|_{\bm{\beta},\bm{\mu}}=|f_0|_{\bm{\beta}(t),\bm{\mu}(t)}+\frac{1}{4}\|\bm{g}\|_{\bm{\beta},\bm{\mu}}\leq |f_0|_{\beta_0,\mu_0}+\frac{1}{4}\|\bm{g}\|_{\bm{\beta},\bm{\mu}}\leq \frac{1}{2}+\frac{1}{4}=\frac{3}{4}, \end{aligned} \end{equation*} thus $\mathcal{L}:B_{\bm{X}_{\bm{\beta},\bm{\mu}}}\to B_{\bm{X}_{\bm{\beta},\bm{\mu}}}.$ Moreover, for any $\bm{g},\bm{h}\in B_{\bm{X}_{\bm{\beta},\bm{\mu}}}$, using \eqref{a-priori BE 1}, we obtain \begin{equation}\label{pre-triang-bolt} \begin{aligned} \left\|\mathcal{L}\bm{g}-\mathcal{L}\bm{h}\right\|_{\bm{\beta},\bm{\mu}} &\leq \frac{1}{8}\left(\|\bm{g}\|_{\bm{\beta},\bm{\mu}}+\|\bm{h}\|_{\bm{\beta},\bm{\mu}}\right)\left(1+\|\bm{g}\|_{\bm{\beta},\bm{\mu}}+\|\bm{h}\|_{\bm{\beta},\bm{\mu}}\right)\|\bm{g}-\bm{h}\|_{\bm{\beta},\bm{\mu}}\leq\frac{3}{4}\|\bm{g}-\bm{h}\|_{\bm{\beta},\bm{\mu}}. \end{aligned} \end{equation} Therefore, the operator $\mathcal{L}:B_{\bm{X}_{\bm{\beta},\bm{\mu}}}\to B_{\bm{X}_{\bm{\beta},\bm{\mu}}}$ is a contraction, so it has a unique fixed point $\bm{f}\in B_{\bm{X}_{\bm{\beta},\bm{\mu}}}$, which is clearly the unique mild solution of the binary-ternary Boltzmann equation in $[0,T]$ with initial data $f_0$. To prove \eqref{bound on initial data boltzmann equ}, we use the fact that $\bm{f}=\mathcal{L}\bm{f}$.
Then, for any $t\in[0,T]$, the triangle inequality, the definition of $\mathcal{L}$, estimate \eqref{pre-triang-bolt} (for $\bm{g}=\bm{f}$ and $\bm{h}=0$), the fact that the free flow is isometric, and Remark \ref{remark for initial equation} yield \begin{align*} |\bm{f}|_{\bm{\beta}(t),\bm{\mu}(t)}=|\mathcal{L}\bm{f}|_{\bm{\beta}(t),\bm{\mu}(t)}&\leq |\mathcal{L}\bm{0}|_{\bm{\beta}(t),\bm{\mu}(t)}+|\mathcal{L}\bm{f}-\mathcal{L}\bm{0}|_{\bm{\beta}(t),\bm{\mu}(t)} \leq|S_1^tf_0|_{\bm{\beta}(t),\bm{\mu}(t)}+\frac{3}{4}\|\bm{f}\|_{\bm{\beta},\bm{\mu}}\\ &=|f_0|_{\bm{\beta}(t),\bm{\mu}(t)}+\frac{3}{4}\|\bm{f}\|_{\bm{\beta},\bm{\mu}} \leq |f_0|_{\beta_0,\mu_0}+\frac{3}{4}\|\bm{f}\|_{\bm{\beta},\bm{\mu}}, \end{align*} thus $\|\bm{f}\|_{\bm{\beta},\bm{\mu}}\leq |f_0|_{\beta_0,\mu_0}+\displaystyle\frac{3}{4}\|\bm{f}\|_{\bm{\beta},\bm{\mu}},$ and \eqref{bound on initial data boltzmann equ} follows. \end{proof} We can now prove that chaos is propagated by the Boltzmann hierarchy. \begin{theorem}[Propagation of chaos]\label{theorem propagation of chaos} Let $\beta_0>0$, $\mu_0\in\mathbb{R}$, $T>0$ be the time given in \eqref{time}, and $\bm{\beta},\bm{\mu}:[0,T]\to\mathbb{R}$ the functions defined by \eqref{beta mu given lambda}. Consider $f_0\in X_{\beta_0,\mu_0}$ with $|f_0|_{\beta_0,\mu_0}\leq 1/2$. Assume $\bm{f}\in B_{\bm{X}_{\bm{\beta},\bm{\mu}}}$ is the corresponding mild solution of the binary-ternary Boltzmann equation in $[0,T]$, with initial data $f_0$, given by Theorem \ref{lwp boltz eq}. Then the following hold: \begin{enumerate}[(i)] \item $F_0=(f_0^{\otimes s})_{s\in\mathbb{N}}\in X_{\infty,\beta_0,\mu_0}$. \item $\bm{F}=(\bm{f}^{\otimes s})_{s\in\mathbb{N}}\in\bm{X}_{\infty,\bm{\beta},\bm{\mu}}$. \item $\bm{F}$ is the unique mild solution of the Boltzmann hierarchy in $[0,T]$, with initial data $F_0$.
\end{enumerate} \end{theorem} \begin{proof} \textit{(i)} is trivially verified by the assumption $|f_0|_{\beta_0,\mu_0}\leq 1/2$ and the definition of the norms. By the same assumption, we may apply Theorem \ref{lwp boltz eq} to obtain the unique mild solution $\bm{f}\in B_{\bm{X}_{\bm{\beta},\bm{\mu}}}$ of the corresponding binary-ternary Boltzmann equation. Since $\|\bm{f}\|_{\bm{\beta},\bm{\mu}}\leq 1$, the definition of the norms directly implies \textit{(ii)}. It is also straightforward to verify that $\bm{F}$ is a mild solution of the Boltzmann hierarchy in $[0,T]$, with initial data $F_0$. Uniqueness of the mild solution to the Boltzmann hierarchy, obtained by Theorem \ref{well posedness boltzmann}, implies that $\bm{F}$ is the unique mild solution. \end{proof} \section{Convergence Statement}\label{sec:convergence} \label{sec_conv statement} In this section we define an appropriate notion of convergence, namely convergence in observables, and we state the main result of this paper. \subsection{Approximation of Boltzmann hierarchy initial data}\label{subseq:approximation} Here, we approximate Boltzmann hierarchy initial data by BBGKY hierarchy initial data. Let us first introduce some notation that we will use from now on. Given $\theta>0$, we introduce the set of well-separated spatial configurations as follows:\\ For $m\in\mathbb{N}$, we define \begin{equation}\label{separated space data} \Delta_m^X(\theta):=\left\{\widetilde{X}_m\in\mathbb{R}^{dm}:|\widetilde{x}_i-\widetilde{x}_j|>\theta,\quad\forall 1\leq i<j\leq m\right\},\quad m\geq 2,\quad\Delta_1^X(\theta):=\mathbb{R}^{d}. \end{equation} For $m\in\mathbb{N}$, we also define the set of well-separated configurations as: \begin{equation}\label{separated data} \Delta_m(\theta):=\Delta_m^X(\theta)\times\mathbb{R}^{dm}=\left\{(\widetilde{X}_m,\widetilde{V}_m)\in\mathbb{R}^{2dm}:|\widetilde{x}_i-\widetilde{x}_j|>\theta,\quad\forall 1\leq i<j\leq m\right\}.
\end{equation} Recall that we consider $(N,\epsilon_2,\epsilon_3)$ in the scaling \begin{equation}\label{scaling admisib} N\epsilon_2^{d-1}\simeq N\epsilon_3^{d-\frac{1}{2}}\simeq 1.\end{equation} Let us write $\epsilon_{2,N}$, $\epsilon_{3,N}$ for the $\epsilon_2,\epsilon_3$ associated to $N$ under \eqref{scaling admisib}. By Remark \ref{remark for epsilons}, for $N$ large enough, we have $ 0<\epsilon_{2,N}\ll\epsilon_{3,N}\overset{N\to\infty}\longrightarrow 0$; indeed, \eqref{scaling admisib} yields $\epsilon_{2,N}\simeq N^{-\frac{1}{d-1}}$ and $\epsilon_{3,N}\simeq N^{-\frac{2}{2d-1}}$, and $\frac{1}{d-1}>\frac{2}{2d-1}$ for every $d\geq 2$. We define the following approximating sequence: \begin{definition}Let $\beta>0$, $\mu\in\mathbb{R}$ and $G=(g_s)_{s\in\mathbb{N}}\in X_{\infty,\beta,\mu}$. We define \begin{equation}\label{approximating sequence} G_N=(g_{N,s})_{s\in\mathbb{N}},\quad\text{where}\quad g_{N,s}=\mathds{1}_{\Delta_s(\epsilon_{3,N})} g_s. \end{equation} The sequence $(G_N)_{N\in\mathbb{N}}$ is called the approximating BBGKY hierarchy sequence of $G$. \end{definition} Similarly to Proposition 7.2 in \cite{thesis}, one obtains the following approximation property: \begin{proposition}\label{approximation proposition} Let $\beta>0$, $\mu\in\mathbb{R}$, $G=(g_s)_{s\in\mathbb{N}}\in X_{\infty,\beta,\mu}$ and $(G_N)_{N\in\mathbb{N}}$ the approximating BBGKY hierarchy sequence of $G$. Then the following hold: \begin{enumerate}[(i)] \item $G_N\in X_{N,\beta,\mu}$ for all $N\in\mathbb{N}$. In particular, \begin{equation}\label{uniform bound on approximating sequene} \sup_{N\in\mathbb{N}}\|G_N\|_{N,\beta,\mu}\leq \|G\|_{\infty,\beta,\mu}. \end{equation}\vspace{0.2cm} \item For any $s\in\mathbb{N}$ and $\theta>0$, we have \begin{equation}\label{initial convergence to 0} \lim_{N\to\infty}\|g_{N,s}-g_s\|_{L^\infty\left(\Delta_s\left(\theta\right)\right)}= 0. \end{equation} \end{enumerate} \end{proposition} \subsection{Convergence in observables}\label{subsec:con in observables} Here, we define the convergence in observables. Let us first introduce some notation.
Given $s\in\mathbb{N}$, we define the space of test functions \begin{equation}\label{test functions} C_c(\mathbb{R}^{ds})=\left\{\phi_s:\mathbb{R}^{ds}\to\mathbb{R}:\phi_s\text{ is continuous and compactly supported}\right\}. \end{equation} \begin{definition} Consider $T>0$, $s\in\mathbb{N}$ and $g_s\in L^\infty\left([0,T],L^\infty\left(\mathbb{R}^{2ds}\right)\right)$. Given a test function $\phi_s\in C_c(\mathbb{R}^{ds})$, we define the $s$-observable functional as \begin{equation*} I_{\phi_s}g_s(t)(X_s)=\int_{\mathbb{R}^{ds}}\phi_s(V_s)g_s(t,X_s,V_s)\,dV_s. \end{equation*} \end{definition} Recalling the set of well-separated spatial configurations $\Delta_s^X(\theta)$ from \eqref{separated space data}, we give the definition of the convergence in observables: \begin{definition} Let $T>0$. For each $N\in\mathbb{N}$, consider $\bm{G_N}=(g_{N,s})_{s\in\mathbb{N}}\in \prod_{s=1}^\infty L^\infty\left([0,T],L^\infty\left(\mathbb{R}^{2ds}\right)\right)$ and $\bm{G}=(g_s)_{s\in\mathbb{N}}\in \prod_{s=1}^\infty L^\infty\left([0,T],L^\infty\left(\mathbb{R}^{2ds}\right)\right)$. We say that the sequence $(\bm{G_N})_{N\in\mathbb{N}}$ converges in observables to $\bm{G}$ if for any $s\in\mathbb{N}$, $\theta>0$ and $\phi_s\in C_c(\mathbb{R}^{ds})$, we have \begin{equation*} \lim_{N\to\infty}\|I_{\phi_s}g_{N,s}(t)-I_{\phi_s}g_s(t)\|_{L^\infty\left(\Delta_s^X\left(\theta\right)\right)}=0,\quad\text{uniformly in }[0,T]. \end{equation*} \end{definition} \subsection{Statement of the main result} We are now in a position to state our main result. The rest of the paper will be devoted to its proof. \begin{theorem}[Convergence]\label{convergence theorem} Let $\beta_0> 0$, $\mu_0\in\mathbb{R}$ and $T=T(d,\beta_0,\mu_0)>0$ be given by \eqref{time}. Consider some initial Boltzmann hierarchy data $F_0=(f_0^{(s)})_{s\in\mathbb{N}}\in X_{\infty,\beta_0,\mu_0}$ with approximating BBGKY hierarchy sequence $\left(F_{N,0}\right)_{N\in\mathbb{N}}$.
Assume that \begin{itemize} \item for each $N$, $\bm{F_N}\in\bm{X}_{N,\bm{\beta},\bm{\mu}}$ is the mild solution (given by Theorem \ref{well posedness BBGKY}) of the BBGKY hierarchy in $[0,T]$ with initial data $F_{N,0}$. \item $\bm{F}\in\bm{X}_{\infty,\bm{\beta},\bm{\mu}}$ is the mild solution (given by Theorem \ref{well posedness boltzmann}) of the Boltzmann hierarchy in $[0,T]$ with initial data $F_0$. \item $F_0$ satisfies the following uniform continuity growth condition: There is a constant $C>0$ such that, for any $\zeta>0$, there is $q=q(\zeta)>0$ such that for all $s\in\mathbb{N}$, and for all $Z_s,Z_s'\in\mathbb{R}^{2ds}$ with $|Z_s-Z_s'|<q$, we have \begin{equation}\label{continuity assumption} |f_0^{(s)}(Z_s)-f_0^{(s)}(Z_s')|<C^{s-1}\zeta. \end{equation} \end{itemize} Then, $\bm{F_N}$ converges in observables to $\bm{F}$. \end{theorem} \begin{remark} By the definition of convergence in observables, proving Theorem \ref{convergence theorem} is equivalent to proving that for any $s\in\mathbb{N}$, $\phi_s\in C_c(\mathbb{R}^{ds})$ and $\theta>0$ we have \begin{equation*} \lim_{N\to\infty}\|I_s^N(t)-I_s^\infty(t)\|_{L^\infty\left(\Delta_s^X\left(\theta\right)\right)}=0,\quad\text{uniformly in }[0,T], \end{equation*} where \begin{equation}\label{def-I-Ns} I_s^N(t)(X_s):=I_{\phi_s}f_N^{(s)}(t)(X_s)=\int_{\mathbb{R}^{ds}}\phi_s(V_s)f_N^{(s)}(t,X_s,V_s)\,dV_s, \end{equation} \begin{equation}\label{def-I-s} I_s^\infty(t)(X_s):=I_{\phi_s}f^{(s)}(t)(X_s)=\int_{\mathbb{R}^{ds}}\phi_s(V_s)f^{(s)}(t,X_s,V_s)\,dV_s. \end{equation} \end{remark} We also obtain the following Corollary\footnote{which can be proved in a similar way as Corollary 7.5 in \cite{thesis}} of Theorem \ref{convergence theorem}. \begin{corollary} Let $\beta_0>0$, $\mu_0\in\mathbb{R}$ and $f_0\in X_{\beta_0,\mu_0}$, with $|f_0|_{\beta_0,\mu_0}\leq 1/2$. Assume as well that $f_0$ is uniformly continuous.
Then for any $s\in\mathbb{N}$, $\phi_s\in C_c(\mathbb{R}^{ds})$ and $\theta>0$, the following convergence holds: \begin{equation}\label{derivation} \lim_{N\to\infty}\|I_{\phi_s}f^{\otimes s}\mathds{1}_{\Delta_s(\epsilon_{3,N})}-I_{\phi_s}f^{\otimes s}\|_{L^\infty(\Delta_s(\theta))}=0, \end{equation} where $f$ is the mild solution to the binary-ternary Boltzmann equation in $[0,T]$, with initial data $f_0$, given by Theorem \ref{lwp boltz eq} and $T$ is given by \eqref{time}. \end{corollary} \begin{comment} {\color{red} \begin{proof} It is enough to show that $F_0=(f_0^{\otimes s})_{s\in\mathbb{N}}$ satisfies \eqref{continuity assumption}, and claim \eqref{derivation} follows by Theorem \ref{theorem propagation of chaos} and Theorem \ref{convergence theorem}. Fix $\zeta>0$. Since $f_0$ is uniformly continuous, there is $q=q(\zeta)>0$ such that for any $z=(x,v),z'=(x',v')\in\mathbb{R}^{2d}$ with $|z-z'|<q$, we have \begin{equation}\label{uniform continuity of f_0} |f_0^{\otimes 1}(z)-f_0^{\otimes 1}(z')|=|f_0(z)-f_0(z')|<\zeta. \end{equation} Consider $C>2\|f_0\|_{L^\infty}$. It suffices to prove the following claim: \textbf{Claim}: For any $s\in\mathbb{N}$, $\ell\in\{1,...,s\}$ and $|Z_\ell-Z_\ell'|<q$ there holds: \begin{equation}\label{claim propagation} |f_0^{\otimes\ell}(Z_\ell)-f_0^{\otimes\ell}(Z_\ell')|< C^{\ell-1}\zeta. \end{equation} \textit{Proof of the claim}: Fix $s\in\mathbb{N}$. We prove that claim \eqref{claim propagation} holds for $\ell\in\{1,...,s\}$. We will use induction on $\ell\in\{1,...,s\}$. \begin{itemize} \item $\ell=1$: Claim \eqref{claim propagation} comes directly from \eqref{uniform continuity of f_0}, since $f_0^{\otimes 1}=f_0$. \item Assume claim \eqref{claim propagation} holds for $\ell\in\{1,...,s-1\}$ i.e. for each $Z_{\ell},Z_{\ell}'\in\mathbb{R}^{2d\ell}$, with $|Z_\ell-Z_\ell'|<q$, there holds: \begin{equation}\label{induction propagation} |f_0^{\otimes\ell}(Z_\ell)-f_0^{\otimes\ell}(Z_\ell')|<C^{\ell-1}\zeta. 
\end{equation} We will show \eqref{claim propagation} holds for $\ell+1\in\{2,...,s\}$. Consider $Z_{\ell+1},Z_{\ell+1}'\in\mathbb{R}^{2d(\ell+1)}$, with \begin{equation}\label{proximity of Z} |Z_{\ell+1}-Z_{\ell+1}'|<q \end{equation} Let us write $Z_{\ell+1}=(X_\ell,x_{\ell+1},V_\ell,v_{\ell+1})$, $Z_{\ell+1}'=(X_\ell',x_{\ell+1}',V_\ell',v_{\ell+1}')$, where $Z_\ell=(X_\ell,V_\ell), Z_\ell'=(X_\ell',V_\ell')\in\mathbb{R}^{2d\ell}$. By \eqref{proximity of Z}, we have $|Z_\ell-Z_\ell'|<q$ and $|z_{\ell+1}-z_{\ell+1}'|<q$, where $z_{\ell+1}=(x_{\ell+1},v_{\ell+1})$, $z_{\ell+1}'=(x_{\ell+1}',v_{\ell+1}')$. Therefore \eqref{uniform continuity of f_0}, \eqref{induction propagation} and the fact that $C>2\|f_0\|_{L^\infty}$ imply \begin{align*} |f_0^{\otimes (\ell+1)}(Z_{\ell+1})-f_0^{\otimes(\ell+1)}(Z_{\ell+1}')|&=|f_0^{\otimes\ell}(Z_\ell)f_0(z_{\ell+1})-f_0^{\otimes\ell}(Z_\ell')f_0(z_{\ell+1}')|\\ &\leq |f_0(z_{\ell+1})||f_0^{\otimes\ell}(Z_\ell)-f_0^{\otimes\ell}(Z_\ell')|+|f_0^{\otimes\ell}(Z_\ell')||f_0(z_{\ell+1})-f_0(z_{\ell+1}')|\\ &\leq \|f_0\|_{L^\infty}C^{\ell-1}\zeta+\|f_0^{\otimes\ell}\|_{L^\infty}\zeta\\ &\leq \|f_0\|_{L^\infty}C^{\ell-1}\zeta+\|f_0\|_{L^\infty}^{\ell}\zeta\\ &\leq \frac{1}{2}C^\ell\zeta+\frac{1}{2^\ell}C^\ell\zeta\\ &\leq C^\ell\zeta. \end{align*} \end{itemize} Claim \eqref{claim propagation} is proved, and the result follows. \end{proof} } \end{comment} In order to prove Theorem \ref{convergence theorem}, we will first use the local estimates developed in Section \ref{sec:local} to reduce the proof to finitely many observables of bounded energy, which are also well separated in time. Then, we will develop some geometric estimates which will enable us to eliminate recollisions of the backwards $(\epsilon_2,\epsilon_3)$-flow. \section{Reduction to term by term convergence}\label{sec: series expansion} In this section we reduce the proof of Theorem \ref{convergence theorem} to term by term convergence after truncating the observables. 
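To orient the reader, we note in advance the elementary counting facts behind the expansion of this section (they are stated as \eqref{cardinality of S_k} and \eqref{bound on sigma} below): at order $k$, each summand of the iterated Duhamel expansion corresponds to a sequence $\sigma\in\{1,2\}^k$ recording whether each of the $k$ collisions is binary or ternary, so there are $2^k$ summands, and the summand indexed by $\sigma$ involves $s+\sum_{i=1}^k\sigma_i$ particles, a number between $s+k$ and $s+2k$. For instance, for $k=3$ and $\sigma=(1,2,1)$, the corresponding summand involves $s+1$, $s+3$ and $s+4$ particles after the first, second and third collision, respectively.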
Once the necessary combinatorial notation keeping track of all possible collision sequences is introduced, the idea of the truncation is essentially the same as in \cite{gallagher, thesis}, and it relies on the local estimates developed in Section \ref{sec:local}. For this reason, we illustrate the similarities by providing the proof of the first estimate and omitting the proofs of the rest of the estimates. Throughout this section, we consider $\beta_0>0$, $\mu_0\in\mathbb{R}$, the functions $\bm{\beta},\bm{\mu}:[0,T]\to\mathbb{R}$ defined by \eqref{beta mu given lambda}, $(N,\epsilon_2,\epsilon_3)$ in the scaling \eqref{scaling} and initial data $F_{N,0}\in X_{N,\beta_0,\mu_0}$, $F_0\in X_{\infty,\beta_0,\mu_0}$. Let $\bm{F_N}=(f_N^{(s)})_{s\in\mathbb{N}}\in\bm{X}_{N,\bm{\beta},\bm{\mu}}$, $\bm{F}=(f^{(s)})_{s\in\mathbb{N}}\in\bm{X}_{\infty,\bm{\beta},\bm{\mu}}$ be the mild solutions of the corresponding BBGKY and Boltzmann hierarchies, respectively, in $[0,T]$, given by Theorems \ref{well posedness BBGKY} and \ref{well posedness boltzmann}. Let us note that by \eqref{beta mu given lambda}, we obtain \begin{equation}\label{non dependence} \bm{\beta}(T)=\frac{\beta_0}{2},\quad\bm{\mu}(T)=\mu_0-\frac{\beta_0}{2}, \end{equation} thus $\bm{\beta}(T),\bm{\mu}(T)$ do not depend on $T$. For convenience, we introduce the following notation. Given $k\in\mathbb{N}$ and $t\geq 0$, we denote \begin{equation}\label{collision times} \mathcal{T}_k(t):=\left\{(t_1,...,t_k)\in\mathbb{R}^k:0\leq t_k\leq...\leq t_1\leq t\right\}. \end{equation} Since the collisions can be either binary or ternary, we introduce some additional notation to keep track of the collision sequences. In particular, given $k\geq 1$, we denote \begin{equation}\label{S_k} S_k:=\left\{\sigma=(\sigma_1,...,\sigma_k):\sigma_i\in\left\{1,2\right\},\quad\forall i=1,...,k\right\}.
\end{equation} Notice that the cardinality of $S_k$ is given by: \begin{equation}\label{cardinality of S_k} |S_k|=2^k,\quad\forall k\geq 1. \end{equation} Given $k\in\mathbb{N}$ and $\sigma\in S_k$, for any $1\leq \ell\leq k$ we write \begin{equation}\label{sigma tilde} \widetilde{\sigma}_\ell=\sum_{i=1}^\ell\sigma_i. \end{equation} We also write $ \widetilde{\sigma}_0:=0. $ Notice that \begin{equation}\label{bound on sigma} k\leq\widetilde{\sigma}_k\leq 2k,\quad\forall k\in\mathbb{N}. \end{equation} \subsection{Series expansion} Now, we make a series expansion for the mild solution $\bm{F_N}=(f_N^{(s)})_{s\in\mathbb{N}}$ of the BBGKY hierarchy with respect to the initial data $F_{N,0}$. By Definition \ref{def of mild bbgky}, for any $s\in\mathbb{N}$, we have Duhamel's formula: \begin{equation*} f^{(s)}_N(t)=T_s^tf^{(s)}_{N,0}+\int_0^tT_s^{t-t_1}\left[\mathcal{C}_{s,s+1}^{N}f^{(s+1)}_N+\mathcal{C}_{s,s+2}^{N}f^{(s+2)}_N\right](t_1)\,dt_1. \end{equation*} Let $n\in\mathbb{N}$. Iterating Duhamel's formula $n$ times, we obtain \begin{equation}\label{function plus remainder BBGKY} f^{(s)}_N(t)=\sum_{k=0}^nf^{(s,k)}_N(t)+R_{N}^{(s,n+1)}(t), \end{equation} where we use the notation: \begin{equation}\label{function expansion BBGKY} \begin{aligned} f^{(s,k)}_N(t)&:=\sum_{\sigma\in S_k}f^{(s,k,\sigma)}_N(t),\text{ for } 1\leq k\leq n,\quad f^{(s,0)}_N(t):=T_s^tf^{(s)}_{N,0}. \end{aligned} \end{equation} \begin{equation}\label{function expansion with indeces BBGKY} \begin{aligned} f^{(s,k,\sigma)}_N(t)=\int_{\mathcal{T}_k(t)}T_s^{t-t_1}\mathcal{C}_{s,s+\widetilde{\sigma}_1}^{N}T_{s+\widetilde{\sigma}_1}^{t_1-t_2}\mathcal{C}_{s+\widetilde{\sigma}_1,s+\widetilde{\sigma}_2}^{N}T_{s+\widetilde{\sigma}_2}^{t_2-t_3}...
T_{s+\widetilde{\sigma}_{k-1}}^{t_{k-1}-t_k}\mathcal{C}_{s+\widetilde{\sigma}_{k-1},s+\widetilde{\sigma}_k}^{N}T_{s+\widetilde{\sigma}_k}^{t_k}f_{N,0}^{(s+\widetilde{\sigma}_k)}\,dt_k...\,dt_1, \end{aligned} \end{equation} \begin{equation}\label{remainder BBGKY} \begin{aligned} R_N^{(s,n+1)}(t):=\sum_{\sigma\in S_{n+1}}R_N^{(s,n+1,\sigma)}(t), \end{aligned} \end{equation} \begin{equation}\label{remainder BBGKY with indeces} \begin{aligned} R_N^{(s,n+1,\sigma)}(t):=\int_{\mathcal{T}_{n+1}(t)}T_s^{t-t_1}\mathcal{C}_{s,s+\widetilde{\sigma}_1}^NT_{s+\widetilde{\sigma}_1}^{t_1-t_2}\mathcal{C}_{s+\widetilde{\sigma}_1,s+\widetilde{\sigma}_2}^NT_{s+\widetilde{\sigma}_2}^{t_2-t_3}...&\\ T_{s+\widetilde{\sigma}_{n-1}}^{t_{n-1}-t_n}\mathcal{C}_{s+\widetilde{\sigma}_{n-1},s+\widetilde{\sigma}_n}^NT_{s+\widetilde{\sigma}_n}^{t_n-t_{n+1}}\mathcal{C}_{s+\widetilde{\sigma}_n,s+\widetilde{\sigma}_{n+1}}^Nf^{(s+\widetilde{\sigma}_{n+1})}_N(t_{n+1})\,dt_{n+1}\,dt_n...\,dt_1. & \end{aligned} \end{equation} One can make a similar series expansion for the Boltzmann hierarchy. By Definition \ref{def mild solution boltzmann}, for any $s\in\mathbb{N}$, we have Duhamel's formula: \begin{equation*} f^{(s)}(t)=S_s^tf^{(s)}_{0}+\int_0^tS_s^{t-t_1}\left[\mathcal{C}_{s,s+1}^{\infty}f^{(s+1)}+\mathcal{C}_{s,s+2}^{\infty}f^{(s+2)}\right](t_1)\,dt_1. \end{equation*} Iterating Duhamel's formula $n$ times, we obtain \begin{equation}\label{function plus remainder Boltzmann} f^{(s)}(t)=\sum_{k=0}^nf^{(s,k)}(t)+R^{(s,n+1)}(t), \end{equation} where we use the notation: \begin{equation}\label{function expansion Boltzmann} f^{(s,k)}(t):=\sum_{\sigma\in S_k}f^{(s,k,\sigma)}(t),\text{ for } 1\leq k\leq n,\quad f^{(s,0)}(t):=S_s^tf^{(s)}_{0}.
\end{equation} \begin{equation}\label{function expansion Boltzmann with indeces} \begin{aligned} f^{(s,k,\sigma)}(t):=\int_{\mathcal{T}_k(t)}S_s^{t-t_1}\mathcal{C}_{s,s+\widetilde{\sigma}_1}^{\infty}S_{s+\widetilde{\sigma}_1}^{t_1-t_2}\mathcal{C}_{s+\widetilde{\sigma}_1,s+\widetilde{\sigma}_2}^{\infty}S_{s+\widetilde{\sigma}_2}^{t_2-t_3}... S_{s+\widetilde{\sigma}_{k-1}}^{t_{k-1}-t_k}\mathcal{C}_{s+\widetilde{\sigma}_{k-1},s+\widetilde{\sigma}_k}^{\infty}S_{s+\widetilde{\sigma}_k}^{t_k}f_{0}^{(s+\widetilde{\sigma}_k)}\,dt_k...\,dt_1, \end{aligned} \end{equation} \begin{equation}\label{remainder boltzmann} \begin{aligned} R^{(s,n+1)}(t):=\sum_{\sigma\in S_{n+1}}R^{(s,n+1,\sigma)}(t), \end{aligned} \end{equation} \begin{equation}\label{remainder boltzmann with indeces} \begin{aligned} R^{(s,n+1,\sigma)}(t):=\int_{\mathcal{T}_{n+1}(t)}S_s^{t-t_1}\mathcal{C}_{s,s+\widetilde{\sigma}_1}^\infty S_{s+\widetilde{\sigma}_1}^{t_1-t_2}\mathcal{C}_{s+\widetilde{\sigma}_1,s+\widetilde{\sigma}_2}^\infty S_{s+\widetilde{\sigma}_2}^{t_2-t_3}...&\\ S_{s+\widetilde{\sigma}_{n-1}}^{t_{n-1}-t_n}\mathcal{C}_{s+\widetilde{\sigma}_{n-1},s+\widetilde{\sigma}_n}^\infty S_{s+\widetilde{\sigma}_n}^{t_n-t_{n+1}}\mathcal{C}_{s+\widetilde{\sigma}_n,s+\widetilde{\sigma}_{n+1}}^\infty f^{(s+\widetilde{\sigma}_{n+1})}(t_{n+1})\,dt_{n+1}\,dt_n...\,dt_1. & \end{aligned} \end{equation} Given $\phi_s\in C_c(\mathbb{R}^{ds})$ and $k\in\mathbb{N}$, let us denote \begin{equation}\label{bbgky observ k} I_{s,k}^N(t)(X_s):= \int_{\mathbb{R}^{ds}}\phi_s(V_s)f_N^{(s,k)}(t,X_s,V_s)\,dV_s, \end{equation} \begin{equation}\label{boltz observ k} I_{s,k}^\infty(t)(X_s):= \int_{\mathbb{R}^{ds}}\phi_s(V_s)f^{(s,k)}(t,X_s,V_s)\,dV_s. 
\end{equation} We obtain the following estimates: \begin{lemma}\label{term by term} For any $s,n\in\mathbb{N}$ and $t\in [0,T]$, the following estimates hold: \begin{equation*}\|I_s^N(t)-\sum_{k=0}^nI_{s,k}^N(t)\|_{L^\infty_{X_s}}\leq C_{s,\beta_0,\mu_0} \|\phi_s\|_{L^\infty_{V_s}}4^{-n}\|F_{N,0}\|_{N,\beta_0,\mu_0}, \end{equation*} \begin{equation*}\|I_s^\infty(t)-\sum_{k=0}^nI_{s,k}^\infty(t)\|_{L^\infty_{X_s}}\leq C_{s,\beta_0,\mu_0} \|\phi_s\|_{L^\infty_{V_s}}4^{-n}\|F_{0}\|_{\infty,\beta_0,\mu_0}, \end{equation*} where the observables $I_s^N$, $I_s^\infty$ are defined in \eqref{def-I-Ns}-\eqref{def-I-s}. \end{lemma} \begin{proof} Fix $Z_s=(X_s,V_s)\in\mathbb{R}^{2ds}$, $t\in [0,T]$ and $\sigma\in S_{n+1}$. We repeatedly use estimate \eqref{a priori binary bound F_N} of Theorem \ref{well posedness BBGKY}, for $k=1$ if $\sigma_i=1$ or for $k=2$ if $\sigma_i=2$, to obtain \begin{equation*} e^{\bm{\beta}(t)E_s(Z_s)+s\bm{\mu}(t)}|R_N^{(s,n+1,\sigma)}(t,X_s,V_s)|\leq 8^{-(n+1)}|||\bm{F_N}|||_{N,\bm{\beta},\bm{\mu}}, \end{equation*} so, summing over all $\sigma\in S_{n+1}$, using \eqref{cardinality of S_k}, \eqref{a priori bound F_N,0} and the definition of the norms, we obtain \begin{equation*} \begin{aligned} |\phi_s(V_s)R_N^{(s,n+1)}(t,X_s,V_s)|&\lesssim 4^{-(n+1)}e^{-s\bm{\mu}(t)}\|\phi_s\|_{L^\infty_{V_s}}|||\bm{F_N}|||_{N,\bm{\beta},\bm{\mu}}e^{-\bm{\beta}(t)E_s(Z_s)}\\ &\leq 4^{-n}e^{-s\bm{\mu}(T)}\|\phi_s\|_{L^\infty_{V_s}}\|F_{N,0}\|_{N,\beta_0,\mu_0}e^{-\bm{\beta}(T)E_s(Z_s)}. \end{aligned} \end{equation*} Thus, integrating with respect to velocities and recalling \eqref{function plus remainder BBGKY}, \eqref{bbgky observ k}, \eqref{non dependence}, we obtain \begin{equation*} \begin{aligned} |I_s^N(t)(X_s)-\sum_{k=0}^nI_{s,k}^N(t)(X_s)|&\leq C_{s,\mu_0}\|\phi_s\|_{L^\infty_{V_s}}4^{-n}\|F_{N,0}\|_{N,\beta_0,\mu_0}\int_{\mathbb{R}^{ds}}e^{-\bm{\beta}(T)E_s(Z_s)}\,dV_s\\ &\leq C_{s,\beta_0,\mu_0} \|\phi_s\|_{L^\infty_{V_s}}4^{-n}\|F_{N,0}\|_{N,\beta_0,\mu_0}.
\end{aligned} \end{equation*} For the Boltzmann hierarchy, we follow a similar argument using estimates \eqref{a priori binary bound F Boltzmann} and \eqref{a priori bound F_0 Boltzmann} instead. \end{proof} \subsection{High energy truncation} We will now truncate energies, so that we can focus on bounded energy domains. Let us fix $s,n\in\mathbb{N}$ and $R>1$. As usual, we denote by $B_R^{2d}$ the $2d$-ball of radius $R$ centered at the origin. We first define the truncated BBGKY hierarchy and Boltzmann hierarchy collisional operators. For $\ell\in\mathbb{N}$ we define \begin{equation}\label{velocity truncation of operators} \begin{aligned} \mathcal{C}_{\ell,\ell+1}^{N,R}g_{\ell+1}&:=\mathcal{C}_{\ell,\ell+1}^N(g_{\ell+1}\mathds{1}_{[E_{\ell+1}\leq R^2]}),\quad \mathcal{C}_{\ell,\ell+2}^{N,R}g_{\ell+2}&:=\mathcal{C}_{\ell,\ell+2}^N(g_{\ell+2}\mathds{1}_{[E_{\ell+2}\leq R^2]}),\\ \mathcal{C}_{\ell,\ell+1}^{\infty,R} g_{\ell+1}&:=\mathcal{C}_{\ell,\ell+1}^\infty (g_{\ell+1}\mathds{1}_{[E_{\ell+1}\leq R^2]}),\quad\mathcal{C}_{\ell,\ell+2}^{\infty,R} g_{\ell+2}&:=\mathcal{C}_{\ell,\ell+2}^\infty (g_{\ell+2}\mathds{1}_{[E_{\ell+2}\leq R^2]}). \end{aligned} \end{equation} For the BBGKY hierarchy we define \begin{equation*} f_{N,R}^{(s,k)}(t,Z_s):=\sum_{\sigma\in S_k}f_{N,R}^{(s,k,\sigma)}(t,Z_s),\text{ for }1\leq k\leq n,\quad f_{N,R}^{(s,0)}(t,Z_s):=T_s^t(f_{N,0}\mathds{1}_{[E_s\leq R^2]})(Z_s), \end{equation*} where given $k\geq 1$ and $\sigma\in S_k$, we denote \begin{equation*} \begin{aligned} f_{N,R}^{(s,k,\sigma)}(t,Z_s&):=\int_{\mathcal{T}_k(t)}T_s^{t-t_1}\mathcal{C}_{s,s+\widetilde{\sigma}_1}^{N,R} T_{s+\widetilde{\sigma}_1}^{t_1-t_2}...\mathcal{C}_{s+\widetilde{\sigma}_{k-1},s+\widetilde{\sigma}_k}^{N,R} T_{s+\widetilde{\sigma}_k}^{t_k}f_{N,0}^{(s+\widetilde{\sigma}_k)}(Z_s)\,dt_k...\,dt_{1}.
\end{aligned} \end{equation*} For the Boltzmann hierarchy we define \begin{equation*} f_{R}^{(s,k)}(t,Z_s):=\sum_{\sigma\in S_k}f_{R}^{(s,k,\sigma)}(t,Z_s),\text{ for }1\leq k\leq n,\quad f_{R}^{(s,0)}(t,Z_s):=S_s^t(f_{0}\mathds{1}_{[E_s\leq R^2]})(Z_s), \end{equation*} where given $k\geq 1$ and $\sigma\in S_k$, we denote \begin{equation*} \begin{aligned} f_{R}^{(s,k,\sigma)}(t,Z_s&):=\int_{\mathcal{T}_k(t)}S_s^{t-t_1}\mathcal{C}_{s,s+\widetilde{\sigma}_1}^{\infty,R} S_{s+\widetilde{\sigma}_1}^{t_1-t_2}...\mathcal{C}_{s+\widetilde{\sigma}_{k-1},s+\widetilde{\sigma}_k}^{\infty,R} S_{s+\widetilde{\sigma}_k}^{t_k}f_{0}^{(s+\widetilde{\sigma}_k)}(Z_s)\,dt_k...\,dt_{1}. \end{aligned} \end{equation*} Given $\phi_s\in C_c(\mathbb{R}^{ds})$ and $k\in\mathbb{N}$, let us denote \begin{equation}\label{BBGKY bounded energy} I_{s,k,R}^N(t)(X_s):= \int_{\mathbb{R}^{ds}}\phi_s(V_s)f_{N,R}^{(s,k)}(t,X_s,V_s)\,dV_s=\int_{B_R^{ds}}\phi_s(V_s)f_{N,R}^{(s,k)}(t,X_s,V_s)\,dV_s, \end{equation} \begin{equation}\label{Botlzmann bounded energy} I_{s,k,R}^\infty(t)(X_s):= \int_{\mathbb{R}^{ds}}\phi_s(V_s)f_R^{(s,k)}(t,X_s,V_s)\,dV_s=\int_{B_R^{ds}}\phi_s(V_s)f_{R}^{(s,k)}(t,X_s,V_s)\,dV_s. \end{equation} Recalling the observables $I_{s,k}^N$, $I_{s,k}^\infty$, defined in \eqref{bbgky observ k}-\eqref{boltz observ k}, we obtain the following estimates: \begin{lemma}\label{energy truncation} For any $s,n\in\mathbb{N}$, $R>1$ and $t\in [0,T]$, the following estimates hold: \begin{equation*}\sum_{k=0}^n\|I_{s,k,R}^N(t)-I_{s,k}^N(t)\|_{L^\infty_{X_s}}\leq C_{s,\beta_0,\mu_0,T} \|\phi_s\|_{L^\infty_{V_s}}e^{-\frac{\beta_0}{3}R^2}\|F_{N,0}\|_{N,\beta_0,\mu_0}, \end{equation*} \begin{equation*}\sum_{k=0}^n\|I_{s,k,R}^\infty(t)-I_{s,k}^\infty(t)\|_{L^\infty_{X_s}}\leq C_{s,\beta_0,\mu_0,T} \|\phi_s\|_{L^\infty_{V_s}}e^{-\frac{\beta_0}{3}R^2}\|F_{0}\|_{\infty,\beta_0,\mu_0}. \end{equation*} \end{lemma} \begin{proof} For the proof, we use the same ideas as in Lemma 8.4. 
from \cite{thesis}, and we also use \eqref{cardinality of S_k} to sum over all possible collision sequences. \begin{comment} {\color{red} We first prove it for the BBGKY hierarchy case. Let $\beta_0'=2\beta_0/3$ and $\lambda'=\beta'_0/4T$. Let us define the functions $$\bm{\beta'}_\lambda(t)=\beta'_0-\lambda' t,\quad\bm{\mu'}_\lambda(t)=\mu_0-\lambda' t.$$ It is clear that $\bm{\beta'}_{\lambda}(T)=\beta_0/2$. Then, a straightforward calculation and \eqref{time} imply that \begin{equation}\label{C velocities} C(d,\beta_0',\mu_0',T,\lambda')\leq\frac{3}{4}, \end{equation} where $C(d,\beta_0',\mu_0',T,\lambda')$ is given by \eqref{constant of well posedness}. We define \begin{equation*} G_{N,0}=\left(g_{N,0,m}\right)_{m\in\mathbb{N}},\quad\text{where}\quad g_{N,0,m}=f_{N,0}^{(m)}\mathds{1}_{V_m\notin B_R^{dm}}. \end{equation*} Notice that \begin{equation}\label{exp decay} \|G_{N,0}\|_{N,\beta_0',\mu_0}\leq e^{-\frac{\beta_0}{3}R^2}\|F_{N,0}\|_{N,\beta_0,\mu_0}. \end{equation} We first assume $1\leq k\leq n$. Applying $k-1$ times Lemma \ref{a priori lemma for T BBGKY} for $\beta_0',\mu_0,\lambda',\bm{\beta'}_\lambda,\bm{\mu'}_\lambda, T$, part \textit{(ii)} of Proposition \ref{remark for initial} and \eqref{exp decay}, we get \begin{align} |f_{N}^{(s,k)}(t,Z_s)-f_{N,R}^{(s,k)}(t,Z_s)&|\leq e^{-s\bm{\mu'}_\lambda(T)-\bm{\beta'}_\lambda(T)E_s(Z_s)}\left(\frac{3}{4}\right)^{k-1}\left|\left|\left|\int_0^t\mathcal{T}^{\tau}G_{N,0}\,d\tau\right|\right|\right|_{N,\bm{\beta'},\bm{\mu'}}\nonumber\\ &\leq Te^{-s\bm{\mu'}_\lambda(T)-\bm{\beta'}_\lambda(T)E_s(Z_s)}\left(\frac{3}{4}\right)^{k-1}\|G_{N,0}\|_{N,\beta_0',\mu_0}\nonumber\\ &\leq C_{s,\beta_0,\mu_0,T}\left(\frac{3}{4}\right)^{k-1}e^{-\frac{\beta_0}{3}R^2}\|F_{N,0}\|_{N,\beta_0,\mu_0}e^{-\bm{\beta'}_{\lambda}(T)E_s(Z_s)}\label{velocity estimate k>1}. 
\end{align} For $k=0$, part \textit{(i)} of Proposition \ref{remark for initial} and Remark \ref{T_N isometry} yield \begin{align} |f_{N}^{(s,0)}(t,Z_s)-f_{N,R}^{(s,0)}(t,Z_s)|&\leq e^{-s\bm{\mu'}_\lambda(T)-\bm{\beta'}_\lambda(T)E_s(Z_s)}\|\mathcal{T}^tG_{N,0}\|_{N,\bm{\beta'}_\lambda(t),\bm{\mu'}_\lambda(t)}\nonumber\\ &\leq e^{-s\bm{\mu'}_\lambda(T)-\bm{\beta'}_\lambda(T)E_s(Z_s)}\|\mathcal{T}^tG_{N,0}\|_{N,\beta_0',\mu_0}\nonumber\\ &=e^{-s\bm{\mu'}_\lambda(T)-\bm{\beta'}_\lambda(T)E_s(Z_s)}\|G_{N,0}\|_{N,\beta_0',\mu_0}\nonumber\\ &\leq C_{s,\beta_0,\mu_0}e^{-\frac{\beta_0}{3}R^2}\|F_{N,0}\|_{N,\beta_0,\mu_0}e^{-\bm{\beta'}_\lambda(T)E_s(Z_s)}\label{velocity estimate k=0}. \end{align} Combining \eqref{velocity estimate k>1}-\eqref{velocity estimate k=0}, adding for $\sigma\in S_k$ and $k=0,...,n$, and using \eqref{cardinality of S_k}, we obtain \begin{equation*} \begin{aligned} \sum_{k=0}^n\|I_{s,k,R}^N(t)-I_{s,k}^N(t)\|_{L^\infty_{X_s}}&\leq C_{s,\beta_0,\mu_0,T}\|\phi_s\|_{L^\infty_{V_s}}e^{-\frac{\beta_0}{3}R^2}\|F_{N,0}\|_{N,\beta_0,\mu_0} \int_{\mathbb{R}^{ds}}e^{-\bm{\beta'}_\lambda(T)E_s(Z_s)}\,dV_s\\ &\leq C_{s,\beta_0,\mu_0,T}\|\phi_s\|_{L^\infty_{V_s}}e^{-\frac{\beta_0}{3}R^2}\|F_{N,0}\|_{N,\beta_0,\mu_0}. \end{aligned} \end{equation*} The proof for the Boltzmann hierarchy case is similar, using Lemma \ref{a priori lemma for S boltzmann} and Proposition \ref{remark for initial boltzmann hierarchy} instead.} \end{comment} \end{proof} \subsection{Separation of collision times} We will now separate the time intervals over which we integrate, so that the collisions occurring are separated in time. For this purpose consider a small time parameter $\delta>0$. For convenience, given $t\geq 0$ and $k\in\mathbb{N}$, we define \begin{equation}\label{separated collision times} \mathcal{T}_{k,\delta}(t):=\left\{(t_1,...,t_k)\in\mathcal{T}_k(t):\quad 0\leq t_{i+1}\leq t_i-\delta,\quad\forall i\in \{0,...,k\}\right\}, \end{equation} where we denote $t_{k+1}=0$, $t_0=t$.
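For orientation, consider the simplest case $k=1$: the constraints in \eqref{separated collision times} (with $t_0=t$ and $t_2=0$) read $t_1\leq t-\delta$ and $0\leq t_1-\delta$, so that
\begin{equation*}
\mathcal{T}_{1,\delta}(t)=\left\{t_1\in[0,t]:\delta\leq t_1\leq t-\delta\right\},
\end{equation*}
which is non-empty only if $t\geq 2\delta$. More generally, chaining the constraints $t_{i+1}\leq t_i-\delta$ shows that $\mathcal{T}_{k,\delta}(t)=\emptyset$ whenever $t<(k+1)\delta$.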
For the BBGKY hierarchy, we define \begin{equation*} f_{N,R,\delta}^{(s,k)}(t,Z_s):=\sum_{\sigma\in S_k}f_{N,R,\delta}^{(s,k,\sigma)}(t,Z_s),\text{ for } 1\leq k\leq n,\quad f_{N,R,\delta}^{(s,0)}(t,Z_s):=T_s^t(f_{N,0}\mathds{1}_{[E_s\leq R^2]})(Z_s), \end{equation*} where, given $k\geq 1$ and $\sigma\in S_k$, we denote \begin{equation*} f_{N,R,\delta}^{(s,k,\sigma)}(t,Z_s):=\int_{\mathcal{T}_{k,\delta}(t)}T_s^{t-t_1}\mathcal{C}_{s,s+\widetilde{\sigma}_1}^{N,R} T_{s+\widetilde{\sigma}_1}^{t_1-t_2} ...\mathcal{C}_{s+\widetilde{\sigma}_{k-1},s+\widetilde{\sigma}_k}^{N,R} T_{s+\widetilde{\sigma}_k}^{t_k}f_{N,0}^{(s+\widetilde{\sigma}_k)}(Z_s)\,dt_k...\,dt_{1}. \end{equation*} In the same spirit, for the Boltzmann hierarchy we define \begin{equation*} f_{R,\delta}^{(s,k)}(t,Z_s):=\sum_{\sigma\in S_k}f_{R,\delta}^{(s,k,\sigma)}(t,Z_s),\text{ for } 1\leq k\leq n,\quad f_{R,\delta}^{(s,0)}(t,Z_s):=S_s^t(f_{0}\mathds{1}_{[E_s\leq R^2]})(Z_s), \end{equation*} where, given $k\geq 1$ and $\sigma\in S_k$, we denote \begin{equation*} f_{R,\delta}^{(s,k,\sigma)}(t,Z_s):=\int_{\mathcal{T}_{k,\delta}(t)}S_s^{t-t_1}\mathcal{C}_{s,s+\widetilde{\sigma}_1}^{\infty,R} S_{s+\widetilde{\sigma}_1}^{t_1-t_2} ...\mathcal{C}_{s+\widetilde{\sigma}_{k-1},s+\widetilde{\sigma}_k}^{\infty,R} S_{s+\widetilde{\sigma}_k}^{t_k}f_{0}^{(s+\widetilde{\sigma}_k)}(Z_s)\,dt_k...\,dt_{1}. \end{equation*} Given $\phi_s\in C_c(\mathbb{R}^{ds})$ and $k\in\mathbb{N}$, we define \begin{equation}\label{bbgky truncated time} I_{s,k,R,\delta}^N(t)(X_s):= \int_{\mathbb{R}^{ds}}\phi_s(V_s)f_{N,R,\delta}^{(s,k)}(t,X_s,V_s)\,dV_s=\int_{B_R^{ds}}\phi_s(V_s)f_{N,R,\delta}^{(s,k)}(t,X_s,V_s)\,dV_s, \end{equation} \begin{equation}\label{boltzmann truncated time} I_{s,k,R,\delta}^\infty(t)(X_s):= \int_{\mathbb{R}^{ds}}\phi_s(V_s)f_{R,\delta}^{(s,k)}(t,X_s,V_s)\,dV_s=\int_{B_R^{ds}}\phi_s(V_s)f_{R,\delta}^{(s,k)}(t,X_s,V_s)\,dV_s.
\end{equation} \begin{remark}\label{time functionals trivial} For $0\leq t\leq\delta$, we trivially obtain $\mathcal{T}_{k,\delta}(t)=\emptyset$. In this case the functionals $I_{s,k,R,\delta}^N(t), I_{s,k,R,\delta}^\infty(t)$ are identically zero. \end{remark} \begin{comment} {\color{red} Using Lemma \ref{a priori lemma for C BBGKY} and Lemma \ref{a priori lemma for C Boltzmann} respectively, we obtain the following continuity estimates: \begin{lemma}\label{inductive time sep} Let $s,k\in\mathbb{N}$, $1\leq j\leq k$, $\sigma_j\in S_j$, $t>0$ and $\mathcal{F}(t)\subseteq [0,t]$ measurable. Then the following estimates hold: \begin{enumerate}[(i)] \item Assume $g_{N,s+\widetilde{\sigma}_{j}}(\tau,\cdot)\in X_{N,\frac{\beta_0}{2}+\frac{j\beta_0}{2k},s+\widetilde{\sigma}_{j}},\quad\forall\tau\in\mathcal{F}(t)$. Then there holds: \begin{equation*} \begin{aligned} \bigg|&\int_{\mathcal{F}(t)}T_{s+\widetilde{\sigma}_{j-1}}^{t-\tau}\mathcal{C}_{s+\widetilde{\sigma}_{j-1},s+\widetilde{\sigma}_{j}}^{N,R}g_{N,s+\widetilde{\sigma}_{j}}(\tau,Z_{s+\widetilde{\sigma}_{j-1}})\,d\tau\bigg|_{N,\frac{\beta_0}{2}+\frac{\left(j-1\right)\beta_0}{2k},s+\widetilde{\sigma}_{j-1}}\\ &\leq C_{d,\beta_0}(s+2k)\int_{\mathcal{F}(t)}|g_{N,s+\widetilde{\sigma}_{j}}(\tau)|_{N,\frac{\beta_0}{2}+\frac{j\beta_0}{2k},s+\widetilde{\sigma}_{j}}\,d\tau. \end{aligned} \end{equation*} \item Assume $g_{s+\widetilde{\sigma}_{j}}(\tau,\cdot)\in X_{\infty,\frac{\beta_0}{2}+\frac{j\beta_0}{2k},s+\widetilde{\sigma}_{j}},\quad\forall\tau\in\mathcal{F}(t)$. 
Then \begin{equation*} \begin{aligned} \bigg|&\int_{\mathcal{F}(t)}S_{s+\widetilde{\sigma}_{j-1}}^{t-\tau}\mathcal{C}_{s+\widetilde{\sigma}_{j-1},s+\widetilde{\sigma}_{j}}^{\infty,R} g_{s+\widetilde{\sigma}_{j}}(\tau,Z_{s+\widetilde{\sigma}_{j-1}})\,d\tau\bigg|_{\infty,\frac{\beta_0}{2}+\frac{\left(j-1\right)\beta_0}{2k},s+\widetilde{\sigma}_{j-1}}\\ &\leq C_{d,\beta_0}(s+2k)\int_{\mathcal{F}(t)}|g_{s+\widetilde{\sigma}_{j}}(\tau)|_{\infty,\frac{\beta_0}{2}+\frac{j\beta_0}{2k},s+\widetilde{\sigma}_{j}}\,d\tau. \end{aligned} \end{equation*} \end{enumerate} \end{lemma} \begin{proof} We prove it first for the BBGKY hierarchy. Consider $t>0$, $k\in\mathbb{N}$, $1\leq j\leq k$ and $Z_{s+\widetilde{\sigma}_{j-1}}\in\mathbb{R}^{2d(s+\widetilde{\sigma}_{j-1})}$. Let us write $$Z_{s+\widetilde{\sigma}_{j-1}}^{t-\tau}=(X_{s+\widetilde{\sigma}_{j-1}}^{t-\tau},V_{s+\widetilde{\sigma}_{j-1}}^{t-\tau}):=\Psi_{s+\widetilde{\sigma}_{j-1}}^{\tau-t}Z_{s+\widetilde{\sigma}_{j-1}}.$$ Conservation of energy \eqref{kinetic energy flow} yields \begin{equation}\label{cons energy 2j-2} E_{s+\widetilde{\sigma}_{j-1}}(Z_{s+\widetilde{\sigma}_{j-1}}^{t-\tau})=E_{s+\widetilde{\sigma}_{j-1}}(Z_{s+\widetilde{\sigma}_{j-1}}),\quad\forall\tau\in\mathcal{F}(t). 
\end{equation} Using Lemma \ref{a priori lemma for C BBGKY}, with $\beta=\frac{\beta_0}{2}+\frac{j\beta_0}{2k}$, $m=s+\widetilde{\sigma}_{j-1}$, we obtain \begin{align} &\left|T_{s+\widetilde{\sigma}_{j-1}}^{t-\tau}\mathcal{C}_{s+\widetilde{\sigma}_{j-1},s+\widetilde{\sigma}_{j}}^{N,R}g_{N,s+\widetilde{\sigma}_{j}}(\tau,Z_{s+\widetilde{\sigma}_{j-1}})\right|=\left|\mathcal{C}_{s+\widetilde{\sigma}_{j-1},s+\widetilde{\sigma}_{j}}^{N,R}g_{N,s+\widetilde{\sigma}_{j}}(\tau,Z_{s+\widetilde{\sigma}_{j-1}}^{t-\tau})\right|\nonumber\\ &=\left|\mathcal{C}_{s+\widetilde{\sigma}_{j-1},s+\widetilde{\sigma}_{j}}^{N}g_{N,s+\widetilde{\sigma}_{j}}\mathds{1}_{E_{s+\widetilde{\sigma}_{j}}(Z_{s+\widetilde{\sigma}_{j}})\leq R^2}(\tau,Z_{s+\widetilde{\sigma}_{j-1}}^{t-\tau})\right|\nonumber\\ &\leq C_d \left(\frac{\beta_0}{2}+\frac{j\beta_0}{2k}\right)^{-d}\left[\left(s+\widetilde{\sigma}_{j-1}\right)\left(\frac{\beta_0}{2}+\frac{j\beta_0}{2k}\right)^{-\frac{1}{2}}+\sum_{i=1}^{s+\widetilde{\sigma}_{j-1}}|v_i^{t-\tau}|\right]\nonumber\\ &\times e^{-\left(\frac{\beta_0}{2}+\frac{j\beta_0}{2k}\right)E_{s+\widetilde{\sigma}_{j-1}}(Z_{s+\widetilde{\sigma}_{j-1}}^{t-\tau})}|g_{N,s+\widetilde{\sigma}_{j}}\mathds{1}_{E_{s+\widetilde{\sigma}_{j}}(Z_{s+\widetilde{\sigma}_{j}})\leq R^2}(\tau)|_{N,\frac{\beta_0}{2}+\frac{j\beta_0}{2k},s+\widetilde{\sigma}_{j}}\nonumber\\ &\leq C_d \left(\frac{\beta_0}{2}\right)^{-d}\left[\left(s+\widetilde{\sigma}_{j-1}\right)\left(\frac{\beta_0}{2}\right)^{-\frac{1}{2}}+\sum_{i=1}^{s+\widetilde{\sigma}_{j-1}}|v_i^{t-\tau}|\right] e^{-\left(\frac{\beta_0}{2}+\frac{j\beta_0}{2k}\right)E_{s+\widetilde{\sigma}_{j-1}}(Z_{s+\widetilde{\sigma}_{j-1}})}\nonumber\\ &\times|g_{N,s+\widetilde{\sigma}_{j}}(\tau)|_{N,\frac{\beta_0}{2}+\frac{j\beta_0}{2k},s+\widetilde{\sigma}_{j}}\label{use of kin energy time sep}\\ &=C_d e^{-\left(\frac{\beta_0}{2}+\frac{\left(j-1\right)\beta_0}{2k}\right) 
E_{s+\widetilde{\sigma}_{j-1}}(Z_{s+\widetilde{\sigma}_{j-1}})}\left(\frac{\beta_0}{2}\right)^{-d}\left[\left(s+2k\right)\left(\frac{\beta_0}{2}\right)^{-\frac{1}{2}}+\sum_{i=1}^{s+\widetilde{\sigma}_{j-1}}|v_i^{t-\tau}|\right]\nonumber \\ &\times e^{-\frac{\beta_0}{2k}E_{s+\widetilde{\sigma}_{j-1}}(Z_{s+\widetilde{\sigma}_{j-1}})}|g_{N,s+\widetilde{\sigma}_{j}}(\tau)|_{N,\frac{\beta_0}{2}+\frac{j\beta_0}{2k},s+\widetilde{\sigma}_{j}}\label{bound on transport time}, \end{align} where to obtain \eqref{use of kin energy time sep}, we use \eqref{cons energy 2j-2}, and to obtain \eqref{bound on transport time}, we use \eqref{bound on sigma}. But Cauchy-Schwartz inequality implies \begin{align} &e^{-\frac{\beta_0}{2k}E_{s+\widetilde{\sigma}_{j-1}}(Z_{s+\widetilde{\sigma}_{j-1}})}\sum_{i=1}^{s+\widetilde{\sigma}_{j-1}}|v_i^{t-\tau}| =e^{-\frac{\beta_0}{2k}E_{s+\widetilde{\sigma}_{j-1}}(Z_{s+\widetilde{\sigma}_{j-1}})}\left(\frac{4k}{\beta_0}\right)^{1/2} \left(\frac{\beta_0}{4k}\right)^{1/2}\sum_{i=1}^{s+\widetilde{\sigma}_{j-1}}|v_i^{t-\tau}|\nonumber\\ &\leq e^{-\frac{\beta_0}{2k}E_{s+\widetilde{\sigma}_{j-1}}(Z_{s+\widetilde{\sigma}_{j-1}})}\left(\frac{4k(s+\widetilde{\sigma}_{j-1})}{\beta_0}\right)^{1/2}\left(\frac{\beta_0}{4k}\right)^{1/2}\left(\sum_{i=1}^{s+\widetilde{\sigma}_{j-1}}|v_i^{t-\tau}|^2\right)^{1/2}\nonumber\\ &=\left(\frac{4k(s+\widetilde{\sigma}_{j-1})}{\beta_0}\right)^{1/2}\left(\frac{\beta_0}{2k}E_{s+\widetilde{\sigma}_{j-1}}\left(Z_{s+\widetilde{\sigma}_{j-1}}^{t-\tau}\right)\right)^{1/2}e^{-\frac{\beta_0}{2k}E_{s+\widetilde{\sigma}_{j-1}}(Z_{s+\widetilde{\sigma}_{j-1}}^{t-\tau})}\nonumber\\ &=\left(\frac{4k(s+\widetilde{\sigma}_{j-1})}{\beta_0}\right)^{1/2}\left(\frac{\beta_0}{2k}E_{s+\widetilde{\sigma}_{j-1}}\left(Z_{s+\widetilde{\sigma}_{j-1}}\right)\right)^{1/2}e^{-\frac{\beta_0}{2k}E_{s+\widetilde{\sigma}_{j-1}}(Z_{s+\widetilde{\sigma}_{j-1}})}\label{pass time sep 2}\\ &\leq 2\beta_0^{-1/2}(s+2k)\sup_{x\geq 0}|\sqrt{x}e^{-x^2}|\nonumber\\ 
&\leq C_{\beta_0}(s+2k),\label{sup<inf} \end{align} where to obtain \eqref{pass time sep 2}, we use \eqref{cons energy 2j-2} and to obtain \eqref{sup<inf} we used the elementary bound: $$\sup_{x\geq 0}|\sqrt{x}e^{-x^2}|\leq C<\infty.$$ Therefore, \eqref{bound on transport time}, \eqref{sup<inf} yield \begin{equation*} \begin{aligned} e&^{\left(\frac{\beta_0}{2}+\frac{\left(j-1\right)\beta_0}{2k}\right) E_{s+\widetilde{\sigma}_{j-1}}(Z_{s+\widetilde{\sigma}_{j-1}})}\left|\int_{\mathcal{F}(t)}T_{s+\widetilde{\sigma}_{j-1}}^{t-\tau}\mathcal{C}_{s+\widetilde{\sigma}_{j-1},s+\widetilde{\sigma}_{j}}^{N,R}g_{N,s+\widetilde{\sigma}_{j}}(\tau,Z_{s+\widetilde{\sigma}_{j-1}})\,d\tau\right|\\ &=\int_{\mathcal{F}(t)}e^{\left(\frac{\beta_0}{2}+\frac{\left(j-1\right)\beta_0}{2k}\right) E_{s+\widetilde{\sigma}_{j-1}}(Z_{s+\widetilde{\sigma}_{j-1}})}\left|T_{s+\widetilde{\sigma}_{j-1}}^{t-\tau}\mathcal{C}_{s+\widetilde{\sigma}_{j-1},s+\widetilde{\sigma}_{j}}^{N,R}g_{N,s+\widetilde{\sigma}_{j}}(\tau,Z_{s+\widetilde{\sigma}_{j-1}})\right|\,d\tau\\ &\leq C_{d,\beta_0}(s+2k)\int_{\mathcal{F}(t)}|g_{N,s+\widetilde{\sigma}_{j}}(\tau)|_{N,\frac{\beta_0}{2}+\frac{j\beta_0}{2k},s+\widetilde{\sigma}_{j}}\,d\tau. \end{aligned} \end{equation*} Hence, \begin{equation*} \begin{aligned} \bigg|&\int_{\mathcal{F}(t)}T_{s+\widetilde{\sigma}_{j-1}}^{t-\tau}\mathcal{C}_{s+\widetilde{\sigma}_{j-1},s+\widetilde{\sigma}_{j}}^{N,R}g_{N,s+\widetilde{\sigma}_{j}}(\tau,Z_{s+\widetilde{\sigma}_{j-1}})\,d\tau\bigg|_{N,\frac{\beta_0}{2}+\frac{\left(j-1\right)\beta_0}{2k},s+\widetilde{\sigma}_{j-1}}\\ &\leq C_{d,\beta_0}(s+2k)\int_{\mathcal{F}(t)}|g_{N,s+\widetilde{\sigma}_{j}}(\tau)|_{N,\frac{\beta_0}{2}+\frac{j\beta_0}{2k},s+\widetilde{\sigma}_{j}}\,d\tau. \end{aligned} \end{equation*} For the Boltzmann hierarchy, the proof is identical using Lemma \ref{a priori lemma for C Boltzmann} instead. 
\end{proof}} \end{comment} Recalling the observables $I_{s,k,R}^N$, $I_{s,k,R}^\infty$ defined in \eqref{BBGKY bounded energy}-\eqref{Botlzmann bounded energy}, we obtain the following estimates: \begin{lemma}\label{time sep} For any $s,n\in\mathbb{N}$, $R>0$, $\delta>0$ and $t\in[0,T]$, the following estimates hold: \begin{equation*} \sum_{k=0}^n\|I_{s,k,R,\delta}^N(t)-I_{s,k,R}^N(t)\|_{L^\infty_{X_s}}\leq \delta\|\phi_s\|_{L^\infty_{V_s}}C_{d,s,\beta_0,\mu_0,T}^n\|F_{N,0}\|_{N,\beta_0,\mu_0}, \end{equation*} \begin{equation*}\sum_{k=0}^n\|I_{s,k,R,\delta}^\infty(t)-I_{s,k,R}^\infty(t)\|_{L^\infty_{X_s}}\leq \delta\|\phi_s\|_{L^\infty_{V_s}}C_{d,s,\beta_0,\mu_0,T}^n\|F_{0}\|_{\infty,\beta_0,\mu_0}. \end{equation*} \end{lemma} \begin{proof} For the proof, we follow similar ideas as in Lemma 8.7. from \cite{thesis}, and we also use bound \eqref{bound on sigma} to control the combinatorics occurring. \begin{comment} {\color{red} We first prove it for the BBGKY hierarchy case. For $k=0$, the corresponding difference trivially vanishes, so we may assume $1\leq k\leq n$. Recalling \eqref{collision times}, \eqref{separated collision times}, notice that \begin{equation}\label{decomposition of the truncated time set} \mathcal{T}_k(t)\setminus\mathcal{T}_{k,\delta}(t)=\bigcup_{i=0}^{k-1}\mathcal{F}_i(t), \end{equation} where \begin{equation}\label{time component} \mathcal{F}_i(t)=\left\{(t_1,...,t_{k})\in\mathcal{T}_k(t): t_i-\delta<t_{i+1}\leq t_i\right\},\quad t_0=t,\quad t_{k+1}=0. 
\end{equation} We obtain \begin{align} |\mathcal{F}_i(t)|&\leq\int_0^{t}...\int_{0}^{t_{i-1}}\int_{t_i-\delta}^{t_i}\int_{0}^{t_{i+1}}...\int_{0}^{t_{k-1}}\,dt_k...\,dt_1\nonumber\\ &\leq\int_0^t...\int_{0}^{t_{i-1}}\int_{t_i-\delta}^{t_i}\frac{t_{i+1}^{k-i-1}}{(k-i-1)!}\,dt_{i+1}...\,dt_1\nonumber\\ &=\int_0^t...\int_0^{t_{i-1}}\frac{1}{(k-i)!}\left(t_i^{k-i}-\left(t_i-\delta\right)^{k-i}\right)\,dt_i...\,dt_1\nonumber\\ &\leq\int_0^t...\int_0^{t_{i-1}}\frac{\delta (k-i)t_i^{k-i-1}}{(k-i)!}\,dt_i...\,dt_1\nonumber\\ &=\delta\int_0^t...\int_0^{t_{i-1}}\frac{ t_i^{k-i-1}}{(k-i-1)!}\,dt_i...\,dt_1\nonumber\\ &=\frac{\delta t^{k-1}}{(k-1)!}\leq \frac{\delta T^{k-1}}{(k-1)!}.\label{time measure} \end{align} We also have \begin{align} |&I_{s,k,R,\delta}^N(t)(X_s)-I_{s,k,R}^N(t)(X_s)|\nonumber\\ &\leq\|\phi_s\|_{L^\infty_{V_s}}\sum_{\sigma\in S_k}|f_{N,R,\delta}^{(s,k,\sigma)}(t)- f_{N,R}^{(s,k,\sigma)}(t)|_{N,\frac{\beta_0}{2},s}\int_{\mathbb{R}^{ds}}e^{-\frac{\beta_0}{2}E_s(Z_s)}\,dV_s\nonumber\\ &\leq C_{s,\beta_0}\|\phi_s\|_{L^\infty_{V_s}}\sum_{\sigma\in S_k}|f_{N,R,\delta}^{(s,k,\sigma)}(t)- f_{N,R}^{(s,k,\sigma)}(t)|_{N,\frac{\beta_0}{2},s}.\label{first time estimate} \end{align} But by \eqref{decomposition of the truncated time set}-\eqref{time measure}, an inductive application of the first estimate of Lemma \ref{inductive time sep} for $j=1,...,k$, implies that for any $\sigma\in S_k$, we obtain \begin{align} &|f_{N,R,\delta}^{(s,k,\sigma)}(t)- f_{N,R}^{(s,k,\sigma)}(t)|_{N,\frac{\beta_0}{2},s}\nonumber\\ &\leq \sum_{i=0}^{k-1}\left|\int_{\mathcal{F}_i(t)}T_s^{t-t_1}\mathcal{C}_{s,s+\widetilde{\sigma}_1}^{N,R} T_{s+\widetilde{\sigma}_1}^{t_1-t_2} ...\mathcal{C}_{s+\widetilde{\sigma}_{k-1},s+\widetilde{\sigma}_k}^{N,R} T_{s+\widetilde{\sigma}_k}^{t_k}f_{N,0}^{(s+\widetilde{\sigma}_k)}(Z_s)\right|_{N,\frac{\beta_0}{2},s}\,dt_k...\,dt_1\nonumber\\ &\leq 
C_{d,\beta_0}^k(s+2k)^k\sum_{i=0}^{k-1}\int_{\mathcal{F}_i(t)}|T_{s+\widetilde{\sigma}_k}^{t_k}f_{N,0}^{(s+\widetilde{\sigma}_k)}|_{N,\beta_0,s+\widetilde{\sigma}_k}\,dt_{k}...\,dt_1\nonumber\\ &=C_{d,\beta_0}^k(s+2k)^k|f_{N,0}^{(s+\widetilde{\sigma}_k)}|_{N,\beta_0,s+\widetilde{\sigma}_k}\sum_{i=0}^{k-1}\int_{\mathcal{F}_i(t)}\,dt_{k}...\,dt_1\label{use of isometry}\\ &\leq C_{d,\beta_0}^k(s+2k)^k|f_{N,0}^{(s+\widetilde{\sigma}_k)}|_{N,\beta_0,s+\widetilde{\sigma}_k}\frac{k\delta T^{k-1}}{(k-1)!}\label{application of time measure}\\ &\leq C_{d,\beta_0}^k(s+2k)^ke^{-\mu_0(s+\widetilde{\sigma}_k)}\|F_{N,0}\|_{N,\beta_0,\mu_0}\frac{k^2\delta T^{k-1}}{k!}\nonumber\\ &\leq C_{d,\beta_0}^k(s+2k)^ke^{-\mu_0(s+k)}\|F_{N,0}\|_{N,\beta_0,\mu_0}\frac{k^2\delta T^{k-1}}{k!}\label{use on bound of sigma}\\ &\leq\delta C_{d,\beta_0,\mu_0,T}^k\frac{(s+2k)^k}{k!}\|F_{N,0}\|_{N,\beta_0,\mu_0}\nonumber\\ &\leq \delta C_{d,s,\beta_0,\mu_0,T}^k\|F_{N,0}\|_{N,\beta_0,\mu_0},\label{second time estimate} \end{align} where to obtain \eqref{use of isometry}, we use Remark \ref{T_s isometry}, to obtain \eqref{application of time measure}, we use \eqref{time measure}, to obtain \eqref{use on bound of sigma}, we use \eqref{bound on sigma}, and to obtain \eqref{second time estimate}, we use the elementary inequality: \begin{equation}\label{elementary inequality} \frac{(s+2k)^k}{k!}\leq2^k\frac{(s+k)^k}{k!}\leq 2^k\sum_{\ell=0}^\infty\frac{(s+k)^\ell}{\ell!}=2^ke^{s+k}\leq C_s^k. \end{equation} Using \eqref{first time estimate}, \eqref{second time estimate}, and adding for $\sigma\in S_k$ and $k=1,...,n$, \eqref{cardinality of S_k} implies \begin{equation*} \sum_{k=0}^n\|I_{s,k,R,\delta}^N(t)-I_{s,k,R}^N(t)\|_{L^\infty_{X_s}}\leq \delta\|\phi_s\|_{L^\infty_{V_s}}C_{d,s,\beta_0,\mu_0,T}^n\|F_{N,0}\|_{N,\beta_0,\mu_0}. 
\end{equation*} The proof for the Boltzmann hierarchy case is similar using the second estimate of Lemma \ref{inductive time sep}.} \end{comment} \end{proof} Combining Lemma \ref{term by term}, Lemma \ref{energy truncation} and Lemma \ref{time sep}, we obtain \begin{proposition}\label{reduction} For any $s,n\in\mathbb{N}$, $R>1$, $\delta>0$ and $t\in[0,T]$, the following estimates hold: \begin{equation*} \begin{aligned} \|I_s^N(t)-\sum_{k=0}^nI_{s,k,R,\delta}^N(t)\|_{L^\infty_{X_s}}\leq C_{s,\beta_0,\mu_0,T}\|\phi_s\|_{L^\infty_{V_s}}\left(2^{-n}+e^{-\frac{\beta_0}{3}R^2}+\delta C_{d,s,\beta_0,\mu_0,T}^n\right)\|F_{N,0}\|_{N,\beta_0,\mu_0}, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} \|I_s^\infty(t)-\sum_{k=0}^nI_{s,k,R,\delta}^\infty(t)\|_{L^\infty_{X_s}}\leq C_{s,\beta_0,\mu_0,T}\|\phi_s\|_{L^\infty_{V_s}}\left(2^{-n}+e^{-\frac{\beta_0}{3}R^2}+\delta C_{d,s,\beta_0,\mu_0,T}^n\right)\|F_{0}\|_{\infty,\beta_0,\mu_0}. \end{aligned} \end{equation*} \end{proposition} Proposition \ref{reduction} implies that, given $0\leq k\leq n$, $R>1$ and $\delta>0$, the convergence proof reduces to controlling the differences $I_{s,k,R,\delta}^N(t)-I_{s,k,R,\delta}^\infty(t),$ where the observables $I_{s,k,R,\delta}^N$, $I_{s,k,R,\delta}^\infty$ are given by \eqref{bbgky truncated time}-\eqref{boltzmann truncated time}. However, this is not immediate, since the backwards $(\epsilon_2,\epsilon_3)$-flow and the backwards free flow do not coincide in general. The goal is to eliminate a set of initial data of small measure, negligible in the limit, outside of which the backwards $(\epsilon_2,\epsilon_3)$-flow and the backwards free flow are comparable. \section{Geometric estimates}\label{sec:geometric} In this section we present some geometric results which will be essential for estimating the measure of the pathological sets leading to recollisions of the backwards $(\epsilon_2,\epsilon_3)$-flow (see Section \ref{sec:stability}).
First, we review some of the results we used in \cite{ternary} which are useful here as well. We then present certain novel results, namely Lemma \ref{estimate of cubes}, Lemma \ref{estimate of difference in shell}, Lemma \ref{estimate on annulus I_1} and most importantly Lemma \ref{lemma on I_1,2}, which crucially rely on the following symmetric representation of the $(2d-1)$-sphere of radius $r>0$: \begin{align} \mathbb{S}_r^{2d-1}=\left\{(\omega_1,\omega_2)\in B_r^d\times B_r^d:\omega_2\in\mathbb{S}_{\sqrt{r^2-|\omega_1|^2}}^{d-1}\right\} =\left\{(\omega_1,\omega_2)\in B_r^d\times B_r^d:\omega_1\in\mathbb{S}_{\sqrt{r^2-|\omega_2|^2}}^{d-1}\right\}.\label{representation of sphere for fixed omega_1} \end{align} Representation \eqref{representation of sphere for fixed omega_1} is very useful when one wants to estimate the intersection of $\mathbb{S}_r^{2d-1}$ with sets of the form $S\times\mathbb{R}^d$ or $\mathbb{R}^d\times S$, where $S\subseteq\mathbb{R}^d$ is of small measure. \subsection{Cylinder-Sphere estimates} Here, we present certain estimates based on the intersection of a sphere with a given solid cylinder. These estimates were used in \cite{ternary} as well. Similar estimates can be found in \cite{denlinger,gallagher}. \begin{lemma}\label{Ryan's lemma} Let $\rho,r>0$ and $K_\rho^d\subseteq\mathbb{R}^d$ be a solid cylinder. Then the following estimate holds for the $(d-1)$-spherical measure: $$\int_{\mathbb{S}_r^{d-1}}\mathds{1}_{K_\rho^d}\,d\omega\lesssim r^{d-1}\min\left\{1,\left(\displaystyle\frac{\rho}{r}\right)^{\frac{d-1}{2}}\right\}.$$ \end{lemma} \begin{proof} After re-scaling we may clearly assume that $r=1$. Then, we refer to the work of R. Denlinger \cite{denlinger}, p.30, for the rest of the proof. \end{proof} Applying Lemma \ref{Ryan's lemma}, we obtain the following geometric estimate, which will be crucially used in Section \ref{sec:stability}.
\begin{corollary}\label{spherical estimate} Given $0<\rho\leq 1\leq R$, the following estimate holds: $$|B_{R}^{d}\cap K_\rho^d|_{d}\lesssim R^{d}\rho^{\frac{d-1}{2}}.$$ \end{corollary} \begin{proof} The co-area formula and Lemma \ref{Ryan's lemma} imply \begin{equation}\label{estimate with min} \begin{aligned} |B_R^d\cap K_\rho^d|_d&= \int_0^R \int_{\mathbb{S}_r^{d-1}}\mathds{1}_{K_\rho^d}\,d\omega\,dr\\ &\lesssim \int_0^Rr^{d-1}\min\left\{1,\left(\frac{\rho}{r}\right)^{\frac{d-1}{2}}\right\}\,dr\\ &\leq\int_0^\rho r^{d-1}\,dr+\rho^{\frac{d-1}{2}}\int_0^R r^{\frac{d-1}{2}}\,dr\\ &\simeq \rho^d+\rho^{\frac{d-1}{2}}R^{\frac{d+1}{2}},\quad\text{since }d\geq 2\\ &\lesssim R^{d}\rho^{\frac{d-1}{2}},\quad\text{since } 0<\rho\leq 1\leq R. \end{aligned} \end{equation} \end{proof} \subsection{Estimates relying on the $(2d-1)$-sphere representation}\label{subsec:relying} Here we present certain geometric estimates relying on the representation \eqref{representation of sphere for fixed omega_1}. In particular, to the best of our knowledge, Lemma \ref{estimate of cubes}, Lemma \ref{estimate of difference in shell}, Lemma \ref{estimate on annulus I_1} and most importantly Lemma \ref{lemma on I_1,2} are novel results. Lemma \ref{strip lemma} is a special case of a result proved in \cite{ternary}. \subsubsection{Truncation of impact directions}\label{subsubsec:ball} We first estimate the intersection of $\mathbb{S}_1^{2d-1}$ with sets of the form $B_\rho^d\times\mathbb{R}^d$ or $\mathbb{R}^d\times B_\rho^d$. \begin{lemma}\label{estimate of cubes} Consider $\rho>0$.
We define the sets \begin{align} M_{1}(\rho)&=B_\rho^d\times\mathbb{R}^d=\left\{(\omega_1,\omega_2)\in\mathbb{R}^{2d}:|\omega_1|\leq\rho\right\},\label{cube parameters 1}\\ M_{2}(\rho)&=\mathbb{R}^d\times B_\rho^d=\left\{(\omega_1,\omega_2)\in\mathbb{R}^{2d}:|\omega_2|\leq\rho\right\}.\label{cube parameters 2} \end{align} Then, the following estimate holds: \begin{equation*} \int_{\mathbb{S}_1^{2d-1}}\mathds{1}_{M_1(\rho)}\,d\omega_1\,d\omega_2=\int_{\mathbb{S}_1^{2d-1}}\mathds{1}_{M_2(\rho)}\,d\omega_1\,d\omega_2\lesssim\min\{1,\rho^d\}. \end{equation*} \end{lemma} \begin{proof} By symmetry, it suffices to estimate the first term. Using \eqref{cube parameters 1} and representation \eqref{representation of sphere for fixed omega_1}, we obtain \begin{align*} \int_{\mathbb{S}_1^{2d-1}}\mathds{1}_{M_1(\rho)}\,d\omega_1\,d\omega_2&=\int_{\mathbb{S}_1^{2d-1}}\mathds{1}_{B_{\rho}^d\times \mathbb{R}^d}\,d\omega_1\,d\omega_2\lesssim \int_{B_{\rho}^d\cap B_1^d}\int_{\mathbb{S}_{\sqrt{1-|\omega_1|^2}}^{d-1}}\,d\omega_2\,d\omega_1\lesssim \min\{1,\rho^d\}. \end{align*} \end{proof} The following result is a special case of Lemma 8.4. from \cite{ternary}. \begin{lemma}\label{strip lemma} Consider $\rho>0$. Let us define the strip \begin{equation}\label{strip} W_\rho^{2d}=\{(\omega_1,\omega_2)\in\mathbb{R}^{2d}:|\omega_1-\omega_2|\leq\rho\}. \end{equation} Then, the following estimate holds: $$\int_{\mathbb{S}_1^{2d-1}}\mathds{1}_{W_\rho^{2d}}\,d\omega_1\,d\omega_2\lesssim\min\left\{1,\rho^{\frac{d-1}{2}}\right\}.$$ \end{lemma} \begin{proof} For the proof, see Lemma 9.5. in \cite{thesis}. The main idea is to first use representation \eqref{representation of sphere for fixed omega_1} and then apply Lemma \ref{Ryan's lemma}. \end{proof} \subsubsection{Conic estimates}\label{subsubsec:conic} Now we establish estimates related to conic regions. We first present a well-known spherical cap estimate.
\begin{lemma}\label{shell estimate} Consider $0\leq\alpha\leq 1$ and $\nu\in\mathbb{R}^{d}\setminus\{0\}$. Let us define \begin{equation}\label{shell parameters} S(\alpha,\nu)=\left\{\omega\in\mathbb{R}^d:|\langle\omega,\nu\rangle|\geq\alpha |\omega||\nu|\right\}. \end{equation} Then, for $r>0$, the following estimate holds: $$\int_{\mathbb{S}_r^{d-1}}\mathds{1}_{S(\alpha,\nu)}\,d\omega= 2r^{d-1}|\mathbb{S}_1^{d-2}|\int_0^{\arccos\alpha}\sin^{d-2}(\theta)\,d\theta\lesssim r^{d-1}\arccos\alpha.$$ \end{lemma} \begin{proof} After re-scaling, it suffices to prove the result for $r=1$. Notice that, due to the absolute value in \eqref{shell parameters}, $\mathbb{S}_1^{d-1}\cap S(\alpha,\nu)$ is the union of two antipodal spherical caps of angular radius $\arccos\alpha$ around the directions $\pm\nu$. Therefore, integrating in spherical coordinates, we obtain $$\int_{\mathbb{S}_1^{d-1}}\mathds{1}_{S(\alpha,\nu)}\,d\omega=2|\mathbb{S}_1^{d-2}| \int_0^{\arccos\alpha}\sin^{d-2}\theta\,d\theta\lesssim\arccos\alpha.$$ \end{proof} We apply Lemma \ref{shell estimate} to obtain the following result: \begin{comment} \begin{lemma}\label{estimate of difference in shell} Consider $0\leq\alpha\leq 1$ and $\nu\in\mathbb{R}^{d}\setminus\{0\}$. Let us define \begin{equation}\label{difference shell parameters} N(\alpha,\nu)=\left\{(\omega_1,\omega_2)\in\mathbb{R}^{2d}:\langle\omega_1-\omega_2,\nu\rangle\geq\alpha|\omega_1-\omega_2||\nu|\right\}. \end{equation} Then we have the estimate: \begin{equation*} \int_{\mathbb{S}_1^{2d-1}}\mathds{1}_{N(\alpha,\nu)}\,d\omega_1\,d\omega_2\lesssim\arccos\alpha.
\end{equation*} \end{lemma} \begin{proof} Using representation \eqref{representation of sphere for fixed omega_1}, we obtain \begin{equation}\label{initial estimate for W} \int_{\mathbb{S}_1^{2d-1}}\mathds{1}_{N(\alpha,\nu)}\,d\omega_1\,d\omega_2=\int_{B_1^d}\int_{\mathbb{S}_{\sqrt{1-|\omega_2|^2}}^{d-1}}\mathds{1}_{N_{\omega_2}(\alpha,\nu)}(\omega_1)\,d\omega_1\,d\omega_2, \end{equation} where for fixed $\omega_2\in B_1^d$, we denote $$N_{\omega_2}(\alpha,\nu):=\left\{\omega_1\in\mathbb{R}^d:(\omega_1,\omega_2)\in N(\alpha,\nu)\right\}.$$ Let us fix $\omega_2\in B_1^d$. We define the translation map $F_{\omega_2}:\mathbb{R}^d\to\mathbb{R}^d$ by $$\omega:=F_{\omega_2}(\omega_1)=\omega_1-\omega_2.$$ Clearly \begin{equation}\label{equality of domains W} F_{\omega_2}[\mathbb{S}_{\sqrt{1-|\omega_2|^2}}^{d-1}]=\mathbb{S}_{\sqrt{1-|\omega_2|^2}}^{d-1}(-\omega_2), \end{equation} where $\mathbb{S}_{\sqrt{1-|\omega_2|^2}}^{d-1}(-\omega_2)$ is the sphere of radius $\sqrt{1-|\omega_2|^2}$, centered at $-\omega_2$. Also, recalling notation from \eqref{shell parameters}, we have \begin{equation}\label{equality of char W} \omega_1\in N_{\omega_2}(\alpha,\nu)\Leftrightarrow F_{\omega_2}(\omega_1)\in S(\alpha,\nu). 
\end{equation} Therefore, for fixed $\omega_2\in B_1^d$, we obtain \begin{align}\int_{\mathbb{S}_{\sqrt{1-|\omega_2|^2}}^{d-1}}\mathds{1}_{N_{\omega_2}(\alpha,\nu)}(\omega_1)\,d\omega_1&=\int_{\mathbb{S}_{\sqrt{1-|\omega_2|^2}}^{d-1}}\mathds{1}_{S(\alpha,\nu)}(F_{\omega_2}(\omega_1))\,d\omega_1\label{equality of chars, estimate W}\\ &=\int_{\mathbb{S}_{\sqrt{1-|\omega_2|^2}}^{d-1}(-\omega_2)}\mathds{1}_{S(\alpha,\nu)}(\omega)\,d\omega\label{change of variables}\\ &\lesssim\arccos\alpha\label{use of lemma in W}, \end{align} where to obtain \eqref{equality of chars, estimate W} we use \eqref{equality of char W}, to obtain \eqref{change of variables} we use the substitution $\omega=F_{\omega_2}(\omega_1)$ and \eqref{equality of domains W}, and to obtain \eqref{use of lemma in W} we use Lemma \ref{shell estimate}. By \eqref{initial estimate for W}, we obtain \begin{align*} \int_{\mathbb{S}_1^{2d-1}}\mathds{1}_{N(\alpha,\nu)(\omega_1,\omega_2)}\,d\omega_1\,d\omega_2&=\int_{B_1^d}\int_{\mathbb{S}_{\sqrt{1-|\omega_2|^2}}^{d-1}}\mathds{1}_{N_{\omega_2}(\alpha,\nu)}(\omega_1)\,d\omega_1\,d\omega_2\lesssim\arccos\alpha. \end{align*} \end{proof} \end{comment} \begin{lemma}\label{estimate of difference in shell} Consider $0\leq\alpha\leq 1$ and $\nu\in\mathbb{R}^{d}\setminus\{0\}$. Let us define \begin{equation}\label{difference shell parameters} N(\alpha,\nu)=\left\{(\omega_1,\omega_2)\in\mathbb{R}^{2d}:\langle\omega_1-\omega_2,\nu\rangle\geq\alpha|\omega_1-\omega_2||\nu|\right\}. \end{equation} Then, we have the estimate: \begin{equation*} \int_{\mathbb{S}_1^{2d-1}}\mathds{1}_{N(\alpha,\nu)}\,d\omega_1\,d\omega_2\lesssim\arccos\alpha. \end{equation*} \end{lemma} \begin{proof} Recalling \eqref{shell parameters}-\eqref{difference shell parameters}, we have \begin{equation}\label{equation on integrals 1 W} N(\alpha,\nu)=\{(\omega_1,\omega_2)\in\mathbb{R}^{2d}:\omega_1-\omega_2\in S(\alpha,\nu)\}. 
\end{equation} Let us define the linear map $T:\mathbb{R}^{2d}\to\mathbb{R}^{2d}$ by \begin{equation*} (u_1,u_2)=T(\omega_1,\omega_2):=(\omega_1+\omega_2,\omega_1-\omega_2). \end{equation*} Clearly $$|u_1|^2+|u_2|^2=|\omega_1+\omega_2|^2+|\omega_1-\omega_2|^2=2|\omega_1|^2+2|\omega_2|^2=2,\quad\forall(\omega_1,\omega_2)\in\mathbb{S}_1^{2d-1},$$ hence $T:\mathbb{S}_1^{2d-1}\to\mathbb{S}_{\sqrt{2}}^{2d-1}$. Therefore, using \eqref{equation on integrals 1 W} and changing variables under $T$, we have \begin{align} \int_{\mathbb{S}_1^{2d-1}}\mathds{1}_{N(\alpha,\nu)}(\omega_1,\omega_2)\,d\omega_1\,d\omega_2&=\int_{\mathbb{S}_1^{2d-1}}\mathds{1}_{S(\alpha,\nu)}(\omega_1-\omega_2)\,d\omega_1\,d\omega_2\nonumber\\ &\simeq\int_{\mathbb{S}_{\sqrt{2}}^{2d-1}}\mathds{1}_{S(\alpha,\nu)}(u_2)\,du_1\,du_2\nonumber\\ &=\int_{B_{\sqrt{2}}^d}\int_{\mathbb{S}_{\sqrt{2-|u_1|^2}}^{d-1}}\mathds{1}_{S(\alpha,\nu)}(u_2)\,du_2\,du_1\label{decomposition sqrt}\\ &\lesssim\arccos\alpha,\label{use of spherical shell lemma} \end{align} where to obtain \eqref{decomposition sqrt} we use the representation of the sphere \eqref{representation of sphere for fixed omega_1}, and to obtain \eqref{use of spherical shell lemma} we use Lemma \ref{shell estimate}. \end{proof} \subsubsection{Annuli estimates}\label{subsubsec:annulus} We present estimates based on the intersection of the unit sphere with appropriate annuli. \begin{lemma}\label{estimate on annulus I_1} Let $0<\beta<1/2$, and consider the sets \begin{align} I_1&=\left\{(\omega_1,\omega_2)\in\mathbb{R}^{2d}: \left|1-2\left|\omega_1\right|^2\right|\leq 2\beta\right\},\label{annulus I1}\\ I_2&=\left\{(\omega_1,\omega_2)\in\mathbb{R}^{2d}: \left|1-2\left|\omega_2\right|^2\right|\leq 2\beta\right\}\label{annulus I2}. \end{align} Then, the following estimates hold: \begin{equation*} \int_{\mathbb{S}_1^{2d-1}}\mathds{1}_{I_1}\,d\omega_1\,d\omega_2=\int_{\mathbb{S}_1^{2d-1}}\mathds{1}_{I_2}\,d\omega_1\,d\omega_2\lesssim\beta.
\end{equation*} \end{lemma} \begin{proof} By symmetry, it suffices to prove the estimate for $I_1$. Since $0<\beta<1/2$, we may write $$I_1=\left\{(\omega_1,\omega_2)\in\mathbb{S}_1^{2d-1}:\sqrt{\frac{1}{2}-\beta}\leq|\omega_1|\leq\sqrt{\frac{1}{2}+\beta}\right\}. $$ Using the representation \eqref{representation of sphere for fixed omega_1} of the $(2d-1)$-unit sphere, we obtain \begin{align*} \int_{\mathbb{S}_1^{2d-1}}\mathds{1}_{I_1}\,d\omega_1\,d\omega_2&\leq \int_{\sqrt{\frac{1}{2}-\beta}\leq|\omega_1|\leq\sqrt{\frac{1}{2}+\beta}}\int_{\mathbb{S}_{\sqrt{1-|\omega_1|^2}}^{d-1}}\,d\omega_2\,d\omega_1\\ &\lesssim(\frac{1}{2}+\beta)^{d/2}-(\frac{1}{2}-\beta)^{d/2}\\ &\overset{d\geq 2}=\left(\sqrt{\frac{1}{2}+\beta}-\sqrt{\frac{1}{2}-\beta}\right)\sum_{j=0}^{d-1}\left(\frac{1}{2}+\beta\right)^{j/2}\left(\frac{1}{2}-\beta\right)^{\frac{d-1-j}{2}}\\ &=\frac{2\beta}{\sqrt{\frac{1}{2}+\beta}+\sqrt{\frac{1}{2}-\beta}}\sum_{j=0}^{d-1}\left(\frac{1}{2}+\beta\right)^{j/2}\left(\frac{1}{2}-\beta\right)^{\frac{d-1-j}{2}}\\ &\leq 2\sqrt{2}\beta\sum_{j=0}^{d-1}\left(\frac{1}{2}+\beta\right)^{j/2}\left(\frac{1}{2}-\beta\right)^{\frac{d-1-j}{2}}\\ &\lesssim \beta, \end{align*} since $0<\beta<1/2$. The proof is complete. \end{proof} \begin{lemma}\label{lemma on I_1,2} Consider $0<\beta<1/4$. Let us define the hemispheres \begin{align} \mathcal{S}_{1,2}&=\{(\omega_1,\omega_2)\in\mathbb{S}_1^{2d-1}:|\omega_1|<|\omega_2|\},\label{sphere 2<1}\\ \mathcal{S}_{2,1}&=\{(\omega_1,\omega_2)\in\mathbb{S}_1^{2d-1}:|\omega_2|<|\omega_1|\}\label{sphere 1<2}. \end{align} and the annuli \begin{align} I_{1,2}&=\{(\omega_1,\omega_2)\in \mathbb{R}^{2d}:\left|\left|\omega_1\right|^2+2\langle\omega_1,\omega_2\rangle\right|\leq\beta\}\label{I_1,2},\\ I_{2,1}&=\{(\omega_1,\omega_2)\in\mathbb{R}^{2d}:\left|\left|\omega_2\right|^2+2\langle\omega_1,\omega_2\rangle\right|\leq\beta\}\label{I_2,1}. 
\end{align} Then, there holds $$\int_{\mathcal{S}_{1,2}}\mathds{1}_{I_{1,2}}\,d\omega_1\,d\omega_2=\int_{\mathcal{S}_{2,1}}\mathds{1}_{I_{2,1}}\,d\omega_1\,d\omega_2\lesssim\beta.$$ \end{lemma} \begin{proof} By symmetry, it suffices to prove \begin{equation}\label{sufficient condition lemma annulus} \int_{\mathcal{S}_{2,1}}\mathds{1}_{I_{2,1}}\,d\omega_1\,d\omega_2\lesssim\beta. \end{equation} Recalling notation from \eqref{cube parameters 1}-\eqref{cube parameters 2}, let us define $$U_\beta=M_1^c(2\sqrt{\beta})\cap M_2^c(2\sqrt{\beta})=\{(\omega_1,\omega_2)\in\mathbb{R}^{2d}: |\omega_1|>2\sqrt{\beta}\text{ and }|\omega_2|> 2\sqrt{\beta}\}.$$ Clearly $U_\beta^c=M_1(2\sqrt{\beta})\cup M_2(2\sqrt{\beta}).$ Writing $ A:=I_{2,1}\cap U_\beta, $ we have \begin{align} \int_{\mathcal{S}_{2,1}}\mathds{1}_{I_{2,1}}\,d\omega_1\,d\omega_2 &\leq\int_{\mathcal{S}_{2,1}}\mathds{1}_{ U_\beta^c}\,d\omega_1\,d\omega_2+\int_{\mathcal{S}_{2,1}}\mathds{1}_{A}\,d\omega_1\,d\omega_2\lesssim\beta^{d/2}+\int_{\mathcal{S}_{2,1}}\mathds{1}_{A}\,d\omega_1\,d\omega_2\label{truncated lemma annulus}, \end{align} where to obtain \eqref{truncated lemma annulus}, we used Lemma \ref{estimate of cubes}. Notice that we may write \begin{equation}\label{equivalent condition for I_21} A=\{(\omega_1,\omega_2)\in\mathbb{R}^{2d}: |\omega_1|>2\sqrt{\beta},\text{ }|\omega_2|>2\sqrt{\beta}\text{ and }\sqrt{|\omega_1|^2-\beta}\leq|\omega_1+\omega_2|\leq\sqrt{|\omega_1|^2+\beta}\}. 
\end{equation} By \eqref{truncated lemma annulus}, the representation of the sphere \eqref{representation of sphere for fixed omega_1} and \eqref{equivalent condition for I_21}, we have \begin{equation}\label{reduction to single integral} \int_{\mathcal{S}_{2,1}}\mathds{1}_{I_{2,1}}\,d\omega_1\,d\omega_2 \lesssim\beta^{d/2}+\int_{2\sqrt{\beta}<|\omega_1|\leq 1}\int_{\mathcal{S}_{2,1,\omega_1}}\mathds{1}_{A_{\omega_1}}(\omega_2)\,d\omega_2\,d\omega_1, \end{equation} where given $2\sqrt{\beta}<|\omega_1|\leq 1$, we denote \begin{align} \mathcal{S}_{2,1,\omega_1}&=\{\omega_2\in\mathbb{S}_{\sqrt{1-|\omega_1|^2}}^{d-1}:|\omega_2|<|\omega_1|\},\label{S_21 projection}\\ A_{\omega_1}&=\{\omega_2\in \mathbb{R}^d:(\omega_1,\omega_2)\in A\}=\{\omega_2\in\mathbb{R}^{d}:|\omega_2|>2\sqrt{\beta}\text{ and }\sqrt{|\omega_1|^2-\beta}\leq|\omega_1+\omega_2|\leq\sqrt{|\omega_1|^2+\beta}\}.\label{A omega1} \end{align} Since $\beta<1/4$, it suffices to control the term: \begin{equation}\label{iterated integral I'} I'=\int_{2\sqrt{\beta}<|\omega_1|\leq 1}\int_{\mathcal{S}_{2,1,\omega_1}}\mathds{1}_{A_{\omega_1}}(\omega_2)\,d\omega_2\,d\omega_1. \end{equation} Now we shall prove that, in fact, \begin{equation}\label{ordered I'} I'=\int_{2\sqrt{\beta}<\sqrt{1-|\omega_1|^2}<|\omega_1|\leq 1}\int_{\mathbb{S}_{\sqrt{1-|\omega_1|^2}}^{d-1}}\mathds{1}_{A_{\omega_1}}(\omega_2)\,d\omega_2\,d\omega_1. \end{equation} Indeed, assume $\omega_1$ does not satisfy \begin{equation}\label{geometric condition_1} 2\sqrt{\beta}<\sqrt{1-|\omega_1|^2}< |\omega_1|.
\end{equation} Since we are integrating in the region $2\sqrt{\beta}<|\omega_1|\leq 1$, exactly one of the following holds: \begin{align}|\omega_1|\leq \sqrt{1-|\omega_1|^2},&\label{bad condition 1}\\ \sqrt{1-|\omega_1|^2}\leq 2\sqrt{\beta}.&\label{bad condition 2} \end{align} Recalling \eqref{S_21 projection}, condition \eqref{bad condition 1} implies that $\mathcal{S}_{2,1,\omega_1}=\emptyset$, while recalling \eqref{A omega1}, condition \eqref{bad condition 2} implies $\mathcal{S}_{2,1,\omega_1}\cap A_{\omega_1}=\emptyset$. Therefore $$I'=\int_{2\sqrt{\beta}<\sqrt{1-|\omega_1|^2}<|\omega_1|\leq 1}\int_{\mathcal{S}_{2,1,\omega_1}}\mathds{1}_{A_{\omega_1}}(\omega_2)\,d\omega_2\,d\omega_1,$$ and \eqref{ordered I'} follows from \eqref{S_21 projection}. Fix any $\omega_1$ satisfying \eqref{geometric condition_1}. We first estimate the inner integral: \begin{equation}\label{wanted integral} \int_{\mathbb{S}_{\sqrt{1-|\omega_1|^2}}^{d-1}}\mathds{1}_{A_{\omega_1}}(\omega_2)\,d\omega_2. \end{equation} Notice that \eqref{geometric condition_1} also yields \begin{equation}\label{geometric condition_2} |\omega_1|-\sqrt{|\omega_1|^2-\beta}=\frac{\beta}{|\omega_1|+\sqrt{|\omega_1|^2-\beta}}<\frac{\beta}{|\omega_1|}\leq\frac{1}{2}\sqrt{\beta}\leq\frac{1}{4}\sqrt{1-|\omega_1|^2}. \end{equation} Condition \eqref{geometric condition_1} guarantees that the vector\footnote{understood as a point in $\mathbb{R}^d$} $-\omega_1$ lies outside of the sphere $\mathbb{S}_{\sqrt{1-|\omega_1|^2}}^{d-1}$, while condition \eqref{geometric condition_2} guarantees that the sphere is not contained in the annulus $A_{\omega_1}$.
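The admissibility conditions \eqref{geometric condition_1}-\eqref{geometric condition_2} can also be probed numerically on a grid of parameters; the following sketch is an illustration only, with arbitrary grid values:

```python
import math

def check_conditions(beta, r):
    # For |omega_1| = r, assert geometric condition (1),
    # 2*sqrt(beta) < sqrt(1 - r^2) < r, then verify condition (2): the
    # annulus half-thickness r - sqrt(r^2 - beta) stays below one quarter
    # of sqrt(1 - r^2), the radius of the sphere carrying omega_2.
    s = math.sqrt(1.0 - r * r)
    assert 2.0 * math.sqrt(beta) < s < r
    gap = r - math.sqrt(r * r - beta)
    return gap < 0.25 * s

# Scan a small grid of admissible (beta, |omega_1|) pairs.
ok = all(
    check_conditions(beta, r)
    for beta in (0.001, 0.005, 0.01)
    for r in (0.75, 0.80, 0.85)
    if 2.0 * math.sqrt(beta) < math.sqrt(1.0 - r * r) < r
)
```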
Therefore, the projection of $\mathbb{S}_{\sqrt{1-|\omega_1|^2}}^{d-1}\cap A_{\omega_1}$ on any plane containing the origin and the vector $-\omega_1$ can be visualized as follows: \begin{center} \begin{tikzpicture}[scale=0.8] \tkzDefPoint(0,0){A} \tkzDefPoint(4,0){B} \tkzInterCC[R](A,2.5 cm)(B,4.4cm) \tkzGetPoints{M1}{M2} \tkzInterCC[R](A,2.5 cm)(B,3.6cm) \tkzGetPoints{N1}{N2} \tkzDrawCircle[R](A,2.5cm) \tkzDrawCircle[dashed,R](B,4.4cm) \tkzDrawCircle[dashed,R](B,3.6cm) \draw [thick] (A)--(M1); \draw [thick] (A)--(N2); \draw[thick](A)--(B); \draw[thick](B)--(M1); \draw [thick] (A)--(N2)--(B)--cycle; \draw [thick] (A)--(M1)--(B)--cycle; \draw [thick](A)--(M2); \draw [thick](A)--(N1); \node at (A) {$\bullet$ }; \node at (B) {$\bullet$ }; \node at (M1) {$\bullet$ }; \node at (M2) {$\bullet$ }; \node at (N1) {$\bullet$ }; \node at (N2) {$\bullet$ }; \node at ($(A) +(-0.2,0)$){$O$}; \node at ($(M1) +(-0.1,0.3)$){$A$}; \node at ($(N1) +(-0.1,0.3)$){$B$}; \node at ($(B) +(0.3,0)$){$C$}; \node at ($(N2) +(0.4,-0.1)$){$D$}; \node at ($(M2) +(0.4,-0.1)$){$E$}; \draw (B) coordinate (a) node[right] {} (A) coordinate (b) node[left] {} (M1) coordinate (c) node[above right] {} pic["$$", draw=red, <->, angle eccentricity=1.2, angle radius=1.4cm] {angle=a--b--c}; \draw (B) coordinate (a) node[right] {} (A) coordinate (b) node[left] {} (N1) coordinate (c) node[above right] {} pic["$$", draw=blue, <->, angle eccentricity=1.2, angle radius=0.7cm] {angle=a--b--c}; \coordinate (a2) at (1.2,0.4); \node at (a2){$\theta_2$}; \coordinate (a1) at (2.1,0.4); \node at (a1){$\theta_1$}; \end{tikzpicture} \begin{align*} (OA)&=(OB)=\sqrt{1-|\omega_1|^2},\quad \overrightarrow{OC}=-\omega_1,\\ (AC)&=\sqrt{|\omega_1|^2+\beta},\quad (CD)=\sqrt{|\omega_1|^2-\beta}. 
\end{align*} \end{center} We conclude that \begin{equation}\label{description as difference of shells} \mathbb{S}_{\sqrt{1-|\omega_1|^2}}^{d-1}\cap A_{\omega_1}=\mathbb{S}_{\sqrt{1-|\omega_1|^2}}^{d-1}\cap\left( S(\cos\theta_1,-\omega_1)\setminus S(\cos\theta_2,-\omega_1)\right), \end{equation} where recalling the notation introduced in \eqref{shell parameters}, $$\mathbb{S}_{\sqrt{1-|\omega_1|^2}}^{d-1}\cap S(\cos\theta_1,-\omega_1),\quad \mathbb{S}_{\sqrt{1-|\omega_1|^2}}^{d-1}\cap S(\cos\theta_2,-\omega_1),$$ are the spherical shells on $\mathbb{S}_{\sqrt{1-|\omega_1|^2}}^{d-1}$, of direction $-\omega_1$ and angles $2\theta_1$, $2\theta_2$ respectively, where $$\theta_1=\widehat{AOC},\quad\theta_2=\widehat{BOC}.$$ Therefore, by \eqref{description as difference of shells}, we have \begin{align} \int_{\mathbb{S}_{\sqrt{1-|\omega_1|^2}}^{d-1}}\mathds{1}_{A_{\omega_1}}(\omega_2)\,d\omega_2 &=\int_{\mathbb{S}_{\sqrt{1-|\omega_1|^2}}^{d-1}}\mathds{1}_{S(\cos\theta_1,-\omega_1)\setminus S(\cos\theta_2,-\omega_1)}(\omega_2)\,d\omega_2\nonumber\\ &=(1-|\omega_1|^2)^{\frac{d-1}{2}}|\mathbb{S}_1^{d-2}|\int_{2\theta_2}^{2\theta_1}\sin^{d-2}\theta\,d\theta\label{use of shell estim annulus I21}\\ &\lesssim\theta_1-\theta_2,\label{estimate of wanted integral} \end{align} where to obtain \eqref{use of shell estim annulus I21}, we use Lemma \ref{shell estimate}, and to obtain \eqref{estimate of wanted integral} we use the fact that $d\geq 2$. Let us calculate $\alpha_1=\cos\theta_1$, $\alpha_2=\cos\theta_2$. By the cosine law on the triangle $AOC$, we obtain \begin{equation}\label{x1} \alpha_1=\cos\theta_1=\frac{(OA)^2+(OC)^2-(AC)^2}{2(OA)(OC)}=\frac{1-|\omega_1|^2-\beta}{2|\omega_1|\sqrt{1-|\omega_1|^2}}, \end{equation} and by the cosine law on the triangle $BOC$, we obtain \begin{equation}\label{x2} \alpha_2=\cos\theta_2=\frac{(OB)^2+(OC)^2-(CB)^2}{2(OB)(OC)}=\frac{1-|\omega_1|^2+\beta}{2|\omega_1|\sqrt{1-|\omega_1|^2}}.
\end{equation} Then, expression \eqref{x1} implies \begin{equation}\label{bound on alpha_1} |\alpha_1|\leq \frac{\sqrt{1-|\omega_1|^2}}{2|\omega_1|}+\frac{\beta}{2|\omega_1|\sqrt{1-|\omega_1|^2}}<\frac{5}{8}, \end{equation} since by \eqref{geometric condition_1} we have $|\omega_1|>\sqrt{1-|\omega_1|^2}>2\sqrt{\beta}$. In the same spirit, expression \eqref{x2} yields \begin{equation}\label{bound on alpha_2} |\alpha_2|<\frac{5}{8}. \end{equation} The inverse cosine is smooth in $(-1,1)$, so it is Lipschitz in $[-\frac{5}{8},\frac{5}{8}]$, thus by \eqref{bound on alpha_1}-\eqref{bound on alpha_2} and \eqref{geometric condition_1}, we have \begin{equation*} |\arccos \alpha_1-\arccos \alpha_2|\lesssim |\alpha_1-\alpha_2|=\frac{\beta}{|\omega_1|\sqrt{1-|\omega_1|^2}}. \end{equation*} Therefore \eqref{estimate of wanted integral} implies \begin{equation}\label{final estimate truncated sphere} \int_{\mathbb{S}_{\sqrt{1-|\omega_1|^2}}^{d-1}}\mathds{1}_{A_{\omega_1}}(\omega_2)\,d\omega_2\lesssim \theta_1-\theta_2=\arccos \alpha_1-\arccos \alpha_2\lesssim \frac{\beta}{|\omega_1|\sqrt{1-|\omega_1|^2}}. \end{equation} Using \eqref{final estimate truncated sphere}, and recalling \eqref{ordered I'}, we have \begin{align} I'&=\int_{2\sqrt{\beta}<\sqrt{1-|\omega_1|^2}<|\omega_1|\leq 1}\int_{\mathbb{S}_{\sqrt{1-|\omega_1|^2}}^{d-1}}\mathds{1}_{A_{\omega_1}}(\omega_2)\,d\omega_2\,d\omega_1\nonumber\\ &\lesssim \beta\int_{B_1^d}\frac{1}{|\omega_1|\sqrt{1-|\omega_1|^2}}\,d\omega_1\nonumber\\ &\simeq\beta\int_{0}^1\frac{r^{d-2}}{\sqrt{1-r^2}}\,dr\label{integration in polar coordinates}\\ &\leq\beta\int_0^1\frac{1}{\sqrt{1-r^2}}\,dr\label{use of d>2}\\ &=\frac{\pi}{2}\beta,\label{bound on auxiliary integral I21} \end{align} where to obtain \eqref{integration in polar coordinates} we use integration in polar coordinates, and to obtain \eqref{use of d>2} we use the fact that $d\geq 2$.
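The chain \eqref{integration in polar coordinates}-\eqref{bound on auxiliary integral I21} rests on the elementary inequality $\int_0^1 r^{d-2}(1-r^2)^{-1/2}\,dr\leq\pi/2$ for $d\geq 2$. As a quick numerical confirmation (an illustration only), the substitution $r=\sin t$ removes the endpoint singularity and turns the integral into $\int_0^{\pi/2}\sin^{d-2}t\,dt$:

```python
import math

def annulus_integral(d, n=100000):
    # Midpoint rule for int_0^1 r^(d-2) / sqrt(1 - r^2) dr, rewritten via
    # r = sin(t) as int_0^{pi/2} sin(t)^(d-2) dt (no singularity at t = pi/2).
    h = (math.pi / 2.0) / n
    return sum(math.sin((k + 0.5) * h) ** (d - 2) for k in range(n)) * h

# d = 2 gives exactly pi/2; the value decreases as d grows.
vals = {d: annulus_integral(d) for d in (2, 3, 4, 5)}
```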
Using \eqref{reduction to single integral} and \eqref{bound on auxiliary integral I21}, we obtain \begin{equation*} \int_{\mathcal{S}_{2,1}}\mathds{1}_{I_{2,1}}\,d\omega_1\,d\omega_2\lesssim \beta^{d/2}+\beta\lesssim\beta, \end{equation*} since $\beta<1/4$. The proof is complete. \end{proof} \section{Good configurations and stability}\label{sec:stability} \subsection{Adjunction of new particles} In this section, we investigate the stability of good configurations under adjunction of collisional particles. Subsection \ref{subsec:binary} investigates binary adjunctions, while Subsection \ref{subsec:ternary} investigates ternary adjunctions. To perform the measure estimates needed, we will strongly rely on the results of Section \ref{sec:geometric}. We start with some definitions on the configurations we are using. Consider $m\in\mathbb{N}$ and $\theta>0$, and recall from \eqref{separated space data}-\eqref{separated data} the set of well-separated configurations \begin{equation}\label{separated conf} \begin{aligned} \Delta_m(\theta)&=\{\widetilde{Z}_m=(\widetilde{X}_m,\widetilde{V}_m)\in\mathbb{R}^{2dm}: |\widetilde{x}_i-\widetilde{x}_j|>\theta,\quad\forall 1\leq i<j\leq m\},\quad m\geq 2,\quad \Delta_1(\theta)=\mathbb{R}^{2d}. \end{aligned} \end{equation} Roughly speaking, a good configuration is a configuration which remains well-separated under backwards time evolution. More precisely, given $\theta>0$, $t_0>0$, we define the set of good configurations as: \begin{equation}\label{good conf def} G_m(\theta,t_0)=\left\{Z_m=(X_m,V_m)\in\mathbb{R}^{2dm}:Z_m(t)\in\Delta_m(\theta),\quad\forall t\geq t_0\right\}, \end{equation} where $Z_m(t)$ denotes the backwards in time free flow of $Z_m=(X_m,V_m)$, given by: \begin{equation}\label{back-wards flow} Z_m(t)=\left(X_m\left(t\right),V_m\left(t\right)\right):=(X_m-tV_m,V_m),\quad t\geq 0. \end{equation} Notice that $Z_m$ is the initial point of the trajectory, i.e. $Z_m(0)=Z_m$.
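For the backward free flow \eqref{back-wards flow}, membership in $G_m(\theta,t_0)$ can be decided in closed form: for each pair, $\inf_{t\geq t_0}|\Delta x-t\Delta v|$ is attained either at $t_0$ or at the projection time $\langle\Delta x,\Delta v\rangle/|\Delta v|^2$. A minimal computational sketch of this criterion (an illustration, not used elsewhere in the text):

```python
import math

def min_pair_separation(dx, dv, t0=0.0):
    # min over t >= t0 of |dx - t*dv|: closest backward-in-time approach of a
    # free pair with relative position dx and relative velocity dv.
    nv2 = sum(v * v for v in dv)
    if nv2 == 0.0:
        t = t0                                   # separation is constant
    else:
        t = max(t0, sum(x * v for x, v in zip(dx, dv)) / nv2)
    return math.sqrt(sum((x - t * v) ** 2 for x, v in zip(dx, dv)))

def is_good_configuration(X, V, theta, t0=0.0):
    # Z = (X, V) lies in G_m(theta, t0) iff every pair stays theta-separated
    # along Z(t) = (X - t*V, V) for all t >= t0.
    m = len(X)
    return all(
        min_pair_separation([a - b for a, b in zip(X[i], X[j])],
                            [a - b for a, b in zip(V[i], V[j])], t0) > theta
        for i in range(m) for j in range(i + 1, m)
    )

# Two particles whose backward trajectories separate: good for theta < 3.
good = is_good_configuration([(0.0, 0.0), (3.0, 0.0)],
                             [(1.0, 0.0), (0.0, 0.0)], theta=1.0)
```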
In other words, for $m\geq 2$, we have \begin{equation}\label{good conf def m>=2} \begin{aligned} G_m(\theta,t_0)=\left\{Z_m=(X_m,V_m)\in\mathbb{R}^{2dm}:|x_i(t)-x_j(t)|>\theta,\quad\forall t\geq t_0,\quad\forall i<j\in \left\{1,...,m\right\}\right\}.\end{aligned} \end{equation} From now on, we consider parameters $R>>1$ and $0< \delta,\eta,\epsilon_0,\alpha<<1$ satisfying: \begin{equation}\label{choice of parameters} \alpha<<\epsilon_0<<\eta\delta,\quad R\alpha<<\eta\epsilon_0. \end{equation} For convenience, we choose the parameters in \eqref{choice of parameters} at the very end of the paper, see \eqref{first parameter}-\eqref{final parameter}. Throughout this section, we will write $K_\eta^d$ for a cylinder of radius $\eta$ in $\mathbb{R}^d$. The following lemma is useful for the adjunction of particles to a given configuration. For the proof, see Lemma 12.2.1 from \cite{gallagher} or Lemma 10.2 from \cite{thesis}. \begin{lemma}\label{adjuction of 1} Consider parameters $\alpha,\epsilon_0,R,\eta,\delta$ as in \eqref{choice of parameters} and $\epsilon_3<<\alpha$. Let $\bar{y}_1,\bar{y}_2\in\mathbb{R}^d$, with $|\bar{y}_1-\bar{y}_2|>\epsilon_0$ and $v_1\in B_R^d$. Then there is a $d$-cylinder $K_\eta^d\subseteq\mathbb{R}^d$ such that for any $y_1\in B_\alpha^d(\bar{y}_1)$, $y_2\in B_\alpha^d(\bar{y}_2)$ and $v_2\in B_R^d\setminus K_\eta^d$, we have \begin{enumerate}[(i)] \item $(y_1,y_2,v_1,v_2)\in G_2(\sqrt{2}\epsilon_3,0)$,\vspace{0.2cm} \item $(y_1,y_2,v_1,v_2)\in G_2(\epsilon_0,\delta).$ \end{enumerate} \end{lemma} \subsection{Stability under binary adjunction}\label{subsec:binary} The main results of this subsection are stated in Proposition \ref{bad set double}, which will be the inductive step of adding a colliding particle, and Proposition \ref{bad set double measure}, which presents the measure estimate of the bad set that appears in this process.
The proofs of the propositions presented below are in part inspired by arguments in \cite{gallagher} and \cite{ternary}, with the caveat that a new scenario needs to be addressed: the case where the binary collisional configuration formed runs into a ternary interaction under time evolution. \subsubsection{Binary adjunction} For convenience, given $v\in\mathbb{R}^d$, let us denote \begin{equation}\label{post notation double} \left(\mathbb{S}_1^{d-1}\times B_R^{d}\right)^+(v)=\big\{(\omega_1, v_1)\in\mathbb{S}_{1}^{d-1}\times B_R^{d}:b_2(\omega_1, v_1-v)>0\big\}, \end{equation} where $b_2(\omega_1,v_1-v)=\langle\omega_1,v_1-v\rangle.$ Recall from \eqref{back-wards flow} that given $m\in\mathbb{N}$ and $Z_m=(X_m,V_m)\in\mathbb{R}^{2dm}$, we denote the backwards in time free flow as $Z_m(t)=(X_m-tV_m,V_m)$, $t\geq 0.$ Recall also the notation from \eqref{interior phase space} \begin{align*}\mathring{\mathcal{D}}_{m+1,\epsilon_2,\epsilon_3}=\big\{Z_{m+1}=(X_{m+1},V_{m+1})\in\mathbb{R}^{2d(m+1)}:\text{ } d_2(x_i,x_j)>\epsilon_2,\quad\forall (i,j)\in\mathcal{I}_{m+1}^2,&\\ \text{ and }d_3(x_i;x_j,x_k)>\sqrt{2}\epsilon_3,\quad\forall (i,j,k)\in\mathcal{I}_{m+1}^3\big\}&, \end{align*} where $\mathcal{I}_{m+1}^2,\mathcal{I}_{m+1}^3$ are given by \eqref{index 2}-\eqref{index 3} respectively. \begin{proposition}\label{bad set double} Consider parameters $\alpha,\epsilon_0,R,\eta,\delta$ as in \eqref{choice of parameters} and $\epsilon_2<<\epsilon_3<<\alpha$. Let $m\in\mathbb{N}$, $\bar{Z}_m=(\bar{X}_m,\bar{V}_m)\in G_m(\epsilon_0,0)$, $\ell\in\{1,...,m\}$ and $X_m\in B_{\alpha/2}^{dm}(\bar{X}_m)$.
Then there is a subset $\mathcal{B}_{\ell}^2(\bar{Z}_m)\subseteq (\mathbb{S}_1^{d-1}\times B_R^{d})^+(\bar{v}_\ell)$ such that: \begin{enumerate}[(i)] \item For any $(\omega_1,v_{m+1})\in (\mathbb{S}_1^{d-1}\times B_R^{d})^+(\bar{v}_{\ell})\setminus\mathcal{B}_{\ell}^2(\bar{Z}_m)$, one has: \begin{align} Z_{m+1}(t)&\in\mathring{\mathcal{D}}_{m+1,\epsilon_2,\epsilon_3},\quad\forall t\geq 0,\label{pre-0-double}\\ Z_{m+1}&\in G_{m+1}(\epsilon_0/2,\delta),\label{pre-delta-double}\\ \bar{Z}_{m+1}&\in G_{m+1}(\epsilon_0,\delta),\label{pre-delta-double-bar} \end{align} where \begin{equation}\label{pre-notation double} \begin{aligned} &Z_{m+1}=(x_1,...,x_\ell,...,x_m,x_{m+1},\bar{v}_1,...,\bar{v}_{\ell},...,\bar{v}_m,v_{m+1}),\\ &x_{m+1}=x_{\ell}-\epsilon_2\omega_1,\\ &\bar{Z}_{m+1}=(\bar{x}_1,...,\bar{x}_{\ell},...,\bar{x}_m,\bar{x}_{\ell},\bar{v}_1,...,\bar{v}_{\ell},...,\bar{v}_m,v_{m+1}). \end{aligned} \end{equation} \item For any $(\omega_1,v_{m+1})\in (\mathbb{S}_1^{d-1}\times B_R^{d})^+(\bar{v}_{\ell})\setminus\mathcal{B}_{\ell}^2(\bar{Z}_m)$, one has: \begin{align} &Z_{m+1}'(t)\in\mathring{\mathcal{D}}_{m+1,\epsilon_2,\epsilon_3},\quad\forall t\geq 0,\label{post-0-double}\\ &Z_{m+1}'\in G_{m+1}(\epsilon_0/2,\delta),\label{post-delta-double}\\ &\bar{Z}_{m+1}'\in G_{m+1}(\epsilon_0,\delta)\label{post-delta-double-bar}, \end{align} where \begin{equation}\label{post-notation double} \begin{aligned} &Z_{m+1}'=(x_1,...,x_\ell,...,x_m,x_{m+1},\bar{v}_1,...,\bar{v}_{\ell}',...,\bar{v}_m,v_{m+1}'),\\ &x_{m+1}=x_{\ell}+\epsilon_2\omega_1,\\ &\bar{Z}_{m+1}'=(\bar{x}_1,...,\bar{x}_\ell,...,\bar{x}_m,\bar{x}_{\ell},\bar{v}_1,...,\bar{v}_{\ell}',...,\bar{v}_m,v_{m+1}'),\\ &(\bar{v}_{\ell}',v_{m+1}')=T_{\omega_1}(\bar{v}_{\ell},v_{m+1}). \end{aligned} \end{equation} \end{enumerate} \end{proposition} \begin{proof} By symmetry, we may assume that $\ell=m$.
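Two properties of the collisional transformation $T_{\omega_1}$ are used repeatedly below: it reverses the sign of $b_2$ (allowing postcollisional configurations to be pulled back to precollisional ones) and it conserves momentum and energy, hence also the relative speed. Assuming $T_{\omega_1}$ is the standard elastic hard-sphere reflection (this explicit form is an assumption here, taken from the classical binary setting of \cite{gallagher}), these properties can be checked directly:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def T(omega, v, v1):
    # Assumed form of the binary collision map T_omega: exchange the
    # component of the relative velocity along the unit impact direction.
    c = dot(omega, [a - b for a, b in zip(v1, v)])
    vp = [a + c * w for a, w in zip(v, omega)]
    v1p = [a - c * w for a, w in zip(v1, omega)]
    return vp, v1p

omega = (1.0, 0.0)                       # unit impact direction
v, v1 = (0.0, 2.0), (3.0, -1.0)
vp, v1p = T(omega, v, v1)

b_pre = dot(omega, [a - b for a, b in zip(v1, v)])     # b_2 before collision
b_post = dot(omega, [a - b for a, b in zip(v1p, vp)])  # b_2 after collision
```

In particular $b_2$ changes sign while momentum and energy are conserved, which is exactly what the pull-back argument in part \textit{(ii)} requires.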
For convenience, let us define the set $$\mathcal{F}_{m+1}=\left\{(i,j)\in\{1,...,m+1\}\times\{1,...,m+1\}:i<\min\{j,m\}\right\}.$$ \textbf{Proof of \it{(i)}:} Here we use notation from \eqref{pre-notation double}. We start by formulating the following claim, which will imply \eqref{pre-0-double}. \begin{lemma}\label{aux lemma pre-0} Under the assumptions of Proposition \ref{bad set double}, there is a subset $\mathcal{B}_m^{2,0,-}(\bar{Z}_m)\subseteq \mathbb{S}_1^{d-1}\times B_R^d$ such that for any $(\omega_1,v_{m+1})\in (\mathbb{S}_1^{d-1}\times B_R^d)^+(\bar{v}_m)\setminus\mathcal{B}_m^{2,0,-}(\bar{Z}_m)$, there holds: \begin{align} d_2\left(x_i\left(t\right),x_j\left(t\right)\right)&>\sqrt{2}\epsilon_3,\quad\forall t\geq 0,\quad\forall (i,j)\in\mathcal{F}_{m+1},\label{pre lemma i<m}\\ d_2\left(x_m\left(t\right),x_{m+1}\left(t\right)\right)&>\epsilon_2,\quad\forall t\geq 0.\label{pre lemma i=m} \end{align} \end{lemma} Notice that \eqref{pre lemma i<m}-\eqref{pre lemma i=m} trivially imply \eqref{pre-0-double}, since $\epsilon_2<<\epsilon_3$. \textit{Proof of Lemma \ref{aux lemma pre-0}:} \textit{Step 1: The proof of \eqref{pre lemma i<m}:} We distinguish the following cases: $\bullet$ $j\leq m$: Since $\bar{Z}_m\in G_m(\epsilon_0,0)$ and $j\leq m$, we have $|\bar{x}_i(t)-\bar{x}_j(t)|>\epsilon_0,$ for all $t\geq 0.$ Therefore, the triangle inequality implies that \begin{equation}\label{e/2 pre} \begin{aligned} |x_i(t)-x_j(t)|&=|x_i-x_j-t(\bar{v}_i-\bar{v}_j)|\geq |\bar{x}_i-\bar{x}_j-t(\bar{v}_i-\bar{v}_j)|-\alpha\geq\epsilon_0-\alpha>\frac{\epsilon_0}{2}>\sqrt{2}\epsilon_3, \end{aligned} \end{equation} since $\epsilon_3<<\alpha<<\epsilon_0$. $\bullet$ $j=m+1$: Since $(i,m+1)\in\mathcal{F}_{m+1}$, we have $i\leq m-1$.
Since $\bar{Z}_m\in G_m(\epsilon_0,0)$ and $X_m\in B_{\alpha/2}^{dm}(\bar{X}_m)$, we conclude \begin{align*} &|\bar{x}_i-\bar{x}_m|>\epsilon_0,\quad |x_i-\bar{x}_i|\leq\frac{\alpha}{2}<\alpha,\quad |x_{m+1}-\bar{x}_m|\leq |x_m-\bar{x}_m|+\epsilon_2|\omega_1|\leq\frac{\alpha}{2}+\epsilon_2<\alpha,\quad\text{since }\epsilon_2<<\alpha. \end{align*} Applying part $\textit{(i)}$ of Lemma \ref{adjuction of 1} for $\bar{y}_1=\bar{x}_i$, $\bar{y}_2=\bar{x}_m$, $y_1=x_i$, $y_2=x_{m+1}$, we may find a cylinder $K_\eta^{d,i}$ such that for any $v_{m+1}\in B_R^d\setminus K_\eta^{d,i}$, we have $|x_i(t)-x_{m+1}(t)|>\sqrt{2}\epsilon_3,$ for all $t\geq 0.$ Hence the inequality in \eqref{pre lemma i<m} holds for any $(\omega_1,v_{m+1})\in (\mathbb{S}_1^{d-1}\times B_R^d)^+(\bar{v}_m)\setminus V_{m+1}^i$, where \begin{equation}\label{V_m+1 i} V_{m+1}^i=\mathbb{S}_1^{d-1}\times K_\eta^{d,i}. \end{equation} We conclude that \eqref{pre lemma i<m} holds for any $(\omega_1,v_{m+1})\in (\mathbb{S}_1^{d-1}\times B_R^{d})\setminus\bigcup_{i=1}^{m-1}V_{m+1}^i.$ \textit{Step 2: The proof of \eqref{pre lemma i=m}:} We recall notation from \eqref{pre-notation double}. Consider $t\geq 0$ and $(\omega_1,v_{m+1})\in(\mathbb{S}_1^{d-1}\times B_R^{d})^+(\bar{v}_m)$. Using the fact that $b_2(\omega_1,v_{m+1}-\bar{v}_m)>0$, we obtain \begin{align} |x_{m}(t)-x_{m+1}(t)|^2=|\epsilon_2\omega_1-t(\bar{v}_m-v_{m+1})|^2\geq\epsilon_2^2|\omega_1|^2+2\epsilon_2 tb_2(\omega_1,v_{m+1}-\bar{v}_{m}) >\epsilon_2^2.\label{use of pre adjuction} \end{align} Therefore, \eqref{pre lemma i=m} holds for any $(\omega_1,v_{m+1})\in(\mathbb{S}_1^{d-1}\times B_R^{d})^+(\bar{v}_m)$. Defining \begin{equation}\label{B-pre-0} \mathcal{B}_{m}^{2,0,-}(\bar{Z}_m)=\bigcup_{i=1}^{m-1} V_{m+1}^i, \end{equation} the claim of Lemma \ref{aux lemma pre-0} follows. Now we go back to the proof of part \textit{(i)} of Proposition \ref{bad set double}.
We will find a set $\mathcal{B}_m^{2,\delta,-}(\bar{Z}_m)\subseteq \mathbb{S}_1^{d-1}\times B_R^{d}$ such that \eqref{pre-delta-double} holds for any $(\omega_1,v_{m+1})\in(\mathbb{S}_1^{d-1}\times B_R^{d})\setminus \mathcal{B}_m^{2,\delta,-}(\bar{Z}_m).$ Let us fix $i,j\in\{1,...,m+1\}$ with $i<j$. We distinguish the following cases: $\bullet$ $j\leq m$: We use the same argument as in \eqref{e/2 pre} to obtain $|x_i(t)-x_j(t)|>\frac{\epsilon_0}{2},$ for all $t\geq 0.$ $\bullet$ $(i,j)\in\mathcal{F}_{m+1}$, $j=m+1$: Since $(i,m+1)\in\mathcal{F}_{m+1}$, we have $i\leq m-1$. Applying a similar argument to the corresponding case in the proof of \eqref{pre lemma i<m}, using part \textit{(ii)} of Lemma \ref{adjuction of 1} instead, we obtain that the inequality $ |x_i(t)-x_{m+1}(t)|>\epsilon_0, $ for all $t\geq\delta$, holds for any $(\omega_1,v_{m+1})\in(\mathbb{S}_1^{d-1}\times B_R^d)\setminus V_{m+1}^i,$ where $V_{m+1}^i$ is given by \eqref{V_m+1 i}. Notice that the lower bound is in fact $\epsilon_0$. $\bullet$ $i=m$, $j=m+1$: The triangle inequality and the fact that $\epsilon_2<<\epsilon_0<<\eta\delta$ imply that for any $t\geq\delta$ and $(\omega_1,v_{m+1})\in\mathbb{S}_1^{d-1}\times B_R^d$ with $|v_{m+1}-\bar{v}_m|>\eta$, we have \begin{align*} |x_m(t)-x_{m+1}(t)|&=|\epsilon_2\omega_1-t(\bar{v}_m-v_{m+1})|\geq |\bar{v}_m-v_{m+1}|\delta-\epsilon_2>\eta\delta-\epsilon_2>\epsilon_0. \end{align*} Therefore, the inequality $ |x_m(t)-x_{m+1}(t)|>\epsilon_0,$ for all $t\geq\delta$, holds for any $(\omega_1,v_{m+1})\in(\mathbb{S}_1^{d-1}\times B_R^d)\setminus V_{m,m+1},$ where \begin{equation}\label{V_m,m+1} V_{m,m+1}=\mathbb{S}_1^{d-1}\times B_\eta^{d}(\bar{v}_m). \end{equation} Notice that the lower bound is $\epsilon_0$ again.
Defining \begin{equation}\label{B-pre-delta} \mathcal{B}_m^{2,\delta,-}(\bar{Z}_m)=\mathcal{B}_m^{2,0,-}(\bar{Z}_m)\cup V_{m,m+1}, \end{equation} we conclude that \eqref{pre-delta-double} holds for any $(\omega_1,v_{m+1})\in(\mathbb{S}_1^{d-1}\times B_R^d)\setminus\mathcal{B}_m^{2,\delta,-}(\bar{Z}_m).$ Let us note that the only case which prevents us from having $Z_{m+1}\in G_{m+1}(\epsilon_0,\delta)$ is the case $1\leq i<j\leq m$, where we obtain a lower bound of $\epsilon_0/2$. In all other cases we can obtain the lower bound $\epsilon_0$. More precisely, for $(\omega_1,v_{m+1})\in(\mathbb{S}_1^{d-1}\times B_R^{d})\setminus \mathcal{B}_{m}^{2,\delta,-}(\bar{Z}_m)$, the inequality $ |\bar{x}_i(t)-\bar{x}_j(t)|>\epsilon_0, $ for all $t\geq\delta$, holds for all $1\leq i<j\leq m+1$ except the case $1\leq i<j\leq m$. However, in this case, for any $1\leq i<j\leq m$, we have $ |\bar{x}_i(t)-\bar{x}_j(t)|>\epsilon_0, $ for all $t\geq 0$, since $\bar{Z}_m\in G_m(\epsilon_0,0)$. Therefore, \eqref{pre-delta-double-bar} holds for $(\omega_1,v_{m+1})\in(\mathbb{S}_1^{d-1}\times B_R^d)\setminus\mathcal{B}_m^{2,\delta,-}(\bar{Z}_m)$. We conclude that the set \begin{equation}\label{B-double-pre} \mathcal{B}_{m}^{2,-}(\bar{Z}_m)=(\mathbb{S}_{1}^{d-1}\times B_R^{d})^+(\bar{v}_m)\cap\left(\mathcal{B}_{m}^{2,0,-}\left(\bar{Z}_m\right)\cup\mathcal{B}_{m}^{2,\delta,-}\left(\bar{Z}_m\right)\right), \end{equation} is the set we need for the precollisional case. \textbf{Proof of \it{(ii)}:} Here we use the notation from \eqref{post-notation double}. The proof follows the steps of the precollisional case, but we replace the velocities $(\bar{v}_m,v_{m+1})$ by the transformed velocities $(\bar{v}_{m}',v_{m+1}')$ and then pull back. It is worth mentioning that the $m$-th particle needs special treatment since its velocity is transformed to $\bar{v}_{m}'$.
Following similar arguments to the precollisional case, we conclude that the appropriate set for the postcollisional case is given by \begin{equation}\label{B-double-post} \mathcal{B}_m^{2,+}(\bar{Z}_m):=(\mathbb{S}_1^{d-1}\times B_R^d)^+(\bar{v}_m)\cap\left[ V_{m,m+1}\cup\bigcup_{i=1}^{m-1}\left(V_{m}^{i'}\cup V_{m+1}^{i'}\right)\right], \end{equation} where \begin{align} V_{m}^{i'}&=\left\{(\omega_1,v_{m+1})\in\mathbb{S}_1^{d-1}\times B_R^d: \bar{v}_{m}'\in K_\eta^{d,i}\right\},\label{V_m i'}\\ V_{m+1}^{i'}&=\left\{(\omega_1,v_{m+1})\in\mathbb{S}_1^{d-1}\times B_R^d: v_{m+1}'\in K_\eta^{d,i}\right\},\label{V_m+1 i'}\\ V_{m,m+1}&=\mathbb{S}_1^{d-1}\times B_\eta^{d}(\bar{v}_m). \label{V_m,m+1-post} \end{align} The set \begin{equation}\label{B double} \mathcal{B}_m^2(\bar{Z}_m)=\mathcal{B}_m^{2,-}(\bar{Z}_m)\cup\mathcal{B}_m^{2,+}(\bar{Z}_m), \end{equation} is the one we need to conclude the proof. \end{proof} \subsubsection{Measure estimate for binary adjunction} We now estimate the measure of the pathological set $\mathcal{B}_\ell^2(\bar{Z}_m)$ appearing in Proposition \ref{bad set double}. To control postcollisional configurations, we will strongly rely on the binary transition map introduced in the Appendix (see Proposition \ref{transition prop}). \begin{proposition}\label{bad set double measure} Consider parameters $\alpha,\epsilon_0,R,\eta,\delta$ as in \eqref{choice of parameters} and $\epsilon_2<<\epsilon_3<<\alpha$. Let $m\in\mathbb{N}$, $\bar{Z}_m\in G_m(\epsilon_0,0)$, $\ell\in\{1,...,m\}$ and $\mathcal{B}_{\ell}^2(\bar{Z}_m)$ the set given in the statement of Proposition \ref{bad set double}. Then the following measure estimate holds: \begin{equation*} \left|\mathcal{B}_{\ell}^2(\bar{Z}_m)\right|\lesssim mR^{d}\eta^{\frac{d-1}{2d+2}}, \end{equation*} where $|\cdot|$ denotes the product measure on $\mathbb{S}_1^{d-1}\times B_R^{d}$. \end{proposition} \begin{proof} Without loss of generality, we may assume that $\ell=m$. By \eqref{B double} it suffices to estimate the measure of $\mathcal{B}_m^{2,-}(\bar{Z}_m)$ and $\mathcal{B}_m^{2,+}(\bar{Z}_m)$.
\textbf{Estimate of $\mathcal{B}_m^{2,-}(\bar{Z}_m)$:} Recalling \eqref{post notation double}, \eqref{B-double-pre}, \eqref{B-pre-delta}, \eqref{B-pre-0}, we have \begin{equation}\label{B- double measure representation} \mathcal{B}_m^{2,-}(\bar{Z}_m)=(\mathbb{S}_1^{d-1}\times B_R^d)^+(\bar{v}_m)\cap\left[ V_{m,m+1}\cup\bigcup_{i=1}^{m-1} V_{m+1}^{i}\right], \end{equation} where $V_{m,m+1}$ is given by \eqref{V_m,m+1} and $V_{m+1}^i$ are given by \eqref{V_m+1 i}. By subadditivity, it suffices to estimate the measure of each term in \eqref{B- double measure representation}. $\bullet$ Estimate of the term corresponding to $V_{m,m+1}$: By \eqref{V_m,m+1}, we have $V_{m,m+1}=\mathbb{S}_1^{d-1}\times B_\eta^d(\bar{v}_m),$ therefore \begin{align} |(\mathbb{S}_1^{d-1}\times B_R^d)^+(\bar{v}_m)\cap V_{m,m+1}|&\leq |\mathbb{S}_1^{d-1}\times (B_R^d\cap B_\eta^d(\bar{v}_m))|\leq|\mathbb{S}_1^{d-1}|_{\mathbb{S}_1^{d-1}} |B_\eta^d(\bar{v}_m)|_d\lesssim \eta^d.\label{measure V_m,m+1} \end{align} $\bullet$ Estimate of the term corresponding to $V_{m+1}^i$: By \eqref{V_m+1 i}, we have $V_{m+1}^i=\mathbb{S}_1^{d-1}\times K_\eta^{d,i},$ therefore by Corollary \ref{spherical estimate}, we obtain \begin{align} |(\mathbb{S}_1^{d-1}\times B_R^d)^+(\bar{v}_m)\cap V_{m+1}^i|&\leq |\mathbb{S}_1^{d-1}\times (B_R^d\cap K_\eta^{d,i})|\simeq|\mathbb{S}_{1}^{d-1}|_{\mathbb{S}_1^{d-1}}|B_R^d\cap K_{\eta}^{d,i}|_d\lesssim R^d\eta^{\frac{d-1}{2}}.\label{measure V_m+1,i} \end{align} Using \eqref{B- double measure representation}-\eqref{measure V_m+1,i}, subadditivity, and the fact that $\eta<<1$, $m\geq 1$, we obtain \begin{equation}\label{B- measure} |\mathcal{B}_m^{2,-}(\bar{Z}_m)|\lesssim m R^d\eta^{\frac{d-1}{2}}.
\end{equation} \textbf{Estimate of $\mathcal{B}_m^{2,+}(\bar{Z}_m)$:} Recalling \eqref{B-double-post}, we have \begin{equation}\label{B+ double measure representation} \mathcal{B}_m^{2,+}(\bar{Z}_m)=(\mathbb{S}_1^{d-1}\times B_R^d)^+(\bar{v}_m)\cap\left[ V_{m,m+1}\cup\bigcup_{i=1}^{m-1}\left(V_{m}^{i'}\cup V_{m+1}^{i'}\right)\right], \end{equation} where $V_{m,m+1}$ is given by \eqref{V_m,m+1} and $V_{m}^{i'}$, $V_{m+1}^{i'}$ are given by \eqref{V_m i'}-\eqref{V_m+1 i'}. By subadditivity, it suffices to estimate the measure of each term in \eqref{B+ double measure representation}. The term corresponding to $V_{m,m+1}$ has already been estimated in \eqref{measure V_m,m+1}. We have \begin{equation}\label{measure V_m,m+1'} |(\mathbb{S}_1^{d-1}\times B_R^d)^+(\bar{v}_m)\cap V_{m,m+1}|\lesssim\eta^d. \end{equation} To estimate the measure of the remaining terms, we will strongly rely on the properties of the binary transition map defined in Proposition \ref{transition prop}. We first introduce some notation. Given $0<r\leq 2R$, let us define the $r$-sphere centered at $\bar{v}_m$: \begin{equation*}S_r^{d-1}(\bar{v}_{m})=\left\{v_{m+1}\in\mathbb{R}^{d}:|\bar{v}_{m}-v_{m+1}|=r\right\}.\end{equation*} Also, given $v_{m+1}\in\mathbb{R}^d$, we define the set \begin{equation}\label{S-shell} \begin{aligned} \mathcal{S}_{\bar{v}_{m},v_{m+1}}^+&=\left\{\omega_1\in\mathbb{S}_1^{d-1}:b_2(\omega_1,v_{m+1}-\bar{v}_m)>0\right\}=\left\{\omega_1\in\mathbb{S}_1^{d-1}:(\omega_1,v_{m+1})\in(\mathbb{S}_1^{d-1}\times B_R^{d})^+(\bar{v}_m)\right\}. \end{aligned} \end{equation} Since $\bar{v}_m\in B_R^d$, the triangle inequality implies $ B_R^{d}\subseteq B_{2R}^d(\bar{v}_{m}).
$ Under this notation, Fubini's Theorem, the co-area formula, and relations \eqref{B+ double measure representation}-\eqref{measure V_m,m+1'} yield \begin{equation}\label{integral expression for post} \begin{aligned} |\mathcal{B}_{m}^{2,+}(\bar{Z}_m)|&=\int_{(\mathbb{S}_1^{d-1}\times B_R^{d})^+(\bar{v}_m)}\mathds{1}_{\mathcal{B}_{m}^{2,+}(\bar{Z}_m)}\,d\omega_1\,dv_{m+1}\\ &=\int_{B_R^d}\int_{\mathcal{S}_{\bar{v}_{m},v_{m+1}}^+}\mathds{1}_{\mathcal{B}_{m}^{2,+}(\bar{Z}_m)}\,d\omega_1\,dv_{m+1}\\ &\lesssim\eta^d+\int_0^{2R}\int_{S_r^{d-1}(\bar{v}_{m})}\int_{\mathcal{S}_{\bar{v}_{m},v_{m+1}}^+}\mathds{1}_{\bigcup_{i=1}^{m-1}(V_{m}^{i'}\cup V_{m+1}^{i'})}(\omega_1)\,d\omega_1\,dv_{m+1}\,dr. \end{aligned} \end{equation} Let us estimate the integral: $$\int_{\mathcal{S}_{\bar{v}_{m},v_{m+1}}^+}\mathds{1}_{\bigcup_{i=1}^{m-1}(V_{m}^{i'}\cup V_{m+1}^{i'})}(\omega_1)\,d\omega_1,$$ for fixed $0<r\leq 2R$ and $v_{m+1}\in S_r^{d-1}(\bar{v}_{m})$. We introduce a parameter $0<\beta<<1$, which will be chosen later in terms of $\eta$, and decompose $\mathcal{S}^+_{\bar{v}_{m},v_{m+1}}$ as follows: \begin{equation}\label{S-decomposition} \mathcal{S}^+_{\bar{v}_{m},v_{m+1}}=\mathcal{S}_{\bar{v}_{m},v_{m+1}}^{1,+}\cup \mathcal{S}_{\bar{v}_{m},v_{m+1}}^{2,+}, \end{equation} where \begin{equation}\label{S1} \mathcal{S}_{\bar{v}_{m},v_{m+1}}^{1,+}=\left\{\omega_1\in\mathcal{S}_{\bar{v}_{m},v_{m+1}}^+:b_2(\omega_1,v_{m+1}-\bar{v}_m)>\beta |v_{m+1}-\bar{v}_m|\right\}, \end{equation} and \begin{equation}\label{S2} \begin{aligned} \mathcal{S}_{\bar{v}_{m},v_{m+1}}^{2,+}&=\left\{\omega_1\in\mathcal{S}_{\bar{v}_{m},v_{m+1}}^+:b_2(\omega_1,v_{m+1}-\bar{v}_m)\leq\beta |v_{m+1}-\bar{v}_m|\right\}. \end{aligned} \end{equation} Notice that $\mathcal{S}_{\bar{v}_{m},v_{m+1}}^{2,+}$ is the union of two unit $(d-1)$-spherical caps of angle $\pi/2-\arccos\beta$.
Thus, integrating in spherical coordinates, we may estimate its measure as follows: \begin{equation*}\int_{\mathbb{S}_1^{d-1}}\mathds{1}_{\mathcal{S}_{\bar{v}_{m},v_{m+1}}^{2,+}}(\omega_1) \,d\omega_1\lesssim \int_{\arccos\beta}^{\pi/2} \sin^{d-2}(\theta)\,d\theta\leq \frac{\pi}{2}-\arccos\beta=\arcsin\beta. \end{equation*} Therefore \begin{equation}\label{estimate on S2} \int_{\mathcal{S}_{\bar{v}_{m},v_{m+1}}^{2,+}}\mathds{1}_{\bigcup_{i=1}^{m-1}(V_{m}^{i'}\cup V_{m+1}^{i'})}(\omega_1)\,d\omega_1\lesssim\arcsin\beta. \end{equation} We now wish to estimate \begin{equation}\label{wanted estimate on S1} \int_{\mathcal{S}_{\bar{v}_{m},v_{m+1}}^{1,+}}\mathds{1}_{\bigcup_{i=1}^{m-1}(V_{m}^{i'}\cup V_{m+1}^{i'})}(\omega_1)\,d\omega_1. \end{equation} We will use the binary transition map $\mathcal{J}_{\bar{v}_m,v_{m+1}}:\mathcal{S}_{\bar{v}_m,v_{m+1}}^+\to\mathbb{S}_1^{d-1}$, which is given by \begin{equation}\label{transition text} \nu_1:=\mathcal{J}_{\bar{v}_m,v_{m+1}}(\omega_1)=r^{-1}(\bar{v}_m'-v_{m+1}'), \end{equation} to change variables in the above integral. For details on the transition map, see Proposition \ref{transition prop} in the Appendix. By Proposition \ref{transition prop}, for $\omega_1\in\mathcal{S}_{\bar{v}_{m},v_{m+1}}^+$, the Jacobian of the transition map satisfies \begin{equation*} \jac(\mathcal{J}_{\bar{v}_{m},v_{m+1}})(\omega_1)\simeq r^{-d}b_2^d(\omega_1,v_{m+1}-\bar{v}_m)>0. \end{equation*} Therefore, for $\omega_1\in\mathcal{S}_{\bar{v}_m,v_{m+1}}^{1,+}$, we have \begin{equation}\label{estimate on inverse jacobian} \jac^{-1}(\mathcal{J}_{\bar{v}_{m},v_{m+1}})(\omega_1)\simeq r^{d}b_2^{-d}(\omega_1,v_{m+1}-\bar{v}_m)\leq r^{d}\beta^{-d} |v_{m+1}-\bar{v}_m|^{-d}\lesssim\beta^{-d}, \end{equation} since $|v_{m+1}-\bar{v}_m|=r.$ For convenience, we express $\bar{v}_m'$, $v_{m+1}'$ in terms of the precollisional velocities $\bar{v}_m$, $v_{m+1}$ and $\nu_1$ given by \eqref{transition text}.
As a consequence of \eqref{binary formulas with}, we obtain \begin{align} \bar{v}_m'&=\frac{\bar{v}_m+v_{m+1}}{2}+\frac{r}{2}\nu_1,\label{v_m' with respect to v}\\ v_{m+1}'&=\frac{\bar{v}_m+v_{m+1}}{2}-\frac{r}{2}\nu_1.\label{v_m+1' with respect to v} \end{align} We are now in a position to estimate the integral in \eqref{wanted estimate on S1}. We first estimate the term corresponding to $V_m^{i'}$: Recalling \eqref{V_m i'}, we have $ V_{m}^{i'}=\left\{(\omega_1,v_{m+1})\in\mathbb{S}_1^{d-1}\times B_R^d: \bar{v}_{m}'\in K_\eta^{d,i}\right\}. $ By \eqref{v_m' with respect to v}, \begin{equation}\label{iff cylinder m} \bar{v}_m'\in K_\eta^{d,i}\Leftrightarrow\nu_1=\mathcal{J}_{\bar{v}_m,v_{m+1}}(\omega_1)\in \widetilde{K}_{2\eta/r}^{d,i}, \end{equation} where $\widetilde{K}_{2\eta/r}^{d,i}$ is a cylinder of radius $2\eta/r$. Therefore, we obtain \begin{align} \int_{\mathcal{S}_{\bar{v}_{m},v_{m+1}}^{1,+}}\mathds{1}_{V_{m}^{i'}}(\omega_1)\,d\omega_1&= \int_{\mathcal{S}_{\bar{v}_{m},v_{m+1}}^{1,+}}\mathds{1}_{\bar{v}_m'\in K_{\eta}^{d,i}}(\omega_1)\,d\omega_1\nonumber\\ &=\int_{\mathcal{S}_{\bar{v}_{m},v_{m+1}}^{1,+}}(\mathds{1}_{\widetilde{K}_{2\eta/r}^{d,i}}\circ\mathcal{J}_{\bar{v}_m,v_{m+1}})(\omega_1)\,d\omega_1\label{V_s^* iff inside}\\ &\lesssim\beta^{-d}\int_{\mathbb{S}_1^{d-1}}\mathds{1}_{\widetilde{K}_{2\eta/r}^{d,i}}(\nu)\,d\nu\label{jac and change V_s^*}\\ &\lesssim \beta^{-d}\min\left\{1,\left(\frac{\eta}{r}\right)^{\frac{d-1}{2}}\right\},\label{cylinder estimate} \end{align} where to obtain \eqref{V_s^* iff inside} we use \eqref{iff cylinder m}, to obtain \eqref{jac and change V_s^*} we use part \textit{(iv)} of Proposition \ref{transition prop} and estimate \eqref{estimate on inverse jacobian}, and to obtain \eqref{cylinder estimate} we use Lemma \ref{Ryan's lemma}.
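Let us briefly justify the scaling in \eqref{iff cylinder m}. By \eqref{v_m' with respect to v}, the map $\nu_1\mapsto\bar{v}_m'$ is the composition of the dilation $\nu_1\mapsto\frac{r}{2}\nu_1$ with a translation by $\frac{\bar{v}_m+v_{m+1}}{2}$, hence it rescales all distances by the factor $r/2$. Writing $L$ for the axis of the cylinder $K_\eta^{d,i}$, we thus have \begin{equation*} \operatorname{dist}(\bar{v}_m',L)<\eta\quad\Longleftrightarrow\quad\operatorname{dist}\left(\nu_1,\frac{2}{r}\left(L-\frac{\bar{v}_m+v_{m+1}}{2}\right)\right)<\frac{2\eta}{r}, \end{equation*} and the set on the right-hand side is precisely a cylinder of radius $2\eta/r$ around the affinely transported axis.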
Hence, for fixed $v_{m+1}\in S_r^{d-1}(\bar{v}_m)$, we have \begin{equation}\label{estimate on S1-Vm^*} \int_{\mathcal{S}_{\bar{v}_{m},v_{m+1}}^{1,+}}\mathds{1}_{V_{m}^{i'}}(\omega_1)\,d\omega_1\lesssim \beta^{-d}\min\left\{1,\left(\frac{\eta}{r}\right)^{\frac{d-1}{2}}\right\}. \end{equation} Recalling also $V_{m+1}^{i'}$ from \eqref{V_m+1 i'}, we obtain in an analogous way the estimate: \begin{equation}\label{estimate on S1-Vm+1^*} \int_{\mathcal{S}_{\bar{v}_{m},v_{m+1}}^{1,+}}\mathds{1}_{V_{m+1}^{i'}}(\omega_1)\,d\omega_1\lesssim \beta^{-d}\min\left\{1,\left(\frac{\eta}{r}\right)^{\frac{d-1}{2}}\right\}. \end{equation} Combining \eqref{estimate on S1-Vm^*}-\eqref{estimate on S1-Vm+1^*} and adding for $i=1,...,m-1$, we obtain \begin{equation}\label{on S_1} \int_{\mathcal{S}_{\bar{v}_{m},v_{m+1}}^{1,+}}\mathds{1}_{\bigcup_{i=1}^{m-1}(V_m^{i'}\cup V_{m+1}^{i'})}(\omega_1)\,d\omega_1\lesssim m \beta^{-d}\min\left\{1,\left(\frac{\eta}{r}\right)^{\frac{d-1}{2}}\right\}. \end{equation} Therefore, recalling \eqref{S-decomposition} and using estimates \eqref{estimate on S2}, \eqref{on S_1}, we obtain the estimate: \begin{equation}\label{estimate on post integral} \int_{\mathcal{S}_{\bar{v}_{m},v_{m+1}}^+}\mathds{1}_{\bigcup_{i=1}^{m-1}(V_m^{i'}\cup V_{m+1}^{i'})}(\omega_1)\,d\omega_1\lesssim \arcsin\beta+m\beta^{-d}\min\left\{1,\left(\frac{\eta}{r}\right)^{\frac{d-1}{2}}\right\}.
\end{equation} Hence, \eqref{integral expression for post} yields \begin{equation}\label{post collisional estimate with beta} \begin{aligned} |\mathcal{B}_{m}^{2,+}(\bar{Z}_m)|&\lesssim \eta^d+\int_0^{2R}\int_{S_r^{d-1}(\bar{v}_{m})}\arcsin\beta+m\beta^{-d}\min\left\{1,\left(\frac{\eta}{r}\right)^{\frac{d-1}{2}}\right\}\,dv_{m+1}\,dr\\ &\lesssim \eta^d+\int_0^{2R}r^{d-1}\left(\arcsin\beta+m\beta^{-d}\min\left\{1,\left(\frac{\eta}{r}\right)^{\frac{d-1}{2}}\right\}\right)\,dr\\ &\lesssim \eta^{d}+mR^{d}\left(\arcsin\beta+\beta^{-d}\eta^{\frac{d-1}{2}}\right)\\ &\lesssim mR^{d}\left(\beta+\beta^{-d}\eta^{\frac{d-1}{2}}\right), \end{aligned} \end{equation} after using an estimate similar to \eqref{estimate with min} and the fact that $\eta<<1$, $m\geq 1$, $\beta<<1$. Choosing $\beta=\eta^{\frac{d-1}{2d+2}}$, which balances the two terms by solving $\beta=\beta^{-d}\eta^{\frac{d-1}{2}}$, we obtain \begin{equation}\label{measure B+} |\mathcal{B}_{m}^{2,+}(\bar{Z}_m)|\lesssim m R^{d}\eta^{\frac{d-1}{2d+2}}. \end{equation} Combining \eqref{B double}, \eqref{B- measure}, \eqref{measure B+}, and the fact that $\eta<<1$, we obtain the required estimate. \end{proof} \subsection{Stability under ternary adjunction}\label{subsec:ternary} Now we prove Proposition \ref{bad set triple} and Proposition \ref{bad set triple measure}, which constitute the inductive step and the corresponding measure estimate of our proof for the ternary adjunction of particles. To derive Proposition \ref{bad set triple} and Proposition \ref{bad set triple measure}, in addition to results from \cite{ternary}, we develop new algebraic and geometric techniques, thanks to which we can treat the case where the newly formed ternary collisional configuration runs to a binary collision under time evolution.
\subsubsection{Ternary adjunction} For convenience, given $v\in\mathbb{R}^d$, let us denote \begin{equation}\label{pre-post notation triple} \left(\mathbb{S}_1^{2d-1}\times B_R^{2d}\right)^+(v)=\big\{(\omega_1,\omega_2,v_{1},v_{2})\in\mathbb{S}_{1}^{2d-1}\times B_R^{2d}:b_3(\omega_1,\omega_2,v_{1}-v,v_{2}-v)>0\big\}, \end{equation} where $b_3$ is the ternary cross-section given in \eqref{cross}. Recall from \eqref{back-wards flow} that given $m\in\mathbb{N}$ and $Z_m=(X_m,V_m)\in\mathbb{R}^{2dm}$, we denote the backwards in time free flow as $Z_m(t)=(X_m-tV_m,V_m),\quad t\geq 0.$ \begin{proposition}\label{bad set tilde triple} Consider parameters $\alpha,\epsilon_0,R,\eta,\delta$ as in \eqref{choice of parameters} and $\epsilon_3<<\alpha$. Let $m\in\mathbb{N}$, $\bar{Z}_m=(\bar{X}_m,\bar{V}_m)\in G_m(\epsilon_0,0)$, $\ell\in\{1,...,m\}$, and $X_m\in B_{\alpha/2}^{dm}(\bar{X}_m)$. Let us denote \begin{equation*} \mathcal{F}_{m+2}^\ell=\left\{(i,j)\in\left\{1,...,m+2\right\} \times \left\{1,...,m+2\right\} : i\neq\ell,\text{ } i\leq\min\left\{j,m\right\}\right\}. \end{equation*} Then there is a subset $\widetilde{\mathcal{B}}_{\ell}^3(\bar{Z}_m)\subseteq (\mathbb{S}_1^{2d-1}\times B_R^{2d})^+(\bar{v}_\ell)$ such that: \begin{enumerate}[(i)] \item For any $(\omega_1,\omega_2,v_{m+1},v_{m+2})\in (\mathbb{S}_1^{2d-1}\times B_R^{2d})^+(\bar{v}_{\ell})\setminus\widetilde{\mathcal{B}}_{\ell}^3(\bar{Z}_m)$, one has: \begin{equation}\label{pre-tilde} \begin{aligned} d_2(x_i(t),x_j(t))&>\sqrt{2}\epsilon_3,\quad\forall (i,j)\in\mathcal{F}_{m+2}^\ell,\quad\forall t\geq 0,\\ d_3(x_\ell(t);x_{m+1}(t),x_{m+2}(t))&>\sqrt{2}\epsilon_3,\quad\forall t\geq 0,\\ Z_{m+2}&\in G_{m+2}(\epsilon_0/2,\delta),\\ \bar{Z}_{m+2}&\in G_{m+2}(\epsilon_0,\delta),
\end{aligned} \end{equation} where \begin{equation*} \begin{aligned} &Z_{m+2}=(x_1,...,x_\ell,...,x_m,x_{m+1},x_{m+2},\bar{v}_1,...,\bar{v}_\ell,...,\bar{v}_m,v_{m+1},v_{m+2}),\\ &x_{m+i}=x_{\ell}+\sqrt{2}\epsilon_3\omega_i,\quad\forall i\in\{1,2\},\\ &\bar{Z}_{m+2}=(\bar{x}_1,...,\bar{x}_\ell,...,\bar{x}_m,\bar{x}_{\ell},\bar{x}_{\ell},\bar{v}_1,...,\bar{v}_\ell,...,\bar{v}_m,v_{m+1},v_{m+2}),\\ \end{aligned} \end{equation*} \item For any $(\omega_1,\omega_2,v_{m+1},v_{m+2})\in (\mathbb{S}_1^{2d-1}\times B_R^{2d})^+(\bar{v}_{\ell})\setminus\widetilde{\mathcal{B}}_{\ell}^3(\bar{Z}_m)$, one has: \begin{equation}\label{post-tilde} \begin{aligned} d_2(x_i(t),x_j(t))&>\sqrt{2}\epsilon_3,\quad\forall (i,j)\in\mathcal{F}_{m+2}^\ell,\quad\forall t\geq 0,\\ d_3(x_\ell(t);x_{m+1}(t),x_{m+2}(t))&>\sqrt{2}\epsilon_3,\quad\forall t\geq 0,\\ Z_{m+2}^*&\in G_{m+2}(\epsilon_0/2,\delta),\\ \bar{Z}_{m+2}^*&\in G_{m+2}(\epsilon_0,\delta), \end{aligned} \end{equation} where \begin{equation*} \begin{aligned} &Z_{m+2}^*=(x_1,...,x_\ell,...,x_m,x_{m+1},x_{m+2},\bar{v}_1,...,\bar{v}_\ell^*,...,\bar{v}_m,v_{m+1}^*,v_{m+2}^*),\\ &x_{m+i}=x_{\ell}+\sqrt{2}\epsilon_3\omega_i,\quad\forall i\in\{1,2\},\\ &\bar{Z}_{m+2}^*=(\bar{x}_1,...,\bar{x}_\ell,...,\bar{x}_m,\bar{x}_{\ell},\bar{x}_{\ell},\bar{v}_1,...,\bar{v}_\ell^*,...,\bar{v}_m,v_{m+1}^*,v_{m+2}^*),\\ &(\bar{v}_{\ell}^*,v_{m+1}^*,v_{m+2}^*)=T_{\omega_1,\omega_2}(\bar{v}_{\ell},v_{m+1},v_{m+2}). \end{aligned} \end{equation*} \end{enumerate} There also holds the measure estimate: \begin{equation}\label{measure estimate tilde} |\widetilde{\mathcal{B}}_\ell^3(\bar{Z}_m)|\lesssim mR^{2d}\eta^{\frac{d-1}{4d+2}}, \end{equation} where $|\cdot|$ denotes the product measure on $\mathbb{S}_1^{2d-1}\times B_R^{2d}$. \end{proposition} \begin{proof} This proposition follows from the statement and the proof of Proposition 9.2 and the statement of Proposition 9.4 in \cite{ternary}.
\end{proof} We rely on Proposition \ref{bad set tilde triple} to derive Proposition \ref{bad set triple} and Proposition \ref{bad set triple measure}. Recall the notation from \eqref{interior phase space} \begin{align*}\mathring{\mathcal{D}}_{m+2,\epsilon_2,\epsilon_3}=\big\{Z_{m+2}=(X_{m+2},V_{m+2})\in\mathbb{R}^{2d(m+2)}:\text{ } d_2(x_i,x_j)>\epsilon_2,\quad\forall (i,j)\in\mathcal{I}_{m+2}^2,&\\ \text{and } d_3(x_i;x_j,x_k)>\sqrt{2}\epsilon_3,\quad\forall(i,j,k)\in\mathcal{I}_{m+2}^3\big\}&, \end{align*} where $\mathcal{I}_{m+2}^2,\mathcal{I}_{m+2}^3$ are given by \eqref{index 2}-\eqref{index 3} respectively. \begin{proposition}\label{bad set triple} Consider parameters $\alpha,\epsilon_0,R,\eta,\delta$ as in \eqref{choice of parameters} and $\epsilon_2<<\eta^2\epsilon_3<<\alpha$. Let $m\in\mathbb{N}$, $\bar{Z}_m=(\bar{X}_m,\bar{V}_m)\in G_m(\epsilon_0,0)$, $\ell\in\{1,...,m\}$ and $X_m\in B_{\alpha/2}^{dm}(\bar{X}_m)$. Then there is a subset $\mathcal{B}_{\ell}^3(\bar{Z}_m)\subseteq (\mathbb{S}_1^{2d-1}\times B_R^{2d})^+(\bar{v}_\ell)$ such that: \begin{enumerate}[(i)] \item For any $(\omega_1,\omega_2,v_{m+1},v_{m+2})\in (\mathbb{S}_1^{2d-1}\times B_R^{2d})^+(\bar{v}_{\ell})\setminus\mathcal{B}_{\ell}^3(\bar{Z}_m)$, one has: \begin{align} Z_{m+2}(t)&\in\mathring{\mathcal{D}}_{m+2,\epsilon_2,\epsilon_3},\quad\forall t\geq 0,\label{in phase pre}\\ Z_{m+2}&\in G_{m+2}(\epsilon_0/2,\delta),\label{epsilon/2 pre}\\ \bar{Z}_{m+2}&\in G_{m+2}(\epsilon_0,\delta),\label{epsilon pre} \end{align} where \begin{equation}\label{pre-collisional notation ternary} \begin{aligned} &Z_{m+2}=(x_1,...,x_\ell,...,x_m,x_{m+1},x_{m+2},\bar{v}_1,...,\bar{v}_\ell,...,\bar{v}_m,v_{m+1},v_{m+2}),\\ &x_{m+i}=x_{\ell}-\sqrt{2}\epsilon_3\omega_i,\quad\forall i\in\{1,2\},\\ &\bar{Z}_{m+2}=(\bar{x}_1,...,\bar{x}_\ell,...,\bar{x}_m,\bar{x}_{\ell},\bar{x}_{\ell},\bar{v}_1,...,\bar{v}_\ell,...,\bar{v}_m,v_{m+1},v_{m+2}),\\ \end{aligned} \end{equation} \item For any
$(\omega_1,\omega_2,v_{m+1},v_{m+2})\in (\mathbb{S}_1^{2d-1}\times B_R^{2d})^+(\bar{v}_{\ell})\setminus\mathcal{B}_{\ell}^3(\bar{Z}_m)$, one has: \begin{align} Z_{m+2}^*(t)&\in\mathring{\mathcal{D}}_{m+2,\epsilon_2,\epsilon_3},\quad\forall t\geq 0,\label{in phase post}\\ Z_{m+2}^*&\in G_{m+2}(\epsilon_0/2,\delta),\label{epsilon/2 post}\\ \bar{Z}_{m+2}^*&\in G_{m+2}(\epsilon_0,\delta),\label{epsilon post} \end{align} where \begin{equation}\label{post-collisional notation ternary} \begin{aligned} &Z_{m+2}^*=(x_1,...,x_\ell,...,x_m,x_{m+1},x_{m+2},\bar{v}_1,...,\bar{v}_\ell^*,...,\bar{v}_m,v_{m+1}^*,v_{m+2}^*),\\ &x_{m+i}=x_{\ell}+\sqrt{2}\epsilon_3\omega_i,\quad\forall i\in\{1,2\},\\ &\bar{Z}_{m+2}^*=(\bar{x}_1,...,\bar{x}_\ell,...,\bar{x}_m,\bar{x}_{\ell},\bar{x}_{\ell},\bar{v}_1,...,\bar{v}_\ell^*,...,\bar{v}_m,v_{m+1}^*,v_{m+2}^*),\\ &(\bar{v}_{\ell}^*,v_{m+1}^*,v_{m+2}^*)=T_{\omega_1,\omega_2}(\bar{v}_{\ell},v_{m+1},v_{m+2}). \end{aligned} \end{equation} \end{enumerate} \end{proposition} \begin{proof} By symmetry we may assume that $\ell=m$. Recall the set $\widetilde{\mathcal{B}}_m^3(\bar{Z}_m)$ from Proposition \ref{bad set tilde triple} satisfying \eqref{pre-tilde}-\eqref{post-tilde}. We will construct a set $\mathcal{A}_m(\bar{Z}_m)\subseteq(\mathbb{S}_1^{2d-1}\times B_R^{2d})^+(\bar{v}_m)$, such that for any $(\omega_1,\omega_2,v_{m+1},v_{m+2})\in (\mathbb{S}_1^{2d-1}\times B_R^{2d})^+(\bar{v}_m)\setminus \mathcal{A}_m(\bar{Z}_m)$: \begin{itemize} \item Using notation from \eqref{pre-collisional notation ternary} for the precollisional case, we have \begin{equation}\label{pre-collisional claim} |x_i(t)-x_j(t)|>\epsilon_2,\quad\forall t\geq 0,\quad \forall i,j\in\left\{m,m+1,m+2\right\} \text{ with } i<j.
\end{equation} \item Using notation from \eqref{post-collisional notation ternary} for the postcollisional case, we have \begin{equation}\label{post-collisional claim} |x_i(t)-x_j(t)|>\epsilon_2,\quad\forall t\geq 0,\quad \forall i,j\in\left\{m,m+1,m+2\right\} \text{ with } i<j . \end{equation} \end{itemize} Then thanks to Proposition \ref{bad set tilde triple} and \eqref{pre-collisional claim}-\eqref{post-collisional claim}, the set $$\mathcal{B}_m^3(\bar{Z}_m):=\widetilde{\mathcal{B}}_m^3(\bar{Z}_m)\cup \mathcal{A}_m(\bar{Z}_m),$$ will satisfy \eqref{in phase pre}-\eqref{epsilon pre}, \eqref{in phase post}-\eqref{epsilon post}. Let us introduce the following notation: \begin{equation}\label{gamma def} \gamma:=\frac{\epsilon_2}{\epsilon_3}<<\eta^2,\quad\text{since }\epsilon_2<<\eta^2\epsilon_3, \text{ by assumption,} \end{equation} and \begin{equation}\label{gamma'} \gamma'=\left(1-\frac{\gamma}{2}\right)^{1/2}<1. \end{equation} \textbf{Construction of the set satisfying \eqref{pre-collisional claim}}: Here we use notation from \eqref{pre-collisional notation ternary}. We distinguish the following cases: $\bullet$ Case $(i,j)=(m,m+1)$: Consider $t\geq 0$. We have \begin{equation}\label{expr for m,m+1} \begin{aligned} |x_i(t)-x_j(t)|^2&=|x_{m}(t)-x_{m+1}(t)|^2\\ &=|\sqrt{2}\epsilon_3\omega_1+(v_{m+1}-\bar{v}_m)t|^2\\ &=2\epsilon_3^2|\omega_1|^2+2\sqrt{2}\epsilon_3\langle\omega_1,v_{m+1}-\bar{v}_m\rangle t+|v_{m+1}-\bar{v}_m|^2t^2. \end{aligned} \end{equation} We define the sets \begin{align} \Omega_{1}&=\{(\omega_1,\omega_2,v_{m+1},v_{m+2})\in\mathbb{S}_1^{2d-1}\times B_R^{2d}:|\omega_1|\leq\sqrt{\gamma}\}\label{omega_1},\\ A_{m,m+1}&=\{(\omega_1,\omega_2,v_{m+1},v_{m+2})\in\mathbb{S}_1^{2d-1}\times B_R^{2d}:\left|\langle\omega_1,v_{m+1}-\bar{v}_m\rangle\right|\geq\gamma'|\omega_1||v_{m+1}-\bar{v}_m|\}\label{A_m,m+1}. 
\end{align} Consider the second degree polynomial in $t$: \begin{equation} P(t)=(2-\gamma)\epsilon_3^2|\omega_1|^2+2\sqrt{2}\epsilon_3\langle\omega_1,v_{m+1}-\bar{v}_m\rangle t+|v_{m+1}-\bar{v}_m|^2t^2 \end{equation} Let $(\omega_1,\omega_2,v_{m+1},v_{m+2})\in(\mathbb{S}_1^{2d-1}\times B_R^{2d})\setminus(\Omega_1\cup A_{m,m+1})$. The polynomial $P$ has discriminant \begin{align*} \Delta&=8\epsilon_3^2|\langle\omega_1,v_{m+1}-\bar{v}_m\rangle|^2-4(2-\gamma)\epsilon_3^2|\omega_1|^2|v_{m+1}-\bar{v}_m|^2\\ &=8\epsilon_3^2|\langle\omega_1,v_{m+1}-\bar{v}_m\rangle|^2-8\gamma'^2\epsilon_3^2|\omega_1|^2|v_{m+1}-\bar{v}_m|^2\\ &=8\epsilon_3^2\left(|\langle\omega_1,v_{m+1}-\bar{v}_m\rangle|^2-\gamma'^2|\omega_1|^2|v_{m+1}-\bar{v}_m|^2\right)\\ &<0 \end{align*} since $(\omega_1,\omega_2,v_{m+1},v_{m+2})\notin A_{m,m+1}$. Since $\gamma<<1$, we obtain $P(t)>0$, for all $ t\geq 0,$ or in other words \begin{equation}\label{before omega} 2\epsilon_3^2|\omega_1|^2+2\sqrt{2}\epsilon_3\langle\omega_1,v_{m+1}-\bar{v}_m\rangle t+|v_{m+1}-\bar{v}_m|^2t^2>\gamma\epsilon_3^2|\omega_1|^2. \end{equation} Since $(\omega_1,\omega_2,v_{m+1},v_{m+2})\notin \Omega_1$, expressions \eqref{expr for m,m+1}, \eqref{before omega} yield \begin{equation} |x_m(t)-x_{m+1}(t)|^2>\gamma\epsilon_3^2|\omega_1|^2>\gamma^2\epsilon_3^2=\epsilon_2^2. 
\end{equation} Therefore for any $(\omega_1,\omega_2,v_{m+1},v_{m+2})\in(\mathbb{S}_1^{2d-1}\times B_R^{2d})\setminus(\Omega_1\cup A_{m,m+1})$, we have $$|x_m(t)-x_{m+1}(t)|>\epsilon_2,\quad\forall t\geq 0.$$ $\bullet$ Case $(i,j)=(m,m+2)$: We follow a similar argument using the sets \begin{align} \Omega_{2}&=\{(\omega_1,\omega_2,v_{m+1},v_{m+2})\in\mathbb{S}_1^{2d-1}\times B_R^{2d}:|\omega_2|\leq\sqrt{\gamma}\}\label{omega_2},\\ A_{m,m+2}&=\{(\omega_1,\omega_2,v_{m+1},v_{m+2})\in\mathbb{S}_1^{2d-1}\times B_R^{2d}:\left|\langle\omega_2,v_{m+2}-\bar{v}_m\rangle\right|\geq\gamma'|\omega_2||v_{m+2}-\bar{v}_m|\}\label{A_m,m+2}, \end{align} to conclude that for all $(\omega_1,\omega_2,v_{m+1},v_{m+2})\in(\mathbb{S}_1^{2d-1}\times B_R^{2d})\setminus (\Omega_2\cup A_{m,m+2})$, we have $$|x_{m+2}(t)-x_{m}(t)|>\epsilon_2,\quad\forall t\geq 0. $$ $\bullet$ Case $(i,j)=(m+1,m+2)$: We follow a similar argument using the sets \begin{align} \Omega_{1,2}&=\{(\omega_1,\omega_2,v_{m+1},v_{m+2})\in\mathbb{S}_1^{2d-1}\times B_R^{2d}:|\omega_1-\omega_2|\leq\sqrt{\gamma}\},\label{omega_1,2}\\ B_{m+1,m+2}&=\{(\omega_1,\omega_2,v_{m+1},v_{m+2})\in\mathbb{S}_1^{2d-1}\times B_R^{2d}:\nonumber\\ &\hspace{0.8cm}\left|\langle\omega_1-\omega_2,v_{m+1}-v_{m+2}\rangle\right|\geq\gamma'|\omega_1-\omega_2||v_{m+1}-v_{m+2}|\}\label{A_m+1,m+2}. \end{align} to conclude that for all $(\omega_1,\omega_2,v_{m+1},v_{m+2})\in(\mathbb{S}_1^{2d-1}\times B_R^{2d})\setminus (\Omega_{1,2}\cup B_{m+1,m+2})$, we have $$|x_{m+1}(t)-x_{m+2}(t)|>\epsilon_2,\quad\forall t\geq 0.$$ Defining \begin{equation}\label{A-} \mathcal{A}_m^-(\bar{Z}_m)=\Omega_1\cup\Omega_2\cup\Omega_{1,2}\cup A_{m,m+1}\cup A_{m,m+2}\cup B_{m+1,m+2}, \end{equation} we obtain that \eqref{pre-collisional claim} holds for $(\omega_1,\omega_2,v_{m+1},v_{m+2})\in(\mathbb{S}_1^{2d-1}\times B_R^{2d})\setminus\mathcal{A}_m^-(\bar{Z}_m)$. 
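Let us also note that \eqref{before omega} (from the case $(i,j)=(m,m+1)$ above) can be verified directly by completing the square: for $u:=v_{m+1}-\bar{v}_m\neq 0$, the minimum over $t\in\mathbb{R}$ of the quadratic in \eqref{expr for m,m+1} equals \begin{equation*} 2\epsilon_3^2\left(|\omega_1|^2-\frac{\langle\omega_1,u\rangle^2}{|u|^2}\right)>2\epsilon_3^2|\omega_1|^2\left(1-\gamma'^2\right)=\gamma\epsilon_3^2|\omega_1|^2>\gamma^2\epsilon_3^2=\epsilon_2^2, \end{equation*} where the first inequality uses $(\omega_1,\omega_2,v_{m+1},v_{m+2})\notin A_{m,m+1}$, the middle equality uses \eqref{gamma'}, and the last inequality uses $(\omega_1,\omega_2,v_{m+1},v_{m+2})\notin\Omega_1$ together with \eqref{gamma def}; the degenerate case $u=0$ is immediate, since then the quadratic is constantly equal to $2\epsilon_3^2|\omega_1|^2$.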
\textbf{Construction of the set satisfying \eqref{post-collisional claim}}: Here we use notation from \eqref{post-collisional notation ternary}. We distinguish the following cases: $\bullet$ Case $(i,j)=(m,m+1)$: We follow a similar argument to the precollisional case, using the set $\Omega_1$, defined in \eqref{omega_1}, and the set \begin{align} A_{m,m+1}^*&=\{(\omega_1,\omega_2,v_{m+1},v_{m+2})\in\mathbb{S}_1^{2d-1}\times B_R^{2d}:\left|\langle\omega_1,v_{m+1}^*-\bar{v}_{m}^*\rangle\right|\geq\gamma'|\omega_1||v_{m+1}^*-\bar{v}_{m}^*|\}\label{A_m,m+1*}, \end{align} to conclude that for all $(\omega_1,\omega_2,v_{m+1},v_{m+2})\in(\mathbb{S}_1^{2d-1}\times B_R^{2d})\setminus (\Omega_1\cup A_{m,m+1}^*)$, we have $$|x_{m+1}(t)-x_{m}(t)|>\epsilon_2,\quad\forall t\geq 0.$$ $\bullet$ Case $(i,j)=(m,m+2)$: We follow a similar argument to the precollisional case, using the set $\Omega_2$, defined in \eqref{omega_2}, and the set \begin{align} A_{m,m+2}^*&=\{(\omega_1,\omega_2,v_{m+1},v_{m+2})\in\mathbb{S}_1^{2d-1}\times B_R^{2d}:\left|\langle\omega_2,v_{m+2}^*-\bar{v}_{m}^*\rangle\right|\geq\gamma'|\omega_2||v_{m+2}^*-\bar{v}_{m}^*|\}\label{A_m,m+2*}, \end{align} to conclude that for all $(\omega_1,\omega_2,v_{m+1},v_{m+2})\in(\mathbb{S}_1^{2d-1}\times B_R^{2d})\setminus (\Omega_2\cup A_{m,m+2}^*)$, we have $$|x_{m+2}(t)-x_{m}(t)|>\epsilon_2,\quad\forall t\geq 0.$$ $\bullet$ Case $(i,j)=(m+1,m+2)$: We follow a similar argument to the precollisional case, using the set $\Omega_{1,2}$, defined in \eqref{omega_1,2}, and the set \begin{align} B_{m+1,m+2}^*&=\{(\omega_1,\omega_2,v_{m+1},v_{m+2})\in\mathbb{S}_1^{2d-1}\times B_R^{2d}:\nonumber\\ &\hspace{0.8cm}\left|\langle\omega_1-\omega_2,v_{m+1}^*-v_{m+2}^*\rangle\right|\geq\gamma'|\omega_1-\omega_2||v_{m+1}^*-v_{m+2}^*|\},\label{A_m+1,m+2*} \end{align} to conclude that for all $(\omega_1,\omega_2,v_{m+1},v_{m+2})\in(\mathbb{S}_1^{2d-1}\times B_R^{2d})\setminus (\Omega_{1,2}\cup B_{m+1,m+2}^*)$, we have
$$|x_{m+1}(t)-x_{m+2}(t)|>\epsilon_2,\quad\forall t\geq 0.$$ Defining \begin{equation}\label{A+} \mathcal{A}_m^+(\bar{Z}_m)=\Omega_1\cup\Omega_2\cup\Omega_{1,2}\cup A_{m,m+1}^*\cup A_{m,m+2}^*\cup B_{m+1,m+2}^*, \end{equation} we obtain that \eqref{post-collisional claim} holds for $(\omega_1,\omega_2,v_{m+1},v_{m+2})\in(\mathbb{S}_1^{2d-1}\times B_R^{2d})\setminus\mathcal{A}_m^+(\bar{Z}_m)$. Defining \begin{equation}\label{A} \mathcal{A}_m(\bar{Z}_m)=(\mathbb{S}_1^{2d-1}\times B_R^{2d})^+(\bar{v}_m)\cap\left(\mathcal{A}_m^-(\bar{Z}_m)\cup \mathcal{A}_m^+(\bar{Z}_m)\right), \end{equation} we obtain that \eqref{pre-collisional claim}-\eqref{post-collisional claim} hold for any $(\omega_1,\omega_2,v_{m+1},v_{m+2})\in(\mathbb{S}_1^{2d-1}\times B_R^{2d})^+(\bar{v}_m)\setminus\mathcal{A}_m(\bar{Z}_m)$. The set \begin{equation} \mathcal{B}_m^3(\bar{Z}_m)=\widetilde{\mathcal{B}}_m^3(\bar{Z}_m)\cup\mathcal{A}_m(\bar{Z}_m), \end{equation} satisfies \eqref{in phase pre}-\eqref{epsilon pre} and \eqref{in phase post}-\eqref{epsilon post}, thus it is the set needed to conclude the proof. \end{proof} \subsubsection{Measure estimate for ternary adjunction} We now provide the corresponding measure estimate for the set $\mathcal{B}_\ell^3(\bar{Z}_m)$ appearing in Proposition \ref{bad set triple}. To estimate the measure of this set, we will rely heavily on the results of Section \ref{sec:geometric}. \begin{proposition}\label{bad set triple measure} Consider parameters $\alpha,\epsilon_0,R,\eta,\delta$ as in \eqref{choice of parameters} and $\epsilon_2<<\eta^2\epsilon_3<<\alpha$. Let $m\in\mathbb{N}$, $\bar{Z}_m\in G_m(\epsilon_0,0)$, $\ell\in\{1,...,m\}$ and $\mathcal{B}_{\ell}^3(\bar{Z}_m)$ be the set appearing in the statement of Proposition \ref{bad set triple}.
Then the following measure estimate holds: \begin{equation*} \left|\mathcal{B}_{\ell}^3(\bar{Z}_m)\right|\lesssim mR^{2d}\eta^{\frac{d-1}{4d+2}}, \end{equation*} where $|\cdot|$ denotes the product measure on $\mathbb{S}_1^{2d-1}\times B_R^{2d}$. \end{proposition} \begin{proof} By symmetry, we may assume $\ell=m$. Recall that \begin{equation}\label{B} \mathcal{B}_m^3(\bar{Z}_m)=\widetilde{\mathcal{B}}_m^3(\bar{Z}_m)\cup\mathcal{A}_m(\bar{Z}_m), \end{equation} where $\widetilde{\mathcal{B}}_m^3(\bar{Z}_m)$ is given by Proposition \ref{bad set tilde triple} and $\mathcal{A}_m(\bar{Z}_m)$ is given by \eqref{A}. Estimate \eqref{measure estimate tilde} yields \begin{equation}\label{estimate of wide tilde} |\widetilde{\mathcal{B}}_{m}^3(\bar{Z}_m)|\lesssim mR^{2d}\eta^{\frac{d-1}{4d+2}}, \end{equation} so it suffices to estimate the measure of $\mathcal{A}_m(\bar{Z}_m)$. By \eqref{A}, it suffices to estimate the measure of $\mathcal{A}_m^-(\bar{Z}_m)$ and $\mathcal{A}_m^+(\bar{Z}_m)$ which are given by \eqref{A-}, \eqref{A+} respectively. Let us recall the notation from \eqref{gamma def}-\eqref{gamma'}: $$\gamma=\frac{\epsilon_2}{\epsilon_3}<<\eta^2,\quad\gamma'=\sqrt{1-\frac{\gamma}{2}}.$$ \textbf{Estimate of $\mathcal{A}_m^-(\bar{Z}_m)$:} Recall from \eqref{A-} that \begin{equation}\label{A- measure representation} \mathcal{A}_m^-(\bar{Z}_m)=\Omega_1\cup\Omega_2\cup\Omega_{1,2}\cup A_{m,m+1}\cup A_{m,m+2}\cup B_{m+1,m+2}, \end{equation} where $\Omega_1,A_{m,m+1}$ are given by \eqref{omega_1}-\eqref{A_m,m+1}, $\Omega_2,A_{m,m+2}$ by \eqref{omega_2}-\eqref{A_m,m+2} and $\Omega_{1,2}, B_{m+1,m+2}$ are given by \eqref{omega_1,2}-\eqref{A_m+1,m+2}. $\bullet$ Estimate for $\Omega_1, \Omega_2$: Without loss of generality, it suffices to estimate the measure of $\Omega_1$. 
Recalling notation from \eqref{cube parameters 1}, Fubini's Theorem and Lemma \ref{estimate of cubes} yield \begin{equation}\label{measure omega_1} |\Omega_1|=\int_{B_R^{2d}}\int_{\mathbb{S}_1^{2d-1}}\mathds{1}_{M_1(\sqrt{\gamma})}\,d\omega_1\,d\omega_2\,dv_{m+1}\,dv_{m+2}\lesssim R^{2d}\gamma^{d/2}. \end{equation} A symmetric argument yields \begin{equation}\label{measure omega_2} |\Omega_2|\lesssim R^{2d}\gamma^{d/2}. \end{equation} $\bullet$ Estimate for $\Omega_{1,2}$: Recalling notation from \eqref{strip}, \eqref{omega_1,2} yields $$\Omega_{1,2}=(\mathbb{S}_1^{2d-1}\cap W_{\sqrt{\gamma}}^{2d})\times B_R^{2d}.$$ Therefore, Fubini's Theorem and Lemma \ref{strip lemma} imply \begin{equation}\label{measure omega_1,2} |\Omega_{1,2}|= \int_{B_R^{2d}}\int_{\mathbb{S}_1^{2d-1}}\mathds{1}_{W_{\sqrt{\gamma}}^{2d}}\,d\omega_1\,d\omega_2\,dv_{m+1}\,dv_{m+2}\lesssim R^{2d}\gamma^{\frac{d-1}{4}}. \end{equation} $\bullet$ Estimate for $A_{m,m+1}$: Recalling notation from \eqref{shell parameters}, the set $A_{m,m+1}$, which was defined in \eqref{A_m,m+1}, can be written as \begin{equation*} A_{m,m+1}=\left\{(\omega_1,\omega_2,v_{m+1},v_{m+2})\in\mathbb{S}_1^{2d-1}\times B_R^{2d}:\omega_1\in S(\gamma',v_{m+1}-\bar{v}_m)\right\}. \end{equation*} Therefore, the representation \eqref{representation of sphere for fixed omega_1} of the unit $(2d-1)$-sphere and Lemma \ref{shell estimate} yield \begin{align} |A_{m,m+1}|&\leq\int_{B_R^{2d}}\int_{B_1^d}\int_{\mathbb{S}_{\sqrt{1-|\omega_2|^2}}^{d-1}}\mathds{1}_{S(\gamma',v_{m+1}-\bar{v}_m)}\,d\omega_1\,d\omega_2\,dv_{m+1}\,dv_{m+2}\nonumber\\ &\lesssim R^{2d}\arccos\gamma'\nonumber\\ &=R^{2d}\arccos\sqrt{1-\frac{\gamma}{2}}.\label{measure A_m,m+1} \end{align} $\bullet$ Estimate for $A_{m,m+2}$: We follow a similar argument as in the previous case to obtain \begin{equation}\label{measure A_m,m+2} |A_{m,m+2}|\lesssim R^{2d}\arccos\sqrt{1-\frac{\gamma}{2}}.
\end{equation} $\bullet$ Estimate for $B_{m+1,m+2}$: Recalling notation from \eqref{difference shell parameters}, \eqref{A_m+1,m+2} yields \begin{equation*}B_{m+1,m+2}=\{(\omega_1,\omega_2,v_{m+1},v_{m+2})\in\mathbb{S}_1^{2d-1}\times B_R^{2d}:(\omega_1,\omega_2)\in N(\gamma',v_{m+1}-v_{m+2})\}. \end{equation*} Therefore, using Lemma \ref{estimate of difference in shell}, we obtain \begin{align} |B_{m+1,m+2}|&=\int_{B_R^{2d}}\int_{\mathbb{S}_1^{2d-1}}\mathds{1}_{N(\gamma',v_{m+1}-v_{m+2})}(\omega_1,\omega_2)\,d\omega_1\,d\omega_2\,dv_{m+1}\,dv_{m+2}\nonumber\\ &\lesssim R^{2d}\arccos\gamma'\nonumber\\ &=R^{2d}\arccos\sqrt{1-\frac{\gamma}{2}}.\label{measure for A_m+1,m+2} \end{align} Using \eqref{A- measure representation} and estimates \eqref{measure omega_1}-\eqref{measure for A_m+1,m+2}, we obtain \begin{equation}\label{measure A-} |\mathcal{A}_m^-(\bar{Z}_m)|\lesssim R^{2d}\left(\gamma^{d/2}+\gamma^{\frac{d-1}{4}}+\arccos\sqrt{1-\frac{\gamma}{2}}\right). \end{equation} \textbf{Estimate of $\mathcal{A}_m^+(\bar{Z}_m)$:} Recall from \eqref{A+} that \begin{equation}\label{A+ measure representation} \mathcal{A}_m^+(\bar{Z}_m)=\Omega_1\cup\Omega_2\cup\Omega_{1,2}\cup A_{m,m+1}^*\cup A_{m,m+2}^*\cup B_{m+1,m+2}^*, \end{equation} where $\Omega_1$, $\Omega_2$, $\Omega_{1,2}$, $A_{m,m+1}^*$, $A_{m,m+2}^*$, $B_{m+1,m+2}^*$ are given by \eqref{omega_1}, \eqref{omega_2}, \eqref{omega_1,2}, \eqref{A_m,m+1*}-\eqref{A_m+1,m+2*} respectively. We already have estimates for $\Omega_1$, $\Omega_2$, $\Omega_{1,2}$ from \eqref{measure omega_1}-\eqref{measure omega_1,2}, hence it suffices to derive estimates for $A_{m,m+1}^*$, $A_{m,m+2}^*$, $B_{m+1,m+2}^*$. For the rest of the proof we consider a parameter $0<\beta<<1$ which will be chosen later in terms of $\eta$, see \eqref{choice of beta in terms of eta}. 
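As an aside, we quantify the size of the recurring quantity $\arccos\sqrt{1-\frac{\gamma}{2}}$: setting $\theta=\arccos\sqrt{1-\frac{\gamma}{2}}\in[0,\frac{\pi}{2}]$, we have $\sin\theta=\sqrt{\frac{\gamma}{2}}$, and since $\arcsin$ is convex on $[0,1]$, hence lies below the chord $x\mapsto\frac{\pi}{2}x$, we obtain \begin{equation*} \arccos\sqrt{1-\frac{\gamma}{2}}=\arcsin\sqrt{\frac{\gamma}{2}}\leq\frac{\pi}{2}\sqrt{\frac{\gamma}{2}}\lesssim\sqrt{\gamma}. \end{equation*} Although this bound is not invoked verbatim below, it indicates why the terms $\beta^{-1}\arccos\sqrt{1-\frac{\gamma}{2}}$ appearing in the forthcoming estimates are small for a suitable choice of $\beta$.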
$\bullet$ Estimate for $A_{m,m+1}^*$: Recall from \eqref{A_m,m+1*} the set \begin{equation}\label{A_m,m+1 measure representation} A_{m,m+1}^*=\left\{(\omega_1,\omega_2,v_{m+1},v_{m+2})\in\mathbb{S}_1^{2d-1}\times B_R^{2d}:|\langle\omega_1,v_{m+1}^*-\bar{v}_m^*\rangle|\geq\gamma'|\omega_1||v_{m+1}^*-\bar{v}_m^*|\right\}. \end{equation} But for any $(\omega_1,\omega_2,v_{m+1},v_{m+2})\in\mathbb{S}_1^{2d-1}\times B_R^{2d}$, the ternary collisional law \eqref{formulas ternary} implies $$v_{m+1}^*-\bar{v}_m^*=v_{m+1}-\bar{v}_m-2c_{\omega_1,\omega_2,\bar{v}_m,v_{m+1},v_{m+2}}\omega_1-c_{\omega_1,\omega_2,\bar{v}_m,v_{m+1},v_{m+2}}\omega_2,$$ where \begin{equation}\label{c A_m,m+1*} c_{\omega_1,\omega_2,\bar{v}_m,v_{m+1},v_{m+2}}=\frac{\langle\omega_1,v_{m+1}-\bar{v}_m\rangle+\langle\omega_2,v_{m+2}-\bar{v}_m\rangle}{1+\langle\omega_1,\omega_2\rangle}. \end{equation} For convenience, we denote $$c:=c_{\omega_1,\omega_2,\bar{v}_m,v_{m+1},v_{m+2}}.$$ Therefore, by \eqref{A_m,m+1 measure representation}, we may write \begin{align*} A_{m,m+1}^*&=\{(\omega_1,\omega_2,v_{m+1},v_{m+2})\in\mathbb{S}_1^{2d-1}\times B_R^{2d}:\\ &\hspace{0.8cm}|\langle\omega_1,v_{m+1}-\bar{v}_m-2c\omega_1-c\omega_2\rangle|\geq\gamma'|\omega_1||v_{m+1}-\bar{v}_m-2c\omega_1-c\omega_2|\}. \end{align*} By Fubini's Theorem, we have \begin{equation}\label{total integral A_m,m+1*} |A_{m,m+1}^*|\leq\int_{\mathbb{S}_1^{2d-1}\times B_R^d}\int_{B_R^d}\mathds{1}_{V_{\omega_1,\omega_2,v_{m+2}}^{m,m+1}}(v_{m+1})\,dv_{m+1}\,d\omega_1\,d\omega_2\,dv_{m+2}, \end{equation} where, given $(\omega_1,\omega_2,v_{m+2})\in\mathbb{S}_1^{2d-1}\times B_R^d$, we write \begin{equation}\label{V_m,m+1 proof} V_{\omega_1,\omega_2,v_{m+2}}^{m,m+1}=\left\{v_{m+1}\in B_R^d:(\omega_1,\omega_2,v_{m+1},v_{m+2})\in A_{m,m+1}^*\right\}. \end{equation} Recall from \eqref{annulus I1} the set \begin{equation}\label{bad annulus 1} I_{1}=\left\{(\omega_1,\omega_2)\in\mathbb{R}^{2d}:\left|1-2\left|\omega_1\right|^2\right|\leq 2\beta\right\}.
\end{equation} Using \eqref{total integral A_m,m+1*}, we obtain \begin{equation}\label{decomposed integral A_m,m+1*} |A_{m,m+1}^*|\leq \widetilde{I}_1+\widetilde{I}_1', \end{equation} where \begin{align} \widetilde{I}_1&=\int_{(\mathbb{S}_1^{2d-1}\cap I_1)\times B_R^d}\int_{B_R^d}\mathds{1}_{V_{\omega_1,\omega_2,v_{m+2}}^{m,m+1}}(v_{m+1})\,dv_{m+1}\,d\omega_1\,d\omega_2\,dv_{m+2},\label{I_1, A_m,m+1}\\ \widetilde{I}_1'&=\int_{(\mathbb{S}_1^{2d-1}\setminus I_1)\times B_R^d}\int_{B_R^d}\mathds{1}_{V_{\omega_1,\omega_2,v_{m+2}}^{m,m+1}}(v_{m+1})\,dv_{m+1}\,d\omega_1\,d\omega_2\,dv_{m+2}.\label{I, A_m,m+1*} \end{align} We treat each of the terms in \eqref{decomposed integral A_m,m+1*} separately. \textit{Estimate for} $\widetilde{I}_1$: By \eqref{I_1, A_m,m+1}, Fubini's Theorem and Lemma \ref{estimate on annulus I_1}, we obtain \begin{equation}\label{estimate for I_1} \widetilde{I}_1\lesssim R^{2d}\int_{\mathbb{S}_1^{2d-1}}\mathds{1}_{I_1}\,d\omega_1\,d\omega_2\lesssim R^{2d}\beta. \end{equation} \textit{Estimate for} $\widetilde{I}_1'$: Let us fix $(\omega_1,\omega_2,v_{m+2})\in (\mathbb{S}_1^{2d-1}\setminus I_1)\times B_R^d$. We define the smooth map $F_{\omega_1,\omega_2,v_{m+2}}^1:B_R^d\to\mathbb{R}^d$ by \begin{equation}\label{definition F1} F_{\omega_1,\omega_2,v_{m+2}}^1(v_{m+1}):=v_{m+1}^*-\bar{v}_{m}^*=v_{m+1}-\bar{v}_m-2c\omega_1-c\omega_2, \end{equation} where $c$ is given by \eqref{c A_m,m+1*}. We now show that we may change variables under $F_{\omega_1,\omega_2,v_{m+2}}^1$, as long as $(\omega_1,\omega_2,v_{m+2})\in(\mathbb{S}_1^{2d-1}\setminus I_1)\times B_R^d$, i.e. that $F_{\omega_1,\omega_2,v_{m+2}}^1$ is injective with non-vanishing Jacobian; in fact, we will see that the Jacobian is bounded from below by $\beta$. We first prove this lower bound.
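For the reader's convenience, we record here the rank-one determinant identity on which the computation below rests (cf. Lemma \ref{linear algebra lemma} in the Appendix); this is the standard matrix determinant lemma: \begin{equation*} \det\left(I_d+uv^T\right)=1+\langle v,u\rangle,\quad u,v\in\mathbb{R}^d. \end{equation*}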
Differentiating with respect to $v_{m+1}$, we obtain $$\frac{\partial F_{\omega_1,\omega_2,v_{m+2}}^1 }{\partial v_{m+1}}=I_d+(-2\omega_1-\omega_2)\nabla_{v_{m+1}}^Tc.$$ Recalling \eqref{c A_m,m+1*}, we have $$\nabla_{v_{m+1}}^Tc=\frac{1}{1+\langle\omega_1,\omega_2\rangle}\omega_1^T.$$ Using Lemma \ref{linear algebra lemma} from the Appendix, we get \begin{align*} \jac F_{\omega_1,\omega_2,v_{m+2}}^1(v_{m+1})&=\det\left( I_d+\frac{1}{1+\langle\omega_1,\omega_2\rangle}(-2\omega_1-\omega_2)\omega_1^T\right)\\ &=1+\frac{-2|\omega_1|^2-\langle\omega_1,\omega_2\rangle}{1+\langle\omega_1,\omega_2\rangle}\\ &=\frac{1-2|\omega_1|^2}{1+\langle\omega_1,\omega_2\rangle}. \end{align*} Since $(\omega_1,\omega_2)\notin I_1$, we have $\left|1-2\left|\omega_1\right|^2\right|>2\beta$, hence \begin{equation} \left|\jac F_{\omega_1,\omega_2,v_{m+2}}^1(v_{m+1})\right|=\frac{\left|1-2\left|\omega_1\right|^2\right|}{1+\langle\omega_1,\omega_2\rangle}>\frac{2\beta}{1+\langle\omega_1,\omega_2\rangle}\geq\frac{4\beta}{3}>\beta, \end{equation} since $\displaystyle\frac{1}{2}\leq 1+\langle\omega_1,\omega_2\rangle\leq\displaystyle\frac{3}{2}$, by \eqref{bound on inverse quotient}. Thus \begin{equation}\label{bound on inverse Jacobian A_m,m+1*} \left|\jac F_{\omega_1,\omega_2,v_{m+2}}^1(v_{m+1})\right|^{-1}<\beta^{-1},\quad\forall v_{m+1}\in B_R^d. \end{equation} We now show that $F_{\omega_1,\omega_2,v_{m+2}}^1$ is injective. For this purpose consider $v_{m+1},\xi_{m+1}\in B_R^d$ such that \begin{align} F_{\omega_1,\omega_2,v_{m+2}}^1(v_{m+1})&=F_{\omega_1,\omega_2,v_{m+2}}^1(\xi_{m+1})\nonumber\\ \Leftrightarrow v_{m+1}-\xi_{m+1}&=\frac{\langle v_{m+1}-\xi_{m+1},\omega_1\rangle}{1+\langle\omega_1,\omega_2\rangle}(2\omega_1+\omega_2),\label{one-one} \end{align} thanks to \eqref{c A_m,m+1*}. 
Therefore, there is $\lambda\in\mathbb{R}$ such that \begin{equation}\label{diff v in terms of omega} v_{m+1}-\xi_{m+1}=\lambda(2\omega_1+\omega_2), \end{equation} so replacing $v_{m+1}-\xi_{m+1}$ in \eqref{one-one} with the right hand side of \eqref{diff v in terms of omega}, we obtain $$ \lambda(1-2|\omega_1|^2)=0, $$ which yields $\lambda=0$, since we have assumed $(\omega_1,\omega_2)\notin I_1$. Therefore $v_{m+1}=\xi_{m+1}$, thus $F_{\omega_1,\omega_2,v_{m+2}}^1$ is injective. Since $(\omega_1,\omega_2,v_{m+2})\in\mathbb{S}_1^{2d-1}\times B_R^d$ and $\bar{v}_m\in B_R^d$, the Cauchy-Schwarz inequality yields that, for any $v_{m+1}\in B_R^d$, we have \begin{align*}|F_{\omega_1,\omega_2,v_{m+2}}^1(v_{m+1})|&\leq |v_{m+1}|+|\bar{v}_m|+\frac{|\omega_1|(|v_{m+1}|+|\bar{v}_m|)+|\omega_2|(|\bar{v}_m|+|v_{m+2}|)}{1+\langle\omega_1,\omega_2\rangle}(2|\omega_1|+|\omega_2|)\leq 26R, \end{align*} since $\frac{1}{2}\leq 1+\langle\omega_1,\omega_2\rangle\leq\frac{3}{2}$, by \eqref{bound for b relative to c}, and $(\omega_1,\omega_2,v_{m+2})\in\mathbb{S}_1^{2d-1}\times B_R^d$. Therefore \begin{equation}\label{inclusion 1 A_m,m+1*} F_{\omega_1,\omega_2,v_{m+2}}^1[B_R^d]\subseteq B_{26R}^d. \end{equation} Additionally, recalling \eqref{V_m,m+1 proof}, \eqref{A_m,m+1 measure representation} and \eqref{definition F1}, we have $$V_{\omega_1,\omega_2,v_{m+2}}^{m,m+1}=\{v_{m+1}\in B_R^d: \left|\langle\omega_1,F_{\omega_1,\omega_2,v_{m+2}}^1(v_{m+1})\rangle\right|\geq\gamma'|\omega_1||F_{\omega_1,\omega_2,v_{m+2}}^1(v_{m+1})|\},$$ thus \begin{equation}\label{V in terms of U} v_{m+1}\in V_{\omega_1,\omega_2,v_{m+2}}^{m,m+1}\Leftrightarrow F_{\omega_1,\omega_2,v_{m+2}}^1(v_{m+1})\in U_{\omega_1}, \end{equation} where \begin{equation}\label{U, A_m,m+1*} U_{\omega_1}=\left\{\nu\in\mathbb{R}^d:\left|\langle\omega_1,\nu\rangle\right|\geq\gamma'|\omega_1||\nu|\right\}.
\end{equation} Hence \begin{equation}\label{inclusion 2 A_m,m+1*} \mathds{1}_{V_{\omega_1,\omega_2,v_{m+2}}^{m,m+1}}(v_{m+1})=\mathds{1}_{U_{\omega_1}}(F_{\omega_1,\omega_2,v_{m+2}}^1(v_{m+1})),\quad\forall v_{m+1}\in B_R^d. \end{equation} Therefore, performing the substitution $\nu:=F_{\omega_1,\omega_2,v_{m+2}}^1(v_{m+1})$, and using \eqref{bound on inverse Jacobian A_m,m+1*}, we obtain \begin{equation}\label{bound on velocity integral} \int_{B_R^d}\mathds{1}_{V_{\omega_1,\omega_2,v_{m+2}}^{m,m+1}}(v_{m+1})\,dv_{m+1} =\int_{B_R^d}\mathds{1}_{U_{\omega_1}}(F_{\omega_1,\omega_2,v_{m+2}}^1(v_{m+1}))\,dv_{m+1}\leq \beta^{-1}\int_{B_{26R}^d}\mathds{1}_{U_{\omega_1}}(\nu)\,d\nu. \end{equation} Recalling notation from \eqref{shell parameters} and \eqref{U, A_m,m+1*}, we have \begin{equation}\label{equality of chars U,A, A_m,m+1*} \mathds{1}_{U_{\omega_1}}(\nu)=\mathds{1}_{S(\gamma',\nu)}(\omega_1),\quad\forall \omega_1\in B_1^d,\quad \forall\nu\in B_{26R}^d. \end{equation} Therefore, using \eqref{I, A_m,m+1*}, \eqref{bound on velocity integral}, Fubini's Theorem and \eqref{equality of chars U,A, A_m,m+1*}, we obtain \begin{align} \widetilde{I}_1'&\leq\beta^{-1}\int_{(\mathbb{S}_1^{2d-1}\setminus I_1)\times B_R^d}\int_{B_{26R}^d}\mathds{1}_{U_{\omega_1}}(\nu)\,d\nu\,d\omega_1\,d\omega_2\,dv_{m+2}\nonumber\\ &\leq\beta^{-1}\int_{B_{26R}^d\times B_R^d}\int_{B_1^d}\int_{\mathbb{S}_{\sqrt{1-|\omega_2|^2}}^{d-1}}\mathds{1}_{S(\gamma',\nu)}(\omega_1)\,d\omega_1\,d\omega_2\,d\nu\,dv_{m+2}\nonumber\\ &\lesssim R^{2d}\beta^{-1}\arccos\gamma'\label{use of shell lemma A_m,m+1*}\\ &=R^{2d}\beta^{-1}\arccos\sqrt{1-\frac{\gamma}{2}},\label{estimate on I A_m,m+1*} \end{align} where to obtain \eqref{use of shell lemma A_m,m+1*}, we use Lemma \ref{shell estimate}.
Combining \eqref{decomposed integral A_m,m+1*}, \eqref{estimate for I_1}, \eqref{estimate on I A_m,m+1*}, we obtain \begin{equation}\label{measure estimate of A_m,m+1*} |A_{m,m+1}^*|\lesssim R^{2d}\left(\beta+\beta^{-1}\arccos\sqrt{1-\frac{\gamma}{2}}\right). \end{equation} $\bullet$ Estimate for $A_{m,m+2}^*$: The argument is entirely symmetric, using the set \begin{equation*} V_{\omega_1,\omega_2,v_{m+1}}^{m,m+2}=\left\{v_{m+2}\in B_R^d:(\omega_1,\omega_2,v_{m+1},v_{m+2})\in A_{m,m+2}^*\right\}, \end{equation*} for fixed $(\omega_1,\omega_2,v_{m+1})\in\mathbb{S}_1^{2d-1}\times B_R^d$ and the map $$F^2_{\omega_1,\omega_2,v_{m+1}}(v_{m+2})=v_{m+2}-\bar{v}_m-c\omega_1-2c\omega_2.$$ We obtain the estimate \begin{equation}\label{measure A_m,m+2*} |A_{m,m+2}^*|\lesssim R^{2d}\left(\beta+\beta^{-1}\arccos\sqrt{1-\frac{\gamma}{2}}\right). \end{equation} $\bullet$ Estimate for $B_{m+1,m+2}^*$: The estimate for $B_{m+1,m+2}^*$ is in the same spirit as the previous ones; however, we will need to distinguish cases depending on the size of the impact directions, because we rely on Lemma \ref{lemma on I_1,2} from Section \ref{sec:geometric}, which provides estimates on hemispheres of the unit $(2d-1)$-sphere. Recall from \eqref{A_m+1,m+2*} the set \begin{align} B_{m+1,m+2}^*&=\{(\omega_1,\omega_2,v_{m+1},v_{m+2})\in\mathbb{S}_1^{2d-1}\times B_R^{2d}:|\langle\omega_1-\omega_2,v_{m+1}^*-v_{m+2}^*\rangle|\geq\gamma'|\omega_1-\omega_2||v_{m+1}^*-v_{m+2}^*|\}.\label{A_m+1,m+2* measure representation} \end{align} The ternary collisional law \eqref{formulas ternary} yields $v_{m+1}^*-v_{m+2}^*=v_{m+1}-v_{m+2}-c(\omega_1-\omega_2),$ where $c$ is given by \eqref{c A_m,m+1*}. Thus we may write \begin{align*}B_{m+1,m+2}^*=&\{(\omega_1,\omega_2,v_{m+1},v_{m+2})\in\mathbb{S}_1^{2d-1}\times B_R^{2d}:\\ &|\langle\omega_2-\omega_1,v_{m+2}-v_{m+1}-c(\omega_2-\omega_1)\rangle|\geq \gamma'|\omega_2-\omega_1||v_{m+2}-v_{m+1}-c(\omega_2-\omega_1)|\}.
\end{align*} Recall from \eqref{sphere 2<1}-\eqref{sphere 1<2}, the sets \begin{align*} \mathcal{S}_{1,2}&=\left\{(\omega_1,\omega_2)\in\mathbb{S}_1^{2d-1}:|\omega_1|<|\omega_2|\right\},\quad\mathcal{S}_{2,1}=\left\{(\omega_1,\omega_2)\in\mathbb{S}_1^{2d-1}:|\omega_2|<|\omega_1|\right\}. \end{align*} We also recall from \eqref{I_1,2}-\eqref{I_2,1} the sets \begin{align*} I_{1,2}&=\{(\omega_1,\omega_2)\in\mathbb{R}^{2d}:\left|\left|\omega_1\right|^2+2\langle\omega_1,\omega_2\rangle\right|\leq\beta\},\quad I_{2,1}=\{(\omega_1,\omega_2)\in\mathbb{R}^{2d}:\left|\left|\omega_2\right|^2+2\langle\omega_1,\omega_2\rangle\right|\leq\beta\}. \end{align*} We clearly have \begin{align} |B_{m+1,m+2}^*|&=\int_{\mathbb{S}_1^{2d-1}\times B_R^{2d}}\mathds{1}_{B_{m+1,m+2}^*}\,d\omega_1\,d\omega_2\,dv_{m+1}\,dv_{m+2}\nonumber\\ &=\int_{\mathcal{S}_{1,2}\times B_R^{2d}}\mathds{1}_{B_{m+1,m+2}^*}\,d\omega_1\,d\omega_2\,dv_{m+1}\,dv_{m+2}+\int_{\mathcal{S}_{2,1}\times B_R^{2d}}\mathds{1}_{B_{m+1,m+2}^*}\,d\omega_1\,d\omega_2\,dv_{m+1}\,dv_{m+2}\nonumber\\ &= \widetilde{I}_{1,2}+\widetilde{I}_{1,2}'+\widetilde{I}_{2,1}+\widetilde{I}_{2,1}',\label{decomposed integral A_m+1,m+2*} \end{align} where \begin{align} \widetilde{I}_{1,2}&=\int_{(\mathcal{S}_{1,2}\cap I_{1,2})\times B_R^{2d}}\mathds{1}_{B_{m+1,m+2}^*}\,d\omega_1\,d\omega_2\,dv_{m+1}\,dv_{m+2},\label{I_1,2 proof}\\ \widetilde{I}_{1,2}'&=\int_{(\mathcal{S}_{1,2}\setminus I_{1,2})\times B_R^{2d}}\mathds{1}_{B_{m+1,m+2}^*}\,d\omega_1\,d\omega_2\,dv_{m+1}\,dv_{m+2},\label{I_1,2'}\\ \widetilde{I}_{2,1}&=\int_{(\mathcal{S}_{2,1}\cap I_{2,1})\times B_R^{2d}}\mathds{1}_{B_{m+1,m+2}^*}\,d\omega_1\,d\omega_2\,dv_{m+1}\,dv_{m+2},\label{I_2,1' proof}\\ \widetilde{I}_{2,1}'&=\int_{(\mathcal{S}_{2,1}\setminus I_{2,1})\times B_R^{2d}}\mathds{1}_{B_{m+1,m+2}^*}\,d\omega_1\,d\omega_2\,dv_{m+1}\,dv_{m+2}.\label{I_2,1'} \end{align} We treat each of the terms in \eqref{decomposed integral A_m+1,m+2*} separately.
\textit{Estimate for} $\widetilde{I}_{1,2}$: By \eqref{I_1,2 proof}, Fubini's Theorem and Lemma \ref{lemma on I_1,2}, we obtain \begin{equation}\label{estimate on I_1,2} \widetilde{I}_{1,2}\lesssim R^{2d}\int_{\mathcal{S}_{1,2}}\mathds{1}_{I_{1,2}}\,d\omega_1\,d\omega_2\lesssim R^{2d}\beta. \end{equation} \textit{Estimate for} $\widetilde{I}_{2,1}$: Similarly, we obtain \begin{equation}\label{estimate on I_2,1} \widetilde{I}_{2,1}\lesssim R^{2d}\beta. \end{equation} \textit{Estimate for} $\widetilde{I}_{1,2}'$: From \eqref{I_1,2'}, we obtain \begin{equation}\label{I_1,2' with V} \widetilde{I}_{1,2}'\leq\int_{\mathcal{S}_{1,2}\setminus I_{1,2}}\int_{B_R^d}\int_{B_R^d}\mathds{1}_{V_{\omega_1,\omega_2,v_{m+1}}^{m+1,m+2}}(v_{m+2})\,dv_{m+2}\,dv_{m+1}\,d\omega_1\,d\omega_2, \end{equation} where, given $(\omega_1,\omega_2,v_{m+1})\in(\mathcal{S}_{1,2}\setminus I_{1,2})\times B_R^d$, we denote \begin{equation}\label{V A_m+1,m+2*} V_{\omega_1,\omega_2,v_{m+1}}^{m+1,m+2}=\left\{v_{m+2}\in B_R^d:(\omega_1,\omega_2,v_{m+1},v_{m+2})\in B_{m+1,m+2}^*\right\}. \end{equation} Let us fix $(\omega_1,\omega_2,v_{m+1})\in (\mathcal{S}_{1,2}\setminus I_{1,2})\times B_R^d$. We define the map $F_{\omega_1,\omega_2,v_{m+1}}^{1,2}:B_R^d\to\mathbb{R}^d$ by $$F_{\omega_1,\omega_2,v_{m+1}}^{1,2}(v_{m+2})=v_{m+2}-v_{m+1}-c(\omega_2-\omega_1),$$ where $c$ is given by \eqref{c A_m,m+1*}.
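Explicitly, the Jacobian of $F_{\omega_1,\omega_2,v_{m+1}}^{1,2}$ is computed as for $F_{\omega_1,\omega_2,v_{m+2}}^1$: differentiating with respect to $v_{m+2}$ and noting $\nabla_{v_{m+2}}^Tc=\frac{1}{1+\langle\omega_1,\omega_2\rangle}\omega_2^T$, Lemma \ref{linear algebra lemma} yields \begin{equation*} \jac F_{\omega_1,\omega_2,v_{m+1}}^{1,2}(v_{m+2})=\det\left(I_d-\frac{1}{1+\langle\omega_1,\omega_2\rangle}(\omega_2-\omega_1)\omega_2^T\right)=1+\frac{\langle\omega_1,\omega_2\rangle-|\omega_2|^2}{1+\langle\omega_1,\omega_2\rangle}=\frac{|\omega_1|^2+2\langle\omega_1,\omega_2\rangle}{1+\langle\omega_1,\omega_2\rangle}, \end{equation*} where the last equality uses $|\omega_1|^2+|\omega_2|^2=1$, valid on $\mathbb{S}_1^{2d-1}$.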
In a similar way as in the estimate of $|A_{m,m+1}^*|$, for any $(\omega_1,\omega_2)\notin I_{1,2}$, we have \begin{equation} \left|\jac F_{\omega_1,\omega_2,v_{m+1}}^{1,2}(v_{m+2})\right|=\frac{\left|\left|\omega_1\right|^2+2\langle\omega_1,\omega_2\rangle\right|}{1+\langle\omega_1,\omega_2\rangle}>\frac{\beta}{1+\langle\omega_1,\omega_2\rangle}\geq\frac{2\beta}{3}. \end{equation} Thus \begin{equation}\label{bound on inverse Jacobian A_m+1,m+2*} \left|\jac F_{\omega_1,\omega_2,v_{m+1}}^{1,2}(v_{m+2})\right|^{-1}\leq \frac{3\beta^{-1}}{2},\quad\forall v_{m+2}\in B_R^d. \end{equation} Similarly to the estimate for $|A_{m,m+1}^*|$, we also show that $F_{\omega_1,\omega_2,v_{m+1}}^{1,2}$ is injective.
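For completeness, we sketch the injectivity: if $F_{\omega_1,\omega_2,v_{m+1}}^{1,2}(v_{m+2})=F_{\omega_1,\omega_2,v_{m+1}}^{1,2}(\xi_{m+2})$ for some $v_{m+2},\xi_{m+2}\in B_R^d$, then, by the definition of $c$ in \eqref{c A_m,m+1*}, \begin{equation*} v_{m+2}-\xi_{m+2}=\frac{\langle v_{m+2}-\xi_{m+2},\omega_2\rangle}{1+\langle\omega_1,\omega_2\rangle}(\omega_2-\omega_1), \end{equation*} so $v_{m+2}-\xi_{m+2}=\lambda(\omega_2-\omega_1)$ for some $\lambda\in\mathbb{R}$. Substituting back and using $|\omega_1|^2+|\omega_2|^2=1$, we obtain $\lambda\left(\left|\omega_1\right|^2+2\langle\omega_1,\omega_2\rangle\right)=0$, hence $\lambda=0$, since $(\omega_1,\omega_2)\notin I_{1,2}$. Therefore $v_{m+2}=\xi_{m+2}$.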
Since $(\omega_1,\omega_2,v_{m+1})\in\mathbb{S}_1^{2d-1}\times B_R^d$ and $\bar{v}_m\in B_R^d$, the Cauchy-Schwarz inequality yields that, for any $v_{m+2}\in B_R^d$, we have \begin{align*}|F_{\omega_1,\omega_2,v_{m+1}}^{1,2}(v_{m+2})|&\leq |v_{m+2}|+|v_{m+1}|+\frac{|\omega_1|(|v_{m+1}|+|\bar{v}_m|)+|\omega_2|(|v_{m+2}|+|\bar{v}_m|)}{1+\langle\omega_1,\omega_2\rangle}(|\omega_2|+|\omega_1|)\leq 18R, \end{align*} since $\frac{1}{2}\leq 1+\langle\omega_1,\omega_2\rangle\leq\frac{3}{2}$. Therefore \begin{equation}\label{inclusion 1 A_m+1,m+2*} F_{\omega_1,\omega_2,v_{m+1}}^{1,2}[B_R^d]\subseteq B_{18R}^d.
\end{equation} Additionally, \begin{equation*} v_{m+2}\in V_{\omega_1,\omega_2,v_{m+1}}^{m+1,m+2}\Leftrightarrow F_{\omega_1,\omega_2,v_{m+1}}^{1,2}(v_{m+2})\in U_{\omega_1,\omega_2}, \end{equation*} where \begin{equation}\label{U, A_m+1,m+2*} U_{\omega_1,\omega_2}=\left\{\nu\in\mathbb{R}^d:\left|\langle\omega_2-\omega_1,\nu\rangle\right|\geq\gamma'|\omega_2-\omega_1||\nu|\right\}. \end{equation} Hence \begin{equation}\label{inclusion 2 A_m+1,m+2*} \mathds{1}_{V_{\omega_1,\omega_2,v_{m+1}}^{m+1,m+2}}(v_{m+2})=\mathds{1}_{U_{\omega_1,\omega_2}}(F_{\omega_1,\omega_2,v_{m+1}}^{1,2}(v_{m+2})),\quad\forall v_{m+2}\in B_R^d. \end{equation} Therefore, performing the substitution $\nu:=F_{\omega_1,\omega_2,v_{m+1}}^{1,2}(v_{m+2})$, and using \eqref{bound on inverse Jacobian A_m+1,m+2*}, we obtain \begin{equation}\label{bound on velocity integral A_m+1,m+2*} \int_{B_R^d}\mathds{1}_{V_{\omega_1,\omega_2,v_{m+1}}^{m+1,m+2}}(v_{m+2})\,dv_{m+2} =\int_{B_R^d}\mathds{1}_{U_{\omega_1,\omega_2}}(F_{\omega_1,\omega_2,v_{m+1}}^{1,2}(v_{m+2}))\,dv_{m+2}\leq \frac{3\beta^{-1}}{2}\int_{B_{18R}^d}\mathds{1}_{U_{\omega_1,\omega_2}}(\nu)\,d\nu. \end{equation} Recalling the set $ N(\gamma',\nu)=\left\{(\omega_1,\omega_2)\in\mathbb{R}^{2d}:\left|\langle\omega_1-\omega_2,\nu\rangle\right|\geq\gamma'|\omega_1-\omega_2||\nu|\right\}, $ from \eqref{difference shell parameters} and \eqref{U, A_m+1,m+2*}, we have \begin{equation}\label{equality of chars U,A, A_m+1,m+2*} \mathds{1}_{U_{\omega_1,\omega_2}}(\nu)=\mathds{1}_{N(\gamma',\nu)}(\omega_1,\omega_2),\quad\forall (\omega_1,\omega_2)\in\mathbb{S}_1^{2d-1},\quad\forall \nu\in B_{18R}^d.
\end{equation} Therefore, using \eqref{I_1,2' with V}, \eqref{bound on velocity integral A_m+1,m+2*}, Fubini's Theorem and \eqref{equality of chars U,A, A_m+1,m+2*}, we obtain \begin{align} \widetilde{I}_{1,2}'&\lesssim\beta^{-1}\int_{(\mathcal{S}_{1,2}\setminus I_{1,2})\times B_R^d}\int_{B_{18R}^d}\mathds{1}_{U_{\omega_1,\omega_2}}(\nu)\,d\nu\,d\omega_1\,d\omega_2\,dv_{m+1}\nonumber\\ &\lesssim\beta^{-1}\int_{ B_R^d\times B_{18R}^d}\int_{\mathbb{S}_1^{2d-1}}\mathds{1}_{N(\gamma',\nu)}(\omega_1,\omega_2)\,d\omega_1\,d\omega_2\,d\nu\,dv_{m+1}\nonumber\\ &\lesssim R^{2d}\beta^{-1}\arccos\gamma'\label{use of strip lemma A_m+1,m+2*}\\ &=R^{2d}\beta^{-1}\arccos\sqrt{1-\frac{\gamma}{2}},\nonumber \end{align} where to obtain \eqref{use of strip lemma A_m+1,m+2*}, we use Lemma \ref{estimate of difference in shell}. Therefore, \begin{equation}\label{estimate I_12'} \widetilde{I}_{1,2}'\lesssim R^{2d}\beta^{-1}\arccos\sqrt{1-\frac{\gamma}{2}}. \end{equation} \textit{Estimate for} $\widetilde{I}_{2,1}'$: The argument is entirely symmetric, using the set $$V_{\omega_1,\omega_2,v_{m+2}}^{m+1,m+2}=\left\{v_{m+1}\in B_R^d:(\omega_1,\omega_2,v_{m+1},v_{m+2})\in B_{m+1,m+2}^*\right\},$$ for given $(\omega_1,\omega_2,v_{m+2})\in(\mathcal{S}_{2,1}\setminus I_{2,1})\times B_R^d$ and the map $F_{\omega_1,\omega_2,v_{m+2}}^{2,1}(v_{m+1})=v_{m+1}-v_{m+2}-c(\omega_1-\omega_2).$ We obtain \begin{equation}\label{estimate I_21'} \widetilde{I}_{2,1}'\lesssim R^{2d}\beta^{-1}\arccos\sqrt{1-\frac{\gamma}{2}}.
\end{equation} Recalling \eqref{decomposed integral A_m+1,m+2*} and using \eqref{estimate on I_1,2}-\eqref{estimate on I_2,1}, \eqref{estimate I_12'}-\eqref{estimate I_21'}, we obtain \begin{equation}\label{measure A_m+1,m+2*} |B_{m+1,m+2}^*|\lesssim R^{2d}\left(\beta+\beta^{-1}\arccos\sqrt{1-\frac{\gamma}{2}}\right). \end{equation} Recalling \eqref{A+ measure representation} and using \eqref{measure omega_1}-\eqref{measure omega_1,2}, \eqref{measure estimate of A_m,m+1*}, \eqref{measure A_m,m+2*}, \eqref{measure A_m+1,m+2*}, we obtain \begin{equation}\label{measure A+} |\mathcal{A}_m^+(\bar{Z}_m)|\lesssim R^{2d}\left(\gamma^{d/2}+\gamma^{\frac{d-1}{4}}+\beta+\beta^{-1}\arccos\sqrt{1-\frac{\gamma}{2}}\right). \end{equation} Recalling \eqref{A}, using \eqref{measure A-}, \eqref{measure A+} and the fact that $\gamma<<1$, we obtain \begin{equation}\label{measure A with gamma beta} |\mathcal{A}_m(\bar{Z}_m)|\lesssim R^{2d}\left(\gamma^{\frac{d-1}{4}}+\beta+\beta^{-1}\arccos\sqrt{1-\frac{\gamma}{2}}\right). \end{equation} \textit{Choice of $\beta$}: Let us now choose $\beta$ in terms of $\eta$. Recalling that $\epsilon_2<<\eta^2\epsilon_3$ and \eqref{gamma def}, we have \begin{equation}\label{gamma bounded by eta} \gamma^{\frac{d-1}{4}}<<\eta^{\frac{d-1}{2}}. \end{equation} Moreover, since $\eta<<1$, we may assume \begin{equation}\label{eta simeq sin eta} \frac{\eta}{\sqrt{2}}\leq\sin\eta\leq\eta. \end{equation} Since $\gamma<<\eta^2$, \eqref{eta simeq sin eta} implies \begin{equation}\label{arc gamma bounded by eta} \gamma<<2\sin^2\eta\Rightarrow\arccos\sqrt{1-\frac{\gamma}{2}}<\eta.
\end{equation} Choosing \begin{equation}\label{choice of beta in terms of eta} \beta=\eta^{1/2}<<1, \end{equation} estimates \eqref{measure A with gamma beta}-\eqref{gamma bounded by eta}, \eqref{arc gamma bounded by eta} imply \begin{equation}\label{final estimate on A} |\mathcal{A}_m(\bar{Z}_m)|\lesssim R^{2d}\left(\eta^{\frac{d-1}{2}}+\eta^{1/2}\right)\lesssim R^{2d}\eta^{\frac{d-1}{4d+2}}, \end{equation} since $\eta<<1$ and $d\geq 2$. The claim then follows from \eqref{B}-\eqref{estimate of wide tilde} and \eqref{final estimate on A}. \end{proof} \section{Elimination of recollisions}\label{sec_elimination} In this section we reduce the convergence proof to comparing truncated elementary observables. We first restrict to good configurations and provide the corresponding measure estimate; this is done in Proposition \ref{restriction to initially good conf}. We then inductively apply Proposition \ref{bad set double} and Proposition \ref{bad set double measure} or Proposition \ref{bad set triple} and Proposition \ref{bad set triple measure} (depending on whether the adjunction is binary or ternary) to reduce the convergence proof to truncated elementary observables. The convergence proof, completed in Section \ref{sec:convergence proof}, will then follow naturally, since the backwards $(\epsilon_2,\epsilon_3)$-flow and the backwards free flow will be comparable outside a small measure set. Throughout this section, $s\in\mathbb{N}$ is fixed, $(N,\epsilon_2,\epsilon_3)$ are given in the scaling \eqref{scaling} with $N$ large enough such that $\epsilon_2<<\epsilon_3$, and the parameters $n,R,\epsilon_0,\alpha,\eta,\delta$ satisfy \eqref{choice of parameters}. \subsection{Restriction to good configurations} Inductively applying Lemma \ref{adjuction of 1}, we reduce the convergence proof to good configurations, up to a small measure set. The measure of the complement will be negligible in the limit.
For convenience, given $m\in\mathbb{N}$, let us define the set \begin{equation}\label{both epsilon-epsilon_0} G_m(\epsilon_3,\epsilon_0,\delta):=G_m(\epsilon_3,0)\cap G_m(\epsilon_0,\delta). \end{equation} For $s\in\mathbb{N}$, we also recall from \eqref{separated space data} the set $\Delta_s^X(\epsilon_0)$ of well-separated spatial configurations. \begin{lemma}\label{initially good configurations} Let $s\in\mathbb{N}$, let $\alpha,\epsilon_0,R,\eta,\delta$ be parameters as in \eqref{choice of parameters}, and let $\epsilon_2<<\epsilon_3<<\alpha$. Then for any $X_s\in\Delta_s^X(\epsilon_0)$, there is a subset of velocities $\mathcal{M}_s(X_s)\subseteq B_R^{ds}$ of measure \begin{equation}\label{measure of initialization} \left|\mathcal{M}_s\left(X_s\right)\right|_{ds}\leq C_{d,s} R^{ds}\eta^{\frac{d-1}{2}}, \end{equation} such that \begin{equation}\label{initialization} Z_s\in G_s(\epsilon_3,\epsilon_0,\delta),\quad\forall V_s\in B_R^{ds}\setminus \mathcal{M}_s(X_s). \end{equation} \end{lemma} \begin{proof} We use Proposition 11.2 from \cite{thesis} for $\epsilon=\epsilon_3$. \end{proof} For $s\in\mathbb{N}$ and $X_s\in\Delta_s^X(\epsilon_0)$, let us denote $\mathcal{M}_s^c(X_s)=B_R^{ds}\setminus\mathcal{M}_s(X_s).$ Consider $1\leq k\leq n$ and let us recall the observables $I_{s,k,R,\delta}^N$, $I_{s,k,R,\delta}^\infty$ defined in \eqref{bbgky truncated time}-\eqref{boltzmann truncated time}. We restrict the domain of integration to velocities giving good configurations. In particular, we define \begin{align} \widetilde{I}_{s,k,R,\delta}^N(t)(X_s)&=\int_{\mathcal{M}_s^c(X_s)}\phi_s(V_s)f_{N,R,\delta}^{(s,k)}(X_s,V_s)\,dV_s\label{good observables BBGKY },\\ \widetilde{I}_{s,k,R,\delta}^\infty(t)(X_s)&=\int_{\mathcal{M}_s^c(X_s)}\phi_s(V_s)f_{R,\delta}^{(s,k)}(X_s,V_s)\,dV_s\label{good observables Boltzmann}. \end{align} Let us apply Lemma \ref{initially good configurations} to restrict to initially good configurations.
To keep track of all the possible adjunctions, we recall the notation from \eqref{S_k}-\eqref{sigma tilde}: given $k\in\mathbb{N}$, we write $$S_k=\{\sigma=(\sigma_1,...,\sigma_k):\sigma_i\in\{1,2\}\},$$ and given $\sigma\in S_k$, we write \begin{align*} \widetilde{\sigma}_\ell&=\sum_{i=1}^\ell\sigma_i,\quad 1\leq \ell\leq k,\quad\widetilde{\sigma}_0=0. \end{align*} \begin{proposition}\label{restriction to initially good conf} Let $s,n\in\mathbb{N}$, $\alpha,\epsilon_0,R,\eta,\delta$ be parameters as in \eqref{choice of parameters}, $(N,\epsilon_2,\epsilon_3)$ in the scaling \eqref{scaling} with $\epsilon_2<<\epsilon_3<<\alpha$, and $t\in[0,T]$. Then, the following estimates hold: \begin{equation*} \sum_{k=1}^n\|I_{s,k,R,\delta}^N(t)-\widetilde{I}_{s,k,R,\delta}^N(t)\|_{L^\infty\left(\Delta_s^X\left(\epsilon_0\right)\right)} \leq C_{d,s,\mu_0,T}R^{ds}\eta^{\frac{d-1}{2}}\|F_{N,0}\|_{N,\beta_0,\mu_0}, \end{equation*} \begin{equation*} \sum_{k=1}^n\|I_{s,k,R,\delta}^\infty (t)-\widetilde{I}_{s,k,R,\delta}^\infty (t)\|_{L^\infty\left(\Delta_s^X\left(\epsilon_0\right)\right)}\leq C_{d,s,\mu_0,T}R^{ds}\eta^{\frac{d-1}{2}}\|F_{0}\|_{\infty,\beta_0,\mu_0}. \end{equation*} \end{proposition} \begin{proof} We present the proof for the BBGKY hierarchy case only. The proof for the Boltzmann hierarchy case is similar. Let us fix $X_s\in\Delta_s^X(\epsilon_0)$. We first assume that $k\in\left\{1,...,n\right\}$.
The triangle inequality, an inductive application of estimate \eqref{a priori binary bound F_N}, estimate \eqref{a priori bound F_N,0} and part $(ii)$ of Proposition \ref{remark for initial} yield \begin{align} |I_{s,k,R,\delta}^N(t)(X_s)-&\widetilde{I}_{s,k,R,\delta}^N(t)(X_s)|\leq\sum_{\sigma\in S_k}\int_{\mathcal{M}_s(X_s)}|\phi_s(V_s)f_{N,R,\delta}^{(s,k,\sigma)}(t,X_s,V_s)|\,dV_s\nonumber\\ &\leq 2T\|\phi_s\|_{L^\infty_{V_s}}e^{-s\bm{\mu}(T)}\left(\frac{1}{8}\right)^{k-1}\|F_{N,0}\|_{N,\beta_0,\mu_0}\int_{\mathcal{M}_s(X_s)}e^{-\bm{\beta}(T)E_s(Z_s)}\,dV_s\label{use of card reduction}\\ &\leq 2T\|\phi_s\|_{L^\infty_{V_s}}e^{-s\bm{\mu}(T)}\left(\frac{1}{8}\right)^{k-1}|\mathcal{M}_s(X_s)|_{ds}\|F_{N,0}\|_{N,\beta_0,\mu_0},\label{reduction to elem 1<k<n} \end{align} where to obtain \eqref{use of card reduction}, we use \eqref{cardinality of S_k}. For $k=0$, part $(i)$ of Proposition \ref{remark for initial} and Remark \ref{T_N definition} similarly yield \begin{equation}\label{reduction to elem k=0} |I_{s,0,R,\delta}^N(t)(X_s)-\widetilde{I}_{s,0,R,\delta}^N(t)(X_s)|\leq\|\phi_s\|_{L^\infty_{V_s}}e^{-s\bm{\mu}(T)}|\mathcal{M}_s(X_s)|_{ds}\|F_{N,0}\|_{N,\beta_0,\mu_0}.
\end{equation} The claim follows after using \eqref{reduction to elem 1<k<n}-\eqref{reduction to elem k=0}, adding over $k=0,...,n$, and using the measure estimate of Lemma \ref{initially good configurations}. \end{proof} \begin{remark}\label{no need for k=0} Given $s\in\mathbb{N}$ and $X_s\in\Delta_s^X(\epsilon_0)$, the definition of $\mathcal{M}_s(X_s)$ implies that $$\widetilde{I}_{s,0,R,\delta}^N(t)(X_s)=\widetilde{I}_{s,0,R,\delta}^\infty(t)(X_s).$$ Therefore, by Proposition \ref{restriction to initially good conf}, convergence reduces to controlling the differences $\widetilde{I}_{s,k,R,\delta}^N(t)-\widetilde{I}_{s,k,R,\delta}^\infty(t),$ for $k=1,...,n$, in the scaled limit. \end{remark} \subsection{Reduction to elementary observables}\label{reduction to elementary observables} Here, given $s\in\mathbb{N}$ and $1\leq k\leq n$, we express the observables $\widetilde{I}_{s,k,R,\delta}^N(t)$, $\widetilde{I}_{s,k,R,\delta}^\infty(t)$, defined in \eqref{good observables BBGKY }-\eqref{good observables Boltzmann}, as a superposition of elementary observables.
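As a concrete illustration of this bookkeeping (an added example; the values follow directly from the definition of $\widetilde{\sigma}$ recalled above): for $k=3$ and $\sigma=(1,2,1)$ we have \begin{equation*} \widetilde{\sigma}_0=0,\qquad\widetilde{\sigma}_1=1,\qquad\widetilde{\sigma}_2=3,\qquad\widetilde{\sigma}_3=4, \end{equation*} so, starting from $s$ particles, the successive adjunctions produce systems of $s+1$, $s+3$ and finally $s+4$ particles; each summand of the superposition then involves collisional operators acting between precisely these particle numbers.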
For this purpose, given $\ell\in\mathbb{N}$, and recalling \eqref{velocity truncation of operators}, \eqref{BBGKY operator binary}, we decompose the BBGKY hierarchy binary truncated collisional operator as: \begin{equation*} \mathcal{C}_{\ell,\ell+1}^{N,R}=\sum_{i=1}^\ell\mathcal{C}_{\ell,\ell+1}^{N,R,+,i}-\sum_{i=1}^\ell\mathcal{C}_{\ell,\ell+1}^{N,R,-,i}, \end{equation*} where \begin{equation*} \begin{aligned} \mathcal{C}_{\ell,\ell+1}^{N,R,+,i}g_{\ell+1}&(Z_\ell)=A_{N,\epsilon_2,\ell}^2\int_{\mathbb{S}_1^{d-1}\times B_R^{d}}b_2^+(\omega_1,v_{\ell+1}-v_i) g_{\ell+1}(Z_{\ell+1,\epsilon_2}^{i'})\,d\omega_1\,dv_{\ell+1}, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} \mathcal{C}_{\ell,\ell+1}^{N,R,-,i}g_{\ell+1}&(Z_\ell)=A_{N,\epsilon_2,\ell}^2\int_{\mathbb{S}_1^{d-1}\times B_R^{d}}b_2^+(\omega_1,v_{\ell+1}-v_i) g_{\ell+1}(Z_{\ell+1,\epsilon_2}^{i})\,d\omega_1\,dv_{\ell+1}, \end{aligned} \end{equation*} and the ternary truncated collisional operator as: \begin{equation*} \mathcal{C}_{\ell,\ell+2}^{N,R}=\sum_{i=1}^\ell\mathcal{C}_{\ell,\ell+2}^{N,R,+,i}-\sum_{i=1}^\ell\mathcal{C}_{\ell,\ell+2}^{N,R,-,i}, \end{equation*} where \begin{equation*} \begin{aligned} \mathcal{C}_{\ell,\ell+2}^{N,R,+,i}g_{\ell+2}&(Z_\ell)=A_{N,\epsilon_3,\ell}^3\int_{\mathbb{S}_1^{2d-1}\times B_R^{2d}}\frac{b_3^+(\omega_1,\omega_2,v_{\ell+1}-v_i,v_{\ell+2}-v_i)}{\sqrt{1+\langle\omega_1,\omega_2\rangle}} g_{\ell+2}(Z_{\ell+2,\epsilon_3}^{i*})\,d\omega_1\,d\omega_2\,dv_{\ell+1}\,dv_{\ell+2}, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} \mathcal{C}_{\ell,\ell+2}^{N,R,-,i}g_{\ell+2}&(Z_\ell)=A_{N,\epsilon_3,\ell}^3\int_{\mathbb{S}_1^{2d-1}\times B_R^{2d}}\frac{b_3^+(\omega_1,\omega_2,v_{\ell+1}-v_i,v_{\ell+2}-v_i)}{\sqrt{1+\langle\omega_1,\omega_2\rangle}} g_{\ell+2}(Z_{\ell+2,\epsilon_3}^{i})\,d\omega_1\,d\omega_2\,dv_{\ell+1}\,dv_{\ell+2}.
\end{aligned} \end{equation*} In order to expand the observable $\widetilde{I}_{s,k,R,\delta}^N(t)$ to elementary observables, we need to take into account all the possible particle adjunctions occurring by adding one or two particles to the system in each step. More precisely, given $\sigma\in S_k$ and $i\in\{1,...,k\}$, we are adding $\sigma_i\in\{1,2\}$ particle(s) to the existing $s+\widetilde{\sigma}_{i-1}$ particles in either a precollisional or a postcollisional way. In order to keep track of this process, given $1\leq k\leq n$, $\sigma\in S_k$, we introduce the notation \begin{align} \mathcal{M}_{s,k,\sigma}&=\left\{M=(m_1,...,m_k)\in\mathbb{N}^k:m_i\in\left\{1,...,s+\widetilde{\sigma}_{i-1}\right\},\quad\forall i\in\left\{1,...,k\right\}\right\}\label{M_k},\\ \mathcal{J}_{s,k,\sigma}&=\left\{J=(j_1,...,j_k):j_i\in\left\{-1,1\right\},\quad\forall i\in\left\{1,...,k\right\}\right\}\label{J_k},\\ \mathcal{U}_{s,k,\sigma}&=\mathcal{J}_{s,k,\sigma}\times\mathcal{M}_{s,k,\sigma}.
\end{align} Under this notation, the BBGKY hierarchy observable functional $\widetilde{I}_{s,k,R,\delta}^N(t)$ can be expressed, for $1\leq k\leq n$, as a superposition of elementary observables \begin{equation}\label{superposition BBGKY} \widetilde{I}_{s,k,R,\delta}^N(t)(X_s)=\sum_{\sigma\in S_k}\sum_{(J,M)\in\mathcal{U}_{s,k,\sigma}}\left(\prod_{i=1}^kj_i\right)\widetilde{I}_{s,k,R,\delta,\sigma}^N(t,J,M)(X_s), \end{equation} where the elementary observables are defined by \begin{equation}\label{elementary observable BBGKY} \begin{aligned} \widetilde{I}_{s,k,R,\delta,\sigma}^N(t,J,M)(X_s)&=\int_{\mathcal{M}_s^c(X_s)}\phi_s(V_s)\int_{\mathcal{T}_{k,\delta}(t)}T_s^{t-t_1}\mathcal{C}_{s,s+\widetilde{\sigma}_1}^{N,R,j_1,m_1} T_{s+\widetilde{\sigma}_1}^{t_1-t_2}...\\ &...T_{s+\widetilde{\sigma}_{k-1}}^{t_{k-1}-t_k}\mathcal{C}_{s+\widetilde{\sigma}_{k-1},s+\widetilde{\sigma}_k}^{N,R,j_k,m_k} T_{s+\widetilde{\sigma}_k}^{t_k}f_{0}^{(s+\widetilde{\sigma}_k)}(Z_s)\,dt_k...\,dt_{1}dV_s. \end{aligned} \end{equation} Similarly, given $\ell\in\mathbb{N}$, and recalling \eqref{boltzmann notation binary}, \eqref{boltzmann notation triary}, we decompose the Boltzmann hierarchy binary and ternary collisional operators as: \begin{equation*} \mathcal{C}_{\ell,\ell+1}^{\infty,R}=\sum_{i=1}^\ell\mathcal{C}_{\ell,\ell+1}^{\infty,R,+,i}-\sum_{i=1}^\ell\mathcal{C}_{\ell,\ell+1}^{\infty,R,-,i}, \end{equation*} where \begin{equation*} \begin{aligned} \mathcal{C}_{\ell,\ell+1}^{\infty,R,+,i}g_{\ell+1}&(Z_\ell)=\int_{\mathbb{S}_1^{d-1}\times B_R^{d}}b_2^+(\omega_1,v_{\ell+1}-v_i) g_{\ell+1}(Z_{\ell+1}^{i'})\,d\omega_1\,dv_{\ell+1}, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} \mathcal{C}_{\ell,\ell+1}^{\infty,R,-,i}g_{\ell+1}&(Z_\ell)=\int_{\mathbb{S}_1^{d-1}\times B_R^{d}}b_2^+(\omega_1,v_{\ell+1}-v_i) g_{\ell+1}(Z_{\ell+1}^{i})\,d\omega_1\,dv_{\ell+1}, \end{aligned} \end{equation*} \begin{equation*}
\mathcal{C}_{\ell,\ell+2}^{\infty,R}=\sum_{i=1}^\ell\mathcal{C}_{\ell,\ell+2}^{\infty,R,+,i}-\sum_{i=1}^\ell\mathcal{C}_{\ell,\ell+2}^{\infty,R,-,i}, \end{equation*} where \begin{equation*} \begin{aligned} \mathcal{C}_{\ell,\ell+2}^{\infty,R,+,i}g_{\ell+2}(Z_\ell)&=\int_{\mathbb{S}_1^{2d-1}\times B_R^{2d}}\frac{b_3^+(\omega_1,\omega_2,v_{\ell+1}-v_i,v_{\ell+2}-v_i)}{\sqrt{1+\langle\omega_1,\omega_2\rangle}} g_{\ell+2}(Z_{\ell+2}^{i*})\,d\omega_1\,d\omega_2\,dv_{\ell+1}\,dv_{\ell+2}, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} \mathcal{C}_{\ell,\ell+2}^{\infty,R,-,i}g_{\ell+2}(Z_\ell)&=\int_{\mathbb{S}_1^{2d-1}\times B_R^{2d}}\frac{b_3^+(\omega_1,\omega_2,v_{\ell+1}-v_i,v_{\ell+2}-v_i)}{\sqrt{1+\langle\omega_1,\omega_2\rangle}}g_{\ell+2}(Z_{\ell+2}^i)\,d\omega_1\,d\omega_2\,dv_{\ell+1}\,dv_{\ell+2}. \end{aligned} \end{equation*} Under this notation, the Boltzmann hierarchy observable functional $\widetilde{I}_{s,k,R,\delta}^\infty(t)$ can be expressed, for $1\leq k\leq n$, as a superposition of elementary observables \begin{equation}\label{superposition Boltzmann} \widetilde{I}_{s,k,R,\delta}^\infty(t)(X_s)=\sum_{\sigma\in S_k}\sum_{(J,M)\in\mathcal{U}_{s,k,\sigma}}\left(\prod_{i=1}^kj_i\right)\widetilde{I}_{s,k,R,\delta,\sigma}^\infty(t,J,M)(X_s), \end{equation} where the elementary observables are defined by \begin{equation}\label{elementary observable Boltzmann} \begin{aligned} \widetilde{I}_{s,k,R,\delta,\sigma}^\infty(t,J,M)(X_s)&=\int_{\mathcal{M}_s^c(X_s)}\phi_s(V_s)\int_{\mathcal{T}_{k,\delta}(t)}S_s^{t-t_1}\mathcal{C}_{s,s+\widetilde{\sigma}_1}^{\infty,R,j_1,m_1} S_{s+\widetilde{\sigma}_1}^{t_1-t_2}...\\ &...S_{s+\widetilde{\sigma}_{k-1}}^{t_{k-1}-t_k}\mathcal{C}_{s+\widetilde{\sigma}_{k-1},s+\widetilde{\sigma}_k}^{\infty,R,j_k,m_k} S_{s+\widetilde{\sigma}_k}^{t_k}f_{0}^{(s+\widetilde{\sigma}_k)}(Z_s)\,dt_k...\,dt_{1}dV_s.
\end{aligned} \end{equation} \subsection{Boltzmann hierarchy pseudo-trajectories}\label{subsec Boltzmann pseudo} We introduce the following notation, which we will use constantly from now on. Let $s\in\mathbb{N}$, $Z_s=(X_s,V_s)\in\mathbb{R}^{2ds}$, $1\leq k\leq n$, $\sigma\in S_k$ and $t\in[0,T]$. Let us recall from \eqref{collision times} the set $$\mathcal{T}_k(t)=\left\{(t_1,...,t_k)\in\mathbb{R}^k:0=t_{k+1}<t_k<...<t_1<t_0=t\right\},\quad t_0=t,\text{ }t_{k+1}=0.$$ Consider $(t_1,...,t_k)\in\mathcal{T}_k(t)$, $J=(j_1,...,j_k)$, $M=(m_1,...,m_k)$, $(J,M)\in\mathcal{U}_{s,k,\sigma}$. For each $i=1,...,k$, we distinguish two possible situations: \begin{align} &\text{If }\sigma_i=1,\text{ we consider } (\omega_{s+\widetilde{\sigma}_i},v_{s+\widetilde{\sigma}_i})\in\mathbb{S}_1^{d-1}\times B_R^d.\label{pseudo binary}\\ &\text{If }\sigma_i=2,\text{ we consider } (\omega_{s+\widetilde{\sigma}_i-1},\omega_{s+\widetilde{\sigma}_i},v_{s+\widetilde{\sigma}_i-1},v_{s+\widetilde{\sigma}_i})\in\mathbb{S}_1^{2d-1}\times B_R^{2d}.\label{pseudo triary} \end{align} For convenience, for each $i=1,...,k$, we will write $(\bm{\omega}_{\sigma_i,i},\bm{v}_{\sigma_i,i})\in\mathbb{S}_1^{d\sigma_i-1}\times B_R^{d\sigma_i}$ where $(\bm{\omega}_{\sigma_i,i},\bm{v}_{\sigma_i,i})$ is of the form \eqref{pseudo binary} if $\sigma_i=1$ and of the form \eqref{pseudo triary} if $\sigma_i=2$. We inductively define the Boltzmann hierarchy pseudo-trajectory of $Z_s$. Roughly speaking, the Boltzmann hierarchy pseudo-trajectory records the configurations at which particles are adjoined during the backwards-in-time evolution. Intuitively, assume we are given a configuration $Z_s=(X_s,V_s)\in\mathbb{R}^{2ds}$ at time $t_0=t$. $Z_s$ evolves under backwards free flow until the time $t_1$, when the configuration $(\bm{\omega}_{\sigma_1,1},\bm{v}_{\sigma_1,1})$ is added, neglecting positions, to the $m_1$-th particle, the adjunction being precollisional if $j_1=-1$ and postcollisional if $j_1=1$.
We then form an $(s+\widetilde{\sigma}_1)$-configuration and continue this process inductively until time $t_{k+1}=0$. More precisely, we inductively construct the Boltzmann hierarchy pseudo-trajectory of $Z_s=(X_s,V_s)\in\mathbb{R}^{2ds}$ as follows: {\bf{Time $t_0=t$}:} We initially define $ Z_s^\infty(t_{0}^-)=\left(x_1^\infty(t_0^-),...,x_s^\infty(t_0^-),v_1^\infty(t_0^-),...,v_s^\infty(t_0^-)\right):=Z_s. $ {\bf{Time $t_i$}, $i\in\{1,...,k\}$:} Consider $i\in\left\{1,...,k\right\}$ and assume we know $$Z_{s+\widetilde{\sigma}_{i-1}}^\infty (t_{i-1}^-)=\left(x_1^\infty(t_{i-1}^-),...,x_{s+\widetilde{\sigma}_{i-1}}^\infty(t_{i-1}^-),v_1^\infty(t_{i-1}^-),...,v_{s+\widetilde{\sigma}_{i-1}}^\infty(t_{i-1}^-)\right).$$ We define $Z_{s+\widetilde{\sigma}_{i-1}}^\infty (t_{i}^+)=\left(x_1^\infty(t_{i}^+),...,x_{s+\widetilde{\sigma}_{i-1}}^\infty(t_{i}^+),v_1^\infty(t_{i}^+),...,v_{s+\widetilde{\sigma}_{i-1}}^\infty(t_{i}^+)\right)$ as: \begin{equation*} Z_{s+\widetilde{\sigma}_{i-1}}^\infty(t_i^+):=\left(X_{s+\widetilde{\sigma}_{i-1}}^\infty\left(t_{i-1}^-\right)-\left(t_{i-1}-t_i\right)V_{s+\widetilde{\sigma}_{i-1}}^\infty\left(t_{i-1}^-\right), V_{s+\widetilde{\sigma}_{i-1}}^\infty\left(t_{i-1}^-\right)\right). 
\end{equation*} We also define $Z_{s+\widetilde{\sigma}_{i}}^\infty(t_i^-)=\left(x_1^\infty(t_{i}^-),...,x_{s+\widetilde{\sigma}_{i}}^\infty(t_{i}^-),v_1^\infty(t_{i}^-),...,v_{s+\widetilde{\sigma}_{i}}^\infty(t_{i}^-)\right)$ as: \begin{equation*} \left(x_j^\infty(t_i^-),v_j^\infty(t_i^-)\right):=(x_j^\infty(t_i^+),v_j^\infty(t_i^+)),\quad\forall j\in\{1,...,s+\widetilde{\sigma}_{i-1}\}\setminus\left\{m_i\right\}. \end{equation*} For the rest of the particles, we distinguish the following cases, depending on $\sigma_i$: \begin{itemize} \item $\sigma_i=1$: If $j_i=-1$: \begin{equation*} \begin{aligned} \left(x_{m_i}^\infty(t_i^-),v_{m_i}^\infty(t_i^-)\right)&:=\left(x_{m_{i}}^\infty(t_{i}^+),v_{m_{i}}^\infty(t_{i}^+)\right),\\ \left(x_{s+\widetilde{\sigma}_{i}}^\infty(t_i^-),v_{s+\widetilde{\sigma}_{i}}^\infty(t_i^-)\right)&:=\left(x_{m_{i}}^\infty(t_{i}^+),v_{s+\widetilde{\sigma}_{i}}\right), \end{aligned} \end{equation*} while if $j_i=1$: \begin{equation*} \begin{aligned} \left(x_{m_i}^\infty(t_i^-),v_{m_i}^\infty(t_i^-)\right)&:=\left(x_{m_{i}}^\infty(t_{i}^+),v_{m_{i}}^{\infty'}(t_{i}^+)\right),\\ \left(x_{s+\widetilde{\sigma}_{i}}^\infty(t_i^-),v_{s+\widetilde{\sigma}_{i}}^\infty(t_i^-)\right)&:=\left(x_{m_{i}}^\infty(t_{i}^+),v_{s+\widetilde{\sigma}_{i}}'\right), \end{aligned} \end{equation*} where $ (v_{m_{i}}^{\infty'}(t_{i}^+),v_{s+\widetilde{\sigma}_{i}}')=T_{\omega_{s+\widetilde{\sigma}_{i}}}\left(v_{m_{i}}^\infty(t_{i}^+),v_{s+\widetilde{\sigma}_{i}}\right).
$ \item $\sigma_i=2$: If $j_i=-1$: \begin{equation*} \begin{aligned} \left(x_{m_i}^\infty(t_i^-),v_{m_i}^\infty(t_i^-)\right)&:=\left(x_{m_{i}}^\infty(t_{i}^+),v_{m_{i}}^\infty(t_{i}^+)\right),\\ \left(x_{s+\widetilde{\sigma}_{i}-1}^\infty(t_i^-),v_{s+\widetilde{\sigma}_{i}-1}^\infty(t_i^-)\right)&:=\left(x_{m_{i}}^\infty(t_{i}^+),v_{s+\widetilde{\sigma}_{i}-1}\right),\\ \left(x_{s+\widetilde{\sigma}_{i}}^\infty(t_i^-),v_{s+\widetilde{\sigma}_{i}}^\infty(t_i^-)\right)&:=\left(x_{m_{i}}^\infty(t_{i}^+),v_{s+\widetilde{\sigma}_{i}}\right), \end{aligned} \end{equation*} while if $j_i=1$: \begin{equation*} \begin{aligned} \left(x_{m_i}^\infty(t_i^-),v_{m_i}^\infty(t_i^-)\right)&:=\left(x_{m_{i}}^\infty(t_{i}^+),v_{m_{i}}^{\infty*}(t_{i}^+)\right),\\ \left(x_{s+\widetilde{\sigma}_{i}-1}^\infty(t_i^-),v_{s+\widetilde{\sigma}_{i}-1}^\infty(t_i^-)\right)&:=\left(x_{m_{i}}^\infty(t_{i}^+),v_{s+\widetilde{\sigma}_{i}-1}^*\right),\\ \left(x_{s+\widetilde{\sigma}_{i}}^\infty(t_i^-),v_{s+\widetilde{\sigma}_{i}}^\infty(t_i^-)\right)&:=\left(x_{m_{i}}^\infty(t_{i}^+),v_{s+\widetilde{\sigma}_{i}}^*\right), \end{aligned} \end{equation*} where $ (v_{m_{i}}^{\infty*}(t_{i}^+),v_{s+\widetilde{\sigma}_{i}-1}^*,v_{s+\widetilde{\sigma}_{i}}^*)=T_{\omega_{s+\widetilde{\sigma}_{i}-1},\omega_{s+\widetilde{\sigma}_{i}}}\left(v_{m_{i}}^\infty(t_{i}^+),v_{s+\widetilde{\sigma}_{i}-1},v_{s+\widetilde{\sigma}_{i}}\right).
$ \end{itemize} {\bf{Time $t_{k+1}=0$}:} We finally obtain $$Z_{s+\widetilde{\sigma}_{k}}^\infty(0^+)=Z_{s+\widetilde{\sigma}_{k}}^\infty(t_{k+1}^+)=\left(X_{s+\widetilde{\sigma}_{k}}^\infty\left(t_{k}^-\right)-t_kV_{s+\widetilde{\sigma}_{k}}^\infty\left(t_k^-\right),V_{s+\widetilde{\sigma}_{k}}^\infty\left(t_k^-\right)\right).$$ The process is illustrated in the following diagram: \begin{center} \begin{tikzpicture}[node distance=2.5cm,auto,>=latex']\label{boltzmann pseudo diagram} \node[int](0-){\small$ Z_s^\infty(t_0^-)$}; \node[int,pin={[init]above:\small$\begin{matrix}(\bm{\omega}_{\sigma_1,1},\bm{v}_{\sigma_1,1}),\\(j_1,m_1)\end{matrix}$}](1+)[left of=0-,node distance=2.3cm]{\small$Z_s^\infty(t_1^+)$}; \node[int](1-)[left of=1+,node distance=1.5cm]{$Z_{s+\widetilde{\sigma}_1}^\infty(t_1^-)$}; \node[](intermediate1)[left of=1-,node distance=2cm]{...}; \node[int,pin={[init]above:\small$\begin{matrix}(\bm{\omega}_{\sigma_i,i},\bm{v}_{\sigma_i,i}),\\(j_i,m_i)\end{matrix}$}](i+)[left of=intermediate1,node distance=2.5cm]{\small$Z_{s+\widetilde{\sigma}_{i-1}}^\infty(t_i^+)$}; \node[int](i-)[left of=i+,node distance=1.7cm]{\small$Z_{s+\widetilde{\sigma}_i}^\infty(t_i^-)$}; \node[](intermediate2)[left of=i-,node distance=2.2cm]{...}; \node[int](end)[left of=intermediate2,node distance=2.5cm]{\small$Z_{s+\widetilde{\sigma}_k}^\infty(t_{k+1}^+)$}; \path[<-] (1+) edge node {\tiny$t_{0}-t_1$} (0-); \path[<-] (intermediate1) edge node {\tiny$t_{1}-t_2$} (1-); \path[<-] (i+) edge node {\tiny$t_{i-1}-t_i$} (intermediate1); \path[<-] (intermediate2) edge node {\tiny$t_{i}-t_{i+1}$} (i-); \path[<-] (end) edge node {\tiny$t_{k}-t_{k+1}$} (intermediate2); \end{tikzpicture} \end{center} We give the following definition: \begin{definition}\label{Boltzmann pseudo} Let $s\in\mathbb{N}$, $Z_s=(X_s,V_s)\in\mathbb{R}^{2ds}$, $(t_1,...,t_k)\in\mathcal{T}_k(t)$, $\sigma\in S_k$, $J=(j_1,...,j_k)$, $M=(m_1,...,m_k)$, $(J,M)\in\mathcal{U}_{s,k,\sigma}$, and for each $i=1,...,k$ we consider
$(\bm{\omega}_{\sigma_i,i},\bm{v}_{\sigma_i,i})\in\mathbb{S}_{1}^{d\sigma_i-1}\times B_R^{d\sigma_i}.$ The sequence $\{Z_{s+\widetilde{\sigma}_{i-1}}^\infty(t_i^+)\}_{i=0,...,k+1}$ constructed above is called the Boltzmann hierarchy pseudo-trajectory of $Z_s$. \end{definition} \subsection{Reduction to truncated elementary observables}\label{par_reduction to truncated} We will now use the Boltzmann hierarchy pseudo-trajectory to define the BBGKY hierarchy and Boltzmann hierarchy truncated observables. The convergence proof will then be reduced to the convergence of the corresponding truncated elementary observables. Given $\ell\in\mathbb{N}$, recall the notation from \eqref{both epsilon-epsilon_0}: $$G_\ell(\epsilon_3,\epsilon_0,\delta)=G_\ell(\epsilon_3,0)\cap G_\ell(\epsilon_0,\delta).$$ Given $t\in[0,T]$, we also recall from \eqref{separated collision times} the set $\mathcal{T}_{k,\delta}(t)$ of separated collision times: \begin{equation*} \mathcal{T}_{k,\delta}(t):=\left\{(t_1,...,t_k)\in\mathcal{T}_k(t):\quad 0\leq t_{i+1}\leq t_i-\delta,\quad\forall i\in [0,k]\right\},\quad t_{k+1}=0,\quad t_0=t. \end{equation*} Consider $t\in[0,T]$, $X_s\in\Delta_s^X(\epsilon_0)$, $1\leq k\leq n$, $\sigma\in S_k$, $(J,M)\in\mathcal{U}_{s,k,\sigma}$ and $(t_1,...,t_k)\in\mathcal{T}_{k,\delta}(t)$. By Lemma \ref{initially good configurations}, for any $V_s\in\mathcal{M}_s^c(X_s)$, we have $Z_s=(X_s,V_s)\in G_s(\epsilon_3,\epsilon_0,\delta)$ which in turn implies $ Z_s^\infty(t_1^+)\in G_s(\epsilon_0,0)$ since $t_0-t_1>\delta$.
Now we observe that either \eqref{pre-delta-double-bar}, \eqref{post-delta-double-bar} from Proposition \ref{bad set double} (if the adjunction is binary), or \eqref{epsilon pre}, \eqref{epsilon post} from Proposition \ref{bad set triple} (if the adjunction is ternary), yield that there is a set $\mathcal{B}_{m_1}\left(Z_s^{\infty}\left(t_1^+\right)\right)\subseteq \mathbb{S}_1^{d\sigma_1-1}\times B_R^{d\sigma_1}$ such that: $$Z_{s+\widetilde{\sigma}_1}^\infty(t_2^+)\in G_{s+\widetilde{\sigma}_1}(\epsilon_0,0),\quad\forall (\bm{\omega}_{\sigma_1,1},\bm{v}_{\sigma_1,1})\in\mathcal{B}_{m_1}^c\left(Z_s^{\infty}\left(t_1^+\right)\right),$$ where $$\mathcal{B}_{m_1}^c\left(Z_s^{\infty}\left(t_1^+\right)\right):=(\mathbb{S}_1^{d\sigma_1-1}\times B_R^{d\sigma_1})^+\left(v_{m_1}^\infty\left(t_1^+\right)\right)\setminus \mathcal{B}_{m_1}\left(Z_s^{\infty}\left(t_1^+\right)\right).$$ Clearly this process can be iterated. In particular, given $i\in\left\{2,...,k\right\}$, we have $$Z_{s+\widetilde{\sigma}_{i-1}}^\infty(t_{i}^+)\in G_{s+\widetilde{\sigma}_{i-1}}(\epsilon_0,0),$$ so there exists a set $\mathcal{B}_{m_i}\left(Z_{s+\widetilde{\sigma}_{i-1}}^\infty\left(t_{i}^+\right)\right)\subseteq \mathbb{S}_1^{d\sigma_i-1}\times B_R^{d\sigma_i}$ such that: \begin{equation}\label{pseudo applicable} Z_{s+\widetilde{\sigma}_{i}}^\infty(t_{i+1}^+)\in G_{s+\widetilde{\sigma}_{i}}(\epsilon_0,0),\quad\forall (\bm{\omega}_{\sigma_i,i},\bm{v}_{\sigma_i,i})\in \mathcal{B}_{m_i}^c\left(Z_{s+\widetilde{\sigma}_{i-1}}^\infty\left(t_{i}^+\right)\right), \end{equation} where $$\mathcal{B}_{m_i}^c\left(Z_{s+\widetilde{\sigma}_{i-1}}^{\infty}\left(t_i^+\right)\right):=(\mathbb{S}_1^{d\sigma_i-1}\times B_R^{d\sigma_i})^+\left(v_{m_i}^\infty\left(t_i^+\right)\right)\setminus \mathcal{B}_{m_i}\left(Z_{s+\widetilde{\sigma}_{i-1}}^{\infty}\left(t_i^+\right)\right).$$ We finally obtain $Z_{s+\widetilde{\sigma}_{k}}^\infty(0^+)\in G_{s+\widetilde{\sigma}_{k}}(\epsilon_0,0)$. Let us now define the truncated elementary observables.
Heuristically, we will truncate the domains of the adjoined particles in the definition of the observables $\widetilde{I}_{s,k,R,\delta}^N$, $\widetilde{I}_{s,k,R,\delta}^\infty$, defined in \eqref{good observables BBGKY }-\eqref{good observables Boltzmann}. More precisely, consider $1\leq k\leq n$, $\sigma\in S_k$, $(J,M)\in\mathcal{U}_{s,k,\sigma}$ and $t\in [0,T]$. For $X_s\in\Delta_s^X(\epsilon_0)$, Lemma \ref{initially good configurations} implies there is a set of velocities $\mathcal{M}_s(X_s)\subseteq B_R^{ds}$ such that $Z_s=(X_s,V_s)\in G_s(\epsilon_3,\epsilon_0,\delta),\quad\forall V_s\in\mathcal{M}_s^c(X_s)$. Following the reasoning above, we define the BBGKY hierarchy truncated observables as: \begin{equation}\label{truncated BBGKY} \begin{aligned} J_{s,k,R,\delta,\sigma}^N(t,J,M)(X_s)&=\int_{\mathcal{M}_s^c(X_s)}\phi_s(V_s)\int_{\mathcal{T}_{k,\delta}(t)}T_s^{t-t_1}\widetilde{\mathcal{C}}_{s,s+\widetilde{\sigma}_{1}}^{N,R,j_1,m_1} T_{s+\widetilde{\sigma}_{1}}^{t_1-t_2}...\\ &...\widetilde{\mathcal{C}}_{s+\widetilde{\sigma}_{k-1},s+\widetilde{\sigma}_{k}}^{N,R,j_k,m_k} T_{s+\widetilde{\sigma}_{k}}^{t_k}f_{0}^{(s+\widetilde{\sigma}_{k})}(Z_s)\,dt_k...\,dt_{1}dV_s, \end{aligned} \end{equation} where for each $i=1,...,k$, we denote \begin{equation*} \begin{aligned} \widetilde{\mathcal{C}}_{s+\widetilde{\sigma}_{i-1},s+\widetilde{\sigma}_{i}}^{N,R,j_i,m_i}&g_{N,s+\widetilde{\sigma}_{i}}=\mathcal{C}_{s+\widetilde{\sigma}_{i-1},s+\widetilde{\sigma}_{i}}^{N,R,j_i,m_i}\left[g_{N,s+\widetilde{\sigma}_{i}}\mathds{1}_{(\bm{\omega}_{\sigma_i,i},\bm{v}_{\sigma_i,i})\in \mathcal{B}^c_{m_i}\left(Z_{s+\widetilde{\sigma}_{i-1}}^\infty\left(t_i^+\right)\right)}\right].
\end{aligned} \end{equation*} In the same spirit, for $X_s\in\Delta_s^X(\epsilon_0)$, we define the Boltzmann hierarchy truncated elementary observables as: \begin{equation}\label{truncated Boltzmann} \begin{aligned} J_{s,k,R,\delta,\sigma}^\infty(t,J,M)(X_s)&=\int_{\mathcal{M}_s^c(X_s)}\phi_s(V_s)\int_{\mathcal{T}_{k,\delta}(t)}S_s^{t-t_1}\widetilde{\mathcal{C}}_{s,s+\widetilde{\sigma}_{1}}^{\infty,R,j_1,m_1} S_{s+\widetilde{\sigma}_{1}}^{t_1-t_2}...\\ &...\widetilde{\mathcal{C}}_{s+\widetilde{\sigma}_{k-1},s+\widetilde{\sigma}_{k}}^{\infty,R,j_k,m_k} S_{s+\widetilde{\sigma}_{k}}^{t_k}f_{0}^{(s+\widetilde{\sigma}_{k})}(Z_s)\,dt_k...\,dt_{1}dV_s, \end{aligned} \end{equation} where for each $i=1,...,k$, we denote \begin{equation*}\widetilde{\mathcal{C}}_{s+\widetilde{\sigma}_{i-1},s+\widetilde{\sigma}_{i}}^{\infty,R,j_i,m_i}g_{s+\widetilde{\sigma}_{i}}=\mathcal{C}_{s+\widetilde{\sigma}_{i-1},s+\widetilde{\sigma}_{i}}^{\infty,R,j_i,m_i}\left[g_{s+\widetilde{\sigma}_{i}}\mathds{1}_{(\bm{\omega}_{\sigma_i,i},\bm{v}_{\sigma_i,i})\in\mathcal{B}^c_{m_i}\left(Z_{s+\widetilde{\sigma}_{i-1}}^\infty\left(t_i^+\right)\right)}\right]. \end{equation*} Recalling the observables $\widetilde{I}_{s,k,R,\delta,\sigma}^N$, $\widetilde{I}_{s,k,R,\delta,\sigma}^\infty$ from \eqref{elementary observable BBGKY}, \eqref{elementary observable Boltzmann} and using Proposition \ref{bad set double measure} or Proposition \ref{bad set triple measure}, we obtain: \begin{proposition}\label{truncated element estimate} Let $s,n\in\mathbb{N}$, $\alpha,\epsilon_0,R,\eta,\delta$ be parameters as in \eqref{choice of parameters}, $(N,\epsilon_2,\epsilon_3)$ in the scaling \eqref{scaling} with $\epsilon_2<<\epsilon_3<<\alpha$ and $t\in[0,T]$.
Then the following estimates hold: \begin{equation*} \begin{aligned}\sum_{k=1}^n\sum_{\sigma\in S_k}\sum_{(J,M)\in\mathcal{U}_{s,k,\sigma}}&\|\widetilde{I}_{s,k,R,\delta,\sigma}^N(t,J,M)-J_{s,k,R,\delta,\sigma}^N(t,J,M)\|_{L^\infty\left(\Delta_s^X\left(\epsilon_0\right)\right)}\leq \\ &\leq C_{d,s,\mu_0,T}^n\|\phi_s\|_{L^\infty_{V_s}} R^{d(s+3n)}\eta^{\frac{d-1}{4d+2}}\|F_{N,0}\|_{N,\beta_0,\mu_0}, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned}\sum_{k=1}^n\sum_{\sigma\in S_k}\sum_{(J,M)\in\mathcal{U}_{s,k,\sigma}}&\|\widetilde{I}_{s,k,R,\delta,\sigma}^\infty(t,J,M)-J_{s,k,R,\delta,\sigma}^\infty(t,J,M)\|_{L^\infty\left(\Delta_s^X\left(\epsilon_0\right)\right)}\leq \\ &\leq C_{d,s,\mu_0,T}^n\|\phi_s\|_{L^\infty_{V_s}} R^{d(s+3n)}\eta^{\frac{d-1}{4d+2}}\|F_{0}\|_{\infty,\beta_0,\mu_0}. \end{aligned} \end{equation*} \end{proposition} \begin{proof} As usual, it suffices to prove the estimate for the BBGKY hierarchy case; the Boltzmann hierarchy case follows similarly. Fix $k\in\left\{1,...,n\right\}$, $\sigma\in S_k$ and $(J,M)\in\mathcal{U}_{s,k,\sigma}$. We first estimate the difference: \begin{equation}\label{estimated difference} \widetilde{I}_{s,k,R,\delta,\sigma}^N(t,J,M)(X_s)-J_{s,k,R,\delta,\sigma}^N(t,J,M)(X_s).
\end{equation} The Cauchy-Schwarz and triangle inequalities imply \begin{align} |\langle\omega_1,v_1-v\rangle|&\leq 2R,\quad\forall \omega_1\in\mathbb{S}_1^{d-1},\quad\forall v,v_1\in B_R^d,\label{triangle on cross binary}\\ \big|b_3(\omega_1,\omega_2,v_1-v,v_2-v)\big|&\leq 4R,\quad\forall(\omega_1,\omega_2)\in\mathbb{S}_1^{2d-1}, \quad\forall v,v_1,v_2\in B_R^{d},\label{triangle on cross ternary} \end{align} so \begin{align} \int_{\mathbb{S}_1^{d-1}\times B_R^{d}}|\langle\omega_1,v_1-v\rangle|\,d\omega_1\,dv_1&\leq C_d R^{d+1}\leq C_dR^{3d},\quad\forall v\in B_R^d,\label{estimate on the rest of the terms binary}\\ \int_{\mathbb{S}_1^{2d-1}\times B_R^{2d}}|b_3(\omega_1,\omega_2,v_1-v,v_2-v)|\,d\omega_1\,d\omega_2\,dv_1\,dv_2&\leq C_d R^{2d+1}\leq C_dR^{3d},\quad\forall v\in B_R^d,\label{estimate on the rest of the terms ternary} \end{align} since $R>>1$. But in order to estimate the difference \eqref{estimated difference}, we integrate at least once over $\mathcal{B}_{m_i}\left(Z_{s+\widetilde{\sigma}_{i-1}}^\infty\left(t_{i}^+\right)\right)$ for some $i\in\left\{1,...,k\right\}$. For convenience, given $v\in\mathbb{R}^d$, let us write \begin{equation}\label{mixed crossection} b_{\sigma_i}(\bm{\omega}_{\sigma_i,i},\bm{v}_{\sigma_i,i},v):=\begin{cases} b_2(\omega_{s+\widetilde{\sigma}_i},v_{s+\widetilde{\sigma}_i}-v),\quad\text{if }\sigma_i=1,\\ b_3(\omega_{s+\widetilde{\sigma}_i-1},\omega_{s+\widetilde{\sigma}_i},v_{s+\widetilde{\sigma}_i-1}-v,v_{s+\widetilde{\sigma}_i}-v),\quad\text{if }\sigma_i=2.
\end{cases} \end{equation} Under this notation, \eqref{triangle on cross binary}-\eqref{triangle on cross ternary} together with Proposition \ref{bad set double measure} or Proposition \ref{bad set triple measure}, depending on whether the adjunction is binary or ternary, yield the estimate \begin{equation}\label{exclusion bad set 1} \begin{aligned} \int_{\mathcal{B}_{m_i}\left(Z_{s+\widetilde{\sigma}_{i-1}}^\infty\left(t_{i}^+\right)\right)}|b_{\sigma_i}(\bm{\omega}_{\sigma_i,i},\bm{v}_{\sigma_i,i},v)|\,d\bm{\omega}_{\sigma_i,i}\,d\bm{v}_{\sigma_i,i}&\leq C_d(s+\widetilde{\sigma}_{i-1})R^{d\sigma_i+1}\eta^{\frac{d-1}{2d\sigma_i+2}}\\ &\leq C_d(s+2k)R^{3d}\eta^{\frac{d-1}{4d+2}},\quad\forall v\in B_R^d, \end{aligned} \end{equation} since $R>>1$ and $\eta<<1$. Moreover, we have the elementary inequalities: \begin{align} \|f_{N,0}^{(s+\widetilde{\sigma}_k)}\|_{L^\infty}&\leq e^{-(s+\widetilde{\sigma}_k)\mu_0}\|F_{N,0}\|_{N,\beta_0,\mu_0}\leq e^{-(s+k)\mu_0}\|F_{N,0}\|_{N,\beta_0,\mu_0}\label{exclusion bad set 2 norms},\\ \int_{\mathcal{T}_{k,\delta}(t)}\,dt_1...\,dt_k&\leq\int_0^t\int_0^{t_1}...\int_0^{t_{k-1}}\,dt_1...\,dt_k=\frac{t^k}{k!}\leq\frac{T^k}{k!}\label{exclusion bad set 2 time}. \end{align} Therefore, \eqref{estimate on the rest of the terms binary}-\eqref{exclusion bad set 2 time} imply \begin{equation*} \begin{aligned} \big|&\widetilde{I}_{s,k,R,\delta,\sigma}^N(t,J,M)(X_s)-J_{s,k,R,\delta,\sigma}^N(t,J,M)(X_s)\big|\\ &\leq \|\phi_s\|_{L^\infty_{V_s}}e^{-(s+k)\mu_0}\|F_{N,0}\|_{N,\beta_0,\mu_0}C_d R^{ds}C_d^{k-1}R^{3d(k-1)}(s+2k) C_dR^{3d}\eta^{\frac{d-1}{4d+2}}\frac{T^k}{k!}\\ &\leq C_{d,s,\mu_0,T}^k\|\phi_s\|_{L^\infty_{V_s}}\frac{(s+2k)}{k!}R^{d(s+3k)}\eta^{\frac{d-1}{4d+2}} \|F_{N,0}\|_{N,\beta_0,\mu_0}.
\end{aligned} \end{equation*} Adding over all $(J,M)\in \mathcal{U}_{s,k,\sigma}$ we have $2^ks(s+\widetilde{\sigma}_1)...(s+\widetilde{\sigma}_{k-1})\leq 2^k(s+2k)^k$ contributions, thus \begin{equation*} \begin{aligned} &\sum_{(J,M)\in\mathcal{U}_{s,k,\sigma}}\|\widetilde{I}_{s,k,R,\delta,\sigma}^N(t,J,M)-J_{s,k,R,\delta,\sigma}^N(t,J,M)\|_{L^\infty\left(\Delta_s^X\left(\epsilon_0\right)\right)}\\ &\leq C_{d,s,\mu_0,T}^k\|\phi_s\|_{L^\infty_{V_s}}R^{d(s+3k)}\frac{(s+2k)^{k+1}}{k!}\eta^{\frac{d-1}{4d+2}}\|F_{N,0}\|_{N,\beta_0,\mu_0}\\ &\leq C_{d,s,\mu_0,T}^k\|\phi_s\|_{L^\infty_{V_s}}R^{d(s+3k)}\eta^{\frac{d-1}{4d+2}}\|F_{N,0}\|_{N,\beta_0,\mu_0}, \end{aligned} \end{equation*} since $$\frac{(s+2k)^{k+1}}{k!}\leq\frac{2^{k+1}(s+k)(s+k)^{k}}{k!}\leq 2^{k+1}(s+k)e^{s+k}\leq C_s^k.$$ Summing over $\sigma\in S_k$, $k=1,...,n$, we obtain the required estimate. \end{proof} In the next section, in order to conclude the convergence proof, we will estimate the differences of the corresponding BBGKY hierarchy and Boltzmann hierarchy truncated elementary observables in the scaled limit. \section{Convergence proof}\label{sec:convergence proof} Recall from Subsection \ref{par_reduction to truncated} that given $s\in\mathbb{N}$, $t\in[0,T]$, and parameters satisfying \eqref{choice of parameters}, we have reduced the convergence proof to controlling the differences: $$J_{s,k,R,\delta}^N(t,J,M)-J_{s,k,R,\delta}^\infty (t,J,M)$$ for given $1\leq k\leq n$ and $(J,M)\in\mathcal{U}_{s,k}$, where $J_{s,k,R,\delta}^N(t,J,M)$, $J_{s,k,R,\delta}^\infty (t,J,M)$ are given by \eqref{truncated BBGKY}, \eqref{truncated Boltzmann}. This will be the aim of this section.
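In outline, for each $\sigma\in S_k$ we will introduce auxiliary functionals $\widehat{J}_{s,k,R,\delta,\sigma}^N(t,J,M)$, see \eqref{auxiliary functionals} below, interpolating between the two hierarchies, and use the triangle inequality splitting
\begin{equation*}
\|J_{s,k,R,\delta,\sigma}^N-J_{s,k,R,\delta,\sigma}^\infty\|_{L^\infty\left(\Delta_s^X\left(\epsilon_0\right)\right)}\leq\|J_{s,k,R,\delta,\sigma}^N-\widehat{J}_{s,k,R,\delta,\sigma}^N\|_{L^\infty\left(\Delta_s^X\left(\epsilon_0\right)\right)}+\|\widehat{J}_{s,k,R,\delta,\sigma}^N-J_{s,k,R,\delta,\sigma}^\infty\|_{L^\infty\left(\Delta_s^X\left(\epsilon_0\right)\right)},
\end{equation*}
controlling the first term in Proposition \ref{aux estimate 1} and the second term in Proposition \ref{auxiliary estimate 2}.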
Throughout this section $s\in\mathbb{N}$, $\phi_s\in C_c(\mathbb{R}^{ds})$ will be fixed, $(N,\epsilon_2,\epsilon_3)$ are in the scaling \eqref{scaling}, $\beta_0>0$, $\mu_0\in\mathbb{R}$, $T>0$ are given by the statements of Theorem \ref{well posedness BBGKY} and Theorem \ref{well posedness boltzmann}, and the parameters $n,\delta,R,\eta,\epsilon_0,\alpha$ satisfy \eqref{choice of parameters}. \subsection{BBGKY hierarchy pseudo-trajectories and proximity to the Boltzmann hierarchy pseudo-trajectories} In the same spirit as in Subsection \ref{subsec Boltzmann pseudo}, we may define the BBGKY hierarchy pseudo-trajectory. Consider $s\in\mathbb{N}$, $(N,\epsilon_2,\epsilon_3)$ in the scaling \eqref{scaling}, $k\in\mathbb{N}$ and $t\in[0,T]$. Let us recall from \eqref{collision times} the set $$\mathcal{T}_k(t)=\left\{(t_1,...,t_k)\in\mathbb{R}^k:0=t_{k+1}<t_k<...<t_1<t_0=t\right\},$$ where we use the convention $t_0=t$ and $t_{k+1}=0$. Consider $(t_1,...,t_k)\in\mathcal{T}_k(t)$, $\sigma\in S_k$, $J=(j_1,...,j_k)$, $M=(m_1,...,m_k)$, $(J,M)\in\mathcal{U}_{s,k,\sigma}$ and for each $i=1,...,k$, we consider $(\bm{\omega}_{\sigma_i,i},\bm{v}_{\sigma_i,i})\in\mathbb{S}_{1}^{d\sigma_i-1}\times B_R^{d\sigma_{i}}.$ The process followed is similar to the construction of the Boltzmann hierarchy pseudo-trajectory. The only difference is that we take into account the diameter $\epsilon_2$ or the interaction zone $\epsilon_3$ of the adjusted particles in each step. More precisely, we inductively construct the BBGKY hierarchy pseudo-trajectory of $Z_s=(X_s,V_s)\in\mathbb{R}^{2ds}$ as follows: {\bf{Time $t_0=t$}:} We initially define $ Z_s^N(t_{0}^-)=\left(x_1^N(t_0^-),...,x_s^N(t_0^-),v_1^N(t_0^-),...,v_s^N(t_0^-)\right):=Z_s. 
$ {\bf{Time $t_i$}, $i\in\{1,...,k\}$:} Consider $i\in\left\{1,...,k\right\}$ and assume we know $$Z_{s+\widetilde{\sigma}_{i-1}}^N (t_{i-1}^-)=\left(x_1^N(t_{i-1}^-),...,x_{s+\widetilde{\sigma}_{i-1}}^N(t_{i-1}^-),v_1^N(t_{i-1}^-),...,v_{s+\widetilde{\sigma}_{i-1}}^N(t_{i-1}^-)\right).$$ We define $Z_{s+\widetilde{\sigma}_{i-1}}^N (t_{i}^+)=\left(x_1^N(t_{i}^+),...,x_{s+\widetilde{\sigma}_{i-1}}^N(t_{i}^+),v_1^N(t_{i}^+),...,v_{s+\widetilde{\sigma}_{i-1}}^N(t_{i}^+)\right)$ as: \begin{equation*} Z_{s+\widetilde{\sigma}_{i-1}}^N(t_i^+):=\left(X_{s+\widetilde{\sigma}_{i-1}}^N\left(t_{i-1}^-\right)-\left(t_{i-1}-t_i\right)V_{s+\widetilde{\sigma}_{i-1}}^N\left(t_{i-1}^-\right), V_{s+\widetilde{\sigma}_{i-1}}^N\left(t_{i-1}^-\right)\right). \end{equation*} We also define $Z_{s+\widetilde{\sigma}_{i}}^N(t_i^-)=\left(x_1^N(t_{i}^-),...,x_{s+\widetilde{\sigma}_{i}}^N(t_{i}^-),v_1^N(t_{i}^-),...,v_{s+\widetilde{\sigma}_{i}}^N(t_{i}^-)\right)$ as: \begin{equation*} \left(x_j^N(t_i^-),v_j^N(t_i^-)\right):=(x_j^N(t_i^+),v_j^N(t_i^+))\quad\forall j\in\{1,...,s+\widetilde{\sigma}_{i-1}\}\setminus\left\{m_i\right\}. \end{equation*} For the rest of the particles, we distinguish the following cases, depending on $\sigma_i$: \begin{itemize} \item $\sigma_i=1$: If $j_i=-1$: \begin{equation*} \begin{aligned} \left(x_{m_i}^N(t_i^-),v_{m_i}^N(t_i^-)\right)&:=\left(x_{m_{i}}^N(t_{i}^+),v_{m_{i}}^N(t_{i}^+)\right),\\ \left(x_{s+\widetilde{\sigma}_{i}}^N(t_i^-),v_{s+\widetilde{\sigma}_{i}}^N(t_i^-)\right)&:=\left(x_{m_{i}}^N(t_{i}^+)-\epsilon_2\omega_{s+\widetilde{\sigma}_{i}},v_{s+\widetilde{\sigma}_{i}}\right), \end{aligned} \end{equation*} while if $j_i=1$: \begin{equation*} \begin{aligned} \left(x_{m_i}^N(t_i^-),v_{m_i}^N(t_i^-)\right)&:=\left(x_{m_{i}}^N(t_{i}^+),v_{m_{i}}^{N'}(t_{i}^+)\right),\\
\left(x_{s+\widetilde{\sigma}_{i}}^N(t_i^-),v_{s+\widetilde{\sigma}_{i}}^N(t_i^-)\right)&:=\left(x_{m_{i}}^N(t_{i}^+)+\epsilon_2\omega_{s+\widetilde{\sigma}_{i}},v_{s+\widetilde{\sigma}_{i}}'\right), \end{aligned} \end{equation*} where $ (v_{m_{i}}^{N'}(t_{i}^+),v_{s+\widetilde{\sigma}_{i}}')=T_{\omega_{s+\widetilde{\sigma}_{i}}}\left(v_{m_{i}}^N(t_{i}^+),v_{s+\widetilde{\sigma}_{i}}\right). $ \item $\sigma_i=2$: If $j_i=-1$: \begin{equation*} \begin{aligned} \left(x_{m_i}^N(t_i^-),v_{m_i}^N(t_i^-)\right)&:=\left(x_{m_{i}}^N(t_{i}^+),v_{m_{i}}^N(t_{i}^+)\right),\\ \left(x_{s+\widetilde{\sigma}_{i}-1}^N(t_i^-),v_{s+\widetilde{\sigma}_{i}-1}^N(t_i^-)\right)&:=\left(x_{m_{i}}^N(t_{i}^+)-\sqrt{2}\epsilon_3\omega_{s+\widetilde{\sigma}_{i}-1},v_{s+\widetilde{\sigma}_{i}-1}\right),\\ \left(x_{s+\widetilde{\sigma}_{i}}^N(t_i^-),v_{s+\widetilde{\sigma}_{i}}^N(t_i^-)\right)&:=\left(x_{m_{i}}^N(t_{i}^+)-\sqrt{2}\epsilon_3\omega_{s+\widetilde{\sigma}_{i}},v_{s+\widetilde{\sigma}_{i}}\right), \end{aligned} \end{equation*} while if $j_i=1$: \begin{equation*} \begin{aligned} \left(x_{m_i}^N(t_i^-),v_{m_i}^N(t_i^-)\right)&:=\left(x_{m_{i}}^N(t_{i}^+),v_{m_{i}}^{N*}(t_{i}^+)\right),\\ \left(x_{s+\widetilde{\sigma}_{i}-1}^N(t_i^-),v_{s+\widetilde{\sigma}_{i}-1}^N(t_i^-)\right)&:=\left(x_{m_{i}}^N(t_{i}^+)+\sqrt{2}\epsilon_3\omega_{s+\widetilde{\sigma}_{i}-1},v_{s+\widetilde{\sigma}_{i}-1}^*\right),\\ \left(x_{s+\widetilde{\sigma}_{i}}^N(t_i^-),v_{s+\widetilde{\sigma}_{i}}^N(t_i^-)\right)&:=\left(x_{m_{i}}^N(t_{i}^+)+\sqrt{2}\epsilon_3\omega_{s+\widetilde{\sigma}_{i}},v_{s+\widetilde{\sigma}_{i}}^*\right), \end{aligned} \end{equation*} where $ (v_{m_{i}}^{N*}(t_{i}^+),v_{s+\widetilde{\sigma}_{i}-1}^*,v_{s+\widetilde{\sigma}_{i}}^*)=T_{\omega_{s+\widetilde{\sigma}_{i}-1},\omega_{s+\widetilde{\sigma}_{i}}}\left(v_{m_{i}}^N(t_{i}^+),v_{s+\widetilde{\sigma}_{i}-1},v_{s+\widetilde{\sigma}_{i}}\right).
$ \end{itemize} {\bf{Time $t_{k+1}=0$}:} We finally obtain $$Z_{s+\widetilde{\sigma}_{k}}^N(0^+)=Z_{s+\widetilde{\sigma}_{k}}^N(t_{k+1}^+)=\left(X_{s+\widetilde{\sigma}_{k}}^N\left(t_{k}^-\right)-t_kV_{s+\widetilde{\sigma}_{k}}^N\left(t_k^-\right),V_{s+\widetilde{\sigma}_{k}}^N\left(t_k^-\right)\right).$$ The process is illustrated in the following diagram: \begin{center} \begin{tikzpicture}[node distance=2.5cm,auto,>=latex']\label{bbgjy pseudo diagram} \node[int](0-){\small$ Z_s^N(t_0^-)$}; \node[int,pin={[init]above:\small$\begin{matrix}(\bm{\omega}_{\sigma_1,1},\bm{v}_{\sigma_1,1}),\\(j_1,m_1)\end{matrix}$}](1+)[left of=0-,node distance=2.3cm]{\small$Z_s^N(t_1^+)$}; \node[int](1-)[left of=1+,node distance=1.5cm]{$Z_{s+\widetilde{\sigma}_1}^N(t_1^-)$}; \node[](intermediate1)[left of=1-,node distance=2cm]{...}; \node[int,pin={[init]above:\small$\begin{matrix}(\bm{\omega}_{\sigma_i,i},\bm{v}_{\sigma_i,i}),\\(j_i,m_i)\end{matrix}$}](i+)[left of=intermediate1,node distance=2.5cm]{\small$Z_{s+\widetilde{\sigma}_{i-1}}^N(t_i^+)$}; \node[int](i-)[left of=i+,node distance=1.7cm]{\small$Z_{s+\widetilde{\sigma}_i}^N(t_i^-)$}; \node[](intermediate2)[left of=i-,node distance=2.2cm]{...}; \node[int](end)[left of=intermediate2,node distance=2.5cm]{\small$Z_{s+\widetilde{\sigma}_{k}}^N(t_{k+1}^+)$}; \path[<-] (1+) edge node {\tiny$t_{0}-t_1$} (0-); \path[<-] (intermediate1) edge node {\tiny$t_{1}-t_2$} (1-); \path[<-] (i+) edge node {\tiny$t_{i-1}-t_i$} (intermediate1); \path[<-] (intermediate2) edge node {\tiny$t_{i}-t_{i+1}$} (i-); \path[<-] (end) edge node {\tiny$t_{k}-t_{k+1}$} (intermediate2); \end{tikzpicture} \end{center} We give the following definition: \begin{definition}\label{BBGKY pseudo} Let $s\in\mathbb{N}$, $Z_s=(X_s,V_s)\in\mathbb{R}^{2ds}$, $(t_1,...,t_k)\in\mathcal{T}_k(t)$, $J=(j_1,...,j_k)$, $M=(m_1,...,m_k)$, $(J,M)\in\mathcal{U}_{s,k}$ and for each $i=1,...,k$, $\sigma\in S_k$, we consider 
$(\bm{\omega}_{\sigma_i,i},\bm{v}_{\sigma_i,i})\in\mathbb{S}_{1}^{d\sigma_i-1}\times B_R^{d\sigma_i}.$ The sequence $\{Z_{s+\widetilde{\sigma}_{i-1}}^N(t_i^+)\}_{i=0,...,k+1}$ constructed above is called the BBGKY hierarchy pseudo-trajectory of $Z_s$. \end{definition} We now state the following elementary proximity result of the corresponding BBGKY hierarchy and Boltzmann hierarchy pseudo-trajectories. \begin{lemma}\label{proximity} Let $s\in\mathbb{N}$, $Z_s=(X_s,V_s)\in\mathbb{R}^{2ds}$, $1\leq k\leq n$, $\sigma\in S_k$, $(J,M)\in \mathcal{U}_{s,k,\sigma}$, $t\in[0,T]$ and $(t_1,...,t_k)\in\mathcal{T}_{k}(t)$. For each $i=1,...,k$, consider $(\bm{\omega}_{\sigma_i,i},\bm{v}_{\sigma_i,i})\in\mathbb{S}_{1}^{d\sigma_i-1}\times\mathbb{R}^{d\sigma_i}$. Then for all $i=1,...,k$ and $\ell=1,...,s+\widetilde{\sigma}_{i-1}$, we have \begin{equation}\label{proximity claim} |x_{\ell}^N(t_{i}^+)-x_{\ell}^\infty(t_{i}^+)|\leq \sqrt{2}\epsilon_3 (i-1),\quad v_\ell^N(t_{i}^+)=v_\ell^\infty(t_{i}^+). \end{equation} Moreover, if $s<n$, then for each $i\in\{1,...,k\}$, there holds: \begin{equation}\label{total proximity} \left|X_{s+\widetilde{\sigma}_{i-1}}^N(t_i^+)-X_{s+\widetilde{\sigma}_{i-1}}^\infty(t_i^+)\right|\leq n^{3/2}\epsilon_3. \end{equation} \end{lemma} \begin{proof} We first prove \eqref{proximity claim} by induction on $i\in\left\{1,...,k\right\}$. For $i=1$ the result is trivial since the pseudo-trajectories initially coincide by construction. Assume the conclusion holds for $i\in\left\{1,...,k-1\right\}$ i.e. for all $\ell\in\left\{1,...,s+\widetilde{\sigma}_{i-1}\right\}$, there holds: \begin{equation}\label{proximity induction} |x_{\ell}^N(t_{i}^+)-x_\ell^\infty(t_{i}^+)|\leq \sqrt{2}\epsilon_3(i-1)\quad\text{and}\quad v_\ell^N(t_{i}^+)=v_{\ell}^\infty(t_{i}^+). \end{equation} We prove the conclusion holds for $(i+1)\in\{2,...,k\}$. We need to take different cases for $j_i\in\{-1,1\}$ and $\sigma_i\in\{1,2\}$. 
$\bullet\quad \sigma_i=1, j_i=-1$: For the Boltzmann hierarchy pseudo-trajectory we get \begin{equation*} \begin{aligned} & x_\ell^\infty(t_{i+1}^+)=x_\ell^\infty(t_{i}^+)-(t_{i}-t_{i+1})v_{\ell}^\infty(t_{i}^+),\quad v_{\ell}^\infty(t_{i+1}^+)=v_\ell^\infty(t_{i}^+),\quad \forall \ell\in\{1,..., s+\widetilde{\sigma}_{i-1}\}\setminus\{m_i\},\\ & x_{m_i}^\infty(t_{i+1}^+)=x_{m_i}^\infty(t_{i}^+)-(t_{i}-t_{i+1})v_{m_i}^\infty(t_{i}^+),\quad v_{m_i}^\infty(t_{i+1}^+)=v_{m_i}^\infty(t_{i}^+),\\ &x_{s+\widetilde{\sigma}_i}^\infty(t_{i+1}^+)=x_{m_i}^\infty(t_{i}^+)-(t_{i}-t_{i+1})v_{s+\widetilde{\sigma}_i},\quad v_{s+\widetilde{\sigma}_i}^\infty(t_{i+1}^+)=v_{s+\widetilde{\sigma}_i}, \end{aligned} \end{equation*} while for the BBGKY hierarchy pseudo-trajectory we get \begin{equation*} \begin{aligned} & x_\ell^N(t_{i+1}^+)=x_\ell^N(t_{i}^+)-(t_{i}-t_{i+1})v_{\ell}^N(t_{i}^+),\quad v_{\ell}^N(t_{i+1}^+)=v_\ell^N(t_{i}^-),\quad \forall \ell\in\{1,..., s+\widetilde{\sigma}_{i-1}\}\setminus\{m_i\},\\ & x_{m_i}^N(t_{i+1}^+)=x_{m_i}^N(t_{i}^+)-(t_{i}-t_{i+1})v_{m_i}^N(t_{i}^+),\quad v_{m_i}^N(t_{i+1}^+)=v_{m_i}^N(t_{i}^-),\\ &x_{s+\widetilde{\sigma}_i}^N(t_{i+1}^+)=x_{m_i}^N(t_{i}^+)-(t_{i}-t_{i+1})v_{s+\widetilde{\sigma}_i}-\epsilon_2\omega_{s+\widetilde{\sigma}_i},\quad v_{s+\widetilde{\sigma}_i}^N(t_{i+1}^+)=v_{s+\widetilde{\sigma}_i}. \end{aligned} \end{equation*} So, for any $\ell\in\{1,..., s+\widetilde{\sigma}_{i-1}\}$, the induction assumption \eqref{proximity induction} implies \begin{align*} &v_\ell^N(t_{i+1}^+)=v_\ell^N(t_{i}^+)=v_\ell^\infty(t_{i}^+)=v_\ell^\infty(t_{i+1}^+),\\ &|x_\ell^N(t_{i+1}^+)-x_\ell^\infty(t_{i+1}^+)|=|x_\ell^N(t_{i}^+)-x_\ell^\infty(t_{i}^+)|\leq\sqrt{2}\epsilon_3(i-1).
\end{align*} Moreover, since $\epsilon_2<<\epsilon_3$, for $\ell=s+\widetilde{\sigma}_i$ we get \begin{equation*} \begin{aligned} &v_{s+\widetilde{\sigma}_i}^N(t_{i+1}^+)=v_{s+\widetilde{\sigma}_i}=v_{s+\widetilde{\sigma}_i}^\infty(t_{i+1}^+),\\ & |x_{s+\widetilde{\sigma}_i}^N(t_{i+1}^+)-x_{s+\widetilde{\sigma}_i}^\infty(t_{i+1}^+)|\leq |x_{m_i}^N(t_{i}^+)-x_{m_i}^\infty(t_{i}^+)|+\epsilon_2|\omega_{s+\widetilde{\sigma}_i}|\leq \sqrt{2}\epsilon_3(i-1)+\epsilon_2<\sqrt{2}\epsilon_3 i. \end{aligned} \end{equation*} $\bullet\quad \sigma_i=1, j_i=1$: For the Boltzmann hierarchy pseudo-trajectory we get \begin{equation*} \begin{aligned} &x_\ell^\infty(t_{i+1}^+)=x_\ell^\infty(t_{i}^+)-(t_{i}-t_{i+1})v_{\ell}^\infty(t_{i}^+),\quad v_{\ell}^\infty(t_{i+1}^+)=v_\ell^\infty(t_{i}^+),\quad\forall \ell\in\{1,..., s+\widetilde{\sigma}_{i-1}\}\setminus\{m_i\},\\ &x_{m_i}^\infty(t_{i+1}^+)=x_{m_i}^\infty(t_{i}^+)-(t_{i}-t_{i+1})v_{m_i}^{\infty'}(t_{i}^+),\quad v_{m_i}^\infty(t_{i+1}^+)=v_{m_i}^{\infty'}(t_{i}^+),\\ &x_{s+\widetilde{\sigma}_i}^\infty(t_{i+1}^+)=x_{m_i}^\infty(t_{i}^+)-(t_{i}-t_{i+1})v_{s+\widetilde{\sigma}_i}',\quad v_{s+\widetilde{\sigma}_i}^\infty(t_{i+1}^+)=v_{s+\widetilde{\sigma}_i}'. \end{aligned} \end{equation*} and for the BBGKY hierarchy pseudo-trajectory we obtain \begin{equation*} \begin{aligned} &x_\ell^N(t_{i+1}^+)=x_\ell^N(t_{i}^+)-(t_{i}-t_{i+1})v_{\ell}^N(t_{i}^+),\quad v_{\ell}^N(t_{i+1}^+)=v_\ell^N(t_{i}^+),\quad \forall \ell\in\{1,..., s+\widetilde{\sigma}_{i-1}\}\setminus\{m_i\},\\ &x_{m_i}^N(t_{i+1}^+)=x_{m_i}^N(t_{i}^+)-(t_{i}-t_{i+1})v_{m_i}^{N'}(t_{i}^+),\quad v_{m_i}^N(t_{i+1}^+)=v_{m_i}^{N'}(t_{i}^+),\\ &x_{s+\widetilde{\sigma}_i}^N(t_{i+1}^+)=x_{m_i}^N(t_{i}^+)-(t_{i}-t_{i+1})v_{s+\widetilde{\sigma}_i}'+\epsilon_2\omega_{s+\widetilde{\sigma}_i}, \quad v_{s+\widetilde{\sigma}_i}^N(t_{i+1}^+)=v_{s+\widetilde{\sigma}_i}'. 
\end{aligned} \end{equation*} For $\ell\in\left\{1,...,s+\widetilde{\sigma}_{i-1}\right\}\setminus\left\{m_i\right\}$, the induction assumption \eqref{proximity induction} yields \begin{equation*} \begin{aligned} &v_\ell^N(t_{i+1}^+)=v_\ell^N(t_{i}^+)=v_{\ell}^\infty(t_{i}^+)=v_{\ell}^\infty(t_{i+1}^+),\\ &|x_\ell^N(t_{i+1}^+)-x_\ell^\infty(t_{i+1}^+)|=|x_\ell^N(t_{i}^+)-x_\ell^\infty(t_{i}^+)|\leq\sqrt{2}\epsilon_3(i-1), \end{aligned} \end{equation*} and for $\ell=m_i$, it yields \begin{equation*} \begin{aligned} &v_{m_i}^N(t_{i+1}^+)=v_{m_i}^{N'}(t_{i}^+)=v_{m_i}^{\infty'}(t_{i}^+)=v_{m_i}^\infty(t_{i+1}^+),\\ &|x_{m_i}^N(t_{i+1}^+)-x_{m_i}^\infty(t_{i+1}^+)|=|x_{m_i}^N(t_{i}^+)-x_{m_i}^\infty(t_{i}^+)|\leq\sqrt{2}\epsilon_3(i-1). \end{aligned} \end{equation*} Moreover, since $\epsilon_2<<\epsilon_3$, for $\ell=s+\widetilde{\sigma}_i$, we obtain \begin{equation*} \begin{aligned} &v_{s+\widetilde{\sigma}_i}^N(t_{i+1}^+)=v_{s+\widetilde{\sigma}_i}'=v_{s+\widetilde{\sigma}_i}^\infty(t_{i+1}^+),\\ &|x_{s+\widetilde{\sigma}_i}^N(t_{i+1}^+)-x_{s+\widetilde{\sigma}_i}^\infty(t_{i+1}^+)|\leq |x_{m_i}^N(t_{i}^+)-x_{m_i}^\infty(t_{i}^+)|+\epsilon_2|\omega_{s+\widetilde{\sigma}_i}|\leq \sqrt{2}\epsilon_3(i-1)+\epsilon_2<\sqrt{2}\epsilon_3 i.
\end{aligned} \end{equation*} $\bullet\quad\sigma_i=2, j_i=-1$: For the Boltzmann hierarchy pseudo-trajectory we get \begin{equation*} \begin{aligned} &x_\ell^\infty(t_{i+1}^+)=x_\ell^\infty(t_{i}^+)-(t_{i}-t_{i+1})v_{\ell}^\infty(t_{i}^+),\quad v_{\ell}^\infty(t_{i+1}^+)=v_\ell^\infty(t_{i}^+),\quad \forall \ell\in\{1,...,s+\widetilde{\sigma}_{i-1}\}\setminus\{m_i\},\\ &x_{m_i}^\infty(t_{i+1}^+)=x_{m_i}^\infty(t_{i}^+)-(t_{i}-t_{i+1})v_{m_i}^\infty(t_{i}^+),\quad v_{m_i}^\infty(t_{i+1}^+)=v_{m_i}^\infty(t_{i}^+),\\ & x_{s+\widetilde{\sigma}_i-1}^\infty(t_{i+1}^+)=x_{m_i}^\infty(t_{i}^+)-(t_{i}-t_{i+1})v_{s+\widetilde{\sigma}_i-1},\quad v_{s+\widetilde{\sigma}_i-1}^\infty(t_{i+1}^+)=v_{s+\widetilde{\sigma}_i-1},\\ &x_{s+\widetilde{\sigma}_i}^\infty(t_{i+1}^+)=x_{m_i}^\infty(t_{i}^+)-(t_{i}-t_{i+1})v_{s+\widetilde{\sigma}_i},\quad v_{s+\widetilde{\sigma}_i}^\infty(t_{i+1}^+)=v_{s+\widetilde{\sigma}_i}, \end{aligned} \end{equation*} while for the BBGKY hierarchy pseudo-trajectory we get \begin{equation*} \begin{aligned} & x_\ell^N(t_{i+1}^+)=x_\ell^N(t_{i}^+)-(t_{i}-t_{i+1})v_{\ell}^N(t_{i}^+),\quad v_{\ell}^N(t_{i+1}^+)=v_\ell^N(t_{i}^-),\quad\forall \ell\in\{1,..., s+\widetilde{\sigma}_{i-1}\}\setminus\{m_i\},\\ & x_{m_i}^N(t_{i+1}^+)=x_{m_i}^N(t_{i}^+)-(t_{i}-t_{i+1})v_{m_i}^N(t_{i}^+),\quad v_{m_i}^N(t_{i+1}^+)=v_{m_i}^N(t_{i}^-),\\ &x_{s+\widetilde{\sigma}_i-1}^N(t_{i+1}^+)=x_{m_i}^N(t_{i}^+)-(t_{i}-t_{i+1})v_{s+\widetilde{\sigma}_i-1}-\sqrt{2}\epsilon_3\omega_{s+\widetilde{\sigma}_i-1},\quad v_{s+\widetilde{\sigma}_i-1}^N(t_{i+1}^+)=v_{s+\widetilde{\sigma}_i-1},\\ &x_{s+\widetilde{\sigma}_i}^N(t_{i+1}^+)=x_{m_i}^N(t_{i}^+)-(t_{i}-t_{i+1})v_{s+\widetilde{\sigma}_i}-\sqrt{2}\epsilon_3\omega_{s+\widetilde{\sigma}_i},\quad v_{s+\widetilde{\sigma}_i}^N(t_{i+1}^+)=v_{s+\widetilde{\sigma}_i}.
\end{aligned} \end{equation*} So, for any $\ell\in\{1,..., s+\widetilde{\sigma}_{i-1}\}$, the induction assumption \eqref{proximity induction} implies \begin{align*} &v_\ell^N(t_{i+1}^+)=v_\ell^N(t_{i}^+)=v_\ell^\infty(t_{i}^+)=v_\ell^\infty(t_{i+1}^+),\\ &|x_\ell^N(t_{i+1}^+)-x_\ell^\infty(t_{i+1}^+)|=|x_\ell^N(t_{i}^+)-x_\ell^\infty(t_{i}^+)|\leq\sqrt{2}\epsilon_3(i-1), \end{align*} Moreover, for $\ell=s+\widetilde{\sigma}_i-1$ we get \begin{equation*} \begin{aligned} &v_{s+\widetilde{\sigma}_i-1}^N(t_{i+1}^+)=v_{s+\widetilde{\sigma}_i-1}=v_{s+\widetilde{\sigma}_i-1}^\infty(t_{i+1}^+),\\ &|x_{s+\widetilde{\sigma}_i-1}^N(t_{i+1}^+)-x_{s+\widetilde{\sigma}_i-1}^\infty(t_{i+1}^+)|\leq |x_{m_i}^N(t_{i}^+)-x_{m_i}^\infty(t_{i}^+)|+\sqrt{2}\epsilon_3|\omega_{s+\widetilde{\sigma}_i-1}|\leq \sqrt{2}\epsilon_3(i-1)+\sqrt{2}\epsilon_3=\sqrt{2}\epsilon_3 i, \end{aligned} \end{equation*} and for $\ell=s+\widetilde{\sigma}_i$ we get \begin{equation*} \begin{aligned} &v_{s+\widetilde{\sigma}_i}^N(t_{i+1}^+)=v_{s+\widetilde{\sigma}_i}=v_{s+\widetilde{\sigma}_i}^\infty(t_{i+1}^+),\\ &|x_{s+\widetilde{\sigma}_i}^N(t_{i+1}^+)-x_{s+\widetilde{\sigma}_i}^\infty(t_{i+1}^+)|\leq |x_{m_i}^N(t_{i}^+)-x_{m_i}^\infty(t_{i}^+)|+\sqrt{2}\epsilon_3|\omega_{s+\widetilde{\sigma}_i}|\leq \sqrt{2}\epsilon_3(i-1)+\sqrt{2}\epsilon_3=\sqrt{2}\epsilon_3 i. 
\end{aligned} \end{equation*} $\bullet\quad \sigma_i=2,j_i=1:$ For the Boltzmann hierarchy pseudo-trajectory we get \begin{equation*} \begin{aligned} &x_\ell^\infty(t_{i+1}^+)=x_\ell^\infty(t_{i}^+)-(t_{i}-t_{i+1})v_{\ell}^\infty(t_{i}^+),\quad v_{\ell}^\infty(t_{i+1}^+)=v_\ell^\infty(t_{i}^+),\quad\forall \ell\in\{1,..., s+\widetilde{\sigma}_{i-1}\}\setminus\{m_i\},\\ &x_{m_i}^\infty(t_{i+1}^+)=x_{m_i}^\infty(t_{i}^+)-(t_{i}-t_{i+1})v_{m_i}^{\infty*}(t_{i}^+),\quad v_{m_i}^\infty(t_{i+1}^+)=v_{m_i}^{\infty*}(t_{i}^+),\\ & x_{s+\widetilde{\sigma}_{i}-1}^\infty(t_{i+1}^+)=x_{m_i}^\infty(t_{i}^+)-(t_{i}-t_{i+1})v_{s+\widetilde{\sigma}_{i}-1}^*,\\&v_{s+\widetilde{\sigma}_{i}-1}^\infty(t_{i+1}^+)=v_{s+\widetilde{\sigma}_{i}-1}^*,\\ &x_{s+\widetilde{\sigma}_i}^\infty(t_{i+1}^+)=x_{m_i}^\infty(t_{i}^+)-(t_{i}-t_{i+1})v_{s+\widetilde{\sigma}_i}^*,\quad v_{s+\widetilde{\sigma}_i}^\infty(t_{i+1}^+)=v_{s+\widetilde{\sigma}_i}^*, \end{aligned} \end{equation*} and for the BBGKY hierarchy pseudo-trajectory we obtain \begin{equation*} \begin{aligned} &x_\ell^N(t_{i+1}^+)=x_\ell^N(t_{i}^+)-(t_{i}-t_{i+1})v_{\ell}^N(t_{i}^+),\quad v_{\ell}^N(t_{i+1}^+)=v_\ell^N(t_{i}^+),\quad \forall \ell\in\{1,..., s+\widetilde{\sigma}_{i-1}\}\setminus\{m_i\},\\ &x_{m_i}^N(t_{i+1}^+)=x_{m_i}^N(t_{i}^+)-(t_{i}-t_{i+1})v_{m_i}^{N*}(t_{i}^+),\quad v_{m_i}^N(t_{i+1}^+)=v_{m_i}^{N*}(t_{i}^+),\\ & x_{s+\widetilde{\sigma}_i-1}^N(t_{i+1}^+)=x_{m_i}^N(t_{i}^+)-(t_{i}-t_{i+1})v_{s+\widetilde{\sigma}_i-1}^*+\sqrt{2}\epsilon_3\omega_{s+\widetilde{\sigma}_i-1},\\ &v_{s+\widetilde{\sigma}_i-1}^N(t_{i+1}^+)=v_{s+\widetilde{\sigma}_i-1}^*,\\ &x_{s+\widetilde{\sigma}_i}^N(t_{i+1}^+)=x_{m_i}^N(t_{i}^+)-(t_{i}-t_{i+1})v_{s+\widetilde{\sigma}_i}^*+\sqrt{2}\epsilon_3\omega_{s+\widetilde{\sigma}_i},\\ &v_{s+\widetilde{\sigma}_i}^N(t_{i+1}^+)=v_{s+\widetilde{\sigma}_i}^*.
\end{aligned} \end{equation*} For $\ell\in\left\{1,...,s+\widetilde{\sigma}_{i-1}\right\}\setminus\left\{m_i\right\}$, the induction assumption \eqref{proximity induction} yields \begin{equation*} \begin{aligned} &v_\ell^N(t_{i+1}^+)=v_\ell^N(t_{i}^+)=v_{\ell}^\infty(t_{i}^+)=v_{\ell}^\infty(t_{i+1}^+),\\ &|x_\ell^N(t_{i+1}^+)-x_\ell^\infty(t_{i+1}^+)|=|x_\ell^N(t_{i}^+)-x_\ell^\infty(t_{i}^+)|\leq\sqrt{2}\epsilon_3(i-1). \end{aligned} \end{equation*} Moreover, for $\ell=m_i$ \begin{equation*} \begin{aligned} &v_{m_i}^N(t_{i+1}^+)=v_{m_i}^{N*}(t_{i}^+)=v_{m_i}^{\infty*}(t_{i}^+)=v_{m_i}^\infty(t_{i+1}^+),\\ &|x_{m_i}^N(t_{i+1}^+)-x_{m_i}^\infty(t_{i+1}^+)|=|x_{m_i}^N(t_{i}^+)-x_{m_i}^\infty(t_{i}^+)|\leq\sqrt{2}\epsilon_3(i-1), \end{aligned} \end{equation*} for $\ell=s+\widetilde{\sigma}_i-1$ \begin{equation*} \begin{aligned} &v_{s+\widetilde{\sigma}_i-1}^N(t_{i+1}^+)=v_{s+\widetilde{\sigma}_i-1}^*=v_{s+\widetilde{\sigma}_i-1}^\infty(t_{i+1}^+),\\ &|x_{s+\widetilde{\sigma}_i-1}^N(t_{i+1}^+)-x_{s+\widetilde{\sigma}_i-1}^\infty(t_{i+1}^+)|\leq |x_{m_i}^N(t_{i}^+)-x_{m_i}^\infty(t_{i}^+)|+\sqrt{2}\epsilon_3|\omega_{s+\widetilde{\sigma}_i-1}|\leq \sqrt{2}\epsilon_3(i-1)+\sqrt{2}\epsilon_3=\sqrt{2}\epsilon_3 i, \end{aligned} \end{equation*} and for $\ell=s+\widetilde{\sigma}_i$ \begin{equation*} \begin{aligned} &v_{s+\widetilde{\sigma}_i}^N(t_{i+1}^+)=v_{s+\widetilde{\sigma}_i}^*=v_{s+\widetilde{\sigma}_i}^\infty(t_{i+1}^+),\\ &|x_{s+\widetilde{\sigma}_i}^N(t_{i+1}^+)-x_{s+\widetilde{\sigma}_i}^\infty(t_{i+1}^+)|\leq |x_{m_i}^N(t_{i}^+)-x_{m_i}^\infty(t_{i}^+)|+\sqrt{2}\epsilon_3|\omega_{s+\widetilde{\sigma}_i}|\leq \sqrt{2}\epsilon_3(i-1)+\sqrt{2}\epsilon_3=\sqrt{2}\epsilon_3 i. \end{aligned} \end{equation*} Combining all cases, \eqref{proximity claim} is proved by induction.
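Let us note that, in every case, one induction step amounts to the same mechanism: previously existing particles keep equal velocities in both hierarchies and are transported with these common velocities, so their position discrepancies are conserved, while each newly adjoined particle inherits the discrepancy of particle $m_i$ up to a shift of size at most $\epsilon_2$ or $\sqrt{2}\epsilon_3$. Since $\epsilon_2<<\epsilon_3$, this gives the one-step bound
\begin{equation*}
\max_{1\leq\ell\leq s+\widetilde{\sigma}_{i}}|x_{\ell}^N(t_{i+1}^+)-x_{\ell}^\infty(t_{i+1}^+)|\leq\max_{1\leq\ell\leq s+\widetilde{\sigma}_{i-1}}|x_{\ell}^N(t_{i}^+)-x_{\ell}^\infty(t_{i}^+)|+\sqrt{2}\epsilon_3\leq\sqrt{2}\epsilon_3 i,
\end{equation*}
which is precisely the inductive step carried out above.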
To prove \eqref{total proximity}, it suffices to add for $\ell=1,...,s+\widetilde{\sigma}_{i-1},$ and use the facts $1\leq i\leq k-1$, $\widetilde{\sigma}_{i-1}<\widetilde{\sigma}_i\leq\widetilde{\sigma}_{k-1}<2k\leq 2n$, from \eqref{bound on sigma}, and the assumption $s<n$. \end{proof} \subsection{Reformulation in terms of pseudo-trajectories} We will now re-write the BBGKY hierarchy and Boltzmann hierarchy truncated elementary observables in terms of pseudo-trajectories. Let $s\in\mathbb{N}$ and assume $s<n$. For the Boltzmann hierarchy case, there is always free flow between the collision times. Therefore, recalling \eqref{truncated Boltzmann} and \eqref{mixed crossection}, for $X_s\in\Delta_s^X(\epsilon_0)$, $1\leq k\leq n$, $\sigma\in S_k$, $(J,M)\in\mathcal{U}_{s,k,\sigma}$, $t\in [0,T]$ and $(t_1,...,t_k)\in\mathcal{T}_{k,\delta}(t)$, the Boltzmann hierarchy truncated elementary observable can be equivalently written as: \begin{equation}\label{truncated elementary boltzmann} \begin{aligned} &J_{s,k,R,\delta,\sigma}^\infty(t,J,M)(X_s)=\int_{\mathcal{M}_s^c(X_s)}\phi_s(V_s)\int_{\mathcal{T}_{k,\delta}(t)}\int_{\mathcal{B}_{m_1}^c\left(Z_{s}^\infty\left(t_1^+\right)\right)}...\int_{\mathcal{B}_{m_k}^c\left(Z_{s+\widetilde{\sigma}_{k-1}}^\infty\left(t_{k}^+\right)\right)}\\ &\times\prod_{i=1}^{k}b_{\sigma_i}^+\left(\bm{\omega}_{\sigma_i,i},\bm{v}_{\sigma_i,i},v_{m_{i}}^\infty\left(t_i^+\right) \right) f_{0}^{(s+\widetilde{\sigma}_{k})}\left(Z_{s+\widetilde{\sigma}_{k}}^\infty \left(0^+\right)\right)\prod_{i=1}^{k}\left(\,d\bm{\omega}_{\sigma_i,i}\,d\bm{v}_{\sigma_i,i}\right)\,dt_k...\,dt_1\,dV_s. \end{aligned} \end{equation} Now we shall see that due to Lemma \ref{proximity}, it is possible to make a similar expansion for the BBGKY hierarchy truncated elementary observables as well. 
More precisely, fix $X_s\in \Delta_s^X(\epsilon_0)$, $1\leq k\leq n$, $\sigma\in S_k$, $(J,M)\in\mathcal{U}_{s,k,\sigma}$, $t\in[0,T]$ and $(t_1,...,t_k)\in\mathcal{T}_{k,\delta}(t)$. Consider $(N,\epsilon_2,\epsilon_3)$ in the scaling \eqref{scaling} such that $\epsilon_2<<\eta^2\epsilon_3$ and $n^{3/2}\epsilon_3<<\alpha$. By Lemma \ref{initially good configurations}, given $V_s\in\mathcal{M}_s^c(X_s)$, we have $Z_s\in G_s(\epsilon_3,\epsilon_0,\delta).$ By the definition of the set $G_s(\epsilon_3,\epsilon_0,\delta)$, see \eqref{both epsilon-epsilon_0}, and the fact that $\epsilon_2<<\epsilon_3$, we have \begin{equation*} Z_s\in G_s(\epsilon_3,\epsilon_0,\delta)\Rightarrow Z_s(\tau)\in\mathring{\mathcal{D}}_{s,\epsilon_2,\epsilon_3},\quad\forall \tau\geq 0, \end{equation*} thus \begin{equation}\label{equality of the flows k=0} \Psi_{s}^{\tau-t_0}Z_{s}^N\left(t_0^-\right)=\Phi_{s}^{\tau-t_0}Z_{s}^N\left(t_0^-\right),\quad\forall \tau\in [t_{1},t_{0}] \end{equation} where $\Psi_{s}$, given in \eqref{liouville operator}, denotes the $s$-particle $(\epsilon_2,\epsilon_3)$-interaction zone flow and $\Phi_{s}$, given in \eqref{free flow operator}, denotes the $s$-particle free flow respectively. We also have $$Z_s=(X_s,V_s)\in G_s(\epsilon_3,\epsilon_0,\delta)\Rightarrow Z_{s}^\infty(t_1^+)\in G_s(\epsilon_0,0).$$ For all $i\in\{1,...,k\}$ inductive application of Proposition \ref{bad set double} or Proposition \ref{bad set triple}, depending on whether the adjunction is binary or ternary, implies that \begin{equation}\label{remak on Boltz pseudo} Z_{s+\widetilde{\sigma}_i}^\infty(t_{i+1}^+)\in G_{s+\widetilde{\sigma}_i}(\epsilon_0,0),\quad\forall (\bm{\omega}_{\sigma_i,i},\bm{v}_{\sigma_i,i})\in \mathcal{B}_{m_i}^c(Z_{s+\widetilde{\sigma}_{i-1}}^\infty(t_i^+)). 
\end{equation} Since we have assumed $n^{3/2}\epsilon_3<<\alpha$ and $s<n$, \eqref{total proximity} from Lemma \ref{proximity} implies \begin{equation}\label{N-positions} \left|X_{s+\widetilde{\sigma}_{i-1}}^N(t_{i}^+)-X_{s+\widetilde{\sigma}_{i-1}}^\infty(t_{i}^+)\right|\leq\frac{\alpha}{2},\quad\forall i=1,...,k. \end{equation} Then, \eqref{pre-0-double}, \eqref{post-0-double} from Proposition \ref{bad set double}, or \eqref{in phase pre}, \eqref{in phase post} from Proposition \ref{bad set triple}, depending on whether the adjunction is binary or ternary, yield that for any $i=1,...,k$, we have $$\Psi_{s+\widetilde{\sigma}_i}^{\tau-t_i}Z_{s+\widetilde{\sigma}_i}^N\left(t_i^-\right)=\Phi_{s+\widetilde{\sigma}_i}^{\tau-t_i}Z_{s+\widetilde{\sigma}_i}^N\left(t_i^-\right),\quad\forall \tau\in [t_{i+1},t_{i}],$$ where $\Psi_{s+\widetilde{\sigma}_i}$ and $\Phi_{s+\widetilde{\sigma}_i}$ denote the $(s+\widetilde{\sigma}_i)$-particle $(\epsilon_2,\epsilon_3)$-flow and the $(s+\widetilde{\sigma}_i)$-particle free flow, given in \eqref{liouville operator} and \eqref{free flow operator} respectively. In other words the backwards $(\epsilon_2,\epsilon_3)$-flow coincides with the free flow in $[t_{i+1},t_i]$. 
Finally, Lemma \ref{proximity} also implies that $$v_{m_i}^N(t_i^+)=v_{m_i}^\infty(t_i^+),\quad\forall i=1,...,k.$$ Therefore, for $X_s\in\Delta_s^X(\epsilon_0)$, and $(N,\epsilon_2,\epsilon_3)$ in the scaling \eqref{scaling} with $n^{3/2}\epsilon_3<<\alpha$ and $\epsilon_2<<\eta^2\epsilon_3$, the BBGKY hierarchy truncated elementary observable can be equivalently written as: \begin{equation}\label{truncated elementary bbgky} \begin{aligned} J_{s,k,R,\delta,\sigma}^N(t,J,M)(X_s)&=\bm{A_{N,\epsilon_2,\epsilon_3}^{s,k,\sigma}}\int_{\mathcal{M}_s^c(X_s)}\phi_s(V_s)\int_{\mathcal{T}_{k,\delta}(t)}\int_{\mathcal{B}_{m_1}^c\left(Z_{s}^\infty\left(t_1^+\right)\right)}...\int_{\mathcal{B}_{m_k}^c\left(Z_{s+\widetilde{\sigma}_{k-1}}^\infty\left(t_k^+\right)\right)}\\ &\hspace{0.2cm}\times\prod_{i=1}^{k}b_{\sigma_i}^+\left(\bm{\omega}_{\sigma_i,i},\bm{v}_{\sigma_i,i},v_{m_i}^\infty\left(t_i^+\right)\right) f_{N,0}^{(s+\widetilde{\sigma}_{k})}\left(Z_{s+\widetilde{\sigma}_{k}}^N \left(0^+\right)\right)\\ &\hspace{0.6cm}\times\prod_{i=1}^{k}\left( \,d\bm{\omega}_{\sigma_i,i}\,d\bm{v}_{\sigma_i,i}\right)\,dt_k...\,dt_1\,dV_s, \end{aligned} \end{equation} where, recalling \eqref{A binary}, \eqref{A triary}, we denote \begin{equation}\label{k-A} \bm{A_{N,\epsilon_2,\epsilon_3}^{s,k,\sigma}}=\prod_{i\in\{1,...,k\}:\sigma_i=1}A_{N,\epsilon_2,s+\widetilde{\sigma}_{i-1}}^2\prod_{i\in\{1,...,k\}:\sigma_i=2}A_{N,\epsilon_3,s+\widetilde{\sigma}_{i-1}}^3. \end{equation} \begin{remark}\label{inductive scaled limit}Notice that for fixed $s\in\mathbb{N}$, $k\geq 1$ and $\sigma\in S_k$, the scaling \eqref{scaling} implies \begin{equation*} \bm{A_{N,\epsilon_2,\epsilon_3}^{s,k,\sigma}}\to 1,\quad\text{as }N\to\infty. \end{equation*} \end{remark} Let us approximate the BBGKY hierarchy truncated elementary observables by Boltzmann hierarchy truncated elementary observables by defining some auxiliary functionals. Let $s\in\mathbb{N}$ and $X_s\in\Delta_s^X(\epsilon_0)$.
For $1\leq k\leq n$, $\sigma\in S_k$ and $(J,M)\in\mathcal{U}_{s,k,\sigma}$, we define \begin{equation}\label{auxiliary functionals} \begin{aligned} &\widehat{J}_{s,k,R,\delta,\sigma}^N(t,J,M)(X_s)= \int_{\mathcal{M}_s^c(X_s)}\phi_s(V_s)\int_{\mathcal{T}_{k,\delta}(t)}\int_{\mathcal{B}_{m_1}^c\left(Z_{s}^\infty\left(t_1^+\right)\right)}...\int_{\mathcal{B}_{m_k}^c\left(Z_{s+\widetilde{\sigma}_{k-1}}^\infty\left(t_k^+\right)\right)}\\ &\times\prod_{i=1}^{k}b_{\sigma_i}^+\left(\bm{\omega}_{\sigma_i,i},\bm{v}_{\sigma_i,i},v_{m_{i}}^\infty\left(t_i^+\right)\right) f_{0}^{(s+\widetilde{\sigma}_{k})}\left(Z_{s+\widetilde{\sigma}_{k}}^N \left(0^+\right)\right)\prod_{i=1}^{k}\left(\,d\bm{\omega}_{\sigma_i,i}\,d\bm{v}_{\sigma_i,i}\right)\,dt_k...\,dt_1\,dV_s. \end{aligned} \end{equation} In other words, $\widehat{J}_{s,k,R,\delta,\sigma}^N(t,J,M)$ is obtained from \eqref{truncated elementary bbgky} by discarding the prefactor $\bm{A_{N,\epsilon_2,\epsilon_3}^{s,k,\sigma}}$ and replacing the BBGKY hierarchy initial data $f_{N,0}^{(s+\widetilde{\sigma}_{k})}$ by the Boltzmann hierarchy initial data $f_{0}^{(s+\widetilde{\sigma}_{k})}$, still evaluated along the BBGKY hierarchy pseudo-trajectory. We now show that the auxiliary functionals approximate the BBGKY hierarchy truncated elementary observables $J_{s,k,R,\delta,\sigma}^N$, defined in \eqref{truncated elementary bbgky}. \begin{proposition}\label{aux estimate 1} Let $s,n\in\mathbb{N}$, with $s<n$, $\alpha,\epsilon_0,R,\eta,\delta$ be parameters as in \eqref{choice of parameters}, and $t\in[0,T]$. Then for any $\zeta>0$, there is $N_1=N_1(\zeta,n,\alpha,\eta,\epsilon_0)\in\mathbb{N}$, such that for all $(N,\epsilon_2,\epsilon_3)$ in the scaling \eqref{scaling} with $N>N_1$, there holds: \begin{equation*} \sum_{k=1}^n\sum_{\sigma\in S_k}\sum_{(J,M)\in\mathcal{U}_{s,k,\sigma}}\|J_{s,k,R,\delta,\sigma}^N(t,J,M)-\widehat{J}_{s,k,R,\delta,\sigma}^N(t,J,M)\|_{L^\infty\left(\Delta_s^X\left(\epsilon_0\right)\right)}\leq C_{d,s,\mu_0,T}^n\|\phi_s\|_{L^\infty_{V_s}}R^{d(s+3n)}\zeta^2. \end{equation*} \end{proposition} \begin{proof} Fix $1\leq k\leq n$, $\sigma\in S_k$ and $(J,M)\in\mathcal{U}_{s,k,\sigma}$. Consider $(N,\epsilon_2,\epsilon_3)$ in the scaling \eqref{scaling}.
Remark \ref{remark for epsilons} guarantees that we can consider $N$ large enough such that $\epsilon_2<<\eta^2\epsilon_3$ and $n^{3/2}\epsilon_3<<\alpha$. The triangle inequality and the inclusion $\Delta_s^X(\epsilon_0)\subseteq\Delta_s^X(\epsilon_0/2)$ yield \begin{align} &\|J_{s,k,R,\delta,\sigma}^N(t,J,M)-\widehat{J}_{s,k,R,\delta,\sigma}^N(t,J,M)\|_{L^\infty\left(\Delta_s^X\left(\epsilon_0\right)\right)}\nonumber\\ &\leq \|J_{s,k,R,\delta,\sigma}^N(t,J,M)-\bm{A_{N,\epsilon_2,\epsilon_3}^{s,k,\sigma}}\widehat{J}_{s,k,R,\delta,\sigma}^N(t,J,M)\|_{L^\infty\left(\Delta_s^X\left(\epsilon_0/2\right)\right)}\label{final estimate 1 term 1}\\ &\hspace{1cm}+|\bm{A_{N,\epsilon_2,\epsilon_3}^{s,k,\sigma}}-1|\|\widehat{J}_{s,k,R,\delta,\sigma}^N(t,J,M)\|_{L^\infty\left(\Delta_s^X\left(\epsilon_0\right)\right)}.\label{final estimate 1 term 2} \end{align} We estimate each of the terms \eqref{final estimate 1 term 1}-\eqref{final estimate 1 term 2} separately. \textbf{Term \eqref{final estimate 1 term 1}}: Let us fix $(t_1,...,t_k)\in\mathcal{T}_{k,\delta}(t)$. Applying \eqref{pseudo applicable} for $i=k-1$, we obtain $$Z_{s+\widetilde{\sigma}_{k-1}}^\infty(t_k^+)\in G_{s+\widetilde{\sigma}_{k-1}}(\epsilon_0,0).$$ Since $s<n$ and $n^{3/2}\epsilon_3<<\alpha$, \eqref{total proximity}, applied for $i=k$, implies $$|X_{s+\widetilde{\sigma}_{k-1}}^N(t_k^+)-X_{s+\widetilde{\sigma}_{k-1}}^\infty(t_k^+)|\leq\frac{\alpha}{2}.$$ Therefore, \eqref{pre-delta-double}, \eqref{post-delta-double} from Proposition \ref{bad set double}, or \eqref{epsilon/2 pre}, \eqref{epsilon/2 post} from Proposition \ref{bad set triple}, depending on whether the adjunction is binary or ternary, imply \begin{equation}\label{G halfs} Z_{s+\widetilde{\sigma}_k}^N(0^+)\in G_{s+\widetilde{\sigma}_k}(\epsilon_0/2,0)\subseteq\Delta_{s+\widetilde{\sigma}_k}(\epsilon_0/2).
\end{equation} Thus \eqref{estimate on the rest of the terms binary}-\eqref{estimate on the rest of the terms ternary}, \eqref{exclusion bad set 2 time}, \eqref{truncated elementary bbgky}-\eqref{auxiliary functionals} and crucially \eqref{G halfs} imply that for $N$ large enough, we have \begin{equation}\label{final estimate 2} \begin{aligned} \|J_{s,k,R,\delta,\sigma}^N(t,J,M)&-\bm{A_{N,\epsilon_2,\epsilon_3}^{s,k,\sigma}}\widehat{J}_{s,k,R,\delta,\sigma}^N(t,J,M)\|_{L^\infty\left(\Delta_s^X\left(\epsilon_0/2\right)\right)}\\ &\leq \frac{C_{d,s,T}^k}{k!}\|\phi_s\|_{L^\infty_{V_s}}R^{d(s+3k)}\|f_{N,0}^{(s+\widetilde{\sigma}_k)}-f_0^{(s+\widetilde{\sigma}_k)}\|_{L^\infty(\Delta_{s+\widetilde{\sigma}_k}(\epsilon_0/2))}. \end{aligned} \end{equation} \textbf{Term \eqref{final estimate 1 term 2}}: By \eqref{exclusion bad set 2 norms}, we have $\|f_0^{(s+\widetilde{\sigma}_k)}\|_{L^\infty}\leq e^{-(s+k)\mu_0}\|F_0\|_{\infty,\beta_0,\mu_0}.$ Therefore, using \eqref{estimate on the rest of the terms binary}-\eqref{estimate on the rest of the terms ternary} and \eqref{exclusion bad set 2 time}, we obtain \begin{equation}\label{final estimate 3} \begin{aligned} \|\widehat{J}_{s,k,R,\delta,\sigma}^N(t,J,M)\|_{L^\infty\left(\Delta_s^X\left(\epsilon_0\right)\right)}\leq \frac{C_{d,s,\mu_0,T}^k}{k!}\|\phi_s\|_{L^\infty_{V_s}}R^{d(s+3k)}\|F_0\|_{\infty,\beta_0,\mu_0}. 
\end{aligned} \end{equation} Summing over all $(J,M)\in\mathcal{U}_{s,k,\sigma}$, $\sigma\in S_k$, $k=1,...,n$ and using \eqref{final estimate 2}-\eqref{final estimate 3}, we obtain the estimate \begin{equation*} \begin{aligned} \sum_{k=1}^n\sum_{\sigma\in S_k}\sum_{(J,M)\in\mathcal{U}_{s,k,\sigma}}\|&J_{s,k,R,\delta,\sigma}^N(t,J,M)-\widehat{J}_{s,k,R,\delta,\sigma}^N(t,J,M)\|_{L^\infty\left(\Delta_s^X\left(\epsilon_0\right)\right)}\leq C_{d,s,\mu_0,T}^n\|\phi_s\|_{L^\infty_{V_s}}R^{d(s+3n)}\\ &\hspace{-3cm}\times\left(\sup_{k\in\{1,...,n\}}\sup_{\sigma\in S_k}\|f_{N,0}^{(s+\widetilde{\sigma}_{k})}-f_0^{(s+\widetilde{\sigma}_{k})}\|_{L^\infty(\Delta_{s+\widetilde{\sigma}_{k}}(\epsilon_0))}+\|F_0\|_{\infty,\beta_0,\mu_0}\sup_{k\in\{1,...,n\}}\sup_{\sigma\in S_k}|\bm{A_{N,\epsilon_2,\epsilon_3}^{s,k,\sigma}}-1|\right). \end{aligned} \end{equation*} But since $n\in\mathbb{N}$, $\epsilon_0>0$ are fixed, \eqref{initial convergence to 0} implies \begin{equation*} \lim_{N\to\infty}\sup_{k\in \{1,...,n\}}\sup_{\sigma\in S_k}\|f_{N,0}^{(s+\widetilde{\sigma}_{k})}-f_0^{(s+\widetilde{\sigma}_{k})}\|_{L^\infty\left(\Delta_{s+\widetilde{\sigma}_{k}}\left(\epsilon_0\right)\right)}=0. \end{equation*} Moreover, Remark \ref{inductive scaled limit} yields \begin{equation*} \lim_{N\to\infty}\sup_{k\in \{1,...,n\}}\sup_{\sigma\in S_k}|\bm{A_{N,\epsilon_2,\epsilon_3}^{s,k,\sigma}}-1|= 0, \end{equation*} and the result follows. \end{proof} By the uniform continuity assumption, we also obtain the following estimate: \begin{proposition}\label{auxiliary estimate 2} Let $s,n\in\mathbb{N}$ with $s<n$, $\alpha,\epsilon_0,R,\eta,\delta$ be parameters as in \eqref{choice of parameters} and $t\in[0,T]$. 
Then for any $\zeta>0$, there is $N_2=N_2(\zeta,n)\in\mathbb{N}$, such that for all $(N,\epsilon_2,\epsilon_3)$ in the scaling \eqref{scaling} with $N>N_2$, there holds \begin{equation*} \sum_{k=1}^n\sum_{\sigma\in S_k}\sum_{(J,M)\in\mathcal{U}_{s,k,\sigma}}\|\widehat{J}_{s,k,R,\delta,\sigma}^N(t,J,M)-J_{s,k,R,\delta,\sigma}^\infty(t,J,M)\|_{L^\infty\left(\Delta_s^X\left(\epsilon_0\right)\right)}\leq C_{d,s,\mu_0,T}^n\|\phi_s\|_{L^\infty_{V_s}}R^{d(s+3n)}\zeta^2. \end{equation*} \end{proposition} \begin{proof} Let $\zeta>0$. Fix $1\leq k\leq n$, $\sigma\in S_k$ and $(J,M)\in\mathcal{U}_{s,k,\sigma}$. Since $s<n$, Lemma \ref{proximity} yields \begin{equation}\label{continuity for integral} |Z_{s+\widetilde{\sigma}_k}^N(0^+)-Z_{s+\widetilde{\sigma}_k}^\infty(0^+)|\leq \sqrt{6}n^{3/2}\epsilon_3,\quad\forall Z_s\in\mathbb{R}^{2ds}. \end{equation} Thus the continuity assumption \eqref{continuity assumption} on $F_0$, \eqref{continuity for integral}, the scaling \eqref{scaling}, and \eqref{epsilon with respect to N} from Remark \ref{remark for epsilons} imply that there exists $N_2=N_2(\zeta,n)\in\mathbb{N}$, such that for all $N>N_2$, we have \begin{equation}\label{continuity satisfied} |f_0^{(s+\widetilde{\sigma}_k)}(Z_{s+\widetilde{\sigma}_k}^N(0^+))-f_0^{(s+\widetilde{\sigma}_k)}(Z_{s+\widetilde{\sigma}_k}^\infty(0^+))|\leq C^{s+\widetilde{\sigma}_k-1}\zeta^2\leq C^{s+2k-1}\zeta^2,\quad\forall Z_s\in\mathbb{R}^{2ds}. \end{equation} In the same spirit as in the proof of Proposition \ref{aux estimate 1}, using \eqref{continuity satisfied}, \eqref{estimate on the rest of the terms binary}-\eqref{estimate on the rest of the terms ternary}, \eqref{exclusion bad set 2 time}, and summing over $(J,M)\in\mathcal{U}_{s,k,\sigma}$, $\sigma\in S_k$, $k=1,...,n$, we obtain the result. \end{proof} \subsection{Proof of Theorem \ref{convergence theorem}} We are now in a position to prove Theorem \ref{convergence theorem}. 
Fix $s\in\mathbb{N}$, $\phi_s\in C_c(\mathbb{R}^{ds})$ and $t\in[0,T]$. Consider $n\in\mathbb{N}$ with $s<n$, and assume there exist parameters $\alpha,\epsilon_0,R,\eta,\delta$ satisfying \eqref{choice of parameters}. Let $\zeta>0$ be small enough. The triangle inequality, Propositions \ref{reduction}, \ref{restriction to initially good conf}, \ref{truncated element estimate}, \ref{aux estimate 1}, \ref{auxiliary estimate 2}, Remark \ref{no need for k=0} and part \textit{(i)} of Proposition \ref{approximation proposition}, yield that there is $N_0(\zeta,n,\alpha,\eta,\epsilon_0)\in\mathbb{N}$ such that for all $N>N_0$, we have \begin{equation}\label{final final bounds} \begin{aligned} & \|I_s^N(t)-I_s^\infty(t)\|_{L^\infty\left(\Delta_s^X\left(\epsilon_0\right)\right)}\leq C\left(2^{-n}+e^{-\frac{\beta_0}{3}R^2}+ \delta C^n\right)+C^n R^{4dn}\eta^{\frac{d-1}{4d+2}}+C^nR^{4dn}\zeta^2, \end{aligned} \end{equation} where \begin{equation}\label{final C} C:=C_{d,s,\beta_0,\mu_0,T}\|\phi_s\|_{L^\infty_{V_s}}\max\left\{1,\|F_0\|_{\infty,\beta_0,\mu_0}\right\}>1, \end{equation} is an appropriate constant. Let us fix $\theta>0$. Recall that we have also fixed $s\in\mathbb{N}$ and $\phi_s\in C_c(\mathbb{R}^{ds})$. We will now choose parameters satisfying \eqref{choice of parameters}, depending only on $\zeta$, such that the right-hand side of \eqref{final final bounds} becomes less than $\zeta$. 
\textit{Choice of parameters}: We choose $n\in\mathbb{N}$ and the parameters $\delta,\eta,R,\epsilon_0,\alpha$ in the following order: \begin{align} \bullet\hspace{0.2cm}& \max\left\{s,\log_2(C\zeta^{-1})\right\}<< n,& \text{(this implies $s<n$, $C2^{-n}<<\zeta$)},\\ \bullet\hspace{0.2cm}& \delta<< \zeta C^{-(n+1)},&\text{(this implies $C^{n+1}\delta<<\zeta$)},\label{first parameter}\\ \bullet\hspace{0.2cm}& \eta<<\zeta^{\frac{8d+4}{d-1}},\hspace{0.2cm} R<<\zeta^{-\frac{1}{4dn}}C^{-\frac{1}{4d}},&\text{(these imply $C^nR^{4dn}\eta^{\frac{d-1}{4d+2}}<<\zeta$ and $C^nR^{4dn}\zeta^2<<\zeta$),}\\ \bullet\hspace{0.2cm}& \max\left\{1,\sqrt{3}\beta_0^{-1/2}\ln^{1/2}(C\zeta^{-1})\right\}<<R,&\text{(this implies $Ce^{-\frac{\beta_0 }{3}R^2}<<\zeta$),}\label{prefinal parameter}\\ \bullet\hspace{0.2cm}&\epsilon_0<<\eta\delta,\quad \epsilon_0<\theta,& \label{choice of epsilon0}\\ \bullet\hspace{0.2cm}&\alpha<<\epsilon_0\min\{1,R^{-1}\eta\}.&\label{final parameter} \end{align} Clearly \eqref{first parameter}-\eqref{final parameter} imply the parameters chosen satisfy \eqref{choice of parameters} and depend only on $\zeta$. Then, \eqref{final final bounds} and the choice of parameters imply that we may find $N_0(\zeta)\in\mathbb{N}$, such that for all $N>N_0$, there holds: \begin{equation*} \|I_s^N(t)-I_s^\infty(t)\|_{L^\infty\left(\Delta_s^X\left(\epsilon_0\right)\right)}<\zeta. \end{equation*} But by \eqref{choice of epsilon0}, we have $\epsilon_0<\theta$, therefore we obtain \begin{equation*} \|I_s^N(t)-I_s^\infty(t)\|_{L^\infty\left(\Delta_s^X\left(\theta\right)\right)}\leq\|I_s^N(t)-I_s^\infty(t)\|_{L^\infty\left(\Delta_s^X\left(\epsilon_0\right)\right)}<\zeta, \end{equation*} and Theorem \ref{convergence theorem} is proved. \section{Appendix} In this appendix, we present some auxiliary results which are used throughout the paper. 
\subsection{Calculation of Jacobians} We first present an elementary linear algebra result, which will be useful throughout the manuscript for the calculation of Jacobians. For a proof, see Lemma A.1 in \cite{thesis}. \begin{lemma}\label{linear algebra lemma} Let $n\in\mathbb{N}$, $\lambda\neq 0$ and $w,u\in\mathbb{R}^n$. Then \begin{equation*} \det(\lambda I_n+w u^T)=\lambda^n(1+\lambda^{-1}\langle w,u\rangle), \end{equation*} where $I_n$ is the $n\times n$ identity matrix. \end{lemma} \subsection{The binary transition map} Here, we introduce the binary transition map, which will enable us to control binary postcollisional configurations. Recall from \eqref{binary cross} the binary cross-section: $$b_2(\omega_1,\nu_1)=\langle\omega_1,\nu_1\rangle,\quad(\omega_1,\nu_1)\in\mathbb{S}_1^{d-1}\times \mathbb{R}^d.$$ Given $v_1,v_2\in\mathbb{R}^d$, we define the domain\footnote{We trivially extend the binary cross-section to any $\omega_1\in\mathbb{R}^d$.} $ \Omega:=\big\{\omega_1\in\mathbb{R}^d:|\omega_1|\leq 2,\text{ and }b_2(\omega_1,v_2-v_1)>0\big\}, $ and the set $ \mathcal{S}_{v_1,v_2}^+=\{\omega_1\in\mathbb{S}_1^{d-1}: b_2(\omega_1,v_2-v_1)>0\}\subseteq\Omega. $ We also define the smooth map $\Psi:\mathbb{R}^d\to\mathbb{R}$ by $ \Psi(\omega_1):=|\omega_1|^2. $ Notice that the unit $(d-1)$-sphere is given by a level set of $\Psi$, i.e. $ \mathbb{S}_1^{d-1}=[\Psi=1]. $ \begin{proposition}\label{transition prop} Consider $v_1,v_2\in\mathbb{R}^d$ and $r>0$ such that $ |v_1-v_2|=r. $ We define the binary transition map $\mathcal{J}_{v_1,v_2}:\Omega\to\mathbb{R}^d$ as follows\footnote{We trivially extend the binary collisional operator to any $\omega_1\in\Omega$.}: \begin{equation}\label{definition of v} \mathcal{J}_{v_1,v_2}(\omega_{1}):=r^{-1}(v_1'-v_2'),\quad\omega_1\in\Omega. \end{equation} The map $\mathcal{J}_{v_1,v_2}$ has the following properties: \begin{enumerate}[(i)] \item $\mathcal{J}_{v_1,v_2}$ is smooth in $\Omega$ with derivative bounded uniformly in $r$, i.e. 
\begin{equation}\label{matrix derivative lemma} \|D\mathcal{J}_{v_1,v_2}(\omega_1)\|_\infty\leq C_d,\quad\forall\omega_1\in\Omega, \end{equation} where $\|\cdot\|_\infty$ denotes the maximum element matrix norm of $D\mathcal{J}_{v_1,v_2}(\omega_1)$. \vspace{0.2cm} \item The Jacobian of $\mathcal{J}_{v_1,v_2}$ is given by: \begin{equation}\label{equality for jacobian}\jac(\mathcal{J}_{v_1,v_2})(\omega_1)\simeq r^{-d}b_2^d(\omega_1,v_2-v_1)>0,\quad\forall\omega_1\in\Omega. \end{equation} \item The map $\mathcal{J}_{v_1,v_2}:\mathcal{S}_{v_1,v_2}^+\to\mathbb{S}_1^{d-1}\setminus\{r^{-1}(v_1-v_2)\}$ is bijective. Moreover, there holds \begin{equation}\label{S=level sets} \mathcal{S}_{v_1,v_2}^+=[\Psi\circ\mathcal{J}_{v_1,v_2}=1]. \end{equation} \item For any measurable $g:\mathbb{R}^{d}\to[0,+\infty]$, there holds the change of variables estimate: \begin{equation}\label{substitution estimate} \int_{\mathcal{S}_{v_1,v_2}^+}(g\circ\mathcal{J}_{v_1,v_2})(\omega_1)|\jac\mathcal{J}_{v_1,v_2}(\omega_1)|\,d\omega_1\lesssim\int_{\mathbb{S}_1^{d-1}}g(\nu_1)\,d\nu_1. \end{equation} \end{enumerate} \end{proposition} \begin{proof} The proof is the binary analogue of the proof of Proposition 8.5 in \cite{ternary}. \begin{comment} We will use the following notation: \begin{equation}\label{vector notation} u=v_1-v_2. \end{equation} Notice that \begin{equation}\label{binary cross with respect to u} b_2(\omega_1,v_2-v_1)=-\langle\omega_1,u\rangle. \end{equation} We prove each claim separately: \textit{(i)}: By \eqref{definition of v}, \eqref{vector notation} and \eqref{binary formulas without}, we have \begin{equation}\label{jac after sur} \mathcal{J}_{v_1,v_2}(\omega_1)=r^{-1}\left(u-2\langle\omega_1,u\rangle\omega_1\right). \end{equation} Let us find the derivative of $\mathcal{J}_{v_1,v_2}$. 
Using \eqref{jac after sur}, we obtain \begin{equation}\label{derivative of binary transition} D\mathcal{J}_{v_1,v_2}(\omega_1)=-2r^{-1}\left(\langle\omega_1,u\rangle I_{d}+\omega_1 u^T\right), \end{equation} therefore $\mathcal{J}_{v_1,v_2}$ is smooth in $\Omega$. \textit{(ii)}: To calculate the Jacobian, we use \eqref{derivative of binary transition} and apply Lemma \ref{linear algebra lemma} for $n=d$, to obtain \begin{equation*} \jac(\mathcal{J}_{v_1,v_2})(\omega_1)=(-2r^{-1})^d\det\left(\langle\omega_1,u\rangle I_{d}+\omega_1 u^T\right)\simeq r^{-d}b_2^d(\omega_1,v_2-v_1)>0, \end{equation*} where we used \eqref{binary cross with respect to u} and the fact that $\omega_1\in\Omega$. \textit{(iii)}: Let us first prove that $\mathcal{J}_{v_1,v_2}:\mathcal{S}_{v_1,v_2}^+\to\mathbb{S}_1^{d-1}\setminus\{r^{-1}u\}$. Indeed, for $\omega_1\in\mathcal{S}_{v_1,v_2}^+$, conservation of relative velocities \eqref{relative veloc binary} and \eqref{reflection assumption} imply \begin{equation*} |\mathcal{J}_{v_1,v_2}(\omega_1)|=r^{-1}|v_1'-v_2'|=r^{-1}|v_1-v_2|=1. \end{equation*} Moreover, assuming $\mathcal{J}_{v_1,v_2}(\omega_1)=r^{-1}u$ for some $\omega_1\in\mathcal{S}_{v_1,v_2}^+$, we obtain $\langle\omega_1,u\rangle=0$, which is a contradiction. We conclude that $\mathcal{J}_{v_1,v_2}:\mathcal{S}_{v_1,v_2}^+\to\mathbb{S}_1^{d-1}\setminus\{r^{-1}u\}$. \textit{Injectivity:} Consider $\omega_1,\omega_1'\in\mathcal{S}_{v_1,v_2}^+$ such that $$\mathcal{J}_{v_1,v_2}(\omega_1)=\mathcal{J}_{v_1,v_2}(\omega_1').$$ We have \begin{align} \mathcal{J}_{v_1,v_2}(\omega_1)&=r^{-1}\left(u-2\langle\omega_1,u\rangle\omega_1\right),\nonumber\\ \mathcal{J}_{v_1,v_2}(\omega'_1)&=r^{-1}\left(u-2\langle\omega_1',u\rangle\omega_1'\right),\nonumber \end{align} thus \begin{equation}\label{injectiveness of transition} \langle\omega_1,u\rangle\omega_1=\langle\omega_1',u\rangle\omega_1'. 
\end{equation} Since $\omega_1,\omega'_1\in\mathcal{S}_{v_1,v_2}^+$, \eqref{injectiveness of transition} implies that there is $\lambda\neq 0$ such that $$\omega_1'=\lambda\omega_1.$$ Since $\omega_1,\omega_1'\in\mathcal{S}_{v_1,v_2}^+$ we also obtain that $\lambda=1$, thus $\omega_1=\omega_1'$. Therefore, $\mathcal{J}_{v_1,v_2}:\mathcal{S}_{v_1,v_2}^+\to\mathbb{S}_1^{d-1}\setminus\{r^{-1}u\}$ is injective. \textit{Surjectivity}: Consider $\nu_1\in\mathbb{S}_1^{d-1}\setminus\{r^{-1}u\}$. We investigate the possible solutions of the equation: \begin{equation}\label{wanna prove} \nu_1=\mathcal{J}_{v_1,v_2}\left(\omega_1\right), \end{equation} which is equivalent to the equation \begin{equation}\label{equivalent equation} 2\langle\omega_1,u\rangle\omega_1=u-r\nu_1. \end{equation} Since $\nu_1\neq r^{-1}u$, \eqref{equivalent equation} implies there is $\lambda\neq 0$ such that \begin{equation}\label{lambda dependence} \omega_1=\lambda(u-r\nu_1). \end{equation} Replacing \eqref{lambda dependence} into \eqref{equivalent equation}, we obtain the following equation for $\lambda$: \begin{equation}\label{what lambda} 2\lambda^2\langle u-r\nu_1,u\rangle=1. \end{equation} To show surjectivity, consider $\nu_1\in\mathbb{S}_1^{d-1}\setminus\{r^{-1}u\}$. The fact that $\nu_1\in\mathbb{S}_1^{d-1}\setminus \{r^{-1}u\}$, \eqref{reflection assumption}, and the Cauchy-Schwarz inequality yield $$\langle u,u-r\nu_1\rangle=|u|^2-r\langle u,\nu_1\rangle>0.$$ Motivated by \eqref{what lambda}, let us define $$\omega_1:=\frac{r\nu_1-u}{\sqrt{2\langle u-r\nu_1,u\rangle}}.$$ Assumption \eqref{reflection assumption}, the fact that $\nu_1\in\mathbb{S}_1^{d-1}\setminus\{r^{-1}u\}$, and a straightforward algebraic calculation imply that \begin{equation}\label{solution unitary} \omega_1\in\mathcal{S}_{v_1,v_2}^{+}, \quad\nu_1=\mathcal{J}_{v_1,v_2}(\omega_1). \end{equation} Therefore $\mathcal{J}_{v_1,v_2}:\mathcal{S}_{v_1,v_2}^+\to\mathbb{S}_{1}^{d-1}\setminus\{r^{-1}u\}$ is surjective. 
\textit{Proof of \eqref{S=level sets}}: We have already seen that $\mathcal{J}_{v_1,v_2}(\mathcal{S}_{v_1,v_2}^+)\subseteq\mathbb{S}_1^{d-1}=[\Psi=1]$, so \begin{equation}\label{transition subset} \mathcal{S}_{v_1,v_2}^+\subseteq [\Psi\circ\mathcal{J}_{v_1,v_2}=1]. \end{equation} Let us prove that $[\Psi\circ\mathcal{J}_{v_1,v_2}=1]\subseteq\mathcal{S}_{v_1,v_2}^+$ as well. Consider $\omega_1\in [\Psi\circ\mathcal{J}_{v_1,v_2}=1]$. This means that $\nu_1:=\mathcal{J}_{v_1,v_2}(\omega_1)\in\mathbb{S}_1^{d-1}$. Since $\omega_1\in \Omega$, we also have $\nu_1\neq r^{-1}u$, thus the calculation made to prove surjectivity yields that $$\omega'_1:=\frac{r\nu_1-u}{\sqrt{2\langle u-r\nu_1,u\rangle}}\in\mathcal{S}_{v_1,v_2}^+,$$ satisfies $\mathcal{J}_{v_1,v_2}(\omega'_1)=\nu_1$ as well. Since $\mathcal{J}_{v_1,v_2}$ is injective on $\mathcal{S}_{v_1,v_2}^+$, we obtain $\omega_1=\omega'_1$, thus \begin{equation}\label{transition superset} \mathcal{S}_{v_1,v_2}^+\supseteq [\Psi\circ\mathcal{J}_{v_1,v_2}=1]. \end{equation} By \eqref{transition subset}-\eqref{transition superset}, we obtain \eqref{S=level sets}. \textit{(iv)}: Since $\mathcal{J}_{v_1,v_2}:\mathcal{S}_{v_1,v_2}^+\to\mathbb{S}_1^{d-1}\setminus\{r^{-1}u\}$ is bijective, we have $$\mathcal{S}^+_{v_1,v_2}=[\Psi\circ\mathcal{J}_{v_1,v_2}=1],$$ thus using notation from \eqref{indicatrix}, we have \begin{equation}\label{card=1} \mathcal{N}_{\mathcal{J}_{v_1,v_2}}(\nu_1,[\Psi\circ\mathcal{J}_{v_1,v_2}=1])=1,\quad\forall\nu_1\in\mathbb{S}_1^{d-1}\setminus\{r^{-1}u\}. \end{equation} Moreover, we have \begin{equation*} |\nabla\Psi(\nu_1)|^2=4|\nu_1|^2, \end{equation*} hence \begin{equation}\label{psi neq 0} \nabla\Psi(\nu_1)\neq 0,\quad\forall \nu_1\in [\frac{1}{2}<\Psi<\frac{3}{2}]. 
\end{equation} By \eqref{psi neq 0}, \eqref{equality for jacobian} we may use part \textit{(i)} of Lemma \ref{substitution lemma} for the function $g$, and $\gamma=1$, $\delta=1/2$, $F=\mathcal{J}_{v_1,v_2}$, $\Psi$ given by \eqref{Psi binary transition}. We have \begin{align} &\int_{\mathcal{S}_{v_1,v_2}^+}(g\circ\mathcal{J}_{v_1,v_2})(\omega_1)|\jac\mathcal{J}_{v_1,v_2}(\omega_1)|\frac{|\nabla\Psi(\mathcal{J}_{v_1,v_2}(\omega_1))|}{|\nabla(\Psi\circ\mathcal{J}_{v_1,v_2})(\omega_1)|}\,d\omega_1\nonumber\\ &=\int_{[\Psi\circ\mathcal{J}_{v_1,v_2}=1]}(g\circ\mathcal{J}_{v_1,v_2})(\omega_1)|\jac\mathcal{J}_{v_1,v_2}(\omega_1)|\frac{|\nabla\Psi(\mathcal{J}_{v_1,v_2}(\omega_1))|}{|\nabla(\Psi\circ\mathcal{J}_{v_1,v_2})(\omega_1)|}\,d\omega_1\label{S= level sets app}\\ &=\int_{[\Psi=1]}g(\nu_1)\mathcal{N}_{\mathcal{J}_{v_1,v_2}}(\nu_1,[\Psi\circ\mathcal{J}_{v_1,v_2}=1])\,d\nu_1\label{application of sub lemma}\\ &=\int_{\mathbb{S}_1^{d-1}}g(\nu_1)\mathcal{N}_{\mathcal{J}_{v_1,v_2}}(\nu_1,[\Psi\circ\mathcal{J}_{v_1,v_2}=1])\,d\nu_1\label{apply ellipsoid}\\ &=\int_{\mathbb{S}_1^{d-1}}g(\nu_1)\,d\nu_1,\label{equality of integrals sub} \end{align} where to obtain \eqref{S= level sets app} we use \eqref{S=level sets}, to obtain \eqref{application of sub lemma} we use part \textit{(i)} of Lemma \ref{substitution lemma}, to obtain \eqref{apply ellipsoid} we use \eqref{sphere as level sets}, and to obtain \eqref{equality of integrals sub} we use \eqref{card=1}. 
By the chain rule and \eqref{matrix derivative lemma}, for all $\omega_1\in \mathcal{S}_{v_1,v_2}^+$, we obtain \begin{align*} 0\neq \frac{|\nabla(\Psi\circ \mathcal{J}_{v_1,v_{2}})(\omega_1)|}{|\nabla\Psi(\mathcal{J}_{v_1,v_2}(\omega_1))|}&=\frac{|D^T\mathcal{J}_{v_1,v_{2}}(\omega_1)\nabla\Psi(\mathcal{J}_{v_1,v_{2}}(\omega_1))|}{|\nabla\Psi(\mathcal{J}_{v_1,v_2}(\omega_1))|}\\ &\leq\frac{C_d\,\|D\mathcal{J}_{v_1,v_{2}}(\omega_1)\|_{\infty}|\nabla\Psi(\mathcal{J}_{v_1,v_2}(\omega_1))|}{|\nabla\Psi(\mathcal{J}_{v_1,v_2}(\omega_1))|}\\ &= C_d\, \|D\mathcal{J}_{v_1,v_2}(\omega_1)\|_\infty\\ &\leq C_d, \end{align*} where the constant was absorbed in the last step, and \eqref{substitution estimate} follows from \eqref{equality of integrals sub} and the above estimate. \end{comment} \end{proof}
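Although Lemma \ref{linear algebra lemma} is elementary, the inner-product correction term is easy to misremember. As a quick sanity check (an illustration added here, not part of the proof), the following self-contained Python sketch verifies the identity $\det(\lambda I_n+wu^T)=\lambda^n(1+\lambda^{-1}\langle w,u\rangle)$ on a $4\times 4$ example, computing the left-hand side with a naive Leibniz-formula determinant.

```python
from itertools import permutations

def det(m):
    # naive Leibniz-formula determinant; fine for small matrices
    n = len(m)
    total = 0.0
    for perm in permutations(range(n)):
        # sign of the permutation from its inversion count
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        sign = -1.0 if inv % 2 else 1.0
        prod = 1.0
        for i in range(n):
            prod *= m[i][perm[i]]
        total += sign * prod
    return total

n, lam = 4, 2.5
w = [0.3, -1.2, 0.7, 2.0]
u = [1.1, 0.4, -0.9, 0.5]

# A = lambda*I_n + w u^T, a rank-one perturbation of a multiple of the identity
A = [[lam * (i == j) + w[i] * u[j] for j in range(n)] for i in range(n)]

lhs = det(A)
rhs = lam**n * (1.0 + sum(wi * ui for wi, ui in zip(w, u)) / lam)
assert abs(lhs - rhs) < 1e-9
```

The same check goes through for any $\lambda\neq 0$ and any pair of vectors, since the identity is exact.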
\section{Introduction}\label{sec:intro} The so-called TTbar deformations \cite{Smirnov:2016lqw,Cavaglia:2016oda} of two-dimensional quantum field theories (QFTs) have brought about a renewed interest in the UV properties of Renormalization Group (RG) flows generated by higher-dimensional (a.k.a. ``irrelevant'') operators. The TTbar deformation is defined as the one-parameter family of formal ``actions'' $\mathcal{A}_\alpha$, determined by the flow \begin{eqnarray}\label{Aalpha} \frac{d}{d\alpha}\mathcal{A}_\alpha = \int\,(T\bar T)_\alpha (x)\,d^2 x\;, \end{eqnarray} where $(T\bar T)_\alpha (x)$ is a special composite operator built from the components of the energy-momentum tensor associated with the theory $\mathcal{A}_\alpha$ \cite{Zamolodchikov:2004ce}. The deformation \eqref{Aalpha} has a number of notable properties. The theory $\mathcal{A}_\alpha$ is ``solvable'', in the sense that certain characteristics can be found exactly in terms of the corresponding ones in the undeformed theory $\mathcal{A}_{\alpha=0}$. This is remarkable, because the deformation operator $(T\bar T)_\alpha$ has exact dimension $4$, meaning the perturbation in \eqref{Aalpha} is ``irrelevant'' in the RG sense. Normally, such deformations are expected to break the short-distance structure of the quantum field theory, generally rendering the theory UV incomplete, and possibly violating causality at short scales. The abnormal UV properties of the theory $\mathcal{A}_\alpha$ are manifest already in the short-scale behavior of its finite-size ground-state energy. If the spatial coordinate of the 2D space-time is compactified on a circle of circumference $R$, its ground-state energy $E_\alpha(R)$ is determined exactly, via the equation \cite{Smirnov:2016lqw,Cavaglia:2016oda} \begin{eqnarray}\label{burgers2} E_\alpha(R)=E_0(R-\alpha E_\alpha(R))\;, \end{eqnarray} in terms of the ground state energy $E_0(R)$ of the undeformed theory, at $\alpha=0$. 
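For orientation, it is instructive to work out \eqref{burgers2} in the simplest special case where the undeformed theory is a CFT of central charge $c$, with $E_0(R)=-\pi c/(6R)$. Then \eqref{burgers2} reduces to the quadratic equation $\alpha E^2-RE-\pi c/6=0$, and the branch which reduces to $E_0(R)$ as $\alpha\to 0$ reads $E_\alpha(R)=\big(R-\sqrt{R^2+2\pi\alpha c/3}\big)/(2\alpha)$, so that for $\alpha<0$ a square-root singularity appears at $R_*=\sqrt{2\pi|\alpha|c/3}$. The short Python sketch below (an illustrative check of ours; the function names are not from the cited papers) verifies this closed form against the implicit equation.

```python
import math

def E0(R, c=1.0):
    # ground-state (Casimir) energy of the undeformed CFT on a circle of circumference R
    return -math.pi * c / (6.0 * R)

def E_deformed(R, alpha, c=1.0):
    # closed-form solution of E = E0(R - alpha*E), i.e. alpha*E^2 - R*E - pi*c/6 = 0,
    # on the branch that reduces to E0(R) as alpha -> 0
    disc = R * R + 2.0 * math.pi * alpha * c / 3.0
    return (R - math.sqrt(disc)) / (2.0 * alpha)

c, alpha, R = 1.0, -0.1, 2.0
E = E_deformed(R, alpha, c)

# check the implicit (inviscid-Burgers-type) equation E(R) = E0(R - alpha*E(R))
assert abs(E - E0(R - alpha * E, c)) < 1e-9

# for alpha < 0 the square-root singularity sits at R_* = sqrt(2*pi*|alpha|*c/3)
R_star = math.sqrt(2.0 * math.pi * abs(alpha) * c / 3.0)
assert R_star < R  # the chosen test radius lies above the singularity
```

Below $R_*$ the discriminant turns negative and $E_\alpha(R)$ ceases to be real, which is the finite-$R$ singularity discussed next.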
The equation \eqref{burgers2} shows that, depending on the sign of the deformation parameter $\alpha$, the ground state energy either develops a square root singularity at some $R_*\sim \sqrt{|\alpha|}$, or has no short-distance singularity at all. Neither of these types of behavior is compatible with the idea of QFT as the RG flow stemming out of a UV fixed point. The theory defined by \eqref{Aalpha} therefore is not a local QFT in the Wilsonian sense \cite{Wilson:1973jj}. Moreover, at negative $\alpha$, the singularity at finite $R$ signals a very fast growth of the density of states at high energies, a common hallmark of string theories, leading to the Hagedorn transition \cite{Polchinski:1998rq}. The behavior of $E_\alpha(R)$ at positive $\alpha$ is possibly even more puzzling, as it suggests a finite number of states per unit volume, an unlikely feature if one thinks of a QFT as a system of continuously many interacting degrees of freedom, unless quantum gravity is involved\footnote{A relation of the TTbar deformation to the Jackiw-Teitelboim gravity was indeed proposed in \cite{Dubovsky:2017cnj,Dubovsky:2018bmo}.}. Therefore, the deformed theory determined by \eqref{Aalpha} cannot be considered a conventional UV complete local QFT. At the same time, however, the TTbar deformation has a number of robust features which make one reluctant to simply dismiss it as ``pathological''. It is instead tempting to think that the deformation \eqref{Aalpha} exemplifies some meaningful extension of the notion of local QFT. In particular, an interesting interpretation of the theory $\mathcal{A}_\alpha$ in terms of its gravitational dual was proposed in \cite{McGough:2016lol}, where a relation to the state of the bulk gravity in the dual theory was suggested. Several questions about 2D physics of the deformed theory need to be elucidated in order to put such suggestions on a solid ground. For example, does the deformation preserve any part of the local structure of QFT? 
Notice how the very definition \eqref{Aalpha} depends on the notion of the energy-momentum tensor, conventionally a part of such a local structure. Another important question concerns the macro-causality in 2D space-time. While the deformation \eqref{Aalpha} with positive $\alpha$ is suspected to display super-luminal propagation \cite{Dubovsky:2017cnj,McGough:2016lol}, the case of negative $\alpha$ is most likely free from this problem. We will not dwell on this question, as it is the negative-$\alpha$ deformation which will be of interest to the present discussion. In any case, we believe it is important to understand the physical origin of the above abnormal short-distance properties. Another exact result about the theory $\mathcal{A}_\alpha$ concerns the deformation of its $S$-matrix, whose elements differ from the corresponding undeformed ones by a universal phase factor, available in closed form \cite{Dubovsky:2017cnj}. In particular, the $2 \to 2$ elastic scattering amplitude has the form \begin{eqnarray}\label{sdef} S_\alpha(\theta) = S_0(\theta)\,\exp\left(-i \alpha M^2 \sinh\theta\right)\;, \end{eqnarray} where $S_0(\theta)=S_{\alpha=0}(\theta)$ is the $2\to 2$ scattering amplitude of the undeformed theory. Here $\theta=\theta_1 - \theta_2$ is the difference between rapidities of the two particles involved -- assumed for simplicity to be identical -- and $M$ denotes their mass; in what follows we set the units so that $M=1$. A notable feature of the additional phases acquired under the deformation is their abnormally fast high-energy growth, which is evident already in the form \eqref{sdef}\footnote{A similar behavior of the scattering phase was previously found in non-commutative field theories \cite{Douglas:2001ba}.}. The scattering phase in \eqref{sdef} determines the density of two-particle states, suppressing it when $\alpha>0$ but greatly enhancing it at negative $\alpha$. 
In the latter case, one might be led to believe that the Hagedorn behavior is directly related to this rapid growth of the $2\to 2$ scattering phase. One of the results of the present work is to show that the situation is more subtle: the growth of the two-particle scattering phase in \eqref{sdef} is not a necessary condition for the formation of the singularity of the finite-size energy at finite real $R$. We will study certain generalizations of the TTbar deformation which can be defined whenever the original QFT is integrable \cite{Smirnov:2016lqw}. In most such deformations, the scattering phases present a less exotic high-energy behavior -- i.e., they have a finite limit at $\theta\to\infty$ -- while, at the same time, the overall density of states nonetheless grows exponentially with the energy, leading to the Hagedorn singularity. The generalizations of the TTbar deformations we will be interested in are based on the integrability of the original QFT. This assumes that the theory possesses infinitely many conserved local currents of higher Lorentz spins $s+1$, with $s$ taking values in the set $\{s\}$ of odd natural numbers: $s=1,3,5,7,...$\footnote{Generally, the set of spins $\{s\}$ of local integrals of motion may be different in different integrable theories. Here we assume, again for simplicity, the most common situation -- represented e.g. by sinh-Gordon or sigma models -- where $\{s\}$ involves all odd natural numbers. In different models the CDD factor discussed below may be constrained by additional conditions, which however do not change the overall conclusions below.}. The deforming operators $T\bar T^{(s)}(x)$ are constructed from these currents in the exact same way as the operator $T\bar T (x)$ is built from the energy-momentum tensor, see \cite{Smirnov:2016lqw} for details. It can then be shown that the theory deformed by adding such operators retains its integrability, preserving the same set of conserved local currents. 
Therefore the deformations of an Integrable QFT (IQFT) by the operators $T\bar T^{(s)}$ generate an infinite-dimensional family of flows generalizing \eqref{Aalpha}, \begin{eqnarray}\label{Aalphas} \frac{\partial\mathcal{A}_{\{\alpha\}}}{\partial\alpha_s} = \int\,T\bar T^{(s)}_{\{\alpha\}}(x)\,d^2 x\;. \end{eqnarray} Here $\{\alpha\}$ denotes the infinite set of the deformation parameters $\{\alpha\}:=\{\alpha_s\}$, and the subscript $\{\alpha\}$ under the operator $T\bar T^{(s)}(x)$ is added to emphasize that it is constructed in terms of the conserved currents of the deformed theory $\mathcal{A}_{\{\alpha\}}$. In what follows we refer to \eqref{Aalphas} as the \emph{generalized TTbar flow}\footnote{In \cite{Conti:2019dxg}, a different family of generalizations of the TTbar flow, in which the deforming operators $T\bar T_{s}$ are asymmetrically constructed from the energy-momentum tensor and a higher-conserved current, was explored.}. For integrable theories, the infinite-parameter flow \eqref{Aalphas} generalizes the one-parameter deformation \eqref{Aalpha}. The latter corresponds to the special case $\alpha_s=0$ for $s>1$, and $\alpha_1=\alpha$. To distinguish them, below we often refer to \eqref{Aalpha} as the ``TTbar proper'', or simply TTbar, reserving the term ``Generalized TTbar'' for the generic deformation \eqref{Aalphas}. 
It was argued that the deformation \eqref{Aalphas} leads to the following deformation of the elastic two-particle $S$-matrix \begin{eqnarray}\label{sdefs} S_{\{\alpha\}}(\theta)=S_{\{0\}}(\theta)\,\Phi_{\{\alpha\}}(\theta)\,, \qquad \Phi_{\{\alpha\}}(\theta)=\exp\left\{-i \,\sum_{s\in2\mathbb{Z}+1}\,\alpha_s\,\sinh\left(s\,\theta\right)\right\}\;, \end{eqnarray} with the same notations as in \eqref{sdef} and \eqref{Aalphas}\footnote{The parameters $\alpha_s$ in \eqref{sdefs} coincide with the flow parameters defined in \eqref{Aalphas} provided a specific normalization of the fields $T\bar T^{(s)}_{\{\alpha\}}(x)$ is chosen, otherwise the terms in the sum in \eqref{sdefs} would have additional normalization-dependent numerical coefficients. The form \eqref{sdefs} was explicitly derived in \cite{Smirnov:2016lqw} for the deformed sine-Gordon model, to leading order in the deformation parameters. However, this form of the $S$-matrix deformation under the flow \eqref{Aalphas} can be proven in the general case, using the methods of \cite{Cardy:2018sdv} or the approach developed in \cite{Kruthoff:2020hsi}. We will elaborate on this point elsewhere.}. The phase factor $\Phi_{\{\alpha\}}(\theta)$ is known as a \emph{CDD factor} \cite{Castillejo:1955ed}. Generally, it is an energy-dependent phase factor $\Phi(\theta)$ which can be added to the $2\to 2$ scattering amplitude without violating the analyticity, unitarity and crossing symmetry conditions. Unitarity and crossing demand that $\Phi(\theta)$ satisfies the functional relations \begin{eqnarray}\label{cdddef} \Phi(\theta)\Phi(-\theta)=1\,, \qquad \Phi(\theta)=\Phi(i\pi-\theta)\;, \end{eqnarray} which $\Phi_{\{\alpha\}}(\theta)$ in \eqref{sdefs} obviously does term by term in the sum over $s$. 
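The term-by-term statement reduces to the elementary identity $\sinh\big(s(i\pi-\theta)\big)=-(-1)^s\sinh(s\theta)$, which singles out the odd spins: crossing would fail for even $s$. The following small numerical sketch (ours, added purely as an illustration) confirms both relations in \eqref{cdddef} for the individual factors $e^{-i\alpha_s\sinh(s\theta)}$.

```python
import cmath

def Phi_term(theta, s, alpha_s):
    # a single factor of the CDD phase in the generalized deformation:
    # exp(-i * alpha_s * sinh(s*theta))
    return cmath.exp(-1j * alpha_s * cmath.sinh(s * theta))

alpha_s = 0.3
theta = 0.9  # a generic real rapidity

for s in (1, 3, 5, 7):  # odd spins: both unitarity and crossing hold
    unit = Phi_term(theta, s, alpha_s) * Phi_term(-theta, s, alpha_s)
    cross = Phi_term(theta, s, alpha_s) - Phi_term(1j * cmath.pi - theta, s, alpha_s)
    assert abs(unit - 1) < 1e-10
    assert abs(cross) < 1e-10

# for even spin crossing fails, since sinh(s(i*pi - theta)) = -sinh(s*theta)
bad = Phi_term(theta, 2, alpha_s) - Phi_term(1j * cmath.pi - theta, 2, alpha_s)
assert abs(bad) > 1e-6
```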
Moreover, it is easy to see that (once the overall sign ambiguity is ignored) any solution of \eqref{cdddef} can be represented in the form \eqref{sdefs}, with the series in the exponential converging in some vicinity of the point $\theta=0$. However, the series does not need to converge at all $\theta$. The $S$-matrix analyticity forces $\Phi(\theta)$ to be a meromorphic function of $\theta$, with the locations of the poles constrained by the condition of macro-causality (more on this momentarily). Therefore, for \eqref{sdefs} to represent a physically sensible $S$-matrix, the sum over $s$ is allowed to have a finite domain of convergence, while its analytic continuation must admit the representation \begin{eqnarray}\label{cddg} \Phi_{\{\alpha\}}(\theta) = \Phi_{\text{pole}}(\theta) \,\Phi_{\text{entire}}(\theta)\;, \end{eqnarray} where the first factor absorbs all the poles located at finite $\theta$, whose number $N$ is in general arbitrary (possibly infinite), \begin{eqnarray}\label{phipole} \Phi_{\text{pole}}(\theta) = \prod_{p=1}^N \frac{\sinh\theta_p + \sinh\theta}{\sinh\theta_p - \sinh\theta}\;, \end{eqnarray} and \begin{eqnarray}\label{phientire} \Phi_{\text{entire}}(\theta) = \exp\left\{-i \,\sum_{s}\,a_s \,\sinh\left(s\,\theta\right)\right\}\;. \end{eqnarray} In this last factor, the series in the exponential is assumed to converge at all $\theta$, so that $\Phi_{\text{entire}}(\theta)$ represents an entire function of $\theta$. Macro-causality restricts the possible positions of the poles $\theta_p$ to either the imaginary axis $\Re \theta_p=0$, or to the strips $\Im \theta_p \in [-\pi,0] \ \text{mod} \ 2\pi$ since, by virtue of \eqref{cdddef}, $\Phi(\theta)$ is a periodic function, $\Phi(2\pi i+\theta)=\Phi(\theta)$. 
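Each pole factor in \eqref{phipole} can be checked directly against the constraints \eqref{cdddef}: unitarity follows from the oddness of $\sinh$, crossing from $\sinh(i\pi-\theta)=\sinh\theta$, and at large real $\theta$ the factor tends to the finite value $-1$. A brief numerical confirmation (again an illustration of ours, not part of the original text):

```python
import cmath

def Phi_pole(theta, theta_p):
    # a single pole factor of the CDD deformation
    return (cmath.sinh(theta_p) + cmath.sinh(theta)) / \
           (cmath.sinh(theta_p) - cmath.sinh(theta))

theta_p = 0.4 - 0.7j  # a pole position inside the macro-causality strip, Im in [-pi, 0]
theta = 0.35          # a generic real rapidity

# unitarity: Phi(theta) * Phi(-theta) = 1, since sinh is odd
assert abs(Phi_pole(theta, theta_p) * Phi_pole(-theta, theta_p) - 1) < 1e-12

# crossing: Phi(theta) = Phi(i*pi - theta), since sinh(i*pi - theta) = sinh(theta)
assert abs(Phi_pole(theta, theta_p) - Phi_pole(1j * cmath.pi - theta, theta_p)) < 1e-10

# regular high-energy behaviour: sinh(theta) dominates, so Phi -> -1 as theta -> +infty
assert abs(Phi_pole(30.0, theta_p) + 1) < 1e-10
```

The last assertion contrasts with the phase in \eqref{sdef}, whose exponent grows like $\sinh\theta$ at large rapidity.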
Let us stress here that the representation (\ref{cddg}--\ref{phientire}) of the generic CDD factor $\Phi_{\{\alpha\}}(\theta)$ differs from the one given in \eqref{sdefs} only in the parameterization: any factor (\ref{cddg}--\ref{phientire}) can be written in the form \eqref{sdefs}, with the parameters $\alpha_s$ expressed in terms of $a_s$ and $\theta_p$, and conversely any factor $\Phi_{\{\alpha\}}(\theta)$ defined in \eqref{sdefs}, being analytically continued to the whole $\theta$-plane, can be written in the form \eqref{cddg}. In the present work we focus our attention on the class of $S$-matrices \eqref{sdefs} having CDD factors \eqref{cddg} for which the entire part \eqref{phientire} is absent\footnote{A first analysis of models whose $S$-matrix is deformed by a CDD factor consisting only of a generic entire part \eqref{phientire} has been performed in \cite{Hernandez-Chifflet:2019sua}.}, \begin{eqnarray}\label{cddn} \Phi_{\{\alpha\}}(\theta)= \Phi_\text{pole}(\theta)\;, \end{eqnarray} and the product in \eqref{phipole} involves finitely many factors, i.e. $N<\infty$. Note that, unlike \eqref{sdef}, such CDD factors have regular limits at $\theta\to\pm \infty$. Therefore, if the undeformed $S$-matrix $S_0(\theta)$ behaves regularly -- presenting no abnormal growth of the scattering phase -- at large $\theta$, so does the deformed $S$-matrix $S_0(\theta)\Phi(\theta)$. We now raise the following question: how does an $S$-matrix deformation such as the one just described affect the short-distance behavior of the theory? Unfortunately, for the general TTbar deformation \eqref{Aalphas} no closed form of the finite-size energy levels similar to \eqref{burgers2} is available with which one could analyze their dependence on the size $R$ of the system.
However, given an exact expression for the deformed IQFT $S$-matrix, one can obtain the finite-size ground-state energy $E(R)$ by solving the associated Thermodynamic Bethe Ansatz (TBA) equation \cite{Yang:1968rm,Zamolodchikov:1989cf}. In general, the form of the TBA equations depends on the particle spectrum of the theory. Here we consider, for simplicity, the case of a factorizable $S$-matrix involving only one kind of particle, with mass $M=1$. In this case the two-particle $S$-matrix consists of a single amplitude $S(\theta)$, which itself satisfies the equations \eqref{cdddef}. Therefore we can limit our attention to the functions $S(\theta)$ of the form \eqref{phipole}\footnote{One can think of these as CDD deformations of the free $S$-matrix $S(\theta)=\pm 1$.}. There are two substantially different cases, depending on the sign of $S(0) = \sigma = \pm 1$. Following \cite{Zamolodchikov:1989cf}, we refer to these cases as the ``bosonic TBA'' when $\sigma=+1$ and the ``fermionic TBA'' when $\sigma=-1$. Given $S(\theta)$, let $\varphi(\theta)$ be the derivative of the scattering phase, \begin{eqnarray}\label{varphidef} \varphi(\theta) = \frac{1}{i}\,\frac{d}{d\theta} \log S(\theta)\;. \end{eqnarray} Then the TBA equation takes the form of a non-linear integral equation for a single function $\epsilon(\theta)$, the \emph{pseudo-energy}, \begin{eqnarray}\label{tbas} \epsilon(\theta)=R\,\cosh\theta - \int\,\varphi(\theta-\theta')\,L(\theta')\,\frac{d\theta'}{2\pi}\;, \end{eqnarray} where \begin{eqnarray}\label{Ldef} L(\theta) := -\sigma\,\log\left(1-\sigma\,e^{-\epsilon(\theta)}\right)\;. \end{eqnarray} The ground state energy can then be recovered from the pseudo-energy via the following expression \begin{eqnarray}\label{etbas} E(R) = -\,\int_{-\infty}^{\infty} \,\cosh\theta\,L(\theta)\,\frac{d\theta}{2\pi}\;.
\end{eqnarray} In most cases the TBA equations do not admit exact analytic solutions, but they are amenable to numerical approaches. These can yield important insight into the high-energy, \emph{viz.} short-distance, properties of the deformed theories \eqref{Aalphas}. A numerical solution can be obtained, with practically arbitrary accuracy, by numerical integration of \eqref{tbas}. This approach was employed to obtain $E(R)$ in a number of IQFT's with known $S$-matrices, see e.g. \cite{Zamolodchikov:1989cf,Zamolodchikov:1991pc}. Usually, the numerical solution is obtained by iterations, starting from a seed function, conventionally taken to be $\epsilon(\theta)=R\,\cosh\theta$, and successively substituting the result of the previous iteration into the right-hand side of \eqref{tbas}. We will review this approach in \S \ref{subsec:iterative}. If one considers the $S$-matrix associated with a UV complete local IQFT -- such as a conformal field theory (CFT) perturbed by a relevant operator, the sine-Gordon model, or an integrable sigma-model -- the iterations turn out to converge for all $R>0$, and the resulting ground-state energy $E(R)$ happens to be analytic at all positive real $R$, developing a Casimir singularity at $R=0$. But how will adding a CDD factor to the $S$-matrix affect the TBA solution? This question was addressed in the early 90's by Al. Zamolodchikov, who considered the modification of the trivial fermionic $S$-matrix $S(\theta) = -1$ by the simplest possible rational CDD factor, namely \eqref{phipole} with $N=1$. In the resulting theory, the celebrated ``staircase model'' \cite{Zamolodchikov:1991pc}, the iterative solution of the TBA still converges at all positive $R$, producing a ground-state energy $E(R)$ analytic for $R>0$. He also observed that when adding more general CDD factors the situation changes qualitatively.
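Before turning to the cases where these iterations fail, it is useful to sketch the scheme concretely. The snippet below is an illustrative implementation (not the code used for the figures in this paper) of the fermionic TBA iteration for a single CDD pole of sinh-Gordon type; the pole position $u_1$, the rapidity cutoff and the grid size are arbitrary choices:

```python
import numpy as np

def tba_energy(R, u1=-np.pi/3, cutoff=20.0, n=801, tol=1e-10, max_iter=500):
    """Iterative solution of the fermionic TBA with one CDD pole.

    Kernel: phi(theta) = -2 sin(u1) cosh(theta) / (sin(u1)^2 + sinh(theta)^2),
    seed:   eps_0(theta) = R cosh(theta).
    Returns E(R) = -int cosh(theta) L(theta) dtheta / (2 pi).
    """
    theta = np.linspace(-cutoff, cutoff, n)
    dth = theta[1] - theta[0]
    diff = theta[:, None] - theta[None, :]          # matrix of theta - theta'
    phi = -2.0*np.sin(u1)*np.cosh(diff) / (np.sin(u1)**2 + np.sinh(diff)**2)
    eps = R*np.cosh(theta)                          # seed = driving term
    for _ in range(max_iter):
        L = np.log1p(np.exp(-eps))                  # fermionic L = log(1 + e^{-eps})
        eps_new = R*np.cosh(theta) - (phi @ L)*dth/(2.0*np.pi)
        converged = np.max(np.abs(eps_new - eps)) < tol
        eps = eps_new
        if converged:
            break
    L = np.log1p(np.exp(-eps))
    return -np.sum(np.cosh(theta)*L)*dth/(2.0*np.pi)
```

On this convergent branch $E(R)$ is negative and approaches zero at large $R$, while $-6RE(R)/\pi$ approaches the effective UV central charge as $R$ decreases.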
Typically, the convergence of the iterative solution breaks down at $R$ below a certain critical value $R_*$, and the form of the numerical solution at $R>R_*$, where the iterations converge, strongly indicates the existence of a square-root singularity of $E(R)$ at $R_*$ \cite{Zamolodchikov_unpublished}. A similar observation was made in \cite{Mussardo:1999aj}, where a particular CDD deformation of the trivial bosonic $S$-matrix $S(\theta) = 1$ was studied and the numerical solution of the associated TBA equation was found consistent with the existence of a singularity at finite $R_*>0$. We wish to stress that the presence of the singularity at finite $R_*$ and, moreover, its square-root character, are features very similar to the ones displayed by $E(R)$ in the TTbar deformed QFTs, as shown in Fig \ref{ERplotTTbar} below. In this work we study a few simple cases of CDD deformed TBA equations, using a refined numerical routine based on the so-called ``pseudo-arc-length continuation'' (PALC) method. This allows one to recover solutions to the TBA equation \eqref{tbas} which are unstable under the standard iterative approach. This method is explained in detail in \S \ref{sec:num_meth}. The object of our attention will be trivial $S$-matrices $S(\theta) = \sigma = \pm 1$ deformed by CDD factors \eqref{phipole} with $N=1,2$. The case $N=1$ with $\sigma=-1$ corresponds to either the sinh-Gordon or the staircase model, depending on the position of the pole. As mentioned just above, these models do not display any abnormal short-distance behavior and were extensively studied in the literature. The bosonic TBA with $N=1$ was considered in \cite{Mussardo:1999aj} and we will comment on it in \S \ref{sec:results}, along with the $N=2$ case. Of these, we mostly address the fermionic cases, although some results for the bosonic TBA are also presented. 
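The idea of PALC is easiest to see on a toy example. Consider the scalar equation $F(x,R)=x^2-(R-1)=0$, whose two solution branches $x=\pm\sqrt{R-1}$ merge at a turning point $R_*=1$, in analogy with the two TBA branches. Instead of parameterizing solutions by $R$, PALC parameterizes the solution curve by its arc length, so the fold poses no difficulty. The sketch below is a minimal illustration with arbitrary step sizes (in the actual TBA computation the scalar $x$ is replaced by the discretized pseudo-energy); it uses a tangent predictor followed by a Newton corrector with an arc-length constraint:

```python
import numpy as np

def F(x, R):
    return x**2 - (R - 1.0)          # toy fold: branches x = +/- sqrt(R-1)

def dF(x, R):
    return np.array([2.0*x, -1.0])   # gradient (dF/dx, dF/dR)

def palc(x, R, ds=0.05, steps=120):
    """Trace F(x,R)=0 through its turning point by arc-length continuation."""
    t = np.array([0.0, -1.0])        # initial orientation: decrease R
    path = [(x, R)]
    for _ in range(steps):
        # tangent: orthogonal to grad F, oriented like the previous tangent
        t = np.linalg.solve(np.array([dF(x, R), t]), np.array([0.0, 1.0]))
        t /= np.linalg.norm(t)
        xp, Rp = x + ds*t[0], R + ds*t[1]          # predictor step
        for _ in range(20):                        # Newton corrector
            res = np.array([F(xp, Rp),
                            t[0]*(xp - x) + t[1]*(Rp - R) - ds])
            J = np.array([dF(xp, Rp), t])          # stays regular at the fold
            step = np.linalg.solve(J, res)
            xp, Rp = xp - step[0], Rp - step[1]
            if np.linalg.norm(res) < 1e-12:
                break
        x, R = xp, Rp
        path.append((x, R))
    return np.array(path)
```

Starting from $(x,R)=(2,5)$ on the upper branch, the routine passes smoothly through the fold at $R=1$ and continues onto the lower branch, where a naive iteration at fixed $R$ would fail.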
We find that, for all allowed values of the parameters $\theta_p\, (p=1,2)$, the fermionic TBA equation \eqref{tbas} at sufficiently large $R$ possesses two real solutions, or ``branches'', which merge at some finite $R=R_*$. For $R<R_*$ these branches are likely to continue as a pair of complex-conjugate solutions. Of these two real solutions at $R>R_*$, one reproduces the iterative solution of the TBA equations \eqref{tbas}. We will call this solution the ``primary branch'', while referring to the other one as the ``secondary branch''. Let us stress here that it is the primary branch which directly corresponds to the deformed theory: $E(R)$ on the primary branch represents the finite-size vacuum energy of the deformed theory (in particular, at $R\to\infty$ the effect of the deformation disappears, as expected); it also gives the specific free energy of the deformed theory at temperature $T=1/R$ (in particular, it is the primary branch solution which correctly sums up the virial expansion associated with the input particle theory). In this sense, one could call the primary branch the ``physical'' one, although we will not use such a term\footnote{ The reason is that this would imply that the secondary branch is ``unphysical'', which we are reluctant to claim. Although the secondary branch definitely does not have a direct interpretation in terms of the ``physics'' of the input S-matrix, it might very well have some physical content of its own. In fact, understanding the physical mechanism behind the secondary branch is one of the outstanding problems which remain open both for the generalized TTbar deformations and for the TTbar proper. }. The secondary branch always has lower energy $E(R)$ than the primary one, which is qualitatively similar to the behavior observed in the TTbar deformations with negative $\alpha$, see Fig \ref{ERplotTTbar}.
Since the two branches merge at some finite $R=R_*$, this point can be regarded as a ``turning point'', where the continuation along the graph of $E(R)$ turns backward into the secondary branch. This is precisely the kind of situation the PALC method is designed to deal with. The secondary branch remains real for all $R>R_*$ and, moreover, approaches a linear asymptote $\sim e_{\infty}\, R$ as $R\to \infty$. This, again, is in full qualitative agreement with the TTbar deformations, together with the important fact that the singularity of the pseudo-energy $\epsilon(\theta|R)$, viewed as a function of $R$, occurs at a value $R=R_*$ that is independent of $\theta$. Of the above features, the existence of primary and secondary branches with the turning point at finite $R_*$, independent of $\theta$, repeats \emph{verbatim} in the bosonic 2CDD model. On the other hand, we still cannot check the large $R$ behavior of the secondary branch with sufficient accuracy, due to some instability in the numerical procedure. We will return to this problem in a future work. It is likely that the general situation displayed in the models studied here, i.e. the solution of the TBA equation developing a square-root singularity at finite $R_*$, which signals the presence of a Hagedorn transition, remains qualitatively the same when more CDD poles are added in \eqref{phipole} -- with the possible exception of special domains, hypersurfaces of lower dimension in the parameter space\footnote{Examples of such cases can be found in \cite{Martins:1992ht,Martins:1992yk,Dorey:2000zb}.}. This of course will have to be carefully verified. We regard the present work as a first step in the program of systematically studying the short-distance behavior of the generalized TTbar deformations \eqref{Aalphas} of IQFTs. The qualitative similarity to the TTbar-deformed QFTs with negative $\alpha$ suggests that the same mechanism behind the formation of the Hagedorn singularities is at play in all of these models.
Understanding the physics underlying this phenomenon remains the most important open problem in this context, as well as the main motivation for the present work. \section{From TBA to Hagedorn: the TTbar case}\label{sec:TTbar} Henceforth we will assume that the theory under consideration is integrable, with a factorizable $S$-matrix. Let us briefly recall how, in this case, equation \eqref{burgers2} can be derived from the $S$-matrix deformation \eqref{sdef} via the TBA equations. We will present a somewhat simplified version of the much more general arguments of \cite{Cavaglia:2016oda} (for related work see \cite{Dubovsky:2012wk,Caselle:2013dra} and the more recent \cite{LeClair:2021wfd,LeClair:2021opx}). Whereas the analysis in \cite{Cavaglia:2016oda} applies to all the energy eigenvalues of the TTbar deformed theory \eqref{Aalpha}, we limit our considerations to the ground-state energy, which we denote as $E(R)$. The advantage is that the simple arguments presented below apply to the deformation \eqref{sdef} of an essentially generic integrable theory. The only assumptions, made for simplicity, are that the particle scattering theory associated with $\mathcal{A}_0$ involves only one kind of neutral particle, with the factorizable scattering of fermionic type\footnote{Extension to the bosonic case $S(0)=+1$ is trivial. Less straightforward but still possible is the generalization to the cases of a scattering theory involving many species of particles, including bound states, with different or equal masses. We will elaborate on such cases elsewhere.}, i.e. $S_0(0)=-1$. The goal is to emphasize some important properties of the solution which, as we will see, are shared by the TBA solutions of more general CDD deformations. The TBA equation \eqref{tbas} associated with the deformed $S$-matrix \eqref{sdef} has the following kernel \begin{eqnarray}\label{varphialpha} \varphi_\alpha (\theta-\theta') = \varphi_0 (\theta-\theta') - \alpha\,\cosh(\theta-\theta')\;.
\end{eqnarray} Recall that the ground state energy $E_\alpha(R)$ is given by \eqref{etbas}, which in our case reads \begin{eqnarray} E_\alpha (R)= - \,\int_{-\infty}^{\infty} \,\cosh\theta\,\,L_\alpha(\theta|R)\,\frac{d\theta}{2\pi}\;, \label{eq:enalpha} \end{eqnarray} where $L_\alpha (\theta|R):=\log\left(1+e^{-\epsilon_\alpha (\theta|R)}\right)$ satisfies the deformed TBA equation \eqref{tbas}, \begin{eqnarray} \epsilon_\alpha(\theta|R) = R\, \cosh\theta - \int\, \varphi_\alpha(\theta-\theta')\,L_\alpha(\theta'|R)\,\frac{d\theta'}{2\pi}\;. \end{eqnarray} Since the pseudo-energy is even, as is easily shown, we can separate the dependence on $\theta$ and $\theta'$ in the rightmost term in the kernel \eqref{varphialpha}, so that the TBA equation can be written as follows \begin{eqnarray} \epsilon_\alpha(\theta|R) = \left(R-\alpha\,E_\alpha(R)\right)\,\cosh\theta - \int\,\varphi_0(\theta-\theta')\,L_\alpha(\theta'|R)\,\frac{d\theta'}{2\pi}\;, \label{ttbartba2} \end{eqnarray} where we used the definition \eqref{eq:enalpha}. For reasons that will become clear shortly we have made explicit the fact that $\epsilon(\theta|R)$ and $L(\theta|R)$ are functions of $R$ as well as of the rapidity $\theta$. This last form \eqref{ttbartba2} shows that $\epsilon_\alpha(\theta|R)$ satisfies the same TBA equation as $\epsilon_0(\theta|R)$, only with $R$ replaced by $R-\alpha E_\alpha(R)$. It then follows that \begin{eqnarray} \epsilon_\alpha(\theta|R) = \epsilon_0(\theta|R-\alpha E_\alpha(R)) \end{eqnarray} which immediately implies the equation \eqref{burgers2} for the deformed energy. It is also worth recalling here how the singularity of $E_\alpha(R)$, signifying the Hagedorn density of states, follows from \eqref{burgers2}. This takes a particularly simple form in terms of the function $R_\alpha(E)$, inverse to the function $E_\alpha(R)$, where $\alpha$ is regarded as a fixed parameter, \begin{eqnarray}\label{burgers3} R_\alpha(E)=R_0(E)+\alpha\,E\;.
\end{eqnarray} This expression shows that the graph of the deformed function $E_\alpha(R)$ differs from the graph of $E_0(R)$ just by an affine transformation $(R,E) \to (R+\alpha E, E)$ of the $(R,E)$ plane. If we assume, as we do, that the undeformed theory $\mathcal{A}_0$ is a conventional QFT, defined \`{a} la Wilson as the RG flow from some UV fixed point down to an IR one (see \cite{Wilson:1973jj}), then the graph of $E_0(R)$ looks qualitatively as shown in Fig \ref{EvacQFT}. \begin{figure}[ht] \centering \includegraphics{Figures/e0.pdf} \caption{\small{ {Finite-size ground state energy $E_0(R)$ of a conventional Wilsonian relativistic QFT. Its $R\to 0$ behavior $-\pi c/6R$ is controlled by the UV fixed point. At large $R$, $E_0(R)$ shows the linear behavior $\simeq \varepsilon_0 R$, with the slope $\varepsilon_0$ representing the bulk vacuum energy density. We have to stress that the TBA equations actually compute the difference $E_\text{vac}(R)-\varepsilon_{0} R$, and in our subsequent analysis $E(R)$ stands for this difference. (That is why in all plots below the $R\to\infty$ slope of the primary branch is always set to zero.)}}}\label{EvacQFT} \end{figure} At large $R$ the function $E_0(R)$ approaches the linear asymptote $\varepsilon_0 R$, where $\varepsilon_0$ is the vacuum energy density of the infinite system, with the rate of the approach controlled by the IR fixed point, which, typically, is a non-critical one. On the other hand, at $R\to 0$ it diverges as the Casimir energy determined by the UV fixed point, $E_0(R) \to -\pi c/6R$, where $c$ is the Virasoro central charge of the UV fixed point CFT. Then, according to \eqref{burgers3}, the plot of $E_\alpha(R)$ will look like one of the panels \emph{a}) or \emph{b}) in Fig \ref{ERplotTTbar}, depending on the sign of $\alpha$. In what follows we will concentrate our attention on the case of negative $\alpha$, shown in panel \emph{a}).
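The square-root character of the singularity can be read off directly from \eqref{burgers3} by a short expansion. A turning point occurs at the energy $E_*$ where $R_\alpha'(E_*)=R_0'(E_*)+\alpha=0$; since $R_0'(E)>0$ along the curve of Fig \ref{EvacQFT}, such a solution exists for negative $\alpha$ but not for positive $\alpha$. Expanding around $E_*$, with $R_*=R_0(E_*)+\alpha E_*$ and assuming $R_0''(E_*)>0$ (as holds in the Casimir-dominated region, where $R_0(E)\simeq -\pi c/6E$), one finds \begin{eqnarray} R_\alpha(E)\simeq R_* + \tfrac{1}{2}\,R_0''(E_*)\,(E-E_*)^2\;, \qquad E_\alpha(R)\simeq E_*\pm\sqrt{\frac{2(R-R_*)}{R_0''(E_*)}}\;, \end{eqnarray} so that $dE_\alpha/dR$ diverges as $(R-R_*)^{-1/2}$, with the two signs corresponding to the two branches.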
Note that the curve $E_\alpha(R)$ has two branches, each of which is real for $R$ above a certain critical value $R_*$. It is the upper ``primary'' branch that corresponds to the ground state energy of the TTbar-deformed theory \eqref{Aalpha}. \begin{figure}[ht] \centering \begin{subfigure}{4cm} \includegraphics{Figures/e-a.pdf} \caption{$\alpha<0$} \end{subfigure} \hspace{2cm} \begin{subfigure}{4cm} \includegraphics{Figures/e+a.pdf} \caption{$\alpha>0$} \end{subfigure} \caption{\small{Finite-size ground state energy of the TTbar deformed theory. (a) $\alpha <0$. The graph $E_\alpha(R)$ shows the ``turning point'' at some finite $R_*$, which signals the Hagedorn transition. (b) $\alpha >0$. $E_\alpha(R)$ shows no singularity at $R=0$.}}\label{ERplotTTbar} \end{figure} The two branches merge at $R=R_*$, where the function $E_\alpha (R)$ develops a square-root branch point, i.e. the derivative $dE_\alpha(R)/dR$ diverges as $(R-R_*)^{-1/2}$. At $R<R_*$, the analytic continuation of $E_\alpha(R)$ yields complex values and the two branches are complex conjugate. It is the singularity at $R_*$ that signals the Hagedorn phenomenon in the deformed theory, which can be inferred as follows. When the Euclidean theory is considered in the geometry of a very long cylinder of circumference $R$, as shown in Fig \ref{cylinder}, its partition function $Z$ is saturated by the finite-size ground state \begin{figure}[ht] \centering \includegraphics{Figures/cylinder.pdf} \caption{\small{{The Euclidean space-time cylinder representing the finite-size geometry in our analysis. The coordinate $x$ is compactified on a circle of circumference $R$, while the length $L$ of the cylinder is assumed to be asymptotically large. In the picture where $y$ is regarded as the Euclidean ``time'' the partition function \eqref{eq:logZ} is dominated by the finite-size ground state contribution.
In the complementary picture, where $y$ is interpreted as spatial coordinate while $x$ plays the role of the Matsubara ``time'', the same partition function is given by the thermal trace \eqref{eq:Ztrace}.}}}\label{cylinder} \end{figure} \begin{eqnarray}\label{eq:logZ} -\log Z \ \simeq L\,E_\alpha(R)\;, \end{eqnarray} where $L\to \infty$ is the length of the cylinder. This corresponds to the picture in which the coordinate $y$ along the cylinder is taken as the Euclidean time. Alternatively, if one uses the picture where $x$ plays the role of Matsubara time, the same partition function is represented as the trace \begin{eqnarray}\label{eq:Ztrace} Z = \text{tr}\left(e^{-R {\hat H}_x}\right) = \int_0^\infty \,d{\cal E}\,\,\Gamma({\cal E})\,e^{-R{\cal E}} = e^{-RF(R)} \end{eqnarray} where $d{\cal E}\, \Gamma({\cal E})\sim d{\cal E}\,e^{{\cal S}({\cal E})}$ denotes the density of states, i.e. the number of states in the energy interval $d{\cal E}$. While in a local QFT whose high-energy limit is governed by the UV fixed point the entropy ${\cal S}$ grows as ${\cal S}({\cal E}) \simeq \sqrt{2\pi c/3}\ \sqrt{L {\cal E}}$ as ${\cal E}\to \infty$ -- this is known as the Cardy formula \cite{Cardy:1986ie} -- the singularity of $F(R)$ at finite positive $R=R_*$ is formed when the entropy ${\cal S}({\cal E})$ grows much faster, \begin{eqnarray}\label{hagedorn1} {\cal S}({\cal E}) \simeq R_* \,{\cal E}\,, \end{eqnarray} so that the partition sum diverges at $R<R_*$. We will discuss the density of states in the TTbar deformed theories in more detail in \S \ref{sec:discussion}. In the above discussion we have denoted by $R_*$ the position of the singularity of $E_\alpha(R)$ as a function of $R$. It is important to observe that the solution $\epsilon_\alpha (\theta|R)$ displays a singularity at the same position $R=R_*$, independent of the value of the rapidity $\theta$.
In other words, in the two-dimensional space spanned by the variables $(\theta, R)$ the singularity of $\epsilon_\alpha (\theta|R)$ occurs along the line $(\theta, R=R_*)$. We will see that this feature of the singularity associated with the Hagedorn transition will be reproduced in the generalized TTbar flows studied below. As already mentioned, the enhancement of the density of states in the deformed theory is to be expected. The scattering phase in \eqref{sdef} grows rapidly with the center-of-mass energy, leading to an increase of the density of two-particle states, implying a yet greater increase of the density of all multi-particle states. The calculation presented above demonstrates that the resulting entropy displays the Hagedorn behavior \eqref{hagedorn1}. It is then tempting to assume that the formation of the Hagedorn density \eqref{hagedorn1} is directly related to the fast growth of the scattering phase. In the next section we will show that the Hagedorn singularity develops just as well in models whose associated CDD factor remains finite at high energies, as in \eqref{phipole} with finite $N$, which indicates that the physical origin of the Hagedorn transition in the deformed theories is substantially more intricate. \section{The models}\label{sec:models} Here we study the CDD deformations of the trivial (fermionic or bosonic) $S$-matrix by the pole factor \eqref{phipole}, which we write as \begin{eqnarray}\label{sNpole} S(\theta) = \sigma\,\prod_{p=1}^N \,\frac{i\sin u_p+\sinh\theta}{i\sin u_p-\sinh\theta} \end{eqnarray} where, as before, $\sigma=-$ (resp. $\sigma=+$) corresponds to the fermionic (resp. bosonic) case. The parameters $u_p$ may be taken to be complex and, in view of the obvious periodicity of $S(\theta)$, we may limit our attention to the strip $-\pi < \Re(u_p) < \pi$. The standard analytic requirements for the physical $S$-matrix, however, impose restrictions on the possible locations of the poles $\theta_p = iu_p$.
Taking these restrictions into consideration, the parameters $u_p$ are allowed to be either real or complex with non-positive real parts. The poles $\theta_p = i u_p$ with real positive $u_p$ signal the existence of bound states -- new stable particles of mass $2M\,\cos(u_p/2)$. Since the presence of such particles violates our working assumption that the mass spectrum of the theory only involves a single kind of stable particle with mass $M$, henceforth we will assume that all parameters $u_p$ in \eqref{sNpole} possess a non-positive real part\footnote{This leaves out the possibility of having a pole at $\theta=2\pi i/3$ which may be identified with the same particle of the mass $M=1$. Such an interpretation requires that $S(\theta)$ satisfy an additional bootstrap condition. This possibility, known as the ``$\varphi^3$ property'', cannot be realized in the 2CDD model considered in this work, but may be relevant when $N$ is greater than 2. We hope to address this type of models elsewhere. }: \begin{eqnarray} -\pi \leq \Re(u_p)\leq 0\;,\qquad \forall p=1,\ldots ,N\;. \label{eq:up_analiticity} \end{eqnarray} This leaves us with poles $\theta_p = i u_p$ lying in the unphysical region, i.e. the region of the complex center-of-mass energy $s$-plane reached by analytically continuing the scattering amplitude through the two-particle branch cut. When $u_p$ has a nonzero imaginary part, such poles are associated to unstable particles, having complex masses $M_p = 2M\,\cos(u_p/2)$, with the real and imaginary parts identified as usual with the mean center-of-mass energy and the width of the resonances. The poles with real negative $u_p$ do not have a clear particle interpretation, but the number of such poles determines the increment of the scattering phase as a function of $\theta$ at low energies; these poles are often referred to as virtual states (see e.g. \cite{perelomov1998quantum}).
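The mass values quoted above follow from elementary two-particle kinematics, which we recall for completeness. For two particles of mass $M$ with rapidity difference $\theta$, the Mandelstam invariant is \begin{eqnarray} s=(p_1+p_2)^2=2M^2\left(1+\cosh\theta\right)=4M^2\cosh^2(\theta/2)\;, \end{eqnarray} so a pole of $S(\theta)$ at $\theta_p=iu_p$ corresponds to $s=4M^2\cos^2(u_p/2)$, i.e. to an intermediate state of mass $M_p=2M\cos(u_p/2)$: real for real $u_p$, complex (a resonance) when $u_p$ has a nonzero imaginary part.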
A final requirement is that of unitarity of the physical $S$-matrix, which demands that $S(-\theta) = S^* (\theta)$ at all real $\theta$, or, equivalently, that $S(\theta)$ takes real values at pure imaginary $\theta$. It follows that any non-real parameter $u_p$ in \eqref{sNpole} either has fixed real part $\Re(u_p)=-\pi/2$ or appears together with its conjugate $u_p^{\ast}$. We can then refine the range (\ref{eq:up_analiticity}) to the following three cases \begin{eqnarray} \textrm{a})&\quad&\Im(u_p) = 0\quad\textrm{and}\quad \Re(u_p)\in\left(-\pi,0\right)\;, \nonumber\\ \textrm{b})&\quad&\Im(u_p) \neq 0 \quad\textrm{and}\quad\Re(u_p)=-\frac{\pi}{2}\;, \label{eq:up_analiticity_refined}\\ \textrm{c})&\quad&\Im(u_p) >0 \quad\textrm{and}\quad \Re(u_p)\in\left(-\pi,-\frac{\pi}{2}\right)\cup\left(-\frac{\pi}{2},0\right] \nonumber\\ \phantom{\textrm{c})}&\quad&\phantom{\Im(u_p) >0}\quad \textrm{and}\quad \exists p'\leq N\quad \textrm{s.t.}\quad u_{p'} = u_p^{\ast}\;. \nonumber \end{eqnarray} Thus, each subfamily $(\sigma,N)$ of \eqref{sNpole} contains a number of, in principle, different models, determined by a given combination of the ranges \eqref{eq:up_analiticity_refined} for each of the parameters $\lbrace u_p\rbrace_{p=1}^N$. Some simple combinatorics\footnote{ Given the number $N$ of poles one needs to partition it into three non-negative integers $n_a$, $n_b$ and $n_c$ with the constraint that $n_a+n_b+2n_c=N$. Once a value of $n_c = 0,1,\ldots, \lfloor N/2\rfloor$ is chosen, one is obviously left with $N-2n_c+1$ non-equivalent arrangements of poles between the cases a) and b). Thus the number of different models is given by $\sum_{n_c=0}^{\lfloor N/2\rfloor}(N-2n_c+1)$, which gives the result \eqref{eq:number_of_models}. 
} tells us that this number is \begin{eqnarray} \frac{1}{4}N^2 + N +\frac{7+(-1)^N}{8} = \left\lbrace \begin{array}{l l l} \left(\frac{N}{2}+1\right)^2 & & N\in2\mathbb Z_{>0} \\ \\ \frac{N+1}{2}\frac{N+3}{2} & & N\in2\mathbb Z_{>0}-1 \end{array} \right. \;. \label{eq:number_of_models} \end{eqnarray} Since for any model determined by \eqref{sNpole}, with parameters in the ranges \eqref{eq:up_analiticity}, the mass spectrum contains a single stable excitation, the resulting single-particle TBA equation takes the simple form (\ref{tbas}), with the kernel $\varphi(\theta)$ being the derivative of the scattering phase which, in the case of \eqref{sNpole}, explicitly reads \begin{eqnarray} \varphi_{N\textrm{CDD}}(\theta) = \frac{1}{i}\frac{\partial}{\partial \theta} \log S_{N\textrm{CDD}}(\theta) = - \sum_{p=1}^N \frac{2\sin u_p\,\cosh\theta}{\sin^2 u_p +\sinh^2\theta} \;. \label{eq:TBA_kernel} \end{eqnarray} An equivalent, sometimes more useful, expression of this kernel is its partial fractions expansion \begin{eqnarray} \label{eq:kernelNCDD} \varphi_{N\textrm{CDD}}(\theta) = \sum_{p=1}^N\left[\frac{1}{\cosh\left(\theta+i(u_p+\frac{\pi}{2})\right)} + \frac{1}{\cosh\left(\theta-i(u_p+\frac{\pi}{2})\right)}\right]\;. \label{eq:TBA_kernel_v2} \end{eqnarray} In what follows, we are going to concentrate our attention on two particular subfamilies: the ``1CDD models'' where $N=1$ and the ``2CDD models'' with $N=2$. \paragraph{The 1CDD models} When $N=1$ the $S$-matrix \eqref{sNpole} consists of a single factor \begin{eqnarray} S_{1\textrm{CDD}}(\theta) = \sigma\,\frac{i\sin u_1+\sinh\theta}{i\sin u_1-\sinh\theta}\,. 
\label{1cdd} \end{eqnarray} According to the breakdown of cases \eqref{eq:up_analiticity_refined}, for each choice of the TBA statistics we only have two possible models, corresponding to the following ranges of the parameter $u_1$: \begin{enumerate}[label=(\alph*)] \item $u_1\in\mathbb{R}$ and $-\pi<u_1<0$, \item $u_1 = -\pi/2 + i \theta_0$ and $\theta_0 \in\mathbb R$. \end{enumerate} Considering first the fermionic case $\sigma=-1$, one recognizes in \eqref{1cdd}, for the case (a), the well-known $S$-matrix of the sinh-Gordon model \begin{eqnarray} S_{\textrm{shG}}(\theta) = -\frac{i\sin u_1+\sinh\theta}{i\sin u_1-\sinh\theta}\,,\qquad -\pi<u_1<0\;. \label{eq:shG_Smat} \end{eqnarray} On the other hand, the case (b) corresponds to the $S$-matrix of the ``staircase model'', introduced in \cite{Zamolodchikov:1991pc} \begin{eqnarray} S_{\textrm{stair}}(\theta) = \frac{\sinh\theta-i\cosh\theta_0}{\sinh\theta+i\cosh\theta_0}\;,\qquad \theta_0 \in\mathbb R\;. \label{eq:stair_Smat} \end{eqnarray} In both the cases (a) and (b) of the fermionic 1CDD model, the iterative solution to the TBA equation converges at all positive values of $R$, producing a function $E(R)$ analytic in the half-line $R>0$ and displaying a Casimir-like singularity at $R=0$, in full agreement with the interpretation of $E(R)$ as the ground state energy of a UV complete local QFT. As for the two bosonic 1CDD models, the solution of the TBA equation has a considerably more intricate behavior. The case (a) of $u_1$ real was first addressed in \cite{Mussardo:1999aj}, where it was observed that the iterative solution of the TBA equation only converges for sufficiently large radius $R>R_*>0$. The authors also noticed that the function $E(R)$ appears to develop some sort of singularity at $R=R_*$. Below in \S \ref{sec:results} we will show that the solution to the TBA equation, and, consequently, the ground state energy $E(R)$, possesses, as a function of $R$, two branches.
These merge at $R=R_*$, meaning that $R_*$ is a square-root branch point. We also show that this behavior extends to the case (b) of complex parameter $u_1=-\pi/2+i\theta_0$. \paragraph{The 2CDD model} In the $N=2$ subfamily, a pair of CDD factors is present in \eqref{sNpole}: \begin{eqnarray} S_{\textrm{2CDD}}(\theta) = \sigma\,\frac{i\sin u_1 + \sinh\theta}{i\sin u_1 - \sinh\theta}\frac{i\sin u_2 + \sinh\theta}{i\sin u_2 - \sinh\theta}\;. \label{eq:2cdd_general} \end{eqnarray} Following the breakdown \eqref{eq:up_analiticity_refined}, we see that there are four possibly distinct models, corresponding to the following ranges of the parameters $u_1$ and $u_2$ \begin{enumerate}[label=(\alph*)] \item $u_1\in\mathbb{R}$ and $-\pi<u_1<0$,\\ $u_2\in\mathbb{R}$ and $-\pi<u_2<0$, \item $\theta_0 \in\mathbb R$ and $u_1 = -\pi/2 + i \theta_0$, \\$u_2 \in\mathbb R$ and $-\pi<u_2<0$, \item[(b')] $u_1 \in\mathbb R$ and $-\pi<u_1<0$, \\$\theta_0 \in\mathbb R$ and $u_2 = -\pi/2 + i \theta_0$, \item $\theta_0 \in\mathbb R$ and $u_1 = -\pi/2 + i \theta_0$, \\$\theta_0' \in\mathbb R$ and $u_2 = -\pi/2 + i \theta_0'$, \item $\theta_0 \in \mathbb R$, $\gamma \in (-\pi/2,\pi/2)$, $u_1 = \gamma - \pi/2 + i \theta_0$ and $u_2 = u_1^{\ast}$. \end{enumerate} The model (a) can be considered as a special instance of the more general case (d). On the other hand the models (c) and (b) -- equivalent to (b') -- are genuinely distinct. All the models above display, both for the bosonic and fermionic statistics, the same type of behavior observed in the bosonic 1CDD models mentioned above: the iterative procedure for solving the TBA equation \eqref{tbas} only converges for $R$ larger than some positive value $R_\ast>0$ and the ground state energy $E(R)$ apparently develops a singularity at $R=R_\ast$. While we are going to present some data for all the various 2CDD cases, we devoted most of our attention to the case (d), which we will call, with a slight abuse of terminology, the ``2CDD model''.
Its $S$-matrix and TBA kernel explicitly read as follows \begin{eqnarray} S_{\textrm{2CDD}}(\theta) = \sigma\, \frac{\sinh\theta - i\cosh(\theta_0+i\gamma)}{\sinh\theta + i\cosh(\theta_0+i\gamma)}\frac{\sinh\theta - i\cosh(\theta_0-i\gamma)}{\sinh\theta + i\cosh(\theta_0-i\gamma)}\;. \label{eq:2cdd_particular} \end{eqnarray} \begin{eqnarray} \varphi_{\textrm{2CDD}}(\theta) = \sum_{\eta,\eta'=\pm}\frac{1}{\cosh(\theta+\eta\theta_0+i\eta'\gamma)} \;. \label{eq:2cdd_kernel} \end{eqnarray} \subsection{Iterative solution}\label{subsec:iterative} The chances that a non-linear integral equation of the form \eqref{tbas} admits an explicit analytic solution are slim. For this reason the main approach to investigating the TBA equations is numerical\footnote{In some limiting cases, it is possible to derive exact expressions, e.g. for the ground-state energy in the conformal limit, via the so-called ``dilogarithm trick'', as explained nicely in \cite{Fendley:1993jh}.}. In most situations, a simple iterative procedure of the following type \begin{eqnarray} \epsilon_n(\theta) = R \cosh\theta + \sigma \intop \varphi(\theta - \theta') \log\left[1-\sigma e^{-\epsilon_{n-1}(\theta')}\right]\,\frac{d\theta'}{2\pi}\;, \label{eq:iterative_routine} \end{eqnarray} appropriately discretized, converges to the actual solution \begin{eqnarray} \lim_{n\rightarrow\infty} \epsilon_n(\theta) = \epsilon(\theta)\;, \label{eq:limit_solution_iteration} \end{eqnarray} when the seed function $\epsilon_0(\theta)$ is chosen as the driving term\footnote{When the iterative procedure does converge, there is actually a vast freedom in the choice of the seed function. However, the standard choice indicated in the main text is the most natural one.} \begin{eqnarray} \epsilon_0(\theta) = R\cosh\theta\;.
\end{eqnarray} The existence and uniqueness of the limit \eqref{eq:limit_solution_iteration} have been proven rigorously in \cite{Fring:1999mn} for the fermionic single-particle\footnote{See also \cite{Hilfiker:2017jqg} for an extension to fermionic multi-particle TBA equations.} TBA equation \eqref{tbas} with a kernel satisfying the requirement \begin{eqnarray} \left\vert\left\vert\varphi\right\vert\right\vert_1 := \intop\, \left\vert\varphi(\theta)\right\vert \frac{d\theta}{2\pi} \leq 1\;. \end{eqnarray} The fermionic 1CDD models do satisfy this condition and, as such, the iteration procedure is guaranteed to converge nicely in the whole range $R\in\mathbb R_{>0}$, a fact which is easily verified numerically. All the other models we considered above, on the other hand, violate one or more of the hypotheses of the existence and uniqueness theorem in \cite{Fring:1999mn} -- being either of bosonic statistics, or having a kernel with $L^1$ measure $\vert\vert\varphi\vert\vert_1=2$, or both -- and are not guaranteed to possess a convergent iterative solution. Notice that the $L^1$ measure of the TBA kernel \eqref{eq:TBA_kernel_v2} counts the number of CDD factors \begin{eqnarray} \left\vert\left\vert \varphi_{N\textrm{CDD}} \right\vert\right\vert_1 = N\;, \label{eq:L1_kernel} \end{eqnarray} meaning that, in the class of models described by the $S$-matrix \eqref{sNpole}, only the subfamily with $(\sigma,N) = (-1,1)$ is guaranteed to have a convergent iterative solution. \begin{figure}[h!] \begin{center} \includegraphics{Figures/2CDD_gsE_iteration_model_comparison_v2.pdf} \end{center} \caption{Ground-state energies for the various models discussed above, along with that of the $T\bar{T}$-deformed free fermion (black dots). The empty (resp. filled) markers correspond to models with bosonic (resp. fermionic) statistics.
The fermionic sinh-Gordon and staircase models can be solved iteratively all the way to the $R\to 0$ limit, while the rest fail to converge below a certain model-specific scale $R_*$. The parameters of the models were chosen so as to allow a comfortable visual comparison between the curves and are the same for both bosonic and fermionic versions of the same model. Insets: inverse square of the (numerical) derivative. As shown by the fits (dotted lines), the fermionic sinh-Gordon and staircase models show the conventional UV behavior $\propto R^4$, while the other models develop a $\propto R$ behavior reminiscent of the square-root branching singularity of the ground state energy. } \label{fig:2CCDmodelscomp} \end{figure} We investigated numerically the 1CDD models (a) and (b) and the 2CDD models (b) to (d)\footnote{Remember that the 2CDD model (a) is really a sub-case of model (d).}, for both the bosonic and fermionic statistics, using the iterative procedure \eqref{eq:iterative_routine}. As already mentioned above, we observed that this procedure converges for all positive values of the radius $R$ only for the fermionic 1CDD models. In every other case, there exists a positive ``critical radius'' $R_{\ast}>0$ such that for $R\leq R_{\ast}$ the iterative routine stops converging. As $R$ approaches $R_{\ast}$ from larger values, we noticed that the rate of convergence of the iterative numerical routine slows down dramatically, a telltale sign of the existence of some kind of singularity nearby\footnote{This same ``critical slowing down'' of the numerical iterative procedure is observed as $R\rightarrow 0$ in any TBA system whose iterative solution converges in $R\in\mathbb R_{>0}$. In these cases it reflects the existence of a Casimir-like singularity of the ground-state energy at $R=0$.}.
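As an illustration of the routine \eqref{eq:iterative_routine}, here is a minimal numerical sketch for the fermionic sinh-Gordon case. The grid, the parameter values, the kernel normalization $\varphi(\theta)=-i\,\partial_\theta\log S(\theta)$ and the energy normalization $E(R)=-\frac{1}{2\pi}\int\cosh\theta\,L(\theta)\,d\theta$ (mass set to one) are our assumptions for this sketch, not the actual implementation used to produce the figures.

```python
import numpy as np

def shg_kernel(theta, u1):
    # TBA kernel phi(theta) = -i d/dtheta log S_shG(theta);
    # for u1 in (-pi, 0) it is positive, with int phi dtheta/(2 pi) = 1
    return -2.0 * np.sin(u1) * np.cosh(theta) / (np.sinh(theta)**2 + np.sin(u1)**2)

def solve_tba(R, u1=-np.pi/3, L=12.0, N=601, tol=1e-12, max_iter=5000):
    """Fixed-point iteration for the fermionic (sigma = -1) TBA equation."""
    theta = np.linspace(-L, L, N)
    h = theta[1] - theta[0]
    K = shg_kernel(theta[:, None] - theta[None, :], u1)   # phi(theta_i - theta_j)
    eps = R * np.cosh(theta)                              # seed = driving term
    for _ in range(max_iter):
        Lfun = np.log1p(np.exp(-eps))                     # L(theta) = log(1 + e^{-eps})
        eps_new = R * np.cosh(theta) - (K @ Lfun) * h / (2.0 * np.pi)
        if np.max(np.abs(eps_new - eps)) < tol:
            eps = eps_new
            break
        eps = eps_new
    # ground-state energy E(R) = -(1/2pi) int cosh(theta) L(theta) dtheta
    E = -h * np.sum(np.cosh(theta) * np.log1p(np.exp(-eps))) / (2.0 * np.pi)
    return theta, eps, E
```

At large $R$ the returned energy approaches the free-particle tail $-K_1(R)/\pi$, while the number of iterations needed grows as $R\to 0$, the ``critical slowing down'' mentioned above.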
In Figure \ref{fig:2CCDmodelscomp} we collected the plots of the ground-state energy $E(R)$ for one representative point in the parameter space of each of the models we mentioned above, along with one for the $T\bar{T}$-deformed free fermion. The shape of the curves suggests that all the cases, apart from the fermionic 1CDD models, behave qualitatively in the same way as the $T\bar{T}$-deformed free fermion, that is to say they develop a square-root type singularity at some critical value of the radius $R=R_{\ast}>0$: \begin{eqnarray} E(R) \underset{R\rightarrow R_{\ast}^+}{\sim} c_0 + c_{1/2}\sqrt{R-R_{\ast}}+\mathcal{O}(R-R_{\ast})\;. \label{eq:supposed_square_root_behaviour} \end{eqnarray} In order to further confirm this suspicion, we plotted the derivative of the ground-state energy to the power $-2$ in the vicinity of the supposed critical point. As we can see in the insets of Figure~\ref{fig:2CCDmodelscomp}, the numerical results are in good accord with the hypothesis that $R_{\ast}$ is a singular point of square-root type, as expressed by \eqref{eq:supposed_square_root_behaviour}. \subsection{Two branches}\label{subsec:two_branches} Having our expectation confirmed leaves us with the question of how to deal numerically with such a square-root critical point. In particular, the behavior \eqref{eq:supposed_square_root_behaviour} implies the existence of a secondary branch of the ground-state energy, behaving as \begin{eqnarray} \tilde{E}(R) \underset{R\rightarrow R_{\ast}^+}{\sim} c_0 - c_{1/2}\sqrt{R-R_{\ast}}+\mathcal{O}(R-R_{\ast})\;, \label{eq:supposed_square_root_behaviour_second_branch} \end{eqnarray} in the vicinity of the critical point. Here and below we are going to use the notation $\tilde{E}(R)$ for the secondary branch. We would like to be able to access this secondary branch numerically and to explore its properties, e.g. its large $R$ behavior and the possible existence of further critical points.
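The inverse-square-derivative diagnostic can be reproduced on synthetic data generated from \eqref{eq:supposed_square_root_behaviour} itself; the numbers below are arbitrary test values, not fit results from our models.

```python
import numpy as np

# synthetic primary-branch data with a square-root point at R = Rstar
c0, c_half, Rstar = -1.1, 2.0, 0.6
R = np.linspace(Rstar + 0.1, Rstar + 0.6, 51)
E = c0 + c_half * np.sqrt(R - Rstar)

# central-difference derivative, then a linear fit of (dE/dR)^(-2):
# for E = c0 + c_half sqrt(R - Rstar) one has (E')^(-2) = 4 (R - Rstar)/c_half^2,
# so the fitted line vanishes exactly at the critical point
dEdR = np.gradient(E, R)
y = dEdR**-2
slope, intercept = np.polyfit(R[1:-1], y[1:-1], 1)   # drop one-sided endpoints
Rstar_est = -intercept / slope
```

Extrapolating the fitted line to its zero recovers the critical radius without ever evaluating the energy at $R_{\ast}$ itself.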
The iterative routine \eqref{eq:iterative_routine} is ill-suited for this job, and we need to employ a more refined method: the PALC mentioned in the introduction and described in \S \ref{sec:num_meth}. Deferring a more thorough analysis of the properties of $E(R)$ to \S \ref{sec:results}, let us present here its main qualitative features, concentrating on a single point in the parameter space of the fermionic 2CDD model (d) as a representative case. \begin{figure}[t!] \begin{center} \includegraphics{Figures/2branches.pdf} \end{center} \caption{The ground-state energy $E(R)$ for the model with $S$-matrix \eqref{eq:2cdd_particular}, with $\theta_0 = 1/2$ and $\gamma = 3\pi/20$, obtained through the PALC routine described in \S \ref{sec:num_meth}. The numerical points are accompanied by three lines, approximating $E(R)$ at large $R$ on both branches and for $R\gtrsim R_{\ast}$.} \label{fig:both_branches} \end{figure} More specifically, let us set $\theta_0 = 1/2$ and $\gamma = 3\pi/20$ and compute numerically the ground-state energy of the model defined by the $S$-matrix \eqref{eq:2cdd_particular}. The result is displayed in Figure \ref{fig:both_branches}. We see that the function $E(R)$ does indeed possess two branches with distinctly different IR behavior. The primary branch is characterized by the universal IR behavior \begin{eqnarray} E(R)\underset{R\rightarrow\infty}{\sim} - \frac{1}{\pi}\,K_1(R) + \mathcal{O}\left(e^{-2R}\right)\;, \label{eq:E_asymp_primary} \end{eqnarray} where $K_1$ stands for the modified Bessel function, while the secondary branch approaches a linear behavior at large $R$ \begin{eqnarray} \tilde{E}(R)\underset{R\rightarrow\infty}{\sim} - \varepsilon_{-} R \;, \label{eq:E_asymp_secondary} \end{eqnarray} with a rate of approach likely to be some negative power of $R$.
For the specific case depicted in Figure \ref{fig:both_branches}, the coefficient of the linear term is found to be \begin{eqnarray} \varepsilon_{-}\left(\theta_0 = \frac{1}{2},\gamma = \frac{3\pi}{20}\right) = -2.87452\ldots\;, \end{eqnarray} while the constant term vanishes within the precision of our numerical routines. We will see in \S \ref{sec:results} that this is the asymptotic behavior predicted by analytical considerations. In the zoomed box in Figure \ref{fig:both_branches} we also plotted a fit of the function $E(R)$ in the vicinity of the critical point $R_{\ast}$. As expected, the behavior in this region is best described by the square-root function \eqref{eq:supposed_square_root_behaviour} (and \eqref{eq:supposed_square_root_behaviour_second_branch} for the secondary branch), with the coefficients taking the following values \begin{eqnarray} c_0\left(\theta_0 = \frac{1}{2},\gamma = \frac{3\pi}{20}\right) &=& -1.11767\ldots\;, \nonumber \\ c_{1/2}\left(\theta_0 = \frac{1}{2},\gamma = \frac{3\pi}{20}\right) &=& 2.03547\ldots\;, \\ R_{\ast}\left(\theta_0 = \frac{1}{2},\gamma = \frac{3\pi}{20}\right) &=& 0.61478849\ldots\;.\nonumber \end{eqnarray} Another notable fact is that we see no trace of additional singular points: the PALC method can, apparently, reach arbitrarily large values of $R$ on the secondary branch, and the resulting ground-state energy quickly approaches the expected asymptotic linear behavior. We note again that the behavior of $E(R)$ depicted in Figure \ref{fig:both_branches} is qualitatively identical to the one exhibited by the ground-state energy of $T\bar{T}$-deformed models for negative values of the deformation parameter $\alpha$, as described in \S \ref{sec:TTbar} (see e.g. Figure \ref{ERplotTTbar}).
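The IR tail \eqref{eq:E_asymp_primary} rests on the integral representation $K_1(R)=\int_0^\infty \cosh t\; e^{-R\cosh t}\,dt$ and its leading asymptotics $\sqrt{\pi/2R}\,e^{-R}$; a quick numerical sanity check of this Bessel tail (illustrative only):

```python
import numpy as np

# K_1(R) = int_0^inf cosh(t) exp(-R cosh t) dt, entering E(R) ~ -(1/pi) K_1(R);
# compare the integral with the leading asymptotics sqrt(pi/2R) e^{-R} at R = 10
R = 10.0
t = np.linspace(0.0, 6.0, 20001)
y = np.cosh(t) * np.exp(-R * np.cosh(t))
h = t[1] - t[0]
K1 = h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])   # trapezoid rule
asym = np.sqrt(np.pi / (2.0 * R)) * np.exp(-R)
ratio = K1 / asym   # exceeds 1 by the 3/(8R) correction, i.e. by a few percent
```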
Finally, we stress that the features of $E(R)$ described here for a point in the parameter space of a specific model really are representative of the general behavior of the ground-state energy in the family of models defined by the S-matrices \eqref{sNpole}, at least as far as the fermionic statistics is concerned. As we will discuss in \S \ref{sec:results}, the status of the models with bosonic statistics is still not completely settled. In particular it is still unclear whether the secondary branch of $E(R)$ displays additional critical points or continues undisturbed in the deep IR and, if it does continue, what type of behavior it follows. \section{Numerical Method}\label{sec:num_meth} The results displayed in the previous section suggest that the solution to the TBA equation \eqref{tbas}, for S-matrices of the form \eqref{sNpole}, may generally possess a singular dependence on the parameter $R$. In particular, the slope of the tangent to the graph of $E(R)$ apparently diverges at some $R=R_*$. Such critical points are known as \emph{turning points}. Their presence in the dependence of the ground-state energy $E$ on the system size $R$ evokes the case of the $T\bar{T}$-deformed models, in which all the quantities obtainable from the TBA display a square-root singularity at the same value $R=R_*$. The iterative procedure described in \S \ref{subsec:iterative} becomes unstable as $R\to R_*$, and is therefore not particularly suitable for analyzing the vicinity of the singular point. Fortunately, many powerful methods exist that are capable of numerically handling critical points in non-linear equations. We refer to the monograph by Allgower and Georg \cite{allgower2012numerical} for an introduction to the subject, paired with an extensive list of references. The simplest of these numerical routines is the already mentioned PALC method, which, in spite of the simplicity of its implementation, is entirely sufficient to handle the situations of interest to us.
In this section we will quickly review this method and its main features. \subsection{The pseudo-arc-length continuation method} Before starting, let us point out a trivial fact: the TBA equation \eqref{tbas} is \emph{non-linear}. It is then not at all surprising that its solutions can develop a highly non-trivial dependence on the parameters. Conversely, what is remarkable is that in the vast majority of instances known in the literature, the solutions to the TBA equations display a simple behavior as functions of $R$. In full generality, we should expect a solution $\epsilon(\theta\vert R)$ to potentially present, as a function of $R$\footnote{In principle, the solution might possess critical points also in its dependence on the other parameters present in the TBA equation. We found no hint of such a possibility and we will thus simplify our discussion by concentrating on the dependence on the parameter $R$.}, any type of critical point imaginable. As we will see later, in the cases of the 1CDD and 2CDD models we are concerned with here, only turning points appear. We will thus restrict our attention to the simple cases in which every critical point is a turning point. This considerably simplifies both the discussion and the actual implementation of the PALC method, although, if needed, it is entirely possible -- and not exceedingly difficult -- to include bifurcations in the analysis. Since our goal is to analyze the TBA equation \eqref{tbas} numerically, we are going to describe the principles of the PALC for maps between finite-dimensional spaces. Let us then truncate and discretize the real $\theta$-line on an $N$-point lattice $\left\lbrace\theta_k\;\vert\; k=1,2,\ldots ,N\right\rbrace$ which, for the moment, we are not going to specify further.
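Although the choice of lattice is deliberately left general here, a minimal concrete realization (uniform grid with rectangle weights, a toy kernel, all chosen purely for illustration) of the resulting discrete system reads:

```python
import numpy as np

def make_tba_residual(phi, L=10.0, N=401, sigma=-1):
    """Return the grid and the discrete residual H(eps, R) of the TBA equation:
    H_k = eps_k - R cosh(theta_k)
          - sigma * sum_l phi(theta_k - theta_l) log(1 - sigma e^{-eps_l}) h/(2 pi)."""
    theta = np.linspace(-L, L, N)
    h = theta[1] - theta[0]
    K = phi(theta[:, None] - theta[None, :]) * h / (2.0 * np.pi)
    def H(eps, R):
        return eps - R * np.cosh(theta) - sigma * K @ np.log(1.0 - sigma * np.exp(-eps))
    return theta, H
```

A zero of $H$ at fixed $R$ is the discretized pseudoenergy, and this residual map is the object the continuation method acts on; note that subtracting $H$ from $\vec\epsilon$ reproduces one sweep of the iterative routine.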
Now, consider a parametrized map $H$ which takes as input a parameter $R\in\mathbb R$ together with the values $\epsilon_k=\epsilon(\theta_k)\in\mathbb R$ of some real function on the lattice, and yields $N$ real numbers: \begin{eqnarray} H\;:\quad \begin{array}{c c c} \mathbb R^{N}\times \mathbb R & \longrightarrow & \mathbb R^{N}\\ & & \\ (\vec{\epsilon},R) & \longmapsto & \vec{H}(\vec{\epsilon},R) \end{array}\;, \end{eqnarray} where we packaged the values $\epsilon_k$ and $H_k$ into vectors $\vec{\epsilon}$ and $\vec{H}$. We wish to explore the following fixed-point condition \begin{eqnarray} \vec{H}(\vec{\epsilon},R) = \vec{0}\;. \label{eq:map_equation} \end{eqnarray} Note that the TBA equation \eqref{tbas}, appropriately discretized and truncated, can be written in the above form. By definition, the map $H$ acts between spaces of different dimensionality, meaning \begin{eqnarray} \textrm{dim}[\textrm{Ker}(H)]\geq 1\;, \end{eqnarray} or, in other words, the image of the null vector $\vec{0}\in\mathbb R^N$ under the inverse map $H^{-1}$ is a space of dimension at least $1$. Hence at a generic point, where $\textrm{dim}[\textrm{Ker}(H)] = 1$, this image is a curve \begin{eqnarray} C\;:\quad J\subset \mathbb R\;\longrightarrow \; \mathbb R^N\times \mathbb R\;. \end{eqnarray} We call this the \emph{solution curve} for the map $H$. Our goal is to follow the solution curve from a given starting point $C_i = (\vec{\epsilon}_i, R_i)$ to a final one $C_f = (\vec{\epsilon}_f, R_f)$. The most straightforward way to achieve this is to simply parametrize the curve by $R$ and employ some numerical iterative routine, such as the one reviewed in \S \ref{subsec:iterative}, to move from $C_i = C(R_i)$ to $C_f = C(R_f)$. However this simple-minded approach fails at any point in the parameter space where the rank of the Jacobian \begin{eqnarray} \mathcal{J}_{kl} = \frac{\partial H_k}{\partial \epsilon_l}\;, \end{eqnarray} is not maximal. 
There we can no longer rely on the implicit function theorem to solve (\ref{eq:map_equation}) for $\vec{\epsilon}$ in terms of $R$. More geometrically, what happens is that the curve $C(R)$ displays a turning point, where $\frac{d}{dR}C(R)$ diverges. Fortunately, there exists a very simple cure for this problem: instead of parameterizing the curve $C$ by the parameter $R$, we can use an auxiliary quantity $s$, traditionally chosen to be the arc-length of $C$ or a suitable numerical equivalent, whence the name \emph{pseudo-arc-length} given to this approach. The condition (\ref{eq:map_equation}) then becomes \begin{eqnarray} \vec{H}(C(s)) = \vec{0}\;,\qquad s\in J\subset\mathbb R\;. \label{eq:eq_map_H} \end{eqnarray} In order to proceed, let us take the derivative of this condition with respect to the parameter $s$. We immediately obtain \begin{eqnarray} H'(C(s)) \dot{C}(s) = \vec{0}\;, \end{eqnarray} where the \emph{extended Jacobian} \begin{eqnarray} H'(C(s)) = \Bigg(\;\mathcal J\;\Bigg\vert\;\frac{d\vec{H}}{dR}\;\Bigg)\;, \end{eqnarray} is an $N\times(N+1)$ block matrix, while \begin{eqnarray} \dot{C}(s) = \left(\begin{array}{c} \frac{d}{ds} \vec{\epsilon} \\[0.1cm] \hline\\[-0.4cm] \frac{d}{ds} R \end{array}\right)\;, \end{eqnarray} is an $(N+1)$-dimensional column vector. At this point we seem to be short of one condition, since we introduced an additional parameter. However, remember that we decided to choose $s$ as the (pseudo-)arc-length of $C$, which means \begin{eqnarray} \vert\vert \dot{C}(s)\vert\vert = 1\;. \end{eqnarray} Summing up, we converted our non-linear problem, supported by the starting point $(\vec{\epsilon}_i,R_i)$, into an initial value problem \begin{eqnarray} H'(C(s))\dot{C}(s) = \vec{0}\;,\qquad \vert\vert\dot{C}(s)\vert\vert = 1\;,\qquad C(s_i) = (\vec{\epsilon}_i,R_i)\;, \label{eq:in_val_prob_map_H} \end{eqnarray} capable of dealing with the presence of turning points.
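As a toy illustration of \eqref{eq:in_val_prob_map_H}, consider a single unknown with $H(x,R)=x^2+R-1$, whose solution curve $x^2+R=1$ has a turning point at $(x,R)=(0,1)$; a naive Euler integration of the tangent flow (the example and step size are ours) tracks the curve through the turning point:

```python
import numpy as np

def tangent(x, R, prev):
    # unit null vector of the extended Jacobian H' = (dH/dx | dH/dR) = (2x, 1),
    # oriented continuously with respect to the previous tangent
    t = np.array([-1.0, 2.0 * x])        # orthogonal to (2x, 1)
    t /= np.linalg.norm(t)
    return t if np.dot(t, prev) >= 0 else -t

# Euler integration of H'(C) Cdot = 0, |Cdot| = 1, starting at (x, R) = (1, 0)
x, R, h = 1.0, 0.0, 0.005
t_prev = np.array([-1.0, 0.0])
path = [(x, R)]
for _ in range(1200):
    t_prev = tangent(x, R, t_prev)
    x, R = x + h * t_prev[0], R + h * t_prev[1]
    path.append((x, R))
path = np.array(path)
```

The slow drift of $H$ away from zero accumulated by this naive scheme is removed by re-imposing $H=0$ after each tangent step, which is precisely the predictor-corrector refinement discussed below.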
Still, this formulation is somewhat unnatural, as it completely disregards the fact that the curve $C$ is the fixed point of the map $H$, and, as such, should enjoy powerful local contractive properties with respect to iterative solution methods -- such as Newton's method. We are then led to an integrated approach in which we numerically integrate (\ref{eq:in_val_prob_map_H}) very coarsely and subsequently employ some kind of iterative method to solve (\ref{eq:eq_map_H}) locally. This is the general strategy behind the approaches known as \emph{predictor-corrector routines}. In Appendix \ref{app:pred_corr} we are going to describe the one that we employed in this work and present a pseudo-code of its implementation. \section{Results for the 2CDD model}\label{sec:results} Here we present some results obtained using the numerical techniques of the previous section. We first concentrate on the fermionic 2CDD models and then discuss some facts about the bosonic models. \subsection{Fermionic case} The numerical data we collected, of which we have shown some examples in \S \ref{subsec:two_branches}, strongly indicate the following properties of the ground-state energy $E(R)$ as a function of $R$: \begin{itemize} \item[--] $E(R)$ is a double-valued function of $R$, defined in the range $R>R_{\ast}$ and taking negative real values; \item[--] The point $R=R_{\ast}$ is a square-root branching point -- or, using the terminology of \S \ref{sec:num_meth}, a turning point -- of the function $E(R)$; \item[--] There is no sign of additional turning or singular points other than $R=R_{\ast}$; \item[--] The two branches display the large-$R$ behaviors \eqref{eq:E_asymp_primary} and \eqref{eq:E_asymp_secondary}. \end{itemize} We could not find a convincing analytic argument proving the first three properties and we regard them as experimental observations. On the other hand, the last property \eqref{eq:E_asymp_secondary} can be verified analytically, as we are now going to show.
\subsubsection{The large $R$ behavior}\label{subsec:large_R_fermion} Let us analyze the possible behaviors of the solution to the TBA equation \eqref{tbas} at large $R$. To this end, we write the equation as follows \begin{eqnarray} \epsilon(\theta) = d(\theta) -\chi(\theta)\;, \label{eq:TBA_symbolic} \end{eqnarray} where $d(\theta)$ is the driving term and $\chi(\theta)$ the convolution: \begin{eqnarray} d(\theta) = R\cosh\theta\;,\qquad \chi(\theta) = \int\,\varphi(\theta-\theta')\,\log\left[1+e^{-\epsilon(\theta')}\right]\,\frac{d\theta'}{2\pi}\;. \end{eqnarray} As $R\rightarrow \infty$, the driving term becomes large, $\sim R$, and, in order for the equation \eqref{eq:TBA_symbolic} to be satisfied, it has to be balanced by a similar behavior in either $\epsilon(\theta)$, $\chi(\theta)$ or both. The standard assumption is that \begin{eqnarray} \epsilon(\theta)\underset{R\rightarrow\infty}{\sim} d(\theta)\;,\qquad \chi(\theta)\underset{R\rightarrow\infty}{\ll}d(\theta)\;, \label{eq:standard_large_R} \end{eqnarray} which turns out to be consistent, since, as one easily verifies, \begin{eqnarray} \chi(\theta) \underset{R\rightarrow\infty}{\sim} \int\,\varphi(\theta-\theta')\,\log\left[1+e^{-R\cosh\theta'}\right]\,\frac{d\theta'}{2\pi} \underset{R\rightarrow\infty}{\sim} \frac{\varphi(\theta)}{\sqrt{2\pi R}} e^{-R} \underset{R\rightarrow\infty}{\ll} R \cosh\theta\;. \label{eq:standard_large_R_chi} \end{eqnarray} However, this is not, in general, the only possibility. It might be that the convolution term $\chi(\theta)$ diverges as $R\rightarrow \infty$ and becomes comparable with $\epsilon(\theta)$, $d(\theta)$, or both.
It is then not difficult to check that only two possibilities are consistent: \begin{enumerate} \item $\epsilon(\theta) \underset{R\rightarrow\infty}{\longrightarrow} 0$ \underline{and} the kernel $\varphi(\theta)$ is not integrable on the real line; \item $\epsilon(\theta) \underset{R\rightarrow\infty}{\sim} -R\,f(\theta)$ where $f(\theta)$ is positive only in some finite\footnote{The subset $\Theta$ cannot be infinite, since the equation \eqref{eq:TBA_symbolic} forces $\epsilon(\theta)$ to behave as $d(\theta)$ for $\theta\rightarrow \pm \infty$.} subset $\Theta\subset\mathbb{R}$ of the real line and negative everywhere else. \end{enumerate} Scenario 1 cannot arise for the class of models we are dealing with\footnote{This scenario is, however, possible in models whose $S$-matrix presents a non-vanishing factor $\Phi_{\textrm{entire}}(\theta)$ \eqref{phientire}. In particular it describes the large $R$ behavior of the secondary branch $\tilde{E}(R)$ in the $T\bar{T}$-deformed theories.}, since the kernels \eqref{eq:TBA_kernel_v2} are obviously bounded functions of $\theta\in\mathbb{R}$. Scenario 2 is, on the other hand, possible. Let us explore its consequences. Under the hypothesis that \begin{eqnarray}\label{eq:second_large_R} \epsilon(\theta) \underset{R\rightarrow\infty}{\sim} -R\,f(\theta)\;,\qquad \left\lbrace\begin{array}{l l} f(\theta) > 0\;,& \theta\in\Theta\subset\mathbb{R}\;, \\ f(\theta)\leq 0\;,& \theta\in\Theta^{\perp} = \mathbb{R}-\Theta \;,\end{array}\right. \label{eq:negative_epsilon} \end{eqnarray} the convolution can be approximated as follows \begin{eqnarray} \chi(\theta) \underset{R\rightarrow\infty}{\sim} R\intop_{\Theta}\,\varphi(\theta-\theta')\,f(\theta')\,\frac{d\theta'}{2\pi} + \intop_{\mathbb{R}}\,\varphi(\theta-\theta')\,\log\left[1+e^{-R\left\vert f(\theta')\right\vert}\right]\,\frac{d\theta'}{2\pi}\;.
\end{eqnarray} Discarding the second term on the right-hand side, we arrive at the linear equation \begin{eqnarray} f(\theta) = -\cosh\theta + \intop_{\Theta}\,\varphi(\theta-\theta')\,f(\theta')\,\frac{d\theta'}{2\pi}\;. \end{eqnarray} Due to our hypothesis on the function $f(\theta)$, we see that the integrand on the right-hand side above is positive for any $(\theta,\theta')\in\mathbb R^2$, which implies the following bound \begin{eqnarray} 0\leq \intop_{\Theta}\,\varphi(\theta-\theta')\,f(\theta')\,\frac{d\theta'}{2\pi} \leq \underset{t\in\Theta}{\textrm{Max}}\left[f(t)\right] \intop_{\Theta}\,\varphi(\theta-\theta')\,\frac{d\theta'}{2\pi} \;. \end{eqnarray} Now, let $\theta_{\textrm{M}}\in\Theta$ be such that $f(\theta_{\textrm{M}}) = \underset{t\in\Theta}{\textrm{Max}}\left[f(t)\right]$; then the following inequalities hold \begin{eqnarray} -\cosh\theta_{\textrm{M}} \leq f(\theta_{\textrm{M}}) \leq -\cosh\theta_{\textrm{M}} +f(\theta_{\textrm{M}}) \intop_{\Theta}\,\varphi(\theta_{\textrm{M}}-\theta')\,\frac{d\theta'}{2\pi} \;. \end{eqnarray} Rearranging the right inequality above, we find that \begin{eqnarray} \intop_{\Theta}\,\varphi(\theta_{\textrm{M}}-\theta')\,\frac{d\theta'}{2\pi} \geq 1+\frac{\cosh\theta_{\textrm{M}}}{f(\theta_{\textrm{M}})} > 1\;, \end{eqnarray} which we can interpret as a constraint on the class of models which allow for this scenario. In fact, remember that the integral of the kernel on the whole real line, \eqref{eq:L1_kernel}, counts the number $N$ of CDD factors appearing in the $S$-matrix \eqref{sNpole}. But, since we assumed that $\Theta$ is a finite subset of $\mathbb R$, we find that \begin{eqnarray} N > \intop_{\Theta}\,\varphi(\theta_{\textrm{M}}-\theta')\,\frac{d\theta'}{2\pi} > 1\quad \Longrightarrow \quad N>1\;.
\label{eq:bound_on_N} \end{eqnarray} Thus we have found that the fermionic 1CDD models, namely the sinh-Gordon and staircase models, can only display the standard large $R$ behavior \eqref{eq:standard_large_R}, \eqref{eq:standard_large_R_chi}. We stress that this result should not be read as a proof of the absence of turning points in these models, but rather as a sanity check for the correctness of our computations, since the ground-state energy for fermionic 1CDD models is well known to be a smooth and monotonically increasing function of the radius in the whole range $R>0$. Conversely, all fermionic $N$CDD models with $N>1$ allow for both the standard large $R$ behavior \eqref{eq:standard_large_R}, \eqref{eq:standard_large_R_chi} and the non-standard one \eqref{eq:negative_epsilon}. Consequently, their ground-state energy will possibly display both the asymptotic behaviors \eqref{eq:E_asymp_primary} and \eqref{eq:E_asymp_secondary}, where \begin{eqnarray} \varepsilon_{-} = \intop_{\Theta}\,\cosh\theta\,f(\theta)\,d\theta\;, \end{eqnarray} in accordance with the numerical data we have obtained. \subsubsection{Analysis of the numerical data}\label{sec:F2CDD_num} The fermionic 2CDD models were classified in \S \ref{sec:models} into cases (a) to (d). We have performed the numerical analysis for all the different cases, and the results show that the behaviors are qualitatively the same. Thus, we are going to show here the details of the numerical analysis only for the representative case (d). We begin by analyzing the numerical solution obtained through the PALC method for large values of $R$. It was argued in the previous section that the pseudoenergy should behave as in \eqref{eq:second_large_R}, assuming negative values in a finite subset of the real line and positive values elsewhere.
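As a cross-check of scenario 2, the linear equation for $f(\theta)$ can be solved directly on a trial interval $\Theta=[-\Lambda,\Lambda]$. The sketch below treats $\Lambda$ as an input (in the full problem it is fixed self-consistently, e.g. by requiring $f(\pm\Lambda)=0$), and all parameter values are illustrative:

```python
import numpy as np

def kernel_2cdd(t, theta0, gamma):
    # phi(theta) = sum_{eta, eta' = +-} 1 / cosh(theta + eta theta0 + i eta' gamma)
    s = np.zeros_like(t, dtype=float)
    for eta in (+1.0, -1.0):
        s += 2.0 * np.real(1.0 / np.cosh(t + eta * theta0 + 1j * gamma))
    return s

def solve_f(Lam, theta0=0.5, gamma=0.45, N=401):
    """Solve f = -cosh + int_Theta phi(theta - theta') f(theta') dtheta'/(2 pi)
    on Theta = [-Lam, Lam] by direct discretization."""
    th = np.linspace(-Lam, Lam, N)
    h = th[1] - th[0]
    W = kernel_2cdd(th[:, None] - th[None, :], theta0, gamma) * h / (2.0 * np.pi)
    f = np.linalg.solve(np.eye(N) - W, -np.cosh(th))
    return th, f, W
```

For a trial $\Lambda$ that is too small, the discretized operator has norm below one and the solution comes out negative everywhere; a positive region in the interior can only appear once $\Lambda$ is large enough that the integrated kernel exceeds one, consistent with the bound \eqref{eq:bound_on_N}.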
We have verified that this is indeed the case for all the 2CDD models under consideration, as illustrated for a particular member of this family in Figure \ref{fig:second_branch_large_R}; this behavior is to be contrasted with that of the standard iterative solution (the primary branch), which is positive everywhere. The numerics indicate that the negativity region is always a single interval centered at the origin of the form $\Theta=\{\theta\in\mathbb{R}\,|\,-\Lambda\le\theta\le\Lambda\}$. They also indicate that the interval size $\Lambda$ is model-dependent. In particular, it seems to grow with $\theta_0$ and decrease with $\gamma$. Nevertheless, the precise dependence of $\Lambda$ on the parameters deserves further investigation. \begin{figure}[t!] \begin{center} \includegraphics{Figures/epslargerF.pdf} \end{center} \caption{ Pseudoenergy $\epsilon(\theta)$ for the secondary branch solution (blue) at large values of $R$, showing the expected behavior \eqref{eq:second_large_R}, namely, it is below $0$ (marked with the dashed line) in a finite interval. The corresponding iterative (primary branch) solution is shown in red. Here the model parameters are $\theta_0=2$ and $\gamma=4\pi/10$, though we checked that the qualitative picture remains the same within the whole set of admissible values of $\theta_0$ and $\gamma$.} \label{fig:second_branch_large_R} \end{figure} We then proceed to analyze the secondary branch solution at the opposite end of its range, i.e., as $R$ approaches the critical value $R_*$. For some of the plots it will be convenient to show the results in terms of the log-scale distance \begin{equation} x=\log(R/2) \end{equation} which alleviates the exponential dependence (with $x_*=\log(R_*/2) $ for the corresponding critical point). Here we find it more instructive to display $L(\theta)$ instead of the pseudoenergy itself in order to ease the comparison with the primary branch solution. The situation is illustrated in Figure \ref{fig:both_branches_R_crit}.
The two branches approach each other as the value of $R$ decreases, eventually merging at $R=R_*$, after which they become complex-valued. For each $R$, the function $L(\theta)$ for the secondary branch is everywhere larger than the corresponding primary branch counterpart, which is compatible with the previously mentioned fact that it has lower energy (recall the overall minus sign in \eqref{etbas}). \begin{figure}[t!] \begin{center} \includegraphics{Figures/LclosecritF.pdf} \end{center} \caption{$L(\theta)$ for both the primary (red) and secondary (blue) branch solutions as $R$ approaches the critical value $R_*$. For each color (blue or red), the color gradient indicates the decrease of $R$ towards $R_*$, where the two branches merge. Here $\theta_0=5$ and $\gamma=4\pi/10$, which lead to $R_*\approx0.0192$. } \label{fig:both_branches_R_crit} \end{figure} The critical value $R_*$ could in principle have a dependence on $\theta$. We ran an extensive numerical test exploring this possibility, and all the numerical results indicate $\theta$-independence to high accuracy, even though at this moment we do not have an analytic proof of this property. The analysis went as follows. We first ran the iterative numerical routine and computed the pseudoenergy $\epsilon(\theta)$ for at least ten different values of $x$, spaced from each other and from $x_*$ by $10^{-8}$. Then, we selected several values of $\theta$ and for each value we performed a square-root fit of the form $a(\theta) + b(\theta) \sqrt{-x_*(\theta) + x}$. The fits were done using Mathematica's \texttt{NonlinearModelFit} function by giving an initial estimate for $x_*(\theta)$. By comparing all the obtained $x_*(\theta)$, we verified that they agree to within $10^{-8}$, which was our minimal working precision. The analysis was performed for several values of $\theta_0$ and for $\gamma$ in the range $0 \le \gamma \le (99/200) \pi$.
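The same square-root fit can be set up outside Mathematica with any least-squares routine, by observing that for a fixed trial value of $x_*$ the parameters $a$ and $b$ enter linearly; a sketch on synthetic data (all values arbitrary):

```python
import numpy as np

a_true, b_true, xstar_true = -1.12, 2.04, -0.97
x = xstar_true + np.linspace(1e-4, 1e-3, 10)        # ten points just above x_*
eps0 = a_true + b_true * np.sqrt(x - xstar_true)    # synthetic eps(theta = 0 | x)

def fit_sqrt(x, y, candidates):
    # for each trial x_*, fit (a, b) by linear least squares and keep the best trial
    best = (np.inf, None)
    for xs in candidates:
        if xs >= x.min():
            continue                                # sqrt must stay real on the data
        s = np.sqrt(x - xs)
        A = np.vstack([np.ones_like(s), s]).T
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = np.sum((A @ coef - y) ** 2)
        best = min(best, (r, xs))
    return best[1]

cands = xstar_true + np.linspace(-5e-5, 5e-5, 201)  # trial values of x_*
xstar_est = fit_sqrt(x, eps0, cands)
```

Reducing the non-linear fit to a one-dimensional scan over $x_*$ makes the error landscape easy to inspect, which is convenient when estimating the precision of the extracted critical point.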
In many cases, when the number of necessary points in the discretized $\theta$ grid was not very high, it was possible to work with even higher precision. In those cases, another way of getting $x_{*}$ with high precision is by assuming a square-root behavior for the pseudoenergy and solving the resulting equations using Mathematica's \texttt{FindRoot} function. In addition, we also verified that $R_*$ depends smoothly on the model parameters $\theta_0$ and $\gamma$, as shown in Figure \ref{fig:xcdeps} for both the fermionic and bosonic models. In particular, for large $\theta_0$ we have the asymptotic behavior $x_*=\log(R_*/2)\approx-\theta_0+x_*^{(0)}$ (see \S\ref{sec:NRlimit} for a derivation in the special limit where $\gamma$ is close to $\pi/2$, for which $x_*^{(0)}=\log\log(2+2\sqrt{2})$; for other values of $\gamma$ the linear term remains the same, though $x_*^{(0)}$ is different). \begin{figure}[t!] \centering \begin{subfigure}[b]{6cm} \centering \includegraphics[width=\textwidth]{Figures/xcgam.pdf} \caption{$\gamma$ dependence of $x_*$.} \label{fig:xcgam} \end{subfigure} \begin{subfigure}[b]{6.31cm} \centering \includegraphics[width=\textwidth]{Figures/xcth0.pdf} \caption{$\theta_0$ dependence of $x_*$ with $\gamma=2\pi/5$.} \label{fig:xcth0} \end{subfigure} \caption{Dependence of the critical $x_*$ on the model parameters. Black lines correspond to fermionic 2CDD models, red lines correspond to bosonic ones. In (a), we demonstrate the validity of the narrow resonance limit approximation for $x_*$ (red and black bullets/boxes); see \S\ref{sec:NRlimit}. } \label{fig:xcdeps} \end{figure} \subsection{Bosonic case} We have also repeated the analysis described above using the PALC method for the case of bosonic systems. The numerical routine used in this case only differs from the fermionic case by a few signs. As already mentioned in \S\ref{sec:models}, the solutions to the TBA equation for the bosonic models have an intricate behavior already in the 1CDD cases.
It was first noticed in \cite{Mussardo:1999aj} (for the case of real $u_1$, in the notation of \eqref{1cdd}) that the numerical iterative routine stops converging for some $R_{*}$, signaling the presence of a singularity. In fact, we have verified numerically that all the bosonic models with up to two CDD factors behave similarly to the fermionic 2CDD models of the previous section, i.e., they have a ``primary branch'' and a ``secondary branch'' which merge at a critical scale $R_{*}$, where the energies $E(R)$ have square-root singularities, and the value of $R_{*}$ is independent of $\theta$. There is a simple argument, based on the well-known relation between bosonic and fermionic TBA, which makes this behavior of the bosonic 1CDD model rather natural. Consider the TBA equation \eqref{tbas}, \eqref{Ldef} with $\sigma = +1$ and an $N$CDD kernel \eqref{sNpole}, and introduce the following function \begin{eqnarray} \tilde{\epsilon}(\theta) = \log\left[e^{\epsilon(\theta)} - 1\right]\;. \label{eq:epsilon_tilde} \end{eqnarray} Some simple manipulations show that this function satisfies a fermionic TBA equation with kernel \begin{eqnarray} \tilde{\varphi}(\theta) = \varphi(\theta) + 2\pi \delta(\theta)\;, \label{eq:BosonicFermionic} \end{eqnarray} where $\delta(\theta)$ is the Dirac $\delta$-function. Therefore, a general bosonic $N$CDD model is equivalent to the ($N+1$)CDD fermionic TBA, taken in the limit $u_{N+1}\to 0$ (see \eqref{eq:kernelNCDD})\footnote{Notice that $\lim_{u\to 0}\log\frac{i\sin u - \sinh\theta}{i\sin u + \sinh\theta} = i\pi\,\text{sign}(\theta)$, for the principal branch of the log function.}. Recalling the arguments presented in \S \ref{subsec:large_R_fermion}, we conclude that bosonic $N$CDD models admit two different types of large-$R$ behavior whenever $N>0$.
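The map \eqref{eq:epsilon_tilde} can be checked numerically in a few lines (a Python sketch of our own, for illustration only): with $\tilde{\epsilon}=\log(e^{\epsilon}-1)$ one has the pointwise identity $-\log(1-e^{-\epsilon})=\log(1+e^{-\tilde{\epsilon}})$, i.e.\ the bosonic and fermionic $L$-functions coincide, which is why the bosonic convolution term reproduces the fermionic one up to the extra $2\pi\delta(\theta)$ in \eqref{eq:BosonicFermionic}:

```python
import math

def eps_tilde(eps):
    """Bosonic -> fermionic pseudoenergy map: eps_tilde = log(exp(eps) - 1)."""
    return math.log(math.exp(eps) - 1.0)

def L_bosonic(eps):
    """Bosonic L-function, L = -log(1 - exp(-eps))."""
    return -math.log(1.0 - math.exp(-eps))

def L_fermionic(eps):
    """Fermionic L-function, L = log(1 + exp(-eps))."""
    return math.log(1.0 + math.exp(-eps))

# The identity L_bosonic(eps) = L_fermionic(eps_tilde(eps)) holds for any eps > 0.
for eps in (0.1, 0.5, 1.0, 3.0, 10.0):
    assert abs(L_bosonic(eps) - L_fermionic(eps_tilde(eps))) < 1e-9
```

Algebraically, both sides equal $\epsilon-\log(e^{\epsilon}-1)$, so the check is exact up to floating-point rounding.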
The large-$R$ regime of the pseudoenergy $\epsilon(\theta)$ for the primary branch is as expected and is easily accessed numerically; for the secondary branch, however, it is more involved to compute. Increasing the value of $R$, we eventually reach a value $R^{\prime}$ where the PALC method suddenly ceases to provide a real solution and reverts to the primary-branch solution. Analyzing the behavior of $\epsilon(\theta)$ for complex values of $\theta$, we verified that a pair of complex-conjugate zeros of $z(\theta)=1-e^{-\epsilon(\theta)}$ approaches the real axis, causing the numerical instability. In principle it is possible to refine the numerical methods so as to obtain solutions for $R>R^{\prime}$. However, it is not clear at the moment whether or not those singularities of $L(\theta)$ ever cross the real axis. If they do, an analysis similar to the one performed in \cite{Dorey:1996re} for the excited-state TBA could be carried out. We leave the analysis of the large-$R$ behavior of the secondary branch in bosonic models for a future study. \begin{figure}[t!] \begin{center} \includegraphics{Figures/LclosecritB.pdf} \end{center} \caption{$L(\theta)$ for the 2CDD bosonic model of type (d) with $\theta_0=5$ and $\gamma=3\pi/10$, in which case $R_*\approx0.2382$. Similarly to the fermionic case, the function $L(\theta)$ for the secondary branch is everywhere greater than that for the primary branch.} \label{1CDDbosonicoL} \end{figure} The behavior of the models for $R$ close to $R_{*}$ is illustrated in Figure \ref{1CDDbosonicoL} by the $L(\theta)$ function for a 2CDD model of type (d). The qualitative picture is similar to the fermionic case, i.e.\ the function $L(\theta)$ for the secondary-branch solution is everywhere greater than that for the primary branch, and the two merge as the critical point is approached.
We conclude this subsection by showing in Figure \ref{fig:xcdeps} the smooth dependence of $x_{*}$ on the model parameters, in particular in the limit $\gamma \rightarrow \pi/2$. In addition, notice that the bosonic curve always lies above the fermionic curve for the same parameters. This can be understood from the map \eqref{eq:BosonicFermionic} and the fact that the additional delta-function term always gives a positive contribution to the convolution term of the TBA equations. \subsection{Narrow resonance limit} \label{sec:NRlimit} Here we consider the special limit $\gamma\to\frac{\pi}{2}$ of the kernel~\eqref{eq:2cdd_kernel}. In this limit the poles of the kernel approach the real line, finally forming two Dirac $\delta$-functions. We shall refer to this as the Narrow Resonance (NR) limit. After integrating the delta functions and exponentiating, the TBA \eqref{tbas} becomes the difference equation \begin{equation} \label{eq:NR} Y(\theta|R)= e^{-R\cosh \theta}[1-\sigma Y(\theta+\theta_0|R)]^{-\sigma}[1-\sigma Y(\theta-\theta_0|R)]^{-\sigma}\,, \end{equation} where we introduced the notation $Y(\theta|R)=e^{-\epsilon(\theta|R)}$. Note that this can be seen as an infinite set of equations relating the values of $Y$ on the grid points $\theta \in (-\theta_0,\theta_0)+\theta_0\mathbb{Z}$. Let us focus on the fermionic case ($\sigma=-1$). Introducing $y_k=Y(\theta+k \theta_0)$ and $g_k=e^{-R \cosh(\theta+k \theta_0)}$ we can write \eqref{eq:NR} as \begin{equation} y_k=g_k(1+y_{k-1})(1+y_{k+1}) \, , \qquad (k\in\mathbb{Z}) \end{equation} and look for solutions on the different grids specified by the choice of $\theta$. This is an infinite set of equations; however, starting from $k=0$ one can obtain an approximate solution by truncating the system at some $|k|\leq m$, since $g_k$ and $y_k$ decay very rapidly with $R$ and $\theta_0$, and hence with $k$.
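The truncated system is easily solved by fixed-point iteration. A minimal Python sketch (our own illustration, seeded with the free solution $y_k=g_k$ on the integer lattice $\theta=0$; the $\sigma=+1$ branch implements the bosonic version of \eqref{eq:NR}) reads:

```python
import math

def solve_nr_truncated(R, theta0, m=8, sigma=-1, tol=1e-14, max_iter=100000):
    """Fixed-point iteration for the Narrow Resonance system
    y_k = g_k [1 - sigma*y_{k-1}]^{-sigma} [1 - sigma*y_{k+1}]^{-sigma},
    truncated to |k| <= m on the integer lattice (theta = 0).
    sigma = -1: fermionic case, sigma = +1: bosonic case."""
    ks = list(range(-m, m + 1))
    g = {k: math.exp(-R * math.cosh(k * theta0)) for k in ks}
    y = dict(g)  # seed with the free solution y_k = g_k
    for _ in range(max_iter):
        if sigma == -1:
            y_new = {k: g[k] * (1 + y.get(k - 1, 0.0)) * (1 + y.get(k + 1, 0.0))
                     for k in ks}
        else:
            y_new = {k: g[k] / ((1 - y.get(k - 1, 0.0)) * (1 - y.get(k + 1, 0.0)))
                     for k in ks}
        if max(abs(y_new[k] - y[k]) for k in ks) < tol:
            return y_new
        y = y_new
    raise RuntimeError("no convergence: is R below the branch point?")

# Primary-branch solution well above the branch point:
y = solve_nr_truncated(1.0, 2.0)
```

For $R$ below the lattice-dependent branch point the iteration stops converging, which is the numerical counterpart of the square-root singularity discussed next.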
Truncating to $m=1$ leads to the quadratic equation \begin{equation} \label{eq:NRtr1} y_0=g_0[1+g_1(y_0+1)][1+g_{-1}(y_0+1)]\,. \end{equation} One can now choose the integer lattice (i.e., $\theta=0$) to get \begin{align}\label{eq:y0nrm} y_0 &=-1-e^{R\cosh\theta_0}+\frac{1}{2}e^{R(1+2\cosh \theta_0)}\left(1\pm\sqrt{1-4e^{-R(1+\cosh\theta_0)}(1+e^{-R\cosh \theta_0})}\right). \end{align} The solution develops a square-root singularity at $x_*\approx-\theta_0+\log\log(2(1+\sqrt{2}))$, which is compatible with our findings in \S\ref{sec:F2CDD_num}. This point is shown as a red bullet in Figure \ref{fig:xcgam}. In contrast to the general case, it is clear that here the branching point depends on the choice of the $\theta$ lattice. Let us also comment that a similar analysis of the truncated system in the bosonic case ($\sigma=+1$) using the half-integer lattice leads to the black box shown in Figure \ref{fig:xcgam}.\footnote{ The analog of \eqref{eq:y0nrm} comes with a more complicated square-root argument, and no analytical solution for $x_*$ as a function of $\theta_0$ can be found in that case, although it is straightforward to find it numerically.} Note that the truncation to $m=1$ is only valid for sufficiently large $R$ and $\theta_0$. Increasing the truncation order leads to more coupled equations, which in turn can be recast as a (more complicated) algebraic equation for $y_0$, with parameters depending on $\theta$. The number of solutions increases accordingly. However, for any $\theta \in (-\theta_0,\theta_0)$ there is always a single pair of solutions which collide and form a branching point at some $x_*(\theta)\approx -\theta_0 + \text{const.}$, corresponding to real, positive $R_*(\theta)$, a feature that is not altered by increasing the truncation order.
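As a cross-check (a Python sketch under our conventions, not part of the original analysis), the branch point of \eqref{eq:y0nrm} can be located by bisecting on the vanishing of the square-root argument and compared with the large-$\theta_0$ asymptotic $x_*\approx-\theta_0+\log\log(2+2\sqrt{2})$:

```python
import math

def branch_point_R(theta0):
    """Solve 4*exp(-R*(1+cosh t0))*(1 + exp(-R*cosh t0)) = 1 for R by bisection.
    This is where the square-root argument of the m=1 truncation vanishes."""
    c = math.cosh(theta0)

    def disc(R):
        return 1.0 - 4.0 * math.exp(-R * (1.0 + c)) * (1.0 + math.exp(-R * c))

    lo, hi = 1e-12, 10.0  # disc(lo) < 0 < disc(hi); disc is increasing in R
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if disc(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

theta0 = 8.0
x_star = math.log(branch_point_R(theta0) / 2.0)
x_asym = -theta0 + math.log(math.log(2.0 + 2.0 * math.sqrt(2.0)))
# x_star and x_asym agree up to exponentially small corrections in theta0
```

For $\theta_0=8$ the two values differ by less than $10^{-3}$, illustrating the quoted linear-plus-constant behavior of $x_*(\theta_0)$.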
Finally, we remark that in the further special limit $\theta_0=0$ the difference equations~\eqref{eq:NR} become simple algebraic equations for $Y(\theta)$ that can be solved exactly in both the fermionic and the bosonic case, leading to the exact expressions $x_*=\log \log 2$ and $x_*=\log \log \frac{3}{2}\sqrt{3}$ for the critical points, respectively. These points are also shown in Figure~\ref{fig:xcgam}, emphasizing the smooth nature of the limit $\gamma\to\frac{\pi}{2}$. In Figure~\ref{fig:NRlimit} we present as an example a solution with $m=8$ truncation, together with the iterative solution of the integral equation \eqref{tbas} for $\theta_0=2$ and $\gamma$ approaching $\pi/2$, just before reaching the (first) critical $R_*(\theta)$ of the NR limit. The transition appears to be smooth; however, we do not yet have a complete understanding of the nature of this limit. We plan to revisit the narrow resonance model in a more systematic way in the future. \begin{figure} \centering \includegraphics{Figures/yvsthetaNR.pdf} \caption{Approaching the Narrow Resonance (NR) limit for $\theta_0=2$ and $x=1.75$.} \label{fig:NRlimit} \end{figure} \section{Discussion}\label{sec:discussion} There are two general questions on which we believe our results shed some light. One concerns the short-distance behavior of the theory under the generalized TTbar deformation \eqref{Aalphas}. Our results support the expectation that, at least in the cases when the CDD factor in the associated $S$-matrix deformation has the form \eqref{cddn}, \eqref{phipole} with finite $N$, the theory develops a Hagedorn singularity, corresponding to a density of high-energy states much greater than what is allowed in a Wilsonian QFT.
Although we demonstrated this in a limited set of examples -- the 2CDD deformations of the free $S$-matrix with both fermionic and bosonic statistics and the 1CDD deformations of the free boson $S$-matrix -- this result likely extends to more general $N$CDD deformations, at least for massive theories involving only one kind of particle. In fact the case $N=\infty$, a model known as \emph{Elliptic sinh-Gordon}, has been shown to display the same behavior as the models studied here \cite{Cordova:2021fnr}. We note that this behavior is qualitatively the same as the one encountered under the ``TTbar proper'' deformation \eqref{Aalpha} of a generic local QFT. Moreover, the singularity of $E(R)$ at the Hagedorn point $R_*$ is a square-root branching point, exactly as in the TTbar deformations with negative $\alpha$. From a formal point of view, this nature of the singularity is not entirely unexpected. Indeed, the character of the singularity is related to the rate of approach to the Hagedorn asymptotic \eqref{hagedorn1} at high energy $\mathcal{E} \to \infty$. Assume that the approach is power-like\footnote{{It is interesting to compare this assumption with the analysis of thermodynamic stability in \cite{Barbon:2020amo}.}} \begin{eqnarray}\label{tohagedorn} S(\mathcal{E}) = R_*\,\mathcal{E} - \frac{a\,L^{\kappa+1}}{\mathcal{E}^\kappa} + \cdots \end{eqnarray} where $\kappa$ is some positive number, $L$ is the spatial size of the system, which is assumed to be asymptotically large, and the dots represent yet higher negative powers of $\mathcal{E}$. The dependence on $L$ of the subleading term reflects the extensive nature of the entropy, which must behave as $L\,\sigma(\mathcal{E}/L)$ in the limit $L\to\infty$, with the intensive quantity -- the entropy density $\sigma$ -- depending on the energy density $\mathcal{E}/L$. Inspection of \eqref{tohagedorn} reveals the mass dimension of the coefficient $a$ to be $a\sim[\text{mass}]^{2\kappa+1}$.
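To make the connection between \eqref{tohagedorn} and the character of the singularity explicit, one can evaluate the free energy in the saddle-point approximation (a sketch in our notation, for illustration). The saddle sits at $R=\partial S/\partial\mathcal{E}$, i.e.
\begin{eqnarray}
R-R_* = \frac{\kappa\, a\, L^{\kappa+1}}{\mathcal{E}^{\kappa+1}}\,,\qquad
\mathcal{E} = L\left(\frac{\kappa\, a}{R-R_*}\right)^{\frac{1}{\kappa+1}}\,,
\end{eqnarray}
so that
\begin{eqnarray}
R\,F(R) = R\,\mathcal{E}-S(\mathcal{E}) = (\kappa+1)\,\kappa^{-\frac{\kappa}{\kappa+1}}\,a^{\frac{1}{\kappa+1}}\,L\,(R-R_*)^{\frac{\kappa}{\kappa+1}}\,,
\end{eqnarray}
which for $\kappa=1$ exhibits precisely a square-root branching point at $R=R_*$.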
Bearing in mind that all the deformation parameters $\alpha_s$ in \eqref{Aalphas} have even integer dimensions, one could expect the exponent $2\kappa+1$ to be an integer. The lowest positive $\kappa$ consistent with this assumption is $\kappa=1$, and then \eqref{tohagedorn} leads exactly to the square-root singularity of $E(R)$. Still, the physics behind this simple character of the singularity appears mysterious. Analytic continuation of $E(R)$ below $R_*$ returns complex values of $E$. This likely signals an instability of the ground state at $R<R_*$ against some sort of decay. If so, what are the products of the decay? Usually, in a theory with a finite range of interaction, the decay of the unstable ground state proceeds through nucleation, as in the ``false vacuum'' decay studied in \cite{Coleman:1977py,Kobzarev:1974cp}. However, such a decay would imply a much weaker -- and analytically more complicated -- singularity at $R_*$. Therefore the simple algebraic character of the actual singularity appears puzzling. A different, but possibly related, question is the physical interpretation of the secondary branch of $E(R)$ discovered in \S \ref{sec:results}. An even more general question concerns the relation between the $S$-matrix and the underlying local structure. Suppose we are given an $S$-matrix, i.e.\ a collection of masses of stable particles as well as the full set of scattering amplitudes, satisfying all the standard requirements of $S$-matrix theory -- unitarity, analyticity, crossing and bootstrap conditions (see e.g.\ \cite{Eden:1966dnq, Iagolnitzer:1994xv}) -- with a singularity structure consistent with macro-causality \cite{Iagolnitzer:1974wz}. Is there a local QFT generating such a scattering theory? The answer is generally no. There are consistent $S$-matrices which cannot be derived from a Wilsonian QFT, and indeed do not have an underlying local structure, meaning a complete algebra of local operators.
This possibility is famously realized in string theories. The results presented here support the expectation that the overwhelming majority of self-consistent $S$-matrices are not derivable from a local QFT. Although this expectation arises from a general analysis of the RG flows \cite{Wilson:1973jj}, we substantiate it by providing concrete examples in 1+1 dimensions with factorizable $S$-matrices consisting of pure CDD factors. We studied a number of examples of such $S$-matrices and verified that they lead to the Hagedorn density of high-energy states \eqref{hagedorn1}, familiar from string theories. What is more, it looks likely that this situation is rather general: with the exception of a small subset of ``local field-theoretic'' $S$-matrices, the bulk of the space of consistent, factorizable $S$-matrices in 1+1 dimensions leads to a Hagedorn transition. This statement of course needs to be verified more systematically, but it is tempting to conjecture that this is the general situation, not limited to integrable theories and to low space-time dimensions. If so, would it mean that the majority of consistent $S$-matrices correspond to some kind of string theory? Or is there perhaps a more general class of theories, besides strings, which break the standard local structure of QFT while preserving macro-causality and exhibiting the stringy density of states? The present work represents the first step of a project whose goal is the systematic analysis of the TBA equations for completely general CDD-deformed factorizable $S$-matrices, with arbitrarily complicated CDD factors \eqref{cddg}, possibly including the factors \eqref{phientire} with singular behavior at high energies. Clearly, CDD deformations of more complicated $S$-matrices, involving more than one kind of particle -- possibly with mass degeneracies, a situation leading to off-diagonal scattering -- also have to be studied.
Such $S$-matrices lead to systems of TBA equations more complicated than the simple equation \eqref{tbas}. Nonetheless, we believe that the numerical methods adopted here, in particular the PALC routine, can be adapted in full generality. Finally, a similar analysis can be extended to the CDD-deformed ``massless TBA systems'' (see e.g.\ \cite{Zamolodchikov:1991vx,Zamolodchikov:1992zr,Fendley:1993jh}). Although the physical foundation here is less firm -- since the notion of an $S$-matrix is ambiguous for massless theories in 1+1 dimensions -- these cases might yield welcome surprises. \subsection*{Acknowledgements} AZ acknowledges the warm hospitality extended to him at the International Institute of Physics, Natal, Brazil, where parts of this work were done. AZ is grateful to A. Polyakov and F. Smirnov for their interest and discussions. SN wishes to thank R. Tateo, L. G. C\'{o}rdova and F. I. Schaposnik for their always interesting and useful comments and questions. Work of GC was supported by MEC and MCTIC. Work of TF was supported by the Serrapilheira Institute (grant number Serra-1812-26900). ML was supported by the National Research Development and Innovation Office of Hungary under the postdoctoral grant PD-19 No. 132118 and by the Fund TKP2020 IES (Grant No. BME-IE-NAT), under the auspices of the Ministry for Innovation and Technology. The early stage of ML's work was carried out at the International Institute of Physics, Natal, Brazil, where he was supported by MEC and MCTIC. Work of SN is supported by NSF grant PHY-1915219. Work of AZ was partly supported by the NSF grant PHY-191509.
\section{Introduction} A low-noise, stable magnetic field is useful in a broad range of scientific fields. Atom interferometry and microgravity~\cite{kubelka-langeThreelayerMagneticShielding2016,zoestBoseEinsteinCondensationMicrogravity2010}, electron microscopy~\cite{krivanekElectronMicroscopeAberrationcorrected2008}, nuclear magnetic resonance~\cite{mansfieldMultishieldActiveMagnetic1987,taylerInvitedReviewArticle2017}, magnetometry~\cite{bottiNoninvasiveSystemSimultaneous2006,bertoldiNoiseResponseCharacterization2006,bertoldiMagnetoresistiveMagnetometerImproved2005}, and atomic clock~\cite{ludlowOpticalAtomicClocks2015} experiments have all benefited from advances in magnetic field stabilisation. The dominant magnetic noise in these environments arises from dc magnetic field fluctuations due to geomagnetic fields, other nearby instruments, and magnetised objects. Specifically, in ultracold atomic physics~\cite{dedmanActiveCancellationStray2007,zhangCoherentZerofieldMagnetization2015,ottlHybridApparatusBoseEinstein2006} a low-noise, well-controlled magnetic field allows for measuring interaction-driven phenomena that occur on long timescales. For example, coherently-coupled quantum degenerate mixtures of $^{23}$Na atoms in the miscible $|F,m_F\rangle=|1,\pm 1 \rangle$ states can be used to study the sine-Gordon Hamiltonian~\cite{gallemiDecayRelativePhase2019,sonDomainWallsRelative2002,iharaTransverseInstabilityDisintegration2019}. Passive magnetic shielding is well-suited for isolating an experiment by excluding magnetic fields from a contained volume. As opposed to active stabilisation~\cite{dedmanActiveCancellationStray2007,merkelMagneticFieldStabilization2019} or dynamical decoupling~\cite{trypogeorgosSyntheticClockTransitions2018,caiRobustDynamicalDecoupling2012}, it uses materials that have a high magnetic permeability $\mu_r$ and so redirect the magnetic flux lines around the enclosed volume.
Different materials have different properties and exploit different shielding mechanisms: high-$\mu_r$ materials screen quasi-dc fields up to a few 100\,Hz by flux-shunting, while highly conductive materials cancel fields oscillating at a few kHz through induced eddy currents~\cite{sumnerConvectionalMagneticShielding1987,burtOptimalThreelayerCylindrical2002}. The distortion of the magnetic field depends on the physical parameters of the material, the shield geometry and the frequency of the magnetic source. For a linear, homogeneous, isotropic, and non-dispersive material the working mechanism is simply understood using Maxwell's equations, which relate the flux density $\mathbf{B}$ to the magnetic field $\mathbf{H}$ as $\mathbf{B}(r,t) = \mu_r\mu_0\mathbf{H}(r,t)$, where $\mu_0$ is the magnetic permeability of vacuum. In the absence of currents, the requirement that the tangential component of $\mathbf{H}$ and the normal component of $\mathbf{B}$ remain continuous across an interface between materials with different $\mu_r$ implies that the field lines are bent nearly tangentially to the interface~\cite{celozziElectromagneticsShielding2008}. Although the principal mechanism is simple, an exhaustive design study is still required to optimise all the parameters of the shield. \begin{figure}[t] \centering \includegraphics[]{fig1.pdf} \caption{(a) Top and (b) side view of the vacuum apparatus. The maximum (minimum) dimensions of the shield are 224\,mm (88\,mm) diameter and 700\,mm (50\,mm) length, bound by the vacuum apparatus, the size of the glass cell and the surrounding dc coils, and the presence of additional optics around the glass cell. The shield needs to have 10 openings to allow for optical access through all the horizontal/vertical windows of the glass cell and installation around the tube connecting it to the rest of the vacuum apparatus. All distances are in millimetres.
} \label{fig:1} \end{figure} A compact source of cold $^{23}$Na atoms~\cite{lamporesiCompactHighfluxSource2013,colziSubDopplerCoolingSodium2016}, combined with a hybrid magneto-optical trap~\cite{colziProductionLargeBoseEinstein2018}, leaves enough space for the construction of a passive magnetic shield able to reduce the fluctuating external magnetic field. At the same time, it is still necessary to produce a well-controlled magnetic field inside the shielded volume, to fix the quantisation axis and control the energy spacing between atomic states. The geometric design of the shield makes this possible without compromising its performance. \section{Design considerations} The optimisation of the shield design hinges on three main aspects: the shape of the shield, the choice of materials, and the geometrical constraints of our experiment. The first two set the trade-off between shielding efficiency and saturation tolerance, while the third is determined by the geometry of the vacuum apparatus and the need for optical and electrical access. \subsection{Shape and material} The ideal shape of a magnetic shield is a sphere or an infinitely long cylinder, since sharp corners generally lead to flux leakage. Precise machining and welding are necessary to minimise this effect. The overall size of the shield should be as small as possible, since the attenuation $A = \mu_r d / (2R)$ of an external magnetic field scales as the inverse of the shield radius $R$ at fixed thickness $d$, where $\mu_r$ is the relative permeability of the material. Multi-layered designs increase the performance over a single shield of similar thickness and volume. The total attenuation is proportional to the product of the attenuations of the individual layers. Each layer sees a reduced magnetic field, already screened by the layers closer to the field source, which makes it unlikely to saturate.
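As a rough design aid, the quoted scaling $A=\mu_r d/(2R)$ and the product rule for nested layers can be coded in a few lines. This Python sketch is ours (the radii in the example are illustrative, and the naive product ignores the geometric reduction factors and the openings, so it serves only to compare design variants, not to predict the measured performance):

```python
MU_R_MUMETAL = 4.7e5   # relative permeability of mu-metal (quoted in the text)
MU_R_SUPRA50 = 2.0e5   # relative permeability of Supra-50 (quoted in the text)

def layer_attenuation(mu_r, thickness_m, radius_m):
    """Quasi-dc attenuation A = mu_r * d / (2R) of a long cylindrical layer."""
    return mu_r * thickness_m / (2.0 * radius_m)

def total_attenuation(layers):
    """Leading-order multi-layer estimate: the total attenuation is
    proportional to the product of the single-layer attenuations.
    Each entry of `layers` is (mu_r, thickness_m, radius_m)."""
    total = 1.0
    for mu_r, d, R in layers:
        total *= layer_attenuation(mu_r, d, R)
    return total

# Single-layer example: a 2.5 mm Supra-50 layer at 57 mm radius
# (illustrative numbers) gives A of a few thousand.
A_inner = layer_attenuation(MU_R_SUPRA50, 2.5e-3, 0.057)
```

The strong overestimate of the bare product is exactly why the layer spacing and openings must be optimised numerically, as done in the FEM simulations below.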
Besides the geometry, the only other parameter that affects the performance of the shield is the magnetic permeability $\mu_r$, which is tied to the choice of material. In general, ferromagnetic materials have a large, non-linear $\mu_r$ that depends on the modulus of $\mathbf H$ and the residual magnetization $\mathbf M$, so that $\mathbf{B}(r,t)/\mu_0 = \mu_r(H)\mathbf H(r,t) + \mathbf M$. This leads to hysteresis in the response of the material to an external field. When $\mathbf B$ becomes too high, the magnetic domains of the material are all aligned with the external field; the material saturates and is unable to sustain any higher flux. Among all ferromagnetic elements and alloys, one of the most common materials used for magnetic shielding is $\mu$-metal, a magnetically soft alloy composed of 80\% nickel, 5\% molybdenum, and iron. It has $\mu_r=4.7\times 10^5$, which allows it to reach a high shielding efficiency, but it saturates at a relatively small magnetic field~\footnote{We denote saturation values in units of Tesla, 1\,T = 10$^4$\,G.} of 0.75\,T. Other materials have higher saturation values. For example, Supra-50, an alloy composed of 48\% nickel and 52\% iron, saturates at 1.5\,T, roughly twice the value of $\mu$-metal, but has a smaller $\mu_r=2\times 10^5$. \subsection{Experimental constraints} The main constraints on the shield geometry are determined by the dimensions of the ultra-high-vacuum cell that contains the atomic gas. A quartz octagonal cell is welded onto a horizontal tube of $\diameter$25\,mm and 80\,mm length, directly connected to the main vacuum apparatus (see Fig.~\ref{fig:1}). Each side-face window has $\diameter$23\,mm and a thickness of 4.8\,mm, except for the top and bottom windows, which are $\diameter$58\,mm and 6.4\,mm thick.
The outer distance between two parallel faces is 71\,mm, while the distance between the center of the cell and the edge of the vacuum apparatus (up to the head of the screws used to fix the flange on which the cell is mounted) is 112\,mm. The shield comprises multiple pieces so that it can be mounted around the quartz cell. It cannot be completely hermetic, since our experiments require optical access to the atomic sample for laser cooling and trapping~\cite{colziProductionLargeBoseEinstein2018}. The design requires at least ten apertures of $\diameter$30\,mm so that all directions defined by the cell windows are accessible with standard 25\,mm optics. Other, smaller apertures ($\diameter$10\,mm) are necessary for routing cables that carry current to ac and dc coils inside the shield. The largest of these coils is a pair of quadrupole coils with an inner (outer) diameter of 73\,mm (87\,mm) and a height of 10\,mm, placed 30\,mm apart, which can produce a maximum gradient of 200\,G/cm at the location of the atomic ensemble. The outer diameter of these coils limits the minimum diameter of the shield, and the maximum magnetic field they produce needs to be below the saturation threshold of the shield. The atoms will be exposed to an axial bias magnetic field of $\approx 100$\,mG generated by electromagnets, whose modulus is required to be as stable as possible. These numbers set the minimum and maximum radii of the shield to 44\,mm and 110\,mm, respectively, and the minimal internal length to 50\,mm. The outer length of the shield is limited by the presence of the optical table supporting the vacuum system, located 350\,mm below the center of the cell. The overall volume is limited to the dimensions of Fig.~\ref{fig:1}. With these constraints in mind we proceed with numerical simulations to optimise the design of the shield and assess its performance. \section{Finite element method simulations} We used the finite-element method (FEM) as implemented in COMSOL\footnote{COMSOL Multiphysics v.
5.4. www.comsol.com. COMSOL AB, Stockholm, Sweden.} to simulate the behaviour of the shield for different geometrical configurations. In a series of simulations we optimised the number of layers of the shield, their thickness, and the inter-layer distance, considering both the suppression of external magnetic fields and the saturation of the inner layer due to fields produced by our dc coils. \subsection{Number of layers} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{fig2.pdf} \caption{(a) Shield geometry used to investigate the difference between 3 and 4 layers. The inner layer was a cylinder with diameter and length equal to 120\,mm and each subsequent layer was increased by 40\,mm both in diameter and length. All layers were 1\,mm thick. The applied external field was 1\,G directed along $z$. (b) Magnetic flux density along $x$ (top) and $z$ (bottom) in the central region of the shield. Using more layers leads to a higher suppression of the external field along both directions.} \label{fig:2} \end{figure} We compared the attenuation of an external field directed along the vertical and horizontal axes for a 3-layer and a 4-layer shield. For this simulation we used a simplified geometry composed of three or four concentric cylinders, exposed to an external uniform magnetic field of 1\,G. The innermost cylinder had an internal radius of 60\,mm, a length of 120\,mm and a thickness of 1\,mm. Each subsequent layer was increased by 40\,mm both in diameter and length. In order to save computational time and memory, the magnetic permeability of the material was set to $\mu_r=4\times10^4$, a conservative value with respect to the actual response of $\mu$-metal. Both geometries attenuate the external field by three orders of magnitude, with the 4-layer shield outperforming the 3-layer shield by more than a factor of 3 (Fig.~\ref{fig:2}). It also leads to a more uniform field, especially in the horizontal plane.
\subsection{Saturation} We then simulated the effect of the internal magnetic fields produced by the dc coils on the saturation of the shield material. We considered the magnetic field produced by a pair of coils placed above and below the upper and lower windows of the glass cell, with a mean radius and a relative distance of 31\,mm, producing a gradient field of 100\,G/cm. We modelled a single-cylinder $\mu$-metal shield with a length of 70\,mm, a radius of 40\,mm and a variable thickness of 1\,mm or 2\,mm surrounding the coils. Increasing the thickness or the distance between the shield and the magnetic field source lowers the magnetic flux in the bulk of the material. For a radius of 40\,mm, the maximum field value on the xy-plane is 0.33(1)\,T for a thickness of 1\,mm versus 0.15(1)\,T for 2\,mm. The flux is more than halved when the thickness is doubled, and the decay rate of the magnetic flux as a function of the radius is larger for the smaller thickness (Fig.~\ref{fig:3}). \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{fig3.pdf} \caption{(a) Geometry used to investigate the effect of the non-uniform field produced by the dc coils on the shield. The shield has a length of 70\,mm, a varying radius from 40\,mm to 55\,mm and a thickness of 1\,mm (light) or 2\,mm (dark). (b) Maximum value of the magnetic flux inside the shield layer as a function of its radius. The field decreases by a factor of two for the thicker layer. The dashed lines are linear fits to the data, showing a decrease of 75\,G/mm for the 1\,mm thick layer and 26\,G/mm for the 2\,mm thick layer.} \label{fig:3} \end{figure} The maximum field experienced by the shield lies away from the xy-plane, near the edge of the dc coils, and is 0.5\,T, close to the saturation threshold of $\mu$-metal. We therefore proceeded with a more realistic simulation of a single-cylinder shield made of Supra-50, including all the required apertures.
The shield was 2.5\,mm thick, with a length of 211\,mm and a radius of 54\,mm. For a gradient of 50\,G/cm the maximum field generated on the shield is 250\,G, which is well below the saturation threshold of Supra-50. \subsection{Axial attenuation} The optimal spacing of the layers of the shield is attained when the radius of each layer is double that of the previous one~\cite{burtOptimalThreelayerCylindrical2002}. This leads to prohibitively large shield designs that cannot be accommodated in our experiment due to space constraints. We therefore optimised the axial inter-layer distance using a 4-layer shield with an inner-layer radius of 57\,mm, a length of 84\,mm, a radial inter-layer distance of 20\,mm, and an external magnetic field of 1\,G along the $x$- or $z$-axis. We investigated the role of the length of the innermost layer, including all the openings. Increasing the length of the innermost layer in steps of 10\,mm, while keeping the axial inter-layer distance fixed at 10\,mm, increases the shielding efficiency and leads to a more uniform residual magnetic field in the region of interest. The absolute value of the residual magnetic field at the centre of the shield decreases from 416\,$\mu$G to 154\,$\mu$G when the length of the shield increases from 84\,mm to 124\,mm. Similarly, keeping the length of the innermost layer fixed at 84\,mm and varying the axial inter-layer distance from 10\,mm to 30\,mm, the residual field drops from 416\,$\mu$G to 43\,$\mu$G. The results of these simulations are shown in Table~\ref{tab:1}. By fixing the inter-layer distance to 10\,mm and increasing the length of the innermost layer, we were able to obtain good shielding performance along the axial direction. \begin{table}[ht] \caption{\label{tab:1} Residual magnetic field values for a 4-layer shield of varying length and inter-layer distance. The inter-layer distance (length) was fixed at 10\,mm (84\,mm) when varying the length (inter-layer distance).
Longer designs with larger spacing between the layers lead to better performance.} \begin{ruledtabular} \begin{tabular}{cccccc} length (mm)& 84 & 104 & 124 & 84 & 84 \\ inter-layer distance (mm) & 10 & 10 & 10 & 20 & 30 \\ $\vert B\vert$ ($\mu$G) & 416 & 197 & 154 & 69 & 43 \\ \end{tabular} \end{ruledtabular} \end{table} \section{Implementation and characterisation} \subsection{Implementation} Based on the results of our simulations, we finalised the design of the shield to that shown in Fig.~\ref{fig:4}. The shield is composed of four layers of different materials and sizes. Its volume is 12 liters, which makes it much more compact than existing designs~\cite{xuUltralowNoiseMagnetic2019}. The innermost layer is a 2.5\,mm-thick Supra-50 layer, while the outer layers are made out of 2\,mm-thick $\mu$-metal. Each layer is composed of two top/bottom pieces that are stacked on each other. The internal radius of the bottom piece is equal to the external radius of the top piece plus a 1\,mm clearance. This structure is modular and easy to assemble and disassemble. We used two sets of nylon supports to fix the spacing between subsequent layers (both top and bottom) along the axial and radial direction during assembly. The shield has ten $\diameter$30\,mm and two $\diameter$10\,mm openings, which are used for optical and cable access, respectively. An additional set of three $\diameter$4\,mm holes is present at the top and at the bottom of the outer layers to connect to the mounting pieces. Final simulations accounting for the non-linear response and the saturation limit of the layers predict for this geometry an overall attenuation of an external magnetic field by a factor of $3\times 10^5$ (Fig.~\ref{fig:5}), and that the magnetic field produced by the dc coils leaves the inner-shield layer effectively unmagnetised. \begin{figure}[t] \centering \includegraphics[]{fig4.pdf} \caption{(a) Picture of the four layers that comprise the upper half of the shield.
The lower left layer is made out of Supra-50 and the rest from $\mu$-metal. (b) The four bottom layers stacked into their final configuration. The layers are spaced using a set of nylon supports. (c) Technical drawing of the final shield arrangement showing all the relevant dimensions and openings. The shaded areas at the top and bottom are the nylon supports.} \label{fig:4} \end{figure} \subsection{Characterisation} We tested the performance of the shield before mounting it on the experimental apparatus, using a solenoid with a rectangular cross-section of 57\,cm$\times$39\,cm and a length of 1.5\,m, large enough to encompass the whole shield and provide a uniform magnetic field throughout its volume. The solenoid was made out of 138 windings of 0.8\,mm copper wire, spaced by a mean distance of 10\,mm, and produced a field of 1.28\,G/A. We profiled the axial field of the solenoid using differential $\pm$0.5\,A measurements and found it to be uniform to within 6\%. We measured the attenuation of the solenoid magnetic field due to the presence of the shield layers using a Mag-13MCL100 low-noise 100\,$\mu$T field sensor from Bartington Instruments. \begin{figure}[ht!] \centering \includegraphics[]{fig5.pdf} \caption{Measurement of the magnetic field suppression along the axial and radial directions (points) compared to the simulated data (lines). The error on the horizontal axis is $\pm$3\,mm, which is less than the width of the points. On the vertical axes we assume a systematic error of 6\% in addition to the resolution of the magnetometer. The shaded vertical regions correspond to the shield layers.} \label{fig:5} \end{figure} The measurements are shown in Fig.~\ref{fig:5} along with the simulated field inside the shield. The remnant field inside the shield is less than 30\,$\mu$G and 3\,$\mu$G along the radial and axial directions, respectively, for a 1\,G external field, which represents a suppression of roughly 6 orders of magnitude.
After the shield was positioned in place around the vacuum cell, we tested its performance using ultracold atoms as magnetic field sensors. We measured the fluctuations of the magnetic field using a microwave two-photon transition between $\ket{1, -1}$ and $\ket{1, 1}$ of $^{23}$Na ultracold atoms in a magnetic field with a Larmor frequency of 180\,kHz. After finding the resonant frequency between the two states, we used side-of-fringe pulses to prepare an equal superposition of the two states. We then measured the atomic magnetisation fluctuations over time; the magnetisation is sensitive to the Larmor-frequency shift due to the linear Zeeman effect. Using an independent calibration of the Rabi frequency, we computed the magnetic field fluctuations from the magnetisation. Figure~\ref{fig:6} shows the fluctuations of the magnetic field resonance over a period of 4.5 hours. The biased fluctuations are clearly dominated by a slow drift of the order of 20\,$\mu$G, which we also verified by spectroscopic measurements before and after this stability measurement. \begin{figure}[t!] \centering \includegraphics[]{fig6.pdf} \caption{(a) Calculated magnetic field from atomic spectroscopy measurements over a period of 4.5 hours (left). The field drifts by about 20\,$\mu$G in that time, while the shot-to-shot fluctuations are 2.6\,$\mu$G. Histogram of the distribution of the biased magnetic field fluctuations (right). (b) The Allan deviation of the field over the number of experimental iterations $n$ shows that the magnetic field stability is as low as 1.1\,$\mu$G after 500\,s. At short times the noise is dominated by shot-to-shot fluctuations $\propto n^{-1/2}$, while at long times the drift term $\propto n$ is larger. The cycle time of the experiment was 32\,s and the interrogation time of the magnetic field was 2.6\,ms.} \label{fig:6} \end{figure} The Allan deviation (Fig.~\ref{fig:6}b) gives further insight into the nature of the fluctuations.
Each point in the graph corresponds to a single realisation of the experiment with a cycle time of 32\,s, probing the field value for 2.6\,ms. From the first few points we infer a standard deviation of the unbiased fluctuations of 2.6\,$\mu$G. At short times the noise is limited by shot-to-shot and technical noise. After an integration time of 500\,s the minimum deviation of the field is as low as 1.1\,$\mu$G. At larger times the Allan deviation is dominated by the drift term but still remains below 7\,$\mu$G. This drift term can be eliminated with a low-bandwidth feedforward loop~\cite{dedmanActiveCancellationStray2007,xuUltralowNoiseMagnetic2019}. The ambient magnetic field around the shield had an r.m.s. noise of $220\,\mu$G as measured by the magnetometer. This was mainly due to the 50\,Hz line-noise and its first odd harmonic. Without the shield in place we also observed abrupt magnetic field shifts of a few mG induced by the presence/absence of nearby magnetised objects up to a few meters away\footnote{The magnetised object was actually a car that was parked outside the window of our lab, so we affectionately refer to this measurement as `car-acterisation' of the magnetic field}. By scanning the time within the line cycle at which the field measurement was taken, we measured the contribution of the line-noise to be roughly equal to the shot-to-shot fluctuations. This term was not present in the measurements of Fig.~\ref{fig:6} since the experiment was phase-locked to the line frequency. At the beginning of each sequence we used a degaussing ramp~\cite{thielDemagnetizationMagneticallyShielded2007} to account for the magnetic hysteresis of the shield. The degaussing ramp comprised 10 current pulses of alternating polarity whose amplitude diminished exponentially from 20\,A to 0.2\,A over 1.5 seconds, and produced a field of 1.3\,G/A at the location of the atoms.
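The Allan analysis of Fig.~\ref{fig:6}(b) is straightforward to reproduce from a time series of field values. The following sketch (Python; the data set is synthetic, generated with the quoted 2.6\,$\mu$G shot-to-shot noise, 20\,$\mu$G total drift and 32\,s cycle time, with an arbitrary seed and sample count) computes the non-overlapping Allan deviation for several averaging windows:

```python
import numpy as np

def allan_deviation(samples, m):
    """Non-overlapping Allan deviation for an averaging window of m samples."""
    n_blocks = len(samples) // m
    blocks = samples[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(blocks) ** 2))

rng = np.random.default_rng(1)
cycle = 32.0                                   # cycle time in seconds
n = 506                                        # ~4.5 hours of repetitions
t = np.arange(n) * cycle
# linear drift of 20 uG over the run plus 2.6 uG shot-to-shot noise, in G
field = 20e-6 * t / t[-1] + rng.normal(0.0, 2.6e-6, n)

for m in (1, 4, 16, 64):
    print(f"tau = {m * cycle:6.0f} s  sigma = {allan_deviation(field, m) * 1e6:.2f} uG")
```

At short windows the deviation reproduces the shot-to-shot level and falls as $n^{-1/2}$; at long windows the linear drift takes over, as in Fig.~\ref{fig:6}(b).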
We observed that after changing the magnetic field environment inside the shield, it takes about 20 repetitions of the experiment for the magnetic field to relax to its new value due to residual hysteresis of the shield. \section{Conclusions} Passive stabilisation of the ambient magnetic field using a precisely engineered ferromagnetic shield works extremely well for atomic physics experiments. Based on our extensive design study, we presented the implementation and characterisation of a compact magnetic shield that reduces the low-frequency noise in our laboratory by six orders of magnitude to the level of a few $\mu$G. This will allow a new generation of experiments to be performed. For example, experiments where the coherence of the internal spin dynamics of a Rabi-coupled atomic system is longer than the typical timescales of many-body dynamics~\cite{gallemiDecayRelativePhase2019,tylutkiConfinementPrecessionVortex2016}, of the order of a few Hz, or experiments that require the manipulation of atomic interactions through rf-induced Feshbach resonances~\cite{papoularMicrowaveinducedFanoFeshbachResonances2010} will become possible. \begin{acknowledgments} We acknowledge discussions and suggestions from MuShield Inc. and Amuneal Manufacturing. We thank Magnetic Shield Ltd. for collaborating on the design and manufacturing of the shield. We acknowledge fruitful discussions with Immanuel Bloch and Christophe Salomon. This work was supported by the Provincia Autonoma di Trento and the Istituto Nazionale di Fisica Nucleare under the FIS$\hbar$ project. We acknowledge funding received from the project NAQUAS of QuantERA ERA-NET Cofund in Quantum Technologies (Grant Agreement N. 731473) implemented within the European Union's Horizon 2020 Programme. \end{acknowledgments}
\section{\label{sec:Intro}Introduction} The study of electromagnetic excitations localized on metal-dielectric interfaces, or surface plasmon polaritons (SPPs), has become of essential importance due to its potential for the implementation of sub-wavelength photonic circuits.\cite{Ebbesen_Nature_03,Plasmon_devices_05} As a result of such study, phenomena like total suppression of reflection,\cite{TSSR_Hutley76} transformation of polarization,\cite{TP_Hutley90,KatsPRB,KNN_PhysRev} enhanced light transmission,\cite{Ebbesen98_Nature,Ebbesen_theory_PRL01} formation of band-gaps\cite{FullGapBarnes_PRL96,Bozhe_PRL,SPIE_2D03,Maradud_2D_gaps_PRB02} and other interesting properties of plasmonic crystals have been discovered. Lately, a great deal of attention has been devoted to the creation of optical elements for SPPs,\cite{Ebbesen_45mirrors_06,Dereux_SPwaveguides_PRB04,Krenn_micro-nano-03, Krenn_metal_stripes-PRB01,EbbesBozhev_inter_nat06} as well as to the efficient coupling of light into and out of SPPs. These latter problems require precise knowledge of the scattering coefficients of the dispersion centers (i.e. deviations from a flat metal-dielectric interface) placed on the path of SPPs. However, while many theoretical works have studied the problem of SPP scattering by rough surfaces, scattering from simple geometries is not so well known. This was largely due to the previous lack of reproducible experimental data, which are now available thanks to recent advances in the controlled patterning of metal surfaces.
\begin{figure}[] \begin{center} \includegraphics[]{geometry.eps} \end{center} \caption{\label{geometry}(Color online) Schematic illustration of the studied system: SPP scattering at the inhomogeneity formed by either (a) the perturbation of the interface profile $h(x)$ or (b) the surface impedance of the metal $\xi(x)$.} \end{figure} From the theoretical side, the calculation of electromagnetic (EM) fields on a metal surface in the optical regime is a well-defined but difficult problem. Although the macroscopic Maxwell equations govern very accurately the interaction of the EM fields with the solid, their solution is difficult due to the different ranges of length scales involved (system size, wavelength, skin depth, etc.). Several techniques have been applied to this problem, each with its own advantages and drawbacks. The Green's dyadic technique\cite{MaradudGreffet_PRB94,Olivier,BozhDyadic_PRB} or the Discrete Dipole Approximation\cite{Draine,Andrei_APB06,Andrei_LPL06} are virtually exact methods, which suffer from the large (quite often prohibitive) numerical cost involved in the inversion of huge matrices and the need to calculate cumbersome Sommerfeld integrals. On the other hand, the mode matching technique is computationally much simpler, but it can only be directly applied to the case of indentations in a metal film (and not to protrusions).\cite{Luis_mirrors_05,Luis_modes_PRL04} Additionally, it requires the use of surface impedance boundary conditions (SIBC), which are only applicable when the metallic dielectric constant, $\epsilon$, satisfies $|\epsilon| \gg 1$.
An alternative approach based on the Rayleigh approximation (which is valid for small scatterers) has also been extensively applied in the context of EM scattering by rough surfaces.\cite{Nano-optics_PR05} Even within this approximation, the calculation of the scattering coefficients requires solving a difficult integral equation.\cite{RoughSurfacePRB77,Tatarskii_JOSA,Shchegrov_PRL97} Several works have rendered this integral equation into a more manageable form by means of an additional approximation, which assigns a geometry-dependent ``local impedance'' to the surface relief (see, for instance, Refs. \onlinecite{MaradudinIBC, SanchesAppl.Lett.98,MaradudinPRB99}). To the best of our knowledge, all works have concentrated on scatterers with translational symmetry in one direction, onto which the SPP impinges at normal incidence, except for the case of a circularly symmetric defect, considered in Ref.~\onlinecite{Shchegrov_PRL97} by using the reduced Rayleigh equation. In this work, we use the Rayleigh approximation combined with the SIBC at the metal-vacuum interface, systematically taking into account the geometry of the surface. Our first result is an integral equation governing the SPP scattering by arbitrary perturbations, which can be either spatial variations in metal permittivity or in surface profile. This integral equation is as simple as the one defined by a ``local impedance'',\cite{MaradudinIBC,SanchesAppl.Lett.98,MaradudinPRB99} but is far more accurate, as shown by comparisons with results obtained using the mode-matching technique. Due to its simplicity, the method developed provides a clear physical description of SPP scattering effects. As a first application, here we concentrate on the scattering properties of a single inhomogeneity, as a function of the shape and geometrical parameters defining the defect.
\section{\label{sec:setup}Rayleigh expansion approximation} \subsection{Fields representation and boundary conditions} Consider a monochromatic surface plasmon with frequency $\omega$, impinging along a metal-vacuum interface at normal incidence (with in-plane wavevector $k_p$) onto an inhomogeneity region, see Fig.~\ref{geometry}. In keeping with other works on scattering of SPPs, we consider that the metal is lossless. From a physicist's point of view, we expect this to be valid in the systems we have in mind, as long as the dimension of the inhomogeneity in the direction of SPP propagation is smaller than the absorption length. More mathematically, SPP scattering channels are well defined in a lossless metal, and current conservation provides a strong (although not definitive!) test for the theory. Nevertheless the proposed method could, mutatis mutandis, accommodate absorption. In this case, instead of scattering coefficients, the outcome of the calculation would be spatial EM field distributions which could be used, for instance, to analyze scanning near-field optical microscope or leakage radiation experiments. The considered inhomogeneities can be due to variations either in the surface impedance or in the surface profile. Although the integral equations governing scattering will have the same functional form for both cases, the derivations are slightly different. Let us concentrate first on the latter case, which is somewhat more involved. Suppose that the metal has a dielectric constant $\epsilon$ (and correspondingly a surface impedance $\xi = 1/\sqrt{\epsilon}$) and that the surface relief profile has the functional form \begin{equation}\label{0} z = h(x), \quad h(x) = \int dk\, h(k)\exp(ikx).
\end{equation} Assuming that both the variation of the surface relief and its derivative are small ($|h|\ll\lambda$ and $| \partial _{x}h|\ll1$), we may represent the field over the surface in the form of Rayleigh expansion.\cite{Book_waves} The non-zero EM field components in the vacuum half-space can be expanded in terms of the incident SPP plus scattered field as \begin{eqnarray}\label{2} \left\{ \begin{array}{c} E_x(x,z) \\ E_z(x,z) \\ H_y(x,z) \\ \end{array} \right\} &=& \left\{ \begin{array}{c} -k_{zp}/g \\ -k_{p}/g \\ 1 \\ \end{array} \right\} \exp\left[i(k_{p}x - k_{zp}z)\right] \\ \nonumber & & + \int dk \left\{ \begin{array}{c} E_{x}(k) \\ E_{z}(k) \\ H_y(k) \\ \end{array} \right\}\exp\left[i(kx-k_{z}z)\right] . \end{eqnarray} where $ g=\omega/c =2\pi/\lambda$ , $ k_{z}=\sqrt{g^2-k^2}$ and $ k_{zp}=\sqrt{g^2-k_{p}^2} = -\xi g$. The branch of the square root should be chosen such that $ \mathrm{Im}(k_{z})\geq0$, in order to satisfy the radiation condition. Notice that, within the SIBC, the SPP dispersion relation is $k_p= g(1-\xi^2)^{1/2} $, which approximates the exact SPP dispersion relation $k_p= g[\epsilon/(1+\epsilon)]^{1/2} $ at $|\epsilon|\gg 1$. Within the SIBC,\cite{Landau} the EM fields should satisfy \begin{equation}\label{4} \mathbf{E}_{t}(x,z) = \xi \mathbf{H}_{t}(x,z)\times\boldrm{n}(x) , \, \mathrm{at} \, z=h(x) , \end{equation} where $\boldrm{n} = (n_x, 0, n_z)$ is the unitary vector normal to the surface (directed into the metal half-space) and subscript $t$ corresponds to the tangential components of the fields. Notice that the SIBC assumes that the radius of curvature of the surface is much larger than the skin depth. However, as the comparison with results obtained by the modal expansion method will show, the SIBC still represents accurately scattering by defects where this condition is not fulfilled at a small number of points (as occurs for rectangular and triangular indentations). 
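As a quick numerical check of the two dispersion relations quoted above, the following sketch (Python; the silver surface impedance $\xi=-0.277i$ used later in the single-defect calculations is assumed) compares the SIBC and exact SPP wavevectors:

```python
import numpy as np

xi = -0.277j                 # surface impedance of silver at 600 nm
eps = 1.0 / xi**2            # corresponding dielectric constant, eps ~ -13

q_p_sibc = np.sqrt(1.0 - xi**2)           # SIBC dispersion, q_p = k_p / g
q_p_exact = np.sqrt(eps / (1.0 + eps))    # exact SPP dispersion

print(q_p_sibc.real, q_p_exact.real)
print("relative error:", abs(q_p_sibc - q_p_exact) / abs(q_p_exact))
```

For $|\epsilon|\approx 13$ the two values of $q_p$ differ by about $0.3\%$, consistent with the stated applicability condition $|\epsilon|\gg 1$.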
\subsection{The integral equation for the scattered field amplitudes} Expressing the tangential component of the electric field as \begin{equation}\label{r1} \mathbf{E}_{t}(x,z) = \mathbf{E}(x,z) - \boldrm{n}(x)[\mathbf{E}(x,z)\cdot \boldrm{n}(x)], \end{equation} the $x$ component of Eq. \eqref{4} gives \begin{equation}\label{r2} E_{x}(x,z)n_z(x)-E_{z}(x,z)n_x(x) = \xi H_y(x,z), \, \mathrm{at} \, z=h(x) \end{equation} Substituting the fields from Eq.~\eqref{2} into Eq.~\eqref{r2}, and using the Maxwell equations $E_x=-(\imath/g) (\partial/\partial z)H_y$ and $E_z=(\imath/g)(\partial/\partial x)H_y$, provides an integral equation for the amplitudes of the scattered fields, for instance, for the magnetic component, $ H_y(k)$. A much more manageable equation can be obtained for smooth surface inhomogeneities by expanding the boundary condition in the small parameters $| \partial _{x}h|,|h|/\lambda\ll1$. As the surface normal vector has the form \begin{equation}\label{r4} \mathbf{n}=\frac{\mathbf{e}_z-\mathbf{e}_x \partial _{x}h}{\sqrt{1+( \partial _{x}h)^2}}, \end{equation} the expansion of these vector components up to second-order terms gives \begin{eqnarray}\label{r5} n_z &=& 1-\frac{1}{2}( \partial _{x}h)^2 + O(| \partial _{x}h|^4), \nonumber \\ n_x &= &- \partial _{x}h + O(| \partial _{x}h|^3). \end{eqnarray} Following an analogous procedure on the exponents, $\exp(- i k_{zp}h)$, $\exp(- i k_{z}h)$, appearing in the boundary conditions (but in the parameter $|h|/\lambda$), after some straightforward algebra we finally obtain an expanded integral equation, in which as many expansion terms (in powers of $ \partial _{x}h$ and $|h|/\lambda$) as necessary should be retained in order to satisfy energy conservation up to a required accuracy. However, as will be shown later, retaining only the terms linear in $ \partial _{x}h$ and $|h|/\lambda$ already provides accurate energy conservation.
Moreover, we will show that this even occurs for rectangular or triangular defects which are, in principle, unfavorable cases as the shape slopes are not small everywhere. From the physical point of view this means that small-scale spatial components of the fields do not contribute significantly to the scattering. It is convenient to define the dimensionless wave-vector components $q=k/g$ (so that $q_{p}=k_{p}/g=\sqrt{1-\xi^2}$) and $q_z = k_{z} / g$, and the dimensionless Fourier amplitude of the relief defect $ \eta(q)= ig^2 h(k)$. Additionally, the renormalized field amplitudes $r(q)$ are defined by \begin{equation}\label{rq} r(q) = g H_y(k) \, G(q)^{-1}, \quad G(q)=1/(\xi + q_z), \end{equation} where $G(q)$ is the Green's function corresponding to the unperturbed SPP. Then we find that SPP scattering is governed by \begin{equation}\label{p1} r(q) + \int dq'\, U(q,q')G(q') r(q') = -U(q,q_{p}). \end{equation} In this equation $U(q,q')$ is the scattering potential which, in general, can be expressed as a series expansion in the Fourier image of the defect profile $ \eta(q)$ as \begin{equation}\label{p2} U(q,q')= U_1(q,q') + U_2(q,q')+..., \quad U_n(q,q') \sim\eta^{n}. \end{equation} Up to first order in $| \partial _{x} h|$, $|h|/\lambda$, the explicit expression for the potential is\cite{note1} \begin{equation}\label{rp} \begin{split} U(q,q') = [q'(q-q') - q'_zG(q')^{-1}]\eta(q-q'). \end{split} \end{equation} We have to keep in mind that, within the ``local impedance'' approximation, the SPP scattering is also governed by Eq.~\eqref{p1} but, in this case, the scattering potential is $U_{local}(q,q') = \eta(q-q') \, (\epsilon-1)/\epsilon \approx \eta(q-q') $. This different functional form is not irrelevant: as we will show later, the fact that $U(q_p,q_p)=0$ (a property not shared by $U_{local}$) has important consequences in the scattering of SPPs by surface reliefs.
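The property $U(q_p,q_p)=0$ is easy to verify numerically. The following sketch (Python) evaluates the first-order potential of Eq.~\eqref{rp} for an illustrative gaussian profile; the defect sizes and the silver impedance $\xi=-0.277i$ are assumptions chosen for this example:

```python
import numpy as np

xi = -0.277j                            # silver surface impedance at 600 nm
q_p = np.sqrt(1.0 - xi**2)              # dimensionless SPP wavevector

def q_z(q):
    """Normal wavevector component, on the branch with Im(q_z) >= 0."""
    qz = np.sqrt((1.0 + 0j) - np.asarray(q) ** 2)
    return np.where(qz.imag >= 0.0, qz, -qz)

def eta_gauss(q, a_tilde=0.5, delta=0.04 * np.pi):
    """Fourier image of an illustrative gaussian profile (hypothetical width;
    delta = g*w corresponds to w/lambda = 0.02)."""
    return 1j * a_tilde * delta / (2.0 * np.sqrt(np.pi)) * np.exp(-(q * a_tilde) ** 2 / 4.0)

def U1(q, qp):
    """First-order scattering potential; note G(q')^{-1} = xi + q_z(q')."""
    return (qp * (q - qp) - q_z(qp) * (xi + q_z(qp))) * eta_gauss(q - qp)

print(abs(U1(q_p, q_p)))    # vanishes: no scattering of the SPP "into itself"
print(abs(U1(-q_p, q_p)))   # the backscattering amplitude stays finite
```

The forward amplitude vanishes because both $q'(q-q')$ and $q'_z G(q')^{-1}=q_z(q_p)[\xi+q_z(q_p)]$ are zero at $q=q'=q_p$ [recall $q_z(q_p)=-\xi$], while the backscattering amplitude $U(-q_p,q_p)$ remains finite.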
In order to ascertain which is the best approximation, subsequent sections present comparisons between results obtained with the modal expansion and both potentials $U$ and $U_{local}$. Let us anticipate that the comparison favors the scattering potential $U(q,q')$ defined by Eq.~\eqref{rp}. Another point favoring $U(q,q')$ over $U_{local}(q,q')$ is the perfect conductor limit ($\epsilon \rightarrow -\infty$), where the SIBC must transform to $(\partial / \partial n) H_y(x,z) = 0$, evaluated at $z=h(x)$. In this limit, while Eq. \eqref{4} transforms correctly, the use of ``local impedance'' leads [by using Eqs. (5) and (25) of Ref.~\onlinecite{MaradudinIBC}] to the boundary condition $(\partial/ \partial z) H_y(x,z)= g^2 h(x)H_y(x,z)$, evaluated at $z=0$. The root of the problem seems to be that the series expansion in $h(x)$ [Eq.~(2.23) in Ref.~\onlinecite{MaradudinIBC}] diverges as $|\epsilon|\rightarrow\infty$ (note, for instance, that the second-order term is inversely proportional to the skin-depth, $d\sim1/\sqrt{-\epsilon}$). To complete this section, let us point out that the previously outlined formalism can be generalized to the case of defects due to impedance inhomogeneities. In this case, the metal surface is flat, but the surface impedance becomes a function of the $x$-coordinate \begin{equation}\label{i1} \xi(x) =\xi + \tilde{\xi}(x)= \xi + \int dk\, \tilde{\xi}(k)\exp(ikx). \end{equation} After applying the SIBC, which in this case reads $E_{x}(x,z=0) = \xi(x)\, H_y(x,z=0)$, we find that SPP scattering is also controlled by Eq.~\eqref{p1}. The difference is that now the scattering potential is \begin{equation}\label{pot_imp} U(q,q')= \eta(q-q'), \end{equation} where $\eta(q) = g\tilde{\xi}(k)$ is the dimensionless Fourier amplitude of the modulation defect. \subsection{The transmission, reflection and out-of-plane scattering} Once the coefficients $r(q)$ are obtained, the integrals in Eq.~\eqref{2} define the EM field everywhere in vacuum.
However, in order to find the scattering coefficients, only the asymptotic values at large distances from the scattering center are needed. It is then convenient to analytically continue the integrands into the complex plane, taking into account the presence of poles. While the renormalized Fourier image of the field, $r(q)$, is not expected to present poles, the Green's function $G(q)$ definitely does. In order to retain its causal character, an infinitesimally small damping should be included as $ \xi \rightarrow i\mathrm{Im}(\xi) + 0$. As a result, the magnetic field at the surface $z=0$ may be written in the following form \begin{eqnarray*}\label{trs2} H_y(x\rightarrow\infty,0) & = &(1 + \tau) \exp(ik_{p}x ), \\ H_y(x\rightarrow-\infty,0) & = & \exp(ik_{p}x ) + \rho\exp(-ik_{p}x ), \end{eqnarray*} where \begin{equation}\label{trs3} \begin{split} \tau = \frac{2\pi i\xi}{q_{p}}r(q_{p}), \quad \rho = \frac{2\pi i\xi}{q_{p}}r(-q_{p}). \end{split} \end{equation} The energy flux scattered out of the metal-vacuum interface is computed by integrating the Poynting vector over a rectangular parallelepiped defined by the plane $z=0$ and walls placed in the far-field parallel to the planes $Ox$, $Oy$, and $Oz$. Then, taking into account that the power per unit length of the incoming SPP is $q_{p}/(4|\xi|)$, the energy conservation law takes the following form \begin{equation}\label{trs4} \begin{split} 1-S-T-R=0, \end{split} \end{equation} where the reflection $R$, transmission $T$, and out-of-plane $S$ scattering coefficients are \begin{equation}\label{trs5} \begin{split} R = |\rho|^2, \quad T=|1+\tau|^2, \\ S= \frac{4\pi|\xi|}{q_{p}}\int\limits_{|q|<1} d q\cdot q_z|G(q)r(q)|^2.
\end{split} \end{equation} The total out-of-plane scattered energy flux can then be written in the following form \begin{eqnarray}\label{trs6} S & =& \int\limits_{-\pi/2}^{\pi/2} d \theta D(\theta), \nonumber \\ D(\theta)& =& \frac{4\pi|\xi|}{q_{p}}\cos^2\theta\left|r[q(\theta)]G[q(\theta)]\right|^2 \nonumber \\ &=&\frac{4\pi|\xi|\cos^2\theta}{q_{p}\left(|\xi|^2+ \cos^2\theta\right)}\left|r[q(\theta)]\right|^2, \end{eqnarray} where $D(\theta)$ is the differential reflection coefficient (DRC), which provides the angular dependence of the radiated energy. In this expression the angle $\theta$ is measured from the normal, while $q(\theta)=\sin\theta$ and $q_z(\theta) = \cos\theta$. To summarize this section, the set of equations \eqref{p1}, \eqref{trs5} and \eqref{trs6} define the scattering coefficients for an SPP impinging normally onto an arbitrary set of perturbations (with an axis of translational symmetry) on a flat metal surface within the Rayleigh + SIBC approximations. Such perturbations can be either variations in the surface relief (indentations or protrusions) or variations in the surface impedance. \subsection{Perturbative approach: qualitative description of the results} In some cases, it is useful to estimate the solution of Eq.~\eqref{p1} using a perturbative approach. Taking into account the representation of the scattering potential in \eqref{p2}, we seek the solution of Eq.~\eqref{p1} in the form of a series expansion in $\eta$:\footnote{Recall though that $\eta(q)$ must be a small parameter: even for small defect depths the approximation breaks down for very elongated defects.} \begin{equation}\label{p3} r(q)= r_1(q) + r_2(q) +..., \quad r_n(q)\sim\eta^{n}. \end{equation} The first-order Born approximation (FOBA) gives us \begin{equation}\label{p4} r_1(q) =-U_1(q,q_{p})\sim \eta(q-q_{p}). \end{equation} The second-order term is \begin{equation}\label{p5} r_2(q) =-U_2(q,q_{p}) +\int dq'\, U_1(q,q') \, G(q') \, U_1(q',q_{p}).
\end{equation} As usual, FOBA describes the scattering of the incident wave with in-plane wavevector, $q_{p}$, into a plane wave with the wavevector $q$, the momentum difference being provided by a single interaction with the defect. Therefore the scattering amplitude for this process is proportional to $\eta(q-q_{p})$. This explains the structure of the transmission coefficient. As Eq.~\eqref{trs5} shows, $T$ has contributions from both the incident SPP (with unit amplitude) and from the scattering of the incident plasmon ``into itself'' [the term proportional to $\eta(0)$]. Analogously, in FOBA the amplitude of the reflection is proportional to $\eta(-2q_{p})$ and represents the scattering of the incoming SPP into the backward-propagating one (having the wavevector $-q_p$). \section{\label{sec:setup2} SPP scattering by a single defect.} In this section we consider the scattering properties of single 1D defects of different shapes. The defect width is $a$ and the defect depth is $\mathrm{w}$ (see Fig.~2). Notice that, within the chosen coordinate system, indentations are characterized by $\mathrm{w}>0$, while protrusions have $\mathrm{w}<0$. The calculation is performed by solving numerically the integral equation \eqref{p1}, after applying an appropriate discretization in $q$-space. As the results are in good agreement with those obtained within the first-order Born approximation, analytic expressions for the scattering coefficients are provided in several cases. \subsection{Scattering coefficients} Figure~\ref{RTS} presents the results of numerical calculations for $R$, $T$ and $S$ in several cases. In (a) the comparison of these quantities for single indentations of different shapes is presented. In the case of a rectangular shape, $R$ behaves periodically with respect to $a/\lambda$, while in the case of a gaussian shape $R$ possesses only one maximum.
Analogously, the transmittance presents oscillations in $a/\lambda$, in contrast with the single maximum that appears for the case of gaussian shape. In the same figure we also present the calculations for the triangular shape. These results show that even shallow defects present very rich shape and spectral dependences. \begin{figure*}[t] \begin{center} \epsfig{file=RTS.eps,width=17cm} \end{center} \caption{\label{RTS} (Color online) The dependence of the transmittance, $T$, reflectance, $R$, and out-of-plane scattering, $S$, upon the dimensionless width of defects of different shapes on a silver surface (at $600\,$nm). The defect depth is $\mathrm{w}/\lambda = 0.02$. In (a) solid, dashed and dash-dotted curves correspond to the rectangular, gaussian and triangular indentation shapes. Round markers correspond to the calculations using the modal expansion. In (b) the dashed (solid) curve corresponds to a gaussian indentation (protrusion); rectangular markers correspond to the Born approximation.} \end{figure*} Figure~\ref{RTS}(b) shows the scattering coefficients for both indentations and protrusions of gaussian shape. As can be seen from this figure, the scattering properties of indentations and protrusions are very similar for shallow defects. Nevertheless, for the considered parameters, protrusions present a slightly larger cross section, resulting in larger values for both $R$ and $S$ and smaller ones for $T$. We stress that, although the integral equations were derived assuming that the surface was smooth, energy conservation was fulfilled with an accuracy better than 1\% of the minimum value of $R$, $T$ and $S$, even for sharp defects. Additionally, we have checked that retaining second-order terms in $\eta(q)$ in the scattering potential leaves the results virtually unaltered. Moreover, solving the integral equation with the ``exact'' (not expanded) right-hand side does not produce any significant variation in the calculated scattering coefficients.
In order to further validate the above-mentioned approximations, leading to the results presented in Fig.~\ref{RTS}, additional calculations were carried out with the modal expansion technique\cite{Luis_mirrors_05}. While also using the SIBC, this technique is applicable for {\it indentations} of any depth, going beyond the Rayleigh expansion. The comparison is presented in Fig.~\ref{RTS} (a), for the case of a rectangular indentation. As can be seen, the agreement is very good, the difference being attributed mostly to the fact that, in the modal expansion, ideal metal boundary conditions were used for the vertical ``walls'' of the rectangular indentation. We note in passing that using the ``local impedance'' scattering potential gives very different results: the values of $T$ and $S$ differ by more than an order of magnitude from the ones presented in Figs. \ref{RTS} and \ref{DRC} (we performed the calculations with the potential presented in Ref.~\onlinecite{MaradudinPRB99} for the same set of parameters). We find that FOBA provides an accurate description for the behavior of the calculated scattering coefficients. This is illustrated in Fig.~\ref{RTS} (b), where the FOBA results for the gaussian defect are compared with the full solution of Eq.~\eqref{p1}. Notice that FOBA predicts the same reflectance for both protrusions and indentations, since $R\sim \mathrm{w}^2$. Further analysis shows that the second-order approximation already accounts for the small differences between the scattering coefficients of indentations and protrusions found in Fig.~\ref{RTS} (b). This occurs because, in the second-order approximation, $R\sim |\mathrm{w} + \mathrm{w}^2\cdot \psi(a)|^2$, where the complex function $\psi(a)$ depends upon both the shape and the longitudinal size of the defect. Moreover, FOBA enables us to find analytic expressions for the reflectance of SPPs by a single defect quite easily, in terms of $|\eta(-2q_{p})|^2$.
For instance, for a rectangular or a gaussian defect of width $a$, we obtain \begin{equation} \eta(q)^{Rect}=i\Delta\frac{\sin(q \tilde{a})}{\pi q},\quad \eta(q)^{Gauss}=\frac{i\tilde{a}\Delta}{2\sqrt{\pi }}e^{-q^2\tilde{a}^2/4}, \end{equation} where $\tilde{a} = a\pi/\lambda$ and $\Delta = g\mathrm{w}$ for relief defects (for impedance defects $\Delta$ corresponds to the maximum value of $|\tilde{\xi}|$). Therefore, the reflectance of a single relief defect can be expressed as \begin{eqnarray}\label{R_FOBA} R^{Gauss} & = & 4\pi|\xi|^2 q_p^2 \Delta^2\tilde{a}^2e^{-2q_p^2\tilde{a}^2}, \nonumber\\ & & \\ R^{Rect} & = & 16|\xi|^2 q_{p}^2\Delta^2\tilde{a}^2 \mathrm{sinc}^2\left(2 q_{p} \tilde{a}\right). \nonumber \end{eqnarray} Thus, for rectangular defects $R$ behaves periodically as a function of $a/\lambda$, possessing minima at $a/\lambda = n/(2q_{p})$, $n=1,2,\ldots$; while the reflectance for defects of gaussian shape presents only one maximum, at $a/\lambda = 1/(\sqrt{2}\pi q_{p})$. This is in excellent agreement with the strict numerical solution of Eq.~\eqref{p1}, see Fig.~\ref{RTS}(a), (b). However, as the transmittance in FOBA does not depend upon the width of the defect, higher-order terms in the Born series are required in order to reproduce this dependence appropriately. \subsection{Out-of-plane radiation due to a single defect.} \begin{figure}[h!] \begin{center} \epsfig{file=DRC.eps,width=\columnwidth} \end{center} \caption{\label{DRC} (Color online) DRC, $D(\theta)$, for SPP scattering along a silver surface ($\lambda=600$~nm), for different defect types and widths. (a) gaussian indentation, (b) rectangular indentation and (c) impedance step (the metal is considered ideal inside the defect). Dash-double-dotted (black), dashed (red), dotted (green), dash-dotted (blue), solid (magenta) curves correspond to defect widths $a/\lambda= 0.1$, $0.25$, $0.5$, $0.6$, $0.8$, respectively. The depth of the indentation is $\mathrm{w}/\lambda =0.02$.
Squares in (a) correspond to calculations within the first-order Born approximation. Round markers in (b) correspond to the calculations using modal expansion.} \end{figure} \begin{figure*}[] \begin{center} \epsfig{file=FFT.eps,width=16cm} \end{center} \caption{\label{FFT}(Color online) Dependence of the modulus of the renormalized Fourier image of the scattered field, $|r(q)|$, upon the dimensionless wave-vector, $q$. Left panels show the calculation for gaussian indentations [$a/\lambda = 0.8$ in panel (a), $a/\lambda = 0.1$ in panel (b)], while right panels correspond to rectangular indentations [$a/\lambda = 0.8$ in panel (c), $a/\lambda = 0.1$ in panel (d)]. The depth of the indentation is $\mathrm{w}/\lambda =0.02$. In each panel the squares represent $|r(q)|$ calculated within the first-order Born approximation, whereas dash-double-dotted (black) and solid (magenta) lines stand for the strict numerical solution; the green dashed curve renders the modulus of the $q_p$-shifted Fourier transform of the defect, $|\eta(q-q_{p})|$. Notice the dips of all curves at $q=q_{p}$ due to the presence in the potential of the ``cutting'' function $|q-q_{p}|$. The blue shaded areas denote the regions in $q$ corresponding to out-of-plane radiative modes. } \end{figure*} In this section we analyze the angular distribution of the energy radiated out of the plane after scattering. This behavior is represented in Fig.~\ref{DRC}, which renders the radiation diagrams for SPP scattering by gaussian indentations [panel (a)] and rectangular indentations [panel (b)], for different defect widths. The surface impedance is that of silver at $\lambda=600$~nm ($\xi=-0.277i$). The case of an impedance defect (with zero impedance, as for perfect conductors) is also shown in Fig.~\ref{DRC}(c). For comparison, the calculations for rectangular indentations were also performed with the modal expansion method [circles in Fig.~\ref{DRC}(b)].
Again, the agreement between the two methods is quite remarkable. Fig.~\ref{DRC} shows that the radiation diagrams present a non-trivial dependence on defect shape. For impedance defects, the angle at which maximum out-of-plane radiation occurs always points along the direction of propagation of the incident SPP. However, for narrow gaussian relief defects ($a<\lambda/2$), the maximum in the DRC occurs at negative angles. In this case, as $a$ increases, the angle of maximal radiation shifts from negative angles to positive ones. This behavior of gaussian defects has also been reported for surface plasmons in a {\it thin metal film}, excited in a Kretschmann configuration\cite{MexicansPRB06}. For rectangular defects, the DRC behavior is more complex. At small $a/\lambda$ the DRC presents one emission lobe at a negative angle; as $a$ increases, the emission lobe moves to the normal direction and, after a transition point (at approximately $a/\lambda = 1/2$), the main lobe moves to positive angles, while a second emission lobe appears at a negative angle. Finally, for $a/\lambda \approx 1$ the amplitudes of the two lobes are comparable, see Fig.~\ref{DRC}. Within FOBA, the DRC can be computed analytically. From Eq.~\eqref{p4} and Eq.~\eqref{trs6} we obtain \begin{equation}\label{d2} \begin{split} D_1(\theta)=\frac{4\pi q_{p}|\xi|(\sin\theta-q_{p})^2\cos^2\theta}{ |\xi|^2+\cos^2\theta }\left| \eta(\sin\theta-q_{p})\right|^2. \end{split} \end{equation} As illustrated in Fig.~\ref{DRC}(a) for the case of gaussian indentations, FOBA results (square symbols) provide an excellent approximation to the full solution. This allows us to connect the DRC with the potentials corresponding to the SPP scattering within such approximation.
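As a quick numerical check (not part of the original calculations), the FOBA expression for $D_1(\theta)$ above can be evaluated directly. The sketch below assumes $|\xi| = 0.277$ (silver at 600~nm, as in the text) and an assumed value $q_p = 1.04$ for the normalized SPP wave-vector (the text only states $q_p \approx 1$); it reproduces the shift of the emission maximum of a gaussian defect from negative to positive angles as the defect widens.

```python
import numpy as np

# Evaluates the FOBA differential reflection coefficient D_1(theta) for a
# gaussian relief defect. XI and Q_P are assumed values (see lead-in text).
XI = 0.277   # |xi|, modulus of the silver surface impedance at 600 nm
Q_P = 1.04   # assumed normalized SPP wave-vector (text: q_p ~ 1)

def eta_gauss_sq(q, a_over_lambda, delta=0.02):
    """|eta(q)|^2 for a gaussian defect, with a~ = a*pi/lambda."""
    at = np.pi * a_over_lambda
    return (at * delta / (2.0 * np.sqrt(np.pi)))**2 * np.exp(-q**2 * at**2 / 2.0)

def drc(theta, a_over_lambda):
    """FOBA DRC D_1(theta) for a gaussian defect."""
    s = np.sin(theta)
    num = 4.0 * np.pi * Q_P * XI * (s - Q_P)**2 * np.cos(theta)**2
    den = XI**2 + np.cos(theta)**2
    return num / den * eta_gauss_sq(s - Q_P, a_over_lambda)

def angle_of_max(a_over_lambda):
    """Angle (degrees) at which the out-of-plane emission is maximal."""
    theta = np.linspace(-np.pi / 2, np.pi / 2, 20001)
    return np.degrees(theta[np.argmax(drc(theta, a_over_lambda))])
```

With these assumed parameters, a narrow gaussian defect ($a/\lambda = 0.1$) radiates predominantly at a negative (backward) angle, whereas a wide one ($a/\lambda = 0.8$) radiates at a positive (forward) angle, in line with the behavior described above.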
For the case of impedance defects, Eq.~\eqref{pot_imp} and Eq.~\eqref{p1} imply that the renormalized spectrum of the field, $r(q)$, is proportional to $\eta(q-q_p)$, which is the Fourier image of the defect shifted by the SPP wave-vector\footnote{Evidently, the true spectrum of the field, $H_y(k)$, is also strongly influenced by the SPP Green's function, see Eq.~\eqref{rq}}. Conversely, for relief defects, $r(q)$ is proportional not only to $\eta(q-q_{p})$ but also to $q-q_{p}$. This is illustrated in Fig.~\ref{FFT}, for gaussian (left panels) and rectangular (right panels) defects at $a/\lambda=0.8$ (top panels) and $a/\lambda=0.1$ (bottom panels). In all panels the modulus of the shifted Fourier image of the defect, $|\eta(q-q_p)|$, is represented by the green dashed lines. Fig.~\ref{FFT} also shows the corresponding $|r(q)|$ for relief defects, obtained by solving the integral equation \eqref{p1} either strictly or within the FOBA approximation. As expected, the narrower the defect, the more extended $|\eta(q-q_p)|$ (and therefore $|r(q)|$) in $q$-space. It is also clear that $|\eta(q-q_p)|$ is a smooth function for gaussian defects, while it presents oscillations for rectangular ones. These two properties are extremely useful in order to understand the behavior of the DRC. Notice that $D(\theta)$ essentially follows the dependence of $|r(q)|^2$ (see Eq.~\eqref{trs6}) within the range $-1<q<1$, i.e., the region corresponding to radiative modes, represented by the blue shaded areas in Fig.~\ref{FFT}. For impedance defects $|\eta(q-q_p)|$ presents its maximum at $q>0$, so the maximum out-of-plane emission always occurs in the forward direction, exactly as in the model of the \emph{local} surface impedance approximation\cite{MaradudinIBC,SanchesAppl.Lett.98,MaradudinPRB99}. However, for relief defects, forward emission is further inhibited by the presence of the ``cutting function'' $q-q_{p}$ in the expression for $r(q)$.
For narrow defects this results in $|r(q)|$ having more weight in the $q<0$ region, so the emission is directed backwards. For wide gaussian defects ($a> 0.5\lambda$) the width in $q$-space of $|\eta(q)|$ is smaller than $q_p$ (recall that $q_p \approx 1$) and the emission occurs in the forward direction. For rectangular defects, as the defect widens, more and more oscillations in $|\eta(q-q_p)|$ enter the $-1<q<1$ region, leading to different emission lobes, as shown in Fig.~\ref{DRC}(b). The angles of maximum out-of-plane emission can be obtained in FOBA by solving $\partial_{\theta}D_1(\theta)=0$. While in the case of a rectangular shape this equation is transcendental with respect to the variables $\theta$ and $a/\lambda$, it may be solved explicitly for gaussian shapes. Thus, we find that the maximum condition for gaussian reliefs is \begin{equation}\label{d3} \frac{a}{\lambda}(\theta) = \sqrt{\frac{2}{\pi^2(\sin\theta-q_{p})^2}+\varphi(\theta)}, \end{equation} where \begin{equation} \varphi(\theta) = \frac{2|\xi|^2\tan\theta}{\pi^2\left(|\xi|^2+\cos^2\theta\right)(q_{p}-\sin\theta)\cos\theta}, \end{equation} while in the case of an impedance defect this condition is much simpler, $a/\lambda\,(\theta)=\sqrt{\varphi(\theta)}$. These dependencies are presented in Fig.~\ref{curve_max}, reflecting that while impedance defects emit preferentially in the forward direction, for relief defects the emission maxima occur in the backward direction up to $a/\lambda \approx 0.42$. \begin{figure}[!h] \begin{center} \epsfig{file=curve.eps,width=\columnwidth} \end{center} \caption{\label{curve_max}(Color online) Angle of maximum out-of-plane radiation as a function of $a/\lambda$ for gaussian defects.
The case of relief defects is represented by the red continuous curve, while the blue broken curve corresponds to impedance defects.} \end{figure} \section{\label{concl}Conclusions} We have developed an approximate method (based on the Rayleigh expansion and surface impedance boundary conditions) for the study of SPP scattering at arbitrary 1D inhomogeneities, and applied it to the case of single defects. We have compared the numerical solution of the integral equation in $k$-space with the calculations made using the modal expansion technique. The excellent agreement between them (together with the fulfillment of the energy conservation law) indicates that the theoretical formulation is correct and much more accurate than previous approximations based on {\it local} surface impedances. The case of a single scattering center is analyzed in detail. We have compared the scattering by impedance and relief inhomogeneities of different shapes, considering both protrusions and indentations. We have shown that the transmission, reflection and out-of-plane scattering are defined essentially by the spectral properties of the inhomogeneity. We have shown that out-of-plane radiation after scattering by impedance inhomogeneities is always directed in the forward direction (with respect to the SPP propagation). On the contrary, in the case of relief defects, the radiated energy may be directed in the backward or forward direction (or both, in the case of rectangular defects), depending on the defect width and shape. Further theoretical work will be aimed at the scattering of SPPs by multiple scatterers and at the generalization of this approach to $2D$ inhomogeneities. \section{\label{ackn}Acknowledgments} We are grateful to Prof. A.V. Kats and Dr. J. A. S\'{a}nchez-Gil for helpful discussions and criticism.
The authors acknowledge financial support from the European Network of Excellence Plasmo-Nano-Devices (FP6-2002-IST-1-507879) and the STREP ``Surface Plasmon Photonics'' (FP6-NMP4-CT2003-505699).
\section{Introduction} \label{sec:intro} Continuum robots (CRs) are inspired by biological appendages such as octopus arms, elephant trunks, and monkey tails \cite{walker2013continuous}. These robots are made of soft, elastic, compliant, and lightweight materials together with some rigid elements \cite{nazari2019forward}, which lead to the ability to manipulate geometrically complex objects and work in unstructured and confined environments \cite{camarillo2008mechanics}. They can move smoothly by elongation, contraction, and bending. Compared to their rigid counterparts, CRs are better for manipulating deformable objects, working in confined and unstructured spaces, and safely operating close to humans \cite{qi2017design}. These compliant robots can be actuated intrinsically, by employing pneumatic and hydraulic actuators, or extrinsically, by utilizing concentric tubes and tendons \cite{amanov2019tendon}. Concentric-tube CRs \cite{webster2006toward,gilbert2016concentric} and tendon-driven CRs typically have small diameter-to-length ratios \cite{amanov2019tendon}. The small ratio enables them to reach highly confined spaces as well as technically inspect unstructured environments \cite{burgner2015continuum}. \begin{figure} \begin{center} \begin{subfigure}[b]{.95\columnwidth} \includegraphics[width=\linewidth]{isoNannot.jpg} \caption{} \label{fig:CAD} \end{subfigure} % \begin{subfigure}[b]{.9\columnwidth} \includegraphics[width=\linewidth,height=5cm]{crRealN.jpg} \caption{} \label{fig:TDCR} \end{subfigure} % \caption{(a) CAD model and (b) prototype of the tendon-driven continuum robot. A sample of the robot motion can be seen in the supplementary video.} \end{center} \end{figure} Affected by the intrinsic compliance and the high number of degrees of freedom (DOFs), the control of CRs has been a challenge since their emergence.
A set of control methods are based on the mathematical modeling of CRs \cite{chikhaoui2018control}, whereas another set focuses on model-free control approaches \cite{george2018control,da2020challenges}. The precise control of CRs always needs feedback on the state of the robot, including the position and velocity of the robot tip and also the shape of the robot. The feedback is often provided by physical sensors attached to the robot tip. This sensing method is the most common one in non-medical tasks, but medical scenarios limit its use due to health-related issues, including the biocompatibility and sterilizability of medical CRs, as well as the need for CRs that are water-resistant and insensitive to temperature variations \cite{nazari2021image}. Therefore, non-contact sensing modalities such as vision-based methods find an important place in medical interventions. Further, vision-based methods can be easily implemented in medical applications where the imaging system is available in the field \cite{fallah2020depth}. Vision-based control, also called visual servoing (VS), of CRs is a challenging problem among robotics researchers, where the robot motion is controlled based on real-time visual feedback provided by one or more imaging devices. The error between the current visual information and the desired one, initially generated based on the desired scene, is calculated and fed to the controller to generate the control command for the robot's actuators. Depending on where the imaging device is positioned in the control loop to observe the scene of interest, VS has two paradigms called eye-in-hand (EIH) and eye-to-hand (ETH) \cite{hutchinson1996tutorial}. In the former, the camera is mounted on the end-effector to observe the scene, whereas the latter requires the camera to be deployed in the environment on a fixed base to observe the scene of interest, including the robot.
Early methods of extracting visual information from the captured images were focused on geometric features such as points, lines, corners, edges, ridges, and blobs. Relying on these features, two basic approaches have been proposed and employed by researchers, called image-based visual servoing (IBVS) and position-based visual servoing (PBVS) \cite{hutchinson1996tutorial}. Wang \textit{et al.} \cite{wang2013visual,wang2016visual} selected the IBVS-EIH approach for kinematic control of a cable-driven soft robot. They developed an adaptive PD tracking controller by knowing the intrinsic and extrinsic camera parameters, but they did not consider visual sensing accuracy in the modeling. Model-less optimal feedback control in the IBVS-ETH scheme was used by Yip \textit{et al.} \cite{yip2014model} to control a planar tendon-driven CR in the task space. They estimated the image Jacobian using backward differencing. Zhang \textit{et al.} \cite{zhang2017visual} modeled the statics of a cable-driven parallel soft robot and implemented an open-loop/closed-loop switching controller in an IBVS-ETH scheme. The open-loop controller was developed based on finite element modeling in the simulation environment, which is computationally inefficient for real-time control. Despite having several advantages, IBVS and PBVS have some disadvantages and limitations \cite{hutchinson1996tutorial,janabi2010comparison}. Real-time pose estimation in PBVS schemes is always a challenge. The other significant challenge in the feature-based methods is that visual features must be extracted through a process that requires camera pose measurement, robust feature extraction, feature matching, and real-time tracking, all of which are complex and computationally heavy \cite{marchand2005feature}.
The success of feature-based visual servoing, in fact, depends on the tracking success and performance, \textit{i.e.}, the speed, accuracy, robustness, and redundancy of the visual features \cite{ourak2019direct}. Using non-geometric VS, also called direct visual servoing (DVS), is an alternative that eliminates the feature tracking requirement. Photometric VS \cite{collewet2011photometric}, as a DVS method, is a solution to the problem in a 2-dimensional (2D) scenario. It exploits the full image as a whole, uses the luminance of all pixels in the image, and avoids extracting geometric image features. Due to the redundancy of visual information in the control loop, DVS schemes are more accurate and robust than geometric feature-based VS methods \cite{duflot2019wavelet}. Although these methods eliminate the feature extraction requirement, their convergence is inferior to that of the classical VS methods \cite{bateux2017visual}. The reason is that there are nonlinear relations between the image information and the robot's 3-dimensional (3D) motion. One way of tackling this problem, proposed by Bateux \textit{et al.} \cite{bateux2017visual,bateux2018training}, is to take advantage of deep neural networks for image feature extraction and pose estimation. In this method, a convolutional neural network (CNN) was trained with images captured from different scenes of an intended object and the corresponding pose of each image. The trained network was then employed to estimate the relative pose of the object seen by the camera. This enabled a highly precise, robust, and real-time 6-DOF PBVS of a rigid robotic manipulator. As an advantage, the proposed method does not necessarily require creating a new CNN, because the user can re-purpose an existing pre-trained network tailored for a task close enough to the intended scenario. Also, the algorithm performed well on a scene that had never been seen at the training step \cite{bateux2018training}.
This advantage enables the user to work on the generalization of the proposed CNN-based PBVS to a range of indoor and outdoor scenes with different illumination conditions. Felton \textit{et al.} \cite{felton2021training} proposed using a deep network for end-to-end DVS, where the velocity of a camera mounted on a robot tip is predicted using a Siamese network. For training, they used a selected subset of the ImageNet dataset and evaluated the algorithm's performance on a 6-DOF rigid robot. Despite introducing several advantages, the approaches presented in \cite{bateux2017visual,bateux2018training,felton2021training} were not implemented on CRs, where tackling the control issue is a significant challenge. This research gap motivated us to extend the benefits of their works, propose an end-to-end control approach, and evaluate the algorithm's performance and robustness experimentally. The objective of this paper is to develop the first deep learning-based kinematic control of continuum robots utilizing DVS methods and to implement it in joint space. The deep network is used to extract the current joint space variables of the CR corresponding to the captured images, and the robot is then controlled by a proportional controller. The contributions of our work are: \begin{itemize} \item Developing a deep learning-based direct VS algorithm. The deep network is structured by repurposing a pre-trained VGG-16 network. The network is re-trained using a self-provided dataset (generated by the Blender software), which includes variations of only one target image with normal conditions, illumination changes, and occlusions. \item Conducting extensive simulation studies in Blender in normal and perturbed conditions and then evaluating the controller's performance on a real robot. The algorithm is experimentally validated in a variety of scenarios including normal operation of the robot within the full range of its workspace.
The robustness is also analyzed against variations of the lighting in the environment and partial occlusion. \end{itemize} The remainder of the paper is organized as follows. Section \ref{sec:method} presents the methodology of the research by stating the control law, describing the deep network architecture, and introducing the simulation environment and the training steps. The intended scenarios are first simulated and then experimentally validated in Section \ref{sec:exp}. The results of the normal scenarios plus the robustness analyses are also presented in this section. Section \ref{sec:concl}, finally, concludes the article and briefly proposes future directions. \section{Methodology} \label{sec:method} There exist many challenges in implementing VS on CRs. Unlike rigid robots with stable designs and well-defined kinematic models, the flexibility and soft nature of CRs make them susceptible to various modeling inaccuracies and extremely sensitive to noise and disturbance. Therefore, regressing the desired camera velocity is not sufficient for accurate control of CRs. Examples of uncertainties include extreme hysteresis, backlash, dead zone, and high sensitivity to disturbance. To combat these uncertainties, we propose a joint space VS scheme to localize the end-effector at a target image frame. This is achieved by implementing an end-to-end deep learning model that directly computes the desired tendon velocities from camera images. In order to robustly train the model, a simulation environment is created to generate a robust training dataset. Our methodology is based on employing a deep learning network that has already been trained, repurposing it by changing the last layer and tailoring it for the desired task. Using the target image and the image frames captured in real time by a camera, the network produces the raw velocity commands that can direct the robot to the desired target after a subsequent scaling by a proportional controller.
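The loop just described can be condensed into a short sketch. This is an illustrative skeleton only: the gain value and the names `control_step` and `dummy_model` are hypothetical placeholders standing in for the trained network and the hardware interfaces, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch of the end-to-end loop: a trained network maps
# (target image I0, current image I) to raw tendon displacements, and a
# proportional gain -lambda converts them into velocity commands.
# `dummy_model` and LAMBDA are hypothetical placeholders.

LAMBDA = 0.5  # assumed control gain (value not taken from the text)

def control_step(i0, i, model, lam=LAMBDA):
    """One iteration of the proportional law v = -lambda * f(I0, I)."""
    dq = np.asarray(model(i0, i))  # raw network output (tendon displacements)
    return -lam * dq               # velocity command for the tendon drivers

def dummy_model(i0, i):
    """Stand-in for the trained CNN: output shrinks as the images agree."""
    diff = np.mean(i - i0)
    return np.array([diff, -diff])
```

With the stand-in model, the velocity command vanishes once the current frame matches the target, mirroring the convergence criterion of the real controller.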
The intended network is trained using a user-generated dataset of RGB images produced utilizing the Blender software. The performance of the proposed method is evaluated through extensive simulation and experimental studies in normal and changing conditions to show that the algorithm is robust against lighting changes and partially occluded environments. \subsection{Prototype Design and Development} As shown in Fig. \ref{fig:CAD}, the prototype CR has one section comprised of a flexible backbone made of spring steel, four braided Kevlar lines as tendons, and spacer disks to route the tendons. The tendons were placed around the backbone with an offset of $1.8\, mm$ and an angular distance of $90^{\circ}$ from each other. The tendons were routed toward the robot tip by eight equally spaced spacer disks, which were 3D printed using PLA filament. The disks were solidly attached to the backbone using steel-reinforced epoxy adhesive with a strength of $3960\, psi$. A custom fixture was 3D printed to rigidly mount a USB camera on the robot tip in an EIH mode. The fixture was screwed onto the last spacer disk such that it guarantees the minimum space between the camera and the robot tip while having no contact between the fixture and the camera's electronic board. The tendons were actuated using Dynamixel AX-12A servomotors (Robotis, CA, USA). Table \ref{tab:CR_design} shows the prototype specifications.
\begin{table} \caption{Prototype specifications.} \label{tab:CR_design} \begin{center} \begin{tabularx}{\columnwidth}{l l l} \hline \textbf{Prototype's Part} & \textbf{Specification} & \textbf{Value} \\ \hline \multirow{4}{*}{Backbone} & Density ($\rho$) & $7800\, Kg/m^3$ \\ & Young's modulus ($E$) & $207\, GPa$ \\ & Length ($L$) & $0.4\, m$ \\ & Radius ($r$) & $0.9\, mm$ \\ \hline Tendon & Breaking strength & $31.75\, Kg$ \\ \hline \multirow{3}{*}{USB camera} & Frame rate & $30\, fps$ \\ & Resolution & $640\times480\, pixels $ \\ & Field of view (FOV) & $19^{\circ}$ \\ \hline \end{tabularx} \end{center} \end{table} \subsection{Control Law} \begin{figure} \begin{center} \includegraphics[width=.85\columnwidth]{ctrl.PNG} \caption{Block diagram of the proposed visual servo controller comprised of a camera in an EIH mode, a tendon-driven CR, and a CNN model that takes the target image, $I_0$, and current image frame, $I$, and outputs desired tendon displacements, $\Delta q$, to be scaled by a control gain, $-\lambda$, and generate velocity command, $V$, for the robot drivers.} \label{fig:ctrlDiagram} \end{center} \end{figure} Classical VS approaches require complex Jacobian mapping, which is difficult to derive. Therefore, we aim to replace the entire mapping from image space to joint space with a learned model. Given an input image, the proposed model will output candidate velocities of each tendon such that the error between the current image frame, $I$, and the desired image frame, $I_0$, will be minimized to zero. As shown in Fig. \ref{fig:ctrlDiagram}, the output is multiplied by a gain, $-\lambda$, and fed into the CR. The gain adjusts the speed of the CR during experimentation. The larger the gain, the more aggressive the controller is in minimizing the error. 
The control law is then stated as \begin{align} v = -\lambda\,\, {f(I_0,I)} \label{eq:ctrlLaw} \end{align} where $v$ is the tendon velocity in $mm/s$, $\lambda$ is the control gain, $I_0$ is the target image, $I$ is the current image, and $f(\cdot)$ is a function of the target and current images, implemented as a modified VGG-16 network that outputs $\Delta q$. \subsection{Neural Network Design} In order to design an efficient neural network for our purpose, we utilized a VGG-16 backbone pre-trained on ImageNet to facilitate transfer learning \cite{vgg}. Since the backbone has been trained on natural images, only the later layers of the network need to be trained to regress the desired tendon velocities. For our model, the first 10 layers were frozen to speed up the training, as they already contain low-level features from natural images. We modified this network by dropping the last dense layer and replacing it with a dense layer with two outputs corresponding to $q_1$ and $q_2$. The activation function was set to linear (see Fig. \ref{fig:Model}). Various alternatives were considered to challenge this proposed model. Firstly, different backbones were considered, particularly ResNet50 and ResNet101. Secondly, we considered adding multiple dense layers to improve the nonlinear fitting of the CR model. However, the added dense layers resulted in the loss of spatial awareness, which is key to our proposed model \cite{posenet}. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{model.PNG} \caption{Architecture of the modified VGG-16 model.} \label{fig:Model} \end{center} \end{figure} \subsection{Simulation Environment} Training in simulation provides various advantages over the real world. Not only is it much quicker for acquiring the data, but it also offers the opportunity to structure the environment to account for various noises and uncertainties, enabling the model to be more robust.
On the contrary, attempting to learn the dynamics and uncertainties of the robot remains challenging in simulated environments. We resolved this problem by utilizing Blender, an open-source 3D computer graphics package, and creating an environment that models the pose of the end-effector given a tendon displacement value, $q$. This was achieved by using the forward kinematics of the robot to place and orient the virtual camera in the simulated environment. Since the robot is a single-section 2-DOF CR, we modeled the kinematics based on the constant curvature assumption, as presented by Rao \textit{et al.} \cite{rao2021model}. While this approach ignores the dynamic effects of the CR, we propose that implementing robust vision control allows the feedback loop to correct for most of the aforementioned challenges of the CR. Shadowing and occlusion were included to provide this robustness. Shadowing was achieved by adjusting the light source in the environment, whereas occlusion was achieved by placing black rectangles of random positions and dimensions within the image. Fig. \ref{fig:sampleImages} shows some samples from the simulation. \begin{figure} \begin{center} \begin{subfigure}[b]{0.24\columnwidth} \includegraphics[width=\linewidth]{sample_imgs/orig.png} \caption{} \end{subfigure} % \begin{subfigure}[b]{0.24\columnwidth} \includegraphics[width=\linewidth]{sample_imgs/trans.png} \caption{} \end{subfigure} % \begin{subfigure}[b]{0.24\columnwidth} \includegraphics[width=\linewidth]{sample_imgs/lighting.png} \caption{} \end{subfigure} % \begin{subfigure}[b]{0.24\columnwidth} \includegraphics[width=\linewidth]{sample_imgs/occlude.png} \caption{} \end{subfigure} % \caption{Typical set of images used in simulation.
(a) target image, (b) camera view at the tendon displacement of $(q_1,q_2) = (4,-3)\, mm$, (c) camera view with random lighting, and (d) camera view with occlusion.} \label{fig:sampleImages} \end{center} \end{figure} For acquiring the dataset, previous approaches made use of two sets of images: one randomly placed within a distance from the origin for general convergence, and the other one very close to the origin for fine-tuning \cite{bateux2018training}. We thought this binary approach would produce noisier joint commands, and therefore we implemented a continuous method. The farther the CR is away from the origin, the sparser the dataset is. Similarly, to produce more deterministic results, a spiral path was used to traverse all reaching points in the 3D world within a certain threshold. The intended path can excite the nonlinearities of the robot very well while covering all quadrants of the robot's workspace, which is of interest in our experiments. The spiral path was generated using \begin{align} {q_1} = \frac{A}{n}x\,\cos\left(\frac{2\pi P}{n}x\right) \label{eq:q1} \end{align} \begin{align} {q_2} = \frac{A}{n}x\,\sin\left(\frac{2\pi P}{n}x\right) \label{eq:q2} \end{align} where $A$ is the maximum displacement of the tendon, $P$ is the total number of periods the CR makes, $n$ is the number of sample points, and $x$ takes integer values from $1$ to $n$. Fig. \ref{fig:spiral} shows the generated spiral path. \begin{figure} \begin{center} \includegraphics[width=.95\columnwidth,height=4cm]{spiralN.jpg} \caption{Camera path used to generate the dataset. Assuming the camera is exactly mounted on the robot's tip, $(x,y,z)$ are the coordinates of the camera's center point with respect to the robot's base.} \label{fig:spiral} \end{center} \end{figure} Using Blender, we created the environment and overlaid the desired scene. Employing a Python API, we moved the camera and light source around and captured images from the scene for training purposes.
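The sampling path of the two equations above is straightforward to generate. The sketch below is a re-implementation for illustration, not the authors' script; the default parameter values follow the dataset description given in the text ($A = 7\,mm$, $P = 20$, $n = 5000$).

```python
import numpy as np

# Spiral tendon-space path: the amplitude grows linearly with the sample
# index x while the phase advances through P full periods, so the path
# sweeps all quadrants of the workspace out to the maximum amplitude A.

def spiral_path(A=7.0, P=20, n=5000):
    x = np.arange(1, n + 1)
    phase = (P / n) * x * 2.0 * np.pi
    q1 = (A / n) * x * np.cos(phase)
    q2 = (A / n) * x * np.sin(phase)
    return q1, q2
```

By construction, the radius in the $(q_1, q_2)$ plane grows from nearly zero at $x = 1$ to exactly $A$ at $x = n$, so the dataset is densest near the origin, matching the continuous sampling strategy described above.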
\subsection{Training and Validation} For training, we selected the mean squared error (MSE) as the loss function because we designed our output activation function as linear. The ground truth of the dataset was generated based on equation (\ref{eq:tanh}), which keeps the ground truth between $-1$ and $1$, allowing for better training. \begin{align} q_{new} = \tanh \left(10\,q \right) \label{eq:tanh} \end{align} In comparison, we linearly mapped $-5\,mm$ and $5\,mm$ to $-1$ and $1$, respectively, clipping any values beyond. However, such an approach would not penalize the optimizer as much near the origin. Since we aim for sub-millimeter accuracy, utilizing equation (\ref{eq:tanh}) would force convergence while producing a much smoother velocity profile. The simulation environment was used to generate the dataset. To this end, 5000 images were acquired with a maximum amplitude of $7\,mm$ and a period of 20. Random lighting effects and random occlusions were included. These occlusions were represented as black rectangles overlaid at random positions to force the model to learn the full spatial features and make it more robust. As we used the classical VGG-16, the input image was RGB of size $224\times224$. The model was trained for 50 epochs with a batch size of 32 and a learning rate of $10^{-5}$ using the Adam optimizer. The final MSE was determined to be $3\times 10^{-5}$. To validate our hypothesis, VGG-16 was swapped with ResNet50 and ResNet101. As expected, the training took substantially longer, and the MSE was inferior to that of the VGG. Similarly, two dense layers (1024 and 512, respectively) were added between the VGG and the final $q$ output to test our hypothesis. Nonetheless, the training took longer without any significant improvement to the MSE. Training on an Nvidia Titan Xp GPU was reasonably fast, taking less than 20 minutes for 50 epochs.
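To see why the tanh scaling of equation (\ref{eq:tanh}) was preferred, one can compare it with the clipped linear map described above. The snippet is an illustration only; it assumes $q$ is expressed in mm, which the text does not state explicitly.

```python
import numpy as np

# Compares the tanh ground-truth scaling with a linear map clipped to
# [-1, 1] over +/-5 mm. Near the origin the tanh label is much larger for
# the same residual, so the optimizer is penalized more strongly there.
# Units of q are assumed to be mm (an assumption, not stated in the text).

def tanh_label(q):
    return np.tanh(10.0 * q)

def linear_label(q, span=5.0):
    return np.clip(np.asarray(q) / span, -1.0, 1.0)
```

For a residual of $0.05\,mm$, the tanh label is $\tanh(0.5) \approx 0.46$, whereas the linear label is only $0.01$, so sub-millimeter errors still produce a strong training signal under the tanh scaling.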
The model inference was also extremely fast, taking about $15\, ms$ per frame\footnote{The code and dataset will be made available publicly once the paper is accepted.}. \section{Experimental Results} \label{sec:exp} \subsection{Simulation} Before conducting experiments in real-world scenarios, we performed tests to validate the robustness and accuracy in simulation. Since the kinematics did not incorporate nonlinear effects when generating the dataset, we needed to include various uncertainties to demonstrate the model's robustness within the simulation. \subsubsection{Modeling Uncertainties} Since the constant curvature assumption does not hold in all situations, other uncertainties and disturbances were added to the simulation. To account for geometric uncertainties, arising from parameters of the robot such as length, disk spacing, etc., Gaussian noise with a mean of $0$ and standard deviation of $0.01\, mm$ was added to the output $q$ values. Also, the outputs of the trained model were scaled by uniformly distributed random factors in the range of $0.25$ to $4$. Random lighting was introduced to account for vision uncertainties, and a region within the image was occluded with black rectangles. Instead of generating these random scene environments every iteration, we chose to regenerate the random uncertainties every 20 iterations to better model the changing conditions of a real-world environment. \subsubsection{Simulation Results} Simulating with initial tendon displacements of $(q_1,q_2) = (6,-4)\, mm$, we observed that the CR converges smoothly although less than $25\%$ of the target image was visible at the starting position. Moreover, the change in lighting, as shown in Fig. \ref{fig:sim_sequence}, did not impact the convergence of the CR. More interestingly, adding occlusion (at times greater than $80\%$) did not destabilize the CR and, as noted with the raw network output in Fig.
\ref{fig:out}, was still able to counteract the Gaussian noise added to the actuation commands of the CR. A sample of the simulation studies can be seen in the supplementary video. Having successfully validated the model in simulation, the next section extends it to the real-world environment. \begin{figure} \begin{center} \begin{subfigure}[b]{0.24\columnwidth} \includegraphics[width=\linewidth]{sequence/1.png} \caption{} \end{subfigure} % \begin{subfigure}[b]{0.24\columnwidth} \includegraphics[width=\linewidth]{sequence/25.png} \caption{} \end{subfigure} % \begin{subfigure}[b]{0.24\columnwidth} \includegraphics[width=\linewidth]{sequence/50.png} \caption{} \end{subfigure} % \begin{subfigure}[b]{0.24\columnwidth} \includegraphics[width=\linewidth]{sequence/75.png} \caption{} \end{subfigure} % \begin{subfigure}[b]{0.24\columnwidth} \includegraphics[width=\linewidth]{sequence/100.png} \caption{} \end{subfigure} % \begin{subfigure}[b]{0.24\columnwidth} \includegraphics[width=\linewidth]{sequence/225.png} \caption{} \end{subfigure} % \begin{subfigure}[b]{0.24\columnwidth} \includegraphics[width=\linewidth]{sequence/250.png} \caption{} \end{subfigure} % \begin{subfigure}[b]{0.24\columnwidth} \includegraphics[width=\linewidth]{sequence/299.png} \caption{} \end{subfigure} % \caption{Sequence of camera views in a typical simulation started at the tendon displacement of $(q_1,q_2) = (6,-4)\, mm$ at iteration numbers of (a) 1, (b) 25, (c) 50, (d) 75, (e) 100, (f) 225, (g) 250, and (h) 299.} \label{fig:sim_sequence} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=.95\columnwidth]{simN.jpg} \caption{Tendon displacement (top) and raw velocity commands from the network, before being multiplied by $-\lambda$, to stabilize the robot (bottom).} \label{fig:out} \end{center} \end{figure} \subsection{Experimental Validation} In order to test the practicality of the proposed end-to-end model, we applied it to the experimental setup developed to
showcase its robustness to various noises and uncertainties. Fig. \ref{fig:TDCR} shows the prototyped robot for the experiment. \subsubsection{Experiment Design} The physical environment was structured similarly to the simulation environment, as shown in Fig. \ref{fig:TDCR}. To test the model's accuracy in converging the CR, the end-effector was moved to random positions, and the trained model attempted to minimize the difference between the current image frame $I$ and the target image $I_0$ on which the model had been trained. The range of motion was limited to $\pm 10\, mm$ for each tendon to keep the scene within the camera's field of view (FOV). Note that the scene on which the model was trained was larger than the camera's FOV, which enabled our model to operate even when the current view had no overlap with the target image. In addition, to show the robustness of the controller, the robot was operated under dynamic lighting conditions, dynamic occlusions, and finally partial static occlusions. \subsubsection{Results and Discussion} Starting from the initial tendon displacements of $(q_1,q_2) = (5,-7)\, mm$, the first row in Fig. \ref{fig:exp_main_imgs} shows that the CR converges to match the camera image to the target image. Normalizing and then subtracting the current and target images gives the images in the second row. Upon convergence, the overlap between the two images becomes highly precise. To evaluate the convergence quantitatively, the sum of absolute differences (SAD) between the normalized target and current images was calculated using \begin{align} SAD = \sum |I^* - I_0^*| \label{eq:SAD} \end{align} where $I^*$ is the normalized current image and $I_0^*$ is the normalized target image. As shown in Fig. \ref{fig:exp_main_plots}, the SAD value does not fully approach zero, indicating that the lighting environment and image exposure were slightly different.
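Eq. (\ref{eq:SAD}) is straightforward to implement; the sketch below assumes a zero-mean, unit-variance normalization for $I^*$ and $I_0^*$, which is our assumption since the exact normalization scheme is not spelled out here:

```python
import numpy as np

def normalize(img):
    """Zero-mean, unit-variance normalization (one plausible choice)."""
    return (img - img.mean()) / (img.std() + 1e-8)

def sad(current, target):
    """Eq. (SAD): sum of absolute differences of the normalized images."""
    return float(np.abs(normalize(current) - normalize(target)).sum())
```

Note that with this normalization a purely global brightness or exposure shift is removed, while spatially non-uniform lighting changes still leave a residual, consistent with the nonzero SAD floor observed in Fig. \ref{fig:exp_main_plots}.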
\begin{figure} \begin{center} \begin{subfigure}[b]{0.24\columnwidth} \includegraphics[width=\linewidth]{Exp_imgs/s1.png} \caption{} \end{subfigure} % \begin{subfigure}[b]{0.24\columnwidth} \includegraphics[width=\linewidth]{Exp_imgs/s2.png} \caption{} \end{subfigure} % \begin{subfigure}[b]{0.24\columnwidth} \includegraphics[width=\linewidth]{Exp_imgs/s3.png} \caption{} \end{subfigure} % \begin{subfigure}[b]{0.24\columnwidth} \includegraphics[width=\linewidth]{Exp_imgs/s4.png} \caption{} \end{subfigure} % \begin{subfigure}[b]{0.24\columnwidth} \includegraphics[width=\linewidth]{Exp_imgs/s1b.png} \caption{} \end{subfigure} % \begin{subfigure}[b]{0.24\columnwidth} \includegraphics[width=\linewidth]{Exp_imgs/s2b.png} \caption{} \end{subfigure} % \begin{subfigure}[b]{0.24\columnwidth} \includegraphics[width=\linewidth]{Exp_imgs/s3b.png} \caption{} \end{subfigure} % \begin{subfigure}[b]{0.24\columnwidth} \includegraphics[width=\linewidth]{Exp_imgs/s4b.png} \caption{} \end{subfigure} % \caption{(a-d) Sequence of camera views in a typical experiment started at the tendon displacement of $(q_1,q_2) = (5,-7)\, mm$; (e-h) corresponding difference between the normalized target and current images.} \label{fig:exp_main_imgs} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=.95\columnwidth]{expAugN.jpg} \caption{SAD between $I^*$ and $I_0^*$ (top), tendon displacement at each iteration (middle), and the raw velocity commands, before being multiplied by $-\lambda$, from the network to stabilize the robot (bottom).} \label{fig:exp_main_plots} \end{center} \end{figure} \begin{figure} \begin{center} \begin{subfigure}[b]{0.3\columnwidth} \includegraphics[width=\linewidth,height=2cm]{Exp_robust/v3.png} \caption{} \end{subfigure} % \begin{subfigure}[b]{0.3\columnwidth} \includegraphics[width=\linewidth,height=2cm]{Exp_robust/v4.png} \caption{} \end{subfigure} % \begin{subfigure}[b]{0.3\columnwidth}
\includegraphics[width=\linewidth,height=2cm]{Exp_robust/v5.png} \caption{} \end{subfigure} % \begin{subfigure}[b]{0.3\columnwidth} \includegraphics[width=\linewidth,height=2cm]{Exp_robust/v3_final.png} \caption{} \end{subfigure} % \begin{subfigure}[b]{0.3\columnwidth} \includegraphics[width=\linewidth,height=2cm]{Exp_robust/v4_final.png} \caption{} \end{subfigure} % \begin{subfigure}[b]{0.3\columnwidth} \includegraphics[width=\linewidth,height=2cm]{Exp_robust/v5_final.png} \caption{} \end{subfigure} % \begin{subfigure}[b]{0.3\columnwidth} \includegraphics[width=\linewidth,height=2cm]{Exp_robust/v3b.png} \caption{} \end{subfigure} % \begin{subfigure}[b]{0.3\columnwidth} \includegraphics[width=\linewidth,height=2cm]{Exp_robust/v4b.png} \caption{} \end{subfigure} % \begin{subfigure}[b]{0.3\columnwidth} \includegraphics[width=\linewidth,height=2cm]{Exp_robust/v5b.png} \caption{} \end{subfigure} % \caption{(a-c) Initial views of the camera in the robustness analyses using dynamic lighting, dynamic occlusion, and partial static occlusion, respectively. (d-f) Corresponding converged views. (g-i) Corresponding difference between the target image and converged views.} \label{fig:robust_exp_img} \end{center} \end{figure} Note that the $q_1$ and $q_2$ values do not approach 0 in Fig. \ref{fig:exp_main_plots} despite that being the origin at which the target image was taken. This can be attributed to the dynamic effects of the CR and, more specifically, its hysteresis, which will be addressed in our future research. Without the need for a computationally complex controller, our model converges the camera frame to the target image while automatically compensating for the nonlinear effects. The raw velocity commands show that the model smoothly converges to the origin and stabilizes as it approaches. In order to test the robustness, different lighting conditions and occlusions were considered. As seen in Fig.
\ref{fig:robust_exp_img}, the top row images show the CR at the starting position, $(q_1,q_2) = (-2,2)\, mm$. The left column shows the dynamic lighting scenario, the center column shows the dynamic occlusion scenario, and the right column shows the static partial occlusion scenario. The bottom row shows the corresponding normalized differences with the target image at the final iteration for each scenario. Despite the uncertainties, the precise overlap confirms that the model has learned sufficiently through simulation alone and that it can robustly control the CR in an end-to-end fashion. Note that even though the simulation only used one 2D image of the target scene, which results in projection artifacts when simulating in 3D, the model successfully operated in the real 3D environment. Four samples of the experimental validation and robustness analyses can be seen in the supplementary video. \section{Conclusion and Future Work} \label{sec:concl} In this paper, a deep direct visual servoing algorithm was proposed to control a single-section tendon-driven continuum robot in an eye-in-hand configuration. The algorithm takes a single image of the target and trains a modified VGG-16 network on a dataset generated with the Blender software. The dataset includes different views of the scene plus illumination changes and occlusions for robustness analysis. The algorithm was tested on a real robot developed by the team and performed well, with fast and accurate convergence in normal scenes. The algorithm's robustness was also evaluated in scenarios incorporating dynamic illumination changes as well as dynamic and static occlusions. In all robustness tests, the algorithm managed to control the robot effectively and efficiently. Beyond the good convergence rates we obtained, the research will be extended in our future work.
The robot will be extended to multiple sections, and the dynamic effects of the robot's motion, such as hysteresis, tendon slack, backlash, dead zones, and external disturbances, will be investigated. We also expect to include a nonlinear controller in future steps, as dynamic uncertainties require us to consider the different phenomena affecting the robot's movements. \bibliographystyle{IEEEtran}
\newcommand{\newsection}[1]{\section{#1}\setcounter{equation}{0}} \newcommand{\IsPreprint}{1} \newcommand{\select}[1]{ \ifnum \IsPreprint=1 #1 \fi } \renewcommand{\textfraction}{0} \renewcommand{\topfraction}{0.95} \setcounter{topnumber}{2} \renewcommand{\bottomfraction}{0.95} \setcounter{bottomnumber}{2} \renewcommand{\floatpagefraction}{0.95} \setcounter{totalnumber}{3} \sloppy \begin{document} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \author{\\ \normalsize{Andrzej J. BURAS${}^{1,2}$, Markus E. LAUTENBACHER${}^{1}$, Gaby OSTERMAIER${}^{1}$\thanks{e-mail: {\tt {buras,lauten,gosterma}@feynman.t30.physik.tu-muenchen.de}}} \\ \ \\ {\small\sl ${}^{1}$ Physik Department, Technische Universit\"at M\"unchen,} \\ {\small\sl D-85748 Garching, Germany.}\\ {\small\sl ${}^{2}$ Max-Planck-Institut f\"ur Physik -- Werner-Heisenberg-Institut,}\\ {\small\sl F\"ohringer Ring 6, D-80805 M\"unchen, Germany.} } \date{} \title{ {\large\sf \rightline{MPI-Ph/94-14} \rightline{TUM-T31-57/94} \rightline{March 1994} } \bigskip \bigskip {\LARGE\sf Waiting for the Top Quark Mass, $K^+ \rightarrow \pi^+ \nu \bar\nu$, $B_s^0$-$\bar{B}_s^0$ Mixing and CP Asymmetries in $B$-Decays}\footnote{ Supported by the German Bundesministerium f\"ur Forschung und Technologie under contract 06 TM 732 and the CEC Science project SC1-CT91-0729.} } \maketitle \thispagestyle{empty} \begin{abstract} \noindent Anticipating improved determinations of $m_t$, $\mid V_{ub}/V_{cb} \mid$, $B_K$ and $F_B \sqrt{B_B}$ in the next five years we make an excursion into the future in order to find a possible picture of the unitarity triangle, of quark mixing and of CP-violation around the year~2000. We then analyse what impact the measurements of four of the possibly cleanest quantities, $BR(K^+ \to \pi^+ \nu \bar\nu)$, $x_d/x_s$, $\sin(2\alpha)$ and $\sin(2\beta)$, will have on this picture. Our analysis shows very clearly that there is an exciting time ahead of us.
In the course of our investigations we extend the analysis of the unitarity triangle beyond the leading order in $\lambda$ and we derive several useful analytic formulae for quantities of interest. \end{abstract} \newpage \setcounter{page}{1} \setcounter{footnote}{0} \renewcommand{\thefootnote}{\arabic{footnote}} \newsection{Introduction} Among the quantities studied in the rich field of rare and CP-violating decays \cite{winsteinwolfenstein:93,ritchiewojcicki:93,littenbergvalencia:93,burasharlander:92,nir:74,rosner:00,cassel:93} the branching ratio $BR(K^+ \rightarrow \pi^+ \nu \bar{\nu})$, the ratio $x_d/x_s$ of $B^o_d-\bar B^o_d$ to $B^o_s-\bar B^o_s$ mixing and a class of CP-asymmetries in neutral B-decays, all being essentially free from any hadronic uncertainties, stand out as ideally suited for the determination of the CKM parameters. Simultaneously they appear to be within the reach of experimentalists in the next five to ten years. The decays $K_L \rightarrow \pi^{\circ} \nu \bar{\nu}$ and $B \rightarrow X_s\nu \bar{\nu}$ are also theoretically very clean but much harder to measure. $BR(K^+ \rightarrow \pi^+ \nu \bar{\nu})$ and $x_d/x_s$ are probably the best quantities for the determination of the CKM element $V_{td}$ and consequently play important roles in constraining the shape of the unitarity triangle. The decay $K^+ \rightarrow\pi^+ \nu \bar{\nu}$ is dominated by short distance loop diagrams involving the heavy top quark and receives also sizable contributions from internal charm quark exchanges. The QCD corrections to this decay were calculated in the leading logarithmic approximation a long time ago \cite{ellishagelin:83,dibdunietz:91,buchallaetal:91}. The recent calculation \cite{buchallaburas:94} of next-to-leading QCD corrections considerably reduced the theoretical uncertainty due to the choice of the renormalization scales present in the leading order expression.
Since the relevant hadronic matrix element of the operator $\bar {s} \gamma_{\mu} (1- \gamma _{5})d~ \bar {\nu} \gamma _{\mu} (1- \gamma _{5}) \nu$ can be measured in the leading decay $K^+ \rightarrow \pi^{\circ} e^+ \nu$, the resulting theoretical expression for $BR(K^+ \rightarrow \pi^+ \nu \bar{\nu})$ is only a function of the CKM parameters, the QCD scale $\Lambda_{\overline{MS}}$ and the quark masses $m_t$ and $m_c$. Moreover due to the work of ref.~\cite{buchallaburas:94} the scales in $m_t$ and $m_c$ are under control so that the sensitivity of $BR(K^+ \rightarrow \pi^+ \nu \bar{\nu})$ to $m_c$ stressed in refs.~\cite{dib3:92,harrisrosner:92} is considerably reduced. The long distance contributions to $K^+ \rightarrow \pi^+ \nu \bar{\nu}$ have been considered in refs.~\cite{reinsehgal:89,hagelinlittenberg:89,luwise:94} and found to be very small: two to three orders of magnitude smaller than the short distance contribution at the level of the branching ratio. The top quark mass dependence and the QCD corrections to $B^o-\bar B^o$ mixing cancel in the ratio $x_d/x_s$ which depends only on the CKM parameters and SU(3)-flavour breaking effects in the relevant hadronic matrix elements. These SU(3) breaking effects contain much smaller theoretical uncertainties than the hadronic matrix elements present in $x_d$ and $x_s$ separately. The measurement of $x_d/x_s$ then gives a good determination of the ratio $\mid V_{td}/V_{ts}\mid$ and consequently of one side of the unitarity triangle. The CP-asymmetry in the decay $B_d^\circ \rightarrow \psi K_S$ allows in the standard model a direct measurement of the angle $\beta$ in the unitarity triangle without any theoretical uncertainties \cite{nir:74}.
Similarly the decay $B_d^\circ \rightarrow \pi^+ \pi^-$ gives the angle $\alpha$, although in this case strategies involving other channels are necessary in order to remove hadronic uncertainties related to penguin contributions \cite{gronaulondon:91,gronaulondon:91a,nirquinn:91,nirquinn:91a,aleksandunietz:91}. The determination of the angle~$\gamma$ from CP asymmetries in neutral B-decays is more difficult but not impossible \cite{aleksankayserlondon:93}. At present $BR(K^+ \rightarrow \pi^+ \nu \bar{\nu})$, $x_d/x_s$ and the CP asymmetries in neutral B-decays given by $\sin(2\phi_i)~(\phi_i=\alpha,\beta,\gamma)$ can be predicted using \begin{itemize} \item the values of $\mid V_{ub} / V_{cb}\mid$ and $\mid V_{cb}\mid$ extracted from tree level B-decays \item the analysis of the parameter $\epsilon _K$ describing the indirect CP violation in K$\rightarrow \pi \pi$ decays\\ and \item the analysis of $x_d = (\Delta M)_B/\Gamma_B$ describing the size of $B_d^\circ -\bar{B}{_d^\circ}$ mixing \end {itemize} All these ingredients are subject to theoretical uncertainties related to non-perturbative parameters entering the relevant formulae. Moreover the last two require the value of $m_t$. Consequently the existing predictions for $BR(K^+ \rightarrow \pi^+ \nu \bar{\nu}$), $x_s$ and CP-asymmetries in B-decays are rather uncertain. In this paper we would like to address the following questions: \begin{itemize} \item What accuracy of theoretical predictions for $BR(K^+ \rightarrow \pi^+ \nu \bar{\nu}$), $x_s$, $\sin(2\phi_i)$ and the unitarity triangle could one expect around the year 2000 assuming reasonable improvements for the values of $\mid V_{cb}\mid$, $\mid V_{ub}/V_{cb}\mid$, $m_t$ and the non-perturbative parameters in question? \item What would be the impact of a measurement of $BR(K^+ \rightarrow \pi^+ \nu \bar{\nu}$) on the CKM parameters and in particular on the value of $\mid V_{td}\mid$? \item What would be the impact of a measurement of $x_s$ ? 
\item What would be the impact of a measurement of $\sin(2\beta)$ and how important would be simultaneous measurements of $\sin(2\alpha)$ and $\sin(2\gamma)$? \item How well should one measure $BR(K^+ \rightarrow \pi^+ \nu \bar{\nu})$, $\sin(2\beta$), $V_{cb}$, $m_t$ and $x_d/x_s$ in order to obtain an acceptable determination of the CKM matrix on the basis of these five quantities alone? \end {itemize} As byproducts of these studies: \begin{itemize} \item we will update the analysis of $BR(K^+ \rightarrow \pi^+ \nu \bar{\nu}$), $x_s$, $\sin(2\phi_i)$ and of the unitarity triangle in view of theoretical and experimental developments which took place in 1993, \item we will extend the analysis of the unitarity triangle beyond the leading order in the expansion parameter $\lambda = \mid V_{us}\mid$\\ and \item we will derive several approximate analytic formulae and bounds which should be useful in following the developments in this field in the 90's. \end {itemize} Our paper is organized as follows. In Section 2 we extend the Wolfenstein parametrization and the analysis of the unitarity triangle beyond the leading order in $\lambda$ and we give improved formulae for $\sin(2\phi_i)$. In Section 3 we collect the formulae for $\varepsilon _K$, $B^\circ-\bar{B}^{\circ}$ mixing and $BR(K^+ \rightarrow\pi^+ \nu \bar{\nu})$ beyond leading order in $\lambda$. In Section 4 we list several analytic results which can be derived using Wolfenstein parametrization beyond leading $\lambda$, which to a very good accuracy represent exact numerical analysis. In Section 5 we systematically address the questions posed above. We end the paper with a brief summary and a number of conclusions. 
\newsection{Cabibbo-Kobayashi-Maskawa Matrix} \subsection{Standard Parametrization} We will dominantly use the standard parametrization \cite{particle:90} \begin{equation}\label{2.72} V= \left(\begin{array}{ccc} c_{12}c_{13}&s_{12}c_{13}&s_{13}e^{-i\delta}\\ -s_{12}c_{23} -c_{12}s_{23}s_{13}e^{i\delta}&c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i\delta}& s_{23}c_{13}\\ s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta}&-s_{23}c_{12} -s_{12}c_{23}s_{13}e^{i\delta}&c_{23}c_{13} \end{array}\right) \end{equation} where $c_{ij}=\cos\theta_{ij}$ and $s_{ij}=\sin\theta_{ij}$ with $i$ and $j$ being generation labels ($i,j=1,2,3$). $c_{ij}$ and $s_{ij}$ can all be chosen to be positive. The measurements of the CP violation in K decays force $\delta$ to be in the range $0<\delta<\pi$. The extensive phenomenology of the last years has shown that $s_{13}$ and $s_{23}$ are small numbers: $O(10^{-3})$ and $O(10^{-2})$, respectively. Consequently to an excellent accuracy $c_{13}=c_{23}=1$ and the four independent parameters are given as follows \begin{equation}\label{2.73} s_{12}=\mid V_{us}\mid, \quad s_{13}=\mid V_{ub}\mid, \quad s_{23}=\mid V_{cb}\mid, \quad \delta, \end{equation} with the phase $\delta$ extracted from CP violating transitions or loop processes sensitive to $\mid V_{td}\mid$. The latter fact is based on the observation that for $0\le\delta\le\pi$, as required by the analysis of CP violation, there is a one--to--one correspondence between $\delta$ and $|V_{td}|$ given by \begin{equation}\label{10} \mid V_{td}\mid=\sqrt{a^2+b^2-2 a b \cos\delta}, \qquad a=\mid V_{cd} V_{cb}\mid, \qquad b=\mid V_{ud} V_{ub}\mid \end{equation} \subsection{Wolfenstein Parameterization Beyond Leading Order} We will also use the Wolfenstein parametrization \cite{wolfenstein:83}. 
It is an approximate parametrization of the CKM matrix in which each element is expanded as a power series in the small parameter $\lambda=\mid V_{us}\mid=0.22$: \begin{equation}\label{2.75} V= \left(\begin{array}{ccc} 1-{\lambda^2\over 2}&\lambda&A\lambda^3(\varrho-i\eta)\\ -\lambda& 1-{\lambda^2\over 2}&A\lambda^2\\ A\lambda^3(1-\varrho-i\eta)&-A\lambda^2& 1\end{array}\right) +O(\lambda^4) \end{equation} and the set (\ref{2.73}) is replaced by \begin{equation}\label{2.76} \lambda, \qquad A, \qquad \varrho, \qquad \eta \end{equation} The Wolfenstein parametrization has several nice features. In particular it offers, in conjunction with the unitarity triangle, a very transparent geometrical representation of the structure of the CKM matrix and allows one to derive several analytic results to be discussed below. This turns out to be very useful in the phenomenology of rare decays and of CP violation. When using the Wolfenstein parametrization one should remember that it is an approximation and that in certain situations neglecting $O(\lambda^4)$ terms may give wrong results. The question then arises of how to find the $O(\lambda^4)$ and higher order terms. The point is that, as in any perturbative expansion, the $O(\lambda^4)$ and higher order terms are not unique. This is the reason why in different papers in the literature different $O(\lambda^4)$ terms can be found. The non-uniqueness of the higher order terms in $\lambda$ is, however, not troublesome. As in any perturbation theory, different choices of expanding in $\lambda$ will result in different numerical values for the parameters in (\ref{2.76}) extracted from the data without changing the physics when all terms are summed up. Here it suffices to find an expansion in $\lambda$ which allows for simple relations between the parameters (\ref{2.73}) and (\ref{2.76}). This will also restore the unitarity of the CKM matrix which in the Wolfenstein parametrization as given in (\ref{2.75}) is not satisfied exactly.
To this end we go back to (\ref{2.72}) and we impose the relations \begin{equation}\label{2.77} s_{12}=\lambda \qquad s_{23}=A \lambda^2 \qquad s_{13} e^{-i\delta}=A \lambda^3 (\varrho-i \eta) \end{equation} to {\it all orders} in $\lambda$. In view of the comments made above this can certainly be done. It follows that \begin{equation}\label{2.84} \varrho=\frac{s_{13}}{s_{12}s_{23}}\cos\delta \qquad \eta=\frac{s_{13}}{s_{12}s_{23}}\sin\delta \end{equation} We observe that (\ref{2.77}) and (\ref{2.84}) represent simply the change of variables from (\ref{2.73}) to (\ref{2.76}). Making this change of variables in the standard parametrization (\ref{2.72}) we find the CKM matrix as a function of $(\lambda,A,\varrho,\eta)$ which satisfies unitarity exactly! We also note that in view of $c_{13}=1-O(\lambda^6)$ the relations between $s_{ij}$ and $\mid V_{ij}\mid$ in (\ref{2.73}) are satisfied to high accuracy. The relations in (\ref{2.84}) have been first used in ref.~\cite{schmidtlerschubert:92}. However, our improved treatment of the unitarity triangle presented below goes beyond the analysis of these authors. The procedure outlined above gives automatically the corrections to the Wolfenstein parametrization in (\ref{2.75}). Indeed expressing (\ref{2.72}) in terms of Wolfenstein parameters using (\ref{2.77}) and then expanding in powers of $\lambda$ we recover the matrix in (\ref{2.75}) and in addition find explicit corrections of $O(\lambda^4)$ and higher order terms. $V_{ub}$ remains unchanged. The corrections to $V_{us}$ and $V_{cb}$ appear only at $O(\lambda^7)$ and $O(\lambda^8)$, respectively. For many practical purposes the corrections to the real parts can also be neglected. The essential corrections to the imaginary parts are: \begin{equation}\label{2.83g} \Delta V_{cd}=-iA^2 \lambda^5\eta \qquad \Delta V_{ts}=-iA\lambda^4\eta \end{equation} These two corrections have to be taken into account in the discussion of CP violation. 
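The claim that the change of variables (\ref{2.77}) restores exact unitarity can be verified numerically. The following sketch builds the standard-parametrization matrix (\ref{2.72}) from $(\lambda, A, \varrho, \eta)$ and checks $VV^\dagger = 1$ to machine precision (the parameter values in the usage below are illustrative, not fits):

```python
import cmath
import math

def ckm_exact(lam, A, rho, eta):
    """CKM matrix in the standard parametrization (2.72), with
    s12, s23 and s13*exp(-i*delta) fixed to all orders via (2.77)."""
    s12 = lam
    s23 = A * lam**2
    s13 = A * lam**3 * math.hypot(rho, eta)
    delta = math.atan2(eta, rho)   # s13*exp(-i*delta) = A lam^3 (rho - i eta)
    c12 = math.sqrt(1.0 - s12**2)
    c23 = math.sqrt(1.0 - s23**2)
    c13 = math.sqrt(1.0 - s13**2)
    e_p = cmath.exp(1j * delta)
    e_m = cmath.exp(-1j * delta)
    return [
        [c12 * c13, s12 * c13, s13 * e_m],
        [-s12 * c23 - c12 * s23 * s13 * e_p,
         c12 * c23 - s12 * s23 * s13 * e_p, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * e_p,
         -s23 * c12 - s12 * c23 * s13 * e_p, c23 * c13],
    ]

def unitarity_defect(V):
    """Largest entry of |V V^dagger - 1|."""
    return max(
        abs(sum(V[i][k] * V[j][k].conjugate() for k in range(3))
            - (1.0 if i == j else 0.0))
        for i in range(3) for j in range(3))
```

The truncated matrix (\ref{2.75}), by contrast, violates unitarity at $O(\lambda^4)$, which is precisely the point of the construction above.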
On the other hand the imaginary part of $V_{cs}$ which in our expansion in $\lambda$ appears only at $O(\lambda^6)$ can be fully neglected. In order to improve the accuracy of the unitarity triangle discussed below we will also include the $O(\lambda^5)$ correction to $V_{td}$ which gives \begin{equation}\label{2.83d} V_{td}= A\lambda^3 (1-\bar\varrho-i\bar\eta) \end{equation} with \begin{equation}\label{2.88d} \bar\varrho=\varrho (1-\frac{\lambda^2}{2}) \qquad \bar\eta=\eta (1-\frac{\lambda^2}{2}). \end{equation} In order to derive analytic results we need accurate explicit expressions for $\lambda_i=V_{id}V_{is}^*$ where $i=c,t$. We have \begin{equation}\label{2.51} Im\lambda_t= -Im\lambda_c=\eta A^2\lambda^5 \end{equation} \begin{equation}\label{2.52} Re\lambda_c=-\lambda (1-\frac{\lambda^2}{2}) \end{equation} \begin{equation}\label{2.53} Re\lambda_t= -(1-\frac{\lambda^2}{2}) A^2\lambda^5 (1-\bar\varrho) \end{equation} Expressions (\ref{2.51}) and (\ref{2.52}) represent to an accuracy of 0.2\% the exact formulae obtained using (\ref{2.72}). The expression (\ref{2.53}) deviates by at most 2\% from the exact formula in the full range of parameters considered. In order to keep the analytic expressions in sections 3 and 4 in a transparent form we have dropped a small $O(\lambda^7)$ term in deriving (\ref{2.53}). After inserting the expressions (\ref{2.51})--(\ref{2.53}) in exact formulae for quantities of interest, further expansion in $\lambda$ should not be made.
Phenomenologically this triangle is very interesting as it involves simultaneously the elements $V_{ub}$, $V_{cb}$ and $V_{td}$ which are under extensive discussion at present. In the usual analyses of the unitarity triangle only terms $O(\lambda^3)$ are kept in (\ref{2.87h}) \cite{burasharlander:92,nir:74,harrisrosner:92,schmidtlerschubert:92,dibdunietzgilman:90,alilondon:93}. It is however straightforward to include the next-to-leading $O(\lambda^5)$ terms. We note first that \begin{equation}\label{2.88a} V_{cd}V_{cb}^*=-A\lambda^3+O(\lambda^7). \end{equation} Thus to an excellent accuracy $V_{cd}V_{cb}^*$ is real with $\mid V_{cd}V_{cb}^*\mid=A\lambda^3$. Keeping $O(\lambda^5)$ corrections and rescaling all terms in (\ref{2.87h}) by $A \lambda^3$ we find \begin{equation}\label{2.88b} \frac{1}{A\lambda^3}V_{ud}V_{ub}^* =\bar\varrho+i\bar\eta \qquad, \qquad \frac{1}{A\lambda^3}V_{td}V_{tb}^* =1-(\bar\varrho+i\bar\eta) \end{equation} with $\bar\varrho$ and $\bar\eta$ defined in (\ref{2.88d}). Thus we can represent (\ref{2.87h}) as the unitarity triangle in the complex $(\bar\varrho,\bar\eta)$ plane. This is shown in fig.~\ref{fig:triangle}. The length of the side $CB$ which lies on the real axis equals unity when eq.~(\ref{2.87h}) is rescaled by $V_{cd}V_{cb}^*$. We observe that beyond the leading order in $\lambda$ the point A {\it does not} correspond to $(\varrho,\eta)$ but to $(\bar\varrho,\bar\eta)$. Clearly within 3\% accuracy $\bar\varrho=\varrho$ and $\bar\eta=\eta$. Yet in the distant future the accuracy of experimental results and theoretical calculations may improve considerably so that the more accurate formulation given here will be appropriate. For instance the experiments at LHC should measure $ \sin(2\beta) $ to an accuracy of 2--3\% \cite{camilleri:93}. 
\fig{ \begin{figure}[htb] }{ \vspace{0.05in} \centerline{ \epsfysize=2in \epsffile{triangle.ps} } \vspace{0.05in} }{ \caption[]{\small\sl Unitarity triangle in the complex $(\bar\varrho,\bar\eta)$ plane. \label{fig:triangle}} \end{figure} } Using simple trigonometry one can calculate $\sin(2\phi_i$) in terms of $(\bar\varrho,\bar\eta)$ with the result: \begin{equation}\label{2.89} \sin(2\alpha)=\frac{2\bar\eta(\bar\eta^2+\bar\varrho^2-\bar\varrho)} {(\bar\varrho^2+\bar\eta^2)((1-\bar\varrho)^2 +\bar\eta^2)} \end{equation} \begin{equation}\label{2.90} \sin(2\beta)=\frac{2\bar\eta(1-\bar\varrho)}{(1-\bar\varrho)^2 + \bar\eta^2} \end{equation} \begin{equation}\label{2.91} \sin(2\gamma)=\frac{2\bar\varrho\bar\eta}{\bar\varrho^2+\bar\eta^2}= \frac{2\varrho\eta}{\varrho^2+\eta^2} \end{equation} The lengths $CA$ and $BA$ in the rescaled triangle of fig.\ 1 to be denoted by $R_b$ and $R_t$, respectively, are given by \begin{equation}\label{2.94} R_b \equiv \frac{\mid V_{ud}V^*_{ub}\mid}{\mid V_{cd}V^*_{cb}\mid} = \sqrt{\bar\varrho^2 +\bar\eta^2} = (1-\frac{\lambda^2}{2})\frac{1}{\lambda} \left| \frac{V_{ub}}{V_{cb}} \right| \end{equation} \begin{equation}\label{2.95} R_t \equiv \frac{\mid V_{td}V^*_{tb}\mid}{\mid V_{cd}V^*_{cb}\mid} = \sqrt{(1-\bar\varrho)^2 +\bar\eta^2} =\frac{1}{\lambda} \left| \frac{V_{td}}{V_{cb}} \right| \end{equation} The expressions for $R_b$ and $R_t$ given here in terms of $(\bar\varrho, \bar\eta)$ are excellent approximations. 
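As a cross-check, the trigonometric expressions (\ref{2.89})--(\ref{2.91}) can be compared with the angles computed directly from the triangle of fig.~\ref{fig:triangle}. A short numerical sketch (the values of $\bar\varrho$ and $\bar\eta$ used below are illustrative):

```python
import math

def angles(rho_bar, eta_bar):
    """Angles of the rescaled unitarity triangle of fig. 1:
    gamma at C = (0, 0), beta at B = (1, 0), alpha at A."""
    beta = math.atan2(eta_bar, 1.0 - rho_bar)
    gamma = math.atan2(eta_bar, rho_bar)
    return math.pi - beta - gamma, beta, gamma

def sin2alpha(r, e):  # Eq. (2.89)
    return 2*e*(e**2 + r**2 - r) / ((r**2 + e**2) * ((1-r)**2 + e**2))

def sin2beta(r, e):   # Eq. (2.90)
    return 2*e*(1-r) / ((1-r)**2 + e**2)

def sin2gamma(r, e):  # Eq. (2.91)
    return 2*r*e / (r**2 + e**2)
```

The closed-form expressions agree with $\sin(2\phi_i)$ evaluated from the angles themselves to machine precision, and $\alpha+\beta+\gamma=\pi$ holds by construction.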
Clearly $R_b$ and $R_t$ can also be determined by measuring two of the angles $\phi_i$: \begin{equation}\label{2.96} R_b=\frac{\sin(\beta)}{\sin(\alpha)}= \frac{\sin(\alpha+\gamma)}{\sin(\alpha)}= \frac{\sin(\beta)}{\sin(\gamma+\beta)} \end{equation} \begin{equation}\label{2.97} R_t=\frac{\sin(\gamma)}{\sin(\alpha)}= \frac{\sin(\alpha+\beta)}{\sin(\alpha)}= \frac{\sin(\gamma)}{\sin(\gamma+\beta)} \end{equation} \newsection{Basic Formulae} \subsection{Constraint from $\varepsilon_{K}$} The usual box diagram calculation together with the experimental value for $\varepsilon_K$ specifies a hyperbola in the $(\varrho, \eta)$ plane with $\eta > 0$ \cite{harrisrosner:92,dibdunietzgilman:90}. With our new coordinates $(\bar\varrho,\bar\eta)$ we get \begin{equation}\label{100} \bar\eta \left[(1-\bar\varrho) A^2 \eta_2 S(x_t) + P_0(\varepsilon) \right] A^2 B_K = 0.223 \end{equation} Here \begin{equation}\label{102} P_0(\varepsilon) = \left[ \eta_3 S(x_c,x_t) - \eta_1 x_c \right] \frac{1}{\lambda^4} \end{equation} \begin{equation}\label{103} S(x_c,x_t) = x_c \left[ \ln \frac{x_t}{x_c} - \frac{3x_t}{4(1-x_t)} \left(1+ \frac{x_t}{1-x_t}\ln x_t\right)\right] \end{equation} \begin{equation}\label{104} S(x_t) = x_t \left[ \frac{1}{4} + \frac{9}{4} \frac {1}{(1-x_t)} - \frac {3}{2} \frac{1}{(1-x_t)^2}\right] + \frac{3}{2} \left[ \frac{x_t}{x_t-1}\right]^3 \ln x_t \end{equation} where $x_i = m^2_i/M^2_W$. $B_K$ is the renormalization group invariant non-perturbative parameter describing the size of $<\bar{K^0} \mid (\bar s d)_{V-A}(\bar s d)_{V-A}\mid K^0>$ and $\eta_i$ represent QCD corrections to the box diagrams. In our numerical analysis we will use \begin{equation}\label{105} \eta_1 = 1.1 \mbox{\cite{herrlichnierste:93}},\qquad \eta_2 = 0.57 \mbox{\cite{burasjaminweisz:90}},\qquad \eta_3=0.36 \mbox{\cite{kaufmanetal:88,buchallaetal:90,dattaetal:90,flynn:90}} \mbox{(leading order)} \end{equation} The values for $B_K$ are specified below. 
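As a numerical illustration of the $\varepsilon_K$ constraint, the following sketch implements $S(x_t)$, $S(x_c,x_t)$ and $P_0(\varepsilon)$, and solves eq.~(\ref{100}) for $\bar\eta$ at given $\bar\varrho$. All input values ($M_W=80.2\,GeV$, $\lambda=0.22$, $m_t=170\,GeV$, $m_c=1.3\,GeV$, $B_K=0.75$, $|V_{cb}|=0.040$) are illustrative assumptions:

```python
import math

MW, lam = 80.2, 0.22                      # assumed inputs
eta1, eta2, eta3 = 1.1, 0.57, 0.36        # QCD factors of eq. (105)

def S_t(xt):
    """S(x_t) of eq. (104)."""
    return (xt * (0.25 + 2.25 / (1 - xt) - 1.5 / (1 - xt)**2)
            + 1.5 * (xt / (xt - 1))**3 * math.log(xt))

def S_ct(xc, xt):
    """S(x_c, x_t) of eq. (103)."""
    return xc * (math.log(xt / xc)
                 - 3 * xt / (4 * (1 - xt))
                 * (1 + xt / (1 - xt) * math.log(xt)))

def P0_eps(xc, xt):
    """P_0(eps) of eq. (102)."""
    return (eta3 * S_ct(xc, xt) - eta1 * xc) / lam**4

def eta_bar_from_epsK(rho_bar, A, BK, xc, xt):
    """Solve the hyperbola (100) for eta_bar at fixed rho_bar."""
    return 0.223 / (A**2 * BK * ((1 - rho_bar) * A**2 * eta2 * S_t(xt)
                                 + P0_eps(xc, xt)))

xt = (170.0 / MW)**2
xc = (1.3 / MW)**2
A = 0.040 / lam**2
P0 = P0_eps(xc, xt)
eb = eta_bar_from_epsK(0.0, A, 0.75, xc, xt)
```

With these inputs one obtains $P_0(\varepsilon)$ of order $0.2$--$0.3$ and $\bar\eta$ in the phenomenologically expected range.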
\subsection{$B^o-\bar B^o$ Mixing} The experimental knowledge of the $B^o_d-\bar B^o_d$ mixing described by the parameter $x_d = \Delta M/\Gamma_B$ determines $\mid V_{td} \mid$. Using the usual formulae for box diagrams with top quark exchanges one finds \begin{equation}\label{105a} x_d = \mid V_{td} \mid ^2 P(B^o_d-\bar B^o_d) S(x_t) \end{equation} where \begin{equation}\label{105b} P(B^o_d-\bar B^o_d) = 3.89 \cdot 10^3 \left[ \frac{\tau_{B_d}}{1.5\,ps} \right] \left[ \frac{F_{B_d} \sqrt{B_{B_d}}}{200 ~MeV} \right] ^2 \left[ \frac{\eta_B}{0.55} \right] \end{equation} and consequently \begin{equation}\label{106} \mid V_{td} \mid = A \lambda ^3 R_t ,\qquad R_t = 1.63 \cdot \frac{R_0}{\sqrt{ S(x_t)}}. \end{equation} Here \begin{equation}\label{107} R_0 \equiv \sqrt{ \frac{x_d}{0.72}} \left[ \frac{200 MeV}{F_{B_d} \sqrt{B_{B_d}}} \right] \left[ \frac{0.038}{\kappa} \right] \sqrt{ \frac{0.55}{\eta_B}} \end{equation} and \begin{equation}\label{107a} \kappa \equiv \mid V_{cb} \mid \left[\frac{\tau_B}{1.5\,ps}\right]^{0.5} \end{equation} with $\tau_B$ being the B-meson life-time. $\eta_B$ is the QCD factor analogous to $\eta_2$ and calculated to be $\eta_B = 0.55$ \cite{burasjaminweisz:90}. $F_{B_d}$ is the B-meson decay constant and $B_{B_d}$ denotes a non-perturbative parameter analogous to $B_K$. The values of $x_d$, $F_{B_d} \sqrt{ B_{B_d}}$ and $|V_{cb}|$ will be specified below. It is well known (see for instance \cite{alilondon:93}) that the accuracy of the determination of $\mid V_{td} \mid$ and $R_t$ can be considerably improved by measuring simultaneously the $B^o_s-\bar B^o_s$ mixing described by $x_s$. 
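Before turning to $x_s$, the internal consistency of eqs.~(\ref{105a})--(\ref{107a}) can be verified numerically: at the reference values of all bracketed ratios, $|V_{td}|$ obtained directly from eq.~(\ref{105a}) must agree with the rescaled form (\ref{106}). A sketch, assuming $M_W=80.2\,GeV$, $\lambda=0.22$ and illustrative inputs:

```python
import math

MW, lam = 80.2, 0.22                      # assumed inputs

def S_t(xt):
    """S(x_t) of eq. (104)."""
    return (xt * (0.25 + 2.25 / (1 - xt) - 1.5 / (1 - xt)**2)
            + 1.5 * (xt / (xt - 1))**3 * math.log(xt))

Vcb, xd, mt = 0.038, 0.72, 170.0          # illustrative values
xt = (mt / MW)**2

# Route 1: eq. (105a) with P(Bd - Bdbar) = 3.89e3 at the reference values
Vtd_direct = math.sqrt(xd / (3.89e3 * S_t(xt)))

# Route 2: eqs. (106)-(107a); all bracketed ratios equal 1 here,
# so R_0 reduces to sqrt(x_d / 0.72)
A = Vcb / lam**2
R0 = math.sqrt(xd / 0.72)
Rt = 1.63 * R0 / math.sqrt(S_t(xt))
Vtd_scaled = A * lam**3 * Rt
```

The two routes agree at the sub-percent level, confirming that the constant $1.63$ in eq.~(\ref{106}) encodes the normalization of eq.~(\ref{105b}).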
Defining the ratio \begin{equation}\label{107b} R_{ds} = \frac{\tau_{B_d}}{\tau_{B_s}} \cdot \frac{m_{B_d}}{m_{B_s}} \left[ \frac{F_{B_d} \sqrt{B_{B_d}}}{F_{B_s} \sqrt{B_{B_s}}} \right]^2 \end{equation} we find \begin{equation}\label{107c} R_t = \frac{1}{\sqrt{R_{ds}}} \sqrt{\frac{x_d}{x_s}} \frac{1}{\lambda} \sqrt{1-\lambda^2(1-2 \varrho)} \end{equation} and, using (\ref{106}), the matrix element $\mid V_{td} \mid$. The last factor in (\ref{107c}) describes the small departure of $\mid V_{ts} \mid$ from $\mid V_{cb} \mid$. The $\varrho$ dependence in (\ref{107c}) can safely be neglected. In this way $R_t$ depends neither on $m_t$ nor on $\mid V_{cb} \mid$. Since it is easier to calculate $R_{ds}$ than $R_0$, formula (\ref{107c}) gives a much more reliable determination of $R_t$ than (\ref{106}), provided $x_s$ has been measured. \subsection{The Rare Decay $K^{+} \to \pi^{+} \nu \bar\nu$} The $K^{+} \to \pi^{+} \nu \bar\nu$ branching ratio for a single neutrino flavor $l~(l = e,\mu,\tau)$ is given by \begin{equation}\label{107d} BR(K^{+} \to \pi^{+} \nu \bar\nu) = \frac{\alpha^2 BR(K^{+} \to \pi^0 e^+ \nu)}{V_{us}^2 2 \pi^2 \sin^4\theta_W} \cdot \mid V_{cs}^\ast V_{cd} X_{NL}^l + V_{ts}^\ast V_{td} X(x_t) \mid^2 \end{equation} Summing over three neutrino flavors, using eqs.~(\ref{2.51})--(\ref{2.53}) and setting \begin{equation}\label{107e} \alpha = \frac{1}{128} \qquad \sin ^2 \theta_W = 0.23 \qquad BR(K^{+} \to \pi^0 e^+ \nu) = 4.82 \cdot 10^{-2} \end{equation} we obtain \begin{equation}\label{108} BR(K^{+} \to \pi^{+} \nu \bar\nu) = 4.64 \cdot 10^{-11} A^4 X^2(x_t) \frac{1}{\sigma} \left[ (\sigma \bar\eta)^2 + \frac{2}{3} \left(\varrho^e_0 - \bar\varrho \right)^2 + \frac{1}{3} \left(\varrho^{\tau}_0 - \bar\varrho \right)^2 \right] \end{equation} with \begin{equation}\label{109} \varrho^l_0 = 1 + \frac{P^l_0}{A^2 X(x_t)} \qquad P^l_0 = \frac{X^l_{NL}}{\lambda^4} \qquad \sigma = \left( \frac{1}{1- \frac{\lambda^2}{2}} \right)^2 \end{equation} The function
$X(x_t)$ is given as follows \begin{eqnarray}\label{109a} X(x_t) & = & \eta_{\rm X} \cdot X_0(x_t) \\ X_0(x_t) & =& \frac{x_t}{8} \left[ - \frac{2+x_t}{1-x_t} + \frac{3x_t -6}{(1-x_t)^2} \ln x_t \right] \qquad {\rm with} \quad \eta_X = 0.985 \end{eqnarray} where $\eta_{\rm X}$ is the NLO correction calculated in ref.~\cite{buchallaburas:93b}. For the determination of $P^l_0$, given in tab.~\ref{tab:plopar}, we take the NLO results for $X^l_{NL}$ of ref.~\cite{buchallaburas:94}. Here $m_c \equiv \overline{m}_c(m_c)$. \begin{table}[htb] \caption[]{\small\sl Values of $P_0^l$ for various $\Lambda_{\overline {\rm MS}}~[GeV]$ and $m_c~[GeV]$} \vspace{0.05in} \begin{center} \begin{tabular}{|c||c|c|c||c|c|c|} \hline \multicolumn{1}{|c||}{ } & \multicolumn{3}{c||}{$P_0^e $} & \multicolumn{3}{c|}{$P_0^\tau $} \\ \hline $\Lambda_{\rm{\overline{MS}}} \backslash m_c$ & 1.25 & 1.30 & 1.35 & 1.25 & 1.30 & 1.35 \\ \hline \hline 0.20 & 0.457 & 0.494 & 0.531 & 0.312 & 0.342 & 0.373 \\ \hline 0.25 & 0.441 & 0.477 & 0.515 & 0.296 & 0.326 & 0.357 \\ \hline 0.30 & 0.425 & 0.461 & 0.498 & 0.280 & 0.309 & 0.340 \\ \hline 0.35 & 0.408 & 0.444 & 0.480 & 0.262 & 0.292 & 0.322 \\ \hline \end{tabular} \end{center} \label{tab:plopar} \end{table} The measured value of BR($K^{+} \to \pi^{+} \nu \bar\nu$) determines an ellipse in the $(\bar\varrho,\bar\eta)$ plane centered at $(\varrho_0,0)$ with \begin{equation}\label{110} \varrho_0 = 1 + \frac{\bar{P_0}(K^+)}{A^2 X(x_t)} \qquad \bar{P_0}(K^+) = \frac{2}{3} P^e_0 + \frac{1}{3} P^\tau_0 \end{equation} and having the squared axes \begin{equation}\label{110a} \bar\varrho_1^2 = r^2_0 \qquad \bar\eta_1^2 = \left( \frac{r_0}{\sigma} \right)^2 \end{equation} where \begin{equation}\label{111} r^2_0 = \frac{1}{A^4 X^2(x_t)} \left[ \frac{\sigma \cdot BR(K^{+} \to \pi^{+} \nu \bar\nu)}{4.64 \cdot 10^{-11}} - \frac{2}{9} \left( P^e_0 - P^\tau_0 \right)^2 \right]. \end{equation} The last term in (\ref{111}) is very small and can safely be neglected.
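The overall constant $4.64\cdot 10^{-11}$ of eq.~(\ref{108}) follows from the prefactor of eq.~(\ref{107d}) with the inputs (\ref{107e}), after summing over the three neutrino flavors and extracting $A^4\lambda^{10}$ from the CKM factors. A numerical cross-check; setting $V_{us}=\lambda=0.22$ here is an assumption:

```python
import math

alpha, sw2, BRKe3, lam = 1.0 / 128.0, 0.23, 4.82e-2, 0.22

# Per-flavor prefactor of eq. (107d), with V_us = lam and the CKM
# factors stripped off
pref = alpha**2 * BRKe3 / (lam**2 * 2 * math.pi**2 * sw2**2)

# Summing over 3 flavors and pulling A^4 lam^10 out of |V_ts* V_td|^2
# reproduces the constant of eq. (108)
coef = 3 * pref * lam**10
```

The result reproduces $4.64\cdot 10^{-11}$ to well below one percent.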
The ellipse defined by $r_0$, $\varrho_0$ and $\sigma$ given above intersects the circle (\ref{2.94}) for the allowed range of parameters. This allows one to determine $\bar\varrho$ and $\bar\eta$ with \begin{equation}\label{113} \bar\varrho = \frac{1}{1-\sigma^2} \left( \varrho_0 - \sqrt{\varrho_0^2 -(1-\sigma^2)(\varrho_0^2 -r_0^2+\sigma^2 R_b^2)} \right) \qquad \bar\eta = \sqrt{R_b^2 -\bar\varrho^2} \end{equation} and consequently \begin{equation}\label{113aa} R_t^2 = 1+R_b^2 - 2 \bar\varrho \end{equation} where $\bar\eta$ is assumed to be positive. In the leading order of the Wolfenstein parametrization \begin{equation}\label{113ab} \sigma \to 1 \qquad \bar\eta \to \eta \qquad \bar\varrho \to \varrho \end{equation} and $BR(K^+ \to \pi^+ \nu \bar\nu)$ determines a circle in the $(\varrho,\eta)$ plane centered at $(\varrho_0,0)$ and having the radius $r_0$ of (\ref{111}) with $\sigma =1$. Formulae (\ref{113}) and (\ref{113aa}) then simplify to \begin{equation}\label{113a} R_t^2 = 1 + R_b^2 + \frac{r_0^2 - R_b^2}{\varrho_0} - \varrho_0 \qquad \varrho = \frac{1}{2} \left( \varrho_0 + \frac{R_b^2 - r_0^2}{\varrho_0} \right) \end{equation} in accordance with ref.~\cite{buchallaburas:94}. \subsection{$B^o$-Decays and Superweak Models} Although the CP-asymmetries in $B^0$-decays in which the final state is a CP eigenstate offer a way to measure the angles of the unitarity triangle, they may in principle fail to distinguish the standard model from superweak models. As discussed by G\'erard and Nakada \cite{gerardnakada:91} and by Liu and Wolfenstein \cite{liuwolfenstein:87}, non-vanishing asymmetries are also expected in superweak scenarios. In order to rule out superweak models one has to measure the asymmetries in two distinct channels and find that they differ from each other.
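This degeneracy can be made explicit numerically: along the curve $\bar\eta=(1-\bar\varrho)\sqrt{\bar\varrho/(2-\bar\varrho)}$ given in eq.~(\ref{113d}) below, the formulae (\ref{2.89}) and (\ref{2.90}) yield $\sin(2\alpha)=-\sin(2\beta)$ identically, which is precisely the superweak pattern for final states of opposite CP-parity. A short check:

```python
import math

def sin2alpha(rb, eb):
    # eq. (2.89)
    return (2 * eb * (eb**2 + rb**2 - rb)
            / ((rb**2 + eb**2) * ((1 - rb)**2 + eb**2)))

def sin2beta(rb, eb):
    # eq. (2.90)
    return 2 * eb * (1 - rb) / ((1 - rb)**2 + eb**2)

# Scan the curve eta = (1 - rho)*sqrt(rho/(2 - rho)) and record the
# worst violation of sin(2*alpha) = -sin(2*beta)
worst = 0.0
for i in range(1, 100):
    rb = i / 100.0
    eb = (1 - rb) * math.sqrt(rb / (2 - rb))
    worst = max(worst, abs(sin2alpha(rb, eb) + sin2beta(rb, eb)))
```

The identity holds to machine precision over the whole scan.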
As an example consider $B^0\to \psi K_S$ $(CP=-1)$ and $B^0 \to \pi^+\pi^-$ $(CP=1)$ for which the time-integrated asymmetries are \begin{equation}\label{113c} A_{CP}(\psi K_S)=-\sin(2\beta) \frac{x_d}{1+x_d^2},~~~~~~~~~~ A_{CP}(\pi^+\pi^-)=-\sin(2\alpha) \frac{x_d}{1+x_d^2} \end{equation} Generally these two asymmetries could differ in the standard model both in sign and magnitude. In a superweak model, however, these asymmetries differ only by the sign of the CP-parity of the final state. Yet, as emphasized by Winstein \cite{winstein:91}, if $\sin 2\beta =-\sin 2\alpha$ it will be impossible to distinguish the standard model result from superweak models. This will happen for any $\bar\varrho >0$ and $\bar\eta$ given by \cite{winstein:91} \begin{equation}\label{113d} \bar\eta=(1-\bar\varrho) \sqrt{\frac{\bar\varrho}{2-\bar\varrho}} \end{equation} as can be easily verified using (\ref{2.89}) and (\ref{2.90}). Consequently $(\bar\varrho,\bar\eta)$ must lie sufficiently far away from the curve of eq.~(\ref{113d}) in order to rule out the superweak scenario on the basis of $B^0$-decays to CP eigenstates. We will investigate in section \ref{sec:pheno} whether this is likely to happen in future experiments. \newsection{Analytic Results} We now give a list of results, following from the formulae above, which can be presented in analytic form. Some of these results have already appeared in the literature. \subsection{Lower Bounds on $m_t$ and $B_K$ from $\varepsilon_K$} The hyperbola (\ref{100}) intersects the circle given by (\ref{2.94}) in two points. It is usually stated in the literature that one of these points corresponds to $\bar\varrho < 0$ and the other one to $\bar\varrho > 0$. For most values of $A$, $B_K$ and $m_t$ this is in fact true. However, with decreasing $A$, $B_K$ and $m_t$, the hyperbola (\ref{100}) moves away from the origin of the $(\bar\varrho, \bar\eta)$ plane and both solutions can appear for $\bar\varrho < 0$.
For sufficiently low values of these parameters the hyperbola and the circle only touch each other at a small negative value of $\bar\varrho$. In this way a lower bound for $m_t$ as a function of $B_K$, $V_{cb}$ and $\mid V_{ub}/V_{cb} \mid$ can be found. With an accurate approximation for $S(x_t)$ \begin{equation}\label{114} S(x_t) = 0.784 \cdot x_t^{0.76} \end{equation} one can derive an analytic lower bound on $m_t$ \cite{buras:93}, which to an accuracy of 2\% reproduces the exact numerical result. It is given by \begin{equation}\label{115} (m_t)_{min} = M_W \left[ \frac{1}{2 A^2} \left(\frac{1}{A^2 B_K R_b} - 1.2 \right) \right]^{0.658} \end{equation} A detailed analysis of (\ref{115}) can be found in ref.~\cite{buras:93}. Here we want to stress that once $m_t$ has been determined, the same analysis gives the minimal value of $B_K$ consistent with measured $\varepsilon_K$ as a function of $\mid V_{cb} \mid$ and $\mid V_{ub} / V_{cb} \mid$. We find \begin{equation}\label{116} (B_K)_{min} = \left[ A^2 R_b \left( 2 x_t^{0.76} A^2 + 1.2 \right) \right]^{-1} \end{equation} Choosing $m_t = 180~GeV$ we show $(B_K)_{min}$ as a function of $\mid V_{cb} \mid$ for different values of $\mid V_{ub} / V_{cb} \mid$ in fig.~\ref{fig:bkmin}. For lower values of $m_t$ the bound is stronger. We observe that for $m_t \leq 180~GeV$, $\mid V_{ub}/V_{cb} \mid \leq 0.10$ and $\mid V_{cb} \mid \leq 0.040$ only values $B_K > 0.55$ are consistent with $\varepsilon_{K}$ in the framework of the standard model. \fig{ \begin{figure}[htb] }{ \vspace{0.05in} \centerline{ \epsfysize=5in \rotate[r]{ \epsffile{bkmin.ps} } } \vspace{0.05in} }{ \caption[]{\small\sl Lower bound on $B_K$ for $\mid V_{ub}/V_{cb} \mid = 0.06$ (a), $\mid V_{ub}/V_{cb} \mid = 0.08$ (b), $\mid V_{ub}/ V_{cb} \mid = 0.10$ (c) from $\varepsilon_K$ and $m_t < 180\,GeV$. \label{fig:bkmin}} \end{figure} } \subsection{Upper Bound on $\sin(2 \beta)$} For the present range of $R_b$ the angle $\beta$ is smaller than $45^o$. 
This allows one to derive an upper bound on $\sin(2 \beta)$, which depends only on $R_b$. As shown in fig.~\ref{fig:betamax}, it is found to be \begin{equation}\label{117} (\sin(2 \beta))_{max} = 2 R_b \sqrt{1- R_b^2} \end{equation} \fig{ \begin{figure}[htb] }{ \vspace{0.05in} \centerline{ \epsfysize=2in \epsffile{betamax.ps} } \vspace{0.00in} }{ \caption[]{ \small\sl Determination of $(\sin(2\beta))_{max}$. \label{fig:betamax}} \end{figure} } This implies \begin{equation}\label{118} (\sin(2 \beta))_{max} = \left\{ \begin{array}{r} 0.795 \quad (\beta_{max} = 26.3^o) \qquad \mid V_{ub}/V_{cb}\mid~ =~ 0.10\\ 0.663 \quad (\beta_{max} = 20.8^o) \qquad \mid V_{ub}/V_{cb}\mid~ =~ 0.08\\ 0.513 \quad (\beta_{max} = 15.4^o) \qquad \mid V_{ub}/V_{cb}\mid~ =~ 0.06 \end{array}\right. \end{equation} A lower bound on $\sin(2 \beta)$ can only be found numerically as it depends on $\bar\eta$. The result can be inferred from our numerical analysis in section~\ref{sec:pheno}. \subsection{$\sin(2 \beta)$ from $\varepsilon_K$ and $B^o-\bar B^o$ Mixing} Combining (\ref{100}) and (\ref{106}) one can derive an analytic formula for $\sin(2 \beta)$. We find \begin{equation}\label{119} \sin(2 \beta) = \frac{1}{1.33 \cdot A^2 \eta_2 R_0^2} \left[ \frac{0.223}{A^2 B_K} - \bar\eta P_0(\varepsilon) \right]. \end{equation} $ P_0(\varepsilon)$ is weakly dependent on $m_t$ and for $150 \leq m_t \leq 180 ~GeV$ one has $P_0(\varepsilon) \approx 0.26 \pm 0.02$. As $\bar\eta \leq 0.45$ for $\mid V_{ub}/V_{cb} \mid \leq 0.1$, the first term in parentheses is generally larger than the second term by a factor of 2--3. Since this dominant term is independent of $m_t$, the values for $\sin(2 \beta)$ extracted from $\varepsilon_K$ and $B^o-\bar B^o$ mixing show only a weak dependence on $m_t$, as stressed in particular in ref.~\cite{rosner:00}.
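The analytic bounds of this section are straightforward to evaluate. The sketch below computes $(B_K)_{min}$ of eq.~(\ref{116}) and $(\sin(2\beta))_{max}$ of eq.~(\ref{117}), reproducing the entries of eq.~(\ref{118}); the values $\lambda=0.22$ and $M_W=80.2\,GeV$ are assumptions:

```python
import math

MW, lam = 80.2, 0.22                      # assumed inputs

def Rb_of(vub_over_vcb):
    """R_b from eq. (2.94)."""
    return (1 - lam**2 / 2) * vub_over_vcb / lam

def BK_min(mt, Vcb, vub_over_vcb):
    """(B_K)_min of eq. (116)."""
    A = Vcb / lam**2
    xt = (mt / MW)**2
    return 1.0 / (A**2 * Rb_of(vub_over_vcb) * (2 * xt**0.76 * A**2 + 1.2))

def sin2beta_max(vub_over_vcb):
    """(sin 2 beta)_max of eq. (117)."""
    Rb = Rb_of(vub_over_vcb)
    return 2 * Rb * math.sqrt(1 - Rb**2)

# Reproduces the three numbers quoted in eq. (118)
vals = {r: sin2beta_max(r) for r in (0.06, 0.08, 0.10)}
# Corner of the region quoted in the text: m_t = 180 GeV,
# |V_cb| = 0.040, |V_ub/V_cb| = 0.10
bk = BK_min(180.0, 0.040, 0.10)
```

The weakest $B_K$ bound in the quoted parameter region indeed lies just above $0.55$.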
\subsection{Ambiguity in $\bar\varrho$} It is well known that in the analysis of $\varepsilon_K$ with fixed $\mid V_{ub}/V_{cb} \mid$ and $V_{cb}$ one gets two solutions for $(\bar\varrho,\bar\eta)$, with $\bar\eta$ being larger for the solution with larger $\bar\varrho$. The resolution of this ambiguity in $\bar\varrho$ is very important for CP-violating decays $K_L^0 \to \pi^0 e^+ e^-$, $K_L^0 \to \pi^0 \nu \bar\nu$ and the CP-asymmetries in B-decays governed by $\sin(2 \beta)$, because $BR(K_L^0 \to \pi^0 e^+ e^-)$, $BR(K_L^0 \to \pi^0 \nu \bar\nu)$ and $\sin(2 \beta)$ are larger for the solution with larger $\bar\varrho$. The preferred solution in searches for CP violation corresponds in most cases to $\bar\varrho \geq 0$. This should be contrasted with any CP conserving transition sensitive to $\mid V_{td} \mid$, such as $B^o-\bar B^o$ mixing, $K^+ \to \pi^+ \nu \bar\nu$, $K_L \to \mu \bar\mu$, $B \to \mu \bar\mu$, which for given values of $m_t$, $F_B \sqrt{B_B}$, $V_{cb}$ and $x_d$ determines the value of $\bar\varrho$ uniquely. Although several analyses of this determination have been presented in the literature (see in particular ref.~\cite{harrisrosner:92}), we think it is useful to have simple analytic expressions that help to answer immediately whether the favored solution $\bar\varrho \geq 0$ is chosen. \subsubsection{$B^o-\bar B^o$ Mixing} We require that $R_t \leq \sqrt{1+R_b^2}$. Then for a given value of $R_b$ one gets a positive $\bar\varrho$.
Using the analytic formula (\ref{114}) and introducing the ``scaling'' variable \cite{burasharlander:92} \begin{equation}\label{120} z(B^o_d) = m_t \left[\frac{\kappa}{0.038} \right]^{1.32} \end{equation} we find, using (\ref{106}) and (\ref{107}), the condition \begin{equation}\label{121} F_{B_d} \sqrt{B_{B_d}} \geq \sqrt\frac{0.55}{\eta_B} \sqrt\frac{x_d}{0.72} \left[ \frac{179 ~GeV}{z(B^o_d)} \right]^{0.76} \cdot \frac{200 ~MeV}{\sqrt{1+R_b^2}} \end{equation} When this inequality is satisfied the favored solution with $\bar\varrho \geq 0$ is bound to be chosen. Setting $\eta_B = 0.55$ we plot in fig.~\ref{fig:fbmax} the smallest value of $F_B \sqrt{B_B}$ consistent with (\ref{121}) as a function of $z(B^o_d)$ for different values of $|V_{ub}/V_{cb}|$ and $x_d = 0.72$. We observe that for $z(B^o_d) \leq 180~GeV$ one needs $F_{B_d} \sqrt{B_{B_d}} \geq 180~MeV$ in order to have $\bar\varrho \geq 0$. Using (\ref{107c}) we can also find a minimal value for $x_s$ consistent with $R_t \leq \sqrt{1+R_b^2}$. One gets to a very good approximation \begin{equation}\label{121a} (x_s)_{min} = \frac{x_d}{R_{ds} \lambda^2} \cdot \frac{1}{1+R_b^2} \end{equation} For $R_{ds} = 1$ and $R_b = 1/3$ we have $(x_s)_{min} \simeq 18.6 \cdot x_d$. \fig{ \begin{figure}[htb] }{ \vspace{0.05in} \centerline{ \epsfysize=5in \rotate[r]{ \epsffile{fbmax.ps} } } \vspace{0.02in} }{ \caption[]{\small\sl Lower bound on $F_{B_d} \sqrt{B_{B_d}}$ for $\mid V_{ub}/V_{cb} \mid = 0.06$ (a), $\mid V_{ub}/V_{cb} \mid = 0.08$ (b), $\mid V_{ub}/ V_{cb} \mid = 0.10$ (c) necessary for $\bar\varrho \ge 0$. \label{fig:fbmax}} \end{figure} } \subsubsection{$K^+ \to \pi^+ \nu \bar\nu$} An analogous condition can be derived from the decay $K^+ \to \pi^+ \nu \bar\nu$ by requiring $\sqrt{{\varrho_0}^2 + (R_b \sigma)^2} \geq r_0$ with $\varrho_0$ and $r_0$ defined in (\ref{110}) and (\ref{111}), respectively.
Neglecting the tiny contribution of the second term in (\ref{111}), using the formula \begin{equation}\label{122} X(x_t) = 0.65 \cdot x_t^{0.575} \end{equation} which reproduces the function $X(x_t)$ to an accuracy of 0.5\% for the range of $m_t$ considered in this paper, and introducing the variable \cite{burasharlander:92} \begin{equation}\label{123} z(K^+) = m_t \cdot \left[ \frac{V_{cb}}{0.038} \right]^{1.74} \end{equation} we find the condition \begin{eqnarray}\label{124} BR(K^+ \to \pi^+ \nu \bar\nu) \leq 4.64 \cdot 10^{-11} \frac{1}{\sigma} & \left\{ \left[0.40 \left( \frac{z(K^+)}{M_W} \right)^{1.15} + \bar P_0(K^+) \right]^2 \right. \\ & \left. + 0.16 \left(\frac{z(K^+)}{M_W} \right)^{2.30} (R_b\,\sigma)^2 \right\} \nonumber \end{eqnarray} This bound is shown in fig.~\ref{fig:brkmax} as a function of the variable $z(K^+)$. Although this solution is welcome in searches for CP violation, the experimental bound on $BR(K^+ \to \pi^+ \nu \bar\nu)$, which could be reached in the coming years \cite{kuno:92}, will most probably lie above it. \fig{ \begin{figure}[htb] }{ \vspace{0.05in} \centerline{ \epsfysize=5in \rotate[r]{ \epsffile{brkmax.ps} } } \vspace{0.03in} }{ \caption[]{\small\sl Upper bound on $BR(K^+ \to \pi^+ \nu \bar\nu)$ for $\mid V_{ub}/V_{cb} \mid = 0.06$ (a), $\mid V_{ub}/V_{cb} \mid = 0.08$ (b), $\mid V_{ub}/ V_{cb} \mid = 0.10$ (c) necessary for $\bar\varrho \ge 0$.
\label{fig:brkmax}} \end{figure} } \newsection{Phenomenological Analysis}\label{sec:pheno} \subsection{First Look} \label{subsec:first} In order to describe the situation of 1994 after a possible top quark discovery we first make the following choices for the relevant parameters: \bigskip \underline{Range I} \begin{equation}\label{200} \begin{array}{rclrcl} \left| V_{cb} \right| & = & 0.038 \pm 0.004 & \mid V_{ub}/V_{cb} \mid & = & 0.08 \pm 0.02 \\ B_K & = & 0.7 \pm 0.2 & \sqrt{B_{B_d}} F_{B_d} & = & (200 \pm 30)~MeV \\ x_d & = & 0.72 \pm 0.08 & m_t & = & (165 \pm 15)~GeV \\ \end{array} \end{equation} The values of $\mid V_{cb} \mid$ and $\mid V_{ub}/V_{cb} \mid$ given here are consistent with the recent summary in \cite{stone:93}. The values of $B_K$ cover comfortably the range of most recent lattice $(B_K = 0.825 \pm 0.027)$ \cite{kilcupetal:93} and 1/N $(B_K = 0.7 \pm 0.1)$ \cite{bardeenetal:88} results. They also touch the range of values obtained in the hadron duality approach $(B_K = 0.4 \pm 0.1)$ \cite{pradesetal:91}. $\sqrt{B_{B_d}} F_{B_d}$ given here is in the ballpark of various lattice and QCD sum rule estimates \cite{kronfeldmackenzie:93}. $x_d$ is in accordance with the most recent average of CLEO and ARGUS data \cite{cassel:93} and is compatible with the LEP data. We set $\tau_B = 1.5~ps$ \cite{lueth:93} in the whole analysis because the existing small error on $\tau_B$ $(\Delta \tau_B = \pm 0.04~ps)$ has only a very small impact on our numerical results. The choice for $m_t$ certainly requires an explanation. The high precision electroweak studies typically give $m_t \simeq 165 \pm 30~GeV$ in the standard model, where the central value corresponds to $m_H = 300~GeV$ \cite{hollik:93}. Since we work in the standard model we expect that $m_t$ will be found in this range. A top quark discovery at TEVATRON will certainly narrow this range by at least a factor of two.
It is of interest to see what impact this would have on the phenomenology considered here. At this level of accuracy one has to state how $m_t$ is defined. The QCD corrections to $\varepsilon_K$, $B^o-\bar B^o$ mixing and $K^+ \to \pi^+ \nu \bar\nu$ used here correspond to the running top quark mass in the $\overline{MS}$ scheme evaluated at $m_t$, i.e.\ $m_t$ in (\ref{200}) and in all formulae of this paper stands for $\overline{m_t} (m_t)$. The physical top quark mass, defined as the pole of the renormalized propagator, is then given by \begin{equation}\label{201} m_t^{phys}(m_t) = m_t \left[ 1+ \frac{4 \alpha_s (m_t)}{3 \pi} \right] \end{equation} For the range of $m_t$ considered here $m_t^{phys}$ is higher than $m_t$ by $7 \pm 1~GeV$. For $\Lambda_{\overline{MS}}$ and $m_c$ affecting $BR(K^+ \to \pi^+ \nu \bar\nu)$ we use \begin{equation}\label{Lmsmcrange1} \Lambda_{\overline{MS}} = (0.275 \pm\,0.075)\,GeV \qquad m_c \equiv \overline{m}_c(m_c) = (1.3 \pm 0.05)\,GeV \end{equation} \select{ \medskip \hrule \smallskip \noindent {\bf Note of caution to the reader of the preprint version:} \\ Due to limitations of our plot program the labels in figs.~\ref{fig:utriag}, \ref{fig:sinesab}, \ref{fig:sinesbc} and \ref{fig:sin2bbr} read $\varrho$ and $\eta$ but in fact should read $\bar\varrho$ and $\bar\eta$. \smallskip \hrule \medskip } In fig.~\ref{fig:utriag}\,(I) we show the resulting unitarity triangle. To this end the analyses of $\varepsilon_K$ and of $B_d^o-\bar{B_d^o}$~mixing have been used. In tab.~\ref{tab:rangefirst} we show the resulting ranges for $\delta$, $\sin(2\phi_i)$, $BR(K^+ \to \pi^+ \nu \bar\nu)$, $ \mid V_{td} \mid$ and $x_s$ corresponding to the choice of the parameters in (\ref{200}). In calculating $x_s$ we have set $R_{ds} = 1$. We observe: \begin{itemize} \item The uncertainty in the value of $\sin(2\beta)$ is moderate. We find $\sin(2\beta) \simeq 0.59 \pm 0.21$. Consequently a large asymmetry $A_{CP}(\psi K_s)$ is expected.
In particular $\sin(2\beta)~\geq~0.38$. \item The uncertainties in $\sin(2\alpha)$ and in $\sin(2\gamma)$ are huge. \item Similarly the uncertainties in the predicted values of $BR(K^+ \to \pi^+ \nu \bar\nu)$, $ \mid V_{td} \mid$ and $x_s$ are large. \end{itemize} \begin{table}[htb] \caption[]{\small\sl Ranges for the scan of basic parameters for range I as of eq.~(\ref{200}), split according to the two different solutions for the CKM phase $\delta$ in the first and second quadrant. } \vspace{0.02in} \begin{center} \begin{tabular}{|c||c|c||c|c|} \hline \multicolumn{1}{|c||}{ } & \multicolumn{2}{c||}{1. Quadrant} & \multicolumn{2}{c|}{2. Quadrant} \\ \hline & Min & Max & Min & Max \\ \hline \hline $\delta$ & 44.5 & 90.0 & 90.0 & 135.9 \\ \hline $\sin(2 \alpha)$ & -0.67 & 0.74 & 0.50 & 1.00 \\ \hline $\sin(2 \beta)$ & 0.50 & 0.80 & 0.38 & 0.74 \\ \hline $\sin(2 \gamma)$ & 0 & 1.00 & -1.00 & 0 \\ \hline $\mid V_{td} \mid\cdot 10^3$ & 6.9 & 10.0 & 8.6 & 11.8 \\ \hline $x_s$ & 10.8 & 24.2 & 7.7 & 14.4 \\ \hline $BR(K^+ \to \pi^+ \nu \bar\nu) \cdot 10^{10}$ & 0.62 & 1.39 & 0.67 & 1.46 \\ \hline \end{tabular} \end{center} \label{tab:rangefirst} \end{table} \fig{ \begin{figure}[htb] }{ \vspace{0.05in} \centerline{ \epsfysize=3in \rotate[r]{ \epsffile{utriag.ps} } } \vspace{0.05in} }{ \caption[]{\small\sl Unitarity triangle in the $(\bar\varrho,\bar\eta)$ plane determined by $\varepsilon_{K}$, $\mid V_{ub}/V_{cb} \mid$ and $x_d$ using ranges (I)--(III) as of eqs.~(\ref{200}), (\ref{210}) and (\ref{211}), respectively. \label{fig:utriag}} \end{figure} } \subsection{A Look in the Future} \label{subsec:look} It is to be expected that the uncertainties in (\ref{200}) will be reduced in the next five years through the improved determinations of $\mid V_{cb} \mid$ and $\mid V_{ub}/V_{cb} \mid$ at CLEO II \cite{cassel:93}, the improved measurements of $x_d$ and the discovery of the top quark.
We also anticipate that the extensive efforts of theorists, in particular using the lattice methods, will considerably reduce the errors on $B_K$ and $\sqrt{B_B} F_B$. We consider the following ranges of parameters: \bigskip \underline{Range II} \begin{equation}\label{210} \begin{array}{rclrcl} \left| V_{cb} \right| & = & 0.040 \pm 0.002 & \mid V_{ub}/V_{cb} \mid& = & 0.08 \pm 0.01 \\ B_K & = & 0.75 \pm 0.07 & \sqrt{B_{B_d}} F_{B_d} & = & (185 \pm 15)~MeV \\ x_d & = & 0.72 \pm 0.04 & m_t & = & (170 \pm 7)~GeV \\ \end{array} \end{equation} \underline{Range III} \begin{equation}\label{211} \begin{array}{rclrcl} \left| V_{cb} \right| & = & 0.040 \pm 0.001 & \mid V_{ub}/V_{cb} \mid & = & 0.08 \pm 0.005 \\ B_K & = & 0.75 \pm 0.05 & \sqrt{B_{B_d}} F_{B_d} & = & (185 \pm 10)~MeV \\ x_d & = & 0.72 \pm 0.04 & m_t & = & (170 \pm 5)~GeV \\ \end{array} \end{equation} For $\Lambda_{\overline{MS}}$ and $m_c$ we use \begin{equation}\label{Lmsmcrange2} \Lambda_{\overline{MS}} = 0.3\,GeV \qquad m_c = 1.3\,GeV \end{equation} For each range we repeat the analysis of subsection \ref{subsec:first}. The results are given in fig.~\ref{fig:utriag}\,(II) and (III) and tabs.~\ref{tab:rangeb} and \ref{tab:rangec}. \begin{table}[htb] \caption[]{ \small\sl Same as in tab.~\ref{tab:rangefirst} but for range~II as of eq.~(\ref{210}). } \vspace{0.05in} \begin{center} \begin{tabular}{|c||c|c||c|c|} \hline \multicolumn{1}{|c||}{ } & \multicolumn{2}{c||}{1. Quadrant} & \multicolumn{2}{c|}{2. 
Quadrant} \\ \hline & Min & Max & Min & Max \\ \hline \hline $\delta$ & 60.9 & 90.0 & 90.0 & 122.5 \\ \hline $\sin(2 \alpha)$ & -0.30 & 0.69 & 0.57 & 1.00 \\ \hline $\sin(2 \beta)$ & 0.57 & 0.73 & 0.46 & 0.69 \\ \hline $\sin(2 \gamma)$ & 0 & 0.85 & -0.91 & 0 \\ \hline $\mid V_{td} \mid\cdot 10^3$ & 8.1 & 9.8 & 9.0 & 10.8 \\ \hline $x_s$ & 11.2 & 17.6 & 9.1 & 13.0 \\ \hline $BR(K^+ \to \pi^+ \nu \bar\nu) \cdot 10^{10}$ & 0.83 & 1.22 & 0.86 & 1.3 \\ \hline \end{tabular} \end{center} \label{tab:rangeb} \end{table} \begin{table}[htb] \caption[]{ \small\sl Same as in tab.~\ref{tab:rangefirst} but for range~III as of eq.~(\ref{211}). } \vspace{0.05in} \begin{center} \begin{tabular}{|c||c|c||c|c|} \hline \multicolumn{1}{|c||}{ } & \multicolumn{2}{c||}{1. Quadrant} & \multicolumn{2}{c|}{2. Quadrant} \\ \hline & Min & Max & Min & Max \\ \hline \hline $\delta$ & 69.0 & 90.0 & 90.0 & 113.7 \\ \hline $\sin(2 \alpha)$ & 0.01 & 0.66 & 0.60 & 0.99 \\ \hline $\sin(2 \beta)$ & 0.60 & 0.70 & 0.52 & 0.66 \\ \hline $\sin(2 \gamma)$ & 0 & 0.67 & -0.69 & 0 \\ \hline $\mid V_{td} \mid\cdot 10^3$ & 8.4 & 9.6 & 9.1 & 10.4 \\ \hline $x_s$ & 11.9 & 15.6 & 10.1 & 13.3 \\ \hline $BR(K^+ \to \pi^+ \nu \bar\nu) \cdot 10^{10}$ & 0.88 & 1.12 & 0.92 & 1.18 \\ \hline \end{tabular} \end{center} \label{tab:rangec} \end{table} We observe: \begin{itemize} \item The uncertainty in the value of $\sin(2\beta)$ has been considerably reduced. We find \begin{equation}\label{212} \sin(2 \beta) = \left\{ \begin{array}{rc} 0.60 \pm 0.14 & (\rm{range~II}) \\ 0.61 \pm 0.09 & (\rm{range~III}) \end{array}\right. \end{equation} \item The uncertainties in $\sin(2\alpha)$ and $\sin(2\gamma)$, although somewhat reduced, remain very large. \item For $\mid V_{td} \mid$, $x_s$ and $BR(K^+ \to \pi^+ \nu \bar\nu)$ we find \begin{equation}\label{213} \mid V_{td} \mid = \left\{ \begin{array}{rc} (9.5 \pm 1.4)\cdot 10^{-3} & (\rm{range~II}) \\ (9.4 \pm 1.0)\cdot 10^{-3} & (\rm{range~III}) \end{array}\right.
\end{equation} % \begin{equation}\label{214} x_s= \left\{ \begin{array}{rc} 13.3 \pm 4.3 & (\rm{range~II}) \\ 12.9 \pm 2.8 & (\rm{range~III}) \end{array}\right. \end{equation} % \begin{equation}\label{215} BR(K^+ \to \pi^+ \nu \bar\nu) = \left\{ \begin{array}{rc} (1.07 \pm 0.24)\cdot 10^{-10} & (\rm{range~II}) \\ (1.03 \pm 0.15)\cdot 10^{-10} & (\rm{range~III}) \end{array}\right. \end{equation} \end{itemize} This exercise implies that if the accuracy of the various parameters given in (\ref{210}) and (\ref{211}) is achieved, the determination of $\mid V_{td} \mid$ and the predictions for $\sin(2\beta)$ and $BR(K^+ \to \pi^+ \nu \bar\nu)$ become quite accurate. A sizable uncertainty in $x_s$ remains, however. Another important message from this analysis is the impossibility of a precise determination of $\sin(2\alpha)$ and $\sin(2\gamma)$ on the basis of $\varepsilon_{K}$, $B^o - \bar{B^o}$ mixing, $|V_{cb}|$ and $|V_{ub}/V_{cb}|$ alone. Although the great sensitivity of $\sin(2\alpha)$ and $\sin(2\gamma)$ to various parameters has already been stressed by several authors, in particular in refs.~\cite{alilondon:93,dibdunietzgilman:90,gilmannir:90,lusignoli:92}, our analysis shows that even with the improved values of the parameters in question, as given in (\ref{210}) and (\ref{211}), a precise determination of $\sin(2\alpha)$ and $\sin(2\gamma)$ should not be expected in this millennium. The fact that $\sin(2\beta)$ can be determined much more easily than $\sin(2\alpha)$ and $\sin(2\gamma)$ is easy to understand. Since $R_t$ is generally larger than $R_b$ by at least a factor of two, the angle $\beta$ is much less sensitive to changes in the position of the point $A = (\bar\varrho,\bar\eta)$ in the unitarity triangle than the remaining two angles. \subsection{The Impact of $BR(K^+ \to \pi^+ \nu \bar\nu)$ and $x_d/x_s$} $BR(K^+ \to \pi^+ \nu \bar\nu)$ and $x_d/x_s$ determine $ \mid V_{td} \mid$ and $R_t$.
If our expectations for the ranges discussed above are correct, we should be able to make a rather accurate prediction for $BR(K^+ \to \pi^+ \nu \bar\nu)$ using the analysis of $\varepsilon_{K}$ and of $B_d^o - \bar{B_d^o}$ mixing. Measuring $BR(K^+ \to \pi^+ \nu \bar\nu)$ to similar accuracy would either confirm the standard model predictions or indicate some physics beyond the standard model. We infer from tabs.~\ref{tab:rangeb} and \ref{tab:rangec} that measurements of $BR(K^+ \to \pi^+ \nu \bar\nu)$ with an accuracy of $\pm 10\%$ would be very useful in this respect. The accuracy of the predictions for $x_s$ is poorer, as seen in (\ref{214}). A measurement of $x_s$ at a $\pm 10\%$ level will therefore have a considerable impact on the determination of the CKM parameters, and in particular of $R_t$ (see (\ref{107c})), provided $R_{ds}$ is known within $10\%$ accuracy. A numerical exercise is presented in subsection~\ref{subsec:mtop}. \subsection{The Impact of CP-asymmetries in B-decays} \label{subsec:impact} Measuring the CP-asymmetries in neutral B-decays will give the definitive answer as to whether the CKM description of CP violation is correct. Assuming that this is in fact the case, we want to investigate the impact of the measurements of $\sin(2\phi_i)$ on the determination of the unitarity triangle. Since in the rescaled triangle of fig.~\ref{fig:triangle} one side is known, it suffices to measure two angles to determine the triangle completely. It is well known that the measurement of the CP-asymmetry in the decay $B^o \to \psi K_s $ should provide a determination of $\sin(2\beta)$ without any theoretical uncertainties. One expects that prior to LHC experiments the error on $\sin(2\beta)$ should amount roughly to $\Delta \sin(2\beta) = \pm 0.06$ \cite{cassel:93,babar:93,albrechtetal:92}. The measurement of $\sin(2\alpha)$ is more difficult. It requires in addition the measurement of several channels in order to eliminate the penguin contributions.
An error $\Delta \sin(2\alpha) = \pm 0.10$ prior to LHC could however be achieved at a SLAC B-factory \cite{babar:93}. In fig.~\ref{fig:sinesab} we show the impact of such measurements and also plot the curve (\ref{113d}) which represents superweak models. Specifically we take \begin{equation}\label{220} \sin(2 \beta) = \left\{ \begin{array}{r} 0.60 \pm 0.18 \qquad (\rm{a}) \\ 0.60 \pm 0.06 \qquad (\rm{b}) \end{array}\right. \end{equation} as an illustration of two measurements of $\sin(2\beta)$ with two different accuracies. Next we take the following three choices for $\sin(2\alpha)$ \begin{equation}\label{221} \sin(2 \alpha) = \left\{ \begin{array}{rc} -0.20 \pm 0.10 & (\rm{I}) \\ 0.10 \pm 0.10 & (\rm{II}) \\ 0.70 \pm 0.10 & (\rm{III}) \end{array}\right. \end{equation} \fig{ \begin{figure}[htb] }{ \vspace{0.05in} \centerline{ \epsfysize=3in \rotate[r]{ \epsffile{sin2abtot.ps} } } \vspace{0.05in} }{ \caption[]{\small\sl Determination of the unitarity triangle in the $(\bar\varrho,\bar\eta)$ plane by measuring $\sin(2\beta)$ and $\sin(2\alpha)$ as of eqs.~(\ref{220}) and (\ref{221}), respectively. For $\sin(2\alpha)$ we always find two solutions in $(\bar\varrho, \bar\eta)$ and for $\sin(2\beta)$ we only use the solution consistent with $\mid V_{ub}/V_{cb} \mid \leq 0.1$. \label{fig:sinesab}} \end{figure} } In fig.~\ref{fig:sinesbc} we replace the impact of $\sin(2\alpha)$ by the impact of a measurement of $\sin(2\gamma)$ keeping $\sin(2\beta)$ unchanged. We choose the following values: \begin{equation}\label{222} \sin(2 \gamma) = \left\{ \begin{array}{rc} -0.50 \pm 0.10 & (\rm{I}) \\ 0 \pm 0.10 & (\rm{II}) \\ 0.50 \pm 0.10 & (\rm{III}) \end{array}\right. 
\end{equation} \fig{ \begin{figure}[htb] }{ \vspace{0.05in} \centerline{ \epsfysize=3in \rotate[r]{ \epsffile{sin2bctot.ps} } } \vspace{0.05in} }{ \caption[]{\small\sl Determination of the unitarity triangle in the $(\bar\varrho,\bar\eta)$ plane by measuring $\sin(2\beta)$ and $\sin(2\gamma)$ as of eqs.~(\ref{220}) and (\ref{222}), respectively. For $\sin(2\gamma)$ we always find two solutions in $(\bar\varrho, \bar\eta)$ and for $\sin(2\beta)$ we only use the solution consistent with $\mid V_{ub}/V_{cb} \mid \leq 0.1$. \label{fig:sinesbc}} \end{figure} } We observe that the measurement of $\sin(2\alpha)$ or $\sin(2\gamma)$ in conjunction with $\sin(2\beta)$ at the expected precision will have a large impact on the accuracy of the determination of the unitarity triangle and of the CKM parameters. In order to show this more explicitly we take as an example: \begin{equation}\label{223} \sin(2\beta) = 0.60 \pm 0.06 \qquad \sin(2\alpha) = 0.10 \pm 0.10 \end{equation} and give in tab.~\ref{ranges2as2b} the predicted ranges for $\delta$, $\sin(2\gamma)$, $BR(K^+ \to \pi^+ \nu \bar\nu)$, $\mid V_{td} \mid$ and $x_s$ corresponding to the values of $\sin(2\beta)$ and $\sin(2\alpha)$ given in (\ref{223}) and $ \mid V_{cb} \mid$, $x_d$ and $m_t$ of (\ref{210}). We use only the solution of $\sin(2\beta)$ consistent with $\mid V_{ub}/V_{cb} \mid \leq 0.1$. \begin{table}[htb] \caption[]{ \small\sl Predicted ranges for various quantities calculated by restricting $\sin(2\alpha)$ and $\sin(2\beta)$ to the ranges of (\ref{223}) and using $ \mid V_{cb} \mid$, $x_d$ and $m_t$ of (\ref{210}). There is no allowed solution for the second quadrant. 
} \vspace{0.02in} \begin{center} \begin{tabular}{|c||c|c|} \hline & Min & Max \\ \hline \hline $\delta$ & 69.5 & 77.8 \\ \hline $\sin(2 \gamma)$ & 0.42 & 0.66 \\ \hline $\mid V_{td} \mid\cdot 10^3$ & 8.4 & 9.1 \\ \hline $x_s$ & 15.0 & 17.5 \\ \hline $BR(K^+ \to \pi^+ \nu \bar\nu) \cdot 10^{10}$ & 0.90 & 1.12 \\ \hline \end{tabular} \end{center} \label{ranges2as2b} \end{table} It should be stressed that this impressive accuracy can only be achieved by measuring $\sin(2\alpha)$ or $\sin(2\gamma)$ in addition to $\sin(2\beta)$. This is easy to understand in view of the fact that the expected accuracy of the measurements of $\sin(2\alpha)$ and $\sin(2\gamma)$ is considerably higher than the corresponding accuracy of the predictions on the basis of $\varepsilon_{K}$, $B^o - \bar{B^o}$ mixing, $\mid V_{ub}/V_{cb} \mid$ and $\mid V_{cb} \mid$ alone. \subsection{$K^+ \to \pi^+ \nu \bar\nu$, $\sin(2\beta)$, $ \mid V_{cb} \mid$, $m_t$ and $x_d/x_s$ } \label{subsec:mtop} We would now like to address the last question posed in the introduction: How well should one measure $BR(K^+ \to \pi^+ \nu \bar\nu)$, $\sin(2\beta)$, $ \mid V_{cb} \mid$, $m_t$ and $x_d/x_s$ in order to obtain an acceptable determination of the CKM matrix on the basis of these five quantities alone? As we stated at the beginning of this paper, $K^+ \to \pi^+ \nu \bar\nu$ and $\sin(2\beta)$ are essentially free of any theoretical uncertainties. $\mid V_{cb} \mid$, on the other hand, is easier to determine than $\mid V_{ub}/V_{cb} \mid$, and once the top quark is discovered $m_t$ should be known relatively well. Finally, $x_d/x_s$ directly determines $R_t$ by means of eq.~(\ref{107c}).
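Before turning to the numerical exercise, it is worth noting how little is needed to pin down the triangle: with one side of the rescaled triangle of fig.~\ref{fig:triangle} fixed to unit length, a value of $\sin(2\beta)$ and of $R_t$ already determine the apex. A minimal sketch with illustrative (hypothetical) input values, taking the solution with $0<\beta<\pi/4$:

```python
import numpy as np

# Reconstruct the apex (rho_bar, eta_bar) of the rescaled unitarity
# triangle from illustrative values of sin(2*beta) and R_t.
sin2beta, Rt = 0.60, 1.0
beta = 0.5 * np.arcsin(sin2beta)      # branch with 0 < beta < pi/4
rho_bar = 1.0 - Rt * np.cos(beta)     # R_t connects (1,0) to the apex
eta_bar = Rt * np.sin(beta)
print(rho_bar, eta_bar)               # ≈ 0.051, 0.316
```

The remaining angles, and hence $\sin(2\alpha)$ and $\sin(2\gamma)$, then follow from the geometry of the triangle.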
In fig.~\ref{fig:sin2bbr} we show the result of this exercise taking (\ref{Lmsmcrange2}) and \begin{equation}\label{230} \sin(2\beta) = 0.60 \pm 0.06 \quad \mid V_{cb} \mid = 0.040 \pm 0.001 \quad m_t = (170 \pm 5)~GeV \end{equation} \begin{equation}\label{230a} BR(K^+ \to \pi^+ \nu \bar\nu) = \left\{ \begin{array}{rc} (1.0 \pm 0.2) \cdot 10^{-10} & (\rm{I}) \\ (1.0 \pm 0.1) \cdot 10^{-10} & (\rm{II}) \end{array}\right. \end{equation} In tab.~\ref{tab:sin2bbr} we give the predicted ranges of various quantities for the two cases considered. \begin{table}[htb] \caption[]{ \small\sl Ranges of various quantities calculated with constraints from eqs.~(\ref{230}) and (\ref{230a}). } \vspace{0.02in} \begin{center} \begin{tabular}{|c||c|c||c|c|} \hline \multicolumn{1}{|c||}{ } & \multicolumn{2}{c||}{(I)} & \multicolumn{2}{c|}{(II)} \\ \hline & Min & Max & Min & Max \\ \hline \hline $\sin(2 \alpha)$ & -0.917 & 0.978 & -0.691 & 0.973 \\ \hline $\sin(2 \gamma)$ & -0.704 & 1.000 & -0.418 & 0.976 \\ \hline $\mid V_{td} \mid\cdot 10^3$ & 6.9 & 10.3 & 7.6 & 9.7 \\ \hline \end{tabular} \end{center} \label{tab:sin2bbr} \end{table} In addition, we show in fig.~\ref{fig:sin2bbr} the result of a possible measurement of $x_d/x_s$ corresponding to $R_t = 1.0 \pm 0.1$. We observe that, provided the expected accuracy of the measurements is achieved, a respectable determination of $\mid V_{td} \mid$ can be obtained this way. Fig.~\ref{fig:sin2bbr} indicates that for the $\Delta V_{cb}$ and $\Delta m_t$ assumed here, $BR(K^+ \to \pi^+ \nu \bar\nu)$ must be measured with a precision of $\pm 10\%$ to be competitive with $\Delta R_t = \pm 10\%$ hopefully extracted one day from $x_d/x_s$. The uncertainty in the predictions for $\sin(2\alpha)$ and $\sin(2\gamma)$ is very large as in the analysis of subsection~\ref{subsec:look}.
\fig{ \begin{figure}[htb] }{ \vspace{0.05in} \centerline{ \epsfysize=5in \rotate[r]{ \epsffile{sin2abBR.ps} } } \vspace{0.05in} }{ \caption[]{\small\sl Allowed ranges in the $(\bar\varrho, \bar\eta)$ plane with constraints from eqs.~(\ref{230}) and (\ref{230a}) for $BR(K^+ \to \pi^+ \nu \bar\nu)$ and $R_t = 1.0 \pm 0.1$. } \label{fig:sin2bbr} \end{figure} } \subsection{$\varepsilon_K$, $B_d^o - \bar{B}_d^o$ Mixing, $\sin(2\beta)$ and $\sin(2\alpha)$} It is useful to combine the results of subsections \ref{subsec:first}, \ref{subsec:look} and \ref{subsec:impact} by making the customary $\sin(2\beta)$ versus $\sin(2\alpha)$ plot \cite{nir:74}. This plot demonstrates very clearly the correlation between $\sin(2\alpha)$ and $\sin(2\beta)$. The allowed ranges for $\sin(2\alpha)$ and $\sin(2\beta)$ corresponding to the choices of the parameters in (\ref{200}), (\ref{210}) and (\ref{211}) are shown in fig.~\ref{fig:sin2bvs2a} together with the results of the independent measurements of $\sin(2\beta) = 0.60 \pm 0.06$ and $\sin(2\alpha)$ given by (\ref{221}). The latter are represented by dark shaded rectangles. The black rectangles illustrate the accuracy of future LHC measurements ($\Delta\sin(2\alpha) = \pm 0.04$, $\Delta\sin(2\beta) = \pm 0.02$) \cite{camilleri:93}. \fig{ \begin{figure}[htb] }{ \vspace{0.05in} \centerline{ \epsfysize=5in \rotate[r]{ \epsffile{sin2bvs2a.ps} } } \vspace{0.05in} }{ \caption[]{\small\sl $\sin(2\alpha)$ versus $\sin(2\beta)$ plot corresponding to the parameter ranges I-IV as of (\ref{200}), (\ref{210}), (\ref{211}) and (\ref{231}) and the dark shaded rectangles given by (\ref{221}) and (\ref{220})\,(b). The black rectangles illustrate the accuracy of future LHC measurements. 
\label{fig:sin2bvs2a}} \end{figure} } We also show the results of an analysis in which the accuracy of various parameters is as in (\ref{210}) but with the central values modified: \bigskip \underline{Range IV} \begin{equation}\label{231} \begin{array}{rclrcl} \left| V_{cb} \right| & = & 0.038 \pm 0.002 & \mid V_{ub}/V_{cb} \mid & = & 0.08 \pm 0.01 \\ B_K & = & 0.70 \pm 0.07 & \sqrt{B_{B_d}} F_{B_d} & = & (185 \pm 15)~MeV \\ x_d & = & 0.72 \pm 0.04 & m_t & = & (165 \pm 7)~GeV \\ \end{array} \end{equation} In addition, we show the prediction of superweak theories, which in this plot is represented by a straight line. There are several interesting features in this plot: \begin{itemize} \item The impact of the direct measurements of $\sin(2\beta)$ and $\sin(2\alpha)$ is clearly visible in this plot. \item In cases III and IV we have examples where the measurements of $\sin(2\alpha)$ are incompatible with the predictions coming from $\varepsilon_{K}$ and $B^o - \bar{B^o}$ mixing. This would be a signal for physics beyond the standard model. The measurement of $\sin(2\alpha)$ is essential for this. \item The case IV shows that for a special choice of parameters the predictions for the asymmetries coming from $\varepsilon_{K}$, $B^o - \bar{B^o}$ mixing, $\mid V_{cb} \mid$ and $\mid V_{ub}/V_{cb} \mid$ can be quite accurate, since these four constraints can then only be satisfied simultaneously in a small area of the $(\bar\varrho, \bar\eta)$~space. Decreasing $\mid V_{cb} \mid$, $\mid V_{ub}/V_{cb} \mid$ and $m_t$ and increasing $F_B$ would make the allowed region in the case IV even smaller. \item We also observe that the future measurements of asymmetries and the improved ranges for the parameters relevant for $\varepsilon_{K}$ and $B^o - \bar{B^o}$ mixing will probably make it possible to rule out the superweak models.
\end{itemize} \newsection{Summary and Conclusions} The top quark discovery and the measurements of $BR(K^+ \to \pi^+ \nu \bar\nu)$, $x_s$ and of CP violating asymmetries in B-decays will play crucial roles in the determination of the CKM parameters and in the tests of the standard model. Similarly, the improvements in the determination of the CKM elements $V_{ub}$ and $V_{cb}$ in tree level B-decays and the improved calculations of the non-perturbative parameters like $B_K$ and $\sqrt{B_B} F_B$ will advance our understanding of weak decay phenomenology. In this paper we have made an excursion into the future, trying to see what one could expect in this field in the coming five to ten years prior to LHC experiments. In the first part of the numerical analysis we have investigated how the top quark discovery together with the improved determinations of $\mid~V_{ub}/V_{cb}~\mid$, $\mid~V_{cb}~\mid$, $B_K$ and $\sqrt{B_B} F_B$ would allow for the determination of the unitarity triangle and more accurate predictions for $K^+ \to \pi^+ \nu \bar\nu$, $B_s^o - \bar{B_s^o}$~mixing and $\sin(2\phi_i)$. Our main findings in this part can be summarized as follows: \begin{itemize} \item We expect that around the year 2000 satisfactory predictions for $\mid V_{td} \mid$, $\sin(2\beta)$ and $BR(K^+ \to \pi^+ \nu \bar\nu)$ should be possible. \item A sizeable uncertainty in $x_s$ and huge uncertainties in $\sin(2\alpha)$ and in $\sin(2\gamma)$ will remain, however. \end{itemize} In the second part of our analysis we have investigated the impact of future measurements of $BR(K^+ \to \pi^+ \nu \bar\nu)$, $x_s$ and $\sin(2\phi_i)$. Our main findings in this second part can be summarized as follows: \begin{itemize} \item The measurements of $\sin(2\alpha)$, $\sin(2\beta)$ and $\sin(2\gamma)$ will have an impressive impact on the determination of the CKM parameters and the tests of the standard model.
\item This impact is further strengthened by combining the constraints considered in the two parts of our analysis as seen most clearly in fig.~\ref{fig:sin2bvs2a}. \item Future LHC B-physics experiments around the year 2005 will refine these studies as evident from fig.~\ref{fig:sin2bvs2a} and ref.~\cite{camilleri:93}. \end{itemize} In our analysis we have concentrated on quantities which have either already been measured $(\varepsilon_K, x_d)$ or quantities which are practically free from theoretical uncertainties such as $x_d/x_s$, $K^+ \to \pi^+ \nu \bar\nu$ and certain asymmetries in B-decays. We stress, however, that the measurements of $\varepsilon^{\prime} / \varepsilon$, $B \to s \gamma$, $K_L \to \mu^+ \mu^{-}$, $K_L \to \pi^o e^+ e^-$, $K_L \to \pi^o \nu \bar\nu$ and other rare decays discussed in the literature are also very important for our understanding of weak decays. In particular, a measurement of a non-zero $Re(\varepsilon^{\prime} / \varepsilon)$, to be expected a few years from now, will most probably give the first signal of direct CP violation. Unfortunately, all these decays are either theoretically less clean than the decays considered here or they are more difficult to measure. Clearly some dramatic improvements in the experimental techniques and in non-perturbative methods could change this picture in the future. We hope that our investigations and the analytic formulae derived in this paper will facilitate the waiting for $m_t$, $K^+ \to \pi^+ \nu \bar\nu$, $B_s^o - \bar{B_s^o}$~mixing and CP asymmetries in B-decays. There is clearly a very exciting time ahead of us. \vskip 1cm \begin{center} {\large\bf Acknowledgement} \end{center} \noindent A.J.~Buras would like to thank the members of the CP-B panel at the Max-Planck-Institut in Munich for exciting discussions. \newpage {\small
\section{Introduction} In the late 19th century, thanks to the outstanding contributions of its founding fathers, Boltzmann, Maxwell and Gibbs, Statistical Thermodynamics was successfully introduced as the general conceptual framework for understanding equilibrium thermodynamic phenomena by means of statistical mechanics~\cite{yanes,kittel}. \newline The early success of the kinetic theory of gases and the discovery of the mean field approach for the derivation of the classical van der Waals equation~\cite{stanley} nurtured the hope that critical phenomena could be described as neatly and effectively as the phenomena away from the critical region. Although second order phase transitions, such as the ergodicity breakdown of real gases at the triple point, have nowadays been completely framed in a rigorous setting~\cite{gallavotti} - partially guided by techniques from field theory~\cite{parisi,kardar} - first order phase transitions, such as gas-liquid phase transitions, turned out to be more elusive. \newline Indeed, in spite of the accuracy of the celebrated van der Waals equation for the description of real gases, the behavior predicted within the critical region, where a real gas turns into a liquid, significantly departs from the experimental observations. Fig.~\ref{fig:G1a} shows the typical isothermal curves of a real gas: above the critical temperature ($T>T_{c}$) the pressure decreases as a strictly monotonic function of the volume; the critical point corresponds to the temperature $T =T_{c}$ at which the critical isotherm develops an inflection point; below the critical temperature ($T<T_{c}$) real isotherms are constant within a certain volume interval, in contrast with the oscillating behavior predicted by the classical van der Waals equation.
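The qualitative behavior just described is easy to verify from the van der Waals equation itself. A minimal check in reduced units (volumes, temperatures and pressures rescaled by their critical values, so that $P = 8T/(3v-1) - 3/v^{2}$; this rescaling is our own convenience and is not used elsewhere in the text):

```python
import numpy as np

# Reduced van der Waals isotherm: P(v,T) = 8T/(3v-1) - 3/v^2,
# with the critical point at v = T = P = 1.
v = np.linspace(0.5, 10.0, 100_000)

def dP_dv(T):
    return -24.0*T/(3.0*v - 1.0)**2 + 6.0/v**3

print(bool((dP_dv(1.1) < 0).all()))   # True: strictly monotonic above T_c
print(bool((dP_dv(0.9) > 0).any()))   # True: oscillation below T_c
```

The sign change of $\partial P/\partial v$ below $T_{c}$ is precisely the multivalued region that the Maxwell construction must repair.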
Interestingly, the behavior of real isothermal curves within the critical region turns out to be intimately connected to the theoretical one via the celebrated Maxwell rule stating that the constant pressure plateau is placed in such a way that it cuts lobes of equal areas on the associated van der Waals isotherm. As is well known, the Maxwell rule corresponds to the condition of thermodynamic equilibrium such that, below the critical temperature, the Gibbs free energy develops two minima of equal value~\cite{Callen}. \\ The remarkable validity, although heuristic, of Maxwell's approach stimulated countless studies aimed at a rigorous statistical mechanical description of first order phase transitions, as for instance in the works of Lebowitz and Penrose~\cite{lebowitz} and van Kampen~\cite{Kampen}, where large classes of pairwise interaction potentials for particles (continuous and hard-sphere-like respectively) are considered, or the work by Griffiths~\cite{griffiths}, which focuses on the analyticity properties of thermodynamic functions. \newline Alternative methods to analyze phase transitions have also been developed based on macroscopic approaches to thermodynamics. For instance, the Landau theory allows one to construct suitable asymptotic expansions of the free energy in the order parameters to obtain information on the critical exponents in the vicinity of the critical point (see e.g.~\cite{Toledano}); the Widom approach relies on the construction of effective free energy functions based on the analysis of their scaling properties~\cite{Widom}. Further recent developments in this direction led to the formulation of the thermodynamic limit as the semiclassical limit of nonlinear conservation laws where phase transitions are associated with shock solutions of a hyperbolic nonlinear PDE in the class of conservation laws~\cite{sumrule,Genovese,moro2,noi}.
Such nonlinear PDEs can be also derived in mean field theories from the analysis of differential identities of the free energy as shown in~\cite{sumrule,Genovese,newman2,brankov1, brankov2} for the Curie-Weiss and the Sherrington-Kirkpatrick models, or from the analysis of thermodynamic Maxwell relations as shown in~\cite{moro1,moro2} for the van der Waals model. Both the microscopic statistical mechanical approach, via the study of correlation functions asymptotics, and the macroscopic thermodynamic approach, based on the expansion of the free energy in the vicinity of the critical point, show the intimate connection with singularity and catastrophe theory - since the very first pioneering contributions by Arnold - and the Hopf bifurcation theory (see e.g.~\cite{Arnold}). \begin{figure}[h] \begin{center} \includegraphics[scale=.35]{vdW.pdf} \caption{Real gas isothermal curves: within the critical region, between the points $A$ and $B$ the behavior predicted by the van der Waals equation (dashed line) departs from the experimentally observed one (solid line). The actual critical isotherms are constructed starting from the theoretical ones via the Maxwell equal areas rule.} \label{fig:G1a} \end{center} \end{figure} Despite the considerable progress made in understanding phase transitions in a variety of contexts, from thermodynamics to classical and quantum field theory \cite{zamponi1,zamponi2}, or complex and biological systems \cite{mezard,MPV}, and the discovery of their intrinsic universality, a global analytical description of phase transitions for the van der Waals gas is still missing.
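For concreteness, the Maxwell construction of fig.~\ref{fig:G1a} is straightforward to carry out numerically. A sketch in reduced units (critical point at $v=T=P=1$), bisecting on the plateau pressure until the two lobes cancel; the pressure bracket below is a hypothetical choice lying between the spinodal pressures of the $T=0.9$ isotherm:

```python
import numpy as np

def vdw_p(v, T):
    # Reduced van der Waals isotherm (critical point at v = T = P = 1)
    return 8.0*T/(3.0*v - 1.0) - 3.0/v**2

def outer_roots(p, T):
    # Volumes where p = vdw_p(v, T): 3p v^3 - (p + 8T) v^2 + 9v - 3 = 0
    r = np.roots([3.0*p, -(p + 8.0*T), 9.0, -3.0])
    r = np.sort(r[np.abs(r.imag) < 1e-9].real)
    return r[0], r[-1]                      # liquid and gas volumes

def lobe_area(p, T):
    # Signed area between isotherm and plateau, from the antiderivative
    # of vdw_p: F(v) = (8T/3) log(3v - 1) + 3/v
    vl, vg = outer_roots(p, T)
    F = lambda v: (8.0*T/3.0)*np.log(3.0*v - 1.0) + 3.0/v
    return F(vg) - F(vl) - p*(vg - vl)

def maxwell_pressure(T, plo, phi):
    # Equal-areas rule: bisect until the two lobes cancel
    for _ in range(60):
        pm = 0.5*(plo + phi)
        plo, phi = (pm, phi) if lobe_area(pm, T) > 0.0 else (plo, pm)
    return 0.5*(plo + phi)

p_sat = maxwell_pressure(0.9, 0.43, 0.72)   # bracket between spinodals
print(round(p_sat, 3))                       # ≈ 0.647
```

The plateau pressure so obtained is the one at which the two lobes cut by the isotherm have equal areas, i.e. at which the two minima of the Gibbs free energy are degenerate.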
In this work, inspired by the theory of nonlinear PDEs, in the class of nonlinear conservation laws, we propose a novel method that, given an equation of state assumed to be accurate outside the critical region, allows one to construct a partition function for a finite number of particles $N$ that is valid in the whole space of thermodynamic variables including the critical region. This partition function automatically encodes the Maxwell equal areas rule. \newline Based on the mean field assumption that configurations of equal volume are equally weighted, we obtain the general functional form of the partition function. For finite $N$, the partition function is, as expected, analytic in the space of thermodynamic variables, but it develops a singularity in the thermodynamic limit $N\to \infty$. We use the Laplace method for the asymptotic evaluation of the partition function for large $N$ with the constraint that above the critical point, where the Laplace integral admits a single critical point, the leading asymptotics of the volume expectation value satisfies the classical van der Waals equation. Remarkably, this condition allows us to fix uniquely the functional form of the partition function in such a way that the logarithm of the probability density gives the correct Gibbs free energy of the van der Waals model above the critical point. Finally, we prove that in the critical region, defined as the region in the space of thermodynamic variables where the Laplace integral admits multiple critical points, the leading asymptotics for the volume develops a discontinuous behavior, providing the exact analytical description of the first order phase transition.
\section{The model.} Let us consider a fluid of $N$ identical particles of mass $m$, whose centre of mass is fixed at the origin of the reference frame, described by the Hamiltonian of the form \begin{equation} \label{Hamilt_gen} H_{N} = \sum_{l=1}^{N} \frac{{\bf p}_{l}^{2}}{2 m} - \frac{1}{2} \sum_{l,m =1}^{N} \psi ({\bf r}_{l}, {\bf r}_{m}) + P v({\bf r}_{1},\dots, {\bf r}_{N}), \end{equation} where ${\bf p}_{l}$ is the momentum of the $l$-th particle, $\psi({\bf r}_{l}, {\bf r}_{m})$ is a two-body interaction potential, and the last term models the interaction with an external field where $P>0$ is a real positive mean field coupling constant and the volume $v({\bf r}_{1},\dots, {\bf r}_{N})$ is defined as the minimum convex hull associated with the configuration $\{{\bf r}_{1},\dots, {\bf r}_{N}\}$. The partition function is given by the standard formula for a canonical ensemble \[ {\cal Z} = \int d^{N}{\bf p}_{i} d^{N}{\bf r}_{i} e^{-\beta H_{N}} \] where $\beta = (K_{B} T )^{-1}$ with $K_{B}$ the Boltzmann constant and $T$ the temperature. Let us observe that fixing the centre of mass breaks the translational invariance of the Hamiltonian~(\ref{Hamilt_gen}) that otherwise would lead to a divergent partition function ${\cal Z}$. Integration over the momentum variables ${\bf p}_{l}$ returns ${\cal Z} = (2 \pi m K_{B} T)^{3N/2} Z$ where \begin{equation} \label{partition} Z = \int d^{N} {\bf r}_{i} \; \exp \left [ N \left( \frac{t}{2} \sum_{l,m} \psi \left({\bf r}_{l}, {\bf r}_{m} \right ) + x v \right) \right]. \end{equation} The rescaled variables $t = 1/(N K_{B} T)$ and $x = - P/(N K_{B} T)$ introduced above will allow us to define the thermodynamic limit, and the choice of notation emphasizes the formal analogy between the Gibbs free energy and the Hamilton-Jacobi function of the associated mechanical problem (see e.g.\cite{Genovese}).
Let us introduce the density of free energy $$\alpha_{N} = -N^{-1} \log {\cal Z}.$$ This quantity is sometimes also referred to as {\em mathematical pressure}, see e.g. \cite{GRS}. The expectation value of a given observable ${\cal O}$ is defined in the usual manner, i.e. $$\av{{\cal O} } = {\cal Z}^{-1}\int d^{N}{\bf r} \; {\cal O} \; e^{-\beta H_{N}}.$$ In particular, let us observe that $\av{v} = - \partial \alpha_{N}/\partial x $. The thermodynamic regime is defined as the large particle number limit $N \sim N_{A}$ where $N_{A} \simeq 6.022 \times 10^{23}$ is Avogadro's number. In particular, for $n$ moles of a gas of molecules of hard core volume $b_{0}$ we have \[ N =n N_{A} \qquad N K_{B} = n R \qquad N b_{0} = n b_{m} \] where $R = N_{A} K_{B} \simeq 8.31~J/(mol~K)$ is the gas constant and $b_{m} = N_{A} b_{0}$ is the molar hard core volume. Hence, the gas constant $R$ defines the typical scale for the variables $x$ and $t$. We now assume that for fixed values of the variables $x$ and $t$, configurations of equal volume occur with the same probability density, so that there exists a probability measure $\mu(v)$ such that the partition function~(\ref{partition}) is of the form \begin{equation} \label{Zmeasure} Z =\int_{b}^{\infty} d\mu(v) \end{equation} where $b = n b_{m}$ is the total hard core volume. This assumption gives a nonlinear generalization of the standard mean field approximation introduced for the statistical mechanical derivation of the van der Waals equation of state (see e.g.\cite{stanley}). We also note that, from a formal perspective, this ansatz is equivalent to the requirement that the moments $\av{v^{n}}$ for the model~(\ref{Hamilt_gen}) are such that the measure $\mu(v)$ is the solution to the {\it Stieltjes moments problem}~\cite{Akhiezer}, that is \[ \av{v^{n}} = \int_{b}^{+ \infty} v^{n} d \mu(v).
\] Expressing the differential as $d\mu(v) = \mu'(v) dv $, the function $\mu'(v)$ gives the weight associated with a given volume configuration that, for fixed values of $x$ and $t$, is the same for all configurations of equal volume. Since, as in the canonical ensemble, the logarithm of the probability density in~(\ref{partition}) is linear in the variables $x$ and $t$ (this ensures entropy maximization at equilibrium), the probability density $\mu'(v)$ is such that $\log \mu'(v) = N \left(x v +\frac{1}{2} t \phi(v) + \sigma(v) \right)$ for certain functions $\phi(v)$ and $\sigma(v)$. Hence, the partition function takes the form \begin{equation} \label{part_mom} Z = \int_{b}^{+\infty} d \mu(v) = \int_{b}^{+\infty} e^{N (x v +\frac{t}{2} \phi(v) + \sigma(v))} \; dv. \end{equation} In the following we will prove that the functions $\phi(v)$ and $\sigma(v)$ can be uniquely determined by the requirement that the expectation value $\av{v}$ evaluated, away from the critical region, according to the partition function~(\ref{part_mom}) satisfies, in the thermodynamic limit, the celebrated van der Waals equation \begin{equation}\label{classical_vdW} \left (P + \frac{a}{v^2} \right) \left (v - b \right) = n R T. \end{equation} We should stress that, as discussed in detail below, the assumption about the existence of the measure in~(\ref{Zmeasure}) is strong enough to fix uniquely the functional form of $\phi(v)$ and $\sigma(v)$ with no further specifications on the two-body potential $\psi({\bf r}_{l},{\bf r}_{m})$. However, we find it instructive to present a heuristic phenomenological construction for a class of two-body nearest-neighbour potentials depending on the distance, of the form $\psi({\bf r}_{l},{\bf r}_{m}) = \psi(r_{lm})$, where $r_{lm} = |{\bf r}_{l} - {\bf r}_{m} |$. In the thermodynamic limit, we can assume that for a given equilibrium configuration particles are on average approximately equidistant, i.e.
$r_{lm} \simeq \bar{r}$ where $\bar{r}$ is the mean distance, then \[ \sum_{l,m} \psi(r_{lm}) \simeq \sum_{l,m} \psi(\bar{r}) \sim N \psi(\bar{r}). \] We used the fact that the number of nearest-neighbour pairs is of order $N$. Let us consider for example an effective electric potential energy \[ \psi(\bar{r}) \simeq \frac{1}{N^{\beta}} \frac{q^{2}}{4 \pi \epsilon_{0} \bar{r}^{\alpha}} \simeq \frac{q^{2}}{4 \pi \epsilon_{0} v^{\alpha/3}} \] where we observed that the mean distance for a given volume configuration is related to the volume per particle by the relation $\bar{r} \simeq (v/N)^{1/3}$ and the exponent $\beta =\alpha/3$ is chosen to ensure the linear extensivity of the potential term in~(\ref{Hamilt_gen}). More generally, we assume that in the thermodynamic limit we can write \[ \sum_{l,m} \psi({\bf r}_{l},{\bf r}_{m}) \simeq \phi(v), \] that is, the potential energy can be expressed as a function of the volume. Under this assumption, we observe that the partition function~(\ref{partition}) can be equivalently written as \[ Z = \int_{b}^{\infty} dv \; {\cal D}_{N}(v) e^{N \left (\frac{t}{2} \phi(v) + x v \right)} \] where \[ {\cal D}_{N}(v) = \int d^{N}{\bf r}_{i} \delta \left (v - v({\bf r}_{1},\dots,{\bf r}_{N}) \right) \] gives the number of configurations of the prescribed volume $v$. A direct comparison with the formula~(\ref{part_mom}) leads us to the natural interpretation, for large $N$, of the function $\sigma(v)$ as the {\it configurational entropy} of the system, i.e. \[ \sigma(v) \simeq \frac{1}{N} \log {\cal D}_{N}(v). \] \newline Let us now proceed by evaluating the leading order asymptotics, for $N\to \infty$, of the partition function~(\ref{part_mom}) in the region of thermodynamic variables $x$ and $t$ where $\log \mu'(v)$ admits a single critical point.
Laplace's formula gives \begin{equation} \label{part_laplace} Z \simeq \sqrt{\frac{2 \pi}{N \alpha''(v^{\star})}} \; e^{- N \alpha(v^{\star})}, \qquad N \to \infty \end{equation} where $\alpha(v) = - x v - t \phi(v)/2 - \sigma(v)$ and $v^{\star}(x,t)$ is a stationary point for the potential $\alpha(v)$ such that $\alpha'(v^{\star}) = 0$, i.e. \begin{equation} \label{G_stat} x + \frac{t}{2} \phi'(v^{\star}) + \sigma'(v^{\star}) = 0. \end{equation} In particular, formula~(\ref{part_laplace}) implies that $\alpha = \lim_{N\to \infty} \alpha_{N}$. Identifying the external field constant $P$ in the Hamiltonian~(\ref{Hamilt_gen}) with the physical pressure in Eq.~(\ref{classical_vdW}) and choosing \begin{subequations} \label{matching} \begin{align} \label{matching1} \phi(v) =& 2 a /v\\ \label{matching2} \sigma(v) =& \log \left(v - b \right) \end{align} \end{subequations} Eq.~(\ref{G_stat}) coincides with the van der Waals equation~(\ref{classical_vdW}), upon observing that $\av{v} = - \partial \alpha/\partial x = v^{\star}$ at leading order. We also note that the asymptotic matching condition~(\ref{matching2}) allows us to evaluate the function ${\cal D}_{N}(v)$ for large $N$, which as expected is \[ {\cal D}_{N}(v) \simeq \left (v - b \right )^{N}. \] The prescriptions~(\ref{matching}) are, according to our procedure, the necessary matching conditions that uniquely fix the partition function~(\ref{part_mom}) consistently with the van der Waals equation of state, which is assumed to be accurate above the critical region. We note that $\alpha(v) = G/T$, where $G$ is the Gibbs free energy density of the van der Waals model. A direct calculation shows that the partition function so obtained \begin{equation} \label{partKG} Z = \int_{b}^{\infty} e^{N \left(x v + t \frac{a}{v} + \log \left(v - b \right) \right)} \; dv \end{equation} satisfies the Klein-Gordon equation in the light-cone variables \begin{equation} \label{KG} \frac{\partial^{2} Z}{\partial x \partial t} = N^{2} a Z.
\end{equation} Let us also observe that the integral expression~(\ref{partKG}) can be explicitly evaluated at finite $N$ for $t =0$ and gives $\av{v}(x,0) = b - 1/x - 1/(N x)$, which coincides, up to a correction of order $N^{-1}$, with the $T \to \infty$ limit of the equation of state~(\ref{classical_vdW}). Using the self-consistency equation \[ \av{v} = N^{-1} Z^{-1} \partial Z /\partial x \] the Klein-Gordon equation~(\ref{KG}) implies that the volume density satisfies the nonlinear viscous conservation law \begin{equation} \label{vfulleq} \der{\av{v}}{t} = \der{}{x} \left(\frac{a}{\av{v}} + \frac{1}{N} \der{\log \av{v}}{t} \right) \end{equation} of the type studied in~\cite{ALM} and related to the viscous analog of the Camassa-Holm equation. In the thermodynamic limit, above the critical temperature where the gradient of $\av{v}$ is bounded, the term of order $O(N^{-1})$ in~(\ref{vfulleq}) is negligible and the volume density satisfies the Riemann-Hopf type equation \[ \partial_{t}\av{v} = \partial_{x} (a/\av{v}) \] whose solution develops a gradient catastrophe in finite ``time'' $t$. As illustrated in Fig.~\ref{fig:G1}.a, the volume $\av{v}$ evolves in the space of thermodynamic parameters just like a nonlinear hyperbolic wave and the gradient catastrophe is associated with the critical point $x_{c} = - 1/(8b)$, $t_{c} = 27b/(8a)$, $v_{c} = 3 b$. Beyond the critical time $t_c$, the physical solution develops a shock discontinuity, corresponding to a first order phase transition, whose position at fixed $t > t_{c}$ is determined by the equal area rule and whose speed $U$ is given by the Rankine-Hugoniot condition $$U = -(a/v_l-a/v_r)/(v_l-v_r) = a/(v_l v_r)>0,$$ where $v_{l}$ and $v_{r}$ are the limiting values of $\av{v}$ respectively to the left and to the right of the jump.
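The critical values and the finite-$N$ formula quoted above are easy to check numerically: the critical point annihilates the first two derivatives of $\alpha(v)$, and $\av{v}(x,0)$ is reproduced by direct quadrature of~(\ref{partKG}) at $t=0$. A sketch for the parameters of Fig.~2 ($a=1$, $b=3$; the values of $N$ and $x$ below are illustrative):

```python
import numpy as np

a, b = 1.0, 3.0                        # parameter choice of Fig. 2

# Critical point: alpha'(v) and alpha''(v) vanish simultaneously,
# with alpha(v) = -x v - t a/v - log(v - b)
vc, tc, xc = 3*b, 27*b/(8*a), -1/(8*b)
d1 = -xc + tc*a/vc**2 - 1/(vc - b)     # alpha'(v_c)
d2 = -2*tc*a/vc**3 + 1/(vc - b)**2     # alpha''(v_c)
print(d1, d2)                           # both vanish (up to rounding)

# Finite-N average volume at t = 0 by direct quadrature of (partKG)
N, x = 50, -1.0                         # illustrative values
v = np.linspace(b + 1e-6, b + 12.0, 400_001)
log_w = N*(x*v + np.log(v - b))
w = np.exp(log_w - log_w.max())         # rescale to avoid underflow
v_mean = (v*w).sum() / w.sum()
print(v_mean, b - 1/x - 1/(N*x))        # both ≈ 4.02
```

The quadrature reproduces $b - 1/x - 1/(Nx)$ to high accuracy, confirming the closed-form evaluation at $t=0$.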
It was observed in~\cite{moro1} that the Rankine-Hugoniot condition is equivalent to the Clausius-Clapeyron equation, implying that the shock speed is proportional to the latent heat associated to the first order phase transition, and the trajectory of the shock is interpreted as the coexistence curve of the gas-liquid phase, as shown in Fig.~\ref{fig:G1}.b. Such a connection between phase transitions and scalar shock waves was first observed in the context of magnetic models (see e.g.~\cite{noi} and references therein), and in the classical thermodynamic setting in~\cite{moro2,moro1}, where the notion of universality has also been discussed. It is interesting to compare the mean field partition function~(\ref{partKG}) associated to the model Hamiltonian~(\ref{Hamilt_gen}) with the equation of state obtained according to the standard canonical ensemble formalism. For the sake of simplicity let us consider an ideal gas of non-interacting particles of hard core volume $b$. In this case $\phi(v) = 0$ and the coupling constant $P$ models the interaction with the external environment. We should stress that in the present formalism the gas does not occupy a prescribed volume; rather, the mean field partition function~(\ref{partKG}) accounts for all possible gas configurations over the whole space. Evaluating formula~(\ref{partKG}) explicitly we obtain \[ Z = \frac{(N-1)!}{N^N} \left(-\frac{1}{x} \right)^{N+1} e^{N b x}. \] The equation of state for the gas of $N$ particles is given, in full analogy with the case of mean field spin systems (see e.g.~\cite{Genovese}), via the self-consistency equation \[ \av{v} = \frac{1}{N} \frac{1}{Z} \der{Z}{x} = b - \frac{N+1}{N x} \] or equivalently \[ (\av{v} - b) P = (N+1) K_{B} T.
\] In the thermodynamic regime $N \simeq n N_{A}$ we obtain the well known ideal gas equation of state \begin{equation} \label{idealeq} (\av{v} - b) P = n R T, \end{equation} which allows us to identify the coupling constant $P$ with the physical pressure, which plays the role of the external magnetic field in spin systems. \\ Within the canonical formalism the ideal gas, constituted by a fixed number of particles $N$ with Hamiltonian \[ H = \sum_{l=1}^{N} \frac{{\bf p}_{l}^{2}}{2 m} \] is assumed to be in equilibrium with an external reservoir and to occupy a prescribed volume, say $V$. The partition function is given by \[ {\cal Z} = \int d^{N}{{\bf p}_{i}} \; d^{N}{{\bf r}_{i}} \; e^{-H/K_{B} T} = (2 \pi m K_{B} T)^{3N/2} \left(\int d {\bf r} \right)^{N} = (2 \pi m K_{B} T)^{3N/2} {(V-b)}^{N}. \] The pressure \[ P = - \der{F}{V} \] is defined in terms of the Helmholtz free energy $F(V,T) = - K_{B} T \log {\cal Z}$ and gives the equation of state in the form~(\ref{idealeq}). Unlike the canonical formalism, the mean field Hamiltonian~(\ref{Hamilt_gen}) encodes the boundary condition by weighing different volume configurations that are intrinsically defined via the minimum convex hull. As a consequence the number $N$ associated to the spatial scale of the system takes finite size effects, which are important near criticality, into account in a manner consistent with the Maxwell rule. \begin{figure} \begin{center} \includegraphics[scale=.4]{Roots_g1.pdf} \includegraphics[scale=.4]{Roots_g2.pdf} \caption{a) van der Waals isothermal curves (for the choice of parameters $a=1$ and $b=3$) above and below the critical temperature $T_{c} = (N K_{B} t_{c})^{-1}$ . b) Shock trajectory (solid line) and critical sector (delimited by the dashed lines) associated to multivalued isotherms.
} \label{fig:G1} \end{center} \end{figure} \vspace{.2cm} {\bf Remark.} We observe that the procedure presently described, which allows us to extend the van der Waals equation of state to the critical region, can straightforwardly be generalised to the class of equations of state obtained from (large volume) virial expansions of the form (see e.g.~\cite{Landau}) \[ P = \frac{n R T}{v} \left( 1 + \frac{B_{1}(T)}{v} + \frac{B_{2}(T)}{v^{2}} \dots \right) \] with \[ B_{i}(T) = \frac{\alpha_{i+1}}{2 n R T} + \beta_{i+1} \qquad i = 1,2,3,\dots \] where $\alpha_{i}$ and $\beta_{i}$ are real constants given by the large volume asymptotic expansion of the functions $\sigma(v)$ and $\phi(v)$ of the form \begin{align*} \phi(v) = - \sum_{k=1}^{\infty} \frac{\alpha_{k+1}}{k v^{k}} \qquad \sigma(v) = \log v - \sum_{k=1}^{\infty} \frac{\beta_{k+1}}{k v^{k}}. \end{align*} \section{The critical region.} The subset of the space of thermodynamic variables $x$ and $t$ where the free energy $\alpha(v)$ admits multiple critical (stationary) points defines the critical region associated to the gas-liquid phase transition. In this case, the leading asymptotics at large $N$ of the partition function~(\ref{partKG}) is given by the formula \begin{equation} \label{part_laplace_mult} Z \simeq \sum_{i} \sqrt{\frac{2 \pi}{N \alpha''(v_{i})}} \; e^{- N \alpha(v_{i})}, \qquad N \to \infty \end{equation} where the sum runs over the local minima $v_{i} (x,t)$ of the free energy $\alpha(v)$. Hence, consistently with the classical description of the van der Waals phase transition, below the critical temperature the Gibbs free energy develops three stationary points, two of which are local minima. In the limit $N \to \infty$ the leading contribution to the partition function is given by the point of local minimum $v_{m}$ such that $\alpha(v_{m}) \leq \alpha(v_{i})$, for all $i \neq m$.
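The selection of the lowest minimum can be checked numerically. The sketch below, assuming the parameters $a=1$, $b=3$ used in the figures and a point $(x,t)$ inside the multivalued region (the specific values are illustrative), locates the two competing minima of $\alpha(v)$ on a grid and verifies that the finite-$N$ average $\av{v} = N^{-1}\partial_{x}\log Z$ converges to the deeper one:

```python
import numpy as np

a, b = 1.0, 3.0                       # parameters used in the figures
t = 1.15 * 27 * b / (8 * a)           # a temperature below critical (t > t_c)
x = -0.015                            # a point inside the multivalued region
vgrid = np.linspace(b + 1e-4, 300.0, 300000)

# free energy alpha(v) = -x v - t a/v - log(v - b)
alpha = -x * vgrid - t * a / vgrid - np.log(vgrid - b)

# locate the local minima of alpha(v) on the grid
i = np.where((alpha[1:-1] < alpha[:-2]) & (alpha[1:-1] < alpha[2:]))[0] + 1
v_min, depth = vgrid[i], alpha[i]
assert len(v_min) == 2                # two competing minima below T_c
v_m = v_min[np.argmin(depth)]         # the deepest (dominant) one

# finite-N average volume <v> = N^{-1} d(log Z)/dx, computed by quadrature
N = 4000
w = np.exp(N * (-alpha - (-alpha).max()))   # normalised to avoid overflow
v_mean = (vgrid * w).sum() / w.sum()

assert abs(v_mean - v_m) < 0.5        # <v> selects the lowest minimum
```

The normalisation by the maximum of the exponent is essential in practice, since $e^{N\alpha}$ overflows double precision already for modest $N$.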
Hence, within the critical region, the solution is given by \[ \av{v} = \lim_{N\to \infty}N^{-1} \partial_{x} \log Z = v_{m} \] where $v_{m}(x,t)$ is a root of the equation of state $\alpha'(v_{m}) = 0$ such that the Gibbs free energy has the lowest local minimum. The subset of the $(x,t)$-plane such that $\alpha$ takes two equal minima, $\alpha(v_{i}(x,t)) = \alpha(v_{j}(x,t))$, represents the curve of resonance of the exponential contributions in~(\ref{part_laplace_mult}) and identifies the shock line shown in Fig.~\ref{fig:G1}.b. As already known from the theory of classical shocks for the viscous Burgers equation, such a resonance condition is equivalent to the equal areas rule~\cite{Whitham}. \begin{figure} \begin{center} \vspace{.3cm} \includegraphics[scale=.4]{Roots_g3.pdf} \includegraphics[scale=.4]{Roots_g4.pdf} \\\includegraphics[scale=.4]{Roots_g5.pdf} \includegraphics[scale=.4]{Roots_g6.pdf} \caption{Solution for different values of $N$ and comparison with the classical shock at $t = 1.15 t_{c} $ for the choice of parameters $a=1$ and $b=3$.} \label{fig:G2} \end{center} \end{figure} In Fig.~\ref{fig:G2} we plot the isothermal curves evaluated using the partition function~(\ref{partKG}). As $N$ increases the exact isothermal curves develop an inflection point and rapidly converge to the asymptotic behavior predicted by the Laplace formula. We should emphasize that the partition function~(\ref{partKG}) provides a global description of isothermal curves in the space of thermodynamic variables, and the description of the phase transition is already accurate for $N \simeq 10^{4}$, which is small compared with Avogadro's number. The formula~(\ref{partKG}) provides an explicit description of how finite size effects play the role of a singularity resolution mechanism.
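The resonance (equal-depth) condition can also be used constructively: bisecting on $x$ at fixed $t > t_c$ locates the shock line, and evaluating the finite-$N$ isotherm on either side of it exhibits the jump between the liquid and gas branches. A sketch, again with the illustrative parameters $a=1$, $b=3$ (the bisection bracket is chosen by hand inside the spinodal region):

```python
import numpy as np
from scipy.optimize import brentq

a, b = 1.0, 3.0
t = 1.15 * 27 * b / (8 * a)           # isotherm below T_c, as in the figures
vgrid = np.linspace(b + 1e-4, 300.0, 300000)

def branches(x):
    # v-locations and depths of the local minima of alpha(v)
    al = -x * vgrid - t * a / vgrid - np.log(vgrid - b)
    i = np.where((al[1:-1] < al[:-2]) & (al[1:-1] < al[2:]))[0] + 1
    return vgrid[i], al[i]

# resonance condition: equal depths of the two minima locates the shock line
x_s = brentq(lambda x: branches(x)[1][0] - branches(x)[1][-1], -0.030, -0.011)

def v_mean(x, N):
    # <v> = N^{-1} d(log Z)/dx by quadrature, normalised against overflow
    g = x * vgrid + t * a / vgrid + np.log(vgrid - b)
    w = np.exp(N * (g - g.max()))
    return (vgrid * w).sum() / w.sum()

N = 2000
# on each side of x_s the isotherm sits on the locally dominant branch ...
assert abs(v_mean(x_s - 0.003, N) - branches(x_s - 0.003)[0][0]) < 1.0
assert abs(v_mean(x_s + 0.003, N) - branches(x_s + 0.003)[0][-1]) < 1.0
# ... so crossing x_s produces the finite jump of the first order transition
assert v_mean(x_s + 0.003, N) - v_mean(x_s - 0.003, N) > 10.0
```

For moderate $N$ the jump is smoothed over a window of width $O(1/N)$ in $x$, which is the finite-size regularisation of the shock discussed above.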
It also gives a statistical mechanical interpretation of the results obtained in~\cite{moro1}, which allow one to identify the multi-scale regime characterizing the universal local form of the equation of state \[ v = v_{c} + N^{-1/4} u \left( \frac{x-x_{c} + a (t-t_{c})/v_{c}^{2}}{N^{-3/4}}, \frac{t-t_{c}}{N^{-1/2}} \right) \] where \[ u(\xi,\tau) = -2 \der{\log \Lambda}{\xi} (\xi,\tau), \] $\Lambda(\xi,\tau)$ is the Pearcey integral \[ \Lambda(\xi,\tau) = \int_{-\infty}^{\infty} e^{-\frac{1}{8} (z^{4} - 2 \tau z^{2} + 4 \xi z)} \; dz \] and $(x_{c},t_{c},v_{c})$ are the coordinates of the critical point as evaluated above. \section{Concluding remarks.} This work shows how our approach, based on the combination of Statistical Mechanics and the theory of nonlinear PDEs, provides a novel and powerful tool to tackle phase transitions. This method leads to the solution of perhaps the best known test case exhibiting a first order phase transition, previously described only semi-heuristically: the van der Waals model. In particular we have obtained the first global mean field partition function, eq.~(\ref{partKG}), for a system with a finite number of particles. The partition function is a solution to the Klein-Gordon equation, reproduces the van der Waals isotherms away from the critical region and, in the thermodynamic limit $N\to \infty$, automatically encodes the Maxwell equal areas rule.
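The Pearcey integral entering the universal local form above is straightforward to evaluate numerically, since its integrand decays like $e^{-z^{4}/8}$. A minimal sketch, checking the closed form $\Lambda(0,0) = 8^{1/4}\,\Gamma(1/4)/2$ (obtained by the substitution $s = z^4/8$) and the oddness of $u$ in $\xi$ implied by the symmetry $\Lambda(-\xi,\tau) = \Lambda(\xi,\tau)$:

```python
import math
import numpy as np
from scipy.integrate import quad

def pearcey(xi, tau):
    # Lambda(xi, tau) = int exp(-(z^4 - 2 tau z^2 + 4 xi z)/8) dz
    f = lambda z: math.exp(-0.125 * (z**4 - 2 * tau * z**2 + 4 * xi * z))
    val, _ = quad(f, -np.inf, np.inf)
    return val

# closed form at the critical point: Lambda(0,0) = 8^{1/4} Gamma(1/4) / 2
assert abs(pearcey(0.0, 0.0) - 8**0.25 * math.gamma(0.25) / 2) < 1e-6

def u(xi, tau, h=1e-5):
    # u = -2 d(log Lambda)/d(xi), by central differences
    return -2 * (math.log(pearcey(xi + h, tau))
                 - math.log(pearcey(xi - h, tau))) / (2 * h)

# Lambda is even in xi, hence u is odd: u(-xi, tau) = -u(xi, tau)
assert abs(u(0.6, 1.0) + u(-0.6, 1.0)) < 1e-3
```

The function $u(\xi,\tau)$ so computed interpolates smoothly between the two branches of the isotherm on the $N^{-3/4}$, $N^{-1/2}$ scales of the critical region.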
The approach hereby presented is of remarkable simplicity, has been successfully applied to spin~\cite{newman2,brankov1,brankov2,Genovese, noi} and macroscopic thermodynamic systems~\cite{moro1, moro2}, and can be further extended to the larger class of models admitting partition functions of the form~(\ref{part_mom}), which can be used to extend to the critical region general equations of state of the form~(\ref{G_stat}), including a class of virial expansions.\\ \noindent {\bf Acknowledgements.} AB has been partially supported by Progetto giovani GNFM-INdAM 2014 "Calcolo parallelo molecolare" and AM has been partially supported by Progetto giovani GNFM-INdAM 2014 "Aspetti geometrici e analitici dei sistemi integrabili".
\section{Introduction} Since the detection of the first planet orbiting a main sequence star, 51 Peg (Mayor \& Queloz 1995), the radial velocity (RV) method has become the most successful technique for detecting exoplanets, as the vast majority have thus far been discovered in this way (Udry \& Santos 2007). This method is especially efficient for giant planets in close-in orbits owing to the large radial velocities they induce in the host star. The use of the RV technique to detect exoplanets around young and active stars requires, in addition, a careful characterization of stellar activity. An active region on the stellar surface can produce changes in the shape of the spectral lines, thus inducing a subsequent temporal variation of the RVs that may mimic a planetary reflex motion with a period equal to the rotational period of the star (Saar \& Donahue 1997). Some cases of false planetary detections are discussed by Queloz et al. (2001), Bouvier et al. (2007), Huerta et al. (2008) and Hu\'elamo et al. (2008). Thus the challenge in using the RV technique to detect young planets lies in disentangling the increased levels of stellar activity of young stars from the RV signals of the planets. There is an absence of planets detected around stars younger than 100 Myr (Setiawan et al. 2007, Setiawan et al. 2008). Most RV searches for planetary companions have focussed mainly on stars older than 1 Gyr. Young stars were omitted from RV surveys until recently. Nevertheless, great effort has been made by several groups that have targeted young objects in their RV searches of planetary companions. For example, surveys are being carried out which focus on both nearby associations of young stars and moving groups with ages ranging from 10 to 500 Myr; examples of which include $\beta$ Pic (12 Myr), UMa association (300 Myr), Pleiades (100 Myr), IC 2391 (35 Myr), Hyades (700 Myr), Taurus association (2 Myr), ChaI (2 Myr), TWA (10 Myr) (Paulson et al.
2004, Paulson \& Yelda 2007, Esposito et al. 2006, Huerta et al. 2007, Setiawan et al. 2007, Setiawan et al. 2008, Prato et al. 2008). Positive identifications of planetary signatures from these efforts are few, with only two candidates to date: HD 70573 (Setiawan et al. 2007) and the controversial TW Hya (Setiawan et al. 2008). Planets orbiting young stars are particularly valuable, as they enable us to investigate some of the critical questions about the formation of both stellar and planetary systems. How and at what stage planets form, what the planet formation mechanism is, and how planets evolve are important questions which the study of young planetary systems will help to answer. In this paper we report strong evidence of a planetary candidate orbiting the young and active K5V star BD+20 1790. Sect.~2 is an overview of the properties of this star and of our previous studies of it. The observational strategy and data analysis are presented in Sect.~3. In Sect.~4 the nature of the RV variations is investigated. An orbital solution for the data is presented in Sect.~5, and Sect.~6 discusses the planetary parameters, the orbital solution, and how stellar activity and the planet are related. Finally, we summarize and offer some concluding remarks in Sect.~7. \section{BD+20 1790: An overview} BD+20 1790 was classified by Jeffries (1995) as a K5Ve star, with a magnitude of $V = 9.9$. Mason et al. (1995) identified this star as the optical counterpart of the 2RE J072343.6+202500 EUV source, located in the ROSAT All-Sky Survey. L\'opez-Santiago et al. (2006) proposed its membership in the AB Dor kinematic moving group, which has an estimated average age of 50 Myr. By comparing the equivalent width of Li $\lambda$ 6708 \AA \ with the spectral type, L\'opez-Santiago et al. (2006) derived an age estimate of 35--80 Myr. The main stellar parameters for BD+20 1790 are compiled in Table~\ref{star}.
We obtained a value for the stellar radius from the measured rotational velocity and photometric period. Our estimated radius agrees with the previous K5V spectral classification (from the Carrol \& Ostlie (2007) tables). Adopting this spectral type, we used the K5V temperature from the Carrol \& Ostlie (2007) tables. In conjunction with the photometric parameters, this enabled us to derive the luminosity, mass and surface gravity. Errors in the parameters were estimated by following the method of propagation of errors, i.e., the uncertainties were calculated from the errors in the variables involved in the determination of each parameter. Null correlation has been assumed between the different variables, which are in principle independent of each other. In order to test whether assuming a fixed value for $T_{\rm eff}$ has a non-negligible effect on the error computation, we investigated whether an error in $T_{\rm eff}$ could translate into uncertainties in the derived parameters. We considered an input error in $T_{\rm eff}$ of $\sim$10 $K$ and analysed the propagation of the $T_{\rm eff}$ error. Based on this analysis, we noticed that not considering the error in $T_{\rm eff}$ leads to an underestimation of the errors in mass and $\log g$, providing unreliable error bars for these parameters. The X-ray luminosity was calculated using the count rates and HR1 hardness ratios from the ROSAT All-Sky Survey. By combining the conversion factor $C_x$, computed with the formula from Fleming (1995), and the distance estimated by Reid et al. (2004), the stellar X-ray luminosity was calculated as $L_X$ = (1.6$\pm$0.5)$\times 10^{29}$ $erg$ $s^{-1}$.\\ We compute a preliminary value of metallicity by using a grid of Kurucz et al. (1993) ATLAS9 atmospheres and the 2002 version of the MOOG\footnote{The source code of MOOG 2002 can be downloaded at http://verdi.as.utexas.edu/moog.html} synthesis code (Sneden 1973).
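The radius determination and error propagation described above can be sketched as follows, using $R\sin i = P\,v\sin i/(2\pi)$ with the measured values from Table~\ref{star} and first-order propagation of the (independent) errors in $P$ and $v\sin i$:

```python
import math

# measured quantities (Table 1 of this paper)
P, sig_P = 2.801 * 86400.0, 0.001 * 86400.0   # rotation period [s]
vsini, sig_v = 10.03, 0.47                     # projected rotational velocity [km/s]
i = math.radians(50.41)                        # inclination
R_sun = 6.957e5                                # solar radius [km]

# R sin i = P vsini / (2 pi)  ->  R = P vsini / (2 pi sin i)
R = P * vsini / (2 * math.pi * math.sin(i)) / R_sun

# first-order propagation of the independent errors in P and vsini
sig_R = R * math.sqrt((sig_P / P)**2 + (sig_v / vsini)**2)

assert abs(R - 0.71) < 0.03    # Table 1: R = 0.71 +/- 0.03 R_sun
assert abs(sig_R - 0.03) < 0.01
```

The error budget is dominated entirely by the $v\sin i$ uncertainty, since the photometric period is known to one part in a few thousand.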
Atmospheric models were constructed with the data given in Table~\ref{star}. We used 12 Fe I lines selected from Gonz\'alez et al. (2001). We also calculated a value of metallicity by using 7 Fe I lines in the MOOG Abfind routine. We find an average value of $A[Fe]$ = 7.82$\pm$0.20 which, when assuming a solar value of $A[Fe]$ = 7.52, results in $[Fe/H]$ = 0.30$\pm$0.20. As mentioned, this is a preliminary value, although compared with the average metallicity of stars in the solar neighbourhood, we can still consider the star metal-rich within the error bars. In a recent paper Carpenter et al. (2008) derived the temperature, gravity and metallicity for BD+20 1790, their values of $T_{\rm eff}$ = 4408 $K$ and $\log g$ = 4.50 being very close to the corresponding values presented in Table~\ref{star}. We also point out that the difference between the metallicity values may be explained by the fact that Carpenter et al. (2008) simply assumed a fixed metallicity of $[Fe/H]$ = 0.0, and did not actually compute it. The Li I abundance analysis was also done in standard local thermodynamic equilibrium (LTE) using MOOG and ATLAS9, in the same way as for the metallicities. Abundances were derived by fitting synthetic spectra to the data. To determine Li abundances we performed a spectral synthesis around the Li I~6707~{\AA} resonance doublet, fitting all spectra between 6702 and 6712 \AA\ and taking into account the relation between the $^{6}$Li and $^{7}$Li isotopes. We determine an average value of lithium abundance of $log N(Li)$ = 1.03$\pm$0.04 (where $log N(Li)$ = log (Li/H)+12). In order to study the stellar activity and the kinematics, we have carried out both spectroscopic and photometric monitoring over the past few years: high temporal and spectroscopic resolution and two band photometry.
The simultaneous study of photospheric and chromospheric active regions is a powerful tool that allows us to trace, reconstruct and model the puzzle of the magnetic field topology, since these active regions are the fingerprints of magnetic fields (Collier Cameron 2001, Catalano et al. 2002, Frasca et al. 2005, Collier Cameron et al. 2002). Strong chromospheric activity was detected in several observing runs, described by Hern\'an-Obispo et al. (2005, 2007). In spite of the fact that the rotational velocity is not very high, $v {\rm sin}i\sim$10 km s$^{-1}$ (L\'opez-Santiago et al. 2006), all activity indicators are in emission above the continuum, from Ca~{\sc ii} H \& K to the Ca~{\sc ii} IRT lines (see Fig.~\ref{fig:fig1}).\\ Through the study of profile line asymmetries of the H$\alpha$ and H$\beta$ lines, prominence-like structures have been detected in the chromosphere of the star (Hern\'an-Obispo 2005, 2007). These can be observationally detected as transient absorption features superimposed on the line profile, interpreted as the presence of cool material embedded in the surrounding hotter corona and co-rotating with the star (Collier Cameron \& Robinson 1989a,b, Collier Cameron \& Woods 1992, Jeffries et al. 1993, Byrne et al. 1996, Eibe et al. 1998, Barnes et al. 2000, Donati et al. 2001). Several complete prominence-like transients have been detected, with durations of the order of a few hours (see Hern\'an-Obispo 2005 for details). Modeling these chromospheric phenomena is an important challenge in this case, due to the detection of these prominence-like structures in unstable positions, far from equatorial regions (Ferreira 2000, Jardine et al. 2001, Jardine \& van Balegooijen 2006). In addition, large optical flare events were observed. The gradual decay of the flares was observed for up to 5 hours. Fig.~\ref{fig:fig1} compares the activity indicators for the quiescent state and flare state.
The energy released is on the order of $\sim$10$^{37}$ erg, while for the largest solar flares the released energy is about $\sim$10$^{29}$--10$^{32}$ erg, thus placing the flares of BD+20 1790 in the so-called \textit{superflare} regime (Rubenstein \& Schaefer 2000). The photometric observations yielded a light curve with evidence of rotational modulation, the semi-amplitude of which is up to $\Delta${\it{V}}$\sim$ 0.$^m$06 and indicates the presence of spots on the surface. The period analysis of the entire set of observations reveals a photometric period of 2.801 ($\pm$ 0.001) days, in agreement with the period given by the SuperWASP photometric survey (Norton et al. 2007). \begin{figure} \includegraphics[angle=-90,scale=0.30,clip]{11000fg1a.ps} \includegraphics[angle=-90,scale=0.30]{11000fg1b.ps} \includegraphics[angle=-90,scale=0.30]{11000fg1c.ps} \includegraphics[angle=-90,scale=0.30]{11000fg1d.ps} \caption{Chromospheric activity indicators. The dashed line indicates the quiescent state, while the solid line indicates the flare state.
From top to bottom and left to right: He I D$_{3}$ region, Ca~{\sc ii} K, H$\alpha$ and H$\beta$} \label{fig:fig1} \end{figure} \begin{table} \caption{Stellar Parameters of BD+20 1790} \label{star} \centering \begin{tabular}{l r } \hline\hline Parameter & Value \\ \hline Spectral Type & K5 V\\ $B-V$ & 1.15 \\ $M^{\mathrm{a}}$ & 0.63 $\pm$ 0.09 $M_{\sun}$\\ $T_{\rm eff}^{\mathrm{b}}$ & 4410 K\\ $\log g^{\mathrm{a}}$ & 4.53 $\pm$ 0.17 \\ $EW{\rm(Li)}^{\mathrm{a}}$ & 110 $\pm$ 3 m\AA\\ $Distance^{\mathrm{e}}$ & 25.4 $\pm$ 4 $pc$\\ $Age^{\mathrm{c}}$ & 35--80 Myr\\ $v {\rm sin}i^{\mathrm{d}}$ & 10.03 $\pm$ 0.47 km s$^{-1}$\\ $P_{\rm phot}^{\mathrm{a}}$ & 2.801 $\pm$ 0.001 days\\ $i^{\mathrm{a}}$ & 50.41 degrees\\ $R^{\mathrm{a}}$ & 0.71 $\pm$ 0.03 $R_{\sun}$ \\ $[Fe/H]^{\mathrm{a}}$ & 0.30 $\pm$ 0.20\\ $log N(Li)^{\mathrm{a}}$ & 1.03 $\pm$ 0.04 \\ $L_X^{\mathrm{a}}$ & (1.6 $\pm$ 0.5) $\times$ 10$^{29}$ erg s$^{-1}$ \\ $L^{\mathrm{a}}$ & 0.17 $\pm$ 0.04 $L_{\sun}$\\ \hline \end{tabular} \begin{list}{}{} \item[$^{\mathrm{a}}$] This paper \item[$^{\mathrm{b}}$] From Carrol \& Ostlie, 2007 \item[$^{\mathrm{c}}$] From L\'opez Santiago et al. 2006 \item[$^{\mathrm{d}}$] From L\'opez Santiago 2005 \item[$^{\mathrm{e}}$] From Reid et al. 2004 \end{list} \end{table} A detailed and complete study of the chromospheric and photospheric activity characterization will be published in a forthcoming paper (Hern\'an-Obispo et al. 2009b, in prep.). \section{Observations and Data Analysis} In order to study and characterize active regions at photospheric and chromospheric levels, we carried out photometric and spectroscopic observations of the target. \subsection{Spectroscopic data} The observational strategy was designed to spectroscopically monitor chromospheric activity indicators with high temporal and spectral resolution. High resolution echelle spectra were obtained during four observing runs, from 2004 to 2007, detailed in Table~\ref{runs}.
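As a quick consistency check of the tabulated parameters, the luminosity follows from the radius and effective temperature via $L/L_{\sun} = (R/R_{\sun})^{2}(T_{\rm eff}/T_{\rm eff,\sun})^{4}$; a minimal sketch (the solar effective temperature value is an assumption of the example):

```python
# Stefan-Boltzmann scaling in solar units: L = 4 pi R^2 sigma T^4
R = 0.71       # stellar radius [R_sun], Table 1
T = 4410.0     # effective temperature [K], Table 1
T_sun = 5772.0 # solar effective temperature [K] (assumed value)

L = R**2 * (T / T_sun)**4
assert abs(L - 0.17) < 0.02   # Table 1: L = 0.17 +/- 0.04 L_sun
```

The result, $L \simeq 0.17\,L_{\sun}$, reproduces the tabulated luminosity within its quoted uncertainty.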
The exposure times ranged from 900 s to 1200 s, depending on weather conditions, in order to obtain a S/N typically greater than 140 for the SARG runs and 80 for the FOCES runs. The spectra in the time series observations were separated only by the CCD readout time, thus enabling us to obtain the highest temporal resolution possible. Our initial temporal cadence was designed to detect prominence-like transient features in the Balmer lines. Spectral type and RV standards were acquired with the same setup and configuration as the target. These standards were reduced and analysed in the same way as the target. The data were bias-subtracted, overscan-corrected and flat-fielded using standard routines in the IRAF\footnote{IRAF is distributed by the National Optical Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under contract with the National Science Foundation.} package. The wavelength calibration was obtained by taking spectra of a Th-Ar lamp. Using Coud\'e spectrographs provided a stable environment for the wavelength calibration, since they are not subject to flexure. Details about the spectrographs used can be seen in Pfeiffer et al. (1998) for the FOCES spectrograph and Gratton et al. (2001) for the SARG spectrograph. In order to enhance the accuracy of the calibration we used about 10--12 identified lines per order, across all orders for the SARG spectra and about 80 orders for the FOCES spectra. The orders were calibrated simultaneously and the total fit has an rms value typically lower than 0.003 \AA. The spectra were normalized by a polynomial fit to the observed continuum. Heliocentric radial velocities were determined using a weighted cross-correlation method. The spectra of the star were correlated order by order against spectra of several RV standards with similar spectral type. Orders with chromospheric features and telluric lines were excluded.
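The essence of the cross-correlation RV measurement can be sketched on synthetic data: on a logarithmic wavelength grid a Doppler shift becomes a uniform pixel shift, so the lag of the cross-correlation peak between target and standard gives the relative velocity. Everything below (line list, depths, widths, the RV value) is illustrative, not the actual BD+20 1790 data:

```python
import numpy as np

c = 299792.458                         # speed of light [km/s]
rv_true = 8.5                          # hypothetical stellar RV [km/s]

# log-lambda grid: one pixel corresponds to a constant velocity step dv
loglam = np.linspace(np.log(6000.0), np.log(6100.0), 4000)
dv = (loglam[1] - loglam[0]) * c       # velocity per pixel [km/s]

lines = np.array([6020.0, 6045.0, 6080.0])   # hypothetical photospheric lines [A]

def spectrum(rv):
    # Gaussian absorption lines, Doppler shifted by rv
    lam = np.exp(loglam)
    flux = np.ones_like(lam)
    for l0 in lines:
        flux -= 0.6 * np.exp(-0.5 * ((lam - l0 * (1 + rv / c)) / 0.15)**2)
    return flux

template = spectrum(0.0) - 1.0         # RV-standard spectrum, continuum removed
target = spectrum(rv_true) - 1.0

ccf = np.correlate(target, template, mode='full')
shift = np.argmax(ccf) - (len(template) - 1)   # lag of the CCF peak [pixels]
rv_measured = shift * dv

assert abs(rv_measured - rv_true) < dv  # recovered to within one pixel
```

In practice the peak is refined to sub-pixel precision (e.g. by fitting its core) and the per-order velocities are combined with weights, which is how the quoted 0.05--0.2 km s$^{-1}$ accuracies are reached.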
We calculated the uncertainties based on the cross-correlation peak height and the antisymmetric noise as described by Tonry \& Davis (1979). Also, by measuring the RVs of the standard stars, we estimated the systematic errors and the accuracy of the RV measurements with our instrumental setup. The accuracy between standards within the same run and between runs is better than 0.05 km s$^{-1}$.\\ Additional echelle data were acquired in DDT mode with the FOCES spectrograph in December 2008. The telescope configuration and setup were identical to those of previous FOCES runs, except for two nights in which a different CCD was used. Observations were scheduled over 10 consecutive nights, but due to bad weather conditions data were acquired on only five of them. Because of the time limitation in DDT mode, only one RV standard was observed. \begin{table*} \caption[]{Observing runs \label{runs}} \begin{center} \small \begin{tabular}{lllllllll} \noalign{\smallskip} \hline \hline \noalign{\smallskip} Date & Telescope & Instrument & CCD chip & Spect. range & Orders & Dispersion & FWHM$^{\rm c}$ & N. Obs.
\\ & & & \# & ~~~~~~~(\AA) & & ~~~~~(\AA/pix) & ~~~(\AA) & \\ \noalign{\smallskip} \hline \noalign{\smallskip} 29/03-6/04 2004 & 2.2m$^{\rm a}$ & FOCES & 2048x2048 24$\mu$m Site$\#$1d & 3720 - 10850 & 100 & 0.04 - 0.13 & 0.08 - 0.35 & 19 \\ 21-22/11/2004 & TNG $^{\rm b}$ & SARG & 2048x4096 13.5$\mu$m EEV & 4620 - 7920 & 52 & 0.07 - 0.11 & 0.07 - 0.17 & 43 \\ 15/04/2006 & TNG $^{\rm b}$ & SARG & 2048x4096 13.5$\mu$m EEV & 4620 - 7920 & 52 & 0.07 - 0.11 & 0.07 - 0.17 & 14 \\ 2-5/10/2007 & 2.2m$^{\rm a}$ & FOCES & 2048x2048 24$\mu$m Site$\#$1d & 3720 - 10850 & 100 & 0.04 - 0.13 & 0.08 - 0.35 & 10 \\ 12-13/12/2008 & 2.2m$^{\rm a}$ & FOCES & 2048x2048 15$\mu$m LORAL$\#$11i & 3830 - 10850 & 96 & 0.03 - 0.07 & 0.09 - 0.26 & 2 \\ 19-21/12/2008 & 2.2m$^{\rm a}$ & FOCES & 2048x2048 24$\mu$m Site$\#$1d & 3620 - 7360 & 100 & 0.04 - 0.13 & 0.08 - 0.35 & 3 \\ \noalign{\smallskip} \hline \noalign{\smallskip} \end{tabular} \end{center} \vspace{-0.25cm} {\small $^{\rm a}$ 2.2~m telescope at the German Spanish Astronomical Observatory (CAHA) (Almer\'{\i}a, Spain).\\ $^{\rm b}$ 3.58~m {\it Telescopio Nazionale Galileo} (TNG) at Observatorio del Roque de los Muchachos (La Palma, Spain).\\ $^{\rm c}$ The spectral resolution is determined as the FWHM at the arc comparison lines ranges. \\ } \end{table*} \subsection{Photometric data} The purposes of these observations were to determine the photometric period and to look for photometric variability. In addition, the study of the light curve, as well as spectrocopy, allow us to characterize the active regions in the photosphere (Catalano et al. 2002, Frasca et al. 2005, Biazzo et al. 2007). CCD differential aperture photometry was obtained using the $2.0$ m fully robotic Liverpool Telescope (Steele et al. 2004) at the Observatorio del Roque de los muchachos in La Palma, Spain. The observations were scheduled in monitoring mode. We obtained 22 photometric epochs during November and December 2007. 
Our observational strategy permitted us to obtain a photometric epoch every 3 nights on average. Each epoch consisted of alternating r$^\prime$ and g$^\prime$ exposures\footnote{Sloan r' and g' filters were used}, thereby obtaining quasi-simultaneous two-band photometry. Custom made software\footnote{ATP, Automatic TATOOINE Photometry. http://www.am.ub.es/$\sim$anglada/atp/atp\_testing.htm} was used to automatically extract the photometry. By analysing the intra-night scatter we infer a photometric accuracy of 3 mmag and 4 mmag per exposure ($r^\prime$ and $g^\prime$ bands respectively, see Fig.~\ref{fig:fig2}). We fit the best sine-wave model to the photometry, sampling many periods between 0.1 and 50 days, in both bands. Plotting the post-fit residuals as a function of the period, a very strong minimum in the post-fit residuals is found at $2.801 \pm 0.001$ days in both bands (see Fig.~\ref{fig:fig3}). We note that the period and the amplitude are similar to those given by the SuperWASP survey (Norton et al. 2007). The different amplitude in each band is consistent with a large spot or spot group covering at least $4\%$ of the surface. As can be seen in Fig.~\ref{fig:fig2}, the amplitude is larger at shorter wavelengths, i.e., in the $g^\prime$ band in this case. This color variation is correlated with the variation in magnitude: the star appears redder when fainter, at minimum light, and bluer when brighter, at maximum light.\\ The full analysis of the photometry and its relation to the stellar activity requires simultaneous discussion with the spectroscopic data, and a more detailed study of the star will be presented elsewhere (Hern\'an--Obispo et al. 2009b, in prep.) \begin{figure} \centering \includegraphics[angle=0,scale=0.92,clip]{11000fg2.eps} \caption{Photometry phased to the 2.801 days period.
A linear trend and a zero point have been subtracted from both bands. The residuals with respect to a simple sine-wave model are shown in the lower panel.} \label{fig:fig2} \end{figure} \begin{figure} \centering \includegraphics[angle=0,scale=0.69,clip]{11000fg3.eps} \caption{Post-fit residuals of the photometry as a function of the period. The sharpest minima correspond to the 2.801 days period in both bands. The RV period is marked in grey to illustrate the absence of related photometric signals.} \label{fig:fig3} \end{figure} \section{On the nature of the RV variations} Variations in the RV with a peak-to-peak amplitude of up to $\sim$2 km s$^{-1}$ were observed during all the observing runs. These variations are significantly larger than the individual measurement errors (0.10 to 0.20 km s$^{-1}$) or the systematic error (0.05 km s$^{-1}$), even when we consider the scatter between runs with different spectrographs and setups. \subsection{Searching for periodic signals in the RV} A least squares periodogram (see Appendix A) reveals one very significant peak at 7.783 days (see Fig.~\ref{fig:fig4}a). The data set contains 91 independent RV measurements. However, many of them are clustered together within groups of a few hours. The values we use to generate the periodogram and for the orbital fitting (shown in Table~\ref{rv}) are averaged on a nightly basis. Fig.~\ref{fig:fig4}b shows the empirical False Alarm Probability (FAP) as a function of the Power. The 7.783 days peak has a FAP of $0.35\%$. \begin{figure} \centering \includegraphics[angle=0,scale=0.34,viewport=0 0 675 480,clip]{11000fg4a.eps} \includegraphics[angle=0,scale=0.32,viewport=0 0 690 480,clip]{11000fg4b.eps} \caption{\textbf {a. Up:} Least-squares periodogram of the nightly averaged radial velocity measurements. The 7.78 days peak has a FAP of $0.35\%$. The dotted horizontal line illustrates a FAP lower than $1\%$ and the dashed horizontal line a FAP lower than $5\%$.\textbf {b.
Down:} Empirical False Alarm Probability as a function of the Power (red line). The gray bars illustrate the distribution of False Alarms with an arbitrary normalization used to derive the Empirical FAP. Note that the Y axis is in logarithmic scale. } \label{fig:fig4} \end{figure} It is worth noting that the RV period is larger than the photometric period. Nevertheless, to test whether the RV period could arise from rotational modulation, we searched for significant frequencies in the photometric data points. There is no significant power at the RV period, and no secondary peaks are found at the aliasing frequencies of the RV or photometric periods after the main signals are removed. To illustrate the absence of related photometric signals, we marked the RV period in Fig.~\ref{fig:fig3}, which shows the post-fit residuals of the photometric data. In addition, there is no signal at the photometric period in the RV data, as can be seen in Fig.~\ref{fig:fig4}a, which shows the RV periodogram.
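The least-squares periodogram and the empirical FAP described above can be sketched on synthetic data. The example below, with hypothetical epochs and noise levels (not the actual measurements), fits a sinusoid plus offset at each trial period, takes the fractional $\chi^{2}$ reduction as the power, and estimates the FAP by shuffling the values so that any coherent signal is destroyed:

```python
import numpy as np

rng = np.random.default_rng(2)
t_obs = np.sort(rng.uniform(0.0, 120.0, 17))   # 17 hypothetical nightly epochs [d]
rv = 0.9 * np.sin(2 * np.pi * t_obs / 7.783) + rng.normal(0.0, 0.15, 17)  # [km/s]

periods = np.linspace(2.0, 30.0, 600)          # trial periods [d]

def power(values, P):
    # least-squares fit of a sinusoid plus constant offset at trial period P
    A = np.column_stack([np.sin(2 * np.pi * t_obs / P),
                         np.cos(2 * np.pi * t_obs / P),
                         np.ones_like(t_obs)])
    coef, *_ = np.linalg.lstsq(A, values, rcond=None)
    chi = np.sum((values - A @ coef)**2)
    chi0 = np.sum((values - values.mean())**2)
    return 1.0 - chi / chi0                    # fractional chi^2 reduction

powers = np.array([power(rv, P) for P in periods])
best_P = periods[np.argmax(powers)]
assert abs(best_P - 7.783) < 0.5               # injected signal recovered

# empirical FAP: fraction of shuffled data sets whose highest power anywhere
# in the periodogram reaches the observed one
p_obs = powers.max()
hits = sum(max(power(rng.permutation(rv), P) for P in periods) >= p_obs
           for _ in range(100))
assert hits / 100 < 0.05
```

Shuffling preserves the marginal distribution of the measurements while destroying their time ordering, so the resulting distribution of maximum powers is a fair null hypothesis for the observed peak.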
\begin{table} \caption{Radial Velocity} \label{rv} \centering \begin{tabular}{c c c} \hline\hline JD days & RV (km/s) & $\sigma$ (km/s) \\ \hline 2452388.3341$^{\mathrm{a}}$ & 9.23 & 0.19 \\ 2452389.3513$^{\mathrm{a}}$ & 8.94 & 0.14 \\ 2452390.3670$^{\mathrm{a}}$ & 8.52 & 0.38 \\ 2453099.3573$^{\mathrm{a}}$ & 7.82 & 0.06 \\ 2453100.3692 & 6.96 & 0.10 \\ 2453101.3748 & 7.34 & 0.07 \\ 2453102.3876 & 7.84 & 0.05 \\ 2454375.6480 & 7.96 & 0.08 \\ 2454378.6804 & 7.72 & 0.04 \\ 2453331.6400 & 8.71 & 0.03 \\ 2453332.6800 & 8.16 & 0.03 \\ 2453841.4250 & 7.73 & 0.03 \\ 2454812.7429 & 7.76 & 0.21 \\ 2454813.7240 & 7.67 & 0.18 \\ 2454820.5057 & 7.73 & 0.16 \\ 2454821.5126 & 7.53 & 0.16 \\ 2454822.5483 & 7.96 & 0.14 \\ \hline \end{tabular} \begin{list}{}{} \item[$^{\mathrm{a}}$] From L\'opez-Santiago 2005 \end{list} \end{table} \subsection{Stellar activity jitter} It is well known that spurious RV variations can be induced by stellar activity, especially due to changes in the profile of spectral lines caused by the presence of active regions, the so-called {\it stellar jitter} (Saar \& Donahue 1997, Saar 2009). The high level of activity detected in BD+20 1790 led us at first to relate the RV variations to active regions. Since we had ruled out variations due to systematic errors or any seasonal effect, the main concern was to determine whether stellar activity was responsible.\\ It is widely accepted that the relationship between the bisectors of the cross-correlation function (CCF) and the RV is a powerful method to determine whether an RV variation may be due to stellar activity or to a planetary companion (Queloz et al. 2001, Mart\'{\i}nez-Fiorenzano et al. 2005). The CCF was determined by using the same procedure as for the RV case, computing it for the regions that include the photospheric lines most sensitive to the presence of spots, while excluding chromospheric and telluric lines.
The bisector inverse slope (BIS), defined as the difference of the average values of the top and the bottom zones, was computed to quantify the changes in the CCF bisector shape by using the method described by Queloz et al. (2001). In choosing the span zones, we avoided the wings and cores of the CCF profiles, where the errors of bisector measurements are large. In Fig.~\ref{fig:fig5} it can be seen that there is a lack of correlation between the BIS and the RV variation for all the observing runs. This indicates that the RV variations are not due to variations in the asymmetry of the photospheric line profiles, and consequently not due to stellar activity variations. The least-squares periodogram of the bisectors, shown in Fig.~\ref{fig:fig6}, exhibits two tentative peaks around 2.8 days. \begin{figure} \centering \includegraphics[angle=-90,scale=.34]{11000fg5.ps} \caption{Bisector velocity span vs. Radial velocity for all the observing runs. Symbols represent the different runs: stars for FOCES 04A, diamonds for SARG 04B, circles for SARG 06A and triangles for FOCES 07B. The lack of correlation indicates that RV variations are not due to stellar activity.} \label{fig:fig5} \end{figure} \begin{figure} \centering \includegraphics[angle=0,scale=.33]{11000fg6.eps} \caption{Periodogram for bisectors of all runs. There is no clear period for bisector variations.} \label{fig:fig6} \end{figure} We have estimated the stellar jitter following Santos et al. (2000), which takes into account the Ca~{\sc ii} H \& K index. Assuming an average value of the Ca~{\sc ii} H \& K index of about -4.2 and using their eq. [4], we derived a stellar jitter of up to 10 m s$^{-1}$. This stellar jitter is added in quadrature to the RV error.\\ As an additional test, we investigated the variation of stellar activity indicators, especially those that are ascribed to the presence of plage-like structures on the chromosphere, such as the Balmer lines, Ca~{\sc ii} H \& K and Ca~{\sc ii} IRT. 
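A minimal numpy sketch of a bisector inverse slope computation is given below. It is purely illustrative: the span zones and the depth grid are placeholders, not the Queloz et al. (2001) zones actually used in the paper. For each line depth the bisector velocity is the midpoint between the blue and red wings; the BIS is the mean bisector velocity of the top zone minus that of the bottom zone:

```python
import numpy as np

def bisector_inverse_slope(v, ccf, top=(0.60, 0.90), bottom=(0.10, 0.40)):
    """BIS of a CCF absorption profile sampled on velocity grid v.

    Depths are measured from the continuum (0 = continuum, 1 = line core);
    wing velocities at each depth are found by linear interpolation on
    each side of the core.
    """
    i_min = np.argmin(ccf)                         # line core index
    cont = np.max(ccf)                             # continuum level
    depth = (cont - ccf) / (cont - ccf[i_min])     # normalized line depth
    levels = np.linspace(0.05, 0.95, 40)           # placeholder depth grid
    bis_v = []
    for d in levels:
        blue = np.interp(d, depth[:i_min + 1], v[:i_min + 1])
        red = np.interp(d, depth[i_min:][::-1], v[i_min:][::-1])
        bis_v.append(0.5 * (blue + red))           # bisector velocity
    bis_v = np.array(bis_v)
    top_mask = (levels >= top[0]) & (levels <= top[1])
    bot_mask = (levels >= bottom[0]) & (levels <= bottom[1])
    return bis_v[top_mask].mean() - bis_v[bot_mask].mean()

# Usage: a symmetric Gaussian absorption profile has BIS ~ 0.
v = np.linspace(-30.0, 30.0, 601)
ccf = 1.0 - 0.5 * np.exp(-v**2 / (2.0 * 5.0**2))
bis = bisector_inverse_slope(v, ccf)
```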
The emission flux of these lines (and hence the spectroscopic indices) in active stars usually shows a periodic modulation, most likely due to rotational modulation of plage-like structure emission. As shown in Sect.~2, all chromospheric activity indicators are in emission above the continuum, indicating a very high level of activity. To avoid the photospheric contribution to the spectral profiles we applied the spectral subtraction technique described in detail by Montes et al. (1995). This technique makes use of the program {\sc STARMOD} developed at Penn State University (Barden 1985) and later modified by Montes et al. (1995). Also, in order to control the error and minimize the uncertainties, some routines of the astronomical data reduction package \reduceme\ \footnote{http://www.ucm.es/info/Astrof/software/reduceme/reduceme.html} developed at Universidad Complutense de Madrid (Cardiel 1999) were used. In these subtracted spectra, spectroscopic indices have been defined and computed following Saar \& Fisher (2000), K\"uster et al. (2003) and Bonfils et al. (2007). Both the Ca~{\sc ii} IRT and Ca~{\sc ii} H \& K indices were determined only for FOCES runs, owing to the wavelength coverage of the spectrograph. To avoid contamination from telluric lines we consider only the 8662\AA\ Ca~{\sc ii} IRT line. We searched for periodic signals in the spectroscopic indices by computing their least-squares periodograms. Fig.~\ref{fig:fig7} shows the variation with time (folded with the orbital phase in this case) of the Ca~{\sc ii} IRT, Ca~{\sc ii} H \& K, H$\alpha$ and H$\beta$ indices. The corresponding periodogram shows noise rather than a clear signal. This is also apparent in the index plots as an absence of modulation of the activity indices. As an example, Fig.~\ref{fig:fig8} shows the periodogram for the H$\alpha$ index. 
\begin{figure} \includegraphics[angle=-90,scale=0.32]{11000fg7a.ps} \includegraphics[angle=-90,scale=0.32]{11000fg7b.ps} \includegraphics[angle=-90,scale=0.32]{11000fg7c.ps} \includegraphics[angle=-90,scale=0.32]{11000fg7d.ps} \caption{Spectroscopic indices for chromospheric activity indicators, phase-folded with the orbital period. From top to bottom: H$\alpha$ (squares), H$\beta$ (circles), Ca~{\sc ii} IRT (triangles) and Ca~{\sc ii} H \& K (stars). The dashed line indicates the quiescent state. Error bars for the indices are of the order of 0.001.} \label{fig:fig7} \end{figure} As pointed out by Walter (1994), the rotational modulation of chromospheric lines due to plages is not always detectable in very active stars. Furthermore, in this case flares could contaminate the data, masking the actual period of variation of the indices. In order to investigate this possibility, we removed the data affected by flare events. Owing to the different wavelength coverage of the spectrographs, we considered only the H$\alpha$ and H$\beta$ indices. For the H$\alpha$ index, we have found a tentative rotational modulation with a period of 2.77 days, similar to the photometric period (see Fig.~\ref{fig:fig9}). However, the post-fit residuals shown in Fig.~\ref{fig:fig10} indicate that this could be a misleading signal, or even pure noise. For the H$\beta$ index, no clear modulation has been found.\\ The lack of variability of the BIS and the spectroscopic indices with the RV period, and the absence of a photometric period larger than $2.8$ days, strongly support the planetary companion hypothesis. \begin{figure} \centering \includegraphics[angle=0,scale=0.35,viewport=0 0 681 477,clip]{11000fg8.eps} \caption{Periodogram for the H$\alpha$ index. There is no clear period for index variations.} \label{fig:fig8} \end{figure} \begin{figure} \centering \includegraphics[angle=-90,scale=0.36]{11000fg9.ps} \caption{H$\alpha$ index for the data without flare events. 
A modulation with a period of about 2.77 days, similar to the photometric period, can be seen.} \label{fig:fig9} \end{figure} \begin{figure} \centering \includegraphics[angle=-90,scale=0.34]{11000fg10.ps} \caption{Post-fit residuals of the H$\alpha$ index for the flare-free data as a function of period. There is no clear period for index variations.} \label{fig:fig10} \end{figure} \subsection{RV wavelength dependence} Desort et al. (2007) (hereafter D07) pointed out that the color dependence (with wavelength) of the spot-induced RV peak-to-peak amplitude can be used as a diagnostic to discriminate between stellar activity and planetary companions. Because the contrast between spots and the surrounding photosphere is greater in the visible than at IR wavelengths, an attenuation of the RV amplitude towards red wavelengths is expected. Observationally this effect has been shown by e.g. Mart\'in et al. (2006), Hu\'elamo et al. (2008) and Prato et al. (2008). If the RV variations are due to a planet, the RV amplitude should be the same at every wavelength range. We investigated a possible chromatic dependence by computing the RV in two different wavelength ranges, one for red and near-IR wavelengths (7650 to 10000 \AA) and the other for blue (4300 to 4800 \AA). The resulting RV peak-to-peak amplitude is 2.19$\pm$0.20 km s$^{-1}$ for the near-IR range and 2.20$\pm$0.20 km s$^{-1}$ for the blue range. The values differ by only 0.5\% and agree within the uncertainties. Additional RV infrared follow-up will allow us to confirm this. In a forthcoming paper (Hern\'an--Obispo et al. 2009d, in prep.), we will present the first results of the study of the RVs of BD+20 1790 in the near-IR range. \subsection{RV variation by empirical spots and plages?} To estimate the order of magnitude of the expected RV amplitude due to spots, we use the empirical relations derived by Saar \& Donahue 1997 (hereafter SD97) and D07. 
These relations connect the RV amplitude with the spot filling factor {\rm $f_s$} and $v{\rm sin}$i. We consider both the D07 and SD97 relations, because the D07 relations take into account the spectral type and the whole spectral range (except telluric and chromospheric lines) to compute the empirical RV, whereas SD97 uses a single line and a G5V spectral type. Using SD97 eq. [1], we derived an amplitude of up to 575 m s$^{-1}$, and using D07 eq. [5] we similarly estimate an amplitude of up to 600 m s$^{-1}$. As mentioned, these results should be taken as order-of-magnitude estimates. Further effects are not taken into account here, such as the spot location on the stellar surface, given by the colatitude $\theta$, and the spot temperature. SD97 eq. [1] and D07 eq. [5] considered the simple case of an equatorial spot, but SD97 assumed $T_{spot}$ = 0 K while D07 assumed a spot temperature 1000 K cooler than the photosphere. The difference between the RV amplitudes derived from the two equations could be due to this different spot temperature. On the other hand, we can estimate the spot filling factor that could produce the RV signal of our data. We considered an average semi-amplitude of 1 km s$^{-1}$. The {\rm $f_s$} estimated from SD97 is then 23\%, while D07 indicates 19\%. The {\rm $f_s$} measured from the photometric variation is about 4\%. These results indicate that the spot filling factor needed to explain the RV variation purely by spots is not in agreement with the photometry. Saar (2003) and Saar (2009) made significant efforts to model plage-induced RV jitter. Although the models are mostly applicable to solar-like stars, we can estimate the plage filling factor {\rm $f_p$} that could produce the RV signal by using the Saar (2009) equation, valid for $v{\rm sin}$i $>$ 6 km s$^{-1}$, that connects the RV amplitude with {\rm $f_p$}. The estimated {\rm $f_p$} is about 70\%, which strongly suggests that the RV variation is not due to chromospheric plages. 
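The SD97 scaling can be sketched as follows, assuming the commonly quoted form of their eq. [1], $A \simeq 6.5\, f_s^{0.9}\, v\sin i$ m s$^{-1}$ (with $f_s$ in percent and $v\sin i$ in km s$^{-1}$); treat the coefficient and exponent here as assumptions of this illustration rather than a restatement of the paper's exact computation:

```python
def spot_rv_amplitude_sd97(f_spot_pct, vsini_kms):
    """Approximate spot-induced RV amplitude (m/s), assuming the commonly
    quoted Saar & Donahue (1997) scaling A ~ 6.5 * f^0.9 * vsini, with the
    spot filling factor f in percent and vsini in km/s (equatorial spot,
    simplified geometry)."""
    return 6.5 * f_spot_pct ** 0.9 * vsini_kms

def spot_filling_from_amplitude(amp_ms, vsini_kms):
    """Invert the same relation: the filling factor (percent) needed to
    produce a given RV amplitude (m/s)."""
    return (amp_ms / (6.5 * vsini_kms)) ** (1.0 / 0.9)

# Usage (illustrative values): amplitude implied by a 4% photometric spot
# coverage, and the round trip back to the filling factor.
amp = spot_rv_amplitude_sd97(4.0, 10.0)
f_back = spot_filling_from_amplitude(amp, 10.0)
```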
\subsection{What would the RV signal be without a planet?} It is important to remark that the empirical relations derived by SD97 and D07 do not take into account the chromatic effect of spots on the RV signal. We therefore investigated how much RV signal would be expected in the absence of a planet, and the degree of RV attenuation with wavelength (assuming the RVs are due to cool spots). In order to quantify the attenuation if the cause of the variations were spots, we investigated how much spots affect the line profiles. However, the $v {\rm sin}$i of BD+20 1790 is too low to model the photosphere by generating Doppler imaging spot maps. To carry out a realistic approximation to the problem, we constructed spot maps using spectra of another star with similar characteristics, LO Peg, which is widely studied in the literature and whose photospheric activity is well known (Jeffries \& Jewell 1993, Jeffries et al. 1994, Eibe et al. 1998, Eibe et al. 1999, Barnes et al. 2005). LO Peg is a K5V--K7Ve star, identified by Jeffries \& Jewell (1993) as a member of the Local Association, with an estimated age of 20--30 Myr. Jeffries et al. (1994) determined the inclination to be $50^{\circ}$. The level of activity is similar to that of BD+20 1790, but LO Peg is a rapid rotator ($v {\rm sin}$i $\sim$69 km s$^{-1}$). The LO Peg photometry suggests a spot filling factor of up to 1.5\%. Using the Doppler imaging program DoTS (Collier Cameron 1997) and an input starspot image derived for LO Peg (Barnes et al. 2005), we generated a set of line profiles for a star with $v {\rm sin}$i = 10 km s$^{-1}$ (i.e. matching that of BD+20 1790) over a complete rotation phase. The profiles thus contain asymmetries due to starspots from the observed LO Peg image. We used appropriate temperatures for the BD+20 1790 photosphere and estimated the spots to possess temperatures up to 1000 K cooler. Profiles were generated for the three different wavelengths of 4000 \AA, 6717 \AA~and 10000 \AA. 
The radial velocity variations were then calculated in order to estimate the relative amplitudes due to spot-induced variations at each of the three wavelengths. The RV attenuation with wavelength relative to 4000 \AA~is 16\% at 6717 \AA~and about 30\% at 10000 \AA, as illustrated in Fig.~\ref{fig:fig11}. Assuming, as a first approximation, the same {\rm $f_s$} for LO Peg and BD+20 1790, the RV signal for BD+20 1790 should be about 1.5 km s$^{-1}$ at 10000 \AA. \begin{figure} \centering \includegraphics[angle=-90,scale=.33]{11000fg11.ps} \caption{Radial velocity amplitude variation with wavelength, computed for LO Peg profiles.} \label{fig:fig11} \end{figure} However, in Hern\'an--Obispo et al. 2009d (in prep.), we find only about 0.5\% attenuation in the near-IR region relative to the visible, 6717 \AA~region. This result is an additional argument in support of the existence of the planetary companion. \subsection{RV jitter from flares} We have estimated the rate of flare occurrence as the fraction of the total observing time (for all runs) during which a flare was detected. Thus, we get a flare frequency of occurrence of $\sim$ 40\%. This high rate raises the question of how much RV jitter we should expect from large flares, if any. Saar (2009) presents the first approach to this issue, concluding that RV jitter due to flare occurrence would be non-negligible, although probably a stochastic jitter component. Chromospheric activity indicators exhibited an enhancement during the flare state, with broad emission in the Balmer lines and He I $D_{3}$ in emission being the most notable features (see Fig.~\ref{fig:fig1}). As pointed out by Saar (2009), although these lines are excluded when we measure the RV, it is possible that significant core filling of photospheric lines occurs during a flare event. The cause could be upper photospheric heating. Results by Houdebine (1992) indicate that heating propagates down to low photospheric levels. 
A second, related problem is the effect of large flares on the BIS. While this has not been studied until now, it is expected to be more pronounced, since bisectors are more sensitive to changes in the line profiles. To our knowledge it is reported here for the first time. Fig.~\ref{fig:fig12}~a shows the relationship between the H$\alpha$ index and the BIS, where the dashed line corresponds to the quiescent state, and higher values of the H$\alpha$ index indicate the occurrence of a flare event. It is seen that the scatter in the BIS is higher when a flare occurs. Outliers in the quiescent state correspond to a low S/N. Similar BIS behaviour is seen in Fig.~\ref{fig:fig12}~b, which shows the H$\beta$ index vs. the BIS. \begin{figure} \centering \includegraphics[angle=-90,scale=.32]{11000fg12a.ps} \includegraphics[angle=-90,scale=.32]{11000fg12b.ps} \caption{\textbf {Up}: H$\alpha$ index vs. BIS. The dashed line indicates the quiescent state. \textbf{Down}: H$\beta$ index vs. BIS. In both cases, the scatter in the BIS is higher when flare events occur. Error bars for the indices are of the order of 0.001.} \label{fig:fig12} \end{figure} \begin{figure} \centering \includegraphics[angle=-90,scale=.345]{11000fg13a.ps} \includegraphics[angle=-90,scale=.345]{11000fg13b.ps} \caption{Radial velocity variability of BD+20 1790. \textbf{a. Up}: Circular orbit. \textbf{b. Down}: Eccentric orbit. Circles represent FOCES runs, except for stars, which represent the DDT FOCES 08B run. Diamonds represent SARG runs.} \label{fig:fig13} \end{figure} \section{Orbital solution for BD+20 1790 b} We computed the orbital solution for the RV data using a standard Keplerian fit with the RV period estimated by the least-squares periodogram. The fit was first obtained considering only the FOCES data, averaged by night in order to avoid intra-night scatter. We then added the SARG data to improve the fit. 
The results of the fit considering only the FOCES data or all data from the two spectrographs were compatible within the uncertainties. With the addition of the RVs measured in winter 2008 (DDT FOCES 08b run), the least-squares periodogram is strikingly improved, and the 7.78 day peak clearly dominates the power spectrum. Attempts to perform a Keplerian fit using the second and the third highest periodogram peaks produced significantly worse folded curves. We also included in the fit the RV set computed by L\'opez--Santiago (2005). A first fit (see Fig.~\ref{fig:fig13}a) yields a close-in massive planet ($a =0.066$~AU, $M_2 \sin i = 6.54 M_{jup}$) in a circular orbit ($e = 0.05$) with an orbital period of 7.7834 days and a reduced $\chi^2$ of 1.07. We also present a second fit (see Fig.~\ref{fig:fig13}b) with the same period for an eccentric orbit ($a = 0.066$~AU, $M_2 \sin i = 6.15 M_{jup}$, $e = 0.14$, $\chi^2 = 0.997$). Owing to the sampling of the data, we cannot discard a possible eccentric orbit. Orbital elements for both solutions are compiled in Table~\ref{planet} and discussed in the next section. As an additional test, we computed the orbital solution after removing the data affected by flare events. The fit yields a solution ($a =0.066$~AU, $M_2 \sin i = 6.54 M_{jup}$, $e = 0.01$, $K = 0.91$ km s$^{-1}$) compatible with that obtained when considering all the data. The fit is presented in Fig.~\ref{fig:fig14}. \begin{figure} \centering \includegraphics[angle=-90,scale=.345]{11000fg14.ps} \caption{Radial velocity variation of BD+20 1790 computed considering only the data not affected by flares. Circles represent FOCES runs, except for stars, which represent the DDT FOCES 08B run. 
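A standard Keplerian RV model of the kind fitted here can be sketched in a few lines (an illustration, not the fitting code used in the paper; the parameter values in the usage example are taken from the circular-orbit solution):

```python
import numpy as np

def kepler_solve(M, e, tol=1e-10):
    """Solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E
    by Newton iteration (vectorised over M)."""
    M = np.atleast_1d(np.asarray(M, dtype=float))
    E = M.copy()                                  # starting guess E = M
    for _ in range(50):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def rv_model(t, P, K, e, omega, t_peri, gamma):
    """Keplerian radial-velocity curve
    RV(t) = gamma + K * [cos(nu + omega) + e*cos(omega)],
    with nu the true anomaly obtained from the eccentric anomaly."""
    M = 2.0 * np.pi * (np.asarray(t, dtype=float) - t_peri) / P
    E = kepler_solve(np.mod(M, 2.0 * np.pi), e)
    nu = 2.0 * np.arctan2(np.sqrt(1.0 + e) * np.sin(E / 2.0),
                          np.sqrt(1.0 - e) * np.cos(E / 2.0))
    return gamma + K * (np.cos(nu + omega) + e * np.cos(omega))

# Usage: for e = 0 the curve reduces to a pure sinusoid.
t = np.linspace(0.0, 15.0, 300)
rv = rv_model(t, P=7.7834, K=0.93, e=0.0, omega=0.0, t_peri=0.0, gamma=8.22)
```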
Diamonds represent SARG runs.} \label{fig:fig14} \end{figure} \begin{table} \caption{Orbital Parameters of BD+20 1790 b} \label{planet} \centering \begin{tabular}{l c c c} \hline\hline Parameter & Solution 1 & Solution 2 & \\ \hline $P_{\rm orb}$ & 7.7834 $\pm$ 0.0004 & 7.7834 $\pm$ 0.0004 & days \\ $T_{\rm conj}^{\mathrm{a}}$ & 3085.8 $\pm$ 0.5 & 3086.30 $\pm$ 0.18 & \small HJD \\ $a$ & 0.066 $\pm$ 0.001 & 0.066 $\pm$ 0.002 &AU \\ $e$ & 0.05 $\pm$ 0.02 & 0.14 $\pm$ 0.04 &\\ $K$ & 0.93 $\pm$ 0.03 & 0.84 $\pm$ 0.06 & km s$^{-1}$\\ $\gamma$ & 8.22 $\pm$ 0.01 & 8.12 $\pm$ 0.04 & km s$^{-1}$\\ $\omega$ & 200.4 $\pm$ 21.8 & 120.7 $\pm$ 14.0 & \small degrees \\ $M_2 \sin i$ & 6.54 $\pm$ 0.57 & 6.15 $\pm$ 0.59 & $M_{jup}$\\ $rms$ & 138.9 & 132.3 & m s$^{-1}$\\ $\chi^2$ & 1.071 & 0.997 & \\ \hline \end{tabular} \begin{list}{}{} \item[$^{\mathrm{a}}$] Time of periastron passage \end{list} \end{table} \section{Discussion} The lack of correlation of the BIS and the spectroscopic indices with the RV period, as well as the difference between the RV and photometric periods, strongly suggests that the RV variations are due to a planetary companion. However, it is possible that the RV variations are actually due to a combination of phenomena (activity and planet).\\ Stellar magnetic activity may be influenced and enhanced by the presence of a close-in giant planet, as proposed by Cuntz et al. (2000), Cuntz \& Shkolnik (2002) and Lanza (2008). Thus, the presence of a planetary companion could be an explanation for the high level of stellar activity detected. In a recent paper, Lanza (2009) proposes a new model that predicts the formation of prominence-like structures in very highly active stars with close-in giant planets. 
Also, as presented in Sec.~2 and Sec.~4.6, the large flares, with energy releases in the superflare regime, and the high rate of flare occurrence could have an additional source, beyond stellar activity, in the reconnection of the stellar coronal field as the planet moves inside the Alfv\'en radius of the star (Ip et al. 2004). In a forthcoming paper we explore these possible star-planet interactions in detail (Hern\'an--Obispo et al. 2009c, in prep). In addition, as suggested by the statistical analysis of Kashyap et al. (2008), the X-ray flux from stars with close-in giant planets is on average 4 times greater than that of stars with more distant planetary companions. For the 'close-in' sub-sample, the X-ray luminosity is $L_X$ = $10^{28.5}$ erg s$^{-1}$ on average. The X-ray luminosity of BD+20 1790 is 5 times brighter than this average, which is consistent with chromospheric and X-ray emission induced by the presence of a massive close-in companion (Lanza 2009).\\ Even though stellar activity could swamp the RV signal of a planetary companion, we can detect it for BD+20 1790 b since it is a massive planet. The RV variation is large enough even though the RV accuracy is typically about 150 m s$^{-1}$. Owing to the observational strategy (the data are not part of a planet-search program), the eccentricity is poorly constrained. Indeed, there is no {\it a priori} reason to discard an eccentric orbit, since the computed circularization time-scale is up to several Gyr, but more data are required to properly characterize the eccentricity. RV optical and infrared follow-up over twice the RV period will enable us to constrain the orbital solution as well as confirm the presence of the planet. More massive exoplanets ($M_2\sin i\sim$5$M_{jup}$) with orbital periods longer than about 6 days have eccentricities significantly larger than those of lower-mass planets (Udry \& Santos 2007). 
Another possibility is that additional undetected longer-period planets are maintaining the eccentricity of BD+20 1790 b. Both situations have been discussed in detail by Wu \& Murray (2003). \\ It is worth noting, however, that the star is metal-rich, as presented in Sec.~2. The existence of a correlation between stellar metallicity and planet mass has been reported by e.g. Santos et al. (2001), Fischer \& Valenti (2005) and Guillot et al. (2006). Massive planets tend to form around metal-rich stars, i.e., planets that orbit metal-rich stars also have higher-mass cores. \\ Compared to other planets of similar masses and orbits\footnote{Observational data for the more than 370 exoplanets are compiled in the {\it Extrasolar Planets Encyclopaedia} ({\it http://exoplanet.eu}), maintained by J. Schneider}, and taking into account the statistical results described in recent reviews (Udry \& Santos 2007), BD+20 1790 b does not exhibit unusual characteristics, except for its young age and its relatively high mass. We used a complementary method to determine the stellar age from Mamajek \& Hillenbrand (2008) (hereafter MH08), which uses the fractional X-ray luminosity, $R_X$ = $L_X$/$L_{bol}$. MH08 demonstrate that $R_X$ has the same age-inferring capability as the chromospheric index $R^{\prime}_{HK}$. By using their equation [A3] we estimated an age for BD+20 1790 of up to 35 Myr. Considering a value of $\log R^{\prime}_{HK}$ = -4.2 on average, we can also estimate the age with the new relation proposed by MH08, their equation [3]. We computed an age of up to 58 Myr. These values are in agreement with the range estimated by L\'opez-Santiago et al. (2006). Lowrance et al. (2005) included BD+20 1790 in a coronagraphic survey for substellar companions using the coronagraph on NICMOS/HST and the 200-inch Hale Telescope (Palomar Observatory). No companions were found beyond $10$ AU. 
However, the orbital solutions we find suggest a semi-major axis below $0.1$ AU, clearly beyond their resolving capabilities. Great care must therefore be taken when extrapolating properties of early stellar evolutionary stages from the characteristics of the later stages, since current knowledge about planetary system evolution is still somewhat speculative. The exoplanetary zoo is such that new planets with unusual properties require a rethinking of planet formation and migration scenarios. Planets discovered around young stars could be the missing link connecting exoplanets and protoplanetary disks. Indeed, further study of BD+20 1790 b has the potential to improve our understanding of planetary systems at early evolutionary stages. \section{Conclusions} This paper describes the investigation of the RV variations of the young and active K5V star BD+20 1790. Based upon the analysis of the BIS of the CCF, as well as activity indicators and photometry, the presence of a planetary companion is shown to be the best interpretation. The orbital solution results in a companion with a mass in the planetary regime. The absence of a photometric period longer than $2.8$ days strongly supports the planetary origin of the observed RV variations. Two solutions for the orbit are computed and discussed. The presence of a close-in massive planet could also be an explanation for the high level of stellar activity. Since the RV data are not part of a planet-search program, we can consider our results as serendipitous evidence of a planetary companion. Indeed, additional RV optical and infrared follow-up will enable us to constrain the orbital solution as well as confirm the presence of the planet. This is thus far the youngest main-sequence star for which a planetary candidate has been reported. \begin{acknowledgements} We thank Calar Alto Observatory for the allocation of director's discretionary time to this programme. 
This work was supported by the Spanish Ministerio de Educaci\'on y Ciencia (MEC) under grant AYA2005-02750, the Ministerio de Ciencia e Innovaci\'on (MICINN) under grant AYA2008-06423-C03-03 and the Comunidad de Madrid under PRICIT project S-0505/ESP-0237 (ASTROCAM). MCGO acknowledges financial support from the European Commission in the form of a Marie Curie Intra European Fellowship (PIEF-GA-2008-220679). MHO and GAE thank Dr. Chriss Moss, support astronomer at the LT, for his help and patience. MHO also thanks Dr. Santos Pedraz, support astronomer at the Calar Alto Observatory, for his help with the DDT run. MHO is grateful to Dr. Jos\'e Antonio Caballero for valuable discussions, and also to Dr. Laurence R. Doyle for his suggestions, which were the initial inspiration for this work. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. The authors gratefully acknowledge the valuable comments and suggestions of an anonymous referee, which helped to improve the paper. \end{acknowledgements}
\section{Conclusion} We have shown the connection between mutual information estimators and neural network classifiers through the variational form of mutual information. The connection explains the rationale behind the use of sigmoid, softmax and cross-entropy from an information-theoretic perspective. The connection also brings new insight into understanding neural network classifiers. There exists previous work that referred to the negative log-likelihood (NLL) loss as maximum mutual information estimation~\cite{bahl1986maximum,lecun2006tutorial}. Despite this naming similarity, that work does not show the relationship between softmax and mutual information that we have shown here. The connection between neural network classifiers and mutual information evaluators provides more than an alternative view on neural network classifiers. By converting neural network classifiers into mutual information estimators, we obtain two positive consequences for practical applications. First, we improve the classification accuracy, in particular when the datasets are unbalanced. The new mutual information estimators even outperform the prior state-of-the-art neural network classifiers. Second, using the pointwise mutual information between the inputs and labels, we can locate objects within images more precisely. We also provide a more information-theoretic interpretation of class activation maps. We believe that this opens new ways to understand how neural network classifiers work and to improve their performance. \begin{comment} We have shown the connection between mutual information and softmax classifier through the variational form of mutual information. The connection explains the rationale behind the use of softmax cross-entropy from an information theoretic perspective, which brings a new insight to understand such classifiers. 
There exists previous work that called the negative log-likelihood (NLL) loss as maximum mutual information estimation~\cite{bahl1986maximum,lecun2006tutorial}. Despite this naming similarity, that work does not show the relationship between softmax and mutual information that we have shown here. We utilise the connection between classification and mutual information to improve the weakly-supervised object localisation task. To this end, we propose a new way to compute the classification activation map, which is based on the difference between PMIs. The experimental results show the practicality of the information theoretic approach. We believe that this opens new ways to understand and interpret how neural network classifiers work. \end{comment} \section{Informative Class Activation Maps: \\ Estimating Mutual Information Between Regions and Labels } \label{sec:cam} In this section, we show that viewing neural network classifiers as mutual information estimators contributes to a more interpretable neural network classifier, by identifying regions of an image that contain high mutual information with a label. There exists previous work showing how to identify regions of an image corresponding to particular labels, known as class activation maps (CAM). We further formalise CAMs in terms of information theory. Furthermore, with the new view of neural network classifiers as mutual information evaluators, we are able to depict the quantitative relationship between the information that the entire image and its local regions carry about a label. We call our new CAM the Informative Class Activation Map (infoCAM), since it is based on information theory. Moreover, infoCAM also improves performance on the weakly supervised object localisation (WSOL) task relative to the original CAM. To explain infoCAM, we first introduce the concept and definition of the class activation map. 
We then show how to apply it to the WSOL task. \begin{figure*}[t!] \centering \includegraphics[width=\linewidth]{imgs/Info-CAM-Illustration.png} \caption{A visualisation of the infoCAM procedure for the WSOL task. The task aims to draw a bounding box for the target object in the original image. The procedure includes: 1) feed the input image into a CNN to extract its feature maps, 2) evaluate the PMI difference between the true and the other labels of the input image for each region within the feature maps, 3) generate the bounding box by keeping the regions exceeding certain infoCAM values and finding the largest connected region and 4) interpolate and map the bounding box to the original image.} \label{fig:infoCAM-Illustration} \end{figure*} \subsection{CAM: Class Activation Map} Contemporary classification CNNs such as AlexNet~\cite{krizhevsky2012imagenet} and Inception~\cite{szegedy2015going} consist of stacks of convolutional layers interleaved with pooling layers for extracting visual features. These convolutional layers result in feature maps. A feature map is a collection of 2-dimensional grids. The size of the feature maps depends on the structure of the convolution and pooling layers. Generally, the feature maps are smaller than the original image. The number of feature maps corresponds to the number of convolutional filters. The feature maps from the final convolutional layer are usually averaged, flattened and fed into the fully-connected layer for classification~\cite{lin2014network}. Given $K$ feature maps $g_1, .. , g_K$, the fully-connected layer consists of a weight matrix $W \in \mathbb{R}^{M \times K}$, where $w_k^y$ represents the scalar weight corresponding to class $y$ for feature $k$. We use $g_k(a,b)$ to denote the value of the 2-dimensional spatial point $(a,b)$ in feature map $g_k$. In~\cite{choe2019attention}, the authors propose a way to interpret the importance of each point in the feature maps. 
The importance of spatial point $(a,b)$ for class $y$ is defined as a weighted sum over features: \begin{align} \label{eq:cam_def} M_{y}(a, b) = \sum_{k} w_k^{y} g_{k} (a, b). \end{align} We refer to $M_{y}(a, b)$ as the intensity of the point $(a, b)$. The collection of these intensity values over all grid points forms a class activation map (CAM). CAM highlights the most relevant region in the feature space for classifying $y$. The input to the softmax layer corresponding to the class label $y$ is: \begin{align} \sum_{a,b} M_{y}(a, b) = n(\mathbf{x})_y. \end{align} Intuitively, weight $w_k^y$ indicates the overall importance of the $k$th feature to class $y$, and intensity $M_{y}(a, b)$ implies the importance of the feature map at spatial location $(a, b)$ leading to the classification of image $\mathbf{x}$ as $y$. The aim of WSOL is to identify the region containing the target object in an image given a label, without any pixel-level supervision. Previous approaches tackle the WSOL task by creating a bounding box from the CAM~\cite{choe2019attention}. Such a box contains all important locations whose intensity exceeds a certain threshold. The box is then upsampled to match the size of the original image. \subsection{InfoCAM: Informative Class Activation Map} In section \ref{sec:nn_as_mi_evaluators}, we showed that the softmax classifier carries an explicit information-theoretic relationship between inputs and labels. We extend the notion of mutual information from a pair of an input image and a label to regions of the input image and labels, so as to capture the regions that have high mutual information with labels. To simplify the discussion, we assume here that there is only one feature map, \textit{i}.\textit{e}., $K=1$. However, the following results can easily be applied to the general case where $K>1$ without loss of generality. We introduce a region $R$ containing a subset of grid points in feature map $g$. 
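In numpy, the CAM of \autoref{eq:cam_def} and its relation to the class logit can be sketched as follows (an illustration only; the array shapes and random inputs are assumptions of this sketch):

```python
import numpy as np

def class_activation_map(feature_maps, W, y):
    """CAM: M_y(a, b) = sum_k w_k^y * g_k(a, b).

    feature_maps: array of shape (K, H, W) from the last conv layer.
    W:            fully-connected weights of shape (M, K).
    y:            class index.
    """
    return np.tensordot(W[y], feature_maps, axes=1)   # -> (H, W) map

def logit_from_cam(feature_maps, W, y):
    """The logit n(x)_y equals the CAM summed over all spatial locations
    (global average pooling differs only by a constant 1/(H*W) factor,
    which can be absorbed into the weights)."""
    return class_activation_map(feature_maps, W, y).sum()

# Usage with random feature maps and weights (K=3 features, 4x4 grid,
# M=5 classes).
rng = np.random.default_rng(0)
g = rng.normal(size=(3, 4, 4))
W = rng.normal(size=(5, 3))
cam = class_activation_map(g, W, 2)
```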
Mutual information is an expectation of the point-wise mutual information (PMI) between two variables, \textit{i}.\textit{e}., $\mathbb{I}(\mathbf{X},Y) = \mathbb{E}[\text{PMI}(\mathbf{x},y)]$. Given two instances of the variables, we can estimate their PMI via \autoref{eq:softmax_mi}, \textit{i}.\textit{e}., \begin{align} \PMI(\mathbf{x}, y) = n(\mathbf{x})_y - \log\sum_{y'=1}^{M}\exp(n(\mathbf{x})_{y'}) + \log M. \notag \end{align} The PMI is close to $\log M$ if $n(\mathbf{x})_y$ dominates the log-sum-exp. To find the region which is the most beneficial to the classification, we compute the difference between the PMI with the true label and the average PMI over the other labels, and decompose it into a point-wise summation as \begin{align} \Diff(\PMI(\mathbf{x})) &= \PMI(\mathbf{x}, y^*) - \frac{1}{M-1}\sum_{y' \neq y^*}\PMI(\mathbf{x}, y') \notag\\ &= \sum_{(a,b)\in g} \Big( w^{y^*} g(a,b) - \frac{1}{M-1}\sum_{y' \neq y^*} w^{y'} g(a,b) \Big). \notag \end{align} The point-wise decomposition suggests that we can compute the PMI difference with respect to a certain region. Based on this observation, we propose a new CAM, named informative CAM or infoCAM, with the new intensity function $M_{y}^{\Diff}(R)$ between region $R$ and label $y$ defined as follows: \begin{align} M_{y}^{\Diff}(R) = \sum_{(a,b)\in R} \Big( w^yg(a,b) - \frac{1}{M-1}\sum_{y' \neq y}w^{y'}g(a,b) \Big). \label{eq:info_cam_correct} \end{align} The infoCAM highlights the region which decides the classification boundary against the other labels. Moreover, we further simplify \autoref{eq:info_cam_correct} to the difference between the PMI with the true label and that with the most-unlikely label according to the classifier's outputs, denoted infoCAM+, with the new intensity: \begin{align} M_{y}^{\Diff^+}(R) = \sum_{(a,b)\in R} \big( w^y g(a,b) - w^{y'}g(a,b) \big), \label{eq:info_cam_correct_min} \end{align} where $y' = \underset{m}{\arg \min} \sum_{(a,b)\in R} w^{m}g(a,b)$.
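As an illustration, the intensities in \autoref{eq:info_cam_correct} and \autoref{eq:info_cam_correct_min} for a single feature map ($K=1$) can be sketched in numpy (the array shapes and names are our own illustration):

```python
import numpy as np

def infocam_intensity(g, w, y, region):
    """M_y^Diff(R) of Eq. (info_cam_correct) for K=1:
    sum over (a, b) in R of w^y g(a,b) - mean_{y' != y} w^{y'} g(a,b).

    g: (H, W) feature map; w: (M,) per-class scalar weights;
    region: boolean (H, W) mask selecting R."""
    M = w.shape[0]
    mean_other = (w.sum() - w[y]) / (M - 1)   # average weight of the other labels
    return float(((w[y] - mean_other) * g[region]).sum())

def infocam_plus_intensity(g, w, y, region):
    """infoCAM+ of Eq. (info_cam_correct_min): contrast the true label
    with the most-unlikely label y' = argmin_m sum_R w^m g(a,b)."""
    scores = w * g[region].sum()              # (M,) region score per class
    y_min = int(np.argmin(scores))
    return float(((w[y] - w[y_min]) * g[region]).sum())
```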
The complete procedure of WSOL with infoCAM is visually illustrated in \autoref{fig:infoCAM-Illustration}. We first feed an input image into a CNN to extract its feature maps. Then, instead of computing the CAM of the feature maps, we compute the infoCAM of varying regions given the class label. Afterwards, we retain the regions surpassing a certain intensity level and generate the bounding box that covers the largest connected remaining region~\cite{zhou2016learning}. Finally, we interpolate the generated bounding box to the original image size and merge the two. \section{Object Localisation with InfoCAM} \label{sec:exp} In this section, we demonstrate experimental results with infoCAM for WSOL. We first describe the experimental settings and then present the results. \input{secs/local_result_tbl.tex} \subsection{Experimental settings} We evaluate WSOL performance on CUB-200-2011~\cite{wah2011caltech} and Tiny-ImageNet~\cite{tiny_imagenet}. CUB-200-2011 consists of 200 bird species, with 5,994 training and 5,794 validation images. Each bird class contains roughly the same number of instances, so the dataset is approximately balanced. Since the dataset depicts only birds and no other kinds of objects, inter-class variation is subtle~\cite{dubey2018pairwise}. Therefore, CNN-based classifiers tend to concentrate on the most discriminative areas within an image while disregarding other regions that are similar among all the birds~\cite{wang2019camdrop}. Such nuance-only detection can lead to localisation accuracy degradation~\cite{choe2019attention}. Tiny-ImageNet is a reduced version of ImageNet in terms of the number of classes, the number of instances per class and the image resolution. It includes 200 classes, each consisting of 500 training and 50 validation images, and is balanced.
Unlike CUB-200-2011, which comprises only birds, Tiny-ImageNet contains a wide range of objects from animals to daily supplies. Compared with the full ImageNet, training classifiers on Tiny-ImageNet is faster due to the reduced image resolution and dataset size, yet classification becomes more challenging~\cite{odena2017conditional}. \begin{figure}[t!] \centering \includegraphics[width=0.5\linewidth]{imgs/bird-with-heatmap.png} \caption{Visualisation of comparison between CAM and infoCAM+. Red and green boxes represent the ground truth and prediction, respectively. Brighter regions represent higher CAM or infoCAM+ values.} \label{fig:reg-sig} \end{figure} To evaluate localisation, we first need to generate a bounding box for the object within an image. We generate a bounding box in the same way as in~\cite{zhou2016learning}. Specifically, after evaluating the infoCAM within each region of an image, we only retain the regions whose infoCAM values are more than 20\% of the maximum infoCAM value and abandon all the other regions. Then, we draw the smallest bounding box that covers the largest connected component. We follow the same evaluation metrics as in~\cite{choe2019attention} to evaluate localisation performance, with two accuracy measures: 1) localisation accuracy with known ground truth class (GT Loc.), and 2) top-1 localisation accuracy (Top-1 Loc.). GT Loc. draws the bounding box from the ground truth of image labels, whereas Top-1 Loc. draws the bounding box from the predicted most likely image label and also requires correct classification. The localisation of an image is judged to be correct when the intersection over union of the estimated bounding box and the ground-truth bounding box is greater than 50\%. \begin{figure*}[t!] \centering \includegraphics[width=1.0\linewidth]{imgs/all-birds.png} \caption{Visualisation of localisation with ResNet50 without using ADL on CUB-200-2011.
Images in the second and the third row correspond to CAM and infoCAM+, respectively. Estimated (green) and ground-truth (red) bounding boxes are shown separately.} \label{fig:cam-illustration-butterfly} \end{figure*} We adopt the same network architectures and hyper-parameters as in~\cite{choe2019attention}, which shows the current state-of-the-art performance. Specifically, the network backbone is ResNet50~\cite{he2016deep} and a variation of VGG16~\cite{szegedy2015going}, in which the fully connected layers are replaced with global average pooling (GAP) layers to reduce the number of parameters. The traditional softmax is used as the final layer since both datasets are well balanced. InfoCAM requires a region parameter $R$, for which we use a square region. The size of the region $R$ is set to $5$ and $4$ for VGG and ResNet in CUB-200-2011, respectively, and $3$ for both VGG and ResNet in Tiny-ImageNet. These models are tested with the Attention-based Dropout Layer (ADL) to tackle the localisation degradation problem~\cite{choe2019attention}. ADL is designed to randomly abandon some of the most discriminative image regions during training to ensure CNN-based classifiers cover the entire object. The ADL-based approaches demonstrate state-of-the-art performance on CUB-200-2011~\cite{choe2019attention} and Tiny-ImageNet~\cite{choe2018improved} for the WSOL task and are computationally efficient. We test ADL with infoCAM to enhance WSOL capability. To prevent overfitting to the test dataset, we evenly split the original validation images into two halves, one used for validation during training and the other serving as the final test dataset. We pick the trained model from the epoch that demonstrates the highest top-1 classification accuracy on the validation dataset and report the experimental results on the test dataset.
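The box-generation step (20\% threshold, largest connected component) and the IoU $> 50\%$ correctness criterion described above can be sketched in pure numpy (the flood fill and function names are our own illustration):

```python
import numpy as np

def largest_box(cam, threshold=0.2):
    """Threshold the (info)CAM at `threshold` * max, find the largest
    4-connected component, and return its box (r0, c0, r1, c1)."""
    mask = cam >= threshold * cam.max()
    seen = np.zeros_like(mask, dtype=bool)
    best = []
    H, W = mask.shape
    for i in range(H):
        for j in range(W):
            if mask[i, j] and not seen[i, j]:
                stack, comp = [(i, j)], []
                seen[i, j] = True
                while stack:                      # iterative flood fill
                    a, b = stack.pop()
                    comp.append((a, b))
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        r, c = a + da, b + db
                        if 0 <= r < H and 0 <= c < W and mask[r, c] and not seen[r, c]:
                            seen[r, c] = True
                            stack.append((r, c))
                if len(comp) > len(best):
                    best = comp
    rows = [p[0] for p in best]
    cols = [p[1] for p in best]
    return min(rows), min(cols), max(rows), max(cols)

def iou(box_a, box_b):
    """Intersection over union of two inclusive boxes (r0, c0, r1, c1)."""
    r0, c0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    r1, c1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, r1 - r0 + 1) * max(0, c1 - c0 + 1)
    area = lambda b: (b[2] - b[0] + 1) * (b[3] - b[1] + 1)
    return inter / (area(box_a) + area(box_b) - inter)
```

A localisation counts as correct when `iou(predicted_box, ground_truth_box)` exceeds 0.5.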
All experiments are run on two Nvidia 2080-Ti GPUs, with the PyTorch deep learning framework~\cite{paszke2017automatic}. \subsection{Experimental Results} Table~\ref{table:cam_info_cam} shows the localisation results on CUB-200-2011 and Tiny-ImageNet. The results demonstrate that infoCAM can consistently improve accuracy over the original CAM for WSOL under a wide range of networks and datasets. InfoCAM and infoCAM+ perform comparably to each other. ADL improves the performance of both models on the CUB-200-2011 dataset, but it reduces the performance on Tiny-ImageNet. We conjecture that dropping any part of a Tiny-ImageNet image with ADL significantly influences classification since the images are relatively small. \autoref{fig:reg-sig} highlights the difference between CAM and infoCAM. The figure suggests that infoCAM spreads relatively high intensity over the whole object, whereas CAM focuses only on the head part of the bird. Figure~\ref{fig:cam-illustration-butterfly} presents additional examples of visualisation for comparing the localisation performance of CAM to infoCAM, both without the assistance of ADL\footnote{Please refer to the supplementary material for more Tiny-ImageNet visualisation results.}. From these visualisations, we notice that the bounding boxes generated from infoCAM are formed closer to the objects than those from the original CAM. That is, infoCAM tends to precisely cover the areas where objects exist, with almost no extraneous or missing areas. For example, CAM highlights the bird heads in CUB-200-2011, whereas infoCAM also covers the bird bodies. \textbf{Ablation Study}: InfoCAM differs from CAM in two ways: 1) the new intensity function and 2) region-based intensity smoothing with parameter $R$. We conduct an ablation study to investigate which component(s) help to localise objects. The results suggest that both components are indispensable for improving localisation performance.
For the detailed results, please refer to the ablation study table in the Appendix. \section{Introduction} Neural network classifiers play an important role in contemporary machine learning and computer vision~\cite{lecun2015deep}. Since the emergence of AlexNet~\cite{krizhevsky2012imagenet}, much research has been done to improve the performance of neural network classifiers. To overcome the vanishing gradient in deep networks, the residual connection and various activation functions have been proposed~\cite{he2016deep,nair2010rectified,maas2013rectifier}. To improve generalisation, better regularisation techniques such as dropout have been developed~\cite{srivastava2014dropout}. To reach better local minima, various optimisation techniques have been suggested~\cite{duchi2011adaptive,kingma2014adam}. Although many architectural choices and optimisation methods have been explored, relatively little attention has been paid to the final layer of the neural network classifier: the cross-entropy loss with the softmax output. The combination of softmax with cross-entropy is a standard choice to train neural network classifiers. It measures the cross-entropy between the ground truth label $y$ and the output of the neural network $\hat{y}$. The network's parameters are then adjusted to reduce the cross-entropy via back-propagation. While it seems sensible to reduce the cross-entropy between the labels and the predicted probabilities, it remains a question what relation the network aims to model between input $x$ and label $y$ via this loss function, \textit{i}.\textit{e}., softmax with cross-entropy. In this work, for neural network classifiers, we explore the connection between \emph{cross-entropy with softmax} and \emph{mutual information between inputs and labels}.
From a variational form of mutual information, we prove that optimising model parameters using softmax with cross-entropy is equivalent to maximising the mutual information between input data and labels when the distribution over labels is uniform. This connection provides an alternative view on neural network classifiers: they are mutual information estimators. We further propose a probability-corrected version of softmax that relaxes the uniform distribution condition. This information-theoretic view of neural network classifiers as mutual information estimators allows us to directly access the most informative regions of the input with respect to the labels in classification tasks. This access leads us to develop infoCAM, which locates the most relevant regions for the labels within an image, given an object classification task. Compared to the traditional class activation map, infoCAM exhibits better performance in the weakly supervised object localisation task. In summary, we outline our contributions as follows: \begin{itemize} \item The previous view on cross-entropy with softmax only reflects the relationship between the outputs and the labels. We show that, with minor modifications to softmax, neural network classifiers become mutual information estimators. As a result, these mutual information estimators exhibit the information-theoretic relationship between the inputs and the labels. \item We empirically demonstrate that our mutual information estimators can \emph{accurately} evaluate mutual information. We also show that mutual information estimators can perform classification more accurately than traditional neural network classifiers. When the dataset is imbalanced, the estimators outperform the state-of-the-art classifier in our example. \item We propose the informative class activation map (infoCAM), which locates the most informative regions for the labels within an image via mutual information.
For the weakly supervised object localisation task, we achieve a new state-of-the-art result on Tiny-ImageNet with infoCAM. \end{itemize} \section{Introduction} Neural network classifiers play an essential role in the study of artificial neural networks~\cite{abiodun2018state}. First, they can solve long-term challenges that are hard for traditional approaches~\cite{lecun2006tutorial}. For example, image classification had long been a challenging problem; neural network classifiers, however, surpass traditional approaches by a large margin~\cite{krizhevsky2012imagenet}. Also, neural network classifiers can contribute to training models for object detection~\cite{he2017mask}. \begin{comment} Also, neural network classifiers can contribute to training generative adversarial networks. Specifically, a generative adversarial network uses a neural network classifier to discern whether an image is generated or not. Therefore, the quality of the neural network classifier decides the performance of the generated images. \end{comment} \begin{comment} Second, apart from classification, they can contribute to other learning tasks. We illustrate examples. First, image classification has been a challenging problem. However, neural network classifiers made breakthroughs. Their performance surpasses other traditional approaches with a large margin. Neural network classifiers play an essential role in the study of artificial neural networks~\cite{abiodun2018state}: not only can they solve important challenges with which traditional approaches have long-term difficulties~\cite{lecun2006tutorial}, but their breakthroughs can also promote other learning tasks.
As illustrations: to the former, a neural network named AlexNet makes significant progress toward this problem by surpassing other traditional approaches with a large margin in 2012, leading the current boom of deep learning~\cite{krizhevsky2012imagenet}; to the latter, training a pleasing generative adversarial network, a modern unsupervised learning model, depend on the quality of the internal neural network classifier to discern the genuineness of the given data~\cite{goodfellow2014generative}. \end{comment} Information theory has long been employed to study neural network classifiers~\cite{gish1990probabilistic}. For example, cross-entropy, a term from information theory, is almost the canonical loss function for neural network classifiers~\cite{pang2019rethinking}. Using information theory to study neural network classifiers is intuitive and plausible. We justify this as follows. Neural network classifiers process complex, high-dimensional information to classify the given data into certain classes. To quantify the amount of information that is processed by neural network classifiers, we can resort to information theory, since information theory is a principled framework for measuring how much information one can obtain from data~\cite{belghazi2018mutual}. It is thus popular and widely accepted to study and optimise neural network classifiers from information-theoretic perspectives. However, we argue that current connections between neural network classifiers and information theory lack coherence. We justify this claim as follows. To obtain the cross-entropy for training a neural network classifier, we need to convert the outputs to the range between 0 and 1. We conduct the normalisation with a sigmoid or softmax~\cite{goodfellow2016deep}. Afterwards, it is popular to interpret the normalised values as probabilities, since they are non-negative and smaller than one. Nevertheless, these `artificially' normalised outputs are not consistent with the definition of probability.
To explain: if one changes the normalisation function from one to another, the normalised values vary accordingly; however, from the perspective of the law of large numbers, probabilities are inherent properties of the data and should remain constant. \begin{comment} Despite the pervasiveness and popularity of defining and analysing neural network classifiers from information-theoretic perspectives, we argue current connections between neural network classifiers and information theory may lack coherence. We justify this argument as follows: one imperative step for evaluating cross-entropy of a neural network classifier is to convert the outputs to be within the range between 0 and 1, using normalisation functions such as sigmoid or softmax~\cite{goodfellow2016deep}. The converted results are commonly interpreted as probabilities since they are non-negative and smaller than one. However, these artificially normalised outputs are not consistent with the definition of probability: the normalised values vary in response to changing normalisation methods, whilst from the perspective of the law of large numbers, probabilities are inherent properties so they should be of constancy. \end{comment} In sum, it is popular to use cross-entropy as the loss function to train neural network classifiers. However, in order to use cross-entropy, we need to interpret (or assume) the normalised outputs of neural network classifiers as probabilities, and this interpretation seems artificial. Thus, we seek an alternative information-theoretic view of neural network classifiers that is more coherent, that is, one with fewer artificial assumptions. To this end, we propose viewing neural network classifiers as mutual information evaluators: using a variational form of mutual information to avoid interpreting outputs as probabilities.
\begin{comment} In sum, using cross-entropy as the loss function to train neural network classifiers requires interpreting normalised values as probabilities; such interpretation, however, seem to be artificial. Thus, we seek for an alternative information-theoretic view of neural network classifiers that is more coherent, \textit{i}.\textit{e}., , less artificial interpretations. To this end, we propose viewing neural network classifiers as mutual information evaluators: using a variational form of mutual information to prevent interpreting outputs as probabilities. \end{comment} We propose to view neural network classifiers as mutual information evaluators. This new information-theoretic view is not only better from the theoretical perspective, but also contributes to practical applications; we exhibit two in this paper. First, improving classification accuracy: instead of using cross-entropy as the loss function, we can use mutual information, which improves the performance of neural network classifiers. Second, locating more precisely within images the objects corresponding to the labels: we decompose the overall pointwise mutual information into a sum over sub-regions of an image and identify the regions with high pointwise mutual information. \begin{comment} The new information-theoretic view of neural network classifiers can not only be more plausible in theory, but it can also help to optimise the application neural network classifiers in various practical tasks via considering the outputs as the mutual information between the inputs and the labels. We demonstrate two applications in this paper: improving classification accuracy and locating the objects relating to the labels of an image more precisely. The former is due to modifying loss functions to make the opposite of loss become mutual information and the latter receives motivation from building the mathematical relationship between the mutual information of the entire image and its sub-regions.
\end{comment} In summary, we outline our contributions in the corresponding sections as follows: \begin{itemize} \item We show how to convert a neural network classifier into a mutual information evaluator. We also empirically show that the converted mutual information evaluator can accurately estimate mutual information. \item We show that mutual information evaluators can perform classification more accurately than traditional neural network classifiers. When the dataset is imbalanced, the evaluators outperform the state-of-the-art classifier. \item We show how viewing neural network classifiers as mutual information evaluators can help to locate the most relevant regions of an image with respect to given labels. We achieve a new state-of-the-art result on Tiny-ImageNet. \end{itemize} \begin{comment} In summary, we outline our contributions in the corresponding sections as follows: \begin{itemize} \item In \autoref{sec:conn}, we prove that classification neural networks that optimise their weights to minimise softmax cross-entropy are equivalent to those that maximise mutual information between inputs and labels with a balanced dataset. \item In \autoref{sec:mi_class_exp}, we empirically evaluate the effectiveness of classification mutual information estimators via synthetic and real-world datasets. \item In \autoref{sec:cam}, we propose infoCAM. A map that reveals the most relevant regions of an image with respect to the target label based on the difference of mutual information. \item In \autoref{sec:exp}, we demonstrate the performance of the infoCAM on WSOL results, achieving a new state-of-the-art result on Tiny-ImageNet. \end{itemize} \end{comment} \begin{comment} using a variational form of mutual information instead of interpreting outputs as probabilities.
It is intuitive and plausible to study neural network classifiers with information theory: neural network classifiers are designed to process complex and multidimensional information in order to classify the given data to certain classes, and information theory provides a principled framework for quantifying the information that has been processed by neural networks. Despite various information-theoretic terms have been employed For example, to the former, image classification has been a long-lasting challenge since the inception of computer vision, as successful recognition of objects lays the key foundation for endless down-stream applications, such as building autonomous cars and developing face IDs. In 2012, a neural network named AlexNet makes significant progress toward this problem by surpassing other traditional approaches with a large margin, leading the current boom of deep learning~\cite{krizhevsky2012imagenet,lecun2015deep}. and surpasses by a large margin all the traditional approaches at the time~\cite{krizhevsky2012imagenet,lecun2015deep}. It is intuitive and plausible to study neural networks with information theory: both natural and artificial neural networks are evolved or designed to process complex and multidimensional information, and information theory provides a principled framework for quantifying the information that has been processed by neural networks. \section{Introduction} In 2012, AlexNet makes significant progress toward ILSVRC: the ImageNet Large Scale Visual Recognition Challenge, and surpasses by a large margin all the traditional approaches at the time~\cite{krizhevsky2012imagenet,lecun2015deep}. As a result, such classification neural networks start playing a crucial role in the contemporary machine learning and computer vision community~\cite{lecun2015deep}. Apart from their strong categorisation capability, classification neural networks contribute to other tasks, \textit{e}.\textit{g}. 
generative adversarial networks use a classification network for producing visually-realistic images~\cite{goodfellow2014generative}, and object segmentation networks such as Mask-RCNN use classification networks for object detection~\cite{he2017mask}. The softmax function, or softmax in short, is the basic building block of the final layer of classification models. Previous studies interpret softmax as a function that transforms non-normalised values to probabilities since the outputs of softmax sum up to one and are non-negative~\cite{bridle1990probabilistic}. Despite the popularity of such an interpretation, softmax under this view seems to be an artificial adjustment to enforce that the outputs of classification neural networks satisfy probability axioms. This raises a question on an alternative view of softmax being more than a transformation function. In this paper, we present an information-theoretic interpretation of softmax with cross-entropy. With a variational form of mutual information, we formally prove that optimising model parameters with the softmax cross-entropy is equal to maximising the mutual information between input data and labels via assuming uniform distribution of labels. The connection provides an alternative view on the classifier as a mutual information estimator. We further propose a probability-corrected version of softmax which relaxes the uniform distribution condition. Based on experiments with synthetic datasets, we demonstrate the performance of softmax on mutual information estimation. The connection between classification and information gives a new intuition on interpreting the output of the classifier. As an application, we investigate the image classification problem, especially targeting a class activation map for weakly-supervised object classification tasks. The class activation map aims to find the region which is the most relevant to the target class. 
We propose a new approach, dubbed as infoCAM, to compute a class activation map based on the point-wise mutual information obtained by classification. Through the experiments, we evaluate the effectiveness of infoCAM on weakly-supervised object localisation with Tiny-ImageNet~\cite{tiny_imagenet} and CUB-200-2011~\cite{wah2011caltech} datasets. In summary, we outline our contributions in the corresponding sections as follows: \begin{itemize} \item In \autoref{sec:conn}, we prove that classification neural networks that optimise their weights to minimise softmax cross-entropy are equivalent to those that maximise mutual information between inputs and labels with a balanced dataset. \item In \autoref{sec:mi_class_exp}, we empirically evaluate the effectiveness of classification mutual information estimators via synthetic and real-world datasets. \item In \autoref{sec:cam}, we propose infoCAM. A map that reveals the most relevant regions of an image with respect to the target label based on the difference of mutual information. \item In \autoref{sec:exp}, we demonstrate the performance of the infoCAM on WSOL results, achieving a new state-of-the-art result on Tiny-ImageNet. \end{itemize} \end{comment} \section{NN Classifiers as MI Estimators} \label{sec:nn_as_mi_evaluators} In this section, we prove that a neural network classifier with cross-entropy loss and softmax output estimates the mutual information between inputs and labels. \section{Impact of PC-softmax on Classification} \label{sec:mi_class_exp} In this section, we measure the empirical performance of PC-softmax as a mutual information (MI) estimator and its influence on the classification task. Since it is impossible to obtain the true MI from real-world datasets, we first construct synthetic data with known properties to measure the MI estimation performance, and then use two real-world datasets to measure the impact of PC-softmax on classification tasks.
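The softmax-based estimator referred to in this section simply averages the PMI of \autoref{eq:softmax_mi} over samples; a minimal sketch from raw logits (the array names and shapes are our own illustration):

```python
import numpy as np

def softmax_mi_estimate(logits, labels):
    """Average PMI(x, y) = n(x)_y - logsumexp(n(x)) + log M over samples.

    logits: (N, M) raw network outputs n(x); labels: (N,) true classes."""
    n, M = logits.shape
    mx = logits.max(axis=1, keepdims=True)   # stabilise the exponentials
    lse = (mx + np.log(np.exp(logits - mx).sum(axis=1, keepdims=True))).squeeze(1)
    pmi = logits[np.arange(n), labels] - lse + np.log(M)
    return float(pmi.mean())
```

A perfectly confident, always-correct classifier yields an estimate of $\log M$, the maximum achievable under a uniform label distribution, while an uninformative classifier yields $0$.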
\subsection{Mutual information estimation task} To construct a synthetic dataset with a pair of continuous and discrete variables, we employ a Gaussian mixture model: \begin{align} P(\mathbf{x}) &= \sum_{y=1}^{M} P(y) \mathcal{N}(\mathbf{x} | \mathbf{\mu}_y, \mathbf{\Sigma}_y) \notag\\ P(\mathbf{x} | y) &= \mathcal{N}(\mathbf{x} | \mathbf{\mu}_y, \mathbf{\Sigma}_y), \notag \end{align} where $P(y)$ is a prior distribution over the labels. To form a classification task, we use $\mathbf{x}$ as an input variable and $y$ as a label. For the experiments, we use a mixture of five isotropic Gaussians, each of which has a unit diagonal covariance matrix with a different mean. We set the parameters of the mixtures so that significant proportions of their distributions overlap. We generate two sets of datasets: one with a uniform and the other with a non-uniform prior distribution over labels, $p(y)$. For the uniform prior, we sample 12,000 data points from each Gaussian, and for the non-uniform prior, we sample unequal numbers of data points from each Gaussian. In addition, we vary the dimension of the Gaussian distributions from 1 to 10. The detailed statistics for the Gaussian parameters and the number of samples are available in \autoref{table:synthetic_dataset_spec}. To train classification models, we divide the dataset into training, validation and test sets. We use the validation set to find the best parameter configuration of the classifier. We aim to compare the true mutual information $\mathbb{I}(\mathbf{X}, Y)$ with its softmax-based estimate. The true mutual information is, however, intractable. We thus approximate it via Monte Carlo (MC) methods using the true probability density function, expressed as: \begin{equation} \label{eq:mc_mi} \mathbb{I}(\mathbf{X}, Y) \approx \frac{1}{N} \sum_{i=1}^{N} \log \left( \frac{P(\mathbf{x}_i | y_i)}{P(\mathbf{x}_i)} \right), \end{equation} where $(\mathbf{x}_i, y_i)$ forms a paired sample.
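\autoref{eq:mc_mi} can be sketched directly for unit-variance isotropic components (the means, prior and sample count below are illustrative, not the settings of \autoref{table:synthetic_dataset_spec}):

```python
import numpy as np

def mc_mutual_information(mus, prior, n_samples=100_000, seed=0):
    """Monte Carlo estimate of I(X; Y), Eq. (mc_mi), for a mixture of
    isotropic unit-variance Gaussians.

    mus: (M, d) component means; prior: (M,) label distribution P(y)."""
    rng = np.random.default_rng(seed)
    M, d = mus.shape
    y = rng.choice(M, size=n_samples, p=prior)
    x = mus[y] + rng.normal(size=(n_samples, d))
    # log N(x | mu_k, I) up to a constant that cancels in the ratio
    log_cond = -0.5 * ((x[:, None, :] - mus[None, :, :]) ** 2).sum(-1)  # (N, M)
    m = log_cond.max(axis=1, keepdims=True)                             # stabilise exp
    log_marg = (m + np.log((prior * np.exp(log_cond - m)).sum(axis=1, keepdims=True))).squeeze(1)
    return float((log_cond[np.arange(n_samples), y] - log_marg).mean())
```

With two well-separated components and a uniform prior, the estimate approaches $\log 2$, the label entropy; with identical components it is $0$.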
\autoref{eq:mc_mi} becomes exact as $N$ approaches infinity. We use a four-layer feed-forward neural network with ReLU activations for the internal layers and softmax as the output layer\footnote{All model details used in this paper are available in the supplementary material.}. We train the model with softmax on the balanced dataset and with PC-softmax on the unbalanced dataset. We compare the experimental results against the mutual information neural estimator (MINE) proposed in \cite{belghazi2018mutual}. Note that since MINE requires a pair of input and label variables as the input of an estimator network, the classification-based MI estimator is more straightforward for measuring mutual information between the inputs and labels of classification tasks. \autoref{table:syn_balanced} summarises the experimental results with the balanced dataset. With the balanced dataset, there is no difference between softmax and PC-softmax. Note that the MC estimator has access to explicit model parameters for estimating mutual information, whereas the softmax estimator measures mutual information based on the model outputs without accessing the true distribution. We could not find a significant difference between the MC and the softmax estimator. Additionally, we report the accuracy of the trained model on the classification task. \autoref{table:syn_imbalanced} summarises the experimental results with the unbalanced dataset. The results show that PC-softmax slightly underestimates the mutual information compared with the other two approaches. It is worth noting that the classification accuracy of PC-softmax consistently outperforms that of the original softmax. The results also show that MINE slightly underestimates the MI as the input dimension increases. \subsection{Classification task} We test the classification performance of softmax and PC-softmax with two real-world datasets: MNIST~\cite{lecun2010mnist} and CUB-200-2011~\cite{wah2011caltech}.
We construct balanced and unbalanced versions of the MNIST dataset. For the balanced-MNIST, we use a subset of the original dataset. For the unbalanced-MNIST, we randomly subsample one tenth of the instances for digits 0, 2, 4, 6 and 8 from the balanced-MNIST. With CUB-200-2011, we follow the same training and validation splits as in~\cite{cui2018large}. As a result of this splitting, the training set is approximately balanced: out of the total 200 classes, 194 contain 30 instances each and the remaining 6 classes contain 29 instances each. To construct an unbalanced dataset, similar to MNIST, we randomly drop one half of the instances from one half of the bird classes. We adopt a simple convolutional neural network as a classifier for MNIST. The model contains two convolutional layers, each with max pooling and ReLU activation, followed by two fully connected layers with a final softmax. For CUB-200-2011, we apply the same architecture as Inception-V3~\cite{cui2018large}. We measure both the micro accuracy and the average per-class accuracy of the two softmax versions on both datasets. The average per-class accuracy alleviates the dominance of the majority classes in unbalanced datasets. The classification results are shown in \autoref{table:emp_soft}. PC-softmax is significantly more accurate than softmax on unbalanced datasets in terms of the average per-class accuracy. \begin{table}[t!]
\begin{subtable}[t]{\linewidth} \centering \begin{tabular}{c r r r r} \toprule \multirow{2}{*}{Dataset} & \multicolumn{2}{c}{MNIST} & \multicolumn{2}{c}{CUB-200-2011} \\ & Balanced& Unbalanced& Balanced& Unbalanced\\ \midrule softmax & 97.95 & 96.81 & 89.23 & 89.21 \\ PC-softmax & 97.91 & 96.86 & 89.18 & \textbf{89.73}* \\ \bottomrule \end{tabular} \caption{Classification accuracy (\%).} \end{subtable} \vspace{1em} \begin{subtable}[t]{\linewidth} \centering \begin{tabular}{c r r r r} \toprule \multirow{2}{*}{Dataset} & \multicolumn{2}{c}{MNIST} & \multicolumn{2}{c}{CUB-200-2011} \\ & Balanced& Unbalanced& Balanced& Unbalanced\\ \midrule softmax & 97.95 & 95.05 & 89.21 & 84.63 \\ PC-softmax & 97.91 & \textbf{96.30} & 89.16 & \textbf{87.69} \\ \bottomrule \end{tabular} \caption{Average per-class accuracy (\%).} \end{subtable} \vspace{1em} \caption{Classification accuracy of softmax and PC-softmax. The numbers of instances per label are equal in the balanced datasets and differ substantially in the unbalanced ones. Bold values denote p-values less than 0.05 with the Mann-Whitney U test\protect\footnotemark.} \label{table:emp_soft} \end{table} \footnotetext{Accuracy with * is higher than the current state-of-the-art~\cite{cui2018large}.} \section{Connecting Mutual Information to Softmax} \label{sec:conn} In this section, we show the connection between mutual information and the classification neural network. \end{comment} To view neural network classifiers as mutual information estimators, we need to discuss two separate cases: whether the dataset is balanced or imbalanced. \subsection{Softmax with Balanced Dataset} Softmax is widely used to map the outputs of neural networks into a categorical probability distribution for classification.
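In code, the softmax map and a batch estimate of the cross-entropy loss defined next can be written as follows (a minimal NumPy sketch; the max-shift is the standard trick for numerical stability and does not change the output):

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)  # stability shift
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    # Batch estimate of the expected cross-entropy loss: -mean log softmax(n(x))_y.
    log_probs = np.log(softmax(logits))
    return -float(np.mean(log_probs[np.arange(len(labels)), labels]))

# With all-zero logits over M = 4 classes, the loss equals log 4.
loss = cross_entropy(np.zeros((3, 4)), np.array([0, 1, 2]))  # ≈ 1.386
```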
Given a neural network $n(\mathbf{x}):\mathcal{X} \rightarrow \mathbb{R}^{M}$, softmax $\sigma:\mathbb{R}^{M} \rightarrow \mathbb{R}^{M}$ is defined as: \begin{align} \sigma(n(\mathbf{x}))_y = \frac{\exp( n(\mathbf{x})_y )}{\sum_{y'=1}^{M}\exp( n(\mathbf{x})_{y'})}. \end{align} The expected cross-entropy is often employed to train a neural network with softmax outputs. The expected cross-entropy loss is \begin{align} \label{eq:cross_entropy} L = - \mathbb{E}_{(\mathbf{X},Y)}[ n(\mathbf{x})_y - \log({\sum_{y'=1}^{M}\exp( n(\mathbf{x})_{y'})}) ], \end{align} where the expectation is taken over the joint distribution of $X$ and $Y$. Given a training set, one can train the model with the empirical joint distribution. We present an interesting connection between cross-entropy with softmax and mutual information in the following theorem. For conciseness, we only provide proof sketches for \autoref{thm:equality} and \autoref{thm:softmax_im} here. Please refer to the appendix for rigorous proofs. \begin{thm} \label{thm:equality} Let $f_\phi(\mathbf{x},y)$ be $n(\mathbf{x})_y$. The infimum of the expected cross-entropy loss with softmax outputs is equal to the mutual information between the input and label variables up to the constant $\log M$ under a uniform label distribution. \end{thm} \begin{proof} Let $f_\phi(\mathbf{x},y) = n(\mathbf{x})_y$; then the lower bound in \autoref{eq:mi_f_low_bound} is \begin{align} \mathbb{E}_{(\mathbf{X}, Y)} \bigg[ \log \frac{\exp(n(\mathbf{x})_y)}{ E_{y'}[ \exp(n(\mathbf{x})_{y'}) ]} \bigg].
\end{align} If the label distribution is uniform, it can be rewritten as \begin{align} &\mathbb{E}_{(\mathbf{X}, Y)} \bigg[ \log \frac{\exp(n(\mathbf{x})_y)}{ 1/M \sum_{y'=1}^{M} \exp(n(\mathbf{x})_{y'}) } \bigg] \notag \\ &= \mathbb{E}_{(\mathbf{X}, Y)} \bigg[ \log \frac{\exp(n(\mathbf{x})_y)}{ \sum_{y'=1}^{M} \exp(n(\mathbf{x})_{y'}) } \bigg] + \log M, \label{eq:softmax_mi} \end{align} which is equivalent to the negative expected cross-entropy loss (\ref{eq:cross_entropy}) up to the constant $\log M$. Hence, since the supremum of the r.h.s.\ of \autoref{eq:mi_f_low_bound} is the mutual information, the infimum of the expected cross-entropy equals the mutual information between the input and label variables up to the constant $\log M$. \end{proof} Note that the constant does not change the gradient of the objective. Consequently, the mutual information maximisation and the softmax cross-entropy minimisation problems have the same solutions. \subsection{Softmax with Imbalanced Dataset} The uniform label distribution assumption in \autoref{thm:equality} is restrictive since the true label distribution is unknown and often non-uniform. To relax the restriction, we propose a probability-corrected softmax (PC-softmax): \begin{align} \label{eq:prob_cor_softmax} \sigma_p(n(\mathbf{x}))_y = \frac{\exp( n(\mathbf{x})_y )}{\sum_{y'=1}^{M}P(y')\exp( n(\mathbf{x})_{y'})}, \end{align} where $P(y')$ is a distribution over label $y'$. In the experiments, we optimise the revised softmax with the empirical distribution of $P(y')$ estimated from the training set. We show below the equivalence between optimising the classifier and maximising mutual information with the new softmax. \begin{comment} \begin{thm} \label{thm:softmax_im} The mutual information between two random variable $X$ and $Y$ can be obtained via the infimum of cross-entropy with PC-softmax in \autoref{eq:prob_cor_softmax} under a mild condition on $n$.
\end{thm} \end{comment} \begin{thm} \label{thm:softmax_im} The mutual information between two random variables $X$ and $Y$ can be obtained via the infimum of cross-entropy with PC-softmax in \autoref{eq:prob_cor_softmax}, using a neural network. Such an evaluation is strongly consistent. \end{thm} See the appendix for the proof of \autoref{thm:softmax_im}. Mutual information is often used in generative models to find the maximally informative representation of an observation~\cite{hjelm2018learning,zhao2017infovae}, whereas its implication for classification has been unclear so far. The results of this section imply that a neural network classifier with softmax optimises its weights to maximise the mutual information between inputs and labels under the uniform label assumption. \subsection{Localisation of multiple objects with InfoCAM} \begin{figure}[ht] \centering \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=.99\linewidth]{imgs/multi_mnist_original.png} \caption{Original input images. } \label{fig:sub-first} \end{subfigure} \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=.99\linewidth]{imgs/multi_mnist_0_orig.png} \caption{CAM localisation. } \label{fig:sub-second} \end{subfigure} \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=.99\linewidth]{imgs/multi_mnist_0_infocam.png} \caption{InfoCAM localisation. } \label{fig:sub-third} \end{subfigure} \caption{Visual comparison between CAM and infoCAM on the multi-MNIST dataset. Each image has one or two digits on the left and/or right side. We aim to extract digit 0 in each image. } \label{fig:multi_mnist} \end{figure} \begin{comment} \begin{figure}[t!]
\centering \includegraphics[width=0.3\linewidth]{imgs/multi_mnist_original.png} \includegraphics[width=0.3\linewidth]{imgs/multi_mnist_0_orig.png} \includegraphics[width=0.3\linewidth]{imgs/multi_mnist_0_infocam.png} \caption{Visualisation of comparison between CAM and infoCAM+ for the multi-MNIST dataset. Left: original input images, each with one or two digits in the left and/or right; Middle: Localisation of digit 0 with the original CAM; Right: Localisation of digit 0 with infoCAM. } \label{fig:multi_mnist} \end{figure} \end{comment} So far, we have shown localisation results for a multi-class classification problem. We further extend our localisation experiments to multi-label classification problems. A softmax function is a generalisation of its binary case, the sigmoid function. Therefore, we can apply infoCAM to each label of a multi-label classification problem, which is a collection of binary classification tasks. For the experiment, we construct a double-digit MNIST dataset in which each image contains up to two digits randomly sampled from the original MNIST dataset~\cite{lecun2010mnist}. We place one digit on the left side and the other on the right side. Some images contain only a single digit: for each side, we first decide whether to include a digit by drawing from a Bernoulli distribution with mean 0.7, and then sample the digit uniformly at random. We remove images that contain no digit. Random samples from the double-digit MNIST are shown in \autoref{fig:sub-first}. \begin{table}[t!]
\centering \begin{tabular}{c l l l l l l l l l l} \toprule \multirow{2}{*}{\makecell{Type}} & \multicolumn{10}{c}{Digit Classification Accuracy} \\ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \midrule sigmoid & 1.00 & 0.84 & 0.86 & 0.94 & 0.89 & 0.87 & 0.87 & 0.86 & 1.00 & 1.00 \\ PC-sigmoid & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\ \bottomrule \end{tabular} \caption{Comparison of the classification accuracy of sigmoid and PC-sigmoid on the double-digit MNIST dataset. } \label{tbl:sig_pc_sig_acc} \end{table} We first compare the classification accuracy of the original sigmoid and PC-sigmoid. As shown in \autoref{tbl:sig_pc_sig_acc}, PC-sigmoid increases the classification accuracy for each digit type on the test set. InfoCAM also improves the localisation accuracy on the WSOL task: CAM achieves a localisation accuracy of 91\%, while infoCAM raises it to 98\%. Qualitative visualisations are displayed in \autoref{fig:multi_mnist}. We aim to preserve the regions of an image that are most relevant to a digit and erase all other regions. From the visualisation, one can see that infoCAM localises digits more accurately than CAM. \begin{comment} Experimental results with multi-digit MNIST~\cite{lecun2010mnist} and a subset of COCO datasets~\cite{lin2014microsoft} verified our expectation. Qualitative visualisations are displayed in \autoref{fig:multi_mnist} and \autoref{fig:multi_coco}. As to quantitative results, for multi-digit MNIST, the localisation accuracy with the original CAM and infoCAM are 91\% and 98\% respectively; for COCO, the localisation accuracy with the original CAM and infoCAM are 89\% and 94\% respective. For detailed experimental settings and results, please refer to the appendix. \end{comment} \section{Preliminaries} \label{sec:pre} In this section, we first define the notations used throughout this paper.
We then introduce the definition of mutual information and variational forms of mutual information. \subsection{Notation} We let the training data consist of $M$ classes and $N$ labelled instances, $\{ (\mathbf{x}_{i}, y_{i}) \}_{i=1}^{N}$, where $y_i \in \mathcal{Y} = \{ 1, \dots , M \}$ is the class label of input $\mathbf{x}_i$. We let $n_{\mathbf{\phi}}(\mathbf{x}): \mathcal{X} \rightarrow \mathbb{R}^M$ be a neural network parameterised by $\phi$, where $\mathcal{X}$ is the space of input $\mathbf{x}$. Without additional clarification, we assume $\mathcal{X}$ to be a compact subset of $D$-dimensional Euclidean space. We denote by $P_{XY}$ a joint distribution over $\mathcal{X} \times \mathcal{Y}$, with $(\mathbf{X}, Y) \sim P_{XY}$ being a pair of random variables. $P_{X}$ and $P_{Y}$ are the marginal distributions of $\mathbf{X}$ and $Y$, respectively. We drop the subscript from a distribution when it is clear from context. \subsection{Variational Bounds of Mutual Information} Mutual information evaluates the mutual dependence between two random variables. The mutual information between $\mathbf{X}$ and $Y$ can be expressed as: \begin{equation} \label{eq:orig_mi_def} \mathbb{I}(\mathbf{X}, Y) = \int_{\mathbf{x} \in \mathcal{X}} \bigg[ \sum_{y \in \mathcal{Y}} P(\mathbf{x}, y) \log \big( \frac{P(\mathbf{x}, y)}{P(\mathbf{x}) P(y)} \big) \bigg] d\mathbf{x}. \end{equation} Equivalently, following~\cite{poole2019variational}, we may express the definition of mutual information in \autoref{eq:orig_mi_def} as: \begin{equation} \label{eq:cond_mi_def} \mathbb{I}(\mathbf{X}, Y) = \mathbb{E}_{(\mathbf{X}, Y)} \bigg[ \log \frac{P(y | \mathbf{x})}{P(y)} \bigg], \end{equation} where $\mathbb{E}_{(\mathbf{X}, Y)}$ is an abbreviation of $\mathbb{E}_{(\mathbf{X}, Y) \sim P_{XY}}$. Computing mutual information directly from the definition is, in general, intractable due to integration.
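For finite discrete variables, by contrast, the definition is a plain double sum and can be evaluated exactly; a small numeric check on a toy joint distribution of our own choosing:

```python
import numpy as np

# Toy joint P(x, y) over x in {0, 1} (rows) and y in {0, 1} (columns).
P_xy = np.array([[0.3, 0.1],
                 [0.1, 0.5]])
P_x = P_xy.sum(axis=1, keepdims=True)   # marginal over x
P_y = P_xy.sum(axis=0, keepdims=True)   # marginal over y

# I(X; Y) = sum_{x,y} P(x, y) log [ P(x, y) / (P(x) P(y)) ]  ≈ 0.178 nats here
mi = float(np.sum(P_xy * np.log(P_xy / (P_x * P_y))))
```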
\textbf{Variational form}: Barber and Agakov introduce a commonly used lower bound of mutual information via a variational distribution $Q$~\cite{barber2003algorithm}, derived as: \begin{align} \label{eq:ba_def} \mathbb{I}(\mathbf{X}, Y) &= \mathbb{E}_{(\mathbf{X}, Y)} \bigg[ \log \frac{P(y | \mathbf{x})}{P(y)} \bigg] \notag \\ &= \mathbb{E}_{(\mathbf{X}, Y)} \bigg[ \log \frac{Q(y | \mathbf{x})}{P(y)} \frac{P(y | \mathbf{x})}{Q(y | \mathbf{x})} \bigg] \notag \\ &= \mathbb{E}_{(\mathbf{X}, Y)} \bigg[ \log \frac{Q(y | \mathbf{x})}{P(y)} \bigg] + \underbrace{\mathbb{E}_{(\mathbf{X}, Y)} \bigg[ \log \frac{P(\mathbf{x},y)}{Q(\mathbf{x},y)} \bigg]}_{D_{KL}(P(\mathbf{x}, y) || Q(\mathbf{x}, y))} \notag - \underbrace{\mathbb{E}_{(\mathbf{X})} \bigg[ \log \frac{P(\mathbf{x})}{Q(\mathbf{x})} \bigg]}_{D_{KL}(P(\mathbf{x}) || Q(\mathbf{x}))} \notag \\ &\ge \mathbb{E}_{(\mathbf{X}, Y)} \bigg[ \log \frac{Q(\mathbf{x},y)}{P(\mathbf{x})P(y)} \bigg]. \end{align} The inequality in \autoref{eq:ba_def} holds since the KL divergence is non-negative; moreover, for the reparameterised $Q$ introduced below, the marginal term $D_{KL}(P(\mathbf{x}) \,||\, Q(\mathbf{x}))$ vanishes because $Q(\mathbf{x}) = P(\mathbf{x})$. This lower bound is tight when the variational distribution $Q(\mathbf{x},y)$ converges to the joint distribution $P(\mathbf{x},y)$, i.e., $Q(\mathbf{x},y) = P(\mathbf{x},y)$. The form in \autoref{eq:ba_def} is, however, still hard to compute since it is not easy to construct a tractable and flexible variational distribution $Q(\mathbf{x},y)$. The variational distribution $Q(\mathbf{x},y)$ can be considered a constrained function that has to satisfy the probability axioms. In particular, this constraint is difficult to enforce with a function estimator such as a neural network. To relax the function constraint, McAllester \textit{et al}.
\cite{mcallester2018formal} further apply reparameterisation and define $Q(\mathbf{x},y)$ in terms of an unconstrained function $f_{\phi}$ parameterised by $\phi$ as: \begin{equation} \label{eq:repara_q} Q(\mathbf{x},y) = \frac{P(\mathbf{x})P(y)}{E_{y' \sim P_Y}[ \exp(f_{\mathbf{\phi}}(\mathbf{x}, y')) ]} \exp(f_{\mathbf{\phi}}(\mathbf{x}, y)). \end{equation} As a consequence, the variational lower bound of mutual information $\mathbb{I}(\mathbf{X}, Y)$ can be rewritten with function $f_{\mathbf{\phi}}$ as: \begin{align} \label{eq:mi_f_low_bound} \mathbb{I}(\mathbf{X}, Y) \ge \mathbb{E}_{(\mathbf{X}, Y)} \bigg[ \log \frac{\exp(f_{\mathbf{\phi}}(\mathbf{x}, y))}{ E_{y'}[ \exp(f_{\mathbf{\phi}}(\mathbf{x}, y')) ]} \bigg]. \end{align} Thus, one can estimate mutual information without any constraint on $f$. Through the reparameterisation, the MI estimation can be recast as an optimisation problem. \section{Related Work} \begin{comment} \subsection{Loss Functions of Neural Network Classifiers} There are two common loss functions that people use in neural network classifiers: sigmoid and softmax. The former is for binary classification, and the latter addresses situations with multiple classes.
In this paper, we focus on softmax since sigmoid can be generalised to be a special form of softmax. More details can be found in \autoref{sec:nn_as_mi_evaluator}. To be more specific about softmax, it was originally proposed in \cite{bridle1990probabilistic} to act as a ``soft'' version of the argmax function: To make a particular dimension close to 1 while forcing values of the other dimensions being 0. Thus, it is not considered as mapping values as probabilities in the original paper; instead, it is interpreted as a ``winner-take-all'' operation~\cite{bridle1990probabilistic}. These two interpretations of softmax as an ``winner-take-all'' operation and probability normalisation are artificial, motivating us seeking a more theoretic view of softmax. Sigmoid is to map a value to be within the range between 0 and 1. Sigmoid is the inverse function of the logit function $\text{logit}(p) = \frac{p}{1 - p}$ the logit function maps probabilities to $(-\infty, +\infty)$, thus sigmoid maps \subsection{Learning Useful Representations with Mutual Information} Extracting useful representations from input data is attracting increasing attention. For example, \cite{hjelm2018learning} and \cite{belghazi2018mutual} aim to produce latent vectors that have the most mutual information with the input data. Moreover, Studying deep learning from the perspective of information theory is attracting increasing attention. Mutual information is used as an objective to discover informative representations of the given data. However, its implication for classification is still unclear. Furthermore, Tishby \textit{et al}. propose the information-bottleneck theory aiming to explain the relationship between the inputs and labels of deep learning models \cite{tishby2015deep}. Nevertheless, they did not formally show the relationship between the exact outputs of neural network classifiers and the mutual information quantity. 
\end{comment} \section{Network Architectures} In this section, we illustrate the neural network architectures used in the previous experiments. \autoref{fig:softmax_mi_architecture} shows the architecture of the softmax mutual information neural estimator in \autoref{sec:mi_class_exp}. \autoref{fig:mnist_class_architecture} shows the architecture of the network used to show that PC-softmax leads to higher classification accuracy on the unbalanced MNIST dataset, as in \autoref{sec:mi_class_exp}. We explain in \autoref{sec:exp} how to convert the VGG16 architecture to the VGG16-GAP architecture, which is used in infoCAM; \autoref{fig:vgg_gap_architecture} illustrates the conversion. For ResNet50 and Inception-V3, the architectures are identical to those in \cite{he2016deep} and \cite{szegedy2016rethinking}, respectively. For more detailed information, please refer to the actual implementation, which we plan to make public. \begin{figure}[!htbp] \centering \includegraphics[width=0.6\linewidth]{imgs/softmax-mi-architecture.png} \caption{The neural network architecture of the softmax mutual information estimator. The softmax in the last layer can be either the traditional or the PC one. } \label{fig:softmax_mi_architecture} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[width=0.6\linewidth]{imgs/mnist-classification-architecture.png} \caption{The neural network architecture used to show that PC-softmax leads to higher classification accuracy on the unbalanced MNIST dataset. The softmax in the last layer can be either the traditional or the PC one. } \label{fig:mnist_class_architecture} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[width=0.6\linewidth]{imgs/vgg-gap-architecture.png} \caption{Illustration of the conversion from VGG16 to VGG16-GAP.
} \label{fig:vgg_gap_architecture} \end{figure} \section{Visualisation of Data Distributions} We show both theoretically and experimentally in \autoref{sec:mi_class_exp} and \autoref{sec:nn_as_mi_evaluators} that neural network classifiers can be considered mutual information estimators. In this section, we visualise the distributions of the data used to test the effectiveness of the softmax-based mutual information estimator. In \autoref{fig:synthetic_data_distribution}, the data points are stratified subsets of the test datasets, so the visualisation reflects the dataset imbalance. Since it is impossible to visualise data whose dimension is greater than or equal to three, we apply principal component analysis (PCA) to reduce the dimension to two. Furthermore, data of different class labels become more distinguishable as the dimension increases. This may explain why classification accuracy increases with dimension. \begin{figure}[!htbp] \centering \includegraphics[width=\linewidth]{imgs/synthetic-data-distribution.png} \caption{Illustration of the synthetic dataset for evaluating the softmax-based mutual information estimator. For data whose dimension is greater than or equal to three, we visualise the PCA projections. The same colour represents the same class. } \label{fig:synthetic_data_distribution} \end{figure} \section{Further Results} In this section, we present further results on localisation and classification. \subsection{Localisation and Classification Results} \autoref{table:cam_info_cam_full} reproduces the main result together with the classification results. Note that the classification performance of CAM and infoCAM is the same since we do not modify the training objective of infoCAM. The result can be used to understand the effect of ADL on the classification task. \begin{table*} \centering \begin{tabular}{c l c c c c} \toprule & & \makecell{GT \\ Loc. (\%)} & \makecell{Top-1 \\ Loc.
(\%)} & \makecell{Top-1 \\ Cls (\%)} & \makecell{Top-5 \\ Cls (\%)} \\ \midrule \multirow{6}{*}{\makecell{VGG-\\16-\\GAP}} & CAM& 42.49 & 31.38 & 73.97 & 91.83 \\ & CAM (ADL) & 71.59 & 53.01 & 71.05 & 90.20 \\ & {infoCAM} & {52.96} & {39.79} & - & - \\ & {infoCAM} (ADL) & {73.35} & {53.80} & - & - \\ & {infoCAM+} & {59.43} & {44.40} & - & - \\ & {infoCAM+} (ADL) & \textbf{75.89} & {54.35} & - & - \\ \midrule \multirow{6}{*}{\makecell{ResNet-\\50}} & CAM& 61.66 & 50.84 & 80.54 & 94.09 \\ & CAM (ADL) & 57.83 & 46.56 & 79.22 & 94.02 \\ & {infoCAM} & {64.78} & {53.22} & - & - \\ & {infoCAM} (ADL) & {67.75} & {54.71} & - & - \\ & {infoCAM+} & {68.99} & \textbf{55.83} & - & - \\ & {infoCAM+} (ADL) & {69.63} & {55.20} & - & - \\ \bottomrule \end{tabular} \caption{Evaluation results of CAM and infoCAM on CUB-200-2011. Note that the classification accuracy of infoCAM is the same as that of CAM. InfoCAM always outperforms CAM on localisation of objects under the same model architecture.} \label{table:cam_info_cam_full} \end{table*} \begin{table*} \centering \begin{tabular}{c l c c c c} \toprule & & \makecell{GT \\ Loc. (\%)} & \makecell{Top-1 \\ Loc. (\%)} & \makecell{Top-1 \\ Cls (\%)} & \makecell{Top-5 \\ Cls (\%)} \\ \midrule \multirow{6}{*}{\makecell{VGG-\\16-\\GAP}} & CAM& 53.49 & 33.48 & 55.25 & 79.19 \\ & CAM (ADL) & 52.75 & 32.26 & 52.48 & 78.75 \\ & {infoCAM} & {55.50} & {34.27} & - & -\\ & {infoCAM} (ADL) & {53.95} & {33.05} & - & - \\ & {infoCAM+} & {55.25} & {34.27} & - & -\\ & {infoCAM+} (ADL) & {53.91} & {32.94} & - & - \\ \midrule \multirow{6}{*}{\makecell{ResNet-\\50}} & CAM& 54.56 & 40.55 & 66.45 & 86.22 \\ & CAM (ADL) & 52.66 & 36.88 & 63.21 & 83.47 \\ & {infoCAM}& \textbf{57.79} & \textbf{43.34} & - & - \\ & {infoCAM} (ADL) & {54.18} & {37.79} & - & - \\ & {infoCAM+}& {57.71} & {43.07} & - & - \\ & {infoCAM+} (ADL) & {53.70} & {37.71} & - & - \\ \bottomrule \end{tabular} \caption{Evaluation results of CAM and infoCAM on Tiny-ImageNet.
Note that the classification accuracy of infoCAM is the same as that of CAM. InfoCAM always outperforms CAM on localisation of objects under the same model architecture.} \label{table:cam_info_cam_tiny} \end{table*} \subsection{Ablation Study} \autoref{tbl:ablation} shows the results of the ablation study. We have tested the importance of three features: 1) ADL, 2) the region parameter $R$ and 3) the second subtraction term in \autoref{eq:info_cam_correct}. Combined with the results in the main text, these results suggest that both the region parameter and the subtraction term are necessary to improve localisation performance. The choice of ADL depends on the dataset. We conjecture that ADL is inappropriate for Tiny-ImageNet since removing any part of a tiny image, which is what ADL does during training, harms localisation more than it does for relatively large images. \begin{table}[t!] \centering \begin{subtable}[t]{\linewidth} \centering \begin{tabular}{c c c l l} \toprule ADL & \makecell{$R$} & \makecell{Subtraction \\ Term} & \makecell{GT Loc. (\%)} & \makecell{Top-1 \\ Loc. (\%)} \\ \midrule \multirow{3}{*}{N} & N& N& 42.49& 31.38 \\ & N& Y& 47.59 $\uparrow$& 35.01 $\uparrow$ \\ & Y& N& 53.40 $\uparrow$& 40.19 $\uparrow$ \\ \midrule \multirow{3}{*}{Y} & N& N& 71.59& 53.01 \\ & N& Y& 75.78 $\uparrow$& 54.28 $\uparrow$ \\ & Y& N& 73.56 $\uparrow$& 53.94 $\uparrow$ \\ \bottomrule \end{tabular} \caption{Localisation results on CUB-200-2011 with VGG-GAP.} \label{tbl:ablation} \end{subtable} \vspace{1em} \begin{subtable}[t]{\linewidth} \centering \begin{tabular}{c c c l l} \toprule ADL & \makecell{$R$} & \makecell{Subtraction \\ Term} & \makecell{GT Loc. (\%)} & \makecell{Top-1 \\ Loc.
(\%)} \\ \midrule \multirow{3}{*}{N} & N& N& 54.56& 40.55 \\ & N& Y& 54.29 $\downarrow$& 40.51 $\downarrow$ \\ & Y& N& 57.73 $\uparrow$& 43.34 $\uparrow$ \\ \midrule \multirow{3}{*}{Y} & N& N& 52.66& 36.88 \\ & N& Y& 52.52 $\downarrow$& 37.08 $\uparrow$ \\ & Y& N& 54.15 $\uparrow$& 37.76 $\uparrow$ \\ \bottomrule \end{tabular} \caption{Localisation results on Tiny-ImageNet with ResNet50.} \end{subtable} \caption{Ablation study on the importance of the region parameter $R$ and the subtraction term within the formulation of infoCAM. Y and N indicate whether the corresponding feature is used. Arrows indicate the change relative to the case where neither feature is used.} \label{table:ablation_study} \end{table} \subsection{Localisation Examples from Tiny-ImageNet} We present examples from the Tiny-ImageNet dataset in \autoref{fig:cam-illustration-butterfly}. These examples show that infoCAM draws tighter bounds around the target objects. \begin{comment} We present more examples from both datasets in \autoref{fig:cam-illustration-butterfly}. Again, we can find the localisation with CAM focuses on the head area of birds on CUB-200-2011 dataset. The examples from Tiny-ImageNet show the infoCAM draws tighter bound toward target objects. \end{comment} \begin{figure*}[t!] \centering \includegraphics[width=\linewidth]{imgs/tiny-imagenet-results.png} \caption{Visualisation of localisation with ResNet50 on CUB-200-2011 and Tiny-ImageNet, without the assistance of ADL. The images in the second row are generated by the original CAM approach and those in the third row correspond to infoCAM. The red and green bounding boxes are the ground truth and the estimations, respectively. } \label{fig:cam-illustration-butterfly} \end{figure*} \begin{comment} \section{Experimental Details about Multi-Object Localisation} We employ the multi-MNIST and COCO datasets to study the performance of infoCAM for localising multiple objects within an image.
We first describe the settings of multi-MNIST. Each image of the dataset consists of two digits, one on the left and one on the right. The locations of the two digits are symmetric. As to COCO, instead of using the entire original dataset, we use only images of persons and cats. That is, there are three classes of images: persons only, cats only, and persons and cats. For each class, there are 90 images for training and 10 images for testing. We use this very small subset of the original dataset to reduce the required computational resources. \end{comment} \section{Proofs} \label{sec:proofs} In this section, we provide rigorous proofs of \autoref{thm:equality} and \autoref{thm:softmax_im}. The structure of the proofs is similar to that used in \cite{belghazi2018mutual}. We assume that the input space $\Omega = \mathbf{X} \times Y$ is a compact domain of $\mathbb{R}^{d}$, where all measures are Lebesgue and absolutely continuous. We restrict neural networks to produce a single continuous output, denoted as $n(\mathbf{x})_y$. We restate the two theorems for quick reference. \textbf{Theorem 1.} \textit{Let $f_\phi(\mathbf{x},y)$ be $n(\mathbf{x})_y$. Minimising the cross-entropy loss of softmax-normalised neural network outputs is equivalent to maximising \autoref{eq:mi_f_low_bound}, \textit{i}.\textit{e}., the lower bound of mutual information, under the uniform label distribution. That is, if the dataset is balanced, then training a neural network by minimising cross-entropy with softmax is equivalent to improving an estimator toward more accurately evaluating the mutual information between data and labels.} \textbf{Theorem 2.} \textit{The mutual information between two random variables $X$ and $Y$ can be obtained via the infimum of cross-entropy with PC-softmax in \autoref{eq:prob_cor_softmax}. Such an evaluation is strongly consistent.}
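Before turning to the formal proofs, the lower bound in Theorem 1 can be checked numerically. The following sketch (our illustration, not part of the proofs) evaluates the bound for a small discrete joint distribution at the optimal critic, equal to the pointwise mutual information; since $\mathbb{E}_{Y}[\exp(\mathrm{PMI}(\mathbf{x},y))] = 1$ for every $\mathbf{x}$, the bound is tight and recovers $\mathbb{I}(\mathbf{X};Y)$ exactly:

```python
import numpy as np

# Small discrete joint distribution of (X, Y); rows index x, columns index y.
P = np.array([[0.30, 0.05],
              [0.10, 0.25],
              [0.05, 0.25]])
px = P.sum(axis=1, keepdims=True)   # marginal P(X), shape (3, 1)
py = P.sum(axis=0, keepdims=True)   # marginal P(Y), shape (1, 2)

pmi = np.log(P / (px * py))         # pointwise MI: log P(x,y) / (P(x) P(y))
mi = float((P * pmi).sum())         # I(X;Y) = E_{(X,Y)}[PMI]

# Lower bound evaluated at the optimal critic n*(x, y) = PMI(x, y):
#   E_{(X,Y)}[n*] - E_X log E_Y[exp(n*)]
joint_term = (P * pmi).sum()
marginal_term = (px.ravel() * np.log((py * np.exp(pmi)).sum(axis=1))).sum()
bound = float(joint_term - marginal_term)

# E_Y[exp(PMI(x, y))] = sum_y P(x,y)/P(x) = 1 for every x, so the
# marginal term vanishes and the bound is tight: bound equals I(X;Y).
print(mi, bound)
```

For any other critic the bound is strictly smaller, which is the fact the first lemma below exploits.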
\begin{lem} Let $\eta > 0$. There exists a family of neural network functions $n_{\mathbf{\phi}}$ with parameter $\mathbf{\phi}$ in some compact domain such that \begin{align} |\mathbb{I}(\mathbf{X};Y) - \mathbb{I}_{\mathbf{\phi}}(\mathbf{X};Y)| \le \eta, \end{align} where \begin{align} \mathbb{I}_{\mathbf{\phi}}(\mathbf{X};Y) = \underset{\mathbf{\phi}}{\sup} \ \mathbb{E}_{(\mathbf{X},Y)} \big[ n_{\mathbf{\phi}} \big] - \mathbb{E}_{\mathbf{X}} \log \mathbb{E}_{Y} [\exp(n_{\mathbf{\phi}})_y]. \end{align} \end{lem} \begin{proof} Let $n_{\mathbf{\phi}}^{\ast}(\mathbf{X},Y) = \PMI(\mathbf{X},Y) = \log \frac{P(\mathbf{X}, Y)}{P(\mathbf{X})P(Y)}$. We then have: \begin{align} \mathbb{E}_{(\mathbf{X},Y)} [n_{\mathbf{\phi}}^{\ast}(\mathbf{x})_y] = \mathbb{I}(\mathbf{X};Y) \quad \text{and} \quad \mathbb{E}_{\mathbf{X}}\mathbb{E}_{Y}[ \exp{(n_{\mathbf{\phi}}^{\ast}(\mathbf{x})_y)} ] = 1. \end{align} Then, for a neural network $n_{\mathbf{\phi}}$, the gap $\mathbb{I}(\mathbf{X};Y) - \mathbb{I}_{\mathbf{\phi}}(\mathbf{X};Y)$ satisfies: \begin{align} \mathbb{I}(\mathbf{X};Y) - \mathbb{I}_{\mathbf{\phi}}(\mathbf{X};Y) &= \mathbb{E}_{(\mathbf{X},Y)} [ n_{\mathbf{\phi}}^{\ast}(\mathbf{X},Y) - n_{\mathbf{\phi}}(\mathbf{X},Y) ] + \mathbb{E}_{\mathbf{X}} \log \mathbb{E}_{Y} [\exp(n_{\mathbf{\phi}})_y] \notag \\ &\le \mathbb{E}_{(\mathbf{X},Y)} [ n_{\mathbf{\phi}}^{\ast}(\mathbf{X},Y) - n_{\mathbf{\phi}}(\mathbf{X},Y) ] + \log \mathbb{E}_{\mathbf{X}}\mathbb{E}_{Y}[\exp(n_{\mathbf{\phi}}(\mathbf{x})_y)] \notag \\ &\le \mathbb{E}_{(\mathbf{X},Y)} [ n_{\mathbf{\phi}}^{\ast}(\mathbf{X},Y) - n_{\mathbf{\phi}}(\mathbf{X},Y) ] \notag\\ & \quad + \mathbb{E}_{\mathbf{X}}\mathbb{E}_{Y}[\exp(n_{\mathbf{\phi}}(\mathbf{x})_y) - \exp{(n_{\mathbf{\phi}}^{\ast}(\mathbf{x})_y)} ]. \label{eq:mi_diff} \end{align} The gap in \autoref{eq:mi_diff} is non-negative since the neural mutual-information estimator evaluates a lower bound. The derivation uses Jensen's inequality and the inequality $\log x \le x - 1$.
We assume $\eta > 0$ and that $n_{\mathbf{\phi}}^{\ast}(\mathbf{x})_y$ is bounded by a positive constant $M$. Via the universal approximation theorem \cite{hornik1989multilayer}, there exists $n_{\mathbf{\phi}}(\mathbf{x})_y \le M$ such that \begin{align} \mathbb{E}_{(\mathbf{X},Y)} | n_{\mathbf{\phi}}^{\ast}(\mathbf{X},Y) - n_{\mathbf{\phi}}(\mathbf{X},Y) | \le \frac{\eta}{2} \quad \text{and} \quad \mathbb{E}_{\mathbf{X}}\mathbb{E}_{Y}|n_{\mathbf{\phi}}(\mathbf{x})_y - n_{\mathbf{\phi}}^{\ast}(\mathbf{x})_y | \le \frac{\eta}{2} \exp{(-M)}. \label{eq:mi_diff_up} \end{align} By utilising that $\exp$ is Lipschitz continuous with constant $\exp(M)$ over $(-\infty, M]$, we have \begin{align} \mathbb{E}_{\mathbf{X}}\mathbb{E}_{Y}|\exp(n_{\mathbf{\phi}}(\mathbf{x})_y) - \exp(n_{\mathbf{\phi}}^{\ast}(\mathbf{x})_y) | \le \exp(M) \cdot \mathbb{E}_{\mathbf{X}}\mathbb{E}_{Y}|n_{\mathbf{\phi}}(\mathbf{x})_y - n_{\mathbf{\phi}}^{\ast}(\mathbf{x})_y | \le \frac{\eta}{2}. \label{eq:mi_diff_lips} \end{align} Combining \autoref{eq:mi_diff}, \autoref{eq:mi_diff_up} and \autoref{eq:mi_diff_lips}, we then obtain \begin{align} |\mathbb{I}(\mathbf{X};Y) - \mathbb{I}_{\mathbf{\phi}}(\mathbf{X};Y)| &\le \mathbb{E}_{(\mathbf{X},Y)} | n_{\mathbf{\phi}}^{\ast}(\mathbf{X},Y) - n_{\mathbf{\phi}}(\mathbf{X},Y) | \notag \\ &\quad + \mathbb{E}_{\mathbf{X}}\mathbb{E}_{Y}|\exp(n_{\mathbf{\phi}}(\mathbf{x})_y) - \exp(n_{\mathbf{\phi}}^{\ast}(\mathbf{x})_y) | \notag \\ &\le \frac{\eta}{2} + \frac{\eta}{2} = \eta. \end{align} \end{proof} \begin{lem} Let $\eta > 0$. Given a family of neural networks $n_{\mathbf{\phi}}$ with parameter $\mathbf{\phi}$ in some compact domain, there exists $N \in \mathbb{N}$ such that \begin{align} \forall n \ge N, \text{Pr}\big( | \widehat{\mathbb{I}}_{n}(\mathbf{X};Y) - \mathbb{I}_{\mathbf{\phi}}(\mathbf{X};Y) | \le \eta \big) = 1.
\end{align} \begin{proof} We start by employing the triangular inequality: \begin{gather} | \widehat{\mathbb{I}}_{n}(\mathbf{X};Y) - \mathbb{I}_{\mathbf{\phi}}(\mathbf{X};Y) | \notag \\ \le \underset{\mathbf{\phi}}{\sup} \ |\mathbb{E}_{(\mathbf{X},Y)}[n_{\mathbf{\phi}}(\mathbf{X},Y)] - \mathbb{E}_{(\mathbf{X},Y)_n}[n_{\mathbf{\phi}}(\mathbf{X},Y)]| \notag \\ + \underset{\mathbf{\phi}}{\sup} \ |\mathbb{E}_{\mathbf{X}} \log \mathbb{E}_{Y} [\exp(n_{\mathbf{\phi}})_y] - \mathbb{E}_{\mathbf{X}_n} \log \mathbb{E}_{Y_n} [\exp(n_{\mathbf{\phi}})_y]|. \label{eq:mi_diff_n_phi} \end{gather} We have stated previously that the neural network $n_{\mathbf{\phi}}$ is bounded by $M$, \textit{i}.\textit{e}., $n_{\mathbf{\phi}}(\mathbf{x})_y \le M$. Using the fact that $\log$ is Lipschitz continuous with constant $\exp(M)$ over the interval $[\exp(-M), \exp(M)]$, we have \begin{equation} | \log \mathbb{E}_{Y} [\exp(n_{\mathbf{\phi}})_y] - \log \mathbb{E}_{Y_n} [\exp(n_{\mathbf{\phi}})_y] | \le \exp(M) \cdot |\mathbb{E}_{Y} [\exp(n_{\mathbf{\phi}})_y] - \mathbb{E}_{Y_n} [\exp(n_{\mathbf{\phi}})_y]|. \end{equation} Using the uniform law of large numbers \cite{geer2000empirical}, we can choose $N \in \mathbb{N}$ such that for all $n \ge N$ and with probability one \begin{equation} \underset{\mathbf{\phi}}{\sup} \ |\mathbb{E}_{Y} [\exp(n_{\mathbf{\phi}})_y] - \mathbb{E}_{Y_n} [\exp(n_{\mathbf{\phi}})_y]| \le \frac{\eta}{4} \exp(-M).
\end{equation} That is, \begin{align} | \log \mathbb{E}_{Y} [\exp(n_{\mathbf{\phi}})_y] - \log \mathbb{E}_{Y_n} [\exp(n_{\mathbf{\phi}})_y] | \le \frac{\eta}{4}. \end{align} Therefore, using the triangle inequality, we can rewrite \autoref{eq:mi_diff_n_phi} as: \begin{gather} | \widehat{\mathbb{I}}_{n}(\mathbf{X};Y) - \mathbb{I}_{\mathbf{\phi}}(\mathbf{X};Y) | \le \underset{\mathbf{\phi}}{\sup} \ |\mathbb{E}_{(\mathbf{X},Y)}[n_{\mathbf{\phi}}(\mathbf{X},Y)] - \mathbb{E}_{(\mathbf{X},Y)_n}[n_{\mathbf{\phi}}(\mathbf{X},Y)]| \notag \\ + \underset{\mathbf{\phi}}{\sup} \ |\mathbb{E}_{\mathbf{X}} \log \mathbb{E}_{Y} [\exp(n_{\mathbf{\phi}})_y] - \mathbb{E}_{\mathbf{X}_n} \log \mathbb{E}_{Y} [\exp(n_{\mathbf{\phi}})_y]| + \frac{\eta}{4}. \label{eq:mi_diff_add_const} \end{gather} Using the uniform law of large numbers again, we can choose $N \in \mathbb{N}$ such that for all $n \ge N$ and with probability one \begin{align} \underset{\mathbf{\phi}}{\sup} \ |\mathbb{E}_{\mathbf{X}} \log \mathbb{E}_{Y} [\exp(n_{\mathbf{\phi}})_y] - \mathbb{E}_{\mathbf{X}_n} \log \mathbb{E}_{Y} [\exp(n_{\mathbf{\phi}})_y]| \le \frac{\eta}{4} \label{eq:margin_less} \end{align} and: \begin{align} \underset{\mathbf{\phi}}{\sup} \ |\mathbb{E}_{(\mathbf{X},Y)}[n_{\mathbf{\phi}}(\mathbf{X},Y)] - \mathbb{E}_{(\mathbf{X},Y)_n}[n_{\mathbf{\phi}}(\mathbf{X},Y)]| \le \frac{\eta}{2}. \label{eq:join_less} \end{align} Combining \autoref{eq:mi_diff_add_const}, \autoref{eq:margin_less} and \autoref{eq:join_less} leads to \begin{equation} | \widehat{\mathbb{I}}_{n}(\mathbf{X};Y) - \mathbb{I}_{\mathbf{\phi}}(\mathbf{X};Y) | \le \frac{\eta}{2} + \frac{\eta}{4} + \frac{\eta}{4} = \eta. \end{equation} \end{proof} \end{lem} Now, combining the above two lemmas, we prove that our mutual information evaluator is strongly consistent.
\begin{proof} Using the triangular inequality, we have \begin{align} |\mathbb{I}(\mathbf{X};Y) - \widehat{\mathbb{I}}_{n}(\mathbf{X};Y)| \le |\mathbb{I}(\mathbf{X};Y) - \mathbb{I}_{\mathbf{\phi}}(\mathbf{X};Y)| + |\mathbb{I}_{\mathbf{\phi}}(\mathbf{X};Y) - \widehat{\mathbb{I}}_{n}(\mathbf{X};Y)| \le \eta + \eta = 2\eta. \end{align} Since $\eta > 0$ is arbitrary, the claim follows. \end{proof} \section{Temporary} Remember that one of our goals is to find the most representative patch from an image, which is highly informative for identifying the label of the image. The point-wise mutual information between an input image patch $X_i$ and the label $Y$ would be useful for solving this problem. Suppose $X$ can be decomposed into multiple features $X = \{X_1, X_2, ..., X_R\}$; the mutual information between $X$ and $Y$ can be rewritten as \begin{align} MI(X,Y) &= \mathbb{E}_{(X,Y)}\bigg[ \log \frac{P(X|Y)}{P(X)} \bigg] \\ &= \mathbb{E}_{(X,Y)}\bigg[ \log \frac{P(X_1,..,X_R|Y)}{P(X_1,...,X_R)} \bigg] \notag \\ &= \mathbb{E}_{(X,Y)}\bigg[ \log \frac{P(X_i|X_{-i}, Y)}{P(X_i|X_{-i})} \bigg] \hfill \text{(by chain-rule)} \notag \\ &\geq \mathbb{E}_{(X,Y)}\bigg[ \log \frac{f(X_i, X_{-i}, Y)}{\mathbb{E}_{(X_{-i}',Y')}[f(X_i, X_{-i}', Y')]} \bigg] \quad \notag\\ &\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\text{(By \autoref{eq:mi_f_low_bound})} \notag, \end{align} where $f$ is a function that maps $\mathcal{X}_i \times \mathcal{X}_i^{R-1} \times \mathcal{Y} \rightarrow \mathbb{R}$. Note that the function $f$ should be permutation invariant to make the order of $X_i$ irrelevant. Let $\Pi_R$ be the set of all permutations of the indices $\{1,...,R\}$. A function $f$ is permutation invariant iff $f(X) = f(\pi(X))$, $\forall \pi \in \Pi_R$ \cite{zaheer2017deep}. An example of such a function is: \begin{align} f(\{x_1,...,x_R\}) = \sigma(\text{pool}(\{\phi(x_1),...,\phi(x_R)\})), \end{align} where pool is a pooling operation, $\phi$ is a learnable function, and $\sigma$ is a non-linear activation function.
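A minimal numerical sketch of such a permutation-invariant $f$ (the weights of $\phi$ are toy values, and mean pooling with ReLU as $\sigma$ is one arbitrary choice among many):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))          # toy weights of the learnable map phi

def phi(x):
    """Per-element feature map phi."""
    return np.tanh(x @ W)

def f(xs):
    """f({x_1, ..., x_R}) = sigma(pool({phi(x_1), ..., phi(x_R)})).

    Mean pooling over the set makes f invariant to the order of its inputs;
    ReLU plays the role of the non-linear activation sigma."""
    pooled = np.mean([phi(x) for x in xs], axis=0)
    return np.maximum(pooled, 0.0)

xs = [rng.normal(size=4) for _ in range(5)]
perm = [xs[i] for i in (3, 0, 4, 1, 2)]   # an arbitrary permutation of the set
print(np.allclose(f(xs), f(perm)))        # True: permutation invariance
```

Sum or max pooling would work equally well; any order-dependent aggregation (e.g., concatenation) would break the invariance.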
We can learn a neural network $f:X^R \rightarrow \mathbb{R}^{|\mathcal{Y}|}$ with the following objective: \begin{align} L =\mathbb{E}_{(X,Y)}\Big[& \log f(X_i, X_{-i}, Y) \notag\\ &- \log \mathbb{E}_{(X_{-i}',Y')}\big[f(X_i, X_{-i}', Y')\big] \Big]. \end{align} Note that at its maximum, we can obtain the point-wise mutual information between $X_i$ and $Y$. The empirical loss function is \begin{align} L =\sum_{(x^{n},y^{n}) \in \mathcal{D}}\Big[& \log f(x_i^n, x_{-i}^n, y^n) \notag\\ &- \log \big[f(x_i, x_{-i}', y')\big] \Big], \end{align} where we take a random sample of $x_{-i}'$ and $y'$ from an empirical distribution. For some applications such as image modelling, $x_i$ can be considered a patch of an image $x$. Image patches, however, should not be modelled by permutation-invariant functions, because the adjacency structure between patches is an important factor in modelling the image. To model such structured data, we can place non-invariant neural network blocks before $f$. For example, the output of a CNN is a set of image features which is modelled via a permutation-invariant function. Therefore, we can construct an end-to-end learning model by passing the input of $f$ through CNN layers \begin{align} L =\sum_{(x^{n},y^{n}) \in \mathcal{D}}\Big[& \log f([g(x^n)]_i, [g(x^n)]_{-i}, y^n) \\ & -\log \big[f([g(x^n)]_i, [g(x')]_{-i}, y')\big] \Big], \notag \end{align} where $g$ is a set of layers taking $x\in\mathcal{X}$ as an input, and, with a slight abuse of notation, we let $[\cdot]_i$ and $[\cdot]_{-i}$ denote selector operations that represent the $i$th output and all outputs except the $i$th component inside the brackets, respectively. \subsection{PMI with Embedded Classifier} We explicitly model $f(x_i, x_{-i}, y) = [n_\phi(x, x_{-i})]_{y}$. \subsection{Experiments} We need two experiments. First, WSOL. Second, classification via $f$. WSOL needs to achieve SOTA. The WSOL task is slightly different from info-CAM, because we can directly compute the PMI from the objective.
The classification needs to show reasonable performance, so that we can use an interpretable model without losing too much classification performance. However, it would be best if the classification performance were strong as well.
\section{Introduction} \label{sec:intro} \IEEEPARstart{A}{} key issue for power-system planning is the contribution of renewable and other emerging energy resources to meeting demand reliably. Mechanical failures, planned maintenance, or lack of generating resource in real-time may leave a system with insufficient capacity to meet load---requiring load curtailment. The contribution of a resource to serving demand reliably is measured typically by estimating capacity-value metrics, defined through the effect that its addition to the system has on the calculated risk of load-curtailment events. The issue of real-time resource availability is particularly salient with renewable resources, as their output is governed by uncontrollable weather conditions. An IEEE Task Force focused on techniques for estimating the capacity value of wind power published a survey on that technology \cite{Keane2011a}. This new paper has a similar purpose of surveying methods for estimating the capacity value of solar power and recent activity applicable to both wind and solar. We place strong emphasis on critical review of modelling methodology, particularly with respect to capacity markets and statistical modelling, which distinguishes our review work from related publications \cite{Keane2011a,Soder2019,doorman2016}. The paper builds on earlier Task Force papers which concentrate more specifically on solar power \cite{duignan2012a,dent2016a}---while the high-level topics covered in this new paper are broadly similar to those in a previous conference paper \cite{dent2016a}, the material has been revised entirely for this, the Task Force's final report, apart from Sections~\ref{sec:methodology:probability}--\ref{sec:methodology:metrics} (these cover the essentials of the relevant probabilistic and statistical modelling, where the Task Force's thinking has evolved less rapidly).
Throughout the Task Force's activity, there is particular emphasis on matters of solar-resource assessment (with which the power-system community may be less familiar as compared to wind). In the solar-specific sections, we focus on photovoltaic (PV) solar rather than concentrating solar power (CSP). CSP has intrinsic energy-storage capability \cite{pfenninger2014a,madaeni2013b}, providing some control of co-incidence of output with high demands. This characteristic of CSP makes relevant modelling approaches fundamentally different from those for PV. A brief discussion regarding the interaction between solar power and co-located energy storage, which is applicable to CSP, is given in Section~\ref{sec:methodology:hybrid}. This paper addresses four major issues that are related to solar power. First, Section~\ref{sec:resource} discusses key properties and assessment of solar resource. Solar availability features unique spatial and temporal correlations, which are modified by design considerations such as panel orientation and the inclusion of sun-tracking systems or energy storage. Section~\ref{sec:methodology} provides a detailed discussion of the statistical methods that are used for adequacy-index and capacity-value estimation---much of which applies equally to other variable generation (VG) technologies, as well as solar. We highlight the importance of capturing statistical relationships between renewable resource and demand, and the consequences of limited data. It also discusses relevant theory associated with capacity markets. Section~\ref{sec:survey} surveys recent capacity-value studies and practice in the industrial and research literature, emphasising the consequences of different methodology choices. Finally, Section~\ref{sec:concl} concludes and discusses key research needs in this area. \section{PV-Resource Assessment} \label{sec:resource} Surface solar irradiance follows predictable diurnal and seasonal cycles.
However, solar irradiance can be difficult to model and forecast, due to cloud cover and other meteorological effects. The recent emergence of PV and its distributed nature make reliable long-term output data rare, forcing reliance on modeled PV-generation data \cite{pfenninger2016a}. Weather variability occurs at different temporal and spatial scales, from clouds moving across individual panels (seconds to minutes \cite{gami2017a}) to weather fronts moving over a region (hours to days \cite{Jewell1987a}) to multi-day regimes that dictate continental-scale weather patterns \cite{grams2017a}. Fig.~\ref{fig:resource:italy} demonstrates the variability in PV output at a single location over short timescales and that this variability is reduced if many PV systems over a wide area are aggregated. \begin{figure} \centering \includegraphics[width=\columnwidth]{CFitaly.pdf} \caption{Daily-average capacity factor for a single PV system near Milan and for PV deployed across Italy during~$2015$. Single-site and Italy-wide data are from PVOutput and Terna, respectively.} \label{fig:resource:italy} \end{figure} Modelling PV power output accurately is hampered by the difficulty of estimating solar irradiance \cite{coimbra2013a}, especially due to cloud cover. Aerosols and other atmospheric particles scatter incoming light even with clear skies \cite{schroedter-homscheidt2013a}, affecting the productivity of concentrating technologies (CSP and concentrating PV) and, to a lesser degree, PV. Moreover, deposition of aerosols and particles on panels affects productivity \cite{Micheli2017a}. Output depends also on many secondary parameters: the PV technology that is used, tilt and azimuth angles, whether panels are fixed or have tracking systems, module temperature \cite{huld2010a}, and panel shading as a function of sun angle \cite{huld2008a}. Fig.~\ref{fig:resource:orientation} illustrates the impact of orientation on PV output, using data for Jaen, Spain. 
Other weather variables play a role as well: the severity of soiling is mediated by rainfall \cite{kimber2006a} and snow can cover panels (reducing output) and reflect sunlight off the ground (increasing output) \cite{Ryberg2015a}. Finally, a PV system's inverter determines AC power output, with an efficiency that depends on utilization (power level and input voltage) and operating temperature \cite{boyson2007a}. It is common for inverters to be undersized relative to peak DC output of a panel, giving flattened power-output peaks. While this affects summer peak output in particular, snow affects winter peaks. PV output during both summer and winter peaks contributes to the capacity value of a PV system. \begin{figure} \centering \includegraphics[width=\columnwidth]{Jaen.pdf} \caption{Simulated power generation for a $1$-kW system installed in Jaen, Spain, averaged hourly over all days of~$2015$ (left) and summed over the entire year (right). The tilted systems are installed at $35$-degree tilt angle, facing exactly south, east, or west. Data from \url{https://www.renewables.ninja}.} \label{fig:resource:orientation} \end{figure} \subsection{Calculating Power Output} A key challenge in modeling overall system performance is obtaining accurate irradiance data. Several methods exist to convert irradiance to DC power output from PV panels. Common approaches are empirical models, which are parameterized using manufacturer datasheets and experimental data \cite{soto2006a,huld2010a}. The two primary weather inputs---module irradiance and temperature---are modified by the secondary parameters that are described above, requiring assumptions (\textit{e}.\textit{g}., on panel orientation) or additional data (\textit{e}.\textit{g}., aerosol optical depth or snowfall volume). These secondary parameters are of critical importance for the diurnal profile of PV generation, which, in turn, is relevant for its capacity contribution.
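As a deliberately simplified illustration of such a conversion (a textbook NOCT cell-temperature approximation with linear temperature derating, not one of the empirical models cited above; all parameter values are generic assumptions), the following sketch maps the two primary weather inputs to DC output:

```python
def pv_dc_power_kw(g_poa, t_amb, p_stc_kw=1.0, gamma=-0.004, noct=45.0):
    """Approximate DC output (kW) of a PV array from two weather inputs.

    g_poa : plane-of-array irradiance (W/m^2)
    t_amb : ambient temperature (deg C)
    gamma : power temperature coefficient (1/deg C), typical for crystalline Si
    noct  : nominal operating cell temperature (deg C)
    """
    # Standard NOCT cell-temperature approximation.
    t_cell = t_amb + g_poa * (noct - 20.0) / 800.0
    # Linear scaling in irradiance with linear temperature derating.
    return p_stc_kw * (g_poa / 1000.0) * (1.0 + gamma * (t_cell - 25.0))

# A 1-kW system at 800 W/m^2 and 30 C ambient: the cell runs at 55 C,
# so temperature derating reduces output below the irradiance-only 0.8 kW.
print(pv_dc_power_kw(800.0, 30.0))   # ~0.704 kW
```

Production tools such as PVLIB \cite{holmgren2015a} implement far more detailed versions of each step (spectral, angle-of-incidence, and inverter models, among others).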
The impacts of these secondary parameters are illustrated in Fig.~\ref{fig:resource:orientation}. The sun's average power output (the solar constant) and inclination are fundamental values. Thus, libraries such as PVLIB \cite{holmgren2015a} can estimate overall power output over a typical meteorological year (TMY) easily. TMY data provide synthetic hourly power outputs, which are sufficient for many types of analyses \cite{Janjai2009a,Remund2010a}. The sufficiency of TMY data stems, in part, from national-scale solar capacity factors having less interannual variability compared to those for wind (\textit{e}.\textit{g}., $\pm 0.3$\% in Europe versus $\pm 1.5$\% for wind \cite{Staffell2016b,pfenninger2016a}). However, the use of TMY data requires correct depiction of the temporal and spatial dependency of PV generation under real weather conditions and preservation of correlations with temperature, demand, and wind \cite{fattori2017a}. \subsection{Sources of Irradiance and Weather Data} There are three primary sources of data: ground-based measurements, satellite imagery, and meteorological reanalyses. Ground-station data are best for accuracy and high temporal resolution. However, freely available data are limited and of mixed quality, suffering from missing data, measurement errors, and time aggregation. Data are available from Baseline Surface Radiation Network (BSRN) \cite{pangea2017a}, Global Energy Balance Archive (GEBA) \cite{wild2013a}, Surface Radiation Budget (SURFRAD) \cite{NOAA2017b}, Southern African Universities Radiometric Network (SAURAN) \cite{brooks2015a}, and some national weather services. Geostationary weather satellites cover specific regions and provide half-hourly images which can be processed to derive direct and diffuse surface irradiance \cite{rigollier2004a}.
Meteosat covers Europe, northern Africa, and parts of Asia, with free data available through Satellite Application Facility on Climate Monitoring (CM-SAF) \cite{kothe2017a} and Copernicus Atmosphere Monitoring Service (CAMS) \cite{Copernicus2017c}. Geostationary Operational Environmental Satellite (GOES) covers the Americas \cite{NOAA2017d}, but no equivalent data provider exists. Prospective users of GOES must process imagery themselves or use derived products, such as National Solar Radiation Data Base (NSRDB) \cite{NREL2017e}. While satellite data are considered state-of-the-art (due to high spatial resolution), they suffer from extensive periods of missing data and do not provide global coverage yet \cite{pfenninger2016a}. Reanalyses are more consistent across space and time and provide global coverage, created by assimilating historical meteorological measurements into a numerical weather-prediction model \cite{dee2011a}. As such, reanalyses generate internally-consistent pictures of the state of the global atmosphere. Thus, reanalyses are gaining traction in simulating wind resources \cite{Staffell2016b,drew2015a,aparicio2017a}. However, spatial resolution is coarse, typically \textit{via} a $20$-km to $100$-km square grid \cite{dee2011a}. Moreover, reanalyses' focus on three-dimensional atmospheric flow means that solar irradiance so far has not been a primary consideration. Nevertheless, with appropriate bias correction, reanalyses can provide accurate PV-output simulations \cite{pfenninger2016a}. 
Recently, several turnkey services have been launched that provide freely available PV (and wind) simulations based on reanalysis data, including from National Renewable Energy Laboratory \cite{NREL2017e}, European Climatic Energy Mixes Demonstrator \cite{EC2017f}, Photovoltaic Geographical Information System \cite{JRC2017g}, Joint Research Centre's European Meteorological derived HIgh resolution RES generation dataset (covering Europe) \cite{aparicio2017a}, and the Renewables.ninja web platform (which offers simulations that are based on CM-SAF) \cite{pfenninger2016a,Staffell2016b}. \subsection{Measured Power-Output Data} Metered data from individual PV systems are an alternative to simulation. These are more challenging to obtain than for other generation technologies, due to the small and distributed nature of PV. For example, there are $1.4$~million PV systems in Australia \cite{APVI2017h}, compared to fewer than $300$~generators registered in National Electricity Market \cite{AEMO2016a}. The lack of metered data poses a challenge to system operators, for which PV output is visible only as a reduction of demand \cite{NG2012a}. Early government-funded field trials produced metered output data from small numbers of PV panels (\textit{e}.\textit{g}., $229$~systems in United Kingdom between~$2002$ and~$2006$ \cite{munzinger2006a}). Such datasets are becoming increasingly common, with some providing comprehensive real-time updates, for example from Australian Photovoltaic Institute ($6000$~systems in Australia \cite{APVI2017h}) and Sheffield Solar ($1700$~systems in United Kingdom \cite{SS2017i}). These rely on the proliferation of web-enabled inverters, which can upload high-temporal-resolution (\textit{e}.\textit{g}., five-minute) data to online aggregator services. System operators in many regions now include PV output as part of their public data.
These data must be estimated, often by combining the bottom-up approaches that are listed above with top-down statistical estimation \cite{SS2017i}, as operators cannot meter every PV system in a country. \subsection{Future Improvements} Many methodologies, including cloud imagery, physical climate models, and machine learning, are employed to improve solar-power modeling \cite{Pedro2012b,bessa2016a,coimbra2013a}. No single technique appears to be dominant for all applications. However, hybrid or ensemble machine-learning models appear to offer better accuracy than other techniques \cite{voyant2017a}. With improved models, important data issues remain: averaging data to hourly or lower resolutions, inaccurately modeled PV generation, and errors in electricity-demand data all contribute uncertainty to PV capacity values \cite{gami2017a}. Even seemingly small systematic errors (\textit{e}.\textit{g}., a $30$-minute shift in some modelled data) can have a large impact on capacity-value estimates if they affect the relative timing of peak PV generation and demand \cite{gami2017a}. From a decision-analytic perspective, there is also a need to build statistical error models for the relationship between resource datasets and real-world analogues, \textit{i}.\textit{e}., going beyond improved central estimates of historic resource. Improvements in the data and modeling for solar-power prediction bring real benefits for system planning and operations; \textit{e}.\textit{g}., the California system must handle extensive over-generation of solar power, with system-wide curtailment of solar power in~$2019$ exceeding $921$~GWh \cite{CAISO2017k}. \section{Methodology} \label{sec:methodology} This section outlines the general framework that is used for risk-based adequacy and capacity-value assessments in systems with substantial VG penetrations. Most of the material is applicable equally to all VG. Thus, this material seldom makes specific reference to solar power.
Specific consideration of energy storage is beyond the scope of this paper. Thus, energy storage is not discussed except in Section~\ref{sec:methodology:hybrid}. \subsection{Probability Background} \label{sec:methodology:probability} In adequacy assessment, we are interested in the values of available conventional capacity, $X_t$, available VG capacity, $Y_t$, and demand, $D_t$, at multiple points in time, which are indexed by $t$. Let the (random) vector, $S_t=(X_t,Y_t,D_t)$, denote the system state at $t=1, \dots, n$ within the period that is under study. The system margin, $Z_t=X_t+Y_t-D_t$, is a function of $S_t$. A full probability model for the system would be sequential, describing $S_t$ as a stochastic process over the entire time period. Such a stochastic process is needed to calculate some risk metrics, \textit{e}.\textit{g}., frequency and duration indices, or the distribution of total energy unserved across the period under study. However, some quantities, such as loss-of-load expectation (LOLE), which is defined as: \begin{equation} [\text{LOLE}] = \sum_{t=1}^n \prob{Z_t<0}, \end{equation} may be defined in terms of the marginal distributions of $S_t$ integrated over time. LOLE may be specified equivalently in terms of a simpler \emph{time-collapsed} or \emph{snapshot} model with a time-independent state vector, $S=(X,Y,D)$, the distribution of which is specified by: \begin{equation} \label{equ:methodology:probability:lole:timeIndep} \prob{S \in A} = \frac{1}{n} \sum_{t=1}^n \prob{S_t \in A}, \end{equation} for any event, $A$. In~(\ref{equ:methodology:probability:lole:timeIndep}) the distribution of the state vector, $S$, is the same as that of the state vector, $S_t$, sampled at a uniformly randomly chosen point in time. The specification in~(\ref{equ:methodology:probability:lole:timeIndep}) is helpful for some computational or theoretical analyses.
Using~(\ref{equ:methodology:probability:lole:timeIndep}), LOLE is given as $(\Delta t) \prob{Z<0}$, and expected energy unserved as $(\Delta t) \E{\max \{-Z,0\}}$, where $Z=X+Y-D$ and $\Delta t$ is the length of the period under study. The distribution of $S$ typically is estimated from the empirical distribution of observations of $S_t$. Thus, the time-collapsed model is used almost always in adequacy studies that measure risk using quantities, such as LOLE, which do not require a full sequential model. \subsection{Statistical Estimation} In the use of probabilistic and statistical concepts such as independence or correlation, it is essential to be clear as to which of the sequential and time-collapsed models these refer. For example, suppose $Y_t$ is available solar power at time~$t$ and that at any given time, $t$, the random variables, $Y_t$ and $D_t$, are independent (neither being informative about the other given the knowledge at time~$t$). Because daily minimum demand usually occurs overnight when it is dark, within the \emph{time-collapsed} model the lowest values of $D$ are associated with zero values of $Y$, introducing substantial probabilistic dependence between these two time-collapsed random variables. In reality, even conditional on information at time~$t$, there is typically still some dependence between variable generation, $Y_t$, and demand, $D_t$, due to the existence of unmodelled weather effects, which influence both $Y_t$ and $D_t$. This modifies the dependence between the corresponding time-collapsed random variables, $Y$ and $D$. If dependence between VG output and demand is considered in a time-collapsed model, often this is done using a `hindcast' approach, in which the empirical historical distribution of VG-output/demand pairs, $(y_\tau,d_\tau)$, is used as the predictive joint distribution of $(Y,D)$. The random variable, $X$, usually is assumed independent of the pair, $(Y,D)$, with a distribution estimated from an appropriate model. 
Then: \begin{equation} [\text{LOLE}] = \frac{\Delta}{N} \sum_\tau \prob{X+y_\tau < d_\tau}, \end{equation} where $\Delta$ is the length of a time step, $N$ is the number of historic years of data, and the sum is over historic times, $\tau$. Inevitably, there are limited relevant data in the hindcast approach for estimation of the empirical distribution at times of high demand and low VG output, which dominate the estimates of risk measures. This can be dealt with by using statistical extreme-value theory to smooth the extremes of a dataset \cite{Wilson2019}. To the best of our knowledge, the only works using more sophisticated direct joint modelling of the relationship between VG output and demand in a time-collapsed model are the work of Wilson \textit{et al}.\ \cite{wilson2018a} (which uses temperature as an explanatory variable for both wind and demand, and invokes independence of wind and demand conditional on temperature and on time of day, week, and year); and the work of Gao and Gorinevsky \cite{Gao2018} (which uses quantile regression to model explicitly the distribution of wind conditional on demand). Studies that consider estimation of the uncertainty that arises from the use of limited numbers of years of data typically assume that a result derived from the longest available dataset is `the truth' \cite{hasche,MadaeniSioshansiDenholm2012b}. However, this is not fully satisfactory, as the result may be driven by a small number of historic weather systems, and there may be a tendency for extreme peaks to cluster in neighbouring years, reducing further the number of fully independent datapoints. Some discussion of this is provided in the literature \cite{wilson2018a,Wilson2019}, although more work in this area and on the consequences for decision support is required. Most studies using a \emph{sequential} model assume that VG output and demand may be modelled as independent processes within the season under study \cite{billinton2011a,troffaes2015a}. 
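Concretely, the hindcast estimator above can be sketched in a few lines of Python (a minimal illustration rather than code from any of the cited works; the empirical sample for $X$ and the historic pairs $(y_\tau,d_\tau)$ are made-up numbers):

```python
import numpy as np

def hindcast_lole(x_samples, y_hist, d_hist, delta=0.5, n_years=1):
    """LOLE = (Delta/N) * sum_tau P(X + y_tau < d_tau), with the probability
    for each historic time tau evaluated over an empirical sample of X."""
    x = np.asarray(x_samples, dtype=float)
    lolps = [np.mean(x + y < d) for y, d in zip(y_hist, d_hist)]
    return delta / n_years * sum(lolps)

# Toy example: a 100 MW unit with a 10% outage probability, observed over
# two historic half-hours of paired VG output and demand (all values MW).
x_sample = np.array([100.0] * 9 + [0.0])
print(hindcast_lole(x_sample, y_hist=[5.0, 30.0], d_hist=[90.0, 120.0]))  # 0.1
```

Note that each historic time contributes its own loss-of-load probability, so the hours of high demand and low VG output dominate the sum, which is the data-sparsity issue discussed next.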
In reality, as discussed above, some dependence between these processes may be introduced by the variability of the weather. There is little research on multivariate-stochastic-process modelling of VG output and demand for adequacy assessment \cite{wilson2016a,antarescore}. \subsection{Capacity-Value Metrics} \label{sec:methodology:metrics} Capacity-value metrics are used commonly to visualise the contribution of VG (or other resources) in adequacy studies \cite{Keane2011a}. For instance, in the time-collapsed model and with respect to the loss of load probability (LOLP) risk index, the effective load-carrying capability (ELCC) of a resource, $Y$, when added to a background, $M$, is given by the solution of: \begin{equation} \label{equ:methodology:metrics:elcc} \prob{M<0} = \prob{M+Y < [\text{ELCC}]_{Y,M}}, \end{equation} and the equivalent firm capacity (EFC) is given by solving: \begin{equation} \label{equ:methodology:metrics:efc} \prob{M+Y<0} = \prob{M + [\text{EFC}]_{Y,M} < 0}. \end{equation} These capacity-value metrics are functions of the chosen risk metric and the background, $M$, to which the resource is added, as well as of the additional capacity. Thus, it is incorrect to refer to the capacity value of $Y$ without that \textit{caveat}, or to use a single capacity-value figure across multiple circumstances \cite{Zachary2019}. This nuance is particularly important in capacity-market applications. Such capacity-value metrics are also non-additive, \textit{i}.\textit{e}., the ELCC (or EFC) of an addition, $Y_1+Y_2$, typically will not equal the sum of the ELCCs (or EFCs) of $Y_1$ and $Y_2$ added to the same background. As is clear from~(\ref{equ:methodology:metrics:elcc}) and~(\ref{equ:methodology:metrics:efc}), when adding a single relatively small resource to the background of a much larger system, ELCC and EFC take very similar values. This similarity applies when calculating the marginal capacity value of a single unit in a capacity market.
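With empirical samples of the margin, these definitions reduce to quantile calculations: the ELCC is the $\prob{M<0}$-quantile of $M+Y$, and the EFC is the negative of the $\prob{M+Y<0}$-quantile of $M$. A minimal Python sketch (illustrative only, with ties between order statistics resolved by interpolation; not taken from any of the cited works):

```python
import numpy as np

def elcc(m, y):
    """Effective load-carrying capability of Y against background margin M.
    Solves P(M < 0) = P(M + Y < ELCC) on paired samples: the ELCC is the
    P(M < 0)-quantile of M + Y (interpolated between order statistics)."""
    m, y = np.asarray(m), np.asarray(y)
    return np.quantile(m + y, np.mean(m < 0))

def efc(m, y):
    """Equivalent firm capacity of Y: solves P(M + Y < 0) = P(M + EFC < 0),
    i.e. EFC is minus the P(M + Y < 0)-quantile of M."""
    m, y = np.asarray(m), np.asarray(y)
    return -np.quantile(m, np.mean(m + y < 0))

# Sanity check: adding 5 MW of firm capacity should give ELCC and EFC near 5.
rng = np.random.default_rng(0)
margin = rng.normal(10.0, 20.0, 100_000)   # background margin M (MW)
firm = np.full_like(margin, 5.0)           # a firm 5 MW addition
print(round(elcc(margin, firm), 1), round(efc(margin, firm), 1))
```

For a firm addition the two metrics coincide with the added capacity, illustrating the point above that they diverge only for additions that are non-trivial relative to system size.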
In other applications, it might be of interest to calculate the capacity value of an entire fleet of wind or solar generation when added to the background of the other resources and demand. In such cases, ELCC and EFC may take different values and it is necessary to consider which capacity-value metric is appropriate. ELCC is used most commonly; however, it is not always clear whether this choice is considered carefully with respect to the specific application. Various special cases (\textit{e}.\textit{g}., small $Y$ and exponentially distributed $X$) are surveyed by Dent and Zachary \cite{Dent2014a}, building on earlier work \cite{garver1966a,dragoon2006a,dannunzio2008b,zachary2012a}. These cases are helpful in understanding what is driving the results of capacity-value calculations. Computation is usually sufficiently straightforward that these special cases typically are not needed for model tractability. \subsection{Including VG in Capacity-Remuneration Mechanisms} \label{sec:methodology:capMech} Capacity-remuneration mechanisms (CRMs) incentivise the presence of an appropriate level of generation and equivalent capacity for resource-adequacy purposes. They take a range of forms, with a useful taxonomy provided by the Agency for the Cooperation of Energy Regulators (ACER) \cite{capacity} and summarised in Fig.~\ref{fig:methodology:capMech:taxonomy}. Further detailed surveys of CRMs may be found in other works \cite{ferc,creg,doorman2016}, with Table~$0.1$ in the latter providing a more granular taxonomy than that of ACER. For crediting VG in CRMs, appropriate modelling of the adequacy contribution of the resource is needed. This applies similarly to all volume-based mechanisms, and in a different manner to price-based mechanisms. Thus, this section describes the theory behind volume- and price-based CRMs, particularly the role of capacity-value metrics in including offers from VG.
\begin{figure} \centering \includegraphics[width=\columnwidth]{CRMv2.pdf} \caption{Taxonomy of capacity-remuneration mechanisms \cite{capacity}.} \label{fig:methodology:capMech:taxonomy} \end{figure} \subsubsection{Volume-Based CRMs} Here, a central authority defines a volume of capacity to procure, \textit{e}.\textit{g}., based on a target risk level or a cost-benefit analysis. Then, typically an auction is held to determine the units that are selected and the capacity price. There is a standard theory for capacity procurement in volume-based markets, in which all offers are from resources equivalent to conventional generation \cite{Zachary2019}. Suppose that (to a good approximation) adding or subtracting a limited capacity of conventional resource shifts the distribution of the margin, $Z$, with changes in the shape or width of that distribution being a lower-order effect. Then, it is possible to define the volume of capacity in terms of expected available capacity, with the product offered by an individual unit being its expected available capacity. Units are added in ascending order of their ratio of offer price to expected capacity, until the sum of their expected available capacities equals the target. Such an auction treats expected available capacity (sometimes called `de-rated capacity') as a `simple additive commodity'. Without significant additional complication, the fixed capacity target could be replaced by a demand curve, implying that at a higher auction price the amount procured will be lower. The assumptions that are required to run an auction with an additive commodity do not hold when non-conventional resources, such as VG or energy storage, participate in the market. Instead, the above mechanism may be generalised by adding units in ascending order of the ratio of offer price to the marginal EFC against the background of the finally accepted set of resources, until a specified risk target is reached.
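The `simple additive commodity' clearing described above amounts to a merit-order sort; a minimal sketch follows, in which the unit names, prices, and expected capacities are hypothetical:

```python
def clear_capacity_auction(offers, target_mw):
    """offers: list of (name, offer_price, expected_available_mw).
    Accept units in ascending order of price per MW of expected available
    capacity until the target is met; return accepted names and MW procured."""
    ranked = sorted(offers, key=lambda o: o[1] / o[2])
    accepted, procured = [], 0.0
    for name, price, cap in ranked:
        if procured >= target_mw:
            break
        accepted.append(name)
        procured += cap
    return accepted, procured

offers = [("unit_a", 40.0, 100.0),   # (name, offer price, expected MW)
          ("unit_b", 90.0, 50.0),
          ("wind_1", 30.0, 60.0)]    # de-rated expected capacity
print(clear_capacity_auction(offers, target_mw=150.0))  # (['unit_a', 'wind_1'], 160.0)
```

This additive clearing is exactly what breaks down once VG or energy storage participates, since their adequacy contribution is then no longer a fixed per-unit quantity.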
Crucially, however, the final accepted set of resources cannot be known \textit{ex ante}. Thus, it is necessary to perform an iterative process of running the auction and recalculating EFCs with the latest auction outcome, until convergence is obtained \cite{Zachary2019}. This is in contrast to how quantity-based capacity markets operate currently, wherein all bidders submit price/quantity offers that are based on their (possibly de-rated) capacity, which is determined \textit{ex ante}. Therefore, quantity-based CRMs (as structured currently) cannot consider contributions of all types of resource on an equal basis. Volume-based CRMs typically require specification of a penalty if a contracted resource cannot deliver when required. One specific form of penalty is a reliability option (RO) \cite{vazquez2002market}, which is a one-way contract for differences against the energy-market price. Whenever the market price rises above a specified strike price, any firm that holds an RO is required to pay the difference between the market price and the strike price to the system operator. VG can face significant risk in taking on such contracts, due to its uncertain and variable output. However, this is not discussed in detail here as penalty regimes are a separate matter from capacity value and procurement. \subsubsection{Price-Based CRMs} Under price-based CRMs, the regulator or system operator determines the total remuneration for capacity, and how this is assigned \textit{ex post} to resources according to their performance. The total capacity investment is a market outcome, based on incentives provided by the CRM and other sources of income. Total remuneration typically is calculated as the product of a volume element (the total generation capacity that is required to ensure system adequacy) and a price element.
The volume element is calculated similarly to the capacity target for a volume-based mechanism, and is multiplied by a specified per-MW cost of new entry to give the total remuneration. Variants include pre-$2001$ England and Wales, wherein there was no fixed capacity payment. Instead, for each time period, the total payment was the product of the day-ahead LOLP and a specified value of lost load \cite{newbery}. Price-based mechanisms do not require the use of a de-rating factor or capacity value in a capacity auction, as the realised generator availability is used to distribute the revenues. Thus, the complications surrounding \textit{ex ante} assignment of capacity values do not arise. However, this means that resources are rewarded implicitly on the basis of some form of mean output, which may not reflect well a resource's contribution within an \textit{ex ante} risk calculation. This is particularly problematic for VG, the contribution of which within probabilistic risk calculations can be much less than that of firm capacity equal to its mean output. \subsection{Generation-Expansion Models} Several works embed adequacy risk calculations in generation-expansion optimization models \cite{munoz2015a,sigrin2014a,bothwell2017}. These works minimise the cost of capital investment, unserved energy, and (possibly) operations. Typically, unserved-energy costs are included through a hindcast risk calculation using multiple years of demand and VG-output data. To give a linear optimization model, it is necessary to simplify the representation of conventional generators, \textit{e}.\textit{g}., assuming that conventional-plant availability is deterministic and equal to its mean. Bothwell and Hobbs \cite{bothwell2017} assess social-welfare losses if VG capacity is credited inappropriately and express the value of additional VG in terms of its marginal EFC at an economic optimum.
They do not provide, however, a practical scheme for operating a CRM with both VG and conventional generation. In energy-system models with wider scope, \textit{e}.\textit{g}., The Integrated MARKAL-EFOM System (TIMES), security of electricity supply is represented typically \textit{via} a target de-rated margin of installed capacity over peak demand \cite{price}, as embedding any kind of risk calculation would be too computationally expensive. \subsection{Hybrid VG and Energy Storage} \label{sec:methodology:hybrid} At a system level, energy storage can enhance the capacity value of VG \cite{Stenclik2018}. Here we consider integration of energy storage with VG at a single site (\textit{i}.\textit{e}., with a single grid connection). Such energy storage can be inherent in the VG, as for CSP plants \cite{pfenninger2014a,madaeni2013b}, or dedicated energy storage that is co-located with VG, as in grid-connected microgrids \cite{Mitra2012}. Typically, integrated energy storage can recharge only from the associated VG resource (\textit{e}.\textit{g}., heat from irradiance in the case of CSP), and not directly from the grid \cite{rustomji2016}. Thus, capacity value can be computed only for the integrated system. Examples of such capacity-value estimations for CSP include the work of Madaeni \textit{et al}.\ \cite{madaeni2013b}, which uses a capacity-value approximation that is based on the $10$~highest-LOLP hours of each year. The authors conclude that increased energy-storage capacity increases capacity value and reduces its interannual variation. Usaola \cite{usaola2013a} studies a CSP plant with deterministic dispatch using a time-sequential Monte Carlo calculation and obtains qualitatively similar results, with differences arising from sizing of the CSP plant and different generation and demand statistics.
Mills and Rodriguez \cite{mills2019} consider a looser form of coupling, wherein PV co-located with batteries shares inverters, necessitating an integrated assessment. On the other hand, if VG and energy storage can be operated independently (\textit{e}.\textit{g}., a battery with a separate inverter), the capacity value of the integrated system may be calculated as the sum of capacity values of its constituent components, \emph{if} two conditions are satisfied. First, the contribution of the VG and energy storage must be small with respect to the total system size, so that their capacity values are marginal \cite{Zachary2019}. Second, each constituent capacity-value calculation must account for the ability to re-dispatch existing generation and energy storage. The difference between this \emph{integrated capacity value} and a simple dispatch adjustment can be very substantial, up to an order of magnitude for a combination of pumped hydroelectric energy storage and solar \cite{Karier2017,Byers2019}. \section{Survey of Current Practice} \label{sec:survey} This section reviews the literature to illustrate points made earlier. It does not attempt an exhaustive literature survey of practice, as in Doorman \textit{et al}.\ \cite{doorman2016} and S\"{o}der \textit{et al}.\ \cite{Soder2019}, which are referenced as relevant. The number of individual works that are cited in this section is relatively small, as many studies use similar methodologies. One limitation of many broader surveys is that they do not provide the critical discussion of technical modelling approaches offered here. \subsection{Recent Methodology-Related Research} As described in Section~\ref{sec:methodology}, if a statistical relationship between VG output and demand is taken into account, this is done typically through the `hindcast' approach. We note examples of formative works taking such an approach with wind \cite{Keane2011a,Soder2019} and solar \cite{gami2017a,abdel2017} generation.
Several studies review the variants of methodology that are used in different studies or the consequences of different approaches for numerical results. Mills and Wiser \cite{mills2012b} provide a list of the capacity-value approaches that are used in different utilities for planning purposes. Madaeni \textit{et al}.\ \cite{madaeni2012comparison} use the western United States as a case study, Zhou \textit{et al}.\ \cite{zhou2018valuing} emphasise the impacts of mis-estimating capacity value, and Awara \textit{et al}.\ \cite{Awara2018} survey the impact on calculation results of making different modelling decisions. Other recent research considers associated data issues. Gami \textit{et al}.\ \cite{gami2017a} examine the consequences for calculation results of input-data issues, such as temporal resolution and ambiguity over the definitions of data fields in recording PV output. Madaeni \textit{et al}.\ \cite{madaeni2013b} use the hindcast approach to compare how different approximations to the full risk calculation affect LOLE-based ELCC results. Abdel-Karim \textit{et al}.\ \cite{abdel2017} demonstrate carefully how issues in data rounding affect comparison of results from different codes, in the context of using the hindcast approach on the IEEE Reliability Test System. \subsection{Capacity Markets} Capacity-value metrics for VG are of most relevance in volume-based CRMs: renewables often do not participate in strategic reserve/targeted mechanisms. Price-based CRMs do not require assigning a capacity value \emph{ex ante} (\textit{cf}.\ Section~\ref{sec:methodology:metrics} and examples such as the Nordic system \cite{Soder2019}). In volume-based CRMs, the most common method of accounting for the adequacy contribution of different technologies is application of a de-rating factor. Thus, a unit is compensated for only a portion of its nameplate capacity in auction processes and in consequent payments, to account for its estimated statistical availability properties.
Mean availability is used typically for conventional generation. Applying an appropriate de-rating factor to VG is challenging, however, as discussed above. A range of modelling approaches for resource-adequacy assessments, partly based on the characteristics of the relevant power system, can be used. Bothwell and Hobbs \cite{bothwell2017} and S\"{o}der \textit{et al}.\ \cite{Soder2019} include surveys of current practice in North America and Europe, with the latter examining the case of wind generation only but providing a survey of a much larger number of systems. Table~$3$ in the work of S\"{o}der \textit{et al}.\ \cite{Soder2019} summarises the methods that are used to determine the capacity value of wind in the systems that are surveyed. Where wind is eligible for capacity payments, a risk-based capacity-value metric is used typically, \textit{e}.\textit{g}., marginal EFC in Great Britain, average EFC in Italy, and marginal ELCC in Ireland. Some systems, particularly those that rely on strategic reserves, such as the Nordics, preclude renewables from receiving payments at all. Great Britain permits wind generators to receive a capacity payment if they are not in receipt of low-carbon support, which in practice means that most wind farms do not participate. The Irish and Italian systems allow all renewable projects to participate in capacity auctions. However, to date, renewable projects represent only a tiny proportion of successful offers in Ireland and Italy. From a risk-modelling perspective, there are different contexts in which it may be necessary to consider VG within capacity auctions. Clearly, in systems in which VG receives a capacity payment on the basis of a risk-based capacity-value metric, it must be included in the risk calculations.
There are other examples in which VG does not receive capacity payments, but is included in the risk modelling which underpins the capacity market, \textit{e}.\textit{g}., in Finland, where wind can reduce the need for strategic reserves. In other markets (\textit{e}.\textit{g}., Sweden), wind is excluded explicitly, which potentially could lead to over-procurement of other capacity. Other systems use a summary statistic of an estimated probability distribution of available resource to represent the contribution of VG in capacity markets or policy-facing resource-adequacy studies. For instance, PJM uses the mean conditional on summer-peak hours, Texas uses the mean from the highest-load hours of the previous $10$~years, Spain uses the lower fifth quantile of the distribution, and a system that was proposed (but never implemented) in Alberta uses the $250$~hours of lowest historic margin during the last five years, which accounts for significant risk contribution in the maintenance season. All of these approaches credit VG on the basis of its own properties, \textit{i}.\textit{e}., in contrast with a risk-based approach, not on how its properties affect the risk level in the system as a whole. This property has potentially serious consequences when VG penetration is very high, as it is in Texas. However, these approaches may be more appropriate at very low penetrations of VG, which can be checked on a case-by-case basis. Bothwell and Hobbs \cite{bothwell2017} examine the economic consequences of using alternatives to an appropriate risk-based capacity credit (\textit{e}.\textit{g}., techniques that are employed in ERCOT, IESO, ISO New England, PJM, and California). It is not clear in all cases whether historic metered output is used, or whether historic meteorological data are used in combination with a future scenario of installed VG capacity.
The former has the advantage of being based on actual historical performance, whereas the latter often is preferable as it permits consideration of newer or future sites where there is little or no metered historic record. \section{Conclusions} \label{sec:concl} This paper reviews methods that are used for adequacy risk assessment considering solar power and other VG technologies, and for assessing the capacity value of VG installations. This includes the spatial and temporal properties of solar output, solar-design considerations, methods for capacity-value assessment, and the inclusion of VG in CRMs. Our survey of current practice reveals broad heterogeneity, confirming that a review paper of this type is warranted. Although there is a growing literature on reliability assessment and capacity value considering solar and other VG, several outstanding issues call for additional research. While considerable advances have been made in resource assessment of solar and wind power, there is little work on building error models quantifying the consequences of uncertainty in reconstruction of historic resources. Further statistical work on resource-adequacy assessment is needed. This includes work on non-sequential approaches beyond hindcast, on joint VG/demand modelling for sequential models, and on the use of these more advanced approaches in practical circumstances. The overall emphasis should be on how these various developments could improve decision analysis. Finally, there is limited understanding of how to operate capacity markets on a technology-neutral basis with a full range of resources, including conventional plant, VG, energy storage, and other emerging resources. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} \input{Sections/Introduction} \section{Motivating Example} \label{sec:example} \input{Sections/example} \section{Overview} \label{overview} \input{Sections/Overview} \section{Data Preparation} \label{sec:data-preparation} \input{Sections/DataCollection} \section{Proposed Neural Network Architecture} \label{sec:neural-network} \input{Sections/network_archi} \section{Experimental Setup} \label{sec:experiment} \input{Sections/Experiment} \section{Evaluation and Results} \label{sec:results} \input{Sections/Results} \section{Ablation Study} \label{sec:ablation_study} \input{Sections/ablation} \section{Threats to Validity} \input{Sections/threat} \label{subsec:threats} \section{Related Work} \label{sec:related-works} \input{Sections/RelatedWorks} \vspace{-4mm} \section{Conclusions} \label{sec:conclusions} \input{Sections/Conclusion} \section{Acknowledgement} The research was partially supported by the ``Code Review Measurement'' grant from Samsung Research Bangladesh. \subsection{Data Collection} \label{subsec:data-collection} A learning-based automated code repair approach based on code review requires a large pool of review comments and the associated source code before and after the fix in the training dataset. We choose Gerrit \cite{gerrit} for collecting the data as it is a standard and widely used code review tool. We created a \textit{GerritMiner} in Java using the Gerrit REST API \cite{gerrit-rest-api} and mined 15 open-source Gerrit repositories consisting of a large number of Java files (Table \ref{tab:project-distribution}). We mined code review comments and associated code files submitted roughly from December 2008 to November 2019. The mining process took approximately 2.5 months on an Intel\textregistered \space Core\textsuperscript{\tiny TM} i7-7700 Processor.
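One detail any Gerrit miner must handle (shown here as a hypothetical Python sketch, not the authors' Java \textit{GerritMiner}) is that the Gerrit REST API guards its JSON responses with the magic prefix \texttt{)]\}'}, which must be stripped before parsing:

```python
import json

GERRIT_MAGIC_PREFIX = ")]}'"

def parse_gerrit_response(raw):
    """Strip Gerrit's XSSI-protection prefix and decode the JSON body.
    `raw` is the text of a response from, e.g., GET /changes/?q=status:merged."""
    if raw.startswith(GERRIT_MAGIC_PREFIX):
        raw = raw[len(GERRIT_MAGIC_PREFIX):]
    return json.loads(raw)

# A truncated, made-up response body for illustration.
raw = ")]}'\n[{\"_number\": 4217, \"project\": \"demo/project\", \"status\": \"MERGED\"}]"
changes = parse_gerrit_response(raw)
print(changes[0]["status"])  # MERGED
```

The review comments themselves would then be fetched per change revision through further REST calls before being paired with the before/after file contents.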
{\small \begin{table}[ht] \centering \begin{tabular}{lrrrr} \textbf{Project Name} & \textbf{\#CR} & \textbf{\#Java CR} & \textbf{Train} & \textbf{Test} \\ \hline Acumos~\cite{acumos} & 6773 & 1387 & 881 & 47 \\ Android~\cite{android} & 246253 & 23683 & 12512 & 689 \\ Asterix~\cite{asterix} & 68033 & 23058 & 8509 & 453 \\ Cloudera~\cite{cloudera} & 151010 & 8623 & 3538 & 197 \\ Couchbase~\cite{couchbase} & 68864 & 1347 & 808 & 45 \\ Eclipse~\cite{eclipse} & 51919 & 16903 & 11612 & 621 \\ Fd IO~\cite{fdio} & 26281 & 866 & 612 & 34 \\ Gerrithub~\cite{gerrithub} & 116464 & 2102 & 1334 & 66 \\ Googlereview~\cite{googlesource} & 141410 & 23857 & 13849 & 734 \\ Iotivity~\cite{iotivity} & 61462 & 1286 & 847 & 48 \\ Others~\cite{omnirom,opencord,polarsys,unicorn,carbonrom} & 10201 & 878 & 558 & 27 \\ \hline \textbf{Total} & \textbf{948670} & \textbf{103990} & \textbf{55060} & \textbf{2961} \end{tabular} \caption{Project-wise data distribution.} \label{tab:project-distribution} \vspace{-6mm} \end{table} } We mine 1,068,536 code reviews altogether. To ensure that our model learns only meaningful changes, we carefully discard all code reviews that did not trigger any change near the review comment. We also discard all follow-up conversations to a previous review because they contain incomplete information in our context. Following previous studies in the literature \cite{tufano2019learning, tufano2019empirical, chen2018sequencer, codit, encore}, we intend to work on program repair in Java files only. Hence, for our experiments, we only work on the \textit{.java} files. \subsection{Noise Removal} \label{subsec:Noise} Previous studies~\cite{bosu,developersee,pred_usefulness} show that not every code review comment is useful or relevant to the changes. After manual investigation of our dataset, we curate a list of such comments (shown in Table~\ref{tab:irrelevant}) and discard them. 1.32\% of the inline comments were discarded in this step.
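The filtering step can be sketched as follows (the phrase list is abbreviated from Table~\ref{tab:irrelevant}, and the exact-match normalisation is an assumption made for illustration):

```python
# Abbreviated denylist of irrelevant review comments (see the curated table for the full list).
IRRELEVANT = {"same as above", "same here", "see above", "done", "nit",
              "ditto", "thanks", "likewise", "i see", "here too"}

def is_relevant(comment: str) -> bool:
    """Keep a comment only if its normalised form is not a known noise phrase."""
    normalised = comment.strip().rstrip(".!").strip().lower()
    return normalised not in IRRELEVANT

comments = ["Same as above.", "Rename this variable to something descriptive."]
print([c for c in comments if is_relevant(c)])
```

Only the second comment survives the filter, since its normalised form does not appear in the denylist.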
\begin{table}[ht] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{|c|} \hline Irrelevant Review Comment \\ \hline \begin{tabular}[c]{@{}l@{}}same as above, same as the above, same here, see comment above, same question here,\\ perhaps this as well, see comment above, as discussed, new comment as above, same,\\ see above, similar to above, same concern as above, same comment as above, and here,\\ here too, same comments as above, same thing, same complaint here, same as below,\\ nit, ditto, thanks, fixed with the next upload, uh no, nice, nice thanks, love it,\\ `ok, fixed with next update', `yes,~you are right', done, likewise, i see, and again\end{tabular} \\ \hline \end{tabular}% } \caption{List of irrelevant review comments.} \label{tab:irrelevant} \end{table} \subsection{Input Representation} \label{subsec:input-representation} In this section, we discuss how raw source code files were formatted for the learning model. \subsubsection{Change Localization:} \label{subsubsec:change-localization} To identify the exact location of changes in the code of the training data, we build a \textit{``Change Calculation''} tool using \textit{Java DiffUtils} \cite{javadiffutils}. The tool takes the code file before and after the change and calculates the two files' differences. We consider that the code change that is closest to the code review location is a result of the code review. Bosu \textit{et al.} \cite{bosu} demonstrated that useful code review comments trigger a change close to the line where the comment was submitted. We refer to this line as the \textit{review\_line}. To investigate whether this holds in our dataset, we observe the line distance between the \textit{review\_line} and the nearest code change. This is shown in Figure \ref{fig:line-cover}. The distribution shows that 91.27\% of the nearest changes are within a 5-line distance of the \textit{review\_line}.
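The distance calculation can be illustrated with Python's standard-library \texttt{difflib} standing in for \textit{Java DiffUtils} (a sketch, not the authors' tool):

```python
import difflib

def nearest_change_distance(before_lines, after_lines, review_line):
    """Line distance from a review comment to the nearest changed hunk,
    using the 1-based starting line of each non-equal opcode in the diff."""
    sm = difflib.SequenceMatcher(a=before_lines, b=after_lines)
    starts = [i1 + 1 for tag, i1, i2, j1, j2 in sm.get_opcodes() if tag != "equal"]
    if not starts:
        return None  # this review did not trigger any change
    return min(abs(s - review_line) for s in starts)

before = ["int a = 0;", "a = a + 1;", "return a;"]
after  = ["int a = 0;", "a += 1;",   "return a;"]
print(nearest_change_distance(before, after, review_line=2))  # 0
```

Samples whose nearest change lies outside the 5-line window would then be discarded, as described next.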
Hence, we consider each change starting within this 5-line window when developing training data and discard the samples corresponding to changes starting outside it. \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{images/line_coverage.pdf} \caption{Line distance distribution between corresponding line of code review and nearest code change.} \label{fig:line-cover} \end{figure} After deciding on the relevancy of code change and review comments, we explicitly concentrate on the code change. We term the buggy source code the \textit{code before change} and the lines changed there the \textit{focus}. We mark the focus with two special tokens, i.e., \texttt{<|startfocus|>} and \texttt{<|endfocus|>}. We call the fixed code the \textit{code after change} and the changed portion within the focus the \textit{target}. We elaborate on the terms in an example presented in Figure \ref{fig:change-localization}. Observing the data, the model learns to change the content of the \textit{focus} into the \textit{target} using the code review comment and the surrounding code as context. We take different measures for identifying the \textit{focus} and the \textit{target} depending on whether the code change is an insert, delete, or update operation. Figure \ref{fig:change-localization} demonstrates these three measures. We design our system so that in a production environment, a reviewer can select one or multiple lines of the code and submit a comment. The model will consider the selected lines as the \textit{focus} and try to predict some solutions for them. Replacing the \textit{focus} with one of the predicted solutions will generate syntactically, semantically, and stylistically correct code. The author will select a suitable one from the predicted solutions. Now, we discuss how we deal with the three types of edits.
\begin{enumerate} \item \textbf{Insert:} Intuitively, we expect the reviewer to type the comment just before the line where he or she would like the code to be inserted. Hence, we consider the line preceding the insertion operation as the focus. In Figure \ref{fig:change-localization}, the reviewer submits a comment on \texttt{Line 5}, suggesting that the author add an else block. Accordingly, the author inserts an else block after \texttt{Line 5}. In this case, we consider \texttt{Line 5} as the \textit{focus}, and the inserted code, along with the selected line, is considered the \textit{target}. \item \textbf{Delete:} In a delete operation, the code inside the \textit{focus} is no longer present in the changed commit. Hence, to indicate deletion, our model produces a special token \texttt{<|del|>} as the \textit{target}. In Figure \ref{fig:change-localization}, the reviewer selects \texttt{Lines 6,7,8} and requests the author to delete the else block. Here, \texttt{Lines 6,7,8} are considered the \textit{focus} and the special token \texttt{<|del|>} is considered the \textit{target}. \item \textbf{Update:} The update operation is more straightforward, i.e., the lines in the original code that require change are considered the \textit{focus}, and the corresponding changed lines are considered the \textit{target}. \end{enumerate} \vspace{-4mm} \begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{images/three_case.pdf} \caption{Change localization for insert, delete, and update operation.} \label{fig:change-localization} \end{figure} \subsubsection{Code Review Aware Tokenization:} \label{subsubsec:tokenization} The commonly used tokenization method (mentioned as \textit{soft tokenization} in this paper) applied in \cite{chen2018sequencer, tufano2019learning, tufano2019empirical, codit} does not include whitespace tokenization or identifier splitting. In a shared programming environment, maintaining a consistent style is an essential task for programmers.
Therefore, we propose a tokenization method, named \textit{hard tokenization}, with two distinctive features, as discussed below. \begin{enumerate} \item \textbf{Whitespace Tokenization:} Programmers frequently use consecutive whitespaces (tabs and spaces) to indent their code. As we want to preserve these coding styles, we need to represent the whitespaces in our model. However, treating each whitespace character as an individual token would significantly lengthen the input stream and hamper the model's learning process. Therefore, we replace each run of consecutive whitespaces with a single token. \item \textbf{Splitting \texttt{camelCase} and \texttt{snake\_case} Identifiers:} Identifier names such as variable, function, or class names contain human-language components that carry meaning about the identifier's functionality. Reviewers often comment on these atomic components instead of the full identifier name. Hence, we split all \texttt{camelCase} and \texttt{snake\_case} identifiers so our model can identify those atomic components in code and execute the instruction given in the review comment. An example of the splitting process is presented in Figure \ref{fig:tokenization-table}. Splitting also reduces the vocabulary size~\cite{Allamanis, vocalmodel}: the total number of unique tokens in our dataset dropped from 199,361 to 43,753 (a 78.05\% reduction). \end{enumerate} We implement this using several NLP tokenizers (TweetTokenizer, WordPunctTokenizer, and MWETokenizer from the NLTK library \cite{nltk}), which allows us to tokenize both code and code reviews in the same format. \begin{figure}[ht] \vspace{-8mm} \centering \includegraphics[width=0.9\linewidth]{images/tok_example.pdf} \vspace{-2mm} \caption{Demonstration of the \textit{hard} and \textit{soft tokenization} methods.
\textit{Hard tokenization} splits tokens into atomic natural-language units and represents whitespace groups as special tokens.} \label{fig:tokenization-table} \vspace{-4mm} \end{figure} \vspace{-2mm} \begin{comment} \begin{figure}[ht] \centering \includegraphics[width=0.99\linewidth]{images/MotivatingExample.pdf} \vspace{-2mm} \caption{Code Change Triple} \label{fig:code-change-triple} \end{figure} \end{comment} \subsubsection{Input Sequence:} \label{subsubsec:input-sequence} In our proposed design, an essential task is to provide the code change's surrounding context to the learning model. Providing the right context helps the model understand the defect better, copy tokens from the surrounding code, reduce overfitting, and improve generalization. The context also needs to satisfy the following goals: \begin{enumerate} \item Reduce code and code review to a reasonably concise sequence of tokens, since sequence-to-sequence neural networks suffer on long inputs. \item Subsume as much useful information as possible so the model can capture the context well. \end{enumerate} We feed a context of window size $W$ to the model, consisting of the \textit{focus} and its surrounding tokens. We apply the following rules to generate the context for a review comment. \begin{enumerate} \item If the \textit{focus} lies within a function scope and the function is smaller than $W$ tokens, we take the entire function as input. \item If the \textit{focus} lies within a function scope and the function is larger than $W$ tokens, we keep up to $W$ tokens within that function scope as input (up to $W/2$ tokens from the part preceding the focus and $W/2$ tokens from the focus and the subsequent part). \item If the \textit{focus} is in the global scope, we follow a similar strategy, taking up to $W/2$ tokens from the part preceding the focus and $W/2$ tokens from the focus and the subsequent part as input.
\end{enumerate} The seq2seq network structure we adopt (Section~\ref{subsec:network}) commonly uses 400 to 800 input tokens and 100 output tokens for similar applications such as code summarization \cite{see2017get, paulus2017deep}. We limit the context window $W$ of code to 400 tokens. The other element of the input sequence, the code review comment, is within 200 tokens in 98.725\% of cases in our dataset (Figure \ref{fig:comment-len}). Hence, we limit comments to 200 tokens and keep only the first 200 tokens when a comment exceeds this length. Thus, the input sequence reaches up to 600 tokens when the comment is appended to the code. We empirically observed that longer sequences deteriorate performance. Figure \ref{fig:token_limits} also presents the distributions of focus length and target length. \begin{figure}[ht] \begin{subfigure}{.333\linewidth} \centering \includegraphics[width=\linewidth]{images/input_focus_token_dist.pdf} \caption{Focus Token Size Distribution} \label{fig:input-focus} \end{subfigure}% \begin{subfigure}{.333\linewidth} \centering \includegraphics[width=\linewidth]{images/comment_token_dist.pdf} \caption{Comment Token Size Distribution} \label{fig:comment-len} \end{subfigure}% \begin{subfigure}{.333\linewidth} \centering \includegraphics[width=\linewidth]{images/target_token_dist.pdf} \caption{Target Token Size Distribution} \label{fig:target-len} \end{subfigure}% \caption{Token size distributions in our dataset. Our token size limits cover the majority of the dataset.} \label{fig:token_limits} \end{figure} \subsection{Test Set Generation} \label{subsec:testdata} We created a standard test dataset to evaluate different models with different parameters and settings on the same ground. The dataset supports evaluation of models with and without code review, and with both hard and soft tokenization.
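For concreteness, the context-window rules of Section \ref{subsubsec:input-sequence} can be sketched as follows. This is a simplified illustration rather than our implementation; \texttt{tokens} is the token stream of the enclosing scope (the whole file for global-scope changes) and \texttt{focus\_start} is the token index where the focus begins -- both names are ours.

```python
def build_context(tokens, focus_start, window=400):
    """Select up to `window` context tokens around the focus.

    Rule 1: if the enclosing scope fits in the window, keep it all.
    Rules 2-3: otherwise keep up to window/2 tokens preceding the
    focus and up to window/2 tokens from the focus onward.
    """
    if len(tokens) <= window:
        return list(tokens)
    half = window // 2
    before = tokens[max(0, focus_start - half):focus_start]
    after = tokens[focus_start:focus_start + half]
    return before + after
```

Passing the function body as \texttt{tokens} covers the in-function cases, and passing the whole file covers the global-scope case.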
We removed all duplicate data points from the dataset and sorted the data in reverse chronological order of comment submission time. Then we selected the most recent 5\% of data from each project. This ensures that our reported performance represents the model's ability to predict future code changes by learning past code change patterns. Taking 5\% of the data from each project also ensures that the test set represents all projects in the dataset. The size of the test data collected from each project is shown in Table \ref{tab:project-distribution}. \subsection{Experimental Setup} This section describes our neural network model's specific implementation details, evaluation criteria, and the method for comparison with state-of-the-art models. \subsection{Evaluation Criteria} \label{subsec:evaluation} We evaluate each of our models on the standard test set $T$ described in Section \ref{subsec:testdata}. For each $t \in T$ we perform inference with beam search decoding~\cite{rush2013optimal} with beam size $k=10$, which is common in the literature \cite{tufano2019empirical}; our empirical findings also support this choice. We measure top-1 accuracy, i.e., the percentage of fixes that our model predicts as the top-most suggestion; top-5 accuracy, i.e., the percentage of fixes that it predicts within the first five suggestions; and, similarly, top-10 accuracy. We further manually analyzed the predictions made by the models and evaluated their quality for different types of code changes (Table \ref{tab:taxonomy}). \subsection{Network Parameters} \label{subsec:Parameters} We experiment with different parameter settings and justify the choices in an ablation study (Section \ref{sec:ablation_study}). The best performance is obtained with the following model architecture. \begin{itemize} \item Input Embedding: 2002$\times$256 ($model\_c$), 10002$\times$256 \\ ($model\_cc$); vocabularies of 2,000 and 10,000 tokens plus 2 special tokens each.
\item Input sequence length: 400 ($model\_c$), 600 ($model\_cc$) \item Output sequence length: 100 (both $model\_c$ and $model\_cc$) \item Encoder Bidirectional LSTM size: 256$\times$128$\times$2 \item Bridge between Encoder and Decoder: 128$\times$128$\times$2 + 128$\times$2 \item Decoder LSTM size: 512$\times$256 \item Global Attention: 256$\times$256$\times$3 + 512$\times$256 + 256$\times$2 \item Token Generator Decoder: 2000$\times$256 \item Copy Generator: 256$\times$2000 + 2000 + 256$\times$1 + 1 \item Coverage Attention: \emph{False} \item Beam size during inference: 10 \end{itemize} \subsection{Hardware and Training Time} We trained our models on an NVIDIA\textsuperscript{\tiny\textregistered} V100 Tensor Core GPU with 16 GB VRAM, 16 GB RAM, and an eight-core CPU on Google Cloud Platform. Training each sequence-to-sequence model for 80,000 training steps took nearly 72 hours. \subsection{Comparison with State-of-the-Art Models} \label{subsec:sota} In this section, we discuss the methodology for comparing our models with two recent comparable works~\cite{tufano2019learning, chen2018sequencer}. Tufano \textit{et al.}~\cite{tufano2019learning} create two neural machine translation models, one for functions with fewer than 50 tokens and the other for functions of 50 to 100 tokens under \textit{soft tokenization}. The dataset for their study is collected from three large code repositories: Android \cite{android}, Google Source \cite{googlesource}, and Ovirt \cite{ovirt}. One of these (Android) is common with our dataset (Table \ref{tab:project-distribution}). Since our model requires review comments, it is infeasible to use the exact dataset they propose \cite{tufano2019learning}. Therefore, we decided to use test data extracted from the Android project only, for a fair comparison.
We created two code change test datasets from the Android project with two token-size settings, as mentioned below: \begin{enumerate} \item $Test_{small}$: containing 292 instances with $0 < token\_count \leq 50$; \item $Test_{medium}$: containing 246 instances with $50 < token\_count \leq 100$. \end{enumerate} Both of these test datasets contain only functions with a single code review comment and a single code change. These data points were carefully removed from our training dataset. We reproduce the two models proposed by Tufano \textit{et al.} \cite{tufano2019learning} with the data and source code released by the authors~\cite{tufano-code} and achieve nearly identical validation results to those reported in their paper. After validating their models, we compare our approach with theirs by training their models on our dataset. The results comparing Tufano \textit{et al.} and our models on the $Test_{small}$ and $Test_{medium}$ datasets are shown in Table \ref{tab:tufano}. SequenceR \cite{chen2018sequencer} performs single-line update operations inside functions with defects, given the code of the function and the line number of the defect. Their model cannot handle insert, delete, or multi-line operations, nor defects outside of function scope (i.e., in comments and global data). For the comparison with SequenceR \cite{chen2018sequencer}, we selected 349 instances from our standard test dataset (Section \ref{subsec:testdata}) that comprise single-line update operations only. We implemented the SequenceR preprocessing, training, and test pipeline with the help of their source code \cite{sequencer-code} and achieved performance similar to that reported in their paper. After validating the original work, we created two different implementations of SequenceR.
The first model was trained with the 35,578 training instances provided with their paper and is termed $SequenceR_{original}$. The second model was trained with data collected from our mined corpus that satisfy the SequenceR dataset constraints; we trained it with 56,000 instances to make a fair comparison with our model. This implementation of SequenceR is referred to as $SequenceR_{new}$. We test both these models, along with our best models with and without code review, on the prepared test set of 349 instances. The comparison is shown in Table \ref{tab:sequencer}. \begin{comment} Reference ~\cite{tufano2019learning} is one of the pioneering works in Deep Learning based program repair. This scheme creates two different Neural Machine Translation models, one with changes less than 50 tokens, the other with changes between 50 to 100 tokens. The dataset for their study is collected from three large code repositories: Android \cite{android}, Google Source \cite{google-source}, and Ovirt \cite{ovirt}. One of these (Android) is common with our dataset (Table \ref{tab:project-distribution}). Our dataset is built as a representative of real-world projects. Hence, it does not contain sufficient number of examples that satisfy the constraints to create a training dataset for Tufano \textit{et al.} models. Therefore, we decide to use only the entries of the common project in our training data and create a testset for fair comparison. We create two code change datasets from our mined data from Android project, identical to the dataset prepared by Tufano \textit{et al.} One of the datasets contains 292 datapoints of functions less than or equal to 50 tokens in the tokenization method proposed by Tufano et al., another contains 246 datapoints of functions with length between 50 and 100 tokens in the same tokenization method. Both of these datasets contain functions with a single code review comment and a single code change. We term them as $Test_{small}$ and $Test_{medium}$, respectively.
These datasets fulfill the requirements of both Tufano \textit{et al.} and ours. Before training our models, we carefully remove these datapoints from our training data. We reproduce the two models proposed by Tufano \textit{et al.} with the data and source code released by the authors~\cite{tufano-code} and achieve nearly identical validation result as reported in their paper. The models of Tufano \textit{et al.} and our models are tested on the same $Test_{small}$ and $Test_{medium}$ datasets for comparison. The results of this comparison is shown in Table \ref{tab:tufano}. SequenceR \cite{chen2018sequencer} performs single line update operations inside functions with defects, given the code of the function and the line number of the defect. This problem domain is a subset of our target domain, as we deal with multi-line insert, delete, and update operations; both inside and outside of function scope (i.e., for comments and global data). To make a comparison with SequenceR \cite{chen2018sequencer}, we selected 349 datapoints from our standard test dataset (Section \ref{subsec:testdata}) that comprises single line update operations. We implemented SequenceR preprocessing, training, and test pipeline with the help of the source code and trained models released with the paper \cite{sequencer-code}. We created two different implementation of SequenceR. First, we loaded trained weights of the best performing model released with the paper. This model is termed as $SequenceR_{pretrained}$. Second, we calculated all code changes from our mined data that meets SequenceR requirements. We found 1,79,311 such datapoints. Although the original SequenceR model was trained on 35,578 data, we train it with 56,000 data randomly selected from the 1,79,311 eligible data so that the training dataset size equals that of our models. 
We keep the hyperparameters of there model exactly as mentioned in the paper, except only batch size, which is increased to 100 from 32 mentioned in the paper, to make full utilization of our GPU. This implementation of SequenceR is referred as $SequenceR_{new}$. We test both these models and two of our best models on the 349 test data. The comparison is shown in Table \ref{tab:sequencer}. \end{comment} \subsection{System Overview} \label{subsec:overview} The objective of this study is two-fold. \begin{enumerate} \item Suggesting fixes for a broader range of changes (defects) raised in peer code reviews. \item Exploiting code review comments written in natural language as an oracle to improve the quality of fix suggestions. \end{enumerate} In a code review platform, such as Gerrit, when a developer submits a code patch, a reviewer is assigned and notified to review the code. The reviewer inspects the code, and upon identifying a defect, highlights one or multiple lines in the code and submits a code review. By \textit{defect}, in this paper, we mean any issue discussed in the review comment, whether related to program functionality, naming conventions, coding style, or even spelling mistakes. The developer addresses the comment and submits a follow-up patch. Finally, when there are no more issues in the code, the reviewer approves the code, and it is merged into the main codebase. \begin{figure}[ht] \centering \includegraphics[width=1.0\linewidth]{images/aprpipeline.pdf} \caption{Steps in training automatic program repair with code review. The model learns to predict the code change ($f$) from the code before change ($C_d$), the defect location, and the code review comment ($R$).
Replacing the defect $d$ (red round box) with the code change ($f$) creates the fixed code ($C_f$).} \label{fig:aprpipeline} \vspace{-4mm} \end{figure} By analyzing these code patches, we can identify the code fragment that was changed due to the code review. In this study, our goal is to create a learning-based system that predicts the changed code automatically by observing a large number of code changes and code review comments in historical data. Once deployed in a production environment, when a reviewer highlights a defect in a code file and writes a review comment, our model will produce multiple fix suggestions for the defect. The developer can choose one of the model's suggestions or write their own fix. Figure \ref{fig:aprpipeline} shows the steps of the training phase of our model. \subsection{Problem Definition} \label{subsec:prob-definition} We formulate the task of program repair as a sequence-to-sequence problem and create two sequence learning models that attempt to repair a defect. \unpara{Model I} \noindent Our first model, $model\_cc$, is given the code before change $C_d$, the defect location $l$, and the code review $R$ as input, and is tasked to predict the code change $f$ for the defect $d$ (Figure \ref{fig:aprpipeline}). At training time, the location $l$ is identified with our `change localization' method (Section \ref{subsubsec:change-localization}). After deployment, we assume that the review comment localizes the bug, and our deep learning model fixes the error. Hence, the prediction by $model\_cc$ can be defined as \vspace{2mm} \centerline{ $\hat{f}$ = $\underset{f}{\arg \max } \;P\left(f \;|\; C_d, \;l, \;R \right) $ } \unpara{Model II} \noindent Our second model, $model\_c$, is given the code before change $C_d$ and the location $l$ of defect $d$ as input, and is tasked to predict the code change $f$.
Hence, the prediction by $model\_c$ can be defined as \vspace{2mm} \centerline{ $\hat{f}$ = $\underset{f}{\arg \max } \;P\left(f \;|\; C_d, \;l \right) $ } By replacing the defect $d$ in the defective code $C_d$ with the fix suggestion $f$, we obtain the fixed code $C_{f}$. \vspace{2mm} \centerline{$C_{f} $ = $ C_d - d $ + $ f$} \vspace{2mm} Using beam search decoding~\cite{rush2013optimal}, the developer is offered the top $N$ fixed code suggestions $\{C_{f_1}, ... , C_{f_N}\}$ to choose from, where $N \in \mathbb N$. \subsection{How effective is code review in automatic code repair?} \label{subsec:hypothesis} In this study, we aim to determine whether code review can improve automatic code repair performance. We train our model in two settings, with and without code review, termed \textit{\textbf{$model\_{cc}$}} and \textit{\textbf{$model\_{c}$}}, respectively. The construction of these models is discussed in Section \ref{subsec:network}. \begin{figure}[ht] \includegraphics[width=1\linewidth]{images/hard-soft-c-cc.png} \caption{Top-1, top-5, and top-10 test accuracy of models trained without code review (c) and with code review (cc), for both the hard and soft tokenization methods.} \label{fig:hard-soft-c-cc} \end{figure} Figure \ref{fig:hard-soft-c-cc} and Table~\ref{tab:baseline} clearly show that incorporating the code review comments improves the prediction accuracy under both \textit{hard tokenization} and \textit{soft tokenization} for top-1, top-5, and top-10 predictions. Since \textit{hard tokenization} yields better top-1 accuracy for both \textit{\textbf{$model\_{cc}$}} and \textit{\textbf{$model\_{c}$}}, we use it for all further analysis.
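The top-1/5/10 accuracies reported in this and the following sections can be computed from the beam-search output as in the sketch below, where \texttt{beams[i]} is the ranked list of up to ten candidate fixes for test case $i$ (a hypothetical data layout; we assume the standard perfect-match criterion, i.e., a hit requires the candidate to equal the ground-truth fix exactly).

```python
def top_k_accuracy(beams, targets, k):
    """Percentage of test cases whose ground-truth fix appears
    (as an exact match) among the first k beam candidates."""
    hits = sum(target in beam[:k]
               for beam, target in zip(beams, targets))
    return 100.0 * hits / len(targets)
```

With beam size 10, calling this with $k = 1$, $5$, and $10$ yields the three reported numbers; top-$k$ accuracy is monotonically non-decreasing in $k$ by construction.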
\begin{table}[ht] \centering \begin{tabular}{lccc} \toprule \textbf{Model} & \textbf{Top-1} & \textbf{Top-5} & \textbf{Top-10} \\ \midrule \textbf{Baseline \textit{model\_c}} & 16.29 & 20.94 & 23.37 \\ \textbf{Baseline \textit{model\_cc}} & 19.59 & 27.73 & 31.51 \\ \textbf{Relative improvement} & \textbf{+20.33} & \textbf{+32.41} & \textbf{+34.82} \\ \bottomrule \end{tabular} \caption{Baseline model accuracy (in percent) for \textit{model\_c} and \textit{model\_cc} under \textit{hard tokenization}, and the relative improvement (in percent) of \textit{model\_cc} over \textit{model\_c}.} \label{tab:baseline} \end{table} \subsection{How effectively does the model perform in comparison with state-of-the-art techniques?} \label{subsec:ablation} We evaluate our model against well-established approaches \cite{chen2018sequencer, tufano2019learning}. To replicate the exact settings used by these architectures, we generate separate test sets for the comparisons, as described in Section \ref{subsec:testdata}. \subsubsection{Comparing with the methodology proposed by Tufano \textit{et al.} \texorpdfstring{\cite{tufano2019learning}}{Lg}} As discussed in Section~\ref{subsec:sota}, we compare Tufano \textit{et al.}~\cite{tufano2019learning} and our models on $Test_{small}$ and $Test_{medium}$ in Table~\ref{tab:tufano}. The results show that both $model\_{c}$ and $model\_{cc}$ outperform Tufano \textit{et al.}~\cite{tufano2019learning} on both test sets.
\begin{table}[ht] \centering \begin{tabular}{cccl} \toprule Model & Top-$n$ Prediction & $Test_{small}$(292) & $Test_{medium}$(246)\\ \midrule \multirow{3}{3em}{Tufano \textit{et al.}\cite{tufano2019learning}} & 1 & 2 (0.68\%) & 1 (0.41\%) \\ & 5 & 6 (2.05\%) & 3 (1.22\%) \\ & 10 & 7 (2.40\%) & 4 (1.63\%) \\ \midrule \multirow{3}{3em}{$model\_{c}$} & 1 & 21 (7.19\%) & 11 (4.47\%) \\ & 5 & 52 (17.80\%) & 38 (15.44\%) \\ & 10 & 80 (27.40\%) & 46 (18.69\%) \\ \midrule \multirow{3}{3em}{$model\_{cc}$} & 1 & 31 (10.61\%) & 24 (9.76\%) \\ & 5 & 71 (24.31\%) & 55 (22.36\%) \\ & 10 & 93 (31.85\%) & 63 (25.61\%) \\ \bottomrule \end{tabular} \caption{Comparison of Tufano \textit{et al.}~\cite{tufano2019learning} and our models.} \label{tab:tufano} \end{table} \subsubsection{Comparing with the methodology proposed by Chen \textit{et al.} \texorpdfstring{\cite{chen2018sequencer}}{Lg}} \begin{table}[ht] \centering \begin{tabular}{lccc} \toprule \textbf{Model} & \textbf{Top-1} & \textbf{Top-5} & \textbf{Top-10} \\ \midrule SequenceR$_{original}$ & 1.27\% & 1.52\% & 2.02\% \\ SequenceR$_{new}$ & 3.03\% & 6.58\% & 7.34\% \\ \textbf{$model\_c$} & 3.54\% & 12.91\% & 16.96\% \\ \textbf{$model\_cc$} & 8.86\% & 18.48\% & 25.31\% \\ \bottomrule \end{tabular} \caption{Comparison with SequenceR \cite{chen2018sequencer}.} \label{tab:sequencer} \end{table} We evaluate two implementations of SequenceR, as mentioned in Section~\ref{subsec:sota}. First, we apply the SequenceR model released by the authors and evaluate it on the test data (termed SequenceR$_{original}$). To make a fair comparison, we also create a training dataset for SequenceR from our corpus and train it again (termed SequenceR$_{new}$). Both results are displayed in Table \ref{tab:sequencer}. The original SequenceR model performs very poorly on our test set; this poor performance is attributed to the difference between the vocabularies of its training data and our test dataset.
Our models perform significantly better than SequenceR$_{new}$: the top-1 accuracy of our $model\_c$ is comparable to that of SequenceR$_{new}$, while the top-1 accuracy of $model\_cc$ is significantly better because of the addition of code review comments. \begin{figure}[ht] \centering \includegraphics[width=0.7\linewidth]{images/coverage.pdf} \caption{Manually created taxonomy for 501 randomly sampled instances from the test set.} \label{fig:type-pi-chart} \end{figure} \subsection{Which types of changes can our models correctly predict?} \label{subsec:factor} We expect our models to suggest fixes for all types of issues reported in code reviews. To inspect this ability in real scenarios, we conducted a study on 501 samples randomly selected from our test set and examined the fixes generated by $model\_{c}$ and $model\_{cc}$. Two authors performed the manual categorization of the code reviews. To begin with, they jointly labelled 100 samples, discussing each review to develop a shared understanding and to remove individual bias as much as possible. Based on this understanding, they labelled 100 more samples independently. The Cohen's kappa \cite{cohenkappa,cohenkappa2} value is 0.64, which indicates substantial agreement between them. The two authors then discussed the reviews where disagreements occurred with the other authors and converged on a common ground. Next, the remaining samples were labelled in equal shares by the authors independently. We categorized the possible code changes into four major classes: 1) Bug Fix~\cite{chen2018sequencer, tufano2019empirical, codit}; 2) Refactoring~\cite{tufano2019learning}; 3) Stylistic change (changes related to indentation and formatting); and 4) Non-code change (changes in documentation and annotation).
To the best of our knowledge, this is the first work to consider and repair changes belonging to the last two categories. Figure~\ref{fig:type-pi-chart} presents the distribution of the major classes. Note that these two categories cover 34\% of the changes in our dataset. Table~\ref{tab:taxonomy} shows how our models $model\_{c}$ and $model\_{cc}$ perform for changes of different categories under top-10 accuracy. We illustrate some examples successfully generated by $model\_{cc}$ for sub-categories of each major class. \subsubsection{Bug Fix} This category consists of code changes necessary to overcome system glitches, incorrect output, and unwanted behavior~\cite{bug_def}. We observe a total of 14 sub-categories under Bug Fix. In the successful case from the GoogleReview project illustrated in Table \ref{tab:bug_1}, the reviewer gives a high-level argument for why the exception should be thrown conditionally. Our model successfully generates the target code as specified by the reviewer. \begin{table}[ht] \vspace{-1mm} \centering \resizebox{\textwidth}{!}{% \begin{tabular}{ll} \hline \multicolumn{2}{l}{\begin{tabular}[c]{@{}l@{}}\textbf{Code Review}: Should this be done conditionally? Otherwise it would try to update username even \\ though it is already set?
Or am I missing something?\end{tabular}} \\ \hline \multicolumn{1}{l|} {\begin{tabular}[c]{@{}l@{}} \textbf{Code Before Change:} \\ \texttt{try }\{..\} \\\texttt{catch (OrmDuplicateKeyException dupeErr)}\{ \\~~\textcolor{red}{\texttt{if (!other.isPresent() || }} \\ \textcolor{red}{\texttt{!other.get().accountId().equals(accountId))} \{} \\~~~~\texttt{throw new IllegalArgumentException("username "} \\ \texttt{ + username + " already in use")}; \\~~\textcolor{red}{\texttt{\}}} \\ \texttt{\}} \end{tabular}} & \begin{tabular}[c]{@{}l@{}} \textbf{Code After Change:} \\ \texttt{try }\{..\} \\ \texttt{catch (OrmDuplicateKeyException dupeErr)}\{ \\ ~~~~\texttt{throw new IllegalArgumentException("username "} \\ \texttt{+ username + " already in use")};\\~~\texttt{\}} \\ \end{tabular} \\ \hline \end{tabular}% } \vspace{-2mm} \caption{An example of a bug fix successfully generated by our $model\_cc$. Deleted tokens are marked in red.} \label{tab:bug_1} \vspace{-2mm} \end{table} \subsubsection{Refactoring} Refactoring includes code changes intended for code maintenance that do not change the external behavior of the system \cite{refactor_def}. We observe a total of 23 sub-categories under Refactoring. We illustrate a successful sample generated by $model\_{cc}$ from the Android project. \begin{table}[ht] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{ll} \hline \multicolumn{2}{l}{\begin{tabular}[c]{@{}l@{}}\textbf{Code Review}: This is redundant: If the right hand side of the '||' operator is evaluated, it is because \texttt{ni != null}.
\end{tabular}} \\ \hline \multicolumn{1}{l|} {\begin{tabular}[c]{@{}l@{}} \textbf{Code Before Change:} \\ \texttt{if (ni == null || \textcolor{red}{(ni != null \&\&} !ni.isConnected()\textcolor{red}{)}) \{} \\ ~~\texttt{if (LOGD)} \\ ~~~~\texttt{~~~~Log.d(TAG, "forceRefresh: no connectivity");} \\ ~~\texttt{return false;} \\ \} \end{tabular}} & \begin{tabular}[c]{@{}l@{}} \textbf{Code After Change:} \\ \texttt{if (ni == null || !ni.isConnected()) \{} \\ ~~\texttt{if (LOGD)}\\ ~~~~\texttt{~~~~Log.d(TAG, "forceRefresh: no connectivity");} \\ ~~\texttt{return false;} \\ \} \\ \end{tabular} \\ \hline \end{tabular}% } \vspace{-2mm} \caption{An example of refactoring successfully generated by our $model\_cc$. Deleted tokens are marked in red.} \label{tab:refac_1} \vspace{-4mm} \end{table} \subsubsection{Stylistic Change} Code changes required to ensure proper indentation and formatting, such as newline insertion, tab spacing, and whitespace addition/deletion, are considered under this category~\cite{indentation_def}. We illustrate this with two successful test samples. In the first example, the reviewer asks for a space before \texttt{BluetoothDevice.PHY\_LE\_2M}; our model generates a correct solution by adding a whitespace token between \texttt{!=} and \texttt{BluetoothDevice.PHY\_LE\_2M}. In the second example, the model breaks a long line into two lines, as the reviewer requested.
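Whitespace-only fixes like these are possible because hard tokenization keeps each whitespace run as a single special token (Section \ref{subsubsec:tokenization}). The following sketch shows one such encoding and its inverse; the \texttt{<wsN>}/\texttt{<tabN>} token format is illustrative only, not the exact format used in our implementation.

```python
import re

def encode_ws(line):
    """Collapse each run of spaces or tabs into one special token."""
    def repl(m):
        run = m.group(0)
        kind = "tab" if run[0] == "\t" else "ws"
        return f"<{kind}{len(run)}>"
    return re.sub(r" +|\t+", repl, line)

def decode_ws(line):
    """Expand the special tokens back into literal whitespace."""
    return re.sub(
        r"<(ws|tab)(\d+)>",
        lambda m: (" " if m.group(1) == "ws" else "\t") * int(m.group(2)),
        line,
    )
```

Because the encoding is lossless (\texttt{decode\_ws(encode\_ws(s)) == s}), a predicted target that contains an extra whitespace token detokenizes into exactly the indentation or spacing the reviewer asked for.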
\begin{table}[ht] \vspace{-1mm} \centering \resizebox{\textwidth}{!}{% \begin{tabular}{ll} \hline \multicolumn{2}{l}{\begin{tabular}[c]{@{}l@{}}\textbf{Code Review:} Missing space\end{tabular}} \\ \hline \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}\textbf{Code Before Change:} \\ \texttt{public Builder setSecondaryPhy(int secondaryPhy) \{} \\ ~~\texttt{if (secondaryPhy != BluetoothDevice.PHY\_LE\_1M \&\&} \\ ~~\texttt{ ~ secondaryPhy \mybox[fill=red!20]{!=BluetoothDevice.PHY\_LE\_2M} \&\&} \\ ~~\texttt{ secondaryPhy != BluetoothDevice.PHY\_LE\_CODED)}\{...\}\\...\} \end{tabular}} & \begin{tabular}[c]{@{}l@{}} \textbf{Code After Change:} \\ \texttt{public Builder setSecondaryPhy(int secondaryPhy) \{} \\ ~~\texttt{if (secondaryPhy != BluetoothDevice.PHY\_LE\_1M \&\&} \\ ~~\texttt{ secondaryPhy != BluetoothDevice.PHY\_LE\_2M \&\&} \\ ~~\texttt{ secondaryPhy != BluetoothDevice.PHY\_LE\_CODED)}\{...\}\\...\}\end{tabular} \\ \hline \vspace{-4mm} \\ \hline \multicolumn{2}{l}{\begin{tabular}[c]{@{}l@{}}\textbf{Code Review:} Long line\end{tabular}} \\ \hline \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}\textbf{Code Before Change:} \\ \texttt{public Bundle createTransportModeTransform} \\ \texttt{(IpSecConfig c, IBinder binder) throws RemoteException \{..\}}\end{tabular}} & \begin{tabular}[c]{@{}l@{}}\textbf{Code After Change:} \\ \texttt{public Bundle createTransportModeTransform} \\ \texttt{(IpSecConfig c, IBinder binder)}\mybox[fill=green!20]{~~~~~~~~~~~~~~~~~~~~}\\ \mybox[fill=green!20]{~~~} \texttt{throws RemoteException \{..\}}\end{tabular} \\ \hline \end{tabular}% } \vspace{-2mm} \caption{Two examples of stylistic change successfully generated by our $model\_cc$. 
The red highlighted portion indicates the code region where a space was added between the tokens, and the green highlighted portion indicates the code region where a newline was added.} \label{tab:sty_chan} \vspace{-2mm} \end{table} \subsubsection{Non-code change} We consider changes in non-code regions, such as string values, logs, code comments, documentation, annotations, and copyright license headers, under this category~\cite{code_comment,documentation}. We analyze an example of this category in which the reviewer mentions `2017' as the appropriate copyright license year for the file; our model was able to capture the domain-specific context and generate the intended target. Similarly, the second example removes an unnecessary annotation. \begin{table}[ht] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{ll} \hline \multicolumn{2}{l}{\begin{tabular}[c]{@{}l@{}}\textbf{Code Review:} 2017 \end{tabular}} \\ \hline \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}} \textbf{Code Before Change:} \\ \texttt{* Copyright (C) \textcolor{red}{2016} The Android Open Source Project} \end{tabular}} & \begin{tabular}[c]{@{}l@{}} \textbf{Code After Change:} \\ \texttt{* Copyright (C) \textcolor{green}{2017} The Android Open Source Project}\end{tabular} \\ \hline \vspace{-4mm} \\ \hline \multicolumn{2}{l}{\begin{tabular}[c]{@{}l@{}}\textbf{Code Review:} Is this really nullable?
What's the point of doing a ref-update validator without a refdb?\end{tabular}} \\ \hline \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}\textbf{Code Before Change:} \\ \texttt{public BatchRefUpdateValidator} \\ \texttt{(SharedRefDatabase sharedRefDb, ...., } \\ \texttt{\textcolor{red}{@Nullable} @Assisted RefDatabase refDb)} \\ \texttt{\{...\}}\end{tabular}} & \begin{tabular}[c]{@{}l@{}}\textbf{Code After Change:} \\ \texttt{public BatchRefUpdateValidator} \\ \texttt{(SharedRefDatabase sharedRefDb, ...., } \\ \texttt{@Assisted RefDatabase refDb)} \\ \texttt{\{...\}} \end{tabular} \\ \hline \end{tabular}% } \vspace{-2mm} \caption{Two examples of non-code change successfully generated by our $model\_cc$. Deleted and inserted tokens are marked in red and green, respectively.} \label{tab:non_code} \vspace{-4mm} \end{table} \begin{comment} \subsection{How does the Model Capture Code Review Instructions?} To understand the reason behind the success of code review in program repair, we analyze the 311 datapoints where $model\_cc$ succeeds, but $model\_c$ fails in Top-10 predictions. We find 68 (21.86\%) datapoints where the code review contained at least one token that appears in the prediction but is not available in the input source code. Furthermore, there are cases where the code reviews might not directly contribute tokens to the predictions, but provide instructions for what type of change to perform. Our change calculation methodology in Section \ref{subsubsec:change-localization} tells us what kind of change a particular datapoint went through: insert, delete, or update. We investigate the relevance of the contextual information provided in the code review by performing a TF-IDF \cite{tf-idf} analysis on the code review comments of the 932 successful Top-10 predictions by $model\_cc$. Given a set of documents, TF-IDF tells us how important a word is to a document in the set of documents.
Thus we find the most important words for each of our change types, and show the results in Figure \ref{fig:word-cloud} in the form of word clouds. Common English stopwords \cite{stopwords} that do not carry a programming meaning, as well as programming identifier tokens, were removed during the TF-IDF analysis. We can see that for the insert operation, words such as \textit{missing} and \textit{annotation} are more important, and for the delete operation, \textit{remove}, \textit{not}, \textit{needed}, etc. are more important. In the case of the update operation, words associated with changes that take place within a line, such as \textit{space}, \textit{whitespace}, and \textit{nit} (short for nit-pick), are more important. Hence, we can see that the model is able to capture unique tokens provided in the code review and apply them to the program repair. We can also see that, in successful code repairs, the code review comment has an association with the type of change performed. \fboxsep=1mm \fboxrule=1pt \begin{figure*} \begin{minipage}[b]{0.33\textwidth} \fcolorbox{black}{black}{\includegraphics[width=0.93\textwidth]{images/insert_wordcloud.pdf}}\\ \subcaption{Insert} \end{minipage}% \begin{minipage}[b]{0.33\textwidth} \fcolorbox{black}{black}{\includegraphics[width=0.93\textwidth]{images/delete_wordcloud.pdf}}\\ \subcaption{Delete} \end{minipage}% \begin{minipage}[b]{0.33\textwidth} \fcolorbox{black}{black}{\includegraphics[width=0.93\textwidth]{images/update_wordcloud.pdf}}\\ \subcaption{Update} \end{minipage}% \caption{Important words in the review comments of the successful repairs by $model\_cc$} \label{fig:word-cloud} \end{figure*} Here are a few examples of code fixes where $model\_cc$ succeeded but $model\_c$ failed in Top-10 predictions. We can see that the model directly propagates a token from the review comment to the prediction and implements the instruction provided in the code review. Without the help of the review, understanding the intention of these changes would have been very difficult.
\begin{tcolorbox}[colback=white, colframe=white!20!black, center, valign=top, halign=left, center title, width=\linewidth, boxrule=0.5pt, left=3pt,right=3pt,top=3pt,bottom=3pt] {\small \textbf{Project Name}: Eclipse \newline \textbf{Code Review}: If min is useless, just use 0 as min? \newline \textbf{Code Under Focus}: \texttt{min = Math.min(Math.abs(val), min);}\newline \textbf{Target Code}: \texttt{min = 0;}} \end{tcolorbox} \begin{tcolorbox}[colback=white, colframe=white!20!black, center, valign=top, halign=left, center title, width=\linewidth, boxrule=0.5pt, left=3pt,right=3pt,top=3pt,bottom=3pt] {\small \textbf{Project Name}: Asterix \newline \textbf{Code Review}: class/enum names start with a capital letter, so should be "SecondaryUnnestMapOutputVarType" \newline \textbf{Code Under Focus}: \texttt{enum secondaryUnnestMapOutputVarType \{}\newline \textbf{Target Code}: \texttt{enum SecondaryUnnestMapOutputVarType \{} } \end{tcolorbox} \end{comment} \begin{comment} \oaishi{7.4 might not be needed. Instead can we provide some examples where model\_c fails but model\_cc succeeds?} \subsection{What are the limitations of code-review-aided program repair?} The most significant limitation of our study comes from noisy data, i.e., code review comments that are not related to the subsequent code change. The change calculation method (Section \ref{subsubsec:change-localization}) might capture a change that is not caused by the review comment, or the review comment might be `not useful' \cite{bosu}. In either of these cases, the model will learn non-meaningful changes from these comments. To understand this effect, we manually analyzed 1,429 datapoints, randomly sampled from our training and test dataset. We found that 325 (22.64\%) of the review comments are not related to the code change. Eliminating these non-related review comments is a non-trivial task and beyond the scope of our current work. We analyze 77 datapoints where $model\_cc$ failed, but $model\_c$ succeeded.
We find that the irrelevance of the comment plays an essential role in $model\_cc$'s failure. For example, in the following instance, the review comment is ambiguous. $model\_c$ predicts it correctly in Top-10 predictions, but $model\_cc$ fails to do so. \begin{tcolorbox}[colback=white, colframe=white!20!black, center, boxrule=0.5pt, valign=top, halign=left, center title, width=\linewidth, left=3pt,right=3pt,top=3pt,bottom=3pt] {\small \textbf{Project name}: Asterix \newline \textbf{Code review}: Make securedLSNs a static final constant or non-public and provide accessors if needed. \newline \textbf{Code under focus}: \texttt{\textcolor{blue}{\textbf{public}} Map\textcolor{blue}{\textbf{<}}TxnId, \textcolor{blue}{\textbf{Long>}} securedLSNs;}\newline \textbf{Target Code}: \texttt{\textcolor{blue}{\textbf{protected}} Map\textcolor{blue}{\textbf{<}}TxnId, \textcolor{blue}{\textbf{Long>}} securedLSNs;}\newline \textbf{model\_cc prediction \#7:} \texttt{\textcolor{blue}{\textbf{private}} Map\textcolor{blue}{\textbf{<}}TxnId, \textcolor{blue}{\textbf{Long>}} securedLSNs;}} \end{tcolorbox} The following is another example where $model\_cc$ fails due to an error in our change calculation and defect localization. Here, the comment referred to line 3, but the change calculation mechanism captured another nearby change in lines 1 and 2. Formatting changed to reduce space. \begin{tcolorbox}[colback=white, colframe=white!20!black, center, boxrule=0.5pt, valign=top, halign=left, center title, width=\linewidth, left=3pt,right=3pt,top=3pt,bottom=3pt] {\small \textbf{Project Name}: Asterix \newline \textbf{Code Review}: `finally' is not followed by whitespace.
\newline \textbf{Code}: \texttt{\textbf{<|startfocus|>}\newline 1.\tab\tab while(confiscatedPages.contains(c))\{\newline 2.\tab\tab\tab throw new IllegalStateException(); \}\newline \tab\space\space\textbf{<|endfocus|>}\newline 3.\tab\tab \}\}\} finally\{ confiscateLock.unlock(); \}\newline } \textbf{Target Code}: \texttt{throw new IllegalStateException();}} \end{tcolorbox} \end{comment} \begin{comment} \subsection{Ablation Study} \label{subsec:ablation-study} \textcolor{red}{Should we remove this? TA: try to explain why 2000.... why not 4000...??} \subsubsection{Effect of Vocabulary Size in Code Repair} At the network architecture level, the primary distinction between $model\_c$ and $model\_cc$ is that the former has a vocabulary of 2,000 tokens extracted from source code, while the latter has an input vocabulary of 2,000 tokens from source code and 8,000 tokens from review comments. Naturally, the question arises: is the disparity in the performance of these models caused by the larger vocabulary in $model\_cc$? We investigate this by creating a new model identical to $model\_c$ with the most frequent 10,000 tokens from source code as the input vocabulary. We refer to it as $model\_c$\textit{+}. Figure \ref{fig:vocab-size-transfer} shows that $model\_c$\textit{+} fails to outperform $model\_cc$. Hence, we can infer that the performance increase in $model\_cc$ is not attributed to the larger vocabulary size.
Thus we can infer that the vocabulary that is improving the model performance is, in fact, coming from the review comments. \anindya{Recheck the previous sentence.} \subsubsection{Effect of Transfer Learning in Code Repair} Training with code review comments comes with a natural constraint that each of the code changes in our training data must have an associated code review. Consequently, the numerous codebases available in open source repositories cannot be exploited for code repair in our context. To exploit those codebases, we investigate whether learning from these code changes (without any associated review) can improve performance. The pre-training data collection method adopted for this purpose is described in detail in Section \ref{subsec:pretraining}. We collect 719,213 code changes from our iterative code patches during mining, and design a sequence learning network with more parameters (described in Section \ref{subsec:network}). As this code change dataset contains all code changes less than 100 tokens long, it is a superset of both the previous training and test datasets. Hence, we carefully made sure to exclude the test datapoints from it. \anindya{The previous two lines seem inconsistent. Also, data collection for pre-training can be discussed here briefly and omitted from Section 3/4. This will save some space there.} The changes that happen because of code review might not be the same as the changes that happen without code review. Hence, we designed our training as a transfer learning mechanism, as used in deep learning: we first pre-train our large neural network with the large code change dataset and then fine-tune the network with the relatively small training dataset. From Figure \ref{fig:vocab-size}, we can see that the use of this pre-training method noticeably improves the performance of the model without code review. However, it still falls behind the model with code review.
\subsubsection{Network Properties and Parameters} Our benchmark models for $model\_c$ and $model\_cc$ described in Section \ref{subsec:network} are built upon the practical applicability of the system, our intuitive understanding of the problem domain, observations of the training data, and greedy hyperparameter searching. In this sub-section, we report our findings from varying different hyperparameters and network properties, which eventually led us to the best-performing models. We trained each of these networks for up to 80,000-100,000 training steps and saved a checkpoint after every 400 steps. For each network setting, we report the highest validation accuracy among all saved checkpoints. \begin{table}[ht] \begin{tabular}{lccc} \toprule \textbf{Model} & \textbf{Top-1} & \textbf{Top-5} & \textbf{Top-10} \\ \midrule \textbf{Baseline \textit{model\_c}} & 16.29 & 20.94 & 23.37 \\ \textbf{Baseline \textit{model\_cc}} & 19.59 & 27.73 & 31.51 \\ \textbf{Relative improvement} & \textbf{+20.33} & \textbf{+32.41} & \textbf{+34.82} \\ \bottomrule \end{tabular} \caption{Baseline model accuracy (in percent) for \textit{model\_c} and \textit{model\_cc}, and relative improvement of \textit{model\_cc} over \textit{model\_c}.} \label{tab:baseline} \vspace{-4mm} \end{table} \begin{table}[ht] \centering \hspace*{-2mm} \scalebox{.87}{ \begin{tabular}{c|lccc} \toprule & \textbf{Modified property} & \textbf{Top-1} & \textbf{Top-5} & \textbf{Top-10} \\ \midrule \multirow{7}{*}{\rotatebox[origin=c]{90}{\textit{\textbf{model\_cc}}}} & \textbf{Baseline \textit{model\_cc}} & 0 & 0 & 0 \\ \cline{2-5} & 1. Soft tokenization & -3.03 & \textbf{+5.33} & \textbf{+11.22} \\ \cline{2-5} & 2. Without custom vocabulary selection & -3.10 & -12.91 & -16.82 \\ \cline{2-5} & \begin{tabular}[c]{@{}l@{}}3. Larger vocabulary \\(10k from code, 10k from CR)\end{tabular} & -3.44 & -9.74 & -11.89 \\ \cline{2-5} & \begin{tabular}[c]{@{}l@{}}4.
Smaller vocabulary \\ (1k from code, 5k from CR)\end{tabular} & -7.06 & -3.89 & -1.82 \\ \cline{2-5} & 5. Smaller embedding size (128) & -4.65 & -2.8 & -1.82 \\ \cline{2-5} & 6. With coverage mechanism & -2.93 & -4.01 & -2.46 \\ \cline{2-5} & 7. Transfer learning & -15.34 & -5.84 & -3.75 \\ \midrule \multirow{6}{*}{\rotatebox[origin=c]{90}{\textit{\textbf{model\_c}}}} & \textbf{Baseline \textit{model\_c}} & 0 & 0 & 0 \\ \cline{2-5} & 8. Soft tokenization & -10.81 & -4.2 & -1.87 \\ \cline{2-5} & 9. Larger vocabulary (10k from code) & +0.82 & -3.7 & -4.47 \\ \cline{2-5} & \begin{tabular}[c]{@{}l@{}}10. Smaller vocabulary (1k from code) \\ and smaller embedding size (128)\end{tabular} & -1.65 & -2.25 & -4.04 \\ \cline{2-5} & \begin{tabular}[c]{@{}l@{}}11. Larger focus and target (200 tokens)\end{tabular} & -1.24 & -5.16 & -3.61 \\ \cline{2-5} & 12. Transfer learning & \textbf{+8.71} & \textbf{+19.35} & \textbf{+20.37} \\ \cline{2-5} & 13. Transfer learning, large vocab (10k) & +0.62 & +10.96 & +7.94 \\ \bottomrule \end{tabular} } \caption{Comparison with different network parameters and properties.} \label{tab:ablation} \vspace{-8mm} \end{table} We can see from Table \ref{tab:baseline} that baseline $model\_cc$ is nearly 35\% better than $model\_c$ in Top-10 accuracy. The network parameters of the baseline models are described in Section \ref{subsec:network}. In Table \ref{tab:ablation}, we see how other parameters and properties impact each of the networks. The table only presents the parameters that differ from those of the benchmark models. We can see that soft tokenization performs better than the hard tokenization method in terms of accuracy. This is understandable, as whitespace makes up nearly half of all tokens under hard tokenization. By adopting hard tokenization, we have set a \textcolor{red}{lower bound/higher expectation for our network}. Reproducing the original fixed code from a soft-tokenized prediction is nearly impossible.
Hence, we traded accuracy for practicality. \anindya{Recheck previous two lines and see if they can be made more understandable.} We can also see that pre-training $model\_c$ with the large code change dataset significantly improves the performance. However, no hyperparameter or network property change gives $model\_c$ enough of an improvement to outperform $model\_cc$. We see a side-by-side comparison in Figure \ref{fig:vocab-size-transfer}. \end{comment} \subsection{Ablation Study} \begin{table}[ht] \centering \hspace*{-2mm} \resizebox{0.8\textwidth}{!}{% \begin{tabular}{c|lcccc} \toprule & \textbf{ID} & \textbf{Modified property} & \textbf{Top-1} & \textbf{Top-5} & \textbf{Top-10} \\ \midrule \multirow{7}{*}{\rotatebox[origin=c]{90}{\textit{\textbf{model\_cc}}}} & \textbf{1} & \textbf{Baseline \textit{model\_cc}} & - & - & - \\ \cline{2-6} & 2 & Soft tokenization & -3.03 & \textbf{+5.33} & \textbf{+11.22} \\ \cline{2-6} & 3 & Without custom vocabulary selection & -3.10 & -12.91 & -16.82 \\ \cline{2-6} & 4 & \begin{tabular}[c]{@{}l@{}}Larger vocabulary \\(10k from code, 10k from CR)\end{tabular} & -3.44 & -9.74 & -11.89 \\ \cline{2-6} & 5 & \begin{tabular}[c]{@{}l@{}}Smaller vocabulary \\ (1k from code, 5k from CR)\end{tabular} & -7.06 & -3.89 & -1.82 \\ \cline{2-6} & 6 & Smaller embedding size (128) & -4.65 & -2.8 & -1.82 \\ \cline{2-6} & 7 & With coverage mechanism & -2.93 & -4.01 & -2.46 \\ \midrule \multirow{6}{*}{\rotatebox[origin=c]{90}{\textit{\textbf{model\_c}}}} & \textbf{8} & \textbf{Baseline \textit{model\_c}} & - & - & - \\ \cline{2-6} & 9 & Soft tokenization & -10.81 & -4.2 & -1.87 \\ \cline{2-6} & 10 & Larger vocabulary (10k from code) & +0.82 & -3.7 & -4.47 \\ \cline{2-6} & 11 & \begin{tabular}[c]{@{}l@{}}Smaller vocabulary (1k from code) \\ and smaller embedding size (128)\end{tabular} & -1.65 & -2.25 & -4.04 \\ \bottomrule \end{tabular} } \caption{Comparison with different network parameters and properties.} \label{tab:ablation}
\end{table} We perform an ablation study to understand the relative importance of each design choice for our model. We show these results in Table \ref{tab:ablation}. We define our final models as \textit{baselines} in the table (see Section \ref{subsec:Parameters} for the details of the hyperparameters of these two models). Our \textit{baselines} include two major modifications: a customized vocabulary (Section~\ref{subsubsec:vocab}) and the exclusion of the coverage mechanism (Section~\ref{subsec:network}). To analyze the effect of the coverage mechanism, we train a model with the coverage mechanism enabled. The result shows that performance drops when it is enabled (ID7). This might be because coverage penalizes repetition of tokens in the output, whereas programs usually contain repetition. Without custom vocabulary selection, the model's performance also decreases (ID3). In our custom vocabulary, we consider the review comment and code separately and take the most frequent tokens from each. In ID3, by contrast, the review comment and code vocabularies are merged, and the most frequent tokens are taken jointly. The accuracy decreases in ID3 because the programming language tokens are far more frequent, so the natural language tokens are barely represented in the vocabulary of ID3. In Table \ref{tab:ablation}, ID5 and ID11 show that a smaller vocabulary than that of the \textit{baselines} performs worse. ID4 and ID10 show that a larger vocabulary performs poorly as well. Thus, our baseline models adopt the best choice of vocabulary size. ID6 and ID11 show that an embedding size smaller than 256 also performs worse. \subsection{Network Architecture} In this section, we discuss the Neural Machine Translation (NMT) model that learns code transformations both with and without peer code review. We also describe the changes made to the NMT model to incorporate both programming language and natural language in the same input stream.
\subsection{Pointer Generator Architecture} \label{subsec:network} We train a \textit{pointer generator network} as implemented by Gehrmann \textit{et al.}~\cite{gehrmann2018bottom} and available in the OpenNMT PyTorch distribution~\cite{opennmt}, which is a state-of-the-art sequence-to-sequence (seq2seq) architecture for text summarization. Seq2seq networks used for summarization can generate the desired output text by deriving and summarizing information from the most relevant parts of the input (i.e., code and review comment). This architecture's ability to generate desired text/code is already established in the literature \cite{chen2018sequencer, tufano2019learning, codit}. However, these works applied the seq2seq model only to source code, whereas we adopt both source code and natural language code reviews. For this purpose, we incorporate a custom vocabulary, as detailed later. We use an LSTM~\cite{hochreiter1997long} as the base RNN, with an attention mechanism \cite{bahdanau} and a copy mechanism \cite{copy} for handling out-of-vocabulary (OOV) tokens. Programming languages generally result in a large vocabulary because of arbitrary identifier names~\cite{infinitevar}, which the basic seq2seq network fails to manage. In previous studies \cite{chen2018sequencer,codit}, the copy mechanism \cite{copy} has proven effective in overcoming this \textit{large vocab problem}~\cite{infinitevar}. When the seq2seq decoder faces an OOV token, the copy mechanism enables the decoder to copy the token directly from the input token stream and place it in the output token stream. In the original implementation of the pointer generator network, a coverage mechanism \cite{coverage} is used to limit the repetition of tokens in the network output. However, programming language keywords repeat frequently; hence, the use of a coverage mechanism hurts program repair (Table \ref{tab:ablation}). Therefore, we exclude it from our recommended model presented in the next subsections.
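The generate-versus-copy decision described above can be illustrated with a minimal, self-contained sketch. This is our own toy illustration, not the OpenNMT implementation: the mixing weight p\_gen, token names, and probabilities are made up, and the real model computes all of these quantities with neural networks.

```python
# Toy sketch of a single pointer-generator decoding step (illustrative only).
# A generation distribution over the fixed vocabulary is mixed with a copy
# distribution given by the attention weights over the source positions.

def pointer_generator_step(p_gen, vocab_probs, attention, src_tokens):
    """Return the final distribution over the extended vocabulary.

    p_gen       -- probability of generating from the fixed vocabulary
    vocab_probs -- dict: in-vocabulary token -> softmax probability
    attention   -- attention weight per source position (sums to 1)
    src_tokens  -- source token at each position, possibly OOV
    """
    final = {tok: p_gen * p for tok, p in vocab_probs.items()}
    # Copy path: attention mass flows to the literal source tokens, so an
    # identifier absent from the vocabulary can still be emitted verbatim.
    for weight, tok in zip(attention, src_tokens):
        final[tok] = final.get(tok, 0.0) + (1.0 - p_gen) * weight
    return final

# An OOV identifier ("securedLSNs") receives probability via the copy path.
probs = pointer_generator_step(
    p_gen=0.3,
    vocab_probs={"public": 0.6, "private": 0.4},
    attention=[0.9, 0.1],
    src_tokens=["securedLSNs", "public"],
)
```

Because the two paths are convex-combined, the result is again a probability distribution; with the made-up numbers above, the OOV identifier ends up as the most probable output token.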
\subsection{Augmented Vocabulary} \label{subsubsec:vocab} Our model uses separate vocabularies for source and target because it has to encode information from both code and natural language code review comments but generate only code tokens. Moreover, code review has a less dominant presence in our input sequence: the average lengths of a code review and of the source code in our dataset are 36.80 and 320.38 tokens, respectively. If we apply the standard vocabulary creation procedure followed in the literature~\cite{cho2014learning, cho2014properties, bahdanau, see2017get, dos2014deep, chen2018sequencer, tufano2019learning, tufano2019empirical, bader2019getafix, codit}, most code review tokens will be treated as out-of-vocabulary (OOV) tokens. Hence, our model will fail to gain a contextual understanding of the code review instructions, even with a copy mechanism \cite{copy}. To combat this issue, we propose a larger vocabulary for comments than for code segments. Similar to SequenceR \cite{chen2018sequencer}, we find that a large code vocabulary hurts model performance. We have experimented with different combinations of vocabulary sizes (see Section \ref{sec:ablation_study}) and found that the best configuration contains the most frequent 2,000 tokens from the code and 8,000 tokens from the comments. These cover 93.56\% and 98.86\% of the total source code and code review comment tokens, respectively. \subsection{Training with Code Change and Code Review Comment (\texorpdfstring{$model\_cc$}{Lg})} \label{subsec:model-cc} We create a baseline model with the pointer generator network~\cite{gehrmann2018bottom} that takes both the code (before change) and the code review comment as input. This model is termed $model\_cc$.
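The separate-vocabulary construction of the Augmented Vocabulary subsection can be sketched as follows. This is a simplified illustration: the corpora and the tiny cut-offs are made up, standing in for the 2,000-code-token and 8,000-comment-token configuration reported above.

```python
from collections import Counter

def build_vocabs(code_corpus, review_corpus, n_code=2000, n_review=8000):
    """Pick the most frequent tokens from code and from review comments
    *separately*, so that natural-language review tokens are not crowded
    out by the far more frequent programming-language tokens."""
    code_counts = Counter(tok for seq in code_corpus for tok in seq)
    review_counts = Counter(tok for seq in review_corpus for tok in seq)
    code_vocab = [tok for tok, _ in code_counts.most_common(n_code)]
    review_vocab = [tok for tok, _ in review_counts.most_common(n_review)]
    # The union serves as the encoder (input) vocabulary; the decoder
    # emits tokens from the code vocabulary only.
    return code_vocab, review_vocab

code_v, review_v = build_vocabs(
    code_corpus=[["public", "int", "x", ";"], ["public", "void", "f", "("]],
    review_corpus=[["missing", "space"], ["remove", "this"]],
    n_code=3, n_review=3,
)
```

Counting the two streams separately is what keeps words like "missing" in the vocabulary even though every code keyword occurs far more often in the corpus as a whole.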
To separate the review comment and code from each other, we wrap the review comment with the two special tokens \emph{<|startcomment|>} and \emph{<|endcomment|>}, and the code with the special tokens \emph{<|startcode|>} and \emph{<|endcode|>}. Finally, we concatenate them to produce a single input stream. As discussed earlier in Section~\ref{subsubsec:input-sequence}, the code review and code are limited to 200 tokens and 400 tokens, respectively; thus the network has an input size of 600 tokens. The input vocabulary of baseline $model\_cc$ contains 10,000 tokens; 2,000 are from code and 8,000 from review comments, as described in Section \ref{subsubsec:vocab}. The output of $model\_cc$ has a maximum length of 100 tokens and a vocabulary of 2,000 tokens. \subsection{Training with Code Change Only (\texorpdfstring{$model\_c$}{Lg})} \label{subsec:model-c} We create a second baseline model with the pointer generator network~\cite{gehrmann2018bottom} that predicts the code change by observing only the code before the change. This network is termed $model\_c$. Since this model does not consider the review comment, it deals with a smaller input vocabulary and shorter input sequences than $model\_cc$. Specifically, the network's input code and output are limited to 400 tokens and 100 tokens, respectively. The most frequent 2,000 code tokens in the training dataset are used for both the input and output vocabulary of $model\_c$. The output format of $model\_c$ is identical to that of $model\_cc$. \subsection{Inference and Detokenization} \label{subsec:inference} After training, we use the trained model to generate suggestions. During inference, we prepare our input following hard tokenization, as discussed in Section \ref{subsubsec:input-sequence}. We use beam search decoding~\cite{rush2013optimal} to generate multiple possible suggestions, similar to previous works~\cite{tufano2019learning,chen2018sequencer,tufano2019empirical}. We generate our target patches by detokenizing the suggestions from the model.
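The $model\_cc$ input-stream assembly can be sketched as follows. The sentinel-token names and the 200/400-token budgets are taken from the text; whether the four sentinel tokens count against the 600-token budget is an implementation detail we gloss over here, so this is an assumption-laden illustration rather than the exact preprocessing code.

```python
def build_input_stream(review_tokens, code_tokens,
                       max_review=200, max_code=400):
    """Wrap the (truncated) review and code in sentinel tokens and
    concatenate them into the single stream consumed by model_cc.

    Note: the four sentinels are emitted in addition to the token
    budgets here; the real system may account for them differently.
    """
    review = review_tokens[:max_review]
    code = code_tokens[:max_code]
    return (["<|startcomment|>"] + review + ["<|endcomment|>"]
            + ["<|startcode|>"] + code + ["<|endcode|>"])

stream = build_input_stream(["missing", "space"],
                            ["if", "(", "x", "!=", "y", ")"])
```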
Our \textit{Hard Tokenization} method prevents any information loss: the source code can be reproduced trivially from the token stream, preserving whitespace, indentation, and coding style.
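The lossless round trip claimed here can be demonstrated with a toy tokenizer. This is our own simplified stand-in for the paper's hard tokenization, which we do not reproduce exactly: whitespace runs are kept as tokens, so detokenization is plain concatenation.

```python
import re

def hard_tokenize(source):
    """Split code into tokens while keeping whitespace runs as tokens,
    so no formatting information is discarded."""
    return [t for t in re.split(r"(\s+|\W)", source) if t != ""]

def detokenize(tokens):
    """Reproducing the source is a plain concatenation."""
    return "".join(tokens)

src = "if (x != y) {\n    return;\n}"
assert detokenize(hard_tokenize(src)) == src  # round trip is exact
```

Because `re.split` with a capturing group returns the delimiters alongside the other fragments, every character of the input, including the indentation run `"\n    "`, survives as a token, which is what makes the inverse mapping trivial.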
\section{Introduction} The naturality and adequacy of the language of gerbe theory in the setting of the mechanics of the topologically charged bosonic loop, captured by the two-dimensional non-linear $\sigma$-model, and the efficiency of its higher-geometric, cohomological and categorial methods in the canonical description \cite{Gawedzki:1987ak,Suszek:2011hg}, symmetry analysis \cite{Gawedzki:2007uz,Gawedzki:2008um,Gawedzki:2010rn,Gawedzki:2012fu,Suszek:2012ddg} and constructive geometric quantisation \cite{Gawedzki:1987ak,Gawedzki:1999bq,Gawedzki:2002se,Gawedzki:2003pm,Gawedzki:2004tu,Suszek:2011hg} of field theories from this distinguished class, has, by now, attained the status of a widely documented, albeit clearly insufficiently exploited, fact. Introduced in the disguise of the Deligne--Beilinson hypercohomology in the pioneering works of Alvarez \cite{Alvarez:1984es} and Gaw\c{e}dzki \cite{Gawedzki:1987ak}, the language has found -- since the advent of the geometric formulation of gerbe theory worked out by Murray {\it et al.} in Refs.\,\cite{Murray:1994db,Murray:1999ew,Stevenson:2000wj,Bouwknegt:2001vu,Carey:2002,Carey:2004xt} -- ample structural applications in the study of $\sigma$-models and the associated conformal field theories and string theories, and in particular in a neat cohomological classification of quantum-mechanically consistent field theories of the type indicated (also in the presence of boundaries and defects in the two-dimensional spacetime \cite{Fuchs:2007fw,Runkel:2008gr,Suszek:2011hg}), in a concrete formulation of a universal Gauge Principle \cite{Gawedzki:2010rn,Gawedzki:2012fu,Suszek:2011,Suszek:2012ddg,Suszek:2013}, going well beyond the na\"ive minimal-coupling scheme, and in the resulting classification of obstructions against the gauging of rigid symmetries (or gauge anomalies) and of inequivalent gaugings, and -- finally -- in a rigorous geometric description of defects and their fusion in the said theories, in which the 
r\^ole of defects in the modelling of symmetries and dualities between theories has been elucidated and turned into a handy field-theoretic tool \cite{Fuchs:2007fw,Runkel:2008gr,Suszek:2011hg}. The models that afford the farthest insight and the richest pool of formal methods and constructions are those with a high internal symmetry, reflecting -- in consequence of their geometric nature -- a high symmetry of the target of propagation of the loop, to wit, the Wess--Zumino--Witten $\sigma$-models of loop dynamics on (compact) Lie groups \cite{Witten:1983ar,Gawedzki:1990jc,Gawedzki:1999bq,Gawedzki:2001rm} and their gauged variants \cite{Goddard:1984vk,Gawedzki:1988hq,Gawedzki:1988nj,Karabali:1988au,Hori:1994nc,Gawedzki:2001ye} defining that dynamics on the associated homogeneous spaces. The generating nature of these models in the category of rational conformal field theories in two dimensions and -- not unrelatedly -- their holographic correspondence with the three-dimensional Chern--Simons topological gauge field theory in the presence of Wilson loops, give a measure of the theoretical significance of a good understanding of these models offered by gerbe theory, and simultaneously provide us with numerous and varied means of verification of its field-theoretic predictions. 
From it, a picture of a coherent and unified higher-geometric and -algebraic description scheme of two-dimensional field theories with a topological charge emerges in which the constructions central to the systematic development of conformal field theory, often beyond the scope of alternative methods, find their manageable geometrisation, {\it e.g.}, a methodical construction of orbifolds and orientifolds of known $\sigma$-models in terms of gerbes with an equivariant structure resp.\ a Jandl structure \cite{Gawedzki:2003pm,Schreiber:2005mi,Gawedzki:2007uz,Gawedzki:2010G}, extending naturally to the formulation of $\sigma$-models on spaces of orbits of the action of continuous groups in what can be thought of as a natural generalisation of the concept of a worldsheet orbifold of \Rcite{Frohlich:2009gb} (going back to the seminal papers \cite{Dixon:1985jw,Dixon:1986jc} of Dixon, Harvey, Vafa and Witten) using the gauge-symmetry defects of Refs.\,\cite{Suszek:2011,Suszek:2012ddg,Suszek:2013} determined by the data of the relevant equivariant structure ({\it cp} also \Rcite{Runkel:2008gr} for an early instantiation of the idea); explicit equivariant geometric quantisation \cite{Gawedzki:1987ak,Gawedzki:1999bq,Gawedzki:2002se,Gawedzki:2003pm,Gawedzki:2004tu} in terms of the Cheeger--Simons differential characters provided by gerbe theory, leading to a hands-on realisation of Segal's idea of functorial quantisation advanced in \Rcite{Segal:2002}, and to the discovery of a new species of Dirichlet branes (the so-called non-abelian branes) over fixed points of the action of an orbifold group \cite{Gawedzki:2004tu} (the latter were first noticed by Douglas and Fiol in Refs.\,\cite{Douglas:1998xa,Douglas:1999hq}); and even, somewhat surprisingly, the elucidation of the peculiar structure of the emergent spectral noncommutative geometry of the maximally symmetric D-branes on the target Lie group \cite{Recknagel:2006hp}, determined by the loop-mechanical deformation of the 
Dirac operator, and so also of the associated differential calculus, given by the superconformal current of the relevant super-WZW $\sigma$-model in the spirit of \Rcite{Frohlich:1993es}.\smallskip Among the phenomena and constructions of the loop mechanics {\it not} covered (at least not in all generality) by gerbe theory to date, two stand out as particularly significant and hence pressing: \begin{itemize} \item[-] a rigorous and exhaustive treatment of purely loop-mechanical dualities, such as T-duality, with a view -- among other things -- to the construction, by means of an adaptation of the aforementioned generalised worldsheet orbifolding procedure, of (classical) geometries modelled on riemannian geometries of fixed topology (of a toroidal principal bundle over a given base) only locally, and with the global structure of an `orbifold' with respect to a suitably defined action of -- instead of the standard diffeomorphism group of the model space $\,{\mathbb{R}}^{\x n}\,$ -- the T-duality `group' of gerbe-theoretic $\sigma$-model dualities (with the group law captured by fusion of the corresponding T-duality bi-branes) determining the relevant `gluing' data -- these constructions, known under the name of T-folds \cite{Hull:2004in,Hull:2006qs}, would place us outside the paradigm of riemannian geometry; \item[-] an extension of the hitherto successful formalism of gerbe theory to models with supersymmetry. \end{itemize} As for the former issue, we shall not have anything to say in the present work, except for the comment that its proper analysis calls for the application of the methods recently developed in Refs.\,\cite{Gawedzki:2012fu,Suszek:2012ddg} and is a subject of ongoing research, to be reported shortly.
It is the latter point that we want to tackle herein, with the intention of clarifying the fundamental concepts and working out the basic formal tools through a case study focused on a target superspace whose geometric simplicity, as reflected in the (trivial) topology and (high) symmetry, gives hope for a relatively straightforward separation of that which is peculiar to such theories, and hence truly novel, from the standard intricacies and technical complexities of a higher-geometric and -algebraic analysis of a low-dimensional field theory with a topological charge. Thus, our endeavour is meant to be a prelude to more advanced studies in the direction that it sets, preparing the ground for subsequent developments in which the complexity of the geometries considered will no longer obscure the basic mechanisms at play in a supersymmetric $\sigma$-model.\smallskip The (pre)history of supersymmetry starts with the works of Miyazawa \cite{Miyazawa:1966mfa}, largely overlooked at an early stage of development of the idea and its associated mathematical formalism, as laid out in the later works of Gervais, Golfand, Volkov and Akulov \cite{Gervais:1971ji,Golfand:1971iw,Volkov:1972jx,Volkov:1973ix,Akulov:1974xz}, and -- in particular -- Wess and Zumino \cite{Wess:1973kz,Wess:1974tw} in which the theoretical concept was rediscovered and boosted in the direction of the applications in the model-building of high-energy physics that seemed so promising and exciting at the time. The related theory of supergeometry, based on the notion of a supermanifold, was worked out a little later by Berezin, Le\"ites, Schwarz and Voronov \cite{Berezin:1975,Schwarz:1984,Voronov:1984}, with its geometric content clarified by the structure theorem of Gaw\c{e}dzki and Batchelor \cite{Gawedzki:1977pb,Batchelor:1979a}.
These new concepts were assimilated and adapted by the string-theoretic community very early on, and gave rise to a plethora of consistent models free of the pathologies of their purely bosonic counterparts, of which we name only the original breakthrough models of the superstring due to Green and Schwarz \cite{Green:1983wt,Green:1983sg}, their higher-dimensional analogues for super-$p$-branes \cite{Achucarro:1987nc}, the celebrated anti-de Sitter superstring models of Refs.\,\cite{Metsaev:1998it,Arutyunov:2008if,Fre:2008qc,DAuria:2008vov} and the M-brane models of \Rcite{deWit:1998yu,Claus:1998fh}, as well as the superstring \cite{Bergshoeff:1985su} and su\-per\-mem\-brane \cite{Bergshoeff:1987cm} theories in curved supergravity backgrounds. It borders on impossible to do justice to a vast area of research such as this one and to recapitulate its development over the decades in a concise form adequate for our purposes, and so instead of doing this, we refer the interested Reader to the excellent reviews and introductory materials on the subject, {\it e.g.}, Refs.\,\cite{Weinberg:1999,Martin:1997ns} for an introduction to the physical, even phenomenological, aspects of the idea of supersymmetry, and Refs.\,\cite{Deligne:1999sgn,Freed:1999,Varadarajan:2004} for the more mathematically oriented mind looking in the same direction, as well as Refs.\,\cite{DeWitt:1992,Rogers:2007} for a gentle introduction to supergeometry. That which renders such a solution all the more apposite is the current somewhat uncertain phenomenological status of supersymmetry as a feature of fundamental interactions, which seems to imply that more weight should be attached to the motivation for the study of field theories exhibiting supersymmetry in an unbroken form than to the standard historical retrospective. In our case, the general motivation is of three-fold nature. 
On the one hand, there is a purely mathematical argument: Supergeometry, and in particular the theory of Lie supergroups, is a field of robust mathematical development and it seems only natural to transplant the ideas and methods of (bosonic) higher geometry onto it with view to furthering its progress, especially as this, in the theoretical context at hand, naturally leads to the emergence of a variety of mathematical structures interesting in their own right, such as, {\it e.g.}, the Lie-$n$-(super)algebras and $L_\infty$-(super)algebras of Baez and Huerta \cite{Baez:2010ye,Huerta:2011ic} that correspond to classes in higher groups of Chevalley--Eilenberg cohomology encountered in the construction of super-$\sigma$-models. On the other hand, there is a physical argument: It is the supersymmetric string theory in the distinguished supergravitational backgrounds of the anti-de Sitter type that forms the basis of one of the very few and important direct applications of string theory in the {\it predictive} description of observable phenomena involving strongly interacting elementary coloured particles (outside the perturbative r\'egime) {\it via} the conjectural AdS/CFT `correspondence' -- the super-$\sigma$-models of loop dynamics of relevance in this context, originally advanced by Metsaev and Tseytlin in \Rcite{Metsaev:1998it}, are precisely of the distinguished type described above as the corresponding supermanifolds with the anti-de Sitter space as the body are homogeneous spaces of certain Lie supergroups, {\it e.g.}, \begin{eqnarray}\nn &{\rm AdS}_5\x{\mathbb{S}}^5\cong{\rm SU}(2,2\,\vert\,4)/\bigl({\rm SO}(1,4)\x{\rm SO}(5)\bigr)\,,&\cr\cr &{\rm AdS}_4\x{\mathbb{S}}^7\cong{\rm OSp}(8\,\vert\,4)/\bigl({\rm SO}(1,3)\x{\rm SO}(7)\bigr)\,,\quad\qquad{\rm AdS}_7\x{\mathbb{S}}^4\cong{\rm OSp}(6,2\,\vert\,4)/\bigl({\rm SO}(1,6)\x{\rm SO}(4)\bigr)\,,& \end{eqnarray} and so developing new geometric tools for these models might shed some light on the fundamental
nature of the still incompletely understood correspondence of much physical relevance. Finally, there is a mixed mathematical-physical argument: Given the successes of the gerbe-theoretic paradigm established for the bosonic two-dimensional $\sigma$-model with the topological charge, it is tempting to test its universality by attempting to adapt it to an environment in which cohomological mechanisms altogether different from the previously encountered sheaf-theoretic and purely de Rham ones are at work and demand geometrisation, namely, the Cartan--Eilenberg {\it supersymmetry-invariant} cohomology of superdifferential forms on a Lie supergroup resp.\ its homogeneous space.\medskip Our choice of target supermanifolds to be studied, that is to say, homogeneous spaces of Lie supergroups, has far-reaching field-theoretic consequences. In the simplest case of the super-Minkowskian spacetime\footnote{The notation will be clarified in the main text.} $\,{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}$,\ the corresponding super-$\sigma$-models are simply super-counterparts of the bosonic WZW $\sigma$-models mentioned earlier \cite{Henneaux:1984mh}, and more generally they can be thought of as super-variants of gauged WZW $\sigma$-models, in conformity with the findings of \Rcite{Gawedzki:1988hq}. In the bosonic setting, there are simple geometric mechanisms that effect a quantisation of the topological charge and fix the relative normalisation of the topological and `metric' terms in the action functional. In the former case, and for a {\it compact} Lie group, Dirac's argument is usually adduced, which secretly captures the integrality condition for the periods of the curvature of the gerbe whose holonomy along the embedded worldsheet defines the topological Wess--Zumino term of the $\sigma$-model action functional.
In the latter case, it is the requirement of the existence of a (bi-)chiral (centrally extended) loop-group symmetry induced by left- and right-regular translations on the group manifold, and hence also of the (bi-)chiral Virasoro symmetry obtained from it {\it via} the standard Sugawara construction, that does the job. In the supergeometric setting at hand, we are confronted with the following obstacles that get in our way if we try to imitate the bosonic scheme: The non-compactness and topological triviality of the target supermanifold, and of the underlying (super)symmetry group, renders Dirac's argument ineffective, hence no quantisation of the topological charge is observed and the de Rham cohomology behind the topological term is as trivial as that of the bosonic body of the supermanifold\footnote{In general, this follows from a theorem by Kostant \cite{Kostant:1975}. In the cases studied, it can be checked directly.}. Hence, apparently, the super-$\sigma$-models of interest seem to have no non-trivial gerbe-theoretic content. 
Furthermore, the local symmetry fixing the relative normalisation of the two terms in the action functional of the super-$\sigma$-model, known as $\kappa$-symmetry \cite{deAzcarraga:1982njd,Siegel:1983hh,Siegel:1983ke}, while readily shown to have a simple geometric origin in the linearised (and further constrained) right-regular action of the Lie supergroup on itself, has a rather cumbersome and peculiar field-theoretic realisation in that it necessarily mixes the metric and topological (that is, gerbe-theoretic) components of the standard (Nambu--Goto resp.\ Polyakov) action functional and -- on top of that -- requires for the closure of its (commutator) algebra not only an enhancement by worldsheet diffeomorphisms (which is understandable in view of its origin and relation to the chiral symmetries of the bosonic WZW $\sigma$-model -- this is simply a super-instantiation of the Sugawara mechanism) but also the imposition of field equations of the super-$\sigma$-model \cite{McArthur:1999dy}, which seems to preclude its geometrisation in the form of an equivariant structure on the object geometrising the de Rham super-cocycle that determines the topological term of the action functional. 
A moment's thought reveals that both obstacles can and therefore ought to be circumnavigated, and it is the purpose of the present paper to demonstrate how to do it and study the consequences.\smallskip The triviality of the de Rham cohomology does not imply -- in consequence of the same non-compactness of the supersymmetry group that kills it, but with it also the implications of the Cartan--Eilenberg theorem for the relation between the standard de Rham cohomology and its invariant version -- triviality of the supersymmetric ({\it i.e.}, supersymmetry-invariant) de Rham cohomology, and -- indeed -- the Green--Schwarz super-$(p+2)$-cocycles on the super-Minkowskian spacetime defining the Wess--Zumino terms of the respective super-$p$-brane super-$\sigma$-models bear witness to that. A simple argument due to Rabin and Crane \cite{Rabin:1984rm,Rabin:1985tv} then shows that the invariant de Rham cohomology actually encodes information on the nontrivial {\it topology} of a supermanifold of the same type as $\,{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}\,$ ({\it i.e.}, modelled on the same vector bundle in the sense of the Gaw\c{e}dzki--Batchelor Theorem), namely, an orbifold of the super-Minkowskian spacetime by the natural action of the discrete Kosteleck{\'y}--Rabin supersymmetry group constructed in \Rcite{Kostelecky:1983qu} in the context of supersymmetric lattice field theory. This implies that the Green--Schwarz super-$\sigma$-model should be understood as a theory of embeddings of the super-$p$-brane worldvolume in the topologically nontrivial supertarget, and puts the topological term of that model on equal footing with the topological term of the bosonic WZW $\sigma$-model with a compact (and topologically nontrivial) Lie-group target. This means, in particular, that we should look for an appropriate geometrisation of the Green--Schwarz super-$(p+2)$-cocycles that define the topological term. 
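To fix ideas, let us record -- in a schematic convention, with the signs and normalisations that vary across the literature and are fixed in Section \ref{sec:sMinktarget} -- the structure of the simplest of these super-cocycles, to wit, the Green--Schwarz super-3-cocycle of the superstring: in terms of the manifestly supersymmetry-invariant super-1-forms on $\,{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}$,
\begin{eqnarray}\nn
\sigma^\a={\mathsf d}\theta^\a\,,\qquad\qquad e^\mu={\mathsf d} x^\mu+\overline\theta\,\Gamma^\mu\,{\mathsf d}\theta\,,
\end{eqnarray}
it takes the form
\begin{eqnarray}\nn
{\rm H}_3\propto\bigl(\overline\sigma\wedge\Gamma_\mu\,\sigma\bigr)\wedge e^\mu\,,\qquad\qquad{\mathsf d}{\rm H}_3\propto\bigl(\overline\sigma\wedge\Gamma_\mu\,\sigma\bigr)\wedge\bigl(\overline\sigma\wedge\Gamma^\mu\,\sigma\bigr)\,,
\end{eqnarray}
so that its closure hinges on the Clifford-algebra (Fierz) identity $\,\Gamma_{\mu\,(\a\beta}\,\Gamma^\mu_{\ \gamma\delta)}=0\,$ satisfied by the (Majorana resp.\ Majorana--Weyl) spinorial representations in $\,d\in\{3,4,6,10\}$.\ At the same time, $\,{\rm H}_3\,$ admits a globally smooth primitive which fails to be supersymmetry-invariant, and so it represents a nontrivial class in the Cartan--Eilenberg cohomology while being exact in the de Rham cohomology.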
Following this line of reasoning to its logical conclusion, we readily realise that in the present context `appropriate' is equivalent to `supersymmetry-(left-)invariant', which simply means that we may reproduce the geometrisation procedure of cohomological descent that associates a (bosonic) $p$-gerbe with a standard de Rham $(p+2)$-cocycle (to be detailed shortly) as long as we ensure that each supermanifold obtained in the procedure and -- as part of it -- surjectively submersed onto $\,{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}\,$ is equipped with a Lie-supergroup structure that projects, along the surjective submersion, to the original Lie-supergroup structure on $\,{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}$,\ and -- finally -- that the superdifferential forms defined on these supermanifolds and employed in the said procedure are left-invariant with respect to the natural (left) action of the respective Lie supergroups on their support (that is, on themselves). The success of a (super)geometrisation project thus outlined hinges on two classic cohomological results that carry over from the bosonic world to the supergeometric setting (as demonstrated in App.\,\ref{app:LieAlgCohom}), to wit, the equivalence between the Cartan--Eilenberg invariant cohomology of the Lie (super)group and the Chevalley--Eilenberg cohomology of its Lie (super)algebra with values in the trivial module $\,{\mathbb{R}}\,$ in conjunction with the correspondence between classes in the second cohomology group of the latter cohomology and equivalence classes of (super)central extensions of the Lie (super)algebra by that module. 
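The second of the two classic results deserves to be spelt out, as it is the algebraic workhorse of the constructions to follow: given a super-2-cocycle $\,\omega\,$ on a Lie superalgebra $\,\mathfrak{g}\,$ with values in the trivial module $\,{\mathbb{R}}$,\ the super-vector space $\,\widehat{\mathfrak{g}}:=\mathfrak{g}\oplus{\mathbb{R}}\langle Z\rangle$,\ with $\,Z\,$ central, becomes a Lie superalgebra with respect to the super-bracket
\begin{eqnarray}\nn
[X\oplus a\,Z,Y\oplus b\,Z]_{\widehat{\mathfrak{g}}}:=[X,Y]_{\mathfrak{g}}\oplus\omega(X,Y)\,Z\,,
\end{eqnarray}
the super-Jacobi identity of the latter being equivalent to the cocycle condition satisfied by $\,\omega$.\ Shifting $\,\omega\,$ by the coboundary of some $\,\lambda\in\mathfrak{g}^*\,$ yields an equivalent supercentral extension, with the equivalence induced (up to the sign conventions adopted for the Chevalley--Eilenberg differential) by the linear map $\,X\oplus a\,Z\longmapsto X\oplus\bigl(a+\lambda(X)\bigr)\,Z$,\ and upon integration of the extension to the Lie-supergroup level, the pullback of the original super-2-cocycle acquires a manifestly left-invariant primitive, to wit, the component of the Maurer--Cartan super-1-form along $\,Z$.\ It is precisely this elementary mechanism that shall be iterated below.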
These results translate the original geometric problem of finding a surjective submersion over the original supermanifold equipped with a Lie-supergroup structure and such that the pullback of the original Cartan--Eilenberg super-cocycle to it trivialises in the corresponding Cartan--Eilenberg cohomology into a purely algebraic one: In a systematic procedure laid out by de Azc\'arraga {\it et al.} in \Rcite{Chryssomalakos:2000xd}, we identify various Cartan--Eilenberg super-2-cocycles engendered by the Green--Schwarz super-$(p+2)$-cocycles and associate with them supercentral extensions of the underlying super-Minkowskian algebra, subsequently demonstrated to integrate to supercentral extensions of the Lie supergroup $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\equiv{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}\,$ on which the pullbacks of the respective super-$(p+2)$-cocycles trivialise partially, whereupon the procedure can be repeated with respect to these partial (supersymmetric) trivialisations. This leads to a family of so-called extended superspacetimes of the type first considered in \Rcite{Chryssomalakos:2000xd}, which we then take to be {\it the} surjective submersions of the gerbe-theoretic geometrisation scheme. This basic idea is then reapplied at higher levels of Murray's geometrisation ladder \cite{Murray:1994db}, ultimately leading to the emergence of a new (super)geometric species -- a {\bf Green--Schwarz super-$p$-gerbe}, the central result of the work reported herein (made explicit for $\,p\in\{0,1,2\}$).\smallskip At this stage, the structural affinity with the bosonic WZW $\sigma$-model becomes a rich source of intuitions concerning anticipated properties of the newly constructed (super)geometric objects -- their verification seems to provide the right measure of evidence in support of our claim of naturality of the construction postulated in the paper.
The first of these properties is the amenability of a {\it distinguished} realisation of the {\it rigid} (or {\it global}) supersymmetry of the super-$\sigma$-model under consideration to gauging, as reflected in the existence of an appropriate supersymmetry-equivariant structure on the associated super-$p$-gerbe, in conformity with the findings of Refs.\,\cite{Gawedzki:2010rn,Gawedzki:2012fu,Suszek:2011,Suszek:2012ddg,Suszek:2013}. Here, as before, `appropriate' means `supersymmetry-(left-)invariant' but the concept has to be adapted to the changed circumstances in which the spaces on which the supersymmetry group acts are components of the nerve of the action groupoid of the group subject to gauging. The existing knowledge of the obstructions to gauging of the various possible actions of the {\it maximal} symmetry group in the bosonic setting suggests the Lie supergroup $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\,$ in the adjoint realisation as a candidate for the (maximal) gauge group of the super-$\sigma$-model on $\,{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}\,$ (or, more adequately, as the structure group of the principal bundle that implements the gauge symmetry in the standard manner, {\it cp.}, \Rcite{Gawedzki:2010rn,Gawedzki:2012fu}), and -- indeed -- the corresponding ${\rm Ad}_\cdot$-equivariant structure can be consistently defined on the super-$p$-gerbe (as has been verified explicitly for $\,p\in\{0,1\}$). We emphasise once more that this is a nontrivial consistency check of our main proposal. \smallskip Finally, we come to the second apparent obstacle indicated above: the obstruction to the geometrisation of the gauge supersymmetry of the super-$\sigma$-model in the form of a full-fledged standard equivariant structure on the super-$p$-gerbe.
The relevance of this issue follows from the field-theoretic r\^ole played by the supersymmetry, which is that of a mechanism effectively removing the spurious ({\it i.e.}, pure gauge) spinorial degrees of freedom and thereby restoring an actual balance between the bosonic and fermionic degrees of freedom in the theory, and of an algebraic structure extending the worldvolume-diffeomorphism algebra. The absence of its analysis in any rigorous discussion of the higher-geometric content of the Green--Schwarz super-$\sigma$-models renders that discussion fundamentally incomplete. The first problem with $\kappa$-symmetry, which is the mixing of the metric and gerbe-theoretic components of the supergeometric background, can be remedied easily by passing to an equivalent formulation of the super-$\sigma$-model, originally due to Hughes and Polchinski \cite{Hughes:1986dn}, in which an enlargement of the covariant configuration bundle (or the `space of lagrangean fields') is accompanied by a replacement of the metric term in the original (Nambu--Goto resp.\ Polyakov) action functional with the pullback, along the lagrangean embedding field, of a distinguished super-$(p+1)$-form on the enlarged target supermanifold, with the topological term left unchanged, {\it i.e.}, pulled back from the original target supermanifold. The new super-$(p+1)$-form being manifestly supersymmetry-invariant, this leads to an extension of the pullback of the previously constructed Green--Schwarz super-$p$-gerbe by a {\it trivial} super-$p$-gerbe on the enlarged target supermanifold and an effective unification of metric and gerbe-theoretic components of the original supergeometric background. The latter emerge from what may rightly be termed the {\bf extended Green--Schwarz super-$p$-gerbe} only upon imposition of constraints (corresponding to the field equations for the extra lagrangean fields) that reduce the Hughes--Polchinski action functional to its Nambu--Goto ancestor. 
Rather conveniently, these constraints, of the type considered long ago by Ivanov and Ogievetsky in \Rcite{Ivanov:1975zq} in the context of nonlinear realisations of symmetries and recently revived by McArthur \cite{McArthur:1999dy,McArthur:2010zm} and West {\it et al.} \cite{West:2000hr,Gomis:2006xw,Gomis:2006wu} in the context of (super-)$\sigma$-model-building, admit straightforward supergeometrisation, which puts us in a position to enquire in a meaningful manner as to the existence of an equivariant structure on the extended super-$p$-gerbe reflecting the gauge supersymmetry in the Hughes--Polchinski formulation of the Green--Schwarz super-$\sigma$-model. Alas, not all problems with incorporating $\kappa$-symmetry in the newly established super-gerbe-theoretic formalism admit an equally satisfactory solution in the Hughes--Polchinski formulation. The construction of a full-blown $\kappa$-equivariant structure remains an open question and -- more fundamentally -- the realisation of global supersymmetry in the presence of the constraints mentioned above, of direct relevance to the very definition of gerbe-theoretic structures on the enlarged target supermanifold, requires further scrutiny. 
That said, it deserves to be emphasised that a most {\it natural} concept of an element-wise realisation of the symmetry under consideration on the extended super-$p$-gerbe in the form of a partial equivariant structure (termed `weak' in what follows) can be defined and constructed explicitly in the distinguished cases with $\,p\in\{0,1\}$.\ While far from being fully understood, this construction lends additional and highly nontrivial support to the main claim of the present work, which is that the (super)geometrisation of the Green--Schwarz super-$(p+2)$-cocycles postulated hereunder should be regarded as the proper counterpart of the well-established geometrisation scheme for de Rham $(p+2)$-cocycles, to be considered in the setting of the supersymmetric supergeometry of homogeneous spaces of Lie supergroups. {\it Addendum:} The notion of the supergerbe, understood as a geometrisation of the Green--Schwarz super-$(p+2)$-cocycle, was discussed from a formal point of view by Fiorenza, Sati and Schreiber in \Rcite{Fiorenza:2013nha}. The Author is grateful to Urs Schreiber for kindly drawing his attention to that article. \medskip The paper is organised as follows: \begin{itemize} \item In Section \ref{sec:Bose}, we recapitulate those elements of gerbe theory that become essential in subsequent analyses, and review the resulting canonical description and geometric quantisation of the bosonic two-dimensional $\sigma$-model with a topological term that the gerbe-theoretic approach naturally provides, with special emphasis on the geometric (and cohomological) structures that describe symmetries of the $\sigma$-model induced by automorphisms of the target space, and in particular those amenable to gauging; we complement the introductory part with a definition of a bundle 2-gerbe for the sake of handy reference in a later supersymmetric generalisation. 
\item In Section \ref{sec:stensor}, we introduce the broad class of supergeometries of direct interest to us in the present work and its planned continuation. These are supermanifolds endowed with the structure of a homogeneous space of a Lie supergroup and a distinguished representative of a class in the corresponding Cartan--Eilenberg cohomology that determines the topological term in the action functional of the supersymmetric $\sigma$-model to be studied, that is, the Green--Schwarz super-$\sigma$-model that describes the geometrodynamics of standard super-$p$-branes. We recall the two formulations of the super-$\sigma$-model used in later considerations: the Nambu--Goto formulation and the Hughes--Polchinski formulation, and subsequently identify sufficient conditions for the (classical) equivalence of the two formulations, whereupon the equivalence is stated in Props.\,\ref{prop:IHCart} and \ref{prop:IHCartMink} and proven. \item In Section \ref{sec:sMinktarget}, we zoom in on the specific backgrounds of super-$p$-brane propagation that constitute the main subject of the study reported herein, to wit, the super-Minkowskian backgrounds equipped with the Green--Schwarz super-$(p+2)$-cocycles that we list, alongside their non-supersymmetric primitives used in subsequent analysis.
\item Section \ref{sec:GSgerbe} contains the main proposals and results of the study reported herein: the geometrisation scheme for the previously introduced Green--Schwarz super-$(p+2)$-cocycles (for $\,p\in\{0,1,2\}$), resulting in the general definition (Defs.\,\ref{def:CaEs0g} and \ref{def:CaEs1g}) and explicit construction (Defs.\,\ref{def:s0gerbe}, \ref{def:s1gerbe} and \ref{def:s2gerbe} and Props.\,\ref{prop:s0gerbe} and \ref{prop:s1gerbe}) of the corresponding Green--Schwarz super-$p$-gerbes on the super-Minkowskian spacetime, and a detailed discussion of a natural realisation on these novel supergeometric objects of the geometric action of the supersymmetry group lifted from their base, culminating in the definition (Defs.\,\ref{def:SUSYequiv0} and \ref{def:SUSYequiv1}) and explicit construction (Props.\,\ref{prop:Adequivstr0} and \ref{prop:Adequivstr1}) of the corresponding supersymmetry-(${\rm Ad}_\cdot$-)equivariant structure (for $\,p\in\{0,1\}$). \item In Section \ref{sec:kappa}, we rederive, in a purely geometric manner, the (spinorial) $\kappa$-symmetry of the Green--Schwarz super-$\sigma$-model for the super-0-brane and the superstring in the Hughes--Pol\-chin\-ski formulation and extract the resulting (Cartan-)geometric constraints that have to be imposed on the field configurations of the model in order for the equivalence with the standard Nambu--Goto formulation to hold, and -- subsequently -- those which render the $\kappa$-symmetry of the super-$\sigma$-model well-defined. These serve to codetermine suitable restrictions of the covariant configuration bundles of the super-$\sigma$-models, on which we then erect the respective extended Green--Schwarz super-$p$-gerbes. The section ends with a preliminary discussion of an element-wise realisation on the newly constructed super-$p$-gerbes (with $\,p\in\{0,1\}$) of the infinitesimal action of the $\kappa$-symmetry in the form of what we call a weak equivariant structure.
\item Section \ref{ref:CandO} summarises the main constructions and findings reported in the present paper and indicates directions of potential future research based upon them. \item The Appendices contain introductory material and certain ancillary results on Lie superalgebras and their (Chevalley--Eilenberg) cohomology, as well as some technical proofs of statements articulated in the main text. \end{itemize} \bigskip \noindent{\bf Acknowledgements:} This work is a humble tribute to a Friend and Teacher, Professor Krzysztof Gaw\c{e}dzki, on the occasion of His seventieth birthday. Any attempt at a concise verbalisation of a meaningful and yet -- of necessity -- sufficiently formal acknowledgement of His r\^ole in the scientific and extra-scientific formation of the author is bound to fall short of the sincere intention, and so shall be omitted. This leaves the author with the pleasurable obligation of expressing a deep and true thankfulness to his Colleagues and Friends at and outside the Department of Mathematical Methods in Physics of the Faculty of Physics at the University of Warsaw for creating and maintaining an inspiring atmosphere of scientific work and human interaction in which the spirit of the late Professor Krzysztof Maurin finds its very fitting incarnation, as well as for their understanding of the author's other passions, including that for the defence of fundamental civil rights and liberties of his fellow citizens, in which understanding their human sensitivity and decency is congenially reflected -- a rare source of satisfaction and relief in these sad times.
Finally, the author cannot but acknowledge, without the least gratitude but with, instead, deepest civil despondency and a poignant awareness of a rapidly growing cultural alienation within a largely indifferent and populism-prone society of the post-truth era, the steadfast and disquietingly methodical efforts on the part of the current pro-authoritarian government of the Republic of Poland, of a truly bewildering intensity and scope and devastating sociological ramifications, to keep him, alongside many other active members of the Polish civil society, as occupied -- be it with acts of civil disobedience, stubborn street protests, confrontation with the party-controlled prosecution, the incessant (and sadly ineffective) write-up of petitions and letters of grievance or various activities aimed at raising social consciousness of the government's heinous wrongdoing and the complex context of the current civilisational devolution -- and consequently as withdrawn from research as a passion-driven individual with a non-trivial charge of civil sensitivity and a rich historical memory of the villainy of totalitarian r\'egimes can ever be made by a government with the intellectual deficiencies, cultural ignorance, moral depravity and documented propensity for increasingly frequent abysmal paroxysms of barbarism pure of this one.
\newpage \section{Recapitulation of the gerbe theory for the bosonic $\sigma$-model}\label{sec:Bose} In this opening section, we consider the monophase bosonic two-dimensional non-linear $\sigma$-model with a spacetime $\,(\Sigma,\gamma)$,\ termed the \textbf{worldsheet}, given by a closed two-dimensional manifold $\,\Sigma\,$ with an intrinsic metric $\,\gamma$,\ and a covariant configuration bundle $\,\Sigma\x M\longrightarrow\Sigma\,$ whose fibre $\,M$,\ termed the \textbf{target space}, is a differentiable manifold of class\footnote{Formulation of the $\sigma$-model requires the target space to be of class $\,C^2\,$ only, but we shall assume a higher degree of smoothness to keep subsequent formul\ae ~simpler.} $\,C^\infty$.\ The model is defined by an action functional $\,S_\sigma\,$ with domain $\,C^\infty(\Sigma,M)\,$ whose stationary points are (generalised) harmonic maps $\,x\ :\ \Sigma\longrightarrow M$. A rigorous formulation of the monophase $\sigma$-model calls for additional structure on $\,M$,\ to wit, a metric tensor $\,{\rm g}\in\Gamma({\mathsf T}^*M\underset{\rm{\tiny sym}}{\otimes_{M,{\mathbb{R}}}}{\mathsf T}^*M)\,$ (giving rise to the Levi-Civita connection $\,\nabla_{\rm{LC}}^{\rm g}$) and an abelian bundle gerbe (with connection and curving) $\,\mathcal{G}\,$ of curvature $\,{\rm H}\equiv\curv(\mathcal{G})\in Z^3_{\rm{dR}}(M)\,$ with periods in $\,2\pi{\mathbb{Z}}$.\ The two tensors $\,{\rm g}\,$ and $\,{\rm H}\,$ are related by the requirement of the vanishing of the Weyl anomaly\footnote{The anomaly is usually computed and presented as a perturbative series in the string tension $\,\a'$.} of the $\sigma$-model, \begin{eqnarray}\nn R_{\mu\nu}(\nabla_{\rm{LC}}^{\rm g})-\tfrac{1}{4}\,\left({\rm g}^{-1}\right)^{\a\gamma}\,\left({\rm g}^{-1}\right)^{\beta\delta}\,{\rm H}_{\mu\a\beta}\,{\rm H}_{\nu\gamma\delta}+O(\a')=0\,, \end{eqnarray} a prerequisite of a non-anomalous realisation of the conformal (gauge) symmetry of the classical field theory in
the quantum r\'egime. The metric on $\,M\,$ determines -- through the induction of the first fundamental form $\,x^*{\rm g}\,$ on $\,\Sigma\,$ along $\,x\,$ -- the so-called metric term in $\,S_\sigma$,\ which we choose -- with hindsight -- to write in the Nambu--Goto form\footnote{There exists an alternative, and classically essentially equivalent form of the metric term, termed the Polyakov form, which, however, will not be employed in the present work.} \begin{eqnarray}\nn S_{\rm{metr, NG}}[x]:=\int_\Sigma\,\Vol(\Sigma)\,\sqrt{\bigl\vert{\rm det}_{(2)}\,\bigl(x^*{\rm g}\bigr)\bigr\vert}\,, \end{eqnarray} whereas the gerbe defines the topological Wess--Zumino term that exponentiates to a Cheeger--Simons differential character $\,{\rm Hol}_\mathcal{G}\,$ termed the (surface) holonomy of gerbe $\,\mathcal{G}\,$ (and computed along map $\,x$), altogether giving rise to a well-defined Dirac--Feynman amplitude (written for $\,\hbar=1$) \begin{eqnarray}\label{eq:sibos} \cA_{\rm{DF}}[x]:={\rm exp}\left({\mathsf i}\,S_\sigma[x]\right)\equiv{\rm exp}\left({\mathsf i}\,S_{\rm{metr, NG}}[x]\right)\cdot{\rm Hol}_\mathcal{G}(x)\,. 
\end{eqnarray} The holonomy can most concisely be described as the image of the isoclass of the flat pullback gerbe $\,x^*\mathcal{G}\,$ under the composite isomorphism\footnote{The isomorphism can readily be derived by examining a sheaf-theoretic description of the flat gerbe (note that every gerbe over $\,\Sigma\,$ is flat for dimensional reasons) and following the long exact sequence in the sheaf cohomology of $\,\Sigma\,$ induced by the standard exponential short exact sequence $\,0\longrightarrow\unl{\mathbb{Z}}\xrightarrow{\ 2\pi\cdot\ }\unl{\mathbb{R}}\xrightarrow{\ {\rm exp}(\cdot)\ }\unl{\rm U}(1)\longrightarrow 0\,$ of sheaves of locally constant maps on $\,\Sigma$.} \begin{eqnarray}\nn \mathcal{W}^3(\Sigma;0)\cong\check{H}^2\bigl(\Sigma,\unl{\rm U}(1)\bigr)\cong{\rm U}(1) \end{eqnarray} between the group $\,\mathcal{W}^3(\Sigma;0)\,$ of isoclasses of flat gerbes over $\,\Sigma\,$ (with the class of the tensor product of representatives as the group operation) and $\,{\rm U}(1)\,$ (we assume $\,\Sigma\,$ to be connected). The intermediate group is the second \v{C}ech-cohomology group of $\,\Sigma\,$ with values in the sheaf of constant maps to $\,{\rm U}(1)$. It stands to reason that a structural (non-na\"ive) supersymmetrisation of the $\sigma$-model affects the various components $\,{\rm g},\mathcal{G}\,$ of the geometric background of the loop propagation.
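Before turning to the supergeometric setting, we record also -- for later reference, and in terms of the sheaf-theoretic data $\,(B_i,A_{jk},g_{lmn})\,$ of $\,\mathcal{G}\,$ recalled in the next subsection -- the classic local expression for the surface holonomy, going back to \Rcite{Gawedzki:1987ak}: upon choosing a triangulation of $\,\Sigma\,$ with triangles $\,t$,\ edges $\,e\,$ and vertices $\,v$,\ together with an assignment $\,s\longmapsto i_s\,$ of indices of the good open cover such that $\,x(s)\subset\mathcal{O}_{i_s}\,$ for every simplex $\,s$,\ one has, schematically (with the signs and the order of the indices fixed by the orientations of the various simplices),
\begin{eqnarray}\nn
{\rm Hol}_\mathcal{G}(x)=\prod_t\,{\rm exp}\Bigl({\mathsf i}\,\int_t\,x^*B_{i_t}\Bigr)\cdot\prod_{e\subset\partial t}\,{\rm exp}\Bigl({\mathsf i}\,\int_e\,x^*A_{i_e i_t}\Bigr)\cdot\prod_{v\in\partial e\subset\partial t}\,g_{i_v i_e i_t}\bigl(x(v)\bigr)^{\pm 1}\,,
\end{eqnarray}
an expression whose independence of all the choices made is readily verified with the help of the relations satisfied by the local data.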
While the candidate extension of the tensor $\,{\rm g}\,$ under such supersymmetrisation is not -- as shall be elucidated shortly -- difficult to conceive and quantify, at least in geometrically simple circumstances, it is not at all clear even how to approach the supergeometric counterpart of $\,\mathcal{G}$.\ Therefore, it seems apposite to first present a number of equivalent descriptions and fundamental properties of the gerbe and its field-theoretic guises with a view to establishing a vast scope of constructions from which to choose those that generalise to the supergeometric setting naturally and usefully. Below, we demonstrate the many faces of the gerbe $\,\mathcal{G}\,$ with a fixed {\bf curvature} $\,{\rm H}\in Z^3_{\rm{dR}}(M)$,\ to be understood as a geometrisation of the de Rham 3-cocycle $\,{\rm H}\,$ on the base $\,M$,\ much in the same manner as a line bundle (with connection) is to be understood as a geometrisation of the de Rham 2-cocycle $\,{\rm F}\in Z^2_{\rm{dR}}(M)\,$ of its curvature. \subsection{Gerbe theory in a nutshell} The point of departure of our recapitulation is the cohomological description of the gerbe. Thus, any local trivialisation of the (co)homology of $\,M\,$ yields a presentation of $\,\mathcal{G}\,$ in terms of its sheaf-theoretic data: Given a good open cover\footnote{A good open cover is an open cover $\,\{\mathcal{O}_i\}_{i\in\mathscr{I}}\,$ with all non-empty (finite) multiple intersections $\,\mathcal{O}_{i_1}\cap\mathcal{O}_{i_2}\cap\cdots\cap\mathcal{O}_{i_N},\ i_1,i_2,\ldots,i_N\in\mathscr{I},\ N\in{\mathbb{N}}^\x\,$ contractible.
In the light of Weil's proof of the Weil--de Rham Theorem, reported in \Rcite{Weil:goc1952}, such a cover always exists on a differentiable manifold of class $\,C^2$,\ a property implicitly assumed in constructing the $\sigma$-model.} $\,\mathcal{O}_M:=\{\mathcal{O}_i\}_{i\in\mathscr{I}}\,$ ($\mathscr{I}\,$ is an index set and we denote, for any $\,N\in{\mathbb{N}}^\x$,\ sets $\,\mathscr{I}_N:=\{\ (i_1,i_2,\ldots,i_N)\in\mathscr{I}^{\x N}\ |\ \exists\ \mathcal{O}_{i_1 i_2\cdots i_N}\equiv\mathcal{O}_{i_1}\cap\mathcal{O}_{i_2}\cap\cdots\cap\mathcal{O}_{i_N}\neq\emptyset\ \}$), the gerbe is identified with a class \begin{eqnarray}\nn \left[(B_i,A_{jk},g_{lmn})_{i\in\mathscr{I},\ (j,k)\in\mathscr{I}_2,\ (l,m,n)\in\mathscr{I}_3}\right] \end{eqnarray} of a \v{C}ech--de Rham 2-cocycle trivialising the de Rham 3-cocycle $\,{\rm H}\,$ over $\,\mathcal{O}_M$,\ with data $\,(B_i,A_{jk},g_{lmn})\in\Omega^2(\mathcal{O}_i)\x\Omega^1(\mathcal{O}_{jk})\x C^\infty(\mathcal{O}_{lmn},{\rm U}(1))$ defined by the relations \begin{eqnarray}\nn &{\mathsf d} B_i={\rm H}\mathord{\restriction}_{\mathcal{O}_i}\,,\qquad{\mathsf d} A_{jk}=(B_k-B_j)\mathord{\restriction}_{\mathcal{O}_{jk}}\,,\qquad{\mathsf i}\,{\mathsf d}\log g_{lmn}=(A_{mn}-A_{ln}+A_{lm})\mathord{\restriction}_{\mathcal{O}_{lmn}}\,,&\cr\cr &\bigl(g_{pqr}\cdot g_{oqr}^{-1}\cdot g_{opr}\cdot g_{opq}^{-1}\bigr)\mathord{\restriction}_{\mathcal{O}_{opqr}}=1\,,\quad(o,p,q,r)\in\mathscr{I}_4& \end{eqnarray} up to redefinitions, for arbitrary $\,(C_i,h_{jk})\in\Omega^1(\mathcal{O}_i)\x C^\infty(\mathcal{O}_{jk},{\rm U}(1))$, \begin{eqnarray} (B_i,A_{jk},g_{lmn})\longmapsto\bigl(B_i+{\mathsf d} C_i,A_{jk}+(C_k-C_j)\mathord{\restriction}_{\mathcal{O}_{jk}}-{\mathsf i}\,{\mathsf d}\log h_{jk},g_{lmn}\cdot \bigl(h_{mn}^{-1}\cdot h_{ln}\cdot h_{lm}^{-1}\bigr)\mathord{\restriction}_{\mathcal{O}_{lmn}}\bigr)\cr\label{eq:1iso} \end{eqnarray} in the 2nd real Deligne--Beilinson hypercohomology group
$\,{\mathbb{H}}^2\left(M,\mathcal{D}(2)^\bullet \right)$,\ {\it i.e.}, the cohomology of the total complex of the bicomplex formed by an extension of the bounded Deligne complex \begin{eqnarray}\nn \mathcal{D}(n)^\bullet\quad\equiv\quad \unl{\rm U}(1)_M\xrightarrow{\ \tfrac{1}{{\mathsf i}}\,{\mathsf d}\log\ }\unl{\Omega^1(M)}\xrightarrow{\ {\mathsf d}\ }\unl{\Omega^2(M)}\xrightarrow{\ {\mathsf d}\ }\cdots\xrightarrow{\ {\mathsf d}\ }\unl{\Omega^n(M)} \end{eqnarray} of sheaves of locally smooth maps and $p$-forms (for $\,p\in\ovl{1,n}\,$ with, in the present case, $\,n=2$) in the direction of the \v{C}ech cohomology associated with $\,\mathcal{O}_M$,\ {\it cp.}\ \Rcite{Johnson:2003}. Of course, a given gerbe may -- just like a line bundle -- trivialise over an open cover $\,\mathcal{O}_M\,$ that is not good in the sense specified above -- we call the latter a trivialising open cover in the present context. The gerbe may also, and equivalently, be realised as a purely geometric object \begin{eqnarray}\nn \mathcal{G}=({\mathsf Y} M,\pi_{{\mathsf Y} M},{\rm B},L,\nabla_L,\mu_L)\,, \end{eqnarray} known also as the bundle gerbe: Given an arbitrary surjective submersion \begin{eqnarray}\nn \pi_{{\mathsf Y} M}\ :\ {\mathsf Y} M\longrightarrow M \end{eqnarray} on whose total space there exists a globally smooth primitive \begin{eqnarray}\nn {\rm B}\in\Omega^2({\mathsf Y} M) \end{eqnarray} of the pullback \begin{eqnarray}\nn \pi_{{\mathsf Y} M}^*{\rm H}={\mathsf d}{\rm B} \end{eqnarray} (termed the {\bf curving} of the gerbe), we erect, over the double fibred product $\,{\mathsf Y}^{[2]}M\equiv{\mathsf Y} M\x_M{\mathsf Y} M\,$ described by the commutative diagram \begin{eqnarray}\nn \alxydim{@C=.75cm@R=1cm}{ & {\mathsf Y}^{[2]}M \ar[rd]^{{\rm pr}_2} \ar[ld]_{{\rm pr}_1} & \\ {\mathsf Y} M \ar[rd]_{\pi_{{\mathsf Y} M}} & & {\mathsf Y} M \ar[ld]^{\pi_{{\mathsf Y} M}} \\ & M & }\,, \end{eqnarray} a principal bundle \begin{eqnarray}\nn \alxydim{@C=.75cm@R=1cm}{{\mathbb{C}}^\x
\ar[r] & L \ar[d]^{\pi_L} \\ & {\mathsf Y}^{[2]}M } \end{eqnarray} with connection (termed the {\bf connection} of the gerbe and represented by the covariant derivative) $\,\nabla_L\,$ of curvature \begin{eqnarray}\nn \curv(\nabla_L)={\rm pr}_2^*{\rm B}-{\rm pr}_1^*{\rm B}\,, \end{eqnarray} endowed with a fibrewise groupoid structure, {\it i.e.}, a connection-preserving principal-bundle isomorphism\footnote{The tensor product $\,L_1\otimes L_2\,$ of principal ${\mathbb{C}}^\x$-bundles $\,L_\a,\ \a\in\{1,2\}\,$ is defined, after \Rcite{Brylinski:1993ab}, as the (principal) bundle $\,(L_1\x L_2)/{\mathbb{C}}^\x\,$ associated with $\,L_1\,$ through the defining ${\mathbb{C}}^\x$-action on $\,L_2$,\ to be denoted by $\,\vartriangleleft$.\label{foot:Cxprintens}} (termed the {\bf groupoid structure}) \begin{eqnarray}\nn \mu_L\ :\ {\rm pr}_{1,2}^*L\otimes{\rm pr}_{2,3}^*L\xrightarrow{\ \cong\ }{\rm pr}_{1,3}^*L \end{eqnarray} over the triple fibred product $\,{\mathsf Y}^{[3]}M\equiv{\mathsf Y} M\x_M{\mathsf Y} M\x_M{\mathsf Y} M\,$ described by the commutative diagram \begin{eqnarray}\nn \alxydim{@C=1.5cm@R=1cm}{ & {\mathsf Y}^{[3]}M \ar[rd]^{{\rm pr}_3} \ar[d]_{{\rm pr}_2} \ar[ld]_{{\rm pr}_1} & \\ {\mathsf Y} M \ar[rd]_{\pi_{{\mathsf Y} M}} & {\mathsf Y} M \ar[d]_{\pi_{{\mathsf Y} M}} & {\mathsf Y} M \ar[ld]^{\pi_{{\mathsf Y} M}} \\ & M & }\,, \end{eqnarray} (with its canonical projections $\,{\rm pr}_{i,j}\equiv({\rm pr}_i,{\rm pr}_j)\ :\ {\mathsf Y}^{[3]}M\longrightarrow {\mathsf Y}^{[2]}M,\ (i,j)\in\{(1,2),(2,3),$ $(1,3)\}$),\ subject to the associativity constraint \begin{eqnarray}\label{eq:mugrpd} {\rm pr}_{1,2,4}^*\mu_L\circ({\rm id}_{{\rm pr}_{1,2}^*L}\otimes{\rm pr}_{2,3,4}^*\mu_L)={\rm pr}_{1,3,4}^*\mu_L\circ({\rm pr}_{1,2,3}^*\mu_L\otimes {\rm id}_{{\rm pr}_{3,4}^*L}) \end{eqnarray} over the quadruple fibred product $\,{\mathsf Y}^{[4]}M\equiv{\mathsf Y} M\x_M{\mathsf Y} M\x_M{\mathsf Y} M\x_M{\mathsf Y} M\,$ described by the commutative diagram 
\begin{eqnarray}\nn \alxydim{@C=1.5cm@R=1cm}{ & & {\mathsf Y}^{[4]}M \ar[rrd]^{{\rm pr}_4} \ar[rd]_{{\rm pr}_3} \ar[ld]^{{\rm pr}_2} \ar[lld]_{{\rm pr}_1} & & \\ {\mathsf Y} M \ar[rrd]_{\pi_{{\mathsf Y} M}} & {\mathsf Y} M \ar[rd]^{\pi_{{\mathsf Y} M}} & & {\mathsf Y} M \ar[ld]_{\pi_{{\mathsf Y} M}} & {\mathsf Y} M \ar[lld]^{\pi_{{\mathsf Y} M}} \\ & & M & & }\,, \end{eqnarray} (with its canonical projections $\,{\rm pr}_{i,j,k}\equiv({\rm pr}_i,{\rm pr}_j,{\rm pr}_k)\ :\ {\mathsf Y}^{[4]}M\longrightarrow {\mathsf Y}^{[3]}M\,$ and $\,{\rm pr}_{i,j}\equiv({\rm pr}_i,{\rm pr}_j)\ :\ {\mathsf Y}^{[4]}M\longrightarrow {\mathsf Y}^{[2]}M,\ i,j\in\{1,2,3,4\}$), {\it cp.}\ Refs.\,\cite{Murray:1994db,Murray:1999ew,Stevenson:2000wj}. Equivalence between the two pictures, the cohomological and the geometric one, is established, on the one hand, with the help of the construction of the nerve of the trivialising open cover $\,\mathcal{O}_M$,\ whose components become the $M$-fibred powers of the surjective submersion $\,\bigsqcup_{i\in\mathscr{I}}\,\mathcal{O}_i\longrightarrow M\,$ over which the various collections of local data define smooth geometric objects (in the standard differentiable structure of a disjoint union of manifolds), and, on the other hand, with the help of local sections of the various surjective submersions employed in the geometric description: $\,\pi_{{\mathsf Y} M},\ \pi_L\,$ and those derived from them, providing us with local data of the geometric objects $\,{\rm B},\nabla_L\,$ and $\,\mu_L$.\ Thus, the geometric objects determined by the \v{C}ech--Deligne 2-cocycle $\,(B_i,A_{ij},g_{ijk})_{i,j,k\in\mathscr{I}}\,$ are \begin{eqnarray}\nn &\pi_{{\mathsf Y} M}\ :\ \bigsqcup_{i\in\mathscr{I}}\,\mathcal{O}_i\longrightarrow M\ :\ (x,i)\longmapsto x\,,\qquad{\rm B}\mathord{\restriction}_{\mathcal{O}_i}=B_i\,,&\cr\cr &\pi_L={\rm pr}_1\ :\ 
L=\bigl(\bigsqcup_{(j,k)\in\mathscr{I}_2}\,\mathcal{O}_{jk}\bigr)\x{\mathbb{C}}^\x\longrightarrow\bigsqcup_{(j,k)\in\mathscr{I}_2}\,\mathcal{O}_{jk}\equiv{\mathsf Y}^{[2]}M\,,\qquad\nabla_L\mathord{\restriction}_{\mathcal{O}_{jk}}={\mathsf d}+\tfrac{1}{{\mathsf i}}\,A_{jk}\,,&\cr\cr &\mu_L\bigl((x,l,m,z_1),(x,m,n,z_2)\bigr)=\bigl(x,l,n,g_{lmn}(x)\cdot z_1\cdot z_2\bigr)\,,\quad (x,l,m,n)\in\bigsqcup_{(i,j,k)\in\mathscr{I}_3}\,\mathcal{O}_{ijk}\equiv{\mathsf Y}^{[3]}M\,.& \end{eqnarray} Conversely, given geometric data $\,({\mathsf Y} M,\pi_{{\mathsf Y} M},{\rm B},L,\nabla_L,\mu_L)\,$ and a choice of an open cover $\,\{\mathcal{O}_i\}_{i\in\mathscr{I}}\,$ of $\,M\,$ with local sections $\,\sigma_i\ :\ \mathcal{O}_i\longrightarrow{\mathsf Y} M\,$ giving rise to local sections $\,\sigma_{i_1 i_2\ldots i_N}\equiv(\sigma_{i_1},\sigma_{i_2},\ldots,\sigma_{i_N})\ :\ \mathcal{O}_{i_1 i_2\ldots i_N}\longrightarrow{\mathsf Y}^{[N]}M\,$ and sufficiently fine for the sets $\,\sigma_{ij}(\mathcal{O}_{ij})\subset{\mathsf Y}^{[2]}M\,$ to support flat (unital) local sections $\,s_{ij}=s^{-1}_{ji}\circ\tau\ :\ \sigma_{ij}(\mathcal{O}_{ij})\longrightarrow L$,\ with $\,\tau\ :\ {\mathsf Y}^{[2]}M\longrightarrow{\mathsf Y}^{[2]}M\ :\ (y_1,y_2)\longmapsto(y_2,y_1)$,\ we define local data \begin{eqnarray}\nn B_i=\sigma_i^*{\rm B}\,,\qquad A_{ij}\otimes s_{ij}\circ\sigma_{ij}={\mathsf i}\,\sigma_{ij}^*(\nabla_L s_{ij})\,,\qquad\mu\bigl(s_{ij}\circ\sigma_{ij},s_{jk}\circ\sigma_{jk}\bigr)=(s_{ik}\circ\sigma_{ik})\vartriangleleft g_{ijk}\,. 
\end{eqnarray} Under the correspondence, the cohomological equivalence relation behind the definition of the class $\,\left[(B_i,A_{jk},g_{lmn})_{i\in\mathscr{I},\ (j,k)\in\mathscr{I}_2,\ (l,m,n)\in\mathscr{I}_3}\right]\,$ translates into the notion of an isomorphism between bundle gerbes: Given two such gerbes $\,\mathcal{G}_\a=({\mathsf Y}_\a M,\pi_{{\mathsf Y}_\a M},{\rm B}_\a,L_\a,\nabla_{L_\a},\mu_{L_\a}),\ \a\in\{1,2\}$,\ we call them {\bf 1-isomorphic} if there exists a quintuple \begin{eqnarray}\nn \Phi=({\mathsf Y}\sfY_{1,2}M,\pi_{{\mathsf Y}\sfY_{1,2}M},E,\nabla_E,\a_E) \end{eqnarray} itself termed a {\bf 1-isomorphism} and composed of a surjective submersion \begin{eqnarray}\nn \pi_{{\mathsf Y}\sfY_{1,2}M}\ :\ {\mathsf Y}\sfY_{1,2}M\longrightarrow{\mathsf Y}_1 M\x_M{\mathsf Y}_2 M\equiv{\mathsf Y}_{1,2}M \end{eqnarray} over the fibred product $\,{\mathsf Y}_{1,2}M$,\ described by the commutative diagram \begin{eqnarray}\nn \alxydim{@C=.75cm@R=1cm}{ & {\mathsf Y}_{1,2}M \ar[rd]^{{\rm pr}_2} \ar[ld]_{{\rm pr}_1} & \\ {\mathsf Y}_1 M \ar[rd]_{\pi_{{\mathsf Y}_1 M}} & & {\mathsf Y}_2 M \ar[ld]^{\pi_{{\mathsf Y}_2 M}} \\ & M & }\,, \end{eqnarray} of a principal bundle \begin{eqnarray}\nn \alxydim{@C=.75cm@R=1cm}{{\mathbb{C}}^\x \ar[r] & E \ar[d]^{\pi_E} \\ & {\mathsf Y}\sfY_{1,2}M } \end{eqnarray} with connection $\,\nabla_E\,$ of curvature \begin{eqnarray}\nn \curv(\nabla_E)=\pi_{{\mathsf Y}\sfY_{1,2}M}^*\bigl({\rm pr}_2^*{\rm B}_2-{\rm pr}_1^*{\rm B}_1\bigr)\,, \end{eqnarray} and of a connection-preserving principal-bundle isomorphism \begin{eqnarray}\nn \a_E\ :\ (\pi_{{\mathsf Y}\sfY_{1,2}M}\x\pi_{{\mathsf Y}\sfY_{1,2}M})^*\circ{\rm pr}_{1,3}^*L_1\otimes{\rm pr}_2^*E\xrightarrow{\ \cong\ }{\rm pr}_1^*E\otimes(\pi_{{\mathsf Y}\sfY_{1,2}M}\x\pi_{{\mathsf Y}\sfY_{1,2}M})^*\circ{\rm pr}_{2,4}^*L_2 \end{eqnarray} over the fibred product $\,{\mathsf Y}^{[2]}{\mathsf Y}_{1,2}M={\mathsf Y}\sfY_{1,2}M\x_M{\mathsf Y}\sfY_{1,2}M\,$ described by the commutative 
diagram \begin{eqnarray}\nn \alxydim{@C=.75cm@R=1cm}{ & {\mathsf Y}^{[2]}{\mathsf Y}_{1,2}M \ar[rd]^{{\rm pr}_2} \ar[ld]_{{\rm pr}_1} & \\ {\mathsf Y}\sfY_{1,2}M \ar[rd]_{\pi_{{\mathsf Y}_1 M}\circ{\rm pr}_1\circ\pi_{{\mathsf Y}\sfY_{1,2}M}\quad} & & {\mathsf Y}\sfY_{1,2}M \ar[ld]^{\quad\pi_{{\mathsf Y}_2 M}\circ{\rm pr}_2\circ\pi_{{\mathsf Y}\sfY_{1,2}M}} \\ & M & }\,, \end{eqnarray} subject to the coherence constraint expressed by the commutative diagram of connection-pre\-serv\-ing principal-bundle isomorphisms \begin{eqnarray} \alxydim{@C=.15cm@R=1.5cm}{ & \pi_{1,2}^*\circ{\rm pr}_{1,3}^*L_1\otimes\pi_{2,3}^*\circ{\rm pr}_{3,5}^*L_1\otimes{\rm pr}_3^*E \ar[rd]^{\qquad\pi_{1,2,3}^*\circ{\rm pr}_{1,3,5}^*\mu_{L_1}\otimes{\rm id}_{{\rm pr}_3^*E}} \ar[ld]_{{\rm id}_{\pi_{1,2}^*\circ{\rm pr}_{1,3}^*L_1}\otimes{\rm pr}_{2,3}^*\a_E\qquad} & \\ \pi_{1,2}^*\circ{\rm pr}_{1,3}^*L_1\otimes{\rm pr}_2^*E\otimes\pi_{2,3}^*\circ{\rm pr}_{4,6}^*L_2 \ar[d]_{{\rm pr}_{1,2}^*\a_E\otimes{\rm id}_{\pi_{2,3}^*\circ{\rm pr}_{4,6}^*L_2}} & & \pi_{1,3}^*\circ{\rm pr}_{1,5}^*L_1\otimes{\rm pr}_3^*E \ar[d]^{{\rm pr}_{1,3}^*\a_E} \\ {\rm pr}_1^*E\otimes\pi_{1,2}^*\circ{\rm pr}_{2,4}^*L_2\otimes\pi_{2,3}^*\circ{\rm pr}_{4,6}^*L_2 \ar[rr]_{{\rm id}_{{\rm pr}_1^*E}\otimes\pi_{1,2,3}^*\circ{\rm pr}_{2,4,6}^*\mu_{L_2}} & & {\rm pr}_1^*E\otimes\pi_{1,3}^*\circ{\rm pr}_{2,6}^*L_2 }\cr\label{diag:grb1isocoh} \end{eqnarray} over the fibred product $\,{\mathsf Y}^{[3]}{\mathsf Y}_{1,2}M\equiv{\mathsf Y}\sfY_{1,2}M\x_M{\mathsf Y}\sfY_{1,2}M\x_M{\mathsf Y}\sfY_{1,2}M\,$ described by the commutative diagram \begin{eqnarray}\nn \alxydim{@C=4cm@R=1cm}{ & {\mathsf Y}^{[3]}{\mathsf Y}_{1,2}M \ar[rd]^{{\rm pr}_3} \ar[d]_{{\rm pr}_2} \ar[ld]_{{\rm pr}_1} & \\ {\mathsf Y}\sfY_{1,2}M \ar[rd]_{\pi_{{\mathsf Y}_1 M}\circ{\rm pr}_1\circ\pi_{{\mathsf Y}\sfY_{1,2}M}\qquad} & {\mathsf Y}\sfY_{1,2}M \ar[d]_{\pi_{{\mathsf Y}_2 M}\circ{\rm pr}_2\circ\pi_{{\mathsf Y}\sfY_{1,2}M}} & {\mathsf Y}\sfY_{1,2}M 
\ar[ld]^{\qquad\pi_{{\mathsf Y}_2 M}\circ{\rm pr}_2\circ\pi_{{\mathsf Y}\sfY_{1,2}M}} \\ & M & }\,, \end{eqnarray} with \begin{eqnarray}\nn &\pi_{i,j}=(\pi_{{\mathsf Y}\sfY_{1,2}M}\x\pi_{{\mathsf Y}\sfY_{1,2}M})\circ{\rm pr}_{i,j}\,,\quad(i,j)\in\{(1,2),(2,3),(1,3)\}\,,&\cr\cr &\pi_{1,2,3}=\pi_{{\mathsf Y}\sfY_{1,2}M}\x\pi_{{\mathsf Y}\sfY_{1,2}M}\x\pi_{{\mathsf Y}\sfY_{1,2}M}\,.& \end{eqnarray} In view of the results obtained in \Rcite{Waldorf:2007mm}, we may always assume the surjective submersion of the 1-isomorphism to be of the distinguished form $\,\pi_{{\mathsf Y}\sfY_{1,2}M}={\rm id}_{{\mathsf Y}_{1,2}M}$,\ which we do in what follows unless expressly stated otherwise. The situation just described is concisely denoted as \begin{eqnarray}\nn \Phi\ :\ \mathcal{G}_1\xrightarrow{\ \cong\ }\mathcal{G}_2\,. \end{eqnarray} In fact, the transformation \eqref{eq:1iso} is left unchanged by secondary redefinitions \begin{eqnarray}\nn (C_i,h_{jk})\longmapsto\bigl(C_i-{\mathsf i}\,{\mathsf d}\log f_i,h_{jk}\cdot\bigl(f_k^{-1}\cdot f_j\bigr)\mathord{\restriction}_{\mathcal{O}_{jk}}\bigr)\,, \end{eqnarray} which indicates the existence of isomorphisms between 1-isomorphisms, or {\bf 2-isomorphisms}, with local data $\,[(f_i)_{i\in\mathscr{I}}]\,$ (defined up to local constants).
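The local description of 1- and 2-isomorphisms just given can be put to work in a simple example: for a 1-automorphism of a fixed gerbe with local data $\,(B_i,A_{jk},g_{lmn})$,\ the requirement that the transformation \eqref{eq:1iso} preserve these data yields the constraints \begin{eqnarray}\nn {\mathsf d} C_i=0\,,\qquad(C_k-C_j)\mathord{\restriction}_{\mathcal{O}_{jk}}={\mathsf i}\,{\mathsf d}\log h_{jk}\,,\qquad\bigl(h_{mn}^{-1}\cdot h_{ln}\cdot h_{lm}^{-1}\bigr)\mathord{\restriction}_{\mathcal{O}_{lmn}}=1\,, \end{eqnarray} which identify $\,(C_i,h_{jk})\,$ as local data of a flat principal ${\mathbb{C}}^\x$-bundle with connection over $\,M$.\ Consequently, 2-isomorphism classes of 1-automorphisms of a given gerbe form a group isomorphic to that of isoclasses of flat principal ${\mathbb{C}}^\x$-bundles with connection over $\,M$.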
In the geometric language, and for a given pair of 1-isomorphisms $\,\Phi_\beta=({\mathsf Y}^\beta{\mathsf Y}_{1,2}M,\pi_{{\mathsf Y}^\beta{\mathsf Y}_{1,2}M},E_\beta,\nabla_{E_\beta},\a_{E_\beta}),\ \beta\in\{1,2\}\,$ between bundle gerbes $\,\mathcal{G}_\a=({\mathsf Y}_\a M,\pi_{{\mathsf Y}_\a M},{\rm B}_\a,L_\a,\nabla_{L_\a},\mu_{L_\a}),\ \a\in\{1,2\}$,\ a 2-isomorphism is represented\footnote{Strictly speaking, we should consider classes of such triples with respect to the following equivalence relation: $\,({\mathsf Y}_1{\mathsf Y}^{1,2}{\mathsf Y}_{1,2}M,\pi_{{\mathsf Y}_1{\mathsf Y}^{1,2}{\mathsf Y}_{1,2}M},\beta_1)\sim({\mathsf Y}_2{\mathsf Y}^{1,2}{\mathsf Y}_{1,2}M,\pi_{{\mathsf Y}_2{\mathsf Y}^{1,2}{\mathsf Y}_{1,2}M},\beta_2)\,$ iff there exist surjective submersions $\,\pi_\a\ :\ Z\longrightarrow{\mathsf Y}_\a{\mathsf Y}^{1,2}{\mathsf Y}_{1,2}M,\ \a\in\{1,2\}\,$ with the property $\,\pi_{{\mathsf Y}_1{\mathsf Y}^{1,2}{\mathsf Y}_{1,2}M}\circ\pi_1=\pi_{{\mathsf Y}_2{\mathsf Y}^{1,2}{\mathsf Y}_{1,2}M}\circ\pi_2$,\ and such that $\,\pi_1^*\beta_1=\pi_2^*\beta_2$.} by a triple \begin{eqnarray}\nn \varphi=({\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}M,\pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}M},\beta) \end{eqnarray} composed of a surjective submersion \begin{eqnarray}\nn \pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}M}\ :\ {\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}M\longrightarrow{\mathsf Y}^1{\mathsf Y}_{1,2}M\x_{{\mathsf Y}_{1,2}M}{\mathsf Y}^2{\mathsf Y}_{1,2}M\equiv{\mathsf Y}^{1,2}{\mathsf Y}_{1,2}M \end{eqnarray} and a connection-preserving principal-bundle isomorphism \begin{eqnarray}\nn \beta\ :\ ({\rm pr}_1\circ\pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}M})^*E_1\xrightarrow{\ \cong\ }({\rm pr}_2\circ\pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}M})^*E_2 \end{eqnarray} subject to the coherence constraint expressed by the commutative diagram of connection-preserving principal-bundle isomorphisms \begin{eqnarray}\label{diag:betacohalpha}
\alxydim{@C=2.5cm@R=1.75cm}{ p_{1,1}^*L_1\otimes\pi_{1,2}^*E_1 \ar[r]^{(\pi_1\x\pi_1)^*\a_{E_1}} \ar[d]_{{\rm id}_{p_{1,1}^*L_1}\otimes{\rm pr}_2^*\beta} & \pi_{1,1}^*E_1\otimes p_{2,1}^*L_2 \ar[d]^{{\rm pr}_1^*\beta\otimes{\rm id}_{p_{2,1}^*L_2}} \\ p_{1,1}^*L_1\otimes\pi_{2,2}^*E_2 \ar[r]_{(\pi_2\x\pi_2)^*\a_{E_2}} & \pi_{2,1}^*E_2\otimes p_{2,1}^*L_2 } \end{eqnarray} over $\,{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}M\x_M{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}M$,\ with \begin{eqnarray}\nn &\pi_i={\rm pr}_i\circ\pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}M}\,,\qquad\pi_{j,k}=\pi_j\circ{\rm pr}_k\,,\quad i,j,k\in\{1,2\}\,,&\cr\cr &p_{l,m}={\rm pr}_l\circ\pi_{{\mathsf Y}^m{\mathsf Y}_{1,2}M}\circ\pi_m\x{\rm pr}_l\circ\pi_{{\mathsf Y}^m{\mathsf Y}_{1,2}M}\circ\pi_m\,,\quad l,m\in\{1,2\}\,.& \end{eqnarray} We denote the 2-isomorphism as \begin{eqnarray}\nn \varphi\ :\ \Phi_1\xLongrightarrow{\ \cong\ }\Phi_2\,. \end{eqnarray} For details of the correspondence indicated above, consult, {\it e.g.}, Refs.\,\cite{Murray:1999ew,Gawedzki:2002se} and \cite{Waldorf:2007mm}. Our subsequent discussion calls for several additional elementary objects and constructions of the theory of gerbes. The first among them is the trivial gerbe over $\,M\,$ which is none other than a geometrisation of a de Rham 3-coboundary $\,{\rm H}={\mathsf d}{\rm B}\,$ with a globally smooth primitive $\,{\rm B}\in\Omega^2(M)$,\ with an obvious cohomological representation (associated with an arbitrary open cover $\,\mathcal{O}_M$) \begin{eqnarray}\nn \mathcal{I}_{\rm B}=[({\rm B}\mathord{\restriction}_{\mathcal{O}_i},0,1)_{i\in\mathscr{I}}]\,, \end{eqnarray} and a simple geometrisation \begin{eqnarray}\nn \mathcal{I}_{\rm B}=(M,{\rm id}_M,{\rm B},M\x{\mathbb{C}}^\x,{\mathsf d},\mu) \end{eqnarray} with the trivial groupoid structure \begin{eqnarray}\nn \mu\ :\ (M\x{\mathbb{C}}^\x)\otimes(M\x{\mathbb{C}}^\x)\longrightarrow M\x{\mathbb{C}}^\x\ :\ \bigl((x,z_1),(x,z_2)\bigr)\longmapsto(x,z_1\cdot z_2)\,.
\end{eqnarray} A trivial 1-isomorphism is defined analogously as a trivial principal ${\mathbb{C}}^\x$-bundle. The next concept is that of the tensor product $\,\mathcal{G}_1\otimes\mathcal{G}_2\,$ of (bundle) gerbes $\,\mathcal{G}_\a,\ \a\in\{1,2\}\,$ over a common base $\,M$.\ This has a trivial cohomological description over a common trivialising open cover, to wit, given the respective local data $\,\left[(B^\a_i,A^\a_{jk},g^\a_{lmn})_{i\in\mathscr{I},\ (j,k)\in\mathscr{I}_2,\ (l,m,n)\in\mathscr{I}_3}\right]$, \begin{eqnarray}\nn &&\left[(B^1_i,A^1_{jk},g^1_{lmn})_{i\in\mathscr{I},\ (j,k)\in\mathscr{I}_2,\ (l,m,n)\in\mathscr{I}_3}\right]\otimes\left[(B^2_i,A^2_{jk},g^2_{lmn})_{i\in\mathscr{I},\ (j,k)\in\mathscr{I}_2,\ (l,m,n)\in\mathscr{I}_3}\right]\cr\cr &=&\left[(B^1_i+B^2_i,A^1_{jk}+A^2_{jk},g^1_{lmn}\cdot g^2_{lmn})_{i\in\mathscr{I},\ (j,k)\in\mathscr{I}_2,\ (l,m,n)\in\mathscr{I}_3}\right]\,. \end{eqnarray} The geometric counterpart of this construction, for the choice of respective geometrisations $\,\mathcal{G}_\a=({\mathsf Y}_\a M,\pi_{{\mathsf Y}_\a M},{\rm B}_\a,L_\a,\nabla_{L_\a},\mu_{L_\a}),\ \a\in\{1,2\}$,\ is the bundle gerbe \begin{eqnarray}\nn \mathcal{G}_1\otimes\mathcal{G}_2&=&\bigl({\mathsf Y}_{1,2}M,\pi_{{\mathsf Y}_1 M}\circ{\rm pr}_1,{\rm pr}_1^*{\rm B}_1+{\rm pr}_2^*{\rm B}_2,{\rm pr}_{1,3}^*L_1\otimes{\rm pr}_{2,4}^*L_2,{\rm pr}_{1,3}^*\nabla_{L_1}\otimes{\rm id}_{{\rm pr}_{2,4}^*L_2}\cr\cr &&+{\rm id}_{{\rm pr}_{1,3}^*L_1}\otimes{\rm pr}_{2,4}^*\nabla_{L_2},{\rm pr}_{1,3,5}^*\mu_{L_1}\otimes{\rm pr}_{2,4,6}^*\mu_{L_2}\bigr)\,. 
\end{eqnarray} The construction of the tensor product descends naturally to (stable) isomorphisms between (bundle) gerbes: Given gerbes $\,\mathcal{G}_\a,\ \a\in\{1,2,3,4\}\,$ and isomorphisms $\,\Phi_\beta\ :\ \mathcal{G}_\beta\xrightarrow{\ \cong\ }\mathcal{G}_{\beta+2},\ \beta\in\{1,2\}$,\ we may define a tensor-product isomorphism $\,\Phi_1\otimes\Phi_2\ :\ \mathcal{G}_1\otimes\mathcal{G}_2\xrightarrow{\ \cong\ }\mathcal{G}_3\otimes\mathcal{G}_4$.\ If the respective local data are $\,[(C^\beta_i,h^\beta_{jk})_{i\in\mathscr{I},\ (j,k)\in\mathscr{I}_2}]$,\ we have \begin{eqnarray}\nn [(C^1_i,h^1_{jk})_{i\in\mathscr{I},\ (j,k)\in\mathscr{I}_2}]\otimes[(C^2_i,h^2_{jk})_{i\in\mathscr{I},\ (j,k)\in\mathscr{I}_2}]=[(C^1_i+C^2_i,h^1_{jk}\cdot h^2_{jk})_{i\in\mathscr{I},\ (j,k)\in\mathscr{I}_2}]\,. \end{eqnarray} When expressed in terms of the respective geometrisations $\,\Phi_\beta=({\mathsf Y}\sfY_{\beta,\beta+2}M,\pi_{{\mathsf Y}\sfY_{\beta,\beta+2}M},E_\beta,\nabla_{E_\beta},\a_{E_\beta})$,\ the tensor product takes the form \begin{eqnarray}\nn \Phi_1\otimes\Phi_2&=&\bigl({\mathsf Y}\sfY_{1,3}M\x_M{\mathsf Y}\sfY_{2,4}M,({\rm id}_{{\mathsf Y}_1 M}\x\tau_{{\mathsf Y}_3 M,{\mathsf Y}_2 M}\x{\rm id}_{{\mathsf Y}_4 M})\circ(\pi_{{\mathsf Y}\sfY_{1,3}M}\x\pi_{{\mathsf Y}\sfY_{2,4}M}),{\rm pr}_1^*E_1\otimes{\rm pr}_2^*E_2,&\cr\cr &&\quad{\rm pr}_1^*\nabla_{E_1}\otimes{\rm id}_{{\rm pr}_2^*E_2}+{\rm id}_{{\rm pr}_1^*E_1}\otimes{\rm pr}_2^*\nabla_{E_2},{\rm pr}_{1,3}^*\a_{E_1}\otimes{\rm pr}_{2,4}^*\a_{E_2}\bigr)\,, \end{eqnarray} with the fibred product $\,{\mathsf Y}\sfY_{1,3}M\x_M{\mathsf Y}\sfY_{2,4}M\,$ described by the commutative diagram \begin{eqnarray}\nn \alxydim{@C=.75cm@R=1cm}{ & {\mathsf Y}\sfY_{1,3}M\x_M{\mathsf Y}\sfY_{2,4}M \ar[rd]^{{\rm pr}_2} \ar[ld]_{{\rm pr}_1} & \\ {\mathsf Y}\sfY_{1,3}M \ar[rd]_{\pi_{{\mathsf Y}_1 M}\circ{\rm pr}_1\circ\pi_{{\mathsf Y}\sfY_{1,3}M}\qquad} & & {\mathsf Y}\sfY_{2,4}M \ar[ld]^{\qquad\pi_{{\mathsf Y}_4 M}\circ{\rm 
pr}_2\circ\pi_{{\mathsf Y}\sfY_{2,4}M}} \\ & M & }\,, \end{eqnarray} and with \begin{eqnarray}\nn \tau_{{\mathsf Y}_3 M,{\mathsf Y}_2 M}\ :\ {\mathsf Y}_3 M\x_M{\mathsf Y}_2 M\longrightarrow{\mathsf Y}_2 M\x_M{\mathsf Y}_3 M\ :\ (y_3,y_2)\longmapsto(y_2,y_3)\,. \end{eqnarray} We may also conceive the tensor product of a pair of 2-isomorphisms $\,\varphi_\gamma\ :\ \Phi^1_\gamma\xLongrightarrow{\ \cong\ }\Phi^2_\gamma,\ \gamma\in\{1,2\}\,$ between isomorphisms $\,\Phi^\beta_\gamma\ :\ \mathcal{G}_\gamma\xrightarrow{\ \cong\ }\mathcal{G}_{\gamma+2},\ \beta\in\{1,2\}$.\ For the (respective) local data $\,[(f_i^\gamma)_{i\in\mathscr{I}}]$,\ we obtain \begin{eqnarray}\nn [(f_i^1)_{i\in\mathscr{I}}]\otimes[(f_i^2)_{i\in\mathscr{I}}]=[(f_i^1\cdot f_i^2)_{i\in\mathscr{I}}]\,, \end{eqnarray} whereas in the language of the respective geometrisations $\,\varphi_\gamma=({\mathsf Y}\sfY^{1,2}{\mathsf Y}_{\gamma,\gamma+2}M,\pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{\gamma,\gamma+2}M},\beta_\gamma)$,\ the tensor product is the 2-isomorphism \begin{eqnarray}\nn \varphi_1\otimes\varphi_2&=&\bigl({\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,3}M\x_M{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{2,4}M,\cr\cr &&\ ({\rm id}_{{\mathsf Y}^1{\mathsf Y}_{1,3}M}\x\tau_{{\mathsf Y}^2{\mathsf Y}_{1,3}M,{\mathsf Y}^1{\mathsf Y}_{2,4}M}\x{\rm id}_{{\mathsf Y}^2{\mathsf Y}_{2,4}M})\circ(\pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,3}M}\x\pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{2,4}M}),{\rm pr}_1^*\beta_1\otimes{\rm pr}_2^*\beta_2\bigr)\,, \end{eqnarray} with the fibred product $\,{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,3}M\x_M{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{2,4}M\,$ described by the commutative diagram \begin{eqnarray}\nn \alxydim{@C=.75cm@R=1cm}{ & {\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,3}M\x_M{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{2,4}M \ar[rd]^{{\rm pr}_2} \ar[ld]_{{\rm pr}_1} & \\ {\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,3}M \ar[rd]_{\pi_{{\mathsf Y}_1 M}\circ{\rm pr}_1\circ\pi_{{\mathsf Y}^1{\mathsf Y}_{1,3}M}\circ{\rm 
pr}_1\circ\pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,3}M}\qquad\qquad\qquad} & & {\mathsf Y}\sfY^{1,2}{\mathsf Y}_{2,4}M \ar[ld]^{\qquad\qquad\qquad\pi_{{\mathsf Y}_4 M}\circ{\rm pr}_2\circ\pi_{{\mathsf Y}^2{\mathsf Y}_{2,4}M}\circ{\rm pr}_2\circ\pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{2,4}M}} \\ & M & }\,, \end{eqnarray} and with \begin{eqnarray}\nn \tau_{{\mathsf Y}^2{\mathsf Y}_{1,3}M,{\mathsf Y}^1{\mathsf Y}_{2,4}M}\ :\ {\mathsf Y}^2{\mathsf Y}_{1,3}M\x_M{\mathsf Y}^1{\mathsf Y}_{2,4}M\longrightarrow{\mathsf Y}^1{\mathsf Y}_{2,4}M\x_M{\mathsf Y}^2{\mathsf Y}_{1,3}M\ :\ (y^2_{1,3},y^1_{2,4})\longmapsto(y^1_{2,4},y^2_{1,3})\,. \end{eqnarray} Stable isomorphisms and 2-isomorphisms can be not only tensored, but also composed. Given 1-isomorphisms $\,\Phi_\beta\ :\ \mathcal{G}_\beta\xrightarrow{\ \cong\ }\mathcal{G}_{\beta+1},\ \beta\in\{1,2\}\,$ between gerbes $\,\mathcal{G}_\a,\ \a\in\{1,2,3\}$,\ we may define the composite 1-isomorphism $\,\Phi_2\circ\Phi_1\ :\ \mathcal{G}_1\xrightarrow{\ \cong\ }\mathcal{G}_3\,$ with local data \begin{eqnarray}\nn [(C^2_i,h^2_{jk})_{i\in\mathscr{I},\ (j,k)\in\mathscr{I}_2}]\circ[(C^1_i,h^1_{jk})_{i\in\mathscr{I},\ (j,k)\in\mathscr{I}_2}]=[(C^1_i+C^2_i,h^1_{jk}\cdot h^2_{jk})_{i\in\mathscr{I},\ (j,k)\in\mathscr{I}_2}]\,, \end{eqnarray} and with a geometrisation \begin{eqnarray}\nn \Phi_2\circ\Phi_1&=&\bigl({\mathsf Y}\sfY_{1,2}M\x_{{\mathsf Y}_2 M}{\mathsf Y}\sfY_{2,3}M,{\rm pr}_{1,4}\circ(\pi_{{\mathsf Y}\sfY_{1,2}M}\x\pi_{{\mathsf Y}\sfY_{2,3}M}),{\rm pr}_1^*E_1\otimes{\rm pr}_2^*E_2,\cr\cr &&\quad{\rm pr}_1^*\nabla_{E_1}\otimes{\rm id}_{{\rm pr}_2^*E_2}+{\rm id}_{{\rm pr}_1^*E_1}\otimes{\rm pr}_2^*\nabla_{E_2},({\rm id}_{{\rm pr}_1^*{\rm pr}_1^*E_1}\otimes{\rm pr}_{2,4}^*\a_{E_2})\circ({\rm pr}_{1,3}^*\a_{E_1}\otimes{\rm id}_{{\rm pr}_2^*{\rm pr}_2^*E_2})\bigr)\,, \end{eqnarray} where the fibred product $\,{\mathsf Y}\sfY_{1,2}M\x_{{\mathsf Y}_2 M}{\mathsf Y}\sfY_{2,3}M\,$ is described by the commutative diagram 
\begin{eqnarray}\nn \alxydim{@C=.75cm@R=1cm}{ & {\mathsf Y}\sfY_{1,2}M\x_{{\mathsf Y}_2 M}{\mathsf Y}\sfY_{2,3}M \ar[rd]^{{\rm pr}_2} \ar[ld]_{{\rm pr}_1} & \\ {\mathsf Y}\sfY_{1,2}M \ar[rd]_{{\rm pr}_2\circ\pi_{{\mathsf Y}\sfY_{1,2}M}\qquad} & & {\mathsf Y}\sfY_{2,3}M \ar[ld]^{\qquad{\rm pr}_1\circ\pi_{{\mathsf Y}\sfY_{2,3}M}} \\ & {\mathsf Y}_2 M & }\,. \end{eqnarray} In the case of 2-isomorphisms, we encounter two types of composition. Given two pairs of 1-isomorphisms $\,\Phi_\gamma^\beta\ :\ \mathcal{G}_\gamma\xrightarrow{\ \cong\ }\mathcal{G}_{\gamma+1},\ \beta,\gamma\in\{1,2\}\,$ between gerbes $\,\mathcal{G}_\a,\ \a\in\{1,2,3\}\,$ and two 2-isomorphisms $\,\varphi_\gamma\ :\ \Phi_\gamma^1\xLongrightarrow{\ \cong\ }\Phi_\gamma^2\,$ between the former, we define the {\bf horizontal composition} $\,\varphi_2\circ\varphi_1\ :\ \Phi_2^1\circ\Phi_1^1\xLongrightarrow{\ \cong\ }\Phi_2^2\circ\Phi_1^2\,$ as the 2-isomorphism with local data \begin{eqnarray}\nn [(f_i^2)_{i\in\mathscr{I}}]\circ[(f_i^1)_{i\in\mathscr{I}}]=[(f_i^2\cdot f_i^1)_{i\in\mathscr{I}}] \end{eqnarray} and -- for $\,\varphi_\gamma=({\mathsf Y}\sfY^{1,2}{\mathsf Y}_{\gamma,\gamma+1}M,\pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{\gamma,\gamma+1}M},\beta_\gamma)\,$ -- a geometrisation \begin{eqnarray}\nn \varphi_2\circ\varphi_1&=&\bigl(({\mathsf Y}^1{\mathsf Y}_{1,2}M\x_{{\mathsf Y}_2 M}{\mathsf Y}^1{\mathsf Y}_{2,3}M)\x_{{\mathsf Y}_{1,3}M}({\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}M\x_{{\mathsf Y}_2 M}{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{2,3}M)\x_{{\mathsf Y}_{1,3}M}\cr\cr &&\quad({\mathsf Y}^2{\mathsf Y}_{1,2}M\x_{{\mathsf Y}_2 M}{\mathsf Y}^2{\mathsf Y}_{2,3}M),{\rm pr}_{1,2,5,6},\pi^{2\,*}_{1,2,3}d_{\Phi_2^2\circ\Phi_1^2}\circ({\rm pr}_3^*\beta_1\otimes{\rm pr}_4^*\beta_2)\circ\pi^{1\,*}_{1,2,3}d_{\Phi_2^1\circ\Phi_1^1} \bigr)\,, \end{eqnarray} written in terms of the surjective submersions \begin{eqnarray}\nn \pi^1_{1,2,3}&=&\bigl({\rm id}_{{\mathsf Y}^1{\mathsf Y}_{1,2}M\x_{{\mathsf Y}_2 M}{\mathsf 
Y}^1{\mathsf Y}_{2,3}M}\x({\rm pr}_1\circ\pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}M})\x({\rm pr}_1\circ\pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{2,3}M})\bigr)\circ {\rm pr}_{1,2,3,4}\,,\cr\cr \pi^2_{1,2,3}&=&\bigl(({\rm pr}_2\circ\pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}M})\x({\rm pr}_2\circ\pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{2,3}M})\x{\rm id}_{{\mathsf Y}^2{\mathsf Y}_{1,2}M\x_{{\mathsf Y}_2 M}{\mathsf Y}^2{\mathsf Y}_{2,3}M}\bigr)\circ {\rm pr}_{3,4,5,6} \end{eqnarray} and of the canonical (connection-preserving principal-bundle) isomorphisms \begin{eqnarray}\nn d_{\Phi_2^\beta\circ\Phi_1^\beta}\ :\ {\rm pr}_1^*({\rm pr}_1^*E_1^\beta\otimes{\rm pr}_2^*E_2^\beta)\xrightarrow{\ \cong\ }{\rm pr}_2^*({\rm pr}_1^*E_1^\beta\otimes{\rm pr}_2^*E_2^\beta)\,,\quad\beta\in\{1,2\} \end{eqnarray} over the respective fibred products $\,({\mathsf Y}^\beta{\mathsf Y}_{1,2}M\x_{{\mathsf Y}_2 M}{\mathsf Y}^\beta{\mathsf Y}_{2,3}M)\x_{{\mathsf Y}_{1,3}M}({\mathsf Y}^\beta{\mathsf Y}_{1,2}M\x_{{\mathsf Y}_2 M}{\mathsf Y}^\beta{\mathsf Y}_{2,3}M)$,\ derived in \Rcite{Waldorf:2007mm}.
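As an elementary cross-check of the local description of the composition of 1-isomorphisms, note that the composite local data do trivialise the difference of the local data $\,(B^\a_i,A^\a_{jk},g^\a_{lmn})\,$ of the outer gerbes $\,\mathcal{G}_1\,$ and $\,\mathcal{G}_3$:\ adding up the relations satisfied by the $\,(C^\beta_i,h^\beta_{jk}),\ \beta\in\{1,2\}$,\ we find, {\it e.g.}, \begin{eqnarray}\nn B^3_i-B^1_i={\mathsf d}\bigl(C^1_i+C^2_i\bigr)\,,\qquad A^3_{jk}-A^1_{jk}=\bigl((C^1_k+C^2_k)-(C^1_j+C^2_j)\bigr)\mathord{\restriction}_{\mathcal{O}_{jk}}-{\mathsf i}\,{\mathsf d}\log\bigl(h^1_{jk}\cdot h^2_{jk}\bigr)\,, \end{eqnarray} and an analogous multiplicative identity for the $\,g^\a_{lmn}$,\ in perfect agreement with the formula for the local data of $\,\Phi_2\circ\Phi_1\,$ given above.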
For any pair $\,\varphi_\delta\ :\ \Phi_\delta\xLongrightarrow{\ \cong\ }\Phi_{\delta+1},\ \delta\in\{1,2\}\,$ of 2-isomorphisms between 1-isomorphisms $\,\Phi_\delta,\Phi_{\delta+1}\ :\ \mathcal{G}_1\xrightarrow{\ \cong\ }\mathcal{G}_2\,$ between given gerbes $\,\mathcal{G}_1\,$ and $\,\mathcal{G}_2$,\ on the other hand, we may define their {\bf vertical composition} $\,\varphi_2\bullet\varphi_1\ :\ \Phi_1\xLongrightarrow{\ \cong\ }\Phi_3\,$ as the 2-isomorphism with local data \begin{eqnarray}\nn [(f_i^2)_{i\in\mathscr{I}}]\bullet[(f_i^1)_{i\in\mathscr{I}}]=[(f_i^2\cdot f_i^1)_{i\in\mathscr{I}}] \end{eqnarray} and -- for $\,\Phi_\delta=({\mathsf Y}^\delta{\mathsf Y}_{1,2}M,\pi_{{\mathsf Y}^\delta{\mathsf Y}_{1,2}M},E_\delta,\nabla_{E_\delta},\a_{E_\delta})\,$ and $\,\varphi_\delta=({\mathsf Y}\sfY^{\delta,\delta+1}{\mathsf Y}_{1,2}M,\pi_{{\mathsf Y}\sfY^{\delta,\delta+1}{\mathsf Y}_{1,2}M},\beta_\delta)\,$ -- a geometrisation \begin{eqnarray}\nn \varphi_2\bullet\varphi_1=\bigl({\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}M\x_{{\mathsf Y}^2{\mathsf Y}_{1,2}M}{\mathsf Y}\sfY^{2,3}{\mathsf Y}_{1,2}M,{\rm pr}_1\circ\pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}M}\x{\rm pr}_2\circ\pi_{{\mathsf Y}\sfY^{2,3}{\mathsf Y}_{1,2}M},{\rm pr}_2^*\beta_2\circ{\rm pr}_1^*\beta_1\bigr)\,. \end{eqnarray} The above structure can be organised into a (weak) 2-category with (bundle) gerbes as 0-cells (or objects), 1-isomorphisms as 1-cells and 2-isomorphisms as 2-cells, which puts us in the higher-categorial context of the loop (quantum) mechanics. Finally, we should mention the pullback of the various structures introduced heretofore along smooth maps between their bases. It is completely straightforward to describe it in the local cohomological description. 
Indeed, let $\,f\in C^\infty(M_1,M_2)\,$ and let \begin{eqnarray}\nn \bigl[\bigl(X^p_{i^1_1},X^{p-1}_{i^2_1 i^2_2},\ldots,X^0_{i^{p+1}_1 i^{p+1}_2\ldots i^{p+1}_{p+1}}\bigr)_{i^1_1\in\mathscr{I}^2,\ (i^2_1,i^2_2)\in\mathscr{I}^2_2,\ldots,(i^{p+1}_1,i^{p+1}_2,\ldots,i^{p+1}_{p+1})\in\mathscr{I}^2_{p+1}}\bigr]\,,\quad p\in\{0,1,2\} \end{eqnarray} be local data of an object (a gerbe for $\,p=2$,\ a 1-isomorphism for $\,p=1$,\ and a 2-isomorphism for $\,p=0$) on $\,M_2\,$ associated with an open cover $\,\mathcal{O}_{M_2}=\{\mathcal{O}^2_i\}_{i\in\mathscr{I}^2}\,$ of $\,M_2$.\ In order to define the pullback of that object over $\,M_1\,$ in terms of its local data, we need to fix an open cover $\,\{\mathcal{O}^1_j\}_{j\in\mathscr{I}^1}\,$ of $\,M_1\,$ together with a map $\,\phi\ :\ \mathscr{I}^1\longrightarrow\mathscr{I}^2\,$ subordinate to $\,f\,$ in the sense expressed by the condition \begin{eqnarray}\nn \forall_{i^1\in\mathscr{I}^1}\ :\ f\bigl(\mathcal{O}^1_{i^1}\bigr)\subset\mathcal{O}^2_{\phi(i^1)} \end{eqnarray} (which may require passing to a refinement of $\,\mathcal{O}_{M_2}$), whereupon we define \begin{eqnarray}\nn &&f^*[\bigl(X^p_{i^1_1},X^{p-1}_{i^2_1 i^2_2},\ldots,X^0_{i^{p+1}_1 i^{p+1}_2\ldots i^{p+1}_{p+1}}\bigr)_{i^1_1\in\mathscr{I}^2,\ (i^2_1,i^2_2)\in\mathscr{I}^2_2,\ldots,(i^{p+1}_1,i^{p+1}_2,\ldots,i^{p+1}_{p+1})\in\mathscr{I}^2_{p+1}}\bigr]\cr\cr &\equiv&[\bigl(Y^p_{j^1_1},Y^{p-1}_{j^2_1 j^2_2},\ldots Y^0_{j^{p+1}_1 j^{p+1}_2\ldots j^{p+1}_{p+1}}\bigr)_{j^1_1\in\mathscr{I}^1,\ (j^2_1,j^2_2)\in\mathscr{I}^1_2,\ldots,(j^{p+1}_1,j^{p+1}_2,\ldots,j^{p+1}_{p+1})\in\mathscr{I}^1_{p+1}}\bigr] \end{eqnarray} by the formul\ae \begin{eqnarray}\nn Y^{p-k}_{j^{k+1}_1 j^{k+1}_2\ldots j^{k+1}_{k+1}}:=f^*X^{p-k}_{\phi(j^{k+1}_1)\phi(j^{k+1}_2)\ldots\phi(j^{k+1}_{k+1})}\,,\quad k\in\ovl{0,p}\,. \end{eqnarray} We complete our presentation by giving definitions of pullbacks of the geometrisations of the local data that we introduced earlier. 
Thus, given a gerbe $\,\mathcal{G}=({\mathsf Y} M_2,\pi_{{\mathsf Y} M_2},{\rm B},L,\nabla_L,\mu_L)\,$ over the codomain of $\,f$,\ we first erect an arbitrary surjective submersion $\,\pi_{{\mathsf Y} M_1}\ :\ {\mathsf Y} M_1\longrightarrow M_1\,$ endowed with a smooth map $\,\widehat f\ :\ {\mathsf Y} M_1\longrightarrow{\mathsf Y} M_2\,$ that covers $\,f\,$ in the sense specified by the commutative diagram \begin{eqnarray}\nn \alxydim{@C=1.5cm@R=1.5cm}{{\mathsf Y} M_1 \ar[r]^{\widehat f} \ar[d]_{\pi_{{\mathsf Y} M_1}} & {\mathsf Y} M_2 \ar[d]^{\pi_{{\mathsf Y} M_2}} \\ M_1 \ar[r]_{f} & M_2 } \end{eqnarray} (we may, {\it e.g.}, take $\,{\mathsf Y} M_1=M_1\x_{M_2}{\mathsf Y} M_2\,$ with $\,\pi_{{\mathsf Y} M_1}={\rm pr}_1\,$ and $\,\widehat f={\rm pr}_2$), and subsequently define \begin{eqnarray}\nn f^*\mathcal{G}=\bigl({\mathsf Y} M_1,\pi_{{\mathsf Y} M_1},\widehat f^*{\rm B},(\widehat f\x\widehat f)\mathord{\restriction}_{{\mathsf Y}^{[2]}M_1}^*L,(\widehat f\x\widehat f)\mathord{\restriction}_{{\mathsf Y}^{[2]}M_1}^*\nabla_L,(\widehat f\x\widehat f\x\widehat f)\mathord{\restriction}_{{\mathsf Y}^{[3]}M_1}^*\mu_L\bigr)\,. 
\end{eqnarray} Similarly, in order to pull back a 1-isomorphism $\,\Phi=({\mathsf Y}\sfY_{1,2}M_2,\pi_{{\mathsf Y}\sfY_{1,2}M_2},E,\nabla_E,\a_E)\,$ between gerbes $\,\mathcal{G}_\a=({\mathsf Y}_\a M_2,\pi_{{\mathsf Y}_\a M_2},{\rm B}_\a,L_\a,\nabla_{L_\a},\mu_{L_\a}),\ \a\in\{1,2\}\,$ along $\,f$,\ we choose a surjective submersion $\,\pi_{{\mathsf Y}\sfY_{1,2}M_1}\ :\ {\mathsf Y}\sfY_{1,2}M_1\longrightarrow{\mathsf Y}_{1,2}M_1\equiv{\mathsf Y}_1 M_1\x_{M_1}{\mathsf Y}_2 M_1\,$ alongside a map $\,\check f_{1,2}\ :\ {\mathsf Y}\sfY_{1,2}M_1\longrightarrow{\mathsf Y}\sfY_{1,2}M_2\,$ satisfying the condition described by the commutative diagram \begin{eqnarray}\nn \alxydim{@C=1.5cm@R=1.5cm}{{\mathsf Y}\sfY_{1,2}M_1 \ar[r]^{\check f_{1,2}} \ar[d]_{\pi_{{\mathsf Y}\sfY_{1,2}M_1}} & {\mathsf Y}\sfY_{1,2}M_2 \ar[d]^{\pi_{{\mathsf Y}\sfY_{1,2}M_2}} \\ {\mathsf Y}_{1,2}M_1 \ar[r]_{\widehat f_1\x\widehat f_2} & {\mathsf Y}_{1,2}M_2 } \end{eqnarray} (for $\,\widehat f_\a\,$ the respective covers of $\,f$), whereupon we define \begin{eqnarray}\nn f^*\Phi=\bigl({\mathsf Y}\sfY_{1,2}M_1,\pi_{{\mathsf Y}\sfY_{1,2}M_1},\check f_{1,2}^*E,\check f_{1,2}^*\nabla_E,(\check f_{1,2}\x\check f_{1,2})\mathord{\restriction}_{{\mathsf Y}^{[2]}{\mathsf Y}_{1,2}M_1}^*\a_E\bigr)\ :\ f^*\mathcal{G}_1\xrightarrow{\ \cong\ }f^*\mathcal{G}_2\,. 
\end{eqnarray} We complete our construction of the pullback functor between the (weak) 2-categories of gerbes over the two manifolds related by the smooth map $\,f\,$ by taking, for any 2-isomorphism $\,\varphi=({\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}M_2,$ $\pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}M_2},\beta)\,$ between 1-isomorphisms $\,\Phi_\beta=({\mathsf Y}^\beta{\mathsf Y}_{1,2}M_2,\pi_{{\mathsf Y}^\beta{\mathsf Y}_{1,2}M_2},E_\beta,\nabla_{E_\beta},\a_{E_\beta}),\ \beta\in\{1,2\}\,$ between gerbes $\,\mathcal{G}_\a=({\mathsf Y}_\a M_2,\pi_{{\mathsf Y}_\a M_2},{\rm B}_\a,L_\a,\nabla_{L_\a},\mu_{L_\a}),\ \a\in\{1,2\}$,\ a surjective submersion $\,\pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}M_1}\ :\ {\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}M_1\longrightarrow{\mathsf Y}^{1,2}{\mathsf Y}_{1,2}M_1\,$ together with a map $\,\widetilde f^{1,2}_{1,2}\ :\ {\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}M_1\longrightarrow{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}M_2\,$ that renders the following diagram commutative, \begin{eqnarray}\nn \alxydim{@C=1.5cm@R=1.5cm}{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}M_1 \ar[r]^{\widetilde f^{1,2}_{1,2}} \ar[d]_{\pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}M_1}} & {\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}M_2 \ar[d]^{\pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}M_2}} \\ {\mathsf Y}^{1,2}{\mathsf Y}_{1,2}M_1 \ar[r]_{\check f^1_{1,2}\x\check f^2_{1,2}} & {\mathsf Y}^{1,2}{\mathsf Y}_{1,2}M_2 } \end{eqnarray} (for $\,\check f^\beta_{1,2}\,$ the respective covers of $\,\widehat f_1\x\widehat f_2$), and then write \begin{eqnarray}\nn f^*\varphi=\bigl({\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}M_1,\pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}M_1},\widetilde f^{1,2\,*}_{1,2}\beta\bigr)\ :\ f^*\Phi_1\xLongrightarrow{\ \cong\ }f^*\Phi_2\,. \end{eqnarray} This exhausts the list of rudimentary concepts and constructions of standard gerbe theory that we shall need in the main part of our subsequent discussion.
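Before moving on, it may be worth recalling the purely \v{C}ech layer of the local data underlying the above constructions: the ${\rm U}(1)$-valued functions $\,g_{ijk}\,$ of a gerbe satisfy the 2-cocycle identity on quadruple overlaps, and coboundaries of data $\,h_{ij}\,$ on double overlaps solve it trivially. The following toy numerical illustration (entirely ad hoc names, and a cover all of whose overlaps are pretended to be non-empty) checks this for random phases:

```python
# Toy illustration of the Cech layer of gerbe local data: for arbitrary
# U(1)-valued h_{ij} on double overlaps, the coboundary
#   g_{ijk} = h_{jk} * h_{ik}^{-1} * h_{ij}
# automatically satisfies the 2-cocycle identity
#   g_{jkl} * g_{ikl}^{-1} * g_{ijl} * g_{ijk}^{-1} = 1
# on quadruple overlaps. All names are ad hoc.
import cmath
import itertools
import random

random.seed(0)
n = 5  # pretend every overlap of a 5-element cover is non-empty
h = {(i, j): cmath.exp(2j * cmath.pi * random.random())
     for i in range(n) for j in range(n)}

def g(i, j, k):
    """Coboundary of h on the (pretend) triple overlap (i, j, k)."""
    return h[(j, k)] / h[(i, k)] * h[(i, j)]

checks = [g(j, k, l) / g(i, k, l) * g(i, j, l) / g(i, j, k)
          for i, j, k, l in itertools.product(range(n), repeat=4)]
print(max(abs(c - 1) for c in checks) < 1e-9)  # True
```

The cancellation is identical in the exponents, so the check holds to machine precision; non-trivial gerbes are precisely those for which $\,g_{ijk}\,$ is a cocycle that is {\it not} globally such a coboundary.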
Prior to passing to the field-theoretic applications of the formalism recapitulated above, we close this section by giving -- after \Rcite{Stevenson:2001grb2} -- one last definition which is going to serve as a reference for our supergeometric constructions. Thus, we consider an object one degree higher in the natural hierarchy of geometrisations of de Rham classes, to wit, a {\bf bundle 2-gerbe} over a manifold $\,M\,$ with connection of {\bf curvature} given by a de Rham 3-cocycle with periods in $\,2\pi{\mathbb{Z}}$,\ \begin{eqnarray}\nn {\rm J}\in Z^4_{\rm dR}(M)\,, \end{eqnarray} to be understood as a quintuple \begin{eqnarray}\nn \mathcal{G}^{(2)}=({\mathsf Y} M,\pi_{{\mathsf Y} M},{\rm C},\mathcal{G},\mathcal{M}_\mathcal{G},\mu_\mathcal{G})\,, \end{eqnarray} composed of a surjective submersion \begin{eqnarray}\nn \pi_{{\mathsf Y} M}\ :\ {\mathsf Y} M\longrightarrow M \end{eqnarray} supporting a global primitive \begin{eqnarray}\nn {\rm C}\in\Omega^3({\mathsf Y} M) \end{eqnarray} of the pullback \begin{eqnarray}\nn \pi_{{\mathsf Y} M}^*{\rm J}={\mathsf d}{\rm C}\,, \end{eqnarray} alongside a bundle gerbe $\,\mathcal{G}\,$ over the fibred square $\,{\mathsf Y}^{[2]}M\,$ with connection of curvature \begin{eqnarray}\nn {\rm H}=\bigl({\rm pr}_2^*-{\rm pr}_1^*\bigr){\rm C} \end{eqnarray} together with a 1-isomorphism \begin{eqnarray}\nn \mathcal{M}_\mathcal{G}\ :\ {\rm pr}_{1,2}^*\mathcal{G}\otimes{\rm pr}_{2,3}^*\mathcal{G}\xrightarrow{\ \cong \ }{\rm pr}_{1,3}^*\mathcal{G} \end{eqnarray} of bundle gerbes over the fibred cube $\,{\mathsf Y}^{[3]}M\,$ (termed the {\bf product} of the 2-gerbe) and a 2-isomorphism \begin{eqnarray}\nn \alxydim{@C=4.cm@R=2cm}{{\rm pr}_{1,2}^*\mathcal{G}\otimes{\rm pr}_{2,3}^*\mathcal{G}\otimes{\rm pr}_{3,4}^*\mathcal{G} \ar[r]^{{\rm pr}_{1,2,3}^*\mathcal{M}_{\mathcal{G}}\otimes{\rm id}_{{\rm pr}_{3,4}^*\mathcal{G}}} \ar[d]_{{\rm id}_{{\rm pr}_{1,2}^*\mathcal{G}}\otimes{\rm pr}_{2,3,4}^*\mathcal{M}_{\mathcal{G}}} & {\rm 
pr}_{1,3}^*\mathcal{G}\otimes{\rm pr}_{3,4}^*\mathcal{G} \ar[d]^{{\rm pr}_{1,3,4}^*\mathcal{M}_{\mathcal{G}}} \ar@{=>}[dl]|{\ \mu_{\mathcal{G}}\ } \\ {\rm pr}_{1,2}^*\mathcal{G}\otimes{\rm pr}_{2,4}^*\mathcal{G} \ar[r]_{{\rm pr}_{1,2,4}^*\mathcal{M}_{\mathcal{G}}} & {\rm pr}_{1,4}^*\mathcal{G} } \end{eqnarray} between the 1-isomorphisms of bundle gerbes over the fourfold fibred product $\,{\mathsf Y}^{[4]}M\,$ (termed the {\bf associator} of the 2-gerbe) subject to the coherence constraints expressed by the commutative diagram of 2-isomorphisms (here, $\,X_{i_1 i_2\ldots i_k}\equiv{\rm pr}_{i_1 i_2\ldots i_k}^*X\,$ for any $\,i_1,i_2,\ldots,i_k\in\{1,2,3,4,5\},\ k\in\{2,3,4\}$) {\tiny\begin{eqnarray}\nn \alxydim{@C=-3.3cm@R=1.5cm}{ & \mathcal{M}_{\mathcal{G}\,1,4,5}\circ(\mathcal{M}_{\mathcal{G}\,1,3,4}\otimes{\rm id}_{\mathcal{G}_{4,5}})\circ(\mathcal{M}_{\mathcal{G}\,1,2,3}\otimes{\rm id}_{\mathcal{G}_{3,4}}\otimes{\rm id}_{\mathcal{G}_{4,5}}) \ar@{=>}[ld]_{{\rm id}_{\mathcal{M}_{\mathcal{G}\,1,4,5}}\circ(\mu_{\mathcal{G}\,1,2,3,4}\otimes{\rm id}_{{\rm id}_{\mathcal{G}_{4,5}}})\qquad\quad} \ar@{=>}[rd]^{\qquad\quad\mu_{\mathcal{G}\,1,3,4,5}\circ{\rm id}_{\mathcal{M}_{\mathcal{G}\,1,2,3}\otimes{\rm id}_{\mathcal{G}_{3,4}}\otimes{\rm id}_{\mathcal{G}_{4,5}}}} & \\ \mathcal{M}_{\mathcal{G}\,1,4,5}\circ(\mathcal{M}_{\mathcal{G}\,1,2,4}\otimes{\rm id}_{\mathcal{G}_{4,5}})\circ({\rm id}_{\mathcal{G}_{1,2}}\otimes\mathcal{M}_{\mathcal{G}\,2,3,4}\otimes{\rm id}_{\mathcal{G}_{4,5}}) \ar@{=>}[d]_{\mu_{\mathcal{G}\,1,2,4,5}\circ{\rm id}_{{\rm id}_{\mathcal{G}_{1,2}}\otimes\mathcal{M}_{\mathcal{G}\,2,3,4}\otimes{\rm id}_{\mathcal{G}_{4,5}}}} & & \mathcal{M}_{\mathcal{G}\,1,3,5}\circ({\rm id}_{\mathcal{G}_{1,3}}\otimes\mathcal{M}_{\mathcal{G}\,3,4,5})\circ(\mathcal{M}_{\mathcal{G}\,1,2,3}\otimes{\rm id}_{\mathcal{G}_{3,4}}\otimes{\rm id}_{\mathcal{G}_{4,5}}) \ar@{=}[d] \\ \mathcal{M}_{\mathcal{G}\,1,2,5}\circ({\rm 
id}_{\mathcal{G}_{1,2}}\otimes\mathcal{M}_{\mathcal{G}\,2,4,5})\circ({\rm id}_{\mathcal{G}_{1,2}}\otimes\mathcal{M}_{\mathcal{G}\,2,3,4}\otimes{\rm id}_{\mathcal{G}_{4,5}}) \ar@{=>}[rd]_{{\rm id}_{\mathcal{M}_{\mathcal{G}\,1,2,5}}\circ({\rm id}_{{\rm id}_{\mathcal{G}_{1,2}}}\otimes\mu_{\mathcal{G}\,2,3,4,5})\qquad\qquad\ } & & \mathcal{M}_{\mathcal{G}\,1,3,5}\circ(\mathcal{M}_{\mathcal{G}\,1,2,3}\otimes{\rm id}_{\mathcal{G}_{3,5}})\circ({\rm id}_{\mathcal{G}_{1,2}}\otimes{\rm id}_{\mathcal{G}_{2,3}}\otimes\mathcal{M}_{\mathcal{G}\,3,4,5}) \ar@{=>}[ld]^{\ \qquad\qquad\mu_{\mathcal{G}\,1,2,3,5}\circ{\rm id}_{{\rm id}_{\mathcal{G}_{1,2}}\otimes{\rm id}_{\mathcal{G}_{2,3}}\otimes\mathcal{M}_{\mathcal{G}\,3,4,5}}} \\ & \mathcal{M}_{\mathcal{G}\,1,2,5}\circ({\rm id}_{\mathcal{G}_{1,2}}\otimes\mathcal{M}_{\mathcal{G}\,2,3,5})\circ({\rm id}_{\mathcal{G}_{1,2}}\otimes{\rm id}_{\mathcal{G}_{2,3}}\otimes\mathcal{M}_{\mathcal{G}\,3,4,5}) & } \end{eqnarray}} between the 1-isomorphisms of bundle gerbes over the fivefold fibred product $\,{\mathsf Y}^{[5]}M$. \medskip \subsection{A rigorous definition, canonical description \& geometric quantisation of the $\sigma$-model}\label{sub:defcanquant} The most immediate application of the formalism developed heretofore is an explicit formula for (the logarithm of) the holonomy, determined by local data of the gerbe. The formula calls for an extra technical ingredient, to wit, a choice of a tesselisation $\,\triangle(\Sigma)\,$ of the worldsheet, consisting of plaquettes (whose set will be denoted as $\,\gt{P}_{\triangle(\Sigma)}$), edges and vertices, subordinate, for a given map $\,x\in C^\infty(\Sigma,M)$,\ to the open cover $\,\mathcal{O}_M=\{\mathcal{O}_i\}_{i\in\mathscr{I}}$,\ by which we mean that there exists a map $\,i_\cdot\ :\ \triangle(\Sigma)\longrightarrow\mathscr{I}\,$ with the property \begin{eqnarray}\nn \forall_{\xi\in\triangle(\Sigma)}\ :\ x(\xi)\subset\mathcal{O}_{i_\xi}\,.
\end{eqnarray} With all the requisite data in place, we have \begin{eqnarray}\nn -{\mathsf i}\,\log{\rm Hol}_\mathcal{G}(x)&=&\sum_{p\in\gt{P}_{\triangle(\Sigma)}} \left[\int_p\,(x\mathord{\restriction}_p)^*B_{i_p}+\sum_{e\subset\partial p}\left(\int_e\,(x\mathord{\restriction}_e)^* A_{i_p i_e}-{\mathsf i}\,\sum_{v\in\partial e}\,\varepsilon_{pev}\,\log g_{i_p i_e i_v}\bigl(x(v)\bigr)\right)\right]\,, \end{eqnarray} where $\,\varepsilon_{pev}=1\,$ if $\,v\,$ sits at the end of $\,e\,$ with respect to the orientation of the edge induced (as the orientation of the boundary) from that of $\,p$,\ and $\,\varepsilon_{pev}=-1\,$ otherwise. The above formula is a natural point of departure for the analysis that establishes the nature of geometric objects that complement the monophase background $\,(M,{\rm g},\mathcal{G})\,$ in the presence of world-sheet defects, {\it cp.}\ \Rcite{Runkel:2008gr}. The presence of the gerbe on the target space $\,M\,$ is also reflected directly in the canonical description of the $\sigma$-model. The latter description is readily established in the first-order formalism of Tulczyjew, Gaw\c{e}dzki, Kijowski and Szczyrba ({\it cp.}\ Refs.\,\cite{Gawedzki:1972ms,Kijowski:1973gi,Kijowski:1974mp,Kijowski:1976ze,Szczyrba:1976,Kijowski:1979dj}, and also \Rcite{Saunders:1989jet} for an elementary modern treatment) that provides us with a (pre)symplectic structure on the space of states of the monophase theory and a Poisson algebra on the set of smooth functions on it. 
The first step towards its derivation consists in associating with the lagrangean density $\,\mathscr{L}_\sigma\ :\ J^1(\Sigma\x M)\longrightarrow\bigwedge^2{\mathsf T}^*\Sigma\,$ of the $\sigma$-model, defined on the total space of the first-jet bundle $\,J^1(\Sigma\x M)\,$ of its covariant configuration bundle, with standard (adapted) coordinates $\,(x^a,\xi^b_i)\,$ on the fibre $\,J^1_{(\sigma^1,\sigma^2)}(\Sigma\x M)\,$ over a point $\,(\sigma^1,\sigma^2)\in\Sigma\,$ in the worldsheet, the Poincar\'e--Cartan form on $\,J^1(\Sigma\x M)\,$ (written in the standard notation that employs the symbol $\,\delta\,$ for vertical differentials on $\,J^1(\Sigma\x M)$) \begin{eqnarray}\nn \Theta_\sigma(x^a,\xi^b_i)&=&\left(\mathscr{L}_\sigma(x^a,\xi^b_i)-\xi^a_i\,\tfrac{\partial\mathscr{L}_\sigma}{\partial\xi^a_i}(x^a,\xi^b_i)\right)\,\Vol(\Sigma)+\delta x^a\,\tfrac{\partial\mathscr{L}_\sigma}{\partial\xi^a_i}(x^a,\xi^b_i)\wedge\left(\partial_i\righthalfcup\Vol(\Sigma)\right)\,. \end{eqnarray} It is not difficult to see that the extremals of the functional \begin{eqnarray}\nn S_{\Theta_\sigma}\ :\ \Gamma\bigl(J^1(\Sigma\x M)\bigr)\longrightarrow{\mathbb{R}}\ :\ \Psi\longmapsto\int_\Sigma\,\Psi^*\Theta_\sigma \end{eqnarray} are first jets of extremals of $\,S_\sigma$,\ and this observation justifies the definition of a {\bf presymplectic form} $\,\Omega_\sigma\,$ on the space of states (of a single loop) $\,{\mathsf P}_\sigma={\mathsf T}^*{\mathsf L} M\,$ of the $\sigma$-model, the space itself being coordinatised by Cauchy data $\,\Psi\mathord{\restriction}_\mathscr{C}\equiv(x^a,p_b)\,$ of extremals $\,\Psi\,$ of $\,S_{\Theta_\sigma}\,$ supported on a model Cauchy section, or equitemporal slice, $\,\mathscr{C}\equiv{\mathbb{S}}^1\subset\Sigma\,$ of the worldsheet. 
The definition reads \begin{eqnarray}\nn \Omega_\sigma[\Psi\mathord{\restriction}_\mathscr{C}]=\delta\int_\mathscr{C}\,\left(\Psi\mathord{\restriction}_\mathscr{C}\right)^*\Theta_\sigma\,, \end{eqnarray} and it depends only on the homotopy class of $\,\mathscr{C}\,$ within $\,\Sigma$.\ Through direct computation, we arrive at the explicit form \begin{eqnarray}\nn ({\mathsf P}_\sigma,\Omega_\sigma)=\left({\mathsf T}^*{\mathsf L} M,\delta\theta_{{\mathsf T}^*{\mathsf L} M}+\pi_{{\mathsf T}^*{\mathsf L} M}^*\int_\mathscr{C}\,{\rm ev}^*{\rm H} \right)\,, \end{eqnarray} expressed in terms of the bundle projection $\,\pi_{{\mathsf T}^*{\mathsf L} M}\ :\ {\mathsf T}^*{\mathsf L} M\longrightarrow{\mathsf L} M$,\ the canonical (action) 1-form $\,\theta_{{\mathsf T}^*{\mathsf L} M}\,$ on $\,{\mathsf T}^*{\mathsf L} M\,$ (with local presentation $\,\theta_{{\mathsf T}^*{\mathsf L} M}[x,p]=\int_\mathscr{C}\,\Vol(\mathscr{C})\,p_a(\cdot)\,\delta x^a(\cdot)$), and the standard evaluation map \begin{eqnarray}\nn {\rm ev}\ :\ \mathscr{C}\x{\mathsf L} M\longrightarrow M\ :\ (\varphi,\gamma)\longmapsto\gamma(\varphi)\,. \end{eqnarray} The 2-form serves to define a Poisson bracket of hamiltonians on $\,{\mathsf P}_\sigma$,\ {\it i.e.} those smooth functionals $\,h\,$ on $\,{\mathsf P}_\sigma\,$ for which there exist smooth vector fields $\,\mathcal{V}$,\ termed ({\bf globally}) {\bf hamiltonian}, satisfying the relation \begin{eqnarray}\label{eq:canHamcond} \mathcal{V}\righthalfcup\Omega_\sigma=-\delta h\,. 
\end{eqnarray} Indeed, for any two such functionals $\,h_A,\ A\in\{1,2\}$,\ and the corresponding vector fields $\,\mathcal{V}_A$,\ we may define a bracket \begin{eqnarray}\nn \{h_1,h_2\}_{\Omega_\sigma}[\Psi\mathord{\restriction}_\mathscr{C}]:=\mathcal{V}_2\righthalfcup\mathcal{V}_1\righthalfcup\Omega_\sigma[\Psi\mathord{\restriction}_\mathscr{C}]\,, \end{eqnarray} and the Jacobi identity follows automatically from the closedness of $\,\Omega_\sigma$.\ A detailed discussion of the thus defined canonical description of the $\sigma$-model and its adaptations to the multi-phase setting can be found in Refs.\,\cite{Suszek:2011hg,Suszek:2012ddg}. The ultimate confirmation of the naturality and functionality of gerbe theory in the analysis of the bosonic $\sigma$-model comes with the derivation of a quantisation scheme from that theory. The latter scheme is based on Gaw\c{e}dzki's transgression map (extended to the polyphase setting and subsequently employed in the analysis of symmetries and dualities of the $\sigma$-model in Refs.\,\cite{Suszek:2011hg,Suszek:2012ddg}) \begin{eqnarray}\nn \tau\ :\ {\mathbb{H}}^2\left(M,\mathcal{D}(2)^\bullet\right)\longrightarrow{\mathbb{H}}^1\left({\mathsf L} M,\mathcal{D}(1)^\bullet\right) \end{eqnarray} that canonically associates with (the isomorphism class of) $\,\mathcal{G}\,$ (the isomorphism class of) a principal bundle \begin{eqnarray}\label{eq:transbund} \alxydim{@C=.5cm@R=1cm}{{\mathbb{C}}^\x \ar[r] & \mathscr{L}_\mathcal{G} \ar[d]^{\pi_{\mathscr{L}_\mathcal{G}}} \\ & {\mathsf L} M\equiv C^\infty({\mathbb{S}}^1,M) } \end{eqnarray} over the configuration space $\,{\mathsf L} M\,$ of the $\sigma$-model, with connection $\,\nabla_{\mathscr{L}_\mathcal{G}}\,$ of curvature \begin{eqnarray}\nn \curv(\nabla_{\mathscr{L}_\mathcal{G}})=\int_{{\mathbb{S}}^1}\, {\rm ev}^*{\rm H}\,, \end{eqnarray} termed the {\bf transgression bundle}, and thus induces over the phase space $\,{\mathsf T}^*{\mathsf L} M\,$ of the monophase $\sigma$-model 
a {\bf pre-quantum bundle} of the $\sigma$-model \begin{eqnarray}\label{eq:preqbund} \mathscr{L}_\sigma:=({\mathsf T}^*{\mathsf L} M\x{\mathbb{C}}^\x)\otimes\pi_{{\mathsf T}^*{\mathsf L} M}^*\mathscr{L}_\mathcal{G}\,, \end{eqnarray} where the trivial tensor factor is taken to carry the global connection 1-form $\,\theta_{{\mathsf T}^*{\mathsf L} M}$.\ It ought to be emphasised that the transgression bundle $\,\mathscr{L}_\mathcal{G}\,$ can be reconstructed explicitly, on the basis of the clutching theorem, by sewing together its local trivialisations given in terms of the local data $\,(B_i,A_{jk},g_{lmn})_{i\in\mathscr{I},\ (j,k)\in\mathscr{I}_2,\ (l,m,n)\in\mathscr{I}_3}\,$ of $\,\mathcal{G}\,$ over the pullback along $\,\pi_{{\mathsf T}^*{\mathsf L} M}\,$ of an overcomplete basis \begin{eqnarray}\nn \mathcal{O}_\gt{i}\equiv\mathcal{O}_{\triangle({\mathbb{S}}^1),i_\cdot}=\{\ x\in{\mathsf L} M \quad\vert\quad \forall_{(e,v)\in\gt{E}_{\triangle({\mathbb{S}}^1)}\x\gt{V}_{\triangle({\mathbb{S}}^1)}}\ :\ x(e)\subset\mathcal{O}^M_{i_e}\quad\land\quad x(v)\in\mathcal{O}^M_{i_v} \ \}\,, \end{eqnarray} of the compact-open topology of the Fr\'echet manifold $\,{\mathsf L} M\,$ indexed by pairs $\,\gt{i}\equiv(\triangle({\mathbb{S}}^1),i_\cdot)\,$ composed of a tesselisation $\,\triangle({\mathbb{S}}^1)\,$ of the unit circle, with its set of edges $\,\gt{E}_{\triangle({\mathbb{S}}^1)}\,$ and its set of vertices $\,\gt{V}_{\triangle({\mathbb{S}}^1)}$,\ and a choice $\,i_\cdot\ :\ \triangle({\mathbb{S}}^1)\longrightarrow\mathscr{I}\ :\ \xi\longmapsto i_\xi\,$ of assignment of indices of $\,\mathcal{O}_M\,$ to elements of $\,\triangle({\mathbb{S}}^1)$.\ By varying these two choices arbitrarily, whereby an index set $\,\mathscr{I}_{{\mathsf L} M}\,$ is formed, we cover all of $\,{\mathsf L} M$,\ thus forming an open cover $\,\mathcal{O}_{{\mathsf L} M}=\{\mathcal{O}_\gt{i}\}_{\gt{i}\in\mathscr{I}_{{\mathsf L} M}}\,$ of the free-loop space $\,{\mathsf L} M$.\ It is
straightforward to describe intersections of elements of the open cover $\,\mathcal{O}_{{\mathsf L} M}$,\ {\it cp.}\ \Rcite{Gawedzki:1987ak}. Given a pair $\,\mathcal{O}_{\gt{i}^\a},\ \a\in\{1,2\}\,$ with the respective triangulations $\,\triangle_\a({\mathbb{S}}^1)\,$ (consisting of edges $\,e_\a\in\gt{E}_{\triangle_\a({\mathbb{S}}^1)}\,$ and vertices $\,v_\a\in\gt{V}_{\triangle_\a({\mathbb{S}}^1)}$) and index assignments $\,i^\a_\cdot\ :\ \triangle_\a({\mathbb{S}}^1)\longrightarrow\mathscr{I}\ :\ \xi_\a\mapsto i^\a_{\xi_\a}$,\ we consider the triangulation $\,\ovl\triangle({\mathbb{S}}^1)\,$ obtained by intersecting $\,\triangle_1({\mathbb{S}}^1)\,$ with $\,\triangle_2({\mathbb{S}}^1)$,\ by which we mean that the edges $\,\ovl e\,$ of $\,\ovl\triangle({\mathbb{S}}^1)\,$ are the non-empty intersections of the edges of the $\,\triangle_\a({\mathbb{S}}^1)$,\ and its vertices $\,\ovl v\,$ are taken from $\,\gt{V}_{\triangle_1({\mathbb{S}}^1)}\cup\gt{V}_{\triangle_2({\mathbb{S}}^1)}$.\ A non-empty double intersection $\,\mathcal{O}_{\gt{i}^1}\cap\mathcal{O}_{\gt{i}^2}=:\mathcal{O}_{\gt{i}^1 \gt{i}^2}\,$ is then labelled by the triangulation $\,\ovl\triangle({\mathbb{S}}^1)$,\ taken together with the indexing convention such that $\,i^\a_{\ovl e}\,$ is the \v Cech index assigned -- via $\,\gt{i}^\a\,$ -- to the edge of $\,\triangle_\a({\mathbb{S}}^1)\,$ containing $\,\ovl e\in\ovl\triangle({\mathbb{S}}^1)$,\ and $\,i^\a_{\ovl v}\,$ is the \v Cech index assigned -- via $\,\gt{i}^\a\,$ -- to $\,\ovl v\,$ if $\,\ovl v\in\gt{V}_{\triangle_\a({\mathbb{S}}^1)}$,\ or the \v Cech index assigned -- also via $\,\gt{i}^\a\,$ -- to the edge of $\,\triangle_\a({\mathbb{S}}^1)\,$ containing $\,\ovl v\,$ otherwise.
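As an aside for the reader who prefers algorithms to prose, the combinatorics of the common refinement just described can be mimicked in a few lines of illustrative (and entirely ad hoc) Python, with the circle parametrised by $\,[0,1)\,$ and a triangulation encoded by the sorted list of its vertices:

```python
# Toy model of the common refinement of two circle triangulations: vertices
# are points of [0,1) parametrising S^1, edges are the (half-open) arcs
# between consecutive vertices. All names are ad hoc.

def refine(verts1, verts2):
    """Vertices and edges of the triangulation obtained by intersecting."""
    verts = sorted(set(verts1) | set(verts2))
    edges = [(v, verts[(k + 1) % len(verts)]) for k, v in enumerate(verts)]
    return verts, edges

def containing_edge(verts, point):
    """Index of the edge of `verts` whose arc [v, w) contains `point`."""
    vs = sorted(verts)
    for k, v in enumerate(vs):
        w = vs[(k + 1) % len(vs)]
        if (v <= point < w) or (w < v and (point >= v or point < w)):
            return k

def midpoint(a, b):
    """Midpoint of the arc from a to b, allowing for wrap-around."""
    return ((a + b) / 2) % 1.0 if b > a else ((a + b + 1) / 2) % 1.0

t1 = [0.0, 0.25, 0.5, 0.75]   # triangulation of the first cover element
t2 = [0.1, 0.6]               # triangulation of the second one

verts, edges = refine(t1, t2)
# each refined edge inherits one Cech index from each coarse triangulation
labels = [(containing_edge(t1, midpoint(a, b)),
           containing_edge(t2, midpoint(a, b))) for (a, b) in edges]
print(verts)   # [0.0, 0.1, 0.25, 0.5, 0.6, 0.75]
print(labels)  # [(0, 1), (0, 0), (1, 0), (2, 0), (2, 1), (3, 1)]
```

The pair of labels attached to each refined edge is precisely the pair of \v Cech indices $\,(i^1_{\ovl e},i^2_{\ovl e})\,$ of the convention above, and the number of refined edges is bounded by the total number of coarse ones.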
With the foregoing description in hand, we may finally write out explicit formul\ae ~for local data of the transgression bundle: we begin with local connection 1-forms (written for $\,x\in\mathcal{O}_\gt{i}\,$ and $\,\gt{i}=(\triangle({\mathbb{S}}^1),i_\cdot)$) \begin{eqnarray}\nn E_\gt{i}[x]=-\sum_{e\in\gt{E}_{\triangle({\mathbb{S}}^1)}}\,\int_e\,(x\mathord{\restriction}_e)^*B_{i_e}- \sum_{v\in\gt{V}_{\triangle({\mathbb{S}}^1)}}\,x^*A_{i_{e_+(v)}i_{e_-(v)}}(v)\,, \end{eqnarray} where $\,e_+(v)\,$ and $\,e_-(v)\,$ denote the incoming and the outgoing edge meeting at $\,v$,\ respectively; this (leads to and) is augmented with the definition of ${\rm U}(1)$-valued transition maps (written for $\,y\in\mathcal{O}_{\gt{i}\gt{j}}\,$ with $\,(\gt{i},\gt{j})\in\mathscr{I}_{{\mathsf L} M\,2}$) \begin{eqnarray}\nn G_{\gt{i}\gt{j}}[y]=\prod_{\ovl e\in\gt{E}_{\ovl\triangle({\mathbb{S}}^1)}}\,{\rm e}^{-{\mathsf i}\, \int_{\ovl e}\,(y\mathord{\restriction}_{\ovl e})^*A_{i_{\ovl e}j_{\ovl e}}}\,\prod_{\ovl v \in\gt{V}_{\ovl\triangle({\mathbb{S}}^1)}}\,g_{i_{\ovl e_+(\ovl v)}i_{\ovl e_-(\ovl v)}j_{\ovl e_+(\ovl v)}}\bigl(y(\ovl v)\bigr)\cdot g_{j_{\ovl e_+(\ovl v)} j_{\ovl e_-(\ovl v)}i_{\ovl e_-(\ovl v)}}\bigl(y(\ovl v)\bigr)^{-1}\,, \end{eqnarray} in which the $\,\ovl e\,$ are edges and the $\,\ovl v\,$ are vertices of the triangulation $\,\ovl\triangle({\mathbb{S}}^1)\,$ described above. As previously, the incoming (resp.\ outgoing) edge of $\,\ovl\triangle({\mathbb{S}}^1)\,$ at the vertex $\,\ovl v\,$ is denoted by $\,\ovl e_+(\ovl v)\,$ (resp.\ $\,\ovl e_-(\ovl v)$). The data satisfy the standard cohomological identities (written for $\,(\gt{i},\gt{j},\gt{k})\in\mathscr{I}_{{\mathsf L} M\,3}$) \begin{eqnarray}\nn (E_\gt{j}-E_\gt{i})\mathord{\restriction}_{\mathcal{O}_{\gt{i}\gt{j}}}={\mathsf i}\,\delta\log G_{\gt{i}\gt{j}}\,,\qquad\qquad\bigl(G_{\gt{j}\gt{k}} \cdot G_{\gt{i}\gt{k}}^{-1}\cdot G_{\gt{i}\gt{j}}\bigr)\mathord{\restriction}_{\mathcal{O}_{\gt{i}\gt{j}\gt{k}}}=1\,. 
\end{eqnarray} Under a gauge transformation of the gerbe $\,\mathcal{G}\,$ with local data $\,(C_i,h_{jk})_{i\in\mathscr{I}\,,\ (j,k)\in\mathscr{I}_2}$ of \Reqref{eq:1iso}, the local connection 1-forms undergo the induced gauge transformation \begin{eqnarray}\nn E_\gt{i}\longmapsto E_\gt{i}-{\mathsf i}\,\delta\log H_\gt{i}\,, \end{eqnarray} where \begin{eqnarray}\nn H_\gt{i}[x]=\prod_{e\in\gt{E}_{\triangle({\mathbb{S}}^1)}}\,{\rm e}^{{\mathsf i}\,\int_e\,(x\mathord{\restriction}_e)^*C_{i_e}}\,\prod_{v\in\gt{V}_{\triangle({\mathbb{S}}^1)}}\,h_{i_{e_+(v)}i_{e_-(v)}}\bigl(x(v)\bigr)^{-1}\,. \end{eqnarray} With the help of these data, we define those of the pre-quantum bundle over elements $\,\mathcal{O}^*_\gt{i}\equiv\pi_{{\mathsf T}^*{\mathsf L} M}^{-1}(\mathcal{O}_\gt{i})\,$ of the pullback cover, to wit, the local symplectic potentials \begin{eqnarray}\nn \vartheta_{\sigma\,\gt{i}}=\theta_{{\mathsf T}^*{\mathsf L} M}\mathord{\restriction}_{\mathcal{O}^*_\gt{i}}+\pi_{{\mathsf T}^*{\mathsf L} M}^*E_\gt{i} \end{eqnarray} and the corresponding gluing maps \begin{eqnarray}\nn \gamma_{\sigma\,\gt{i}\gt{j}}=\pi_{{\mathsf T}^*{\mathsf L} M}^*G_{\gt{i}\gt{j}}\,. \end{eqnarray} The construction of the transgression bundle is a key step towards Dirac's geometric quantisation of the model in what can be regarded as an explicit realisation of Segal's abstract categorial quantisation paradigm. In it, the Hilbert space \begin{eqnarray}\nn \mathcal{H}_\sigma:=\Gamma_{\rm{pol}}(\mathscr{L}_\sigma) \end{eqnarray} assigned to a loop is the space of suitably polarised sections of the pre-quantum bundle on which hamiltonians are realised as (certain sections of the sheaf of) 1st-order differential operators. 
The Dirac--Feynman amplitudes for surfaces $\,\Sigma\,$ with boundaries are now readily seen to play the r\^ole of (linear) transport operators between Hilbert spaces assigned to the cobordant loops of $\,\Sigma\,$ -- {\it cp.}\ \Rcite{Gawedzki:1987ak}, but also \Rcite{Suszek:2011hg} for more details. In this picture, a wave functional $\,\Psi\in\Gamma_{\rm{pol}}(\mathscr{L}_\sigma)\,$ in the position polarisation admits -- at least formally\footnote{In the case of target manifolds given by homogeneous spaces of Lie groups, the formal construction can be concretised with the help of an invariant Haar measure, {\it cp.}, {\it e.g.}, Refs.\,\cite{Felder:1988sd,Gawedzki:1999bq}, and one may anticipate that an analogous construction works for homogeneous spaces of Lie supergroups, {\it cp.}\ \Rcite{Williams:1984}.} -- a path-integral presentation \begin{eqnarray}\nn \Psi[\phi]=\int_{x\mathord{\restriction}_{\partial\Sigma_{\rm in}}=\phi}\,\mathscr{D} x\,{\rm e}^{{\mathsf i}\,S_\sigma[x]} \end{eqnarray} written for a worldsheet $\,\Sigma_{\rm in}={\mathbb{D}}^2$,\ parameterising the trajectory of an `incoming' state, {\it cp.}\ \Rcite{Gawedzki:1987ak} (such formal expressions are also considered in the framework of perturbative quantisation of a lagrangean field theory, {\it cp.}\ \Rcite{Cattaneo:2012}).\medskip The above considerations pave the way to a systematic analysis of symmetries of the $\sigma$-model, both local (or gauge) and global (or rigid). Vector fields on $\,{\mathsf P}_\sigma\,$ whose flows realise the former span the kernel of $\,\Omega_\sigma$,\ {\it cp.}\ \Rcite{Gawedzki:1972ms}.
Among those of the latter kind, we find canonical lifts\footnote{{\it Cp.}, {\it e.g.}, \Rxcite{Sec.\,4B}{Gotay:1997eg}.} $\,\widetilde\mathcal{K}\in\Gamma({\mathsf T}{\mathsf P}_\sigma)$,\ from $\,M\,$ to $\,{\mathsf P}_\sigma\equiv{\mathsf T}^*{\mathsf L} M$,\ of fundamental vector fields $\,\mathcal{K}\in\Gamma({\mathsf T} M)\,$ associated with (left) automorphisms of the (typical) fibre $\,M\,$ of the covariant configuration bundle $\,\Sigma\x M\,$ of the $\sigma$-model. As was demonstrated in \Rcite{Gawedzki:2010rn}, they come from Killing vector fields $\,\mathcal{K}\,$ of the target-space metric $\,{\rm g}\,$ that satisfy the strong invariance condition\footnote{The condition implies the weaker one: $\,\pLie{\mathcal{K}}{\rm H}=0$,\ and the latter integrates to the invariance condition $\,\ell_g^*{\rm H}={\rm H}\,$ for the action $\,\ell_\cdot\ :\ {\rm G}_\sigma\x M\longrightarrow M\,$ of (the connected component of) the global-symmetry group $\,{\rm G}_\sigma\ni g\,$ of the $\sigma$-model.} \begin{eqnarray}\label{eq:genHamcond} \mathcal{K}\righthalfcup{\rm H}=-{\mathsf d}\kappa \end{eqnarray} for some $\,\kappa\in\Omega^1(M)$.\ Vector fields satisfying condition \eqref{eq:genHamcond} (and its generalisations in which the 3-form $\,{\rm H}\,$ is replaced by an arbitrary closed $(p+2)$-form on the target space) will be called {\bf generalised hamiltonian with respect to} $\,{\rm H}$,\ by analogy with \Reqref{eq:canHamcond}. They span a Lie subalgebra within the Lie algebra $\,(\Gamma({\mathsf T} M),[\cdot,\cdot])\,$ of smooth vector fields on $\,M$,\ which we denote as \begin{eqnarray}\nn \gt{g}_\sigma=\corr{\ \mathcal{K}\in\Gamma({\mathsf T} M) \quad\vert\quad \pLie{\mathcal{K}}{\rm g}=0\ \land\ \exists_{\kappa\in\Omega^1(M)}\ :\ \mathcal{K}\righthalfcup{\rm H}=-{\mathsf d}\kappa \ }_{\mathbb{R}} \end{eqnarray} in what follows.
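The implication mentioned in the footnote is an instance of Cartan's magic formula: $\,\pLie{\mathcal{K}}{\rm H}={\mathsf d}\bigl(\mathcal{K}\righthalfcup{\rm H}\bigr)+\mathcal{K}\righthalfcup{\mathsf d}{\rm H}=-{\mathsf d}\,{\mathsf d}\kappa=0$.\ The following illustrative (and entirely ad hoc) snippet verifies this symbolically on a toy example over $\,{\mathbb{R}}^3$,\ assuming {\tt sympy} is available:

```python
# Minimal exterior calculus on R^3 with sympy: a p-form is a dict mapping a
# sorted index tuple to a coefficient. Used to check that iota_K H = -d kappa
# together with dH = 0 forces Lie_K H = 0 (Cartan's magic formula).
import sympy as sp

x = sp.symbols('x0:3')

def add(a, b):
    out = dict(a)
    for I, c in b.items():
        out[I] = out.get(I, 0) + c
    return {I: sp.simplify(c) for I, c in out.items() if sp.simplify(c) != 0}

def d(form):
    """Exterior derivative."""
    out = {}
    for I, c in form.items():
        for j in range(len(x)):
            if j in I:
                continue
            J = tuple(sorted(I + (j,)))
            sign = (-1) ** sum(1 for i in I if i < j)
            out = add(out, {J: sign * sp.diff(c, x[j])})
    return out

def iota(K, form):
    """Interior product with the vector field K = [K^0, K^1, K^2]."""
    out = {}
    for I, c in form.items():
        for k, i in enumerate(I):
            J = tuple(j for j in I if j != i)
            out = add(out, {J: (-1) ** k * K[i] * c})
    return out

def lie(K, form):
    """Lie derivative via Cartan's formula: L_K = iota_K d + d iota_K."""
    return add(iota(K, d(form)), d(iota(K, form)))

# toy data: H = dx0 ^ dx1 ^ dx2 (closed), K = x1 d/dx0, kappa = x1 x2 dx1
H = {(0, 1, 2): sp.Integer(1)}
K = [x[1], sp.Integer(0), sp.Integer(0)]
kappa = {(1,): x[1] * x[2]}

minus_dkappa = {I: sp.simplify(-c) for I, c in d(kappa).items()}
print(iota(K, H) == minus_dkappa)  # True: the strong invariance condition
print(lie(K, H) == {})             # True: L_K H vanishes identically
```

Note that in this toy check the Killing property plays no r\^ole: the vanishing of $\,\pLie{\mathcal{K}}{\rm H}\,$ uses only the closedness of $\,{\rm H}\,$ and the exactness of $\,\mathcal{K}\righthalfcup{\rm H}$.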
Their lifts are determined by the strong equivariance condition \begin{eqnarray}\nn \pLie{\widetilde\mathcal{K}}\theta_{{\mathsf T}^*{\mathsf L} M}=0\,, \end{eqnarray} which we may think of as the condition of preservation of the canonical connection 1-form $\,\theta_{{\mathsf T}^*{\mathsf L} M}\,$ on $\,{\mathsf T}^*{\mathsf L} M$,\ and so they take the form \begin{eqnarray}\label{eq:vecfieldlift} \widetilde\mathcal{K}[x,p]=\int_\mathscr{C}\,\Vol(\mathscr{C})\,\bigl(\mathcal{K}^a\bigl(x(\cdot)\bigr)\,\tfrac{\delta\ }{\delta x^a(\cdot)}-p_a(\cdot)\,\partial_b\mathcal{K}^a\bigl(x(\cdot)\bigr)\,\tfrac{\delta\ }{\delta p_b(\cdot)}\bigr)\,. \end{eqnarray} We shall denote the ${\mathbb{R}}$-linear span of all pairs $\,(\mathcal{K},\kappa)\,$ described above as \begin{eqnarray}\nn \gt{G}_\sigma=\corr{\ \gt{K}\equiv(\mathcal{K},\kappa)\in\Gamma({\mathsf T} M\oplus_{M,{\mathbb{R}}}{\mathsf T}^* M) \quad\vert\quad \pLie{\mathcal{K}}{\rm g}=0\ \land\ \mathcal{K}\righthalfcup{\rm H}=-{\mathsf d}\kappa \ }_{\mathbb{R}}\,. \end{eqnarray} It forms an algebra (over $\,{\mathbb{R}}$) with respect to the skew Vinogradov-type bracket, twisted (in the sense of \v{S}evera--Weinstein, {\it cp.}\ \Rcite{Severa:2001qm}) by the 3-form $\,{\rm H}$, \begin{eqnarray}\nn \Vbra{\cdot}{\cdot}^{\rm H}\ &:&\ \gt{G}_\sigma\x\gt{G}_\sigma\longrightarrow\gt{G}_\sigma\cr\cr &:&\ \bigl((\mathcal{K}_1,\kappa_1),(\mathcal{K}_2,\kappa_2)\bigr)\longmapsto\bigl([\mathcal{K}_1,\mathcal{K}_2],\pLie{\mathcal{K}_1}\kappa_2-\pLie{\mathcal{K}_2}\kappa_1-\tfrac{1}{2}\,{\mathsf d}\bigl(\mathcal{K}_1\righthalfcup\kappa_2-\mathcal{K}_2\righthalfcup\kappa_1\bigr)+\mathcal{K}_1\righthalfcup\mathcal{K}_2\righthalfcup{\rm H}\bigr)\,, \end{eqnarray} a fact first noted in \Rcite{Alekseev:2004np} and subsequently generalised (to the polyphase setting) and exploited in \Rcite{Suszek:2012ddg}.
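That the condition defining $\,\gt{G}_\sigma\,$ is preserved by the bracket can be verified directly with the help of Cartan calculus. On the one hand, using $\,{\mathsf d}{\rm H}=0\,$ and $\,\pLie{\mathcal{K}_1}{\rm H}={\mathsf d}\bigl(\mathcal{K}_1\righthalfcup{\rm H}\bigr)=-{\mathsf d}\,{\mathsf d}\kappa_1=0$,\ we find \begin{eqnarray}\nn [\mathcal{K}_1,\mathcal{K}_2]\righthalfcup{\rm H}=\pLie{\mathcal{K}_1}\bigl(\mathcal{K}_2\righthalfcup{\rm H}\bigr)-\mathcal{K}_2\righthalfcup\pLie{\mathcal{K}_1}{\rm H}=-{\mathsf d}\pLie{\mathcal{K}_1}\kappa_2\,; \end{eqnarray} on the other hand, writing $\,\kappa_{1,2}\,$ for the 1-form component of the bracket and noting that the exact term in its definition is annihilated by $\,{\mathsf d}$,\ we obtain \begin{eqnarray}\nn {\mathsf d}\kappa_{1,2}={\mathsf d}\pLie{\mathcal{K}_1}\kappa_2-{\mathsf d}\bigl(\pLie{\mathcal{K}_2}\kappa_1-\mathcal{K}_1\righthalfcup\mathcal{K}_2\righthalfcup{\rm H}\bigr)={\mathsf d}\pLie{\mathcal{K}_1}\kappa_2-{\mathsf d}\,{\mathsf d}\bigl(\mathcal{K}_2\righthalfcup\kappa_1\bigr)={\mathsf d}\pLie{\mathcal{K}_1}\kappa_2\,, \end{eqnarray} the middle equality being a consequence of $\,\pLie{\mathcal{K}_2}\kappa_1=\mathcal{K}_2\righthalfcup{\mathsf d}\kappa_1+{\mathsf d}\bigl(\mathcal{K}_2\righthalfcup\kappa_1\bigr)=\mathcal{K}_1\righthalfcup\mathcal{K}_2\righthalfcup{\rm H}+{\mathsf d}\bigl(\mathcal{K}_2\righthalfcup\kappa_1\bigr)$.\ Altogether, $\,[\mathcal{K}_1,\mathcal{K}_2]\righthalfcup{\rm H}=-{\mathsf d}\kappa_{1,2}$,\ as required.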
This structure may equivalently be understood as coming from the standard ({\it i.e.}, untwisted) Courant bracket on Hitchin's generalised tangent bundle $\,{\mathsf T}^{1,1}M\equiv{\mathsf T} M\oplus_{M,{\mathbb{R}}}{\mathsf T}^*M\longrightarrow M\,$ twisted by the \v{C}ech--de Rham data of the gerbe geometrising $\,{\rm H}$.\ This interpretation of the \v{S}evera--Weinstein twist was originally advanced in \Rcite{Hitchin:2005in} and elaborated in \Rcite{Suszek:2012ddg}. The physical meaning of the algebra is revealed through the construction of the {\bf Noether currents} ($\widehat t\,$ is the normalised tangent vector field on $\,{\mathbb{S}}^1$) \begin{eqnarray}\nn J_\gt{K}(\cdot)=\mathcal{K}^a\bigl(x(\cdot)\bigr)\,p_a(\cdot)+(x_*\widehat t)^a(\cdot)\,\kappa_a\bigl(x(\cdot)\bigr) \end{eqnarray} of the theory. These are (spatial) densities of the standard {\bf Noether hamiltonians} (or {\bf charges}) $\,Q_\gt{K}\,$ of the symmetry, \begin{eqnarray}\nn Q_\gt{K}=\int_{{\mathbb{S}}^1}\,\Vol({\mathbb{S}}^1)\,J_\gt{K}(\cdot)\equiv\widetilde\mathcal{K}\righthalfcup\theta_{{\mathsf T}^*{\mathsf L} M}+\widetilde\kappa\,, \end{eqnarray} defined in terms of the canonical lift \eqref{eq:vecfieldlift} of the vector field, and that of the 1-form $\,\kappa$, \begin{eqnarray}\nn \widetilde\kappa[x,p]=\int_{{\mathbb{S}}^1}\,{\rm ev}^*\kappa\,, \end{eqnarray} to the space of states $\,{\mathsf T}^*{\mathsf L} M$,\ the former satisfying the standard hamiltonian relation \begin{eqnarray}\nn \widetilde\mathcal{K}\righthalfcup\Omega_\sigma=-\delta Q_\gt{K}\,.
\end{eqnarray} These furnish an anomalous\footnote{Note the purely {\it classical} nature of the anomaly in question.} field-theoretic realisation of $\,\gt{G}_\sigma$,\ of the simple form ($t\,$ and $\,\phi\,$ are -- respectively -- the time and space coordinate on $\,\Sigma$) \begin{eqnarray} \{J_{\gt{K}_1}(t,\phi),J_{\gt{K}_2}(t,\phi')\}_{\Omega_\sigma}=J_{\Vbra{\gt{K}_1}{\gt{K}_2}^{\rm H}}(t,\phi)\,\delta(\phi-\phi')-2\corr{\gt{K}_1,\gt{K}_2}\bigl(t,\tfrac{1}{2}(\phi+\phi')\bigr)\,\delta'(\phi-\phi')\,,\cr\label{eq:curranom} \end{eqnarray} in which \begin{eqnarray}\nn \corr{\cdot,\cdot}\ :\ \Gamma({\mathsf T}^{1,1}M)^{\x 2}\longrightarrow C^\infty(M,{\mathbb{R}})\ :\ \bigl((\mathcal{V}_1,\omega_1),(\mathcal{V}_2,\omega_2)\bigr)\longmapsto\tfrac{1}{2}\,(\mathcal{V}_1\righthalfcup\omega_2+\mathcal{V}_2\righthalfcup\omega_1) \end{eqnarray} is a natural non-degenerate pairing on $\,\Gamma({\mathsf T}^{1,1}M)$.\ Finally, in the geometric quantisation scheme distinguished by the geometric data of the field theory in hand, the canonical lifts \eqref{eq:vecfieldlift} and the Noether hamiltonians jointly determine their quantum-mechanical counterparts, with restrictions \begin{eqnarray}\nn \widehat Q_\gt{K}\mathord{\restriction}_{\mathcal{O}^*_\gt{i}}=-{\mathsf i}\,\pLie{\widetilde\mathcal{K}}-\widetilde\mathcal{K}\righthalfcup\vartheta_{\sigma\,\gt{i}}+Q_\gt{K}\,,\qquad\gt{i}\in\mathscr{I}_{{\mathsf L} M}\,. 
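To see the anomaly at work in the simplest instance (a sketch drawn from the standard free-boson lore, with overall normalisations suppressed), take $\,M={\mathbb{S}}^1\,$ with the flat metric, $\,{\rm H}=0\,$ and the pairs $\,\gt{K}_\pm=(\partial_x,\pm{\mathsf d} x)\in\gt{G}_\sigma\,$ built from the translation Killing field and the globally defined closed angular 1-form. The associated Noether currents are the chiral ${\rm u}(1)$-currents \begin{eqnarray}\nn J_\pm(\cdot)\equiv J_{\gt{K}_\pm}(\cdot)=p(\cdot)\pm(x_*\widehat t)(\cdot)\,. \end{eqnarray} Since $\,\Vbra{\gt{K}_{\epsilon_1}}{\gt{K}_{\epsilon_2}}^{\rm H}=0\,$ (all the constituent Lie derivatives and contractions are constant or vanish) and $\,\corr{\gt{K}_{\epsilon_1},\gt{K}_{\epsilon_2}}=\tfrac{1}{2}\,(\epsilon_1+\epsilon_2)\,$ for $\,\epsilon_i\in\{\pm1\}$,\ formula \eqref{eq:curranom} reduces to \begin{eqnarray}\nn \{J_\pm(t,\phi),J_\pm(t,\phi')\}_{\Omega_\sigma}=\mp2\,\delta'(\phi-\phi')\,,\qquad\qquad\{J_+(t,\phi),J_-(t,\phi')\}_{\Omega_\sigma}=0\,, \end{eqnarray} {\it i.e.}, to the two decoupled chiral ${\rm u}(1)$ current algebras with their familiar Schwinger terms. 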
\end{eqnarray} The rigid symmetries are neatly captured by families, indexed by the symmetry group $\,{\rm G}_\sigma$,\ of gerbe 1-isomorphisms \begin{eqnarray}\nn \Phi_g\ :\ \ell_g^*\mathcal{G}\xrightarrow{\ \cong\ }\mathcal{G}\,,\qquad g\in{\rm G}_\sigma \end{eqnarray} that transgress to automorphisms of the (pre)quantum bundle. The 1-isomorphisms can be regarded as geometrisations of the invariance condition \eqref{eq:genHamcond}. In the framework of the local(ised) field theory, we are led to demand that a global symmetry of a given field-theoretic model -- one that can be interpreted passively as invariance of the model under certain distinguished transformations of the reference system (in the space of internal degrees of freedom), and hence also as an equivalence between its distinguished classical configurations -- be promoted to a local one, or gauged. This is, morally, the content of the universal gauge principle. The r\^ole of the symmetry algebra $\,(\gt{G}_\sigma,\Vbra{\cdot}{\cdot}^{\rm H})\,$ in the description of the gauging of the global-symmetry group $\,{\rm G}_\sigma\,$ was clarified by the author in \Rcite{Suszek:2012ddg}, {\it cp.}\ also \Rcite{Gawedzki:2012fu}. 
Thus, $\,{\rm G}_\sigma\,$ can be gauged only if the extension of the ${\rm H}$-twisted Vinogradov bracket $\,\Vbra{\cdot}{\cdot}^{\rm H}\,$ to the $C^\infty(M,{\mathbb{R}})$-linear span of an arbitrary basis $\,\{\gt{K}_A=(\mathcal{K}_A,\kappa_A)\}_{A\in\ovl{1,{\rm dim}\,{\rm G}_\sigma}}\,$ of $\,\gt{G}_\sigma\,$ determined by a basis $\,\{t_A\}_{A\in\ovl{1,{\rm dim}\,{\rm G}_\sigma}}\,$ of the Lie algebra $\,\gt{g}_\sigma$,\ for which we have \begin{eqnarray}\nn \mathcal{K}_A(x)\equiv\tfrac{{\mathsf d}\ }{{\mathsf d} t}\mathord{\restriction}_{t=0}\ell_{{\rm e}^{t\vartriangleright t_A}}(x)\,,\quad x\in M\,, \end{eqnarray} defines a Lie algebroid over $\,M\,$ (with the obvious anchor $\,{\rm pr}_1\ :\ \bigoplus_{A\in\ovl{1,{\rm dim}\,{\rm G}_\sigma}}\,C^\infty(M,{\mathbb{R}})\,\gt{K}_A\longrightarrow\Gamma({\mathsf T} M)$), which then turns out to be isomorphic with the action algebroid $\,\gt{g}_\sigma{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}} M$,\ that is with the tangent Lie algebroid of the action groupoid $\,{\rm G}_\sigma{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}} M$.\ The appearance of the latter in the present context is by no means a coincidence -- indeed, the groupoid of principal bundles with $\,{\rm G}_\sigma{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}} M\,$ as the structure groupoid ({\it cp.}\ \Rcite{Moerdijk:2003mm}) was shown in \Rcite{Suszek:2012ddg} to naturally quantify the data of the relevant gauged $\sigma$-model, {\it i.e.}, the choice of the principal bundle $\,{\mathsf P}_{{\rm G}_\sigma}\longrightarrow\Sigma\,$ with the structure group $\,{\rm G}_\sigma\,$ and a choice of a global section of the associated bundle $\,{\mathsf P}_{{\rm G}_\sigma}\x_{\ell_\cdot}M\equiv({\mathsf P}_{{\rm G}_\sigma}\x M)/{\rm G}_\sigma$,\ the latter being identified with a lagrangean field of the gauged $\sigma$-model. 
Furthermore, it is over the nerve \begin{eqnarray}\nn \alxydim{@C=1.5cm@R=1.5cm}{ \cdots \ar@<.75ex>[r]^{d_\bullet^{(3)}\quad} \ar@<.25ex>[r] \ar@<-.25ex>[r] \ar@<-.75ex>[r] & {\rm G}_\sigma^{\x 2}\x M \ar@<.5ex>[r]^{\ d_\bullet^{(2)}} \ar@<0.ex>[r] \ar@<-.5ex>[r] & {\rm G}_\sigma\x M \ar@<.5ex>[r]^{\quad d_\bullet^{(1)}} \ar@<-.5ex>[r] & M} \end{eqnarray} of the small category $\,{\rm G}_\sigma{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}} M$,\ with face maps (written for $\,x\in M,\ g,g_k\in{\rm G}_\sigma,\ k\in\ovl{1,m}\,$ with $\,m\in{\mathbb{N}}^\x$) \begin{eqnarray}\nn &&d_0^{(1)}(g,x)=x\equiv{\rm pr}_1(g,x)\,,\qquad\qquad d_1^{(1)}(g,x)=\ell_g(x)\,,\cr\cr\cr &&d_0^{(m)}(g_{m},g_{m-1},\ldots,g_1,x)=(g_{m-1},g_{m-2}, \ldots,g_1,x)\,,\cr\cr &&d_{m}^{(m)}(g_{m},g_{m-1},\ldots,g_1 ,x)=\bigl(g_{m},g_{m-1},\ldots,g_2 ,\ell_{g_1}(x)\bigr)\,,\cr\cr &&d_i^{(m)}(g_{m},g_{m-1},\ldots,g_1,x)=(g_{m},g_{m-1},\ldots,g_{m+2-i} ,g_{m+1-i}\cdot g_{m-i},g_{m-1-i},\ldots,g_1,x)\,,\quad i\in\ovl{1,m-1}\,, \end{eqnarray} that the full-fledged gauging procedure was developed in Refs.\,\cite{Gawedzki:2010rn,Gawedzki:2012fu} and ultimately justified, in its structural form proposed by the authors, in terms of a generalised worldsheet gauge-defect construction in \Rcite{Suszek:2012ddg}. 
The necessary and sufficient condition for the said procedure to work is the existence of a {\bf ${\rm G}_\sigma$-equivariant structure} on the gerbe $\,\mathcal{G}\,$ of the $\sigma$-model, composed of a 1-isomorphism \begin{eqnarray}\nn \Upsilon\ :\ d_1^{(1)\,*}\mathcal{G}\xrightarrow{\ \cong\ }d_0^{(1)\,*}\mathcal{G}\otimes\mathcal{I}_{\rho_{\theta_{\rm L}}} \end{eqnarray} of gerbes over the arrow manifold $\,{\rm G}_\sigma\x M\,$ of the action groupoid, written in terms of the distinguished 2-form \begin{eqnarray}\nn \rho_{\theta_{\rm L}}={\rm pr}_2^*\kappa_A\wedge{\rm pr}_1^*\theta_{\rm L}^A-\tfrac{1}{2}\,{\rm pr}_2^*(\mathcal{K}_A\righthalfcup\kappa_B)\,{\rm pr}_1^*(\theta_{\rm L}^A\wedge\theta_{\rm L}^B) \end{eqnarray} in whose definition $\,\theta_{\rm L}=\theta_{\rm L}^A\otimes_{\mathbb{R}} t_A\,$ is the standard $\gt{g}_\sigma$-valued left-invariant Maurer--Cartan 1-form on $\,{\rm G}_\sigma$,\ and of a 2-isomorphism \begin{eqnarray}\nn \qquad\alxydim{@C=1.5cm@R=2cm}{ \bigl(d^{(1)}_1\circ d^{(2)}_1 \bigr)^*\mathcal{G} \ar[r]^{d^{(2)\,*}_2\Upsilon\hspace{1cm}} \ar[d]_{d^{(2)\,*}_1\Upsilon} & \bigl(d^{(1)}_1\circ d^{(2)}_0\bigr)^*\mathcal{G}\otimes\mathcal{I}_{d^{(2)\,*}_2\rho_{\theta_{\rm L}}} \ar[d]^{d^{(2)\,*}_0\Upsilon \otimes{\rm id}_{\mathcal{I}_{d^{(2)\,*}_2\rho_{\theta_{\rm L}}}}} \ar@{=>}[dl]|{\,\gamma\ } \\ \bigl(d^{(1)}_0\circ d^{(2)}_1\bigr)^*\mathcal{G}\otimes\mathcal{I}_{d^{(2)\,*}_1\rho_{\theta_{\rm L}}} \ar@{=}[r] & \bigl(d^{(1)}_0\circ d^{(2)}_0\bigr)^*\mathcal{G}\otimes\mathcal{I}_{d^{(2) *}_0\rho_{\theta_{\rm L}}+d^{(2)\, *}_2\rho_{\theta_{\rm L}}}} \end{eqnarray} between the 1-isomorphisms over $\,{\rm G}_\sigma^{\x 2}\x M$,\ satisfying, over $\,{\rm G}_\sigma^{\x 3}\x M$,\ the coherence condition \begin{eqnarray} d_1^{(3)\,*}\gamma\bullet\bigl({\rm id}_{(d_2^{(2)}\circ d_1^{(3)})^*\Upsilon}\circ d_3^{(3)\,*}\gamma \bigr)=d_2^{(3)\,*}\gamma\bullet\bigl(\bigl(d_0^{(3)\,*}\gamma \otimes{\rm id}_{{\rm id}_{\mathcal{I}_{(d_2^{(2)}\circ 
d_1^{(3)})^*\rho_{\theta_{\rm L}}}}}\bigr)\circ{\rm id}_{(d_2^{(2)}\circ d_3^{(3)})^*\Upsilon}\bigr)\,.\cr \label{eq:geq2isocoh} \end{eqnarray} The structure ensures that a suitable extension of the pullback gerbe $\,{\rm pr}_2^*\mathcal{G}\,$ over $\,{\mathsf P}_{{\rm G}_\sigma}\x M$,\ determined by the choice of the gauge bundle $\,{\mathsf P}_{{\rm G}_\sigma}$,\ descends to the covariant configuration bundle $\,({\mathsf P}_{{\rm G}_\sigma}\x M)/{\rm G}_\sigma\,$ of the gauged $\sigma$-model and thus enables the definition of the Wess--Zumino term. Alternatively, it provides the necessary and sufficient data for an arbitrary topological gauge-defect embedded in the worldsheet that implements the gauge symmetry, {\it cp.}\ \Rcite{Suszek:2012ddg}. Whenever the action $\,\ell_\cdot\,$ is free and proper, so that the orbit space $\,M/{\rm G}_\sigma\,$ carries the structure of a manifold, all this implies that the $\sigma$-model descends to the orbit manifold $\,M/{\rm G}_\sigma\,$ in that it determines a $\sigma$-model with the latter as a target space with a metric and a gerbe over it, and equivalence classes of such descended $\sigma$-models are essentially enumerated by inequivalent ${\rm G}_\sigma$-equivariant structures on $\,\mathcal{G}$.\ If the said conditions are not satisfied, on the other hand, it makes sense to regard the gauged $\sigma$-model as the {\it definition} of the induced loop mechanics on the space $\,M/{\rm G}_\sigma\,$ -- indeed, it is defined over a manifold directly related to the homotopy (${\rm G}_\sigma$-)quotient of $\,M$. Under the assumption of the existence of a measure $\,\mathscr{D} x\,$ on the space of maps $\,x\ :\ \Omega\longrightarrow M\,$ \emph{invariant} under the induced action $\,(g,x)\longmapsto\ell_g\circ x$,\ the above presentation enables us to discuss quantum lifts of (global) symmetries of the classical theory in an explicit manner. 
Indeed, let the induced action preserve the lagrangean density of the $\sigma$-model up to a total derivative (which is necessary for the action functional for the closed worldsheet to remain invariant under symmetry transformations), \begin{eqnarray}\nn \mathscr{L}_\sigma\left(\ell_g\circ x,\partial(\ell_g\circ x)\right)\,\Vol(\Omega)-\mathscr{L}_\sigma(x,\partial x)\,\Vol(\Omega)={\mathsf d} J_g(x,\partial x)\,, \end{eqnarray} for some $\,J_g(x,\partial x)\in\Omega^p(\Omega)$.\ We then obtain the induced realisation of $\,{\rm G}\,$ on the quantum space of states in the form \begin{eqnarray}\nn \left(R(g)\Psi\right)[\phi]:=\int_{\ell_g\circ x\mathord{\restriction}_{\partial\Omega_{\rm in}}=\phi}\,\mathscr{D}(\ell_g\circ x)\,{\rm e}^{{\mathsf i}\,S_\sigma[\ell_g\circ x]}= \int_{x\mathord{\restriction}_{\partial\Omega_{\rm in}}=\ell_{g^{-1}}\circ\phi}\,\mathscr{D} x\,{\rm e}^{{\mathsf i}\,S_\sigma[x]}\cdot{\rm e}^{{\mathsf i}\,\int_{\partial\Omega_{\rm in}}\,J_g(x,\partial x)}\,. \end{eqnarray} If, furthermore, \begin{eqnarray}\label{eq:jpullJ} J_g=x^*\jmath_g \end{eqnarray} for some {\bf target symmetry current} $\,\jmath_g\in\Omega^1(M)$,\ then we may rewrite the above definition as \begin{eqnarray}\nn \left(R(g)\Psi\right)[\phi]=c_g[\phi]\cdot\Psi\bigl[\ell_{g^{-1}}\circ\phi\bigr]\,,\qquad\qquad c_g[\phi]:={\rm e}^{{\mathsf i}\,\int_{\partial\Omega_{\rm in}}\,\bigl(\ell_{g^{-1}}\circ\phi\bigr)^*\jmath_g}\,. \end{eqnarray} Thus, to a realisation of the classical (global-)symmetry group on the quantum space of states, there is associated an {\bf action 1-cochain} on $\,{\rm G}\,$ with values in ${\rm U}(1)$-valued functionals on the classical space of states. The space of such functionals carries the structure of a ${\rm G}$-module with a (left) ${\rm G}$-action \begin{eqnarray}\nn (g_2\vartriangleright c_{g_1})[\phi]:=c_{g_1}\bigl[\ell_{g_2^{-1}}\circ\phi\bigr]\,. 
\end{eqnarray} In order to have an actual representation of the symmetry group on quantum states, we must demand that the 1-cochain be a 1-cocycle. Indeed, we have \begin{eqnarray}\nn \bigl(R(g_1)\circ R(g_2)\Psi\bigr)[\phi]&=&c_{g_1}[\phi]\cdot\bigl(R(g_2)\Psi\bigr)\bigl[\ell_{g_1^{-1}}\circ\phi\bigr]=c_{g_1}[\phi] \cdot c_{g_2}\bigl[\ell_{g_1^{-1}}\circ\phi\bigr]\cdot\Psi\bigl[\ell_{g_2^{-1}}\circ\ell_{g_1^{-1}}\circ\phi\bigr]\cr\cr &=&c_{g_1}[\phi] \cdot c_{g_2}\bigl[\ell_{g_1^{-1}}\circ\phi\bigr]\cdot\Psi\bigl[\ell_{(g_1\cdot g_2)^{-1}}\circ\phi\bigr]\cr\cr &=&(\delta_{\rm G} c)_{g_1,g_2}[\phi]\cdot\bigl(R(g_1\cdot g_2)\Psi\bigr)[\phi] \end{eqnarray} with the {\bf homomorphicity 2-cocycle} \begin{eqnarray}\nn (\delta_{\rm G} c)_{g_1,g_2}[\phi]&=&c_{g_2}\bigl[\ell_{g_1^{-1}}\circ\phi\bigr]\cdot c_{g_1\cdot g_2}[\phi]^{-1}\cdot c_{g_1}[\phi]\cr\cr &=&{\rm e}^{{\mathsf i}\,\int_{\partial\Omega_{\rm in}}\,[(\ell_{g_2^{-1}}\circ(\ell_{g_1^{-1}}\circ\phi))^*\jmath_{g_2}-(\ell_{(g_1\cdot g_2)^{-1}}\circ\phi)^*\jmath_{g_1\cdot g_2}+(\ell_{g_1^{-1}}\circ\phi)^*\jmath_{g_1}]}\cr\cr &=&{\rm e}^{{\mathsf i}\,\int_{\partial\Omega_{\rm in}}\,(\ell_{(g_1\cdot g_2)^{-1}}\circ\phi)^*(\jmath_{g_2}-\jmath_{g_1\cdot g_2}+\ell_{g_2}^*\jmath_{g_1})}\,, \end{eqnarray} the latter being determined by the {\bf current 2-cocycle} \begin{eqnarray}\nn (\delta_{\rm G}\jmath)_{g_1,g_2}:=\jmath_{g_1}\vartriangleleft g_2-\jmath_{g_1\cdot g_2}+\jmath_{g_2}\,,\qquad\qquad\jmath_{g_1}\vartriangleleft g_2:=\ell_{g_2}^*\jmath_{g_1} \end{eqnarray} whose triviality in the de Rham cohomology of $\,M\,$ is a necessary and sufficient condition for the coclosedness of $\,c_g$.\ The existence of a \emph{projective} representation (and so also of a standard linear representation of a central extension of $\,{\rm G}$), on the other hand, requires only that the group-coboundary of the above 1-cochain be a 2-cocycle on $\,{\rm G}\,$ with values in the trivial ${\rm G}$-module $\,{\rm U}(1)$, 
\begin{eqnarray}\nn d_{g_1,g_2}:=(\delta_{{\rm G}}c)_{g_1,g_2}\in Z^2\bigl({\rm G},{\rm U}(1)\bigr)\,, \end{eqnarray} which is to say that it satisfies the identity \begin{eqnarray}\nn (\delta_{\rm G} d)_{g_1,g_2,g_3}:=d_{g_1,g_2}\cdot d_{g_1,g_2\cdot g_3}^{-1}\cdot d_{g_1\cdot g_2,g_3}\cdot d_{g_2,g_3}^{-1}=1 \end{eqnarray} for arbitrary $\,g_1,g_2,g_3\in{\rm G}$.\ Indeed, given such a 2-cocycle, we may define a standard action of the central extension \begin{eqnarray}\nn {\boldsymbol{1}}\longrightarrow{\rm U}(1)\longrightarrow\widehat{{\rm G}}:={\rm G}\,{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}{\rm U}(1)\longrightarrow{\rm G}\longrightarrow{\boldsymbol{1}} \end{eqnarray} of the symmetry group $\,{\rm G}$,\ with the group operation determined by the 2-cocycle as \begin{eqnarray}\label{eq:projisext} \widehat{{\rm G}}\x\widehat{{\rm G}}\longrightarrow\widehat{{\rm G}}\ :\ \bigl((g_1,u_1),(g_2,u_2)\bigr)\longmapsto(g_1\cdot g_2,d_{g_1,g_2}\cdot u_1\cdot u_2)\,. \end{eqnarray} The action is given by the formula \begin{eqnarray}\nn \bigl(R(g,u)\Psi\bigr)[\phi]:=u\cdot\bigl(R(g)\Psi\bigr)[\phi]\,. \end{eqnarray} These considerations will play an important r\^ole in the fundamental construction developed in the present article, that is in the (super)geometrisation scheme for {\it supergroup-invariant} de Rham cohomology of super-$\sigma$-model targets. 
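For completeness, we record the elementary check that the 2-cocycle identity is precisely the associativity of the product \eqref{eq:projisext}: \begin{eqnarray}\nn \bigl((g_1,u_1)\cdot(g_2,u_2)\bigr)\cdot(g_3,u_3)&=&\bigl(g_1\cdot g_2\cdot g_3,d_{g_1\cdot g_2,g_3}\cdot d_{g_1,g_2}\cdot u_1\cdot u_2\cdot u_3\bigr)\,,\cr\cr (g_1,u_1)\cdot\bigl((g_2,u_2)\cdot(g_3,u_3)\bigr)&=&\bigl(g_1\cdot g_2\cdot g_3,d_{g_1,g_2\cdot g_3}\cdot d_{g_2,g_3}\cdot u_1\cdot u_2\cdot u_3\bigr)\,, \end{eqnarray} so that the two expressions coincide for all $\,u_i\in{\rm U}(1)\,$ iff $\,(\delta_{\rm G} d)_{g_1,g_2,g_3}=1$. 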
\section{Tensorial super-$\sigma$-model backgrounds -- generalities}\label{sec:stensor} We shall be concerned with the by now well-established Green--Schwarz-type models of dynamics of extended supersymmetric objects, also known as \textbf{super-$p$-branes}, whose classical configurations are generalised superharmonic embeddings $\,\Omega\longrightarrow\mathscr{M}\,$ of the {\bf worldvolume} $\,\Omega$,\ a standard manifold of dimension $p+1\in\ovl{1,11}\,$ (parametrising the history of a charged point-like particle, loop, membrane {\it etc.}) equipped with an intrinsic metric $\,\gamma$,\ in a {\bf target supermanifold} $\,\mathscr{M}$,\ to be termed the {\bf supertarget} in what follows. A general (real) supermanifold is a ringed space $\,\mathscr{M}=(M,\mathcal{O}_M)\,$ composed of a (second countable Hausdorff) topological space $\,M\,$ (termed the {\bf body} of $\,\mathscr{M}$) and a sheaf $\,\mathcal{O}_M\,$ of (real) associative unital superalgebras on $\,M\,$ (termed the {\bf structure sheaf} and to be thought of as a generalisation of the sheaf of real functions on a manifold), locally modelled on $\,({\mathbb{R}}^{\x m},C^\infty(\cdot,{\mathbb{R}})\otimes_{\mathbb{R}}\bigwedge^\bullet{\mathbb{R}}^{\x n})\,$ -- here, the pair $\,(m,n)$,\ constant over the entire $\,\mathscr{M}$,\ is the so-called superdimension of $\,\mathscr{M}$.\ The global geometry of such a structure is identified in the fundamental Batchelor--Gaw\c{e}dzki Theorem of Refs.\,\cite{Gawedzki:1977pb,Batchelor:1979a} which states that $\,\mathscr{M}\,$ is (globally) isomorphic with the ringed space $\,(M,\Gamma(\bigwedge^\bullet{\mathbb{V}}))\,$ for $\,({\mathbb{V}},M,\pi_{\mathbb{V}},{\mathbb{R}}^{\x n})\,$ a real vector bundle of rank $n$ over the body. Supermanifolds admit (local) coordinate descriptions, and in this work we shall deal exclusively with supermanifolds with {\it global} coordinate systems, so that there will be no need for the abstract theory of supermanifolds. 
The presence of global coordinate systems helps to simplify our treatment of the differential calculus on the supermanifolds of interest, which will be seen to play an instrumental r\^ole in the field-theoretic constructions. Thus, we shall use the fact that the tangent sheaf $\,\mathcal{T}\mathscr{M}\equiv{\rm sDer}(\mathcal{O}_M)\,$ of superderivations of the structure sheaf (whose sections are to be thought of as (super)vector fields on $\,\mathscr{M}$), as well as the dual cotangent sheaf $\,\mathcal{T}^*\mathscr{M}\equiv\Hom_{\Mod_{\mathcal{O}_M}}(\mathcal{T}\mathscr{M},\mathcal{O}_M)\,$ (whose sections are to be thought of as super-1-forms on $\,\mathscr{M}$) are in general locally, and in our case also globally free, with generators given by -- respectively -- coordinate superderivations and coordinate differentials. All this will enable us to develop our discussion in a far-reaching structural analogy with the standard ({\it i.e.}, commutative-)geometric approach to $\sigma$-models, with the graded nature of the geometry under consideration reflected solely in the elementary sign conventions tabulated in Conv.\,\ref{conv:SignManifesto}. Passing to the supergeometries of interest, we shall further assume the supermanifold to be endowed with a (left) transitive action of a Lie supergroup $\,{\rm G}\,$ ({\it i.e.}, a group object in the category of supermanifolds), the latter playing the r\^ole of the (extended) global-(super)symmetry group of the field theory in question. 
As such the supertarget will be presentable as (or equivariantly superdiffeomorphic with) a supercoset $\,{\rm G}/{\rm H}\cong\mathscr{M}\,$ of that supergroup relative to a Lie group $\,{\rm H}\,$ embedded in $\,{\rm G}$.\ Such a presentation of the target supermanifold puts us in the framework of Cartan geometry, which, in turn, affords a neat description of the additional tensorial {\bf superbackground} of the super-$p$-brane propagation in $\,\mathscr{M}$,\ composed of a ${\rm G}$-invariant metric tensor $\,{\rm g}\,$ on $\,\mathscr{M}\,$ (typically degenerate in the Gra\ss mann-odd directions) and a left-${\rm G}$-invariant de Rham super-$(p+2)$-cocycle $\,\underset{\tx{\ciut{(p+2)}}}{\chi}\in Z^{p+2}_{\rm dR}(\mathscr{M})^{\rm G}$.\ Thus, we construct the action functionals of the models of interest in terms of (left-)${\rm G}$-equivariant components of the left-invariant Maurer--Cartan 1-form $\,\theta_{\rm L}\,$ on ${\rm G}\,$ with values in the Lie superalgebra ({\it cp.} App.\,\ref{app:LieAlgCohom}) $\,\gt{g}\,$ of that supergroup as well as invariant tensors on the latter, and the lagrangean fields of the theory are identified with the Riemann normal (super)coordinates on $\,{\rm G}\,$ restricted to a section $\,\gamma\in\Gamma({\rm G})\,$ of the principal ${\rm H}$-bundle $\,{\rm G}\longrightarrow{\rm G}/{\rm H}\,$ modelling the supertarget. In fact, the specific choice of the section and the ensuing treatment of the geometric Goldstone fields coordinatising some of the directions in the complement of the Lie algebra $\,\gt{h}\,$ of the Lie group $\,{\rm H}\,$ within the Lie superalgebra $\,\gt{g}\,$ has -- as has been amply demonstrated in the literature ({\it cp.}, {\it e.g.}, Refs.\,\cite{McArthur:1999dy,Gomis:2006wu}), and shall be elaborated in what follows -- far-reaching consequences for the geometrisation of the theory (in the spirit of the previous section). 
Taking into account the structure of the super-Poincar\'e algebra that serves as the local model for the geometries under consideration, as well as that of the distinguished anti-de Sitter superbackgrounds to be explored in subsequent studies, we shall restrict our attention to the so-called {\bf reductive} homogeneous spaces $\,{\rm G}/{\rm H}$,\ {\it i.e.}, those for which the direct-sum (supervector-space) complement $\,\gt{m}\,$ of the Lie algebra $\,\gt{h}\,$ within \begin{eqnarray}\label{eq:gdecomp} \gt{g}=\gt{m}\oplus\gt{h} \end{eqnarray} has the $\gt{h}$-module property \begin{eqnarray}\nn [\gt{h},\gt{m}]\subset\gt{m}\,. \end{eqnarray} The prototype of the said structure is an extension of the super-point algebra of anti-commuting {\bf supercharges} $\,Q_{I\,\a},\ (I,\a)\in\ovl{1,N}\x\ovl{1,D}\,$ (with $\,N\,$ denoting the number of supersymmetries, {\it i.e.}, of distinct Majorana spinors entering the definition of the relevant GS model, and $\,D\,$ the dimension of the respective representation of the underlying Clifford algebra, the two numbers being constrained severely by the requirement of existence of the corresponding GS model) by the algebra of Gra\ss mann-even translations $\,P_a,\ a\in\ovl{0,d-1}\,$ ($d\,$ is the spacetime dimension of the supertarget), further enhanced -- as a spinor/vector-module algebra -- by the Lorentz algebra\footnote{Further enhancements are possible and, indeed, physically relevant, {\it e.g.}, by generators of dilations and special conformal transformations in the supersymmetric anti-de Sitter setting.} $\,\gt{r}\,$ with generators $\,J_{ab}=J_{[ab]},\ a,b\in\ovl{0,d-1}\,$ to form the Lie superalgebra $\,\gt{g}\,$ with the defining supercommutation relations \begin{eqnarray}\nn &\{Q_{I\,\a},Q_{J\,\beta}\}=f_{I\,\a,J\,\beta}^{\hspace{24pt}a}\,P_a+f_{I\,\a,J\,\beta}^{\hspace{24pt}ab}\,J_{ab}\,,\qquad[Q_{I\,\a},P_a]=f_{I\,\a,a}^{\hspace{15pt}J\,\beta}\,Q_{J\,\beta}\,,\qquad[P_a,P_b]=f_{a,b}^{\hspace{10pt}cd}\,J_{cd}\,,&\cr\cr 
&[J_{ab},J_{cd}]=\eta_{ad}\,J_{bc}-\eta_{ac}\,J_{bd}+\eta_{bc}\,J_{ad}-\eta_{bd}\,J_{ac}\,,&\cr\cr &[J_{ab},P_c]=\eta_{bc}\,P_a-\eta_{ac}\,P_b\,,\qquad[J_{ab},Q_{I\,\a}]=\tfrac{1}{2}\,(\Gamma_{[ab]})^\beta_{\ \a}\,Q_{I\,\beta}\,,& \end{eqnarray} written in terms of the Minkowskian metric $\,\eta=\textrm{diag}(1,-1,-1,\ldots,-1)\,$ on the body of the supertarget, and in terms of the generators $\,\Gamma_a,\ a\in\ovl{0,d-1}\,$ of the corresponding Clifford algebra, {\it cp.}\ App.\,\ref{app:conv}. For $\,N=1$,\ the choice of the structure constants \begin{eqnarray}\nn f_{\a,\beta}^{\hspace{12pt}a}=\ovl\Gamma{}^a_{\a\beta}\,,\qquad f_{\a,\beta}^{\hspace{12pt}ab}=\tfrac{\lambda_1}{R}\,\bigl(\ovl\Gamma{}^{[ab]}\bigr)_{\a\beta}\,,\qquad f_{\a,a}^{\hspace{11pt}\beta}=\tfrac{\lambda_2}{R}\,(\Gamma_a)^\beta_{\ \a}\,,\qquad f_{a,b}^{\hspace{10pt}cd}=\tfrac{\lambda_3}{R^2}\,\delta_{[a}^{\ \,c}\,\delta_{b]}^{\ \,d} \end{eqnarray} yields, for certain (normalisation-dependent) values of the numerical constants $\,\lambda_1,\lambda_2\in{\mathbb{R}}\,$ and $\,\lambda_3\in{\mathbb{R}}_{>0}$,\ the $\,d=4\,$ {\bf super-anti-de Sitter algebra} at radius $\,R\in{\mathbb{R}}_{>0}\,$ of \Rcite{vanHolten:1982mx} (in the notation of \Rcite{Freedman:2012zz}), and reduces, {\it via} the \.In\"on\"u--Wigner contraction $\,R\to\infty$,\ to the standard ($N=1$) {\bf super-Minkowski algebra} \begin{eqnarray}\nn f_{\a,\beta}^{\hspace{24pt}a}=\ovl\Gamma{}^a_{\a\beta}\,,\qquad\qquad f_{\a,\beta}^{\hspace{24pt}ab}\,,\ f_{\a,a}^{\hspace{15pt}\beta}\,,\ f_{a,b}^{\hspace{10pt}cd}=0\,. \end{eqnarray} In this setting, $\,\gt{h}\,$ is the Lie algebra of a Lie subgroup $\,{\rm H}={\rm Spin}(1,p)\x{\rm Spin}(d-p-1)\subset{\rm Spin}(1,d-1)\,$ that is preserved in the presence of a classical configuration (embedding) of the extended super-$p$-brane in the supertarget or its (super)extension by the distinguished (super)translations that leave the chosen action functional invariant, {\it cp.} below. 
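As a quick consistency check of the above structure constants (and of their scaling with $\,R$), note that with $\,[P_a,P_b]=\tfrac{\lambda_3}{R^2}\,J_{ab}\,$ the purely Gra\ss mann-even $(P_a,P_b,P_c)$ super-Jacobi identity holds identically, \begin{eqnarray}\nn \bigl[[P_a,P_b],P_c\bigr]+\bigl[[P_b,P_c],P_a\bigr]+\bigl[[P_c,P_a],P_b\bigr]&=&\tfrac{\lambda_3}{R^2}\,\bigl([J_{ab},P_c]+[J_{bc},P_a]+[J_{ca},P_b]\bigr)\cr\cr &=&\tfrac{\lambda_3}{R^2}\,\bigl(\eta_{bc}\,P_a-\eta_{ac}\,P_b+\eta_{ca}\,P_b-\eta_{ba}\,P_c+\eta_{ab}\,P_c-\eta_{cb}\,P_a\bigr)=0\,, \end{eqnarray} whereas the mixed identities involving the supercharges constrain the relative values of the $\,\lambda_i\,$ (we do not track the normalisations here). 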
Given decomposition \eqref{eq:gdecomp} and the corresponding decomposition of the set $\,\{t_A\}_{A\in\ovl{1,{\rm dim}\,{\rm G}}}\,$ of generators of $\,\gt{g}\,$ into subsets: $\,\{t_{\widetilde a}\}_{\widetilde a\in\ovl{1,{\rm dim}_{\mathbb{R}}\,\gt{h}}}\,$ and $\,\{t_{\unl a}\}_{\unl a\in\ovl{1,{\rm dim}_{\mathbb{R}}\,\gt{g}-{\rm dim}_{\mathbb{R}}\,\gt{h}}}\,$ spanning -- respectively -- $\,\gt{h}\,$ and $\,\gt{m}$,\ we may write the Maurer--Cartan 1-form as \begin{eqnarray}\label{eq:sMC} \theta_{\rm L}=\theta_{\rm L}^A\otimes_{\mathbb{R}} t_A=\theta_{\rm L}^{\widetilde a}\otimes_{\mathbb{R}} t_{\widetilde a}+\theta_{\rm L}^{\unl a}\otimes_{\mathbb{R}} t_{\unl a} \end{eqnarray} and subsequently formulate the physical theory of interest in terms of the $\,\theta_{\rm L}^{\unl a}$. With all the ingredients in place, we may finally write down the Dirac--Feynman amplitude for the maps $\,{\mathbb{X}}\ :\ \Sigma\longrightarrow{\rm G}\,$ of the field theory of interest, embedding the worldsheet $\,\Sigma\,$ within a section of the principal bundle $\,{\rm G}\longrightarrow{\rm G}/{\rm H}\,$ introduced earlier. 
Generically, it takes the familiar form \begin{eqnarray}\label{eq:GSmodel} \cA_{{\rm DF,GS},p}[{\mathbb{X}}]={\rm exp}\bigl({\mathsf i}\,S_{{\rm metr,GS},p}[{\mathbb{X}}]\bigr)\cdot{\rm exp}\bigg({\mathsf i}\,\tint_\Omega\,{\mathbb{X}}^*\bigl({\mathsf d}^{-1}\underset{\tx{\ciut{(p+2)}}}{\chi}\bigr)\bigg) \end{eqnarray} in which the first factor $\,S_{{\rm metr,GS},p}\,$ computes the metric $(p+1)$-volume of the embedded hypersurface $\,{\mathbb{X}}(\Omega)\,$ measured in the metric induced from $\,{\rm g}\,$ along $\,{\mathbb{X}}\,$ on the worldvolume $\,\Omega\,$ and can assume various forms, depending on the choice of the supertarget $\,\mathscr{M}\,$ (or, more to the point, on the choice of the embedding of the physical spacetime in $\,\mathscr{M}\,$ -- {\it cp.}\ below), and in which the second factor, tentatively representing the geometric coupling of the external field $\,\underset{\tx{\ciut{(p+2)}}}{\chi}\,$ to the charge current defined by the propagation of the super-$p$-brane in $\,\mathscr{M}$,\ is {\it locally} (over $\,\mathscr{M}$) expressed as the integral \begin{eqnarray}\nn S_{{\rm top,GS},p}[{\mathbb{X}}]\equiv\int_\Omega\,{\mathbb{X}}^*\bigl({\mathsf d}^{-1}\underset{\tx{\ciut{(p+2)}}}{\chi}\bigr)\xrightarrow[\ {\rm loc.}\ ]{}\int_\Omega\,{\mathbb{X}}^*\underset{\tx{\ciut{(p+1)}}}{\beta} \end{eqnarray} of a (local) de Rham primitive $\,\underset{\tx{\ciut{(p+1)}}}{\beta}\,$ of $\,\underset{\tx{\ciut{(p+2)}}}{\chi}$.\ In the most studied examples, {\it i.e.}, on supercosets of the super-Minkowskian type (body) $\,{\mathbb{R}}^{1,d-1}\,$ ({\it cp.}\ Refs.\,\cite{Brink:1981nb,Green:1983wt,Bergshoeff:1985su,Hughes:1986fa,Achucarro:1987nc}) and of the super-anti-de Sitter type $\,{\rm AdS}_{p+2}\x{\mathbb{S}}^{d-p-2},\ p\in\ovl{0,d-2},\ d\in\{10,11\}\,$ ({\it cp.}\ Refs.\,\cite{Metsaev:1998it,Arutyunov:2008if,deWit:1998yu,Claus:1998fh}), the primitives of various physically relevant GS $(p+2)$-cocycles exist (although in the latter class, they are often given in 
an implicit integral form), and yet they are typically not supersymmetric (or, in our language, not left-${\rm G}$-invariant), {\it cp.}\ Refs.\,\cite{Chryssomalakos:2000xd,Sakaguchi:1999fm} and Refs.\,\cite{Hatsuda:2001pp,Hatsuda:2002hz} for interesting analyses in the super-Minkowskian and super-anti-de Sitter context, respectively. Hence, in an attempt to grasp the geometric meaning and thereupon properly define the WZ term, we arrive at a crossroads -- we are confronted with the choice between the standard de Rham cohomology of the supertarget $\,\mathscr{M}\,$ and the ${\rm G}$-invariant (or at the very least supersymmetry-invariant) de Rham cohomology of the same space. In many examples of $\sigma$-models with standard ({\it i.e.}, Gra\ss mann-even) manifolds as targets, there is either no distinguished symmetry group identified in the canonical analysis, or that group is compact, as is the case for the WZW $\sigma$-model on a compact Lie group -- the mother of all rational conformal field theories in two dimensions. In the former situation, the question of choice does not even come up, whereas in the latter one, it is answered by the Chevalley--Eilenberg Theorem of \Rcite{Chevalley:1948} that states the equivalence of the two cohomologies (that is, of the existence of an isomorphism of the corresponding cohomology groups). The supergeometric setting of interest does not fall into either category as the symmetry (super)group is built into the definition of the supertarget and the latter group is (assumed) non-compact, which precludes the application of the Chevalley--Eilenberg Theorem. The universal gauge principle invoked in the previous section prompts us to demand that the global symmetry resp.\ the supersymmetry be rendered local, or gauged (or, at least, that it be possible to gauge it), so that -- in the light of Refs.\,\cite{Gawedzki:2012fu,Suszek:2012ddg} -- the super-$\sigma$-model may descend to the space of orbits. 
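By way of illustration (in conventions that vary across the literature and are fixed here only schematically, with all normalisations suppressed): for the $\,N=1\,$ GS superstring ($p=1$) on the super-Minkowski space with global coordinates $\,(x^a,\theta^\a)$,\ the GS super-3-cocycle may be written as \begin{eqnarray}\nn \underset{\tx{\ciut{(3)}}}{\chi}\propto\ovl\Gamma{}^a_{\a\beta}\,\sigma^\a\wedge\sigma^\beta\wedge e_a\,,\qquad\qquad\sigma^\a={\mathsf d}\theta^\a\,,\qquad e^a={\mathsf d} x^a+\tfrac{1}{2}\,\ovl\Gamma{}^a_{\a\beta}\,\theta^\a\,{\mathsf d}\theta^\beta\,, \end{eqnarray} in terms of the left-invariant 1-forms $\,\sigma^\a\,$ and $\,e^a$.\ Its closedness reduces to the Fierz identity $\,\ovl\Gamma{}^a_{(\a\beta}\,\ovl\Gamma_{a\,\gamma\delta)}=0$,\ valid precisely in $\,d\in\{3,4,6,10\}$.\ A de Rham primitive exists, the de Rham cohomology of the supertarget being that of its body, but every such primitive contains $\,\theta^\a\,$ undifferentiated, schematically \begin{eqnarray}\nn \underset{\tx{\ciut{(2)}}}{\beta}\propto\ovl\Gamma{}^a_{\a\beta}\,\theta^\a\,{\mathsf d}\theta^\beta\wedge{\mathsf d} x_a+O(\theta^3\,{\mathsf d}\theta^{\wedge 2})\,, \end{eqnarray} and so fails to be invariant under the supertranslations $\,\theta^\a\longmapsto\theta^\a+\varepsilon^\a$.\ This is the simplest manifestation of the dichotomy discussed above. 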
The problem with this line of reasoning is two-fold: first of all, already the descent to the space of orbits of the all-important supersymmetry transformations (understood as translations in the soul directions) takes us out of the original geometric category of supermanifolds, as illustrated in \Rcite{Rabin:1984rm} -- this is analogous to the descent to a non-smooth space of orbits of the action of a finite group on a smooth manifold; secondly, the purely geometric meaning of the ${\rm G}$- resp.\ supersymmetry-{\it invariant} de Rham cohomology (not to be confused with the ${\rm G}$- resp.\ supersymmetry-{\it equivariant} one) is not clear, which undermines the postulate of giving preference to that cohomology over the usual de Rham cohomology. The latter objection would be lifted if we could find a subgroup $\,{\mathsf T}\subset{\rm G}\,$ (resp.\ that of the supersymmetry group) with the following properties: \begin{itemize} \item[(i)] ${\mathsf T}$-invariance of a differential form on $\,\mathscr{M}\,$ implies its ${\rm G}$-invariance (resp.\ its supersymmetry-invariance); \item[(ii)] the orbit space $\,\mathscr{M}/{\mathsf T}\,$ is a supermanifold locally modelled on the Gra\ss mann bundle of the same vector bundle over the body manifold of $\,\mathscr{M}\,$ as that of the supermanifold $\,\mathscr{M}\,$ itself. \end{itemize} The above properties would legitimise thinking of the original super-$\sigma$-model with the supertarget $\,\mathscr{M}\,$ as one with the supertarget $\,\mathscr{M}/{\mathsf T}\,$ on which the GS super-cocycle defines -- by construction -- a non-trivial de Rham class, and this would, in turn, mean that the topology of $\,\mathscr{M}/{\mathsf T}\,$ encodes the non-trivial Chevalley--Eilenberg cohomology of $\,\mathscr{M}\,$ and justify a geometrisation of the GS super-cocycle in a manner completely analogous with that employed in the Gra\ss mann-even setting. 
The existence of the relevant subgroup $\,{\mathsf T}\,$ in the case of the super-Minkowski space was demonstrated explicitly by Rabin and Crane in Refs.\,\cite{Rabin:1984rm,Rabin:1985tv}, and is therefore anticipated (but has to be proven on a case-by-case basis) on a generic supermanifold (of the type under consideration) in the light of the Batchelor--Gaw\c{e}dzki Theorem of Refs.\,\cite{Gawedzki:1977pb,Batchelor:1979a}. In the former setting, condition (ii) rules out the obvious candidate for $\,{\mathsf T}\,$ given by the full supersymmetry group -- indeed, the resulting quotient is not of the same type as the original supermanifold. It is then readily checked that the Kosteleck\'y--Rabin discrete supersymmetry subgroup of \Rcite{Kostelecky:1983qu}, to be thought of as a lattice variant of the continuous supergroup of supertranslations, is a suitable choice -- it yields a supermanifold with the fundamental group generated by unital (in the lattice spacing) translations in the Gra\ss mann-odd (or {\bf soul}) directions, and the only nontrivial supercommutator in the underlying Lie superalgebra (the anticommutator of the supercharges) gives rise to a torsion component in the ensuing homology, {\it cp.}\ \Rcite{Rabin:1984rm}. It is also worth noting that Witten's trick\footnote{Defining the Wess--Zumino term for the $\sigma$-model on $\,\mathscr{M}/{\mathsf T}\,$ as an integral of the GS 3-cocycle over a filling 3-manifold (a solid handlebody) of the worldsheet $\,\Sigma$.} does not work in this setting, {\it cp.} \Rcite{Rabin:1985tv}, which is another reason to look for a geometrisation of the GS cocycle. We shall construct such a geometrisation explicitly in what follows. 
Since, moreover, we want to study its equivariance under actions of subgroups of $\,{\rm G}\,$ and subalgebras of $\,\gt{g}\,$ induced from the underlying left- and right-regular actions of $\,{\rm G}\,$ on itself upon restriction to the section of the principal bundle $\,{\rm G}\longrightarrow{\rm G}/{\rm H}\,$ referred to previously, it will be important to gain a better understanding of the induction scheme first. Our introductory remarks concerning the general structure of the supertargets of interest essentially determine the nature of the action of the symmetry supergroup $\,{\rm G}\,$ to be considered, and so also -- in particular -- the implementation of supersymmetries. These will be realised nonlinearly in the scheme originally conceived by Schwinger and Weinberg in Refs.\,\cite{Schwinger:1967tc,Weinberg:1968de} in the context of effective field theories with chiral symmetries, and subsequently elaborated in Refs.\,\cite{Coleman:1969sm,Callan:1969sn,Salam:1969rq}, to be adapted to the study of spacetime symmetries by Salam, Strathdee and Isham in Refs.\,\cite{Salam:1970qk,Isham:1971dv}. The scheme was successfully employed in the setting of a supersymmetric field theory by Akulov and Volkov {\it et al.} in Refs.\,\cite{Volkov:1972jx,Volkov:1973ix,Ivanov:1978mx,Lindstrom:1979kq,Uematsu:1981rj,Ivanov:1982bpa,Samuel:1982uh,Ferrara:1983fi,Bagger:1983mv}, and this is the variant that we encounter below. 
Thus, we take the Lie superalgebra $\,\gt{g}\,$ of the symmetry supergroup $\,{\rm G}\,$ to decompose as \begin{eqnarray}\nn \gt{g}=\gt{t}\oplus\gt{r} \end{eqnarray} into a Lie algebra $\,\gt{r}\supset[\gt{r},\gt{r}]\,$ and its super-graded module $\,\gt{t}=\gt{t}^{(0)}\oplus\gt{t}^{(1)}\,$ (not a Lie superalgebra in general), \begin{eqnarray}\nn [\gt{r},\gt{t}]\subset\gt{t}\,, \end{eqnarray} that further splits as \begin{eqnarray}\nn \gt{r}=\gt{d}\oplus\gt{r}_{\rm vac} \end{eqnarray} into a Lie subalgebra $\,\gt{r}_{\rm vac}\supset[\gt{r}_{\rm vac},\gt{r}_{\rm vac}]\,$ and its vector-space complement $\,\gt{d}\,$ which together with a subspace $\,\gt{e}\subset\gt{t}\,$ constitutes the super-vector space \begin{eqnarray}\nn \gt{e}\oplus\gt{d}\equiv\gt{m} \end{eqnarray} mentioned earlier. This leaves us with the direct-sum complement $\,\gt{t}_{\rm vac}\subset\gt{t}\,$ of $\,\gt{e}\,$ in $\,\gt{t}\,$ as the last ingredient in the definition of the vacuum-symmetry Lie superalgebra \begin{eqnarray}\label{eq:rvacontvac} \gt{t}_{\rm vac}\oplus\gt{r}_{\rm vac}\equiv\gt{h}\,, \end{eqnarray} and we demand that $\,\gt{t}_{\rm vac}\,$ be an $\gt{r}_{\rm vac}$-module \begin{eqnarray}\nn [\gt{r}_{\rm vac},\gt{t}_{\rm vac}]\subset\gt{t}_{\rm vac}\,. 
\end{eqnarray} We shall denote the basis vectors (generators) of $\,\gt{t}\,$ as $\,\{t_\mu\}_{\mu\in\ovl{1,{\rm dim}_{\mathbb{R}}\,\gt{t}}}$,\ and among them those of $\,\gt{t}^{(0)}\,$ as $\,\{t_I\}_{I\in\ovl{1,{\rm dim}_{\mathbb{R}}\,\gt{t}^{(0)}}}$,\ and those of $\,\gt{t}^{(1)}\,$ as $\,\{t_\a\}_{\a\in\ovl{1,{\rm dim}_{\mathbb{R}}\,\gt{t}^{(1)}}}$.\ We further divide the generators of $\,\gt{t}^{(0)}\,$ into those co-spanning $\,\gt{t}_{\rm vac}$,\ which we label as $\,\{t_{\unl A}\}_{\unl A\in\ovl{0,p}},\ p:={\rm dim}_{\mathbb{R}}\,\gt{t}_{\rm vac}^{(0)}-1$,\ and those co-spanning $\,\gt{e}$,\ labelled as $\,\{t_{\widehat S}\}_{\widehat S\in\ovl{1,d-p-1}},\ d:=p+1+{\rm dim}_{\mathbb{R}}\,\gt{e}^{(0)}$.\ Finally, the generators of $\,\gt{d}\,$ will be written as $\,\{t_a\}_{a\in\ovl{1,{\rm dim}_{\mathbb{R}}\,\gt{d}}}$,\ and those of $\,\gt{r}_{\rm vac}\,$ as $\,\{t_{\ovl a}\}_{\ovl a\in\ovl{1,{\rm dim}_{\mathbb{R}}\,\gt{r}_{\rm vac}}}$.\ In the specific examples listed above, $\,\gt{t}\,$ is the linear span of supertranslations, and so -- in particular -- it is promoted to the rank of a sub-Lie superalgebra in the Minkowskian setting. Taking into account the reasoning presented in Refs.\,\cite{West:2000hr,Gomis:2006wu} and the results derived therefrom, we introduce (local) coordinates $\,(\xi^\mu,\phi^a)^{(\mu,a)\in\ovl{1,{\rm dim}_{\mathbb{R}}\,\gt{t}}\x\ovl{1,{\rm dim}_{\mathbb{R}}\,\gt{d}}}\,$ and subsequently parametrise the aforementioned section $\,\gamma\in\Gamma({\rm G})\,$ as \begin{eqnarray}\nn \gamma(\xi,\phi)={\rm e}^{\xi^\mu\,t_\mu}\cdot{\rm e}^{\phi^a\,t_a}\,. \end{eqnarray} Accordingly, the lagrangean field of the super-$\sigma$-model is of the form \begin{eqnarray}\nn {\mathbb{X}}\equiv(\xi^\mu,\phi^a)(\cdot)\ :\ \Omega\longrightarrow{\rm G}/{\rm H}\,. \end{eqnarray} A word is due at this stage concerning the physical meaning of the various components of this field. 
Thus, the Gra\ss mann-even components $\,\{x^I\equiv\xi^I\}_{I\in\ovl{1,{\rm dim}_{\mathbb{R}}\,\gt{t}^{(0)}}}\,$ are to be thought of as coordinates on a physical spacetime $\,M\subset\mathscr{M}\,$ in which the super-$p$-brane propagates in a manner dictated by the Green--Schwarz super-$\sigma$-model -- this explains the appearance of coordinates corresponding to vacuum-symmetry directions $\,\gt{t}_{\rm vac}\,$ in the above parametrisation of the section. The Gra\ss mann-odd components $\,\{\xi^\a\}_{\a\in\ovl{1,{\rm dim}_{\mathbb{R}}\,\gt{t}^{(1)}}}\,$ map the spinorial (super-charge) directions. In both cases, fixing the components in the directions of $\,\gt{h}\,$ explicitly introduced in the parametrisation is tantamount to fixing the gauge of the local symmetry $\,{\rm H}$.\ The remaining components $\,\{\phi^a\}_{a\in\ovl{1,{\rm dim}_{\mathbb{R}}\,\gt{d}}}\,$ model certain Goldstone (or pure-gauge) degrees of freedom that are ultimately to be integrated out with the help of their field equations and will be switched on or off depending on the geometric phenomena that we want to capture in our formulation of the dynamics. While purely auxiliary in nature, they serve the important purpose of elucidating the very structure of the super-$\sigma$-model and of the geometric mechanism (a variant of the so-called inverse Higgs effect of \Rcite{Ivanov:1975zq}) that leads to the emergence of separate metric and topological contributions to its action functional out of a purely topological (or higher gerbe-theoretic) expression on the larger supermanifold coordinatised by the $\,\xi^\mu\,$ {\it and} the $\,\phi^a$.\ These remarks will be clarified further in Sec.\,\ref{sub:HPGS} and when we pass to the analysis of concrete models. 
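To fix ideas, we record how the above decompositions specialise in the simplest, super-Minkowskian example (the identification of the vacuum subalgebra below pertains to the flat super-$p$-brane vacuum and serves illustrative purposes only):

```latex
% Super-Minkowskian illustration: for \gt{g} the N=1 super-Poincare
% superalgebra in d spacetime dimensions,
\[
 \gt{t}^{(0)}=\bigoplus_{I=1}^{d}\,\langle P_I\rangle\,,\qquad
 \gt{t}^{(1)}=\bigoplus_{\a}\,\langle Q_\a\rangle\,,\qquad
 \gt{r}=\gt{so}(d-1,1)\,,
\]
% and, for the flat super-p-brane vacuum extended along the directions
% 0,1,...,p, one takes \gt{r}_{\rm vac}=\gt{so}(p,1)\oplus\gt{so}(d-p-1)
% (worldvolume and transverse rotations), with \gt{d} spanned by the
% broken boost-type generators mixing the two families of directions,
% and with \gt{t}^{(0)}_{\rm vac} spanned by the P_{\unl A}, \unl A=0,1,...,p.
```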
Parenthetically, and with view to later applications, we note that the pullbacks of the Maurer--Cartan 1-forms can be decomposed in the (local) coordinate basis as \begin{eqnarray}\label{eq:Vielbeine} \gamma^*\theta_{\rm L}^A(\xi,\phi)=E^A_{\ \mu}(\xi,\phi)\,{\mathsf d}\xi^\mu+E^A_{\ a}(\xi,\phi)\,{\mathsf d}\phi^a\,. \end{eqnarray} The functional coefficients $\,E^A_{\ \mu}\,$ and $\,E^A_{\ a}\,$ are traditionally termed Vielbeine. The important consequence of our assumptions concerning the structure of the underlying Lie superalgebra $\,\gt{g}\,$ is the general form of the distinguished spacetime components: \begin{eqnarray}\nn \gamma^*\theta_{\rm L}^I(\xi,\phi)=E^I_{\ \mu}(\xi,\phi)\,{\mathsf d}\xi^\mu\,. \end{eqnarray} In what follows, we adopt the fairly natural hypothesis\footnote{Recall that the pullback 1-forms are to span the cotangent space of the supertarget $\,\mathscr{M}\,$ at each point. The hypothesis is trivially satisfied in the super-Minkowskian setting, {\it cp.} Sec.\,\ref{sec:sMinktarget}.} that the truncated Vielbein $\,(\unl E^I_{\ J}(\xi,\phi)\equiv E^I_{\ J}(\xi,\phi))_{I,J\in\ovl{1,{\rm dim}_{\mathbb{R}}\,\gt{t}^{(0)}}}\,$ admits an inverse $\,(\unl E^{-1\,I}_{\quad\ J}(\xi,\phi))_{I,J\in\ovl{1,{\rm dim}_{\mathbb{R}}\,\gt{t}^{(0)}}}$.\ This will be instrumental in the canonical analysis to follow. 
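For orientation, we also recall the form which the decomposition \eqref{eq:Vielbeine} takes in the super-Minkowskian setting (with a conventional normalisation of the odd-odd bracket), in which the invertibility hypothesis is manifestly satisfied:

```latex
% Super-Minkowskian illustration (normalisations conventional): for the
% section \gamma(x,\theta)={\rm e}^{x^I\,P_I+\theta^\a\,Q_\a} one finds
\[
 \gamma^*\theta^I_{\rm L}={\mathsf d} x^I-\tfrac{1}{2}\,\ovl\theta\,\Gamma^I\,{\mathsf d}\theta\,,
 \qquad\qquad
 \gamma^*\theta^\a_{\rm L}={\mathsf d}\theta^\a\,,
\]
% whence E^I_{\ J}=\delta^I_{\ J} and E^I_{\ \a}=-\tfrac{1}{2}\,(\ovl\theta\,\Gamma^I)_\a,
% so that the truncated Vielbein \unl E^I_{\ J} is the identity matrix
% and its inverse exists trivially, in keeping with the footnote above.
```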
The first type of induced action of $\,{\rm G}\ni g\,$ on the supertarget $\,{\rm G}/{\rm H}\,$ can be read off the (local, in general) multiplication rule \begin{eqnarray}\label{eq:cosetlreg} g\cdot\gamma(\xi,\phi)=:\gamma\bigl(\xi_l(\xi,\phi,g),\phi_l(\xi,\phi,g)\bigr)\cdot h_l(\xi,\phi,g) \end{eqnarray} in which $\,(\xi_l,\phi_l)\ :\ {\rm G}/{\rm H}\x{\rm G}\longrightarrow{\rm G}/{\rm H}\,$ is a certain (non-linear, in general) mapping, and (the inverse of) the last element $\,h_l(\xi,\phi,g)\in{\rm H}\,$ translates the product $\,g\cdot\gamma(\xi,\phi)\,$ back into the section $\,\gamma$,\ defining therewith an effective (non-linear) transformation \begin{eqnarray}\nn \widetilde\ell_\cdot\ :\ {\rm G}\x\mathscr{M}\longrightarrow\mathscr{M}\ :\ \bigl(g,(\xi,\phi)\bigr)\longmapsto\bigl(\xi_l(\xi,\phi,g),\phi_l(\xi,\phi,g)\bigr) \end{eqnarray} on the base $\,\mathscr{M}\equiv{\rm G}/{\rm H}\,$ of the principal ${\rm H}$-bundle. By construction, this action captures the rigid symmetry of the super-$\sigma$-model. Besides it, the theory has {\it infinitesimal} gauge symmetries that can be modelled -- after Refs.\,\cite{McArthur:1999dy,Gomis:2006wu} -- on infinitesimal right-regular translations of the lagrangean section in the directions of the subspace $\,\gt{t}\subset\gt{g}\,$ subject to certain constraints, to be established through a direct calculation later on. 
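The role of the compensating element $\,h_l\,$ is conveniently visualised in the reductive super-Minkowskian setting (the formulae below are schematic and serve illustration only):

```latex
% Illustration: for {\rm G} the super-Poincare group, {\rm H}=Spin(d-1,1)
% and \gamma the exponential section introduced previously, an element
% h\in{\rm H} acts linearly, with the compensator h_l=h,
\[
 h\cdot\gamma(x,\theta)=\gamma\bigl(L(h)\,x,S(h)\,\theta\bigr)\cdot h\,,
 \qquad\qquad h_l(x,\theta,h)=h\,,
\]
% where L(h) and S(h) denote the vector and spinor representations of h,
% whereas a supertranslation acts with a trivial (unital) compensator --
% non-linearities of \widetilde\ell appear only for group elements and
% supertargets mixing \gt{t}\oplus\gt{d} with \gt{h}.
```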
In the meantime, we consider unconstrained translations \begin{eqnarray}\nn \gamma(\xi,\phi)\cdot{\rm e}^{\zeta^\mu\,t_\mu}=:\gamma\bigl(\xi+\zeta^\mu\,\delta\xi_\mu(\xi,\phi)+\mathscr{O}(\zeta^2),\phi+\zeta^\mu\,\delta\phi_\mu(\xi,\phi)+\mathscr{O}(\zeta^2)\bigr)\cdot{\rm e}^{\zeta^\mu\,\delta\chi_\mu^{\ovl a}(\xi,\phi)\,t_{\ovl a}} \end{eqnarray} in which $\,\zeta^\mu\,(\delta\xi_\mu,\delta\phi_\mu)\ :\ {\rm G}/{\rm H}\longrightarrow{\mathsf T}_{(\xi,\phi)}({\rm G}/{\rm H})\,$ is a tangential shift of the local coordinates $\,(\xi^\mu,\phi^a)\,$ in the supertarget that determines a realisation of the algebraic structure\footnote{As will be shown later, the structure is far from trivial: it defines a Lie superalgebra only upon restriction to classical field configurations.} generated by the constrained maps $\,\widehat\zeta(\cdot)=\zeta^\mu(\cdot)\otimes_{\mathbb{R}} t_\mu\ :\ \Omega\longrightarrow\gt{t}$,\ whose set shall be denoted as $\,\mathscr{F}(\Omega,\gt{t})\,$ in what follows, on the lagrangean fields of the super-$\sigma$-model of the form \begin{eqnarray}\nn \widetilde\varrho_\cdot\ &:&\ C^\infty(\Omega,\mathscr{M})\x\mathscr{F}(\Omega,\gt{t})\longrightarrow C^\infty(\Omega,\mathscr{M})\cr\cr &:&\ \bigl(\bigl(\xi(\cdot),\phi(\cdot)\bigr),\widehat\zeta(\cdot)\bigr)\longmapsto\bigl(\xi(\cdot)+\zeta^\mu(\cdot)\,\delta\xi_\mu\bigl(\xi(\cdot),\phi(\cdot)\bigr),\phi(\cdot)+\zeta^\mu(\cdot)\,\delta\phi_\mu\bigl(\xi(\cdot),\phi(\cdot)\bigr)\bigr)\,. 
\end{eqnarray} While the nature of the assumed rigid symmetry corroborates the idea of formulating the lagrangean super-$\sigma$-model in terms of (left-)invariant forms on $\,{\rm G}$,\ the presence of a gauge symmetry of the type specified turns out to fix the relative normalisation of the metric and topological terms as the latter, defined in terms of a non-invariant de Rham primitive of $\,\underset{\tx{\ciut{(p+2)}}}{\chi}$,\ is not even pseudo-invariant\footnote{That is invariant up to an additive de Rham-exact correction.} under infinitesimal local right translations. An elaboration of these observations requires that a specific superbackground be chosen, and so we postpone it to later sections. \subsection{The Nambu--Goto formulation of the Green--Schwarz super-$\sigma$-model}\label{sec:NGform} In the most common formulation of the super-$\sigma$-model, the worldvolume of the super-$p$-brane is embedded entirely in a super-extension of the physical metric spacetime $\,(M,{\rm g})$,\ with (local) coordinates $\,\{{\mathbb{X}}^\mu\equiv(\xi^\mu,0)\equiv\xi^\mu\}_{\mu\in\ovl{1,{\rm dim}_{\mathbb{R}}\,\gt{t}}}$,\ and has no (Gra\ss mann-even) Goldstone degrees of freedom, \begin{eqnarray}\nn \forall_{a\in\ovl{1,{\rm dim}_{\mathbb{R}}\,\gt{d}}}\ :\ \phi^a=0\,. \end{eqnarray} The influence of the background gravitational field on the dynamics of the super-$p$-brane is encoded in the action functional giving just the metric volume of the embedded worldvolume $\,\Omega$,\ that is \begin{eqnarray}\label{eq:SmetrNG} S^{({\rm NG})}_{{\rm metr,GS},p}[\xi]&=&\int_\Omega\,\Vol(\Omega)\,\sqrt{{\rm det}_{(p+1)}\,\bigl({\rm g}_{IJ}(x)\,\bigl(\partial_i\righthalfcup(\gamma\circ\xi)^*\theta^I_{\rm L}\bigr)\,\bigl(\partial_j\righthalfcup(\gamma\circ\xi)^*\theta^J_{\rm L}\bigr)\bigr)}\cr\cr &\equiv&\int_\Omega\,\Vol(\Omega)\,\mathscr{L}^{({\rm NG})}_{{\rm metr,GS},p}(\xi,\partial\xi) \end{eqnarray} Thus, the metric term alone favours minimal hypersurfaces. 
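The structure of the metric term is laid bare by the simplest example, the superparticle, written below with the illustrative super-Minkowskian Vielbeine of the previous remarks (signature conventions are chosen for the sake of the example):

```latex
% Illustration for p=0 (the superparticle) on super-Minkowski space:
% with the worldline coordinate \tau, the metric term \eqref{eq:SmetrNG}
% becomes the invariant length of the worldline,
\[
 S^{({\rm NG})}_{{\rm metr,GS},0}[x,\theta]
  =\int\,{\mathsf d}\tau\,\sqrt{\eta_{IJ}\,\Pi^I_\tau\,\Pi^J_\tau}\,,
 \qquad\qquad
 \Pi^I_\tau=\dot x^I-\tfrac{1}{2}\,\ovl\theta\,\Gamma^I\,\dot\theta\,,
\]
% computed with the supersymmetry-invariant momenta \Pi^I_\tau rather
% than with the bare velocities \dot x^I -- it is the pullbacks
% (\gamma\circ\xi)^*\theta^I_{\rm L}, and not the naive differentials
% {\mathsf d} x^I, that enter the determinant under the square root.
```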
The condition of minimality is deformed in the presence of the topological term \begin{eqnarray}\label{eq:StopNG} S^{({\rm NG})}_{{\rm top,GS},p}[\xi]:=\int_\Omega\,\xi^*\bigl({\mathsf d}^{-1}\underset{\tx{\ciut{(p+2)}}}{\chi}\bigr) \end{eqnarray} Together, the two terms yield the Green--Schwarz action functional in the Nambu--Goto form \begin{eqnarray}\label{eq:NGGS} S^{({\rm NG})}_{{\rm GS},p}[\xi]=S^{({\rm NG})}_{{\rm metr,GS},p}[\xi]+S^{({\rm NG})}_{{\rm top,GS},p}[\xi] \end{eqnarray} As a lagrangean field theory, the Green--Schwarz super-$\sigma$-model in the above form admits a canonical description in terms of a presymplectic classical space of states and a Poisson algebra of hamiltonians over it. Such a description can be obtained through adaptation to the supersymmetric setting of interest of the formalism of the covariant classical field theory invoked in Sec.\,\ref{sec:Bose}. We begin by associating with the lagrangean density of the action functional of \Reqref{eq:NGGS}, $\,\mathscr{L}^{({\rm NG})}_{{\rm GS},p}(\xi^\mu,\partial_i\xi^\nu)$,\ a function on the first-jet bundle $\,J^1\mathscr{F}\,$ of its (trivial) covariant configuration bundle $\,{\rm pr}_1\ :\ \mathscr{F}\equiv\Omega\x\mathscr{M}\longrightarrow\Omega\,$ which in the standard adapted coordinates $\,(\xi^\mu,t^\nu_i)\,$ on the fibre of $\,J^1\mathscr{F}\,$ is given by $\,\mathscr{L}^{({\rm NG})}_{{\rm GS},p}(\xi^\mu,t_i^\nu)$.\ The latter enters the definition of the Poincar\'e--Cartan form of the model (written in the obvious shorthand notation, with the variational derivatives with respect to Gra\ss mann-odd variables understood to be the {\it left} derivatives and so marked accordingly) \begin{eqnarray}\nn \Theta^{({\rm NG})}_{{\rm GS},p}(\xi,t)=\left(\mathscr{L}^{({\rm NG})}_{{\rm GS},p}(\xi,t)-t^\mu_i\,\tfrac{\overrightarrow\delta\mathscr{L}^{({\rm NG})}_{{\rm GS},p}}{\delta t^\mu_i}(\xi,t)\right)\,\Vol(\Omega)+\delta\xi^\mu\,\tfrac{\overrightarrow\delta\mathscr{L}^{({\rm NG})}_{{\rm 
GS},p}}{\delta t^\mu_i}(\xi,t)\wedge\left(\partial_i\righthalfcup\Vol(\Omega)\right) \,. \end{eqnarray} It has the fundamental property (that is straightforward to demonstrate): The extremals of the functional \begin{eqnarray}\nn S_{\Theta^{({\rm NG})}_{{\rm GS},p}}\ :\ \Gamma(J^1\mathscr{F})\longrightarrow{\mathbb{R}}\ :\ \Psi\longmapsto\int_\Omega\,\Psi^*\Theta \end{eqnarray} are first jets of extremals of $\,S^{({\rm NG})}_{{\rm GS},p}$.\ As in the Gra\ss mann-even setting, we obtain a presymplectic form $\,\Omega^{({\rm NG})}_{{\rm GS},p}\,$ on the space of states $\,{\mathsf P}^{({\rm NG})}_{{\rm GS},p}\,$ of the super-$\sigma$-model \begin{eqnarray}\nn \Omega^{({\rm NG})}_{{\rm GS},p}[\Psi\mathord{\restriction}_\mathscr{C}]=\int_\mathscr{C}\,\left(\Psi\mathord{\restriction}_\mathscr{C}\right)^*\delta\Theta^{({\rm NG})}_{{\rm GS},p}\,, \end{eqnarray} the space being parameterised by Cauchy data $\,\Psi\mathord{\restriction}_\mathscr{C}\equiv(\unl\xi^\mu,\pi_I)\,$ of extremals $\,\Psi\,$ of $\,S^{({\rm NG})}_{{\rm GS},p}\,$ supported on a model Cauchy hypersurface $\,\mathscr{C}\subset\Omega$.\ Here, the $\,\pi_I\,$ correspond to (spacetime) components of the Vielbein-transformed kinetic momentum $\,\tfrac{\delta\mathscr{L}^{({\rm NG})}_{{\rm metr,GS},p}}{\delta t^I_0}(\xi,t)\,\unl E^{-1\,I}_{\quad\ J}(\xi)$.\ The presymplectic form has the universal structure \begin{eqnarray}\nn \Omega^{({\rm NG})}_{{\rm GS},p}=\delta\vartheta+\pi^*_{{\rm GS},p}\int_\mathscr{C}\,{\rm ev}^*\underset{\tx{\ciut{(p+2)}}}{\chi}\,, \end{eqnarray} determined by the momentum 1-form \begin{eqnarray}\nn \vartheta[\unl\xi,\pi]:=\int_\mathscr{C}\,\Vol(\mathscr{C})\,\pi_I(\cdot)\,\gamma^*\theta_{\rm L}^I\bigl(\unl\xi(\cdot)\bigr)\,, \end{eqnarray} on the (partially symplectically reduced) space of states given by the cotangent bundle \begin{eqnarray}\label{eq:canproj-GS} \pi_{{\rm GS},p}\ :\ {\mathsf P}^{({\rm NG})}_{{\rm GS},p}={\mathsf 
T}^*_0\mathscr{M}_\mathscr{C}\longrightarrow\mathscr{M}_\mathscr{C} \end{eqnarray} of the space $\,\mathscr{M}_\mathscr{C}\,$ of (smooth) maps from the model Cauchy hypersurface $\,\mathscr{C}\,$ to the target supermanifold $\,\mathscr{M}\,$ (generalising the space $\,\mathscr{M}\,$ (for $\,p=0$) and the loop space $\,{\mathsf L}\mathscr{M}\,$ (for $\,p=1$)) with the Gra\ss mann-odd component of the fibre projected out\footnote{The latter component is contained in the characteristic distribution of $\,\Omega_{{\rm GS},p}$.}. Above, \begin{eqnarray}\nn {\rm ev}\ :\ \mathscr{C}\x\mathscr{M}_\mathscr{C}\longrightarrow\mathscr{M}\ :\ (\varphi,\unl\xi)\longmapsto\unl\xi(\varphi)\,. \end{eqnarray} is the standard evaluation map. As in the Gra\ss mann-even setting, the 2-form enables us to define a Poisson bracket on the algebra of smooth functions on $\,{\mathsf P}^{({\rm NG})}_{{\rm GS},p}$,\ and -- among other things -- analyse the phase-space realisation of symmetries. To this end, we consider, for every element $\,X=X^A\,t_A\,$ of the Lie superalgebra $\,\gt{g}$, the associated \textbf{fundamental vector field} \begin{eqnarray}\nn \mathcal{K}_X\equiv X^A\,\mathcal{K}_A\in\Gamma({\mathsf T}\mathscr{M})\,, \end{eqnarray} expressible in terms of the basis fields $\,\mathcal{K}_A\,$ that act on functions $\,f\,$ on $\,\mathscr{M}\ni x\,$ as \begin{eqnarray}\label{eq:fundvec} (\mathcal{K}_A f)(x)=\tfrac{{\mathsf d}\ }{{\mathsf d} t}\mathord{\restriction}_{t=0}f\bigl(\widetilde\ell_{{\rm e}^{t\,t_A}}(x)\bigr)\,. 
\end{eqnarray} As mentioned above, in order that $\,\gt{g}\,$ define a global symmetry of $\,S^{({\rm NG})}_{{\rm GS},p}$,\ we must have (as we, indeed, do) a ${\rm G}$-invariant GS $(p+2)$-form with, for every $\,X\,$ as above, \begin{eqnarray}\label{eq:fundform} \mathcal{K}_X\righthalfcup\underset{\tx{\ciut{(p+2)}}}{\chi}=-{\mathsf d}\kappa_X \end{eqnarray} for some $\,\kappa_X\in\Omega^p(\mathscr{M})$.\ We then find the canonical lift of $\,\mathcal{K}_X\,$ to $\,{\mathsf T}{\mathsf P}^{({\rm NG})}_{{\rm GS},p}\,$ as follows. Write the lift as \begin{eqnarray}\nn \widetilde\mathcal{K}_X[\unl\xi,\pi]=\int_\mathscr{C}\,\Vol(\mathscr{C})\,\bigl[\mathcal{K}^\mu\bigl(\unl\xi(\cdot)\bigr)\,\tfrac{\overrightarrow\delta\ }{\delta\unl\xi^\mu(\cdot)}+\Delta^X_I\bigl(\unl\xi(\cdot),\pi(\cdot)\bigr)\,\tfrac{\delta\ }{\delta\pi_I(\cdot)}\bigr] \end{eqnarray} to obtain \begin{eqnarray}\nn 0\stackrel{!}{=}\pLie{\widetilde\mathcal{K}_X}\vartheta[\unl\xi,\pi]=\int_\mathscr{C}\,\Vol(\mathscr{C})\,\bigl[\Delta^X_I\bigl(\unl\xi(\cdot),\pi(\cdot)\bigr)\,\gamma^*\theta_{\rm L}^I\bigl(\unl\xi(\cdot)\bigr)+\pi_I(\cdot)\,\bigl(\pLie{\mathcal{K}_X}\gamma^*\theta_{\rm L}^I\bigr)\bigl(\unl\xi(\cdot)\bigr)\bigr]\,, \end{eqnarray} whence, upon invoking the invertibility of the truncated Vielbein, we compute (dropping the implicit dependence on the point in $\,\mathscr{C}\,$ for the sake of brevity) \begin{eqnarray}\nn \Delta^X_I(\unl\xi,\pi)&=&-\pi_J\,\unl E^{-1\,K}_{\quad\ I}(\unl\xi)\,\tfrac{\delta\ }{\delta\unl\xi^K}\righthalfcup\pLie{\mathcal{K}_X}\gamma^*\theta_{\rm L}^J(\unl\xi)=-\pi_J\,\unl E^{-1\,K}_{\quad\ I}(\unl\xi)\,\bigl(\mathcal{K}_X\righthalfcup\delta E^J_{\ K}-[\mathcal{K}_X,\tfrac{\delta\ }{\delta\unl\xi^K}]\righthalfcup\gamma^*\theta_{\rm L}^J\bigr)(\unl\xi)\cr\cr &=&-\pi_J\,\unl E^{-1\,K}_{\quad\ I}(\unl\xi)\,\bigl(\mathcal{K}_X^\mu\,\overrightarrow\partial_\mu E^J_{\ K}+E^J_{\ \mu}\,\partial_K\mathcal{K}^\mu_X\bigr)(\unl\xi)\,. 
\end{eqnarray} Clearly, for the above constraints to be solvable thus, we need that \begin{eqnarray}\nn \pLie{\mathcal{K}_X}\gamma^*\theta_{\rm L}^I\in\bigoplus_{K\in\ovl{1,{\rm dim}_{\mathbb{R}}\,\gt{t}^{(0)}}}\,{\mathbb{R}}\,\tfrac{\delta\ }{\delta\xi^K}\,. \end{eqnarray} Note, in particular, that in the case of diffeomorphisms of $\,\mathscr{M}\,$ induced by left-regular translations on $\,{\rm G}\,$ with a trivial (unital) correction $\,h_l(\xi,\phi,g)\,$ in the transformation law \eqref{eq:cosetlreg} we obtain \begin{eqnarray}\nn \Delta^{X^{(h_l=0)}}_I(\unl\xi,\pi)=0\,. \end{eqnarray} The resulting Noether charge $\,Q_X$,\ defined by the relation \begin{eqnarray}\nn \widetilde\mathcal{K}_X\righthalfcup\Omega^{({\rm NG})}_{{\rm GS},p}=:-\delta Q_X\,, \end{eqnarray} assumes the universal form \begin{eqnarray}\nn Q_X[\unl\xi,\pi]=\int_\mathscr{C}\,\Vol(\mathscr{C})\,\pi_I(\cdot)\,(\mathcal{K}_X\righthalfcup\gamma^*\theta^I_{\rm L})\bigl(\xi(\cdot)\bigr)+\int_\mathscr{C}\,{\rm ev}^*\kappa_X\bigl(\xi(\cdot)\bigr)\,. 
\end{eqnarray} The Poisson bracket of two such hamiltonians reads \begin{eqnarray} &&\{Q_{X_1},Q_{X_2}\}_{\Omega^{({\rm NG})}_{{\rm GS},p}}\cr\cr &=&-\widetilde\mathcal{K}_{X_2}\righthalfcup\bigl[\int_\mathscr{C}\,\Vol(\mathscr{C})\,\bigl(\bigl(\mathcal{K}_{X_1}\righthalfcup\gamma^*\theta_{\rm L}^I\bigr)\bigl(\unl\xi(\cdot)\bigr)\,\delta\pi_I(\cdot)+\pi_I(\cdot)\,\delta\bigl(\mathcal{K}_{X_1}\righthalfcup\gamma^*\theta_{\rm L}^I\bigr)\bigl(\unl\xi(\cdot)\bigr)\bigr)\cr\cr &&\hspace{1cm}+\delta\pi_{{\rm GS},p}^*\int_\mathscr{C}\,\Vol(\mathscr{C})\,{\rm ev}^*\kappa_{X_1}\bigl(\unl\xi(\cdot)\bigr)\bigr]\cr\cr &=&Q_{[X_1,X_2]}+\int_\mathscr{C}\,\Vol(\mathscr{C})\,\bigl[-\Delta^{X_2}_I\bigl(\unl\xi(\cdot),\pi(\cdot)\bigr)\,\bigl(\mathcal{K}_{X_1}\righthalfcup\gamma^*\theta_{\rm L}^I\bigr)\bigl(\unl\xi(\cdot)\bigr)-\pi_I(\cdot)\,\bigl(\mathcal{K}_{X_1}\righthalfcup\pLie{\mathcal{K}_{X_2}}\gamma^*\theta^I_{\rm L}\bigr)\bigl(\unl\xi(\cdot)\bigr)\bigr]\cr\cr &&\hspace{1.35cm}+\pi_{{\rm GS},p}^*\int_\mathscr{C}\,{\rm ev}^*(\pLie{\mathcal{K}_{X_1}}\kappa_{X_2}-\kappa_{[X_1,X_2]})\,,\label{eq:sIsoNoethPoiss} \end{eqnarray} and so, in particular, \begin{eqnarray}\nn \{Q_{X_1^{(h_l=0)}},Q_{X_2^{(h_l=0)}}\}_{\Omega^{({\rm NG})}_{{\rm GS},p}}=Q_{[X_1^{(h_l=0)},X_2^{(h_l=0)}]}+\pi_{{\rm GS},p}^*\int_\mathscr{C}\,{\rm ev}^*(\pLie{\mathcal{K}_{X_1^{(h_l=0)}}}\kappa_{X_2^{(h_l=0)}}-\kappa_{[X_1^{(h_l=0)},X_2^{(h_l=0)}]})\,. 
\end{eqnarray} It is worth pointing out that the latter integrand is closed as \begin{eqnarray}\nn -{\mathsf d}\kappa_{[X_1,X_2]}&=&\mathcal{K}_{[X_1,X_2]}\righthalfcup\underset{\tx{\ciut{(p+2)}}}{\chi}=[\mathcal{K}_{X_1},\mathcal{K}_{X_2}]\righthalfcup\underset{\tx{\ciut{(p+2)}}}{\chi}=\pLie{\mathcal{K}_{X_1}}(\mathcal{K}_{X_2})\righthalfcup\underset{\tx{\ciut{(p+2)}}}{\chi}\cr\cr &=&\pLie{\mathcal{K}_{X_1}}(\mathcal{K}_{X_2}\righthalfcup\underset{\tx{\ciut{(p+2)}}}{\chi})-\mathcal{K}_{X_2}\righthalfcup\pLie{\mathcal{K}_{X_1}}\underset{\tx{\ciut{(p+2)}}}{\chi}=-\pLie{\mathcal{K}_{X_1}}{\mathsf d}\kappa_{X_2}=-{\mathsf d}\pLie{\mathcal{K}_{X_1}}\kappa_{X_2}\,. \end{eqnarray} Therefore, the realisation of the (left-regular) translational symmetry is hamiltonian iff the $p$-forms $\,\kappa_A\equiv\kappa_{t_A}\,$ can be chosen such that the condition \begin{eqnarray}\nn \pLie{\mathcal{K}_A}\kappa_B=f_{AB}^{\quad\ C}\,\kappa_C+{\mathsf d} D_{AB} \end{eqnarray} is satisfied for some $\,D_{AB}\in\Omega^{p-1}(M),\ A,B\in\ovl{1,{\rm dim}_{\mathbb{R}}\,\gt{g}}\,$ (as long as $\,p>0$). Even then there may occur a classical anomaly structurally identical with the one encountered in the Gra\ss mann-even setting, {\it cp.}\ \Reqref{eq:curranom}. This suggests a simple geometric model, over the supertarget, of the anomalous Poisson--Lie algebra of Noether hamiltonians associated with left-regular translations of the distinguished type considered, extending (as is clearly necessary) the Lie algebra of vector fields on $\,\mathscr{M}$.\ The existence of such a model is to be anticipated on the basis of the symmetry analysis of the bosonic $\sigma$-model carried out in \Rcite{Suszek:2012ddg} and recalled in Sec.\,\ref{sec:Bose}. 
Thus, in analogy with the purely Gra\ss mann-even case, consider {\bf the fundamental section} \begin{eqnarray}\nn \gt{K}_X:=(\mathcal{K}_X,\kappa_X)\in\Gamma(\mathcal{E}^{1,p}\mathscr{M}) \end{eqnarray} of the generalised tangent bundle \begin{eqnarray}\nn \mathcal{E}^{1,p}\mathscr{M}:={\mathsf T}\mathscr{M}\oplus\bigwedge{}^p{\mathsf T}^*\mathscr{M}\,. \end{eqnarray} For any two such Gra\ss mann-homogeneous sections $\,\gt{K}_i:=(\mathcal{K}_i,\kappa_i),\ i\in\{1,2\}\,$ of the latter space, of the respective parities $\,\widetilde\gt{K}_i$,\ we may define a $\underset{\tx{\ciut{(p+2)}}}{\chi}$-twisted (in the sense of \Rcite{Severa:2001qm}) Vinogradov(--Courant)-type superbracket which takes the form \begin{eqnarray} \sVbra{(\mathcal{K}_1,\kappa_1)}{(\mathcal{K}_2,\kappa_2)}^{\underset{\tx{\ciut{(p+2)}}}{\chi}}&:=&\bigl([\mathcal{K}_1,\mathcal{K}_2\},\pLie{\mathcal{K}_1}\kappa_2-(-1)^{\widetilde\gt{K}_1\,\widetilde\gt{K}_2}\,\pLie{\mathcal{K}_2}\kappa_1-\tfrac{1}{2}\,{\mathsf d}\bigl(\mathcal{K}_1\righthalfcup\kappa_2-(-1)^{\widetilde\gt{K}_1\,\widetilde\gt{K}_2}\,\mathcal{K}_2\righthalfcup\kappa_1\bigr)\cr\cr &&\hspace{1.5cm}+\mathcal{K}_1\righthalfcup\mathcal{K}_2\righthalfcup\underset{\tx{\ciut{(p+2)}}}{\chi}\bigr)\,,\label{eq:VBra} \end{eqnarray} with the Lie superbracket of vector fields defined as \begin{eqnarray}\nn [\mathcal{K}_1,\mathcal{K}_2\}:=\mathcal{K}_1\circ\mathcal{K}_2-(-1)^{\widetilde\gt{K}_1\,\widetilde\gt{K}_2}\,\mathcal{K}_2\circ\mathcal{K}_1\,, \end{eqnarray} {\it cp.}\ Refs.\,\cite{Vinogradov:1990un,Cabras:1992ex}. 
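In particular, for $\,p=1\,$ and for Gra\ss mann-even sections, the superbracket \eqref{eq:VBra} specialises to a familiar structure of generalised geometry:

```latex
% For p=1 and Gra\ss mann-even sections (\mathcal{K}_i,\kappa_i), with
% \kappa_i\in\Omega^1(\mathscr{M}), all the sign factors (-1)^{...}
% equal +1 and \eqref{eq:VBra} reduces to the \chi-twisted Courant
% bracket
\[
 \bigl([\mathcal{K}_1,\mathcal{K}_2],\ \pLie{\mathcal{K}_1}\kappa_2-\pLie{\mathcal{K}_2}\kappa_1
  -\tfrac{1}{2}\,{\mathsf d}\bigl(\mathcal{K}_1\righthalfcup\kappa_2-\mathcal{K}_2\righthalfcup\kappa_1\bigr)
  +\mathcal{K}_1\righthalfcup\mathcal{K}_2\righthalfcup\underset{\tx{\ciut{(3)}}}{\chi}\bigr)\,,
\]
% of the type encountered in the study of the bosonic \sigma-model with
% a 3-form background.
```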
We then readily find the formula \begin{eqnarray}\nn \Vbra{\gt{K}_{X_1}}{\gt{K}_{X_2}}^{\underset{\tx{\ciut{(p+2)}}}{\chi}}=\gt{K}_{[X_1,X_2]}+(0,\a_{X_1,X_2})\,, \end{eqnarray} with the {\bf Lie anomaly super-$p$-form} \begin{eqnarray}\label{eq:Liean} \a_{X_1,X_2}:=\pLie{\mathcal{K}_{X_1}}\kappa_{X_2}-\kappa_{[X_1,X_2\}}-{\mathsf d}\corr{\gt{K}_{X_1},\gt{K}_{X_2}}\,, \end{eqnarray} written in terms of the following pairing of sections \begin{eqnarray}\nn \corr{\gt{K}_1,\gt{K}_2}:=\tfrac{1}{2}\,\bigl(\mathcal{K}_1\righthalfcup\kappa_2+(-1)^{\widetilde\gt{K}_1\,\widetilde\gt{K}_2}\,\mathcal{K}_2\righthalfcup\kappa_1\bigr)\,. \end{eqnarray} The last (manifestly exact) term in the formula for the anomaly, while absent from \Reqref{eq:sIsoNoethPoiss}, becomes visible in an analogous formula for the corresponding Noether currents. \subsection{The Hughes--Polchinski formulation of the Green--Schwarz super-$\sigma$-model}\label{sub:HPGS} There is an alternative to the standard (Nambu--Goto) formulation of the (super-)$\sigma$-model with a homogeneous space of a (super)group action as a (super)target that was originally conceived in \Rcite{Hughes:1986dn} and elaborated significantly in \Rcite{Gauntlett:1989qe}. Here, we use its full-fledged version and draw on an in-depth geometric understanding thereof worked out, in the context of immediate interest, in \Rcite{McArthur:1999dy,McArthur:2010zm} and Refs.\,\cite{West:2000hr,Gomis:2006xw,Gomis:2006wu}. The formulation introduces into the lagrangean density, among other fields, Goldstone fields for (some of) the global spacetime symmetries of the super-$\sigma$-model broken by the `vacuum' of the theory, {\it i.e.}, by the embedding of the membrane in the supertarget $\,\mathscr{M}\,$ described by a classical field configuration, and subjects them to the {\bf inverse Higgs mechanism} of \Rcite{Ivanov:1975zq} to remove some of them in a manner consistent with the surviving `vacuum' symmetries. 
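A simple example (the massive particle, written schematically and up to convention-dependent signs) may help to visualise the mechanism before we formalise it:

```latex
% Schematic illustration for a particle (p=0) in Minkowski space, with
% the boosts broken by the worldline vacuum and the corresponding
% Goldstone fields \phi^{\widehat S}: the condition of the vanishing of
% the broken translational components of the pulled-back Maurer--Cartan
% 1-form takes, to lowest order in \phi, the form
\[
 \dot x^{\widehat S}+\phi^{\widehat S}\,\dot x^0+\mathscr{O}(\phi^2)=0
 \qquad\Longrightarrow\qquad
 \phi^{\widehat S}=-\frac{\dot x^{\widehat S}}{\dot x^0}+\mathscr{O}(\phi^2)\,,
\]
% i.e., the boost Goldstone fields are expressed through derivatives of
% the surviving fields, as stated above.
```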
In this procedure, the Cartan geometry of the homogeneous space $\,\mathscr{M}\,$ employed in the construction of the action functional proves instrumental. Indeed, the mechanism boils down to the imposition of geometric constraints that restrict the tangents of classical field configurations to the directions within $\,{\rm G}/{\rm H}\equiv\mathscr{M}\,$ determined by the unbroken symmetries -- these constraints can be expressed as the conditions of the vanishing, on classical field configurations, of the pullbacks along the coset section $\,\gamma\,$ of those components of the Maurer--Cartan 1-form \eqref{eq:sMC} which correspond to the broken (infinitesimal) symmetries in $\,\gt{g}$.\ Technically, this means that the Goldstone fields eliminated in the procedure are expressed in terms of the remaining fields of the theory, and in particular -- in terms of the derivatives of the surviving Goldstone fields, whence also the name of the mechanism. The basic building blocks of the super-$\sigma$-model in the Hughes--Polchinski formulation are, as previously, the component Maurer--Cartan 1-forms $\,\gamma^*\theta^\mu_{\rm L},\ \mu\in\ovl{1,{\rm dim}_{\mathbb{R}}\,\gt{t}}$;\ in this case, however, we also introduce the Goldstone fields $\,\phi^a,\ a\in\ovl{1,{\rm dim}_{\mathbb{R}}\,\gt{d}}$.\ Consequently, in the standard notation \begin{eqnarray}\nn {\rm ad}_{t_a}(t_I)\equiv[t_a,t_I]=f_{a I}^{\ \ J}\,t_J\,,\qquad\qquad{\rm ad}_{t_a}(t_\a)\equiv[t_a,t_\a]=f_{a\a}^{\ \ \beta}\,t_\beta\,, \end{eqnarray} we obtain the factorised Vielbeine \begin{eqnarray} &\gamma^*\theta^I_{\rm L}(\xi,\phi)\otimes_{\mathbb{R}} t_I=e^I_{\ \mu}(\xi)\,{\mathsf d}\xi^\mu\otimes_{\mathbb{R}}{\rm e}^{-\phi^a\,{\rm ad}_{t_a}}(t_I)=e^J_{\ \mu}(\xi)\,\bigl({\rm e}^{\Lambda(\phi)}\bigr)_J^{\ I}\,{\mathsf d}\xi^\mu\otimes_{\mathbb{R}} t_I\,,\qquad\quad\Lambda(\phi)_I^{\ J}=\phi^a\,f_{a I}^{\ \ J}\,,&\cr\cr &\gamma^*\theta^\a_{\rm L}(\xi,\phi)\otimes_{\mathbb{R}} t_\a=\sigma^\a_{\ \mu}(\xi)\,{\mathsf d}\xi^\mu\otimes_{\mathbb{R}}{\rm e}^{-\phi^a\,{\rm ad}_{t_a}}(t_\a)=\sigma^\a_{\ \mu}(\xi)\,\bigl({\rm e}^{\widetilde\Lambda(\phi)}\bigr)_\beta^{\ \a}\,{\mathsf d}\xi^\mu\otimes_{\mathbb{R}} t_\a\,,\qquad\quad\widetilde\Lambda(\phi)_\a^{\ \beta}=\phi^a\,f_{a\a}^{\ \ \beta}\,.&\nonumber\\ \label{eq:factViel} \end{eqnarray} Prior to writing out the action functional in that formulation, we need to make further algebraic assumptions. Thus, upon introducing the auxiliary matrix \begin{eqnarray}\nn \widehat\Lambda(\phi)_\mu^{\ \nu}=\phi^a\,f_{a\mu}^{\ \ \nu}\,, \end{eqnarray} with the obvious decomposition \begin{eqnarray}\nn \widehat\Lambda(\phi)=\bigl(\Lambda(\phi),\widetilde\Lambda(\phi)\bigr)\in{\rm End}_{\mathbb{R}}\bigl(\gt{t}^{(0)}\bigr)\oplus{\rm End}_{\mathbb{R}}\bigl(\gt{t}^{(1)}\bigr) \subset{\rm End}_{\mathbb{R}}(\gt{t})\,, \end{eqnarray} we presuppose that the Green--Schwarz $(p+2)$-form \begin{eqnarray}\nn \underset{\tx{\ciut{(p+2)}}}{\chi}=\chi_{\mu_1\mu_2\cdots\mu_{p+2}}\,\gamma^*\theta^{\mu_1}_{\rm L}\wedge\gamma^*\theta^{\mu_2}_{\rm L}\wedge\cdots\wedge\gamma^*\theta^{\mu_{p+2}}_{\rm L}\,,\quad\chi_{\mu_1\mu_2\cdots\mu_{p+2}}\in{\mathbb{R}}\,, \end{eqnarray} has the invariance property \begin{eqnarray}\nn \chi_{\nu_1\nu_2\cdots\nu_{p+2}}\,\widehat\Lambda(\phi)_{\mu_1}^{\ \nu_1}\,\widehat\Lambda(\phi)_{\mu_2}^{\ \nu_2}\,\cdots\,\widehat\Lambda(\phi)_{\mu_{p+2}}^{\ \nu_{p+2}}=\chi_{\mu_1\mu_2\cdots\mu_{p+2}}\,, \end{eqnarray} that implies the identity \begin{eqnarray}\label{eq:GScocycnophi} \underset{\tx{\ciut{(p+2)}}}{\chi}(\xi,\phi)=\underset{\tx{\ciut{(p+2)}}}{\chi}(\xi,0)\,. 
\end{eqnarray} At this stage, it suffices to demand that the Lie-algebra action \eqref{eq:rvacontvac} integrate to a {\it unimodular} (adjoint) action of the Lie group $\,{\rm R}_{\rm vac}\,$ of $\,\gt{r}_{\rm vac}\,$ on $\,\gt{t}^{(0)}_{\rm vac}$,\ {\it i.e.}, \begin{eqnarray}\nn \forall_{r\in{\rm R}_{\rm vac}}\ :\ {\rm det}\,\bigl({\mathsf T}_e{\rm Ad}_r\mathord{\restriction}_{\gt{t}^{(0)}_{\rm vac}}\bigr)=1\,, \end{eqnarray} to be able to define the action functional: We take its topological term to be the same as in the Nambu--Goto formulation (which makes sense in consequence of \Reqref{eq:GScocycnophi}), \begin{eqnarray}\label{eq:StopHP} S^{({\rm HP})}_{{\rm top,GS},p}[\xi,\phi]=\int_\Omega\,(\gamma\circ{\mathbb{X}})^*\bigl({\mathsf d}^{-1}\underset{\tx{\ciut{(p+2)}}}{\chi}\bigr)=\int_\Omega\,(\gamma\circ\xi)^*\bigl({\mathsf d}^{-1}\underset{\tx{\ciut{(p+2)}}}{\chi}\bigr)\equiv S^{({\rm NG})}_{{\rm top,GS},p}[\xi] \end{eqnarray} and set -- for $\,{\mathbb{X}}=(\xi,\phi)\,$ -- \begin{eqnarray}\label{eq:SmetrHP} S^{({\rm HP})}_{{\rm metr,GS},p}[\xi,\phi]=\tfrac{1}{(p+1)!}\,\int_\Omega\,(\gamma\circ{\mathbb{X}})^*\underset{\tx{\ciut{(p+1)}}}{\beta}\hspace{-7pt}{}^{\rm (HP)}\,, \end{eqnarray} with \begin{eqnarray}\label{eq:HPcurv} \underset{\tx{\ciut{(p+1)}}}{\beta}\hspace{-7pt}{}^{\rm (HP)}=\epsilon_{\unl A_0\unl A_1\ldots\unl A_p}\,\bigl(\theta^{\unl A_0}_{\rm L}\wedge\theta^{\unl A_1}_{\rm L}\wedge\cdots\wedge\theta^{\unl A_p}_{\rm L}\bigr) \end{eqnarray} written in terms of the standard totally antisymmetric symbol \begin{eqnarray}\nn \epsilon_{\unl A_0\unl A_1\ldots\unl A_p}=\left\{\begin{array}{cl} {\rm sign}\left(\begin{array}{cccc}0 & 1 &\ldots& p \\ \unl A_0 & \unl A_1 & \ldots & \unl A_p\end{array}\right) & \tx{ if } \{\unl A_0,\unl A_1,\ldots,\unl A_p\}=\ovl{0,{\rm dim}_{\mathbb{R}}\,\gt{t}_{\rm vac}^{(0)}-1} \\ \\ 0 & \tx{ otherwise}\end{array}\right.\,, \end{eqnarray} so that -- altogether -- \begin{eqnarray}\label{eq:HPGS} S^{({\rm HP})}_{{\rm 
GS},p}[\xi,\phi]=S^{({\rm HP})}_{{\rm metr,GS},p}[\xi,\phi]+S^{({\rm HP})}_{{\rm top,GS},p}[\xi,\phi]\,. \end{eqnarray} There is a class of supertargets for which we may establish a direct relation between the two formulations of the Green--Schwarz super-$\sigma$-model, to wit, \begin{Prop}\label{prop:IHCart} Let $\,{\rm G}\,$ be a Lie supergroup with the Lie superalgebra $\,\gt{g}=\gt{t}\oplus\gt{r}\,$ and let $\,{\rm H}\subset{\rm G}\,$ be its Lie subgroup with the Lie algebra $\,\gt{h}$,\ the two algebras satisfying the relations described at the beginning of Sec.\,\ref{sec:stensor}. The Green--Schwarz super-$\sigma$-model on the homogeneous space $\,{\rm G}/{\rm H}\,$ in the Hughes--Polchinski formulation, determined by the action functional $\,S^{({\rm HP})}_{{\rm GS},p}\,$ \eqref{eq:HPGS} with the metric term \eqref{eq:SmetrHP} and the topological term \eqref{eq:StopHP}, is equivalent to the Green--Schwarz super-$\sigma$-model on the same supertarget in the Nambu--Goto formulation, defined by the action functional \eqref{eq:NGGS} with the metric term \eqref{eq:SmetrNG} for the metric $\,{\rm g}=\kappa^{(0)}\mathord{\restriction}_{\gt{t}^{(0)}_{\rm vac}\circlesign{\perp}\gt{e}^{(0)}}\,$ given by the restriction to $\,\gt{t}^{(0)}_{\rm vac}\circlesign{\perp}\gt{e}^{(0)}\,$ of the Cartan--Killing metric $\,\kappa^{(0)}\,$ on the Lie algebra $\,\gt{t}^{(0)}\oplus\gt{r}\equiv\gt{g}^{(0)}\,$ and the topological term \eqref{eq:StopNG}, if the following conditions are satisfied: \begin{itemize} \item[(E1)] $\,\kappa^{(0)}\,$ defines an orthogonal decomposition \begin{eqnarray}\nn \gt{g}^{(0)}=\gt{t}^{(0)}_{\rm vac}\circlesign{\perp}\gt{e}^{(0)}\circlesign{\perp}\gt{f}^{(0)} \end{eqnarray} (in which $\,\gt{f}^{(0)}\,$ is an orthogonal direct-sum completion of $\,\gt{t}^{(0)}_{\rm vac}\circlesign{\perp}\gt{e}^{(0)}$) such that $\,\kappa^{(0)}\mathord{\restriction}_{\gt{t}^{(0)}_{\rm vac}\circlesign{\perp}\gt{e}^{(0)}}\,$ is non-degenerate; \item[(E2)] $\,S^{({\rm
HP})}_{{\rm GS},p}\,$ is restricted to field configurations satisfying -- in the notation introduced at the beginning of Sec.\,\ref{sec:stensor} -- the {\bf inverse Higgs constraint} \begin{eqnarray}\label{eq:IsHiggs} \forall_{\widehat S\in\ovl{1,d-p-1}}\ :\ (\gamma\circ{\mathbb{X}})^*\theta^{\widehat S}_{\rm L}\must0 \end{eqnarray} whose solvability is ensured by the invertibility -- in an arbitrary (local) coordinate system $\,\{\sigma^i\}^{i\in\ovl{0,p}}\,$ on $\,\Omega\,$ -- of the (tangent-transport) operator \begin{eqnarray}\nn e^{\unl A}_{\ \mu}\bigl(\xi(\sigma)\bigr)\,\tfrac{\partial\xi^\mu}{\partial\sigma^i}(\sigma)\equiv\unl\epsilon^{\unl A}_{\ i}(\sigma)\,,\quad\sigma\in\Omega\,. \end{eqnarray} \end{itemize} The latter constraint is equivalent to the Euler--Lagrange equations of $\,S^{({\rm HP})}_{{\rm GS},p}\,$ obtained by varying the functional in the direction of the Goldstone fields $\,\phi^a,\ a\in\ovl{1,{\rm dim}_{\mathbb{R}}\,\gt{d}}$. \end{Prop} \noindent\begin{proof} In view of the previous remarks, we first have to demonstrate that $\,S^{({\rm HP})}_{\rm metr,GS}\,$ reduces to $\,S^{({\rm NG})}_{\rm metr,GS}\,$ upon imposing \eqref{eq:IsHiggs} whenever conditions (E1) and (E2) are satisfied. To this end, we work out explicit formul\ae ~for the relevant components of the Maurer--Cartan 1-form. 
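As a simple consistency check on the resummation carried out below, note that the series truncate at second order in the Goldstone fields to \begin{eqnarray}\nn {\rm e}^{-\phi^a\,{\rm ad}_{t_a}}(t_{\unl A})=t_{\unl A}-\phi^a\,f_{a\unl A}^{\ \ \widehat S}\vartriangleright t_{\widehat S}+\tfrac{1}{2}\,\phi^a\,\phi^b\,f_{a\unl A}^{\ \ \widehat S}\,f_{b\widehat S}^{\ \ \unl B}\vartriangleright t_{\unl B}+O(\phi^3)\,, \end{eqnarray} which exhibits the alternating pattern $\,\gt{t}^{(0)}_{\rm vac}\to\gt{e}^{(0)}\to\gt{t}^{(0)}_{\rm vac}\,$ of contractions with the mixed structure constants that drives the hyperbolic resummation.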
We have \begin{eqnarray}\nn &&{\rm e}^{-\phi^a\,{\rm ad}_{t_a}}(t_{\unl A})\cr\cr &=&\sum_{n=0}^\infty\,\tfrac{1}{(2n)!}\,\phi^{a_1}\,\phi^{a_2}\,\cdots\,\phi^{a_{2n}}\,f_{a_{2n}\unl A}^{\quad\ \ \widehat S_{2n}}\,f_{a_{2n-1}\widehat S_{2n}}^{\qquad\ \unl A_{2n-1}}\,f_{a_{2n-2}\unl A_{2n-1}}^{\qquad\ \widehat S_{2n-2}}\,f_{a_{2n-3}\widehat S_{2n-2}}^{\qquad\ \unl A_{2n-3}}\,\cdots\,f_{a_2\unl A_3}^{\quad\ \widehat S_2}\,f_{a_1\widehat S_2}^{\quad\ \unl A_1}\vartriangleright t_{\unl A_1}\cr\cr &&-\sum_{n=0}^\infty\,\tfrac{1}{(2n+1)!}\,\phi^{a_1}\,\phi^{a_2}\,\cdots\,\phi^{a_{2n+1}}\,f_{a_{2n+1}\unl A}^{\qquad \ \widehat S_{2n+1}}\,f_{a_{2n}\widehat S_{2n+1}}^{\qquad\ \unl A_{2n}}\,f_{a_{2n-1}\unl A_{2n}}^{\qquad\ \widehat S_{2n-1}}\,f_{a_{2n-2}\widehat S_{2n-1}}^{\qquad\ \unl A_{2n-2}}\,\cdots\,f_{a_3\unl A_4}^{\quad\ \widehat S_3}\,f_{a_2\widehat S_3}^{\quad\ \unl A_2}\,f_{a_1\unl A_2}^{\quad\ \widehat S_1}\vartriangleright t_{\widehat S_1} \end{eqnarray} and \begin{eqnarray}\nn &&{\rm e}^{-\phi^a\,{\rm ad}_{t_a}}(t_{\widehat S})\cr\cr &=&\sum_{n=0}^\infty\,\tfrac{1}{(2n)!}\,\phi^{a_1}\,\phi^{a_2}\,\cdots\,\phi^{a_{2n}}\,f_{a_{2n}\widehat S}^{\quad\ \unl A_{2n}}\,f_{a_{2n-1}\unl A_{2n}}^{\qquad\ \widehat S_{2n-1}}\,f_{a_{2n-2}\widehat S_{2n-1}}^{\qquad\ \unl A_{2n-2}}\,f_{a_{2n-3}\unl A_{2n-2}}^{\qquad\ \widehat S_{2n-3}}\,\cdots\,f_{a_2\widehat S_3}^{\quad\ \unl A_2}\,f_{a_1\unl A_2}^{\quad\ \widehat S_1}\vartriangleright t_{\widehat S_1}\cr\cr &&-\sum_{n=0}^\infty\,\tfrac{1}{(2n+1)!}\,\phi^{a_1}\,\phi^{a_2}\,\cdots\,\phi^{a_{2n+1}}\,f_{a_{2n+1}\widehat S}^{\qquad\ \unl A_{2n+1}}\,f_{a_{2n}\unl A_{2n+1}}^{\qquad\ \widehat S_{2n}}\,f_{a_{2n-1}\widehat S_{2n}}^{\qquad\ \unl A_{2n-1}}\,f_{a_{2n-2}\unl A_{2n-1}}^{\qquad\ \widehat S_{2n-2}}\,\cdots\,f_{a_3\widehat S_4}^{\quad\ \unl A_3}\,f_{a_2\unl A_3}^{\quad\ \widehat S_2}\,f_{a_1\widehat S_2}^{\quad\ \unl A_1}\vartriangleright t_{\unl A_1}\,, \end{eqnarray} Denote \begin{eqnarray}\nn F(\phi)_{\unl A}^{\ \widehat 
S}=\phi^a\,f_{a\unl A}^{\ \ \widehat S}\,,\qquad\qquad\widetilde F(\phi)_{\widehat S}^{\ \unl A}=\phi^a\,f_{a\widehat S}^{\ \ \unl A} \end{eqnarray} and \begin{eqnarray}\nn \phi^a\,\phi^b\,f_{a\unl A_1}^{\ \ \widehat S}\,f_{b\widehat S}^{\ \ \unl A_2}=:Q(\phi)_{\unl A_1}^{\ \unl A_2}\,,\qquad\qquad\phi^a\,\phi^b\,f_{a\widehat S_1}^{\ \ \unl A}\,f_{b\unl A}^{\ \ \widehat S_2}=:\widetilde Q(\phi)_{\widehat S_1}^{\ \widehat S_2}\,. \end{eqnarray} Furthermore, for the sake of brevity, we use the symbolic notation \begin{eqnarray}\nn L(\phi)^2:=Q(\phi)\,,\qquad\qquad\widetilde L(\phi)^2:=\widetilde Q(\phi) \end{eqnarray} in (symmetric) functions of $\,\phi\,$ whose dependence on the argument factors through $\,Q(\phi)\,$ or $\,\widetilde Q(\phi)$,\ {\it e.g.}, \begin{eqnarray}\nn {\rm e}^{-\phi^a\,{\rm ad}_{t_a}}(t_{\unl A})&=&\bigl({\rm ch}\,L(\phi)\bigr)_{\unl A}^{\ \unl B}\vartriangleright t_{\unl B}-\phi^a\,f_{a\unl A}^{\ \ \widehat S}\,\bigl(\tfrac{{\rm sh}\,\widetilde L(\phi)}{\widetilde L(\phi)}\bigr)_{\widehat S}^{\ \widehat T}\vartriangleright t_{\widehat T}\cr\cr {\rm e}^{-\phi^a\,{\rm ad}_{t_a}}(t_{\widehat S})&=&\bigl({\rm ch}\,\widetilde L(\phi)\bigr)_{\widehat S}^{\ \widehat T}\vartriangleright t_{\widehat T}-\phi^a\,f_{a\widehat S}^{\ \ \unl A}\,\bigl(\tfrac{{\rm sh}\,L(\phi)}{L(\phi)}\bigr)_{\unl A}^{\ \unl B}\vartriangleright t_{\unl B}\,.
\end{eqnarray} In this notation, the above expansions can be recast in the block-matrix form \begin{eqnarray}\nn {\rm e}^{\Lambda(\phi)}=\left(\begin{array}{cc} {\rm ch}\,L(\phi) & -F(\phi)\cdot\tfrac{{\rm sh}\,\widetilde L(\phi)}{\widetilde L(\phi)} \\ -\widetilde F(\phi)\cdot\tfrac{{\rm sh}\,L(\phi)}{L(\phi)} & {\rm ch}\,\widetilde L(\phi) \end{array}\right)\,, \end{eqnarray} where the blocks correspond (in an obvious manner) to the direct summands in the decomposition $\,\gt{t}^{(0)}=\gt{t}^{(0)}_{\rm vac}\oplus\gt{e}^{(0)}$.\ This can be further decomposed as \begin{eqnarray}\nn {\rm e}^{\Lambda(\phi)}=\left(\begin{array}{cc} {\boldsymbol{1}}_{p+1} & -\varphi \\ -\widetilde\varphi & {\boldsymbol{1}}_{d-p-1} \end{array}\right)\cdot\left(\begin{array}{cc} {\rm ch}\,L(\phi) & 0 \\ 0 & {\rm ch}\,\widetilde L(\phi) \end{array}\right)\,, \end{eqnarray} with \begin{eqnarray}\nn \varphi=F(\phi)\cdot\tfrac{{\rm sh}\,\widetilde L(\phi)}{\widetilde L(\phi)\,{\rm ch}\,\widetilde L(\phi)}\,,\qquad\qquad\widetilde\varphi=\widetilde F(\phi)\cdot\tfrac{{\rm sh}\,L(\phi)}{L(\phi)\,{\rm ch}\,L(\phi)}\,. \end{eqnarray} In view of the obvious identities \begin{eqnarray}\nn F(\phi)\cdot\widetilde Q(\phi)=Q(\phi)\cdot F(\phi)\,,\qquad\qquad\widetilde F(\phi)\cdot Q(\phi)=\widetilde Q(\phi)\cdot \widetilde F(\phi)\,, \end{eqnarray} we may rewrite the last definitions in the equivalent form \begin{eqnarray}\nn \varphi=\tfrac{{\rm sh}\,L(\phi)}{L(\phi)\,{\rm ch}\,L(\phi)}\cdot F(\phi)\,,\qquad\qquad\widetilde\varphi=\tfrac{{\rm sh}\,\widetilde L(\phi)}{\widetilde L(\phi)\,{\rm ch}\,\widetilde L(\phi)}\cdot\widetilde F(\phi)\,.
\end{eqnarray} Furthermore, as \begin{eqnarray}\nn F(\phi)\cdot\widetilde F(\phi)=Q(\phi)\,,\qquad\qquad\widetilde F(\phi)\cdot F(\phi)=\widetilde Q(\phi)\,, \end{eqnarray} we obtain the relation \begin{eqnarray}\nn \varphi\cdot\widetilde\varphi=\tfrac{{\rm sh}\,L(\phi)}{L(\phi)\,{\rm ch}\,L(\phi)}\cdot F(\phi)\cdot\widetilde F(\phi)\cdot\tfrac{{\rm sh}\,L(\phi)}{L(\phi)\,{\rm ch}\,L(\phi)}=\bigl(\tfrac{{\rm sh}\,L(\phi)}{{\rm ch}\,L(\phi)}\bigr)^2={\boldsymbol{1}}_{p+1}-\tfrac{1}{{\rm ch}\,L(\phi)^2}\,, \end{eqnarray} and -- similarly -- the relation \begin{eqnarray}\nn \widetilde\varphi\cdot\varphi={\boldsymbol{1}}_{d-p-1}-\tfrac{1}{{\rm ch}\,\widetilde L(\phi)^2}\,, \end{eqnarray} so that we may ultimately express $\,{\rm e}^{\Lambda(\phi)}\,$ entirely in terms of $\,\varphi\,$ and $\,\widetilde\varphi\,$ as \begin{eqnarray}\nn {\rm e}^{\Lambda(\phi)}=\left(\begin{array}{cc} {\boldsymbol{1}}_{p+1} & -\varphi \\ -\widetilde\varphi & {\boldsymbol{1}}_{d-p-1} \end{array}\right)\cdot\left(\begin{array}{cc} ({\boldsymbol{1}}_{p+1}-\varphi\cdot\widetilde\varphi)^{-\frac{1}{2}} & 0 \\ 0 & ({\boldsymbol{1}}_{d-p-1}-\widetilde\varphi\cdot\varphi)^{-\frac{1}{2}} \end{array}\right)\,. 
\end{eqnarray} We next use assumption (E1) to relate $\,\widetilde\varphi\,$ to $\,\varphi$.\ To this end, we compute, using the ${\rm ad}$-invariance of the Killing form, \begin{eqnarray}\nn \kappa^{(0)\,-1\,\unl A\unl B}\,f_{a\unl B}^{\ \ \widehat T}\,\kappa^{(0)}_{\widehat T\widehat S}&\equiv&\kappa^{(0)\,-1\,\unl A \unl B}\,f_{a\unl B}^{\ \ C}\,\kappa^{(0)}_{C\widehat S}=\kappa^{(0)\,-1\,\unl A \unl B}\,\kappa^{(0)}\bigl([t_a,t_{\unl B}],t_{\widehat S}\bigr)=-\kappa^{(0)\,-1\,\unl A \unl B}\,\kappa^{(0)}\bigl([t_a,t_{\widehat S}],t_{\unl B}\bigr)\cr\cr &=&-\kappa^{(0)\,-1\,\unl A \unl B}\,f_{a\widehat S}^{\ \ C}\,\kappa^{(0)}_{C\unl B}=-f_{a\widehat S}^{\ \ \unl A}\,, \end{eqnarray} whence also \begin{eqnarray}\label{eq:f2fIHkill} \kappa^{(0)}_{\unl A\unl B}\,f_{a\widehat T}^{\ \ \unl B}\,\kappa^{(0)\,-1\,\widehat T\widehat S}=-f_{a\unl A}^{\ \ \widehat S}\,. \end{eqnarray} Taking into account that $\,\widetilde\varphi\,$ is an odd function of the $\,\phi^a$,\ we then readily establish the fundamental identity \begin{eqnarray}\label{eq:phitilvsphi} \widetilde\varphi_{\widehat S}^{\ \unl A}=-\kappa^{(0)\,-1\,\unl A\unl B}\,\varphi_{\unl B}^{\ \widehat T}\,\kappa^{(0)}_{\widehat T\widehat S}\,. \end{eqnarray} At this stage, we may pass to express (the pullbacks of) the relevant left-invariant Maurer--Cartan 1-forms as functions of $\,\xi^\mu\,$ and $\,\varphi_{\unl A}^{\ \widehat S}$,\ whereupon the imposition of the inverse Higgs constraint becomes straightforward. 
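The content of the last identity is conveniently illustrated by the simplest example of a single broken boost, anticipating the super-Minkowskian discussion of Sec.\,\ref{sec:sMinktarget} (and restricting to the Gra\ss mann-even sector with $\,d=2,\ p=0$,\ the invariant bilinear form $\,\eta={\rm diag}(+,-)\,$ and a trivial vielbein): for $\,\gt{t}^{(0)}_{\rm vac}=\langle P_0\rangle\,$ and $\,\gt{e}^{(0)}=\langle P_1\rangle$,\ with the boost generator $\,J_{01}\,$ spanning the Goldstone direction, the commutators $\,[J_{01},P_0]=-P_1\,$ and $\,[J_{01},P_1]=-P_0\,$ give, for $\,\phi\equiv\phi^{01}$, \begin{eqnarray}\nn {\rm e}^{-\phi\,{\rm ad}_{J_{01}}}(P_0)={\rm ch}\,\phi\vartriangleright P_0+{\rm sh}\,\phi\vartriangleright P_1\,,\qquad\qquad{\rm e}^{-\phi\,{\rm ad}_{J_{01}}}(P_1)={\rm sh}\,\phi\vartriangleright P_0+{\rm ch}\,\phi\vartriangleright P_1\,, \end{eqnarray} whence $\,\varphi=-{\rm th}\,\phi=\widetilde\varphi$,\ in keeping with \eqref{eq:phitilvsphi} (with $\,\eta\,$ in the r\^ole of $\,\kappa^{(0)}$). The inverse Higgs constraint \eqref{eq:IsHiggs} then fixes $\,{\rm th}\,\phi=-\dot x^1/\dot x^0\,$ along the worldline, {\it i.e.}, the boost Goldstone field is frozen to (minus) the rapidity of the embedded trajectory.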
Thus, taking into account the expressions \eqref{eq:factViel} as well as the results derived above, we find \begin{eqnarray} \gamma^*\theta^{\unl A}_{\rm L}(\xi,\phi)&=&\bigl(e^{\unl B}_{\ \mu}(\xi)-e^{\widehat S}_{\ \mu}(\xi)\,\widetilde\varphi_{\widehat S}^{\ \unl B}(\phi)\bigr)\,\sqrt{{\boldsymbol{1}}_{p+1}-\varphi\cdot\widetilde\varphi}_{\unl B}^{\ \unl A}\,{\mathsf d}\xi^\mu\,, \label{eq:MCformsgal}\\ \gamma^*\theta^{\widehat S}_{\rm L}(\xi,\phi)&=&\bigl(e^{\widehat T}_{\ \mu}(\xi)-e^{\unl A}_{\ \mu}(\xi)\,\varphi_{\unl A}^{\ \widehat T}(\phi)\bigr)\,\sqrt{{\boldsymbol{1}}_{d-p-1}-\widetilde\varphi\cdot\varphi}_{\widehat T}^{\ \widehat S}\,{\mathsf d}\xi^\mu\,. \end{eqnarray} Denote, for any $\,I\in\ovl{1,{\rm dim}\,\gt{t}^{(0)}}\,$ and for (local) coordinates $\,\{\sigma^i\}^{i\in\ovl{0,p}}\,$ on $\,\Omega$, \begin{eqnarray}\nn \epsilon^I_{\ i}(\sigma):=e^I_{\ \mu}\bigl(\xi(\sigma)\bigr)\,\tfrac{\partial\xi^\mu}{\partial\sigma^i}(\sigma) \end{eqnarray} and further write \begin{eqnarray}\nn \epsilon^{\unl A}_{\ i}\equiv\unl\epsilon^{\unl A}_{\ i}\,,\qquad\qquad\epsilon^{\widehat S}_{\ i}\equiv\widehat\epsilon^{\widehat S}_{\ i} \end{eqnarray} for the sake of clarity of the formul\ae ~that follow. The solution to the inverse Higgs constraint now reads \begin{eqnarray}\nn \varphi_{\unl A}^{\ \widehat S}\bigl(\phi(\sigma)\bigr)=\unl\epsilon^{-1\,i}_{\quad\unl A}(\sigma)\,\widehat\epsilon^{\widehat S}_{\ i}(\sigma)\,, \end{eqnarray} or -- in an obvious shorthand notation -- \begin{eqnarray}\label{eq:IHconstrexpl} \varphi\circ\phi=\unl\epsilon^{-1\,{\rm T}}\cdot\widehat\epsilon^{\rm T}\,.
\end{eqnarray} Substituting this into the formula for $\,{\mathbb{X}}^*\gamma^*\theta^{\unl A}_{\rm L}\,$ and using \Reqref{eq:phitilvsphi} along the way, we arrive at the expression \begin{eqnarray}\nn \varpi^{\unl A}&\equiv&\varpi^{\unl A}_{\ \ i}\,{\mathsf d}\sigma^i:=\bigl(\xi,\phi(\xi)\bigr)^*\gamma^*\theta^{\unl A}_{\rm L}\cr\cr &=&\bigl(\unl\epsilon^{\unl B}_{\ i}+\widehat\epsilon^{\widehat S}_{\ i}\,\kappa^{(0)\,-1\,\unl B\unl C}\,\widehat\epsilon^{\widehat T}_{\ j}\,\unl\epsilon^{-1\,j}_{\quad\unl C}\,\kappa^{(0)}_{\widehat T\widehat S}\bigr)\,\left(\tfrac{1}{\sqrt{{\boldsymbol{1}}_{p+1}+\kappa^{(0)}_{\widehat S\widehat T}\,\widehat\epsilon^{\widehat S}_{\ k}\,\widehat\epsilon^{\widehat T}_{\ l}\,\unl\epsilon^{-1\,k}_{\quad\unl\cdot}\,\unl\epsilon^{-1\,l}_{\quad\unl D}\,\kappa^{(0)\,-1\,\unl D\unl\cdot}}}\right)_{\unl B}^{\ \ \unl A}\,{\mathsf d}\sigma^i\,. \end{eqnarray} In order to simplify the above expression and prepare it for subsequent use in the reconstruction of the inverse Higgs-reduced Hughes--Polchinski action functional, let us call \begin{eqnarray}\nn \unl\kappa_{\unl A\unl B}:=\kappa^{(0)}_{\unl A\unl B}\,,\qquad\qquad\widehat\kappa_{\widehat S\widehat T}:=\kappa^{(0)}_{\widehat S\widehat T} \end{eqnarray} and \begin{eqnarray}\nn \widehat{\rm g}_{ij}:=\widehat\kappa_{\widehat S\widehat T}\,\widehat\epsilon^{\widehat S}_{\ i}\,\widehat\epsilon^{\widehat T}_{\ j}\,,\qquad\qquad\unl{\rm g}_{ij}:=\unl\kappa_{\unl A\unl B}\,\unl\epsilon^{\unl A}_{\ i}\,\unl\epsilon^{\unl B}_{\ j}\,, \end{eqnarray} as well as \begin{eqnarray}\nn \widetilde{\rm g}_{ij}:=\unl{\rm g}_{ij}+\widehat{\rm g}_{ij}\equiv\kappa^{(0)}_{IJ}\,\epsilon^I_{\ i}\,\epsilon^J_{\ j}\,.
\end{eqnarray} We then obtain \begin{eqnarray}\nn \varpi^{\unl A}_{\ \ i}&=&\bigl(\unl\epsilon^{\unl B}_{\ i}+\widehat{\rm g}_{ij}\,\unl\kappa^{-1\,\unl B\unl C}\,\unl\epsilon^{-1\,j}_{\quad\unl C}\bigr)\,\left(\tfrac{1}{\sqrt{{\boldsymbol{1}}_{p+1}+\widehat{\rm g}_{kl}\,\unl\epsilon^{-1\,k}_{\quad\unl\cdot}\,\unl\epsilon^{-1\,l}_{\quad\unl D}\,\unl\kappa^{-1\,\unl D\unl\cdot}}}\right)_{\unl B}^{\ \ \unl A}\cr\cr &=&\unl\epsilon^{\unl E}_{\ i}\,\bigl(\delta^{\ \ \unl B}_{\unl E}+\widehat{\rm g}_{jk}\,\unl\epsilon^{-1\,j}_{\quad\unl E}\,\unl\epsilon^{-1\,k}_{\quad\unl C}\,\unl\kappa^{-1\,\unl C\unl B}\bigr)\,\left(\tfrac{1}{\sqrt{{\boldsymbol{1}}_{p+1}+\widehat{\rm g}_{lm}\,\unl\epsilon^{-1\,l}_{\quad\unl\cdot}\,\unl\epsilon^{-1\,m}_{\quad\ \unl D}\,\unl\kappa^{-1\,\unl D\unl\cdot}}}\right)_{\unl B}^{\ \ \unl A}\cr\cr &=&\unl\epsilon^{\unl B}_{\ i}\,\sqrt{{\boldsymbol{1}}_{p+1}+\widehat{\rm g}_{jk}\,\unl\epsilon^{-1\,j}_{\quad\ \unl\cdot}\,\unl\epsilon^{-1\,k}_{\quad\unl C}\,\unl\kappa^{-1\,\unl C\unl\cdot}}_{\unl B}^{\ \ \unl A}=\unl\epsilon^{\unl B}_{\ i}\,\sqrt{\bigl(\unl\kappa_{\unl\cdot\unl C}+\unl\epsilon^{-1\,j}_{\quad\ \unl\cdot}\,\widehat{\rm g}_{jk}\,\unl\epsilon^{-1\,k}_{\quad\unl C}\bigr)\,\unl\kappa^{-1\,\unl C\unl\cdot}}_{\unl B}^{\ \ \unl A}\cr\cr &=&\unl\epsilon^{\unl B}_{\ i}\,\sqrt{\unl\epsilon^{-1\,j}_{\quad\ \unl\cdot}\,\widetilde{\rm g}_{jk}\,\unl\epsilon^{-1\,k}_{\quad\unl C}\,\unl\kappa^{-1\,\unl C\unl\cdot}}_{\unl B}^{\ \ \unl A}\,. 
\end{eqnarray} At long last, we may now write out the sought-after metric term of the reduced Hughes--Polchinski action functional (in an obvious shorthand notation), \begin{eqnarray}\nn S^{({\rm HP})}_{{\rm metr,GS},p}[\xi,\phi(\xi)]&=&\int_\Omega\,\Vol(\Omega)\,\varepsilon_{i_0 i_1\ldots i_p}\,\varpi^0_{\ i_0}\,\varpi^1_{\ i_1}\,\cdots\,\varpi^p_{\ i_p}(\cdot)\equiv\int_\Omega\,\Vol(\Omega)\,{\rm det}_{(p+1)}\bigl(\varpi^{\unl\cdot}_{\ \cdot}\bigr)\cr\cr &=&\int_\Omega\,\Vol(\Omega)\,{\rm det}_{(p+1)}\unl\epsilon\cdot{\rm det}_{(p+1)}\sqrt{\unl\epsilon^{-1\,{\rm T}}\cdot\widetilde{\rm g}\cdot\unl\epsilon^{-1}\cdot\unl\kappa^{-1}}\,, \end{eqnarray} whence also we finally retrieve the anticipated result \begin{eqnarray}\nn S^{({\rm HP})}_{{\rm metr,GS},p}[\xi,\phi(\xi)]&=&\lambda_p\,\int_\Omega\,\Vol(\Omega)\,\sqrt{\bigl({\rm det}_{(p+1)}\unl\epsilon\bigr)^2\cdot{\rm det}_{(p+1)}\bigl(\unl\epsilon^{-1\,{\rm T}}\cdot\widetilde{\rm g}\cdot\unl\epsilon^{-1}\bigr)}\cr\cr &=&\lambda_p\,\int_\Omega\,\Vol(\Omega)\,\sqrt{{\rm det}_{(p+1)}\widetilde{\rm g}}\,, \end{eqnarray} up to an overall constant $\,\lambda_p\,$ (which we can always set to one by a suitable rescaling of the metric term). Passing to the closing statement of the proposition, we shall first write out the metric term of the Hughes--Polchinski action functional in a form amenable to further treatment. 
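The form of the Euler--Lagrange equations written out below follows from the standard identity for the variation of a determinant, \begin{eqnarray}\nn \delta\,{\rm det}_{(p+1)}M={\rm det}_{(p+1)}M\cdot{\rm tr}_{(p+1)}\bigl(M^{-1}\cdot\delta M\bigr)\,, \end{eqnarray} valid for any invertible matrix $\,M\,$ and applied to the two matrix factors of the integrand below, the second of these entering through its inverse square root, whence the relative factor $\,-\tfrac{1}{2}$.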
Taking into account \Reqref{eq:MCformsgal}, we obtain -- in the previously introduced notation -- \begin{eqnarray}\nn S^{({\rm HP})}_{{\rm metr,GS},p}[\xi,\phi]&=&\int_\Omega\,\Vol(\Omega)\,{\rm det}_{(p+1)}(e^{\unl\cdot}_{\ \cdot})=\int_\Omega\,\Vol(\Omega)\,{\rm det}_{(p+1)}M(\xi)\cdot{\rm det}_{(p+1)}\,\bigl[A(\xi,\phi)\cdot B(\xi,\phi)^{-\frac{1}{2}}\bigr]\,, \end{eqnarray} where $\,M(\xi)\,$ is a matrix that does not depend on the $\,\phi^a$,\ and hence does not contribute to the Euler--Lagrange equations for these fields, and where \begin{eqnarray}\label{eq:ABexpl}\qquad A(\xi,\phi)=\unl\kappa+\varphi(\phi)\cdot\widehat\kappa\cdot\widehat\epsilon(\xi)\cdot\unl\epsilon^{-1}(\xi)\,,\qquad\qquad B(\xi,\phi)=\unl\kappa+\varphi(\phi)\cdot\widehat\kappa\cdot\varphi(\phi)^{\rm T}\,. \end{eqnarray} The said Euler--Lagrange equations read \begin{eqnarray}\nn \tfrac{\delta\varphi_{\unl A}^{\ \widehat S}}{\delta\phi^a}\,{\rm tr}_{(p+1)}\bigl(A(\xi,\phi)^{-1}\cdot\tfrac{\delta\ }{\delta\varphi_{\unl A}^{\ \widehat S}}A(\xi,\phi)-\tfrac{1}{2}\,B(\xi,\phi)^{-1}\cdot\tfrac{\delta\ }{\delta\varphi_{\unl A}^{\ \widehat S}}B(\xi,\phi)\bigr)=0\,, \end{eqnarray} and so, using the symmetry of $\,B(\xi,\phi)$,\ they can be cast in the simple matrix form \begin{eqnarray}\nn \bigl(\widehat\kappa\cdot\widehat\epsilon(\xi)\cdot\unl\epsilon^{-1}(\xi)\cdot A(\xi,\phi)^{-1}\bigr)^{\rm T}=B(\xi,\phi)^{-1}\cdot\varphi(\phi)\cdot\widehat\kappa\,. \end{eqnarray} Upon multiplying both sides of the above equation by $\,\varphi(\phi)^{\rm T}\,$ and invoking \Reqref{eq:ABexpl}, we deduce the identity \begin{eqnarray}\nn A(\xi,\phi)^{-1\,{\rm T}}=B(\xi,\phi)^{-1}\,, \end{eqnarray} which -- when used in the original equation -- yields the anticipated solution \eqref{eq:IHconstrexpl}.
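In more detail, substitution of the last identity into the matrix form of the Euler--Lagrange equations turns the latter into \begin{eqnarray}\nn A(\xi,\phi)^{-1\,{\rm T}}\cdot\unl\epsilon^{-1\,{\rm T}}(\xi)\cdot\widehat\epsilon(\xi)^{\rm T}\cdot\widehat\kappa=A(\xi,\phi)^{-1\,{\rm T}}\cdot\varphi(\phi)\cdot\widehat\kappa\,, \end{eqnarray} whence $\,\varphi(\phi)=\unl\epsilon^{-1\,{\rm T}}(\xi)\cdot\widehat\epsilon(\xi)^{\rm T}\,$ by the invertibility of $\,A(\xi,\phi)\,$ and of $\,\widehat\kappa$,\ which is precisely \eqref{eq:IHconstrexpl}.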
\end{proof} The assumptions of the last proposition exclude important -- both mathematically and physically -- examples of supertargets such as the super-Minkowski space for which the Killing metric degenerates in the (Gra\ss mann-even) translational directions. At the same time, they suggest very clearly a generalisation that does not -- {\it a priori} -- constrain the structure of the underlying Lie algebra $\,\gt{g}^{(0)}$.\ Thus, we formulate \begin{Prop}\label{prop:IHCartMink} Let $\,{\rm G}\,$ be a Lie supergroup with the Lie superalgebra $\,\gt{g}=\gt{t}\oplus\gt{r}\,$ and let $\,{\rm H}\subset{\rm G}\,$ be its Lie subsupergroup with the Lie superalgebra $\,\gt{h}$,\ the two algebras satisfying the relations described at the beginning of Sec.\,\ref{sec:stensor}. If condition (E2) of Prop.\,\ref{prop:IHCart} is satisfied in conjunction with condition \begin{itemize} \item[(E1')] there exist non-degenerate bilinear symmetric forms: $\,\unl\gamma\,$ on $\,\gt{t}^{(0)}_{\rm vac}\,$ and $\,\widehat\gamma\,$ on $\,\gt{e}^{(0)}\,$ with respective presentations \begin{eqnarray}\nn \unl\gamma=\unl\gamma_{\unl A\unl B}\,\tau^{\unl A}\otimes_{\mathbb{R}}\tau^{\unl B}\,,\qquad\qquad\widehat\gamma=\widehat\gamma_{\widehat S\widehat T}\,\tau^{\widehat S}\otimes_{\mathbb{R}}\tau^{\widehat T} \end{eqnarray} in the basis $\,\{\tau^A\}^{A\in\ovl{1,{\rm dim}_{\mathbb{R}}\,\gt{g}}}\,$ of $\,\gt{g}\,$ dual to $\,\{t_A\}_{A\in\ovl{1,{\rm dim}_{\mathbb{R}}\,\gt{g}}}$,\ \begin{eqnarray}\nn \tau^A(t_B)=\delta^A_{\ B}\,,\quad A,B\in\ovl{1,{\rm dim}_{\mathbb{R}}\,\gt{g}}\,, \end{eqnarray} for which the following identities hold true \begin{eqnarray}\label{eq:f2fIH} \unl\gamma^{-1\,\unl A\unl B}\,f_{a\unl B}^{\ \ \widehat T}\,\widehat\gamma_{\widehat T\widehat S}=-f_{a\widehat S}^{\ \ \unl A}\,, \end{eqnarray} \end{itemize} the Green--Schwarz super-$\sigma$-model on the homogeneous space $\,{\rm G}/{\rm H}\,$ in the Hughes--Polchinski formulation, determined by the action functional 
$\,S^{({\rm HP})}_{{\rm GS},p}\,$ \eqref{eq:HPGS} with the metric term \eqref{eq:SmetrHP} and the topological term \eqref{eq:StopHP}, is equivalent to the Green--Schwarz super-$\sigma$-model on the same supertarget in the Nambu--Goto formulation, defined by the action functional \eqref{eq:NGGS} with the metric term \eqref{eq:SmetrNG} for the metric $\,{\rm g}=\unl\gamma\oplus\widehat\gamma\,$ and the topological term \eqref{eq:StopNG}. The inverse Higgs constraint is equivalent to the Euler--Lagrange equations of $\,S^{({\rm HP})}_{{\rm GS},p}\,$ obtained by varying the functional in the direction of the Goldstone fields $\,\phi^a,\ a\in\ovl{1,{\rm dim}_{\mathbb{R}}\,\gt{d}}$. \end{Prop} \noindent\begin{proof} The proof is entirely analogous to that of Prop.\,\ref{prop:IHCart}, with identity \eqref{eq:f2fIH} playing the structural r\^ole of identity \eqref{eq:f2fIHkill}, the latter being satisfied automatically under the assumptions of that proposition. \end{proof} While we are not going to make essential use of that in what follows, it is to be noted that the canonical description of the Hughes--Polchinski model is highly singular in that the corresponding presymplectic form \begin{eqnarray}\nn \Omega_{{\rm GS},p}^{({\rm HP})}[\unl{\mathbb{X}}]=\int_\mathscr{C}\,{\rm ev}^*\bigl(\delta\beta_{\rm (HP)}+\underset{\tx{\ciut{(p+2)}}}{\chi}\bigr) \end{eqnarray} does not depend on the kinetic momentum. In the light of the above proposition, the latter is reintroduced into the canonical description only through the imposition of the inverse Higgs constraint. \section{The super-Minkowskian background}\label{sec:sMinktarget} In the present section, we restrict our considerations to one of the simplest supertargets, to wit, the {\bf super-Minkowski spacetime with $N$ supersymmetries}, and specify its tensorial data necessary for the definition of the relevant super-$\sigma$-model. 
\subsection{The Cartan supergeometry of the super-Minkowskian target}\label{sec:CartMink} As a supermanifold, the super-Minkowski spacetime with $N$ supersymmetries is the previously introduced model ringed space \begin{eqnarray}\nn \bigl({\mathbb{R}}^{\x d},C^\infty(\cdot,{\mathbb{R}})\otimes_{\mathbb{R}}\bigwedge{\mathbb{R}}^{\x ND_{1,d-1}}\bigr)\equiv{\rm sMink}^{1,d-1\,\vert\,ND_{1,d-1}}\,,\qquad D_{1,d-1}={\rm dim}_{\mathbb{C}} S_{1,d-1}\,,\quad d\in\{9,10\}\,, \end{eqnarray} where $\,{\rm dim}_{\mathbb{C}} S_{1,d-1}\,$ denotes the dimension of the Majorana-spinor representation of the spin group $\,{\rm Spin}(1,d-1)\,$ of the Clifford algebra $\,{\rm Cliff}({\mathbb{R}}^{1,d-1})\,$ of the standard Minkowski (quadratic) space $\,({\mathbb{R}}^{\x d},\eta),\ \eta=\textrm{diag}(+,-,-,\ldots,-)$.\ The supertarget will be conveniently described as a homogeneous space of the natural action of the $N$-extended super-Poincar\'e Lie supergroup, the latter being given by the semidirect product of the {\bf supertranslation} (or {\bf supersymmetry}) {\bf group}\footnote{We adopt mathematicians' notation in which the supertranslation group is denoted as $\,{\mathbb{R}}^{1,d-1\,\vert\,ND_{1,d-1}}$,\ while physicists would have it in the form $\,{\mathbb{R}}^{1,d-1\,\vert\,N}$.} $\,{\mathbb{R}}^{1,d-1\,\vert\,ND_{1,d-1}}\equiv{\rm sMink}^{1,d-1\,\vert\,ND_{1,d-1}}\,$ with the spin group $\,{\rm Spin}(1,d-1)$, \begin{eqnarray}\label{eq:sPoinc} {\rm s}\mathscr{P}(1,d-1;N)={\mathbb{R}}^{1,d-1\,\vert\,ND_{1,d-1}}\rtimes{\rm Spin}(1,d-1)\,, \end{eqnarray} with respect to the standard vector-spinor representation of $\,{\rm Spin}(1,d-1)\,$ on $\,{\mathbb{R}}^{1,d-1\,\vert\,ND_{1,d-1}}$.\ The supergroup, and so also the supertarget, admits homogeneous coordinates: the Gra\ss mann-even ones $\,\{x^I\}^{I\in\ovl{0,d-1}}\,$ associated with the left-invariant vector fields $\,\{P_I\}_{I\in\ovl{0,d-1}}\,$ generating translations and $\,\{\phi^{KL}=-\phi^{LK}\}_{K,L\in\ovl{0,d-1}}\,$
associated with the left-invariant vector fields $\,\{J_{KL}=-J_{LK}\}^{K,L\in\ovl{0,d-1}}\,$ generating Lorentz transformations, as well as the Gra\ss mann-odd ones $\,\{\theta^\a_i\}^{\a\in\ovl{1,D_{1,d-1}}}_{i\in\ovl{1,N}}\,$ associated with left-invariant (super)vector fields $\,\{Q^i_\a\}^{i\in\ovl{1,N}}_{\a\in\ovl{1,D_{1,d-1}}}\,$ generating spinorial translations. The Lie-supergroup structure on the above supermanifold is determined by the binary operation \begin{eqnarray}\nn {\rm m}\ &:&\ {\rm s}\mathscr{P}(1,d-1;N)\x{\rm s}\mathscr{P}(1,d-1;N)\longrightarrow{\rm s}\mathscr{P}(1,d-1;N)\cr\cr \ &:&\ \bigl(\bigl(x_1^I,\theta^\a_{1\,i},\phi_1^{KL}\bigr),\bigl(x_2^J,\theta^\beta_{2\,j},\phi_2^{MN}\bigr)\bigr)\longmapsto\bigl(x_1^I+ L(\phi_1)^I_{\ J}\,x_2^J-\tfrac{1}{2}\,\theta_{1\,i}^\a\,C_{\a\beta}\,\Gamma^{I\,\beta}_{\ \ \gamma}\,S(\phi_1)_{\ \delta}^\gamma\,\theta_{2\,i}^\delta,\cr\cr &&\hspace{5.75cm}\theta_{1\,i}^\a+S(\phi_1)_{\ \beta}^\a\,\theta_{2\,i}^\beta,\widetilde\phi(\phi_1,\phi_2)^{KL}\bigr)\,, \end{eqnarray} written in terms of the vector representation $\,L\ :\ {\rm Spin}(1,d-1)\longrightarrow{\rm End}_{\mathbb{R}}\,({\mathbb{R}}^{\x d})\,$ and of (the $i$-th copy of) the Majorana-spinor representation $\,S\ :\ {\rm Spin}(1,d-1)\longrightarrow{\rm End}_{\mathbb{C}}\,(S_{1,d-1})\,$ of $\,{\rm Spin}(1,d-1)$,\ in which we also take the relevant charge-conjugation matrix and the generators of the Clifford algebra (with contributions from this representation resummed over the range $\,i\in\ovl{1,N}\,$ in the vectorial Gra\ss mann-even component), and in terms of the standard non-linear group law $\,\widetilde\phi\,$ for elements of group $\,{\rm Spin}(1,d-1)$.\ Upon restriction to $\,{\rm sMink}^{1,d-1\,\vert\,ND_{1,d-1}}\equiv{\mathbb{R}}^{1,d-1\,\vert\,ND_{1,d-1}}\subset{\rm s}\mathscr{P}(1,d-1;N)\,$ in the above group law, we recover the natural (left) action of $\,{\rm s}\mathscr{P}(1,d-1;N)\,$ on the super-Minkowski space (coset), 
\begin{eqnarray}\nn \ell_\cdot\ &:&\ {\rm s}\mathscr{P}(1,d-1;N)\x{\rm sMink}^{1,d-1\,\vert\,ND_{1,d-1}}\longrightarrow{\rm sMink}^{1,d-1\,\vert\,ND_{1,d-1}}\cr\cr \ &:&\ \bigl(\bigl(y^I,\varepsilon^\a_i,\psi^{KL}\bigr),\bigl(x^J,\theta^\beta_j\bigr)\bigr)\longmapsto\bigl( L(\psi)_{\ J}^I\,x^J+y^I-\tfrac{1}{2}\,\varepsilon_i^\a\,C_{\a\beta}\,\Gamma^{I\,\beta}_{\ \ \gamma}\,S(\psi)_{\ \delta}^\gamma\,\theta_i^\delta,S(\psi)_{\ \beta}^\a\,\theta_j^\beta+\varepsilon_j^\a\bigr)\,. \end{eqnarray} The right action of the supertranslation group on the super-Minkowski spacetime is defined analogously, \begin{eqnarray}\label{eq:sMinksMink} \wp\ &:&\ {\rm sMink}^{1,d-1\,\vert\,ND_{1,d-1}}\x{\mathbb{R}}^{1,d-1\,\vert\,ND_{1,d-1}}\longrightarrow{\rm sMink}^{1,d-1\,\vert\,ND_{1,d-1}}\cr\cr \ &:&\ \bigl(\bigl(x^I,\theta^\a_i\bigr),\bigl(y^J,\varepsilon^\beta_j\bigr)\bigr)\longmapsto\bigl(x^I+y^I-\tfrac{1}{2}\,\theta_i^\a\,C_{\a\beta}\,\Gamma^{I\,\beta}_{\ \ \gamma}\,\varepsilon_i^\gamma,\theta_i^\a+\varepsilon_i^\a\bigr)\,. \end{eqnarray} It is to be noted at this stage that the generators of the Clifford algebra are equivariant with respect to the two representations of $\,{\rm Spin}(1,d-1)\,$ introduced above, as expressed by the identities \begin{eqnarray}\label{eq:Gammasinter} S(\phi)\cdot\Gamma^I\cdot S(-\phi)= L(-\phi)^I_{\ J}\,\Gamma^J\,. \end{eqnarray} Here, $\,S(-\phi)\equiv S(\phi)^{-1}\,$ and, similarly, $\, L(-\phi)\equiv L(\phi)^{-1}$,\ and we have the defining identity \begin{eqnarray}\label{eq:LorintMink} L(\phi)^K_{\ J}\,\eta_{KI}= L(-\phi)^K_{\ I}\,\eta_{KJ}\,.
\end{eqnarray} In consequence of the symmetry properties of the said generators listed in Conv.\,\ref{conv:Cliff}, we also obtain the useful equality (writing $\,\phi_{IJ}\equiv\phi^{KL}\,\eta_{KI}\,\eta_{LJ}\,$ where necessary) \begin{eqnarray} C\cdot S(\phi)\cdot C^{-1}&\equiv&C\cdot{\rm exp}\bigl(\tfrac{1}{2}\,\phi_{IJ}\,\Gamma^I\cdot\Gamma^J\bigr)\cdot C^{-1}={\rm exp}\bigl(\tfrac{1}{2}\,\phi_{IJ}\,C\cdot\Gamma^I\cdot C^{-1}\cdot C\cdot\Gamma^J\cdot C^{-1}\bigr)\cr\cr &=&{\rm exp}\bigl(\tfrac{1}{2}\,\phi_{IJ}\,\bigl(-\Gamma^{I\,{\rm T}}\bigr)\cdot\bigl(-\Gamma^{J\,{\rm T}}\bigr)\bigr)={\rm exp}\bigl(\tfrac{1}{2}\,\phi_{IJ}\,\bigl(\Gamma^J\cdot\Gamma^I\bigr)^{\rm T}\bigr)\cr\cr &=&{\rm exp}\bigl(\tfrac{1}{2}\,\phi_{IJ}\,\Gamma^J\cdot\Gamma^I\bigr)^{\rm T}\equiv{\rm exp}\bigl(\tfrac{1}{2}\,\phi_{IJ}\,\bigl(\{\Gamma^J,\Gamma^I\}-\Gamma^I\cdot\Gamma^J\bigr)\bigr)^{\rm T}\cr\cr &=&{\rm exp}\bigl(\tfrac{1}{2}\,\phi_{IJ}\,\bigl(2\eta^{JI}\,{\boldsymbol{1}}_{D_{1,d-1}}-\Gamma^I\cdot\Gamma^J\bigr)\bigr)^{\rm T}={\rm exp}\bigl(-\tfrac{1}{2}\,\phi_{IJ}\,\Gamma^I\cdot\Gamma^J\bigr)^{\rm T}\cr\cr &\equiv&S(-\phi)^{\rm T}\,.\label{eq:LorintC} \end{eqnarray} We may finally write out the left-invariant supervector fields on $\,{\rm s}\mathscr{P}(1,d-1;N)$: \begin{eqnarray}\nn &P_I(\theta,x,\phi)= L(\phi)^J_{\ I}\,\tfrac{\partial\ }{\partial x^J}\,,\qquad\qquad Q^i_\a(\theta,x,\phi)=S(\phi)^\beta_{\ \a}\,\bigl(\tfrac{\overrightarrow\partial\ }{\partial\theta^\beta_i}+\tfrac{1}{2}\,\theta^\gamma_i\,C_{\gamma\delta}\,\Gamma^{I\,\delta}_{\ \ \beta}\,\tfrac{\partial\ }{\partial x^I}\bigr)\,,&\cr\cr &J_{IJ}(\theta,x,\phi)=\tfrac{{\mathsf d}\ }{{\mathsf d} t}\mathord{\restriction}_{t=0}\,\widetilde\phi(\phi,t\,\phi_{IJ})\,,\quad(\phi_{IJ})^{KL}=\delta^K_I\,\delta^L_J-\delta^K_J\,\delta^L_I\,.& \end{eqnarray} These satisfy the familiar super-Poincar\'e (super)algebra \begin{eqnarray} &[P_I,P_J]=0\,,\qquad\qquad[Q^i_\a,Q^j_\beta]=\delta^{ij}\,C_{\a\gamma}\,\Gamma^{I\,\gamma}_{\ \ 
\beta}\,P_I\,,\qquad\qquad[P_I,Q^i_\a]=0\,,&\cr\cr &[J_{KL},J_{MN}]=\eta_{KN}\,J_{LM}-\eta_{KM}\,J_{LN}+\eta_{LM}\,J_{KN}-\eta_{LN}\,J_{KM}\,,&\label{eq:sPoincalg}\\ \cr &[J_{KL},P_M]=\eta_{LM}\,P_K-\eta_{KM}\,P_L\,,\qquad\qquad[J_{KL},Q^i_\a]=\tfrac{1}{2}\,\bigl(\Gamma_{KL}\bigr)^\beta_{\ \a}\,Q^i_\beta\,.&\nonumber \end{eqnarray} We shall also need the right-invariant supervector fields on $\,{\rm s}\mathscr{P}(1,d-1;N)$: \begin{eqnarray}\nn &\mathscr{P}_I(\theta,x,\phi)=\tfrac{\partial\ }{\partial x^I}\,,\qquad\qquad\mathscr{Q}^i_\a(\theta,x,\phi)=\tfrac{\overrightarrow\partial\ }{\partial\theta^\a_i}-\tfrac{1}{2}\,\theta^\beta_i\,C_{\beta\gamma}\,\Gamma^{I\,\gamma}_{\ \ \a}\,\tfrac{\partial\ }{\partial x^I}\,,&\cr\cr &\mathscr{J}_{IJ}(\theta,x,\phi)=x^K\,\bigl(\eta_{KJ}\,\tfrac{\partial\ }{\partial x^I}-\eta_{KI}\,\tfrac{\partial\ }{\partial x^J}\bigr)+\tfrac{1}{2}\,\bigl(\Gamma_{IJ}\bigr)^\a_{\ \beta}\,\theta^\beta_i\,\tfrac{\partial\ }{\partial\theta^\a_i}+\tfrac{{\mathsf d}\ }{{\mathsf d} t}\mathord{\restriction}_{t=0}\,\widetilde\phi(t\,\phi_{IJ},\phi)\,,& \end{eqnarray} with the corresponding super-Poincar\'e (super)algebra \begin{eqnarray} &[\mathscr{P}_I,\mathscr{P}_J]=0\,,\qquad\qquad[\mathscr{Q}^i_\a,\mathscr{Q}^j_\beta]=-\delta^{ij}\,C_{\a\gamma}\,\Gamma^{I\,\gamma}_{\ \ \beta}\,\mathscr{P}_I\,,\qquad\qquad[\mathscr{P}_I,\mathscr{Q}^i_\a]=0\,,&\cr\cr &[\mathscr{J}_{KL},\mathscr{J}_{MN}]=-\eta_{KN}\,\mathscr{J}_{LM}+\eta_{KM}\,\mathscr{J}_{LN}-\eta_{LM}\,\mathscr{J}_{KN}+\eta_{LN}\,\mathscr{J}_{KM}\,,&\label{eq:sPoincalgR}\\ \cr &[\mathscr{J}_{KL},\mathscr{P}_M]=-\eta_{LM}\,\mathscr{P}_K+\eta_{KM}\,\mathscr{P}_L\,,\qquad\qquad[\mathscr{J}_{KL},\mathscr{Q}^i_\a]=-\tfrac{1}{2}\,\bigl(\Gamma_{KL}\bigr)^\beta_{\ \a}\,\mathscr{Q}^i_\beta\,.&\nonumber \end{eqnarray} In their derivation, we employed the explicit vector and spinor representations \begin{eqnarray}\label{eq:Lorexpl} (J_{KL})^I_{\ J}=\delta^I_{\ K}\,\eta_{LJ}-\delta^I_{\ 
L}\,\eta_{KJ}\,,\qquad\qquad(J_{KL})^\a_{\beta}=\tfrac{1}{2}\,\bigl(\Gamma_{KL}\bigr)^\a_{\ \beta} \end{eqnarray} of the Lorentz generators. The above data enable us to describe and manipulate, in a particularly convenient manner, the dual left-invariant Maurer--Cartan 1-forms which are instrumental in defining the super-$\sigma$-models. Thus, we parametrise the group as ($t_A\equiv t_A(0,0,0)$) \begin{eqnarray}\nn g(\theta,x,\phi)={\rm e}^{x^I\,P_I}\cdot{\rm e}^{\theta^\a_i\,Q^i_\a}\cdot{\rm e}^{\frac{1}{2}\,\phi^{KL}\,J_{KL}}\in{\rm s}\mathscr{P}(1,d-1;N) \end{eqnarray} and obtain the desired decomposition \begin{eqnarray}\nn g^*\theta_{\rm L}(\theta,x,\phi)&=&{\rm e}^{-\frac{1}{2}\,\phi^{KL}\,J_{KL}}\cdot{\rm e}^{-\theta^\a_i\,Q^i_\a}\cdot{\rm e}^{-x^I\,P_I}\,{\mathsf d}\bigl({\rm e}^{x^I\,P_I}\cdot{\rm e}^{\theta^\a_i\,Q^i_\a}\cdot{\rm e}^{\frac{1}{2}\,\phi^{KL}\,J_{KL}}\bigr)\cr\cr &=&{\mathsf d} x^I\otimes_{\mathbb{R}}{\mathsf T}_e{\rm Ad}_{{\rm e}^{-\frac{1}{2}\,\phi^{KL}\,J_{KL}}}(P_I)+\bigl({\rm id}_{\Omega^1({\rm s}\mathscr{P}(1,d-1;1))}\otimes{\mathsf T}_e{\rm Ad}_{{\rm e}^{-\frac{1}{2}\,\phi^{KL}\,J_{KL}}}\bigr)\bigl({\rm e}^{-\theta^\a_i\,Q^i_\a}\,{\mathsf d}{\rm e}^{\theta^\a_i\,Q^i_\a}\bigr)\cr\cr &&+{\rm e}^{-\frac{1}{2}\,\phi^{KL}\,J_{KL}}\,{\mathsf d}{\rm e}^{\frac{1}{2}\,\phi^{KL}\,J_{KL}}\cr\cr &=& L(-\phi)^I_{\ J}\,{\mathsf d} x^J\otimes_{\mathbb{R}} P_I+\tfrac{1}{2}\,\theta^\a_i\,C_{\a\beta}\,\Gamma^{I\,\beta}_{\ \ \gamma}\,{\mathsf d}\theta^\gamma_i\otimes_{\mathbb{R}}{\mathsf T}_e{\rm Ad}_{{\rm e}^{-\frac{1}{2}\,\phi^{KL}\,J_{KL}}}(P_I)\cr\cr &&+{\mathsf d}\theta^\a_i\otimes_{\mathbb{R}}{\mathsf T}_e{\rm Ad}_{{\rm e}^{-\frac{1}{2}\,\phi^{KL}\,J_{KL}}}(Q_\a^i)+\frac{1}{2}\, L(-\phi)^K_{\ M}\,{\mathsf d} L(\phi)^M_{\ N}\,\eta^{NL}\otimes_{\mathbb{R}} J_{KL}\cr\cr &=& L(-\phi)^I_{\ J}\,\bigl({\mathsf d} x^J+\tfrac{1}{2}\,\theta^\a_i\,C_{\a\beta}\,\Gamma^{J\,\beta}_{\ \ \gamma}\,{\mathsf d}\theta^\gamma_i\bigr)\otimes_{\mathbb{R}} 
P_I+S(-\phi)^\a_{\ \beta}\,{\mathsf d}\theta_i^\beta\otimes_{\mathbb{R}} Q_\a^i\cr\cr &&+\frac{1}{2}\, L(-\phi)^K_{\ M}\,{\mathsf d} L(\phi)^M_{\ N}\,\eta^{NL}\otimes_{\mathbb{R}} J_{KL} \end{eqnarray} of the Maurer--Cartan 1-form. In its derivation, we have used the following identity (in which we have fixed $\,n\in{\mathbb{N}}^\x\,$ and suppressed the representation label $\,i\,$ for the sake of transparency): \begin{eqnarray}\nn &&{\mathsf d}(\theta^{\a_1}\,Q_{\a_1}\,\theta^{\a_2}\,Q_{\a_2}\,\cdots\,\theta^{\a_n}\,Q_{\a_n})=\sum_{k=1}^n\,\theta^{\a_1}\,Q_{\a_1}\,\theta^{\a_2}\,Q_{\a_2}\,\cdots\,\theta^{\a_{k-1}}\,Q_{\a_{k-1}}\,{\mathsf d}\theta^{\a_k}\,Q_{\a_k}\,\theta^{\a_{k+1}}\,Q_{\a_{k+1}}\,\cdots\,\theta^{\a_n}\,Q_{\a_n}\cr\cr &=&\sum_{k=1}^n\,\theta^{\a_1}\,Q_{\a_1}\,\theta^{\a_2}\,Q_{\a_2}\,\cdots\,\theta^{\a_{k-1}}\,Q_{\a_{k-1}}\,\theta^{\a_{k+1}}\,\bigl(\{Q_{\a_k},Q_{\a_{k+1}}\}-Q_{\a_{k+1}}\,Q_{\a_k}\bigr)\,\theta^{\a_{k+2}}\,Q_{\a_{k+2}}\,\cdots\,\theta^{\a_n}\,Q_{\a_n}\,{\mathsf d}\theta^{\a_k}\cr\cr &=&\sum_{k=1}^n\,\theta^{\a_1}\,Q_{\a_1}\,\theta^{\a_2}\,Q_{\a_2}\,\cdots\,\theta^{\a_{k-1}}\,Q_{\a_{k-1}}\,\theta^{\a_{k+2}}\,Q_{\a_{k+2}}\,\cdots\,\theta^{\a_n}\,Q_{\a_n}\,\theta^\a\,C_{\a\beta}\,\Gamma^{I\,\beta}_{\ \ \gamma}\,{\mathsf d}\theta^\gamma\,P_I\cr\cr &&+\sum_{k=1}^n\,\theta^{\a_1}\,Q_{\a_1}\,\theta^{\a_2}\,Q_{\a_2}\,\cdots\,\theta^{\a_{k-1}}\,Q_{\a_{k-1}}\,\theta^{\a_{k+1}}\,Q_{\a_{k+1}}\,\theta^{\a_{k+2}}\,\bigl(\{Q_{\a_k},Q_{\a_{k+2}}\}\cr\cr &&-Q_{\a_{k+2}}\,Q_{\a_k}\bigr)\,\theta^{\a_{k+3}}\,Q_{\a_{k+3}}\,\cdots\,\theta^{\a_n}\,Q_{\a_n}\,{\mathsf d}\theta^{\a_k}=\ldots\cr\cr &=&\bigl(\sum_{k=1}^n\,(n-k)\,\theta^{\a_1}\,Q_{\a_1}\,\theta^{\a_2}\,Q_{\a_2}\,\cdots\,\theta^{\a_{n-2}}\,Q_{\a_{n-2}}\bigr)\,\theta^\a\,C_{\a\beta}\,\Gamma^{I\,\beta}_{\ \ \gamma}\,{\mathsf d}\theta^\gamma\,P_I\cr\cr &&+\bigl(\sum_{k=1}^n\,\theta^{\a_1}\,Q_{\a_1}\,\theta^{\a_2}\,Q_{\a_2}\,\cdots\,\theta^{\a_{n-1}}\,Q_{\a_{n-1}}\bigr)\,{\mathsf 
d}\theta^\a\,Q_\a\cr\cr &=&\bigl(n(n-1)\,\theta^{\a_1}\,Q_{\a_1}\,\theta^{\a_2}\,Q_{\a_2}\,\cdots\,\theta^{\a_{n-2}}\,Q_{\a_{n-2}}\bigr)\,\tfrac{1}{2}\,\theta^\a\,C_{\a\beta}\,\Gamma^{I\,\beta}_{\ \ \gamma}\,{\mathsf d}\theta^\gamma\,P_I\cr\cr &&+n\,\theta^{\a_1}\,Q_{\a_1}\,\theta^{\a_2}\,Q_{\a_2}\,\cdots\,\theta^{\a_{n-1}}\,Q_{\a_{n-1}}\,{\mathsf d}\theta^\a\,Q_\a\,. \end{eqnarray} In this manner, we identify the sought-after component left-invariant 1-forms in the decomposition \begin{eqnarray}\nn g^*\theta_{\rm L}(\theta,x,\phi)=\theta_{\rm L}^I(\theta,x,\phi)\otimes_{\mathbb{R}} P_I+\theta_{{\rm L}\,i}^\a(\theta,x,\phi)\otimes_{\mathbb{R}} Q^i_\a+\theta_{\rm L}^{KL}(\theta,x,\phi)\otimes_{\mathbb{R}} J_{KL} \end{eqnarray} as \begin{eqnarray} \theta_{\rm L}^I(\theta,x,\phi)&=& L(-\phi)^I_{\ J}\,\bigl({\mathsf d} x^J+\tfrac{1}{2}\,\theta^\a_i\,C_{\a\beta}\,\Gamma^{J\,\beta}_{\ \ \gamma}\,{\mathsf d}\theta^\gamma_i\bigr)\,,\cr\cr \theta_{{\rm L}\,i}^\a(\theta,x,\phi)&=&S(-\phi)^\a_{\ \beta}\,{\mathsf d}\theta_i^\beta\,,\label{eq:thetaLdef}\\\cr \theta_{\rm L}^{KL}(\theta,x,\phi)&=& L(-\phi)^K_{\ M}\,{\mathsf d} L(\phi)^M_{\ N}\,\eta^{NL}\,.\nonumber \end{eqnarray} Their invariance with respect to left translations on $\,{\rm s}\mathscr{P}(1,d-1;N)\,$ can also be checked directly. 
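By way of illustration, we spell out the check for a supertranslation, an elementary computation recorded here solely for the reader's convenience. In the exponential parametrisation, the left-regular action reads (as follows from the Baker--Campbell--Hausdorff formula and the algebra \eqref{eq:sPoincalg}) \begin{eqnarray}\nn \ell_{(\varepsilon,y)}\ :\ \bigl(\theta^\a_i,x^I,\phi\bigr)\longmapsto\bigl(\theta^\a_i+\varepsilon^\a_i,x^I+y^I-\tfrac{1}{2}\,\varepsilon^\a_i\,C_{\a\beta}\,\Gamma^{I\,\beta}_{\ \ \gamma}\,\theta^\gamma_i,\phi\bigr)\,, \end{eqnarray} whence $\,\ell_{(\varepsilon,y)}^*{\mathsf d}\theta^\beta_i={\mathsf d}\theta^\beta_i\,$ and so $\,\ell_{(\varepsilon,y)}^*\theta_{{\rm L}\,i}^\a=\theta_{{\rm L}\,i}^\a\,$ and $\,\ell_{(\varepsilon,y)}^*\theta_{\rm L}^{KL}=\theta_{\rm L}^{KL}\,$ ($\phi$ being inert), while, $\,\varepsilon\,$ being constant, so that $\,{\mathsf d}\bigl(\varepsilon^\a_i\,C_{\a\beta}\,\Gamma^{I\,\beta}_{\ \ \gamma}\,\theta^\gamma_i\bigr)=\varepsilon^\a_i\,C_{\a\beta}\,\Gamma^{I\,\beta}_{\ \ \gamma}\,{\mathsf d}\theta^\gamma_i\,$ in the conventions in force, \begin{eqnarray}\nn \ell_{(\varepsilon,y)}^*\theta_{\rm L}^I=L(-\phi)^I_{\ J}\,\bigl({\mathsf d} x^J-\tfrac{1}{2}\,\varepsilon^\a_i\,C_{\a\beta}\,\Gamma^{J\,\beta}_{\ \ \gamma}\,{\mathsf d}\theta^\gamma_i+\tfrac{1}{2}\,\bigl(\theta^\a_i+\varepsilon^\a_i\bigr)\,C_{\a\beta}\,\Gamma^{J\,\beta}_{\ \ \gamma}\,{\mathsf d}\theta^\gamma_i\bigr)=\theta_{\rm L}^I(\theta,x,\phi)\,, \end{eqnarray} the term produced by the shift of $\,x^J\,$ cancelling the cross term generated by the shift of $\,\theta^\a_i$.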
They satisfy the Maurer--Cartan equations \begin{eqnarray}\nn {\mathsf d}\theta_{\rm L}^I&=&-\eta_{JK}\,\theta_{\rm L}^{IJ}\wedge\theta_{\rm L}^K+\tfrac{1}{2}\, L(-\phi)^I_{\ J}\,S(\phi)^\a_{\ \delta}\,S(\phi)^\gamma_{\ \epsilon}\,\theta_{{\rm L}\,i}^\delta\wedge C_{\a\beta}\,\Gamma^{J\,\beta}_{\ \ \gamma}\,\theta_{{\rm L}\,i}^\epsilon\cr\cr &=&-\eta_{JK}\,\theta_{\rm L}^{IJ}\wedge\theta_{\rm L}^K+\tfrac{1}{2}\, L(-\phi)^I_{\ J}\,S(-\phi)^\beta_{\ \delta}\,S(\phi)^\gamma_{\ \epsilon}\,\theta_{{\rm L}\,i}^\a\wedge C_{\a\beta}\,\Gamma^{J\,\delta}_{\ \ \gamma}\,\theta_{{\rm L}\,i}^\epsilon\cr\cr &=&-\eta_{JK}\,\theta_{\rm L}^{IJ}\wedge\theta_{\rm L}^K+\tfrac{1}{2}\, L(-\phi)^I_{\ J}\, L(\phi)^J_{\ K}\,\theta_{{\rm L}\,i}^\a\wedge C_{\a\beta}\,\Gamma^{K\,\beta}_{\ \ \gamma}\,\theta_{{\rm L}\,i}^\gamma\cr\cr &=&-\eta_{JK}\,\theta_{\rm L}^{IJ}\wedge\theta_{\rm L}^K+\tfrac{1}{2}\,\theta_{{\rm L}\,i}^\a\wedge C_{\a\beta}\,\Gamma^{I\,\beta}_{\ \ \gamma}\,\theta_{{\rm L}\,i}^\gamma\,,\cr\cr\cr {\mathsf d}\theta_{{\rm L}\,i}^\a&=&{\mathsf d} S(-\phi)^\a_{\ \beta}\wedge{\mathsf d}\theta_i^\beta\equiv{\mathsf d}\bigl({\rm e}^{-\frac{1}{2}\,\phi^{KL}\,J_{KL}}\bigr)^\a_{\beta}\wedge{\mathsf d}\theta_i^\beta\cr\cr &=&-\bigl({\rm e}^{-\frac{1}{2}\,\phi^{KL}\,J_{KL}}\,{\mathsf d}{\rm e}^{\frac{1}{2}\,\phi^{KL}\,J_{KL}}\,{\rm e}^{-\frac{1}{2}\,\phi^{KL}\,J_{KL}}\bigr)^\a_{\beta}\wedge{\mathsf d}\theta_i^\beta=-\bigl({\rm e}^{-\frac{1}{2}\,\phi^{KL}\,J_{KL}}\,{\mathsf d}{\rm e}^{\frac{1}{2}\,\phi^{KL}\,J_{KL}}\bigr)^\a_{\beta}\wedge\theta_{{\rm L}\,i}^\beta\cr\cr &=&-\tfrac{1}{2}\,\theta_{\rm L}^{KL}\,\bigl(J_{KL}\bigr)^\a_{\beta}\wedge\theta_{{\rm L}\,i}^\beta=-\tfrac{1}{4}\,\theta_{\rm L}^{KL}\wedge\bigl(\Gamma_{KL}\bigr)^\a_{\ \beta}\,\theta_{{\rm L}\,i}^\beta\,,\cr\cr\cr {\mathsf d}\theta_{\rm L}^{KL}&=&-\eta_{MN}\,\theta_{\rm L}^{KM}\wedge\theta_{\rm L}^{NL}\,, \end{eqnarray} dictated by the algebra \eqref{eq:sPoincalg} in consequence of the standard (Free Differential-Algebraic) 
relation between the Chevalley--Eilenberg model of the (super-)Lie-algebra cohomology and the Cartan--Eilenberg model of the Lie-(super)group invariant de Rham cohomology. Above, we used \Reqref{eq:Lorexpl} to compute the differential \begin{eqnarray}\nn {\mathsf d} L(-\phi)^I_{\ J}&=&-\bigl({\rm e}^{-\frac{1}{2}\,\phi^{KL}\,J_{KL}}\,{\mathsf d}{\rm e}^{\frac{1}{2}\,\phi^{KL}\,J_{KL}}\,{\rm e}^{-\frac{1}{2}\,\phi^{KL}\,J_{KL}}\bigr)^I_{\ J}=-\tfrac{1}{2}\,\theta_{\rm L}^{KL}\,\bigl(J_{KL}\bigr)^I_{\ M}\, L(-\phi)^M_{\ J}\cr\cr &=&-\theta_{\rm L}^{IK}\,\eta_{KL}\, L(-\phi)^L_{\ J} \end{eqnarray} and its spinorial variant. Finally, we are ready to specify the (super)group-theoretic form of the lagrangean fields of the two formulations of the Green--Schwarz super-$\sigma$-model, regarded as supervariants (of various dimensionality) of the Wess--Zumino--Witten model of \Rcite{Witten:1983ar}. Thus, in the Nambu--Goto formulation, we take the ($\gamma$-shifted) lagrangean field in the form \begin{eqnarray}\nn \gamma\circ{\mathbb{X}}_{({\rm NG})}\ :\ \Omega\longrightarrow{\rm sMink}^{1,d-1\,\vert\,ND_{1,d-1}}\longrightarrow{\rm s}\mathscr{P}(1,d-1;N)\ :\ \sigma\longmapsto\bigl(x^I(\sigma),\theta^\a_i(\sigma)\bigr)\longmapsto{\rm e}^{x^I(\sigma)\,P_I}\cdot{\rm e}^{\theta^\a_i(\sigma)\,Q^i_\a}\,. \end{eqnarray} In the Hughes--Polchinski formulation, on the other hand, we further distinguish among the Gra\ss mann-even coordinates the first $p+1$ ones, to be denoted as $\,\{x^{\unl A}\}^{\unl A\in\ovl{0,p}}$,\ which are to be thought of as describing the embedding of the $(p+1)$-dimensional worldvolume of the super-$p$-brane in the $d$-dimensional target. 
Local departures of the embedding from flatness are parametrised by additional Goldstone fields $\,\{\phi^{\unl A\widehat S}\}^{(\unl A,\widehat S)\in\ovl{0,p}\x\ovl{p+1,d-1}}\,$ associated with generators $\,\{J_{\unl A\widehat S}\}_{(\unl A,\widehat S)\in\ovl{0,p}\x\ovl{p+1,d-1}}\,$ of the Lorentz transformations broken by the embedding. Altogether, the lagrangean field of the model takes the form \begin{eqnarray} \gamma\circ{\mathbb{X}}_{({\rm HP})}\ &:&\ \Omega\longrightarrow{\rm sMink}^{1,d-1\,\vert\,ND_{1,d-1}}\x\bigl({\rm Spin}(1,d-1)/{\rm Spin}(1,p)\x{\rm Spin}(d-p-1)\bigr)\longrightarrow{\rm s}\mathscr{P}(1,d-1;N)\cr\cr &:&\ \sigma\longmapsto\bigl(x^I(\sigma),\theta^\a_i(\sigma),\phi^{\unl A\widehat S}(\sigma)\bigr)\longmapsto{\rm e}^{x^I(\sigma)\,P_I}\cdot{\rm e}^{\theta^\a_i(\sigma)\,Q^i_\a}\cdot{\rm e}^{\phi^{\unl A\widehat S}(\sigma)\,J_{\unl A\widehat S}}\,.\nonumber\\ \label{eq:HParaMink} \end{eqnarray} From now onwards, we shall, in our analysis, use the shorthand notation (and make the assumptions) of Conv.\,\ref{conv:Cliff} and consider non-extended supersymmetry, \begin{eqnarray}\nn N=1\,, \end{eqnarray} for the sake of clarity, so that the generation index $\,i\,$ can be suppressed. In particular, we denote the corresponding left-invariant 1-forms as \begin{eqnarray}\nn S(-\phi)^\a_{\ \beta}\,\sigma^\beta(\theta,x)=\Sigma^\a_{\rm L}(\theta,x,\phi)\equiv\theta^\a_{\rm L}(\theta,x,\phi) \end{eqnarray} in order to distinguish them clearly from their spacetime-indexed counterparts \begin{eqnarray}\label{eq:eIdef} \theta_{\rm L}^I(\theta,x,\phi)\equiv L(-\phi)^I_{\ J}\,e^J(\theta,x) \end{eqnarray} in index-free expressions with contracted spinorial indices, such as the following one \begin{eqnarray}\nn \theta^\a_{\rm L}\wedge\ovl\Gamma{}^I_{\a\beta}\,\theta^\beta_{\rm L}\equiv\ovl\Sigma_{\rm L}\wedge\Gamma^I\,\Sigma_{\rm L}\,. 
\end{eqnarray} The field-theoretic relation between the two descriptions of the lagrangean field of the GS super-$\sigma$-model introduced above is made precise in \begin{Prop}\label{prop:sMinkHPvsNG} Fix $\,d\in{\mathbb{N}}\,$ and $\,p\in\ovl{0,d-1}\,$ and consider the Minkowski spacetime $\,({\mathbb{R}}^{1,d-1},\eta)$,\ regarded as a Lie group. Take the orthogonal decomposition of its Lie algebra \begin{eqnarray}\nn \bigoplus_{I=0}^{d-1}\,\corr{P_I}_{\mathbb{R}}\equiv\gt{t}^{(0)} \end{eqnarray} induced by $\,\eta$, \begin{eqnarray}\label{eq:splitMink} \gt{t}^{(0)}=\gt{t}^{(0)}_{\rm vac}\circlesign{\perp}\gt{e}^{(0)}\,, \end{eqnarray} with \begin{eqnarray}\nn \gt{t}^{(0)}_{\rm vac}=\bigoplus_{\unl A=0}^p\,\corr{P_{\unl A}}_{\mathbb{R}}\,,\qquad\qquad\gt{e}^{(0)}=\bigoplus_{\widehat S=p+1}^{d-1}\,\corr{P_{\widehat S}}_{\mathbb{R}}\,. \end{eqnarray} Next, extend the above Lie algebra to the full Poincar\'e algebra by adjoining the generators of the Lorentz algebra \begin{eqnarray}\nn {\gt so}(1,d-1)=\bigoplus_{K,L=0 \atop K<L}^{d-1}\,\corr{J_{KL}}_{\mathbb{R}}\equiv\gt{r}\,, \end{eqnarray} further decomposed, relative to the splitting \eqref{eq:splitMink}, into the Lie subalgebra \begin{eqnarray}\nn \bigoplus_{\unl A,\unl B=0 \atop\unl A<\unl B}^{p}\,\corr{J_{\unl A\unl B}}_{\mathbb{R}}\oplus\bigoplus_{\widehat S,\widehat T=p+1 \atop \widehat S<\widehat T}^{d-1}\,\corr{J_{\widehat S\widehat T}}_{\mathbb{R}}\equiv\gt{r}_{\rm vac} \end{eqnarray} of Lorentz transformations preserving \eqref{eq:splitMink}, and its direct-sum completion \begin{eqnarray}\nn \bigoplus_{(\unl A,\widehat S)\in\ovl{0,p}\x\ovl{p+1,d-1}}\,\corr{J_{\unl A\widehat S}}_{\mathbb{R}}\equiv\gt{d}\,. \end{eqnarray} Finally, embed the Poincar\'e group generated by $\,\gt{t}^{(0)}\oplus\gt{r}\,$ as a Lie subgroup in the super-Poincar\'e supergroup $\,{\rm s}\mathscr{P}(1,d-1;1)\equiv{\rm G}\,$ as in \Reqref{eq:sPoinc}. 
Given a projector $\,{\mathsf P}\in{\rm End}_{\mathbb{C}}\,S_{1,d-1}\,$ on the spinor module $\,S_{1,d-1}\,$ correlated with the decomposition \eqref{eq:splitMink} through the relation \begin{eqnarray}\nn \{{\mathsf P}^\gamma_{\ \a}\,Q_\gamma,{\mathsf P}^\delta_{\ \beta}\,Q_\delta\}=\bigl({\mathsf P}^{\rm T}\cdot\ovl\Gamma^{\unl A}\cdot{\mathsf P}\bigr)_{\a\beta}\,P_{\unl A} \end{eqnarray} and thus determining a Lie superalgebra \begin{eqnarray}\nn \gt{t}_{\rm vac}:=\gt{t}^{(0)}_{\rm vac}\oplus\gt{t}^{(1)}_{\rm vac} \end{eqnarray} with \begin{eqnarray}\nn \gt{t}^{(1)}_{\rm vac}:=\corr{\ {\mathsf P}^\beta_{\ \a}\,Q_\beta\ \vert\ \a\in\ovl{1,D_{1,d-1}}\ }_{\mathbb{R}}\cong{\rm im}\,{\mathsf P}\,, \end{eqnarray} define a Lie subsupergroup $\,{\rm H}\subset{\rm s}\mathscr{P}(1,d-1;1)\,$ as the semidirect product of the Lorentz group $\,{\rm Spin}(1,d-1)\,$ with the Lie supergroup generated by $\,\gt{t}_{\rm vac}$,\ which we shall write symbolically as \begin{eqnarray}\nn {\rm H}={\rm exp}(\gt{t}_{\rm vac})\rtimes{\rm Spin}(1,d-1)\,. \end{eqnarray} The data enumerated above satisfy the assumptions of Prop.\,\ref{prop:IHCartMink}, and so the corresponding Green--Schwarz super-$\sigma$-model on $\,{\rm G}/{\rm H}\,$ in the Hughes--Polchinski formulation is equivalent to the same model in the Nambu--Goto formulation. \end{Prop} \noindent\begin{proof} The proposition is fairly self-evident. Indeed, the restrictions of the Minkowskian metric $\,\eta\,$ to the directions $\,\partial_{\unl A},\ \unl A\in\ovl{0,p}\,$ and $\,\partial_{\widehat S},\ \widehat S\in\ovl{p+1,d-1}\,$ in the tangent sheaf define -- respectively -- the non-degenerate bilinear symmetric forms $\,\unl\gamma\,$ and $\,\widehat\gamma\,$ mentioned in Prop.\,\ref{prop:IHCartMink}. 
Furthermore, the action of the Lorentz generators $\,\{J_{\unl A\unl B}\}_{\unl A,\unl B\in\ovl{0,p}}\cup\{J_{\widehat S\widehat T}\}_{\widehat S,\widehat T\in\ovl{p+1,d-1}}\,$ on the momenta $\,\{P_{\unl C}\}_{\unl C\in\ovl{0,p}}\,$ integrates to a unimodular action of the Lie group $\,{\rm Spin}(1,p)\x{\rm Spin}(d-p-1)\,$ on $\,\gt{t}_{\rm vac}^{(0)}$.\ Finally, we readily check that the identity \eqref{eq:f2fIH} is trivially satisfied, \begin{eqnarray}\nn \eta^{\unl A\unl B}\,f_{\unl C\widehat S,\unl B}^{\qquad \widehat T}\,\eta_{\widehat T\widehat U}\equiv-\eta^{\unl A\unl B}\,\eta_{\unl C\unl B}\,\delta_{\widehat S}^{\ \widehat T}\,\eta_{\widehat T\widehat U}=-\delta_{\unl C}^{\ \unl A}\,\eta_{\widehat S\widehat U}\equiv-f_{\unl C\widehat S,\widehat U}^{\qquad \unl A}\,. \end{eqnarray} \end{proof} \subsection{The $\,N=1\,$ GS super-$(p+2)$-cocycles and the ensuing ``old branescan''}\label{ref:GScocyc} The super-$p$-branes whose dynamics we intend to geometrise carry topological charge, and so their propagation defines a charge current to which a gauge field couples in the usual geometric manner, that is, through pullback (of the gauge potential) to the worldvolume of the charged object. The coupling gives rise to corrections to the condition of minimality of the classical embedding that follows from minimising the metric term of the (super-)$\sigma$-model action functional, and the corrections are determined by the field strength of the said gauge field. As was announced at the beginning of Sec.\,\ref{sec:stensor}, these field strengths are certain distinguished ${\rm s}\mathscr{P}(1,d-1;1)$-invariant de Rham super-$(p+2)$-cocycles that -- owing to the topological triviality of the super-Minkowski space -- admit global primitives, none of which, however, is ${\rm s}\mathscr{P}(1,d-1;1)$-invariant. 
The super-$(p+2)$-cocycles that we want to consider take the general form \begin{eqnarray}\label{eq:GScurv}\qquad \underset{\tx{\ciut{(p+2)}}}{\chi}=\ovl\Sigma_{\rm L}\wedge\Gamma_{I_1 I_2\ldots I_p}\,\Sigma_{\rm L}\wedge\theta_{\rm L}^{I_1 I_2\ldots I_p}\,,\qquad\qquad\theta_{\rm L}^{I_1 I_2\ldots I_p}=\theta_{\rm L}^{I_1}\wedge\theta_{\rm L}^{I_2}\wedge\cdots\wedge\theta_{\rm L}^{I_p}\,,\qquad p>0\,, \end{eqnarray} with the sole exception \begin{eqnarray}\label{eq:GScurv0} \underset{\tx{\ciut{(2)}}}{\chi}=\ovl\Sigma_{\rm L}\wedge\Gamma_{11}\,\Sigma_{\rm L} \end{eqnarray} occurring for $\,p=0\,$ and defined in terms of the volume element $\,\Gamma_{11}\,$ of the Clifford algebra $\,{\rm Cliff}({\mathbb{R}}^{1,9})$,\ described in App.\,\ref{conv:Cliff}. They will be jointly referred to as {\bf Green--Schwarz} ({\bf GS}) {\bf super-$(p+2)$-cocycles} in what follows. These are readily seen to descend to $\,{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}\,$ as \begin{eqnarray}\nn \underset{\tx{\ciut{(2)}}}{\chi}(\theta,x,\phi)&\equiv&\sigma^{\rm T}(\theta)\,\wedge S(-\phi)^{\rm T}\cdot C\cdot\Gamma_{11}\cdot S(-\phi)\,\sigma(\theta)=\ovl\sigma(\theta)\wedge S(\phi)\cdot\Gamma_{11}\cdot S(-\phi)\,\sigma(\theta)\cr\cr &=&{\rm det}_{(10)}\,\bigl(L(\phi)\bigr)\cdot\bigl(\ovl\sigma\wedge\Gamma_{11}\,\sigma\bigr)(\theta)=\bigl(\ovl\sigma\wedge\Gamma_{11}\,\sigma\bigr)(\theta)\equiv\underset{\tx{\ciut{(2)}}}{\chi}(\theta,x,0) \end{eqnarray} and -- for $\,p>0\,$ -- \begin{eqnarray}\nn \underset{\tx{\ciut{(p+2)}}}{\chi}(\theta,x,\phi)&\equiv&\sigma^{\rm T}(\theta)\wedge S(-\phi)^{\rm T}\cdot C\cdot\Gamma_{I_1 I_2\ldots I_p}\cdot S(-\phi)\,\sigma(\theta)\wedge\theta_{\rm L}^{I_1 I_2\ldots I_p}(\theta,x,\phi)\cr\cr &=&\sigma^{\rm T}(\theta)\wedge C\cdot S(\phi)\cdot\Gamma_{I_1 I_2\ldots I_p}\cdot S(-\phi)\,\sigma(\theta)\wedge\theta_{\rm L}^{I_1 I_2\ldots I_p}(\theta,x,\phi)\cr\cr &=&\sigma^{\rm T}(\theta)\wedge\ovl\Gamma{}^{K_1 K_2\ldots K_p}\,\sigma(\theta)\cr\cr &&\wedge\eta_{I_1 
J_1}\,\eta_{I_2 J_2}\,\cdots\,\eta_{I_p J_p}\, L(-\phi)^{J_1}_{\ K_1}\, L(-\phi)^{J_2}_{\ K_2}\,\cdots\, L(-\phi)^{J_p}_{\ K_p}\,\theta_{\rm L}^{I_1 I_2\ldots I_p}(\theta,x,\phi)\cr\cr &=&\sigma^{\rm T}(\theta)\wedge\ovl\Gamma{}^{K_1 K_2\ldots K_p}\,\sigma(\theta)\wedge\prod_{k=1}^p\,\bigl(\eta_{I_k J_k}\, L(-\phi)^{J_k}_{\ K_k}\, L(-\phi)^{I_k}_{\ L_k}\bigr)\,e^{L_1 L_2\ldots L_p}(\theta,x)\cr\cr &=&\ovl\sigma(\theta)\wedge\Gamma_{I_1 I_2\ldots I_p}\,\sigma(\theta)\wedge e^{I_1 I_2\ldots I_p}(\theta,x)\equiv\underset{\tx{\ciut{(p+2)}}}{\chi}(\theta,x,0)\,, \end{eqnarray} with \begin{eqnarray}\nn e^{I_1 I_2\ldots I_p}\equiv e^{I_1}\wedge e^{I_2}\wedge\cdots\wedge e^{I_p}\,, \end{eqnarray} and, for all $\,K,L\in\ovl{0,d-1}$, \begin{eqnarray}\nn J_{KL}\righthalfcup\underset{\tx{\ciut{(p+2)}}}{\chi}(\theta,x,\phi)=0\,, \end{eqnarray} which means that the super-$(p+2)$-forms are not only rotationally invariant but also horizontal, and so, altogether, basic. For $\,p>0$,\ their closedness, \begin{eqnarray}\nn 0\stackrel{!}{=}{\mathsf d}\underset{\tx{\ciut{(p+2)}}}{\chi}&=&\tfrac{p}{2}\,\Sigma_{\rm L}^\a\wedge\bigl(\ovl\Gamma_{I_1 I_2\ldots I_p}\bigr)_{\a\beta}\,\Sigma_{\rm L}^\beta\wedge\Sigma_{\rm L}^\gamma\wedge\bigl(\ovl\Gamma{}^{I_1}\bigr)_{\gamma\delta}\,\Sigma_{\rm L}^\delta\wedge\theta_{\rm L}^{I_2 I_3\ldots I_p}\cr\cr &\equiv&\tfrac{p}{2}\,\bigl(\ovl\Gamma_{I_1 I_2\ldots I_p}\bigr)_{\a\beta}\,\bigl(\ovl\Gamma{}^{I_1}\bigr)_{\gamma\delta}\,\Sigma_{\rm L}^\a\wedge\Sigma_{\rm L}^\beta\wedge\Sigma_{\rm L}^\gamma\wedge\Sigma_{\rm L}^\delta\wedge\theta_{\rm L}^{I_2 I_3\ldots I_p}\,, \end{eqnarray} is ensured by a suitable choice of the relevant representation of the Clifford algebra such that the symmetry constraints \begin{eqnarray}\label{eq:ClifFierz} \ovl\Gamma{}^{I_1}_{(\a\beta}\,(\ovl\Gamma_{I_1 I_2\ldots I_p})_{\gamma\delta)}=0 \end{eqnarray} implied by the previous condition are obeyed. 
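By way of a low-dimensional illustration, constraint \eqref{eq:ClifFierz} can be verified by hand for $\,p=1\,$ and $\,d=3$.\ To this end, take the Majorana representation with the signature convention $\,\eta={\rm diag}(-1,1,1)\,$ and \begin{eqnarray}\nn \Gamma^0={\mathsf i}\,\sigma_2\,,\qquad\qquad\Gamma^1=\sigma_1\,,\qquad\qquad\Gamma^2=\sigma_3\,,\qquad\qquad C={\mathsf i}\,\sigma_2\,, \end{eqnarray} written in terms of the standard Pauli matrices $\,\sigma_{1,2,3}\,$ and fixed here purely for the purpose of the check (hence not necessarily coincident with the choices of Conv.\,\ref{conv:Cliff}), so that the matrices \begin{eqnarray}\nn \ovl\Gamma{}^0\equiv C\cdot\Gamma^0=-{\boldsymbol{1}}_2\,,\qquad\qquad\ovl\Gamma{}^1=\sigma_3\,,\qquad\qquad\ovl\Gamma{}^2=-\sigma_1 \end{eqnarray} are all symmetric. We then find, {\it e.g.}, for the symmetrisation over the multi-index $\,(\a\beta\gamma\delta)=(1122)$, \begin{eqnarray}\nn \ovl\Gamma{}^I_{(11}\,\bigl(\ovl\Gamma_I\bigr)_{22)}\propto 2\,\ovl\Gamma{}^I_{11}\,\bigl(\ovl\Gamma_I\bigr)_{22}+4\,\ovl\Gamma{}^I_{12}\,\bigl(\ovl\Gamma_I\bigr)_{12}=2\cdot(-1-1+0)+4\cdot(0+0+1)=0\,, \end{eqnarray} and the remaining multi-indices are checked equally readily.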
Note that for $\,p=1\,$ the latter reduces to the (more) familiar identity \begin{eqnarray}\label{eq:ClifFierz1} \ovl\Gamma{}^I_{\a(\beta}\,(\ovl\Gamma_I)_{\gamma\delta)}=0 \end{eqnarray} due to the assumed symmetry of the $\,\ovl\Gamma{}^I$,\ {\it cp.}\ Conv.\,\ref{conv:Cliff}. The admissible pairs $\,(d,p)\,$ for which the above constraints can be solved and a super-$\sigma$-model with the appropriate supersymmetry ({\it cp.}\ Sec.\,\ref{sec:kappa}) can be written down were found in \Rcite{Achucarro:1987nc} and constitute the so-called ``old branescan''. Closedness of the GS super-$(p+2)$-cocycles implies -- in consequence of the (de Rham-)cohomological triviality of their support (which follows directly from the Kostant Theorem of \Rcite{Kostant:1975}) -- the existence of smooth primitives. These were derived in Refs.\,\cite{Hughes:1986fa}, albeit in a different convention, and so we rederive them in App.\,\ref{app:alGS} through an adaptation of the original method to the current algebraic setting. \begin{Prop}\label{prop:GSprim} For any $\,p>0$,\ the GS super-$(p+2)$-cocycle $\,\underset{\tx{\ciut{(p+2)}}}{\chi}\,$ of \Reqref{eq:GScurv} admits a manifestly ${\rm ISO}(1,d-1)$-invariant primitive \begin{eqnarray}\label{eq:GSprim}\hspace{2cm} \underset{\tx{\ciut{(p+1)}}}{\beta}(\theta,x)=\tfrac{1}{p+1}\,\sum_{k=0}^p\,\ovl\theta\,\Gamma_{I_1 I_2\ldots I_p}\,\sigma(\theta)\wedge{\mathsf d} x^{I_1}\wedge{\mathsf d} x^{I_2}\wedge\cdots\wedge{\mathsf d} x^{I_k}\wedge e^{I_{k+1} I_{k+2}\ldots I_p}(\theta,x)\,. \end{eqnarray} A primitive of the super-2-form $\,\underset{\tx{\ciut{(2)}}}{\chi}\,$ of \Reqref{eq:GScurv0} can be chosen in the form \begin{eqnarray}\nn \underset{\tx{\ciut{(1)}}}{\beta}(\theta,x)=\ovl\theta\,\Gamma_{11}\,\sigma(\theta)\,. \end{eqnarray} \end{Prop} \noindent\begin{proof} {\it Cp.}\ App.\,\ref{app:GSprim}. \end{proof} \noindent The above primitives are manifestly non-supersymmetric. 
In fact, this cannot be repaired as it was demonstrated in \Rcite{DeAzcarraga:1989vh} that the GS super-$(p+2)$-cocycles do {\it not} admit ${\rm s}\mathscr{P}(1,d-1;1)$-invariant primitives. This result puts us naturally in the framework of the (${\mathbb{R}}$-valued) Chevalley--Eilenberg cohomology of the super-Poincar\'e algebra ({\it cp.}\ \Rcite{Chevalley:1948}), which we shall exploit in the present treatment, whence a recapitulation thereof in App.\,\ref{app:LieAlgCohom} in the superalgebraic context of interest. On the other hand, the manifest left-invariance of the GS super-$(p+2)$-cocycle $\,\underset{\tx{\ciut{(p+2)}}}{\chi}\,$ itself, in conjunction with the triviality of the standard de Rham cohomology of $\,{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}$,\ ensures that condition \Reqref{eq:jpullJ} is satisfied as the supersymmetry variation of the global primitive $\,\underset{\tx{\ciut{(p+1)}}}{\beta}\,$ is exact. Indeed, we have the identity \begin{eqnarray}\nn {\mathsf d}\underset{\tx{\ciut{(p+1)}}}{\beta}\equiv\underset{\tx{\ciut{(p+2)}}}{\chi}=\ell_{(\varepsilon,y)}^*\underset{\tx{\ciut{(p+2)}}}{\chi}\equiv{\mathsf d}\bigl(\ell_{(\varepsilon,y)}^*\underset{\tx{\ciut{(p+1)}}}{\beta}\bigr) \end{eqnarray} which implies the existence of a super-$p$-form $\,\underset{\tx{\ciut{(p)}}}{\jmath}{}_{(\varepsilon,y)}\in\bigwedge^p\mathcal{T}^*{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}\,$ satisfying the condition \begin{eqnarray}\nn \bigl(\delta\underset{\tx{\ciut{(p+1)}}}{\beta}\bigr)_{(\varepsilon,y)}\equiv\bigl(\delta_{{\mathbb{R}}^{1,d\,\vert\,1}}\underset{\tx{\ciut{(p+1)}}}{\beta}\bigr)_{(\varepsilon,y)}=\ell_{(\varepsilon,y)}^*\underset{\tx{\ciut{(p+1)}}}{\beta}-\underset{\tx{\ciut{(p+1)}}}{\beta}={\mathsf d}\underset{\tx{\ciut{(p)}}}{\jmath}{}_{(\varepsilon,y)}\,. \end{eqnarray} We shall call $\,\underset{\tx{\ciut{(p)}}}{\jmath}{}_{(\varepsilon,y)}\,$ the {\bf target supercurrent}. 
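The target supercurrent can be written out in closed form in the simplest instance $\,p=0$,\ which we record here as an elementary illustration. For $\,\underset{\tx{\ciut{(1)}}}{\beta}(\theta)=\ovl\theta\,\Gamma_{11}\,\sigma(\theta)\,$ of Prop.\,\ref{prop:GSprim}, the supertranslation $\,\theta\longmapsto\theta+\varepsilon\,$ (under which $\,\sigma(\theta)\equiv{\mathsf d}\theta\,$ is inert) yields \begin{eqnarray}\nn \bigl(\delta\underset{\tx{\ciut{(1)}}}{\beta}\bigr)_{(\varepsilon,y)}(\theta)=\bigl(\ovl\theta+\ovl\varepsilon\bigr)\,\Gamma_{11}\,{\mathsf d}\theta-\ovl\theta\,\Gamma_{11}\,{\mathsf d}\theta=\ovl\varepsilon\,\Gamma_{11}\,{\mathsf d}\theta={\mathsf d}\bigl(\ovl\varepsilon\,\Gamma_{11}\,\theta\bigr)\,, \end{eqnarray} so that we may take \begin{eqnarray}\nn \underset{\tx{\ciut{(0)}}}{\jmath}{}_{(\varepsilon,y)}(\theta)=\ovl\varepsilon\,\Gamma_{11}\,\theta\,, \end{eqnarray} determined, like any such primitive, up to an additive constant.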
The action 1-cochain now takes the explicit form \begin{eqnarray}\nn c_{(\varepsilon,y)}[{\mathbb{X}}]={\rm e}^{{\mathsf i}\,\int_{\partial\Omega}\,\left(\ell_{(-y,-\varepsilon)}\circ\gamma\circ{\mathbb{X}}\mathord{\restriction}_{\partial\Omega}\right)^*\underset{\tx{\ciut{(p)}}}{\jmath_{(\varepsilon,y)}}}\,. \end{eqnarray} The very same arguments imply the existence of an extension, to the generalised tangent sheaf \begin{eqnarray}\nn \mathcal{E}^{(1,p)}{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}=\mathcal{T}{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}\oplus\bigwedge{}^p\mathcal{T}^*{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}\,, \end{eqnarray} of the algebra of the (left) supersymmetry generators \begin{eqnarray}\label{eq:susygenv} \mathcal{R}_{(\varepsilon,y)}(\theta,x):=\varepsilon^\a\,\mathscr{Q}_\a(\theta,x)+y^I\,\mathscr{P}_I(\theta,x)\,,\qquad(\varepsilon,y)\in{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\,, \end{eqnarray} with the Lie bracket\footnote{Note that we (intentionally) consider Gra\ss mann-even supervector fields here.} \begin{eqnarray}\label{eq:sFVFalg}\hspace{2cm} [\mathcal{R}_{(\varepsilon_1,y_1)},\mathcal{R}_{(\varepsilon_2,y_2)}]=[\varepsilon_{1}^\a\,\mathscr{Q}_\a,\varepsilon_{2}^\beta\,\mathscr{Q}_\beta]=-\varepsilon_{1}^\a\,\varepsilon_{2}^\beta\,\{\mathscr{Q}_\a,\mathscr{Q}_\beta\}=\mathcal{R}_{(0,\ovl\varepsilon_{1}\,\Gamma^\cdot\,\varepsilon_{2})} \end{eqnarray} readily derived from the elementary ones in \Reqref{eq:sPoincalgR} and giving us the ({\bf right}) {\bf super-Minkowski supersymmetry algebra} \begin{eqnarray}\nn [(\varepsilon_1,y_1),(\varepsilon_2,y_2)]=\bigl(0,\ovl\varepsilon_{1}\,\Gamma^\cdot\,\varepsilon_{2}\bigr)\,. 
\end{eqnarray} Indeed, we have \begin{Prop}\label{prop:contrGSprim} For any $\,p\in{\mathbb{N}}$,\ the fundamental vector field $\,\mathcal{R}_{(\varepsilon,y)}\,$ of \Reqref{eq:susygenv} (defined as above for arbitrary $\,(\varepsilon,y)\in{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}$) is generalised hamiltonian with respect to the super-$(p+2)$-cocycle $\,\underset{\tx{\ciut{(p+2)}}}{\chi}\,$ of \Reqref{eq:GScurv}, that is, there exists a globally smooth super-$p$-form $\,\varrho_{(\varepsilon,y)}\in\bigwedge^p{\mathsf T}^*{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}\,$ with the property \begin{eqnarray}\nn \mathcal{R}_{(\varepsilon,y)}\righthalfcup\underset{\tx{\ciut{(p+2)}}}{\chi}=-{\mathsf d}\underset{\tx{\ciut{(p)}}}{\varrho}{}_{(\varepsilon,y)}\,. \end{eqnarray} The latter can be chosen in the manifestly ${\rm ISO}(1,d-1)$-invariant form \begin{eqnarray}\nn \underset{\tx{\ciut{(0)}}}{\varrho}{}_{(\varepsilon,y)}(\theta,x)=-2\ovl\varepsilon\,\Gamma_{11}\,\theta \end{eqnarray} for $\,p=0$,\ and -- for $\,p>0\,$ -- \begin{eqnarray}\nn \underset{\tx{\ciut{(p)}}}{\varrho}{}_{(\varepsilon,y)}(\theta,x)&=&-p\,y^I\,\underset{\tx{\ciut{(p)}}}{\beta}{}_I(\theta,x)-2(\ovl\varepsilon\,\Gamma_{I_1 I_2\ldots I_p}\,\theta)\,e^{I_1 I_2\ldots I_p}(\theta,x)\cr\cr &&+\tfrac{p!}{(2p+1)!!}\,\sum_{k=1}^p\,\tfrac{2^k\,(2p+1-2k)!!}{(p-k)!}\,\underset{\tx{\ciut{(1)}}}{\eta}{}_{I_2 I_3\ldots I_p}^\varepsilon\wedge{\mathsf d} x^{I_2}\wedge{\mathsf d} x^{I_3}\wedge\cdots\wedge{\mathsf d} x^{I_k}\wedge e^{I_{k+1}I_{k+2}\ldots I_p}(\theta,x)\,, \end{eqnarray} written in terms of the super-$p$-forms $\,\underset{\tx{\ciut{(p)}}}{\beta}{}_I\,$ from \Reqref{eq:primba} and of the super-1-forms $\,\underset{\tx{\ciut{(1)}}}{\eta}{}_{I_2 I_3\ldots I_p}^\varepsilon\,$ from \Reqref{eq:etavep}. \end{Prop} \noindent\begin{proof} {\it Cp.}\ App.\,\ref{app:contrGSprim}. 
\end{proof} \noindent The extension, defined in terms of the Vinogradov-type bracket of \Reqref{eq:VBra}, is readily seen to close on pairs of the distinguished fundamental sections \begin{eqnarray}\label{eq:fundsecMink} \gt{R}_{(\varepsilon,y)}=\mathcal{R}_{(\varepsilon,y)}\oplus\varrho_{(\varepsilon,y)}\in\mathcal{E}^{(1,p)}{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}} \end{eqnarray} of the generalised tangent bundle over $\,{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}$.\ It turns out that the {\it right}-regular action of the supersymmetry group $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\,$ on $\,{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}$,\ while not a global symmetry of the super-$\sigma$-model as it stands, {\it cp.}\ Sec.\,\ref{sec:kappaCart}, exhibits similar properties relative to the GS super-$(p+2)$-cocycles, which justifies our discussion of the corresponding sections of the generalised tangent bundle over $\,{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}\,$ in the next section. There, we carry out a case-by-case analysis of the relevant current super-2-cocycles \begin{eqnarray}\nn (\delta\underset{\tx{\ciut{(p)}}}{\jmath})_{(\varepsilon_1,y_1),(\varepsilon_2,y_2)}=\ell_{(\varepsilon_2,y_2)}^*\underset{\tx{\ciut{(p)}}}{\jmath}{}_{(\varepsilon_1,y_1)}-\underset{\tx{\ciut{(p)}}}{\jmath}{}_{\bigl(\varepsilon_1+\varepsilon_2,y_1+y_2-\frac{1}{2}\,\ovl\varepsilon_1\,\Gamma^\cdot\,\varepsilon_2\bigr)}+\underset{\tx{\ciut{(p)}}}{\jmath}{}_{(\varepsilon_2,y_2)} \end{eqnarray} and derive the Lie anomaly for the various natural actions of the supersymmetry group that can be constructed out of the two one-sided regular actions. 
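An elementary foretaste of that analysis is provided by the case $\,p=0$,\ for which one may take $\,\underset{\tx{\ciut{(0)}}}{\jmath}{}_{(\varepsilon,y)}(\theta)=\ovl\varepsilon\,\Gamma_{11}\,\theta\,$ (as read off from $\,\bigl(\delta\underset{\tx{\ciut{(1)}}}{\beta}\bigr)_{(\varepsilon,y)}=\ovl\varepsilon\,\Gamma_{11}\,{\mathsf d}\theta$), so that, the supercurrent being independent of $\,y$,\ we obtain \begin{eqnarray}\nn (\delta\underset{\tx{\ciut{(0)}}}{\jmath})_{(\varepsilon_1,y_1),(\varepsilon_2,y_2)}(\theta)=\ovl\varepsilon_1\,\Gamma_{11}\,\bigl(\theta+\varepsilon_2\bigr)-\bigl(\ovl\varepsilon_1+\ovl\varepsilon_2\bigr)\,\Gamma_{11}\,\theta+\ovl\varepsilon_2\,\Gamma_{11}\,\theta=\ovl\varepsilon_1\,\Gamma_{11}\,\varepsilon_2\,, \end{eqnarray} a constant current super-2-cocycle, quoted here merely to fix the logic of the forthcoming computations.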
\medskip \paragraph{\unl{The Green--Schwarz superstring}} Consider, next, the ${\mathbb{R}}^{1,d\,\vert\,N}$-invariant GS 3-form superfield \begin{eqnarray}\label{eq:GS3form} \underset{\tx{\ciut{(3)}}}{\chi}=E^a\wedge\ovl\mathcal{E}\wedge\Gamma_a\,\mathcal{E}\,, \end{eqnarray} with the ${\mathbb{R}}^{1,d\,\vert\,0}$-invariant primitive \begin{eqnarray}\nn \underset{\tx{\ciut{(2)}}}{\beta}(x,\theta)=\ovl\theta\,\Gamma_a\,{\mathsf d}\theta\wedge E^a(x,\theta)\equiv\ovl\theta\,\Gamma_a\,{\mathsf d}\theta\wedge{\mathsf d} x^a\,, \end{eqnarray} where the last equality is a straightforward consequence of Conv.\,\ref{conv:SignManifesto}, \begin{eqnarray}\nn \ovl\theta\,\Gamma_a\,{\mathsf d}\theta\wedge\ovl\theta\,\Gamma^a\,{\mathsf d}\theta=-\ovl\theta\,\Gamma^a\,{\mathsf d}\theta\wedge\ovl\theta\,\Gamma_a\,{\mathsf d}\theta\,. \end{eqnarray} In the canonical description, we readily derive the Cartan--Poincar\'e form \begin{eqnarray}\nn \Theta(x,\theta,\xi,t)&=&-\mathcal{L}_{\rm GS}(x,\theta,\xi,t)\,{\mathsf d}\sigma^0\wedge{\mathsf d}\sigma^1\cr\cr &&+\tfrac{1}{2}\,\left[\left(2\xi_{0\,a}+\ovl\theta\,\Gamma_a\,t_0-2\ovl\theta\,\Gamma_a\,t_1\right)\,\delta x^a+\left(\xi_{0\,a}+2\xi_{1\,a}+\tfrac{1}{2}\,\ovl\theta\,\Gamma_a\,t_0\right)\,\ovl\theta\, \Gamma^a\,\delta\theta\right]\wedge{\mathsf d}\sigma^1\cr\cr &&+\tfrac{1}{2}\,\left[\left(2\xi_{1\,a}+\ovl\theta\,\Gamma_a\,t_1-2\ovl\theta\,\Gamma_a\,t_0\right)\,\delta x^a+\left(\xi_{1\,a}+2\xi_{0\,a}+\tfrac{1}{2}\,\ovl\theta\,\Gamma_a\,t_1\right)\,\ovl\theta\, \Gamma^a\,\delta\theta\right]\wedge{\mathsf d}\sigma^0 \end{eqnarray} that subsequently yields the (pre)symplectic form \begin{eqnarray}\nn \Omega_{\rm GS}=\delta\vartheta+\pi_{\rm GS}^*\int_{{\mathbb{S}}^1}\,{\rm ev}^*\underset{\tx{\ciut{(3)}}}{\chi}\,, \end{eqnarray} with \begin{eqnarray}\nn \vartheta[x,\theta,{\rm p}]:=\int_{{\mathbb{S}}^1}\,\Vol({\mathbb{S}}^1)\,{\rm p}_a\,E^a(x,\theta)\,. 
\end{eqnarray} The equivariant lift of the fundamental vector field \eqref{eq:susygenv} now reads \begin{eqnarray}\nn \widetilde\mathcal{K}_{(y,\varepsilon)}[x,\theta,{\rm p}]:=\int_{{\mathbb{S}}^1}\,\Vol({\mathbb{S}}^1)\,\left[\left(y^a-\tfrac{1}{2}\,\ovl\varepsilon\,\Gamma^a\,\theta(\cdot)\right)\,\tfrac{\delta\ }{\delta x^a(\cdot)}+\varepsilon^\a\,\tfrac{\delta\ }{\delta\theta^\a(\cdot)}\right] \end{eqnarray} and gives rise to Noether charges \begin{eqnarray}\nn Q_{(y,\varepsilon)}[x,\theta,{\rm p}]=\int_0^{2\pi}\,{\mathsf d}\varphi\,\bigl\{{\rm p}_a(\varphi)\,\bigl[y^a-\ovl\varepsilon\,\Gamma^a\,\theta(\varphi)\bigr]-\bigl[y^a\,\ovl\theta\,\Gamma_a\,\partial_\varphi\theta(\varphi)+\ovl\varepsilon\,\Gamma_a\,\theta(\varphi)\,\bigl(2\partial_\varphi x^a-\tfrac{1}{3}\,\ovl\theta\,\Gamma^a\,\partial_\varphi\theta\bigr)(\varphi)\bigr]\bigr\}\,. \end{eqnarray} These satisfy the algebra \begin{eqnarray}\nn \{Q_{(y_1,\varepsilon_1)},Q_{(y_2,\varepsilon_2)}\}[x,\theta,{\rm p}]&=&Q_{[(y_1,\varepsilon_1),(y_2,\varepsilon_2)]}[x,\theta,{\rm p}]\cr\cr &&+\tfrac{2}{3}\,\int_0^{2\pi}\,{\mathsf d}\varphi\,\bigl[(\ovl\varepsilon_1\,\Gamma_a\,\varepsilon_2)\,(\ovl\theta\,\Gamma^a\,\partial_\varphi\theta)(\varphi)-2(\ovl\varepsilon_1\,\Gamma_a\,\theta)\,(\ovl\varepsilon_2\,\Gamma^a\,\partial_\varphi\theta)(\varphi)\bigr]\cr\cr &=&Q_{[(y_1,\varepsilon_1),(y_2,\varepsilon_2)]}[x,\theta,{\rm p}]\,, \end{eqnarray} the vanishing of the last term in the middle expression being ensured by \Reqref{eq:CliffSym}. Thus, the Noether charges furnish a hamiltonian realisation of the Lie (super)algebra $\,{\mathbb{R}}^{1,d\,\vert\,N}\,$ on $\,{\mathsf P}_{\rm GS}$. 
The above algebra is modelled by the Vinogradov-type (or, indeed, $\underset{\tx{\ciut{(3)}}}{\chi}$-twisted Courant) bracket of the fundamental sections \begin{eqnarray}\label{eq:fsec2d} \gt{K}_{(y,\varepsilon)}(x,\theta)=\mathcal{K}_{(y,\varepsilon)}(x,\theta)\oplus\bigl[-y^a\,\ovl\theta\,\Gamma_a\,{\mathsf d}\theta-(\ovl\varepsilon\,\Gamma_a\,\theta)\,\bigl(2{\mathsf d} x^a-\tfrac{1}{3}\,\ovl\theta\,\Gamma^a\,{\mathsf d}\theta\bigr)\bigr] \end{eqnarray} of $\,\mathcal{E}^{1,1}{\rm sMink}^{1,d\,\vert\,N}$,\ given by \begin{eqnarray} \Vbra{\gt{K}_{(y_1,\varepsilon_1)}}{\gt{K}_{(y_2,\varepsilon_2)}}^{\underset{\tx{\ciut{(3)}}}{\chi}}&=&\gt{K}_{[(y_1,\varepsilon_1),(y_2,\varepsilon_2)]}+0\oplus\tfrac{1}{2}\,{\mathsf d}\bigl[y_1^a\,(\ovl\varepsilon_2\,\Gamma_a\,\theta)-y_2^a\,(\ovl\varepsilon_1\,\Gamma_a\,\theta)+2(\ovl\varepsilon_1\,\Gamma_a\,\varepsilon_2)\,x^a\bigr]\cr\cr &&+0\oplus 2\bigl[(\ovl\varepsilon_1\,\Gamma_a\,\varepsilon_2)\,\ovl\theta\,\Gamma^a\,{\mathsf d}\theta+(\ovl\varepsilon_2\,\Gamma_a\,\theta)\,\ovl\varepsilon_1\,\Gamma^a\,{\mathsf d}\theta+(\ovl\theta\,\Gamma_a\,\varepsilon_1)\,\ovl\varepsilon_2\,\Gamma^a\,{\mathsf d}\theta\bigr]\cr\cr &=&\gt{K}_{[(y_1,\varepsilon_1),(y_2,\varepsilon_2)]}+0\oplus\tfrac{1}{2}\,{\mathsf d}\bigl[y_1^a\,(\ovl\varepsilon_2\,\Gamma_a\,\theta)-y_2^a\,(\ovl\varepsilon_1\,\Gamma_a\,\theta)+2(\ovl\varepsilon_1\,\Gamma_a\,\varepsilon_2)\,x^a\bigr]\,. \label{eq:Liean2d} \end{eqnarray} Note the appearance of a Lie anomaly in the above Vinogradov bracket, signalling an algebroidal obstruction to the gauging of the supertranslation symmetry.
Pursuing the geometric analysis on the target superspace, we find that the ${\mathbb{R}}^{1,d\,\vert\,N}$-coboundary of the primitive 2-form superfield reads \begin{eqnarray}\nn (\delta\underset{\tx{\ciut{(2)}}}{\beta})_{(y,\varepsilon)}(x,\theta)=\ovl\varepsilon\,\Gamma_a\,{\mathsf d}\theta\wedge{\mathsf d} x^a-\tfrac{1}{2}\,(\ovl\theta\,\Gamma_a\,{\mathsf d}\theta)\wedge(\ovl\varepsilon\,\Gamma^a\,{\mathsf d}\theta)-\tfrac{1}{2}\,(\ovl\varepsilon\,\Gamma_a\,{\mathsf d}\theta)\wedge(\ovl\varepsilon\,\Gamma^a\,{\mathsf d}\theta)\,, \end{eqnarray} and the last term vanishes as a contraction of the symmetric tensor $\,\eta\,$ with the anticommuting 1-forms. The middle term can be rewritten -- with the help of identity \eqref{eq:CliffSym} -- as \begin{eqnarray}\nn (\ovl\theta\,\Gamma_a\,{\mathsf d}\theta)\wedge(\ovl\varepsilon\,\Gamma^a\,{\mathsf d}\theta)&\equiv&\ovl\Gamma_{a\,\a\beta}\,\ovl\Gamma{}^a_{\gamma\delta}\,\theta^\a\,{\mathsf d}\theta^\beta\wedge\varepsilon^\gamma\,{\mathsf d}\theta^\delta=\tfrac{1}{2}\,\bigl(\ovl\Gamma_{a\,\a\beta}\,\ovl\Gamma{}^a_{\gamma\delta}+\ovl\Gamma_{a\,\a\delta}\,\ovl\Gamma{}^a_{\gamma\beta}\bigr)\,\theta^\a\,{\mathsf d}\theta^\beta\wedge\varepsilon^\gamma\,{\mathsf d}\theta^\delta\cr\cr &=&-\tfrac{1}{2}\,\ovl\Gamma_{a\,\a\gamma}\,\ovl\Gamma{}^a_{\beta\delta}\,\theta^\a\,{\mathsf d}\theta^\beta\wedge\varepsilon^\gamma\,{\mathsf d}\theta^\delta\equiv-\tfrac{1}{2}\,(\ovl\varepsilon\,\Gamma_a\,\theta)\,{\mathsf d}\ovl\theta\wedge\Gamma^a\,{\mathsf d}\theta\cr\cr &=&{\mathsf d}\bigl(-\tfrac{1}{2}\,(\ovl\varepsilon\,\Gamma_a\,\theta)\,(\ovl\theta\,\Gamma^a\,{\mathsf d}\theta)\bigr)+\tfrac{1}{2}\,(\ovl\varepsilon\,\Gamma_a\,{\mathsf d}\theta)\wedge(\ovl\theta\,\Gamma^a\,{\mathsf d}\theta)\cr\cr &=&{\mathsf d}\bigl(-\tfrac{1}{2}\,(\ovl\varepsilon\,\Gamma_a\,\theta)\,(\ovl\theta\,\Gamma^a\,{\mathsf d}\theta)\bigr)-\tfrac{1}{2}\,(\ovl\theta\,\Gamma^a\,{\mathsf d}\theta)\wedge(\ovl\varepsilon\,\Gamma_a\,{\mathsf d}\theta)\,, \end{eqnarray} 
whence, upon bringing the last term over to the left-hand side, also \begin{eqnarray}\nn (\ovl\theta\,\Gamma_a\,{\mathsf d}\theta)\wedge(\ovl\varepsilon\,\Gamma^a\,{\mathsf d}\theta)={\mathsf d}\bigl(-\tfrac{1}{3}\,(\ovl\varepsilon\,\Gamma_a\,\theta)\,(\ovl\theta\,\Gamma^a\,{\mathsf d}\theta)\bigr)\,. \end{eqnarray} Thus, the target current associated with our choice of the primitive $\,\underset{\tx{\ciut{(2)}}}{\beta}\,$ takes the form \begin{eqnarray}\label{eq:targcur1} \underset{\tx{\ciut{(1)}}}{\jmath_{(y,\varepsilon)}}(x,\theta)=(\ovl\varepsilon\,\Gamma_a\,\theta)\,\left({\mathsf d} x^a+\tfrac{1}{6}\,\ovl\theta\,\Gamma^a\,{\mathsf d}\theta\right)\,. \end{eqnarray} We compute \begin{eqnarray}\nn (\delta\underset{\tx{\ciut{(1)}}}{\jmath})_{(y_1,\varepsilon_1),(y_2,\varepsilon_2)}(x,\theta)&=&{\mathsf d}\left[(\ovl\varepsilon_1\,\Gamma_a\,\varepsilon_2)\left(x^a-\tfrac{1}{3}\,\ovl\varepsilon_2\,\Gamma^a\,\theta\right)\right]-\tfrac{1}{6}\,\left[2({\mathsf d}\ovl\theta\,\Gamma_a\,\varepsilon_2)(\ovl\theta\,\Gamma^a\,\varepsilon_1)+({\mathsf d}\ovl\theta\,\Gamma_a\,\theta)(\ovl\varepsilon_1\,\Gamma^a\,\varepsilon_2)\right]\cr\cr &=&{\mathsf d}\left[(\ovl\varepsilon_1\,\Gamma_a\,\varepsilon_2)\left(x^a-\tfrac{1}{3}\,\ovl\varepsilon_2\,\Gamma^a\,\theta\right)-\tfrac{1}{6}\,(\ovl\varepsilon_1\,\Gamma_a\,\theta)(\ovl\varepsilon_2\,\Gamma^a\,\theta)\right]\cr\cr &&-\tfrac{1}{6}\,\left[({\mathsf d}\ovl\theta\,\Gamma_a\,\varepsilon_2)(\ovl\theta\,\Gamma^a\,\varepsilon_1)+(\ovl\varepsilon_1\,\Gamma_a\,{\mathsf d}\theta)(\ovl\theta\,\Gamma^a\,\varepsilon_2)+({\mathsf d}\ovl\theta\,\Gamma_a\,\theta)(\ovl\varepsilon_1\,\Gamma^a\,\varepsilon_2)\right]\,.
\end{eqnarray} Taking, again, \Reqref{eq:CliffSym} into account, we find the relation \begin{eqnarray}\nn ({\mathsf d}\ovl\theta\,\Gamma_a\,\varepsilon_2)(\ovl\theta\,\Gamma^a\,\varepsilon_1)+(\ovl\varepsilon_1\,\Gamma_a\,{\mathsf d}\theta)(\ovl\theta\,\Gamma^a\,\varepsilon_2)+({\mathsf d}\ovl\theta\,\Gamma_a\,\theta)(\ovl\varepsilon_1\,\Gamma^a\,\varepsilon_2)\cr\cr =\varepsilon_1^\a\,\varepsilon_2^\beta\,{\mathsf d}\theta^\gamma\,\theta^\delta\,\bigl(\ovl\Gamma_{a\,\a\beta}\,\ovl\Gamma{}^a_{\gamma\delta}+\ovl\Gamma_{a\,\a\gamma}\,\ovl\Gamma{}^a_{\beta\delta}+\ovl\Gamma_{a\,\beta\gamma}\,\ovl\Gamma{}^a_{\a\delta}\bigr) =\varepsilon_1^\a\,\varepsilon_2^\beta\,{\mathsf d}\theta^\gamma\,\theta^\delta\,\bigl(-\ovl\Gamma_{a\,\a\delta}\,\ovl\Gamma{}^a_{\beta\gamma}+\ovl\Gamma_{a\,\beta\gamma}\,\ovl\Gamma{}^a_{\a\delta}\bigr)=0\,, \end{eqnarray} which implies the exactness of the current 2-cocycle, \begin{eqnarray}\nn (\delta\underset{\tx{\ciut{(1)}}}{\jmath})_{(y_1,\varepsilon_1),(y_2,\varepsilon_2)}(x,\theta)={\mathsf d}\left[(\ovl\varepsilon_1\,\Gamma_a\,\varepsilon_2)\left(x^a-\tfrac{1}{3}\,\ovl\varepsilon_2\,\Gamma^a\,\theta\right)-\tfrac{1}{6}\,(\ovl\varepsilon_1\,\Gamma_a\,\theta)(\ovl\varepsilon_2\,\Gamma^a\,\theta)\right]\,. \end{eqnarray} We conclude that the GS superstring admits a \emph{non}-projective realisation of the classical symmetry group $\,{\mathbb{R}}^{1,d\,\vert\,N}\,$ on its Hilbert space.
We readily verify that the primitive of the current 2-cocycle \begin{eqnarray}\nn \wp_{(y_1,\varepsilon_1),(y_2,\varepsilon_2)}(x,\theta):=(\ovl\varepsilon_1\,\Gamma_a\,\varepsilon_2)\left(x^a-\tfrac{1}{3}\,\ovl\varepsilon_2\,\Gamma^a\,\theta\right)-\tfrac{1}{6}\,(\ovl\varepsilon_1\,\Gamma_a\,\theta)(\ovl\varepsilon_2\,\Gamma^a\,\theta) \end{eqnarray} yields a constant 3-cocycle \begin{eqnarray}\nn (\delta\wp)_{(y_1,\varepsilon_1),(y_2,\varepsilon_2),(y_3,\varepsilon_3)}(x,\theta)&=&(\ovl\varepsilon_1\,\Gamma_a\,\varepsilon_2)\,y_3^a-\tfrac{1}{6}\,\left[2(\ovl\varepsilon_1\,\Gamma_a\,\varepsilon_2)(\ovl\varepsilon_2\,\Gamma^a\,\varepsilon_3)+(\ovl\varepsilon_1\,\Gamma_a\,\varepsilon_3)(\ovl\varepsilon_2\,\Gamma^a\,\varepsilon_3)\right]\cr\cr &=:&\lambda_{(y_1,\varepsilon_1),(y_2,\varepsilon_2),(y_3,\varepsilon_3)}\,. \end{eqnarray} \void{Thus, altogether, the cohomological data of the GS superstring can be written as \begin{eqnarray}\nn &\mathcal{I}_{\underset{\tx{\ciut{(2)}}}{\beta}}\qquad\tx{with}\qquad\curv(\mathcal{I}_{\underset{\tx{\ciut{(2)}}}{\beta}})=\underset{\tx{\ciut{(3)}}}{\chi}\in Z^3({\rm sMink}^{1,d\,\vert\,N})^{{\mathbb{R}}^{1,d\,\vert\,N}}\,,&\cr\cr &\Phi_{(y,\varepsilon)}:=(\underset{\tx{\ciut{(1)}}}{\jmath_{(y,\varepsilon)}})\ :\ (\delta\mathcal{I}_{\underset{\tx{\ciut{(2)}}}{\beta}})_{(y,\varepsilon)}\xrightarrow{\cong}\mathcal{I}_0\,,&\cr\cr &\varphi_{(y_1,\varepsilon_1),(y_2,\varepsilon_2)}:=(\wp_{(y_1,\varepsilon_1),(y_2,\varepsilon_2)})\ :\ (\delta\Phi)_{(y_1,\varepsilon_1),(y_2,\varepsilon_2)}\overset{\cong\ }{\Rightarrow}\mathcal{J}_0\,,&\cr\cr &(\delta\varphi)_{(y_1,\varepsilon_1),(y_2,\varepsilon_2),(y_3,\varepsilon_3)}\neq 1\qquad\Leftrightarrow\qquad\lambda_{(y_1,\varepsilon_1),(y_2,\varepsilon_2),(y_3,\varepsilon_3)}\neq 0\,,& \end{eqnarray} in a form amenable to direct generalisation.} \medskip \paragraph{\unl{The supermembrane}} \medskip By way of a closing remark, we note that besides the ${\rm ISO}(1,d-1)$-invariant
super-$p$-forms $\,\underset{\tx{\ciut{(p)}}}{\kappa}{}_{(\varepsilon,y)}$,\ the GS super-$(p+2)$-cocycles give rise -- as revealed by inspection -- to a host of supersymmetric super-2-cocycles that play a fundamental r\^ole in our geometrisation of the ${\rm s}\mathscr{P}(1,d-1;1)$-invariant cohomology classes represented by the $\,\underset{\tx{\ciut{(p+2)}}}{\chi}$.\ These will be obtained through contraction of (certain) $p$-tuples of fundamental (right-invariant) vector fields $\,\mathcal{K}_{A_i}\in\{\mathscr{Q}_\a,\mathscr{P}_I\}_{(\a,I)\in\ovl{1,D_{1,d-1}}\x\ovl{0,d-1}},\ i\in\ovl{1,p}\,$ (the $\,A_i\,$ are indices of the supersymmetry algebra) of \Reqref{eq:susygenv} with the super-$(p+2)$-cocycle $\,\underset{\tx{\ciut{(p+2)}}}{\chi}\,$ of \Reqref{eq:GScurv}, \begin{eqnarray}\label{eq:LI2bas} \underset{\tx{\ciut{(2)}}}{h}{}_{\lambda^\cdot}:=\lambda^{A_1 A_2\ldots A_p}\,\mathcal{K}_{A_1}\righthalfcup\mathcal{K}_{A_2}\righthalfcup\cdots\mathcal{K}_{A_p}\righthalfcup\underset{\tx{\ciut{(p+2)}}}{\chi}\,,\qquad\lambda^{A_1 A_2\ldots A_p}\in{\mathbb{R}}\,. \end{eqnarray} Both the condition of closedness and the condition of invariance are tantamount to certain linear constraints on the coefficients $\,\lambda^{A_1 A_2\ldots A_p}\,$ which involve (also linearly) the structure constants of the Lie superalgebra under consideration, and so it is far from obvious that such super-2-cocycles exist. Specific examples will be examined closely in Sec.\,\ref{sec:GSgerbe}.
\section{Supergerbes for the Nambu--Goto super-$p$-branes from extensions of $\,{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}$}\label{sec:GSgerbe} Our next aim, motivated amply in Sec.\,\ref{sec:stensor}, is to work out for the GS super-$(p+2)$-cocycles $\,\underset{\tx{\ciut{(p+2)}}}{\chi}\,$ of Sec.\,\ref{ref:GScocyc} a supergeometric analogue of the standard scheme of geometrisation of de Rham cocycles known from the theory of fibre bundles with connection and the theory of bundle ($n$-)gerbes with connection and recalled briefly in Sec.\,\ref{sec:Bose} in the context of the 2d bosonic $\sigma$-model with the Wess--Zumino term. The conceptual basis of our construction is the relation between algebra and geometry of the Lie (super)group established -- in the manner delineated in Thm.\,\ref{thm:sCEmodelIdR} -- by the Chevalley--Eilenberg model of Lie-(super)algebra cohomology (with values in the trivial module $\,{\mathbb{R}}$) in conjunction with the interpretation -- expressed in Props.\,\ref{prop:ExtoCE} and \ref{prop:CEtoExt} -- of the second cohomology group in that model in terms of equivalence classes of (super)central extensions of the underlying Lie-(super)algebra.
More specifically, the said cohomological results enable us to associate with the super-$(p+2)$-cocycles\footnote{Strictly speaking, we present an explicit analysis for the cases $\,p\in\{0,1,2\}$.\ However, the structural nature of our construction turns it into a tenable proposal for a completely general geometrisation scheme.} a tower of supergroup extensions of the Lie supergroup $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\equiv{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}\,$ (of the kind originally discovered in \Rcite{Chryssomalakos:2000xd}) that are readily verified to play the r\^ole of the various surjective submersions encountered in the geometric definition of the (0-, 1- and 2-)gerbe and thus give us a natural definition of a supergerbe with curvature $\,\underset{\tx{\ciut{(p+2)}}}{\chi}$,\ conceived along the lines of the fundamental Principle of Categorial Descent of \Rcite{Stevenson:2000wj}. Rudimentary aspects of the Lie-superalgebra (to be abbreviated as \textbf{LSA} in what follows) cohomology and its Chevalley--Eilenberg (to be abbreviated as \textbf{CE}) model, as well as the link with the Cartan--Eilenberg (to be abbreviated as \textbf{CaE}) cohomology of supersymmetric differential superforms that are of relevance to the subsequent discussion have been recalled in App.\,\ref{app:LieAlgCohom}. 
\subsection{Geometrisation of Cartan--Eilenberg super-cocycles}\label{sec:CaEscocgeomise} By way of preparation for the systematic (super)geometric resolution of the super-$(p+2)$-cocycles of interest, we should first review -- after Refs.\,\cite{Aldaya:1984gt,Chryssomalakos:2000xd} -- the construction of the super-Minkowski spacetime $\,{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}\,$ as a central extension of the purely Gra\ss mann-odd superspace $\,{\mathbb{R}}^{0\,\vert\,D_{1,d-1}}\,$ (the so-called superpoint, also known as the odd hyperplane), determined by a canonical 2-cocycle on the supercommutative Lie superalgebra $\,{\mathbb{R}}^{0\,\vert\,D_{1,d-1}}\,$ with values in its trivial module $\,{\mathbb{R}}^{1,d-1}$.\ A natural point of departure for our general discussion is the manifestly closed left-invariant (to be abbreviated as \textbf{LI} in what follows) super-2-form \begin{eqnarray}\nn \underset{\tx{\ciut{(2)}}}{\chi}^I:=\tfrac{1}{2}\,\ovl\sigma\wedge\Gamma^I\,\sigma\,,\qquad I\in\ovl{0,d-1} \end{eqnarray} on the supermanifold $\,\mathcal{M}^{(0)}\equiv{\mathbb{R}}^{0\,\vert\,D_{1,d-1}}$,\ with global Gra\ss mann-odd coordinates $\,\{\theta^\a\}^{\a\in\ovl{1,D_{1,d-1}}}\,$ and the associated LI vector fields \begin{eqnarray}\nn \mathscr{Q}_\a^{(0)}(\theta)=\tfrac{\overrightarrow\partial\ }{\partial\theta^\a} \end{eqnarray} furnishing a realisation of the supercommutative LSA \begin{eqnarray}\nn \{\mathscr{Q}_\a^{(0)},\mathscr{Q}_\beta^{(0)}\}=0\,.
\end{eqnarray} The de Rham super-2-cocycle $\,\underset{\tx{\ciut{(2)}}}{\chi}^I\,$ does not admit a primitive on $\,\mathcal{M}^{(0)}\,$ invariant with respect to the (left) regular action of $\,{\mathbb{R}}^{0\,\vert\,D_{1,d-1}}\,$ on (itself) $\,\mathcal{M}^{(0)}$, \begin{eqnarray}\nn \ell^{(0)}_\cdot\ :\ {\mathbb{R}}^{0\,\vert\,D_{1,d-1}}\x\mathcal{M}^{(0)}\longrightarrow\mathcal{M}^{(0)}\ :\ (\varepsilon^\a,\theta^\a)\longmapsto\theta^\a+\varepsilon^\a\,, \end{eqnarray} and so -- arguing along the lines of Appendix \ref{app:LieAlgCohom} -- we are led to consider a (super)central extension $\,\mathcal{M}^{(1)}:={\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}\,$ of the Lie supergroup $\,\mathcal{M}^{(0)}$,\ the former being canonically surjectively submersed onto the latter as a rank-$d$ (real) vector bundle\footnote{For a detailed account of the fibre-bundle structure on the extended superspacetime(s), consult Refs.\,\cite{Aldaya:1984gt,Chryssomalakos:2000xd}.} \begin{eqnarray}\nn \pi_0\equiv{\rm pr}_1\ :\ \mathcal{M}^{(1)}\longrightarrow\mathcal{M}^{(0)}\ :\ (\theta^\a,x^I)\longmapsto\theta^\a \end{eqnarray} with fibre coordinates $\,x^I,\ I\in\ovl{0,d-1}$.\ The pullback of the GS super-2-cocycle $\,\underset{\tx{\ciut{(2)}}}{\chi}^I\,$ to $\,\mathcal{M}^{(1)}\,$ trivialises in the associated CaE cohomology as ({\it cp.}\ Remark \ref{rem:LSApulltriv}) \begin{eqnarray}\nn \pi_0^*\underset{\tx{\ciut{(2)}}}{\chi}^I={\mathsf d} e^I\,, \end{eqnarray} for the $\,e^I\,$ as defined by Eqs.\,\eqref{eq:thetaLdef} and \eqref{eq:eIdef}. 
The corresponding (super)centrally extended LSA of the equivariant lifts \begin{eqnarray}\label{eq:Qal1} \mathscr{Q}^{(1)}_\a(\theta,x):=\mathscr{Q}^{(0)}_\a(\theta)+\tfrac{1}{2}\,\ovl\Gamma{}^I_{\a\beta}\,\theta^\beta\,\tfrac{\partial\ }{\partial x^I} \end{eqnarray} of the $\,\mathscr{Q}^{(0)}_\a\,$ and of the coordinate vector fields \begin{eqnarray}\label{eq:Pa1} \mathscr{P}^{(1)}_I(\theta,x):=\tfrac{\partial\ }{\partial x^I}\,, \end{eqnarray} the two families making up a basis of the tangent sheaf dual to that of the cotangent sheaf formed by the LI super-1-forms $\,\sigma^\a,\ \a\in\ovl{1,D_{1,d-1}}\,$ and $\,e^I,\ I\in\ovl{0,d-1}$,\ reads \begin{eqnarray} \{\mathscr{Q}^{(1)}_\a,\mathscr{Q}^{(1)}_\beta\}=\ovl\Gamma{}^I_{\a\beta}\,\mathscr{P}^{(1)}_I\,,\qquad\qquad[\mathscr{P}^{(1)}_I,\mathscr{P}^{(1)}_J]=0\,,\qquad\qquad[\mathscr{P}^{(1)}_I,\mathscr{Q}^{(1)}_\a]=0\,.\cr\label{eq:sMinkLSA} \end{eqnarray} The action of the original supergroup $\,{\mathbb{R}}^{0\,\vert\,D_{1,d-1}}\,$ on $\,\mathcal{M}^{(1)}\,$ follows from the demand that it project to $\,\ell^{(0)}_\cdot\,$ and that the super-1-forms $\,\sigma^\a\,$ and $\,e^I,\ I\in\ovl{0,d-1}\,$ be invariant with respect to it, and we may extend it to a full-blown structure of a Lie supergroup on $\,\mathcal{M}^{(1)}\,$ by requiring that it yield the above supervector fields $\,\mathscr{Q}^{(1)}_\a,\ \a\in\ovl{1,D_{1,d-1}}\,$ and $\,\mathscr{P}^{(1)}_I,\ I\in\ovl{0,d-1}\,$ as the fundamental left-invariant supervector fields and that it leaves the super-1-forms intact when treated as an action of $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\,$ on (itself) $\,\mathcal{M}^{(1)}\,$ -- this determines the said action in the familiar form \begin{eqnarray} \ell^{(1)}_\cdot\equiv{\rm m}^{(1)}\ &:&\ {\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\x\mathcal{M}^{(1)}\longrightarrow\mathcal{M}^{(1)}\cr\cr &:&\ 
\left((\varepsilon^\a,y^I),(\theta^\beta,x^J)\right)\longmapsto\left(\theta^\a+\varepsilon^\a,x^I+y^I-\tfrac{1}{2}\,\ovl\varepsilon\,\Gamma^I\,\theta\right)\,,\label{eq:sact-sMink} \end{eqnarray} equivalent to the one given in \Reqref{eq:sMinksMink} (for $\,N=1$). Clearly, we could have equivalently derived it from the LSA \eqref{eq:sMinkLSA} by exponentiating the generators and computing, with the help of the standard Baker--Campbell--Hausdorff formula, \begin{eqnarray}\nn {\rm e}^{\varepsilon^\a\,\mathscr{Q}^{(1)}_\a+y^I\,\mathscr{P}^{(1)}_I}\cdot{\rm e}^{\theta^\beta\,\mathscr{Q}^{(1)}_\beta+x^J\,\mathscr{P}^{(1)}_J}&=&{\rm e}^{(\varepsilon^\a+\theta^\a)\,\mathscr{Q}^{(1)}_\a+(y^I+x^I)\,\mathscr{P}^{(1)}_I+\frac{1}{2}\,[\varepsilon^\a\,\mathscr{Q}^{(1)}_\a,\theta^\beta\,\mathscr{Q}^{(1)}_\beta]}\cr\cr &=&{\rm e}^{(\varepsilon^\a+\theta^\a)\,\mathscr{Q}^{(1)}_\a+(y^I+x^I)\,\mathscr{P}^{(1)}_I-\frac{1}{2}\,\varepsilon^\a\,\theta^\beta\,\{\mathscr{Q}^{(1)}_\a,\mathscr{Q}^{(1)}_\beta\}}\cr\cr &=&{\rm e}^{(\varepsilon^\a+\theta^\a)\,\mathscr{Q}^{(1)}_\a+(y^I+x^I-\frac{1}{2}\,\ovl\varepsilon\,\Gamma^I\,\theta)\,\mathscr{P}^{(1)}_I}\,. \end{eqnarray} Thus, if we take $\,\mathcal{M}^{(0)}\,$ as the base of our supergeometry for the sake of illustrating the extension principle, the CaE/CE-cohomological trivialisation leads us quite naturally to a surjective submersion over it, with the commutative typical fibre $\,{\mathbb{R}}^{1,d-1}$.\ In the remainder of this section, we assume, instead, the super-Minkowski spacetime $\,\mathcal{M}^{(1)}\,$ to be the actual base of subsequent extensions necessitated by the trivialisation of the GS super-$(p+2)$-cocycles.
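The multiplication law derived above can be cross-checked for associativity. Since associativity hinges only on the bilinearity of $\,\ovl\varepsilon\,\Gamma^I\,\theta\,$ in its two arguments, the sketch below models the Gra\ss mann-odd parameters by commuting reals and the $\,C\cdot\Gamma^I\,$ by arbitrary bilinear forms (both stand-ins of ours, sufficient for this particular check):

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(3, 4, 4))  # placeholder bilinear forms standing in for C.Gamma^I

def pair(eps, theta):
    # the bilinear "epsilon-bar Gamma^I theta"; only bilinearity matters below
    return np.einsum('Iab,a,b->I', G, eps, theta)

def mul(g1, g2):
    # the super-Minkowski group law (theta, x) . (theta', x')
    (th1, x1), (th2, x2) = g1, g2
    return th1 + th2, x1 + x2 - 0.5 * pair(th1, th2)

g = [(rng.normal(size=4), rng.normal(size=3)) for _ in range(3)]
lhs = mul(mul(g[0], g[1]), g[2])
rhs = mul(g[0], mul(g[1], g[2]))
assert all(np.allclose(a, b) for a, b in zip(lhs, rhs))
print("group law is associative")
```

Both orderings produce the total shift $\,-\tfrac{1}{2}\,\bigl[p(\theta_1,\theta_2)+p(\theta_1,\theta_3)+p(\theta_2,\theta_3)\bigr]$,\ which is the content of the check.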
However, in order to indicate the relation between $\,\mathcal{M}^{(0)}\,$ and $\,\mathcal{M}^{(1)}$,\ we pedantically pull back the LI super-1-forms $\,\sigma\,$ to $\,\mathcal{M}^{(1)}\,$ along $\,\pi_0$.\medskip \subsubsection{The super-$0$-brane} The GS super-2-cocycle on the ten-dimensional super-Minkowski spacetime $\,\mathcal{M}^{(1)}\,$ reconstructed above that codetermines the dynamics of the super-$0$-brane has the simple form \begin{eqnarray}\label{eq:GS0form} \underset{\tx{\ciut{(2)}}}{\chi}=\ovl\sigma\wedge\Gamma_{11}\,\sigma\equiv\sigma^{\rm T}\wedge C\cdot\Gamma_{11}\,\sigma\,, \end{eqnarray} implicitly written in a spin representation of the Clifford algebra in which the product of the charge-conjugation matrix and the volume element $\,\Gamma_{11}\,$ is symmetric, \begin{eqnarray}\label{eq:Csymm} \bigl(C\cdot\Gamma_{11}\bigr)^{\rm T}=C\cdot\Gamma_{11}\,. \end{eqnarray} The super-2-form is manifestly LI but does not possess a primitive with this property. Indeed, a global primitive $\,\underset{\tx{\ciut{(1)}}}{\beta}\,$ of $\,\underset{\tx{\ciut{(2)}}}{\chi}\,$ satisfies \begin{eqnarray}\nn \underset{\tx{\ciut{(1)}}}{\beta}(\theta,x)-\ovl\theta\,\Gamma_{11}\,\sigma(\theta,x)\in{\rm ker}\,{\mathsf d}\,, \end{eqnarray} and so -- in view of the triviality of the de Rham cohomology of $\,{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}\,$ -- we have \begin{eqnarray}\nn \underset{\tx{\ciut{(1)}}}{\beta}(\theta,x)=\ovl\theta\,\Gamma_{11}\,\sigma(\theta)+{\mathsf d}\txa(\theta,x) \end{eqnarray} for a (Gra\ss mann-)even-valued superfunction $\,\txa\,$ on $\,{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}$.\ Write \begin{eqnarray}\nn \txa(\theta,x)=\txa_0(x)+\theta^\a\,\txa_{1\,\a}(\theta,x) \end{eqnarray} for some even-valued superfunction $\,\txa_0\,$ and an odd-valued one $\,\txa_{1\,\a}$,\ so that \begin{eqnarray}\nn {\mathsf d}\txa(\theta,x)={\mathsf d} x^I\,\partial_I\txa_0(x)+{\mathsf d}\theta^\a\,\txa_{1\,\a}(\theta,x)+{\mathsf d} 
x^I\,\theta^\a\,\partial_I\txa_{1\,\a}(\theta,x)- {\mathsf d}\theta^\beta\,\theta^\a\,\overrightarrow\partial_\beta\txa_{1\,\a}(\theta,x)\,. \end{eqnarray} Demanding that the above be ${\mathbb{R}}^{1,d-1}$-invariant at every point $\,(\theta,x)\in{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}\,$ of its domain implies that the $\,\partial_I\txa_0(x)\,$ are Gra\ss mann-even constants and that the $\,\partial_I\txa_{1\,\a}(\theta,x)\,$ are odd-valued superfunctions of $\,\theta$,\ or that \begin{eqnarray}\nn \txa_{1\,\a}(\theta,x)={\rm c}_{1\,\a}(\theta)+x^I\,{\rm c}_{1\,I\a}(\theta) \end{eqnarray} for some odd-valued superfunctions $\,{\rm c}_{1\,\a}(\theta)\,$ and $\,{\rm c}_{1\,I\a}(\theta)$,\ on which the requirement that \begin{eqnarray}\nn \txa_{1\,\a}(\theta,x)-\theta^\beta\,\overrightarrow\partial_\a\txa_{1\,\beta}(\theta,x)={\rm c}_{1\,\a}(\theta)-\theta^\beta\,\overrightarrow\partial_\a{\rm c}_{1\,\beta}(\theta)+x^I\,\bigl({\rm c}_{1\,I\a}(\theta)-\theta^\beta\,\overrightarrow\partial_\a{\rm c}_{1\,I\beta}(\theta)\bigr) \end{eqnarray} be ${\mathbb{R}}^{1,d-1}$-invariant immediately yields \begin{eqnarray}\nn {\rm c}_{1\,I\a}(\theta)=0\,, \end{eqnarray} and so leads to \begin{eqnarray}\nn {\mathsf d}\txa(\theta,x)={\mathsf d} x^I\,c_I+{\mathsf d}\left(\theta^\a\,\txa_{1\,\a}(\theta)\right)\equiv c_I\,{\mathsf d} x^I+{\mathsf d}\widetilde\txa(\theta) \end{eqnarray} for an (arbitrary) even-valued superfunction $\,\widetilde\txa$.\ Altogether, we obtain \begin{eqnarray}\label{eq:spartprimgen} \underset{\tx{\ciut{(1)}}}{\beta}(\theta,x)=\ovl\theta\,\Gamma_{11}\,\sigma(\theta)+{\mathsf d}\widetilde\txa(\theta)+c_I\,{\mathsf d} x^I\,, \end{eqnarray} whence \begin{eqnarray}\nn \underset{\tx{\ciut{(1)}}}{\beta}\bigl(\theta+\varepsilon,x-\tfrac{1}{2}\,\ovl\varepsilon\,\Gamma^\cdot\,\theta\bigr)-\underset{\tx{\ciut{(1)}}}{\beta}(\theta,x)=\ovl\varepsilon\,\widetilde\Gamma_{11}\,{\mathsf d}\theta+{\mathsf 
d}\bigl(\widetilde\txa(\theta+\varepsilon)-\widetilde\txa(\theta)\bigr)\,, \end{eqnarray} with \begin{eqnarray}\nn \widetilde\Gamma_{11}:=\Gamma_{11}-\tfrac{1}{2}\,c_I\,\Gamma^I\,. \end{eqnarray} However, the term in the expansion of $\,\widetilde\txa\,$ quadratic in the $\,\theta^\a\,$ is necessarily of the form $\,\theta^{\rm T}\,A\,\theta\,$ with $\,A^{\rm T}=-A$,\ which implies that it cannot cancel the first term in the variation containing the (nonzero) symmetric matrix $\,C\cdot\widetilde\Gamma_{11}$.\ This implies the necessity to extend $\,\mathcal{M}^{(1)}\,$ along the lines of App.\,\ref{app:LieAlgCohom}. Prior to proceeding with the extension, we pause to take a closer look at the (super)symmetry properties of the primitive \eqref{eq:spartprimgen}, with a view to understanding the nature of the extension to be constructed. Reasoning along the lines of the (pre-)quantum-symmetry analysis presented at the end of Sec.\,\ref{sub:defcanquant}, we are led to consider the expression \begin{eqnarray}\nn (\delta_{{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}}\underset{\tx{\ciut{(1)}}}{\beta})_{(\varepsilon,y)}(\theta,x)\equiv(\delta\underset{\tx{\ciut{(1)}}}{\beta})_{(\varepsilon,y)}(\theta,x)={\mathsf d}\bigl(\ovl\varepsilon\,\widetilde\Gamma_{11}\,\theta+(\delta\widetilde\txa)_{(\varepsilon,y)}(\theta)\bigr)\,, \end{eqnarray} which gives us the {\bf target supercurrent} \begin{eqnarray}\label{eq:tgt-curr-0} \underset{\tx{\ciut{(0)}}}{\jmath}{}_{(\varepsilon,y)}(\theta,x)=\ovl\varepsilon\,\widetilde\Gamma_{11}\,\theta+(\delta\widetilde\txa)_{(\varepsilon,y)}(\theta)+{\rm c}_{(\varepsilon,y)}\,,\qquad{\rm c}_{(\varepsilon,y)}\in{\mathbb{R}}\,. \end{eqnarray} Here, the Gra\ss mann-even constants $\,{\rm c}_{(\varepsilon,y)}\,$ quantify the residual freedom of redefinition of the current.
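The step above used the fact that a superfunction term quadratic in the $\,\theta^\a\,$ is necessarily of the form $\,\theta^{\rm T}\,A\,\theta\,$ with $\,A\,$ antisymmetric. A minimal Gra\ss mann-monomial implementation (ours, for illustration only) confirms that the symmetric part of any quadratic form in anticommuting variables drops out identically:

```python
import numpy as np
from collections import defaultdict

def gmono(indices):
    """Sort a Grassmann monomial; return (sign, tuple), or (0, ()) if an index repeats."""
    idx, sign = list(indices), 1
    for i in range(len(idx)):                 # bubble sort, tracking transpositions
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    if len(set(idx)) < len(idx):
        return 0, ()                          # theta_a theta_a = 0
    return sign, tuple(idx)

def quadratic_form(S, n):
    """theta^T S theta as a dict {sorted monomial: coefficient}."""
    out = defaultdict(float)
    for a in range(n):
        for b in range(n):
            sign, mono = gmono((a, b))
            if sign:
                out[mono] += sign * S[a, b]
    return {m: c for m, c in out.items() if abs(c) > 1e-12}

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
S, K = (A + A.T) / 2, (A - A.T) / 2           # symmetric / antisymmetric parts
assert quadratic_form(S, 4) == {}             # the symmetric part vanishes identically
assert quadratic_form(K, 4) != {}             # the antisymmetric part survives
print("theta^T S theta vanishes for symmetric S")
```

This is precisely why the symmetric matrix $\,C\cdot\widetilde\Gamma_{11}\,$ cannot be produced by any choice of $\,\widetilde\txa$.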
With the latter, we associate, in the manner structurally identical with that discussed in Sec.\,\ref{sub:defcanquant}, the {\bf current super-2-cocycle}, \begin{eqnarray} (\delta\underset{\tx{\ciut{(0)}}}{\jmath})_{(\varepsilon_1,y_1),(\varepsilon_2,y_2)}(\theta,x)&=&\ovl\varepsilon_1\,\widetilde\Gamma_{11}\,(\theta+\varepsilon_2)-(\ovl\varepsilon_1+\ovl\varepsilon_2)\,\widetilde\Gamma_{11}\,\theta+\ovl\varepsilon_2\,\widetilde\Gamma_{11}\,\theta+(\delta^2\widetilde\txa)_{(\varepsilon_1,y_1),(\varepsilon_2,y_2)}(\theta)\cr\cr &&+{\rm c}_{(\varepsilon_1,y_1)}-{\rm c}_{(\varepsilon_1,y_1)\cdot(\varepsilon_2,y_2)}+{\rm c}_{(\varepsilon_2,y_2)}\cr\cr &=&\ovl\varepsilon_1\,\widetilde\Gamma_{11}\,\varepsilon_2+(\delta{\rm c})_{(\varepsilon_1,y_1),(\varepsilon_2,y_2)}\,, \label{eq:delphi} \end{eqnarray} whose \emph{non}triviality is readily verified. We begin the proof by rephrasing the question about the triviality of $\,\delta\underset{\tx{\ciut{(0)}}}{\jmath}\,$ -- this is tantamount to the existence of a 1-cochain $\,{\rm c}_\cdot\,$ with the property that \begin{eqnarray}\label{eq:spartcurrtriv} (\delta{\rm c})_{(\varepsilon_1,0),(\varepsilon_2,0)}=-\ovl\varepsilon_1\,\widetilde\Gamma_{11}\,\varepsilon_2\,. \end{eqnarray} Given the nilpotence of the Gra\ss mann-odd coordinates, it makes sense to write the Maclaurin expansion of the parameterised constants, \begin{eqnarray}\nn {\rm c}_{(\varepsilon,y)}=2{\rm C}_0(y)+{\rm C}_1(y)\,\varepsilon+\tfrac{1}{2}\,\varepsilon\,{\rm C}_2(y)\,\varepsilon+\Delta_3(\varepsilon,y) \end{eqnarray} where $\,\Delta_3(\varepsilon,y)\,$ is a remainder at least trilinear in $\,\varepsilon$,\ and where, for all $\,y\in{\mathbb{R}}^{1,d-1}$, \begin{eqnarray}\nn {\rm C}_2(y)^{\rm T}=-{\rm C}_2(y)\,.
\end{eqnarray} We now obtain \begin{eqnarray}\nn (\delta{\rm c})_{(\varepsilon_1,0),(\varepsilon_2,0)}&\equiv&{\rm c}_{(\varepsilon_1,0)}-{\rm c}_{(\varepsilon_1+\varepsilon_2,-\frac{1}{2}\,\ovl\varepsilon_1\,\Gamma^\cdot\,\varepsilon_2)}+{\rm c}_{(\varepsilon_2,0)}\cr\cr &=&4{\rm C}_0(0)+{\rm C}_1(0)(\varepsilon_1+\varepsilon_2)+\tfrac{1}{2}\,\varepsilon_1\,{\rm C}_2(0)\,\varepsilon_1+\tfrac{1}{2}\,\varepsilon_2\,{\rm C}_2(0)\,\varepsilon_2+\Delta_3(\varepsilon_1,0)+\Delta_3(\varepsilon_2,0)\cr\cr &&-2{\rm C}_0\bigl(-\tfrac{1}{2}\,\ovl\varepsilon_1\,\Gamma^\cdot\,\varepsilon_2\bigr)-{\rm C}_1\bigl(-\tfrac{1}{2}\,\ovl\varepsilon_1\,\Gamma^\cdot\,\varepsilon_2\bigr)(\varepsilon_1+\varepsilon_2)-\tfrac{1}{2}\,(\varepsilon_1+\varepsilon_2)\,{\rm C}_2\bigl(-\tfrac{1}{2}\,\ovl\varepsilon_1\,\Gamma^\cdot\,\varepsilon_2\bigr)\,(\varepsilon_1+\varepsilon_2)\cr\cr &&-\Delta_3\bigl(\varepsilon_1+\varepsilon_2,-\tfrac{1}{2}\,\ovl\varepsilon_1\,\Gamma^\cdot\,\varepsilon_2\bigr)\cr\cr &=&2{\rm C}_0(0)+\varepsilon_1\,\bigl(\partial_I{\rm C}_0(0)\,\ovl\Gamma{}^I-{\rm C}_2(0)\bigr)\,\varepsilon_2+\widetilde\Delta(\varepsilon_1,\varepsilon_2) \end{eqnarray} in which the last term depends at least cubically on the $\,\varepsilon_i\,$ and hence cannot cancel the bilinear term $\,\ovl\varepsilon_1\,\widetilde\Gamma_{11}\,\varepsilon_2$.\ The relevant equality \eqref{eq:spartcurrtriv} implies \begin{eqnarray}\nn \ovl\Gamma_{11}=\bigl(\tfrac{1}{2}\,c_I-\partial_I{\rm C}_0(0)\bigr)\,\ovl\Gamma{}^I+{\rm C}_2(0)\,. \end{eqnarray} In view of the assumed symmetry of the matrices $\,\ovl\Gamma_{11}\,$ and $\,\ovl\Gamma{}^I\,$ ({\it cp.}\ Eqs.\,\eqref{eq:Csymm} and \eqref{eq:CGamSym}), the above yields the equality \begin{eqnarray}\nn {\rm C}_2(0)=0 \end{eqnarray} and further reduces to \begin{eqnarray}\nn \Gamma_{11}=\bigl(\tfrac{1}{2}\,c_I-\partial_I{\rm C}_0(0)\bigr)\,\Gamma^I\,, \end{eqnarray} which admits no solutions.
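The non-existence of solutions reflects the linear independence of the volume element from the $\,\Gamma^I$.\ A toy computation in a real $d=1+1$ representation (our choice; the text works in $d=10$) illustrates the mechanism: the analogous chirality matrix is trace-orthogonal to the $\,\gamma^I\,$ and hence never a linear combination of them.

```python
import numpy as np

g0 = np.array([[0., 1.], [-1., 0.]])   # gamma^0 in a real d = 1+1 toy representation
g1 = np.array([[0., 1.], [1., 0.]])    # gamma^1
chir = g0 @ g1                          # the chirality matrix (analogue of Gamma_11)

# Try to solve chir = c_0 gamma^0 + c_1 gamma^1 in the least-squares sense.
A = np.stack([g0.ravel(), g1.ravel()], axis=1)
c, res, *_ = np.linalg.lstsq(A, chir.ravel(), rcond=None)
best = c[0] * g0 + c[1] * g1

# chir is trace-orthogonal to the gamma^I, so the best approximation is zero.
assert np.allclose(best, 0.)
assert not np.allclose(best, chir)      # no choice of c_I reproduces the chirality matrix
print("chirality matrix is not a combination of the gamma^I")
```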
Thus convinced of the nontriviality of the supersymmetry current 2-cocycle, we conclude that the ensuing {\bf homomorphicity super-2-cocycle} \begin{eqnarray}\nn d^{(0)}_{(\varepsilon_1,y_1),(\varepsilon_2,y_2)}={\rm e}^{{\mathsf i}\,\ovl\varepsilon_1\,\widetilde\Gamma_{11}\,\varepsilon_2} \end{eqnarray} is also nontrivial, and therefore predicts a projective nature of the realisation of supersymmetry on the Hilbert space of the super-$0$-brane. This suggests that it is, in fact, the central extension of $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\,$ determined by the above super-2-cocycle, and not the supersymmetry group $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\,$ itself, that will lift to the extension of $\,\mathcal{M}^{(1)}\,$ that we are about to derive. We may now return to our geometric construction and seek corroboration of our expectations. Consider a trivial principal ${\mathbb{C}}^\x$-bundle \begin{eqnarray}\label{eq:spartbndl} \pi_{\mathscr{L}^{(0)}}\equiv{\rm pr}_1\ :\ \mathscr{L}^{(0)}:=\mathcal{M}^{(1)}\x{\mathbb{C}}^\x\longrightarrow\mathcal{M}^{(1)}\ :\ (\theta^\a,x^I,z)\longmapsto(\theta^\a,x^I) \end{eqnarray} with a connection \begin{eqnarray}\label{eq:conn0grb} \nabla_{\mathscr{L}^{(0)}}={\mathsf d}+\tfrac{1}{{\mathsf i}}\,\underset{\tx{\ciut{(1)}}}{\beta}\,, \end{eqnarray} or -- equivalently -- a principal ${\mathbb{C}}^\x$-connection 1-form \begin{eqnarray}\nn \underset{\tx{\ciut{(1)}}}{\beta}^{(2)}(\theta,x,z)={\mathsf i}\,\tfrac{{\mathsf d} z}{z}+\underset{\tx{\ciut{(1)}}}{\beta}(\theta,x)\,, \end{eqnarray} where we fix the primitive of $\,\underset{\tx{\ciut{(2)}}}{\chi}\,$ to be \begin{eqnarray}\nn \underset{\tx{\ciut{(1)}}}{\beta}(\theta,x)=\ovl\theta\,\Gamma_{11}\,\sigma(\theta)\,, \end{eqnarray} and demand that a lift of the geometric action $\,\ell^{(1)}_\cdot\,$ of \Reqref{eq:sact-sMink} to the total space $\,\mathscr{L}^{(0)}\,$ be a connection-preserving automorphism. 
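Consistency of the extension alluded to here requires the phase entering the homomorphicity super-2-cocycle to satisfy the 2-cocycle condition -- equivalently, the twisted multiplication must be associative. As this rests only on the bilinearity of $\,\ovl\varepsilon_1\,\widetilde\Gamma_{11}\,\varepsilon_2$,\ it can be checked with commuting stand-ins for the odd parameters and random symmetric stand-ins for $\,C\cdot\Gamma_{11}\,$ and $\,C\cdot\Gamma^I\,$ (our simplifications, sufficient for this check):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
G11 = rng.normal(size=(n, n))
G11 = G11 + G11.T                      # a symmetric stand-in for C.Gamma_11
GI = rng.normal(size=(3, n, n))        # stand-ins for the C.Gamma^I

def mul(g1, g2):
    """The centrally extended group law; spinor entries are modelled by
    commuting reals, which suffices to test the 2-cocycle (associativity)."""
    (th1, x1, z1), (th2, x2, z2) = g1, g2
    lam = th1 @ G11 @ th2              # the phase 2-cocycle lambda
    return (th1 + th2,
            x1 + x2 - 0.5 * np.einsum('Iab,a,b->I', GI, th1, th2),
            np.exp(1j * lam) * z1 * z2)

g = [(rng.normal(size=n), rng.normal(size=3), np.exp(1j * rng.normal()))
     for _ in range(3)]
lhs = mul(mul(g[0], g[1]), g[2])
rhs = mul(g[0], mul(g[1], g[2]))
assert all(np.allclose(a, b) for a, b in zip(lhs, rhs))
print("the phase satisfies the 2-cocycle condition")
```

Both orderings accumulate the same total phase $\,\lambda_{12}+\lambda_{13}+\lambda_{23}$,\ which is precisely the cocycle condition.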
In the light of our analysis of the supersymmetry properties of the primitive $\,\underset{\tx{\ciut{(1)}}}{\beta}\,$ of $\,\underset{\tx{\ciut{(2)}}}{\chi}$,\ it is justified to leave open the possibility of inducing the said lift (through restriction) from the Lie supergroup structure on $\,\mathscr{L}^{(0)}\,$ determined by the binary operation \begin{eqnarray} {\rm m}^{(2)}_0\ &:&\ \mathscr{L}^{(0)}\x\mathscr{L}^{(0)}\longrightarrow\mathscr{L}^{(0)}\cr\cr &:&\ \bigl(\bigl(\theta_1^\a,x_1^I,z_1\bigr),\bigl(\theta_2^\beta,x_2^J,z_2\bigr)\bigr)\longmapsto\bigl(\theta_1^\a+\theta_2^\a,x_1^I+x_2^I-\tfrac{1}{2}\,\ovl\theta_1\,\Gamma^I\,\theta_2,{\rm e}^{{\mathsf i}\,\lambda_{(\theta_1,x_1),(\theta_2,x_2)}}\cdot z_1\cdot z_2\bigr)\,,\label{eq:spartLonL} \end{eqnarray} in whose definition $\,\lambda_{\cdot,\cdot}\,$ is a 2-cocycle on $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\,$ with values in $\,{\mathbb{R}}/2\pi{\mathbb{Z}}$,\ {\it cp.}\ \Reqref{eq:projisext}. The induced action is then given by the bundle automorphisms \begin{eqnarray} \mathscr{L}^{(0)}\ell_\cdot^{(1)}\ &:&\ {\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\x\mathscr{L}^{(0)}\longrightarrow\mathscr{L}^{(0)}\cr\cr &:&\ \bigl(\bigl(\varepsilon^\a,y^I\bigr),\bigl(\theta^\beta,x^J,z\bigr)\bigr)\longmapsto{\rm m}^{(2)}_0\bigl(\bigl(\varepsilon^\a,y^I,1\bigr),\bigl(\theta^\beta,x^J,z\bigr)\bigr)\label{eq:spartindact}\\\cr &&\hspace{4cm}=\bigl(\theta^\a+\varepsilon^\a,x^I+y^I-\tfrac{1}{2}\,\ovl\varepsilon\,\Gamma^I\,\theta,{\rm e}^{{\mathsf i}\,\lambda_{(\varepsilon,y),(\theta,x)}}\cdot z\bigr)\,.\nonumber \end{eqnarray} The requirement that it preserve the connection \eqref{eq:conn0grb} is tantamount to the imposition of the constraints \begin{eqnarray}\nn {\mathsf d}\lambda_{(\varepsilon,y),(\theta,x)}={\mathsf d}(\ovl\varepsilon\,\Gamma_{11}\,\theta)\,, \end{eqnarray} to which the solution reads, in conformity with our expectations, \begin{eqnarray}\nn 
\lambda_{(\varepsilon,y),(\theta,x)}=\ovl\varepsilon\,\Gamma_{11}\,\theta+\Delta_{(\varepsilon,y)}\,, \end{eqnarray} where $\,\Delta_{(\varepsilon,y)}\in{\mathbb{R}}/2\pi{\mathbb{Z}}\,$ is suitably constrained, \begin{eqnarray}\nn \forall_{(\varepsilon_1,y_1),(\varepsilon_2,y_2)\in{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}}\ :\ \Delta_{(\varepsilon_1,y_1)\cdot(\varepsilon_2,y_2)}=\Delta_{(\varepsilon_2,y_2)}\,, \end{eqnarray} so that $\,\lambda_{\cdot,\cdot}\,$ is a 2-cocycle. We shall set \begin{eqnarray}\nn \lambda_{(\varepsilon,y),(\theta,x)}=\ovl\varepsilon\,\Gamma_{11}\,\theta\,, \end{eqnarray} and so also \begin{eqnarray}\label{eq:homscocyc} d^{(0)}_{(\varepsilon_1,y_1),(\varepsilon_2,y_2)}={\rm e}^{{\mathsf i}\,\ovl\varepsilon_1\,\Gamma_{11}\,\varepsilon_2}\,. \end{eqnarray} With the phases thus fixed, we arrive at \begin{Prop}\label{prop:L0group} The principal ${\mathbb{C}}^\x$-bundle $\,\mathscr{L}^{(0)}\,$ of \Reqref{eq:spartbndl} equipped with the binary operation \begin{eqnarray} {\rm m}^{(2)}_0\ &:&\ \mathscr{L}^{(0)}\x\mathscr{L}^{(0)}\longrightarrow\mathscr{L}^{(0)}\cr\cr &:&\ \bigl(\bigl(\theta_1^\a,x_1^I,z_1\bigr),\bigl(\theta_2^\beta,x_2^J,z_2\bigr)\bigr)\longmapsto\bigl(\theta_1^\a+\theta_2^\a,x_1^I+x_2^I-\tfrac{1}{2}\,\ovl\theta_1\,\Gamma^I\,\theta_2,d^{(0)}_{(\theta_1,x_1),(\theta_2,x_2)}\cdot z_1\cdot z_2\bigr)\,,\label{eq:spartLonLfix} \end{eqnarray} with the inverse \begin{eqnarray}\nn {\rm Inv}_0^{(2)}\ :\ \mathscr{L}^{(0)}\longrightarrow\mathscr{L}^{(0)}\ :\ \bigl(\theta^\a,x^I,z\bigr)\longmapsto\bigl(-\theta^\a,-x^I,z^{-1}\bigr) \end{eqnarray} and the neutral element \begin{eqnarray}\nn e^{(2)}_0=(0,0,1) \end{eqnarray} is a Lie supergroup. 
It is a central extension \begin{eqnarray}\nn {\boldsymbol{1}}\longrightarrow{\mathbb{C}}^\x\longrightarrow{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}{\mathbb{C}}^\x\equiv\widehat{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\xrightarrow{\ \pi_{\mathscr{L}^{(0)}}\ }{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\longrightarrow{\boldsymbol{1}} \end{eqnarray} of the super-Minkowski group $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\,$ determined by the homomorphicity super-2-cocycle $\,d^{(0)}_{\cdot,\cdot}\,$ of \Reqref{eq:homscocyc}. \end{Prop} \noindent\begin{proof} Obvious, through inspection\footnote{The existence of the structure of a Lie supergroup on this and many other (super)central extensions of Lie supergroups derived from the underlying super-Minkowski group $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\,$ through consecutive extensions determined by CaE super-2-cocycles was noted and discussed at great length in \Rcite{Chryssomalakos:2000xd}, and follows from the general theory of (super)central group extensions, {\it cp.}\ also \Rcite{DeAzcarraga:1995}. Our results, augmented with detailed derivations, should therefore be compared with those obtained in the paper.}. 
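For completeness, let us record the direct check of the 2-cocycle identity for $\,d^{(0)}_{\cdot,\cdot}$:\ by the bilinearity of the pairing $\,(\varepsilon_1,\varepsilon_2)\longmapsto\ovl\varepsilon_1\,\Gamma_{11}\,\varepsilon_2\,$ and the additivity of the Gra\ss mann-odd coordinates under the binary operation of $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}$,\ we find, for arbitrary $\,(\varepsilon_i,y_i)\in{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}},\ i\in\{1,2,3\}$, \begin{eqnarray}\nn d^{(0)}_{(\varepsilon_1,y_1),(\varepsilon_2,y_2)}\cdot d^{(0)}_{(\varepsilon_1,y_1)\cdot(\varepsilon_2,y_2),(\varepsilon_3,y_3)}={\rm e}^{{\mathsf i}\,\bigl(\ovl\varepsilon_1\,\Gamma_{11}\,\varepsilon_2+\ovl\varepsilon_1\,\Gamma_{11}\,\varepsilon_3+\ovl\varepsilon_2\,\Gamma_{11}\,\varepsilon_3\bigr)}=d^{(0)}_{(\varepsilon_1,y_1),(\varepsilon_2,y_2)\cdot(\varepsilon_3,y_3)}\cdot d^{(0)}_{(\varepsilon_2,y_2),(\varepsilon_3,y_3)}\,. \end{eqnarray}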
\end{proof} Using the above binary operation $\,{\rm m}^{(2)}_0\,$ in \Reqref{eq:spartindact}, we obtain the composition law of the induced action: \begin{eqnarray}\nn \mathscr{L}^{(0)}\ell^{(1)}_{(\varepsilon_2,y_2)}\circ\mathscr{L}^{(0)}\ell^{(1)}_{(\varepsilon_1,y_1)}=\bigl({\rm id}_{\mathcal{M}^{(1)}}\x{\mathsf m}\bigl({\rm e}^{{\mathsf i}\,\ovl\varepsilon_1\,\Gamma_{11}\,\varepsilon_2},\cdot\bigr)\bigr)\circ\mathscr{L}^{(0)}\ell^{(1)}_{(\varepsilon_2,y_2)\cdot(\varepsilon_1,y_1)}\,, \end{eqnarray} and so the supergroup structure on $\,\mathscr{L}^{(0)}\,$ defines a projective realisation of supersymmetry on $\,\mathscr{L}^{(0)}$.\ It is only upon defining the action of the full central extension $\,\widehat{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\,$ that the realisation becomes proper, as anticipated. The definition is deduced from \Reqref{eq:spartLonLfix} and reads \begin{eqnarray}\nn \widehat\ell^{(0)}_\cdot\equiv{\rm m}^{(2)}_0\ &:&\ \widehat{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\x\mathscr{L}^{(0)}\longrightarrow\mathscr{L}^{(0)}\,. \end{eqnarray} Our discussion leads us naturally to \begin{Def}\label{def:s0gerbe} The \textbf{Green--Schwarz super-0-gerbe} over $\,\mathcal{M}^{(1)}\equiv{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}\,$ of curvature $\,\underset{\tx{\ciut{(2)}}}{\chi}\,$ is the triple \begin{eqnarray}\nn \mathcal{sG}^{(0)}_{\rm GS}:=\bigl(\mathscr{L}^{(0)},\pi_{\mathscr{L}^{(0)}},\underset{\tx{\ciut{(1)}}}{\beta}^{(2)}\bigr) \end{eqnarray} constructed in the preceding paragraphs.
\begin{flushright}$\diamond$\end{flushright} \noindent We may now restate the results of our analysis in the form of \begin{Prop}\label{prop:s0gerbe} The Green--Schwarz super-0-gerbe of Definition \ref{def:s0gerbe} is a ${\mathbb{C}}^\x$-bundle with connection over the super-Minkowski space $\,{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}$.\ The bundle admits the natural projective action $\,\mathscr{L}^{(0)}\ell_\cdot^{(1)}\,$ of \Reqref{eq:spartindact}, by connection-preserving principal-bundle automorphisms, of the supersymmetry group $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\,$ induced, through restriction, by the group structure $\,{\rm m}^{(2)}_0\,$ of \Reqref{eq:spartLonL} on the total space of the bundle. The said group structure defines, also through restriction, an action of the central extension $\,\widehat{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\,$ of $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\,$ determined by the homomorphicity super-2-cocycle $\,d^{(0)}_{\cdot,\cdot}\,$ on $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\,$ specified in \Reqref{eq:homscocyc}. \end{Prop} With a view to subsequent discussion, and to potential future applications, we abstract from the above \begin{Def}\label{def:CaEs0g} Let $\,{\rm G}\,$ be a Lie supergroup with trivial de Rham cohomology. Denote the binary operation on $\,{\rm G}\,$ as \begin{eqnarray}\nn {\rm m}_{\rm G}\ :\ {\rm G}\x{\rm G}\longrightarrow{\rm G} \end{eqnarray} and the corresponding left-regular action of $\,{\rm G}\,$ on itself as \begin{eqnarray}\nn \ell_\cdot\equiv{\rm m}_{\rm G}\ :\ {\rm G}\x{\rm G}\longrightarrow{\rm G}\ :\ ({\mathbb{Y}},{\mathbb{X}})\longmapsto{\rm m}_{\rm G}({\mathbb{Y}},{\mathbb{X}})\equiv\ell_{\mathbb{Y}}({\mathbb{X}})\,. \end{eqnarray} Let $\,\underset{\tx{\ciut{(2)}}}{{\rm h}}\,$ be a super-2-cocycle on $\,{\rm G}\,$ representing a class in its (left) CaE cohomology.
A {\bf Cartan--Eilenberg super-0-gerbe} over $\,{\rm G}\,$ with curvature $\,\underset{\tx{\ciut{(2)}}}{{\rm h}}\,$ is a triple \begin{eqnarray}\nn \mathcal{sG}^{(0)}_{\rm CaE}:=\bigl({\rm L},\pi_{\rm L},\txa_{\rm L}\bigr) \end{eqnarray} composed of \begin{itemize} \item a trivial principal ${\mathbb{C}}^\x$-bundle \begin{eqnarray}\nn \pi_{\rm L}\equiv{\rm pr}_1\ :\ {\rm L}:={\rm G}\x{\mathbb{C}}^\x\longrightarrow{\rm G}\ :\ ({\mathbb{X}},z)\longmapsto{\mathbb{X}} \end{eqnarray} \item a principal connection 1-form on it, \begin{eqnarray}\nn \txa_{\rm L}({\mathbb{X}},z)={\mathsf i}\,\tfrac{{\mathsf d} z}{z}+\underset{\tx{\ciut{(1)}}}{{\rm b}}({\mathbb{X}})\,, \end{eqnarray} and the associated principal connection \begin{eqnarray}\nn \nabla_{\rm L}={\mathsf d}+\tfrac{1}{{\mathsf i}}\,\underset{\tx{\ciut{(1)}}}{{\rm b}}\,, \end{eqnarray} determined by a global primitive $\,\underset{\tx{\ciut{(1)}}}{{\rm b}}\,$ of $\,\underset{\tx{\ciut{(2)}}}{{\rm h}}$, \begin{eqnarray}\nn \underset{\tx{\ciut{(2)}}}{{\rm h}}={\mathsf d}\underset{\tx{\ciut{(1)}}}{{\rm b}}\,, \end{eqnarray} and with a structure of a Lie supergroup on the total space that lifts that on its base along the Lie-supergroup homomorphism $\,\pi_{\rm L}\,$ in such a manner that $\,\txa_{\rm L}\,$ is an LI super-1-form with respect to it, that is, given a super-0-form $\,\underset{\tx{\ciut{(1)}}}{\lambda}\,$ on $\,{\rm G}^{\x 2}\,$ satisfying the identity \begin{eqnarray}\nn \ell_{\mathbb{Y}}^*\underset{\tx{\ciut{(1)}}}{{\rm b}}({\mathbb{X}})-\underset{\tx{\ciut{(1)}}}{{\rm b}}({\mathbb{X}})={\mathsf d}\underset{\tx{\ciut{(1)}}}{\lambda}({\mathbb{Y}},{\mathbb{X}})\,, \end{eqnarray} the binary operation on $\,{\rm L}\,$ takes the form \begin{eqnarray}\nn {\rm m}_{\rm L}\ :\ {\rm L}\x{\rm L}\longrightarrow{\rm L}\ :\ \bigl(({\mathbb{X}}_1,z_1),({\mathbb{X}}_2,z_2)\bigr)\longmapsto\bigl({\rm m}_{\rm G}({\mathbb{X}}_1,{\mathbb{X}}_2),{\rm exp}\bigl({\mathsf 
i}\,\underset{\tx{\ciut{(1)}}}{\lambda}({\mathbb{X}}_1,{\mathbb{X}}_2)\bigr)\cdot z_1\cdot z_2\bigr)\,. \end{eqnarray} \end{itemize} Given CaE super-0-gerbes $\,\mathcal{sG}^{(0)\,A}_{\rm CaE}=\bigl({\rm L}_A,\pi_{{\rm L}_A},\txa_{{\rm L}_A}\bigr),\ A\in\{1,2\}\,$ over a common base $\,{\rm G}$,\ with the respective principal connections $\,\txa_{{\rm L}_A}({\mathbb{X}},z_A)={\mathsf i}\,\frac{{\mathsf d} z_A}{z_A}+\underset{\tx{\ciut{(1)}}}{{\rm b}}{}_A({\mathbb{X}})$,\ an {\bf isomorphism} between them is a connection-preserving principal-bundle isomorphism \begin{eqnarray}\nn \Phi^{(0)}_{\rm CaE}\ :\ \mathcal{sG}^{(0)\,1}_{\rm CaE}\xrightarrow{\ \cong\ }\mathcal{sG}^{(0)\,2}_{\rm CaE} \end{eqnarray} determined by a left-invariant super-0-form $\,{\rm f}\,$ on $\,{\rm G}\,$ satisfying the identity \begin{eqnarray}\nn {\mathsf d}{\rm f}=\underset{\tx{\ciut{(1)}}}{{\rm b}}{}_2-\underset{\tx{\ciut{(1)}}}{{\rm b}}{}_1\,, \end{eqnarray} that is, the isomorphism has a coordinate presentation \begin{eqnarray}\nn \Phi^{(0)}_{\rm CaE}({\mathbb{X}},z_1)=\bigl({\mathbb{X}},{\rm exp}\bigl({\mathsf i}\,{\rm f}({\mathbb{X}})\bigr)\cdot z_1\bigr)\,. \end{eqnarray} \begin{flushright}$\diamond$\end{flushright} \brem Note that the tensor product of principal ${\mathbb{C}}^\x$-bundles described in the footnote on p.\,\pageref{foot:Cxprintens} gives rise to a tensor product of super-0-gerbes. \end{Rem}\medskip \brem On the super-Minkowski space and its cartesian powers, the existence of an isomorphism between CaE super-0-gerbes is tantamount to the (strict) equality of the corresponding base components of the principal connection 1-forms, \begin{eqnarray}\nn \underset{\tx{\ciut{(1)}}}{{\rm b}}{}_2-\underset{\tx{\ciut{(1)}}}{{\rm b}}{}_1=0\,.
\end{eqnarray} \end{Rem}\medskip We conclude this part of our discussion of the super-0-brane data with a detailed analysis of various ($\underset{\tx{\ciut{(2)}}}{\chi}$-twisted) algebroidal structures associated with natural actions of the supergroup $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\,$ on the super-Minkowski target. We start with the left-regular action. Here, the fundamental sections \eqref{eq:fundsecMink} of $\,\mathcal{E}^{1,0}{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}\,$ take the form \begin{eqnarray}\nn \gt{R}_I(\theta,x)=\bigl(\mathscr{P}_I(\theta,x),0\bigr)\,,\qquad\qquad\gt{R}_\a(\theta,x)=\bigl(\mathscr{Q}_\a(\theta,x),-2\ovl\Gamma{}^{11}_{\a\beta}\,\theta^\beta\bigr)\,,\qquad(I,\a)\in\ovl{0,d-1}\x\ovl{1,D_{1,d-1}} \end{eqnarray} and satisfy the algebra \begin{eqnarray}\nn &\sVbra{\gt{R}_I}{\gt{R}_J}^{\underset{\tx{\ciut{(2)}}}{\chi}}(\theta,x)=0\,,\qquad\qquad\sVbra{\gt{R}_I}{\gt{R}_\a}^{\underset{\tx{\ciut{(2)}}}{\chi}}(\theta,x)=0\,,&\cr\cr &\sVbra{\gt{R}_\a}{\gt{R}_\beta}^{\underset{\tx{\ciut{(2)}}}{\chi}}(\theta,x)=-\ovl\Gamma{}^I_{\a\beta}\,\gt{R}_I+\bigl(0,-2\ovl\Gamma{}^{11}_{\a\beta}\bigr)\,,& \end{eqnarray} from which we read off the left-regular Lie anomaly super-0-form \begin{eqnarray}\nn \a^{\rm L}_{IJ}=0\,,\qquad\qquad\a^{\rm L}_{I\a}=0=\a^{\rm L}_{\a I}\,,\qquad\qquad\a^{\rm L}_{\a\beta}=-2\ovl\Gamma{}^{11}_{\a\beta}\,. 
\end{eqnarray} Passing to the right-regular action, we find the relevant fundamental sections \begin{eqnarray}\nn \gt{L}_I(\theta,x)=\bigl(P_I(\theta,x),0\bigr)\,,\qquad\qquad\gt{L}_\a(\theta,x)=\bigl(Q_\a(\theta,x),-2\ovl\Gamma{}^{11}_{\a\beta}\,\theta^\beta\bigr) \end{eqnarray} and the superbrackets \begin{eqnarray}\nn &\sVbra{\gt{L}_I}{\gt{L}_J}^{\underset{\tx{\ciut{(2)}}}{\chi}}(\theta,x)=0\,,\qquad\qquad\sVbra{\gt{L}_I}{\gt{L}_\a}^{\underset{\tx{\ciut{(2)}}}{\chi}}(\theta,x)=0\,,&\cr\cr &\sVbra{\gt{L}_\a}{\gt{L}_\beta}^{\underset{\tx{\ciut{(2)}}}{\chi}}(\theta,x)=\ovl\Gamma{}^I_{\a\beta}\,\gt{L}_I+\bigl(0,-2\ovl\Gamma{}^{11}_{\a\beta}\bigr)\,,& \end{eqnarray} from which we read off the right-regular Lie anomaly super-0-form \begin{eqnarray}\nn \a^{\rm R}_{IJ}=0\,,\qquad\qquad\a^{\rm R}_{I\a}=0=\a^{\rm R}_{\a I}\,,\qquad\qquad\a^{\rm R}_{\a\beta}=-2\ovl\Gamma{}^{11}_{\a\beta}\,. \end{eqnarray} Comparison of the two anomalies singles out the adjoint action, with the fundamental sections \begin{eqnarray}\nn (\gt{R}_I-\gt{L}_I)(\theta,x)&=&\bigl((\mathscr{P}_I-P_I)(\theta,x),0\bigr)=0\,,\cr\cr (\gt{R}_\a-\gt{L}_\a)(\theta,x)&=&\bigl((\mathscr{Q}_\a-Q_\a)(\theta,x),0\bigr)=\bigl(-\ovl\Gamma{}^I_{\a\beta}\,\theta^\beta\,\tfrac{\partial\ }{\partial x^I},0\bigr)\,. \end{eqnarray} Given the superbrackets \begin{eqnarray}\nn &\sVbra{\gt{R}_I}{\gt{L}_J}^{\underset{\tx{\ciut{(2)}}}{\chi}}=0\,,\qquad\quad\sVbra{\gt{R}_I}{\gt{L}_\a}^{\underset{\tx{\ciut{(2)}}}{\chi}}=0\,,\qquad\quad\sVbra{\gt{R}_\a}{\gt{L}_I}^{\underset{\tx{\ciut{(2)}}}{\chi}}=0\,,\qquad\quad\sVbra{\gt{R}_\a}{\gt{L}_\beta}^{\underset{\tx{\ciut{(2)}}}{\chi}}=\bigl(0,-2\ovl\Gamma{}^{11}_{\a\beta}\bigr)\,,& \end{eqnarray} we obtain trivial, and hence -- in particular -- (Lie-)anomaly-free superbrackets of the fundamental sections of the adjoint action. Trivially, the fundamental sections for the adjoint action span a Lie superalgebroid.
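The vanishing of the anomaly in the adjoint sector can be made fully explicit: using only the bilinearity of the $\underset{\tx{\ciut{(2)}}}{\chi}$-twisted bracket and the symmetry of $\,\ovl\Gamma{}^{11}\,$ in its spinor indices, we compute \begin{eqnarray}\nn \sVbra{\gt{R}_\a-\gt{L}_\a}{\gt{R}_\beta-\gt{L}_\beta}^{\underset{\tx{\ciut{(2)}}}{\chi}}&=&\sVbra{\gt{R}_\a}{\gt{R}_\beta}^{\underset{\tx{\ciut{(2)}}}{\chi}}-\sVbra{\gt{R}_\a}{\gt{L}_\beta}^{\underset{\tx{\ciut{(2)}}}{\chi}}-\sVbra{\gt{L}_\a}{\gt{R}_\beta}^{\underset{\tx{\ciut{(2)}}}{\chi}}+\sVbra{\gt{L}_\a}{\gt{L}_\beta}^{\underset{\tx{\ciut{(2)}}}{\chi}}\cr\cr &=&-\ovl\Gamma{}^I_{\a\beta}\,\bigl(\gt{R}_I-\gt{L}_I\bigr)=0\,, \end{eqnarray} the four central terms $\,\bigl(0,-2\ovl\Gamma{}^{11}_{\a\beta}\bigr)\,$ cancelling pairwise in the alternating sum.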
\medskip \subsubsection{The Green--Schwarz superstring} At the next level in cohomology, which is where the super-$\sigma$-model for the superstring is constructed, we find the GS super-3-cocycle \begin{eqnarray}\label{eq:GS3form} \underset{\tx{\ciut{(3)}}}{\chi}=e^I\wedge\pi_0^*\bigl(\ovl\sigma\wedge\Gamma_I\,\sigma\bigr)\,. \end{eqnarray} This is a closed and manifestly LI super-3-form on $\,\mathcal{M}^{(1)}$,\ with no smooth LI primitive on the latter space. The stepwise procedure of its trivialisation in the CaE cohomology through pullback to consecutive (super)central extensions of the underlying Lie supergroup $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\,$ begins at the latter space, which is also where we look for LI de Rham super-2-cocycles of the type \eqref{eq:LI2bas}. The distinguished members of the family that we shall examine with a view to solving the trivialisation problem are \begin{eqnarray}\label{eq:CaEscocyc1} \underset{\tx{\ciut{(2)}}}{h}{}^{(1)}_\a=-\tfrac{1}{2}\mathscr{Q}_\a\righthalfcup\underset{\tx{\ciut{(3)}}}{\chi}=-\ovl\Gamma_{I\,\a\beta}\,\pi_0^*\sigma^\beta\wedge e^I\,. \end{eqnarray} Their closedness follows -- just as that of the super-3-cocycle \eqref{eq:GS3form} -- directly from the assumed Fierz identity \eqref{eq:ClifFierz1}. In order to construct a suitable common extension of their support $\,\mathcal{M}^{(1)}\,$ on which their pullbacks trivialise, we first derive their non-LI primitives on $\,\mathcal{M}^{(1)}$.\ To this end, we compute \begin{eqnarray}\nn \underset{\tx{\ciut{(2)}}}{h}{}^{(1)}_\a(\theta,x)=-{\mathsf d}\bigl(\ovl\Gamma_{I\,\a\beta}\,\theta^\beta\,{\mathsf d} x^I\bigr)-\tfrac{1}{2}\,\ovl\Gamma_{I\,\a\beta}\,\ovl\Gamma{}^I_{\gamma\delta}\,{\mathsf d}\theta^\beta\wedge\,\theta^\gamma\,{\mathsf d}\theta^\delta\,.
\end{eqnarray} Using identity \eqref{eq:ClifFierz1}, we readily find \begin{eqnarray}\nn \ovl\Gamma_{I\,\a\beta}\,\ovl\Gamma{}^I_{\gamma\delta}\,{\mathsf d}\theta^\beta\wedge\,\theta^\gamma\,{\mathsf d}\theta^\delta&=&{\mathsf d}\left(\ovl\Gamma_{I\,\a\beta}\,\ovl\Gamma{}^I_{\gamma\delta}\,\theta^\beta\,\theta^\gamma\,{\mathsf d}\theta^\delta\right)-\ovl\Gamma_{I\,\a\beta}\,\ovl\Gamma{}^I_{\gamma\delta}\,\theta^\beta\,{\mathsf d}\theta^\gamma\wedge{\mathsf d}\theta^\delta\cr\cr &=&{\mathsf d}\left(\ovl\Gamma_{I\,\a\beta}\,\ovl\Gamma{}^I_{\gamma\delta}\,\theta^\beta\,\theta^\gamma\,{\mathsf d}\theta^\delta\right)+2\ovl\Gamma_{I\,\a\gamma}\,\ovl\Gamma{}^I_{\beta\delta}\,\theta^\beta\,{\mathsf d}\theta^\gamma\wedge{\mathsf d}\theta^\delta\,, \end{eqnarray} so that \begin{eqnarray}\nn \underset{\tx{\ciut{(2)}}}{h}{}^{(1)}_\a(\theta,x)={\mathsf d}\bigl(-\ovl\Gamma_{I\,\a\beta}\,\theta^\beta\,{\mathsf d} x^I-\tfrac{1}{6}\,\ovl\Gamma_{I\,\a\beta}\,\ovl\Gamma{}^I_{\gamma\delta}\,\theta^\beta\,\theta^\gamma\,{\mathsf d}\theta^\delta\bigr)\,. \end{eqnarray} Drawing on our hitherto experience, we may now conceive a trivial vector bundle \begin{eqnarray}\nn \pi^{(2)}_1\equiv{\rm pr}_1\ :\ \mathcal{M}^{(2)}_1=\mathcal{M}^{(1)}\x{\mathbb{R}}^{0\,\vert\,D_{1,d-1}}\longrightarrow\mathcal{M}^{(1)} \end{eqnarray} with the purely Gra\ss mann-odd fibre $\,{\mathbb{R}}^{0\,\vert\,D_{1,d-1}}\,$ and a Lie supergroup structure that projects to the previously considered supersymmetry-group structure on the base and so lifts the action \eqref{eq:sact-sMink} of that supersymmetry group in a manner that we fix by demanding invariance under this lift of the primitives $\,e^{(2)}_\a\in\bigwedge^1{\mathsf T}^*\mathcal{M}^{(2)}_1\,$ of the distinguished super-2-cocycles, \begin{eqnarray}\nn \pi^{(2)\,*}_1\underset{\tx{\ciut{(2)}}}{h}{}^{(1)}_\a=:{\mathsf d} e^{(2)}_\a\,. 
\end{eqnarray} Let us denote the (global) coordinates on $\,\mathcal{M}^{(2)}_1\,$ as $\,\xi_\a,\ \a\in\ovl{1,D_{1,d-1}}$.\ We then take \begin{eqnarray}\nn e^{(2)}_\a(\theta,x,\xi)={\mathsf d}\xi_\a-\ovl\Gamma_{I\,\a\beta}\,\theta^\beta\,\bigl({\mathsf d} x^I+\tfrac{1}{6}\,\ovl\theta\,\Gamma^I\,\sigma(\theta)\bigr)\,. \end{eqnarray} The supersymmetry variation of the non-LI primitive $\,\unl e^{(2)}_\a:=e^{(2)}_\a-{\mathsf d}\xi_\a\,$ of $\,\underset{\tx{\ciut{(2)}}}{h}{}^{(1)}_\a\,$ is exact and may be cast in the form \begin{eqnarray}\nn \unl e^{(2)}_\a\bigl(\theta+\varepsilon,x+y-\tfrac{1}{2}\,\ovl\varepsilon\,\Gamma\,\theta\bigr)-\unl e^{(2)}_\a(\theta,x)&=&\tfrac{1}{3}\,\ovl\Gamma{}^I_{\a\beta}\,\theta^\beta\,\bigl(\ovl\varepsilon\,\Gamma_I\,\sigma(\theta)\bigr)-\ovl\Gamma_{I\,\a\beta}\,\varepsilon^\beta\,\bigl({\mathsf d} x^I+\tfrac{1}{6}\,\ovl\theta\,\Gamma^I\,\sigma(\theta)-\tfrac{1}{3}\,\ovl\varepsilon\,\Gamma^I\,\sigma(\theta)\bigr)\cr\cr &=&{\mathsf d}\bigl(-x^I\,\ovl\Gamma_{I\,\a\beta}\,\varepsilon^\beta+\tfrac{1}{6}\,\ovl\Gamma{}^I_{\a\beta}\,\bigl(2\varepsilon^\beta+\theta^\beta\bigr)\,\ovl\varepsilon\,\Gamma_I\,\theta\bigr) \end{eqnarray} with the help of the Fierz identity \eqref{eq:ClifFierz1}. 
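Let us also note that the last identity essentially determines the sought-after lift of the supersymmetry action to $\,\mathcal{M}^{(2)}_1$:\ the invariance of $\,e^{(2)}_\a\,$ requires that, up to an additive constant, the fibre coordinate be shifted by minus the super-0-form under the exterior derivative above, \begin{eqnarray}\nn \xi_\a\longmapsto\xi_\a+\ovl\Gamma_{I\,\a\beta}\,\varepsilon^\beta\,x^I-\tfrac{1}{6}\,\bigl(\ovl\varepsilon\,\Gamma_I\,\theta\bigr)\,\ovl\Gamma{}^I_{\a\beta}\,\bigl(2\varepsilon^\beta+\theta^\beta\bigr)\,, \end{eqnarray} and this is precisely the form of the fibre component of the binary operation postulated in the proposition that follows.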
In this manner, we arrive at \begin{Prop}\label{prop:M2group} The above-described vector bundle $\,\mathcal{M}^{(2)}_1\,$ equipped with the binary operation \begin{eqnarray}\nn {\rm m}_1^{(2)}\ &:&\ \mathcal{M}_1^{(2)}\x\mathcal{M}_1^{(2)}\longrightarrow\mathcal{M}_1^{(2)}\cr\cr &:&\ \bigl(\bigl(\theta_1^\a,x_1^I,\xi_{1\,\beta}\bigr),\bigl(\theta_2^\gamma,x_2^J,\xi_{2\,\delta}\bigr)\bigr)\longmapsto\bigl(\theta_1^\a+\theta_2^\a,x_1^I+x_2^I-\tfrac{1}{2}\,\ovl\theta_1\,\Gamma^I\,\theta_2,\xi_{1\,\beta}+\xi_{2\,\beta}+\ovl\Gamma_{I\,\beta\gamma}\,\theta_1^\gamma\,x_2^I\cr\cr &&\hspace{6cm}-\tfrac{1}{6}\,\bigl(\ovl\theta_1\,\Gamma_I\,\theta_2\bigr)\,\ovl\Gamma{}^I_{\beta\gamma}\,\bigl(2\theta_1^\gamma+\theta_2^\gamma\bigr)\bigr)\,, \end{eqnarray} with the inverse \begin{eqnarray}\nn {\rm Inv}_1^{(2)}\ :\ \mathcal{M}_1^{(2)}\longrightarrow\mathcal{M}_1^{(2)}\ :\ (\theta^\a,x^I,\xi_\beta)\longmapsto\bigl(-\theta^\a,-x^I,-\xi_\beta+x^I\,\ovl\Gamma_{I\,\beta\gamma}\,\theta^\gamma\bigr) \end{eqnarray} and the neutral element \begin{eqnarray}\nn e_1^{(2)}=(0,0,0) \end{eqnarray} is a Lie supergroup. It is a (super)central extension \begin{eqnarray}\nn {\boldsymbol{1}}\longrightarrow{\mathbb{R}}^{0\,\vert\,D_{1,d-1}}\longrightarrow{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}{\mathbb{R}}^{0\,\vert\,D_{1,d-1}}\xrightarrow{\ \pi^{(2)}_1\ }{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\longrightarrow{\boldsymbol{1}} \end{eqnarray} of the super-Minkowski group $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\,$ determined by the family of CE super-2-cocycles corresponding to the CaE super-2-cocycles $\,\{\underset{\tx{\ciut{(2)}}}{h}{}^{(1)}_\a\}_{\a\in\ovl{1,D_{1,d-1}}}\,$ of \Reqref{eq:CaEscocyc1}. \end{Prop} \noindent\begin{proof} Through inspection. In particular, the associativity of $\,{\rm m}_1^{(2)}\,$ hinges upon identity \eqref{eq:ClifFierz1}. 
\end{proof} Upon pullback to $\,\mathcal{M}_1^{(2)}$,\ we obtain the sought-after trivialisation \begin{eqnarray}\nn \pi^{(2)\,*}_1\underset{\tx{\ciut{(3)}}}{\chi}={\mathsf d}\underset{\tx{\ciut{(2)}}}{\beta}^{(2)}\,,\qquad\qquad\underset{\tx{\ciut{(2)}}}{\beta}^{(2)}:=\pi_{01}^{(2)\,*}\sigma^\a\wedge e^{(2)}_\a\,, \end{eqnarray} written in the shorthand notation \begin{eqnarray}\nn \pi^{(2)}_{01}:=\pi_0\circ\pi^{(2)}_1 \end{eqnarray} that we adopt in our subsequent considerations. The relation of the above trivialisation to the previously found (in Prop.\,\ref{prop:GSprim}) non-LI one on $\,\mathcal{M}^{(1)}\,$ reads \begin{eqnarray}\label{eq:betobet3} \underset{\tx{\ciut{(2)}}}{\beta}{}^{(2)}=\pi^{(2)\,*}_1\underset{\tx{\ciut{(2)}}}{\beta}+{\mathsf d}{\rm B}\,,\qquad\qquad {\rm B}(\theta,x,\xi):=\theta^\a\,{\mathsf d}\xi_\a\,. \end{eqnarray} ~\medskip Structurally, the construction of the (super)central extension \begin{eqnarray}\nn \pi_{{\mathsf Y}_1\mathcal{M}^{(1)}}:=\pi^{(2)}_1\ :\ {\mathsf Y}_1\mathcal{M}^{(1)}:=\mathcal{M}^{(2)}_1\longrightarrow\mathcal{M}^{(1)}\ :\ (\theta^\a,x^I,\xi_\beta)\longmapsto(\theta^\a,x^I) \end{eqnarray} in the present geometric context plays a r\^ole fully analogous to that of the surjective submersion $\,\pi_{{\mathsf Y} M}:{\mathsf Y} M\longrightarrow M\,$ from Section \ref{sec:Bose}, to wit, it yields an epimorphism, in the geometric category of interest, onto the support of a non-trivial 3-cocycle, such that the pullback of the 3-cocycle along that epimorphism trivialises in the same cohomology (in which it would not, at least in general, trivialise on the base/codomain of the epimorphism).
The last observation motivates our subsequent attempt at establishing a (super)geometric realisation of the super-3-cocycle $\,\underset{\tx{\ciut{(3)}}}{\chi}\,$ in the CaE cohomology through a procedure closely imitating the one that defines its analogue $\,({\mathsf Y} M,\pi_{{\mathsf Y} M},{\rm B},L,\nabla_L,\mu_L)\,$ in the standard (purely Gra\ss mann-even) setting. To this end, let us first consider the fibred square, represented by the commutative diagram \begin{eqnarray}\nn \alxydim{@C=.75cm@R=1cm}{& {\mathsf Y}_1^{[2]}\mathcal{M}^{(1)} \ar[rd]^{{\rm pr}_2} \ar[ld]_{{\rm pr}_1} & \\ {\mathsf Y}_1\mathcal{M}^{(1)} \ar[rd]_{\pi_{{\mathsf Y}_1\mathcal{M}^{(1)}}} & & {\mathsf Y}_1\mathcal{M}^{(1)} \ar[ld]^{\pi_{{\mathsf Y}_1\mathcal{M}^{(1)}}} \\ & \mathcal{M}^{(1)} & }\,. \end{eqnarray} The difference of the pullbacks of the primitive $\,\underset{\tx{\ciut{(2)}}}{\beta}{}^{(2)}\,$ along the two canonical projections reads \begin{eqnarray}\nn ({\rm pr}_2^*-{\rm pr}_1^*)\underset{\tx{\ciut{(2)}}}{\beta}{}^{(2)}=(\pi_0\circ\pi_{{\mathsf Y}_1\mathcal{M}^{(1)}}\circ{\rm pr}_1)^*\sigma^\a\wedge({\rm pr}_2^*-{\rm pr}_1^*)e^{(2)}_\a\,. \end{eqnarray} This is, by construction, an LI super-2-cocycle, and so we may seek to trivialise it, or -- if necessary -- its pullback to a suitable (super)central extension of $\,{\mathsf Y}_1^{[2]}\mathcal{M}^{(1)}$,\ in the CaE cohomology.
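Note that the base components of the super-1-forms $\,e^{(2)}_\a\,$ depend on $\,(\theta,x)\,$ alone and hence drop out of the difference of the two pullbacks, leaving behind only the fibre contributions, \begin{eqnarray}\nn ({\rm pr}_2^*-{\rm pr}_1^*)e^{(2)}_\a\bigl(\theta,x,\xi^1,\xi^2\bigr)={\mathsf d}\xi^2_\a-{\mathsf d}\xi^1_\a={\mathsf d}\bigl(\xi^2_\a-\xi^1_\a\bigr)\,. \end{eqnarray}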
Inspection of the coordinate expression \begin{eqnarray}\nn ({\rm pr}_2^*-{\rm pr}_1^*)\underset{\tx{\ciut{(2)}}}{\beta}{}^{(2)}\bigl(\theta,x,\xi^1,\xi^2\bigr)={\mathsf d}\left(\theta^\a\,{\mathsf d}\xi^{21}_\a\right)\,, \end{eqnarray} written in terms of the variables $\,\xi^{21}_\a:=\xi^2_\a-\xi^1_\a,\ \a\in\ovl{1,D_{1,d-1}}$,\ convinces us that there is no LI primitive of the above super-2-cocycle on $\,{\mathsf Y}_1^{[2]}\mathcal{M}^{(1)}$,\ and so, invoking our results from the analysis of the GS super-2-cocycle, we are led to associate with it a trivial principal ${\mathbb{C}}^\x$-bundle \begin{eqnarray}\label{eq:sstringbndl} \pi_{\mathscr{L}^{(1)}}\equiv{\rm pr}_1\ :\ \mathscr{L}^{(1)}:={\mathsf Y}_1^{[2]}\mathcal{M}^{(1)}\x{\mathbb{C}}^\x\longrightarrow{\mathsf Y}_1^{[2]}\mathcal{M}^{(1)}\ :\ \bigl(\theta,x,\xi^1,\xi^2,z\bigr)\longmapsto\bigl(\theta,x,\xi^1,\xi^2\bigr) \end{eqnarray} with a principal connection \begin{eqnarray}\nn \nabla_{\mathscr{L}^{(1)}}={\mathsf d}+\tfrac{1}{{\mathsf i}}\,{\rm A}\,, \end{eqnarray} or -- equivalently -- a principal connection 1-form \begin{eqnarray}\nn \cA\bigl(\theta,x,\xi^1,\xi^2,z\bigr)={\mathsf i}\,\tfrac{{\mathsf d} z}{z}+{\rm A}\bigl(\bigl(\theta,x,\xi^1\bigr),\bigl(\theta,x,\xi^2\bigr)\bigr)\,, \end{eqnarray} with the base component \begin{eqnarray}\label{eq:ABB} {\rm A}\bigl(\bigl(\theta,x,\xi^1\bigr),\bigl(\theta,x,\xi^2\bigr)\bigr)\equiv({\rm pr}_2^*-{\rm pr}_1^*){\rm B}\bigl(\bigl(\theta,x,\xi^1\bigr),\bigl(\theta,x,\xi^2\bigr)\bigr)=\theta^\a\,{\mathsf d}\xi^{21}_\a\,. \end{eqnarray} The fibred product $\,{\mathsf Y}_1^{[2]}\mathcal{M}^{(1)}\,$ of Lie supergroups inherits from $\,{\mathsf Y}_1\mathcal{M}^{(1)}\equiv\mathcal{M}^{(2)}_1\,$ a Lie-supergroup structure determined by the binary operation \begin{eqnarray} {\rm m}_1^{(2)\,[2]}\ &:&\ {\mathsf Y}_1^{[2]}\mathcal{M}^{(1)}\x{\mathsf Y}_1^{[2]}\mathcal{M}^{(1)}\longrightarrow{\mathsf Y}_1^{[2]}\mathcal{M}^{(1)}\ :\ 
\bigl(\bigl(\theta_1^\a,x_1^I,\xi^1_{1\,\beta},\xi^2_{1\,\gamma}\bigr),\bigl(\theta_2^\delta,x_2^J,\xi^1_{2\,\epsilon},\xi^2_{2\,\zeta}\bigr)\bigr)\longmapsto\cr\cr &&\longmapsto\bigl(\theta_1^\a+\theta_2^\a,x_1^I+x_2^I-\tfrac{1}{2}\,\ovl\theta_1\,\Gamma^I\,\theta_2,\xi^1_{1\,\beta}+\xi^1_{2\,\beta}+\ovl\Gamma_{I\,\beta\gamma}\,\theta_1^\gamma\,x_2^I-\tfrac{1}{6}\,\bigl(\ovl\theta_1\,\Gamma_I\,\theta_2\bigr)\,\ovl\Gamma{}^I_{\beta\gamma}\,\bigl(2\theta_1^\gamma+\theta_2^\gamma\bigr)\,,\cr\cr &&\hspace{.75cm}\xi^2_{1\,\beta}+\xi^2_{2\,\beta}+\ovl\Gamma_{I\,\beta\gamma}\,\theta_1^\gamma\,x_2^I-\tfrac{1}{6}\,\bigl(\ovl\theta_1\,\Gamma_I\,\theta_2\bigr)\,\ovl\Gamma{}^I_{\beta\gamma}\,\bigl(2\theta_1^\gamma+\theta_2^\gamma\bigr)\bigr)\,.\label{eq:sstringY2Lie} \end{eqnarray} In analogy with the case of the superparticle, we endow $\,\mathscr{L}^{(1)}\,$ with the structure of a Lie supergroup determined by the requirement of left-invariance of the principal connection 1-form $\,\cA$. \begin{Prop}\label{prop:L1} The principal ${\mathbb{C}}^\x$-bundle $\,\mathscr{L}^{(1)}\,$ of \Reqref{eq:sstringbndl} equipped with the binary operation \begin{eqnarray} {\rm m}_1^{(3)}\ &:&\ \mathscr{L}^{(1)}\x\mathscr{L}^{(1)}\longrightarrow\mathscr{L}^{(1)}\ :\ \bigl(\bigl(\theta_1^\a,x_1^I,\xi^1_{1\,\beta},\xi^2_{1\,\gamma},z_1\bigr),\bigl(\theta_2^\delta,x_2^J,\xi^1_{2\,\epsilon},\xi^2_{2\,\zeta},z_2\bigr)\bigr)\longmapsto\cr\cr &&\longmapsto\bigl(\theta_1^\a+\theta_2^\a,x_1^I+x_2^I-\tfrac{1}{2}\,\ovl\theta_1\,\Gamma^I\,\theta_2,\xi^1_{1\,\beta}+\xi^1_{2\,\beta}+\ovl\Gamma_{I\,\beta\gamma}\,\theta_1^\gamma\,x_2^I-\tfrac{1}{6}\,\bigl(\ovl\theta_1\,\Gamma_I\,\theta_2\bigr)\,\ovl\Gamma{}^I_{\beta\gamma}\,\bigl(2\theta_1^\gamma+\theta_2^\gamma\bigr)\,,\cr\cr 
&&\hspace{.75cm}\xi^2_{1\,\beta}+\xi^2_{2\,\beta}+\ovl\Gamma_{I\,\beta\gamma}\,\theta_1^\gamma\,x_2^I-\tfrac{1}{6}\,\bigl(\ovl\theta_1\,\Gamma_I\,\theta_2\bigr)\,\ovl\Gamma{}^I_{\beta\gamma}\,\bigl(2\theta_1^\gamma+\theta_2^\gamma\bigr),d^{(1)}_{(\theta_1,x_1,\xi^1_1,\xi^2_1),(\theta_2,x_2,\xi^1_2,\xi^2_2)}\cdot z_1\cdot z_2\bigr)\,,\label{eq:sstringLonLfix} \end{eqnarray} the latter being defined in terms of the super-2-cocycle \begin{eqnarray}\nn d^{(1)}_{(\theta_1,x_1,\xi^1_1,\xi^2_1),(\theta_2,x_2,\xi^1_2,\xi^2_2)}={\rm e}^{{\mathsf i}\,\ovl\theta_1\,\xi_2^{21}}\,, \end{eqnarray} with the inverse \begin{eqnarray}\nn {\rm Inv}_1^{(3)}\ &:&\ \mathscr{L}^{(1)}\longrightarrow\mathscr{L}^{(1)}\cr\cr &:&\ \bigl(\theta^\a,x^I,\xi^1_\beta,\xi^2_\gamma,z\bigr)\longmapsto\bigl(-\theta^\a,-x^I,-\xi^1_\beta+x^I\,\ovl\Gamma_{I\,\beta\delta}\,\theta^\delta,-\xi^2_\gamma+x^I\,\ovl\Gamma_{I\,\gamma\epsilon}\,\theta^\epsilon,{\rm e}^{{\mathsf i}\,\ovl\theta\,\xi^{21}}\cdot z^{-1}\bigr) \end{eqnarray} and the neutral element \begin{eqnarray}\nn e^{(3)}_1=(0,0,0,0,1) \end{eqnarray} is a Lie supergroup. It is a central extension \begin{eqnarray}\nn {\boldsymbol{1}}\longrightarrow{\mathbb{C}}^\x\longrightarrow{\mathsf Y}_1^{[2]}\mathcal{M}^{(1)}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}{\mathbb{C}}^\x\equiv\widehat{{\mathsf Y}_1^{[2]}\mathcal{M}^{(1)}}\xrightarrow{\ \pi_{\mathscr{L}^{(1)}}\ }{\mathsf Y}_1^{[2]}\mathcal{M}^{(1)}\longrightarrow{\boldsymbol{1}} \end{eqnarray} of the Lie supergroup $\,{\mathsf Y}_1^{[2]}\mathcal{M}^{(1)}\,$ of \Reqref{eq:sstringY2Lie} determined by $\,d^{(1)}_{\cdot,\cdot}$. \end{Prop} \noindent\begin{proof} Through inspection.
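In particular, the 2-cocycle identity for $\,d^{(1)}_{\cdot,\cdot}\,$ becomes transparent upon noting that the corrections to the two copies of the Gra\ss mann-odd fibre coordinates in \Reqref{eq:sstringY2Lie} coincide, so that the differences $\,\xi^{21}_\a\equiv\xi^2_\a-\xi^1_\a\,$ are simply additive under the binary operation. Writing $\,g_i=\bigl(\theta_i,x_i,\xi^1_i,\xi^2_i\bigr),\ i\in\{1,2,3\}$,\ we thus find \begin{eqnarray}\nn d^{(1)}_{g_1,g_2}\cdot d^{(1)}_{g_1\cdot g_2,g_3}={\rm e}^{{\mathsf i}\,\bigl(\ovl\theta_1\,\xi^{21}_2+\ovl\theta_1\,\xi^{21}_3+\ovl\theta_2\,\xi^{21}_3\bigr)}=d^{(1)}_{g_1,g_2\cdot g_3}\cdot d^{(1)}_{g_2,g_3}\,. \end{eqnarray}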
\end{proof} The above structure can be employed to lift the original action of the supersymmetry group $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\,$ all the way up to the total space of $\,\mathscr{L}^{(1)}\,$ as per \begin{eqnarray}\nn \widehat\ell^{(1)}_\cdot\equiv{\rm m}_1^{(3)}\ :\ \widehat{{\mathsf Y}_1^{[2]}\mathcal{M}^{(1)}}\x\mathscr{L}^{(1)}\longrightarrow\mathscr{L}^{(1)}\,. \end{eqnarray} By construction, the lift preserves the connection on $\,\mathscr{L}^{(1)}$. In the last step, we consider the cartesian cube of the surjective submersion $\,{\mathsf Y}_1\mathcal{M}^{(1)}\,$ fibred over $\,\mathcal{M}^{(1)}$,\ with its canonical projections $\,{\rm pr}_{i,j}\equiv({\rm pr}_i,{\rm pr}_j),\ (i,j)\in\{(1,2),(2,3),(1,3)\}\,$ to $\,{\mathsf Y}_1^{[2]}\mathcal{M}^{(1)}\,$ that render the diagram \begin{eqnarray}\nn \alxydim{@C=1cm@R=1cm}{& & {\mathsf Y}_1^{[3]}\mathcal{M}^{(1)} \ar[rd]^{{\rm pr}_{1,3}} \ar[ld]_{{\rm pr}_{1,2}} \ar[d]_{{\rm pr}_{2,3}} & & \\ & {\mathsf Y}_1^{[2]}\mathcal{M}^{(1)} \ar[ld]_{{\rm pr}_1} \ar[d]_{{\rm pr}_2} & {\mathsf Y}_1^{[2]}\mathcal{M}^{(1)} \ar[ld]_{{\rm pr}_1} \ar[rd]^{{\rm pr}_2} & {\mathsf Y}_1^{[2]}\mathcal{M}^{(1)} \ar[d]^{{\rm pr}_2} \ar[rd]^{{\rm pr}_1} & \\ {\mathsf Y}_1\mathcal{M}^{(1)} \ar@/_5.0pc/@{=}[rrrr] \ar[rrd]_{\pi_{{\mathsf Y}_1\mathcal{M}^{(1)}}} & {\mathsf Y}_1\mathcal{M}^{(1)} \ar[rd]^{\pi_{{\mathsf Y}_1\mathcal{M}^{(1)}}} & & {\mathsf Y}_1\mathcal{M}^{(1)} \ar[ld]_{\pi_{{\mathsf Y}_1\mathcal{M}^{(1)}}} & {\mathsf Y}_1\mathcal{M}^{(1)} \ar[lld]^{\pi_{{\mathsf Y}_1\mathcal{M}^{(1)}}} \\ & & \mathcal{M}^{(1)} & & }\cr\cr \end{eqnarray} commutative, and, over it, look for a connection-preserving isomorphism \begin{eqnarray}\nn \mu_{\mathscr{L}^{(1)}}\ :\ {\rm pr}_{1,2}^*\mathscr{L}^{(1)}\otimes{\rm pr}_{2,3}^*\mathscr{L}^{(1)}\xrightarrow{\ \cong\ }{\rm pr}_{1,3}^*\mathscr{L}^{(1)}\,. 
\end{eqnarray} Comparison of the pullbacks of the (global) connection 1-forms \begin{eqnarray}\nn ({\rm pr}_{1,2}^*+{\rm pr}_{2,3}^*-{\rm pr}_{1,3}^*){\rm A}\bigl(\bigl(\theta,x,\xi^1\bigr),\bigl(\theta,x,\xi^2\bigr),\bigl(\theta,x,\xi^3\bigr)\bigr)=0\,, \end{eqnarray} in conjunction with inspection of the invariance of the relevant combination $\,z_{1,2}\cdot z_{2,3}\cdot z_{1,3}^{-1}\,$ of the fibre coordinates lead us to set \begin{eqnarray}\nn \mu_{\mathscr{L}^{(1)}}\left(\bigl(\theta,x,\xi^1,\xi^2,z_{1,2}\bigr)\otimes\bigl(\theta,x,\xi^2,\xi^3,z_{2,3}\bigr)\right):=\bigl(\theta,x,\xi^1,\xi^3,z_{1,2}\cdot z_{2,3}\bigr)\,, \end{eqnarray} where we have identified \begin{eqnarray}\nn \bigl(\theta,x,\xi^i,\xi^j,z_{i,j}\bigr)\equiv\bigl(\bigl(\theta,x,\xi^1,\xi^2,\xi^3,\xi^4\bigr),\bigl(\theta,x,\xi^i,\xi^j,z_{i,j}\bigr)\bigr)\in{\rm pr}_{i,j}^*\mathscr{L}^{(1)}\,. \end{eqnarray} A fibre-bundle map thus defined trivially satisfies the groupoid identity \eqref{eq:mugrpd} over $\,{\mathsf Y}_1^{[4]}\mathcal{M}^{(1)}$,\ and conforms with the previous definition of super-0-gerbe isomorphism. We conclude our analysis with \begin{Def}\label{def:s1gerbe} The \textbf{Green--Schwarz super-1-gerbe} over $\,\mathcal{M}^{(1)}\equiv{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}\,$ of curvature $\,\underset{\tx{\ciut{(3)}}}{\chi}\,$ is the sextuple \begin{eqnarray}\nn \mathcal{sG}^{(1)}_{\rm GS}:=\bigl({\mathsf Y}_1\mathcal{M}^{(1)},\pi_{{\mathsf Y}_1\mathcal{M}^{(1)}},\underset{\tx{\ciut{(2)}}}{\beta}^{(2)},\mathscr{L}^{(1)},\nabla_{\mathscr{L}^{(1)}},\mu_{\mathscr{L}^{(1)}}\bigr) \end{eqnarray} constructed in the preceding paragraphs. 
\begin{flushright}$\diamond$\end{flushright} \noindent Our discussion is now concisely summarised in \begin{Prop}\label{prop:s1gerbe} The Green--Schwarz super-1-gerbe $\,\mathcal{sG}^{(1)}_{\rm GS}\,$ of Definition \ref{def:s1gerbe} is an abelian bundle gerbe with connection over the super-Minkowski space $\,{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}$.\ The action \eqref{eq:sact-sMink} of the supersymmetry group $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\,$ on the base of the gerbe lifts to an action, by connection-preserving principal-bundle automorphisms, of the central extension $\,\widehat{{\mathsf Y}_1^{[2]}\mathcal{M}^{(1)}}$,\ detailed in Prop.\,\ref{prop:L1}, of the Lie supergroup $\,{\mathsf Y}_1^{[2]}\mathcal{M}^{(1)}\,$ defined through \Reqref{eq:sstringY2Lie} and itself being an extension, described in Prop.\,\ref{prop:M2group}, of $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}$. \end{Prop} Following the lower-dimensional example, we formulate \begin{Def}\label{def:CaEs1g} Adopt the notation of Def.\,\ref{def:CaEs0g}. Let $\,\underset{\tx{\ciut{(3)}}}{{\rm h}}\,$ be a super-3-cocycle on $\,{\rm G}\,$ representing a class in its (left) CaE cohomology.
A {\bf Cartan--Eilenberg super-1-gerbe} over $\,{\rm G}\,$ with curvature $\,\underset{\tx{\ciut{(3)}}}{{\rm h}}\,$ is a sextuple \begin{eqnarray}\nn \mathcal{sG}^{(1)}_{\rm CaE}:=\bigl({\mathsf Y}{\rm G},\pi_{{\mathsf Y}{\rm G}},\underset{\tx{\ciut{(2)}}}{{\rm b}},{\rm L},\txa_{\rm L},\mu_{\rm L}\bigr) \end{eqnarray} composed of \begin{itemize} \item a surjective submersion \begin{eqnarray}\nn \pi_{{\mathsf Y}{\rm G}}\ :\ {\mathsf Y}{\rm G}\longrightarrow{\rm G} \end{eqnarray} with a structure of a Lie supergroup on its total space that lifts that on $\,{\rm G}\,$ along the Lie-supergroup homomorphism $\,\pi_{{\mathsf Y}{\rm G}}$, \item a global primitive $\,\underset{\tx{\ciut{(2)}}}{{\rm b}}\,$ of the pullback of $\,\underset{\tx{\ciut{(3)}}}{{\rm h}}\,$ to it, \begin{eqnarray}\nn \pi_{{\mathsf Y}{\rm G}}^*\underset{\tx{\ciut{(3)}}}{{\rm h}}={\mathsf d}\underset{\tx{\ciut{(2)}}}{{\rm b}}\,, \end{eqnarray} which is LI with respect to the induced left-regular action of $\,{\mathsf Y}{\rm G}\,$ on itself \begin{eqnarray}\nn {\mathsf Y}\ell_\cdot\ :\ {\mathsf Y}{\rm G}\x{\mathsf Y}{\rm G}\longrightarrow{\mathsf Y}{\rm G}\,, \end{eqnarray} lifting $\,\ell_\cdot\,$ along $\,\pi_{{\mathsf Y}{\rm G}}$, \begin{eqnarray}\nn \forall_{y\in{\mathsf Y}{\rm G}}\ :\ {\mathsf Y}\ell_y^*\underset{\tx{\ciut{(2)}}}{{\rm b}}=\underset{\tx{\ciut{(2)}}}{{\rm b}}\,, \end{eqnarray} \item a CaE super-0-gerbe \begin{eqnarray}\nn \bigl({\rm L},\pi_{\rm L},\txa_{\rm L}\bigr) \end{eqnarray} over the fibred square $\,{\mathsf Y}^{[2]}{\rm G}\equiv{\mathsf Y}{\rm G}\x_{\rm G}{\mathsf Y}{\rm G}\,$ (endowed with the natural (product) Lie-supergroup structure), with a principal connection 1-form $\,\txa_{\rm L}\,$ of curvature $\,\underset{\tx{\ciut{(2)}}}{{\rm h}}{}_{\rm L}$, \begin{eqnarray}\nn \pi_{\rm L}^*\underset{\tx{\ciut{(2)}}}{{\rm h}}{}_{\rm L}={\mathsf d}\txa_{\rm L}\,, \end{eqnarray} that satisfies the identity \begin{eqnarray}\nn \underset{\tx{\ciut{(2)}}}{{\rm h}}{}_{\rm L}=\bigl({\rm pr}_2^*-{\rm
pr}_1^*\bigr)\underset{\tx{\ciut{(2)}}}{{\rm b}}\,, \end{eqnarray} \item an isomorphism of CaE super-0-gerbes\footnote{Note that pullback along a canonical projection is consistent with the definition of a super-0-gerbe.} \begin{eqnarray}\nn \mu_{\rm L}\ :\ {\rm pr}_{1,2}^*{\rm L}\otimes{\rm pr}_{2,3}^*{\rm L}\xrightarrow{\ \cong\ }{\rm pr}_{1,3}^*{\rm L} \end{eqnarray} over the fibred cube $\,{\mathsf Y}^{[3]}{\rm G}\equiv{\mathsf Y}{\rm G}\x_{\rm G}{\mathsf Y}{\rm G}\x_{\rm G}{\mathsf Y}{\rm G}\,$ that satisfies the coherence (associativity) condition \begin{eqnarray}\nn {\rm pr}_{1,2,4}^*\mu_{\rm L}\circ({\rm id}_{{\rm pr}_{1,2}^*{\rm L}}\otimes{\rm pr}_{2,3,4}^*\mu_{\rm L})={\rm pr}_{1,3,4}^*\mu_{\rm L}\circ({\rm pr}_{1,2,3}^*\mu_{\rm L}\otimes {\rm id}_{{\rm pr}_{3,4}^*{\rm L}}) \end{eqnarray} over the quadruple fibred product $\,{\mathsf Y}^{[4]}{\rm G}\equiv{\mathsf Y}{\rm G}\x_{\rm G}{\mathsf Y}{\rm G}\x_{\rm G}{\mathsf Y}{\rm G}\x_{\rm G}{\mathsf Y}{\rm G}$. \end{itemize} Given CaE super-1-gerbes $\,\mathcal{sG}^{(1)\,A}_{\rm CaE}=\bigl({\mathsf Y}_A{\rm G},\pi_{{\mathsf Y}_A{\rm G}},\underset{\tx{\ciut{(2)}}}{{\rm b}}{}_A,{\rm L}_A,\txa_A,\mu_{{\rm L}_A}\bigr),\ A\in\{1,2\}\,$ over a common base $\,{\rm G}$,\ a 1-{\bf isomorphism} between them is a quintuple \begin{eqnarray}\nn \Phi^{(1)}_{\rm CaE}:=\bigl({\mathsf Y}\sfY_{1,2}{\rm G},\pi_{{\mathsf Y}\sfY_{1,2}{\rm G}},{\rm E},\txa_{\rm E},\a_{\rm E}\bigr)\ :\ \mathcal{sG}^{(1)\,1}_{\rm CaE}\xrightarrow{\ \cong\ }\mathcal{sG}^{(1)\,2}_{\rm CaE} \end{eqnarray} composed of \begin{itemize} \item a surjective submersion \begin{eqnarray}\nn \pi_{{\mathsf Y}\sfY_{1,2}{\rm G}}\ :\ {\mathsf Y}\sfY_{1,2}{\rm G}\longrightarrow{\mathsf Y}_1{\rm G}\x_{\rm G}{\mathsf Y}_2{\rm G}\equiv{\mathsf Y}_{1,2}{\rm G} \end{eqnarray} with a structure of a Lie supergroup on its total space that lifts the product Lie-supergroup structure on the fibred product $\,{\mathsf Y}_{1,2}{\rm G}\,$ along the Lie-supergroup homomorphism 
$\,\pi_{{\mathsf Y}\sfY_{1,2}{\rm G}}$, \item a CaE super-0-gerbe \begin{eqnarray}\nn \bigl({\rm E},\pi_{\rm E},\txa_{\rm E}\bigr) \end{eqnarray} over the total space $\,{\mathsf Y}\sfY_{1,2}{\rm G}$,\ with a principal connection 1-form $\,\txa_{\rm E}\,$ of curvature $\,\underset{\tx{\ciut{(2)}}}{{\rm h}}{}_{\rm E}$, \begin{eqnarray}\nn \pi_{\rm E}^*\underset{\tx{\ciut{(2)}}}{{\rm h}}{}_{\rm E}={\mathsf d}\txa_{\rm E}\,, \end{eqnarray} that satisfies the identity \begin{eqnarray}\nn \underset{\tx{\ciut{(2)}}}{{\rm h}}{}_{\rm E}=\pi_{{\mathsf Y}\sfY_{1,2}{\rm G}}^*\bigl({\rm pr}_2^*\underset{\tx{\ciut{(2)}}}{{\rm b}}{}_2-{\rm pr}_1^*\underset{\tx{\ciut{(2)}}}{{\rm b}}{}_1\bigr)\,, \end{eqnarray} \item an isomorphism of super-0-gerbes \begin{eqnarray}\nn \a_{\rm E}\ :\ (\pi_{{\mathsf Y}\sfY_{1,2}{\rm G}}\x\pi_{{\mathsf Y}\sfY_{1,2}{\rm G}})^*\circ{\rm pr}_{1,3}^*{\rm L}_1\otimes{\rm pr}_2^*{\rm E}\xrightarrow{\ \cong\ }{\rm pr}_1^*{\rm E}\otimes(\pi_{{\mathsf Y}\sfY_{1,2}{\rm G}}\x\pi_{{\mathsf Y}\sfY_{1,2}{\rm G}})^*\circ{\rm pr}_{2,4}^*{\rm L}_2 \end{eqnarray} over the fibred product $\,{\mathsf Y}^{[2]}{\mathsf Y}_{1,2}{\rm G}={\mathsf Y}\sfY_{1,2}{\rm G}\x_{\rm G}{\mathsf Y}\sfY_{1,2}{\rm G}$,\ subject to the coherence constraint expressed by the commutative diagram of isomorphisms of CaE super-0-gerbes \begin{eqnarray}\nn \alxydim{@C=.15cm@R=1.5cm}{ & \pi_{1,2}^*\circ{\rm pr}_{1,3}^*{\rm L}_1\otimes\pi_{2,3}^*\circ{\rm pr}_{3,5}^*{\rm L}_1\otimes{\rm pr}_3^*{\rm E} \ar[rd]^{\qquad\pi_{1,2,3}^*\circ{\rm pr}_{1,3,5}^*\mu_{{\rm L}_1}\otimes{\rm id}_{{\rm pr}_3^*{\rm E}}} \ar[ld]_{{\rm id}_{\pi_{1,2}^*\circ{\rm pr}_{1,3}^*{\rm L}_1}\otimes{\rm pr}_{2,3}^*\a_{\rm E}\qquad} & \\ \pi_{1,2}^*\circ{\rm pr}_{1,3}^*{\rm L}_1\otimes{\rm pr}_2^*{\rm E}\otimes\pi_{2,3}^*\circ{\rm pr}_{4,6}^*{\rm L}_2 \ar[d]_{{\rm pr}_{1,2}^*\a_{\rm E}\otimes{\rm id}_{\pi_{2,3}^*\circ{\rm pr}_{4,6}^*{\rm L}_2}} & & \pi_{1,3}^*\circ{\rm pr}_{1,5}^*{\rm L}_1\otimes{\rm pr}_3^*{\rm E} 
\ar[d]^{{\rm pr}_{1,3}^*\a_{\rm E}} \\ {\rm pr}_1^*{\rm E}\otimes\pi_{1,2}^*\circ{\rm pr}_{2,4}^*{\rm L}_2\otimes\pi_{2,3}^*\circ{\rm pr}_{4,6}^*{\rm L}_2 \ar[rr]_{{\rm id}_{{\rm pr}_1^*{\rm E}}\otimes\pi_{1,2,3}^*\circ{\rm pr}_{2,4,6}^*\mu_{{\rm L}_2}} & & {\rm pr}_1^*{\rm E}\otimes\pi_{1,3}^*\circ{\rm pr}_{2,6}^*{\rm L}_2 } \end{eqnarray} over the fibred product $\,{\mathsf Y}^{[3]}{\mathsf Y}_{1,2}{\rm G}\equiv{\mathsf Y}\sfY_{1,2}{\rm G}\x_{\rm G}{\mathsf Y}\sfY_{1,2}{\rm G}\x_{\rm G}{\mathsf Y}\sfY_{1,2}{\rm G}$,\ with \begin{eqnarray}\nn &\pi_{i,j}=(\pi_{{\mathsf Y}\sfY_{1,2}{\rm G}}\x\pi_{{\mathsf Y}\sfY_{1,2}{\rm G}})\circ{\rm pr}_{i,j}\,,\quad(i,j)\in\{(1,2),(2,3),(1,3)\}\,,&\cr\cr &\pi_{1,2,3}=\pi_{{\mathsf Y}\sfY_{1,2}{\rm G}}\x\pi_{{\mathsf Y}\sfY_{1,2}{\rm G}}\x\pi_{{\mathsf Y}\sfY_{1,2}{\rm G}}\,.& \end{eqnarray} \end{itemize} Given a pair of 1-isomorphisms $\,\Phi^{(1)\,B}_{\rm CaE}=({\mathsf Y}^B{\mathsf Y}_{1,2}{\rm G},\pi_{{\mathsf Y}^B{\mathsf Y}_{1,2}{\rm G}},{\rm E}_B,\txa_{{\rm E}_B},\a_{{\rm E}_B}),\ B\in\{1,2\}\,$ between CaE super-1-gerbes $\,\mathcal{sG}^{(1)\,A}_{\rm CaE}=({\mathsf Y}_A{\rm G},\pi_{{\mathsf Y}_A{\rm G}},\underset{\tx{\ciut{(2)}}}{{\rm b}}{}_A,{\rm L}_A,\txa_A,\mu_{{\rm L}_A}),\ A\in\{1,2\}$,\ a 2-isomorphism is represented by a triple \begin{eqnarray}\nn \varphi^{(1)}_{\rm CaE}=({\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}{\rm G},\pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}{\rm G}},\beta)\ :\ \Phi^{(1)\,1}_{\rm CaE}\xLongrightarrow{\ \cong\ }\Phi^{(1)\,2}_{\rm CaE} \end{eqnarray} composed of \begin{itemize} \item a surjective submersion \begin{eqnarray}\nn \pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}{\rm G}}\ :\ {\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}{\rm G}\longrightarrow{\mathsf Y}^1{\mathsf Y}_{1,2}{\rm G}\x_{{\mathsf Y}_{1,2}{\rm G}}{\mathsf Y}^2{\mathsf Y}_{1,2}{\rm G}\equiv{\mathsf Y}^{1,2}{\mathsf Y}_{1,2}{\rm G} \end{eqnarray} with a structure of a Lie supergroup on its total space that lifts the product Lie-supergroup structure on the fibred product $\,{\mathsf
Y}^{1,2}{\mathsf Y}_{1,2}{\rm G}\,$ along the Lie-supergroup homomorphism $\,\pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}{\rm G}}$, \item an isomorphism of CaE super-0-gerbes \begin{eqnarray}\nn \beta\ :\ ({\rm pr}_1\circ\pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}{\rm G}})^*{\rm E}_1\xrightarrow{\ \cong\ }({\rm pr}_2\circ\pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}{\rm G}})^*{\rm E}_2 \end{eqnarray} subject to the coherence constraint expressed by the commutative diagram of isomorphisms of CaE super-0-gerbes \begin{eqnarray}\nn \alxydim{@C=1.75cm@R=1.75cm}{ p_{1,1}^*{\rm L}_1\otimes\pi_{1,2}^*{\rm E}_1 \ar[r]^{\a_1} \ar[d]_{{\rm id}_{p_{1,1}^*{\rm L}_1}\otimes{\rm pr}_2^*\beta} & \pi_{1,1}^*{\rm E}_1\otimes p_{2,2}^*{\rm L}_2 \ar[d]^{{\rm pr}_1^*\beta\otimes{\rm id}_{p_{2,2}^*{\rm L}_2}} \\ p_{1,1}^*{\rm L}_1\otimes\pi_{2,2}^*{\rm E}_2 \ar[r]_{\a_2} & \pi_{2,1}^*{\rm E}_2\otimes p_{2,2}^*{\rm L}_2 } \end{eqnarray} over $\,{\mathsf Y}^{[2]}{\mathsf Y}^{1,2}{\mathsf Y}_{1,2}{\rm G}$,\ with \begin{eqnarray}\nn &p_{i,i}={\rm pr}_i\circ\pi_{{\mathsf Y}^i{\mathsf Y}_{1,2}{\rm G}}\circ{\rm pr}_i\circ\pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}{\rm G}}\x{\rm pr}_i\circ\pi_{{\mathsf Y}^i{\mathsf Y}_{1,2}{\rm G}}\circ {\rm pr}_i\circ\pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}{\rm G}}\,,\quad i\in\{1,2\}&\cr\cr &\pi_{j,k}={\rm pr}_j\circ\pi_{{\mathsf Y}\sfY^{1,2}{\mathsf Y}_{1,2}{\rm G}}\circ{\rm pr}_k\,,\quad j,k\in\{1,2\}\,. \end{eqnarray} \end{itemize} \begin{flushright}$\diamond$\end{flushright} As in the case of the super-0-brane, we close the section dedicated to the super-1-brane with (Vinogradov-)algebroidal considerations.
The first to be discussed are the fundamental sections of $\,\mathcal{E}^{1,1}{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}\,$ associated with the left-regular action, \begin{eqnarray}\nn \gt{R}_I(\theta,x)=\bigl(\mathscr{P}_I(\theta,x),-\ovl\theta\,\Gamma_I\,\sigma(\theta)\bigr)\,,\qquad\qquad\gt{R}_\a(\theta,x)=\bigl(\mathscr{Q}_\a(\theta,x),\ovl\Gamma_{I\,\a\beta}\,\theta^\beta\,\bigl(-2{\mathsf d} x^I+\tfrac{1}{3}\,\ovl\theta\,\Gamma^I\,\sigma(\theta)\bigr)\bigr)\,, \end{eqnarray} for which we obtain superbrackets \begin{eqnarray}\nn &\sVbra{\gt{R}_I}{\gt{R}_J}^{\underset{\tx{\ciut{(3)}}}{\chi}}(\theta,x)=0\,,\qquad\qquad\sVbra{\gt{R}_I}{\gt{R}_\a}^{\underset{\tx{\ciut{(3)}}}{\chi}}(\theta,x)=\bigl(0,\tfrac{1}{2}\,\ovl\Gamma_{I\,\a\beta}\,{\mathsf d}\theta^\beta\bigr)\,,&\cr\cr &\sVbra{\gt{R}_\a}{\gt{R}_\beta}^{\underset{\tx{\ciut{(3)}}}{\chi}}(\theta,x)=-\ovl\Gamma{}^I_{\a\beta}\,\gt{R}_I+\bigl(0,-2\ovl\Gamma_{I\,\a\beta}\,{\mathsf d} x^I\bigr)\,.& \end{eqnarray} These yield the left-regular Lie anomaly super-1-form \begin{eqnarray}\nn \a^{\rm L}_{IJ}=0\,,\qquad\qquad\a^{\rm L}_{I\a}=\tfrac{1}{2}\,\ovl\Gamma_{I\,\a\beta}\,{\mathsf d}\theta^\beta=-\a^{\rm L}_{\a I}\,,\qquad\qquad\a^{\rm L}_{\a\beta}=-2\ovl\Gamma_{I\,\a\beta}\,{\mathsf d} x^I\,. 
\end{eqnarray} For the right-regular action, we find \begin{eqnarray}\nn \gt{L}_I(\theta,x)=\bigl(P_I(\theta,x),-\ovl\theta\,\Gamma_I\,\sigma(\theta)\bigr)\,,\qquad\qquad\gt{L}_\a(\theta,x)=\bigl(Q_\a(\theta,x),-\ovl\Gamma_{I\,\a\beta}\,\theta^\beta\,\bigl(2{\mathsf d} x^I+\tfrac{1}{3}\,\ovl\theta\,\Gamma_I\,\sigma(\theta)\bigr)\bigr) \end{eqnarray} and the superbrackets \begin{eqnarray}\nn &\sVbra{\gt{L}_I}{\gt{L}_J}^{\underset{\tx{\ciut{(3)}}}{\chi}}(\theta,x)=0\,,\qquad\qquad\sVbra{\gt{L}_I}{\gt{L}_\a}^{\underset{\tx{\ciut{(3)}}}{\chi}}(\theta,x)=\bigl(0,\tfrac{1}{2}\,\ovl\Gamma_{I\,\a\beta}\,{\mathsf d}\theta^\beta\bigr)\,,&\cr\cr &\sVbra{\gt{L}_\a}{\gt{L}_\beta}^{\underset{\tx{\ciut{(3)}}}{\chi}}(\theta,x)=\ovl\Gamma{}^I_{\a\beta}\,\gt{L}_I+\bigl(0,-2\ovl\Gamma_{I\,\a\beta}\,{\mathsf d} x^I\bigr)\,,& \end{eqnarray} and so also the right-regular Lie anomaly super-1-form \begin{eqnarray}\nn \a^{\rm R}_{IJ}=0\,,\qquad\qquad\a^{\rm R}_{I\a}=\tfrac{1}{2}\,\ovl\Gamma_{I\,\a\beta}\,{\mathsf d}\theta^\beta=\a^{\rm R}_{\a I}\,,\qquad\qquad\a^{\rm R}_{\a\beta}=-2\ovl\Gamma_{I\,\a\beta}\,{\mathsf d} x^I\,.
\end{eqnarray} The mixed superbrackets read \begin{eqnarray}\nn &\sVbra{\gt{R}_I}{\gt{L}_J}^{\underset{\tx{\ciut{(3)}}}{\chi}}=0\,,\qquad\qquad\sVbra{\gt{R}_I}{\gt{L}_\a}^{\underset{\tx{\ciut{(3)}}}{\chi}}=\bigl(0,\tfrac{1}{2}\,\ovl\Gamma_{I\,\a\beta}\,{\mathsf d}\theta^\beta\bigr)=-\sVbra{\gt{R}_\a}{\gt{L}_I}^{\underset{\tx{\ciut{(3)}}}{\chi}}\,,&\cr\cr &\sVbra{\gt{R}_\a}{\gt{L}_\beta}^{\underset{\tx{\ciut{(3)}}}{\chi}}=\bigl(0,-2\ovl\Gamma_{I\,\a\beta}\,{\mathsf d} x^I+\ovl\Gamma{}^I_{\a\gamma}\,\ovl\Gamma_{I\,\beta\delta}\,{\mathsf d}(\theta^\gamma\,\theta^\delta)\bigr)\,.& \end{eqnarray} Taking these into account alongside the previous ones, we derive trivial, and so also (Lie-)anomaly-free, superbrackets of the fundamental sections of the adjoint action \begin{eqnarray}\nn (\gt{R}_I-\gt{L}_I)(\theta,x)&=&\bigl((\mathscr{P}_I-P_I)(\theta,x),0\bigr)=0\,,\cr\cr (\gt{R}_\a-\gt{L}_\a)(\theta,x)&=&\bigl((\mathscr{Q}_\a-Q_\a)(\theta,x),\tfrac{2}{3}\,\ovl\Gamma_{I\,\a\beta}\,\theta^\beta\,\ovl\theta\,\Gamma^I\,\sigma(\theta)\bigr)=\bigl(-\ovl\Gamma{}^I_{\a\beta}\,\theta^\beta\,\tfrac{\partial\ }{\partial x^I},\tfrac{2}{3}\,\ovl\Gamma_{I\,\a\beta}\,\theta^\beta\,\ovl\theta\,\Gamma^I\,\sigma(\theta)\bigr)\,. \end{eqnarray} Once again, the fundamental sections for the adjoint action span a Lie superalgebroid. \medskip \subsubsection{The supermembrane} In order to corroborate our claim as to the structural nature of the proposed geometrisation scheme in the setting of the $\sigma$-model super-$p$-brane dynamics, we discuss yet another example of a super-$n$-gerbe, to wit, the super-2-gerbe of the super-4-form field that couples to the uniformly charged supermembrane. In so doing, we encounter a slightly more involved extension mechanism than those dealt with heretofore.
Thus, we are going to adapt the logic employed -- after Refs.\,\cite{Aldaya:1984gt,Chryssomalakos:2000xd} -- in the previous paragraph to the problem of trivialisation of the Green--Schwarz super-4-form \begin{eqnarray}\nn \underset{\tx{\ciut{(4)}}}{\chi}=\pi_0^*\bigl(\ovl\sigma\wedge\Gamma_{IJ}\,\sigma\bigr)\wedge e^I\wedge e^J \end{eqnarray} whose closedness implies the particular variant \begin{eqnarray}\label{eq:ClifFierz2} \ovl\Gamma{}^I_{(\a\beta}\,\bigl(\ovl\Gamma_{IJ}\bigr)_{\gamma\delta)}=0 \end{eqnarray} of the Fierz identity \eqref{eq:ClifFierz}, which can be conveniently rewritten as \begin{eqnarray}\nn \ovl\Gamma{}^I_{(\gamma\delta}\,\bigl(\ovl\Gamma_{IJ}\bigr)_{\beta)\a}=-\ovl\Gamma{}^I_{\a(\beta}\,\bigl(\ovl\Gamma_{IJ}\bigr)_{\gamma\delta)}\,. \end{eqnarray} On the tentative list \eqref{eq:LI2bas} of LI de Rham super-2-cocycles, we now find\footnote{Note that neither $\,\mathscr{Q}_\a\righthalfcup\mathscr{Q}_\beta\righthalfcup\underset{\tx{\ciut{(4)}}}{\chi}\,$ nor $\,\mathscr{P}_I\righthalfcup\mathscr{Q}_\a\righthalfcup\underset{\tx{\ciut{(4)}}}{\chi}\,$ is closed.} \begin{eqnarray}\label{eq:CaEscocyc2} \underset{\tx{\ciut{(2)}}}{h}{}^{(1)}_{IJ}=-\tfrac{1}{4}\,\mathscr{P}_I\righthalfcup\mathscr{P}_J\righthalfcup\underset{\tx{\ciut{(4)}}}{\chi}=\tfrac{1}{2}\,\pi_0^*\bigl(\ovl\sigma\wedge\Gamma_{IJ}\,\sigma\bigr)\,.
\end{eqnarray} Reasoning along the lines of our previous analyses, we erect over $\,\mathcal{M}^{(1)}={\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}\,$ a trivial rank-$\frac{d(d-1)}{2}$ (real) vector bundle \begin{eqnarray}\nn \pi^{(2)}_2\equiv{\rm pr}_1\ :\ \mathcal{M}^{(2)}_2:=\mathcal{M}^{(1)}\x{\mathbb{R}}^{\frac{d(d-1)}{2}}\longrightarrow\mathcal{M}^{(1)}\ :\ \bigl(\theta^\a,x^I,\zeta_{JK}=-\zeta_{KJ}\bigr)\longmapsto\bigl(\theta^\a,x^I\bigr) \end{eqnarray} endowed with the structure of a Lie supergroup that lifts the same structure on $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\,$ in a manner that ensures left-invariance of the super-1-forms \begin{eqnarray}\nn e^{(2)}_{IJ}(\theta,x,\zeta)={\mathsf d}\zeta_{IJ}+\tfrac{1}{2}\,\ovl\theta\,\Gamma_{IJ}\,\sigma(\theta)\,, \end{eqnarray} the latter satisfying the identities \begin{eqnarray}\nn \pi^{(2)\,*}_2\underset{\tx{\ciut{(2)}}}{h}{}^{(1)}_{IJ}={\mathsf d} e^{(2)}_{IJ}\,. \end{eqnarray} The relevant structure is given in \begin{Prop}\label{prop:M22sgroup} The above-described vector bundle $\,\mathcal{M}^{(2)}_2\,$ equipped with the binary operation \begin{eqnarray}\nn {\rm m}^{(2)}_2\ &:&\ \mathcal{M}^{(2)}_2\x\mathcal{M}^{(2)}_2\longrightarrow\mathcal{M}^{(2)}_2\cr\cr &:&\ \bigl(\bigl(\theta_1^\a,x_1^I,\zeta_{1\,JK}\bigr),\bigl(\theta_2^\beta,x_2^L,\zeta_{2\,MN}\bigr)\bigr)\longmapsto\bigl(\theta_1^\a+\theta_2^\a,x_1^I+x_2^I-\tfrac{1}{2}\,\ovl\theta_1\,\Gamma^I\,\theta_2,\zeta_{1\,JK}+\zeta_{2\,JK}-\tfrac{1}{2}\,\ovl\theta_1\,\Gamma_{JK}\,\theta_2\bigr) \end{eqnarray} with the inverse \begin{eqnarray}\nn {\rm Inv}^{(2)}_2\ :\ \mathcal{M}^{(2)}_2\longrightarrow\mathcal{M}^{(2)}_2\ :\ \bigl(\theta^\a,x^I,\zeta_{JK}\bigr)\longmapsto\bigl(-\theta^\a,-x^I,-\zeta_{JK}\bigr) \end{eqnarray} and the neutral element \begin{eqnarray}\nn e^{(2)}_2=(0,0,0) \end{eqnarray} is a Lie supergroup. 
It is a central extension \begin{eqnarray}\nn {\boldsymbol{1}}\longrightarrow{\mathbb{R}}^{\frac{d(d-1)}{2}}\longrightarrow{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}{\mathbb{R}}^{\frac{d(d-1)}{2}}\equiv\mathcal{M}_2^{(2)}\xrightarrow{\ \pi_2^{(2)}\ }{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\longrightarrow{\boldsymbol{1}} \end{eqnarray} of the super-Minkowski group $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\,$ determined by the family of CE super-2-cocycles corresponding to the CaE super-2-cocycles $\,\{\underset{\tx{\ciut{(2)}}}{h}{}^{(1)}_{IJ}\}_{I,J\in\ovl{0,d-1}}\,$ of \Reqref{eq:CaEscocyc2}. \end{Prop} \noindent\begin{proof} Through inspection. \end{proof} In the next step, let us split the super-4-cocycle as \begin{eqnarray}\nn \underset{\tx{\ciut{(4)}}}{\chi}=\lambda_1\,\pi_0^*\bigl(\ovl\sigma\wedge\Gamma_{IJ}\,\sigma\bigr)\wedge e^I\wedge e^J+\lambda_2\,\pi_0^*\bigl(\ovl\sigma\wedge\Gamma_{IJ}\,\sigma\bigr)\wedge e^I\wedge e^J\,,\qquad\lambda_1+\lambda_2=1 \end{eqnarray} and pull it back to $\,\mathcal{M}_2^{(2)}$,\ whereupon we judiciously\footnote{Trivialising the factor $\,\pi_0^*\bigl(\ovl\sigma\wedge\Gamma_{IJ}\,\sigma\bigr)\,$ within $\,\pi_0^*\bigl(\ovl\sigma\wedge\Gamma_{IJ}\,\sigma\bigr)\wedge e^I\wedge e^J\,$ as a whole does not solve our problem.} rewrite it, using the shorthand notation \begin{eqnarray}\nn \pi^{(2)}_{02}\equiv\pi_0\circ\pi^{(2)}_2 \end{eqnarray} along the way, as \begin{eqnarray}\nn \pi^{(2)\,*}_2\underset{\tx{\ciut{(4)}}}{\chi}&=&{\mathsf d}\bigl(2\lambda_1\,e_{IJ}^{(2)}\wedge\pi^{(2)\,*}_2(e^I\wedge e^J)\bigr)+2\lambda_1\,e_{IJ}^{(2)}\wedge\pi^{(2)\,*}_{02}\bigl(\ovl\sigma\wedge\Gamma^I\,\sigma\bigr)\wedge\pi^{(2)\,*}_2 e^J\cr\cr &&+\lambda_2\,\pi^{(2)\,*}_{02}\bigl(\ovl\sigma\wedge\Gamma_{IJ}\,\sigma\bigr)\wedge\pi^{(2)\,*}_2\bigl(e^I\wedge e^J\bigr) \end{eqnarray} and subsequently -- guided by hindsight once again -- we decompose it further as \begin{eqnarray}\nn
\pi^{(2)\,*}_2\underset{\tx{\ciut{(4)}}}{\chi}&=&{\mathsf d}\bigl(2\lambda_1\,e_{IJ}^{(2)}\wedge\pi^{(2)\,*}_2(e^I\wedge e^J)\bigr)+2\lambda_{11}\,e_{IJ}^{(2)}\wedge\pi^{(2)\,*}_{02}\bigl(\ovl\sigma\wedge\Gamma^I\,\sigma\bigr)\wedge\pi^{(2)\,*}_2 e^J\cr\cr &&\hspace{-.5cm}+2\lambda_{12}\,e_{IJ}^{(2)}\wedge\pi^{(2)\,*}_{02}\bigl(\ovl\sigma\wedge\Gamma^I\,\sigma\bigr)\wedge\pi^{(2)\,*}_2 e^J+\lambda_2\,\pi^{(2)\,*}_{02}\bigl(\ovl\sigma\wedge\Gamma_{IJ}\,\sigma\bigr)\wedge\pi^{(2)\,*}_2\bigl(e^I\wedge e^J\bigr)\,, \end{eqnarray} for $\,\lambda_{11}+\lambda_{12}=\lambda_1$.\ We may now apply the generating technique that gave us \eqref{eq:LI2bas} to the partially corrected super-4-cocycle \begin{eqnarray}\nn \widetilde{\underset{\tx{\ciut{(4)}}}{\chi}}=\pi^{(2)\,*}_2\underset{\tx{\ciut{(4)}}}{\chi}-{\mathsf d}\bigl(2\lambda_1\,e_{IJ}^{(2)}\wedge\pi^{(2)\,*}_2(e^I\wedge e^J)\bigr) \end{eqnarray} over the supermanifold $\,\mathcal{M}^{(2)}_2$.\ For that, we need supervector fields dual to the set of LI super-1-forms $\,\{\sigma^\a,e^I,e^{(2)}_{JK}\}_{\a\in\ovl{1,D_{1,d-1}},\ I,J,K\in\ovl{0,d-1}}$.\ These are readily found to be given by \begin{eqnarray}\nn &\mathscr{Q}^{(2)}_\a(\theta,x,\zeta)=\mathscr{Q}^{(1)}_\a(\theta,x)+\tfrac{1}{2}\,\ovl\Gamma_{LM\,\a\beta}\,\theta^\beta\,\tfrac{\partial\ }{\partial\zeta_{LM}}\,,&\cr\cr &\mathscr{P}^{(2)}_I(\theta,x,\zeta)=\mathscr{P}^{(1)}_I(\theta,x)\,,\qquad\qquad\mathscr{F}^{(2)\,JK}(\theta,x,\zeta)=\tfrac{\partial\ }{\partial\zeta_{JK}}& \end{eqnarray} and furnish the LSA \begin{eqnarray}\nn &\{\mathscr{Q}^{(2)}_\a,\mathscr{Q}^{(2)}_\beta\}=\ovl\Gamma{}^I_{\a\beta}\,\mathscr{P}^{(2)}_I+\ovl\Gamma_{IJ\,\a\beta}\,\mathscr{F}^{(2)\,IJ}\,,\quad\quad[\mathscr{P}^{(2)}_I,\mathscr{P}^{(2)}_J]=0\,,\quad\quad[\mathscr{F}^{(2)\,IJ},\mathscr{F}^{(2)\,KL}]=0\,,&\cr\cr &[\mathscr{Q}^{(2)}_\a,\mathscr{P}^{(2)}_I]=0\,,\qquad\qquad[\mathscr{Q}^{(2)}_\a,\mathscr{F}^{(2)\,IJ}]=0\,,\qquad\qquad[\mathscr{P}^{(2)}_I,\mathscr{F}^{(2)\,JK}]=0\,.& 
\end{eqnarray} We may next contract the super-4-cocycle $\,\widetilde{\underset{\tx{\ciut{(4)}}}{\chi}}\,$ with the vector fields $\,\mathscr{P}^{(2)}_I\,$ and $\,\mathscr{Q}^{(2)}_\a$,\ whereby, for suitably adjusted normalisation constants ($4\lambda_1=1=4\lambda_2$), we obtain the combination \begin{eqnarray}\label{eq:CaEscocyc3}\hspace{1cm} \widetilde{\underset{\tx{\ciut{(2)}}}{h}}{}_{I\a}:=\mathscr{Q}^{(2)}_\a\righthalfcup\mathscr{P}^{(2)}_I\righthalfcup\widetilde{\underset{\tx{\ciut{(4)}}}{\chi}}=\ovl\Gamma{}^J_{\a\beta}\,e_{JI}^{(2)}\wedge\pi_{02}^{(2)\,*}\sigma^\beta+\ovl\Gamma_{JI\,\a\beta}\,\pi^{(2)\,*}_2e^J\wedge\pi_{02}^{(2)\,*}\sigma^\beta\,. \end{eqnarray} The latter is closed in virtue of identity \eqref{eq:ClifFierz2}. Upon rewriting the super-2-cocycle as \begin{eqnarray}\nn \left(\ovl\Gamma{}^J_{\a\beta}\,e_{JI}^{(2)}(\theta,x,\zeta)+\ovl\Gamma_{JI\,\a\beta}\,e^J(\theta,x)\right)\wedge\sigma^\beta(\theta)&=&{\mathsf d}\bigl[\bigl(\ovl\Gamma{}^J_{\a\beta}\,e_{IJ}^{(2)}(\theta,x,\zeta)+\ovl\Gamma_{IJ\,\a\beta}\,e^J(\theta,x)\bigr)\,\theta^\beta\bigr]\cr\cr &&-\tfrac{1}{2}\,(\ovl\Gamma{}^J_{\a\beta}\,\ovl\Gamma_{IJ\,\gamma\delta}+\ovl\Gamma{}^J_{\gamma\delta}\,\ovl\Gamma_{IJ\,\a\beta})\,\theta^\beta\,\sigma^\gamma\wedge\sigma^\delta(\theta)\,, \end{eqnarray} and taking into account the identity (also following from \Reqref{eq:ClifFierz2}) \begin{eqnarray}\nn -(\ovl\Gamma{}^J_{\a\beta}\,\ovl\Gamma_{IJ\,\gamma\delta}+\ovl\Gamma{}^J_{\gamma\delta}\,\ovl\Gamma_{IJ\,\a\beta})\,\theta^\beta\,\sigma^\gamma\wedge\sigma^\delta(\theta)=2\bigl(\ovl\Gamma{}^J_{\a\gamma}\,\ovl\Gamma_{IJ\,\beta\delta}+\ovl\Gamma{}^J_{\beta\delta}\,\ovl\Gamma_{IJ\,\a\gamma}\bigr)\,\bigl[{\mathsf d}\bigl(\theta^\beta\,\theta^\gamma\,\sigma^\delta(\theta)\bigr)+\theta^\gamma\,\sigma^\beta\wedge\sigma^\delta(\theta)\bigr]\,, \end{eqnarray} we may write \begin{eqnarray}\nn \widetilde{\underset{\tx{\ciut{(2)}}}{h}}{}_{I\a}(\theta,x,\zeta)={\mathsf 
d}\bigl[\bigl(\ovl\Gamma{}^J_{\a\beta}\,e_{IJ}^{(2)}(\theta,x,\zeta)+\ovl\Gamma_{IJ\,\a\beta}\,e^J(\theta,x)\bigr)\,\theta^\beta+\tfrac{1}{3}\,\bigl(\ovl\Gamma{}^J_{\a\gamma}\,\ovl\Gamma_{IJ\,\beta\delta}+\ovl\Gamma{}^J_{\beta\delta}\,\ovl\Gamma_{IJ\,\a\gamma}\bigr)\,\theta^\beta\,\theta^\gamma\,\sigma^\delta(\theta)\bigr]\,, \end{eqnarray} and so we see that the super-2-cocycle admits a manifestly non-LI de Rham primitive. Its trivialisation in the CaE cohomology necessitates the construction of a trivial vector bundle \begin{eqnarray}\nn \pi_2^{(3)}\equiv{\rm pr}_1\ :\ \mathcal{M}_2^{(3)}:=\mathcal{M}_2^{(2)}\x{\mathbb{R}}^{0\,\vert\,dD_{1,d-1}}\longrightarrow\mathcal{M}_2^{(2)}\ :\ \bigl(\theta^\a,x^I,\zeta_{JK},\psi_{L\,\beta}\bigr)\longmapsto\bigl(\theta^\a,x^I,\zeta_{JK}\bigr) \end{eqnarray} with the purely Gra\ss mann-odd fibre $\,{\mathbb{R}}^{0\,\vert\,dD_{1,d-1}}\,$ and a Lie-supergroup structure that extends the previously established structure on its base $\,\mathcal{M}_2^{(2)}\,$ in such a way that the super-1-forms \begin{eqnarray}\nn \sigma^{(3)}_{I\a}(\theta,x,\zeta,\psi)={\mathsf d}\psi_{I\,\a}+\bigl[\ovl\Gamma{}^J_{\a\beta}\,\bigl(e_{IJ}^{(2)}(\theta,x,\zeta)-\tfrac{1}{3}\,\ovl\theta\,\Gamma_{IJ}\,\sigma(\theta)\bigr)+\ovl\Gamma_{IJ\,\a\beta}\,\bigl(e^J(\theta,x)-\tfrac{1}{3}\,\ovl\theta\,\Gamma^J\,\sigma(\theta)\bigr)\bigr]\,\theta^\beta \end{eqnarray} are LI with respect to this extension. 
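Before stating the result, let us spell out, on the simpler example of the super-1-forms $\,e^{(2)}_{IJ}\,$ of Prop.\,\ref{prop:M22sgroup}, the elementary left-invariance check tacitly invoked above (a short explicit computation, with $\,\sigma^\a(\theta)={\mathsf d}\theta^\a\,$ on the flat superspace, and with the left translation by a fixed $\,(\theta_1,x_1,\zeta_1)\in\mathcal{M}^{(2)}_2\,$ implemented by $\,{\rm m}^{(2)}_2$): \begin{eqnarray}\nn e^{(2)}_{IJ}\bigl({\rm m}^{(2)}_2\bigl((\theta_1,x_1,\zeta_1),(\theta_2,x_2,\zeta_2)\bigr)\bigr)&=&{\mathsf d}\bigl(\zeta_{2\,IJ}-\tfrac{1}{2}\,\ovl\theta_1\,\Gamma_{IJ}\,\theta_2\bigr)+\tfrac{1}{2}\,\bigl(\ovl\theta_1+\ovl\theta_2\bigr)\,\Gamma_{IJ}\,{\mathsf d}\theta_2\cr\cr &=&{\mathsf d}\zeta_{2\,IJ}+\tfrac{1}{2}\,\ovl\theta_2\,\Gamma_{IJ}\,{\mathsf d}\theta_2=e^{(2)}_{IJ}(\theta_2,x_2,\zeta_2)\,, \end{eqnarray} the two copies of $\,\tfrac{1}{2}\,\ovl\theta_1\,\Gamma_{IJ}\,{\mathsf d}\theta_2\,$ cancelling pairwise. It is the analogous requirement imposed on $\,\sigma^{(3)}_{I\a}\,$ that fixes the $\,\psi$-component of the binary operation in the proposition that follows.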
We thus obtain \begin{Prop}\label{prop:M23sgroup} The above-described vector bundle $\,\mathcal{M}^{(3)}_2\,$ equipped with the binary operation \begin{eqnarray}\nn {\rm m}^{(3)}_2\ &:&\ \mathcal{M}^{(3)}_2\x\mathcal{M}^{(3)}_2\longrightarrow\mathcal{M}^{(3)}_2\ :\ \left((\theta_1^\a,x_1^I,\zeta_{1\,JK},\psi_{1\,L\beta}),(\theta_2^\gamma,x_2^M,\zeta_{2\,NO},\psi_{2\,P\delta})\right)\longmapsto\cr\cr &&\longmapsto\bigl(\theta_1^\a+\theta_2^\a,x_1^I+x_2^I-\tfrac{1}{2}\,\ovl\theta_1\,\Gamma^I\,\theta_2,\zeta_{1\,JK}+\zeta_{2\,JK}-\tfrac{1}{2}\,\ovl\theta_1\,\Gamma_{JK}\,\theta_2,\psi_{1\,L\beta}+\psi_{2\,L\beta}\cr\cr &&+\bigl(\ovl\Gamma{}^M_{\beta\gamma}\,\zeta_{2\,ML}+x_2^M\,\ovl\Gamma_{ML\,\beta\gamma}\bigr)\,\theta_1^\gamma-\tfrac{1}{6}\,\bigl(\ovl\Gamma{}^M_{\beta\gamma}\,\ovl\Gamma_{ML\,\delta\epsilon}+\ovl\Gamma{}^M_{\delta\epsilon}\,\ovl\Gamma_{ML\,\beta\gamma}\bigr)\bigl(2\theta_1^\gamma+\theta_2^\gamma\bigr)\,\theta_1^\delta\,\theta_2^\epsilon\bigr) \end{eqnarray} with the inverse \begin{eqnarray}\nn {\rm Inv}^{(3)}_2\ :\ \mathcal{M}^{(3)}_2\longrightarrow\mathcal{M}^{(3)}_2\ :\ \bigl(\theta^\a,x^I,\zeta_{JK},\psi_{L\beta}\bigr)\longmapsto\bigl(-\theta^\a,-x^I,-\zeta_{JK},-\psi_{L\beta}+\bigl(\ovl\Gamma{}^M_{\beta\gamma}\,\zeta_{ML}+x^M\,\ovl\Gamma_{ML\,\beta\gamma}\bigr)\,\theta^\gamma\bigr) \end{eqnarray} and the neutral element \begin{eqnarray}\nn e^{(3)}_2=(0,0,0,0) \end{eqnarray} is a Lie supergroup.
It is a (super)central extension \begin{eqnarray}\nn {\boldsymbol{1}}\longrightarrow{\mathbb{R}}^{0\,\vert\,dD_{1,d-1}}\longrightarrow\mathcal{M}_2^{(2)}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}{\mathbb{R}}^{0\,\vert\,dD_{1,d-1}}\xrightarrow{\ \pi_2^{(3)}\ }\mathcal{M}_2^{(2)}\longrightarrow{\boldsymbol{1}} \end{eqnarray} of the Lie supergroup $\,\mathcal{M}_2^{(2)}\,$ of Prop.\,\ref{prop:M22sgroup} determined by the family of CE super-2-cocycles corresponding to the CaE super-2-cocycles $\,\{\widetilde{\underset{\tx{\ciut{(2)}}}{h}}{}_{I\a}\}_{(I,\a)\in\ovl{0,d-1}\x\ovl{1,D_{1,d-1}}}\,$ of \Reqref{eq:CaEscocyc3}. \end{Prop} \noindent\begin{proof} Through inspection. As previously, the associativity of $\,{\rm m}_2^{(3)}\,$ hinges upon the identity \eqref{eq:ClifFierz2}. The only slightly less obvious element of the proof is the derivation of the very last term in the formula for the product of two points in $\,\mathcal{M}^{(3)}_2$.\ Indeed, one has to take into account the relevant Fierz identities in order to identify the primitive and this, while in no way dramatic in the present case, may take up more time than necessary. Therefore, we pause to indicate a simple calculation method that will come in handy in subsequent calculations.
When considering the effect of a (lifted) supersymmetry transformation on the super-1-form \begin{eqnarray}\nn \widetilde\sigma^{(3)}_{I\a}(\theta,x,\zeta):=\sigma^{(3)}_{I\a}(\theta,x,\zeta,\psi)-{\mathsf d}\psi_{I\,\a}\,, \end{eqnarray} we immediately arrive at the expression \begin{eqnarray}\nn &&{\rm m}^{(2)\,*}_2\widetilde\sigma^{(3)}_{I\a}\bigl((\varepsilon,y,\xi),(\theta,x,\zeta)\bigr)-\widetilde\sigma^{(3)}_{I\a}(\theta,x,\zeta)\cr\cr &=&{\mathsf d}\bigl(\bigl(\ovl\Gamma{}^J_{\a\beta}\,\zeta_{IJ}+x^J\,\ovl\Gamma_{IJ\,\a\beta}-\tfrac{1}{3}\,\bigl(\ovl\Gamma{}^J_{\a\beta}\,\ovl\varepsilon\,\Gamma_{IJ}\,\theta+\ovl\Gamma_{IJ\,\a\beta}\,\ovl\varepsilon\,\Gamma^J\,\theta\bigr)\bigr)\,\varepsilon^\beta\bigr)\cr\cr &&+\tfrac{1}{6}\,\bigl(\ovl\Gamma{}^J_{\a\beta}\,\ovl\Gamma_{IJ\,\gamma\delta}+\ovl\Gamma{}^J_{\gamma\delta}\,\ovl\Gamma_{IJ\,\a\beta}+2\ovl\Gamma{}^J_{\a\gamma}\,\ovl\Gamma_{IJ\,\beta\delta}+2\,\ovl\Gamma{}^J_{\beta\delta}\,\ovl\Gamma_{IJ\,\a\gamma}\bigr)\,\varepsilon^\beta\,\theta^\gamma\,{\mathsf d}\theta^\delta \end{eqnarray} in which the last term is closed by construction. Hence, we are led to consider the de Rham super-1-cocycle \begin{eqnarray}\nn \underset{\tx{\ciut{(1)}}}{\eta}{}_{I\a}(\theta)=\tfrac{1}{6}\,\bigl(\ovl\Gamma{}^J_{\a\beta}\,\ovl\Gamma_{IJ\,\gamma\delta}+\ovl\Gamma{}^J_{\gamma\delta}\,\ovl\Gamma_{IJ\,\a\beta}+2\ovl\Gamma{}^J_{\a\gamma}\,\ovl\Gamma_{IJ\,\beta\delta}+2\,\ovl\Gamma{}^J_{\beta\delta}\,\ovl\Gamma_{IJ\,\a\gamma}\bigr)\,\varepsilon^\beta\,\theta^\gamma\,{\mathsf d}\theta^\delta\,.
\end{eqnarray} In consequence of the cohomological triviality of the supermanifold under consideration (in fact, the super-1-cocycle descends to the odd hyperplane $\,{\mathbb{R}}^{0\,\vert\,D_{1,d-1}}$,\ and so it is the triviality of the latter that matters here), the super-1-form has a global primitive given by a global section $\,F\,$ of the structure sheaf of $\,{\mathbb{R}}^{0\,\vert\,D_{1,d-1}}$, \begin{eqnarray}\nn \underset{\tx{\ciut{(1)}}}{\eta}{}_{I\a}={\mathsf d} F_{I\a}\,, \end{eqnarray} which we derive with the help of the standard homotopy argument (the so-called `homotopy formula'). Thus, we consider a homotopy \begin{eqnarray}\nn H\ :\ [0,1]\x{\mathbb{R}}^{0\,\vert\,D_{1,d-1}}\longrightarrow{\mathbb{R}}^{0\,\vert\,D_{1,d-1}}\ :\ \bigl(t,\theta^\a\bigr)\longmapsto t\theta^\a \end{eqnarray} that linearly retracts the odd hyperplane to its distinguished point $\,0$,\ and write the primitive in the form of the integral over the homotopy fibre \begin{eqnarray}\label{eq:shomform} F_{I\a}(\theta)=\int_0^1\,{\mathsf d} t\,\partial_t\righthalfcup H^*\underset{\tx{\ciut{(1)}}}{\eta}{}_{I\a}(t,\theta)\,, \end{eqnarray} to the effect \begin{eqnarray}\nn F_{I\a}(\theta)&=&\tfrac{1}{6}\,\int_0^1\,{\mathsf d} t\,t\,\partial_t\righthalfcup\bigl(\ovl\Gamma{}^J_{\a\beta}\,\ovl\Gamma_{IJ\,\gamma\delta}+\ovl\Gamma{}^J_{\gamma\delta}\,\ovl\Gamma_{IJ\,\a\beta}+2\ovl\Gamma{}^J_{\a\gamma}\,\ovl\Gamma_{IJ\,\beta\delta}+2\,\ovl\Gamma{}^J_{\beta\delta}\,\ovl\Gamma_{IJ\,\a\gamma}\bigr)\,\varepsilon^\beta\,\theta^\gamma\,\bigl(t\,{\mathsf d}\theta^\delta+\theta^\delta\,{\mathsf d} t\bigr)\cr\cr &=&\tfrac{1}{12}\,\bigl(\ovl\Gamma{}^J_{\a\beta}\,\ovl\Gamma_{IJ\,\gamma\delta}+\ovl\Gamma{}^J_{\gamma\delta}\,\ovl\Gamma_{IJ\,\a\beta}+2\ovl\Gamma{}^J_{\a\gamma}\,\ovl\Gamma_{IJ\,\beta\delta}+2\,\ovl\Gamma{}^J_{\beta\delta}\,\ovl\Gamma_{IJ\,\a\gamma}\bigr)\,\varepsilon^\beta\,\theta^\gamma\,\theta^\delta\cr\cr
&=&\tfrac{1}{6}\,\bigl(\ovl\Gamma{}^J_{\a\gamma}\,\ovl\Gamma_{IJ\,\beta\delta}+\ovl\Gamma{}^J_{\beta\delta}\,\ovl\Gamma_{IJ\,\a\gamma}\bigr)\,\varepsilon^\beta\,\theta^\gamma\,\theta^\delta\,. \end{eqnarray} \end{proof} For $\,\lambda_2=\lambda_{21}+\lambda_{22}\,$ and $\,\lambda_{21}=2\lambda_{11}$,\ which is the choice that we make, the last trivialisation leaves us with the partially reduced GS super-4-cocycle, \begin{eqnarray}\nn \pi^{(2,3)\,*}_2\underset{\tx{\ciut{(4)}}}{\chi}&=&{\mathsf d}\bigl(2\lambda_1\,\pi^{(3)\,*}_2 e_{IJ}^{(2)}\wedge\pi^{(2,3)\,*}_2(e^I\wedge e^J)\bigr)-2\lambda_{11}\,{\mathsf d}\sigma_{I\a}^{(3)}\wedge\pi^{(2,3)\,*}_2e^I\wedge\pi^{(2,3)\,*}_{02}\sigma^\a\cr\cr &&+2\lambda_{12}\,\pi^{(3)\,*}_2e_{IJ}^{(2)}\wedge\pi^{(2,3)\,*}_{02}\bigl(\ovl\sigma\wedge\Gamma^I\,\sigma\bigr)\wedge\pi^{(2,3)\,*}_2e^J+\lambda_{22}\,\pi^{(2,3)\,*}_{02}\bigl(\ovl\sigma\wedge\Gamma_{IJ}\,\sigma\bigr)\wedge\pi^{(2,3)\,*}_2\bigl(e^I\wedge e^J\bigr)\cr\cr &=&{\mathsf d}\bigl(2\lambda_1\,\pi^{(3)\,*}_2 e_{IJ}^{(2)}\wedge\pi^{(2,3)\,*}_2(e^I\wedge e^J)-2\lambda_{11}\, \sigma_{I\a}^{(3)}\wedge\pi^{(2,3)\,*}_2e^I\wedge\pi^{(2,3)\,*}_{02}\sigma^\a\bigr)\cr\cr &&-\lambda_{11}\,\sigma_{I\a}^{(3)}\wedge\pi^{(2,3)\,*}_{02}(\ovl\sigma\wedge\Gamma^I\,\sigma\wedge \sigma^\a)+2\lambda_{12}\,\pi^{(3)\,*}_2 e_{IJ}^{(2)}\wedge\pi^{(2,3)\,*}_{02}\bigl(\ovl\sigma\wedge\Gamma^I\,\sigma\bigr)\wedge\pi^{(2,3)\,*}_2e^J\cr\cr &&+\lambda_{22}\,\pi^{(2,3)\,*}_{02}\bigl(\ovl\sigma\wedge\Gamma_{IJ}\,\sigma\bigr)\wedge\pi^{(2,3)\,*}_2\bigl(e^I\wedge e^J\bigr)\cr\cr &\equiv&{\mathsf d}\bigl(2\lambda_1\,\pi^{(3)\,*}_2 e_{IJ}^{(2)}\wedge\pi^{(2,3)\,*}_2(e^I\wedge e^J)-2\lambda_{11}\, \sigma_{I\a}^{(3)}\wedge\pi^{(2,3)\,*}_2e^I\wedge\pi^{(2,3)\,*}_{02}\sigma^\a\bigr)\cr\cr &&+\underset{\tx{\ciut{(2)}}}{\Delta}{}_{\a\beta}\wedge\pi^{(2,3)\,*}_{02}(\sigma^\a\wedge\sigma^\beta)\,, \end{eqnarray} where we used the shorthand notation \begin{eqnarray}\nn 
\pi^{(2,3)}_2\equiv\pi^{(2)}_2\circ\pi^{(3)}_2\,,\qquad\qquad\pi^{(2,3)}_{02}=\pi_0\circ\pi^{(2)}_2\circ\pi^{(3)}_2\,. \end{eqnarray} Upon setting $\,\lambda_{11}=\lambda_{111}+\lambda_{112}$,\ we may cast the factor $\,\underset{\tx{\ciut{(2)}}}{\Delta}{}_{\a\beta}\,$ in the form \begin{eqnarray} \underset{\tx{\ciut{(2)}}}{\Delta}{}_{\a\beta}&=&-\bigl(\lambda_{111}\,\ovl\Gamma{}^I_{\a\beta}\,\sigma^{(3)}_{I\gamma}+\tfrac{1}{2}\,\lambda_{112}\,\ovl\Gamma{}^I_{\a\gamma}\,\sigma^{(3)}_{I\beta}+\tfrac{1}{2}\,\lambda_{112}\,\ovl\Gamma{}^I_{\beta\gamma}\,\sigma^{(3)}_{I\a}\bigr)\wedge\pi^{(2,3)\,*}_{02}\sigma^\gamma+2\lambda_{12}\,\ovl\Gamma{}^I_{\a\beta}\,\widetilde\pi_2^*e_{IJ}^{(2)}\wedge\pi^{(2,3)\,*}_2e^J\cr\cr &&\hspace{1cm}+\lambda_{22}\,\ovl\Gamma_{IJ\,\a\beta}\,\pi^{(2,3)\,*}_2\bigl(e^I\wedge e^J\bigr)\,,\label{eq:scocDab} \end{eqnarray} and enquire as to the existence of a choice of the parameters for which the latter is closed. Taking into account the definitions of the super-1-forms $\,e^I,e_{JK}^{(2)}\,$ and $\,\sigma^{(3)}_{L\a}$,\ we find the exterior derivative of $\,\underset{\tx{\ciut{(2)}}}{\Delta}{}_{\a\beta}\,$ in the form \begin{eqnarray}\nn {\mathsf d}\underset{\tx{\ciut{(2)}}}{\Delta}{}_{\a\beta}&=&(\lambda_{111}-\lambda_{12})\,\ovl\Gamma{}^I_{\a\beta}\,\ovl\Gamma{}^J_{\gamma\delta}\,\widetilde\pi_2^*\bigl(e_{IJ}^{(2)}\wedge\widetilde\pi_{01}^*(\sigma^\gamma\wedge \sigma^\delta)\bigr)+\bigl[(\lambda_{111}+\lambda_{12})\,\ovl\Gamma{}^I_{\a\beta}\,\ovl\Gamma_{IJ\,\gamma\delta}+\lambda_{22}\,\ovl\Gamma{}^I_{\gamma\delta}\,\ovl\Gamma_{IJ\,\a\beta}\cr\cr &&+\tfrac{1}{4}\,\lambda_{112}\,(\ovl\Gamma{}^I_{\a\gamma}\,\ovl\Gamma_{IJ\,\beta\delta}+\ovl\Gamma{}^I_{\beta\gamma}\,\ovl\Gamma_{IJ\,\a\delta}+\ovl\Gamma{}^I_{\a\delta}\,\ovl\Gamma_{IJ\,\beta\gamma}+\ovl\Gamma{}^I_{\beta\delta}\,\ovl\Gamma_{IJ\,\a\gamma})\bigr]\,\pi^{(2,3)\,*}_2\bigl(e^J\wedge\widetilde\pi_0^*(\sigma^\gamma\wedge \sigma^\delta)\bigr) \end{eqnarray} Thus, in order for the derivative 
to vanish identically in the given representation of the Clifford algebra (that is, with the Fierz identity \eqref{eq:ClifFierz2} in force), we have to impose the constraints \begin{eqnarray}\nn \lambda_{111}-\lambda_{12}\stackrel{!}{=} 0\,,\qquad\qquad 4(\lambda_{111}+\lambda_{12})\stackrel{!}{=}\lambda_{112}\stackrel{!}{=} 4\lambda_{22} \end{eqnarray} with the solution \begin{eqnarray}\nn (\lambda_{112},\lambda_{12},\lambda_{22})=\lambda_{111}\cdot(8,1,2)\,. \end{eqnarray} We fix the free coefficient by demanding consistency of the result derived above with the linear relations between the various coefficients, and in particular -- with $\,\lambda_1+\lambda_2=1$,\ whereupon we obtain \begin{eqnarray}\label{eq:scocDabnorm} (\lambda_{111},\lambda_{112},\lambda_{12},\lambda_{21},\lambda_{22})=\tfrac{1}{30}\cdot(1,8,1,18,2)\,. \end{eqnarray} Given the closed super-2-cocycle $\,\underset{\tx{\ciut{(2)}}}{\Delta}{}_{\a\beta}\,$ on $\,\mathcal{M}^{(3)}_2$,\ we may -- following the same logic as usual -- construct a trivial vector bundle \begin{eqnarray}\nn \pi_2^{(4)}\equiv{\rm pr}_1\ &:&\ \mathcal{M}_2^{(4)}:=\mathcal{M}_2^{(3)}\x{\mathbb{R}}^{\x \delta_{1,d-1}}\longrightarrow\mathcal{M}_2^{(3)}\,,\qquad\delta_{1,d-1}=\tfrac{D_{1,d-1}(D_{1,d-1}+1)}{2}\cr\cr &:&\ \bigl(\theta^\a,x^I,\zeta_{JK},\psi_{L\,\beta},\upsilon_{\gamma\delta}=\upsilon_{\delta\gamma}\bigr)\longmapsto\bigl(\theta^\a,x^I,\zeta_{JK},\psi_{L\,\beta}\bigr) \end{eqnarray} with the purely Gra\ss mann-even fibre $\,{\mathbb{R}}^{\x\delta_{1,d-1}}\,$ and a Lie-supergroup structure fixed by the requirement that the super-1-forms\footnote{The normalisation of the super-1-forms involved is arbitrary. 
We fix it by demanding that the result of the ensuing trivialisation of the GS super-4-cocycle reproduce the one obtained in \Rxcite{Eq.\,(73)}{Chryssomalakos:2000xd}.} \begin{eqnarray}\nn \sigma^{(4)}_{\a\beta}(\theta,x,\zeta,\psi,\upsilon)={\mathsf d}\upsilon_{\a\beta}-\tfrac{15}{2}\,{\mathsf d}^{-1}\underset{\tx{\ciut{(2)}}}{\Delta}{}_{\a\beta}(\theta,x,\zeta,\psi)\,, \end{eqnarray} defined in terms of some specific (non-LI) primitives $\,{\mathsf d}^{-1}\underset{\tx{\ciut{(2)}}}{\Delta}{}_{\a\beta}\,$ of the respective super-2-forms $\,\underset{\tx{\ciut{(2)}}}{\Delta}{}_{\a\beta}$,\ be LI with respect to this extension. We readily establish \begin{Prop}\label{prop:homformsgroup} The super-2-cocycles $\,\underset{\tx{\ciut{(2)}}}{\Delta}{}_{\a\beta}=\Delta_{\beta\a},\ \a,\beta\in\ovl{1,D_{1,d-1}}\,$ of \Reqref{eq:scocDab} on $\,\mathcal{M}_2^{(3)}\,$ corresponding to the choice of coefficients given in \Reqref{eq:scocDabnorm} admit primitives \begin{eqnarray}\nn -30\,{\mathsf d}^{-1}\underset{\tx{\ciut{(2)}}}{\Delta}{}_{\a\beta}(\theta,x,\zeta,\psi)&=&-\bigl(\ovl\Gamma{}^I_{\a\beta}\,\sigma^{(3)}_{I\gamma}+4\ovl\Gamma{}^I_{\a\gamma}\,\sigma^{(3)}_{I\beta}+4\ovl\Gamma{}^I_{\beta\gamma}\,\sigma^{(3)}_{I\a}\bigr)(\theta,x,\zeta,\psi)\,\theta^\gamma+2\ovl\Gamma{}^I_{\a\beta}\,e_{IJ}^{(2)}(\theta,x,\zeta)\,x^J\cr\cr &&-2\bigl(2\ovl\Gamma{}^I_{\a\delta}\,\ovl\Gamma{}^J_{\beta\gamma}\,e_{IJ}^{(2)}(\theta,x,\zeta)+\bigl(\ovl\Gamma{}^I_{\a\delta}\,\ovl\Gamma_{IJ\,\beta\gamma}+\ovl\Gamma{}^I_{\beta\delta}\,\ovl\Gamma_{IJ\,\a\gamma}\bigr)\,e^J(\theta,x)\bigr)\,\theta^\delta\,\theta^\gamma\cr\cr &&-2\ovl\Gamma_{IJ\,\a\beta}\,x^I\,e^J(\theta,x)-\bigl(\ovl\Gamma{}^I_{\a\beta}\,\ovl\Gamma_{IJ\,\gamma\delta}+\ovl\Gamma{}^I_{\gamma\delta}\,\ovl\Gamma_{IJ\,\a\beta}\bigr)\,x^J\,\theta^\gamma\,\sigma^\delta(\theta)\cr\cr &&-\Delta_{\a\beta;\gamma\delta\varepsilon\eta}\,\theta^\gamma\,\theta^\delta\,\theta^\epsilon\,\sigma^\eta(\theta)\,, \end{eqnarray} written in terms of the 
expressions \begin{eqnarray}\nn \Delta_{\a\beta;\gamma\delta\varepsilon\eta}=\ovl\Gamma{}^I_{\a\delta}\,\ovl\Gamma{}^J_{\beta\gamma}\,\ovl\Gamma_{IJ\,\epsilon\eta}+\tfrac{1}{2}\,\bigl(\ovl\Gamma{}^I_{\a\delta}\,\ovl\Gamma_{IJ\,\beta\gamma}+\ovl\Gamma{}^I_{\beta\delta}\,\ovl\Gamma_{IJ\,\a\gamma}\bigr)\,\ovl\Gamma{}^J_{\epsilon\eta}\,. \end{eqnarray} These determine, in the manner detailed above, the structure of a Lie supergroup on the vector bundle $\,\mathcal{M}_2^{(4)}\,$ with the binary operation \begin{eqnarray}\nn {\rm m}_2^{(4)}\ &:&\ \mathcal{M}_2^{(4)}\x\mathcal{M}_2^{(4)}\longrightarrow\mathcal{M}_2^{(4)}\ :\ \bigl(\bigl(\theta^\a_1,x^I_1,\zeta_{1\,JK},\psi_{1\,L\beta},\upsilon_{1\,\gamma\delta}\bigr),\bigl(\theta^\epsilon_2,x^M_2,\zeta_{2\,NO},\psi_{2\,P\eta},\upsilon_{2\,\kappa\lambda}\bigr)\bigr)\cr\cr &&\longmapsto\bigl(\theta^\a_1+\theta^\a_2,x^I_1+x^I_2-\tfrac{1}{2}\,\ovl\theta_1\,\Gamma^I\,\theta_2,\zeta_{1\,JK}+\zeta_{2\,JK}-\tfrac{1}{2}\,\ovl\theta_1\,\Gamma_{JK}\,\theta_2,\psi_{1\,L\beta}+\psi_{2\,L\beta}\cr\cr &&+\bigl(\ovl\Gamma{}^M_{\beta\gamma}\,\zeta_{2\,ML}+x_2^M\,\ovl\Gamma_{ML\,\beta\gamma}\bigr)\,\theta_1^\gamma-\tfrac{1}{6}\,\bigl(\ovl\Gamma{}^M_{\beta\gamma}\,\ovl\Gamma_{ML\,\delta\epsilon}+\ovl\Gamma{}^M_{\delta\epsilon}\,\ovl\Gamma_{ML\,\beta\gamma}\bigr)\bigl(2\theta_1^\gamma+\theta_2^\gamma\bigr)\,\theta_1^\delta\,\theta_2^\epsilon,\cr\cr &&\upsilon_{1\,\gamma\delta}+\upsilon_{2\,\gamma\delta}+\bigl(\tfrac{1}{4}\,\ovl\Gamma{}^I_{\gamma\delta}\,\psi_{2\,I\epsilon}+\ovl\Gamma{}^I_{\gamma\epsilon}\,\psi_{2\,I\delta}+\ovl\Gamma{}^I_{\delta\epsilon}\,\psi_{2\,I\gamma}\bigr)\,\theta_1^\epsilon+\tfrac{1}{4}\,x_1^I\,\bigl(\ovl\Gamma_{IJ\,\gamma\delta}\,\bigl(2x_2^J-\ovl\theta_1\,\Gamma^J\,\theta_2\bigr)\cr\cr 
&&+\ovl\Gamma{}^J_{\gamma\delta}\,\bigl(2\zeta_{2\,IJ}-\ovl\theta_1\,\Gamma_{IJ}\,\theta_2\bigr)\bigr)+\tfrac{1}{4}\,x_2^J\,\bigl(\ovl\Gamma{}^I_{\gamma\delta}\,\ovl\Gamma_{IJ\,\epsilon\eta}+\ovl\Gamma{}^I_{\epsilon\eta}\,\ovl\Gamma_{IJ\,\gamma\delta}\bigr)\,\theta_1^\epsilon\,\theta_2^\eta+\bigl(\zeta_{2\,IJ}\,\ovl\Gamma{}^I_{\gamma\epsilon}\,\ovl\Gamma{}^J_{\delta\eta}\cr\cr &&+\tfrac{1}{2}\,x_2^J\,\bigl(\ovl\Gamma{}^I_{\gamma\epsilon}\,\ovl\Gamma_{IJ\,\delta\eta}+\ovl\Gamma{}^I_{\delta\epsilon}\,\ovl\Gamma_{IJ\,\gamma\eta}\bigr)\bigr)\,\theta_1^\epsilon\,\theta_1^\eta-\tfrac{1}{24}\,\theta_2^\epsilon\,\bigl(\theta_2^\eta\,\bigl(2\Delta_{\gamma\delta;\epsilon\eta\kappa\lambda}\,\theta_2^\kappa+3\bigl(\Delta_{\gamma\delta;\kappa\epsilon\eta\lambda}-\Delta_{\gamma\delta;\epsilon\kappa\eta\lambda}\bigr)\,\theta_1^\kappa\bigr)\cr\cr &&+6\Delta_{\gamma\delta;\kappa\eta\epsilon\lambda}\bigr)\,\theta_1^\lambda\bigr)\,, \end{eqnarray} with the inverse \begin{eqnarray}\nn {\rm Inv}_2^{(4)}\ &:&\ \mathcal{M}_2^{(4)}\longrightarrow\mathcal{M}_2^{(4)}\ :\ \bigl(\theta^\a,x^I,\zeta_{JK},\psi_{L\,\beta},\upsilon_{\gamma\delta}\bigr)\cr\cr &&\longmapsto\bigl(-\theta^\a,-x^I,-\zeta_{JK},-\psi_{L\beta}+\bigl(\ovl\Gamma{}^M_{\beta\epsilon}\,\zeta_{ML}+x^M\,\ovl\Gamma_{ML\,\beta\epsilon}\bigr)\,\theta^\epsilon,-\upsilon_{\gamma\delta}-4\ovl\Gamma{}^I_{\gamma\eta}\,\ovl\Gamma{}^J_{\delta\kappa}\,\zeta_{IJ}\,\theta^\eta\,\theta^\kappa\cr\cr &&+2x^I\,\bigl(\ovl\Gamma{}^J_{\gamma\delta}\,\zeta_{IJ}+\bigl(\ovl\Gamma{}^J_{\gamma\eta}\,\ovl\Gamma_{IJ\,\delta\kappa}-\ovl\Gamma{}^J_{\delta\kappa}\,\ovl\Gamma_{IJ\,\gamma\eta}\bigr)\,\theta^\eta\,\theta^\kappa\bigr)+\bigl(\ovl\Gamma{}^I_{\gamma\delta}\,\psi_{I\eta}+4\ovl\Gamma{}^I_{\gamma\eta}\,\psi_{I\delta}+4\ovl\Gamma{}^I_{\delta\eta}\,\psi_{I\gamma}\bigr)\,\theta^\eta\bigr) \end{eqnarray} and the neutral element \begin{eqnarray}\nn e_2^{(4)}=(0,0,0,0,0)\,. 
\end{eqnarray} It is a (super)central extension \begin{eqnarray}\nn {\boldsymbol{1}}\longrightarrow{\mathbb{R}}^{\x\delta_{1,d-1}}\longrightarrow\mathcal{M}_2^{(3)}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}{\mathbb{R}}^{\x\delta_{1,d-1}}\xrightarrow{\ \pi_2^{(4)}\ }\mathcal{M}_2^{(3)}\longrightarrow{\boldsymbol{1}} \end{eqnarray} of the Lie supergroup $\,\mathcal{M}_2^{(3)}\,$ of Prop.\,\ref{prop:M23sgroup} determined by the family of CE super-2-cocycles corresponding to the CaE super-2-cocycles $\,\{\underset{\tx{\ciut{(2)}}}{\Delta}{}_{\a\beta}\}_{\a,\beta\in\ovl{1,D_{1,d-1}}}\,$ of \Reqref{eq:scocDab}. \end{Prop} \noindent\begin{proof} Proofs of both statements made in the proposition are rather tedious but otherwise fairly straightforward. The former one requires some ingenuity; therefore, we detail it in App.\,\ref{app:homformsgroup}. \end{proof} \noindent The above analysis gives us an explicit formula for the new LI super-1-form \begin{eqnarray}\nn \sigma^{(4)}_{\a\beta}(\theta,x,\zeta,\psi,\upsilon)&=&{\mathsf d}\upsilon_{\a\beta}-\bigl(\tfrac{1}{4}\,\ovl\Gamma{}^I_{\a\beta}\,\sigma^{(3)}_{I\gamma}+\ovl\Gamma{}^I_{\a\gamma}\,\sigma^{(3)}_{I\beta}+\ovl\Gamma{}^I_{\beta\gamma}\,\sigma^{(3)}_{I\a}\bigr)(\theta,x,\zeta,\psi)\,\theta^\gamma+\tfrac{1}{2}\,\ovl\Gamma{}^I_{\a\beta}\,e_{IJ}^{(2)}(\theta,x,\zeta)\,x^J\cr\cr &&-\bigl(\ovl\Gamma{}^I_{\a\delta}\,\ovl\Gamma{}^J_{\beta\gamma}\,e_{IJ}^{(2)}(\theta,x,\zeta)+\tfrac{1}{2}\,\bigl(\ovl\Gamma{}^I_{\a\delta}\,\ovl\Gamma_{IJ\,\beta\gamma}+\ovl\Gamma{}^I_{\beta\delta}\,\ovl\Gamma_{IJ\,\a\gamma}\bigr)\,e^J(\theta,x)\bigr)\,\theta^\delta\,\theta^\gamma\cr\cr &&-\tfrac{1}{2}\,\ovl\Gamma_{IJ\,\a\beta}\,x^I\,e^J(\theta,x)-\tfrac{1}{4}\,\bigl(\ovl\Gamma{}^I_{\a\beta}\,\ovl\Gamma_{IJ\,\gamma\delta}+\ovl\Gamma{}^I_{\gamma\delta}\,\ovl\Gamma_{IJ\,\a\beta}\bigr)\,x^J\,\theta^\gamma\,\sigma^\delta(\theta)\cr\cr
&&-\tfrac{1}{4}\,\Delta_{\a\beta;\gamma\delta\varepsilon\eta}\,\theta^\gamma\,\theta^\delta\,\theta^\epsilon\,\sigma^\eta(\theta)\,. \end{eqnarray} Altogether, we extract from our hitherto considerations a primitive for (the pullback of) the GS super-4-cocycle: \begin{eqnarray}\nn \pi^{(2,3,4)\,*}_{02}\underset{\tx{\ciut{(4)}}}{\chi}={\mathsf d}\underset{\tx{\ciut{(3)}}}{\beta}^{(4)} \end{eqnarray} given by \begin{eqnarray}\nn \underset{\tx{\ciut{(3)}}}{\beta}^{(4)}&=&\tfrac{2}{3}\,\pi^{(3,4)\,*}_2\bigl(e^{(2)}_{IJ}\wedge\pi^{(2)\,*}_2\bigl(e^I\wedge e^J\bigr)\bigr)-\tfrac{3}{5}\,\pi^{(4)\,*}_2\bigl(\sigma^{(3)}_{I\a}\wedge\pi^{(2,3)\,*}_2 e^I\wedge\pi^{(2,3)\,*}_{02}\sigma^\a\bigr)\cr\cr &&-\tfrac{2}{15}\,\sigma_{\a\beta}^{(4)}\wedge\pi^{(2,3,4)\,*}_{02}(\sigma^\a\wedge\sigma^\beta)\,, \end{eqnarray} where we used the self-explanatory shorthand notation \begin{eqnarray}\nn \pi^{(3,4)}_2=\pi^{(3)}_2\circ\pi^{(4)}_2\,,\qquad\qquad \pi^{(2,3,4)}_{02}=\pi_0\circ\pi^{(2,3,4)}_2\,,\qquad\qquad\pi^{(2,3,4)}_2=\pi^{(2)}_2\circ\pi^{(3)}_2\circ\pi^{(4)}_2\,. \end{eqnarray} The primitive is {\it left-invariant} with respect to the lift $\,\ell^{(4)}_\cdot\,$ of the supersymmetry $\,\ell^{(1)}_\cdot\,$ induced from $\,{\rm m}_2^{(4)}\,$ as {\it per} \begin{eqnarray}\nn \ell^{(4)}_\cdot={\rm m}_2^{(4)}\,, \end{eqnarray} in which the first component of the domain is to be regarded as the extended supersymmetry group. 
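As a simple consistency check of the bookkeeping behind the coefficients in $\,\underset{\tx{\ciut{(3)}}}{\beta}^{(4)}$,\ note that the linear relations $\,\lambda_{11}=\lambda_{111}+\lambda_{112},\ \lambda_{21}=2\lambda_{11},\ \lambda_2=\lambda_{21}+\lambda_{22}\,$ and $\,\lambda_1+\lambda_2=1$,\ evaluated on the solution of \Reqref{eq:scocDabnorm}, yield \begin{eqnarray}\nn \lambda_{11}=\tfrac{1+8}{30}=\tfrac{3}{10}\,,\qquad\qquad\lambda_2=\tfrac{18+2}{30}=\tfrac{2}{3}\,,\qquad\qquad\lambda_1=1-\lambda_2=\tfrac{1}{3}\,, \end{eqnarray} so that $\,2\lambda_1=\tfrac{2}{3}\,$ and $\,2\lambda_{11}=\tfrac{3}{5}\,$ reproduce the coefficients of the first two terms of the primitive, while the normalisation $\,{\mathsf d}\sigma^{(4)}_{\a\beta}=-\tfrac{15}{2}\,\underset{\tx{\ciut{(2)}}}{\Delta}{}_{\a\beta}\,$ accounts, in virtue of the closedness of the $\,\sigma^\a$,\ for the coefficient $\,-\tfrac{2}{15}\,$ of the last term.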
Guided by the intuition developed previously in our analysis of the GS super-2-cocycle, we take the complete extension \begin{eqnarray}\nn \pi_{{\mathsf Y}_2\mathcal{M}^{(1)}}:=\pi^{(2)}_2\circ\pi^{(3)}_2\circ\pi^{(4)}_2\ :\ {\mathsf Y}_2\mathcal{M}^{(1)}:=\mathcal{M}^{(4)}_2\longrightarrow\mathcal{M}^{(1)}\ :\ \bigl(\theta^\a,x^I,\zeta_{JK},\psi_{L\beta},\upsilon_{\gamma\delta}\bigr)\longmapsto\bigl(\theta^\a,x^I\bigr) \end{eqnarray} to be the surjective submersion of a super-geometrisation of the GS super-4-cocycle $\,\underset{\tx{\ciut{(4)}}}{\chi}\,$ that we now work out in detail. As a first step, we compare pullbacks of $\,\underset{\tx{\ciut{(3)}}}{\beta}^{(4)}\,$ to the $\mathcal{M}^{(1)}$-fibred square \begin{eqnarray}\nn \alxydim{@C=.75cm@R=1cm}{& {\mathsf Y}_2^{[2]}\mathcal{M}^{(1)} \ar[rd]^{{\rm pr}_2} \ar[ld]_{{\rm pr}_1} & \\ {\mathsf Y}_2\mathcal{M}^{(1)} \ar[rd]_{\pi_{{\mathsf Y}_2\mathcal{M}^{(1)}}} & & {\mathsf Y}_2\mathcal{M}^{(1)} \ar[ld]^{\pi_{{\mathsf Y}_2\mathcal{M}^{(1)}}} \\ & \mathcal{M}^{(1)} & } \end{eqnarray} along the two canonical projections to $\,{\mathsf Y}_2\mathcal{M}^{(1)}$,\ whereby we obtain -- for $\,m^A_4:=(\theta,x,\zeta^A,\psi^A,\upsilon^A)\,,\ A\in\{1,2\}\,$ and $\,\zeta^{21}_{IJ}:=\zeta^2_{IJ}-\zeta^1_{IJ}\,,\ \psi^{21}_{K\a}:=\psi^2_{K\a}-\psi^1_{K\a}\,$ and $\,\upsilon^{21}_{\a\beta}:=\upsilon^2_{\a\beta}-\upsilon^1_{\a\beta}\,$ -- the expression \begin{eqnarray}\nn ({\rm pr}_2^*-{\rm pr}_1^*)\underset{\tx{\ciut{(3)}}}{\beta^{(4)}}\bigl(m_4^1,m_4^2\bigr)&=&\tfrac{2}{3}\,{\mathsf d}\zeta^{21}_{IJ}\wedge(e^I\wedge e^J)(\theta,x)-\tfrac{3}{5}\,\bigl({\mathsf d}\psi^{21}_{I\a}+\ovl\Gamma{}^J_{\a\beta}\,\theta^\beta\,{\mathsf d}\zeta^{21}_{IJ}\bigr)\wedge e^I(\theta,x)\wedge \sigma^\a(\theta)\cr\cr &&-\tfrac{2}{15}\,\bigl[{\mathsf d}\upsilon_{\a\beta}^{21}-\bigl(\tfrac{1}{4}\,\ovl\Gamma{}^I_{\a\beta}\,\bigl({\mathsf d}\psi^{21}_{I\gamma}+\ovl\Gamma{}^J_{\gamma\delta}\,\theta^\delta\,{\mathsf 
d}\zeta^{21}_{IJ}\bigr)+\ovl\Gamma{}^I_{\a\gamma}\,\bigl({\mathsf d}\psi^{21}_{I\beta}+\ovl\Gamma{}^J_{\beta\delta}\,\theta^\delta\,{\mathsf d}\zeta^{21}_{IJ}\bigr)\cr\cr &&+\ovl\Gamma{}^I_{\beta\gamma}\,\bigl({\mathsf d}\psi^{21}_{I\a}+\ovl\Gamma{}^J_{\a\delta}\,\theta^\delta\,{\mathsf d}\zeta^{21}_{IJ}\bigr)\bigr)\,\theta^\gamma+\tfrac{1}{2}\,\ovl\Gamma{}^I_{\a\beta}\,x^J\,{\mathsf d}\zeta^{21}_{IJ}\cr\cr &&-\ovl\Gamma{}^I_{\a\delta}\,\ovl\Gamma{}^J_{\beta\gamma}\,\theta^\delta\,\theta^\gamma\,{\mathsf d}\zeta^{21}_{IJ}\bigr]\wedge(\sigma^\a\wedge\sigma^\beta)(\theta)\,, \end{eqnarray} in which the super-1-forms \begin{eqnarray}\nn \mathscr{X}_{IJ}\bigl(m_4^1,m_4^2\bigr):={\mathsf d}\zeta^{21}_{IJ}\,,\qquad\qquad\mathscr{Y}_{I\a}\bigl(m_4^1,m_4^2\bigr):={\mathsf d}\psi^{21}_{I\a}+\ovl\Gamma{}^J_{\a\beta}\,\theta^\beta\,\mathscr{X}_{IJ} \end{eqnarray} and \begin{eqnarray}\nn \mathscr{Z}_{\a\beta}\bigl(m_4^1,m_4^2\bigr)&:=&{\mathsf d}\upsilon^{21}_{\a\beta}-\bigl(\tfrac{1}{4}\,\ovl\Gamma{}^I_{\a\beta}\,\mathscr{Y}_{I\gamma}+\ovl\Gamma{}^I_{\a\gamma}\,\mathscr{Y}_{I\beta}+\ovl\Gamma{}^I_{\beta\gamma}\,\mathscr{Y}_{I\a}\bigr)\bigl(m_4^1,m_4^2\bigr)\,\theta^\gamma+\tfrac{1}{2}\,\ovl\Gamma{}^I_{\a\beta}\,x^J\,\mathscr{X}_{IJ}\bigl(m_4^1,m_4^2\bigr)\cr\cr &&-\ovl\Gamma{}^I_{\a\gamma}\,\ovl\Gamma{}^J_{\beta\delta}\,\theta^\gamma\,\theta^\delta\,\mathscr{X}_{IJ}\bigl(m_4^1,m_4^2\bigr) \end{eqnarray} are -- by construction (as differences of LI super-1-forms) -- left-invariant under the diagonal lift of the action $\,\ell^{(4)}_\cdot\,$ of the Lie supergroup $\,\mathcal{M}^{(4)}_2\,$ to the fibred square $\,{\mathsf Y}_2^{[2]}\mathcal{M}^{(1)}$.\ Following the standard procedure, we seek to trivialise the 3-cocycle in an LI manner by pulling it back to the total space of a suitable surjective submersion over (or supercentral extension of) $\,{\mathsf Y}_2^{[2]}\mathcal{M}^{(1)}$.\ To this end, we first consider the collection \begin{eqnarray}\label{eq:CaEscocyc5}
\widehat{\underset{\tx{\ciut{(2)}}}{h}}{}_{\a\beta}:={\rm pr}_1^*\pi^{(2,3,4)\,*}_{02}(\sigma^\a\wedge\sigma^\beta) \end{eqnarray} of manifestly LI 2-cocycles and associate with them a trivial vector bundle \begin{eqnarray}\nn \widehat\pi_2{}^{(5)}\equiv{\rm pr}_1\ &:&\ \widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}{}^{(5)}:={\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}\x{\mathbb{R}}^{\x\delta_{1,d-1}}\longrightarrow{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}\cr\cr &:&\ \widehat m_5:=\bigl(m^1_4,m^2_4,X^{\a\beta}=X^{\beta\a}\bigr)\longmapsto\bigl(m^1_4,m^2_4\bigr) \end{eqnarray} with the purely Gra\ss mann-even fibre $\,{\mathbb{R}}^{\x\delta_{1,d-1}}\,$ and a Lie-supergroup structure fixed -- as formerly -- by the requirement that the super-1-forms \begin{eqnarray}\nn \widehat e^{(5)\,\a\beta}(\widehat m_5)={\mathsf d} X^{\a\beta}+\tfrac{1}{2}\,\bigl(\theta^\a\,{\mathsf d}\theta^\beta+\theta^\beta\,{\mathsf d}\theta^\a\bigr)\,, \end{eqnarray} be LI with respect to this extension. We have the obvious \begin{Prop}\label{prop:M25sgroup} The above-described vector bundle $\,\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}{}^{(5)}\,$ equipped with the binary operation \begin{eqnarray}\nn \widehat{\rm m}{}^{(5)}_2\ &:&\ \widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}^{(5)}\x\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}^{(5)}\longrightarrow\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}^{(5)}\ :\ \bigl(\bigl(m_{4\,1}^1,m_{4\,1}^2,X_1^{\a\beta}\bigr),\bigl(m_{4\,2}^1,m_{4\,2}^2,X_2^{\gamma\delta}\bigr)\bigr)\cr\cr &&\longmapsto\bigl({\rm m}_2^{(3)}\bigl(m_{4\,1}^1,m_{4\,2}^1\bigr),{\rm m}_2^{(3)}\bigl(m_{4\,1}^2,m_{4\,2}^2\bigr),X_1^{\a\beta}+X_2^{\a\beta}-\tfrac{1}{2}\,\bigl(\theta_1^\a\,\theta_2^\beta+\theta_1^\beta\,\theta_2^\a\bigr)\bigr) \end{eqnarray} with the inverse \begin{eqnarray}\nn \widehat{\rm Inv}{}^{(5)}_2\ :\ \widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}^{(5)}\longrightarrow\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}^{(5)}\ :\ 
\bigl(m_4^1,m_4^2,X^{\a\beta}\bigr)\longmapsto\bigl(m_4^{1\,-1},m_4^{2\,-1},-X^{\a\beta}\bigr) \end{eqnarray} and the neutral element \begin{eqnarray}\nn \widehat e{}^{(5)}_2=(0,0,0) \end{eqnarray} is a Lie supergroup. It is a (super)central extension \begin{eqnarray}\nn {\boldsymbol{1}}\longrightarrow{\mathbb{R}}^{\x\delta_{1,d-1}}\longrightarrow{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}{\mathbb{R}}^{\x\delta_{1,d-1}}\xrightarrow{\ \widehat\pi_2{}^{(5)}\ }{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}\longrightarrow{\boldsymbol{1}} \end{eqnarray} of the (product) Lie supergroup $\,{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}$,\ the latter being formed from the Lie supergroup $\,{\mathsf Y}_2\mathcal{M}^{(1)}\,$ of Prop.\,\ref{prop:homformsgroup}. The supercentral extension is determined by the family of CE super-2-cocycles corresponding to the CaE super-2-cocycles $\,\{\widehat{\underset{\tx{\ciut{(2)}}}{h}}{}_{\a\beta}\}_{\a,\beta\in\ovl{1,D_{1,d-1}}}\,$ of \Reqref{eq:CaEscocyc5}. \end{Prop} \noindent\begin{proof} Trivial. 
\end{proof} \noindent The LI 1-forms $\,\widehat e^{(5)\,\a\beta}=\widehat e^{(5)\,\beta\a}\,$ on the new Lie supergroup, satisfying the identity \begin{eqnarray}\nn {\mathsf d}\widehat e^{(5)\,\a\beta}=\widehat\pi^{(2,3,4,5)\,*}_{02}(\sigma^\a\wedge\sigma^\beta)\,, \end{eqnarray} written in the shorthand notation \begin{eqnarray}\nn \widehat\pi^{(2,3,4,5)}_{02}=\pi^{(2,3,4)}_{02}\circ{\rm pr}_1\circ\widehat\pi_2{}^{(5)} \end{eqnarray} (to be adapted to subsequent extensions in an obvious manner), enable us to partially trivialise the super-3-form $\,({\rm pr}_2^*-{\rm pr}_1^*)\underset{\tx{\ciut{(3)}}}{\beta^{(4)}}\,$ upon pullback to $\,\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}{}^{(5)}\ni\widehat m_5\,$ as \begin{eqnarray}\nn &&\widehat\pi_2^{(5)\,*}({\rm pr}_2^*-{\rm pr}_1^*)\underset{\tx{\ciut{(3)}}}{\beta^{(4)}}(\widehat m_5)\cr\cr &=&{\mathsf d}\bigl(\tfrac{2}{15}\,\mathscr{Z}_{\a\beta}\bigl(m_4^1,m_4^2\bigr)\wedge\widehat e^{(5)\,\a\beta}(\widehat m_5)\bigr)-\tfrac{2}{15}\,\bigl[\bigl(\tfrac{1}{4}\,\ovl\Gamma{}^I_{\a\beta}\,\mathscr{Y}_{I\gamma}+2\ovl\Gamma{}^I_{\beta\gamma}\,\mathscr{Y}_{I\a}\bigr)\bigl(m_4^1,m_4^2\bigr)\wedge \sigma^\gamma(\theta)\cr\cr &&-\tfrac{1}{2}\,\ovl\Gamma{}^I_{\a\beta}\,\mathscr{X}_{IJ}\bigl(m_4^1,m_4^2\bigr)\wedge e^J(\theta,x)\bigr]\wedge\widehat e^{(5)\,\a\beta}(\widehat m_5)+\tfrac{2}{3}\,\mathscr{X}_{IJ}\bigl(m_4^1,m_4^2\bigr)\wedge(e^I\wedge e^J)(\theta,x)\cr\cr &&-\tfrac{3}{5}\,\mathscr{Y}_{I\a}\bigl(m_4^1,m_4^2\bigr)\wedge e^I(\theta,x)\wedge \sigma^\a(\theta)\cr\cr &=&{\mathsf d}\bigl(\tfrac{2}{15}\,\mathscr{Z}_{\a\beta}\bigl(m_4^1,m_4^2\bigr)\wedge\widehat e^{(5)\,\a\beta}(\widehat m_5)\bigr)+\tfrac{1}{15}\,\mathscr{X}_{IJ}\bigl(m_4^1,m_4^2\bigr)\wedge\bigl(10e^I(\theta,x)-\ovl\Gamma{}^I_{\a\beta}\,\widehat e^{(5)\,\a\beta}(\widehat m_5)\bigr)\wedge e^J(\theta,x)\cr\cr &&-\tfrac{1}{30}\,\mathscr{Y}_{I\a}\bigl(m_4^1,m_4^2\bigr)\wedge\bigl(18\,e^I(\theta,x)\wedge 
\sigma^\a(\theta)-\ovl\Gamma{}^I_{\gamma\delta}\,\bigl(\widehat e^{(5)\,\gamma\delta}(\widehat m_5)\wedge \sigma^\a(\theta)+8\,\widehat e^{(5)\,\a\gamma}(\widehat m_5)\wedge \sigma^\delta(\theta)\bigr)\bigr)\,. \end{eqnarray} In the next step, we readily verify that the manifestly LI super-2-form \begin{eqnarray}\nn \widehat{\underset{\tx{\ciut{(2)}}}{h}}{}^{I\a}:=18\,\widehat\pi^{(2,3,4,5)\,*}_2\bigl(e^I\wedge\pi^*_0\sigma^\a\bigr)-\ovl\Gamma{}^I_{\beta\gamma}\,\widehat\pi_2{}^{(5)\,*}\bigl(\widehat e^{(5)\,\beta\gamma}\wedge{\rm pr}_1^*\pi^{(2,3,4)\,*}_{02}\sigma^\a+8\,\widehat e^{(5)\,\a\beta}\wedge{\rm pr}_1^*\pi^{(2,3,4)\,*}_{02}\sigma^\gamma\bigr)\,, \end{eqnarray} with \begin{eqnarray}\nn \widehat\pi^{(2,3,4,5)}_2=\pi^{(2,3,4)}_2\circ{\rm pr}_1\circ\widehat\pi_2{}^{(5)}\,, \end{eqnarray} is closed, and hence gives rise to yet another (super)central extension. This time, we take the trivial vector bundle \begin{eqnarray}\nn \widehat\pi_2^{(6)}\equiv{\rm pr}_1\ &:&\ \widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}^{(6)}:=\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}{}^{(5)}\x{\mathbb{R}}^{0\,\vert\,dD_{1,d-1}}\longrightarrow\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}{}^{(5)}\cr\cr &:&\ \widehat m_6:=\bigl(m^1_4,m^2_4,X^{\a\beta},Y^{I\gamma}\bigr)\longmapsto\bigl(m^1_4,m^2_4,X^{\a\beta}\bigr) \end{eqnarray} with the purely Gra\ss mann-odd fibre $\,{\mathbb{R}}^{0\,\vert\,dD_{1,d-1}}\,$ and a Lie-supergroup structure that extends the previously established structure on its base $\,\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}{}^{(5)}\,$ so that the super-1-forms \begin{eqnarray}\nn \widehat\sigma^{(6)\,I\a}\bigl(m^1_4,m^2_4,X,Y)={\mathsf d} Y^{I\,\a}-18\,\theta^\a\,e^I(\theta,x)+\ovl\Gamma{}^I_{\beta\gamma}\,\bigl(\theta^\a\,\widehat e^{(5)\,\beta\gamma}(\widehat m_5)+8\,\theta^\gamma\,\widehat e^{(5)\,\a\beta}(\widehat m_5)+8\,\theta^\a\,\theta^\beta\,\sigma^\gamma(\theta)\bigr)\,, \end{eqnarray} satisfying the identities \begin{eqnarray}\nn {\mathsf 
d}\widehat\sigma^{(6)\,I\a}&=&\widehat\pi_2^{(6)\,*}\widehat{\underset{\tx{\ciut{(2)}}}{h}}{}^{I\a}\,, \end{eqnarray} are LI with respect to this extension. We have \begin{Prop}\label{prop:M26sgroup} The above-described vector bundle $\,\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}{}^{(6)}\,$ equipped with the binary operation \begin{eqnarray}\nn \widehat{\rm m}{}^{(6)}_2\ &:&\ \widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}^{(6)}\x\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}^{(6)}\longrightarrow\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}^{(6)}\ :\ \bigl(\bigl(m_{4\,1}^1,m_{4\,1}^2,X_1^{\a\beta},Y_1^{I\,\gamma}\bigr),\bigl(m_{4\,2}^1,m_{4\,2}^2,X_2^{\delta\epsilon},Y_2^{J\,\eta}\bigr)\bigr)\cr\cr &&\longmapsto\bigl({\rm m}_2^{(3)}\bigl(m_{4\,1}^1,m_{4\,2}^1\bigr),{\rm m}_2^{(3)}\bigl(m_{4\,1}^2,m_{4\,2}^2\bigr),X_1^{\a\beta}+X_2^{\a\beta}-\theta_1^\a\,\theta_2^\beta-\theta_1^\beta\,\theta_2^\a,Y^{I\gamma}_1+Y^{I\gamma}_2+18\,\theta_1^\gamma\,x_2^I\cr\cr &&-4(2\theta_1^\gamma+\theta_2^\gamma)\,\bigl(\ovl\theta_1\,\Gamma^I\,\theta_2\bigr)-\ovl\Gamma{}^I_{\delta\epsilon}\,\bigl(\theta_1^\gamma\,X_2^{\delta\epsilon}+8\,\theta_1^\delta\,X_2^{\gamma\epsilon}\bigr)\bigr) \end{eqnarray} with the inverse \begin{eqnarray}\nn \widehat{\rm Inv}{}^{(6)}_2\ &:&\ \widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}^{(6)}\longrightarrow\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}^{(6)}\cr\cr &:&\ \bigl(m_4^1,m_4^2,X^{\a\beta},Y^{I\gamma}\bigr)\longmapsto\bigl(m_4^{1\,-1},m_4^{2\,-1},-X^{\a\beta},-Y^{I\gamma}+18\,x^I\,\theta^\gamma-\ovl\Gamma{}^I_{\delta\epsilon}\,\bigl(\theta^\gamma\,X^{\delta\epsilon}+8\,\theta^\delta\,X^{\gamma\epsilon}\bigr)\bigr) \end{eqnarray} and the neutral element \begin{eqnarray}\nn \widehat e{}^{(6)}_2=(0,0,0,0) \end{eqnarray} is a Lie supergroup. 
It is a (super)central extension \begin{eqnarray}\nn {\boldsymbol{1}}\longrightarrow{\mathbb{R}}^{0\,\vert\,dD_{1,d-1}}\longrightarrow\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}{}^{(5)}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}{\mathbb{R}}^{0\,\vert\,dD_{1,d-1}}\xrightarrow{\ \widehat\pi_2{}^{(6)}\ }\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}{}^{(5)}\longrightarrow{\boldsymbol{1}} \end{eqnarray} of the Lie supergroup $\,\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}{}^{(5)}\,$ of Prop.\,\ref{prop:M25sgroup}. The supercentral extension is determined by the family of CE super-2-cocycles corresponding to the CaE super-2-cocycles $\,\{\widehat{\underset{\tx{\ciut{(2)}}}{h}}{}^{I\a}\}_{(I,\a)\in\ovl{0,d-1}\x\ovl{1,D_{1,d-1}}}\,$ of \Reqref{eq:CaEscocyc5}. \end{Prop} \noindent\begin{proof} Through inspection. \end{proof} \noindent Thus, upon pullback to $\,\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}^{(6)}\ni\widehat m_6\,$ along \begin{eqnarray}\nn \widehat\pi_2^{(5,6)}=\widehat\pi_2^{(5)}\circ\widehat\pi_2^{(6)}\,, \end{eqnarray} we obtain \begin{eqnarray}\nn &&\widehat\pi_2^{(5,6)\,*}({\rm pr}_2^*-{\rm pr}_1^*)\underset{\tx{\ciut{(3)}}}{\beta^{(4)}}(\widehat m_6)\cr\cr &=&{\mathsf d}\bigl(\tfrac{2}{15}\,\mathscr{Z}_{\a\beta}\bigl(m_4^1,m_4^2\bigr)\wedge\widehat e^{(5)\,\a\beta}(\widehat m_5)+\tfrac{1}{30}\,\mathscr{Y}_{I\a}\bigl(m_4^1,m_4^2\bigr)\wedge\widehat\sigma^{(6)\,I\a}(\widehat m_6)\bigr)\cr\cr &&+\tfrac{1}{30}\,\mathscr{X}_{IJ}\bigl(m_4^1,m_4^2\bigr)\wedge\bigl(2\bigl(10\,e^I(\theta,x)-\ovl\Gamma{}^I_{\a\beta}\,\widehat e^{(5)\,\a\beta}(\widehat m_5)\bigr)\wedge e^J(\theta,x)+\ovl\Gamma{}^J_{\a\beta}\,\sigma^\a(\theta)\wedge\widehat\sigma^{(6)\,I\beta}(\widehat m_6)\bigr)\,, \end{eqnarray} and it is easy to check (or deduce from the construction) that the LI super-2-form \begin{eqnarray} \widehat{\underset{\tx{\ciut{(2)}}}{h}}{}^{IJ}&:=&20\,\widehat\pi^{(2,3,4,5,6)\,*}_2\bigl(e^I\wedge 
e^J\bigr)+\widehat\pi_2^{(6)\,*}\bigl(\widehat\pi^{(2,3,4,5)\,*}_2\bigl(\ovl\Gamma{}^I_{\a\beta}\,e^J-\ovl\Gamma{}^J_{\a\beta}\,e^I\bigr)\wedge\widehat e^{(5)\,\a\beta}\bigr)\cr\cr &&-\tfrac{1}{2}\,\widehat\pi^{(2,3,4,5,6)\,*}_{02}\sigma^\a\wedge\bigl(\ovl\Gamma{}^I_{\a\beta}\,\widehat\sigma^{(6)\,J\beta}-\ovl\Gamma{}^J_{\a\beta}\,\widehat\sigma^{(6)\,I\beta}\bigr)\,,\label{eq:CaEscocyc6} \end{eqnarray} written in terms of the maps \begin{eqnarray}\nn \widehat\pi^{(2,3,4,5,6)}_{02}=\widehat\pi^{(2,3,4,5)}_{02}\circ\widehat\pi_2{}^{(6)}\,,\qquad\qquad\widehat\pi^{(2,3,4,5,6)}_2=\widehat\pi^{(2,3,4,5)}_2\circ\widehat\pi_2{}^{(6)}\,, \end{eqnarray} is closed, so that we may finally trivialise the difference of pullbacks in the CaE cohomology by constructing one last (super)central extension. Thus, take the trivial vector bundle \begin{eqnarray}\nn \widehat\pi_2^{(7)}\equiv{\rm pr}_1\ &:&\ \widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}^{(7)}:=\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}^{(6)}\x{\mathbb{R}}^{\x\frac{d(d-1)}{2}}\longrightarrow\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}^{(6)}\cr\cr &:&\ \widehat m_7:=\bigl(m^1_4,m^2_4,X^{\a\beta},Y^{I\gamma},Z^{JK}=-Z^{KJ}\bigr)\longmapsto\bigl(m^1_4,m^2_4,X^{\a\beta},Y^{I\gamma}\bigr) \end{eqnarray} with the purely Gra\ss mann-even fibre $\,{\mathbb{R}}^{\x\frac{d(d-1)}{2}}\,$ and endow it with the structure of a Lie supergroup that lifts the previously established structure of the same type from its base in such a manner that the super-1-forms \begin{eqnarray}\nn \widehat e^{(7)\,IJ}(\widehat m_7)&=&{\mathsf d} Z^{IJ}+10\bigl(x^I\,{\mathsf d} x^J-x^J\,{\mathsf d} x^I\bigr)+4\ovl\Gamma{}^I_{\a\beta}\,\ovl\Gamma{}^J_{\gamma\delta}\,\theta^\a\,\theta^\gamma\,\widehat e^{(5)\,\beta\delta}(\widehat m_5)+\bigl(\ovl\Gamma{}^I_{\a\beta}\,x^J-\ovl\Gamma{}^J_{\a\beta}\,x^I\bigr)\,{\mathsf d} X^{\a\beta}\cr\cr &&+\tfrac{1}{2}\,\bigl(\ovl\Gamma{}^I_{\a\beta}\,\widehat e^{(6)\,J\a}-\ovl\Gamma{}^J_{\a\beta}\,\widehat 
e^{(6)\,I\a}\bigr)(\widehat m_6)\,\theta^\beta\,, \end{eqnarray} satisfying the identities \begin{eqnarray}\nn {\mathsf d}\widehat e^{(7)\,IJ}=\widehat\pi_2^{(7)\,*}\widehat{\underset{\tx{\ciut{(2)}}}{h}}{}^{IJ}\,, \end{eqnarray} are LI with respect to this new supergroup structure. Yet again, we obtain \begin{Prop}\label{prop:M27sgroup} The above-described vector bundle $\,\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}{}^{(7)}\,$ equipped with the binary operation \begin{eqnarray}\nn \widehat{\rm m}{}^{(7)}_2\ &:&\ \widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}^{(7)}\x\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}^{(7)}\longrightarrow\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}^{(7)}\cr\cr &:&\ \bigl(\bigl(m_{4\,1}^1,m_{4\,1}^2,X_1^{\a\beta},Y_1^{I\,\gamma},Z_1^{JK}\bigr),\bigl(m_{4\,2}^1,m_{4\,2}^2,X_2^{\delta\epsilon},Y_2^{L\,\eta},Z_2^{MN}\bigr)\bigr)\longmapsto\bigl({\rm m}_2^{(3)}\bigl(m_{4\,1}^1,m_{4\,2}^1\bigr),\cr\cr &&{\rm m}_2^{(3)}\bigl(m_{4\,1}^2,m_{4\,2}^2\bigr),X_1^{\a\beta}+X_2^{\a\beta}-\theta_1^\a\,\theta_2^\beta-\theta_1^\beta\,\theta_2^\a,Y^{I\gamma}_1+Y^{I\gamma}_2+18\,\theta_1^\gamma\,x_2^I-4(2\theta_1^\gamma+\theta_2^\gamma)\,\bigl(\ovl\theta_1\,\Gamma^I\,\theta_2\bigr)\cr\cr &&-\ovl\Gamma{}^I_{\delta\epsilon}\,\bigl(\theta_1^\gamma\,X_2^{\delta\epsilon}+8\,\theta_1^\delta\,X_2^{\gamma\epsilon}\bigr),Z_1^{JK}+Z_2^{JK}-10\,(x_1^J\,x_2^K-x_2^J\,x_1^K)\cr\cr &&+4\bigl(\bigl(x_1^J+x_2^J\bigr)\,\bigl(\ovl\theta_1\,\Gamma^K\,\theta_2\bigr)-\bigl(x_1^K+x_2^K\bigr)\,\bigl(\ovl\theta_1\,\Gamma^J\,\theta_2\bigr)\bigr)+\tfrac{1}{2}\,\theta_1^\eta\,\bigl(\ovl\Gamma{}^J_{\eta\kappa}\,Y_2^{K\kappa}-\ovl\Gamma{}^K_{\eta\kappa}\,Y_2^{J\kappa}\bigr)\cr\cr &&-\bigl(4\ovl\Gamma{}^J_{\eta\kappa}\,\ovl\Gamma{}^K_{\lambda\mu}\,\theta_1^\eta\,\theta_1^\lambda-x_1^J\,\ovl\Gamma{}^K_{\kappa\mu}+x_1^K\,\ovl\Gamma{}^J_{\kappa\mu}\bigr)\,X_2^{\kappa\mu}\bigr) \end{eqnarray} with the inverse \begin{eqnarray}\nn \widehat{\rm Inv}{}^{(7)}_2\ &:&\ 
\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}^{(7)}\longrightarrow\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}^{(7)}\ :\ \bigl(m_4^1,m_4^2,X^{\a\beta},Y^{I\gamma},Z^{JK}\bigr)\cr\cr &&\longmapsto\bigl(m_4^{1\,-1},m_4^{2\,-1},-X^{\a\beta},-Y^{I\gamma}+18\,x^I\,\theta^\gamma-\ovl\Gamma{}^I_{\delta\epsilon}\,\bigl(\theta^\gamma\,X^{\delta\epsilon}+8\,\theta^\delta\,X^{\gamma\epsilon}\bigr),\cr\cr &&\hspace{1cm}-Z^{JK}+\tfrac{1}{2}\,\theta^\eta\,\bigl(\ovl\Gamma{}^J_{\eta\kappa}\,Y^{K\kappa}-\ovl\Gamma{}^K_{\eta\kappa}\,Y^{J\kappa}\bigr)+\bigl(x^J\,\ovl\Gamma{}^K_{\kappa\mu}-x^K\,\ovl\Gamma{}^J_{\kappa\mu}+4\ovl\Gamma{}^J_{\eta\kappa}\,\ovl\Gamma{}^K_{\lambda\mu}\,\theta^\eta\,\theta^\lambda\bigr)\,X^{\kappa\mu}\bigr) \end{eqnarray} and the neutral element \begin{eqnarray}\nn \widehat e{}^{(7)}_2=(0,0,0,0,0) \end{eqnarray} is a Lie supergroup. It is a (super)central extension \begin{eqnarray}\nn {\boldsymbol{1}}\longrightarrow{\mathbb{R}}^{\x\frac{d(d-1)}{2}}\longrightarrow\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}{}^{(6)}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}{\mathbb{R}}^{\x\frac{d(d-1)}{2}}\xrightarrow{\ \widehat\pi_2{}^{(7)}\ }\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}{}^{(6)}\longrightarrow{\boldsymbol{1}} \end{eqnarray} of the Lie supergroup $\,\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}{}^{(6)}\,$ of Prop.\,\ref{prop:M26sgroup}. The supercentral extension is determined by the family of CE super-2-cocycles corresponding to the CaE super-2-cocycles $\,\{\widehat{\underset{\tx{\ciut{(2)}}}{h}}{}^{IJ}\}_{I,J\in\ovl{0,d-1}}\,$ of \Reqref{eq:CaEscocyc6}. \end{Prop} \noindent\begin{proof} Straightforward, through inspection. 
\end{proof} By the end of the long day, we are left with the desired result \begin{eqnarray}\nn \widehat\pi_2^{(5,6,7)\,*}({\rm pr}_2^*-{\rm pr}_1^*)\underset{\tx{\ciut{(3)}}}{\beta^{(4)}}(\widehat m_7)&=&{\mathsf d}\bigl[\tfrac{2}{15}\,\mathscr{Z}_{\a\beta}\bigl(m_4^1,m_4^2\bigr)\wedge\widehat e^{(5)\,\a\beta}(\widehat m_5)+\tfrac{1}{30}\,\mathscr{Y}_{I\a}\bigl(m_4^1,m_4^2\bigr)\wedge\widehat\sigma^{(6)\,I\a}(\widehat m_6)\cr\cr &&-\tfrac{1}{30}\,\mathscr{X}_{IJ}\bigl(m_4^1,m_4^2\bigr)\wedge\widehat e^{(7)\,IJ}(\widehat m_7)\bigr]\,, \end{eqnarray} where \begin{eqnarray}\nn \widehat\pi_2^{(5,6,7)}=\widehat\pi_2^{(5)}\circ\widehat\pi_2^{(6)}\circ\widehat\pi_2^{(7)}\,. \end{eqnarray} The above formula suggests that we should take the (super)central extension \begin{eqnarray}\nn \pi_{\widehat{\mathsf Y}\sfY_2^{[2]}\mathcal{M}^{(1)}}:=\widehat\pi_2^{(5)}\circ\widehat\pi_2^{(6)}\circ\widehat\pi_2^{(7)}\ &:&\ \widehat{\mathsf Y}\sfY_2^{[2]}\mathcal{M}^{(1)}:=\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}^{(7)}\longrightarrow{\mathsf Y}_2^{[2]}\mathcal{M}^{(1)}\cr\cr &:&\ \bigl(m_4^1,m_4^2,X,Y,Z\bigr)\longmapsto\bigl(m_4^1,m_4^2\bigr) \end{eqnarray} as the surjective submersion of the super-1-gerbe over $\,{\mathsf Y}_2^{[2]}\mathcal{M}^{(1)}\,$ with a connection of the LI curvature \begin{eqnarray}\nn \widehat{\underset{\tx{\ciut{(3)}}}{\chi}}=\tfrac{2}{3}\,\mathscr{X}_{IJ}\wedge{\rm pr}_1^*\pi^{(2,3,4)\,*}_2(e^I\wedge e^J)-\tfrac{3}{5}\,\mathscr{Y}_{I\a}\wedge{\rm pr}_1^*\pi^{(2,3,4)\,*}_2\bigl(e^I\wedge\pi_0^*\sigma^\a\bigr)-\tfrac{2}{15}\,\mathscr{Z}_{\a\beta}\wedge\pi^{(2,3,4)\,*}_{02}(\sigma^\a\wedge\sigma^\beta) \end{eqnarray} and the LI curving given by the formula \begin{eqnarray}\nn \widehat{\underset{\tx{\ciut{(2)}}}{\beta}}=\tfrac{1}{30}\,\bigl(4\widehat\pi_2^{(6,7)\,*}\bigl(\widehat\pi_2^{(5)\,*}\mathscr{Z}_{\a\beta}\wedge\widehat e^{(5)\,\a\beta}\bigr)+\widehat\pi_2^{(7)\,*}\bigl(\widehat\pi_2^{(5,6)\,*}\mathscr{Y}_{I\a}\wedge\widehat 
\sigma^{(6)\,I\a}\bigr)-\widehat\pi_2^{(5,6,7)\,*}\mathscr{X}_{IJ}\wedge\widehat e^{(7)\,IJ}\bigr)\,. \end{eqnarray} In the next step, we compare pullbacks of that curving along the canonical projections to the ${\mathsf Y}_2^{[2]}\mathcal{M}^{(1)}$-fibred square \begin{eqnarray}\nn \widehat{\mathsf Y}^{[2]}{\mathsf Y}_2^{[2]}\mathcal{M}^{(1)}\equiv\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}^{(7)}\x_{{\mathsf Y}_2^{[2]}\mathcal{M}^{(1)}}\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}^{(7)}\,, \end{eqnarray} whereby we find -- for $\,\widehat m_7^A:=\bigl(m_4^1,m_4^2,X^A,Y^A,Z^A\bigr),\ A\in\{1,2\}\,$ and $\,X^{21}:=X^2-X^1\,,\ Y^{21}:=Y^2-Y^1\,$ and $\,Z^{21}:=Z^2-Z^1\,$ -- \begin{eqnarray}\nn &&({\rm pr}_2^*-{\rm pr}_1^*)\widehat{\underset{\tx{\ciut{(2)}}}{\beta}}\bigl(\widehat m_7^1,\widehat m_7^2\bigr)\cr\cr &=&\tfrac{2}{15}\,\mathscr{Z}_{\a\beta}\bigl(m_4^1,m_4^2\bigr)\wedge{\mathsf d} X^{21\,\a\beta}+\tfrac{1}{30}\,\mathscr{Y}_{I\a}\bigl(m_4^1,m_4^2\bigr)\wedge\bigl({\mathsf d} Y^{21\,I\a}+\ovl\Gamma{}^I_{\beta\gamma}\,\bigl(\theta^\a\,{\mathsf d} X^{21\,\beta\gamma}+8\theta^\gamma\,{\mathsf d} X^{21\,\a\beta}\bigr)\bigr)\cr\cr &&-\tfrac{1}{30}\,\mathscr{X}_{IJ}\bigl(m_4^1,m_4^2\bigr)\wedge\bigl({\mathsf d} Z^{21\,IJ}+2\ovl\Gamma{}^I_{\a\beta}\,\bigl(x^J\,{\mathsf d} X^{21\,\a\beta}+2\ovl\Gamma{}^J_{\gamma\delta}\,\theta^\beta\,\theta^\gamma\,{\mathsf d} X^{21\,\a\delta}\bigr)\cr\cr &&+\ovl\Gamma{}^I_{\a\beta}\,\bigl({\mathsf d} Y^{21\,J\a}+\ovl\Gamma{}^J_{\gamma\delta}\,\bigl(\theta^\a\,{\mathsf d} X^{21\,\gamma\delta}+8\theta^\delta\,{\mathsf d} X^{21\,\a\gamma}\bigr)\bigr)\,\theta^\beta\bigr)\cr\cr 
&\equiv&\tfrac{1}{30}\,\bigl[\bigl(4\mathscr{Z}_{\a\beta}\bigl(m_4^1,m_4^2\bigr)+\bigl(\mathscr{Y}_{I\gamma}\,\ovl\Gamma{}^I_{\a\beta}+8\mathscr{Y}_{I\a}\,\ovl\Gamma{}^I_{\beta\gamma}\bigr)\bigl(m_4^1,m_4^2\bigr)\,\theta^\gamma+2\mathscr{X}_{IJ}\bigl(m_4^1,m_4^2\bigr)\,\bigl(2\ovl\Gamma{}^I_{\a\gamma}\,\ovl\Gamma{}^J_{\beta\delta}\,\theta^\gamma\,\theta^\delta\cr\cr &&-\ovl\Gamma{}^I_{\a\beta}\,x^J\bigr)\bigr)\wedge{\mathsf d} X^{21\,\a\beta}+\bigl(\mathscr{Y}_{I\a}\bigl(m_4^1,m_4^2\bigr)-\mathscr{X}_{IJ}\bigl(m_4^1,m_4^2\bigr)\,\ovl\Gamma{}^J_{\a\beta}\,\theta^\beta\bigr)\wedge{\mathsf d} Y^{21\,I\a}-\mathscr{X}_{IJ}\bigl(m_4^1,m_4^2\bigr)\wedge{\mathsf d} Z^{21\,IJ}\bigr]\cr\cr &=&\tfrac{1}{30}\,\bigl(4{\mathsf d}\upsilon^{21}_{\a\beta}\wedge{\mathsf d} X^{21\,\a\beta}+{\mathsf d}\psi^{21}_{I\a}\wedge{\mathsf d} Y^{21\,I\a}-{\mathsf d}\zeta^{21}_{IJ}\wedge{\mathsf d} Z^{21\,IJ}\bigr)\,. \end{eqnarray} Thus, just as in the case of the GS super-1-gerbe, we obtain a trivial principal ${\mathbb{C}}^\x$-bundle \begin{eqnarray}\label{eq:smembndl}\hspace{2cm} \pi_{\widehat\mathscr{L}}\equiv{\rm pr}_1\ :\ \widehat\mathscr{L}:=\widehat{\mathsf Y}^{[2]}{\mathsf Y}_2^{[2]}\mathcal{M}^{(1)}\x{\mathbb{C}}^\x\longrightarrow\widehat{\mathsf Y}^{[2]}{\mathsf Y}_2^{[2]}\mathcal{M}^{(1)}\ :\ \bigl(\widehat m_7^1,\widehat m_7^2,\widehat z\bigr)\longmapsto\bigl(\widehat m_7^1,\widehat m_7^2\bigr) \end{eqnarray} with a principal connection \begin{eqnarray}\nn \nabla_{\widehat\mathscr{L}}={\mathsf d}+\tfrac{1}{{\mathsf i}}\,\widehat{\rm A}\,, \end{eqnarray} or -- equivalently -- a principal connection 1-form \begin{eqnarray}\nn \widehat\cA\bigl(\widehat m_7^1,\widehat m_7^2,\widehat z\bigr)={\mathsf i}\tfrac{{\mathsf d}\widehat z}{\widehat z}+\widehat{\rm A}\bigl(\widehat m_7^1,\widehat m_7^2\bigr) \end{eqnarray} with the base component \begin{eqnarray}\nn \widehat{\rm A}\bigl(\widehat m_7^1,\widehat m_7^2\bigr)&=&\tfrac{1}{30}\,\bigl(Z^{21\,IJ}\,{\mathsf 
d}\zeta^{21}_{IJ}+Y^{21\,I\a}\,{\mathsf d} \psi^{21}_{I\a}-4X^{21\,\a\beta}\,{\mathsf d}\upsilon^{21}_{\a\beta}\bigr)\,. \end{eqnarray} Following the by now well-established procedure, we determine the lift of the Lie-supergroup structure from the base of the bundle to its total space by imposing the requirement that the principal connection 1-form be LI with respect to the rigid lifted supersymmetry induced from the ensuing group law. In order to study its consequences, we first work out in detail how the various coordinate differences entering the definition of $\,\widehat{\rm A}\,$ change under a rigid supersymmetry transformation with parameters $\,\widehat\delta_7^A\equiv(\varepsilon^\a,y^I,\xi^1_{JK},\phi^1_{L\beta},\varpi^1_{\gamma\delta},\xi^2_{MN},\phi^2_{O\epsilon},\varpi^2_{\eta\kappa},U^{A\,\lambda\mu},V^{A\,P\nu},W^{A\,RS})\in\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}^{(7)},\ A\in\{1,2\}\,$ induced, in the same manner, from $\,\widehat{\rm m}{}^{(7)}_2$.\ Here, we have taken into account the various fibrings involved in the construction (a point in the $A$-th factor of $\,\widehat{\mathsf Y}^{[2]}{\mathsf Y}_2^{[2]}\mathcal{M}^{(1)}\,$ is transformed by the corresponding $\,\widehat\delta_7^A$).
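The left-invariance requirement imposed here can be tested in a drastically simplified bosonic toy model: a Heisenberg-type central extension of $\,({\mathbb{R}}^2,+)\,$ with group law $\,(x,y,z)\cdot(x',y',z')=(x+x',y+y',z+z'+x\,y')\,$ and candidate connection 1-form $\,{\rm A}={\mathsf d} z-x\,{\mathsf d} y$.\ The sketch below (the specific 2-cocycle and all names are chosen purely for illustration and are not part of the superspace computation) checks numerically that $\,{\rm A}\,$ is invariant under left translations, mirroring in miniature the left-invariance of $\,\widehat{\rm A}\,$ established above.

```python
import random

# Toy bosonic analogue: central extension of (R^2, +) by the 2-cocycle
# omega((x,y),(x',y')) = x*y', with group law
#   (x,y,z) . (x',y',z') = (x+x', y+y', z+z'+x*y'),
# and candidate connection 1-form  A = dz - x dy.

def mult(g, h):
    (x, y, z), (a, b, c) = g, h
    return (x + a, y + b, z + c + x * b)

def A(point, tangent):
    """Evaluate A = dz - x dy on a tangent vector (dx,dy,dz) at `point`."""
    x, _, _ = point
    dx, dy, dz = tangent
    return dz - x * dy

def pushforward(g, tangent):
    """Differential of left translation L_g acting on (dx,dy,dz).
    From the group law: d(L_g)(dx,dy,dz) = (dx, dy, dz + g_x*dy)."""
    dx, dy, dz = tangent
    return (dx, dy, dz + g[0] * dy)

random.seed(0)
for _ in range(100):
    g = tuple(random.uniform(-5, 5) for _ in range(3))
    p = tuple(random.uniform(-5, 5) for _ in range(3))
    v = tuple(random.uniform(-5, 5) for _ in range(3))
    # Left-invariance: A at L_g(p) on the pushed-forward vector equals A at p on v.
    assert abs(A(mult(g, p), pushforward(g, v)) - A(p, v)) < 1e-9
print("A = dz - x dy is left-invariant on the toy central extension")
```

In the toy model the central-fibre contribution $\,g_x\,{\mathsf d} y\,$ picked up by the pushforward cancels exactly against the shift of the base coordinate in $\,{\rm A}$,\ which is the same cancellation mechanism as in the transformation laws listed next.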
We readily find the following transformation laws \begin{eqnarray}\nn \zeta^{21}_{IJ}&\longmapsto&\zeta^{21}_{IJ}\,,\cr\cr \psi^{21}_{I\a}&\longmapsto&\psi^{21}_{I\a}-\varepsilon^\beta\,\ovl\Gamma{}^J_{\a\beta}\,\zeta^{21}_{IJ}\,,\cr\cr \upsilon^{21}_{\a\beta}&\longmapsto&\upsilon^{21}_{\a\beta}-\varepsilon^\gamma\,\bigl(\tfrac{1}{4}\,\ovl\Gamma{}^I_{\a\beta}\,\psi^{21}_{I\gamma}+\ovl\Gamma{}^I_{\a\gamma}\,\psi^{21}_{I\beta}+\ovl\Gamma{}^I_{\beta\gamma}\,\psi^{21}_{I\a}\bigr)+\bigl(\tfrac{1}{2}\,y^I\,\ovl\Gamma{}^J_{\a\beta}+\varepsilon^\gamma\,\varepsilon^\delta\,\ovl\Gamma{}^I_{\a\gamma}\,\ovl\Gamma{}^J_{\beta\delta}\bigr)\,\zeta^{21}_{IJ}\,,\cr\cr X^{21\,\a\beta}&\longmapsto&X^{21\,\a\beta}\,,\cr\cr Y^{21\,I\a}&\longmapsto&Y^{21\,I\a}-\ovl\Gamma{}^I_{\beta\gamma}\,\bigl(\varepsilon^\a\,X^{21\,\beta\gamma}+8\,\varepsilon^\beta\,X^{21\,\a\gamma}\bigr)\,,\cr\cr Z^{21\,IJ}&\longmapsto&Z^{21\,IJ}+\tfrac{1}{2}\,\varepsilon^\a\,\bigl(\ovl\Gamma{}^I_{\a\beta}\,Y^{21\,J\beta}-\ovl\Gamma{}^J_{\a\beta}\,Y^{21\,I\beta}\bigr)-\bigl(4\ovl\Gamma{}^I_{\a\beta}\,\ovl\Gamma{}^J_{\gamma\delta}\,\varepsilon^\a\,\varepsilon^\gamma-y^I\,\ovl\Gamma{}^J_{\beta\delta}+y^J\,\ovl\Gamma{}^I_{\beta\delta}\bigr)\,X^{21\,\beta\delta}\,, \end{eqnarray} and so it follows that the base component of the principal connection is actually left-invariant, and not merely quasi-left-invariant as previously, \begin{eqnarray}\nn \widehat{\rm A}\bigl(\widehat{\rm m}{}^{(7)}_2\bigl(\widehat\delta_7^1,\widehat m_7^1\bigr),\widehat{\rm m}{}^{(7)}_2\bigl(\widehat\delta_7^2,\widehat m_7^2\bigr)\bigr)=\widehat{\rm A}\bigl(\widehat m_7^1,\widehat m_7^2\bigr)\,. 
\end{eqnarray} Accordingly, we may take the lift of the supersymmetry to $\,\widehat\mathscr{L}\,$ to be trivial, as stated in \begin{Prop}\label{prop:smembndlie} The principal ${\mathbb{C}}^\x$-bundle $\,\widehat\mathscr{L}\,$ of \Reqref{eq:smembndl} equipped with the binary operation \begin{eqnarray} \widehat{\rm m}^{(8)}_2\ &:&\ \widehat\mathscr{L}\x\widehat\mathscr{L}\longrightarrow\widehat\mathscr{L}\cr\cr &:&\ \bigl(\bigl(\widehat m_{7\,1}^1,\widehat m_{7\,1}^2,\widehat z_1\bigr),\bigl(\widehat m_{7\,2}^1,\widehat m_{7\,2}^2,\widehat z_2\bigr)\bigr)\longmapsto\bigl(\widehat{\rm m}{}^{(7)}_2\bigl(\widehat m_{7\,1}^1,\widehat m_{7\,2}^1\bigr),\widehat{\rm m}{}^{(7)}_2\bigl(\widehat m_{7\,1}^2,\widehat m_{7\,2}^2\bigr),\widehat z_1\cdot\widehat z_2\bigr)\,,\label{eq:smembLonLfix} \end{eqnarray} with the inverse \begin{eqnarray}\nn \widehat{\rm Inv}^{(8)}_2\ :\ \widehat\mathscr{L}\longrightarrow\widehat\mathscr{L}\ :\ \bigl(\widehat m_7^1,\widehat m_7^2,\widehat z\bigr)\longmapsto\bigl(\widehat{\rm Inv}{}^{(7)}_2\bigl(\widehat m_7^1\bigr),\widehat{\rm Inv}{}^{(7)}_2\bigl(\widehat m_7^2\bigr),\widehat z^{-1}\bigr) \end{eqnarray} and the neutral element \begin{eqnarray}\nn \widehat e^{(8)}_2=(0,0,1) \end{eqnarray} is a Lie supergroup. It is a trivial central extension \begin{eqnarray}\nn {\boldsymbol{1}}\longrightarrow{\mathbb{C}}^\x\longrightarrow\widehat{\mathsf Y}^{[2]}{\mathsf Y}_2^{[2]}\mathcal{M}^{(1)}\x{\mathbb{C}}^\x\xrightarrow{\ \pi_{\widehat\mathscr{L}}\ }\widehat{\mathsf Y}^{[2]}{\mathsf Y}_2^{[2]}\mathcal{M}^{(1)}\longrightarrow{\boldsymbol{1}}\,, \end{eqnarray} that is the direct product of the Lie supergroup $\,\widehat{\mathsf Y}^{[2]}{\mathsf Y}_2^{[2]}\mathcal{M}^{(1)}\,$ with the structure group $\,{\mathbb{C}}^\x$. \end{Prop} \noindent\begin{proof} {\it Cp.}\ above.
\end{proof} At this stage, we may pass to the fibred cube \begin{eqnarray}\nn \widehat{\mathsf Y}^{[3]}{\mathsf Y}_2^{[2]}\mathcal{M}^{(1)}\equiv\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}^{(7)}\x_{{\mathsf Y}_2^{[2]}\mathcal{M}^{(1)}}\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}^{(7)}\x_{{\mathsf Y}_2^{[2]}\mathcal{M}^{(1)}}\widehat{{\mathsf Y}^{[2]}_2\mathcal{M}^{(1)}}{}^{(7)}\,, \end{eqnarray} and look for a connection-preserving isomorphism \begin{eqnarray}\nn \mu_{\widehat\mathscr{L}}\ :\ {\rm pr}_{1,2}^*\widehat\mathscr{L}\otimes{\rm pr}_{2,3}^*\widehat\mathscr{L}\xrightarrow{\ \cong\ }{\rm pr}_{1,3}^*\widehat\mathscr{L}\,. \end{eqnarray} The comparison of the pullbacks of the connection 1-forms \begin{eqnarray}\nn ({\rm pr}_{1,2}^*+{\rm pr}_{2,3}^*-{\rm pr}_{1,3}^*)\widehat{\rm A}\bigl(\widehat m_7^1,\widehat m_7^2,\widehat m_7^3\bigr)=0\,, \end{eqnarray} in conjunction with Prop.\,\ref{prop:smembndlie} immediately suggests the natural choice \begin{eqnarray}\label{eq:grpdstrwidehatL} \mu_{\widehat\mathscr{L}}\left(\bigl(\widehat m_7^1,\widehat m_7^2,\widehat z_{1,2}\bigr)\otimes\bigl(\widehat m_7^2,\widehat m_7^3,\widehat z_{2,3}\bigr)\right):=\bigl(\widehat m_7^1,\widehat m_7^3,\widehat z_{1,2}\cdot\widehat z_{2,3}\bigr)\,. \end{eqnarray} A fibre map thus defined trivially satisfies the groupoid identity \eqref{eq:mugrpd} over $\,\widehat{\mathsf Y}^{[3]}{\mathsf Y}_2^{[2]}\mathcal{M}^{(1)}$. Altogether, then, we establish the existence of a super-1-gerbe \begin{eqnarray}\nn \widehat\mathscr{G}=\bigl(\widehat{\mathsf Y}\sfY_2^{[2]}\mathcal{M}^{(1)},\pi_{\widehat{\mathsf Y}\sfY_2^{[2]}\mathcal{M}^{(1)}},\widehat{\underset{\tx{\ciut{(2)}}}{\beta}},\widehat\mathscr{L},\nabla_{\widehat\mathscr{L}},\mu_{\widehat\mathscr{L}}\bigr) \end{eqnarray} over the fibred square $\,{\mathsf Y}_2^{[2]}\mathcal{M}^{(1)}\,$ of the (super)central extension $\,{\mathsf Y}_2\mathcal{M}^{(1)}\,$ of the support of the GS super-4-cocycle, in the sense of Def.\,\ref{def:s1gerbe}.
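Since $\,\widehat\mathscr{L}\,$ is trivial and the fibre map of \eqref{eq:grpdstrwidehatL} simply multiplies the ${\mathbb{C}}^\x$-fibre coordinates, the groupoid identity reduces to associativity of multiplication in ${\mathbb{C}}^\x$.\ The following minimal sketch (with Python complex numbers standing in for the fibre coordinates and integer labels for base points; all names are purely illustrative) makes this reduction explicit:

```python
import cmath
import random

# Minimal sketch of the groupoid identity for the trivial bundle:
#   mu((m1,m2,z12) (x) (m2,m3,z23)) := (m1, m3, z12*z23).
# Over the triple fibred product, mu o (mu (x) id) = mu o (id (x) mu)
# reduces to associativity of multiplication in C^x.

def mu(e12, e23):
    (m1, m2, z12), (m2b, m3, z23) = e12, e23
    assert m2 == m2b  # the two elements must sit over the same middle factor
    return (m1, m3, z12 * z23)

random.seed(1)
m = list(range(4))  # stand-ins for base points m_7^1, ..., m_7^4
z = [cmath.exp(1j * random.uniform(0.0, 6.28)) for _ in range(3)]
e12, e23, e34 = (m[0], m[1], z[0]), (m[1], m[2], z[1]), (m[2], m[3], z[2])

lhs = mu(mu(e12, e23), e34)
rhs = mu(e12, mu(e23, e34))
assert lhs[0] == rhs[0] and lhs[1] == rhs[1]
assert abs(lhs[2] - rhs[2]) < 1e-12
print("groupoid identity holds over the toy fourfold fibre product")
```

The same triviality is what makes the compatibility checks for the product 1-isomorphism below essentially automatic.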
We shall next construct a coherent product on the super-1-gerbe. To this end, we define the pullback surjective submersions \begin{eqnarray}\label{diag:pbsspro} \alxydim{@C=2.cm@R=1.5cm}{\widehat{\mathsf Y}^{i,j}{\mathsf Y}_2^{[3]}\mathcal{M}^{(1)}\cong\widehat{\mathsf Y}\sfY_2^{[2]}\mathcal{M}^{(1)}\x_{\mathcal{M}^{(1)}}{\mathsf Y}_2^{[2]}\mathcal{M}^{(1)} \ar[r]^{\hspace{3cm}\widehat{\rm pr}_{i,j}} \ar[d]_{\pi_{\widehat{\mathsf Y}^{i,j}{\mathsf Y}_2^{[3]}\mathcal{M}^{(1)}}} & \widehat{\mathsf Y}\sfY_2^{[2]}\mathcal{M}^{(1)} \ar[d]^{\pi_{\widehat{\mathsf Y}\sfY_2^{[2]}\mathcal{M}^{(1)}}} \\ {\mathsf Y}_2^{[3]}\mathcal{M}^{(1)} \ar[r]_{{\rm pr}_{i,j}} & {\mathsf Y}_2^{[2]}\mathcal{M}^{(1)} } \end{eqnarray} for $\,(i,j)\in\{(1,2),(2,3),(1,3)\}$,\ with (global) coordinates \begin{eqnarray}\nn \widehat m^{(i,j)}\equiv\bigl(m_4^1,m_4^2,m_4^3,X^{(i,j)},Y^{(i,j)},Z^{(i,j)}\bigr)\in\widehat{\mathsf Y}^{i,j}{\mathsf Y}_2^{[3]}\mathcal{M}^{(1)} \end{eqnarray} and projections \begin{eqnarray}\nn \widehat{\rm pr}_{i,j}\bigl(m_4^1,m_4^2,m_4^3,X^{(i,j)},Y^{(i,j)},Z^{(i,j)}\bigr)&:=&\bigl(m_4^i,m_4^j,X^{(i,j)},Y^{(i,j)},Z^{(i,j)} \bigr)\,,\cr\cr \pi_{\widehat{\mathsf Y}^{i,j}{\mathsf Y}_2^{[3]}\mathcal{M}^{(1)}}\bigl(m_4^1,m_4^2,m_4^3,X^{(i,j)},Y^{(i,j)},Z^{(i,j)}\bigr)&:=&\bigl(m_4^1,m_4^2,m_4^3 \bigr)\,, \end{eqnarray} and equip them with the obvious Lie-supergroup structure projecting to that of Prop.\,\ref{prop:M27sgroup} along the respective map $\,\widehat{\rm pr}_{i,j}\,$ and to that of Prop.\,\ref{prop:homformsgroup} along each of the maps $\,{\rm pr}_A\circ\pi_{\widehat{\mathsf Y}^{i,j}{\mathsf Y}_2^{[3]}\mathcal{M}^{(1)}},\ A\in\{1,2,3\}$.\ Subsequently, we compare the (Deligne) tensor product of the pullback super-1-gerbes (we are dropping some obvious subscripts for the sake of transparency) \begin{eqnarray}\nn {\rm pr}_{1,2}^*\widehat\mathscr{G}\otimes{\rm pr}_{2,3}^*\widehat\mathscr{G}&=&\bigl(\widehat{\mathsf Y}^{1,2}{\mathsf 
Y}_2^{[3]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[3]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{2,3}{\mathsf Y}_2^{[3]}\mathcal{M}^{(1)},\pi_{\widehat{\mathsf Y}^{1,2}{\mathsf Y}_2^{[3]}\mathcal{M}^{(1)}}\circ{\rm pr}_1,\bigl({\rm pr}_1^*\widehat{\rm pr}_{1,2}^*+{\rm pr}_2^*\widehat{\rm pr}_{2,3}^*\bigr)\widehat{\underset{\tx{\ciut{(2)}}}{\beta}},\cr\cr &&{\rm pr}_{1,3}^*\widehat{\rm pr}_{1,2}^{\x 2}{}^*\widehat\mathscr{L}\otimes{\rm pr}_{2,4}^*\widehat{\rm pr}_{2,3}^{\x 2}{}^*\widehat\mathscr{L},{\rm pr}_{1,3}^*\widehat{\rm pr}_{1,2}^{\x 2}{}^*\nabla_{\widehat\mathscr{L}}\otimes{\rm id}+{\rm id}\otimes{\rm pr}_{2,4}^*\widehat{\rm pr}_{2,3}^{\x 2}{}^*\nabla_{\widehat\mathscr{L}},\cr\cr &&{\rm pr}_{1,3,5}^*\widehat{\rm pr}_{1,2}^{\x 3}{}^*\mu_{\widehat\mathscr{L}}\otimes{\rm pr}_{2,4,6}^*\widehat{\rm pr}_{2,3}^{\x 3}{}^*\mu_{\widehat\mathscr{L}}\bigr)\,, \end{eqnarray} written in terms of the obvious canonical projections (which will be made explicit below), with the pullback super-1-gerbe \begin{eqnarray}\nn {\rm pr}_{1,3}^*\widehat\mathscr{G}=\bigl(\widehat{\mathsf Y}^{1,3}{\mathsf Y}_2^{[3]}\mathcal{M}^{(1)},\pi_{\widehat{\mathsf Y}^{1,3}{\mathsf Y}_2^{[3]}\mathcal{M}^{(1)}},\widehat{\rm pr}_{1,3}^*\widehat{\underset{\tx{\ciut{(2)}}}{\beta}},\widehat{\rm pr}_{1,3}^{\x 2}{}^*\widehat\mathscr{L},\widehat{\rm pr}_{1,3}^{\x 2}{}^*\nabla_{\widehat\mathscr{L}},\widehat{\rm pr}_{1,3}^{\x 3}{}^*\mu_{\widehat\mathscr{L}}\bigr)\,. 
\end{eqnarray} We perform the comparison over the fibred product \begin{eqnarray}\nn \widehat{\mathsf Y}^{1,2,3}{\mathsf Y}_2^{[3]}\mathcal{M}^{(1)}:=\widehat{\mathsf Y}^{1,2}{\mathsf Y}_2^{[3]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[3]}\mathcal{M}^{(1)}} \widehat{\mathsf Y}^{2,3}{\mathsf Y}_2^{[3]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[3]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{1,3}{\mathsf Y}_2^{[3]}\mathcal{M}^{(1)} \end{eqnarray} surjectively submersed onto $\,{\mathsf Y}_2^{[3]}\mathcal{M}^{(1)}\,$ as {\it per} \begin{eqnarray}\nn \pi_{\widehat{\mathsf Y}^{1,2,3}{\mathsf Y}_2^{[3]}\mathcal{M}^{(1)}}\equiv\pi_{\widehat{\mathsf Y}^{1,2}{\mathsf Y}_2^{[3]}\mathcal{M}^{(1)}}\circ{\rm pr}_1\,. \end{eqnarray} There, we find \begin{eqnarray}\nn &&\bigl({\rm pr}_3^*\widehat{\rm pr}_{1,3}^*\widehat{\underset{\tx{\ciut{(2)}}}{\beta}}-{\rm pr}_{1,2}^*\bigl({\rm pr}_1^*\widehat{\rm pr}_{1,2}^*+{\rm pr}_2^*\widehat{\rm pr}_{2,3}^*\bigr)\widehat{\underset{\tx{\ciut{(2)}}}{\beta}}\bigr)\bigl(\widehat m^{(1,2)},\widehat m^{(2,3)},\widehat m^{(1,3)}\bigr)\cr\cr &=&\tfrac{1}{30}\,{\mathsf d}\bigl(\bigl(Z^{(1,3)\,IJ}-Z^{(1,2)\,IJ}-Z^{(2,3)\,IJ}\bigr)\,{\mathsf d}\zeta^{21}_{IJ}+\bigl(Y^{(1,3)\,I\a}-Y^{(1,2)\,I\a}-Y^{(2,3)\,I\a}\bigr)\,{\mathsf d}\psi^{21}_{I\a}\cr\cr &&-4\bigl(X^{(1,3)\,\a\beta}-X^{(1,2)\,\a\beta}-X^{(2,3)\,\a\beta}\bigr)\,{\mathsf d}\upsilon^{21}_{\a\beta}\bigr)\,. 
\end{eqnarray} From the last result, we infer the existence of a trivial principal ${\mathbb{C}}^\x$-bundle \begin{eqnarray}\nn \pi_{\widehat\mathscr{E}}\equiv{\rm pr}_1\ &:&\ \widehat\mathscr{E}:=\widehat{\mathsf Y}^{1,2,3}{\mathsf Y}_2^{[3]}\mathcal{M}^{(1)}\x{\mathbb{C}}^\x\longrightarrow\widehat{\mathsf Y}^{1,2,3}{\mathsf Y}_2^{[3]}\mathcal{M}^{(1)}\cr\cr &:&\ \bigl(\widehat m^{(1,2)},\widehat m^{(2,3)},\widehat m^{(1,3)},\widehat\zeta^{1,2,3}\bigr)\longmapsto\bigl(\widehat m^{(1,2)},\widehat m^{(2,3)},\widehat m^{(1,3)}\bigr) \end{eqnarray} with an LI principal ${\mathbb{C}}^\x$-connection 1-form \begin{eqnarray}\nn \widehat\a\bigl(\widehat m^{(1,2)},\widehat m^{(2,3)},\widehat m^{(1,3)},\widehat\zeta^{1,2,3}\bigr)={\mathsf i}\tfrac{{\mathsf d}\widehat\zeta^{1,2,3}}{\widehat\zeta^{1,2,3}}+\widehat\txa\bigl(\widehat m^{(1,2)},\widehat m^{(2,3)},\widehat m^{(1,3)}\bigr) \end{eqnarray} with the base component \begin{eqnarray}\nn \widehat\txa\bigl(\widehat m^{(1,2)},\widehat m^{(2,3)},\widehat m^{(1,3)}\bigr)&=&\tfrac{1}{30}\,\bigl[\bigl(Z^{(1,3)\,IJ}-Z^{(1,2)\,IJ}-Z^{(2,3)\,IJ}\bigr)\,{\mathsf d}\zeta^{21}_{IJ}\cr\cr &&+\bigl(Y^{(1,3)\,I\a}-Y^{(1,2)\,I\a}-Y^{(2,3)\,I\a}\bigr)\,{\mathsf d}\psi^{21}_{I\a}\cr\cr &&-4\bigl(X^{(1,3)\,\a\beta}-X^{(1,2)\,\a\beta}-X^{(2,3)\,\a\beta}\bigr)\,{\mathsf d}\upsilon^{21}_{\a\beta}\bigr]\,. 
\end{eqnarray} Next, we take the ${\mathsf Y}^{[3]}_2\mathcal{M}^{(1)}$-fibred square $\,\widehat{\mathsf Y}^{1,2,3\,[2]}{\mathsf Y}_2^{[3]}\mathcal{M}^{(1)}$,\ with its canonical projections \begin{eqnarray}\nn &{\rm pr}_{1,4}\ :\ \widehat{\mathsf Y}^{1,2,3\,[2]}{\mathsf Y}^{[3]}_2\mathcal{M}^{(1)}\longrightarrow\widehat{\mathsf Y}^{1,2\,[2]}{\mathsf Y}^{[3]}_2\mathcal{M}^{(1)},\qquad{\rm pr}_{2,5}\ :\ \widehat{\mathsf Y}^{1,2,3\,[2]}{\mathsf Y}^{[3]}_2\mathcal{M}^{(1)}\longrightarrow\widehat{\mathsf Y}^{2,3\,[2]}{\mathsf Y}^{[3]}_2\mathcal{M}^{(1)}\,,&\cr\cr &{\rm pr}_{3,6}\ :\ \widehat{\mathsf Y}^{1,2,3\,[2]}{\mathsf Y}^{[3]}_2\mathcal{M}^{(1)}\longrightarrow\widehat{\mathsf Y}^{1,3\,[2]}{\mathsf Y}^{[3]}_2\mathcal{M}^{(1)}\,,& \end{eqnarray} alongside \begin{eqnarray}\nn {\rm pr}_{1,2,3}\,,{\rm pr}_{4,5,6}\ :\ \widehat{\mathsf Y}^{1,2,3\,[2]}{\mathsf Y}^{[3]}_2\mathcal{M}^{(1)}\longrightarrow\widehat{\mathsf Y}^{1,2,3}{\mathsf Y}^{[3]}_2\mathcal{M}^{(1)}\,, \end{eqnarray} and compute \begin{eqnarray}\nn \bigl({\rm pr}_{1,4}^*+{\rm pr}_{2,5}^*\bigr)\widehat{\rm A}+{\rm pr}_{4,5,6}^*\widehat\txa={\rm pr}_{1,2,3}^*\widehat\txa+ {\rm pr}_{3,6}^*\widehat{\rm A}\,, \end{eqnarray} whereupon it becomes clear that we have a connection-preserving ${\mathbb{C}}^\x$-bundle isomorphism \begin{eqnarray}\nn \widehat\varepsilon\ &:&\ \widehat{\rm pr}_{1,4}^*\widehat\mathscr{L}\otimes\widehat{\rm pr}_{2,5}^*\widehat\mathscr{L}\otimes{\rm pr}_{4,5,6}^*\widehat\mathscr{E} \xrightarrow{\ \cong\ }{\rm pr}_{1,2,3}^*\widehat\mathscr{E}\otimes\widehat{\rm pr}_{3,6}^*\widehat\mathscr{L}\cr\cr &:&\ \bigl(\bigl(\widehat m^{(1,2)}_1,\widehat m^{(1,2)}_2,\widehat z^{(1,2)}\bigr),\bigl(\widehat m^{(2,3)}_1,\widehat m^{(2,3)}_2,\widehat z^{(2,3)}\bigr),\bigl(\widehat m^{(1,2)}_2,\widehat m^{(2,3)}_2,\widehat m^{(1,3)}_2,\widehat\zeta^{1,2,3}_2\bigr)\bigr)\cr\cr &&\hspace{0.5cm}\longmapsto\bigl(\bigl(\widehat m^{(1,2)}_1,\widehat m^{(2,3)}_1,\widehat m^{(1,3)}_1,\widehat 
z^{(1,2)}\cdot\widehat z^{(2,3)}\cdot\widehat\zeta^{1,2,3}_2\bigr),\bigl(\widehat m^{(1,3)}_1,\widehat m^{(1,3)}_2,1\bigr)\bigr)\,. \end{eqnarray} The triviality of its form, in conjunction with that of the groupoid structure on $\,\widehat\mathscr{L}\,$ established in \Reqref{eq:grpdstrwidehatL}, ensures that it satisfies the usual requirement of compatibility with the respective groupoid structures on $\,{\rm pr}_{1,2}^*\widehat\mathscr{G}\otimes{\rm pr}_{2,3}^*\widehat\mathscr{G}\,$ and $\,{\rm pr}_{1,3}^*\widehat\mathscr{G}$.\ It is also in keeping with our definition of the super-0-gerbe isomorphism. Thus, altogether, we have the desired product 1-isomorphism \begin{eqnarray}\nn \mathcal{M}_{\widehat\mathscr{G}}\ :\ {\rm pr}_{1,2}^*\widehat\mathscr{G}\otimes{\rm pr}_{2,3}^*\widehat\mathscr{G}\xrightarrow{\ \cong\ }{\rm pr}_{1,3}^*\widehat\mathscr{G}\,. \end{eqnarray} Finally, we verify the existence (over $\,{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}$) of an associator 2-isomorphism \begin{eqnarray}\label{diag:assoc} \alxydim{@C=4.cm@R=2cm}{{\rm pr}_{1,2}^*\widehat\mathscr{G}\otimes{\rm pr}_{2,3}^*\widehat\mathscr{G}\otimes{\rm pr}_{3,4}^*\widehat\mathscr{G} \ar[r]^{{\rm pr}_{1,2,3}^*\mathcal{M}_{\widehat\mathscr{G}}\otimes{\rm id}_{{\rm pr}_{1,3}^*\widehat\mathscr{G}}} \ar[d]_{{\rm id}_{{\rm pr}_{1,2}^*\widehat\mathscr{G}}\otimes{\rm pr}_{2,3,4}^*\mathcal{M}_{\widehat\mathscr{G}}} & {\rm pr}_{1,3}^*\widehat\mathscr{G}\otimes{\rm pr}_{3,4}^*\widehat\mathscr{G} \ar[d]^{{\rm pr}_{1,3,4}^*\mathcal{M}_{\widehat\mathscr{G}}} \ar@{=>}[dl]|{\ \mu_{\widehat\mathscr{G}}\ } \\ {\rm pr}_{1,2}^*\widehat\mathscr{G}\otimes{\rm pr}_{2,4}^*\widehat\mathscr{G} \ar[r]_{{\rm pr}_{1,2,4}^*\mathcal{M}_{\widehat\mathscr{G}}} & {\rm pr}_{1,4}^*\widehat\mathscr{G} } \end{eqnarray} For that purpose, we first consider the surjective submersion \begin{eqnarray}\nn \widehat{\mathsf Y}^{1,2,3}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf 
Y}^{3,4\,[2]}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\equiv\widehat{\mathsf Y}^{1,2}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{2,3}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)} \cr\cr \x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{1,3}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{3,4}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)} \x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{3,4}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)} \end{eqnarray} with projections (defined analogously to the $\,\widehat{\rm pr}_{i,j}\,$ of Diag.\,\eqref{diag:pbsspro}) \begin{eqnarray}\nn \widehat{\mathsf Y}^{1,2,3}{\mathsf Y}_2^{[3]}\mathcal{M}^{(1)}\xleftarrow{\ \widehat{\rm pr}_{1,2,3}\ }\widehat{\mathsf Y}^{1,2,3}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{3,4\,[2]}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\xrightarrow{\ \widehat\pi_{4,5}\ }\widehat{\mathsf Y}^{3,4\,[2]}{\mathsf Y}_2^{[2]}\mathcal{M}^{(1)} \end{eqnarray} which we use to erect the principal ${\mathbb{C}}^\x$-bundle \begin{eqnarray}\nn \widehat{\rm pr}_{1,2,3}^*\widehat\mathscr{E}\otimes\widehat\pi_{4,5}^*\widehat\mathscr{L}\longrightarrow\widehat{\mathsf Y}^{1,2,3}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{3,4\,[2]}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)} \end{eqnarray} associated with the super-1-gerbe 1-isomorphism $\,{\rm pr}_{1,2,3}^*\mathcal{M}_{\widehat\mathscr{G}}\otimes{\rm id}_{{\rm pr}_{1,3}^*\widehat\mathscr{G}}$.\ Next, we take the principal ${\mathbb{C}}^\x$-bundle \begin{eqnarray}\nn \widehat{\rm pr}_{1,3,4}^*\widehat\mathscr{E}\longrightarrow\widehat{\mathsf Y}^{1,3,4}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\,, \end{eqnarray} pulled back to its base (defined similarly as $\,\widehat{\mathsf Y}^{1,2,3}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}$) from $\,\widehat{\mathsf Y}^{1,3,4}{\mathsf 
Y}_2^{[3]}\mathcal{M}^{(1)}\,$ and associated with the super-1-gerbe 1-isomorphism $\,{\rm pr}_{1,3,4}^*\mathcal{M}_{\widehat\mathscr{G}}$.\ The tensor product of the pullback bundles \begin{eqnarray}\nn &&{\rm pr}_{1,2,3,4,5}^*\bigl(\widehat{\rm pr}_{1,2,3}^*\widehat\mathscr{E}\otimes\widehat\pi_{4,5}^*\widehat\mathscr{L}\bigr)\otimes{\rm pr}_{3,5,6}^*\widehat {\rm pr}_{1,3,4}^*\widehat\mathscr{E}\cr\cr &\longrightarrow&\widehat{\mathsf Y}^{1,2,3}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{3,4\,[2]}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{1,4}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)} \end{eqnarray} over the base \begin{eqnarray}\nn &&\widehat{\mathsf Y}^{1,2,3}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{3,4\,[2]}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{1,4}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\cr\cr &\equiv&\bigl(\widehat{\mathsf Y}^{1,2}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{2,3}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)} \x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{1,3}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{3,4}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\bigr) \cr\cr &&\x_{\widehat{\mathsf Y}^{1,3}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{3,4}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\bigl(\widehat{\mathsf Y}^{1,3} {\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{3,4}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{3,4}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)} \cr\cr &&\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{1,4}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\bigr) \end{eqnarray} 
now corresponds to the first composite 1-isomorphism of Diag.\,\eqref{diag:assoc}, \begin{eqnarray}\nn {\rm pr}_{1,3,4}^*\mathcal{M}_{\widehat\mathscr{G}}\circ\bigl({\rm pr}_{1,2,3}^*\mathcal{M}_{\widehat\mathscr{G}}\otimes{\rm id}_{{\rm pr}_{1,3}^*\widehat\mathscr{G}}\bigr)\,. \end{eqnarray} Analogously, we construct the principal ${\mathbb{C}}^\x$-bundle associated with the other composite 1-isomorphism. Thus, we take the surjective submersion \begin{eqnarray}\nn &&\widehat{\mathsf Y}^{1,2\,[2]}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{2,3,4}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\cr\cr &\equiv& \widehat{\mathsf Y}^{1,2}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{1,2}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)} \x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{2,3}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{3,4} {\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\cr\cr &&\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{2,4}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)} \end{eqnarray} with projections ({\it cp.}\ above) \begin{eqnarray}\nn \widehat{\mathsf Y}^{1,2\,[2]}{\mathsf Y}_2^{[2]}\mathcal{M}^{(1)} \xleftarrow{\ \widehat\pi_{1,2}\ }\widehat{\mathsf Y}^{1,2\,[2]}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{2,3,4}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\xrightarrow{\ \widehat{\rm pr}_{3,4,5}\ }\widehat{\mathsf Y}^{2,3,4}{\mathsf Y}_2^{[3]}\mathcal{M}^{(1)} \end{eqnarray} as the base of the principal ${\mathbb{C}}^\x$-bundle \begin{eqnarray}\nn \widehat\pi_{1,2}^*\widehat\mathscr{L}\otimes\widehat{\rm pr}_{3,4,5}^*\widehat\mathscr{E}\longrightarrow\widehat{\mathsf Y}^{1,2\,[2]}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{2,3,4}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)} \end{eqnarray} of the 1-isomorphism $\,{\rm 
id}_{{\rm pr}_{1,2}^*\widehat\mathscr{G}}\otimes{\rm pr}_{2,3,4}^*\mathcal{M}_{\widehat\mathscr{G}}$,\ and then the principal ${\mathbb{C}}^\x$-bundle \begin{eqnarray}\nn \widehat{\rm pr}_{1,2,4}^*\widehat\mathscr{E}\longrightarrow\widehat{\mathsf Y}^{1,2,4}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)} \end{eqnarray} for the 1-isomorphism $\,{\rm pr}_{1,2,4}^*\mathcal{M}_{\widehat\mathscr{G}}$.\ These two combine to give the bundle \begin{eqnarray}\nn &&{\rm pr}_{1,2,3,4,5}^*\bigl(\widehat\pi_{1,2}^*\widehat\mathscr{L}\otimes\widehat{\rm pr}_{3,4,5}^*\widehat\mathscr{E}\bigr)\otimes{\rm pr}_{2,5,6}^* \widehat{\rm pr}_{1,2,4}^*\widehat\mathscr{E}\cr\cr &\longrightarrow& \widehat{\mathsf Y}^{1,2\,[2]}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{2,3,4}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}} \widehat{\mathsf Y}^{1,4}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)} \end{eqnarray} corresponding to \begin{eqnarray}\nn {\rm pr}_{1,2,4}^*\mathcal{M}_{\widehat\mathscr{G}}\circ\bigl({\rm id}_{{\rm pr}_{1,2}^*\widehat\mathscr{G}}\otimes{\rm pr}_{2,3,4}^*\mathcal{M}_{\widehat\mathscr{G}}\bigr)\,.
\end{eqnarray} The sought-after 2-isomorphism is a connection-preserving isomorphism \begin{eqnarray}\nn m\ &:&\ {\rm pr}_{1,2,3,4,5,8}^*\bigl[{\rm pr}_{1,2,3,4,5}^*\bigl(\widehat{\rm pr}_{1,2,3}^*\widehat\mathscr{E}\otimes\widehat\pi_{4,5}^*\widehat\mathscr{L}\bigr)\otimes{\rm pr}_{3,5,6}^*\widehat {\rm pr}_{1,3,4}^*\widehat\mathscr{E}\bigr]\cr\cr &&\xrightarrow{\ \cong\ }{\rm pr}_{1,6,2,4,7,8}^*\bigl[{\rm pr}_{1,2,3,4,5}^*\bigl(\widehat\pi_{1,2}^*\widehat\mathscr{L}\otimes\widehat{\rm pr}_{3,4,5}^*\widehat\mathscr{E}\bigr)\otimes{\rm pr}_{2,5,6}^* \widehat{\rm pr}_{1,2,4}^*\widehat\mathscr{E}\bigr] \end{eqnarray} of principal ${\mathbb{C}}^\x$-bundles over \begin{eqnarray}\nn &&\widehat{\mathsf Y}^{1,2,3}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{3,4\,[2]}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{1,2,4}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\cr\cr &\equiv& \bigl(\widehat{\mathsf Y}^{1,2,3}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{3,4\,[2]}{\mathsf Y}_2^{[3]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{1,4}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\bigr)\cr\cr &&{}_{1,2,4,6}\x_{1,3,4,6} \bigl(\widehat{\mathsf Y}^{1,2\,[2]}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{2,3,4}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\x_{{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}}\widehat{\mathsf Y}^{1,4}{\mathsf Y}_2^{[4]}\mathcal{M}^{(1)}\bigr)\,, \end{eqnarray} where the indices on $\,\x\,$ indicate the cartesian factors of the component to the left and to the right of the product sign, respectively, that are identified in the fibred product. The proof of the existence of $\,m\,$ is based on the comparison between the relevant connection 1-forms. 
Their equality, expressed by the formula \begin{eqnarray}\nn &&\widehat\txa\bigl(\widehat m^{1\,(1,2)},\widehat m^{(2,3)},\widehat m^{(1,3)}\bigr)+\widehat{\rm A}\bigl(\widehat m^{1\,(3,4)}_7,\widehat m^{2\,(3,4)}_7\bigr)+\widehat\txa\bigl(\widehat m^{(1,3)},\widehat m^{2\,(3,4)},\widehat m^{(1,4)}\bigr)\cr\cr &=&\widehat{\rm A}\bigl(\widehat m^{1\,(1,2)}_7,\widehat m^{2\,(1,2)}_7\bigr)+\widehat\txa\bigl(\widehat m^{(2,3)},\widehat m^{1\,(3,4)},\widehat m^{(2,4)}\bigr)+\widehat\txa\bigl(\widehat m^{2\,(1,2)},\widehat m^{(2,4)},\widehat m^{(1,4)}\bigr) \end{eqnarray} (written in an obvious adaptation of the previously employed shorthand notation), leads us to set \begin{eqnarray}\nn &&m\bigl(\bigl(\widehat m^{1\,(1,2)},\widehat m^{(2,3)},\widehat m^{(1,3)},\widehat\zeta^{1,2,3}\bigr),\bigl(\widehat m^{1\,(3,4)}_7,\widehat m^{2\,(3,4)}_7,\widehat z_{1,2}^{(3,4)}\bigr),\bigl(\widehat m^{(1,3)},\widehat m^{2\,(3,4)},\widehat m^{(1,4)},\widehat\zeta^{1,3,4}\bigr)\bigr)\cr\cr &:=&\bigl(\bigl(\widehat m^{1\,(1,2)}_7,\widehat m^{2\,(1,2)}_7,\widehat\zeta^{1,2,3}\cdot\widehat z_{1,2}^{(3,4)}\cdot\widehat\zeta^{1,3,4}\bigr),\bigl(\widehat m^{(2,3)},\widehat m^{1\,(3,4)},\widehat m^{(2,4)},1\bigr), \bigl(\widehat m^{2\,(1,2)},\widehat m^{(2,4)},\widehat m^{(1,4)},1\bigr)\bigr)\,. \end{eqnarray} Clearly, for a super-1-gerbe thus defined, all coherence constraints involving the groupoid structure on $\,\mu_{\widehat\mathscr{L}}\,$ and the product isomorphism $\,\widehat\varepsilon\,$ (likewise trivial) are automatically satisfied. Also, once again, we have perfect agreement with our definition of the super-0-gerbe isomorphism.
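The automatic nature of these coherence constraints can be traced to elementary arithmetic in the structure group: in the trivialised description, the 2-isomorphism $\,m\,$ merely aggregates fibre phases, so coherence boils down to associativity (and independence of ordering) of multiplication in ${\mathbb{C}}^\x$.\ The short toy check below (all names purely illustrative) records this reduction:

```python
import random

# In the trivial setting, the associator 2-isomorphism m only reshuffles
# fibre phases: (zeta123, z34, zeta134) -> (zeta123*z34*zeta134, 1, 1).
# Coherence then amounts to the aggregated phase being independent of the
# bracketing and ordering of the factors in C^x.

def total_phase(phases):
    out = 1 + 0j
    for p in phases:
        out *= p
    return out

random.seed(2)
phases = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(3)]
a, b, c = phases
assert abs((a * b) * c - a * (b * c)) < 1e-12                          # associativity
assert abs(total_phase([a, b, c]) - total_phase([c, a, b])) < 1e-12   # reordering
print("coherence of the toy associator reduces to C^x arithmetic")
```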
We conclude our analysis with \begin{Def}\label{def:s2gerbe} The \textbf{Green--Schwarz super-2-gerbe} of curvature $\,\underset{\tx{\ciut{(4)}}}{\chi}\,$ is the quintuple \begin{eqnarray}\nn \mathcal{sG}^{(2)}_{\rm GS}:=\bigl({\mathsf Y}_2\mathcal{M}^{(1)},\underset{\tx{\ciut{(3)}}}{\beta}^{(4)},\widehat\mathscr{G},\mathcal{M}_{\widehat\mathscr{G}},\mu_{\widehat\mathscr{G}}\bigr) \end{eqnarray} constructed in the preceding paragraphs. \begin{flushright}$\diamond$\end{flushright}\end{Def} Our results are amenable to a straightforward abstraction in the spirit of Defs.\,\ref{def:CaEs0g} and \ref{def:CaEs1g}. We leave it to the avid Reader to work out the obvious details of a definition of a Cartan--Eilenberg super-2-gerbe. The same goes for the $\underset{\tx{\ciut{(4)}}}{\chi}$-twisted Vinogradov-type superbrackets of the various fundamental sections of $\,\mathcal{E}^{1,2}{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}\,$ engendered by natural actions of $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\,$ on $\,{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}$. \brem In the light of the findings of Refs.\,\cite{Gawedzki:2010rn,Gawedzki:2012fu,Suszek:2012ddg}, our analysis of the algebroidal structure associated with left- and right-regular as well as adjoint actions of $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\,$ on the super-Minkowskian supertarget $\,{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}\,$ suggests the possibility of the existence of an ${\rm Ad}_\cdot$-equivariant structure on the GS super-$p$-gerbes constructed above. In what follows, we verify this expectation.
\end{Rem}\medskip \subsection{Supersymmetry-equivariance of the Green--Schwarz supergerbe}\label{sec:sgerbequiv} Our choice of the type of cohomology underlying supergeometric considerations and constructions as well as their field-theoretic applications based on the definition of the GS super-$(p+2)$-cocycles raises the natural question about the existence of a {\it structural} realisation ({\it i.e.}, in categorial terms -- by means of suitable morphisms) of the geometric action of the supersymmetry group (on $\,{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}$) on the supergerbe and about the ensuing symmetry content of the super-$\sigma$-model that it (co)determines. An appropriate framework in which such questions can be formulated and answered concretely and rigorously was delineated in Sec.\,\ref{sub:defcanquant} and in the literature cited therein. Below, we adapt the formal gerbe-theoretic language of description of $\sigma$-model symmetries to the supergeometric setting in hand, in conformity with the logic of the hitherto discussion. 
As recalled in Sec.\,\ref{sub:defcanquant}, after Refs.\,\cite{Runkel:2008gr,Gawedzki:2008um,Gawedzki:2010rn,Gawedzki:2012fu,Suszek:2011hg,Suszek:2012ddg}, rigid and gauge symmetries have significantly different gerbe-theoretic emanations: While the former are described by families of gerbe 1-isomorphisms over the original target space of the $\sigma$-model and are directly built into the very construction of the super-gerbes presented in the foregoing sections (which is why we never return to them in the remainder), the geometric data of the latter are bound to scatter over various components of the nerve $\,{\mathsf N}^\bullet({\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\hspace{0.02cm}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}\mathcal{M}^{(1)})\equiv({\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}})^{\x\bullet}\x\mathcal{M}^{(1)}\,$ of the relevant action groupoid $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\hspace{0.02cm}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}\mathcal{M}^{(1)}$, \begin{eqnarray}\nn {\tiny\hspace{-.5cm}\ldots\qquad \alxydim{@R=2cm@C=2cm}{({\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}})^{\x 3}\x\mathcal{M}^{(1)} \ar@/^1.25pc/[r]^{\hspace{.5cm} d^{(3)}_0} \ar@/^.5pc/[r]|-{d^{(3)}_1} \ar@/^-.5pc/[r]|-{d^{(3)}_2} \ar@/^-1.25pc/[r]_{\hspace{.5cm} d^{(3)}_3} & ({\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}})^{\x 2}\x\mathcal{M}^{(1)} \ar@/^.75pc/[r]^{\hspace{.3cm} d^{(2)}_0} \ar@<0ex>[r]|-{d^{(2)}_1} \ar@/^-.75pc/[r]_{\hspace{.3cm} d^{(2)}_2} & {\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\x\mathcal{M}^{(1)} \ar@<.5ex>[r]^{\hspace{.75cm} d^{(1)}_0\equiv{\rm pr}_2} \ar@<-.75ex>[r]_{\hspace{.5cm} d^{(1)}_1\equiv\ell_\cdot^{(1)}} & \mathcal{M}^{(1)}}\,,} \end{eqnarray} and so their analysis in the supergeometric setting unavoidably goes beyond the structural framework of Defs.\,\ref{def:CaEs0g} and \ref{def:CaEs1g}, forcing us to abstract a suitable general definition of a supersymmetry-equivariant structure from the study of particular cases, to which we turn next. 
When addressing them, we should keep in mind that the choice of the symmetry group alone is not enough in general to decide whether the corresponding rigid symmetry of the (super-)$\sigma$-model is amenable to gauging or not -- indeed, in the case of a Lie-group target $\,{\rm G}$,\ we have a variety of representations of the group $\,{\rm G}\,$ (or any of its subgroups, for that matter) on itself that embed in the product $\,{\rm G}\x{\rm G}\,$ representing the independent left- and right-regular translations, and it has long been known (and confirmed anew in Refs.\,\cite{Gawedzki:2010rn,Gawedzki:2012fu}, from the gerbe-theoretic vantage point) that in the case of the full group $\,{\rm G}\,$ the adjoint representation admits gauging, whereas the left- and right-regular ones do not ({\it i.e.}, there appear anomalies). In what follows, we invoke the interpretation of the two-dimensional super-$\sigma$-model as a super-variant of the WZW model as motivation and set out to corroborate the anticipated equivariance pattern in the case of the GS super-0-gerbe and that of the GS super-1-gerbe, leaving the technically much heavier but conceptually fully analogous case of the GS super-2-gerbe (in which we conjecture the very same pattern to be realised) as an exercise for the interested Reader. In so doing, we are guided by the intuition developed through the study, carried out in the previous section, of the algebroidal structures associated with the various actions of the supersymmetry group $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\,$ on the super-Minkowskian supertarget $\,{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}$.\ At the same time, we keep in mind the somewhat involved nature of the right-regular symmetry of the super-$\sigma$-models under consideration, to be recalled and studied in detail in Sec.\,\ref{sec:kappa}. 
By way of a warm-up, and in order to develop the necessary intuitions as to the relevant geometric structure behind supersymmetry-equivariance, we first illustrate the anomalous nature of the left-regular representation of supersymmetry on the example of the super-0-gerbe, and then examine at length its non-anomalous adjoint representation for the super-0-gerbe and the super-1-gerbe. Finally, in Sec.\,\ref{sec:kappa}, we discuss a very special Lie-superalgebraic symmetry of the super-$\sigma$-model and its super-gerbe which is induced from (constrained) right-regular translations. Prior to launching the proper case-by-case study, let us adapt the notion of left-invariance to the setting of an equivariant structure on a (super-)$p$-gerbe over a (super)manifold $\,\mathcal{M}\,$ endowed with a (left) group action \begin{eqnarray}\nn \ell_\cdot\ :\ {\rm G}\x\mathcal{M}\longrightarrow\mathcal{M}\,, \end{eqnarray} by which we mean identifying the appropriate action on the $n$-th component of the nerve $\,{\mathsf N}^\bullet({\rm G}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}\mathcal{M})\equiv{\rm G}^{\x\bullet}\x\mathcal{M}\,$ for which all the face maps of the nerve are equivariant, with the action on the bottom level of the nerve fixed in the form $\,\ell_\cdot^0\equiv\ell_\cdot$.\ The equivariance of the maps in question ensures that objects which are LI with respect to the original action of $\,{\rm G}\,$ pull back to objects with the same property with respect to the new action. 
It is clear that the unique choice of the sought-after action on $\,{\mathsf N}^\bullet({\rm G}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}\mathcal{M})\,$ reads \begin{eqnarray}\nn \ell^n_\cdot\ &:&\ {\rm G}\x{\mathsf N}^n({\rm G}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}\mathcal{M})\longrightarrow{\mathsf N}^n({\rm G}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}\mathcal{M})\cr\cr &:&\ \bigl(g,(g_1,g_2,\ldots,g_n,x)\bigr)\longmapsto\bigl({\rm Ad}_g(g_1),{\rm Ad}_g(g_2),\ldots,{\rm Ad}_g(g_n),\ell_g(x)\bigr)\,, \end{eqnarray} and so it is natural -- in the context of left-invariant cohomology and the associated (super)geometric constructions -- to demand invariance of the geometric objects (tensors and their geometrisations) over components $\,{\mathsf N}^n({\rm G}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}\mathcal{M})\,$ of the nerve with respect to the respective distinguished extensions $\,\ell^n_\cdot\,$ of $\,\ell_\cdot$.\ In fact, we may be slightly more general and consider equivariance with respect to the action of any normal Lie sub-supergroup $\,{\rm H}\subset{\rm G}\,$ of the Lie supergroup $\,{\rm G}$, \begin{eqnarray}\nn \forall_{g\in{\rm G}}\ :\ {\rm Ad}_g({\rm H})\subset{\rm H}\,, \end{eqnarray} with the associated action groupoid $\,{\rm H}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}\mathcal{M}\,$ with the nerve \begin{eqnarray}\label{eq:HonMnerve} \hspace{1.5cm}\ldots\qquad \alxydim{@R=2cm@C=2cm}{{\rm H}^{\x 3}\x\mathcal{M} \ar@/^1.25pc/[r]^{\hspace{.5cm} d^{(3)}_0} \ar@/^.5pc/[r]|-{d^{(3)}_1} \ar@/^-.5pc/[r]|-{d^{(3)}_2} \ar@/^-1.25pc/[r]_{\hspace{.5cm} d^{(3)}_3} & {\rm H}^{\x 2}\x\mathcal{M} \ar@/^.75pc/[r]^{\hspace{.3cm} d^{(2)}_0} \ar@<0ex>[r]|-{d^{(2)}_1} \ar@/^-.75pc/[r]_{\hspace{.3cm} d^{(2)}_2} & {\rm H}\x\mathcal{M} \ar@<.5ex>[r]^{d^{(1)}_0\equiv{\rm pr}_2} \ar@<-.75ex>[r]_{d^{(1)}_1\equiv\ell_\cdot} & \mathcal{M}}\,, \end{eqnarray} in which case we shall use the same symbols to denote the corresponding actions \begin{eqnarray} \ell_\cdot^n\ &:&\ 
{\rm G}\x{\mathsf N}^n({\rm H}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}\mathcal{M})\longrightarrow{\mathsf N}^n({\rm H}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}\mathcal{M})\cr\cr &:&\ \bigl(g,(h_1,h_2,\ldots,h_n,x)\bigr)\longmapsto\bigl({\rm Ad}_g(h_1),{\rm Ad}_g(h_2),\ldots,{\rm Ad}_g(h_n),\ell_g(x)\bigr)\,.\label{eq:GLIonHM} \end{eqnarray} In the case of immediate interest, the general scheme specialises to one of the following three possibilities that we shall encounter below: $\,\ell_\cdot\,$ is the left-regular action of $\,{\rm G}\,$ on itself, or the superposition $\,\wp\circ{\rm Inv}\,$ of the right-regular action of $\,{\rm G}\,$ on itself with the group inverse, or the adjoint action $\,{\rm Ad}_\cdot\ :\ {\rm G}\x{\rm G}\longrightarrow{\rm G}\ :\ (h,g)\longmapsto h\cdot g\cdot h^{-1}$.\ Consequently, we need an explicit form of the adjoint action of the Lie supergroup $\,\mathcal{M}^{(1)}\,$ on itself and that of its supercentral extension $\,\mathcal{M}_1^{(2)}\,$ of Prop.\,\ref{prop:M2group} on itself. As the latter contains the former, we confine ourselves to writing out the latter: \begin{eqnarray} {\rm Ad}^{(2)}_\cdot\ &:&\ \mathcal{M}_1^{(2)}\x\mathcal{M}_1^{(2)}\longrightarrow\mathcal{M}_1^{(2)}\cr\cr &:&\ \bigl(\bigl(\varepsilon,y,\zeta\bigr),\bigl(\theta^\a,x^I,\xi_\beta\bigr)\bigr)\longmapsto\bigl(\theta^\a,x^I-\ovl\varepsilon\,\Gamma^I\,\theta,\xi_\beta-\tfrac{1}{2}\,\bigl(\ovl\varepsilon\,\Gamma_I\,\theta\bigr)\,\ovl\Gamma{}^I_{\beta\gamma}\,\theta^\gamma\bigr)\,.\label{eq:Adong} \end{eqnarray} We are now ready to perform the detailed analysis of the invariance resp.\ equivariance properties of the various objects defined over $\,{\mathsf N}^\bullet({\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}\mathcal{M}^{(1)})$.
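\brem The equivariance of the face maps with respect to the lifted actions $\,\ell^n_\cdot$,\ invoked above, may be verified in one line; we spell the bottom case out for the Reader's convenience. For any $\,g\in{\rm G}\,$ and $\,(h,x)\in{\rm H}\x\mathcal{M}$,\ we find \begin{eqnarray}\nn d^{(1)}_0\circ\ell^1_g(h,x)={\rm pr}_2\bigl({\rm Ad}_g(h),\ell_g(x)\bigr)=\ell_g(x)=\ell_g\circ d^{(1)}_0(h,x)\,,\cr\cr d^{(1)}_1\circ\ell^1_g(h,x)=\ell_{{\rm Ad}_g(h)}\circ\ell_g(x)=\ell_{g\cdot h\cdot g^{-1}\cdot g}(x)=\ell_g\circ\ell_h(x)=\ell_g\circ d^{(1)}_1(h,x)\,, \end{eqnarray} the second chain of equalities being a direct consequence of the defining property $\,\ell_{g_1}\circ\ell_{g_2}=\ell_{g_1\cdot g_2}\,$ of a left group action. The analogous identities for the higher face maps follow in exactly the same manner. \end{Rem}\medskip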
\subsubsection{The GS super-0-gerbe} We begin by looking for an isomorphism \begin{eqnarray}\nn \Upsilon^{(0,\ell_\cdot)}_{{\rm GS},p}\ :\ \ell_\cdot^*\mathscr{L}^{(0)}\xrightarrow{\ \cong\ }{\rm pr}_2^*\mathscr{L}^{(0)}\otimes\mathcal{J}_{\underset{\tx{\ciut{(1)}}}{\rho}{}^{(\ell_\cdot)}} \end{eqnarray} of (trivial) principal ${\mathbb{C}}^\x$-bundles over the supermanifold $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\x\mathcal{M}^{(1)}$,\ of which the last one, $\,\mathcal{J}_{\underset{\tx{\ciut{(1)}}}{\rho}{}^{(\ell_\cdot)}}$,\ is to be understood as trivial in the sense of invariant cohomology, that is -- admitting a connection 1-form $\,\underset{\tx{\ciut{(1)}}}{\rho}{}^{(\ell_\cdot)}\,$ on the base which is left-invariant with respect to $\,\ell^1_\cdot\,$ (this enables us to identify it as a super-0-gerbe trivial in the ${\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}$-invariant cohomology). In so doing, we impose the additional requirement that the isomorphism also be left-invariant with respect to this action, which means that its data (a super-0-form on $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\x\mathcal{M}^{(1)}$) should have the same property. We compute \begin{eqnarray}\nn \bigl(\ell_\cdot^*-{\rm pr}_2^*\bigr)\underset{\tx{\ciut{(2)}}}{\chi}\bigl((\theta_1,x_1),(\theta_2,x_2)\bigr)={\mathsf d}\bigl(\ovl\theta_1\,\Gamma_{11}\,{\mathsf d}(\theta_1+2\theta_2)\bigr)\,, \end{eqnarray} and so we conclude that \begin{eqnarray}\nn \underset{\tx{\ciut{(1)}}}{\rho}^{(\ell_\cdot)}\bigl((\theta_1,x_1),(\theta_2,x_2)\bigr)=\ovl\theta_1\,\Gamma_{11}\,{\mathsf d}(\theta_1+2\theta_2)+{\mathsf d}\Delta^{(\ell_\cdot)}\bigl((\theta_1,x_1),(\theta_2,x_2)\bigr)\,, \end{eqnarray} where $\,{\mathsf d}\Delta^{(\ell_\cdot)}\,$ (written in terms of a super-0-form $\,\Delta^{(\ell_\cdot)}$) is an admissible de Rham-exact LI correction.
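\brem The variation computed above may be reconstructed in one line. As a sketch, assume that the GS super-2-cocycle takes the standard super-Minkowskian form $\,\underset{\tx{\ciut{(2)}}}{\chi}={\mathsf d}\ovl\theta\wedge\Gamma_{11}\,{\mathsf d}\theta\,$ and that the Majorana bilinear obeys the symmetry property $\,{\mathsf d}\ovl\theta_2\wedge\Gamma_{11}\,{\mathsf d}\theta_1={\mathsf d}\ovl\theta_1\wedge\Gamma_{11}\,{\mathsf d}\theta_2\,$ on the coordinate differentials (both statements hold in the conventions of the present paper). Since the odd coordinate of the product $\,\ell_{(\theta_1,x_1)}(\theta_2,x_2)\,$ is $\,\theta_1+\theta_2$,\ we then obtain \begin{eqnarray}\nn \bigl(\ell_\cdot^*-{\rm pr}_2^*\bigr)\underset{\tx{\ciut{(2)}}}{\chi}\bigl((\theta_1,x_1),(\theta_2,x_2)\bigr)&=&{\mathsf d}\bigl(\ovl\theta_1+\ovl\theta_2\bigr)\wedge\Gamma_{11}\,{\mathsf d}\bigl(\theta_1+\theta_2\bigr)-{\mathsf d}\ovl\theta_2\wedge\Gamma_{11}\,{\mathsf d}\theta_2\cr\cr &=&{\mathsf d}\ovl\theta_1\wedge\Gamma_{11}\,{\mathsf d}\theta_1+2\,{\mathsf d}\ovl\theta_1\wedge\Gamma_{11}\,{\mathsf d}\theta_2={\mathsf d}\bigl(\ovl\theta_1\,\Gamma_{11}\,{\mathsf d}(\theta_1+2\theta_2)\bigr)\,, \end{eqnarray} in agreement with the formula quoted above. \end{Rem}\medskip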
In consequence of the (de Rham-)cohomological triviality of $\,{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}$,\ the left-invariance of $\,\underset{\tx{\ciut{(1)}}}{\rho}^{(\ell_\cdot)}\,$ is tantamount to the left-invariance of the super-0-form $\,\Delta^{(\ell_\cdot)}\,$ itself. Passing, next, to the level of connection 1-forms, we readily establish the identity \begin{eqnarray}\nn \ell_\cdot^*\underset{\tx{\ciut{(1)}}}{\beta}+{\mathsf d}\bigl({\rm F}+\Delta^{(\ell_\cdot)}\bigr)={\rm pr}_2^*\underset{\tx{\ciut{(1)}}}{\beta}+\underset{\tx{\ciut{(1)}}}{\rho}^{(\ell_\cdot)}\,, \end{eqnarray} where \begin{eqnarray}\nn {\rm F}\bigl((\theta_1,x_1),(\theta_2,x_2)\bigr)=\ovl\theta_1\,\Gamma_{11}\,\theta_2\,, \end{eqnarray} which in conjunction with the non-invariance property of the latter super-0-form, \begin{eqnarray}\nn \ell^{1\,*}_{(\varepsilon,y,\zeta)}{\rm F}\bigl((\theta_1,x_1),(\theta_2,x_2)\bigr)={\rm F}\bigl((\theta_1,x_1),(\theta_2,x_2)\bigr)+\ovl\varepsilon\,\Gamma_{11}\,\theta_1\,, \end{eqnarray} implies conclusively that there does {\it not} exist an isomorphism of the type sought after. This is to be contrasted with the situation that arises when the left-regular action is replaced by the adjoint action. We have \begin{eqnarray}\nn \bigl({\rm Ad}_\cdot^*-{\rm pr}_2^*\bigr)\underset{\tx{\ciut{(2)}}}{\chi}\bigl((\theta_1,x_1),(\theta_2,x_2)\bigr)=0\,, \end{eqnarray} and so we may take \begin{eqnarray}\nn \underset{\tx{\ciut{(1)}}}{\rho}^{({\rm Ad}_\cdot)}={\mathsf d}\Delta^{({\rm Ad}_\cdot)}\,, \end{eqnarray} with $\,\Delta^{({\rm Ad}_\cdot)}\,$ an LI super-0-form.
In view of the equality \begin{eqnarray}\nn {\rm Ad}_\cdot^*\underset{\tx{\ciut{(1)}}}{\beta}={\rm pr}_2^*\underset{\tx{\ciut{(1)}}}{\beta}\,, \end{eqnarray} we ultimately conclude that we may -- without any loss of generality -- set \begin{eqnarray}\nn \underset{\tx{\ciut{(1)}}}{\rho}^{({\rm Ad}_\cdot)}\equiv 0 \end{eqnarray} and identify the manifestly LI isomorphism in the trivial form \begin{eqnarray}\nn \Upsilon^{(0,{\rm Ad}_\cdot)}_{{\rm GS},p}\equiv{\rm id}_{{\rm pr}_2^*\mathscr{L}^{(0)}}\ :\ {\rm Ad}_\cdot^*\mathscr{L}^{(0)}\xrightarrow{\ \cong\ }{\rm pr}_2^*\mathscr{L}^{(0)}\otimes\mathcal{J}_{\underset{\tx{\ciut{(1)}}}{\rho}{}^{({\rm Ad}_\cdot)}}\equiv{\rm pr}_2^*\mathscr{L}^{(0)}\,. \end{eqnarray} The latter satisfies the usual coherence condition \begin{eqnarray}\nn \bigl(d_0^{(2)\,*}\Upsilon^{(0,{\rm Ad}_\cdot)}_{{\rm GS},p}\otimes{\rm id}_{\mathcal{J}_{d_2^{(2)\,*}\underset{\tx{\ciut{(1)}}}{\rho}{}^{({\rm Ad}_\cdot)}}}\bigr)\circ d_2^{(2)\,*}\Upsilon^{(0,{\rm Ad}_\cdot)}_{{\rm GS},p}=d_1^{(2)\,*}\Upsilon^{(0,{\rm Ad}_\cdot)}_{{\rm GS},p} \end{eqnarray} (written in terms of the face maps of the nerve $\,{\mathsf N}^\bullet({\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}\mathcal{M}^{(1)})\,$ for the adjoint action), to be imposed on data of an equivariant structure on a bundle. This requires that the identity \begin{eqnarray}\nn \bigl(d_0^{(2)\,*}+d_2^{(2)\,*}-d_1^{(2)\,*}\bigr)\underset{\tx{\ciut{(1)}}}{\rho}^{({\rm Ad}_\cdot)}=0 \end{eqnarray} hold true, which it does trivially in the case in hand. 
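\brem It may be helpful to unravel the last condition. Evaluated at a point $\,(g_1,g_2,x)\,$ of the second component of the nerve, and in the conventions implicit in the explicit checks of Sec.\,\ref{subsub:Adequivstr1}, the simplicial identity imposed upon $\,\underset{\tx{\ciut{(1)}}}{\rho}{}^{({\rm Ad}_\cdot)}\,$ is just the groupoid 1-cocycle condition \begin{eqnarray}\nn \underset{\tx{\ciut{(1)}}}{\rho}{}^{({\rm Ad}_\cdot)}\bigl(g_1,{\rm Ad}_{g_2}(x)\bigr)+\underset{\tx{\ciut{(1)}}}{\rho}{}^{({\rm Ad}_\cdot)}(g_2,x)=\underset{\tx{\ciut{(1)}}}{\rho}{}^{({\rm Ad}_\cdot)}(g_1\cdot g_2,x)\,, \end{eqnarray} trivially satisfied in the case in hand, and it is in this explicit form that the analogous condition shall be verified for the data of the GS super-1-gerbe in Sec.\,\ref{subsub:Adequivstr1}. \end{Rem}\medskip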
We are thus led to postulate \begin{Def}\label{def:SUSYequiv0} Adopt the notation of Def.\,\ref{def:CaEs0g} and let $\,{\rm H}\subset{\rm G}\,$ be a normal Lie sub-supergroup of the Lie supergroup $\,{\rm G}\,$ endowed with a left action \begin{eqnarray}\nn \ell_\cdot\ :\ {\rm H}\x{\rm G}\longrightarrow{\rm G}\,, \end{eqnarray} the latter determining the action groupoid $\,{\rm H}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}{\rm G}\,$ with the nerve \eqref{eq:HonMnerve} (where $\,\mathcal{M}\equiv{\rm G}$). A {\bf supersymmetric ${\rm H}$-equivariant structure on the Cartan--Eilenberg super-0-gerbe} $\,\mathcal{G}^{(0)}_{\rm CaE}=({\rm L},\pi_{\rm L},\txa_{\rm L})\,$ with a connection 1-form $\,\txa_{\rm L}({\mathbb{X}},z)={\mathsf i}\,\frac{{\mathsf d} z}{z}+\underset{\tx{\ciut{(1)}}}{{\rm b}}({\mathbb{X}})\,,\,$ of curvature $\,\underset{\tx{\ciut{(2)}}}{\chi}\,$ over $\,{\rm G}\,$ is a pair $\,(\Upsilon^{(0)},\underset{\tx{\ciut{(1)}}}{\rho})\,$ composed of \begin{itemize} \item a super-1-form $\,\underset{\tx{\ciut{(1)}}}{\rho}\,$ on $\,{\rm H}\x{\rm G}\,$ satisfying the identities \begin{eqnarray}\nn \bigl(d_1^{(1)\,*}-d_0^{(1)\,*}\bigr)\underset{\tx{\ciut{(2)}}}{\chi}={\mathsf d}\underset{\tx{\ciut{(1)}}}{\rho} \end{eqnarray} and \begin{eqnarray}\label{eq:rho1inv} \bigl(d_0^{(2)\,*}+d_2^{(2)\,*}-d_1^{(2)\,*}\bigr)\underset{\tx{\ciut{(1)}}}{\rho}=0\,, \end{eqnarray} and LI with respect to the action $\,\ell^1_\cdot\,$ of \Reqref{eq:GLIonHM} (where $\,\mathcal{M}\equiv{\rm G}$), \begin{eqnarray}\nn \forall_{{\mathbb{X}}\in{\rm G}}\ :\ \ell_{\mathbb{X}}^{1\,*}\underset{\tx{\ciut{(1)}}}{\rho}=\underset{\tx{\ciut{(1)}}}{\rho}\,, \end{eqnarray} \item a connection-preserving isomorphism of principal ${\mathbb{C}}^\x$-bundles \begin{eqnarray}\nn \Upsilon^{(0)}\ :\ d_1^{(1)\,*}{\rm L}\xrightarrow{\ \cong\ }d_0^{(1)\,*}{\rm L}\otimes\mathcal{J}_{\underset{\tx{\ciut{(1)}}}{\rho}} \end{eqnarray} determined by a super-0-form $\,{\rm F}\,$ on $\,{\rm H}\x{\rm G}\,$ satisfying 
the identity \begin{eqnarray}\nn \bigl(d_1^{(1)\,*}-d_0^{(1)\,*}\bigr)\underset{\tx{\ciut{(1)}}}{{\rm b}}=\underset{\tx{\ciut{(1)}}}{\rho}+{\mathsf d}{\rm F}\,, \end{eqnarray} LI with respect to the action $\,\ell^1_\cdot\,$ of \Reqref{eq:GLIonHM} (where $\,\mathcal{M}\equiv{\rm G}$), \begin{eqnarray}\nn \forall_{{\mathbb{X}}\in{\rm G}}\ :\ \ell_{\mathbb{X}}^{1\,*}{\rm F}={\rm F}\,, \end{eqnarray} and subject to the coherence constraints \begin{eqnarray}\label{eq:F1inv} \bigl(d_0^{(2)\,*}+d_2^{(2)\,*}-d_1^{(2)\,*}\bigr){\rm F}=0\,. \end{eqnarray} \end{itemize} \begin{flushright}$\diamond$\end{flushright}\end{Def} \brem The last two conditions in the above definition (imposed on $\,{\rm F}$) do not follow directly from our analysis, due to the triviality of the relevant structures in the case of the GS super-0-gerbe, but otherwise constitute its natural generalisation consistent with the invariant-cohomological approach developed in the present paper, and -- in this sense -- provide a natural logical completion of the earlier part of the definition. We shall encounter instances of such a more general behaviour of geometric objects under supersymmetry actions in the next example. \end{Rem}\medskip We may now summarise our hitherto findings in the form of \begin{Prop}\label{prop:Adequivstr0} The Green--Schwarz super-0-gerbe of Def.\,\ref{def:s0gerbe} carries a canonical supersymmetric equivariant structure $\,\Upsilon^{(0,{\rm Ad}_\cdot)}_{{\rm GS},p}\,$ with respect to the adjoint action of the Lie supergroup $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\,$ on itself, relative to the LI super-1-form $\,\underset{\tx{\ciut{(1)}}}{\rho}^{({\rm Ad}_\cdot)}=0$. \end{Prop} \medskip \subsubsection{The GS super-1-gerbe}\label{subsub:Adequivstr1} Next, we consider the GS super-1-gerbe.
We readily convince ourselves that the left-regular representation is anomalous just as in the previous case, and so we pass directly to the adjoint representation, in which we obtain the relation \begin{eqnarray}\nn \bigl({\rm Ad}_\cdot^*-{\rm pr}_2^*\bigr)\underset{\tx{\ciut{(3)}}}{\chi}={\mathsf d}\underset{\tx{\ciut{(2)}}}{\rho}{}^{({\rm Ad}_\cdot)} \end{eqnarray} with \begin{eqnarray}\nn \underset{\tx{\ciut{(2)}}}{\rho}{}^{({\rm Ad}_\cdot)}\bigl((\theta_1,x_1),(\theta_2,x_2)\bigr)=\bigl(\ovl\theta_2\,\Gamma{}^I\,\theta_1\bigr)\,{\mathsf d}\ovl\theta_2\wedge\Gamma_I\,{\mathsf d}\theta_2\,. \end{eqnarray} The latter super-2-form is manifestly LI with respect to the action $\,\ell^1_\cdot\,$ of \Reqref{eq:GLIonHM} (where $\,\mathcal{M}\equiv{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}$), \begin{eqnarray}\nn {\rm Ad}_{(\varepsilon,y)}^{1\,*}\underset{\tx{\ciut{(2)}}}{\rho}{}^{({\rm Ad}_\cdot)}\bigl((\theta_1,x_1),(\theta_2,x_2)\bigr)&=&\underset{\tx{\ciut{(2)}}}{\rho}{}^{({\rm Ad}_\cdot)}\bigl({\rm Ad}_{(\varepsilon,y)}(\theta_1,x_1),{\rm Ad}_{(\varepsilon,y)}(\theta_2,x_2)\bigr)\cr\cr &=&\underset{\tx{\ciut{(2)}}}{\rho}{}^{({\rm Ad}_\cdot)}\bigl((\theta_1,x_1),(\theta_2,x_2)\bigr)\,, \end{eqnarray} and so we may look for a 1-isomorphism of (super-)gerbes \begin{eqnarray}\nn \Upsilon^{(1,{\rm Ad}_\cdot)}_{{\rm GS},p}\ :\ {\rm Ad}_\cdot^*\mathcal{G}_{{\rm GS},p}^{(1)}\xrightarrow{\ \cong\ }{\rm pr}_2^*\mathcal{G}_{{\rm GS},p}^{(1)}\otimes\mathcal{I}_{\underset{\tx{\ciut{(2)}}}{\rho}{}^{({\rm Ad}_\cdot)}} \end{eqnarray} over $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\x\mathcal{M}^{(1)}\,$ which would be {\it left-invariant} in a natural manner, suggested in part by Def.\,\ref{def:SUSYequiv0}.
To this end, we consider the surjective submersion \begin{eqnarray}\nn\hspace{-1cm} {\small\ldots \alxydim{@R=2cm@C=2cm}{\bigl({\mathsf Y}_1\mathcal{M}^{(1)}\bigr)^{\x 4} \ar@/^1.25pc/[r]^{{\mathsf Y}_1 d^{(3)}_0} \ar@/^.5pc/[r]|-{{\mathsf Y}_1 d^{(3)}_1} \ar@/^-.5pc/[r]|-{{\mathsf Y}_1 d^{(3)}_2} \ar@/^-1.25pc/[r]_{{\mathsf Y}_1 d^{(3)}_3} \ar[d]_{\pi_{{\mathsf Y}_1\mathcal{M}^{(1)}}^{\x 4}} & \bigl({\mathsf Y}_1\mathcal{M}^{(1)}\bigr)^{\x 3} \ar@/^.75pc/[r]^{{\mathsf Y}_1 d^{(2)}_0} \ar@<0ex>[r]|-{{\mathsf Y}_1 d^{(2)}_1} \ar@/^-.75pc/[r]_{{\mathsf Y}_1 d^{(2)}_2} \ar[d]_{\pi_{{\mathsf Y}_1\mathcal{M}^{(1)}}^{\x 3}} & \bigl({\mathsf Y}_1\mathcal{M}^{(1)}\bigr)^{\x 2} \ar@<.75ex>[r]^{{\mathsf Y}_1 d^{(1)}_0\equiv{\rm pr}_2} \ar@<-.75ex>[r]_{{\mathsf Y}_1 d^{(1)}_1\equiv{\rm Ad}_\cdot^{(2)}} \ar[d]_{\pi_{{\mathsf Y}_1\mathcal{M}^{(1)}}^{\x 2}} & {\mathsf Y}_1\mathcal{M}^{(1)} \ar[d]^{\pi_{{\mathsf Y}_1\mathcal{M}^{(1)}}} \\ \bigl({\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\bigr)^{\x 3}\x\mathcal{M}^{(1)} \ar@/^1.25pc/[r]^{\hspace{.5cm} d^{(3)}_0} \ar@/^.5pc/[r]|-{d^{(3)}_1} \ar@/^-.5pc/[r]|-{d^{(3)}_2} \ar@/^-1.25pc/[r]_{\hspace{.5cm} d^{(3)}_3} & \bigl({\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\bigr)^{\x 2}\x\mathcal{M}^{(1)} \ar@/^.75pc/[r]^{\hspace{.3cm} d^{(2)}_0} \ar@<0ex>[r]|-{d^{(2)}_1} \ar@/^-.75pc/[r]_{\hspace{.3cm} d^{(2)}_2} & {\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\x\mathcal{M}^{(1)} \ar@<.5ex>[r]^{\hspace{.75cm} d^{(1)}_0\equiv{\rm pr}_2} \ar@<-.75ex>[r]_{\hspace{.5cm} d^{(1)}_1\equiv{\rm Ad}_\cdot} & \mathcal{M}^{(1)}}} \end{eqnarray} over the nerve $\,{\mathsf N}^\bullet({\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}\mathcal{M}^{(1)})$,\ with the important covering property \begin{eqnarray}\nn d_i^{(n)}\circ\pi_{{\mathsf Y}_1\mathcal{M}^{(1)}}^{\x\,n+1}=\pi_{{\mathsf Y}_1\mathcal{M}^{(1)}}^{\x n}\circ{\mathsf Y}_1 d_i^{(n)} \end{eqnarray} that fixes the form of $\,{\mathsf Y}_1 d_i^{(n)}$.\ Denote \begin{eqnarray}\nn 
\mathcal{M}^{(1)\,n}:=\bigl({\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\bigr)^{\x n}\x\mathcal{M}^{(1)} \end{eqnarray} in order to unclutter the formul\ae~that follow. Taking the supermanifold $\,\bigl({\mathsf Y}_1\mathcal{M}^{(1)}\bigr)^{\x 2}\,$ as the common surjective submersion for the pullback gerbes $\,d_i^{(1)\,*}\mathcal{G}^{(1)}_{{\rm GS},p}\,$ and the trivial gerbe $\,\mathcal{I}_{\underset{\tx{\ciut{(2)}}}{\rho}{}^{({\rm Ad}_\cdot)}}$,\ we commence our search for the principal ${\mathbb{C}}^\x$-bundle of the 1-isomorphism $\,\Upsilon^{(1,{\rm Ad}_\cdot)}\,$ at the surjective submersion \begin{eqnarray}\nn &{\mathsf Y}\bigl(\mathcal{M}^{(1)\,1}\bigr):=\bigl({\mathsf Y}_1\mathcal{M}^{(1)}\bigr)^{\x 2}\x_{\mathcal{M}^{(1)\,1}}\bigl({\mathsf Y}_1\mathcal{M}^{(1)}\bigr)^{\x 2}\ni\bigl(\bigl((\theta_1,x_1,\xi_1),(\theta_2,x_2,\xi_2)\bigr),\bigl((\theta_1,x_1,\xi_3),(\theta_2,x_2,\xi_4)\bigr)\bigr)&\cr\cr &\hspace{4cm}=:(y_{1,2;1,2},y_{1,2;3,4})\,,& \end{eqnarray} where we obtain, in a direct computation invoking \eqref{eq:Adong}, the identity \begin{eqnarray}\nn {\rm pr}_{3,4}^*{\rm pr}_2^*\underset{\tx{\ciut{(2)}}}{\beta}+{\rm pr}_{3,4}^*\pi_{{\mathsf Y}_1\mathcal{M}^{(1)}}^{\x 2\,*}\underset{\tx{\ciut{(2)}}}{\rho}^{({\rm Ad}_\cdot)}-{\rm pr}_{1,2}^*{\rm Ad}_\cdot^{(2)\,*}\underset{\tx{\ciut{(2)}}}{\beta}{}^{(2)}={\mathsf d}{\mathsf E}\,, \end{eqnarray} in which \begin{eqnarray}\nn {\mathsf E}(y_{1,2;1,2},y_{1,2;3,4})=\tfrac{1}{2}\,\bigl(\ovl\theta_1\,\Gamma_I\,\theta_2\bigr)\,\bigl(\bigl(\ovl\theta_1-\ovl\theta_2\bigr)\,\Gamma^I\,{\mathsf d}\theta_2\bigr)+\theta_2^\a\,{\mathsf d}\bigl(x_1^I\,\ovl\Gamma{}_{I\,\a\beta}\,\theta_2^\beta-x_2^I\,\ovl\Gamma{}_{I\,\a\beta}\,\theta_1^\beta+\xi_{4\,\a}-\xi_{2\,\a}\bigr) \end{eqnarray} is a super-1-form on $\,{\mathsf Y}\bigl(\mathcal{M}^{(1)\,1}\bigr)\,$ which we identify as the base component of a principal connection 1-form on a (trivial) principal ${\mathbb{C}}^\x$-bundle \begin{eqnarray}\nn \pi_\mathscr{E}\equiv{\rm
pr}_1\ :\ \mathscr{E}:={\mathsf Y}\bigl(\mathcal{M}^{(1)\,1}\bigr)\x{\mathbb{C}}^\x\longrightarrow{\mathsf Y}\bigl(\mathcal{M}^{(1)\,1}\bigr)\ :\ \bigl((y_{1,2;1,2},y_{1,2;3,4}),z\bigr)\longmapsto(y_{1,2;1,2},y_{1,2;3,4}) \end{eqnarray} determining the 1-isomorphism sought after. In the above coordinates, the principal connection 1-form reads \begin{eqnarray}\nn \txa_\mathscr{E}\bigl((y_{1,2;1,2},y_{1,2;3,4}),z\bigr)={\mathsf i}\,\tfrac{{\mathsf d} z}{z}+{\mathsf E}(y_{1,2;1,2},y_{1,2;3,4})\,. \end{eqnarray} Following the logic overarching our considerations, we inspect the supersymmetry variation of its curvature \begin{eqnarray}\nn \underset{\tx{\ciut{(2)}}}{{\mathsf H}}:={\mathsf d}{\mathsf E} \end{eqnarray} with respect to the induced action \begin{eqnarray}\nn {\rm Ad}_\cdot^{(2)\,\x2}\ &:&\ {\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\x{\mathsf Y}\bigl(\mathcal{M}^{(1)\,1}\bigr)\longrightarrow{\mathsf Y}\bigl(\mathcal{M}^{(1)\,1}\bigr)\cr\cr &:&\ \bigl((\varepsilon,y,\zeta),(y_{1,2;1,2},y_{1,2;3,4})\bigr)\longmapsto\bigl({\rm Ad}^{(2)}_{(\varepsilon,y,\zeta)}(y_{1,2;1,2}),{\rm Ad}^{(2)}_{(\varepsilon,y,\zeta)}(y_{1,2;3,4})\bigr)\,, \end{eqnarray} to the effect: \begin{eqnarray}\nn \bigl({\rm Ad}_\cdot^{(2)\,\x2\,*}-{\rm pr}_2^*\bigr)\underset{\tx{\ciut{(2)}}}{{\mathsf H}}={\mathsf d}\underset{\tx{\ciut{(1)}}}{\rho}{}^{({\rm Ad}^{(2)\,\x2}_\cdot)}\,, \end{eqnarray} where \begin{eqnarray}\nn \underset{\tx{\ciut{(1)}}}{\rho}{}^{({\rm Ad}^{(2)\,\x2}_\cdot)}\bigl((\varepsilon,y,\zeta),(y_{1,2;1,2},y_{1,2;3,4})\bigr):=\bigl(\ovl\theta_1\,\Gamma^I\,\theta_2\bigr)\,\bigl(\ovl\varepsilon\,\Gamma_I\,{\mathsf d}\theta_2\bigr)\,. 
\end{eqnarray} The appearance of the latter super-1-form immediately suggests a suitable notion of `left-invariance' for the candidate data $\,(\mathscr{E},\txa_\mathscr{E})\,$ of the 1-isomorphism, to wit, an equivariant structure in the sense of Def.\,\ref{def:SUSYequiv0} over the nerve \begin{eqnarray}\nn\hspace{-.5cm} {\small\ldots\quad \alxydim{@R=2cm@C=2.5cm}{\bigl({\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\bigr)^{\x 2}\x{\mathsf Y}\bigl(\mathcal{M}^{(1)\,1}\bigr) \ar@/^.75pc/[r]^{\hspace{.3cm} \widetilde d^{(2)}_0} \ar@<0ex>[r]|-{\widetilde d^{(2)}_1} \ar@/^-.75pc/[r]_{\hspace{.3cm} \widetilde d^{(2)}_2} & {\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\x{\mathsf Y}\bigl(\mathcal{M}^{(1)\,1}\bigr) \ar@<.5ex>[r]^{\qquad \widetilde d^{(1)}_0\equiv{\rm pr}_2} \ar@<-.75ex>[r]_{\qquad\quad \widetilde d^{(1)}_1\equiv{\rm Ad}^{(2)\,\x2}_\cdot} & {\mathsf Y}\bigl(\mathcal{M}^{(1)\,1}\bigr)}\,,} \end{eqnarray} of the relevant action groupoid $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}{\mathsf Y}(\mathcal{M}^{(1)\,1})$.\ We now pause to verify this anticipated property. 
To this end, we first check identity \eqref{eq:rho1inv}, which in the present setting takes the simple form \begin{eqnarray}\nn &&\underset{\tx{\ciut{(1)}}}{\rho}{}^{({\rm Ad}^{(2)\,\x2}_\cdot)}\bigl((\varepsilon_1,y_1,\zeta_1),{\rm Ad}_{(\varepsilon_2,y_2,\zeta_2)}^{(2)\,\x2}(y_{1,2;1,2},y_{1,2;3,4})\bigr)+\underset{\tx{\ciut{(1)}}}{\rho}{}^{({\rm Ad}^{(2)\,\x2}_\cdot)}\bigl((\varepsilon_2,y_2,\zeta_2),(y_{1,2;1,2},y_{1,2;3,4})\bigr)\cr\cr &&-\underset{\tx{\ciut{(1)}}}{\rho}{}^{({\rm Ad}^{(2)\,\x2}_\cdot)}\bigl({\rm m}_1^{(2)}\bigl((\varepsilon_1,y_1,\zeta_1),(\varepsilon_2,y_2,\zeta_2)\bigr),(y_{1,2;1,2},y_{1,2;3,4})\bigr)\cr\cr &=&\bigl(\ovl\theta_1\,\Gamma^I\,\theta_2\bigr)\,\bigl(\ovl\varepsilon_1\,\Gamma_I\,{\mathsf d}\theta_2\bigr)+\bigl(\ovl\theta_1\,\Gamma^I\,\theta_2\bigr)\,\bigl(\ovl\varepsilon_2\,\Gamma_I\,{\mathsf d}\theta_2\bigr)-\bigl(\ovl\theta_1\,\Gamma^I\,\theta_2\bigr)\,\bigl(\bigl(\ovl\varepsilon_1+\ovl\varepsilon_2\bigr)\,\Gamma_I\,{\mathsf d}\theta_2\bigr)=0\,. \end{eqnarray} Motivated by the above, we look for a connection-preserving isomorphism of principal ${\mathbb{C}}^\x$-bundles \begin{eqnarray}\nn \Upsilon_0^{({\rm Ad}^{(2)\,\x2}_\cdot)}\ :\ \widetilde d_1^{(1)\,*}\mathscr{E}\xrightarrow{\ \cong\ }\widetilde d_0^{(1)\,*}\mathscr{E}\otimes\mathcal{J}_{\underset{\tx{\ciut{(1)}}}{\rho}{}^{({\rm Ad}^{(2)\,\x2}_\cdot)}}\,. \end{eqnarray} We compute directly \begin{eqnarray}\nn {\rm Ad}_\cdot^{(2)\,\x2\,*}{\mathsf E}-{\rm pr}_2^*{\mathsf E}-\underset{\tx{\ciut{(1)}}}{\rho}{}^{({\rm Ad}^{(2)\,\x2}_\cdot)}={\mathsf d}{\mathsf F}\,, \end{eqnarray} where \begin{eqnarray}\nn {\mathsf F}\bigl((\varepsilon,y,\zeta),(y_{1,2;1,2},y_{1,2;3,4})\bigr)=\bigl(\ovl\varepsilon\,\Gamma^I\,\theta_2\bigr)\,\bigl(\ovl\theta_1\,\Gamma_I\,\theta_2\bigr) \end{eqnarray} are data of the isomorphism. These are manifestly LI with respect to the action $\,{\rm Ad}_\cdot^{(2)\,\x2\,1}$,\ and so it remains to check the identity \eqref{eq:F1inv}.
A direct computation \begin{eqnarray}\nn &&{\mathsf F}\bigl((\varepsilon_1,y_1,\zeta_1),{\rm Ad}_{(\varepsilon_2,y_2,\zeta_2)}^{(2)\,\x2}(y_{1,2;1,2},y_{1,2;3,4})\bigr)+{\mathsf F}\bigl((\varepsilon_2,y_2,\zeta_2),(y_{1,2;1,2},y_{1,2;3,4})\bigr)\cr\cr &&-{\mathsf F}\bigl({\rm m}_1^{(2)}\bigl((\varepsilon_1,y_1,\zeta_1),(\varepsilon_2,y_2,\zeta_2)\bigr),(y_{1,2;1,2},y_{1,2;3,4})\bigr)\cr\cr &=&\bigl(\ovl\varepsilon_1\,\Gamma^I\,\theta_2\bigr)\,\bigl(\ovl\theta_1\,\Gamma_I\,\theta_2\bigr)+\bigl(\ovl\varepsilon_2\,\Gamma^I\,\theta_2\bigr)\,\bigl(\ovl\theta_1\,\Gamma_I\,\theta_2\bigr)-\bigl(\bigl(\ovl\varepsilon_1+\ovl\varepsilon_2\bigr)\,\Gamma^I\,\theta_2\bigr)\,\bigl(\ovl\theta_1\,\Gamma_I\,\theta_2\bigr)=0 \end{eqnarray} convinces us that we have, indeed, an ${\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}$-equivariant structure on $\,\mathscr{E}$,\ as anticipated. Having established the desired natural realisation of supersymmetry on $\,\mathscr{E}$,\ we may return to the proof of the expectation, based on our earlier findings, that the bundle forms part of an equivariant structure on the GS super-1-gerbe.
Thus, upon passing to the fibred product \begin{eqnarray}\nn {\mathsf Y}^{[2]}\bigl(\mathcal{M}^{(1)\,1}\bigr)\equiv{\mathsf Y}\bigl(\mathcal{M}^{(1)\,1}\bigr)\x_{\mathcal{M}^{(1)\,1}}{\mathsf Y}\bigl(\mathcal{M}^{(1)\,1}\bigr) \ni\bigl((y_{1,2;1,2},y_{1,2;3,4}),(y_{1,2;5,6},y_{1,2;7,8})\bigr) \end{eqnarray} with the canonical projections (written for $\,(i,j)\in\{(1,2),(1,3),(2,4),(3,4)\}$) \begin{eqnarray}\nn {\rm pr}_{i,j}\ &:&\ {\mathsf Y}^{[2]}\bigl(\mathcal{M}^{(1)\,1}\bigr)\longrightarrow\bigl({\mathsf Y}_1\mathcal{M}^{(1)}\bigr)^{\x 2}\x_{\mathcal{M}^{(1)\,1}}\bigl({\mathsf Y}_1\mathcal{M}^{(1)}\bigr)^{\x 2}\cr\cr &:&\ \bigl((y_{1,2;1,2},y_{1,2;3,4}),(y_{1,2;5,6},y_{1,2;7,8})\bigr)\longmapsto(y_{1,2;i,i+1},y_{1,2;j,j+1})\,, \end{eqnarray} we then find the exact equality \begin{eqnarray}\nn {\rm pr}_{1,3}^*{\rm Ad}_\cdot^{(2)\,\x2\,*}{\rm A}+{\rm pr}_{3,4}^*{\rm E}={\rm pr}_{1,2}^*{\rm E}+{\rm pr}_{2,4}^*{\rm pr}_2^{\x2\,*}{\rm A} \end{eqnarray} from which we read off the existence of a trivial (and hence manifestly LI) connection-preserving isomorphism \begin{eqnarray}\nn \a_\mathscr{E}\equiv{\rm id}_{({\rm Ad}_\cdot^{(2)\,\x2}\circ{\rm pr}_{1,3})^*\mathscr{L}^{(1)}\otimes{\rm pr}_{3,4}^*\mathscr{E}}\ :\ \bigl({\rm Ad}_\cdot^{(2)\,\x2}\circ{\rm pr}_{1,3}\bigr)^*\mathscr{L}^{(1)}\otimes{\rm pr}_{3,4}^*\mathscr{E}\xrightarrow{\ \cong\ }{\rm pr}_{1,2}^*\mathscr{E}\otimes\bigl({\rm pr}_2^{\x2}\circ{\rm pr}_{2,4}\bigr)^*\mathscr{L}^{(1)}\,, \end{eqnarray} of principal ${\mathbb{C}}^\x$-bundles over $\,{\mathsf Y}^{[2]}\bigl(\mathcal{M}^{(1)\,1}\bigr)$.\ Taking into account the triviality of the groupoid structure $\,\mu_{\mathscr{L}^{(1)}}\,$ on $\,\mathscr{L}^{(1)}$,\ we conclude that $\,\a_\mathscr{E}\,$ satisfies the desired coherence constraints of (a variant of) Diag.\,\ref{diag:grb1isocoh}. 
Thus, altogether, we obtain a gerbe 1-isomorphism \begin{eqnarray}\nn \Upsilon^{(1,{\rm Ad}_\cdot)}_{{\rm GS},p}:=\bigl(\bigl({\mathsf Y}_1^{[2]}\mathcal{M}^{(1)}\bigr)^{\x 2},\pi_{{\mathsf Y}_1^{[2]}\mathcal{M}^{(1)}}^{\x 2},\mathscr{E},\txa_\mathscr{E},\a_\mathscr{E}\bigr)\,, \end{eqnarray} which is LI in a well-defined (and natural) sense. Having identified the 1-isomorphism over $\,{\mathsf N}^1({\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\hspace{0.02cm}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}\mathcal{M}^{(1)})\,$ that furnishes the GS super-1-gerbe with a realisation of the supersymmetry group $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\,$ in the adjoint, we may now look for the 2-isomorphism over $\,{\mathsf N}^2({\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\hspace{0.02cm}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}\mathcal{M}^{(1)})\,$ which renders that realisation compatible with the binary operation on the supergroup. To this end, we take the surjective submersion of the supermanifold $\,{\mathsf N}^2({\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\hspace{0.02cm}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}\mathscr{M})\,$ in the form $\,({\mathsf Y}_1\mathcal{M}^{(1)})^{\x 3}\,$ and, having pulled $\,\mathcal{G}_{{\rm GS},p}^{(1)}\,$ and $\,\mathcal{I}_{\underset{\tx{\ciut{(2)}}}{\rho}{}^{({\rm Ad}_\cdot)}}\,$ to $\,{\mathsf N}^2({\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\hspace{0.02cm}{\hspace{-0.04cm}\ltimes\hspace{-0.05cm}}\mathscr{M})\,$ along the various face maps of the nerve (the $\,d_i^{(1)}\circ d_j^{(2)}\,$ and the $\,d_k^{(2)}$,\ respectively) and the principal ${\mathbb{C}}^\x$-bundle $\,\mathscr{E}\,$ to $\,{\mathsf Y}_1\mathcal{M}^{(1)\,\x 3}\x_{\mathcal{M}^{(1)\,2}}{\mathsf Y}_1\mathcal{M}^{(1)\,\x 3}\,$ along the cartesian squares \begin{eqnarray}\nn {\mathsf Y}_1^{[2]}d_k^{(2)}:=\bigl({\mathsf Y}_1 d_k^{(2)}\bigr)^{\x 2} \end{eqnarray} of their extensions $\,{\mathsf Y}_1 d_k^{(2)}\,$ introduced previously, we seek to determine a connection-preserving isomorphism 
\begin{eqnarray}\nn \beta\ :\ {\rm pr}_{1,2}^*\bigl({\mathsf Y}_1^{[2]}d_2^{(2)}\bigr)^*\mathscr{E}\otimes{\rm pr}_{2,3}^*\bigl({\mathsf Y}_1^{[2]}d_0^{(2)}\bigr)^*\mathscr{E}\xrightarrow{\ \cong\ }{\rm pr}_{1,3}^*\bigl({\mathsf Y}_1^{[2]}d_1^{(2)}\bigr)^*\mathscr{E} \end{eqnarray} of principal ${\mathbb{C}}^\x$-bundles over the common base \begin{eqnarray}\nn &\widetilde{\mathsf Y}_1^3\mathcal{M}^{(1)\,\x 3}:={\mathsf Y}_1\mathcal{M}^{(1)\,\x 3}\x_{\mathcal{M}^{(1)\,2}}{\mathsf Y}_1\mathcal{M}^{(1)\,\x 3}\x_{\mathcal{M}^{(1)\,2}}{\mathsf Y}_1\mathcal{M}^{(1)\,\x 3}\ni\bigl(\bigl((\theta_1,x_1,\xi_1),(\theta_2,x_2,\xi_2),(\theta_3,x_3,\xi_3)\bigr),\cr\cr &\hspace{2cm}\bigl((\theta_1,x_1,\xi_4),(\theta_2,x_2,\xi_5),(\theta_3,x_3,\xi_6)\bigr),\bigl((\theta_1,x_1,\xi_7),(\theta_2,x_2,\xi_8),(\theta_3,x_3,\xi_9)\bigr)\bigr)\cr\cr &\hspace{-5cm}=:y_{1,2,3;1,2,3\,\vert\,4,5,6\,\vert\,7,8,9}\,.& \end{eqnarray} Inspection of the relevant combination of pullbacks of the connection 1-forms yields the identity \begin{eqnarray}\nn &&\bigl({\rm pr}_{1,2}^*\bigl({\mathsf Y}_1^{[2]}d_2^{(2)}\bigr)^*+{\rm pr}_{2,3}^*\bigl({\mathsf Y}_1^{[2]}d_0^{(2)}\bigr)^*-{\rm pr}_{1,3}^*\bigl({\mathsf Y}_1^{[2]}d_1^{(2)}\bigr)^*\bigr){\rm E}(y_{1,2,3;1,2,3\,\vert\,4,5,6\,\vert\,7,8,9})\cr\cr &=&{\mathsf d}\bigl(\bigl(\ovl\theta_3\,\Gamma\,\theta_1\bigr)\,\bigl(\ovl\theta_2\,\Gamma\,\theta_3\bigr)\bigr)+\bigl(\bigl(\ovl\theta_1\,\Gamma_I\,\theta_2\bigr)\,\theta^\a_3+\bigl(\ovl\theta_3\,\Gamma_I\,\theta_1\bigr)\,\theta^\a_2+\bigl(\ovl\theta_2\,\Gamma_I\,\theta_3\bigr)\,\theta^\a_1\bigr)\,\ovl\Gamma{}^I_{\a\beta}\,{\mathsf d}\theta_3^\beta\cr\cr &=&{\mathsf d}\bigl(\bigl(\ovl\theta_3\,\Gamma\,\theta_1\bigr)\,\bigl(\ovl\theta_2\,\Gamma\,\theta_3\bigr)\bigr)\,, \end{eqnarray} obtained with the help of identity \eqref{eq:ClifFierz1} (in the last line), from which we extract global data of $\,\beta$, \begin{eqnarray}\nn \widetilde{\rm F}(y_{1,2,3;1,2,3\,\vert\,4,5,6\,\vert\,7,8,9})=\bigl(\ovl\theta_3\,\Gamma\,\theta_1\bigr)\,\bigl(\ovl\theta_2\,\Gamma\,\theta_3\bigr)\,. \end{eqnarray} We readily see that an isomorphism with these data is coherent with (suitably tensored pullbacks of) $\,\a_\mathscr{E}$,\ as expressed in an appropriate adaptation of Diag.\,\ref{diag:betacohalpha} over $\,\widetilde{\mathsf Y}_1^3\mathcal{M}^{(1)\,\x 3}\x_{\mathcal{M}^{(1)\,2}}\widetilde{\mathsf Y}_1^3\mathcal{M}^{(1)\,\x 3}$,\ and so we are left with the coherence condition \eqref{eq:geq2isocoh} to be checked over \begin{eqnarray}\nn {\mathsf Y}_1\mathcal{M}^{(1)\,\x 4}\x_{\mathcal{M}^{(1)\,3}}{\mathsf Y}_1\mathcal{M}^{(1)\,\x 4}\x_{\mathcal{M}^{(1)\,3}}{\mathsf Y}_1\mathcal{M}^{(1)\,\x 4}\x_{\mathcal{M}^{(1)\,3}}{\mathsf Y}_1\mathcal{M}^{(1)\,\x 4}\,. \end{eqnarray} This is completely straightforward, and leads us to conclude that there does, indeed, exist a 2-isomorphism \begin{eqnarray}\nn \bigl(\widetilde{\mathsf Y}_1^3\mathcal{M}^{(1)\,\x 3},\beta\bigr)=:\gamma^{(1,{\rm Ad}_\cdot)}_{{\rm GS},p}\ :\ d_2^{(2)}{}^*\Upsilon^{(1,{\rm Ad}_\cdot)}_{{\rm GS},p}\otimes d_0^{(2)}{}^*\Upsilon^{(1,{\rm Ad}_\cdot)}_{{\rm GS},p}\xLongrightarrow{\ \cong\ }d_1^{(2)}{}^*\Upsilon^{(1,{\rm Ad}_\cdot)}_{{\rm GS},p}\,. \end{eqnarray} Altogether, from the above analysis, we distill the desired \begin{Def}\label{def:SUSYequiv1} Adopt the notation of Defs.\,\ref{def:CaEs1g} and \ref{def:SUSYequiv0}. 
A {\bf supersymmetric ${\rm H}$-equivariant structure on the Cartan--Eilenberg super-1-gerbe} $\,\mathcal{G}^{(1)}_{\rm CaE}=({\mathsf Y}{\rm G},\pi_{{\mathsf Y}{\rm G}},\underset{\tx{\ciut{(2)}}}{{\rm b}},{\rm L},\txa,\mu_{\rm L})\,$ of curvature $\,\underset{\tx{\ciut{(3)}}}{\chi}\,$ over $\,{\rm G}\,$ is a triple $\,(\Upsilon^{(1)},\gamma^{(1)},\underset{\tx{\ciut{(2)}}}{\rho})\,$ composed of \begin{itemize} \item a super-2-form $\,\underset{\tx{\ciut{(2)}}}{\rho}\,$ on $\,{\rm H}\x{\rm G}\,$ satisfying the identities \begin{eqnarray}\nn \bigl(d_1^{(1)\,*}-d_0^{(1)\,*}\bigr)\underset{\tx{\ciut{(3)}}}{\chi}={\mathsf d}\underset{\tx{\ciut{(2)}}}{\rho} \end{eqnarray} and \begin{eqnarray}\label{eq:rho1inv} \bigl(d_0^{(2)\,*}+d_2^{(2)\,*}-d_1^{(2)\,*}\bigr)\underset{\tx{\ciut{(2)}}}{\rho}=0\,, \end{eqnarray} and LI with respect to the action $\,\ell^1_\cdot\,$ of \Reqref{eq:GLIonHM} (where $\,\mathcal{M}\equiv{\rm G}$), \begin{eqnarray}\nn \forall_{{\mathbb{X}}\in{\rm G}}\ :\ \ell_{\mathbb{X}}^{1\,*}\underset{\tx{\ciut{(2)}}}{\rho}=\underset{\tx{\ciut{(2)}}}{\rho}\,, \end{eqnarray} \item a 1-isomorphism \begin{eqnarray}\nn \Upsilon^{(1)}\ :\ \ell_\cdot^*\mathcal{G}_{\rm CaE}^{(1)}\xrightarrow{\ \cong\ }{\rm pr}_2^*\mathcal{G}_{\rm CaE}^{(1)}\otimes\mathcal{I}_{\underset{\tx{\ciut{(2)}}}{\rho}} \end{eqnarray} of gerbes over $\,{\rm H}\x{\rm G}\,$ with the following properties: \begin{itemize} \item the principal ${\mathbb{C}}^\x$-bundle $\,(\mathscr{E},\txa_\mathscr{E})\,$ of $\,\Upsilon^{(1)}\,$ carries a supersymmetric ${\rm H}$-equivariant structure, {\it i.e.}, admits a connection-preserving isomorphism of principal ${\mathbb{C}}^\x$-bundles \begin{eqnarray}\nn \widetilde\Upsilon^{(0)}\ :\ \widetilde\ell^{1\,*}_\cdot\mathscr{E}\xrightarrow{\ \cong\ }{\rm pr}_2^*\mathscr{E}\otimes\mathcal{J}_{\underset{\tx{\ciut{(1)}}}{\widetilde\rho}}\,, \end{eqnarray} written in terms of an extension $\,\widetilde\ell_\cdot^1\,$ of $\,\ell^1_\cdot\,$ to its base and of a 
super-1-form $\,\underset{\tx{\ciut{(1)}}}{\widetilde\rho}\,$ satisfying the condition \begin{eqnarray}\nn \bigl(\widetilde d_0^{(2)\,*}+\widetilde d_2^{(2)\,*}-\widetilde d_1^{(2)\,*}\bigr)\underset{\tx{\ciut{(1)}}}{\widetilde\rho}=0\,, \end{eqnarray} and with data given by a super-0-form $\,\widetilde{\rm F}\,$ satisfying the identity (written in terms of the base component $\,\underset{\tx{\ciut{(1)}}}{\widetilde{\rm b}}\,$ of $\,\txa_\mathscr{E}$) \begin{eqnarray}\nn \bigl(\widetilde d_1^{(1)\,*}-\widetilde d_0^{(1)\,*}\bigr)\underset{\tx{\ciut{(1)}}}{\widetilde{\rm b}}=\underset{\tx{\ciut{(1)}}}{\widetilde\rho}+{\mathsf d}\widetilde{\rm F}\,, \end{eqnarray} LI with respect to $\,\widetilde\ell^1_\cdot$, \begin{eqnarray}\nn \forall_{{\mathbb{X}}\in{\rm G}}\ :\ \widetilde\ell_{\mathbb{X}}^{1\,*}\widetilde{\rm F}=\widetilde{\rm F}\,, \end{eqnarray} and subject to the coherence constraints \begin{eqnarray}\nn \bigl(\widetilde d_0^{(2)\,*}+\widetilde d_2^{(2)\,*}-\widetilde d_1^{(2)\,*}\bigr)\widetilde{\rm F}=0\,, \end{eqnarray} written in terms of the face maps $\,\widetilde d_i^{(2)}\,$ of the nerve of the action groupoid defined by $\,\widetilde\ell^1_\cdot$; \item the connection-preserving ${\mathbb{C}}^\x$-bundle isomorphism $\,\a_\mathscr{E}\,$ of $\,\Upsilon^{(1)}\,$ has data $\,\widehat{\rm F}\,$ that are LI with respect to a lift $\,\widehat\ell_\cdot^1\,$ of $\,\ell^1_\cdot\,$ to its base, \begin{eqnarray}\nn \forall_{{\mathbb{X}}\in{\rm G}}\ :\ \widehat\ell_{\mathbb{X}}^{1\,*}\widehat{\rm F}=\widehat{\rm F}\,, \end{eqnarray} and subject to the coherence constraints \begin{eqnarray}\nn \bigl(\widehat d_0^{(2)\,*}+\widehat d_2^{(2)\,*}-\widehat d_1^{(2)\,*}\bigr)\widehat{\rm F}=0\,, \end{eqnarray} written in terms of the face maps $\,\widehat d_i^{(2)}\,$ of the nerve of the action groupoid defined by $\,\widehat\ell^1_\cdot$, \end{itemize} \item a 2-isomorphism \begin{eqnarray}\nn \gamma^{(1)}\ :\ d_2^{(2)}{}^*\Upsilon^{(1)}\otimes 
d_0^{(2)}{}^*\Upsilon^{(1)}\xLongrightarrow{\ \cong\ }d_1^{(2)}{}^*\Upsilon^{(1)} \end{eqnarray} with local data $\,\check{\rm F}\,$ that are LI with respect to a lift $\,\check\ell_\cdot^2\,$ of $\,\ell^2_\cdot\,$ to its base, \begin{eqnarray}\nn \forall_{{\mathbb{X}}\in{\rm G}}\ :\ \check\ell_{\mathbb{X}}^{2\,*}\check{\rm F}=\check{\rm F}\,, \end{eqnarray} and subject to the coherence constraints \begin{eqnarray}\nn \bigl(\check d_0^{(2)\,*}+\check d_2^{(2)\,*}-\check d_1^{(2)\,*}\bigr)\check{\rm F}=0\,, \end{eqnarray} written in terms of the face maps $\,\check d_i^{(2)}\,$ of the nerve of the action groupoid defined by $\,\check\ell^2_\cdot$,\ and such that the coherence condition \begin{eqnarray}\nn d_1^{(3)\,*}\gamma^{(1)}\bullet\bigl({\rm id}_{(d_2^{(2)}\circ d_1^{(3)})^*\Upsilon^{(1)}}\circ d_3^{(3)\,*}\gamma^{(1)} \bigr)=d_2^{(3)\,*}\gamma^{(1)}\bullet\bigl(\bigl(d_0^{(3)\,*}\gamma^{(1)} \otimes{\rm id}_{{\rm id}_{\mathcal{I}_{(d_2^{(2)}\circ d_1^{(3)})^*\underset{\tx{\ciut{(2)}}}{\rho}}}}\bigr)\circ{\rm id}_{(d_2^{(2)}\circ d_3^{(3)})^*\Upsilon^{(1)}}\bigr) \end{eqnarray} is obeyed. \end{itemize} \begin{flushright}$\diamond$\end{flushright}\end{Def} Our discussion can then be summarised in \begin{Prop}\label{prop:Adequivstr1} The Green--Schwarz super-1-gerbe of Def.\,\ref{def:s1gerbe} carries a canonical supersymmetric equivariant structure $\,(\Upsilon^{(1,{\rm Ad}_\cdot)}_{{\rm GS},p},\gamma^{(1,{\rm Ad}_\cdot)}_{{\rm GS},p})\,$ with respect to the adjoint action of the Lie supergroup $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\,$ on itself, relative to the LI super-2-form $\,\underset{\tx{\ciut{(2)}}}{\rho}{}^{({\rm Ad}_\cdot)}$. 
\end{Prop} \medskip \void{\subsection{Canonical symmetry analysis and geometric prequantisation via transgression} \subsubsection{The $0$-brane} The Poincar\'e--Cartan form (written for $\,\Omega\cong{\mathbb{S}}^1$) \begin{eqnarray}\nn \Theta(\theta,x,\xi,t)=-\tfrac{1}{2}\,\eta_{ab}\,\left(\xi^a+\tfrac{1}{2}\,\ovl\theta\,\Gamma^a\,t\right)\,\left(\xi^b+\tfrac{1}{2}\,\ovl\theta\,\Gamma^b\,t\right)\,\Vol(\Omega)+\eta_{ab}\,\left(\xi^a+\tfrac{1}{2}\,\ovl\theta\,\Gamma^a\,t\right)\,\left(\delta\xi^b+\tfrac{1}{2}\,\ovl\theta\,\Gamma^b\,\delta\theta\right)+\ovl\theta\,\delta\theta \end{eqnarray} yields the presymplectic form \begin{eqnarray}\label{eq:presympl-0} \Omega_{{\rm GS},p}&=&\delta\vartheta+\pi_{{\rm GS},p}^*\underset{\tx{\ciut{(2)}}}{\chi} \end{eqnarray} on the (unphysical) space of states of the super-$\sigma$-model, parameterised by super-points $\,(\theta,x)\in{\rm sMink}^{1,d\,\vert\,ND_{1,d-1}}\,$ and Gra\ss mann-even momentum 1-forms $\,{\rm p}$.\ The above is written in terms of the manifestly ${\mathbb{R}}^{1,d-1\,\vert\,ND_{1,d-1}}$-invariant canonical (action) 1-form \begin{eqnarray}\nn \vartheta(\theta,x,{\rm p}):={\rm p}_a\,E^a(\theta,x) \end{eqnarray} on $\,{\mathsf T}^*_0{\mathbb{R}}^{1,d\,\vert\,0}\x{\mathbb{R}}^{0\,\vert\,N}$,\ and the canonical projection \eqref{eq:canproj-GS}. The vector field on $\,{\mathsf P}_{{\rm GS},p}\,$ that lifts the fundamental vector field engendered by the super-translation \eqref{eq:susy-sMink} can be written as \begin{eqnarray}\label{eq:stranslift} \widetilde\mathcal{K}_{(\varepsilon,y)}(\theta,x,{\rm p}):=\left(y^a-\tfrac{1}{2}\,\ovl\varepsilon\,\Gamma^a\,\theta\right)\,\tfrac{\delta\ }{\delta x^a}+\varepsilon^\a\,\tfrac{\delta\ }{\delta\theta^\a}\equiv y^a\,\widetilde\mathscr{P}_a(\theta,x,{\rm p})+\varepsilon^\a\,\widetilde\mathscr{Q}_\a(\theta,x,{\rm p}) \end{eqnarray} in terms of the (trivial) lifts of the fundamental vector fields of \Reqref{eq:sFVF}. 
The lift gives rise to the Noether hamiltonian \begin{eqnarray}\nn Q_{(\varepsilon,y)}(\theta,x,{\rm p})={\rm p}_a\,(y^a-\ovl\varepsilon\,\Gamma^a\,\theta)-2\ovl\varepsilon\,\theta\,. \end{eqnarray} We may now compute the Poisson bracket of two such hamiltonians, \begin{eqnarray}\label{eq:PoissNoeth} \{Q_{(\varepsilon_1,y_1)},Q_{(\varepsilon_2,y_2)}\}(\theta,x,{\rm p})={\rm p}_a\,\ovl\varepsilon_1\,\Gamma^a\,\varepsilon_2+2\ovl\varepsilon_1\,\varepsilon_2\equiv Q_{(\ovl\varepsilon_1\,\Gamma\,\varepsilon_2,0)}+2\ovl\varepsilon_1\,\varepsilon_2\,. \end{eqnarray} Comparison between the last result and \Reqref{eq:sFVFalg} demonstrates that the second term in the above Poisson bracket of the Noether hamiltonians is \emph{anomalous}, in that it renders the realisation of the symmetry algebra on $\,{\mathsf P}_{{\rm GS},p}\,$ \emph{non}-hamiltonian. This is the first sign of the emergence of an essential central extension of the symmetry algebra (and of the symmetry group) in the field theory in hand. Here, the fundamental sections of $\,\mathcal{E}^{1,0}{\rm sMink}^{1,d\,\vert\,ND_{1,d-1}}\,$ take the form \begin{eqnarray}\nn \gt{K}_{(\varepsilon,y)}(\theta,x)=\mathcal{K}_{(\varepsilon,y)}(\theta,x)\oplus(-2\ovl\varepsilon\,\theta) \end{eqnarray} and satisfy the algebra \begin{eqnarray}\nn \Vbra{\gt{K}_{(\varepsilon_1,y_1)}}{\gt{K}_{(\varepsilon_2,y_2)}}^{\underset{\tx{\ciut{(2)}}}{\chi}}=(\ovl\varepsilon_1\,\Gamma^a\,\varepsilon_2)\,\mathscr{P}_a\oplus 2\ovl\varepsilon_1\,\varepsilon_2\equiv\gt{K}_{[\,(\varepsilon_1,y_1)\,,\,(\varepsilon_2,y_2)\,]}+0\oplus 2\ovl\varepsilon_1\,\varepsilon_2\,, \end{eqnarray} with the `anomalous' term precisely as in the Poisson algebra \eqref{eq:PoissNoeth} of the corresponding Noether charges. 
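The obstruction identified above can be phrased invariantly. In our reading of the computation (a paraphrase in the conventions of the present subsection, rather than a statement taken from elsewhere in the text), the anomalous term is the value of the constant Chevalley--Eilenberg super-2-cocycle \begin{eqnarray}\nn \bigl((\varepsilon_1,y_1),(\varepsilon_2,y_2)\bigr)\longmapsto 2\ovl\varepsilon_1\,\varepsilon_2 \end{eqnarray} on the supertranslation algebra, symmetric under the exchange of its Gra\ss mann-odd arguments, and it is the class of this cocycle that encodes the essential central extension signalled above.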
\medskip \subsubsection{The Green--Schwarz superstring} Consider, next, the ${\mathbb{R}}^{1,d-1\,\vert\,ND_{1,d-1}}$-invariant GS super-3-cocycle \eqref{eq:GS3form} with the ${\mathbb{R}}^{1,d-1\,\vert\,0}$-invariant primitive \begin{eqnarray}\nn \underset{\tx{\ciut{(2)}}}{\beta}(\theta,x)=\ovl\theta\,\Gamma_a\,{\mathsf d}\theta\wedge E^a(\theta,x)\equiv\ovl\theta\,\Gamma_a\,{\mathsf d}\theta\wedge{\mathsf d} x^a\,, \end{eqnarray} where the last equality is a straightforward consequence of Conv.\,\ref{conv:SignManifesto}, \begin{eqnarray}\nn \ovl\theta\,\Gamma_a\,{\mathsf d}\theta\wedge\ovl\theta\,\Gamma^a\,{\mathsf d}\theta=-\ovl\theta\,\Gamma^a\,{\mathsf d}\theta\wedge\ovl\theta\,\Gamma_a\,{\mathsf d}\theta\,. \end{eqnarray} In the canonical description, we readily derive the Poincar\'e--Cartan form \begin{eqnarray}\nn \Theta(\theta,x,\xi,t)&=&-\mathcal{L}_{{\rm GS},p}(\theta,x,\xi,t)\,{\mathsf d}\sigma^0\wedge{\mathsf d}\sigma^1\cr\cr &&+\tfrac{1}{2}\,\left[\left(2\xi_{0\,a}+\ovl\theta\,\Gamma_a\,t_0-2\ovl\theta\,\Gamma_a\,t_1\right)\,\delta x^a+\left(\xi_{0\,a}+2\xi_{1\,a}+\tfrac{1}{2}\,\ovl\theta\,\Gamma_a\,t_0\right)\,\ovl\theta\, \Gamma^a\,\delta\theta\right]\wedge{\mathsf d}\sigma^1\cr\cr &&+\tfrac{1}{2}\,\left[\left(2\xi_{1\,a}+\ovl\theta\,\Gamma_a\,t_1-2\ovl\theta\,\Gamma_a\,t_0\right)\,\delta x^a+\left(\xi_{1\,a}+2\xi_{0\,a}+\tfrac{1}{2}\,\ovl\theta\,\Gamma_a\,t_1\right)\,\ovl\theta\, \Gamma^a\,\delta\theta\right]\wedge{\mathsf d}\sigma^0 \end{eqnarray} that subsequently yields the (pre)symplectic form \begin{eqnarray}\nn \Omega_{{\rm GS},p}=\delta\vartheta+\pi_{{\rm GS},p}^*\int_{{\mathbb{S}}^1}\,{\rm ev}^*\underset{\tx{\ciut{(3)}}}{\chi}\,, \end{eqnarray} with \begin{eqnarray}\nn \vartheta[\theta,x,{\rm p}]:=\int_{{\mathbb{S}}^1}\,\Vol({\mathbb{S}}^1)\,{\rm p}_a\,E^a(\theta,x)\,. 
\end{eqnarray} The equivariant lift of the fundamental vector field \eqref{eq:susygenv} now reads \begin{eqnarray}\nn \widetilde\mathcal{K}_{(\varepsilon,y)}[\theta,x,{\rm p}]:=\int_{{\mathbb{S}}^1}\,\Vol({\mathbb{S}}^1)\,\left[\left(y^a-\tfrac{1}{2}\,\ovl\varepsilon\,\Gamma^a\,\theta(\cdot)\right)\,\tfrac{\delta\ }{\delta x^a(\cdot)}+\varepsilon^\a\,\tfrac{\delta\ }{\delta\theta^\a(\cdot)}\right] \end{eqnarray} and gives rise to Noether charges \begin{eqnarray}\nn Q_{(\varepsilon,y)}[\theta,x,{\rm p}]=\int_0^{2\pi}\,{\mathsf d}\varphi\,\bigl\{{\rm p}_a(\varphi)\,\bigl[y^a-\ovl\varepsilon\,\Gamma^a\,\theta(\varphi)\bigr]-\bigl[y^a\,\ovl\theta\,\Gamma_a\,\partial_\varphi\theta(\varphi)+\ovl\varepsilon\,\Gamma_a\,\theta(\varphi)\,\bigl(2\partial_\varphi x^a-\tfrac{1}{3}\,\ovl\theta\,\Gamma^a\,\partial_\varphi\theta\bigr)(\varphi)\bigr]\bigr\}\,. \end{eqnarray} These satisfy the algebra \begin{eqnarray}\nn \{Q_{(\varepsilon_1,y_1)},Q_{(\varepsilon_2,y_2)}\}[\theta,x,{\rm p}]&=&Q_{[(\varepsilon_1,y_1),(\varepsilon_2,y_2)]}[\theta,x,{\rm p}]\cr\cr &&+\tfrac{2}{3}\,\int_0^{2\pi}\,{\mathsf d}\varphi\,\bigl[(\ovl\varepsilon_1\,\Gamma_a\,\varepsilon_2)\,(\ovl\theta\,\Gamma^a\,\partial_\varphi\theta)(\varphi)-2(\ovl\varepsilon_1\,\Gamma_a\,\theta)\,(\ovl\varepsilon_2\,\Gamma^a\,\partial_\varphi\theta)(\varphi)\bigr]\cr\cr &=&Q_{[(\varepsilon_1,y_1),(\varepsilon_2,y_2)]}[\theta,x,{\rm p}]\,, \end{eqnarray} the vanishing of the last term in the middle expression being ensured by \Reqref{eq:ClifFierz}. Thus, the Noether charges furnish a hamiltonian realisation of the Lie (super)algebra $\,{\mathbb{R}}^{1,d-1\,\vert\,ND_{1,d-1}}\,$ on $\,{\mathsf P}_{{\rm GS},p}$. 
The above algebra is modelled by the Vinogradov-type (or, indeed, $\underset{\tx{\ciut{(3)}}}{\chi}$-twisted Courant) bracket of the fundamental sections \begin{eqnarray}\label{eq:fsec2d} \gt{K}_{(\varepsilon,y)}(\theta,x)=\mathcal{K}_{(\varepsilon,y)}(\theta,x)\oplus\bigl[-y^a\,\ovl\theta\,\Gamma_a\,{\mathsf d}\theta-(\ovl\varepsilon\,\Gamma_a\,\theta)\,\bigl(2{\mathsf d} x^a-\tfrac{1}{3}\,\ovl\theta\,\Gamma^a\,{\mathsf d}\theta\bigr)\bigr] \end{eqnarray} of $\,\mathcal{E}^{1,1}{\rm sMink}^{1,d\,\vert\,ND_{1,d-1}}$,\ given by \begin{eqnarray} \Vbra{\gt{K}_{(\varepsilon_1,y_1)}}{\gt{K}_{(\varepsilon_2,y_2)}}^{\underset{\tx{\ciut{(3)}}}{\chi}}&=&\gt{K}_{[(\varepsilon_1,y_1),(\varepsilon_2,y_2)]}+0\oplus\tfrac{1}{2}\,{\mathsf d}\bigl[y_1^a\,(\ovl\varepsilon_2\,\Gamma_a\,\theta)-y_2^a\,(\ovl\varepsilon_1\,\Gamma_a\,\theta)+2(\ovl\varepsilon_1\,\Gamma_a\,\varepsilon_2)\,x^a\bigr]\cr\cr &&+0\oplus 2\bigl[(\ovl\varepsilon_1\,\Gamma_a\,\varepsilon_2)\,\ovl\theta\,\Gamma^a\,{\mathsf d}\theta+(\ovl\varepsilon_2\,\Gamma_a\,\theta)\,\ovl\varepsilon_1\,\Gamma^a\,{\mathsf d}\theta+(\ovl\theta\,\Gamma_a\,\varepsilon_1)\,\ovl\varepsilon_2\,\Gamma^a\,{\mathsf d}\theta\bigr]\cr\cr &=&\gt{K}_{[(\varepsilon_1,y_1),(\varepsilon_2,y_2)]}+0\oplus\tfrac{1}{2}\,{\mathsf d}\bigl[y_1^a\,(\ovl\varepsilon_2\,\Gamma_a\,\theta)-y_2^a\,(\ovl\varepsilon_1\,\Gamma_a\,\theta)+2(\ovl\varepsilon_1\,\Gamma_a\,\varepsilon_2)\,x^a\bigr]\,. \label{eq:Liean2d} \end{eqnarray} Note the appearance of a Lie anomaly in the above Vinogradov bracket, signalling an algebroidal obstruction against the gauging of the supertranslation symmetry. 
Pursuing the geometric analysis on the target superspace, we find that the ${\mathbb{R}}^{1,d-1\,\vert\,ND_{1,d-1}}$-coboundary of the primitive 2-form superfield reads \begin{eqnarray}\nn (\delta\underset{\tx{\ciut{(2)}}}{\beta})_{(\varepsilon,y)}(\theta,x)=\ovl\varepsilon\,\Gamma_a\,{\mathsf d}\theta\wedge{\mathsf d} x^a-\tfrac{1}{2}\,(\ovl\theta\,\Gamma_a\,{\mathsf d}\theta)\wedge(\ovl\varepsilon\,\Gamma^a\,{\mathsf d}\theta)-\tfrac{1}{2}\,(\ovl\varepsilon\,\Gamma_a\,{\mathsf d}\theta)\wedge(\ovl\varepsilon\,\Gamma^a\,{\mathsf d}\theta)\,, \end{eqnarray} and the last term vanishes as a contraction of the symmetric tensor $\,\eta\,$ with the anticommuting 1-forms. The middle term can be rewritten -- with the help of identity \eqref{eq:ClifFierz} -- as \begin{eqnarray}\nn (\ovl\theta\,\Gamma_a\,{\mathsf d}\theta)\wedge(\ovl\varepsilon\,\Gamma^a\,{\mathsf d}\theta)&\equiv&\ovl\Gamma_{I\,\a\beta}\,\ovl\Gamma{}^I_{\gamma\delta}\,\theta^\a\,{\mathsf d}\theta^\beta\wedge\varepsilon^\gamma\,{\mathsf d}\theta^\delta=\tfrac{1}{2}\,\bigl(\ovl\Gamma_{I\,\a\beta}\,\ovl\Gamma{}^I_{\gamma\delta}+\ovl\Gamma_{I\,\a\delta}\,\ovl\Gamma{}^I_{\gamma\beta}\bigr)\,\theta^\a\,{\mathsf d}\theta^\beta\wedge\varepsilon^\gamma\,{\mathsf d}\theta^\delta\cr\cr &=&-\tfrac{1}{2}\,\ovl\Gamma_{I\,\a\gamma}\,\ovl\Gamma{}^I_{\beta\delta}\,\theta^\a\,{\mathsf d}\theta^\beta\wedge\varepsilon^\gamma\,{\mathsf d}\theta^\delta\equiv-\tfrac{1}{2}\,(\ovl\varepsilon\,\Gamma_a\,\theta)\,\ovl\sigma\wedge\Gamma^a\,{\mathsf d}\theta\cr\cr &=&{\mathsf d}\bigl(-\tfrac{1}{2}\,(\ovl\varepsilon\,\Gamma_a\,\theta)\,(\ovl\theta\,\Gamma^a\,{\mathsf d}\theta)\bigr)+\tfrac{1}{2}\,(\ovl\varepsilon\,\Gamma_a\,{\mathsf d}\theta)\wedge(\ovl\theta\,\Gamma^a\,{\mathsf d}\theta)\cr\cr &=&{\mathsf d}\bigl(-\tfrac{1}{2}\,(\ovl\varepsilon\,\Gamma_a\,\theta)\,(\ovl\theta\,\Gamma^a\,{\mathsf d}\theta)\bigr)-\tfrac{1}{2}\,(\ovl\theta\,\Gamma^a\,{\mathsf d}\theta)\wedge(\ovl\varepsilon\,\Gamma_a\,{\mathsf d}\theta)\,, \end{eqnarray} 
whence also \begin{eqnarray}\nn (\ovl\theta\,\Gamma_a\,{\mathsf d}\theta)\wedge(\ovl\varepsilon\,\Gamma^a\,{\mathsf d}\theta)={\mathsf d}\bigl(-\tfrac{1}{3}\,(\ovl\varepsilon\,\Gamma_a\,\theta)\,(\ovl\theta\,\Gamma^a\,{\mathsf d}\theta)\bigr)\,. \end{eqnarray} Thus, the target current associated with our choice of the primitive $\,\underset{\tx{\ciut{(2)}}}{\beta}\,$ takes the form \begin{eqnarray}\label{eq:targcur1} \underset{\tx{\ciut{(1)}}}{\jmath_{(\varepsilon,y)}}(\theta,x)=(\ovl\varepsilon\,\Gamma_a\,\theta)\,\left({\mathsf d} x^a+\tfrac{1}{6}\,\ovl\theta\,\Gamma^a\,{\mathsf d}\theta\right) \end{eqnarray} We compute \begin{eqnarray}\nn (\delta\underset{\tx{\ciut{(1)}}}{\jmath})_{(\varepsilon_1,y_1),(\varepsilon_2,y_2)}(\theta,x)&=&{\mathsf d}\left[(\ovl\varepsilon_1\,\Gamma_a\,\varepsilon_2)\left(x^a-\tfrac{1}{3}\,\ovl\varepsilon_2\,\Gamma^a\,\theta\right)\right]-\tfrac{1}{6}\,\left[2(\ovl\sigma\,\Gamma_a\,\varepsilon_2)(\ovl\theta\,\Gamma^a\,\varepsilon_1)+(\ovl\sigma\,\Gamma_a\,\theta)(\ovl\varepsilon_1\,\Gamma^a\,\varepsilon_2)\right]\cr\cr &=&{\mathsf d}\left[(\ovl\varepsilon_1\,\Gamma_a\,\varepsilon_2)\left(x^a-\tfrac{1}{3}\,\ovl\varepsilon_2\,\Gamma^a\,\theta\right)-\tfrac{1}{6}\,(\ovl\varepsilon_1\,\Gamma_a\,\theta)(\ovl\varepsilon_2\,\Gamma^a\,\theta)\right]\cr\cr &&-\tfrac{1}{6}\,\left[(\ovl\sigma\,\Gamma_a\,\varepsilon_2)(\ovl\theta\,\Gamma^a\,\varepsilon_1)+(\ovl\varepsilon_1\,\Gamma_a\,{\mathsf d}\theta)(\ovl\theta\,\Gamma^a\,\varepsilon_2)+(\ovl\sigma\,\Gamma_a\,\theta)(\ovl\varepsilon_1\,\Gamma^a\,\varepsilon_2)\right]\,. 
\end{eqnarray} Taking, again, \Reqref{eq:ClifFierz} into account, we find the relation \begin{eqnarray}\nn (\ovl\sigma\,\Gamma_a\,\varepsilon_2)(\ovl\theta\,\Gamma^a\,\varepsilon_1)+(\ovl\varepsilon_1\,\Gamma_a\,{\mathsf d}\theta)(\ovl\theta\,\Gamma^a\,\varepsilon_2)+(\ovl\sigma\,\Gamma_a\,\theta)(\ovl\varepsilon_1\,\Gamma^a\,\varepsilon_2)\cr\cr =\varepsilon_1^\a\,\varepsilon_2^\beta\,{\mathsf d}\theta^\gamma\,\theta^\delta\,\bigl(\ovl\Gamma_{I\,\a\beta}\,\ovl\Gamma{}^I_{\gamma\delta}+\ovl\Gamma_{I\,\a\gamma}\,\ovl\Gamma{}^I_{\beta\delta}+\ovl\Gamma_{I\,\beta\gamma}\,\ovl\Gamma{}^I_{\a\delta}\bigr) =\varepsilon_1^\a\,\varepsilon_2^\beta\,{\mathsf d}\theta^\gamma\,\theta^\delta\,\bigl(-\ovl\Gamma_{I\,\a\delta}\,\ovl\Gamma{}^I_{\beta\gamma}+\ovl\Gamma_{I\,\beta\gamma}\,\ovl\Gamma{}^I_{\a\delta}\bigr)=0\,, \end{eqnarray} which implies the exactness of the current 2-cocycle, \begin{eqnarray}\nn (\delta\underset{\tx{\ciut{(1)}}}{\jmath})_{(\varepsilon_1,y_1),(\varepsilon_2,y_2)}(\theta,x)={\mathsf d}\left[(\ovl\varepsilon_1\,\Gamma_a\,\varepsilon_2)\left(x^a-\tfrac{1}{3}\,\ovl\varepsilon_2\,\Gamma^a\,\theta\right)-\tfrac{1}{6}\,(\ovl\varepsilon_1\,\Gamma_a\,\theta)(\ovl\varepsilon_2\,\Gamma^a\,\theta)\right]\,. \end{eqnarray} We conclude that the GS superstring admits a \emph{non}-projective realisation of the classical symmetry group $\,{\mathbb{R}}^{1,d-1\,\vert\,ND_{1,d-1}}\,$ on its Hilbert space. 
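The cyclic Clifford--Fierz identity invoked in the last step, $\,\ovl\Gamma_{I\,\a\beta}\,\ovl\Gamma{}^I_{\gamma\delta}+\ovl\Gamma_{I\,\a\gamma}\,\ovl\Gamma{}^I_{\beta\delta}+\ovl\Gamma_{I\,\beta\gamma}\,\ovl\Gamma{}^I_{\a\delta}=0$,\ is special to the critical dimensions and may be checked by elementary means in the lowest case. The snippet below is a numerical sanity check only (a sketch: the Majorana representation $\gamma^0={\mathsf i}\sigma_2$, $\gamma^1=\sigma_1$, $\gamma^2=\sigma_3$,\ the choice $C=\gamma^0$ of the charge-conjugation matrix and the metric $\eta={\rm diag}(-1,+1,+1)$ are our illustrative assumptions, not fixed by the text).

```python
import numpy as np

# A Majorana representation of Cl(1,2) -- an illustrative choice, not fixed by the text:
# gamma^0 = i*sigma_2 (real in this guise), gamma^1 = sigma_1, gamma^2 = sigma_3.
g0 = np.array([[0., 1.], [-1., 0.]])
g1 = np.array([[0., 1.], [1., 0.]])
g2 = np.array([[1., 0.], [0., -1.]])
gammas = [g0, g1, g2]
eta = np.diag([-1., 1., 1.])

# Clifford relations: {gamma^a, gamma^b} = 2 eta^{ab} * 1.
for a in range(3):
    for b in range(3):
        anticomm = gammas[a] @ gammas[b] + gammas[b] @ gammas[a]
        assert np.allclose(anticomm, 2. * eta[a, b] * np.eye(2))

# The bilinears Gammabar^a = C gamma^a, with C = gamma^0, are symmetric matrices.
C = g0
Gbar = [C @ g for g in gammas]
assert all(np.allclose(G, G.T) for G in Gbar)

# Cyclic Fierz-type identity: for all spinor indices (a, b, c, d),
#   Gammabar_{I ab} Gammabar^I_{cd} + Gammabar_{I ac} Gammabar^I_{bd}
#     + Gammabar_{I bc} Gammabar^I_{ad} = 0,
# with the vector index I lowered by eta (diagonal, hence the eta[a, a] weight).
T = sum(eta[a, a] * np.einsum('ab,cd->abcd', Gbar[a], Gbar[a]) for a in range(3))
S = T + T.transpose(1, 2, 0, 3) + T.transpose(2, 0, 1, 3)
assert np.allclose(S, np.zeros((2, 2, 2, 2)))
print("d=3 cyclic Clifford-Fierz identity verified")
```

The analogous check with five Minkowski gamma matrices fails, in keeping with the familiar restriction of identities of this type to $\,d=3,4,6$ and $10$.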
We readily verify that the primitive of the current 2-cocycle \begin{eqnarray}\nn \wp_{(\varepsilon_1,y_1),(\varepsilon_2,y_2)}(\theta,x):=(\ovl\varepsilon_1\,\Gamma_a\,\varepsilon_2)\left(x^a-\tfrac{1}{3}\,\ovl\varepsilon_2\,\Gamma^a\,\theta\right)-\tfrac{1}{6}\,(\ovl\varepsilon_1\,\Gamma_a\,\theta)(\ovl\varepsilon_2\,\Gamma^a\,\theta) \end{eqnarray} yields a constant 3-cocycle \begin{eqnarray}\nn (\delta\wp)_{(\varepsilon_1,y_1),(\varepsilon_2,y_2),(\varepsilon_3,y_3)}(\theta,x)&=&(\ovl\varepsilon_1\,\Gamma_a\,\varepsilon_2)\,y_3^a-\tfrac{1}{6}\,\left[2(\ovl\varepsilon_1\,\Gamma_a\,\varepsilon_2)(\ovl\varepsilon_2\,\Gamma^a\,\varepsilon_3)+(\ovl\varepsilon_1\,\Gamma_a\,\varepsilon_3)(\ovl\varepsilon_2\,\Gamma^a\,\varepsilon_3)\right]\cr\cr &=:&\lambda_{(\varepsilon_1,y_1),(\varepsilon_2,y_2),(\varepsilon_3,y_3)}\,. \end{eqnarray} \medskip \subsubsection{The supermembrane} The fundamental physical significance of the $n$-gerbe stems from the construction of the transgression bundle $\,\mathscr{L}_\mathcal{G}\,$ of \Reqref{eq:transbund} and the associated pre-quantum bundle $\,\mathscr{L}_\sigma\,$ of \Reqref{eq:preqbund}, first advanced by Gaw\c{e}dzki in \Rcite{Gawedzki:1987ak}, in analogy with its historical prototype of Refs.\,\cite{Dirac:1931fb,Sniatycki:1974} due to Dirac and \'Sniatycki, respectively. The construction, originally worked out for a 1-gerbe, generalises to an arbitrary degree in cohomology. Below, we adapt the scheme of geometric (pre)quantisation of Souriau and Kostant, laid out in Refs.\,\cite{Souriau:1966,Kostant:1970}, through cohomological transgression in the spirit of Gaw\c{e}dzki to the setting of supertarget geometry. 
At the basis of our considerations, we have \begin{Def} A \textbf{superequivariant prequantisation} of the Green--Schwarz super-$\sigma$-model is a triple \begin{eqnarray}\nn \left(\mathscr{L}_{{\rm GS},p},\nabla_{\mathscr{L}_{{\rm GS},p}},\mathscr{L}_{{\rm GS},p}\ell_\cdot^{(1)}\right) \end{eqnarray} composed of a principal ${\mathbb{C}}^\x$-bundle $\,\mathscr{L}_{{\rm GS},p}\longrightarrow{\mathsf P}_{{\rm GS},p}\,$ over the space of states $\,{\mathsf P}_{{\rm GS},p}\,$ of the said model, with connection $\,\nabla_{\mathscr{L}_{{\rm GS},p}}\,$ of curvature equal to the corresponding (pre)symplectic form $\,\Omega_{{\rm GS},p}$, and of a group homomorphism $\,\mathscr{L}_{{\rm GS},p}\ell_\cdot^{(1)}:{\mathbb{R}}^{1,d-1\,\vert\,ND_{1,d-1}}\to{\rm Aut}(\mathscr{L}_{{\rm GS},p},\nabla_{\mathscr{L}_{{\rm GS},p}})\,$ lifting the geometric action $\,\ell_\cdot^{(1)}\,$ of the supertranslation group $\,{\mathbb{R}}^{1,d-1\,\vert\,ND_{1,d-1}}\,$ on the supertarget space $\,\mathscr{M}^{(1)}$.\ If, instead, the latter lift maps $\,{\mathbb{R}}^{1,d-1\,\vert\,ND_{1,d-1}}\,$ to a central extension of the automorphism group $\,{\rm Aut}(\mathscr{L}_{{\rm GS},p},\nabla_{\mathscr{L}_{{\rm GS},p}})$,\ we speak of a \textbf{projectively superequivariant prequantisation}. \begin{flushright}$\diamond$\end{flushright}\end{Def} \medskip \subsubsection{The $0$-brane} The point of departure in our discussion of the prequantisation of the dynamics of a charged superparticle is its (pre)symplectic form \eqref{eq:presympl-0}. 
Its ${\mathbb{R}}^{1,d-1\,\vert\,ND_{1,d-1}}$-invariant trivialisation requires that a principal ${\mathbb{C}}^\x$-bundle be erected over the space of states $\,{\mathsf P}_{{\rm GS},p}={\mathsf T}^*_0\mathscr{M}^{(1)}\,$ of the relevant super-$\sigma$-model which geometrises the class $\,[\tfrac{1}{2\pi}\,\Omega_{{\rm GS},p}]\in H^2_{\rm dR}({\mathsf P}_{{\rm GS},p};{\mathbb{Z}})\subset H^2_{\rm dR}({\mathsf P}_{{\rm GS},p};{\mathbb{R}})$.\ In the light of the considerations of Section \ref{sec:GSgerbe}, we conclude that the bundle sought after is the trivial principal ${\mathbb{C}}^\x$-bundle \begin{eqnarray}\nn \pi_{\mathscr{L}_{{\rm GS},p}}\ :\ \mathscr{L}_{{\rm GS},p}:=\pi_{{\rm GS},p}^*\mathscr{L}\otimes({\mathsf P}_{{\rm GS},p}\x{\mathbb{C}}^\x)\cong{\mathsf P}_{{\rm GS},p}\x{\mathbb{C}}^\x\longrightarrow{\mathsf P}_{{\rm GS},p}\ :\ (\theta,x,{\mathsf p},z)\longmapsto(\theta,x,{\mathsf p}) \end{eqnarray} with the second (trivial) tensor factor contributing a LI connection 1-form $\,\vartheta$,\ so that the product bundle has a connection \begin{eqnarray}\nn \nabla_{\mathscr{L}_{{\rm GS},p}}={\mathsf d}+\tfrac{1}{{\mathsf i}}\,(\vartheta+\pi_{{\rm GS},p}^*\underset{\tx{\ciut{(1)}}}{\beta})\,, \end{eqnarray} or -- equivalently -- a principal ${\mathbb{C}}^\x$-connection 1-form \begin{eqnarray}\nn \cA_{{\rm GS},p}(\theta,x,{\mathsf p},z)={\mathsf i}\,\tfrac{\delta z}{z}+{\mathsf p}_a\,E^a(\theta,x)+\underset{\tx{\ciut{(1)}}}{\beta}(\theta) \end{eqnarray} that satisfies the fundamental relation \begin{eqnarray}\nn \pi_{\mathscr{L}_{{\rm GS},p}}^*\Omega_{{\rm GS},p}=\delta\cA_{{\rm GS},p}\,. 
\end{eqnarray} The total space of the bundle can be regarded as an extension of $\,{\mathsf P}_{{\rm GS},p}\,$ defined by the non-trivial CE-2-cocycle $\,\Omega_{{\rm GS},p}$.\ The geometric action of $\,{\mathbb{R}}^{1,d-1\,\vert\,ND_{1,d-1}}\,$ on $\,{\mathsf P}_{{\rm GS},p}\,$ made explicit in \Reqref{eq:sact-sMink} lifts to the latter bundle on which it is implemented by bundle automorphisms \begin{eqnarray}\nn \mathscr{L}_{{\rm GS},p}\ell_\cdot\ :\ {\mathbb{R}}^{1,d-1\,\vert\,ND_{1,d-1}}\x\mathscr{L}_{{\rm GS},p}\longrightarrow\mathscr{L}_{{\rm GS},p}\ :\ \bigl((y^a,\varepsilon^\a),(x^a,\theta^\a,{\mathsf p}_a,z)\bigr)\longmapsto\bigl(x^a+y^a-\tfrac{1}{2}\,\ovl\varepsilon\,\Gamma^a\,\theta,\theta^\a+\varepsilon^\a,{\mathsf p}_a,{\rm e}^{{\mathsf i}\,\jmath_{(\varepsilon,y)}(\theta,x)}\cdot z\bigr)\,, \end{eqnarray} written in terms of the target current \eqref{eq:tgt-curr-0} and giving rise to an essentially projective realisation of the classical symmetry group on the quantum space of states (referred to earlier), \begin{eqnarray}\nn \mathscr{L}_{{\rm GS},p}\ell_{(\varepsilon_2,y_2)}\circ\mathscr{L}_{{\rm GS},p}\ell_{(\varepsilon_1,y_1)}=({\rm id}_{{\mathsf P}_{{\rm GS},p}}\x{\mathsf m}_{{\rm e}^{-{\mathsf i}\,\ovl\varepsilon_1\,\varepsilon_2}})\circ\mathscr{L}_{{\rm GS},p}\ell_{(\varepsilon_2,y_2)\cdot(\varepsilon_1,y_1)}\,, \end{eqnarray} cf.\ \Reqref{eq:delphi}. The automorphisms are readily verified to preserve the connection. Our findings are now summarised in \begin{Prop} The GS super-0-gerbe canonically defines a projectively superequivariant prequantisation of the GS super-$\sigma$-model in dimension 1. 
\end{Prop} \medskip \subsubsection{The Green--Schwarz superstring} In the two-dimensional case, we invoke the insights gathered in the bosonic setting and in the previous paragraph, and consider the transgression bundle given by the trivial principal ${\mathbb{C}}^\x$-bundle \begin{eqnarray}\nn \mathscr{L}_{\mathcal{sG}_{{\rm GS},p}}:={\mathsf L}\mathscr{M}^{(1)}\x{\mathbb{C}}^\x\longrightarrow{\mathsf L}\mathscr{M}^{(1)}\ :\ (x^a,\theta^\a,z)\longmapsto(x^a,\theta^\a) \end{eqnarray} with a connection \begin{eqnarray}\nn \nabla_{\mathscr{L}_{\mathcal{sG}_{{\rm GS},p}}}={\mathsf d}-\tfrac{1}{{\mathsf i}}\,\int_{{\mathbb{S}}^1}\,{\rm ev}^*\underset{\tx{\ciut{(2)}}}{\beta}\,, \end{eqnarray} or -- equivalently -- with a principal ${\mathbb{C}}^\x$-connection 1-form \begin{eqnarray}\nn \a[\theta,x,z]={\mathsf i}\,\tfrac{\delta z}{z}-\int_{{\mathbb{S}}^1}\,{\rm ev}^*\underset{\tx{\ciut{(2)}}}{\beta}(\theta,x)\,. \end{eqnarray} Upon employing the target current \eqref{eq:targcur1} to lift the action of $\,{\mathbb{R}}^{1,d-1\,\vert\,ND_{1,d-1}}\,$ to $\,\mathscr{L}_{\mathcal{sG}_{{\rm GS},p}}\,$ as \begin{eqnarray}\nn \mathscr{L}_{\mathcal{sG}_{{\rm GS},p}}\ell_\cdot\ &:&\ {\mathbb{R}}^{1,d-1\,\vert\,ND_{1,d-1}}\x\mathscr{L}_{\mathcal{sG}_{{\rm GS},p}}\longrightarrow\mathscr{L}_{\mathcal{sG}_{{\rm GS},p}}\cr\cr &:&\ \bigl((y^a,\varepsilon^\a),(x^a,\theta^\a,z)\bigr)\longmapsto\bigl(x^a+y^a-\tfrac{1}{2}\,\ovl\varepsilon\,\Gamma^a\,\theta,\theta^\a+\varepsilon^\a,{\rm e}^{{\mathsf i}\,\int_{{\mathbb{S}}^1}\,{\rm ev}^*\underset{\tx{\ciut{(1)}}}{\jmath_{(\varepsilon,y)}}(\theta,x)}\cdot z\bigr)\,, \end{eqnarray} we ensure that the connection is preserved.
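The transgression prescription employed above admits a simple finite-dimensional (and purely bosonic) illustration: for a 2-form $B$ on a manifold, the induced 1-form on loop space evaluates a loop variation $\,\xi\,$ as $\,-\oint\,B\bigl(x'(\sigma),\xi(\sigma)\bigr)\,{\mathsf d}\sigma$. The sketch below checks this numerically for the area 2-form on $\,{\mathbb{R}}^2$; the function names and the overall sign convention are ours, chosen for the illustration only.

```python
import numpy as np

# Toy bosonic stand-in for the loop-space connection -\int_{S^1} ev* beta_(2):
# the transgression of a 2-form B on the target evaluates a loop variation xi
# as -\oint B(x'(s), xi(s)) ds.  All names and signs here are our conventions.

def transgress(B, loop, loop_prime, xi, n=2000):
    """Numerically evaluate -\\oint B(x(s); x'(s), xi(s)) ds over s in [0, 2*pi)."""
    s = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    ds = 2.0 * np.pi / n
    total = 0.0
    for si in s:
        total += B(loop(si), loop_prime(si), xi(si)) * ds
    return -total

# Target R^2 with the constant area 2-form B = dx ^ dy.
B = lambda x, u, v: u[0] * v[1] - u[1] * v[0]

R = 1.5
loop       = lambda s: np.array([R * np.cos(s), R * np.sin(s)])   # circle of radius R
loop_prime = lambda s: np.array([-R * np.sin(s), R * np.cos(s)])
xi_radial  = lambda s: np.array([np.cos(s), np.sin(s)])           # radial deformation

val = transgress(B, loop, loop_prime, xi_radial)
# Pointwise B(x', xi) = -R, so the transgressed 1-form pairs with the radial
# variation to give 2*pi*R -- the rate of change of the flux of B through the
# disc as the radius grows, as expected of a prequantum pairing.
```

The same mechanism, applied fibrewise over the supertarget, underlies the loop-space connection 1-form written above.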
At this stage, it suffices to tensor the pullback of the transgression bundle along $\,\pi_{{\rm GS},p}:{\mathsf P}_{{\rm GS},p}\longrightarrow{\mathsf L}\mathscr{M}^{(1)}\,$ by a trivial principal ${\mathbb{C}}^\x$-bundle with a manifestly LI connection 1-form $\,\vartheta\,$ to obtain a prequantum bundle \begin{eqnarray}\nn \pi_{\mathscr{L}_{{\rm GS},p}}\ :\ \mathscr{L}_{{\rm GS},p}:=\pi_{{\rm GS},p}^*\mathscr{L}_{\mathcal{sG}_{{\rm GS},p}}\otimes({\mathsf P}_{{\rm GS},p}\x{\mathbb{C}}^\x)\longrightarrow{\mathsf P}_{{\rm GS},p}\ :\ (x^a,\theta^\a,{\mathsf p}_a,z)\longmapsto(x^a,\theta^\a,{\mathsf p}_a) \end{eqnarray} with a connection \begin{eqnarray}\nn \nabla_{{\rm GS},p}={\mathsf d}+\tfrac{1}{{\mathsf i}}\,\bigl(\vartheta-\pi_{{\rm GS},p}^*\int_{{\mathbb{S}}^1}\,{\rm ev}^*\underset{\tx{\ciut{(2)}}}{\beta}\bigr)\,, \end{eqnarray} or -- equivalently -- with a principal ${\mathbb{C}}^\x$-connection 1-form \begin{eqnarray}\nn \a[\theta,x,{\mathsf p},z]={\mathsf i}\,\tfrac{\delta z}{z}+\int_{{\mathbb{S}}^1}\,\left[\Vol({\mathbb{S}}^1)\,{\mathsf p}_a\,E^a(\theta,x)-{\rm ev}^*\underset{\tx{\ciut{(2)}}}{\beta}(\theta,x)\right]\,, \end{eqnarray} preserved by the induced action \begin{eqnarray}\nn \mathscr{L}_{{\rm GS},p}\ell_\cdot\ &:&\ {\mathbb{R}}^{1,d-1\,\vert\,ND_{1,d-1}}\x\mathscr{L}_{{\rm GS},p}\longrightarrow\mathscr{L}_{{\rm GS},p}\cr\cr &:&\ \bigl((y^a,\varepsilon^\a),(x^a,\theta^\a,{\mathsf p}_a,z)\bigr)\longmapsto\bigl(x^a+y^a-\tfrac{1}{2}\,\ovl\varepsilon\,\Gamma^a\,\theta,\theta^\a+\varepsilon^\a,{\mathsf p}_a,{\rm e}^{{\mathsf i}\,\int_{{\mathbb{S}}^1}\,{\rm ev}^*\underset{\tx{\ciut{(1)}}}{\jmath_{(\varepsilon,y)}}(\theta,x)}\cdot z\bigr)\,. \end{eqnarray} All in all, then, we arrive at \begin{Prop} The GS super-1-gerbe canonically defines a projectively superequivariant prequantisation of the GS super-$\sigma$-model in dimension 2.
\end{Prop} \medskip \subsubsection{The supermembrane} \medskip} \section{The pure-supergerbe Hughes--Polchinski superbackgrounds \& their $\kappa$-symmetry}\label{sec:kappa} Our discussion of the higher-geometric structures behind the GS super-$\sigma$-model and its symmetries induced from automorphisms of the supertarget, which we anticipate to be reflected in the equivariance of those structures, brings us to the Hughes--Polchinski formulation recapitulated in Sec.\,\ref{sub:HPGS}. Indeed, a careful canonical analysis of the super-$\sigma$-model in its Nambu--Goto formulation, first performed by de Azc\'arraga and Lukierski in \Rcite{deAzcarraga:1982njd} and then by Siegel in \Rcite{Siegel:1983hh}, reveals the existence of gauge symmetries that engage {\it jointly} the metric and the topological term of the super-$\sigma$-model in that neither of them is invariant separately under the corresponding symmetry transformations and it is only a distinguished linear combination of the two, with their relative normalisation thus fixed, that remains intact. The symmetry plays a crucial r\^ole that justifies devoting the closing paragraphs of this work to its preliminary discussion in the (super)gerbe-theoretic language developed heretofore: It identifies some of the target-space spinorial degrees of freedom of the original model as pure gauge and serves to remove them in the standard gauge-fixing procedure through which actual {\it super}symmetry of the (effective) field content of the physical theory is attained. As the symmetries mix the two terms in the NG action functional, we cannot expect them to geometrise in the supergeometric setting of that formulation in a purely gerbe-theoretic manner analogous to the realisation of the adjoint supersymmetry established in Sec.\,\ref{sec:sgerbequiv}. 
A path to geometrisation opens up -- on the firm basis of Prop.\,\ref{prop:sMinkHPvsNG} -- only in the HP formulation in which the metric and cohomological structures of the supertarget amalgamate into a purely (super-)gerbe-theoretic structure on a larger supertarget and -- as shall be documented in the remainder of this work -- in that amalgam the symmetry is encoded in a form resembling closely the previously described one. The local supersymmetry under consideration has its peculiarities, to be detailed below, that preclude the construction of a full-blown equivariant structure on the composite super-gerbe of the HP formulation insofar as there is no obvious way to geometrise the conditions that need to be imposed in order for the supersymmetry to be realised in the super-$\sigma$-model in the first place. Consequently, the analysis to follow provides us with a non-standard geometric instantiation of a field-theoretic symmetry in the presence of a topological charge. \subsection{The Cartan supergeometry of the gauged supersymmetry}\label{sec:kappaCart} The point of departure of our analysis is the HP formulation of the GS super-$\sigma$-model of embeddings of the $(p+1)$-dimensional riemannian worldvolume $\,\Omega\,$ in the supertarget $\,{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}\,$ parametrised as in \Reqref{eq:HParaMink}. 
Following the general rules laid out in Sec.\,\ref{sub:HPGS}, we may write its action functional in the form \begin{eqnarray}\nn S^{\rm (HP)}_{{\rm GS},0}[{\mathbb{X}}_{\rm (HP)}]&=&\int_\Omega\,\bigl(\theta^0_{\rm L}(\theta,x,\phi)(\cdot)+\lambda_0\,\ovl\theta\,\Gamma_{11}\,\sigma(\theta)(\cdot)\bigr)\,, \end{eqnarray} for $\,p=0$,\ or -- for $\,p>0\,$ -- \begin{eqnarray}\nn S^{\rm (HP)}_{{\rm GS},p}[{\mathbb{X}}_{\rm (HP)}]&=&\int_\Omega\,\bigl(\tfrac{1}{(p+1)!}\,\epsilon_{\unl A_0\unl A_1\ldots\unl A_p}\,\bigl(\theta^{\unl A_0}_{\rm L}\wedge\theta^{\unl A_1}_{\rm L}\wedge\cdots\wedge\theta^{\unl A_p}_{\rm L}\bigr)(\theta,x,\phi)(\cdot)\cr\cr &&\hspace{.75cm}+\tfrac{\lambda_p}{p+1}\,\sum_{k=0}^p\,\ovl\theta\,\Gamma_{I_1 I_2\ldots I_p}\,\sigma(\theta)\wedge{\mathsf d} x^{I_1}\wedge{\mathsf d} x^{I_2}\wedge\cdots\wedge{\mathsf d} x^{I_k}\wedge e^{I_{k+1} I_{k+2}\ldots I_p}(\theta,x)(\cdot)\bigr)\,, \end{eqnarray} where we have reinstated a parameter $\,\lambda_p\in{\mathbb{R}}^\x\,$ that quantifies the relative normalisation of the two terms in the action functional. This parameter passes to the NG formulation upon integrating out the (unphysical) Goldstone degrees of freedom in the HP action functional. There, as in the HP action functional itself, its value does not affect the global symmetries of the super-$\sigma$-model in any qualitative manner, and so it remains arbitrary as long as we consider those symmetries only.
Its status changes dramatically in the context of local symmetries for which we look, with hindsight, among {\it infinitesimal} (or tangential) shifts of the coordinates $\,\theta^\a, x^I\,$ and $\,\phi^{\unl A\widehat S}\,$ induced by the {\it right}-regular translations \begin{eqnarray}\nn \wp_\cdot\ &:&\ {\rm s}\mathscr{P}(1,d-1;1)\x{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}\longrightarrow{\rm s}\mathscr{P}(1,d-1;1)\cr\cr &:&\ \bigl({\rm e}^{\theta^\a\,Q_\a}\cdot{\rm e}^{x^I\,P_I}\cdot{\rm e}^{\phi^{KL}\,J_{KL}},{\rm e}^{\kappa^\beta\,Q_\beta}\cdot{\rm e}^{y^J\,P_J}\bigr)\longmapsto{\rm e}^{\theta^\a\,Q_\a}\cdot{\rm e}^{x^I\,P_I}\cdot {\rm e}^{\phi^{KL}\,J_{KL}}\cdot{\rm e}^{\kappa^\beta\,Q_\beta}\cdot{\rm e}^{y^J\,P_J}\cr\cr &&\hspace{5cm}={\rm e}^{(\theta^\a+S(\phi)^\a_{\ \beta}\,\kappa^\beta)\,Q_\a}\cdot{\rm e}^{(x^I+ L(\phi)^I_{\ J}\,y^J-\frac{1}{2}\,\ovl\theta\,\Gamma^I\cdot S(\phi)\,\kappa)\,P_I}\cdot {\rm e}^{\phi^{KL}\,J_{KL}}\,, \end{eqnarray} that is \begin{eqnarray}\label{eq:ksymmind}\hspace{2cm} \bigl(\theta^\a,x^I,\phi^{\unl A\widehat S}\bigr)\longmapsto\bigl(\theta^\a+\widetilde\kappa^\a(\phi),x^I+\widetilde y^I(\phi)-\tfrac{1}{2}\,\ovl\theta\,\Gamma^I\,\widetilde\kappa(\phi),\phi^{\unl A\widehat S}\bigr)=:r_{(\kappa^\beta,y^J)}\bigl(\theta^\a,x^I,\phi^{\unl A\widehat S}\bigr)\,, \end{eqnarray} where \begin{eqnarray}\nn \widetilde\kappa^\a(\phi):=S(\phi)^\a_{\ \beta}\,\kappa^\beta\,,\qquad\qquad\widetilde y^I(\phi):= L(\phi)^I_{\ J}\,y^J\,. \end{eqnarray} These satisfy the algebra \begin{eqnarray}\nn \bigl[r_{(\kappa_1^\a,y_1^I)},r_{(\kappa_2^\beta,y_2^J)}\bigr]=r_{(0,\ovl\kappa_1\,\Gamma^I\,\kappa_2)}\,. \end{eqnarray} Note, in particular, that purely Gra\ss mann-odd translations do {\it not} form a subalgebra in the latter which implies that closing the algebra of infinitesimal symmetries will require extending the space of such translations by the purely Gra\ss mann-even ones. 
The above shifts induce infinitesimal transformations of the relevant Maurer--Cartan super-1-forms: \begin{eqnarray}\nn \bigl(\Sigma_{\rm L}^\a,\theta_{\rm L}^I\bigr)(\theta,x,\phi)\longmapsto\bigl(\Sigma_{\rm L}^\a,\theta_{\rm L}^I\bigr)(\theta,x,\phi)+\bigl(S(-\phi)^\a_{\ \beta}\,{\mathsf d}\widetilde\kappa^\beta(\phi), L(-\phi)^I_{\ J}\,{\mathsf d}\widetilde y^J(\phi)+\ovl\kappa\,\Gamma^I\,\Sigma_{\rm L}(\theta,x,\phi)\bigr)+\mathscr{O}(\kappa^2) \end{eqnarray} that determine the variation of the action functional. Below, we write out and examine the variation in the particular cases of $\,p\in\{0,1\}\,$ in which we shall subsequently look for a suitable lift of the symmetry to the corresponding super-$p$-gerbes. The symmetry is discussed in all generality in Refs.\,\cite{McArthur:1999dy,Gomis:2006wu}, {\it cp.}\ also \Rcite{deAzcarraga:2004df}. \subsubsection{The super-0-brane} The HP action functional for the super-0-brane (in $\,{\rm sMink}^{1,9\,\vert\,D_{1,9}}$) takes the form \begin{eqnarray}\nn S^{\rm (HP)}_{{\rm GS},0}[{\mathbb{X}}_{\rm (HP)}]=\int_\Omega\,{\mathbb{X}}_{\rm (HP)}^*\widehat{\underset{\tx{\ciut{(1)}}}{\beta}}{}^{(\lambda_0)}\,, \end{eqnarray} with the integrand given by the pullback of the super-1-form \begin{eqnarray}\nn \widehat{\underset{\tx{\ciut{(1)}}}{\beta}}{}^{(\lambda_0)}(\theta,x,\phi)=\theta_{\rm L}^0(\theta,x,\phi)+\lambda_0\,\ovl\theta\,\Gamma_{11}\,\sigma(\theta)\,, \end{eqnarray} whose variation under the Gra\ss mann-odd shift of the lagrangean field reads \begin{eqnarray}\nn &&r_\cdot^*\widehat{\underset{\tx{\ciut{(1)}}}{\beta}}{}^{(\lambda_0)}\bigl(\bigl(\theta^\a,x^I,\phi^{\unl A\widehat S}\bigr),(\kappa^\beta,0)\bigr)-\widehat{\underset{\tx{\ciut{(1)}}}{\beta}}{}^{(\lambda_0)}\bigl(\theta^\a,x^I,\phi^{\unl A\widehat S}\bigr)\cr\cr &=&- L(-\phi)^0_{\ I}\,\ovl\sigma(\theta)\,\Gamma^I\,\widetilde\kappa(\phi)-2\lambda_0\,\ovl\sigma(\theta)\,\Gamma_{11}\,\widetilde\kappa(\phi)+{\mathsf 
d}\bigl(\lambda_0\,\ovl\theta\,\Gamma_{11}\,\widetilde\kappa(\phi)\bigr)\cr\cr &=&-2\ovl\Sigma_{\rm L}(\theta,x,\phi)\,\Gamma^0\cdot\left(\tfrac{{\boldsymbol{1}}_{S_{1,d-1}}+2\lambda_0\,\Gamma^0\cdot\Gamma_{11}}{2}\right)\kappa+{\mathsf d}\widehat{\rm F}^{(\lambda_0)}\bigl(\bigl(\theta^\a,x^I,\phi^{\unl A\widehat S}\bigr),(\kappa^\beta,0)\bigr)\,. \end{eqnarray} where \begin{eqnarray}\nn \widehat{\rm F}^{(\lambda_0)}\bigl(\bigl(\theta^\a,x^I,\phi^{\unl A\widehat S}\bigr),(\kappa^\beta,0)\bigr)=\lambda_0\,\ovl\theta\,\Gamma_{11}\,\widetilde\kappa(\phi)\,. \end{eqnarray} The operator \begin{eqnarray}\nn {\mathsf P}^{(0)}_{\lambda_0}:=\tfrac{{\boldsymbol{1}}_{S_{1,d-1}}+2\lambda_0\,\Gamma^0\cdot\Gamma_{11}}{2}\in{\rm End}_{\mathbb{C}}\,(S_{1,d-1}) \end{eqnarray} appearing in the first term of the variation is a projector iff \begin{eqnarray}\nn \lambda_0\in\bigl\{-\tfrac{1}{2},\tfrac{1}{2}\bigr\}\,, \end{eqnarray} and it then suffices to take \begin{eqnarray}\label{eq:ksymmproj} \kappa\in{\rm ker}\,{\mathsf P}^{(0)}_{\lambda_0} \end{eqnarray} to obtain a symmetry of the action functional. The difference between the two choices is immaterial, hence we set, {\it e.g.}, \begin{eqnarray}\nn \lambda_0=\tfrac{1}{2} \end{eqnarray} and proceed with the symmetry analysis of the action functional associated with the super-1-form \begin{eqnarray}\label{eq:hatbeta1} \widehat{\underset{\tx{\ciut{(1)}}}{\beta}}{}^{(\frac{1}{2})}(\theta,x,\phi)=\theta_{\rm L}^0(\theta,x,\phi)+\tfrac{1}{2}\,\ovl\theta\,\Gamma_{11}\,\sigma(\theta)\,. 
\end{eqnarray} Note that the complementary projector $\,{\boldsymbol{1}}_{D_{1,d-1}}-{\mathsf P}^{(0)}_{\frac{1}{2}}\,$ is precisely of the type described in Prop.\,\ref{prop:sMinkHPvsNG}, \begin{eqnarray}\nn &&\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}-\Gamma^0\cdot\Gamma_{11}}{2}\right)^{\rm T}\cdot\ovl\Gamma{}^I\cdot\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}-\Gamma^0\cdot\Gamma_{11}}{2}\right)=C\cdot\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}-\bigl(C^{-1}\cdot\Gamma^{\rm T}_{11}\cdot C\bigr)\cdot\bigl(C^{-1}\cdot\Gamma^{0\,{\rm T}}\cdot C\bigr)}{2}\right)\cdot\Gamma{}^I\cdot\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}-\Gamma^0\,\Gamma_{11}}{2}\right)\cr\cr &=&C\cdot\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}-\Gamma_{11}\cdot\Gamma^0}{2}\right)\cdot\Gamma{}^I\cdot\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}-\Gamma^0\,\Gamma_{11}}{2}\right)=C\cdot\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}+\Gamma^0\cdot\Gamma_{11}}{2}\right)\cdot\Gamma{}^I\cdot\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}-\Gamma^0\,\Gamma_{11}}{2}\right)\cr\cr &=&\ovl\Gamma{}^I\cdot\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}+(-1)^{\delta^{I0}}\,\Gamma^0\cdot\Gamma_{11}}{2}\right)\cdot\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}-\Gamma^0\,\Gamma_{11}}{2}\right)=\delta^{I0}\,\ovl\Gamma{}^0\cdot\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}-\Gamma^0\,\Gamma_{11}}{2}\right)^2\cr\cr &=&\delta^{I0}\,\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}-\Gamma^0\cdot\Gamma_{11}}{2}\right)^{\rm T}\cdot\ovl\Gamma{}^0\cdot\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}-\Gamma^0\cdot\Gamma_{11}}{2}\right) \end{eqnarray} and in particular the commutator of two Gra\ss mann-odd shifts, \begin{eqnarray}\nn \bigl[r_{(\kappa_1^\a,0)},r_{(\kappa_2^\beta,0)}\bigr]=r_{(0,\ovl\kappa_1\,\Gamma^J\,\kappa_2)}\,, \end{eqnarray} takes the form dictated by the algebra \begin{eqnarray}\nn \ovl\kappa_1\,\Gamma^I\,\kappa_2&\equiv&\bigl(({\boldsymbol{1}}_{D_{1,d-1}}-{\mathsf P}^{(0)}_{\frac{1}{2}})\kappa_1\bigr)^{\rm 
T}\,\ovl\Gamma{}^I\,\kappa_2=\ovl\kappa_1\,\Gamma^I\cdot\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}+(-1)^{\delta^{I0}}\,\Gamma^0\cdot\Gamma_{11}}{2}\right)\,\kappa_2=\ovl\kappa_1\,\Gamma^0\,\kappa_2\,\delta^I_0\,, \end{eqnarray} or simply \begin{eqnarray}\nn \bigl[r_{(\kappa_1^\a,0)},r_{(\kappa_2^\beta,0)}\bigr]=r_{(0,\ovl\kappa_1\,\Gamma^0\,\kappa_2\,\delta^I_0)}\,, \end{eqnarray} A purely Gra\ss mann-even shift of the lagrangean field of the super-$\sigma$-model now yields \begin{eqnarray}\nn r_\cdot^*\widehat{\underset{\tx{\ciut{(1)}}}{\beta}}{}^{(\frac{1}{2})}\bigl(\bigl(\theta^\a,x^I,\phi^{\unl A\widehat S}\bigr),(0,y^J)\bigr)-\widehat{\underset{\tx{\ciut{(1)}}}{\beta}}{}^{(\frac{1}{2})}(\theta^\a,x^I,\phi^{\unl A\widehat S})&=& L(-\phi)^0_{\ I}\,{\mathsf d}\widetilde y^I(\phi)={\mathsf d} y^0+\theta^{0I}_{\rm L}(\theta,x,\phi)\,\eta_{IJ}\,{\mathsf d} y^J\cr\cr &\equiv&{\mathsf d} y^0+\theta^{0\widehat S}_{\rm L}(\theta,x,\phi)\,\eta_{\widehat S\widehat T}\,{\mathsf d} y^{\widehat T}\,, \end{eqnarray} and so if we restrict to those Gra\ss mann-even shifts that are required for the closure of the commutator, we obtain the desired result \begin{eqnarray}\nn r_\cdot^*\widehat{\underset{\tx{\ciut{(1)}}}{\beta}}{}^{(\frac{1}{2})}\bigl(\bigl(\theta^\a,x^I,\phi^{\unl A\widehat S}\bigr),(0,y\,\delta^J_0)\bigr)-\widehat{\underset{\tx{\ciut{(1)}}}{\beta}}{}^{(\frac{1}{2})}(\theta^\a,x^I,\phi^{\unl A\widehat S})&=&{\mathsf d} y \end{eqnarray} without any further manipulations. The distinguished shifts may be seen to engender diffeomorphisms of the super-0-brane worldline in the so-called static gauge, {\it cp.}\ \Rcite{Gomis:2006wu}, and so insisting on their presence among symmetry generators is physically perfectly justified. At this stage, we still have to reconcile the inverse Higgs constraints, central to the correspondence with the NG formulation, with the newly established symmetries. 
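Before doing so, let us note that the projector condition on $\,{\mathsf P}^{(0)}_{\lambda_0}\,$ invoked above is elementary linear algebra: for any matrix $M$ squaring to the identity (the role played here by $\,\Gamma^0\cdot\Gamma_{11}$), $\,\tfrac{1}{2}({\boldsymbol{1}}+2\lambda\,M)\,$ is idempotent precisely for $\,\lambda=\pm\tfrac{1}{2}$. A finite-dimensional numerical sanity check, using the four-dimensional Dirac matrices with $\,{\mathsf i}\gamma_5\,$ standing in for $\,\Gamma_{11}\,$ so that the square condition holds (a convention choice of ours, not taken from the text):

```python
import numpy as np

# Toy check: for M with M @ M = 1 (the role of Gamma^0.Gamma_11 above),
# P = (1 + 2*lam*M)/2 is idempotent iff lam = +-1/2.  We use the 4d Dirac
# matrices in the Dirac representation, with 1j*g5 standing in for Gamma_11
# so that M squares to the identity -- our convention for this illustration.

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

g0 = np.block([[I2, 0 * I2], [0 * I2, -I2]])                      # (g0)^2 = +1
g = [g0] + [np.block([[0 * I2, s], [-s, 0 * I2]]) for s in (s1, s2, s3)]
g5 = 1j * g[0] @ g[1] @ g[2] @ g[3]                               # chirality matrix

I4 = np.eye(4, dtype=complex)
M = g0 @ (1j * g5)                    # stand-in for Gamma^0.Gamma_11, M @ M = 1

def P0(lam):
    return (I4 + 2 * lam * M) / 2

assert np.allclose(M @ M, I4)
assert np.allclose(P0(0.5) @ P0(0.5), P0(0.5))       # projector at lam = 1/2
assert np.allclose(P0(-0.5) @ P0(-0.5), P0(-0.5))    # ... and at lam = -1/2
assert not np.allclose(P0(0.3) @ P0(0.3), P0(0.3))   # fails for generic lam

# The sign rule used in the reduction of the bilinears above: Gamma^I picks up
# (-1)^{delta^{I0}} when commuted past Gamma^0.Gamma_11.
for Ii, gI in enumerate(g):
    sign = -1.0 if Ii == 0 else 1.0
    assert np.allclose(gI @ M, sign * M @ gI)
```

The last loop verifies, in the toy model, the sign rule $\,\Gamma^I\cdot(\Gamma^0\cdot\Gamma_{11})=(-1)^{\delta^{I0}}\,(\Gamma^0\cdot\Gamma_{11})\cdot\Gamma^I\,$ underlying the reduction of the Gra{\ss}mann-odd commutator.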
To this end, we derive the integrability conditions for the differential relation \eqref{eq:IsHiggs} in the presence of the symmetries by considering their variation under a general symmetry transformation, \begin{eqnarray}\nn r_\cdot^*\theta_{\rm L}^{\widehat S}\bigl(\bigl(\theta^\a,x^I,\phi^{\unl A\widehat S}\bigr),\bigl(\kappa^\beta,y\,\delta^J_0\bigr)\bigr)= \theta_{\rm L}^{\widehat S}(\theta,x,\phi)-y\,\theta^{0\widehat S}_{\rm L}(\theta,x,\phi)-\ovl\kappa\,\Gamma^{\widehat S}\,\Sigma_{\rm L}(\theta,x,\phi)\,. \end{eqnarray} The demand that the above vanish yields secondary constraints: \begin{eqnarray}\label{eq:IsHiggsIC} \theta^{0\widehat S}_{\rm L}(\theta,x,\phi)\must0\,,\qquad\qquad\Gamma^{\widehat S}\,{\mathsf P}^{(0)}_{\frac{1}{2}}\,\Sigma_{\rm L}(\theta,x,\phi)\must0\,,\qquad\widehat S\in\ovl{1,9}\,, \end{eqnarray} of which the former were identified in \Rcite{Gomis:2006wu} as field equations of the super-$\sigma$-model under study. Constraints analogous to the latter were encountered in the study of gauge supersymmetries of the GS super-$\sigma$-model in \Rcite{McArthur:1999dy}. From our analysis, we extract \begin{Def}\label{def:s0bksymmalg} The {\bf extended super-0-brane $\kappa$-symmetry superalgebra} (in $\,{\rm sMink}^{1,9\,\vert\,D_{1,9}}$) is the Lie subsuperalgebra of $\,{\mathbb{R}}^{1,9\,\vert\,D_{1,9}}\,$ with generators \begin{eqnarray}\nn \gt{k}:=\corr{\ Q^*_\a:={\mathsf P}^{(0)\,\beta}_{\frac{1}{2}}{}_\a\,Q_\beta\ \vert\ \a\in\ovl{1,D_{1,9}}\ }_{\mathbb{R}}\oplus\corr{P_0}_{\mathbb{R}}\,, \end{eqnarray} satisfying the supercommutation relations \begin{eqnarray}\nn \{Q^*_\a,Q^*_\beta\}=\bigl({\mathsf P}^{(0)\,{\rm T}}_{\frac{1}{2}}\cdot\ovl\Gamma{}^0\cdot{\mathsf P}^{(0)}_{\frac{1}{2}}\bigr)_{\a\beta}\,P_0\,,\qquad\qquad[P_0,Q^*_\a]=0\,. 
\end{eqnarray} \begin{flushright}$\diamond$\end{flushright}\end{Def} Our findings are summarised in \begin{Prop}\label{prop:kappasymm0} The super-0-brane $\kappa$-symmetry superalgebra of Def.\,\ref{def:s0bksymmalg} is a gauge symmetry of the corresponding Green--Schwarz super-$\sigma$-model (in the Hughes--Polchinski formulation) and preserves the space of its restricted (classical) field configurations $\,\mathscr{D}_0\subset{\rm s}\mathscr{P}(1,9\,\vert\,1)\,$ defined by the family of constraints \begin{eqnarray}\label{eq:ksymmconstr0} \bigl(\theta_{\rm L}^{\widehat S},\theta^{0\widehat S}_{\rm L},\Gamma^{\widehat S}\,{\mathsf P}^{(0)}_{\frac{1}{2}}\,\Sigma_{\rm L}\bigr)\mathord{\restriction}_{\mathcal{T}\mathscr{D}_0}=0\,,\qquad\widehat S\in\ovl{1,9}\,. \end{eqnarray} \end{Prop} \brem Note that the super-1-form $\,\widehat{\underset{\tx{\ciut{(1)}}}{\beta}}{}^{(\frac{1}{2})}\,$ and the constraints are invariant with respect to the (right) action of the spin group $\,{\rm Spin}(9)\,$ generated by rotations that leave the distinguished direction $\,\partial_0\,$ intact. This follows directly from the vectorial nature of the (bare) indices carried by the Maurer--Cartan 1-forms entering the definition of the super-1-form and of the constraints.
\end{Rem}\medskip \subsubsection{The Green--Schwarz superstring} In the case of the superstring, the HP action functional \begin{eqnarray}\nn S^{\rm (HP)}_{{\rm GS},1}[{\mathbb{X}}_{\rm (HP)}]=\int_\Omega\,{\mathbb{X}}_{\rm (HP)}^*\widehat{\underset{\tx{\ciut{(2)}}}{\beta}}{}^{(\lambda_1)} \end{eqnarray} is the integral of the pullback of the super-2-form \begin{eqnarray}\nn \widehat{\underset{\tx{\ciut{(2)}}}{\beta}}{}^{(\lambda_1)}(\theta,x,\phi)=\bigl(\theta_{\rm L}^0\wedge\theta_{\rm L}^1\bigr)(\theta,x,\phi)+\lambda_1\,\ovl\theta\,\Gamma_I\,\sigma(\theta)\wedge{\mathsf d} x^I\,, \end{eqnarray} and we readily compute, invoking Eqs.\,\eqref{eq:Gammasinter}, \eqref{eq:LorintMink} and \eqref{eq:LorintC} along the way, \begin{eqnarray}\nn &&r_\cdot^*\widehat{\underset{\tx{\ciut{(2)}}}{\beta}}{}^{(\lambda_1)}\bigl(\bigl(\theta^\a,x^I,\phi^{\unl A\widehat S}\bigr),(\kappa^\beta,0)\bigr)-\widehat{\underset{\tx{\ciut{(2)}}}{\beta}}{}^{(\lambda_1)}\bigl(\theta^\a,x^I,\phi^{\unl A\widehat S}\bigr)\cr\cr &=&-\ovl\Sigma_{\rm L}\,\Gamma^0\,\kappa\wedge\theta_{\rm L}^1(\theta,x,\phi)+\ovl\Sigma_{\rm L}\,\Gamma^1\,\kappa\wedge\theta_{\rm L}^0(\theta,x,\phi)-2\lambda_1\,\ovl\Sigma_{\rm L}\,\Gamma_I\,\kappa\wedge\theta_{\rm L}^I(\theta,x,\phi)+{\mathsf d}\bigl(\lambda_1\,\ovl\theta\,\Gamma_I\,\widetilde\kappa(\phi)\,e^I(\theta,x)\bigr)\,.
\end{eqnarray} Drawing on the previous observations, we impose the inverse Higgs constraints, whereupon the last formula reduces to \begin{eqnarray}\nn &&r_\cdot^*\widehat{\underset{\tx{\ciut{(2)}}}{\beta}}{}^{(\lambda_1)}\bigl(\bigl(\theta^\a,x^I,\phi^{\unl A\widehat S}\bigr),(\kappa^\beta,0)\bigr)-\widehat{\underset{\tx{\ciut{(2)}}}{\beta}}{}^{(\lambda_1)}\bigl(\theta^\a,x^I,\phi^{\unl A\widehat S}\bigr)\cr\cr &=&2\bigl(\theta_{\rm L}^1\wedge\ovl\Sigma_{\rm L}\,\Gamma^0-\theta_{\rm L}^0\wedge\ovl\Sigma_{\rm L}\,\Gamma^1\bigr)(\theta,x,\phi)\,\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}-2\lambda_1\,\Gamma^0\cdot\Gamma^1}{2}\right)\kappa+{\mathsf d}\widehat{\mathsf E}^{(\lambda_1)}\bigl(\bigl(\theta^\a,x^I,\phi^{\unl A\widehat S}\bigr),(\kappa^\beta,0)\bigr)\,, \end{eqnarray} where \begin{eqnarray}\nn \widehat{\mathsf E}^{(\lambda_1)}\bigl(\bigl(\theta^\a,x^I,\phi^{\unl A\widehat S}\bigr),(\kappa^\beta,0)\bigr)=\lambda_1\,\ovl\theta\,\Gamma_I\,\widetilde\kappa(\phi)\,e^I(\theta,x)\,. \end{eqnarray} Reasoning as in the previous case, we convince ourselves that the operator \begin{eqnarray}\nn {\mathsf P}^{(1)}_{\lambda_1}:=\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}-2\lambda_1\,\Gamma^0\cdot\Gamma^1}{2}\in{\rm End}_{\mathbb{C}}\,(S_{1,d-1}) \end{eqnarray} appearing in the above expression is a projector iff \begin{eqnarray}\nn \lambda_1\in\bigl\{-\tfrac{1}{2},\tfrac{1}{2}\bigr\}\,, \end{eqnarray} and then for \begin{eqnarray}\nn \kappa\in{\rm ker}\,{\mathsf P}^{(1)}_{\lambda_1} \end{eqnarray} we obtain a symmetry of the space of field configurations subject to the inverse Higgs constraints. Note that in the present case -- in contrast with the previous one -- field equations have to be invoked already for a single Gra\ss mann-odd variation. 
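The projector condition on $\,{\mathsf P}^{(1)}_{\lambda_1}\,$ can be illustrated in the same finite-dimensional toy model as before: for a matrix $M$ squaring to the identity (here the role of $\,\Gamma^0\cdot\Gamma^1$), $\,\tfrac{1}{2}({\boldsymbol{1}}-2\lambda\,M)\,$ is idempotent precisely for $\,\lambda=\pm\tfrac{1}{2}$. The sketch below uses the four-dimensional Dirac matrices in mostly-minus signature as a stand-in for the Clifford algebra of the text (our convention, chosen for the illustration only).

```python
import numpy as np

# Toy check for the superstring projector P^(1)_lam = (1 - 2*lam*M)/2 with
# M = Gamma^0.Gamma^1: idempotency singles out lam = +-1/2, and Gamma^I picks
# up the sign (-1)^{delta^{I0}+delta^{I1}} when commuted past M.  We use the
# 4d Dirac matrices in mostly-minus signature as a stand-in for the text's
# d-dimensional Clifford algebra.

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

g0 = np.block([[I2, 0 * I2], [0 * I2, -I2]])                      # (g0)^2 = +1
g = [g0] + [np.block([[0 * I2, s], [-s, 0 * I2]]) for s in (s1, s2, s3)]

I4 = np.eye(4, dtype=complex)
M = g[0] @ g[1]                       # Gamma^0.Gamma^1, squares to the identity

def P1(lam):
    return (I4 - 2 * lam * M) / 2

assert np.allclose(M @ M, I4)
assert np.allclose(P1(0.5) @ P1(0.5), P1(0.5))
assert np.allclose(P1(-0.5) @ P1(-0.5), P1(-0.5))
assert not np.allclose(P1(0.25) @ P1(0.25), P1(0.25))

# Sign rule used in the matrix manipulations of the main text.
for Ii, gI in enumerate(g):
    sign = -1.0 if Ii in (0, 1) else 1.0
    assert np.allclose(gI @ M, sign * M @ gI)
```

The final loop confirms, in the toy model, the sign factor $\,(-1)^{\delta^{I0}+\delta^{I1}}\,$ appearing in the chain of identities for the complementary projector below.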
Once again, we set \begin{eqnarray}\nn \lambda_1=\tfrac{1}{2} \end{eqnarray} and continue our analysis for the super-2-form \begin{eqnarray}\nn \widehat{\underset{\tx{\ciut{(2)}}}{\beta}}{}^{(\frac{1}{2})}(\theta,x,\phi)=\bigl(\theta_{\rm L}^0\wedge\theta_{\rm L}^1\bigr)(\theta,x,\phi)+\tfrac{1}{2}\,\ovl\theta\,\Gamma_I\,\sigma(\theta)\wedge{\mathsf d} x^I\,. \end{eqnarray} Inspection of the complementary projector $\,{\boldsymbol{1}}_{D_{1,d-1}}-{\mathsf P}_{\frac{1}{2}}\,$ reveals its expected property \begin{eqnarray}\nn &&\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}+\Gamma^0\cdot\Gamma^1}{2}\right)^{\rm T}\cdot\ovl\Gamma{}^I\cdot\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}+\Gamma^0\cdot\Gamma^1}{2}\right)=C\cdot\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}+\bigl(C^{-1}\cdot\Gamma^{1\,{\rm T}}\cdot C\bigr)\cdot\bigl(C^{-1}\cdot\Gamma^{0\,{\rm T}}\cdot C\bigr)}{2}\right)\cdot\Gamma{}^I\cdot\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}+\Gamma^0\,\Gamma^1}{2}\right)\cr\cr &=&C\cdot\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}+\Gamma^1\cdot\Gamma^0}{2}\right)\cdot\Gamma{}^I\cdot\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}+\Gamma^0\,\Gamma^1}{2}\right)=C\cdot\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}-\Gamma^0\cdot\Gamma^1}{2}\right)\cdot\Gamma{}^I\cdot\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}+\Gamma^0\,\Gamma^1}{2}\right)\cr\cr &=&\ovl\Gamma{}^I\cdot\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}-(-1)^{\delta^{I0}+\delta^{I1}}\,\Gamma^0\cdot\Gamma^1}{2}\right)\cdot\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}+\Gamma^0\,\Gamma^1}{2}\right)=\bigl(\delta^{I0}\,\ovl\Gamma{}^0+\delta^{I1}\,\ovl\Gamma{}^1\bigr)\cdot\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}+\Gamma^0\,\Gamma^1}{2}\right)^2\cr\cr &=&\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}-\Gamma^0\cdot\Gamma^1}{2}\right)^{\rm T}\cdot\bigl(\delta^{I0}\,\ovl\Gamma{}^0+\delta^{I1}\,\ovl\Gamma{}^1\bigr)\cdot\left(\tfrac{{\boldsymbol{1}}_{D_{1,d-1}}-\Gamma^0\cdot\Gamma^1}{2}\right)\,. 
\end{eqnarray} which leads to the algebra \begin{eqnarray}\label{eq:kappacomm1} \bigl[r_{(\kappa_1^\a,0)},r_{(\kappa_2^\beta,0)}\bigr]=r_{(0,\ovl\kappa_1\,\Gamma^0\,\kappa_2\,\delta^I_0+\ovl\kappa_1\,\Gamma^1\,\kappa_2\,\delta^I_1)}\,. \end{eqnarray} Next, we examine the transformation of the super-2-form $\,\widehat{\underset{\tx{\ciut{(2)}}}{\beta}}{}^{(\frac{1}{2})}\,$ under a purely Gra\ss mann-even shift, \begin{eqnarray}\nn &&r_\cdot^*\widehat{\underset{\tx{\ciut{(2)}}}{\beta}}{}^{(\frac{1}{2})}\bigl(\bigl(\theta^\a,x^I,\phi^{\unl A\widehat S}\bigr),(0,y^J)\bigr)-\widehat{\underset{\tx{\ciut{(2)}}}{\beta}}{}^{(\frac{1}{2})}(\theta^\a,x^I,\phi^{\unl A\widehat S})\cr\cr &=&L(-\phi)^0_{\ I}\,{\mathsf d}\widetilde y^I\wedge\theta_{\rm L}^1(\theta,x,\phi)+\theta_{\rm L}^0(\theta,x,\phi)\wedge L(-\phi)^1_{\ I}\,{\mathsf d}\widetilde y^I+\tfrac{1}{2}\,\ovl\theta\,\Gamma_I\,\sigma(\theta)\wedge{\mathsf d}\widetilde y^I\cr\cr &=&y^I\,\eta_{IJ}\,\bigl(\theta_{\rm L}^{0J}\wedge\theta_{\rm L}^1-\theta_{\rm L}^{1J}\wedge\theta_{\rm L}^0\bigr)(\theta,x,\phi)+{\mathsf d} y^0\wedge\theta_{\rm L}^1(\theta,x,\phi)-{\mathsf d} y^1\wedge\theta_{\rm L}^0(\theta,x,\phi)+\tfrac{1}{2}\,\widetilde y^I(\phi)\,\bigl(\ovl\sigma\wedge\Gamma_I\,\sigma)(\theta)\cr\cr &&-\tfrac{1}{2}\,{\mathsf d}\bigl(\widetilde y^I(\phi)\,\ovl\theta\,\Gamma_I\,\sigma(\theta)\bigr)\cr\cr &=&\theta_{\rm L}^{01}\wedge\bigl(y^0\,\theta_{\rm L}^0(\theta,x,\phi)-y^1\,\theta_{\rm L}^1(\theta,x,\phi)\bigr)+y^0\,\bigl(\theta_{\rm L}^{1I}\,\eta_{IJ}\,\theta_{\rm L}^J(\theta,x,\phi)-\tfrac{1}{2}\,L(-\phi)^1_{\ I}\,\bigl(\ovl\sigma\wedge\Gamma^I\,\sigma\bigr)(\theta)\bigr)\cr\cr &&-y^1\,\bigl(\theta_{\rm L}^{0I}\,\eta_{IJ}\,\theta_{\rm L}^J(\theta,x,\phi)-\tfrac{1}{2}\,L(-\phi)^0_{\ I}\,\bigl(\ovl\sigma\wedge\Gamma^I\,\sigma\bigr)(\theta)\bigr)-y^{\widehat S}\,\delta_{\widehat S\widehat T}\,\bigl(\theta_{\rm L}^{0\widehat T}\wedge\theta_{\rm L}^1-\theta_{\rm L}^{1\widehat T}\wedge\theta_{\rm 
L}^0\bigr)(\theta,x,\phi)\cr\cr &&+{\mathsf d}\bigl(y^0\,\theta_{\rm L}^1(\theta,x,\phi)-y^1\,\theta_{\rm L}^0(\theta,x,\phi)-\tfrac{1}{2}\,\widetilde y^I(\phi)\,\ovl\theta\,\Gamma_I\,\sigma(\theta)\bigr)\cr\cr &=&\tfrac{1}{2}\,\bigl(y^0+y^1\bigr)\,\ovl\Sigma_{\rm L}\wedge\bigl(\Gamma^0-\Gamma^1\bigr)\,\Sigma_{\rm L}+\tfrac{1}{2}\,y^{\widehat S}\,\ovl\Sigma_{\rm L}\wedge\Gamma_{\widehat S}\,\Sigma_{\rm L}\cr\cr &&-y^{\widehat S}\,\delta_{\widehat S\widehat T}\,\bigl(\theta_{\rm L}^{0\widehat T}\wedge\theta_{\rm L}^1-\theta_{\rm L}^{1\widehat T}\wedge\theta_{\rm L}^0\bigr)(\theta,x,\phi)+\delta_{\widehat S\widehat T}\,\bigl(y^1\,\theta_{\rm L}^{0\widehat S}(\theta,x,\phi)-y^0\,\theta_{\rm L}^{1\widehat S}(\theta,x,\phi)\bigr)\wedge\theta_{\rm L}^{\widehat T}\cr\cr &&+{\mathsf d}\bigl(y^0\,\theta_{\rm L}^1(\theta,x,\phi)-y^1\,\theta_{\rm L}^0(\theta,x,\phi)-\tfrac{1}{2}\,\widetilde y^I(\phi)\,\ovl\theta\,\Gamma_I\,\sigma(\theta)\bigr) \end{eqnarray} ({\it cp.}\ the Maurer--Cartan equations of Sec.\,\ref{sec:CartMink}), and so upon restricting to translations along the distinguished directions $\,\partial_0\,$ and $\,\partial_1\,$ and taking into account the inverse Higgs constraints, we obtain the reduced formula \begin{eqnarray}\nn &&r_\cdot^*\widehat{\underset{\tx{\ciut{(2)}}}{\beta}}{}^{(\frac{1}{2})}\bigl(\bigl(\theta^\a,x^I,\phi^{\unl A\widehat S}\bigr),(0,y^0\,\delta^J_0+y^1\,\delta^J_1)\bigr)-\widehat{\underset{\tx{\ciut{(2)}}}{\beta}}{}^{(\frac{1}{2})}(\theta^\a,x^I,\phi^{\unl A\widehat S})\cr\cr &=&\tfrac{1}{2}\,\bigl(y^0+y^1\bigr)\,\ovl\Sigma_{\rm L}\wedge\bigl(\Gamma^0-\Gamma^1\bigr)\,\Sigma_{\rm L}+{\mathsf d}\bigl(y^0\,\theta_{\rm L}^1(\theta,x,\phi)-y^1\,\theta_{\rm L}^0(\theta,x,\phi)-\tfrac{1}{2}\,\widetilde y^I(\phi)\,\ovl\theta\,\Gamma_I\,\sigma(\theta)\bigr) \end{eqnarray} from which we may already read off the condition for the {\it pure} translation along the distinguished directions to be symmetries of the space of classical field 
configurations, to wit\footnote{One-sided regular translations on the group manifold are {\it chiral} gauge symmetries of the two-dimensional bosonic WZW $\sigma$-model, extending, through the Sugawara construction, the algebra of (conformal) worldsheet diffeomorphisms. It is amusing to note that the condition on admissible Gra\ss mann-even translations obtained in our simple-minded analysis translates, in the static gauge underlying the interpretation of those translations in terms of diffeomorphisms of the embedded worldsheet, into the statement of constancy of the relevant chiral coordinate $\,x^0+x^1$.}, \begin{eqnarray}\nn y^0+y^1=0\,. \end{eqnarray} That our treatment of the Gra\ss mann-even component of the gauge supersymmetry under study is oversimplified is readily seen as follows: Consider the commutator \eqref{eq:kappacomm1} of two purely Gra\ss mann-odd shifts. In consequence of the constraints imposed upon $\,\kappa_1\,$ (and $\,\kappa_2$), the components $\,y_{12}^I\equiv\ovl\kappa_1\,\Gamma^0\,\kappa_2\,\delta^I_0+\ovl\kappa_1\,\Gamma^1\,\kappa_2\,\delta^I_1\,$ of the ensuing Gra\ss mann-even translation satisfy the identity \begin{eqnarray}\nn y_{12}^1=\ovl\kappa_1\,\Gamma^1\,\kappa_2=\ovl\kappa_1\,\Gamma^0\,\kappa_2\equiv y_{12}^0\,, \end{eqnarray} and so -- na\"ively -- fall outside the class of Gra\ss mann-even symmetries indicated above (unless we set $\,\kappa_1\,$ or $\,\kappa_2\,$ equal to 0). The authors of \Rcite{Gomis:2006wu} consider translations accompanied by compensating rotations from the subspace $\,\gt{d}\,$ of Prop.\,\ref{prop:sMinkHPvsNG} in order to recover the full diffeomorphism group of the superstring worldvolume. In the present work, we are mainly interested in the Gra\ss mann-odd component of the $\kappa$-symmetry superalgebra and hence refrain from a completely general discussion.
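The identity $\,y_{12}^1=y_{12}^0\,$ is, at bottom, a matrix-level statement: for the projector $\,{\mathsf Q}:=\tfrac{1}{2}({\boldsymbol{1}}+\Gamma^0\cdot\Gamma^1)\,$ complementary to $\,{\mathsf P}^{(1)}_{\frac{1}{2}}\,$ one has $\,\Gamma^1\cdot{\mathsf Q}=\Gamma^0\cdot{\mathsf Q}$,\ so the two bilinears coincide on $\,{\rm ker}\,{\mathsf P}^{(1)}_{\frac{1}{2}}$. A quick numerical check in the four-dimensional toy model used before (mostly-minus signature Dirac matrices, our stand-in for the Clifford algebra of the text):

```python
import numpy as np

# Matrix-level origin of y_12^1 = y_12^0: for Q = (1 + Gamma^0.Gamma^1)/2,
# the projector complementary to P^(1)_{1/2}, one has Gamma^1.Q = Gamma^0.Q,
# so the two bilinears agree whenever kappa_2 = Q kappa_2.  Checked here with
# 4d Dirac matrices in mostly-minus signature (our toy stand-in).

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

g0 = np.block([[I2, 0 * I2], [0 * I2, -I2]])        # (g0)^2 = +1
g1 = np.block([[0 * I2, s1], [-s1, 0 * I2]])        # (g1)^2 = -1

I4 = np.eye(4, dtype=complex)
Q = (I4 + g0 @ g1) / 2            # complementary projector 1 - P^(1)_{1/2}

assert np.allclose(Q @ Q, Q)              # Q is idempotent
assert np.allclose(g1 @ Q, g0 @ Q)        # Gamma^1.Q = Gamma^0.Q
# Hence kbar_1 Gamma^1 kappa_2 = kbar_1 Gamma^0 kappa_2 on the image of Q.
```

The computation $\,\Gamma^1\cdot{\mathsf Q}=\tfrac{1}{2}(\Gamma^1+\Gamma^0)=\Gamma^0\cdot{\mathsf Q}\,$ uses only $\,(\Gamma^0)^2=-(\Gamma^1)^2={\boldsymbol{1}}\,$ and so carries over verbatim to the setting of the text under the analogous sign conventions.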
We do, on the other hand, complete our analysis by deriving secondary conditions to be imposed upon the lagrangean fields of the super-$\sigma$-model in order to ensure invariance of the inverse Higgs constraints under the purely Gra\ss mann-odd shifts. These we read off from the direct calculation \begin{eqnarray}\nn r_\cdot^*\theta_{\rm L}^{\widehat S}\bigl(\bigl(\theta^\a,x^I,\phi^{\unl A\widehat S}\bigr),\bigl(\kappa^\beta,0\bigr)\bigr)=\theta_{\rm L}^{\widehat S}\bigl(\theta^\a,x^I,\phi^{\unl A\widehat S}\bigr)+\ovl\kappa\,\Gamma^{\widehat S}\cdot{\mathsf P}^{(1)}_{\frac{1}{2}}\,\Sigma_{\rm L}\bigl(\theta^\a,x^I,\phi^{\unl A\widehat S}\bigr) \end{eqnarray} and combine them with those imposed formerly, as well as with those resulting from the requirement of invariance of the inverse Higgs constraints under Gra\ss mann-even shifts. Putting the various pieces together, we arrive at \begin{Def}\label{def:s1bksymmalg} The {\bf spinorial super-1-brane $\kappa$-symmetry transformations} are the tangential variations of the lagrangean field $\,{\mathbb{X}}_{\rm (HP)}\,$ generated by the distinguished linear combinations of the supercharges \begin{eqnarray}\nn Q^*_\a:={\mathsf P}^{(1)\,\beta}_{\frac{1}{2}}{}_\a\,Q_\beta\,,\qquad\a\in\ovl{1,D_{1,d-1}}\,, \end{eqnarray} satisfying the supercommutation relations \begin{eqnarray}\nn \{Q^*_\a,Q^*_\beta\}=\bigl({\mathsf P}^{(1)\,{\rm T}}_{\frac{1}{2}}\cdot\ovl\Gamma{}^0\cdot{\mathsf P}^{(1)}_{\frac{1}{2}}\bigr)_{\a\beta}\,P_0+\bigl({\mathsf P}^{(1)\,{\rm T}}_{\frac{1}{2}}\cdot\ovl\Gamma{}^1\cdot{\mathsf P}^{(1)}_{\frac{1}{2}}\bigr)_{\a\beta}\,P_1\,,\qquad\qquad[P_{\unl A},Q^*_\a]=0\,,\qquad\unl A\in\{0,1\}\,.
\end{eqnarray} \begin{flushright}$\diamond$\end{flushright}\end{Def} Finally, we may articulate \begin{Prop}\label{prop:spinkappasymm1} The spinorial super-1-brane $\kappa$-symmetry transformations of Def.\,\ref{def:s1bksymmalg} are gauge symmetries of the corresponding Green--Schwarz super-$\sigma$-model (in the Hughes--Polchinski formulation) and preserve the space of its restricted (classical) field configurations $\,\mathscr{D}_1\subset{\rm s}\mathscr{P}(1,d-1\,\vert\,1)\,$ defined by the family of constraints \begin{eqnarray}\label{eq:ksymmconstr1} \bigl(\theta_{\rm L}^{\widehat S},\theta^{0\widehat S}_{\rm L},\Gamma^{\widehat S}\,{\mathsf P}^{(1)}_{\frac{1}{2}}\,\Sigma_{\rm L}\bigr)\mathord{\restriction}_{\mathcal{T}\mathscr{D}_1}=0\,,\qquad\widehat S\in\ovl{2,d-1}\,. \end{eqnarray} \end{Prop} \brem In analogy with the case of the super-0-brane (and for the very same reasons), the super-2-form $\,\widehat{\underset{\tx{\ciut{(2)}}}{\beta}}{}^{(\frac{1}{2})}\,$ and the constraints are invariant with respect to the (right) action of the spin group $\,{\rm Spin}(d-2)\,$ generated by rotations that leave the distinguished directions $\,\partial_0\,$ and $\,\partial_1\,$ intact. \end{Rem}\medskip \subsection{The extended Green--Schwarz supergerbe} Having understood the (super)group-theoretic origin of $\kappa$-symmetry in the framework of Cartan geometry of the homogeneous space of Prop.\,\ref{prop:sMinkHPvsNG}, we may next -- in the spirit of Sec.\,\ref{sec:CaEscocgeomise} -- look for a geometrisation of the HP formulation of the super-$\sigma$-model and the corresponding gerbe-theoretic extension of its gauge-symmetry analysis.
This is more than well justified as the relevant action functional $\,S^{({\rm HP})}_{{\rm GS},p}\,$ has the structure of a gerbe holonomy, with the ``metric'' term $\,S^{({\rm HP})}_{{\rm metr,GS},p}\,$ of \eqref{eq:SmetrHP} determined by a manifestly LI super-$(p+1)$-form and hence defining a {\it trivial} super-$p$-gerbe on the extended supertarget. The analysis of the preceding section suggests that the ensuing simple picture of a (Deligne) tensor product of the trivial super-$p$-gerbe defined by the ``metric'' term with the pullback of the super-$p$-gerbe from $\,{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}\,$ to the super-Poincar\'e supergroup $\,{\rm s}\mathscr{P}(1,d-1\,\vert\,1)\,$ along the canonical projection\footnote{Strictly speaking, we should, if anything, pull it back to the {\bf HP section} coordinatised as in \Reqref{eq:HParaMink}.} be refined through incorporation of the tangential constraints deduced from the $\kappa$-symmetry analysis. In the cases of $\,p\in\{0,1\}$,\ to which we restrict our attention in the remainder of the section, this boils down to the imposition of conditions \eqref{eq:ksymmconstr0} resp.\ \eqref{eq:ksymmconstr1}.
Thus, we arrive at \begin{Def}\label{def:exts0gerbe} The {\bf extended Green--Schwarz super-0-gerbe} is the restriction to the subspace $\,\mathscr{D}_0\,$ of Prop.\,\ref{prop:kappasymm0} of the (Deligne) tensor product\footnote{The numerical factor 2 in front of the global connection super-1-form of the trivial gerbe is an artifact of our initial normalisation of the HP super-1-form $\,\underset{\tx{\ciut{(1)}}}{\beta}\hspace{-2pt}{}^{\rm (HP)}$.} \begin{eqnarray}\nn \widehat\mathcal{G}_{\rm GS}^{(0)}:=\pi_{(0)}^*\mathcal{G}_{\rm GS}^{(0)}\otimes\mathcal{J}_{2\underset{\tx{\ciut{(1)}}}{\beta}\hspace{-2pt}{}^{\rm (HP)}} \end{eqnarray} of the trivial super-0-gerbe defined by the LI super-1-form $\,2\underset{\tx{\ciut{(1)}}}{\beta}\hspace{-2pt}{}^{\rm (HP)}\,$ of \Reqref{eq:HPcurv} with the pullback, along the canonical projection \begin{eqnarray}\nn \pi_{(0)}\ :\ {\rm s}\mathscr{P}(1,9\,\vert\,1)\longrightarrow{\rm sMink}^{1,9\,\vert\,D_{1,9}}\,, \end{eqnarray} of the Green--Schwarz super-0-gerbe of Def.\,\ref{def:s0gerbe}. \begin{flushright}$\diamond$\end{flushright}\end{Def}\noindent and the analogous \begin{Def}\label{def:exts1gerbe} The {\bf extended Green--Schwarz super-1-gerbe} is the restriction to the subspace $\,\mathscr{D}_1\,$ of Prop.\,\ref{prop:spinkappasymm1} of the (Deligne) tensor product \begin{eqnarray}\nn \widehat\mathcal{G}_{\rm GS}^{(1)}:=\pi_{(1)}^*\mathcal{G}_{\rm GS}^{(1)}\otimes\mathcal{I}_{2\underset{\tx{\ciut{(2)}}}{\beta}\hspace{-2pt}{}^{\rm (HP)}} \end{eqnarray} of the trivial super-1-gerbe defined by the LI super-2-form $\,2\underset{\tx{\ciut{(2)}}}{\beta}\hspace{-2pt}{}^{\rm (HP)}\,$ of \Reqref{eq:HPcurv} with the pullback, along the canonical projection \begin{eqnarray}\nn \pi_{(1)}\ :\ {\rm s}\mathscr{P}(1,d-1\,\vert\,1)\longrightarrow{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}\,, \end{eqnarray} of the Green--Schwarz super-1-gerbe of Def.\,\ref{def:s1gerbe}.
\begin{flushright}$\diamond$\end{flushright}\end{Def}\noindent While the above definitions, being formulated in terms of LI (super-)differential objects over $\,{\rm s}\mathscr{P}(1,d-1\,\vert\,1)$,\ make perfect sense, they call for a careful revision of the notion of global (left) supersymmetry, a fact of key importance in any subsequent (supersymmetry-)invariance considerations, and so also in the very construction of gerbe-theoretic structures. It seems natural to truncate the original supersymmetry group $\,{\mathbb{R}}^{1,d-1\,\vert\,1}\,$ (as well as its extensions encountered in this work) to its maximal subgroup composed of elements which preserve the constraints \eqref{eq:ksymmconstr0} and \eqref{eq:ksymmconstr1} derived from the requirement of (spinorial) $\kappa$-symmetry. The subgroup will be generated by the one-parameter (sub)groups induced by flows of the right-invariant vector fields on $\,{\rm s}\mathscr{P}(1,d-1\,\vert\,1)\,$ from the intersection of the kernels of the left-invariant super-1-forms defining the respective tangent sheaves $\,\mathcal{T}\mathscr{D}_p,\ p\in\{0,1\}$.\ These are natural candidates for global symmetries of the subspaces $\,\mathscr{D}_p,\ p\in\{0,1\}\,$ introduced in Props.\,\ref{prop:kappasymm0} and \ref{prop:spinkappasymm1}. A complete treatment of the supergerbe theory behind the HP formulation of the super-$\sigma$-model consistent with the (gauge) supersymmetries present ought to be preceded by an in-depth analysis of the geometric content of such a definition. In this preliminary discussion, we merely point out its necessity and relevance, leaving the clarification of its ramifications to a future thorough investigation.
\subsection{Weak $\kappa$-equivariance of the extended Green--Schwarz supergerbe} The purely gerbe-theoretic nature of the HP action functional of the GS super-$\sigma$-model in conjunction with the presence of a {\it gauge} supersymmetry rederived, in a simplified version tailored to our subsequent considerations, at the beginning of the present section, give rise to the expectation, based on former studies reported in Refs.\,\cite{Gawedzki:2010rn,Gawedzki:2012fu,Suszek:2012ddg}, that the extended GS super-$p$-gerbes should be endowed with an equivariant structure of some sort with respect to right-regular (Gra\ss mann-odd) translations on the supertarget. Such an informed guess is confronted with two obvious obstacles: the very existence of the symmetry necessitates the imposition of constraints on the admissible field configurations, and the symmetry algebra does not seem to close on non-classical field configurations ({\it cp.}, {\it e.g.}, \Rcite{McArthur:1999dy,Gomis:2006wu}). Luckily, the constraints, enumerated in Props.\,\ref{prop:kappasymm0} and \ref{prop:spinkappasymm1}, admit a natural geometrisation, {\it i.e.}, can be treated as linear conditions to be imposed on sections of the tangent sheaf of the supertarget and thus distinguishing its subspace $\,\mathscr{D}_p,\ p\in\{0,1\}$,\ whereas the latter obstacle leaves us -- in the light of the interpretation given in \Rcite{Suszek:2012ddg} ({\it cp.}\ also \Rcite{Runkel:2008gr}) to the various components of the full-fledged equivariant structure on the gerbe -- with the possibility of having on the extended GS super-$p$-gerbes (for $\,p\in\{0,1\}$) the first component of such a structure, giving an element-wise realisation of the symmetry `set' (or `ensemble des op\'erateurs' in the sense of \Rxcite{Chap.\,I \S\,3.1}{Bourbaki:alg1970}). Below, we perform a case-by-case study that provides us with solid arguments in favour of this last expectation.
\subsubsection{The extended GS super-0-gerbe} A natural point of departure of our analysis is a lift of the Gra\ss mann-odd shifts of \Reqref{eq:ksymmind} (with $\,y^J=0,\ J\in\ovl{0,d-1}$) to the total space of the super-0-gerbe of Def.\,\ref{def:exts0gerbe}. The latter is a semidirect product of the total space $\,\mathscr{L}^{(0)}\,$ of the GS super-0-gerbe over $\,{\rm sMink}^{1,9\,\vert\,D_{1,9}}\,$ of Def.\,\ref{def:s0gerbe} with $\,{\rm Spin}(1,9)$,\ or -- equivalently -- a principal ${\mathbb{C}}^\x$-bundle over $\,{\rm s}\mathscr{P}(1,9\,\vert\,1)\,$ endowed with the structure of a central extension of the super-Poincar\'e group, with an obvious group law (a straightforward adaptation of the binary operation on $\,\mathscr{L}^{(0)}\,$ from Prop.\,\ref{prop:L0group}) which we employ to define the sought-after lift in the form \begin{eqnarray}\nn \widehat r^{(0)}_\cdot\ &:&\ \bigl(\mathscr{L}^{(0)}\rtimes{\rm Spin}(1,9)\bigr)\x{\rm ker}\,{\mathsf P}^{(0)}_{\frac{1}{2}}\longrightarrow\mathscr{L}^{(0)}\rtimes{\rm Spin}(1,9)\cr\cr &:&\ \bigl(\bigl(\theta^\a,x^I,z,\phi^{JK}\bigr),\kappa^\beta\bigr)\longmapsto\bigl(\theta^\a+\widetilde\kappa^\a(\phi),x^I-\tfrac{1}{2}\,\ovl\theta\,\Gamma^I\,\widetilde\kappa(\phi),{\rm e}^{{\mathsf i}\,\ovl\theta\,\Gamma_{11}\,\widetilde\kappa(\phi)}\cdot z,\phi^{JK}\bigr)\,.
\end{eqnarray} We now readily verify that the principal ${\mathbb{C}}^\x$-connection of the extended GS super-0-gerbe, \begin{eqnarray}\nn \widehat{\underset{\tx{\ciut{(1)}}}{\beta}}(\theta,x,z,\phi)={\mathsf i}\,\tfrac{{\mathsf d} z}{z}+\underset{\tx{\ciut{(1)}}}{\beta}(\theta,x)+2\underset{\tx{\ciut{(1)}}}{\beta}\hspace{-2pt}{}^{\rm (HP)}(\theta,x,\phi)\,, \end{eqnarray} satisfies, upon restriction to the HP section with coordinates $\,(\theta^\a,x^I,\phi^{\unl A\widehat S})$,\ the expected identity \begin{eqnarray}\nn &&\widehat r_\cdot^{(0)\,*}\widehat{\underset{\tx{\ciut{(1)}}}{\beta}}\bigl(\bigl(\theta^\a,x^I,z,\phi^{\unl A\widehat S}\bigr),\kappa^\beta\bigr)-\widehat{\underset{\tx{\ciut{(1)}}}{\beta}}\bigl(\theta^\a,x^I,z,\phi^{\unl A\widehat S}\bigr)\cr\cr &=&-{\mathsf d}\bigl(\ovl\theta\,\Gamma_{11}\,\widetilde\kappa(\phi)\bigr)+\ovl\theta\,\Gamma_{11}\,{\mathsf d}\widetilde\kappa(\phi)+\widetilde\kappa(\phi)\,\ovl\Gamma_{11}\,\sigma(\theta)+\ovl\kappa\,\Gamma^0\,\Sigma_{\rm L}(\theta,x,\phi)=-2\ovl\sigma(\theta)\,\Gamma_{11}\,\widetilde\kappa(\phi)-2\ovl\Sigma_{\rm L}\,\Gamma^0\,\kappa\cr\cr &=&-4\ovl\Sigma_{\rm L}\,\Gamma^0\cdot{\mathsf P}^{(0)}_{\frac{1}{2}}\,\kappa=0\,. 
\end{eqnarray} We conclude that there exists over \begin{eqnarray}\nn \bigl(\mathcal{M}^{(1)}\rtimes{\rm Spin}(1,9)\bigr)\x{\rm ker}\,{\mathsf P}^{(0)}_{\frac{1}{2}}=:\widetilde\mathcal{M}^{(1)\,1} \end{eqnarray} a connection-preserving isomorphism \begin{eqnarray}\nn \widehat\Upsilon^{(0)}_\kappa\ :\ \widehat r_\cdot^{(0)\,*}\widehat\mathcal{G}_{\rm GS}^{(0)}\xrightarrow{\ \cong\ }{\rm pr}_1^*\widehat\mathcal{G}_{\rm GS}^{(0)}\equiv{\rm pr}_1^*\widehat\mathcal{G}_{\rm GS}^{(0)}\x\mathcal{J}_{\widehat{\underset{\tx{\ciut{(1)}}}{\rho}}=0} \end{eqnarray} with data \begin{eqnarray}\nn 2\widehat{\rm F}^{(\frac{1}{2})}\bigl(\bigl(\theta^\a,x^I,\phi^{\unl A\widehat S}\bigr),(\kappa^\beta,0)\bigr)=\ovl\theta\,\Gamma_{11}\,\widetilde\kappa(\phi)\,, \end{eqnarray} determined by the Lie-supergroup structure on the total space $\,\mathscr{L}^{(0)}\,$ of the GS super-0-gerbe $\,\mathcal{G}_{\rm GS}^{(0)}$.\ Our analysis yields a weak (spinorial) $\kappa$-equivariant structure relative to a {\it vanishing} super-1-form $\,\widehat{\underset{\tx{\ciut{(1)}}}{\rho}}=0$,\ in keeping with the findings of Refs.\,\cite{Gawedzki:2010rn,Gawedzki:2012fu}, where equivariant structures of this type were identified as the unique ones for which the gerbe descends from its base to the space of orbits of the action of the symmetry group being gauged. It should be noted that the above data appear non-LI, and so, clearly, further study is needed to put the said equivariant structure, implicitly defined by them, on an equal footing with the ${\rm Ad}_\cdot$-equivariant structure on $\,\mathcal{G}_{\rm GS}^{(0)}\,$ described in Prop.\,\ref{prop:Adequivstr0}. We leave this challenge for future work. 
\subsubsection{The extended GS super-1-gerbe} In the case of the super-1-gerbe of Def.\,\ref{def:exts1gerbe}, we employ the same ruse as in the previous section, that is, we first judiciously choose the surjective submersion of the extended gerbe $\,\widehat\mathcal{G}_{\rm GS}^{(1)}$, \begin{eqnarray}\nn \pi_{{\mathsf Y}\widetilde\mathcal{M}^{(1)}}\ &:&\ {\mathsf Y}\widetilde\mathcal{M}^{(1)}:={\mathsf Y}_1\mathcal{M}^{(1)}\rtimes{\rm Spin}(1,d-1)\longrightarrow\mathcal{M}^{(1)}\rtimes{\rm Spin}(1,d-1)\cr\cr &:&\ \bigl(\bigl(\theta^\a,x^I,\xi_\beta\bigr),\phi^{JK}\bigr)\longmapsto\bigl(\theta^\a,x^I,\phi^{JK}\bigr)\,, \end{eqnarray} and of the pullback gerbes $\,r_\cdot^*\widehat\mathcal{G}_{\rm GS}^{(1)}\,$ and $\,{\rm pr}_1^*\widehat\mathcal{G}_{\rm GS}^{(1)}$, \begin{eqnarray}\nn \pi_{\widetilde{{\mathsf Y}_1\mathcal{M}}^{(1)\,1}}\equiv\pi_{{\mathsf Y}_1\mathcal{M}^{(1)}}\x{\rm id}_{{\rm Spin}(1,d-1)\x{\rm ker}\,{\mathsf P}^{(1)}_{\frac{1}{2}}}\ :\ \widetilde{{\mathsf Y}_1\mathcal{M}}^{(1)\,1}:={\mathsf Y}\widetilde\mathcal{M}^{(1)}\x{\rm ker}\,{\mathsf P}^{(1)}_{\frac{1}{2}}\longrightarrow \widetilde\mathcal{M}^{(1)\,1}\,, \end{eqnarray} and subsequently induce a lift of the Gra\ss mann-odd shifts to the former with the help of the (suitably adapted) group law of Prop.\,\ref{prop:M2group}, whereby we obtain the transformation law \begin{eqnarray}\nn \widehat r^{(1)}_\cdot\ &:&\ \bigl({\mathsf Y}_1\mathcal{M}^{(1)}\rtimes{\rm Spin}(1,d-1)\bigr)\x{\rm ker}\,{\mathsf P}^{(1)}_{\frac{1}{2}}\longrightarrow{\mathsf Y}_1\mathcal{M}^{(1)}\rtimes{\rm Spin}(1,d-1)\cr\cr &:&\ \bigl(\bigl(\theta^\a,x^I,\xi_\beta,\phi^{JK}\bigr),\kappa^\gamma\bigr)\longmapsto\bigl(\theta^\a+\widetilde\kappa^\a(\phi),x^I-\tfrac{1}{2}\,\ovl\theta\,\Gamma^I\,\widetilde\kappa(\phi),\cr\cr &&\hspace{4.5cm}\xi_\beta-\tfrac{1}{6}\,\bigl(\ovl\theta\,\Gamma_I\,\widetilde\kappa(\phi)\bigr)\,\ovl\Gamma{}^I_{\beta\gamma}\,\bigl(2\theta^\gamma+\widetilde\kappa^\gamma(\phi)\bigr),\phi^{JK}\bigr)\,.
\end{eqnarray} Following the logic of Sec.\,\ref{subsub:Adequivstr1}, we then pass to the surjective submersion \begin{eqnarray}\nn {\mathsf Y}\bigl(\widetilde{{\mathsf Y}_1\mathcal{M}}^{(1)\,1}\bigr):=\widetilde{{\mathsf Y}_1\mathcal{M}}^{(1)\,1}\x_{\widetilde\mathcal{M}^{(1)\,1}} \widetilde{{\mathsf Y}_1\mathcal{M}}^{(1)\,1}\ni\bigl(\bigl((\theta,x,\xi_1,\phi),\kappa\bigr),\bigl((\theta,x,\xi_2,\phi),\kappa\bigr)\bigr)=:\bigl(\widetilde y_1,\widetilde y_2\bigr)\,, \end{eqnarray} to which we pull back the curving \begin{eqnarray}\nn \widehat{\underset{\tx{\ciut{(2)}}}{\beta}}(\theta,x,\xi,\phi)=\underset{\tx{\ciut{(2)}}}{\beta}{}^{(2)}(\theta,x,\xi)+2\underset{\tx{\ciut{(2)}}}{\beta}\hspace{-2pt}{}^{\rm (HP)}(\theta,x,\phi) \end{eqnarray} of the extended super-1-gerbe from its surjective submersion $\,{\mathsf Y}_1\mathcal{M}^{(1)}\rtimes{\rm Spin}(1,d-1)\,$ along the relevant maps that let us calculate, up to $\,\mathscr{O}(\kappa^2)$, \begin{eqnarray}\label{eq:hatbebE} {\rm pr}_2^*{\rm pr}_2^*\widehat{\underset{\tx{\ciut{(2)}}}{\beta}}-{\rm pr}_1^*\widehat r^{(1)\,*}_\cdot\widehat{\underset{\tx{\ciut{(2)}}}{\beta}}={\mathsf d}\widehat{\mathsf E}\,, \end{eqnarray} where \begin{eqnarray}\nn \widehat{\mathsf E}\bigl(\widetilde y_1,\widetilde y_2\bigr)=-\ovl\theta\,\Gamma_I\,\widetilde\kappa(\phi)\,\bigl(e^I(\theta,x)-\tfrac{1}{3}\,\ovl\theta\,\Gamma^I\,\sigma(\theta)\bigr)-\widetilde\kappa^\a(\phi)\,{\mathsf d}\xi_{1\,\a}+\theta^\a\,{\mathsf d}(\xi_{2\,\a}-\xi_{1\,\a})\,. 
\end{eqnarray} Taking into account the commutativity, for arbitrary $\,(\varepsilon,y,\zeta)\in{\mathsf Y}_1\mathcal{M}^{(1)}\,$ and \begin{eqnarray}\nn \ell^{(1)}_\cdot\equiv{\rm m}_1^{(2)}\ :\ {\mathsf Y}_1\mathcal{M}^{(1)}\x{\mathsf Y}_1\mathcal{M}^{(1)}\longrightarrow{\mathsf Y}_1\mathcal{M}^{(1)}\,, \end{eqnarray} of the diagram (expressing none other than the associativity of the binary operation of the Lie supergroup $\,{\mathsf Y}_1\mathcal{M}^{(1)}$) \begin{eqnarray}\nn \alxydim{@C=1.75cm@R=1.75cm}{ \bigl({\mathsf Y}_1\mathcal{M}^{(1)}\rtimes{\rm Spin}(1,d-1)\bigr)\x{\rm ker}\,{\mathsf P}^{(1)}_{\frac{1}{2}} \ar[r]^{\quad\qquad\widehat r^{(1)}_\cdot} \ar[d]_{\ell^{(1)}_{(\varepsilon,y,\zeta)}\x{\rm id}_{{\rm Spin}(1,d-1)\x{\rm ker}\,{\mathsf P}^{(1)}_{\frac{1}{2}}}} & {\mathsf Y}_1\mathcal{M}^{(1)}\rtimes{\rm Spin}(1,d-1) \ar[d]^{\ell^{(1)}_{(\varepsilon,y,\zeta)}\x{\rm id}_{{\rm Spin}(1,d-1)}} \\ \bigl({\mathsf Y}_1\mathcal{M}^{(1)}\rtimes{\rm Spin}(1,d-1)\bigr)\x{\rm ker}\,{\mathsf P}^{(1)}_{\frac{1}{2}} \ar[r]_{\quad\qquad\widehat r^{(1)}_\cdot} & {\mathsf Y}_1\mathcal{M}^{(1)}\rtimes{\rm Spin}(1,d-1) }\,, \end{eqnarray} we conclude that the left-hand side of \Reqref{eq:hatbebE}, and so also its right-hand side are LI, and so it makes sense to erect over $\,{\mathsf Y}\bigl(\widetilde{{\mathsf Y}_1\mathcal{M}}^{(1)\,1}\bigr)\,$ a (trivial) principal ${\mathbb{C}}^\x$-bundle \begin{eqnarray}\nn \pi_{\widehat\mathscr{E}}\equiv{\rm pr}_1\ :\ \widehat\mathscr{E}:={\mathsf Y}\bigl(\widetilde{{\mathsf Y}_1\mathcal{M}}^{(1)\,1}\bigr)\x{\mathbb{C}}^\x\longrightarrow{\mathsf Y}\bigl(\widetilde{{\mathsf Y}_1\mathcal{M}}^{(1)\,1}\bigr)\ :\ \bigl(\bigl(\widetilde y_1,\widetilde y_2\bigr),z\bigr)\longmapsto\bigl(\widetilde y_1,\widetilde y_2\bigr) \end{eqnarray} with a principal ${\mathbb{C}}^\x$-connection 1-form \begin{eqnarray}\nn \txa_{\widehat\mathscr{E}}\bigl(\bigl(\widetilde y_1,\widetilde y_2\bigr),z\bigr)={\mathsf i}\,\tfrac{{\mathsf d} 
z}{z}+\widehat{\mathsf E}\bigl(\widetilde y_1,\widetilde y_2\bigr) \end{eqnarray} and assume the latter to be LI, which fixes the data of a connection-preserving principal ${\mathbb{C}}^\x$-bundle automorphism implementing supersymmetry on $\,\widehat\mathscr{E}$.\ The bundle is a candidate for the datum of a gerbe 1-isomorphism of the (weak) $\kappa$-equivariant structure that we are seeking to reconstruct. At this stage, it remains to verify our expectations with regard to the status of $\,\widehat\mathscr{E}\,$ by establishing a connection-preserving isomorphism \begin{eqnarray}\nn \a_{\widehat\mathscr{E}}\ :\ \bigl(\widehat r_\cdot^{(1)\,\x2}\circ{\rm pr}_{1,3}\bigr)^*\widehat\mathscr{L}^{(1)}\otimes{\rm pr}_{3,4}^*\mathscr{E}\xrightarrow{\ \cong\ }{\rm pr}_{1,2}^*\mathscr{E}\otimes\bigl({\rm pr}_2^{\x2}\circ{\rm pr}_{2,4}\bigr)^*\widehat\mathscr{L}^{(1)}\,, \end{eqnarray} of principal ${\mathbb{C}}^\x$-bundles over \begin{eqnarray}\nn {\mathsf Y}\bigl(\widetilde{{\mathsf Y}_1\mathcal{M}}^{(1)\,1}\bigr)\x_{\widetilde\mathcal{M}^{(1)\,1}}{\mathsf Y}\bigl(\widetilde{{\mathsf Y}_1\mathcal{M}}^{(1)\,1}\bigr)\equiv \widetilde{{\mathsf Y}_1\mathcal{M}}^{(1)\,1}\x_{\widetilde\mathcal{M}^{(1)\,1}}\widetilde{{\mathsf Y}_1\mathcal{M}}^{(1)\,1}\x_{\widetilde\mathcal{M}^{(1)\,1}} \widetilde{{\mathsf Y}_1\mathcal{M}}^{(1)\,1}\x_{\widetilde\mathcal{M}^{(1)\,1}}\widetilde{{\mathsf Y}_1\mathcal{M}}^{(1)\,1}\cr\cr \ni\bigl(\bigl((\theta,x,\xi_1,\phi),\kappa\bigr),\bigl((\theta,x,\xi_2,\phi),\kappa\bigr),\bigl((\theta,x,\xi_3,\phi),\kappa\bigr),\bigl((\theta,x,\xi_4,\phi),\kappa\bigr)\bigr)=:\bigl(\widetilde y_1,\widetilde y_2,\widetilde y_3,\widetilde y_4\bigr)\,, \end{eqnarray} where \begin{eqnarray}\nn \widehat\mathscr{L}^{(1)}={\rm pr}_1^{\x2\,*}\mathscr{L}^{(1)}\otimes\bigl({\mathsf Y}^{[2]}\widetilde\mathcal{M}^{(1)}\x{\mathbb{C}}^\x\bigr)\equiv{\rm pr}_1^{\x2\,*}\mathscr{L}^{(1)}\longrightarrow{\mathsf Y}^{[2]}\widetilde\mathcal{M}^{(1)}\equiv{\mathsf 
Y}^{[2]}_1\mathcal{M}^{(1)}\rtimes{\rm Spin}(1,d-1) \end{eqnarray} is the principal ${\mathbb{C}}^\x$-bundle of the extended GS super-1-gerbe $\,\widehat\mathcal{G}_{\rm GS}^{(1)}$,\ in whose definition the trivial tensor factor $\,\bigl({\mathsf Y}^{[2]}\widetilde\mathcal{M}^{(1)}\x{\mathbb{C}}^\x\bigr)\,$ represents the principal ${\mathbb{C}}^\x$-bundle of the trivial super-1-gerbe $\,\mathcal{I}_{2\underset{\tx{\ciut{(2)}}}{\beta}\hspace{-2pt}{}^{\rm (HP)}}$,\ with a {\it vanishing} connection. To this end, we calculate the difference of the base components of the relevant principal connection 1-forms (keeping in mind the triviality of the contribution of $\,\mathcal{I}_{2\underset{\tx{\ciut{(2)}}}{\beta}\hspace{-2pt}{}^{\rm (HP)}}$), \begin{eqnarray}\nn {\rm pr}_{1,3}^*\widehat r_\cdot^{(1)\,\x2\,*}{\rm A}+{\rm pr}_{3,4}^*\widehat{\rm E}-{\rm pr}_{1,2}^*\widehat{\rm E}-{\rm pr}_{2,4}^*{\rm pr}_2^{\x2\,*}{\rm A}=\widehat\Delta\,, \end{eqnarray} whereby we obtain the result \begin{eqnarray}\nn \widehat\Delta\bigl(\widetilde y_1,\widetilde y_2,\widetilde y_3,\widetilde y_4\bigr)=\widetilde\kappa^\a(\phi)\,{\mathsf d}(\xi_{3\,\a}-\xi_{1\,\a})\equiv\widetilde\kappa^\a(\phi)\,\bigl({\rm pr}_3^*\widetilde\pi_1^*e^{(2)}_\a-{\rm pr}_1^*\widetilde\pi_1^*e^{(2)}_\a\bigr)\bigl(\widetilde y_1,\widetilde y_2,\widetilde y_3,\widetilde y_4\bigr)\,, \end{eqnarray} in which \begin{eqnarray}\nn \widetilde\pi_1\ :\ \widetilde{{\mathsf Y}_1\mathcal{M}}^{(1)\,1}\longrightarrow{\mathsf Y}_1\mathcal{M}^{(1)}\ :\ \bigl((\theta,x,\xi,\phi),\kappa\bigr)\longmapsto(\theta,x,\xi)\,.
\end{eqnarray} Upon recalling the (dual) spinorial nature of the indices carried by the coordinates $\,\xi_\a,\ \a\in\ovl{1,D_{1,d-1}}\,$ in the fibre of the vector bundle $\,{\mathsf Y}_1\mathcal{M}^{(1)}$,\ we may rewrite the above result in terms of the natural counterparts of the $\,e^{(2)}_\a\,$ on the Lie supergroup $\,{\mathsf Y}\widetilde\mathcal{M}^{(1)}$,\ which we choose to denote as $\,\theta^{(2)}_{{\rm L}\,\a}$.\ We obtain the simple expression \begin{eqnarray}\nn \widehat\Delta=\kappa^\a\,\bigl({\rm pr}_3^*\theta^{(2)}_{{\rm L}\,\a}-{\rm pr}_1^*\theta^{(2)}_{{\rm L}\,\a}\bigr)\,. \end{eqnarray} Thus, it transpires that if we were to impose the constraints \begin{eqnarray}\nn \theta^{(2)}_{{\rm L}\,\beta}\,\bigl({\boldsymbol{1}}_{D_{1,d-1}}-{\mathsf P}^{(1)}_{\frac{1}{2}}\bigr)^\beta_{\ \a}\stackrel{!}{=} 0\,,\qquad\a\in\ovl{1,D_{1,d-1}}\,, \end{eqnarray} the 1-isomorphism sought after would exist and be, in fact, trivial, which would further imply that it satisfies the coherence conditions (recall the triviality of the groupoid structure on $\,\mathscr{L}^{(1)}$). This would then ensure the existence of the postulated weak $\kappa$-equivariant structure. That which makes the above constraints plausible, or -- indeed -- natural is their structural affinity with those derived earlier from the ($\kappa$-)symmetry analysis of the HP action functional. Note also that they play a r\^ole analogous to that of the $\kappa$-symmetry itself, to wit, they effectively remove part of the Gra\ss mann-odd geometric degrees of freedom. Therefore, we are led to postulate the new constraints as a proper gerbe-theoretic augmentation of those listed in Prop.\,\ref{prop:spinkappasymm1}. 
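For completeness, the mechanism by which the postulated constraints annihilate $\,\widehat\Delta\,$ can be spelled out in one line. The display below is our own filling-in of the step, a sketch consistent with the conventions above, in which the Gra\ss mann-odd parameter satisfies $\,\kappa\in{\rm ker}\,{\mathsf P}^{(1)}_{\frac{1}{2}}$:

```latex
% With the constraint theta^{(2)}_L.(1 - P) = 0, i.e.
% theta^{(2)}_{L a} = theta^{(2)}_{L b} P^b_a for P := P^{(1)}_{1/2},
% one finds, for any kappa in ker P,
\begin{eqnarray}\nn
\kappa^\a\,\theta^{(2)}_{{\rm L}\,\a}
=\theta^{(2)}_{{\rm L}\,\beta}\,{\mathsf P}^{(1)\,\beta}_{\frac{1}{2}}{}_\a\,\kappa^\a
=\theta^{(2)}_{{\rm L}\,\beta}\,\bigl({\mathsf P}^{(1)}_{\frac{1}{2}}\,\kappa\bigr)^\beta
=0\,,
\end{eqnarray}
% whence each of the two pullback terms in \widehat\Delta vanishes
% separately.
```

This makes manifest that the constraints remove precisely those Gra\ss mann-odd directions of $\,\theta^{(2)}_{\rm L}\,$ that could pair nontrivially with the $\kappa$-symmetry parameter.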
Of course, a full understanding of the structural observations reported in this closing section of the paper, which we hope have shed some light on the gerbe-theoretic aspect of the $\kappa$-symmetry of the Green--Schwarz super-$\sigma$-model, would require a thorough examination of the global supersymmetry in the presence of the differential constraints imposed. This we leave to a future work. \medskip \newpage \section{Conclusions \& Outlook}\label{ref:CandO} In the present paper, we have put forward an essentially complete proposal of a novel geometrisation scheme for a family of super-$(p+2)$-cocycles representing classes in the Cartan--Eilenberg supersymmetry-invariant cohomology of the super-Minkowskian spacetime $\,{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}\,$ (regarded as a Lie supergroup), of direct relevance to the construction of the Green--Schwarz super-$\sigma$-models of super-$p$-brane dynamics. The motivation for the geometrisation comes from the construction, due to Rabin and Crane \cite{Rabin:1984rm,Rabin:1985tv}, of an orbifold of the original supertarget $\,{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}\,$ with respect to the natural geometric action of the discrete Kosteleck\'y--Rabin (lattice) supersymmetry group, an orbifold whose nontrivial topology is captured by the said Cartan--Eilenberg cohomology of the topologically trivial super-Minkowskian spacetime.
The geometrisation scheme proposed hinges on the relation between the Cartan--Eilenberg cohomology of the Lie supergroup and the Chevalley--Eilenberg cohomology of its Lie superalgebra with values in the trivial module $\,{\mathbb{R}}$,\ and on the correspondence between the second cohomology group in the latter cohomology and (equivalence classes of) supercentral extensions of the Lie superalgebra. It employs a family of Lie supergroups surjectively submersed over the original supertarget, of the type originally considered by de Azc\'arraga {\it et al.} \cite{Chryssomalakos:2000xd}, that arise from the supercentral extensions determined by distinguished super-2-cocycles methodically induced from the Green--Schwarz super-$(p+2)$-cocycles. These extended Lie supergroups were subsequently used as elementary ingredients in a reconstruction of the super-$p$-gerbes, carried out explicitly for $\,p\in\{0,1,2\}$,\ along the lines of the standard bosonic geometrisation scheme for de Rham cocycles, due to Murray \cite{Murray:1994db}. The Green--Schwarz super-$p$-gerbes thus obtained were shown to possess the expected supersymmetry-(${\rm Ad}_\cdot$-)equivariant structure, signalling the amenability of the adjoint realisation of the supersymmetry group to gauging in the corresponding super-$\sigma$-model. This is in perfect agreement with the intuitions developed in the bosonic context in the works of Gaw\c{e}dzki {\it et al.} \cite{Gawedzki:2010rn,Gawedzki:2012fu,Suszek:2011,Suszek:2012ddg,Suszek:2013}.
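The correspondence between 2-cocycles and (super)central extensions invoked above is a standard fact which can be stated in one formula; we record it here, in our own notation, for the reader's orientation:

```latex
% Given a Chevalley--Eilenberg 2-cocycle \omega on a Lie
% (super)algebra \gt{g} with values in the trivial module {\mathbb R},
% the supercentral extension is
% \widetilde{\gt{g}} := \gt{g} \oplus {\mathbb R}\,Z, with Z central and
\begin{eqnarray}\nn
\bigl[(X,a),(Y,b)\bigr]_{\widetilde{\gt{g}}}:=\bigl([X,Y]_{\gt{g}},
\omega(X,Y)\bigr)\,,
\end{eqnarray}
% the (super-)Jacobi identity for the new bracket being equivalent to
% the cocycle condition \delta\omega=0, and a shift of \omega by a
% coboundary \delta\lambda yielding an isomorphic extension -- whence
% the bijection with the second cohomology group.
```

It is this elementary mechanism, iterated along the towers of distinguished super-2-cocycles, that produces the surjectively submersed Lie supergroups entering the construction.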
Finally, the geometrisation scheme developed over the supertarget of the Nambu--Goto formulation of the Green--Schwarz super-$\sigma$-model was transplanted into the setting of the equivalent, but somewhat nontrivially so from the geometric point of view, Hughes--Polchinski formulation of the same super-$\sigma$-model, whereby the extended Green--Schwarz super-$p$-gerbe was erected, unifying the metric and topological (gerbe-theoretic) data of the corresponding Nambu--Goto formulation to which it descends upon imposition of certain Cartan-geometric constraints on its covariant configuration bundle ({\it i.e.}, on the enlarged supertarget). Conditions ensuring equivalence of the two formulations were analysed in considerable detail. The passage to the Hughes--Polchinski formulation opened the possibility for a straightforward (if also, at the same time, incomplete) geometrisation of the all-important gauge supersymmetry of the Green--Schwarz super-$\sigma$-model, that is to say, of the $\kappa$-symmetry of Refs.\,\cite{deAzcarraga:1982njd,Siegel:1983hh,Siegel:1983ke}, known to effectively implement supersymmetric balance between the bosonic and fermionic degrees of freedom in the field theories under consideration (whence also the necessity to study it closely). The geometrisation assumed the form of an incomplete $\kappa$-equivariant structure on the extended super-$p$-gerbe, derived explicitly for $\,p\in\{0,1\}\,$ and termed the weak $\kappa$-equivariant structure, its existence being, again, in conformity with the bosonic intuition as applied to $\kappa$-symmetry, the gauge symmetry of the super-$\sigma$-model.\medskip The results reported in the present paper prompt a host of natural questions, and actually define a concrete formal context in which these may be formulated.
Starting with those of the more fundamental nature, it is certainly tempting to seek an explicit relation between our construction and alternative approaches to supersymmetry in the context of superstring and related models, one such particularly attractive approach being at the heart of the proposal, originally conceived by Killingback \cite{Killingback:1986rd} and Witten \cite{Witten:1988dls}, elaborated by Freed \cite{Yau:1987}, recently revived by Freed and Moore \cite{Freed:2004yc}, and ultimately concretised in the higher-geometric language by Bunke \cite{Bunke:2009} ({\it cp.}\ also \Rcite{Waldorf:2009uf} for an explicit construction), for a geometrisation of the Pfaffian bundle of the target-space Dirac operator, associated with fermionic contributions to the superstring path integral, in terms of a differential ${\rm String}$-structure on the target space. Another, and not entirely unrelated, idea that might -- given the r\^ole played by the algebra and (super)symmetry arguments in our construction -- lead to a deeper understanding of the geometrisation scheme proposed would be to look for an explicit and geometrically meaningful relation between the super-$p$-gerbes constructed in the present work, and in particular the towers of supercentral extensions of the Lie supergroups built over the super-Minkowski (resp.\ super-Poincar\'e) Lie supergroup, and the Lie-$n$-superalgebras and $L_\infty$-superalgebras of Baez {\it et al.} considered in Refs.\,\cite{Baez:2004hda6,Baez:2010ye,Huerta:2011ic}. On the next level, we find directions in which the study initiated in the present paper could and should be completed. 
One such question that complements the discourse developed herein concerns the actual (super)geometric and (super)algebraic content of $\kappa$-symmetry, and its full-fledged (super-)gerbe-theoretic realisation -- among the issues that have to be settled in order to gain a better understanding in this matter, the compatibility of global supersymmetry with the Cartan-geometric constraints resulting from the $\kappa$-symmetry analysis of the Hughes--Polchinski formulation of the Green--Schwarz super-$\sigma$-model stands out as singularly pressing. Driven by bosonic intuitions, one would also like to enquire about the existence and concrete realisation of a multiplicative structure on the Green--Schwarz super-$p$-gerbe, in keeping with the findings of Refs.\,\cite{Carey:2004xt,Waldorf:2008mult,Gawedzki:2009jj}. Finally, our construction of the supersymmetry-(${\rm Ad}_\cdot$-)equivariant structure on the super-$p$-gerbe begs for a logical conclusion in the form of a hands-on construction of a gauged Green--Schwarz super-$\sigma$-model (taking into account the nontrivial nature of the right-regular supersymmetry). With the maximal choice of the supersymmetry group to be gauged, {\it i.e.}, $\,{\mathbb{R}}^{1,d-1\,\vert\,D_{1,d-1}}$,\ one should expect, on the basis of the bosonic experience \cite{Gawedzki:1999bq,Gawedzki:2001rm,Gawedzki:2001ye}, the emergence of a topological field theory of the (super-)Chern--Simons type. A prerequisite for this analysis would be an in-depth study of the maximally (super)symmetric boundary conditions in the proposed formulation -- this points in the direction of a systematic study of super-$p$-gerbe (bi-)modules, or -- more generally -- the reconstruction of the associated higher categories of super-$p$-gerbes over $\,{\rm sMink}^{1,d-1\,\vert\,D_{1,d-1}}$. Last but not least, our work paves the way to a variety of natural and interesting applications and extensions.
One obvious line of development is the application of the formalism proposed to the super-$\sigma$-models on supertargets with the body of the general type $\,{\rm AdS}_{p+2}\x{\mathbb{S}}^{d-p-2}\,$ whose exploration has led to remarkable progress in string theory, as seen from the phenomenological as well as the purely theoretical perspective. Here, the hope is that the ideas and constructions advanced in the present work prove sufficiently universal and technically robust to accommodate the extra complexity of these superbackgrounds, whose super-$\sigma$-model description is -- after all -- structurally akin to that considered above. Another conceivable extension is an explicit construction of a bosonisation/fermionisation defect (and the associated super-1-gerbe bi-brane) in the much more tractable super-Minkowskian setting -- this promises to shed some light on the geometry behind the correspondence between worldsheet and target-space supersymmetry in superstring theory. We shall certainly return to these ideas in a future work. \newpage
\section{Introduction}\label{introqp} \subsection{Quasi-projectivity of moduli spaces} The question of the quasi-projectivity of moduli spaces of algebraic varieties was revolutionized by Mumford, who developed geometric invariant theory to study it. This technique allowed Mumford \cite{GIT} to prove the quasi-projectivity of the moduli space of smooth curves, and then Knudsen \cite{KnudsenIII} and Gieseker and Mumford \cite{Mumfordstab} to prove independently the projectivity of the moduli space of stable curves. In higher dimension, the work of Viehweg \cite{Viehweg} establishes the quasi-projectivity of the moduli spaces of smooth canonically polarized varieties in characteristic zero. Another strategy for proving the quasi-projectivity of a moduli space, effective when that space is proper, was developed by Koll\'ar \cite{Kollarcomplete}. It should make it possible to prove the projectivity of modular compactifications of the moduli spaces studied by Viehweg (see \cite{livreKollar}). In these examples, the canonical bundle of the varieties under consideration satisfies positivity properties. By contrast, no general statement is known about the quasi-projectivity of moduli spaces of Fano varieties. In view of Koll\'ar's examples \cite{Kollarcex} of non-quasi-projective moduli spaces of polarized varieties, it is not clear in what generality positive results should be expected. In this text, we study the particular case of smooth complete intersections by methods of geometric invariant theory. We thereby construct many examples of quasi-projective moduli spaces of Fano varieties. \subsection{Statement of the main results} Let $N\geq 2$, $1\leq c\leq N-1$ and $2\leq d_1\leq \ldots\leq d_c$ be integers.
A complete intersection over a field $k$ is a subscheme of codimension $c$ of $\mathbb{P}^N_k$ defined by $c$ homogeneous equations of degrees $d_1,\dots, d_c$. Let $H$ be the open subscheme of the Hilbert scheme of $\mathbb{P}^N_{\mathbb{Z}}$ parametrizing smooth complete intersections (see \cite{Sernesi} 4.6.1). If we are not in the case $c=1$, $d_1=2$, the action of $PGL_{N+1}$ on $H$ by change of coordinates is proper (\cite{Oolsep} Theorem 1.7), and the theorem of Keel and Mori \cite{KM} shows the existence of a geometric quotient $M$ of $H$ by $PGL_{N+1}$, which is unique by \cite{Kollarquo} Corollary 2.15. It is a separated algebraic space of finite type over $\Spec(\mathbb{Z})$: the (coarse) moduli space of smooth complete intersections. Is the algebraic space $M$ a scheme? A quasi-projective scheme? An affine scheme? The main result of this text is the following: \begin{thm}\label{edm} Let $M$ be the moduli space of smooth complete intersections. \begin{enumerate}[(i)] \item If $d_1=\dots=d_c$ and we are not in the case $c=1$, $d_1=2$, then $M$ is an affine scheme. \item If $c\geq 2$, $d_1<d_2=\dots=d_c$ and $d_2(N-c+2)>d_1((c-1)(d_2-d_1)+1)$, then $M$ is a quasi-projective scheme. \end{enumerate} \end{thm} In characteristic zero, the quasi-projectivity of a moduli space of smooth varieties with ample canonical bundle is known by the work of Viehweg \cite{Viehweg}. Now if $c\geq 2$ and $d_1<d_2=\dots=d_c$, and the canonical bundle of the complete intersections under consideration is not ample, then the hypotheses of Theorem \ref{edm} (ii) are satisfied. Indeed, we have $N+1\geq (c-1)d_2+d_1\geq(c-1)(d_2-d_1)+(c-1)+d_1$. Hence $d_2(N-c+2)\geq d_2(c-1)(d_2-d_1)+d_2d_1>d_1((c-1)(d_2-d_1)+1)$.
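As a concrete illustration of the numerical hypothesis (this check is immediate and not used elsewhere), consider the smooth intersections of a quadric and a cubic in $\mathbb{P}^3$, i.e. $N=3$, $c=2$, $d_1=2$, $d_2=3$, a case taken up in \cite{Lazaetcie} and \cite{Lazaetcie2}. The inequality of Theorem \ref{edm} (ii) reads
$$d_2(N-c+2)=3\cdot 3=9\ >\ 4=2\,\big(1\cdot(3-2)+1\big)=d_1\big((c-1)(d_2-d_1)+1\big),$$
so (ii) applies. Note that here the canonical bundle $\mathcal{O}(d_1+d_2-N-1)=\mathcal{O}(1)$ of the complete intersections is ample: the hypothesis of (ii) can thus hold beyond the non-ample range discussed above.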
The results of Viehweg and Theorem \ref{edm} therefore imply: \begin{cor} In characteristic zero, if $d_1\leq d_2=\dots=d_c$ and we are not in the case $c=1$, $d_1=2$, then $M$ is a quasi-projective scheme. \end{cor} \subsection{The case $d_1=\dots=d_c$}\label{edmaff} The proof of Theorem \ref{edm} (ii) occupies most of this text. By contrast, Theorem \ref{edm} (i) is easy: the case of hypersurfaces ($c=1$) is due to Mumford (\cite{GIT} Prop. 4.2), and the proof generalizes readily. \begin{proof}[$\mathbf{Proof \text{ }of \text{ }Theorem\text{ }\ref{edm} (i)}$]~ Let $\bar{H}$ be the Grassmannian (relative over $\Spec(\mathbb{Z})$) of vector subspaces of dimension $c$ of $H^0(\mathbb{P}^N,\mathcal{O}(d_1))$. The Hilbert scheme $H$ is identified with an open subscheme of $\bar{H}$: the complement of the discriminant divisor. Since the Grassmannian is smooth with Picard group generated by the Pl\"ucker bundle, every nontrivial effective divisor on it is ample. Hence the discriminant is ample, and its complement $H$ is affine. We write $H=\Spec(A)$. By \cite{Oolsep} Theorem 1.7, $PGL_{N+1}$ acts properly on $H$. Since moreover $PGL_{N+1}$ is reductive, we may apply a theorem of Seshadri (\cite{Seshadri} Theorem 3, \cite{Kollarquo} Theorem 7.3) to show that the geometric quotient of $H$ by $PGL_{N+1}$ is $M=\Spec(A^{PGL_{N+1}})$, which is therefore affine. \end{proof} \subsection{Outline of the text}\label{plan} Mumford's argument described above works because $H$ admits a very simple compactification. When $d_1<d_2=\dots=d_c$, $H$ still has an explicit compactification $\bar{H}$: a Grassmannian bundle over a projective space. Sections \ref{constructionsqp} and \ref{parample} are devoted to the construction and study of this compactification. The main result is Theorem \ref{ample}, which computes its ample cone.
One might then hope that Mumford's argument still works: this would require the discriminant divisor to be ample on $\bar{H}$. Unfortunately, this is never the case when $c=2$ (see Remark \ref{echec}). We must therefore apply geometric invariant theory less naively: we fix an ample bundle on $\bar{H}$ and compute, using the Hilbert--Mumford criterion, when all smooth complete intersections are stable. This is the subject of Section \ref{preuveedm}. The proof of the inequality that allows us to verify the Hilbert--Mumford criterion is postponed to the third part: this is Theorem \ref{alphadeg}. It is stated and proved without restrictive hypotheses on the degrees of the complete intersections. We can now explain the role of the hypotheses of Theorem \ref{edm}. If we are not in the case $d_1\leq d_2=\dots=d_c$, I do not know an explicit compactification of $H$ analogous to those mentioned above. If $d_1<d_2=\dots=d_c$ but the inequality $d_2(N-c+2)>d_1((c-1)(d_2-d_1)+1)$ does not hold, then no ample line bundle on $\bar{H}$ makes all smooth complete intersections stable (Proposition \ref{stablisse}), and geometric invariant theory cannot be applied on $\bar{H}$. To prove the quasi-projectivity of $M$ for other values of the degrees by geometric invariant theory, one must therefore consider another compactification of $H$. One may choose (following Mumford \cite{Mumfordstab}) the Hilbert scheme of $\mathbb{P}^N$. This possibility is discussed in the fourth part. There we explain in particular why inequality \ref{alphadeg} is weaker than the one that would be needed to prove the Hilbert stability of smooth complete intersections.
\subsection{Relation to other work} The geometric invariant theory of hypersurfaces or complete intersections in $\mathbb{P}^N$ has been studied in many particular cases: quartic surfaces \cite{Shah4}, cubic threefolds \cite{Allcock33}, cubics in $\mathbb{P}^5$ \cite{Laza34}, pencils of quadrics in $\mathbb{P}^4$ \cite{AM224}, intersections of a quadric and a cubic in $\mathbb{P}^3$ \cite{Lazaetcie}, \cite{Lazaetcie2}, ... Each of these works studies one specific moduli space and carries out a complete analysis: the semistable locus is computed and one obtains a compactification of the moduli space. In this text, we obtain results for many values of the degrees, but the result is weaker: we content ourselves with showing that smooth complete intersections are stable. Let us particularly mention \cite{Lazaetcie} and \cite{Lazaetcie2}, where a very precise study of the particular case $N=3$, $c=2$, $d_1=2$ and $d_2=3$ is carried out, using the compactification $\bar{H}$ studied in the remainder of this article. \paragraph{Acknowledgements.}~ The suggestions of an anonymous referee have led to substantial improvements in the presentation of this text. \section{Geometry of the scheme $\bar{H}$}\label{partieqp} \begin{conventions}\label{notationsgen} In this part, we fix integers $2\leq c\leq N-1$ and $2\leq d_1<d_2=\ldots= d_c$. A complete intersection over a field $K$ is always of codimension $c$ in $\mathbb{P}^N_K$ and of degrees $d_1,\dots,d_c$. Unless otherwise stated, the schemes we consider are defined over $\Spec(\mathbb{Z})$. In particular, $\mathbb{P}^N=\mathbb{P}^N_{\mathbb{Z}}$. Whenever we work with a geometric point, we will denote by $K$ the algebraically closed field over which it is defined.
If $\mathcal{F}$ is a locally free sheaf on a scheme, the geometric vector bundle associated with $\mathcal{F}$ is the one whose sheaf of sections is $\mathcal{F}^{\vee}$. We denote by $\mathbb{G}(r,\mathcal{F})$ the Grassmannian of rank $r$ vector subspaces of this geometric vector bundle. When $r=1$, we also denote this scheme by $\mathbb{P}(\mathcal{F})$. \end{conventions} \subsection{Constructions}\label{constructionsqp} We first construct the schemes $H$ and $\bar{H}$, the families of subschemes of $\mathbb{P}^N$ that they parametrize, and various locally free sheaves on these spaces. We will use in particular the notation of the diagram below. $$\xymatrix @C=5mm @R=5mm{ &&&\bar{\mathcal{X}}\ar[dlll]^{pr_2}\ar@{^{(}->}[dl] \\ \bar{H}\ar[dd]_{\pi_2}&&\pi_2^*\bar{\mathcal{X}}_{d_1}\ar[ll]^{pr_1}\ar@{^{(}->}[dl]\ar[dd]_{\pi_2}& \\ & \mathbb{P}^N\times\bar{H}\ar[ul]^{pr}\ar[dd]_<<<<{\pi_2}&& \\ \bar{H}_{d_1}\ar[dd]_{\pi_1}&&\bar{\mathcal{X}}_{d_1}\ar[ll]|\hole^<<<<<<<<<<<{pr_1}\ar@{^{(}->}[dl]& \\ & \mathbb{P}^N\times\bar{H}_{d_1}\ar[dd]_{\pi_1}\ar[ul]^{pr}&& \\ \Spec(\mathbb{Z})&&&\\ & \mathbb{P}^N \ar[ul]^{pr} && }$$ \paragraph{Hypersurfaces.}~ Let $d\geq 1$. Denote by $pr:\mathbb{P}^N\to \Spec(\mathbb{Z})$ the structural morphism. The sheaf $pr_*\mathcal{O}_{\mathbb{P}^N}(d)$ on $\Spec(\mathbb{Z})$ is locally free, and its geometric fibers are identified with $H^0(\mathbb{P}^N_K,\mathcal{O}(d))$. We set $\bar{H}_{d}=\mathbb{P}((pr_*\mathcal{O}_{\mathbb{P}^N}(d))^{\vee})$ and denote by $\pi_1:\bar{H}_d\rightarrow\Spec(\mathbb{Z})$ the projection. A geometric point of $\bar{H}_d$ is a one-dimensional subspace $\langle F\rangle$ of $H^0(\mathbb{P}^N_K,\mathcal{O}(d))$. \vspace{1em} We still denote by $pr:\mathbb{P}^N\times\bar{H}_d\to\bar{H}_d$ and $\pi_1:\mathbb{P}^N\times\bar{H}_d\to\mathbb{P}^N$ the base changes.
The construction of $\bar{H}_d$ provides an injection of the tautological line bundle $\mathcal{O}_{\bar{H}_d}(-1)\to\pi_1^*pr_*\mathcal{O}_{\mathbb{P}^N}(d)$. By base change along the flat morphism $\pi_1$, this injection can be rewritten as $\mathcal{O}_{\bar{H}_d}(-1)\to pr_*\mathcal{O}_{\mathbb{P}^N\times\bar{H}_d}(d;0)$. Pulling back to $\mathbb{P}^N\times\bar{H}_d$ and using adjunction, we obtain a morphism of line bundles $\mathcal{O}_{\mathbb{P}^N\times\bar{H}_d}(0;-1)\rightarrow\mathcal{O}_{\mathbb{P}^N\times\bar{H}_d}(d;0)$. The vanishing locus of this morphism is a Cartier divisor $\bar{\mathcal{X}}_d$ on $\mathbb{P}^N\times\bar{H}_d$. By construction, the fiber of $pr_1:\bar{\mathcal{X}}_d\rightarrow\bar{H}_d$ at $\langle F\rangle$ is the subscheme $\{F=0\}$ of $\mathbb{P}^N_K$. The equation of $\bar{\mathcal{X}}_{d}$ yields the following short exact sequence on $\mathbb{P}^N\times\bar{H}_{d}$: $$0\rightarrow\mathcal{O}_{\mathbb{P}^N\times\bar{H}_{d}}(-d;-1)\rightarrow \mathcal{O}_{\mathbb{P}^N\times\bar{H}_{d}}\rightarrow\mathcal{O}_{\bar{\mathcal{X}}_{d}}\rightarrow 0.$$ Tensor with $\mathcal{O}_{\mathbb{P}^N\times\bar{H}_{d}}(l;0)$ and apply $pr_*$, noting, by computing the $H^1$ of the fibers, that $R^1pr_*\mathcal{O}_{\mathbb{P}^N\times\bar{H}_{d}}(-d;-1)=0$. Using the projection formula and base change along the flat morphism $\pi_1$, we obtain on $\bar{H}_{d}$ the following short exact sequence of sheaves: \begin{equation} 0\rightarrow \pi_1^*pr_*\mathcal{O}_{\mathbb{P}^N}(l-d)\otimes\mathcal{O}_{\bar{H}_{d}}(-1)\rightarrow \pi_1^*pr_*\mathcal{O}_{\mathbb{P}^N}(l)\rightarrow pr_{1*}\mathcal{O}_{\bar{\mathcal{X}}_{d}}(l)\rightarrow 0.
\label{faisceau1} \end{equation} By right exactness of the tensor product, we see that the geometric fiber $(pr_{1*}\mathcal{O}_{\bar{\mathcal{X}}_{d}}(l))_{\langle F\rangle}$ is $H^0(\mathbb{P}^N_K,\mathcal{O}(l))/\left\langle F\right\rangle$, where we write $\langle F\rangle=H^0(\mathbb{P}^N_K,\mathcal{O}(l-d))\cdot F$. Hence $pr_{1*}\mathcal{O}_{\bar{\mathcal{X}}_{d}}(l)$ is locally free, since the dimension of its fibers is constant, and the geometric fiber at $\langle F\rangle$ of the short exact sequence of locally free sheaves (\ref{faisceau1}) is: \begin{equation} 0\rightarrow\left\langle F\right\rangle\rightarrow H^0(\mathbb{P}^N_K,\mathcal{O}(l)) \rightarrow H^0(\mathbb{P}^N_K,\mathcal{O}(l))/\left\langle F\right\rangle\rightarrow 0. \label{fibfaisceau1} \end{equation} \paragraph{Complete intersections.} We saw above that $pr_{1*}\mathcal{O}_{\bar{\mathcal{X}}_{d_1}}(d_2)$ is a locally free sheaf on $\bar{H}_{d_1}$. We write $\bar{H} = \mathbb{G}_{\bar{H}_{d_1}}(c-1,(pr_{1*}\mathcal{O}_{\bar{\mathcal{X}}_{d_1}}(d_2)^{\vee}))$ and denote by $\pi_2:\bar{H}\rightarrow\bar{H}_{d_1}$ the projection. By (\ref{fibfaisceau1}), the geometric points of $\bar{H}$ are in bijection with the data of a line $\langle F_1\rangle$ in $H^0(\mathbb{P}^N_K,\mathcal{O}(d_1))$ and a vector subspace of dimension $c-1$ of $H^0(\mathbb{P}^N_K,\mathcal{O}(d_2))/\langle F_1\rangle$. If $F_2,\dots, F_c\in H^0(\mathbb{P}^N_K,\mathcal{O}(d_2))$ generate this subspace, we denote this geometric point of $\bar{H}$ by $[F_1,F_2,\dots, F_c]$. The description of $\bar{H}$ as a relative Grassmannian over a projective space shows that its Picard group has rank $2$, generated by $\mathcal{O}(1,0)=\pi_2^*\mathcal{O}(1)$ and by the relative Pl\"ucker bundle $\mathcal{O}(0,1)$.
\vspace{1em} The construction of $\bar{H}$ provides an injection of the tautological bundle $\mathcal{F}\to\pi_2^*pr_{1*}\mathcal{O}_{\bar{\mathcal{X}}_{d_1}}(d_2)$. By base change along the flat morphism $\pi_2$, this injection can be rewritten as $\mathcal{F}\rightarrow pr_{1*}\mathcal{O}_{\pi_2^*\bar{\mathcal{X}}_{d_1}}(d_2;0,0)$. Pulling back to $\pi_2^*\bar{\mathcal{X}}_{d_1}$ and using adjunction, we obtain a morphism of vector bundles $pr^*_{1}\mathcal{F}\rightarrow\mathcal{O}_{\pi_2^*\bar{\mathcal{X}}_{d_1}}(d_2;0,0)$. The zero locus of this morphism is a subscheme $\bar{\mathcal{X}}$ of $\pi_2^*\bar{\mathcal{X}}_{d_1}$. By construction, the fiber of the projection $pr_2:\bar{\mathcal{X}}\rightarrow\bar{H}$ at $[F_1,F_2,\dots,F_c]$ is the subscheme $\{F_1=F_2=\dots=F_c=0\}$ of $\mathbb{P}^N_K$. Denote by $H$ the open subscheme of $\bar{H}$ consisting of the geometric points $[F_1,F_2,\dots,F_c]$ such that $\{F_1=F_2=\dots=F_c=0\}$ is smooth of codimension $c$ in $\mathbb{P}^N_K$. We denote by $\mathcal{X}\rightarrow H$ the restriction of $\bar{\mathcal{X}}\rightarrow\bar{H}$ to $H$. One easily shows that $\mathcal{X}\rightarrow H$ is identified with the Hilbert scheme of smooth complete intersections together with its universal family.
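For orientation, let us record the dimension of $\bar{H}$; this is a direct count from the construction above and is not used in the sequel. The geometric fibers of $\bar{H}_{d_1}$ over $\Spec(\mathbb{Z})$ have dimension $\binom{N+d_1}{N}-1$, and the fibers of $\pi_2$ are Grassmannians of $(c-1)$-dimensional subspaces of a vector space of dimension $\binom{N+d_2}{N}-\binom{N+d_2-d_1}{N}$, so that the geometric fibers of $\bar{H}$ over $\Spec(\mathbb{Z})$ have dimension
$$\binom{N+d_1}{N}-1+(c-1)\left(\binom{N+d_2}{N}-\binom{N+d_2-d_1}{N}-(c-1)\right).$$
For instance, for $N=3$, $c=2$, $d_1=2$, $d_2=3$, this gives $9+(20-4-1)=24$.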
\vspace{1em} By construction of $\bar{H}$, we have a short exact sequence of locally free sheaves on $\bar{H}$: \begin{equation}\label{faisceau2} 0\to\mathcal{F}\to \pi_2^*pr_{1*}\mathcal{O}_{\bar{\mathcal{X}}_{d_1}}(d_2)\to \mathcal{Q}\to0, \end{equation} whose geometric fiber at $[F_1,\dots, F_c]$ is identified with: \begin{equation}\label{fibfaisceau2} \scriptstyle{0\to\langle F_2,\dots,F_c\rangle\to H^0(\mathbb{P}^N_K,\mathcal{O}(d_2))/\langle F_1\rangle \to H^0(\mathbb{P}^N_K,\mathcal{O}(d_2))/\langle F_1,\dots,F_c\rangle\to0.} \end{equation} Moreover, by (\ref{faisceau1}) with $d=d_1$ and $l=d_2$, we have a short exact sequence of locally free sheaves on $\bar{H}$: \begin{equation}\label{faisceau3} \scriptstyle{0\to\pi_2^*\pi_1^*pr_*\mathcal{O}_{\mathbb{P}^N}(d_2-d_1)\otimes\mathcal{O}(-1,0)}\to\scriptstyle{ \pi_2^*\pi_1^*pr_{*}\mathcal{O}_{\mathbb{P}^N}(d_2)}\to\scriptstyle{\pi_2^*pr_{1*}\mathcal{O}_{\bar{\mathcal{X}}_{d_1}}(d_2)}\to 0. \end{equation} By (\ref{fibfaisceau1}), the geometric fiber of (\ref{faisceau3}) at $[F_1,\dots, F_c]$ is identified with: \begin{equation}\label{fibfaisceau3} 0\to\langle F_1\rangle\to H^0(\mathbb{P}^N_K,\mathcal{O}(d_2)) \to H^0(\mathbb{P}^N_K,\mathcal{O}(d_2))/\langle F_1\rangle\to0. \end{equation} Denote by $\mathcal{E}$ the locally free sheaf defined as the kernel of the composite of the surjections $\pi_2^*\pi_1^*pr_{*}\mathcal{O}_{\mathbb{P}^N}(d_2)\rightarrow \pi_2^*pr_{1*}\mathcal{O}_{\bar{\mathcal{X}}_{d_1}}(d_2)\rightarrow \mathcal{Q}$.
The exact sequences (\ref{faisceau2}), (\ref{fibfaisceau2}), (\ref{faisceau3}) and (\ref{fibfaisceau3}) allow us to write the exact diagram (\ref{diagfaisc}) of locally free sheaves on $\bar{H}$ below and to compute its geometric fiber (\ref{diagfibre}) at $[F_1,\dots,F_c]$: \begin{equation} \begin{split} \xymatrix @C=2mm @R=3mm { &0\ar[d]&0\ar[d]& \\ &\scriptstyle{\pi_2^*\pi_1^*pr_{*}\mathcal{O}_{\mathbb{P}^N}(d_2-d_1)\otimes\mathcal{O}(-1,0)}\ar@{=}[r]\ar[d] &\scriptstyle{\pi_2^*\pi_1^*pr_{*}\mathcal{O}_{\mathbb{P}^N}(d_2-d_1)\otimes\mathcal{O}(-1,0)}\ar[d]& \\ 0\ar[r]&\mathcal{E}\ar[r]\ar[d]&\scriptstyle{\pi_2^*\pi_1^*pr_{*}\mathcal{O}_{\mathbb{P}^N}(d_2)}\ar[r]\ar[d] &\mathcal{Q} \ar[r]\ar@{=}[d] & 0 \\ 0\ar[r]&\mathcal{F}\ar[r]\ar[d]& \scriptstyle{\pi_2^*pr_{1*}\mathcal{O}_{\bar{\mathcal{X}}_{d_1}}(d_2)}\ar[r]\ar[d] &\mathcal{Q} \ar[r]&0 \\ &0&0 } \label{diagfaisc} \end{split} \end{equation} \begin{equation} \begin{split} \xymatrix @C=2mm @R=3mm { &0\ar[d]&0\ar[d]& \\ &\scriptstyle{\langle F_1\rangle}\ar@{=}[r]\ar[d] &\scriptstyle{\langle F_1\rangle}\ar[d]& \\ 0\ar[r]&\scriptstyle{\langle F_1,\dots, F_c\rangle}\ar[r]\ar[d]&\scriptstyle{H^0(\mathbb{P}^N_K,\mathcal{O}(d_2))}\ar[r]\ar[d] &\scriptstyle{H^0(\mathbb{P}^N_K,\mathcal{O}(d_2))/\langle F_1,\dots, F_c\rangle }\ar[r]\ar@{=}[d] & 0 \\ 0\ar[r]&\scriptstyle{\langle F_1,\dots, F_c\rangle/\langle F_1\rangle}\ar[r]\ar[d]& \scriptstyle{H^0(\mathbb{P}^N_K,\mathcal{O}(d_2))/\langle F_1\rangle}\ar[r]\ar[d] &\scriptstyle{H^0(\mathbb{P}^N_K,\mathcal{O}(d_2))/\langle F_1,\dots, F_c\rangle} \ar[r]&0 \\ &0&0 } \label{diagfibre} \end{split} \end{equation} \paragraph{Action of $SL_{N+1}$.}~ The action of $SL_{N+1}$ on $\mathbb{A}^{N+1}$ induces actions of $SL_{N+1}$ by change of coordinates on all the spaces and sheaves described above.
In particular, $SL_{N+1}$ acts on the locally free sheaf $pr_*\mathcal{O}_{\mathbb{P}^N}(d_1)$, inducing a linearization of $\mathcal{O}(1)$ on $\bar{H}_{d_1}$. By functoriality, we deduce a linearization of $\mathcal{O}(1,0)$ on $\bar{H}$. Similarly, $SL_{N+1}$ acts on the locally free sheaf $pr_{1*}\mathcal{O}_{\bar{\mathcal{X}}_{d_1}}(d_2)$, hence on $\bigwedge^{c-1}(pr_{1*}\mathcal{O}_{\bar{\mathcal{X}}_{d_1}}(d_2))$, inducing a linearization of the relative Pl\"ucker bundle $\mathcal{O}(0,1)$ on $\bar{H}$. By taking linear combinations, we then construct a natural linearization of all the line bundles $\mathcal{O}(l_1,l_2)$ on $\bar{H}$. These linearizations are unique by \cite{GIT}, Prop. 1.4. \subsection{Ample bundles on $\bar{H}$}\label{parample} Before we can prove Theorem \ref{ample}, which describes the ample line bundles on $\bar{H}$, we need some preliminary results on the geometry of $\bar{H}$. \paragraph{Relation between $\bar{H}$ and $\bar{H}_{d_1}\times \bar{H}_{d_2}^{c-1}$.} The injection of locally free sheaves $\mathcal{E}\rightarrow \pi_2^*\pi_1^*pr_{*}\mathcal{O}_{\mathbb{P}^N}(d_2)$ in diagram (\ref{diagfaisc}) induces a closed immersion $\mathbb{P}\mathcal{E}^{\vee}\hookrightarrow\bar{H}\times\bar{H}_{d_2}$ of projective bundles over $\bar{H}$. Taking the fiber product over $\bar{H}$ of $c-1$ copies of these projective bundles, we obtain a closed immersion $i:\Sigma\hookrightarrow\bar{H}\times\bar{H}_{d_2}^{c-1}$. Note that, by diagram (\ref{diagfibre}), the geometric points of $\Sigma$ are the $([F_1,\dots,F_c],\langle G_2\rangle,\dots,\langle G_c\rangle)\in(\bar{H}\times\bar{H}_{d_2}^{c-1})(K)$ such that $G_i\in\left\langle F_1,\dots,F_c \right\rangle$ for $2\leq i\leq c$. The diagram below summarizes the notation we will use.
$$\xymatrix{ &\bar{H}\times\bar{H}_{d_2}^{c-1}\ar[ddl]_{\pi_2\times id}\ar[ddr]^{p_1}& \\ &\Sigma\ar^{i}[u]\ar[dl]^{e}\ar[dr]_{q}& \\ \bar{H}_{d_1}\times\bar{H}_{d_2}^{c-1}\ar[dr]_{p_1}&& \bar{H}\ar[dl]^{\pi_2} \\ &\bar{H}_{d_1}&}$$ Our goal is to compare the spaces $\bar{H}$ and $\bar{H}_{d_1}\times \bar{H}_{d_2}^{c-1}$ via $\Sigma$. We begin by studying the map $e:=(\pi_2\times id)\circ i$. The description of the geometric points of $\Sigma$ and of $\bar{H}_{d_1}\times \bar{H}_{d_2}^{c-1}$ shows that the closed subset of $\bar{H}_{d_1}\times \bar{H}_{d_2}^{c-1}$ where $e$ has fibers of dimension $>0$ has as geometric points the $(\langle F_1\rangle,\langle G_2\rangle,\dots,\langle G_c\rangle)$ such that $\langle F_1\rangle\cap\langle G_2,\dots,G_c\rangle\neq\{0\}$. We denote this closed subset by $W$ and endow it with its reduced structure. Denote by $E$ the closed subset $e^{-1}(W)$ of $\Sigma$, endowed with its reduced structure (the equality $E=e^{-1}(W)$ holds set-theoretically but not necessarily scheme-theoretically). The geometric points of $E$ are the geometric points $([F_1,\dots,F_c],\langle G_2\rangle,\dots,\langle G_c\rangle)$ of $\Sigma$ such that $\langle F_1,G_2,\dots,G_c\rangle\subsetneq\langle F_1,\dots,F_c\rangle$. \begin{lemme}\label{EWirr} The schemes $E$ and $W$ are irreducible. \end{lemme} \begin{proof}[$\mathbf{Proof}$] Since $W=e(E)$, it suffices to show that $E$ is irreducible. For this, it suffices to show that the geometric fibers of $q|_E:E\to\bar{H}$ are irreducible. Let $[F_1,\dots,F_c]$ be a geometric point of $\bar{H}$. The corresponding geometric fiber of $q|_E:E\to\bar{H}$ consists of the $(\langle G_2\rangle,\dots,\langle G_c\rangle)$ that do not induce a basis of $\langle F_1,\dots,F_c\rangle/\langle F_1\rangle$. It is therefore set-theoretically defined by the vanishing of a determinant, and irreducible by the irreducibility of the determinant.
\end{proof} \begin{lemme}\label{birat} The morphism $e|_{\Sigma\setminus E}:\Sigma\setminus E\to (\bar{H}_{d_1}\times \bar{H}_{d_2}^{c-1})\setminus W$ is an isomorphism. \end{lemme} \begin{proof}[$\mathbf{Proof}$] The morphism $e=(\pi_2\times id)\circ i$ is proper, being a composite of proper morphisms; hence, by base change, $e|_{\Sigma\setminus E}$ is proper. Moreover, the description of the geometric points of $\Sigma$ and $\bar{H}_{d_1}\times \bar{H}_{d_2}^{c-1}$ shows that $e|_{\Sigma\setminus E}$ induces a bijection on geometric points. Thus $e|_{\Sigma\setminus E}$ is proper and quasi-finite, hence finite. Finally, by generic smoothness ($\bar{H}_{d_1}\times \bar{H}_{d_2}^{c-1}$ has generic characteristic $0$), $e^{-1}(\langle F_1\rangle,\langle G_2\rangle,\dots,\langle G_c\rangle)$ is a reduced point for generic $(\langle F_1\rangle,\langle G_2\rangle,\dots,\langle G_c\rangle)$, so that $e|_{\Sigma\setminus E}$ is birational. Since $\bar{H}_{d_1}\times \bar{H}_{d_2}^{c-1}$ is regular, hence normal, Zariski's Main Theorem shows that $e|_{\Sigma\setminus E}$ is an isomorphism. \end{proof} \paragraph{Equations for $E$ and $W$.}~ \begin{prop}\label{E} The subscheme $E$ is a Cartier divisor in $\Sigma$ and $\mathcal{O}(E)=i^*\mathcal{O}(0,-1,1,\dots,1)$. \end{prop} \begin{proof}[$\mathbf{Proof}$] The construction of $\Sigma$ as a product of projective bundles over $\bar{H}$ provides $c-1$ tautological subsheaves $\mathcal{L}_1,\dots,\mathcal{L}_{c-1}$ of $q^*\mathcal{E}$. We therefore have a morphism $\bigoplus_{k=1}^{c-1}\mathcal{L}_k\rightarrow q^*\mathcal{E}$ whose geometric fiber at $([F_1,\dots,F_c],\langle G_2\rangle,\dots,\langle G_c\rangle)$ is $\langle G_2\rangle\oplus\dots\oplus\langle G_c \rangle \rightarrow\langle F_1,\dots,F_c\rangle$.
Note that, by compatibility of the tautological sheaves of the projective bundles $\mathbb{P}\mathcal{E}^{\vee}$ and $\bar{H}\times\bar{H}_{d_2}$ over $\bar{H}$, we have $\mathcal{L}_1=i^*\mathcal{O}(0,0,-1,0\dots,0),\dots,\mathcal{L}_{c-1}=i^*\mathcal{O}(0,0,\dots,0,-1)$. On the other hand, pulling back to $\Sigma$ the morphism $\mathcal{E}\rightarrow\mathcal{F}$ of diagram (\ref{diagfaisc}), we obtain a morphism $q^*\mathcal{E}\rightarrow q^*\mathcal{F}$ whose geometric fiber at $([F_1,\dots,F_c],\langle G_2\rangle,\dots,\langle G_c\rangle)$ is $\left\langle F_1,\dots,F_c \right\rangle\rightarrow\left\langle F_1,\dots,F_c \right\rangle/\left\langle F_1 \right\rangle$ by diagram (\ref{diagfibre}). Denote by $\beta:\bigoplus_{k=1}^{c-1}\mathcal{L}_k\rightarrow q^*\mathcal{F}$ the composite of these two morphisms of sheaves. The fibers $\beta_{([F_1,\dots,F_c],\langle G_2\rangle,\dots,\langle G_c\rangle)}: \langle G_2\rangle\oplus\dots\oplus\langle G_c \rangle \rightarrow\langle F_1,\dots,F_c \rangle/\langle F_1 \rangle$ of $\beta$ are isomorphisms exactly when $([F_1,\dots,F_c],\langle G_2\rangle,\dots,\langle G_c\rangle)\notin E(K)$. We deduce that $\det(\beta)$ is an injection, and that its cokernel $\mathcal{K}$ is supported on a closed subscheme of $\Sigma$ whose reduction is $E$. We then note that $\det(\bigoplus_{k=1}^{c-1}\mathcal{L}_k)=i^*\mathcal{O}(0,0,-1,\dots,-1)$ and that, by definition of the Pl\"ucker bundle, $\det(q^*\mathcal{F})=i^*\mathcal{O}(0,-1,0,\dots,0)$. Tensoring with $i^*\mathcal{O}(0,1,0,\dots,0)$, we obtain: $$0\rightarrow i^*\mathcal{O}(0,1,-1,\dots,-1)\rightarrow \mathcal{O}_{\Sigma}\rightarrow\mathcal{K}\otimes i^*\mathcal{O}(0,1,0,\dots,0)\rightarrow 0.$$ The line bundle $i^*\mathcal{O}(0,1,-1,\dots,-1)$ is thus identified with the ideal sheaf of a Cartier divisor $D$ of $\Sigma$ that coincides set-theoretically with $E$.
The subscheme $E$ is therefore the reduced Cartier divisor associated with $D$ on the regular scheme $\Sigma$. Since $E$ is irreducible by Lemma \ref{EWirr}, there exists $k\geq1$ such that $i^*\mathcal{O}(0,1,-1,\dots,-1)=\mathcal{O}(-kE)$. Now the description of $\Sigma$ as a product of projective bundles shows that $i^*\mathcal{O}(0,1,-1,\dots,-1)$ is not divisible in $\Pic(\Sigma)$. Hence necessarily $k=1$, and $\mathcal{O}(E)=i^*\mathcal{O}(0,-1,1,\dots,1)$. \end{proof} The computations we will carry out later require having at least set-theoretic equations for $W$. This is the purpose of Proposition \ref{eqZ}. \begin{prop}\label{eqZ} Let $(\langle F_1\rangle,\langle G_2\rangle,\dots,\langle G_c\rangle)$ be a geometric point of $\bar{H}_{d_1}\times\bar{H}_{d_2}^{c-1}$ not belonging to $W$. Then there exists a divisor $D\in|\mathcal{O}_{\bar{H}_{d_1}\times\bar{H}_{d_2}^{c-1}}((c-1)(d_2-d_1)+1,1,\dots,1)|$ containing $W$ but not $(\langle F_1\rangle,\langle G_2\rangle,\dots,\langle G_c\rangle)$. \end{prop} \begin{proof}[$\mathbf{Proof}$] Let $(X_0,\dots,X_N)$ be a system of coordinates on $\mathbb{P}^N$, that is, a basis of $H^0(\mathbb{P}^N,\mathcal{O}(1))$. We denote by $\mathfrak{M}_d$ the set of monomials of degree $d$ in the $X_i$: it is a basis of $H^0(\mathbb{P}^N,\mathcal{O}(d))$. We obtain coordinates on the projective spaces $\bar{H}_{d_1}$ and $\bar{H}_{d_2}$ by considering the dual bases $(a_L)_{L\in\mathfrak{M}_{d_1}}$ and $(b_M^{(i)})_{M\in\mathfrak{M}_{d_2}}$ of $H^0(\mathbb{P}^N,\mathcal{O}(d_1))^{\vee}=H^0(\bar{H}_{d_1},\mathcal{O}(1))$ and $H^0(\mathbb{P}^N,\mathcal{O}(d_2))^{\vee}=H^0(\bar{H}_{d_2},\mathcal{O}(1))$, where the exponent $i$ ($2\leq i\leq c$) distinguishes the coordinates on the $c-1$ copies of $\bar{H}_{d_2}$.
We choose our coordinate system so that $F_1$ has a nonzero coefficient in $X_0^{d_1}$, which we may then assume equal to $1$. Let $2\leq i\leq c$. Consider the identity \begin{equation} a_{X_0^{d_1}}^{d_2-d_1+1}g^{(i)}=q_{d_2-d_1+1}^{(i)}f+r_{d_2-d_1+1}^{(i)} \label{diveucl} \end{equation} obtained by substituting the variable $b^{(i)}_M$ for the variable $b_M$ in the identity provided by Lemma \ref{division} below. Substituting the coefficients of $F_1$ for the $a_L$ and the coefficients of $G_i$ for the $b^{(i)}_M$, we obtain an equality of the form $G_i=Q_iF_1+R_i$ in $K[X_0,\dots, X_N]$. Since $(\langle F_1\rangle,\langle G_2\rangle,\dots,\langle G_c\rangle)\notin W$, the $R_i$ form a linearly independent family. We can therefore find monomials $(M_j)_{2\leq j\leq c}$ in $\mathfrak{M}_{d_2}$ such that the $(c-1)\times(c-1)$ matrix whose $(i,j)$ entry is the coefficient of $M_j$ in $R_i$ is invertible. We denote by $C^{(i)}_j\in\mathbb{Z}[a_L,b^{(i)}_M]_{L\in\mathfrak{M}_{d_1},M\in\mathfrak{M}_{d_2}}$ the coefficient of $M_j$ in $r_{d_2-d_1+1}^{(i)}$. Then $P=\det(C^{(i)}_j)$ is a polynomial that is homogeneous of degree $(c-1)(d_2-d_1+1)$ in the $a_L$ and, for each $i\in\{2,\dots,c\}$, of degree $1$ in the $b^{(i)}_M$. We view $P$ as a section of $\mathcal{O}_{\bar{H}_{d_1}\times\bar{H}_{d_2}^{c-1}}((c-1)(d_2-d_1+1),1,\dots,1)$. By the choice of the $M_j$, $P$ is nonzero at $(\langle F_1\rangle,\langle G_2\rangle,\dots,\langle G_c\rangle)$. Let us show that $\{P=0\}$ contains $W$. Since $W$ is integral by Lemma \ref{EWirr}, it suffices to check that $\{P=0\}$ contains the geometric points of the dense open subset of $W$ defined by the equation $a_{X_0^{d_1}}\neq0$. So let $(\langle F_1'\rangle,\langle G_2'\rangle,\dots,\langle G_c'\rangle)$ be a geometric point of $W$ such that the coefficient of $X_0^{d_1}$ in $F'_1$ equals $1$.
Since $(\langle F_1'\rangle,\langle G_2'\rangle,\dots,\langle G_c'\rangle)\in W$, there is an equation of the form $\sum_{i=2}^c\lambda_iG'_i=QF'_1$ with $Q\in K[X_0,\dots, X_N]$ and $\lambda_i\in K$ not all zero. For $2\leq i\leq c$, substituting in the equality (\ref{diveucl}) the coefficients of $F'_1$ for the $a_L$ and the coefficients of $G'_i$ for the $b^{(i)}_M$, we obtain equalities of the form $G'_i=Q'_iF'_1+R'_i$. It follows that $\sum_{i=2}^c\lambda_i R'_i=(Q-\sum_{i=2}^c\lambda_i Q'_i)F_1'$. Since no monomial of the $R'_i$ is divisible by $X_0^{d_1}$ and the coefficient of $X_0^{d_1}$ in $F_1'$ is nonzero, we necessarily have $Q-\sum_{i=2}^c\lambda_i Q'_i=0$, hence $\sum_{i=2}^c\lambda_i R'_i=0$. Consequently, $P(F'_1,G'_2,\dots, G_c')$ is the determinant of a matrix whose rows are linearly dependent, and therefore vanishes. This shows that $(\langle F_1'\rangle,\langle G_2'\rangle,\dots,\langle G_c'\rangle)\in \{P=0\}$. Finally, let us note that $P$ is divisible by $a_{X_0^{d_1}}^{c-2}$. For this, we use the last part of Lemma \ref{division}: we have an identity of the form $r_{d_2-d_1+1}=a_{X_0^{d_1}}T+b_{X_0^{d_2}}S$. By homogeneity, $S$ does not depend on the variables $(b_M)_{M\in\mathfrak{M}_{d_2}}$, so that we obtain, for $2\leq i\leq c$, identities of the form $r^{(i)}_{d_2-d_1+1}=a_{X_0^{d_1}}T^{(i)}+b^{(i)}_{X_0^{d_2}}S$. These expressions show that, in the matrix $(C^{(i)}_j)$, each row is the sum of two terms: the first divisible by $a_{X_0^{d_1}}$, the second all proportional to one another. Expanding the determinant shows that $P$ is divisible by $a_{X_0^{d_1}}^{c-2}$. We then set $\tilde{P}=P/a_{X_0^{d_1}}^{c-2}$: it is a section of $\mathcal{O}_{\bar{H}_{d_1}\times\bar{H}_{d_2}^{c-1}}((c-1)(d_2-d_1)+1,1,\dots,1)$. Since $P$ is nonzero at $(\langle F_1\rangle,\langle G_2\rangle,\dots,\langle G_c\rangle)$, so is $\tilde{P}$.
Since $W\subset\{P=0\}$, since $W$ is integral by Lemma \ref{EWirr}, and since $W$ is not contained in $\{a_{X_0^{d_1}}=0\}$, we get $W\subset\{\tilde{P}=0\}$. We have shown that $D=\{\tilde{P}=0\}$ works. \end{proof} \begin{lemme}\label{division} We work in the ring $$A=\mathbb{Z}[X_s,a_L,b_M]_{0\leq s\leq N, L\in\mathfrak{M}_{d_1}, M\in\mathfrak{M}_{d_2}}$$ trigraded by the total degree in the $X_i$, in the $a_L$ and in the $b_M$. Consider the elements $f=\sum_{L\in\mathfrak{M}_{d_1}}a_LL$ and $g=\sum_{M\in\mathfrak{M}_{d_2}}b_MM$ of $A$. Then, for $0\leq j\leq d_2-d_1+1$, there exist $q_j, r_j\in A$, homogeneous of respective degrees $(d_2-d_1,j-1,1)$ and $(d_2,j,1)$, such that no monomial of $r_j$ is divisible by $X_0^{d_2+1-j}$ and $$a_{X_0^{d_1}}^jg=q_jf+r_j.$$ Moreover, if $j\geq 1$, every monomial occurring in $r_j$ is divisible either by $a_{X_0^{d_1}}$ or by $b_{X_0^{d_2}}$. \end{lemme} \begin{proof}[$\mathbf{Proof}$] The existence of $q_j$ and $r_j$ follows from the Euclidean division algorithm. More precisely, we argue by induction on $j$. For $j=0$, take $q_0=0$ and $r_0=g$. To pass from the equality for $j$ to the one for $j+1$, multiply by $a_{X_0^{d_1}}$, collect in $a_{X_0^{d_1}}r_j$ the terms divisible by $X_0^{d_2-j}$, and rewrite these terms using the identity: $$a_{X_0^{d_1}}X_0^{d_2-j}=X_0^{d_2-d_1-j}f+X_0^{d_2-d_1-j}(a_{X_0^{d_1}}X_0^{d_1}-f).$$ This explicit construction makes it easy to check the last property by induction on $j$. \end{proof} \paragraph{Computation of the ample line bundles.}~ We can now show: \begin{thm}\label{ample} The line bundle $\mathcal{O}(l_1,l_2)$ on $\bar{H}$ is ample if and only if $l_2>0$ and $\frac{l_1}{l_2}>(c-1)(d_2-d_1)+1$. \end{thm} \begin{proof}[$\mathbf{Proof}$] Since $\Spec(\mathbb{Z})$ is affine, $\mathcal{O}(l_1,l_2)$ is ample if and only if it is ample relative to $\Spec(\mathbb{Z})$.
By \cite{EGA31} 4.7.1, it suffices to test the ampleness of $\mathcal{O}(l_1,l_2)$ on the fibers of the structural morphism, hence on the geometric fibers of the structural morphism. The proposition is then a consequence of Proposition \ref{nef} below and of Kleiman's criterion: for a projective variety over an algebraically closed field, the ample cone is the interior of the nef cone (\cite{LazarsfeldI}, 1.4.23). \end{proof} \begin{prop}\label{nef} Let $K$ be an algebraically closed field. Then the line bundle $\mathcal{O}(l_1,l_2)$ on $\bar{H}\times_{\mathbb{Z}} K$ is nef if and only if $l_2\geq0$ and $l_1\geq l_2((c-1)(d_2-d_1)+1)$. \end{prop} \begin{proof}[$\mathbf{Proof}$] Throughout this proof, all the varieties we work with are defined over the field $K$. The base changes to $K$ will everywhere be left implicit. \begin{etape1} The condition is necessary. \end{etape1} Suppose that $\mathcal{O}(l_1,l_2)$ is nef. We have $l_2\geq0$ because $\mathcal{O}(l_1,l_2)$ is $\pi_2$-nef. We will prove the second inequality by computing the degree of $\mathcal{O}(l_1,l_2)$ on a well-chosen curve. Let $X_0, X_1\in H^0(\mathbb{P}^N,\mathcal{O}(1))$ be linearly independent equations, let $H\in H^0(\mathbb{P}^N,\mathcal{O}(d_1-1))$ be a nonzero equation, and let $(\lambda^{(i)}_j)_{2\leq i\leq c,1\leq j\leq d_2-d_1}$ be pairwise distinct scalars. For $2\leq i\leq c$, set $G_i=HX_{i-1}\prod_{j=1}^{d_2-d_1}(X_0+\lambda^{(i)}_jX_1)$. Consider the pencil $\beta:\mathbb{P}^1\rightarrow\bar{H}_{d_1}$, $t\mapsto\langle H(X_0+tX_1)\rangle$. The constant section $s:\bar{H}_{d_1}\rightarrow\bar{H}_{d_1}\times\bar{H}_{d_2}^{c-1}$ with value $(\langle G_2\rangle,\dots,\langle G_c\rangle)$ yields a morphism $s\circ \beta:\mathbb{P}^1\rightarrow\bar{H}_{d_1}\times\bar{H}_{d_2}^{c-1}$. Let us compute the points of $\mathbb{P}^1$ whose image under $s\circ \beta$ lies in $W$.
Let $t\in\mathbb{P}^1(K)$ and let $a_2,\dots, a_c\in K$ be not all zero. Then $H(X_0+tX_1)$ divides $\sum_{i=2}^c a_iG_i$ if and only if $X_0+tX_1$ divides $\sum_{i=2}^c a_iX_{i-1}\prod_{j=1}^{d_2-d_1}(X_0+\lambda^{(i)}_jX_1)$. One sees easily that this can happen only if all the $a_i$ but one vanish. If $a_i$ is the nonzero one, the possible values of $t$ are $t=\lambda_j^{(i)}$ for some $j\in\{1,\dots,d_2-d_1\}$, or $t=\infty$ if $i=2$. We have shown that exactly $(c-1)(d_2-d_1)+1$ points of $\mathbb{P}^1$ are sent into $W$ by $s\circ \beta$. Since the image of $s\circ\beta$ is not contained in $W$ and since $e$ is birational by Lemma \ref{birat}, the valuative criterion of properness allows us to lift $s\circ\beta$ to a morphism $\gamma:\mathbb{P}^1\rightarrow\Sigma$. Note that exactly $(c-1)(d_2-d_1)+1$ points of $\mathbb{P}^1$ are sent into $E$ by $\gamma$. Finally, composing with $q$, we obtain a morphism $q\circ \gamma:\mathbb{P}^1\rightarrow\bar{H}$. We then compute the degrees of the line bundles of $\bar{H}$ on $\mathbb{P}^1$: \begin{alignat*}{3} \mathbb{P}^1\cdot\gamma^*q^*\mathcal{O}(1,0) &=\mathbb{P}^1\cdot\gamma^*q^*\pi_2^*\mathcal{O}(1) &&= \mathbb{P}^1\cdot\gamma^*e^*p_1^*\mathcal{O}(1)\\ &= \mathbb{P}^1\cdot\beta^*s^*p_1^*\mathcal{O}(1) &&= \mathbb{P}^1\cdot\beta^*\mathcal{O}(1)\\ & = 1 &&\text{ since $\beta:\mathbb{P}^1\to\bar{H}_{d_1}$ is a line.} \end{alignat*} \begin{alignat*}{6} \mathbb{P}^1\cdot&\gamma^*q^*\mathcal{O}(0,1)=\mathbb{P}^1\cdot\gamma^*e^*\mathcal{O}(0,1,\dots,1)-\mathbb{P}^1\cdot\gamma^*\mathcal{O}(E) &&\\ &\text{\hspace{5em} by Proposition }\ref{E}&&\\ &\leq \mathbb{P}^1\cdot\beta^*s^*\mathcal{O}(0,1,\dots,1)-(c-1)(d_2-d_1)-1 &&\\ &\text{\hspace{5em} by the computation of }\Card(\gamma^{-1}(E))&&\\ & =-(c-1)(d_2-d_1)-1 &&\\ &\text{\hspace{5em} since }s^*\mathcal{O}(0,1,\dots,1)=\mathcal{O}.
\end{alignat*} We finally deduce the desired inequality as follows: \begin{alignat*}{2} l_1-l_2((c-1)(d_2-d_1)+1) &\geq \mathbb{P}^1\cdot\gamma^*q^*\mathcal{O}(l_1,l_2) &&\text{ since }l_2\geq0 \\ &\geq0 &&\text{ since }\mathcal{O}(l_1,l_2) \text{ is nef. } \end{alignat*} \begin{etape2} The condition is sufficient. \end{etape2} Now suppose that the inequalities hold, and let us show that $\mathcal{O}(l_1,l_2)$ is nef. To that end, let $C$ be an integral curve in $\bar{H}$. Write $\tilde{C}$ for its normalization and $\alpha:\tilde{C}\rightarrow\bar{H}$ for the natural morphism. Since $q$ is a locally trivial bundle, we may find a rational section $\beta:\tilde{C}\dashrightarrow\Sigma$ of $\alpha$; we may moreover assume that its image is not contained in $E$. By the valuative criterion of properness, $\beta$ is in fact a morphism. Set $\gamma=e\circ\beta$. Since $\beta(\tilde{C})\not\subset E$, we have $\gamma(\tilde{C})\not\subset W$. By Proposition \ref{eqZ} we may therefore choose a Cartier divisor $D\in|\mathcal{O}_{\bar{H}_{d_1}\times\bar{H}_{d_2}^{c-1}}((c-1)(d_2-d_1)+1,1,\dots,1)|$ containing $W$ but not $\gamma(\tilde{C})$, hence such that $e^*D$ contains $E$ but not $\beta(\tilde{C})$. We then compute: \begin{alignat*}{7} \tilde{C}\cdot\alpha^*\mathcal{O}_{\bar{H}}&(l_1,l_2) = \tilde{C}\cdot\beta^*q^*\mathcal{O}_{\bar{H}}(l_1,l_2) &&\text{ by projection}\\ & = \tilde{C}\cdot\beta^*(e^*\mathcal{O}_{\bar{H}_{d_1}\times\bar{H}_{d_2}^{c-1}}(l_1,l_2,\dots,l_2)-l_2E) &&\text{ by \ref{E}}\\ & \geq \tilde{C}\cdot\beta^*(e^*\mathcal{O}_{\bar{H}_{d_1}\times\bar{H}_{d_2}^{c-1}}(l_1,l_2,\dots,l_2)-l_2e^*D) &&\text{ since }E\subset e^*D\text{, }l_2\geq0\\ &=\tilde{C}\cdot\gamma^*\mathcal{O}(l_1-l_2((c-1)(d_2-d_1)+1),0,\dots,0) &&\text{ by projection }\\ &\geq0.&& \end{alignat*} We have indeed shown that $\mathcal{O}_{\bar{H}}(l_1,l_2)$ is nef.
\end{proof} \begin{rem}\label{echec} In paragraph \ref{edmaff}, we could easily prove Theorem \ref{edm} (i) because the discriminant divisor $\Delta=\bar{H}\setminus H$ was ample on $\bar{H}$, so that its complement $H$ was affine. This method does not allow one to prove Theorem \ref{edm} (ii); more precisely, when $c=2$, it never works. Indeed, Theorem 1.2 of \cite{Ooldeg} allows one to compute the line bundle associated with the discriminant divisor $\Delta=\bar{H}\setminus H$. When $c=2$, the computations are carried out in Example 1.10 of \cite{Ooldeg}, and one obtains $\mathcal{O}(\Delta)=\mathcal{O}(l_1,l_2)$ with $l_1=d_2(e_2^{N-1}+2e_1e_2^{N-2}+\dots+Ne_1^{N-1})$ and $ l_2=d_1(e_1^{N-1}+2e_2e_1^{N-2}+\dots+Ne_2^{N-1}) $, where we have set $e_i=d_i-1$. Since $\frac{l_1}{l_2}\leq\frac{d_2}{d_1}\leq d_2-d_1+1$, Theorem \ref{ample} shows that this line bundle is never ample. When $c>2$, the formulas computing $l_1$ and $l_2$ are more complicated and involve alternating sums, which makes an analogous verification difficult. \end{rem} \subsection{Geometric invariant theory}\label{preuveedm} In this paragraph, we apply the Hilbert-Mumford criterion to prove Theorem \ref{edm} (ii). We begin by evaluating the functions $\mu$ appearing in this criterion for the action of $SL_{N+1}$ on $\bar{H}$ with respect to the $SL_{N+1}$-linearized line bundles described in paragraph \ref{constructionsqp}. These functions $\mu$ depend on a geometric point $P=[F_1,F_2,\dots, F_c]\in\bar{H}(K)$ and on a nontrivial one-parameter subgroup $\rho : \mathbb{G}_{m,K}\to SL_{N+1,K}$. Let us briefly recall their definition. Consider the fiber at $\lim_{t\to 0}\rho(t)\cdot P$ of the geometric line bundle on $\bar{H}$ associated with $\mathcal{O}(l_1,l_2)$.
The morphism $\rho$ induces an action of $\mathbb{G}_{m,K}$ on this fiber. This action is given by a character of $\mathbb{G}_{m,K}$, that is, by an integer; we write $\mu^{\mathcal{O}(l_1,l_2)}(P,\rho)$ for the opposite of this integer. In the two lemmas that follow, we put $\rho$ and $P$ in a form that will allow us to compute $\mu^{\mathcal{O}(l_1,l_2)}(P,\rho)$. \begin{lemme}\label{bon1ps} Let $\rho : \mathbb{G}_{m,K}\to SL_{N+1,K}$ be a nontrivial one-parameter subgroup. Then one can find integers $\alpha_0\leq\dots\leq\alpha_N$, not all zero and with zero sum, and a basis of $K^{N+1}$ in which $\rho(t)\cdot(x_0,\dots,x_N)=(t^{\alpha_0}x_0,\dots,t^{\alpha_N}x_N)$. \end{lemme} \begin{proof}[$\mathbf{Proof}$] This is standard. \end{proof} In the rest of this paragraph, $\rho$ is fixed. We work with a coordinate system and integers $\alpha_i$ as in Lemma \ref{bon1ps}. \begin{conventions}\label{notationsalpha} If $\alpha$ is a collection of integers $\alpha_0\leq\dots\leq\alpha_N$, not all zero and with zero sum, the $\alpha$-degree of a monomial $M=X_0^{\lambda_0}\dots X_N^{\lambda_N}$ is $\deg_{\alpha}(M)=\sum_i\alpha_i\lambda_i$. If $F\in H^0(\mathbb{P}^N_K,\mathcal{O}(d))$ is a nonzero equation, we write $\deg_{\alpha}(F)$ for the largest $\alpha$-degree of the monomials of $F$. By convention, $\deg_\alpha(0)=-\infty$. Let $F^{\alpha}$ be the sum of the terms of $F$ of $\alpha$-degree $\deg_{\alpha}(F)$. We say that $F$ is $\alpha$-homogeneous if $F=F^\alpha$. \end{conventions} \begin{lemme}\label{bonneq} Let $P=[F_1,F_2,\dots,F_c]\in\bar{H}(K)$. Then there exist equations $\Phi_i\in H^0(\mathbb{P}^N_K,\mathcal{O}(d_i))$ for $2\leq i\leq c$ such that: \begin{enumerate}[(i)] \item $P=[F_1,\Phi_2,\dots,\Phi_c]$. \item $\deg_{\alpha}(\Phi_i)\leq\deg_{\alpha}(F_i)$ for $2\leq i\leq c$. \item $[F_1^{\alpha},\Phi_2^{\alpha},\dots, \Phi_c^{\alpha}]\in\bar{H}(K)$.
\end{enumerate} \end{lemme} \begin{proof}[$\mathbf{Proof}$] Choose $\Phi_i\in H^0(\mathbb{P}^N_K,\mathcal{O}(d_i))$ for $2\leq i\leq c$ satisfying properties (i) and (ii), and such that $\sum_{i=2}^c\deg_{\alpha}(\Phi_i)$ is minimal. Let us show by contradiction that condition (iii) is then automatically satisfied. If it were not, one could find $Q\in H^0(\mathbb{P}^N_K,\mathcal{O}(d_2-d_1))$ and $\lambda_2,\dots,\lambda_c\in K$, not all zero, such that $QF_1^{\alpha}=\sum_{i=2}^c\lambda_i \Phi_i^{\alpha}$. Keeping in this identity only the terms of maximal $\alpha$-degree (i.e., after possibly replacing $Q$ by $Q^\alpha$ or by $0$, and replacing some of the $\lambda_i$ by $0$), we may assume that all the terms of this identity are $\alpha$-homogeneous of the same $\alpha$-degree. Let then $2\leq j\leq c$ be such that $\lambda_{j}$ is nonzero; set $\Phi'_i=\Phi_i$ for $i\neq j$ and $\Phi'_j=\sum_{i=2}^c\lambda_i \Phi_i-QF_1$. The $\Phi'_i$ still satisfy property (i). We clearly have $\deg_\alpha(\Phi'_i)=\deg_\alpha(\Phi_i)$ for $i\neq j$. Moreover, the expression for $\Phi'_j$ shows that $\deg_{\alpha}(\Phi'_j)\leq\deg_{\alpha}(\Phi_j)$, but that the sum of the terms of $\alpha$-degree $\deg_{\alpha}(\Phi_j)$ in $\Phi'_j$ vanishes, i.e., $\deg_{\alpha}(\Phi'_j)<\deg_{\alpha}(\Phi_j)$. On the one hand, this shows that the $\Phi'_i$ still satisfy property (ii). On the other hand, it contradicts the minimality in the choice of the $\Phi_i$. \end{proof} Let us now compute the functions $\mu$ that will be useful to us. \begin{lemme}\label{mu1} Let $\langle F_1\rangle\in\bar{H}_{d_1}(K)$. Then $\mu^{\mathcal{O}(1)}(\langle F_1\rangle,\rho)=\deg_{\alpha}(F_1)$.
\end{lemme} \begin{proof}[$\mathbf{Proof}$] Recall that, by definition of the dual action, if $F$ is an $\alpha$-homogeneous element of $H^0(\mathbb{P}^N_K,\mathcal{O}(d_1))=\Sym^{d_1}(K^{N+1})^{\vee}$, the action of $\rho$ on $F$ is given by $\rho(t)\cdot F=t^{-\deg_{\alpha}(F)}F$. Thus, writing $F_1=F_1^{\alpha}+R$, \begin{alignat*}{2} \rho(t)\cdot\langle F_1\rangle=\langle\rho(t)\cdot F_1\rangle&=\langle t^{\deg_{\alpha}(F_1)}\rho(t)\cdot F_1\rangle&& \\ &= \langle F_1^{\alpha}+t^{\deg_{\alpha}(F_1)}\rho(t)\cdot R \rangle.&& \end{alignat*} As the right-hand term tends to $0$, we get $\lim_{t\to 0}\rho(t)\cdot\langle F_1\rangle=\langle F_1^{\alpha}\rangle$. Finally, in $H^0(\mathbb{P}^N_K,\mathcal{O}(d_1))$, $\rho(t)\cdot F_1^{\alpha}=t^{-\deg_{\alpha}(F_1)}F_1^{\alpha}$, which shows, by definition of the $SL_{N+1}$-linearization of $\mathcal{O}(1)$, that $\mu^{\mathcal{O}(1)}(\langle F_1\rangle,\rho)=\deg_{\alpha}(F_1)$. \end{proof} \begin{lemme}\label{mu2} Let $P\in\bar{H}(K)$. Write $P=[F_1,\Phi_2,\dots,\Phi_c]$, where the $\Phi_i$ have been chosen as in Lemma \ref{bonneq}. Then $\mu^{\mathcal{O}(0,1)}(P,\rho)=\sum_{i=2}^c\deg_{\alpha}(\Phi_i)$. \end{lemme} \begin{proof}[$\mathbf{Proof}$] The proof is analogous to that of the previous lemma. \end{proof} Combining Lemmas \ref{mu1} and \ref{mu2}, we obtain: \begin{prop}\label{mucombo} Let $P\in\bar{H}(K)$. Write $P=[F_1,\Phi_2,\dots,\Phi_c]$, where the $\Phi_i$ have been chosen as in Lemma \ref{bonneq}. Then: $$\mu^{\mathcal{O}(l_1,l_2)}(P,\rho)=l_1\deg_{\alpha}(F_1)+l_2\sum_{i=2}^c\deg_{\alpha}(\Phi_i).$$ \end{prop} We are now ready to apply the Hilbert-Mumford criterion. \begin{prop}\label{stablisse} There exists an ample $SL_{N+1}$-linearized line bundle $\mathcal{L}$ on $\bar{H}$ such that $H\subset\bar{H}^s(\mathcal{L})$ if and only if \begin{equation}\label{condinum} d_2(N-c+2)>d_1((c-1)(d_2-d_1)+1).
\end{equation} \end{prop} \begin{proof}[$\mathbf{Proof}$] We saw in paragraph \ref{constructionsqp} that the line bundles on $\bar{H}$ are of the form $\mathcal{O}(l_1,l_2)$ and carry a unique $SL_{N+1}$-linearization. By Theorem \ref{ample}, such a line bundle is ample if and only if $l_2>0$ and $\frac{l_1}{l_2}>(c-1)(d_2-d_1)+1$. Suppose first that (\ref{condinum}) holds, and let us show that $\mathcal{L}=\mathcal{O}(l_1,l_2)$ with $l_1=kd_2(N+2-c)-1$ and $l_2=kd_1$ works for $k\gg0$. This line bundle is indeed ample: $l_2>0$, and $\frac{l_1}{l_2}>(c-1)(d_2-d_1)+1$ holds for $k\gg0$ by (\ref{condinum}). Let us then show $H\subset\bar{H}^s(\mathcal{O}(l_1,l_2))$ by applying the Hilbert-Mumford criterion (\cite{GIT} Theorem 2.1). To that end, let $P=[F_1,F_2,\dots, F_c]\in H(K)$ and let $\rho : \mathbb{G}_{m,K}\to SL_{N+1,K}$ be a nontrivial one-parameter subgroup, which we may assume to be of the form obtained in Lemma \ref{bon1ps}. By Lemma \ref{bonneq} and Proposition \ref{mucombo}, after possibly modifying $F_2,\dots,F_c$, we may assume that $\mu^{\mathcal{O}(l_1,l_2)}(P,\rho)=l_1\deg_{\alpha}(F_1)+l_2\sum_{i=2}^c\deg_{\alpha}(F_i).$ By Theorem \ref{alphadeg} (ii), to show that $\mu^{\mathcal{O}(l_1,l_2)}(P,\rho)>0$ and conclude, it suffices to check that $(N+1)l_1d_1>l_1d_1+(c-1)l_2d_2$ and that $(N+1)l_2d_2>l_1d_1+(c-1)l_2d_2$, i.e., that: $$\frac{d_2}{d_1}\frac{c-1}{N}<\frac{l_1}{l_2}<\frac{d_2}{d_1}(N-c+2).$$ We then note that $\frac{l_1}{l_2}$ is an increasing function of $k$ tending to $\frac{d_2}{d_1}(N-c+2)$. This shows that the second inequality always holds. Since $\frac{c-1}{N}<1<N-c+2$, it also shows that the first inequality holds for $k\gg0$. Conversely, suppose that (\ref{condinum}) does not hold, and let $\mathcal{O}(l_1,l_2)$ be an ample line bundle on $\bar{H}$.
The ampleness of $\mathcal{O}(l_1,l_2)$ and the failure of (\ref{condinum}) show that $\frac{l_1}{l_2}\geq\frac{d_2}{d_1}(N-c+2)$, hence that $(N+1)l_2d_2\leq l_1d_1+(c-1)l_2d_2$. Then, by Theorem \ref{alphadeg} (iii), one can find integers $\alpha_0\leq\dots\leq\alpha_N$, not all zero and with zero sum, and equations $F_i\in H^0(\mathbb{P}^N_K,\mathcal{O}(d_i))$, $1\leq i\leq c$, defining a smooth complete intersection, such that $l_1\deg_{\alpha}(F_1)+l_2\sum_{i=2}^c\deg_{\alpha}(F_i)\leq 0$. Let $\Phi_2,\dots,\Phi_c$ be as in Lemma \ref{bonneq}. By condition (ii) of that lemma, we still have $l_1\deg_{\alpha}(F_1)+l_2\sum_{i=2}^c\deg_{\alpha}(\Phi_i)\leq 0$. Write $\rho:\mathbb{G}_{m,K}\to SL_{N+1,K}$ for the one-parameter subgroup defined by $\rho(t)\cdot(x_0,\dots,x_N)=(t^{\alpha_0}x_0,\dots,t^{\alpha_N}x_N)$, and set $P=[F_1,F_2,\dots,F_c]\in H(K)$. By Proposition \ref{mucombo}, $\mu^{\mathcal{O}(l_1,l_2)}(P,\rho)\leq 0$, and the Hilbert-Mumford criterion shows that $P\notin\bar{H}^s(\mathcal{O}(l_1,l_2))$. \end{proof} Theorem \ref{edm} (ii) follows immediately. \begin{proof}[$\mathbf{Proof \text{ }of \text{ }Theorem\text{ }\ref{edm} (ii)}$]~ By Proposition \ref{stablisse}, there exists an ample $SL_{N+1}$-linearized line bundle on $\bar{H}$ making all the points of $H$ stable. Geometric invariant theory therefore provides a quasi-projective geometric quotient of $H$ by $SL_{N+1}$ (\cite{Seshadri} Theorem 4). It is also a quasi-projective geometric quotient of $H$ by $PGL_{N+1}$: the coarse moduli space $M$ is thus indeed quasi-projective. \end{proof} \section{A lower bound on the $\alpha$-degree} We fix an algebraically closed field $K$. In this part, we allow $1\leq c\leq N$ and $2\leq d_1\leq\dots\leq d_c$.
A complete intersection over $K$ is always understood to be of codimension $c$ in $\mathbb{P}^N_K$ and of degrees $d_1,\dots,d_c$. We keep Conventions \ref{notationsalpha}. The goal of this part is the proof of the following inequality, which was used in paragraph \ref{preuveedm} to verify the Hilbert-Mumford criterion. \begin{thm}\label{alphadeg} \begin{enumerate}[(i)] \item Let $k_1,\dots, k_c$ be real numbers such that: \begin{equation}\label{hypki} \min_{1\leq i\leq c}k_i\geq \frac{1}{N+1}\sum_{i=1}^c k_i. \end{equation} Then, if $\alpha_0\leq\dots\leq\alpha_N$ are integers, not all zero and with zero sum, and if $F_1,\dots, F_c$ form a global regular sequence defining a smooth complete intersection, we have: \begin{equation}\label{concki}\sum_{i=1}^c k_i\frac{\deg_{\alpha}(F_i)}{d_i}\geq 0. \end{equation} \item Suppose we are not in the case $c=1$, $d_1=2$. Then, if inequality (\ref{hypki}) is strict, inequality (\ref{concki}) is strict. \item Statements (i) and (ii) are optimal, in the sense that they would fail for any other values of the $k_i$. \end{enumerate} \end{thm} Let us make the meaning of (iii) precise. Saying that statement (i) is optimal means that if $k_1,\dots, k_c$ are real numbers not satisfying (\ref{hypki}), then there exist integers $\alpha_0\leq\dots\leq\alpha_N$, not all zero and with zero sum, and equations $F_1,\dots, F_c$ defining a smooth complete intersection for which inequality (\ref{concki}) fails. The assertion concerning statement (ii) is analogous. The inequality of Theorem \ref{alphadeg} bounds from below the $\alpha$-degrees of the equations of a smooth complete intersection. Its heuristic is the following: if the $\alpha$-degrees of the equations of a complete intersection are small, then many monomials do not occur in these equations.
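As a small illustration of this heuristic (a toy example, not used in the sequel), take $N=2$, $c=1$, $d_1=2$ and $\alpha=(-1,0,1)$. An equation $F$ with $\deg_{\alpha}(F)<0$ may only involve the monomials $X_0^2$ and $X_0X_1$, the only degree-$2$ monomials of negative $\alpha$-degree, so that $$F=aX_0^2+bX_0X_1=X_0(aX_0+bX_1),$$ and the conic $\{F=0\}$ is singular at the point $[0:0:1]$, in accordance with (\ref{concki}) applied with $k_1=1$.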
This fact should make it possible to show, via the Jacobian criterion, that the complete intersection is in fact singular. Paragraph \ref{uneequation} gathers preliminary results on the link between the $\alpha$-degree of an equation $F$ and the singularities of $\{F=0\}$; paragraph \ref{pleindequations} is devoted to the proof of Theorem \ref{alphadeg}. \subsection{Study of a single equation}\label{uneequation} Throughout this paragraph, we fix an integer $d\geq2$ and a nonzero equation $F\in H^0(\mathbb{P}^N_K,\mathcal{O}(d))$. Lemma \ref{singdeg} will provide the link between the geometry of the hypersurface $\{F=0\}$ and the $\alpha$-degree $\deg_{\alpha}(F)$. \begin{lemme}\label{singdeg} Let $u$, $v$ and $s$ be integers such that $u,v\geq0$, $s\geq0$ and $u+v=N-s$. Then, if $\deg_{\alpha}(F)<\alpha_u+(d-1)\alpha_v$, $$\dim(\Sing(\{F=0\})\cap\{X_0=\dots=X_{v-1}=0\})\geq s.$$ \end{lemme} \begin{proof}[$\mathbf{Proof}$] Since $d\geq 2$ and the $\alpha_i$ are nondecreasing, after possibly exchanging $u$ and $v$, we may assume that $u\leq v$. Write $F=X_0P_0+\dots+X_NP_N$, where $P_i$ does not depend on $X_0,\dots,X_{i-1}$. The hypothesis $\deg_{\alpha}(F)<\alpha_u+(d-1)\alpha_v$ shows that if $i\geq u$, then $P_i$ depends only on $X_0,\dots, X_{v-1}$. Set $Z=\{X_0=\dots=X_{v-1}=P_0=\dots=P_{u-1}=0\}$. If $i\leq v-1$, then $X_i$ vanishes on $Z$. If $i\geq v$, then $i\geq u$, so that $P_i$, which depends only on $X_0,\dots, X_{v-1}$, vanishes on $Z$. Consequently, $F=\sum_{i=0}^N X_i P_i$ vanishes on $Z$. Likewise, for $0\leq j\leq N$, we may write $\frac{\partial F}{\partial X_j}=P_j+\sum_{i=0}^NX_i\frac{\partial P_i}{\partial X_j}$. Distinguishing the cases $i\leq v-1$ and $i\geq v$ as above, we see that $X_i\frac{\partial P_i}{\partial X_j}$ vanishes on $Z$. Moreover, if $j<u$, then $P_j$ vanishes on $Z$, and if $j\geq u$, then $P_j$, which depends only on $X_0,\dots, X_{v-1}$, also vanishes on $Z$.
Summing, we see that $\frac{\partial F}{\partial X_j}$ vanishes on $Z$. We have shown that $F$ and all the $\frac{\partial F}{\partial X_j}$ vanish on $Z$, so that, by the Jacobian criterion, $Z\subset\Sing(\{F=0\})$. Since, by the projective dimension theorem, $\dim(Z)\geq N-u-v=s$, the lemma is proved. \end{proof} Lemma \ref{singdeg} motivates the following definition. \begin{defi} We write $s(F)$ for the smallest integer $s\in\{-1,\dots,N-1\}$ such that, whenever $u,v\geq 0$ are integers with $u+v=N-s-1$, we have $\deg_{\alpha}(F)\geq\alpha_u+(d-1)\alpha_v$. Suppose that $0\leq s \leq s(F)$. We write $v_s(F)$ for the largest integer $v\in\{0,\dots,N-s\}$ such that $\deg_{\alpha}(F)<\alpha_{N-v-s}+(d-1)\alpha_{v}$. \end{defi} \begin{lemme}\label{vFex} The integer $s(F)$ is well defined. Let $0\leq s \leq s(F)$. Then $v_s(F)$ is well defined and $v_s(F)\geq \frac{N+s(F)-2s}{2}$. Moreover, if $0<s\leq s(F)$, we have $v_{s-1}(F)\geq v_{s}(F)+1$. \end{lemme} \begin{proof}[$\mathbf{Proof}$] Since the smallest possible $\alpha$-degree of a monomial of degree $d$ is $d\alpha_0$, we have $\deg_\alpha(F)\geq d\alpha_0$, so that $s(F)$ is well defined. By definition of $s(F)$, and since $s(F)\geq 0$, there exist $u,v\geq 0$ such that $u+v=N-s(F)$ and $\deg_{\alpha}(F)<\alpha_u+(d-1)\alpha_v$. Since $d\geq 2$ and the $\alpha_i$ are nondecreasing, after possibly exchanging $u$ and $v$, we may assume $v\geq u$, that is, $v\geq \frac{N-s(F)}{2}$. Then, setting $v'=v+s(F)-s$, the monotonicity of the $\alpha_i$ gives $\deg_{\alpha}(F)<\alpha_u+(d-1)\alpha_{v'}$. This shows on the one hand that $v_s(F)$ exists, and on the other hand that $v_s(F)\geq v'\geq\frac{N-s(F)}{2}+s(F)-s=\frac{N+s(F)-2s}{2}$. Finally, suppose $0<s\leq s(F)$. Since $\deg_{\alpha}(F)<\alpha_{N-s-v_{s}(F)}+(d-1)\alpha_{v_{s}(F)}$, the monotonicity of the $\alpha_i$ yields $\deg_{\alpha}(F)<\alpha_{N-s-v_s(F)}+(d-1)\alpha_{v_{s}(F)+1}$, which shows that $v_{s-1}(F)\geq v_{s}(F)+1$.
\end{proof} The lemma below allows one to interpret the integer $s(F)$ as the expected dimension, given $\deg_{\alpha}(F)$, of $\Sing(\{F=0\})$. The integers $v_s(F)$ indicate, in turn, how the singularities of $\{F=0\}$ are expected to be located with respect to the flag $\varnothing\subset\{X_0=\dots=X_{N-1}=0\}\subset\dots\subset\{X_0=0\}\subset\mathbb{P}^N_K$. \begin{lemme}\label{lemsing} We have $\dim(\Sing(\{F=0\}))\geq s(F)$. If $0\leq s \leq s(F)$, then $\dim(\Sing(\{F=0\})\cap\{X_0=\dots=X_{v_s(F)-1}=0\})\geq s$. \end{lemme} \begin{proof}[$\mathbf{Proof}$] This is an immediate consequence of Lemma \ref{singdeg} and of the definitions. \end{proof} The next two lemmas bound from below the $\alpha$-degree of equations such that $s(F)=-1$ (resp. $s(F)\geq 0$). \begin{lemme}\label{alisse} Suppose that $s(F)=-1$. Then $\deg_{\alpha}(F)\geq 0$. Moreover, this inequality is strict if $d\geq 3$. \end{lemme} \begin{proof}[$\mathbf{Proof}$] We compute: \begin{alignat*}{3} N\deg_{\alpha}(F) &\geq(\alpha_0+(d-1)\alpha_N)+\dots+(\alpha_{N-1}+(d-1)\alpha_1) && \text{ since }s(F)=-1 \\ &= -\alpha_N-(d-1)\alpha_0 &&\text{ since }\sum_i\alpha_i=0\\ \frac{1}{d-1}\deg_{\alpha}(F) & \geq\frac{1}{d-1}\alpha_0+\alpha_N&&\text{ since }s(F)=-1. \end{alignat*} Summing these two inequalities, we obtain $\deg_{\alpha}(F)\geq-\frac{d(d-2)}{Nd-N+1}\alpha_0$. This concludes the proof, since $d\geq 2$ and $\alpha_0<0$ (the $\alpha_i$ are nondecreasing, not all zero, with zero sum). \end{proof} \begin{lemme}\label{nonalisse} Suppose that $s(F)\geq 0$.
Then: $$\frac{\deg_{\alpha}(F)}{d}\geq-\frac{\sum_{s=0}^{s(F)}\alpha_{v_s(F)}}{N-s(F)}.$$ \end{lemme} \begin{proof}[$\mathbf{Proof}$] By definition of $v_0(F)$, we have: \begin{alignat}{2} (N-v_0(F))\deg_{\alpha}(F) &\geq(\alpha_0+(d-1)\alpha_N) && \nonumber \\ &+\dots+(\alpha_{N-v_0(F)-1}+(d-1)\alpha_{v_0(F)+1}).&& \label{in1} \end{alignat} For $0< s\leq s(F)$, by definition of $v_s(F)$, and since $v_{s-1}(F)\geq v_{s}(F)+1$ by Lemma \ref{vFex}, we have: \begin{alignat}{2} (v_{s-1}(F)-v_s(F)-1)&\deg_{\alpha}(F) \geq (\alpha_{N-s-v_{s-1}(F)+1}+(d-1)\alpha_{v_{s-1}(F)-1}) && \nonumber \\ &+\dots+(\alpha_{N-s-v_s(F)-1}+(d-1)\alpha_{v_s(F)+1}).&& \label{in2} \end{alignat} Lemma \ref{vFex} shows that $2v_{s(F)}(F)+s(F)-N\geq 0$. Using the definition of $s(F)$, this allows us to write: \begin{alignat}{2} (2v_{s(F)}(F)+s(F)-N)&\deg_{\alpha}(F) \geq(\alpha_{N-s(F)-v_{s(F)}(F)}+(d-1)\alpha_{v_{s(F)}(F)-1}) && \nonumber \\ &+\dots+(\alpha_{v_{s(F)}(F)-1}+(d-1)\alpha_{N-s(F)-v_{s(F)}(F)}).&& \label{in3} \end{alignat} Summing inequality (\ref{in1}) twice, each of the inequalities (\ref{in2}) twice, and inequality (\ref{in3}) once, we obtain: \begin{alignat}{3} (N-s(F))&\deg_{\alpha}(F) \geq2(\alpha_0+\dots+\alpha_{N-s(F)-v_{s(F)}(F)-1})&& \nonumber \\ &+d(\alpha_{N-s(F)-v_{s(F)}(F)}+\dots+\alpha_{v_{s(F)}(F)-1}) &&\label{in4} \\ &+ (2d-2)(\alpha_{v_{s(F)}(F)}+\dots+\alpha_{N})-(2d-2)\sum_{s=0}^{s(F)}\alpha_{v_s(F)}.&&\nonumber \end{alignat} Note then that: $$0\geq[\alpha_0+\dots+\alpha_{N-s(F)-v_{s(F)}(F)-1}]-[\alpha_{v_{s(F)}(F)}+\dots+\alpha_{N}-\sum_{s=0}^{s(F)}\alpha_{v_s(F)}].$$ Indeed, each of the two brackets is a sum of $N-s(F)-v_{s(F)}(F)$ of the $\alpha_i$. The indices occurring in the first bracket are all smaller than the indices occurring in the second, so that we conclude by monotonicity of the $\alpha_i$.
Multiplying this inequality by $(d-2)\geq 0$ and adding it to (\ref{in4}), we obtain: $$(N-s(F))\deg_{\alpha}(F) \geq d(\alpha_{0}+\dots+\alpha_{N})-d\sum_{s=0}^{s(F)}\alpha_{v_s(F)}.$$ Since the $\alpha_i$ have zero sum, this proves the desired inequality. \end{proof} Finally, let us prove a positivity property of the $\alpha_{v_s(F)}$: \begin{lemme}\label{positivite} Suppose that $s(F)\geq 0$. Then: $$\alpha_{v_{s(F)}(F)}+\frac{\sum_{s=0}^{s(F)}\alpha_{v_s(F)}}{N-s(F)}>0.$$ \end{lemme} \begin{proof}[$\mathbf{Proof}$] Summing the inequalities (\ref{in1}) and (\ref{in2}) of the proof of Lemma \ref{nonalisse}, we obtain: \begin{alignat}{2} (N-s(F)-v_{s(F)}&(F))\deg_{\alpha}(F) \geq(\alpha_0+\dots+\alpha_{N-s(F)-v_{s(F)}(F)-1})&& \nonumber \\ &+ (d-1)(\alpha_{v_{s(F)}(F)}+\dots+\alpha_{N})-(d-1)\sum_{s=0}^{s(F)}\alpha_{v_s(F)}.&& \label{in5} \end{alignat} By definition of $v_{s(F)}(F)$, $\deg_{\alpha}(F)<\alpha_{N-s(F)-v_{s(F)}(F)}+(d-1)\alpha_{v_{s(F)}(F)}$. Since, by Lemma \ref{vFex}, $N-s(F)-v_{s(F)}(F)\leq v_{s(F)}(F)$, the monotonicity of the $\alpha_i$ shows $\deg_{\alpha}(F)<d \alpha_{v_{s(F)}(F)}$. Let us combine this fact with inequality (\ref{in5}), then use the fact that $d\geq 2$ and that the $\alpha_i$ are nondecreasing.
\begin{alignat*}{4} (N-&s(F)&&-v_{s(F)}(F))d\alpha_{v_{s(F)}(F)}+(d-1)\sum_{s=0}^{s(F)}\alpha_{v_s(F)} &&&\nonumber \\ & >&&(\alpha_0+\dots+\alpha_{N-s(F)-v_{s(F)}(F)-1})+(d-1)(\alpha_{v_{s(F)}(F)}+\dots+\alpha_{N}) &&&\nonumber \\ &\geq&&(d-1)(\alpha_0+\dots+\alpha_{N-s(F)-v_{s(F)}(F)-1}+\alpha_{v_{s(F)}(F)}+\dots+\alpha_{N}) &&&\nonumber \\ & &&-(N-s(F)-v_{s(F)}(F))(d-2)\alpha_{v_{s(F)}(F)}.&&&\nonumber \end{alignat*} Let us use the fact that the $\alpha_i$ have zero sum, and then their monotonicity once more: \begin{alignat*}{4} (N-&s(F)-v_{s(F)}(F))(2d-2)\alpha_{v_{s(F)}(F)}+(d-1)\sum_{s=0}^{s(F)}\alpha_{v_s(F)} &&\nonumber \\ &>-(d-1)(\alpha_{N-s(F)-v_{s(F)}(F)}+\dots+\alpha_{v_{s(F)}(F)-1}) &&\nonumber\\ &\geq -(2v_{s(F)}(F)+s(F)-N)(d-1)\alpha_{v_{s(F)}(F)}. &&\nonumber \end{alignat*} We obtain the desired inequality by dividing by $(d-1)(N-s(F))>0$. \end{proof} \subsection{Equations of a smooth complete intersection}\label{pleindequations} We now use the results of the previous paragraph to prove Theorem \ref{alphadeg}. \begin{proof}[$\mathbf{Proof \text{ }of \text{ }Theorem\text{ }\ref{alphadeg}\text{ }(i)}$]~ First, summing the inequalities $k_i\geq\frac{k_1+\dots+k_c}{N+1}$ for $i\in \{1,\dots,c\}$, we get $(N+1-c)(k_1+\dots+k_c)\geq0$, hence $k_1+\dots+k_c\geq0$, and finally $k_i\geq0$ for every $i\in \{1,\dots, c\}$. We now distinguish two cases. If $s(F_i)=-1$ for every $i\in \{1,\dots,c\}$, Lemma \ref{alisse} shows that $\deg_{\alpha}(F_i)\geq 0$. Hence $\sum_{i=1}^c k_i\frac{\deg_{\alpha}(F_i)}{d_i}\geq0$. Suppose on the contrary that there exists $l$ such that $s(F_l)\geq 0$. We choose such an $l$ for which $\frac{\sum_{s=0}^{s(F_l)}\alpha_{v_s(F_l)}}{N-s(F_l)}$ is maximal.
We will construct pairwise distinct integers $j_0,\dots, j_{s(F_l)}\in\{1,\dots,c\}$ such that, for $0\leq s \leq s(F_l)$, \begin{equation} \frac{\deg_{\alpha}(F_{j_s})}{d_{j_s}}\geq\alpha_{v_s(F_l)}.\label{minor1} \end{equation} Suppose $j_0,\dots,j_{s-1}$ have been constructed, and let us construct $j_s$. By Lemma \ref{lemsing}, $\dim(\Sing(\{F_l=0\})\cap\{X_0=\dots=X_{v_s(F_l)-1}=0\})\geq s.$ The projective dimension theorem implies that $\dim(\Sing(\{F_l=0\})\cap\{X_0=\dots=X_{v_s(F_l)-1}=F_{j_0}=\dots=F_{j_{s-1}}=0\})\geq 0$. This closed set is therefore nonempty; choose a closed point $P$ in it. Since $\{F_1=\dots=F_c=0\}$ is a smooth complete intersection, it cannot contain the singular point $P$ of $\{F_l=0\}$: there exists $j_s$ such that $F_{j_s}$ does not vanish at $P$. Since $F_{j_0},\dots,F_{j_{s-1}}$ vanish at $P$, we have $j_s\notin\{j_0,\dots,j_{s-1}\}$. Finally, since $P\in \{X_0=\dots=X_{v_s(F_l)-1}=0\}$, we have $\{X_0=\dots=X_{v_s(F_l)-1}=0\}\not\subset \{F_{j_s}=0\}$. Considering the monomials occurring in $F_{j_s}$, we see that this implies $\deg_{\alpha}(F_{j_s})\geq d_{j_s}\alpha_{v_s(F_l)}$, as desired. Now let $i\in\{1,\dots, c\}$ be arbitrary. Let us show that: \begin{equation} \frac{\deg_{\alpha}(F_i)}{d_i}\geq-\frac{\sum_{s=0}^{s(F_l)}\alpha_{v_s(F_l)}}{N-s(F_l)}.\label{minor2} \end{equation} If $s(F_i)\geq 0$, this follows from Lemma \ref{nonalisse} and the choice of $l$. If $s(F_i)=-1$, we argue as follows. By Lemma \ref{positivite}, $\alpha_{v_{s(F_l)}(F_l)}+\frac{\sum_{s=0}^{s(F_l)}\alpha_{v_s(F_l)}}{N-s(F_l)}>0$. Since, by Lemma \ref{vFex}, $\alpha_{v_{s(F_l)}(F_l)}$ is the smallest of the $(\alpha_{v_{s}(F_l)})_{0\leq s\leq s(F_l)}$, we deduce that $\sum_{s=0}^{s(F_l)}\alpha_{v_s(F_l)}>0$. By Lemma \ref{alisse}, we therefore have $\frac{\deg_{\alpha}(F_i)}{d_i}\geq 0\geq-\frac{\sum_{s=0}^{s(F_l)}\alpha_{v_s(F_l)}}{N-s(F_l)}$. We can now conclude.
Set $I=\{j_0,\dots,j_{s(F_l)}\}$ and use the lower bounds (\ref{minor1}) for $i\in I$ and (\ref{minor2}) for $i\notin I$. We obtain: \begin{equation} \sum_{i=1}^c k_i\frac{\deg_{\alpha}(F_i)}{d_i}\geq \sum_{s=0}^{s(F_l)} k_{j_s}\alpha_{v_s(F_l)} -\Big(\sum_{i\notin I}k_i\Big)\frac{\sum_{s=0}^{s(F_l)}\alpha_{v_s(F_l)}}{N-s(F_l)}.\label{minorons} \end{equation} Let us show that the right-hand side of (\ref{minorons}) coincides with: \begin{equation} \sum_{s=0}^{s(F_l)}\Big(\alpha_{v_s(F_l)}+\frac{\alpha_{v_0(F_l)}+\dots+\alpha_{v_{s(F_l)}(F_l)}}{N-s(F_l)}\Big)\Big(k_{j_s}-\frac{k_1+\dots+k_c}{N+1}\Big). \label{miracle} \end{equation} To this end, we expand (\ref{miracle}) and identify the coefficients of the $k_i$ with those appearing in the right-hand side of (\ref{minorons}). If $i\notin I$, this coefficient equals: \begin{alignat*}{3} -\frac{1}{N+1}\sum_{s=0}^{s(F_l)}\Big(\alpha_{v_s(F_l)}&+\frac{\alpha_{v_0(F_l)}+\dots+\alpha_{v_{s(F_l)}(F_l)}}{N-s(F_l)}\Big)\\ &=-\frac{1}{N+1}\Big(\frac{s(F_l)+1}{N-s(F_l)}+1\Big)\sum_{s=0}^{s(F_l)}\alpha_{v_s(F_l)}\\ &=-\frac{1}{N-s(F_l)}\sum_{s=0}^{s(F_l)}\alpha_{v_s(F_l)}. \end{alignat*} If $i=j_s\in I$, an additional term appears, so that this coefficient indeed equals: $$-\frac{1}{N-s(F_l)}\sum_{s=0}^{s(F_l)}\alpha_{v_s(F_l)}+\alpha_{v_s(F_l)}+\frac{\alpha_{v_0(F_l)}+\dots+\alpha_{v_{s(F_l)}(F_l)}}{N-s(F_l)}=\alpha_{v_s(F_l)}.$$ It remains to show that (\ref{miracle}) is nonnegative. Since, by Lemma \ref{vFex}, $\alpha_{v_{s(F_l)}(F_l)}$ is the smallest of the $(\alpha_{v_{s}(F_l)})_{0\leq s\leq s(F_l)}$, Lemma \ref{positivite} shows that the first factor of each term of the sum (\ref{miracle}) is positive. The second factor of each term of this sum is nonnegative by the hypothesis on the $k_i$.
\end{proof} \begin{proof}[$\mathbf{Proof \text{ }of \text{ }Theorem\text{ }\ref{alphadeg}\text{ }(ii)}$]~ First, summing the inequalities $k_i>\frac{k_1+\dots+k_c}{N+1}$ over $i\in \{1,\dots, c\}$ shows that $(N+1-c)(k_1+\dots+k_c)>0$, hence $k_1+\dots+k_c>0$, and finally $k_i>0$ for $i\in \{1,\dots, c\}$. We then carry out the same proof as above. The second case, where there exists $i$ such that $s(F_i)\geq 0$, is identical: we obtain a strict inequality thanks to the stronger hypotheses $k_i>\frac{k_1+\dots+k_c}{N+1}$ and to the strict inequality in Lemma \ref{positivite}. In the first case, where $s(F_i)=-1$ for every $i$, we argue in the same way to show that $\sum_{i=1}^c k_i\frac{\deg_{\alpha}(F_i)}{d_i}\geq 0$. Since $k_i>0$, and by the strict inequality case of Lemma \ref{alisse}, we obtain a strict inequality except possibly if $d_1=\dots=d_c=2$. The study of the equality case shows that we may then assume $\deg_{\alpha}(F_1)=\dots=\deg_{\alpha}(F_c)=0$ and $\alpha_i+\alpha_{N-i}=0$ for $0\leq i\leq N$. Let us treat this case directly; recall that by hypothesis we then have $c\geq 2$. Let $0\leq r\leq N$ be the smallest integer such that $\alpha_r>0$. Since $\alpha_i+\alpha_{N-i}=0$, $r'=N-r$ is the largest integer such that $\alpha_{r'}<0$. Since $\deg_{\alpha}(F_i)=0$, one sees that $\{X_0=\dots=X_{r-1}=0\}\subset\{F_i=0\}$, so that $\{X_0=\dots=X_{r-1}=0\}$ is contained in the complete intersection $\{F_1=\dots=F_c=0\}$. Let us show that there exists a point of $\{X_0=\dots=X_{r-1}=0\}$ at which $\{F_1=0\}$ and $\{F_2=0\}$ have the same tangent space. This will contradict the smoothness of $\{F_1=\dots=F_c=0\}$ at this point. Since $\deg_{\alpha}(F_1)=0$, $\frac{\partial F_1}{\partial X_i}([0:\dots:0:x_r:\dots:x_N])$ vanishes if $i>r'$; it is a linear form in $x_r,\dots,x_N$ if $i\leq r'$.
Let $A_1$ be the $(r'+1)\times(r'+1)$ matrix whose rows are the linear forms $(\frac{\partial F_1}{\partial X_i})_{0\leq i\leq r'}$. Similarly, let $A_2$ be the $(r'+1)\times(r'+1)$ matrix whose rows are the linear forms $(\frac{\partial F_2}{\partial X_i})_{0\leq i\leq r'}$. Consider $\det(\lambda_1 A_1+\lambda_2 A_2)$: it is a homogeneous polynomial in $\lambda_1$ and $\lambda_2$. Since $K$ is algebraically closed, we can find $(\lambda_1,\lambda_2)\neq(0,0)$ such that $\det(\lambda_1 A_1+\lambda_2 A_2)=0$. There thus exists $(x_r,\dots, x_N)\neq(0,\dots,0)$ such that $(\lambda_1 A_1+\lambda_2 A_2)(x_r,\dots, x_N)=0$. Then $\frac{\partial (\lambda_1F_1+\lambda_2 F_2)}{\partial X_i}([0:\dots:0:x_r:\dots:x_N])=0$ for every $i$: this is what we wanted. \end{proof} \begin{proof}[$\mathbf{Proof \text{ }of \text{ }Theorem\text{ }\ref{alphadeg}\text{ }(iii)}$]~ Let us show the optimality in case (i): suppose given real numbers $k_1,\dots,k_c$ such that the conclusion of (i) is satisfied. Fix $1\leq j\leq c$. We choose $\alpha_0=\dots=\alpha_{N-1}=-1$ and $\alpha_N=N$. If $i\neq j$, we choose for $F_i$ a generic equation not involving the variable $X_N$: in particular $\deg_{\alpha}(F_i)=-d_i$. By Bertini's theorem, the intersection of the $\{F_i=0\}_{i\neq j}$ has $[0:\dots:0:1]$ as its unique singular point. We choose a generic equation $F_j$, which avoids this singular point. The monomial $X_N^{d_j}$ therefore appears in $F_j$, so that $\deg_{\alpha}(F_j)=N d_j$. Moreover, by Bertini's theorem, $\{F_1=\dots=F_c=0\}$ is smooth. We can therefore write $$\sum_{i=1}^ck_i\frac{\deg_{\alpha}(F_i)}{d_i}=\sum_{i\neq j}-k_i+Nk_j\geq0.$$ This can be rewritten as $k_j\geq\frac{k_1+\dots+k_c}{N+1}$, as desired. In case (ii), the same proof works. It only remains to check that it was indeed necessary to exclude the case $c=1$ and $d_1=2$.
To this end, take $\alpha_0=-1$, $\alpha_i=0$ for $1\leq i\leq N-1$ and $\alpha_N=1$. Then choose $F_1=X_0X_N+Q(X_1,\dots,X_{N-1})$ where $Q$ is an ordinary quadratic form in $X_1,\dots,X_{N-1}$. Then $\{F_1=0\}$ is smooth, but $\deg_{\alpha}(F_1)=0$. \end{proof} \section{Hilbert stability}\label{hilbstab} In this section, we keep Conventions \ref{notationsgen} and \ref{notationsalpha}, except that we allow $1\leq c\leq N-1$ and $2\leq d_1\leq \dots\leq d_c$. In \cite{Mumfordstab}, Mumford proposes to construct quasi-projective moduli spaces by applying geometric invariant theory to the Hilbert scheme. Here we specialize this strategy to the case of complete intersections, and relate it to the results of the previous paragraphs. In particular, we obtain (Corollary \ref{conditionhilb}) a necessary condition for Hilbert stability of complete intersections. Let $P$ denote the Hilbert polynomial of the complete intersections, and $\Hilb^P_{\mathbb{P}^N}$ the corresponding Hilbert scheme of $\mathbb{P}^N$: it is a projective scheme over $\Spec(\mathbb{Z})$. If $Z$ is a subscheme of $\mathbb{P}^N_K$ with Hilbert polynomial $P$ (for instance a complete intersection), we write $[Z]$ for the corresponding geometric point of $\Hilb^P_{\mathbb{P}^N}$. For $l\gg0$, one can embed $\Hilb^P_{\mathbb{P}^N}$ into the Grassmannian of quotients of dimension $P(l)$ of $H^0(\mathbb{P}^N,\mathcal{O}(l))$, and the Pl\"ucker bundle induces an ample line bundle on $\Hilb^P_{\mathbb{P}^N}$ denoted by $\mathcal{P}_l$. The group scheme $SL_{N+1}$ acts on $\Hilb^P_{\mathbb{P}^N}$ by change of coordinates, and the line bundles $\mathcal{P}_l$ are naturally linearized. We say that $Z$ is Hilbert-stable if $[Z]\in(\Hilb^P_{\mathbb{P}^N})^s(\mathcal{P}_l)$ for $l\gg0$.
By the Hilbert-Mumford criterion, this is equivalent to requiring that, for $l\gg0$ and every one-parameter subgroup $\rho : \mathbb{G}_{m,K}\to SL_{N+1,K}$, one has $\mu^{\mathcal{P}_l}([Z],\rho)>0$. In order to use this criterion, let us make these functions $\mu$ explicit in our situation. \begin{prop}\label{muhilbeg} Let $l\gg0$, let $Z$ be a complete intersection over the algebraically closed field $K$, and let $\rho : \mathbb{G}_{m,K}\to SL_{N+1,K}$ be a one-parameter subgroup chosen as in Lemma \ref{bon1ps}. Then: \begin{equation*} \mu^{\mathcal{P}_l}([Z],\rho)=\min_{\mathfrak{B}}\Big(\sum_{F\in\mathfrak{B}}\deg_{\alpha}(F)\Big), \end{equation*} where the $\min$ is taken over the bases $\mathfrak{B}$ of $H^0(\mathbb{P}^N_K,\mathcal{I}_Z(l))$. \end{prop} \begin{proof}[$\mathbf{Proof}$]~ This classical computation can be found for instance in \cite{HarrisMorrison} Prop. 4.23. \end{proof} \subsection{Upper bound for the functions $\mu$} We now bound from above the quantity $\mu^{\mathcal{P}_l}([Z],\rho)$ computed in Proposition \ref{muhilbeg}. \begin{lemme}\label{muhilbineg} Let $Z=\{F_1=\dots=F_c=0\}$ be a complete intersection over the algebraically closed field $K$, let $\rho : \mathbb{G}_{m,K}\to SL_{N+1,K}$ be a one-parameter subgroup chosen as in Lemma \ref{bon1ps}, and let $l\gg0$. Then: \begin{equation}\label{eqmuhilb} \mu^{\mathcal{P}_l}([Z],\rho)\leq\sum_{i=1}^c\deg_\alpha(F_i) \Big(\sum_{i\notin I\subset\{1,\dots,c\}}(-1)^{c-1-|I|}\tbinom{N+l-\sum_{j\notin I}d_j}{N}\Big). \end{equation} \end{lemme} \begin{proof}[$\mathbf{Proof}$]~ If $I\subset\{1,\dots,c\}$, set $V^I_l=H^0(\mathbb{P}^N_K,\mathcal{O}(l-\sum_{i\notin I} d_i))$. For $0\leq r\leq c$, set $K^r_l=\bigoplus\limits_{\substack{I\subset\{1,\dots,c\}\\|I|=r}}V^I_l$. Consider the Koszul resolution of $\mathcal{O}_Z$ on $\mathbb{P}^N_K$.
Tensoring this resolution by $\mathcal{O}(l)$ for $l\gg0$, the complex obtained by taking global sections remains exact by Serre vanishing. We thus obtain a long exact sequence of the form: \begin{equation*} 0\to K^0_l\to\dots\stackrel{d^{r-1}}{\rightarrow}K^r_l\stackrel{d^r}{\rightarrow}\dots\to K^c_l\to H^0(Z,\mathcal{O}(l))\to 0, \end{equation*} where $d^r:V^I_l\to V^J_l$ is, up to sign, multiplication by $F_i$ if $J=I\cup\{i\}$, and vanishes otherwise. We write $N^r_l=\Ker(d^r)=\Ima(d^{r-1})\subset K^r_l$. We introduce the following notation. If $F\in V^I_l$, set: $$\deg'_\alpha(F)=\deg_\alpha(F)-\sum_{i\notin I}\deg_\alpha(F_i).$$ If $\Phi=(F_I)\in K^r_l$, set $\deg'_\alpha(\Phi)=\max_I\deg'_\alpha(F_I)$. Moreover, we slightly modify Conventions \ref{notationsalpha}: throughout this proof, the notation $\Phi^\alpha$ and the notion of $\alpha$-homogeneous element refer to $\deg_\alpha'$ and not to $\deg_\alpha$. Note that if $0\leq r<c$ and $\Phi\in K^r_l$, by definition of $\deg_{\alpha}'$ and in view of the expression for $d^r$, we have $\deg'_\alpha(d^r(\Phi))\leq\deg'_\alpha(\Phi)$. We will prove by induction on $0\leq r\leq c$ the following statement: there exists a basis $\mathfrak{B}^r_l$ of $N^r_l$ such that: \begin{equation}\label{hyprec} \sum_{\Phi\in\mathfrak{B}^r_l}\deg_{\alpha}'(\Phi)\leq \sum_{i=1}^c\deg_\alpha(F_i) \Big(\sum\limits_{\substack{i\notin I\subset\{1,\dots,c\}\\|I|\leq r-1}}(-1)^{r-1-|I|}\dim(V_l^I)\Big). \end{equation} For $r=0$, $N^0_l=\{0\}$, so we may take $\mathfrak{B}^0=\varnothing$. Suppose the statement holds for $r$ and let us prove it for $r+1$. To this end, let $\mathfrak{B}^r_l$ be a basis of $N^r_l$ such that $\sum_{\Phi\in\mathfrak{B}^r_l}\deg_{\alpha}'(\Phi)$ is minimal. One easily sees that $\mathfrak{B}^{r,\alpha}_l=\{\Phi^\alpha,\Phi\in\mathfrak{B}^r_l\}$ is a linearly independent family of $\alpha$-homogeneous elements of $K^r_l$.
Complete this family into a basis $\mathfrak{C}^{r}_l$ of $K_l^r$ consisting of $\alpha$-homogeneous elements. Note that $\sum_{\Phi\in\mathfrak{C}}\deg'_\alpha(\Phi)$ does not depend on the basis $\mathfrak{C}$ of $K_l^r$ consisting of $\alpha$-homogeneous elements. Using $\alpha_0+\dots+\alpha_N=0$, this quantity is easy to compute for the basis $\mathfrak{C}$ consisting of monomials. We thus get: \begin{equation}\label{basemonome} \sum_{\Phi\in\mathfrak{C}^{r}_l}\deg'_\alpha(\Phi)=\sum_{\Phi\in\mathfrak{C}}\deg'_\alpha(\Phi)=\sum\limits_{\substack{I\subset\{1,\dots,c\}\\|I|=r}} \dim(V_l^I)\Big(-\sum_{i\notin I}\deg_\alpha(F_i)\Big). \end{equation} Since $\{\Phi^\alpha,\Phi\in\mathfrak{B}^r_l\cup(\mathfrak{C}^{r}_l\setminus\mathfrak{B}^{r,\alpha}_l)\}=\mathfrak{C}^{r}_l$ is a basis of $K_l^r$, $\mathfrak{B}^r_l\cup(\mathfrak{C}^{r}_l\setminus\mathfrak{B}^{r,\alpha}_l)$ is also a basis of $K_l^r$. In particular, $\mathfrak{C}^{r}_l\setminus\mathfrak{B}^{r,\alpha}_l$ is a basis of a complement of $N^r_l$ in $K^r_l$, so that $\mathfrak{B}^{r+1}_l=d^r(\mathfrak{C}^{r}_l\setminus\mathfrak{B}^{r,\alpha}_l)$ is a basis of $N^{r+1}_l$. Let us show that this basis is suitable. To this end, we compute: \begin{alignat*}{3} \sum_{\Phi\in\mathfrak{B}^{r+1}_l}\deg'_\alpha(\Phi)&\leq\sum_{\Phi\in(\mathfrak{C}^{r}_l\setminus\mathfrak{B}^{r,\alpha}_l)}\deg'_\alpha(\Phi) =\sum_{\Phi\in\mathfrak{C}^{r}_l}\deg'_\alpha(\Phi)-\sum_{\Phi\in\mathfrak{B}^{r}_l}\deg'_\alpha(\Phi)\\ &\leq\sum_{i=1}^c\deg_\alpha(F_i) \Big(\sum\limits_{\substack{i\notin I\subset\{1,\dots,c\}\\|I|\leq r}}(-1)^{r-|I|}\dim(V_l^I)\Big), \end{alignat*} where we used (\ref{basemonome}) and the induction hypothesis (\ref{hyprec}), respectively, to evaluate the two terms. This concludes the induction. Now take $r=c$ in (\ref{hyprec}).
We obtain a basis $\mathfrak{B}_l^c$ of $N^c_l=\Ker[H^0(\mathbb{P}^N_K,\mathcal{O}(l))\to H^0(Z,\mathcal{O}(l))]=H^0(\mathbb{P}^N_K,\mathcal{I}_Z(l))$ since $l\gg0$. Since $\deg_\alpha$ and $\deg'_\alpha$ coincide on elements of $H^0(\mathbb{P}^N_K,\mathcal{O}(l))$, and since $\dim(V^I_l)=\tbinom{N+l-\sum_{j\notin I}d_j}{N}$, it follows that: $$\sum_{F\in\mathfrak{B}_l^c}\deg_{\alpha}(F)\leq\sum_{i=1}^c\deg_\alpha(F_i) \Big(\sum_{i\notin I\subset\{1,\dots,c\}}(-1)^{c-1-|I|}\tbinom{N+l-\sum_{j\notin I}d_j}{N}\Big).$$ By Proposition \ref{muhilbeg}, this concludes the proof. \end{proof} We deduce the following proposition: \begin{prop}\label{muhilbasympt} Let $Z=\{F_1=\dots=F_c=0\}$ be a complete intersection over the algebraically closed field $K$ and let $\rho : \mathbb{G}_{m,K}\to SL_{N+1,K}$ be a one-parameter subgroup chosen as in Lemma \ref{bon1ps}. Then: \begin{equation}\label{eqmuhilb2} \limsup_{l\to+\infty} \frac{\mu^{\mathcal{P}_l}([Z],\rho)}{l^{N-c+1}}\leq\frac{d_1\dots d_c}{(N-c+1)!}\sum_{i=1}^c\frac{\deg_{\alpha}(F_i)}{d_i}. \end{equation} \end{prop} \begin{proof}[$\mathbf{Proof}$] The polynomial $\sum_{i\notin I\subset\{1,\dots,c\}}(-1)^{c-1-|I|}\tbinom{N+X-\sum_{j\notin I}d_j}{N}$ has leading term $\frac{d_1\dots\hat{d_i}\dots d_c}{(N-c+1)!}X^{N-c+1}$, as shown by induction on $c$. Hence the right-hand side of inequality (\ref{eqmuhilb}) is a polynomial in $l$ of degree $\leq N-c+1$ whose coefficient of $l^{N-c+1}$ is $\frac{d_1\dots d_c}{(N-c+1)!}\sum_{i=1}^c\frac{\deg_{\alpha}(F_i)}{d_i}$. We conclude by dividing inequality (\ref{eqmuhilb}) by $l^{N-c+1}$ and letting $l$ tend to $+\infty$. \end{proof} \subsection{A necessary condition for Hilbert stability}\label{hilbstabdiscussion} Proposition \ref{muhilbasympt} has as an immediate corollary a necessary condition for Hilbert stability, which is the main result of this section.
\begin{cor}\label{conditionhilb} Let $Z=\{F_1=\dots=F_c=0\}$ be a Hilbert-stable complete intersection over the algebraically closed field $K$. Then, if $\alpha_0\leq\dots\leq\alpha_N$ are integers with sum zero, and for any choice of coordinate system, \begin{equation}\label{cnhilb} \sum_{i=1}^c\frac{\deg_{\alpha}(F_i)}{d_i}\geq 0. \end{equation} \end{cor} Theorem 1.1 of \cite{YujiSano} shows that if $Z$ is Chow-stable, inequality (\ref{cnhilb}) is strict. Recall that, by a theorem of Fogarty (\cite{Fogarty}, see also \cite{GIT} App. 4C), Chow stability of $Z$ implies Hilbert stability of $Z$ (in general, the converse implication does not hold). Corollary \ref{conditionhilb} and Theorem 1.1 of \cite{YujiSano} are therefore very close, but neither can be deduced from the other. Finally, the article \cite{YujiSano} claims (this is the proof of Corollary 1.2) that, if $c=2$ and if inequality (\ref{cnhilb}) holds and is strict, then $Z$ is Hilbert-stable. The argument given there is unfortunately flawed. \vspace{1em} When $Z$ is smooth, inequality (\ref{cnhilb}) holds by Theorem \ref{alphadeg} for $k_1=\dots=k_c=1$. In other words, Theorem \ref{alphadeg} implies a weak form of the Hilbert stability of smooth complete intersections. The Hilbert stability of smooth complete intersections is known in very few cases. The case $c=1$ is trivial: $\Hilb^P_{\mathbb{P}^N}$ is a projective space, all the ample bundles $\mathcal{P}_l$ introduced above are therefore necessarily proportional to the bundle $\mathcal{O}(1)$, and smooth hypersurfaces are Hilbert-stable by \cite{GIT} Prop. 4.2. Let us mention a nontrivial case where Hilbert stability is known.
When $N=3$, $c=2$, $d_1=2$ and $d_2=3$, Casalaina-Martin, Jensen and Laza \cite{Lazaetcie} show that smooth complete intersections are Chow-stable, which implies their Hilbert stability by the theorem of Fogarty mentioned above. To prove the Hilbert stability of a smooth complete intersection $Z$, the additional difficulty compared to Theorem \ref{alphadeg} is an estimate of the difference between the two sides of inequality (\ref{eqmuhilb}), which depends on the cancellations between terms of maximal $\alpha$-degree in the equations of $Z$. As the referee points out, it would already be interesting to prove the Hilbert stability of a general complete intersection, in the spirit of \cite{Alpergen}. \addcontentsline{toc}{section}{References} \bibliographystyle{plain-fr}
\section{Summary} \textit{PyExperimenter}\footnote{\textit{PyExperimenter} repository: \url{https://github.com/tornede/py_experimenter}} is a tool to facilitate the setup, documentation, execution, and subsequent evaluation of results from an empirical study of algorithms and in particular is designed to reduce the involved manual effort significantly. It is intended to be used by researchers in the field of artificial intelligence, but is not limited to those. The empirical analysis of algorithms is often accompanied by the execution of algorithms for different inputs and variants of the algorithms (specified via parameters) and the measurement of non-functional properties. Since the individual evaluations are usually independent, the evaluation can be performed in a distributed manner on an HPC system. However, setting up, documenting, and evaluating the results of such a study is often file-based. Usually, this requires extensive manual work to create configuration files for the inputs or to read and aggregate measured results from a report file. In addition, monitoring and restarting individual executions is tedious and time-consuming. These challenges are addressed by \textit{PyExperimenter} by means of a single well defined configuration file and a central database for managing massively parallel evaluations, as well as collecting and aggregating their results. Thereby, \textit{PyExperimenter} alleviates the aforementioned overhead and allows experiment executions to be defined and monitored with ease. \begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{usage.png} \caption{General schema of \textit{PyExperimenter}.} \label{fig:usage} \end{figure} A general schema of \textit{PyExperimenter} can be found in Figure~\ref{fig:usage}. \textit{PyExperimenter} is designed based on the assumption that an experiment is uniquely defined by certain inputs, i.e., parameters, and a function computing the results of the experiment based on these parameters. 
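This abstraction — a grid of parameter values plus a user function mapping each parameter combination to results — can be illustrated independently of the library itself. In the sketch below, the parameter names and the placeholder computation are purely hypothetical, and the grid construction mirrors how the set of experiments is enumerated from the configured parameter domains:

```python
from itertools import product

# Hypothetical parameter domains, as one might list them in the
# configuration file (names and values are illustrative only).
parameter_domains = {
    "dataset": ["iris", "wine"],
    "seed": [1, 2, 3],
    "learning_rate": [0.01, 0.1],
}

def build_experiment_grid(domains):
    """Cross product of all parameter domains: one dict per experiment."""
    keys = list(domains)
    return [dict(zip(keys, values))
            for values in product(*(domains[k] for k in keys))]

def run_experiment(params):
    """User-provided function: maps one parameter row to a result row."""
    score = params["learning_rate"] * params["seed"]  # placeholder computation
    return {"score": score}

grid = build_experiment_grid(parameter_domains)  # 2 * 3 * 2 = 12 experiments
results = [{**p, **run_experiment(p)} for p in grid]
```

Each dict in `grid` corresponds to one row of the experiment table, and each entry of `results` to a completed row including the computed outcome.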
The set of experiments to be executed can be defined through a configuration file listing the domains of each parameter, or manually through code. These parameters define the experiment grid, based on which \textit{PyExperimenter} sets up a table in the database featuring all experiments with their input parameter values and additional information such as the execution status. Once this table has been created, a \textit{PyExperimenter} instance can be run on any machine, including a distributed system. Each instance automatically pulls open experiments from the database, executes the function provided by the user with the corresponding parameters defining the experiment, and writes back the results computed by the function. Errors arising during the execution are logged in the database. In case of failed experiments, or if desired otherwise, a subset of the experiments can be reset and restarted easily. After all experiments are done, results can be jointly exported as a Pandas DataFrame \citep{pandas} for further processing, such as generating a LaTeX table averaging results of randomized computations over different seeds. \section{Statement of Need} The recent advances in artificial intelligence have uncovered a need for experiment tracking functionality, leading to the emergence of several tools addressing this issue. Prominent representatives include Weights and Biases \citep{wandb}, MLFlow \citep{mlflow}, TensorBoard \citep{tensorboard}, neptune.ai \citep{neptune}, Comet.ML \citep{comet}, Aim \citep{aim}, Data Version Control \citep{dvc}, Sacred \citep{sacred}, and Guild.AI \citep{guildai}. These tools largely assume that users define the configuration of an experiment together with the experiment run itself. When evaluating different hyperparameter configurations, this process is suboptimal, since it requires communicating the hyperparameters through scripts.
This task can become cumbersome to manage as the number of configuration options and desired combinations grows and becomes more complex. Weights and Biases \citep{wandb}, Polyaxon \citep{polyaxon}, and Comet.ML \citep{comet} allow so-called sweeps, i.e., hyperparameter optimization, albeit in a limited way. For a sweep, usually hyperparameters that should be optimized are specified along with the desired search domains, and an optimizer can be selected from a pre-defined list to carry out the optimization. However, the implementation of this functionality usually imposes several restrictions on the way the sweep can be carried out. In contrast, \textit{PyExperimenter} follows an inverted workflow. Instead of experiment runners registering experiments to a tracking entity such as a tracking server or database, the experiments are predefined and runners are pulling open experiments from a database. Similarly, ClearML \citep{clearml} and Polyaxon \citep{polyaxon} support a more generic workflow where experiments are first enqueued in a central orchestration server and agents can then pull tasks from the queue to execute them. However, both are much more heavyweight than \textit{PyExperimenter} regarding the implementation of both the agents and backend-features. Moreover, they are neither completely free nor completely open-source. In addition to the inverted workflow, a core property of \textit{PyExperimenter} is that the user has direct access to the experiment database, which is usually not the case for alternative tools. This allows users to view, analyze and modify both the experiment inputs and results directly in the database, although not having to deal with the setup of the database itself. Sticking to available database technology further does not force the user to learn new query languages just to be able to retrieve files from a database. 
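Because the backend is an ordinary relational database, the kind of direct access described above needs nothing beyond standard SQL. A self-contained toy illustration using SQLite follows; the table layout and status values are invented stand-ins for illustration, not \textit{PyExperimenter}'s actual schema:

```python
import sqlite3

# Toy stand-in for the experiment database.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE experiments (id INTEGER, seed INTEGER, status TEXT, score REAL)"
)
con.executemany(
    "INSERT INTO experiments VALUES (?, ?, ?, ?)",
    [(1, 1, "done", 0.91), (2, 2, "error", None), (3, 3, "done", 0.87)],
)

# Plain SQL suffices to reset failed runs for re-execution ...
con.execute("UPDATE experiments SET status = 'created' WHERE status = 'error'")
open_rows = con.execute(
    "SELECT id FROM experiments WHERE status = 'created'"
).fetchall()

# ... or to aggregate results directly in the database.
mean_score = con.execute(
    "SELECT AVG(score) FROM experiments WHERE status = 'done'"
).fetchone()[0]
```

The same queries could equally be issued from any database client, which is precisely the point of exposing the experiment table directly.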
Furthermore, \textit{PyExperimenter} offers some convenience functionality like logging errors and the possibility to reset experiments with a specific status such as experiments that failed. \textit{PyExperimenter} was designed to be used by researchers in the field of artificial intelligence, but is not limited to those. The general structure of the project allows using \textit{PyExperimenter} for many kinds of experiments as long as they can be defined in terms of input parameters and a correspondingly parameterized function. \section{Acknowledgements} This work was partially supported by the German Federal Ministry for Economic Affairs and Climate Action (FLEMING project no.\ 03E16012F) and the German Research Foundation (DFG) within the Collaborative Research Center "On-The-Fly Computing" (SFB 901/3 project no.\ 160364472). \bibliographystyle{unsrtnat}
\section{Introduction} \vspace{-0.1 cm} \IEEEPARstart{D}{eep} learning, especially deep neural networks~\cite{lecun2015deep,graves2013speech,bengio2009learning,tompson2014joint}, has shown considerable promise through tremendous results in recent years, significantly improving the accuracy of a variety of challenging problems when compared to other machine learning methods~\cite{krizhevsky2012imagenet,farabet2013learning,simonyan2014very,hinton2012deep,hannun2014deep,amodei2015deep}. However, deep neural networks require high performance computing systems due to the tremendous quantity of computational layers they possess, leading to a massive quantity of parameters to learn and compute. This issue of architectural complexity has increased greatly in recent years~\cite{simonyan2014very,srivastava2015training,szegedy2015going}, driven by the demand for increasingly deeper and larger deep neural networks to boost modeling accuracy. As such, it has become increasingly more difficult to take advantage of such complex deep neural networks in scenarios where computational and energy resources are scarce. To enable the widespread use of deep learning, there has been a recent drive towards obtaining highly-efficient deep neural networks with strong modeling power. Much of the work in obtaining efficient deep neural networks has focused on deterministically compressing trained deep neural networks~\cite{lecun1989optimal}, using traditional lossless and lossy compression techniques such as quantization~\cite{gong2014compressing,han2015deep}, deterministic pruning~\cite{lecun1989optimal,han2015learning}, Huffman coding~\cite{han2015deep}, and hashing~\cite{chen2015compressing}.
Rather than attempting to take an existing deep neural network and compress it into a smaller representation heuristically, we instead consider the following idea: \textit{Can deep neural networks \textbf{evolve} naturally over successive generations into highly efficient deep neural networks?} Using an example of evolutionary progress towards efficiency from nature, a recent study by Moran \mbox{\it et al.}~\cite{moran2015energetic} proposed that the eyeless Mexican cavefish evolved to lose its vision system over generations due to the high metabolic cost of vision. Therefore, by evolving naturally over generations in a way where the cavefish lost its vision system, the amount of energy expended is significantly reduced and thus survivability is improved in subterranean habitats where food availability is low. The ability to mimic the biological evolutionary process for the task of producing highly-efficient deep neural networks over successive generations can have considerable benefits. In this study, we entertain a different notion for producing highly-efficient deep neural networks by introducing the evolutionary synthesis of deep neural networks over successive generations based on ancestor deep neural networks. While the idea of leveraging evolutionary computation concepts for training and generating deep neural networks has been previously explored in the literature~\cite{Angeline,Stanley,Stanley2,Gauci,Tirumala}, there are significant key differences between these previous studies and this study: \begin{itemize} \item While previous studies have focused on improving the accuracy and training of deep neural networks, to the best of the authors' knowledge this study is the first to explore and focus on the notion of evolutionary synthesis of deep neural networks with high network architectural efficiency over successive generations.
\item While the evolutionary computational approaches leveraged by these previous studies are classical approaches such as genetic algorithms and evolutionary programming, this study introduces a new probabilistic framework where evolution mechanisms such as genetic encoding and environmental conditions are modeled via probability distributions, and the stochastic synthesis process leverages these probability models to produce deep neural networks at successive generations. To the best of the authors' knowledge, this study is the first to leverage a probabilistic approach to evolutionary synthesis of deep neural networks. \item To the best of the authors' knowledge, the new approach introduced in this study is the first to achieve evolution and synthesis of deep neural networks with very deep, large neural network architectures that have been demonstrated to provide great performance in recent years~\cite{simonyan2014very,srivastava2015training,szegedy2015going}. Previous studies have focused on deep neural networks with smaller and shallower network architectures, as the approaches used in such studies are more difficult to scale to very deep, large network architectures. \end{itemize} \section{Methodology } The proposed evolutionary synthesis of deep neural networks is primarily inspired by real biological evolutionary mechanisms. In nature, traits that are passed down from generation to generation through DNA may change over successive generations due to factors such as natural selection and random mutation, giving rise to diversity and enhanced traits in later generations. To realize the idea of evolutionary synthesis for producing deep neural networks, we introduce a number of computational constructs to mimic the following mechanisms of biological evolution: i) \textbf{Heredity}, ii) \textbf{Natural Selection}, and iii) \textbf{Random Mutation}. 
\noindent \textbf{Heredity.} Here, we mimic the idea of heredity by encoding the architectural traits of deep neural networks in the form of synaptic probability models, which are used to pass down traits from generation to generation. One can view these synaptic probability models as the `DNA' of the networks. Let $\mathcal{H}=(\mathcal{N},S)$ denote the possible architecture of a deep neural network, with $\mathcal{N}$ denoting the set of possible neurons and $S$ denoting the set of possible synapses, with $s_{k} \in S$ denoting a synapse between two neurons $(n_i, n_j) \in \mathcal{N}$. One can encode the architectural traits of a deep neural network as $P(\mathcal{H}_g|\mathcal{H}_{g-1})$, which denotes the conditional probability of the architecture of a network in generation $g$ (denoted by $\mathcal{H}_g$), given the architecture of its ancestor network in generation $g-1$ (denoted by $\mathcal{H}_{g-1}$). If we were to treat areas of strong synapses in an ancestor network in generation $g$ as desirable traits to be inherited by descendant networks at generation $g$, where descendant networks have a higher probability of having similar areas of strong synapses as its ancestor network, one can instead encode the architectural traits of a deep neural network as the synaptic probability $P(S_g|\mathcal{W}_{g-1})$, where $w_{g-1,k} \in \mathcal{W}_{g-1}$ encodes the synaptic strength of each synapse $s_{g-1,k}$. Modeling $P(S_g|\mathcal{W}_{g-1})$ as an exponential distribution, with the probability of each synapse in the network assumed to be independently distributed, one arrives at \begin{align} P(S_{g}|\mathcal{W}_{g-1}) = \prod_{i} \exp \Big(\frac{w_{g-1,i}}{Z} - 1\Big), \label{synaptiveprob} \end{align} where $Z$ is a normalization constant. 
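The synaptic probability model of Eq.~\ref{synaptiveprob} is straightforward to evaluate numerically. The sketch below assumes that synaptic strength is measured by weight magnitude and takes $Z$ to be the maximum magnitude, so that each probability lies in $(0,1]$; both choices are illustrative assumptions, not fixed by the text:

```python
import numpy as np

def synaptic_probability(weights, Z=None):
    """Per-synapse probability P(s_i) = exp(w_i / Z - 1), with synapses
    treated as independent. Strength is taken as |w| and Z defaults to
    the maximum strength (illustrative assumptions)."""
    w = np.abs(np.asarray(weights, dtype=float))
    if Z is None:
        Z = w.max()
    return np.exp(w / Z - 1.0)
```

With this normalization the strongest synapse of the ancestor is inherited with probability one, and weaker synapses with exponentially smaller probability.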
\noindent \textbf{Natural Selection and Random Mutation.} The ideas of natural selection and random mutation are mimicked through the introduction of a network synthesis process for synthesizing descendant networks, which takes into account not only the synaptic probability model encoding the architectural traits of the ancestor network, but also an environmental factor model to mimic the environmental conditions that help drive natural selection, in a random manner that drives random mutation. More specifically, a synapse is synthesized randomly between two possible neurons in a descendant network based on $P(S_g|\mathcal{W}_{g-1})$ and an environmental factor model $\mathcal{F(\mathcal{E})}$, with the neurons in the descendant network synthesized subsequently based on the set of synthesized synapses. As such, the architecture of a descendant network at generation $g$ can be synthesized randomly via synthesis probability $P(\mathcal{H}_g)$, which can be expressed by \begin{align} P(\mathcal{H}_g) = \mathcal{F(\mathcal{E})} \cdot P(S_g|\mathcal{W}_{g-1}). \end{align} The environmental factor model $\mathcal{F(\mathcal{E})}$ can be the combination of quantitative environmental conditions that are imposed upon the descendant networks that they must adapt to. To have a better intuitive understanding, let us examine an illustrative example of how one can impose environmental conditions using $\mathcal{F(\mathcal{E})}$ to promote the evolution of highly efficient deep neural networks. \noindent \textbf{Efficiency-driven Evolutionary Synthesis.} One of the main environmental factors in encouraging energy efficiency during evolution is to restrict the resources available. For example, in a study by Moran {\it et al.}~\cite{moran2015energetic}, it was proposed that the eyeless Mexican cavefish lost its vision system over generations due to the high energetic cost of neural tissue and low food availability in subterranean habitats. 
Their study demonstrated that the cost of vision is about 15\% of resting metabolism for a 1-g eyed phenotype, so losing the vision system through evolution yields significant energy savings and thus improves survivability. As such, we are inspired to computationally restrict the resources available to descendant networks to encourage the evolution of highly-efficient deep neural networks. In analogy with the aforementioned example, the descendant networks must take on network architectures with more efficient energy consumption than their ancestor networks to be able to survive. The main factor in energy consumption is the quantity of synapses and neurons in the network. Therefore, to mimic environmental constraints that encourage the evolution of highly-efficient deep neural networks, we introduce an environmental constraint $\mathcal{F}(\mathcal{E})=C$ that probabilistically constrains the quantity of synapses that can be synthesized in the descendant network (which in effect also constrains the quantity of neurons that can be synthesized), such that descendant networks are forced to evolve more efficient network architectures than their ancestor networks. Therefore, given $P(S_g|\mathcal{W}_{g-1})$ and $\mathcal{F}(\mathcal{E})=C$, the synthesis probability $P(\mathcal{H}_g)$ can be formulated as \begin{align} P(\mathcal{H}_g) = C \cdot P(S_g|\mathcal{W}_{g-1}), \label{synthprob} \end{align} where $C$ is the highest percentage of synapses desired in the descendant network. The random element of the network synthesis process mimics the random mutation process and promotes network architectural diversity. Given the probabilistic framework introduced above, the proposed evolutionary synthesis of highly-efficient deep neural networks can be described as follows (see Figure~\ref{Fig:EvoNet}). Given an ancestor network at generation $g-1$, a synaptic probability model $P(S_{g}|\mathcal{W}_{g-1})$ is constructed according to Eq.~\ref{synaptiveprob}.
Using $P(S_{g}|\mathcal{W}_{g-1})$ and the environmental constraint $\mathcal{F}(\mathcal{E})$, a synthesis probability $P(\mathcal{H}_g)$ is constructed according to Eq.~\ref{synthprob}. To synthesize a descendant network at generation $g$, each synapse $s_{g,k}$ in the descendant network is synthesized randomly as follows: \begin{align} s_{g,k}~{\rm exists~in~}~\mathcal{H}_g~{\rm if}~ P(s_{g,k}) \geq U(0;1), \end{align} \noindent where $U(0;1)$ is a random number drawn from the uniform distribution on $[0,1]$. The synthesized descendant networks at generation $g$ are then trained into fully-functional networks, like one would train a newborn, and the evolutionary synthesis process is repeated to produce successive generations of descendant networks. \begin{figure*}[t] {\center \includegraphics[width = 16 cm]{Figures/EvoNet.png} \caption{Evolutionary synthesis process of highly-efficient deep neural networks.} \label{Fig:EvoNet}} \vspace{- 0.5 cm} \end{figure*} \section{Experimental Results} To investigate the efficacy of the proposed evolutionary synthesis of highly-efficient deep neural networks, experiments were performed using the MSRA-B~\cite{MSRAB} and HKU-IS~\cite{li2015visual} datasets for the task of visual saliency. This task was chosen given the importance for biological beings of detecting objects of interest (e.g., prey, food, predators) for survival in complex visual environments, and it can provide interesting insights into the evolution of networks. Three generations of descendant deep neural networks (second, third, and fourth generations) were synthesized within an artificially constrained environment beyond the original, first-generation ancestor network.
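A minimal sketch of one synthesis step described in the previous section — keep synapse $s_{g,k}$ when $C \cdot P(s_{g,k}) \geq U(0,1)$, per Eqs.~\ref{synaptiveprob} and \ref{synthprob} — is the following; the function name and the normalization $Z = \max|w|$ are illustrative assumptions:

```python
import numpy as np

def synthesize_descendant(w_prev, C=0.4, rng=None):
    # One evolutionary-synthesis step: each candidate synapse survives
    # with probability C * exp(w/Z - 1), realized by comparison against
    # a uniform draw U(0, 1).  Z = max|w| is an illustrative assumption.
    rng = np.random.default_rng() if rng is None else rng
    w = np.abs(np.asarray(w_prev, dtype=float))
    prob = C * np.exp(w / w.max() - 1.0)
    return prob >= rng.uniform(size=w.shape)  # survival mask for generation g
```

In expectation, at most a fraction $C$ of the ancestor's synapses survive, so repeated application shrinks the architecture geometrically over generations.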
The environmental constraint imposed during synthesis in this study is that the descendant networks should not have more than 40\% of the total number of synapses that their direct ancestor network possesses (i.e., $C=0.4$), thus encouraging the evolution of highly-efficient deep neural networks. The network architecture of the original, first-generation ancestor network used in this study, and details on the tested datasets and performance metrics, are as follows. \noindent \textbf{Network architecture.} The network architecture of the original, first-generation ancestor network used in this study builds upon the VGG16 very deep convolutional neural network architecture~\cite{simonyan2014very} for the purpose of image segmentation as follows. The outputs of the c3, c4, and c5 stacks from the VGG16 architecture are fed into newly added c6, c7, c8 stacks, respectively. The outputs of the c7 and c8 stacks are then fed into the d1 and d2 stacks. The concatenated outputs of the c6, d1, and d2 stacks are then fed into the c9 stack. The output of the c5 stack is fed into the c10 and c11 stacks. Finally, the combined output of the c9, c10 and c11 stacks is fed into a softmax layer to produce the final segmentation result.
The details of the different stacks are as follows: c1: 2 convolutional layers of 64, $3 \times 3$ local receptive fields; c2: 2 convolutional layers of 128, $3 \times 3$ local receptive fields; c3: 3 convolutional layers of 256, $3 \times 3$ local receptive fields; c4: 3 convolutional layers of 512, $3 \times 3$ local receptive fields; c5: 3 convolutional layers of 512, $3 \times 3$ local receptive fields; c6: 1 convolutional layer of 256, $3 \times 3$ local receptive fields; c7 and c8: 1 convolutional layer of 512, $3 \times 3$ local receptive fields; c9: 1 convolutional layer of 384, $1 \times 1$ local receptive fields; c10 and c11: 2 convolutional layers of 512, $11 \times 11$ local receptive fields and 384, $1 \times 1$ local receptive fields; d1 and d2 are deconvolutional layers. \noindent \textbf{Datasets.} The MSRA-B dataset~\cite{MSRAB} consists of 5000 natural images and their corresponding ground truth maps, where the salient objects in the images are segmented with pixel-wise annotation. The dataset is divided into training, validation and testing groups containing 2500, 500 and 2000 images, respectively. Figure~\ref{Fig:MSRA-B} illustrates some of the example images from the dataset with their corresponding ground truths. The HKU-IS dataset~\cite{li2015visual} consists of 4447 natural images and their corresponding ground truth maps, where the salient objects in the images are segmented with pixel-wise annotation. The entire dataset is used as a testing group for the descendant networks trained on the training group of the MSRA-B dataset. Figure~\ref{Fig:HKU-IS} illustrates some of the example images from the dataset with their corresponding ground truths.
\begin{figure}[t] \setlength\tabcolsep{0.1 cm} \begin{center} \begin{tabular}{cccc} \includegraphics[width = 2 cm]{MSRAB_Img/3_111_111965}& \includegraphics[width = 2 cm]{MSRAB_GT/3_111_111965}& \includegraphics[width = 2.1 cm]{MSRAB_Img/0_24_24455}& \includegraphics[width = 2.1 cm]{MSRAB_GT/0_24_24455}\\ \includegraphics[width = 2 cm]{MSRAB_Img/3_114_114842}& \includegraphics[width = 2 cm]{MSRAB_GT/3_114_114842}& \includegraphics[width = 2.05 cm]{MSRAB_Img/3_97_97290}& \includegraphics[width = 2.05 cm]{MSRAB_GT/3_97_97290}\\ \includegraphics[width = 2 cm]{MSRAB_Img/3_102_102510}& \includegraphics[width = 2 cm]{MSRAB_GT/3_102_102510}& \includegraphics[width = 2 cm]{MSRAB_Img/0_5_5785}& \includegraphics[width = 2 cm]{MSRAB_GT/0_5_5785}\\ \end{tabular} \caption{MSRA-B image dataset: This dataset contains 5000 natural images divided into 2500, 500 and 2000 images as training, validation and test samples, respectively. The ground truth maps are provided with pixel-wise annotation. Examples of images and their corresponding ground truth maps in the MSRA-B image dataset are shown here.} \label{Fig:MSRA-B} \end{center} \vspace{- 0.5 cm} \end{figure} \begin{figure}[t] \centering \setlength\tabcolsep{0.1 cm} \begin{center} \begin{tabular}{cccc} \includegraphics[width = 2 cm]{HKUIS_Img/0005}& \includegraphics[width = 2 cm]{HKUIS_GT/0005}& \includegraphics[width = 2.1 cm]{HKUIS_Img/0346}& \includegraphics[width = 2.1 cm]{HKUIS_GT/0346}\\ \includegraphics[width = 2 cm]{HKUIS_Img/8062}& \includegraphics[width = 2 cm]{HKUIS_GT/8062}& \includegraphics[width = 2.05 cm]{HKUIS_Img/0012}& \includegraphics[width = 2.05 cm]{HKUIS_GT/0012}\\ \includegraphics[width = 2 cm]{HKUIS_Img/0652}& \includegraphics[width = 2 cm]{HKUIS_GT/0652}& \includegraphics[width = 2 cm]{HKUIS_Img/1465}& \includegraphics[width = 2 cm]{HKUIS_GT/1465}\\ \end{tabular} \caption{HKU-IS image dataset: This dataset contains 4447 natural images, and the entire dataset is used as a testing group for the descendant 
deep neural networks trained on the training group of the MSRA-B dataset.} \label{Fig:HKU-IS} \end{center} \vspace{- 0.5 cm} \end{figure} \noindent \textbf{Performance metrics.} To evaluate the performance of the evolved descendant deep neural networks at different generations, the mean absolute error (MAE) and F$_\beta$ score (with $\beta^2=0.3$~\cite{li2015visual}) metrics were computed for each of the descendant deep neural networks across the 2000 test images of the MSRA-B dataset that were not used for training. As a reference, the same performance metrics were also computed for the original, first-generation ancestor deep neural network. \noindent \textbf{Architectural efficiency over successive generations.} The detailed experimental results describing the number of synapses, architectural efficiency (defined here as the reduction of synapses in the network compared to the original, ancestor deep neural network in the first generation), F$_\beta$ score, and MAE are presented in Table~\ref{Tab:QRes} and Table~\ref{Tab:QResHKUIS} for the MSRA-B and HKU-IS datasets, respectively. A number of insightful observations can be made with respect to the change in architectural efficiency over successive generations of descendant deep neural networks.
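As a quick sanity check, the architectural-efficiency values reported in the tables are simply ratios of synapse counts relative to the first-generation ancestor:

```python
# Synapse counts per generation as reported in the tables
# (both datasets share the same synthesized architectures).
synapses = {1: 63767232, 2: 15471797, 3: 3603007, 4: 1333010}

# Architectural efficiency: ancestor synapse count / descendant count,
# e.g. generation 2: 63767232 / 15471797, approximately 4.12X.
efficiency = {g: synapses[1] / n for g, n in synapses.items()}
```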
\begin{table}[ht] \begin{center} \footnotesize \setlength\tabcolsep{0.1 cm} \caption{Performance metrics for different generations of synthesized offspring networks for the MSRA-B dataset} \label{Tab:QRes} \begin{tabular}{l||cccc} Generation & \scriptsize Number of synapses &\scriptsize Architectural efficiency &\scriptsize F$_\beta$ score &\scriptsize MAE \\ \hline \hline 1 &63767232 & 1X &0.875 &0.0743 \\ 2 &15471797 &4.12X & 0.876 &0.0739 \\ 3 &3603007 &17.69X &0.861 &0.0813 \\ 4 &1333010 &47.83X &0.850 &0.0863 \\ \end{tabular} \end{center} \vspace{- 0.5 cm} \end{table} First, it can be observed that the performance differences from one generation of descendant networks to the next are small for MSRA-B ($<$3\% between the first and fourth generations), while the performance differences are small for HKU-IS between the first two generations ($<$0.5\%) before larger performance differences appear in the third and fourth generations ($<$8\% between the first and fourth generations). These results indicate that the modeling power of the ancestor network is well-preserved in the descendant networks. Second, it can be observed that the descendant networks in the second and third generations can achieve state-of-the-art F$_\beta$ scores for MSRA-B (0.876 at the second generation and 0.861 at the third generation, compared to 0.865 as reported by Li et al.~\cite{li2015visual} for their state-of-the-art visual saliency method), while having network architectures that are significantly more efficient compared to the first-generation ancestor network (\textbf{$\sim$18-fold} decrease in synapses). A similar trend was observed for HKU-IS, though persisting only in the second generation (0.826 compared to 0.8 reported in~\cite{li2015visual}, while achieving a \textbf{$\sim$4-fold} decrease in synapses over the ancestor network).
What is more remarkable is that the descendant network at the fourth generation maintains strong F$_\beta$ scores (0.850 for MSRA-B and 0.753 for HKU-IS), while having a network architecture that is incredibly efficient (\textbf{$\sim$48-fold} decrease in synapses) compared to the first-generation ancestor network. This \textbf{$\sim$48-fold} increase in architectural efficiency while maintaining modeling power clearly shows the efficacy of producing highly-efficient deep neural networks over successive generations via the proposed evolutionary synthesis. \begin{table}[ht] \begin{center} \footnotesize \setlength\tabcolsep{0.1 cm} \caption{Performance metrics for different generations of synthesized offspring networks for the HKU-IS dataset} \label{Tab:QResHKUIS} \begin{tabular}{l||cccc} \scriptsize Generation &\scriptsize Number of synapses &\scriptsize Architectural efficiency &\scriptsize F$_\beta$ score &\scriptsize MAE \\ \hline \hline 1 &63767232 & 1X &0.830 &0.0914 \\ 2 &15471797 &4.12X & 0.826 &0.0911 \\ 3 &3603007 &17.69X &0.775 &0.1087 \\ 4 &1333010 &47.83X &0.753 &0.1190 \\ \end{tabular} \end{center} \end{table} \noindent \textbf{Visual saliency variations over successive generations.} To gain additional insights, Figure~\ref{Fig:saliency} shows example test images from the MSRA-B dataset and the HKU-IS dataset, along with the corresponding visual saliency maps generated by the descendant networks at different generations. It can be observed that the descendant networks at all generations consistently identified the objects of interest in the scene as visually salient. It is also interesting to observe that by the fourth generation, with a $\sim$48-fold decrease in synapses compared to the first-generation ancestor network, the ability to distinguish fine-grained visual saliency starts to diminish.
These observations are interesting in that, similar to biological evolution, they show that the descendant networks evolved over successive generations in such a way that important traits (e.g., the general ability to identify salient objects) are retained from their ancestors, while less important traits (e.g., the ability to distinguish fine-grained saliency) diminish in favor of adapting to environmental constraints (e.g., growing highly-efficient architectures due to imposed constraints). These experimental results show that, by taking inspiration from biological evolution, the proposed evolutionary synthesis of deep neural networks can lead to the natural evolution of deep neural networks over successive generations into highly efficient, yet powerful deep neural networks, and is thus a promising direction for future exploration in deep learning. \begin{figure}[h] \centering \setlength\tabcolsep{0.05 cm} \begin{center} \begin{tabular}{ccccc} \includegraphics[width = 1.7 cm]{result/msra-b/Img/0_1_1934_heat.png}& \includegraphics[width = 1.7 cm]{result/msra-b/full/0_1_1934_heat.png}& \includegraphics[width = 1.7 cm]{result/msra-b/first/0_1_1934_heat.png}& \includegraphics[width = 1.7 cm]{result/msra-b/second/0_1_1934_heat.png}& \includegraphics[width = 1.7 cm]{result/msra-b/third/0_1_1934_heat.png}\\ \includegraphics[width = 1.7 cm]{result/hku-is/Img/0372_heat.png}& \includegraphics[width = 1.7 cm]{result/hku-is/full/0372_heat.png}& \includegraphics[width = 1.7 cm]{result/hku-is/First/0372_heat.png}& \includegraphics[width = 1.7 cm]{result/hku-is/second/0372_heat.png}& \includegraphics[width = 1.7 cm]{result/hku-is/Third/0372_heat.png}\\ \scriptsize Image &\scriptsize Generation 1 &\scriptsize Generation 2 &\scriptsize Generation 3 &\scriptsize Generation 4 \end{tabular} \caption{Example test images from the tested datasets, and the corresponding visual saliency maps generated by the descendant deep neural networks at different generations.} \label{Fig:saliency} \end{center}
\end{figure} \vspace{- 0.5 cm} \section*{Author Contributions} A.W. conceived the concept of evolutionary synthesis for deep learning proposed in this study. M.S. and A.W. formulated the evolutionary synthesis process proposed in this study. A.M. implemented the evolutionary synthesis process and performed all experiments in this study. A.W., M.S., and A.M. all participated in writing this paper. \renewcommand{\refname}{\normalfont\selectfont\normalsize\bf References} \small{ \bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:intro} \subsection{The incompressible Navier-Stokes equations} The Navier-Stokes equations are a fundamental mathematical model of incompressible viscous fluid flow, which consists of a system of equations \begin{equation}\label{eq:NSE} \begin{cases} \partial_t u - \Delta u + \D( u \otimes u) + \nabla p = 0 &\\ \D u = 0, \end{cases} \end{equation} posed on a spatial domain $\Omega \subset \RR^d$ with suitable boundary conditions. In \eqref{eq:NSE}, $u :[0,T]\times \Omega \to \RR^d $ is the unknown velocity, and $p :[0,T]\times \Omega \to \RR $ is a scalar pressure. We consider the Cauchy problem of \eqref{eq:NSE} on a time interval $[0,T]$ for some initial data $u_0$ and $T>0$. In the paper we confine ourselves to the periodic case $\Omega = \TT^d =\RR^d /\ZZ^d $ in dimension $d \geq 2$ and consider solutions with zero spatial mean $$ \int_{\TT^d} u(t,x) \, dx = 0, $$ which is propagated under the evolution of the equation \eqref{eq:NSE}. In this paper, the notion of weak solutions refers to that of distributional solutions that solve \eqref{eq:NSE} in the sense of space-time distributions, cf. \cite{MR316915,MR2838337}. \begin{definition}\label{def:weak_solutions} Denote by $\mathcal{D}_T$ the space of divergence-free test functions $\varphi \in C^\infty (\RR \times \TT^d ) $ such that $\varphi =0$ if $t\geq T$. Let $ u_0 \in L^2(\TT^d)$ be weakly divergence-free\footnote{It is possible to consider more general initial data, such as $L^p $ for some $ 1 \leq p \leq \infty$ as in~\cite{MR316915}.}. A vector field $ u \in L^2 ( [0,T] \times \TT^d)$ is a weak solution of \eqref{eq:NSE} with initial data $u_0$ if the following hold: \begin{enumerate} \item For $a.e.$ $t\in [0,T]$, $u$ is weakly divergence-free; \item For any $\varphi \in \mathcal{D}_T$, \begin{equation} \int_{\TT^d} u_0(x)\cdot \varphi(0,x ) \, dx = - \int_0^T \int_{\TT^d} u\cdot \big( \partial_t \varphi+ \Delta \varphi + u \cdot \nabla \varphi \big) \, dx dt .
\end{equation} \end{enumerate} \end{definition} In the literature, such solutions are sometimes called ``very weak solutions'' \cite{MR1755865,MR1798753} due to the minimal regularity assumptions. Remarkably, by \cite[Theorem 2.1]{MR316915}, up to possibly redefining $u$ on a set of measure zero in space-time, the above weak formulation is equivalent to the integral equation \begin{equation}\label{eq:NSE_integral_formulation} u = e^{t\Delta}u_0 + \int_0^t e^{(t-s)\Delta} \mathbb{P}\D(u\otimes u) (s)\, ds , \end{equation} where $ e^{t\Delta}$ is the heat semigroup and $\mathbb{P}$ is the Leray projection onto the divergence-free vector fields. Note that formulation \eqref{eq:NSE_integral_formulation} was also used in a variety of works \cite{MR166499,MR760047,MR1808843} to construct unique solutions (called mild solutions) of \eqref{eq:NSE} when the initial data $u_0$ is critical or subcritical, starting from the work of Fujita and Kato \cite{MR166499}. A more physical class of weak solutions, introduced by Leray \cite{MR1555394} and constructed by Leray \cite{MR1555394} in $\RR^3$ and Hopf \cite{doi:10.1002/mana.3210040121} in general domains in $d\geq 2$, is also considered in the literature. 
\begin{definition}\label{def:LHweak_solutions} A weak solution $u$ of \eqref{eq:NSE} is called a Leray-Hopf weak solution if $u \in C_w([0,T]; L^2) \cap L^2(0,T; H^1)$ and \begin{equation}\label{eq:energy_inequality} \frac{1}{2} \|u(t) \|_2^2 + \int_0^t\|\nabla u (s)\|_2^2 \,ds \leq \frac{1}{2} \|u(0) \|_2^2, \end{equation} for all $t\in[0,T]$. \end{definition} The Leray-Hopf weak solutions encode the natural conservation law of \eqref{eq:NSE} and satisfy much better properties than general weak solutions, especially in the most relevant case of 3D, such as Leray's structure theorem~\cite{MR1555394}, weak-strong uniqueness~\cite{MR0136885,MR2237686,MR2838337}, partial regularity~\cite{MR933230,MR673830}, integrability of higher Sobolev norms~\cite{MR3258360}, and estimates of potential blowup rates~\cite{MR1992563,MR3475661,MR3465978}. In fact, it is well-known that Leray-Hopf solutions are smooth and unique in 2D. These nice properties, much desirable from a regularity standpoint, make it significantly harder to construct nonunique Leray-Hopf solutions in $d\geq 3$, though partial results~\cite{Lady_enskaja_1969,MR3341963} and numerical evidence~\cite{1704.00560} are available. The main focus of the present paper is to study the question of uniqueness/nonuniqueness for general weak solutions of \eqref{eq:NSE} in the sense of Definition \ref{def:weak_solutions}. Even though the solutions constructed in this paper live on a borderline of a class of Leray-Hopf solutions, they do not have the regularity to justify \eqref{eq:energy_inequality} and are not Leray-Hopf solutions; whether Leray-Hopf solutions are unique in dimension $d \geq 3$ remains a challenging open question. \subsection{Uniqueness vs. nonuniqueness} We first discuss the uniqueness results. For brevity, we do not emphasize the underlying spatial domain of the results that we mention.
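For orientation, the energy inequality \eqref{eq:energy_inequality} above arises formally (with equality) by pairing \eqref{eq:NSE} with $u$ and integrating over $\TT^d$:

```latex
\frac{d}{dt}\,\frac{1}{2}\|u(t)\|_{2}^{2}
  = \int_{\TT^d} u\cdot \Delta u \, dx
  - \int_{\TT^d} u\cdot \D(u\otimes u)\, dx
  - \int_{\TT^d} u\cdot \nabla p \, dx
  = -\|\nabla u(t)\|_{2}^{2},
```

since $\int_{\TT^d} u\cdot \D(u\otimes u)\, dx = \frac{1}{2}\int_{\TT^d} u\cdot \nabla |u|^2 \, dx = 0$ and $\int_{\TT^d} u\cdot \nabla p \, dx = -\int_{\TT^d} p \,\D u \, dx = 0$ by the divergence-free condition. For general weak solutions this computation cannot be justified, which is why \eqref{eq:energy_inequality} is imposed rather than derived.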
Since we mainly work with the scales of mixed Lebesgue norms $L^p L^q$, to simplify notation, given $p,q \in [1,\infty]$ and $\Omega \subset \RR^d$ let us denote the Banach space \begin{equation*} X^{p,q}([0,T] ; \Omega ) = \begin{cases} L^p( [0,T]; L^q( \Omega) ) & \text{if $p \neq \infty$, } \\ C( [0,T]; L^q( \Omega) ) & \text{if $p = \infty$. } \end{cases} \end{equation*} In the setting of Leray-Hopf solutions, Prodi~\cite{Prodi1959}, Serrin~\cite{MR0136885}, and Ladyzhenskaya~\cite{MR0236541} proved that if a Leray-Hopf solution $u$ satisfies $$ u \in X^{p,q} \quad \text{for some $p <\infty$ and $q> d$ such that $\frac{2}{p} + \frac{d}{q} \leq 1$}, $$ then \emph{all} Leray-Hopf solutions with the same initial data must coincide. Remarkably, the endpoint case $p=\infty$ can be strengthened to $L^\infty L^d$ \cite{MR1876415,MR1992563}. This type of result is often referred to as \emph{weak-strong uniqueness} in the sense that if there exists a strong solution, then any weak solution with the same initial data coincides with it. In fact, membership in such functional classes implies the regularity of Leray-Hopf solutions as well, though we will not go into details in this direction and simply refer interested readers to \cite{MR0136885,MR1992563,MR3475661} and references therein. In the setting of Definition \ref{def:weak_solutions}, one generally loses the property of weak-strong uniqueness due to the lack of the energy inequality \eqref{eq:energy_inequality}. For such weak solutions, one can instead study the uniqueness issue within certain functional classes, such as $X^{p,q}$. The general scheme is to write \eqref{eq:NSE_integral_formulation} as the following abstract formulation \begin{equation}\label{eq:NSE_abstract} u = e^{t\Delta}u_0 + B(u, u), \end{equation} and study the continuity of the bilinear operator $B$ in the various underlying functional spaces, see for instance~\cite{MR3469428} and references therein.
The first result in this direction dates back to Fabes, Jones, and Rivi\`ere \cite{MR316915}, who proved that weak solutions in the class $ X^{p,q} $ are unique if $ \frac{2}{p} + \frac{d}{q} \leq 1$ and $d < q <\infty $. The limit case $p=\infty$ and $q =d $ was later covered first by \cite{MR1813331} in $d \geq 2$, and then \cite{MR1724946,MR1680809,MR1876415} via different methods and for different spatial domains. Based on a scaling analysis, when $\frac{2}{p} + \frac{d}{q} = 1 $, the space $X^{p,q}$ is invariant under the parabolic scaling of the equation $ u \mapsto u_\lambda := \lambda u(\lambda^2 t, \lambda x)$. In the literature, the space $X^{p,q}$ is called sub-critical when $\frac{2}{p} + \frac{d}{q} < 1 $, critical when $\frac{2}{p} + \frac{d}{q} = 1 $, and super-critical when $\frac{2}{p} + \frac{d}{q} > 1 $. Therefore, the uniqueness condition $ u\in X^{p,q}$ for $ \frac{2}{p} + \frac{d}{q} \leq 1$ seems to be sharp. In fact, thanks to~\cite{MR316915,MR760047,MR1813331,MR1876415} when $\frac{2}{p} + \frac{d}{q} \leq 1 $, all weak solutions (in the sense of Definition~\ref{def:weak_solutions}) belonging to the class $X^{p,q}$ are automatically Leray-Hopf\footnote{In contrast to \cite{MR316915,MR760047,MR1813331} where one has to assume $L^2$ initial data, our setting is on the $d$-dimensional torus and the initial data is always $L^2$.} and hence regular. In other words, sub-critical or critical weak solutions are classical solutions. We can summarize these uniqueness results as follows. Since these results were originally stated for $\RR^d$, we also include a proof applicable to our specific setup in the appendix for readers' convenience. \begin{theorem}[\cite{MR316915,MR760047,MR1813331,MR1876415}]\label{thm:FJR_uniqueness} Let $d \geq 2$ and $u$ be a weak solution of \eqref{eq:NSE} such that $u \in X^{p,q}([0,T]; \TT^d )$ for some $p, q \in [1,\infty]$ such that $ \frac{2}{p} + \frac{d}{q} \leq 1$.
Then \begin{enumerate} \item $u$ is unique in the class $X^{p,q}$, \item $u$ is a Leray-Hopf solution. \end{enumerate} \end{theorem} So far, the positive results suggest $\frac{2}{p} + \frac{d}{q} = 1$ as the critical regularity threshold for uniqueness/nonuniqueness of the weak solutions. One would naturally ask what would happen in the super-critical regime $ \frac{2}{p} + \frac{d}{q} > 1$, or more specifically, whether the following conjecture is valid. \begin{conjecture}\label{conject:main} Let $d \geq 2$ and $p, q \in [1,\infty]$ such that $ \frac{2}{p} + \frac{d}{q} > 1$. Then \begin{enumerate} \item There exist two weak solutions $u, v \in X^{p,q}([0,T]; \TT^d )$ of \eqref{eq:NSE} such that \[ u(0) = v(0) \; \text{but}\; v \ne u. \] \item There exists a weak solution $u \in X^{p,q}([0,T]; \TT^d )$ of \eqref{eq:NSE} such that $u$ is not Leray-Hopf. \end{enumerate} \end{conjecture} In stark contrast to the positive result of Theorem \ref{thm:FJR_uniqueness}, which has been known for quite some time, Conjecture \ref{conject:main} was completely open until very recently due to the groundbreaking work \cite{MR3898708}. Even though the regularity was very far from the threshold $ \frac{2}{p} + \frac{d}{q} =1$, the nonuniqueness of weak solutions has been shown in dimension $d\geq 3$: $C_t L^{2+}$ \cite{MR3898708,1809.00600} in dimension $d= 3$ and $H^{1/200-}$\cite{MR3951691} in dimension $d \geq 4$. In fact, these works used a unified approach to tackle both parts of Conjecture \ref{conject:main} at the same time: one perturbs a given smooth solution of \eqref{eq:NSE} to obtain a ``wild'' solution with certain regularity $X^{p,q}$ and then the existence of such a wild solution implies both the nonuniqueness of weak solutions and the existence of non-Leray-Hopf solutions in the said class. Unfortunately, the strategy of \cite{MR3898708}, which was in turn developed from a series of works \cite{{MR3374958,MR3530360,MR3866888,1701.08678}}, breaks down in 2D. 
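For the reader's convenience, the scaling computation behind this criticality classification is elementary: on $\RR^d$ (and as the standard heuristic on $\TT^d$), a change of variables for $u_\lambda(t,x) = \lambda u(\lambda^2 t, \lambda x)$ gives

```latex
\| u_\lambda(t) \|_{L^q} = \lambda^{1-\frac{d}{q}}\, \| u(\lambda^2 t) \|_{L^q},
\qquad
\| u_\lambda \|_{L^p_t L^q_x} = \lambda^{1-\frac{2}{p}-\frac{d}{q}}\, \| u \|_{L^p_t L^q_x},
```

so the $X^{p,q}$ norm is invariant precisely when $\frac{2}{p}+\frac{d}{q}=1$, and is super-critical (the norm of $u_\lambda$ shrinks as $\lambda \to \infty$) when $\frac{2}{p}+\frac{d}{q}>1$.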
The reason is that, roughly speaking, the framework of \cite{MR3898708} is $ L^2$ critical, in the sense that the mechanism can produce finite energy wild solutions if the system is $L^2$ super-critical. This heuristic has been confirmed in \cite{1809.00600,1808.07595,MR4097236} for the generalized Navier-Stokes equations with fractional dissipation. Since the 2D Navier-Stokes equations are $L^2$-critical, there are no nonuniqueness results for the 2D case to date. In fact, a direct corollary of \cite{MR3898708} is false in 2D: any $C_t L^2$ weak solution of the 2D Navier-Stokes equations is Leray-Hopf, and hence smooth and unique. One of the main results in this paper is to show that the nonuniqueness of weak solutions holds even in 2D. \begin{theorem}[Strong nonuniqueness in 2D]\label{thm:main_short} Let $d=2$ be the dimension. Every weak solution of \eqref{eq:NSE} is not unique: for any weak solution $v:[0,T] \times \TT^2 \to \RR^2$, there exists a different weak solution $u$ with the same initial data. \end{theorem} It is worth noting that the nonuniqueness is proved here in a stronger sense than~\cite{MR3898708,MR3951691}, namely that \emph{every} solution is nonunique in the class of weak solutions. We can classify different types of nonuniqueness results as follows. Here, $X$ denotes different functional classes of weak solutions. \begin{itemize} \item ``Weak nonuniqueness'': there exists a nonunique weak solution in the class $X$. \item ``Strong nonuniqueness'': any weak solution in the class $X$ is nonunique. \end{itemize} Under this classification, currently the only strong nonuniqueness available is \cite{1809.00600} for 3D, where $X$ can be taken as $ C_t H^{\varepsilon}$ weak solutions with intervals of regularity for a small $\varepsilon>0$. In fact, the main results of \cite{1809.00600} imply the strong nonuniqueness of weak solutions on $\TT^d$ for dimension $d=3,4$ since Leray-Hopf solutions in $d \leq 4$ have intervals of regularity.
In this spirit, we prove a strong nonuniqueness result in a class of $ L^p_t L^\infty$ weak solutions, for any $p<2$ and in any dimension $d \geq 2$, which is sharp in view of Theorem~\ref{thm:FJR_uniqueness}. A detailed list of the properties of the constructed solutions can be found in Theorem \ref{thm:main_thm_2} below. In particular, the theorem below settles Conjecture \ref{conject:main} in the case $q =\infty$. \begin{theorem}[Sharp nonuniqueness in $d\geq 2$]\label{thm:main_sharp_short} Let $d \geq 2$ be the dimension and $1 \leq p< 2 $. \begin{enumerate} \item A weak solution $u \in L^p(0,T; L^\infty(\TT^d))$ of \eqref{eq:NSE} is not unique in the class $ L^p(0,T; L^\infty(\TT^d))$ if $u$ has at least one interval of regularity\footnote{In particular, it can be applied to a smooth solution to obtain a nonunique weak solution in the class $ L^p_t L^\infty$.}. \item There exist non-Leray-Hopf weak solutions $u \in L^p(0,T; L^\infty(\TT^d))$. \end{enumerate} \end{theorem} In view of Theorem \ref{thm:FJR_uniqueness}, nonunique solutions cannot live in the class $L^2_t L^\infty$. However, Theorem \ref{thm:main_sharp_short} shows the existence of nonunique solutions on the borderline of this Leray-Hopf class and raises the question of whether such constructions can be extended to the Leray-Hopf solutions. One clearly sees that Theorem \ref{thm:main_sharp_short} only implies the sharpness of \cite{MR316915} near $q= \infty$, and the rest of the borderline regime remains open, see the discussion at the end of the introduction. Alternatively, we can present the result as: for any smooth initial data, there are infinitely many weak solutions with regularity $L^p_t L^\infty$ emerging from the same data that agree with each other for a short time, see the proof of Theorem \ref{thm:main_euler_onsager} at the end of Section \ref{sec:outline}.
\subsection{The main theorem and intervals of regularity} We now present the main theorem of the paper, which immediately implies Theorems \ref{thm:main_short} and \ref{thm:main_sharp_short} above, as we will show in Section \ref{sec:outline}. One of the most interesting features of our constructed weak solutions is that they possess intervals of regularity, i.e. they are classical solutions on many sub-intervals whose union occupies a majority of the time axis, cf. \cite{MR1555394} and \cite{1809.00600}, which we will discuss in detail towards the end of the introduction. \begin{theorem}\label{thm:main_thm_2} Let $d \geq 2$ be the dimension and $1\leq p <2$, $q < \infty$, and $\varepsilon>0$. For any smooth, divergence-free vector field $v \in C^\infty ([0,T] \times \TT^d)$ with zero spatial mean for each $t \in[0,T]$, there exists a weak solution $u $ of \eqref{eq:NSE} and a set $$ \mathcal{I} = \bigcup_{i=1}^\infty (a_i ,b_i) \subset [0,T], $$ such that the following holds. \begin{enumerate} \item The solution $u$ satisfies $$ u \in L^p(0,T; L^\infty(\TT^d) ) \cap L^1(0,T; W^{1,q}(\TT^d) ). $$ \item $u $ is a smooth solution on $(a_i , b_i )$ for every $i$. Namely, $$ u|_{\mathcal{I}\times \TT^d }\in C^\infty(\mathcal{I} \times \TT^d). $$ In addition, $u$ agrees with the smooth solution emerging from the initial data $v(0)$ near $t=0$. \item The Hausdorff dimension of the residue set $\mathcal{S}= [0,T] \setminus \mathcal{I}$ satisfies $$ d_{\mathcal{H}} (\mathcal{S} ) \leq \varepsilon. $$ \item The solution $u$ and the given vector field $v$ are $\varepsilon$-close in $L_t^p L^\infty \cap L_t^1 W^{1,q} $: $$ \| u -v \|_{ L^p(0,T; L^\infty(\TT^d)) \cap L^1(0,T; W^{1,q}(\TT^d) ) } \leq \varepsilon. $$ \end{enumerate} \end{theorem} \begin{remark}\label{remark:main_thm} We list a few remarks here concerning the above result.
\begin{enumerate} \item In terms of the scaling, $u$ also lies on the borderline of the Beale-Kato-Majda criterion \cite{MR763762}, which scales as $L^1_t W^{1, \infty}$. \item The residue set $\mathcal{S}=[0,T] \setminus \mathcal{I}$ is a singular set in the sense that for any $t \notin \mathcal{S}$, there is $\delta>0$ such that $u \in C^\infty((t-\delta, t+\delta)\times \TT^d )$. \item Since the solution $u(t)$ is smooth on $ (a_i,b_i)$, the energy equality is satisfied: $$ \frac{1}{2} \|u(t_1) \|_2^2 + \int_{t_0}^{t_1} \|\nabla u (s)\|_2^2 \,ds = \frac{1}{2} \|u(t_0) \|_2^2 \quad \text{for all $t_0, t_1 \in (a_i ,b_i)$}. $$ \item The driving mechanism of the nonuniqueness is large chunks of mass emerging from/escaping to finer time scales. There is no blowup on each interval of regularity $(a_i,b_i)$, but norms do blow up as $i \to \infty$. \end{enumerate} \end{remark} \subsection{Applications to the Euler equations} Our results also apply to the inviscid case with no changes, in contrast to the recent work \cite{1809.00600}, which relies heavily on parabolic regularization. \begin{theorem}\label{thm:main_euler} Theorem \ref{thm:main_thm_2} also holds for the Euler equations. Namely, under the same assumptions, there exists a weak solution $u$ of the Euler equations satisfying the same properties. \end{theorem} As a byproduct of the construction, we provide an improvement on Onsager's conjecture in 2D in the negative direction, where the best result currently stands at $L^\infty_t C^{1/5 -\varepsilon}$~\cite{MR3374958}, see also \cite[pp. 1817]{MR3987721} and \cite{novack2018nonuniqueness}. \begin{theorem}\label{thm:main_euler_onsager} Let $d \geq 2$ be the dimension and $\varepsilon>0$. There exist infinitely many non-conserving weak solutions $u \in L^{\frac{3}{2} -\varepsilon}_t C^{\frac{1}{3}} \cap L^1_t C^{1 -\varepsilon}$ with the same initial data. 
\end{theorem} This result appears to establish the first non-conserving solutions with an exact ``$\frac{1}{3}$-H\"older regularity'' in space, albeit with a non-optimal $L^{3/2 -}$ exponent in time, cf. the positive results \cite{MR1298949,MR2422377}. We also note that the previous constructions \cite{MR3374958,MR3530360,MR3866888,1701.08678} are continuous in space-time whereas our solutions are not; in fact, the kinetic energy of our solutions becomes unbounded in a piecewise constant fashion. \subsection{Main ideas of the construction} The construction in Theorem \ref{thm:main_thm_2} is based on an iterative scheme to obtain suitable approximate solutions to \eqref{eq:NSE} that consists of two main steps. The first step is to concentrate the stress error of the approximate solutions into many smaller sub-intervals, allowing us to achieve a small Hausdorff dimension of the singular set in time. In particular, this ensures the final approximate solution is an exact smooth solution on many small intervals. Crucially, this is done by adding a very small corrector to the existing solution while keeping the size of the stress error, measured in $L^1$ in time, unchanged up to a constant multiple. After this concentration procedure, the stress error is zero on a large subset of the time interval and thus the size of the concentrated stress error is much larger on its support set. The second step uses a convex integration scheme to add another perturbation to the concentrated solution, reducing the size of the stress error. This convex integration technique has been developed over the last decades, see for instance \cite{MR2600877,MR3090182,MR3374958,1701.08678,MR3866888,MR3898708,MR3951691} and references therein, since its introduction to fluid dynamics in \cite{MR2600877}. 
In particular, its latest iteration for the transport equation in \cite{2004.09538} allows us to achieve a very high level of temporal concentration of the perturbation in the sense that higher Sobolev norms blow up while their time averages remain bounded. The introduction of temporal concentration in the convex integration scheme allows us to trade temporal integrability for spatial regularity, answering a question raised in \cite[Problem 4.4]{BV2021}. To avoid a dimensional loss, the ``building blocks'' used in the scheme are almost spatially homogeneous, in stark contrast to \cite{MR3898708,MR3951691,1809.00600}. The most difficult and important part of the iterative scheme is ensuring that the perturbation $w$ satisfies the regularity $w \in L^p_t L^\infty \cap L^1_t W^{1,q}$, while at the same time successfully reducing the size of the stress error. This boils down to balancing four different aspects of the perturbation: temporal and spatial oscillation/concentration. In the present work, the leading order effects are temporal concentration and spatial oscillation, whereas temporal oscillation and spatial concentration effectively play no role and are kept to a minimum. \subsection{Comparison with previous works} In the last part of the introduction, we compare our main results to the previous works and list a few open questions. We divide the discussion into three topics as follows. \subsubsection*{Regularity threshold for uniqueness/nonuniqueness } The first nonuniqueness result for the Navier-Stokes system was established in \cite{MR3898708} by Buckmaster and Vicol, where finite energy nonunique weak solutions were constructed in 3D. Even though the nonuniqueness is only proved in $C_t L^2$, the iteration scheme in \cite{MR3898708} allows for a very small regularity $H^{\varepsilon}$ for $\varepsilon \ll 1$, which was then used in \cite{1809.00600} to show nonuniqueness at such a regularity. 
The work \cite{MR3951691} built on the observation that in higher dimensions weak solutions can be less intermittent, and thus improved the regularity of nonuniqueness to $H^{1/200-}$ for $d\geq 4$. In fact, as noted in \cite{Taoblog}, in very high dimension one can show nonuniqueness in $C_t H^{1/2-}$ or $H^{1/2-}$ in the stationary case, although the regularity $H^{1/2 -}$ is still very far from the critical scale $ H^{ \frac{d-2}{2}} $ or $L^{d}$. Below we compare different results using the scales of space-time Lebesgue spaces $X^{p,q}$. \begin{table}[H] \begin{tabular}{|l|l|l|l|} \hline Results & Category & Scaling & Range \\ \hline Leray-Hopf solutions & \small Existence & \small$\frac{2}{p} + \frac{d}{q} = \frac{d}{2}$ & $ q \geq 2 $ \\ \hline \cite{MR3898708} & \small Nonuniqueness & \small$\frac{2}{p} + \frac{d}{q} = \frac{d}{2} $ & $q = 2$ and $d=3$ \\ \hline \cite{1809.00600} & \small Nonuniqueness & \small$\frac{2}{p} + \frac{d}{q} = \frac{d}{2} - \varepsilon$ & $q = 2+$ and $d=3$ \\ \hline \cite{MR3951691} & \small Nonuniqueness & \small$\frac{2}{p} + \frac{d}{q} = \frac{d}{2} - \frac{1}{200}$ & $q = 2+$ and $d \geq 4 $ \\ \hline Theorem~\ref{thm:main_thm_2} & \small Nonuniqueness & \small$\frac{2}{p} + \frac{d}{q} = 1+ \varepsilon $ & $ q = \infty $ \\ \hline Theorem~\ref{thm:FJR_uniqueness} & \small Uniqueness & \small$ \frac{2}{p} + \frac{d}{q} = 1 $ & $ q \leq \infty $ \\ \hline \end{tabular} \end{table} In light of the current state, we expect the nonuniqueness of weak solutions to continue to hold in the full super-critical regime $ \frac{2}{p} + \frac{d}{q} >1$. Unfortunately, the method developed in this paper relies heavily on the constraint $p<2$ ($q=\infty$) and is not able to achieve the nonuniqueness of weak solutions in $X^{p,q}$ for $p\geq 2$ and $q \geq 2$. 
\subsubsection*{Size of the potential singular set} Here we discuss our result in the context of partial regularity, more specifically, the size of the singular set in time or in space-time. By singular times we mean the set of times at which the solution is not locally smooth, while singular points in space-time refer to points $(t,x)$ where the solution is not locally bounded (in the sense of $\text{ess}\sup$). By the classical results of Leray, in 3D the Hausdorff dimension of possible singular times of a Leray-Hopf solution is bounded by $1/2$\footnote{This interpretation was made explicit in \cite{MR0452123}.}. A key step in understanding the (possible) singular set of weak solutions was made by Scheffer \cite{MR454426,MR510154,MR573611}, where the notion of suitable weak solutions was introduced. It was proved in~\cite{MR573611} that the singular sets of these suitable weak solutions have finite $\frac{5}{3}$-dimensional Hausdorff measure in space-time. The theory of partial regularity culminated in the work~\cite{MR673830} of Caffarelli, Kohn, and Nirenberg, where they showed that $ \mathcal{P}^1(S)=0$, i.e., the $1$-dimensional parabolic Hausdorff measure of the singular set in space-time is zero. Note that these partial regularity results only provide upper bounds on the potential singular sets. While convincing evidence~\cite{MR814542,MR895215,MR4066585} suggests that the upper bound of $1$-dimensional parabolic singularities in 3D is likely to be sharp for suitable weak solutions, it was unknown whether there are weak solutions with a nontrivial\footnote{Here by nontrivial we mean that the singular set is neither empty nor full, since smooth solutions have no singularity while the singular set of the solutions in \cite{MR3898708} is the whole space-time domain.} singular set until the work \cite{1809.00600}, where the authors constructed wild solutions with a nonempty singular set of dimension strictly less than $1 $. 
As in \cite{1809.00600}, the solutions constructed here are not Leray-Hopf; however, they constitute the first example of 3D weak solutions that surpass the $1/2$ upper bound (with a nonempty singular set). In dimension $ d \geq 4$, the existence of partially regular (in space-time or in time) weak solutions becomes highly nontrivial. In fact, Leray's structure theorem only holds up to $d=4$, and the local energy inequality, a key ingredient in the partial regularity theory, remains absent in $d \geq 4$~\cite[Remark 1.1]{MR2318865}. Despite this difficulty, the existence of partially regular weak solutions in space-time was established in 4D by Scheffer~\cite{MR510154} and also in a recent preprint by Wu~\cite{2008.05802}. In dimension $d \geq 5$ the existence of partially regular weak solutions (in space-time or in time) was, to our knowledge, unknown, and Theorem \ref{thm:main_thm_2} appears to be the first example of weak solutions with partial regularity in time in dimension $d \geq 5$. In relation to partial regularity in space-time, the singular set of our solutions is the whole spatial domain at each singular time, as with all the other constructions exploiting a convex integration scheme. It might be possible to construct wild solutions that enjoy a certain space-time partial regularity by a space-time variant of the concentration procedure used here. \subsubsection*{Anomalous dissipation of the Euler equations} A recent milestone in incompressible fluid dynamics is the resolution of the Onsager conjecture \cite{MR36116}, which states that $\frac{1}{3}$-H\"older is the critical threshold for energy conservation for the 3D Euler equations. While the positive direction was settled in the 90s in \cite{MR1298949}, following the first attempt by \cite{MR1302409}, and later refined in \cite{MR2422377}, the negative part was significantly harder and the regularity of the counterexamples~\cite{MR1231007,MR1476315} was far below the threshold. 
Advances in the negative direction really took off with the modern convex integration approach, starting with the seminal paper of De Lellis and Székelyhidi Jr.~\cite{MR2600877}. The approach was refined and improved in a series of works~\cite{MR3090182,MR3254331,MR3374958,MR3530360}. Building upon these works, the threshold $C_t C^{1/3 - }$ was finally reached by Isett~\cite{MR3866888}, see also~\cite{1701.08678}. So far, the anomalous weak solutions constructed have limited regularity on the whole time axis: the H\"older regularity in space is always below $\frac{1}{3} $. The works~\cite{MR1298949,MR2422377} suggest that insisting on the exact ``$\frac{1}{3}$-H\"older regularity'' in space leads to $L^3$ being the right critical scale in time for energy conservation. This exact ``$\frac{1}{3}$-H\"older regularity'' seems to be out of reach for the previous Euler schemes, an issue that has been investigated recently by Isett~\cite{1706.01549}. Even though our inviscid solutions have a worse global-in-time regularity, they are smooth solutions on a ``large'' portion of the time axis, and hence the kinetic energy is conserved locally in time. The mechanisms behind the failure of energy conservation are completely different: fast spatial oscillations play a key role in the previous Euler examples, whereas here a strong temporal concentration causes the breakdown at small time scales. It would be very interesting to combine the previous Euler results with the current paper to show that there exist ``wild solutions'' in $C_t C^{\frac{1}{3} - } $ or $L^{3-}_t C^{\frac{1}{3} }$ that are locally smooth in time away from a small singular set. \subsection{Notations} For the reader's convenience, we collect the notations used throughout the manuscript. \begin{itemize} \item $\TT^d = \RR^d / \ZZ^d$ is the $d$-dimensional torus and is identified with $[0,1]^d$. 
For any function $f: \TT^d \to \RR$ we denote by $f(\sigma \cdot)$ the $ \sigma^{-1} \TT^d$-periodic function $f(\sigma x)$. The space $C^\infty_0(\TT^d)$ is the set of periodic smooth functions with zero mean and $ C^\infty_0(\TT^d, \RR^d )$ is the set of periodic smooth vector fields with zero mean. \item The Lebesgue space is denoted by $L^p$. For any $f \in L^1(\TT^d) $, its spatial average is $$ \fint_{\TT^d} f \,dx= \int_{\TT^d} f\,dx. $$ For any function $f:[0,T] \times \TT^d \to \RR $, we denote by $\| f(t) \|_p $ the Lebesgue norm on $\TT^d$ (in space only) at a fixed time $t$. If the norm is taken in space-time, we use $\|f \|_{L^p_{t,x}} $. \item The tensor divergence is $(\D A)_i = \partial_j A_{ij}$ for any matrix-valued function $A: \TT^d \to \RR^{d\times d}$, and the tensor product is $(f \otimes g)_{ij} = f_i g_j$ for any two vectors $f,g \in \RR^d$. The notation $\nabla$ indicates full differentiation in space only, and the space-time gradient is denoted by $ \nabla_{t,x}$. \item For any Banach space $X$, the Bochner space $L^p(0,T;X)$ is equipped with the norm $$ \Big( \int_{0}^T \| \cdot \|_X^p \, dt \Big)^\frac{1}{p}, $$ and we often use the short notations $L^p_t X$ and $\| \cdot \|_{L^p_t X}$. \item We write $X \lesssim Y$ if there exists a constant $C>0$ independent of $X$ and $Y$ such that $X \leq C Y $. If the constant $C$ depends on quantities $a_1,a_2,\dots,a_n$ we will write $X \lesssim_{a_1,\dots,a_n} Y$ or $X \leq C_{a_1,\dots ,a_n} Y $. \end{itemize} \subsection{Organization of the paper} The rest of the paper is organized as follows. The outline of the construction is given in Section \ref{sec:outline}, where the main theorems will be proved assuming the main proposition of the paper, Proposition \ref{prop:main}. 
The proof of the main proposition is the content of the rest of the paper: we concentrate the stress error to many small sub-intervals in Section~\ref{sec:proof_step_1}, design a velocity perturbation using convex integration to obtain a new solution pair $(u_1 , R_1)$ in Section \ref{sec:proof_step_2_convex_integration}, and finally estimate the perturbation along with the new stress error to conclude the proof in Section \ref{sec:proof_step_2}. Appendix \ref{sec:append_weak} includes a proof of Theorem \ref{thm:FJR_uniqueness}. Appendix \ref{sec:append_tech} contains some technical tools used in the paper, namely an improved H\"older's inequality and antidivergence operators on $\TT^d$. \section{Outline of the proof}\label{sec:outline} The proof of Theorem \ref{thm:main_thm_2} consists of an iterative scheme, which is achieved by repeatedly applying the main proposition of this paper, Proposition \ref{prop:main}, to obtain a sequence of solutions $(u_n, R_n)$ to \eqref{eq:NSR}. The proof has three main goals: \begin{enumerate} \item The convergence $u_n \to u$ in $L^2_{t,x}$ and $R_n \to 0$ in $L^1_{t,x}$, so that $u$ is a weak solution of \eqref{eq:NSE}. \item Ensuring the final solution verifies $u \in L^p_t L^\infty \cap L^1_t W^{1,q}$ for $p<2$ and $q < \infty$. \item Achieving a small dimension of the singular set of $u$ in time. \end{enumerate} To this end, we employ a two-step approach: \begin{itemize} \item Step 1: $(u_n, R_n) \xrightarrow{ \text{concentrating the stress error}} (\overline{u}_n , \overline{R}_n) $ \item Step 2: $ (\overline{u}_n , \overline{R}_n) \xrightarrow{\text{space-time convex integration} }( u_{n+1} , {R}_{n+1} ) $ \end{itemize} where the first step is mainly for achieving a small singular set in time and the second step is to ensure the convergence of $R_n$ to zero. \subsection{The Navier-Stokes-Reynolds system} Let us first introduce the approximate equations of \eqref{eq:NSE} solved by our approximate solutions. 
These approximate solutions solve the so-called Navier-Stokes-Reynolds system \begin{equation}\label{eq:NSR} \begin{cases} \partial_t u - \Delta u + \D(u \otimes u) + \nabla p = \D R &\\ \D u =0 \end{cases} \end{equation} where $R: [0,T]\times \TT^d \to \mathcal{S}^{d \times d}_0$ is a traceless symmetric matrix called the Reynolds stress. Since the associated pressure $p$ can be uniquely determined (imposing zero spatial mean) by the elliptic equation $$ \Delta p = \D \D R - \D\D( u \otimes u) = \partial_i \partial_j ( R_{ij} - u_i u_j) , $$ throughout the paper we denote the solution of \eqref{eq:NSR} by $(u,R)$. This system \eqref{eq:NSR} arises naturally when studying weak solutions of the Navier-Stokes equations: the Reynolds stress $R$ emerges from the noncommutativity between ensemble averaging and the quadratic nonlinearity. In the inviscid case, we can simply drop the Laplacian term in \eqref{eq:NSR} and the system becomes the so-called Euler-Reynolds system, which has been widely used in constructing non-conserving weak solutions of the Euler equations in the context of Onsager's conjecture~\cite{MR3090182,MR3254331,MR3374958,MR3530360,MR3866888,1701.08678}. Since the Laplacian plays no role in our construction, in what follows we simply use \eqref{eq:NSR}. \subsection{Concentrating the stress error} As stated in the introduction, we proceed in two steps to prove the main proposition. The first step is a procedure that concentrates the stress error into many smaller sub-intervals. Given $(u_{n-1},R_{n-1})$, we divide the time interval $[0,T]$ into smaller sub-intervals $I_i$ of length $\tau^{\varepsilon}>0$, where $\tau>0$ will be chosen very small depending on $(u_{n-1},R_{n-1})$. So the total number of sub-intervals is $\sim \tau^{-\varepsilon}$. On each sub-interval $I_i$, we solve a generalized Navier-Stokes system around $(u_{n-1},R_{n-1})$ to obtain a corrector $v_i$ on $I_i$. 
More precisely, $v_i: I_i \times \TT^d \to \RR^d$ solves \begin{equation} \begin{cases} \partial_t v_{i} - \Delta v_{i } + \D( v_{i} \otimes v_{i}) + \D(v_{i} \otimes u_{n-1} ) + \D( u_{n-1} \otimes v_{i} ) + \nabla q_i= -\D R_{n-1} &\\ \D v_i = 0 &\\ v_i(t_i) = 0, \end{cases} \end{equation} where $t_i$ denotes the left endpoint of $I_i$, so that $u_{n-1} + v_i$ is an exact solution (of the Navier-Stokes equations) on $I_i$. To concentrate the error and obtain a solution on $[0,T]$, we apply a sharp cutoff $\chi_i$ to the corrector $v_i$ and obtain the glued solution $ \overline{u}_{n-1}$ defined by $$ \overline{u}_{n-1} := u_{n-1} +\sum_{i} \chi_i v_i. $$ Specifically, each $\chi_i$ equals $1$ on a majority of the sub-interval $I_i$, but $ \chi_i=0 $ near the endpoints of $I_i$, at scale $\sim \tau$. Since $\varepsilon\ll 1$, the cutoff $\chi_i$ is very sharp compared to the length of the sub-interval $I_i$. On one hand, due to the sharp cutoff $ \chi_i$, the stress error $\overline{R}_{n-1}$ associated with $\overline{u}_{n-1}$ will only be supported near the endpoints of the $I_i$, at time scale $\tau$. In other words, the temporal support of $\overline{R}_{n-1}$ can be covered by $\sim \tau^{-\varepsilon}$ many intervals of size $\sim \tau$, from which one can already see a small dimension of the singular set of the final solution. On the other hand, the corrector $v_i$ is very small, say in $L^\infty_t H^d$, since it starts with initial data $0$ and we can choose the time scale $\tau^{\varepsilon} = |I_i|$ to be sufficiently small. More importantly, the new stress error $\overline{R}_{n-1}$ associated with $\overline{u}_{n-1}$ satisfies the estimate $$ \| \overline{R}_{n-1} \|_{L^1_t L^r} \lesssim \| {R}_{n-1} \|_{L^1_t L^r} \quad \text{for $1 < r< \infty$}, $$ with an implicit constant independent of the time scale $\tau>0$. In other words, concentrating the stress error $R_{n-1}$ into $ \overline{R}_{n-1}$ costs only a constant multiple when measured in the $L^1$ norm in time. 
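To see heuristically why this covering structure yields the dimension bound, suppose the singular set $\mathcal{S}$ of the final solution is covered, along a sequence of scales $\tau \to 0$, by $\sim \tau^{-\varepsilon}$ intervals of length $\sim \tau$. Then for any $s > \varepsilon$,
$$
\mathcal{H}^{s}(\mathcal{S}) \lesssim \lim_{\tau \to 0} \tau^{-\varepsilon} \cdot \tau^{s} = 0,
$$
so that $d_{\mathcal{H}}(\mathcal{S}) \leq \varepsilon$, in accordance with Theorem \ref{thm:main_thm_2}.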
\subsection{Space-time convex integration} The next step is to use a convex integration technique to reduce the size of $ \overline{R}_{n-1}$ by adding a further perturbation $w_n$ to $\overline{u}_{n-1}$, obtaining a new solution $(u_n, R_n)$ of \eqref{eq:NSR}. The perturbation $w_n$ and the new stress $R_n$ satisfy the equation $$ \D R_{n}= \D \overline{R}_{n-1} +\D(w_{n} \otimes w_{n} ) + \partial_t w_n -\Delta w_{n} + \D(\overline{u}_{n -1 } \otimes w_{n} ) + \D(w_{n} \otimes \overline{u}_{n -1} ) + \nabla P_n, $$ for a suitable pressure $P_n$. The heuristic is that the high-high to low cascade in space-time of $w_{n} \otimes w_{n}$ can balance the old stress error $\overline{R}_{n-1}$ in the sense that \begin{equation}\label{eq:outline_cascade} \D(\overline{R}_{n-1} + w_{n} \otimes w_{n} )= \text{High Spatial Freq. Term} + \text{High Temporal Freq. Term}+ \text{Lower Order Terms}, \end{equation} where the ``High Temporal Freq. Term'' above will further be balanced by a part of $\partial_t w_n$, as in~\cite{MR3898708,1809.00600} and~\cite{2004.09538}. However, one of the fundamental differences from~\cite{MR3898708,1809.00600} is that this additional ``convex integration in time'' requires no additional constraint of oscillation and concentration and is basically free, which is crucial to obtain the sharp regularity $L^p_t L^\infty \cap L^1_t W^{1,q}$. In particular, executing the scheme of \cite{2004.09538} requires two ingredients: \begin{enumerate} \item Suitable stationary flows as the spatial building blocks that can achieve some level of spatial concentration. \item The use of intermittent temporal functions to oscillate the spatial building blocks in time. \end{enumerate} Once $(1)$ is available, it is relatively straightforward to implement $(2)$. On the technical side, we use the stationary Mikado flows introduced in \cite{MR3614753} as the spatial building blocks. 
These are periodic pipe flows that can be arranged to be supported on periodic cylinders with a small radius. In other words, Mikado flows can achieve a $(d-1)$-dimensional concentration on $\TT^d$, which is more than enough in view of $(1)$. It is worth noting that in the framework of \cite{MR3898708}, stationary Mikado flows are not sufficiently intermittent to be used for the Navier-Stokes equations in dimension $d \leq 3$, cf. \cite{MR3951691}. The space-time cascade~\eqref{eq:outline_cascade} imposes a relation between the perturbation $w_n$ and the stress error $\overline{R}_{n-1}$: \begin{equation}\label{eq:outline_w_n} \|w_n \|_{L^2_{t,x}} \sim \| \overline{R}_{n-1} \|_{L^1_{t,x}}. \end{equation} The relation \eqref{eq:outline_w_n} will imply the convergence in $L^2_{t,x}$ of the approximate solutions $u_{n}$ as long as one can successfully reduce the size of the stress error: \begin{equation}\label{eq:outline_R_n} \| {R}_{n } \|_{L^1_{t,x}} \ll \| \overline{R}_{n-1} \|_{L^1_{t,x}} . \end{equation} In particular, special attention will be paid to estimating the temporal derivative part of the new stress error \begin{equation}\label{eq:outline_1} \D R_{\Tem } = \partial_t w_n, \end{equation} and achieving the regularity of the perturbation \begin{equation}\label{eq:outline_2} \| w_n\|_{L^p L^\infty } + \| w_n\|_{L^1 W^{1,q} } \ll 1 . \end{equation} These two constraints \eqref{eq:outline_1} and \eqref{eq:outline_2} require a very delicate choice of parameters when designing the perturbation $w_n$. On the one hand, \eqref{eq:outline_1} implies the temporal frequency cannot be too large relative to the spatial frequency, otherwise the time derivative will dominate. On the other hand, \eqref{eq:outline_2} requires a large temporal frequency so that temporal concentration can offset the loss caused by going from $L^2$ to $L^\infty$ or $W^{1,q}$ in space in relation to \eqref{eq:outline_w_n}. 
Nevertheless, it turns out that the scheme in \cite{2004.09538} is flexible enough to accommodate \eqref{eq:outline_1} and \eqref{eq:outline_2}. Let us briefly explain why this is possible. Roughly speaking, the method in \cite{2004.09538} is $L^2_{t,x}$-critical. While it is difficult for $w_n$ to go above the $L^2_{t,x}$ regularity, we trade $L^2$ for $L^p$ (resp. $L^1$) in time to obtain an improvement of $L^2$ to $L^\infty $ (resp. $W^{1,q}$) in space. We provide a scaling analysis below. \subsection{Oscillation and concentration} We perform this computation in general dimension $d \geq 2$, where $D \in [0,d]$ denotes the spatial intermittency dimension, cf. \cite{MR0495674,MR1428905}. We start with a velocity perturbation in $L^2_{t,x}$ with a certain decay given by the previous stress error, $$ \|w_n \|_{L^2_{t,x}} \lesssim 1. $$ Denote the spatial frequency by $ \lambda$ and the temporal frequency by $ \kappa$, namely $$ \|\partial_t^m \nabla^n w_n \|_{L^2_{t,x}} \lesssim \kappa^m \lambda^n. $$ The intermittency parameter $D$ in space dictates the concentration level of $w_n$ and the scaling law \begin{equation}\label{eq:outline_3} \| w_n (t) \|_{L^p} \lesssim \| w_n (t)\|_{L^2} \lambda^{(d-D)( \frac{1 }{2} - \frac{1}{p})}. \end{equation} As for the temporal scaling, we assume for simplicity a full concentration in time: \begin{equation}\label{eq:outline_4} \| w_n \|_{L^q L^p} \lesssim \| w_n \|_{L^2 L^p} \kappa^{\frac{1}{2} - \frac{1}{q}} \sim \kappa^{\frac{1}{2} - \frac{1}{q}} \lambda^{(d-D)( \frac{1 }{2} - \frac{1}{p})}. \end{equation} With such scaling laws, we effectively assume a negligible amount of temporal oscillation, and the goal then boils down to finding a working choice of $D$ in terms of the given parameters $d,p,q$. In other words, we need to find a balance between spatial oscillation and spatial concentration. 
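For orientation, the spatial scaling law \eqref{eq:outline_3} can be read off from a model profile (a heuristic normalization, not the precise building block used later): if at each time $w_n(t)$ is supported on a set of measure $\sim \lambda^{-(d-D)}$ on which $|w_n| \sim \lambda^{\frac{d-D}{2}} \| w_n(t)\|_{L^2}$, then
$$
\| w_n (t) \|_{L^p} \sim \Big( \lambda^{-(d-D)} \cdot \lambda^{\frac{p(d-D)}{2}} \Big)^{\frac{1}{p}} \| w_n (t)\|_{L^2} = \| w_n (t)\|_{L^2} \, \lambda^{(d-D)( \frac{1}{2} - \frac{1}{p})}.
$$
In particular, $D = d$ corresponds to spatially homogeneous profiles with no loss when passing from $L^2$ to $L^\infty$, while $D < d$ concentrates the mass on small sets.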
By the scaling relations \eqref{eq:outline_3} and \eqref{eq:outline_4}, the stress error contributed by the time derivative \eqref{eq:outline_1} satisfies \begin{equation}\label{eq:outline_5} \| \D^{-1}( \partial_t w_n) \|_{L^1_{t,x}} \lesssim \kappa^{ \frac{1}{2}} \lambda^{-1} \lambda^{- \frac{d-D}{2}} \end{equation} where we assume $ \D^{-1}$ gains one full derivative in space. The regularity condition \eqref{eq:outline_2} becomes \begin{equation}\label{eq:outline_6} \| w_n\|_{L^p L^\infty } \sim \kappa^{\frac{1}{2} -\frac{1}{p}} \lambda^{ \frac{d-D}{2}} \ll 1, \end{equation} and \begin{equation}\label{eq:outline_7} \| w_n\|_{L^1 W^{1,q} } \sim \kappa^{-\frac{1}{2}} \lambda^{1 + \frac{d-D}{2} - \frac{d-D}{q}} \ll 1. \end{equation} In particular, \eqref{eq:outline_5} and \eqref{eq:outline_7} imply that \begin{equation}\label{eq:outline_8} \lambda^{1 + \frac{d-D}{2} - \frac{d-D}{q}} \ll \kappa^{\frac{1}{2}} \ll \lambda^{1 + \frac{d-D}{2}}, \end{equation} which always has some room since $q<\infty$. Then all we need is to choose $D$ to ensure \eqref{eq:outline_6} and \eqref{eq:outline_8}. One can already see that $D$ should be very close to $d$, which means we need much more spatial oscillation than spatial concentration. Indeed, solutions to \eqref{eq:outline_6} and \eqref{eq:outline_8} do exist, and we refer to Section \ref{subsection:choice_of_parameters} for the exact choice used. \subsection{The main iteration proposition} We are ready to introduce the main iteration proposition of the paper, which materializes the above discussion. To simplify the presentation, let us introduce the notion of well-preparedness of solutions to \eqref{eq:NSR}, which encodes the small Hausdorff dimension of the singular set in time. Throughout the paper, we take $T=1$ and assume $0<\varepsilon<1$ without loss of generality. 
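Before turning to the iteration proposition, let us record a concrete compatibility check for the constraints \eqref{eq:outline_6} and \eqref{eq:outline_8} (with an illustrative parametrization; the precise choice is given in Section \ref{subsection:choice_of_parameters}). Writing $d - D = \theta$ for a small $\theta>0$ and $\kappa = \lambda^{2\alpha}$, the two constraints become, up to lower order factors,
$$
\alpha \Big( \frac{2}{p} - 1 \Big) > \frac{\theta}{2}
\qquad \text{and} \qquad
1 + \frac{\theta}{2} - \frac{\theta}{q} < \alpha < 1 + \frac{\theta}{2}.
$$
Since $p<2$ and $q<\infty$, the window in the second condition is nonempty, and for $\theta$ small the lower bound $\alpha > \frac{\theta p}{2(2-p)}$ from the first condition lies far below this window, so admissible parameters exist.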
\begin{definition} We say a smooth solution $(u, R)$ of \eqref{eq:NSR} on $[0,1]$ is well-prepared if there exist a set $I$ and a length scale $ \tau >0$ such that $I$ is a union of at most $ \tau ^{-\varepsilon}$ many closed intervals of length $5\tau $ and $$ R(t,x) =0 \quad \text{if} \quad \dist(t, I^c) \leq \tau . $$ \end{definition} With this definition, to ensure the solution $u$ has intervals of regularity with a small residue set of Hausdorff dimension $\lesssim \varepsilon$, it suffices to construct approximate solutions $(u_n ,R_n)$ that are well-prepared for some $I_n$ and $\tau_n$ such that $$ I_{n} \subset I_{n-1} \quad \text{and}\quad \tau_n \to 0. $$ The main proposition of this paper reads as follows. \begin{proposition}[Main iteration]\label{prop:main} For any $\varepsilon>0$, $1\leq p < 2$, and $q<\infty$, there exist a constant $M=M(\varepsilon,p)>0$ and $r>1$ depending only on $p$ and $q$ such that the following holds. Let $\delta>0$ and $(u,R)$ be a well-prepared smooth solution of \eqref{eq:NSR} for some set $\widetilde{I}$ and a length scale $ \widetilde{\tau }>0$. Then there exists another well-prepared smooth solution $(u_1,R_{1})$ of \eqref{eq:NSR} for some set $I \subset \widetilde{I}$ with $ 0,1 \notin I $ and $\tau < \widetilde{\tau}/2$ such that $$ \|R_{1} \|_{L^1(0,1; L^r( \TT^d ))} \leq \delta . $$ Moreover, the velocity perturbation $w : = u_{1} - u $ satisfies: \begin{enumerate} \item The temporal support of $w$ is contained in $I$, i.e. $$ \Supp w \subset I \times \TT^d; $$ \item The $L^2 L^2 $ estimate, $$ \|w \|_{L^2([0,1] \times \TT^d )} \leq M \| R \|_{L^1([0,1] \times \TT^d )} ; $$ \item The $L^{p} L^\infty \cap L^{1} W^{1,q} $ estimate, $$ \|w \|_{L^{p }(0,1; L^\infty (\TT^d))} + \|w \|_{L^{1 }(0,1; W^{1,q} (\TT^d))} \leq \delta . $$ \end{enumerate} \end{proposition} \begin{remark} We list a few comments concerning the main proposition. 
\begin{enumerate} \item It will be clear that the construction requires no change for the Euler equations, except dropping the Laplacian. \item The parameter $r>1$ is used to ensure the $L^r$ boundedness of the Calder\'on-Zygmund singular integral and is very close to $1$. \item Due to the local well-preparedness, on large portions of the time axis the solutions are exact solutions of the Navier-Stokes equations (or the Euler equations) and we do not touch them in the future. \end{enumerate} \end{remark} We will break down Proposition \ref{prop:main} into two separate propositions, whose proofs will be the content of Section \ref{sec:proof_step_1} and, respectively, Sections \ref{sec:proof_step_2_convex_integration} and \ref{sec:proof_step_2}. \subsection{Proof of main theorems} We first deduce Theorem \ref{thm:main_short} and Theorem \ref{thm:main_sharp_short} from Theorem \ref{thm:main_thm_2}. \begin{proof}[Proof of Theorem \ref{thm:main_short}] Given a weak solution $v$ (in the sense of Definition \ref{def:weak_solutions}) of the 2D Navier-Stokes equations on $[0,T]$, we need to construct a different weak solution $u$ with the same initial data. For brevity, we take $T=1$. \noindent {\bf Case 1}: $v$ is a Leray-Hopf solution. Since $ {v}$ is Leray-Hopf on $[0, 1]$, $ {v}$ is in fact smooth on $(0, 1]$. Let $\widetilde{v}:[1/2,1] \times \TT^2 \to \RR^2 $ be a smooth divergence-free vector field that coincides with $v$ on $[1/2, 3/4]$ but \begin{equation}\label{eq:proof_thm1.5_1} \| \widetilde{v} - v \|_{L^p([1/2,1]; L^\infty(\TT^2))} \geq 1 . \end{equation} Then we can apply Theorem \ref{thm:main_thm_2} to $ \widetilde{v} $ with $\varepsilon<1$ to obtain a weak solution $\widetilde{u} $ on $[ 1/2 ,1] \times \TT^2$. The conclusion follows once we define the weak solution $u:[0,1]\times \TT^2 \to \RR^2$ by \begin{equation*} u = \begin{cases} {v} & \text{if $t \in [0, 1/2 ]$}\\ \widetilde{u} & \text{if $t \in [1/2,1]$}. 
\end{cases} \end{equation*} Indeed, the new glued solution $u$ is still a weak solution of \eqref{eq:NSE} and moreover, by \eqref{eq:proof_thm1.5_1}, $$ \|u -v \|_{L^p([1/2,1]; L^\infty)} \geq \| \widetilde{v} - v \|_{L^p([1/2,1]; L^\infty)}- \| \widetilde{u} -\widetilde{v} \|_{L^p([1/2,1]; L^\infty)} \geq 1-\varepsilon >0, $$ which implies $u \neq v$. \noindent {\bf Case 2}: $v$ is not Leray-Hopf. Since the initial data $v_0 \in L^2 (\TT^2)$, we may simply use $v_0 $ as initial data to obtain a Leray-Hopf solution $u$ on $[0,1]$, which necessarily differs from $v$ since $v$ is not Leray-Hopf. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:main_sharp_short}] To prove both points, it suffices to show that given a weak solution $v \in L^p(0,1; L^\infty(\TT^d))$ of \eqref{eq:NSE} with at least one interval of regularity, there exists a non-Leray-Hopf weak solution $u \in L^p(0,1; L^\infty(\TT^d))$ having intervals of regularity and with the same initial data. Let $[a,b]$ be an interval of regularity of the weak solution $v$. We first choose a smooth, divergence-free vector field $\widetilde{v}:[a,1]\times \TT^d \to \RR^d $ such that $$ \widetilde{v}|_{ [a,b] \times \TT^d} \equiv v|_{ [a,b] \times \TT^d}, $$ but \begin{equation}\label{eq:proof_thm1.6_1} \|\widetilde{v} - v \|_{L^p( [a,1]; L^\infty(\TT^d) )} \geq 1 \quad \text{and} \quad \|\widetilde{v} \|_{L^p( [a,1]; L^2(\TT^d) )} \geq 1+ \| v(0)\|_{L^2(\TT^d) }. \end{equation} As in the proof of Theorem \ref{thm:main_short}, we apply Theorem \ref{thm:main_thm_2} to $ \widetilde{v}$ with $\varepsilon<1$ to obtain a weak solution $\widetilde{u} \in L^p( a,1 ; L^\infty(\TT^d)) $ such that \begin{equation}\label{eq:proof_thm1.6_2} \|\widetilde{u} -\widetilde{v} \|_{L^p( [a,1]; L^\infty(\TT^d) )} \leq \varepsilon . \end{equation} We then define a new solution $u: [0,1] \times \TT^d \to \RR^d$ by \begin{equation*} u = \begin{cases} v & \text{if $t \in [0,a]$}\\ \widetilde{u} & \text{if $t \in [a, 1]$}.
\end{cases} \end{equation*} The glued solution $u \in L^p L^\infty$ is still a weak solution due to the smoothness of both $v$ and $\widetilde{u} $ near $t =a$. Next, $u$ and $v$ are different since $$ \| u -v \|_{L^p L^\infty } = \|\widetilde{u} -v \|_{L^p ([a,1] ;L^\infty )} \geq \|\widetilde{v} - v \|_{L^p ([a,1] ;L^\infty )} - \|\widetilde{u} - \widetilde{v} \|_{L^p ([a,1] ;L^\infty )} \geq 1 -\varepsilon , $$ where we have used \eqref{eq:proof_thm1.6_1} and \eqref{eq:proof_thm1.6_2}. Finally, $u$ cannot be a Leray-Hopf solution since $$ \| u \|_{L^p_t L^2} \geq \|\widetilde{u} \|_{L^p ([a,1] ;L^2 )} \geq \|\widetilde{v} \|_{L^p ([a,1] ;L^2 )} -\|\widetilde{u} - \widetilde{v} \|_{L^p ([a,1] ;L^\infty )} > \|v(0) \|_2 $$ and Leray-Hopf solutions must have non-increasing $L^2$ norm. \end{proof} Next, we prove Theorem \ref{thm:main_euler_onsager} in the case of the Euler equations. \begin{proof}[Proof of Theorem \ref{thm:main_euler_onsager}] We first choose an infinite sequence of smooth divergence-free vector fields $v_n \in C^\infty_0([0,1] \times \TT^d) $ such that: \begin{enumerate} \item On $[0, \frac{1}{2}]$, the $v_n$ all coincide and are equal to an exact solution of the Euler equations; \item Each $v_n$ satisfies $$ \| v_n \|_{L^p([0,1]; L^\infty)}= 2^n. $$ \end{enumerate} Since Theorem \ref{thm:main_thm_2} holds for the Euler equations as well, we can apply it to each $v_n$ with some $\varepsilon< \frac{1}{2}$. Then we obtain infinitely many weak solutions $u_n \in L^p L^2 \cap L^1 W^{1,q}$ of the Euler equations. On one hand, since the $v_n$ agree with each other and solve the Euler equations on $[0,1/2]$, by Theorem \ref{thm:main_thm_2}, for any $n \geq 1$, there exists $\tau_n>0$ such that $u_n$ coincides with $v_n $ on $[0,\tau_n]$\footnote{In fact, $\tau_n$ can be taken to be $1/2$ if we use Proposition \ref{prop:main} instead of Theorem \ref{thm:main_thm_2}.}. So every weak solution $u_n$ has the same initial data.
On the other hand, these weak solutions $u_n$ are different since for any $n > m$, $$ \|u_n - u_{m}\|_{L^p L^\infty} \geq \|v_n - v_{m}\|_{L^p L^\infty} - \|u_n - v_{n}\|_{L^p L^\infty} - \|u_m - v_{m}\|_{L^p L^\infty} \geq 2^{n} - 2^{m} - 2\varepsilon > 0, $$ where we have used that $ \varepsilon< \frac{1}{2}$ and that $\|v_n - v_{m}\|_{L^p L^\infty} \geq \| v_n \|_{L^p L^\infty} - \| v_m \|_{L^p L^\infty} = 2^n - 2^m$. It remains to show the regularity $u_n \in L^{3/2 -\varepsilon} C^{1/3} \cap L^1 C^{1-\varepsilon} $. This can be done by standard interpolation. Since for any $\varepsilon>0$ there exists $q<\infty$ such that the embedding $$ W^{1,q}(\TT^d) \hookrightarrow C^{1-\varepsilon}(\TT^d) $$ holds, we get $u_n \in L^1 C^{1-\varepsilon} $. The bound $ u_n \in L^{3/2 -\varepsilon} C^{1/3}$ can then be obtained by interpolating $L^p L^\infty$ with $L^1 C^{1-\varepsilon}$ (with different values of $\varepsilon$). \end{proof} Finally, we prove Theorem \ref{thm:main_thm_2} and Theorem \ref{thm:main_euler} assuming Proposition \ref{prop:main}. \begin{proof}[Proof of Theorem \ref{thm:main_thm_2} and Theorem \ref{thm:main_euler}] Let $u_0 = v $ and $$ R_0 = \mathcal{R}\Big( \partial_t u_0 -\Delta u_0 + \D(u_0 \otimes u_0 ) \Big) $$ where $\mathcal{R}$ is an inverse divergence operator on $\TT^d$ defined in Appendix \ref{sec:append_tech}. Since the given vector field $v$ has zero spatial mean for each $t\in [0,1]$, $(u_0 ,R_0)$ solves \eqref{eq:NSR} trivially and is well-prepared for $I=[0,1]$ and $\tau=1$. We construct a sequence of solutions $(u_n ,R_n)$ for $n \in \NN$ as follows. Given $(u_{n-1} ,R_{n-1})$, we apply Proposition~\ref{prop:main} with the parameter $$ \delta_n := 2^{-n} \min\left\{ \|R_{n-1} \|_{L^1 L^r} , \varepsilon \right\} $$ to obtain $(u_{n} ,R_{n} )$. Denote the perturbations by $w_{n } : =u_{n } - u_{n-1}$ for $n \geq 1 $. As a result, we have $$ \| R_n \|_{L^1_{t,x}} \leq \| R_n \|_{L^1_t L^r} \leq \delta_{n}, $$ and \[ \|w_{n} \|_{L^p_t L^\infty} + \|w_{n} \|_{L^1_t W^{1,q}} \leq \delta_{n}, \] for any $n \geq 1 $.
Also, \begin{align*} \|w_{n} \|_{L^2_t L^2} \leq M \| R_{n-1} \|_{L^1_t L^1} \leq M \delta_{n-1}, \end{align*} for any $n \geq 2$. These bounds show that $(u_n)$ is Cauchy in $L^2_{t,x} \cap L^p_t L^\infty \cap L^1_t W^{1,q}$, so there exists $u \in L^2_{t,x} \cap L^p_t L^\infty \cap L^1_t W^{1,q} $ such that $$ u_n \to u \quad \text{in} \quad L^2_{t,x} \cap L^p L^\infty \cap L^1 W^{1,q}. $$ To show that $u$ is a weak solution of \eqref{eq:NSE}, we need to verify the a.e. in time divergence-free condition and the weak formulation. As we will show below, the Lebesgue measure of the singular set in time is zero, so $u$ is a.e. in time divergence-free by construction. Now we show that the weak formulation holds. Indeed, take any $\varphi \in \mathcal{D}_T$; then using the weak formulation of \eqref{eq:NSR} or integrating by parts we have $$ \int_{\TT^d} u_n(0,x) \varphi(0,x ) \, dx = - \int_{[0,1]\times \TT^d} (u_n\cdot \Delta \varphi + u_n\otimes u_n : \nabla \varphi + u_n \cdot \partial_t \varphi) \, dx dt + \int_{[0,1]\times \TT^d} R_n : \nabla \varphi \, dx dt . $$ Since $u_n \to u $ in $L^2_{t,x}$, $R_n \to 0$ in $L^1_{t,x}$, and $u_n(0,x) \equiv v(0,x)$ for all $n \geq 1$, it follows that all terms above converge to their natural limits and hence $ u$ is a weak solution of \eqref{eq:NSE}. Moreover, $$ \| u -v \|_{L^p_t L^\infty \cap L^1_t W^{1,q}} \leq \sum_{n\geq 1}\left[ \| w_n \|_{L^p_t L^\infty } + \| w_n \|_{L^1_t W^{1,q} }\right] \leq \sum_{n\geq 1} 2^{-n} \varepsilon \leq \varepsilon. $$ Finally, we show the structure of the intervals of regularity of $u$. Recall that each solution $(u_n , R_n)$ is well-prepared, so let us denote by $\mathcal{B}_n \subset [0,1] $ and $ \tau_n >0$ the set and length scale of the well-preparedness of $(u_n , R_n)$. Let \[ \mathcal{I} = \bigcup_{n \geq 0} \mathcal{B}_n^c \setminus \{0,1\}, \] where the complement is taken in $[0,1]$.
Note that $\mathcal{B}_{n}^c \subset (\Supp w_n)^c $ for all $n \geq 1$, and $\mathcal{B}_n^c\setminus \{0,1\} $ are monotonically increasing open sets. Therefore $w_k(t)\equiv 0$ on $\mathcal{B}_{n}^c$ for all $k\geq n$, and hence $u(t) \equiv u_n(t)$ on $\mathcal{B}_{n}^c$ for each $n$. Since $u_n$ is smooth, this proves that $u|_{\mathcal{I}\times \TT^d }\in C^\infty(\mathcal{I} \times \TT^d)$. By construction, each $u_n$ also agrees with the others for $0 \leq t \leq \tau_1 /2$, and hence $u$ agrees with the unique smooth solution with initial data $v(0)$ near $t=0$. Finally, since each $\mathcal{B}_{n}$ is a finite union of closed intervals, we can write $\mathcal{I} $ as a union of countably many open intervals: $$ \mathcal{I} = \bigcup_{i\geq 0} (a_i, b_i). $$ For the Hausdorff dimension bound of $[0,1] \setminus\mathcal{I}$, we notice that $$ (0,1) \setminus\mathcal{I} = \bigcap_{n \geq 0} \mathcal{B}_n = \limsup_n \mathcal{B}_n . $$ Since each $\mathcal{B}_n $ is covered by at most $\tau_n^{-\varepsilon}$ many balls of radius $5 \tau_n $, for any $s > \varepsilon$ the $s$-dimensional Hausdorff pre-measure of $\bigcap_n \mathcal{B}_n$ is at most $\tau_n^{-\varepsilon} (10 \tau_n)^s \to 0$ as $n \to \infty$, and hence $$ d_{\mathcal{H}} (\limsup_n \mathcal{B}_n ) \leq \varepsilon . $$ \end{proof} \section{Concentrating the stress error}\label{sec:proof_step_1} The goal of this section is to prove Proposition \ref{prop:main_step_1} below. The idea is that given a solution $(u,R)$ of \eqref{eq:NSR}, we can add a small correction term to the existing solution $(u,R)$ so that all of the stress error $R$ concentrates on a set $I$, the union of small intervals of length $\tau$, and we thus obtain a new solution $( \overline{u}, \overline{R})$.
The key is that the procedure $$ ( u , R ) \rightarrow ( \overline{u} ,\overline{R} ) $$ leaves the size of the stress error $R$, measured in $L^1$ in time, invariant up to a constant multiple, namely \begin{equation}\label{eq:R_concentration_loss} \| \overline{R} \|_{L^1 L^r} \leq C \| R \|_{L^1 L^r} \quad \text{for any $1 < r < \infty$}, \end{equation} where $C=C(r,\varepsilon) $ is a universal constant that only depends on $r$ and the $\varepsilon>0$ in the well-preparedness. \begin{proposition}\label{prop:main_step_1} Let $ 0 < \varepsilon <1$ and $(u,R)$ be a well-prepared smooth solution of \eqref{eq:NSR} for some set $\widetilde{I}$ and a length scale $ \widetilde{\tau} >0$. For any $ 1 < r <\infty$, there exists a universal constant $C=C(r,\varepsilon)>0$ such that the following holds. For any $\delta>0$, there exists another well-prepared smooth solution $(\overline{u},\overline{R})$ of \eqref{eq:NSR} for some set $I \subset \widetilde{I}$ with $0 ,1 \not \in I$ and a length scale $\tau <\widetilde{\tau}/2$ satisfying the following. \begin{enumerate} \item The new stress error $\overline{R}$ satisfies $$ \overline{R}(t,x) = 0 \quad \text{if } \,\,\dist(t,I^c) \leq \frac{3\tau}{2} , $$ and $$ \|\overline{R} \|_{L^1(0,1; L^r( \TT^d ))} \leq C \|R \|_{L^1(0,1; L^r( \TT^d ))} ; $$ \item The velocity perturbation $ \overline{w} : = \overline{u} - u $ satisfies $$ \Supp \overline{w} \subset\widetilde{I} \times \TT^d, $$ and $$ \|\overline{w} \|_{L^{\infty}(0,1; H^d (\TT^d))} \leq \delta. $$ \end{enumerate} \end{proposition} Note that the bound $\dist(t,I^c) \leq \frac{3\tau}{2} $, slightly stricter than the one in the definition of well-preparedness, leaves room for the convex integration scheme in the next section. \subsection{Subdividing the time interval} We first introduce a subdivision of the time interval $[0,1]$.
Then on each sub-interval $[t_i, t_{i+1}]$, we solve a generalized Navier-Stokes system (or Euler system in the inviscid case) to obtain a solution $v_i$ so that $u+v_i$ is an exact solution of the Navier-Stokes equations on $[t_i, t_{i+1}]$. Let $\tau >0$ be a small length scale to be fixed at the end of this section and define for $ 0 \leq i \leq \lfloor \tau^{-\varepsilon} \rfloor $ $$ t_i = i \tau^{\varepsilon}. $$ Without loss of generality, we assume $\tau^{-\varepsilon}$ is always an integer so that the time interval $[0,1]$ is perfectly divided. For $ 0 \leq i \leq \tau^{-\varepsilon} -1 $, let $v_i : [t_i, t_{i+1}] \times \TT^d \to \RR^d$ and $q_i : [t_i, t_{i+1}] \times \TT^d \to \RR $ be the solution of the following generalized Navier-Stokes system \begin{equation}\label{eq:concentrator} \begin{cases} \partial_t v_{i} - \Delta v_{i } + \D( v_{i} \otimes v_{i}) + \D(v_{i} \otimes u ) + \D( u \otimes v_{i} ) + \nabla q_i= -\D R &\\ \D v_i = 0 &\\ v_i(t_i) = 0. \end{cases} \end{equation} Since the initial data for $v_i$ is zero and $u$ and $R$ are smooth on $[0,1]\times \TT^d$, thanks to the local well-posedness theory of the Navier-Stokes equations (or the Euler equations in the inviscid case), for all sufficiently small $\tau>0$ we may solve equation \eqref{eq:concentrator} on the intervals $t \in [t_i ,t_{i+1}]$ to obtain unique smooth solutions $v_i$. We shall focus on estimating each $v_i$ on the associated interval $[t_i, t_{i+1}]$. The solution $v_i$ serves as an ``accumulator'' of the stress error on $[t_i, t_{i+1}]$, and it will provide the major contribution to the new stress error $\overline{R}$ once we use a gluing procedure. Recall that $\mathcal{R} : C^\infty(\TT^d ,\RR^d) \to C^\infty(\TT^d, \mathcal{S}^{d \times d }_0)$ is an inverse divergence operator on $\TT^d$ defined in Appendix \ref{sec:append_tech}. The following result quantifies the size of the corrector $v_i$ in relation to the time scale $\tau$ and the forcing $-\D R$.
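Before stating the precise estimates, let us record the leading-order heuristic behind \eqref{eq:concentrator}; this is only a heuristic, made rigorous in Proposition \ref{estimates_on_vi} below. Discarding the quadratic and interaction terms and applying the Leray projection $\mathbb{P}$, Duhamel's formula suggests
$$
v_i(t) \approx - \int_{t_i}^{t} e^{(t-s)\Delta} \mathbb{P} \D R(s) \, ds ,
$$
so that $v_i$ grows essentially linearly in time at a rate governed by the forcing $\D R$ (in the inviscid case the heat semigroup is simply dropped). Consequently, since $\mathcal{R} \mathbb{P} \D$ is a Calder\'on-Zygmund operator bounded on $L^r(\TT^d)$ for $1<r<\infty$, one expects
$$
\| \mathcal{R} v_i (t) \|_{L^r(\TT^d)} \lesssim_r \int_{t_i}^{t_{i+1}} \| R(s) \|_{L^r(\TT^d)} \, ds ,
$$
which is precisely the shape of the bound proved below.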
\begin{proposition} \label{estimates_on_vi} Let $d\geq 2$ and $(u,R)$ be a smooth solution of \eqref{eq:NSR}. For any $1< r <\infty$, there exists a universal constant $C_r>0$ depending on $r$ so that the following holds. For any $\delta>0$, if $\tau>0$ is sufficiently small, then the unique smooth solution $v_i$ of \eqref{eq:concentrator} on $[t_i, t_{i+1}]$ satisfies $$ \| v_i \|_{L^\infty ( [t_i , t_{i+1}]; H^d(\TT^d)) } \leq \delta, $$ and $$ \| \mathcal{R} v_i \|_{L^\infty( [t_i , t_{i+1}]; L^r(\TT^d)) } \leq C_r \int_{[t_i, t_{i+1}]}\| R (t )\|_{ L^r } \, dt + C_u \delta \tau^{\varepsilon} , $$ where $C_u$ is a sufficiently large constant depending on $u$ but not on $\delta$ or $\tau$. \end{proposition} \begin{proof} The first estimate follows from the local well-posedness theory: since $v_i(t_i) = 0$ while $u$ and $R$ are uniformly smooth on $[0,1]$, the solution $v_i$ remains arbitrarily small in $H^d$ provided the interval length $\tau^{\varepsilon}$ is sufficiently small. Assuming $\delta>0$ is sufficiently small, we prove the second estimate as follows. To lighten notation, we simply write $v$ for $v_i$ and denote $z : = \mathcal{R} v $. Note that \eqref{eq:concentrator} preserves the zero-mean condition. Denote by $\mathbb{P}$ the Leray projection onto the divergence-free vector fields on $\TT^d$. Once the pressure is eliminated by projecting \eqref{eq:concentrator}, the evolution of $z$ is governed by \begin{align*} \partial_t z -\Delta z = F, \end{align*} where \begin{align*} F = - \mathcal{R} \mathbb{P}\D ( v\otimes v + u \otimes v + v\otimes u ) - \mathcal{R} \mathbb{P} \D R . \end{align*} Since $1 < r <\infty$ and $\mathcal{R}\mathbb{P}\D$ is a Calder\'on-Zygmund operator on $L^r(\TT^d) $, a standard energy method yields \begin{align*} \|z(t) \|_{L^r(\TT^d)} \leq C_r \int_{t_i }^{t}\left(\| R (s)\|_r + \|v\otimes v \|_r + \|u\otimes v \|_r + \|v\otimes u \|_r\right)\, ds \quad \text{for all $t \in [t_i , t_{i+1}]$,} \end{align*} where the constant $ C_r$ depends only on $r>1$ and the dimension $d$.
Using the obtained estimate on $v$ and the embedding $H^d(\TT^d) \subset L^\infty(\TT^d)$, we get \begin{align*} \|z \|_{L^\infty( [t_i , t_{i+1}]; L^r(\TT^d)) } & \leq C_r \int_{t_i }^{t_{i+1}}\left(\| R (t)\|_r + \|v \|_{L^\infty( [t_i , t_{i+1}]; L^r(\TT^d)) }^2 + \| u \|_{L^\infty_{t,x }} \|v \|_{L^\infty( [t_i , t_{i+1}]; L^r(\TT^d)) }\right) \, dt . \end{align*} Since $ t_{i+1} - t_i = \tau^{\varepsilon}$, it follows that \begin{align*} \|z \|_{L^\infty( [t_i , t_{i+1}]; L^r(\TT^d)) } &\leq C_r \int_{t_i }^{t_{i+1}}\| R (t)\|_r \, dt + C_r \tau^{\varepsilon} \delta( \delta + \|u \|_{L^\infty_{t,x}} ). \end{align*} \end{proof} \subsection{Temporal concentration by sharp gluing} Since $u+v_i$ is an exact solution of the Navier-Stokes equations on each interval $[t_{i} , t_{i+1}]$, $0 \leq i\leq \tau^{-\varepsilon} - 1$, the next step is to suitably glue the $v_i$ together so that the glued solution $ u+ \sum \chi_i v_i$ is still an exact solution on most of the time interval $[0,1]$, with an error supported on many small disjoint sub-intervals. We first choose the cutoff functions that will be used to glue the $v_i$ together.
We define $\chi_i \in C^\infty_c(\RR) $ to be a smooth cutoff such that when $1 \leq i \leq \tau^{-\varepsilon} -2 $, \begin{equation} \label{def:chi_i_1} \chi_i = \begin{cases} 1 & \text{if $t_i + \tau \leq t \leq t_{i+1} - \tau $ }\\ 0& \text{if $t \leq t_i + \tau/2 $ or $ t \geq t_{i+1} - \tau/2 $, } \end{cases} \end{equation} and when $ i= 0 $, \begin{equation} \label{def:chi_i_2} \chi_i = \begin{cases} 1 & \text{if $0 \leq t \leq t_{i+1} - \tau $ }\\ 0& \text{if $ t \geq t_{i+1} - \tau/2 $,} \end{cases} \end{equation} and when $ i= \tau^{-\varepsilon}-1 $, \begin{equation} \label{def:chi_i_3} \chi_i = \begin{cases} 1 & \text{if $t_i + \tau \leq t \leq 1$ }\\ 0& \text{if $ t \leq t_{i} + \tau/2 $.} \end{cases} \end{equation} In other words, we do not cut near the endpoints $t = 0 $ and $t=1$, and the glued solution $\overline{u}$ is an exact solution for a short time near $t=0$ and $t=1$. It is worth noting that in the iteration scheme $v_i$ for $i=0$ or $i=\tau^{-\varepsilon}-1$ will be zero after the first step (since $u$ is already an exact solution of \eqref{eq:NSE} there), so \eqref{def:chi_i_2} and \eqref{def:chi_i_3} are only used once. Furthermore, we require that the bounds \begin{equation*} |\nabla^m \chi_i | \lesssim_m \tau^{-m}, \end{equation*} hold uniformly in $\tau$ and $i$. Note that for the sub-intervals $[t_i,t_{i+1}]$, $1 \leq i \leq \tau^{-\varepsilon}-2 $, we cut near both the left and the right endpoint of $[t_i , t_{i+1}]$. The left cutoff is to ensure smoothness near $t_i$, since each $v_i$ only has a limited amount of time regularity at $t=t_i$, whereas the right cutoff is where the intended gluing takes place. With $\chi_i$ in hand, we can simply define the glued solution as $$ \overline{u} : = u + \sum_{i} \chi_i v_i =u + \overline{w}. $$ It is clear that $\overline{u} : [0,1] \times \TT^d \to \RR^d$ has zero spatial mean and is also divergence-free.
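For concreteness, one standard realization of the cutoffs $\chi_i$ above (any choice with the stated properties works) is by mollification: for the interior indices $1 \leq i \leq \tau^{-\varepsilon}-2$, set
$$
\chi_i := \eta_{\tau/8} * \mathbf{1}_{[t_i + \frac{3\tau}{4} ,\, t_{i+1} - \frac{3\tau}{4}]} ,
$$
where $\eta_{\tau/8}$ is a standard mollifier at scale $\tau/8$ on $\RR$. Then $\chi_i = 1$ on $[t_i + \tau , t_{i+1} - \tau]$, $\chi_i = 0$ outside $[t_i + \tau/2 , t_{i+1} - \tau/2]$, and
$$
|\nabla^m \chi_i | \leq \| \nabla^m \eta_{\tau/8} \|_{L^1(\RR)} \lesssim_m \tau^{-m},
$$
uniformly in $i$; the boundary cutoffs $\chi_0$ and $\chi_{\tau^{-\varepsilon}-1}$ are obtained analogously by mollifying one-sided indicator functions.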
It remains to show that $\overline{u}$ satisfies the properties listed in Proposition \ref{prop:main_step_1}. Heuristically, $\overline{u}$ should be an exact solution with a stress error supported on smaller intervals of size $ \tau$. To confirm this claim, we must compute the stress error $\overline{R}$ associated with $\overline{u}$. Since the supports of the $\chi_i $ are pairwise disjoint, we can compute \begin{align*} \partial_t \overline{u} - \Delta \overline{u} + \D(\overline{u} \otimes \overline{u}) + \nabla p &= \D R + ( \partial_t -\Delta )\sum_{i} \chi_i v_i \\ & \quad + \sum_{i} \chi_i \D(u \otimes v_i ) + \sum_{i} \chi_i \D( v_i \otimes u ) \\ & \qquad + \sum_{i} \chi_i^2 \D(v_i \otimes v_i ). \end{align*} Thus, using the fact that $v_i$ solves \eqref{eq:concentrator} on $[t_i,t_{i+1}]$ and $u$ solves \eqref{eq:NSR} on $[0,1]$, we have \begin{align*} \partial_t \overline{u} - \Delta \overline{u} + \D(\overline{u} \otimes \overline{u} ) + \nabla p &= \D R + \sum_{i} \partial_t \chi_i v_i + \sum_{i} (\chi_i^2 -\chi_i ) \D(v_i \otimes v_i ) \\ & \qquad + \sum_{i} \chi_i \big( \partial_t v_i -\Delta v_i +\D(v_i \otimes v_i )+ \D(u \otimes v_i ) + \D(v_i \otimes u ) \big) \\ & = (1 - \sum_i \chi_i ) \D R + \sum_{i} \partial_t \chi_i v_i + \sum_{i} (\chi_i^2 -\chi_i ) \D(v_i \otimes v_i ) - \sum_i \chi_i \nabla q_i. \end{align*} Now let \begin{equation}\label{eq:def_concentrated_stressR} \overline{R} : = (1 - \sum_i \chi_i ) R + \mathcal{R} \sum_{i} \partial_t \chi_i v_i + \sum_{i} (\chi_i^2 -\chi_i ) v_i \mathring{\otimes} v_i, \end{equation} where $ \mathring{\otimes}$ denotes the traceless tensor product, i.e. $(f \mathring{\otimes} g)_{ij} = f_i g_j - \frac{1}{d}\delta_{ij} f_k g_k $.
Since each $v_i$ has zero spatial mean, $\D \mathcal{R} v_i = v_i$, and we can conclude that $$ \partial_t \overline{u} - \Delta \overline{u} + \D(\overline{u} \otimes \overline{u} ) +\nabla \overline{p} = \D\overline{R}, $$ where the pressure $\overline{p} : [0,1]\times \TT^d \to \RR$ is defined by $$ \overline{p} = p +\sum_{i} \chi_i q_i - \sum_i( \chi_i^2 -\chi_i) \frac{|v_i|^2}{d } . $$ The last step is then to show that the new Reynolds stress $\overline{R}$ is comparable to the original one when measured in $L^1_t$. It is clear that $\overline{R}$ is much more ``turbulent'' than the original $R$, as its value changes much more drastically due to the sharp cutoffs $\chi_i$ near the endpoints of each interval $[t_i , t_{i+1} ]$. The heuristic is that if $\tau$ is small enough, then $v_i$ behaves linearly with a rate $\sim \D R$, and thus gluing the $v_i$ together only records the input from the stress forcing $\D R$. More precisely, the leading order term in \eqref{eq:def_concentrated_stressR} is the second one, in which $\mathcal{R} v_i$ is controlled by $ R $ thanks to Proposition \ref{estimates_on_vi}. \begin{proposition} For any $1< r< \infty$, there exists a universal constant $C_r $ depending on $ r $ and $\varepsilon$ such that for all sufficiently small $\tau > 0$, the glued solution $( \overline{u} , \overline{R})$ satisfies $$ \| \overline{R} \|_{L^1_t L^r} \leq C_r \| R \|_{L^1_t L^r} . $$ \end{proposition} \begin{proof} By the triangle inequality, we need to estimate $$ \| \overline{R} \|_{L^1_t L^r} \leq \Big\| (1 - \sum_i \chi_i ) R \Big\|_{L^1_t L^r} + \sum_{i} \big\| \partial_t \chi_i \mathcal{R} v_i \big\|_{L^1_t L^r} + \Big\| \sum_{i} (\chi_i -\chi_i^2 ) v_i \mathring{\otimes} v_i \Big\|_{L^1_t L^r}. $$ The idea is to treat the first and the last terms as small errors.
By H\"older's inequality, we get $$ \| \overline{R} \|_{L^1_t L^r} \leq \Big\| 1 - \sum_i \chi_i \Big\|_{L^1([0,1])} \| R \|_{L^\infty_t L^r} + \sum_{i} \| \partial_t \chi_i \|_{L^1([0,1])} \| \mathcal{R} v_i \|_{L^\infty_t L^r} + \sum_{i} \| (\chi_i -\chi_i^2 ) \|_{L^1([0,1])} \| v_i \mathring{\otimes} v_i \|_{L^\infty_t L^r}. $$ By the definition of the cutoff $\chi_i$, we have the following trivial bounds (with implicit constants depending on $\varepsilon$) in time: \begin{align*} \Big\| 1 - \sum_i \chi_i \Big\|_{L^1([0,1])} & \lesssim \tau^{1 -\varepsilon}, \end{align*} and for any $0 \leq i \leq \tau^{-\varepsilon}-1$, \begin{align*} \| \partial_t \chi_i \|_{L^1([0,1])} & \lesssim 1, \\ \| \chi_i -\chi_i^2 \|_{L^1([0,1])} & \lesssim \tau. \end{align*} In addition, it follows from the bounds for $v_i$ in Proposition~\ref{estimates_on_vi} that \begin{align*} \| v_i \mathring{\otimes} v_i \|_{L^\infty_t L^r} \lesssim \| v_i \|_{L^\infty_t H^d}^2 \leq \delta^2. \end{align*} Combining these together and using the bound on $\mathcal{R} v_i$ from Proposition~\ref{estimates_on_vi}, we get \begin{align*} \| \overline{R} \|_{L^1_t L^r} & \lesssim \tau^{1-\varepsilon} \| R \|_{L^\infty_t L^r} + \sum_{i} \left( \int_{t_i}^{t_{i+1}} \| R(t)\|_{ L^r} \, dt +C_u \delta \tau^{\varepsilon} \right)+ \delta^2 \sum_{i} \tau \\ & \lesssim \tau^{1-\varepsilon} \| R \|_{L^\infty_t L^r} + \| R \|_{L^1_t L^r} + C_u \delta, \end{align*} where we have used the fact that $\sum_i 1 \leq \tau^{-\varepsilon} \leq \tau^{-1}$. The conclusion follows once we shrink $\delta$ if necessary and choose $\tau>0$ sufficiently small such that $$ C_u \delta \leq \| R \|_{L^1 L^r} \quad \text{and} \quad \tau^{1-\varepsilon} \| R \|_{L^\infty_t L^r} \leq \| R \|_{L^1 L^r}. $$ \end{proof} \subsection{Proof of Proposition \ref{prop:main_step_1}} We conclude this section with the last step in the proof of Proposition \ref{prop:main_step_1}.
Since all the estimates have been obtained, we only need to verify that the temporal support of $\overline{w} = \sum_i \chi_i v_i$ is contained in $\widetilde{I}$ and show the well-preparedness of $( \overline{u} , \overline{R})$. Note that $(u,R)$ is well-prepared for $\widetilde{I}$ and $\widetilde{\tau}$, and it follows from \eqref{eq:concentrator} that $$ v_i \equiv 0 \quad \text{for $0 \leq i \leq \tau^{-\varepsilon}-1$ such that $R\equiv 0$ on $[t_i, t_{i+1}]$}. $$ Hence, if $\tau^{\varepsilon}= |[t_i, t_{i+1}]| $ is sufficiently small compared with $ \widetilde{\tau}$, the definition of well-preparedness of $(u,R)$ implies that $$ \bigcup_i \Supp_t \chi_i v_i \subset \widetilde{I}. $$ Thus we have proved that $ \Supp_t \overline{w} \subset \widetilde{I} $. Let us now show the well-preparedness of $( \overline{u} , \overline{R})$. Define an index set $$ E = \{ i\in \ZZ: 1 \leq i \leq \tau^{-\varepsilon} -1 \,\, \text{and} \,\, v_i \not \equiv 0 \} $$ which satisfies the trivial estimate \[ | E | \leq\tau^{-\varepsilon}. \] The idea is that the concentrated stress $\overline{R}$ is supported around each $t_i$, for $i\in E$. Therefore, we can define a set on the time axis \[ {I} : = \bigcup_{i \in E } \Big[t_i - \frac{5\tau}{2} ,t_i+ \frac{5\tau}{2} \Big] , \] where as before $t_i = i \tau^{\varepsilon}$. Note that each interval in $I$ has length $ 5\tau$ and the total number of intervals is at most $ \tau^{-\varepsilon}$, consistent with the well-preparedness, and $0,1 \not \in I$ for $\tau$ sufficiently small, since $0 \notin E$ and $\frac{5\tau}{2} < \tau^{\varepsilon}$. Now take any $t \in [0,1]$ such that $\dist(t, I^c) \leq \frac{3\tau}{2} $. Then by \eqref{def:chi_i_1}--\eqref{def:chi_i_3}, \[ \sum_{i }\chi_i(t) = 1. \] Moreover, $\partial_t \chi_i(t)=0$ and $\chi_i(t) \in\{0, 1\}$ for any $i$. Consequently, \[ \overline{R} (t) = (1 - \sum_i \chi_i ) R + \mathcal{R} \sum_{i} \partial_t \chi_i v_i + \sum_{i} (\chi_i^2 -\chi_i ) v_i \mathring{\otimes} v_i =0, \] for every $t$ such that $\dist(t, I^c) \leq \frac{3\tau}{2} $.
In particular, $(\overline{u},\overline{R})$ is also well-prepared, which concludes the proof. \section{Convex integration in space-time}\label{sec:proof_step_2_convex_integration} In this section, we will use a convex integration scheme to reduce the size of the Reynolds stress. The goal is to design a suitable velocity perturbation $w$ to the glued solution $( \overline{u}, \overline{R})$ so that $$ u_1 : =\overline{u} + w $$ solves the equation \eqref{eq:NSR} with a much smaller Reynolds stress $ R_1$. The main goal of this and the following section is summarized in the following proposition. \begin{proposition}\label{prop:main_2} There exists a universal constant $M >0$ such that for any $p < 2$ and $q<\infty$, there exists $r>1$ depending only on $p$ and $q$ such that the following holds. Let $\delta>0$ and $(\overline{u},\overline{R})$ be a well-prepared smooth solution of \eqref{eq:NSR} given by Proposition \ref{prop:main_step_1} for the set $ I $ and time scale $\tau $. Then there exists another well-prepared smooth solution $(u_1,R_{1})$ of \eqref{eq:NSR} for the same set $I $ and time scale $\tau $ such that $$ \|R_{1} \|_{L^1(0,1; L^r( \TT^d ))} \leq \delta . $$ Moreover, the velocity perturbation $w : = u_{1} - \overline{u} $ satisfies: \begin{enumerate} \item The temporal support of $w$ is contained in $I$, i.e. $$ \Supp w \subset I \times \TT^d; $$ \item The $L^2 L^2 $ estimate, $$ \|w \|_{L^2([0,1] \times \TT^d )} \leq M \| \overline{R} \|_{L^1 ([0,1] \times \TT^d )} ; $$ \item The $L^{p} L^\infty \cap L^1 W^{1,q}$ estimate, $$ \|w \|_{L^{p }(0,1; L^\infty (\TT^d))} + \|w \|_{L^{1 }(0,1; W^{1,q} (\TT^d))} \leq \delta . $$ \end{enumerate} \end{proposition} It is clear that Proposition~\ref{prop:main} follows from Proposition~\ref{prop:main_step_1} and Proposition~\ref{prop:main_2}. In the remainder of this section, we will construct the velocity perturbation $w$ and define its associated Reynolds stress $R_1$ and pressure $p_1$.
The well-preparedness of $(u_1, R_1)$ will be an easy consequence of the definition of $w$, whereas all the estimates will be proven in the next section. \subsection{Stationary Mikado flows for the convex integration} The main building blocks of the convex integration scheme are the Mikado flows $\mathbf{W}_{k}:\TT^d \to \RR^d $ introduced in \cite{MR3614753}. In other contexts \cite{MR3884855,MR3951691}, they are called concentrated Mikado flows, or Mikado flows with concentration, since they are supported on periodic cylinders with a small radius. Here, for brevity, we simply refer to them as the Mikado flows. We start with a geometric lemma that dates back to the work of Nash \cite{MR0065993}. A proof of the following version, which is essentially due to De Lellis and Székelyhidi Jr., can be found in \cite[Lemma~3.3]{MR3340997}. This lemma is the key reason to use Mikado flows in the convex integration. Recall that $ \mathcal{S}^{d \times d}_+$ is the set of positive definite symmetric $d\times d$ matrices and $\mathbf{e}_{k} = \frac{k}{|k|}$ for any $k\in \ZZ^d$. \begin{lemma}\label{lemma:geometric} For any compact subset $\mathcal{N} \subset \mathcal{S}^{d \times d}_+$, there exist a finite set $\Lambda \subset \ZZ^d$ and smooth functions $\Gamma_k \in C^\infty(\mathcal{N}; \RR )$ for any $k \in \Lambda$ such that \begin{align*} R = \sum_{k \in \Lambda } \Gamma_k^2(R) \mathbf{e}_{k} \otimes \mathbf{e}_{k}, \qquad \text{for all } \, R \in \mathcal{N} . \end{align*} \end{lemma} We apply the lemma with $\mathcal{N}= B_{\nicefrac{1}{2}}(\Id)$, where $ B_{\nicefrac{1}{2}}(\Id)$ denotes the metric ball of radius $1/2$ around the identity $\Id$ in the space $ \mathcal{S}^{d \times d}_+$, to obtain smooth functions $\Gamma_k $ for $k \in \Lambda \subset \mathbb{Z}^d$. Throughout the paper, the direction set $\Lambda$ is fixed, and we construct the Mikado flows as follows. We choose points $p_k \in (0,1)^d $ such that $p_k \neq p_{-k} $ if both $k,-k \in \Lambda$.
For each $k\in \Lambda $, we denote by $l_k \subset \TT^d$ the periodic line passing through $p_k$ in direction $k$, namely $$ l_k =\{ t k + p_k \in \TT^d : t \in \RR \}. $$ Since $\Lambda$ is a finite lattice set and we identify $\TT^d $ with the periodic box $ [0,1]^d$, there exists a geometric constant $C_\Lambda \in \NN$ depending on the set $\Lambda$ such that $$ |l_k \cap l_{k'}| \leq C_\Lambda \quad \text{for any $k, k' \in \Lambda$}, $$ where we note that $ l_k \cap l_{-k} = \emptyset$ due to $p_k \neq p_{-k} $. Here we do not even require nonparallel periodic lines to be disjoint in $d \geq 3$, in contrast to \cite{MR3951691}. Since nonparallel periodic lines must intersect in the 2D case, we instead present a unified approach based on the fact that the intersection regions are very small due to the concentration parameter $\mu>0$; see Theorem \ref{thm:main_thm_for_W_k} below. Let $ \mu > 0$ be the spatial concentration parameter, whose value will be fixed at the end of the proof. Let $ \phi,\psi \in C^\infty_c ( [1/2 ,1])$ be such that if we define $\psi_k, \phi_k : \TT^d \to \RR$ by \begin{equation}\label{eq:def_psi_k} \psi_k := \mu^{\frac{d-1}{2}} \psi(\mu \dist( l_k, x)) \end{equation} and \begin{equation}\label{eq:def_phi_k} \phi_k := \mu^{\frac{d-1}{2} -2} \phi(\mu \dist( l_k, x)) \end{equation} then \begin{equation}\label{eq:def_phi_k_psi_k} \Delta \phi_k = \psi_k \quad \text{on $\TT^d$} \quad \text{and} \quad \int_{\TT^d} \psi_k^2 \, dx =1. \end{equation} Note that \begin{equation}\label{eq:small_intersection_l_k} \Supp \psi_k \cap \Supp \psi_{k'} \subset \{ x\in \TT^d: \dist( x, l_k \cap l_{k'} )\leq M_\Lambda \mu^{-1} \} \end{equation} for a sufficiently large constant $M_\Lambda$ depending on $\Lambda$. Finally, the stationary Mikado flows $\mathbf{W}_{k}:\TT^d \to \RR^d$ are defined by $$ \mathbf{W}_{k} = \psi_k \mathbf{e}_{k}, $$ where the constant vector $\mathbf{e}_{k} = \frac{k}{|k|}$.
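Let us briefly check that the normalization in \eqref{eq:def_phi_k_psi_k} is compatible with the scaling in \eqref{eq:def_psi_k}; here we suppress the harmless $k$-dependent normalizing constant for $\psi$. For $\mu$ large, working in tubular coordinates around $l_k$ with transverse variable $y \in \RR^{d-1}$ and substituting $z = \mu y$,
$$
\int_{\TT^d} \psi_k^2 \, dx = \mu^{d-1} \int_{\TT^d} \psi^2 (\mu \dist( l_k , x)) \, dx = |l_k| \int_{\RR^{d-1}} \psi^2(|z|) \, dz ,
$$
which is independent of $\mu$, so the condition $\int_{\TT^d} \psi_k^2 \, dx = 1$ can indeed be arranged uniformly in $\mu$. The same change of variables yields the $L^p$ bounds recorded in Theorem \ref{thm:main_thm_for_W_k} below.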
Using the gradient field $ \nabla \phi_k$, we may write $\mathbf{ W}_{k} $ as the divergence of a skew-symmetric tensor $\mathbf{\Omega}_k \in C^\infty_0(\TT^d, \RR^{d\times d})$, $$ \mathbf{\Omega}_k := \mathbf{e}_{k} \otimes \nabla \phi_k - \nabla \phi_k \otimes \mathbf{e}_{k}. $$ Indeed, $\mathbf{\Omega}_k$ is skew-symmetric by definition, and by a direct computation $$ \D \mathbf{\Omega}_k = \D ( \nabla \phi_k )\mathbf{e}_{k} -( \mathbf{e}_{k} \cdot \nabla )\nabla \phi_k = \Delta \phi_k \, \mathbf{e}_{k} = \mathbf{ W}_{k}, $$ where the second term vanishes since $\phi_k$ is constant in the direction $\mathbf{e}_{k}$. We summarize the main properties of the Mikado flows $\mathbf{ W}_{k}$ in the following theorem. \begin{theorem} \label{thm:main_thm_for_W_k} Let $d \geq 2$ be the dimension. The stationary Mikado flows $\mathbf{W}_{k}:\TT^d \to \RR^d$ satisfy the following. \begin{enumerate} \item Each $ \mathbf{W}_{k} \in C^\infty_0(\TT^d)$ is divergence-free, satisfies $$ \mathbf{W}_{k} =\D \mathbf{\Omega}_k, $$ and solves the pressureless Euler equations $$ \D( \mathbf{ W}_{k} \otimes \mathbf{ W}_{k} ) =0. $$ \item For any $1\leq p \leq \infty$, the estimates \begin{align*} \mu^{-m} \| \nabla^m \mathbf{ W}_{k} \|_{L^p(\TT^d)} &\lesssim_m \mu^{\frac{d-1}{2} - \frac{d-1}{p}},\\ \mu^{-m}\| \nabla^m \mathbf{\Omega}_k \|_{L^p(\TT^d)} &\lesssim_m \mu^{-1+\frac{d-1}{2} - \frac{d-1}{p}}, \end{align*} hold uniformly in $ \mu$. \item For any $ k\in \Lambda$, there holds $$ \fint_{\TT^d} \mathbf{W}_{ k}\otimes \mathbf{W}_{ k} = \mathbf{e}_{k} \otimes \mathbf{e}_{k}, $$ and for any $1 \leq p \leq \infty$ $$ \big\| \mathbf{W}_{ k}\otimes \mathbf{W}_{ {k'}}\big\|_{L^p(\TT^d)} \lesssim \mu^{ (d-1)-\frac{d}{p}} \quad \text{if $k \neq k'$}. $$ \end{enumerate} \end{theorem} \begin{proof} We only prove the last claim, as the others are standard and can easily be deduced from \eqref{eq:def_psi_k}-\eqref{eq:def_phi_k_psi_k}. It suffices to assume $k \neq -k'$.
Using the $L^\infty$ bound in (2), we obtain \begin{align*} \big\| \mathbf{W}_{ k}\otimes \mathbf{W}_{ {k'}}\big\|_{L^p(\TT^d)} \lesssim \mu^{ (d-1)} \Big( \int_{\Supp \psi_k \cap \Supp \psi_{k'} } \, dx\Big)^{\frac{1}{p}}. \end{align*} Note that due to \eqref{eq:small_intersection_l_k}, $ \Supp \psi_k \cap \Supp \psi_{k'}$ is contained in a union of finitely many balls of radius $\sim \mu^{-1}$. Thus \begin{align*} \big\| \mathbf{W}_{ k}\otimes \mathbf{W}_{ {k'}}\big\|_{L^p(\TT^d)} \lesssim \mu^{ (d-1) - \frac{d}{p}} . \end{align*} \end{proof} \subsection{Implementation of temporal concentration} Since Mikado flows are stationary, the velocity perturbation will be homogeneous in time if we simply use Lemma~\ref{lemma:geometric} to define the coefficients. To obtain $L^p L^\infty $ and $L^1 W^{1,q}$ estimates, it is necessary to introduce temporal concentration in the perturbation. To this end, we choose temporal functions $ g_{\kappa}$ and $h_\kappa$ to oscillate the building blocks $ \mathbf{W}_k$ intermittently in time. Specifically, $ g_{\kappa}$ will be used to oscillate $ \mathbf{W}_k$ so that the space-time cascade balances the low temporal frequency part of the old stress error $\overline{R} $, whereas $h_\kappa$ is used to define a temporal corrector whose time derivative will further balance the high temporal frequency part of the old stress error $\overline{R} $. Let $g \in C_c^\infty([0,1])$ be such that $$ \int_{0}^1 g^2(t) \, dt =1. $$ To add in temporal concentration, let $\kappa>0$ be a large constant whose value will be specified later, and define $g_\kappa : [0,1] \to \RR$ as the $1$-periodic extension of $\kappa^{1/2} g (\kappa t)$ so that $$ \| g_\kappa \|_{L^p([0,1])} \lesssim \kappa^{\frac{1}{2} - \frac{1}{p}}\quad \text{for all $1\leq p \leq \infty$}. $$ The function $g_\kappa$ will be used in the definition of the velocity perturbation.
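For later use, we record the change of variables behind the $L^p$ bound on $g_\kappa$ stated above; this is where the exponent $\kappa^{\frac{1}{2}-\frac{1}{p}}$, the signature of temporal concentration, comes from. Since $\kappa^{1/2}g(\kappa t)$ is supported in $[0,\kappa^{-1}]$, for $1\leq p <\infty$ the substitution $s = \kappa t$ gives
$$
\| g_\kappa \|_{L^p([0,1])}^p = \kappa^{\frac{p}{2}} \int_0^{\kappa^{-1}} |g(\kappa t)|^p \, dt = \kappa^{\frac{p}{2}-1}\, \| g \|_{L^p([0,1])}^p,
$$
while $\| g_\kappa \|_{L^\infty([0,1])} = \kappa^{\frac{1}{2}} \| g \|_{L^\infty([0,1])}$, so that $\| g_\kappa \|_{L^p([0,1])} \lesssim_g \kappa^{\frac{1}{2}-\frac{1}{p}}$ for all $1 \leq p \leq \infty$. The case $p=2$ also shows $\int_0^1 g_\kappa^2(t)\,dt = 1$, consistent with the normalization of $g$, and the same substitution applied on each of the $\nu$ periods yields the analogous bound for $g_\kappa(\nu\,\cdot)$ when $\nu \in \NN$, cf.~\eqref{e:bound_on_g_k} below.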
As we will see in Lemma \ref{lemma:R_osc_decomposition}, the nonlinear term can only balance a portion of the stress error $\overline{R}$, and there is a leftover term which is of high temporal frequency. This motivates us to consider the following temporal corrector. Let $h_\kappa :[0,1] \to \RR $ be defined by $$ h_\kappa(t) = \int_0^t \big( g_\kappa^2(s) -1 \big) \, ds. $$ In view of the zero-mean condition on $g_\kappa^2 -1$, the function $h_\kappa :[0,1] \to \RR $ is well-defined and periodic, and we have \begin{equation}\label{eq:bound_on_h_k} \| h_\kappa \|_{L^\infty([0,1])} \leq 1, \end{equation} uniformly in $\kappa$. We remark that for any $\nu \in \NN$, the periodically rescaled function $g_\kappa( \nu \cdot ): [0,1] \to \RR$ also verifies the bound \begin{equation} \label{e:bound_on_g_k} \| g_\kappa( \nu \cdot) \|_{L^p([0,1])} \lesssim \kappa^{\frac{1}{2} - \frac{1}{p}} \quad \text{for all $1\leq p \leq \infty$}. \end{equation} Moreover, we have the identity \begin{equation} \label{eq:time_derivative_of_h_k} \partial_t \left( \nu^{-1} h_{\kappa}(\nu t ) \right) = g_\kappa^2( \nu t ) -1, \end{equation} which will imply the smallness of the corrector, cf.~\eqref{eq:def_w_t}. \subsection{Space-time cutoffs} Before introducing the velocity perturbation, we need to define two important cutoff functions, one to ensure that Lemma~\ref{lemma:geometric} applies and the other to ensure the well-preparedness of the new solution $(u_1,R_1)$. Since Lemma~\ref{lemma:geometric} is stated for a fixed compact set in $\mathcal{S}_+^{d\times d}$, we need to introduce a cutoff for the stress $\overline{R}$. Let $\chi: \RR^{d \times d } \to \RR^+$ be a positive smooth function such that $\chi $ is monotonically increasing with respect to $|x|$ and \begin{equation} \chi (x) = \begin{cases} 1 & \text{if $0 \leq |x| \leq 1$}\\ |x| & \text{if $|x| \geq 2$}.
\end{cases} \end{equation} With this cutoff $\chi$, we may define a divisor for the stress $\overline{R}$ so that Lemma \ref{lemma:geometric} applies. Indeed, define $\rho \in C^\infty([0,1] \times \TT^d)$ by \begin{equation}\label{eq:def_rho} \rho = \chi(\overline{R}) . \end{equation} Then immediately $$ \Id - \frac{\overline{R}}{\rho} \in B_{\frac{1}{2}}(\Id) \quad \text{for any $(t,x) \in [0,1] \times \TT^d$}, $$ which means we can use $\Id - \frac{\overline{R}}{\rho}$ as the argument in the smooth functions $\Gamma_k$ given by Lemma \ref{lemma:geometric}. Next, we need another cutoff to ensure the well-preparedness of the new solution $(u_1, R_1) $, as the perturbation has to be supported within $ I$. Let $\theta \in C^\infty_c(\RR)$ be a smooth temporal cutoff function such that \begin{equation}\label{eq:def_theta} \theta(t) = \begin{cases} 1 & \text{if $\dist(t, I^c ) \geq \frac{3\tau}{2}$ } \\ 0 & \text{if $\dist(t, I^c ) \leq \tau$,} \end{cases} \end{equation} where $I \subset [0,1]$ and $\tau>0$ are given by Proposition \ref{prop:main_step_1}. Note that this cutoff ensures that the new solution will still be well-prepared. \subsection{The velocity perturbation} We recall that we have defined four parameters for the perturbation so far: \begin{enumerate} \item Temporal oscillation $ \nu \in \NN$ and temporal concentration $ \kappa >0 $; \item Spatial oscillation $ \sigma \in \NN$ and spatial concentration $ \mu > 0$. \end{enumerate} These four parameters are assumed to be sufficiently large for the moment and will be taken to be explicit fixed powers of a frequency parameter $\lambda>0$ in the next section, where we also fix the value of $r>1$ appearing in the main proposition. With all the ingredients in hand, we are ready to define the velocity perturbation. In summary, the velocity perturbation $w:[0,1] \times \TT^d \to \RR^d$ consists of three parts, $$ w = w^{(p)} + w^{(c)} +w^{(t)} .
$$ The principal part of the perturbation $w^{(p)}$ consists of superpositions of the building blocks $\mathbf{ W}_{k} $ oscillating with period $\sigma^{-1}$ on $\TT^d$ and period $\nu^{-1}$ on $[0,1]$: \begin{equation}\label{eq:def_w_p} w^{(p)} (t,x ): = \sum_{k \in \Lambda } a_k(t,x ) \mathbf{ W}_{k} (\sigma x ), \end{equation} where the amplitude function $a_k : [0,1] \times \TT^d \to \RR$ is given by \begin{equation}\label{eq:def_a_k} a_k = \theta g_{\kappa }(\nu t) \rho^{\nicefrac{1}{2}} \Gamma_k\Big(\Id - \frac{ \overline{R}}{\rho }\Big). \end{equation} Note that \eqref{eq:def_w_p} is not divergence-free. To fix this, we introduce a corrector using the tensor potential $ \mathbf{\Omega}_k$, \begin{equation}\label{eq:def_w_c} w^{(c)} (t,x ): = \sigma^{-1}\sum_{k \in \Lambda } \nabla a_k(t,x ) : \mathbf{\Omega}_k(\sigma x ) . \end{equation} Indeed, we can rewrite $ w^{(p)} + w^{(c)}$ as \begin{equation}\label{eq:w_p_plus_w_c} \begin{aligned} w^{(p)} + w^{(c)} & = \sigma^{-1} \sum_{k \in \Lambda } a_k(t,x )\D \mathbf{\Omega}_k (\sigma x ) + \sigma^{-1} \sum_{k \in \Lambda } \nabla a_k(t,x ) : \mathbf{\Omega}_k(\sigma x ) \\ &= \sigma^{-1}\D\sum_{k \in \Lambda } a_k(t,x ) \mathbf{\Omega}_k (\sigma x ), \end{aligned} \end{equation} where each $a_k \mathbf{\Omega}_k $ is skew-symmetric and hence $ \D ( w^{(p)} + w^{(c)}) = 0 $. Finally, we define a temporal corrector to balance the high temporal frequency part of the interaction. This ansatz was first introduced in \cite{MR3898708} and also used in \cite{1809.00600}. The heart of the argument is to ensure that $$ \partial_t w^{(t)}+ \D(w^{(p)} \otimes w^{(p)} ) = \text{Pressure gradient}+ \text{Terms with high spatial frequencies} +\text{Lower order terms}.
$$ However, the key difference between \cite{MR3898708,1809.00600} and the current scheme is that here the smallness of the corrector is free and it does not require much temporal oscillation, which is the reason we must use stationary spatial building blocks. Specifically, the temporal corrector $w^{(t)}$ is defined as \begin{equation}\label{eq:def_w_t} w^{(t)} = \nu^{-1} h_\kappa(\nu t)\left( \D(\overline{R} ) - \nabla\Delta^{-1}\D \D (\overline{R} ) \right), \end{equation} where we note that $\Delta^{-1}$ is well-defined on $\TT^d$ since $\D \D (\overline{R} ) = \partial_i \partial_j \overline{R}_{ij} $ has zero spatial mean. It is easy to check that $\Supp_t w^{(t)} \subset \Supp_t \overline{R} $ and that $ w^{(t)}$ is divergence-free. Indeed, $$ \D w^{(t)} = \nu^{-1} h_\kappa(\nu t)\left( \partial_i \partial_j \overline{R}_{ij} - \partial_k \partial_k\Delta^{-1} \partial_i \partial_j \overline{R}_{ij} \right) =0. $$ In the lemma below, we show that the leading order interaction of the principal part $w^{(p)}$ is able to balance the low temporal frequency part of the stress error $ \overline{R} $, which motivates the choice of the corrector $w^{(t)}$, cf.~\eqref{eq:New_Reynolds_Stress_2}. \begin{lemma}\label{lemma:a_k_interactions} The coefficients $a_k $ satisfy $$ a_k = 0 \quad \text{if} \quad \dist(t, I^c) \leq \tau, $$ and $$ \sum_{k \in \Lambda} a_k^2 \fint_{\TT^d} \mathbf{ W}_{k} \otimes \mathbf{ W}_{k} \, dx = \theta^2 g_{\kappa }^2(\nu t) \rho \Id - g_{\kappa }^2(\nu t) \overline{R}. $$ \end{lemma} \begin{proof} The first property follows from \eqref{eq:def_theta}.
Now since $ \fint \mathbf{ W}_{k} \otimes \mathbf{ W}_{k} = \mathbf{e}_{k} \otimes \mathbf{e}_{k}$, a direct computation and Lemma~\ref{lemma:geometric} give $$ \sum_{k \in \Lambda } a_k^2 \mathbf{e}_{k} \otimes \mathbf{e}_{k} = \sum_{k \in \Lambda } \theta^2 g_{\kappa }^2( \nu t) \rho \Gamma_k^2\Big(\Id - \frac{ \overline{R} }{\rho }\Big) \mathbf{e}_{k} \otimes \mathbf{e}_{k} = \theta^2 g_{\kappa }^2( \nu t) \rho \sum_{k \in \Lambda } \Gamma_k^2\Big(\Id - \frac{ \overline{R} }{\rho }\Big) \mathbf{e}_{k} \otimes \mathbf{e}_{k} = \theta^2 g_{\kappa }^2( \nu t) \rho \Id - \theta^2g_{\kappa }^2( \nu t) \overline{R}. $$ The identity follows from the fact that $$ \Supp_t \overline{R} \subset \{t: \theta(t)=1 \}. $$ \end{proof} \subsection{The new Reynolds stress} In this subsection, our goal is to design a suitable stress tensor $R_1 : [0,1] \times \TT^d \to \mathcal{S}^{d \times d}_0 $ such that the pair $(u_1, R_1)$ is a smooth solution of \eqref{eq:NSR} for a suitable smooth pressure $p_1$. We first compute the nonlinear term and isolate the nonlocal interactions: \begin{align} \label{eq:New_Reynolds_Stress} \D(w^{(p)} \otimes w^{(p)} +\overline{R} ) = \D \Big[ \sum_{k \in \Lambda} a_k^2 \mathbf{ W}_{k}(\sigma x) \otimes \mathbf{ W}_{k}(\sigma x) +\overline{R} \Big] + \D R_{\Far }, \end{align} where $R_{\Far }$ denotes the nonlocal interactions between Mikado flows of different directions, \begin{equation} R_{\Far } = \sum_{k \neq k'} a_k a_{k'} \mathbf{ W}_{k}(\sigma x) \otimes \mathbf{ W}_{k'}(\sigma x).
\end{equation} We then proceed to examine the first term in \eqref{eq:New_Reynolds_Stress}, for which by Lemma \ref{lemma:a_k_interactions} we have \begin{align} \D \Big[ \sum_{k \in \Lambda} &a_k^2 \mathbf{ W}_{k}(\sigma x) \otimes \mathbf{ W}_{k}(\sigma x) +\overline{R} \Big] \nonumber \\ & = \D\Big[ \sum_{k \in \Lambda} a_k^2 \big( \mathbf{ W}_{k}(\sigma x) \otimes \mathbf{ W}_{k}(\sigma x) -\fint \mathbf{ W}_{k} \otimes \mathbf{ W}_{k} \big) + \sum_{k \in \Lambda} a_k^2 \fint \mathbf{ W}_{k} \otimes \mathbf{ W}_{k}+ \overline{R} \Big] \nonumber \\ & = \D \sum_{k \in \Lambda} a_k^2 \big( \mathbf{ W}_{k}(\sigma x) \otimes \mathbf{ W}_{k}(\sigma x) -\fint \mathbf{ W}_{k} \otimes \mathbf{ W}_{k} \big) + \nabla ( \theta^2 g_{\kappa}^2(\nu t) \rho ) + (1 - g_{\kappa }^2(\nu t) )\D \overline{R}. \label{eq:New_Reynolds_Stress_01} \end{align} Finally, using the product rule, we compute the divergence term as \begin{align} \D \sum_{k \in \Lambda} &a_k^2 \big( \mathbf{ W}_{k}(\sigma x) \otimes \mathbf{ W}_{k}(\sigma x) -\fint \mathbf{ W}_{k} \otimes \mathbf{ W}_{k} \big) \nonumber \\ &=\sum_{k \in \Lambda} \nabla(a_k^2 ) \cdot \big( \mathbf{ W}_{k}(\sigma x) \otimes \mathbf{ W}_{k}(\sigma x) -\fint \mathbf{ W}_{k} \otimes \mathbf{ W}_{k} \big) , \label{eq:New_Reynolds_Stress_02} \end{align} where we have used that $\D\big( \mathbf{ W}_{k}(\sigma x) \otimes \mathbf{ W}_{k}(\sigma x) \big) =0$ by Theorem \ref{thm:main_thm_for_W_k}. As is typical in convex integration, we can gain a factor of $\sigma^{-1}$ in \eqref{eq:New_Reynolds_Stress_02} by inverting the divergence. To this end, let us use the bilinear anti-divergence operator $\mathcal{B}$ defined in Appendix \ref{sec:append_tech}. Since \eqref{eq:New_Reynolds_Stress_02} has zero spatial mean, by \eqref{eq:div_B_tensor} it is equal to $\D R_{\Osc,x} $, where \begin{equation}\label{eq:New_Reynolds_Stress_03} R_{\Osc,x} = \sum_{k \in \Lambda} \mathcal{B}\Big(\nabla(a_k^2 ) , \mathbf{ W}_{k}(\sigma x) \otimes \mathbf{ W}_{k}(\sigma x) -\fint \mathbf{ W}_{k} \otimes \mathbf{ W}_{k} \Big).
\end{equation} Combining \eqref{eq:New_Reynolds_Stress}, \eqref{eq:New_Reynolds_Stress_01}, \eqref{eq:New_Reynolds_Stress_02}, and \eqref{eq:New_Reynolds_Stress_03} we have \begin{equation}\label{eq:New_Reynolds_Stress_2} \D(w^{(p)} \otimes w^{(p)} +\overline{R} ) = \D R_{\Osc,x} + \D R_{\Far } + \nabla ( \theta^2 g_{\kappa}^2(\nu t) \rho ) + (1 - g_{\kappa }^2(\nu t) )\D \overline{R}. \end{equation} In view of the above computations, we define a temporal oscillation error \begin{equation}\label{eq:def_osc_t_error} R_{\Osc,t} = \nu^{-1} h_\kappa( \nu t) \partial_t \overline{R} , \end{equation} so that the following decomposition holds. \begin{lemma} \label{lemma:R_osc_decomposition} Let the space-time oscillation error $R_{\Osc}$ be $$ R_{\Osc} = R_{\Osc,x} + R_{\Osc,t} + R_{\Far} . $$ Then $$ \partial_t w^{(t)} + \D(w^{(p)} \otimes w^{(p)} +\overline{R} ) + \nabla P = \D R_{\Osc} , $$ where the pressure term $P$ is defined by \begin{equation}\label{eq:def_pressure_P} P = -\theta^2g_{\kappa}^2(\nu t) \rho + \nu^{-1} \Delta^{-1} \D \D \partial_t \big(\overline{R}h_\kappa( \nu t) \big) . \end{equation} \end{lemma} \begin{proof} By the definition of $ w^{(t)}$, we have \[ \begin{split} \partial_t w^{(t)} &= \partial_t\big(\nu^{-1} h_\kappa( \nu t) \D \overline{R} \big) - \partial_t\big(\nu^{-1} h_\kappa( \nu t) \nabla \Delta^{-1} \D \D \overline{R} \big) \\ &=(g_\kappa^2(\nu t ) -1) \D \overline{R} + \nu^{-1} h_\kappa( \nu t) \D \partial_t \overline{R} - \nu^{-1}\nabla \Delta^{-1} \D \D \partial_t \big(\overline{R}h_\kappa( \nu t) \big) \\ &=(g_\kappa^2(\nu t ) -1) \D \overline{R} + \D R_{\Osc,t} - \nu^{-1}\nabla \Delta^{-1} \D \D \partial_t \big(\overline{R}h_\kappa( \nu t) \big) , \end{split} \] where we used identity~\eqref{eq:time_derivative_of_h_k} for the time derivative of $h_{\kappa}$. The conclusion follows immediately from \eqref{eq:New_Reynolds_Stress_2}.
\end{proof} Finally, we can define the correction error and the linear error as usual: \begin{equation}\label{eq:def_R_cor} R_{\Cor} = \mathcal{R} \D \big( (w^{(c)}+w^{(t)}) \otimes w + w^{(p)} \otimes ( w^{(c)} + w^{(t)} )\big) \end{equation} and \begin{equation}\label{eq:def_R_lin} R_{\Lin} = \mathcal{R}\big( \partial_t (w^{(p)} +w^{(c)} ) - \Delta w + \D( \overline{u} \otimes w + w \otimes \overline{u} ) \big) . \end{equation} To conclude, we summarize the main results in this section below. \begin{lemma}\label{lemma:new_stress_R_1} Define the new Reynolds stress by $$ R_1 = R_{\Lin} +R_{\Cor} + R_{\Osc} , $$ and the new pressure by $$ p_1 = \overline{p} +P. $$ Then $(u_1,R_1)$ is a well-prepared solution to \eqref{eq:NSR}, $$ \partial_t u_1 -\Delta u_1 + \D(u_1 \otimes u_1 ) + \nabla p_1 = \D R_1, $$ and the velocity perturbation $w = u_1 - \overline{u}$ satisfies $$ \Supp w \subset I \times \TT^d. $$ \end{lemma} \begin{proof} A direct computation of the left-hand side gives \begin{align*} \partial_t u_1 -\Delta u_1 + \D(u_1 \otimes u_1 ) + \nabla p_1 =& \partial_t \overline{u} -\Delta \overline{u} + \D( \overline{u} \otimes \overline{u} ) + \nabla \overline{p}\\ & + \partial_t w -\Delta w + \D(\overline{u} \otimes w ) + \D(w \otimes \overline{u} ) +\D(w \otimes w ) + \nabla P \\ = & \D \overline{R}+ \partial_t w -\Delta w + \D(\overline{u} \otimes w ) + \D(w \otimes \overline{u} ) +\D(w \otimes w ) + \nabla P, \end{align*} where we have used the fact that $(\overline{u} , \overline{R}) $ solves \eqref{eq:NSR} with pressure $\overline{p}$. From the definitions \eqref{eq:def_R_cor}, \eqref{eq:def_R_lin} and Lemma \ref{lemma:R_osc_decomposition} we can conclude that $(u_1, R_1)$ solves \eqref{eq:NSR}.
The claim that $(u_1, R_1)$ is well-prepared and $\Supp_t w \subset I $ follows from the fact that $(\overline{u} , \overline{R})$ is given by Proposition \ref{prop:main_step_1} and the perturbations $w^{(p)}, w^{(c)}$ and $w^{(t)}$ satisfy $$ w^{(p)} = w^{(c)} =0 \quad \text{if $\dist(t, I^c) \leq \tau$} $$ and $$ \Supp_t w^{(t)} \subset \Supp_t \overline{R} . $$ \end{proof} \section{Proof of Proposition \ref{prop:main_2}}\label{sec:proof_step_2} In this section we will show that the velocity perturbation $w$ and the new Reynolds stress $R_1$ derived in Section \ref{sec:proof_step_2_convex_integration} satisfy the properties claimed in Proposition \ref{prop:main_2}. Throughout this section, we use a constant $C_u$ to denote dependence on the previous solution $(\overline{u} , \overline{R})$. Unless otherwise indicated, in the statements of the lemmas and propositions below, the exponents $ p,q $ and $r$ refer to the ones given by Proposition \ref{prop:main}, cf.~\eqref{eq:value_gamma} and \eqref{eq:def_of_r}. \subsection{Choice of parameters}\label{subsection:choice_of_parameters} We fix two parameters $0<\gamma<1$ and $1<r<2$ as follows. \begin{enumerate} \item First, we choose $ \gamma>0$ small enough such that \begin{equation}\label{eq:value_gamma} 10 d\gamma \leq \min\Big\{ \frac{1}{p} -\frac{1}{2}, \frac{1}{q} \Big\} . \end{equation} \item Once $\gamma$ is fixed, choose $r > 1$ such that \begin{equation}\label{eq:def_of_r} d - \frac{d }{r} \leq \gamma . \end{equation} \end{enumerate} It is clear that $r>1$ depends only on $p$ and $q$, as claimed in Proposition \ref{prop:main_2}. Let $\lambda$ be a sufficiently large number whose value will be fixed at the end. We choose the parameters $\sigma, \kappa$ along with $ \nu,\mu$ in the building blocks as explicit powers of $\lambda$ as follows.
\begin{enumerate} \item Temporal oscillation $\nu\in \NN$ and spatial oscillation $\sigma \in \NN$: \begin{align*} \nu &= \lceil \lambda^{ {\gamma}} \rceil ,\\ \sigma &= \lceil \lambda^{\frac{1}{\gamma}} \rceil. \end{align*} \item Temporal concentration $\kappa >0$ and spatial concentration $\mu >0$: \begin{align*} \kappa &= \lambda^{ \frac{2}{\gamma} + d+1 - 6 \gamma },\\ \mu &= \lambda. \end{align*} \end{enumerate} Note that we have the hierarchy of parameters $$ \nu \ll \mu \ll \sigma \ll \kappa^{1/2}. $$ More precisely, we have the following useful lemma that will be used throughout the next section. \begin{lemma}\label{lemma:parameters} For any $\lambda>0$, there hold \begin{align} \nu \kappa^{\frac{1}{2}} \sigma^{-1} \mu^{-1}\mu^{ \frac{d-1}{2}- \frac{d-1}{r}}&\leq \lambda^{-\gamma}, \label{eq:temporal_constraint}\\ \kappa^{\frac{1}{2}- \frac{1}{p} } \mu^{ \frac{d-1}{2} } &\leq \lambda^{-\gamma},\label{eq:LpLinfty_constraint}\\ \kappa^{- \frac{1}{2} } \sigma \mu \mu^{ \frac{d-1}{2} -\frac{d-1}{q}} &\leq \lambda^{-\gamma}\label{eq:L1W1q_constraint} . \end{align} \end{lemma} \begin{proof} The first inequality \eqref{eq:temporal_constraint} is equivalent to \begin{align*} \gamma+ \left(\frac{1}{\gamma} + \frac{d+1}{2} - 3\gamma \right) -\frac{1}{\gamma} -1 +\left(\frac{d-1}{2} - \frac{d-1}{r}\right) \leq -\gamma . \end{align*} It can be simplified to $$ d-1 - \frac{d-1}{r} \leq \gamma, $$ which holds due to \eqref{eq:def_of_r}. For the second one \eqref{eq:LpLinfty_constraint}, note that by \eqref{eq:value_gamma} we have $\frac{1}{2} - \frac{1}{p} \leq -10d\gamma \leq -d\gamma$, so it suffices to show that $$ \kappa^{- d\gamma} \mu^{\frac{d-1}{2}} \leq \lambda^{-\gamma}. $$ This is equivalent to $$ -2d - \gamma d(d+1-6\gamma) + \frac{d-1}{2} \leq -\gamma, $$ which holds trivially since $d+1 - 6\gamma > 0$ and $-2d + \frac{d-1}{2} \leq -\gamma$. The third one \eqref{eq:L1W1q_constraint}, looking again at the exponents, reads as $$ -\left(\frac{1}{\gamma} + \frac{d + 1}{2} - 3 \gamma\right) +\frac{1}{\gamma} +1 +\left(\frac{d-1}{2} - \frac{d-1}{q}\right) \leq -\gamma.
$$ Simplifying, we obtain $$ - \frac{d-1}{q} \leq - 4 \gamma, $$ which also holds due to \eqref{eq:value_gamma}. \end{proof} \subsection{Estimates on velocity perturbation} We first estimate the coefficients $a_k$ of the perturbation $w$. \begin{lemma}\label{lemma:estimates_a_k} The coefficients $a_k$ are smooth on $[0,1] \times \TT^d$ and $$ \| \partial_t^n \nabla^m a_k \|_{L^p(0,1; L^\infty(\TT^d))} \leq C_{u, m,n} (\nu \kappa)^n \kappa^{\frac{1}{2} - \frac{1}{p}} \quad \text{for any $p\in[1, \infty]$. } $$ In addition, the bound $$ \|a_k (t) \|_{L^2(\TT^d)} \lesssim \theta(t) g_{\kappa}(\nu t) \Big( \int_{\TT^d }\rho(t,x) \, dx \Big)^{ \frac{1}{2}} $$ holds for all time $t\in[0,1]$. \end{lemma} \begin{proof} It follows from definition \eqref{eq:def_a_k} that $a_k$ is smooth. Since the implicit constant is allowed to depend on $(\overline{u} , \overline{R}) $, it suffices to consider only the time differentiation. We have that \[ \begin{split} \big\|\partial^n_t \big[\theta(\cdot)g_\kappa( \nu \cdot) \big]\big\|_{L^p([0,1])} &\lesssim_n \|\theta\|_{C^n([0,1])} \sum_{j \leq n}\|\partial^j_t g_\kappa( \nu \cdot)\|_{L^p([0,1])} \\ &\lesssim (\nu\kappa)^n \kappa^{\frac{1}{2} - \frac{1}{p}}, \end{split} \] which implies the first bound. The second bound follows immediately from the definition of $a_k$: $$ \|a_k (t) \|_{L^2(\TT^d)} \lesssim \theta g_\kappa \left( \int_{\TT^d} \rho\, \Gamma_k^2\big(\Id - \frac{ \overline{R}}{\rho }\big) \,dx \right)^\frac{1}{2} \lesssim \theta g_\kappa \left( \int_{\TT^d} \rho \,dx \right)^\frac{1}{2} . $$ \end{proof} With the estimates of $a_k$ in hand, we start estimating the velocity perturbation. As expected, the principal part $w^{(p)}$ is the largest among all parts in $w$.
\begin{proposition} \label{p:estimates_on_w^p} The principal part $w^{(p)}$ satisfies \begin{align*} \| w^{(p)} \|_{L^2([0,1] \times \TT^d) } \lesssim \| \overline{R} \|_{L^1([0,1] \times \TT^d) }^\frac{1}{2} + C_u \sigma^{-\frac{1}{2}}, \end{align*} and \begin{align*} \| w^{(p)} \|_{L^p ( 0,1; L^\infty(\TT^d)) } + \| w^{(p)} \|_{L^1 ( 0,1; W^{1,q}(\TT^d)) } \leq C_u \lambda^{-\gamma}. \end{align*} In particular, for sufficiently large $\lambda$, \begin{align*} \| w^{(p)} \|_{L^2([0,1] \times \TT^d) } & \lesssim \| \overline{R} \|^{\frac{1}{2}}_{L^1([0,1] \times \TT^d) }, \\ \| w^{(p)} \|_{L^p( 0,1; L^\infty(\TT^d)) } & \leq \frac{\delta}{4}. \end{align*} \end{proposition} \begin{proof} We first show the $L^2_{t,x}$ estimate, and then the $ L^p L^\infty$ and $L^1 W^{1,q}$ estimates. \noindent {\bf $L^2_{t,x}$ estimate:} Taking the $L^2$ norm in space and appealing to Lemma~\ref{lemma:improved_Holder}, we have \begin{align*} \| w^{(p)} (t) \|_{L^2( \TT^d) } \lesssim \sum_{k \in \Lambda } \Big( \| a_k(t) \|_2 \| \mathbf{ W}_{k} \|_2 + \sigma^{-\frac{1}{2} } \| a_k(t) \|_{C^1(\TT^d)} \Big) . \end{align*} Recall that $\| \mathbf{ W}_{k} \|_2 = 1$. Then using Lemma \ref{lemma:estimates_a_k} and taking the $L^2$ norm in time gives \begin{align}\label{eq:estimate_w_p_1} \| w^{(p)} \|_{L^2([0,1] \times \TT^d) } \lesssim \sum_{k \in \Lambda } \Big(\int_{0}^1 g_{\kappa}^2(\nu t) \int_{\TT^d} \rho (t,x) \,dx \,dt\Big)^\frac{1}{2} + \sigma^{-\frac{1}{2} } C_u. \end{align} Notice that $$ t \mapsto \int_{\TT^d} \rho (t,x) \,dx $$ is a smooth map on $[0,1]$. Thus, we may apply Lemma \ref{lemma:improved_Holder} once again (with $p=1$) to obtain that \begin{equation}\label{eq:estimate_w_p_2} \int_{0}^1 g_{\kappa}^2(\nu t) \int_{\TT^d} \rho (t,x) \,dx \,dt \lesssim \| \overline{R} \|_{L^1([0,1] \times \TT^d) } + C_u \nu^{-1} , \end{equation} where we have used the fact that $\int g_\kappa^2 =1$ and the bound $$ \int_{\TT^d} \rho (t,x) \,dx \lesssim \|\overline{R} (t)\|_{L^1(\TT^d)}.
$$ Hence, combining \eqref{eq:estimate_w_p_1} and \eqref{eq:estimate_w_p_2} gives \begin{align*} \| w^{(p)} \|_{L^2([0,1] \times \TT^d) } &\lesssim \| \overline{R} \|_{L^1([0,1] \times \TT^d) }^\frac{1}{2} + C_u \sigma^{-\frac{1}{2} } . \end{align*} \noindent {\bf$ L^p L^\infty$ estimate:} Taking the $L^\infty$ norm in space and using H\"older's inequality gives \begin{align*} \| w^{(p)} (t) \|_{L^\infty( \TT^d) } \lesssim \sum_{k \in \Lambda } \| a_k(t) \|_\infty \| \mathbf{ W}_{k} \|_\infty. \end{align*} We can now take $L^p$ in time and apply the estimates in Lemma~\ref{lemma:estimates_a_k} and Theorem~\ref{thm:main_thm_for_W_k} to obtain \begin{align*} \| w^{(p)} \|_{L^p(0,1;L^\infty( \TT^d)) } & \lesssim \mu^{ \frac{d-1}{2} } \sum_{k \in \Lambda } \| a_k \|_{L^p(0,1;L^\infty( \TT^d)) } \\ &\leq C_u \kappa^{ \frac{1}{2} -\frac{1}{p} } \mu^{ \frac{d-1}{2} }, \end{align*} which by Lemma \ref{lemma:parameters} implies that \begin{align*} \| w^{(p)} \|_{L^p(0,1;L^\infty( \TT^d)) } & \leq C_u \lambda^{ - \gamma } . \end{align*} \noindent {\bf$ L^1 W^{1,q}$ estimate:} This part is similar to the $L^p L^\infty$ estimate. We first take the $W^{1,q}$ norm in space to obtain \begin{align*} \| w^{(p)} (t) \|_{W^{1,q}( \TT^d) } \lesssim \sum_{k \in \Lambda } \| a_k(t) \|_{C^1} \| \mathbf{ W}_{k} ( \sigma \cdot ) \|_{W^{1,q}}. \end{align*} Integrating in time, by Theorem \ref{thm:main_thm_for_W_k} and Lemma \ref{lemma:estimates_a_k} we have that \begin{align*} \| w^{(p)} \|_{L^1(0,1;W^{1,q}( \TT^d) )} & \lesssim \sum_{k \in \Lambda } \| a_k \|_{L^1 C^1} \| \mathbf{ W}_{k} ( \sigma \cdot ) \|_{W^{1,q}} \\ & \lesssim \sigma \mu \mu^{ \frac{d-1}{2} -\frac{d-1}{q} }\sum_{k \in \Lambda } \| a_k \|_{L^1 C^1} \\ &\leq C_u\, \sigma \kappa^{- \frac{1}{2}}\mu \mu^{ \frac{d-1}{2} -\frac{d-1}{q} } , \end{align*} which implies the desired bound thanks to Lemma \ref{lemma:parameters}.
\end{proof} Next, we estimate the corrector $ w^{(c)}$, which is expected to be much smaller than $w^{(p)}$ due to the derivative gains from both the fast oscillation $\sigma $ and the tensor potential $\Omega_k$. \begin{proposition} \label{p:estiates_on_w^c} The divergence-free corrector $w^{(c)}$ satisfies \begin{align*} \| w^{(c)} \|_{L^2(0,1; L^\infty (\TT^d) ) } \leq C_u \lambda^{- \gamma }, \end{align*} and \begin{align*} \| w^{(c)} \|_{L^1(0,1; W^{1,q} (\TT^d) ) } \leq C_u \lambda^{- \gamma } . \end{align*} In particular, for sufficiently large $\lambda$, \[ \| w^{(c)} \|_{L^2([0,1] \times \TT^d) } \leq \| \overline{R} \|^{\frac{1}{2}}_{L^1([0,1] \times \TT^d) }, \] and \[ \| w^{(c)} \|_{L^p( 0,1; L^\infty (\TT^d)) } +\| w^{(c)} \|_{L^1(0,1; W^{1,q} (\TT^d) ) } \leq \frac{\delta}{4}. \] \end{proposition} \begin{proof} \noindent {\bf$ L^2 L^{\infty}$ estimate:} From the definition, we have \[ \begin{split} \|w^{(c)} (t)\|_{L^\infty (\TT^d)} &\leq \sigma^{-1}\Big\| \sum_{k \in \Lambda } \nabla a_k(t ) : \mathbf{\Omega}_k(\sigma \cdot ) \Big\|_{L^\infty (\TT^d)}\\ &\lesssim \sigma^{-1} \sum_{k \in \Lambda } \|\nabla a_k (t)\|_{L^\infty (\TT^d)} \| \mathbf{\Omega}_k(\sigma \cdot ) \|_{L^\infty (\TT^d)}. \end{split} \] Now, thanks to Lemma~\ref{lemma:estimates_a_k} and \eqref{eq:def_of_r}, we take $L^2$ in time to obtain \[ \begin{split} \| w^{(c)} \|_{L^2(0,1; L^{\infty } (\TT^d) ) } &\lesssim \sigma^{-1} \mu^{-1 + \frac{d-1}{2}} \sum_{k \in \Lambda } \|\nabla a_k \|_{L^2(0,1; L^{\infty } (\TT^d)) } \\ & \leq C_u \sigma^{-1} \mu^{-1 + \frac{d-1}{2}} , \end{split} \] which by the definition of $\gamma$ implies that $$ \| w^{(c)} \|_{L^2(0,1; L^{\infty } (\TT^d) ) } \leq C_u \lambda^{-\gamma}. $$ \noindent {\bf$ L^1 W^{1,q}$ estimate:} This part is very similar to the estimation of $w^{(p)}$. 
We first take the $W^{1,q}$ norm in space to obtain \begin{align*} \|w^{(c)} (t)\|_{ W^{1,q} (\TT^d)} &\leq \sigma^{-1}\Big\| \sum_{k \in \Lambda } \nabla a_k(t ) : \mathbf{\Omega}_k(\sigma \cdot ) \Big\|_{ W^{1,q} (\TT^d)}\\ &\lesssim \sigma^{-1} \sum_{k \in \Lambda } \| a_k (t)\|_{C^2 (\TT^d)} \| \mathbf{\Omega}_k(\sigma \cdot ) \|_{W^{1,q} (\TT^d)}. \end{align*} Integrating in time and using Lemma \ref{lemma:estimates_a_k} and Theorem \ref{thm:main_thm_for_W_k}, we have \begin{align*} \|w^{(c)} \|_{L^1(0,1; W^{1,q} (\TT^d))} &\lesssim \sigma^{-1} \sum_{k \in \Lambda } \| a_k \|_{L^1(0,1; C^2 (\TT^d) )} \| \mathbf{\Omega}_k(\sigma \cdot ) \|_{W^{1,q} (\TT^d)}\\ &\lesssim \kappa^{-\frac{1}{2}} \mu^{ \frac{d-1}{2} - \frac{d-1}{q}}, \end{align*} which is smaller than the bound for $\|w^{(p)} \|_{L^1 W^{1,q} } $ by a factor of $\sigma \mu$, and hence \begin{align*} \|w^{(c)} \|_{L^1(0,1; W^{1,q} (\TT^d))} &\leq C_u \lambda^{-\gamma}. \end{align*} \end{proof} Finally, we estimate the temporal corrector $w^{(t)}$. From its definition \eqref{eq:def_w_t}, one can see that the spatial frequency of $w^{(t)}$ is independent of the parameters $\sigma$, $\kappa$ and $\mu$. As a result, this term poses no constraint on the choice of temporal and spatial oscillation/concentration and is small for essentially any choice of parameters (as long as the temporal oscillation $\nu$ is present). This is one of the main technical differences from \cite{MR3898708,1809.00600}, where the leading order effect is temporal oscillation. \begin{proposition} \label{prop:estimates_on_w^t} The temporal corrector $w^{(t)}$ satisfies \begin{align*} \| w^{(t)} \|_{L^\infty( 0,1; W^{1,\infty} ( \TT^d) )} \leq C_u \nu^{-1}.
\end{align*} In particular, for sufficiently large $\lambda$, \[ \| w^{(t)} \|_{L^2([0,1] \times \TT^d) } \leq \| \overline{R} \|^{\frac{1}{2}}_{L^1([0,1] \times \TT^d) }, \] and \[ \| w^{(t)} \|_{L^p( 0,1; L^\infty(\TT^d)) } + \| w^{(t)} \|_{L^1( 0,1; W^{1,q}(\TT^d)) } \leq \frac{\delta}{4}. \] \end{proposition} \begin{proof} It follows directly from the definition of $w^{(t)}$ and \eqref{eq:bound_on_h_k} that $$ \| w^{(t)} \|_{L^\infty( 0,1; W^{1,\infty} ( \TT^d) )} \lesssim \nu^{-1} \| h_\kappa\|_{L^\infty([0,1])} \| \overline{R} \|_{L^\infty( 0,1; W^{2,\infty} ( \TT^d) )} \leq \nu^{-1} C_u. $$ \end{proof} \subsection{Estimates on the new Reynolds stress} The last step of the proof is to estimate $R_1$. We proceed with the decomposition in Lemma \ref{lemma:new_stress_R_1}. More specifically, we will prove that for all sufficiently large $\lambda$, each part of the stress $R_1$ is bounded by $\frac{\delta}{4} $ in $L^1(0,1;L^r(\TT^d))$. \subsubsection{Linear error} \begin{lemma} For sufficiently large $\lambda$, \[ \| R_{\Lin} \|_{L^1(0,1; L^r(\TT^d))} \leq \frac{\delta}{4}. \] \end{lemma} \begin{proof} We split the linear error into three parts: \begin{equation*} \| R_{\Lin}\|_{L^1(0,1; L^r(\TT^d))} \leq \underbrace{\| \mathcal{R}\left(\Delta w \right) \|_{L^1_t L^r }}_{:=L_1} + \underbrace{ \| \mathcal{R}\big( \partial_t (w^{(p)}+ w^{(c)}) \big) \|_{L^1_t L^r } }_{:= L_2 } + \underbrace{ \| \mathcal{R}\left( \D ( w \otimes \overline{u} )+ \D (\overline{u} \otimes w ) \right) \|_{L^1_t L^r } }_{:=L_3}. \end{equation*} \noindent {\bf Estimate of $L_1$:} By \eqref{eq:appendix_R_2} or boundedness of the Riesz transform we have \begin{align*} L_1 &\lesssim \| w \|_{L^1(0,1; W^{1,r}(\TT^d))} . \end{align*} Note that we have estimated $w $ in $L^1 W^{1,q}$, and thus by Propositions \ref{p:estimates_on_w^p}--\ref{prop:estimates_on_w^t} we can conclude that \begin{equation}\label{eq:proof_new_stress_lin_2} L_1 \leq C_u \lambda^{- \gamma}.
\end{equation} \noindent {\bf Estimate of $L_2$:} By \eqref{eq:w_p_plus_w_c}, we have $$ \partial_t (w^{(p)}+ w^{(c)}) = \sigma^{-1} \sum_k \D( \partial_t a_k \mathbf{\Omega}_k (\sigma \cdot )), $$ and hence \begin{align*} L_2 & \leq \|\mathcal{R} \partial_t (w^{(p)}+ w^{(c)}) \|_{L^1(0,1; L^r(\TT^d))} \\ & \lesssim \sigma^{-1}\sum_k \| \mathcal{R}\D(\partial_t a_k \mathbf{\Omega}_k (\sigma \cdot ) ) \|_{L^1(0,1; L^r(\TT^d))} . \end{align*} Since $ \mathcal{R}\D$ is a Calder\'on-Zygmund operator on $\TT^d$, we have \begin{align*} L_2 & \lesssim \sigma^{-1} \sum_k \| \partial_t a_k \|_{L^1 L^\infty } \|\mathbf{\Omega}_k \|_{r} . \end{align*} Appealing to Lemma \ref{lemma:estimates_a_k}, the estimates of $\mathbf{\Omega}_k $ listed in Theorem \ref{thm:main_thm_for_W_k}, and then \eqref{eq:temporal_constraint}, we have \begin{align} L_2 & \leq C_{u}\, \nu \kappa^{\frac{1}{2}} \sigma^{-1} \mu^{-1} \mu^{\frac{d-1}{2} - \frac{d-1}{r}} \nonumber \\ & \leq C_u \lambda^{-\gamma}.\label{eq:proof_new_stress_lin_3} \end{align} \noindent {\bf Estimate of $L_3$:} For the last term we simply use the $ L^r$ boundedness of $\mathcal{R}\D$ to obtain \begin{align*} L_3 \lesssim \| w \otimes \overline{u} \|_{L^1_t L^r }. \end{align*} Here we use a crude bound \begin{align*} \| w \otimes \overline{u} \|_{L^1_t L^r } \lesssim \| w \|_{L^p L^\infty } \| \overline{u} \|_{L^\infty_{t,x} } , \end{align*} and apply the obtained estimates in Propositions \ref{p:estimates_on_w^p}, \ref{p:estiates_on_w^c} and \ref{prop:estimates_on_w^t} to conclude \begin{equation}\label{eq:proof_new_stress_lin_4} L_3 \leq C_u \lambda^{- \gamma }. \end{equation} From \eqref{eq:proof_new_stress_lin_2}, \eqref{eq:proof_new_stress_lin_3}, and \eqref{eq:proof_new_stress_lin_4}, we can conclude that for all sufficiently large $\lambda$, there holds \begin{align*} \| R_{\Lin} \|_{L^1(0,1; L^r(\TT^d))} \leq \frac{\delta}{4}.
\end{align*} \end{proof} \subsubsection{Correction error} \begin{lemma} For sufficiently large $\lambda$, \[ \| R_{\Cor} \|_{L^1(0,1; L^r(\TT^d))} \leq \frac{\delta}{4}. \] \end{lemma} \begin{proof} By the boundedness of $\mathcal{R}\D$ in $L^r$ and H\"older's inequality, \begin{equation*} \begin{split} \|R_{\Cor}\|_{L^1(0,1; L^r(\TT^d))} &\lesssim \|(w^{(c)}+w^{(t)}) \otimes w\|_{L^1 L^r } + \|w^{(p)} \otimes ( w^{(c)} + w^{(t)} )\|_{L^1 L^r }\\ & \lesssim \left(\|w^{(c)}\|_{L^2 L^\infty }+\|w^{(t)}\|_{L^2 L^\infty} \right) \|w\|_{L^2 L^{2} } + \|w^{(p)}\|_{L^2 L^{2} } \left( \|w^{(c)}\|_{L^2 L^{\infty} } + \|w^{(t)}\|_{L^2 L^{\infty} } \right). \end{split} \end{equation*} By Propositions~\ref{p:estimates_on_w^p}, \ref{p:estiates_on_w^c} and \ref{prop:estimates_on_w^t}, \[ \begin{split} \|w\|_{L^2 L^{2}} &\lesssim \|w^{(p)}\|_{L^2 L^{2}} + \|w^{(c)}\|_{L^2 L^{2}} +\|w^{(t)}\|_{L^2 L^{2}}\\ &\lesssim \| \overline{R} \|^{\frac{1}{2}}_{L^1([0,1] \times \TT^d) }, \end{split} \] and by Propositions \ref{p:estiates_on_w^c} and \ref{prop:estimates_on_w^t}, \[ \|w^{(c)}\|_{L^2 L^{ \infty } }+\|w^{(t)}\|_{L^2 L^{\infty } } \leq C_u \lambda^{- \gamma } . \] So for all $\lambda$ sufficiently large, we can conclude that \[ \|R_{\Cor}\|_{L^1(0,1; L^r(\TT^d))} \leq \frac{\delta}{4}. \] \end{proof} \subsubsection{Oscillation error} \begin{lemma} For sufficiently large $\lambda$, \[ \| R_{\Osc} \|_{L^1(0,1; L^r(\TT^d))} \leq \frac{\delta}{4}. \] \end{lemma} \begin{proof} We will use the decomposition from Lemma~\ref{lemma:R_osc_decomposition} $$ R_{\Osc} = R_{\Osc,x} + R_{\Osc,t} + R_{\Far}. $$ \noindent {\bf Estimate of $R_{\Osc,x}$:} Define $\mathbf{T}_k :[0,1] \times \TT^d \to \RR^{d\times d}$ by $$ \mathbf{T}_k = \mathbf{ W}_{k} \otimes \mathbf{ W}_{k} - \fint \mathbf{ W}_{k} \otimes \mathbf{ W}_{k} , $$ so that $$ R_{\Osc,x} = \sum_{k \in \Lambda} \mathcal{B}\big(\nabla(a_k^2 ) , \mathbf{T}_k(\sigma \cdot) \big) . $$ Using Theorem \ref{thm:bounded_B}, we can estimate the $L^r$ norm of $R_{\Osc,x}$ as follows.
\[ \begin{split} \|R_{\Osc,x}\|_{L^r(\TT^d)} &= \Big\| \sum_{k \in \Lambda} \mathcal{B}\Big(\nabla(a_k^2 ) , \mathbf{T}_k(\sigma \cdot) \Big) \Big\|_{L^r}\\ &\lesssim \sum_{k \in \Lambda} \|\nabla (a_k^2)\|_{C^1} \| \mathcal{R}\big( \mathbf{T}_k(\sigma \cdot) \big)\|_{L^r}\\ &\lesssim \sigma^{-1}\sum_{k \in \Lambda} \|\nabla (a_k^2)\|_{C^1} \|\mathbf{T}_k \|_{L^r}, \end{split} \] where in the last inequality we have used the fact that $ \mathbf{T}_k(\sigma \cdot)$ has zero spatial mean. Thanks to Theorem~\ref{thm:main_thm_for_W_k}, for any $k\in \Lambda$ \[ \| \mathbf{T}_k\|_{L^r} \lesssim \| \mathbf{ W}_{k} \otimes \mathbf{ W}_{k} \|_{L^r} \lesssim \|\mathbf{ W}_{k}\|_{L^{2r}}^2 \lesssim \mu^{d- 1-\frac{d -1}{r}}. \] Therefore, by Lemma~\ref{lemma:estimates_a_k}, \[ \begin{split} \| R_{\Osc,x} \|_{L^1(0,1; L^r(\TT^d))} &\leq C_u \sigma^{-1} \mu^{d- 1-\frac{d-1}{r}} . \end{split} \] \noindent {\bf Estimate of $R_{\Osc,t}$:} Using the bound on $h_k$ in \eqref{eq:bound_on_h_k}, we infer \[ \begin{split} \|R_{\Osc,t}\|_{L^1(0,1; L^r(\TT^d))} &= \|\sigma^{-1} h_k(\sigma t) \D \partial_t \overline{R}\|_{L^1L^r}\\ &\lesssim \sigma^{-1} \|h_k(\sigma \cdot)\|_{L^1} C_u\\ &\leq C_u\sigma^{-1} . \end{split} \] \noindent {\bf Estimate of $R_{\Far}$:} We can use Theorem~\ref{thm:main_thm_for_W_k} and Lemma~\ref{lemma:estimates_a_k} to obtain \begin{align*} \|R_{\Far }\|_{L^1(0,1; L^r(\TT^d))} & = \Big\| \sum_{k \neq k'} a_k a_{k'} \mathbf{ W}_{k} (\sigma \cdot) \otimes \mathbf{ W}_{{k'}}(\sigma \cdot)\Big\|_{L^1(0,1; L^r(\TT^d))}\\ &\lesssim \sum_{k \neq k'} \|a_k\|_{L^2(0,1; L^{\infty}(\TT^d))} \|a_{k'}\|_{L^2(0,1; L^{\infty}(\TT^d))}\|\mathbf{ W}_{k} \otimes \mathbf{ W}_{{k'}} \|_{L^{r}} \\ &\leq C_u \mu^{d-1 - \frac{d}{r}}. \end{align*} Now we can combine all the estimates and conclude \[ \begin{split} \| R_{\Osc} \|_{L^1(0,1; L^r(\TT^d))} & \leq C_u \left( \sigma^{-1} \mu^{d- 1-\frac{d-1}{r}}+ \sigma^{-1} + \mu^{d-1 - \frac{d}{r}} \right).
\end{split} \] Thanks to \eqref{eq:def_of_r}, we have \[ \| R_{\Osc} \|_{L^1(0,1; L^r(\TT^d))} \leq C_u \lambda^{- \gamma} . \] Thus, for $\lambda$ large enough, the desired bound holds: \[ \| R_{\Osc} \|_{L^1(0,1; L^r(\TT^d))} \leq \frac{\delta}{4}. \] \end{proof}
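\noindent Combining the three lemmas above with the decomposition of Lemma \ref{lemma:new_stress_R_1}, the smallness of the new stress follows by the triangle inequality; schematically (assuming, as above, that the decomposition consists of exactly these three contributions):
\[
\| R_{1} \|_{L^1(0,1; L^r(\TT^d))} \leq \| R_{\Lin} \|_{L^1(0,1; L^r(\TT^d))} + \| R_{\Cor} \|_{L^1(0,1; L^r(\TT^d))} + \| R_{\Osc} \|_{L^1(0,1; L^r(\TT^d))} \leq \frac{3\delta}{4} < \delta
\]
for all sufficiently large $\lambda$.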